Comparing floating-point numbers

I have some questions about comparing floating-point numbers. In this book: http://kysmykseka.net/kysmyk/Wizardry/Game%20Development/Programming/Real-Time%20Collision%20Detection.pdf page 481 of the PDF says:

1)

if (Abs(x - y) <= epsilon) ... // Absolute tolerance comparison

[...] Another way of looking at it is that as the tested numbers grow larger the absolute test requires that more and more digits agree. When the numbers are sufficiently larger than the fixed epsilon, the test will always fail unless the numbers are exactly equal, which is usually not intended. An absolute tolerance should therefore only be used when the orders of magnitude of the numbers are known in advance and the tolerance value can be set accordingly.


Can someone explain this phenomenon, i.e. why testing large numbers' absolute difference against a small epsilon, such as 10^-6, will fail?

2)

The next page reads:

if (Abs(x - y) <= epsilon * Max(Abs(x), Abs(y))) ... // Relative tolerance comparison

[...] The relative test is also not without problems. Note that the test expression behaves as desired when Abs(x) and Abs(y) are greater than 1, but when they are less than 1 the effective epsilon is made smaller, and the smaller the numbers get the more digits of them are required to agree.


But when they're bigger than 1 the effective epsilon is made bigger by contrast, so what am I missing here?
1) The key term in a floating-point number is just that: the floating point. In other words, the position of the radix point moves to allow greater precision in the number. Therefore, as the numbers get larger, an epsilon of 10^-6 might be entirely below the lowest precision a large number can represent, if its precision now only goes to 10^-4 or the like.
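You can see the effect directly by printing the gap between adjacent representable floats at a few magnitudes (a minimal sketch; the magnitudes are arbitrary):

#include <cmath>
#include <cstdio>

int main() {
    // The gap to the next representable float grows with magnitude.
    // Near 1.0f it is about 1.2e-7, so epsilon = 1e-6 is meaningful;
    // near 1e8f it is 8.0, so two distinct floats there can never
    // differ by less than 1e-6, and the absolute test passes only
    // when the numbers are exactly equal.
    for (float x : {1.0f, 1000.0f, 1000000.0f, 100000000.0f})
        std::printf("gap after %g is %g\n", x, std::nextafterf(x, 2.0f * x) - x);
}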

2) This is the same as the previous question: as the numbers get bigger, the epsilon required to meaningfully measure the difference also grows. As the numbers grow very large, the inaccuracies could be unnoticeable at the high end of the range but very noticeable once you scale them back down (e.g. a difference of 0.4 between two numbers that are 'equal'). Hence, a greater epsilon is required to account for this.
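To make the two tests from the book concrete, here is a minimal runnable version (the function names and the sample values are mine, not from the book):

#include <algorithm>
#include <cmath>
#include <cstdio>

// Absolute tolerance test, as quoted above.
bool nearly_equal_abs(double x, double y, double epsilon) {
    return std::fabs(x - y) <= epsilon;
}

// Relative tolerance test: the tolerance scales with the larger magnitude.
bool nearly_equal_rel(double x, double y, double epsilon) {
    return std::fabs(x - y) <= epsilon * std::max(std::fabs(x), std::fabs(y));
}

int main() {
    // Two "large" numbers that differ by 18.1: the absolute test with a
    // small epsilon fails, while the relative test passes.
    double x = 10240000.0, y = 10240018.1;
    std::printf("absolute: %d\n", nearly_equal_abs(x, y, 1e-6));  // 0
    std::printf("relative: %d\n", nearly_equal_rel(x, y, 1e-5));  // 1
}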
Can someone explain this phenomenon, i.e. why testing large numbers' absolute difference against a small epsilon, such as 10^-6, will fail?
Think about which numbers of the form xyzE+a (three digits for the mantissa and one for the exponent) you can write. You can write 0, 1, 100000, 10.1, etc. just fine. One number you can't write is 100.1, because you'd need 1001E-1 or some variation thereof, which takes four mantissa digits.
So what happens if you want to compare 101E+1 to 100E+0 with epsilon = 1E-1?
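If it helps, here is a toy model of that 3-digit format (my own sketch, using decimal rounding rather than real binary floats):

#include <cmath>
#include <cstdio>

// Round x to 3 significant decimal digits, mimicking the xyzE+a format.
double round3(double x) {
    if (x == 0.0) return 0.0;
    double p = std::pow(10.0, 2.0 - std::floor(std::log10(std::fabs(x))));
    return std::round(x * p) / p;
}

int main() {
    // 10.1 fits in three digits, but 100.1 collapses to 100:
    std::printf("%g %g %g\n", round3(10.1), round3(100.1), round3(1001.0));
    // prints: 10.1 100 1000
}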

But when they're bigger than 1 the effective epsilon is made bigger by contrast, so what am I missing here?
Suppose we want to compare

10240000.0
10240018.1

If we want to get a meaningful idea of how close they are, what we really should be comparing is something like

1024.0000
1024.0018
Making the epsilon bigger does basically this.
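In other words, dividing both numbers down to a convenient magnitude and testing against a fixed epsilon there is algebraically the same as keeping the numbers as they are and scaling the epsilon up (a sketch; eps and scale are arbitrary):

#include <cmath>
#include <cstdio>

int main() {
    double x = 10240000.0, y = 10240018.1;
    double eps = 0.01;       // fixed tolerance at magnitude ~1000
    double scale = 10000.0;  // factor that brings the numbers down there

    // Test the scaled-down numbers against the fixed epsilon...
    bool scaled = std::fabs(x / scale - y / scale) <= eps;
    // ...which is the same as testing the originals against a bigger one.
    bool bigger = std::fabs(x - y) <= eps * scale;

    std::printf("%d %d\n", scaled, bigger);  // both print 1
}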
So what happens if you want to compare 101E+1 to 100E+0 with epsilon = 1E-1?


Well, 1010 - 100 = 910 > 0.1, so the numbers aren't equal...

Making the epsilon bigger does basically this.


I'm sorry, I still don't get it. Could you give four exact examples of comparisons with epsilons? Maybe that will make it easier for me to see what's going on. The first two with numbers smaller than 1, using absolute tolerance comparison (which is supposedly appropriate there) and relative tolerance comparison (the bad option), and the other two with numbers greater than 1 (where relative comparison is appropriate and absolute is not).
Well, 1010 - 100 = 910 > 0.1, so the numbers aren't equal...
I suspect I screwed up the exponent of the first number. I can't remember anymore.

With epsilon = 0.01:

|0.01 - 0.02| = 0.01 <= 0.01 (absolute test passes)
|0.01 - 0.02| = 0.01 > 0.01 * Max(0.01, 0.02) = 0.0002 (relative test fails)
|1.000001E+20 - 1.000002E+20| = 1E+14 > 0.01 (absolute test fails)
|1.000001E+20 - 1.000002E+20| = 1E+14 <= 0.01 * 1.000002E+20 = 1.000002E+18 (relative test passes)
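A small program that evaluates all four (a sketch, with epsilon = 0.01 as above):

#include <algorithm>
#include <cmath>
#include <cstdio>

int main() {
    double eps = 0.01;
    double a = 0.01, b = 0.02;                // small numbers
    double x = 1.000001e20, y = 1.000002e20;  // large numbers

    // Small numbers: absolute passes, relative fails.
    std::printf("%d\n", std::fabs(a - b) <= eps);                                         // 1
    std::printf("%d\n", std::fabs(a - b) <= eps * std::max(std::fabs(a), std::fabs(b)));  // 0

    // Large numbers: absolute fails, relative passes.
    std::printf("%d\n", std::fabs(x - y) <= eps);                                         // 0
    std::printf("%d\n", std::fabs(x - y) <= eps * std::max(std::fabs(x), std::fabs(y)));  // 1
}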