Floating numbers

Hello guys.
I have this problem:
Our teacher told us that floats and doubles are not accurate.
I think he said that if you have 0.1 stored in a float and you try to retrieve it at some time, you may get 0.0999999999 instead of 0.1. Is this true? And if it's true, is it dangerous to use floating-point numbers?
I am asking this because I have to solve a task involving densities. So every time I have to compare 2 items, do I have to re-calculate their densities (goto *), or is it OK if I store them in an array? (Because if you "lose" decimal points, could this affect the result?)

* I mean, if I have 2 item masses m1, m2 and volumes v1, v2, then without involving floating point I could do return m1*v2 > m2*v1; (using the formula d = m/v)

Thanks in advance :)
I'd say the main problem is likely to arise if you test whether two values are precisely equal: the comparison might say the values are different.

It's a general problem we might encounter in everyday life: 1/2 gives the decimal 0.5, while 1/3 gives 0.3333333...

Because the computer uses binary representation, we can sometimes be surprised that values we expect to be precise (like 0.1) end up stored internally as an approximation.

When dealing with currency, we try to avoid this by representing quantities as integers, but with scientific or engineering values, inevitably not all values will be integers, so we just have to live with the situation. I wouldn't say that floating-point values are inherently dangerous, any more than using decimal values in everyday life is dangerous.

As for the question where you compare two densities, I don't think it would make any difference whether you stored the value or re-calculated it. It's not the act of storing the number which is a problem. More importantly, are the mass and volume values always integers? If these are real-world figures, I'd expect them to be floating-point values too.
Your teacher is correct. Many floating-point numbers (ie something with a decimal point) can't be represented perfectly in binary. 0.1, for example, becomes an infinitely repeating fraction in binary, so what is actually stored is a close approximation, something like 0.1000000000000000055511... in a double.

But as long as you use double and are wary of implicit casting that could happen, you'll be fine. A double holds about 15-17 significant decimal digits, so that's plenty of precision for you.

Your teacher sounds very old though. There's been a ton of work put into floating-point arithmetic and it's pretty solid now as long as you aren't mixing types.

#include &lt;iostream&gt;

int main()
{
    float x = 0.1;
    double y = 0.1;

    if (x == y)
        std::cout &lt;&lt; "x == y";
    else
        std::cout &lt;&lt; "x != y";
}

Run that for a little test. It prints x != y, because the float approximation of 0.1 is not the same value as the double approximation.
It isn't dangerous, per se, but it is important to remember that floating point numbers are an approximation to an exact value.

Since they are stored in a binary computer, some values cannot be exactly represented anyway. The example your teacher gave was 0.1. Remember, that is the same as one tenth, or 1/10. In binary, that fraction has an infinitely repeating sequence of digits after the decimal point.

Also, a floating-point number essentially stores a short sequence of significant digits (as an integer value) together with an exponent giving its distance from the decimal point. The idea, of course, is to represent the most significant digits. You cannot, then, combine a very large value (like 1.234x10^74) with a very small value (like 1.234x10^-149), since one of those values will essentially be lost. Guess which one it is? So no matter how many times I add the small value to the large, the large will never change.

There are other considerations as well. The way the value is handled by the computer hardware may change it in minor ways, etc.
More reading:

Good luck!
I really get the point now, thanks (: So although storing them will save time (no re-calculating), if I want to test equality it is dangerous, right?
Instead of testing for equality directly, you could calculate the difference (subtract one from the other) and test whether the absolute value (ignore the sign) is less than some very small value.

Take an example: both these values are approximations of sqrt(5): 2.23607 and 2.236068. The difference is 0.000002.

It might be sufficient to decide this is smaller than some specified limit and conclude the values are "equal". But because floating-point numbers may be very large or very small due to the exponent part, it might be better to express the difference as a proportion of one or the other. For example, 0.000002 / 2.23607 = 0.0000008944. This makes it possible to compare numbers for approximate equality even if they are very large or very small.

Take sqrt(5000000000000) as a second example, and say our approximations are 2236068 and 2236070. The difference is now 2, which we might think is not negligible. But as a proportion it is 2/2236068, which gives 0.0000008944, which again we may consider smaller than our required tolerance.

There are other approaches, such as rounding the numbers to a specified number of significant digits before we compare.
Great idea! How small does the proportion have to be?
There are other things to consider.

Changing a type from float to double just means the problem might occur less often, but it is still there.

Also there is "user precision". For example, I might work in units of metres and only be interested in answers to the millimetre, i.e. 3 decimal places, so an answer that is within 0.0005 is "equal". However there is still a problem, which brings me to the next thing.

Think about the "distance" between representable numbers in floating point. You can check out the value of std::numeric_limits&lt;double&gt;::epsilon(). This is the 'distance' from 1.0 to the next representable number.

Say you have MyEpsilon = std::numeric_limits&lt;double&gt;::epsilon();

If your number is 1000, then the 'distance' will be 1000 * epsilon:

MyEpsilon *= 1000.0;

If your number is 1e16, then the 'distance' will be 2.0, and for 1e19 it is about 2000.

Going back to rounding to 0.0005, there can still be problems when the value is close to 0.0005, so you still have to use an epsilon value for it to work properly.

Topic archived. No new replies allowed.