### Float accuracy: 0.1 + 0.2 == 0.3 is false

I have just run into the idea that a 0.1 float in a computer is not actually 0.1, so I was wondering whether there is either a better data type than float or a better method of calculating a decimal sum in base 10 for cases where exactness matters. How might a bank computer keep track of money, for example? I have tried internet searches, but I must not know the right keywords to find the answer.

------------------------------------------
Never mind, I think I found a good link. Thanks.
The bank computer would use one of two solutions.

The first is to use a decimal rather than binary internal representation. Some processors can perform arithmetic directly on decimal numbers, avoiding such representation errors. You still need to take rounding into account, though: when calculating interest, for example, the result may have a fractional part smaller than the smallest unit of currency, which must be rounded by some agreed policy.

The second solution is to handle all monetary amounts using the smallest unit, such as cents rather than dollars, or pennies rather than pounds.

Actually, I've used an arbitrary-precision library to handle numbers with an unlimited number of digits, and some such libraries use a decimal representation internally. I'm thinking of the MAPM library, which internally uses base 100 and so maps easily onto our normal decimal usage.
