Can someone please explain that in more detail? I don't understand it!
Loss of accuracy
Floating-point variables cannot solve all computational problems. float variables have a limited precision of about 6 significant digits; double, the extra-economy-size, double-strength version of float, can handle some 15 significant digits with room left over for lunch.
To see the problem, consider that 1/3 is expressed as 0.333 . . . in a sequence that continues forever. The concept of an infinite series makes sense in math, but not to a computer, which has only finite accuracy. Average 1, 2, and 2 (for example), and you get 1.666667.
C++ can correct for many forms of round-off error. In output, for example, C++ can determine that by 0.999999 the user really meant 1. In other cases, even C++ cannot correct for round-off error.
f1 = 0.3333333432674408
f2 = 0.3333333432674408
feq = true
d1 = 0.3333333333333333
d2 = 0.3333333333333333
deq = true
Process returned 0 (0x0) execution time : 0.031 s
Press any key to continue.
If you haven't looked into at least how to convert from binary to base 10 and back, you should. The rules apply to any base, but you should be familiar with binary and base 10 at a minimum, and maybe hex, because it's used a lot. Anyway, I'll give a quick rundown.
Basically, when you convert a fractional number from base 10 to binary, you run into issues. An easy way to do the conversion is to repeatedly multiply the part to the right of the radix point by 2, taking the digit that lands to the left of the point as the next binary digit. For example, take the very common number .1 in base 10, and let's convert it to binary.
.1 * 2 = 0.2
.2 * 2 = 0.4
.4 * 2 = 0.8
.8 * 2 = 1.6
.6 * 2 = 1.2
.2 * 2 = 0.4
.4 * 2 = 0.8
.8 * 2 = 1.6
...
This gives us a binary string of .00011001
And as you can likely see, this pattern will continue on forever. In base 10, .1 terminates; it means one tenth, or 1/10. In binary, not so much. Now, let's take that binary string and convert it back to base 10 to see what happens.
This will leave us with 0.09765625
Oh, well that's no longer .1, is it? It's damn close, but no cigar. For some computations this may be no big deal. But for computations that chain thousands or millions of floating-point operations, the error compounds into a real problem. There have been serious military accidents over this: the 1991 Patriot missile failure, for one, was traced to exactly this rounding of 0.1.
On a sort of side note, if we humans had 8 fingers instead of 10, we'd have no problem converting our number system to binary: 8 is a power of 2, so every terminating octal fraction also terminates in binary.