I thought I understood subtraction errors (subtracting two almost equal numbers loses roughly one digit of precision per matching leading digit), but apparently not; consider the following snippet:
You are subtracting a floating-point number from an integer on line 3.
You need to write 1. instead of 1 to signify that the literal is a double. As written, the code uses doubles, not floats; write 1.f if you want the compiler to treat the literal as a float.
1, 100000000000000.0 and 9999999999999.0 are all exactly representable as doubles.
The decimal number 0.9999999999999, however, is not exactly representable as a double; it happens to lie between two representable doubles: 0.99999999999989996890548127339570783078670501708984375 and 0.999999999999900079927783735911361873149871826171875. The first one is closer to 0.9999999999999, so that is the value that actually gets compiled into the program.