Floats can't accurately represent all numbers, so the computer sometimes rounds to the nearest representable value, leaving you with results like 0.599999999 instead of 0.6.
This is a problem when dealing with money, so some programmers convert a float to two integers: one to represent the dollars part and one to represent the cents part.
Another way is to check that the float is within a certain range of the target value, using a very small number usually called epsilon; this is similar to what you are doing. If the float in question is less than target + epsilon and greater than target - epsilon, it's considered equal.
I believe libraries such as Boost have classes that can represent these numbers exactly (Boost.Multiprecision, for example, offers decimal floating-point types), and that's probably what most professional programmers would use.
It's double-buffered key input in OpenGL. Once a key is hit, it outputs the x value (same issue with the y) of my tested object's location to the console. It's only adding or subtracting 0.10 at a time, which is why I didn't understand why this occurred. The console then throws out numbers that would confuse the average user of my program. If you really wanted to look into it, I don't mind sharing details; I just don't know if that's something you'd actually want to dig into. It's just a simple OpenGL app that displays an object with movement/rotation options.