double precision problem

I am using a variable

double t = 0.3;

but when I was debugging, I found that it actually holds the value 0.29999999999999999. Another variable

double h = 0.1;

is stored as 0.10000000000000001. Can anyone tell me why this happens, and how I can use the exact values 0.3 and 0.1?
http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html

Some values that have a tidy, finite representation as a decimal fraction cannot be represented exactly in binary, so they are approximated by the nearest value the floating-point format can hold.
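
You can see the approximation for yourself by printing more digits than iostreams show by default. A minimal sketch:

#include <iostream>
#include <iomanip>

int main()
{
    double t = 0.3;
    double h = 0.1;

    // Ask for 17 significant digits, enough to uniquely
    // identify any double value.
    std::cout << std::setprecision(17) << t << '\n';  // 0.29999999999999999
    std::cout << std::setprecision(17) << h << '\n';  // 0.10000000000000001
}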

This is why it is unwise to use "==" when comparing floating-point numbers. Instead, compare the absolute difference of the two numbers against some small tolerance (typically called epsilon) as an "equality" test.
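
For example, a sketch of such a comparison (the fixed epsilon here is just an assumption; a suitable tolerance depends on the magnitude of the values you are comparing):

#include <cmath>

// Treat a and b as "equal" when they differ by less than epsilon.
// A fixed epsilon is only reasonable when you know the rough
// magnitude of the values involved.
bool nearly_equal(double a, double b, double epsilon = 1e-9)
{
    return std::fabs(a - b) < epsilon;
}

With this, nearly_equal(0.1 + 0.2, 0.3) is true even though 0.1 + 0.2 == 0.3 is false.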

If you only need precision up to a couple of decimal places, however, you may consider storing your data as an integer. Think of dollars and cents: to represent $3.95, I could just use the integer 395, since I don't care about further precision. If I'm interpreting GPS data, maybe I only care about the first five or six decimal places (precision finer than a single meter is meaningless to me). [-180.000000, 180.000000] becomes [-180000000, 180000000], a range which fits into a signed 32-bit integer.
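
A sketch of the dollars-and-cents idea (the variable names are just for illustration):

#include <iostream>

int main()
{
    // Store $3.95 as 395 cents; integer arithmetic is exact.
    long long priceInCents = 395;
    long long quantity     = 3;
    long long totalCents   = priceInCents * quantity;   // exactly 1185

    // Convert to dollars and cents only when displaying.
    std::cout << "$" << totalCents / 100 << "."
              << (totalCents % 100 < 10 ? "0" : "")
              << totalCents % 100 << '\n';              // prints $11.85
}
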
How can it be eliminated? For now, can you just tell me that? It would take me 5 hours to read that big page... I need to solve this problem ASAP, please.
Read my edit above.
So does that mean I cannot directly use 0.3, or values like it, by storing them in a variable? My program involves heavy calculations, and every time a rounding error occurs the answer diverges badly...
No, you cannot use 0.3 exactly. You can use the computer's approximation to 0.3, which is the nearest representable double, 0.2999...999.