How are floating point numbers represented in a computer?

As in, in registers, how is a floating point number represented?
According to my Comp Sci 101 professor, they work on a framework of "black magic and miracles"... and then he went on to give a real explanation. If I remember correctly, they are represented as a value in scientific notation. The IEEE standard for a single-precision float would have one bit for the sign, 8 for the exponent, and 23 for the significand.

For example, if you had the decimal value 1.2345, the significand would equal 12345 and the exponent would be negative 4, so it would be 12345 x 10^-4
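
If you want to see those three fields for yourself, you can copy the raw bits of a float into an integer and mask them apart. A minimal sketch, assuming float is a 32-bit IEEE 754 single on your platform (true on practically every desktop compiler):

#include <cstdint>
#include <cstring>
#include <iostream>

int main()
{
    float f = 1.2345f;

    // Copy the raw bits into an unsigned integer (assumes a 32-bit float).
    std::uint32_t bits;
    std::memcpy(&bits, &f, sizeof bits);

    std::uint32_t sign        = bits >> 31;            // 1 bit
    std::uint32_t exponent    = (bits >> 23) & 0xFF;   // 8 bits, stored with a bias of 127
    std::uint32_t significand = bits & 0x7FFFFF;       // 23 bits (fraction only)

    std::cout << "sign        = " << sign << '\n'
              << "exponent    = " << exponent
              << " (unbiased: " << int(exponent) - 127 << ")\n"
              << "significand = " << significand << '\n';
}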
Very close. Since the numbers are stored in binary, the leading bit of a normalized significand is always set, so there is no need to actually store it.

The IEEE numbers typically have a single bit for the sign, N bits for the significand, and M < N bits for the exponent (for single precision, N = 23 and M = 8). As already mentioned, the stored significand excludes the leading 1 bit. And the exponent is a biased number; single precision uses a bias of 127.
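
To see the hidden bit and the bias in action, you can rebuild the value by hand from the three fields. Another rough sketch, again assuming a 32-bit float with the usual bias of 127, and handling only normal numbers (zero, denormals, infinities and NaN have special encodings):

#include <cmath>
#include <cstdint>
#include <cstring>
#include <iostream>

int main()
{
    float f = 1.2345f;

    std::uint32_t bits;
    std::memcpy(&bits, &f, sizeof bits);

    std::uint32_t sign     = bits >> 31;
    std::uint32_t exponent = (bits >> 23) & 0xFF;   // stored with a bias of 127
    std::uint32_t fraction = bits & 0x7FFFFF;       // the implicit leading 1 is not stored

    // Rebuild the value: (-1)^sign * 1.fraction * 2^(exponent - 127)
    double rebuilt = (sign ? -1.0 : 1.0)
                   * (1.0 + fraction / 8388608.0)        // 8388608 = 2^23
                   * std::pow(2.0, int(exponent) - 127);

    std::cout << "original: " << f << "\nrebuilt:  " << rebuilt << '\n';
}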

More reading:
http://en.wikipedia.org/wiki/Floating_point
http://en.wikipedia.org/wiki/IEEE_754-1985
http://en.wikipedia.org/wiki/Signed_number_representations#Excess-K (biased numbers)

Enjoy!
Here is a good article that may interest you:

What Every Computer Scientist Should Know About Floating-Point Arithmetic
http://download.oracle.com/docs/cd/E19957-01/806-3568/ncg_goldberg.html

It has been around for a number of years and I have seen it cited often.

Hope this helps.