double vs. float

I have one other topic I have been stumped on for a couple days, and I could not find a decent answer in my book or on the Internet.

I had to create a program that declared an integer and initialized it to zero, and ultimately looped the program to add "1" until its value reached one billion. After that, we had to change the data type from integer to double, and then from double to float.

When I had the data type as integer, it displayed: 1000000000

When I had the data type as double, it displayed: 1e+009

When I had the data type as float, it displayed: 1.67772e+007



I have no idea why the double type displayed it in this sort of scientific notation. Furthermore, float gave me a number far less than one billion. I know that double and float have to do with precision, but I did not think it would affect the results like this. Can anyone shed any light on what is going on? Thank you, hints are definitely welcome!
You have to tell the I/O stream how you want the value output if you don't want scientific notation. See setprecision() (and, if you want to force non-scientific output, fixed).
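For example, something like this (just a quick sketch, separate from your assignment code) shows the difference:

#include <iostream>
#include <iomanip>
using namespace std;

int main()
{
    double sum = 1000000000.0;

    // Default formatting: only ~6 significant digits, so it falls back
    // to scientific notation (1e+009 on some compilers, 1e+09 on others).
    cout << sum << endl;

    // Ask for more significant digits and the full value appears.
    cout << setprecision(10) << sum << endl;

    // Or force fixed (non-scientific) notation outright.
    cout << fixed << setprecision(0) << sum << endl;

    return 0;
}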

As for the answer to your other question, since you only wanted a hint, you can find the answers here (some additional thought required):

http://en.wikipedia.org/wiki/Single_precision_floating-point_format
http://en.wikipedia.org/wiki/Double_precision_floating-point_format
jsmith, thanks for the hint! However, I looked at the links and things get really technical really quickly. Can I trouble you to explain it to me? Thank you!
Can you post your code that output the 1.67772e+007?
Sure, here it is:

#include <iostream>
using namespace std;

int main()
{
    float sum = 0;
    for (int i = 0; i < 1000000000; i++)
        sum += 1;

    cout << sum << endl;

    return 0;
}


Our professor had us start with declaring sum as an integer, then a double, and then a float. I have to explain why we get three different answers... very perplexed, lol
Well, you will get different answers because, depending on the machine, an int, a double, and a float use different numbers of bits. An int is typically a 32-bit number whose largest value is 2^31 - 1, and when working with very large numbers you can get overflow. Overflow occurs when the sign bit changes because an addition pushes the value past the largest number the type can hold, which is roughly 2.1 billion for a signed 32-bit int (about 4.3 billion unsigned). Doubles and floats, on the other hand, are stored differently from a basic int, so they behave differently with large values, and that is why the numbers come out different.

To correct myself: I actually think a float is the smallest bit size of the three, but I may be wrong.
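If you want to check the sizes and limits on your own machine, a quick sketch like this (using <limits>, not part of the assignment) will print them:

#include <iostream>
#include <limits>
using namespace std;

int main()
{
    // Typical sizes: int 4 bytes, float 4 bytes, double 8 bytes.
    cout << "sizeof(int)    = " << sizeof(int)    << endl;
    cout << "sizeof(float)  = " << sizeof(float)  << endl;
    cout << "sizeof(double) = " << sizeof(double) << endl;

    // Largest finite value each type can hold.
    cout << "int max    = " << numeric_limits<int>::max()    << endl;
    cout << "float max  = " << numeric_limits<float>::max()  << endl;
    cout << "double max = " << numeric_limits<double>::max() << endl;

    return 0;
}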
On most modern machines an int and a float are both 32 bits, while a double is 64 bits. However, floats and doubles use the standard floating-point layout to store their value (a sign bit, an exponent field, and a mantissa/significand), whereas ints don't (just a two's-complement value). A float's exponent field takes 8 of its 32 bits, leaving only 24 bits of significand, so it cannot represent every integer up to 10^9 exactly; once the sum reaches 2^24 = 16,777,216, adding 1 no longer changes the stored value, which is exactly the 1.67772e+007 you saw.

In short (no pun intended), although a 32-bit int can store every integer in the range [-2^31 to 2^31 - 1], or [-2,147,483,648 to 2,147,483,647], a 32-bit floating-point number (a float) gives up some of that integer precision because it needs part of those bits to store the exponent.
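A tiny test like this (just a sketch, not the assignment code) shows that cutoff directly:

#include <iostream>
using namespace std;

int main()
{
    // 2^24 = 16777216 is where a float stops being able to count by 1:
    // the 24-bit significand has no room for the +1, so it rounds away.
    float f = 16777216.0f;
    cout << (f + 1.0f == f) << endl;   // prints 1 (true): the +1 is lost

    // A double has a 53-bit significand, so the same addition still works.
    double d = 16777216.0;
    cout << (d + 1.0 == d) << endl;    // prints 0 (false): the +1 is kept

    return 0;
}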
Ok, yes. I wanted to make sure your for() loop index was still an int.

Essentially Mathhead200's answer is correct.
Thank you everyone! You are all truly awesome!