The default meaning of precision?

I'm trying to understand what precision actually means by default (without using std::fixed and similar).

#include <iostream>
#include <iomanip>

int main()
{
	std::cout << std::setprecision(6);
	std::cout << 0.00001 << "\n"; // prints 1e-05
	std::cout << 1.00001 << "\n"; // prints 1.00001
}


If it means the maximum number of digits, why isn't the first output 0.00001, since it obviously contains 6 digits?

If the decimal point is also counted, then why does the second line print 1.00001 instead of 1?
1e-05 is more compact and easier to read than 0.00001.

Only digits are counted.

See: http://www.cplusplus.com/reference/ios/ios_base/precision/

Then,
#include <iostream>
#include <iomanip>

int main()
{
	std::cout << std::setprecision(6);
	std::cout << 0.000087654321 << '\n'; // prints 8.76543e-05
	std::cout << 8.7654321 << '\n'; // prints 8.76543
}
If neither ios_base::fixed nor ios_base::scientific is set (this is the default),
the conversion specifier used is %g or %G (%Lg or %LG for long double).

The conversion specifier is then interpreted in the same manner as it is by std::printf.
Details: http://en.cppreference.com/w/cpp/locale/num_put/put

%g
Let P equal the precision if nonzero, 6 if the precision is not specified, or 1 if the precision is 0. Then, if a conversion with style E would have an exponent of X:

if P > X ≥ −4, the conversion is with style f or F and precision P − 1 − X.
otherwise, the conversion is with style e or E and precision P − 1.

http://en.cppreference.com/w/cpp/io/c/fprintf
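
For example, applying that rule to the 1.00001 value from the first post (just a sketch; I'm taking P == 6 from setprecision(6), and the outputs in the comments are what I'd expect from a conforming implementation):

#include <cstdio>
#include <iostream>
#include <iomanip>

int main()
{
    std::cout << std::setprecision(6);

    // style E would give 1.00001e+00, so the exponent X == 0
    // P > X >= -4 holds, so style f is used with precision P - 1 - X == 5
    std::cout << 1.00001 << '\n' ;     // 1.00001
    std::printf( "%.5f\n", 1.00001 ) ; // 1.00001
    std::printf( "%.6g\n", 1.00001 ) ; // 1.00001
}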

An earlier post about this has an example program:
http://www.cplusplus.com/forum/general/221529/#msg1017273
For 0.00001: P=6, X=5, P > X so style=f and precision=0.
I guess this means it should print the same as printf("%.0f", 0.00001) but it doesn't. If I instead change to printf("%.0e\n", 0.00001) it looks the same. Did I make a mistake somewhere?
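
A minimal sketch of that comparison (the outputs in the comments are what I'd expect on a typical two-digit-exponent implementation):

#include <cstdio>

int main()
{
	std::printf( "%.0f\n", 0.00001 ); // 0      -- not what std::cout prints
	std::printf( "%.0e\n", 0.00001 ); // 1e-05  -- matches std::cout << 0.00001
}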

All this becomes very technical. I was hoping there was a simpler explanation that explains what all these rules are trying to accomplish.

Is it somewhat correct to say that it uses scientific notation if that gives a shorter output or a more precise answer, using at most as many significant digits as specified by the precision, not counting the digits in the exponent?
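
For example, with precision 6 the output seems to switch to scientific notation once the fixed form would need more than 6 significant digits before the decimal point (a quick check; the outputs in the comments are what I'd expect):

#include <iostream>
#include <iomanip>

int main()
{
	std::cout << std::setprecision(6);
	std::cout << 123456.0 << "\n";  // 123456       (6 digits, fixed)
	std::cout << 1234567.0 << "\n"; // 1.23457e+06  (would need 7 digits, scientific)
}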
> Did I make a mistake somewhere?

Yes; X = -5 (not +5)

#include <iostream>
#include <iomanip>

int main()
{
    std::cout << 0.00001234567 << '\n' ; // 1.23457e-005
    std::printf( "%.6g\n", 0.00001234567 ) ; // 1.23457e-005
    // P == 6
    // neither ios_base::fixed nor ios_base::scientific is set (default), so %.6g
    // X == -5
    // P > X >= -4 is false, so %e with precision (P-1) == 5

    std::cout << 0.00001 << '\n' ; // 1e-005
    std::printf( "%.6g\n", 0.00001 ) ; // 1e-005

    std::cout << std::showpoint << 0.00001 << '\n' ; // 1.00000e-005
    std::printf( "%#.6g\n", 0.00001 ) ; // 1.00000e-005
}

http://coliru.stacked-crooked.com/a/d861677070109bb2
Ah, OK, thank you, so it becomes printf("%.5e\n", 0.00001), which prints 1.00000e-05. That still doesn't match what the stream prints, but hold on ...

Right after the text you quoted about %g, it says the following.

Unless alternative representation is requested the trailing zeros are removed, also the decimal point character is removed if no fractional part is left. For infinity and not-a-number conversion style see notes.

So I guess that is why 1.00000e-05 becomes 1e-05.
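
A small sketch of the whole chain for 0.00001 (assuming a two-digit exponent as above):

#include <cstdio>

int main()
{
	std::printf( "%.5e\n", 0.00001 );  // 1.00000e-05  (style e, precision P-1 == 5)
	std::printf( "%.6g\n", 0.00001 );  // 1e-05        (trailing zeros and the point removed)
	std::printf( "%#.6g\n", 0.00001 ); // 1.00000e-05  (alternative form keeps them)
}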