### precise definition of "precision"

What exactly is the precise definition of "precision" of a numeric field?

As far as I know, it should be the "number of digits after the decimal point".

However, setting a stream's precision doesn't work as specified:

```cpp
#include <iostream>  /// cout
#include <iomanip>   /// manipulators
using namespace std;

/// Print a number
/// at the specified precision
template <typename T>
void print(T n, int p)
{
    cout << setprecision(p)
         << "n = " << n
         << " (precision = " << p << ")" << endl;
}

void pi()
{
    cout << "printing pi ..." << endl;
    float pi = 3.14159;
    print(pi, -1);
    print(pi, 0);
    print(pi, 1);
    print(pi, 2);
    cout << endl;
}

void fraction()
{
    cout << "printing a fraction ..." << endl;
    double d = 0.0123456789;
    print(d, -1);
    print(d, 0);
    print(d, 1);
    print(d, 2);
    cout << endl;
}

void integer()
{
    cout << "printing an integer ..." << endl;
    int i = 75;
    print(i, -1);
    print(i, 0);
    print(i, 1);
    print(i, 2);
    cout << endl;
}

int main()
{
    pi();
    fraction();
    integer();
}
```

http://coliru.stacked-crooked.com/a/99cb2ef30cd18754

As you can see, the specified precision and the output differ:

```
printing pi ...
n = 3.14159 (precision = -1)
n = 3 (precision = 0)
n = 3 (precision = 1)
n = 3.1 (precision = 2)

printing a fraction ...
n = 0.0123457 (precision = -1)
n = 0.01 (precision = 0)
n = 0.01 (precision = 1)
n = 0.012 (precision = 2)

printing an integer ...
n = 75 (precision = -1)
n = 75 (precision = 0)
n = 75 (precision = 1)
n = 75 (precision = 2)
```

The observations are as follows:
1) Precision affects only floating-point numbers and not integers. That is as we might expect.

2) A floating-point value with a non-zero integer part (such as pi = 3.14...) prints differently than a fraction (integer part = 0):
a) The former prints as an integer at a precision of either 0 or 1. A precision of 2 prints only 1 decimal digit.
b) The latter prints 2 decimal digits at a precision of either 0 or 1. A precision of 2 prints 3 decimal digits.

3) A negative value for precision prints the entire value.

How can observations # 2a and 2b be explained?

Also, what is the precise definition of "precision"?

Thanks.
In this context, precision specifies either how many significant (decimal) digits to print, or how many digits after the decimal point.

In the example below, for example, a precision of 3 gives:
 `precision = 3 n = 314159.265 3.14e+005`

See this example.
```cpp
#include <iostream>
#include <iomanip>
using namespace std;

// Print a number
// at the specified precision
template <typename T>
void print(T n, int p)
{
    cout << " precision = " << setw(2) << p
         << fixed << setprecision(p)
         << " n = " << setw(20) << n
         << resetiosflags(ios::fixed)
         << setw(20) << n << endl;
}

int main()
{
    const double pi = 3.14159265358979323846;
    const double pi5 = pi * 100000;
    for (int i = -2; i < 10; ++i)
        print(pi5, i);
}
```
```
 precision = -2 n = 314159.265359 314159
 precision = -1 n = 314159.265359 314159
 precision =  0 n = 314159 3e+005
 precision =  1 n = 314159.3 3e+005
 precision =  2 n = 314159.27 3.1e+005
 precision =  3 n = 314159.265 3.14e+005
 precision =  4 n = 314159.2654 3.142e+005
 precision =  5 n = 314159.26536 3.1416e+005
 precision =  6 n = 314159.265359 314159
 precision =  7 n = 314159.2653590 314159.3
 precision =  8 n = 314159.26535898 314159.27
 precision =  9 n = 314159.265358979 314159.265
```

Note: to undo fixed mode, I used `std::resetiosflags()`.

See
http://www.cplusplus.com/reference/iomanip/setprecision/
http://www.cplusplus.com/reference/ios/fixed/
http://www.cplusplus.com/reference/ios/scientific/

> what is the precise definition of "precision"?

Depends on the format flags std::ios_base::fixed and std::ios_base::scientific.
The meaning differs depending on whether neither, one, or both are set.

For details, see: 'If the type of v is a floating-point type, then the first applicable choice of the following is selected:'
http://en.cppreference.com/w/cpp/locale/num_put/put

And then, the explanation column of the table 'The following format specifiers are available:'
http://en.cppreference.com/w/cpp/io/c/fprintf

This program would give you the general idea:

```cpp
#include <iostream>
#include <iomanip>
#include <string>

std::ostream& debug_print( double v1, double v2, std::ostream& stm = std::cout )
{
    const auto flags = stm.flags() ;
    const std::string info = std::string( flags & stm.fixed ? "fixed " : "  x   " ) +
                             ( flags & stm.scientific ? "scientific " : "    x      " ) +
                             " prec == " + std::to_string( stm.precision() ) ;
    return stm << info << " => " << v1 << " " << v2 << '\n' ;
}

int main()
{
    for( int prec : { -3, 0, 1, 5, 10, 15 } )
    {
        std::cout << std::setprecision(prec) ;
        const double d = 1234.56789012345 ;
        const double e = 1234.56789012345e+6 ;

        debug_print( d, e ) ;

        std::cout << std::fixed ;
        debug_print( d, e ) ;

        std::cout << std::scientific ;
        debug_print( d, e ) ;

        std::cout.setf( std::cout.fixed ) ;
        debug_print( d, e ) ;

        std::cout.unsetf( std::cout.fixed | std::cout.scientific ) ; // back to default
        debug_print( d, e ) ;

        std::cout << "\n-----------------------------\n\n" ;
    }
}
```

```
  x        x      prec == -3 => 1234.57 1.23457e+09
fixed      x      prec == -3 => 1234.567890 1234567890.123450
  x   scientific  prec == -3 => 1.234568e+03 1.234568e+09
fixed scientific  prec == -3 => 0x1.34a4584fd0fc2p+10 0x1.26580b487e69bp+30
  x        x      prec == -3 => 1234.57 1.23457e+09

-----------------------------

  x        x      prec == 0 => 1e+03 1e+09
fixed      x      prec == 0 => 1235 1234567890
  x   scientific  prec == 0 => 1e+03 1e+09
fixed scientific  prec == 0 => 0x1.34a4584fd0fc2p+10 0x1.26580b487e69bp+30
  x        x      prec == 0 => 1e+03 1e+09

-----------------------------

  x        x      prec == 1 => 1e+03 1e+09
fixed      x      prec == 1 => 1234.6 1234567890.1
  x   scientific  prec == 1 => 1.2e+03 1.2e+09
fixed scientific  prec == 1 => 0x1.34a4584fd0fc2p+10 0x1.26580b487e69bp+30
  x        x      prec == 1 => 1e+03 1e+09

-----------------------------

  x        x      prec == 5 => 1234.6 1.2346e+09
fixed      x      prec == 5 => 1234.56789 1234567890.12345
  x   scientific  prec == 5 => 1.23457e+03 1.23457e+09
fixed scientific  prec == 5 => 0x1.34a4584fd0fc2p+10 0x1.26580b487e69bp+30
  x        x      prec == 5 => 1234.6 1.2346e+09

-----------------------------

  x        x      prec == 10 => 1234.56789 1234567890
fixed      x      prec == 10 => 1234.5678901235 1234567890.1234500408
  x   scientific  prec == 10 => 1.2345678901e+03 1.2345678901e+09
fixed scientific  prec == 10 => 0x1.34a4584fd0fc2p+10 0x1.26580b487e69bp+30
  x        x      prec == 10 => 1234.56789 1234567890

-----------------------------

  x        x      prec == 15 => 1234.56789012345 1234567890.12345
fixed      x      prec == 15 => 1234.567890123450070 1234567890.123450040817261
  x   scientific  prec == 15 => 1.234567890123450e+03 1.234567890123450e+09
fixed scientific  prec == 15 => 0x1.34a4584fd0fc2p+10 0x1.26580b487e69bp+30
  x        x      prec == 15 => 1234.56789012345 1234567890.12345
```

http://coliru.stacked-crooked.com/a/a586a9b8755d5cbd
I would add that computers are NOT base 10. They are base 2!
So precision will not, in general, have anything to do with our base-10 understanding of the values. Post-processing display functions do what they will, and many of those ARE operating in base 10, but you should also understand the underlying floating-point storage format.

Because of how they are stored, you must be very careful talking about a decimal point.
Unless both std::ios_base::fixed and std::ios_base::scientific are set (when the conversion specifier %a or %A is used), the output uses the decimal or decimal exponent notation, and the output precision is the precision in terms of decimal digits.

(If both std::ios_base::fixed and std::ios_base::scientific are set, the output is in the hexadecimal exponent notation, and the output precision is the precision in terms of hexadecimal digits.)
Thanks for the examples provided.

Stroustrup has provided an excellent definition of "precision" in "The C++ Programming Language", 4th Ed, pg 1093 (emphases added):

Precision is an integer that determines the number of digits used to display a floating-point number:
- The general format (defaultfloat) lets the implementation choose a format that presents a value in the style that best preserves the value in the space available. The precision specifies the maximum number of digits.
- The scientific format (scientific) presents a value with one digit before a decimal point and an exponent. The precision specifies the maximum number of digits after the decimal point.
- The fixed format (fixed) presents a value as an integer part followed by a decimal point and a fractional part. The precision specifies the maximum number of digits after the decimal point.
...
precision() doesn't affect integer output.

So, coming back to my original example, Stroustrup's definition explains most of the 1st case (printing pi) and the last case (printing an integer).

However, it still doesn't explain the 2nd case (printing a fraction):

 ``` printing a fraction ... ... n = 0.012 (precision = 2) ```

Here, a precision of 2 prints 3 decimal digits rather than 2, which is incorrect.

I have written a variant of my original program and I believe it isolates the problem:

```cpp
#include <iostream>  /// cout
#include <iomanip>   /// manipulators
using namespace std;

/// Print a number
/// at the specified precision
template <typename T>
void print(T n, int p)
{
    cout << setprecision(p)
         << "n = " << n
         << " (precision = " << p << ")" << endl;
}

/// Print a number
/// at the specified precision
/// using the defaultfloat format
template <typename T>
void printdflt(T n, int p)
{
    cout << setprecision(p) << defaultfloat
         << "n = " << n
         << " (precision = " << p << ") & defaultfloat" << endl;
}

/// Print a number
/// at the specified precision
/// using the fixed format
template <typename T>
void printfixed(T n, int p)
{
    cout << setprecision(p) << fixed
         << "n = " << n
         << " (precision = " << p << ") & fixed" << endl;
}

int main()
{
    cout << "printing a fraction ..." << endl << endl;
    double d = 0.0123456789;

    cout << "wrong output ..." << endl;
    print(d, 2);      /// wrong output
    printdflt(d, 2);  /// wrong output
    cout << endl;

    cout << "correct output ..." << endl;
    printfixed(d, 2); /// correct output from now on
    print(d, 2);      /// correct output (sticky)
    cout << endl;

    cout << "wrong output ..." << endl;
    printdflt(d, 2);  /// wrong output
    cout << endl;
}
```

http://coliru.stacked-crooked.com/a/01a42d0b4cac4df5

Here's the output:
```
printing a fraction ...

wrong output ...
n = 0.012 (precision = 2)
n = 0.012 (precision = 2) & defaultfloat

correct output ...
n = 0.01 (precision = 2) & fixed
n = 0.01 (precision = 2)

wrong output ...
n = 0.012 (precision = 2) & defaultfloat
```

Thus, the real problem here is the defaultfloat flag.

The fixed flag has sticky properties (as does defaultfloat).

Thanks.
In the defaultfloat case,

 The precision specifies the maximum number of digits

That should probably say maximum number of significant digits. Leading zeroes are not counted.
> Here, a precision of 2 prints 3 decimal digits rather than 2, which is incorrect.

The manipulator std::defaultfloat clears both std::ios_base::fixed and std::ios_base::scientific flags.

Since both are cleared, std::num_put::do_put selects the conversion specifier %g

0.0123456789 is 1.23456789e-2; and 2 (precision) > -2 (exponent) >= -4
so conversion is performed with the conversion specifier %f and precision 2 - 1 - (-2) == 3
and three decimal digits after the decimal point are printed.

I would strongly suggest that you read the documentation;
(the two links which were posted earlier are repeated here for convenience):
http://en.cppreference.com/w/cpp/locale/num_put/put
http://en.cppreference.com/w/cpp/io/c/fprintf