What exactly is the precise definition of "precision" of a numeric field?
As far as I know, it should be the "number of digits after the decimal point".
However, setting a stream's precision doesn't work as specified:
#include <iostream> /// cout
#include <iomanip>  /// manipulators

using namespace std;

/// Print a number at the specified precision
template<typename T>
void print(T n, int p)
{
    cout << setprecision(p)
         << "n = " << n
         << " (precision = " << p << ")"
         << endl;
}

void pi()
{
    cout << "printing pi ..." << endl;
    float pi = 3.14159;
    print(pi, -1);
    print(pi, 0);
    print(pi, 1);
    print(pi, 2);
    cout << endl;
}

void fraction()
{
    cout << "printing a fraction ..." << endl;
    double d = 0.0123456789;
    print(d, -1);
    print(d, 0);
    print(d, 1);
    print(d, 2);
    cout << endl;
}

void integer()
{
    cout << "printing an integer ..." << endl;
    int i = 75;
    print(i, -1);
    print(i, 0);
    print(i, 1);
    print(i, 2);
    cout << endl;
}

int main()
{
    pi();
    fraction();
    integer();
}
http://coliru.stacked-crooked.com/a/99cb2ef30cd18754
As you can see, the specified precision and the actual output differ:
printing pi ...
n = 3.14159 (precision = -1)
n = 3 (precision = 0)
n = 3 (precision = 1)
n = 3.1 (precision = 2)
printing a fraction ...
n = 0.0123457 (precision = -1)
n = 0.01 (precision = 0)
n = 0.01 (precision = 1)
n = 0.012 (precision = 2)
printing an integer ...
n = 75 (precision = -1)
n = 75 (precision = 0)
n = 75 (precision = 1)
n = 75 (precision = 2)
The observations are as follows:
1) Precision affects only floating-point numbers, not integers. That is as we might expect.
2) A floating-point number with a nonzero integer part (such as pi = 3.14...) prints differently from a pure fraction (integer part = 0):
a) The former prints as an integer at a precision of either 0 or 1. A precision of 2 prints only 1 decimal digit.
b) The latter prints 2 decimal digits at a precision of either 0 or 1. A precision of 2 prints 3 decimal digits.
3) A negative precision prints the entire value.
How can observations # 2a and 2b be explained?
Also, what is the precise definition of "precision"?
Thanks.