double vs long double -- any difference?

Hi,

I think my problem is quite basic, but I still don't get what's happening...
I need to deal with precision higher than double, so I figured long double would do the trick.

Here's the little code I thought would make me see the improvement:
-----------------
#include <stdio.h>
#include <iostream>
using namespace std;

int main(void)
{
    long double PI;
    double PI2;

    PI = 3.14159265358979323846264338327950288419716939937510;
    PI2 = 3.14159265358979323846264338327950288419716939937510;

    cout.precision(50);

    cout << "PI = " << PI << endl;
    cout << "PI = " << PI2 << endl;
    cout << "PI = 3.14159265358979323846264338327950288419716939937510" << endl;
    cout << "sizeof(double) = " << sizeof(double) << endl;
    cout << "sizeof(long double) = " << sizeof(long double) << endl;
}
-----------------

Here's what I get when I run it:
-----------------
PI = 3.141592653589793115997963468544185161590576171875
PI = 3.141592653589793115997963468544185161590576171875
PI = 3.14159265358979323846264338327950288419716939937510
sizeof(double) = 8
sizeof(long double) = 16
-----------------

What I think I understand from this is that the long double should be twice as precise as the double, but they both give the same result here (first two PIs), which is quite different from the input PI.

Am I doing things wrong?

In case it helps, I'm working on a Mac with OS X 10.6.5 and compiling things with g++-fsf-4.5.

Thanks in advance for the help!
PI = 3.14159265358979323846264338327950288419716939937510;

That literal doesn't have any suffix, and therefore it is a double.

So you're assigning a double literal (already rounded to double precision) to a long double variable. That's why it's no different from the double variable.

To change this, I think you just need to give it the 'L' suffix:

PI = 3.14159265358979323846264338327950288419716939937510L; // <- end with L

Either uppercase or lowercase L will work.
Makes sense!
Things are getting better when I add the L at the end, but only by a few digits. I now get:
PI = 3.1415926535897932385128089594061862044327426701784
PI = 3.141592653589793115997963468544185161590576171875
PI = 3.14159265358979323846264338327950288419716939937510
(first is long double, second is double and third is the reference).

Is there any way I could go further?
Don't define pi that way, because the number of digits has no effect on how precise a long double is. Do this instead:
long double pi = acos(-1.0L);
This will give pi to however many digits the processor supports.
I actually can't see the difference between the two anymore when doing this:
#include <stdio.h>
#include <math.h>
#include <iostream>
using namespace std;

int main(void)
{
	double PI2=acos(-1.0);
	long double PI=acos(-1.0L);
	
	cout.precision(50); 
	
	cout << "PI = " << PI << endl;
	cout << "PI = " << PI2 << endl;
}


I now get:
PI = 3.141592653589793115997963468544185161590576171875
PI = 3.141592653589793115997963468544185161590576171875

But in any case, I just took pi as a test. I want a way to make sure, and to visualize, that using long double makes a significant difference compared to double. Any ideas? Does the previous test (i.e. putting the L at the end) show me the limitation of my processor?

Thanks for the help.
Right, I think it was due to acos (supposedly) returning a double in both cases. When using acosl, I now get the exact same result as before, i.e. 3 more digits than double precision.
I guess that means I've reached the processor's limit...(?)

In any case, thanks again for the help.
(I'd still be happy to learn a way to get higher precision if anybody knows one though...)
acos is overloaded for long double only in <cmath>, not in <math.h>; acosl is right.
You have indeed reached the processor's limit. If you need higher precision, it will come at a large performance cost: you will need a library that implements its own math routines instead of using the normal FPU instructions.
One such library is GMP:
http://gmplib.org/
All right, brilliant.

Thanks!
All rise for today's reading, taken from "ISO/IEC 14882 INTERNATIONAL STANDARD First edition 1998-09-01", which yea and verily we may take to be the C++98 standard.

"3.9.1 Fundamental types

8. There are three floating point types: float, double, and long double. The type double provides at least as much precision as float, and the type long double provides at least as much precision as double. The set of values of the type float is a subset of the set of values of the type double; the set of values of the type double is a subset of the set of values of the type long double. The value representation of floating-point types is implementation-defined. Integral and floating types are collectively called arithmetic types. Specializations of the standard template numeric_limits (18.2) shall specify the maximum and minimum values of each arithmetic type for an implementation."

All be seated.

Thus, we can see long double is guaranteed only to provide no less precision than double (i.e. it would be correct if your compiler used a long double type with the same precision as double). It is entirely implementation-dependent, and you cannot rely on long double being anything more than at least as precise as double. It is quite possible that you could move this code to another compiler and see a different precision.

Apologies for the preachy style; I just really like presenting the C++ standard this way :p

No problem!

There's still a point I don't get though. The following lines:
cout << "sizeof(double) = " << sizeof(double) << endl;
cout << "sizeof(long double) = " << sizeof(long double) << endl;

return this:
sizeof(double) = 8
sizeof(long double) = 16

Doesn't it mean I should expect long doubles to be twice as precise as doubles?
It does not; it means only that you know for sure that a long double uses twice as much memory as a double. The storage space is, at heart, a set of binary values, and these binary values are used to represent a decimal value. How those binary values are interpreted to a decimal value is not laid down in the standard, and is implementation specific. There are some decimal values that simply can never be represented exactly with a given float implementation.

Take a look here, where pi is specifically used as an example:

http://en.wikipedia.org/wiki/Floating_point#Representable_numbers.2C_conversion_and_rounding

What do you mean by "twice as precise"? If you have a representation of pi as 3.1, and then another representation of it as 3.14, how much "more" precise is that? It's about 0.04 closer, so you could say it's about 1 percent more precise, or 1.01 times as precise, but you've had to use fifty percent more storage space to get this 1 percent precision improvement.

As you've realised, when you get to the actual mechanism of how numbers are represented inside the computer, things get interesting. It's definitely worth reading up on how numbers work inside the machine, and ultimately it's up to you to decide how much error is acceptable in your values, and use a primitive type or library-provided type that meets your requirements.
You should also note that it's quite possible on x86 systems that even if long double takes 16 bytes, all calculations might be done at 80-bit precision (which is what the FPU operates at) and the remaining 6 bytes of the long double actually remain unused.
GCC also supports a __float128 type, however without hardware support, all operations on these variables will be done in software.
I'd suggest you read up on some implementations of the long double type at http://en.wikipedia.org/wiki/Long_double#Implementations

For example, in Microsoft Visual C++ on x86 processors, long double is the same as double. In the case of GCC on x86 processors, it is 80 bits, though it may be stored as 96 bits or even 128 bits via compiler flags -- but you still only have 80 bits of precision.

I'd avoid long double as its behavior across various systems won't always be consistent. double is 64-bit in many implementations, and its precision suffices for most basic calculations. Otherwise, I'd recommend using a library such as MPFR when you are more proficient in C++.