Why can't a double variable be initialized by a value in hex?

Hi.
That code
double n = 0xC0A19CA16039C000;
printf("\n%lf\n%lf\n", n, 0xC0A19CA16039C000);

gave strange (to me) output on the console:
13880547743450644480.000000
-2254.315187
Why does the double type have such strange limitations on initialization?
read the damn warnings
warning: format specifies type 'double' but the argument has type 'unsigned long' [-Wformat]
printf("\n%lf\n%lf\n",n, 0xC0A19CA16039C000);
                ^        ^

@ne555

Well, two things:

First, this is a poster with one (1) entry - a possible sign of a troll, considering the question itself.

Second, it so happens that the double whose bits are 0xc0a19ca16039c000 has the value -2254.3151869997382, which rounds to the -2254.315187 in the OP's output.

So, what the OP is noting is that the second value prints correctly, but the assignment doesn't produce the "expected" value for n.

So, while the warning is about printf formatting (which is accurate), the actual problem the OP is bringing up is about the initialization of n.

The problem, @yoos (if you're genuine), is that the hex literal is an integer that gets converted to a double by value; its bits are not reinterpreted as a double's bit pattern.

This would sort of do what the OP is asking about.

#include <cstdint>   // for uint64_t

uint64_t d = 0xc0a19ca16039c000;
double a = *(double *) &d;   // reinterprets the bits (strictly speaking, this cast is undefined behaviour)


It's tough to squeeze the literal hex into a single line that ends up stuffing those bits into a double.
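
For what it's worth, a way to do the same bit-stuffing without the pointer cast (which technically runs afoul of strict aliasing) is std::memcpy, or std::bit_cast in C++20. A minimal sketch, assuming double and uint64_t are both 64 bits and the platform stores doubles in IEEE 754 format:

#include <cstdint>
#include <cstdio>
#include <cstring>

int main()
{
    const std::uint64_t bits = 0xC0A19CA16039C000u;

    // Copy the raw bytes into a double; well-defined, unlike the pointer cast.
    double d;
    std::memcpy(&d, &bits, sizeof d);

    std::printf("%f\n", d);   // -2254.315187 on a typical IEEE 754 platform
}

In C++20 the same conversion can be written as double d = std::bit_cast<double>(bits);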

> Why can't a double variable be initialized by a value in hex?

It can be; however, the exponent part must be present in a hexadecimal floating point literal.
(this applies to both C and C++).

#include <iostream>

int main()
{
    const unsigned long long ll = 0xC0A19CA16039C000 ;
    
    const double d = 0xC0A19CA16039C000p0 ;
    
    std::cout << ll << '\n'                // 13880547743450644480
              << std::fixed << d << '\n' ; // 13880547743450644480.000000
}

http://coliru.stacked-crooked.com/a/688153bc4bd53f70
@JLBorges,

I don't think that works as expected, though we can't be sure of the OP's intended output anyway (or whether the post is a re-post, or a troll).

From what I see, hex float literals are an extension, so they should still be considered non-portable (though Clang/MSVC/GCC support them in certain modes).

Yet, we can't be certain what the OP is really expecting. The question is a bit too terse.

If we assume that the output should match between the integer and floating point versions (which is a logical assumption on its own), then your post would be appropriate.

If, however, we assume that the OP has a 64-bit pattern expressed in hex that should merely become a floating point value - that is, the bits arrived in a stream of data without a type and are assumed to be the bits of an IEEE 754 double - then your version doesn't do that (and I think the question has these two different meanings, which we can't really tell apart).

The bits of a double representing the decimal value 13880547743450644480.0 are stored in IEEE 754 format as 0x43e81433942c0738, but if 0xC0A19CA16039C000 represents the bits of an IEEE 754 double, the decimal interpretation of that bit pattern is -2254.3151869997382.
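
For anyone who wants to check those two bit patterns themselves, here is a small sketch (again assuming 64-bit IEEE 754 doubles); bits_of and double_from_bits are helpers invented for this illustration, not anything from the OP's code:

#include <cstdint>
#include <cstdio>
#include <cstring>

// Hypothetical helpers: convert between a double and its raw bit pattern.
static std::uint64_t bits_of(double d)
{
    std::uint64_t u;
    std::memcpy(&u, &d, sizeof u);
    return u;
}

static double double_from_bits(std::uint64_t u)
{
    double d;
    std::memcpy(&d, &u, sizeof d);
    return d;
}

int main()
{
    // Bits actually stored for the decimal value 13880547743450644480.0:
    std::printf("%#018llx\n",
                static_cast<unsigned long long>(bits_of(13880547743450644480.0)));
    // expected: 0x43e81433942c0738 on an IEEE 754 implementation

    // Value denoted when 0xC0A19CA16039C000 is taken as a double's bit pattern:
    std::printf("%.13f\n", double_from_bits(0xC0A19CA16039C000u));
    // expected: -2254.3151869997382
}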

In the OP's post, the warning was issued for the type mismatch of the second argument to printf, whose output is rounded to 6 digits after the decimal point.

We can't be certain which result the OP regarded as correct, if either, but the hint I take is that your version produces the same decimal value as the OP obtained for 'n' - a double.

Which is to say that the OP's

double n = 0xC0A19CA16039C000;

produces the same value for n as

const double d = 0xC0A19CA16039C000p0 ;

does for d.

On the other hand,

printf("\n%lf\n%lf\n",n, 0xC0A19CA16039C000p0 );

would have eliminated the compiler warning, and both values would display the same output; the hex literal is still the integer value, just expressed as a double. The plain assignment to n works with or without the hex float extension.
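
For reference, a complete version of that corrected call might look like the sketch below; it assumes a compiler that accepts hexadecimal floating literals (C99, C++17, or the common compiler extension):

#include <cstdio>

int main()
{
    double n = 0xC0A19CA16039C000;   // integer literal converted to a double by value

    // Both arguments are now doubles, so -Wformat is satisfied and
    // both lines print 13880547743450644480.000000.
    std::printf("\n%lf\n%lf\n", n, 0xC0A19CA16039C000p0);
}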

The initialization of the double from a hex integer literal actually worked, if that integer value was the expected initialization for the OP's 'n'. It was the argument to printf that was ambiguous, where the output interpreted the value provided as if its bits were a floating point value (which is potentially the desired result, depending on where those bits came from and why they're being used).

I'm suspicious of any poster whose entire history is one (1) post at this point anyway, so I can't tell whether the original inquiry is genuine.

It just seems interesting as a subject, though. I have, at times, needed to initialize a double with bits obtained from a chopped-up stream, but I don't recall ever wondering about initializing a double from a hex integer (or any integer), or expecting printf to handle mismatched types as expected without casts of some kind.


Hex float literals were (finally) standardized in C++17 (not that I have any experience in using them).
https://en.cppreference.com/w/cpp/language/floating_literal
> The hexadecimal floating-point literals were not part of C++ until C++17, although they can be parsed and printed by the I/O functions since C++11: both C++ I/O streams when std::hexfloat is enabled and the C I/O streams: std::printf, std::scanf, etc. See std::strtof for the format description.
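
A quick sketch of that pre-C++17 library support, using std::hexfloat on a stream and the "%a" conversion in the C I/O functions; the exact output text is implementation-specific, so the comments below are only what a typical IEEE 754 / glibc setup prints:

#include <cstdio>
#include <cstdlib>
#include <iostream>
#include <iomanip>

int main()
{
    const double d = -2254.3151869997382;

    std::cout << std::hexfloat << d << '\n';   // e.g. -0x1.19ca16039cp+11
    std::printf("%a\n", d);                    // same value through the C I/O layer

    // Hexadecimal floating point can also be parsed (std::strtod, C99/C++11):
    const double parsed = std::strtod("-0x1.19ca16039cp+11", nullptr);
    std::cout << std::defaultfloat << parsed << '\n';  // -2254.32 at the default precision
}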
> Hex float literals were (finally) standardized in C++17 (not that I have any experience in using them).
I've found that the hexadecimal representation of floating point values is an excellent teaching tool. It never contains apparent garbage that results from looking at an inexact decimal representation. It also makes it clear which values are representable, and why.

float-to-hex output is also extremely easy to implement, vs. float-to-decimal.
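
A tiny illustration of that teaching point: printing the same constants in decimal and in hexfloat makes it obvious which ones are exactly representable. The output in the comments is what glibc's printf typically produces:

#include <cstdio>

int main()
{
    // 0.1 is not exactly representable: the hex form exposes the
    // repeating pattern hidden behind the tidy-looking decimal.
    std::printf("%.17f  =  %a\n", 0.1, 0.1);     // 0.10000000000000001  =  0x1.999999999999ap-4

    // 0.125 is exactly representable: the hex form is short and exact.
    std::printf("%.17f  =  %a\n", 0.125, 0.125); // 0.12500000000000000  =  0x1p-3
}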
Yes. Even if many programmers never encounter situations which need them, there are cogent reasons for using textual hexadecimal floating point representations: round-trip accuracy for i/o (std::hexfloat, "%a") and portable performance.
Even Java had to add support for hexadecimal floating point.