I don't think that works as expected, though we can't be sure of the OP's intended output anyway (or whether the post is a re-post, or a troll).
From what I see, hex floats are an extension, so they should still be considered non-portable (though Clang/MSVC/GCC support them in certain modes).
Yet, we can't be certain what the OP is really expecting. The question is a bit too terse.
If we assumed that the output should be a match between the integer and floating point versions (which is a logical assumption on its own), then your post would be appropriate.
If, however, we assumed that the OP has a 64-bit pattern expressed in hex that should merely become a floating point value (as in, the bits arrived in a typeless stream of data and are assumed to be the bits of an IEEE 754 double), then your version doesn't do that. I think this question has two different possible meanings, and we can't really determine which was intended.
The bits of a double representing the decimal value 13880547743450644480.0 are stored in the IEEE 754 format as 0x43e81433942c0738, but if 0xC0A19CA16039C000 represents the bits of an IEEE 754 double, the decimal interpretation of that bit pattern is -2254.3151869997382.
In the OP's post, the warning issued is about a type mismatch on the second parameter to printf, where the output is rounded to 6 digits.
We can't be certain which result the OP regarded as correct, if either, but the hint I take is that your version produces the same decimal value as the OP obtained for 'n', a double.
Which is to say that the OP's
double n = 0xC0A19CA16039C000;
produces the same value for n as
const double d = 0xC0A19CA16039C000p0;
does for d.
On the other hand,
printf("\n%lf\n%lf\n", n, 0xC0A19CA16039C000p0);
would have eliminated the compiler warning, and both values display the same output; there the hex value (for n) is an integer being assigned to a double, and that works with or without the hex float extension.
The initialization of the double via a hex literal worked, if the integer value was the expected initialization for the OP's 'n'. It was the parameter to printf that was ambiguous, where the output interpreted the value provided as if its bits were a floating point value (which is potentially a desired result, depending on where those bits came from and why they're being used).
I'm suspicious of any poster whose entire history is a single post, at this point anyway, so I can't tell whether the original inquiry is genuine.
It just seems interesting as a subject, though, because I have at times needed to initialize a double with bits obtained from a stream and chopped up, but I don't recall ever wondering about initializing a double from a hex integer (or any integer), or expecting printf to handle mismatched types as expected without casts of some kind.
Hexadecimal floating-point literals were not part of C++ until C++17, although they could be parsed and printed by the I/O functions since C++11: both the C++ I/O streams, when std::hexfloat is enabled, and the C I/O functions: std::printf, std::scanf, etc. See std::strtof for the format description.
Hex float literals were (finally) standardized in C++17 (not that I have any experience using them).
I've found that the hexadecimal representation of floating point values is an excellent teaching tool. It never contains apparent garbage that results from looking at an inexact decimal representation. It also makes it clear which values are representable, and why.
Float-to-hex output is also extremely easy to implement, compared with float-to-decimal.
Yes. Even if many programmers haven't encountered situations that need them,
there are cogent reasons for using textual hexadecimal floating point representations:
accuracy, round-trippable I/O (std::hexfloat, "%a"), and portable performance.
Even Java had to add support for hexadecimal floating point.