Conversions between integer primitive data types

Hi,

On my platform (Windows 7 Ultimate 64-bit with Service Pack 1, running on a PC with an AMD x86 processor), the following sample C++ code,

#include <iostream>
#include <limits>

using std::cout;
using std::endl;
using std::hex;
using std::showbase;
using std::numeric_limits;

int main(int argc, char** argv, char** envp)
{
	/* Conversion from unsigned integer */

	// Most significant bit is 1, ui = 0xFFFFFFFF
	unsigned int ui = numeric_limits<unsigned int>::max();
	// Prints 0xffffffff
	cout << "ui  = " << showbase << hex << ui << endl;
	// Is the most significant bit of "ui" sign-extended into "ull"?
	//		-> In my platform, NO, ull = 0x00000000FFFFFFFF
	unsigned long long ull = ui;
	// Prints 0xffffffff
	cout << "ull = " << showbase << hex << ull << endl;
	// Is the most significant bit of "ui" sign-extended into "sll"?
	//		-> In my platform, NO, sll = 0x00000000FFFFFFFF
	long long sll = ui;
	// Prints 0xffffffff
	cout << "sll = " << showbase << hex << sll << endl;

	/* Conversion from signed integer */

	// Most significant bit is 1, si = 0xFFFFFFFF
	int si = -1;
	// Prints 0xffffffff
	cout << "si  = " << showbase << hex << si << endl;
	// Is the most significant bit of "si" sign-extended into "ull"?
	//		-> In my platform, YES, ull = 0xFFFFFFFFFFFFFFFF
	ull = si;
	// Prints 0xffffffffffffffff
	cout << "ull = " << showbase << hex << ull << endl;
	// Is the most significant bit of "si" sign-extended into "sll"?
	//		-> In my platform, YES, sll = 0xFFFFFFFFFFFFFFFF
	sll = si;
	// Prints 0xffffffffffffffff
	cout << "sll = " << showbase << hex << sll << endl;

	return 0;
}


compiled with Microsoft Visual C++ 2010 Express, prints this output:


ui  = 0xffffffff
ull = 0xffffffff
sll = 0xffffffff
si  = 0xffffffff
ull = 0xffffffffffffffff
sll = 0xffffffffffffffff


So, on my platform, conversion from an unsigned integer primitive data type to any wider integer primitive data type never extends the most significant bit of the former integer, and conversion from a signed integer primitive data type to any wider integer primitive data type always extends the most significant bit of the former integer. This is convenient because it preserves the value when converting between integer primitive data types of the same signedness (i.e., both signed or both unsigned).

But does the C++ standard guarantee that this behaviour is the same on all platforms?

Thanks.
Yes. See section 4.7:
2 If the destination type is unsigned, the resulting value is the least unsigned integer congruent to the source integer (modulo 2^n where n is the number of bits used to represent the unsigned type). [ Note: In a two’s complement representation, this conversion is conceptual and there is no change in the bit pattern (if there is no truncation). — end note ]
3 If the destination type is signed, the value is unchanged if it can be represented in the destination type (and bit-field width); otherwise, the value is implementation-defined.


Of course, this is if I have interpreted your question correctly, but that is probably the relevant part in the standard. Also, you do realize that you can download draft versions of the standard? Though, it can be a bit hard to find things sometimes...
Where can I download the C++ standard ?
You can get the current working draft standard (for C++14) from here: https://isocpp.org/files/papers/N3797.pdf

Otherwise, Google is your friend.