quick question out of curiosity

OK, I know that an int is usually 4 bytes, ranging from -2^31 to 2^31-1 for a signed int and 0 to 2^32-1 for an unsigned int. My question is simply, bit-wise (I know they are labelled in the code), how does it determine whether to show -2^31 or 2^32-1 if it was 11111111 11111111 11111111 11111111 in bits? Is there a 5th byte to tell the compiler what data type to treat the input as?
Two's complement crash course:

Each bit has a "weight". The weight is equal to 2^n, where 'n' is the bit position.

For example... an 8-bit unsigned number: 00000101 has bits 0 and 2 set (bit 0 is the lowest/least significant bit... bit 7 would be the highest bit in an 8-bit var).

so since bits 0 and 2 are set... the value is:

2^0 + 2^2
= 1 + 4
= 5

Therefore 00000101 in binary is 5 in decimal.
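
To make that concrete, here's a tiny sketch (assuming a C++14 compiler for the 0b binary literal) that adds up the bit weights by hand:

#include <iostream>

int main()
{
    unsigned char b = 0b00000101;                       // bits 0 and 2 set
    unsigned int fromWeights = (1u << 0) + (1u << 2);   // 2^0 + 2^2 = 1 + 4

    std::cout << static_cast<unsigned>(b) << '\n';      // prints 5
    std::cout << fromWeights << '\n';                   // prints 5
}

Both lines print 5, because the stored pattern and the weight sum are the same number.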


Signed numbers work the exact same way... only the weight of the highest bit is negative. Therefore:

10000001 <- bits 7 and 0 are set.
-(2^7) + 2^0
= -128 + 1
= -127


A 32-bit integer is the same, only bit 31 has a negative weight because it's the high bit (instead of bit 7).
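
If you want to see that on a real machine, here's a minimal sketch (assuming C++14 for the binary literal; converting an out-of-range unsigned value to a signed type is implementation-defined before C++20, but gives the two's-complement result on common compilers):

#include <cstdint>
#include <iostream>

int main()
{
    // 8-bit pattern 10000001: weight -(2^7) for bit 7, +(2^0) for bit 0
    std::int8_t s8 = static_cast<std::int8_t>(0b10000001);
    std::cout << static_cast<int>(s8) << '\n';   // prints -127

    // 32-bit pattern of all ones: -(2^31) + (2^31 - 1) = -1
    std::int32_t s32 = static_cast<std::int32_t>(0xFFFFFFFFu);
    std::cout << s32 << '\n';                    // prints -1
}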
C++ is a statically typed language, meaning the type has to be known ahead of time, at compile time. There's no way to differentiate binary data at runtime unless you inspect it yourself and conclude that it *looks* like it could be a float, or you store some sort of type identifier yourself (which is essentially what some forms of polymorphism, e.g. dynamic_cast, rely on). The computer can't tell what a type is just by looking at the binary data.
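
As a rough illustration of that point (my own sketch, not from the posts above): with C++20's std::bit_cast you can read one 32-bit pattern as two different types and get two completely different values. The bits alone don't say which interpretation is "right"; the type you chose does.

#include <bit>
#include <cstdint>
#include <iostream>

int main()
{
    std::uint32_t bits = 0x40490FDBu;   // just some 32-bit pattern

    // Same bits, two interpretations (IEEE-754 float assumed):
    float f = std::bit_cast<float>(bits);                 // ~3.14159
    std::int32_t i = std::bit_cast<std::int32_t>(bits);   // 1078530011

    std::cout << f << ' ' << i << '\n';
}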
It is the type declaration of the object that allows the compiler to correctly interpret the internal representation of the object's value.
Signed integer types have a sign bit.
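
In other words, the declared type is what picks the interpretation when the value is used or printed. A quick sketch (the signed/unsigned pair below assumes a two's-complement machine, which is what essentially every current platform uses):

#include <iostream>

int main()
{
    unsigned int u = 0xFFFFFFFFu;   // all 32 bits set
    int s = -1;                     // same bit pattern on a two's-complement machine

    std::cout << u << '\n';   // prints 4294967295
    std::cout << s << '\n';   // prints -1
}

Same bits in memory, different output, purely because of the declared types.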