OK, I know that an int is usually 4 bytes, ranging from -2^31 to 2^31-1 for a signed int and 0 to 2^32-1 for an unsigned int. My question is simply, bit-wise (I know they are labelled in the code): how does it determine whether to show -1 (signed, two's complement) or 2^32-1 (unsigned) if the bits are 11111111 11111111 11111111 11111111? Is there a 5th byte somewhere telling the compiler what data type to treat the value as?
C++ is a statically typed language, meaning the type of every object has to be known beforehand, at compile time. There's no way to differentiate binary data at runtime unless you inspect it yourself and conclude that it *looks* like it could be, say, a float, or you store some sort of type identifier alongside the data yourself (which is essentially what polymorphism via `dynamic_cast` relies on). The computer can't tell what a type is just by looking at the binary data.