As a total amateur at programming I’ve started learning from Herbert Schildt’s ‘C++: A Beginner’s Guide’, and up till now I’ve been fine with it. But now it’s gone into more detail about integers, and I just can’t get my head around what the author is saying in this one paragraph:
‘Signed integers are important for a great many algorithms, but they have only half the absolute magnitude of their unsigned relatives. For example, assuming a 16-bit integer, here is 32,767:

0111111111111111
For a signed value, if the high-order bit were set to 1, the number would then be interpreted as -1 (assuming the two’s complement format). However, if you declared this to be an unsigned int, then when the high-order bit was set to 1, the number would become 65,535.’
What the hell is with all the 0s and 1s, and how do they become 32,767?
I know to most of you this is probably a stupid question, but if somebody could spare a moment to explain it in plain English I’d really appreciate it.