Hi guys

As a total amateur with programming I’ve started learning from ‘Herbert Schildt C++ A Beginner’s Guide’ and up till now I’ve been fine with it, but now it’s gone into more detail with Integers and I just can’t get my head around what the author is saying in this one paragraph:

‘Signed integers are important for a great many algorithms, but they have only half the absolute magnitude of their unsigned relatives. For example, assuming a 16-bit integer, here is 32,767:

01111111 11111111

For a signed value, if the high-order bit were set to 1, the number would then be interpreted as -1 (assuming the two’s complement format). However, if you declared this to be an unsigned int, then when the high-order bit was set to 1, the number would become 65,535.’

What the hell is with the ‘0’s’ and ‘1’s’ and how do they become 32,767 ???

I know to most of you this is probably a stupid question, but if somebody could spare a moment to explain it in simple English I’d really appreciate it.

Many thanks
That's binary numbers for you. Internally, the only thing the computer understands is "off" and "on". Those two states represent the digits 0 and 1.

A decimal number such as 1234 can be understood as

  1 * 1000 
+ 2 * 100
+ 3 * 10
+ 4 * 1

For binary numbers things are similar, but instead of powers of 10 we use powers of 2.
So the binary number 11011010 can be understood as

  1 * 128
+ 1 * 64
+ 0 * 32
+ 1 * 16
+ 1 * 8
+ 0 * 4
+ 1 * 2
+ 0 * 1

which gives the decimal value 218.

There's a very simple binary-decimal converter on this page

If you need more, google "binary numbers".
That's great, thanks for that, I think I get it now.