So here are the lines of code I need help understanding:
    unsigned s = 555;
    int i = (s >> 4) & ~(~0 << 3);
I think I understand most of this, but I'm not sure I understand why ~0 == -1.
How exactly is the literal 0 being stored? I initially assumed that 0 would be treated as a signed int and stored as 000...00000, with the left-most bit being the sign bit. But if that were the case, then ~0 would have to be 111...11111, i.e. -2^16, right?
So it seems that 0 must be stored in only 2 bits, 00, with the left-most bit being the sign bit, so that ~0 is stored as 11, or -1. Then, when the left-shift operator is applied, the computer would somehow allocate more space in the registers, giving 11000, or -8. This seems wrong to me, but it's the only way I've been able to make sense of it...
Anyway, tell me why I'm wrong.