getting headache with char datatype

Hi,
I have read every online material related to char, unsigned char, and signed char, but was unable to draw a conclusion. Can somebody illustrate with an example? I am getting confused between the ranges -128 to 127 and 0 to 255; I don't understand what they actually mean. Can you give a detailed explanation of char and what it is for? Also, can you illustrate the use of signed char with a code example? How can we assign negative values to a signed char?
signed char schar=-12;
unsigned char uchar = -12;
What is the operation behind this? How does it work?
What is meant by assigning a negative value to a signed char?
I hope to get a reply soon, understand this topic clearly, and then move on to another topic.
Thank You.
I believe that the char type is defined so that it has to be able to hold one character of the default character set of the system, which is usually ASCII (so 1 byte / 8 bits per character).

With 8 bits you can store positive values from 0 to 255 (2^8 = 256 distinct values).

However, if you want to store negative numbers, you have to sacrifice one of the bits to be used as a 'flag' that tells the system whether the other 7 bits represent a positive or a negative number.

With the 7 bits you have left, you are able to store a number from 0 to 127 (2^7 = 128),
and with your flag bit you are able to say whether or not it is negative,
so you end up being able to store a number from -128 to 127.

The way you tell the compiler to use one of the bits as a flag is with the keyword 'signed' or 'unsigned':

signed means one of the bits is used as the flag (the variable can be negative),
and unsigned means none of the bits is used as a flag (the variable can only be non-negative).

The first thing is to get the word "char" out of your head. This is not a character, and it's not a letter; it's just an integer. It can mean those things when dealing with text, but in C++ a character is an integer, and it is almost always (?) 1 byte (8 bits). For the rest of this conversation, you must think of char as just an integer type.

Binary math works off what is called two's complement. This is a simple operation: flip (logical NOT) all the bits, and add 1. That gives the negative of the original value (regardless of whether it was + or - to begin with). This works regardless of the integer type (char, int, short, int32_t, __int64, long long extra long with longness, whatever they are called now, etc.); they all work this way.

The number of bits gives the range.
A char is 1 byte, 8 bits, and 2^8 is 256. But you need room to store zero, so the range is 0 to 255, that is, 0 to 2^8 - 1 where 8 is the number of bits. 16 bits works the same way: 2^16 is 65536, so the range is 0 to 65535.

Two's complement *effectively* makes the most significant bit the 'sign' bit: that bit does nothing except describe the + or - of the value. Two's complement is clever. Add a + and a - value by hand a couple of times to observe how it works and how the sign bit plays into it; it's something you have to see to understand, so please do this.

With the sign bit used for sign, an 8-bit number now has 7 value bits, a 16-bit number has 15, etc.
An 8-bit char at 7 value bits gives 2^7 = 128, so the maximum is 128 - 1 = 127. You can't represent as many positive values because you are down a bit.

signed is the default, so the keyword is rarely written out; unsigned is used to override the default. (One caveat: plain char is allowed to be either signed or unsigned, depending on the compiler.)

If you assign an unsigned value to a signed type or a signed value to an unsigned type, you don't get the same value back unless it is in the shared positive range. That is, 0-127 work fine, but 128 stored into a signed char comes back out as a negative value (-128 on a two's complement machine). If you assign -5 to an unsigned char, you get a big positive value that is nothing like 5 or -5; it is related (by the two's complement), but it's rarely useful. ASSIGNMENT JUST COPIES THE BITS; how they are interpreted (signed or unsigned) is not checked or validated at run time. At compile time, a warning is your only way to know to fix it (if necessary).



