signed and unsigned usage

What is the exact usage of the signed and unsigned identifiers before a data type? Can someone explain the char and int data types (signed and unsigned ones) with appropriate example programs?
This is worded very much like a homework assignment would be...buuuut I'm going to trust it isn't.

An integer that is signed (signed int) will have a maximum value roughly half that of an unsigned int. Signed is literally what it sounds like: signed (positive or negative) versus unsigned (zero or positive only).
And since the int type has a fixed number of bits allotted to it per instance, reserving one bit for the sign halves the largest positive value it can hold; the total count of representable values stays the same, it is just split between negatives and non-negatives.
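To make that concrete, here is a minimal sketch (my own example, not from the original posts) that prints the ranges of signed and unsigned int using std::numeric_limits. The exact numbers depend on your platform's int size.

#include <iostream>
#include <limits>

int main()
{
    // A signed int gives up one bit to represent the sign, so its maximum
    // is roughly half the maximum of an unsigned int of the same width.
    std::cout << "signed int   min: " << std::numeric_limits<int>::min() << '\n';
    std::cout << "signed int   max: " << std::numeric_limits<int>::max() << '\n';
    std::cout << "unsigned int min: " << std::numeric_limits<unsigned int>::min() << '\n';
    std::cout << "unsigned int max: " << std::numeric_limits<unsigned int>::max() << '\n';
}

On a typical system with 32-bit int this prints -2147483648 and 2147483647 for signed, and 0 and 4294967295 for unsigned: the same number of distinct values, just distributed differently.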

As for why char can be declared signed, the char type is more or less a small integer: it holds a character's numeric code (its ASCII value on most systems).
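A short sketch of my own to illustrate that point (note that whether plain char is signed or unsigned is implementation-defined, so the signed char and unsigned char variants exist when you need a specific range):

#include <iostream>

int main()
{
    char c = 'A';            // stores the character's code (65 in ASCII)
    signed char sc = -100;   // can hold negative values
    unsigned char uc = 200;  // 0 to 255 on a system with 8-bit char

    // Casting to int prints the underlying numeric value instead of a glyph.
    std::cout << "c  = " << c << " (code " << static_cast<int>(c) << ")\n";
    std::cout << "sc = " << static_cast<int>(sc) << '\n';
    std::cout << "uc = " << static_cast<int>(uc) << '\n';
}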
Hi Seraphimsan,

Thanks for your time and explanation.