This is worded very much like a homework assignment would be... buuuut I'm going to trust that it isn't.
An integer that is signed (signed int) has a maximum value that's roughly half that of an unsigned one. See, "signed" is literally what it sounds like: the value carries a sign, so it can be positive or negative, while an unsigned value is strictly non-negative.
And since the integer primitive has a fixed number of bits allotted to it, spending one of them on the sign halves the positive range it can represent. The total count of representable values stays the same; half of them just shift below zero.
As for why char can be declared signed: the char primitive is more or less a small integer, since under the hood it represents a character code (e.g. ASCII).