EXTENDED ASCII - How to 'cout' ?

So when you assign an unsigned integer to a char, two's complement will be performed on the bits of the integer and then the new bits are stored?

0 0 0 0 0 0 0 1
will be stored as
1 1 1 1 1 1 1 1 in signed char.

And when you cast again back to int, two's complement will again be performed on the signed char's bits.
1 1 1 1 1 1 1 1
0 0 0 0 0 0 0 1

Is two's complement used only for signed data types, and for all of them?

What about one's complement? Where is that used? And why is two's complement used over one's complement? Still very confused.

We learnt about one's and two's complement 3-4 months ago and we never knew it had a use!
Other than the teacher telling us "it is used to represent negative numbers", what can one take from that?

By the way, is there any reason why char has both signed and unsigned variants?
mbozzi wrote:
Yes, two's complement is required.

I did not know that. Interesting. If I had read a little farther in the standard I guess it would've become even more obvious with -1 having the same bit representation as the highest unsigned.

Grime wrote:
two's complement will be performed

There's no "performing" involved. The bits are simply interpreted as 2's complement. You are totally confused. What webpage did you read in your 2's complement studies? Try Wikipedia. Read it. Think about it. Try some examples of 2's complement math.
11111111
in unsigned

is reinterpreted as signed char:
11111111

11111111 represents -1.

11111111 in unsigned is the 256th value (counting 0 as the first).

11111111 in signed is the 128th value (counting -128 as the first).

But unsigned 11111111 and signed 11111111 print the same character when cast to unsigned and signed char. Why?
> I just want to know what happens when you try to cast a high integer to char though I know it's not useful.

If, in the implementation, the underlying type of char is signed char, this would engender undefined behaviour. Even in C++20, where the representation is required to be two's complement.

arithmetic for the unsigned type is performed modulo 2^N. [ Note: Unsigned arithmetic does not overflow. Overflow for signed arithmetic yields undefined behavior — end note ] http://eel.is/c++draft/basic.fundamental#2
Grime wrote:
But unsigned 11111111 and signed 11111111 print the same character when cast to unsigned and signed char. Why?

Why not?
Okay that clarifies it, thanks dutch. Weird though.

JLBorges wrote:

If, in the implementation, the underlying type of char is signed char, this would engender undefined behaviour. Even in C++20, where the representation is required to be two's complement.


So although it's undefined, because of two's complement, it is somewhat handled (if two's complement were used)?
> So although it's undefined, because of two's complement, it is somewhat handled

Undefined behaviour is undefined behaviour. Two's complement or no two's complement, the compiler (optimiser) can assume that in a meaningful program, signed integer overflow will never occur.

For example, both foo and bar below may generate the same code:

int foo( int a, int b ) 
{
    // a+1 can't overflow, so if a>b is true, a+1 > b must be true
    if( a > b ) return a+1 > b ? 100 : 200 ;

    // b+1 can't overflow, so if a>b is false, b+1 > a must be true
    else return b+1 > a ? 300 : 400 ;
}

int bar( int a, int b )
{
    if( a > b ) return 100 ;  
    
    else return 300 ;
}

https://gcc.godbolt.org/z/5Y2jnB
> I just want to know what happens when you try to cast a high integer to char though I know it's not useful.

If, in the implementation, the underlying type of char is signed char, this would engender undefined behaviour.

My understanding is that signed integer overflow doesn't occur as the result of an integral conversion (as part of a cast, for example). Instead:

[...] the result is the unique value of the destination type that is congruent to the source integer modulo 2^N, where N is the range exponent of the destination type.
http://eel.is/c++draft/conv#integral
Prior to C++20, the result is implementation-defined.

See also
https://en.cppreference.com/w/cpp/language/implicit_conversion#Integral_conversions

A range exponent is:
The range of representable values for a signed integer type is −2^(N−1) to 2^(N−1)−1 (inclusive), where N is called the range exponent of the type.
http://eel.is/c++draft/basic.fundamental#1.sentence-5