Bitwise operators changing my type?

Greetings,
I'm doing some bitwise operations concerning console colors. I have the following enum:
static enum color : __int8
	{
	blue = 0b00000001,
	green = 0b00000010,
	red = 0b00000100,
	aqua = blue | green,
	purple = blue | red,
	yellow = green | red,
	light = 0b00001000,
	dark = 0b00000000
	};


and I wanted to split the console color into foreground and background, both to keep the usage simple and to avoid repeated names like "blue", "light blue", "green", "light green", etc.

For that I'm using a function (for now it only sets a variable):

color color_main =
	//background
	((background | (bg_light ? (color::light) : 0b00000000)) << 4) |
	//foreground
	(foreground | (fg_light ? (color::light) : 0b00000000));


But despite everything being 1 byte long (__int8), I get the following error:
"cannot assign int to color".
I did some tests, reducing the error to the following simpler code:
 
color color_main = color::blue | color::light;

It still gives the same error. Is the bitwise operator giving an integer as a result instead of maintaining the current type?
How do I get what I want without using types larger than 1 byte?
Yes, the color is automatically converted to an int. There is no automatic conversion back to color because that could easily lead to colors that are not among the listed ones. If you really want the conversion to happen you need to use an explicit cast.
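
For example, a minimal sketch of what that explicit cast could look like (the enum is shortened here, and I've used unsigned char as the underlying type just so it compiles outside Visual C++):

enum color : unsigned char { blue = 0b0001, green = 0b0010, red = 0b0100, light = 0b1000 };

int main()
{
	// the | is evaluated on ints, so an explicit cast is needed to get back to color
	color color_main = static_cast<color>(color::blue | color::light);
}
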
For some reason, when I set the type to "unsigned char" or "unsigned short", it all works correctly. I still don't understand why __int8 gets converted to int when doing the bitwise operation.
I still don't understand why __int8 gets converted to int when doing the bitwise operation.

It's called integral promotion. Essentially, it means that types smaller than int are automatically converted to int when used as operands of a binary operator.

http://en.cppreference.com/w/cpp/language/implicit_conversion#Numeric_promotions
And I guess there's no way to avoid that, right?
(Also, unsigned short and unsigned char are still smaller than int, but they don't behave that way; by using them I solved the problem.)
unsigned short and unsigned char are still smaller than int, but they don't behave that way

Yes they do, but I don't think this is your problem.

So if I have understood correctly you are now using unsigned short or unsigned char to store the result.

 
unsigned char color_main = color::blue | color::light;

This compiles without errors because the expression on the right hand side gives you an int which can be implicitly converted to an unsigned char. When you used color as the variable type it didn't work because an int can't be implicitly converted to a value of type color.
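
A minimal sketch of that difference (the enum here is a shortened stand-in for yours):

enum color : unsigned char { blue = 0b0001, light = 0b1000 };

int main()
{
	unsigned char ok = color::blue | color::light; // fine: the int result converts implicitly to unsigned char
	// color bad = color::blue | color::light;     // error: no implicit conversion from int back to color
}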

So you need to decide why you use an enum. If you are just using it as a convenient way to define constants, you probably don't want to define variables of type color but instead use some integer type. This means you will not get any help from the type system to ensure your color values stay within the valid set of color values.

If you use an enum because you want the type safety, you might instead want to overload the | operator (or some other operator if it makes more sense). The cast is still necessary, but now it only needs to be in one place, hidden inside the operator.

color operator|(color c1, color c2)
{
	// cast to int first so the built-in | is used; writing c1 | c2 here would call this operator again recursively
	return static_cast<color>(static_cast<int>(c1) | static_cast<int>(c2));
}

You can of course extend this as much as you want. You might want to have similar functions to convert between background colors and foreground colors, and you might even want to make them different types. It's up to you how you want to do it and how much effort you want to put into making the operations easy to use in a type safe manner.
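
A rough sketch of what that could look like; the names here (fg_color, bg_color, make_attribute) are made up for illustration, not something from your code:

enum fg_color : unsigned char { fg_blue = 0b0001, fg_light = 0b1000 };
enum bg_color : unsigned char { bg_blue = 0b0001, bg_light = 0b1000 };

constexpr unsigned char make_attribute(fg_color fg, bg_color bg)
{
	// background goes into the high nibble, foreground into the low nibble
	return static_cast<unsigned char>(static_cast<int>(fg) | (static_cast<int>(bg) << 4));
}

int main()
{
	unsigned char attr = make_attribute(fg_blue, bg_blue); // 0b0001'0001
}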
No, I'm still storing the result as the color type. I just defined color with unsigned char as its underlying type instead of __int8:
static enum color : unsigned char
	{
	blue = 0b00000001,
	green = 0b00000010,
	red = 0b00000100,
	aqua = blue | green,
	purple = blue | red,
	yellow = green | red,
	light = 0b00001000,
	dark = 0b00000000
	};


And the fact that it works this way, while with __int8 it doesn't, seems really, really weird to me.
I still don't understand why __int8 gets converted to int

Because 0b00000000 is an int.
> I still don't understand why __int8 gets converted to int when doing the bitwise operation

Usual arithmetic conversions:
The arguments of the following arithmetic operators undergo implicit conversions for the purpose of obtaining the common real type, which is the type in which the calculation is performed:

binary arithmetic *, /, %, +, -
relational operators <, >, <=, >=, ==, !=
binary bitwise arithmetic &, ^, |,
the conditional operator ?:

...

4) Otherwise, both operands are integers. In that case,
First of all, both operands undergo integer promotions

...

Integer promotions:
Integer promotion is the implicit conversion of a value of any integer type with rank less or equal to rank of int ... to the value of type int or unsigned int
http://en.cppreference.com/w/c/language/conversion


What this means is that even if both arguments are of the same type std::int8_t, integral promotions would still be applied.


#include <iostream>
#include <cstdint>
#include <type_traits>

int main()
{
    std::int8_t a = 0 ;
    std::int8_t b = 0 ;
    
    std::int8_t c = a | b ;
    // implemented "as-if"
    // promote a to int to yield a temporary ai
    // promote b to int to yield a temporary bi
    // evaluate ai | bi to yield a temporary (of type int) ti
    // narrow ti to std::int8_t to yield a temporary (of type std::int8_t) ni
    // initialise c with ni
    
    static_assert( std::is_same< decltype( a | b ), int >::value, "type of a|b must be int" ) ; 
    
    // uncomment the following line to see the error 
    // c = { a | b } ; // *** error: narrowing conversion of int to std::int8_t within {} 
}

http://coliru.stacked-crooked.com/a/2b41f25b5996eb05
Note that Visual C++ defines __int8 as char, so shifting color::light (0b0000'1000) four positions to the left results in 0b1000'0000, which is bigger than char can hold (assuming char is signed).
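
A small sketch of that concern, assuming __int8 behaves like a plain signed char:

#include <iostream>

int main()
{
	signed char   s = 0b0000'1000 << 4; // 128 does not fit in a signed char: you typically end up with -128
	unsigned char u = 0b0000'1000 << 4; // 128 fits fine in an unsigned char

	std::cout << static_cast<int>(s) << ' ' << static_cast<int>(u) << '\n'; // typically prints -128 128
}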
Ok, thanks!
@Peter87: I'm using unsigned char