Windows 8 SDL RGBA is strange?

I normally use the standard RGBA bitmasks when using SDL on Windows:

#if SDL_BYTEORDER == SDL_BIG_ENDIAN
    rmask = 0xff000000;
    gmask = 0x00ff0000;
    bmask = 0x0000ff00;
    amask = 0x000000ff;
#else
    rmask = 0x000000ff;
    gmask = 0x0000ff00;
    bmask = 0x00ff0000;
    amask = 0xff000000;
#endif 


Surfaces created with these masks are then drawn onto the rendering surface.
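Roughly how the surface gets created (a minimal sketch building on the masks above; the width, height, and depth values are just placeholders, not from my real code):

#include <SDL/SDL.h>
#include <stdio.h>

/* Sketch only: 640x480x32 are placeholder values. */
SDL_Surface *surface = SDL_CreateRGBSurface(SDL_SWSURFACE, 640, 480, 32,
                                            rmask, gmask, bmask, amask);
if (surface == NULL) {
    fprintf(stderr, "SDL_CreateRGBSurface failed: %s\n", SDL_GetError());
}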

For some reason, when I debug my application, the rendering surface (the SDL_SetVideoMode result) reports entirely different mask values:

rmask = 0x00ff0000
gmask = 0x0000ff00
bmask = 0x000000ff
amask = 0x00000000

That looks like some mixture of big- and little-endian formats, with the alpha channel ignored?

Can anyone tell me why this is? Do I really have to use this strange format in my macros when compiling for Windows? Will this always work? (Pixels are plotted directly using memcpy and memcmp.)

#if SDL_BYTEORDER == SDL_BIG_ENDIAN
#define RGBA(r, g, b, a) ((a)|((b)<<8)|((g)<<16)|((r)<<24))
#define GETA(x) ((x)&0xFF)
#define GETB(x) (((x)>>8)&0xFF)
#define GETG(x) (((x)>>16)&0xFF)
#define GETR(x) (((x)>>24)&0xFF)
#else
#ifdef _WIN32
//Windows has irregular logic for some reason?
#define RGBA(r, g, b, a) ((b)|((g)<<8)|((r)<<16)|((a)<<24))
#define GETA(x) (((x)>>24)&0xFF)
#define GETR(x) (((x)>>16)&0xFF)
#define GETG(x) (((x)>>8)&0xFF)
#define GETB(x) ((x)&0xFF)
#else
//PSP logic?
#define RGBA(r, g, b, a) ((r)|((g)<<8)|((b)<<16)|((a)<<24))
#define GETR(x) ((x)&0xFF)
#define GETG(x) (((x)>>8)&0xFF)
#define GETB(x) (((x)>>16)&0xFF)
#define GETA(x) (((x)>>24)&0xFF)
#endif
#endif 
This is typical. Windows generally stores pixels as ARGB rather than RGBA. And the output video mode is usually 24-bit RGB (padded to 32 bits), not 32-bit ARGB anyway... which is why there's no alpha mask.
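To see why those masks aren't really a big/little-endian "mixture": 0x00FF0000 / 0x0000FF00 / 0x000000FF is just ARGB read as one 32-bit little-endian integer, so the bytes sit in memory as B, G, R, X. A quick standalone illustration (my own example, not from the thread):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* One 32-bit ARGB pixel: A=0x00, R=0x11, G=0x22, B=0x33 */
    uint32_t pixel = 0x00112233u;
    uint8_t *bytes = (uint8_t *)&pixel;

    /* On a little-endian machine this prints "33 22 11 00",
       i.e. B, G, R, X in memory order -- the "strange" Windows layout. */
    printf("%02x %02x %02x %02x\n", bytes[0], bytes[1], bytes[2], bytes[3]);
    return 0;
}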

Will this always work?


Probably not. That's the whole reason SDL probes the video mode and gives you this information programmatically. Just get the masks, shifts, etc. from the pixel format SDL reports for the current video surface and work with those, rather than assuming the video is in a certain format. That way you know it will always work.
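For instance (a sketch only, using a hypothetical plot_pixel helper and assuming a 32-bit SDL 1.2 surface, with locking and bounds checks omitted for brevity), you can let SDL pack the color for whatever format the screen actually reports:

#include <SDL/SDL.h>

/* Write one pixel using whatever pixel format the screen reports. */
static void plot_pixel(SDL_Surface *screen, int x, int y,
                       Uint8 r, Uint8 g, Uint8 b, Uint8 a)
{
    /* SDL packs the color according to screen->format (masks, shifts, loss),
       so this works whether the mode is ARGB, RGBA, or anything else. */
    Uint32 color = SDL_MapRGBA(screen->format, r, g, b, a);
    Uint32 *row  = (Uint32 *)((Uint8 *)screen->pixels + y * screen->pitch);
    row[x] = color;
}

SDL_GetRGBA does the reverse conversion when you need to read pixels back out of the surface.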




(But why are you messing with individual pixels outside of a shader anyway? This is 2015, not 1998)