### Combine RGB(A) pixels into one?

I have a buffer with a virtual width of 1024 pixels and a virtual height of 768 pixels (the actual width is stored in GPU.GPU_screenxsize and the height in GPU.GPU_screenysize, both uint_32).

I currently use the following function to plot the pixels to the screen:

```cpp
inline void GPU_defaultRenderer() //Default renderer (original, written by me)!
{
    uint_32 pixel;
    int bufferx; //Buffer's x!
    int buffery; //Buffer's y!
    int bufferxstart; //Buffer's x start!
    int bufferystart; //Buffer's y start!
    int pspx;
    int pspy; //psp x&y!
    for (pspy=0; pspy<PSP_SCREEN_ROWS; pspy++)
    {
        /* ... (the rest of the function was cut off in the original post) */
    }
}
```

Does anybody know how to combine the pixels (blend them into one destination pixel) to get a better image? (Currently it just takes the bottom-right pixel of the area that represents the destination pixel.)

So (x1,y1,x2,y2)->(pspx,pspy) = (bufferxstart,bufferystart,bufferx,buffery)->(pspx,pspy).
At the moment this is just (x2,y2)->(pspx,pspy).

By the way, the pixel format is uint_32 RGBA (only RGB is used at the moment). psp_graphics_putpixel draws it onto the real VRAM. PSP_SCREEN_ROWS and PSP_SCREEN_COLUMNS are the destination (real) screen's height and width in pixels.
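To illustrate what I mean by blending, here's a hypothetical sketch (not my actual code; `avg_per_channel` is a made-up name, and the layout is the uint_32 RGBA described above, i.e. 0xAABBGGRR). Averaging the packed values directly lets carries leak between channels, so each channel has to be pulled apart first:

```c
#include <stdint.h>

/* WRONG: adding packed pixels lets a carry from red spill into green. */
static uint32_t avg_packed_wrong(uint32_t p, uint32_t q)
{
    return (p + q) / 2;
}

/* Split each channel out, average, then repack (layout 0xAABBGGRR). */
static uint32_t avg_per_channel(uint32_t p, uint32_t q)
{
    uint32_t r = (( p        & 0xFF) + ( q        & 0xFF)) / 2;
    uint32_t g = (((p >> 8)  & 0xFF) + ((q >> 8)  & 0xFF)) / 2;
    uint32_t b = (((p >> 16) & 0xFF) + ((q >> 16) & 0xFF)) / 2;
    uint32_t a = (((p >> 24) & 0xFF) + ((q >> 24) & 0xFF)) / 2;
    return r | (g << 8) | (b << 16) | (a << 24);
}
```

For example, averaging red=255 with red=1,green=1 should give red=128, not red=0,green=1 as the packed version produces.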
You're looking for a technique called bilinear filtering:
http://en.wikipedia.org/wiki/Bilinear_filtering

(Looking at the sample code is more helpful than looking at the "description".)
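Roughly, a per-channel software version looks like the sketch below (my assumptions: a 0xAABBGGRR channel layout and made-up function names; `fx`/`fy` are the fractional position between the four neighbouring source pixels):

```c
#include <stdint.h>

/* Bilinearly interpolate one 8-bit channel (selected by its bit shift)
   between the four neighbouring pixels p00..p11. */
static uint8_t lerp_channel(uint32_t p00, uint32_t p10, uint32_t p01,
                            uint32_t p11, int shift, float fx, float fy)
{
    float c00 = (float)((p00 >> shift) & 0xFF);
    float c10 = (float)((p10 >> shift) & 0xFF);
    float c01 = (float)((p01 >> shift) & 0xFF);
    float c11 = (float)((p11 >> shift) & 0xFF);
    float top = c00 + (c10 - c00) * fx;       /* interpolate along x (top row) */
    float bot = c01 + (c11 - c01) * fx;       /* interpolate along x (bottom row) */
    return (uint8_t)(top + (bot - top) * fy); /* then along y */
}

/* Combine the four channels back into one packed pixel. */
static uint32_t bilinear(uint32_t p00, uint32_t p10, uint32_t p01,
                         uint32_t p11, float fx, float fy)
{
    return  (uint32_t)lerp_channel(p00, p10, p01, p11,  0, fx, fy)
         | ((uint32_t)lerp_channel(p00, p10, p01, p11,  8, fx, fy) << 8)
         | ((uint32_t)lerp_channel(p00, p10, p01, p11, 16, fx, fy) << 16)
         | ((uint32_t)lerp_channel(p00, p10, p01, p11, 24, fx, fy) << 24);
}
```

When scaling 1024x768 down to the PSP screen you'd compute the fractional source coordinate for each destination pixel and feed in the four surrounding source pixels. Note that for heavy downscaling, averaging the whole source area (a box filter) usually looks better than sampling only four pixels.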
I've tried something:

```cpp
//Give the pixel from our rendering!
#define GPU_GETPIXEL(x,y) GPU.emu_screenbuffer[(y*1024)+x]

int single_refresh = 0; //Single refresh?
void int10_preparescreenpixels(); //Prototype for refreshscreen!

//Pull apart!
#define GETRED(fullcolor) (fullcolor&0xFF)
#define GETGREEN(fullcolor) ((fullcolor&0xFF00)>>8)
#define GETBLUE(fullcolor) ((fullcolor&0xFF0000)>>16)
#define GETTRANS(fullcolor) ((fullcolor&0xFF000000)>>24)

//Or together!
#define RGBRED(red) (red)
#define RGBGREEN(green) (green<<8)
#define RGBBLUE(blue) (blue<<16)
#define RGBTRANS(trans) (trans<<24)

//So simple: RGBA
#define RGBA(trans,red,green,blue) (RGBTRANS(trans)|RGBRED(red)|RGBGREEN(green)|RGBBLUE(blue))

inline uint_32 fuse_pixelarea(word x1, word y1, word x2, word y2) //Fuse an area together into 1 pixel!
{
    word x,y; //Current coordinates!
    uint_32 numpixels = (((x2-x1)+1)*((y2-y1)+1)); //Amount of pixels!
    float pixelfactor = (256.0f/numpixels); //Factor for each pixel! (256.0f avoids integer division)
    float r=0;
    float g=0;
    float b=0; //Our total rgb value!
    float a=0; //Transparency!
    uint_32 buffer;
    for (x=x1;x<=x2;x++)
    {
        for (y=y1;y<=y2;y++)
        {
            buffer = GPU_GETPIXEL(x,y); //The pixel!
            r += GETRED(buffer)*pixelfactor; //Add red!
            g += GETGREEN(buffer)*pixelfactor; //Add green!
            b += GETBLUE(buffer)*pixelfactor; //Add blue!
            a += GETTRANS(buffer)*pixelfactor; //Add transparency!
        }
    }
    byte r2,g2,b2,a2;
    r2 = (((int)(r)>>8)&0xFF); //R
    g2 = (((int)(g)>>8)&0xFF); //G
    b2 = (((int)(b)>>8)&0xFF); //B
    a2 = (((int)(a)>>8)&0xFF); //A
    return RGBA(a2,r2,g2,b2); //RGB value of all pixels combined!
}

inline void GPU_defaultRenderer() //Default renderer (original, written by me)!
{
    uint_32 pixel;
    int bufferxend; //Buffer's x end!
    int bufferyend; //Buffer's y end!
    int bufferxstart; //Buffer's x start!
    int bufferystart; //Buffer's y start!
    int pspx;
    int pspy; //psp x&y!
    for (pspy=0; pspy<PSP_SCREEN_ROWS; pspy++)
    {
        /* ... (the rest of the function was cut off in the original post) */
    }
}
```

It seems to work. Can this still be optimized?
What do you mean by "optimized"? In speed?
If you're doing bilinear filtering in software, I think that's a good cause for concern. The graphics hardware should automatically take care of bilinear filtering for you while rendering a texture. Is there any reason for doing this in software?
@NGen, this is used by an emulator I'm writing. Somehow the SDL library won't compile (it gives me a duplicate main() error, even though main() is only defined once in main.c). At the moment the renderer is called at 60 FPS.
If you linked the SDL_main lib, that might cause it. SDL_main defines a platform-agnostic main(), and uses a #define in the header to replace your main with a different function:

```cpp
// Some SDL header
extern int main (int argc, char * argv[]);
#define main SDL_Main
```

```cpp
// SDL lib you link to
int main (int argc, char * argv [])
{
    return SDL_Main (argc, argv);
}
```

If you did an #undef on SDL's main, or defined main in a way that prevented the symbol from being replaced, it would conflict with SDL's version.
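You can see the mechanism without SDL at all with this toy reproduction (every name below is made up for illustration):

```c
/* What the SDL header effectively does: rename your main via the
   preprocessor, then have the library's real entry point call it. */
#define main fake_sdl_main           /* stands in for "#define main SDL_Main" */

int main(int argc, char *argv[])     /* the preprocessor turns this into fake_sdl_main */
{
    (void)argc; (void)argv;
    return 42;                       /* pretend this is your program */
}

#undef main

int real_entry(void)                 /* stands in for the library's real main() */
{
    char *argv[] = { "app", 0 };
    return fake_sdl_main(1, argv);
}
```

The practical fix is usually to keep the SDL header included above your main() and give it exactly the `int main(int argc, char *argv[])` signature, so the #define rewrites it the way the library expects.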

As for optimization, I'm not an expert on bilinear filter implementations, so I can't say any suggestions would be well-founded. Since you say it runs at 60 FPS so far, I wouldn't worry about it. Optimize only as needed, and whatnot.
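That said, one generic speed idea, sketched under my own assumptions (names adapted from your posted code, untested on a PSP): since pixelfactor is the same for every pixel, you can sum the raw channel values in integers and divide once per block, dropping the per-pixel float multiplies; for power-of-two block sizes the compiler can turn the divides into shifts:

```c
#include <stdint.h>

/* Integer-only box average of the block (x1,y1)..(x2,y2) inclusive.
   buf is the source buffer, pitch its width in pixels (1024 in the
   poster's setup); channel layout assumed 0xAABBGGRR. */
static uint32_t fuse_area_int(const uint32_t *buf, uint32_t pitch,
                              uint32_t x1, uint32_t y1,
                              uint32_t x2, uint32_t y2)
{
    uint32_t r = 0, g = 0, b = 0, a = 0;
    uint32_t n = (x2 - x1 + 1) * (y2 - y1 + 1);
    for (uint32_t y = y1; y <= y2; y++) {
        const uint32_t *row = buf + y * pitch + x1; /* walk each row linearly */
        for (uint32_t x = x1; x <= x2; x++) {
            uint32_t p = *row++;
            r +=  p        & 0xFF;
            g += (p >> 8)  & 0xFF;
            b += (p >> 16) & 0xFF;
            a += (p >> 24) & 0xFF;
        }
    }
    return (r / n) | ((g / n) << 8) | ((b / n) << 16) | ((a / n) << 24);
}
```

Walking each row through a pointer also avoids recomputing `y*1024+x` per pixel, which the GPU_GETPIXEL macro does.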

However, I'm still unsure of why you're using software rendering instead of hardware rendering. Even emulators make use of hardware acceleration for graphics, PCSX2 is the most notable one off the top of my head.
@NGen:

I'm using software rendering because I'm not that familiar with the hardware version (the sceGu* functions in the pspsdk), and the current rendering method (basically the same as the VGA, only at a different resolution) is nice and simple: a linear framebuffer of 512 virtual uint_32 pixels per row, so plotting is as simple as
`GPU.vram[y*512+x] = pixel`, where GPU.vram is a (uint_32 *)0x4000000 pointing to the start of VRAM in the PSP's memory.

@toum: Yes, in speed (and in the amount of resulting opcodes, of course). It'll need to run fast, so as not to delay the emulation too much. (I've written a timer that handles these by frequency, using delays and multithreading.)
Topic archived. No new replies allowed.