Does anybody know of such a library?

Hello. Does anybody know of a library which is completely cross-platform, has no nasty dependencies, and will allow me to set the colour of any pixel on the screen? It needs to be VERY fast; able to set every pixel on a monitor many, many times a second. The reason I ask is that I have worked out an interesting alternative to vertex-based rendering, and I want to test it out by writing a simplistic graphics API.

Thank you.
There is no such library. Such a library would have to work on my Amiga 500, a telephone, and everything else. You'll have to narrow it down a bit. Which platforms must it work on? Must it directly control the pixel colour on the screen (i.e. interact with the graphics hardware), or can it actually control the pixel colour of an image which is being shown on the screen? There's a huge difference between directly controlling the screen, and controlling a picture that you are asking the OS to display on screen.

Sorry, I will be a bit more specific.

It basically needs to run on a home computer, so Windows, OS X, Linux, etc. It would also be nice if it worked on a number of other devices, like smartphones. I would like it to directly control the pixel colour, so yes, it would have to interact with the graphics hardware.
Any graphics lib that has a fullscreen mode will let you control each individual pixel onscreen. Many of them work on Windows/OS X/Linux. OpenGL is very popular; you can also use a friendlier wrapper around it, such as SFML.

For handheld devices I think there's a lib called Marmalade or something -- I've never personally used it, but I hear good things. I'm not sure whether Marmalade will work for Windows, but you can write your own wrapper around it and whatever other lib you want to use for PCs.

All graphic libs are going to have dependencies. If you're looking for one that has none, I sincerely doubt you will ever find it.

Plotting individual pixels in software is going to be slow with any lib, because it requires you to send a lot of data from the CPU to the GPU every frame. It's much faster to keep as much of the pixel plotting as possible on the GPU side.

This is typically why rendering is vertex based. It takes very little data to specify a single polygon, so that's a small amount of data that has to transfer to the GPU. Once it's there, the GPU can then use that small amount of data to plot tons of pixels very quickly.
Okay, I will look into those. Well, my idea isn't totally pixel-based rendering, so I won't have that issue.

EDIT:

A bit off topic, but does that mean that GPUs are designed to draw vector graphics only? (Excuse my lack of knowledge.) So if I wanted to use another convention for rendering 3D geometry, I can't really do it?
Topic archived. No new replies allowed.