I know there is a debate about SDL vs SFML and I really don't care which is better overall. I only need one function, and that is putting a 2D array of 32-bit pixel data on the screen fast. It doesn't matter how easy it is to use or how messy it looks, as it's going to be the backend of custom 3D rendering software.
Also, which is fastest for read-only data:
hard drive (not solid state)
Sending large blocks of data over the bus to GPU memory is going to be slow with any lib. READING data from GPU memory is going to be even slower (avoid it if you can!).
Modern hardware does rendering within the GPU with relatively little communication between CPU and GPU. If you want your code to be fast, that's the way to make it the fastest: let the GPU do the work.
The fastest way to do this in either lib would be to upload your image data to a texture, then render a single textured quad (or 'sprite', or whatever terminology the lib uses). Speeds in SDL and SFML will be comparable in this regard.
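In SDL2 terms, that path is a streaming texture updated once per frame. A rough sketch (window title, dimensions, and the ARGB8888 format are just placeholders for your setup):

```c
#include <stdint.h>
#include <SDL2/SDL.h>

/* Sketch: push a CPU-side 32-bit pixel buffer to the screen each frame. */
enum { WIDTH = 640, HEIGHT = 480 };

int main(void) {
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window *win = SDL_CreateWindow("soft renderer", SDL_WINDOWPOS_CENTERED,
                                       SDL_WINDOWPOS_CENTERED, WIDTH, HEIGHT, 0);
    SDL_Renderer *ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED);
    /* STREAMING access hints that the texture will be rewritten every frame */
    SDL_Texture *tex = SDL_CreateTexture(ren, SDL_PIXELFORMAT_ARGB8888,
                                         SDL_TEXTUREACCESS_STREAMING,
                                         WIDTH, HEIGHT);
    static uint32_t pixels[WIDTH * HEIGHT];

    int running = 1;
    while (running) {
        SDL_Event ev;
        while (SDL_PollEvent(&ev))
            if (ev.type == SDL_QUIT) running = 0;

        /* ... fill pixels[] with your renderer's output here ... */

        /* pitch is bytes per row of the source buffer */
        SDL_UpdateTexture(tex, NULL, pixels, WIDTH * sizeof(uint32_t));
        SDL_RenderCopy(ren, tex, NULL, NULL);  /* textured quad over the window */
        SDL_RenderPresent(ren);
    }
    SDL_DestroyTexture(tex);
    SDL_DestroyRenderer(ren);
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}
```

The SFML equivalent is `sf::Texture::update()` on a texture drawn through an `sf::Sprite`; either way, the per-frame cost is dominated by the same CPU-to-GPU upload.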
But again... it'll be slow. Though you'll probably still manage a full framerate, depending on the size of the image and how much CPU time you're spending to generate it.
EDIT: I misunderstood the question about reading data; I thought you meant reading from the GPU. I don't know the answer to the second question, but I'd put money on either a flash drive or an SD card. Non-SSD hard drives and CDs are going to be slow because they have moving parts.
I know that it is not going to be faster than the standard approach, since it runs on the CPU and not the GPU, but the main goal is to test concepts, not to fit the current hardware layout. I am trying to make rendering software that doesn't use triangles, but instead uses equations. The algorithm is shorter and needs less data than triangles do, but model design is a lot more complex.
Do you (or anyone reading this) know how to program for a GPU? I think I would have to mess around with drivers, which I have no plan of doing.
Shaders are custom programs that are executed by the GPU. DirectX and OpenGL each have their own shader language, and they allow pixel-level control over the displayed image. No driver work is required; the graphics API compiles and runs the shader for you.
In OpenGL, you could create a Geometry shader to do your custom rendering (render to a texture), then have the Fragment shader just render a normal 2D textured quad.
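To give a sense of scale, the fragment-shader half of that pipeline can be as small as a single texture lookup. A minimal GLSL source (shown here as a C string literal; the names `uv`, `tex`, and `fragColor` are just illustrative) might be:

```c
/* Minimal GLSL fragment shader that samples a 2D texture and writes it out.
   Identifier names are illustrative, not from any particular codebase. */
static const char *frag_src =
    "#version 330 core\n"
    "in vec2 uv;\n"                  /* interpolated texture coordinate */
    "uniform sampler2D tex;\n"       /* the texture your pass rendered into */
    "out vec4 fragColor;\n"
    "void main() {\n"
    "    fragColor = texture(tex, uv);\n"
    "}\n";
```

You'd hand this string to `glShaderSource`/`glCompileShader` when building the program that draws the final quad.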
You could probably do something similar with DirectX but I'm not familiar with DX shaders.
That said... it'd probably be easier to just do it the slow way and send pixel data every frame. If you're just doing conceptual work and are not going for performance then it's an acceptable route to take. (It's what I would do, anyway).