SDL or SFML

Recently I've been learning SDL, but I've been hearing that SFML is easier and better. I was wondering what people here think about that, and whether I should keep learning SDL or drop it and start learning SFML instead.


P.S. If I don't respond, it doesn't mean I didn't read your reply. I don't have internet at home, so I use my friend's connection, but I get on using a PSP, and for some reason I can't post anything from it.
I personally like SFML more, but for me it's not so much better that I'd just say "drop SDL and get SFML while it's hot!!1!". I guess it really depends on how much you've learned about SDL. If you're still in the beginning stages, I'd say drop SDL and go learn SFML, but if you're a bit farther along, it may pay to make a slower transition.

-Albatross
SFML is pretty much superior in just about every way.

SDL made sense back when 2D drawing was hardware accelerated. Nowadays you get better performance with 3D libs like OpenGL (which SFML uses). SDL's 2D video is too slow for practical use; I tried it before and it just didn't cut it. You pretty much have to use SDL's OpenGL interface (which is minimal -- you basically have to use OpenGL directly) in order to get any kind of reasonable performance.

SFML doesn't have that problem.

SDL also forces you to do all audio mixing in software (at least as far as I know). That isn't as big a deal, but it still hinders performance. (EDIT: actually, is this true? I just assumed you couldn't open multiple audio devices at once, but maybe I'm wrong?)

SFML doesn't have that problem either.

SFML also supports more common things out of the box, like loading and playing Ogg files, reading PNG files, etc. You need add-on libraries to get that done with SDL.

SFML also has a simpler interface, a nice OO design, and an active forum you can go to for help. AND it's still being actively developed.


There's little reason to use SDL anymore. I say definitely ditch it and use SFML.
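
To give a sense of how little code the out-of-the-box stuff takes, here's a rough sketch using the SFML 1.x API (written from memory, so double-check the names against the docs; "sprite.png" and "music.ogg" are just placeholder files):

#include <SFML/Graphics.hpp>
#include <SFML/Audio.hpp>

int main(){
	sf::RenderWindow App(sf::VideoMode(640,480),"SFML example");

	sf::Image Image; //PNG loading is built in
	if (!Image.LoadFromFile("sprite.png"))
		return 1;
	sf::Sprite Sprite(Image);

	sf::Music Music; //Ogg playback is built in too
	if (Music.OpenFromFile("music.ogg"))
		Music.Play();

	while (App.IsOpened()){
		sf::Event Event;
		while (App.GetEvent(Event))
			if (Event.Type==sf::Event::Closed)
				App.Close();
		App.Clear();
		App.Draw(Sprite);
		App.Display();
	}
	return 0;
}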
SDL 1.3 doesn't look too bad so far. Doesn't it have support for the Nintendo DS now as well? I thought SDL was still being actively developed? Either way, I'd say go with SFML for now. But I think SDL could become a much better choice if the so-called 1.3 version ships with everything that was originally meant to be in the release (going from memory here).
SDL made sense back when 2D drawing was hardware accelerated. Nowadays you get better performance with 3D libs like OpenGL (which SFML uses).
I once tried moving my 2D SDL code to 2D OGL (which, by the way, is exactly the same as 3D, only you project things in a space without perspective, so an object looks exactly the same no matter how far from the camera it is). For the most part it was alright. I particularly liked how much faster rotation and resizing of surfaces is. There's simply no comparison to doing it in software.
However, one of the most common effects I needed -- a fade-in/out effect that gradually in/decreases the alpha of a surface -- was significantly slower. I don't know, maybe I was using the interface wrong (it wouldn't surprise me. It was my first time using OGL), but I don't think applying opacity to a whole texture 256 times should take over two seconds.
OGL also doesn't simplify alpha blending at all, which my code used heavily. If you're using sprites that use a single bit for alpha, OGL can do the redrawing for you; otherwise you have to tell it each time the order you want the textures to be displayed in.

So yeah, OpenGL is great for 2D if your requirements happen to be within the very limited boundaries where OpenGL is efficient.
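
For anyone who hasn't used OGL for 2D: "projecting in a space without perspective" just means an orthographic projection. The usual fixed-function setup is roughly this (assuming a 640x480 window and SDL-style coordinates with y growing downward):

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0,640,480,0,-1,1); //left, right, bottom, top, near, far
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();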
Mythios wrote:
SDL 1.3 doesn't look too bad so far. Doesn't it have support for the Nintendo DS now as well? I thought SDL was still being actively developed?


It's been stuck at v1.2 for like ever (since I was first introduced to it, which was probably at least 6-7 years ago).

I just checked and apparently there is a 1.3 under development, which I was unaware of. So maybe it still is being worked on. How long has that been under development though? Maybe I just never noticed it existed until now?

helios wrote:
However, one of the most common effects I needed -- a fade-in/out effect that gradually in/decreases the alpha of a surface -- was significantly slower. I don't know, maybe I was using the interface wrong (it wouldn't surprise me. It was my first time using OGL), but I don't think applying opacity to a whole texture 256 times should take over two seconds.


Modifying textures (if that's what you were doing) is slow. It's generally something you want to avoid if you can.

For fade in / fade out effects, what I'd do is just draw a single untextured black quad over the entire rendered screen (or whatever portion of it you want faded out). The alpha of the black quad determines the level of the fade.
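
Something like this (just a sketch, fixed-function GL, assuming blending is already enabled and a 640x480 screen):

//fade: 0.0 = no fade, 1.0 = fully black
void drawFade(double fade){
	glDisable(GL_TEXTURE_2D); //untextured quad
	glColor4d(0,0,0,fade);    //black, alpha = fade level
	glBegin(GL_QUADS);
	glVertex2d(0,0);
	glVertex2d(640,0);
	glVertex2d(640,480);
	glVertex2d(0,480);
	glEnd();
	glEnable(GL_TEXTURE_2D);
}

Call it after everything else has been drawn for the frame.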

OGL also doesn't simplify alpha blending at all, which my code used heavily. If you're using sprites that use a single bit for alpha, OGL can do the redrawing for you; otherwise you have to tell it each time the order you want the textures to be displayed in.


??

It sure does simplify it! I wonder what you were doing that caused you to have trouble with it. The only catches are:

1) You have to set it up (admittedly kind of a pain, but it's only like 3 lines of code; see the sketch below)
2) You have to draw things in "bottom up" order in order for it to be drawn as you expect (but this would be true of any library)
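
The setup in #1 is basically just this (a sketch, fixed-function GL):

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA,GL_ONE_MINUS_SRC_ALPHA); //standard "source over destination" blending
glDisable(GL_DEPTH_TEST); //optional for pure 2D; draw order decides what ends up on top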

It almost sounds like you were describing #2 in the above paragraph. SDL has the same limitation though, so that doesn't really make sense.
Modifying textures (if that's what you were doing) is slow. It's generally something you want to avoid if you can.
It doesn't change the texture, just how the polygon is rendered.

For fade in / fade out effects, what I'd do is just draw a single untextured black quad over the entire rendered screen (or whatever portion of it you want faded out). The alpha of the black quad determines the level of the fade.
What if I want to fade out a single polygon among many?

2) You have to draw things in "bottom up" order in order for it to be drawn as you expect (but this would be true of any library)
Exactly what I meant. If I'm doing exactly the same thing I was doing with SDL, that's not a simplification. What am I giving it Z coordinates for, then, if I also have to tell it how to render polygons in the proper order?
What if I want to fade out a single polygon among many?


For a single polygon, it's probably easiest to just change that polygon's color with glColor.

You might also be able to use the color matrix to apply color changes to larger groups, although I'm not familiar with how to use the color matrix.

Exactly what I meant. If I'm doing exactly the same thing I was doing with SDL, that's not a simplification. What am I giving it Z coordinates for, then, if I also have to tell it how to render polygons in the proper order?


I wonder if that could even be implemented.

Say you have three layers, A, B, and C, that you want to blend together. A is on top and C is on the bottom, but you draw them in the following order:

CAB.

CA would draw and blend properly of course, but OpenGL would screw up the drawing of B.

I wonder if B even could be properly blended with CA. OpenGL could probably use depth testing to determine that B is "below" CA and therefore blend it as BCA. But would BCA produce the same image as CBA (the desired result)?

I don't think it would, which is probably why OpenGL doesn't even bother with that approach.

I'd have to really look at the math to figure it out, but I'm not up for it right now.

EDIT: nah it wouldn't work.

What if B were totally opaque? CA would blend on top of it, which means C would be visible, even though logically C should not be visible.

So yeah it's just not possible with the way the frame is rendered, which is why OpenGL doesn't make it any easier. A shortcoming of current 3D hardware and design I suppose.
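
You can convince yourself with a one-pixel sanity check using the standard src*alpha + dst*(1-alpha) blend (made-up numbers, just to show that order matters):

#include <cstdio>

//Standard alpha blend: draw src (with srcAlpha) over whatever is already in the buffer.
double blendOver(double src,double srcAlpha,double dst){
	return src*srcAlpha+dst*(1.0-srcAlpha);
}

int main(){
	double C=1.0; //white background layer
	double B=0.0; //opaque black (alpha 1.0)
	double A=1.0; //white at alpha 0.5

	//Correct back-to-front order: C, then B, then A.
	double correct=blendOver(A,0.5,blendOver(B,1.0,C)); //0.5 (gray)

	//Draw order C, A, B: B drawn last even though it's "below" A.
	double wrong=blendOver(B,1.0,blendOver(A,0.5,C)); //0.0 (black)

	printf("%.2f vs %.2f\n",correct,wrong);
	return 0;
}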
For a single polygon, it's probably easiest to just change that polygon's color with glColor.
Which is what I was doing:
#include <SDL/SDL.h>
#include <SDL/SDL_opengl.h>

//Draws 'texture' as a quad with the given corners, with its color modulated by 'alpha'.
void drawTexture(GLuint texture,double corner[4][2],double alpha=1.0){
	glBindTexture(GL_TEXTURE_2D,texture);
	glBegin(GL_QUADS);
	glColor4d(1,1,1,alpha); //This ****ing thing takes forever.
	glTexCoord2d(0,0);
	glVertex3d(corner[0][0],corner[0][1],0);
	glTexCoord2d(1,0);
	glVertex3d(corner[1][0],corner[1][1],0);
	glTexCoord2d(1,1);
	glVertex3d(corner[2][0],corner[2][1],0);
	glTexCoord2d(0,1);
	glVertex3d(corner[3][0],corner[3][1],0);
	glEnd();
	glLoadIdentity();
	glBindTexture(GL_TEXTURE_2D,0);
}

//Convenience overload: builds the four corners from an SDL_Rect.
void drawTexture(GLuint texture,SDL_Rect &rect,double alpha=1.0){
	double corner[4][2]={
		{
			rect.x,
			rect.y
		},{
			rect.x+rect.w,
			rect.y
		},{
			rect.x+rect.w,
			rect.y+rect.h
		},{
			rect.x,
			rect.y+rect.h
		}
	};
	drawTexture(texture,corner,alpha);
}

//...
//Fade the texture in over 256 frames.
for (double a=0;a<1.0;a+=1.0/256.0){
	glClear(GL_COLOR_BUFFER_BIT);
	SDL_Rect rect={0,0,640,480};
	drawTexture(texture,rect,a);
	SDL_GL_SwapBuffers();
}

EDIT: I just timed it. 3.382 s.

Couldn't it just be implemented as a dependency graph? A depends on B which depends on C. The only problem would be whether to allow branching. That is, there's no problem with a node depending on more than one node, but there may be problems if more than one node depends on the same node.
EDIT: I just timed it. 3.382 s.


There's no way it's taking that long.

You must be vsyncing. Rendering 256 frames locked to a ~75 Hz refresh would take about that long (256/75 is roughly 3.4 seconds). Rendering a single quad like you're doing would be lightning fast.

Couldn't it just be implemented as a dependency graph? A depends on B which depends on C.


This would have to be done on a per-pixel basis. What if B is angled on the Z axis so that part of it is behind C and part is in front of A?

A dependency graph per pixel would be completely impractical.
You must be vsyncing. Rendering 256 frames locked to a ~75 Hz refresh would take about that long (256/75 is roughly 3.4 seconds). Rendering a single quad like you're doing would be lightning fast.
Hmm... I'll have to look into that.
EDIT: You were right. I just disabled vsync from the NVIDIA control panel. 209 ms.
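
For what it's worth, I think SDL 1.2.10+ also lets you request that from code (set it before SDL_SetVideoMode; the driver settings can still override it):

SDL_GL_SetAttribute(SDL_GL_SWAP_CONTROL,0); //0 = don't wait for vsync, 1 = wait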

You could make the assumption that all the polygons lie on parallel (or the same) planes.
You could make the assumption that all the polygons lie on parallel (or the same) planes.


The thing is, the framebuffer doesn't even know that there are polygons. Once you complete a polygon, it's basically just a rendered image. The framebuffer just keeps track of pixel data; it doesn't keep track of which pixels belong to which polygon.


I agree it would be nice for the lib to intelligently blend according to the polygon depth. I'm just saying it isn't really practical from the standpoint of a lib like OpenGL.
@Albatross

I'm really not that far into SDL, so I'm going to stop using it and switch to SFML.

@everyone

Thank you for replying; I really appreciate the help. Since everyone thinks SFML is better, I'm going to start learning it.

Can anyone recommend a good tutorial in PDF form, since I don't have internet at home and want to learn on my own time?
Sadly, the best SFML tutorials are on the SFML website. I suppose you could download the individual pages and convert them to PDF, but aside from that... there was actually a thread on their forum asking about this. It's a bit old, but:
http://www.sfml-dev.org/forum/viewtopic.php?t=955&sid=24364311eb51002629bfa9c49a819869

-Albatross