Windowed vsync with OpenGL/GLee

After setting up a simple tilemap and implementing scrolling based on a camera that follows the player, I noticed I was getting pretty bad tearing while scrolling the map. (This is using OpenGL + SDL.)

After looking around, I have found that double buffering with SDL and vsync through GLee are the recommended solutions:

  SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);  // request a double-buffered context
  SDL_GL_SetAttribute(SDL_GL_SWAP_CONTROL, 1);  // ask SDL to sync buffer swaps to the display

  wglSwapIntervalEXT(1);  // enable vsync via the WGL extension (Windows-specific)


This works perfectly in fullscreen but not in windowed mode; however, it does reduce the tearing in windowed mode by about 50%. Are there any other functions I am missing here, or is it just a limitation of my graphics card?
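(As an aside, the wglSwapIntervalEXT call should probably be guarded, since calling it when WGL_EXT_swap_control isn't supported will crash. A minimal sketch, assuming GLee exposes its usual GLEE_-prefixed availability flag for this extension -- check your GLee headers for the exact name:)

  // Guarded vsync setup -- only call the extension function if GLee reports
  // that WGL_EXT_swap_control is actually available on this driver.
  // (GLEE_WGL_EXT_swap_control is an assumption about GLee's flag naming.)
  if (GLEE_WGL_EXT_swap_control)
      wglSwapIntervalEXT(1);  // 1 = sync buffer swaps to the monitor refresh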

I am using an integrated Intel HD graphics chip right now (pretty much the worst you can get these days) and I have tried forcing vsync in the graphics control panel, which has no effect.

Any advice would be appreciated.
VSyncing when windowed is actually a very complex problem.

Here's a 5-page thread on another forum about it. He talks about DirectX at first, but I think the conversation drifts into OpenGL... and I think he was [eventually] successful, but I can't quite remember.

http://forums.nesdev.com/viewtopic.php?f=3&t=9262


It's also a good read if you want to know why it's such a complex problem.

EDIT: removed the highlighting from the link

EDIT again:

This post in particular (from that thread) talks about why it's difficult:
http://forums.nesdev.com/viewtopic.php?p=98808#p98808

ANOTHER EDIT:

Just skimmed over the thread and the solution posted is with DX, not with OpenGL. =(
Well thanks for the resource, it was an interesting topic to read about.

The summary I got from all that was: you do what you can to get vsync working, and after that it all comes down to the hardware and how well Windows can handle it (which isn't always that well).

Otherwise you just deal with it, or use fullscreen, which will normally work correctly either way. I can't really go into a DX setup for this, so I'm not sure there is much else I can do.
I am using an integrated Intel HD graphics chip right now (pretty much the worst you can get these days) and I have tried forcing vsync in the graphics control panel, which has no effect.
That seems like an issue all its own. Does vsync work for other games?

I did a little fudging in SDL. SDL has a timer:
http://wiki.libsdl.org/moin.fcg/SDL_GetTicks

so something like:
Uint32 frame_rate = 1000 / 60;  // ~16 ms per frame -> 60 fps

Uint32 time_start = SDL_GetTicks();
while ( my_app_running )
{
  // do events

  if ( SDL_GetTicks() >= time_start + frame_rate )  // a full frame period has elapsed
  {
      time_start = SDL_GetTicks();
      // do opengl
  }
}



OpenGL might be able to figure out your refresh rate, but I have no idea...
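(OpenGL itself won't tell you, and SDL 1.2 has no call for it either, but if moving to SDL2 is an option, its SDL_GetDesktopDisplayMode does report it. A minimal sketch, assuming SDL2:)

// Sketch (SDL2 only): derive the frame period from the desktop refresh rate
// instead of hard-coding 60.
SDL_DisplayMode mode;
Uint32 frame_rate = 1000 / 60;              // fallback if the query fails

if (SDL_GetDesktopDisplayMode(0, &mode) == 0 && mode.refresh_rate > 0)
    frame_rate = 1000 / mode.refresh_rate;  // e.g. 16 ms on a 60 Hz display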
That seems like an issue all its own. Does vsync work for other games?

Well, I mean it "does" have an effect, but I still get the same slight tearing as before. I don't really use vsync in many other games, so I can't say.

I actually have an SDL timer set up in that way to keep the frame rate at 60. I am not sure if I could use it in any better way.
if(FrameTimer.getTicks() < (1000 / 60))
{
	// sleep away whatever is left of the ~16 ms frame budget
	SDL_Delay((1000 / 60) - FrameTimer.getTicks());
}


When checking my frame rate it stays fairly constant at 60 fps, but my tiles still move in a "choppy" way when scrolling the tilemap, which doesn't happen in fullscreen but does in windowed mode, and it looks like tearing to me. I could be wrong here, but if it were my camera algorithm it wouldn't work in fullscreen either...

I may try making a video, however YouTube runs at 30 fps, which kind of defeats the purpose of trying to show frame lag.
That timer mechanism isn't perfect... as it assumes SDL_Delay will sleep for the exact time you give it (which... depending on the system and what the CPU is doing, it might not). Writing a good framerate regulator is slightly more complex.
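(A minimal sketch of a sturdier regulator, to illustrate: it schedules each frame against an absolute deadline instead of sleeping a fixed amount, so an oversleep on one frame gets absorbed by the next. The names here are placeholders, not from any library:)

const Uint32 frame_ms = 1000 / 60;     // ~16 ms frame budget
Uint32 next_frame = SDL_GetTicks();

while ( app_running )
{
    // ... events, logic, rendering ...

    next_frame += frame_ms;            // absolute deadline for the next frame
    Uint32 now = SDL_GetTicks();
    if (now < next_frame)
        SDL_Delay(next_frame - now);   // sleep only the time actually remaining
    else
        next_frame = now;              // running behind; don't try to catch up
}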

Also "choppy" and "tearing" are two different things. "tearing" is a visible break in the image where you can clearly see the top and bottom portions of the screen having two different images.

Example... if you have a program that just alternates between a blue and green screen every frame... tearing might look like this:

        The screen
+==========================+
|                          |
|                          |
|       Green              |
|                          |
|                          |
|                          |
|--------------------------|   <- scene drawn at this point during rasterization
|       Blue               |
|                          |
+==========================+


Choppiness is more due to an inconsistent framerate.

That said... even if you get a perfect 60 FPS (which you won't ever accomplish with a CPU driven timer/sleep mechanism) you'll still tear because your updates won't necessarily fall in VBlank.



However... VSyncing solves both of these problems by both waiting for VBlank (to ensure the scene is not displayed during rasterization) and by giving you a perfectly smooth framerate (based on the monitor's refresh rate, which is very stable)
I guess it was my mistake thinking this was a tearing issue then.
However... VSyncing solves both of these problems by both waiting for VBlank (to ensure the scene is not displayed during rasterization) and by giving you a perfectly smooth framerate (based on the monitor's refresh rate, which is very stable)


Seeing as I "am" using vsync, I can remove that timer and it will stay at 60 fps, yet I am still seeing some choppiness every few seconds or so when moving my map. That means there must be something strange happening in my code that I am causing?

(More of a question I should be asking myself, but my code is so basic right now there isn't much I could be doing)

EDIT:
For now I am just going to use time based movement and get rid of any of the SDL_Delay calls I had. I still don't understand why using vsync did not give me smooth movement though. Thanks for the help Lowest/Disch, any more info is always welcome.
Are you doing 'time' based movement or 'frame' based movement?

IE... moving your objects X pixels per frame or X pixels per second.

If frame-based (and if you really are Vsyncing) the only thing I could think of that could cause choppiness is missed frames (which is doubtful unless you're sleeping or doing a LOT of cpu work) or crappy hardware.


If time-based... choppiness can be caused by misaligning your timing to your frames. IE, if there are ~17 ms per frame... then you'd have 2 frames in ~34 ms.... but you might be updating as 14+20 rather than 17+17.

Note that if this is what is happening, it might not be your fault... but might be the fault of the timer. Some timers (particularly on older versions of windows) have a really crap res and report times in multiples of 10 ms. SDL might be using one of those timers.
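(One common mitigation, sketched here on the assumption that the jitter comes from the timer's resolution rather than real slowdown: smooth the measured delta over a few frames so one noisy reading can't turn 17+17 into 14+20. The helper names are made up for illustration:)

// Moving average over the last few frame deltas, to paper over a
// low-resolution timer. DT_HISTORY and smooth_dt are hypothetical names.
#define DT_HISTORY 4

static Uint32 history[DT_HISTORY];
static int    slot = 0;

Uint32 smooth_dt(Uint32 raw_dt)
{
    history[slot] = raw_dt;
    slot = (slot + 1) % DT_HISTORY;

    Uint32 sum = 0;
    for (int i = 0; i < DT_HISTORY; ++i)
        sum += history[i];
    return sum / DT_HISTORY;   // average of the last 4 deltas
}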
Time-based movement with unlimited fps seems to be pretty smooth, so I am staying with that. I could always add an SDL_Delay(1) to the end of my game loop to not kill my CPU.

Sooner or later I will look up how to get a constant 60 fps with SDL without any unreliable timers. Marking this as solved, thanks!
This is my workaround:
    uint32_t currentTime = SDL_GetTicks();
    uint32_t previousTime = currentTime;
    uint32_t frameRate = 1000 / 60;   // ms per frame, ~16 for 60 fps

    while(running)
    {
        // events are polled every pass, so input stays responsive
        while(SDL_PollEvent(&event))
        {
            Event(&event);  // Event is a function I wrote
        }

        // only render once a full frame period has elapsed
        currentTime = SDL_GetTicks();
        if (currentTime - previousTime >= frameRate)
        {
            previousTime = currentTime;

            glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

            // do renderings

            SDL_GL_SwapBuffers();
        }
    }
That looks like a good way to handle it, I have seen a few other people do that as well.

One question though: why don't you include your event logic inside the frame limiter? Do you really need the game logic running at >500 fps? 60 fps should be fine.
To start, I'm no expert. I think I came from the same place as you though: why should my program be a resource hog? The solution we have here is sort of a hack; we should probably be using threads.

As a reality check, we have this:
while (true)
which means this program is using 100% of the CPU regardless (unless we use SDL_Delay, which I don't advise even for the stupid programs I have).

I sometimes drop the frame rate to 1 fps, just to see what I'm actually doing. I still want to be able to press 'q' and have my program quit instantly. If the event processing is inside the "waiting area" then I have to wait for that second.
Wall of text time.

IMO, if you're not VSyncing... you should always be sleeping/delaying. Sucking up 100% CPU is completely unnecessary and if the poor soul is running the game on their laptop it will suck all the juice out of their battery.

That aside... let's take a look at another problem here... and examine possible solutions.

One problem which hasn't been mentioned yet is that variable update periods can produce varying/inconsistent in-game behavior. For example... collisions are usually checked only once per update regardless of the time period of that update.

For example... if there's a collision occurring within a 20 ms time period, then if you process that time period as one 20 ms update() you will only register 1 collision, whereas if you are processing every 5 ms you will register 4 collisions (once each update). This can impact collision logic, movement logic, etc. In extreme [albeit unlikely] cases, it can even result in game-breaking behavior (like tunneling) if you're not careful.

In any event this can lead to people who are running on faster machines having a different in-game experience than people who are running on slower machines... even if the slower machines are getting a full framerate with no slowdown.

This also makes it extremely difficult to have strictly deterministic behavior (if, for example you want to be able to record in-game movies by recording keypresses... or do 'Braid' style rewind effects) since the CPU time your program gets becomes an influencing factor in game logic.



The easiest way to solve this problem is to use a fixed-time update. IE... to say that one logic update occurs every X milliseconds. Back in the day.... it was very common for this timeframe to be equal to the length of one frame. So if you were drawing 60 FPS... you would also update logic at 60 FPS.

This presents another problem if your logic rate does not match the user's monitor refresh rate. What if the user's monitor updates at 75 FPS instead of 60? Then VSyncing is not an option or else your logic will run too fast. This is less of a problem now than it was when CRTs were common... but still.




To solve both of these problems I meet somewhere in the middle. Fixed logic rate... but let the display have whatever framerate it needs. In a recent project I had logic updating at 50 updates per second, while graphics (usually) displayed at 60 FPS. 50 UPS makes a nice and round 20 ms per logic update... not a messy 16.666667... and 50 UPS is certainly fast enough for the program to feel responsive to the user.


So the glaring question here is... if you're updating at 50 UPS but drawing at 60 FPS... wouldn't that look crappy? Wouldn't it make the graphics jittery due to the update rate being slower than the display rate?

The answer: only if you don't do it right.


The trick here is to run the logic 1 update ahead of where it should be... and interpolate the graphic representation of animations and positions of objects between the last 2 physical positions. I know that isn't very clear.... so here's an example.


Let's assume you have an object that is moving at 10 pixels per update (one update every 20 ms). This means that at timestamp 0 he will be at position 0... at timestamp 20 (ms) he will be at position 10, etc, etc:
LOGIC:  (50 UPS)
______________
Time      Phys Pos
  0         0
 20        10
 40        20
 60        30
 80        40
100        50
120        60


The naive approach to this would simply draw the object at its current position whenever the rendering code triggers. If we are drawing at 60 FPS... this produces very ugly results:
GRAPHIC:  (60 FPS)
__________________

Time         Pos (bad)
0            0
16.67        0
33.33        10
50           20
66.67        30
83.33        40
100          50
116.67       50


As you can see positions 0 and 50 are drawn twice. This creates very 'jerky' movement that is unpleasant.


So what I'm proposing is that you run the logic 1 update ahead of where it's supposed to be... then interpolate or "blend" the last two positions together based on where in that 20 ms window the drawing actually happens. So what you get is this:

GRAPHIC:  (60 FPS)
__________________

Time         Pos (good)
0            0
16.67        8
33.33        16
50           25
66.67        33
83.33        41
100          50
116.67       58


So at timestamp 16.6667 you would draw the object at position 8... even though it never actually was at position 8. This can be easily calculated:

// assume prev_pos and next_pos are the two logical/physical positions of the
//  object that we are interpolating.
// assume 't' is the timestamp (in ms) at which the graphics are being rendered.

uint32_t temp = t % 20;            // mod by 20 (because there are 20 ms per logic update)
double interpolate = temp / 20.0;  // divide by 20 (with floating point math).  This is now
   // the scale by which to interpolate our 2 values.  ie:
   //  interpolate=  0:   draw at prev_pos
   //  interpolate=  1:   draw at next_pos
   //  interpolate=0.5:   draw halfway between prev_pos and next_pos

double pos = prev_pos + (next_pos - prev_pos) * interpolate;

// 'pos' is now the position at which to draw this object



With this... you get extremely smooth graphics no matter what FPS you're rendering at. Also, your logic only has to run at 50 UPS, which means you can sleep/delay and ease some burden off the CPU.
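(To make the flow concrete, here's a minimal loop sketch of how the pieces might fit together. update() and draw() are hypothetical placeholders, not code from this thread:)

// Fixed logic rate, free render rate. update() advances the simulation one
// step and returns the new position; draw() interpolates between the last
// two positions. Both are made-up names for illustration.
const uint32_t UPDATE_MS = 20;          // 50 logic updates per second

uint32_t next_update = SDL_GetTicks();

while (running)
{
    // run logic one update ahead, catching up if we fell behind
    while (SDL_GetTicks() >= next_update)
    {
        prev_pos = next_pos;            // remember the previous physical position
        next_pos = update();            // compute the next one
        next_update += UPDATE_MS;
    }

    // render as often as we like, blending the last two positions
    uint32_t t = SDL_GetTicks();
    draw(prev_pos, next_pos, (t % UPDATE_MS) / (double)UPDATE_MS);
}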
^ Starts to have issues if your object is accelerating. (right?)

I will go back on my "no SDL_Delay()" rule. I thought it would block events, but apparently it does not ( on my machine/build/code/etc ). Ideally, the computer is doing nothing as much as possible.

^ Starts to have issues if your object is accelerating. (right?)


how so?
If an object accelerates at 10 px per update, its position looks like this:
LOGIC:  (50 UPS)
____________________
Time   Speed   Phys Pos  
  0     0        0
 20    10       10
 40    20       30
 60    30       60
 80    40      100
100    50      150
120    60      210


At time 50 ( to use an easy number ), the position is not 45.
True, but that would be difficult to notice in-game. And if it is a real concern, you can factor acceleration into your interpolation calculations.
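(For instance, a sketch of one way to do that: interpolate with a quadratic that starts at prev_pos with the object's known speed and still lands exactly on next_pos. The names are mine, not from this thread:)

// Quadratic interpolation between two logic positions when the object is
// accelerating. prev_speed is the object's speed at the time of prev_pos;
// u is the fraction of the update window elapsed (0..1).
double interpolate_accel(double prev_pos, double next_pos,
                         double prev_speed, double u)
{
    // pick the curvature so the curve still ends exactly at next_pos
    double curve = next_pos - prev_pos - prev_speed;
    return prev_pos + prev_speed * u + curve * u * u;
}

// Using the table above: between t=40 (pos 30, speed 20) and t=60 (pos 60),
// halfway through (u = 0.5) this gives 30 + 10 + 2.5 = 42.5 instead of the
// linear 45.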