Need an example for OpenGL

How does this contradict
You claim it doesn't help with doing trivial things with the OS. That's literally what the entire library was intended for. Both can handle all input, all sound, networking, window creation, etc. with ease. The only reason there isn't a Direct3D specific API for either of those is because context creation is built into the D3D API; it's not part of the OS distinctly. GLX and WGL are OS specific APIs which need to be abstracted. There's nothing SDL can really do to simplify context creation more for Direct3D.

Not to mention that both Ogre and Irrlicht designed their code around outdated libraries! (OpenGL 1.x and D3D9? WTF! Irrlicht has support for D3D8!)

One of the cool things about both is that they actually abstract most of their API into different sections. They didn't design their code around anything in particular, generally just what a person wants whenever rendering stuff. You can change a render backend out at compile time to pretty much any other backend. That's why they can have D3D, OpenGL, and a software backend at the same time.

The reason why a lot of games are CPU bound is because programmers do stupid stuff like that.
Are you for real? You're going to say Ogre sucks because of potential cache misses? I've also not heard of a cache miss caused by a call using a polymorphic type. While dereferencing a pointer may cause a cache miss, the likelihood of any of this having a major impact on the engine by causing several frames to be lost is ridiculously low. Most applications and libraries do not pay attention to cache alignment or similar because it doesn't matter at the macro level; that's micro-optimization.

Keep in mind I wrote my comments on Ogre in a few minutes after looking at their code. I didn't spend several years thinking about it.
All the more reason you should not be judging so harshly. It is not common sense, nor is it sense in general.

The driver state for modern contexts is pitiful to say the least.
Seriously, you talk about vague statements then say crap like this. Please provide a reference. You also act like D3D drivers don't have issues. Even with the additional support, it's not uncommon for a video game to experience issues with a specific driver and fail due to a driver bug: http://forums.steampowered.com/forums/showthread.php?t=2903238
http://us.battle.net/wow/en/forum/topic/8569150180

OpenGL ES is supported on [some] consoles, but no developers will ever use it because the speed is ridiculous. The only people who use OpenGL ES on consoles are homebrew developers. Pros don't use it.

You're incorrect but only slightly. OpenGL ES isn't generally directly supported by consoles. It's generally a modified version of OpenGL that's built specifically for the machine it's run on. However, it's a far cry from OpenGL ES, and easier to port from than a D3D renderer.

Wii/U and DS use a modified stack of OpenGL. Ouya uses OpenGL ES straight. PS3 has multiple options, but nothing really supports your idea of it being "slow". Xbone obviously does not. I don't really like this argument anyways, but your counter-argument is ridiculous and plain wrong.

There may be an implementation, but there is no support.
Please provide a reference.

Most of the porting issues with OpenGL come from filesystem differences and shaders. I know of at least one commercial project that just so happened to support Linux because they chose to use SDL and it supports the platform out of the box without modification. It's not too hard if your entire game is based on GL and GLSL.

Gotta go, more later...
ezchgg wrote:
assilius wrote:
Oi!

I know Minecraft eats up resources, but I wouldn't call it a small program, especially not one you just run in the background all the time.
Minecraft is a relatively small program.

Also a very bad example because of how successful Minecraft is; it's worth billions, y'know. Unless you are saying that making a less efficient program will make me a billionaire?
I don't see how being worth a lot equates to having well-written code.

Optimization is a huge huge huge part of why games look good.
No it isn't. Wtf? Do you know what optimization is?

Gameplay does not require as much optimization.
What?

It is completely relevant: you can make a poorly programmed piece of software, and that doesn't matter so long as it is usable.
This is exactly why things like Adobe Reader take years to load.

Your argument as to why not to use Ogre and similar APIs is that they aren't as efficient as they could be. OK, so they are not as efficient as they could be.
Efficiency makes a program more attractive. It's like asking why a car from the 30s is less preferred (usability wise) than a car from 2014.

What difference does that make? Efficiency does not guarantee anything.
...

I didn't misread what you said, your words did not correlate to the meaning you intended them to hold.
I think they did quite well. The paragraph I wrote explicitly mentioned SFML and OpenGL.

Your statement doesn't contradict his. You know the whole glass-half-full thing; you were looking at his statement from a perspective that benefits your own views.
?

God I hate marketing for consoles.
Okay?

You're getting unnecessarily hostile. Please refrain from responding until you cool down.

NoXzema wrote:
You claim it doesn't help with doing trivial things with the OS. That's literally what the entire library was intended for. Both can handle all input, all sound, networking, window creation, etc. with ease.
Reread what I said. I said non-trivial.

The only reason there isn't a Direct3D specific API for either of those is because context creation is built into the D3D API
It's not. It's built into the DXGI API. They're completely different things.

One of the cool things about both is that they actually abstract most of their API into different sections. They didn't design their code around anything in particular, generally just what a person wants whenever rendering stuff. You can change a render backend out at compile time to pretty much any other backend. That's why they can have D3D, OpenGL, and a software backend at the same time.
That's not what I meant. Because they were designed with older APIs in mind, they effectively hindered the performance of newer APIs. For example, constant buffers are not in D3D9 (nor are uniform buffers in OpenGL 1.x-2.x), so they're forced to send data to the shader every frame.
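As a rough sketch of the difference (hypothetical code, not Ogre's or Irrlicht's actual source): the old path pushes every uniform through glUniform* each frame, while a GL 3.1+ uniform buffer is bound once and only rewritten when the data changes.

// Sketch only: assumes a current GL context, GLEW (or similar) for loading,
// and that `prog` is a linked program. The shader side would declare
//   layout(std140) uniform Camera { mat4 u_view; mat4 u_proj; };
// and the block would be bound to binding point 0 with glUniformBlockBinding.
#include <GL/glew.h>

// GL 2.x style: re-upload everything, every frame, for every program.
void upload_per_frame(GLuint prog, const float view[16], const float proj[16])
{
    glUseProgram(prog);
    glUniformMatrix4fv(glGetUniformLocation(prog, "u_view"), 1, GL_FALSE, view);
    glUniformMatrix4fv(glGetUniformLocation(prog, "u_proj"), 1, GL_FALSE, proj);
}

// GL 3.1+ style: one buffer shared by every program that uses the block.
GLuint create_camera_ubo()
{
    GLuint ubo = 0;
    glGenBuffers(1, &ubo);
    glBindBuffer(GL_UNIFORM_BUFFER, ubo);
    glBufferData(GL_UNIFORM_BUFFER, 32 * sizeof(float), nullptr, GL_DYNAMIC_DRAW);
    glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo); // binding point 0
    return ubo;
}

// Only called when the camera actually changes.
void update_camera_ubo(GLuint ubo, const float view[16], const float proj[16])
{
    glBindBuffer(GL_UNIFORM_BUFFER, ubo);
    glBufferSubData(GL_UNIFORM_BUFFER, 0, 16 * sizeof(float), view);
    glBufferSubData(GL_UNIFORM_BUFFER, 16 * sizeof(float), 16 * sizeof(float), proj);
}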

Are you for real? You're going to say Ogre sucks because of potential cache misses?
It's not a potential cache miss, it's usually always a cache miss.

I've also not heard of a cache miss caused by a call using a polymorphic type.
You dereference the structure, go to the vtable that's in the structure, fetch the function pointer to the function, and call it. The function is most likely not in the instruction cache.

It's a very real problem and a lot of professionals discuss this.
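Roughly, the difference looks like this (hypothetical types, nothing from Ogre's actual source): the polymorphic version chases a pointer to the object, then the vtable, then makes an indirect call per element, while a data-oriented layout walks one contiguous array with a direct, inlinable call.

#include <memory>
#include <vector>

// Polymorphic layout: objects are scattered on the heap, and every update is
// pointer -> object -> vtable -> function. Iteration tends to miss cache.
struct Renderable {
    virtual ~Renderable() = default;
    virtual void update(float dt) = 0;
};

void update_all(const std::vector<std::unique_ptr<Renderable>>& objects, float dt)
{
    for (const auto& obj : objects)
        obj->update(dt); // indirect call through the vtable
}

// Data-oriented layout: the same data packed contiguously, processed in one
// predictable sweep that the prefetcher and the inliner can both handle.
struct Transform { float x, y, z, vx, vy, vz; };

void update_all(std::vector<Transform>& transforms, float dt)
{
    for (auto& t : transforms) {
        t.x += t.vx * dt;
        t.y += t.vy * dt;
        t.z += t.vz * dt;
    }
}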

While dereferencing a pointer may cause a cache miss, the likelihood of any of this having a major impact on the engine by causing several frames to be lost is ridiculously low.
People with this mindset are exactly the reason why people write code like this. They don't believe that their code is a serious issue.

http://www.yosoygames.com.ar/wp/2013/07/ogre-2-0-is-up-to-3x-faster/

And they're not even completely utilizing DoD.

Most applications and libraries do not pay attention to cache alignment or similar because it doesn't matter at the macro level; that's micro-optimization.
And this is why people generally don't use them in the game development industry.

All the more reason you should not be judging so harshly. It is not common sense, nor is it sense in general.
I judged this the same way I'd judge my code. It is common sense and plenty of professionals have talked about this.

Seriously, you talk about vague statements then say crap like this. Please provide a reference. You also act like D3D drivers don't have issues. Even with the additional support, it's not uncommon for a video game to experience issues with a specific driver and fail due to a driver bug:
I never said they didn't have issues. Their state is much better than OpenGL drivers, however.

You're incorrect but only slightly. OpenGL ES isn't generally directly supported by consoles.
There is a compliant version on both the PS3 and PS4.

It's generally a modified version of OpenGL that's built specifically for the machine it's run on.
The APIs consoles use are nothing like OpenGL in the slightest.

easier to port from than a D3D renderer.
It's actually quite a bit easier to port from D3D to console APIs.

Wii/U and DS use a modified stack of OpenGL
No, they don't. They use custom, proprietary APIs. Again, nothing like OpenGL.

your counter-argument is ridiculous and plain wrong.
I don't know where you're getting your information from.

Most of the porting issues with OpenGL comes from filesystem differences and shaders. I know of at least one commercial project that just so happened to support Linux because they chose to use SDL and it supports the platform out of the box without modification. It's not too hard if your entire game is based on GL and GLSL.
It's from driver vendors who don't bother to create decent drivers for whatever reason. GLSL is hit or miss with portability; you have to spend quite a bit of time making sure every platform will run it.
I don't see how being worth a lot equates to having well-written code.

Haza, I got through to you. There is no correlation. It doesn't matter if your code is well-written or not; you can still make a product that people will use. Oftentimes the code is not well written simply because the person who made the product was the only one willing to make it, it being something people needed/wanted.

This is exactly why things like Adobe Reader take years to load.

Adobe Reader isn't a game. I've never messed around with Adobe Reader, but I'm sure the problem isn't that they have too many cache misses. Rather, the algorithm they used was not suited to the task of quickly loading large files.

As an example, take text editors that read the whole file into memory before allowing you to edit it. If you simply read the part of the text you are currently viewing, loading large gigabyte files becomes instantaneous. At that point you can have thousands of cache misses and it won't make a difference, as loading a couple of bytes' worth of text is going to outperform loading gigabytes into memory in every way possible. Of course, now that you've fixed one problem, you are going to face other problems that you wouldn't have if you loaded the entire file into memory; trade-offs of implementation.
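A minimal sketch of that idea (hypothetical function, error handling omitted): seek to the region being viewed and read only that window of bytes instead of slurping the whole file.

#include <cstddef>
#include <fstream>
#include <string>

// Read only `count` bytes starting at `offset`; the editor re-reads windows
// like this as the user scrolls instead of loading the entire file up front.
std::string read_window(const std::string& path, std::streamoff offset, std::size_t count)
{
    std::ifstream file(path, std::ios::binary);
    std::string buffer(count, '\0');
    file.seekg(offset);
    file.read(&buffer[0], static_cast<std::streamsize>(count));
    buffer.resize(static_cast<std::size_t>(file.gcount())); // trim near EOF
    return buffer;
}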

Efficiency makes a program more attractive. It's like asking why a car from the 30s is less preferred (usability wise) than a car from 2014.

An electric car is more efficient than a gas one. Why isn't it everywhere? Because its range isn't as good as a gas car's. It isn't as reliable if your "fuel" is almost empty and there is an emergency which requires you to travel to the next city; with a gas-powered car you can simply go refuel the tank. With an electric car you have to wait 10-12 hours for it to charge, and that full charge might not even get you to the next city. Of course this is changing as batteries start holding more charge.

Anyways, I love when people bring in this unrealistic circumstance that has no relevance to the topic at hand. Again, you are just picking a metaphor that promotes your views. Have you thought of becoming a lawyer? This kind of metaphor was used in the Google vs. Oracle case promoting copyrights for APIs. I think they used novels having copyrights, though.

No it isn't. Wtf? Do you know what optimization is?

Yes it is: there's only X amount of polygons that can be viewed, with Y amount of computations. It's all about optimizing out what you can't see, or reducing details that wouldn't make a difference in real time with moving objects. Of course, if you are talking about art style then I guess you would be right. A game like Minecraft, though it has a relatively amazing retro-style appeal, is still a hog even though it doesn't look realistic.

...

Have you been enlightened too far that you can't even present a rebuttal?

?

Yeah, it must be difficult to imagine there being a perspective that is not your own.

Avilius wrote:
I think they did quite well. The paragraph I wrote explicitly mentioned SFML and OpenGL.
SFML doesn't allow you to get the helper classes that you'd get with OpenGL

You said, OpenGL gives you helper classes that you can't get with SFML, essentially. What helper classes were you referring to that OpenGL has that SFML doesn't? I think it's pretty clear your words do not hold the meaning you intended for them.

Okay?

You're getting unnecessarily hostile. Please refrain from responding until you cool down.

One of the downsides of not being able to feel anything: I am always cool :).

Evilus wrote:
Reread what I said. I said non-trivial.

Example, example, example! What did you do that was non-trivial that you couldn't do with SFML? I want to know.

bio wrote:
I plan to when I feel they're in a polished enough state (Sometimes I'll just randomly drop a project)

Good enough is perfect. You'll never upload any of your repos if you wait for them to be in pristine condition. I'll give you a hand: you'll be ash before any one of them is perfect.
ezchgg wrote:
Haza, I got through to you. There is no correlation. It doesn't matter if your code is well-written or not; you can still make a product that people will use. Oftentimes the code is not well written simply because the person who made the product was the only one willing to make it, it being something people needed/wanted.
I honestly don't even know what you're talking about anymore.

Adobe Reader isn't a game. I've never messed around with Adobe Reader, but I'm sure the problem isn't that they have too many cache misses.
I never said it was a game. Games aren't the only things that have cache misses you know. You're missing the point of what I'm saying.

As an example, take text editors that read the whole file into memory before allowing you to edit it. If you simply read the part of the text you are currently viewing, loading large gigabyte files becomes instantaneous. At that point you can have thousands of cache misses and it won't make a difference, as loading a couple of bytes' worth of text is going to outperform loading gigabytes into memory in every way possible. Of course, now that you've fixed one problem, you are going to face other problems that you wouldn't have if you loaded the entire file into memory; trade-offs of implementation.
Again, you missed the point.

An electric car is more efficient than a gas one. Why isn't it everywhere? Because its range isn't as good as a gas car's. It isn't as reliable if your "fuel" is almost empty and there is an emergency which requires you to travel to the next city; with a gas-powered car you can simply go refuel the tank. With an electric car you have to wait 10-12 hours for it to charge, and that full charge might not even get you to the next city. Of course this is changing as batteries start holding more charge.
This doesn't even relate to what I was saying at all. Once again, it went right over your head. Can you please explain this?

Anyways, I love when people bring in this unrealistic circumstance that has no relevance to the topic at hand. Again, you are just picking a metaphor that promotes your views
That's pretty much what support is supposed to do? And my analogy was very much relevant... yours... not so much.

Have you thought of becoming a lawyer? This kind of metaphor was used in the Google vs. Oracle case promoting copyrights for APIs. I think they used novels having copyrights, though.
How is this relevant to anything I was saying?

Yes it is: there's only X amount of polygons that can be viewed, with Y amount of computations. It's all about optimizing out what you can't see, or reducing details that wouldn't make a difference in real time with moving objects. Of course, if you are talking about art style then I guess you would be right. A game like Minecraft, though it has a relatively amazing retro-style appeal, is still a hog even though it doesn't look realistic.
So you don't even know what optimization is. Nice.

Have you been enlightened too far that you can't even present a rebuttal?
You're not making any sense and I can't understand you.

Yeah, it must be difficult to imagine there being a perspective that is not your own.
The irony.

You said, OpenGL gives you helper classes that you can't get with SFML, essentially. What helper classes were you referring to that OpenGL has that SFML doesn't? I think it's pretty clear your words do not hold the meaning you intended for them.
I guess you like to ignore the context my statement was in.

Example, example, example! What did you do that was non-trivial that you couldn't do with SFML? I want to know.
Try getting 2 contexts running side by side on the same Window.

Good enough is perfect. You'll never upload any of your repos if you wait for them to be in pristine condition. I'll give you a hand: you'll be ash before any one of them is perfect.
Ooh, I love direct attacks. Some projects you just don't find interesting or fun to work on. Other projects you'd much rather keep closed source.

I find this statement highly ironic as well to say the least. I haven't seen any of your projects.
Reread what I said. I said non-trivial.
Give examples of such non-trivial tasks then.

It's not. It's built into the DXGI API. They're completely different things.
Which does not have to be used directly and D3D abstracts it away...

There is a compliant version on both the PS3 and PS4.
Actually, I'm sure it's modified as well.

The APIs consoles use are nothing like OpenGL in the slightest
No, they don't. They use custom, proprietary APIs. Again, nothing like OpenGL.

Here's an example: http://devkitpro.org/wiki/libogc/GX
GX2 is based off of GX. Again, it shares similar concepts and looks quite a bit like GL... except modified to be more specific to the machine.

I judged this the same way I'd judge my code. It is common sense and plenty of professionals have talked about this.
People with this mindset are exactly the reason why people write code like this. They don't believe that their code is a serious issue.

http://www.yosoygames.com.ar/wp/2013/07/ogre-2-0-is-up-to-3x-faster/

This isn't even a realistic benchmark. Even if it were, these advancements are probably not from avoiding cache misses. Again, not a reason to say OGRE is bad.

Try getting 2 contexts running side by side on the same Window.

There are actually ways to do this. One is to simply create separate windows for the contexts. The user won't know the difference and it's easy to develop. If you need it in the same window, you can simply reparent the two new windows into another window. Toolkits can help with this, and both SDL and SFML (and what the hell, GLUT and GLFW too) are smart enough to handle all events caused by toolkits. Alternatively, you can also use a toolkit to make windows for sizing and organization purposes, then create contexts using those windows. If you use SDL or SFML, you simply use the OS-specific mechanism to reparent the window to the windows provided by the toolkit.

I don't know where you're getting your information from.

Still waiting on the benchmark showing that GL performance is slow on the PS series.

It's from driver vendors who don't bother to create decent drivers for whatever reason. GLSL is hit or miss with portability; you have to spend quite a bit of time making sure every platform will run it.
If you replace platform with video card, I'll give you the point. Since the compiler must be built into the OpenGL implementation, compilers are going to vary per driver vendor. This also heavily complicates shader caching, since the vendor may change between startups. One of my big wishes for OpenGL Next is that this is fixed with a solution similar to that of Cg or HLSL. Hell, if Cg supported later versions of GLSL, it would be wildly awesome in my book and I'd be all over it.
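For what it's worth, ARB_get_program_binary (core since GL 4.1) does let you cache the linked program, but the blob is driver- and version-specific, so you still need a fallback that recompiles the GLSL. A rough sketch, assuming a current context and that `prog` is already linked:

#include <GL/glew.h>
#include <vector>

// Grab the driver-specific blob so the next startup can try to skip compilation.
std::vector<char> save_program_binary(GLuint prog, GLenum& format)
{
    GLint length = 0;
    glGetProgramiv(prog, GL_PROGRAM_BINARY_LENGTH, &length);
    std::vector<char> blob(length);
    glGetProgramBinary(prog, length, nullptr, &format, blob.data());
    return blob;
}

// Returns false (e.g. after a driver update or vendor change) so the caller
// knows to fall back to compiling from source again.
bool load_program_binary(GLuint prog, GLenum format, const std::vector<char>& blob)
{
    glProgramBinary(prog, format, blob.data(), static_cast<GLsizei>(blob.size()));
    GLint ok = GL_FALSE;
    glGetProgramiv(prog, GL_LINK_STATUS, &ok);
    return ok == GL_TRUE;
}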

For what it's worth, I'm still not convinced OGRE should care much for cache missing since they aren't wildly struggling for performance gains over simple usability. Even if they do explicitly dereference pointers all over the place, the compiler will often fix that up for you anyways. If you benchmark something and you think the numbers are quite a bit off compared to something else, then you can question it. But there is no such benchmark, and the claim that the engine is bad because they do not explicitly avoid cache misses is indeed ridiculous.
NoXzema wrote:
Give examples of such non-trivial tasks then.
I already did. If you want another:

How can I get raw input from the OS with these libraries? How can I get two mouses?

Which does not have to be used directly and D3D abstracts it away...
It doesn't...

NoXzema wrote:
Here's an example: http://devkitpro.org/wiki/libogc/GX
GX2 is based off of GX. Again, it shares similar concepts and looks quite a bit like GL... except modified to be more specific to the machine.
devkitPro wrote:
Welcome to the home of devkitPro, provider of homebrew toolchains for wii, gamecube, ds, gba, gp32 and psp.

This is not the API that professionals use. This is some API conjured for homebrew developers. I believe this and the N64 are the only two consoles that ever used an OpenGL related API.

EDIT: Further reading I quickly found out that this was a Gamecube API. This is completely irrelevant for modern consoles, which I assumed we were talking about.

NoXzema wrote:
One is to simply create separate windows for the contexts. The user won't know the difference and it's easy to develop.
The user will know the difference between one window and two. And you're not solving the problem.

If you need it in the same window, you can simply reparent the two new windows into another window.
How?

Alternatively, you can also use a toolkit to make windows for sizing and organization purposes, then create contexts using those windows. If you use SDL or SFML, you simply use the OS-specific mechanism to reparent the window to the windows provided by the toolkit.
You must like 10 GB executables then.

I'd probably say that OpenGL Next should support HLSL. Pretty much every other modern API does it, so there's no reason to try to be the odd-ball. Cg is going to be torn apart if it's continued.

For what it's worth, I'm still not convinced OGRE should care much for cache missing since they aren't wildly struggling for performance gains over simple usability.
Fixing cache misses =/= worse usability.

Even if they do explicitly dereference pointers all over the place, the compiler will often fix that up for you anyways.
Modern compilers cannot fix that. They can't fix everything.

Still waiting on the benchmark showing that GL performance is slow on the PS series.
A benchmark would immediately go against any contracts made with Sony. I don't think anyone wants to be punished for that.

With OpenGL you're essentially going through unnecessary driver overhead (which is a very real thing if you didn't know) to do quite limited tasks compared to LibGCM.

Ask any professional that has worked with it and they will tell you exactly what I mean.
The user will know the difference between one window and two. And you're not solving the problem.
Uh... you realize how many windows are in a normal GUI application...? This is a tested solution of my own. Please, at least do it yourself before spouting more nonsense...

This is not the API that professionals use. This is some API conjured for homebrew developers. I believe this and the N64 are the only two consoles that ever used an OpenGL related API.

EDIT: Further reading I quickly found out that this was a Gamecube API. This is completely irrelevant for modern consoles, which I assumed we were talking about.

This is what is used for GameCube and Wii. Aside from that, something similar to GX, called GX2, is used. You can't know for sure; nothing is known about the internal API since it's protected by NDA. However, we can tell from "an anonymous source", as reported by vgleaks, that it does use GX2: http://www.vgleaks.com/world-premiere-wii-u-specs

I'd probably say that OpenGL Next should support HLSL. Pretty much every other modern API does it, so there's no reason to try to be the odd-ball. Cg is going to be torn apart if it's continued.
You have four modern APIs; two use it. This is a poor argument at best, although I probably wouldn't mind it much if it didn't have legal issues.

With OpenGL you're essentially going through unnecessary driver overhead (which is a very real thing if you didn't know) to do quite limited tasks compared to LibGCM.

Ask any professional that has worked with it and they will tell you exactly what I mean.

What professionals have you talked to exactly?


If you need it in the same window, you can simply reparent the two new windows into another window.
Parenting a window is usually a one-line call from the OS, such as SetParent for WinAPI: http://msdn.microsoft.com/en-us/library/windows/desktop/ms633541%28v=vs.85%29.aspx
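A rough Windows-only sketch of what I mean, assuming SDL2 and a parent HWND you already got from your toolkit (error handling mostly omitted):

#include <SDL.h>
#include <SDL_syswm.h>
#include <windows.h>

// Reparent an SDL-created window (and its GL context) into an existing frame
// window, e.g. one created by a GUI toolkit. `frame` is assumed to be valid.
bool embed_sdl_window(SDL_Window* sdl_window, HWND frame)
{
    SDL_SysWMinfo info;
    SDL_VERSION(&info.version);
    if (!SDL_GetWindowWMInfo(sdl_window, &info))
        return false;

    HWND child = info.info.win.window;
    SetParent(child, frame);
    // Drop the decorations and make it a proper child so it sizes with the frame.
    SetWindowLong(child, GWL_STYLE, WS_CHILD | WS_VISIBLE);
    SetWindowPos(child, nullptr, 0, 0, 640, 480, SWP_NOZORDER | SWP_FRAMECHANGED);
    return true;
}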

Modern compilers cannot fix that. They can't fix everything.
Yes, they can. Open up a debugger sometime.

How can I get raw input from the OS with these libraries? How can I get two mouses?

SDL uses raw input, if available, by default. Not sure about SFML. However, there is an explicit lack of multi-mouse support for both APIs. I don't see why I couldn't add something for it though... The only occasion I can see actually using such an API is to choose between two mouses... perhaps on a laptop. I'm not sure how that's currently dealt with to be honest... SDL might just take input from all mouses. I'll look into it.

You must like 10 GB executables then.
This was kinda a dumb thing to say. You can create and size the windows yourself as you wish but using a toolkit would be easier. It would be just as complicated in D3D.

It doesn't...

http://msdn.microsoft.com/en-us/library/windows/desktop/ff476880%28v=vs.85%29.aspx#Contexts
http://msdn.microsoft.com/en-us/library/windows/desktop/bb205075%28v=vs.85%29.aspx
An application can access DXGI directly, or call the Direct3D APIs in D3D11_1.h, D3D11.h, D3D10_1.h, or D3D10.h, which handles the communications with DXGI for you.

For what it's worth, I'm aware that the first parameter requires the use of DXGI. However, this still doesn't really make a difference, since a conceptual port of D3D wouldn't require the explicit use or implementation of DXGI, nor does it make it any less a part of D3D context creation. Whereas OpenGL has no standard for context creation, D3D does. One would simply be able to make a DXGI factory and ignore the rest of the functionality associated with DXGI.
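To illustrate the point (a minimal sketch, not production code): D3D11CreateDeviceAndSwapChain lives in d3d11.h and builds the swap chain, and the DXGI factory behind it, for you, so the application never has to touch DXGI directly.

#include <windows.h>
#include <d3d11.h>
#pragma comment(lib, "d3d11.lib")

// Creates a device, immediate context, and swap chain for an existing HWND.
// DXGI is used internally; the application never creates a factory itself.
bool create_d3d11(HWND hwnd, ID3D11Device** device, ID3D11DeviceContext** context,
                  IDXGISwapChain** swap_chain)
{
    DXGI_SWAP_CHAIN_DESC desc = {};
    desc.BufferCount = 1;
    desc.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    desc.OutputWindow = hwnd;
    desc.SampleDesc.Count = 1;
    desc.Windowed = TRUE;

    HRESULT hr = D3D11CreateDeviceAndSwapChain(
        nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
        nullptr, 0, D3D11_SDK_VERSION,
        &desc, swap_chain, device, nullptr, context);
    return SUCCEEDED(hr);
}
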
NoXzema wrote:
Uh... you realize how many windows are in a normal GUI application...? This is a tested solution of my own. Please, at least do it yourself before spouting more nonsense...
I don't even know what you're talking about anymore.

You have four modern APIs; two use it. This is a poor argument at best, although I probably wouldn't mind it much if it didn't have legal issues.
There is no legal issue. You cannot sue someone for implementing a backend for an interface, nor can you sue someone for writing a compiler. It is not covered by law.

Mantle uses HLSL.

This is what is used for GameCube and Wii. Aside from that, something similar to GX, called GX2, is used. You can't know for sure; nothing is known about the internal API since it's protected by NDA. However, we can tell from "an anonymous source", as reported by vgleaks, that it does use GX2: http://www.vgleaks.com/world-premiere-wii-u-specs
With both the Wii/U and the PS3, you generally directly construct command buffers for the GPU and then ship them off to it.

OpenGL hasn't been used on any [modern console] games that have been on store shelves.

http://stackoverflow.com/questions/16114601/some-opengl-function-calls-is-not-available-in-developing-ps3-game

datenwolf wrote:
Most games developed for the PS3 don't use OpenGL at all, but are programmed "on the metal", i.e. make direct use of the GPU without an intermediate, abstract API. Yes, there is an OpenGL-esque API for the PS3, but this is actually based on OpenGL-ES.


http://scalibq.wordpress.com/2010/04/01/why-you-should-use-opengl-and-not-directx-%E2%80%93-an-analysis/

Scali wrote:
Lastly, the PS3 and Wii only support OpenGL in a ‘homebrew’ way. For best performance and quality, they have their own proprietary libraries. No commercial game on PS3 or Wii uses OpenGL, they all use the Sony/Nintendo-specific APIs or go down to the bare metal. The OpenGL implementations just aren’t fast enough. The custom APIs suit the hardware much better.


http://forum.train2game.com/showthread.php/2067-Why-DirectX-instead-of-OpenGL

And plenty more that I could find with a bit more time. Does this satisfy you?

NoXzema wrote:
Parenting a window is usually a one-line call from the OS, such as SetParent for WinAPI: http://msdn.microsoft.com/en-us/library/windows/desktop/ms633541%28v=vs.85%29.aspx
That isn't SFML.

This was kinda a dumb thing to say. You can create and size the windows yourself as you wish but using a toolkit would be easier. It would be just as complicated in D3D.
This had nothing to do with D3D. I said that SFML and SDL aren't useful when you want to do non-trivial tasks with the OS, and you challenged what I said.

http://msdn.microsoft.com/en-us/library/windows/desktop/ff476880%28v=vs.85%29.aspx#Contexts
http://msdn.microsoft.com/en-us/library/windows/desktop/bb205075%28v=vs.85%29.aspx
I'll give you a point.
There is no legal issue. You cannot sue someone for implementing a backend for an interface, nor can you sue someone for writing a compiler. It is not covered by law.

Mantle uses HLSL.

Except HLSL is a language specification, and you can patent such things so that people must pay royalties per use and/or implementation. There are also copyright issues with using the language specification. There is no official documentation on this subject, so much so that one cannot even determine if it's proprietary.

That isn't SFML.
It's also not non-trivial, and it's beyond the scope of both SFML and SDL. Creating a cross-platform method that reparents a platform-specific window handle is probably less than 30 lines (I'm not sure how re-parenting works on Mac).

This had nothing to do with D3D. I said that SFML and SDL aren't useful when you want to do non-trivial tasks with the OS, and you challenged what I said.

Except they handle non-trivial tasks. They just don't handle every single use case that 0.1% of people might use. For a common game, even for AAA titles, SDL works fine... yet you claim it's not good enough, which is absurd. I'm not saying it couldn't use improvements, but to discredit what it already does is not even an argument; it's ignorance.
I'm not going to sit here and make a list of non-trivial tasks that most games implement using SDL (and some that most games do not use but are still available) just to prove to you that it's a useful library. There are quite a few games that would tell you otherwise. You cannot expect me to think that SDL is useless because you list one non-trivial task that is nowhere near a common case that it doesn't handle.

I don't even know what you're talking about anymore.

Nice, ignoring the rest of the post to talk down to me. You claimed that the user would know the difference between two new windows. They will not, given the nature of how windows generally work. I suggested that you try a similar solution yourself before blatantly disregarding the solution. Also, it is a solution; it solves exactly your complaint of not being able to put two contexts in the same window, which, might I add, is another very uncommon case that you expect SDL and SFML to magically handle somehow.
NoXzema wrote:
Except HLSL is a language specification, and you can patent such things so that people must pay royalties per use and/or implementation. There are also copyright issues with using the language specification. There is no official documentation on this subject, so much so that one cannot even determine if it's proprietary.
Generally you're not compiling the source code itself, but interpreting the bytecode instead (MS already provides a compiler). AFAIK you cannot be sued for interpreting a file.

that most games implement using SDL (and some that most games do not use but are still available) just to prove to you that it's a useful library. There are quite a few games that would tell you otherwise. You cannot expect me to think that SDL is useless because you list one non-trivial task that is nowhere near a common case that it doesn't handle.
I never said it wasn't useful. It is useful. Actually, I'm quite fond of SDL.

You claimed that the user would know the difference between two new windows. They will not, given the nature of how windows generally work.
This is the part I'm not understanding. You're saying the user cannot notice the difference between having two windows and having a single one?

I admit this is an uncommon case. Can you get a window menu (not sure if that's the proper term) on the top of your window without jumping through hoops?
This is the part I'm not understanding. You're saying the user cannot notice the difference between having two windows and having a single one?

Correct.

I admit this is an uncommon case. Can you get a window menu (not sure if that's the proper term) on the top of your window without jumping through hoops?

Yes but not directly through SDL or SFML. You would need to use similar strategies that I've already described. Using SDL or SFML to create the window/context and then reparent the window to the frame window and size it however you want. This is how I actually integrate with Qt instead of using Qt-specific mechanisms to separate the GL context creation code from the GUI code.
EDIT: Actually, I don't use SDL since I needed to use xcb and we don't use any other features SDL provides. However, I could have: https://github.com/computerquip/obs-studio-alt-backends/blob/master/gl-sdl.c

I never said it wasn't useful. It is useful. Actually, I'm quite fond of SDL.

You implied that SDL isn't useful for non-trivial tasks... despite it handling quite a bit of the complicated OS-specific sections of coding, and you're using tasks that it doesn't even try to handle to degrade its quality and features.
NoXzema wrote:
Correct.
That's like saying someone doesn't know the difference between one apple and two apples...

Yes but not directly through SDL or SFML. You would need to use similar strategies that I've already described. Using SDL or SFML to create the window/context and then reparent the window to the frame window and size it however you want. This is how I actually integrate with Qt instead of using Qt-specific mechanisms to separate the GL context creation code from the GUI code.
But then you'd be adding another dependency to your program. That's not desirable, at least not to me.

You implied that SDL isn't useful for non-trivial tasks... despite it handling quite a bit of the complicated OS-specific sections of coding, and you're using tasks that it doesn't even try to handle to degrade its quality and features.
Fair enough. I was trying to say that although SDL is useful, it is not suitable for some tasks that you face.

Ultimately I was trying to say that porting isn't straightforward. It was to support my statement that OpenGL doesn't just magically make your code cross-platform like a lot of others imply. Using it doesn't really make your porting process much easier (although it doesn't hurt it, either).
That's like saying someone doesn't know the difference between one apple and two apples...
No... a window isn't always the big square thing on your screen with border decorations and panel.

But then you'd be adding another dependency to your program. That's not desirable, at least not to me.
Well, that's also one of the reasons why the SDL backend was rejected. Maintaining dependencies for Mac and Windows tends to be a pain... however, there was agreement to use it on Linux, but I had already created the GLX backend at that point and didn't see enough purpose in using SDL since it was less than 300 lines of code.

Ultimately I was trying to say that porting isn't straightforward. It was to support my statement that OpenGL doesn't just magically make your code cross-platform like a lot of others imply. Using it doesn't really make your porting process much easier (although it doesn't hurt it, either).


But it does if you have a competent design. It's a lot easier to benchmark a working application to figure out which driver is slower than another than it is to port to an entirely different API. Tools like apitrace can provide profiling measurements of each shader program and each call to OpenGL. A debugging context will cause the driver to tell you where your code is going to cause problems.

Try running a slow OpenGL program with apitrace. The cool thing about apitrace is that even if a program isn't programmed to run with a debugging context, apitrace can force it and then retrieve the messages that the driver spits out. When you do this, I can almost guarantee you that there's a whole slew of errors, often not even driver specific like bad uniform values or invalid calls entirely. You can also look, line by line, how fast each and every single call was (shader programs excluded, however, it will tell you how long a program is taking per frame including what part of the frame it's executing in) and determine what's taking the most time so you can figure out how to either work around it or fix it. Stuff like that exists... yet most people don't even care to use it or see if there are tools like that before saying OpenGL is bad. I can agree that OpenGL can be obscure at times but it's only going to get stronger and better.
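Hooking up that debug output is only a few lines (sketch assumes a 4.3 context or KHR_debug, with the context created with the debug flag):

#include <GL/glew.h>
#include <cstdio>

// Print every message the driver emits: API misuse, performance warnings, etc.
static void GLAPIENTRY on_gl_debug(GLenum source, GLenum type, GLuint id,
                                   GLenum severity, GLsizei length,
                                   const GLchar* message, const void* user)
{
    std::fprintf(stderr, "GL debug [%u]: %s\n", id, message);
}

void enable_gl_debug_output()
{
    glEnable(GL_DEBUG_OUTPUT);
    glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS); // report at the offending call
    glDebugMessageCallback(on_gl_debug, nullptr);
}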

One thing I'd really like to make clear is that I do not necessarily like OpenGL.
The speed of the driver is not a reason to not use OpenGL. That may sound naive, but if people used OpenGL more than D3D, driver vendors would have cared a lot more about their OpenGL performance. In the days of the fixed-function pipeline, OpenGL got a lot more love than it does today, and that's no coincidence. There are other, more pressing matters to be addressed that people should argue about instead of "it's slow" or "xxx vendor doesn't support it yet", which OpenGL itself has no control over.

It will be noted that the deprecation model that OpenGL 3+ implemented is the worst idea ever conceived, complicating implementations so much that some (notably Mesa) found it more sane to ignore it completely (which is within the bounds of the OpenGL specification, because the deprecation model is actually an optional extension called ARB_compatibility which doesn't even have its own proper specification document "due to its size and complexity". They should have just stopped there and known something was wrong with those words).

I have my fair share of gripes about OpenGL. Even then though, people unfairly bash it without proper argument or reason a lot of the time which is counter-productive and often the spreading of false rumors that can last years.

OpenGL Next is going to remove the need for a lot of extensions, the deprecation model, and the deprecated functions, and hopefully fix the mess that is GLSL to at least support a bytecode intermediary format like HLSL does (although I wouldn't mind a new language completely). There's also talk of focusing more on the bindless APIs available in OpenGL 4 (namely because they're cleaner and faster). But note that most of what it's introducing is taking away functionality and setting focus rather than introducing functionality. OpenGL in its current state is very powerful and can be extremely optimal. However, getting onto that path with OpenGL is much harder than with D3D because, compared to previous times, a user-friendly guide is lacking (books look more like the specification, and some people will tell you to just read the OpenGL specification so as to not leave out corner cases). I've seen more people turn to blog posts than I have the official OpenGL book or documentation.

Sleep.... more later....
Generally you're not compiling the source code itself, but interpreting the bytecode instead (MS already provides a compiler). AFAIK you cannot be sued for interpreting a file.
You'd be surprised...

HLSL bytecode has its own format specification, which is prone to patenting and copyright. There is probably little Microsoft can do to prevent the simple interpretation of such a thing, but the problem is that Microsoft alone controls it, a company that is known to burn everyone. What if the bytecode is modified in a way that's convenient only to Microsoft? They reserve the right to change DirectX any way they want. Nobody in their right mind would possibly think that the correct move for helping preserve the longevity and stability of an API is to base it on something they cannot have any control over.

This might sound dumb but while Cg originally looks like a copy of HLSL... it's "different enough" to avoid such things, not to mention that they probably have an internal agreement with Microsoft since they help develop Cg.
NoXzema wrote:
No... a window isn't always the big square thing on your screen with border decorations and panel.
Obviously. I assumed you were talking about this, as pretty much the only way to solve this is to make windows within the window.

It's a lot easier to benchmark a working application to figure out which driver is slower than another than it is to port to an entirely different API.
Not exactly. If this was the case then Direct3D wouldn't even be used.

Tools like apitrace can provide profiling measurements of each shader program and each call to OpenGL.
I'm fully aware of apitrace.

Try running a slow OpenGL program with apitrace. The cool thing about apitrace is that even if a program isn't programmed to run with a debugging context, apitrace can force it and then retrieve the messages that the driver spits out. When you do this, I can almost guarantee you that there's a whole slew of errors, often not even driver specific like bad uniform values or invalid calls entirely. You can also look, line by line, how fast each and every single call was (shader programs excluded, however, it will tell you how long a program is taking per frame including what part of the frame it's executing in) and determine what's taking the most time so you can figure out how to either work around it or fix it.


Stuff like that exists... yet most people don't even care to use it or see if there are tools like that before saying OpenGL is bad.
Stuff like this exists for Direct3D as well. That's the problem. And it just so happens that developers seem to like Direct3D's tools better than OpenGL's.

Renderdoc is hands down the best graphics debugger I've ever seen. Fortunately they're working on an OpenGL backend, but as you can imagine it doesn't have nearly as many features as the D3D one. And considering you can't even run the GUI on Linux, I don't see much of a reason to use the GL debugger in the first place.

The thing is that Direct3D tools are more mature than OpenGL's. If you want to change that then you have to write your own tools, which isn't exactly trivial.

The speed of the driver is not a reason to not use OpenGL.
I never said it was. It is a thing, however.

That may sound naive, but if people used OpenGL more than D3D, driver vendors would have cared a lot more about their OpenGL performance. In the days of the fixed-function pipeline, OpenGL got a lot more love than it does today, and that's no coincidence. There are other, more pressing matters to be addressed that people should argue about instead of "it's slow" or "xxx vendor doesn't support it yet", which OpenGL itself has no control over.
This is not an API problem, but it is still a very relevant problem. I'm not judging theoretical OpenGL; I'm judging OpenGL on how it is in the field. OpenGL may not have control over it, but I also don't have control over plenty of things that still affect me. It might sound good on paper, but programmers don't care about that. They want something that actually works. And in its current state, OpenGL support isn't stable enough. My point still stands.

What if the bytecode is modified in a way that's convenient only to Microsoft?
I highly doubt they would do that. Microsoft isn't the all-evil, money-centric company that a lot of people make them out to be. Even if they did that, the interpreter writers would still have a chance against them.

It's not really worth the money.

This might sound dumb but while Cg originally looks like a copy of HLSL... it's "different enough" to avoid such things, not to mention that they probably have an internal agreement with Microsoft since they help develop Cg.
HLSL and Cg were originally developed together by Nvidia and MS, that's why HLSL (circa D3D9) and Cg were pretty much identical around then. MS isn't associated with Cg anymore, but then again Nvidia doesn't work on it either, since it was too hard to maintain the differences between the APIs.
I highly doubt they would do that. Microsoft isn't the all-evil, money-centric company that a lot of people make them out to be. Even if they did that, the interpreter writers would still have a chance against them.
Yes, nothing like making a poor decision early on just for it to "have a chance" later on. Microsoft, under their relatively new CEO, has made some drastic changes. However, their past is very bloody and even dangerous to other companies and consumers. While I hope their new wave stays positive, I won't be betting any money on it. Again, it's quite a dumb idea to risk the reliability of your API by relying on another company, even more so when that company cost various companies hundreds of thousands by pulling the rug out from underneath them by developing OpenGL driver kits...

Stuff like this exists for Direct3D as well. That's the problem. And it just so happens that developers seem to like Direct3D's tools better than OpenGL's.
From my experience, people don't use the OpenGL tools... at all. They aren't aware of them and/or choose not to use them because they don't see them as useful. I actually see people having the same issue with D3D as well, where they refuse to put research into basic tools that help develop with these APIs... it's not really OpenGL-specific, nor did I intend to make it sound that way. For what it's worth, apitrace has more contribution status than renderdoc. That's not really to say it's worse or less capable either.

Ironically, apitrace is also one of the most used tools for D3D afaik.

This is not an API problem, but it is still a very relevant problem. I'm not judging theoretical OpenGL; I'm judging OpenGL on how it is in the field. OpenGL may not have control over it, but I also don't have control over plenty of things that still affect me. It might sound good on paper, but programmers don't care about that. They want something that actually works. And in its current state, OpenGL support isn't stable enough. My point still stands.

While it may be a problem, just moving to another API is quite the opposite of the correct motion to make in order to achieve the purposes of OpenGL. I couldn't really care less about how OpenGL is in "the field" (which is bullshit anyways, since OpenGL works fine in "the field"). I only care about how usable the API is to everybody. Just because things affect you that you don't have any immediate control over does not mean you cannot attempt to better those things. That's how progress is made.

EDIT: Also, you're completely full of shit for saying OpenGL support isn't stable enough. There are thousands of games that use OpenGL *now* that work well beyond expectations. There are a few dozen of the highest-end OpenGL games for Linux that work *now* in great quality. Are you saying those games just shouldn't have used OpenGL? This logic is plain dumb.

I never said it was. It is a thing, however.
It's really not as big of a gap as you might think for realistic use cases...

NoXzema wrote:
For what it's worth, apitrace has more contribution status than renderdoc.
I'd hope so. Renderdoc was open-sourced recently by Crytek (relatively speaking).

While it may be a problem, just moving to another API is quite the opposite of the correct motion to make in order to achieve the purposes of OpenGL.
I don't think you're understanding what I'm saying (explained later).

I couldn't really care less about how OpenGL is in "the field" (which is bullshit anyways, since OpenGL works fine in "the field").
Can you elaborate?

Just because things affect you that you don't have any immediate control over does not mean you cannot attempt to better those things. That's how progress is made.
Great. But companies don't give a shit about that. They want something that's guaranteed to run on a variety of different machine combinations at the maximum possible efficiency and quality. Running on more machines allows companies to make more money.

Also, you're completely full of shit for saying OpenGL support isn't stable enough.
Relative to D3D, it's not. If you still disagree, what other reason do AAA companies have to stick to D3D?

Are you saying those games just shouldn't have used OpenGL?
No, I never said that. I explicitly said that games should use the best supported API for their platform. D3D in Windows/Xbox/Windows Phone, OpenGL in Unix and Unix-like systems, and proprietary APIs in the rest of the consoles. I never said OpenGL should not be used at all (as it is pretty much your only option on some platforms), but it isn't your best choice for Windows or consoles.

It's really not as big of a gap as you might think for realistic use cases...
It isn't much at all, you're right.
Relative to D3D, it's not. If you still disagree, what other reason do AAA companies have to stick to D3D?
I'm not really sure. That's something you have to explain. A good explanation is simply that there are more D3D developers than there are OGL developers. There are various reasons for that, but either way, I simply could not care less.

No, I never said that. I explicitly said that games should use the best supported API for their platform. D3D in Windows/Xbox/Windows Phone, OpenGL in Unix and Unix-like systems, and proprietary APIs in the rest of the consoles. I never said OpenGL should not be used at all (as it is pretty much your only option on some platforms), but it isn't your best choice for Windows or consoles.
As I see my entire past argument completely ignored....

Great. But companies don't give a shit about that.
Oh? What company have you been hired at that showed this? Have you made any OpenGL solutions that "didn't work"?

Can you elaborate?
Already explained, or self-explanatory given the thousands of games and applications that use modern OpenGL today...

To be honest, I'm tired of having my arguments ignored... you should google the rest. I should have told you to do this in the first place instead of spoon-feeding you while you assume everything. Have some complimentary posts from Nvidia and Valve:

https://developer.nvidia.com/sites/default/files/akamai/gameworks/events/gdc14/GDC_14_Bringing%20Unreal%20Engine%204%20to%20OpenGL.pdf

https://developer.nvidia.com/sites/default/files/akamai/gamedev/docs/OpenGL%204.x%20and%20Beyond.pdf

https://developer.nvidia.com/sites/default/files/akamai/gamedev/docs/Porting%20Source%20to%20Linux.pdf

https://www.youtube.com/watch?v=btNVfUygvio

EDIT:
No, I never said that.

The first post you made in this thread was to use D3D instead of OpenGL. You've completely ignored my requests for information concerning Windows OpenGL performance issues before once again saying it's not viable for Windows. Unbelievable...
NoXzema wrote:
Oh? What company have you been hired at that showed this? Have you made any OpenGL solutions that "didn't work"?
https://dolphin-emu.org/blog/2013/09/26/dolphin-emulator-and-opengl-drivers-hall-fameshame/

And my personal favorite from a former Valve employee:

http://richg42.blogspot.co.uk/2014/05/the-truth-on-opengl-driver-quality.html
I'm sure you can guess which vendor is which.

To be honest, I'm tired of having my arguments ignored... you should google the rest.
I honestly feel the same way towards you (no disrespect intended).

To be honest, I'm tired of having my arguments ignored... you should google the rest. I should have told you to do this in the first place instead of spoon-feeding you while you assume everything. Have some complimentary posts from Nvidia and Valve:
Nvidia and Valve are pretty much the only companies pushing OpenGL at the current moment. I'd take whatever they say with a grain of salt, especially after that stunt they pulled with the OpenGL vs D3D benchmarks, lol.

After reading these pdfs, they don't even help your position. They state reasons of why you should include an OpenGL backend (they never said that you should only use OpenGL) and I don't disagree with them. They then proceed to give tips on porting your application. These engines still have Direct3D backends, so my point still stands.

The first post you made in this thread was to use D3D instead of OpenGL. You've completely ignored my requests for information concerning Windows OpenGL performance issues before once again saying it's not viable for Windows. Unbelievable...
Scanning through your posts, I do not see a request for this information. You did request a console benchmark, but I already responded to that. It's probably just me overlooking it, though.

And after looking over my posts, I don't even see where I brought performance into the argument. I will not defend a statement I did not make. If I did make that statement in this thread, then it's wrong.