Going crazy behind OpenGL

I've been trying to solve a run-time error for a long time now (around three or four months?).
The crash point is glDrawElements: a segfault, and the disassembly shows an access through a null pointer (inside the latest AMD drivers' DLLs).
The crash doesn't happen every time, but I have the perfect testcase:
This time it happened on the first frame.

The current OpenGL state is:

Frames Drawn:            2  --First Frame Empty, Loading goes on the Second frame
glUseProgram calls:      1  --One shader program has been bound throughout the executable
glCreateProgram calls:   1  --One shader program is alive
glGetUniformName calls:  1  --One Uniform Name has been tracked for all shader programs
glUniform* calls:        1  --One Uniform has been set
glBindVertexArray calls: 3  --Three VAOs have been bound
glGenVertexArrays calls: 1  --One VAO has been created
glBindBuffer calls:      16 --Uniform Buffers add 8 calls
glGenBuffers calls:      6  --Uniform Buffers add 4 calls

Texture State:
- No textures bound         --Aware of this: No texture used in the shader
Bound VAO ID:
- 1                         --The only alive VAO is bound
VAO States:
-> VAO ID 1                 --It's in a good state
--> ARRAY_BUFFER:         5
--> ELEMENT_ARRAY_BUFFER: 6
Texture Slot Bound:
- TEXTURE0                  --Unused yet
Alive VBOs:
-> 1, 2, 3, 4, 5, 6         --1 to 4 are Uniform Buffers. All used VBOs are alive.
Buffered VBOs:
-> 1, 2, 3, 4, 5, 6         --All of them are buffered
Alive Shader Programs:
-> 3                        --The only shader program
Bound Shader Program:
-> 3                        --It is bound
Alive Textures:
-> None                     --Aware: No textures loaded
Shader Uniform States:
-> Program 3                --The only shader program
--> Uniform Location 85     --uniform sampler2D Textures[TEXCOUNT];
---> Buffered               --as expected, set to 0,1,2,3,4,5,6,7 but unused.


The crash only seems to happen on a glDrawElements call.
glDrawArrays is always clean, from the first to the last vertex.
Element Indices are checked beforehand and are correct.
Most of the time the model is drawn correctly, and for many frames in a row.
When the application does crash, it crashes the first time an object is drawn (I only have a single object, so it can't be one particular object causing the problem).

The testcase consists of the model (and associated textures and shaders) being reloaded every frame.
Whether the old resources are freed or not, the application crashes anyway.

Does anybody have an idea where I should look to solve this problem?
apitrace is probably more useful here.
If the segfault doesn't happen, this is the trace:
http://pastebin.com/bjyMNvcj
If the segfault happens, this is the trace:
http://pastebin.com/KjtzdSbX
Note how the last call doesn't present the "GLSL=n" log at the end of the line.

The GL_INVALID_ENUM error comes from either GLEW (the extension loading library) or GLFW (the window handling library), since it happens at the very beginning (even before any of my own calls, if I check).
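(If it really is the usual glewInit quirk, draining the error queue right after initialization is enough to confirm and discard it. A rough sketch, not necessarily matching my init code:)

// Inside the context/extension init code: clear any spurious startup error
// so it doesn't get blamed on a later call.
glewExperimental = GL_TRUE;          // needed for core-profile entry points
if (glewInit() != GLEW_OK)
{
    // handle the failure
}
while (glGetError() != GL_NO_ERROR)
{
    // discard GL_INVALID_ENUM (or anything else) raised during startup
}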

Side note: the state above is tracked manually; all OpenGL calls have been wrapped in similarly named (sgl*) calls throughout the entire project, which record the state.
apitrace will tell you exactly which call is causing the error. It shows the state OpenGL is in at every single call.
Oh, wow, I thought you meant a trace of the API calls. I'll try it this evening and report back.
Sorry if it wasn't clear that I meant the tool (lol).

https://github.com/apitrace/apitrace
apitrace wasn't completely useful (though it did tell me a couple of things I hadn't seen, with no different results).
I tried AMD's CodeXL (I found out about it a few hours ago) and it's telling me "Array Bounds Exceeded", which explains why glDrawArrays works... but I still have to find out why, since the indices correctly begin at 0 and bounds checking says everything is in a good state.
Edit: Further inspection of the indices (right before buffering) reveals nothing more.
Is it possible for OpenGL to somehow lose buffer data?
If you provide the apitrace dump, I can also look at it. Not sure how you'd share that though... but if the given commands are all that are really executed, then it shouldn't be large.

Also, are you sure it's your use of OpenGL?
In a rar it's 88 KB if I remember correctly; I'll upload it once I get to my PC. I do believe it's my use of OpenGL that's causing the problems, there's no reason to believe otherwise, I just can't find what exactly is wrong. I expect the state to be what it ends up being, but maybe I'm wrong about something?
(Please note that the actual trace contains multiple frames, I couldn't catch it on the first frame, and you may have to replay the file multiple times since the error doesn't happen consistently. Due to a bad camera startup position the screen will be black the whole time, even though it's drawing things correctly (when it is). Also, for obvious reasons, I rebind the buffers after the VAO is bound and before the model is drawn. Outside of what's in the trace, I also switched from glVertexAttribPointer to the 4.3 glBindVertexBuffer family of functions, with no results.) I think I'll have to align the data, as I haven't tried that yet. Here's what the vertex attributes look like:
Attrib 1: vec3 (position)
Attrib 2: vec3 (normal)
Attrib 3: vec2 (texcoord)
Attrib 4: uint (bone id, constantly 0 in this model; it's actually an UNSIGNED_BYTE and is correctly provided through the integer counterpart of glVertexAttribPointer. Multiple bones in a model work correctly.)
The offsets are calculated with offsetof, and the stride with sizeof.
Storage is a vector, which is a contiguous memory block.
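Roughly, the setup looks like this (the struct, the names and the attribute locations below are placeholders for illustration, not the exact project code):

#include <GL/glew.h>
#include <cstddef>   // offsetof
#include <cstdint>
#include <vector>

// Hypothetical vertex layout matching the description above.
struct Vertex {
    float   position[3];  // attrib: vec3
    float   normal[3];    // attrib: vec3
    float   texcoord[2];  // attrib: vec2
    uint8_t boneId;       // uint in the shader, UNSIGNED_BYTE on the CPU side
};

void UploadVertices(GLuint vbo, const std::vector<Vertex>& vertices)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(Vertex),
                 vertices.data(), GL_STATIC_DRAW);

    // Stride from sizeof, offsets from offsetof, as described above.
    glVertexAttribPointer (0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, position));
    glVertexAttribPointer (1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, normal));
    glVertexAttribPointer (2, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, texcoord));
    // Integer attribute: note the I-variant and the missing normalize parameter.
    glVertexAttribIPointer(3, 1, GL_UNSIGNED_BYTE,   sizeof(Vertex), (void*)offsetof(Vertex, boneId));

    for (GLuint i = 0; i < 4; ++i)
        glEnableVertexAttribArray(i);
}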
IIRC, to pass an integer to a shader program, it is still loaded as a float.

in float node0;
somearray[int(node0)];

glVertexAttribPointer(Node0Loc, ... , GL_SHORT, ...);


If that's not what you are doing, I don't think that's your crash problem though; I don't get a crash when I change the float to an int, the models just become distorted.
No, there's an integer-only counterpart, glVertexAttribIPointer, which preserves the integer values and has no normalize parameter.
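To illustrate the difference (rough sketch; boneLoc, stride and offset are placeholders, only the choice of call matters):

GLuint      boneLoc = 3;        // assumed attribute location
GLsizei     stride  = 0;        // assumed tightly packed for this sketch
const void* offset  = nullptr;

// Float path (what the suggestion above describes): the value reaches the shader as a float.
glVertexAttribPointer (boneLoc, 1, GL_UNSIGNED_BYTE, GL_FALSE, stride, offset);   // shader: in float boneId;
// Integer path: the value stays integral, and there is no normalize parameter.
glVertexAttribIPointer(boneLoc, 1, GL_UNSIGNED_BYTE,           stride, offset);   // shader: in uint boneId;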
The models are drawn correctly when they do get drawn (the physics simulation gives good results), but the program has this occasional crash.
(Whoops, I accidentally posted twice.)
I have noticed that the warning message about using UNSIGNED_BYTE has the same "recording point" ("Index" in the table) as the crash.
I have switched from GL_UNSIGNED_BYTE to GL_UNSIGNED_INT (and corrected all the associated code) and it now works flawlessly.
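In code terms, the change boils down to something like this (a sketch with assumed names, based on the layout I posted earlier):

// Widen the bone id on the CPU side...
struct Vertex {
    float    position[3];
    float    normal[3];
    float    texcoord[2];
    uint32_t boneId;      // was uint8_t
};

// ...and tell OpenGL the matching type:
glVertexAttribIPointer(3, 1, GL_UNSIGNED_INT,   // was GL_UNSIGNED_BYTE
                       sizeof(Vertex), (void*)offsetof(Vertex, boneId));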
I have to thank NoXzema anyway, since I couldn't have pointed this out on my own without apitrace.
I've taken a look at the apitrace. There seem to be various warnings output by the driver. For future reference, I would highly recommend always using debug output.

http://www.opengl.org/wiki/Debug_Output
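A minimal setup looks something like this (just a sketch; the callback only prints the message, and ideally you'd also request a debug context at creation time):

#include <GL/glew.h>
#include <cstdio>

// GL 4.3 / KHR_debug output callback (sketch).
void GLAPIENTRY OnGLDebugMessage(GLenum source, GLenum type, GLuint id,
                                 GLenum severity, GLsizei length,
                                 const GLchar* message, const void* userParam)
{
    std::fprintf(stderr, "GL debug: %s\n", message);
}

void EnableGLDebugOutput()
{
    glEnable(GL_DEBUG_OUTPUT);
    glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);   // deliver messages on the offending call
    glDebugMessageCallback(OnGLDebugMessage, nullptr);
}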

Going back to apitrace: it will actually hook in an output function for you and fetch not only the debug output but also various other warnings/errors it detects itself.

http://imgur.com/Fxtsedz
Weirdly enough, I don't get any of that, and the shader compiles and links successfully. Thanks for the screenshot though, I'll see what I can do to fix that stuff.
With apitrace, you have to replay the trace to get error info.
I did replay it; maybe my drivers are too kind to throw an error?
I remember debug output being driver-dependent.
Perhaps. The Catalyst driver is horrendous about it, while NVIDIA's is the best at it. Mesa and OS X sit somewhere in between.
Well, I guess I'll have to find some nvidia guy willing to do some beta testing.
lol... Unfortunately, when I started messing with just a small area of OpenGL (which was context creation), I actually bought two video cards, one from each vendor. Each one has quirks that have to be dealt with, or you're going to have people complaining about one thing or the other.

tl;dr: the OpenGL spec isn't followed nearly as closely as DirectX is by its implementations. That's not the fault of OpenGL, but hopefully OpenGL Next will help alleviate some of the symptoms.