Programming a Camera in OpenGL

Hi Everyone,

I just finished going through NeHe's OpenGL tutorials, and I'm trying to understand how to program movement through the world based on the movement of a camera, instead of having the world move in the opposite direction of wherever input would tell the camera to go.

Does anybody know how the concept works...?


Thanks

You don't really move a camera, you transform, rotate, etc. each vertex in the "world" to emulate movement. Your "camera" is always in the same spot. Or... I guess it doesn't have to be depending on your idea of how movement works.

For example, in our universe, we know the world doesn't revolve around us (in the literal sense). But in the digital universe, it does. The definition of movement can't be the same as in the real world; the rules are different.

OpenGL itself has no concept of a camera, AFAIK.
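
For example, here's a rough sketch in old-style OpenGL of what "moving the camera" actually does (camX, camY, camZ and camYaw are just my own variable names, updated from input):

// Rough sketch (legacy OpenGL): "moving the camera" is really applying the
// inverse of the camera's transform to the whole world each frame.
glLoadIdentity();
glRotatef(-camYaw, 0.0f, 1.0f, 0.0f);   // rotate the world opposite to the camera's heading
glTranslatef(-camX, -camY, -camZ);      // shift the world opposite to the camera's position
drawWorld();                            // draw everything in world coordinates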
OK, so it's fine to modify the coordinates of the objects in the world as I'm moving through it?

Because in 2D, the way I did it is that I would have fixed coordinates for my objects, let's say ObjectX and ObjectY, and when I moved through the world I would have CameraX and CameraY that would be modified based on movement. So, let's say, MoveRight means CameraX += 1; and MoveDown means CameraY += 1;

And then, at the moment of calling the rendering of an object,

I would pass ObjectX - CameraX and ObjectY - CameraY, instead of passing ObjectX and ObjectY. ObjectX and ObjectY are never modified; a new value affected by the camera is passed. This way it makes collisions much simpler...

It's simpler because you have world coordinates that can be used for collisions and screen coordinates for rendering: ObjectX - CameraX is a screen position, and ObjectX is a world position.
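
Something like this, roughly (RenderObject is just a placeholder for my drawing call):

// ObjectX/ObjectY never change; the camera offset is only applied
// when converting world coordinates to screen coordinates:
int screenX = ObjectX - CameraX;   // world position -> screen position
int screenY = ObjectY - CameraY;
RenderObject(screenX, screenY);    // draw at the camera-relative position
// Collisions still use ObjectX and ObjectY directly, in world coordinates.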

So you are saying that it basically is the same in OpenGL 3D?
Oh, I think I get it: what we are actually doing is offsetting the glTranslatef position by the camera position!



int DrawGLScene(GLvoid)                                     // Here's where we do all the drawing
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);     // Clear the screen and the depth buffer
    glLoadIdentity();                                       // Reset the view

    GLfloat xtrans = -xpos;                                 // Translate the scene opposite to the camera position
    GLfloat ztrans = -zpos;
    GLfloat ytrans = -walkbias - 0.25f;                     // Head-bobbing offset
    GLfloat sceneroty = 360.0f - yrot;                      // Rotate the scene opposite to the camera heading

    glRotatef(lookupdown, 1.0f, 0, 0);
    glRotatef(sceneroty, 0, 1.0f, 0);
    glTranslatef(xtrans, ytrans, ztrans);

    int numtriangles = sector1.numtriangles;

    // Process each triangle
    for (int loop_m = 0; loop_m < numtriangles; loop_m++)
    {
        glBindTexture(GL_TEXTURE_2D, texture[sector1.triangle[loop_m].TextureId]);
        glBegin(GL_TRIANGLES);
        glNormal3f(0.0f, 0.0f, 1.0f);
        for (int vert = 0; vert < 3; vert++)                // The three vertices of this triangle
        {
            GLfloat x_m = sector1.triangle[loop_m].vertex[vert].x;
            GLfloat y_m = sector1.triangle[loop_m].vertex[vert].y;
            GLfloat z_m = sector1.triangle[loop_m].vertex[vert].z;
            GLfloat u_m = sector1.triangle[loop_m].vertex[vert].u;
            GLfloat v_m = sector1.triangle[loop_m].vertex[vert].v;
            glTexCoord2f(u_m, v_m);
            glVertex3f(x_m, y_m, z_m);
        }
        glEnd();
    }
    return TRUE;                                            // Everything went OK
}

(xtrans, ytrans, ztrans) are the variables affected by keyboard movement input.
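
For reference, the movement keys in that lesson update these variables roughly like this (a trimmed sketch adapted from the same NeHe lesson; keys[] and VK_UP come from its Windows framework):

const float piover180 = 0.0174532925f;      // degrees -> radians
if (keys[VK_UP])                            // walk forward along the current heading
{
    xpos -= (float)sin(heading * piover180) * 0.05f;
    zpos -= (float)cos(heading * piover180) * 0.05f;
}
if (keys[VK_LEFT])                          // turn left
{
    heading += 1.0f;
    yrot = heading;
}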
Instead of using glTranslatef, etc., you should use shaders. Shaders aren't going away any time soon (if ever). I would look into GLSL or Cg.
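
For instance, a minimal GLSL 3.3 vertex shader could look like this (just a sketch; the mvp uniform name is my own choice), stored as a C++ string you'd hand to glShaderSource():

const char* vertexShaderSrc =
    "#version 330 core\n"
    "layout(location = 0) in vec3 position;\n"
    "uniform mat4 mvp;\n"                            // projection * view * model, built on the CPU
    "void main()\n"
    "{\n"
    "    gl_Position = mvp * vec4(position, 1.0);\n" // one matrix multiply replaces glTranslatef/glRotatef
    "}\n";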
NeHe's tutorials are extremely outdated. glBegin/glEnd/glTexCoord, etc., have all been deprecated in OpenGL for years.

NoXzema is right, you should be using shaders.

Stop reading NeHe and read this tutorial instead:

http://www.arcsynthesis.org/gltut/
Specifically, you will need hardware capable of running OpenGL version 3.3
.oi
?
I'm complaining that toy problems for learning the basics require such "new" hardware.


Edit: I suppose the quotes around "new" may be misinterpreted. I understand that four years is a long time in our industry, but updating hardware is expensive (consider having to replace an entire lab).
3.3 is from 2010. It's nearly 4 years old. How old do you want it to be? A decade?
The idea of programmable shaders (which is primarily what is hardware-demanding about OpenGL 3.3) predates 2010. So even if your hardware is more than four years old, you might be able to get away with driver updates.
And, if all else fails, you can always use Mesa for software-based (hardware-accelerated where possible) OpenGL 3.0 graphics. I'm pretty sure. See http://mesa3d.org/
95%+ of GPUs support DirectX 10, which is roughly the same feature level as OpenGL 3. That's not to say some OpenGL 4 features aren't included, specifically some that deal with shaders and aren't really hardware issues. Probably the biggest one for me is explicit uniform locations, which is still supported on my last GPU, which is about a decade old.

http://store.steampowered.com/hwsurvey/videocard/
Hey Everyone, thanks for your help.

When I'm using gluLookAt, or glTranslate combined with glRotate, am I moving through my scene with my camera the right way? Or is the scene moving around my camera, the simple but kind of wrong way?


Thanks
Your scene is moving around your camera. Your camera is always facing (0, 0, -1), where the order is x, y, z.

Also, you don't want to use those functions! Instead, you'll either want to create your own (which is nothing but matrix manipulation, mind you) or use a library that does it for you, like GLM, to create the matrices required to "move" your objects or camera.

EDIT: In other words, you can move your camera by manipulating the view matrix. Eh... I know some people like to hide the math behind libraries, but it makes more sense if you do the math yourself for a few vertices: take a vertex, multiply the model matrix by the vertex, multiply the result by the view matrix, then multiply that result by the projection matrix. This process is described mathematically in every OpenGL book, even though they generally provide you tools to avoid it (something I don't really like).
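
For example, here's a sketch with GLM (glm::lookAt, glm::perspective, and glm::translate are GLM's real API; the values and variable names are just examples):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Model matrix: places the object in the world (here, 5 units into the screen).
glm::mat4 model = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, -5.0f));

// View matrix: the "camera". glm::lookAt builds the inverse of the camera's
// transform, which is why the scene appears to move around a fixed camera.
glm::mat4 view = glm::lookAt(glm::vec3(0.0f, 1.0f, 3.0f),    // camera position
                             glm::vec3(0.0f, 0.0f, -5.0f),   // point the camera looks at
                             glm::vec3(0.0f, 1.0f, 0.0f));   // up direction

// Projection matrix: perspective with a 45-degree field of view.
glm::mat4 projection = glm::perspective(glm::radians(45.0f), 4.0f / 3.0f, 0.1f, 100.0f);

// The full chain for one vertex, exactly as described above:
glm::vec4 vertex(1.0f, 0.0f, 0.0f, 1.0f);                 // a vertex in model space
glm::vec4 clipPos = projection * view * model * vertex;   // model -> world -> view -> clip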