How are 3D graphics made?

I was wondering if someone can tell me the term for rendering a 3D model on a 2D screen. For example, how would you write a program that spins a three-dimensional object so you can see all sides of it? Essentially the kind of 3D modelling used in video games, with perspective and movement. I am not trying to build a game engine; I am just curious how it works.

When I try googling it, I find some information about rasterization and ray casting, but I am not sure that is exactly what I am looking for. I want to know what kind of functions can render 3D images on a 2D screen. Again, I don't need a full explanation of the math, just the low-level term so I can research it further. Thanks.
I would pick one of the libraries listed below + OpenGL.
SFML
SDL

Of course, there are also Allegro and Ogre.
Once again I have to turn to this awesome OpenGL tutorial.

He explains the rendering pipeline and how the program and graphics card turn a series of vertices into a 2D scene.

http://www.arcsynthesis.org/gltut/Basics/Intro%20Graphics%20and%20Rendering.html
You describe the model as a collection of points in 3D space, (x, y, z). You usually start with the model in model space, where x, y, and z are between -1 and 1, and the model is centered at the origin (where x = 0, y = 0, and z = 0).

You can then transform the model, i.e. rotate, scale, or translate (move) it, by multiplying each vertex (x, y, z) by a "transformation matrix", which you might call the model matrix. This matrix transforms the model into what you might call world space, where you have your world coordinate system.

You make transformation matrices for different purposes. Then you multiply them together in the reverse of the order you want them applied, and you get a combined transformation matrix that applies all of the transformations at once. You multiply that by your vector, and the vertex is transformed the way you wanted.
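As a concrete sketch (assuming the GLM math library, which this thread doesn't mention but which is a common companion to OpenGL; the specific values are made up for illustration):

// Minimal sketch of composing transformations with GLM.
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

int main() {
    // Build one matrix per transformation.
    glm::mat4 scale     = glm::scale(glm::mat4(1.0f), glm::vec3(2.0f));
    glm::mat4 rotate    = glm::rotate(glm::mat4(1.0f), glm::radians(45.0f),
                                      glm::vec3(0.0f, 1.0f, 0.0f)); // around the y axis
    glm::mat4 translate = glm::translate(glm::mat4(1.0f), glm::vec3(5.0f, 0.0f, 0.0f));

    // Multiply in reverse order: this scales first, then rotates, then translates.
    glm::mat4 model = translate * rotate * scale;

    // Applying the combined matrix to a vertex performs all three
    // transformations in one multiplication.
    glm::vec4 vertex(1.0f, 0.0f, 0.0f, 1.0f);   // w = 1 marks a point
    glm::vec4 transformed = model * vertex;     // the transformed vertex
}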

You also have a camera (view) matrix and a perspective matrix, which take your world and orient it so that you are looking at it from a certain position and angle, and with a certain type of perspective. The viewer never literally moves; the whole world moves around them.

The term they use is MVP (model, view, projection; the projection matrix here is a perspective matrix). You end up with one matrix, your MVP, equal to perspective_matrix * view_matrix * model_matrix, which applies all of your transformations to each vertex in your scene when multiplied by it.
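Building the view and projection halves might look like this with GLM (again an assumption; glm::lookAt and glm::perspective are GLM helpers, and the camera position, field of view, and clip planes here are arbitrary):

// A sketch of building the MVP from a given model matrix.
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 makeMVP(const glm::mat4& model) {
    // View: a camera at (0, 0, 5) looking at the origin, with +y as "up".
    glm::mat4 view = glm::lookAt(glm::vec3(0.0f, 0.0f, 5.0f),
                                 glm::vec3(0.0f, 0.0f, 0.0f),
                                 glm::vec3(0.0f, 1.0f, 0.0f));

    // Projection: a 45-degree field of view, 4:3 aspect ratio,
    // clipping everything nearer than 0.1 or farther than 100.
    glm::mat4 projection = glm::perspective(glm::radians(45.0f),
                                            4.0f / 3.0f, 0.1f, 100.0f);

    // Note the order: the model matrix is applied first, then view,
    // then projection.
    return projection * view * model;
}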

Usually you will make the models in a program like Blender and export them to some kind of text file which lists all of the vertices that make up the model, along with the normals, UVs, etc.
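For example, the Wavefront OBJ format (one common export format; this thread doesn't name a specific one) is plain text in which a line like "v 1.0 0.5 0.0" is a vertex position, "vn ..." a normal, and "vt ..." a UV. A minimal sketch that reads just the positions:

#include <fstream>
#include <sstream>
#include <string>
#include <vector>

struct Vec3 { float x, y, z; };

// Reads only the "v" (position) lines of an OBJ file; a real loader
// would also handle "vn" (normals), "vt" (UVs), and "f" (faces).
std::vector<Vec3> loadPositions(const std::string& path) {
    std::vector<Vec3> positions;
    std::ifstream file(path);
    std::string line;
    while (std::getline(file, line)) {
        std::istringstream iss(line);
        std::string tag;
        iss >> tag;
        if (tag == "v") {
            Vec3 p;
            iss >> p.x >> p.y >> p.z;
            positions.push_back(p);
        }
    }
    return positions;
}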

The normals are just vectors perpendicular to the surface of a plane, like maybe a triangle.
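A triangle's surface normal is typically computed as the normalized cross product of two of its edges. Here is a small self-contained sketch (the Vec3 type and function names are just for illustration):

#include <cmath>

struct Vec3 { float x, y, z; };

// Cross product of two vectors: the result is perpendicular to both.
Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

// Scale a vector to unit length so lighting math behaves consistently.
Vec3 normalize(const Vec3& v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// The normal of triangle (a, b, c) is the normalized cross product of
// two of its edges; the winding order determines which way it points.
Vec3 triangleNormal(const Vec3& a, const Vec3& b, const Vec3& c) {
    Vec3 edge1 = { b.x - a.x, b.y - a.y, b.z - a.z };
    Vec3 edge2 = { c.x - a.x, c.y - a.y, c.z - a.z };
    return normalize(cross(edge1, edge2));
}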

The UVs are coordinates that describe which part of a texture should be used for a given surface of the model.

Usually you will just have your entire model's texture mapped from one image. In Blender you can unwrap your model so that each 2D face making it up, like maybe a bunch of triangles, lies flat on your 2D screen in front of a texture. You can then drag, reorient, or otherwise manipulate each face so that it sits over the part of the texture you want, and Blender will take care of calculating the UVs for you.

OpenGL needs the normal so it can calculate how light will reflect off the surface. In OpenGL, you specify a normal per vertex. You can get a smooth, rounded effect by setting each vertex's normal to the average of the surface normals of the triangles, or whatever shapes, adjacent to it.
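Here is a rough sketch of that averaging for an indexed triangle mesh, again assuming GLM (the function name and parameters are made up for illustration):

#include <glm/glm.hpp>
#include <cstddef>
#include <vector>

// Computes smooth ("averaged") vertex normals. positions holds the
// vertices; indices holds triangles as triples of indices into positions.
std::vector<glm::vec3> smoothNormals(const std::vector<glm::vec3>& positions,
                                     const std::vector<unsigned>& indices) {
    std::vector<glm::vec3> normals(positions.size(), glm::vec3(0.0f));

    // Accumulate each triangle's face normal into its three vertices.
    for (std::size_t i = 0; i + 2 < indices.size(); i += 3) {
        const glm::vec3& a = positions[indices[i]];
        const glm::vec3& b = positions[indices[i + 1]];
        const glm::vec3& c = positions[indices[i + 2]];
        glm::vec3 face = glm::normalize(glm::cross(b - a, c - a));
        normals[indices[i]]     += face;
        normals[indices[i + 1]] += face;
        normals[indices[i + 2]] += face;
    }

    // Renormalize: the sum of the adjacent face normals, made unit
    // length, is the averaged vertex normal.
    for (glm::vec3& n : normals) {
        n = glm::normalize(n);
    }
    return normals;
}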

It used to be that the graphics pipeline was fixed, and you could only interact with it through functions that set its state; the hardware was designed to do a lot of the work for you. But things changed, and now programmers can program parts of the pipeline themselves.

You can write shaders, for example, that calculate what color the pixels on the screen should be, by doing calculations involving the position of the light source, the color of the light, the positions of the vertices, their normals (describing their relative angles/orientations), and other properties like how reflective the surface should be, and so on.
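For example, a minimal diffuse-lighting fragment shader might look like the following. It is written in GLSL but stored in a C++ string, which is how shader source is usually handed to OpenGL via glShaderSource(). None of this comes from the tutorial linked above, and the uniform and variable names are made up:

// A minimal GLSL fragment shader: the surface gets brighter the more
// directly its normal faces the light (simple diffuse/Lambertian shading).
const char* fragmentShader = R"(
#version 330 core

in vec3 fragPosition;       // interpolated vertex position (world space)
in vec3 fragNormal;         // interpolated vertex normal

uniform vec3 lightPosition; // where the light is
uniform vec3 lightColor;    // what color the light is
uniform vec3 surfaceColor;  // the surface's base color

out vec4 color;

void main() {
    vec3 normal  = normalize(fragNormal);
    vec3 toLight = normalize(lightPosition - fragPosition);

    // The dot product of the normal and the light direction gives the
    // cosine of the angle between them; clamp at zero so surfaces facing
    // away from the light are simply unlit rather than negative.
    float diffuse = max(dot(normal, toLight), 0.0);

    color = vec4(diffuse * lightColor * surfaceColor, 1.0);
}
)";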
Thanks htirwin for the explanation. And thanks Disch, the tutorial is perfect.
Wow, that is an awesome tutorial.