Getting started with game programming?

I'm 16 and I've been programming in C++ for 6 months now.
I know all about console programming and have finished my graduation project.
I want to end up making my own 3D engine (and editor), but I don't know where to start.

I started learning some OpenGL with FreeGLUT and GLEW, but I read that it's not that good for gaming. (If you think I should keep doing that, please give me some good book/site to learn from.)

So where should I start? I'm really motivated, I just don't know where I can find all the information.
First off, after 6 months you know very little about C++, and you cannot possibly know everything about "console programming" (could you please define that?).

AFAIK OpenGL would probably be one of the better things to learn when it comes to 3D rendering and all things pertaining to it.
closed account (Dy7SLyTq)
OpenGL is good for gaming. And no one knows everything about C++, and there is no way you know everything about making console programs.
I read Beginning C++ Game Programming and watched thenewboston's videos about C++. I probably don't know enough, but I just don't know where to go from here.
closed account (3qX21hU5)
In my opinion, I wouldn't start out by making a game engine (and editor) as your first game development project. That is, if you mean a game engine like I think you mean: an engine that is capable of developing multiple games, or that helps speed up the production of games.

Mainly because you wouldn't have any idea what should go into it, since you haven't built a game before (assuming here). I post this article all the time, but I would recommend checking it out: http://scientificninja.com/blog/write-games-not-engines

Anyway, if I were you I would start out by first trying to make some very simple games like Pong or Asteroids, just to get familiar with whatever language and library you choose to learn. Don't bite off a project that is above your level of programming, because that is very discouraging.

For a library, I would recommend trying out SFML 2.0 or SDL (don't get anything 1.x); they are much easier to learn than DirectX or OpenGL. Once you have got your head around the basics of graphics and game development in general, you can move on to those and to bigger projects, but I would hold off for right now.

Also, if you are interested in game engines and game development in general, I would recommend checking out the book Game Engine Architecture by Jason Gregory; it will give you a very good idea of how game engines, and even games in general, are structured.

At least that is my advice. I wish you the best of luck, and if you have any questions just let us know; I am sure we would be glad to answer any that we can.
Okay, thanks for the advice.
Game Coding Complete (by somebody...) is an excellent resource for when you get to writing an engine. Be warned, it does not hold your hand at all, or even supply you with a functional program. It just provides snippets of various things and lots of theory and discussion on pretty much every aspect of a game engine.
It's also very entertaining; the author is very personable and makes it an enjoyable read. I believe he was a major developer for the Ultima series.
It is by Mike McShaffry. (I have the first edition of his book and Code Complete from Microsoft Press, as well as Game Engine Architecture and Ultimate Game Engine Design or something like that, which teaches you to build a basic 2D/3D game engine and make two games with it by the end of the book. I've never read it, though, or the other two for that matter.)

These are the three books I have from college:
http://books.google.com/books?id=RQ2jSQAACAAJ&dq=Ultimate+Game+Engine+Design&hl=en&sa=X&ei=rl_wUYL-FvGFyQGY_oDoBw&ved=0CC8Q6AEwAA

http://books.google.com/books?id=EZDtaaPy304C&dq=Ultimate%20Game%20Engine%20Design&source=gbs_similarbooks

http://books.google.com/books?id=8TGiJSpmvbEC&q=Ultimate+Game+Engine+Design&dq=Ultimate+Game+Engine+Design&hl=en&sa=X&ei=rl_wUYL-FvGFyQGY_oDoBw&ved=0CDQQ6AEwAQ

@ Zereo

I'm curious, what's wrong with SDL 1.x?
Everything.
That's actually not helpful, but thanks anyway.
closed account (3qX21hU5)
I'm curious, what's wrong with SDL 1.x?


It teaches outdated techniques that are no longer used in this type of programming, and it gets very bad performance (because it is using those outdated techniques).

There is also the fact that SDL 2.x is out, and you should really be using it because it fixes a lot of the issues 1.x had.

Basically, if you want to get any good performance out of 1.x, you have to go through an OpenGL interface.

Again, that was all about SDL 1.x and doesn't apply to the new version 2.x, which seems to fix most of the problems 1.x had.
+1 @ Zereo

SDL 1.x is built on "surface blitting", which is an extremely old drawing concept that is not geared towards performance (anymore).

There are numerous downsides:

1) Most modern graphics hardware doesn't put "surfaces" in video memory anymore because it has no concept of a "surface". Therefore any surface you create with SDL is likely going to be in system memory. This means that every time you draw to a hardware display (read: the screen), every single pixel must go over the bus from the CPU to the GPU. This is dreadfully slow.

2) Blitting from a surface that does not match the output display format means that each pixel must be "converted" prior to being displayed. Again this likely has to be done by the CPU. Slow.

3) Blitting is a plain rectangle->rectangle copy. This is severely limiting (arguably crippling) and prevents even the most basic effects, like rotation, scaling/stretching... or even just "flipping" a graphic so it's facing the opposite direction. All of which are trivial to do with modern graphics.

4) SDL surfaces use "color keys" to mark transparent pixels. Again this does not conform to how modern graphics hardware works. Modern hardware does alpha blending to produce transparency effects. Color keys simply do not exist any more.




Its audio lib is equally horrendous... requiring that all mixing be done in software (slow), and even allowing you to specify your own streaming audio format (which means it must manually convert audio samples in software prior to actually streaming them -- slow, slow, slow).

So yeah. I would avoid SDL 1.x like the plague.
Most modern graphics hardware doesn't put "surfaces" in video memory anymore because it has no concept of a "surface".

I know that blitting and blitters (hardware that used to focus specifically on blitting) don't exist anymore. But just as a matter of interest: how do modern graphics "work" with regard to putting pixels on screen?
closed account (N36fSL3A)
Once it gets out of release candidate I'm porting my program over. Since I wrote a wrapper, I only really need to change like 5 things and I'm all good.
closed account (3qX21hU5)
I know that blitting and blitters (hardware that used to focus specifically on blitting) don't exist anymore. But just as a matter of interest: how do modern graphics "work" with regard to putting pixels on screen?


Most use shaders and rasterization if I am not mistaken.

The rendering pipeline is mapped onto current graphics acceleration hardware such that the input to the graphics card (GPU) is in the form of vertices. These vertices then undergo transformation and per-vertex lighting. At this point in modern GPU pipelines a custom vertex shader program can be used to manipulate the 3D vertices prior to rasterization. Once transformed and lit, the vertices undergo clipping and rasterization resulting in fragments. A second custom shader program can then be run on each fragment before the final pixel values are output to the frame buffer for display.

The graphics pipeline is well suited to the rendering process because it allows the GPU to function as a stream processor since all vertices and fragments can be thought of as independent. This allows all stages of the pipeline to be used simultaneously for different vertices or fragments as they work their way through the pipe. In addition to pipelining vertices and fragments, their independence allows graphics processors to use parallel processing units to process multiple vertices or fragments in a single stage of the pipeline at the same time.


http://en.wikipedia.org/wiki/Shader
http://en.wikipedia.org/wiki/Rasterisation
http://en.wikipedia.org/wiki/Graphics_pipeline
I have never understood the graphics pipeline. Maybe one day I will try again. But thank you for the explanation.
It's all done with 3D techniques and programmable hardware (shaders).

Video memory holds vertex data, textures, and whatever else you think it needs. Shaders access that data to render individual pixels.

A modern version of the 2D "blit" would be to render 2 triangles (to form a rectangle), and to have the fragment shader fetch pixel data from a texture and plot them appropriately. Rotating/scaling/flipping the image is just a matter of altering the vertex positions in the vertex shader.

It's definitely more involved... which is why it's often abstracted into something simpler in libs like SFML and SDL 2.0.

This page does a decent job of giving a general overview:

http://www.arcsynthesis.org/gltut/Basics/Intro%20Graphics%20and%20Rendering.html

He specifically talks about OpenGL... but the rendering pipeline in Direct3D is very similar.
Fredbill30 wrote:
Once it gets out of release candidate I'm porting my program over. Since I wrote a wrapper, I only really need to change like 5 things and I'm all good.

Thank God not all programmers have that attitude. If we all waited to use SDL 2.0 until it was out of "release candidate", it would never get out of RC. To be removed from RC, programmers have to use it and report all the bugs, so they can be fixed in subsequent updates. With no bug reports, the developers can't move it out of RC and make it stable.
closed account (N36fSL3A)
Yes I know. You guys do the work while I just wait. >:D