They've been deprecated, which means they're being phased out.
I don't even think newer versions of OpenGL have them at all.
What's the point of using the others?
Aside from being more modern and working with more modern OpenGL features...
The others (glBindVertexArray, glDrawElements, glDrawArrays, etc.) keep as much data in GPU memory as possible, to minimize transfers from CPU to GPU (which are extremely slow).
With glBegin/glEnd, every vertex must go over the bus every time it's drawn.
With the glMatrix* functions, every matrix change must also get moved over.
With modern OpenGL it's all on the GPU from the get-go. It lets the GPU do more of the work, and less time is wasted on CPU<->GPU communication.
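A minimal sketch of the difference (window/context creation and vertex attribute setup with glVertexAttribPointer are omitted for brevity, so this isn't a complete program):

```c
#include <GL/gl.h>

/* Old immediate mode: every vertex crosses the bus, every frame. */
void draw_old(void)
{
    glBegin(GL_TRIANGLES);
    glVertex3f(-1.0f, -1.0f, 0.0f);
    glVertex3f( 1.0f, -1.0f, 0.0f);
    glVertex3f( 0.0f,  1.0f, 0.0f);
    glEnd();
}

/* Modern style: upload once into a VBO, then draw from GPU memory. */
static GLuint vbo;

void setup_once(const float *verts, long bytes)
{
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, bytes, verts, GL_STATIC_DRAW); /* one CPU->GPU copy */
}

void draw_new(void)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glDrawArrays(GL_TRIANGLES, 0, 3); /* no per-vertex transfer */
}
```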
You have to install it, but you should use SDL or SFML instead (you have to install those too). I recommend SFML, which you can find at http://sfml-dev.org
They've been deprecated, which means they're being phased out.
They aren't being phased out; they were already removed. They were only deprecated for one minor version (3.0) and then removed in 3.1.
Why not? I find them perfectly fine and I like them better than that other crap. (What's the point of using the others?)
So yeah, if you want to use any new features of OpenGL, you'll have to let go of those functions.
They are slow, for one thing; as Disch said, passing memory from the CPU to the GPU is slow. If you are passing a 100,000+ vertex model every frame, that's going to be slow. One way I've seen of mitigating the CPU-to-GPU cost is to render every object that uses the same shader at the same time, so that the shader setup (shader constants included) doesn't need to happen again for each instance.
It was removed because, at the end of the day, it was never used in production code for that reason. I mean, I like it for debugging purposes - it was a fast, simple way to draw - and I kind of wish they would implement a new solution for that. The removal also got rid of GLSL built-ins such as gl_ProjectionMatrix (or something like that), so you have to handle passing your own matrices to the shader.
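Since the built-in matrices are gone, you build your own on the CPU and upload them with glUniformMatrix4fv. A sketch of constructing an orthographic projection matrix in column-major order, which is what the old glOrtho set up on the matrix stack (the function name here is made up):

```c
/* Fill a 4x4 column-major orthographic projection matrix,
   equivalent to the old glOrtho(l, r, b, t, n, f). */
void make_ortho(float *m, float l, float r, float b, float t, float n, float f)
{
    for (int i = 0; i < 16; ++i) m[i] = 0.0f;
    m[0]  =  2.0f / (r - l);
    m[5]  =  2.0f / (t - b);
    m[10] = -2.0f / (f - n);
    m[12] = -(r + l) / (r - l);
    m[13] = -(t + b) / (t - b);
    m[14] = -(f + n) / (f - n);
    m[15] =  1.0f;
}
```

You'd then pass it to your shader with glUniformMatrix4fv(location, 1, GL_FALSE, m) instead of relying on gl_ProjectionMatrix.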
It's phased out in newer versions of OpenGL, but older versions still exist on your computer; you just need to target them.
For some 3D graphics programs, it seems like it would not be a bad idea to have fallback modes.
If you want it to work with Ivy Bridge Intel integrated graphics, you should be able to target or fall back to 3.0 and be safe. If you want it to be supported on a VM, then you need to target or fall back to 2.1, and use all of those ancient functions.
Finally, the new Haswell integrated graphics will support version 4.0 on Linux.
So if you're writing something like a GUI application, then you might be better off with version 2.1, or at least falling back to 2.1, because performance would not be an issue and you would want it to be supported on a VM.
Games generally don't work on VMs anyway, so you might as well only target or fall back to 3.0 if you're trying to make it well supported.
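The fallback choice above can be sketched as a simple version check (the cutoffs mirror this post's advice; the helper name is made up):

```c
/* Pick a target GL version given what the driver reports supporting.
   Prefer 3.0 (Ivy Bridge-class and up); fall back to 2.1 (VMs, old drivers). */
typedef struct { int major, minor; } gl_version;

gl_version choose_target(int supported_major, int supported_minor)
{
    int v = supported_major * 10 + supported_minor;
    if (v >= 30) {
        gl_version t = {3, 0};
        return t;
    }
    gl_version t = {2, 1};
    return t;
}
```

In practice you'd get the supported version from glGetString(GL_VERSION) or from your windowing library before creating the final context.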