When to use what language and why

closed account (EzwRko23)

May I bring up Runescape? Personally, I plan on pursuing video game programming as a career, so I really just care about which programming language will be fast enough to get me 60 FPS.


The game industry is a different thing. The game industry was programming in assembly when everyone else was programming in C. The game industry was programming in C when everyone else was programming in C++. Now the game industry is programming in C++ when everyone else is programming in Java or C#. Wait ten years, and computers will be fast enough that you'll achieve that 60 FPS even in interpreted Python. :D

BTW: Jake 2, a Java clone of Quake 2, runs at the same speed as the original Quake 2 written in C (which means >200 FPS on a not-so-modern computer).
Awesome article, thanks a ton, bro!
I just forwarded this to most of the office at my work.
That's a very good idea! The gaming industry is very trendy right now. Many young people love to play online games.
Great topic, thanks for all the input folks.

If I could ask a few more questions, please. For me, it is not practical to learn every possible language, so I'm trying to explore the options and select a small subset of languages that will help me tackle my problem domain.

I currently work in investment banking, developing algorithmic trading engines. In parts of this arena, efficiency is imperative: if you get beaten to the market, no matter by how little, even if it's 1 ns, you lose.

As such, I've heard many people argue that C/C++ are the languages of choice.

However, I've noticed that more and more investment banks, hedge funds, etc. are developing their time-critical systems in Java, C#, etc. Why do they do this? I've heard it said that garbage collection can harm efficiency. Has garbage collection advanced sufficiently to prevent this?

So basically, what I'm asking is: what makes these teams choose one of these pure OO languages over C++?

TIA
Steve

closed account (S6k9GNh0)
I can't picture Java or Python (or similar) ever truly taking over something like C or C++. Professional games still use ASM and C. Games are much easier to write in an OOP style because nearly everything in a game naturally maps to an object. Java isn't used for a multitude of reasons, a big one being memory efficiency. Just because we have resources doesn't mean we should blatantly waste them. Even if Java were as fast as C or C++, I still don't like the idea of a VM at all and would prefer something like D over Java any day.
closed account (EzwRko23)

Java isn't used for a multitude of reasons, a big one being memory efficiency. Just because we have resources doesn't mean we should blatantly waste them.


Java doesn't waste resources more than C++ in most cases, and in some cases it is more memory-efficient than C++. Can C++ compilers use 32-bit pointers instead of 64-bit ones on 64-bit machines? If not, C++ wastes a significant amount of memory on 64-bit architectures. :P

IMHO, Java isn't used in games for just these three reasons:
1. lots of code / libraries have already been written for C++
2. games don't need the security, code stability, or maintenance support required for, e.g., a banking application that lives for 10 years and must run 24/7
3. games need more low-level tuning, and that is easier to do in C++

So Java indeed does not offer enough value to justify switching to it.
Anyway, some games originally written in C have been successfully ported to Java with no significant performance loss or increase in resource consumption (e.g. Jake2). Just to show it can be done.
closed account (S6k9GNh0)
Give me links and I'll do benchmarks on both with a C compiler of my choice.
closed account (EzwRko23)
What links? Jake2?
Here it is: http://bytonic.de/html/jake2.html


You can also do a simpler benchmark.
Create an array of 10 million pointers in Java and in C++. Compile for a 64-bit CPU. Run the Java program with -XX:+UseCompressedOops. Measure memory consumption and see how the C++ version loses this memory benchmark by almost a factor of two (C++: a pointer = 64 bits; Java: a pointer = 32 bits). This optimisation technique was discussed some time ago on the GCC mailing lists, and they dropped it, because there is no technical possibility of implementing it without breaking the current model of C++ compilation.
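
To make the C++ half of that benchmark concrete, here is a minimal sketch (the in-code arithmetic only estimates the backing store; real resident memory should be measured with an external tool, and the Java comparison in the comments assumes compressed oops are enabled):

#include <cstdio>
#include <vector>

int main() {
    // 10 million pointers; on an LP64 system each one is 8 bytes,
    // so the vector's backing store alone is ~80 MB.
    // (With -XX:+UseCompressedOops, a Java Object[] of the same length
    // stores 4-byte compressed references: roughly half the memory.)
    std::vector<int*> ptrs(10000000, nullptr);
    std::printf("sizeof(int*) = %zu bytes\n", sizeof(int*));
    std::printf("backing store ~= %zu MB\n",
                ptrs.size() * sizeof(int*) / (1024 * 1024));
    return 0;
}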

Want a real-world case? Read about Firefox 2 and its memory fragmentation problems. C++ memory allocation caused more than 2/3 of its memory to be wasted. The problem would never exist in Java or .NET.

So, there is no SIMPLE answer. There are some cases where managed platforms are far more efficient, both in terms of memory and performance. Anyway, the differences are usually negligible for most large applications. That is why so many server-side apps choose Java because of... high performance.
xorebxebx wrote:
The game industry is a different thing. The game industry was programming in assembly when everyone else was programming in C. The game industry was programming in C when everyone else was programming in C++. Now the game industry is programming in C++ when everyone else is programming in Java or C#. Wait ten years, and computers will be fast enough that you'll achieve that 60 FPS even in interpreted Python. :D
Different aspects come into play here; as said, the industry has evolved considerably since the old assembly days, including in its visions of, and approaches to, the creation of simulations and other interactive software.

Although computers might be fast enough in 10 years to run interpreted Python games at 60 FPS, that doesn't mean they should. The need for new technologies has increased greatly, and will keep increasing. Using interpreted Python, you could run games that are "old" by then. Surely, Python will have evolved into a better language by then, but the same can be said for C++ (and any other language, at that).
closed account (S6k9GNh0)
The reason pointers are 64-bit on a 64-bit OS is quite obvious: once you reach past the 4 GB limit, you are officially using 64-bit pointers. I honestly don't see how Java would be able to use 32-bit pointers in a 64-bit environment when it doesn't know where in the address space its memory will be placed. Although perhaps something like this could be made to work in C/C++?
http://www.springerlink.com/content/h6803610u1124354/
-XX:+UseCompressedOops
http://wikis.sun.com/display/HotSpotInternals/CompressedOops
It can be implemented in C++ (and even C) if you're willing to give up pointer arithmetic for the "compressed" object pointers. Basically, you treat the so-called pointers as handles. Allocation involves requesting specific virtual memory addresses from the OS. Let's say 0x<32-bit handle>00000000. That way you can have single objects of up to 4 GiB. Then, to decode, you need to do:
#include <cstdint>

typedef uint32_t handle; // a "compressed" object pointer

template <typename T>
inline T *deref_handle(const handle &p){
	// shift the 32-bit handle into the top 32 bits of a 64-bit address
	return (T *)(((uint64_t)p)<<32);
}
This is most likely how it's implemented in the JRE.
Note that the same technique can be applied with 32-bit pointers (16-bit handles), although then you're limited to only 65536 objects of 64 KiB each.

It's hard to implement for all memory allocation, but I think this addressing model would be the exception rather than the norm in C/C++ code.
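
For completeness, a rough sketch of the allocation side under the same assumptions (Linux-style POSIX mmap, error handling omitted; alloc_at_handle is a hypothetical helper, not anything taken from the JRE):

#include <sys/mman.h>
#include <cstdint>
#include <cstddef>

// Reserve memory inside the 4 GiB-aligned region named by a handle.
void *alloc_at_handle(uint32_t h, std::size_t size) {
    void *want = (void *)((uint64_t)h << 32);
    // MAP_FIXED forces the mapping to land exactly at 'want'
    // (and silently replaces anything already mapped there).
    return mmap(want, size, PROT_READ | PROT_WRITE,
                MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
}

A real allocator would of course pack many objects into each region and recycle free handles, which is where the "hard to implement for all memory allocation" part comes in.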
*Albatross uncovers her ears*
Want a real-world case? Read about Firefox 2 and its memory fragmentation problems. C++ memory allocation caused more than 2/3 of its memory to be wasted.

*Albatross re-covers her ears and mouth, resisting the temptation to point out that the language is being blamed for a fault made by the programmers*


-(Albatrosses cannot type out signatures with their ears covered)
closed account (EzwRko23)

*Albatross re-covers her ears and mouth, resisting the temptation to point out that the language is being blamed for a fault made by the programmers*


It is funny: if someone creates a bloated C++ application it is always the programmer's fault, and if someone creates a bloated Java application, it is always Java's fault... :D
However, it was not the programmers' fault in the Firefox case. The problem of memory fragmentation is inherent to all C/C++ manual memory allocators. Sometimes they fare extremely badly. Not often, but it happens.



It can be implemented in C++ (and even C) if you're willing to give up pointer arithmetic for the "compressed" object pointers. Basically, you treat the so-called pointers as handles.


It cannot be, or at least it would not be practical, because of the severe performance penalty.


It's a common optimization in the JVM world, where the just-in-time compilers do it based on the configured heap sizes and because there's no fixed ABI.

But at the C compiler level the problem is that you'd have a completely different ABI of your own and wouldn't be able to use any standard libraries unless you recompiled them. And no system calls without a translation layer, or some way to map all system interfaces to the standard ABI. Java avoids that by having clearly defined interfaces to the outside world, but that's not the case in C. Or you annotate all structures where this translation should happen.

severe performance penalty
What, so somehow it's faster to shift a pointer in Java than in C? The JRE can't avoid decoding the pointer in order to use it any more than C can, so I don't see how the overhead would be larger.
closed account (S6k9GNh0)
It is funny: if someone creates a bloated C++ application it is always the programmer's fault, and if someone creates a bloated Java application, it is always Java's fault... :D


Yes, because C++ usually has ways to reduce bloat, whereas Java uses 5 MB for just about anything basic you do and usually gives so-so performance. Although the methods programmers normally use (and I have seen some messed-up Java code) don't help either.

C will ALWAYS have the ability to be faster than Java; when it is slower, it's the programmer's fault. Just about any "optimization" that Java has can be implemented and used in C. If not, prove me wrong.
closed account (EzwRko23)

Any "optimization" that Java has can be implemented and used in C just about.


No. Just the opposite.

I've already shown a kind of optimization that cannot be done in C: compressed object pointers.

Java HotSpot can do some optimisations that static compilers cannot, because it has more information and more power: e.g. it can dynamically change the ABI of some classes at runtime, it can move objects in memory, it can gather branch statistics for better branch prediction, it knows exactly what code is loaded and can inline polymorphic virtual calls or remove locks, and it can specialize code at runtime. Some of these possibilities are still unused and unexplored; that is why we observe roughly a 30%-50% average speedup with each big release of Java, but almost no speedup with major releases of C++ compilers (which already do their best). E.g. Scala as of version 2.9 is intended to perform automatic code parallelization (automatic = without help from the programmer). C compilers can't do that, because it is extremely difficult to tell whether some C code is free of side effects, so automatic parallelization would be unsafe.
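
To make the inlining point concrete, here is a minimal C++ sketch of the kind of call site involved (the Shape/total_area names are hypothetical, purely for illustration):

#include <cstddef>

struct Shape {
    virtual double area() const = 0;
    virtual ~Shape() {}
};

// A static compiler that sees only this translation unit must emit an
// indirect virtual call in the loop; a JIT that has observed a single
// receiver type at runtime can speculatively inline area() behind a
// cheap type check, and deoptimize if the assumption is ever violated.
double total_area(Shape *const *shapes, std::size_t n) {
    double sum = 0;
    for (std::size_t i = 0; i < n; ++i)
        sum += shapes[i]->area();
    return sum;
}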




Yes, because C++ usually has ways to reduce bloat, whereas Java uses 5 MB for just about anything basic


This is just as ridiculous as saying that C++ is bloated because iostream adds over 200 kB to the executable size. Wake up: you are living in times where even low-end mobile phones have tens of MB of RAM.
Jake 2, a Java clone of Quake 2, runs at the same speed as the original Quake 2 written in C (which means >200 FPS on a not-so-modern computer).
I'm guessing that Jake 2 was a complete rewrite of Quake 2 in most aspects, in which case comparing the two is pointless. Jake 2 may have had more specific optimizations than Quake 2, and possibly unnoticeable missing features.

xorebxebx, my main problem with Java is that I have a mentality that interpreted languages are slower than compiled ones, and that this will always be the case. I started with GML, which was interpreted and incredibly slow compared to C++: processing a for loop with 20000 iterations would take 2-3 seconds on most computers. So keep in mind, I've had traumatic experiences with interpreted languages before.

Also, looking at the game screenshots now, it looks like the max polys on screen will be < 20,000 on most occasions, which can be handled even by a crippled Intel chipset without much difficulty. Again, there's Runescape. Its graphics quality is generally on the same level as Quake 2 in terms of poly count (from the looks of it), but with a few shaders as well. On my laptop, where I've rendered 10 times that number of polygons in an unoptimized C++ OpenGL program on Mac OS X (which has been shown to perform worse than Windows), Runescape gets 10-15 FPS when there are, let's see, about 50 of those low-quality < 100-poly people meshes on the screen. Hell, even fewer than that: somewhere around 20-30 will slow me down significantly.
closed account (S6k9GNh0)
A 200 kB executable for a "Hello world" program is bloated! Don't tell me it's not, or you don't understand how much data can fit into 200 kB, let alone 5 MB! I personally have 1 GB of RAM; 200 kB is nothing. But that doesn't mean I'll simply throw around random arrays of memory just because I can. It promotes bad programming habits and an overall sloppy outcome. I'm not going to sit there and read the fucking C program that prints the GPA of a high school student when the size of the executable is nearly 5 MB. Our program, along with a hundred others or more, plus the operating system itself, all have to use the same pool of memory for processing (edit: for the most part, anyway).

And you can do everything in C; it's just not practical. It's eventually up to us to wrap more advanced techniques around it to improve speed and tighten memory usage, and even if Java somehow does worm its way past C in speed (I highly doubt it will in memory usage), that simply means it's time for new and better techniques and possibly compilation tactics. The idea of a virtual machine beating the speed of the machine itself is rather off, no?

Perhaps at some point techniques will become too advanced for the common programmer such as myself, but even then, I can't think of a way to make a virtual machine faster than the actual machine.
closed account (EzwRko23)

I can't think of a way to make a virtual machine faster than the actual machine


No one claims a VM is faster than the actual machine. But VMs can often do a better job than static compilers. Static compilers miss lots of optimisation opportunities just because they have too little information. Most of the C++ code out there is not optimal.



A 200 kB executable for a "Hello world" program is bloated! Don't tell me it's not, or you don't understand how much data can fit into 200 kB, let alone 5 MB!


And to run that 200 kB program you need an operating system that requires >64 MB just to boot.
It is bloated if you are talking about embedded software. It is not if you are talking about PCs.

@NGEn: Judging a whole platform by one buggy program is pointless. Recently I downloaded an open-source flight simulator written in C++ and it ran terribly, much worse than any Java game I have happened to use. But this doesn't mean C++ sucks.
xorebxebx wrote:
A 200 kB executable for a "Hello world" program is bloated! Don't tell me it's not, or you don't understand how much data can fit into 200 kB, let alone 5 MB!
And to run that 200 kB program you need an operating system that requires >64 MB just to boot.
It is bloated if you are talking about embedded software. It is not if you are talking about PCs.

It's bloated regardless of the size of your hard drive. The size of the executable should reflect the complexity of the program, such that the two are directly proportional. You wouldn't expect a 3D adventure game to be 100 KiB, and you wouldn't expect a hello-world program to be 200 KiB.