I'm a hobbyist 3D graphics programmer, and I'm about to write a camera class to enable mouselook in my terrain-generation project. I thought I'd do some testing in a console application to see how to optimise C++ functions involving vector maths.
I have been timing my code using a QueryPerformanceCounter timer class and measuring the time taken in milliseconds. I typically use a for loop to run the code 20,000 times, then divide by the iteration count to get an average time per call.
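For reference, the loop-and-divide pattern looks roughly like this; it's a portable sketch using std::clock rather than QueryPerformanceCounter, and the function names are mine, not the original timer class:

```cpp
#include <ctime>

// Run `fn` `iterations` times and return the average time per call in
// milliseconds. std::clock is coarser than QueryPerformanceCounter but
// illustrates the same run-many-times-and-divide idea.
double timeAverageMs(void (*fn)(), int iterations)
{
    std::clock_t start = std::clock();
    for (int i = 0; i < iterations; ++i)
        fn();
    std::clock_t end = std::clock();
    double totalMs = 1000.0 * static_cast<double>(end - start) / CLOCKS_PER_SEC;
    return totalMs / iterations;
}

// Dummy operation to time; `volatile` discourages the optimiser from
// deleting the work entirely.
void sampleOp()
{
    volatile float f = 1.0f / 3.0f;
    (void)f;
}
```

One caveat with this pattern: if the optimiser can prove the loop body has no observable effect, it may remove it, which matters once optimisations are turned on.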
I created a Vector class that overloads + and - for vectors, and overloads / and * for scaling by a float. My main concern was optimising the normalise and cross-product functions. I've found something strange, though. A call to the overloaded / operator takes around 0.000070 ms to complete, yet if I write the same division out inline it only takes 0.000005 ms. That's a big difference! Why are operator-overload calls so slow? Doing the division yourself instead of going through the class's operator function is the same arithmetic, so why such a gap? Is it function-call overhead? I'm using Visual Studio 2008.
Here is my code:
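A minimal sketch of a Vector class along the lines described above; the member names x, y, z and the exact function names are assumptions, not the original code:

```cpp
// Sketch of a 3D vector with overloaded operators: + and - work on
// vectors, * and / scale by a float, matching the description above.
struct Vector
{
    float x, y, z;

    Vector(float x_ = 0.0f, float y_ = 0.0f, float z_ = 0.0f)
        : x(x_), y(y_), z(z_) {}

    Vector operator+(const Vector& v) const { return Vector(x + v.x, y + v.y, z + v.z); }
    Vector operator-(const Vector& v) const { return Vector(x - v.x, y - v.y, z - v.z); }
    Vector operator*(float s)        const { return Vector(x * s, y * s, z * s); }
    Vector operator/(float s)        const { return Vector(x / s, y / s, z / s); }

    // Cross product, one of the functions being optimised.
    Vector cross(const Vector& v) const
    {
        return Vector(y * v.z - z * v.y,
                      z * v.x - x * v.z,
                      x * v.y - y * v.x);
    }
};
```

In a release build a one-line operator like this typically gets inlined, at which point the "function call" and the hand-written division compile to the same instructions.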
1. Without turning on optimisations, I defined the functions inside the class and also passed the vectors by const reference. There was no change to my timings; they averaged the same as before.
2. I looked into turning on compiler optimisations, and this is what made the biggest difference.
But the first things I did were these: you can get down to 0.000026 ms just by turning off debug information in the project properties (removing /ZI). (Note that dividing inline without functions still takes 0.000005 ms.) The second thing was turning off basic runtime checks (removing /RTC1), which got me down to 0.000015 ms. The last thing was to turn on optimisations, and this makes the code fast enough that the timer can't see it: I get 0 ms no matter how many iterations I set. That might be an artefact of the timing code when optimisations are on, since the compiler can remove a loop whose result is never used. But I can still time some code.
With optimisations on, my vector-normalisation function takes ~0.000023 ms. It was my costliest operation before, at ~0.000098 ms.
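For context, a normalisation routine along these lines does one square root and three divides, which is why it tends to be the costliest per-vector operation; the struct and names below are assumptions:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Normalise to unit length. The sqrt plus three divides make this one
// of the more expensive small vector operations.
Vec3 normalise(const Vec3& v)
{
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    Vec3 r = { v.x / len, v.y / len, v.z / len };
    return r;
}
```

A common tweak is to compute 1/len once and multiply by it three times, trading the three divides for three cheaper multiplies.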
Note that you can get the equivalent of all the property settings I changed simply by switching from a debug build to a release build. Obviously all the debug information and checks slow the code down considerably. I'd never really paid attention to debug versus release when writing code before, but it makes sense now, and I'm glad to get a little more insight into what goes on when code is built.
One noteworthy thing is that marking the class functions inline and passing arguments by const reference instead of by value did nothing to improve the speed, either before or after changing the optimisation and debug settings. Writing smarter, tighter code doesn't seem to pay off for such short operations and functions, which is annoying; I'm a fan of doing things smarter and smaller.
Anyway, thanks for the help. I love programming; there's always something new to learn.
Note that Disch did not say that debug mode was pointless, only that trying to time an operation's performance with debug mode turned on was. Debug mode can be invaluable when you pass that 10K LoC mark and some error comes up out of nowhere.
I don't know much about IDEs or getting the most out of them. I just wanted to learn, so I bought VS 2008 and went through a whole heap of tutorials. So I have very little knowledge of what debug mode and release mode actually are and what they do.
I just left out a semicolon somewhere in the code and tried a compile with the release settings, and I still get error messages in the Output window that point me to the line after the missing semicolon. That's pretty much all I've ever used to get through bugs; I've never used other tools.
So what can you do in a debug build that you can't do in a release build?