Basically I have two linked lists, one being the STL implementation and the other my own. I ran a benchmark to see which is faster, and the STL one wins by a large margin:
using_method1 (my list)
Total time: 1531 in ms
Total time: 1281 in ms minus random
using_method2 (stl list)
Total time: 110 in ms
Total time: -140 in ms minus random
The weird thing is that if I increase the value of
const int limit = 10000000;
by an order of magnitude, I get a similar time for method2, which doesn't make sense (while method1 takes ten times longer). I wonder if maybe method2 isn't being evaluated at all? I'm using g++ with the -O3 flag. Here's the code on pastebin: http://codebin.org/view/77e80e13
I see. I removed the std::cout calls from lines 47 and 108 (the destructor and the pop_back function), recompiled, ran it, and got:
Total time: 172 in ms
using_method1
Total time: 1641 in ms
Total time: 1469 in ms minus random
using_method2
Total time: 93 in ms
Total time: -79 in ms minus random
Process returned 0 (0x0) execution time : 2.109 s
Press any key to continue.
Why would using std::cout cause "interference", by the way?
EDIT: Running it with
const int limit = 1000000000;
const int mod = 256;
Outputs
Total time: 11500 in ms
using_method1
Total time: 171063 in ms
Total time: 159563 in ms minus random
using_method2
Total time: 12406 in ms
Total time: 906 in ms minus random
So method2 is actually being executed by the processor, although its execution time is only about a second once the random "dart throwing" baseline is subtracted.
I unrolled the traversal of the lists so that there is less indirection from branching. Trust me, I tried using method2 (the STL list) inside a for loop and it's just as slow as method1. The thing is that method1 doesn't speed up when "unrolled", while method2 is lightning fast. I don't understand why that is.