"Vectors are better than Linked Lists" Question

https://www.youtube.com/watch?v=YQs6IC-vgmo

I'm a bit confused as to why Bjarne Stroustrup says that linked lists are slower than vectors and that they should be avoided. In all of the tests I do, when I am using a linked list as a queue (only adding objects to the end of the list and only removing from the beginning: first in, first out), my linked list results are much faster.

I am trying to figure out if there is any reason I should use a vector over a linked list for a first-in-first-out queue. I am currently using a linked list for outgoing packets in a program, and I am worried that maybe I should not be using a linked list if the designer of C++ is saying not to use them.

Thanks for any input.

If anyone is interested, this is how I conducted my speed tests. Maybe I did something wrong or there is something I am not understanding?
http://www.mediafire.com/download/7ddsgjgp35o3a9k/PreTutorial10_Speed_Test_Demonstration.rar
In all of my tests that I do, when I am using a linked lists as a queue (only adding objects to end of linked list, only removing from beginning of linked list first in first out), my linked list results are much faster.


If you made use of push_back and pop_front with a std::vector then I would expect the performance to be the same as a linked list O(1). But a linked list is less efficient in terms of memory because it has to maintain a pointer for each node; this means it will take longer to construct as well. However, a linked list is better for insertion in the middle of the list. Then there is std::deque, which is better than vector for insertion because of the way it is stored.

I would be interested to see how you implemented the timing for the comparison.
I just reuploaded the solution. Link is in the initial post. I was making a mistake and not using pop_front to remove the first element. I am still getting faster results with a linked list, however. Any idea why this is?
I was unable to open the rar file - I use Linux.

What was the main idea behind your timing procedure - did you use std::clock?
I am sorry about that. I used a rigged-up timer to get the differences using GetTickCount.

By any chance can you download .zip files?
http://www.mediafire.com/download/hyut50sqf5kpcj5/Tutorial_10_%28PreDemonstration%29_Speed_Demonstration.zip

If not, I can upload the sources to pastebin, I just thought that would be some trouble for people to look at since it would be 5 separate files.
Right, I did manage to look at that.

So the timer uses a Windows function, GetTickCount(). The thing to be careful about is to use a CPU timer, not a wall clock. Now, I am a Linux guy - I don't know which category this function fits into :+)

The problem with wall clocks is that if the OS runs something else in another thread, that gets included in the time as well - so it is not reliable for this purpose.

std::clock is a standard C++ function, so it is portable, and it measures CPU time.

I had a quick read of the Windows docs; this one might be better if you were to continue using Windows functions:

https://msdn.microsoft.com/en-us/library/windows/desktop/dn553408(v=vs.85).aspx

Hope this helps a little? :+)
Also, the numbers you have aren't large enough for a decent test - try it with 100 million or something. :+D
In SpeedTest.cpp:

You time the inner for loop and then sum them all. Why not just time the whole thing? Also, the average uses integer division, so it truncates.
I tried it with 1,000,000 and I still got that the linked list was about 3x faster. I am going to look into changing how my timer works, though, to use std::clock. Thanks for that tip. I would keep going with bigger numbers, but one million showed about the same 3x difference as 50,000, so it seems a bit redundant. I had also tried 100,000,000, but the program broke when the vector size reached a bit over 11 million.

Also, might be worth noting that my packet manager will most likely never have over 10,000 packets at once in the linked list in its actual implementation.


Edit: Just did it with 10 million and the linked list test was nearly 8x faster as far as appending. About to try with retrieval.

Edit2: The Retrieval results were that linked list was still about 4x faster than the vector on retrieval.
Have you changed the clock, though? It won't make any difference until you do :+)

Just some other things about the code:

Put only one class into a file - both for header and cpp files - it makes it easier to find things. I like to name my header files *.hpp - it means a header with C++ code inside.

A function should only do one conceptual thing. It's all very well to have AppendSpeedTest, but I would have it call separate functions for the vector and list.

Avoid using new. The main problem is that if something happens (like an exception) before the destructor or delete is called, you get a memory leak. Use smart pointers instead: std::unique_ptr or std::shared_ptr. If you use STL containers for everything, then there is no need to worry about memory management at all.

Why do you implement your own linked list? You could just use std::list.
I was using my own linked list because a std::list is a doubly linked list and I only needed to iterate forward.

However... I found a big issue with the way I was running my tests.

I was running them in debug mode, and for some reason this was drastically affecting my numbers. It turns out vectors ARE faster when using release mode. Thanks for all of your help on this matter.
Ah, that will do it :+D


I was using my own linked list because a std::list is a doubly linked list and I only needed to iterate forward.

There is std::forward_list, which is a singly linked list.

http://en.cppreference.com/w/cpp/container/forward_list

std::vector is a very useful container - one can make it behave like a stack, a queue, or an array. But then there is std::deque, which has some advantages.

http://en.cppreference.com/w/cpp/container/deque


Just wondering what you think about this comment I made earlier. You could simplify your code a lot.

TheIdeasMan wrote:
If you use STL containers for everything, then there is no need to worry about memory management at all.
I think i'll switch to using shared_ptrs instead of raw ptrs. It makes sense. Thanks.
Just thinking: you may only need smart pointers if you were going to have a polymorphic container of them; otherwise just use an STL container of ordinary objects.

Good Luck !! :+)
Your conclusions are incorrect.

> If you made use of push_back and pop_front with a std::vector
There is no std::vector::pop_front() function.

> then I would expect the performance to be the same as a linked list O(1).
Why? Your hypothetical `pop_front()' would be O(n).

> I was making a mistake and not using pop_front to remove the first element.
You are still not using the nonexistent `pop_front()':
		for (int j = 0; j < retrievecount; j++)
		{
			Packet p = vec_packets[0];
			vec_packets.pop_back();
		}
Your vector implementation is not a queue, but a stack.

> one million seemed to be about 3x time difference just like 50,000 was
> the linked list test was nearly 8x faster as far as appending.
> linked list was still about 4x faster than the vector on retrieval.
>> Have you changed the clock though - it won't make any difference until you do
Occam's razor: you are getting so much difference because the cuckoo in the clock died when measuring the vector implementation.
Let's try to shoot where it matters, please.

> I was running them in debug mode and for some reason this was drastically
> affecting my numbers. It turns out vectors ARE faster when using release mode.
Optimizations are quite aggressive, sometimes too much.
Still, you are measuring different things. Deleting the last element of a vector is quite an easy operation: just call one destructor and move a pointer. Deleting the first element would mean moving all the other elements back one position, which is quite expensive.


However, you may implement a queue using a vector, and it would be a lot faster than a linked list. You simply need to treat it as if it were a circular buffer; that way you don't move any elements, you just adjust the `front' and `end' indices.
By the way, there is `std::queue' also.


> I was unable to open the rar file - I use Linux .
Then you may want to install `unar'

> I just thought that would be some trouble for people to look at since it
> would be 5 separate files.
I recommend github for those cases.
@ne555

Thanks :+)

I must have confuzzled vector with deque - the latter, unsurprisingly, has push and pop at both ends, and the complexity for those is constant, i.e. O(1).

> Then you may want to install `unar'

I will look into that :+)

ThingsLearnt += 3;
@ne555 Thank you very much. I shouldn't program when I'm tired - I didn't even realize that it said pop_back(). I've decided to switch to using a std::queue. Thanks a lot.
Topic archived. No new replies allowed.