Perfecting my linked list

That's a curious observation, but I believe something else is amiss here. Have you debugged it to make sure that toA isn't actually always true? It isn't a static variable, so it should be reinitialized on every call. Another guess would be that pNext is sending the pointer askew.

As for the code, I have something that'll help you test it (assuming you're using Windows; I'll also supply the code for the timer function):
#include <iostream>
#include <windows.h>

// Borrowed from here:
// http://stackoverflow.com/questions/1739259/how-to-use-queryperformancecounter

double PCFreq = 0.0;
__int64 CounterStart = 0;

void StartCounter() {
   LARGE_INTEGER li;

   if (!QueryPerformanceFrequency(&li))
      std::cout << "QueryPerformanceFrequency failed!\n";

   PCFreq = double(li.QuadPart) / 1000.0;
   QueryPerformanceCounter(&li);
   CounterStart = li.QuadPart;
}
double GetCounter() {
   LARGE_INTEGER li;
   QueryPerformanceCounter(&li);
   return double(li.QuadPart - CounterStart) / PCFreq;
}


#include <cstdlib>   // rand, srand
#include <ctime>     // time
#include <iostream>
// plus whichever header defines vp::list

int main() {
   srand(time(0));
   // Default Ctor
   vp::list<int> myList1;
   StartCounter();

   for (int i = 0; i < 1000000; i++)
      myList1.push_back(rand());

   std::cout << "myList1 - It took " << GetCounter() << "ms to add " << myList1.size() << " items.\n\n";
   
   StartCounter();
   myList1.sort();
   std::cout << "myList1 - It took " << GetCounter() << "ms to sort " << myList1.size() << " items.\n\n";
   
   StartCounter();
   myList1.unique();
   std::cout << "myList1 - It took " << GetCounter() << "ms to unique items (" << myList1.size() << " items left) .\n\n";
   
   return 0;
}


I also implemented my merge function, and it works nicely, even with overly large lists.

I noticed something odd, though: timing my unique function against the STL's, mine takes longer to run. I'm assuming it's due to the overhead of calling erase each time a copy is found. I may have to implement manual node deletion with a single size adjustment, or redesign my erase function.
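For what it's worth, here's a minimal sketch of the manual-unlink idea. The `Node` struct and `unique` function here are purely illustrative, not the actual vp::list internals: splice the duplicate out in place, and let the caller adjust its cached size once using the returned count.

```cpp
#include <cassert>
#include <cstddef>

// Bare illustrative node -- NOT the actual vp::list node type.
struct Node {
    int value;
    Node* prev;
    Node* next;
};

// Remove consecutive duplicates from an already-sorted list by unlinking
// and deleting nodes directly, instead of paying for a full erase() call
// per duplicate. Returns how many nodes were removed so the caller can
// adjust its cached size once.
int unique(Node* head) {
    int removed = 0;
    Node* cur = head;
    while (cur && cur->next) {
        if (cur->value == cur->next->value) {
            Node* dup = cur->next;
            cur->next = dup->next;                 // splice dup out
            if (dup->next) dup->next->prev = cur;
            delete dup;
            ++removed;
        } else {
            cur = cur->next;
        }
    }
    return removed;
}
```

The point of returning a count is that the size member gets touched once per call, instead of once per deleted node.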
There is definitely something amiss there.
I found that indeed toA is assigned true every iteration.
Every node was being placed in list pA, and none in list pB.

This means that most of the code in that while loop is pointless.
After stripping out the bool toA variable and other code never executed I now have:
dNode* dNode::split(void)
{
    if( !next ) return this;

    dNode *pA = this, *pB = this->next, *iter = pB->next;

    while( iter )// split the list in 2
    {
        dNode* pNext = iter->next;// save this
        iter->insert_after( pA );
        iter = pNext;

    //    iter = iter->next;// save this
    //    iter->prev->insert_after( pA );
    }
    if( pB->prev ) pB->prev->next = NULL;
    if( pA->prev ) pA->prev->next = NULL;
    pA->prev = NULL;
    pB->prev = NULL;

    if( pA->next )
        pA = pA->split();
    if( pB->next )
        pB = pB->split();

    return pA->merge( pB );
}// end of dNode::split() 

I no longer understand how my function works (and I never did).
I have tested merge_sort using this new version on lists of up to 5,000 nodes (so far) and it works.

I'll check out the code for testing which you just posted a bit later. I adapted the code you posted last page and found it sorts 5,000 items in 484ms.

Further evidence that oddities are in play: if I use the two commented-out lines in place of the three lines above them in the loop body (as you do), it crashes. I am mystified.

EDIT:
I found that the following works (by trial and error) for my while loop in the split():
while( iter )// split the list in 2
{
    iter->insert_after( pA );
    iter = iter->prev;
}

Using this, the sort time for a 5,000-node list is 156ms. This is 3x faster than the last version. I may stick with this.
EDIT2: Still working at 60,000 nodes but sort time = 25.7 sec.
This means my method is a couple of orders of magnitude slower than yours.
My selection_sort() time for 60,000 nodes = 9.2 sec. (completing in 59,997 iterations)
I added a recursive implementation to https://gist.github.com/4239718
It's getting a bit unwieldy for one file.
Just curious cire, what's your runtime on 1 million random elements sorted? And what's your processor.

And getting a bit unwieldy for one file? That's not even 300 lines. I'm trying to manage almost double that.

Note: Running your code from github on my system, I got:
1355039310
List is sorted.
1796ms sorting the list.
List is sorted.
1437ms sorting the list.

Process returned 0 (0x0)   execution time : 9.188 s
Press any key to continue.


On 1 million elements % 5000, that's pretty fast if I may say so myself. I'll have to look into implementing that a bit more. So far, I'm intrigued. Also, is merge sort the bottom up method, like how I did it, and recursive is the top down method, like fun2code is attempting?

Also, would chrono replace the need for my windows function? I hate the windows lib and really want a high resolution timer. I'll have to inspect your code a bit to understand it before implementing it. Or check out the references here at least.

I'm curious cire, what's your background? Do you code for a living? Where did you learn so much?

Edit: The chrono features are awesome, I just really hate the cast. It's extremely long and doesn't look right broken up in any sense. I was able to typedef milliseconds to just ms, but even that doesn't help shorten it up. You also can't declare using std::chrono which I thought was weird. I'm going to have to delve into the features of the library and see what options I have...but then again, I assume you already have and this is either the only way, or the best way already.
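One way to trim the cast down, sketched below with made-up names (hr_clock, ms, time_ms, busy_work are mine, not library names): typedef the clock and the duration, and pull duration_cast in with a using-declaration inside the function. For what it's worth, `using namespace std::chrono;` also compiles fine inside a function scope; it's `using std::chrono;` (a using-declaration naming a namespace) that the language rejects.

```cpp
#include <cassert>
#include <chrono>

// Illustrative aliases -- just names I picked, not library ones.
typedef std::chrono::high_resolution_clock hr_clock;
typedef std::chrono::milliseconds ms;

// Time a chunk of work and return whole milliseconds. The using-declaration
// inside the function trims the std::chrono:: prefix off duration_cast.
long long time_ms(void (*work)()) {
    using std::chrono::duration_cast;
    hr_clock::time_point start = hr_clock::now();
    work();
    hr_clock::time_point end = hr_clock::now();
    return duration_cast<ms>(end - start).count();
}

// Something cheap to time.
void busy_work() {
    volatile long sum = 0;
    for (long i = 0; i < 1000000; ++i) sum += i;
}
```

With the aliases in place, the call site shrinks to `duration_cast<ms>(end - start).count()`, which is about as short as it gets without hiding the cast entirely.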

Another thing I want to get around to is learning more about the new random libraries. They seem really nifty to all just be grouped in one location. And the thought behind it is incredible. I'm really loving the C++11 features, and I don't care what anyone says, even if they aren't all used on a daily basis, they're fun to play with and even better when you actually need them.

On a curious level, looking over your code, you typedef'd typedef node* (*sort_func)(node*) ; That's a function pointer, but isn't that short of old school? Do you just find it easier to use that over functional?

One last thing, since we're on the topic of speed and times, does calling functions increase overhead, even if the function itself is fairly quick? The reason I ask is because my unique function seems like it should be pretty quick, but it is the only thing so far that's slower than STL. What implementation did they use differently than mine?

By the way, I've been looking through your code a bit. I haven't gotten to your sort functions yet, but even though I was expecting some hard-to-follow code, it's pretty straightforward, even more so than mine. The only thing I have to say is that I'm not a fan of how you organize your functions. I try to keep mine in alphabetically grouped sections, but it looks like yours are defined in the order they're used?

Also, why is the top down method, recursive, faster? I figured that splitting it would force it to do more calculations than just working from the bottom up. You've taught me a lot so far, and never cease to amaze me. I'm glad you joined in on this thread.
Just curious cire, what's your runtime on 1 million random elements sorted? And what's your processor.

When compiled with optimizations, typical performance is ~525ms for the iterative version, and just about ~310ms for the recursive version. The processor is an AMD Phenom II (3.01GHz).


Also, is merge sort the bottom up method, like how I did it, and recursive is the top down method, like fun2code is attempting?

It's really just how the list is being split. In the top down approach, you begin with entire list and split it into two smaller lists, then split those lists into smaller lists, which is obviously conducive to a recursive solution. In the bottom up approach, you start by defining what would be the "leaf" lists in the top down approach and merge them to form larger and larger lists, which is more conducive to an iterative solution.
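To make the top-down half concrete, here's a minimal sketch on a bare singly linked node (the `node` type and function names are illustrative, not the actual code from the gist): cut the list at its midpoint with fast/slow pointers, recurse on both halves, then merge.

```cpp
#include <cassert>
#include <cstddef>

// Illustrative singly linked node -- not the actual type from the gist.
struct node {
    int value;
    node* next;
};

// Merge two sorted lists into one, reusing the existing nodes.
node* merge(node* a, node* b) {
    node dummy = {0, NULL};
    node* tail = &dummy;
    while (a && b) {
        if (a->value <= b->value) { tail->next = a; a = a->next; }
        else                      { tail->next = b; b = b->next; }
        tail = tail->next;
    }
    tail->next = a ? a : b;
    return dummy.next;
}

// Top-down: split at the midpoint (fast/slow pointers), sort each
// half recursively, then merge the two sorted halves.
node* merge_sort(node* head) {
    if (!head || !head->next) return head;
    node* slow = head;
    node* fast = head->next;
    while (fast && fast->next) {    // slow lands on the midpoint
        slow = slow->next;
        fast = fast->next->next;
    }
    node* right = slow->next;
    slow->next = NULL;              // detach the second half
    return merge(merge_sort(head), merge_sort(right));
}
```

Note how the splitting lives entirely in the recursion here, whereas the bottom-up version would start from single-node (or two-node) runs and merge upward in a loop.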


I'm curious cire, what's your background? Do you code for a living? Where did you learn so much?

I'm a self-taught hobbyist. Books, and I used to spend a lot of time on the relevant Usenet groups.


The chrono features are awesome, I just really hate the cast.

It is an ugly thing, isn't it? You can get rid of some of the ugliness with a using directive, which is what I would normally do. I didn't do it here because I wanted the additional namespace to be obvious to anyone reading the code from the link.

do_sort could be rewritten like so:
void do_sort( sort_func sort )
{
    using namespace std::chrono ;
    node * list = make_list(1000000) ;

    high_resolution_clock clock ;

    auto start = clock.now() ;
    list = sort(list) ;
    auto end = clock.now() ;

    check_sorted(list) ;
    std::cout << duration_cast<milliseconds>(end-start).count() << "ms sorting the list.\n" ;

    destroy(list) ;
}


typedef node* (*sort_func)(node*) ; That's a function pointer, but isn't that short of old school? Do you just find it easier to use that over functional?

std::function is useful when you're writing code that may expect either function pointers or functor objects. That wasn't the case here in this very specific toy program, so I didn't use it.
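A tiny illustration of the difference (binop, apply_ptr, and apply_fn are made-up names for this sketch): a plain function-pointer typedef binds only to free functions, while std::function also accepts lambdas and functors.

```cpp
#include <cassert>
#include <functional>

// The typedef'd function-pointer style, as with the gist's sort_func,
// shown here with an illustrative signature:
typedef int (*binop)(int, int);

int add(int a, int b) { return a + b; }

// A plain function pointer binds only to free functions (or captureless
// lambdas, which convert to function pointers)...
int apply_ptr(binop f, int a, int b) { return f(a, b); }

// ...while std::function also accepts capturing lambdas and functors,
// at the cost of a little type-erasure overhead.
int apply_fn(std::function<int(int, int)> f, int a, int b) { return f(a, b); }
```

So for a toy program that only ever passes named free functions, the typedef is the simpler tool; std::function earns its keep once callers start handing in stateful callables.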

One last thing, since we're on the topic of speed and times, does calling functions increase overhead, even if the function itself is fairly quick?

Non-inline functions do have some associated overhead. See:
http://stackoverflow.com/questions/144993/how-much-overhead-is-there-in-calling-a-function-in-c

I try to keep mine in alphabetically grouped sections, but it looks like yours are defined in the order they're used?

The order of functions within a file doesn't really matter to me. I have this wonderful IDE that lets me right-click names and do stuff like "Go to definition"

I coded these in the order they appear, with the exception of main.

Also, why is the top down method, recursive, faster?

Without profiling, I couldn't say.

Not introducing the (small) extra complexity to make the iterative mergesort stable may have had an effect. Probably the implicit dynamic memory allocations in the std::queue had an effect.

I was a bit surprised myself. The iterative solution uses the 'natural' algorithm that takes advantage of already-sorted sequences in the unsorted list. I didn't do that with the recursive solution, although I did attempt to limit the number of one-element lists by sorting two-element lists instead of breaking them down.
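A sketch of what that run detection might look like, assuming a bare singly linked node (the `node` type and `take_run` name are illustrative, not the actual code): the natural algorithm repeatedly detaches the leading already-sorted run and merges pairs of runs.

```cpp
#include <cassert>
#include <cstddef>

// Illustrative singly linked node and helper -- not the code from the gist.
struct node {
    int value;
    node* next;
};

// 'Natural' merge sort begins by scanning for runs that are already in
// order. This detaches the leading sorted run and returns its head;
// *rest receives whatever follows it.
node* take_run(node* head, node** rest) {
    node* tail = head;
    while (tail->next && tail->value <= tail->next->value)
        tail = tail->next;
    *rest = tail->next;
    tail->next = NULL;   // cut the run loose
    return head;
}
```

On an already-sorted input, the first call swallows the whole list in one run, which is exactly why the natural variant degenerates gracefully to O(n) in that case.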
That's impressive that you're just a hobbyist and know so much. Are there any books or supplemental reading material you'd suggest? I know there are always great articles on SO, but a lot of that covers stuff that is still over my head. I've also been searching Wikipedia a lot lately about algorithms and their overall complexity. I've gotten better at understanding the language used on wiki and in turn have been able to write significantly better code.

Overall, it's always about learning. I enjoy the challenges I've been running across lately, and am always looking for ways to improve. Looking at your sort functions, the only thing I'll have to do to my list is create some private functions so I can pass pseudo-lists to it instead of passing a whole list, to prevent copying elements. Another thing I learned today, courtesy of my C# friend, was the heap sort algorithm. The concept seems tricky, and reading up on it, it seems fairly slow compared to quicksort even though its worst-case complexity is lower. I thought complexity was directly related to speed?

Reading further into the concept of a heap, I now have a pretty good idea of how and why heaps are good. I don't think I'll ever understand exactly how a compiler stores things on the heap, which is the complete opposite of my understanding of the stack, but I understand priority is the primary role of a heap: the higher the priority, the higher you are on the heap. Reading more about it, it really reminded me of the trees that programmers make. I've never made one before, so I'm not 100% sure that's an adequate comparison. If it is, I'm definitely going to have to attempt it.
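The tree comparison is apt, for what it's worth: a binary heap is a complete binary tree flattened into an array. A rough sketch of that idea (`pop_max` is just an illustrative helper name, not a library function):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// A binary heap is a complete binary tree stored in an array: the
// children of index i sit at 2*i + 1 and 2*i + 2, and the root
// (index 0) always holds the highest-priority element. The standard
// algorithms maintain that invariant over any random-access container.
int pop_max(std::vector<int>& v) {
    std::pop_heap(v.begin(), v.end());   // moves the max to the back, re-heaps the rest
    int top = v.back();
    v.pop_back();
    return top;
}
```

Build the heap once with `std::make_heap(v.begin(), v.end())`, then each `pop_max` yields the current maximum; repeating this to exhaustion is essentially heap sort.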

If you are looking for new challenges, I highly suggest Project Euler (http://www.projecteuler.net), UVa Online Judge (http://uva.onlinejudge.org), or Sphere Online Judge (http://www.spoj.com). If you haven't tried them before, they'll give you a challenge, as I believe they do for anyone. If you have any websites that you know of that you could share, that'd also be greatly appreciated.

Last question, do you primarily use C++? What other languages do you know, if any? Favorite? It's always interesting to see where people started, what paths they like to go down, and how far they go to learn more. I look forward to hearing from you again and hopefully learning some more from you.
So, I spent the last few days trying to make my code a little more const-correct. I followed the STL list closely for the functions, and have also implemented a few new ones, including the reverse function. Looking at the code, I'm fairly happy with how well everything works, but I still have a few functions yet to improve on, including the sort algorithm, reverse iterators, and the comparison operators.

Along with those, I also want to implement a commenting scheme to make it easier to understand what each function does. Much like the documentation here, I want to provide input/output for each function, a brief description, and the complexity. I have always been interested in Doxygen comments and believe that's the way I'm going to go; plus it has the added feature of generating an HTML page for quick reference.

Here is the current code (I'll update this link directly when I add more things to it): http://pastebin.com/jCgsky0Q
coding style almost makes my eyes bleed
Errr, thanks for your generous feedback? What do you find so impossible to read about it? Obviously it's not professional code, but I did my best to keep indentation even and to be consistent with naming styles, spacing, etc. If it's really that bad, I'll take a look at it and try to improve it. You're the first person to say anything, so I'm a bit skeptical.
its not bad so dont take it personally i was just looking at all of the bracket use

int main(){
    bool condition = false; 

    if(condition){
        printf("not neat");
    }

    return 0;
}

vs
int main()
{  

    bool condition = true;

    if(condition)
    {
       printf("neat");
    }

    return 0;
}

It's actually better than half the styles I've seen because you actually indent. I'm working on my game engine, so I'm implementing all of these data structures as well. I commented just because I was browsing and that was the first thing I thought.
DeXecipher wrote:
coding style almost makes my eyes bleed
DeXecipher wrote:
its not bad so dont take it personally


First, you make it sound like my coding style is the worst you've ever seen. Second, you make that accusation about my code and tell me not to take it personally. I'm a humble coder, I do it as a hobby, and I know not only that my code isn't professional, but also that it can be greatly improved on. However, I would like some respect for what I do, and I don't take too kindly to people just saying it looks horrible; and yes, I'm going to take it personally, since it is, again, my code.

However, in regards to the bracket style, attached vs. Linux, I don't feel that's one of those things that would make someone's eyes bleed, even figuratively. Everyone has their preferences, and you've even admitted you've seen worse. Also, my first language was Java, where I learned the attached style, so I've carried it over, along with a few other bad habits, most of which have been broken by now.

I must point out that I'm not alone in my choice of bracket style on this site, though there are few of us. I look at it like this: if the creator himself uses the same bracket style as I do, with a minor change, then it's quite acceptable as a coding style. I also wanted to mention earlier that all of those separated brackets eat up an entire line, and although it doesn't seem like much, in a program of around 500 lines you've typically added over 50 lines just to separate the brackets. Yes, I'm aware that fewer lines don't automatically increase readability, but that's that many more lines you need to scroll through when looking at the overall code.

Oh, and when you add brackets around your one-line if/for statements, you've just turned 2 lines into 4. Now multiply that by over 100 if/for statements across a program, and you've almost doubled the overall size of the file.

I understand the importance of everything that has been mentioned here, and the possible negative side effects; however, it still comes down to the person editing the code to figure out what's good practice and what isn't. In my eyes, if you're going to modify my if/for statement and don't know that multi-line statements require brackets, you probably shouldn't be touching my code. I also know that brackets can easily show where a function starts and stops, but I properly indent everything, so there should be absolutely no confusion. The only time it's ugly is when initializing variables, which I still haven't found a good indentation for (I believe 2 tabs will make it understandable, though).