Jun 7, 2010 (last update: Jun 8, 2010)

Score: 3.1/5 (54 votes)

Introduction

Say you have a 1, 5, 10, 20, and 50 dollar bill in the order 50 - 10 - 20 - 1 - 5. We as humans can easily sort this money from least to greatest or greatest to least, but we take the actual mechanics of that common practice for granted.

When we try to implement a sort in machine code, we hit algebraic equations, recursive functions, and numbers we've never seen. We can't simply put things in order the way our brains do, however easy that seems. However, we can simulate how we order our dollar bills.

For instance, take the dollar bills and try to sort them. By default and by nature, your brain should have followed some pattern. My brain picked out the extremes, 50 and 1, and placed them in their respective locations. Then I looked for something of equal or lower value, placed it underneath, and kept iterating, looking for more of the same value or lower, until I came up with a lowest. Then I reiterated to find the second lowest, then the next highest, and so on until it was all organized.

Implementing it in this fashion is easy but, to be honest, it's often not very efficient. Our brains take a simple route because they have the power to calculate it in milliseconds, and we often can't tell the difference. If our brain chooses too difficult a pattern to organize the bills, it will pick another, simpler path, or carry on with the same pattern in a different manner. A digital sort, however, never changes algorithms mid-run, and you are always trying to find the most optimized and fastest code possible. This is why this algorithm article is here. It shows not only different algorithms but the math behind them and where to apply them.

Terms and Concepts

Big O notation is simply a mathematical way to express how an algorithm's running time grows with the size of its input: http://rob-bell.net/2009/06/a-beginners-guide-to-big-o-notation/

Another concept you will need to understand is algorithm complexity, which is a judgement of the difficulty of an algorithm, usually measured in the space and time it requires. A much better source of information would be here on the wiki: http://en.wikipedia.org/wiki/Computational_complexity_theory#Machine_models_and_complexity_measures

I personally and honestly do not understand all there is to it, just enough to understand the measurement of complexity. Another article worth looking at would be this: http://www.progressive-coding.com/tutorial.php?id=1

Another concept to understand is algorithm stability, which is the ability of an algorithm to keep the relative order of elements of equal value in the sorted array. Say, for instance, you have two five dollar bills that need to be sorted, and one has a check mark on it while the other does not. With a stable algorithm, the bill that was in front of the other will still be in front after sorting. In most cases stability isn't needed, but when you have a second value attached to the values you want sorted, it is a must to understand algorithm stability and how it affects you.

And last but not least is algorithm adaptability: the ability of an algorithm to take an already partially-sorted array, continue where it left off, and finish sooner than if the array weren't pre-sorted. This is often a big deal, since it can save you considerable time.

Once you understand the basics of these terms and concepts, you will be able to follow and compare different algorithms and understand why I choose one algorithm over another.

Comparison Sort Algorithms

A common comparison sort is the bubble sort, which belongs to a subcategory of comparison sorting called exchange sorting: it repeatedly swaps adjacent items until the sort is finished:

http://codepad.org/YeU6tmzB Slightly optimized version: http://codepad.org/YCJpEFMN

It's often inefficient and is used mostly for simple, educational purposes. The average complexity of the algorithm is O(n^2), which means it performs very badly on larger lists. The best-case complexity is O(n), in the case that the array is already sorted. However, it is easy to implement, stable, and adaptable.

A step up from this type of sort is the insertion sort, a sort in its own right with many variations. It's a bit more efficient in practice than bubble sort, although its complexity is still O(n^2), the same as the inefficient bubble sort we just talked about. It has many advantages over bubble sort, but most of them aren't compelling enough for it to be used commonly on larger lists. It's also a bit more complex to implement. Here is an example:

http://codepad.org/UUZqOqC9

It wasn't finished, but thanks for being blunt, Bazzy. At least you gave reasons why, unlike some.

1) I will describe their complexity, and probably Big O notation while I'm at it... there's plenty of stuff I missed.

2) I didn't comment because I failed to see where to comment. If you would comment it for me, that would be cool.

3) I explained that they are flexible and work with other types, which is why I used templates in the examples.

4) I used codepad so I could give full working programs with results and not use my wording limitations.