What is the best way to time sorting algorithms?

Which unit of time should I use to track the elapsed time of each algorithm (seconds, milliseconds, nanoseconds, or something else I can keep track of)? I want to know which unit gives an accurate measurement of the elapsed time of each sorting algorithm.

Secondly, how can I average the time after 5 tests for each sorting algorithm?

Here's the code:

#include <iostream>
#include <chrono>
#include <cstdlib>   // for rand()

using namespace std;
using namespace std::chrono;

#include "Heapsort.h"
#include "InsertionSort.h"
#include "ModQuickSort.h"
#include "QuickSort.h"
#include "RadixSort.h"
#include "SelectionSort.h"


int main()
{
    const int SIZE = 42500;
    int numbers[SIZE + 1], trial = 5;
    cout << "Sorting 42500 integers" << endl << endl;

    for (int j = 1; j <= trial; j++)
    {
        for (int i = 0; i < SIZE; i++)
        {
            numbers[i] = (rand() % 100 + 1);
        }
        cout << "Trial #" << j << endl;

        auto start = steady_clock::now();
        quickSortMod(numbers, 0);
        auto stop = steady_clock::now();
        auto duration = duration_cast<microseconds>(stop - start);
        cout << "Modified quicksort elapsed time: "
             << duration.count() << " microseconds" << endl;

        start = steady_clock::now();
        quickSort(numbers, 0, SIZE);
        stop = steady_clock::now();
        duration = duration_cast<microseconds>(stop - start);
        cout << "Quicksort elapsed time: "
             << duration.count() << " microseconds" << endl;

        start = steady_clock::now();
        heapSort(numbers, SIZE);
        stop = steady_clock::now();
        duration = duration_cast<microseconds>(stop - start);
        cout << "Heapsort elapsed time: "
             << duration.count() << " microseconds" << endl;

        start = steady_clock::now();
        radixSort(numbers, SIZE);
        stop = steady_clock::now();
        duration = duration_cast<microseconds>(stop - start);
        cout << "Radix sort elapsed time: "
             << duration.count() << " microseconds" << endl;

        start = steady_clock::now();
        insertionSort(numbers, SIZE);
        stop = steady_clock::now();
        duration = duration_cast<microseconds>(stop - start);
        cout << "Insertion sort elapsed time: "
             << duration.count() << " microseconds" << endl;

        start = steady_clock::now();
        selectionSort(numbers, SIZE);
        stop = steady_clock::now();
        duration = duration_cast<microseconds>(stop - start);
        cout << "Selection sort elapsed time: "
             << duration.count() << " microseconds" << endl;

        cout << endl;
    }
}

I used <chrono> to keep track of the time in microseconds.
Your array is tiny, so you need an awfully small unit of time to see what is going on for any interesting algorithm. There isn't one right answer here; it's whatever makes sense for your output so you can compare the algorithms. I personally threw in the towel years ago and just report seconds in scientific notation.
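For example, a minimal sketch of reporting a steady_clock interval in seconds with scientific notation (duration<double> and the stream manipulator are standard; the placement of the timed call is just illustrative):

#include <chrono>
#include <iostream>

int main()
{
    auto start = std::chrono::steady_clock::now();
    // ... run the sort you want to time here ...
    auto stop = std::chrono::steady_clock::now();

    // duration<double> holds seconds as a floating-point value,
    // so very short runs still show up instead of rounding to 0.
    std::chrono::duration<double> seconds = stop - start;
    std::cout << std::scientific << seconds.count() << " s\n";
}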

Use <random>. Everything else in your code is modern C++ (autos, <chrono>), and then you stick the C random generation in the middle of it.
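A minimal sketch of what that could look like (the engine and distribution are standard <random> usage; the array name, size, and value range are just taken from your code):

#include <random>

int main()
{
    const int SIZE = 42500;
    int numbers[SIZE];

    std::mt19937 gen(12345);                          // fixed seed -> reproducible data
    std::uniform_int_distribution<int> dist(1, 100);  // same range as rand() % 100 + 1

    for (int i = 0; i < SIZE; i++)
        numbers[i] = dist(gen);
}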

I can't tell: are you sorting already-sorted arrays after the first sort in each trial? Be sure to follow this pattern (a sketch follows the list):

seed the random generator (same seed over and over, so every sort is tested on the same array)
fill the array
test one sort
repeat the above for the next sort

Otherwise the already-sorted array gives best-possible-case results rather than random-case results, handing every sort but the first a serious advantage (several algorithms, even a couple of the N*N ones, are just N for presorted input).
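A sketch of that pattern, keeping srand/rand as in your code (the refill helper name and the seed value are just illustrative):

#include <cstdlib>

// Hypothetical helper: re-seed and refill so every sort sees the same unsorted data.
void refill(int numbers[], int size)
{
    srand(42);                        // same seed every time
    for (int i = 0; i < size; i++)
        numbers[i] = rand() % 100 + 1;
}

// Then, per sort:
//   refill(numbers, SIZE);
//   auto start = steady_clock::now();
//   heapSort(numbers, SIZE);
//   auto stop  = steady_clock::now();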

Averaging the time works like anything else:
totaltime += first test
totaltime += second test
… and so on
avgtime = totaltime / number of runs
Be sure to re-randomize the array when doing a loop of 5 over the same sort for an average, too. Caching makes timing subsequent runs a pain: you get a huge lift after the first run from hardware/OS tricks. If you want to average more carefully, run each test once, then each test again, and so on, 5 times over; interleaving the different algorithms helps reduce some of this effect.
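A minimal sketch of accumulating and averaging trials; someSort() and timeOneRun() are hypothetical stand-ins for one of your sorts and its timed run, not part of your code:

#include <chrono>
#include <cstdlib>
#include <iostream>

const int SIZE = 42500;
int numbers[SIZE];

// Hypothetical stand-in for one of your sorts (e.g. heapSort).
void someSort(int a[], int n) { /* ... */ }

// Refill from a fixed seed, time one run of the sort, return microseconds.
long long timeOneRun()
{
    srand(42);
    for (int i = 0; i < SIZE; i++)
        numbers[i] = rand() % 100 + 1;

    auto start = std::chrono::steady_clock::now();
    someSort(numbers, SIZE);
    auto stop = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::microseconds>(stop - start).count();
}

int main()
{
    const int TRIALS = 5;
    long long totalMicros = 0;

    for (int t = 0; t < TRIALS; t++)
        totalMicros += timeOneRun();              // accumulate each trial

    std::cout << "average: "
              << static_cast<double>(totalMicros) / TRIALS
              << " microseconds\n";
}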

I believe you need something like 5 million items to get into tenths of a second on a low-end PC, assuming you compiled with maximum optimization settings.
OK, thanks for that. So you mean any unit of time will work, as long as it's accurate enough for this program. Is that correct?

For the average, that makes sense, but I have to do that for each algorithm. So I accumulate the time and divide by the number of trials?

I don't understand what you mean. Time is time; it's as accurate as your CPU clock and timer tool allow. For displaying it, you need enough digits to see how long the sorts took relative to each other (again, scientific notation seems like the easy fix).

Yes, add the times and divide by the number of trials, as I said. But you want to break the runs up so you don't call the same algorithm back to back, or the computer will cheat via caching and make it faster than a cold start would be.