Using ctime to measure the time for quicksort

Hey guys, I'm trying to figure out how to use ctime in order to measure the time that it takes for quicksort to sort some numbers. I've been reading on the internet trying to find the correct way to implement ctime, but can't seem to figure out how. The program always shows the time as 0. Any help would be greatly appreciated.

#include <iostream>
#include <cstdlib>
#include <ctime>
#include <fstream>

using namespace std;

const int INPUT_SIZE = 10000;


void print(int *input)
{
    for ( int i = 0; i < INPUT_SIZE; i++ )
        cout << input[i] << " ";
    cout << endl;
}

// The partition function
int partition(int* input, int p, int r)
{
    int pivot = input[r];

    while ( p < r )
    {
        while ( input[p] < pivot )
            p++;

        while ( input[r] > pivot )
            r--;

        if ( input[p] == input[r] )
            p++;
        else if ( p < r )
        {
            int tmp = input[p];
            input[p] = input[r];
            input[r] = tmp;
        }
    }

    return r;
}

void quicksort(int* input, int p, int r)
{
    if ( p < r )
    {
        int j = partition(input, p, r);
        quicksort(input, p, j-1);
        quicksort(input, j+1, r);
    }
}

int main()
{
      clock_t  clock1,clock2;
      int input[INPUT_SIZE] = {};
      for(int i = 0; i<INPUT_SIZE; i++)
         {
             int random_num = rand() % 1000;
             input[i] = random_num;

         }
   // int input[INPUT_SIZE] = {500, 700, 800, 100, 300, 200, 900, 400, 1000, 600};
    cout << "Input: ";
    print(input);

    clock1 = clock();
      quicksort(input, 0, 9);
    clock2 = clock();

    cout << "Quicksort time " << (float)(clock2 - clock1) / CLOCKS_PER_SEC << endl;

    cout << "Output: ";
    print(input);
    cout << endl;
  
}
#include <ctime>
...

	clock_t start, stop;
	double totalTime;

	start = clock();
	// Do stuff here
	stop = clock();
	totalTime = (stop - start) / (double)CLOCKS_PER_SEC;
Hey Esslercuffi thanks for the help! I used your code and it still prints out 0 for totalTime. Any other ideas why it might not be working?


int main()
{
      clock_t  start,end;
      double totalTime;
      int input[INPUT_SIZE] = {};

      int input2[INPUT_SIZE] = {};
      for(int i = 0; i<INPUT_SIZE; i++)
         {
             int random_num = rand() % 1000;
             input[i] = random_num;

         }
     for(int i =0; i<INPUT_SIZE; i++)
        {
           input2[i] = input[i];
        }


  
    cout << "Input: ";
    print(input);

   cout << "input2 ";
   print(input2);

    cout<<endl;
 
  insertion_sort(input, INPUT_SIZE);
   cout << "insertion sort done" << endl;
      print(input);
   start = clock();
      quicksort(input, 0, INPUT_SIZE -1);
    end = clock();
    totalTime = (end - start)/ (double)CLOCKS_PER_SEC;
    cout <<"quicksort time is " << totalTime <<endl;

    cout << "Output: ";
   
   
    print(input);
    cout <<endl;

strange. I've used that code many times in the past. I'm not currently at a computer with a compiler on it, so I can't really dig in and see what's goin' on right now.
The C library clock only gives you a resolution of seconds. Is your code running faster than one second?

If you want a better idea of how much time your function is taking, you need to do at least one of two things:

(1) run the test multiple times or use a larger input
(2) use a finer-resolution clock

Here's for (2):

 
#include <chrono> 
// When messing with the clocks in <chrono>, you almost always want to 
// make yourself some handy functions to help reduce typing later.

inline 
std::chrono::time_point <std::chrono::high_resolution_clock> 
now() 
{ 
  return std::chrono::high_resolution_clock::now(); 
}

template <typename T>
inline 
std::chrono::milliseconds::rep
milliseconds( T t ) 
{ 
  return std::chrono::duration_cast <std::chrono::milliseconds> ( t ).count();
}

  auto start_time = now();

  ...  // (remember not to do anything that involves I/O here)

  auto stop_time = now();

  unsigned long time_spent = milliseconds( stop_time - start_time );

Hope this helps.
Thanks for the help Duoas! I increased the input size and the C lib clock started to work. Thanks for the tip about chrono; since I'm comparing 2 sort functions, it seems to be the better way to go.
http://www.cplusplus.com/reference/ctime/clock/

Actually, the resolution of clock() is system dependent, and usually much finer than seconds. If seconds were the limit of its granularity, then the defined metric for its use would be SECS_PER_CLOCK, rather than the other way around. Even the example on the above page measures 0.143 seconds.

That said, I see that the high_resolution_clock you have above guarantees the finest time granularity your system can provide. I wasn't even aware of its existence. I'll be using it from now on. Thanks.
> I see that the high_resolution_clock you have above, guarantees the finest time granularity...

std::chrono::high_resolution_clock is a wall clock; do not use a wall clock to measure processor time.
http://coliru.stacked-crooked.com/a/7fc30e3b429b1c17

std::clock() gives a measure of approximate processor time; that is the right clock to use for this. For estimates with higher accuracy, increase the size of the sequence to be sorted, and take the average of multiple measurements.
http://coliru.stacked-crooked.com/a/0d3de457f5ba48b9
I was just going from this: http://www.cplusplus.com/reference/chrono/high_resolution_clock/ I misinterpreted the "shortest tick period" to mean "shortest possible on system hardware".

I was in the middle of working up my own comparison test when I saw your post.

*edit* @JLBorges to be fair, your example isn't using the high_resolution_clock, but the steady_clock. The page for the high_res clock says it "may be a synonym" for steady_clock, so I guess I'll finish my test to see if there's actually any difference on my system.

*edit #2* now I see your point is that it doesn't matter, as the chrono clocks measure actual time elapsed regardless of which programs are receiving processing time, not the time spent by this program on processing. Sometimes it takes me a while to pull my head out of my ass :)
> your example isn't using the high_resolution_clock, but the steady_clock.

To measure time, we need a clock which does not go back in time. std::chrono::high_resolution_clock::is_steady may not be true.

Something like this is usually done to get the highest resolution clock suitable for measuring elapsed wall clock time:
using wall_clock = typename std::conditional< std::chrono::high_resolution_clock::is_steady,
                                              std::chrono::high_resolution_clock,
                                              std::chrono::steady_clock >::type ;


In any case, for measuring single-threaded in-memory sort performance, all the clocks in std::chrono are the wrong clocks.
Er, std::chrono::high_resolution_clock is supposed to be an interface to the highest-resolution clock on your system, and it doesn't suffer from the well-known problems that clock() has. If you are getting crappy performance out of it then you need to upgrade your compiler.
std::chrono::high_resolution_clock does not measure processor time; it measures wall clock time.
std::chrono::high_resolution_clock is not required to be a monotonic clock; std::chrono::high_resolution_clock::is_steady may not be true.

std::chrono::steady_clock is a monotonic clock.
Class std::chrono::steady_clock represents a monotonic clock. The time points of this clock cannot decrease as physical time moves forward. This clock is not related to wall clock time, and is best suitable for measuring intervals.
- http://en.cppreference.com/w/cpp/chrono/steady_clock


std::clock_t clock() measures approximate processor time used by the process.
std::clock_t clock();
Returns the approximate processor time used by the process since the beginning of an implementation-defined era related to the program's execution. To convert result value to seconds divide it by CLOCKS_PER_SEC. ...

std::clock time may advance faster or slower than the wall clock, depending on the execution resources given to the program by the operating system. For example, if the CPU is shared by other processes, std::clock time may advance slower than wall clock. On the other hand, if the current process is multithreaded and more than one execution core is available, std::clock time may advance faster than wall clock.
- http://en.cppreference.com/w/cpp/chrono/c/clock


Boost chrono has a set of clocks which measure actual processor time, in addition to clocks similar to the ones found in std::chrono. On most platforms, boost::chrono::process_cpu_clock has a higher resolution than std::clock()
http://www.boost.org/doc/libs/1_56_0/doc/html/chrono/reference.html#chrono.reference.other_clocks.process_cpu_clocks_hpp
@trevormoon, does your quicksort program run correctly? The code you posted isn't complete.
I have tried clock() from <ctime> and now() from <chrono> in gcc, VS2012 and cpp.sh in several cases; both functions showed the same value, but clock() showed a more precise value, such as 1.04283 sec while now() showed 1042 millisec.
> both functions showed the same value,

They measure different things (that the two happen to produce the same value is happenstance; the program executed during a fortunate period when execution resources were continuously available to it).

Intervals from wall clocks give elapsed wall clock time; intervals from std::clock() give processor time used for execution. Both are useful measures.

Knowing how long a program takes to execute is useful in both test and production environments. It is also helpful if such timing information is broken down into real (wall clock) time, CPU time spent by the user, and CPU time spent by the operating system servicing user requests.
http://www.boost.org/doc/libs/1_56_0/doc/html/chrono/reference.html#chrono.reference.other_clocks.process_cpu_clocks_hpp


The time utility executes and times the specified utility. After the utility finishes, time writes to the standard error stream, (in seconds): the total time elapsed, the time used to execute the utility process and the time consumed by system overhead.
http://www.freebsd.org/cgi/man.cgi?query=time&apropos=0&sektion=0&manpath=FreeBSD+10.1-RELEASE&arch=default&format=html


#include <iostream>
#include <ctime>
#include <chrono>
#include <thread>

int main()
{
    using namespace std::chrono ;
    
    const auto startp = std::clock() ;
    const auto startw = steady_clock::now() ;
    
    volatile double d = 0 ;
    for( int i = 0 ; i < 10000000 ; ++i ) d += 23.8 ;
    
    // assume that there is heavy contention for the processor and mid-way through our loop, 
    // the operating system gives the resource to other competing processes for 200 msecs
    std::this_thread::sleep_for( milliseconds(200) ) ;

    for( int i = 10000000 ; i < 20000000 ; ++i ) d += 23.8 ;

    const auto endw = steady_clock::now() ;
    const auto endp = std::clock() ;
    
    std::cout << "elapsed wall clock time: " << duration_cast<milliseconds>( endw - startw ).count() << " msecs\n" 
              << "elapsed processor time: " << double( endp - startp ) * 1000 / CLOCKS_PER_SEC << " msecs\n" ;
}

clang++ -std=c++14 -stdlib=libc++ -O2 -Wall -Wextra -pedantic-errors -pthread main.cpp -lsupc++ && time ./a.out
elapsed wall clock time: 315 msecs
elapsed processor time: 110 msecs

real	0m0.321s
user	0m0.116s
sys	0m0.004s


real 0m0.321s <= *** elapsed wall clock time
user 0m0.116s <= *** processor time used for execution
sys 0m0.004s <= *** processor time used for system overhead