/* clock example: frequency of primes */
#include <stdio.h>
#include <time.h>
#include <math.h>
int frequency_of_primes (int n) {
  int i, j;
  int freq = n-1;
  for (i=2; i<=n; ++i)
    for (j=sqrt(i); j>1; --j)
      if (i%j==0) { --freq; break; }
  return freq;
}
int main ()
{
clock_t t;
int f;
printf ("Calculating...\n");
t = clock();
f = frequency_of_primes (99999);
t = clock() - t;
printf ("The number of primes lower than 100,000 is: %d\n",f);
printf ("It took me %ld clicks (%f seconds).\n",(long)t,((float)t)/CLOCKS_PER_SEC);
return 0;
}
//cplusplus.com
giszzmo, ignore the 100,000... that's how many numbers he was checking in his frequency_of_primes function. If you increase that number, it will increase the computation time.
Computer clocks are generally not accurate to the microsecond. Neither <time.h> nor <ctime> has a high-resolution timer that can get that accurate.
To get something with a very high resolution, you need to use something directly from your operating system. The resolution that you will get will depend on your hardware (if your processor runs at 1 GHz (1 tick per microsecond) you will not have 1 micro-second resolution).
Here's a high-resolution example using C++11 <chrono> (on Windows, high_resolution_clock is typically implemented on top of QueryPerformanceCounter).
#include <chrono>
#include <cmath>
#include <iostream>
int frequency_of_primes (int n) {
int freq=n-1;
for (int i=2; i<=n; ++i)
for (int j=sqrt(i);j>1;--j)
if (i%j==0)
{
--freq;
break;
}
return freq;
}
int main ()
{
using namespace std::chrono ;
std::cout << "Calculating...\n" ;
auto begin = high_resolution_clock::now() ;
int f = frequency_of_primes (99999);
auto end = high_resolution_clock::now() ;
auto ticks = duration_cast<microseconds>(end-begin) ;
std::cout << "The number of primes lower than 100,000 is: " << f << '\n' ;
std::cout << "It took me " << ticks.count() << " microseconds.\n" ;
}
Not really. It just means that you'd go from this:
(end.QuadPart - start.QuadPart) * 1000000 / freq.QuadPart
to this:
(end.QuadPart - start.QuadPart) * 1000 / freq.QuadPart
Unless you cast that division to a double, you'll still get integer division and you won't have the decimal points.
If you don't mind 1-second resolution, then you can use the <ctime> header/library.
> Computer clocks are generally not accurate to the micro-second.
They are.
> To get something with a very high resolution, you need to use something directly from your operating system.
¿And where will the OS get that info from?
> 1 GHz (1 tick per microsecond)
1 GHz = 1e9 Hz. That means 1 tick every 1e-9 s (one nanosecond), so 1000 ticks per microsecond.
However, clock() does not count processor ticks.
> Machine is running at 3.33 MHz.
¿steam powered?
@OP: clock() measures the time that your program is executing, in `clock' units; CLOCKS_PER_SEC tells you how many `clocks' are in 1 second.
C'mon, ne555, stop being a jerk. So I mixed up GHz and MHz; that doesn't change the fact that my 3.4 GHz i7 is running at 3.33 GHz on electricity (not steam), and it doesn't make my code or output wrong in any way, shape, or form.