Want to measure execution time - please help!

Hello, I want to measure the time my function takes to execute. I was trying to use time() and clock() from time.h, but they only return it in seconds. How can I measure it in milliseconds?

for example:
clock_t start = clock();
//function to be timed
clock_t end = (clock() - start)/(CLOCKS_PER_SEC/1000);

but I want it to return at least 6 digits of precision.

output: start at: 0.000000
end at: 0.000420
BTW, I want to do it in C++.
Try this:

http://en.cppreference.com/w/cpp/chrono/c/clock

Edit:

Use std::clock, not the high-resolution one - realise the difference :+)
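
For example, a rough sketch along the lines of the cppreference example - busy_work() is just a placeholder for whatever you want to time, and it prints both CPU time (std::clock) and wall-clock time (std::chrono) so you can see the difference:

#include <chrono>
#include <cmath>
#include <cstdio>
#include <ctime>

volatile double sink = 0.0;   // keeps the compiler from optimising the loop away

void busy_work()
{
    for (int i = 0; i < 1000000; ++i)
        sink += std::sqrt(static_cast<double>(i));
}

int main()
{
    std::clock_t c_start = std::clock();
    auto t_start = std::chrono::high_resolution_clock::now();

    busy_work();

    std::clock_t c_end = std::clock();
    auto t_end = std::chrono::high_resolution_clock::now();

    // CPU time used by this process
    std::printf("CPU time used  : %.6f ms\n",
                1000.0 * (c_end - c_start) / CLOCKS_PER_SEC);
    // wall-clock (real) time elapsed
    std::printf("Wall clock time: %.6f ms\n",
                std::chrono::duration<double, std::milli>(t_end - t_start).count());
}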
(clock() - start)/(CLOCKS_PER_SEC/1000);

should just be

(clock() - start)/(double)(CLOCKS_PER_SEC);

which gives seconds, with at least millisecond resolution on almost all systems that I know of. It's converted to seconds, so if you want ms, multiply by 1000, don't divide by 1000!

Which is terrible resolution: a 3.5 GHz CPU executes 3.5 billion assembly instructions per second, and at times twice that in the pipelines. A function has to be hefty to run for even 1 ms, so if you get 0.000000 back, don't be alarmed. You can time it to more precision as above, but fractions of a ms are not unusual.
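
A minimal sketch of that conversion (the busy loop is just a stand-in for the function being timed):

#include <cstdio>
#include <ctime>

int main()
{
    std::clock_t start = std::clock();

    for (volatile long i = 0; i < 50000000; ++i) {}   // stand-in for the function being timed

    std::clock_t end = std::clock();

    double seconds = (end - start) / (double)CLOCKS_PER_SEC;
    double millis  = 1000.0 * seconds;                // * 1000, not / 1000

    std::printf("%.6f s = %.6f ms\n", seconds, millis);
}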


@TheIdeasMan I tried it, but it doesn't give me the last 4 digits of precision correctly; they always stay at 0: 0.xx0000

with std::cout << std::fixed << std::setprecision(6) << 1000.0 * (c_end - c_start) / CLOCKS_PER_SEC << " ms\n";
Can you please give us the code of your program?
@Pattako (28)

clock_t time_begin, time_end;
time_begin = clock();

//function being timed <-------
wait_time((float)temp_time);

time_end = clock();
cout << setiosflags(ios::fixed) << setprecision(6) << (((float)time_end - (float)time_begin) / CLOCKS_PER_SEC);
I want a whole program, not part of your code. This way I will compile it and tell you the real problem.
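
For reference, a whole program might look something like this - a minimal compilable sketch built around the fragment above, with wait_time() written as a hypothetical busy-wait helper since its real definition wasn't posted:

#include <ctime>
#include <iomanip>
#include <iostream>
using namespace std;

// hypothetical stand-in: busy-waits for roughly the requested number of seconds
void wait_time(float seconds)
{
    clock_t until = clock() + (clock_t)(seconds * CLOCKS_PER_SEC);
    while (clock() < until) {}
}

int main()
{
    float temp_time = 0.25f;               // example input

    clock_t time_begin = clock();
    wait_time((float)temp_time);           // <-- the function being timed
    clock_t time_end = clock();

    cout << setiosflags(ios::fixed) << setprecision(6)
         << (((float)time_end - (float)time_begin) / CLOCKS_PER_SEC)
         << " s\n";
}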
gh0099 wrote:
@TheIdeasMan I tried it, but it doesn't give me the last 4 digits of precision correctly; they always stay at 0: 0.xx0000


Why do you want so much precision? Normally when timing something for performance, one would have a decent-sized set of data - say 1 million items. Timing small sets of, say, 1,000 is a bit pointless. Have a look at the output of this example:
http://www.cplusplus.com/forum/general/193311/#msg930382

The other thing to do is use a profiler like valgrind. http://valgrind.org/
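
As a rough sketch of what timing a decent-sized workload could look like (sorting 1,000,000 ints, measured with <chrono>; the numbers will of course vary by machine):

#include <algorithm>
#include <chrono>
#include <iostream>
#include <random>
#include <vector>

int main()
{
    // build a decent-sized data set: 1 million random ints
    std::vector<int> data(1'000'000);
    std::mt19937 gen(42);
    std::uniform_int_distribution<int> dist(0, 1'000'000);
    for (int& x : data) x = dist(gen);

    auto start = std::chrono::steady_clock::now();
    std::sort(data.begin(), data.end());          // the operation being measured
    auto end = std::chrono::steady_clock::now();

    std::cout << "sorting 1,000,000 ints took "
              << std::chrono::duration<double, std::milli>(end - start).count()
              << " ms\n";
}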
I disagree a little bit. Running large sets is part of the test, but you also want to check the small case. The reason is that you might want to know the "set-up time" of a thing... a function that allocates a bunch of memory, or creates and destroys a bunch of convoluted classes that allocate and destroy memory, assign tons of stuff in the default ctor, etc., can have a bunch of "do nothing" time up front that you have to identify so you can fix it. Its inner loop might be tweaked to the max, but if it takes 1/3 of a second to set up... you need to know that, and that gets lost when running a billion records.

Usually, the inner loop is the key. But not always.

At one job, I had to produce some matrix algebra results at 60 Hz. The data wasn't that huge, but we did need to know sub-millisecond average run times, and with a function (even inlined) being called 60 times a second, we also had to keep an eye on the initialization/overhead. It just comes down to what exactly you are doing.

Wouldn't you want a high-precision effort to use doubles? That float isn't saving you anything.
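
A sketch of what I mean by separating the two: Worker is a made-up class with a deliberately heavy constructor, and the set-up phase and the inner loop are timed separately so the up-front cost doesn't get hidden in one big total:

#include <chrono>
#include <iostream>
#include <vector>

struct Worker
{
    std::vector<double> buffer;
    Worker() : buffer(5'000'000, 1.0) {}        // deliberately heavy set-up
    double step(std::size_t i) const { return buffer[i % buffer.size()]; }
};

int main()
{
    using clk = std::chrono::steady_clock;

    auto t0 = clk::now();
    Worker w;                                    // set-up phase
    auto t1 = clk::now();

    double sum = 0.0;
    for (std::size_t i = 0; i < 1'000'000; ++i)  // inner-loop phase
        sum += w.step(i);
    auto t2 = clk::now();

    std::cout << "set-up:     "
              << std::chrono::duration<double, std::milli>(t1 - t0).count() << " ms\n"
              << "inner loop: "
              << std::chrono::duration<double, std::milli>(t2 - t1).count() << " ms\n"
              << "(sum = " << sum << ")\n";
}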
(Edit: not to discount what jonnin wrote)
------------------------------------------------------
I do just want to point out the uselessness of measuring the time that a program takes to run. I can't go out to the world and say:

"This program takes exactly 3.0231 seconds to run!"

Woohoo!

Only, darn, most of my users are librarians who are running Windows 98 computers that were new about 20 years ago. I'm on a Linux machine built last year. Hmm. There might be a little mismatch here.

But seriously: yes, measuring the run-time is a good experiment and helps you decide whether one algorithm is faster than another (and can be important if you're racing someone else's implementation). Understanding "big O" and keeping it in mind while writing code will have a much wider impact on the code you write.
https://www.google.com/search?q=big-O+in+coding&ie=utf-8&oe=utf-8

Being able to look at your old code and drop it from an exponential run to linear, or even to log, will be so satisfying that actually timing the difference becomes pointless.
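
As a rough illustration (not anyone's real code): the same duplicate-counting task done with a nested O(n^2) loop and then with an unordered_set, timed side by side - the asymptotic gap is obvious without needing six digits of precision:

#include <chrono>
#include <iostream>
#include <unordered_set>
#include <vector>

int main()
{
    const int n = 20000;
    std::vector<int> v(n);
    for (int i = 0; i < n; ++i) v[i] = i % (n / 2);   // second half repeats the first half

    using clk = std::chrono::steady_clock;

    auto t0 = clk::now();
    long dup_quadratic = 0;
    for (int i = 0; i < n; ++i)                        // O(n^2): scan everything before i
        for (int j = 0; j < i; ++j)
            if (v[j] == v[i]) { ++dup_quadratic; break; }
    auto t1 = clk::now();

    long dup_linear = 0;
    std::unordered_set<int> seen;
    for (int x : v)                                    // roughly O(n): hash lookup per item
        if (!seen.insert(x).second) ++dup_linear;
    auto t2 = clk::now();

    std::cout << "quadratic: " << dup_quadratic << " dups, "
              << std::chrono::duration<double, std::milli>(t1 - t0).count() << " ms\n"
              << "linear:    " << dup_linear << " dups, "
              << std::chrono::duration<double, std::milli>(t2 - t1).count() << " ms\n";
}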
It's useless in the vast majority of coding, you are not wrong. I'll argue that we could probably get at least one order of magnitude out of our machines by fixing the code, though. There is a LOT of sorry code out there in industry these days. It's just not cost-effective to do it right; it's easier to let the users sit there tapping a finger for 10 seconds.
