Different way of using times()

Hello, I have a bit of Linux code that I can't get working under Windows. Can anyone rewrite this bit so it compiles as C++?

if ((start = times(&tmsstart)) == (clock_t)-1)
[...]
if ((end = times(&tmsend)) == (clock_t)-1)

Full source is here:
https://github.com/nbs-system/naxsi/blob/master/naxsi_src/naxsi_skeleton.c
Lines 1097 to 1108
I don't have the time to look through all that (no pun intended), but the equivalent function in Windows is GetProcessTimes()
http://www.google.com/search?btnI=1&q=msdn+GetProcessTimes

Good luck!
This is what I was looking for:

#ifdef _MSC_VER
    start = clock();   
    ngx_http_dummy_data_parse(ctx, r);
    cf->request_processed++;
    end = clock(); 
#else
    if ((start = times(&tmsstart)) == (clock_t)-1)
      ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
		    "XX-dummy : Failed to get time");
    ngx_http_dummy_data_parse(ctx, r);
    cf->request_processed++;
    if ((end = times(&tmsend)) == (clock_t)-1)
      ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
		    "XX-dummy : Failed to get time");
#endif
    if (end - start > 10) // report if it took more than 1/10MS to perform all the checks


Any comments? It compiles, links, and works, but I'm not sure whether the 1/10 ms test behaves the same.
I'm not sure.

The clock() function returns the number of ticks that pass only while your process is running, i.e. CPU time.

The times() function's return value is elapsed real (wall-clock) time, in clock ticks since an arbitrary point in the past.


What's wrong with using GetProcessTimes()?
clock() looks simpler. I just googled how to get milliseconds in C++, and since the test is about more than 10 ms this looks OK to me. What makes GetProcessTimes() work better than clock() in this test?

It is used in a non-blocking, event-driven module, so measuring only the time that passes while the thread is running sounds like what it's supposed to do.

And is it as simple as:

start = GetProcessTimes(); ??
Ok, changed it into:

#ifdef _MSC_VER
    start = GetProcessTimes(GetCurrentProcess(), &createTime, &exitTime, &sysTime, &cpuTime);
    ngx_http_dummy_data_parse(ctx, r);
    cf->request_processed++;
    end = GetProcessTimes(GetCurrentProcess(), &createTime2, &exitTime2, &sysTime2, &cpuTime2);
#else
    if ((start = times(&tmsstart)) == (clock_t)-1)
      ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
		    "XX-dummy : Failed to get time");
    ngx_http_dummy_data_parse(ctx, r);
    cf->request_processed++;
    if ((end = times(&tmsend)) == (clock_t)-1)
      ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
		    "XX-dummy : Failed to get time");
#endif
    if (end - start > 10)


It compiles and links OK, but the same question remains: I'm not sure whether the 1/10 ms test works the same.
And which code block is best in this case?
Some advice on which is better, and why, would be appreciated.
Did you read the documentation I linked?

Because the function doesn't work that way: GetProcessTimes() returns a BOOL success flag, not a time value, and fills in the FILETIME structures you pass to it. Make yourself a helper function. You'll have to parse the time data out of the FILETIME structs.

BTW, I just noticed that your last line:

    if (end - start > 10) // report if it took more than 1/10MS to perform all the checks


isn't quite right. The amount of time in a clock tick isn't necessarily 1/10 ms; it varies by platform. You should be using the CLOCKS_PER_SEC value to help you here.


If you want another way of looking at simplified precision timing, check out this post:
http://www.cplusplus.com/forum/beginner/28855/3/#msg159698