Heartbeat per second loses time to computation

Hey folks,
I have a business case where I want to post a heartbeat once per second.
The problem is that the code runs slow by a few milliseconds each iteration, so over time it drags behind. I'd like 86400 heartbeats in 86400 seconds, but I'll end up short because of computation time.

Can this be solved?

Even in a simple case where I don't publish to my database, it still falls behind.
#include <iostream>
#include <thread>
#include <chrono>
#include <ctime>

using namespace std;

int main()
{
    int maxcount = 86400;
    int a = 0;
    auto genesistime = std::chrono::system_clock::now();
    std::time_t genesis = std::chrono::system_clock::to_time_t(genesistime);
    cout << "We are starting here: " << std::ctime(&genesis);

    while (a < maxcount)
    {
        auto start = std::chrono::system_clock::now();
        a++;
        std::this_thread::sleep_for(std::chrono::seconds(1));
        auto end = std::chrono::system_clock::now();

        if (a % 3600 == 0)
        {
            cout << a << "-->";
            std::chrono::duration<double> elapsed_seconds = end - start;
            std::time_t end_time = std::chrono::system_clock::to_time_t(end);
            std::cout << "finished computation at " << std::ctime(&end_time)
                      << "elapsed time: " << elapsed_seconds.count() << "s\n";
        }
    }
    return 0;
}

Output:

We are starting here: Wed May 22 10:49:16 2019
3600-->finished computation at Wed May 22 11:49:17 2019
elapsed time: 1.00012s
7200-->finished computation at Wed May 22 12:49:17 2019
elapsed time: 1.00008s
10800-->finished computation at Wed May 22 13:49:18 2019
elapsed time: 1.00008s
14400-->finished computation at Wed May 22 14:49:18 2019
elapsed time: 1.00008s
18000-->finished computation at Wed May 22 15:49:18 2019
elapsed time: 1.00006s
21600-->finished computation at Wed May 22 16:49:19 2019
elapsed time: 1.00007s
25200-->finished computation at Wed May 22 17:49:19 2019
elapsed time: 1.00007s
28800-->finished computation at Wed May 22 18:49:19 2019
elapsed time: 1.00007s
32400-->finished computation at Wed May 22 19:49:20 2019
elapsed time: 1.00012s
36000-->finished computation at Wed May 22 20:49:20 2019
elapsed time: 1.00007s
39600-->finished computation at Wed May 22 21:49:20 2019
elapsed time: 1.00007s
43200-->finished computation at Wed May 22 22:49:20 2019
elapsed time: 1.00013s
Try this:

#include <iostream>
#include <thread>
#include <chrono>
#include <ctime>

int main()
{
	double time = 0; //Variable to look at extra milliseconds
	bool check = false;
	int maxcount = 86400;
	int a = 0;
	auto genesistime = std::chrono::system_clock::now();
	std::time_t genesis = std::chrono::system_clock::to_time_t(genesistime);
	std::cout << "We are starting here: " << std::ctime(&genesis);

	while (a < maxcount)
	{
		auto start = std::chrono::system_clock::now();
		a++;

		if (time < 1) //See if extra milliseconds add up to a whole second
		{
			std::this_thread::sleep_for(std::chrono::seconds(1));
			check = false;
		}

		else //If we've built up to a second, don't sleep
		{
			time -= 1; //Once we've built to a second, take it out
			check = true;
		}

		auto end = std::chrono::system_clock::now();

		std::chrono::duration<double> elapsed_seconds = end - start;
		std::time_t end_time = std::chrono::system_clock::to_time_t(end);

		if (!check) //Could also check if elapsed_seconds > 1, but this is safer
			time += (elapsed_seconds.count() - 1); //Add up all those extra milliseconds
		else
			time += elapsed_seconds.count();
	}
	return 0;
}


Basically, add up the milliseconds and check to see when it'll add up to a full second. When it does, simply skip over the line which delays for a second for that specific iteration.

I didn't test it out myself since it would take a whole day to run, but I assume it would work.


How many seconds are you losing? On my machine, with a little math, I saw I'd lose about 200 seconds a day. Using your output data, it looks like you'd lose about 7-8 seconds a day.
Suppose you have a function seconds_since_midnight() that returns the number of seconds since midnight (i.e. seconds_since_midnight() = current_time.hours * 3600 + current_time.minutes * 60 + current_time.seconds), and suppose you have a process that needs to run every hour with a deadline of k milliseconds (i.e. if the process runs k or more milliseconds late, the results are useless). Then all you need to do is check the value of seconds_since_midnight() every k/2 milliseconds:
//For example:
const int k = 1000;

auto start = seconds_since_midnight() % 3600;
while (true){
    do_useful_work();

    // Wait for the current second to pass first, otherwise the loop
    // below exits immediately and do_useful_work() runs again at once.
    while (seconds_since_midnight() % 3600 == start)
        std::this_thread::sleep_for(std::chrono::milliseconds(k / 2));
    while (seconds_since_midnight() % 3600 != start)
        std::this_thread::sleep_for(std::chrono::milliseconds(k / 2));
}
In the above example, suppose do_useful_work() was first called at 00:12:42.85. The second time it could get called at 01:12:42.23, the third time at 02:12:42.47, etc., but it will never get called at xx:12:41.99, and it would be unlikely to get called at xx:12:42.50 or later, unless the system is extremely busy and can't schedule time for the process. What's more, even if a deadline is missed by a little bit, the process won't drift (unless the hardware RTC drifts).
Perhaps a better way than my previous post. This specific version will only work on Windows, but there are ways to do it on other operating systems too:

#include <iostream>
#include <thread>
#include <chrono>
#include <ctime>
#include <windows.h>

using namespace std;

int main()
{
	double time = 0;
	bool check = false;
	int maxcount = 86400;
	int a = 0;
	auto genesistime = std::chrono::system_clock::now();
	std::time_t genesis = std::chrono::system_clock::to_time_t(genesistime);
	cout << "We are starting here: " << std::ctime(&genesis);

	while (a < maxcount)
	{
		auto start = std::chrono::system_clock::now();
		a++;

		if (time >= 0.001000)
		{
			time -= 0.001000;
			check = true;
			Sleep(999);
		}
		else
		{
			Sleep(1000);
			check = false;
		}

		auto end = std::chrono::system_clock::now();

		std::chrono::duration<double> elapsed_seconds = end - start;
		std::time_t end_time = std::chrono::system_clock::to_time_t(end);
		
		if (!check)
		{
			// Truncate to whole seconds; the fractional remainder is drift
			int t = (int)elapsed_seconds.count();
			time += (elapsed_seconds.count() - t);
		}
		else
			time += (elapsed_seconds.count() - .999);

	}
	return 0;
}


In this code, you take off only a millisecond once the delay builds up to it. In the other code, you'd skip a whole second at the iteration where the delay builds up to a second, which, if you're displaying output, will look odd every now and then.
FYI the Windows scheduler has a granularity of ~15 ms, so Sleep(1000) and Sleep(999) do basically the same.
FYI the Windows scheduler has a granularity of ~15 ms, so Sleep(1000) and Sleep(999) do basically the same.

That's depressing, my bad then. Sleep can then be replaced with this_thread::sleep_for(std::chrono::milliseconds(x)), which I'd assume is at least more accurate. I saw some recommended ways to get more fine-tuned accuracy, but it seemed like more trouble than it's worth.
I'm sure sleep_for() just uses Sleep() (or whatever underlying function Sleep() uses).
Windows does have ways to sleep for short periods (as long as your application can handle occasionally missing deadlines by a couple milliseconds), but sleeping for longer periods will always be inaccurate unless you want to spinlock.