It works well if I set the framerate limit to 60, but if I raise it to 100 or remove it altogether, I get an error that seems to scale (perhaps exponentially) with the framerate. For a limit of 100 fps the error is only ~2 fps (it displays 98-100), but with a limit of 300 the displayed framerate is between 190 and 250 fps. The measurement error probably doesn't account for all of that, yet since the main loop currently does nothing other than process events, it should easily be able to maintain 300 fps -- and indeed, if I remove the limit altogether I get a reading of 350-380 fps, though part of the difference is likely down to the overhead of sf::Window::setFramerateLimit itself. I thought maybe the error was due to epsilon values in floating-point calculation. Is there some way I can account for it?
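(Floating-point epsilon seems too small to explain it: a double's relative rounding error is on the order of 1e-16, nowhere near a 50+ fps gap. One hypothesis I haven't verified is the granularity of whatever sleep call sits behind the limiter. A back-of-the-envelope sketch, purely under the assumption of whole-millisecond sleep granularity, lands in the right ballpark:)

```cpp
#include <cmath>
#include <cstdio>

// Hypothetical check, NOT SFML's documented behaviour: if the frame
// limiter can only sleep in whole-millisecond steps, the real frame
// time is the target period rounded up to the next millisecond.
int main()
{
    const double limits[] = {60.0, 100.0, 300.0};
    for (double target : limits)
    {
        double periodMs = 1000.0 / target;      // ideal frame time in ms
        double actualMs = std::ceil(periodMs);  // quantized to whole ms
        std::printf("limit %3.0f fps -> ideal %6.3f ms -> ~%3.0f fps measured\n",
                    target, periodMs, 1000.0 / actualMs);
    }
}
```

(That prints ~59, 100, and 250 fps for limits of 60, 100, and 300 -- notably, 250 matches the top of the range I actually see at a 300 fps limit.)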
Also, how can I make the change in the displayed framerate smoother? I thought of using weight values, but I couldn't see where to actually apply them: weighting the subtracted sample by 0.1 and the added one by 0.9 (or the reverse) just makes the value either decrease (and then overflow) or increase without bound. Changing how often I sample (currently every frame) or how often the mean is calculated (currently every 100th frame) makes the reading more stable, but that isn't a real solution. I want something that turns the jumps into a curve -- perhaps a sigmoid function? (Edit: I tried the sigmoid and arctangent functions; they didn't really work.)
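(For reference, the usual way to apply a weight here is an exponential moving average of the frame time: a single weight blends each new sample into the running value, smoothed = alpha * sample + (1 - alpha) * smoothed, instead of weighting a subtraction and an addition separately. A minimal sketch of what I mean, assuming SFML's sf::Clock; the alpha value and names are just illustrative:)

```cpp
#include <SFML/System/Clock.hpp>
#include <cstdio>

int main()
{
    sf::Clock clock;
    float smoothedDt  = 1.f / 60.f;  // seed with a plausible frame time
    const float alpha = 0.1f;        // illustrative weight; smaller = smoother

    for (unsigned frame = 0; ; ++frame)  // stand-in for the real event loop
    {
        float dt = clock.restart().asSeconds();

        // One weight blends the new sample into the running value, so the
        // result always lies between the old value and the new sample --
        // it can neither collapse toward zero nor grow without bound.
        smoothedDt = alpha * dt + (1.f - alpha) * smoothedDt;

        if (frame % 100 == 0)  // display every 100th frame, as I do now
            std::printf("fps: %.1f\n", 1.f / smoothedDt);
    }
}
```

(Averaging the frame time and taking the reciprocal only for display also avoids the bias you get from averaging fps values directly, since fps is a reciprocal quantity.)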