I wrote a piece of code that calculates the usage time remaining based on a system's current battery percentage (similar to a phone). It fits a straight line y = mx + c using a least-squares estimation over samples in descending order. It all works fine except that occasionally the time remaining fluctuates significantly before returning to the linear relationship: it can give reasonable results for about an hour, then a few "rubbish" results, then go back to normal.
here is an example of percentage/time remaining (minutes):
Does anyone have an idea as to what could be causing this (assuming the equations are correct)? I realize it's hard to judge without the code, but any ideas would be very much appreciated.
Perhaps C++ isn't the language for the job if precision is in question here. You could potentially look into Fortran; I've never used the language, but I know it was designed with scientific calculations in mind, and there may well be newer languages better suited than Fortran by now. I know Python's integers never overflow in multiplication (they are arbitrary precision), although I'm not entirely sure how precise its floating-point arithmetic is.
On x86, C++ implementations often use the x87 FPU for floating-point calculations, which works internally in extended precision (80 bits). Compare that to a high-level language that does the arithmetic in software, which can give higher precision at the cost of slower calculation. In any case a double is not "two integers": it is a single 64-bit value in the standardized IEEE 754 format.