Hey all.

I wrote a piece of code that calculates the time of usage remaining, based on a system's current battery percentage (similar to a phone). This uses a basic y = mx + c model fitted with a least-squares estimation (over samples in descending order). It all works fine, except that occasionally the time remaining fluctuates significantly before returning to its linear relationship. It can return reasonable results for about an hour, then gives a few "rubbish" results, then goes back to normal.

Here is an example of percentage \ time remaining (minutes):

97\48
96\48
94\41
94\44
93\45
93\49
91\47
89\44
88\42
87\41
86\40
85\39
84\39
83\38
81\37 etc...

Does anyone have an idea as to what could be causing this (assuming the equations are correct)? I realize it's hard to judge without the code, but any ideas would be very much appreciated.

Is this because the inputs can have a few (very) bad readings?

Maybe robust regression instead of least squares will give you a more consistent result.

Might be useful to plot your readings. Can a single "very busy" reading cause such a distortion?

All the inputs follow a linear slope and, when plotted in Excel, give an R-squared value of 0.97, so there aren't any bad inputs or outliers.

I tend to agree with Duoas, but how do I overcome that if I use float? Using double doesn't seem to change anything. Any ideas?

You'll need to use a little algebra to rearrange your calculations to reduce the likelihood of overflow. Sometimes this takes a little thought.

closed account (*o1vk4iN6*)

Perhaps C++ isn't the language for the job if precision is in question here. You could potentially look into FORTRAN; I've never used the language, but I know it was designed with scientific calculations in mind. There might be some newer languages out there now rather than FORTRAN. I know Python has no overflow issues as far as integer multiplication goes, although I'm not entirely sure how precise it is.

You won't get better precision from any other language, and bignums won't help improve speed.

closed account (*o1vk4iN6*)

C++ typically uses the FPU for its floating-point calculations, which can work in extended precision (80 bits). Compare that to a high-level language that may not use the FPU: it can offer higher precision, but at a slower calculation speed. A double itself is just a 64-bit value in the standardized IEEE format.

Try to calculate 10 ** (2 ** 19) in C++.

There is no easy fix. I guess I will just modify my equations and incorporate some form of real-time filtering, like a Kalman filter.

Topic archived. No new replies allowed.