The kind of bugs I deal with

> My background for this statement is actual scientific implementation in models.
> By starting with the smallest values and summing up you'll end up
> with the lowest possible variance in value by summing them

When we add two floating point numbers of similar magnitude, the loss of precision is very small. When we add a big number to a much smaller one, the loss of precision is much more significant. For summation of a large sequence, one of the numbers being added is the accumulated sum so far; in general it is much larger than the next number being added to it.
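A minimal illustration of the effect (the specific values are arbitrary, chosen so that the small term falls entirely below the precision of the large one):

#include <cstdio>

int main()
{
    // Adding a small number to a much larger one: the small number's
    // contribution falls below the precision of the result and is lost.
    double big   = 1.0e16;
    double small = 1.0;
    double sum   = big + small;
    std::printf("%g\n", sum - big);   // prints 0, not 1: the 1.0 vanished

    // Two numbers of similar magnitude add with essentially no loss.
    std::printf("%g\n", 1.5 + 2.25);  // prints 3.75 exactly
}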

Sorting just puts the smaller numbers in front, but in the general case, the partial sum still rapidly becomes much larger than any one number and the loss of accuracy remains significant. (Except in the theoretical case where the numbers are sorted on their absolute values, the distribution is very symmetric, and the partial sum happens to hover around zero throughout the addition).

For the general case, the only advantage that sorting offers is consistency. First sorting and then summing the sequence will give the same answer for the same set of numbers, irrespective of their original order in the sequence. But the worst-case loss of precision still grows as O(N), just as in unsorted summation; by contrast, it is O(1) for compensated summation and O(log N) for cascaded summation.
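For reference, a sketch of what is meant by cascaded (pairwise) summation; the base-case cutoff of 8 is an arbitrary illustrative choice:

#include <cstddef>

// Cascaded (pairwise) summation: split the range in half, sum each half
// recursively, then add the two partial sums; the rounding error grows
// roughly as O(log N) instead of O(N).
double pairwise_sum(const double* v, std::size_t n)
{
    if (n <= 8)  // small ranges: plain left-to-right summation
    {
        double s = 0.0;
        for (std::size_t i = 0; i < n; ++i) s += v[i];
        return s;
    }
    const std::size_t half = n / 2;
    return pairwise_sum(v, half) + pairwise_sum(v + half, n - half);
}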


> It's in the lounge because I'm not looking for help.

You may not be looking for help; but if you intend to continue working in this field, I strongly encourage you to educate yourself on the basic issues involved in computations with floating point numbers.
@JLBorges I've been working in scientific computing for a number of years now. I'm well aware of the "basic issues involved in computations with floating point numbers". I'm also well aware of the difference between theoretical and practical application.

In a perfect world everything would add up nicely and we'd all go about our business. For 99.9% of applications that is definitely going to be the case. But for the 0.1% that is high-performance computing, theory and the real world are very different.

As I said, the actual scientific implementation that is preferred is to sort ascending and then add. I didn't decide this; it was decided by people who have done professional scientific modelling for more years than I've been alive, people whose combined experience measures in the hundreds of years. I don't defend the decision, I simply state it.

This is the area of computing where you literally have to trade accuracy for CPU cycles to ensure your models will run in a reasonable time. The difference between a model taking 1 second or 2 seconds to run is the difference between roughly 12 or 23 days of computational time when you're doing a 1-million-iteration MCMC... let alone a 10-million one.

Edit: it's worth understanding that if you cannot reproduce your results, or another person cannot reproduce them exactly, then they are no good. So while you do trade speed against accuracy, you need to ensure that what you do is 100% consistent for every run. This makes things like OpenMP/threading very difficult.
> I've been working in scientific computing for a number of years now.
> I'm well aware of the "basic issues involved in computations with floating point numbers".
> I'm also well aware of the difference between theoretical and practical application.

Fair enough. Please treat my earlier comment as withdrawn.


> As I said, the actual scientific implementation that is preferred is to sort ascending then add.
> I didn't decide this, it was decided by people who have done professional scientific modelling
> for more years than I've been alive. People's whose combined experience
> measures in the hundreds of years. I don't defend the decision,
> I simply state it.


'Sort ascending then add' would give consistent results, but it won't give the most accurate results. 'Sort ascending on absolute values (ignoring the sign) and then add' would also give consistent results, and the results would be more accurate than 'sort ascending then add'.
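To make the distinction concrete, a sketch of the two orderings (the function names are just for illustration):

#include <algorithm>
#include <cmath>
#include <numeric>
#include <vector>

// 'Sort ascending then add': large negative values come first, so the
// partial sum can start out large in magnitude.
double sum_sorted(std::vector<double> v)
{
    std::sort(v.begin(), v.end());
    return std::accumulate(v.begin(), v.end(), 0.0);
}

// 'Sort ascending on absolute values (ignoring the sign) then add': the
// partial sum grows from the smallest magnitudes upward.
double sum_sorted_by_magnitude(std::vector<double> v)
{
    std::sort(v.begin(), v.end(),
              [](double a, double b) { return std::fabs(a) < std::fabs(b); });
    return std::accumulate(v.begin(), v.end(), 0.0);
}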

'Compensated summation' would give much more accurate results at a fraction of the cost of 'Sort and then add'. But it won't give consistent results (the same answer for the same set of numbers, if the order is changed).
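For concreteness, a sketch of the usual compensated (Kahan) summation:

#include <vector>

// Compensated (Kahan) summation: carry a correction term that recovers
// the low-order bits lost when a small value is added to a large sum.
double kahan_sum(const std::vector<double>& v)
{
    double sum = 0.0;
    double c   = 0.0;             // running compensation
    for (double x : v)
    {
        double y = x - c;         // apply the correction to the next term
        double t = sum + y;       // low-order bits of y may be lost here
        c = (t - sum) - y;        // recover the lost part (algebraically zero)
        sum = t;
    }
    return sum;
}

Note that aggressive optimisation flags such as -ffast-math can reorder these operations and defeat the compensation.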

'Sort ascending on absolute values (ignoring the sign) and then perform compensated summation' would give consistent, and quite accurate, results. The overhead involved in the sort would eclipse the extra cost involved in compensated summation. AFAIK, the sensible recommendation should be: 'if the runtime cost of a sort is acceptable, first sort on absolute values (ignoring the sign) and then perform compensated summation'.
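Combining the two ideas, the recommendation amounts to something like this (the helper name is hypothetical):

#include <algorithm>
#include <cmath>
#include <vector>

// Sketch of the combined recommendation: sort on absolute value first,
// then accumulate with a compensated (Kahan-style) loop.
double sorted_compensated_sum(std::vector<double> v)
{
    std::sort(v.begin(), v.end(),
              [](double a, double b) { return std::fabs(a) < std::fabs(b); });

    double sum = 0.0, c = 0.0;
    for (double x : v)
    {
        double y = x - c;
        double t = sum + y;
        c = (t - sum) - y;
        sum = t;
    }
    return sum;
}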

Yeah, I completely agree, but CPU cycles are very expensive: a concept I think is lost on most modern-day developers.

I would give some examples of the number of calculations we do for one process, but it'd just terrify people (the number of individual memory reads/writes is in the billions).