How do x86 processors usually go about dividing numbers while getting around the rounding error of binary division? That's probably a broad question with a lot of answers, but if you know any techniques they use to divide numbers, I'd like to know.

But how do computers deal with it? Why don't we just have numbers really close to the actual answer when you do division on a computer in whatever program? Rounding? But then if the answer was actually supposed to be that number non-rounded, it would be wrong... Clearly I'm pretty ignorant on the subject. Haha.

Computers *don't* deal with it -- they are too stupid.

A smart programmer will organize his mathematical expressions in a way that avoids as much error as he needs to.

It might be worth your time to read through this:

http://perso.ens-lyon.fr/jean-michel.muller/goldberg.pdf

