How do calculators avoid floating-point precision errors?

I've tried writing a calculator program before. It turned out fine: it can do the basic calculations and evaluate whole expressions. But if I type in something like "sin π" (which should evaluate to 0), it gives me not 0 but some really small number displayed in scientific notation. If I multiply two big numbers, the result ends up as x.0000001 even though both operands are integers. I figured this is because of floating-point rounding errors. But how do the commercial scientific calculators out there avoid these precision errors?
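
Here's roughly what I mean, reproduced with plain doubles (the exact numbers are just examples):

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const double pi = std::acos(-1.0);   // closest double to pi, not pi itself

    // sin of the *rounded* pi is not 0; the leftover is about 1.2e-16.
    std::printf("sin(pi) = %.17g\n", std::sin(pi));

    // A product of two 9-digit integers needs ~18 significant digits,
    // more than a double's ~15-16, so the last digits come out wrong.
    std::printf("123456789 * 987654321 = %.17g\n", 123456789.0 * 987654321.0);
    return 0;
}
```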
They don't.

You just don't see the error on the display.
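
For example, on a PC you can fake a calculator's 10-digit display just by printing fewer digits; the error is still there internally, it just rounds away when shown:

```cpp
#include <cstdio>

int main() {
    double x = 0.1 + 0.2;                       // actually 0.30000000000000004...
    std::printf("full precision: %.17g\n", x);  // the rounding error is visible
    std::printf("10-digit view:  %.10g\n", x);  // prints 0.3 - error hidden by the display
    return 0;
}
```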
So they basically round the result before displaying it? But what about multiplication and division with big integers? Couldn't the result be off by so much that the rounding for display no longer hides it? And what data type do they use (is it long double)?
> And what data type do they use (is it long double)?

See: 'Using Binary or BCD?' http://www.thimet.de/CalcCollection/Calc-Precision.html
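
Short version of that page: many calculators do their arithmetic in decimal (BCD), one decimal digit at a time, rather than converting to binary, so a value like 0.1 is stored exactly and only genuine precision limits remain. A very rough sketch of the idea (not any real calculator's firmware):

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Toy BCD-style addition: each number is a vector of decimal digits with a
// fixed number of decimal places, added one digit at a time with carries.
std::vector<uint8_t> add_bcd(const std::vector<uint8_t>& a,
                             const std::vector<uint8_t>& b) {
    std::vector<uint8_t> sum(a.size(), 0);
    int carry = 0;
    for (int i = static_cast<int>(a.size()) - 1; i >= 0; --i) {
        int d = a[i] + b[i] + carry;          // add one decimal digit
        sum[i] = static_cast<uint8_t>(d % 10);
        carry = d / 10;
    }
    return sum;  // final carry-out ignored for brevity
}

int main() {
    // 0.10 + 0.20 with two decimal places: digits {0,1,0} and {0,2,0}
    std::vector<uint8_t> a = {0, 1, 0}, b = {0, 2, 0};
    std::vector<uint8_t> s = add_bcd(a, b);
    std::printf("%d.%d%d\n", s[0], s[1], s[2]);  // prints 0.30 exactly
    return 0;
}
```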