#### Rounding in the real world, some analysis

Having discovered this by chance, I think I have to clear things up a little...

We have to distinguish between calculations in commerce and finance on one side and numerical calculations on the other.

Financial calculations should indeed be done with decimal digits, using numbers with a defined number of digits to the right of the decimal point. Such numbers are available in languages like COBOL, RPG, and PL/I, which were designed for programming commercial applications. The number of digits is then always under the control of the programmer, and rounding can happen as the law requires.
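As an illustration only (a sketch in Python's decimal module, not COBOL, with a made-up price and tax rate), this is the kind of control such a decimal type gives the programmer:

```python
# Sketch of fixed-point decimal arithmetic with Python's decimal module.
# The price and the 7% rate are made up; ROUND_HALF_UP is the usual
# "commercial" rounding rule.
from decimal import Decimal, ROUND_HALF_UP

price = Decimal("19.99")
rate = Decimal("0.07")            # hypothetical tax rate

# quantize fixes the result to exactly two digits after the point
tax = (price * rate).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
print(tax)                        # 1.40

# the same product in binary floating point is already inexact
print(19.99 * 0.07)               # not exactly 1.3993
```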

Since commercial applications were and are an important use of computers, all of the classical mainframes contain support (instructions etc.) for decimal arithmetic. The early microprocessors also had instructions to treat one byte as two BCD-encoded digits.
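For illustration (a sketch in Python, since I cannot assume everyone reads assembler), packed BCD simply stores one decimal digit in each half of a byte:

```python
# Packed BCD: two decimal digits per byte, one in each nibble,
# the way early microprocessors stored them.
def to_packed_bcd(n):
    """Encode 0 <= n <= 99 as one packed-BCD byte."""
    return ((n // 10) << 4) | (n % 10)

def from_packed_bcd(b):
    """Decode one packed-BCD byte back to an integer."""
    return (b >> 4) * 10 + (b & 0x0F)

b = to_packed_bcd(42)
print(hex(b))              # 0x42 -- the hex digits read like the decimal digits
print(from_packed_bcd(b))  # 42
```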

Yet 'modern' languages like C, Pascal, and Java do not have any decimal number type. And since modern RISC microprocessors were designed to run C programs fast, these processors do not have any support for decimal arithmetic. (Statistics of programs written in C show no use of decimal arithmetic!)

There is only one exception: the HP Precision Architecture supports decimal numbers, as these processors were designed to run commercial applications developed for the HP 3000 family of computers. The Itanium, though designed to run large commercial databases, does not support decimals.

And this is not the end: when AMD designed the 64-bit extensions of the Intel x86 architecture, they needed some more opcodes, so they had to remove some instructions from the original set. And what did they take? The decimal instructions (among others)! Just see http://www.amd.com/us-en/assets/content_type/white_papers_and_tech_docs/24592.pdf, page 85.

Decimal arithmetic does not become impossible without these instructions, as there are very clever tricks to do it with binary arithmetic plus a lot of logical instructions (bit fiddling). Yet one has to know about this, and if the compiler does not support it, subroutines have to be written and called. All this does not make things easy and fast...
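One such trick, sketched here in Python (the constants are for eight BCD digits in a 32-bit word; the method is bit-fiddling folklore, described e.g. in Hacker's Delight), adds two packed-BCD numbers with nothing but binary add, subtract, xor, and, and shift:

```python
def bcd_add(a, b):
    """Add two packed-BCD numbers (8 digits in 32 bits) using only
    binary arithmetic and logical operations -- no decimal instructions."""
    t1 = a + 0x66666666           # pre-add 6 to every digit, forcing a binary
                                  # carry wherever a digit sum exceeds 9
    t2 = t1 + b                   # plain binary sum
    t3 = t1 ^ b
    t4 = t2 ^ t3                  # exposes the inter-digit carry bits
    t5 = ~t4 & 0x111111110        # 1 at every digit boundary WITHOUT a carry
    t6 = (t5 >> 2) | (t5 >> 3)    # turn each such 1 into a 6
    return t2 - t6                # take the unneeded 6es out again

print(hex(bcd_add(0x1999, 0x0001)))  # 0x2000 -- i.e. 1999 + 1 = 2000
```

On a real processor this costs a handful of instructions per word instead of one decimal-add instruction, which is exactly the "not easy and fast" point above.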

Then we have to see that languages like COBOL are totally 'uncool' in education today, so nearly no student knows about them or about using decimal numbers.

Let's now look at numerical computations. The fact that these are affected by rounding errors has been well known since the beginning of computing.
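The standard classroom example: 0.1 has no finite binary representation, so even the simplest decimal sums are already rounded:

```python
# 0.1 and 0.2 are stored as the nearest binary fractions, so their
# sum is not the binary number nearest to 0.3
print(0.1 + 0.2 == 0.3)   # False
print(0.1 + 0.2)          # 0.30000000000000004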

Since the 1960s, the Institute of Applied Mathematics at Karlsruhe University here in Germany, headed by Prof. Ulrich Kulisch, has done research and development to solve the problems of rounding errors. The main tools are interval arithmetic and the precise dot product. I was part of these efforts for many years, so I can speak as an insider. If anybody gets curious now: just search for "Ulrich Kulisch" on Google.
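To give a flavour of these two tools, here is a toy sketch in Python (my own illustration, not the Karlsruhe XSC libraries; math.nextafter needs Python 3.9+, and math.fsum is only a correctly rounded sum, weaker than Kulisch's exact dot product):

```python
import math

def interval_add(a, b):
    """Add two intervals (lo, hi). Since we cannot switch the hardware
    rounding mode from Python, we widen by one ulp on each side ("outward
    rounding"), so the true sum is guaranteed to be enclosed."""
    lo = math.nextafter(a[0] + b[0], -math.inf)   # round down
    hi = math.nextafter(a[1] + b[1], math.inf)    # round up
    return (lo, hi)

x = (0.1, 0.1)                    # each endpoint is itself only the
y = (0.2, 0.2)                    # nearest binary number to 0.1 resp. 0.2
lo, hi = interval_add(x, y)
print(lo <= 0.3 <= hi)            # True -- the enclosure holds

# In the spirit of the precise dot product: a correctly rounded sum
print(math.fsum([0.1] * 10))      # 1.0
print(sum([0.1] * 10))            # 0.9999999999999999
```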

We developed various extensions to programming languages to make the use of this extended arithmetic easy (no need for assembler programming!), and a lot of algorithms for doing computations with guaranteed precise results were developed as well (e.g. solving linear equations). All this is available for free (http://www.rz.uni-karlsruhe.de/~iam/html/language/xsc-sprachen.html).

Yet there is one main drawback: these calculations are slower than ordinary floating point. And soon we found that people prefer computing wrong, or at least questionable, numbers fast over computing precise numbers slowly. So all these efforts are known only to a few insiders and are not in wide use.

At present the performance problems could be solved, as processors now have transistors almost for free. Last year I discovered that modern processors by Intel and AMD contain nearly everything needed to implement interval arithmetic. With just a little clever control logic (a few thousand transistors, I guess) it could be implemented. For details see http://www.springerlink.com/content/e0kg1t22pmh75825/fulltext.pdf

Yet why do we not have interval arithmetic in any processor? It seems to be a chicken-and-egg problem. Interval arithmetic is considered slow and expensive, so those who know about it do not like to use it. Therefore it is not demanded by the market, and since there seems to be no market, there are no implementations.

It is very difficult to prove that something failed or crashed because of imprecise calculations. When we developed all those great tools for precise calculations, we searched for some kind of "killer application", yet could not find one.

Somehow the real world seems to be tolerant of rounding errors in floating-point computations.

I apologize for typing errors and bad English.

Reinhard Kirchner, Univ. of Kaiserslautern, Germany

kirchner@informatik.uni-kl.de