The 'paranoia' program, available from netlib, is an excellent program for testing the basic conformance of floating-point hardware and emulators to the IEEE 754 floating-point standard. It is of most interest if you are designing floating-point hardware, writing software that performs basic floating-point arithmetic, or are just plain curious.

'paranoia' tests basic floating-point design using selected critical operands. Don't expect it to find manufacturing defects in your hardware (an early batch of AMD 486 processors had such defects) or design defects (such as the notorious FDIV bug in early Pentium processors), which often affect an unpredictably small range of operands.

As good as it is, the 'paranoia' program has its own flaws.

One fairly obvious problem is that the carefully crafted tests of the floating-point arithmetic can be defeated if the compiler is allowed to optimise. With gcc you can use '-O' or similar optimisation levels provided you also pass the '-ffloat-store' option, but that option defeats most of the optimisations anyway, and there is little point in optimising a program like 'paranoia', so why bother?

The flaw most commonly encountered is in the way 'paranoia' tests round-to-nearest-or-even rounding. 'paranoia' does not correctly handle the case where intermediate results are computed to a precision that is neither equal to the nominal precision nor at least twice the nominal precision. For example, on Intel 80x86 machines it is usual to run the FPU at 64-bit precision, so intermediate results carry 64 significand bits, while a 'double' has only 53. In this case, if you compile 'paranoia' to use doubles, the program will incorrectly report that the arithmetic has a FLAW. You can fix this by adding code to set the FPU to 53-bit precision (e.g. with my wmexcp package).