How should accuracy be specified?
Published standards for mathematical library routines set a precedent for this issue.
- E.g., IEEE standards specify a valid operating range and accuracy for each function.
Examples of what happens when the argument is out of range:
- A negative argument to a real square root, or a zero input to a logarithm, generates an error message.
- Too large (or too small) an argument may generate a warning or require special handling (overflow/underflow protection).
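A minimal sketch of these behaviors, using Python's standard `math` module as a stand-in for a standards-based library (the document does not name a specific library; this is illustrative only):

```python
import math

# Out-of-range arguments raise domain errors (error messages):
try:
    math.sqrt(-1.0)        # negative argument to a real square root
except ValueError as e:
    print("sqrt domain error:", e)

try:
    math.log(0.0)          # zero input to a logarithm
except ValueError as e:
    print("log domain error:", e)

# Too large an argument triggers overflow protection:
try:
    math.exp(1000.0)       # result exceeds the double-precision range
except OverflowError as e:
    print("exp overflow:", e)

# Too small an argument underflows to zero without an error:
print(math.exp(-1000.0))   # prints 0.0
```

Note the asymmetry: overflow is treated as an error, while underflow is handled silently, which is one form of the "special handling" mentioned above.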
Computational performance (efficiency) is not specified, but is market-driven.
Accuracy is specified in decimal digits (or bits of precision).
- Accuracy is specified for the worst-case application.
- Nominal applications never come close to needing full accuracy.
- No options are provided for reduced accuracy; this leads to ad hoc (non-standard) in-line replacements to reduce computation time.
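The kind of ad hoc in-line replacement described above can be sketched as follows. This is a hypothetical example (not from the source): a truncated Taylor series standing in for the library sine, trading the worst-case accuracy specification for speed on a narrow argument range.

```python
import math

def sin_fast(x):
    """Ad hoc reduced-accuracy replacement for the library sine.

    Hypothetical example: a 3-term Taylor series, accurate to roughly
    2e-4 only for |x| <= 1. The library sin, by contrast, must meet
    its accuracy specification for all finite arguments.
    """
    x2 = x * x
    # Horner form of x - x**3/6 + x**5/120
    return x * (1.0 - x2 / 6.0 * (1.0 - x2 / 20.0))

# On the narrow range the approximation is close to the full-accuracy library call:
x = 0.5
print(f"error at x={x}: {abs(sin_fast(x) - math.sin(x)):.2e}")
```

Because such replacements are valid only under assumptions the standard never states (here, the restricted argument range), they are inherently non-portable, which is the standardization problem this bullet points at.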
Standards-based implementations require bounds checking and an accuracy specification.