Alternatives to IEEE: NextGen Number Formats for Scientific Computing

Peter Lindstrom
Lawrence Livermore National Laboratory

Today's high-performance computing applications rely nearly exclusively on IEEE double-precision arithmetic for accuracy. However, there is a substantial performance and power cost associated with moving floating-point bits around the memory hierarchy, a large fraction of which are contaminated with round-off, truncation, iteration, or sensor error. Meanwhile, there is significant redundancy not only within each IEEE floating-point value, including unused exponent bits and representations of non-numbers, but also between spatially correlated values in discretized continuous fields.
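
As a concrete illustration of the per-value redundancy noted above, the short C program below (illustrative only, not taken from the talk) decodes the sign, exponent, and fraction fields of an IEEE double. All 2^53 - 2 bit patterns with an all-ones exponent and a nonzero fraction encode the same logical value, NaN, and the 11-bit exponent rarely exercises its full range in practice.

/* Decode the bit fields of an IEEE 754 binary64 value. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    double x = 0.1;
    uint64_t bits;
    memcpy(&bits, &x, sizeof bits);                /* reinterpret the 64 bits */

    uint64_t sign     = bits >> 63;                /*  1 bit  */
    uint64_t exponent = (bits >> 52) & 0x7FF;      /* 11 bits */
    uint64_t fraction = bits & ((1ULL << 52) - 1); /* 52 bits */

    printf("sign=%llu exponent=%llu fraction=0x%013llx\n",
           (unsigned long long)sign,
           (unsigned long long)exponent,
           (unsigned long long)fraction);

    /* An all-ones exponent (0x7FF) with any nonzero fraction is NaN,
       regardless of which of the 2^52 - 1 payloads (and either sign)
       is stored: the redundancy the abstract refers to. */
    return 0;
}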

This talk focuses on novel number representations that address many of the shortcomings of the IEEE floating-point format. We present a modular framework for generating new number formats that allocates precision where it is most needed and that removes the redundancies and complexities associated with IEEE. We further present the hardware-friendly ZFP "compressed" floating-point format for short tuples of numbers, which maximizes information content per bit stored and supports fast conversion to and from IEEE. We conclude with empirical results showing how these new number representations increase, often by several orders of magnitude, the accuracy of numerical computations, from basic arithmetic to linear algebra, PDE solvers, and physics mini-applications.
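
For readers who wish to experiment with the compressed representation described above, the sketch below uses the open-source zfp library (https://github.com/LLNL/zfp) to compress a smooth array of doubles at a fixed rate of 16 bits per value and decompress it back to IEEE. The array contents and rate are illustrative, and exact function signatures may vary slightly across zfp releases.

/* A minimal sketch of fixed-rate compression and decompression with zfp. */
#include <stdio.h>
#include <stdlib.h>
#include "zfp.h"

int main(void) {
    const size_t n = 1024;
    double* data = malloc(n * sizeof *data);
    for (size_t i = 0; i < n; i++)
        data[i] = 1.0 / (1.0 + i);              /* smooth, correlated field */

    /* describe the uncompressed array and request 16 bits per value */
    zfp_field* field = zfp_field_1d(data, zfp_type_double, n);
    zfp_stream* zfp = zfp_stream_open(NULL);
    zfp_stream_set_rate(zfp, 16, zfp_type_double, 1, 0);

    /* allocate a buffer large enough for the compressed bit stream */
    size_t bufsize = zfp_stream_maximum_size(zfp, field);
    void* buffer = malloc(bufsize);
    bitstream* stream = stream_open(buffer, bufsize);
    zfp_stream_set_bit_stream(zfp, stream);
    zfp_stream_rewind(zfp);

    /* compress from IEEE doubles, then decompress back into the array */
    size_t compressed = zfp_compress(zfp, field);
    zfp_stream_rewind(zfp);
    zfp_decompress(zfp, field);

    printf("compressed %zu doubles into %zu bytes\n", n, compressed);

    zfp_field_free(field);
    zfp_stream_close(zfp);
    stream_close(stream);
    free(buffer);
    free(data);
    return 0;
}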
