Making Every Bit Count: Variable Precision?

Jeffrey Hittinger
Lawrence Livermore National Laboratory

Decades ago, when memory was a scarce resource, computational scientists routinely worked in single precision and were more sophisticated in dealing with the pitfalls of finite-precision arithmetic. Today, however, we typically compute and store results in 64-bit double precision by default, even when very few significant digits are required. Many of these bits represent errors (truncation, iteration, roundoff) rather than useful information about the solution. This over-allocation of resources wastes power, bandwidth, storage, and FLOPs; we communicate and compute on many meaningless bits and do not take full advantage of the computer hardware we purchase.
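To make the point concrete, here is a small illustrative script (my own example, not from the talk): a central-difference approximation of a derivative stored in float64. The step size fixes the truncation error at roughly 1e-11, so only about 11 of the roughly 16 decimal digits in the stored result are meaningful; the trailing digits encode discretization error, not the solution.

```python
import math

# Central-difference derivative of sin at x = 1.0 (exact answer: cos(1.0)).
# With h = 1e-5, the truncation error is ~h**2 / 6 ≈ 1.7e-11, far above
# machine epsilon (~2.2e-16), so the low-order bits of the float64 result
# carry discretization error rather than information about the derivative.
def central_diff(f, x, h):
    return (f(x + h) - f(x - h)) / (2.0 * h)

h = 1e-5
approx = central_diff(math.sin, 1.0, h)
exact = math.cos(1.0)
print(f"approx = {approx:.16f}")
print(f"exact  = {exact:.16f}")
print(f"|error| ~ {abs(approx - exact):.1e}")
```

Storing this result in 64 bits spends most of the mantissa on digits that are pure error.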



Because of the growing disparity between FLOP rates and memory bandwidth in modern computer systems, and the rise of general-purpose GPU computing (which offers higher peak performance in single precision), there has been renewed interest in mixed-precision computing, in which tasks that can be accomplished in single precision are identified and performed alongside double-precision work. Such static optimizations reduce data movement and FLOPs, but their implementations are time consuming and difficult to maintain, particularly across computing platforms. Task-based mixed precision would be more common if there were tools to simplify development, maintenance, and debugging. But why stop there? When simulating, we often adapt mesh size, order, and models to focus the greatest effort only where it is needed. Why not do the same with precision?
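A classic instance of task-based mixed precision is iterative refinement: solve the system cheaply in low precision, then correct it using residuals computed in high precision. The sketch below (my own toy example under stated assumptions: a small, well-conditioned system, no pivoting, float32 emulated via `struct` rounding) is not the speaker's method, just an illustration of the pattern.

```python
import struct

def f32(x):
    """Round a Python float (double) to the nearest IEEE float32 value."""
    return struct.unpack('f', struct.pack('f', x))[0]

def solve_low_precision(A, b):
    """Gaussian elimination with every intermediate rounded to float32.

    No pivoting; assumes a small, well-conditioned system (fine for this demo).
    """
    n = len(b)
    M = [[f32(v) for v in row] for row in A]
    y = [f32(v) for v in b]
    for k in range(n):
        for i in range(k + 1, n):
            m = f32(M[i][k] / M[k][k])
            for j in range(k, n):
                M[i][j] = f32(M[i][j] - f32(m * M[k][j]))
            y[i] = f32(y[i] - f32(m * y[k]))
    x = [0.0] * n
    for i in reversed(range(n)):
        s = y[i]
        for j in range(i + 1, n):
            s = f32(s - f32(M[i][j] * x[j]))
        x[i] = f32(s / M[i][i])
    return x

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]

x = solve_low_precision(A, b)            # accurate only to ~single precision
for _ in range(5):
    # Residual in full double precision: the only double-precision work needed.
    r = [b[i] - sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
    d = solve_low_precision(A, r)        # cheap low-precision correction
    x = [x[i] + d[i] for i in range(3)]  # accumulate the update in double
```

The bulk of the arithmetic runs in the cheaper precision, yet the refined solution converges to double-precision accuracy (the exact solution here is (2/9, 1/9, 13/9)). The maintenance burden comes from hand-placing these precision boundaries throughout a real code.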



At LLNL, we are developing the methods and tools that will enable the routine use of dynamically adjustable precision at a per-bit level, depending on the needs of the task at hand. Just as adaptive mesh refinement frameworks adapt spatial grid resolution to the needs of the underlying solution, our goal is to provide more or less precision as needed locally. Acceptance by the community will require that we address three concerns: that we can ensure accuracy, ensure efficiency, and ensure ease of use in development, debugging, and application. In this talk, I will discuss the benefits and challenges of variable-precision computing, highlighting aspects of our ongoing research in data representations, numerical algorithms, and testing and development tools.
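One way to picture a per-bit precision knob is to truncate the float64 mantissa to only as many bits as the local error level warrants. The toy function below (my own illustration; it is not the data representation developed at LLNL) keeps the leading `bits` mantissa bits of a value, which bounds the relative error by about 2**-bits.

```python
import math
import struct

def keep_mantissa_bits(x, bits):
    """Keep only the leading `bits` of a float64's 52-bit mantissa.

    A toy stand-in for a per-value precision knob: truncating the low-order
    mantissa bits introduces a relative error of at most about 2**-bits,
    so `bits` can be chosen to match the accuracy actually needed locally.
    """
    assert 0 <= bits <= 52
    (u,) = struct.unpack('<Q', struct.pack('<d', x))
    mask = ~((1 << (52 - bits)) - 1) & 0xFFFFFFFFFFFFFFFF
    (y,) = struct.unpack('<d', struct.pack('<Q', u & mask))
    return y

for bits in (8, 16, 32, 52):
    y = keep_mantissa_bits(math.pi, bits)
    rel = abs(y - math.pi) / math.pi
    print(f"{bits:2d} mantissa bits: pi ~ {y!r}, relative error {rel:.2e}")
```

A storage or communication layer that kept only the needed bits per value, rather than a fixed 64, is the kind of adaptivity the talk has in view; doing it accurately, efficiently, and transparently is the hard part.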



This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.


Back to Big Data Meets Computation