Mixed-Precision Numerical Computing

The emergence of deep learning as a leading computational workload on large-scale cloud infrastructure has led to a plethora of heavily specialized hardware accelerators. These platforms offer new floating-point formats that typically reduce mantissa precision and modify the exponent range in exchange for significantly higher throughput, making them attractive for both performance and energy consumption. Leveraging these advances in numerical linear algebra solvers requires a new breed of methods: ones that embrace the new floating-point storage and processing formats while still delivering robust error bounds on par with the classic IEEE 754 formats such as single and double precision. The mixed-precision effort encompasses the activities that seize this opportunity, delivering unprecedented levels of performance while maintaining accuracy and stability in line with commonly expected results.
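One well-known pattern in this space is mixed-precision iterative refinement: perform the expensive factorization and triangular solves in a fast low-precision format, then recover accuracy with residual corrections computed in a higher precision. The sketch below is an illustration of that general idea only, not the specific method of any particular library; it uses NumPy's `float32`/`float64` as stand-ins for the low- and high-precision formats, and `np.linalg.solve` as a stand-in for a reusable low-precision factorization.

```python
import numpy as np

def mixed_precision_solve(A, b, iters=5):
    """Illustrative mixed-precision iterative refinement.

    The "inner" solve runs in float32 (the cheap, low-precision format);
    residuals and the accumulated solution are kept in float64. A real
    implementation would factor A once in low precision and reuse the
    factors for every inner solve.
    """
    A32 = A.astype(np.float32)
    # Initial low-precision solve, promoted to double for accumulation.
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                       # residual in double precision
        # Correction from a low-precision solve against the residual.
        d = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
        x += d
    return x
```

For well-conditioned systems, each refinement sweep shrinks the error by a factor roughly proportional to the low-precision unit roundoff times the condition number, so a handful of iterations typically recovers full double-precision accuracy even though all heavy solves ran in single precision.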