IEEE-754 Floating Point Standard, Part 1

Floating-point arithmetic lets us deal with numbers that are very large or very small by combining a significand with an exponent. In the early 1980s there were many incompatible approaches to floating-point arithmetic; it was the software equivalent of the Tower of Babel. In 1983 the U.S. Department of Defense's Ada programming language took the approach of specifying the number of decimal digits of precision and sweeping the implementation details under the rug. Binary interoperability became possible when the IEEE released the IEEE-754 floating-point standard in 1985, and floating point units (FPUs) that implemented the standard quickly emerged.

For binary formats the standard specifies four sizes: 16, 32, 64 and 128 bits. In Ada terms these correspond to precisions of 3, 6, 15 and 33 decimal digits. Half-precision (16 bits) is a storage-only format, i.e. it is not used for computation.

That raises the question: if the precision requirement falls between these values (say 9 or 11 digits), can we conserve memory with storage formats that meet the precision requirement while taking less space? The answer is absolutely yes. To do so, however, we need to add storage-only binary formats to the IEEE-754 standard and understand the implications of widening a storage format to a computational format and of narrowing a computational result back into a storage-only format.
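
To see where those digit counts come from, here is a minimal C sketch (not part of the standard, just an illustration) that derives the guaranteed decimal precision of each binary format from its significand width using floor((p - 1) * log10(2)); the widths 11, 24, 53 and 113 are the significand sizes of the four formats, including the implicit leading bit.

    #include <math.h>
    #include <stdio.h>

    /* Guaranteed decimal digits of a binary format with p significand bits
       (including the implicit leading bit): floor((p - 1) * log10(2)).     */
    static int guaranteed_digits(int p)
    {
        return (int)floor((p - 1) * log10(2.0));
    }

    int main(void)
    {
        const int bits[]   = { 16, 32, 64, 128 };   /* storage size     */
        const int widths[] = { 11, 24, 53, 113 };   /* significand bits */

        for (int i = 0; i < 4; i++)
            printf("binary%-3d -> %2d decimal digits\n",
                   bits[i], guaranteed_digits(widths[i]));
        return 0;   /* prints 3, 6, 15 and 33 -- matching the Ada digits */
    }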
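
And a hedged sketch of the storage-only idea: values live in memory as binary16, but every operation widens them to a computational format (here binary32) and narrows the result back. It assumes a compiler that provides the _Float16 type (recent GCC and Clang on common targets); widening binary16 to binary32 is always exact, while narrowing the result back may round.

    #include <stdio.h>

    #define N 4

    int main(void)
    {
        /* Data lives in memory in the 16-bit storage-only format...       */
        _Float16 table[N] = { 1.5f, 2.25f, 3.0f, 0.125f };

        /* ...but the arithmetic is carried out in a computational format. */
        float sum = 0.0f;
        for (int i = 0; i < N; i++)
            sum += (float)table[i];   /* widen: exact for every binary16   */

        /* Narrowing the result back may round to the 11-bit significand.  */
        _Float16 stored = (_Float16)sum;

        printf("sum in binary32: %g\n", sum);
        printf("sum in binary16: %g\n", (double)stored);
        return 0;
    }

Halving the storage per element is exactly the memory saving the question above is after; the cost is the rounding step on the way back into the narrower format.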