Angle Pt 5 – A Novel GPS Location Code

Color My Data Eight-bit Compressed Binary Format (CBF-8) encodes data on six-bit boundaries using base-64 digits. The base-64 digits are the decimal digits 0 to 9, the upper-case letters A to Z, the lower-case letters a to z, and the special characters $ and &. At three digits (18 bits) per coordinate, GPS positions are accurate to within 500 feet (best case) to 1,000 feet (worst case). At four digits (24 bits) per coordinate, accuracy improves to between 7.8 and 15.6 feet. In the United States the five-digit zip code gets you to the nearest post office, and zip plus four narrows it much further. By contrast, the angle primitive written in base-64 digits, a four-digit latitude plus a four-digit longitude, together with GPS locates any place on the surface of the earth to better than 16 feet.
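As a sketch of how a coordinate might be packed into CBF-8 digits, the following assumes the digit alphabet runs 0-9, A-Z, a-z, $, & in that order; the post lists the characters but not their ordering, so the mapping is illustrative, not the official CBF-8 code.

```c
#include <stdint.h>
#include <string.h>

/* Assumed ordering of the CBF-8 base-64 digit alphabet; the post names
 * the characters (0-9, A-Z, a-z, $, &) but not their order. */
static const char CBF8_DIGITS[65] =
    "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz$&";

/* Encode the low 6*ndigits bits of value into base-64 digits,
 * most significant digit first.  out must hold ndigits + 1 chars. */
void cbf8_encode(uint32_t value, int ndigits, char *out)
{
    for (int i = ndigits - 1; i >= 0; i--) {
        out[i] = CBF8_DIGITS[value & 0x3F];  /* take the low six bits */
        value >>= 6;
    }
    out[ndigits] = '\0';
}

/* Decode a string of base-64 digits; returns -1 on an invalid digit. */
int64_t cbf8_decode(const char *digits)
{
    int64_t value = 0;
    for (const char *p = digits; *p; p++) {
        const char *d = strchr(CBF8_DIGITS, *p);
        if (d == NULL)
            return -1;
        value = (value << 6) | (int64_t)(d - CBF8_DIGITS);
    }
    return value;
}
```

With a mapping like this, a 24-bit latitude and a 24-bit longitude each become four characters, so a complete position fits in eight characters.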

Angle Pt 4 – Quantization Errors

Fixed-point types have quantization errors, which can be as large as the weight of the least significant bit. For an n-bit angle primitive, the weight of the least significant bit is 2^(2-n). The tangent function is non-linear, so the error in the recovered angle varies across the range: the angle is twice the arctangent of the stored value t, and the derivative of 2*arctan(t) is 2/(1 + t^2). The density of representable angles therefore grows as the square of the secant of the half angle. Near t = +/-1 the quantization error equals the weight of the least significant bit (best case); near t = 0 it is twice that (worst case).

To translate quantization errors into distances along the surface of the earth we take pi radians to be 10,800 nautical miles, or 20,000 kilometers (a nautical mile is about 6,076 feet). As the following table shows, it takes very few bytes to get high accuracy. For example, a four-byte angle primitive has a quantization error of between 0.366 inches (best case) and 0.733 inches (worst case = 2x best case). Using a four-byte angle type in lieu of an eight-byte double-precision value in radians yields a 50% savings in memory and bandwidth.

bytes   quantization error (radians)   distance          distance (meters)
1       0.024543693                    84.375 nm         156250
2       9.58738E-05                    0.329589844 nm    610.3515625
3       3.74507E-07                    7.822608948 ft    2.384185791
4       1.46292E-09                    0.366684794 in    0.009313226
5       5.71452E-12                    0.001432362 in    3.63798E-05

Angle Pt 3 – Sine and Cosine Terms

Navigation and astronomy are based on spherical trigonometry, which makes extensive use of the sine and cosine of an angle. The beauty of using the tangent of the half angle is that the sine and cosine can be calculated to double-precision accuracy without transcendental functions, as the following equations show. With t = tan(theta/2):

sin(theta) = 2t / (1 + t^2)
cos(theta) = (1 - t^2) / (1 + t^2)

When the pi bit is set, the represented angle is offset by pi, which simply negates both the sine and the cosine.
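In code, computing sine and cosine from the half-angle tangent takes only a couple of multiplies and a divide. A sketch, with the stored tangent kept as a double for clarity:

```c
#include <stdbool.h>

/* Recover sine and cosine from the angle primitive without transcendental
 * functions: t is the stored tangent of the half angle, and the pi bit
 * marks an additional offset of pi, which negates both results. */
void angle_sin_cos(double t, bool pi_bit, double *s, double *c)
{
    double d = 1.0 + t * t;
    *s = 2.0 * t / d;          /* sin(2*atan(t)) */
    *c = (1.0 - t * t) / d;    /* cos(2*atan(t)) */
    if (pi_bit) {              /* sin/cos of (theta + pi) */
        *s = -*s;
        *c = -*c;
    }
}
```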


Angle Pt 2 – Side Opposite; Side Adjacent

When you have the two sides of a right triangle, calculating the tangent of the half angle is straightforward. First compute the hypotenuse, r, as the square root of the sum of the squares of the side opposite, y, and the side adjacent, x: r = sqrt(x^2 + y^2). When x is negative, use the ratio -y/(r - x) and set the pi bit; otherwise, use the ratio y/(r + x) and clear the pi bit. Either way, the ratio always lies between -1.0 and 1.0.

Angle Pt 1 – A New Primitive Data Type

Since GPS, navigation and astronomy all perform calculations on angles, why not have a dedicated angle data type that would compress location data and simplify trigonometric calculations?  Allow me to introduce you to the angle type.

I think of a primitive as a value that can be placed in a machine register; values like char, int, long, float, double and boolean come to mind. The angle primitive has two fields: a boolean pi bit and a fixed-point field designed to hold a value from -1.0 up to, but not including, 1.0. Let the fixed-point value be the tangent of the half angle. The arctangent is then a value between -pi/4 and +pi/4, and twice the arctangent ranges between -pi/2 and pi/2. The pi bit adds or subtracts pi, yielding an angle between -pi and pi: a full circle. In the following illustration the pi bit is in orange and the fixed-point value is in green.
[Illustration: angle primitive bit layout; the pi bit (orange) followed by the fixed-point field (green)]
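As a concrete sketch, here is one way a 32-bit angle primitive could be laid out. The bit positions (pi bit in the most significant bit, a 31-bit two's-complement fixed-point field with 30 fractional bits below it) are an assumption for illustration; the post fixes the fields but not their exact positions.

```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch of a 32-bit angle primitive: top bit is the pi bit, the low
 * 31 bits are a two's-complement fixed-point value with 30 fractional
 * bits, covering [-1.0, 1.0).  The least significant bit weighs 2^-30,
 * i.e. 2^(2-n) for n = 32.  Layout is assumed, not from the post. */
typedef uint32_t angle32;

#define ANGLE32_SCALE 1073741824.0   /* 2^30 */

angle32 angle32_make(bool pi_bit, double t)   /* t in [-1.0, 1.0) */
{
    int32_t fixed = (int32_t)(t * ANGLE32_SCALE);
    return ((uint32_t)pi_bit << 31) | ((uint32_t)fixed & 0x7FFFFFFFu);
}

bool angle32_pi_bit(angle32 a)
{
    return (a >> 31) != 0;
}

double angle32_tan_half(angle32 a)
{
    /* Portably sign-extend the 31-bit field to an int32_t. */
    int32_t fixed = (int32_t)((a & 0x7FFFFFFFu) ^ 0x40000000u) - 0x40000000;
    return fixed / ANGLE32_SCALE;
}
```

Values whose fixed-point images are exact, such as -0.5 or 0.25, round-trip through the packed form without loss.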

IEEE-754 Pt 1 – Floating Point Standard

Floating-point arithmetic allows one to deal with numbers that are very large or very small by combining a number with an exponent. In the early 80s there were many approaches to floating-point arithmetic; it was the software equivalent of the Tower of Babel. In 1983 the military's Ada programming language took the approach of specifying the number of digits of precision and sweeping the implementation details under the rug. Binary interoperability became possible when the IEEE released the IEEE-754 floating-point standard, and floating-point units (FPUs) that implemented it quickly emerged. For binary formats the standard specifies four sizes: 16, 32, 64 and 128 bits; in Ada these would be precisions of 3, 6, 15 or 33 digits. Half precision is a storage-only format (i.e., it is not used for computation). That raises a question: if the precision requirement is for an in-between value (e.g., 9 or 11 digits), can we conserve memory with storage formats that meet the precision requirement but take less storage? The answer is absolutely yes, but to do that we need to add storage-only binary formats to the IEEE-754 standard and understand the implications of widening a storage format to a computational format and of narrowing a computational format to fit within a storage-only format.
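The widening/narrowing concern can be illustrated with the two computational formats C already exposes: treating float as if it were a storage-only format for double values keeps about seven decimal digits and silently discards the rest. A minimal sketch:

```c
/* Narrow a double (the computational format) to a float (standing in for
 * a storage-only format), then widen it back.  The narrowing rounds the
 * 53-bit significand to 24 bits, about 7 decimal digits; the widening is
 * exact but recovers none of the discarded information. */
double roundtrip_through_float(double x)
{
    float stored = (float)x;   /* narrow: rounds to the storage format */
    return (double)stored;     /* widen: exact conversion back */
}
```

For 1/3, the round trip agrees with the original to roughly one part in 10^8 but no further, which is exactly the behavior an in-between storage-only format would need to specify.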