Computable Performance Metrics
Module 0: Floating Point Precision
Final
Module Author
Kris Stewart
Computer Science Department
stewart.sdsu.edu
W.M. Keck Foundation
This file:
ComputablePerformanceMetrics0.htm
Overview:
The speed of the floating-point processor has increased steadily since the dawn of modern computing in 1976 with the Intel i8080/8 processor. Although mainframe computing began in 1947 with the ENIAC, those early machines bear little resemblance to the systems we use in the 21st century. Still, it is remarkable that Gordon E. Moore's 1965 prediction [1] that the complexity, and capability, of processors would double each year still roughly holds. This module offers experiences based on teaching courses in high performance computing, beginning with the implied assumption that a “Computable Metric for Performance” addresses performance using floating-point arithmetic. Important properties of floating-point arithmetic are its finite precision, defined by the size of the computer word, and its relationship to scientific notation, which represents a value as a fraction and an exponent. The IEEE-754 Floating Point (FP) Standard [2] has now been adopted by most manufacturers of computer systems, which has simplified the process of writing portable numerical software. An interesting historical essay, “An Interview with the Old Man of Floating-Point” [3], an interview with William Kahan, provides insight into how a community of expert users succeeded through collaboration in producing an effective specification for floating-point arithmetic that was eventually adopted across the commercial marketplace by computer manufacturers.
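As a concrete illustration of the fraction-and-exponent layout, the short program below prints the IEEE-754 single-precision bit pattern of the value 1.0: one sign bit, eight exponent bits and twenty-three fraction bits. This is a minimal sketch, not part of the original module; it assumes a Fortran 90 compiler on a machine where default REAL and INTEGER are both 32 bits, and the program name SHOWBITS is illustrative.
      PROGRAM SHOWBITS
C     Print the raw 32-bit pattern of the REAL value 1.0.
C     On an IEEE-754 machine this shows sign bit 0, biased
C     exponent 01111111, and an all-zero 23-bit fraction.
      PRINT '(B32.32)', TRANSFER(1.0, 0)
      END
On an IEEE-754 system the output is 00111111100000000000000000000000, i.e. the hexadecimal pattern 3F800000.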
Introduction to the problem:
In the 21st century, most computers are based on chips with either 32-bit (single precision) or 64-bit (double precision) datapaths. In the early days of computing, different manufacturers' machines, for example the Univac 1108, Honeywell 6000, PDP-11, Control Data 6600, Cray-1, Illiac-IV, Burroughs B5500, Hewlett Packard HP-45, Texas Instruments SR-5x, IBM 370 and Telefunken TR440, used base-2 (binary), base-8 (octal), base-10 (decimal) or base-16 (hexadecimal) arithmetic systems, as summarized in the table in Chapter 2 of an excellent 1977 numerical computing text [4]. A thorough overview of the current status of floating point is presented in “What Every Computer Scientist Should Know About Floating-Point Arithmetic” [5]. Before we examine accuracy in performance calculations, we explore the actual arithmetic capability of the computing system used to gather the performance data in later modules. A computational method for determining machine arithmetic properties was established by Michael Malcolm [6] in 1972 and further refined by W.J. Cody [7] in 1988.
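To give a flavor of Malcolm's approach [6], the sketch below determines the base (radix) of the machine arithmetic by pure floating-point experiment. It is a simplified rendering of the published technique, not the original code; the program name RADIX is illustrative, and aggressive compiler optimization or extended-precision registers can defeat the tests, so results should be read with care.
      PROGRAM RADIX
      REAL A, B
C     Double A until (A + 1.0) - A no longer recovers 1.0,
C     i.e. until the spacing of REAL values near A exceeds 1.0.
      A = 1.0
   10 CONTINUE
      IF (((A + 1.0) - A) - 1.0 .NE. 0.0) GO TO 20
      A = 2.0*A
      GO TO 10
   20 CONTINUE
C     The smallest B that perturbs A yields the spacing
C     (A + B) - A, which equals the base of the arithmetic.
      B = 1.0
   30 CONTINUE
      IF ((A + B) - A .NE. 0.0) GO TO 40
      B = B + 1.0
      GO TO 30
   40 CONTINUE
      PRINT *, 'BASE = ', (A + B) - A
      END
On an IEEE-754 system this prints BASE = 2.0; Cody's MACHAR [7] extends the same idea to recover the precision, rounding behavior and exponent range.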
Statement of the problem:
The machine round-off, eps, can be defined as the smallest computable, and storable, floating-point value such that 1.0 + eps does not equal 1.0. It is a measure of the error one expects when doing arithmetic in floating point. On a system with IEEE single-precision arithmetic, eps works out to 2**(-23), approximately 1.19E-7, as the experiment below confirms. Following Forsythe, Malcolm and Moler [4], this module focuses on simply computing eps.
Conceptual questions to examine the student’s understanding:
What is the smallest positive value which, when added to 1.0, will change the value of 1.0?
Solution:
The simple calculation, given in FORTRAN:
      PROGRAM TESTMACHINE
C     Compute the single-precision machine round-off by
C     halving MYEPS until adding it no longer changes 1.0.
      REAL MYEPS
      MYEPS = 0.5
   10 CONTINUE
      MYEPS = MYEPS / 2.0
      IF (1.0 + MYEPS .GT. 1.0) GO TO 10
C     The loop exits one halving too far, so report 2.0*MYEPS.
      PRINT *, 'MYEPS = ', 2.0*MYEPS
      END
Students sometimes find it counterintuitive that there is a positive value, EPS, that does not change the value of 1.0 when added to it. Running the snippet of code above teaches them otherwise.
The code above can be copied into a file, testmachine.f, and compiled with any convenient FORTRAN compiler. At SDSU we use SunOS; the following commands produced these results.
> f90 testmachine.f -o testmachine
> testmachine
MYEPS = 1.1920929E-7
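As a cross-check, Fortran 90 provides the intrinsic function EPSILON, which returns the spacing of REAL values near 1.0 and should agree with the value computed above. The one-line program below is a minimal sketch; the program name CHECKEPS is illustrative.
      PROGRAM CHECKEPS
C     EPSILON(1.0) returns the machine epsilon for default REAL,
C     which is 2.0**(-23) = 1.1920929E-7 in IEEE single precision.
      PRINT *, 'EPSILON(1.0) = ', EPSILON(1.0)
      END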
Suggestions to instructors for using the module:
Many students in computing have focused on computations involving integers or characters and may not have a good understanding of floating-point calculations, which form the basis of much of computational science. Although this module is grounded in the historical evolution of floating point, the IEEE Floating-Point Standard, accepted in 1985, has been part of the digital world students have used since birth. The need for this standard to facilitate portable calculations, and the profound fact of its acceptance by industry, were recognized by the Association for Computing Machinery (ACM) when William Kahan was awarded the ACM Turing Award in 1989 [8].
References:
[1] G.E. Moore, “Cramming More Components onto Integrated Circuits”, Electronics, Vol. 38, No. 8, April 1965.
[2] IEEE 754: Standard for Binary Floating-Point Arithmetic, ANSI/IEEE Std. 754-1985, IEEE, 1985. Also available online from Professor Kahan’s lecture notes:
http://www.cs.berkeley.edu/~wkahan/ieee754status/IEEE754.PDF
[3] C. Severance, “An Interview with the Old Man of Floating-Point”, IEEE Computer, March 1998. Reminiscences elicited from William Kahan. Also available online from www.cs.berkeley.edu/~wkahan/ieee754status/754story.html
[4] G.E. Forsythe, M.A. Malcolm and C.B. Moler, Computer Methods for Mathematical Computations, Prentice-Hall, 1977.
[5] D. Goldberg, “What Every Computer Scientist Should Know About Floating-Point Arithmetic”, Computing Surveys, Association for Computing Machinery, March 1991. Also available online from http://docs.sun.com/source/817-6702/ncg_goldberg.html
[6] M.A. Malcolm, “Algorithms to Reveal Properties of Floating-Point Arithmetic”, Communications of the ACM (15), November 1972.
[7] W.J. Cody, Jr., “Algorithm 665: MACHAR: A Subroutine to Dynamically Determine Machine Parameters”, ACM Transactions on Mathematical Software (14), 1988.
[8] http://www.acm.org/awards/turing_citations/kahan.html (Kahan’s Turing Award citation, 1989); http://en.wikipedia.org/wiki/ACM_Turing_Award (all Turing Awards since 1966).