Floating-Point Numbers

A floating-point number system is characterized by four integers:

  β       base or radix
  p       precision
  [L, U]  exponent range

A number x is represented as

  x = ± ( d_0 + d_1/β + d_2/β² + ··· + d_{p−1}/β^{p−1} ) × β^E,

where 0 ≤ d_i ≤ β − 1, i = 0, . . . , p − 1, and L ≤ E ≤ U.

• d_0 d_1 ··· d_{p−1} is called the mantissa
• E is called the exponent
• d_1 d_2 ··· d_{p−1} is called the fraction

Typical Floating-Point Systems

Most computers use binary (β = 2) arithmetic. Parameters for typical floating-point systems are shown below:

  system         β    p    L       U
  IEEE SP        2    24   −126    127
  IEEE DP        2    53   −1022   1023
  Cray           2    48   −16383  16384
  HP calculator  10   12   −499    499
  IBM mainframe  16   6    −64     63

The IEEE standard floating-point systems are almost universally adopted for personal computers and workstations.

Normalization

A floating-point system is normalized if the leading digit d_0 is always nonzero unless the number represented is zero. In a normalized system, the mantissa m of a nonzero floating-point number always satisfies 1 ≤ m < β.

Reasons for normalization:
• the representation of each number is unique
• no digits are wasted on leading zeros
• the leading bit need not be stored (in a binary system)

Properties of Floating-Point Systems

A floating-point number system is finite and discrete. The number of normalized floating-point numbers is

  2(β − 1)β^{p−1}(U − L + 1) + 1

Smallest positive normalized number: underflow level = UFL = β^L
Largest floating-point number: overflow level = OFL = β^{U+1}(1 − β^{−p})

Floating-point numbers are equally spaced only between successive powers of β. Not all real numbers are exactly representable; those that are are called machine numbers.

Example: Floating-Point System

Tick marks indicate all 25 numbers in the floating-point system having β = 2, p = 3, L = −1, and U = 1:
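As a check on the count and range formulas above, the numbers of this toy system can be enumerated directly. A minimal Python sketch (not part of the original slides), using exact rational arithmetic to avoid any rounding in the enumeration itself:

```python
from fractions import Fraction

# Toy floating-point system from the example: beta = 2, p = 3, L = -1, U = 1
beta, p, L, U = 2, 3, -1, 1

numbers = {Fraction(0)}
for E in range(L, U + 1):
    # Normalized mantissas: leading digit d0 nonzero, so the integer
    # formed by d0 d1 ... d_{p-1} runs from beta^(p-1) to beta^p - 1
    for digits in range(beta ** (p - 1), beta ** p):
        m = Fraction(digits, beta ** (p - 1))   # mantissa value in [1, beta)
        numbers.add(m * Fraction(beta) ** E)
        numbers.add(-m * Fraction(beta) ** E)

print(len(numbers))                              # 2(beta-1)beta^(p-1)(U-L+1) + 1 = 25
print(float(min(x for x in numbers if x > 0)))   # UFL = beta^L = 0.5
print(float(max(numbers)))                       # OFL = beta^(U+1)(1 - beta^(-p)) = 3.5
```

The enumeration confirms the 25 numbers shown on the number line below, as well as the UFL and OFL values.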
  [figure: number line from −4 to 4 marking the 25 machine numbers]

  OFL = (1.11)₂ × 2¹ = (3.5)₁₀
  UFL = (1.00)₂ × 2⁻¹ = (0.5)₁₀

At sufficiently high magnification, all normalized floating-point systems look grainy and unequally spaced like this.

Rounding Rules

If a real number x is not exactly representable, then it is approximated by a "nearby" floating-point number fl(x). This process is called rounding, and the error introduced is called rounding error.

Two commonly used rounding rules:
• chop: truncate the base-β expansion of x after the (p−1)st digit; also called round toward zero
• round to nearest: fl(x) is the nearest floating-point number to x, using the floating-point number whose last stored digit is even in case of a tie; also called round to even

Round to nearest is the most accurate, and it is the default rounding rule in IEEE systems.

Machine Precision

The accuracy of a floating-point system is characterized by the unit roundoff, machine precision, or machine epsilon, denoted by ε_mach:
• with rounding by chopping, ε_mach = β^{1−p}
• with rounding to nearest, ε_mach = (1/2) β^{1−p}

An alternative definition is the smallest number ε such that fl(1 + ε) > 1.

The maximum relative error in representing a real number x in a floating-point system is given by

  |fl(x) − x| / |x| ≤ ε_mach

Machine Precision, continued

For the toy system illustrated earlier,
• ε_mach = 0.25 with rounding by chopping
• ε_mach = 0.125 with rounding to nearest

For IEEE floating-point systems,
• ε_mach = 2^{−24} ≈ 10^{−7} in single precision
• ε_mach = 2^{−53} ≈ 10^{−16} in double precision

so IEEE single and double precision systems have about 7 and 16 decimal digits of precision, respectively.

Though both are "small," the unit roundoff ε_mach should not be confused with the underflow level UFL. In all practical floating-point systems,

  0 < UFL < ε_mach < OFL

Subnormals and Gradual Underflow

Normalization causes a gap around zero in a floating-point system. If leading digits are allowed to be zero, but only when the exponent is at its minimum value, then the gap is "filled in" by additional subnormal or denormalized floating-point numbers.
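In IEEE double precision this gradual underflow can be observed directly. A Python sketch (not part of the original slides): dividing the smallest normalized number by 2 repeatedly yields nonzero subnormal values rather than jumping straight to zero.

```python
import sys

# Smallest positive normalized IEEE double: UFL = 2**(-1022)
ufl = sys.float_info.min
print(ufl)

# Halving below UFL produces subnormal numbers: still nonzero,
# but with progressively fewer significant bits (gradual underflow)
x = ufl
for _ in range(5):
    x = x / 2
    print(x)        # nonzero subnormal values

# The smallest subnormal is 2**(-1074); halving it finally underflows to zero
print(5e-324 / 2)   # 0.0
```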
  [figure: number line from −4 to 4 showing the toy system augmented with subnormal numbers]

Subnormals extend the range of magnitudes representable, but they have less precision than normalized numbers, and the unit roundoff is no smaller. The augmented system exhibits gradual underflow.

Exceptional Values

The IEEE floating-point standard provides special values to indicate two exceptional situations:
• Inf, which stands for "infinity," results from dividing a finite number by zero, such as 1/0
• NaN, which stands for "not a number," results from undefined or indeterminate operations such as 0/0, 0 ∗ Inf, or Inf/Inf

Inf and NaN are implemented in IEEE arithmetic through special reserved values of the exponent field.

Floating-Point Arithmetic

Addition or subtraction: shifting of the mantissa to make the exponents match may cause loss of some digits of the smaller number, possibly all of them.

Multiplication: the product of two p-digit mantissas contains up to 2p digits, so the result may not be representable.

Division: the quotient of two p-digit mantissas may contain more than p digits, such as the nonterminating binary expansion of 1/10.

The result of a floating-point arithmetic operation may therefore differ from the result of the corresponding real arithmetic operation on the same operands.

Example: Floating-Point Arithmetic

Assume β = 10, p = 6, and let x = 1.92403 × 10², y = 6.35782 × 10⁻¹. Floating-point addition gives

  x + y = 1.93039 × 10²,

assuming rounding to nearest. The last two digits of y do not affect the result, and with an even smaller exponent, y could have had no effect at all on the result.

Floating-point multiplication gives

  x ∗ y = 1.22326 × 10²,

which discards half of the digits of the true product.

Floating-Point Arithmetic, continued

A real result may also fail to be representable because its exponent is beyond the available range. Overflow is usually more serious than underflow, because there is no good approximation to arbitrarily large magnitudes in a floating-point system, whereas zero is often a reasonable approximation
for arbitrarily small magnitudes. On many computer systems overflow is fatal, but an underflow may be silently set to zero.

Example: Summing a Series

The infinite series

  ∑_{n=1}^{∞} 1/n

has a finite sum in floating-point arithmetic even though the real series is divergent. Possible explanations:
• the partial sum eventually overflows
• 1/n eventually underflows
• the partial sum ceases to change once 1/n becomes negligible relative to it, i.e., once

  1/n < ε_mach ∑_{k=1}^{n−1} (1/k)

Floating-Point Arithmetic, continued

Ideally, x flop y = fl(x op y), i.e., floating-point arithmetic operations produce correctly rounded results. Computers satisfying the IEEE floating-point standard achieve this ideal as long as x op y is within the range of the floating-point system.

But some familiar laws of real arithmetic are not necessarily valid in a floating-point system. Floating-point addition and multiplication are commutative but not associative. For example, if ε is a positive floating-point number slightly smaller than ε_mach, then

  (1 + ε) + ε = 1, but 1 + (ε + ε) > 1

Cancellation

Subtraction between two p-digit numbers having the same sign and similar magnitudes yields a result with fewer than p digits, so it is usually exactly representable. The reason is that the leading digits of the two numbers cancel (i.e., their difference is zero).

Example:

  1.92403 × 10² − 1.92275 × 10² = 1.28000 × 10⁻¹,

which is correct, and exactly representable, but has only three significant digits.

Cancellation, continued

Despite the exactness of the result, cancellation often implies serious loss of information. The operands are often uncertain due to rounding or other previous errors, so the relative uncertainty in their difference may be large.

Example: if ε is a positive floating-point number slightly smaller than ε_mach, then

  (1 + ε) − (1 − ε) = 1 − 1 = 0

in floating-point arithmetic, which is correct for the actual operands of the final subtraction, but the true result of the overall computation, 2ε, has been completely lost. Subtraction itself is not at fault: it merely signals the loss of information that had already occurred.

Cancellation, continued

Digits lost to cancellation are the most significant, leading digits, whereas digits lost in rounding are the least significant, trailing digits. Because of this effect, it is generally a bad idea to compute any small quantity as a difference of large quantities, since rounding error is likely to dominate the result. For example, summing an alternating series, such as

  e^x = 1 + x + x²/2! + x³/3! + ···

for x < 0, may give disastrous results due to catastrophic cancellation.

Example: Cancellation

The total energy of the helium atom is the sum of its kinetic and potential energies, which are computed separately and have opposite signs, so they suffer cancellation:

  Year   Kinetic   Potential   Total
  1971   13.0      −14.0       −1.0
  1977   12.76     −14.02      −1.26
  1980   12.22     −14.35      −2.13
  1985   12.28     −14.65      −2.37
  1988   12.40     −14.84      −2.44

Although the computed values for the kinetic and potential energies changed by only 6% or less, the resulting estimate for the total energy changed by 144%.

Example: Quadratic Formula

The two solutions of the quadratic equation ax² + bx + c = 0 are given by

  x = ( −b ± √(b² − 4ac) ) / 2a

Naive use of this formula can suffer overflow, or underflow, or severe cancellation. Rescaling the coefficients can help avoid overflow and harmful underflow. Cancellation between −b and the square root can be avoided by computing one root using the alternative formula

  x = 2c / ( −b ∓ √(b² − 4ac) )

Cancellation inside the square root cannot be easily avoided without using higher precision.

Example: Standard Deviation

The mean of a sequence x_i, i = 1, . . . , n, is
given by

  x̄ = (1/n) ∑_{i=1}^{n} x_i,

and the standard deviation by

  σ = [ (1/(n−1)) ∑_{i=1}^{n} (x_i − x̄)² ]^{1/2}

The mathematically equivalent formula

  σ = [ (1/(n−1)) ( ∑_{i=1}^{n} x_i² − n x̄² ) ]^{1/2}

avoids making two passes through the data. Unfortunately, the single cancellation error at the end of this one-pass formula is more damaging numerically than all of the cancellation errors in the two-pass formula combined.

Mathematical Software

High-quality mathematical software is available for solving most commonly occurring problems in scientific computing, and the use of sophisticated, professionally written software has many advantages. We will seek to understand the basic ideas of the methods on which such software is based, so that we can use the software intelligently, and we will gain hands-on experience in using such software to solve a wide variety of computational problems.

Desirable Qualities of Math Software

• Reliability
• Robustness
• Accuracy
• Efficiency
• Maintainability
• Portability
• Usability
• Applicability

Sources of Math Software

  FMM:     from the book by Forsythe/Malcolm/Moler
  HSL:     Harwell Subroutine Library
  IMSL:    Internat. Math. & Stat. Libraries
  KMN:     from the book by Kahaner/Moler/Nash
  NAG:     Numerical Algorithms Group
  Netlib:  free software available via the Internet
  NR:      from the book Numerical Recipes
  NUMAL:   from Math. Centrum, Amsterdam
  SLATEC:  from U.S. Government labs
  SOL:     Systems Optimization Lab, Stanford U.
  TOMS:    ACM Trans. on Math. Software

Scientific Computing Environments

Interactive environments for scientific computing provide
• powerful mathematical capabilities
• sophisticated graphics
• a high-level programming language for rapid prototyping

MATLAB is a popular example, available for most personal computers and workstations. Similar, "free" alternatives include Octave, RLaB, and Scilab. Symbolic computing environments, such as Maple and Mathematica, are also useful.
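The numerical fragility of the one-pass standard deviation formula discussed above can be demonstrated directly. A Python sketch (not from the original slides), using data with a large mean and a tiny spread, chosen so the true sample standard deviation is exactly 1:

```python
import math

def std_two_pass(xs):
    # First pass computes the mean, second pass the deviations
    n = len(xs)
    xbar = sum(xs) / n
    return math.sqrt(sum((x - xbar) ** 2 for x in xs) / (n - 1))

def std_one_pass(xs):
    # Mathematically equivalent, but subtracts two large, nearly equal sums
    n = len(xs)
    s, s2 = sum(xs), sum(x * x for x in xs)
    return math.sqrt((s2 - s * s / n) / (n - 1))

# Large mean (1e8), tiny spread: true standard deviation is 1.0
xs = [1e8 - 1.0, 1e8, 1e8 + 1.0]
print(std_two_pass(xs))   # 1.0
print(std_one_pass(xs))   # 0.0 — the spread is lost to cancellation
```

In double precision the squares x_i² exceed 2⁵³, so the one-pass sums carry no trace of the unit-sized deviations, and the cancellation s2 − s·s/n wipes out the answer entirely.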
