Foundations of Computational Economics

by Fedor Iskhakov, ANU

Representing numbers in a computer

https://youtu.be/AMFCQXFtamo

Description: Binary and hexadecimal numbers. Floating point numbers. Numerical stability and potential issues. Numerical noise.

Floating point arithmetic

  • Because computers only work with 0 and 1 internally, all real numbers have to be represented in binary format
  • This leads to many peculiar arithmetic properties of seemingly simple mathematical expressions
  • Understanding how computers work with real numbers is essential for computational economics

Simple example

What is the result of the comparison?

In [1]:
a = 0.1
b = 0.1
c = 0.1
a+b+c == 0.3
Out[1]:
False
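Because of binary rounding the three terms do not sum to exactly 0.3. A common remedy is to compare floats up to a small tolerance, for example with math.isclose from the standard library. A minimal sketch:

import math

a = b = c = 0.1
print(a + b + c)                      # 0.30000000000000004
print(math.isclose(a + b + c, 0.3))   # True: equal up to a relative tolerance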

So can we now trust the following calculation?

In [2]:
interest = 0.04
compounding = 365
investment = 1000
t=10
daily = 1 + interest/compounding
total = investment*(daily**(compounding*t))  # 'total' avoids shadowing the built-in sum()

format(total, '.25f')
Out[2]:
'1491.7920028601754438568605110'

Compare to exact calculation

In [3]:
# using floats
interest1 = 0.04
compounding = 365*24  # hourly compounding
t = 100  # years
investment1 = 10e9  # ten billion
daily1 = 1 + interest1/compounding
sum1 = investment1*(daily1**(compounding*t))
print('Amount computed using naive computation: %0.20e'%sum1)

# the same using high precision decimal representation
from decimal import Decimal, getcontext
getcontext().prec = 100  # set precision of decimal calculations
interest2 = Decimal(interest1)
daily2 = 1 + interest2/compounding
investment2 = Decimal(investment1)
sum2 = investment2*(daily2**(compounding*t))  # using high precision decimals
print('Amount computed using exact computation: %0.20e'%sum2)

diff = sum2 - Decimal.from_float(sum1)
print('The difference is: %0.10f'%diff)
Amount computed using naive computation: 5.45976514222177001953e+11
Amount computed using exact computation: 5.45976514236965270996e+11
The difference is: 14.7882694559
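One caveat in the code above: Decimal(interest1) converts a float and therefore inherits its binary representation error; only the subsequent arithmetic is carried out with 100-digit precision. Passing a string instead yields the true decimal value. A minimal sketch:

from decimal import Decimal

print(Decimal(0.04))    # the exact binary value actually stored for the float 0.04
print(Decimal('0.04'))  # exactly 0.04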

So, what is happening?

  • Real numbers are represented with finite precision
  • In some cases, the resulting errors may have economic significance
  • To write robust code suitable for the task at hand, we have to understand what to expect and why

Numerical stability of the code is an important property!

Number representation in decimal form

$ r $ — real number

$ b $ — base (radix)

$ d_0,d_1,d_2,...,d_k $ — digits (from lowest to highest)

$$ r = d_k \cdot b^k + d_{k-1} \cdot b^{k-1} + \dots + d_2 \cdot b^2 + d_1 \cdot b + d_0 $$

For example, for decimal numbers $ b=10 $ (digits $ 0,1,\dots,9 $) we have

$$ 7,631 = 7 \cdot 1000 + 6 \cdot 100 + 3 \cdot 10 + 1 $$

$$ 19,048 = 1 \cdot 10000 + 9 \cdot 1000 + 0 \cdot 100 + 4 \cdot 10 + 8 $$
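The positional formula translates directly into code; a minimal sketch (the helper name from_digits is illustrative):

def from_digits(digits, b=10):
    # digits from highest to lowest: r = d_k*b^k + ... + d_1*b + d_0
    r = 0
    for d in digits:
        r = r*b + d
    return r

print(from_digits([7, 6, 3, 1]))     # 7631
print(from_digits([1, 9, 0, 4, 8]))  # 19048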

Number representation in binary form

Now let $ b=2 $, so we only have digits 0 and 1

$$ 101011_{binary} = 1 \cdot 2^5 + 0 \cdot 2^4 + 1 \cdot 2^3 + 0 \cdot 2^2 + 1 \cdot 2 + 1 = 43_{10} $$

$$ 25_{10} = 16 + 8 + 1 = 2^4 + 2^3 + 2^0 = 11001_{binary} $$

Other common bases are 8 (with digits $ 0,1,\dots,7 $) and 16 (with digits $ 0,1,2,\dots,9,a,b,c,d,e,f $)
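Python's built-in conversions make switching bases easy to verify:

print(int('101011', 2))  # 43
print(bin(25))           # '0b11001'
print(hex(255))          # '0xff'
print(int('ff', 16))     # 255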

Counting in binary

$ 0_{binary} $ $ \rightarrow $ $ 1_{binary} $ $ \rightarrow $ $ 10_{binary} $ $ \rightarrow $ $ 11_{binary} $ $ \rightarrow $ ??

Is it possible to count to 1000 using 10 fingers?

How many digits are needed?

  • In base-10 we need 1 digit to count up to 9, 2 digits to count up to 99 and so on
  • In base-2 we need 1 digit to count up to 1, 2 digits to count up to $ 11_{binary} = 3_{10} $ and so on
  • In base-16 we need 1 digit to count up to 15, 2 digits to count up to $ ff_{16} = 255_{10} $ and so on

In base-$ b $ it takes $ n $ digits to count up to $ b^n - 1 $
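This also answers the ten-finger question: treating each finger as a binary digit gives $ n=10 $, so

print(2**10 - 1)  # 1023, well past 1000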

Similar structure for fractions

In base-$ b $ using $ k $ fractional digits

$$ 1.r = 1 + d_{-1} \cdot b^{-1} + d_{-2} \cdot b^{-2} + \dots + d_{-k} \cdot b^{-k} $$

$$ 1.5627 = \frac{15,627}{10,000} = 1 + 5 \cdot 10^{-1} + 6 \cdot 10^{-2} + 2 \cdot 10^{-3} + 7 \cdot 10^{-4} $$
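The fractional expansion can be checked the same way (from_fraction_digits is an illustrative name):

def from_fraction_digits(digits, b=10):
    # digits d_{-1}, d_{-2}, ..., d_{-k}: value = sum of d_{-j} * b**(-j)
    return sum(d * b**-(j + 1) for j, d in enumerate(digits))

print(1 + from_fraction_digits([5, 6, 2, 7]))  # 1.5627 (up to float rounding)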

Yet, for some numbers there is no finite decimal representation

$$ \frac{4}{3} = 1 + 3 \cdot 10^{-1} + 3 \cdot 10^{-2} + 3 \cdot 10^{-3} + \dots = 1.333\dots $$

$$ \frac{4}{3} = 1 + \frac{1}{3} = 1 + \frac{10}{3} \cdot 10^{-1} = 1 + 3 \cdot 10^{-1} + \frac{1}{3} \cdot 10^{-1} $$

$$ = 1.3 + \frac{10}{3} \cdot 10^{-2} = 1.3 + 3 \cdot 10^{-2} + \frac{1}{3} \cdot 10^{-2} $$

$$ = 1.33 + \frac{10}{3} \cdot 10^{-3} = 1.33 + 3 \cdot 10^{-3} + \frac{1}{3} \cdot 10^{-3} = \dots $$
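High precision decimal arithmetic shows the repeating expansion directly:

from decimal import Decimal, getcontext

getcontext().prec = 30
print(Decimal(4) / Decimal(3))  # 1.33333333333333333333333333333, the threes never terminate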

In binary

$$ 0.1 = \frac{1}{10} = \frac{16}{10} \cdot 2^{-4} = 0.0001_{binary} + \frac{6}{10} \cdot 2^{-4} $$

$$ = 0.0001_{binary} + \frac{12}{10} \cdot 2^{-5} = 0.00011_{binary} + \frac{2}{10} \cdot 2^{-5} $$

$$ = 0.00011_{binary} + \frac{16}{10} \cdot 2^{-8} = 0.00011001_{binary} + \frac{6}{10} \cdot 2^{-8} = 0.000110011\dots_{binary} $$

Therefore $ 0.1 $ cannot be represented exactly in binary!
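Indeed, Python lets us inspect the value actually stored for 0.1:

from decimal import Decimal

print(Decimal(0.1))  # 0.1000000000000000055511151231257827...
print((0.1).hex())   # '0x1.999999999999ap-4', the repeating pattern rounded in the last digit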

Scientific notation

$$ r = r_0 \cdot b ^ e $$

$ 0 \le r_0 < b $ — mantissa (coefficient)

$ b $ — base (radix)

$ e $ — exponent

  • Useful for writing both big and small numbers
  • Approximate representation when a finite number of digits are used to record the mantissa
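In binary, math.frexp from the standard library returns a closely related mantissa-exponent decomposition, with the mantissa normalized to $ [0.5, 1) $:

import math

m, e = math.frexp(10.0)
print(m, e)      # 0.625 4, since 10 = 0.625 * 2**4
print(m * 2**e)  # 10.0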

Rounding error

Squeezing infinitely many real numbers into a finite number of bits requires an approximate representation

$ p $ — number of digits after the point in the representation of $ r_0 $ (the mantissa)

$ e $ — exponent between $ e_{min} $ and $ e_{max} $, taking up $ p_e $ bits to encode

$$ r \approx \pm d_0. d_1 d_2 \dots d_p \cdot b^e $$

The float takes a total of $ 1 + p + p_e $ digits plus one bit for the sign
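For the common IEEE 754 double precision float $ b=2 $, $ p=52 $ and $ p_e=11 $, giving 64 bits in total. A sketch that unpacks the three fields (float_bits is an illustrative helper):

import struct

def float_bits(x):
    # pack as a big-endian IEEE 754 double, then split sign/exponent/fraction
    bits = ''.join(f'{byte:08b}' for byte in struct.pack('>d', x))
    return bits[0], bits[1:12], bits[12:]

sign, exponent, fraction = float_bits(0.1)
print(sign)           # 0, the sign bit
print(len(exponent))  # 11, the exponent bits
print(len(fraction))  # 52, the fraction (mantissa) bits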

Bits in floating point representation

Distribution of representable real numbers
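The gaps between adjacent representable floats grow with magnitude; math.ulp (Python 3.9+) returns the gap at a given value:

import math

print(math.ulp(1.0))   # about 2.22e-16, machine epsilon at 1.0
print(math.ulp(1e6))   # a much larger gap around a million
print(math.ulp(1e16))  # 2.0: beyond 2**53 not every integer is representable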

The main issues to be aware of

  1. Rounding errors $ \leftrightarrow $ loss of precision when numbers are represented in binary form $ \Rightarrow $ cannot compare floats for equality
  2. Catastrophic cancellation $ \leftrightarrow $ potential drastic loss of precision when subtracting close real numbers represented by floats $ \Rightarrow $ innocent formulas may in fact be numerically unstable (see the sketch after this list)
  3. Overflow $ \leftrightarrow $ obtaining a real number that is too large to be represented as a float
  4. Underflow $ \leftrightarrow $ obtaining a nonzero real number that is indistinguishable from zero
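A minimal sketch of issues 2, 3 and 4:

import math

# 2. catastrophic cancellation: (1 - cos(x))/x**2 should be near 0.5 for tiny x
x = 1e-8
print((1 - math.cos(x)) / x**2)     # 0.0, all precision lost in the subtraction
print(2 * math.sin(x/2)**2 / x**2)  # ~0.5, algebraically equivalent and stable

# 3. overflow: the result exceeds the largest representable float
print(1e308 * 10)  # inf

# 4. underflow: the result collapses to zero
print(1e-308 / 1e100)  # 0.0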

Look at these cases in the Lab

Further learning resources