Piece-wise linear interpolation, Lagrange interpolation and Neville’s algorithm, Chebyshev polynomials and approximation, polynomial least squares fit.
import numpy as np
from scipy import linalg, interpolate, special
import matplotlib.pyplot as plt
Piece-wise linear interpolation is defined as a concatenation of straight-line segments connecting consecutive pairs of known points. If two neighbouring known points are given by the coordinates $ (x_{i}, y_{i}) $ and $ (x_{i + 1}, y_{i + 1}) $, the linear interpolation in the interval $ (x_{i}, x_{i + 1}) $ is given by the following formula,
$$ y = y_i + \alpha (x - x_i), \qquad x \in [x_i, x_{i+1}], $$where
$$ \alpha = \frac{y_{i+1} - y_{i}}{x_{i + 1} - x_{i}}. $$Exercise 05.1: Implement a function that calculates the linear interpolation.
def linear_interpolation(x_p, y_p, x):
    """
    Calculates the linear interpolation.

    Args:
        x_p (array_like): X-coordinates of a set of datapoints
        y_p (array_like): Y-coordinates of a set of datapoints
        x (array_like): An array on which the interpolation is calculated

    Returns:
        numpy.ndarray: The linear interpolation
    """
    # add your code here
You may evaluate the correctness of your implementation using the scipy.interpolate.interp1d
function.
x_p = np.random.rand(10)
y_p = np.random.rand(10)
x = np.linspace(np.min(x_p), np.max(x_p), 1000)
try:
    np.testing.assert_array_almost_equal(linear_interpolation(x_p, y_p, x), interpolate.interp1d(x_p, y_p)(x), decimal=7)
except AssertionError as E:
    print(E)
else:
    print("The implementation is correct.")
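For reference, one way Exercise 05.1 could be implemented is sketched below; the use of `np.searchsorted` to locate the interval of each query point is an implementation choice, not something the exercise prescribes.

```python
import numpy as np

def linear_interpolation(x_p, y_p, x):
    # Sort the data points by x so interval lookup also works on unsorted input
    order = np.argsort(x_p)
    xs = np.asarray(x_p, dtype=float)[order]
    ys = np.asarray(y_p, dtype=float)[order]
    # Index of the left endpoint of the interval containing each query point
    i = np.clip(np.searchsorted(xs, x) - 1, 0, len(xs) - 2)
    # Slope alpha from the formula above, applied interval-wise
    alpha = (ys[i + 1] - ys[i]) / (xs[i + 1] - xs[i])
    return ys[i] + alpha * (np.asarray(x) - xs[i])
```

Because the computation is vectorized over `x`, all query points are evaluated at once.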
Exercise 05.2: Approximate the $ \sin x $ function with the linear interpolation using $ 8 $ uniformly spaced points $ x_i $ from the $ [-\pi, \ \pi] $ interval, calculate the absolute error of the approximation, and plot the results.
# add your code here
Given a set of $ n $ known data points $ (x_{i}, y_{i}) $, $ i = 0, 1, \dots, n-1 $, where no two $ x_i $ are the same, the Lagrange interpolation is the $ (n-1) $-th degree polynomial given by a linear combination
$$ L(x) = \sum_{i = 0}^{n-1} y_i F_i(x) $$of Lagrange basis polynomials
$$ F_i(x) = \prod_{\substack{j = 0 \\ j \neq i}}^{n-1} \frac{x - x_j}{x_i - x_j}. $$The resulting Lagrange interpolating polynomial is unique.
Exercise 05.3: Implement a function that calculates the Lagrange interpolating polynomial.
def lagrange_interpolation(x, y):
    """
    Calculates a Lagrange interpolating polynomial.

    Args:
        x (array_like): X-coordinates of a set of datapoints
        y (array_like): Y-coordinates of a set of datapoints

    Returns:
        numpy.poly1d: The Lagrange interpolating polynomial
    """
    # add your code here
You may evaluate the correctness of your implementation using the scipy.interpolate.lagrange
function.
x_p = np.random.rand(10)
y_p = np.random.rand(10)
try:
    np.testing.assert_array_almost_equal(lagrange_interpolation(x_p, y_p), interpolate.lagrange(x_p, y_p), decimal=7)
except AssertionError as E:
    print(E)
else:
    print("The implementation is correct.")
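A direct translation of the two formulas above into `numpy.poly1d` arithmetic might look as follows; this is a reference sketch, not the only (or the most efficient) way to build $ L(x) $.

```python
import numpy as np

def lagrange_interpolation(x, y):
    n = len(x)
    L = np.poly1d([0.0])
    for i in range(n):
        # Basis polynomial F_i: product of (x - x_j) / (x_i - x_j) over j != i
        F = np.poly1d([1.0])
        for j in range(n):
            if j != i:
                F *= np.poly1d([1.0, -x[j]]) / (x[i] - x[j])
        L += y[i] * F
    return L
```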
Exercise 05.4: Approximate the $ \sin x $ function with the Lagrange interpolating polynomial using $ 8 $ uniformly spaced points $ x_i $ from the $ [-\pi, \ \pi] $ interval, calculate the absolute error of the approximation, and plot the results.
# add your code here
Neville's algorithm is used for recursive evaluation of the Lagrange interpolating polynomial. The recurrence is given by the following relation,
$$ L_{i, j} = \frac{(x - x_j) L_{i, j-1} - (x - x_i) L_{i+1, j}}{x_i - x_j}, \qquad L_{i, i} = y_i, \qquad 0 \leq i \leq j \leq n-1. $$The Lagrange interpolating polynomial is then $ L(x) = L_{0, n-1} $.
Exercise 05.5: Implement a function that calculates the Lagrange interpolating polynomial using Neville's algorithm.
def neville_algorithm(x, y):
    """
    Calculates a Lagrange interpolating polynomial using Neville's algorithm.

    Args:
        x (array_like): X-coordinates of a set of datapoints
        y (array_like): Y-coordinates of a set of datapoints

    Returns:
        numpy.poly1d: The Lagrange interpolating polynomial
    """
    # add your code here
You may evaluate the correctness of your implementation using the scipy.interpolate.lagrange
function.
x_p = np.random.rand(10)
y_p = np.random.rand(10)
try:
    np.testing.assert_array_almost_equal(neville_algorithm(x_p, y_p), interpolate.lagrange(x_p, y_p), decimal=7)
except AssertionError as E:
    print(E)
else:
    print("The implementation is correct.")
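One way to realize the recurrence with polynomial arithmetic is to keep a one-dimensional tableau of `numpy.poly1d` objects and overwrite it in place; a reference sketch:

```python
import numpy as np

def neville_algorithm(x, y):
    n = len(x)
    # Stage 0: L[i] holds the constant polynomial L_{i,i} = y_i
    L = [np.poly1d([float(y[i])]) for i in range(n)]
    for k in range(1, n):
        for i in range(n - k):
            j = i + k
            # L_{i,j} = ((x - x_j) L_{i,j-1} - (x - x_i) L_{i+1,j}) / (x_i - x_j);
            # at this point L[i] still holds L_{i,j-1} and L[i+1] holds L_{i+1,j}
            L[i] = (np.poly1d([1.0, -x[j]]) * L[i]
                    - np.poly1d([1.0, -x[i]]) * L[i + 1]) / (x[i] - x[j])
    return L[0]
```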
Exercise 05.6: Approximate Runge's function
$$ f(x) = \frac{1}{1 + 25x^2} $$with the Lagrange interpolating polynomial using $ 12 $ uniformly spaced points $ x_i $ from the $ [-1, \ 1] $ interval, calculate the absolute error of the approximation, and plot the results.
# add your code here
Note: The oscillation near the edges of the interval that occurs in Exercise 05.6 is called Runge's phenomenon.
Exercise 05.7: Write a function that calculates a Chebyshev polynomial of degree $ n $ using the following recursive formula,
$$ T_{n}(x) = 2 x T_{n-1}(x) - T_{n-2}(x), \quad T_0(x) = 1, \ T_1(x) = x. $$
def chebyshev_polynomial(n):
    """
    Calculates a Chebyshev polynomial of degree n using the recursive formula.

    Args:
        n (int): Degree of the polynomial

    Returns:
        numpy.poly1d: The Chebyshev polynomial of degree n
    """
    # add your code here
You may evaluate the correctness of your implementation using the scipy.special.eval_chebyt
function.
n = 10
x = np.linspace(-1.0, 1.0, 1000)
try:
    np.testing.assert_array_almost_equal(chebyshev_polynomial(n)(x), special.eval_chebyt(n, x), decimal=7)
except AssertionError as E:
    print(E)
else:
    print("The implementation is correct.")
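The three-term recurrence translates directly into `numpy.poly1d` arithmetic; a reference sketch:

```python
import numpy as np

def chebyshev_polynomial(n):
    # Base cases T_0(x) = 1 and T_1(x) = x
    T_prev, T_curr = np.poly1d([1.0]), np.poly1d([1.0, 0.0])
    if n == 0:
        return T_prev
    x_poly = np.poly1d([1.0, 0.0])
    for _ in range(n - 1):
        # T_k = 2 x T_{k-1} - T_{k-2}
        T_prev, T_curr = T_curr, 2.0 * x_poly * T_curr - T_prev
    return T_curr
```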
Exercise 05.8: Plot the first $ 5 $ Chebyshev polynomials on $ x \in [-1, 1] $.
# add your code here
Exercise 05.9: Write a function that returns roots of a Chebyshev polynomial of degree $ n $.
def chebyshev_roots(n):
    """
    Calculates roots of a Chebyshev polynomial of degree n.

    Args:
        n (int): Degree of the polynomial

    Returns:
        numpy.ndarray: Roots of a Chebyshev polynomial of degree n
    """
    # add your code here
You may evaluate the correctness of your implementation using the scipy.special.roots_chebyt
function.
n = 10
try:
    np.testing.assert_array_almost_equal(chebyshev_roots(n), special.roots_chebyt(n)[0], decimal=7)
except AssertionError as E:
    print(E)
else:
    print("The implementation is correct.")
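Since $ T_n(\cos \theta) = \cos(n \theta) $, the roots have the closed form $ x_k = \cos \frac{(2k + 1) \pi}{2n} $, $ k = 0, 1, \dots, n-1 $. A reference sketch, sorted in increasing order to match the output of `scipy.special.roots_chebyt`:

```python
import numpy as np

def chebyshev_roots(n):
    k = np.arange(n)
    # T_n(cos(theta)) = cos(n * theta) vanishes at theta = (2k + 1) pi / (2n)
    return np.sort(np.cos((2 * k + 1) * np.pi / (2 * n)))
```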
Exercise 05.10: Approximate Runge's function
$$ f(x) = \frac{1}{1 + 25x^2} $$in the $ [-1, \ 1] $ interval with the Lagrange interpolating polynomial using the roots of the $ 12 $-th degree Chebyshev polynomial, calculate the absolute error of the approximation, and plot the results.
# add your code here
Note: If the interpolation interval $ [a, b] \neq [-1, 1] $, then the nodes $ x_i $ of the Chebyshev interpolation can be found using the affine transformation
$$ x_i = \frac{a + b}{2} + \frac{b - a}{2} z_i, \qquad i = 0, 1, \dots, n - 1, $$where $ z_i $ is the $ i $-th root of the $ n $-th degree Chebyshev polynomial.
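The transformation in the note can be sketched as follows; the interval $ [0, \pi] $ below is just a hypothetical example.

```python
import numpy as np

a, b = 0.0, np.pi  # hypothetical target interval
n = 12
# Roots z_i of the n-th degree Chebyshev polynomial on [-1, 1]
z = np.cos((2 * np.arange(n) + 1) * np.pi / (2 * n))
# Affine map of the nodes onto [a, b]
x_nodes = 0.5 * (a + b) + 0.5 * (b - a) * z
```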
Given a set of $ n $ known data points $ (x_{i}, y_{i}) $, $ i = 0, 1, \dots, n-1 $, the least squares method finds an approximation to the data by minimizing the sum of the squares of the residuals,
$$ S = \sum_{i = 0}^{n - 1} r_i^2, \qquad r_i = y_i - f(x_i; \beta), $$where $ f(x_i; \beta) $ is the model function and $ \beta $ is the vector of $ m $ adjustable parameters. The minimum of $ S $ is found by setting its gradient to $ 0 $,
$$ \frac{\partial S}{\partial \beta_j} = \sum_{i = 0}^{n - 1} 2 r_i \frac{\partial r_i}{\partial \beta_j} = \sum_{i = 0}^{n - 1} -2 \left( y_i - f(x_i; \beta) \right) \frac{\partial f(x_i; \beta)}{\partial \beta_j} = 0, \qquad j = 0, 1, \dots, m - 1. $$Polynomial least squares fit:
In the case of polynomial least squares fit, the model function can be written in the following form,
$$ f(x; \beta) = \beta_{m-1} x^{m-1} + \beta_{m-2} x^{m-2} + \dots + \beta_1 x + \beta_0. $$The values of vector $ \beta $ that minimize $ S $ can be thus obtained by solving a system of $ m $ linear algebraic equations,
$$ \sum_{i = 0}^{n - 1} \left( y_i - \beta_{m-1} x_i^{m-1} - \beta_{m-2} x_i^{m-2} - \dots - \beta_1 x_i - \beta_0 \right) x_i^j = 0, \qquad j = 0, 1, \dots, m - 1. $$The system of equations can be expressed as
$$ \beta_0 \sum_{i = 0}^{n - 1} x_i^j + \beta_1 \sum_{i = 0}^{n - 1} x_i^{j+1} + \dots + \beta_{m-2} \sum_{i = 0}^{n - 1} x_i^{j + m - 2} + \beta_{m-1} \sum_{i = 0}^{n - 1} x_i^{j + m - 1} = \sum_{i = 0}^{n - 1} y_i x_i^j $$and in the matrix form,
$$ \begin{pmatrix} \sum 1 & \sum x_i & \cdots & \sum x_i^{m-2} & \sum x_i^{m-1} \\ \sum x_i & \sum x_i^2 & \cdots & \sum x_i^{m-1} & \sum x_i^{m} \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ \sum x_i^{m-2} & \sum x_i^{m-1} & \cdots & \sum x_i^{2m-4} & \sum x_i^{2m-3} \\ \sum x_i^{m-1} & \sum x_i^{m} & \cdots & \sum x_i^{2m-3} & \sum x_i^{2m-2} \\ \end{pmatrix} \begin{pmatrix} \beta_0 \\ \beta_1 \\ \vdots\\ \beta_{m-2} \\ \beta_{m-1} \end{pmatrix} = \begin{pmatrix} \sum y_i \\ \sum y_i x_i \\ \vdots\\ \sum y_i x_i^{m-2} \\ \sum y_i x_i^{m-1} \end{pmatrix}. $$Exercise 05.11: Implement the n-th degree polynomial least squares approximation.
def polynomial_least_squares(x, y, n):
    """
    Calculates the n-th degree polynomial least squares approximation.

    Args:
        x (array_like): X-coordinates of a set of datapoints
        y (array_like): Y-coordinates of a set of datapoints
        n (int): Degree of the approximating polynomial

    Returns:
        numpy.poly1d: The n-th degree polynomial least squares approximation
    """
    # add your code here
You may evaluate the correctness of your implementation using the numpy.polyfit
function.
x_p = np.random.rand(10)
y_p = np.random.rand(10)
n = 1
try:
    np.testing.assert_array_almost_equal(polynomial_least_squares(x_p, y_p, n), np.polyfit(x_p, y_p, n), decimal=7)
except AssertionError as E:
    print(E)
else:
    print("The implementation is correct.")
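The normal equations above can be assembled directly from a Vandermonde matrix $ P $ with columns $ x^0, x^1, \dots, x^n $, since $ (P^{\mathsf{T}} P)_{jk} = \sum_i x_i^{j+k} $ and $ (P^{\mathsf{T}} y)_j = \sum_i y_i x_i^j $. A reference sketch (using `numpy.linalg.solve` here is just a choice; `scipy.linalg.solve` would work equally well):

```python
import numpy as np

def polynomial_least_squares(x, y, n):
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    # Vandermonde matrix P with columns x^0, x^1, ..., x^n
    P = np.vander(x, n + 1, increasing=True)
    # Normal equations (P^T P) beta = P^T y, matching the matrix form above
    beta = np.linalg.solve(P.T @ P, P.T @ y)
    # poly1d stores coefficients from the highest degree down
    return np.poly1d(beta[::-1])
```

Note that for higher degrees the normal equations become ill-conditioned; `numpy.polyfit` instead uses a more numerically stable least squares solver.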
Exercise 05.12: Consider a set of $ 100 $ random points distributed along a linear function with Gaussian noise. Fit the data using the 1st degree polynomial least squares approximation.
# add your code here
Exercise 05.13: Consider a set of $ 100 $ random points distributed along a quadratic function with Gaussian noise. Fit the data using the 2nd degree polynomial least squares approximation.
# add your code here