FIZ228 - Numerical Analysis
Dr. Emre S. Tasci, Hacettepe University
+++
This lecture benefits heavily from Steven Chapra's [Applied Numerical Methods with MATLAB: for Engineers & Scientists](https://www.mheducation.com/highered/product/applied-numerical-methods-matlab-engineers-scientists-chapra/M9780073397962.html).
+++
Case Data (Chapra, 14.6): {download}`04_Chapra_data.csv <data/04_Chapra_data.csv>`
i | x | y |
---|---|---|
1 | 10 | 25 |
2 | 20 | 70 |
3 | 30 | 380 |
4 | 40 | 550 |
5 | 50 | 610 |
6 | 60 | 1220 |
7 | 70 | 830 |
8 | 80 | 1450 |
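Alternatively, we could read the same data directly from the linked csv file. Here's a minimal sketch, assuming the file stores the x and y values as two header-less columns (adjust the `header`/`names` arguments if the actual layout differs):

import pandas as pd

# hypothetical loading of the linked file; assumes two columns (x, y)
# with no header row
data_csv = pd.read_csv("data/04_Chapra_data.csv", header=None, names=["x","y"])
print(data_csv)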
Let's try to fit it to a linear model, $y = ax + b$:
import numpy as np
from scipy import optimize
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_theme()
data = np.array([range(10,90,10),[25,70,380,550,610,1220,830,1450]]).T
x = data[:,0]
y = data[:,1]
print(data)
As our function values ($y_i$) should satisfy $ax_i + b = y_i$ for every data point, we can write out the set of equations:

$$\begin{aligned}a(10) + b &= 25\\a(20) + b &= 70\\&\;\vdots\\a(80) + b &= 1450\end{aligned}$$
We also need to include $b$'s coefficients (our polynomial function actually being $y = ax + b\cdot 1$), so the coefficient matrix $A$ gets a column of ones:
A = np.vstack([x,np.ones(len(x))]).T
A
a,b = np.linalg.lstsq(A,y,rcond=None)[0]
print("a: {:.5f}\tb: {:.5f}".format(a,b))
(A crash course on the alternatives: `np.polyfit` and `scipy.optimize.curve_fit`)
np.polyfit(x,y,1)
def f1(x,m,n):
    return m*x+n
res = optimize.curve_fit(f1,x,y)
res[0]
While we are at it, let's plot it:
xx = np.linspace(0,80,100)
yy = a*xx + b
plt.plot(xx,yy,"b-",x,y,"ko",markerfacecolor="k")
plt.show()
And here's the error (the sum of the squares of the estimate residuals, $S_r = \sum_i\left(y_i - t_i\right)^2$, where the $t_i$ are the model estimates):
t = a*x + b
e = y-t
S_r = np.sum(e**2)
print(S_r)
and here's how to do the same thing (albeit systematically ;) using functions:
def fun_lin(alpha, beta, x):
    return alpha*x + beta

def err_lin(params):
    # params = [alpha, beta]: sum of squared residuals
    e = y - fun_lin(params[0],params[1],x)
    return np.sum(e**2)
err_ls = err_lin([a,b])
print("Least-square sum of squares error: {:10.2f}".format(err_ls))
It doesn't matter much even if the model we're trying to fit is non-linear: we can simply apply a transformation to turn it into a linear one. Here are a couple of examples for handling non-linear functions:
Model | Nonlinear | Linearized |
---|---|---|
exponential | $y = \alpha e^{\beta x}$ | $\ln y = \ln\alpha + \beta x$ |
power | $y = \alpha x^{\beta}$ | $\ln y = \ln\alpha + \beta\ln x$ |
saturation-growth-rate | $y = \alpha\dfrac{x}{\beta + x}$ | $\dfrac{1}{y} = \dfrac{1}{\alpha} + \dfrac{\beta}{\alpha}\dfrac{1}{x}$ |
(Source: S.C. Chapra, Applied Numerical Methods with MATLAB)
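As a quick sanity check of this trick (a minimal sketch with synthetic data, not part of Chapra's text), let's linearize an exponential model by fitting a straight line to $(x, \ln y)$ and recover the original parameters:

x_demo = np.linspace(1,5,20)
y_demo = 3.0*np.exp(0.7*x_demo) # synthetic exponential data: alpha = 3.0, beta = 0.7
A_demo = np.vstack([x_demo,np.ones(len(x_demo))]).T
# for ln(y) = ln(alpha) + beta*x: slope = beta, intercept = ln(alpha)
beta_demo,lnalpha_demo = np.linalg.lstsq(A_demo,np.log(y_demo),rcond=None)[0]
print(np.exp(lnalpha_demo),beta_demo) # should recover ~3.0 and ~0.7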
+++
Instead of fitting the given data to a linear model, let's fit it to a power model:

Example: Fit the data to the power model $y = \alpha x^{\beta}$ (Chapra, 14.6)
Data:
i | x | y |
---|---|---|
1 | 10 | 25 |
2 | 20 | 70 |
3 | 30 | 380 |
4 | 40 | 550 |
5 | 50 | 610 |
6 | 60 | 1220 |
7 | 70 | 830 |
8 | 80 | 1450 |
Find the optimum $\alpha$ and $\beta$ values.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
data1 = pd.DataFrame({'i':np.arange(1,9),'x':np.arange(10,90,10),
'y':[25,70,380,550,610,1220,830,1450]})
data1.set_index('i', inplace=True)
data1
plt.plot(data1.x,data1.y,"o")
plt.show()
We can convert it by taking the logarithm of both sides:

$$\ln y = \ln\alpha + \beta\ln x$$

so that, defining $x' = \ln x$, $y' = \ln y$, $a_0 = \ln\alpha$ and $a_1 = \beta$, it becomes the linear model $y' = a_0 + a_1 x'$,

and as the least-squares fit for a linear model is given as:

$$a_1 = \frac{n\sum x_i' y_i' - \sum x_i'\sum y_i'}{n\sum x_i'^2 - \left(\sum x_i'\right)^2},\qquad a_0 = \bar{y}' - a_1\bar{x}'$$

(For derivations, refer to FIZ219 Lecture Notes #5)
and since $x' = \ln x$ and $y' = \ln y$, we can directly compute:
n = data1.shape[0]
xp = np.log(data1.x)
yp = np.log(data1.y)
a1 = (n*np.sum(xp*yp)-np.sum(xp)*np.sum(yp)) / (n*np.sum(xp**2) - np.sum(xp)**2)
a0 = np.mean(yp) - a1*np.mean(xp)
print("a0: {:7.4f}\na1: {:7.4f}".format(a0,a1))
as $\alpha = e^{a_0}$ and $\beta = a_1$:
alpha = np.exp(a0)
beta = a1
print("alpha: {:7.4f}\nbeta: {:7.4f}".format(alpha,beta))
def fun(alpha, beta, x):
    return alpha*x**beta

xx = np.linspace(0,80,100)
yy = fun(alpha,beta,xx)
plt.plot(data1.x,data1.y,"or",xx,yy,"-b")
plt.show()
def fun_pow(alpha, beta, x):
    return alpha*x**beta

x = data1.x
y = data1.y

def err(params):
    # params = [alpha, beta]: sum of squared residuals
    e = y - fun_pow(params[0],params[1],x)
    return np.sum(e**2)
from scipy.optimize import minimize
res = minimize(err,[0.274,1.98]) # initial guess: the linearized LS estimates
print(res)
alpha2,beta2 = res.x
xx = np.linspace(0,80,100)
yy2 = fun(alpha2,beta2,xx)
plt.plot(data1.x,data1.y,"or",xx,yy2,"-k")
plt.show()
err_ls = err([alpha,beta])
err_min = err([alpha2,beta2])
print("Least-square sum of squares error: {:10.2f}".format(err_ls))
print(" Minimizer sum of squares error: {:10.2f}".format(err_min))
Let's plot the two side by side:
xx = np.linspace(0,80,100)
yy_ls = fun(alpha,beta,xx)
yy_min = fun(alpha2,beta2,xx)
# Blue for least-squares, Black for minimizer
plt.plot(data1.x,data1.y,"or",xx,yy_ls,"-b",xx,yy_min,"-k")
plt.legend(["data","least-squares","minimizer"])
plt.show()
Apart from the power fits, we also performed an even simpler operation, namely fitting the data to a linear model. Let's put all three together:
xx = np.linspace(0,80,100)
yy_ls_pow = fun(alpha,beta,xx)
yy_min_pow = fun(alpha2,beta2,xx)
yy_ls_lin = fun_lin(a,b,xx)
# Blue for least-squares (power), black for minimizer (power),
# magenta for least-squares (linear)
plt.plot(data1.x,data1.y,"or",xx,yy_ls_pow,"-b",\
xx,yy_min_pow,"-k",\
xx,yy_ls_lin,"-m")
plt.legend(["data","least-squares (power)",\
"minimizer (power)","least-squares (linear)"])
plt.show()
and here is a table of the errors:
Method | Error ($S_r$) |
---|---|
LS (power) | 345713.59 |
Minimizer (power) | 222604.85 |
LS (linear) | 216118.15 |
So, we should take the linear least-squares fit, as it yields the closest results... or should we? (Numerically, it is indeed the best, as it has the lowest error.)
Now, what would you say if I told you that this was some kind of "force vs. velocity" data -- would you change your mind then?
Here, let's make the graph in the proper way:
plt.plot(data1.x,data1.y,"or",xx,yy_ls_pow,"-b",\
xx,yy_min_pow,"-k",\
xx,yy_ls_lin,"-m")
plt.legend(["data","least-squares (power)",\
"minimizer (power)","least-squares (linear)"])
plt.title ("(Some kind of) Force vs. Velocity")
plt.xlabel("v (m/s)")
plt.ylabel("F (N)")
plt.show()
Even though the linear model produces a better fit, the bothersome thing is its behaviour for small velocities: the fit carries the response down to negative forces, which doesn't make much sense (can you think of a case that behaves like this? Downwards for low velocities yet upwards for high velocities? Non-Newtonian liquids? Not very likely).

Therefore, even if it's not the best fit, realizing that we are actually dealing with forces and velocities (physical quantities, not some mathematical toy), it hopefully makes much more sense to choose the power model over the linear model.
So our equation looks something like $F = \alpha v^{\beta}$, with:
Method | $\alpha$ | $\beta$ |
---|---|---|
LS | 0.2741 | 1.9842 |
Minimizer | 2.5384 | 1.4358 |
Still, are we insisting on taking the minimizer's results (because it yielded a better fit)?
In physics, the power relations usually (and interestingly, actually) come with integer exponents. The LS fit's $\beta = 1.9842$ is very close to 2, whereas the minimizer's $\beta = 1.4358$ is nowhere near an integer.
So, to cut a long story short, that "some kind of force" was actually the drag force:

$$F_d = \tfrac{1}{2}\rho v^2 C_d A$$

with $\rho$ the density of the fluid, $C_d$ the drag coefficient and $A$ the cross-sectional area: a force quadratic in the velocity, so $\beta = 2$, with the prefactor $\alpha$ collecting $\tfrac{1}{2}\rho C_d A$.
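As a quick illustration (an addition to the notes): once the physics fixes $\beta = 2$, the least-squares problem reduces to a single parameter, with the closed form $\alpha = \sum_i F_i v_i^2 \big/ \sum_i v_i^4$:

v = data1.x
F = data1.y
alpha_phys = np.sum(F*v**2)/np.sum(v**4) # beta fixed to 2 by the physics
print(alpha_phys, err([alpha_phys, 2])) # compare the error with the other fits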
Moral of the story: we are neither mathematicians nor computers, we are humans and physicists! Always eye-ball the model and, more importantly, use your heads! 8)
+++
If you have $n$ data points, you can always find an $(n-1)$th order polynomial that passes exactly through all of them. But should you use it?
Once again, let's check our good old data:
i | x | y |
---|---|---|
1 | 10 | 25 |
2 | 20 | 70 |
3 | 30 | 380 |
4 | 40 | 550 |
5 | 50 | 610 |
6 | 60 | 1220 |
7 | 70 | 830 |
8 | 80 | 1450 |
data = np.array([range(10,90,10),[25,70,380,550,610,1220,830,1450]]).T
x = data[:,0]
y = data[:,1]
def err_Sr(y,t):
    # Sum of the squares of the estimate residuals
    return np.sum((y-t)**2)
p = np.polyfit(x,y,len(x)-1)
print(p)
xx = np.linspace(10,80,100)
yy = np.zeros(len(xx))
n = len(x)
for k in range(n):
    yy += p[k]*xx**(n-k-1)
# we could as well have used the poly1d function
# to functionalize the polynomial 8)
f = np.poly1d(p)
print(f(x))
print("Sum of squares error: {:10.2f}".format(err_Sr(y,f(x))))
plt.plot(xx,yy,"-b",x,y,"ok",xx,f(xx),"-r")
plt.show()
for s in np.arange(2,8):
    print("Order: {:d}".format(len(x)-s))
    p = np.polyfit(x,y,len(x)-s)
    print(p)
    xx = np.linspace(10,80,100)
    f = np.poly1d(p)
    print("Sum of squares error: {:10.2f}".format(err_Sr(y,f(x))))
    plt.plot(x,y,"ok",xx,f(xx),"-r")
    plt.show()
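To see this trade-off at a glance, here's a small sketch (an addition to the notes) collecting $S_r$ for each polynomial order from 1 up to 7 and plotting it:

orders = np.arange(1,len(x))
# (the highest orders may trigger a RankWarning: those fits are badly conditioned)
errs = [err_Sr(y,np.poly1d(np.polyfit(x,y,o))(x)) for o in orders]
plt.plot(orders,errs,"o-")
plt.xlabel("Polynomial order")
plt.ylabel("$S_r$")
plt.show()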
We have already met with the sum of the squares of the estimate residuals ($S_r$). Let's now formalize it alongside its counterpart.
The data points' distances from the average value lead to the sum of the squares of the data residuals ($S_t$):

$$S_t = \sum_i\left(y_i - \bar{y}\right)^2$$

whereas the sum of the squares of the estimate residuals ($S_r$) measures the distances from the model estimates $t_i$:

$$S_r = \sum_i\left(y_i - t_i\right)^2$$
And here they are, visualized for a linear fit:
(Source: S.C. Chapra, Applied Numerical Methods with MATLAB)
Using these two quantities, the coefficient of determination ($r^2$) is defined as:

$$r^2 = \frac{S_t - S_r}{S_t}$$

where a result of 1 (hence $S_r = 0$) represents a perfect fit.
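For instance, for the linear fit from the beginning of the lecture (assuming the cells defining `x`, `y`, `a` and `b` above have been run), the coefficient of determination comes out as:

t = a*x + b # linear model estimates
S_t = np.sum((y-np.mean(y))**2)
S_r = np.sum((y-t)**2)
print("r2: {:.4f}".format((S_t-S_r)/S_t))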
+++
Fourier Transform Infrared (FTIR) Spectroscopy is one of the fundamental IR spectrum analysis methods. We are going to investigate the FTIR data of silica, courtesy of Prof. Sevgi Bayarı.
Data: {download}`05_Silica_FTIR.csv <data/05_Silica_FTIR.csv>`
data_IR = pd.read_csv("data/05_Silica_FTIR.csv",header=None)
data_IR.columns = ["Wavenumber (cm-1)","Absorbance"]
print(data_IR)
import seaborn as sns
sns.set_theme()
plt1 = sns.relplot(data=data_IR,x="Wavenumber (cm-1)",\
y="Absorbance",kind="line")
aux = plt1.set_axis_labels("Wavenumber ($cm^{-1}$)","Absorbance")
data_IR["Wavelength (um)"] = 1/data_IR["Wavenumber (cm-1)"]*1E-2*1E6
print(data_IR)
Let's focus on the highest peak:
filter1 = (data_IR.iloc[:,0] >=900) & (data_IR.iloc[:,0] <= 1500)
data_IR_filtered = data_IR[filter1]
plt1 = sns.relplot(data=data_IR_filtered,x="Wavenumber (cm-1)",\
y="Absorbance",kind="line")
aux = plt1.set_axis_labels("Wavenumber ($cm^{-1}$)","Absorbance")
Let's try to put a Gaussian in it! 8)

$$G(x) = A\,e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$

Here, $A$ is the amplitude, $\mu$ the center position and $\sigma$ the width (standard deviation) of the peak:
def Gauss(x,A,mu,sigma):
    y = A*np.exp(-(x-mu)**2/(2*sigma**2))
    return y
data_IR_x = data_IR_filtered.iloc[:,0]
data_IR_y = data_IR_filtered.iloc[:,1]
Let's try with a crude guess for the peak parameters:
x = data_IR_x
y_0 = Gauss(x,1.30,1150,200)
plt.plot(x,y_0,"b-",data_IR_x,data_IR_y,"r-")
plt.show()
We can surely do better than that!
y_max = np.max(data_IR_y)
i_ymax = np.argmax(data_IR_y)
print(i_ymax,y_max)
x_ymax = data_IR_x.iloc[i_ymax]
print(x_ymax,y_max)
x = data_IR_x
y_1 = Gauss(x,y_max,x_ymax,100)
plt.plot(x,y_1,"b-",data_IR_x,data_IR_y,"r-")
plt.show()
Let's call statistics for help!
# Treat the absorbance values as weights to estimate the
# peak's mean and standard deviation from the data itself:
N = np.sum(data_IR_y)
mu_x = np.sum(data_IR_x*data_IR_y)/N
mu_x2 = np.sum(data_IR_x**2*data_IR_y)/N
sigma = np.sqrt(mu_x2 - mu_x**2)
y_max_opt = y_max
print(mu_x,sigma)
x = data_IR_x
y_2 = Gauss(x,y_max_opt,mu_x,sigma)
# scale the Gaussian so that its total sum matches the data's
N2 = np.sum(y_2)
print(N/N2)
y_2 *= N/N2
plt.plot(x,y_2,"b-",data_IR_x,data_IR_y,"r-")
plt.show()
y_max_opt
`scipy.optimize.curve_fit()` to the rescue!
from scipy import optimize
popt,_=optimize.curve_fit(Gauss,data_IR_x,data_IR_y,p0=[y_max,x_ymax,sigma])
print(popt)
x = data_IR_x
y_3 = Gauss(x,popt[0],popt[1],popt[2])
plt.plot(x,y_3,"b-",data_IR_x,data_IR_y,"r-")
plt.show()
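As a small bonus (not part of the original analysis): the full width at half maximum of a Gaussian follows directly from the fitted $\sigma$ via $\mathrm{FWHM} = 2\sqrt{2\ln 2}\,\sigma \approx 2.3548\,\sigma$:

# FWHM of a Gaussian from its standard deviation
# (abs() in case the fit converged to a negative sigma)
FWHM = 2*np.sqrt(2*np.log(2))*np.abs(popt[2])
print("FWHM: {:.2f} cm^-1".format(FWHM))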
Let's calculate the coefficient of determination ($r^2$) for each of our attempts:
def r2(y,t):
    # y: true data
    # t: model data
    mean = np.mean(y)
    S_t = np.sum((y-mean)**2)
    S_r = np.sum((y-t)**2)
    r2 = (S_t - S_r)/S_t
    return r2
print(r2(data_IR_y,y_0))
print(r2(data_IR_y,y_1))
print(r2(data_IR_y,y_2))
print(r2(data_IR_y,y_3))
- This lecture benefits heavily from Steven Chapra's Applied Numerical Methods with MATLAB: for Engineers & Scientists.
- I'm indebted to Prof. Sevgi Bayarı for generously supplying the FTIR data.