Least Squares with Matrix Operations in Python for Exponential Approximation

Author: Daisuke Kobayashi
In the previous post, I used SciPy to fit data by least squares. I eventually wanted to try Bayesian linear regression as well, and I also wanted to understand what was happening internally, so this time I solved the least-squares problem directly with matrix operations. I think scipy.optimize.leastsq is really aimed at nonlinear least squares.
As before, I use the same data and fit it with an exponential function:
y = a * exp(b * x)
log(y) = log(a) + b * x
Because the basis is phi(x) = [1, x], the computed weight vector w lives in the log domain: w[0] = log(a) and w[1] = b. That is why I convert the first component back with exp at the end. Since the model has to be a linear combination of the weights, I think this kind of transformation is necessary; for logarithmic and power-law approximations as well, you probably need to rewrite the model in a form that is linear in the weights.
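Written out, the solver below forms the normal equations (PHI^T PHI) w = PHI^T t and solves them directly. Here is a minimal sketch of that step on made-up toy data (the values are only for illustration, not the data from the post):

```python
import numpy

# Toy data (made up for illustration); fit in the log domain
x = numpy.array([0.1, 0.2, 0.4, 0.8])
t = numpy.log(numpy.array([1.1, 1.2, 1.5, 2.2]))

# Design matrix for the basis phi(x) = [1, x]: one row per sample
PHI = numpy.column_stack([numpy.ones_like(x), x])

# Normal equations: (PHI^T PHI) w = PHI^T t
w = numpy.linalg.solve(numpy.dot(PHI.T, PHI), numpy.dot(PHI.T, t))

# Recover a and b of y = a * exp(b * x)
a, b = numpy.exp(w[0]), w[1]
print(a, b)
```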
Running the script below produces the following weights, which confirms that the result matches the previous post.
0.00580943068574 4.84443568163
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import numpy

def least_squares_regression(X, t, phi):
    # Build the design matrix PHI, one row phi(x) per sample
    PHI = numpy.array([phi(x) for x in X])
    # Solve the normal equations (PHI^T PHI) w = PHI^T t
    w = numpy.linalg.solve(numpy.dot(PHI.T, PHI), numpy.dot(PHI.T, t))
    return w

if __name__ == "__main__":
    X = numpy.array([6.559112404564946264e-01,
                     6.013845740818111185e-01,
                     4.449591514877473397e-01,
                     3.557250387126167923e-01,
                     3.798882550532960423e-01,
                     3.206955701106445344e-01,
                     2.600880460776140990e-01,
                     2.245379618606005157e-01])
    t = numpy.array([1.397354195522357567e-01,
                     1.001406990711011247e-01,
                     5.173231204524778720e-02,
                     3.445520251689743879e-02,
                     3.801366557283047953e-02,
                     2.856782588754304408e-02,
                     2.036328213585812327e-02,
                     1.566228252276009869e-02])
    phi = lambda x: [1, x]
    # Fit in the log domain: log(y) = log(a) + b * x
    w = least_squares_regression(X, numpy.log(t), phi)
    # Convert the intercept back: a = exp(w[0]), b = w[1]
    print(numpy.exp(w[0]), w[1])
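As a cross-check, NumPy also ships numpy.linalg.lstsq, which solves the same problem with an SVD-based routine instead of forming PHI^T PHI explicitly (the explicit normal equations can lose precision when the design matrix is ill-conditioned). This sketch reuses the same data and should reproduce the same weights:

```python
import numpy

X = numpy.array([6.559112404564946264e-01, 6.013845740818111185e-01,
                 4.449591514877473397e-01, 3.557250387126167923e-01,
                 3.798882550532960423e-01, 3.206955701106445344e-01,
                 2.600880460776140990e-01, 2.245379618606005157e-01])
t = numpy.array([1.397354195522357567e-01, 1.001406990711011247e-01,
                 5.173231204524778720e-02, 3.445520251689743879e-02,
                 3.801366557283047953e-02, 2.856782588754304408e-02,
                 2.036328213585812327e-02, 1.566228252276009869e-02])

# Same design matrix as phi(x) = [1, x]
PHI = numpy.column_stack([numpy.ones_like(X), X])

# Minimizes ||PHI w - log(t)|| without forming PHI^T PHI
w, residuals, rank, sv = numpy.linalg.lstsq(PHI, numpy.log(t), rcond=None)
print(numpy.exp(w[0]), w[1])
```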