A direct look at least squares

Least squares

Here we go again. After a long time of travelling and illness, there is finally something new on this blog. Today I start a new mini series that takes a look at least squares in various ways. I want to show how least squares arises naturally in different areas. Of course it will all look a little similar, because the underlying principle stays the same; but the basic approach depends on the context.

Today we start with a simple example and the method I'll call the direct approach.

The direct method

Suppose we have some kind of measurement process that takes a measurement every second, and we know that the system we measure follows a second-order (quadratic) model, that is, the values we expect are

\begin{equation*}m(t) = p_1 t^2 + p_2 t + p_3.\end{equation*}

But our measurement process is, of course, noisy in some way. Our aim is now to find good values for $$p_1, p_2$$ and $$p_3$$. But what does good mean? Well, why not start by minimizing the distance between our model prediction and the measured values?
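To have something concrete to play with, here is a minimal sketch of this measurement setup in Python. The "true" parameter values, the noise level, and the random seed are my own assumptions for illustration; the post does not specify them:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

p_true = np.array([0.5, -2.0, 3.0])  # hypothetical true p1, p2, p3
t = np.arange(1.0, 6.0)              # one measurement per second, t = 1..5

def model(p, t):
    """Evaluate m(t) = p1*t^2 + p2*t + p3."""
    return p[0] * t**2 + p[1] * t + p[2]

# Noisy measurements v_i = m(t_i) + noise (assumed Gaussian here)
v = model(p_true, t) + rng.normal(scale=0.5, size=t.shape)
```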

Figure: a bad model fit through the measurements, with the errors we want to reduce.

In this plot we see our measurements as blue dots and an awful model fit as a red line. We also see various $$\epsilon$$ bars. These are the errors we want to minimize: the model prediction at a point $$t_i$$ minus the value $$v_i$$ that we measured at this point:

\begin{equation*}\epsilon_i = m(t_i) - v_{i} = p_1 t_i^2 + p_2 t_i + p_3 - v_i = \begin{pmatrix} t_i^2 & t_i & 1 \end{pmatrix} \begin{pmatrix} p_1 \\ p_2 \\ p_3 \end{pmatrix} - v_i\end{equation*}

It may seem like a bit of a stretch to write this simple equation in matrix notation in the last step, but it will benefit us in a moment.

We do not want to minimize the plain sum of these errors, because positive and negative values might cancel out. The absolute value is not convenient either, because the error function then becomes non-smooth. So let's instead minimize the sum of squares of the individual errors:

\begin{equation*}S := \sum_{i=1}^5 \epsilon_i^2\end{equation*}

A quadratic error function is convenient to minimize: it has a single global minimum (as long as it is convex, which will turn out to be the case here), and we can find it by setting the gradient to zero.
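To make the sum of squares concrete, here it is spelled out as a plain loop over the five measurements, continuing the sketch from above. The candidate parameters passed in at the end are again just an assumption, deliberately chosen to be bad:

```python
def sum_of_squares(p, t, v):
    """S = sum over i of (m(t_i) - v_i)^2, written out element by element."""
    S = 0.0
    for t_i, v_i in zip(t, v):
        eps_i = p[0] * t_i**2 + p[1] * t_i + p[2] - v_i
        S += eps_i**2
    return S

# Evaluate S for some (deliberately bad) candidate parameters
print(sum_of_squares(np.array([1.0, 0.0, 0.0]), t, v))
```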

Writing out this minimization element by element becomes tedious, which is why we go back to the matrix notation we used earlier and stack all our measurements into one system. Working with vectors will make the derivatives easier:

\begin{align*}\vec{\epsilon} &= \begin{pmatrix} m(t_1) - v_1 \\ \vdots \\ m(t_5) - v_5 \end{pmatrix} \\ &= \begin{pmatrix} p_1 t_1^2 + p_2 t_1 + p_3 - v_1 \\ \vdots \\ p_1 t_5^2 + p_2 t_5 + p_3 - v_5 \end{pmatrix} \\ &= \underbrace{\begin{pmatrix} t_1^2 & t_1 & 1 \\ \vdots & \vdots & \vdots \\ t_5^2 & t_5 & 1 \end{pmatrix}}_{:= \mat{M}} \underbrace{\begin{pmatrix} p_1 \\ p_2 \\ p_3 \end{pmatrix}}_{:= \vec{p}} - \underbrace{\begin{pmatrix} v_1 \\ \vdots \\ v_5 \end{pmatrix}}_{:= \vec{v}}\end{align*}
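In code, stacking the measurements gives exactly this design matrix. A sketch continuing the example from above, reproducing the loop-based result in one shot:

```python
# Design matrix M: one row (t_i^2, t_i, 1) per measurement
M = np.column_stack([t**2, t, np.ones_like(t)])

# The whole residual vector at once: eps = M p - v
def residuals(p, M, v):
    return M @ p - v

# S = eps^T eps, matching the loop-based version above
eps = residuals(np.array([1.0, 0.0, 0.0]), M, v)
print(eps @ eps)
```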

Our error function $$S$$ then becomes

\begin{align*}S & = \vec{\epsilon}^T \vec{\epsilon} \\ &= (\mat{M} \vec{p} - \vec{v})^T (\mat{M} \vec{p} - \vec{v}) \\ &= \vec{p}^T \mat{M}^T \mat{M} \vec{p} - \vec{p}^T \mat{M}^T \vec{v} - \vec{v}^T \mat{M} \vec{p} + \vec{v}^T \vec{v}\end{align*}

Finding the gradient of $$S$$ with respect to the parameter vector $$\vec{p}$$ is now easy, using the identities $$\nabla_{\vec{p}} (\vec{p}^T \mat{A} \vec{p}) = 2 \mat{A} \vec{p}$$ for symmetric $$\mat{A}$$ and $$\nabla_{\vec{p}} (\vec{b}^T \vec{p}) = \vec{b}$$. We immediately set the result equal to the zero vector to find the minimum of this function:

\begin{equation*}\nabla S = 2 \mat{M}^T \mat{M} \vec{p} - 2 \mat{M}^T \vec{v} \stackrel{!}{=} \vec{0}\end{equation*}
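If you do not trust the matrix calculus, a quick finite-difference check of this gradient is reassuring. This is just a sketch using the arrays and the `residuals` helper defined above; the step size and test point are arbitrary choices of mine:

```python
def grad_S(p, M, v):
    """Analytic gradient: 2 M^T M p - 2 M^T v."""
    return 2.0 * M.T @ M @ p - 2.0 * M.T @ v

def grad_S_numeric(p, M, v, h=1e-6):
    """Central finite differences on S(p), one coordinate at a time."""
    g = np.zeros_like(p)
    for k in range(len(p)):
        dp = np.zeros_like(p)
        dp[k] = h
        S_plus = residuals(p + dp, M, v) @ residuals(p + dp, M, v)
        S_minus = residuals(p - dp, M, v) @ residuals(p - dp, M, v)
        g[k] = (S_plus - S_minus) / (2.0 * h)
    return g

p0 = np.array([1.0, 0.0, 0.0])
print(np.allclose(grad_S(p0, M, v), grad_S_numeric(p0, M, v), atol=1e-3))
```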

And so we find that the parameters that minimize the error are

\begin{equation*}\vec{p} = (\mat{M}^T \mat{M})^{-1} \mat{M}^T \vec{v}\end{equation*}
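Numerically, one would solve the linear system $$\mat{M}^T \mat{M} \vec{p} = \mat{M}^T \vec{v}$$ rather than form the inverse explicitly. A sketch with the arrays from above; `np.linalg.lstsq` is a more robust alternative that avoids forming $$\mat{M}^T \mat{M}$$ altogether:

```python
# Solve (M^T M) p = M^T v instead of inverting M^T M explicitly
p_hat = np.linalg.solve(M.T @ M, M.T @ v)

# Equivalent, but numerically more robust: a dedicated least-squares solver
p_lstsq, *_ = np.linalg.lstsq(M, v, rcond=None)

print(p_hat)
print(np.allclose(p_hat, p_lstsq))
```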

Using this with the above example, we get the following best fit:

Figure: the best fit for the model.

Conclusion

I call this the direct approach because we directly set out to minimize the error, and the least squares formula pops out. There are other goals one can set out to achieve that lead to the same formula, which I will show in some of the next posts. Hopefully it will not take as long until I find some time to post again.