Introduction

Least squares methods are widely used and form an important foundation for many concepts in statistics, machine learning, and computational neuroscience. This tutorial provides an introduction to ordinary and weighted least squares.

Ordinary Least Squares

The brain must continuously infer the causes of its sensory experiences. Indeed, the brain has no direct access to the world around it; rather, it relies on sequences of action potentials sent from sensory receptors, and these spike trains carry only correlational information about their underlying causes. We will denote the observed data by the vector $\mathbf{x}$, and assume these data are generated by some unobservable ("hidden") causes in the world, denoted by the vector $\boldsymbol\theta$. The generative model can be represented as in the following equation:

$$\mathbf{x} = \boldsymbol\beta\boldsymbol\theta,$$
where $\boldsymbol\beta$ denotes a matrix of parameters that transforms $\boldsymbol\theta$ into $\mathbf{x}$. We can define the error in this model as

$$\boldsymbol\xi = \boldsymbol\beta\boldsymbol\theta - \mathbf{x},$$
and the loss function as the squared error

$$\ell = \boldsymbol\xi^\top\boldsymbol\xi = (\boldsymbol\beta\boldsymbol\theta - \mathbf{x})^\top(\boldsymbol\beta\boldsymbol\theta - \mathbf{x}).$$
Let's expand the above equation, noting that $\boldsymbol\theta^\top\boldsymbol\beta^\top\mathbf{x}$ is a scalar and therefore equal to its own transpose $\mathbf{x}^\top\boldsymbol\beta\boldsymbol\theta$:

$$\ell = \boldsymbol\theta^\top\boldsymbol\beta^\top\boldsymbol\beta\boldsymbol\theta - 2\boldsymbol\theta^\top\boldsymbol\beta^\top\mathbf{x} + \mathbf{x}^\top\mathbf{x}.$$
Taking the derivative of this function with respect to the causes $\boldsymbol\theta$ (using the identities $\partial(\boldsymbol\theta^\top\mathbf{A}\boldsymbol\theta)/\partial\boldsymbol\theta = 2\mathbf{A}\boldsymbol\theta$ for symmetric $\mathbf{A}$ and $\partial(\boldsymbol\theta^\top\mathbf{b})/\partial\boldsymbol\theta = \mathbf{b}$),

$$\frac{\partial \ell}{\partial \boldsymbol\theta} = 2\boldsymbol\beta^\top\boldsymbol\beta\boldsymbol\theta - 2\boldsymbol\beta^\top\mathbf{x}.$$
Setting this to zero, we can solve for $\boldsymbol\theta$:

$$\hat{\boldsymbol\theta} = (\boldsymbol\beta^\top\boldsymbol\beta)^{-1}\boldsymbol\beta^\top\mathbf{x},$$
which is the ordinary least squares equation.
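
To make this concrete, here is a minimal NumPy sketch of the OLS solution. Everything in it (the dimensions, the parameter matrix `beta`, the true causes, and the noise level) is a made-up assumption for illustration, not part of the derivation above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up dimensions: 100 observations generated from 3 hidden causes.
n_obs, n_causes = 100, 3
beta = rng.normal(size=(n_obs, n_causes))   # parameter matrix (assumed known)
theta_true = np.array([1.5, -2.0, 0.5])     # hidden causes we hope to recover
x = beta @ theta_true + 0.1 * rng.normal(size=n_obs)  # noisy observations

# OLS estimate via the normal equations: theta_hat = (B^T B)^(-1) B^T x.
# Solving the linear system is preferred over forming the inverse explicitly.
theta_hat = np.linalg.solve(beta.T @ beta, beta.T @ x)

# Cross-check against NumPy's built-in least squares solver.
theta_lstsq, *_ = np.linalg.lstsq(beta, x, rcond=None)
assert np.allclose(theta_hat, theta_lstsq)
print(theta_hat)  # close to theta_true
```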

Weighted Least Squares

Consider that some measurements of $\mathbf{x}$ may be more informative than others, and we may want to account for this differential value. We can vary the contribution of any given observation by assigning it a weight. We thus introduce a weight matrix $\mathbf{W}$ into $\mathbf{x} = \boldsymbol\beta\boldsymbol\theta$,

$$\mathbf{W}\mathbf{x} = \mathbf{W}\boldsymbol\beta\boldsymbol\theta,$$
and consequently into the error $\boldsymbol\xi = \boldsymbol\beta\boldsymbol\theta - \mathbf{x}$, giving the weighted loss

$$\ell = \boldsymbol\xi^\top\mathbf{W}\boldsymbol\xi = (\boldsymbol\beta\boldsymbol\theta - \mathbf{x})^\top\mathbf{W}(\boldsymbol\beta\boldsymbol\theta - \mathbf{x}).$$
Consider the case of $n$ independent and identically distributed (iid) measurements $x_{1}, x_{2}, \ldots, x_{n}$ collected in the vector $\mathbf{x}$. Each measurement has an associated variance, which the iid assumption makes identical across measurements; this obviates the need for subscripting, and we denote the common variance by $\sigma^2$. Let us suppose, for now, that

$$\mathbf{W} = \begin{bmatrix} \sigma^2 & 0 & \cdots & 0 \\ 0 & \sigma^2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma^2 \end{bmatrix} = \sigma^2\mathbf{I}.$$
This is one particular weight matrix you might choose. Typically, however, one would want to assign *higher* weights to measurements with greater precision, i.e. weights of $1/\sigma^2$.
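
As a quick illustration of precision weighting, the sketch below builds a diagonal weight matrix from hypothetical per-measurement noise variances; the numbers are invented for the example.

```python
import numpy as np

# Hypothetical per-measurement noise variances: later measurements are noisier.
sigma2 = np.array([0.1, 0.1, 0.5, 1.0, 2.0])

# Precision weighting: observation i gets weight 1 / sigma_i^2,
# so noisier measurements contribute less to the fit.
W = np.diag(1.0 / sigma2)
```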

Just as in the OLS case, we take the derivative of the cost function with respect to $\boldsymbol\theta$ and set it to zero (assuming $\mathbf{W}$ is symmetric, as any diagonal weight matrix is):

$$\frac{\partial \ell}{\partial \boldsymbol\theta} = 2\boldsymbol\beta^\top\mathbf{W}\boldsymbol\beta\boldsymbol\theta - 2\boldsymbol\beta^\top\mathbf{W}\mathbf{x} = 0 \quad\Rightarrow\quad \hat{\boldsymbol\theta} = (\boldsymbol\beta^\top\mathbf{W}\boldsymbol\beta)^{-1}\boldsymbol\beta^\top\mathbf{W}\mathbf{x}.$$
Note that setting $\mathbf{W} = \mathbf{I}$, where $\mathbf{I}$ is the identity matrix, recovers ordinary least squares:

$$\hat{\boldsymbol\theta} = (\boldsymbol\beta^\top\mathbf{I}\boldsymbol\beta)^{-1}\boldsymbol\beta^\top\mathbf{I}\mathbf{x} = (\boldsymbol\beta^\top\boldsymbol\beta)^{-1}\boldsymbol\beta^\top\mathbf{x}.$$
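
To tie the two estimators together, here is a minimal NumPy sketch of the WLS solution, again with made-up dimensions, parameters, and noise levels; it also confirms numerically that $\mathbf{W} = \mathbf{I}$ reproduces the OLS estimate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up problem: 200 measurements of 3 hidden causes with unequal noise levels.
n_obs, n_causes = 200, 3
beta = rng.normal(size=(n_obs, n_causes))
theta_true = np.array([2.0, -1.0, 0.5])
sigma = rng.uniform(0.1, 2.0, size=n_obs)           # per-measurement noise std devs
x = beta @ theta_true + sigma * rng.normal(size=n_obs)

# Precision weights on the diagonal: w_i = 1 / sigma_i^2.
W = np.diag(1.0 / sigma**2)

# WLS estimate: theta_hat = (B^T W B)^(-1) B^T W x.
theta_wls = np.linalg.solve(beta.T @ W @ beta, beta.T @ W @ x)

# Setting W = I reduces the same formula to ordinary least squares.
theta_ols = np.linalg.solve(beta.T @ beta, beta.T @ x)
I_n = np.eye(n_obs)
theta_identity = np.linalg.solve(beta.T @ I_n @ beta, beta.T @ I_n @ x)
assert np.allclose(theta_identity, theta_ols)

print("WLS:", theta_wls)
print("OLS:", theta_ols)
```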