Introduction

When fitting reinforcement learning models to behavioural data, the efficiency of optimization algorithms can be improved dramatically by supplying analytical gradients to the optimization function (e.g. fminunc in MATLAB, or scipy.optimize.minimize in Python).

Procedures

Consider the case where one seeks to find the maximum a posteriori (MAP) estimate for a vector of parameters $\boldsymbol\theta$:

$$\hat{\boldsymbol\theta} = \underset{\boldsymbol\theta}{\arg\max}\; p(\boldsymbol\theta \mid \mathcal{A}, \boldsymbol\phi) = \underset{\boldsymbol\theta}{\arg\max}\; p(\mathcal{A} \mid \boldsymbol\theta)\, p(\boldsymbol\theta \mid \boldsymbol\phi),$$

where $\mathcal{A}$ is a set of actions taken by a group of subjects, $\boldsymbol\theta$ are the model parameters, and $\boldsymbol\phi$ are the parameters of the prior distribution of $\boldsymbol\theta$. Since the logarithm is a monotonically increasing function, the above equation can be written as

$$\hat{\boldsymbol\theta} = \underset{\boldsymbol\theta}{\arg\max} \left[ \log p(\mathcal{A} \mid \boldsymbol\theta) + \log p(\boldsymbol\theta \mid \boldsymbol\phi) \right].$$

The right hand side of the above equation must be differentiated with respect to the parameters $\boldsymbol\theta$:

$$\nabla_{\boldsymbol\theta} \left[ \log p(\mathcal{A} \mid \boldsymbol\theta) + \log p(\boldsymbol\theta \mid \boldsymbol\phi) \right] = \nabla_{\boldsymbol\theta} \log p(\mathcal{A} \mid \boldsymbol\theta) + \nabla_{\boldsymbol\theta} \log p(\boldsymbol\theta \mid \boldsymbol\phi),$$

where the elements of the right hand side can be considered individually.

The Prior Distribution

As per Huys et al. (2011), we assume that the parameters $\boldsymbol\theta$ are distributed according to a multivariate Gaussian with hyperparameters $\boldsymbol\phi = \lbrace \phi_\mu, \phi_\Sigma \rbrace$, where $\phi_\mu$ is the prior mean and $\phi_\Sigma$ is the prior covariance matrix:

$$p(\boldsymbol\theta \mid \boldsymbol\phi) = \frac{1}{(2\pi)^{K/2} \lvert \phi_\Sigma \rvert^{1/2}} \exp\left( -\frac{1}{2} (\boldsymbol\theta - \phi_\mu)^\top \phi_\Sigma^{-1} (\boldsymbol\theta - \phi_\mu) \right),$$

where $K$ is the dimension of $\boldsymbol\theta$. For example, if the model you are fitting has parameters $\boldsymbol\theta = \lbrace \alpha, \beta, \omega \rbrace$, then $K=3$. Taking the log of the multivariate Gaussian probability density function (pdf) yields

$$\log p(\boldsymbol\theta \mid \boldsymbol\phi) = -\frac{K}{2}\log(2\pi) - \frac{1}{2}\log \lvert \phi_\Sigma \rvert - \frac{1}{2} (\boldsymbol\theta - \phi_\mu)^\top \phi_\Sigma^{-1} (\boldsymbol\theta - \phi_\mu).$$

Taking the derivative with respect to $\boldsymbol\theta$ proceeds as follows. Since the first two terms do not depend on $\boldsymbol\theta$,

$$\nabla_{\boldsymbol\theta} \log p(\boldsymbol\theta \mid \boldsymbol\phi) = \nabla_{\boldsymbol\theta} \left[ -\frac{1}{2} (\boldsymbol\theta - \phi_\mu)^\top \phi_\Sigma^{-1} (\boldsymbol\theta - \phi_\mu) \right].$$

I like to break things down to facilitate simpler notation and use a very simple looking chain rule. Let

$$\mathbf{z} = \boldsymbol\theta - \phi_\mu,$$

and thus,

$$\log p(\boldsymbol\theta \mid \boldsymbol\phi) = -\frac{K}{2}\log(2\pi) - \frac{1}{2}\log \lvert \phi_\Sigma \rvert - \frac{1}{2} \mathbf{z}^\top \phi_\Sigma^{-1} \mathbf{z}.$$

The chain rule is then easily represented as

$$\frac{\partial \log p(\boldsymbol\theta \mid \boldsymbol\phi)}{\partial \boldsymbol\theta} = \frac{\partial \log p(\boldsymbol\theta \mid \boldsymbol\phi)}{\partial \mathbf{z}} \frac{\partial \mathbf{z}}{\partial \boldsymbol\theta},$$

and we can compute each derivative individually as follows

$$\frac{\partial \log p(\boldsymbol\theta \mid \boldsymbol\phi)}{\partial \mathbf{z}} = -\phi_\Sigma^{-1} \mathbf{z}, \qquad \frac{\partial \mathbf{z}}{\partial \boldsymbol\theta} = \mathbf{I}.$$

Combining these, we find that the derivative of the prior distribution with respect to the parameters $\boldsymbol\theta$, when distributed according to a multivariate Gaussian, is

$$\nabla_{\boldsymbol\theta} \log p(\boldsymbol\theta \mid \boldsymbol\phi) = -\phi_\Sigma^{-1} (\boldsymbol\theta - \phi_\mu).$$

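As a concrete illustration, here is a minimal MATLAB sketch of the log-prior and its gradient. The function and variable names (log_prior, prior_mu, prior_sigma) are my own, assuming theta and prior_mu are K-by-1 column vectors and prior_sigma is a K-by-K covariance matrix:

function [lp, grad] = log_prior(theta, prior_mu, prior_sigma)
% Log-density of a multivariate Gaussian prior and its gradient w.r.t. theta.
K    = numel(theta);
z    = theta - prior_mu;
lp   = -0.5*K*log(2*pi) - 0.5*log(det(prior_sigma)) - 0.5*(z' / prior_sigma)*z;
grad = -(prior_sigma \ z);     % equals -Sigma^{-1} (theta - mu)
end
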
The Log-Likelihood Function

The log-likelihood function is perhaps the more interesting derivative to compute because it will vary based on the model and the number of parameters in each model. An RL model consists of two components: an observation model and a learning model.

Observation Model

The observation model describes the process by which the agent selects actions. Classically, and herein, we implement the observation model as a softmax function characterizing the probability of action $a_t$ given the current state of the learning model, $\mathcal{Q}_t(s_t, a_t)$, and the model parameters $\boldsymbol\theta$:

$$p(a_t \mid \mathcal{Q}_t, \boldsymbol\theta) = \frac{e^{\beta\, \mathcal{Q}_t(s_t, a_t)}}{\sum_{a'} e^{\beta\, \mathcal{Q}_t(s_t, a')}},$$

where $\beta$ is the inverse temperature parameter.

The probability of the entire series of actions $\mathcal{A}_i = \lbrace a_1, a_2, \ldots, a_T \rbrace$ selected by subject $i$ is $p(\mathcal{A}_i \mid \boldsymbol\theta)$. Assuming that, conditioned on $\mathcal{Q}$, the actions are independent across trials, one can express this as the product of the individual observation probabilities at each time $t$:

$$p(\mathcal{A}_i \mid \boldsymbol\theta) = \prod_{t=1}^{T} p(a_t \mid \mathcal{Q}_t, \boldsymbol\theta).$$

Taking the log of both sides, we have the log-likelihood of the participant's actions, which we would like to maximize with respect to the parameters $\boldsymbol\theta$:

$$\mathcal{L}_i(\boldsymbol\theta) = \log p(\mathcal{A}_i \mid \boldsymbol\theta) = \sum_{t=1}^{T} \log p(a_t \mid \mathcal{Q}_t, \boldsymbol\theta) = \sum_{t=1}^{T} \left[ \beta\, \mathcal{Q}_t(s_t, a_t) - \log \sum_{a'} e^{\beta\, \mathcal{Q}_t(s_t, a')} \right].$$

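In practice, each term in this sum is best evaluated with the log-sum-exp trick so that large values of $\beta \mathcal{Q}$ do not overflow. A small MATLAB sketch for a single trial, where q is a row vector of Q-values for the current state, a is the index of the chosen action, and the variable names are mine:

% Stable per-trial log-likelihood: beta*Q(a) - log(sum(exp(beta*Q))).
qs      = beta * q;
m       = max(qs);
log_p_a = qs(a) - (m + log(sum(exp(qs - m))));
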
A Rescorla-Wagner Learning Rule

Consider the following learning rule:

$$\mathcal{Q}_{t+1}(s_t, a_t) = \mathcal{Q}_t(s_t, a_t) + \alpha \left( r_t - \mathcal{Q}_t(s_t, a_t) \right),$$

where $\alpha$ is the learning rate and $r_t$ is the reward received on trial $t$; the values of unchosen actions carry forward unchanged.

We will be computing the derivatives of our RL model using this Rescorla-Wagner learning rule.
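
In code this update is a single line; a sketch, assuming Q is an nStates-by-nActions matrix and s, a, and r are the current state index, action index, and reward:

% Rescorla-Wagner update for the chosen state-action pair.
Q(s, a) = Q(s, a) + alpha * (r - Q(s, a));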

Specific Derivatives

Now that we have specified the observation model and a basic learning rule, we can compute the derivatives of the log-likelihood with respect to the individual parameters in $\boldsymbol\theta$.

For notational simplicity, let

$$\pi_t(a) \equiv p(a \mid \mathcal{Q}_t, \boldsymbol\theta) = \frac{e^{\beta\, \mathcal{Q}_t(s_t, a)}}{\sum_{a'} e^{\beta\, \mathcal{Q}_t(s_t, a')}}.$$

Then the derivative with respect to the learning rate $\alpha$ is

$$\frac{\partial \mathcal{L}_i(\boldsymbol\theta)}{\partial \alpha} = \beta \sum_{t=1}^{T} \left[ \frac{\partial \mathcal{Q}_t(s_t, a_t)}{\partial \alpha} - \sum_{a'} \pi_t(a') \frac{\partial \mathcal{Q}_t(s_t, a')}{\partial \alpha} \right],$$

where

$$\frac{\partial \mathcal{Q}_{t+1}(s_t, a_t)}{\partial \alpha} = \frac{\partial}{\partial \alpha} \left[ \mathcal{Q}_t(s_t, a_t) + \alpha \left( r_t - \mathcal{Q}_t(s_t, a_t) \right) \right],$$

and using the product rule,

$$\frac{\partial \mathcal{Q}_{t+1}(s_t, a_t)}{\partial \alpha} = (1 - \alpha) \frac{\partial \mathcal{Q}_t(s_t, a_t)}{\partial \alpha} + r_t - \mathcal{Q}_t(s_t, a_t).$$

For unchosen actions the derivative carries over unchanged from trial to trial, and $\partial \mathcal{Q}_1 / \partial \alpha = 0$ provided the initial values do not depend on $\alpha$.

The derivative with respect to the inverse temperature $\beta$ is

$$\frac{\partial \mathcal{L}_i(\boldsymbol\theta)}{\partial \beta} = \sum_{t=1}^{T} \left[ \mathcal{Q}_t(s_t, a_t) - \sum_{a'} \pi_t(a') \mathcal{Q}_t(s_t, a') \right],$$

since the values $\mathcal{Q}_t$ depend on $\alpha$ but not on $\beta$.

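Putting the learning rule, the softmax, and the two derivatives together, here is an illustrative MATLAB sketch of the likelihood-and-gradient computation for a single subject. It assumes a single-state (bandit-style) task with actions coded 1..nA, trial-wise vectors actions and rewards of length T, Q-values initialised at zero, and theta = [alpha; beta]; the function name loglik_rw_softmax is my own:

function [ll, grad] = loglik_rw_softmax(theta, actions, rewards)
% Log-likelihood of a softmax / Rescorla-Wagner model for one subject,
% plus its gradient with respect to theta = [alpha; beta].
alpha = theta(1);
beta  = theta(2);

T        = numel(actions);
nA       = max(actions);      % assumes every action index appears at least once
Q        = zeros(1, nA);      % Q-values, initialised at zero
dQdalpha = zeros(1, nA);      % dQ/dalpha; zero because Q_1 does not depend on alpha

ll   = 0;
grad = zeros(2, 1);

for t = 1:T
    a = actions(t);
    r = rewards(t);

    % Softmax probabilities over actions (log-sum-exp for numerical stability)
    qs = beta * Q;
    m  = max(qs);
    p  = exp(qs - m) / sum(exp(qs - m));

    % Log-likelihood of the observed choice: beta*Q(a) - log(sum(exp(beta*Q)))
    ll = ll + qs(a) - (m + log(sum(exp(qs - m))));

    % dL/dalpha contribution: beta * (dQ(a)/dalpha - sum_a' pi(a') dQ(a')/dalpha)
    grad(1) = grad(1) + beta * (dQdalpha(a) - p * dQdalpha');

    % dL/dbeta contribution: Q(a) - sum_a' pi(a') Q(a')
    grad(2) = grad(2) + Q(a) - p * Q';

    % Update the derivative of Q (product rule) before updating Q itself
    dQdalpha(a) = (1 - alpha) * dQdalpha(a) + (r - Q(a));
    Q(a)        = Q(a) + alpha * (r - Q(a));
end
end
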
Combining the Derivatives of the Likelihood and Priors

If using a function such as fminunc, which performs minimization, the objective function should return the negative log posterior along with its gradient. For instance:

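A sketch of such an objective, built from the pieces above; the helper functions (loglik_rw_softmax, log_prior) and the data and prior structures are illustrative names of my own rather than anything prescribed here:

function [nlp, grad] = logposterior(theta, data, prior)
% Negative log posterior and its gradient, suitable for fminunc (a minimizer).
[ll, grad_ll] = loglik_rw_softmax(theta, data.actions, data.rewards);
[lp, grad_lp] = log_prior(theta, prior.mu, prior.sigma);

nlp  = -(ll + lp);             % negative log posterior
grad = -(grad_ll + grad_lp);   % gradient of the negative log posterior
end
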
Then one could apply fminunc as usual:

[theta_hat, neg_log_post] = fminunc(@(theta) logposterior(theta, data, prior), theta_initial, options);
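
Note that fminunc will only make use of the returned gradient if told to do so; with current releases of MATLAB this is specified through optimoptions (older releases used optimset with 'GradObj', 'on'):

options = optimoptions('fminunc', 'Algorithm', 'trust-region', ...
    'SpecifyObjectiveGradient', true);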