
8.1. The Gradient Vector

The big theme in our course so far has been the three-step modeling recipe: choose a model, choose a loss function, and then minimize empirical risk (i.e. average loss) to find optimal model parameters.

In Chapter 2.3, we used calculus to find the slope $w_1^*$ and intercept $w_0^*$ that minimized mean squared error,

$$R_\text{sq}(w_0, w_1) = \frac{1}{n} \sum_{i=1}^n (y_i - (w_0 + w_1 x_i))^2$$

by computing $\frac{\partial R_\text{sq}}{\partial w_0}$ (the partial derivative with respect to $w_0$) and $\frac{\partial R_\text{sq}}{\partial w_1}$, setting both to zero, and solving the resulting system of equations.

Then, in Chapters 7.1 and 7.2, we focused on the multiple linear regression model and the squared loss function, which saw us minimize

$$R_\text{sq}(\vec w) = \frac{1}{n} \lVert \vec y - X \vec w \rVert^2 = \frac{1}{n} \sum_{i=1}^n (y_i - \vec w \cdot \text{Aug}(\vec x_i))^2$$

where $X$ is the $n \times (d + 1)$ design matrix, $\vec y \in \mathbb{R}^n$ is the observation vector, and $\vec w \in \mathbb{R}^{d+1}$ is the parameter vector we’re trying to pick. In Chapter 6.3, we minimized $R_\text{sq}(\vec w)$ by arguing that the optimal $\vec w^*$ had to create an error vector, $\vec e = \vec y - X \vec w^*$, that was orthogonal to $\text{colsp}(X)$, which led us to the normal equations.
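To make that setup concrete, here is a minimal numpy sketch that solves the normal equations $X^TX \vec w^* = X^T \vec y$ on a small made-up dataset (the numbers are mine, just for illustration):

```python
import numpy as np

# Toy data: n = 4 observations, d = 1 feature, plus a column of 1s for the intercept.
X = np.array([[1.0, 2.0],
              [1.0, 3.0],
              [1.0, 5.0],
              [1.0, 7.0]])        # n x (d + 1) design matrix
y = np.array([1.0, 3.0, 4.0, 6.0])  # observation vector in R^n

# Solve the normal equations X^T X w = X^T y for the optimal parameter vector.
w_star = np.linalg.solve(X.T @ X, X.T @ y)

# Mean squared error at w_star.
mse = np.mean((y - X @ w_star) ** 2)
print(w_star, mse)
```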

It turns out that there’s a way to use our calculus-based approach from Chapter 2.3 to minimize the more general version of $R_\text{sq}$ for any $d$, one that doesn’t involve computing $d$ partial derivatives. To see how this works, we need to define a new object, the gradient vector, which we’ll do here in Chapter 8.1. After we’re familiar with how the gradient vector works, we’ll use it to build a new approach to function minimization, one that works even when there isn’t a closed-form solution for the optimal parameters: that technique is called gradient descent, which we’ll see in Chapter 8.3.


Domain and Codomain

As we saw in Chapter 6.2 when we first introduced the concept of the inverse of a matrix, the notation

$$f: \mathbb{R}^d \to \mathbb{R}^n$$

means that $f$ is a function whose inputs are vectors with $d$ components and whose outputs are vectors with $n$ components. $\mathbb{R}^d$ is the domain of the function, and $\mathbb{R}^n$ is the codomain. I’ve used $d$ and $n$ to match the notation we’ve used for matrices and linear transformations. In general, if $A$ is an $n \times d$ matrix, then any vector $\vec x$ multiplied by $A$ (on the right) must be in $\mathbb{R}^d$, and the result $A \vec x$ will be in $\mathbb{R}^n$.
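Here’s a quick numpy sketch of that shape bookkeeping (the matrix and vector are made up):

```python
import numpy as np

A = np.random.default_rng(0).normal(size=(5, 3))  # an n x d matrix, with n = 5, d = 3
x = np.ones(3)                                    # x must live in R^d

print((A @ x).shape)   # (5,) -- the result A x lives in R^n
```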

Given this framing, consider the following four types of functions.

| Type | Domain and Codomain | Examples |
| --- | --- | --- |
| Scalar-to-scalar | $f: \mathbb{R} \to \mathbb{R}$ | $f(x) = x^2 + \sin(x)$ <br> $R_\text{sq}(w) = \frac{1}{n} \sum_{i=1}^n (y_i - w)^2$ |
| Vector-to-scalar | $f: \mathbb{R}^d \to \mathbb{R}$ | $f(\vec x) = \vec x^T \vec x$ <br> $f(\vec x) = (x_1 - 1)^2 + (x_1 - x_2)^2 + 3$ <br> $R_\text{sq}(\vec w) = \frac{1}{n} \lVert \vec y - X \vec w \rVert^2$ |
| Scalar-to-vector | $f: \mathbb{R} \to \mathbb{R}^n$ | $f(x) = \underbrace{\begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix} + x \begin{bmatrix} 3 \\ 0 \\ -4 \end{bmatrix}}_{\text{parametric form of a line}}$ <br> $f(x) = \begin{bmatrix} x^2 + e^{x} \\ -3 \end{bmatrix}$ |
| Vector-to-vector | $f: \mathbb{R}^d \to \mathbb{R}^n$ | $\underbrace{f(\vec x) = \begin{bmatrix} 3 & 4 \\ 2 & 0 \\ 0 & 1 \end{bmatrix} \vec x}_{\text{linear transformation}}$ <br> $f(\vec x) = \begin{bmatrix} x_1^2 + x_1x_2 + \cos(x_1^4) \\ 3x_1x_2 + 4 \\ 5x_1 \\ x_2 / x_3 \end{bmatrix}$ |

The first two types of functions are “scalar-valued”, while the latter two are “vector-valued”. These are not the only types of functions that exist; for instance, the function $f(A) = \text{rank}(A)$ is a matrix-to-scalar function.
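If it helps to see these signatures in code, here’s a small numpy sketch of the examples from the table (the function names are mine, chosen just for this illustration):

```python
import numpy as np

# Scalar-to-scalar: R -> R
def f_scalar_to_scalar(x):
    return x**2 + np.sin(x)

# Vector-to-scalar: R^d -> R
def f_vector_to_scalar(x):
    return x @ x              # x^T x

# Scalar-to-vector: R -> R^n (parametric form of a line)
def f_scalar_to_vector(x):
    return np.array([1, 2, 1]) + x * np.array([3, 0, -4])

# Vector-to-vector: R^d -> R^n (a linear transformation)
A = np.array([[3, 4],
              [2, 0],
              [0, 1]])
def f_vector_to_vector(x):
    return A @ x

print(f_vector_to_scalar(np.array([1.0, 2.0])))   # a single scalar
print(f_vector_to_vector(np.array([1.0, 2.0])))   # a vector in R^3
```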

The functions we’re most concerned with at the moment are vector-to-scalar functions, i.e. functions that take in a vector (or equivalently, multiple scalar inputs) and output a single scalar.

$$R_\text{sq}(\vec w) = \frac{1}{n} \lVert \vec y - X \vec w \rVert^2$$

is one such function, and it’s the focus of this section.


Rates of Change

Let’s now think from the perspective of rates of change, since ultimately what we’re building towards is a technique for minimizing functions. We’re most familiar with the concept of rates of change for scalar-to-scalar functions.

If

$$f(x) = x^2 \sin(x)$$

then its derivative,

$$\frac{\text{d}f}{\text{d}x} = 2x \sin(x) + x^2 \cos(x)$$

itself is a scalar-to-scalar function, which describes how quickly $f$ is changing at any point $x$ in the domain of $f$. At $x = 3$, for instance, the instantaneous rate of change is

$$\frac{\text{d}f}{\text{d}x}(3) = 2\cdot 3 \sin(3) + 3^2 \cos(3) \approx -8.06$$

meaning that at $x = 3$, $f$ is decreasing at a rate of (approximately) 8.06 per unit change in $x$. Perhaps a more intuitive way of thinking about the instantaneous rate of change is to think of it as the slope of the tangent line to $f$ at $x = 3$.
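If you’d like to sanity-check that number, here’s a small sketch that compares the derivative formula above against a centered finite-difference approximation at $x = 3$:

```python
import numpy as np

f = lambda x: x**2 * np.sin(x)
df = lambda x: 2*x*np.sin(x) + x**2 * np.cos(x)   # derivative from the product rule

h = 1e-6
finite_diff = (f(3 + h) - f(3 - h)) / (2 * h)     # centered-difference approximation

print(df(3), finite_diff)   # both are approximately -8.06
```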

[Figure: the graph of $f(x) = x^2 \sin(x)$, with the tangent line at $x = 3$ drawn in.]

The steeper the slope, the faster $f$ is changing at that point; the sign of the slope tells us whether $f$ is increasing or decreasing at that point.

In Chapter 2.3, we saw how to compute derivatives of functions that take in multiple scalar inputs, like

$$f(x, y, z) = x^2 + 2xy + 3xz + 4(y - z)^2$$

In the language of Chapter 8.1, we’d call such a function a vector-to-scalar function, and might use the notation

$$f(\vec x) = x_1^2 + 2x_1x_2 + 3x_1x_3 + 4(x_2 - x_3)^2$$

This function has three partial derivatives, each of which describes the instantaneous rate of change of $f$ with respect to one of its inputs, while holding the other two inputs constant. There’s a good animation of what it means to hold an input constant in Chapter 2.2 that is worth revisiting.

Here,

$$\frac{\partial f}{\partial x_1} = 2x_1 + 2x_2 + 3x_3, \quad \frac{\partial f}{\partial x_2} = 2x_1 + 8x_2 - 8x_3, \quad \frac{\partial f}{\partial x_3} = 3x_1 - 8x_2 + 8x_3$$
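You can double-check these partial derivatives symbolically; here’s one way to do it with sympy (a tool choice of mine, not something the text prescribes):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
f = x1**2 + 2*x1*x2 + 3*x1*x3 + 4*(x2 - x3)**2

# Each partial derivative treats the other two variables as constants.
print(sp.expand(sp.diff(f, x1)))   # 2*x1 + 2*x2 + 3*x3
print(sp.expand(sp.diff(f, x2)))   # 2*x1 + 8*x2 - 8*x3
print(sp.expand(sp.diff(f, x3)))   # 3*x1 - 8*x2 + 8*x3
```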

The big idea of this section, the gradient vector, packages all of these partial derivatives into a single vector. This will allow us to think about the direction in which $f$ is changing, rather than just looking at its rates of change in each dimension independently.


The Gradient Vector

As usual, we’ll start with an example. Suppose $\vec x \in \mathbb{R}^2$, and let

$$f(\vec x) = x_1 e^{-x_1^2 - x_2^2}$$
[Figure: surface plot of $f$.]

The gradient of $f$, written $\nabla f(\vec x)$, collects all of the partial derivatives of $f$ into a single vector, with one component per input variable. To find $\nabla f(\vec x)$ here, we need to compute the partial derivatives of $f$ with respect to each component of $\vec x$. The “input variables” to $f$ are $x_1$ and $x_2$, so we need to compute $\frac{\partial f}{\partial x_1}$ and $\frac{\partial f}{\partial x_2}$. (If you’d like, replace $x_1$ and $x_2$ with $x$ and $y$ to make the algebra a little cleaner, and then switch back to $x_1$ and $x_2$ at the end.)

$$f(\vec x) = x_1 e^{-x_1^2 - x_2^2}$$
$$\frac{\partial f}{\partial x_1} = \frac{\partial}{\partial x_1} \left( x_1 e^{-x_1^2 - x_2^2} \right) = \underbrace{1 \cdot e^{-x_1^2 - x_2^2} + x_1 \cdot e^{-x_1^2 - x_2^2} \cdot (-2x_1)}_{\text{product rule}} = (1 - 2x_1^2) e^{-x_1^2 - x_2^2}$$
$$\frac{\partial f}{\partial x_2} = \frac{\partial}{\partial x_2} \left( x_1 e^{-x_1^2 - x_2^2} \right) = \underbrace{x_1 \cdot e^{-x_1^2 - x_2^2} \cdot (-2x_2)}_{\text{chain rule}} = -2x_1 x_2 e^{-x_1^2 - x_2^2}$$

Putting these together, we have

$$\nabla f(\vec x) = \begin{bmatrix} (1 - 2x_1^2) e^{-x_1^2 - x_2^2} \\ -2x_1 x_2 e^{-x_1^2 - x_2^2} \end{bmatrix}$$

Remember, $\nabla f(\vec x)$ itself is a function. If we plug in a value of $\vec x$, we get a new vector back.

$$\nabla f\left(\begin{bmatrix} -1 \\ 0\end{bmatrix}\right) = \begin{bmatrix} (1 - 2(-1)^2) e^{-(-1)^2 - 0^2} \\ -2(-1)(0) e^{-(-1)^2 - 0^2} \end{bmatrix} = \begin{bmatrix} -1/e \\ 0 \end{bmatrix}$$
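Here’s a small numpy sketch of the gradient we just derived, evaluated at the same point:

```python
import numpy as np

def grad_f(x):
    """Gradient of f(x) = x1 * exp(-x1^2 - x2^2), using the formulas derived above."""
    x1, x2 = x
    e = np.exp(-x1**2 - x2**2)
    return np.array([(1 - 2*x1**2) * e, -2*x1*x2 * e])

print(grad_f(np.array([-1.0, 0.0])))   # [-1/e, 0] ≈ [-0.3679, 0.0]
```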

What does $\nabla f\left(\begin{bmatrix} -1 \\ 0\end{bmatrix}\right) = \begin{bmatrix} -1/e \\ 0 \end{bmatrix}$ really tell us? In order to visualize it, let me introduce another way of visualizing $f$, called a contour plot.

[Figure: the surface plot of $f$ (left) alongside its contour plot (right).]

I think of the contour plot as a bird’s-eye view 🦅 of $f$, as if you’re looking at the surface from above. Notice the correspondence between the colors in both graphs.

The circle-like traces in the contour plot are called level curves; they represent slices through the surface at a constant height. On the right, the circle labeled 0.1 represents the set of points where $f(x_1, x_2) = 0.1$.

Visualizing the fact that $\nabla f\left(\begin{bmatrix} -1 \\ 0\end{bmatrix}\right) = \begin{bmatrix} -1/e \\ 0 \end{bmatrix}$ is easier to do in the contour plot, since the contour plot is 2-dimensional, like the gradient vector is. Remember that red values are high and blue values are low.

[Figure: contour plot of $f$, with the gradient vector at $\begin{bmatrix} -1 \\ 0 \end{bmatrix}$ drawn in gold.]

At the point $\begin{bmatrix} -1 \\ 0 \end{bmatrix}$, which is at the tail of the vector drawn in gold, $f$ is near the global minimum, meaning there are lots of directions in which we can move to increase $f$. But the gradient vector at this point is $\begin{bmatrix} -1/e \\ 0 \end{bmatrix}$, which points in the direction of steepest ascent starting at $\begin{bmatrix} -1 \\ 0 \end{bmatrix}$. The gradient describes the “quickest way up”.
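If you’d like to recreate a picture like this yourself, here’s a rough matplotlib sketch (my own plotting choices, not the exact figure) of the contour plot with the gradient vector at $\begin{bmatrix} -1 \\ 0 \end{bmatrix}$ drawn on top:

```python
import numpy as np
import matplotlib.pyplot as plt

def f(x1, x2):
    return x1 * np.exp(-x1**2 - x2**2)

# Grid of (x1, x2) values for the contour plot.
x1s, x2s = np.meshgrid(np.linspace(-2, 2, 200), np.linspace(-2, 2, 200))

# Level curves of f; 'RdBu_r' colors low values blue and high values red.
cs = plt.contour(x1s, x2s, f(x1s, x2s), levels=15, cmap='RdBu_r')
plt.colorbar(cs, label='$f(x_1, x_2)$')

# The gradient at (-1, 0) is (-1/e, 0); draw it as an arrow with its tail at (-1, 0).
plt.quiver(-1, 0, -1/np.e, 0, angles='xy', scale_units='xy', scale=1, color='gold')

plt.xlabel('$x_1$')
plt.ylabel('$x_2$')
plt.title('Contour plot of $f$ with the gradient at $(-1, 0)$')
plt.show()
```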

As another example, consider the fact that $\nabla f\left(\begin{bmatrix} 1.25 \\ -0.5 \end{bmatrix}\right) \approx \begin{bmatrix} -0.347 \\ 0.204 \end{bmatrix}$.

[Figure: contour plot of $f$, with the gradient vector at $\begin{bmatrix} 1.25 \\ -0.5 \end{bmatrix}$ drawn in.]

Again, the gradient at $\begin{bmatrix} 1.25 \\ -0.5 \end{bmatrix}$ gives us the direction in which $f$ is increasing the quickest at that very point. If we move even a little bit in any direction (in the direction of the gradient or some other direction), the gradient will change.
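As a quick check on those numbers, here’s a sketch that approximates the gradient at $\begin{bmatrix} 1.25 \\ -0.5 \end{bmatrix}$ with centered finite differences, rather than the formulas we derived:

```python
import numpy as np

def f(x):
    return x[0] * np.exp(-x[0]**2 - x[1]**2)

def numerical_gradient(f, x, h=1e-6):
    """Approximate each partial derivative of f at x with a centered difference."""
    grad = np.zeros_like(x)
    for i in range(len(x)):
        step = np.zeros_like(x)
        step[i] = h
        grad[i] = (f(x + step) - f(x - step)) / (2 * h)
    return grad

print(numerical_gradient(f, np.array([1.25, -0.5])))   # approximately [-0.347, 0.204]
```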

As you might guess, to find the critical points of a function – that is, places where it is neither increasing nor decreasing – we need to find points where the gradient is zero. Hold that thought.
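For the example above, setting both components of $\nabla f(\vec x)$ to zero and using the fact that $e^{-x_1^2 - x_2^2}$ is never zero reduces to solving $1 - 2x_1^2 = 0$ and $-2x_1x_2 = 0$. Here’s a small sympy sketch of that step (my own follow-through on the example, not something worked out in the text):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

# e^{-x1^2 - x2^2} > 0 everywhere, so the gradient is the zero vector
# exactly when the remaining polynomial factors are both zero.
critical_points = sp.solve([1 - 2*x1**2, -2*x1*x2], [x1, x2])
print(critical_points)   # x1 = ±sqrt(2)/2, x2 = 0
```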

Next, we work through concrete examples of gradients and matrix-vector operations.