The big theme in our course so far has been the three-step modeling recipe: choose a model, choose a loss function, and minimize empirical risk (i.e. average loss) to find the optimal model parameters.
In Chapter 2.3, we used calculus to find the slope and intercept that minimized mean squared error,
by computing $\frac{\partial R}{\partial w_0}$ (the partial derivative with respect to the intercept $w_0$) and $\frac{\partial R}{\partial w_1}$ (the partial derivative with respect to the slope $w_1$), setting both to zero, and solving the resulting system of equations.
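As a refresher, here’s a minimal numerical sketch of that result. The toy data is made up for illustration, and the names `w0` and `w1` (for the intercept and slope) are just this sketch’s choices; the closed-form expressions come from setting both partial derivatives to zero and solving.

```python
import numpy as np

# Made-up data, for illustration only.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Setting both partial derivatives of mean squared error to zero and solving
# yields the familiar closed-form slope and intercept.
w1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
w0 = y.mean() - w1 * x.mean()
print(w0, w1)  # the intercept and slope that minimize mean squared error
```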
Then, in Chapters 7.1 and 7.2, we focused on the multiple linear regression model and the squared loss function, which saw us minimize

$$R(\vec w) = \frac{1}{n} \lVert \vec y - X \vec w \rVert^2$$

where $X$ is the design matrix, $\vec y$ is the observation vector, and $\vec w$ is the parameter vector we’re trying to pick. In Chapter 6.3, we minimized $R(\vec w)$ by arguing that the optimal $\vec w^*$ had to create an error vector, $\vec e = \vec y - X \vec w^*$, that was orthogonal to the columns of $X$, which led us to the normal equations,

$$X^T X \vec w^* = X^T \vec y$$
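In code, solving the normal equations looks like the sketch below; the data is made up for illustration. We solve the linear system directly rather than inverting $X^T X$, which is both faster and more numerically stable.

```python
import numpy as np

# Made-up design matrix (with an intercept column) and observation vector.
X = np.column_stack([np.ones(5), np.array([1.0, 2.0, 3.0, 4.0, 5.0])])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Solve (X^T X) w = X^T y for the optimal parameter vector.
w_star = np.linalg.solve(X.T @ X, X.T @ y)
print(w_star)
```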
It turns out that there’s a way to use our calculus-based approach from Chapter 2.3 to minimize the more general version of $R(\vec w)$, for any number of features, that doesn’t involve computing each partial derivative separately. To see how this works, we need to define a new object, the gradient vector, which we’ll do here in Chapter 8.1. After we’re familiar with how the gradient vector works, we’ll use it to build a new approach to function minimization, one that works even when there isn’t a closed-form solution for the optimal parameters: that technique is called gradient descent, which we’ll see in Chapter 8.3.
## Domain and Codomain
As we saw in Chapter 6.2 when we first introduced the concept of the inverse of a matrix, the notation

$$f: \mathbb{R}^n \to \mathbb{R}^m$$

means that $f$ is a function whose inputs are vectors with $n$ components and whose outputs are vectors with $m$ components. $\mathbb{R}^n$ is the domain of the function, and $\mathbb{R}^m$ is the codomain. I’ve used $n$ and $m$ to match the notation we’ve used for matrices and linear transformations. In general, if $A$ is an $m \times n$ matrix, then any vector multiplied by $A$ (on the right) must be in $\mathbb{R}^n$, and the result will be in $\mathbb{R}^m$.
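In numpy terms, here’s a small sketch of this fact; the particular matrix is made up for illustration.

```python
import numpy as np

# A 3 x 2 matrix defines a function from R^2 (the domain) to R^3 (the codomain).
A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [4.0, -1.0]])

v = np.array([5.0, -2.0])  # a vector in R^2
result = A @ v             # a vector in R^3
print(result.shape)        # (3,)
```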
Given this framing, consider the following four types of functions.
| Type | Domain and Codomain | Examples |
|---|---|---|
| Scalar-to-scalar | $f: \mathbb{R} \to \mathbb{R}$ | $f(x) = x^2$ |
| Vector-to-scalar | $f: \mathbb{R}^n \to \mathbb{R}$ | $f(\vec x) = \lVert \vec x \rVert$ |
| Scalar-to-vector | $f: \mathbb{R} \to \mathbb{R}^n$ | $f(t) = \begin{bmatrix} \cos t \\ \sin t \end{bmatrix}$ |
| Vector-to-vector | $f: \mathbb{R}^n \to \mathbb{R}^m$ | $f(\vec x) = A \vec x$ |
The first two types of functions are “scalar-valued”, while the latter two are “vector-valued”. These are not the only types of functions that exist; for instance, the determinant, $f(A) = \det(A)$, is a matrix-to-scalar function.
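To make the taxonomy concrete, here are minimal Python sketches of the four types; each particular function is made up for illustration.

```python
import numpy as np

def scalar_to_scalar(x: float) -> float:
    return x ** 2                             # R -> R

def vector_to_scalar(x: np.ndarray) -> float:
    return float(np.linalg.norm(x))           # R^n -> R

def scalar_to_vector(t: float) -> np.ndarray:
    return np.array([np.cos(t), np.sin(t)])   # R -> R^2

def vector_to_vector(x: np.ndarray) -> np.ndarray:
    A = np.array([[1.0, 2.0], [3.0, 4.0]])
    return A @ x                              # R^2 -> R^2

print(vector_to_scalar(np.array([3.0, 4.0])))  # 5.0
```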
The type of function we’re most concerned with at the moment is the vector-to-scalar function, i.e. a function that takes in a vector (or equivalently, multiple scalar inputs) and outputs a single scalar. Our empirical risk,

$$R(\vec w) = \frac{1}{n} \lVert \vec y - X \vec w \rVert^2$$

is one such function, and it’s the focus of this section.
## Rates of Change
Let’s think from the perspective of rates of change, since ultimately what we’re building towards is a technique for minimizing functions. We’re most familiar with the concept of rates of change for scalar-to-scalar functions.
If $f$ is a scalar-to-scalar function, then its derivative, $f'$, is itself a scalar-to-scalar function, which describes how quickly $f$ is changing at any point in the domain of $f$. At a particular input $x = a$, for instance, the instantaneous rate of change might be

$$f'(a) \approx -8.06$$

meaning that at $x = a$, $f$ is decreasing at a rate of (approximately) 8.06 per unit change in $x$. Perhaps a more intuitive way of thinking about the instantaneous rate of change is to think of it as the slope of the tangent line to $f$ at $x = a$.

The steeper the slope, the faster $f$ is changing at that point; the sign of the slope tells us whether $f$ is increasing or decreasing at that point.
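To make this concrete, here’s a sketch that estimates a derivative numerically. The function $f(x) = x^3 - 9x$ is made up for illustration; it happens to have a slope of about $-8.06$ at $x = 0.56$.

```python
def f(x):
    # A made-up scalar-to-scalar function, for illustration.
    return x ** 3 - 9 * x

def derivative_estimate(f, a, h=1e-6):
    # Central difference: the slope of a tiny secant line around x = a,
    # which approximates the slope of the tangent line at x = a.
    return (f(a + h) - f(a - h)) / (2 * h)

print(derivative_estimate(f, 0.56))  # ~ -8.06, matching 3(0.56)^2 - 9
```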
In Chapter 2.3, we saw how to compute derivatives of functions that take in multiple scalar inputs, like

$$f(x, y, z) = x^2 + xy + z^3$$

In the language of this section, we’d call such a function a vector-to-scalar function, and might use the notation

$$f: \mathbb{R}^3 \to \mathbb{R}$$

This function has three partial derivatives, each of which describes the instantaneous rate of change of $f$ with respect to one of its inputs, while holding the other two inputs constant. There’s a good animation of what it means to hold an input constant in Chapter 2.2 that is worth revisiting.

Here,

$$\frac{\partial f}{\partial x} = 2x + y \qquad \frac{\partial f}{\partial y} = x \qquad \frac{\partial f}{\partial z} = 3z^2$$
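Here’s a quick numerical sanity check of those partial derivatives, using the example function above (which was itself chosen just for illustration). Nudging one input while freezing the others is exactly what a partial derivative measures.

```python
def f(x, y, z):
    return x ** 2 + x * y + z ** 3

def partial_estimate(f, args, i, h=1e-6):
    # Nudge only the i-th input, holding the other inputs constant.
    bumped = list(args)
    bumped[i] += h
    return (f(*bumped) - f(*args)) / h

point = (1.0, 2.0, 3.0)
print(partial_estimate(f, point, 0))  # ~ 2x + y = 4
print(partial_estimate(f, point, 1))  # ~ x     = 1
print(partial_estimate(f, point, 2))  # ~ 3z^2  = 27
```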
The big idea of this section, the gradient vector, packages all of these partial derivatives into a single vector. This will allow us to think about the direction in which $f$ is changing, rather than just looking at its rates of change in each dimension independently.
## The Gradient Vector
For a vector-to-scalar function $f: \mathbb{R}^n \to \mathbb{R}$, the gradient of $f$, written $\nabla f(\vec x)$, is the vector of all $n$ of its partial derivatives:

$$\nabla f(\vec x) = \begin{bmatrix} \frac{\partial f}{\partial x_1} \\ \frac{\partial f}{\partial x_2} \\ \vdots \\ \frac{\partial f}{\partial x_n} \end{bmatrix}$$

Let’s start with a straightforward example where the partial derivatives are easy to compute. Let

$$f(\vec x) = x_1^2 + x_2^2$$

Then

$$\frac{\partial f}{\partial x_1} = 2x_1 \qquad \frac{\partial f}{\partial x_2} = 2x_2$$

so

$$\nabla f(\vec x) = \begin{bmatrix} 2x_1 \\ 2x_2 \end{bmatrix}$$

If we evaluate the gradient at $\vec x = \begin{bmatrix} 1 \\ 2 \end{bmatrix}$, we get

$$\nabla f \left( \begin{bmatrix} 1 \\ 2 \end{bmatrix} \right) = \begin{bmatrix} 2 \\ 4 \end{bmatrix}$$

What does the fact that $\nabla f \left( \begin{bmatrix} 1 \\ 2 \end{bmatrix} \right) = \begin{bmatrix} 2 \\ 4 \end{bmatrix}$ tell us? It tells us that the direction of steepest ascent of $f$ at $\begin{bmatrix} 1 \\ 2 \end{bmatrix}$ is $\begin{bmatrix} 2 \\ 4 \end{bmatrix}$. To put this into context, let’s consider another example.
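Here’s a numerical sketch of that claim: among a few unit-length directions (chosen arbitrarily), moving from $\begin{bmatrix} 1 \\ 2 \end{bmatrix}$ in the direction of the gradient increases $f$ the fastest.

```python
import numpy as np

def f(x):
    return x[0] ** 2 + x[1] ** 2

def grad_f(x):
    return np.array([2 * x[0], 2 * x[1]])

point = np.array([1.0, 2.0])
g = grad_f(point)
print(g)  # [2. 4.]

# Compare the rate of increase of f along a few unit directions.
h = 1e-4
for d in [g / np.linalg.norm(g), np.array([1.0, 0.0]), np.array([0.0, 1.0])]:
    print(d, (f(point + h * d) - f(point)) / h)  # largest along the gradient
```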
## Visualizing the Gradient Vector
Let’s look at another example and use it to understand what the gradient of a function tells us visually.
Suppose $\vec x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \in \mathbb{R}^2$, and let

$$f(\vec x) = x_1 e^{-(x_1^2 + x_2^2)}$$

To find $\nabla f(\vec x)$, we need to compute the partial derivatives of $f$ with respect to each component of $\vec x$. The “input variables” to $f$ are $x_1$ and $x_2$, so we need to compute $\frac{\partial f}{\partial x_1}$ and $\frac{\partial f}{\partial x_2}$, but if you’d like, replace $x_1$ and $x_2$ with $x$ and $y$ if it makes the algebra a little cleaner, and then replace $x$ and $y$ with $x_1$ and $x_2$ at the end. Using the product rule and the chain rule,

$$\frac{\partial f}{\partial x_1} = (1 - 2x_1^2)\,e^{-(x_1^2 + x_2^2)} \qquad \frac{\partial f}{\partial x_2} = -2 x_1 x_2\, e^{-(x_1^2 + x_2^2)}$$

Putting these together, we have

$$\nabla f(\vec x) = \begin{bmatrix} (1 - 2x_1^2)\,e^{-(x_1^2 + x_2^2)} \\ -2 x_1 x_2\, e^{-(x_1^2 + x_2^2)} \end{bmatrix}$$
Remember, $\nabla f$ itself is a function. If we plug in a value of $\vec x$, we get a new vector back.
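In code, $\nabla f$ is literally a function that maps a vector to a vector. Here’s a sketch, with a finite-difference check of the algebra above at an arbitrary test point.

```python
import numpy as np

def f(x):
    return x[0] * np.exp(-(x[0] ** 2 + x[1] ** 2))

def grad_f(x):
    e = np.exp(-(x[0] ** 2 + x[1] ** 2))
    return np.array([(1 - 2 * x[0] ** 2) * e,
                     -2 * x[0] * x[1] * e])

x = np.array([0.3, -0.8])  # an arbitrary test point
h = 1e-6
estimate = np.array([(f(x + h * np.eye(2)[i]) - f(x)) / h for i in range(2)])
print(grad_f(x))   # hand-computed gradient...
print(estimate)    # ...should (approximately) match the numerical estimate
```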
What does $\nabla f(\vec x)$ really tell us? To see, let me introduce another way of visualizing $f$, called a contour plot.

I think of the contour plot as a bird’s-eye view of the surface plot of $f$, as if you’re looking at the surface from above. Notice the correspondence between the colors in both graphs.

The circle-like traces in the contour plot are called level curves; they represent slices through the surface at a constant height. On the right, the circle labeled 0.1 represents the set of points where $f(\vec x) = 0.1$.
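If you’d like to reproduce a plot like this, here’s a minimal matplotlib sketch; the grid range and contour levels are arbitrary choices.

```python
import numpy as np
import matplotlib.pyplot as plt

x1, x2 = np.meshgrid(np.linspace(-2, 2, 200), np.linspace(-2, 2, 200))
z = x1 * np.exp(-(x1 ** 2 + x2 ** 2))

# Each curve traces the set of points where f equals a constant height.
curves = plt.contour(x1, x2, z, levels=np.arange(-0.4, 0.5, 0.1), cmap="coolwarm")
plt.clabel(curves, inline=True)   # label each level curve with its height
plt.gca().set_aspect("equal")
plt.show()
```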
Visualizing the fact that

$$\nabla f \left( \begin{bmatrix} -1 \\ 0 \end{bmatrix} \right) = \begin{bmatrix} -e^{-1} \\ 0 \end{bmatrix} \approx \begin{bmatrix} -0.37 \\ 0 \end{bmatrix}$$

is easier to do in the contour plot, since the contour plot is 2-dimensional, like the gradient vector is. Remember that red values are high and blue values are low.

At the point $\begin{bmatrix} -1 \\ 0 \end{bmatrix}$, which is at the tail of the vector drawn in gold, $f$ is near the global minimum, meaning there are lots of directions in which we can move to increase $f$. But, the gradient vector at this point is (approximately) $\begin{bmatrix} -0.37 \\ 0 \end{bmatrix}$, which points in the direction of steepest ascent starting at $\begin{bmatrix} -1 \\ 0 \end{bmatrix}$. The gradient describes the “quickest way up”.
As another example, consider the fact that

$$\nabla f \left( \begin{bmatrix} 0 \\ 1 \end{bmatrix} \right) = \begin{bmatrix} e^{-1} \\ 0 \end{bmatrix} \approx \begin{bmatrix} 0.37 \\ 0 \end{bmatrix}$$

Again, the gradient at $\begin{bmatrix} 0 \\ 1 \end{bmatrix}$ gives us the direction in which $f$ is increasing the quickest at that very point. If we move even a little bit in any direction (in the direction of the gradient or some other direction), the gradient will change.
One way to see this more globally is to draw many gradient vectors at once, forming a gradient field.
Each arrow in this gradient field shows the gradient vector at a different point, and the arrow lengths are proportional to the magnitude of the gradient there. Longer arrows indicate places where $f$ increases more steeply, while shorter arrows indicate places where the rate of increase is smaller. Note that the arrows don’t necessarily all point to the “top” of the function, located at $\begin{bmatrix} \frac{1}{\sqrt{2}} \\ 0 \end{bmatrix}$ – instead, they point in the direction of steepest ascent at each point.
At the two critical points $\begin{bmatrix} \frac{1}{\sqrt{2}} \\ 0 \end{bmatrix}$ and $\begin{bmatrix} -\frac{1}{\sqrt{2}} \\ 0 \end{bmatrix}$, the gradient really is $\vec 0$, so those locations are shown as points instead of arrows.
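Here’s a matplotlib sketch of a gradient field like this one; the grid spacing is an arbitrary choice.

```python
import numpy as np
import matplotlib.pyplot as plt

# Evaluate both partial derivatives on a coarse grid of points.
x1, x2 = np.meshgrid(np.linspace(-2, 2, 20), np.linspace(-2, 2, 20))
e = np.exp(-(x1 ** 2 + x2 ** 2))
g1 = (1 - 2 * x1 ** 2) * e  # partial derivative with respect to x1
g2 = -2 * x1 * x2 * e       # partial derivative with respect to x2

# Each arrow points in the direction of steepest ascent at its base point.
plt.quiver(x1, x2, g1, g2)
plt.gca().set_aspect("equal")
plt.show()
```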
As you might guess, to find the critical points of a function - that is, places where it is neither increasing nor decreasing - we need to find points where the gradient is zero. Hold that thought.
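As a quick check on our example, the gradient really does evaluate to the zero vector at both critical points.

```python
import numpy as np

def grad_f(x):
    e = np.exp(-(x[0] ** 2 + x[1] ** 2))
    return np.array([(1 - 2 * x[0] ** 2) * e,
                     -2 * x[0] * x[1] * e])

# Both outputs are the zero vector.
print(grad_f(np.array([1 / np.sqrt(2), 0.0])))
print(grad_f(np.array([-1 / np.sqrt(2), 0.0])))
```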
In our course, most of the functions we’ll work with won’t be defined in terms of the individual components of the input vector $\vec x$, like in the case of $f(\vec x) = x_1 e^{-(x_1^2 + x_2^2)}$. Instead, they’ll be defined in terms of matrix-vector operations, like $R(\vec w) = \frac{1}{n} \lVert \vec y - X \vec w \rVert^2$. Chapter 8.2 explores how to compute gradients of functions like this.
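In code, $R$ is just another vector-to-scalar function, as the sketch below shows with made-up data; what Chapter 8.2 adds is a way to differentiate it without ever unpacking $\vec w$ into individual components.

```python
import numpy as np

# Made-up design matrix and observation vector, for illustration.
X = np.column_stack([np.ones(5), np.arange(1.0, 6.0)])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

def R(w):
    # Mean squared error: takes in a parameter vector, returns a scalar.
    return np.mean((y - X @ w) ** 2)

print(R(np.array([0.0, 2.0])))  # the risk of one particular parameter choice
```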