In Chapter 6.3, we explored the idea of projecting a vector onto the column space of a matrix. If $X$ has linearly independent columns, then the vector in $\text{colspace}(X)$ that is closest to $\vec{y}$ is the vector $X\vec{w}^*$, where $\vec{w}^*$ is the unique solution to the normal equation,

$$X^TX\vec{w}^* = X^T\vec{y}$$

When $X$ has linearly independent columns, $X^TX$ is invertible, so we can solve for $\vec{w}^*$ uniquely.
What if $X$'s Columns are Linearly Dependent?
In the case where $X$'s columns are linearly dependent, we can't invert $X^TX$ to solve for $\vec{w}^*$. This means that

$$X^TX\vec{w}^* = X^T\vec{y}$$

has infinitely many solutions. Let's give more thought to what these solutions actually are.
First, note that all of these solutions for $\vec{w}^*$ correspond to the same projection, $X\vec{w}^*$. The "best approximation" of $\vec{y}$ in $\text{colspace}(X)$ is always just one vector; if there are infinitely many $\vec{w}^*$'s, that just means there are infinitely many ways of describing that one best approximation. Remember, if vectors are linearly independent, then any of their linear combinations can only be expressed in one way; if they are linearly dependent, then their linear combinations can be expressed in infinitely many ways.

In other words, if $X$ has linearly dependent columns, then there are infinitely many $\vec{w}^*$'s that satisfy the normal equation, but they all correspond to the same projection in the figure below.
Let me drive this point home further. Let's suppose both $\vec{w}_1^*$ and $\vec{w}_2^*$ satisfy the normal equation:

$$X^TX\vec{w}_1^* = X^T\vec{y} \qquad \text{and} \qquad X^TX\vec{w}_2^* = X^T\vec{y}$$

Then,

$$X^TX\vec{w}_1^* - X^TX\vec{w}_2^* = \vec{0}$$

which means that

$$X^TX\left( \vec{w}_1^* - \vec{w}_2^* \right) = \vec{0}$$

i.e. the difference between the two vectors, $\vec{w}_1^* - \vec{w}_2^*$, is in $\text{nullspace}(X^TX)$. But, back in Chapter 5.3, we proved that $X$ and $X^TX$ have the same null space, meaning any vector that gets sent to $\vec{0}$ by $X^TX$ also gets sent to $\vec{0}$ by $X$, and vice versa.

So,

$$X\left( \vec{w}_1^* - \vec{w}_2^* \right) = \vec{0}$$

too, but that just means

$$X\vec{w}_1^* = X\vec{w}_2^*$$

meaning that even though $\vec{w}_1^*$ and $\vec{w}_2^*$ are different-looking coefficient vectors, they both still correspond to the same linear combination of $X$'s columns!
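To make the argument concrete, here's a quick numerical sketch. The matrix below is hypothetical (it is not the chapter's worked example, though like it, its first column is three times its second): two different solutions to the normal equation yield the exact same projection.

```python
import numpy as np

# Hypothetical matrix with linearly dependent columns: column 1 = 3 * column 2.
X = np.array([[3.0, 1.0, 0.0],
              [6.0, 2.0, 1.0],
              [0.0, 0.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])

# One solution to the normal equation, found via the pseudoinverse.
w1 = np.linalg.pinv(X.T @ X) @ X.T @ y

# A second solution: add a vector from nullspace(X).
# X @ [1, -3, 0] = (column 1) - 3 * (column 2) = 0.
w2 = w1 + 5 * np.array([1.0, -3.0, 0.0])

# Both satisfy the normal equation...
print(np.allclose(X.T @ X @ w1, X.T @ y))  # True
print(np.allclose(X.T @ X @ w2, X.T @ y))  # True

# ...and both describe the same projection of y onto colspace(X).
print(np.allclose(X @ w1, X @ w2))         # True
```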
Let's see how we can apply this to an example. Suppose $X$ is a matrix with three columns, the first of which is three times the second, and $\vec{y}$ is some target vector. This $X$ is an example of a matrix with linearly dependent columns, so there's no unique $\vec{w}^*$ that satisfies the normal equations.
Finding One Solution
One way to find a possible $\vec{w}^*$ is to solve the normal equations. $X^TX$ is not invertible, so we can't solve for $\vec{w}^*$ uniquely, but we can still try and find a solution.

Here's one approach: let's just toss out the linearly dependent columns of $X$ and solve for the coefficients using the remaining columns. Then, the full $\vec{w}^*$ can use the same coefficients for the linearly independent columns, but $0$s for the dependent ones. Removing the linearly dependent columns does not change $\text{colspace}(X)$ (i.e. the set of all linear combinations of $X$'s columns), so the projection is the same.

The easy solution is to keep columns 2 and 3, since their entries are smallest. So, for now, let's solve the normal equations for the smaller matrix whose columns are just columns 2 and 3 of $X$.

That smaller matrix has linearly independent columns, so its normal equations have a unique solution. I won't bore you with the calculations; you can verify them yourself.

Now, one possible $\vec{w}^*$ for the full $X$ keeps the same coefficients on columns 2 and 3 as in the smaller solution, but has a $0$ for the column we didn't use.
Finding All Solutions
As I mentioned above, if there are infinitely many solutions to the normal equation, then the difference between any two solutions is in $\text{nullspace}(X^TX)$, which is also $\text{nullspace}(X)$. Put another way, if $\vec{w}^*$ satisfies the normal equations, then so does $\vec{w}^* + \vec{n}$ for any $\vec{n} \in \text{nullspace}(X)$.

So, once we have one $\vec{w}^*$, to get the rest, we just add any vector in $\text{nullspace}(X)$ or $\text{nullspace}(X^TX)$ (since those are the same subspace).

What is $\text{nullspace}(X)$? It's the set of vectors $\vec{v}$ such that $X\vec{v} = \vec{0}$.
In our particular example, $X$ has three columns, but only two of them are linearly independent, so $\text{rank}(X) = 2$. Then, $\text{nullspace}(X)$ has a dimension of $3 - 2 = 1$ (by the rank-nullity theorem), so it's going to be the span of a single vector. All we need to do now is find one vector in $\text{nullspace}(X)$, and we will know that the null space is the set of scalar multiples of that vector.

Since column 1 is three times column 2, the vector $\begin{bmatrix} 1 \\ -3 \\ 0 \end{bmatrix}$ must be in $\text{nullspace}(X)$, because $X \begin{bmatrix} 1 \\ -3 \\ 0 \end{bmatrix}$ is column 1 minus three times column 2, which is $\vec{0}$.

So, since $\text{nullspace}(X) = \text{span}\left( \begin{bmatrix} 1 \\ -3 \\ 0 \end{bmatrix} \right)$, we know that the set of all possible $\vec{w}^*$'s is

$$\left\{ \vec{w}^* + a \begin{bmatrix} 1 \\ -3 \\ 0 \end{bmatrix} \;\middle|\; a \in \mathbb{R} \right\}$$

where $\vec{w}^*$ is the one solution we found above.
This is not a subspace, since it doesn’t contain the zero vector.
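Here's a sketch of this recipe in numpy, using a hypothetical $X$ whose first column is three times its second (the chapter's specific numbers aren't reproduced here): drop the dependent column, solve the smaller normal equations, pad with a $0$, then sweep out all the other solutions by adding multiples of the null-space vector.

```python
import numpy as np

# Hypothetical X: column 1 = 3 * column 2, columns 2 and 3 independent.
X = np.array([[3.0, 1.0, 0.0],
              [6.0, 2.0, 1.0],
              [0.0, 0.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])

# Solve the normal equations using only the independent columns (2 and 3).
X_sub = X[:, [1, 2]]
w_sub = np.linalg.solve(X_sub.T @ X_sub, X_sub.T @ y)

# One full solution: 0 for the dropped column, same coefficients elsewhere.
w_star = np.array([0.0, w_sub[0], w_sub[1]])

# All other solutions add a multiple of the null-space vector [1, -3, 0].
null_vec = np.array([1.0, -3.0, 0.0])
for a in [-2.0, 0.0, 7.5]:
    w = w_star + a * null_vec
    print(np.allclose(X.T @ X @ w, X.T @ y))  # True: satisfies the normal equation
    print(np.allclose(X @ w, X @ w_star))     # True: same projection every time
```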
There's another way to arrive at this set of possible $\vec{w}^*$'s: we can solve the normal equations directly. I wouldn't recommend this second approach, since it's much longer, but I'll include it here for completeness.
Writing out the normal equations $X^TX\vec{w}^* = X^T\vec{y}$ component-by-component gives us a system of three equations in the three unknown entries of $\vec{w}^*$.

The first and second equations are just scalar multiples of each other, so we can disregard one of them, and solve for a form where we can use one unknown as a parameter for the other two. To illustrate, let's treat the first entry of $\vec{w}^*$ as the parameter.

Substituting the parameter into the two remaining equations gives us two equations in the other two unknowns. These are now both the same equation; the first one is just 3 times the second. So, we can solve for the remaining unknowns in terms of the parameter, which gives us the complete solution: a line of $\vec{w}^*$'s, traced out as the parameter varies.

This is the exact same line as we found using the null space approach! The set of all possible $\vec{w}^*$'s is not a subspace, since it doesn't contain the zero vector.
The Projection Matrix
So far, we've established that the vector in $\text{colspace}(X)$ that is closest to $\vec{y}$ is the vector $X\vec{w}^*$, where $\vec{w}^*$ is the solution to the normal equations,

$$X^TX\vec{w}^* = X^T\vec{y}$$

If $X^TX$ is invertible, then $\vec{w}^*$ is the unique vector

$$\vec{w}^* = (X^TX)^{-1}X^T\vec{y}$$

meaning that the vector in $\text{colspace}(X)$ that is closest to $\vec{y}$ is

$$X\vec{w}^* = X(X^TX)^{-1}X^T\vec{y}$$

You'll notice that the above expression also looks like a linear transformation applied to $\vec{y}$, where $\vec{y}$ is being multiplied by the matrix

$$P = X(X^TX)^{-1}X^T$$

The matrix $P$ is called the projection matrix. In other classes, it is called the "hat matrix", because they might use $H$ instead of $P$ and $\hat{y}$ instead of $X\vec{w}^*$, and in that notation, $\hat{y} = H\vec{y}$, so $H$ puts a "hat" on $\vec{y}$. (I don't use hat notation in this class because drawing a hat on top of a vector is awkward. Doesn't $\hat{\vec{y}}$ look strange?)
So,

$$P\vec{y} = X(X^TX)^{-1}X^T\vec{y} = X\vec{w}^*$$

shows us that there are two ways to interpret the act of projecting $\vec{y}$ onto $\text{colspace}(X)$:

1. The resulting vector is some optimal linear combination, $X\vec{w}^*$, of $X$'s columns.
2. The resulting vector is the result of applying the linear transformation $P$ to $\vec{y}$.
Let's work out an example. Suppose

$$X = \begin{bmatrix} 3 & 0 \\ 0 & 154 \\ 6 & 0 \end{bmatrix}$$

$X$'s columns are linearly independent, so $X^TX$ is invertible, and

$$P = X(X^TX)^{-1}X^T$$

is well-defined.
```python
X = np.array([[3, 0],
              [0, 154],
              [6, 0]])
P = X @ np.linalg.inv(X.T @ X) @ X.T
P
```

```
array([[0.2, 0. , 0.4],
       [0. , 1. , 0. ],
       [0.4, 0. , 0.8]])
```

```python
P @ np.array([1, 2, 3])
```

```
array([1.4, 2. , 2.8])
```

$P$ contains the information we need to project onto $\text{colspace}(X)$. Each row of $P$ tells us the right mixture of $\vec{y}$'s components we need to construct the projection.
Notice that $P$'s second row is $\begin{bmatrix} 0 & 1 & 0 \end{bmatrix}$. This came from the fact that $X$'s first column has a second component of 0, while its second column has a non-zero second component but zeros in the other two components, meaning that we can scale $X$'s second column to exactly match $\vec{y}$'s second component. Change the 154 in $X$ to any other non-zero value, and $P$ won't change!
Additionally, if we consider some $\vec{y}$ that is already in $\text{colspace}(X)$, then multiplying it by $P$ doesn't change it! For example, if we set $\vec{y} = \begin{bmatrix} 3 \\ 154 \\ 6 \end{bmatrix}$ (the sum of $X$'s columns), then $P\vec{y} = \vec{y}$.

```python
P @ np.array([3, 154, 6])
```

```
array([  3., 154.,   6.])
```

Let's work through some examples that develop our intuition for $P$.
Example: Is $P$ Invertible?

Suppose $(X^TX)^{-1}$ exists, meaning $X^TX$ is invertible. Is $P = X(X^TX)^{-1}X^T$ invertible? If so, what is its inverse?
Example: Is $P$ Orthogonal?

Is $P$ orthogonal?
Solution
No. Orthogonal matrices have the property that $Q^TQ = I$, meaning that

$$Q^{-1} = Q^T$$

But, as we saw, $P$ is not invertible in general, so it can't satisfy this property. This tells us that $P$ does not perform a rotation; projections are not rotations. Rotations can be undone, but projections can't.
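We can sanity-check this numerically with the $3 \times 3$ projection matrix from the example above (re-created here so the snippet stands alone): its rank is only 2, so it can't be invertible, and it fails the orthogonality test.

```python
import numpy as np

# Re-create the example projection matrix from earlier in this section.
X = np.array([[3, 0],
              [0, 154],
              [6, 0]])
P = X @ np.linalg.inv(X.T @ X) @ X.T

print(np.linalg.matrix_rank(P))         # 2 -- less than 3, so P is not invertible
print(np.allclose(P.T @ P, np.eye(3)))  # False -- P is not orthogonal
```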
Example: Is $P$ Symmetric?

Is $P$ symmetric?
Solution
Yes. Symmetric matrices have the property that $A^T = A$. We can show that $P$ satisfies this property; to do so, we'll need to use the fact that $(AB)^T = B^TA^T$.

$$\begin{aligned} P^T &= \left( X(X^TX)^{-1}X^T \right)^T \\ &= X \left( (X^TX)^{-1} \right)^T X^T \\ &= X (X^TX)^{-1} X^T \\ &= P \end{aligned}$$

Going from the second to the third line, we used the fact that $X^TX$ is symmetric, and so is its inverse. Remember that $X^TX$ is a square matrix consisting of the dot products of the columns of $X$ with themselves.
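A quick numerical confirmation, re-creating the example $P$ from earlier in this section:

```python
import numpy as np

X = np.array([[3, 0],
              [0, 154],
              [6, 0]])
P = X @ np.linalg.inv(X.T @ X) @ X.T

print(np.allclose(P.T, P))  # True -- P equals its own transpose
```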
Example: Is $P$ Idempotent?

Recall, an idempotent matrix satisfies $A^2 = A$. Is $P$ idempotent?
Solution
Yes.

$$P^2 = X(X^TX)^{-1}\underbrace{X^TX(X^TX)^{-1}}_{=\,I}X^T = X(X^TX)^{-1}X^T = P$$

Intuitively, this means that $P(P\vec{y})$ is the same as $P\vec{y}$, meaning that once we've projected $\vec{y}$ onto $\text{colspace}(X)$, projecting its projection again onto $\text{colspace}(X)$ gives us back the same projection, since $P\vec{y}$ is already in $\text{colspace}(X)$.
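Re-creating the example $P$ from earlier, we can verify both the algebraic fact and its interpretation:

```python
import numpy as np

X = np.array([[3, 0],
              [0, 154],
              [6, 0]])
P = X @ np.linalg.inv(X.T @ X) @ X.T
y = np.array([1, 2, 3])

print(np.allclose(P @ P, P))            # True -- P is idempotent
print(np.allclose(P @ (P @ y), P @ y))  # True -- projecting a projection changes nothing
```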
Example: What is $PX$, and why?

What is $PX$? What does the result mean?
Solution
Interpret $X$ as a matrix made up of $\vec{x}_1$, $\vec{x}_2$, ..., $\vec{x}_d$ as its columns. Each $P\vec{x}_i$ is the projection of $\vec{x}_i$ onto $\text{colspace}(X)$, but since $\vec{x}_i$ is already in $\text{colspace}(X)$, projecting it again onto $\text{colspace}(X)$ gives us back the same $\vec{x}_i$. So, $PX$ should just be $X$ again.
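The example $P$ from earlier confirms this:

```python
import numpy as np

X = np.array([[3, 0],
              [0, 154],
              [6, 0]])
P = X @ np.linalg.inv(X.T @ X) @ X.T

# Each column of X is already in colspace(X), so P leaves it unchanged.
print(np.allclose(P @ X, X))  # True
```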
Example: Rotations, Reflections, and Projections

Suppose $A$ is an arbitrary matrix. Describe the conditions on $A$ that make the corresponding linear transformation a...
Rotation
Reflection
Projection
Solution
Rotation: $A$ should be orthogonal, meaning that

$$A^TA = I$$

All orthogonal matrices have a determinant of 1 or -1, since $A^TA = I$ implies that $\det(A)^2 = 1$ (remember that $\det(A^T) = \det(A)$). If we want $A$ to be a rotation rather than a reflection, we also need

$$\det(A) = 1$$

To summarize: a rotation matrix is orthogonal and has determinant 1. Implicit here is the fact that $A$ is square, meaning that it maps $\mathbb{R}^n$ to $\mathbb{R}^n$. It doesn't make sense to talk about a rotation that also changes the dimension of the space.
Reflection: The distinction between a reflection and a rotation is subtle in higher dimensions. We've mostly thought of "orthogonal" and "rotation" as being the same thing. But, for example,

$$\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$$

is orthogonal, but doesn't rotate vectors – it reflects them across the line $y = x$ in $\mathbb{R}^2$. The determinant of this matrix is -1.

In general, if $A$ is orthogonal and has a determinant of -1, then it either involves a reflection, or a reflection and a rotation.
Recall, the formula for a reflection matrix – as discussed in Example: Householder Reflection in Chapter 6.2 – is given by

$$A = I - 2\vec{u}\vec{u}^T$$

for some unit vector $\vec{u}$. This matrix reflects vectors across the hyperplane orthogonal to $\vec{u}$ (remember, in $\mathbb{R}^2$ a line is a hyperplane; in $\mathbb{R}^3$ a plane is a hyperplane). If $A$ is of that form, then it indeed satisfies the properties above: it is orthogonal and has determinant -1. (To show that its determinant is -1, we need knowledge of eigenvalues, which we'll introduce in Chapter 9.) It is furthermore symmetric, since

$$A^T = \left( I - 2\vec{u}\vec{u}^T \right)^T = I^T - 2\left( \vec{u}\vec{u}^T \right)^T = I - 2\vec{u}\vec{u}^T = A$$

To summarize: a reflection matrix is orthogonal, symmetric, and has determinant -1. But, just because a matrix is orthogonal and has determinant -1, it doesn't have to be a reflection matrix: it could represent a reflection followed by a rotation.
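Here's a numerical sketch of these reflection properties; the unit vector $\vec{u}$ below is an arbitrary choice for illustration:

```python
import numpy as np

# Pick any unit vector u.
u = np.array([1.0, 2.0, 2.0])
u = u / np.linalg.norm(u)

# Householder reflection across the hyperplane orthogonal to u.
A = np.eye(3) - 2 * np.outer(u, u)

print(np.allclose(A.T @ A, np.eye(3)))  # True -- orthogonal
print(np.allclose(A.T, A))              # True -- symmetric
print(round(np.linalg.det(A)))          # -1
```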
Projection: To reason about the properties of a projection matrix, let's think about the matrix $P$ from earlier in this section. Suppose we have a subspace for which the columns of $X$ are a basis (in other words, suppose $X$'s columns are linearly independent, and the subspace in question is $\text{colspace}(X)$). Then, the matrix that projects vectors onto that subspace is given by

$$P = X(X^TX)^{-1}X^T$$

As we saw above, $P$ is both idempotent, meaning $P^2 = P$, and symmetric, meaning $P^T = P$.

To summarize: a projection matrix is idempotent and symmetric.
There are matrices – like $\begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix}$ – that are idempotent but not symmetric, and you can think of them as representing non-orthogonal projections. This is not a kind of projection we've studied yet. Let's explore. Suppose $P^2 = P$ and $P$ projects vectors onto some subspace. The error vector is $\vec{e} = \vec{y} - P\vec{y}$. As we know, for orthogonal projections, $\vec{e}$ is orthogonal to anything in the column space of $P$, e.g. $P\vec{y}$. But it turns out the symmetry of $P$ is what guarantees that this happens. Let's take the dot product of $\vec{e} = \vec{y} - P\vec{y}$ with $P\vec{y}$:

$$(P\vec{y}) \cdot (\vec{y} - P\vec{y}) = (P\vec{y})^T(\vec{y} - P\vec{y}) = \vec{y}^TP^T\vec{y} - \vec{y}^TP^TP\vec{y}$$

In order for this dot product to be zero, we need $\vec{y}^TP^T\vec{y} = \vec{y}^TP^TP\vec{y}$. To make this happen, we need $P^T = P$, since this would make $P^TP = PP = P^2$, which would turn $\vec{y}^TP^TP\vec{y}$ into $\vec{y}^TP\vec{y}$; the last step uses $P^2 = P$, which is guaranteed by the idempotency of $P$.

What this all means is that if $P^T = P$, then $\vec{y} - P\vec{y}$ is guaranteed to be orthogonal to $P\vec{y}$. But if $P^T \neq P$, then this dot product is not necessarily zero, meaning that $\vec{y} - P\vec{y}$ is not necessarily orthogonal to $P\vec{y}$ in general.

So in short, $P$ being idempotent makes it some kind of projection, but the symmetry of $P$ is what makes it an orthogonal projection. For our purposes, we've only considered orthogonal projections, so the term "projection" without any other context means "orthogonal projection".
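To see the difference numerically, here's a small idempotent-but-not-symmetric matrix (chosen for illustration): applying it twice does nothing new, so it's a projection of some kind, but its error vector is not orthogonal to the result.

```python
import numpy as np

# Idempotent but not symmetric: an oblique (non-orthogonal) projection.
P = np.array([[1.0, 1.0],
              [0.0, 0.0]])
y = np.array([1.0, 1.0])

print(np.allclose(P @ P, P))  # True  -- idempotent
print(np.allclose(P.T, P))    # False -- not symmetric

e = y - P @ y                 # error vector
print(np.dot(P @ y, e))       # -2.0 -- nonzero, so e is NOT orthogonal to P @ y
```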
To summarize:
| Transformation | Properties |
|---|---|
| Rotation | Orthogonal ($A^TA = I$), determinant = 1 |
| Reflection | Orthogonal ($A^TA = I$), symmetric ($A^T = A$) if a strict reflection, determinant = -1 |
| Projection | Idempotent ($P^2 = P$), symmetric ($P^T = P$) |
Summary
Let's take a step back and walk through our logic from Chapter 6.3 and here in Chapter 6.4 once more, since it's that important.

Suppose $X$ is an $n \times d$ matrix and $\vec{y}$ is some vector in $\mathbb{R}^n$.
Orthogonal Projections
Our goal is to find the linear combination of $X$'s columns that is closest to $\vec{y}$.

This boils down to finding the vector $\vec{w}^*$ that minimizes $\lVert \vec{y} - X\vec{w} \rVert$.

The vector $\vec{w}^*$ that minimizes $\lVert \vec{y} - X\vec{w} \rVert$ makes the resulting error vector,

$$\vec{e} = \vec{y} - X\vec{w}^*$$

orthogonal to the columns of $X$.

The $\vec{w}^*$ that makes the error vector orthogonal to the columns of $X$ is the one that satisfies the normal equation,

$$X^TX\vec{w}^* = X^T\vec{y}$$

If $X^TX$ is invertible, which happens if and only if $X$'s columns are linearly independent, then $\vec{w}^*$ is the unique vector

$$\vec{w}^* = (X^TX)^{-1}X^T\vec{y}$$

Otherwise, there are infinitely many solutions to the normal equation. All of these infinitely many solutions correspond to the same projection, $X\vec{w}^*$. If $\vec{w}^*$ is one solution (which can be found by removing the linearly dependent columns of $X$), then all other solutions are of the form $\vec{w}^* + \vec{n}$, where $\vec{n}$ is any vector in $\text{nullspace}(X)$.
The Projection Matrix
Assuming $X$ has linearly independent columns, the projection matrix is

$$P = X(X^TX)^{-1}X^T$$

$P$ is defined such that $P\vec{y}$ is the vector in $\text{colspace}(X)$ that is closest to $\vec{y}$. $P$ is symmetric and idempotent, but not invertible nor orthogonal.
We’re now finally ready to head back to the land of machine learning.