In Chapter 4, we focused on understanding the subspace of $\mathbb{R}^n$ spanned by a set of vectors. I've been alluding to the fact that to solve the regression problem in higher dimensions, we'd need to be able to project a vector onto the span of a set of vectors. (To be clear, we haven't done this yet.) Matrices will allow us to achieve this goal elegantly. It will take a few sections to get there:
In Chapter 5.1, we’ll define matrices and learn to perform basic operations on them.
In Chapter 5.3, we’ll define the column space, row space, null space, and rank of a matrix, while introducing the CR decomposition along the way.
In Chapter 6.1, we’ll understand matrix-vector multiplication as a transformation of the vector, and in Chapter 6.2 we’ll learn how to invert a matrix and compute its determinant.
In Chapter 6.3, we’ll learn to project a vector onto the span of the columns of a matrix, closing the loop with Chapter 3.4.
Just remember that all of this is for machine learning.
For now, let's consider the following three vectors in $\mathbb{R}^4$:

$$\vec{v}_1 = \begin{bmatrix} 3 \\ 2 \\ 0 \\ 2 \end{bmatrix}, \qquad \vec{v}_2 = \begin{bmatrix} 1 \\ 1 \\ -1 \\ -2 \end{bmatrix}, \qquad \vec{v}_3 = \begin{bmatrix} 4 \\ 9 \\ 0 \\ 0 \end{bmatrix}$$
If we stack these vectors horizontally, we produce a matrix, which we'll call $A$:

$$A = \begin{bmatrix} \vert & \vert & \vert \\ \vec{v}_1 & \vec{v}_2 & \vec{v}_3 \\ \vert & \vert & \vert \end{bmatrix} = \begin{bmatrix} 3 & 1 & 4 \\ 2 & 1 & 9 \\ 0 & -1 & 0 \\ 2 & -2 & 0 \end{bmatrix}$$
$A$ is a matrix with 4 rows and 3 columns, so we might say that $A$ is a $4 \times 3$ matrix, or that $A \in \mathbb{R}^{4 \times 3}$.
It's common to use the notation $A_{ij}$ to denote the entry in the $i$-th row and $j$-th column of $A$, e.g. $A_{23} = 9$.
In numpy, you can access the entry in the 2nd row and 3rd column of the 2D matrix `A` using `A[1, 2]`, once we account for the fact that numpy uses 0-based indexing.
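For instance, here's a small numpy cell to experiment with (a minimal sketch; the array below is the same $A$ from above):

```python
import numpy as np

# The same 4x3 matrix A from above.
A = np.array([
    [3, 1, 4],
    [2, 1, 9],
    [0, -1, 0],
    [2, -2, 0]
])

# numpy uses 0-based indexing, so the entry in row 2, column 3
# (A_23 = 9 in math notation) lives at index [1, 2].
print(A[1, 2])   # 9
print(A.shape)   # (4, 3)
```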
Activity 1
In the cell above, try and find the bottom-right entry of A, first using positive indexing and then using negative indexing.
I've introduced matrices in a very particular way, because I'd like for you to think of them as a collection of vectors, sometimes called column vectors. We'll come back to this in just a moment. You can also think of a matrix as a collection of row vectors; the example $A$ defined above consists of four row vectors in $\mathbb{R}^3$ stacked on top of one another.
When we introduced vectors, $n$ was typically the placeholder we'd use for the number of components of a vector. With matrices, I like using $n$ for the number of rows, and $d$ for the number of columns. This means that, in general, a matrix is of shape $n \times d$ and is a member of the set $\mathbb{R}^{n \times d}$. This makes clear that when we've collected data, we store each of our $n$ observations in a row of our matrix. Just be aware that using $n \times d$ to denote the shape of a matrix is a little unorthodox; most other textbooks use $m \times n$. Of course, this is all arbitrary.
As we'll see in the coming sections, square matrices are the most flexible – there are several properties and operations that are only defined for them. Unfortunately, not every matrix is square. Remember, matrices are used for storing data, and the number of observations, $n$, and the number of features, $d$, don't need to be the same. (In practice, we'll often have very tall matrices, i.e. $n \gg d$, but this is not always the case.)
Great – we know how to add two matrices, and how to multiply a matrix by a scalar. The natural next step is to figure out how – and why – to multiply two matrices together.
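Before we do, here's a quick numpy refresher on those two operations (a minimal sketch; the matrices here are small ones of my choosing):

```python
import numpy as np

X = np.array([[3, 1], [2, 1]])
Y = np.array([[1, 0], [0, 1]])

# Matrix addition is elementwise, and requires both matrices
# to have the same shape.
print(X + Y)   # [[4 1]
               #  [2 2]]

# Scalar multiplication scales every entry.
print(2 * X)   # [[6 2]
               #  [4 2]]
```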
First, a definition.
Let’s use the Golden Rule. First – as the title of this subsection suggests – we’ll start by computing the product of a matrix and a vector.
Let's suppose $A$ is the same $4 \times 3$ matrix we've become familiar with:

$$A = \begin{bmatrix} 3 & 1 & 4 \\ 2 & 1 & 9 \\ 0 & -1 & 0 \\ 2 & -2 & 0 \end{bmatrix}$$
And let's suppose $\vec{x}$ is some vector. Note that we can think of an $n$-dimensional vector as a matrix with $n$ rows and 1 column. In order for the product $A\vec{x}$ to be valid, $\vec{x}$ must have 3 elements in it, by the Golden Rule above. To make the example concrete, let's suppose:

$$\vec{x} = \begin{bmatrix} 1 \\ 0 \\ 3 \end{bmatrix}$$
How do we multiply $A\vec{x}$? The following key definition will help us.
Let me say a little more about the dimensions of the output. Below, I've written the dimensions of $A$ ($4 \times \mathbf{3}$) and $\vec{x}$ ($\mathbf{3} \times 1$) next to each other. By the Golden Rule, the inner dimensions, both of which are bolded, must be equal in order for the multiplication to be valid. The dimensions of the output come from the outer dimensions, which here are $4 \times 1$.

$$\underbrace{4 \times \mathbf{3}}_{A} \quad \underbrace{\mathbf{3} \times 1}_{\vec{x}} = \underbrace{4 \times 1}_{\text{output}}$$
So, the result of multiplying $A\vec{x}$ will be a $4 \times 1$ matrix – in other words, a vector in $\mathbb{R}^4$. Indeed, multiplying a matrix by a vector always results in another vector, and this act of multiplication is often thought of as transforming the vector from $\mathbb{R}^d$ to $\mathbb{R}^n$.
So, how do we find those 4 components? As mentioned earlier, we compute each component by taking the dot product of a row in A with x.
$$A = \begin{bmatrix} 3 & 1 & 4 \\ 2 & 1 & 9 \\ 0 & -1 & 0 \\ 2 & -2 & 0 \end{bmatrix}, \qquad \vec{x} = \begin{bmatrix} 1 \\ 0 \\ 3 \end{bmatrix}$$
Let's start with the top row of $A$, $\begin{bmatrix} 3 \\ 1 \\ 4 \end{bmatrix}$. The dot product of two vectors is only defined if they have equal lengths. This is why we've instituted the Golden Rule! The Golden Rule tells us we can only multiply $A$ and $\vec{x}$ if the number of columns in $A$ is the same as the number of components in $\vec{x}$, which is true here (both of those numbers are 3).
Then, the dot product of the first row of $A$ with $\vec{x}$ is:

$$\begin{bmatrix} 3 \\ 1 \\ 4 \end{bmatrix} \cdot \begin{bmatrix} 1 \\ 0 \\ 3 \end{bmatrix} = 3 \cdot 1 + 1 \cdot 0 + 4 \cdot 3 = 15$$
Nice! We’re a quarter of the way there. Now, we just need to compute the remaining three dot products:
- The dot product of the second row of $A$ with $\vec{x}$ is $\begin{bmatrix} 2 \\ 1 \\ 9 \end{bmatrix} \cdot \begin{bmatrix} 1 \\ 0 \\ 3 \end{bmatrix} = 29$.
- The dot product of the third row of $A$ with $\vec{x}$ is $\begin{bmatrix} 0 \\ -1 \\ 0 \end{bmatrix} \cdot \begin{bmatrix} 1 \\ 0 \\ 3 \end{bmatrix} = 0$.
- And finally, the dot product of the fourth row of $A$ with $\vec{x}$ is $\begin{bmatrix} 2 \\ -2 \\ 0 \end{bmatrix} \cdot \begin{bmatrix} 1 \\ 0 \\ 3 \end{bmatrix} = 2$.
The result of our matrix-vector multiplication, then, is the result of stacking all 4 dot products together into the vector $\begin{bmatrix} 15 \\ 29 \\ 0 \\ 2 \end{bmatrix}$. To summarize:

$$A\vec{x} = \begin{bmatrix} 3 & 1 & 4 \\ 2 & 1 & 9 \\ 0 & -1 & 0 \\ 2 & -2 & 0 \end{bmatrix} \begin{bmatrix} 1 \\ 0 \\ 3 \end{bmatrix} = \begin{bmatrix} 15 \\ 29 \\ 0 \\ 2 \end{bmatrix}$$
Above, you'll notice that $\vec{x}$ and the third row vector, $\begin{bmatrix} 0 \\ -1 \\ 0 \end{bmatrix}$, are orthogonal (their dot product is 0), which is why the third component of $A\vec{x} = \begin{bmatrix} 15 \\ 29 \\ 0 \\ 2 \end{bmatrix}$ is 0.
Activity 2
We’ll try not to bore you with mundane calculations in the future, but it’s important to perform matrix-vector multiplication by hand a few times to understand how it works.
In each part, perform the matrix-vector multiplication by hand or state that it cannot be done.
(While we haven’t yet looked at how to compute the product of two matrices, you can still answer this just using what you know about matrix-vector multiplication.)
In the cell below, use numpy to verify your answers. You'll need to define the matrices and vectors as numpy arrays, and use the `@` operator to perform the matrix-vector multiplication.
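As a template for the cell below, here's the worked example from this section verified in numpy (a minimal sketch; swap in the activity's matrices and vectors):

```python
import numpy as np

A = np.array([
    [3, 1, 4],
    [2, 1, 9],
    [0, -1, 0],
    [2, -2, 0]
])
x = np.array([1, 0, 3])

# The @ operator performs matrix-vector multiplication.
print(A @ x)   # [15 29  0  2]
```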
We’ve described matrix-vector multiplication as the result of taking the dot product of each row of A with x, and indeed this is the easiest way to actually compute the output. But, there’s another more important interpretation. In the above dot products, you may have noticed:
Entries in the first column of A (3, 2, 0, and 2) were always multiplied by the first element of x (1).
Entries in the second column of A (1, 1, -1, and -2) were always multiplied by the second element of x (0).
Entries in the third column of A (4, 9, 0, and 0) were always multiplied by the third element of x (3).
In other words:
$$A\vec{x} = \begin{bmatrix} 3 & 1 & 4 \\ 2 & 1 & 9 \\ 0 & -1 & 0 \\ 2 & -2 & 0 \end{bmatrix} \begin{bmatrix} 1 \\ 0 \\ 3 \end{bmatrix} = \underbrace{1 \begin{bmatrix} 3 \\ 2 \\ 0 \\ 2 \end{bmatrix} + 0 \begin{bmatrix} 1 \\ 1 \\ -1 \\ -2 \end{bmatrix} + 3 \begin{bmatrix} 4 \\ 9 \\ 0 \\ 0 \end{bmatrix}}_{\text{linear combination of columns of } A} = \begin{bmatrix} 15 \\ 29 \\ 0 \\ 2 \end{bmatrix}$$
At the start of this section, we defined $A$ by stacking the vectors $\vec{v}_1$, $\vec{v}_2$, and $\vec{v}_3$ side-by-side, and I told you to think of a matrix as a collection of column vectors. The above result is precisely why – when we multiply $A$ by $\vec{x}$, we're computing a linear combination of the columns of $A$, where the weights are the components of $\vec{x}$!
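You can verify this interpretation directly in numpy, where `A[:, j]` extracts column `j` (a minimal sketch, reusing the $A$ and $\vec{x}$ from this section):

```python
import numpy as np

A = np.array([
    [3, 1, 4],
    [2, 1, 9],
    [0, -1, 0],
    [2, -2, 0]
])
x = np.array([1, 0, 3])

# A @ x equals the linear combination of A's columns,
# weighted by the components of x. A[:, j] is column j (0-indexed).
combo = x[0] * A[:, 0] + x[1] * A[:, 1] + x[2] * A[:, 2]
print(combo)                          # [15 29  0  2]
print(np.array_equal(combo, A @ x))  # True
```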
Since $A\vec{x}$ produces a linear combination of the columns of $A$, a natural question to ask at this point is whether the columns of $A$ are all linearly independent. $A$ only has 3 columns, each of which is in $\mathbb{R}^4$, so while they may or may not be linearly independent (are they?), we know they cannot span all of $\mathbb{R}^4$, as we'd need at least 4 vectors to reach every element of $\mathbb{R}^4$.
This is the type of thinking we’ll return to in Chapter 5.3. This will lead us to define the rank of a matrix, perhaps the single most important number associated with a matrix.
Activity 3
Consider the matrix $M$ defined below.

$$M = \begin{bmatrix} 2 & -1 & 3 & 0 & 4 \\ 1 & 5 & -2 & 1 & 0 \end{bmatrix}$$
In each of the following parts, write out $\vec{u}$ concretely, compute $M\vec{u}$, and explain the result in English.
A vector whose second component is 1, and whose other components are 0.
A vector containing all 1s.
A vector containing all $\frac{1}{5}$s.
A vector whose components sum to 1, whose first component is $\frac{3}{5}$, and whose other components are all equal to one another.
Matrix-matrix multiplication – or just “matrix multiplication” – is a generalization of matrix-vector multiplication. Let’s present matrix multiplication in its most general terms.
Note that if $p = 1$, this reduces to the matrix-vector multiplication case from before. In that case, the only possible value of $j$ is 1, since the output only has 1 column, and the element in row $i$ of the output vector is the dot product of row $i$ in $A$ and the vector $B$ (which we earlier referred to as $\vec{x}$ in the less general case).
For a concrete example, suppose $A$ and $B$ are defined below:

$$A = \begin{bmatrix} 3 & 1 & 4 \\ 2 & 1 & 9 \\ 0 & -1 & 0 \\ 2 & -2 & 0 \end{bmatrix}, \qquad B = \begin{bmatrix} 1 & 2 \\ 0 & 7 \\ 3 & 2 \end{bmatrix}$$
The number of columns of $A$ must equal the number of rows of $B$ in order for the product $AB$ to be defined, as the Golden Rule tells us. That is fortunately the case here. Since $A$ has shape $4 \times 3$ and $B$ has shape $3 \times 2$, the output matrix will have shape $4 \times 2$. Each of those $4 \cdot 2 = 8$ elements will be the dot product of a row in $A$ with a column in $B$:

$$AB = \begin{bmatrix} 15 & 21 \\ 29 & 29 \\ 0 & -7 \\ 2 & -10 \end{bmatrix}$$
Let's see if we can audit where these numbers came from. Consider $(AB)_{32}$, the element in row 3 and column 2 of the output. It should have come from the dot product of row 3 of $A$ and column 2 of $B$.
And indeed, $-7$ is the dot product of $\begin{bmatrix} 0 \\ -1 \\ 0 \end{bmatrix}$ and $\begin{bmatrix} 2 \\ 7 \\ 2 \end{bmatrix}$.
You should notice that many of the numbers in the output $AB$ look familiar. That's because we used the same $A$ as we did earlier in the section, and the first column of $B$ is just $\vec{x}$ from the matrix-vector example. So, the first column of $AB$ is the vector $A\vec{x} = \begin{bmatrix} 15 \\ 29 \\ 0 \\ 2 \end{bmatrix}$ that we computed earlier. The difference now is that the output $AB$ isn't just a single vector, but a matrix with 2 columns. The second column, $\begin{bmatrix} 21 \\ 29 \\ -7 \\ -10 \end{bmatrix}$, comes from multiplying $A$ by the second column of $B$, $\begin{bmatrix} 2 \\ 7 \\ 2 \end{bmatrix}$.
Note that as we add columns to $B$, we add columns to the output. If $B$ had 10 columns, then $AB$ would have 10 columns, too, without $A$ needing to change. As long as the Golden Rule – that the number of columns in $A$ equals the number of rows in $B$ – holds, the product $AB$ can be computed, and it has shape (number of rows in $A$) $\times$ (number of columns in $B$).
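Here's the example above in numpy, confirming the shape of the output (a minimal sketch):

```python
import numpy as np

A = np.array([
    [3, 1, 4],
    [2, 1, 9],
    [0, -1, 0],
    [2, -2, 0]
])                                      # shape (4, 3)
B = np.array([[1, 2], [0, 7], [3, 2]])  # shape (3, 2)

AB = A @ B
print(AB.shape)  # (4, 2) -- (rows of A) x (columns of B)
print(AB)
# [[ 15  21]
#  [ 29  29]
#  [  0  -7]
#  [  2 -10]]
```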
The first two properties – associativity and distributivity – match standard arithmetic properties that we've become accustomed to. The associative property allows you to, for example, compute $AB\vec{x}$ using only matrix-vector multiplications: you can first compute $B\vec{x}$, which results in a vector, and then multiply $A$ by that vector. (I had you do this in Activity 2 earlier in this section – I hope you did it! 🧐)
The fact that matrix multiplication is not commutative may come as a surprise, as every other form of multiplication you’ve learned about up until this point has been commutative (including the dot product).
In fact, if $AB$ exists, $BA$ may or may not! If $A$ is $n \times d$ and $B$ is $d \times p$, then $BA$ only exists if $n = p$. But even then, $AB \neq BA$ in general.
For example, if $A$ is $2 \times 3$ and $B$ is $3 \times 2$, then $AB$ is $2 \times 2$ and $BA$ is $3 \times 3$; here, both products exist, but they cannot be equal since they have different shapes.
Even if $A$ and $B$ are both square matrices with the same shape, $AB \neq BA$ in general. For illustration, consider:

$$A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}, \qquad B = \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix}$$
Then,
$$AB = \begin{bmatrix} 19 & 22 \\ 43 & 50 \end{bmatrix} \neq \begin{bmatrix} 23 & 34 \\ 31 & 46 \end{bmatrix} = BA$$
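numpy makes this easy to see for yourself (a minimal sketch with the same $A$ and $B$):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

print(A @ B)   # [[19 22]
               #  [43 50]]
print(B @ A)   # [[23 34]
               #  [31 46]]

# np.allclose compares the two products entrywise.
print(np.allclose(A @ B, B @ A))  # False
```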
Activity 4
Activity 4.1
Let $P = \begin{bmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}$, $S = \begin{bmatrix} 4 & 0 & 0 \\ 0 & \frac{1}{2} & 0 \\ 0 & 0 & 3 \end{bmatrix}$, and $\vec{x} = \begin{bmatrix} 4 \\ 6 \\ 12 \end{bmatrix}$.
Evaluate $P\vec{x}$ and $S\vec{x}$. Then, explain in words what multiplying $P$ and $S$ by $\vec{x}$ does to $\vec{x}$.
Evaluate $PS\vec{x}$ and $SP\vec{x}$. The results should be different, as we'd expect, since matrix multiplication is not commutative in general. Explain the difference intuitively, given the "operations" $P$ and $S$ perform on $\vec{x}$.
$P$ is called a permutation matrix, and $S$ is called a diagonal matrix.
Activity 4.2
The famous Fibonacci sequence of integers, $F_0, F_1, F_2, \ldots$, is defined as follows:

$$F_0 = 0, \qquad F_1 = 1, \qquad F_n = F_{n-1} + F_{n-2} \text{ for } n \geq 2$$
The first few terms in the sequence are $0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, \ldots$

It turns out you can compute the $n$th term in the sequence using matrix multiplication.

Find a $2 \times 2$ matrix $A$ such that $A \begin{bmatrix} F_{n-1} \\ F_{n-2} \end{bmatrix} = \begin{bmatrix} F_n \\ F_{n-1} \end{bmatrix}$. The answer uses relatively small numbers.
Compute $A \begin{bmatrix} 1 \\ 0 \end{bmatrix}$, i.e. the product of $A$ and the vector $\begin{bmatrix} 1 \\ 0 \end{bmatrix}$.

Since $A$ is square, we can multiply it by itself. Compute $A^2 \begin{bmatrix} 1 \\ 0 \end{bmatrix}$ and $A^3 \begin{bmatrix} 1 \\ 0 \end{bmatrix}$.
If you continue this process, you'll find that $A^n \begin{bmatrix} 1 \\ 0 \end{bmatrix}$ is a vector containing the $(n+1)$th and $n$th terms in the Fibonacci sequence, i.e. $\begin{bmatrix} F_{n+1} \\ F_n \end{bmatrix}$ – remember, the starting vector $\begin{bmatrix} 1 \\ 0 \end{bmatrix}$ is $\begin{bmatrix} F_1 \\ F_0 \end{bmatrix}$!
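If you'd like to check your answer numerically, `np.linalg.matrix_power` computes repeated products of a square matrix with itself (a minimal sketch; spoiler warning, the cell below uses the matrix the activity asks you to find):

```python
import numpy as np

# Spoiler: the matrix the activity asks you to find.
A = np.array([[1, 1], [1, 0]])

v = np.array([1, 0])  # [F_1, F_0]

# A^10 applied to [F_1, F_0] gives [F_11, F_10].
print(np.linalg.matrix_power(A, 10) @ v)  # [89 55]
```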
Activity 4.3
Using the same matrices $P$ and $S$ from Activity 4.1, compute $(P - S)\vec{x}$ and $P\vec{x} - S\vec{x}$. Are the results the same? If so, what property of matrix multiplication guarantees this?
Is the result of $(P - S)\vec{x}$ interpretable, in the same way that the results of $P\vec{x}$ and $S\vec{x}$ were in Activity 4.1?
I've shown you the naïve – and by far most common – algorithm for matrix multiplication. If $A$ and $B$ are both square $n \times n$ matrices, then the runtime of the naïve algorithm is $O(n^3)$.
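In plain Python, the naïve algorithm looks something like this (a minimal sketch; in practice you'd call `A @ B` and let numpy's heavily optimized routines do the work):

```python
import numpy as np

def naive_matmul(A, B):
    """Multiply A (n x d) by B (d x p) with three nested loops.
    Runtime is O(ndp), which is O(n^3) for square matrices."""
    n, d = A.shape
    d2, p = B.shape
    assert d == d2, "Golden Rule: columns of A must equal rows of B"
    out = np.zeros((n, p))
    for i in range(n):
        for j in range(p):
            # Entry (i, j) is the dot product of row i of A and column j of B.
            for k in range(d):
                out[i, j] += A[i, k] * B[k, j]
    return out

A = np.array([[3, 1, 4], [2, 1, 9], [0, -1, 0], [2, -2, 0]])
B = np.array([[1, 2], [0, 7], [3, 2]])
print(np.allclose(naive_matmul(A, B), A @ B))  # True
```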
However, there exist more efficient algorithms for matrix multiplication. Strassen's algorithm is one such example; it describes how to multiply two square $n \times n$ matrices in $O(n^{2.807})$ time. The study of efficient algorithms for matrix multiplication is an active area of research; if you're interested in learning more, look here.
Matrix multiplication, I might argue, is one of the reasons NVIDIA is the most valuable company in the world. Modern machine learning is built on matrix multiplication, and GPUs are optimized for it. Why? This comment from Reddit does a good job of explaining:
Imagine you have 1 million math assignments to do, they are very simple assignments, but there are a lot that need to be done, they are not dependent on each other so they can be done on any order.
You have two options, distribute them to 10 thousand people to do it in parallel or give them to 10 math experts. The experts are very fast, but hey, there are only 10 of them, the 10 thousand are more suitable for the task because they have the “brute force” for this.
GPUs have thousands of cores, CPUs have tens.
On that note, the `@` operator in numpy is used for matrix multiplication; it is a shorthand for `np.matmul`. You can also use it to multiply a matrix by a vector.
Next, we look at the transpose and other special matrices that show up throughout linear algebra.