
6.2. Inverses

The big concept introduced in Chapter 5.3 was the rank of a matrix, $\text{rank}(A)$, which is the number of linearly independent columns of a matrix. The rank of a matrix told us the dimension of its column space, $\text{colsp}(A)$, which is the set of all vectors that can be created with linear combinations of $A$'s columns.

The ideas of rank, column space, and linear independence are all closely related, and they all apply to matrices of any shape – wide, tall, or square. This is good, because data from the real world is rarely square: we usually have many more observations ($n$) than features ($d$).

The big idea in Chapter 6.2 is that of the inverse of a matrix, $A^{-1}$, which only applies to square matrices. But, inverses and square matrices can still be useful for our rectangular matrices with real data, since the matrix $A^TA$ is square, no matter the shape of $A$. And, as we discussed in Chapter 5.3 and you're seeing in Homework 5, the matrix $A^TA$ has close ties to the correlations between the columns of $A$, which you already know has some connection to linear regression.

After the linear-transformation detour, we return to inverses and how to compute them.


Motivation: What is an Inverse?

Scalar Addition and Multiplication

In “regular” addition and multiplication, you’re already familiar with the idea of an inverse. In addition, the inverse of a number $a$ is another number $a'$ that satisfies

$$a + a' = a' + a = 0$$

0 is the “identity” element in addition, since adding it to any number $a$ doesn’t change the value of $a$. Of course, $a' = -a$, so $-a$ is the additive inverse of $a$. Any number $a$ has an additive inverse; for example, the additive inverse of 2 is -2, and the additive inverse of -2 is 2.

In multiplication, the inverse of a number $a$ is another number $a'$ that satisfies

$$a \cdot a' = a' \cdot a = 1$$

1 is the “identity” element in multiplication, since multiplying any number $a$ by it doesn’t change the value of $a$. Most numbers have a multiplicative inverse of $a' = \frac{1}{a}$, but 0 does not! Since

$$0 \cdot a' = 0$$

for every $a'$, no choice of $a'$ can make the product equal 1.

When a multiplicative inverse exists, we can use it to solve equations like

$$2x = 5$$

by multiplying both sides by the multiplicative inverse of 2, which is $\frac{1}{2}$.

$$\frac{1}{2} (2x) = \frac{1}{2} (5) \implies x = \frac{5}{2}$$

Matrices

If $A$ is an $n \times d$ matrix, its additive inverse is just $-A$, which comes from negating each element of $A$. That part is not all that interesting. What we’re more interested in is the multiplicative inverse of a matrix, which is just referred to as the inverse of the matrix.

Suppose $A$ is some $n \times d$ matrix. We’d like to find an inverse, $A'$, such that when $A$ is multiplied by $A'$ in any order, the result is the identity element for matrix multiplication, $I$. Recall, the $n \times n$ identity matrix is a matrix with 1s on the diagonal (moving from the top left to the bottom right), and 0s everywhere else. The $3 \times 3$ identity matrix is

$$I_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

$I_3 \vec x = \vec x$ for any $\vec x \in \mathbb{R}^3$, and $B I_3 = I_3 B = B$ for any $B \in \mathbb{R}^{3 \times 3}$.

$$\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}$$

If $A$ is an $n \times d$ matrix, we’d need a single matrix $A'$ that satisfies

$$AA' = A'A = I$$

In $AA'$, $A'$ is the right inverse of $A$, and in $A'A$, $A'$ is the left inverse of $A$. Right now, we’re interested in matrices that have both a left inverse and a right inverse, and for which those two inverses happen to be the same matrix.

In order to evaluate $AA'$, we’d need $A'$ to be of shape $d \times \text{something}$, and in order to evaluate $A'A$, we’d need $A'$ to be of shape $\text{something else} \times n$. The solution to these constraints is to have $A'$ be a $d \times n$ matrix (like $A^T$).

If we try that, then

  • $A'A$ has shape $(d \times n) \cdot (n \times d) = d \times d$

  • $AA'$ has shape $(n \times d) \cdot (d \times n) = n \times n$

but, we want $A'A$ and $AA'$ to both be the same matrix, not just separate valid matrices. So, we need $n = d$, which requires $A$ to be $n \times n$, like its inverse.
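To see the shape constraints concretely, here’s a quick numpy check using a random $3 \times 2$ matrix, with $A^T$ standing in for $A'$ (this is just to illustrate shapes, not to compute an actual inverse).

```python
import numpy as np

# A hypothetical tall matrix: n = 3 rows, d = 2 columns.
A = np.random.rand(3, 2)
A_prime = A.T  # shape (2, 3), i.e. d x n

print((A_prime @ A).shape)  # (2, 2) -- a d x d matrix
print((A @ A_prime).shape)  # (3, 3) -- an n x n matrix
```

The two products have different shapes, so they can’t possibly be the same matrix unless $n = d$.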

For example, consider the $2 \times 2$ matrix

$$A = \begin{bmatrix} 2 & 4 \\ 3 & 5\end{bmatrix}$$

It turns out that $A$ does have an inverse! Its inverse, denoted by $A^{-1}$, is

$$A^{-1} = \begin{bmatrix} -5/2 & 2 \\ 3/2 & -1\end{bmatrix}$$

because $A^{-1}$ is the matrix that satisfies both

$$AA^{-1} = \begin{bmatrix} 2 & 4 \\ 3 & 5\end{bmatrix} \begin{bmatrix} -5/2 & 2 \\ 3/2 & -1\end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1\end{bmatrix} = I$$

and

$$A^{-1}A = \begin{bmatrix} -5/2 & 2 \\ 3/2 & -1\end{bmatrix} \begin{bmatrix} 2 & 4 \\ 3 & 5\end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1\end{bmatrix} = I$$

numpy is good at finding inverses, when they exist.

```python
import numpy as np

A = np.array([[2, 4],
              [3, 5]])

np.linalg.inv(A)
```

```
array([[-2.5,  2. ],
       [ 1.5, -1. ]])
```

```python
# The top-left and bottom-right elements are 1s,
# and the top-right and bottom-left elements are 0s.
# 4.4408921e-16 is just 0, with some floating point error baked in.
A @ np.linalg.inv(A)
```

```
array([[1.0000000e+00, 0.0000000e+00],
       [4.4408921e-16, 1.0000000e+00]])
```

But, not all square matrices are invertible, just like not all numbers have multiplicative inverses. What happens if we try to invert

$$B = \begin{bmatrix} 1 & 2 \\ 2 & 4\end{bmatrix}$$
```python
B = np.array([[1, 2],
              [2, 4]])

np.linalg.inv(B)
```

```
---------------------------------------------------------------------------
LinAlgError                               Traceback (most recent call last)
Cell In[17], line 4
      1 B = np.array([[1, 2], 
      2               [2, 4]])
----> 4 np.linalg.inv(B)

File ~/miniforge3/envs/pds/lib/python3.10/site-packages/numpy/linalg/linalg.py:561, in inv(a)
    559 signature = 'D->D' if isComplexType(t) else 'd->d'
    560 extobj = get_linalg_error_extobj(_raise_linalgerror_singular)
--> 561 ainv = _umath_linalg.inv(a, signature=signature, extobj=extobj)
    562 return wrap(ainv.astype(result_t, copy=False))

File ~/miniforge3/envs/pds/lib/python3.10/site-packages/numpy/linalg/linalg.py:112, in _raise_linalgerror_singular(err, flag)
    111 def _raise_linalgerror_singular(err, flag):
--> 112     raise LinAlgError("Singular matrix")

LinAlgError: Singular matrix
```

We’re told the matrix is singular, meaning that it’s not invertible.

The point of Chapter 6.2 is to understand why some square matrices are invertible and others are not, and what the inverse of an invertible matrix really means.

What’s the Point?

In general, suppose $A$ is an $n \times d$ matrix and $\vec b \in \mathbb{R}^n$. Then, the equation

$$A \vec x = \vec b$$

is a system of $n$ equations in $d$ unknowns.

$$\underbrace{\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1d} \\ a_{21} & a_{22} & \cdots & a_{2d} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nd} \end{bmatrix}}_{A} \underbrace{\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_d \end{bmatrix}}_{\vec x} = \underbrace{\begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix}}_{\vec b}$$

There may be no solution, a unique solution, or infinitely many solutions for the vector $\vec x$ that satisfies the system, depending on $\text{rank}(A)$ and whether $\vec b$ is in $\text{colsp}(A)$.

If we assume $A$ is square, i.e. $n = d$, then $A \vec x = \vec b$ is a system of $n$ equations in $n$ unknowns. If $A$ is invertible (which, remember, only square matrices can be), then there is a unique solution to the system, and we can find it by multiplying both sides by $A^{-1}$ on the left.

$$A \vec x = \vec b \implies A^{-1} A \vec x = A^{-1} \vec b \implies \vec x = A^{-1} \vec b$$

$\vec x = A^{-1} \vec b$ is the unique solution to the system of equations, and we can find it without having to manually solve the system. Thinking back to the example above, we used numpy to find that the inverse of

$$A = \begin{bmatrix} 2 & 4 \\ 3 & 5\end{bmatrix}$$

is

$$A^{-1} = \begin{bmatrix} -5/2 & 2 \\ 3/2 & -1\end{bmatrix}$$

We can use $A^{-1}$ to solve the system

$$\begin{align*} 2x_1 + 4x_2 &= b_1 \\ 3x_1 + 5x_2 &= b_2 \end{align*}$$

For any $b_1, b_2 \in \mathbb{R}$, the unique solution for $x_1$ and $x_2$ is

$$\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} -5/2 & 2 \\ 3/2 & -1\end{bmatrix} \begin{bmatrix} b_1 \\ b_2 \end{bmatrix} = \begin{bmatrix} -5/2 b_1 + 2 b_2 \\ 3/2 b_1 - b_2 \end{bmatrix}$$
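As a quick numerical sanity check, here’s that formula in numpy for one hypothetical choice of $b_1$ and $b_2$ (the values are mine, chosen just for illustration).

```python
A_inv = np.array([[-5/2, 2],
                  [3/2, -1]])

b = np.array([10, 13])  # a hypothetical b1 = 10, b2 = 13

A_inv @ b  # array([1., 2.]), and indeed 2(1) + 4(2) = 10 and 3(1) + 5(2) = 13
```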

That said, as we’ll discuss at the bottom of this section, actually finding the inverse of a matrix is very computationally intensive, so usually we don’t actually compute $A^{-1}$. Knowing that it exists, and understanding the properties it satisfies, is the important part.


Inverting a Transformation

Remember, the big goal of this section is to find the inverse of a square matrix $A$.

Since a square matrix $A$ corresponds to a linear transformation, we can think of $A^{-1}$ as “reversing” or “undoing” the transformation.

For example, if $A$ scales vectors, $A^{-1}$ should scale by the reciprocal, so that applying $A$ and then $A^{-1}$ returns the original vector.

The simplest case involves a diagonal matrix, like

$$A = \begin{bmatrix} 2 & 0 \\ 0 & -1/3 \end{bmatrix}$$
Image produced in Jupyter

To undo the effect of $A$, we can apply the transformation

$$A^{-1} = \begin{bmatrix} 1/2 & 0 \\ 0 & -3 \end{bmatrix}$$
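Here’s a quick check in numpy that these two matrices really do undo one another (this is just the example above, multiplied in both orders):

```python
A = np.array([[2, 0],
              [0, -1/3]])

A_inv = np.array([[1/2, 0],
                  [0, -3]])

print(A @ A_inv)   # the identity matrix (up to floating point)
print(A_inv @ A)   # the identity matrix (up to floating point)
```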

Many of the transformations we looked at involving $2 \times 2$ matrices are reversible, and hence the corresponding matrices are invertible. If the matrix rotates by $\theta$, the inverse is a rotation by $-\theta$. If the matrix shears by “dragging to the right”, the inverse is a shear by “dragging to the left”.

Another way of visualizing whether a transformation is reversible: given a vector on the right, is there exactly one corresponding vector on the left? By exactly one, I mean not 0, and not multiple.

Here, we visualize

$$F = \begin{bmatrix} 1 & 0 \\ -5/4 & 1 \end{bmatrix}$$

Given any vector $\vec b$ of the form $\vec b = F \vec x$, there is exactly one vector $\vec x$ that satisfies this equation.

Image produced in Jupyter

On the other hand, if we look at

$$H = \begin{bmatrix} 1 / 2 & -1 \\ 1 & -2 \end{bmatrix}$$

the same does not hold true. Given any vector $\vec b \in \text{colsp}(H)$ on the right, there are infinitely many vectors $\vec x$ such that $H \vec x = \vec b$. The vectors in pink on the left are all sent to the same vector on the right, ${\color{#d81a60}\begin{bmatrix} -1 \\ -2 \end{bmatrix}}$.

Image produced in Jupyter

And, there are no vectors on the left that get sent to $\begin{bmatrix} -2 \\ 2 \end{bmatrix}$ on the right. Any vector in $\mathbb{R}^2$ that isn’t on the line spanned by $\begin{bmatrix} -1 \\ -2 \end{bmatrix}$ is unreachable.

You may recall the following key ideas from discrete math:

  • A function is invertible if and only if it is both one-to-one (injective) and onto (surjective).

  • A function is one-to-one if and only if no two inputs get sent to the same output, i.e. $f(x_1) = f(x_2)$ implies $x_1 = x_2$.

  • A function is onto if every element of the codomain is an output of the function, i.e. for every $y \in Y$, there exists an $x \in X$ such that $f(x) = y$.

The transformation represented by $H$ is neither one-to-one (because $\begin{bmatrix} 0 \\ 1 \end{bmatrix}$ and $\begin{bmatrix} 1 \\ 1.5 \end{bmatrix}$ get sent to the same point) nor onto (because $\begin{bmatrix} -2 \\ 0 \end{bmatrix}$ isn’t mapped to by any vector in $\mathbb{R}^2$).
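Here’s that failure of one-to-one-ness in code: the two vectors above really do land in the same place, and $H$’s rank confirms that its columns are linearly dependent.

```python
H = np.array([[1/2, -1],
              [1, -2]])

print(H @ np.array([0, 1]))      # [-1. -2.]
print(H @ np.array([1, 1.5]))    # [-1. -2.], the same output
print(np.linalg.matrix_rank(H))  # 1, so H's columns are linearly dependent
```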

In order for a linear transformation to be invertible, it must be both one-to-one and onto, i.e. it must be a bijection. Again, don’t worry if these terms seem foreign: I’ve provided them here to help build connections to other courses if you’ve taken them. If not, the rest of my coverage should still be sufficient.


Inverting a Matrix

The big idea I’m trying to get across is that an $n \times n$ matrix $A$ is invertible if and only if the corresponding linear transformation can be “undone”.

That is, $A$ is invertible if and only if given any vector $\vec b \in \mathbb{R}^n$, there is exactly one vector $\vec x \in \mathbb{R}^n$ such that $A \vec x = \vec b$. If the visual intuition from earlier didn’t make this clear, here’s another concrete example. Consider

$$A = \begin{bmatrix} 1 & 2 & 0 \\ 1 & 2 & 0 \\ 1 & 2 & 1 \end{bmatrix}$$

$\text{rank}(A) = 2$, since $A$’s first two columns are scalar multiples of one another. Let’s consider two possible $\vec b$’s, each depicting a different case.

  1. $\vec b = \begin{bmatrix} 3 \\ 3 \\ 3 \end{bmatrix}$. This $\vec b$ is in $\text{colsp}(A)$. The issue with $A$ is that there are infinitely many linear combinations of the columns of $A$ that equal this $\vec b$.

$$\underbrace{\begin{bmatrix} 1 & 2 & 0 \\ 1 & 2 & 0 \\ 1 & 2 & 1 \end{bmatrix}}_{A} \underbrace{\begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}}_{\text{an } \vec x} = \underbrace{\begin{bmatrix} 1 & 2 & 0 \\ 1 & 2 & 0 \\ 1 & 2 & 1 \end{bmatrix}}_{A} \underbrace{\begin{bmatrix} -5 \\ 4 \\ 0 \end{bmatrix}}_{\text{another } \vec x} = ... = \underbrace{\begin{bmatrix} 3 \\ 3 \\ 3 \end{bmatrix}}_{\vec b}$$

  2. $\vec b = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}$. This $\vec b$ is not in $\text{colsp}(A)$, meaning there is no $\vec x \in \mathbb{R}^3$ such that $A \vec x = \vec b$.

$$\underbrace{\begin{bmatrix} 1 & 2 & 0 \\ 1 & 2 & 0 \\ 1 & 2 & 1 \end{bmatrix}}_{A} \underbrace{\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}}_{\text{no such } \vec x} = \underbrace{\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}}_{\vec b}$$

But, if $A$’s columns were linearly independent, they’d span all of $\mathbb{R}^3$, and so $A \vec x = \vec b$ would have a unique solution $\vec x$ for any $\vec b$ we think of.
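Numerically, it’s easy to see both the rank deficiency and the “infinitely many solutions” issue for the first $\vec b$ (the two particular $\vec x$’s below are just the ones from the example above):

```python
A = np.array([[1, 2, 0],
              [1, 2, 0],
              [1, 2, 1]])

print(np.linalg.matrix_rank(A))  # 2, not 3, so A is not invertible

# Two different x's that produce the same b = [3, 3, 3].
print(A @ np.array([1, 1, 0]))
print(A @ np.array([-5, 4, 0]))
```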

Definition

Suppose $A$ is an $n \times n$ matrix. The following statements are equivalent, meaning they are either all true or all false:

  • $A$ is invertible, i.e. there exists a matrix $A^{-1}$ such that $AA^{-1} = A^{-1}A = I$.

  • The columns of $A$ are linearly independent, i.e. $\text{rank}(A) = n$.

  • $\text{colsp}(A) = \mathbb{R}^n$, so $A \vec x = \vec b$ has a unique solution $\vec x$ for every $\vec b \in \mathbb{R}^n$.

  • $\det(A) \neq 0$.

The properties in the box above are sometimes together called the invertible matrix theorem. This is not an exhaustive list, either, and we’ll see other equivalent properties as time goes on.

Inverse of a $2 \times 2$ Matrix

As was the case with the determinant, the general formula for the inverse of a matrix is only convenient for $2 \times 2$ matrices.

You could solve for this formula by hand, by finding scalars $e$, $f$, $g$, and $h$ such that

$$\underbrace{\begin{bmatrix} a & b \\ c & d \end{bmatrix}}_A \begin{bmatrix} e & f \\ g & h \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$$

Doing so yields the general formula

$$A^{-1} = \frac{1}{ad - bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$$

Note that the formula above involves division by $ad - bc$. If $ad - bc = 0$, then $A$ is not invertible, but $ad - bc$ is just the determinant of $A$! This should give you a bit more confidence in the equivalence of the statements “$A$ is invertible” and “$\det(A) \neq 0$”.

Let’s test out the $2 \times 2$ formula on some examples. If

$$A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$$

then

$$A^{-1} = \frac{1}{1 \cdot 4 - 2 \cdot 3} \begin{bmatrix} 4 & -2 \\ -3 & 1 \end{bmatrix} = \frac{1}{-2} \begin{bmatrix} 4 & -2 \\ -3 & 1 \end{bmatrix} = \begin{bmatrix} -2 & 1 \\ 3/2 & -1/2 \end{bmatrix}$$

and indeed both

$$\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \begin{bmatrix} -2 & 1 \\ 3/2 & -1/2 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$$

and

$$\begin{bmatrix} -2 & 1 \\ 3/2 & -1/2 \end{bmatrix} \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$$

hold.

On the other hand,

$$B = \begin{bmatrix} 1 & 2 \\ 2 & 4 \end{bmatrix}$$

is not invertible, since its columns are linearly dependent.
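If you’d like to play with the $2 \times 2$ formula yourself, here’s a small sketch of it in code (the function name inv_2x2 is mine, not a numpy routine); it matches np.linalg.inv whenever the determinant is nonzero.

```python
def inv_2x2(A):
    """Inverse of a 2x2 matrix via the ad - bc formula."""
    a, b = A[0]
    c, d = A[1]
    det = a * d - b * c
    if det == 0:
        raise ValueError("Matrix is not invertible (determinant is 0).")
    return np.array([[d, -b],
                     [-c, a]]) / det

A = np.array([[1, 2],
              [3, 4]])

print(inv_2x2(A))        # [[-2.   1. ], [ 1.5 -0.5]]
print(np.linalg.inv(A))  # the same result
```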

Beyond $2 \times 2$ Matrices

For matrices larger than $2 \times 2$, the calculation of the inverse is not as straightforward; there’s no simple formula. In the $3 \times 3$ case, we’d need to find 9 scalars $c_{ij}$ such that

$$\underbrace{\begin{bmatrix} 3 & 7 & 1 \\ -2 & 5 & 0 \\ 4 & 2 & 0 \end{bmatrix}}_A \underbrace{\begin{bmatrix} {\color{#3d81f6}c_{11}} & {\color{orange}c_{12}} & {\color{#d81a60}c_{13}} \\ {\color{#3d81f6}c_{21}} & {\color{orange}c_{22}} & {\color{#d81a60}c_{23}} \\ {\color{#3d81f6}c_{31}} & {\color{orange}c_{32}} & {\color{#d81a60}c_{33}} \end{bmatrix}}_{C = A^{-1}} = \underbrace{\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}}_{I}$$

This involves solving a system of 3 equations in 3 unknowns, 3 times – one per column of the identity matrix. One such system is

$$\begin{align*} 3 {\color{#3d81f6}c_{11}} + 7 {\color{#3d81f6}c_{21}} + {\color{#3d81f6}c_{31}} &= 1 \\ -2 {\color{#3d81f6}c_{11}} + 5 {\color{#3d81f6}c_{21}} &= 0 \\ 4 {\color{#3d81f6}c_{11}} + 2 {\color{#3d81f6}c_{21}} &= 0 \end{align*}$$

You can quickly see how this becomes a pain to solve by hand. Instead, we can use one of two strategies:

  1. Using row reduction, also known as Gaussian elimination, which is an efficient method for solving systems of linear equations without needing to write out each equation explicitly. Row reduction can be used to both find the rank and inverse of a matrix, among other things. More traditional linear algebra courses spend a considerable amount of time on this concept, though I’ve intentionally avoided it in this course to instead spend time on conceptual ideas most relevant to machine learning. That said, you’ll get some practice with it in a future homework.

  2. Using a pre-built function in numpy that does the row reduction for us (a quick sketch of this follows below).
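For instance, here’s a sketch of the second strategy applied to just the blue system above: solving $A \vec c = \vec e_1$ gives the first column of $A^{-1}$, which we can compare against the corresponding column of the full inverse.

```python
A = np.array([[3, 7, 1],
              [-2, 5, 0],
              [4, 2, 0]])

e1 = np.array([1, 0, 0])

# First column of the inverse, found by solving one system of 3 equations.
print(np.linalg.solve(A, e1))

# The same column, extracted from the full inverse.
# Both print (approximately) [0, 0, 1].
print(np.linalg.inv(A)[:, 0])
```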

At the end of this section, I give you some advice on how to (and not to) compute the inverse of a matrix in code.


More Examples

As we’ve come to expect, let’s work through some examples that illustrate important ideas.

Example: Inverting a Product

  1. Suppose $A$ and $B$ are both invertible $n \times n$ matrices. Is $AB$ invertible? If so, what is its inverse?

  2. Suppose $A = BC$, and $A$, $B$, and $C$ are all invertible $n \times n$ matrices. What is the inverse of $B$?

  3. If $A$, $B$, and $AB$ are all $n \times n$ matrices, and $AB$ is invertible, must $A$ and $B$ both be invertible?

  4. In general, if $AB$ is invertible, must $A$ and $B$ both be invertible?

Example: Inverting a Sum

Suppose $A$ and $B$ are both invertible $n \times n$ matrices. Is $A + B$ invertible? If so, what is its inverse?

Example: Inverting $X^TX$

Suppose $X$ is an $n \times d$ matrix. Note that $X$ is not square, and so it is not invertible. However, $X^TX$ is a square matrix, and so it is possible that it is invertible.

Explain why $X^TX$ is invertible if and only if $X$’s columns are linearly independent.

Example: Orthogonal Matrices

Recall, an $n \times n$ matrix $Q$ is orthogonal if $Q^T Q = QQ^T = I$.

What is the inverse of $Q$? Explain how this relates to the formula for rotation matrices in $\mathbb{R}^2$ from earlier,

$$R(\theta) = \begin{bmatrix} \cos(\theta) & -\sin(\theta) \\ \sin(\theta) & \cos(\theta) \end{bmatrix}$$
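Before answering, it may help to verify the defining property numerically for a rotation matrix with a hypothetical angle (here $\theta = \pi / 6$, my choice):

```python
theta = np.pi / 6  # a hypothetical angle

R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Both products equal the identity (up to floating point error),
# confirming that R(theta) is orthogonal.
print(np.allclose(R.T @ R, np.eye(2)), np.allclose(R @ R.T, np.eye(2)))
```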

Example: Householder Reflection

Suppose $\vec u \in \mathbb{R}^n$ is a unit vector. Let

$$P = I - 2 \vec u \vec u^T$$

$P$ is called the Householder matrix. $f(\vec x) = P \vec x$ reflects $\vec x$ across the line in $\mathbb{R}^2$ (or plane in $\mathbb{R}^3$, or in general, hyperplane in $\mathbb{R}^n$) $\vec u^T \vec x = 0$.

For example, suppose $\vec u = \begin{bmatrix} 3/5 \\ 4/5 \end{bmatrix}$. Then,

$$P = I - 2 \begin{bmatrix} 3/5 \\ 4/5 \end{bmatrix} \begin{bmatrix} 3/5 & 4/5 \end{bmatrix} = \begin{bmatrix} 7/25 & -24/25 \\ -24/25 & -7/25 \end{bmatrix}$$

reflects $\vec x$ across the line $\begin{bmatrix} 3/5 \\ 4/5 \end{bmatrix} \cdot \vec x = 0$, i.e. $\frac{3}{5}x_1 + \frac{4}{5}x_2 = 0$, or $x_2 = -\frac{3}{4}x_1$, or in more interpretable syntax, $y = -\frac{3}{4}x$.

Image produced in Jupyter
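Here’s that specific example in code, so you can see the reflection in action (the test points are mine: one on the line $y = -\frac{3}{4}x$, which should stay put, and $\vec u$ itself, which should flip to $-\vec u$).

```python
u = np.array([[3/5],
              [4/5]])  # a unit vector, written as a column

P = np.eye(2) - 2 * u @ u.T
print(P)                         # [[ 0.28 -0.96], [-0.96 -0.28]], i.e. 7/25, -24/25, -7/25

print(P @ np.array([4, -3]))     # [ 4. -3.]: points on the line y = -3/4 x don't move
print(P @ np.array([3/5, 4/5]))  # [-0.6 -0.8]: u itself is flipped to -u
```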
  1. Show, just using the definition $P = I - 2 \vec u \vec u^T$, that $P$ is an orthogonal matrix.

  2. Find $P^{-1}$. (Why does the fact that $P$ is orthogonal imply that $P$ is invertible?)


Computing the Inverse in Code

Recall that if $A$ is an $n \times n$ matrix and $\vec b \in \mathbb{R}^n$, then

$$A \vec x = \vec b$$

is a system of $n$ equations in $n$ unknowns. One of the big (conceptual) usages of the inverse is to solve such a system, as I mentioned at the start of this section. If $A$ is invertible, then we can solve for $\vec x$ by multiplying both sides on the left by $A^{-1}$:

$$A \vec x = \vec b \implies A^{-1} A \vec x = A^{-1} \vec b \implies \vec x = A^{-1} \vec b$$

$A^{-1}$ has encoded within it the solution to $A \vec x = \vec b$, no matter what $\vec b$ is. This makes $A^{-1}$ very powerful. But, that power doesn’t come for free: in practice, finding $A^{-1}$ is less efficient and more prone to floating point errors than solving the one system of $n$ equations in $n$ unknowns directly for the specific $\vec b$ we care about.

  • Solving $A \vec x = \vec b$ involves solving just one system of $n$ equations in $n$ unknowns.

  • Finding $A^{-1}$ involves solving a system of $n$ equations in $n$ unknowns, $n$ times! Each system has the same coefficient matrix $A$ but a different right-hand side $\vec b$, corresponding to the columns of the identity matrix (which are the standard basis vectors in $\mathbb{R}^n$).

The more floating point operations we need to do, the more error is introduced into the final results.

Let’s run an experiment. Suppose we’d like to find the solution to the system

$$\begin{align*} x_1 + x_2 &= 7 \\ x_1 + 1.000000001 x_2 &= 7.000000004 \end{align*}$$

This corresponds to $A \vec x = \vec b$, with

$$A = \begin{bmatrix} 1 & 1 \\ 1 & 1.000000001 \end{bmatrix}, \quad \vec b = \begin{bmatrix} 7 \\ 7.000000004 \end{bmatrix}$$

Here, we can eyeball the solution as being $\vec x = \begin{bmatrix} 3 \\ 4 \end{bmatrix}$. How close to this “true” $\vec x$ do each of the following techniques get?

Option 1: Using np.linalg.inv

As we saw at the start of the section, np.linalg.inv(A) finds the inverse of A, assuming it exists. The following cell finds the solution to $A \vec x = \vec b$ by first computing $A^{-1}$, and then $A^{-1} \vec b = \vec x$.

```python
# Prevents unnecessary rounding.
np.set_printoptions(precision=16, suppress=True)

A = np.array([[1, 1],
              [1, 1.000000001]])

b = np.array([[7],
              [7.000000004]])

x_inv = np.linalg.inv(A) @ b
x_inv
```

```
array([[2.9999998807907104],
       [4.0000001192092896]])
```

Option 2: Using np.linalg.solve

The following cell solves the same problem as the previous cell, but rather than asking for the inverse of $A$, it asks for the solution to $A \vec x = \vec b$, which doesn’t inherently require inverting $A$.

```python
x_solve = np.linalg.solve(A, b)
x_solve
```

```
array([[3.],
       [4.]])
```

This small example already illustrates the big idea: inverting can introduce more numerical error than is necessary. If we cast the problem more abstractly, consider the matrix

$$A = \begin{bmatrix} 1 & 1 \\ 1 & 1 + \epsilon \end{bmatrix}$$

where $\epsilon$ is a small constant, e.g. $\epsilon = 0.000000001$ in the example above. The inverse of $A$ is

$$A^{-1} = \frac{1}{1(1 + \epsilon) - 1 \cdot 1} \begin{bmatrix} 1 + \epsilon & -1 \\ -1 & 1 \end{bmatrix} = \frac{1}{\epsilon} \begin{bmatrix} 1 + \epsilon & -1 \\ -1 & 1 \end{bmatrix}$$

The closer $\epsilon$ is to 0, the larger $\frac{1}{\epsilon}$ becomes, making any floating point errors all the more costly.
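You can see this blow-up directly: for $\epsilon = 10^{-9}$, the formula above says the entries of $A^{-1}$ should be on the order of $\frac{1}{\epsilon} = 10^9$, so even tiny rounding errors get magnified enormously when we multiply by $\vec b$.

```python
A = np.array([[1, 1],
              [1, 1.000000001]])

# The largest entry of the computed inverse is on the order of 1e9 = 1 / epsilon.
print(np.abs(np.linalg.inv(A)).max())
```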