
2.1. Vectors

Linear algebra can be thought of as the study of vectors, matrices, and linear transformations, all of which are ideas we’ll need to use in our journey to understand machine learning. We’ll start with vectors, which are the building blocks of linear algebra.

Definition

There are many ways to define vectors, but I’ll give you the most basic and practically relevant definition of a vector for now. I’ll introduce more abstract definitions later if we need them.

By ordered list, I mean that the order of the numbers in the vector matters.

In general, we’re mostly concerned with vectors in $\mathbb{R}^n$, which is the set of all vectors with $n$ components or elements, each of which is a real number. It’s possible to consider vectors with complex components (the set of all vectors with complex components is denoted $\mathbb{C}^n$), but we’ll stick to real vectors for now.

The vector $\vec v$ defined in the box above is in $\mathbb{R}^3$, which we can express as $\vec v \in \mathbb{R}^3$. This is pronounced as “v is an element of R three”.

A general vector in $\mathbb{R}^n$ can be expressed in terms of its $n$ components:

$$\vec v = \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix}$$

Subscripts can be used for different, sometimes conflicting purposes:

The meaning of the subscript depends on the context, so just be careful!

While we’ll use the definition of a vector as a list of numbers for now, I hope you’ll soon appreciate that vectors are more than just a list of numbers – they encode remarkable amounts of information and beauty.


Norm (i.e. Length or Magnitude)

In the context of physics, vectors are often described as creatures with “a magnitude and a direction”. While this is not a physics class – this is EECS 245, after all! – this interpretation has some value for us too.

To illustrate what we mean, let’s consider some concrete vectors in $\mathbb{R}^2$, since it is easy to visualize vectors in 2 dimensions on a computer screen. Suppose:

$${\color{orange}\vec u = \begin{bmatrix} 3 \\ 1 \end{bmatrix}}, \quad {\color{#3d81f6}\vec v = \begin{bmatrix} 4 \\ -6 \end{bmatrix}}$$

Then, we can visualize ${\color{orange}\vec u}$ and ${\color{#3d81f6}\vec v}$ as arrows pointing from the origin $(0, 0)$ to the points $(3, 1)$ and $(4, -6)$ in the 2D Cartesian plane, respectively.

[Interactive figure: ${\color{orange}\vec u}$ and ${\color{#3d81f6}\vec v}$ drawn as arrows from the origin]

The vector $\vec v = \begin{bmatrix} 4 \\ -6 \end{bmatrix}$ moves 4 units to the right and 6 units down, which we know by reading the components of the vector. In Chapter 2.2, we’ll see how to describe the direction of $\vec v$ in terms of the angle it makes with the $x$-axis (and you may remember how to calculate that angle using trigonometry).

It’s worth noting that $\vec v$ isn’t “fixed” to start at the origin – vectors don’t have positions. All three vectors in the figure below are the same vector, $\vec v$.

Image produced in Jupyter

To compute the length of $\vec v$ – i.e. the distance between $(0, 0)$ and $(4, -6)$ – we should remember the Pythagorean theorem, which states that if we have a right triangle with legs of length $a$ and $b$, then the length of the hypotenuse is $\sqrt{a^2 + b^2}$. Here, that’s $\sqrt{4^2 + (-6)^2} = \sqrt{16 + 36} = \sqrt{52} = 2\sqrt{13}$.

Image produced in Jupyter
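If you’d like to sanity-check this arithmetic numerically, here’s a minimal sketch in numpy – we’ll formally introduce numpy’s norm function at the end of this section; for now, this is just the Pythagorean theorem typed out:

```python
import numpy as np

# The components of v.
v = np.array([4, -6])

# Length of v, straight from the Pythagorean theorem.
length = np.sqrt(v[0]**2 + v[1]**2)

print(length)            # 7.2111..., i.e. sqrt(52)
print(2 * np.sqrt(13))   # same value, since sqrt(52) = 2 * sqrt(13)
```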

Note that the norm involves a sum of squares, much like mean squared error 🤯. This connection will be made more explicit in Chapter 3, when we return to studying linear regression.

What may not be immediately obvious is why the Pythagorean theorem seems to extend to higher dimensions. The 2D case seems reasonable, but why is the length of the vector ${\color{#d81b60}\vec w} = \begin{bmatrix} 6 \\ -2 \\ 3 \end{bmatrix}$ in $\mathbb{R}^3$ equal to $\sqrt{6^2 + (-2)^2 + 3^2}$?

[Interactive figure: ${\color{#d81b60}\vec w}$ in 3D, along with the two right triangles used to compute its norm]

There are two right triangles in the picture above:

  • One triangle has legs of length 6 and 2, with a hypotenuse of $h$; this triangle is shaded $\color{lightblue}\text{light blue}$ above.
  • Another triangle has legs of length 3 and $h$, with a hypotenuse of $\left\| \vec w \right\|$; this triangle is shaded $\color{#d81b60}\text{dark pink}$ above.

To find $\left\| \vec w \right\|$, we can use the Pythagorean theorem twice:

$$h^2 = 6^2 + (-2)^2 = 36 + 4 = 40 \implies h = \sqrt{40}$$

Then, we can use the Pythagorean theorem again to find $\left\| \vec w \right\|$:

$$\left\| \vec w \right\| = \sqrt{h^2 + 3^2} = \sqrt{40 + 9} = \sqrt{49} = 7 = \sqrt{6^2 + (-2)^2 + 3^2}$$

So, to find $\left\| \vec w \right\|$, we used the Pythagorean theorem twice, and ended up computing the square root of the sum of the squares of the components of the vector, which is what the definition above states. This argument naturally extends to higher dimensions. We will do this often: build intuition in the dimensions we can visualize (2D, and with the help of interactive graphics, 3D), and then rely on the power of abstraction to extend our understanding to higher dimensions, even when we can’t visualize.
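As a quick numerical check of the two-step argument above – nothing new conceptually, just the same arithmetic in numpy:

```python
import numpy as np

# Step 1: hypotenuse of the light blue triangle, with legs 6 and 2.
h = np.sqrt(6**2 + (-2)**2)       # sqrt(40)

# Step 2: hypotenuse of the dark pink triangle, with legs h and 3.
norm_w = np.sqrt(h**2 + 3**2)     # sqrt(49) = 7

print(norm_w)                             # 7.0
print(np.sqrt(6**2 + (-2)**2 + 3**2))     # 7.0, the one-shot formula
```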

Vector norms satisfy several interesting properties, which we will introduce shortly once we have more context.


Addition and Scalar Multiplication

Vectors support two core operations: addition and scalar multiplication. These two operations are core to the study of linear algebra – so much so, that sometimes vectors are defined abstractly as “things that can be added and multiplied by scalars”.

Addition

This tells us that vector addition is performed element-wise. This is a term that you’ll encounter quite a bit in the context of writing numpy code, as you’ll see in lab.

Using our examples from earlier, ${\color{orange}\vec u = \begin{bmatrix} 3 \\ 1 \end{bmatrix}}$ and ${\color{#3d81f6}\vec v = \begin{bmatrix} 4 \\ -6 \end{bmatrix}}$, we have that ${\color{orange}\vec u} + {\color{#3d81f6}\vec v} = \begin{bmatrix} 7 \\ -5 \end{bmatrix}$.

Geometrically, we can arrive at the vector $\begin{bmatrix} 7 \\ -5 \end{bmatrix}$ by drawing ${\color{orange}\vec u}$ at the origin, then placing ${\color{#3d81f6}\vec v}$ at the tip of ${\color{orange}\vec u}$.

Image produced in Jupyter

Vector addition is commutative, i.e. ${\color{orange}\vec u} + {\color{#3d81f6}\vec v} = {\color{#3d81f6}\vec v} + {\color{orange}\vec u}$, for any two vectors ${\color{orange}\vec u}, {\color{#3d81f6}\vec v} \in \mathbb{R}^n$. Algebraically, this should not be a surprise, since ${\color{orange}u_i} + {\color{#3d81f6}v_i} = {\color{#3d81f6}v_i} + {\color{orange}u_i}$ for all $i$.

Visually, this means that we can instead start with ${\color{#3d81f6}\vec v}$ at the origin and then draw ${\color{orange}\vec u}$ starting from the tip of ${\color{#3d81f6}\vec v}$, and we should land in the same place.

Image produced in Jupyter

We cannot, however, add $\vec w = \begin{bmatrix} 6 \\ -2 \\ 3 \end{bmatrix}$ to $\vec u$, since $\vec u$ and $\vec w$ have different numbers of components.

In Python, we define vectors using numpy arrays, and addition occurs element-wise by default.
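For example, here’s a minimal sketch of what that looks like (you’ll practice this more in lab):

```python
import numpy as np

u = np.array([3, 1])
v = np.array([4, -6])

# Addition is element-wise by default.
print(u + v)    # [ 7 -5]
```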

Scalar Multiplication

Using our example from earlier, ${\color{#3d81f6}\vec v = \begin{bmatrix} 4 \\ -6 \end{bmatrix}}$, we have $3{\color{#3d81f6}\vec v} = \begin{bmatrix} 12 \\ -18 \end{bmatrix}$. Note that I’ve deliberately defined this operation as scalar multiplication, not just “multiplication” in general, as there’s more nuance to the definition of multiplication in linear algebra.

Visually, a scalar multiple is equivalent to stretching or compressing a vector by a factor of the scalar. If the scalar is negative, the direction of the vector is reversed. Below, $-\frac{2}{3}{\color{#3d81f6}\vec v}$ points opposite to ${\color{#3d81f6}\vec v}$ and $3{\color{#3d81f6}\vec v}$.

Image produced in Jupyter

An important observation is that ${\color{#3d81f6}\vec v}$, $3{\color{#3d81f6}\vec v}$, and $-\frac{2}{3}{\color{#3d81f6}\vec v}$ all lie on the same line.
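Scalar multiplication also works out-of-the-box in numpy; a quick sketch:

```python
import numpy as np

v = np.array([4, -6])

print(3 * v)        # [ 12 -18]
print(-2/3 * v)     # [-2.66666667  4.        ], i.e. -(2/3) * v
```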


Linear Combinations

Motivation and Definition

The two operations we’ve defined – vector addition and scalar multiplication – are the building blocks of linear algebra, and are often used in conjunction. For example, if we stick with the same vectors ${\color{orange}\vec u}$ and ${\color{#3d81f6}\vec v}$ from earlier, what might the vector $3{\color{orange}\vec u} - \frac{1}{2}{\color{#3d81f6}\vec v}$ look like?

$$3{\color{orange}\vec u} - \frac{1}{2}{\color{#3d81f6}\vec v} = 3{\color{orange}\begin{bmatrix} 3 \\ 1 \end{bmatrix}} - \frac{1}{2}{\color{#3d81f6}\begin{bmatrix} 4 \\ -6 \end{bmatrix}} = \begin{bmatrix} 9 \\ 3 \end{bmatrix} - \begin{bmatrix} 2 \\ -3 \end{bmatrix} = \begin{bmatrix} 7 \\ 6 \end{bmatrix}$$
Image produced in Jupyter

The vector $\begin{bmatrix} 7 \\ 6 \end{bmatrix}$, drawn in black above, is a linear combination of ${\color{orange}\vec u}$ and ${\color{#3d81f6}\vec v}$, since it can be written in the form $3{\color{orange}\vec u} - \frac{1}{2}{\color{#3d81f6}\vec v}$. 3 and $-\frac{1}{2}$ are the scalars that the definition above refers to as $a_1$ and $a_2$, and we’ve used ${\color{orange}\vec u}$ and ${\color{#3d81f6}\vec v}$ in place of ${\color{#d81b60}\vec v_1}$ and ${\color{#d81b60}\vec v_2}$. (I’ve tried to make the definition a bit more general – here, we’re just working with $d = 2$ vectors in $n = 2$ dimensions, but in practice $d$ and $n$ could both be much larger.)
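In code, this linear combination is just scalar multiplications followed by an addition – a small sketch, using the same ${\color{orange}\vec u}$ and ${\color{#3d81f6}\vec v}$:

```python
import numpy as np

u = np.array([3, 1])
v = np.array([4, -6])

# The linear combination 3u - (1/2)v.
print(3 * u - 0.5 * v)    # [7. 6.]
```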

Example in 2D

Here’s another linear combination of ${\color{orange}\vec u}$ and ${\color{#3d81f6}\vec v}$, namely $6{\color{orange}\vec u} + 5{\color{#3d81f6}\vec v}$. Algebraically, this is:

$$6{\color{orange}\vec u} + 5{\color{#3d81f6}\vec v} = 6{\color{orange}\begin{bmatrix} 3 \\ 1 \end{bmatrix}} + 5{\color{#3d81f6}\begin{bmatrix} 4 \\ -6 \end{bmatrix}} = \begin{bmatrix} 38 \\ -24 \end{bmatrix}$$

Visually:

Image produced in Jupyter

I like thinking of a linear combination as taking “a little bit of the first vector, a little bit of the second vector, etc.” and then adding them all together. (By “little bit”, I mean some amount of, e.g. $6{\color{orange}\vec u}$ is a little bit of ${\color{orange}\vec u}$.) Another useful analogy is to think of the original vectors as “building blocks” that we can use to create new vectors through addition and scalar multiplication.

This idea, of creating new vectors by scaling and adding existing vectors, is so important that it’s essentially what our regression problem boils down to. In the context of our commute times example, imagine $\vec{\text{dt}}$ contains the departure time for each row in our dataset (i.e. the time left in the morning), and $\vec{\text{nc}}$ contains the average number of cars on the road on a particular day. If we want to use these two features in a linear model to predict commute time, our problem boils down to finding the optimal coefficients $w_1$ and $w_2$ in a linear combination of $\vec{\text{dt}}$ and $\vec{\text{nc}}$ that best predicts commute times.

$$\text{vector of predicted commute times} = w_1 \vec{\text{dt}} + w_2 \vec{\text{nc}}$$
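To make this concrete, here’s a sketch with entirely made-up numbers – the departure times, car counts, and coefficients below are hypothetical, chosen only to show that the model’s predictions are a linear combination of the two feature vectors:

```python
import numpy as np

# Hypothetical feature vectors, one entry per row (day) in the dataset.
dt = np.array([8.5, 7.75, 9.0, 8.0])   # departure time, in hours after midnight
nc = np.array([70, 55, 80, 60])        # average number of cars on the road

# Hypothetical coefficients; finding the best w1 and w2 is the regression problem.
w1, w2 = -10, 1.5

predicted_commute_times = w1 * dt + w2 * nc
print(predicted_commute_times)
```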

The Three Questions

We’re going to spend a lot of time thinking about linear combinations. Specifically:

Again, just as an example, suppose the two vectors we’re dealing with are our familiar friends:

$${\color{orange}\vec u = \begin{bmatrix} 3 \\ 1 \end{bmatrix}}, \quad {\color{#3d81f6}\vec v = \begin{bmatrix} 4 \\ -6 \end{bmatrix}}$$

These are $d = 2$ vectors in $n = 2$ dimensions. With regards to the Three Questions:

  1. Can we write $\vec b$ as a linear combination of ${\color{orange}\vec u}$ and ${\color{#3d81f6}\vec v}$?

    If $\vec b = \begin{bmatrix} 7 \\ 6 \end{bmatrix}$, then the answer to the first question is yes, because we’ve shown that:

    $$3{\color{orange}\vec u} - \frac{1}{2}{\color{#3d81f6}\vec v} = \begin{bmatrix} 7 \\ 6 \end{bmatrix}$$

    Similarly, if $\vec b = \begin{bmatrix} 38 \\ -24 \end{bmatrix}$, then the answer to the first question is also yes, because we’ve shown that:

    $$6{\color{orange}\vec u} + 5{\color{#3d81f6}\vec v} = \begin{bmatrix} 38 \\ -24 \end{bmatrix}$$

    If $\vec b$ is some other vector, the answer may be yes or no, for all we know right now.

  2. If so, are the values of the scalars on ${\color{orange}\vec u}$ and ${\color{#3d81f6}\vec v}$ unique?

    Not sure! It’s true that $\begin{bmatrix} 7 \\ 6 \end{bmatrix} = 3{\color{orange}\vec u} - \frac{1}{2}{\color{#3d81f6}\vec v}$, but for all I know at this point, there could be other scalars $a_1 \neq 3$ and $a_2 \neq -\frac{1}{2}$ such that:

    $$a_1{\color{orange}\vec u} + a_2{\color{#3d81f6}\vec v} = \begin{bmatrix} 7 \\ 6 \end{bmatrix}$$

    (As it turns out, the answer is that the values 3 and $-\frac{1}{2}$ are unique – you’ll show why this is the case in a following activity.)

  3. What is the shape of the set of all possible linear combinations of ${\color{orange}\vec u}$ and ${\color{#3d81f6}\vec v}$?

    Also not sure! I know that $\begin{bmatrix} 7 \\ 6 \end{bmatrix}$ and $\begin{bmatrix} 38 \\ -24 \end{bmatrix}$ are both linear combinations of ${\color{orange}\vec u}$ and ${\color{#3d81f6}\vec v}$, and presumably there are many more, but I don’t know what they are.

    (It turns out that any vector in $\mathbb{R}^2$ can be written as a linear combination of ${\color{orange}\vec u}$ and ${\color{#3d81f6}\vec v}$! Again, you’ll show this in an activity.)

We’ll more comprehensively study the “Three Questions” in Chapter 2.4. I just wanted to call them out for you here so that you know where we’re heading.

Example in 3D

As a final example, let’s consider the vectors:

$${\color{#d81b60}\vec w = \begin{bmatrix} 12 \\ -4 \\ 6 \end{bmatrix}}, \quad {\color{#004d40}\vec r = \begin{bmatrix} 7 \\ 1 \\ 10 \end{bmatrix}}$$

These are $d = 2$ vectors, as before, but now in $n = 3$ dimensions. What do some of their linear combinations look like?

[Interactive figure: ${\color{#d81b60}\vec w}$, ${\color{#004d40}\vec r}$, and several of their linear combinations in 3D]

Drag the plot above to look at all five vectors from different angles. You should notice that all of the linear combinations of ${\color{#d81b60}\vec w}$ and ${\color{#004d40}\vec r}$ lie on the same plane! We’ll develop more precise terminology to describe this idea, once again, in Chapter 2.4. For now, think of a plane as a flat sheet of paper (more formally, a surface) that extends infinitely in all directions.


Norms, Revisited

Earlier in this section, we defined the norm of a vector $\vec v$ as:

$$\lVert \vec v \rVert = \sqrt{v_1^2 + v_2^2 + \cdots + v_n^2} = \sqrt{\sum_{i=1}^n v_i^2}$$

Now that we know how to add and scale vectors, we should think about how the norm behaves under these operations.

Properties of the Norm

Properties 1 and 2 are intuitive enough:

  1. Property 1 states that it’s impossible for a vector to have a negative norm. To calculate the norm of a vector, we sum the squares of each of the vector’s components. As long as each component $v_i$ is a real number, then $v_i^2 \geq 0$, and so $\sum_{i=1}^n v_i^2 \geq 0$. The square root of a non-negative number is always non-negative, so the norm of a vector is always non-negative. The only case in which $\sum_{i=1}^n v_i^2 = 0$ is when each $v_i = 0$, so the only vector with a norm of 0 is the zero vector.
  2. Property 2 states that scaling a vector by a scalar scales its norm by the absolute value of the scalar. For instance, it’s saying that both $2\vec v$ and $-2\vec v$ should be double the length of $\vec v$. See if you can prove, at this point, why this is the case.

The Triangle Inequality

Property 3 is a bit more interesting. As a reminder, it states that:

$$\lVert \vec u + \vec v \rVert \leq \lVert \vec u \rVert + \lVert \vec v \rVert$$

This is a famous inequality, generally known as the triangle inequality, and it comes up all the time in proofs. Intuitively, it says that the length of a sum of vectors cannot be greater than the sum of the lengths of the individual vectors – or, more philosophically, a sum cannot be more than its parts. It’s called the triangle inequality because it’s a generalization of the fact that in a triangle, the sum of the lengths of any two sides is greater than the length of the third side.

For the vectors ${\color{orange}\vec u}$ and ${\color{#3d81f6}\vec v}$ we’ve been working with:

$$\lVert {\color{orange}\vec u} \rVert = \sqrt{3^2 + 1^2} = \sqrt{10}$$
$$\lVert {\color{#3d81f6}\vec v} \rVert = \sqrt{4^2 + (-6)^2} = \sqrt{52}$$
$$\lVert {\color{orange}\vec u} + {\color{#3d81f6}\vec v} \rVert = \sqrt{7^2 + (-5)^2} = \sqrt{74}$$

And indeed, $\sqrt{74} \approx 8.6$ is less than $\sqrt{10} + \sqrt{52} \approx 10.4$.
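A quick numerical check of this particular instance, using np.linalg.norm (formally introduced at the end of this section) – just a sketch, not a proof:

```python
import numpy as np

u = np.array([3, 1])
v = np.array([4, -6])

lhs = np.linalg.norm(u + v)                     # ||u + v|| = sqrt(74)
rhs = np.linalg.norm(u) + np.linalg.norm(v)     # ||u|| + ||v|| = sqrt(10) + sqrt(52)

print(lhs, rhs, lhs <= rhs)    # 8.60... 10.37... True
```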

To prove that the triangle inequality holds in general, for any two vectors $\vec u, \vec v \in \mathbb{R}^n$, we’ll need to wait until Chapter 2.2. We don’t currently have any way to expand the norm $\lVert \vec u + \vec v \rVert$ – but we’ll develop the tools to do so soon. Just keep it in mind for now.

Unit Vectors and the Norm Ball

It’s common to use unit vectors to describe directions. I’ll use the same example as in Activity 1, when this idea was first introduced. Consider the vector $\vec x = \begin{bmatrix} 12 \\ 5 \end{bmatrix}$. Its norm is $\lVert \vec x \rVert = \sqrt{12^2 + 5^2} = \sqrt{169} = \boxed{13}$. (You might remember the $(5, 12, 13)$ Pythagorean triple from high school algebra – but that’s not important.)

There are plenty of vectors that point in the same direction as $\vec x$ – any vector $c\vec x$ for $c > 0$ does. (If $c < 0$, then the vector $c\vec x$ points in the opposite direction of $\vec x$.)

But among all those, the only one with a norm of 1 is $\frac{1}{\boxed{13}}\vec x$. Property 2 of the norm tells us this.

Image produced in Jupyter

In general, if $\vec v$ is any vector, then:

$$\frac{\vec v}{\lVert \vec v \rVert}$$

is a unit vector in the same direction as $\vec v$. Sometimes, we say that $\frac{\vec v}{\lVert \vec v \rVert}$ is a normalized version of $\vec v$.
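In numpy, normalizing a vector is a one-liner. Here’s a sketch using the $\vec x = \begin{bmatrix} 12 \\ 5 \end{bmatrix}$ from above:

```python
import numpy as np

x = np.array([12, 5])

x_hat = x / np.linalg.norm(x)    # divide each component by the norm, 13

print(x_hat)                     # [0.92307692 0.38461538], i.e. [12/13, 5/13]
print(np.linalg.norm(x_hat))     # 1.0 (up to floating point), as expected
```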

Here’s where things get interesting. Let’s visualize a few vectors and their normalized versions:

$$\begin{aligned} \vec u &= \begin{bmatrix} 3 \\ 1 \end{bmatrix} \implies \frac{\vec u}{\lVert \vec u \rVert} = \begin{bmatrix} \frac{3}{\sqrt{10}} \\ \frac{1}{\sqrt{10}} \end{bmatrix} \\ \vec v &= \begin{bmatrix} -6 \\ -6 \end{bmatrix} \implies \frac{\vec v}{\lVert \vec v \rVert} = \begin{bmatrix} \frac{-1}{\sqrt{2}} \\ \frac{-1}{\sqrt{2}} \end{bmatrix} \\ \vec w &= \begin{bmatrix} 7 \\ -5 \end{bmatrix} \implies \frac{\vec w}{\lVert \vec w \rVert} = \begin{bmatrix} \frac{7}{\sqrt{74}} \\ \frac{-5}{\sqrt{74}} \end{bmatrix} \\ \vec x &= \begin{bmatrix} -12 \\ 5 \end{bmatrix} \implies \frac{\vec x}{\lVert \vec x \rVert} = \begin{bmatrix} \frac{-12}{13} \\ \frac{5}{13} \end{bmatrix} \\ \vec y &= \begin{bmatrix} -1 \\ -6 \end{bmatrix} \implies \frac{\vec y}{\lVert \vec y \rVert} = \begin{bmatrix} \frac{-1}{\sqrt{37}} \\ \frac{-6}{\sqrt{37}} \end{bmatrix} \end{aligned}$$
Image produced in Jupyter

What do these vectors all have in common, other than being unit vectors? They all lie on a circle of radius 1, centered at $(0, 0)$!

Image produced in Jupyter

The circle shown above is called the norm ball of radius 1 in $\mathbb{R}^2$. It shows the set of all vectors $\vec v \in \mathbb{R}^2$ such that $\lVert \vec v \rVert = 1$. Using set notation, we might say:

$$\{\vec v : \lVert \vec v \rVert = 1, \vec v \in \mathbb{R}^2\}$$

That this looks like a circle is no coincidence. The condition $\lVert \vec v \rVert = 1$ is equivalent to $\sqrt{v_1^2 + v_2^2} = 1$. Squaring both sides, we get $v_1^2 + v_2^2 = 1$. This is the equation of a circle with radius 1 centered at the origin.

In $\mathbb{R}^3$, the norm ball of radius 1 is a sphere, and in general, in $\mathbb{R}^n$, the norm ball of radius 1 is the $n$-dimensional analogue of a sphere, often called a hypersphere.

Other Norms

So far, we’ve only discussed one “norm” of a vector, sometimes called the $L_{\color{orange}2}$ norm or Euclidean norm. In general, if $\vec v \in \mathbb{R}^n$ is a vector, $\vec v = \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix}$, then its norm is:

$$\lVert \vec v \rVert = \sqrt{v_1^{\color{orange}2} + v_2^{\color{orange}2} + \cdots + v_n^{\color{orange}2}} = \sqrt{\sum_{i=1}^n v_i^{\color{orange}2}}$$

This is, by far, the most common and most relevant norm, and in many linear algebra classes, it’s the only norm you’ll see. But in machine learning, a few other norms are relevant, too, so I’ll briefly discuss them here.

  • The $L_1$ or Manhattan norm of $\vec v$ is:

    $$\lVert \vec v \rVert_1 = |v_1| + |v_2| + \cdots + |v_n| = \sum_{i=1}^n |v_i|$$

    It’s called the Manhattan norm because it’s the distance you would travel if you walked from the origin to $\vec v$ in a grid of streets, where you can only move horizontally or vertically.

  • The $L_\infty$ or maximum norm of $\vec v$ is:

    $$\lVert \vec v \rVert_\infty = \max_{i} |v_i|$$

    This is the largest absolute value of any component of $\vec v$.

  • For any $p \geq 1$, the $L_p$ norm of $\vec v$ is:

    $$\lVert \vec v \rVert_p = \left( \sum_{i=1}^n |v_i|^p \right)^{\frac{1}{p}}$$

    Note that when $p = 2$, this is the same as the $L_2$ norm. For other values of $p$, this is a generalization. Something to think about: why is there an absolute value in the definition?

All of these norms measure the length of a vector, but in different ways. This might ring a bell: we saw very similar tradeoffs between squared and absolute losses in Chapter 1.

Believe it or not, all three of these norms satisfy the same “Three Properties” we discussed earlier.

Back to $\vec x = \begin{bmatrix} 12 \\ 5 \end{bmatrix}$. What are the $L_2$, $L_1$, and $L_\infty$ norms of $\vec x$?

Image produced in Jupyter

Here:

  • $\lVert \vec x \rVert_2 = \sqrt{12^2 + 5^2} = \sqrt{144 + 25} = \sqrt{169} = 13$
  • $\lVert \vec x \rVert_1 = |12| + |5| = 12 + 5 = 17$
  • $\lVert \vec x \rVert_\infty = \max(|12|, |5|) = 12$
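np.linalg.norm can compute all three of these directly, via its ord argument – a quick sketch:

```python
import numpy as np

x = np.array([12, 5])

print(np.linalg.norm(x))               # L2 norm (the default): 13.0
print(np.linalg.norm(x, ord=1))        # L1 norm: 17.0
print(np.linalg.norm(x, ord=np.inf))   # L_infinity norm: 12.0
```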

Let’s revisit the idea of a norm ball. Using the standard $L_2$ norm, the norm ball in $\mathbb{R}^2$ is a circle. What does the norm ball look like for the $L_1$ and $L_\infty$ norms? Or $L_p$ with an arbitrary $p$?

Image produced in Jupyter

Most notably, the $L_1$ norm ball looks like a diamond, and the $L_\infty$ norm ball looks like a square. The $L_{1.3}$ norm ball looks like a diamond with rounded corners. This is not the last you’ll see of these norm balls – in particular, in future machine learning courses, you’ll see them again in the context of regularization, which is a technique for preventing overfitting in our models.

np.linalg.norm and Vectorization

It’s been a while since we’ve experimented with numpy. A few things:

  • As we’ve seen, arrays can be added element-wise by default.
  • Arrays can also be multiplied by scalars out-of-the-box, meaning that linear combinations of arrays (vectors) are easy to compute. The above two facts mean that array operations are vectorized: they are applied to each element of the array in parallel, without needing to use a for-loop.
  • To compute the ($L_2$) norm of an array (vector), we can use np.linalg.norm, as shown in the sketch after this list.
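Putting those three facts together, here’s a small sketch – a linear combination computed without any for-loops, followed by its norm:

```python
import numpy as np

u = np.array([3, 1])
v = np.array([4, -6])

# A linear combination, computed element-wise with no for-loop.
b = 3 * u - 0.5 * v           # array([7., 6.])

# The L2 norm of the result.
print(np.linalg.norm(b))      # 9.2195..., i.e. sqrt(7**2 + 6**2) = sqrt(85)
```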

Suppose you didn’t know about np.linalg.norm. There’s another way to compute the norm of an array (vector), that doesn’t involve a for-loop. Follow the activity to discover it.

In general, we’ll want to avoid Python for-loops in our code when there are numpy-native alternatives, as these numpy functions are optimized to use C (the programming language) under the hood for speed and memory efficiency.