Drag the plot above to look at all five vectors from different angles. You should notice that all of the linear combinations of w and r lie on the same plane! We’ll develop more precise terminology to describe this idea, once again, in Chapter 4.1.
Property 1 states that it’s impossible for a vector to have a negative norm. To calculate the norm of a vector, we sum the squares of each of the vector’s components. As long as each component $v_i$ is a real number, then $v_i^2 \geq 0$, and so $\sum_{i=1}^n v_i^2 \geq 0$. The square root of a non-negative number is always non-negative, so the norm of a vector is always non-negative. The only case in which $\sum_{i=1}^n v_i^2 = 0$ is when each $v_i = 0$, so the only vector with a norm of 0 is the zero vector.
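In symbols, the whole argument is:

$$\lVert \vec v \rVert = \sqrt{v_1^2 + v_2^2 + \dots + v_n^2} \geq \sqrt{0} = 0$$

with equality exactly when every $v_i = 0$.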
Property 2 states that scaling a vector by a scalar scales its norm by the absolute value of the scalar. For instance, it says that both 2v and −2v should be double the length of v. At this point, see if you can prove why this is the case.
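You can sanity-check this numerically before proving it. A minimal sketch, with the vector (3, 4) chosen just for illustration:

import numpy as np

v = np.array([3, 4])           # a vector with norm 5

print(np.linalg.norm(2 * v))   # 10.0, i.e. 2 * 5
print(np.linalg.norm(-2 * v))  # also 10.0, since |-2| = 2
print(2 * np.linalg.norm(v))   # 10.0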
Property 3 is a bit more interesting. As a reminder, it states that:
∥u+v∥≤∥u∥+∥v∥
This is a famous inequality, generally known as the triangle inequality, and it comes up all the time in proofs. Intuitively, it says that the length of a sum of vectors cannot be greater than the sum of the lengths of the individual vectors – or, more philosophically, a sum cannot be more than its parts. It’s called the triangle inequality because it’s a generalization of the fact that in a triangle, the sum of the lengths of any two sides is greater than the length of the third side.
To prove that the triangle inequality holds in general, for any two vectors u,v∈Rn, we’ll need to wait until Chapter 3.3. We don’t currently have any way to expand the norm ∥u+v∥ – but we’ll develop the tools to do so soon. Just keep it in mind for now.
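That said, we can still spot-check the inequality numerically. A minimal sketch that draws random pairs of vectors (the seed and dimension are arbitrary):

import numpy as np

rng = np.random.default_rng(42)

for _ in range(5):
    u = rng.standard_normal(3)
    v = rng.standard_normal(3)
    # The triangle inequality should hold for every pair we draw.
    assert np.linalg.norm(u + v) <= np.linalg.norm(u) + np.linalg.norm(v)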
Activity 2
The triangle inequality says that:
∥u+v∥≤∥u∥+∥v∥
In the example above, we had $\sqrt{74} < \sqrt{10} + \sqrt{52}$. In other words, there was strict inequality.
Find a pair of vectors u,v (say, in R2) such that the triangle inequality achieves equality, i.e. ∥u+v∥=∥u∥+∥v∥.
It’s common to use unit vectors to describe directions. I’ll use the same example as in Activity 1, when this idea was first introduced. Consider the vector $\vec x = \begin{bmatrix} 12 \\ 5 \end{bmatrix}$. Its norm is $\lVert \vec x \rVert = \sqrt{12^2 + 5^2} = \sqrt{169} = 13$. (You might remember the (5, 12, 13) Pythagorean triple from high school algebra – but that’s not important.)
There are plenty of vectors that point in the same direction as x – any vector cx for c>0 does. (If c<0, then the vector cx points in the opposite direction of x.)
But among all those, the only one with a norm of 1 is $\frac{1}{13} \vec x$. Property 2 of the norm tells us this: $\lVert \frac{1}{13} \vec x \rVert = \frac{1}{13} \lVert \vec x \rVert = \frac{13}{13} = 1$.
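In code, normalizing is just a division by the norm. A minimal numpy sketch, using the vector (12, 5) from above:

import numpy as np

x = np.array([12, 5])
x_unit = x / np.linalg.norm(x)  # scale x down by its own length

print(x_unit)                   # [0.92307692 0.38461538], i.e. (12/13, 5/13)
print(np.linalg.norm(x_unit))   # 1.0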
What do these vectors all have in common, other than being unit vectors? They all lie on a circle of radius 1, centered at (0,0)!
from utils import plot_vectors
import numpy as np
import plotly.graph_objects as go

# Compute points on the unit circle
theta = np.linspace(0, 2 * np.pi, 200)
x_circle = np.cos(theta)
y_circle = np.sin(theta)

# Draw the unit vectors
fig = plot_vectors([
    ((3 / np.sqrt(10), 1 / np.sqrt(10)), 'black', r'$\frac{\vec u}{\lVert \vec u \rVert}$'),
    ((-1 / np.sqrt(2), -1 / np.sqrt(2)), 'black', r'$\frac{\vec v}{\lVert \vec v \rVert}$'),
    ((7 / np.sqrt(74), -5 / np.sqrt(74)), 'black', r'$\frac{\vec w}{\lVert \vec w \rVert}$'),
    ((-12 / 13, 5 / 13), 'black', r'$\frac{\vec x}{\lVert \vec x \rVert}$'),
    ((-1 / np.sqrt(37), -6 / np.sqrt(37)), 'black', r'$\frac{\vec y}{\lVert \vec y \rVert}$'),
], vdeltax=0, vdeltay=0)

# Add the dashed circle outline on top
fig.add_trace(go.Scatter(
    x=x_circle,
    y=y_circle,
    mode="lines",
    line=dict(color="gray", dash="dash", width=3),
    hoverinfo='skip',
    showlegend=False,
))

# Square figure with equal scaling on both axes
fig.update_layout(width=400, height=400, yaxis_scaleanchor="x")
# np.arange excludes the endpoint, so stop at 2.0 to include a tick at 1.5
fig.update_xaxes(range=[-1.5, 1.5], tickvals=np.arange(-1.5, 2.0, 0.5))
fig.update_yaxes(range=[-1.5, 1.5], tickvals=np.arange(-1.5, 2.0, 0.5))
fig.show(scale=3)
The circle shown above is called the norm ball of radius 1 in R2. It shows the set of all vectors v∈R2 such that ∥v∥=1. Using set notation, we might say:
$$\{ \vec v : \lVert \vec v \rVert = 1, \ \vec v \in \mathbb{R}^2 \}$$
That this looks like a circle is no coincidence. The condition $\lVert \vec v \rVert = 1$ is equivalent to $\sqrt{v_1^2 + v_2^2} = 1$. Squaring both sides, we get $v_1^2 + v_2^2 = 1$. This is the equation of a circle with radius 1 centered at the origin.
In $\mathbb{R}^3$, the norm ball of radius 1 is a sphere, and in general, in $\mathbb{R}^n$, the norm ball of radius 1 is the $n$-dimensional analog of a sphere, often called a hypersphere.
Activity 3
Find the unit vector that points in the same direction as $\vec z = \begin{bmatrix} 4 \\ 0 \\ -3 \\ -3 \end{bmatrix}$.
So far, we’ve only discussed one “norm” of a vector, sometimes called the L2 norm or Euclidean norm. In general, if $\vec v \in \mathbb{R}^n$ is the vector $\begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix}$, then its norm is:
$$\lVert \vec v \rVert = \sqrt{v_1^2 + v_2^2 + \dots + v_n^2} = \sqrt{\sum_{i=1}^n v_i^2}$$
This is, by far, the most common and most relevant norm, and in many linear algebra classes, it’s the only norm you’ll see. But in machine learning, a few other norms are relevant, too, so I’ll briefly discuss them here.
The L1 or Manhattan norm of v is:
$$\lVert \vec v \rVert_1 = |v_1| + |v_2| + \dots + |v_n| = \sum_{i=1}^n |v_i|$$
It’s called the Manhattan norm because it’s the distance you would travel if you walked from the origin to v in a grid of streets, where you can only move horizontally or vertically.
The L∞ or maximum norm of v is:
$$\lVert \vec v \rVert_\infty = \max_i |v_i|$$
This is the largest absolute value of any component of v.
For any p≥1, the Lp norm of v is:
$$\lVert \vec v \rVert_p = \left( \sum_{i=1}^n |v_i|^p \right)^{1/p}$$
Note that when p=2, this is the same as the L2 norm. For other values of p, this is a generalization. Something to think about: why is there an absolute value in the definition?
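By the way, numpy’s np.linalg.norm computes all of these via its ord argument. A minimal sketch, with the vector (3, −4) chosen purely for illustration:

import numpy as np

v = np.array([3, -4])

print(np.linalg.norm(v))              # L2 norm (the default): 5.0
print(np.linalg.norm(v, ord=1))       # L1 norm: |3| + |-4| = 7.0
print(np.linalg.norm(v, ord=np.inf))  # L-infinity norm: max(|3|, |-4|) = 4.0
print(np.linalg.norm(v, ord=3))       # L3 norm: (27 + 64) ** (1/3) ≈ 4.4979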
All of these norms measure the length of a vector, but in different ways. This might ring a bell: we saw very similar tradeoffs between squared and absolute losses in Chapter 1.
Believe it or not, all three of these norms satisfy the same “Three Properties” we discussed earlier.
Back to $\vec x = \begin{bmatrix} 12 \\ 5 \end{bmatrix}$. What are the L2, L1, and L∞ norms of $\vec x$?
Let’s revisit the idea of a norm ball. Using the standard L2 norm, the norm ball in R2 is a circle. What does the norm ball look like for the L1 and L∞ norms? Or Lp with an arbitrary p?
The L1 norm ball looks like a diamond. Any vector with an L1 norm of 1 will lie on the boundary of the ball. The L∞ norm ball looks like a square, and the L1.3 norm ball looks like a diamond with rounded corners.
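If you want to draw these shapes yourself, here’s a minimal plotly sketch (it assumes only numpy and plotly, not the plot_vectors helper from earlier). It takes directions around the circle and rescales each one so its Lp norm is exactly 1, which traces out the boundary of each norm ball:

import numpy as np
import plotly.graph_objects as go

theta = np.linspace(0, 2 * np.pi, 400)
d = np.stack([np.cos(theta), np.sin(theta)])  # directions, shape (2, 400)

fig = go.Figure()
for p in [1, 1.3, 2, np.inf]:
    # Rescale each direction so its Lp norm is 1.
    ball = d / np.linalg.norm(d, ord=p, axis=0)
    fig.add_trace(go.Scatter(x=ball[0], y=ball[1], mode="lines", name=f"p = {p}"))

fig.update_layout(width=400, height=400, yaxis_scaleanchor="x")
fig.show()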
This is not the last you’ll see of these norm balls – in particular, in future machine learning courses, you’ll see them again in the context of regularization, which is a technique for preventing overfitting in our models.
Activity 4
In the case of $\vec x = \begin{bmatrix} 12 \\ 5 \end{bmatrix}$, the L1 norm ($12 + 5 = 17$) is greater than the L2 norm ($\sqrt{12^2 + 5^2} = \sqrt{169} = 13$), i.e. $\lVert \vec x \rVert_1 > \lVert \vec x \rVert_2$.
Find a vector $\vec y \in \mathbb{R}^2$ such that $\lVert \vec y \rVert_1 = \lVert \vec y \rVert_2$.
Try and find a vector $\vec z \in \mathbb{R}^2$ such that $\lVert \vec z \rVert_1 < \lVert \vec z \rVert_2$. What do you encounter?
Prove that $\lVert \vec x \rVert_2 \leq \sqrt{n} \, \lVert \vec x \rVert_\infty$ for any vector $\vec x \in \mathbb{R}^n$. (Hint: Start with the definition of the L2 norm of $\vec x$, square it, and try and compare each element in the sum to the largest element in the vector.)
It’s been a while since we’ve experimented with numpy. A few things:
As we’ve seen, arrays can be added element-wise by default.
Arrays can also be multiplied by scalars out-of-the-box, meaning that linear combinations of arrays (vectors) are easy to compute. The above two facts mean that array operations are vectorized: they are applied to every element of the array at once, without needing to write a for-loop.
To compute the (L2) norm of an array (vector), we can use np.linalg.norm.
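To make those three facts concrete, here’s a minimal sketch (u and v are arbitrary examples):

import numpy as np

u = np.array([1, 2, 3])
v = np.array([4, 5, 6])

print(u + v)              # element-wise sum: [5 7 9]
print(3 * u - 0.5 * v)    # a linear combination: [1.  3.5 6. ]
print(np.linalg.norm(u))  # L2 norm of u: sqrt(14) ≈ 3.742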
Suppose you didn’t know about np.linalg.norm. There’s another way to compute the norm of an array (vector), that doesn’t involve a for-loop. Follow the activity to discover it.
Activity 5
In the cell above:
Write u ** 2. Squaring a vector is not an operation we’ve discussed (and isn’t an operation that exists in math), but numpy gives you back another array. What does this array contain?
Using np.sum and the new array you just created, find the norm of u without using np.linalg.norm.
Find the norm of 3 * u - 0.5 * v using the same technique, and make sure you get the same result as was already displayed for you.
In general, we’ll want to avoid Python for-loops in our code when there are numpy-native alternatives, as these numpy functions are optimized to use C (the programming language) under the hood for speed and memory efficiency.
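As a rough illustration (exact timings vary from machine to machine), here’s a sketch that compares a Python for-loop against np.linalg.norm on a large array:

import time
import numpy as np

v = np.random.default_rng(0).standard_normal(1_000_000)

# Norm via an explicit Python for-loop.
start = time.perf_counter()
total = 0.0
for x in v:
    total += x ** 2
loop_norm = total ** 0.5
loop_time = time.perf_counter() - start

# Norm via vectorized numpy.
start = time.perf_counter()
np_norm = np.linalg.norm(v)
np_time = time.perf_counter() - start

print(abs(loop_norm - np_norm))  # tiny difference (floating-point error)
print(loop_time / np_time)       # typically a factor in the hundreds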