
Elementary Linear Algebra: For University of Lethbridge Math 1410

Section 2.7 Span and Linear Independence

So far in this chapter, we have restricted our attention to vectors in two or three dimensions, where we are able to visualize things geometrically. You may have noticed that the algebraic rules for the addition and scalar multiplication of vectors are the same in both dimensions.
In this section, we consider this algebra of vectors abstractly, which will allow us to move beyond three dimensions to vectors with an arbitrary number of components.
Despite the fact that we cannot easily visualize them, higher-dimensional vectors frequently arise in applications where there are many variables involved. The fact that the rules of algebra remain the same means that we can continue to manipulate these objects, even though we can no longer picture them.

Subsection 2.7.1 The vector space \(\R^n\)

For each positive integer \(n\text{,}\) a column vector \(\vec{x}\) is formed by arranging \(n\) real numbers \(x_1, x_2, \ldots, x_n\) into a column \(\vec{x} = \bbm x_1\\x_2\\\vdots \\x_n\ebm\text{.}\) In Chapter 4 we will see that this is a special type of matrix; for now, we can think of a column vector as an alternative to the notation \(\langle x_1, x_2, \ldots, x_n\rangle\) encountered earlier in this chapter. In particular, we still refer to the numbers \(x_1,x_2,\ldots, x_n\) in \(\vec{x}\) as the components of \(\vec{x}\text{,}\) and we define addition and scalar multiplication of column vectors in terms of their components, as we did for vectors in \(\mathbb{R}^2\) and \(\mathbb{R}^3\text{.}\)
That is, given vectors \(\vec{x} = \bbm x_1\\x_2\\\vdots\\x_n\ebm\) and \(\vec{y} = \bbm y_1\\y_2\\\vdots \\y_n\ebm\text{,}\) and a scalar \(c\text{,}\) we define
\begin{equation*} \vec{x}+\vec{y} = \bbm x_1\\x_2\\\vdots\\x_n\ebm + \bbm y_1\\y_2\\\vdots \\y_n\ebm = \bbm x_1+y_1\\x_2+y_2\\\vdots \\x_n+y_n\ebm \end{equation*}
and
\begin{equation*} c\vec{x} = c\bbm x_1\\x_2\\\vdots\\x_n\ebm = \bbm cx_1\\cx_2\\\vdots\\cx_n\ebm\text{.} \end{equation*}
With these operations, the set of all \(n\times 1\) column vectors provides an example of what is known as a vector space.
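These componentwise rules are exactly how numerical software handles vectors, whatever the value of \(n\text{.}\) The following minimal sketch, using Python's NumPy library with sample vectors invented for illustration, mirrors the two definitions above:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])   # a sample vector in R^4
y = np.array([0.0, -1.0, 5.0, 2.0])
c = 3.0

print(x + y)   # componentwise sum:             [1. 1. 8. 6.]
print(c * x)   # componentwise scalar multiple: [ 3.  6.  9. 12.]
```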

Definition 2.7.1. The vector space \(\mathbb{R}^n\).

The space of all column vectors of \(n\) real numbers is denoted by
\begin{equation*} \mathbb{R}^n = \left\{\left.\bbm x_1\\x_2 \\ \vdots \\x_n\ebm \,\,\right| \, x_1, x_2, \ldots, x_n\in \mathbb{R}\right\}\text{.} \end{equation*}
As with the vectors in \(\mathbb{R}^2\) and \(\mathbb{R}^3\) we encountered earlier in this chapter, we allow the notation \(\mathbb{R}^n\) to represent both the space of points \((x_1,x_2, \ldots, x_n)\text{,}\) and the set of vectors defined within that space. Since we can identify any point \(P\) with the position vector \(\overrightarrow{OP}\text{,}\) the difference between viewing \(\mathbb{R}^n\) as a set of points or as a set of vectors is primarily one of perspective.
When \(n\geq 4\) we can no longer visualize vectors in \(\mathbb{R}^n\) geometrically, but we can handle them algebraically exactly as we did before, and we can extend the earlier definitions of this chapter to apply to vectors in \(\mathbb{R}^n\text{.}\)
In particular, we can define the length of a vector
\begin{equation*} \vec{x}=\bbm x_1\\x_2\\ \vdots\\ x_n\ebm\in\mathbb{R}^n \end{equation*}
by
\begin{equation*} \norm{\vec{x}} = \sqrt{x_1^2+x_2^2+\cdots +x_n^2}\text{,} \end{equation*}
and the dot product of vectors \(\vec{x}, \vec{y}\in\mathbb{R}^n\) by
\begin{equation*} \dotp xy = x_1y_1+x_2y_2+\cdots + x_ny_n\text{.} \end{equation*}
Having defined the dot product, we can still declare two vectors \(\vec x\) and \(\vec y\) to be orthogonal if \(\dotp xy = 0\text{,}\) and define the angle between two vectors by requiring that the identity
\begin{equation*} \dotp xy = \norm{\vec{x}}\norm{\vec{y}}\cos\theta \end{equation*}
remain valid. Using these definitions, along with Theorem 4.1.10, we can see that all of the properties of vector operations given in Theorem 2.2.16 remain valid in \(\mathbb{R}^n\text{.}\)
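As a quick illustration, the length, dot product, and angle formulas translate directly into code. In the sketch below, the vectors are sample values chosen so that the dot product works out to zero:

```python
import numpy as np

x = np.array([1.0, 2.0, 2.0, 0.0])   # sample vectors in R^4
y = np.array([2.0, -1.0, 0.0, 3.0])

norm_x = np.sqrt(np.sum(x**2))   # sqrt(x1^2 + ... + xn^2); here 3.0
dot_xy = np.dot(x, y)            # x1*y1 + ... + xn*yn; here 0.0

# Since the dot product is 0, x and y are orthogonal; in general the
# angle is recovered from the identity above.
theta = np.arccos(dot_xy / (np.linalg.norm(x) * np.linalg.norm(y)))
print(norm_x, dot_xy, np.degrees(theta))   # 3.0 0.0 90.0
```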
The ten properties listed in Theorem 2.7.2 are known as the vector space axioms. Any set of objects satisfying these axioms is known as a vector space. There are many interesting examples of vector spaces other than \(\mathbb{R}^n\text{,}\) but we will not study vector spaces in general in this text.

Subsection 2.7.2 Linear combinations and span

One of the key insights of linear algebra is that a space such as \(\mathbb{R}^n\text{,}\) which contains infinitely many objects, can be generated using the operations of addition and scalar multiplication from a finite set of basic objects. We saw earlier in this chapter, for example, that every vector in \(\mathbb{R}^3\) can be written in terms of just three basic unit vectors \(\veci\text{,}\) \(\vecj\text{,}\) and \(\veck\text{.}\)
Since addition and scalar multiplication are the main operations of linear algebra, it’s not too surprising (if a little unimaginative) that any combination of these operations is called a linear combination.

Definition 2.7.3. Linear combination in \(\mathbb{R}^n\).

A linear combination in \(\mathbb{R}^n\) is any expression of the form
\begin{equation*} c_1\vec{v}_1+c_2\vec{v}_2+\cdots + c_k\vec{v}_k\text{,} \end{equation*}
where \(c_1, c_2, \ldots, c_k\in \mathbb{R}\) are scalars, and \(\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_k\in\mathbb{R}^n\) are vectors.

Example 2.7.4. Forming linear combinations.

Let \(\vec{u} = \bbm 2\\-1\\3\ebm, \vec{v} = \bbm -4\\ 6\\ 3\ebm, \vec{w} = \bbm 2\\3\\12\ebm\) be vectors in \(\mathbb{R}^3\text{.}\) Form the following linear combinations:
  1. \(\displaystyle 3\vec{u}-4\vec{w}\)
  2. \(\displaystyle \vec{u}+\vec{v}-2\vec{w}\)
  3. \(\displaystyle 7\vec{u}+3\vec{v}\)
  4. \(\displaystyle 3\vec{u}+\vec{v}-\vec{w}\)
Solution.
  1. To simplify the linear combination, we first take care of the scalar multiplication, and then perform the addition. (We choose to interpret this expression as \(3\vec{u}+(-4)\vec{w}\text{:}\) we multiply by \(-4\) in the first step and add in the second, rather than multiplying by \(4\) and then subtracting.)
    \begin{equation*} 3\vec{u}-4\vec{w} = 3\bbm 2\\-1\\3\ebm -4\bbm 2\\3\\12\ebm = \bbm 6\\-3\\9\ebm + \bbm -8\\-12\\-48\ebm = \bbm -2\\-15\\-39\ebm\text{.} \end{equation*}
  2. We proceed as with the previous problem, this time performing the scalar multiplication of \(\vec{w}\) by \(-2\) in our heads:
    \begin{equation*} \vec{u}+\vec{v}-2\vec{w} = \bbm 2\\-1\\3\ebm + \bbm -4\\6\\3\ebm + \bbm -4\\-6\\-24\ebm = \bbm -6\\-1\\-18\ebm\text{.} \end{equation*}
  3. We find
    \begin{equation*} 7\vec{u}+3\vec{v} = 7\bbm 2\\-1\\3\ebm+3\bbm -4\\6\\3\ebm = \bbm 14\\-7\\21\ebm+\bbm -12\\18\\9\ebm = \bbm 2\\11\\30\ebm\text{.} \end{equation*}
  4. For our last example, we compute
    \begin{equation*} 3\vec{u}+\vec{v}-\vec{w} = \bbm 6\\-3\\9\ebm + \bbm -4\\6\\3\ebm - \bbm 2\\3\\12\ebm = \bbm 0\\0\\0\ebm\text{.} \end{equation*}
Notice that in the last example above, our linear combination works out to be the zero vector. Let’s think about this geometrically for a second: using the “tip-to-tail” method for adding vectors and beginning with the tail of \(3\vec{u}\) at the origin, if we add the vector \(\vec{v}\) at the tip of \(3\vec{u}\text{,}\) and then subtract \(\vec{w}\text{,}\) we end up back at the origin. The vectors \(3\vec{u}\text{,}\) \(\vec{v}\text{,}\) and \(\vec{w}\) must therefore lie in the same plane, since they form three sides of a triangle, as depicted in Figure 2.7.5.
Figure 2.7.5. Depicting the last linear combination in Example 2.7.4: the vectors \(3\vec{u}\) and \(\vec{v}\) add to give \(\vec{w}\text{,}\) forming a triangle in a plane.
Viewed another way, notice that we can solve the equation \(3\vec{u}+\vec{v}-\vec{w}=\vec{0}\) for \(\vec{w}\text{:}\) we have
\begin{equation*} \vec{w} = 3\vec{u}+\vec{v}\text{.} \end{equation*}
What this tells us is that when we’re being asked to form linear combinations of the vectors \(\vec{u}, \vec{v}\text{,}\) and \(\vec{w}\) in Example 2.7.4, the vector \(\vec{w}\) is redundant. Suppose the vector \(\vec{x}\) is an arbitrary linear combination of these vectors; that is,
\begin{equation*} \vec{x} = a\vec{u}+b\vec{v}+c\vec{w} \end{equation*}
for some scalars \(a,b,c\text{.}\) If we plug in \(\vec{w} = 3\vec{u}+\vec{v}\text{,}\) then we get
\begin{align*} \vec{x} \amp = a\vec{u}+b\vec{v}+c(3\vec{u}+\vec{v})\\ \amp = a\vec{u}+b\vec{v}+3c\vec{u}+c\vec{v} \quad \text{ (distribute the scalar)}\\ \amp = (a+3c)\vec{u} + (b+c)\vec{v} \quad \text{ (collect terms)}\text{.} \end{align*}
Thus, \(\vec{x}\) has been written in terms of \(\vec{u}\) and \(\vec{v}\) only.
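We can spot-check this substitution numerically. In the sketch below, the values of \(a\text{,}\) \(b\text{,}\) and \(c\) are arbitrary choices made for illustration:

```python
import numpy as np

u = np.array([2, -1, 3])
v = np.array([-4, 6, 3])
w = 3*u + v                # the redundant vector: w = 3u + v

a, b, c = 2.0, -5.0, 4.0   # arbitrary sample scalars, not from the text
x = a*u + b*v + c*w
print(np.allclose(x, (a + 3*c)*u + (b + c)*v))   # True: u and v suffice
```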
These ideas come up frequently enough in Linear Algebra that they have associated terminology. The definitions that follow seem innocent enough, but their importance to the theory of Linear Algebra cannot be overstated.

Definition 2.7.6. The span of a set of vectors.

Let \(A = \{\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_k\}\) be a set of vectors in \(\mathbb{R}^n\text{.}\) The span of the vectors in \(A\text{,}\) denoted \(\operatorname{span}(A)\text{,}\) is the set \(S\) of all possible linear combinations of the vectors in \(A\text{.}\) That is,
\begin{equation*} S = \operatorname{span}(A) = \{c_1\vec{v}_1+c_2\vec{v}_2+\cdots + c_k\vec{v}_k \,|\, c_1, c_2, \ldots, c_k \in \mathbb{R}\}\text{.} \end{equation*}

Example 2.7.7. Describing spans in \(\mathbb{R}^3\).

Let \(\vec{u}, \vec{v}, \vec{w}\) be as in Example 2.7.4. Describe the following spans:
  1. \(\displaystyle \operatorname{span}\{\vec{u}\}\)
  2. \(\displaystyle \operatorname{span}\{\vec{u},\vec{v}\}\)
  3. \(\displaystyle \operatorname{span}\{\vec{u}, \vec{v}, \vec{w}\}\)
Solution.
  1. As a set, we have
    \begin{equation*} \operatorname{span}\{\vec{u}\} = \left\{\left.t\bbm 2\\-1\\3\ebm \, \right|\, t\in \mathbb{R}\right\}\text{,} \end{equation*}
    the set of all scalar multiples of the vector \(\vec{u}\text{.}\)
    If we think back to Section 2.5, we can do a bit better with our description. The set \(\operatorname{span}\{\vec{u}\}\) consists of all vectors \(\bbm x\\y\\z\ebm\) such that
    \begin{equation*} \bbm x\\y\\z\ebm = t\bbm 2\\-1\\3\ebm = \bbm 0\\0\\0\ebm + t\bbm 2\\-1\\3\ebm\text{,} \end{equation*}
    which we recognize as the equation of a line through the origin in \(\mathbb{R}^3\) in the direction of the vector \(\vec{u}\text{.}\)
  2. Again, as a set we can write
    \begin{equation*} \operatorname{span}\{\vec{u},\vec{v}\} = \left\{\left.s\bbm 2\\-1\\3\ebm+t\bbm -4\\6\\3\ebm \, \right| \, s,t\in \mathbb{R}\right\}\text{,} \end{equation*}
    so \(\operatorname{span}\{\vec{u},\vec{v}\}\) consists of all vectors of the form \(\bbm 2s-4t\\-s+6t\\3s+3t\ebm\text{,}\) where \(s\) and \(t\) can be any real numbers. Again, with a bit of thought, we can come up with a geometric description of this set. Consider an arbitrary vector
    \begin{equation*} \vec{x} = s\vec{u}+t\vec{v} \in \operatorname{span}\{\vec{u},\vec{v}\}\text{.} \end{equation*}
    Any such vector can be obtained by moving some distance (measured by the scalar \(s\)) in the direction of \(\vec{u}\text{,}\) and then moving another distance (measured by the scalar \(t\)) in the direction of \(\vec{v}\text{.}\) We now have two directions in which to move, and if we haven’t forgotten what we learned in Section 2.6, this probably reminds us of the description of a plane.
    To see that \(\operatorname{span}\{\vec{u},\vec{v}\}\) is indeed a plane, we compute
    \begin{equation*} \vec{n} = \vec{u}\times\vec{v} = \bbm -21\\-18\\8\ebm\text{,} \end{equation*}
    which we know is orthogonal to both \(\vec{u}\) and \(\vec{v}\text{.}\) It follows from the properties of the dot product that for any other vector \(\vec{x}=s\vec{u}+t\vec{v}\) we have
    \begin{equation*} \dotp nx = \vec{n}\boldsymbol{\cdot}(s\vec{u}+t\vec{v}) = s(\dotp nu) + t(\dotp nv) = s(0)+t(0)=0\text{,} \end{equation*}
    so with \(\vec{x} = \bbm x\\y\\z\ebm\text{,}\) we have
    \begin{equation*} -21x-18y+8z=0\text{,} \end{equation*}
    which is the equation of a plane through the origin.
  3. In the discussion following Example 2.7.4 we saw that any vector that can be written as a linear combination of \(\vec{u}\text{,}\) \(\vec{v}\text{,}\) and \(\vec{w}\) can be written as a linear combination of \(\vec{u}\) and \(\vec{v}\) alone. Thus, the span of \(\vec{u}, \vec{v}\text{,}\) and \(\vec{w}\) doesn’t contain anything we didn’t already have in the span of \(\vec{u}\) and \(\vec{v}\text{;}\) that is,
    \begin{equation*} \operatorname{span}\{\vec{u},\vec{v},\vec{w}\}=\operatorname{span}\{\vec{u},\vec{v}\}\text{.} \end{equation*}

Example 2.7.8. Determining membership in a span.

Given the vectors \(\vec{u} = \bbm 2\\-1\\1\ebm\text{,}\) \(\vec{v} = \bbm 3\\2\\5\ebm\text{,}\) and \(\vec{w} = \bbm -2\\5\\3\ebm\text{,}\) determine whether or not the following vectors belong to \(\operatorname{span}\{\vec{u},\vec{v},\vec{w}\}\text{:}\)
  1. \(\displaystyle \vec{x} = \bbm 3\\6\\9\ebm\)
  2. \(\displaystyle \vec{y} = \bbm 4\\1\\-3\ebm\)
Solution.
We do not yet have a general technique for solving problems of this type. Notice that the question “Does \(\vec{x}\) belong to the span of \(\{\vec{u},\vec{v},\vec{w}\}\text{?}\)” is equivalent to the question, “Do there exist scalars \(a,b,c\) such that
\begin{equation*} a\vec{u}+b\vec{v}+c\vec{w}=\vec{x}\text{?} \end{equation*}
” Answering this question amounts to solving a system of linear equations: if we plug in our vectors, we have
\begin{equation*} a\bbm 2\\-1\\1\ebm + b\bbm 3\\2\\5\ebm + c\bbm -2\\5\\3\ebm = \bbm 2a+3b-2c\\-a+2b+5c\\a+5b+3c\ebm = \bbm 3\\6\\9\ebm\text{.} \end{equation*}
By definition of the equality of vectors, this amounts to the system of equations
\begin{equation*} \arraycolsep=2pt \begin{array}{ccccccc} 2a\amp +\amp 3b\amp -\amp 2c\amp =\amp 3\\ -a\amp +\amp 2b\amp +\amp 5c\amp =\amp 6\\ a\amp +\amp 5b\amp +\amp 3c\amp =\amp 9 \end{array}\text{.} \end{equation*}
We will develop systematic techniques for solving such systems in the next chapter. Until then, is there anything we can say? The very astute reader might notice that the vectors \(\vec{u},\vec{v}, \vec{w}\) all have something in common: their third component is the sum of the first two: \(1=2+(-1)\) for \(\vec{u}\text{,}\) \(5=3+2\) for \(\vec{v}\text{,}\) and \(3=-2+5\) for \(\vec{w}\text{.}\) Thus, all three vectors are of the form \(\bbm x\\y\\x+y\ebm\text{.}\) Now, notice what happens if we combine two such vectors:
\begin{equation*} s\bbm a\\b\\a+b\ebm+t\bbm c\\d\\c+d\ebm = \bbm sa+tc\\sb+td\\s(a+b)+t(c+d)\ebm = \bbm sa+tc\\sb+td\\(sa+tc)+(sb+td)\ebm, \end{equation*}
which is another vector of the same form. The same will be true for combinations of three or more such vectors.
For the vector \(\vec{x}\text{,}\) we check that \(3+6=9\text{,}\) so \(\vec{x}\) has the correct form, and indeed (with a bit of “guess and check” work), we find that \(a=b=c=1\) works, since \(\vec{u}+\vec{v}+\vec{w}=\vec{x}\text{.}\) Thus, we can conclude that
\begin{equation*} \vec{x}\in\operatorname{span}\{\vec{u},\vec{v},\vec{w}\}\text{.} \end{equation*}
For the vector \(\vec{y}\text{,}\) we add the first two components, getting \(4+1=5\neq -3\text{.}\) Since the third component is not the sum of the first two, there is no way that \(\vec{y}\) could belong to the span of \(\vec{u}\text{,}\) \(\vec{v}\text{,}\) and \(\vec{w}\text{.}\)
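Peeking ahead at the systematic techniques of the next chapter: a computer can answer both membership questions by attempting to solve the linear system. The sketch below uses NumPy's least-squares routine rather than a direct solver, since the coefficient matrix here happens to be singular; this is one possible approach, not the only one:

```python
import numpy as np

# Columns of A are u, v, w; membership in the span asks whether
# A @ [a, b, c] equals the target vector for some scalars a, b, c.
A = np.column_stack([[2, -1, 1], [3, 2, 5], [-2, 5, 3]])
x = np.array([3, 6, 9])
y = np.array([4, 1, -3])

# A is singular (u, v, w are dependent), so we use least squares and
# then test whether the best fit is an exact fit.
for target in (x, y):
    coeffs, _, _, _ = np.linalg.lstsq(A, target, rcond=None)
    print(np.allclose(A @ coeffs, target))   # True for x, False for y
```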

Subsection 2.7.3 Linear independence

Notice in Example 2.7.7 that the span did not change when we added the vector \(\vec{w}\) to the set of spanning vectors. This was probably not too surprising, since we saw that \(\vec{w} = 3\vec{u}+\vec{v}\text{,}\) meaning that \(\vec{w}\) is a linear combination of \(\vec{u}\) and \(\vec{v}\text{,}\) and thus,
\begin{equation*} \vec{w}\in\operatorname{span}\{\vec{u},\vec{v}\}\text{.} \end{equation*}
We don’t get anything new when we include the vector \(\vec{w}\) since it lies in the plane spanned by \(\vec{u}\) and \(\vec{v}\text{.}\) We say that the vector \(\vec{w}\) depends on \(\vec{u}\) and \(\vec{v}\text{,}\) in the same way that the total of a sum depends on the numbers being added. Since this dependence is defined in terms of linear combinations, we say that the vectors \(\vec{u},\vec{v},\vec{w}\) are linearly dependent.
In general, a set of vectors \(\vec{v}_1,\vec{v}_2,\ldots, \vec{v}_k\) is linearly dependent if one of the vectors can be written as a linear combination of the others. If this is impossible, we say that the vectors are linearly independent.
The formal definition is as follows.

Definition 2.7.9. Linear dependence.

We say that a set of vectors
\begin{equation*} A = \{\vec{v}_1,\vec{v}_2,\ldots, \vec{v}_k\} \end{equation*}
in \(\mathbb{R}^n\) is linearly dependent if \(\vec{0}\in A\text{,}\) or if one of the vectors \(\vec{v}_i\) in \(A\) can be written as a linear combination of the other vectors in \(A\text{.}\) If the set \(A\) is not linearly dependent, we say that it is linearly independent.

Example 2.7.10. Determining linear independence.

Determine whether or not the following sets of vectors are linearly independent:
  1. The vectors \(\vec{u} = \bbm 2\\-1\\1\ebm\text{,}\) \(\vec{v} = \bbm 3\\2\\5\ebm\text{,}\) and \(\vec{w} = \bbm -2\\5\\3\ebm\) from Example 2.7.8.
  2. The vectors
    \begin{equation*} \vec{x} = \bbm 2\\-1\\0\ebm, \vec{y} = \bbm -2\\3\\1\ebm, \quad \text{ and } \quad \vec{z} = \bbm 1\\1\\0\ebm\text{.} \end{equation*}
Solution.
Like problems involving span, a general approach to answering questions like these about linear independence will have to wait until we develop methods for solving systems of equations in the next chapter. However, for these two sets of vectors, we can reason our way to an answer.
  1. Here, we noticed that all three vectors satisfy the condition \(z=x+y\text{,}\) if we label their respective components as \(x\text{,}\) \(y\text{,}\) and \(z\text{.}\) But this condition is simply the equation of a plane; namely, \(x+y-z=0\text{.}\) Intuition tells us that any plane through the origin can be written as the span of two vectors, so we can expect that any one of the three vectors can be written in terms of the other two, and indeed, this is the case. With a bit of guesswork (or by asking a computer), we can determine that
    \begin{equation*} \vec{w} = -\frac{19}{7}\vec{u}+\frac{8}{7}\vec{v}\text{,} \end{equation*}
    showing that \(\vec{w}\) can be written as a linear combination of \(\vec{u}\) and \(\vec{v}\text{,}\) and thus, that our vectors are linearly dependent.
  2. Here, we make the useful observation that two of our three vectors have zero as their third component. Since \(\vec{x}\) and \(\vec{z}\) have third component zero, it is impossible for \(\vec{y}\) to be written as a linear combination of \(\vec{x}\) and \(\vec{z}\text{,}\) since any such linear combination would still have a zero in the third component. To see that \(\vec{x}\) cannot be written in terms of \(\vec{y}\) and \(\vec{z}\text{,}\) notice that for any \(a\) and \(b\text{,}\)
    \begin{equation*} a\vec{y}+b\vec{z} = \bbm -2a+b\\3a+b\\a\ebm\text{.} \end{equation*}
    If this is to equal \(\vec{x}\text{,}\) then we must have \(a=0\text{,}\) giving us \(\vec{x}=b\vec{z}\text{,}\) but it’s clear that \(\vec{x}\) is not a scalar multiple of \(\vec{z}\text{.}\) A similar argument shows that \(\vec{z}\) cannot be written in terms of \(\vec{x}\) and \(\vec{y}\text{,}\) and thus our vectors are linearly independent.
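For readers who want to verify such claims by computer, one standard test (which we will not justify until later chapters) stacks the vectors as columns of a matrix and compares its rank to the number of vectors. A sketch covering both parts of this example:

```python
import numpy as np

# Part 2: stack x, y, z as columns; full rank means independent.
x = np.array([2, -1, 0])
y = np.array([-2, 3, 1])
z = np.array([1, 1, 0])
print(np.linalg.matrix_rank(np.column_stack([x, y, z])))   # 3: independent

# Part 1: the vectors u, v, w from Example 2.7.8 lose a dimension.
u = np.array([2, -1, 1])
v = np.array([3, 2, 5])
w = np.array([-2, 5, 3])
print(np.linalg.matrix_rank(np.column_stack([u, v, w])))   # 2: dependent
```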
Another way to characterize linear independence is as follows: suppose we have a linear combination equal to the zero vector:
\begin{equation} c_1\vec{v}_1+c_2\vec{v}_2+\cdots +c_k\vec{v}_k = \vec{0}\text{.}\tag{2.7.1} \end{equation}
This is always possible, of course, since we can simply set each of the scalars equal to zero. Linear independence tells us that if our vectors are independent, then this trivial choice is the only way to obtain a linear combination equal to the zero vector.
To see why this rule works, suppose we can choose our scalars in Equation (2.7.1) so that at least one of them is non-zero. For simplicity, let’s say \(c_1\neq 0\text{.}\) Then we can rewrite Equation (2.7.1) as
\begin{equation*} c_1\vec{v}_1 = -c_2\vec{v}_2-\cdots - c_k\vec{v}_k\text{,} \end{equation*}
and since \(c_1\neq 0\text{,}\) we can divide both sides by \(c_1\text{,}\) and we’ve written \(\vec{v}_1\) as a linear combination of the remaining vectors.
For example, from Example 2.7.10 we can conclude (with a bit of rearranging) that the vectors \(\vec{u}\text{,}\) \(\vec{v}\text{,}\) and \(\vec{w}\) satisfy the relationship
\begin{equation*} 19\vec{u}-8\vec{v}+7\vec{w} = \vec{0}\text{.} \end{equation*}
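This relationship is easy to confirm numerically:

```python
import numpy as np

u = np.array([2, -1, 1])
v = np.array([3, 2, 5])
w = np.array([-2, 5, 3])
print(19*u - 8*v + 7*w)   # [0 0 0]
```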
Linear independence can be a difficult concept at first, but in three dimensions we can use Equation (2.7.1) to provide a visual interpretation on a case-by-case basis.

Key Idea 2.7.11. Linearly independent sets of vectors in \(\mathbb{R}^3\).

  • Any set \(\{\vec{u}\}\) containing a single vector in \(\mathbb{R}^3\) is linearly dependent if \(\vec{u}=\vec{0}\text{,}\) and independent otherwise. (Here Equation (2.7.1) becomes \(c\vec{u}=\vec{0}\text{.}\) If \(\vec{u}\neq\vec{0}\text{,}\) the only solution is to take \(c=0\text{.}\))
  • Any set \(\{\vec{u},\vec{v}\}\) containing two non-zero vectors in \(\mathbb{R}^3\) is linearly dependent if \(\vec{u}\) is parallel to \(\vec{v}\text{,}\) and independent otherwise. (In other words, two dependent vectors lie on the same line. Two independent vectors span a plane.)
  • Any set \(\{\vec{u},\vec{v},\vec{w}\}\) of three non-zero vectors in \(\mathbb{R}^3\) is linearly dependent if all three vectors lie in a common plane through the origin, and independent otherwise.
  • Any set of four or more vectors in \(\mathbb{R}^3\) is automatically linearly dependent.
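The last point deserves a quick illustration: four columns in \(\mathbb{R}^3\) give a matrix whose rank is at most \(3\text{,}\) so some dependence must exist. The vectors below are sample values invented for this sketch:

```python
import numpy as np

# Sample vectors: any four vectors in R^3 would do.
vs = [np.array([1, 0, 0]), np.array([0, 1, 0]),
      np.array([0, 0, 1]), np.array([1, 2, 3])]
A = np.column_stack(vs)
print(np.linalg.matrix_rank(A))   # 3, less than the number of vectors

# An explicit dependence for this sample: v4 = 1*v1 + 2*v2 + 3*v3.
print(np.allclose(vs[3], vs[0] + 2*vs[1] + 3*vs[2]))   # True
```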
This section introduced several new ideas. Some, like linear combinations, are straightforward. Others, like span and linear independence, take some getting used to. There remain two very obvious questions to address:
  1. How do we tell whether or not a given vector belongs to the span of a set of vectors?
  2. How do we tell if a set of vectors is linearly independent?
It turns out that both questions lead to systems of linear equations. As we saw in Example 2.7.8, we are currently unable to systematically solve such problems. In Chapters 3 and 4 we will develop the techniques needed to systematically solve such systems, at which point we will be able to easily answer questions about linear independence and span.
To see how such systems arise, suppose we want to know whether or not the vector \(\vec w = \bbm 2\\-1\\3\\0\ebm \in\mathbb{R}^4\) belongs to the set \(V = \operatorname{span}\{\vec{v}_1, \vec{v}_2, \vec{v}_3\}\text{,}\) where
\begin{equation*} \vec{v}_1 = \bbm 0\\2\\-1\\4\ebm, \vec{v}_2 = \bbm 3\\1\\0\\-4\ebm, \vec{v}_3 = \bbm -3\\6\\7\\2\ebm\text{.} \end{equation*}
By definition, \(V\) is the set of all possible linear combinations of the vectors \(\vec{v}_1, \vec{v}_2, \vec{v}_3\text{,}\) so saying that \(\vec w \in V\) is the same as saying that we can write \(\vec w\) as a linear combination of these vectors. Thus, what we want to know is whether or not there exist scalars \(x_1, x_2, x_3\) such that
\begin{equation*} \vec w = x_1\vec{v}_1 + x_2\vec{v}_2 + x_3\vec{v}_3\text{.} \end{equation*}
Substituting in the values for our vectors, this gives
\begin{equation*} x_1\bbm 0\\2\\-1\\4\ebm + x_2\bbm 3\\1\\0\\-4\ebm + x_3\bbm -3\\6\\7\\2\ebm = \bbm 3x_2-3x_3\\2x_1+x_2+6x_3\\-x_1+7x_3\\4x_1-4x_2+2x_3\ebm = \bbm 2\\-1\\3\\0\ebm\text{.} \end{equation*}
Since two vectors are equal if and only if each component is equal, the above vector equation leads to the following system of four equations:
\begin{equation*} \arraycolsep=2pt \begin{array}{ccccccc} \amp \amp 3x_2 \amp-\amp3x_3 \amp= \amp2\\ 2x_1\amp +\amp x_2\amp +\amp 6x_3\amp =\amp -1\\ -x_1\amp \amp \amp +\amp 7x_3\amp =\amp 3\\ 4x_1\amp -\amp 4x_2\amp +\amp 2x_3\amp =\amp 0 \end{array}\text{.} \end{equation*}
Thus, the question “Is the vector \(\vec w\) an element of \(V\text{?}\)” is equivalent to the question “Is there a solution to the above system of equations?”
Questions about linear independence are similar, but not quite the same. With the above example involving span, what we wanted to know is “Does a solution exist?” With linear independence, it is not whether a solution exists that is in doubt, but whether or not that solution is unique. For example, suppose we wanted to know if the vectors in our span example above are linearly independent. We would start with the vector equation
\begin{equation*} x_1\vec{v}_1+x_2\vec{v}_2+x_3\vec{v}_3 = \vec 0\text{,} \end{equation*}
and ask whether or not there are any solutions other than \(x_1=x_2=x_3=0\text{.}\)
This vector equation leads to a system just like the one above, except that the numbers to the right of the \(=\) signs would all be zeros. The techniques needed to answer these and other questions will be developed beginning in Chapter 3.
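To preview how a computer settles both questions for this example, here is a short NumPy sketch using least-squares and rank routines we have not formally developed; running it indicates that the system above is inconsistent (so \(\vec w\notin V\)) while \(\vec{v}_1, \vec{v}_2, \vec{v}_3\) are linearly independent:

```python
import numpy as np

v1 = np.array([0, 2, -1, 4])
v2 = np.array([3, 1, 0, -4])
v3 = np.array([-3, 6, 7, 2])
w  = np.array([2, -1, 3, 0])

A = np.column_stack([v1, v2, v3])   # 4x3 coefficient matrix of the system

# Span question: does A @ [x1, x2, x3] = w have a solution?
coeffs, _, _, _ = np.linalg.lstsq(A, w, rcond=None)
print(np.allclose(A @ coeffs, w))   # False: no exact solution, so w is not in V

# Independence question: is the zero vector the only solution of A @ x = 0?
print(np.linalg.matrix_rank(A))     # 3 = number of vectors: independent
```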

Exercises 2.7.4

Exercise Group.

Simplify the given linear combinations, where
\begin{equation*} \vec u = \bbm -1\\0\\2\\4\ebm, \vec v = \bbm 3\\4\\-5\\0\ebm, \text{ and } \vec w = \bbm -3\\2\\0\\7\ebm\text{.} \end{equation*}
1.
\(3\vec u -2 \vec v\)
2.
\(-2\vec u + 3 \vec v +\vec w\)
3.
\(\vec u -2 \vec v + 5\vec w\)
4.
\(4\vec{v} - 3\vec w\)

Exercise Group.

Calculate the given quantity, where
\begin{equation*} \vec u = \bbm 2\\0\\-1\\3\\7\ebm, \vec v = \bbm -3\\5\\0\\-6\\1\ebm, \text{ and } \vec w = \bbm 0\\-3\\5\\2\\-4\ebm\text{.} \end{equation*}
5.
\(\norm{\vec{v}}\)
6.
\(\dotp uv\)
7.
\(\vec w \boldsymbol{\cdot} (2\vec u -3\vec v)\)
8.
\(2(\vec w \boldsymbol{\cdot} \vec u) -3(\vec w\boldsymbol{\cdot}\vec v)\)

Exercise Group.

Determine if the given statement is true or false. Give a proof for any true statements, and give a counterexample for any false statements.
9.
A subset of a linearly independent set is linearly independent.
10.
A subset of a linearly dependent set is linearly dependent.
11.
Any set of vectors that contains the zero vector is linearly dependent.
12.
If the vector \(\vec{w}\) belongs to the span of the set \(\{\vec{v}_1,\ldots, \vec{v}_k\}\text{,}\) then the set \(\{\vec{w},\vec{v}_1,\ldots, \vec{v}_k\}\) is linearly dependent.
13.
If the set \(\{\vec{w},\vec{v}_1,\ldots, \vec{v}_k\}\) is linearly dependent, then \(\vec{w}\) belongs to the span of \(\{\vec{v}_1,\ldots, \vec{v}_k\}\text{.}\)