One of the things we noted in Example 2.7.7 was that since \(\vec{w}\) belonged to \(\operatorname{span}\{\vec{u},\vec{v}\}\text{,}\) adding \(\vec{w}\) to any vector in \(\operatorname{span}\{\vec{u},\vec{v}\}\) resulted in another vector in \(\operatorname{span}\{\vec{u},\vec{v}\}\text{.}\) This leads to the notion of a subspace, another one of the key concepts in linear algebra.
What sets subspaces apart from other subsets of \(\mathbb{R}^n\) is the requirement that all of the properties listed in Theorem 2.7.2 remain valid when applied to vectors from that subspace. We will not prove it here, but it turns out that it suffices for the subset to be closed under the operations of addition and scalar multiplication.
It follows from Definition 5.3.1 that any linear combination of vectors in a subspace \(V\) is again an element of that subspace. One other important consequence of Definition 5.3.1 must be noted here: since any subspace \(V\) is closed under scalar multiplication by any scalar, and since \(0\cdot\vec v = \vec 0\) for any vector \(\vec v\text{,}\) every subspace contains the zero vector. This often provides an easy test when we want to rule out the possibility that a subset is a subspace.
From our work in Section 2.6, we notice right away that the set \(R\) describes a plane with normal vector given by \(\vec n = \bbm 2\\-4\\3\ebm\text{.}\) Moreover, this particular plane passes through the origin, since \(2(0)-4(0)+3(0)=0\text{,}\) telling us that \(\vec 0 \in R\text{.}\)
It turns out that any plane through the origin is a subspace, and \(R\) is no exception, but let's verify this directly using Definition 5.3.1. Suppose
\begin{equation*}
\vec u = \bbm u_1\\u_2\\u_3\ebm \quad \text{ and } \quad \vec v = \bbm v_1\\v_2\\v_3\ebm
\end{equation*}
are vectors in \(R\text{,}\) so that \(2u_1-4u_2+3u_3=0\) and \(2v_1-4v_2+3v_3=0\text{.}\) For the vector
\begin{equation*}
\vec u + \vec v = \bbm u_1+v_1\\u_2+v_2\\u_3+v_3\ebm\text{,}
\end{equation*}
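To complete the verification of closure under addition (a step worth spelling out), we check that \(\vec u + \vec v\) satisfies the equation defining \(R\text{:}\)
\begin{equation*}
2(u_1+v_1)-4(u_2+v_2)+3(u_3+v_3) = (2u_1-4u_2+3u_3)+(2v_1-4v_2+3v_3) = 0+0 = 0\text{,}
\end{equation*}
so \(\vec u + \vec v \in R\text{.}\) Similarly, for any scalar \(c\) we have \(2(cu_1)-4(cu_2)+3(cu_3) = c(2u_1-4u_2+3u_3) = c\cdot 0 = 0\text{,}\) so \(c\vec u \in R\) as well, and \(R\) is closed under both operations.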
For the subset \(S\text{,}\) we immediately notice that the third component must always equal 3; therefore, it is impossible for the zero vector to belong to \(S\text{,}\) and thus \(S\) is not a subspace.
You probably recall from your high school mathematics that a function such as \(f(x)=ax+b\) is considered linear, since its graph is a straight line. Functions like \(f(x)=x^2\) are considered non-linear, since their graphs are curved. One lesson a student of linear algebra does well to learn quickly is that expressions involving non-linear functions of the variables do not play well with the rules of linear algebra.
For the subset \(U\text{,}\) the expressions \(3xy\) and \(x^2\) tip us off that we are probably not dealing with a subspace here. The easiest way to make sure of this is to check the rules in Definition 5.3.1 using specific vectors.
Looking at the definition of the set \(U\text{,}\) we know that if \(4\vec v\in U\text{,}\) then the third component of \(4\vec v\) tells us \(x^2=4\text{,}\) so \(x=\pm 2\text{.}\) Now, let's look at the other two components. If \(x=2\text{,}\) we must have
The first equation tells us that \(y=10\text{,}\) while the second requires \(y=1\text{.}\) Since \(10\neq 1\text{,}\) this is impossible. Similarly, if \(x=-2\text{,}\) then the first component would require \(y=14\text{,}\) while the second would require \(y=-1\text{.}\) Since this is again impossible, it must be the case that \(4\vec v\notin U\text{.}\) Since \(U\) is not closed under scalar multiplication, \(U\) is not a subspace.
After seeing a few examples (and a few exercises), the reader can probably develop some intuition for identifying subspaces. To make sure we don't become too reliant on intuition, however, we'll give one more example with two very similar-looking sets, only one of which is a subspace.
The expressions \(v+4\) and \(u-2\) in the definition of \(V\) look like the sort of linear functions we see in high school, but we need to keep in mind that in linear algebra the zero vector has an important role in making sure the algebra works properly. In linear algebra, among all functions of the form \(f(x)=mx+b\text{,}\) only those with \(b=0\) are considered “linear”: these are the functions whose graphs are lines through the origin.
then clearly we need \(v=-4\) and \(u=2\) from the second and third components, but \(2+2(-4) = -6\neq 0\text{,}\) so there is no way to obtain the zero vector as an element of \(V\text{,}\) telling us that \(V\) is not a subspace.
The subset \(W\) looks a lot like the subset \(V\text{,}\) so our instinct probably tells us that \(W\) is not a subspace, either. To know for sure, the first thing we might check is whether or not \(\vec 0 \in W\text{.}\) In this case, we find that \(\vec 0\) is indeed an element of \(W\text{.}\) Setting \(u=2\) and \(v=-4\text{,}\) we get the vector
If this is an element of \(W\text{,}\) then we must have \(v+4=9\) for some \(v\in\mathbb{R}\) (looking at the second component) and \(u-2=-3\) for some \(u\in \mathbb{R}\) (looking at the third component), so \(u=-1\) and \(v=5\text{.}\) Putting these values into the first component, we need to have \(2(-1)+5=3\text{,}\) which is true! Does this mean \(W\) is a subspace? Not so fast: we only checked addition for one pair of vectors, and we havenโt checked scalar multiplication.
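Before committing to a general proof, we can automate this sort of spot check. The following sketch is not part of the text: the function names are ours, and the membership test simply retraces the component-by-component reasoning above (solve for \(v\) and \(u\) from the last two components, then compare the first).

```python
# Spot-check closure of W = { (2u+v, v+4, u-2) : u, v real } numerically.
# These helper names are illustrative, not from the text.

def elem(u, v):
    """Build the element of W determined by the parameters u and v."""
    return (2*u + v, v + 4, u - 2)

def in_W(p, q, r):
    """Test membership: solve v = q - 4 and u = r + 2 from the last two
    components, then check that the first component equals 2u + v."""
    v = q - 4
    u = r + 2
    return p == 2*u + v

# The zero vector is in W (take u = 2, v = -4):
print(in_W(*elem(2, -4)))            # True

# A sum of two elements of W lands back in W:
s = tuple(a + b for a, b in zip(elem(1, 2), elem(3, 5)))
print(in_W(*s))                      # True

# So does a scalar multiple:
m = tuple(3*c for c in elem(1, 2))
print(in_W(*m))                      # True
```

Passing such checks does not prove closure, of course; only an argument with arbitrary parameters can do that.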
If we try a few more examples (the reader is encouraged to do so), we find that things keep working out, so we begin to suspect that maybe \(W\) really is a subspace. The only way to know for sure is to attempt to verify Definition 5.3.1 with a general proof. Suppose
\begin{equation*}
\vec v = \bbm 2a+b\\b+4\\a-2\ebm \text{ and } \vec w = \bbm 2c+d\\d+4\\c-2\ebm
\end{equation*}
are arbitrary elements of \(W\text{.}\) Adding these vectors, we get
\begin{equation*}
\vec v + \vec w = \bbm 2(a+c)+(b+d)\\ (b+d)+8\\(a+c)-4\ebm\text{,}
\end{equation*}
which certainly doesn't look like an element of \(W\text{;}\) the constants are all wrong! We have an 8 in the second component instead of a 4, and a \(-4\) in the third component instead of a \(-2\text{.}\) (This is why constant terms in the definition of a subset are generally problematic.)
However, with a bit of sleight of hand, things are not as bad as they seem. Let's write the second component as \((b+d+4)+4\text{,}\) and the third as \((a+c-2)-2\text{,}\) and let \(v=b+d+4\) and \(u=a+c-2\text{.}\) If \(\vec v + \vec w\) is an element of \(W\text{,}\) then we're going to need the first component to equal
\begin{equation*}
2u+v = 2(a+c-2)+(b+d+4) = 2(a+c)+(b+d)\text{,}
\end{equation*}
which is exactly what it is, so \(\vec v + \vec w \in W\) after all.
Whew! That wasn't so straightforward. Could we have made our lives a little bit easier? (The answer to this rhetorical question is almost always yes.)
We know that the potential trouble here came from the constant terms, so one option we have is to try burying them. Given the element
\begin{equation*}
\vec v = \bbm 2u+v\\v+4\\u-2\ebm \in W\text{,}
\end{equation*}
we're under no obligation to stick with the variables \(u\) and \(v\text{.}\) Let's try to simplify a bit: if we let \(x = u-2\) (so \(u=x+2\)) and \(y = v+4\) (so \(v=y-4\)), then
\begin{equation*}
2u+v = 2(x+2)+(y-4) = 2x+y\text{,} \quad v+4 = y\text{,} \quad \text{and} \quad u-2 = x\text{,}
\end{equation*}
and thus we can write \(\vec v = \bbm 2x+y\\y\\x\ebm\text{,}\) with no more constant terms. In this form it's much easier to verify that \(W\) is a subspace.
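In fact (an observation not spelled out in the text), the constant-free form makes the structure of \(W\) transparent: since
\begin{equation*}
\bbm 2x+y\\y\\x\ebm = x\bbm 2\\0\\1\ebm + y\bbm 1\\1\\0\ebm\text{,}
\end{equation*}
every element of \(W\) is a linear combination of these two vectors, so \(W = \operatorname{span}\left\{\bbm 2\\0\\1\ebm, \bbm 1\\1\\0\ebm\right\}\text{.}\)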
Let's take a second look at the subspaces \(T\) and \(W\) from Example 5.3.3 and Example 5.3.4. Given an element \(\vec v = \bbm 3a-2b\\a+b\\a-4b\ebm\) of \(T\text{,}\) we note that
\begin{equation*}
\vec v = a\bbm 3\\1\\1\ebm + b\bbm -2\\1\\-4\ebm\text{,}
\end{equation*}
so every element of \(T\) is a linear combination of these two vectors; that is, \(T = \operatorname{span}\left\{\bbm 3\\1\\1\ebm, \bbm -2\\1\\-4\ebm\right\}\text{.}\)
In fact, although we will not prove it in this textbook, every subspace of \(\mathbb{R}^n\) can be written as the span of some finite set of vectors. We can, however, prove that every span is a subspace.
Let \(\vec{v}_1,\vec{v}_2,\ldots, \vec{v}_k\) be vectors in \(\mathbb{R}^n\text{.}\) Then \(V=\operatorname{span}\{\vec{v}_1, \vec{v}_2,\ldots, \vec{v}_k\}\) is a subspace of \(\mathbb{R}^n\text{.}\)
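Although we leave the full proof aside, the key step is a single computation with linear combinations (the coefficient names \(a_i\text{,}\) \(b_i\text{,}\) and \(c\) are introduced here for the sketch). If
\begin{equation*}
\vec u = a_1\vec{v}_1+\cdots+a_k\vec{v}_k \quad \text{and} \quad \vec w = b_1\vec{v}_1+\cdots+b_k\vec{v}_k
\end{equation*}
are elements of \(V\text{,}\) then
\begin{equation*}
\vec u + \vec w = (a_1+b_1)\vec{v}_1+\cdots+(a_k+b_k)\vec{v}_k \quad \text{and} \quad c\vec u = (ca_1)\vec{v}_1+\cdots+(ca_k)\vec{v}_k
\end{equation*}
are again linear combinations of \(\vec{v}_1,\ldots,\vec{v}_k\text{,}\) so both belong to \(V\text{,}\) and \(V\) is closed under both operations.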
We conclude with a discussion of how Theorem 5.3.5 and the concept of linear independence allow us to give a complete description of the possible subspaces of \(\mathbb{R}^n\text{.}\) To begin with, we have the simplest possible subspace, the trivial subspace \(\{\vec 0\}\text{,}\) consisting of the zero vector alone.
If a subspace \(V\) has at least one non-zero vector, let's say \(\vec v\in V\text{,}\) then by definition it must contain every scalar multiple of that vector. Thus, the next simplest type of subspace is given as the span of a single, non-zero vector:
\begin{equation*}
V_1 = \{t\vec v \, | \, t\in\mathbb{R}\}\text{,} \quad \text{where } \vec v \neq \vec 0\text{.}
\end{equation*}
Of course, there are infinitely many possibilities for \(\vec v\text{,}\) but each choice of \(\vec v\neq \vec 0\) leads to a subspace that looks and acts “the same”. As discussed earlier, we can picture a subspace of this type as a line through the origin.
Next, we could consider a subspace \(V_2 = \operatorname{span}\{\vec v, \vec w\}\text{,}\) with \(\vec v, \vec w \neq \vec 0\text{.}\) There are two possibilities. One is that \(\vec v\) and \(\vec w\) are parallel, so that the set \(\{\vec v,\vec w\}\) is linearly dependent. In this case we can write \(\vec w = k\vec v\) for some scalar \(k\text{,}\) and for any scalars \(a\) and \(b\text{,}\)
\begin{equation*}
a\vec v+b\vec w = a\vec v + b(k\vec v) = (a+bk)\vec v,
\end{equation*}
so our subspace \(V_2\) is really of the same type as \(V_1\text{.}\) If, however, the vectors \(\vec v\) and \(\vec w\) are linearly independent, then adding the vector \(\vec w\) gives us a second direction to work with, and \(V_2\) becomes an object that is strictly larger than \(V_1\text{.}\) In this case, the visualization is that of a plane through the origin.
Depending on the size of \(n\text{,}\) this argument continues. If we add a third vector \(\vec u\) that is already in the span of \(\vec v\) and \(\vec w\text{,}\) then the set \(\{\vec u, \vec v,\vec w\}\) is linearly dependent, and the span of this set is the same as what we already had. If, however, \(\vec u \notin\operatorname{span}\{\vec v, \vec w\}\text{,}\) then \(\{\vec u, \vec v,\vec w\}\) is linearly independent, and
\begin{equation*}
V_3 = \operatorname{span}\{\vec u, \vec v, \vec w\}
\end{equation*}
is a strictly larger subspace than \(\operatorname{span}\{\vec v, \vec w\}\text{.}\) We could then look for a fourth vector, and so on. However, in the familiar case of \(\mathbb{R}^3\text{,}\) the process stops at 3.
Notice the reference to dimension in Key Idea 5.3.6. In \(\mathbb{R}^3\text{,}\) we can rely on our intuitive (geometric) understanding of the concept of dimension. A complete understanding of the concept of dimension will have to wait until a second course in linear algebra; however, using the concepts in this section, we can make the following definition.
One could also define dimension as the largest number of linearly independent vectors one can choose from a subspace. If \(B=\{\vec{v}_1, \ldots, \vec{v}_k\}\) is a set of vectors in a subspace \(V\) such that \(\operatorname{span} B = V\) and \(B\) is linearly independent,
then we say \(B\) is a basis for \(V\text{.}\) For example, the set \(\{\veci, \vecj, \veck\}\) is a basis for \(\mathbb{R}^3\text{.}\) There are many possible bases for a subspace, but one can prove that the number of vectors in any basis is the same. Once this fact is established, we could alternatively define dimension as the number of vectors in any basis.
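As a small computational illustration (ours, not the text's): three vectors form a basis for \(\mathbb{R}^3\) precisely when the \(3\times 3\) matrix having them as columns has nonzero determinant, which is one standard test for linear independence of three vectors in \(\mathbb{R}^3\text{.}\)

```python
def det3(a, b, c):
    """Determinant of the 3x3 matrix whose columns are the vectors a, b, c
    (cofactor expansion along the first row)."""
    return (a[0] * (b[1]*c[2] - b[2]*c[1])
          - b[0] * (a[1]*c[2] - a[2]*c[1])
          + c[0] * (a[1]*b[2] - a[2]*b[1]))

# The standard basis {i, j, k} has determinant 1, so it is a basis:
print(det3((1, 0, 0), (0, 1, 0), (0, 0, 1)))   # 1

# A linearly dependent set (the second vector is twice the first)
# has determinant 0, so it cannot be a basis:
print(det3((1, 2, 3), (2, 4, 6), (0, 0, 1)))   # 0
```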