
Appendix C Solutions to Selected Exercises

1 Vector spaces
1.1 Definition and examples

Exercises

1.1.1.
1.1.1.a
Solution.
To get a vector space structure on \(V=(0,\infty)\text{,}\) we will define an addition \(\oplus\) on \(V\) by
\begin{equation*} x\oplus y = xy\text{,} \end{equation*}
where the right hand side is the usual product of real numbers, and for \(c\in\R\) and \(x\in V\text{,}\) we will define a scalar multiplication \(\odot\) by
\begin{equation*} c\odot x = x^c\text{.} \end{equation*}
1.1.1.b
Solution.
For any \(x,y,z\in V\text{,}\) we have:
\begin{align*} x\oplus y \amp = xy = yx = y\oplus x\\ x\oplus(y\oplus z)\amp = x\oplus yz = x(yz) = (xy)z = xy\oplus z = (x\oplus y)\oplus z\text{.} \end{align*}
1.1.1.d
Solution.
Let \(x\) be any element of \(V\text{.}\) Since \(x\gt 0\text{,}\) we know in particular that \(x\neq 0\text{,}\) so we can define \(-x = 1/x\text{,}\) where \(1/x\) denotes the usual reciprocal of a real number. We then have
\begin{equation*} x\oplus (-x) = x(1/x) = 1\text{,} \end{equation*}
and we saw above that \(1\) is the identity element of \(V\text{.}\)
1.1.1.e
Solution.
We assume some properties of exponents from high school algebra:
\begin{equation*} c\odot(x\oplus y) = (xy)^c = x^c y^c = c\odot x \oplus c\odot y\text{.} \end{equation*}
1.1.1.f
Solution.
This again follows from properties of exponents:
\begin{equation*} (c+d)\odot x = x^{c+d} = x^c x^d = c\odot x\oplus d\odot x\text{.} \end{equation*}
1.1.1.g
Solution.
We have
\begin{equation*} c\odot (d\odot x) = c\odot (x^d) = (x^d)^c = x^{dc} = x^{cd} = (cd)\odot x\text{.} \end{equation*}
1.1.1.h
Solution.
The last one is possibly the easiest: \(1\odot x = x^1 = x\text{.}\)
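Although no computation is strictly needed here, a quick numerical spot-check can be reassuring. Below is a minimal Sage sketch (the names oplus and odot are ours, chosen for this check, and not part of the exercise):
# spot-check some axioms for V = (0, oo), where x (+) y = xy and c (.) x = x^c
oplus = lambda x, y: x*y       # the addition on V
odot = lambda c, x: x^c        # the scalar multiplication on V
x, y = QQ(2), QQ(5)            # sample vectors: positive rationals
c, d = 3, -1                   # sample scalars
print(odot(c, oplus(x, y)) == oplus(odot(c, x), odot(c, y)))  # c.(x+y) = c.x + c.y
print(odot(c + d, x) == oplus(odot(c, x), odot(d, x)))        # (c+d).x = c.x + d.x
print(odot(c, odot(d, x)) == odot(c*d, x), odot(1, x) == x)   # c.(d.x) = (cd).x and 1.x = x
Each line should print True for these sample values.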
1.1.4.
Solution.
1.1.5.
Answer 1.
\(2\)
Answer 2.
\(61\)
Answer 3.
\(-3\)
Answer 4.
\(-\left(6+x\right)\)
1.1.6.
Answer 1.
\(19\)
Answer 2.
\(11\)
Answer 3.
\(-4.75\)
Answer 4.
\(1+(-5)\)
Answer 5.
\(\frac{1}{x+5}-5\)

1.2 Properties

Exercise 1.2.2.

1.2.2.a
Solution.
Suppose \(\uu+\vv=\uu+\ww\text{.}\) By adding \(-\uu\) on the left of each side, we obtain:
\begin{align*} -\uu+(\uu+\vv) \amp =-\uu+(\uu+\ww)\\ (-\uu+\uu)+\vv \amp =(-\uu+\uu)+\ww \quad \text{ by A3}\\ \zer+\vv \amp =\zer+\ww \quad \text{ by A5}\\ \vv \amp =\ww \quad \text{ by A4}\text{,} \end{align*}
which is what we needed to show.
1.2.2.b
Solution.
We have \(c\zer = c(\zer+\zer) = c\zer +c\zer\text{,}\) by A4 and S2, respectively. Adding \(-c\zer\) to both sides gives us
\begin{equation*} -c\zer+c\zer = -c\zer+(c\zer+c\zer)\text{.} \end{equation*}
Using associativity (A3), this becomes
\begin{equation*} -c\zer+c\zer = (-c\zer+c\zer)+c\zer\text{,} \end{equation*}
and since \(-c\zer+c\zer=\zer\) by A5, we get \(\zer =\zer+c\zer\text{.}\) Finally, we apply A4 on the right hand side to get \(\zer=c\zer\text{,}\) as required.
1.2.2.c
Solution.
Suppose there are two vectors \(\zer_1,\zer_2\) that act as additive identities. Then
\begin{align*} \zer_1 \amp = \zer_1+\zer_2 \quad \text{ since } \vv+\zer_2=\vv \text{ for any } \vv\\ \amp =\zer_2+\zer_1 \quad \text{ by axiom A2}\\ \amp =\zer_2 \quad \text{ since } \vv+\zer_1=\vv \text{ for any } \vv\text{.} \end{align*}
So any two vectors satisfying the property in A4 must, in fact, be the same.
1.2.2.d
Solution.
Let \(\vv\in V\text{,}\) and suppose there are vectors \(\ww_1,\ww_2\in V\) such that \(\vv+\ww_1=\zer\) and \(\vv+\ww_2=\zer\text{.}\) Then
\begin{align*} \ww_1 \amp = \ww_1+\zer \quad \text{ by A4}\\ \amp = \ww_1+(\vv+\ww_2) \quad \text{ by assumption}\\ \amp = (\ww_1+\vv)+\ww_2 \quad \text{ by A3}\\ \amp = (\vv+\ww_1)+\ww_2 \quad \text{ by A2}\\ \amp = \zer+\ww_2 \quad \text{ by assumption}\\ \amp = \ww_2 \quad \text{ by A4}\text{.} \end{align*}

1.3 Subspaces

Exercises

1.3.2.
Answer 1.
\(\text{H contains the zero vector of V}\)
Answer 2.
\(\left[\begin{array}{cc} 1 &0\cr 0 &0 \end{array}\right], \left[\begin{array}{cc} 1 &0\cr 0 &1 \end{array}\right]\)
Answer 3.
\(2, \left[\begin{array}{cc} 1 &0\cr 0 &1 \end{array}\right]\)
Answer 4.
\(\text{H is not a subspace of V}\)
1.3.3.
Answer 1.
\(\text{H contains the zero vector of V}\)
Answer 2.
\(\text{CLOSED}\)
Answer 3.
\(\text{CLOSED}\)
Answer 4.
\(\text{H is a subspace of V}\)
1.3.4.
Answer 1.
\(\text{H does not contain the zero vector of V}\)
Answer 2.
\(\left[\begin{array}{cc} 1 &0\cr 0 &0 \end{array}\right], \left[\begin{array}{cc} 0 &0\cr 0 &1 \end{array}\right]\)
Answer 3.
\(2, \left[\begin{array}{cc} 1 &0\cr 0 &0 \end{array}\right]\)
Answer 4.
\(\text{H is not a subspace of V}\)

1.4 Span

Exercises

1.4.2.
Answer 1.
\(\left[\begin{array}{cc} 2 &-1\cr -1 &1 \end{array}\right]\)
Answer 2.
\(\left[\begin{array}{cc} 2 &1\cr 1 &2 \end{array}\right]\)
1.4.3.
Answer 1.
\(9x+x^{2}\)
Answer 2.
\(-\left(2+x+2x^{2}\right)\)
1.4.4.
Answer.
\(4;\,-2;\,-4\)
1.4.7.
Answer 1.
\(-4\)
Answer 2.
\(-2\)

1.6 Linear Independence

Exercises

1.6.1.
Solution.
We set up a matrix and reduce:
Notice that this time we don’t get a unique solution, so we can conclude that these vectors are not independent. Furthermore, you can probably deduce from the above that we have \(2\vv_1+3\vv_2-\vv_3=\zer\text{.}\) Now suppose that \(\ww\in\spn\{\vv_1,\vv_2,\vv_3\}\text{.}\) In how many ways can we write \(\ww\) as a linear combination of these vectors?
1.6.2.
Solution.
In each case, we set up the defining equation for independence, collect terms, and then analyze the resulting system of equations. (If you work with polynomials often enough, you can probably jump straight to the matrix. For now, let’s work out the details.)
Suppose
\begin{equation*} r(x^2+1)+s(x+1)+tx = 0\text{.} \end{equation*}
Then \(rx^2+(s+t)x+(r+s)=0=0x^2+0x+0\text{,}\) so
\begin{align*} r \amp =0\\ s+t \amp =0\\ r+s\amp =0\text{.} \end{align*}
And in this case, we don’t even need to ask the computer. The first equation gives \(r=0\) right away, and putting that into the third equation gives \(s=0\text{,}\) and the second equation then gives \(t=0\text{.}\)
Since \(r=s=t=0\) is the only solution, the set is independent.
Repeating for \(S_2\) leads to the equation
\begin{equation*} (r+2s+t)x^2+(-r+s+5t)x+(3r+5s+t)1=0. \end{equation*}
This gives us:
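The original interactive cell is not reproduced here, but one way to finish is to row-reduce the coefficient matrix read off from the displayed equation. A possible Sage computation:
A = matrix(QQ, [[1, 2, 1], [-1, 1, 5], [3, 5, 1]])
A.rref()   # rows: [1 0 -3], [0 1 2], [0 0 0]
The row of zeros means the system has nontrivial solutions (for example, \(r=3, s=-2, t=1\)), so \(S_2\) is dependent.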
1.6.3.
Solution.
We set a linear combination equal to the zero vector, and combine:
\begin{align*} a\bbm -1\amp 0\\0\amp -1\ebm +b\bbm 1\amp -1\\ -1\amp 1\ebm +c\bbm 1\amp 1\\1\amp 1\ebm +d \bbm 0\amp -1\\-1\amp 0\ebm = \bbm 0\amp 0\\ 0\amp 0\ebm\\ \bbm -a+b+c\amp -b+c-d\\-b+c-d\amp -a+b+c\ebm = \bbm 0\amp 0\\0\amp 0\ebm\text{.} \end{align*}
We could proceed, but we might instead notice right away that equations 1 and 4 are identical, and so are equations 2 and 3. With only two distinct equations and four unknowns, we're certain to find nontrivial solutions.
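For instance (one choice among many), taking \(b=c=1\) in the equations \(-a+b+c=0\) and \(-b+c-d=0\) forces \(a=2\) and \(d=0\text{,}\) and indeed
\begin{equation*} 2\bbm -1\amp 0\\0\amp -1\ebm + \bbm 1\amp -1\\-1\amp 1\ebm + \bbm 1\amp 1\\1\amp 1\ebm = \bbm 0\amp 0\\0\amp 0\ebm\text{,} \end{equation*}
so the set is dependent.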
1.6.8.
Answer.
\(-4;\,-3;\,1\)
1.6.9.
Answer.
\(-4;\,-3;\,1\)
1.6.10.
Answer.
\(0;\,0;\,0\)
1.6.11.
Answer.
\(0.666667;\,-0.75;\,-1\)

1.7 Basis and dimension

Exercises

1.7.1.
1.7.1.a
Solution.
By definition, \(U_1 = \spn \{1+x,x+x^2\}\text{,}\) and these vectors are independent, since neither is a scalar multiple of the other. Since there are two vectors in this basis, \(\dim U_1 = 2\text{.}\)
1.7.1.b
Solution.
If \(p(1)=0\text{,}\) then \(p(x)=(x-1)q(x)\) for some polynomial \(q\text{.}\) Since \(p\) belongs to \(P_2\text{,}\) its degree is at most 2, so the degree of \(q\) is at most 1. Therefore, \(q(x)=ax+b\) for some \(a,b\in\R\text{,}\) and
\begin{equation*} p(x) = (x-1)(ax+b) = a(x^2-x)+b(x-1)\text{.} \end{equation*}
Since \(p\) was arbitrary, this shows that \(U_2 = \spn\{x^2-x,x-1\}\text{.}\)
The set \(\{x^2-x,x-1\}\) is also independent, since neither vector is a scalar multiple of the other. Therefore, this set is a basis, and \(\dim U_2=2\text{.}\)
1.7.1.c
Solution.
If \(p(x)=p(-x)\text{,}\) then \(p(x)\) is an even polynomial, and therefore \(p(x)=a+bx^2\) for \(a,b\in\R\text{.}\) (If you didn't know this, it's easily verified: if
\begin{equation*} a+bx+cx^2 = a+b(-x)+c(-x)^2\text{,} \end{equation*}
we can immediately cancel \(a\) from each side, and since \((-x)^2=x^2\text{,}\) we can cancel \(cx^2\) as well. This leaves \(bx=-bx\text{,}\) or \(2bx=0\text{,}\) which implies that \(b=0\text{.}\))
It follows that the set \(\{1,x^2\}\) spans \(U_3\text{,}\) and since this is a subset of the standard basis \(\{1,x,x^2\}\) of \(P_2\text{,}\) it must be independent, and is therefore a basis of \(U_3\text{,}\) letting us conclude that \(\dim U_3=2\text{.}\)
1.7.2.
Solution.
Again, we only need to add one vector from the standard basis \(\{1,x,x^2,x^3\}\text{,}\) and it’s not too hard to check that any of them will do.
1.7.4.
Answer.
\(5+x^{2}, x\)
1.7.5.
Answer.
\(\left[\begin{array}{cc} 1 &0\cr 0 &-1 \end{array}\right], \left[\begin{array}{cc} 0 &1\cr 0 &0 \end{array}\right], \left[\begin{array}{cc} 0 &0\cr 1 &0 \end{array}\right]\)
1.7.9.
Answer 1.
\(\text{1, 2, or 3}\)
Answer 2.
\(\text{1 or 2}\)
Solution.
(a) The dimension of \(S_1\) cannot exceed the dimension of \(S_2\) since \(S_1\) is contained in \(S_2\text{.}\) \(S_1\) is non-zero, and thus its dimension cannot be 0. Hence 1, 2, or 3 are the possible dimensions of \(S_1\text{.}\)
(b) If \(S_1 \ne S_2\text{,}\) then \(S_1\) is properly contained in \(S_2\text{,}\) and the dimension of \(S_1\) is strictly less than the dimension of \(S_2\text{.}\) Thus only 1 or 2 are possible dimensions of \(S_1\text{.}\)
1.7.10.
Answer 1.
\(2\)
Answer 2.
\({\text{not a basis for P₂}}\)
Answer 3.
\(7x^{2}-x-2, 2-\left(9x^{2}+x\right)\)
1.7.11.
Answer 1.
\(3\)
Answer 2.
\({\text{basis for P₂}}\)
Answer 3.
\(7x^{2}-10x-5, 19x^{2}-7x-7, -\left(3x+1\right)\)

1.8 New subspaces from old

Exercise 1.8.7.

1.8.7.a
Solution.
If \((x,y,z)\in U\text{,}\) then \(z=0\text{,}\) and if \((x,y,z)\in W\text{,}\) then \(x=0\text{.}\) Therefore, \((x,y,z)\in U\cap W\) if and only if \(x=z=0\text{,}\) so \(U\cap W = \{(0,y,0)\,|\, y\in\R\}\text{.}\)
1.8.7.b
Solution.
There are in fact infinitely many ways to do this. Three possible ways include:
\begin{align*} \vv \amp = (1,1,0)+(0,0,1) \\ \amp = (1,0,0)+(0,1,1)\\ \amp = \left(1,\frac12,0\right)+\left(0,\frac12,1\right)\text{.} \end{align*}

Exercises

1.8.1.
1.8.1.a
Solution.
Since \(p(1)=0\text{,}\) we know that \(p(x)=(x-1)q(x)\text{,}\) for some \(q(x)=ax^2+bx+c\text{.}\) Thus, \(p(x)=ax^2(x-1)+bx(x-1)+c(x-1)\text{,}\) so a basis is given by \(\{(x-1),x(x-1),x^2(x-1)\}\text{.}\)
(Another option is \(\{(x-1),(x-1)^2,(x-1)^3\}\text{.}\))
1.8.1.b
Solution.
Since \(\dim U=3\) and \(\dim P_3(\R)=4\text{,}\) we know that any complement of \(U\) must be one-dimensional.
Therefore, a basis for a complement \(W\) of \(U\) is given by any polynomial in \(P_3(\R)\) that is not in \(U\text{.}\) In particular, we can choose any polynomial \(p(x)\) with \(p(1)\neq 0\text{;}\) for example, \(p(x)=x\text{.}\) Therefore, \(W=\{ax\,|\, a\in\R\}\) is a complement of \(U\text{.}\)
1.8.2.
1.8.2.a
Solution.
If \(\uu\in U\text{,}\) then
\begin{align*} \uu \amp = (3x_3,x_2,x_3,x_4,3x_2-5x_4)\\ \amp = (0,x_2,0,0,3x_2)+(3x_3,0,x_3,0,0)+(0,0,0,x_4,-5x_4)\\ \amp = x_2(0,1,0,0,3)+x_3(3,0,1,0,0)+x_4(0,0,0,1,-5)\text{.} \end{align*}
This shows that \(U=\spn \{(0,1,0,0,3), (3,0,1,0,0), (0,0,0,1,-5)\}\text{.}\) These vectors are also linearly independent, since each one has its leading (first nonzero) entry in a different position. (Think about what this implies for the RREF of the resulting matrix.)
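If you'd rather let the computer confirm the independence claim, a possible Sage check (with the three spanning vectors as rows):
A = matrix(QQ, [[0, 1, 0, 0, 3], [3, 0, 1, 0, 0], [0, 0, 0, 1, -5]])
A.rank()   # returns 3, so the rows are independent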
1.8.2.b
Solution.
Since \(\dim U = 3\text{,}\) any complement of \(U\) must have dimension 2. We therefore need to find two independent vectors that do not belong to \(U\text{.}\)
Recall that \(U\) is defined by two conditions: \(x_1=3x_3\) and \(3x_2-5x_4=x_5\text{.}\) Therefore, a vector is not in \(U\) if \(x_1\neq 3x_3\text{,}\) or if \(x_5\neq 3x_2-5x_4\text{.}\) This suggests that we define two vectors, each of which violates one of these conditions.
For the first, let us take \(\xx=(1,0,1,0,0)\text{.}\) This is not in \(U\) because \(1\neq 3(1)\text{.}\) For the second, let us take \(\yy=(0,1,0,1,1)\text{.}\) This is not in \(U\) because \(1\neq 3(1)-5(1)\text{.}\) We know that \(\{\xx,\yy\}\) is linearly independent, because each vector has nonzero entries in different positions. Therefore, \(W=\spn\{\xx,\yy\}\) is a complement of \(U\text{,}\) since it is spanned by vectors not in \(U\text{,}\) and it has the correct dimension.
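Strictly speaking, we should also confirm that \(U\cap W=\{\zer\}\text{.}\) One way is to check that the three spanning vectors of \(U\text{,}\) together with \(\xx\) and \(\yy\text{,}\) are linearly independent, so that \(\dim(U+W)=5\) and therefore \(U\cap W=\{\zer\}\text{.}\) A possible Sage check:
rows = [[0, 1, 0, 0, 3], [3, 0, 1, 0, 0], [0, 0, 0, 1, -5],  # spanning vectors of U
        [1, 0, 1, 0, 0], [0, 1, 0, 1, 1]]                    # x and y
matrix(QQ, rows).rank()   # returns 5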
1.8.4.
1.8.4.a
Answer.
\(\text{No}\)
1.8.4.b
Answer.
\(3\)
1.8.4.c
Answer.
\(1\)

2 Linear Transformations
2.1 Definition and examples

Exercises

2.1.1.
Solution.
We need to find scalars \(a,b,c\) such that
\begin{equation*} 2-x+3x^2 = a(x+2)+b(1)+c(x^2+x)\text{.} \end{equation*}
We could set up a system and solve, but this time it’s easy enough to just work our way through. We must have \(c=3\text{,}\) to get the correct coefficient for \(x^2\text{.}\) This gives
\begin{equation*} 2-x+3x^2=a(x+2)+b(1)+3x^2+3x\text{.} \end{equation*}
Now, we have to have \(3x+ax=-x\text{,}\) so \(a=-4\text{.}\) Putting this in, we get
\begin{equation*} 2-x+3x^2=-4x-8+b+3x^2+3x\text{.} \end{equation*}
Simplifying this leaves us with \(b=10\text{.}\) Finally, we find:
\begin{align*} T(2-x+3x^2) \amp = T(-4(x+2)+10(1)+3(x^2+x)) \\ \amp = -4T(x+2)+10T(1)+3T(x^2+x)\\ \amp = -4(1)+10(5)+3(0) = 46\text{.} \end{align*}
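If you'd rather solve for the coefficients systematically, here is a possible Sage version of the computation (each column holds the coefficients of one of \(x+2\text{,}\) \(1\text{,}\) \(x^2+x\) with respect to \(\{1,x,x^2\}\)):
M = matrix(QQ, [[2, 1, 0],
                [1, 0, 1],
                [0, 0, 1]])
M.solve_right(vector(QQ, [2, -1, 3]))   # returns (-4, 10, 3), matching a, b, c above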
2.1.2.
Answer 1.
\(\left[\begin{array}{c} 22\cr 26\cr 33 \end{array}\right]\)
Answer 2.
\(\left[\begin{array}{c} -42\cr 35 \end{array}\right]\)
Answer 3.
\(\left[\begin{array}{c} 17\cr 25 \end{array}\right]\)
2.1.3.
Answer 1.
\(\left[\begin{array}{cc} a_{11}+b_{11} &a_{21}+b_{21}\cr a_{12}+b_{12} &a_{22}+b_{22}\cr \end{array}\right]\)
Answer 2.
\(\left[\begin{array}{cc} a_{11} &a_{21}\cr a_{12} &a_{22}\cr \end{array}\right] + \left[\begin{array}{cc} b_{11} &b_{21}\cr b_{12} &b_{22}\cr \end{array}\right]\)
Answer 3.
\(\text{Yes, they are equal}\)
Answer 4.
\(\left[\begin{array}{cc} ca_{11} &ca_{21}\cr ca_{12} &ca_{22}\cr \end{array}\right]\)
Answer 5.
\(c \left( \left[\begin{array}{cc} a_{11} &a_{21}\cr a_{12} &a_{22}\cr \end{array}\right] \right)\)
Answer 6.
\(\text{Yes, they are equal}\)
Answer 7.
\(\text{f is a linear transformation}\)
2.1.4.
Answer 1.
\(2x+2y-3\)
Answer 2.
\(2x-3 + 2y-3\)
Answer 3.
\(\text{No, they are not equal}\)
Answer 4.
\(2cx-3\)
Answer 5.
\(c ( 2x-3 )\)
Answer 6.
\(\text{No, they are not equal}\)
Answer 7.
\(\text{f is not a linear transformation}\)
2.1.5.
Answer 1.
\(6T\mathopen{}\left(v_{1}\right)-T\mathopen{}\left(v_{2}\right)\)
Answer 2.
\(6\mathopen{}\left(w_{1}+w_{2}\right)+3\cdot -1\cdot 8w_{2}\)
2.1.6.
Answer 1.
\(\left(8,4\right)\)
Answer 2.
\(\left(6,-18\right)\)
Answer 3.
\(\left(14,-14\right)\)
2.1.7.
Answer.
\(\left(19,46\right)\)
2.1.8.
Answer.
\(\left[\begin{array}{c} 9x+4y\cr -9x+4y\cr \end{array}\right]\)
2.1.9.
Answer.
\(\left[\begin{array}{c} -2\cr 18\cr 9 \end{array}\right]\)
2.1.10.
2.1.10.a
Answer.
\(-6x^{2}-10x-4\)
2.1.10.b
Answer.
\(-32x^{2}-36x-18\)
2.1.10.c
Answer.
\(2x^{2}+2x+0\)
2.1.10.d
Answer.
\(a\mathopen{}\left(2x^{2}+2x+0\right)+b\mathopen{}\left(-32x^{2}-36x-18\right)+c\mathopen{}\left(-6x^{2}-10x-4\right)\)
2.1.11.
Answer.
\(181-194x\)

2.2 Kernel and Image

Exercises

2.2.1.
2.2.1.a
Solution.
We have \(T(0)=0\) since \(0^T=0\text{.}\) Using properties of the transpose and matrix algebra, we have
\begin{equation*} T(A+B) = (A+B)-(A+B)^T = (A-A^T)+(B-B^T) = T(A)+T(B) \end{equation*}
and
\begin{equation*} T(kA) = (kA) - (kA)^T = kA-kA^T = k(A-A^T) = kT(A)\text{.} \end{equation*}
2.2.1.b
Solution.
It’s clear that if \(A^T=A\text{,}\) then \(T(A)=0\text{.}\) On the other hand, if \(T(A)=0\text{,}\) then \(A-A^T=0\text{,}\) so \(A=A^T\text{.}\) Thus, the kernel consists of all symmetric matrices.
2.2.1.c
Solution.
If \(B=T(A)=A-A^T\text{,}\) then
\begin{equation*} B^T = (A-A^T)^T = A^T-A = -B\text{,} \end{equation*}
so certainly every matrix in \(\im T\) is skew-symmetric. On the other hand, if \(B\) is skew-symmetric, then \(B=T(\frac12 B)\text{,}\) since
\begin{equation*} T\Bigl(\frac12 B\Bigr) = \frac12 T(B) = \frac12(B-B^T) = \frac12(B-(-B))= B\text{.} \end{equation*}
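A quick sanity check in Sage, using an arbitrarily chosen matrix (our choice, not part of the exercise):
A = matrix(QQ, [[1, 2], [3, 4]])
B = A - A.transpose()              # this is T(A)
print(B, B.transpose() == -B)      # B = [0 -1; 1 0], and the skew-symmetry test returns True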
2.2.2.
Answer.
\(\left[\begin{array}{c} -1\cr 1\cr 0\cr 1 \end{array}\right], \left[\begin{array}{c} -2\cr 1\cr 1\cr 1 \end{array}\right]\)
2.2.3.
Answer 1.
\(1\)
Answer 2.
\(2\)
2.2.4.
Answer.
\(\left[\begin{array}{c} 0\cr 0\cr 1 \end{array}\right]\)
2.2.5.
Answer 1.
\(\left<36,13\right>\)
Answer 2.
\(\left[\begin{array}{ccc} -3 &-3 &-3\cr -2 &-1 &0 \end{array}\right]\)
Answer 3.
\(\left<-3x-3y-3z,-2x-1y+0z\right>\)
Answer 4.
\(\left<1,-2,1\right>\)
Answer 5.
\(\left<1,0\right>, \left<0,1\right>\)
Answer 6.
\(\text{surjective}\)
2.2.6.
Answer 1.
\(\left<-5,26,-6\right>\)
Answer 2.
\(\left[\begin{array}{cc} 5 &3\cr 3 &-4\cr -3 &0 \end{array}\right]\)
Answer 3.
\(\left<0,0\right>\)
Answer 4.
\(\left<5,3,-3\right>, \left<3,-4,0\right>\)
Answer 5.
\(\text{injective}\)
2.2.7.
Answer 1.
\(\left[\begin{array}{cc} 5 &1\cr 3 &-3 \end{array}\right]\)
Answer 2.
\(\text{bijective}\)
Answer 3.
\(\left[\begin{array}{cc} 0.166667 &0.0555556\cr 0.166667 &-0.277778 \end{array}\right]\)
2.2.8.
Answer 1.
\(\left[\begin{array}{c} -5\cr -69\cr 10 \end{array}\right]\)
Answer 2.
\(\left[\begin{array}{cc} -4 &-3\cr -3 &5\cr -1 &-2 \end{array}\right]\)
Answer 3.
\(\text{injective}\)
2.2.9.
Answer 1.
\(\left[\begin{array}{c} -21\cr 28\cr 7 \end{array}\right]\)
Answer 2.
\(\left[\begin{array}{cc} -9 &-15\cr 12 &20\cr 3 &5 \end{array}\right]\)
Answer 3.
\(\text{none of these}\)
2.2.10.
Answer.
\(3\)
2.2.11.
Answer 1.
\(2\)
Answer 2.
\(1\)
Answer 3.
\(2\)
Solution.
(a) Since \(e_1\text{,}\) \(e_2\text{,}\) \(e_3\) span \(\mathbb{R}^3\text{,}\) we get that \(L(\mathbb{R}^3)\) is spanned by \(L(e_1)\text{,}\) \(L(e_2)\text{,}\) \(L(e_3)\text{.}\) So
\begin{equation*} \begin{aligned}L(\mathbb{R}^3) \amp = \operatorname{span}\left\{ L(e_1),L(e_2),L(e_3)\right\}\\ \amp = \operatorname{span}\left\{\begin{bmatrix} -12 \\ 0 \\ -36 \end{bmatrix}, \begin{bmatrix} -24 \\ -12 \\ 0 \end{bmatrix}, \begin{bmatrix} 12 \\ 0 \\ 36 \end{bmatrix}\right\}\\ \amp = \operatorname{span}\left\{\begin{bmatrix} 1 \\ 0 \\ 3 \end{bmatrix}, \begin{bmatrix} 2 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} -1 \\ 0 \\ -3 \end{bmatrix}\right\}\\ \amp = \operatorname{span}\left\{\begin{bmatrix} 1 \\ 0 \\ 3 \end{bmatrix}, \begin{bmatrix} 2 \\ 1 \\ 0 \end{bmatrix}\right\}\end{aligned} \end{equation*}
and hence the dimension of the range is 2.
(b) The rank-nullity theorem implies that the dimension of the kernel is \(3-2 = 1\text{.}\)
(c) Notice that
\begin{equation*} \begin{aligned} L(S) \amp = \operatorname{span}\left\{L(11 e_1), L(24 e_2 + 24 e_3)\right\} = \operatorname{span}\left\{L(e_1), L(e_2)+L(e_3)\right\}\\ \amp = \operatorname{span}\left\{\begin{bmatrix} -12 \\ 0 \\ -36 \end{bmatrix}, \begin{bmatrix} -12 \\ -12 \\ 36 \end{bmatrix}\right\}\end{aligned} \end{equation*}
and it is easy to check that these two vectors are linearly independent. Therefore, the dimension of \(L(S)\) is 2.
2.2.12.
Answer 1.
\(1, x\)
Answer 2.
\(1, x, x^{2}\)
2.2.13.
2.2.13.a
Solution.
Suppose \(T:V\to W\) is injective. Then \(\ker T = \{0\}\text{,}\) so
\begin{equation*} \dim V = 0 + \dim \im T \leq \dim W\text{,} \end{equation*}
since \(\im T\) is a subspace of \(W\text{.}\)
Conversely, suppose \(\dim V\leq \dim W\text{.}\) Choose a basis \(\{\vv_1,\ldots, \vv_m\}\) of \(V\text{,}\) and a basis \(\{\ww_1,\ldots, \ww_n\}\) of \(W\text{,}\) where \(m\leq n\text{.}\) By Theorem 2.1.8, there exists a linear transformation \(T:V\to W\) with \(T(\vv_i)=\ww_i\) for \(i=1,\ldots, m\text{.}\) (The main point here is that we run out of basis vectors for \(V\) before we run out of basis vectors for \(W\text{.}\)) This map is injective: if \(T(\vv)=\zer\text{,}\) write \(\vv=c_1\vv_1+\cdots + c_m\vv_m\text{.}\) Then
\begin{align*} \zer \amp = T(\vv)\\ \amp = T(c_1\vv_1+\cdots + c_m\vv_m)\\ \amp = c_1T(\vv_1)+\cdots + c_mT(\vv_m)\\ \amp = c_1\ww_1+\cdots +c_m\ww_m\text{.} \end{align*}
Since \(\{\ww_1,\ldots, \ww_m\}\) is a subset of a basis, it’s independent. Therefore, the scalars \(c_i\) must all be zero, and therefore \(\vv=\zer\text{.}\)
2.2.13.b
Solution.
Suppose \(T:V\to W\) is surjective. Then \(\dim \im T = \dim W\text{,}\) so
\begin{equation*} \dim V = \dim \ker T + \dim W \geq \dim W\text{.} \end{equation*}
Conversely, suppose \(\dim V\geq \dim W\text{.}\) Again, choose a basis \(\{\vv_1,\ldots, \vv_m\}\) of \(V\text{,}\) and a basis \(\{\ww_1,\ldots, \ww_n\}\) of \(W\text{,}\) where this time, \(m\geq n\text{.}\) We can define a linear transformation as follows:
\begin{equation*} T(\vv_1)=\ww_1,\ldots, T(\vv_n)=\ww_n, \text{ and } T(\vv_j) = \zer \text{ for } j>n. \end{equation*}
It’s easy to check that this map is a surjection: given \(\ww\in W\text{,}\) we can write it in terms of our basis as \(\ww=c_1\ww_1+\cdots + c_n\ww_n\text{.}\) Using these same scalars, we can define \(\vv=c_1\vv_1+\cdots + c_n\vv_n\in V\) such that \(T(\vv)=\ww\text{.}\)
Note that it’s not important how we define \(T(\vv_j)\) when \(j>n\text{.}\) The point is that this time, we run out of basis vectors for \(W\) before we run out of basis vectors for \(V\text{.}\) Once each vector in the basis of \(W\) is in the image of \(T\text{,}\) we’re guaranteed that \(T\) is surjective, and we can define the value of \(T\) on any remaining basis vectors however we want.

2.3 Isomorphisms, composition, and inverses
2.3.2 Composition and inverses

Exercises

2.3.2.2.
Solution.
Let \(\ww_1,\ww_2\in W\text{.}\) Then there exist \(\vv_1,\vv_2\in V\) with \(\ww_1=T(\vv_1), \ww_2=T(\vv_2)\text{.}\) We then have
\begin{align*} T^{-1}(\ww_1+\ww_2) \amp = T^{-1}(T(\vv_1)+T(\vv_2)) \\ \amp = T^{-1}(T(\vv_1+\vv_2))\\ \amp = \vv_1+\vv_2\\ \amp = T^{-1}(\ww_1)+T^{-1}(\ww_2)\text{.} \end{align*}
For any scalar \(c\text{,}\) we similarly have
\begin{equation*} T^{-1}(c\ww_1) = T^{-1}(cT(\vv_1))=T^{-1}(T(c\vv_1)) = c\vv_1 = cT^{-1}(\ww_1)\text{.} \end{equation*}
2.3.2.4.
Answer.
\(-cx^{2}+\left(a+4c\right)x+4a+b+12c\)
2.3.2.5.
Answer 1.
\(\left(9.5x+\left(-4.5\right)y,-2x+1y\right)\)
Answer 2.
\(\left(0.333333x+0.666667y+\left(-0.666667\right)z,-0.333333x+0.333333y+0.666667z,0.333333x+\left(-0.333333\right)y+0.333333z\right)\)
Answer 3.
\(\frac{19}{2}\cdot 2-\frac{9}{2}\mathopen{}\left(-4\right)\)
Answer 4.
\(\frac{2}{2}\mathopen{}\left(-4\right)-\frac{4}{2}\cdot 2\)
Answer 5.
\(\frac{6+2\cdot 1\cdot \left(-3\right)-2\cdot 1}{3}\)
Answer 6.
\(\frac{-1\cdot 6+-3+2\cdot 1\cdot 1}{3}\)
Answer 7.
\(\frac{1\cdot 1\cdot 6-1\cdot \left(-3\right)+1}{3}\)

3 Orthogonality and Applications
3.1 Orthogonal sets of vectors
3.1.3 Exercises

3.1.3.1.

Solution.
This is an exercise in properties of the dot product. We have
\begin{align*} \len{\xx+\yy}^2 \amp = (\xx+\yy)\dotp (\xx+\yy) \\ \amp = \xx\dotp \xx+\xx\dotp\yy+\yy\dotp\xx+\yy\dotp\yy\\ \amp =\len{\xx}^2+2\xx\dotp\yy+\len{\yy}^2\text{.} \end{align*}

3.1.3.2.

Solution.
If \(\xx=\zer\text{,}\) then the result follows immediately from the dot product formula in Definition 3.1.1. Conversely, suppose \(\xx\dotp \vv_i=0\) for each \(i\text{.}\) Since the \(\vv_i\) span \(\R^n\text{,}\) there must exist scalars \(c_1,c_2,\ldots, c_k\) such that \(\xx=c_1\vv_1+c_2\vv_2+\cdots+c_k\vv_k\text{.}\) But then
\begin{align*} \xx\dotp\xx \amp = \xx\dotp (c_1\vv_1+c_2\vv_2+\cdots+c_k\vv_k) \\ \amp = c_1(\xx\dotp \vv_1)+ c_2(\xx\dotp \vv_2)+\cdots +c_k(\xx\dotp \vv_k)\\ \amp = c_1(0)+c_2(0)+\cdots + c_k(0)=0\text{.} \end{align*}

3.1.3.3.

Solution.
All three vectors are nonzero. To confirm the set is orthogonal, we simply compute dot products:
\begin{align*} (1,0,1,0)\dotp (-1,0,1,1)\amp =-1+0+1+0=0\\ (-1,0,1,1)\dotp (1,1,-1,2)\amp =-1+0-1+2=0\\ (1,0,1,0)\dotp (1,1,-1,2) \amp = 1+0-1+0=0\text{.} \end{align*}
To find a fourth vector, we proceed as follows. Let \(\xx=(a,b,c,d)\text{.}\) We want \(\xx\) to be orthogonal to the three vectors in our set. Computing dot products, we must have:
\begin{align*} (a,b,c,d)\dotp (1,0,1,0) \amp = a+c=0 \\ (a,b,c,d)\dotp (-1,0,1,1) \amp = -a+c+d=0 \\ (a,b,c,d)\dotp (1,1,-1,2) \amp = a+b-c+2d=0\text{.} \end{align*}
This is simply a homogeneous system of three equations in four variables. Using the Sage cell below, we find that our vector must satisfy \(a=\frac12 d, b = -3d, c=-\frac12 d\text{.}\)
One possible nonzero solution is to take \(d=2\text{,}\) giving \(\xx=(1,-6,-1,2)\text{.}\) We’ll leave the verification that this vector works as an exercise.
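The Sage cell referred to above is not reproduced here, but a minimal version of it might look like the following:
A = matrix(QQ, [[1, 0, 1, 0], [-1, 0, 1, 1], [1, 1, -1, 2]])
A.right_kernel().basis()   # one basis vector: (1, -6, -1, 2)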

3.1.3.4.

Solution.
We compute
\begin{align*} \left(\frac{\vv\dotp\xx_1}{\len{\xx_1}^2}\right)\xx_1 \amp +\left(\frac{\vv\dotp\xx_2}{\len{\xx_2}^2}\right)\xx_2 +\left(\frac{\vv\dotp\xx_3}{\len{\xx_3}^2}\right)\xx_3\\ \amp = \frac{4}{2}\xx_1+\frac{-9}{3}\xx_2+\frac{-28}{7}\xx_3\\ \amp = 2(1,0,1,0)-3(-1,0,1,1)-4(1,1,-1,2)\\ \amp = (1,-4,3,-11) = \vv\text{,} \end{align*}
so \(\vv\in\spn\{\xx_1,\xx_2,\xx_3\}\text{.}\)
On the other hand, repeating the same calculation with \(\ww\text{,}\) we find
\begin{align*} \left(\frac{\ww\dotp\xx_1}{\len{\xx_1}^2}\right)\xx_1 \amp +\left(\frac{\ww\dotp\xx_2}{\len{\xx_2}^2}\right)\xx_2 +\left(\frac{\ww\dotp\xx_3}{\len{\xx_3}^2}\right)\xx_3\\ \amp =\frac12 (1,0,1,0)-\frac53 (-1,0,1,1) +\frac47 (1,1,-1,2)\\ \amp = \left(\frac{73}{42},\frac47,-\frac{115}{42},-\frac{11}{21}\right)\neq \ww\text{,} \end{align*}
so \(\ww\notin\spn\{\xx_1,\xx_2,\xx_3\}\text{.}\)
Soon, we’ll see that the quantity we computed when showing that \(\ww\notin\spn\{\xx_1,\xx_2,\xx_3\}\) is, in fact, the orthogonal projection of \(\ww\) onto the subspace \(\spn\{\xx_1,\xx_2,\xx_3\}\text{.}\)

3.1.3.5.

Answer.
\(\sqrt{60}\)

3.1.3.6.

Answer 1.
\(\sqrt{42}\)
Answer 2.
\(\left[\begin{array}{c} 0.771517\cr 0.308607\cr -0.308607\cr -0.46291 \end{array}\right]\)

3.1.3.7.

Answer.
\(115\)
Solution.
Note that the distributive property, together with symmetry, let us handle this dot product using what is essentially “FOIL”:
\begin{equation*} \begin{aligned} (5\xx-3\yy)\dotp (\xx+5\yy)\amp = (5\xx)\dotp \xx+(5\xx)\dotp(5\yy)+(-3\yy)\dotp \xx+(-3\yy)\dotp(5\yy)\\ \amp = 5(\xx\dotp\xx)+(5\cdot 5)(\xx\dotp \yy)-3(\yy\dotp \xx)+(-3\cdot 5)(\yy\dotp\yy)\\ \amp = 5\len{\xx}^2+25\xx\dotp\yy-3\xx\dotp\yy-15\len{\yy}^2\\ \amp = 5(2)^2+22(5)-15(1)^2 = 115\text{.} \end{aligned} \end{equation*}

3.1.3.8.

Answer 1.
\(0\)
Answer 2.
\(-26\)
Answer 3.
\(0\)
Solution.
One checks by direct computation that \(a = 0\text{,}\) \(b = -26\text{,}\) and \(c = 0\) must hold.

3.1.3.9.

Answer.
\(\left[\begin{array}{c} 7\cr 0\cr 2 \end{array}\right], \left[\begin{array}{c} 5\cr 2\cr 0 \end{array}\right]\)

3.2 The Gram-Schmidt Procedure

Exercises

3.2.1.
Answer 1.
\(\left[\begin{array}{c}-5\\3\\0\\-3\\0\\1\\\end{array}\right]\)
Answer 2.
\(5\)
Answer 3.
\(44\)
Answer 4.
\(\left[\begin{array}{c}-2.43182\\-0.340909\\2\\3.34091\\0\\-1.11364\\\end{array}\right]\)
Answer 5.
\(1\)
Answer 6.
\(6.88636\)
Answer 7.
\(22.4318\)
Answer 8.
\(\left[\begin{array}{c}-0.139818\\0.0364742\\-0.613982\\-0.957447\\4\\-3.68085\\\end{array}\right]\)
3.2.2.
Answer 1.
\(\left[\begin{array}{c}3\\0\\4\\0\\4\\-3\\\end{array}\right]\)
Answer 2.
\(33\)
Answer 3.
\(50\)
Answer 4.
\(\left[\begin{array}{c}1.02\\-4\\-0.64\\0\\1.36\\1.98\\\end{array}\right]\)
Answer 5.
\(29\)
Answer 6.
\(1.86\)
Answer 7.
\(23.22\)
Answer 8.
\(\left[\begin{array}{c}1.17829\\0.320413\\1.73127\\-1\\-1.42894\\1.5814\\\end{array}\right]\)
3.2.3.
Answer 1.
\(\left[\begin{array}{c}0\\0\\3\\11\\1\\0\\\end{array}\right]\)
Answer 2.
\(\left[\begin{array}{c}1\\0\\1.81679\\-0.671756\\1.93893\\2\\\end{array}\right]\)
Answer 3.
\(\left[\begin{array}{c}1.23978\\14\\0.435631\\-0.161074\\0.464918\\-1.52044\\\end{array}\right]\)
3.2.4.
Answer 1.
\(\left[\begin{array}{c} -1\cr 0\cr 1\cr 3 \end{array}\right], \left[\begin{array}{c} 1\cr 3\cr 1\cr 0 \end{array}\right]\)
Answer 2.
\(\left[\begin{array}{c} 3\cr 6\cr 3\cr 6 \end{array}\right], \left[\begin{array}{c} -1.54545\cr -0.0909091\cr -2.54545\cr -2.09091 \end{array}\right]\)

3.3 Orthogonal Projection

Exercises

3.3.5.
Answer 1.
\({\frac{1}{3}}\)
Answer 2.
\(-{\frac{1}{6}}\)
3.3.6.
Answer.
\(\left[\begin{array}{c} -1\cr -1\cr 1 \end{array}\right]\)
3.3.7.
Answer.
\(\left[\begin{array}{c} -16\cr -5\cr 1\cr 0 \end{array}\right], \left[\begin{array}{c} 25\cr 7\cr 0\cr 1 \end{array}\right]\)
3.3.8.
Answer.
\(5.50348614864795\)
3.3.9.
Answer.
\(\left[\begin{array}{c} -3.65814\cr -1.26202\cr -5.4124\cr 7.85194 \end{array}\right]\)
3.3.10.
Answer.
\(\left[\begin{array}{c} 4.15789\cr 0.842105\cr -20.0526 \end{array}\right]\)

3.5 Project: dual basis

Exercise 3.5.1.

Solution.
We know that \(\dim V^* = \dim V=n\text{.}\) Since there are \(n\) vectors in the dual basis, it’s enough to show that they’re linearly independent. Suppose that
\begin{equation*} c_1\phi_1+c_2\phi_2+\cdots + c_n\phi_n=0 \end{equation*}
for some scalars \(c_1,c_2,\ldots, c_n\text{.}\)
This means that \((c_1\phi_1+c_2\phi_2+\cdots + c_n\phi_n)(v)=0\) for all \(v\in V\text{;}\) in particular, this must be true for the basis vectors \(v_1,\ldots, v_n\text{.}\)
By the definition of the dual basis, for each \(i=1,2,\ldots, n\) we have
\begin{equation*} (c_1\phi_1+c_2\phi_2+\cdots + c_n\phi_n)(v_i) = 0+\cdots + 0 +c_i(1)+0+\cdots + 0 = c_i = 0\text{.} \end{equation*}
Thus, \(c_i=0\) for each \(i\text{,}\) and therefore, the \(\phi_i\) are linearly independent.

Exercise 3.5.2.

Solution.
There are two things to check. First, we show that \(T^*(\phi)\in V^*\) for each \(\phi\in W^*\text{.}\) Since \(T:V\to W\) and \(\phi:W\to \R\text{,}\) it follows that \(T^*\phi = \phi\circ T\) is a map from \(V\) to \(\R\text{.}\) But we must also show that it's linear.
Given \(v_1, v_2\in V\text{,}\) we have
\begin{align*} (T^*\phi)(v_1+v_2) \amp = \phi(T(v_1+v_2))=\phi(T(v_1)+T(v_2)) \quad \text{ because } T \text{ is linear}\\ \amp =\phi(T(v_1))+\phi(T(v_2)) \quad \text{ because } \phi \text{ is linear}\\ \amp =(T^*\phi)(v_1)+(T^*\phi)(v_2)\text{.} \end{align*}
Similarly, for any scalar \(c\) and any \(v\in V\text{,}\)
\begin{equation*} (T^*\phi)(cv) = \phi(T(cv))=\phi(cT(v))=c(\phi(T(v)))=c((T^*\phi)(v))\text{.} \end{equation*}
This shows that \(T^*\phi\in V^*\text{.}\)
Next, we need to show that \(T^*:W^*\to V^*\) is a linear map. Let \(\phi,\psi\in W^*\text{,}\) and let \(c\) be a scalar. We have:
\begin{equation*} T^*(\phi+\psi) = (\phi+\psi)\circ T = \phi\circ T+\psi\circ T = T^*\phi+T^*\psi\text{,} \end{equation*}
and
\begin{equation*} T^*(c\phi) = (c\phi)\circ T = c(\phi\circ T) = cT^*\phi\text{.} \end{equation*}
This follows from the vector space structure on any space of functions: for a vector \(v\in V\text{,}\) we have
\begin{equation*} (T^*(c\phi))(v) = (c\phi(T(v)))=c(\phi(T(v)))=c((T^*\phi)(v))\text{.} \end{equation*}

Exercise 3.5.3.

Solution.
Let \(p\) be a polynomial. Then
\begin{equation*} (D^*\phi)(p) = \phi(D(p))=\phi(p') = \int_0^1 p'(x)\,dx\text{.} \end{equation*}
By the Fundamental Theorem of Calculus (or a tedious calculation, if you prefer), we get
\begin{equation*} (D^*\phi)(p) = p(1)-p(0)\text{.} \end{equation*}

Exercise 3.5.4.

Solution.
Let \(\phi\in W^*\text{.}\) We have
\begin{equation*} (S+T)^*(\phi) = \phi\circ(S+T) = \phi\circ S+\phi\circ T = S^*\phi+T^*\phi\text{,} \end{equation*}
since \(\phi\) is linear. Similarly,
\begin{equation*} (kS)^*(\phi) = \phi\circ (kS) = k(\phi\circ S) = k(S^*\phi)\text{.} \end{equation*}
Finally, we have
\begin{equation*} (ST)^*\phi = \phi\circ(ST) = \phi\circ(S\circ T) = (\phi\circ S)\circ T = T^*(\phi\circ S) = T^*(S^*\phi) = (T^*S^*)(\phi)\text{,} \end{equation*}
since composition is associative.

Exercise 3.5.5.

Solution.
As per the hint, suppose \(\phi = c_1\phi_1+c_2\phi_2+c_3\phi_3+c_4\phi_4\text{,}\) and that \(\phi\in U^0\text{.}\) Then
\begin{align*} \phi(2a+b,3b,a,a-2b) \amp = c_1\phi_1(2a+b,3b,a,a-2b)+c_2\phi_2(2a+b,3b,a,a-2b)\\ \amp \quad + c_3\phi_3(2a+b,3b,a,a-2b)+c_4\phi_4(2a+b,3b,a,a-2b)\\ \amp = c_1(2a+b)+c_2(3b)+c_3(a)+c_4(a-2b)\\ \amp = a(2c_1+c_3+c_4)+b(c_1+3c_2-2c_4)\text{.} \end{align*}
We wish for this to be zero for all possible values of \(a\) and \(b\text{.}\) Therefore, we must have
\begin{align*} 2c_1+c_3+c_4\amp =0\\ c_1+3c_2-2c_4\amp =0\text{.} \end{align*}
Solving gives us \(c_1=-\frac12 c_3-\frac12 c_4\) and \(c_2=\frac16 c_3+\frac56 c_4\text{,}\) so
\begin{align*} \phi \amp = \left(-\frac12 c_3-\frac12 c_4\right)\phi_1 +\left(\frac16 c_3+\frac56 c_4\right)\phi_2 + c_3\phi_3 + c_4\phi_4\\ \amp = c_3\left(-\frac12 \phi_1 + \frac16 \phi_2+\phi_3\right)+c_4\left(-\frac12\phi_1+\frac56 \phi_2 + \phi_4\right)\text{.} \end{align*}
This gives us the following basis for \(U^0\text{:}\)
\begin{equation*} \left\{\phi_3-\frac12 \phi_1+\frac16 \phi_2, \phi_4-\frac12\phi_1+\frac56\phi_2\right\}\text{.} \end{equation*}
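As a quick check, applying the first basis vector to a typical element of \(U\) gives
\begin{equation*} \left(\phi_3-\frac12\phi_1+\frac16\phi_2\right)(2a+b,3b,a,a-2b) = a-\frac12(2a+b)+\frac16(3b)=0\text{,} \end{equation*}
and a similar computation works for the second basis vector.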

4 Diagonalization
4.1 Eigenvalues and Eigenvectors

Exercises

4.1.1.
Answer.
\(x^{3}-5x^{2}+8x+20\)
4.1.2.
Answer.
\(-1, -4, 7\)
4.1.3.
Answer 1.
\(0\)
Answer 2.
\(\left[\begin{array}{c} 1\cr 1\cr -1 \end{array}\right]\)
Answer 3.
\(-4\)
Answer 4.
\(\left[\begin{array}{c} -1\cr -2\cr 1 \end{array}\right], \left[\begin{array}{c} -3\cr -3\cr 2 \end{array}\right]\)
4.1.4.
Answer 1.
\(0\)
Answer 2.
\(\left[\begin{array}{c} 2\cr 1\cr 1\cr 1 \end{array}\right]\)
Answer 3.
\(3\)
Answer 4.
\(\left[\begin{array}{c} -1\cr 1\cr 0\cr 0 \end{array}\right], \left[\begin{array}{c} 0\cr 1\cr 0\cr -1 \end{array}\right]\)
4.1.5.
Answer 1.
\(-0.148148\)
Answer 2.
\(0\)
4.1.6.
Answer 1.
\(0\)
Answer 2.
\(3\)
Answer 3.
\(3\)
Answer 4.
\(2\)
Answer 5.
\(1\)
Answer 6.
\(1\)
4.1.7.
Answer 1.
\(3^{8}\)
Answer 2.
\(\frac{1}{3}\)
Answer 3.
\(3+\left(-3\right)\)
Answer 4.
\(-3\cdot 3\)
4.1.8.
Answer 1.
\(-2\)
Answer 2.
\(-1\)
Answer 3.
\(1\)
Answer 4.
\(\left[\begin{array}{c} -4\cr -6\cr 2 \end{array}\right]\)
4.1.9.
Answer 1.
\(\left[\begin{array}{cc} 1 &0\cr 0 &1 \end{array}\right]\)
Answer 2.
\(\left[\begin{array}{cc} 3 &-1\cr 2 &-1 \end{array}\right]\)
Answer 3.
\(\left[\begin{array}{cc} 3 &-4\cr 1 &-1 \end{array}\right]\)

4.2 Diagonalization of symmetric matrices

Exercise 4.2.1.

Solution.
Take \(\xx=\mathbf{e}_i\) and \(\yy=\mathbf{e}_j\text{,}\) where \(\{\mathbf{e}_1,\ldots, \mathbf{e}_n\}\) is the standard basis for \(\R^n\text{.}\) Then with \(A = [a_{ij}]\) we have
\begin{equation*} a_{ij} =\mathbf{e}_i\dotp(A\mathbf{e}_j) = (A\mathbf{e}_i)\dotp \mathbf{e}_j = a_{ji}\text{,} \end{equation*}
which shows that \(A^T=A\text{.}\)

Exercises

4.2.1.
Answer 1.
\(-50\)
Answer 2.
\(\left[\begin{array}{c} -0.316228\cr -0.948683 \end{array}\right]\)
Answer 3.
\(40\)
Answer 4.
\(\left[\begin{array}{c} 0.948683\cr -0.316228 \end{array}\right]\)
4.2.2.
Answer 1.
\(0\)
Answer 2.
\(\left[\begin{array}{c} 0.316228\cr -0.948683 \end{array}\right]\)
Answer 3.
\(30\)
Answer 4.
\(\left[\begin{array}{c} 0.948683\cr 0.316228 \end{array}\right]\)
4.2.3.
Answer 1.
\(-4\)
Answer 2.
\(\left[\begin{array}{c} -0.707107\cr 0.707107\cr 0 \end{array}\right]\)
Answer 3.
\(0\)
Answer 4.
\(\left[\begin{array}{c} 0.408248\cr 0.408248\cr -0.816497 \end{array}\right]\)
Answer 5.
\(9\)
Answer 6.
\(\left[\begin{array}{c} 0.57735\cr 0.57735\cr 0.57735 \end{array}\right]\)

4.4 Quadratic forms

Exercises

4.4.1.
Answer.
\(\left[\begin{array}{ccc} 7 &4.5 &-3\cr 4.5 &-1 &2\cr -3 &2 &-3 \end{array}\right]\)
4.4.2.
Answer.
\(9x_{1}^{2}-5x_{2}^{2}+4x_{3}^{2}-16x_{1}x_{2}+10x_{1}x_{3}+18x_{2}x_{3}\)
4.4.3.
Answer 1.
\(2, 3, 6\)
Answer 2.
\(\text{Positive definite}\)

4.5 Diagonalization of complex matrices
4.5.2 Complex matrices

Exercise 4.5.8.

Solution.
We have \(\bar{A}=\bbm 4\amp 1+i\amp -2-3i\\1-i\amp 5 \amp -7i\\-2+3i\amp 7i\amp -4\ebm\text{,}\) so
\begin{equation*} A^H = (\bar{A})^T = \bbm 4\amp 1-i\amp -2+3i\\1+i\amp 5\amp 7i\\-2-3i\amp -7i\amp -4\ebm = A\text{,} \end{equation*}
and
\begin{align*} BB^H \amp =\frac14\bbm 1+i\amp \sqrt{2}\\1-i\amp\sqrt{2}i\ebm\bbm 1-i\amp 1+i\\\sqrt{2}\amp-\sqrt{2}i\ebm \\ \amp =\frac14\bbm (1+i)(1-i)+2\amp (1+i)(1+i)-2i\\(1-i)(1-i)+2i\amp (1-i)(1+i)+2\ebm\\ \amp =\frac14\bbm 4\amp 0\\0\amp 4\ebm = \bbm 1\amp 0\\0\amp 1\ebm\text{,} \end{align*}
so that \(B^H = B^{-1}\text{.}\)

Exercise 4.5.12.

Solution.
Confirming that \(A^H=A\) is almost immediate. We will use the computer below to compute the eigenvalues and eigenvectors of \(A\text{,}\) but it’s useful to attempt this at least once by hand. We have
\begin{align*} \det(zI-A) \amp = \det\bbm z-4 \amp -3+i\\-3-i\amp z-1\ebm\\ \amp = (z-4)(z-1)-(-3-i)(-3+i)\\ \amp = z^2-5z+4-10\\ \amp = (z+1)(z-6)\text{,} \end{align*}
so the eigenvalues are \(\lambda_1=-1\) and \(\lambda_2=6\text{,}\) which are both real, as expected.
Finding eigenvectors can seem trickier than with real numbers, mostly because it is no longer immediately apparent when one row of a matrix is a multiple of another. But we know that the rows of \(A-\lambda I\) must be parallel for a \(2\times 2\) matrix, which lets us proceed nonetheless.
For \(\lambda_1=-1\text{,}\) we have
\begin{equation*} A + I =\bbm 5 \amp 3-i\\3+i\amp 2\ebm\text{.} \end{equation*}
There are two ways one can proceed from here. We could use row operations to get to the reduced row-echelon form of \(A+I\text{.}\) If we take this approach, we multiply row 1 by \(\frac15\text{,}\) and then take \(-3-i\) times the new row 1 and add it to row 2, to create a zero, and so on.
Easier is to realize that if we haven’t made a mistake calculating our eigenvalues, then the above matrix can’t be invertible, so there must be some nonzero vector in the kernel. If \((A+I)\bbm a\\b\ebm=\bbm0\\0\ebm\text{,}\) then we must have
\begin{equation*} 5a+(3-i)b=0\text{,} \end{equation*}
when we multiply by the first row of \(A+I\text{.}\) This suggests that we take \(a=3-i\) and \(b=-5\text{,}\) to get \(\zz = \bbm 3-i\\-5\ebm\) as our first eigenvector. To make sure we've done things correctly, we multiply by the second row of \(A+I\text{:}\)
\begin{equation*} (3+i)(3-i)+2(-5) = 10-10 = 0\text{.} \end{equation*}
Success! Now we move on to the second eigenvalue.
For \(\lambda_2=6\text{,}\) we get
\begin{equation*} A-6I = \bbm -2\amp 3-i\\3+i\amp -5\ebm\text{.} \end{equation*}
If we attempt to read off the answer like last time, the first row of \(A-6I\) suggests the vector \(\ww = \bbm 3-i\\2\ebm\text{.}\) Checking the second row to confirm, we find:
\begin{equation*} (3+i)(3-i)-5(2) = 10-10=0\text{,} \end{equation*}
as before.
Finally, we note that
\begin{equation*} \langle \zz, \ww\rangle = (3-i)\overline{(3-i)}+(-5)(2) = (3-i)(3+i)-10 = 0\text{,} \end{equation*}
so the two eigenvectors are orthogonal, as expected. We have
\begin{equation*} \len{\zz}=\sqrt{10+25}=\sqrt{35} \quad \text{ and } \quad \len{\ww}=\sqrt{10+4}=\sqrt{14}\text{,} \end{equation*}
so our unitary matrix is
\begin{equation*} U = \bbm \frac{3-i}{\sqrt{35}}\amp \frac{3-i}{\sqrt{14}}\\-\frac{5}{\sqrt{35}}\amp \frac{2}{\sqrt{14}}\ebm\text{.} \end{equation*}
With a bit of effort, we can finally confirm that
\begin{equation*} U^HAU = \bbm -1\amp 0\\0\amp 6\ebm\text{,} \end{equation*}
as expected.
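A possible Sage check of the computations above, working over Sage's field QQbar of algebraic numbers:
A = matrix(QQbar, [[4, 3-I], [3+I, 1]])
z = vector(QQbar, [3-I, -5])
w = vector(QQbar, [3-I, 2])
print(A.conjugate_transpose() == A)   # True: A is Hermitian
print(A*z == -z, A*w == 6*w)          # True True: the eigenvectors check out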

4.5.3 Exercises

4.5.3.1.

Answer 1.
\(-6+2i\)
Answer 2.
\(\left[\begin{array}{c} 2-3i\cr 1\cr -8i \end{array}\right]\)

4.5.3.2.

Answer.
\(\left[\begin{array}{cc} 0 &-1\cr 1 &0 \end{array}\right]\)

4.5.3.3.

Answer.
\(-1, 3.31662i, -3.31662i\)

4.5.3.4.

Answer.
\(2, -2, 3.31662i, -3.31662i\)

4.5.3.5.

Answer.
\(\left[\begin{array}{cc} 9.48683^{n}\cos\mathopen{}\left(1.24905n\right) &-9.48683^{n}\sin\mathopen{}\left(1.24905n\right)\cr 9.48683^{n}\sin\mathopen{}\left(1.24905n\right) &9.48683^{n}\cos\mathopen{}\left(1.24905n\right)\cr \end{array}\right]\)

4.5.3.6.

Answer.
\(\left[\begin{array}{ccc} 5.83095^{n}\cos\mathopen{}\left(0.54042n\right) &-5.83095^{n}\sin\mathopen{}\left(0.54042n\right) &5.83095^{n}\mathopen{}\left(1\cos\mathopen{}\left(0.54042n\right)+2\sin\mathopen{}\left(0.54042n\right)\right)+\left(-1\right)\cdot 1^{n}\cr 5.83095^{n}\sin\mathopen{}\left(0.54042n\right) &5.83095^{n}\cos\mathopen{}\left(0.54042n\right) &5.83095^{n}\mathopen{}\left(1\sin\mathopen{}\left(0.54042n\right)-2\cos\mathopen{}\left(0.54042n\right)\right)+2\cdot 1^{n}\cr 0 &0 &1^{n}\cr \end{array}\right]\)

4.5.3.7.

Answer 1.
\(-5\)
Answer 2.
\(11\)
Answer 3.
\(-6\)

4.7 Matrix Factorizations and Eigenvalues
4.7.3 Exercises

4.7.3.1.

Answer.
\(9, 4\)

4.7.3.2.

Answer.
\(8.24621, 8.24621, 0\)

4.7.3.3.

Answer 1.
\(\left[\begin{array}{cc} -0.428571 &-0.285714\cr -0.285714 &-0.857143\cr -0.857143 &0.428571 \end{array}\right]\)
Answer 2.
\(\left[\begin{array}{cc} 7 &21\cr 0 &7 \end{array}\right]\)

4.7.3.4.

Answer 1.
\(\left[\begin{array}{ccc} -0.333333 &0.666667 &0.666667\cr -0.666667 &-0.666667 &0.333333\cr -0.666667 &0.333333 &-0.666667 \end{array}\right]\)
Answer 2.
\(\left[\begin{array}{ccc} 9 &3 &-3\cr 0 &3 &3\cr 0 &0 &6 \end{array}\right]\)

5 Change of Basis
5.1 The matrix of a linear transformation

Exercise 5.1.2.

Solution.
It’s clear that \(C_B(\zer)=\zer\text{,}\) since the only way to write the zero vector in \(V\) in terms of \(B\) (or, indeed, any independent set) is to set all the scalars equal to zero.
If we have two vectors \(\vv,\ww\) given by
\begin{align*} \vv \amp = a_1\mathbf{e}_1+\cdots + a_n\mathbf{e}_n \\ \ww \amp = b_1\mathbf{e}_1+\cdots + b_n\mathbf{e}_n\text{,} \end{align*}
then
\begin{equation*} \vv+\ww = (a_1+b_1)\mathbf{e}_1+\cdots + (a_n+b_n)\mathbf{e}_n\text{,} \end{equation*}
so
\begin{align*} C_B(\vv+\ww) \amp = \bbm a_1+b_1\\\vdots \\ a_n+b_n\ebm \\ \amp = \bbm a_1\\\vdots\\a_n\ebm +\bbm b_1\\\vdots \\b_n\ebm\\ \amp = C_B(\vv)+C_B(\ww)\text{.} \end{align*}
Finally, for any scalar \(c\text{,}\) we have
\begin{align*} C_B(c\vv) \amp = C_B((ca_1)\mathbf{e}_1+\cdots +(ca_n)\mathbf{e}_n)\\ \amp = \bbm ca_1\\\vdots \\ca_n\ebm\\ \amp =c\bbm a_1\\\vdots \\a_n\ebm\\ \amp =cC_B(\vv)\text{.} \end{align*}
This shows that \(C_B\) is linear. To see that \(C_B\) is an isomorphism, we can simply note that \(C_B\) takes the basis \(B\) to the standard basis of \(\R^n\text{.}\) Alternatively, we can give the inverse: \(C_B^{-1}:\R^n\to V\) is given by
\begin{equation*} C_B^{-1}\bbm c_1\\\vdots \\c_n\ebm = c_1\mathbf{e}_1+\cdots +c_n\mathbf{e}_n\text{.} \end{equation*}

Exercises

5.1.1.
Solution.
We must first write our general input in terms of the given basis. With respect to the standard basis
\begin{equation*} B_0 = \left\{\bbm 1\amp 0\\0\amp 0\ebm, \bbm 0\amp 1\\0\amp 0\ebm, \bbm 0\amp 0\\1\amp 0\ebm, \bbm 0\amp 0\\0\amp 1\ebm\right\}\text{,} \end{equation*}
we have the matrix \(P = \bbm 1\amp 0\amp 0\amp 1\\0\amp 1\amp 1\amp 0\\0\amp 0\amp 1\amp 0\\0\amp 1\amp 0\amp 1\ebm\text{,}\) representing the change from the basis \(B\) to the basis \(B_0\text{.}\) The basis \(D\) of \(P_2(\R)\) is already the standard basis, so we need the matrix \(M_{DB}(T)P^{-1}\text{:}\)
For a matrix \(X = \bbm a\amp b\\c\amp d\ebm\) we find
\begin{equation*} M_{DB}(T)P^{-1}C_{B_0}(X)=\bbm 2\amp -2\amp 2\amp 1\\0\amp 3\amp -8\amp 1\\-1\amp 1\amp 2\amp -1\ebm\bbm a\\b\\c\\d\ebm = \bbm 2a-2b+2c+d\\3b-8c+d\\-a+b+2c-d\ebm\text{.} \end{equation*}
But this is equal to \(C_D(T(X))\text{,}\) so
\begin{align*} T\left(\bbm a\amp b\\c\amp d\ebm\right) \amp = C_D^{-1}\bbm 2a-2b+2c+d\\3b-8c+d\\-a+b+2c-d\ebm\\ \amp = (2a-2b+2c+d)+(3b-8c+d)x+(-a+b+2c-d)x^2\text{.} \end{align*}
5.1.2.
Answer.
\(\left[\begin{array}{cccc} 0 &0.75 &0 &0\cr 0 &0 &2 &0\cr 0 &0 &0 &2 \end{array}\right]\)
5.1.3.
Answer.
\(\left[\begin{array}{cccc} 0 &0 &2 &6\cr 0 &-1 &2 &3\cr 0 &-1 &0 &0 \end{array}\right]\)
5.1.4.
Answer.
\(\left[\begin{array}{cccc} 1 &0 &-1 &-1\cr -2 &-2 &0 &4\cr -3 &-3 &0 &3 \end{array}\right]\)
5.1.5.
Answer.
\(\left[\begin{array}{ccc} 20 &-14 &-19\cr 8 &-5 &-7 \end{array}\right]\)
5.1.6.
Answer.
\(\left[\begin{array}{cc} -5 &5\cr 0 &-4\cr -9 &17 \end{array}\right]\)

5.2 The matrix of a linear operator

Exercise 5.2.4.

Solution.
With respect to the standard basis, we have
\begin{equation*} M_0=M_{B_0}(T) = \bbm 3\amp -2\amp 4\\1\amp -5\amp 0\\0\amp 2\amp -7\ebm\text{,} \end{equation*}
and the matrix \(P\) is given by \(P = \bbm 1\amp 3\amp 1\\2\amp -1\amp 2\\0\amp 2\amp-5\ebm\text{.}\) Thus, we find
\begin{equation*} M_B(T)=P^{-1}M_0P=\bbm 9\amp 56\amp 36\\7\amp 15\amp 15\\-10\amp -46\amp -33\ebm\text{.} \end{equation*}
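If you'd like to avoid inverting \(P\) by hand, a possible Sage computation:
M0 = matrix(QQ, [[3, -2, 4], [1, -5, 0], [0, 2, -7]])
P = matrix(QQ, [[1, 3, 1], [2, -1, 2], [0, 2, -5]])
P.inverse() * M0 * P   # should reproduce the matrix M_B(T) above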

Exercises

5.2.1.
Answer 1.
\(\left[\begin{array}{cc} 8 &7\cr 3 &3 \end{array}\right]\)
Answer 2.
\(\left[\begin{array}{cc} 14 &-9\cr 5 &-3 \end{array}\right]\)
5.2.2.
5.2.2.a
Answer.
\(\left[\begin{array}{cc} 49 &-132\cr 18 &-49 \end{array}\right]\)
5.2.2.b
Answer.
\(\left[\begin{array}{cc} 17 &12\cr -22 &-17 \end{array}\right]\)
5.2.2.c
Answer.
\(\left[\begin{array}{cc} 11 &8\cr 4 &3 \end{array}\right]\)
5.2.2.d
Answer.
\(\left[\begin{array}{cc} 3 &-8\cr -4 &11 \end{array}\right]\)

5.7 Jordan Canonical Form

Exercise 5.7.7.

Solution.
With respect to the standard basis of \(\R^4\text{,}\) the matrix of \(T\) is
\begin{equation*} M = \bbm 1\amp 1\amp 0\amp 0\\0\amp 1\amp 0\amp 0\\0\amp -1\amp 2\amp 0\\1\amp -1\amp 1\amp 1\ebm\text{.} \end{equation*}
We find (perhaps using the Sage cell provided below, and the code from the example above) that
\begin{equation*} c_T(x)=(x-1)^3(x-2)\text{,} \end{equation*}
so \(T\) has eigenvalues \(1\) (of multiplicity \(3\)), and \(2\) (of multiplicity \(1\)).
We tackle the repeated eigenvalue first. The reduced row-echelon form of \(M-I\) is given by
\begin{equation*} R_1 = \bbm 1\amp 0\amp 0\amp 0\\0\amp 1\amp 0\amp 0\\0\amp 0\amp 1\amp 0\\0\amp 0\amp 0\amp 0\ebm\text{,} \end{equation*}
so
\begin{equation*} E_1(M) = \spn\{\xx_1\}, \text{ where } \xx_1 = \bbm 0\\0\\0\\1\ebm\text{.} \end{equation*}
We now attempt to solve \((M-I)\xx=\xx_1\text{.}\) We find
\begin{equation*} \left(\begin{matrix}0\amp 1\amp 0\amp 0\\0\amp 0\amp 0\amp 0\\0\amp -1\amp 1\amp 0\\1\amp -1\amp 1\amp 0\end{matrix}\right|\left.\begin{matrix}0\\0\\0\\1\end{matrix}\right) \xrightarrow{\text{RREF}} \left(\begin{matrix} 1\amp 0\amp 0\amp 0\\0\amp 1\amp 0\amp 0\\0\amp 0\amp 1\amp 0\\0\amp 0\amp 0\amp 0\end{matrix}\right|\left.\begin{matrix}1\\0\\0\\0\end{matrix}\right)\text{,} \end{equation*}
so \(\xx = t\xx_1+\xx_2\text{,}\) where \(\xx_2 = \bbm 1\\0\\0\\0\ebm\text{.}\) We take \(\xx_2\) as our first generalized eigenvector. Note that \((M-I)^2\xx_2 = (M-I)\xx_1=\zer\text{,}\) so \(\xx_2\in \nll (M-I)^2\text{,}\) as expected.
Finally, we look for an element of \(\nll (M-I)^3\) of the form \(\xx_3\text{,}\) where \((M-I)\xx_3=\xx_2\text{.}\) We set up and solve the system \((M-I)\xx=\xx_2\) as follows:
\begin{equation*} \left(\begin{matrix}0\amp 1\amp 0\amp 0\\0\amp 0\amp 0\amp 0\\0\amp -1\amp 1\amp 0\\1\amp -1\amp 1\amp 0\end{matrix}\right|\left.\begin{matrix}1\\0\\0\\0\end{matrix}\right) \xrightarrow{\text{RREF}} \left(\begin{matrix} 1\amp 0\amp 0\amp 0\\0\amp 1\amp 0\amp 0\\0\amp 0\amp 1\amp 0\\0\amp 0\amp 0\amp 0\end{matrix}\right|\left.\begin{matrix}0\\1\\1\\0\end{matrix}\right)\text{,} \end{equation*}
so \(\xx = t\xx_1+\xx_3\text{,}\) where \(\xx_3 =\bbm 0\\1\\1\\0\ebm\text{.}\)
Finally, we deal with the eigenvalue \(2\text{.}\) The reduced row-echelon form of \(M-2I\) is
\begin{equation*} R_2 = \bbm 1\amp 0\amp 0\amp 0\\0\amp 1\amp 0\amp 0\\0\amp 0\amp 1\amp -1\\0\amp 0\amp 0\amp 0\ebm\text{,} \end{equation*}
so
\begin{equation*} E_2(M) = \spn\{\yy\}, \text{ where } \yy = \bbm 0\\0\\1\\1\ebm\text{.} \end{equation*}
Our basis of column vectors is therefore \(B=\{\xx_1,\xx_2,\xx_3,\yy\}\text{.}\) Note that by design,
\begin{align*} M\xx_1 \amp =\xx_1\\ M\xx_2 \amp =\xx_1+\xx_2\\ M\xx_3 \amp= \xx_2+\xx_3\\ M\yy \amp = 2\yy\text{.} \end{align*}
The corresponding Jordan basis for \(\R^4\) is
\begin{equation*} \{(0,0,0,1),(1,0,0,0),(0,1,1,0),(0,0,1,1)\}\text{,} \end{equation*}
and with respect to this basis, we have
\begin{equation*} M_B(T) = \bbm 1\amp 1\amp 0\amp 0\\ 0\amp 1\amp 1\amp 0\\ 0\amp 0\amp 1\amp 0\\ 0\amp 0\amp 0\amp 2\ebm\text{.} \end{equation*}
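As a final check, Sage's built-in jordan_form method should reproduce this result, possibly with the blocks listed in a different order:
M = matrix(QQ, [[1, 1, 0, 0], [0, 1, 0, 0], [0, -1, 2, 0], [1, -1, 1, 1]])
J, Q = M.jordan_form(transformation=True)
J   # one 3x3 Jordan block for eigenvalue 1, and a 1x1 block for eigenvalue 2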

Exercises

5.7.1.
Answer.
\(\left(x-1\right)\mathopen{}\left(x-1\right)\mathopen{}\left(x-3\right)\)
5.7.2.
Answer.
\(\left[\begin{array}{cccc} 0 &-1 &-1 &-1\cr 2 &1 &-3 &-2\cr 0 &1 &-2 &0\cr -1 &1 &-1 &1 \end{array}\right], \left[\begin{array}{cccc} -3 &1 &0 &0\cr 0 &-3 &0 &0\cr 0 &0 &1 &0\cr 0 &0 &0 &1 \end{array}\right]\)
5.7.3.
Answer.
\(\left[\begin{array}{cccc} 1 &-2 &-1 &0\cr 1 &0 &-3 &1\cr 1 &0 &2 &2\cr 0 &1 &2 &1 \end{array}\right], \left[\begin{array}{cccc} -2 &0 &0 &0\cr 0 &-2 &1 &0\cr 0 &0 &-2 &0\cr 0 &0 &0 &1 \end{array}\right]\)