
Appendix C Solutions to Selected Exercises

1 Vector spaces
1.2 Properties

Exercise 1.2.2.

1.2.2.a
Solution.
Suppose \(\uu+\vv=\uu+\ww\text{.}\) By adding \(-\uu\) on the left of each side, we obtain:
\begin{align*} -\uu+(\uu+\vv) \amp =-\uu+(\uu+\ww)\\ (-\uu+\uu)+\vv \amp =(-\uu+\uu)+\ww \quad \text{ by A3}\\ \zer+\vv \amp =\zer+\ww \quad \text{ by A5}\\ \vv \amp =\ww \quad \text{ by A4}\text{,} \end{align*}
which is what we needed to show.
1.2.2.b
Solution.
We have \(c\zer = c(\zer+\zer) = c\zer +c\zer\text{,}\) by A4 and S2, respectively. Adding \(-c\zer\) to both sides gives us
\begin{equation*} -c\zer+c\zer = -c\zer+(c\zer+c\zer)\text{.} \end{equation*}
Using associativity (A3), this becomes
\begin{equation*} -c\zer+c\zer = (-c\zer+c\zer)+c\zer\text{,} \end{equation*}
and since \(-c\zer+c\zer=\zer\) by A5, we get \(\zer =\zer+c\zer\text{.}\) Finally, we apply A4 on the right hand side to get \(\zer=c\zer\text{,}\) as required.
1.2.2.c
Solution.
Suppose there are two vectors \(\zer_1,\zer_2\) that act as additive identities. Then
\begin{align*} \zer_1 \amp = \zer_1+\zer_2 \quad \text{ since } \vv+\zer_2=\vv \text{ for any } \vv\\ \amp =\zer_2+\zer_1 \quad \text{ by axiom A2}\\ \amp =\zer_2 \quad \text{ since } \vv+\zer_1=\vv \text{ for any } \vv\text{.} \end{align*}
So any two vectors satisfying the property in A4 must, in fact, be the same.
1.2.2.d
Solution.
Let \(\vv\in V\text{,}\) and suppose there are vectors \(\ww_1,\ww_2\in V\) such that \(\vv+\ww_1=\zer\) and \(\vv+\ww_2=\zer\text{.}\) Then
\begin{align*} \ww_1 \amp = \ww_1+\zer \quad \text{ by A4}\\ \amp = \ww_1+(\vv+\ww_2) \quad \text{ by assumption}\\ \amp = (\ww_1+\vv)+\ww_2 \quad \text{ by A3}\\ \amp = (\vv+\ww_1)+\ww_2 \quad \text{ by A2}\\ \amp = \zer+\ww_2 \quad \text{ by assumption}\\ \amp = \ww_2 \quad \text{ by A4}\text{.} \end{align*}

1.8 New subspaces from old

Exercise 1.8.7.

1.8.7.a
Solution.
If \((x,y,z)\in U\text{,}\) then \(z=0\text{,}\) and if \((x,y,z)\in W\text{,}\) then \(x=0\text{.}\) Therefore, \((x,y,z)\in U\cap W\) if and only if \(x=z=0\text{,}\) so \(U\cap W = \{(0,y,0)\,|\, y\in\R\}\text{.}\)
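If desired, this can be double-checked in Sage, which can intersect subspaces directly. A minimal sketch, using spanning sets for \(U\) and \(W\) read off from the descriptions above:

V = QQ^3
U = V.subspace([(1,0,0), (0,1,0)])   # vectors with z = 0
W = V.subspace([(0,1,0), (0,0,1)])   # vectors with x = 0
U.intersection(W)                    # one-dimensional, spanned by (0, 1, 0)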
1.8.7.b
Solution.
There are in fact infinitely many ways to do this. Here are three possibilities:
\begin{align*} \vv \amp = (1,1,0)+(0,0,1) \\ \amp = (1,0,0)+(0,1,1)\\ \amp = \left(1,\frac12,0\right)+\left(0,\frac12,1\right)\text{.} \end{align*}

2 Linear Transformations
2.2 Kernel and Image

Exercise 2.2.16.

2.2.16.a
Solution.
Suppose \(T:V\to W\) is injective. Then \(\ker T = \{\zer\}\text{,}\) so
\begin{equation*} \dim V = 0 + \dim \im T \leq \dim W\text{,} \end{equation*}
since \(\im T\) is a subspace of \(W\text{.}\)
Conversely, suppose \(\dim V\leq \dim W\text{.}\) Choose a basis \(\{\vv_1,\ldots, \vv_m\}\) of \(V\text{,}\) and a basis \(\{\ww_1,\ldots, \ww_n\}\) of \(W\text{,}\) where \(m\leq n\text{.}\) By Theorem 2.1.8, there exists a linear transformation \(T:V\to W\) with \(T(\vv_i)=\ww_i\) for \(i=1,\ldots, m\text{.}\) (The main point here is that we run out of basis vectors for \(V\) before we run out of basis vectors for \(W\text{.}\)) This map is injective: if \(T(\vv)=\zer\text{,}\) write \(\vv=c_1\vv_1+\cdots + c_m\vv_m\text{.}\) Then
\begin{align*} \zer \amp = T(\vv)\\ \amp = T(c_1\vv_1+\cdots + c_m\vv_m)\\ \amp = c_1T(\vv_1)+\cdots + c_mT(\vv_m)\\ \amp = c_1\ww_1+\cdots +c_m\ww_m\text{.} \end{align*}
Since \(\{\ww_1,\ldots, \ww_m\}\) is a subset of a basis, it’s independent. Therefore, the scalars \(c_i\) must all be zero, and therefore \(\vv=\zer\text{.}\)
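For a concrete (hypothetical) instance of this construction, take \(V=\R^2\) and \(W=\R^3\text{,}\) and send the standard basis of \(\R^2\) to the first two standard basis vectors of \(\R^3\text{.}\) In Sage:

T = matrix(QQ, [[1,0], [0,1], [0,0]])   # columns are T(v1) = w1, T(v2) = w2
T.right_kernel()                        # dimension 0, so T is injective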
2.2.16.b
Solution.
Suppose \(T:V\to W\) is surjective. Then \(\dim \im T = \dim W\text{,}\) so
\begin{equation*} \dim V = \dim \ker T + \dim W \geq \dim W\text{.} \end{equation*}
Conversely, suppose \(\dim V\geq \dim W\text{.}\) Again, choose a basis \(\{\vv_1,\ldots, \vv_m\}\) of \(V\text{,}\) and a basis \(\{\ww_1,\ldots, \ww_n\}\) of \(W\text{,}\) where this time, \(m\geq n\text{.}\) We can define a linear transformation as follows:
\begin{equation*} T(\vv_1)=\ww_1,\ldots, T(\vv_n)=\ww_n, \text{ and } T(\vv_j) = \zer \text{ for } j>n. \end{equation*}
It’s easy to check that this map is a surjection: given \(\ww\in W\text{,}\) we can write it in terms of our basis as \(\ww=c_1\ww_1+\cdots + c_n\ww_n\text{.}\) Using these same scalars, we can define \(\vv=c_1\vv_1+\cdots + c_n\vv_n\in V\) such that \(T(\vv)=\ww\text{.}\)
Note that it’s not important how we define \(T(\vv_j)\) when \(j>n\text{.}\) The point is that this time, we run out of basis vectors for \(W\) before we run out of basis vectors for \(V\text{.}\) Once each vector in the basis of \(W\) is in the image of \(T\text{,}\) we’re guaranteed that \(T\) is surjective, and we can define the value of \(T\) on any remaining basis vectors however we want.
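Again, a concrete (hypothetical) instance, with \(V=\R^3\) and \(W=\R^2\text{:}\)

T = matrix(QQ, [[1,0,0], [0,1,0]])   # T(v1) = w1, T(v2) = w2, T(v3) = 0
T.rank()                             # 2 = dim W, so T is surjective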

3 Orthogonality and Applications
3.5 Project: dual basis

Exercise 3.5.1.

Solution.
We know that \(\dim V^* = \dim V=n\text{.}\) Since there are \(n\) vectors in the dual basis, it’s enough to show that they’re linearly independent. Suppose that
\begin{equation*} c_1\phi_1+c_2\phi_2+\cdots + c_n\phi_n=0 \end{equation*}
for some scalars \(c_1,c_2,\ldots, c_n\text{.}\)
This means that \((c_1\phi_1+c_2\phi_2+\cdots + c_n\phi_n)(v)=0\) for all \(v\in V\text{;}\) in particular, this must be true for the basis vectors \(v_1,\ldots, v_n\text{.}\)
By the definition of the dual basis, for each \(i=1,2,\ldots, n\) we have
\begin{equation*} (c_1\phi_1+c_2\phi_2+\cdots + c_n\phi_n)(v_i) = 0+\cdots + 0 +c_i(1)+0+\cdots + 0 = c_i = 0\text{.} \end{equation*}
Thus, \(c_i=0\) for each \(i\text{,}\) and therefore, the \(\phi_i\) are linearly independent.
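For a concrete (hypothetical) example in \(\R^2\text{:}\) if the basis vectors are placed as the columns of an invertible matrix \(P\text{,}\) then the rows of \(P^{-1}\) represent the dual basis, since \(P^{-1}P=I\) says precisely that row \(i\) applied to column \(j\) gives \(\delta_{ij}\text{.}\) A Sage sketch:

P = column_matrix(QQ, [(2,1), (1,1)])   # basis vectors as columns
D = P.inverse()                         # row i of D represents phi_i
D*P                                     # the identity: phi_i(v_j) = delta_ij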

Exercise 3.5.2.

Solution.
There are two things to check. First, we show that \(T^*(\phi)\in V^*\) for each \(\phi\in W^*\text{.}\) Since \(T:V\to W\) and \(\phi:W\to \R\text{,}\) it follows that \(T^*\phi = \phi\circ T\) is a map from \(V\) to \(\R\text{.}\) But we must also show that it’s linear.
Given \(v_1, v_2\in V\text{,}\) we have
\begin{align*} (T^*\phi)(v_1+v_2) \amp = \phi(T(v_1+v_2))=\phi(T(v_1)+T(v_2)) \quad \text{ because } T \text{ is linear}\\ \amp =\phi(T(v_1))+\phi(T(v_2)) \quad \text{ because } \phi \text{ is linear}\\ \amp =(T^*\phi)(v_1)+(T^*\phi)(v_2)\text{.} \end{align*}
Similarly, for any scalar \(c\text{,}\)
\begin{equation*} (T^*\phi)(cv) = \phi(T(cv))=\phi(cT(v))=c(\phi(T(v)))=c((T^*\phi)(v))\text{.} \end{equation*}
This shows that \(T^*\phi\in V^*\text{.}\)
Next, we need to show that \(T^*:W^*\to V^*\) is a linear map. Let \(\phi,\psi\in W^*\text{,}\) and let \(c\) be a scalar. We have:
\begin{equation*} T^*(\phi+\psi) = (\phi+\psi)\circ T = \phi\circ T+\psi\circ T = T^*\phi+T^*\psi\text{,} \end{equation*}
and
\begin{equation*} T^*(c\phi) = (c\phi)\circ T = c(\phi\circ T) = cT^*\phi\text{.} \end{equation*}
These equalities follow from the vector space structure on any space of functions, where addition and scalar multiplication are defined pointwise. For instance, for a vector \(v\in V\text{,}\) we have
\begin{equation*} (T^*(c\phi))(v) = (c\phi(T(v)))=c(\phi(T(v)))=c((T^*\phi)(v))\text{.} \end{equation*}

Exercise 3.5.3.

Solution.
Let \(p\) be a polynomial. Then
\begin{equation*} (D^*\phi)(p) = \phi(D(p))=\phi(p') = \int_0^1 p'(x)\,dx\text{.} \end{equation*}
By the Fundamental Theorem of Calculus (or a tedious calculation, if you prefer), we get
\begin{equation*} (D^*\phi)(p) = p(1)-p(0)\text{.} \end{equation*}
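A quick sanity check in Sage, with one (arbitrarily chosen) polynomial:

x = SR.var('x')
p = 3*x^3 - 2*x + 5                       # any polynomial will do
lhs = integrate(diff(p, x), x, 0, 1)      # (D^* phi)(p)
bool(lhs == p.subs(x=1) - p.subs(x=0))    # True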

Exercise 3.5.4.

Solution.
Let \(\phi\in W^*\text{.}\) We have
\begin{equation*} (S+T)^*(\phi) = \phi\circ(S+T) = \phi\circ S+\phi\circ T = S^*\phi+T^*\phi\text{,} \end{equation*}
since \(\phi\) is linear. Similarly,
\begin{equation*} (kS)^*(\phi) = \phi\circ (kS) = k(\phi\circ S) = k(S^*\phi)\text{.} \end{equation*}
Finally, we have
\begin{equation*} (ST)^*\phi = \phi\circ(ST) = \phi\circ(S\circ T) = (\phi\circ S)\circ T = T^*(\phi\circ S) = T^*(S^*\phi) = (T^*S^*)(\phi)\text{,} \end{equation*}
since composition is associative.
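Readers who prefer matrices may recognize this as the reversal rule for transposes: with respect to dual bases, the matrix of \(T^*\) is the transpose of the matrix of \(T\text{,}\) so \((ST)^* = T^*S^*\) mirrors the identity \((AB)^T = B^TA^T\text{.}\) A quick Sage check with random matrices:

A = random_matrix(QQ, 3, 4)
B = random_matrix(QQ, 4, 2)
(A*B).transpose() == B.transpose()*A.transpose()   # True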

Exercise 3.5.5.

Solution.
As per the hint, suppose \(\phi = c_1\phi_1+c_2\phi_2+c_3\phi_3+c_4\phi_4\text{,}\) and that \(\phi\in U^0\text{.}\) Then
\begin{align*} \phi(2a+b,3b,a,a-2b) \amp = c_1\phi_1(2a+b,3b,a,a-2b)+c_2\phi_2(2a+b,3b,a,a-2b)\\ \amp \quad + c_3\phi_3(2a+b,3b,a,a-2b)+c_4\phi_4(2a+b,3b,a,a-2b)\\ \amp = c_1(2a+b)+c_2(3b)+c_3(a)+c_4(a-2b)\\ \amp = a(2c_1+c_3+c_4)+b(c_1+3c_2-2c_4)\text{.} \end{align*}
We wish for this to be zero for all possible values of \(a\) and \(b\text{.}\) Therefore, we must have
\begin{align*} 2c_1+c_3+c_4\amp =0\\ c_1+3c_2-2c_4\amp =0\text{.} \end{align*}
Solving gives us \(c_1=-\frac12 c_3-\frac12 c_4\) and \(c_2=\frac16 c_3+\frac56 c_4\text{,}\) so
\begin{align*} \phi \amp = \left(-\frac12 c_3-\frac12 c_4\right)\phi_1 +\left(\frac16 c_3+\frac56 c_4\right)\phi_2 + c_3\phi_3 + c_4\phi_4\\ \amp = c_3\left(-\frac12 \phi_1 + \frac16 \phi_2+\phi_3\right)+c_4\left(-\frac12\phi_1+\frac56 \phi_2 + \phi_4\right)\text{.} \end{align*}
This gives us the following basis for \(U^0\text{:}\)
\begin{equation*} \left\{\phi_3-\frac12 \phi_1+\frac16 \phi_2, \phi_4-\frac12\phi_1+\frac56\phi_2\right\}\text{.} \end{equation*}
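As a check, \(U\) is spanned by the vectors obtained by taking \((a,b)=(1,0)\) and \((a,b)=(0,1)\text{,}\) and each of the two functionals above (written as a vector of coefficients) should vanish on both. A Sage sketch:

u1 = vector(QQ, [2, 0, 1, 1])       # (a,b) = (1,0)
u2 = vector(QQ, [1, 3, 0, -2])      # (a,b) = (0,1)
f1 = vector(QQ, [-1/2, 1/6, 1, 0])  # phi_3 - (1/2)phi_1 + (1/6)phi_2
f2 = vector(QQ, [-1/2, 5/6, 0, 1])  # phi_4 - (1/2)phi_1 + (5/6)phi_2
all(f.dot_product(u) == 0 for f in (f1, f2) for u in (u1, u2))   # True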

4 Diagonalization
4.2 Diagonalization of symmetric matrices

Exercise 4.2.1.

Solution.
Take \(\xx=\mathbf{e}_i\) and \(\yy=\mathbf{e}_j\text{,}\) where \(\{\mathbf{e}_1,\ldots, \mathbf{e}_n\}\) is the standard basis for \(\R^n\text{.}\) Then with \(A = [a_{ij}]\) we have
\begin{equation*} a_{ij} =\mathbf{e}_i\dotp(A\mathbf{e}_j) = (A\mathbf{e}_i)\dotp \mathbf{e}_j = a_{ji}\text{,} \end{equation*}
which shows that \(A^T=A\text{.}\)
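The identity at work here, \(\xx\dotp(A\yy) = (A^T\xx)\dotp\yy\text{,}\) is easy to test numerically; a Sage sketch with random data:

A = random_matrix(QQ, 3, 3)
x = random_vector(QQ, 3)
y = random_vector(QQ, 3)
x.dot_product(A*y) == (A.transpose()*x).dot_product(y)   # True for any A, x, y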

4.5 Diagonalization of complex matrices
4.5.2 Complex matrices

Exercise 4.5.8.

Solution.
We have \(\bar{A}=\bbm 4\amp 1+i\amp -2-3i\\1-i\amp 5 \amp -7i\\-2+3i\amp 7i\amp -4\ebm\text{,}\) so
\begin{equation*} A^H = (\bar{A})^T = \bbm 4\amp 1-i\amp -2+3i\\1+i\amp 5\amp 7i\\-2-3i\amp -7i\amp -4\ebm = A\text{,} \end{equation*}
and
\begin{align*} BB^H \amp =\frac14\bbm 1+i\amp \sqrt{2}\\1-i\amp\sqrt{2}i\ebm\bbm 1-i\amp 1+i\\\sqrt{2}\amp-\sqrt{2}i\ebm \\ \amp =\frac14\bbm (1+i)(1-i)+2\amp (1+i)(1+i)-2i\\(1-i)(1-i)+2i\amp (1-i)(1+i)+2\ebm\\ \amp =\frac14\bbm 4\amp 0\\0\amp 4\ebm = \bbm 1\amp 0\\0\amp 1\ebm\text{,} \end{align*}
so that \(B^H = B^{-1}\text{.}\)
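Both computations can be delegated to Sage; a sketch, with the matrices copied from above:

A = matrix(QQbar, [[4, 1-I, -2+3*I], [1+I, 5, 7*I], [-2-3*I, -7*I, -4]])
A.conjugate_transpose() == A                       # True: A is Hermitian
B = (1/2)*matrix(QQbar, [[1+I, sqrt(2)], [1-I, sqrt(2)*I]])
B*B.conjugate_transpose() == identity_matrix(2)    # True: B^H = B^{-1}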

Exercise 4.5.12.

Solution.
Confirming that \(A^H=A\) is almost immediate. We will use the computer below to compute the eigenvalues and eigenvectors of \(A\text{,}\) but it’s useful to attempt this at least once by hand. We have
\begin{align*} \det(zI-A) \amp = \det\bbm z-4 \amp -3+i\\-3-i\amp z-1\ebm\\ \amp = (z-4)(z-1)-(-3-i)(-3+i)\\ \amp = z^2-5z+4-10\\ \amp = (z+1)(z-6)\text{,} \end{align*}
so the eigenvalues are \(\lambda_1=-1\) and \(\lambda_2=6\text{,}\) which are both real, as expected.
Finding eigenvectors can seem trickier than with real numbers, mostly because it is no longer immediately apparent when one row of a matrix is a (complex) multiple of another. But we know that the rows of \(A-\lambda I\) must be parallel for a \(2\times 2\) matrix, which lets us proceed nonetheless.
For \(\lambda_1=-1\text{,}\) we have
\begin{equation*} A + I =\bbm 5 \amp 3-i\\3+i\amp 2\ebm\text{.} \end{equation*}
There are two ways one can proceed from here. We could use row operations to get to the reduced row-echelon form of \(A+I\text{.}\) If we take this approach, we multiply row 1 by \(\frac15\text{,}\) and then take \(-3-i\) times the new row 1 and add it to row 2, to create a zero, and so on.
An easier approach is to note that if we haven’t made a mistake calculating our eigenvalues, then the above matrix can’t be invertible, so there must be some nonzero vector in its kernel. If \((A+I)\bbm a\\b\ebm=\bbm0\\0\ebm\text{,}\) then multiplying by the first row of \(A+I\) gives
\begin{equation*} 5a+(3-i)b=0\text{.} \end{equation*}
This suggests that we take \(a=3-i\) and \(b=-5\text{,}\) to get \(\zz = \bbm 3-i\\-5\ebm\) as our first eigenvector. To make sure we’ve done things correctly, we multiply by the second row of \(A+I\text{:}\)
\begin{equation*} (3+i)(3-i)+2(-5) = 10-10 = 0\text{.} \end{equation*}
Success! Now we move onto the second eigenvalue.
For \(\lambda_2=6\text{,}\) we get
\begin{equation*} A-6I = \bbm -2\amp 3-i\\3+i\amp -5\ebm\text{.} \end{equation*}
If we attempt to read off the answer like last time, the first row of \(A-6I\) suggests the vector \(\ww = \bbm 3-i\\2\ebm\text{.}\) Checking the second row to confirm, we find:
\begin{equation*} (3+i)(3-i)-5(2) = 10-10=0\text{,} \end{equation*}
as before.
Finally, we note that
\begin{equation*} \langle \zz, \ww\rangle = (3-i)\overline{(3-i)}+(-5)(2) = (3-i)(3+i)-10 = 0\text{,} \end{equation*}
so the two eigenvectors are orthogonal, as expected. We have
\begin{equation*} \len{\zz}=\sqrt{10+25}=\sqrt{35} \quad \text{ and } \quad \len{\ww}=\sqrt{10+4}=\sqrt{14}\text{,} \end{equation*}
so our unitary matrix is
\begin{equation*} U = \bbm \frac{3-i}{\sqrt{35}}\amp \frac{3-i}{\sqrt{14}}\\-\frac{5}{\sqrt{35}}\amp \frac{2}{\sqrt{14}}\ebm\text{.} \end{equation*}
With a bit of effort, we can finally confirm that
\begin{equation*} U^HAU = \bbm -1\amp 0\\0\amp 6\ebm\text{,} \end{equation*}
as expected.
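That last computation is a natural one to hand off to Sage; a sketch, with \(A\) and \(U\) as above:

A = matrix(QQbar, [[4, 3-I], [3+I, 1]])
A.eigenvalues()                        # the eigenvalues -1 and 6
U = matrix(QQbar, [[(3-I)/sqrt(35), (3-I)/sqrt(14)],
                   [  -5/sqrt(35),     2/sqrt(14)]])
U.conjugate_transpose()*A*U == diagonal_matrix([-1, 6])   # True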

5 Change of Basis
5.1 The matrix of a linear transformation

Exercise 5.1.2.

Solution.
It’s clear that \(C_B(\zer)=\zer\text{,}\) since the only way to write the zero vector in \(V\) in terms of \(B\) (or, indeed, any independent set) is to set all the scalars equal to zero.
If we have two vectors \(\vv,\ww\) given by
\begin{align*} \vv \amp = a_1\mathbf{e}_1+\cdots + a_n\mathbf{e}_n \\ \ww \amp = b_1\mathbf{e}_1+\cdots + b_n\mathbf{e}_n\text{,} \end{align*}
then
\begin{equation*} \vv+\ww = (a_1+b_1)\mathbf{e}_1+\cdots + (a_n+b_n)\mathbf{e}_n\text{,} \end{equation*}
so
\begin{align*} C_B(\vv+\ww) \amp = \bbm a_1+b_1\\\vdots \\ a_n+b_n\ebm \\ \amp = \bbm a_1\\\vdots\\a_n\ebm +\bbm b_1\\\vdots \\b_n\ebm\\ \amp = C_B(\vv)+C_B(\ww)\text{.} \end{align*}
Finally, for any scalar \(c\text{,}\) we have
\begin{align*} C_B(c\vv) \amp = C_B((ca_1)\mathbf{e}_1+\cdots +(ca_n)\mathbf{e}_n)\\ \amp = \bbm ca_1\\\vdots \\ca_n\ebm\\ \amp =c\bbm a_1\\\vdots \\a_n\ebm\\ \amp =cC_B(\vv)\text{.} \end{align*}
This shows that \(C_B\) is linear. To see that \(C_B\) is an isomorphism, we can simply note that \(C_B\) takes the basis \(B\) to the standard basis of \(\R^n\text{.}\) Alternatively, we can give the inverse: \(C_B^{-1}:\R^n\to V\) is given by
\begin{equation*} C_B^{-1}\bbm c_1\\\vdots \\c_n\ebm = c_1\mathbf{e}_1+\cdots +c_n\mathbf{e}_n\text{.} \end{equation*}
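Concretely, if the basis vectors of \(B\) are placed as the columns of an invertible matrix \(P\text{,}\) then computing \(C_B(\vv)\) amounts to solving \(P\xx = \vv\text{.}\) A Sage sketch with a made-up basis of \(\R^3\text{:}\)

P = column_matrix(QQ, [(1,1,0), (1,0,1), (0,1,1)])   # basis vectors as columns
v = vector(QQ, [2, 3, 5])
c = P.solve_right(v)   # the coordinate vector C_B(v)
P*c == v               # True: C_B^{-1} reassembles v from its coordinates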

5.2 The matrix of a linear operator

Exercise 5.2.4.

Solution.
With respect to the standard basis, we have
\begin{equation*} M_0=M_{B_0}(T) = \bbm 3\amp -2\amp 4\\1\amp -5\amp 0\\0\amp 2\amp -7\ebm\text{,} \end{equation*}
and the matrix \(P\) is given by \(P = \bbm 1\amp 3\amp 1\\2\amp -1\amp 2\\0\amp 2\amp-5\ebm\text{.}\) Thus, we find
\begin{equation*} M_B(T)=P^{-1}M_0P=\bbm 9\amp 56\amp 36\\7\amp 15\amp 15\\-10\amp -46\amp -33\ebm\text{.} \end{equation*}

5.7 Jordan Canonical Form

Exercise 5.7.7.

Solution.
With respect to the standard basis of \(\R^4\text{,}\) the matrix of \(T\) is
\begin{equation*} M = \bbm 1\amp 1\amp 0\amp 0\\0\amp 1\amp 0\amp 0\\0\amp -1\amp 2\amp 0\\1\amp -1\amp 1\amp 1\ebm\text{.} \end{equation*}
We find (perhaps using the Sage cell provided below, and the code from the example above) that
\begin{equation*} c_T(x)=(x-1)^3(x-2)\text{,} \end{equation*}
so \(T\) has eigenvalues \(1\) (of multiplicity \(3\)), and \(2\) (of multiplicity \(1\)).
We tackle the repeated eigenvalue first. The reduced row-echelon form of \(M-I\) is given by
\begin{equation*} R_1 = \bbm 1\amp 0\amp 0\amp 0\\0\amp 1\amp 0\amp 0\\0\amp 0\amp 1\amp 0\\0\amp 0\amp 0\amp 0\ebm\text{,} \end{equation*}
so
\begin{equation*} E_1(M) = \spn\{\xx_1\}, \text{ where } \xx_1 = \bbm 0\\0\\0\\1\ebm\text{.} \end{equation*}
We now attempt to solve \((M-I)\xx=\xx_1\text{.}\) We find
\begin{equation*} \left(\begin{matrix}0\amp 1\amp 0\amp 0\\0\amp 0\amp 0\amp 0\\0\amp -1\amp 1\amp 0\\1\amp -1\amp 1\amp 0\end{matrix}\right|\left.\begin{matrix}0\\0\\0\\1\end{matrix}\right) \xrightarrow{\text{RREF}} \left(\begin{matrix} 1\amp 0\amp 0\amp 0\\0\amp 1\amp 0\amp 0\\0\amp 0\amp 1\amp 0\\0\amp 0\amp 0\amp 0\end{matrix}\right|\left.\begin{matrix}1\\0\\0\\0\end{matrix}\right)\text{,} \end{equation*}
so \(\xx = t\xx_1+\xx_2\text{,}\) where \(\xx_2 = \bbm 1\\0\\0\\0\ebm\text{.}\) We take \(\xx_2\) as our first generalized eigenvector. Note that \((M-I)^2\xx_2 = (M-I)\xx_1=\zer\text{,}\) so \(\xx_2\in \nll (M-I)^2\text{,}\) as expected.
Finally, we look for an element of \(\nll (M-I)^3\) of the form \(\xx_3\text{,}\) where \((M-I)\xx_3=\xx_2\text{.}\) We set up and solve the system \((M-I)\xx=\xx_2\) as follows:
\begin{equation*} \left(\begin{matrix}0\amp 1\amp 0\amp 0\\0\amp 0\amp 0\amp 0\\0\amp -1\amp 1\amp 0\\1\amp -1\amp 1\amp 0\end{matrix}\right|\left.\begin{matrix}1\\0\\0\\0\end{matrix}\right) \xrightarrow{\text{RREF}} \left(\begin{matrix} 1\amp 0\amp 0\amp 0\\0\amp 1\amp 0\amp 0\\0\amp 0\amp 1\amp 0\\0\amp 0\amp 0\amp 0\end{matrix}\right|\left.\begin{matrix}0\\1\\1\\0\end{matrix}\right)\text{,} \end{equation*}
so \(\xx = t\xx_1+\xx_3\text{,}\) where \(\xx_3 =\bbm 0\\1\\1\\0\ebm\text{.}\)
Finally, we deal with the eigenvalue \(2\text{.}\) The reduced row-echelon form of \(M-2I\) is
\begin{equation*} R_2 = \bbm 1\amp 0\amp 0\amp 0\\0\amp 1\amp 0\amp 0\\0\amp 0\amp 1\amp -1\\0\amp 0\amp 0\amp 0\ebm\text{,} \end{equation*}
so
\begin{equation*} E_2(M) = \spn\{\yy\}, \text{ where } \yy = \bbm 0\\0\\1\\1\ebm\text{.} \end{equation*}
Our basis of column vectors is therefore \(B=\{\xx_1,\xx_2,\xx_3,\yy\}\text{.}\) Note that by design,
\begin{align*} M\xx_1 \amp =\xx_1\\ M\xx_2 \amp =\xx_1+\xx_2\\ M\xx_3 \amp= \xx_2+\xx_3\\ M\yy \amp = 2\yy\text{.} \end{align*}
The corresponding Jordan basis for \(\R^4\) is
\begin{equation*} \{(0,0,0,1),(1,0,0,0),(0,1,1,0),(0,0,1,1)\}\text{,} \end{equation*}
and with respect to this basis, we have
\begin{equation*} M_B(T) = \bbm 1\amp 1\amp 0\amp 0\\ 0\amp 1\amp 1\amp 0\\ 0\amp 0\amp 1\amp 0\\ 0\amp 0\amp 0\amp 2\ebm\text{.} \end{equation*}
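As a final check, we can ask Sage to conjugate \(M\) by the matrix whose columns are the vectors of \(B\) (a sketch):

M = matrix(QQ, [[1,1,0,0], [0,1,0,0], [0,-1,2,0], [1,-1,1,1]])
P = column_matrix(QQ, [(0,0,0,1), (1,0,0,0), (0,1,1,0), (0,0,1,1)])   # x1, x2, x3, y
P.inverse()*M*P    # the matrix M_B(T) displayed above
M.jordan_form()    # Sage agrees, up to the order of the blocks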