
Elementary Linear Algebra: For University of Lethbridge Math 1410

Section 7.1 Eigenvalues and Eigenvectors

We start by considering the matrix \(\tta\) and vector \(\vx\) as given below.
\begin{equation*} \tta = \bbm 1 \amp 4\\2 \amp 3\ebm \quad \quad \vx = \bbm 1\\1\ebm\text{.} \end{equation*}
Multiplying \(\tta\vx\) gives:
\begin{align*} \tta\vx \amp = \bbm 1 \amp 4\\2\amp 3\ebm \bbm 1\\1\ebm\\ \amp = \bbm 5\\5\ebm\\ \amp = 5\bbm 1\\1\ebm\text{!} \end{align*}
Wow! It looks like multiplying \(\tta\vx\) is the same as \(5\vx\text{!}\) This makes us wonder lots of things: is this the only case in the world where something like this happens? (Probably not.) Is \(\tta\) somehow a special matrix, and \(\tta\vx = 5\vx\) for any vector \(\vx\) we pick? (Probably not.) Or maybe \(\vx\) was a special vector, and no matter what \(2\times 2\) matrix \(\tta\) we picked, we would have \(\tta\vx =5\vx\text{.}\) (Again, probably not.)
A more likely explanation is this: given the matrix \(\tta\text{,}\) the number 5 and the vector \(\vx\) formed a special pair that happened to work together in a nice way. It is then natural to wonder if other “special” pairs exist. For instance, could we find a vector \(\vx\) where \(\tta\vx=3\vx\text{?}\)
This equation is hard to solve at first; we are not used to matrix equations where \(\vx\) appears on both sides of “\(=\text{.}\)” Therefore we put off solving this for just a moment to state a definition and make a few comments.

Definition 7.1.1. Eigenvalues and Eigenvectors.

Let \(\tta\) be an \(n\times n\) matrix, \(\vx\) a nonzero \(n\times 1\) column vector and \(\lambda\) a scalar. If
\begin{equation*} \tta\vx = \lda\vx, \end{equation*}
then \(\vx\) is an eigenvector of \(\tta\) and \(\lambda\) is an eigenvalue of \(\tta\text{.}\)
The word “eigen” is German for “proper” or “characteristic.” Therefore, an eigenvector of \(\tta\) is a “characteristic vector of \(\tta\text{.}\)” This vector tells us something about \(\tta\text{.}\)
Why do we use the Greek letter \(\lambda\) (lambda)? It is pure tradition. Above, we used \(a\) to represent the unknown scalar, since we are used to that notation. We now switch to \(\lambda\) because that is how everyone else does it. (An example of mathematical peer pressure.) Don’t get hung up on this; it is just a number.
Note that our definition requires that \(\tta\) be a square matrix. If \(\tta\) isn’t square then \(\tta\vx\) and \(\lambda\vx\) will have different sizes, and so they cannot be equal. Also note that \(\vx\) must be nonzero. Why? What if \(\vx = \zero\text{?}\) Then no matter what \(\lambda\) is, \(\tta\vx = \lda\vx\text{.}\) This would then imply that every number is an eigenvalue; if every number is an eigenvalue, then we wouldn’t need a definition for it. Therefore we specify that \(\vx\neq \zero\text{.}\)
Our last comment before trying to find eigenvalues and eigenvectors for given matrices deals with “why we care.” Did we stumble upon a mathematical curiosity, or does this somehow help us build better bridges, heal the sick, send astronauts into orbit, design optical equipment, and understand quantum mechanics? The answer, of course, is “Yes.” (Except for the “understand quantum mechanics” part. Nobody truly understands that stuff; they just probably understand it.) This is a wonderful topic in and of itself: we need no external application to appreciate its worth. At the same time, it has many, many applications to “the real world.”
Back to our math. Given a square matrix \(\tta\text{,}\) we want to find a nonzero vector \(\vx\) and a scalar \(\lambda\) such that \(\tta\vx = \lda\vx\text{.}\) We will solve this using the skills we developed in Chapter 4.
\begin{align*} \tta\vx \amp = \lda\vx \quad \text{ (original equation)}\\ \tta\vx - \lda\vx \amp = \zero \quad \text{ (subtract } \lda\vx \text{ from both sides)}\\ (\tta-\lda\tti)\vx \amp = \zero \quad \text{ (factor out } \vx)\text{.} \end{align*}
Think about this last factorization. We are likely tempted to say
\begin{equation*} \tta\vx-\lda\vx = (\tta-\lda)\vx\text{,} \end{equation*}
but this really doesn’t make sense. After all, what does “a matrix minus a number” mean? We need the identity matrix in order for this to be logical.
Let us now think about the equation \((\tta-\lda\tti)\vx=\zero\text{.}\) While it looks complicated, it really is just a matrix equation of the type we solved in Section 3.6. We are just trying to solve \(\ttb\vx=\zero\text{,}\) where \(\ttb = (\tta-\lda\tti)\text{.}\)
We know from our previous work that this type of equation always has a solution, namely, \(\vx = \zero\text{.}\) (Recall this is a homogeneous system of equations.) However, we want \(\vx\) to be an eigenvector and, by the definition, eigenvectors cannot be \(\zero\text{.}\)
This means that we want solutions to \((\tta-\lda\tti)\vx=\zero\) other than \(\vx=\zero\text{.}\) Recall that Theorem 4.4.12 says that if the matrix \((\tta-\lda\tti)\) is invertible, then the only solution to \((\tta-\lda\tti)\vx=\zero\) is \(\vx=\zero\text{.}\) Therefore, in order to have other solutions, we need \((\tta-\lda\tti)\) to not be invertible.
Finally, recall from Theorem 6.4.12 that noninvertible matrices all have a determinant of 0. Therefore, if we want to find eigenvalues \(\lda\) and eigenvectors \(\vx\text{,}\) we need \(\det(\tta-\lda\tti) = 0\text{.}\)
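For readers who happen to have Python and NumPy at hand (neither is assumed anywhere in this text), here is a minimal numerical sketch of this criterion using the matrix from the start of the section. The functions np.linalg.det and np.eye are standard NumPy; the printed values are subject to floating-point rounding.

    import numpy as np

    A = np.array([[1, 4],
                  [2, 3]])

    # For the "special pair" from the opening example, A - 5I should be singular.
    print(np.linalg.det(A - 5 * np.eye(2)))   # 0.0 (up to rounding)

    # For a value like 3, A - 3I is invertible, so Ax = 3x forces x = 0.
    print(np.linalg.det(A - 3 * np.eye(2)))   # -8.0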
Let’s start our practice of this theory by finding \(\lda\) such that \(\det(\tta-\lda\tti) = 0\text{;}\) that is, let’s find the eigenvalues of a matrix.

Example 7.1.2. Computing the eigenvalues of a matrix.

Find the eigenvalues of \(\tta\text{,}\) that is, find \(\lda\) such that \(\det(\tta-\lda\tti) = 0\text{,}\) where
\begin{equation*} \tta = \bbm 1 \amp 4\\2 \amp 3\ebm\text{.} \end{equation*}
Solution.
(Note that this is the matrix we used at the beginning of this section.) First, we write out what \(\tta-\lda\tti\) is:
\begin{align*} \tta-\lda\tti \amp = \bbm 1\amp 4\\2\amp 3\ebm - \lda\eyetwo\\ \amp = \bbm 1\amp 4\\2\amp 3\ebm - \bbm\lda\amp 0\\0\amp \lda\ebm\\ \amp = \bbm 1-\lda \amp 4 \\ 2\amp 3-\lda\ebm\text{.} \end{align*}
Therefore,
\begin{align*} \det(\tta-\lda\tti) \amp = \bvm 1-\lda \amp 4 \\ 2\amp 3-\lda\evm\\ \amp = (1-\lda)(3-\lda)-8\\ \amp = \lda^2-4\lda-5\text{.} \end{align*}
Since we want \(\det(\tta-\lda\tti)=0\text{,}\) we want \(\lda^2-4\lda-5=0\text{.}\) This is a simple quadratic equation that is easy to factor:
\begin{align*} \lda^2-4\lda-5 \amp = 0\\ (\lda-5)(\lda+1) \amp = 0\\ \lda \amp = -1,\ 5\text{.} \end{align*}
According to our above work, \(\det(\tta-\lda\tti)=0\) when \(\lda = -1,\ 5\text{.}\) Thus, the eigenvalues of \(\tta\) are \(-1\) and \(5\text{.}\)
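As an optional check (not part of the method above), the same computation can be reproduced with NumPy, which we do not assume the reader has: np.poly returns the coefficients of \(\det(\lda\tti - \tta)\text{,}\) and np.roots finds its roots.

    import numpy as np

    A = np.array([[1, 4],
                  [2, 3]])

    # Coefficients of det(lambda*I - A) = lambda^2 - 4*lambda - 5
    coeffs = np.poly(A)
    print(coeffs)            # [ 1. -4. -5.]

    # The roots of the characteristic polynomial are the eigenvalues.
    print(np.roots(coeffs))  # approximately [ 5. -1.]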
Earlier, when looking at the same matrix as used in our example, we wondered if we could find a vector \(\vx\) such that \(\tta\vx=3\vx\text{.}\) According to this example, the answer is “No.” With this matrix \(\tta\text{,}\) the only values of \(\lda\) that work are \(-1\) and \(5\text{.}\)
Let’s restate the above in a different way: It is pointless to try to find \(\vx\) where \(\tta\vx=3\vx\text{,}\) for there is no such \(\vx\text{.}\) There are only 2 equations of this form that have a solution, namely
\begin{equation*} \tta\vx = -\vx \quad\quad \text{and} \quad \quad \tta\vx=5\vx\text{.} \end{equation*}
As we introduced this section, we gave a vector \(\vx\) such that \(\tta\vx = 5\vx\text{.}\) Is this the only one? Let’s find out while calling our work an example; this will amount to finding the eigenvectors of \(\tta\) that correspond to the eigenvalue 5.

Example 7.1.3. Computing an eigenvector corresponding to a given eigenvalue.

Find \(\vx\) such that \(\tta\vx=5\vx\text{,}\) where
\begin{equation*} \tta = \bbm 1 \amp 4\\2 \amp 3\ebm\text{.} \end{equation*}
Solution.
Recall that our algebra from before showed that if
\begin{equation*} \tta\vx=\lda\vx \quad \text{then} \quad (\tta-\lda\tti)\vx=\zero\text{.} \end{equation*}
Therefore, we need to solve the equation \((\tta-\lda\tti)\vx=\zero\) for \(\vx\) when \(\lambda = 5\text{:}\)
\begin{align*} \tta - 5\tti \amp = \bbm 1\amp 4\\2\amp 3\ebm - 5\bbm 1\amp 0\\0\amp 1\ebm\\ \amp = \bbm -4\amp 4\\2\amp -2\ebm\text{.} \end{align*}
To solve \((\tta-5\tti)\vx=\zero\text{,}\) we form the augmented matrix and put it into reduced row echelon form:
\begin{equation*} \bbm-4 \amp 4 \amp 0\\2\amp -2\amp 0\ebm \quad\quad \overrightarrow{\text{rref}} \quad\quad \bbm 1\amp -1\amp 0\\0\amp 0\amp 0\ebm\text{.} \end{equation*}
Thus
\begin{align*} x_1 \amp = t\\ x_2 \amp = t \text{ is free} \end{align*}
and
\begin{equation*} \vx = \bbm x_1\\x_2\ebm = t\bbm 1\\1\ebm\text{.} \end{equation*}
We have infinitely many solutions to the equation \(\tta\vx = 5\vx\text{;}\) any nonzero scalar multiple of the vector \(\bbm 1\\1\ebm\) is a solution. We can do a few examples to confirm this:
\begin{align*} \bbm 1\amp 4\\2\amp 3\ebm\bbm 2\\2\ebm \amp = \bbm 10\\10\ebm = 5\bbm 2\\2\ebm\\ \bbm 1\amp 4\\2\amp 3\ebm\bbm 7\\7\ebm \amp = \bbm 35\\35\ebm = 5\bbm 7\\7\ebm\\ \bbm 1\amp 4\\2\amp 3\ebm\bbm-3\\-3\ebm \amp = \bbm-15\\-15\ebm = 5\bbm-3\\-3\ebm\text{.} \end{align*}
Of course, this works in general. For any \(t\text{,}\) we have
\begin{equation*} \bbm 1 \amp 4\\2 \amp 3\ebm\left(t\bbm 1\\1\ebm\right) = t\left(\bbm 1\amp 4\\2\amp 3\ebm\bbm 1\\1\ebm\right) = t\left(5\bbm 1\\1\ebm\right) = 5\left(t\bbm 1\\1\ebm\right)\text{.} \end{equation*}
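Readers with NumPy available (again, an optional aside rather than part of the text’s method) can replay this check for several values of \(t\) at once; np.allclose compares the two sides up to rounding.

    import numpy as np

    A = np.array([[1, 4],
                  [2, 3]])
    v = np.array([1, 1])

    # Every nonzero multiple t*v should satisfy A(t v) = 5(t v).
    for t in [2, 7, -3, 0.5]:
        x = t * v
        print(A @ x, 5 * x, np.allclose(A @ x, 5 * x))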
Our method of finding the eigenvalues of a matrix \(\tta\) boils down to determining which values of \(\lambda\) give the matrix \((\tta - \lambda\tti)\) a determinant of 0. In computing \(\det(\tta-\lambda\tti)\text{,}\) we get a polynomial in \(\lambda\) whose roots are the eigenvalues of \(\tta\text{.}\) This polynomial is important and so it gets its own name.

Definition 7.1.4. Characteristic Polynomial.

Let \(\tta\) be an \(n\times n\) matrix. The characteristic polynomial of \(\tta\) is the \(n\)th degree polynomial \(p(\lambda) = \det(\tta-\lambda\tti)\text{.}\)
Our definition just states what the characteristic polynomial is. We know from our work so far why we care: the roots of the characteristic polynomial of an \(n\times n\) matrix \(\tta\) are the eigenvalues of \(\tta\text{.}\)
In Examples 7.1.2 and 7.1.3, we found eigenvalues and eigenvectors, respectively, of a given matrix. That is, given a matrix \(\tta\text{,}\) we found values \(\lambda\) and vectors \(\vx\) such that \(\tta\vx = \lambda\vx\text{.}\) The steps that follow outline the general procedure for finding eigenvalues and eigenvectors; we’ll follow this up with some examples.

Key Idea 7.1.5. Finding Eigenvalues and Eigenvectors.

Let \(\tta\) be an \(n\times n\) matrix.
  1. To find the eigenvalues of \(\tta\text{,}\) compute \(p(\lambda)\text{,}\) the characteristic polynomial of \(\tta\text{,}\) set it equal to 0, then solve for \(\lambda\text{.}\)
  2. To find the eigenvectors of \(\tta\text{,}\) for each eigenvalue solve the homogeneous system \((\tta-\lambda\tti)\vx = \zero\text{.}\)
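For those curious how this is typically done by software (an aside, not part of Key Idea 7.1.5), NumPy’s np.linalg.eig performs both steps at once: it returns the eigenvalues and a matrix whose columns are corresponding eigenvectors, scaled to unit length. A minimal sketch, using the matrix from Example 7.1.2:

    import numpy as np

    A = np.array([[1, 4],
                  [2, 3]])

    # eig returns the eigenvalues (in some order) and unit-length eigenvectors
    # stored as the columns of the second return value.
    eigenvalues, eigenvectors = np.linalg.eig(A)
    print(eigenvalues)

    # Check that A x = lambda x holds for each eigenvalue/eigenvector pair.
    for lam, x in zip(eigenvalues, eigenvectors.T):
        print(np.allclose(A @ x, lam * x))   # True for each pair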

Example 7.1.6. Computing eigenvalues and eigenvectors.

Find the eigenvalues of \(\tta\text{,}\) and for each eigenvalue, find an eigenvector where
\begin{equation*} \tta = \bbm-3 \amp 15\\3 \amp 9\ebm\text{.} \end{equation*}
Solution.
To find the eigenvalues, we must compute \(\det(\tta-\lambda\tti)\) and set it equal to 0.
\begin{align*} \det(\tta-\lambda\tti) \amp = \bvm -3-\lambda \amp 15\\3 \amp 9-\lambda\evm\\ \amp = (-3-\lambda)(9-\lambda)-45\\ \amp = \lambda^2-6\lambda-27-45\\ \amp = \lambda^2-6\lambda-72\\ \amp = (\lambda-12)(\lambda+6)\text{.} \end{align*}
Therefore, \(\det(\tta-\lambda\tti) = 0\) when \(\lambda = -6\) and \(12\text{;}\) these are our eigenvalues. (We should note that \(p(\lambda) =\lambda^2-6\lambda-72\) is our characteristic polynomial.)
It sometimes helps to give them names, so we’ll say \(\lambda_1 = -6\) and \(\lambda_2 = 12\text{.}\) Now we find eigenvectors.
For \(\lambda_1=-6\text{,}\) we need to solve the equation \((\tta - (-6)\tti)\vx = \zero\text{.}\) To do this, we form the appropriate augmented matrix and put it into reduced row echelon form.
\begin{equation*} \bbm 3 \amp 15 \amp 0\\3\amp 15\amp 0\ebm \quad\quad\overrightarrow{\text{rref}}\quad\quad \bbm 1\amp 5\amp 0\\0\amp 0\amp 0\ebm\text{.} \end{equation*}
Our solution is
\begin{align*} x_1 \amp = -5t\\ x_2 \amp = t \quad \text{ is free}\text{;} \end{align*}
in vector form, we have
\begin{equation*} \vx = t\bbm-5\\1\ebm\text{.} \end{equation*}
We may pick any nonzero value for \(t\) to get an eigenvector; a simple option is \(x_2 = 1\text{.}\) Thus we have the eigenvector
\begin{equation*} \vx[1] = \bbm-5\\1\ebm\text{.} \end{equation*}
(We used the notation \(\vx[1]\) to associate this eigenvector with the eigenvalue \(\lambda_1\text{.}\))
We now repeat this process to find an eigenvector for \(\lambda_2 = 12\text{.}\) In solving \((\tta - 12\tti)\vx = \zero\text{,}\) we find
\begin{equation*} \bbm-15 \amp 15 \amp 0 \\ 3 \amp -3 \amp 0 \ebm \quad\quad\overrightarrow{\text{rref}}\quad\quad \bbm 1\amp -1\amp 0\\0\amp 0\amp 0\ebm\text{.} \end{equation*}
In vector form, we have
\begin{equation*} \vx = t\bbm 1\\1\ebm\text{.} \end{equation*}
Again, we may pick any nonzero value for \(t\text{,}\) and so we choose \(t = 1\text{.}\) Thus an eigenvector for \(\lambda_2\) is
\begin{equation*} \vx[2] = \bbm 1\\1\ebm\text{.} \end{equation*}
To summarize, we have:
\begin{equation*} \text{eigenvalue } \lambda_1 = -6 \text{ with eigenvector } \vx[1] = \bbm-5\\1\ebm \end{equation*}
and
\begin{equation*} \text{eigenvalue } \lambda_2 = 12 \text{ with eigenvector } \vx[2] = \bbm 1\\1\ebm\text{.} \end{equation*}
We should take a moment and check our work: is it true that \(\tta\vx[1] = \lambda_1\vx[1]\text{?}\)
\begin{equation*} \tta\vx[1] = \bbm-3\amp 15\\3\amp 9\ebm\bbm-5\\1\ebm = \bbm 30\\-6\ebm = (-6)\bbm -5\\1\ebm = \lambda_1\vx[1]\text{.} \end{equation*}
Yes; it appears we have truly found an eigenvalue/eigenvector pair for the matrix \(A\text{.}\)
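As a hedged aside for readers with NumPy installed, np.linalg.eig reproduces this example; the library scales each eigenvector to unit length, so its columns are nonzero scalar multiples of the hand-computed vectors \(\bbm-5\\1\ebm\) and \(\bbm 1\\1\ebm\text{.}\)

    import numpy as np

    A = np.array([[-3, 15],
                  [ 3,  9]])

    vals, vecs = np.linalg.eig(A)
    print(vals)   # approximately [-6. 12.] (possibly in a different order)

    # Rescaling each unit-length column by its last entry recovers the
    # hand-computed eigenvectors [-5, 1] and [1, 1].
    for lam, x in zip(vals, vecs.T):
        print(lam, x / x[1])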

Example 7.1.7. Computing eigenvalues and eigenvectors.

Let \(\tta = \bbm -3\amp 0\\5\amp 1\ebm\text{.}\) Find the eigenvalues of \(\tta\) and an eigenvector for each eigenvalue.
Solution.
We first compute the characteristic polynomial, set it equal to 0, then solve for \(\lambda\text{.}\)
\begin{align*} \det(\tta-\lambda\tti) \amp = \bvm -3-\lambda \amp 0\\5\amp 1-\lambda \evm\\ \amp = (-3-\lambda)(1-\lambda)\text{.} \end{align*}
From this, we see that \(\det(\tta-\lambda\tti)=0\) when \(\lambda = -3, 1\text{.}\) We’ll set \(\lambda_1 = -3\) and \(\lambda_2 = 1\text{.}\)
Finding an eigenvector for \(\lambda_1\text{:}\)
We solve \((\tta-(-3)\tti)\vx =\zero\) for \(\vx\) by row reducing the appropriate matrix:
\begin{equation*} \bbm 0 \amp 0 \amp 0 \\ 5 \amp 4 \amp 0 \ebm \quad\quad\overrightarrow{\text{rref}}\quad\quad \bbm 1\amp 4/5\amp 0\\0\amp 0\amp 0\ebm\text{.} \end{equation*}
Our solution, in vector form, is
\begin{equation*} \vx = t\bbm-4/5\\1\ebm\text{.} \end{equation*}
Again, we can pick any nonzero value for \(t\text{;}\) a nice choice would eliminate the fraction. Therefore we pick \(t = 5\text{,}\) and find
\begin{equation*} \vx[1] = \bbm -4\\5\ebm\text{.} \end{equation*}
Finding an eigenvector for \(\lambda_2\text{:}\)
We solve \((\tta-(1)\tti)\vx =\zero\) for \(\vx\) by row reducing the appropriate matrix:
\begin{equation*} \bbm -4 \amp 0 \amp 0 \\ 5 \amp 0 \amp 0 \ebm \quad\quad\overrightarrow{\text{rref}}\quad\quad \bbm 1\amp 0\amp 0\\0\amp 0\amp 0\ebm\text{.} \end{equation*}
We’ve seen a matrix like this before, but we may need a bit of a refresher. Our first row tells us that \(x_1 = 0\text{,}\) and we see that no rows/equations involve \(x_2\text{.}\) We conclude that \(x_2\) is free. Therefore, our solution, in vector form, is
\begin{equation*} \vx = t\bbm 0\\1\ebm\text{.} \end{equation*}
We pick \(t = 1\text{,}\) and find
\begin{equation*} \vx[2] = \bbm 0\\1\ebm\text{.} \end{equation*}
To summarize, we have:
\begin{equation*} \text{eigenvalue } \lambda_1 = -3 \text{ with eigenvector } \vx[1] = \bbm-4\\5\ebm \end{equation*}
and
\begin{equation*} \text{eigenvalue } \lambda_2 = 1 \text{ with eigenvector } \vx[2] = \bbm 0\\1\ebm\text{.} \end{equation*}
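If exact (fraction-free) answers are wanted, a computer algebra system can be used instead of floating-point arithmetic. The sketch below uses SymPy (an assumption on our part, not a tool used by this text); eigenvects returns each eigenvalue together with its multiplicity and a basis of eigenvectors.

    from sympy import Matrix

    A = Matrix([[-3, 0],
                [ 5, 1]])

    # Each tuple is (eigenvalue, multiplicity, list of basis eigenvectors).
    for val, mult, vecs in A.eigenvects():
        print(val, mult, [list(v) for v in vecs])
    # Expect something like:
    #   -3 1 [[-4/5, 1]]   (any nonzero multiple, e.g. [-4, 5], also works)
    #    1 1 [[0, 1]]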
Notice that in both of our examples so far, we were able to completely factor the characteristic polynomial and obtain two distinct eigenvalues. For \(2\times 2\) matrices, the characteristic polynomial will always be quadratic, and we know that finding roots of quadratic polynomials falls into three categories: those with two distinct roots, like in the examples above, those with one repeated root (for example, \(x^2-2x+1=(x-1)^2\)), and those with no real roots (for example, \(x^2+1\)). In the case of a repeated root, we will have only one eigenvalue. Will we have only one eigenvector, or could there be two? (We’ll have more to say about this later.) What if there are no real roots? Then there are no (real) eigenvalues, so presumably there are no eigenvectors, either. What if we allow for complex roots? Let’s look at some examples.

Example 7.1.8. A matrix with only one eigenvalue.

Find the eigenvalues and eigenvectors of the matrix
\begin{equation*} A = \bbm 1 \amp 4\\0 \amp 1\ebm\text{.} \end{equation*}
Solution.
The transformation \(T(\vx)=A\vx\) defined by \(A\) is an example of a horizontal shear. (See Section 5.1.) Such a transformation leaves horizontal vectors unaffected, but vectors with a nonzero vertical component get pulled to the right: see Figure 7.1.9.
Figure 7.1.9. A horizontal shear by a factor of \(k\text{:}\) the unit square is mapped to a parallelogram.
From the diagram we can probably guess that the horizontal vector \(\bbm 1\\0\ebm\) will be an eigenvector with eigenvalue 1, since it is left untouched by the shear transformation. Let’s confirm this analytically.
We begin as usual by finding the characteristic polynomial. We have
\begin{equation*} \det(A-\lambda I) = \begin{vmatrix} 1-\lambda \amp 4\\0 \amp 1-\lambda \end{vmatrix} = (1-\lambda)^2\text{.} \end{equation*}
Here, we see that we have only one eigenvalue; namely, \(\lambda =1\text{.}\) Let’s look for a corresponding eigenvector. We have
\begin{equation*} A-\lambda I = \bbm 0 \amp 4\\0 \amp 0\ebm \quad \arref \quad \bbm 0\amp 1\\0\amp 0\ebm\text{.} \end{equation*}
The corresponding system \((A-1\cdot I)\vx=\zero\) has augmented matrix with reduced row echelon form
\begin{equation*} \left[\begin{array}{cc|c}0 \amp 1 \amp 0\\0\amp 0\amp 0\end{array}\right], \end{equation*}
which tells us that in our solution, \(x_1=t\) is free, and \(x_2=0\text{.}\) Setting \(t=1\text{,}\) we get the single eigenvector
\begin{equation*} \vx[1]=\bbm 1\\0\ebm, \end{equation*}
as expected.
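A numerical aside (assuming NumPy, which the text does not): asking np.linalg.eig for the eigenvectors of this shear matrix shows the same phenomenon. The eigenvalue 1 is reported twice, but both returned columns are, up to rounding, parallel to \(\bbm 1\\0\ebm\text{;}\) there is only one independent eigenvector.

    import numpy as np

    A = np.array([[1, 4],
                  [0, 1]])

    vals, vecs = np.linalg.eig(A)
    print(vals)   # [1. 1.] -- the eigenvalue 1, repeated
    print(vecs)   # both columns are (numerically) parallel to [1, 0]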
In each of our examples to this point, every eigenvalue corresponded to a single (independent) eigenvector. Is this always the case? We will not prove it in this textbook, but it turns out that in general, the power to which the factor \((\lambda_0-\lambda)\) corresponding to an eigenvalue \(\lambda_0\) appears in the characteristic polynomial (called the multiplicity of the eigenvalue) places an upper limit on the number of independent eigenvectors that can correspond to that eigenvalue.
In Example 7.1.7, we had \(\det(A-\lambda I) = (-3-\lambda)^1(1-\lambda)^1\text{,}\) so the two eigenvalues \(\lambda = -3\) and \(\lambda = 1\) each have multiplicity one, and therefore they each have one corresponding eigenvector. In Example 7.1.8, the eigenvalue \(\lambda=1\) has multiplicity two, but we still had only one corresponding eigenvector. Can we ever have such an eigenvalue with two corresponding eigenvectors?

Example 7.1.10. An eigenvalue of multiplicity two.

Find the eigenvalues and eigenvectors of the matrix
\begin{equation*} A = \bbm 4 \amp 0\\0 \amp 4\ebm\text{.} \end{equation*}
Solution.
Here, we notice that \(A\) is a scalar multiple of the identity. As a transformation of the Cartesian plane, the transformation \(T(\vx)=A\vx\) is a dilation: it expands the size of every vector in the plane by a factor of 4. Knowing that this is a transformation that stretches, but does not rotate, we might expect that every nonzero vector is an eigenvector of \(A\text{!}\) Indeed, given \(\vx\neq \zero\text{,}\) we have
\begin{equation*} A\vx = (4I)\vx = 4(I\vx) = 4\vx\text{,} \end{equation*}
so \(\vx\) is an eigenvector corresponding to the eigenvalue 4.
Of course, this is pretty much the end of the story here, but let’s get some practice with our algorithm for finding eigenvalues and eigenvectors and confirm our results. We can immediately see that
\begin{equation*} \det(A-\lambda I) = (4-\lambda)^2, \end{equation*}
so that \(\lambda=4\) is an eigenvalue of multiplicity 2.
What about the eigenvectors? Well, computing \(A-4I\) is somewhat interesting: we get
\begin{equation*} A-4I = \bbm 4-4 \amp 0\\0 \amp 4-4\ebm = \bbm 0\amp 0\\0\amp 0\ebm, \end{equation*}
the zero matrix. Again, we see that literally any nonzero vector \(\vx \in\R^2\) qualifies as an eigenvector. We know that we can find at most two independent vectors in \(\R^2\text{,}\) so a simple choice is to take the standard basis vectors \(\ven{1}\) and \(\ven{2}\text{.}\)
Notice that we could have proceeded as usual and attempted to solve the system \((A-4I)\vx=\zero\text{.}\) In this case we get a rather strange augmented matrix:
\begin{equation*} \bbm A-4I \amp \zero\ebm = \bbm\mathbf{0} \amp \zero\ebm = \bbm 0\amp 0\amp 0\\0\amp 0\amp 0\ebm\text{!} \end{equation*}
It might seem like there’s absolutely nothing to do here, but we can read off a solution. In this case neither row places any conditions on the variables \(x_1\) and \(x_2\text{,}\) so both are free: \(x_1=s\) and \(x_2=t\) are both parameters, and
\begin{equation*} \vx = \bbm x_1\\x_2\ebm = \bbm s\\t\ebm = s\bbm 1\\0\ebm + t\bbm 0 \\ 1\ebm\text{.} \end{equation*}
Setting \(s=1\) and \(t=0\) gives us the eigenvector \(\ven{1}\text{,}\) and setting \(s=0\text{,}\) \(t=1\) gives us the eigenvector \(\ven{2}\text{.}\)
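Contrast this with the shear matrix above: repeating the NumPy check (again an optional aside) on \(4I\) returns the repeated eigenvalue 4 together with two independent eigenvectors, namely the standard basis vectors.

    import numpy as np

    A = 4 * np.eye(2)

    vals, vecs = np.linalg.eig(A)
    print(vals)   # [4. 4.]
    print(vecs)   # the identity matrix: columns e1 and e2 are independent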
We mentioned above that another possibility is that the characteristic polynomial has no real zeros at all, in which case our matrix has no (real) eigenvalues. Let’s see what we can say in such a situation.

Example 7.1.11. A matrix with complex eigenvalues.

Find the eigenvalues and eigenvectors of the matrix
\begin{equation*} A = \bbm 0 \amp -1\\1 \amp 0\ebm\text{.} \end{equation*}
Solution.
Before we proceed, let’s pause and think about this in the context of matrix transformations. If we define the transformation \(T(\vx) = A\vx\text{,}\) we have
\begin{equation*} T\left(\bbm x_1\\x_2\ebm\right) = \bbm 0 \amp -1\\1 \amp 0\ebm\bbm x_1\\x_2\ebm = \bbm -x_2\\x_1\ebm\text{.} \end{equation*}
Notice that \(T(\vx)\) is orthogonal to \(\vx\text{:}\)
\begin{equation*} T(\vx)\boldsymbol{\cdot}\vx = \bbm -x_2\\x_1\ebm \boldsymbol{\cdot}\bbm x_1\\x_2\ebm = -x_2x_1+x_1x_2=0\text{.} \end{equation*}
This is because the transformation \(T\) represents a rotation through an angle of \(\frac{\pi}{2}\) (90 degrees). Indeed, \(A\) is a rotation matrix (see Section 5.1) of the form
\begin{equation*} A = \bbm \cos\theta \amp -\sin\theta\\ \sin\theta \amp \cos\theta\ebm, \end{equation*}
where \(\theta = \frac{\pi}{2}\text{.}\)
Now, think about the eigenvalue equation \(A\vx = \lambda\vx\text{.}\) In this case, an eigenvector \(\vx\) would be a vector in the plane such that rotating it by 90 degrees produces a parallel vector! Clearly, this is nonsense, and indeed, we find that
\begin{equation*} \det(A-\lambda I) = \begin{vmatrix} -\lambda \amp -1\\1\amp -\lambda \end{vmatrix} = \lambda^2+1\text{,} \end{equation*}
which has no real roots, so the matrix \(A\) has no eigenvalues, which makes sense from a geometric point of view.
However, this is not the end of the story, provided that we’re willing to work with complex numbers. Over the complex numbers, we do have two eigenvalues:
\begin{equation*} \lambda^2+1 = (\lambda+i)(\lambda-i), \end{equation*}
so \(\lambda = i\) and \(\lambda = -i\) are eigenvalues. What are the eigenvectors? We proceed as always, except that the arithmetic in the row operations is a bit trickier with complex numbers. For \(\lambda=i\text{,}\) we have the system \((A-iI)\vx = \zero\text{.}\) We set up the augmented matrix below, and in this case, we’ll proceed step-by-step to the reduced row echelon form.
We have
\begin{equation*} A-iI = \bbm 0 \amp -1\\1 \amp 0\ebm - \bbm i\amp 0\\0\amp i\ebm = \bbm-i\amp -1\\1\amp -i\ebm, \end{equation*}
so we get the augmented matrix
\begin{equation*} \left[\begin{array}{cc|c}-i\amp -1\amp 0\\1\amp -i\amp 0\end{array}\right] \xrightarrow[]{R_1\leftrightarrow R_2} \left[\begin{array}{cc|c}1\amp -i\amp 0\\-i\amp -1\amp 0\end{array}\right] \xrightarrow[]{R_2+iR_1\to R_2}\left[\begin{array}{cc|c}1\amp -i\amp 0\\0\amp 0\amp 0\end{array}\right]\text{.} \end{equation*}
Notice in the last step that \(-i+i(1) = 0\) gives the zero in the first column, and \(-1+i(-i)=-1+1=0\) gives the zero in the second column. This tells us that \(x_2=t\) is a free (complex!) parameter while \(x_1-ix_2=0\text{,}\) so \(x_1 = ix_2=it\text{.}\) Our vector solution is thus
\begin{equation*} \vx[1] = \bbm it\\t\ebm = t\bbm i\\1\ebm\text{,} \end{equation*}
and we can check that
\begin{equation*} A\vx[1] = \bbm 0 \amp -1\\1 \amp 0\ebm\bbm i\\1\ebm = \bbm -1\\i\ebm = \bbm i(i)\\ i(1)\ebm = i\bbm i\\1\ebm = i\vx[1]\text{,} \end{equation*}
as expected.
We can similarly set up and solve
\begin{equation*} \bbm(A+iI) \amp \zero\ebm = \left[\begin{array}{cc|c}i \amp -1\amp 0\\1\amp i\amp 0\end{array}\right] \quad \arref \quad \left[\begin{array}{cc|c}1\amp i\amp 0\\0\amp 0\amp 0\end{array}\right]\text{,} \end{equation*}
giving us \(x_2=t\) as a free parameter, and \(x_1 = -ix_2 = -it\text{,}\) so
\begin{equation*} \vx[2] = \bbm -it\\t\ebm = t\bbm -i\\1\ebm\text{.} \end{equation*}
In this context we’re free to choose any complex value for \(t\text{.}\) Choosing \(t=i\) gives us the solution \(\vx[2] = \bbm 1\\i\ebm\text{.}\)
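Numerical software handles this case by switching to complex arithmetic automatically. A small sketch with NumPy (not assumed by the text); the returned columns are unit-length complex multiples of \(\bbm i\\1\ebm\) and \(\bbm -i\\1\ebm\text{.}\)

    import numpy as np

    A = np.array([[0, -1],
                  [1,  0]])

    vals, vecs = np.linalg.eig(A)
    print(vals)   # approximately [0.+1.j 0.-1.j]

    # Each column is a complex scalar multiple of [i, 1] or [-i, 1].
    for lam, x in zip(vals, vecs.T):
        print(np.allclose(A @ x, lam * x))   # True for each pair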
Our last few examples provided interesting departures from the earlier ones where we had two distinct eigenvalues; they also provided examples where we were able to analyze the situation geometrically, by considering the linear transformations defined by the matrix. The reader is encouraged to consider the other examples of transformations given in Section 5.1 and attempt a similar analysis.
So far, our examples have involved \(2\times 2\) matrices. Let’s do an example with a \(3\times 3\) matrix. The only real additional complication here is that our characteristic polynomial will now be a cubic polynomial, so factoring it is going to take some more work.

Example 7.1.12. Eigenvalues and eigenvectors for a \(3\times 3\) matrix.

Find the eigenvalues of \(\tta\text{,}\) and for each eigenvalue, give one eigenvector, where
\begin{equation*} \tta = \bbm-7 \amp -2 \amp 10\\ -3 \amp 2 \amp 3 \\ -6 \amp -2 \amp 9 \ebm\text{.} \end{equation*}
Solution.
We first compute the characteristic polynomial, set it equal to 0, then solve for \(\lda\text{.}\) A warning: this process is rather long. We’ll use cofactor expansion along the first row; don’t get bogged down with the arithmetic that comes from each step; just try to get the basic idea of what was done from step to step.
\begin{align*} \det(\tta-\lambda\tti) \amp = \bvm -7-\lda \amp -2 \amp 10 \\-3 \amp 2-\lda \amp 3\\ -6 \amp -2 \amp 9-\lda \evm\\ \amp = (-7-\lda)\bvm 2-\lda \amp 3\\-2 \amp 9-\lda\evm \ -\ (-2)\bvm -3\amp 3\\-6 \amp 9-\lda \evm \ +\ 10\bvm -3\amp 2-\lda \\-6 \amp -2 \evm\\ \amp = (-7-\lda)(\lda^2-11\lda + 24) + 2(3\lda-9)+10(-6\lda+18)\\ \amp = -\lda^3+4\lda^2-\lda -6\\ \amp = -(\lda + 1)(\lda-2)(\lda-3)\text{.} \end{align*}
In the last step we factored the characteristic polynomial \(-\lda^3+4\lda^2-\lda -6\text{.}\) Factoring polynomials of degree \(\gt 2\) is not trivial; we’ll assume the reader has access to methods for doing this accurately.
One could also graph this polynomial to find the roots. Graphing will show us that \(\lda = 3\) looks like a root, and a simple calculation will confirm that it is.
Our eigenvalues are \(\lda_1 = -1\text{,}\) \(\lda_2 = 2\) and \(\lda_3 = 3\text{.}\) We now find corresponding eigenvectors.
For \(\lda_1 = -1\text{:}\)
We need to solve the equation \((\tta - (-1)\tti)\vx = \zero\text{.}\) To do this, we form the appropriate augmented matrix and put it into reduced row echelon form.
\begin{equation*} \bbm -6 \amp -2 \amp 10 \amp 0\\ -3 \amp 3 \amp 3 \amp 0 \\ -6 \amp -2 \amp 10 \amp 0 \ebm \quad \overrightarrow{\text{rref}} \quad \bbm 1\amp 0\amp -1.5\amp 0\\0\amp 1\amp -.5\amp 0\\0\amp 0\amp 0\amp 0\ebm \end{equation*}
Our solution, in vector form, is
\begin{equation*} \vx = x_3\bbm 3/2\\1/2\\1\ebm\text{.} \end{equation*}
We can pick any nonzero value for \(x_3\text{;}\) a nice choice would get rid of the fractions. So we’ll set \(x_3 = 2\) and choose \(\vx[1]=\bbm 3\\1\\2\ebm\) as our eigenvector.
For \(\lda_2 = 2\text{:}\)
We need to solve the equation \((\tta - 2\tti)\vx = \zero\text{.}\) To do this, we form the appropriate augmented matrix and put it into reduced row echelon form.
\begin{equation*} \bbm -9 \amp -2 \amp 10 \amp 0\\ -3 \amp 0 \amp 3 \amp 0 \\ -6 \amp -2 \amp 7 \amp 0 \ebm \quad \overrightarrow{\text{rref}} \quad \bbm 1\amp 0\amp -1\amp 0\\0\amp 1\amp -.5\amp 0\\0\amp 0\amp 0\amp 0\ebm \end{equation*}
Our solution, in vector form, is
\begin{equation*} \vx = x_3\bbm 1\\1/2\\1\ebm\text{.} \end{equation*}
We can pick any nonzero value for \(x_3\text{;}\) again, a nice choice would get rid of the fractions. So we’ll set \(x_3 = 2\) and choose \(\vx[2]=\bbm 2\\1\\2\ebm\) as our eigenvector.
For \(\lda_3 = 3\text{:}\)
We need to solve the equation \((\tta - 3\tti)\vx = \zero\text{.}\) To do this, we form the appropriate augmented matrix and put it into reduced row echelon form.
\begin{equation*} \bbm -10 \amp -2 \amp 10 \amp 0\\ -3 \amp -1 \amp 3 \amp 0 \\ -6 \amp -2 \amp 6 \amp 0 \ebm \quad \overrightarrow{\text{rref}} \quad \bbm 1\amp 0\amp -1\amp 0\\0\amp 1\amp 0\amp 0\\0\amp 0\amp 0\amp 0\ebm \end{equation*}
Our solution, in vector form, is (note that \(x_2 = 0\)):
\begin{equation*} \vx = x_3\bbm 1\\0\\1\ebm\text{.} \end{equation*}
We can pick any nonzero value for \(x_3\text{;}\) an easy choice is \(x_3 = 1\text{,}\) so \(\vx[3]=\bbm 1\\0\\1\ebm\) as our eigenvector.
To summarize, we have the following eigenvalue/eigenvector pairs:
\begin{align*} \text{eigenvalue } \lda_1 \amp = -1 \text{ with eigenvector } \vx[1] = \bbm 3\\1\\2\ebm\\ \text{eigenvalue } \lda_2 \amp = 2 \text{ with eigenvector } \vx[2] = \bbm 2\\1\\2\ebm\\ \text{eigenvalue } \lda_3 \amp = 3 \text{ with eigenvector } \vx[3] = \bbm 1\\0\\1\ebm\text{.} \end{align*}
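Factoring the cubic is the hard part of this example, and it is one place where software genuinely helps. Assuming NumPy is available (an aside, not part of the worked solution), np.poly and np.roots recover the eigenvalues, and one of the eigenpairs above is easy to check directly.

    import numpy as np

    A = np.array([[-7, -2, 10],
                  [-3,  2,  3],
                  [-6, -2,  9]])

    # np.poly(A) gives the coefficients of det(lambda*I - A) = lambda^3 - 4 lambda^2 + lambda + 6.
    print(np.roots(np.poly(A)))   # approximately [ 3.  2. -1.] (in some order)

    # Check the pair lambda = -1, x = [3, 1, 2] found above.
    x = np.array([3, 1, 2])
    print(A @ x, -1 * x)          # both print [-3 -1 -2]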

Example 7.1.13. Computing eigenvalues and eigenvectors.

Find the eigenvalues of \(\tta\text{,}\) and for each eigenvalue, give one eigenvector, where
\begin{equation*} \tta = \bbm 2 \amp -1 \amp 1\\ 0\amp 1\amp 6 \\ 0\amp 3\amp 4 \ebm\text{.} \end{equation*}
Solution.
We first compute the characteristic polynomial, set it equal to 0, then solve for \(\lda\text{.}\) We’ll use cofactor expansion down the first column (since it has lots of zeros).
\begin{align*} \det(\tta-\lambda\tti) \amp = \bvm 2-\lda \amp -1 \amp 1 \\0\amp 1-\lda \amp 6\\ 0 \amp 3 \amp 4-\lda \evm\\ \amp = (2-\lda)\bvm 1-\lda \amp 6\\3\amp 4-\lda\evm\\ \amp = (2-\lda)(\lda^2-5\lda-14)\\ \amp = (2-\lda)(\lda-7)(\lda+2)\text{.} \end{align*}
Notice that while the characteristic polynomial is cubic, we never actually saw a cubic; we never distributed the \((2-\lda)\) across the quadratic. Instead, we realized that this was a factor of the cubic, and just factored the remaining quadratic. (This makes this example quite a bit simpler than the previous example.)
Our eigenvalues are \(\lda_1 = -2\text{,}\) \(\lda_2 = 2\) and \(\lda_3 = 7\text{.}\) We now find corresponding eigenvectors.
For \(\lda_1 = -2\text{:}\)
We need to solve the equation \((\tta - (-2)\tti)\vx = \zero\text{.}\) To do this, we form the appropriate augmented matrix and put it into reduced row echelon form.
\begin{equation*} \bbm 4 \amp -1 \amp 1\amp 0\\ 0\amp 3\amp 6\amp 0 \\ 0\amp 3\amp 6\amp 0 \ebm \quad \overrightarrow{\text{rref}} \quad \bbm 1\amp 0\amp 3/4\amp 0\\0\amp 1\amp 2\amp 0\\0\amp 0\amp 0\amp 0\ebm \end{equation*}
Our solution, in vector form, is
\begin{equation*} \vx = x_3\bbm -3/4\\-2\\1\ebm\text{.} \end{equation*}
We can pick any nonzero value for \(x_3\text{;}\) a nice choice would get rid of the fractions. So we’ll set \(x_3 = 4\) and choose \(\vx[1]=\bbm -3\\-8\\4\ebm\) as our eigenvector.
For \(\lda_2 = 2\text{:}\)
We need to solve the equation \((\tta - 2\tti)\vx = \zero\text{.}\) To do this, we form the appropriate augmented matrix and put it into reduced row echelon form.
\begin{equation*} \bbm 0 \amp -1 \amp 1\amp 0\\ 0\amp -1\amp 6\amp 0 \\ 0\amp 3\amp 2\amp 0 \ebm \quad \overrightarrow{\text{rref}} \quad \bbm 0\amp 1\amp 0\amp 0\\0\amp 0\amp 1\amp 0\\0\amp 0\amp 0\amp 0\ebm \end{equation*}
This looks funny, so let’s remind ourselves how to solve it. The first two rows tell us that \(x_2 = 0\) and \(x_3 = 0\text{,}\) respectively. Notice that no row/equation uses \(x_1\text{;}\) we conclude that it is free. Therefore, our solution in vector form is
\begin{equation*} \vx = x_1\bbm 1\\0\\0\ebm\text{.} \end{equation*}
We can pick any nonzero value for \(x_1\text{;}\) an easy choice is \(x_1 = 1\text{,}\) which gives \(\vx[2]=\bbm 1\\0\\0\ebm\) as our eigenvector.
For \(\lda_3 = 7\text{:}\)
We need to solve the equation \((\tta - 7\tti)\vx = \zero\text{.}\) To do this, we form the appropriate augmented matrix and put it into reduced row echelon form.
\begin{equation*} \bbm -5 \amp -1 \amp 1\amp 0\\ 0\amp -6\amp 6\amp 0 \\ 0\amp 3\amp -3\amp 0 \ebm \quad \overrightarrow{\text{rref}} \quad \bbm 1\amp 0\amp 0\amp 0\\0\amp 1\amp -1\amp 0\\0\amp 0\amp 0\amp 0\ebm \end{equation*}
Our solution, in vector form, is (note that \(x_1 = 0\)):
\begin{equation*} \vx = x_3\bbm 0\\1\\1\ebm\text{.} \end{equation*}
We can pick any nonzero value for \(x_3\text{;}\) an easy choice is \(x_3 = 1\text{,}\) so \(\vx[3]=\bbm 0\\1\\1\ebm\) is our eigenvector.
To summarize, we have the following eigenvalue/eigenvector pairs:
\begin{align*} \text{eigenvalue } \lda_1 \amp = -2 \text{ with eigenvector } \vx[1] = \bbm-3\\-8\\4\ebm\\ \text{eigenvalue } \lda_2 \amp = 2 \text{ with eigenvector } \vx[2] = \bbm 1\\0\\0\ebm\\ \text{eigenvalue } \lda_3 \amp = 7 \text{ with eigenvector } \vx[3] = \bbm 0\\1\\1\ebm\text{.} \end{align*}
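Finally, step 2 of Key Idea 7.1.5 (solving the homogeneous system \((\tta-\lda\tti)\vx=\zero\)) is exactly a null space computation. A sketch using SymPy (assumed available; not a tool the text relies on) reproduces the basis vectors found above, up to scaling:

    from sympy import Matrix, eye

    A = Matrix([[2, -1, 1],
                [0,  1, 6],
                [0,  3, 4]])

    # For each eigenvalue, an eigenvector basis is the null space of A - lambda*I.
    for lam in [-2, 2, 7]:
        basis = (A - lam * eye(3)).nullspace()
        print(lam, [list(v) for v in basis])
    # Expect something like:
    #   -2 [[-3/4, -2, 1]]   (a multiple of [-3, -8, 4])
    #    2 [[1, 0, 0]]
    #    7 [[0, 1, 1]]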
In this section we have learned about a new concept: given a matrix \(\tta\) we can find certain values \(\lda\) and vectors \(\vx\) where \(\tta\vx =\lda\vx\text{.}\) In the next section we will continue the pattern we have established in this text: after learning a new concept, we see how it interacts with other concepts we know about. That is, we’ll look for connections between eigenvalues and eigenvectors and things like the inverse, determinants, the trace, the transpose, etc.

Exercises

Exercise Group.

A matrix \(\tta\) and one of its eigenvectors are given. Find the eigenvalue of \(\tta\) for the given eigenvector.
1.
\(\tta = \bbm 9 \amp 8 \\ -6 \amp -5\ebm\text{,}\) \(\vx = \bbm -4\\3\ebm\)
2.
\(\tta = \bbm 19 \amp -6 \\ 48 \amp -15\ebm\text{,}\) \(\vx = \bbm 1\\3\ebm\)
3.
\(\tta = \bbm -11 \amp -19 \amp 14 \\ -6 \amp -8 \amp 6 \\ -12 \amp -22 \amp 15\ebm\text{,}\) \(\vx = \bbm 3\\2\\4\ebm\)
4.
\(\tta = \bbm -7 \amp 1 \amp 3 \\ 10 \amp 2 \amp -3 \\ -20 \amp -14 \amp 1\ebm\text{,}\) \(\vx = \bbm 1\\-2\\4\ebm\)
5.
\(\tta = \bbm -12 \amp -10 \amp 0 \\ 15 \amp 13 \amp 0 \\ 15 \amp 18 \amp -5\ebm\text{,}\) \(\vx = \bbm -1\\1\\1\ebm\)
6.
\(\tta = \bbm 1 \amp -2\\ -2 \amp 4 \ebm\text{,}\) \(\vx = \bbm 2\\1\ebm\)

Exercise Group.

A matrix \(\tta\) and one of its eigenvalues are given. Find an eigenvector of \(\tta\) for the given eigenvalue.
7.
\(\tta = \bbm -16 \amp -28 \amp -19 \\ 42 \amp 69 \amp 46 \\ -42 \amp -72 \amp -49\ebm\text{,}\) \(\lambda = 5\)
8.
\(\tta = \bbm 7 \amp -5 \amp -10 \\ 6 \amp 2 \amp -6 \\ 2 \amp -5 \amp -5\ebm\text{,}\) \(\lambda = -3\)
9.
\(\tta = \bbm 4 \amp 5 \amp -3 \\ -7 \amp -8 \amp 3 \\ 1 \amp -5 \amp 8\ebm\text{,}\) \(\lambda = 2\)
10.
\(\tta = \bbm 16 \amp 6 \\ -18 \amp -5\ebm\text{,}\) \(\lambda = 4\)
11.
\(\tta = \bbm -2 \amp 6 \\ -9 \amp 13\ebm\text{,}\) \(\lambda = 7\)

Exercise Group.

Find the eigenvalues of the given matrix. For each eigenvalue, give an eigenvector.
12.
\(\bbm 1 \amp -2 \amp -3 \\ 0 \amp 3 \amp 0 \\ 0 \amp -1 \amp -1\ebm\)
13.
\(\bbm 2 \amp -1 \amp 1 \\ 0 \amp 3 \amp 6 \\ 0 \amp 0 \amp 7\ebm\)
14.
\(\bbm 2 \amp -12 \\ 2 \amp -8\ebm\)
15.
\(\bbm 5 \amp 0 \amp 0 \\ 1 \amp 1 \amp 0 \\ -1 \amp 5 \amp -2\ebm\)
16.
\(\bbm-3 \amp 1 \\ 0 \amp -1\ebm\)
17.
\(\bbm -1 \amp 18 \amp 0 \\ 1 \amp 2 \amp 0 \\ 5 \amp -3 \amp -1\ebm\)
18.
\(\bbm 1 \amp 0 \amp 12 \\ 2 \amp -5 \amp 0 \\ 1 \amp 0 \amp 2\ebm\)
19.
\(\bbm 3 \amp 12 \\ 1 \amp -1\ebm\)
20.
\(\bbm 3 \amp -1 \\ -1 \amp 3\ebm\)
21.
\(\bbm 1 \amp 0 \amp -18 \\ -4 \amp 3 \amp -1 \\ 1 \amp 0 \amp -8\ebm\)
22.
\(\bbm 3 \amp 5 \amp -5\\ -2\amp 3\amp 2\\-2\amp 5\amp 0\ebm\)
23.
\(\bbm-1 \amp -4 \\ -3 \amp -2\ebm\)
24.
\(\bbm 1\amp 2\amp 1\\1\amp 2\amp 3\\1\amp 1\amp 1\ebm\)
25.
\(\bbm 0 \amp 1 \\ 25 \amp 0\ebm\)
26.
\(\bbm 5 \amp 9 \\ -1 \amp -5\ebm\)
27.
\(\bbm-4 \amp 72 \\ -1 \amp 13\ebm\)
28.
\(\bbm 5 \amp -2 \amp 3 \\ 0 \amp 4 \amp 0 \\ 0 \amp -1 \amp 3\ebm\)