
Section 4.3 Project: Eigenvalues and diagonalization

Let A be a symmetric matrix. According to the Real Spectral Theorem, there exists an orthogonal matrix P such that \(P^TAP\) is diagonal, with the entries on the diagonal given by the eigenvalues of A.
In this worksheet, we will work through the process of finding the eigenvalues of a symmetric matrix, along with the orthogonal matrix P.
We will also look at the method described in Section 4.7 for finding dominant eigenvalues.

Subsection 4.3.1 Orthogonal diagonalization

Consider the following 5×5 matrix A:
\[A = \begin{bmatrix} 97 & -23 & -23 & -3 & -46\\ -23 & 49 & 1 & 21 & 2\\ -23 & 1 & 49 & 21 & 2\\ -3 & 21 & 21 & 9 & 42\\ -46 & 2 & 2 & 42 & 52 \end{bmatrix}.\]

Exercise 4.3.1.

(a)
First, enter the matrix and confirm that it’s symmetric. You can compute the transpose of a SymPy matrix A using A.T.
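A sketch of this step, assuming the import convention import sympy as sy used elsewhere in this worksheet:
import sympy as sy

A = sy.Matrix([
    [97, -23, -23, -3, -46],
    [-23, 49, 1, 21, 2],
    [-23, 1, 49, 21, 2],
    [-3, 21, 21, 9, 42],
    [-46, 2, 2, 42, 52]
])
A.T == A  # returns True, confirming that A is symmetric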
(b)
Now, find the eigenvalues of A. Yes, we could jump straight to the answer, but let’s go through the steps anyway. First, compute the characteristic polynomial of A. You may wish to refer to Subsection B.3.2 for the correct syntax to use.
Now, factor the polynomial:
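For instance (a sketch, assuming the matrix A from above; see Subsection B.3.2 for the syntax the text prefers):
x = sy.symbols('x')
p = A.charpoly(x)        # characteristic polynomial of A, as a polynomial in x
sy.factor(p.as_expr())   # the factored form reveals the eigenvalues and their multiplicities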
Finally, based on the output, input the eigenvalues into the cell below.
(c)
Recall that if v is an eigenvector of A corresponding to the eigenvalue \(\lambda\), then v belongs to the nullspace of \(\lambda I - A\), where I is the identity matrix. You can create an \(n\times n\) identity matrix in SymPy using the syntax eye(n).
For each eigenvalue \(\lambda\) found in part (b), compute the nullspace of \(\lambda I - A\). You will want to refer to these nullspaces later, so give each one a name. For example, if your first eigenvalue was 7, you could enter something like
E1 = (7*sy.eye(5)-A).nullspace()
E1
to get the first eigenspace. Three code cells are provided below. If you are in Jupyter, you can add more cells by clicking the + button in the toolbar.
Finally, let’s check our work. Use the command A.eigenvects() to compute the eigenvalues and eigenvectors in one step, and confirm that the results match what you found above.
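Each tuple in the output lists an eigenvalue, its multiplicity, and a basis for its eigenspace:
A.eigenvects()  # list of (eigenvalue, multiplicity, basis) tuples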
Next, we want to diagonalize A. There is a diagonalize command, but it won’t do orthogonal diagonalization, which is what we want.
Recall that eigenvectors corresponding to distinct eigenvalues are automatically orthogonal. But eigenvectors belonging to a repeated eigenvalue need not be, and there’s a good chance you’ve found that one of your eigenvalues has multiplicity greater than 1.
A matrix P is orthogonal if \(P^TP=I\), in which case the columns of P form an orthonormal basis. (So we are looking for an orthonormal basis of eigenvectors.)
For any 1-dimensional eigenspaces, you will just need to find a unit eigenvector. For example, if \(v=(2,1,0,3,1)\) is an eigenvector for an eigenvalue of multiplicity 1, then you will want to use the eigenvector \(u=\frac{1}{\sqrt{15}}(2,1,0,3,1)\), since \(\|v\|=\sqrt{15}\).
For any higher-dimensional eigenspaces, you can use the Gram-Schmidt algorithm. Recall that the optional argument True will produce unit vectors: given a basis B for an eigenspace, the command sy.GramSchmidt(B, True) will produce an orthonormal basis.
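For instance, a sketch, where E1 is a 1-dimensional eigenspace found above and E2 is a hypothetical name for the basis of a higher-dimensional eigenspace:
u1 = E1[0] / E1[0].norm()      # unit eigenvector spanning a 1-dimensional eigenspace
B2 = sy.GramSchmidt(E2, True)  # orthonormal basis for a higher-dimensional eigenspace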

Exercise 4.3.2.

(a)
Determine an orthonormal basis of eigenvectors of A.
(b)
Once you have found five orthonormal eigenvectors, create a matrix P whose columns are those eigenvectors:
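One way to assemble P, assuming your five orthonormal eigenvectors are named u1 through u5 (hypothetical names):
P = sy.Matrix.hstack(u1, u2, u3, u4, u5)  # the columns of P are the orthonormal eigenvectors
P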
(c)
Confirm that your matrix is orthogonal by computing \(P^TP\). You should get the identity matrix as the result.
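A sketch of the check (sy.simplify helps if the entries involve radicals):
sy.simplify(P.T * P)  # should produce the 5x5 identity matrix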
(d)
Finally, compute the matrix \(P^TAP\), and confirm that the result is a diagonal matrix whose entries are the eigenvalues of A.
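A sketch of this final check:
sy.simplify(P.T * A * P)  # should be diagonal, with the eigenvalues of A on the diagonal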

Subsection 4.3.2 The power method for dominant eigenvalues

For the problem you just completed, you may have noticed that there was one eigenvalue that was larger in absolute value than all the other eigenvalues. Such an eigenvalue is called a dominant eigenvalue.
If you complete Section 4.6, you will see that the state of a linear dynamical system is ultimately given by a sum of certain geometric series (involving the eigenvalues of a matrix), and that the long-term behaviour of the system is approximately geometric and governed by the dominant eigenvalue.
The power method states that, given an initial state vector \(x_0\), the sequence of vectors
\[x_0,\, Ax_0,\, A^2x_0,\, \ldots,\, A^kx_0,\, \ldots\]
will converge (in direction) to an eigenvector for the dominant eigenvalue.

Exercise 4.3.3.

Let’s see if this is true. Below, you are given an initial guess \(x_0\) and an empty list L. You want to populate L with vectors of the form \(A^kx_0\), for \(0\le k\le 10\). This is most easily done using a for loop. For details on syntax, see Subsubsection 4.7.2.1.
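A sketch of the loop, with a hypothetical initial guess standing in for the one provided:
x0 = sy.Matrix([1, 1, 1, 1, 1])  # hypothetical guess; use the x0 given in the worksheet
L = [x0]
for k in range(10):
    L.append(A * L[-1])          # after the loop, L[k] equals A**k times x0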
(a)
Let’s print the last vector in the list, which will probably have some pretty big entries:
(b)
Now let’s check our work. How can we tell if this vector is (approximately) a multiple of the dominant eigenvector? One option is to divide each entry in L[10] by the smallest (in absolute value) non-zero entry in L[10]. How does this compare to the eigenvector you originally found for the dominant eigenvalue?
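One way to carry out this comparison (a sketch):
v = L[10]
m = min((entry for entry in v if entry != 0), key=abs)  # smallest nonzero entry in absolute value
v / m  # compare with the dominant eigenvector found earlier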
If you find that some entries look right, but not others, see the last part of the worksheet.
(c)
Finally, suppose we didn’t know what the dominant eigenvalue was, and we wanted to find it. Note that if x is a dominant eigenvector, then \(Ax=\lambda x\), where \(\lambda\) is the dominant eigenvalue. Then
\[\lambda = \frac{x\cdot(\lambda x)}{x\cdot x} = \frac{x\cdot Ax}{x\cdot x}.\]
It seems reasonable, then, that if \(x_k\) is approximately equal to a dominant eigenvector, then the numbers
\[r_k = \frac{x_k\cdot Ax_k}{x_k\cdot x_k} = \frac{x_k\cdot x_{k+1}}{\|x_k\|^2}\]
should be approximately equal to the dominant eigenvalue. (These numbers are the so-called Rayleigh quotients.)
Following the approach in Subsubsection 4.7.2.1, compute the Rayleigh quotients \(r_k\) for \(1\le k\le 10\), and comment on how well they approximate the dominant eigenvalue of A.
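A sketch, reusing the list L from above:
for k in range(1, 11):
    r = L[k].dot(A * L[k]) / L[k].dot(L[k])  # the Rayleigh quotient r_k
    print(sy.N(r))                           # numerical value, for easy comparison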
(d)
Does it look like the numbers are approaching the dominant eigenvalue? Or do they seem to be getting near one of the other eigenvalues?
If your numbers seem wrong, it might be for the following reason: when we write our initial guess \(x_0\) as a linear combination of the eigenvectors, the coefficient of the dominant eigenvector has to be nonzero. Is that the case here? To check, note that if \(\vec{c}\) is the vector of coefficients, then we must have \(P\vec{c}=x_0\). Can you solve this equation for \(\vec{c}\)?
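Since P is orthogonal, its inverse is simply its transpose, which gives a quick way to solve for the coefficients (a sketch):
c = P.T * x0  # P is orthogonal, so P.T acts as the inverse of P
c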
If the first entry in \(\vec{c}\) is zero, enter a new value for \(x_0\), and try the above steps again in the cell below.