Elementary Linear Algebra: For University of Lethbridge Math 1410

Section 4.4 The Matrix Inverse

Once again we visit the old algebra equation, \(ax=b\text{.}\) How do we solve for \(x\text{?}\) We know that, as long as \(a\neq 0\text{,}\)
\begin{equation*} x = \frac{b}{a}, \ \text{or, stated in another way,} \ x = a^{-1}b\text{.} \end{equation*}
What is \(a^{-1}\text{?}\) It is the number that, when multiplied by \(a\text{,}\) returns 1. That is,
\begin{equation*} a^{-1}a = 1\text{.} \end{equation*}
Let us now think in terms of matrices. We have learned of the identity matrix \(\tti\) that “acts like the number 1.” That is, if \(\tta\) is a square matrix, then
\begin{equation*} \tti\tta = \tta\tti = \tta\text{.} \end{equation*}
If we had a matrix, which we’ll call \(\ttai\text{,}\) where \(\ttai\tta=\tti\text{,}\) then by analogy to our algebra example above it seems like we might be able to solve the linear system \(\ttaxb\) for \(\vx\) by multiplying both sides of the equation by \(\ttai\text{.}\) That is, perhaps
\begin{equation*} \vx = \ttai\vb\text{.} \end{equation*}
There is no guarantee that such a matrix is going to exist for an arbitrary \(n\times n\) matrix \(A\text{,}\) but if it does, we say that \(A\) is invertible.

Definition 4.4.1. Invertible Matrices and the Inverse of \(\tta\).

We say that an \(n\times n\) matrix \(A\) is invertible if there exists a matrix \(X\) such that
\begin{equation*} AX = XA = I_n\text{.} \end{equation*}
When this is the case, we call the matrix \(X\) the inverse of \(A\) and write \(X=A^{-1}\text{.}\)
Of course, there is a lot of speculation here. We don’t know in general that a matrix like \(\ttai\) exists. (And even if it does, we don’t yet know whether it is unique, despite the use of the definite article in calling \(X\) “the” inverse of \(A\text{.}\)) However, we do know how to solve the matrix equation \(\tta\ttx = \ttb\text{,}\) so we can use that technique to solve the equation \(\tta\ttx = \tti\) for \(\ttx\text{.}\) This seems like it will get us close to what we want. Let’s practice this once and then study our results.

Example 4.4.2. Solving \(\tta\ttx=\tti\).

Let
\begin{equation*} \tta = \bbm 2 \amp 1\\1 \amp 1\ebm\text{.} \end{equation*}
Find a matrix \(\ttx\) such that \(\tta\ttx = \tti\text{.}\)
Solution.
We know how to solve this from the previous section: we form the proper augmented matrix, put it into reduced row echelon form and interpret the results.
\begin{equation*} \bbm 2 \amp 1 \amp 1\amp 0\\1\amp 1\amp 0\amp 1\ebm \quad \arref \quad \bbm 1\amp 0\amp 1\amp -1\\0\amp 1\amp -1\amp 2\ebm \end{equation*}
We read from our matrix that
\begin{equation*} \ttx = \bbm 1 \amp -1\\-1 \amp 2\ebm\text{.} \end{equation*}
Let’s check our work:
\begin{align*} \tta\ttx \amp = \bbm 2\amp 1\\1\amp 1\ebm\bbm 1\amp -1\\-1\amp 2\ebm\\ \amp = \bbm 1\amp 0\\0\amp 1\ebm\\ \amp = \tti \end{align*}
Sure enough, it works.
Looking at our previous example, we are tempted to jump in and call the matrix \(\ttx\) that we found “\(\ttai\text{.}\)” However, there are two obstacles in our way.
First, we know that in general \(\tta\ttb \neq \ttb\tta\text{.}\) So while we found that \(\tta\ttx = \tti\text{,}\) we can’t automatically assume that \(\ttx\tta=\tti\text{.}\)
Secondly, we have seen examples of matrices where \(\tta\ttb = \tta\ttc\text{,}\) but \(\ttb\neq\ttc\text{.}\) So just because \(\tta\ttx = \tti\text{,}\) it is possible that another matrix \(\tty\) exists where \(\tta\tty = \tti\text{.}\) If this is the case, using the notation \(\ttai\) would be misleading, since it could refer to more than one matrix.
These obstacles that we face are not insurmountable. The first obstacle was that we know that \(\tta\ttx=\tti\) but didn’t know that \(\ttx\tta=\tti\text{.}\) That’s easy enough to check, though. Let’s look at \(\tta\) and \(\ttx\) from our previous example.
\begin{align*} \ttx\tta \amp = \bbm 1\amp -1\\-1\amp 2\ebm\bbm 2\amp 1\\1\amp 1\ebm\\ \amp =\bbm 1\amp 0\\0\amp 1\ebm\\ \amp =\tti\text{.} \end{align*}
Perhaps this first obstacle isn’t much of an obstacle after all. Of course, we only have one example where it worked, so this doesn’t mean that it always works. We have good news, though: it always does work. The only “bad” news is that this is a bit harder to prove. For now, we will state it as a theorem, but the proof will have to wait until later: see the proof of Theorem 4.5.1.

Theorem 4.4.3.

Let \(A\) and \(X\) be \(n\times n\) matrices such that \(AX = I_n\text{.}\) Then \(XA = I_n\) as well.
The second obstacle is easier to address. We want to know if another matrix \(\tty\) exists where \(\tta\tty = \tti =\tty\tta\text{.}\) Let’s suppose that it does. Consider the expression \(\ttx\tta\tty\text{.}\) Since matrix multiplication is associative, we can group this any way we choose. We could group this as \((\ttx\tta)\tty\text{;}\) this results in
\begin{align*} (\ttx\tta)\tty \amp = \tti\tty\\ \amp = \tty\text{.} \end{align*}
We could also group \(\ttx\tta\tty\) as \(\ttx(\tta\tty)\text{.}\) This tells us
\begin{align*} \ttx(\tta\tty) \amp = \ttx\tti\\ \amp = \ttx\text{.} \end{align*}
Combining the two ideas above, we see that \(\ttx = \ttx\tta\tty = \tty\text{;}\) that is, \(\ttx=\tty\text{.}\) We conclude that there is only one matrix \(\ttx\) where \(\ttx\tta = \tti = \tta\ttx\text{.}\) (Even if we think we have two, we can do the above exercise and see that we really just have one.)
We have just proved the following theorem.

Theorem 4.4.4. Uniqueness of the inverse.

If \(A\) is an invertible \(n\times n\) matrix, then its inverse is unique. That is, if \(X\) and \(Y\) are matrices satisfying \(AX = XA = I_n\) and \(AY = YA = I_n\text{,}\) then \(X = Y\text{.}\)

Thus, we were justified in Definition 4.4.1 in calling \(A^{-1}\) “the” inverse of \(A\) (rather than merely “an” inverse). Theorems 4.4.3 and 4.4.4 are incredibly important in practice. Together, they tell us that if we are able to establish that either \(AX=I_n\) or \(XA = I_n\) for some matrix \(X\text{,}\) then we can immediately conclude two things: first, that \(A\) is invertible, and second, that \(X=A^{-1}\text{.}\) We put this observation to use in the next example.

Example 4.4.5. Using Theorems 4.4.3 and 4.4.4.

Suppose \(A\) is an \(n\times n\) matrix such that \(A^5 = I_n\text{.}\) Prove that \(A\) is invertible, and find an expression for \(A^{-1}\text{.}\)
Solution.
Using Theorem 4.4.4, we can quickly kill two birds with one stone. Using properties of exponents (and the fact that \(5=1+4\)), we have
\begin{equation*} A^5 = A\cdot (A\cdot A\cdot A\cdot A) = A(A^4) = I_n\text{.} \end{equation*}
Thus, if we set \(X=A^4\text{,}\) then \(AX = I_n\text{,}\) so by Theorems 4.4.3 and 4.4.4, \(A\) is invertible, and \(A^{-1} = A^4\text{.}\)
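In fact, for this particular example we do not even need Theorem 4.4.3 to check the other order, since the same exponent rules give
\begin{equation*} A^4\cdot A = A^5 = I_n\text{,} \end{equation*}
so \(XA = I_n\) as well.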
At this point, it is natural to wonder which \(n\times n\) matrices will be invertible. Will any non-zero matrix do? (No.) Are such matrices a rare occurrence? (No.) As we proceed through this chapter and the next, we will see that there are many different conditions one can place on an \(n\times n\) matrix that are equivalent to the statement “The matrix \(A\) is invertible.” Before we begin our attempt to answer this question in general, let’s look at a particular example.

Example 4.4.6. A non-invertible matrix.

Find the inverse of \(\tta = \bbm 1\amp 2\\2\amp 4\ebm\text{.}\)
Solution.
Solving the equation \(\tta\ttx = \tti\) for \(\ttx\) will give us the inverse of \(\tta\text{,}\) if it exists. Forming the appropriate augmented matrix and finding its reduced row echelon form gives us
\begin{equation*} \bbm 1 \amp 2 \amp 1\amp 0\\2\amp 4\amp 0\amp 1\ebm \quad \arref \quad \bbm 1\amp 2\amp 0\amp 1/2\\0\amp 0\amp 1\amp -1/2\ebm \end{equation*}
Yikes! We were expecting to find that the reduced row echelon form of this matrix would look like
\begin{equation*} \bbm\tti \amp \ttai\ebm\text{.} \end{equation*}
However, we don’t have the identity on the left hand side. Our conclusion: \(\tta\) is not invertible.
We have just seen that not all matrices are invertible. The attentive reader might have been able to spot the source of the trouble in the previous example: notice that the second row of \(A\) is a multiple of the first, so that the row operation \(R_2-2R_1\to R_2\) created a row of zeros. Can you think what sort of condition would signal trouble for a general \(n\times n\) matrix? Here, we need to think back to our discussions of the various theoretical concepts we’ve encountered, such as rank, span, linear independence, and so on. Let us think of the rows of \(A\) as row vectors.
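To make this concrete, here is the effect of that row operation on the augmented matrix from Example 4.4.6:
\begin{equation*} \bbm 1 \amp 2 \amp 1\amp 0\\2\amp 4\amp 0\amp 1\ebm \quad \longrightarrow \quad \bbm 1\amp 2\amp 1\amp 0\\0\amp 0\amp -2\amp 1\ebm\text{.} \end{equation*}
The second row of \(\tta\) has been replaced by a row of zeros, and this is what prevents the left half from ever reaching \(\tti\text{.}\)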
The elementary row operations that we perform on a matrix either rearrange these vectors, or create new vectors that are linear combinations of the old ones. The only way we end up with a row of zeros in the reduced row echelon form of \(A\) is if one of the rows of \(A\) can be written as a linear combination of the others; that is, if the rows of \(A\) are linearly dependent. We also know that if there is a row of zeros in the reduced row echelon form of \(A\text{,}\) then not every row contains a leading 1. Recalling that the rank of \(A\) is equal to the number of leading 1s in the reduced row echelon form of \(A\text{,}\) we have the following:

Theorem 4.4.7.

Let \(A\) be an \(n\times n\) matrix. The following statements are equivalent:

1. \(A\) is invertible.
2. The reduced row echelon form of \(A\) is \(I_n\text{.}\)
3. \(\operatorname{rank}(A) = n\text{.}\)
4. The rows of \(A\) are linearly independent.
The claim that “the following statements are equivalent” in Theorem 4.4.7 means that as soon as we know that one of the statements on the list is true, we can immediately conclude that the others are true as well. This is also the case if we know one of the statements is false. For example, if we know that \(\operatorname{rank}(A)\lt n\text{,}\) then we can immediately conclude that \(A\) will not be invertible.
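For instance, for the matrix of Example 4.4.6 we have
\begin{equation*} \bbm 1\amp 2\\2\amp 4\ebm \quad \arref \quad \bbm 1\amp 2\\0\amp 0\ebm\text{,} \end{equation*}
so \(\operatorname{rank}(A) = 1 \lt 2\text{,}\) and we can conclude immediately that this matrix is not invertible.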
Let’s sum up what we’ve learned so far. We’ve discovered that if a matrix has an inverse, it has only one. Therefore, we gave that special matrix a name, “the inverse.” Finally, we describe the most general way to find the inverse of a matrix, and a way to tell if it does not have one.

Key Idea 4.4.8. Finding \(\ttai\).

Let \(\tta\) be an \(n \times n\) matrix. To find \(\ttai\text{,}\) put the augmented matrix
\begin{equation*} \bbm \tta \amp \tti_n \ebm \end{equation*}
into reduced row echelon form. If the result is of the form
\begin{equation*} \bbm I_n \amp \ttx \ebm, \end{equation*}
then \(\ttai = \ttx\text{.}\) If not (that is, if the first \(n\) columns of the reduced row echelon form are not \(\tti_n\)), then \(\tta\) is not invertible.
Let’s try again.

Example 4.4.9. Computing the inverse of a matrix.

Find the inverse, if it exists, of \(\tta = \bbm 1\amp 1\amp -1\\1\amp -1\amp 1\\1\amp 2\amp 3\ebm\text{.}\)
Solution.
We’ll try to solve \(\tta\ttx = \tti\) for \(\ttx\) and see what happens.
\begin{equation*} \bbm 1 \amp 1 \amp -1\amp 1\amp 0\amp 0\\1\amp -1\amp 1\amp 0\amp 1\amp 0\\1\amp 2\amp 3\amp 0\amp 0\amp 1\ebm \quad \arref\quad \bbm 1\amp 0\amp 0\amp 1/2\amp 1/2\amp 0\\0\amp 1\amp 0\amp 1/5\amp -2/5\amp 1/5\\0\amp 0\amp 1\amp -3/10\amp 1/10\amp 1/5\ebm \end{equation*}
We have a solution, so
\begin{equation*} \ttai = \bbm 1/2 \amp 1/2 \amp 0\\1/5\amp -2/5\amp 1/5\\-3/10\amp 1/10\amp 1/5\ebm\text{.} \end{equation*}
Multiply \(\tta\ttai\) to verify that it is indeed the inverse of \(\tta\text{.}\)
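Carrying out the multiplication, the check works out as it should:
\begin{align*} \tta\ttai \amp = \bbm 1\amp 1\amp -1\\1\amp -1\amp 1\\1\amp 2\amp 3\ebm\bbm 1/2 \amp 1/2 \amp 0\\1/5\amp -2/5\amp 1/5\\-3/10\amp 1/10\amp 1/5\ebm\\ \amp = \bbm 1\amp 0\amp 0\\0\amp 1\amp 0\\0\amp 0\amp 1\ebm = \tti\text{.} \end{align*}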
In general, given a matrix \(\tta\text{,}\) to find \(\ttai\) we need to form the augmented matrix \(\bbm\tta\amp \tti \ebm\text{,}\) put it into reduced row echelon form, and interpret the result. In the case of a \(2\times 2\) matrix, though, there is a shortcut. We give the shortcut in terms of a theorem.

Theorem 4.4.10.

Let \(\tta = \bbm a \amp b\\c \amp d\ebm\text{.}\) If \(ad-bc \neq 0\text{,}\) then \(\tta\) is invertible, and
\begin{equation*} \ttai = \frac{1}{ad-bc}\bbm d \amp -b\\-c \amp a\ebm\text{.} \end{equation*}
If \(ad-bc = 0\text{,}\) then \(\tta\) is not invertible.
We can’t divide by 0, so if \(ad-bc=0\text{,}\) we don’t have an inverse. Recall Example 4.4.6, where
\begin{equation*} \tta = \bbm 1 \amp 2\\2 \amp 4\ebm\text{.} \end{equation*}
Here, \(ad-bc = 1(4) - 2(2) = 0\text{,}\) which is why \(\tta\) didn’t have an inverse.
Although this idea is simple, we should practice it.

Example 4.4.11. Computing a \(2\times 2\) inverse using Theorem 4.4.10.

Use Theorem 4.4.10 to find the inverse of
\begin{equation*} \tta = \bbm 3 \amp 2\\-1 \amp 9\ebm \end{equation*}
if it exists.
Solution.
Since \(ad-bc = 29 \neq 0\text{,}\) \(\ttai\) exists. By Theorem 4.4.10,
\begin{align*} \ttai \amp = \frac{1}{3(9)-2(-1)}\bbm 9\amp -2\\1\amp 3\ebm\\ \amp = \frac{1}{29}\bbm 9\amp -2\\1\amp 3\ebm\text{.} \end{align*}
We can leave our answer in this form, or we could “simplify” it as
\begin{equation*} \ttai = \frac{1}{29}\bbm 9 \amp -2\\1 \amp 3\ebm = \bbm 9/29 \amp -2/29\\1/29 \amp 3/29\ebm\text{.} \end{equation*}
We started this section out by speculating that just as we solved algebraic equations of the form \(ax=b\) by computing \(x = a^{-1}b\text{,}\) we might be able to solve matrix equations of the form \(\ttaxb\) by computing \(\vx = \ttai\vb\text{.}\) If \(\ttai\) does exist, then we can solve the equation \(\ttaxb\) this way. Consider:
\begin{align*} \tta \vx \amp = \vb \amp \amp \text{ (original equation)}\\ \ttai\tta\vx \amp = \ttai\vb \amp \amp \text{ (multiply both sides on the left by } \ttai)\\ \tti\vx \amp = \ttai\vb \amp \amp \text{ (since } \ttai\tta=\tti)\\ \vx \amp = \ttai\vb \amp \amp \text{ (since } \tti\vx = \vx)\text{.} \end{align*}
Let’s step back and think about this for a moment. The only thing we know about the equation \(\ttaxb\) is that \(\tta\) is invertible. We also know that the equation \(\ttaxb\) has either exactly one solution, infinitely many solutions, or no solution. We just showed that if \(\tta\) is invertible, then \(\ttaxb\) has at least one solution. We showed that by setting \(\vx\) equal to \(\ttai\vb\text{,}\) we have a solution. Is it possible that more solutions exist?
No. Suppose we are told that a known vector \(\vvv\) is a solution to the equation \(\ttaxb\text{;}\) that is, we know that \(\tta\vvv=\vb\text{.}\) We can repeat the above steps:
\begin{align*} \tta\vvv \amp =\vb\\ \ttai\tta\vvv \amp =\ttai\vb \\ \tti\vvv \amp = \ttai\vb \\ \vvv \amp = \ttai\vb \text{.} \end{align*}
This shows that when \(\tta\) is invertible, the only solution to \(\ttaxb\) is \(\vx = \ttai\vb\text{.}\) We have just proved the following theorem.

Theorem 4.4.12.

Let \(\tta\) be an invertible \(n\times n\) matrix, and let \(\vb\) be any \(n\times 1\) column vector. Then the equation \(\ttaxb\) has exactly one solution, namely
\begin{equation*} \vx = \ttai\vb\text{.} \end{equation*}
A corollary to this theorem is: If \(\tta\) is not invertible, then \(\ttaxb\) does not have exactly one solution. It may have infinitely many solutions or it may have no solution, and we would need to examine the reduced row echelon form of the augmented matrix \(\bbm \tta \amp \vb \ebm\) to see which case applies.
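For instance, the matrix \(\tta=\bbm 1\amp 2\\2\amp 4\ebm\) from Example 4.4.6 is not invertible, and both cases can occur, depending on \(\vb\text{:}\)
\begin{equation*} \bbm 1 \amp 2\amp 1\\2\amp 4\amp 2\ebm \quad \arref \quad \bbm 1\amp 2\amp 1\\0\amp 0\amp 0\ebm, \qquad \bbm 1 \amp 2\amp 1\\2\amp 4\amp 1\ebm \quad \arref \quad \bbm 1\amp 2\amp 0\\0\amp 0\amp 1\ebm\text{.} \end{equation*}
The first system has infinitely many solutions, while the second has none.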
We demonstrate our theorem with an example.

Example 4.4.13. Using a matrix inverse to solve a system.

Solve \(\ttaxb\) by computing \(\vx = \ttai\vb\text{,}\) where
\begin{equation*} \tta = \bbm 1\amp 0\amp -3\\ -3\amp -4\amp 10 \\ 4\amp -5\amp -11\ebm\ \text{ and }\ \vb = \bbm -15\\ 57 \\ -46\ebm\text{.} \end{equation*}
Solution.
Without showing our steps, we compute
\begin{equation*} \ttai = \bbm 94\amp 15\amp -12\\ 7\amp 1\amp -1\\ 31\amp 5\amp -4\ebm\text{.} \end{equation*}
We then find the solution to \(\ttaxb\) by computing \(\ttai\vb\text{:}\)
\begin{align*} \vx \amp =\ttai\vb\\ \amp = \bbm 94 \amp 15\amp -12\\ 7\amp 1\amp -1\\ 31\amp 5\amp -4\ebm \bbm -15\\ 57 \\ -46\ebm \\ \amp = \bbm -3\\ -2\\ 4\ebm\text{.} \end{align*}
We can easily check our answer:
\begin{equation*} \bbm 1\amp 0\amp -3\\ -3\amp -4\amp 10 \\ 4\amp -5\amp -11\ebm\bbm -3\\ -2\\ 4\ebm = \bbm -15\\ 57 \\ -46\ebm\text{.} \end{equation*}
Knowing a matrix is invertible is incredibly useful. Among many other reasons, if you know \(\tta\) is invertible, then you know for sure that \(\ttaxb\) has a solution (as we just stated in Theorem 4.4.12). In the next section we’ll demonstrate many different properties of invertible matrices, including stating several different ways in which we know that a matrix is invertible.

Exercises

Exercise Group.

A matrix \(\tta\) is given. Find \(\ttai\) using Theorem 4.4.10, if it exists.
1.
\(\bbm 1\amp 5\\ -5\amp -24\ebm\)
2.
\(\bbm 1\amp -4\\ 1\amp -3\ebm\)
3.
\(\bbm 3\amp 0\\ 0\amp 7\ebm \)
4.
\(\bbm 2\amp 5\\ 3\amp 4\ebm \)
5.
\(\bbm 1 \amp -3\\ -2 \amp 6\ebm\)
6.
\(\bbm 3\amp 7\\ 2\amp 4\ebm \)
7.
\(\bbm 1\amp 0\\ 0\amp 1\ebm \)
8.
\(\bbm 0\amp 1\\ 1\amp 0\ebm \)

Exercise Group.

A matrix \(\tta\) is given. Find \(\ttai\) using Key Idea 4.4.8, if it exists.
9.
\(\bbm -2\amp 3\\ 1\amp 5\ebm \)
10.
\(\bbm -5\amp -2\\ 9\amp 2\ebm \)
11.
\(\bbm 1\amp 2\\ 3\amp 4\ebm \)
12.
\(\bbm 5 \amp 7\\ 5/3 \amp 7/3\ebm\)
13.
\(\bbm 25\amp -10\amp -4\\ -18\amp 7\amp 3\\ -6\amp 2\amp 1\ebm\)
14.
\(\bbm 1\amp 0\amp 0\\ 4\amp 1\amp -7\\ 20\amp 7\amp -48\ebm \)
15.
\(\bbm -4\amp 1\amp 5\\ -5\amp 1\amp 9\\ -10\amp 2\amp 19\ebm\)
16.
\(\bbm 1\amp -5\amp 0\\ -2\amp 15\amp 4\\ 4\amp -19\amp 1\ebm\)
17.
\(\bbm 25\amp -8\amp 0\\ -78\amp 25\amp 0\\ 48\amp -15\amp 1\ebm\)
18.
\(\bbm 1\amp 0\amp 0\\ 7\amp 5\amp 8\\ -2\amp -2\amp -3\ebm\)
19.
\(\bbm 0\amp 0\amp 1\\ 1\amp 0\amp 0\\ 0\amp 1\amp 0\ebm\)
20.
\(\bbm 0\amp 1\amp 0\\ 1\amp 0\amp 0\\ 0\amp 0\amp 1\ebm\)
21.
\(\bbm 2 \amp 3 \amp 4\\ -3\amp 6\amp 9\\ -1\amp 9\amp 13\ebm\)
22.
\(\bbm 5 \amp -1 \amp 0\\ 7\amp 7\amp 1\\ -2\amp -8\amp -1\ebm\)
23.
\(\bbm 1\amp 0\amp 0\amp 0\\ -19\amp -9\amp 0\amp 4\\ 33\amp 4\amp 1\amp -7\\ 4\amp 2\amp 0\amp -1\ebm \)
24.
\(\bbm 1\amp 0\amp 0\amp 0\\ 27\amp 1\amp 0\amp 4\\ 18\amp 0\amp 1\amp 4\\ 4\amp 0\amp 0\amp 1\ebm\)
25.
\(\bbm -15\amp 45\amp -3\amp 4\\ 55\amp -164\amp 15\amp -15\\ -215\amp 640\amp -62\amp 59\\ -4\amp 12\amp 0\amp 1\ebm \)
26.
\(\bbm 1\amp 0\amp 2\amp 8\\ 0\amp 1\amp 0\amp 0\\ 0\amp -4\amp -29\amp -110\\ 0\amp -3\amp -5\amp -19\ebm \)
27.
\(\bbm 0\amp 0\amp 1\amp 0\\ 0\amp 0\amp 0\amp 1\\ 1\amp 0\amp 0\amp 0\\ 0\amp 1\amp 0\amp 0\ebm \)
28.
\(\bbm 1\amp 0\amp 0\amp 0\\ 0\amp 2\amp 0\amp 0\\ 0\amp 0\amp 3\amp 0\\ 0\amp 0\amp 0\amp -4\ebm \)

Exercise Group.

A matrix \(\tta\) and a vector are given. Solve the equation \(\ttaxb\) using Theorem 4.4.12.
29.
\(\tta = \bbm 1\amp 2\amp 12\\ 0\amp 1\amp 6\\ -3\amp 0\amp 1\ebm\)
30.
\(\tta = \bbm 1\amp 0\amp -3\\ 8\amp -2\amp -13\\ 12\amp -3\amp -20\ebm\)
31.
\(\tta = \bbm 5\amp 0\amp -2\\ -8\amp 1\amp 5\\ -2\amp 0\amp 1\ebm\)
32.
\(\tta = \bbm 1\amp -6\amp 0\\ 0\amp 1\amp 0\\ 2\amp -8\amp 1\ebm\)
33.
\(\tta = \bbm 3 \amp 5\\ 2 \amp 3\ebm\text{,}\) \(\vb = \bbm 21\\ 13\ebm\)
34.
\(\tta = \bbm 1\amp -4\\ 4\amp -15\ebm\)
35.
\(\tta = \bbm 9\amp 70\\ -4\amp -31\ebm\)
36.
\(\tta = \bbm 10 \amp -57\\ 3 \amp -17\ebm\text{,}\) \(\vb =\bbm -14\\ -4\ebm\)