
Elementary Linear Algebra: For University of Lethbridge Math 1410

Section 6.5 Applications of the Determinant

In the previous sections we have learned about the determinant, but we haven’t given a really good reason why we would want to compute it. This section shows two applications of the determinant: solving systems of linear equations and computing the inverse of a matrix.

Subsection 6.5.1 Cramer’s Rule
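The text below refers to Theorem 6.5.1; its statement, which the examples that follow apply directly, is the standard form of Cramer's Rule.

Theorem 6.5.1. Cramer's Rule.

Let \(\tta\) be an \(n\times n\) matrix with \(\det(\tta)\neq 0\) and let \(\vb\) be an \(n\times 1\) column vector. For each \(i\text{,}\) let \(\tta_i(\vb)\) denote the matrix obtained from \(\tta\) by replacing its \(i^{\text{th}}\) column with \(\vb\text{.}\) Then the system \(\ttaxb\) has exactly one solution \(\vx\text{,}\) whose entries are given by
\begin{equation*} x_i = \frac{\det(\tta_i(\vb))}{\det(\tta)}\text{.} \end{equation*}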

Example 6.5.2. Using Cramer’s Rule.

Use Cramer’s Rule to solve the linear system \(\ttaxb\) where
\begin{equation*} \tta = \bbm 1 \amp 5 \amp -3\\1\amp 4\amp 2\\2\amp -1\amp 0 \ebm \ \text{ and }\ \vb = \bbm-36\\-11\\7\ebm\text{.} \end{equation*}
Solution.
We first compute the determinant of \(\tta\) to see if we can apply Cramer’s Rule.
\begin{equation*} \det(A) = \bvm 1 \amp 5 \amp -3\\1\amp 4\amp 2\\2\amp -1\amp 0\evm = 49\text{.} \end{equation*}
Since \(\det(A)\neq 0\text{,}\) we can apply Cramer’s Rule. Following Theorem 6.5.1, we compute \(\det(\tta_1(\vb))\text{,}\) \(\det(\tta_2(\vb))\) and \(\det(\tta_3(\vb))\text{.}\)
\begin{equation*} \det(\tta_1(\vb)) = \bvm \mathbf{-36} \amp 5 \amp -3\\ \mathbf{-11}\amp 4\amp 2\\ \mathbf{7}\amp -1\amp 0 \evm = 49\text{.} \end{equation*}
(We used a bold font to show where we replaced the first column of \(\tta\text{.}\))
\begin{align*} \det(\tta_2(\vb)) \amp = \bvm 1\amp \mathbf{-36}\amp -3\\1\amp \mathbf{-11}\amp 2\\2\amp \mathbf{7}\amp 0\evm = -245\\ \det(\tta_3(\vb)) \amp = \bvm 1\amp 5\amp \mathbf{-36}\\1\amp 4\amp \mathbf{-11}\\2\amp -1\amp \mathbf{7}\evm = 196\text{.} \end{align*}
Therefore we can compute \(\vx\text{:}\)
\begin{align*} x_1 \amp = \frac{\det(\tta_1(\vb))}{\det(A)} = \frac{49}{49} = 1\\ x_2 \amp = \frac{\det(\tta_2(\vb))}{\det(A)} = \frac{-245}{49} = -5\\ x_3 \amp = \frac{\det(\tta_3(\vb))}{\det(A)} = \frac{196}{49} = 4\text{.} \end{align*}
Therefore
\begin{equation*} \vx = \bbm x_1\\x_2\\x_3\ebm = \bbm 1\\-5\\4\ebm\text{.} \end{equation*}
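The computation above is easy to automate. The following is a minimal Python sketch of Cramer's Rule, using exact Fraction arithmetic; the determinant routine is a naive cofactor expansion along the first row, which is fine for small matrices.

```python
from fractions import Fraction

def det(M):
    """Determinant via cofactor expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def cramer(A, b):
    """Solve Ax = b by Cramer's Rule; requires det(A) != 0."""
    d = det(A)
    if d == 0:
        raise ValueError("det(A) = 0; Cramer's Rule does not apply")
    xs = []
    for i in range(len(A)):
        # A_i(b): replace column i of A with the vector b
        Ai = [row[:i] + [b[k]] + row[i + 1:] for k, row in enumerate(A)]
        xs.append(Fraction(det(Ai), d))
    return xs

A = [[1, 5, -3], [1, 4, 2], [2, -1, 0]]
b = [-36, -11, 7]
print(cramer(A, b))  # [Fraction(1, 1), Fraction(-5, 1), Fraction(4, 1)]
```

This reproduces the solution of the example: \(x_1=1\text{,}\) \(x_2=-5\text{,}\) \(x_3=4\text{.}\)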

Example 6.5.3. Using Cramer’s Rule.

Use Cramer’s Rule to solve the linear system \(\ttaxb\) where
\begin{equation*} \tta = \bbm 1 \amp 2\\3 \amp 4\ebm \ \text{ and } \ \vb = \bbm -1\\1\ebm\text{.} \end{equation*}
Solution.
The determinant of \(\tta\) is \(-2\text{,}\) so we can apply Cramer’s Rule.
\begin{align*} \det(\tta_1(\vb)) \amp = \bvm \mathbf{-1} \amp 2\\ \mathbf{1} \amp 4\evm = -6\\ \det(\tta_2(\vb)) \amp = \bvm 1 \amp \mathbf{-1}\\ 3 \amp \mathbf{1} \evm = 4\text{.} \end{align*}
Therefore
\begin{align*} x_1 \amp = \frac{\det(\tta_1(\vb))}{\det(A)} = \frac{-6}{-2} = 3\\ x_2 \amp = \frac{\det(\tta_2(\vb))}{\det(A)} = \frac{4}{-2} = -2\text{,} \end{align*}
and
\begin{equation*} \vx = \bbm x_1\\x_2\ebm = \bbm 3\\-2\ebm\text{.} \end{equation*}
We learned in Section 6.4 that when considering a linear system \(\ttaxb\) where \(\tta\) is square, if \(\det(A)\neq 0\) then \(\tta\) is invertible and \(\ttaxb\) has exactly one solution. We also stated in Key Idea 4.5.2 that if \(\det(A) = 0\text{,}\) then \(\tta\) is not invertible, and therefore \(\ttaxb\) has either no solution or infinitely many solutions. Our method of figuring out which of these cases applied was to form the augmented matrix \(\bbm \tta \amp \vb \ebm\text{,}\) put it into reduced row echelon form, and then interpret the results.
Cramer’s Rule requires that \(\det(A)\neq 0\text{,}\) so we are guaranteed exactly one solution. When \(\det(A)=0\text{,}\) Cramer’s Rule cannot tell us whether the system has no solution or infinitely many solutions for a given vector \(\vb\text{.}\) Cramer’s Rule applies only when exactly one solution exists.
We end this section with a practical consideration. We have mentioned before that finding determinants is a computationally intensive operation. To solve a linear system with 3 equations and 3 unknowns, we need to compute 4 determinants. Just think: with 10 equations and 10 unknowns, we’d need to compute 11 really hard determinants of \(10\times 10\) matrices! That is a lot of work!
The upshot is that Cramer’s Rule is a poor choice for solving numerical linear systems. It is simply not done in practice; it is hard to beat Gaussian elimination.
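To make the comparison concrete, here is a rough back-of-envelope count in Python. It assumes cofactor expansion of an \(n\times n\) determinant costs about \(n!\) multiplications and Gaussian elimination about \(n^3/3\text{;}\) both are crude estimates, but the gap they reveal is real.

```python
from math import factorial

def cramer_cost(n):
    # n + 1 determinants of size n x n, each roughly n! multiplications
    # when computed by cofactor expansion (a crude estimate)
    return (n + 1) * factorial(n)

def elimination_cost(n):
    # Gaussian elimination takes on the order of n**3 / 3 multiplications
    return n ** 3 // 3

for n in (3, 10):
    print(n, cramer_cost(n), elimination_cost(n))
# 3 24 9
# 10 39916800 333
```

For 10 equations in 10 unknowns, cofactor-based Cramer's Rule is already about 100,000 times more work than elimination.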
So why include it? Because its truth is amazing. The determinant is a very strange operation; it produces a number in a very odd way. It should seem incredible to the reader that by manipulating determinants in a particular way, we can solve linear systems.

Subsection 6.5.2 The Adjugate Formula

Recall that Theorem 4.4.10 in Section 4.4 gave us a “shortcut” for computing the inverse of a \(2\times 2\) matrix \(A=\bbm a\amp b\\c\amp d\ebm\text{:}\) as long as \(\det(A)\neq 0\text{,}\) we have
\begin{equation*} A^{-1} = \frac{1}{\det(A)}\bbm d \amp -b\\-c \amp a\ebm\text{.} \end{equation*}
This result can be easily verified by checking that \(AA^{-1}=I_2\) as required. The reader may have wondered if there is a similar formula for \(A^{-1}\) for a general \(n\times n\) matrix \(A\text{,}\) and whether or not such a formula would still constitute a “shortcut”. The results here are mixed. Yes, there’s a formula, and we will present it shortly. However, as with Cramer’s rule, it is not a shortcut. The reasons are the same as those we just mentioned for Cramer’s rule: as long as we’re dealing with a matrix whose entries are numbers, computing the inverse using row operations is vastly more efficient.
We begin with a definition.

Definition 6.5.4. The adjugate of a matrix.

Let \(A\) be an \(n\times n\) matrix.
  • The matrix of cofactors of \(A\) is the \(n\times n\) matrix
    \begin{equation*} \operatorname{cof}(A) = [C_{ij}] \end{equation*}
    whose \((i,j)\)-entry is given by the \((i,j)\)-cofactor of \(A\text{.}\)
  • The adjugate of \(A\) is the \(n\times n\) matrix
    \begin{equation*} \operatorname{adj}(A) = (\operatorname{cof}(A))^T = [C_{ij}]^T\text{.} \end{equation*}
Thus to obtain the matrix of cofactors for \(A\text{,}\) we replace each entry of \(A\) by the corresponding cofactor. Taking the transpose of this matrix produces the adjugate of \(A\text{.}\)
Why do we care about the adjugate matrix? Consider the product \(A\cdot \operatorname{adj}(A)\text{:}\)
\begin{equation*} A\cdot \operatorname{adj}(A) = \bbm a_{11} \amp a_{12} \amp \cdots \amp a_{1n}\\ a_{21} \amp a_{22} \amp \cdots \amp a_{2n}\\ \vdots \amp \vdots \amp \ddots \amp \vdots\\ a_{n1} \amp a_{n2} \amp \cdots \amp a_{nn}\ebm \bbm C_{11} \amp C_{21} \amp \cdots \amp C_{n1}\\ C_{12} \amp C_{22} \amp \cdots \amp C_{n2}\\ \vdots \amp \vdots \amp \ddots \amp \vdots\\ C_{1n} \amp C_{2n} \amp \cdots \amp C_{nn}\ebm\text{.} \end{equation*}
Notice that the indices for \(\operatorname{adj}(A)\) are reversed, since we took the transpose of the cofactor matrix. What is the \((i,j)\) entry of this product? Consider first the case where \(i=j\text{.}\) We find that the \((i,i)\)-entry is
\begin{equation*} a_{i1}C_{i1}+a_{i2}C_{i2}+\cdots + a_{in}C_{in}\text{.} \end{equation*}
But this is just the cofactor expansion of \(\det(A)\) along the \(i^{\text{th}}\) row! Thus, the \((i,i)\) entry of \(A\cdot \operatorname{adj}(A)\) is simply \(\det(A)\text{.}\) This tells us what the diagonal is. What about the off-diagonal entries?
When \(i\neq j\text{,}\) we have the \((i,j)\)-entry
\begin{equation*} a_{i1}C_{j1}+a_{i2}C_{j2}+\cdots + a_{in}C_{jn}\text{.} \end{equation*}
This is no longer a cofactor expansion for the determinant of \(A\text{,}\) since we’re taking entries from one row of \(A\text{,}\) and cofactors from another. This is, however, a cofactor expansion for the determinant of the matrix \(B\) that we obtain if we replace Row \(j\) of \(A\) with another copy of Row \(i\text{.}\) (Take a moment to think about why this is true.) But this means that the matrix \(B\) has two identical rows, and using Theorem 6.4.6, we can see that we must have \(\det(B)=0\text{.}\) This means that all of the off-diagonal entries of our product are zero! We have
\begin{equation*} A\cdot \operatorname{adj}(A) = \bbm \det(A) \amp 0 \amp \cdots \amp 0\\ 0\amp \det(A) \amp \cdots \amp 0\\ \vdots \amp \vdots \amp \ddots \amp \vdots\\ 0\amp 0\amp \cdots \amp \det(A)\ebm = \det(A)I_n\text{.} \end{equation*}
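We can check this identity numerically. Here is a short Python sketch, using the matrix from Example 6.5.2 (for which \(\det(A)=49\)); the cofactor and adjugate routines follow Definition 6.5.4 directly.

```python
def det(M):
    """Determinant via cofactor expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def cofactor(M, i, j):
    # (i, j)-cofactor: signed determinant of M with row i and column j deleted
    minor = [row[:j] + row[j + 1:] for k, row in enumerate(M) if k != i]
    return (-1) ** (i + j) * det(minor)

def adjugate(M):
    # transpose of the matrix of cofactors: adj(M)[i][j] = C_{ji}
    n = len(M)
    return [[cofactor(M, j, i) for j in range(n)] for i in range(n)]

A = [[1, 5, -3], [1, 4, 2], [2, -1, 0]]
adjA = adjugate(A)
product = [[sum(A[i][k] * adjA[k][j] for k in range(3)) for j in range(3)]
           for i in range(3)]
print(product)  # [[49, 0, 0], [0, 49, 0], [0, 0, 49]] = det(A) * I_3
```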
Now, we know that \(A\) is invertible if and only if \(\det(A)\neq 0\text{,}\) and as long as \(\det(A)\neq 0\text{,}\) we can multiply both sides of the above equation by \(\dfrac{1}{\det(A)}\text{.}\) With a bit of rearranging, we find
\begin{equation*} A\cdot \left(\frac{1}{\det(A)}\operatorname{adj}(A)\right) = I_n\text{.} \end{equation*}
But we know that if we can find any matrix \(B\) such that \(AB=I_n\text{,}\) then \(B\) is necessarily the inverse of \(A\text{.}\) We have established the following theorem.
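Theorem 6.5.5. The adjugate formula.

Let \(A\) be an \(n\times n\) matrix with \(\det(A)\neq 0\text{.}\) Then \(A\) is invertible, and
\begin{equation*} A^{-1} = \frac{1}{\det(A)}\operatorname{adj}(A)\text{.} \end{equation*}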
Let us repeat our words of caution from the beginning of this discussion. Just because we have a formula for the inverse does not mean we need to use it! Consider the case of a \(5\times 5\) matrix (remember that this is a relatively small matrix by practical standards). Would you want to use Theorem 6.5.5 to compute the inverse? What would this require? Well, we’d need to compute \(\det(A)\text{,}\) since that appears in the formula, so there’s already a \(5\times 5\) determinant to deal with. But don’t forget what \(\operatorname{adj}(A)\) is: a matrix of cofactors. In this case, \(\operatorname{adj}(A)\) would consist of twenty-five different \(4\times 4\) determinants that would all need to be computed. What do you think would be less work? Computing one \(5\times 5\) determinant and twenty-five \(4\times 4\) determinants, or using row operations? Now consider doing this for \(10\times 10\text{,}\) or \(100\times 100\) matrices. Sometimes the method we learned first is also the best!
Let’s do one example to see that even for a \(3\times 3\) matrix, there’s a fair amount of work involved.

Example 6.5.6. Using the adjugate formula.

Use Theorem 6.5.5 to compute the inverse of the matrix
\begin{equation*} A = \bbm 2 \amp -1 \amp 3\\ 4 \amp 0 \amp -2\\ 1 \amp 5\amp -3\ebm\text{.} \end{equation*}
Solution.
We begin by computing \(\det(A)\text{,}\) to make sure that the inverse exists. Using the \(-1\) in the first row to create a zero in the \((3,2)\) spot below it, we have
\begin{align*} \det(A) \amp = \begin{vmatrix} 2 \amp -1 \amp 3\\ 4 \amp 0 \amp -2\\ 1 \amp 5 \amp -3 \end{vmatrix} = \begin{vmatrix} 2 \amp -1 \amp 3\\ 4 \amp 0 \amp -2\\ 11 \amp 0 \amp 12 \end{vmatrix}\\ \amp = (-1)(-1)^{1+2}\begin{vmatrix} 4 \amp -2\\ 11 \amp 12 \end{vmatrix} = 1(4(12)-11(-2)) = 70\text{.} \end{align*}
Next, we set about computing all nine cofactors of \(A\text{.}\) We have
\begin{align*} C_{11} \amp = (+1)\begin{vmatrix}0\amp -2\\5\amp -3\end{vmatrix} = 10\\ C_{12} \amp = (-1)\begin{vmatrix}4\amp -2\\1\amp -3\end{vmatrix} = 10\\ C_{13} \amp = (+1)\begin{vmatrix}4\amp 0\\1\amp 5\end{vmatrix} = 20\\ C_{21} \amp = (-1)\begin{vmatrix}-1\amp 3\\5\amp -3\end{vmatrix}=12\\ C_{22} \amp = (+1)\begin{vmatrix}2\amp 3\\1\amp -3\end{vmatrix} = -9\\ C_{23} \amp = (-1)\begin{vmatrix}2\amp -1\\1\amp 5\end{vmatrix} = -11\\ C_{31} \amp = (+1)\begin{vmatrix}-1\amp 3\\0\amp -2\end{vmatrix}=2\\ C_{32} \amp = (-1)\begin{vmatrix}2\amp 3\\4\amp -2\end{vmatrix}=16\\ C_{33} \amp = (+1)\begin{vmatrix}2\amp -1\\4\amp 0\end{vmatrix}=4\text{.} \end{align*}
Thus, we obtain
\begin{equation*} \operatorname{adj}(A)=\bbm 10 \amp 10 \amp 20\\12\amp -9\amp -11\\2\amp 16\amp 4\ebm^T = \bbm 10\amp 12\amp 2\\10\amp -9\amp 16\\20\amp -11\amp 4\ebm\text{.} \end{equation*}
If we haven’t made any computational errors (and there’s a good chance that we have!) then Theorem 6.5.5 tells us that
\begin{equation*} A^{-1} = \frac{1}{\det(A)}\operatorname{adj}(A) = \frac{1}{70}\bbm 10 \amp 12 \amp 2\\10\amp -9\amp 16\\20\amp -11\amp 4\ebm = \bbm 1/7\amp 6/35\amp 1/35\\1/7\amp -9/70\amp 8/35\\2/7\amp -11/70\amp 2/35\ebm\text{.} \end{equation*}
The reader should verify that \(AA^{-1}=I\) to make sure that we haven’t made any mistakes. (The author made two mistakes that were caught doing this verification!)
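That verification is exactly the sort of arithmetic a computer does well. The following Python sketch recomputes the inverse via the adjugate formula with exact fractions and checks that \(AA^{-1}=I\text{.}\)

```python
from fractions import Fraction

def det(M):
    """Determinant via cofactor expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def adjugate(M):
    n = len(M)
    def cof(i, j):
        minor = [row[:j] + row[j + 1:] for k, row in enumerate(M) if k != i]
        return (-1) ** (i + j) * det(minor)
    # entry (i, j) of adj(M) is the (j, i)-cofactor (transposed cofactors)
    return [[cof(j, i) for j in range(n)] for i in range(n)]

A = [[2, -1, 3], [4, 0, -2], [1, 5, -3]]
d = det(A)  # 70, matching the example
Ainv = [[Fraction(entry, d) for entry in row] for row in adjugate(A)]

identity = [[sum(A[i][k] * Ainv[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]
print(d, identity == [[1, 0, 0], [0, 1, 0], [0, 0, 1]])  # 70 True
```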
In the next chapter we’ll see another use for the determinant. Meanwhile, try to develop a deeper appreciation of math: odd, complicated things that seem completely unrelated often are intricately tied together. Mathematicians see these connections and describe them as “beautiful.”

Exercises 6.5.3 Exercises

Exercise Group.

A matrix \(\tta\) and vector \(\vb\) are given.
  1. Give \(\det(\tta)\) and \(\det(\tta_i(\vb))\) for all \(i\text{.}\)
  2. Use Cramer’s Rule to solve \(\ttaxb\text{.}\) If Cramer’s Rule cannot be used to find the solution, then state whether or not a solution exists.
1.
\(\tta = \bbm 3 \amp 0 \amp -3\\ 5\amp 4\amp 4\\ 5\amp 5\amp -4\ebm\text{,}\) \(\vb = \bbm 24\\ 0\\ 31\ebm\text{.}\)
2.
\(\tta = \bbm 4 \amp -4 \amp 0\\ 5\amp 1\amp -1\\ 3\amp -1\amp 2\ebm\text{,}\) \(\vb = \bbm 16\\ 22\\ 8\ebm\text{.}\)
3.
\(\tta = \bbm 1 \amp 0 \amp -10\\ 4\amp -3\amp -10\\ -9\amp 6\amp -2\ebm\text{,}\) \(\vb = \bbm -40\\ -94 \\ 132\ebm\text{.}\)
4.
\(\tta = \bbm -6 \amp -7 \amp -7\\ 5\amp 4\amp 1\\ 5\amp 4\amp 8\ebm\text{,}\) \(\vb = \bbm 58\\ -35\\ -49\ebm\text{.}\)
5.
\(\tta = \bbm 9 \amp 5\\ -4 \amp -7\ebm\text{,}\) \(\vb = \bbm -45\\ 20\ebm\text{.}\)
6.
\(\tta = \bbm 0 \amp -6\\ 9 \amp -10\ebm\text{,}\) \(\vb = \bbm 6\\ -17\ebm\text{.}\)
7.
\(\tta = \bbm 2 \amp 10\\ -1 \amp 3\ebm\text{,}\) \(\vb = \bbm 42\\ 19\ebm\text{.}\)
8.
\(\tta = \bbm 7 \amp -7\\ -7 \amp 9\ebm\text{,}\) \(\vb = \bbm 28\\ -26\ebm\text{.}\)
9.
\(\tta = \bbm -8 \amp 16\\ 10 \amp -20\ebm\text{,}\) \(\vb = \bbm -48\\ 60\ebm\text{.}\)
10.
\(\tta = \bbm 7 \amp 14\\ -2 \amp -4\ebm\text{,}\) \(\vb = \bbm -1\\ 4\ebm\text{.}\)
11.
\(\tta = \bbm 4 \amp 9 \amp 3\\ -5\amp -2\amp -13\\ -1\amp 10\amp -13\ebm\text{,}\) \(\vb = \bbm -28\\ 35\\ 7\ebm\text{.}\)
12.
\(\tta = \bbm 7 \amp -4 \amp 25\\ -2\amp 1\amp -7\\ 9\amp -7\amp 34\ebm\text{,}\) \(\vb = \bbm -1\\ -3\\ 5\ebm\text{.}\)

Exercise Group.

Use Theorem 6.5.5 to compute the inverse of \(A\text{,}\) if it exists.
13.
\(A = \bbm 2 \amp -1 \amp 4\\3\amp -5\amp 7\\0\amp 3\amp -2\ebm\)
14.
\(A = \bbm 3 \amp 2 \amp -5\\1\amp 0\amp -1\\7\amp 4\amp 2\ebm\)
15.
\(A = \bbm 2 \amp -4 \amp 7\\ -3\amp 1\amp 5\\5\amp -5\amp 2\ebm\)
16.
\(A = \bbm 5 \amp 2 \amp 0\\0\amp -2\amp 3\\5\amp -2\amp 6\ebm\)
17.
\(A = \bbm 1 \amp -4 \amp 3\amp 2\\5\amp 0\amp -3\amp 6\\2\amp -3\amp 1\amp 4\\7\amp 2\amp -5\amp 1\ebm\)
18.
\(A = \bbm 3 \amp 1 \amp 0\amp -1\\6\amp 4\amp 2\amp 0\\-3\amp -1\amp -5\amp 2\\1\amp 0\amp -1\amp 4\ebm\)