In the previous section we learned how to compute the determinant. In this section we learn some of the properties of the determinant, and this will allow us to compute determinants more easily. In the next section we will see one application of determinants.
We alluded to this fact way back after Example 6.3.6. We had just learned what cofactor expansion was and we practised along the second row and down the third column. Later, we found the determinant of this matrix by computing the cofactor expansion along the first row. In all three cases, we got the number 0. This wasn’t a coincidence. The above theorem states that all three expansions were actually computing the determinant.
How does this help us? By giving us freedom to choose any row or column to use for the expansion, we can choose a row or column that looks “most appealing.” This usually means “it has lots of zeros.” We demonstrate this principle below.
Our first reaction may well be “Oh no! Not another \(4\times 4\) determinant!” However, we can use cofactor expansion along any row or column that we choose. The third column looks great; it has lots of zeros in it. The cofactor expansion along this column is
The wonderful thing here is that three of our cofactors are multiplied by 0. We won’t bother computing them since they will not contribute to the determinant. Thus
Wow. That was a lot simpler than computing all that we did in Example 6.3.10. Of course, in that example, we didn’t really have any shortcuts that we could have employed. Our next example involves a \(5\times 5\) determinant. At first, this looks like trouble, until we realize that the matrix is triangular. As we’ll see, this makes our job much easier.
Since we can expand along any row or column, things are not as bad as they might at first seem. In fact, this problem is very easy. Which row or column should we choose for the expansion? There are two obvious choices: the first column or the last row. Both have 4 zeros in them. We choose the first column. We omit most of the cofactor expansion, since most of it is just 0:
Similarly, this determinant is not bad to compute; we again choose to use cofactor expansion along the first column. Note: technically, this cofactor expansion is \(6\cdot(-1)^{1+1}A_{1,1}\text{;}\) we are going to drop the \((-1)^{1+1}\) factor from here on out in this example (it will show up a lot...).
We see that the final determinant is the product of the diagonal entries. This works for any triangular matrix (and since diagonal matrices are triangular, it works for diagonal matrices as well). This is an important enough idea that we’ll put it into a box.
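For instance, with an upper triangular matrix of our own choosing (not one from the examples above), repeated cofactor expansion down the first column gives
\begin{equation*}
\det\bbm 2\amp 5\amp -1\\0\amp 3\amp 4\\0\amp 0\amp -7\ebm = 2\cdot\det\bbm 3\amp 4\\0\amp -7\ebm = 2\cdot 3\cdot(-7) = -42\text{,}
\end{equation*}
which is exactly the product of the diagonal entries.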
It is now again time to start thinking like a mathematician. Remember, mathematicians see something new and often ask “How does this relate to things I already know?” So now we ask, “If we change a matrix in some way, how is its determinant changed?”
The standard way that we change matrices is through elementary row operations. If we perform an elementary row operation on a matrix, how will the determinant of the new matrix compare to the determinant of the original matrix?
We’ve seen in the above example that there seems to be a relationship between the determinants of matrices “before and after” being changed by elementary row operations. Certainly, one example isn’t enough to base a theory on, and we have not proved anything yet. Regardless, the following theorem is true.
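To get a feel for the statement, here is a quick check with a small matrix of our own choosing. Starting from \(\det\bbm 2\amp 1\\4\amp 3\ebm = 2\text{,}\) swapping the two rows, scaling the first row by 5, and adding \(-2R_1\) to \(R_2\) give, respectively,
\begin{equation*}
\det\bbm 4\amp 3\\2\amp 1\ebm = -2\text{,}\qquad
\det\bbm 10\amp 5\\4\amp 3\ebm = 10\text{,}\qquad
\det\bbm 2\amp 1\\0\amp 1\ebm = 2\text{:}
\end{equation*}
a row swap changes the sign, scaling a row by 5 multiplies the determinant by 5, and adding a multiple of one row to another leaves the determinant unchanged.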
Computing \(\det(A)\) by cofactor expansion down the first column or along the second row seems like the best choice, utilizing the one zero in the matrix. We can quickly confirm that \(\det(A) = 1\text{.}\)
To compute \(\det(B)\text{,}\) notice that the rows of \(A\) were rearranged to form \(B\text{.}\) There are different ways to describe what happened; one way is to say that \(R_1\leftrightarrow R_2\) followed by \(R_1\leftrightarrow R_3\) produces \(B\) from \(A\text{.}\) Since there were two row swaps, \(\det(B) = (-1)(-1)\det(A) = \det(A) = 1\text{.}\)
It takes a little thought, but we can form \(D\) from \(A\) by the operation \(-3R_2+R_1\rightarrow R_1\text{.}\) This type of elementary row operation does not change determinants, so \(\det(D) = \det(A)\text{.}\)
It doesn’t take too much work to compute \(\det(B) = 24\text{.}\) In looking at our list of elementary row operations, we see that only the first three have an effect on the determinant. Therefore
In the previous example, we may have been tempted to “rebuild” \(A\) using the elementary row operations and then compute the determinant. This can be done, but in general it is a bad idea; it takes too much work and it is too easy to make a mistake.
Let’s continue to think like mathematicians; mathematicians tend to remember “problems” they’ve encountered in the past, and when they learn something new, in the backs of their minds they try to apply their new knowledge to solve their old problems. (This is why mathematicians rarely smile: they are remembering their problems.)
What “problem” did we recently uncover? We stated in the last chapter that even computers could not compute the determinant of large matrices with cofactor expansion. How then can we compute the determinant of large matrices?
We just learned two interesting and useful facts about matrix determinants. First, the determinant of a triangular matrix is easy to compute: just multiply the diagonal entries. Second, we know that given any square matrix, we can use elementary row operations to put the matrix in triangular form. We can then find the determinant of the new matrix (which is easy), and adjust that number by recalling what elementary operations we performed.
In putting \(A\) into a triangular form, we need not worry about getting leading 1s, but it does tend to make our life easier as we work out a problem by hand. So let’s scale the first row by \(1/2\text{:}\)
Let’s name this last matrix \(B\text{.}\) The determinant of \(B\) is easy to compute as it is triangular; \(\det(B) = -16\text{.}\) We can use this to find \(\det(A)\text{.}\)
The first operation multiplied a row of \(A\) by \(\frac12\text{.}\) This means that the resulting matrix had a determinant that was \(\frac12\) the determinant of \(A\text{.}\)
The next two operations did not affect the determinant at all. The last operation, the row swap, changed the sign. Combining these effects, we know that
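\begin{equation*}
\det(B) = \frac12\cdot(-1)\cdot\det(A)\text{,}\quad\text{so}\quad \det(A) = -2\det(B) = -2\cdot(-16) = 32\text{.}
\end{equation*}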
In practice, we don’t need to keep track of operations where we add multiples of one row to another; they simply do not affect the determinant. Also, in practice, these steps are carried out by a computer, and computers don’t care about leading 1s. Therefore, row scaling operations are rarely used. The only things to keep track of are row swaps, and even then all we care about is the number of row swaps. An odd number of row swaps means that the original determinant has the opposite sign of the determinant of the triangular form matrix; an even number of row swaps means they have the same determinant.
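For example, with a small matrix of our own choosing that needs a single row swap (and no scaling) to reach triangular form, we get
\begin{equation*}
\det\bbm 0\amp 1\amp 2\\1\amp 3\amp 0\\0\amp 0\amp 4\ebm
= -\det\bbm 1\amp 3\amp 0\\0\amp 1\amp 2\\0\amp 0\amp 4\ebm
= -(1\cdot 1\cdot 4) = -4\text{.}
\end{equation*}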
Getting all the way to triangular form isn’t really necessary. Use row operations of the above type to create as many zeros as possible in one of the columns, and then expand along that column.
To see how these principles work in practice, let’s repeat Example 6.4.9. This time we’ll focus on creating zeros, but we won’t worry about getting to triangular form. Since adding a multiple of one row to another doesn’t change the determinant, we can compute \(\det(A)\) with a string of equalities, as follows:
Of course, in this case we got lucky and ended up with two zeros in the first row after one row operation. However, had this not been the case, we would have simply done one more row operation (\(R_3+3R_2\to R_3\)) to create a second zero in the first column, and then done a cofactor expansion along that column.
For larger determinants, we can follow the same routine: create zeros in one column, expand along that column to get a smaller determinant, and repeat.
Let’s think some more like a mathematician. How does the determinant work with other matrix operations that we know? Specifically, how does the determinant interact with matrix addition, scalar multiplication, matrix multiplication, the transpose and the trace? We’ll again do an example to get an idea of what is going on, then give a theorem to state what is true.
\begin{equation*}
A = \bbm 1 \amp 2\\3 \amp 4\ebm \ \text{and} \ B = \bbm 2\amp 1\\ 3\amp 5\ebm\text{.}
\end{equation*}
Find the determinants of the matrices \(A\text{,}\) \(B\text{,}\) \(A-B\text{,}\) \(3A\text{,}\) \(AB\text{,}\) \(A^T\text{,}\) and \(A^{-1}\text{.}\) Can you find any connections between these values?
We can figure this one out; multiplying one row of \(A\) by 3 multiplies the determinant by 3; doing it again (and hence multiplying both rows by 3) multiplies the determinant by 3 once more. Therefore \(\det(3A) = 3\cdot3\cdot\det(A)\text{,}\) or \(3^2\cdot\det(A)\text{.}\)
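Indeed, using the matrix \(A\) given above, \(\det(A) = 1\cdot 4 - 2\cdot 3 = -2\text{,}\) and
\begin{equation*}
\det(3A) = \det\bbm 3\amp 6\\9\amp 12\ebm = 3\cdot 12 - 6\cdot 9 = -18 = 3^2\cdot(-2)\text{.}
\end{equation*}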
Obviously \(\det(A^T) = \det(A)\text{;}\) is this always going to be the case? If we think about it, we can see that the cofactor expansion along the first row of \(A\) will give us the same result as the cofactor expansion along the first column of \(A^T\text{.}\)
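We can verify this with the matrix \(A\) from above:
\begin{equation*}
\det\left(A^T\right) = \det\bbm 1\amp 3\\2\amp 4\ebm = 1\cdot 4 - 3\cdot 2 = -2 = \det(A)\text{.}
\end{equation*}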
To see that Theorem 6.4.11 is true, note that if \(A\) is not invertible, then the reduced row echelon form \(R\) of \(A\) must have a row of zeros. Performing a cofactor expansion along this row, we immediately see that \(\det(R)=0\text{.}\) Since \(R\) is obtained from \(A\) by a series of elementary row operations, we know from Theorem 6.4.6 that \(\det(A)\) is a multiple of \(\det(R)\text{,}\) and thus \(\det(A)=0\text{.}\)
It follows from Theorem 6.4.11 (using the logical principle known as the contrapositive) that if \(\det(A)\neq 0\text{,}\) we’re guaranteed that \(A\) is invertible.
At this point, we naturally should ask whether or not the converse to Theorem 6.4.11 is true as well: suppose we know \(\det(A)=0\text{.}\) Does that imply that \(A\) is not invertible? (Or equivalently, if we know \(A\) is invertible, does this imply that \(\det(A)\neq 0\text{?}\)) The answer is yes, but to see this, we first need a more general result.
Proving that \(\det(AB)=\det(A)\det(B)\) is most easily done using elementary matrices. (See Section 4.6.) Recall that multiplying a matrix on the left by an elementary matrix is the same as doing the corresponding row operation: if \(A\) is any \(3\times 3\) matrix, then \(EA\) can be obtained from \(A\) using the same row operation used to create \(E\text{.}\)
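Combining this fact with Theorem 6.4.6, one can check that
\begin{equation*}
\det(EB) = \det(E)\det(B)
\end{equation*}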
for any matrix \(B\) and elementary matrix \(E\text{.}\) The rest boils down to two cases: either \(\det(A)=0\text{,}\) in which case \(A\) is not invertible, so neither is \(AB\text{,}\) and thus \(\det(AB)=0=0\cdot\det(B)\text{;}\) or \(\det(A)\neq 0\text{.}\) In the latter case, \(A\) is invertible, and can be written as a product of elementary matrices. We can then prove that \(\det(AB)=\det(A)\det(B)\) by applying Theorem 6.4.6 repeatedly.
From Theorem 6.4.12, we see that \(\det(AB)=\det(A)\det(B)\) for any two \(n\times n\) matrices \(A\) and \(B\text{.}\) What does this tell us in the case of an invertible matrix? Recall that if \(A\) is invertible, then we can find the inverse matrix \(A^{-1}\) such that
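\begin{equation*}
AA^{-1} = A^{-1}A = I_n\text{.}
\end{equation*}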
Now, the identity matrix is triangular, and all of its diagonal entries are equal to 1, so we immediately see that \(\det(I_n) = 1\text{.}\) Thus, taking the determinant of both sides of the above equation, we have
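\begin{equation*}
\det\left(AA^{-1}\right) = \det(A)\det\left(A^{-1}\right) = \det(I_n) = 1\text{.}
\end{equation*}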
We have a product of two numbers equal to one, which tells us that neither of these numbers can be zero. (Otherwise, the product would be zero as well.) Thus, if \(A\) is invertible, it must be the case that \(\det(A)\neq 0\text{,}\) so a matrix \(A\) is invertible if and only if \(\det(A)\neq 0\text{.}\)
A matrix \(M\) and \(\det(M)\) are given. Matrices \(A\text{,}\) \(B\) and \(C\) are formed by performing operations on \(M\text{.}\) Determine the determinants of \(A\text{,}\) \(B\) and \(C\) using Theorem 6.4.6 and Theorem 6.4.12, and indicate the operations used to form \(A\text{,}\) \(B\) and \(C\text{.}\)