The key to finding matrix inverses is the following theorem.
Theorem: Let A be a square matrix.
a) A is invertible if and only if its reduced row-echelon form is the identity matrix I.
b) If A is invertible, then A–1 is the result of applying to the identity matrix the same sequence of row operations that reduces A to I.
Proof: Suppose A has reduced row-echelon form R. Then for some collection of elementary matrices E1, E2, ..., Ek we have Ek...E2E1A = R. Set C = Ek...E2E1; then C is invertible (it is a product of invertible matrices) and CA = R.
Proof of a): We have to prove that the statements "R = I" and "A is invertible" imply each other.
Assume that R = I; then CA = I. Since C is invertible, we can multiply both sides on the left by C–1 to get A = C–1, so AC = I as well. It follows that A is invertible (its inverse is C).
Now assume that A is invertible; then the product CA = R is invertible. It follows that R cannot have a row of zeroes: if it did, the product RR–1 = I would also have a row of zeroes (each row of RR–1 is the corresponding row of R times R–1), which is impossible. A square reduced row-echelon matrix with no rows of zeroes has a leading 1 in every row and every column, so R = I.
Proof of b): Since A is invertible, its reduced row-echelon form is I, so CA = I and A–1 = C = Ek...E2E1.
Translate that last equation back into a statement about row operations: it says that A–1 is the result of applying to the identity matrix the same row operations that reduced A to the identity matrix.
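As a quick numerical check, here is a minimal sketch in Python with NumPy (the particular 2×2 matrix and the row operations chosen for it are illustrative assumptions, not taken from the text): the product of the elementary matrices that reduce A to I is exactly A–1.

```python
import numpy as np

# Illustrative 2x2 example: each Ei is the elementary matrix of one of the
# row operations that reduces this particular A to I.
A  = np.array([[1.0, 2.0],
               [3.0, 4.0]])
E1 = np.array([[ 1.0, 0.0],   # R2 -> R2 - 3*R1
               [-3.0, 1.0]])
E2 = np.array([[1.0,  0.0],   # R2 -> -(1/2)*R2
               [0.0, -0.5]])
E3 = np.array([[1.0, -2.0],   # R1 -> R1 - 2*R2
               [0.0,  1.0]])

C = E3 @ E2 @ E1              # C = Ek...E2E1

print(np.allclose(C @ A, np.eye(2)))      # True: these operations reduce A to I
print(np.allclose(C, np.linalg.inv(A)))   # True: the product C equals A^-1
```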
The second part of this theorem tells you how to find A–1: first reduce A to the identity matrix, and then apply those same row operations to the identity matrix; the result is A–1. Since you use the same row operations in both steps, it is more efficient to do both reductions at once.
To find the inverse of a matrix A:
1. Include an extra identity matrix on the right of A, forming the augmented matrix [A | I].
2. Row reduce this augmented matrix until the left-hand part becomes I.
3. The right-hand part is then A–1; that is, the reduction carries [A | I] to [I | A–1].
If you cannot reduce the left-hand part of this matrix to I (if you get a row of zeroes), A has no inverse. |
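The procedure can be sketched directly in code. The function below is a rough Python illustration (the name invert and the tolerance tol are my own choices, not from the text): it augments A with I, row reduces, and returns the right-hand half as A–1, or None when the left-hand part cannot be reduced to I.

```python
def invert(A, tol=1e-12):
    """Gauss-Jordan sketch: reduce [A | I] until the left half becomes I.

    A is a list of n rows of n numbers.  Returns A^-1 as a list of rows,
    or None if a pivot cannot be found, i.e. A is not invertible.
    Illustration only, not production code.
    """
    n = len(A)
    # Include an extra identity matrix on the right: build [A | I].
    M = [list(map(float, row)) + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]

    for col in range(n):
        # Pick the largest available pivot in this column (partial pivoting).
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        if abs(M[pivot][col]) < tol:
            return None                      # row of zeroes on the left: no inverse
        M[col], M[pivot] = M[pivot], M[col]  # swap the pivot row into place

        p = M[col][col]
        M[col] = [x / p for x in M[col]]     # scale so the pivot entry is 1

        for r in range(n):                   # clear the column above and below
            if r != col:
                factor = M[r][col]
                M[r] = [x - factor * y for x, y in zip(M[r], M[col])]

    # [A | I] has been reduced to [I | A^-1]; read off the right-hand half.
    return [row[n:] for row in M]


print(invert([[1, 2], [3, 4]]))   # approximately [[-2.0, 1.0], [1.5, -0.5]]
print(invert([[1, 2], [2, 4]]))   # None: the left-hand part gets a row of zeroes
```

The two calls at the end mirror the discussion above: the first matrix reduces to I on the left and yields its inverse on the right, while the second produces a row of zeroes on the left and therefore has no inverse.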