For any square matrix A, the inverse of A, if it exists, is the matrix B which satisfies AB = I and BA = I. If A has an inverse, it is said to be invertible, or non-singular, and we denote that inverse B by A⁻¹, so AA⁻¹ = A⁻¹A = I.
If A does not have an inverse, it is said to be non-invertible, or singular.
For example, let us try to find an inverse for the matrix

    A = [ 1  2 ]
        [ 3  5 ] .
Suppose our candidate for an inverse is
    B = [ a  b ]
        [ c  d ] .
Set up the equation AB = I, i.e.
    [ 1  2 ] [ a  b ]   [ 1  0 ]
    [ 3  5 ] [ c  d ] = [ 0  1 ] .
Multiply out and set the corresponding entries on each side equal to get the four equations
a + 2c = 1 b + 2d = 0
3a + 5c = 0 3b + 5d = 1 .
Notice that these equations come in two pairs; one in the variables a and c and the other in the variables b and d. Each pair is a linear system; the augmented matrices of the systems are
    [ 1  2 | 1 ]       [ 1  2 | 0 ]
    [ 3  5 | 0 ]  and  [ 3  5 | 1 ] .
To find a, c and b, d, we solve these systems by reducing their matrices to reduced row-echelon form.
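As a numerical aside (NumPy is my addition here, not part of the text), the two systems can be solved one at a time; note that they share the same coefficient matrix:

```python
# Solve the two 2x2 systems from the text with NumPy (illustrative only).
import numpy as np

coeffs = np.array([[1.0, 2.0],   # coefficient matrix shared by both systems
                   [3.0, 5.0]])

a, c = np.linalg.solve(coeffs, [1.0, 0.0])   # system for a and c
b, d = np.linalg.solve(coeffs, [0.0, 1.0])   # system for b and d

print(a, c, b, d)   # approximately -5.0  3.0  2.0  -1.0
```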
Here comes the key idea: since each system has the same coefficients to the left of the vertical line, the same row operations will reduce each augmented matrix to reduced row-echelon form. Instead of doing two separate calculations, then, we might as well do both at once by combining the augmented matrices into a single, "super-augmented" matrix:
    [ 1  2 | 1  0 ]
    [ 3  5 | 0  1 ] .
Using the standard Gauss-Jordan algorithm, this matrix reduces to the matrix

    [ 1  0 | -5   2 ]
    [ 0  1 |  3  -1 ] ,

which we can split back into the two augmented matrices

    [ 1  0 | -5 ]       [ 1  0 |  2 ]
    [ 0  1 |  3 ]  and  [ 0  1 | -1 ] .
We thus have the solutions

    a = -5    b = 2
    c = 3     d = -1 ,

so

    B = [ -5   2 ]
        [  3  -1 ] .
Notice that B is the right-hand side of the reduced super-augmented matrix
    [ 1  0 | -5   2 ]
    [ 0  1 |  3  -1 ] .
We could condense the work involved in finding B by going directly to the super-augmented matrix, reducing it to reduced row-echelon form and reading B from the right-hand side.
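The condensed procedure can be sketched in code. Below is a minimal pure-Python Gauss-Jordan reduction of the super-augmented matrix [A | I]; the helper name invert and the partial-pivoting step are my choices, and a production routine would also detect and report singular input:

```python
# Illustrative sketch of the super-augmented method: reduce [A | I] to
# reduced row-echelon form; the right half is then A^-1.
# (Assumes A is invertible; no singularity check is made.)

def invert(A):
    n = len(A)
    # Build the super-augmented matrix [A | I].
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # Choose the largest pivot in this column and swap it into place.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        # Scale the pivot row so the pivot entry becomes 1.
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        # Eliminate the pivot column from every other row.
        for r in range(n):
            if r != col:
                factor = M[r][col]
                M[r] = [x - factor * y for x, y in zip(M[r], M[col])]
    # Split off the right half: that is A^-1.
    return [row[n:] for row in M]

A = [[1.0, 2.0], [3.0, 5.0]]
print(invert(A))   # approximately [[-5.0, 2.0], [3.0, -1.0]]
```

Running it on the 2x2 example above reproduces the B found by hand, up to floating-point rounding.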
If A is invertible, then so is A⁻¹, and (A⁻¹)⁻¹ = A.

Proof: The equations that say that A⁻¹ is the inverse of A are

    AA⁻¹ = I and A⁻¹A = I .

These same equations say that the inverse of A⁻¹ is A, i.e.

    (A⁻¹)⁻¹ = A .
If A and B are invertible matrices of the same size, then AB is invertible and (AB)⁻¹ = B⁻¹A⁻¹.

Proof: Set C = B⁻¹A⁻¹. Then

    C(AB) = (B⁻¹A⁻¹)(AB) = B⁻¹(A⁻¹A)B = B⁻¹IB = B⁻¹B = I

and

    (AB)C = (AB)(B⁻¹A⁻¹) = A(BB⁻¹)A⁻¹ = AIA⁻¹ = AA⁻¹ = I .

Since C(AB) = I and (AB)C = I, the matrix AB is invertible and has inverse C, i.e. (AB)⁻¹ = B⁻¹A⁻¹.
For example, for invertible square matrices A, B, C and D of the same size, (ABCD)⁻¹ = D⁻¹C⁻¹B⁻¹A⁻¹.
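Here is a quick numerical spot-check of the product rule (the specific matrices are arbitrary invertible examples of mine, not from the text):

```python
# Check (AB)^-1 = B^-1 A^-1 numerically with NumPy.
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 5.0]])
B = np.array([[2.0, 1.0], [1.0, 1.0]])

lhs = np.linalg.inv(A @ B)                 # (AB)^-1
rhs = np.linalg.inv(B) @ np.linalg.inv(A)  # B^-1 A^-1, note the reversed order
print(np.allclose(lhs, rhs))   # True
```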
In particular, if you apply the rule to n copies of A, you get a rule for inverses of powers of a square matrix:

    (Aⁿ)⁻¹ = (A⁻¹)ⁿ for any n = 1, 2, ... ,

i.e. the inverse of a power of an invertible matrix equals the same power of its inverse.
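And a similar check of the power rule, here for n = 3 using NumPy's matrix_power (again purely illustrative):

```python
# Check (A^n)^-1 = (A^-1)^n numerically for n = 3.
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 5.0]])
n = 3

lhs = np.linalg.inv(np.linalg.matrix_power(A, n))  # (A^n)^-1
rhs = np.linalg.matrix_power(np.linalg.inv(A), n)  # (A^-1)^n
print(np.allclose(lhs, rhs))   # True
```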
(Recall that the transpose of a matrix A is the matrix Aᵀ whose columns are the rows of A and whose rows are the columns of A.)

If A is invertible, then so is Aᵀ, and (Aᵀ)⁻¹ = (A⁻¹)ᵀ.

Proof: Set C = (A⁻¹)ᵀ. Then

    CAᵀ = (A⁻¹)ᵀAᵀ = (AA⁻¹)ᵀ = Iᵀ = I

and

    AᵀC = Aᵀ(A⁻¹)ᵀ = (A⁻¹A)ᵀ = Iᵀ = I ,

so since CAᵀ = I and AᵀC = I, C must be the inverse of Aᵀ, i.e.

    (Aᵀ)⁻¹ = (A⁻¹)ᵀ .
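A numerical spot-check of the transpose rule, using the 2x2 matrix from the worked example:

```python
# Check (A^T)^-1 = (A^-1)^T numerically with NumPy.
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 5.0]])

lhs = np.linalg.inv(A.T)   # invert the transpose
rhs = np.linalg.inv(A).T   # transpose the inverse
print(np.allclose(lhs, rhs))   # True
```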
Finally, here is why an inverse, when it exists, is unique. Suppose B and C are both inverses of A, i.e.

    AB = I and BA = I

and

    AC = I and CA = I .

Then

    B(AC) = BI = B and (BA)C = IC = C .

Since matrix multiplication is associative, B(AC) = (BA)C, so B = C; there can't be two different candidates for the inverse of A. We can now talk about "the" inverse of a matrix instead of "an" inverse (if there is one, that is).