You've probably noticed by now that we haven't yet talked about dividing matrices. For numbers, you can divide by any nonzero number by multiplying by its reciprocal. For matrices, the situation is more complicated: the matrix that plays the role of a reciprocal is called an inverse matrix, and only certain matrices have one. At the very least, a matrix must be square to have an inverse, but even square matrices may fail to have one.
Suppose we have a square matrix A, and we want to define an inverse for A that corresponds to a reciprocal for numbers. What do we need? For numbers, the defining property of a reciprocal is that its product with the original number is 1. The matrix that corresponds to the number 1 is the identity matrix I, so a matrix times its inverse should equal the identity matrix. Think carefully about this multiplication: since matrix multiplication is not commutative, the product should be I whichever matrix you write first. So we want a matrix B such that AB = I and BA = I. But there's still no guarantee that such a B exists.
Here's the formal definition.

For any square matrix A, the inverse of A, if it exists, is the matrix B which satisfies AB = I and BA = I. If A has an inverse, it is said to be invertible, or non-singular, and we denote that inverse B by A⁻¹, so AA⁻¹ = A⁻¹A = I.

If A does not have an inverse, it is said to be non-invertible, or singular.

If a matrix does have an inverse, how can we tell, and how can we find that inverse? Let's try to find the inverse of the 2×2 matrix

    A = [ 1  2 ]
        [ 3  5 ] .

Suppose our candidate for an inverse is

    B = [ a  b ]
        [ c  d ] .

Set up the equation AB = I, i.e.

    [ 1  2 ][ a  b ]   [ 1  0 ]
    [ 3  5 ][ c  d ] = [ 0  1 ] .

Multiply out and set the corresponding entries on each side equal to get the four equations

  a + 2c = 1          b + 2d = 0
3a + 5c = 0        3b + 5d = 1 .

Notice that these equations come in two pairs: one in the variables a and c, the other in the variables b and d. Each pair is a linear system; the augmented matrices of the systems are

    [ 1  2 | 1 ]        [ 1  2 | 0 ]
    [ 3  5 | 0 ]  and   [ 3  5 | 1 ] .

To find a, c and b, d, we solve these systems by reducing their matrices to reduced row-echelon form.

Here comes the key idea: since each system has the same coefficients to the left of the vertical line, the same row operations will reduce each augmented matrix to reduced row-echelon form. Instead of doing two separate calculations, then, we might as well do both at once by combining the augmented matrices into a single, "super-augmented" matrix:

    [ 1  2 | 1  0 ]
    [ 3  5 | 0  1 ] .

Using the standard Gauss-Jordan algorithm, this matrix reduces to the matrix

    [ 1  0 | -5  2 ]
    [ 0  1 |  3 -1 ]

which we can split back into the two augmented matrices

    [ 1  0 | -5 ]        [ 1  0 |  2 ]
    [ 0  1 |  3 ]  and   [ 0  1 | -1 ] .
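In case you want to check the reduction for yourself, three row operations do the job. Subtracting 3 times row 1 from row 2 gives

    [ 1  2 |  1  0 ]
    [ 0 -1 | -3  1 ] ,

then multiplying row 2 by -1 gives

    [ 1  2 | 1  0 ]
    [ 0  1 | 3 -1 ] ,

and finally subtracting 2 times row 2 from row 1 produces the reduced matrix above.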

We thus have the solutions

a = -5       b = 2
c = 3        d = -1 ,

so

    B = [ -5  2 ]
        [  3 -1 ] .
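As a quick check, you can multiply out both products: for instance, the (1,1) entry of AB is 1·(-5) + 2·3 = 1 and the (1,2) entry is 1·2 + 2·(-1) = 0, and the remaining entries work out the same way, so AB = I and BA = I.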

Notice that B is the right-hand side of the reduced super-augmented matrix

    [ 1  0 | -5  2 ]
    [ 0  1 |  3 -1 ] .

We could condense the work of finding B by going directly to the super-augmented matrix, reducing it to reduced row-echelon form, and reading B off the right-hand side.

To summarize, we have the following procedure for finding the inverse of a square matrix.
  1. Form the double matrix [A|I].
  2. Apply row operations to this matrix to try to reduce the left-hand side to I.
  3. If you can reduce the left-hand side to I, the whole matrix will transform into [I|A⁻¹] and you can read off A⁻¹ from the result.
  4. If you can't reduce the left-hand side to I, then A has no inverse.
This process works in general: if an n×n matrix A has an inverse, then finding A⁻¹ is equivalent to solving n linear systems, each with the same coefficient matrix A. That in turn is equivalent to reducing the double matrix [A|I] to reduced row-echelon form to produce the matrix [I|A⁻¹].
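If you find code helpful, here is a minimal sketch of the whole procedure in plain Python; the function name invert, the row swap, and the tolerance 1e-12 for "zero" are my own choices rather than part of the procedure itself:

def invert(A):
    """Return the inverse of the square matrix A (a list of rows), or None if A is singular."""
    n = len(A)
    # Step 1: form the double matrix [A | I].
    M = [[float(x) for x in row] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    # Step 2: run Gauss-Jordan on the left-hand side.
    for col in range(n):
        # Find a row at or below the diagonal with a usable (nonzero) pivot.
        pivot = next((r for r in range(col, n) if abs(M[r][col]) > 1e-12), None)
        if pivot is None:
            return None  # left-hand side can't be reduced to I, so A has no inverse
        M[col], M[pivot] = M[pivot], M[col]
        # Scale the pivot row so the pivot entry becomes 1 ...
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        # ... and clear every other entry in the pivot column.
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    # Step 3: the matrix is now [I | A^-1]; read the inverse off the right-hand side.
    return [row[n:] for row in M]

print(invert([[1, 2], [3, 5]]))   # [[-5.0, 2.0], [3.0, -1.0]]

Running it on the matrix from our example reproduces the inverse we found by hand.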
Matrix inverses have several algebraic properties that you should know about. Here are some of them, with proofs of why they work.
If a matrix A has an inverse, then A⁻¹ also has an inverse, and that inverse is A.

Proof: The equations that say that A⁻¹ is the inverse of A are

          AA⁻¹ = I   and   A⁻¹A = I .

These same equations say that the inverse of A⁻¹ is A, i.e.
(A⁻¹)⁻¹ = A.

If A and B are invertible matrices of the same size, then AB is invertible, and its inverse is B⁻¹A⁻¹. (Note the reversed order!)

Proof: Set C = B⁻¹A⁻¹. Then

 C(AB) = (B⁻¹A⁻¹)(AB) = B⁻¹(A⁻¹A)B = B⁻¹IB = B⁻¹B = I

and

(AB)C = (AB)(B⁻¹A⁻¹) = A(BB⁻¹)A⁻¹ = AIA⁻¹ = AA⁻¹ = I.

Since C(AB) = I and (AB)C = I, the matrix AB is invertible and has inverse C, i.e. (AB)⁻¹ = B⁻¹A⁻¹.

Note: By applying this property over and over again, you can show that the inverse of a product of any number of invertible matrices is the product of their inverses in the reverse order.

For example, for square matrices A, B, C and D of the same size, (ABCD)⁻¹ = D⁻¹C⁻¹B⁻¹A⁻¹.

In particular, if you apply the rule to n copies of A, you get a rule for inverses of powers of a square matrix:

(Aⁿ)⁻¹ = (A⁻¹)ⁿ for any n = 1, 2, ...,

i.e. the inverse of a power of an invertible matrix equals the same power of its inverse.
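Both the reverse-order rule and the power rule are easy to spot-check numerically. Here is a quick sketch using NumPy; the matrices A and B below are just arbitrary invertible examples of my own choosing:

import numpy as np

A = np.array([[1.0, 2.0], [3.0, 5.0]])
B = np.array([[2.0, 1.0], [1.0, 1.0]])

# (AB)^-1 should equal B^-1 A^-1 -- note the reversed order
print(np.allclose(np.linalg.inv(A @ B), np.linalg.inv(B) @ np.linalg.inv(A)))  # True

# (A^3)^-1 should equal (A^-1)^3
print(np.allclose(np.linalg.inv(np.linalg.matrix_power(A, 3)),
                  np.linalg.matrix_power(np.linalg.inv(A), 3)))                # True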

The inverse of the transpose of a matrix A is the transpose of its inverse:

          (Aᵀ)⁻¹ = (A⁻¹)ᵀ.

(Recall that the transpose of a matrix A is the matrix Aᵀ whose columns are the rows of A and whose rows are the columns of A.)

Proof: Set C = (A⁻¹)ᵀ. Then, using the rule (XY)ᵀ = YᵀXᵀ,

CAᵀ = (A⁻¹)ᵀAᵀ = (AA⁻¹)ᵀ = Iᵀ = I

and

AᵀC = Aᵀ(A⁻¹)ᵀ = (A⁻¹A)ᵀ = Iᵀ = I,

so since CAᵀ = I and AᵀC = I, C must be the inverse of Aᵀ, i.e.
(Aᵀ)⁻¹ = (A⁻¹)ᵀ.
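The same kind of numerical spot check works for this rule too, again with an arbitrary invertible example:

import numpy as np

A = np.array([[1.0, 2.0], [3.0, 5.0]])

# (A^T)^-1 should equal (A^-1)^T
print(np.allclose(np.linalg.inv(A.T), np.linalg.inv(A).T))  # True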

Another worry: could there be more than one matrix which satisfies these conditions? Fortunately not. Suppose matrices B and C both satisfy the conditions, i.e.

AB = I and BA = I

and

AC = I and CA = I.

Then

B(AC) = BI = B   and   (BA)C = IC = C.

Since matrix multiplication is associative, B(AC) = (BA)C, so
Since matrix multiplication is associative, B(AC) = (BA)C, so B = C; there can't be two different candidates for the inverse of A. We can now talk about "the" inverse of a matrix instead of "an" inverse (if there is one, that is).

What if the left-hand side of [A|I] can't be reduced to I? This would mean that each of the n linear systems used to find A⁻¹ has either no solution or infinitely many solutions. They can't all have infinitely many solutions, since A can have only one inverse. So at least one of the systems has no solution, and A has no inverse.
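For example, take the matrix with rows [1, 2] and [2, 4], whose second row is twice its first. One row operation on the double matrix gives

    [ 1  2 | 1  0 ]  →  [ 1  2 |  1  0 ]
    [ 2  4 | 0  1 ]     [ 0  0 | -2  1 ] ,

and no further row operations can turn the zero row on the left into a row of I, so this matrix has no inverse.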
