# Properties of eigenvalues and eigenvectors

A square matrix A and its transpose have the same eigenvalues.

Proof. We have that

 det(Aᵀ – λI) = det(Aᵀ – λIᵀ) = det((A – λI)ᵀ) = det(A – λI)

so A and Aᵀ have the same characteristic polynomial: det(Aᵀ – λI) = 0 exactly when det(A – λI) = 0. Thus A and Aᵀ have the same eigenvalues.

The matrices A and Aᵀ will usually have different eigenvectors.
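Both facts can be checked numerically. Here is a quick sketch with NumPy; the matrix A below is an arbitrary illustrative choice, not one from the text:

```python
import numpy as np

# Arbitrary non-symmetric example matrix.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# A and its transpose have the same eigenvalues (up to ordering)...
eig_A = np.sort(np.linalg.eigvals(A))
eig_AT = np.sort(np.linalg.eigvals(A.T))
print(np.allclose(eig_A, eig_AT))  # True

# ...but in general different eigenvectors.
_, V_A = np.linalg.eig(A)
_, V_AT = np.linalg.eig(A.T)
print(np.allclose(V_A, V_AT))  # False for this A
```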

The eigenvalues of a diagonal or triangular matrix are its diagonal elements.

Proof. Suppose the matrix A is diagonal or triangular. If you subtract λ's from its diagonal elements, the result A – λI is still diagonal or triangular, and its determinant is the product of its diagonal elements. So det(A – λI) is just the product of factors of the form (diagonal element – λ), and the roots of the characteristic equation must be the diagonal elements.
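As a sanity check, here is the fact in NumPy for a small upper-triangular example (an assumed matrix, not one from the text):

```python
import numpy as np

# Upper-triangular example; by the result above its eigenvalues
# should be exactly the diagonal entries 1, 4, 6.
T = np.array([[1.0, 2.0, 3.0],
              [0.0, 4.0, 5.0],
              [0.0, 0.0, 6.0]])

print(np.sort(np.linalg.eigvals(T)))  # the diagonal entries, 1, 4 and 6
```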

Another addition to the square matrix theorem.

An n × n matrix is invertible if and only if it does not have 0 as an eigenvalue.

Proof. An n × n matrix A has an eigenvalue 0 if and only if det(A – 0I) = 0, i.e. if and only if det A = 0. Since A is invertible if and only if det A ≠ 0, A is invertible if and only if 0 is not an eigenvalue of A.
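A numerical sketch of this equivalence, using a deliberately singular example matrix (an assumed choice for illustration):

```python
import numpy as np

# Singular example: the second row is twice the first, so det S = 0.
S = np.array([[1.0, 2.0],
              [2.0, 4.0]])

eigs = np.linalg.eigvals(S)
print(np.isclose(eigs, 0.0).any())        # True: 0 is an eigenvalue
print(np.isclose(np.linalg.det(S), 0.0))  # True: S is not invertible
```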

If a matrix A has eigenvalue λ with corresponding eigenvector x, then for any k = 1, 2, ..., Aᵏ has eigenvalue λᵏ corresponding to the same eigenvector x.

Proof. Suppose the matrix A has eigenvalue λ with eigenvector x, i.e. suppose that Ax = λx. Then A²x = A(Ax) = A(λx) = λ(Ax) = λ(λx) = λ²x. Multiply by more A's to get A³x = λ³x, A⁴x = λ⁴x and so on.
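A quick NumPy check of the power rule, on an assumed symmetric example with eigenvalues 1 and 3:

```python
import numpy as np

# Symmetric example with eigenvalues 1 and 3.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

lam, V = np.linalg.eigh(A)   # eigh: eigenvalues in ascending order
x = V[:, 1]                  # unit eigenvector for lam[1] = 3

k = 4
lhs = np.linalg.matrix_power(A, k) @ x
print(np.allclose(lhs, lam[1] ** k * x))  # True: A⁴x = 3⁴x = 81x
```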

If A is an invertible matrix with eigenvalue λ corresponding to eigenvector x, then A⁻¹ has eigenvalue λ⁻¹ corresponding to the same eigenvector x.

Proof. Since A is invertible, 0 is not an eigenvalue of A, so λ ≠ 0 and λ⁻¹ exists. Multiply the equation Ax = λx on the left by λ⁻¹A⁻¹:

 λ⁻¹A⁻¹(Ax) = λ⁻¹A⁻¹(λx), i.e. λ⁻¹x = A⁻¹x.

Thus A⁻¹ has eigenvalue λ⁻¹ corresponding to the same eigenvector x.
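The same assumed example matrix can illustrate the inverse rule numerically:

```python
import numpy as np

# Invertible symmetric example with eigenvalues 1 and 3.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

lam, V = np.linalg.eigh(A)
x = V[:, 1]                       # eigenvector for lam[1] = 3

# A⁻¹ keeps the eigenvector x but has eigenvalue 1/3.
print(np.allclose(np.linalg.inv(A) @ x, x / lam[1]))  # True
```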

Eigenvectors of a matrix A with distinct eigenvalues are linearly independent.

Proof. Suppose the statement is not true, i.e. suppose that A has a linearly dependent set of eigenvectors, each with a different eigenvalue. "Thin out" this set of vectors to get a linearly independent subset v₁, v₂, ..., vₖ, with distinct eigenvalues λ₁, λ₂, ..., λₖ.

Suppose u is one of the eigenvectors you thinned out because it was linearly dependent on the others:
u = c₁v₁ + c₂v₂ + ... + cₖvₖ       *
for some scalars c₁, c₂, ..., cₖ.

First multiply * by A:

 Au = A(c₁v₁ + c₂v₂ + ... + cₖvₖ) = c₁Av₁ + c₂Av₂ + ... + cₖAvₖ = c₁λ₁v₁ + c₂λ₂v₂ + ... + cₖλₖvₖ.

Since u is also an eigenvector, Au = λu for some eigenvalue λ (and λ is different from each λᵢ, since all the eigenvectors in the original set had distinct eigenvalues), so this equation gives
λu = c₁λ₁v₁ + c₂λ₂v₂ + ... + cₖλₖvₖ.           **

Now multiply * by λ:
λu = λc₁v₁ + λc₂v₂ + ... + λcₖvₖ.          ***

Subtract *** from ** to get

 0 = (c₁λ₁ – λc₁)v₁ + (c₂λ₂ – λc₂)v₂ + ... + (cₖλₖ – λcₖ)vₖ = c₁(λ₁ – λ)v₁ + c₂(λ₂ – λ)v₂ + ... + cₖ(λₖ – λ)vₖ.

Since the vᵢ's are linearly independent, cᵢ(λᵢ – λ) = 0 for all i = 1, 2, ..., k. Since the eigenvalues (including λ) are all different, λᵢ – λ ≠ 0, so cᵢ = 0 for all i. But this implies (from equation *) that u = 0, which is impossible since u is an eigenvector.

The original assumption must be false, i.e. it is not possible to have a linearly dependent set of eigenvectors with distinct eigenvalues; any eigenvectors with distinct eigenvalues must be linearly independent.
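The conclusion can be observed numerically: stack the eigenvectors as columns and check that the resulting matrix has full rank. The matrix below is an assumed triangular example whose distinct eigenvalues are visible on the diagonal:

```python
import numpy as np

# Triangular example with distinct eigenvalues 1, 2, 5 on the diagonal.
A = np.array([[1.0, 1.0, 1.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 5.0]])

lam, V = np.linalg.eig(A)                # columns of V are eigenvectors

print(len(set(np.round(lam, 8))) == 3)   # True: eigenvalues are distinct
print(np.linalg.matrix_rank(V) == 3)     # True: eigenvectors independent
```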