au + bv + cw = 0
and look at what the coefficients a, b and c can be. There's obviously the trivial solution a = b = c = 0; we're interested in non-trivial solutions (i.e. with at least one of a, b or c not equal to 0).
Suppose there is one, say with a ≠ 0. Then you can solve for u as a linear combination of the other vectors:
u = –(b/a)v – (c/a)w,
so the vectors are linearly dependent.
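To see this step concretely, here's a quick numerical check in Python (a sketch with made-up vectors and coefficients; there's nothing special about these numbers):

import numpy as np

# Made-up vectors v, w and a made-up non-trivial solution with a != 0.
v = np.array([1.0, 0.0, 2.0])
w = np.array([0.0, 3.0, 1.0])
a, b, c = 2.0, -4.0, 6.0

# Solve for u as a linear combination of the other vectors.
u = -(b / a) * v - (c / a) * w

# By construction, u, v, w now satisfy a*u + b*v + c*w = 0.
print(a * u + b * v + c * w)  # [0. 0. 0.]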
On the other hand, suppose the vectors are linearly dependent, say with u a linear combination of the others:
u = pv + qw.
Then (–1)u + pv + qw = 0, so the linear dependence relation has the non-trivial solution a = –1, b = p, c = q.
In other words, linear dependence is equivalent to the existence of a non-trivial solution of the linear dependence relation.
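The converse direction is just as easy to check numerically (again a sketch with made-up numbers):

import numpy as np

# Made-up v, w, and a u that is a linear combination of them: u = p*v + q*w.
v = np.array([1.0, 0.0, 2.0])
w = np.array([0.0, 3.0, 1.0])
p, q = 5.0, -2.0
u = p * v + q * w

# Then (a, b, c) = (-1, p, q) is a non-trivial solution of the relation.
a, b, c = -1.0, p, q
print(a * u + b * v + c * w)  # [0. 0. 0.]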
Take the vectors [1, 2, 3], [4, 5, 6] and [7, 8, 9]. These are three vectors in 3-space, so you can't tell geometrically whether or not they're linearly independent (unless you plot them, perhaps). So set up a linear dependence relation:
a[1, 2, 3] + b[4, 5, 6] + c[7, 8, 9] = [0, 0, 0].
This equation is equivalent to the linear system
a + 4b + 7c = 0
2a + 5b + 8c = 0
3a + 6b + 9c = 0.
Solve the system (by any means you know). You don't actually have to find all the non-trivial solutions; you just need to find out if the system has any non-trivial solutions. Check the determinant of the coefficient matrix (it turns out to be 0), or find the reduced row echelon form of the coefficient matrix (it's not the identity matrix), or ... . The system does indeed have non-trivial solutions, so the original vectors are linearly dependent.
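For instance, here's how you might run both of those checks with SymPy (a sketch; any linear algebra tool you know will do). The nullspace even hands you a concrete non-trivial solution, (a, b, c) = (1, –2, 1):

import sympy as sp

# Coefficient matrix of the system (the given vectors as columns).
A = sp.Matrix([[1, 4, 7],
               [2, 5, 8],
               [3, 6, 9]])

print(A.det())        # 0, so non-trivial solutions exist
print(A.rref())       # RREF is not the identity; the third column has no pivot
print(A.nullspace())  # spanned by (1, -2, 1): a non-trivial solution (a, b, c)

Indeed, 1·[1, 2, 3] – 2·[4, 5, 6] + 1·[7, 8, 9] = [0, 0, 0].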