Determinants

A definition

Suppose that A is a square matrix, each of whose diagonal entries is different from 0. We can attempt to reduce it, first to an upper-triangular matrix, and then to a diagonal matrix D, by application of only one sort of elementary row-operation: that of adding a multiple of one row to another. If we succeed, then the matrix D is unique: it is the only diagonal matrix to which A can be reduced by the given elementary row-operation. We can then define the determinant of A to be the scalar, denoted det A, which is the product of the diagonal entries of D.

If we can reduce A to an upper-triangular matrix by the given method, but one of the diagonal entries is 0, then we may not be able to reduce further to a diagonal matrix. In any case, if one of the diagonal entries in the upper-triangular matrix is 0, then the determinant of A is 0.

Possibly we cannot even reduce A to an upper-triangular matrix without interchanging rows. Then we perform these interchanges and keep count of them. In the end, we calculate the determinant as before, but we change its sign if we used an odd number of row-interchanges.

Properties

It follows immediately that A is invertible if and only if det A is not zero: A is invertible precisely when it can be reduced by elementary row-operations to the identity matrix, and this is possible just when no diagonal entry of the reduced matrix is 0.

In particular, the determinants of the elementary matrices are as follows:

  1. If E results from multiplying a row of I by the scalar a, then det E = a.
  2. If E results from interchanging two rows of I, then det E = -1.
  3. If E results from adding a multiple of one row of I to another, then det E = 1.
Note that the determinant of I itself is 1. If A is a product E1E2...Er of elementary matrices, then

det A = det E1 det E2 ... det Er .

More generally, the determinant of a product is the product of the determinants:

det (AB) = det A det B .

Also, taking the transpose of an elementary matrix does not change its determinant; hence, in general,

det AT = det A.
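Both rules are easy to check numerically on small matrices. Here is a minimal sketch using the explicit formula det [[a, b], [c, d]] = ad - bc (the helper names are illustrative choices, not from the text):

```python
def det2(m):
    # The 2x2 determinant: ad - bc.
    (a, b), (c, d) = m
    return a * d - b * c

def matmul2(p, q):
    # Product of two 2x2 matrices.
    return [[sum(p[i][k] * q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose2(m):
    # Entry (i, j) of the transpose is entry (j, i) of m.
    return [[m[j][i] for j in range(2)] for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]

assert det2(matmul2(A, B)) == det2(A) * det2(B)   # det (AB) = det A det B
assert det2(transpose2(A)) == det2(A)             # det AT = det A
```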

A technique

A straightforward way to calculate the determinant of a square matrix A is this: using the elementary row-operations except the scaling of rows, reduce A to an upper-triangular matrix. Each row-interchange causes a change of sign of the determinant of the matrix; adding a multiple of one row to another causes no change. The determinant of the upper-triangular matrix is the product of its diagonal entries.
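The procedure just described can be sketched in code. The following is a minimal illustration (the function name is a choice of this sketch; exact rational arithmetic via Python's `fractions` module is used to avoid rounding error):

```python
from fractions import Fraction

def det_by_reduction(rows):
    """Determinant by reduction to an upper-triangular matrix, using
    only row-interchanges (each flips the sign of the determinant) and
    the addition of a multiple of one row to another (no change)."""
    a = [[Fraction(x) for x in row] for row in rows]
    n = len(a)
    sign = 1
    for j in range(n):
        # Find a row at or below the diagonal with a nonzero entry
        # in column j, to serve as the pivot row.
        p = next((i for i in range(j, n) if a[i][j] != 0), None)
        if p is None:
            return Fraction(0)   # no pivot available: determinant is 0
        if p != j:
            a[j], a[p] = a[p], a[j]
            sign = -sign         # a row-interchange changes the sign
        for i in range(j + 1, n):
            m = a[i][j] / a[j][j]
            a[i] = [u - m * v for u, v in zip(a[i], a[j])]
    # The determinant is the product of the diagonal entries,
    # with the sign adjusted for the number of interchanges.
    result = Fraction(sign)
    for j in range(n):
        result *= a[j][j]
    return result
```

For example, det_by_reduction([[1, 2, 3], [4, 5, 6], [7, 8, 10]]) gives -3.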

More properties

Suppose two rows of a square matrix are identical. Then interchanging those identical rows must both change the sign of the determinant, and keep the determinant the same. Therefore the determinant must be zero.

Likewise, if one row is a multiple of another--in particular, if one row is zero--then the determinant is zero.

The same goes if we speak of columns instead of rows, since transposing a matrix does not change its determinant.

Another technique

An alternative definition of the determinant can be developed thus. Let A be the n×n matrix (aij). If n = 1, then det A is just the value of the single entry a11. Suppose n > 1. The minor of the entry aij is then the determinant of the (n - 1)×(n - 1) matrix resulting from deleting row i and column j of A. The cofactor of aij is (-1)^(i+j) times this minor. For each entry of the top row of A, take the product of that entry with its cofactor; the sum of these n products is det A. In fact, you can start with any row or column: take the products of its entries with their respective cofactors and add up the products--the result is det A.
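This recursive definition translates directly into code. A minimal sketch (the function name is illustrative), expanding along the top row:

```python
def det_by_cofactors(a):
    """Determinant by cofactor expansion along the top row.
    The minor of a[i][j] is the determinant of the matrix with
    row i and column j deleted; the cofactor of a[i][j] is
    (-1)**(i + j) times the minor."""
    n = len(a)
    if n == 1:
        return a[0][0]
    total = 0
    for j in range(n):
        # Delete row 0 and column j to form the minor's matrix.
        minor = [row[:j] + row[j + 1:] for row in a[1:]]
        cofactor = (-1) ** j * det_by_cofactors(minor)   # i = 0 here
        total += a[0][j] * cofactor
    return total
```

For example, det_by_cofactors([[1, 2], [3, 4]]) gives -2.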

We can check that this definition of the determinant gives the same results as the first by noting that each definition gives the same value for det I, namely 1; also, that under each definition, the determinant of a matrix changes in the same way when elementary row-operations are applied.

Theory

Suppose A is the n×n matrix

A = (a1 a2 ... an).

We know that interchanging two columns changes the sign of the determinant. Suppose now that we replace the column ar with a vector x of variables. (So, x is (x1 x2 ... xn)T.) Then the determinant of this new matrix is a homogeneous linear polynomial in the variables xi. We can write this determinant as a function, say L(x). It is a linear function; that is,

L(ax + by) = a L(x) + b L(y).

A way to summarize these properties is to say that the determinant of a matrix is an alternating, multilinear function of its columns (but don't worry about the term, it's not in the book). In fact the determinant can be defined by these properties, together with the property that the determinant of the identity-matrix is 1.
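The linearity of L can be verified numerically. A sketch with a fixed 3×3 matrix whose middle column is replaced by x (the particular matrix and the helper name det3 are choices of this illustration):

```python
def det3(m):
    # 3x3 determinant by cofactor expansion along the top row.
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def L(x):
    # Determinant of a fixed matrix with its middle column replaced by x.
    return det3([[2, x[0], 5],
                 [1, x[1], 0],
                 [3, x[2], 7]])

x, y = (1, 4, 2), (3, 0, 6)
a, b = 5, -2
combo = tuple(a * u + b * v for u, v in zip(x, y))
assert L(combo) == a * L(x) + b * L(y)   # L(ax + by) = a L(x) + b L(y)
```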

A consequence

Suppose A, B and C are square matrices that are identical, except that their jth columns are the vectors a, b and a + b respectively. Then

det A + det B = det C .

Likewise for rows instead of columns.
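A quick numeric check of this additivity, again with the 2×2 formula ad - bc (the particular matrices are arbitrary):

```python
def det2(m):
    # The 2x2 determinant: ad - bc.
    (a, b), (c, d) = m
    return a * d - b * c

# A, B and C agree except in their first columns, and C's first
# column is the sum of A's and B's.
A = [[1, 7], [2, 9]]
B = [[3, 7], [5, 9]]
C = [[1 + 3, 7], [2 + 5, 9]]

assert det2(A) + det2(B) == det2(C)
```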

Application to eigenvalues

Let A be an n×n matrix, and suppose there is a nonzero vector v and a scalar x such that

Av = xv .

Then x is an eigenvalue or characteristic value of A, and v is a corresponding eigenvector or characteristic vector. We shall develop the notions more later; for now we can note that the equation can be rewritten thus:

(A - xI)v = 0 ,

and also (xI - A)v = 0. Since this homogeneous system has the nonzero solution v, the matrix xI - A is not invertible, and hence its determinant is zero. In other words, the eigenvalues x of A are just the solutions of the equation

det (xI - A) = 0 .

The left member of the equation is a polynomial of degree n in the variable x; the equation is the characteristic equation of A. (Often eigenvalues are designated by the small Greek letter lambda.)
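For a 2×2 matrix A the characteristic polynomial works out to x^2 - (trace A)x + det A, a quadratic that can be solved directly. A sketch (assuming the eigenvalues are real; the function name is illustrative):

```python
import math

def eigenvalues2(m):
    """Eigenvalues of a 2x2 matrix, from det (xI - A) = 0.
    Here det (xI - A) = x**2 - (trace A)*x + det A, and the
    quadratic formula gives the roots.  Assumes they are real."""
    (a, b), (c, d) = m
    trace = a + d
    det = a * d - b * c
    disc = math.sqrt(trace ** 2 - 4 * det)
    return ((trace - disc) / 2, (trace + disc) / 2)

# A = [[2, 1], [1, 2]] has characteristic equation x**2 - 4x + 3 = 0,
# whose roots, the eigenvalues, are 1 and 3.
```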
