Linear systems
The "Linear" in Linear Algebra
Ordinary algebra deals with polynomials, such as ax^{2} + bx + c or x^{3} + 27y^{3} or x^{3} - 2x^{2}y + 4xy^{2} - 8y^{3}. These are constructed out of variables such as x and y by three operations:
- multiplication of variables, to form x^{2} or xy^{2} for example;
- scaling, that is, multiplication by constants, whether literal (as in ax^{2}) or numeral (as in 27y^{3});
- addition.
A linear polynomial is one formed by the last two operations only; in other words, it is a linear combination of variables. [A technical point: this definition of linear polynomial excludes polynomials such as ax + b; not all writers make this exclusion.] Linear polynomials are the province of linear algebra. So linear algebra is simpler than algebra in general! Yet it is complicated by the allowance of any number of variables, and by the consideration of systems of any number of equations.
The general linear polynomial in n variables is
a_{1}x_{1} + a_{2}x_{2} + ... + a_{n}x_{n}.
We shall understand the constants a_{i} to be real numbers, for now; later, they will sometimes be complex numbers. We shall generally call the constants scalars.
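Evaluating a linear polynomial at a point amounts to nothing more than summing scaled variables; here is a small Python sketch (the function name linear_poly is my own):

```python
def linear_poly(a, x):
    """Evaluate a_1*x_1 + a_2*x_2 + ... + a_n*x_n
    for equal-length sequences of scalars a and values x."""
    assert len(a) == len(x)
    return sum(a_i * x_i for a_i, x_i in zip(a, x))

# For example, 2x_1 - 3x_2 + 5x_3 at (x_1, x_2, x_3) = (1, 1, 1):
print(linear_poly([2, -3, 5], [1, 1, 1]))  # prints 4
```

Note that no products of variables appear anywhere; that is exactly what makes the polynomial linear.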
Linear Systems
One can see the goal of linear algebra as the understanding of linear systems. The general linear system of m equations in n unknowns is
a_{11}x_{1} + a_{12}x_{2} + ... + a_{1n}x_{n} = b_{1}
a_{21}x_{1} + a_{22}x_{2} + ... + a_{2n}x_{n} = b_{2}
................................
a_{m1}x_{1} + a_{m2}x_{2} + ... + a_{mn}x_{n} = b_{m}
The terms b_{i} are the constant terms of the corresponding equations. A solution to the system is an n-tuple (x_{1}, x_{2}, ..., x_{n}) that satisfies each equation. (Note what is, strictly speaking, an ambiguity: the symbols x_{j} are variables in the system, but also constants making up a solution to the system.) The names of the variables x_{j} are not important; their order is what is important. The information of the system is contained in its augmented matrix
[ a_{11} a_{12} ... a_{1n} b_{1} ]
[ a_{21} a_{22} ... a_{2n} b_{2} ]
[ ............................. ]
[ a_{m1} a_{m2} ... a_{mn} b_{m} ]
This matrix is formed by adjoining the final column of constants b_{i} to the coefficient matrix
[ a_{11} a_{12} ... a_{1n} ]
[ a_{21} a_{22} ... a_{2n} ]
[ ....................... ]
[ a_{m1} a_{m2} ... a_{mn} ]
Having m rows and n columns, this is an m×n matrix. We may denote it by (a_{ij}); here, a_{ij} is understood to be the entry in row i and column j of the matrix (counting from the top and the left, respectively), and i and j are understood to range from 1 to m and from 1 to n respectively. (If there is uncertainty about which letter represents the row number, and which the column number, we can denote the matrix by (a_{ij})_{ij}.)
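As an illustration, a hypothetical 2×3 system and its augmented matrix might be represented in Python as lists of rows (the names A, b, and augmented are my own):

```python
# The system  x_1 + 2x_2 - x_3 = 5
#                   3x_2 + 4x_3 = 6
A = [[1, 2, -1],
     [0, 3,  4]]      # coefficient matrix: m = 2 rows, n = 3 columns
b = [5, 6]            # constant terms

# Adjoin the column of constants to form the augmented matrix:
augmented = [row + [b_i] for row, b_i in zip(A, b)]
print(augmented)      # prints [[1, 2, -1, 5], [0, 3, 4, 6]]

# a_{ij} is the entry in row i, column j; Python counts from 0,
# so the mathematical a_{12} is A[0][1]:
print(A[0][1])        # prints 2
```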
We can solve a linear system by manipulating the equations in certain ways; the corresponding manipulations of the augmented matrix are the elementary row operations:
- multiplication of (the entries in) a row by a nonzero scalar;
- interchange of two rows;
- addition of a (scalar) multiple of one row to another.
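The three elementary row operations can be sketched directly in Python; each function below mutates the matrix M in place (the helper names are my own):

```python
def scale_row(M, i, c):
    """Multiply (the entries in) row i by a nonzero scalar c."""
    assert c != 0
    M[i] = [c * entry for entry in M[i]]

def swap_rows(M, i, j):
    """Interchange rows i and j."""
    M[i], M[j] = M[j], M[i]

def add_multiple(M, i, j, c):
    """Add c times row j to row i."""
    M[i] = [a + c * b for a, b in zip(M[i], M[j])]

M = [[1, 2], [3, 4]]
swap_rows(M, 0, 1)         # M is now [[3, 4], [1, 2]]
scale_row(M, 1, 3)         # M is now [[3, 4], [3, 6]]
add_multiple(M, 1, 0, -1)  # M is now [[3, 4], [0, 2]]
```

Each operation is reversible (scale by 1/c, swap back, or add -c times the same row), which is why applying them never changes the solution set of the system.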
By applying these operations in the technique called Gaussian elimination (or reduction), we arrive at a matrix in row-echelon form. To characterize this form, let us call the first nonzero entry of a row the pivot of the row. (The book does not use this term. If every entry of a row is zero, then the row has no pivot.) A matrix is in row-echelon form if it meets the following conditions:
- If a row besides the first has a pivot, then this pivot is to the right of the pivot of the preceding row. (In particular, the preceding row has a pivot; so the rows with no pivots are at the bottom of the matrix.)
- Every pivot is 1. (The book then calls it a leading 1.)
With an augmented matrix so reduced, we can read off the solution(s) to the corresponding linear system. In particular:
- If the final column has a pivot, then the system is inconsistent (has no solution).

Otherwise, the system is consistent (has a solution); moreover:
- If each column except the last has a pivot, then the system has a unique solution.
- If column j has no pivot, and j ≤ n (so that column j is not the column of constants), then x_{j} is a free variable. The variables that are not free variables are leading variables; their values are determined by the values of the free variables. In particular, when there are free variables, the system has infinitely many solutions.
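This case analysis can be mechanized: given an augmented matrix already in row-echelon form, locate the pivots and classify the system. A sketch, with function names of my own; columns are indexed from 0, so column n holds the constants:

```python
def pivot_columns(M):
    """0-based column index of the pivot (first nonzero entry) of each
    nonzero row; rows of all zeros have no pivot and are skipped."""
    return [next(j for j, entry in enumerate(row) if entry != 0)
            for row in M if any(entry != 0 for entry in row)]

def classify(aug, n):
    """Classify a system of n unknowns whose augmented matrix aug
    (constants in column n) is already in row-echelon form."""
    pivots = pivot_columns(aug)
    if n in pivots:              # pivot in the final column
        return "inconsistent"
    if len(pivots) == n:         # every variable column has a pivot
        return "unique solution"
    return "infinitely many solutions"

print(classify([[1, 2, 3], [0, 0, 1]], 2))  # prints inconsistent
print(classify([[1, 0, 2], [0, 1, 3]], 2))  # prints unique solution
print(classify([[1, 2, 3], [0, 0, 0]], 2))  # prints infinitely many solutions
```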
Once you have an augmented matrix in row-echelon form, you have two possible routes towards an explicit solution to the corresponding system: back-substitution and Gauss-Jordan elimination. It doesn't matter which one you use. Gauss-Jordan elimination yields a matrix in reduced row-echelon form, where every pivot is the unique nonzero entry in its column.
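Gauss-Jordan elimination itself can be sketched in a few lines. This version uses exact rational arithmetic (Python's fractions module) to avoid rounding error, and the name rref is my own:

```python
from fractions import Fraction

def rref(M):
    """Return the reduced row-echelon form of M (Gauss-Jordan elimination)."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        # Find a row at or below r with a nonzero entry in column c.
        i = next((k for k in range(r, rows) if M[k][c] != 0), None)
        if i is None:
            continue                             # column c has no pivot
        M[r], M[i] = M[i], M[r]                  # interchange rows
        M[r] = [x / M[r][c] for x in M[r]]       # scale so the pivot is 1
        for k in range(rows):                    # clear the rest of column c
            if k != r and M[k][c] != 0:
                M[k] = [a - M[k][c] * b for a, b in zip(M[k], M[r])]
        r += 1
        if r == rows:
            break
    return M

# The system 2x + 4y = 6, x + y = 1 reduces to x = -1, y = 2
# (Fraction entries compare equal to plain integers):
assert rref([[2, 4, 6], [1, 1, 1]]) == [[1, 0, -1], [0, 1, 2]]
```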
For every linear system, there is a corresponding homogeneous system, which results from replacing each constant term with zero. A homogeneous system is automatically consistent, and zero (that is, (0,...,0)) is a solution. If (r_{1}, r_{2}, ..., r_{n}) is a solution to a linear system, and (x_{1}, x_{2}, ..., x_{n}) is a solution to the corresponding homogeneous system, then
(x_{1} + r_{1}, x_{2} + r_{2}, ..., x_{n} + r_{n})
is also a solution of the original system. (More on this later.)
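The claim is easy to check numerically; here is a small sketch with a hypothetical singular system (all names are my own):

```python
def satisfies(A, b, x):
    """Check that x solves the system with coefficient matrix A, constants b."""
    return all(sum(a_ij * x_j for a_ij, x_j in zip(row, x)) == b_i
               for row, b_i in zip(A, b))

A = [[1, 2], [2, 4]]
b = [3, 6]
r = (1, 1)    # a particular solution: 1 + 2 = 3 and 2 + 4 = 6
h = (2, -1)   # a solution of the homogeneous system: 2 - 2 = 0 and 4 - 4 = 0
assert satisfies(A, b, r)
assert satisfies(A, [0, 0], h)

# Shifting the particular solution by the homogeneous one gives
# (x_1 + r_1, x_2 + r_2) = (3, 0), another solution of the original system:
shifted = tuple(x_i + r_i for x_i, r_i in zip(h, r))
assert satisfies(A, b, shifted)
```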