Abstract vector spaces
The vectors in R^{n} compose a vector space, because they can be added to each other and multiplied by scalars according to certain rules. (In fact, R^{n} is a real vector space, because the scalars are real numbers. Later we shall treat complex vector spaces.) In general, a vector space is any structure that behaves like R^{n}.
Formal definition
A scalar is a real number. A (real) vector space consists of five things:
 A set of objects, called vectors;
 An operation assigning, to each ordered pair (x, y) of vectors, a vector x + y, called their sum;
 For each scalar r, an operation assigning to each vector x a vector rx, called the scalar product of x by r ;
 An operation assigning to each vector x a vector −x, called its negative;
 A distinguished vector 0, called the zero vector.
Addition rules
 x + y = y + x ;
 (x + y) + z = x + (y + z) ;
 x + 0 = x ;
 x + (−x) = 0.
Scalar-multiplication rules
 (r + s)x = rx + sx ;
 r(x + y) = rx + ry ;
 r(sx) = (rs)x ;
`Rule of unity'
 1x = x .
Consequences of the definition
You can prove the following from the vector-space rules:
 0x = 0 .
 r0 = 0 .
 (−1)x = −x .
 If rx = 0 and r is not zero, then x = 0.
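The first of these consequences follows from the addition and scalar-multiplication rules alone; a short derivation, each step citing one rule:

```latex
\begin{aligned}
0x &= (0+0)x = 0x + 0x          && \text{by } (r+s)x = rx + sx;\\
\text{adding } -(0x):\quad
0  &= (0x + 0x) + (-(0x))\\
   &= 0x + \bigl(0x + (-(0x))\bigr) && \text{by associativity}\\
   &= 0x + 0 = 0x               && \text{by } x + (-x) = 0 \text{ and } x + 0 = x.
\end{aligned}
```

The other consequences can be proved by similar bookkeeping with the rules.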
Examples
Every vector space has at least one vector, namely the zero vector. Possibly this is the only vector in the space, in which case the space is trivial.
The set R of real numbers, with its operations of addition and multiplication, is a vector space.
With the usual operations of addition and scalar multiplication, the m×n matrices compose a vector space, which we could call R^{m×n}.
If n is a nonnegative integer, then a polynomial of degree n is a sum
a_{0} + a_{1}x + a_{2}x^{2} + ... + a_{n}x^{n} ,
where x is a variable, the a_{i} are scalars, and a_{n} is not zero. In particular, a scalar a is a polynomial of degree 0 if a is not zero; if a = 0, then a is the zero polynomial, whose degree is less than zero. We can observe the following:
 The polynomials of degree n do not compose a vector space (for example, the zero polynomial is excluded, and the sum x^{n} + (−x^{n}) = 0 is not of degree n);
 the polynomials of degree less than n do compose a vector space (which is the trivial space if n = 0);
 there is a vector space comprising all polynomials.
The set of positive real numbers is a vector space, provided that addition of vectors is understood to be multiplication of numbers, and scalar multiplication by a real number r is understood to be exponentiation by r.
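A quick numerical check of this (perhaps surprising) example; the names `vadd` and `smul` are ours, standing for the redefined vector addition and scalar multiplication:

```python
# Positive reals as a vector space: "addition" of vectors is
# multiplication of numbers; "scalar multiplication" by r is
# exponentiation by r.
def vadd(x, y):
    return x * y          # vector sum

def smul(r, x):
    return x ** r         # scalar product

zero = 1.0                # the zero vector is the number 1, since x * 1 = x

x, y, r, s = 2.0, 5.0, 3.0, -1.0
assert vadd(x, zero) == x                                   # x + 0 = x
assert vadd(x, y) == vadd(y, x)                             # commutativity
assert smul(r + s, x) == vadd(smul(r, x), smul(s, x))       # (r+s)x = rx + sx
assert smul(r, vadd(x, y)) == vadd(smul(r, x), smul(r, y))  # r(x+y) = rx + ry
assert smul(1.0, x) == x                                    # rule of unity
```

Note that the negative of a vector x here is smul(−1, x), i.e. the reciprocal 1/x.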
Suppose A is an n×n matrix such that A^{2} = A. We can define the product of a vector x in R^{n} by a scalar r to be rAx. With this new product, and the usual addition, R^{n} satisfies all rules for vector spaces, except the `rule of unity' (unless A is the identity).
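A concrete instance, with an idempotent matrix of our own choosing: A = [[1, 0], [0, 0]] projects onto the first coordinate, so A² = A, and the redefined scalar product rAx breaks the rule of unity:

```python
# A is idempotent (AA = A) but not the identity.
A = [[1.0, 0.0],
     [0.0, 0.0]]

def matvec(M, v):
    # ordinary matrix-vector product
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def smul(r, x):
    # redefined scalar product: r "times" x is r * (A x)
    return [r * c for c in matvec(A, x)]

x = [3.0, 4.0]
print(smul(1.0, x))   # [3.0, 0.0] — not x, so the rule of unity fails
```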
Subspaces
Suppose V is a vector space, and W is a nonempty subset of V such that:
 x + y is in W whenever x and y are in W;
 rx is in W whenever r is a scalar and x is in W.
Then W, with the operations inherited from V, is itself a vector space, called a subspace of V.
Any vector space has the following two subspaces: itself, and the trivial subspace {0}. (Possibly these are identical.)
If A is an m×n matrix, then the set of solutions x to the homogeneous linear system
Ax = 0
is a subspace of R^{n}, called the nullspace of A.
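As an illustration, take the (hypothetical) 1×2 matrix A = [1 −1]; its nullspace is the set of vectors (t, t) in R^2, and the two closure conditions can be checked directly:

```python
# Nullspace of A = [1 -1]: all x in R^2 with x[0] - x[1] == 0.
def in_nullspace(x):
    return x[0] - x[1] == 0   # the condition Ax = 0

x, y, r = [2.0, 2.0], [-5.0, -5.0], 3.0
assert in_nullspace(x) and in_nullspace(y)
# closure under addition and under scalar multiplication:
assert in_nullspace([x[0] + y[0], x[1] + y[1]])
assert in_nullspace([r * x[0], r * x[1]])
```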
Linear combinations and spanning sets
Suppose v_{1}, v_{2},..., v_{n} are vectors in a vector space V. A linear combination of these vectors is a vector of the form
x_{1}v_{1} + x_{2}v_{2} + ... + x_{n}v_{n} ,
where the coefficients x_{i} are scalars. Note that any such vector is in V. If all of the coefficients x_{i} are zero, then the linear combination can be called trivial. The set of all linear combinations of the vectors v_{i} is a subspace W of V called the span of the v_{i} and denoted
span{v_{1}, v_{2},..., v_{n}} .
We also say that W is spanned by the vectors v_{i}, and that the set {v_{1}, v_{2},..., v_{n}} is a spanning set for W.
Spanning sets, in general, are not unique, and may be redundant. For example, R^{n} is spanned by the set
{e_{1}, e_{2}, ..., e_{n}},
where e_{i} has 1 in row i and 0 everywhere else. But if v is any other vector in R^{n}, then R^{n} is also spanned by the set {e_{1}, e_{2}, ..., e_{n}, v}.
To check whether a vector b in R^{m} is a linear combination of vectors a_{1}, a_{2},..., a_{n}, write the vectors a_{i} as the columns of an m×n matrix A, and set up the equation
Ax = b .
The product Ax is a linear combination of the columns of A, so the equation is consistent if and only if b is a linear combination of the columns of A.
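A small worked instance, with vectors of our own choosing: is b = (5, 4) a linear combination of a_{1} = (1, 2) and a_{2} = (3, 1)? Solving the 2×2 system Ax = b (here by Cramer's rule, since A is square and invertible):

```python
# Is b a linear combination of a1 and a2?  Solve A x = b, where A
# has a1 and a2 as its columns; a solution gives the coefficients.
a1, a2, b = (1.0, 2.0), (3.0, 1.0), (5.0, 4.0)

det = a1[0] * a2[1] - a2[0] * a1[1]       # determinant of A = [a1 a2]
assert det != 0                            # A is invertible, so the system is consistent
x1 = (b[0] * a2[1] - a2[0] * b[1]) / det   # Cramer's rule
x2 = (a1[0] * b[1] - b[0] * a1[1]) / det
# check: x1*a1 + x2*a2 reproduces b (up to rounding)
assert all(abs(x1 * u + x2 * v - w) < 1e-12 for u, v, w in zip(a1, a2, b))
print(x1, x2)
```

So b = 1.4 a_{1} + 1.2 a_{2}; in general the system may instead be solved by row reduction, which also handles non-square A.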
Linearly independent sets
A set {v_{1}, v_{2},..., v_{r}} of vectors from a vector space is called linearly independent if one of the following equivalent conditions holds:
 None of the vectors v_{i} is a linear combination of the others;
 The zero vector is not a nontrivial linear combination of the vectors v_{i}; in other words, if
x_{1}v_{1} + x_{2}v_{2} + ... + x_{r}v_{r} = 0 ,
then each of the coefficients x_{i} is zero.
To check for linear independence in a set of n vectors in R^{m}, write the vectors as the columns of an m×n matrix. The nullspace of the matrix is the trivial vector space if and only if the columns are linearly independent.
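Equivalently, the columns are independent exactly when the rank of the matrix equals the number of columns. A sketch of the check with a hand-rolled row reduction (the `rank` helper and the example matrices are ours):

```python
# Columns of M are linearly independent iff rank(M) == number of
# columns, i.e. iff the nullspace of M is trivial.
def rank(M, eps=1e-12):
    M = [row[:] for row in M]              # work on a copy
    m, n = len(M), len(M[0])
    r = 0
    for col in range(n):
        # find a pivot row for this column
        pivot = next((i for i in range(r, m) if abs(M[i][col]) > eps), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(r + 1, m):          # eliminate below the pivot
            f = M[i][col] / M[r][col]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[1.0, 0.0], [2.0, 1.0], [0.0, 3.0]]            # 2 vectors in R^3
B = [[1.0, 0.0, 2.0], [2.0, 1.0, 5.0], [0.0, 3.0, 3.0]]
print(rank(A) == 2)   # True: the two columns are independent
print(rank(B) == 3)   # False: third column = 2*(first) + (second)
```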
Note a consequence of the last fact: no set of n vectors in R^{m} is linearly independent if m < n. If m is not less than n, such sets may or may not be independent.
Suppose A is an m×n matrix, and B is an n×r matrix. Then the columns of the product AB are linear combinations of the columns of A. Knowing whether the columns of one of A and AB are independent tells you nothing about whether the columns of the other are independent.
Bases
[The word bases is the plural of basis, which is originally Greek.] Linear independence is an intrinsic property of a subset of a vector space. If a linearly independent set also spans the vector space it is in, then it is called a basis of that space.
Any linearly independent set is a basis of something, namely its span.
The standard basis of R^{n} is the set {e_{1}, e_{2}, ..., e_{n}} (where e_{i} is as defined above; it is also column i of the n×n identity matrix). An arbitrary subset of R^{n} is a basis (of R^{n}) if and only if it has n elements, which are the columns of an invertible matrix.
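For R^2 this invertibility test reduces to a nonzero determinant; a sketch (the helper name is ours):

```python
# Two vectors form a basis of R^2 iff the 2x2 matrix with those
# vectors as columns is invertible, i.e. has nonzero determinant.
def is_basis_of_R2(v, w):
    return v[0] * w[1] - w[0] * v[1] != 0

print(is_basis_of_R2((1.0, 0.0), (0.0, 1.0)))   # True: the standard basis
print(is_basis_of_R2((1.0, 2.0), (2.0, 4.0)))   # False: second = 2 * first
```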
Suppose B is a finite basis {v_{1}, v_{2}, ..., v_{n}} of a vector space V. Then for any vector v in V, there is a unique vector x in R^{n} such that
v = x_{1}v_{1} + x_{2}v_{2} + ... + x_{n}v_{n} ;
the vector x is called the B-coordinate vector of v, and denoted by
(v)_{B} .
The function from V to R^{n} that takes a vector to its Bcoordinate vector is a linear transformation. One can use it to show that every basis of V has size n; this justifies calling n the dimension of V.
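For a non-standard basis of R^2 (our illustrative choice: v_{1} = (1, 1), v_{2} = (1, −1)), finding (v)_{B} means solving a 2×2 system, and the linearity of the coordinate map can be observed directly:

```python
# B-coordinates in R^2: solve x1*v1 + x2*v2 = v by Cramer's rule.
v1, v2 = (1.0, 1.0), (1.0, -1.0)

def coords(v):
    det = v1[0] * v2[1] - v2[0] * v1[1]     # nonzero, since {v1, v2} is a basis
    x1 = (v[0] * v2[1] - v2[0] * v[1]) / det
    x2 = (v1[0] * v[1] - v[0] * v1[1]) / det
    return (x1, x2)

print(coords((3.0, 1.0)))     # (2.0, 1.0): indeed 2*v1 + 1*v2 = (3, 1)

# the coordinate map respects addition:
a, b = coords((3.0, 1.0)), coords((0.0, 4.0))
s = coords((3.0, 5.0))        # coordinates of (3, 1) + (0, 4)
assert s == (a[0] + b[0], a[1] + b[1])
```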