
Linear Independence

Linear Combination of Vectors

A linear combination of a finite collection of vectors \( \overrightarrow{v_1}, \overrightarrow{v_2}, \dots, \overrightarrow{v_n} \) is an expression of the form:

\[ \overrightarrow{r} = \lambda_1 \overrightarrow{v_1} + \lambda_2 \overrightarrow{v_2} + \dots + \lambda_n \overrightarrow{v_n}, \]

where \( \lambda_1, \lambda_2, \dots, \lambda_n \) are scalars (real numbers).

This means that the vector \( \overrightarrow{r} \) is obtained by scaling each vector \( \overrightarrow{v_i} \) by some factor \( \lambda_i \) and then adding the results.
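If the vectors are given in coordinates, a linear combination is computed exactly as the formula reads: scale each vector and add. Below is a minimal sketch using NumPy; the particular coordinate vectors and scalars are illustrative assumptions, not part of the discussion above.

```python
import numpy as np

v1 = np.array([1., 0., 0.])
v2 = np.array([0., 1., 0.])
v3 = np.array([1., 1., 1.])
lam = [2.0, -1.0, 0.5]                       # illustrative scalars

r = lam[0] * v1 + lam[1] * v2 + lam[2] * v3  # the linear combination
print(r)                                     # [ 2.5 -0.5  0.5]
```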

We have already been using the idea of a linear combination without explicitly naming it. For example, consider a point \( C \) that divides the line segment \( AB \) in the ratio \( 2:3 \). Using the section formula, the position vector of \( C \) is:

\[ \overrightarrow{OC} = \frac{2\overrightarrow{OB} + 3\overrightarrow{OA}}{5}. \]

Rewriting,

\[ \overrightarrow{OC} = \frac{2}{5} \overrightarrow{OB} + \frac{3}{5} \overrightarrow{OA}. \]

Here, \( \overrightarrow{OC} \) is expressed as a sum of scalar multiples of \( \overrightarrow{OA} \) and \( \overrightarrow{OB} \), which is precisely a linear combination of these two vectors.

Thus, when we describe a point as a weighted sum of two or more vectors, we are actually working with linear combinations. This concept is fundamental in understanding how vectors relate to each other and will be essential in later topics such as vector equations of lines and planes, vector spaces, and linear dependence.

The powerful idea behind linear combinations is that by taking a finite number of vectors and choosing appropriate scalars, we can construct new vectors in space. This means that a small set of vectors can be used to generate many other vectors, simply by scaling and adding them.

Linear Dependence and Independence

A set of vectors \( \overrightarrow{v_1}, \overrightarrow{v_2}, \dots, \overrightarrow{v_n} \) is said to be linearly dependent if there exist scalars \( \lambda_1, \lambda_2, \dots, \lambda_n \), not all zero, such that:

\[ \lambda_1 \overrightarrow{v_1} + \lambda_2 \overrightarrow{v_2} + \dots + \lambda_n \overrightarrow{v_n} = \overrightarrow{0}. \]

This means that at least one of the vectors in the set can be written as a linear combination of the others.

If the only solution to the equation

\[ \lambda_1 \overrightarrow{v_1} + \lambda_2 \overrightarrow{v_2} + \dots + \lambda_n \overrightarrow{v_n} = \overrightarrow{0} \]

is \( \lambda_1 = \lambda_2 = \dots = \lambda_n = 0 \), then the vectors are said to be linearly independent.

Linear independence means that none of the vectors in the set can be expressed as a combination of the others.
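For coordinate vectors, one practical way to test these definitions is to stack the vectors as columns of a matrix and compare its rank with the number of vectors: full rank means the homogeneous equation above has only the zero solution. A hedged NumPy sketch, with the sample vectors being my own assumption:

```python
import numpy as np

def is_independent(vectors):
    """True if the given coordinate vectors are linearly independent."""
    A = np.column_stack(vectors)               # each vector becomes a column of A
    return np.linalg.matrix_rank(A) == len(vectors)

# Two non-collinear plane vectors: independent.
print(is_independent([np.array([1., 0.]), np.array([1., 1.])]))    # True
# One vector is a scalar multiple of the other: dependent.
print(is_independent([np.array([1., 2.]), np.array([2., 4.])]))    # False
```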

Linear Independence of Two Non-Collinear Vectors

Two collinear vectors are always linearly dependent, while two non-collinear vectors are always linearly independent.

Suppose \( \overrightarrow{v_1} \) and \( \overrightarrow{v_2} \) are collinear. Then there exists a scalar \( k \) such that:

\[ \overrightarrow{v_1} = k\overrightarrow{v_2}. \]

Rearranging,

\[ \overrightarrow{v_1} - k\overrightarrow{v_2} = \overrightarrow{0}. \]

This shows that the vectors satisfy the dependence condition with \( \lambda_1 = 1 \) and \( \lambda_2 = -k \); since \( \lambda_1 \neq 0 \), the scalars are not all zero. Hence, \( \overrightarrow{v_1} \) and \( \overrightarrow{v_2} \) are linearly dependent.

Now, suppose \( \overrightarrow{v_1} \) and \( \overrightarrow{v_2} \) are non-collinear and assume that there exist scalars \( \lambda_1, \lambda_2 \) such that:

\[ \lambda_1 \overrightarrow{v_1} + \lambda_2 \overrightarrow{v_2} = \overrightarrow{0}. \]

We claim that this implies:

\[ \lambda_1 = 0, \quad \lambda_2 = 0. \]

If this were not true, then at least one of the scalars is nonzero. Suppose \( \lambda_1 \neq 0 \), then we can rewrite:

\[ \overrightarrow{v_1} = -\frac{\lambda_2}{\lambda_1} \overrightarrow{v_2}. \]

This equation expresses \( \overrightarrow{v_1} \) as a scalar multiple of \( \overrightarrow{v_2} \), which contradicts the assumption that \( \overrightarrow{v_1} \) and \( \overrightarrow{v_2} \) are non-collinear.

Thus, the only possible solution is \( \lambda_1 = 0 \) and \( \lambda_2 = 0 \), proving that non-collinear vectors are linearly independent.

Linear Independence of Three Non-Coplanar Vectors

Vectors whose lines of support are parallel to some common plane are called coplanar vectors. If no such plane exists, the vectors are said to be non-coplanar.

Three non-coplanar vectors are always linearly independent.

Let \( \overrightarrow{v_1}, \overrightarrow{v_2}, \overrightarrow{v_3} \) be three non-coplanar vectors. Suppose, for contradiction, that there exist scalars \( \lambda_1, \lambda_2, \lambda_3 \), not all zero, such that:

\[ \lambda_1 \overrightarrow{v_1} + \lambda_2 \overrightarrow{v_2} + \lambda_3 \overrightarrow{v_3} = \overrightarrow{0}. \]

At least one of the scalars is nonzero; without loss of generality, suppose \( \lambda_3 \neq 0 \). Then we can express:

\[ \overrightarrow{v_3} = -\frac{\lambda_1}{\lambda_3} \overrightarrow{v_1} - \frac{\lambda_2}{\lambda_3} \overrightarrow{v_2}. \]

This means that \( \overrightarrow{v_3} \) is a linear combination of \( \overrightarrow{v_1} \) and \( \overrightarrow{v_2} \). Define two new vectors:

\[ \overrightarrow{u_1} = -\frac{\lambda_1}{\lambda_3} \overrightarrow{v_1}, \quad \overrightarrow{u_2} = -\frac{\lambda_2}{\lambda_3} \overrightarrow{v_2}. \]

Thus,

\[ \overrightarrow{v_3} = \overrightarrow{u_1} + \overrightarrow{u_2}. \]

Since adding two vectors using the parallelogram law always results in a vector that lies in the same plane as the two vectors, this implies that \( \overrightarrow{v_3} \) lies in the plane containing \( \overrightarrow{v_1} \) and \( \overrightarrow{v_2} \).

But this contradicts the assumption that \( \overrightarrow{v_1}, \overrightarrow{v_2}, \overrightarrow{v_3} \) are non-coplanar.

Thus, the only possible solution is:

\[ \lambda_1 = 0, \quad \lambda_2 = 0, \quad \lambda_3 = 0. \]

This proves that three non-coplanar vectors are always linearly independent.
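In coordinates, three vectors in space are non-coplanar exactly when the determinant of the \( 3 \times 3 \) matrix with the vectors as columns is nonzero, so a determinant test mirrors the independence result just proved. A small NumPy sketch with assumed sample vectors:

```python
import numpy as np

def non_coplanar(v1, v2, v3):
    """True if three 3D vectors are linearly independent, i.e. span all of space."""
    return not np.isclose(np.linalg.det(np.column_stack([v1, v2, v3])), 0.0)

print(non_coplanar(np.array([1., 0., 0.]),
                   np.array([0., 1., 0.]),
                   np.array([0., 0., 1.])))   # True: three mutually perpendicular directions
print(non_coplanar(np.array([1., 0., 0.]),
                   np.array([0., 1., 0.]),
                   np.array([1., 1., 0.])))   # False: all three lie in the xy-plane
```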

The key idea of a linearly independent set of vectors is that no vector in the set can be written as a combination of the others. This means that each vector contributes something new—a new direction, a new dimension, or a new way to describe movement in space.

Think of it like building structures with rods. If you have three rods lying flat on a table, no matter how many ways you combine them, you can never build something that stands up in three-dimensional space. The reason? They are all in the same plane. You need a fourth rod that is not in the same plane to create a 3D structure.

Similarly, if you have a set of vectors, and one of them can be formed using the others, it is redundant—it doesn't add anything new. This means the vectors are dependent on each other. But if each vector points in a truly different direction that cannot be achieved by adding or scaling the others, then they are independent.

A linearly independent set ensures that each vector brings something unique to the set. If a set is linearly dependent, at least one of the vectors is unnecessary because it can be formed using the others.

This idea is fundamental in math because it tells us how many directions we truly need to describe a space, whether it is a line, a plane, or full three-dimensional space.

Example

A set that contains the null vector (zero vector) is always linearly dependent. This might seem a bit theoretical and strange at first, but the reason is simple:

Suppose we have a set of four vectors:

\[ \{ \overrightarrow{0}, \overrightarrow{a}, \overrightarrow{b}, \overrightarrow{c} \}. \]

To check whether this set is linearly dependent, we need to see if there exist scalars \( \lambda_1, \lambda_2, \lambda_3, \lambda_4 \), not all zero, such that:

\[ \lambda_1 \overrightarrow{0} + \lambda_2 \overrightarrow{a} + \lambda_3 \overrightarrow{b} + \lambda_4 \overrightarrow{c} = \overrightarrow{0}. \]

Now, no matter what value we assign to \( \lambda_1 \), the first term \( \lambda_1 \overrightarrow{0} \) is always the zero vector. So we can choose \( \lambda_1 = 5 \), \( \lambda_2 = 0 \), \( \lambda_3 = 0 \), \( \lambda_4 = 0 \), giving:

\[ 5\overrightarrow{0} + 0\overrightarrow{a} + 0\overrightarrow{b} + 0\overrightarrow{c} = \overrightarrow{0}. \]

This satisfies the definition of linear dependence because at least one scalar (here, \( \lambda_1 = 5 \)) is nonzero.

Thus, any set containing the null vector is always linearly dependent, because we can always form the zero vector trivially by multiplying the null vector by any nonzero scalar while keeping the other coefficients zero.
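The rank test from earlier reflects this immediately: a zero column can never add an independent direction, so a set containing the null vector always has fewer independent directions than members. A quick sketch (the concrete vectors are an assumption):

```python
import numpy as np

# Mirror the set {0, a, b, c} with concrete vectors (an assumed example in R^4).
vectors = [np.zeros(4),
           np.array([1., 0., 0., 0.]),
           np.array([0., 1., 0., 0.]),
           np.array([0., 0., 1., 0.])]
A = np.column_stack(vectors)
print(np.linalg.matrix_rank(A), "of", len(vectors))   # 3 of 4: the set is linearly dependent
```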

Adding a Vector to a Linearly Dependent Set Keeps It Dependent

If a set of vectors is linearly dependent, then adding any new vector to this set cannot make it linearly independent.

Suppose we have a set of \( n \) linearly dependent vectors \( \overrightarrow{v_1}, \overrightarrow{v_2}, \dots, \overrightarrow{v_n} \), meaning that there exist scalars \( \lambda_1, \lambda_2, \dots, \lambda_n \), not all zero, such that:

\[ \lambda_1 \overrightarrow{v_1} + \lambda_2 \overrightarrow{v_2} + \dots + \lambda_n \overrightarrow{v_n} = \overrightarrow{0}. \]

Now, suppose we add another vector \( \overrightarrow{v_{n+1}} \) to this set, forming a new set:

\[ \{ \overrightarrow{v_1}, \overrightarrow{v_2}, \dots, \overrightarrow{v_n}, \overrightarrow{v_{n+1}} \}. \]

To check for linear dependence, consider the same scalars \( \lambda_1, \lambda_2, \dots, \lambda_n \) from the previous dependency equation. If we choose a new scalar \( \lambda_{n+1} = 0 \), we get:

\[ \lambda_1 \overrightarrow{v_1} + \lambda_2 \overrightarrow{v_2} + \dots + \lambda_n \overrightarrow{v_n} + 0 \cdot \overrightarrow{v_{n+1}} = \overrightarrow{0}. \]

Since this equation still holds with at least one nonzero coefficient (because the original set was already dependent), the new set remains linearly dependent.

Thus, once a set is linearly dependent, adding more vectors can never make it independent, because the dependency condition already exists, and the new vector cannot remove it.
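In terms of rank, this says that appending columns to a matrix that is already rank-deficient never catches the rank up to the number of columns. A short sketch under the same NumPy assumption:

```python
import numpy as np

dependent = [np.array([1., 2., 0.]), np.array([2., 4., 0.])]   # collinear, hence dependent
extended  = dependent + [np.array([0., 0., 1.])]               # append any new vector

for vecs in (dependent, extended):
    A = np.column_stack(vecs)
    print(np.linalg.matrix_rank(A) == len(vecs))               # False both times: still dependent
```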

Removing a Vector from a Linearly Independent Set Always Leaves It Independent

If a set of vectors is linearly independent, then removing any vector from the set can never make it dependent; the remaining vectors will always remain independent.

Suppose \( \overrightarrow{v_1}, \overrightarrow{v_2}, \dots, \overrightarrow{v_n} \) is a linearly independent set. Assume that after removing \( \overrightarrow{v_n} \), the remaining set \( \{ \overrightarrow{v_1}, \overrightarrow{v_2}, \dots, \overrightarrow{v_{n-1}} \} \) becomes linearly dependent.

By definition, this means there exist scalars \( \lambda_1, \lambda_2, \dots, \lambda_{n-1} \), not all zero, such that:

\[ \lambda_1 \overrightarrow{v_1} + \lambda_2 \overrightarrow{v_2} + \dots + \lambda_{n-1} \overrightarrow{v_{n-1}} = \overrightarrow{0}. \]

But then, we can extend this equation to include \( \overrightarrow{v_n} \) by setting:

\[ \lambda_1 \overrightarrow{v_1} + \lambda_2 \overrightarrow{v_2} + \dots + \lambda_{n-1} \overrightarrow{v_{n-1}} + 0 \cdot \overrightarrow{v_n} = \overrightarrow{0}. \]

This is a linear dependence equation for the original set, contradicting the assumption that \( \{ \overrightarrow{v_1}, \dots, \overrightarrow{v_n} \} \) was independent.

Thus, the assumption that removing \( \overrightarrow{v_n} \) made the set dependent is false, proving that removing a vector from an independent set always leaves it independent.

Fundamental Theorem of Two Dimensions

Let \( \overrightarrow{v_1} \) and \( \overrightarrow{v_2} \) be two linearly independent vectors. This is equivalent to saying that \( \overrightarrow{v_1} \) and \( \overrightarrow{v_2} \) are non-collinear. Then, any vector coplanar with \( \overrightarrow{v_1} \) and \( \overrightarrow{v_2} \) can be expressed uniquely as a linear combination of \( \overrightarrow{v_1} \) and \( \overrightarrow{v_2} \). That is, if \( \overrightarrow{r} \) is any vector in the same plane as \( \overrightarrow{v_1} \) and \( \overrightarrow{v_2} \), then:

\[ \overrightarrow{r} = \lambda_1 \overrightarrow{v_1} + \lambda_2 \overrightarrow{v_2}, \quad \text{for some} \quad \lambda_1, \lambda_2 \in \mathbb{R}. \]

This follows directly from the concept of linear independence, but a geometric interpretation further clarifies why this is true.

Geometric Interpretation

Place \( \overrightarrow{v_1} \) and \( \overrightarrow{v_2} \) coinitial at a point \( O \). Since they are non-collinear, they form an angle between them that is neither 0 nor \( \pi \).

Now, consider any vector \( \overrightarrow{r} \) in the same plane as \( \overrightarrow{v_1} \) and \( \overrightarrow{v_2} \), and make it coinitial with them at \( O \).

Construction Using Parallelogram Law


  1. Draw the lines of support of \( \overrightarrow{v_1} \) and \( \overrightarrow{v_2} \), extending them indefinitely.
  2. From the head of \( \overrightarrow{r} \) (say, at point \( P \)), draw a line parallel to \( \overrightarrow{v_1} \) until it meets the line of support of \( \overrightarrow{v_2} \) at some point \( N \).
  3. Similarly, from \( P \), draw a line parallel to \( \overrightarrow{v_2} \) until it meets the line of support of \( \overrightarrow{v_1} \) at some point \( M \).
  4. The quadrilateral \( OMPN \) thus formed is a parallelogram.

By the parallelogram law, we have:

\[ \overrightarrow{OP} = \overrightarrow{OM} + \overrightarrow{ON}. \]

Since \( \overrightarrow{OM} \) is collinear with \( \overrightarrow{v_1} \), we can write:

\[ \overrightarrow{OM} = \lambda_1 \overrightarrow{v_1}, \quad \text{for some } \lambda_1 \in \mathbb{R}. \]

Similarly, since \( \overrightarrow{ON} \) is collinear with \( \overrightarrow{v_2} \), we have:

\[ \overrightarrow{ON} = \lambda_2 \overrightarrow{v_2}, \quad \text{for some } \lambda_2 \in \mathbb{R}. \]

Thus, using the parallelogram law:

\[ \overrightarrow{r} = \overrightarrow{OP} = \lambda_1 \overrightarrow{v_1} + \lambda_2 \overrightarrow{v_2}. \]

From here, we can also see that the representation of a vector in terms of a given basis is unique. Suppose a vector \( \overrightarrow{r} \) is written as a linear combination of two linearly independent vectors \( \overrightarrow{v_1} \) and \( \overrightarrow{v_2} \) in two dimensions:

\[ \overrightarrow{r} = \lambda_1 \overrightarrow{v_1} + \lambda_2 \overrightarrow{v_2}. \]

To prove uniqueness, assume that \( \overrightarrow{r} \) can also be written as a linear combination using different scalars \( \lambda_1' \) and \( \lambda_2' \):

\[ \overrightarrow{r} = \lambda_1' \overrightarrow{v_1} + \lambda_2' \overrightarrow{v_2}. \]

Equating both expressions for \( \overrightarrow{r} \), we get:

\[ \lambda_1 \overrightarrow{v_1} + \lambda_2 \overrightarrow{v_2} = \lambda_1' \overrightarrow{v_1} + \lambda_2' \overrightarrow{v_2}. \]

Rearranging,

\[ (\lambda_1 - \lambda_1') \overrightarrow{v_1} + (\lambda_2 - \lambda_2') \overrightarrow{v_2} = \overrightarrow{0}. \]

Since \( \overrightarrow{v_1} \) and \( \overrightarrow{v_2} \) are linearly independent, the only solution to this equation is:

\[ \lambda_1 - \lambda_1' = 0 \quad \text{and} \quad \lambda_2 - \lambda_2' = 0. \]

This implies that:

\[ \lambda_1 = \lambda_1', \quad \lambda_2 = \lambda_2'. \]

Thus, the representation of \( \overrightarrow{r} \) as a linear combination of \(\overrightarrow{v_1}\) and \(\overrightarrow{v_2}\) is unique.

This shows that any vector in the plane of two non-collinear vectors can always be expressed uniquely as a linear combination of those two vectors. This is a fundamental result because it tells us that two linearly independent vectors are enough to describe all vectors in two-dimensional space. The choice of scalars \( \lambda_1 \) and \( \lambda_2 \) determines how much of each vector contributes to forming \( \overrightarrow{r} \).
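In coordinates, finding \( \lambda_1 \) and \( \lambda_2 \) for a given \( \overrightarrow{r} \) amounts to solving a small linear system, and the uniqueness proved above is exactly what makes that system have a single solution when \( \overrightarrow{v_1} \) and \( \overrightarrow{v_2} \) are non-collinear. A minimal sketch with assumed plane vectors:

```python
import numpy as np

v1 = np.array([2., 1.])
v2 = np.array([1., 3.])                            # non-collinear with v1
r  = np.array([4., 7.])

# Solve [v1 v2] [lambda1, lambda2]^T = r; the solution is unique because the columns are independent.
lambdas = np.linalg.solve(np.column_stack([v1, v2]), r)
print(lambdas)                                     # [1. 2.]
print(np.allclose(lambdas[0] * v1 + lambdas[1] * v2, r))   # True: r is reproduced
```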

Fundamental Theorem of Three Dimensions

Let \( \overrightarrow{v_1}, \overrightarrow{v_2}, \overrightarrow{v_3} \) be three linearly independent vectors in space; equivalently, they are non-coplanar. Then any vector \( \overrightarrow{r} \) in three-dimensional space can be expressed as a linear combination of \( \overrightarrow{v_1}, \overrightarrow{v_2}, \) and \( \overrightarrow{v_3} \):

\[ \overrightarrow{r} = \lambda_1 \overrightarrow{v_1} + \lambda_2 \overrightarrow{v_2} + \lambda_3 \overrightarrow{v_3}, \quad \text{where } \lambda_1, \lambda_2, \lambda_3 \in \mathbb{R}. \]

If we allow \( \lambda_1, \lambda_2, \lambda_3 \) to take all possible values, we obtain all possible vectors in three-dimensional space, and, just as in the two-dimensional case, the coefficients for a given \( \overrightarrow{r} \) are unique. This means that any vector in space can be generated using just three independent directions.

Since only three linearly independent vectors are required to describe any vector in this space, we say that this space is three-dimensional. The term dimension refers to the minimum number of independent vectors required to describe every vector in the space.
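The three-dimensional statement looks the same in coordinates: with three independent columns, the \( 3 \times 3 \) system has exactly one solution for every right-hand side. A brief NumPy sketch with assumed sample vectors:

```python
import numpy as np

v1 = np.array([1., 0., 1.])
v2 = np.array([0., 1., 1.])
v3 = np.array([1., 1., 0.])
r  = np.array([2., 3., 4.])

B = np.column_stack([v1, v2, v3])
assert not np.isclose(np.linalg.det(B), 0.0)       # the three vectors are non-coplanar
lambdas = np.linalg.solve(B, r)
print(lambdas)                                     # the (unique) coefficients
print(np.allclose(B @ lambdas, r))                 # True: r is recovered
```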

Using Vectors to prove Geometrical Facts

Let us work through a few problems to see how vectors can be used to prove geometrical facts. Geometry is often visual, but by expressing geometrical concepts in vector notation and algebra, we can prove results rigorously and apply them efficiently in different contexts.

Prove that the centroid of a triangle divides each median in the ratio \( 2:1 \).

Proof:

We will use vectors to prove this geometrical fact. Read it carefully.

(Figure: triangle for the proof.)

Let \( \triangle ABC \) be some triangle in space. Take \( A \) as the reference point, and for simplicity, denote it as \( O \). With respect to this reference point, let the position vectors of \( B \) and \( C \) be:

\[ \overrightarrow{OB} = \mathbf{b}, \quad \overrightarrow{OC} = \mathbf{c}. \]

Since \( A \), \( B \), and \( C \) form a triangle, they are not collinear, so the vectors \( \mathbf{b} \) and \( \mathbf{c} \) cannot be collinear. Hence, they are linearly independent. The position vector of \( O \) is simply \( \mathbf{0} \).

Now, consider two medians \( AD \) and \( BE \), where \( D \) and \( E \) are the midpoints of \( BC \) and \( CA \), respectively. Let \( AD \) and \( BE \) intersect at \( G \). Our goal is to prove that \( AG:GD = BG:GE = 2:1 \).

Since \( D \) is the midpoint of \( BC \), its position vector is given by

\[ \overrightarrow{OD} = \frac{\mathbf{b} + \mathbf{c}}{2}. \]

Similarly, since \( E \) is the midpoint of \( CA \), its position vector is

\[ \overrightarrow{OE} = \frac{\mathbf{c}}{2}. \]

Now, assume that \( G \) divides \( AD \) in the ratio \( \lambda : (1 - \lambda) \), meaning that the position vector of \( G \) along \( AD \) can be written using the section formula as

\[ \overrightarrow{OG} = \lambda \overrightarrow{OD} + (1 - \lambda) \overrightarrow{OA}. \]

Since \( A \) is our reference point \( O \), its position vector \( \overrightarrow{OA} \) is \( \mathbf{0} \), so substituting \( \overrightarrow{OD} \) gives

\[ \overrightarrow{OG} = \lambda \left(\frac{\mathbf{b} + \mathbf{c}}{2} \right). \]

Simplifying,

\[ \overrightarrow{OG} = \frac{\lambda}{2} (\mathbf{b} + \mathbf{c}). \]

Similarly, assume that \( G \) divides \( BE \) in the ratio \( \mu : (1 - \mu) \). Then, using the section formula, the position vector of \( G \) along \( BE \) is

\[ \overrightarrow{OG} = \mu \overrightarrow{OE} + (1 - \mu) \overrightarrow{OB}. \]

Substituting \( \overrightarrow{OE} \) and \( \overrightarrow{OB} \), we get

\[ \overrightarrow{OG} = \mu \left(\frac{\mathbf{c}}{2}\right) + (1 - \mu) \mathbf{b}. \]

Since \( G \) is the same point in both cases, these two expressions for \( \overrightarrow{OG} \) must be equal:

\[ \frac{\lambda}{2} (\mathbf{b} + \mathbf{c}) = \mu \frac{\mathbf{c}}{2} + (1 - \mu) \mathbf{b}. \]

Rearranging,

\[ \left(\frac{\lambda}{2} - (1 - \mu) \right) \mathbf{b} + \left(\frac{\lambda}{2} - \frac{\mu}{2} \right) \mathbf{c} = 0. \]

Since \( \mathbf{b} \) and \( \mathbf{c} \) are linearly independent, the only possibility for this equation to hold is for the coefficients of \( \mathbf{b} \) and \( \mathbf{c} \) to be individually zero. This gives the system of equations:

\[ \frac{\lambda}{2} - (1 - \mu) = 0, \quad \frac{\lambda}{2} - \frac{\mu}{2} = 0. \]

From the second equation,

\[ \lambda = \mu. \]

Substituting this into the first equation,

\[ \frac{\lambda}{2} = 1 - \lambda. \]

Solving for \( \lambda \),

\[ \frac{3\lambda}{2} = 1 \quad \Rightarrow \quad \lambda = \frac{2}{3}. \]

Since \( \lambda = \mu \), we also get \( \mu = \frac{2}{3} \).

Thus,

\[ AG:GD = \frac{2}{3} : \left(1 - \frac{2}{3}\right) = \frac{2}{3} : \frac{1}{3} = 2:1. \]

Similarly,

\[ BG:GE = \frac{2}{3} : \frac{1}{3} = 2:1. \]

Since the medians were chosen arbitrarily, the same result holds for the third median \( CF \), where \( F \) is the midpoint of \( AB \). Hence, the centroid divides all three medians in the ratio \( 2:1 \). \(\blacksquare\)
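As a sanity check of the result (not part of the proof), we can place the triangle at arbitrary coordinates and confirm that the point \( \overrightarrow{OG} = \tfrac{1}{3}(\mathbf{b} + \mathbf{c}) \) found above is the average of the three vertices and divides the median \( AD \) in the ratio \( 2:1 \). A NumPy sketch with assumed vertex coordinates:

```python
import numpy as np

A = np.array([0., 0.]);  B = np.array([4., 1.]);  C = np.array([1., 5.])
b, c = B - A, C - A                        # position vectors of B and C relative to A

G = A + (b + c) / 3                        # the intersection point found in the proof
D = (B + C) / 2                            # D is the midpoint of BC
print(np.allclose(G, (A + B + C) / 3))     # True: G is the centroid (average of the vertices)
print(np.allclose(G - A, 2 * (D - G)))     # True: AG : GD = 2 : 1
```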

Take another example:

Prove that the line joining any vertex of a rectangle to the midpoint of any side not passing through it trisects the diagonal not passing through it.

Proof:

Consider a rectangle \( ABCD \). Let \( DE \) be a line where \( E \) is the midpoint of \( AB \). We will prove that \( DE \) trisects the diagonal \( AC \).


Let us take \( A \) as the reference point, and for simplicity, denote it as \( O \). Assign position vectors:

\[ \overrightarrow{OB} = \mathbf{b}, \quad \overrightarrow{OD} = \mathbf{d}. \]

Since \( OBCD \) is a rectangle, we have,

\[ \overrightarrow{OC} = \overrightarrow{OB} + \overrightarrow{OD}. \]

Thus,

\[ \overrightarrow{OC} = \mathbf{b} + \mathbf{d}. \]

Since \( E \) is the midpoint of \( AB \), its position vector is

\[ \overrightarrow{OE} = \frac{\mathbf{b}}{2}. \]

Let \( P \) be the point where \( DE \) intersects \( AC \), and assume that \( P \) divides \( AC \) in the ratio \( \lambda : (1 - \lambda) \). Also, let \( P \) divide \( DE \) in the ratio \( \mu : (1 - \mu) \).

The position vector of \( P \) along \( AC \) using the section formula is

\[ \overrightarrow{OP} = \lambda \overrightarrow{OC} + (1 - \lambda) \overrightarrow{OA}. \]

Since \( \overrightarrow{OA} = \mathbf{0} \) (\( A \) is the reference point \( O \)), we substitute \( \overrightarrow{OC} = \mathbf{b} + \mathbf{d} \), giving

\[ \overrightarrow{OP} = \lambda (\mathbf{b} + \mathbf{d}). \]

On the other hand, the position vector of \( P \) along \( DE \) is

\[ \overrightarrow{OP} = \mu \overrightarrow{OE} + (1 - \mu) \overrightarrow{OD}. \]

Substituting \( \overrightarrow{OE} = \frac{\mathbf{b}}{2} \) and \( \overrightarrow{OD} = \mathbf{d} \), we get

\[ \overrightarrow{OP} = \mu \left(\frac{\mathbf{b}}{2}\right) + (1 - \mu) \mathbf{d}. \]

Since both expressions for \( \overrightarrow{OP} \) must be equal,

\[ \lambda (\mathbf{b} + \mathbf{d}) = \mu \frac{\mathbf{b}}{2} + (1 - \mu) \mathbf{d}. \]

Rearranging,

\[ (\lambda - \mu/2) \mathbf{b} + (\lambda - 1 + \mu) \mathbf{d} = 0. \]

Since \( \mathbf{b} \) and \( \mathbf{d} \) are linearly independent, the coefficients of both must be individually zero:

\[ \lambda - \frac{\mu}{2} = 0, \quad \lambda - 1 + \mu = 0. \]

From the first equation,

\[ \mu = 2\lambda. \]

Substituting this into the second equation,

\[ \lambda - 1 + 2\lambda = 0. \]

Simplifying,

\[ 3\lambda = 1 \quad \Rightarrow \quad \lambda = \frac{1}{3}. \]

Thus,

\[ AP : PC = \frac{1}{3} : \left(1 - \frac{1}{3} \right) = \frac{1}{3} : \frac{2}{3} = 1:2. \]

This proves that the diagonal \( AC \) is trisected by the line \( DE \). \(\blacksquare\)
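A quick coordinate check of the same configuration (side lengths assumed) confirms that the intersection point lies on \( DE \) and sits one third of the way along the diagonal \( AC \):

```python
import numpy as np

A = np.array([0., 0.]);  B = np.array([6., 0.])
D = np.array([0., 4.]);  C = B + D                 # rectangle ABCD with A at the origin
E = (A + B) / 2                                    # E is the midpoint of AB

P = A + (C - A) / 3                                # the point found above: lambda = 1/3
u, w = P - D, E - D
print(np.isclose(u[0]*w[1] - u[1]*w[0], 0.0))      # True: P lies on the line DE
print(np.allclose(C - P, 2 * (P - A)))             # True: AP : PC = 1 : 2
```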

Prove, by vector methods, that the point of intersection of the diagonals of a trapezium lies on the line passing through the midpoints of the parallel sides. Assume the trapezium is not a parallelogram.

Solution:

Consider a trapezium \( OABC \) with \( OA \parallel BC \). Let \( P \) be the intersection of the diagonals \( AC \) and \( OB \), and let \( Q \) and \( R \) be the midpoints of \( BC \) and \( OA \), respectively. We aim to prove that \( P \) lies on the line \( QR \).


Take \( O \) as the reference point, and let the position vectors of \( A, B, C \) be:

\[ \overrightarrow{OA} = \mathbf{a}, \quad \overrightarrow{OB} = \mathbf{b}, \quad \overrightarrow{OC} = \mathbf{c}. \]

Since \( OA \parallel BC \), we have

\[ \overrightarrow{CB} = k \overrightarrow{OA} \quad \text{for some scalar } k. \]

Thus,

\[ \overrightarrow{OB} - \overrightarrow{OC} = k \mathbf{a}. \]

That is:

\[ \mathbf{b} - \mathbf{c} = k \mathbf{a}. \]

Assume \( P \) divides \( OB \) in the ratio \( \lambda : (1 - \lambda) \), so the position vector of \( P \) along \( OB \) is

\[ \overrightarrow{OP} = \lambda \overrightarrow{OB} + (1 - \lambda)\,\mathbf{0} = \lambda \mathbf{b}. \]

Similarly, assume \( P \) divides \( AC \) in the ratio \( \mu : (1 - \mu) \), so the position vector of \( P \) along \( AC \) is

\[ \overrightarrow{OP} = \mu \overrightarrow{OC} + (1 - \mu) \overrightarrow{OA} = \mu \mathbf{c} + (1 - \mu) \mathbf{a}. \]

Equating the two expressions for \( \overrightarrow{OP} \), we obtain

\[ \lambda \mathbf{b} = \mu \mathbf{c} + (1 - \mu) \mathbf{a}. \]

Using \( \mathbf{b} - \mathbf{c} = k \mathbf{a} \), we substitute \( \mathbf{b} = \mathbf{c} + k \mathbf{a} \) to get

\[ \lambda (\mathbf{c} + k \mathbf{a}) = \mu \mathbf{c} + (1 - \mu) \mathbf{a}. \]

Expanding,

\[ \lambda k \mathbf{a} + \lambda \mathbf{c} = \mu \mathbf{c} + (1 - \mu) \mathbf{a}. \]

Since \( \mathbf{a} \) and \( \mathbf{c} \) are linearly independent, equating coefficients gives

\[ \lambda k - 1 + \mu = 0, \quad \lambda - \mu = 0. \]

Solving,

\[ \mu = \lambda, \quad \lambda k - 1 + \lambda = 0. \]
\[ \lambda (k + 1) = 1 \quad \Rightarrow \quad \lambda = \frac{1}{1+k}, \quad \mu = \frac{1}{1+k}. \]

Thus,

\[ \overrightarrow{OP} = \frac{1}{1+k} \mathbf{b}. \]

Now, the position vector of \( Q \), the midpoint of \( BC \), is

\[ \overrightarrow{OQ} = \frac{\mathbf{b} + \mathbf{c}}{2}. \]

The position vector of \( R \), the midpoint of \( OA \), is

\[ \overrightarrow{OR} = \frac{\mathbf{a}}{2}. \]

Now, calculating \( \overrightarrow{QP} \):

\[ \overrightarrow{QP} = \overrightarrow{OP} - \overrightarrow{OQ}. \]

Substituting \( \overrightarrow{OP} = \frac{1}{1+k} \mathbf{b} \) and \( \overrightarrow{OQ} = \frac{\mathbf{b} + \mathbf{c}}{2} \),

\[ \overrightarrow{QP} = \frac{1}{1+k} \mathbf{b} - \frac{\mathbf{b} + \mathbf{c}}{2}. \]

Rewriting,

\[ \overrightarrow{QP} = \frac{2\mathbf{b} - (1+k) \mathbf{b} - (1+k) \mathbf{c}}{2(1+k)}. \]
\[ \overrightarrow{QP} = \frac{(1-k) \mathbf{b} - (1+k) \mathbf{c}}{2(1+k)}. \]

Now, calculating \( \overrightarrow{PR} \):

\[ \overrightarrow{PR} = \frac{\mathbf{a}}{2} - \frac{\mathbf{b}}{1+k}. \]
\[ \overrightarrow{PR} = \frac{(1+k) \mathbf{a} - 2\mathbf{b}}{2(1+k)}. \]

Since \( \mathbf{b} - \mathbf{c} = k \mathbf{a} \) and \( k \neq 0 \) (the side \( CB \) is a nonzero vector), we have \( \mathbf{a} = \frac{\mathbf{b} - \mathbf{c}}{k} \), which lets us eliminate \( \mathbf{a} \):

\[ \overrightarrow{PR} = \frac{(1+k) \left(\frac{\mathbf{b} - \mathbf{c}}{k} \right) - 2 \mathbf{b}}{2(1+k)}. \]

Simplifying,

\[ \overrightarrow{PR} = \frac{(1+k) (\mathbf{b} - \mathbf{c}) - 2k \mathbf{b}}{2k(1+k)}. \]
\[ \overrightarrow{PR} = \frac{(1-k) \mathbf{b} - (1+k) \mathbf{c}}{2k(1+k)}. \]

Since \( \overrightarrow{QP} = k\,\overrightarrow{PR} \), the vectors \( \overrightarrow{QP} \) and \( \overrightarrow{PR} \) are parallel; as they share the point \( P \), the points \( Q \), \( P \), and \( R \) are collinear. Thus \( P \) lies on \( QR \), proving the required result. \(\blacksquare\)
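A coordinate check with an assumed sample trapezium confirms the conclusion: the intersection of the diagonals lies on the segment joining the midpoints of the parallel sides.

```python
import numpy as np

O = np.array([0., 0.]);  A = np.array([6., 0.])    # OA is one parallel side
C = np.array([1., 3.])
k = 2 / 3                                          # assumed ratio CB = k * OA, with k != 1
B = C + k * (A - O)                                # so BC is parallel to OA

P = O + (B - O) / (1 + k)                          # diagonal intersection found above
Q = (B + C) / 2                                    # midpoint of BC
R = (O + A) / 2                                    # midpoint of OA

u, w = P - Q, R - Q
print(np.isclose(u[0]*w[1] - u[1]*w[0], 0.0))      # True: P lies on the line QR
u2, w2 = P - A, C - A
print(np.isclose(u2[0]*w2[1] - u2[1]*w2[0], 0.0))  # True: P also lies on the diagonal AC
```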

We could write a more elegant proof once we know how to express the equation of a straight line in vector form.