Resolution of Vectors

Resolution of Vectors in Two Dimensions

Consider a vector \( \overrightarrow{AB} \) lying in a plane. This vector has a well-defined length and direction, represented visually by a directed line segment from point \( A \) to point \( B \). Why do we consider a vector in a plane? One natural reason is that we might want to add it to another vector lying in the same plane, using the parallelogram law of vector addition.

To describe this vector analytically, we introduce a Cartesian coordinate system in the plane, choosing an arbitrary origin and specifying the directions of the \( x \)-axis and \( y \)-axis. In this coordinate system, let the initial point of the vector be \( A(x_1, y_1) \) and the terminal point be \( B(x_2, y_2) \). The coordinates of these points depend on the chosen Cartesian system, meaning that if we change the coordinate system, these values will also change.

A vector can be completely described by the pair of points \( A(x_1, y_1) \) and \( B(x_2, y_2) \), since these contain all necessary information. The magnitude of the vector is simply the Euclidean distance between these two points:

\[ |\overrightarrow{AB}| = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}. \]

The direction can be determined using coordinate geometry by computing the angle it makes with the positive \( x \)-axis. The key idea here is that once a vector is described analytically in terms of its coordinates, we no longer need to rely on a visual representation to perform operations on it. This is, in fact, the fundamental motivation behind coordinate geometry—converting geometric problems into algebraic form.
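The magnitude and direction described above can be computed directly from the endpoint coordinates. A minimal sketch in Python (the function name is illustrative; `math.atan2` is used rather than `tan θ = dy/dx` so that the angle is correct in every quadrant):

```python
import math

def magnitude_and_angle(A, B):
    """Return (|AB|, angle of AB with the positive x-axis in radians)."""
    dx, dy = B[0] - A[0], B[1] - A[1]
    length = math.hypot(dx, dy)   # sqrt(dx**2 + dy**2)
    angle = math.atan2(dy, dx)    # correct in all four quadrants
    return length, angle

# The vector from A(1, 1) to B(4, 5): a 3-4-5 right triangle.
length, angle = magnitude_and_angle((1, 1), (4, 5))
print(length)               # 5.0
print(math.degrees(angle))  # about 53.13
```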

Decomposition of a Vector into Components

A more systematic approach to representing a vector is to express it as a sum of two perpendicular components along the coordinate axes. To achieve this, we construct a right-angled triangle by drawing a line through \( A \) parallel to the \( x \)-axis and another line through \( B \) parallel to the \( y \)-axis. These lines intersect at some point \( C \), forming the right-angled triangle \( \triangle ABC \).

[Figure: the vector \( \overrightarrow{AB} \) with the right-angled triangle \( ABC \)]

By construction, the segment \( AC \) is parallel to the \( x \)-axis and has length

\[ AC = x_2 - x_1, \]

while the segment \( CB \) is parallel to the \( y \)-axis and has length

\[ CB = y_2 - y_1. \]

These segments correspond to the projections of the vector \( \overrightarrow{AB} \) onto the coordinate axes.

Now, let \( \hat{\mathbf{i}} \) and \( \hat{\mathbf{j}} \) be the unit vectors along the positive \( x \)-axis and \( y \)-axis, respectively. Since the segment \( AC \) runs along the direction of \( \hat{\mathbf{i}} \) with signed length \( x_2 - x_1 \), it can be expressed as the vector

\[ \overrightarrow{AC} = (x_2 - x_1) \hat{\mathbf{i}}. \]

Similarly, the segment \( CB \) runs along the direction of \( \hat{\mathbf{j}} \) with signed length \( y_2 - y_1 \), so it can be written as

\[ \overrightarrow{CB} = (y_2 - y_1) \hat{\mathbf{j}}. \]

By the triangle law of vector addition,

\[ \overrightarrow{AC} + \overrightarrow{CB} = \overrightarrow{AB}. \]

Substituting the expressions for \( \overrightarrow{AC} \) and \( \overrightarrow{CB} \), we obtain

\[ \overrightarrow{AB} = (x_2 - x_1) \hat{\mathbf{i}} + (y_2 - y_1) \hat{\mathbf{j}}. \]

This shows that the vector \( \overrightarrow{AB} \) depends only on the differences \( x_2 - x_1 \) and \( y_2 - y_1 \). If we shift this vector to another location in the plane while keeping these differences the same, the vector remains unchanged.

Effect of Translation on a Vector

Consider a vector \( \overrightarrow{AB} \) whose initial point is \( A(1,2) \) and terminal point is \( B(4,5) \) in a given Cartesian coordinate system. The vector is given by

\[ \overrightarrow{AB} = (4 - 1) \hat{\mathbf{i}} + (5 - 2) \hat{\mathbf{j}} = 3\hat{\mathbf{i}} + 3\hat{\mathbf{j}}. \]

Now, suppose we shift this vector one unit along the positive \( x \)-axis and three units along the negative \( y \)-axis. This means that both the initial and terminal points of the vector will undergo the same transformation:

  • The new initial point will be

    \[ (1+1, 2-3) = (2, -1). \]
  • The new terminal point will be

    \[ (4+1, 5-3) = (5, 2). \]

After the shift, the new vector \( \overrightarrow{A'B'} \) from \( A'(2, -1) \) to \( B'(5,2) \) is

\[ \overrightarrow{A'B'} = (5 - 2) \hat{\mathbf{i}} + (2 - (-1)) \hat{\mathbf{j}} = 3\hat{\mathbf{i}} + 3\hat{\mathbf{j}}. \]

Thus, the vector remains unchanged:

\[ \overrightarrow{A'B'} = \overrightarrow{AB}. \]

This confirms that translating a vector in the plane does not alter its length or direction. The vector remains the same, as it depends only on the difference of coordinates between the initial and terminal points, not their absolute positions in the coordinate system.
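The translation argument above can be checked numerically. A minimal sketch (the helper `components` is an illustrative name, not standard notation):

```python
def components(A, B):
    """Components (x2 - x1, y2 - y1) of the vector from A to B."""
    return (B[0] - A[0], B[1] - A[1])

A, B = (1, 2), (4, 5)
shift = (1, -3)   # one unit along +x, three units along -y

A2 = (A[0] + shift[0], A[1] + shift[1])   # new initial point (2, -1)
B2 = (B[0] + shift[0], B[1] + shift[1])   # new terminal point (5, 2)

print(components(A, B))    # (3, 3)
print(components(A2, B2))  # (3, 3) -- unchanged by the translation
```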

Example

In a Cartesian coordinate system, the vector \( \overrightarrow{AB} \) with initial point \( A(x_1, y_1) \) and terminal point \( B(x_2, y_2) \) is given by

\[ \overrightarrow{AB} = (x_2 - x_1) \hat{\mathbf{i}} + (y_2 - y_1) \hat{\mathbf{j}}. \]

Let us find the vectors for the given initial and terminal points:

  1. Initial Point: \( (-1, 0) \), Terminal Point: \( (2, 4) \)

    \[ \overrightarrow{AB} = (2 - (-1)) \hat{\mathbf{i}} + (4 - 0) \hat{\mathbf{j}} = 3\hat{\mathbf{i}} + 4\hat{\mathbf{j}}. \]
  2. Initial Point: \( (0, -1) \), Terminal Point: \( (-2,1) \)

    \[ \overrightarrow{CD} = (-2 - 0) \hat{\mathbf{i}} + (1 - (-1)) \hat{\mathbf{j}} = -2\hat{\mathbf{i}} + 2\hat{\mathbf{j}}. \]
  3. Initial Point: \( (-4, -3) \), Terminal Point: \( (-4,1) \)

    \[ \overrightarrow{EF} = (-4 - (-4)) \hat{\mathbf{i}} + (1 - (-3)) \hat{\mathbf{j}} = 0\hat{\mathbf{i}} + 4\hat{\mathbf{j}} = 4\hat{\mathbf{j}}. \]
  4. Initial Point: \( (3,0) \), Terminal Point: \( (5,-4) \)

    \[ \overrightarrow{GH} = (5 - 3) \hat{\mathbf{i}} + (-4 - 0) \hat{\mathbf{j}} = 2\hat{\mathbf{i}} - 4\hat{\mathbf{j}}. \]

Thus, the four vectors are:

\[ \overrightarrow{AB} = 3\hat{\mathbf{i}} + 4\hat{\mathbf{j}}, \quad \overrightarrow{CD} = -2\hat{\mathbf{i}} + 2\hat{\mathbf{j}}, \quad \overrightarrow{EF} = 4\hat{\mathbf{j}}, \quad \overrightarrow{GH} = 2\hat{\mathbf{i}} - 4\hat{\mathbf{j}}. \]


General Representation of a Vector

The unit vectors \( \hat{\mathbf{i}} \) and \( \hat{\mathbf{j}} \) are linearly independent because they are non-collinear. By the fundamental theorem of two-dimensional vector algebra, any vector in the plane can be expressed as a linear combination of two non-collinear vectors. It follows that any vector \( \overrightarrow{v} \) in this plane can be written in the form

\[ \overrightarrow{v} = x\hat{\mathbf{i}} + y\hat{\mathbf{j}}, \]

where \( x \) and \( y \) are the components of the vector along the \( x \)-axis and \( y \)-axis, respectively. We call this the resolved form of a vector.

Thus, every vector in a plane can be expressed uniquely in terms of its components along the coordinate axes, simplifying calculations and making algebraic operations on vectors straightforward.

Why Do We Resolve a Vector?

Resolving a vector into its components helps us work with it in a more mathematical and systematic way, rather than relying only on diagrams and geometric intuition. It makes vector operations easier to handle using algebra and gives us a clear way to analyze and calculate different properties of a vector.

  1. Makes Vector Calculations More Analytical

    Instead of dealing with a vector as just an arrow, resolving it into components lets us work with numbers and equations. This helps in solving problems involving addition, subtraction, projections, and transformations much more easily.

  2. Contains All Information About a Vector

    The resolved form of a vector gives us everything we need to fully understand it:

    • Length (Magnitude): The magnitude of the vector is found directly from its components using the formula

      \[ |\mathbf{v}| = \sqrt{x^2 + y^2} \]

      in two dimensions or

      \[ |\mathbf{v}| = \sqrt{x^2 + y^2 + z^2} \]

      in three dimensions.

    • Direction: The direction of the vector can be determined from its components using trigonometric ratios like

      \[ \tan\theta = \frac{y}{x}. \]
    • Angle Between Vectors: When vectors are in resolved form, their dot product and cross product can be used to find angles between them using simple algebraic formulas instead of complex geometric constructions.

  3. Simplifies Vector Operations

    • Adding and Subtracting Vectors: Once resolved, vectors can be added or subtracted component-wise, which is much easier than working with them geometrically.
    • Multiplying a Vector by a Number: If a vector is resolved, multiplying it by a number (scaling) simply means multiplying each component by that number.
    • Finding Projections: We can easily determine how much of one vector points in the direction of another by working with its components.

Position Vector

The position vector of a point \( P(x, y) \) with respect to the origin \( O \) is given by:

\[ \overrightarrow{OP} = x \hat{i} + y \hat{j}. \]

This represents the vector from the fixed reference point \( O \) to the point \( P(x, y) \) in the plane.

Vector from One Point to Another

We can use position vectors to write a vector from a point to another point.


Consider two points \( A(x_1, y_1) \) and \( B(x_2, y_2) \). The vector from \( A \) to \( B \), denoted as \( \overrightarrow{AB} \), is given by:

\[ \overrightarrow{AB} = \overrightarrow{OB} - \overrightarrow{OA}. \]

Since the position vector of \( A \) is:

\[ \overrightarrow{OA} = x_1 \hat{i} + y_1 \hat{j}, \]

and the position vector of \( B \) is:

\[ \overrightarrow{OB} = x_2 \hat{i} + y_2 \hat{j}, \]

subtracting these gives:

\[ \overrightarrow{AB} = (x_2 \hat{i} + y_2 \hat{j}) - (x_1 \hat{i} + y_1 \hat{j}). \]

Using the properties of vector subtraction, this simplifies to:

\[ \overrightarrow{AB} = (x_2 - x_1) \hat{i} + (y_2 - y_1) \hat{j}. \]
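The identity \( \overrightarrow{AB} = \overrightarrow{OB} - \overrightarrow{OA} \) can be computed directly with tuples as position vectors; a small sketch (the helper `vec_sub` is an illustrative name):

```python
def vec_sub(u, v):
    """Component-wise difference u - v."""
    return tuple(a - b for a, b in zip(u, v))

OA = (-1, 0)   # position vector of A(-1, 0)
OB = (2, 4)    # position vector of B(2, 4)

AB = vec_sub(OB, OA)
print(AB)  # (3, 4), i.e. 3i + 4j
```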

Magnitude of a Vector in Two Dimensions

Given a vector \(\overrightarrow{v} = x \hat{i} + y \hat{j}\), we want to find its magnitude. To find that, shift the vector \( \overrightarrow{v} \) so that its initial point is at the origin \( O \). Then, its terminal point has coordinates \( (x, y) \).

The magnitude of \( \overrightarrow{v} \), denoted as \( |\overrightarrow{v}| \), is defined as its length. Since the vector extends from \( O(0,0) \) to \( P(x,y) \), its length is given by the distance between the points \( O(0,0) \) and \( P(x,y) \):

\[ |\overrightarrow{v}| = \sqrt{x^2 + y^2}. \]

Thus, for any vector expressed in the form:

\[ \overrightarrow{v} = x \hat{i} + y \hat{j}, \]

its magnitude is always:

\[ |\overrightarrow{v}| = \sqrt{x^2 + y^2}. \]

Polar Form of a Vector in Two Dimensions

Consider a vector \( \overrightarrow{v} \) with magnitude \( r \). Shift \( \overrightarrow{v} \) so that its initial point is at the origin and its terminal point is at some point \( P \). Suppose the vector makes an angle \( \theta \) measured from the positive x-axis, following the usual convention of measuring angles in trigonometry.

From coordinate geometry, the coordinates of \( P \) in terms of \( r \) and \( \theta \) are:

\[ (x, y) = (r \cos\theta, r \sin\theta). \]

Thus, the vector \( \overrightarrow{v} \) can be written as:

\[ \overrightarrow{v} = r \cos\theta \hat{i} + r \sin\theta \hat{j}. \]

Rewriting,

\[ \overrightarrow{v} = r (\cos\theta \hat{i} + \sin\theta \hat{j}). \]


In this form, \( \overrightarrow{v} \) consists of two parts:

  • The magnitude \( r \), which represents the length of the vector.
  • The term inside the parentheses, which represents a unit vector in the direction of \( \overrightarrow{v} \).

Denoting this unit vector as \( \hat{v} \), we define:

\[ \hat{v} = \cos\theta \hat{i} + \sin\theta \hat{j}. \]

Since \( |\hat{v}| \) must be 1, we verify:

\[ |\hat{v}| = \sqrt{\cos^2\theta + \sin^2\theta} = \sqrt{1} = 1. \]

Thus, the vector \( \overrightarrow{v} \) can be expressed in polar form as:

\[ \overrightarrow{v} = r \hat{v} = r (\cos\theta \hat{i} + \sin\theta \hat{j}). \]

This form is particularly useful when dealing with rotations, circular motion, and transformations involving angles. It expresses the vector in terms of its magnitude and direction without explicitly using Cartesian coordinates.
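Converting between the resolved form \( x\hat{i} + y\hat{j} \) and the polar form \( r(\cos\theta\,\hat{i} + \sin\theta\,\hat{j}) \) can be sketched as follows (function names are illustrative; `atan2` recovers \( \theta \) in the correct quadrant):

```python
import math

def polar_to_components(r, theta):
    """(r, theta) -> (x, y) = (r cos theta, r sin theta)."""
    return (r * math.cos(theta), r * math.sin(theta))

def components_to_polar(x, y):
    """(x, y) -> (r, theta) with theta measured from the positive x-axis."""
    return (math.hypot(x, y), math.atan2(y, x))

# A vector of length 2 at 60 degrees from the positive x-axis:
x, y = polar_to_components(2, math.radians(60))
print(round(x, 6), round(y, 6))  # 1.0 1.732051

r, theta = components_to_polar(x, y)
print(round(r, 6), round(math.degrees(theta), 6))  # 2.0 60.0
```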

Operations on Vectors in Two Dimensions

Vectors in two dimensions can be added, subtracted, and scaled just like numbers, but they follow specific rules based on their components. Instead of relying on geometric constructions, we can perform these operations algebraically by working with their resolved forms.

Let \( \overrightarrow{v} \) and \( \overrightarrow{w} \) be two vectors given in component form:

\[ \overrightarrow{v} = x_1 \hat{i} + y_1 \hat{j}, \quad \overrightarrow{w} = x_2 \hat{i} + y_2 \hat{j}. \]

1. Scalar Multiplication

Multiplying a vector by a scalar changes its magnitude but does not change its direction, except when multiplying by a negative number, which reverses the direction.

If \( k \) is a scalar, then:

\[ k \overrightarrow{v} = k (x_1 \hat{i} + y_1 \hat{j}) = (k x_1) \hat{i} + (k y_1) \hat{j}. \]
  • If \( k > 0 \), the vector stretches in the same direction.
  • If \( k < 0 \), the vector reverses direction and scales accordingly.
  • If \( k = 0 \), the vector becomes the zero vector.

2. Addition of Two Vectors

The sum of two vectors is obtained by adding their corresponding components:

\[ \overrightarrow{v} + \overrightarrow{w} = (x_1 \hat{i} + y_1 \hat{j}) + (x_2 \hat{i} + y_2 \hat{j}). \]

Using properties of vector addition:

\[ \overrightarrow{v} + \overrightarrow{w} = (x_1 + x_2) \hat{i} + (y_1 + y_2) \hat{j}. \]

Thus, vector addition simply adds the x-components and y-components separately.

3. Subtraction of Two Vectors

Subtracting one vector from another is done by subtracting the corresponding components:

\[ \overrightarrow{v} - \overrightarrow{w} = (x_1 \hat{i} + y_1 \hat{j}) - (x_2 \hat{i} + y_2 \hat{j}). \]

This simplifies to:

\[ \overrightarrow{v} - \overrightarrow{w} = (x_1 - x_2) \hat{i} + (y_1 - y_2) \hat{j}. \]
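The three operations above reduce to simple component-wise arithmetic. A small sketch using tuples (not a full vector class; function names are illustrative):

```python
def scale(k, v):
    """Scalar multiplication: multiply each component by k."""
    return (k * v[0], k * v[1])

def add(v, w):
    """Vector addition: add corresponding components."""
    return (v[0] + w[0], v[1] + w[1])

def sub(v, w):
    """Vector subtraction: subtract corresponding components."""
    return (v[0] - w[0], v[1] - w[1])

v = (3, 4)   # 3i + 4j
w = (1, -2)  # i - 2j

print(scale(2, v))  # (6, 8)
print(add(v, w))    # (4, 2)
print(sub(v, w))    # (2, 6)
```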

Resolution of Vectors in Three Dimensions

In a two-dimensional Cartesian coordinate system, we can measure the location of a point only within a plane. However, if we wish to measure the position of a point outside this plane, we introduce a third axis, called the z-axis. This allows us to fully describe points in three-dimensional space.

In three dimensions, however, a question of orientation arises. When extending from a two-dimensional \( x \)-\( y \) plane, the \( z \)-axis can be taken in two different ways:

  1. The \( z \)-axis can be drawn coming out of the plane.
  2. The \( z \)-axis can be drawn going into the plane.


This results in two possible coordinate systems:

  • Right-Handed Coordinate System
  • Left-Handed Coordinate System


The difference between these two systems is in the relative orientation of the x, y, and z axes.

The Significance of This Ambiguity

This issue is not just a mathematical artifact but is also observed in nature. For instance, molecules in chemistry exhibit isomerism, where the same molecular formula can correspond to different spatial arrangements. In such cases, molecules with mirror-image configurations (such as left-handed and right-handed isomers) behave differently despite having identical atomic compositions.

Default System

In three-dimensional space, the right-handed coordinate system is the standard convention unless explicitly stated otherwise. This system is universally adopted in mathematics, physics, computer graphics, and engineering. In vector algebra, the cross product follows the right-hand rule, which also governs the direction of angular momentum and electromagnetic forces. Almost all 3D graphics engines, such as OpenGL and DirectX, assume a right-handed system for rendering and transformations, as do robotics frameworks and animation tools. Similarly, CAD software and mechanical design principles rely on this orientation for consistency in modeling and calculations. Since the right-handed system provides a natural and widely accepted reference, it is the default in all applications unless a left-handed system is explicitly specified.

Consider a point \( P \) in space. We set up a three-dimensional Cartesian coordinate system with the origin at some point and the \( x \)-, \( y \)-, and \( z \)-axes in some orientation. With respect to this system, the coordinates of \( P \) are measured as \( (x, y, z) \). For simplicity, assume \( x, y, \) and \( z \) are positive; this assumption does not affect anything fundamentally but makes visualization clearer.

First, let us understand how we measure the coordinates of a point in three-dimensional space. Unlike two dimensions, where a point’s position is determined by its distances along the \( x \)- and \( y \)-axes, in three dimensions a point's location requires an additional measurement along the \( z \)-axis.

To visualize this, draw three planes passing through \( P \) and parallel to the \( yz \)-, \( zx \)-, and \( xy \)-planes. This forms a cuboid with the origin \( O(0,0,0) \) and \( P \) on opposite ends of the diagonal of the cuboid. The structure of this cuboid helps us break down the coordinates of \( P \) into simpler segments along each axis.

On the \( x \)-axis, the vertex of the cuboid is \( A \), on the \( y \)-axis, it is \( B \), and on the \( z \)-axis, it is \( C \). If we drop a perpendicular from \( P \) to the \( x \)-axis, it will intersect at \( A \). Similarly, the perpendicular from \( P \) to the \( y \)-axis intersects at \( B \), and the one to the \( z \)-axis intersects at \( C \). These points help us define the three independent movements required to reach \( P \) from the origin.

[Figure: the cuboid with \( O \) and \( P \) at opposite ends of a diagonal]

There is another vertex \( D \) of the cuboid on the \( yz \)-plane. If we drop a normal from \( P \) to the \( yz \)-plane, we get \( D \). Similarly, when we drop normals onto the \( zx \)-plane and the \( xy \)-plane, we get the vertices \( E \) and \( F \) of the cuboid.

The coordinates of these points are derived as follows:

  • If \( OA = x \) units, then \( A \) has coordinates \( (x,0,0) \).
  • If \( OB = y \) units, then \( B \) has coordinates \( (0,y,0) \).
  • If \( OC = z \) units, then \( C \) has coordinates \( (0,0,z) \).
  • The point \( D \) has coordinates \( (0, y, z) \), which means that to move from \( O \) to \( D \), we move 0 units along the \( x \)-axis, \( y \) units along the \( y \)-axis, and \( z \) units along the \( z \)-axis.
  • The point \( E \) has coordinates \( (x, 0, z) \), and \( F \) has coordinates \( (x, y, 0) \).
  • Finally, \( P \) has coordinates \( (x, y, z) \).

This means that to move from \( O \) to \( P \), one possible path is:

  1. Move from \( O \) to \( A \) by \( x \) units along the \( x \)-axis.
  2. Move from \( A \) to \( F \) by \( y \) units along the \( y \)-axis.
  3. Move from \( F \) to \( P \) by \( z \) units along the \( z \)-axis.

Alternatively, different paths can be taken, such as \( O \) to \( C \), \( C \) to \( D \), \( D \) to \( P \). One may wonder: how many such paths are possible? Since the three axis-parallel moves can be performed in any order, there are \( 3! = 6 \) such paths along the edges of the cuboid.

If \( \hat{\mathbf{i}}, \hat{\mathbf{j}}, \) and \( \hat{\mathbf{k}} \) denote unit vectors along the positive \( x \)-, \( y \)-, and \( z \)-axes, respectively, then:

  • The vector \( \overrightarrow{OA} \) is given by

    \[ \overrightarrow{OA} = x \hat{\mathbf{i}}. \]
  • The vector \( \overrightarrow{AF} \), which is equal to \( \overrightarrow{OB} \), is given by

    \[ \overrightarrow{AF} = y \hat{\mathbf{j}}. \]
  • The vector \( \overrightarrow{FP} \), which is equal to \( \overrightarrow{OC} \), is given by

    \[ \overrightarrow{FP} = z \hat{\mathbf{k}}. \]

By the polygon law of vector addition,

\[ \overrightarrow{OA} + \overrightarrow{AF} + \overrightarrow{FP} = \overrightarrow{OP}. \]

Thus,

\[ \overrightarrow{OP} = x\hat{\mathbf{i}} + y\hat{\mathbf{j}} + z\hat{\mathbf{k}}. \]

This vector \( \overrightarrow{OP} \) is the position vector of point \( P \).

Length of \(\overrightarrow{OP}\)

In the right-angled triangle \( OAF \), where \( A \) is the right-angle vertex, we apply the Pythagorean theorem:

\[ OF = \sqrt{OA^2 + AF^2}. \]

Since \( AF = OB \), we substitute:

\[ OF = \sqrt{OA^2 + OB^2} = \sqrt{x^2 + y^2}. \]

Now, consider the right-angled triangle \( OFP \), where \( F \) is the right-angle vertex. Again, by the Pythagorean theorem,

\[ OP = \sqrt{OF^2 + FP^2}. \]

Since \( FP = OC \), substituting gives

\[ OP = \sqrt{OF^2 + OC^2} = \sqrt{(x^2 + y^2) + z^2}. \]

Thus,

\[ OP = \sqrt{x^2 + y^2 + z^2}. \]

Since \( OP = |\overrightarrow{OP}| \) is the magnitude of the position vector \( \overrightarrow{OP} = x\hat{\mathbf{i}} + y\hat{\mathbf{j}} + z\hat{\mathbf{k}} \), this confirms the formula for the magnitude of a vector in three-dimensional space:

\[ |\overrightarrow{OP}| = \sqrt{x^2 + y^2 + z^2}. \]
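The two-step Pythagorean argument above can be written out numerically: first the diagonal \( OF \) in the \( xy \)-plane, then the full diagonal \( OP \). A minimal sketch with a 3-4-12 cuboid chosen so the lengths come out exactly:

```python
import math

x, y, z = 3.0, 4.0, 12.0

OF = math.sqrt(x**2 + y**2)    # diagonal of the base rectangle: 5.0
OP = math.sqrt(OF**2 + z**2)   # diagonal of the cuboid: 13.0

print(OP)                              # 13.0
print(math.sqrt(x**2 + y**2 + z**2))   # 13.0, the same value computed directly
```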

Now, consider a vector \( \overrightarrow{AB} \) floating freely in space. With respect to our coordinate system, its initial point \( A \) has coordinates \( (x_1, y_1, z_1) \), and its terminal point \( B \) has coordinates \( (x_2, y_2, z_2) \).

We know that

\[ \overrightarrow{AB} = \overrightarrow{OB} - \overrightarrow{OA}. \]

Here, \( \overrightarrow{OA} \) and \( \overrightarrow{OB} \) are the position vectors of points \( A \) and \( B \), given by

\[ \overrightarrow{OA} = x_1 \hat{\mathbf{i}} + y_1 \hat{\mathbf{j}} + z_1 \hat{\mathbf{k}}, \]
\[ \overrightarrow{OB} = x_2 \hat{\mathbf{i}} + y_2 \hat{\mathbf{j}} + z_2 \hat{\mathbf{k}}. \]

Subtracting, we obtain

\[ \overrightarrow{AB} = (x_2 - x_1) \hat{\mathbf{i}} + (y_2 - y_1) \hat{\mathbf{j}} + (z_2 - z_1) \hat{\mathbf{k}}. \]

Thus, the vector \( \overrightarrow{AB} \) depends only on the differences \( (x_2 - x_1) \), \( (y_2 - y_1) \), and \( (z_2 - z_1) \), rather than on the absolute positions of \( A \) and \( B \) in space. This means that shifting the vector while keeping its length and direction unchanged does not alter its representation in terms of \( \hat{\mathbf{i}}, \hat{\mathbf{j}}, \) and \( \hat{\mathbf{k}} \).

This is also supported by the fundamental theorem of three-dimensional space: the vectors \( \hat{\mathbf{i}}, \hat{\mathbf{j}}, \) and \( \hat{\mathbf{k}} \) are non-coplanar, meaning they do not lie in the same plane. As a result, any vector in space can be written as a linear combination of these three vectors.

Thus, for some scalars \( x, y, \) and \( z \), any vector \( \mathbf{v} \) in three-dimensional space can be expressed as

\[ \mathbf{v} = x\hat{\mathbf{i}} + y\hat{\mathbf{j}} + z\hat{\mathbf{k}}. \]

However, as we saw above, these scalars \( x, y, \) and \( z \) have a geometric significance. The expression

\[ x \hat{\mathbf{i}} + y \hat{\mathbf{j}} + z \hat{\mathbf{k}} \]

is not just any arbitrary linear combination; it represents the position vector of the point \( (x, y, z) \) in three-dimensional space. This means that every point in space corresponds uniquely to a vector, and every vector corresponds to a point when considered as a position vector from the origin.

Vector Operations in Resolved Form

Given a vector in resolved form:

\[ \mathbf{v} = x\hat{\mathbf{i}} + y\hat{\mathbf{j}} + z\hat{\mathbf{k}}, \]

where \( x, y, z \) are the components of \( \mathbf{v} \) along the standard coordinate axes.

1. Length (Magnitude) of a Vector

The magnitude (or norm) of \( \mathbf{v} \) is given by

\[ |\mathbf{v}| = \sqrt{x^2 + y^2 + z^2}. \]

This follows from the Pythagorean theorem applied in three dimensions, as shown previously.

2. Unit Vector Parallel to \( \mathbf{v} \)

A unit vector in the direction of \( \mathbf{v} \) is given by

\[ \hat{\mathbf{v}} = \frac{\mathbf{v}}{|\mathbf{v}|} = \frac{x\hat{\mathbf{i}} + y\hat{\mathbf{j}} + z\hat{\mathbf{k}}}{\sqrt{x^2 + y^2 + z^2}}. \]

This ensures that \( |\hat{\mathbf{v}}| = 1 \), preserving the direction of \( \mathbf{v} \) while normalizing its length.
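Normalization as in the formula above can be sketched in a few lines (the function name `unit` is illustrative; note that the zero vector cannot be normalized, since it has no direction):

```python
import math

def unit(v):
    """Return the unit vector in the direction of v."""
    n = math.sqrt(sum(c * c for c in v))
    if n == 0:
        raise ValueError("the zero vector has no direction")
    return tuple(c / n for c in v)

v = (2, -1, 2)   # magnitude sqrt(4 + 1 + 4) = 3
u = unit(v)
print(u)  # approximately (0.667, -0.333, 0.667)
print(round(math.sqrt(sum(c * c for c in u)), 12))  # 1.0
```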

3. Scalar Multiplication (Scaling by \( \lambda \))

For any scalar \( \lambda \), scaling \( \mathbf{v} \) by \( \lambda \) results in

\[ \lambda \mathbf{v} = (\lambda x)\hat{\mathbf{i}} + (\lambda y)\hat{\mathbf{j}} + (\lambda z)\hat{\mathbf{k}}. \]

4. Vector Addition

For two vectors,

\[ \mathbf{a} = x_1\hat{\mathbf{i}} + y_1\hat{\mathbf{j}} + z_1\hat{\mathbf{k}}, \quad \mathbf{b} = x_2\hat{\mathbf{i}} + y_2\hat{\mathbf{j}} + z_2\hat{\mathbf{k}}, \]

their sum is

\[ \mathbf{a} + \mathbf{b} = (x_1 + x_2)\hat{\mathbf{i}} + (y_1 + y_2)\hat{\mathbf{j}} + (z_1 + z_2)\hat{\mathbf{k}}. \]

5. Vector Subtraction

The difference \( \mathbf{a} - \mathbf{b} \) is given by

\[ \mathbf{a} - \mathbf{b} = (x_1 - x_2)\hat{\mathbf{i}} + (y_1 - y_2)\hat{\mathbf{j}} + (z_1 - z_2)\hat{\mathbf{k}}. \]

This represents the vector from the terminal point of \( \mathbf{b} \) to the terminal point of \( \mathbf{a} \).

Resolution of Vectors in General

The fundamental theorem of three-dimensional space states that if \( \overrightarrow{a}, \overrightarrow{b}, \overrightarrow{c} \) are three linearly independent vectors, meaning they are non-coplanar, then any vector \( \overrightarrow{r} \) in space can be expressed as a linear combination of these three vectors:

\[ \overrightarrow{r} = x\overrightarrow{a} + y\overrightarrow{b} + z\overrightarrow{c}. \]

This means that any vector in space can be resolved along \( \overrightarrow{a}, \overrightarrow{b}, \overrightarrow{c} \). The vectors \( \overrightarrow{a}, \overrightarrow{b}, \overrightarrow{c} \) are called the basis vectors, and the scalars \( x, y, z \) are the coordinates of \( \overrightarrow{r} \) in this system.

If the vectors \( \overrightarrow{a}, \overrightarrow{b}, \overrightarrow{c} \) are mutually perpendicular and also unit vectors, they become particularly convenient for computations. Perpendicularity ensures that each component acts independently, and unit magnitude simplifies calculations. In this case, we often denote these unit vectors as \( \hat{a}, \hat{b}, \hat{c} \), and the resolution of \( \overrightarrow{r} \) takes the form:

\[ \overrightarrow{r} = x\hat{a} + y\hat{b} + z\hat{c}. \]

Geometrically, the coordinates \( x, y, z \) represent the projections of \( \overrightarrow{r} \) along \( \hat{a}, \hat{b}, \hat{c} \).

A particularly useful and intuitive choice for \( \hat{a}, \hat{b}, \hat{c} \) is when they are aligned with the Cartesian coordinate axes. Instead of using generic names, we introduce the standard unit vectors:

\[ \hat{i}, \hat{j}, \hat{k}. \]

These are the unit vectors along the positive \( x \)-, \( y \)-, and \( z \)-axes, respectively. Writing the resolution of \( \overrightarrow{r} \) in this system gives:

\[ \overrightarrow{r} = x\hat{i} + y\hat{j} + z\hat{k}. \]

This is simply the position vector of the point \( (x, y, z) \) in three-dimensional space.

The concept here is general. Whether we use \( \overrightarrow{a}, \overrightarrow{b}, \overrightarrow{c} \) or \( \hat{i}, \hat{j}, \hat{k} \), the idea remains the same: any vector in space can be written in terms of three independent directions. However, when the basis vectors are mutually perpendicular and unit vectors, the system becomes an orthonormal basis, which simplifies calculations significantly. This idea extends beyond three dimensions to higher mathematics, where spaces of any number of dimensions can be described using appropriate basis vectors.

Equality of Vectors

Let

\[ \overrightarrow{a} = a_1 \hat{i} + a_2 \hat{j} + a_3 \hat{k}, \quad \overrightarrow{b} = b_1 \hat{i} + b_2 \hat{j} + b_3 \hat{k}. \]

Then,

\[ \overrightarrow{a} = \overrightarrow{b} \iff a_1 = b_1, \quad a_2 = b_2, \quad a_3 = b_3. \]

Proof:

We start with the given equality

\[ \overrightarrow{a} = \overrightarrow{b}. \]

Substituting their resolved forms,

\[ a_1 \hat{i} + a_2 \hat{j} + a_3 \hat{k} = b_1 \hat{i} + b_2 \hat{j} + b_3 \hat{k}. \]

Rearranging,

\[ (a_1 - b_1) \hat{i} + (a_2 - b_2) \hat{j} + (a_3 - b_3) \hat{k} = \mathbf{0}. \]

Since \( \hat{i}, \hat{j}, \hat{k} \) are linearly independent, their linear combination can be zero if and only if each coefficient is zero:

\[ a_1 - b_1 = 0, \quad a_2 - b_2 = 0, \quad a_3 - b_3 = 0. \]

Thus,

\[ a_1 = b_1, \quad a_2 = b_2, \quad a_3 = b_3. \]

This completes the proof. \(\blacksquare\)

Collinearity of Vectors

Two vectors

\[ \overrightarrow{a} = a_1 \hat{i} + a_2 \hat{j} + a_3 \hat{k}, \quad \overrightarrow{b} = b_1 \hat{i} + b_2 \hat{j} + b_3 \hat{k} \]

are collinear if and only if their components are proportional:

\[ \frac{a_1}{b_1} = \frac{a_2}{b_2} = \frac{a_3}{b_3} = \lambda \]

for some scalar \( \lambda \in \mathbb{R} \) (with the convention that whenever a denominator is zero, the corresponding numerator must also be zero).

Proof:

We know that two vectors are collinear if and only if one is a scalar multiple of the other, i.e.,

\[ \overrightarrow{a} = \lambda \overrightarrow{b} \quad \text{for some } \lambda \in \mathbb{R}. \]

Substituting the resolved forms of \( \overrightarrow{a} \) and \( \overrightarrow{b} \),

\[ a_1 \hat{i} + a_2 \hat{j} + a_3 \hat{k} = \lambda (b_1 \hat{i} + b_2 \hat{j} + b_3 \hat{k}). \]

Expanding,

\[ a_1 \hat{i} + a_2 \hat{j} + a_3 \hat{k} = \lambda b_1 \hat{i} + \lambda b_2 \hat{j} + \lambda b_3 \hat{k}. \]

Equating the coefficients of \( \hat{i}, \hat{j}, \hat{k} \), we get the system of equations:

\[ a_1 = \lambda b_1, \quad a_2 = \lambda b_2, \quad a_3 = \lambda b_3. \]

Solving for \( \lambda \) in each equation, we obtain

\[ \frac{a_1}{b_1} = \frac{a_2}{b_2} = \frac{a_3}{b_3} = \lambda. \]

Thus, \( \overrightarrow{a} \) and \( \overrightarrow{b} \) are collinear if and only if this proportionality holds. \(\blacksquare\)
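In code, the proportionality test is safer to state without division (the ratios fail when a component of \( \overrightarrow{b} \) is zero). An equivalent check, sketched here with an illustrative function name: two vectors are collinear exactly when all the 2×2 "cross" determinants of their components vanish.

```python
def are_collinear(a, b):
    """True iff a and b (3-tuples of components) are scalar multiples."""
    (a1, a2, a3), (b1, b2, b3) = a, b
    return (a1 * b2 - a2 * b1 == 0 and
            a2 * b3 - a3 * b2 == 0 and
            a1 * b3 - a3 * b1 == 0)

print(are_collinear((2, -4, 6), (1, -2, 3)))  # True  (a = 2b)
print(are_collinear((2, -4, 6), (1, -2, 4)))  # False
```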

Collinearity of Three Points

Three points \( A, B, C \) are collinear if and only if the vectors \( \overrightarrow{AB} \) and \( \overrightarrow{BC} \) are collinear.

Proof:

Let \( A, B, C \) be three collinear points with position vectors \( \mathbf{a}, \mathbf{b}, \mathbf{c} \) with respect to some reference point \( O \).

We know from before that three points are collinear if and only if there exist scalars \( x, y, z \), not all zero, such that

\[ x \mathbf{a} + y \mathbf{b} + z \mathbf{c} = 0 \]

and

\[ x + y + z = 0. \]

Substituting \( y = -x - z \) in the first equation,

\[ x \mathbf{a} + (-x - z) \mathbf{b} + z \mathbf{c} = 0. \]

Rewriting,

\[ x \mathbf{a} - x \mathbf{b} - z \mathbf{b} + z \mathbf{c} = 0. \]

Factoring,

\[ x (\mathbf{a} - \mathbf{b}) + z (\mathbf{c} - \mathbf{b}) = 0. \]

This implies

\[ x \overrightarrow{AB} = z \overrightarrow{BC}. \]

Since \( x, y, z \) are not all zero and \( y = -x - z \), at least one of \( x \) and \( z \) is nonzero. Hence \( \overrightarrow{AB} \) and \( \overrightarrow{BC} \) are scalar multiples of each other, i.e., collinear, proving the statement. \(\blacksquare\)

Example

Prove that the points \( A(3, 1, 5) \), \( B(-1, -5, -3) \), and \( C(5, 4, 9) \) are collinear.

Solution:

The position vectors of \( A, B, \) and \( C \) with respect to the origin \( O \) are:

\[ \overrightarrow{OA} = 3\hat{i} + \hat{j} + 5\hat{k}, \quad \overrightarrow{OB} = -\hat{i} - 5\hat{j} - 3\hat{k}, \quad \overrightarrow{OC} = 5\hat{i} + 4\hat{j} + 9\hat{k}. \]

First, find \( \overrightarrow{AB} \):

\[ \overrightarrow{AB} = \overrightarrow{OB} - \overrightarrow{OA}. \]
\[ \overrightarrow{AB} = (-\hat{i} - 5\hat{j} - 3\hat{k}) - (3\hat{i} + \hat{j} + 5\hat{k}). \]
\[ \overrightarrow{AB} = -4\hat{i} - 6\hat{j} - 8\hat{k}. \]

Now, find \( \overrightarrow{BC} \):

\[ \overrightarrow{BC} = \overrightarrow{OC} - \overrightarrow{OB}. \]
\[ \overrightarrow{BC} = (5\hat{i} + 4\hat{j} + 9\hat{k}) - (-\hat{i} - 5\hat{j} - 3\hat{k}). \]
\[ \overrightarrow{BC} = 6\hat{i} + 9\hat{j} + 12\hat{k}. \]

Now, we check the collinearity of \( \overrightarrow{AB} \) and \( \overrightarrow{BC} \):

\[ \frac{-4}{6} = \frac{-6}{9} = \frac{-8}{12}. \]

Simplifying,

\[ \frac{-2}{3} = \frac{-2}{3} = \frac{-2}{3}. \]

Since the components are proportional, \( \overrightarrow{AB} = -\tfrac{2}{3} \overrightarrow{BC} \), so \( \overrightarrow{AB} \) and \( \overrightarrow{BC} \) are collinear. As they share the point \( B \), the points \( A, B, \) and \( C \) are collinear.

Thus, the resolution of vectors provides an algebraic method for checking the collinearity of three points.
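The proportionality check above can be automated: two vectors are collinear exactly when their cross product is the zero vector, which is equivalent to the component-wise ratio test. A quick numeric sketch using NumPy, with the points from the worked example:

```python
import numpy as np

# Position vectors of the three points from the example above
A = np.array([3, 1, 5])
B = np.array([-1, -5, -3])
C = np.array([5, 4, 9])

AB = B - A   # [-4, -6, -8]
BC = C - B   # [ 6,  9, 12]

# AB and BC are collinear iff their cross product vanishes,
# which encodes the ratio test -4/6 = -6/9 = -8/12.
print(np.cross(AB, BC))                  # [0 0 0]
print(np.allclose(np.cross(AB, BC), 0))  # True
```

The cross-product form avoids division, so it also handles the case where some component of \( \overrightarrow{BC} \) is zero.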

Coplanarity of Three Vectors

Three vectors

\[ \overrightarrow{a} = a_1 \hat{i} + a_2 \hat{j} + a_3 \hat{k}, \quad \overrightarrow{b} = b_1 \hat{i} + b_2 \hat{j} + b_3 \hat{k}, \quad \overrightarrow{c} = c_1 \hat{i} + c_2 \hat{j} + c_3 \hat{k} \]

are coplanar if and only if

\[ \begin{vmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{vmatrix} = 0. \]

Proof:

We know from the concept of linear dependence that three vectors are coplanar if and only if there exist scalars \( x, y, z \), not all zero, such that

\[ x\overrightarrow{a} + y\overrightarrow{b} + z\overrightarrow{c} = 0. \]

Substituting their resolved forms,

\[ x (a_1 \hat{i} + a_2 \hat{j} + a_3 \hat{k}) + y (b_1 \hat{i} + b_2 \hat{j} + b_3 \hat{k}) + z (c_1 \hat{i} + c_2 \hat{j} + c_3 \hat{k}) = 0. \]

Expanding,

\[ (a_1 x + b_1 y + c_1 z) \hat{i} + (a_2 x + b_2 y + c_2 z) \hat{j} + (a_3 x + b_3 y + c_3 z) \hat{k} = 0. \]

Since \( \hat{i}, \hat{j}, \hat{k} \) are linearly independent, their coefficients must individually be zero:

\[ a_1 x + b_1 y + c_1 z = 0, \]
\[ a_2 x + b_2 y + c_2 z = 0, \]
\[ a_3 x + b_3 y + c_3 z = 0. \]

This is a homogeneous system of linear equations in \( x, y, z \), and coplanarity guarantees a solution with \( x, y, z \) not all zero, that is, a nontrivial solution.

A homogeneous system has a nontrivial solution if and only if the determinant of its coefficient matrix is zero. Since a determinant equals that of its transpose, we may equally use the matrix whose rows are the components of \( \overrightarrow{a}, \overrightarrow{b}, \overrightarrow{c} \):

\[ \begin{vmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{vmatrix} = 0. \]

Conversely, if the determinant vanishes, the system admits a nontrivial solution \( (x, y, z) \), and reversing the steps yields a nontrivial linear relation among the vectors. Thus, the vectors \( \overrightarrow{a}, \overrightarrow{b}, \overrightarrow{c} \) are coplanar if and only if the determinant of their components is zero. \(\blacksquare\)
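The determinant criterion is easy to test numerically. Below is a minimal sketch using NumPy, with three example vectors constructed to be coplanar by taking \( \overrightarrow{b} = \overrightarrow{a} + \overrightarrow{c} \) (the specific components are chosen only for illustration):

```python
import numpy as np

# Components of three vectors; b = a + c forces coplanarity.
a = np.array([1.0, 2.0, 3.0])
c = np.array([0.0, 1.0, -1.0])
b = a + c

# Rows of M are the components of a, b, c.
M = np.vstack([a, b, c])
print(np.linalg.det(M))  # ~0, so the vectors are coplanar
```

In floating point the determinant may come out as a tiny number rather than exactly zero, so in practice one compares its absolute value against a small tolerance.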

The above result generalizes from the standard basis \( \hat{i}, \hat{j}, \hat{k} \) to resolution with respect to an arbitrary basis.

Let \( \overrightarrow{u}, \overrightarrow{v}, \overrightarrow{w} \) be three linearly independent (non-coplanar) vectors. Consider three vectors written as their linear combinations:

\[ \overrightarrow{a} = a_1 \overrightarrow{u} + a_2 \overrightarrow{v} + a_3 \overrightarrow{w}, \]
\[ \overrightarrow{b} = b_1 \overrightarrow{u} + b_2 \overrightarrow{v} + b_3 \overrightarrow{w}, \]
\[ \overrightarrow{c} = c_1 \overrightarrow{u} + c_2 \overrightarrow{v} + c_3 \overrightarrow{w}. \]

Suppose \( \overrightarrow{a}, \overrightarrow{b}, \overrightarrow{c} \) are coplanar. We will prove that

\[ \begin{vmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{vmatrix} = 0. \]

Since \( \overrightarrow{a}, \overrightarrow{b}, \overrightarrow{c} \) are coplanar, there exist scalars \( x, y, z \), not all zero, such that

\[ x\overrightarrow{a} + y\overrightarrow{b} + z\overrightarrow{c} = \overrightarrow{0}. \]

Substituting their resolved forms,

\[ x (a_1 \overrightarrow{u} + a_2 \overrightarrow{v} + a_3 \overrightarrow{w}) + y (b_1 \overrightarrow{u} + b_2 \overrightarrow{v} + b_3 \overrightarrow{w}) + z (c_1 \overrightarrow{u} + c_2 \overrightarrow{v} + c_3 \overrightarrow{w}) = \overrightarrow{0}. \]

Expanding,

\[ (a_1 x + b_1 y + c_1 z) \overrightarrow{u} + (a_2 x + b_2 y + c_2 z) \overrightarrow{v} + (a_3 x + b_3 y + c_3 z) \overrightarrow{w} = \overrightarrow{0}. \]

Since \( \overrightarrow{u}, \overrightarrow{v}, \overrightarrow{w} \) are linearly independent, their coefficients must individually be zero:

\[ a_1 x + b_1 y + c_1 z = 0, \]
\[ a_2 x + b_2 y + c_2 z = 0, \]
\[ a_3 x + b_3 y + c_3 z = 0. \]

This forms a homogeneous system of equations in \( x, y, z \) which, since \( x, y, z \) are not all zero, has a nontrivial solution.

Exactly as before, a homogeneous system has a nontrivial solution if and only if the determinant of its coefficient matrix (equivalently, of its transpose) is zero:

\[ \begin{vmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{vmatrix} = 0. \]

As before, the steps are reversible, so the vectors \( \overrightarrow{a}, \overrightarrow{b}, \overrightarrow{c} \) are coplanar if and only if this determinant is zero. \(\blacksquare\)

Coplanarity of Four Points

Four points \( A, B, C, \) and \( D \) are coplanar if and only if the vectors \( \overrightarrow{AB}, \overrightarrow{AC}, \overrightarrow{AD} \) are coplanar.

Proof:

Let the position vectors of \( A, B, C, D \) with respect to some reference point be \( \mathbf{a}, \mathbf{b}, \mathbf{c}, \mathbf{d} \), respectively.

Since \( A, B, C, D \) are coplanar, there exist scalars \( \lambda_1, \lambda_2, \lambda_3, \lambda_4 \), not all zero, such that

\[ \lambda_1 \mathbf{a} + \lambda_2 \mathbf{b} + \lambda_3 \mathbf{c} + \lambda_4 \mathbf{d} = 0 \]

with the additional constraint

\[ \lambda_1 + \lambda_2 + \lambda_3 + \lambda_4 = 0. \]

Rewriting \( \lambda_1 \) in terms of the other scalars:

\[ \lambda_1 = - (\lambda_2 + \lambda_3 + \lambda_4). \]

Substituting into the first equation:

\[ (-\lambda_2 - \lambda_3 - \lambda_4) \mathbf{a} + \lambda_2 \mathbf{b} + \lambda_3 \mathbf{c} + \lambda_4 \mathbf{d} = 0. \]

Rearranging:

\[ \lambda_2 (\mathbf{b} - \mathbf{a}) + \lambda_3 (\mathbf{c} - \mathbf{a}) + \lambda_4 (\mathbf{d} - \mathbf{a}) = 0. \]

This simplifies to:

\[ \lambda_2 \overrightarrow{AB} + \lambda_3 \overrightarrow{AC} + \lambda_4 \overrightarrow{AD} = 0. \]

Since \( \lambda_2, \lambda_3, \lambda_4 \) are not all zero (otherwise, \( \lambda_1 \) would also be zero, contradicting our assumption), this equation shows that the vectors \( \overrightarrow{AB}, \overrightarrow{AC}, \overrightarrow{AD} \) are linearly dependent, implying that they are coplanar.

The converse follows by reversing these steps: given a nontrivial relation \( \lambda_2 \overrightarrow{AB} + \lambda_3 \overrightarrow{AC} + \lambda_4 \overrightarrow{AD} = 0 \), setting \( \lambda_1 = -(\lambda_2 + \lambda_3 + \lambda_4) \) produces scalars, not all zero, satisfying both conditions. Thus, the four points \( A, B, C, D \) are coplanar if and only if the vectors \( \overrightarrow{AB}, \overrightarrow{AC}, \overrightarrow{AD} \) are coplanar. \(\blacksquare\)
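Combining this with the determinant criterion for three vectors gives a concrete test for four points: compute \( \overrightarrow{AB}, \overrightarrow{AC}, \overrightarrow{AD} \) and check whether the determinant of their components vanishes. A sketch with hypothetical points, where \( D \) is deliberately chosen in the plane of \( A, B, C \) as \( D = A + (B - A) + (C - A) \):

```python
import numpy as np

# Hypothetical points; D is constructed to lie in the plane of A, B, C.
A = np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 2.0, 0.0])
C = np.array([0.0, 0.0, 3.0])
D = A + (B - A) + (C - A)   # = B + C - A = [-1, 2, 3]

# A, B, C, D are coplanar iff det of the rows AB, AC, AD is zero.
M = np.vstack([B - A, C - A, D - A])
print(np.linalg.det(M))  # ~0, confirming the four points are coplanar
```

Replacing \( D \) by any point off that plane makes the determinant nonzero, which is a useful sanity check when applying the test.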

The proof is correct, but in this chapter, also try to develop a visual understanding and imagination; alongside algebra and proofs, this is the key to solving problems. That four coplanar points yield coplanar joining vectors can be seen readily by imagining the plane that contains the points.