Operations on Vectors
Vectors are mathematical objects that can be manipulated through well-defined operations. The two fundamental operations on vectors are vector addition and scalar multiplication. Vector addition combines two vectors to produce a new vector, incorporating both magnitude and direction in a meaningful way. Scalar multiplication, on the other hand, multiplies a vector by a real number (a scalar), which scales its magnitude and leaves its direction unchanged, except when the scalar is negative, in which case the sense is reversed.
Comparing Vectors: Equality of Vectors
Two vectors are said to be equal if and only if they satisfy the following three conditions simultaneously:
- They have the same line of support or parallel lines of support.
- They have the same length (magnitude).
- They have the same sense (direction from initial to terminal point).
This means that equality does not depend on where the vectors are located in space: two vectors with the same magnitude and direction are equal regardless of their position. As long as they satisfy the above conditions, they are identical in terms of their effect.
Scalar Multiplication
For a vector \( \overrightarrow{a} \) and a scalar \( k \), the product \( k \overrightarrow{a} \) is defined as a vector whose magnitude is scaled by \( |k| \) and whose direction depends on the sign of \( k \):
- If \( k > 0 \), the vector \( k\overrightarrow{a} \) has the same direction as \( \overrightarrow{a} \) and its length is \( k \) times the length of \( \overrightarrow{a} \).
- If \( k < 0 \), the vector \( k\overrightarrow{a} \) has the opposite sense of \( \overrightarrow{a} \) and its length is \( |k| \) times the length of \( \overrightarrow{a} \).
- If \( k = 0 \), then \( k\overrightarrow{a} \) is the null vector.
For example, if \( \overrightarrow{a} \) represents some vector, then \( 2\overrightarrow{a} \) is a vector in the same direction as \( \overrightarrow{a} \), but with twice the length. Similarly, \( -\overrightarrow{a} \) has the same length as \( \overrightarrow{a} \) but is directed oppositely.
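As a quick numerical illustration (a minimal sketch assuming NumPy is available; the components of \( \overrightarrow{a} \) below are arbitrary), scaling by 2 doubles the length while scaling by \( -1 \) preserves the length and reverses the direction:

```python
import numpy as np

a = np.array([3.0, 4.0])              # arbitrary example vector, |a| = 5

print(np.linalg.norm(2 * a))          # 10.0 -> 2a has twice the length of a
print(np.linalg.norm(-a))             # 5.0  -> -a has the same length as a
print(2 * a / np.linalg.norm(2 * a))  # [0.6 0.8]   -> same unit direction as a
print(-a / np.linalg.norm(-a))        # [-0.6 -0.8] -> direction reversed
```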
Properties of Scalar Multiplication
Scalar multiplication satisfies the following fundamental properties:
- Closure Property: For any vector \( \overrightarrow{a} \) and any scalar \( k \), the product \( k\overrightarrow{a} \) is also a vector.
- Associative Property with Scalars: For any two scalars \( k_1 \) and \( k_2 \) and a vector \( \overrightarrow{a} \),
  \[ (k_1 k_2) \overrightarrow{a} = k_1 (k_2 \overrightarrow{a}). \]
  This ensures that the order of scalar multiplication does not affect the result.
- Multiplicative Identity: For every vector \( \overrightarrow{a} \),
  \[ 1 \cdot \overrightarrow{a} = \overrightarrow{a}. \]
  This means that multiplying a vector by 1 does not change it.
- Multiplication by Zero: For any vector \( \overrightarrow{a} \),
  \[ 0 \cdot \overrightarrow{a} = \overrightarrow{0}. \]
  This ensures that scaling any vector by 0 results in the null vector.
- Effect of Negative Scalars: For any scalar \( k \) and vector \( \overrightarrow{a} \),
  \[ (-k) \overrightarrow{a} = -(k \overrightarrow{a}). \]
  This implies that multiplying a vector by a negative scalar reverses its sense.
- Effect on Magnitude: For any scalar \( k \) and vector \( \overrightarrow{a} \),
  \[ | k \overrightarrow{a} | = |k| \, | \overrightarrow{a} |. \]
  In particular, when \( k = -1 \),
  \[ | -\overrightarrow{a} | = |-1| \, | \overrightarrow{a} | = | \overrightarrow{a} |. \]
  This confirms that multiplying a vector by \( -1 \) reverses its direction but does not change its magnitude.
Unit Vector
A unit vector is a vector whose magnitude is exactly 1 unit. That is, for a unit vector \( \hat{a} \),
\[ | \hat{a} | = 1. \]
Unit vectors are generally denoted with a hat symbol, such as \( \hat{v} \), to distinguish them from arbitrary vectors.
Given any nonzero vector \( \overrightarrow{v} \), its magnitude is \( | \overrightarrow{v} | \), and the unit vector in the direction of \( \overrightarrow{v} \) is obtained by dividing \( \overrightarrow{v} \) by its magnitude:
\[ \hat{v} = \frac{\overrightarrow{v}}{| \overrightarrow{v} |}. \]
Since scalar multiplication scales the magnitude of a vector, the new vector \( \hat{v} \) has the property:
\[ | \hat{v} | = \frac{1}{| \overrightarrow{v} |} \, | \overrightarrow{v} | = 1. \]
Thus, \( \hat{v} \) is a unit vector in the direction of \( \overrightarrow{v} \). This construction ensures that every nonzero vector can be uniquely associated with a corresponding unit vector that maintains its direction but has a standard length of one unit.
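A minimal numerical sketch of this normalization (assuming NumPy; the vector chosen is arbitrary):

```python
import numpy as np

v = np.array([3.0, -4.0])         # arbitrary nonzero vector
v_hat = v / np.linalg.norm(v)     # divide v by its magnitude

print(v_hat)                      # [ 0.6 -0.8]
print(np.linalg.norm(v_hat))      # 1.0 -> unit length, same direction as v
```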
Addition of Vectors
Vectors are added geometrically by following the triangle law of addition. Given two vectors, say \( \overrightarrow{a} \) and \( \overrightarrow{b} \), their sum \( \overrightarrow{a} + \overrightarrow{b} \) is obtained as follows:
To construct the sum, the vector \( \overrightarrow{b} \) is moved without changing its direction so that its initial point coincides with the terminal point of \( \overrightarrow{a} \). The directed line segment from the initial point of \( \overrightarrow{a} \) to the terminal point of \( \overrightarrow{b} \) represents the resultant vector \( \overrightarrow{a} + \overrightarrow{b} \). This construction forms a triangle, which is why this method is called the triangle law of vector addition.
The vector sum \( \overrightarrow{a} + \overrightarrow{b} \) retains the properties of vector quantities—it has both magnitude and direction, and its value depends on the relative orientation of \( \overrightarrow{a} \) and \( \overrightarrow{b} \).
Given three points \( A \), \( B \), and \( C \) in space, the sum of the directed line segments \( \overrightarrow{AB} \) and \( \overrightarrow{BC} \) is given by:
\[ \overrightarrow{AB} + \overrightarrow{BC} = \overrightarrow{AC}. \]
This equation follows from the geometric construction where the terminal point of \( \overrightarrow{AB} \) coincides with the initial point of \( \overrightarrow{BC} \). The resultant vector \( \overrightarrow{AC} \) is the directed line segment extending from \( A \) to \( C \), effectively summing the displacements represented by \( \overrightarrow{AB} \) and \( \overrightarrow{BC} \).
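This relation is easy to verify numerically (a minimal sketch assuming NumPy; the coordinates of \( A \), \( B \), \( C \) are arbitrary):

```python
import numpy as np

# Arbitrary points A, B, C in space
A = np.array([1.0, 0.0, 2.0])
B = np.array([4.0, -1.0, 5.0])
C = np.array([2.0, 3.0, 0.0])

AB = B - A   # directed segment from A to B
BC = C - B   # directed segment from B to C
AC = C - A   # directed segment from A to C

print(np.allclose(AB + BC, AC))   # True: AB + BC = AC (triangle law)
```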
Properties of Vector Addition
- Closure Property: For any two vectors \( \overrightarrow{a} \) and \( \overrightarrow{b} \), the sum \( \overrightarrow{a} + \overrightarrow{b} \) is also a vector. Thus, vector addition is closed.
- Commutative Property: Vector addition satisfies the commutative law, meaning
  \[ \overrightarrow{a} + \overrightarrow{b} = \overrightarrow{b} + \overrightarrow{a}. \]
  The order of addition does not affect the resultant vector, as can be seen in the diagram below.
- Associative Property: Vector addition satisfies the associative law, meaning
  \[ (\overrightarrow{a} + \overrightarrow{b}) + \overrightarrow{c} = \overrightarrow{a} + (\overrightarrow{b} + \overrightarrow{c}). \]
  The way in which vectors are grouped does not affect the final sum.
  Geometrically, we arrange the vectors such that the head of \( \overrightarrow{a} \) coincides with the tail of \( \overrightarrow{b} \), and the head of \( \overrightarrow{b} \) coincides with the tail of \( \overrightarrow{c} \). In the figure, two different ways of summing the three vectors are shown:
  - On the left, we first add \( \overrightarrow{b} \) and \( \overrightarrow{c} \) to form the resultant \( \overrightarrow{b} + \overrightarrow{c} \), then add this to \( \overrightarrow{a} \), yielding the final sum \( \overrightarrow{a} + (\overrightarrow{b} + \overrightarrow{c}) \).
  - On the right, we first add \( \overrightarrow{a} \) and \( \overrightarrow{b} \) to form \( \overrightarrow{a} + \overrightarrow{b} \), then add \( \overrightarrow{c} \), yielding \( (\overrightarrow{a} + \overrightarrow{b}) + \overrightarrow{c} \).
  Since both constructions lead to the same final resultant vector, the associativity of vector addition is visually confirmed.
- Additive Identity: The null vector acts as the additive identity in vector addition:
  \[ \overrightarrow{a} + \overrightarrow{0} = \overrightarrow{a}. \]
  Adding the null vector to any vector does not change the vector.
- Additive Inverse: Every vector \( \overrightarrow{a} \) has an additive inverse, denoted as \( -\overrightarrow{a} \), such that
  \[ \overrightarrow{a} + (-\overrightarrow{a}) = \overrightarrow{0}. \]
  The sum of a vector and its additive inverse is always the null vector. The additive inverse \( -\overrightarrow{a} \) is a vector with the same magnitude as \( \overrightarrow{a} \) but opposite direction.
- Distributive Property: Vector addition satisfies the distributive property with respect to scalar multiplication. For any scalar \( k \) and vectors \( \overrightarrow{a} \) and \( \overrightarrow{b} \),
  \[ k (\overrightarrow{a} + \overrightarrow{b}) = k\overrightarrow{a} + k\overrightarrow{b}. \]
  Additionally, for any scalars \( k_1 \) and \( k_2 \),
  \[ (k_1 + k_2) \overrightarrow{a} = k_1 \overrightarrow{a} + k_2 \overrightarrow{a}. \]
  These properties ensure that scalar multiplication distributes over vector addition and that the scaling of a vector by a sum of scalars is equivalent to the sum of individual scalings.
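These properties can be checked numerically for particular vectors (a minimal sketch assuming NumPy; the vectors and scalars below are arbitrary choices):

```python
import numpy as np

a = np.array([1.0, 2.0, -1.0])
b = np.array([0.5, -3.0, 4.0])
c = np.array([2.0, 2.0, 2.0])
k, k1, k2 = 3.0, -2.0, 0.5

print(np.allclose(a + b, b + a))                    # commutativity
print(np.allclose((a + b) + c, a + (b + c)))        # associativity
print(np.allclose(a + np.zeros(3), a))              # additive identity
print(np.allclose(a + (-a), np.zeros(3)))           # additive inverse
print(np.allclose(k * (a + b), k * a + k * b))      # k(a + b) = ka + kb
print(np.allclose((k1 + k2) * a, k1 * a + k2 * a))  # (k1 + k2)a = k1 a + k2 a
```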
Polygon Law of Vector Addition
The polygon law of vector addition is an extension of the triangle law to the addition of multiple vectors. Given \( n \) vectors \( \overrightarrow{a_1}, \overrightarrow{a_2}, \dots, \overrightarrow{a_n} \), their sum is obtained by arranging them in sequence such that the head of each vector coincides with the tail of the next.
To find the resultant vector:
- Place the head of \( \overrightarrow{a_1} \) at the tail of \( \overrightarrow{a_2} \),
- Then place the head of \( \overrightarrow{a_2} \) at the tail of \( \overrightarrow{a_3} \),
- Continue this process for all vectors up to \( \overrightarrow{a_n} \).
The resultant vector is given by the directed line segment connecting the tail of \( \overrightarrow{a_1} \) to the head of \( \overrightarrow{a_n} \). This resultant represents the sum:
\[ \overrightarrow{a_1} + \overrightarrow{a_2} + \dots + \overrightarrow{a_n}. \]
The order in which the vectors are added does not change the resultant due to the associative and commutative properties of vector addition.
If a collection of vectors is arranged head to tail in such a way that the tail of the first vector coincides with the head of the last vector, a closed polygon is formed. In this case, the sum of all the vectors is the null vector, expressed as:
\[ \overrightarrow{a_1} + \overrightarrow{a_2} + \dots + \overrightarrow{a_n} = \overrightarrow{0}. \]
This follows directly from the polygon law of vector addition, where the resultant vector is the directed line segment from the tail of the first vector to the head of the last. Since these points coincide in a closed polygon, the resultant is the null vector.
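A small numerical sketch of the closed-polygon case (assuming NumPy; the polygon vertices are arbitrary):

```python
import numpy as np

# Arbitrary vertices of a closed polygon, listed in order
P = [np.array([0.0, 0.0]), np.array([2.0, 1.0]),
     np.array([3.0, 4.0]), np.array([-1.0, 3.0])]

# Edge vectors taken head to tail; the last edge returns to the starting vertex
edges = [P[(i + 1) % len(P)] - P[i] for i in range(len(P))]

print(np.allclose(sum(edges), np.zeros(2)))   # True: the edges sum to the null vector
```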
Parallelogram Law of Vector Addition
The parallelogram law of vector addition is essentially the triangle law of addition viewed from a different perspective. Given two vectors \( \overrightarrow{a} \) and \( \overrightarrow{b} \), instead of arranging them head to tail, we position them as co-initial vectors, meaning their tails coincide at the same point.
Let the common initial point be \( O \). Suppose the head of \( \overrightarrow{a} \) is \( A \) and the head of \( \overrightarrow{b} \) is \( B \). To construct the parallelogram, draw a copy of \( \overrightarrow{a} \) from \( B \) to \( C \) and a copy of \( \overrightarrow{b} \) from \( A \) to \( C \), such that \( OACB \) forms a parallelogram.
The diagonal \( \overrightarrow{OC} \) of the parallelogram, starting from \( O \), represents the resultant vector:
\[ \overrightarrow{OC} = \overrightarrow{a} + \overrightarrow{b}. \]
Since \( \overrightarrow{OA} = \overrightarrow{a} \) and \( \overrightarrow{OB} = \overrightarrow{b} \), the parallelogram law follows directly from the triangle law of vector addition, as the diagonal \( \overrightarrow{OC} \) can also be obtained by first going along \( \overrightarrow{OA} \) and then \( \overrightarrow{AC} \) (which is equal to \( \overrightarrow{OB} \)). This provides an alternative geometric approach to vector addition.
We can apply the parallelogram law of vector addition to two vectors at a time. The main advantage of this method is that it provides a clear geometric understanding of how the resultant vector \( \overrightarrow{c} = \overrightarrow{a} + \overrightarrow{b} \) is related to the original vectors \( \overrightarrow{a} \) and \( \overrightarrow{b} \).
To analyze the magnitude of \( \overrightarrow{c} \), we first express \( |\overrightarrow{c}| = |\overrightarrow{a} + \overrightarrow{b}| \). Assume that the angle between \( \overrightarrow{a} \) and \( \overrightarrow{b} \) is \( \theta \).
Now, construct the parallelogram \( OACB \) with:
- \( O \) as the common initial point,
- \( A \) as the head of \( \overrightarrow{a} \),
- \( B \) as the head of \( \overrightarrow{b} \),
- \( C \) as the opposite vertex completing the parallelogram.
To proceed with the magnitude calculation, extend \( OA \) beyond \( A \), and from \( C \), drop a perpendicular \( CN \) onto \( OA \) at point \( N \). This construction allows us to resolve \( \overrightarrow{b} \) into components along and perpendicular to \( \overrightarrow{a} \), facilitating the derivation of \( |\overrightarrow{c}| \).
In \( \triangle ONC \), we have:
\[ ON = OA + AN = | \overrightarrow{a} | + | \overrightarrow{b} | \cos \theta, \qquad CN = | \overrightarrow{b} | \sin \theta. \]
Applying the Pythagorean theorem in \( \triangle ONC \):
\[ OC^2 = ON^2 + CN^2. \]
Substituting the values of \( ON \) and \( CN \):
\[ OC^2 = \left( | \overrightarrow{a} | + | \overrightarrow{b} | \cos \theta \right)^2 + \left( | \overrightarrow{b} | \sin \theta \right)^2. \]
Expanding:
\[ OC^2 = | \overrightarrow{a} |^2 + 2 | \overrightarrow{a} | | \overrightarrow{b} | \cos \theta + | \overrightarrow{b} |^2 \cos^2 \theta + | \overrightarrow{b} |^2 \sin^2 \theta. \]
Using the identity \( \cos^2 \theta + \sin^2 \theta = 1 \), we simplify:
\[ OC^2 = | \overrightarrow{a} |^2 + | \overrightarrow{b} |^2 + 2 | \overrightarrow{a} | | \overrightarrow{b} | \cos \theta. \]
Thus, the magnitude of the resultant vector is:
\[ | \overrightarrow{c} | = | \overrightarrow{a} + \overrightarrow{b} | = \sqrt{ | \overrightarrow{a} |^2 + | \overrightarrow{b} |^2 + 2 | \overrightarrow{a} | | \overrightarrow{b} | \cos \theta }. \]
This formula gives the magnitude of the sum of two vectors in terms of their individual magnitudes and the angle \( \theta \) between them.
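A quick numerical check of this formula (a minimal sketch assuming NumPy; the vectors are arbitrary, and the dot product is used here only as a convenient way to obtain \( \theta \)):

```python
import numpy as np

a = np.array([2.0, 0.0])
b = np.array([1.0, 3.0])                      # arbitrary vectors
na, nb = np.linalg.norm(a), np.linalg.norm(b)
theta = np.arccos(np.dot(a, b) / (na * nb))   # angle between a and b

direct  = np.linalg.norm(a + b)
formula = np.sqrt(na**2 + nb**2 + 2 * na * nb * np.cos(theta))

print(np.isclose(direct, formula))   # True
```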
The relative direction of the resultant vector \( \overrightarrow{c} = \overrightarrow{a} + \overrightarrow{b} \) can also be determined. Let \( \alpha \) be the angle between \( \overrightarrow{c} \) and \( \overrightarrow{a} \).
From the right triangle \( \triangle ONC \), we use the definition of the tangent function:
\[ \tan \alpha = \frac{CN}{ON}. \]
Since we have already determined:
\[ ON = | \overrightarrow{a} | + | \overrightarrow{b} | \cos \theta, \qquad CN = | \overrightarrow{b} | \sin \theta, \]
we substitute these values:
\[ \tan \alpha = \frac{ | \overrightarrow{b} | \sin \theta }{ | \overrightarrow{a} | + | \overrightarrow{b} | \cos \theta }. \]
This formula provides the direction of the resultant vector \( \overrightarrow{c} \) relative to \( \overrightarrow{a} \), given the angle \( \theta \) between \( \overrightarrow{a} \) and \( \overrightarrow{b} \).
Similarly, if \( \beta \) is the angle between \( \overrightarrow{c} \) and \( \overrightarrow{b} \), we can determine it using the right triangle \( \triangle OMC \) (where \( M \) is the foot of the perpendicular from \( C \) onto \( OB \), analogous to \( N \) in the previous case).
By applying the definition of the tangent function,
\[ \tan \beta = \frac{CM}{OM}. \]
From the construction, we know:
\[ OM = | \overrightarrow{b} | + | \overrightarrow{a} | \cos \theta, \qquad CM = | \overrightarrow{a} | \sin \theta. \]
Thus, substituting these values:
\[ \tan \beta = \frac{ | \overrightarrow{a} | \sin \theta }{ | \overrightarrow{b} | + | \overrightarrow{a} | \cos \theta }. \]
This equation gives the direction of the resultant vector \( \overrightarrow{c} \) relative to \( \overrightarrow{b} \), completing the analysis of both reference angles.
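The two direction formulas can also be verified numerically (a minimal sketch assuming NumPy; the vectors are arbitrary, and the dot product is used only to measure the angles for the check):

```python
import numpy as np

def angle(u, v):
    """Angle between two vectors, used here only for verification."""
    return np.arccos(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

a = np.array([2.0, 0.0])
b = np.array([1.0, 3.0])   # arbitrary vectors
c = a + b
na, nb = np.linalg.norm(a), np.linalg.norm(b)
theta, alpha, beta = angle(a, b), angle(c, a), angle(c, b)

print(np.isclose(np.tan(alpha), nb * np.sin(theta) / (na + nb * np.cos(theta))))  # True
print(np.isclose(np.tan(beta),  na * np.sin(theta) / (nb + na * np.cos(theta))))  # True
```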
Subtraction of Vectors
Vector subtraction is defined as
\[ \overrightarrow{a} - \overrightarrow{b} = \overrightarrow{a} + (-\overrightarrow{b}), \]
where \( -\overrightarrow{b} \) is the additive inverse of \( \overrightarrow{b} \). This means that subtracting \( \overrightarrow{b} \) from \( \overrightarrow{a} \) is equivalent to adding \( -\overrightarrow{b} \) to \( \overrightarrow{a} \).
To construct the vector difference geometrically, consider two vectors \( \overrightarrow{a} \) and \( \overrightarrow{b} \). Instead of adding \( \overrightarrow{b} \), reverse its direction to obtain \( -\overrightarrow{b} \). The vector \( -\overrightarrow{b} \) is equal in magnitude to \( \overrightarrow{b} \) but points in the opposite direction. Now, place \( \overrightarrow{a} \) and \( -\overrightarrow{b} \) as co-initial vectors and complete the parallelogram with sides \( \overrightarrow{a} \) and \( -\overrightarrow{b} \). The diagonal \( \overrightarrow{OC} \) of this parallelogram \( OB'CA \) is the resultant vector \( \overrightarrow{a} - \overrightarrow{b} \).
By basic geometric properties, it can be proved that the vector \( \overrightarrow{OC} = \overrightarrow{a} - \overrightarrow{b} \) is simply the directed line segment from the head of \( \overrightarrow{b} \) to the head of \( \overrightarrow{a} \) (when the two vectors are drawn co-initially), that is, \( \overrightarrow{BA} \). That is,
\[ \overrightarrow{a} - \overrightarrow{b} = \overrightarrow{BA}. \]
Thus, the vector difference \( \overrightarrow{a} - \overrightarrow{b} \) is the vector starting at the head of \( \overrightarrow{b} \) and ending at the head of \( \overrightarrow{a} \).
If the angle between \( \overrightarrow{a} \) and \( \overrightarrow{b} \) when they are co-initial is \( \theta \), then the angle between \( \overrightarrow{a} \) and \( -\overrightarrow{b} \) is \( \pi - \theta \).
Since vector subtraction is defined as
\[ \overrightarrow{a} - \overrightarrow{b} = \overrightarrow{a} + (-\overrightarrow{b}), \]
the magnitude of \( \overrightarrow{a} - \overrightarrow{b} \) follows from the standard magnitude formula for vector addition:
\[ | \overrightarrow{a} + \overrightarrow{b} | = \sqrt{ | \overrightarrow{a} |^2 + | \overrightarrow{b} |^2 + 2 | \overrightarrow{a} | | \overrightarrow{b} | \cos \theta }. \]
Applying this formula with \( -\overrightarrow{b} \) in place of \( \overrightarrow{b} \), where the angle between \( \overrightarrow{a} \) and \( -\overrightarrow{b} \) is \( \pi - \theta \):
\[ | \overrightarrow{a} - \overrightarrow{b} | = \sqrt{ | \overrightarrow{a} |^2 + | \overrightarrow{b} |^2 + 2 | \overrightarrow{a} | | \overrightarrow{b} | \cos(\pi - \theta) }. \]
Using the identity \( \cos(\pi - \theta) = -\cos \theta \), this simplifies to:
\[ | \overrightarrow{a} - \overrightarrow{b} | = \sqrt{ | \overrightarrow{a} |^2 + | \overrightarrow{b} |^2 - 2 | \overrightarrow{a} | | \overrightarrow{b} | \cos \theta }. \]
This formula gives the magnitude of the vector difference in terms of the magnitudes of \( \overrightarrow{a} \) and \( \overrightarrow{b} \) and the angle \( \theta \) between them when placed as co-initial vectors.
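As with the sum, this can be verified for particular vectors (a minimal sketch assuming NumPy; the vectors are arbitrary and the dot product is used only to obtain \( \theta \)):

```python
import numpy as np

a = np.array([2.0, 0.0])
b = np.array([1.0, 3.0])   # arbitrary vectors
na, nb = np.linalg.norm(a), np.linalg.norm(b)
theta = np.arccos(np.dot(a, b) / (na * nb))

direct  = np.linalg.norm(a - b)
formula = np.sqrt(na**2 + nb**2 - 2 * na * nb * np.cos(theta))

print(np.isclose(direct, formula))   # True
```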
This geometric understanding of vector addition and subtraction is particularly useful in analyzing properties of the parallelogram formed by two co-initial vectors \( \overrightarrow{a} \) and \( \overrightarrow{b} \). The relationships between the diagonals and the sides of the parallelogram lead to important conclusions:
- If \( |\overrightarrow{a} + \overrightarrow{b}| = |\overrightarrow{a} - \overrightarrow{b}| \), then the lengths of both diagonals are equal. This implies that the parallelogram must be a rectangle, since only in a rectangle are the diagonals of equal length. In a rectangle, adjacent sides are perpendicular, so it follows that \( \overrightarrow{a} \) is perpendicular to \( \overrightarrow{b} \). The converse is also true: if \( \overrightarrow{a} \) is perpendicular to \( \overrightarrow{b} \), then the parallelogram formed must be a rectangle, and thus the diagonals must have equal length.
- If \( \overrightarrow{a} + \overrightarrow{b} \) is perpendicular to \( \overrightarrow{a} - \overrightarrow{b} \), then the diagonals of the parallelogram are perpendicular to each other. This condition is satisfied only when the parallelogram is a rhombus, as in a rhombus, the diagonals are always perpendicular. Since a rhombus has all sides equal in length, it follows that \( |\overrightarrow{a}| = |\overrightarrow{b}| \).
- If the relation
  \[ |\overrightarrow{a}|^2 + |\overrightarrow{b}|^2 = |\overrightarrow{a} + \overrightarrow{b}|^2 \]
  or
  \[ |\overrightarrow{a}|^2 + |\overrightarrow{b}|^2 = |\overrightarrow{a} - \overrightarrow{b}|^2 \]
  holds, then by the magnitude formula for the sum or difference of two vectors, \( \cos \theta = 0 \), so the parallelogram is a rectangle. This again confirms that \( \overrightarrow{a} \) and \( \overrightarrow{b} \) are perpendicular.
Parallelogram Identity
The parallelogram identity states that for any two vectors \( \overrightarrow{a} \) and \( \overrightarrow{b} \),
\[ | \overrightarrow{a} + \overrightarrow{b} |^2 + | \overrightarrow{a} - \overrightarrow{b} |^2 = 2 \left( | \overrightarrow{a} |^2 + | \overrightarrow{b} |^2 \right). \]
This identity follows directly from the parallelogram law of vector addition and provides an important relation between the sum and difference of two vectors.
To prove this, we use the squared magnitude formula:
\[ | \overrightarrow{a} + \overrightarrow{b} |^2 = | \overrightarrow{a} |^2 + | \overrightarrow{b} |^2 + 2 | \overrightarrow{a} | | \overrightarrow{b} | \cos \theta. \]
Similarly,
\[ | \overrightarrow{a} - \overrightarrow{b} |^2 = | \overrightarrow{a} |^2 + | \overrightarrow{b} |^2 - 2 | \overrightarrow{a} | | \overrightarrow{b} | \cos \theta. \]
Adding these two equations:
\[ | \overrightarrow{a} + \overrightarrow{b} |^2 + | \overrightarrow{a} - \overrightarrow{b} |^2 = 2 | \overrightarrow{a} |^2 + 2 | \overrightarrow{b} |^2 + 2 | \overrightarrow{a} | | \overrightarrow{b} | \cos \theta - 2 | \overrightarrow{a} | | \overrightarrow{b} | \cos \theta. \]
Since the \( 2 | \overrightarrow{a} | | \overrightarrow{b} | \cos \theta \) terms cancel, we get:
\[ | \overrightarrow{a} + \overrightarrow{b} |^2 + | \overrightarrow{a} - \overrightarrow{b} |^2 = 2 \left( | \overrightarrow{a} |^2 + | \overrightarrow{b} |^2 \right). \]
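A quick numerical sanity check of the identity (a minimal sketch assuming NumPy; the vectors are arbitrary):

```python
import numpy as np

a = np.array([1.0, 4.0, -2.0])
b = np.array([3.0, -1.0, 5.0])   # arbitrary vectors

lhs = np.linalg.norm(a + b)**2 + np.linalg.norm(a - b)**2
rhs = 2 * (np.linalg.norm(a)**2 + np.linalg.norm(b)**2)

print(np.isclose(lhs, rhs))   # True: parallelogram identity holds
```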
Position Vector
To describe the location of a point in space, a reference point is needed. Without a fixed reference, the position of a point has no meaningful description. Suppose a point \( P \) is given, and we want to specify its location precisely. A natural way to do this is to relate it to a fixed point \( O \), which serves as the origin or reference.
To measure the position of \( P \) relative to \( O \), consider the directed line segment \( \overrightarrow{OP} \). This directed segment uniquely defines the location of \( P \) in terms of both magnitude (distance from \( O \)) and direction (orientation from \( O \) to \( P \)). The vector \( \overrightarrow{OP} \) is called the position vector of \( P \).
Thus, a position vector represents the position of a point in space relative to a reference point by assigning it a directed line segment. The magnitude \( | \overrightarrow{OP} | \) gives the distance of \( P \) from \( O \), while its direction specifies how \( P \) is oriented with respect to \( O \).
Consider two points \( P_1 \) and \( P_2 \) in space, with a fixed reference point \( O \). The position vector of \( P_1 \) is the directed line segment \( \overrightarrow{OP_1} \), and the position vector of \( P_2 \) is \( \overrightarrow{OP_2} \).
By the triangle law of vector addition, placing the head of \( \overrightarrow{OP_1} \) at the tail of \( \overrightarrow{P_1P_2} \), the relation
\[ \overrightarrow{OP_1} + \overrightarrow{P_1P_2} = \overrightarrow{OP_2} \]
holds. Rearranging, the directed line segment from \( P_1 \) to \( P_2 \) is given by
\[ \overrightarrow{P_1P_2} = \overrightarrow{OP_2} - \overrightarrow{OP_1}. \]
The distance between \( P_1 \) and \( P_2 \) is the magnitude of this directed segment:
\[ P_1P_2 = | \overrightarrow{P_1P_2} | = | \overrightarrow{OP_2} - \overrightarrow{OP_1} |. \]
This expresses the displacement from \( P_1 \) to \( P_2 \) as the difference of their position vectors, and its magnitude gives the distance between the two points.
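A short numerical sketch (assuming NumPy; the reference point and the coordinates of \( P_1 \) and \( P_2 \) are arbitrary):

```python
import numpy as np

O  = np.array([0.0, 0.0, 0.0])   # arbitrary reference point
P1 = np.array([1.0, 2.0, 2.0])
P2 = np.array([4.0, 6.0, 2.0])   # arbitrary points

OP1 = P1 - O                     # position vector of P1
OP2 = P2 - O                     # position vector of P2

P1P2 = OP2 - OP1                 # displacement from P1 to P2
print(np.linalg.norm(P1P2))      # 5.0 -> distance between P1 and P2
```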
A reference point is any arbitrarily chosen point in space that serves as the origin for measuring positions. It does not have to be fixed at any specific location, and its choice is entirely dependent on the problem or context.
Collinear Vectors
Two vectors \( \overrightarrow{a} \) and \( \overrightarrow{b} \) are said to be collinear if and only if they have the same or parallel lines of support. Their magnitudes and sense do not affect collinearity; only their directional alignment matters. This should not be confused with collinear points, which refer to points lying on the same straight line.
To denote that two vectors \( \overrightarrow{a} \) and \( \overrightarrow{b} \) are collinear, we write:
\[ \overrightarrow{a} \parallel \overrightarrow{b}. \]
Now that scalar multiplication has been introduced, we can express collinearity algebraically. Suppose \( \overrightarrow{a} \) and \( \overrightarrow{b} \) are non-null vectors. The unit vector along \( \overrightarrow{a} \) is given by:
\[ \hat{a} = \frac{\overrightarrow{a}}{| \overrightarrow{a} |}, \]
and the unit vector along \( \overrightarrow{b} \) is:
\[ \hat{b} = \frac{\overrightarrow{b}}{| \overrightarrow{b} |}. \]
If \( \overrightarrow{a} \) and \( \overrightarrow{b} \) are collinear, then their unit vectors must be either equal or negatives of each other, meaning:
\[ \frac{\overrightarrow{a}}{| \overrightarrow{a} |} = \pm \frac{\overrightarrow{b}}{| \overrightarrow{b} |}. \]
Multiplying both sides by \( | \overrightarrow{a} | \), this gives:
\[ \overrightarrow{a} = \pm \frac{| \overrightarrow{a} |}{| \overrightarrow{b} |} \overrightarrow{b}. \]
This shows that there exists a real scalar \( \lambda \) such that:
\[ \overrightarrow{a} = \lambda \overrightarrow{b}, \]
where \( \lambda \) is given by \( \pm \frac{| \overrightarrow{a} |}{| \overrightarrow{b} |} \), confirming that \( \overrightarrow{a} \) is a scalar multiple of \( \overrightarrow{b} \). If \( \lambda > 0 \), then \( \overrightarrow{a} \) and \( \overrightarrow{b} \) have the same sense; if \( \lambda < 0 \), they have opposite sense.
Now consider the special case where \( \overrightarrow{b} \) is the null vector. Since the null vector has zero magnitude and no defined direction, for any vector \( \overrightarrow{a} \), we trivially have:
\[ \overrightarrow{b} = \overrightarrow{0} = 0 \cdot \overrightarrow{a}. \]
Thus, the null vector is collinear with every vector in space. This completes the algebraic characterization of collinear vectors.
Conversely, if there exists a real scalar \( \lambda \) such that
\[ \overrightarrow{a} = \lambda \overrightarrow{b}, \]
then \( \overrightarrow{a} \) and \( \overrightarrow{b} \) must be collinear. This follows directly from the definition of scalar multiplication: multiplying a vector by a scalar only scales its magnitude but does not alter its line of support.
If \( \lambda > 0 \), then \( \overrightarrow{a} \) has the same sense as \( \overrightarrow{b} \). If \( \lambda < 0 \), then \( \overrightarrow{a} \) has the opposite sense of \( \overrightarrow{b} \). In both cases, \( \overrightarrow{a} \) remains constrained to the same or parallel line of support as \( \overrightarrow{b} \), confirming that the two vectors are collinear.
If \( \lambda = 0 \), then \( \overrightarrow{a} = \overrightarrow{0} \), which is trivially collinear with every vector. Hence, the existence of such a \( \lambda \) is both a necessary and sufficient condition for collinearity.
Thus, the geometric condition that two vectors have the same or parallel lines of support is equivalent to the algebraic condition that one vector is a scalar multiple of the other:
\[ \overrightarrow{a} \parallel \overrightarrow{b} \iff \overrightarrow{a} = \lambda \overrightarrow{b} \ \text{for some real scalar } \lambda \quad (\overrightarrow{b} \neq \overrightarrow{0}). \]
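A small programmatic sketch of this test (assuming NumPy; the helper name `are_collinear` and the sample vectors are illustrative choices, not anything defined earlier in the text):

```python
import numpy as np

def are_collinear(a, b, tol=1e-12):
    """True if one vector is a scalar multiple of the other (illustrative check)."""
    if np.allclose(a, 0, atol=tol) or np.allclose(b, 0, atol=tol):
        return True   # the null vector is collinear with every vector
    a_hat = a / np.linalg.norm(a)   # unit vector along a
    b_hat = b / np.linalg.norm(b)   # unit vector along b
    # collinear iff the unit vectors are equal or negatives of each other
    return np.allclose(a_hat, b_hat, atol=tol) or np.allclose(a_hat, -b_hat, atol=tol)

b = np.array([1.0, -2.0, 3.0])
print(are_collinear(-2.5 * b, b))                    # True  (lambda = -2.5)
print(are_collinear(np.array([1.0, 0.0, 0.0]), b))   # False (not a scalar multiple)
```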
If two vectors \( \overrightarrow{a} \) and \( \overrightarrow{b} \) are not collinear, we write
\[ \overrightarrow{a} \nparallel \overrightarrow{b}. \]
This means that \( \overrightarrow{a} \) and \( \overrightarrow{b} \) do not share the same or parallel lines of support. In other words, there does not exist any real scalar \( \lambda \) such that
\[ \overrightarrow{a} = \lambda \overrightarrow{b}. \]