
Cross Product of Vectors

What Exactly Is the Cross Product?

The cross product arises when we try to solve the following problem:

Find a unit vector perpendicular to two noncollinear vectors

\[ \mathbf{a} = a_1 \mathbf{i} + a_2 \mathbf{j} + a_3 \mathbf{k} \]

and

\[ \mathbf{b} = b_1 \mathbf{i} + b_2 \mathbf{j} + b_3 \mathbf{k}. \]

To solve this problem, we use the dot product of vectors. The objective is to find a vector that is perpendicular to both \( \mathbf{a} \) and \( \mathbf{b} \), and among all such vectors, we seek one that has unit magnitude.

Let

\[ \mathbf{n} = n_1 \mathbf{i} + n_2 \mathbf{j} + n_3 \mathbf{k} \]

be the required unit vector. Since \( \mathbf{n} \) is a unit vector, its magnitude must satisfy the fundamental property

\[ n_1^2 + n_2^2 + n_3^2 = 1. \quad \text{(1)} \]

Additionally, since it is perpendicular to \( \mathbf{a} \), the dot product condition gives

\[ \mathbf{a} \cdot \mathbf{n} = 0 \implies a_1 n_1 + a_2 n_2 + a_3 n_3 = 0. \quad \text{(2)} \]

Similarly, since \( \mathbf{n} \) is perpendicular to \( \mathbf{b} \), we impose another orthogonality condition

\[ \mathbf{b} \cdot \mathbf{n} = 0 \implies b_1 n_1 + b_2 n_2 + b_3 n_3 = 0. \quad \text{(3)} \]

Thus, we now have a system of three equations for the three unknowns \( n_1, n_2, n_3 \). Our goal is to solve for these unknowns in a systematic manner.

Equations (2) and (3) form two homogeneous linear equations in the three unknowns \( n_1, n_2, n_3 \). Solving this pair by the rule of cross-multiplication expresses the unknowns as proportional to \( 2 \times 2 \) determinants:

\[ \frac{n_1}{\begin{vmatrix} a_2 & a_3 \\ b_2 & b_3 \end{vmatrix}} = -\frac{n_2}{\begin{vmatrix} a_1 & a_3 \\ b_1 & b_3 \end{vmatrix}} = \frac{n_3}{\begin{vmatrix} a_1 & a_2 \\ b_1 & b_2 \end{vmatrix}}. \]

Let this common ratio be \( \lambda \), an arbitrary scalar factor that determines the magnitude of \( \mathbf{n} \). Thus, we obtain explicit expressions for the components of \( \mathbf{n} \) as

\[ n_1 = \lambda \begin{vmatrix} a_2 & a_3 \\ b_2 & b_3 \end{vmatrix}, \]
\[ n_2 = -\lambda \begin{vmatrix} a_1 & a_3 \\ b_1 & b_3 \end{vmatrix}, \]
\[ n_3 = \lambda \begin{vmatrix} a_1 & a_2 \\ b_1 & b_2 \end{vmatrix}. \]
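
As a quick sanity check, one can verify symbolically that these components satisfy equations (2) and (3) for arbitrary \( a_1, \dots, b_3 \) (the factor \( \lambda \) cancels). A minimal sketch in Python, assuming the sympy library is available:

```python
# Symbolic check: the vector built from the three 2x2 determinants
# is orthogonal to both a and b, for arbitrary components.
import sympy as sp

a1, a2, a3, b1, b2, b3 = sp.symbols('a1 a2 a3 b1 b2 b3')

n1 = a2*b3 - a3*b2       # +|a2 a3; b2 b3|
n2 = -(a1*b3 - a3*b1)    # -|a1 a3; b1 b3|
n3 = a1*b2 - a2*b1       # +|a1 a2; b1 b2|

print(sp.expand(a1*n1 + a2*n2 + a3*n3))  # 0, i.e. equation (2) holds
print(sp.expand(b1*n1 + b2*n2 + b3*n3))  # 0, i.e. equation (3) holds
```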

At this stage, the only remaining unknown is \( \lambda \), which we determine by using equation (1). Substituting the values of \( n_1, n_2, n_3 \) into (1), we obtain

\[ \lambda^2 \left( \begin{vmatrix} a_2 & a_3 \\ b_2 & b_3 \end{vmatrix}^2 + \begin{vmatrix} a_1 & a_3 \\ b_1 & b_3 \end{vmatrix}^2 + \begin{vmatrix} a_1 & a_2 \\ b_1 & b_2 \end{vmatrix}^2 \right) = 1. \]

Solving for \( \lambda \), we get

\[ \lambda = \pm \frac{1}{\sqrt{ \begin{vmatrix} a_2 & a_3 \\ b_2 & b_3 \end{vmatrix}^2 + \begin{vmatrix} a_1 & a_3 \\ b_1 & b_3 \end{vmatrix}^2 + \begin{vmatrix} a_1 & a_2 \\ b_1 & b_2 \end{vmatrix}^2 }}. \]

Now, the denominator in this expression is significant. To gain further insight into it, we recall the fundamental formula for the cosine of the angle \( \theta \) between two vectors:

\[ \cos \theta = \frac{\mathbf{a} \cdot \mathbf{b}}{|\mathbf{a}| |\mathbf{b}|} = \frac{a_1 b_1 + a_2 b_2 + a_3 b_3}{\sqrt{a_1^2 + a_2^2 + a_3^2} \sqrt{b_1^2 + b_2^2 + b_3^2}}. \]

Using this, we express \( \sin^2 \theta \) as

\[ \sin^2 \theta = 1 - \cos^2 \theta. \]

Substituting the value of \( \cos^2 \theta \),

\[ \sin^2 \theta = 1 - \frac{(a_1 b_1 + a_2 b_2 + a_3 b_3)^2}{(a_1^2 + a_2^2 + a_3^2)(b_1^2 + b_2^2 + b_3^2)}. \]

Rearranging the expression,

\[ \sin^2 \theta = \frac{(a_1^2 + a_2^2 + a_3^2)(b_1^2 + b_2^2 + b_3^2) - (a_1 b_1 + a_2 b_2 + a_3 b_3)^2}{(a_1^2 + a_2^2 + a_3^2)(b_1^2 + b_2^2 + b_3^2)}. \]

Expanding the numerator explicitly,

\[ (a_1^2 b_2^2 + a_2^2 b_1^2 - 2 a_1 a_2 b_1 b_2) + (a_1^2 b_3^2 + a_3^2 b_1^2 - 2 a_1 a_3 b_1 b_3) + (a_2^2 b_3^2 + a_3^2 b_2^2 - 2 a_2 a_3 b_2 b_3). \]

This expression is exactly the sum of the three squared \( 2 \times 2 \) determinants encountered earlier. Combining it with the expression for \( \sin^2 \theta \), we conclude that

\[ \begin{vmatrix} a_2 & a_3 \\ b_2 & b_3 \end{vmatrix}^2 + \begin{vmatrix} a_1 & a_3 \\ b_1 & b_3 \end{vmatrix}^2 + \begin{vmatrix} a_1 & a_2 \\ b_1 & b_2 \end{vmatrix}^2 = |\mathbf{a}|^2 |\mathbf{b}|^2 \sin^2 \theta. \]
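
This identity (a special case of Lagrange's identity) is easy to confirm numerically. A small sketch in Python with numpy; the sample vectors are arbitrary choices:

```python
# Check: sum of the squared 2x2 minors equals |a|^2 |b|^2 sin^2(theta).
import numpy as np

a = np.array([1.0, -2.0, 3.0])
b = np.array([4.0, 0.5, -1.0])

minors_sq = ((a[1]*b[2] - a[2]*b[1])**2
             + (a[0]*b[2] - a[2]*b[0])**2
             + (a[0]*b[1] - a[1]*b[0])**2)

cos_t = a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b))
rhs = np.linalg.norm(a)**2 * np.linalg.norm(b)**2 * (1 - cos_t**2)

print(np.isclose(minors_sq, rhs))  # True
```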

Thus, substituting this into the expression for \( \lambda \), we obtain

\[ \lambda = \pm \frac{1}{|\mathbf{a}| |\mathbf{b}| \sin \theta}. \]

Substituting this into our expressions for \( n_1, n_2, n_3 \),

\[ n_1 = \pm \frac{\begin{vmatrix} a_2 & a_3 \\ b_2 & b_3 \end{vmatrix}}{|\mathbf{a}| |\mathbf{b}| \sin \theta}, \]
\[ n_2 = \mp \frac{\begin{vmatrix} a_1 & a_3 \\ b_1 & b_3 \end{vmatrix}}{|\mathbf{a}| |\mathbf{b}| \sin \theta}, \]
\[ n_3 = \pm \frac{\begin{vmatrix} a_1 & a_2 \\ b_1 & b_2 \end{vmatrix}}{|\mathbf{a}| |\mathbf{b}| \sin \theta}. \]

We now substitute the obtained values of \( n_1, n_2, n_3 \) back into \( \mathbf{n} \), giving

\[ \mathbf{n} = n_1 \mathbf{i} + n_2 \mathbf{j} + n_3 \mathbf{k}. \]

Substituting the values,

\[ \mathbf{n} = \left( \pm \frac{\begin{vmatrix} a_2 & a_3 \\ b_2 & b_3 \end{vmatrix}}{|\mathbf{a}| |\mathbf{b}| \sin \theta} \right) \mathbf{i} + \left( \mp \frac{\begin{vmatrix} a_1 & a_3 \\ b_1 & b_3 \end{vmatrix}}{|\mathbf{a}| |\mathbf{b}| \sin \theta} \right) \mathbf{j} + \left( \pm \frac{\begin{vmatrix} a_1 & a_2 \\ b_1 & b_2 \end{vmatrix}}{|\mathbf{a}| |\mathbf{b}| \sin \theta} \right) \mathbf{k}. \]

Rewriting,

\[ \mathbf{n} = \pm \frac{\begin{vmatrix} a_2 & a_3 \\ b_2 & b_3 \end{vmatrix} \mathbf{i} - \begin{vmatrix} a_1 & a_3 \\ b_1 & b_3 \end{vmatrix} \mathbf{j} + \begin{vmatrix} a_1 & a_2 \\ b_1 & b_2 \end{vmatrix} \mathbf{k} }{|\mathbf{a}| |\mathbf{b}| \sin \theta}. \]

The numerator is precisely the first-row cofactor expansion of the determinant

\[ \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \end{vmatrix}. \]

Thus,

\[ \mathbf{n} = \pm \frac{ \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \end{vmatrix} }{|\mathbf{a}||\mathbf{b}| \sin \theta}. \]

This result shows that there are two possible unit vectors, each pointing in opposite directions perpendicular to the plane containing \( \mathbf{a} \) and \( \mathbf{b} \).

The determinant in the numerator of \( \mathbf{n} \) is significant because it represents a vector that is perpendicular to both \( \mathbf{a} \) and \( \mathbf{b} \). This determinant is given by

\[ \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \end{vmatrix}. \]

This determinant is given a special name in vector algebra: it is called the cross product of \( \mathbf{a} \) and \( \mathbf{b} \), denoted by \( \mathbf{a} \times \mathbf{b} \). That is,

\[ \mathbf{a} \times \mathbf{b} = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \end{vmatrix}. \]

Using this notation, our previously derived expression for \( \mathbf{n} \) can now be written compactly as

\[ \mathbf{n} = \pm \frac{\mathbf{a} \times \mathbf{b}}{|\mathbf{a}||\mathbf{b}| \sin \theta}. \]

This form clearly shows that the cross product \( \mathbf{a} \times \mathbf{b} \) gives a vector perpendicular to both \( \mathbf{a} \) and \( \mathbf{b} \), and dividing it by \( |\mathbf{a}| |\mathbf{b}| \sin \theta \) normalizes its magnitude to 1, ensuring that \( \mathbf{n} \) is a unit vector. There are two possible choices for \( \mathbf{n} \), one in each direction perpendicular to the plane of \( \mathbf{a} \) and \( \mathbf{b} \).
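
The determinant definition translates directly into code. Below is a minimal sketch in Python: the cofactor expansion along the first row, compared against numpy's built-in np.cross; the sample vectors are arbitrary:

```python
import numpy as np

def cross(a, b):
    """Cross product via cofactor expansion along the i, j, k row."""
    return np.array([a[1]*b[2] - a[2]*b[1],      # +|a2 a3; b2 b3|
                     -(a[0]*b[2] - a[2]*b[0]),   # -|a1 a3; b1 b3|
                     a[0]*b[1] - a[1]*b[0]])     # +|a1 a2; b1 b2|

a = np.array([1.0, 2.0, 3.0])
b = np.array([-1.0, 0.0, 2.0])

print(cross(a, b))                               # [ 4. -5.  2.]
print(np.allclose(cross(a, b), np.cross(a, b)))  # True
```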


In the expression

\[ \mathbf{n} = \pm \frac{\mathbf{a} \times \mathbf{b}}{|\mathbf{a}||\mathbf{b}| \sin \theta}, \]

there are two possible values for \( \mathbf{n} \), corresponding to the two opposite directions perpendicular to both \( \mathbf{a} \) and \( \mathbf{b} \). To determine which of these two directions corresponds to a given choice of sign, we must establish a convention for the orientation of \( \mathbf{n} \) relative to \( \mathbf{a} \) and \( \mathbf{b} \).

To do this, we make the two vectors \( \mathbf{a} \) and \( \mathbf{b} \) coinitial, meaning we consider them starting from the same point. Then, we shift the reference point of the Cartesian coordinate system to this common initial point of \( \mathbf{a} \) and \( \mathbf{b} \), effectively making this point the origin. Next, we align the positive x-axis along \( \mathbf{a} \) and rotate the coordinate system such that \( \mathbf{b} \) lies in the xy-plane. This transformation does not affect the algebra of vectors since vector operations are independent of the choice of coordinate system.

In this new coordinate system, the vectors take the simplified forms

\[ \mathbf{a} = a_1 \mathbf{i}, \quad a_1 > 0, \]
\[ \mathbf{b} = b_1 \mathbf{i} + b_2 \mathbf{j}. \]

Since \( \mathbf{b} \) lies in the xy-plane, its z-component is zero. Now, we compute the cross product:

\[ \mathbf{a} \times \mathbf{b} = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ a_1 & 0 & 0 \\ b_1 & b_2 & 0 \end{vmatrix}. \]

Expanding this determinant along the first row,

\[ \mathbf{a} \times \mathbf{b} = \mathbf{i} \begin{vmatrix} 0 & 0 \\ b_2 & 0 \end{vmatrix} - \mathbf{j} \begin{vmatrix} a_1 & 0 \\ b_1 & 0 \end{vmatrix} + \mathbf{k} \begin{vmatrix} a_1 & 0 \\ b_1 & b_2 \end{vmatrix}. \]

Since the first two determinants are zero, we are left with

\[ \mathbf{a} \times \mathbf{b} = (a_1 b_2) \mathbf{k}. \]

Thus, noting that \( |\mathbf{a}| = a_1 \) because \( a_1 > 0 \),

\[ \mathbf{n} = \pm \frac{a_1 b_2 \mathbf{k}}{a_1 |\mathbf{b}| \sin \theta}. \]

Canceling \( a_1 \) from the numerator and denominator,

\[ \mathbf{n} = \pm \frac{b_2 \mathbf{k}}{|\mathbf{b}| \sin \theta}. \]

Now, observe that \( \sin \theta > 0 \): since \( \mathbf{a} \) and \( \mathbf{b} \) are noncollinear, the angle between them satisfies \( 0 < \theta < \pi \). The sign of \( \mathbf{n} \) therefore depends only on \( b_2 \):

  • If \( b_2 > 0 \), then choosing the positive sign gives \( \mathbf{n} \) along the positive z-axis.


  • If \( b_2 < 0 \), then choosing the positive sign gives \( \mathbf{n} \) along the negative z-axis.


This matches a well-known orientation rule: If you take your right hand, align your fingers along \( \mathbf{a} \) and curl them toward \( \mathbf{b} \), then your thumb points in the direction of \( \mathbf{n} \).

Thus, the right-hand rule determines the direction of \( \mathbf{n} \). This means that instead of using both signs, we define \( \mathbf{n} \) as

\[ \mathbf{n} = \frac{\mathbf{a} \times \mathbf{b}}{|\mathbf{a}||\mathbf{b}| \sin \theta}. \]

That is, we do not use the negative sign explicitly, and we always choose the direction given by the right-hand rule.

This convention allows us to unambiguously determine the orientation of \( \mathbf{n} \): if you place your right hand so that your fingers point along \( \mathbf{a} \) and curl them toward \( \mathbf{b} \), your thumb will point in the direction of \( \mathbf{n} \).

Cross Product in Left Hand System

We have been using the right-hand system throughout our discussion. This means that the orientation of the vectors follows the standard convention: if you point your right-hand fingers along \( \mathbf{a} \) and curl them towards \( \mathbf{b} \), your thumb will point in the direction of \( \mathbf{n} \).

However, if we were to use a left-hand system instead, the orientation of the coordinate axes changes (the \( x \)- and \( y \)-axes are arranged differently), which in turn reverses the direction assigned to \( \mathbf{n} \). In such a system, the cross product follows the left-hand rule instead of the right-hand rule:

Point your left-hand fingers along \( \mathbf{a} \) and curl them toward \( \mathbf{b} \); your thumb then points in the direction of \( \mathbf{n} \).

Definition

From our detailed discussion, we define the cross product of two vectors \( \overrightarrow{a} \) and \( \overrightarrow{b} \) as follows:

When the vectors are expressed explicitly in resolved form, the cross product is given by the determinant:

\[ \overrightarrow{a} \times \overrightarrow{b} = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \end{vmatrix}. \]

Earlier, we found the unit vector \(\overrightarrow{n}\) perpendicular to both \(\overrightarrow{a}\) and \(\overrightarrow{b}\) as:

\[ \overrightarrow{n} = \frac{\overrightarrow{a} \times \overrightarrow{b}}{|\overrightarrow{a}|\,|\overrightarrow{b}|\,\sin\theta}. \]

Thus, we have an alternative way of expressing the cross product as:

\[ \overrightarrow{a} \times \overrightarrow{b} = |\overrightarrow{a}|\,|\overrightarrow{b}|\,\sin\theta\,\overrightarrow{n},\quad 0 \leq \theta \leq \pi. \]

Since \(0 \leq \theta \leq \pi\), it follows that \(\sin\theta \geq 0\). Here, \(\overrightarrow{n}\) is a unit vector perpendicular to both vectors \(\overrightarrow{a}\) and \(\overrightarrow{b}\), whose direction is uniquely determined by the right-hand rule.

Note that even though, in the initial calculation of \(\overrightarrow{n}\), we explicitly assumed \(\overrightarrow{a}\) and \(\overrightarrow{b}\) to be noncollinear, we extend the definition of the cross product to include the case of collinear vectors as well. In such cases, \(\theta = 0\) or \(\pi\), hence:

\[ |\overrightarrow{a}\times\overrightarrow{b}| = |\overrightarrow{a}|\,|\overrightarrow{b}|\,\sin\theta = 0, \quad\text{since } \sin 0 = \sin\pi = 0. \]

Thus, for collinear vectors, the cross product is defined to be the zero vector. In this special scenario, the unit vector \(\overrightarrow{n}\) no longer makes sense, because there is no unique direction perpendicular to two collinear vectors—indeed, there are infinitely many such directions.

Finally, since \(\overrightarrow{n}\) is a unit vector, the magnitude of the cross product \(\overrightarrow{a}\times\overrightarrow{b}\) is simply the nonnegative scalar \( |\overrightarrow{a}|\,|\overrightarrow{b}|\,\sin\theta \). Thus, we clearly have:

\[ |\overrightarrow{a}\times\overrightarrow{b}| = |\overrightarrow{a}|\,|\overrightarrow{b}|\,\sin\theta. \]

We do not put a modulus sign around \(\sin\theta\), as we have established that \(\sin\theta\) is always nonnegative in the range \(0\leq \theta \leq \pi\).

Also, from our earlier expression for \(\overrightarrow{n}\), we have another useful way of expressing it compactly:

\[ \overrightarrow{n} = \frac{\overrightarrow{a}\times\overrightarrow{b}}{|\overrightarrow{a}\times\overrightarrow{b}|}. \]
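
In computational terms, this last formula is the most convenient route to the unit normal. A minimal sketch in Python (numpy assumed; the sample vectors are arbitrary):

```python
import numpy as np

def unit_normal(a, b):
    """Unit vector perpendicular to both a and b (right-hand rule direction)."""
    c = np.cross(a, b)
    norm = np.linalg.norm(c)
    if np.isclose(norm, 0.0):
        raise ValueError("collinear vectors: no unique normal exists")
    return c / norm

a = np.array([1.0, 1.0, 1.0])
b = np.array([1.0, -1.0, 2.0])
n = unit_normal(a, b)

print(np.isclose(np.linalg.norm(n), 1.0))                 # unit length
print(np.isclose(n.dot(a), 0), np.isclose(n.dot(b), 0))   # orthogonal to both
```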

Example

Find a vector of length \(5\) units perpendicular to both vectors

\[ \overrightarrow{a} = \mathbf{i} + \mathbf{j} + \mathbf{k}\quad\text{and}\quad \overrightarrow{b} = \mathbf{i} - \mathbf{j} + 2\mathbf{k}, \]

and making an acute angle with the \(z\)-axis.

Solution:

Let the required vector be denoted by \(\overrightarrow{v}\). Since \(\overrightarrow{v}\) is perpendicular to both vectors \(\overrightarrow{a}\) and \(\overrightarrow{b}\), it must be collinear with their cross product \(\overrightarrow{a}\times\overrightarrow{b}\). Thus, we write:
\[ \overrightarrow{v} = \lambda(\overrightarrow{a}\times\overrightarrow{b}). \]

We first calculate \(\overrightarrow{a}\times\overrightarrow{b}\):

\[ \overrightarrow{a}\times\overrightarrow{b} = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ 1 & 1 & 1 \\ 1 & -1 & 2 \end{vmatrix} = (2+1)\mathbf{i} - (2-1)\mathbf{j} + (-1-1)\mathbf{k} = 3\mathbf{i}-\mathbf{j}-2\mathbf{k}. \]

Thus,

\[ \overrightarrow{v} = \lambda (3\mathbf{i}-\mathbf{j}-2\mathbf{k}). \]

Now, the magnitude of \(\overrightarrow{v}\) is given as \(5\), therefore we have:

\[ |\overrightarrow{v}| = |\lambda|\,|3\mathbf{i}-\mathbf{j}-2\mathbf{k}|\quad\implies\quad 5 = |\lambda|\sqrt{3^2 + (-1)^2 + (-2)^2} = |\lambda|\sqrt{14}. \]

This implies

\[ |\lambda| = \frac{5}{\sqrt{14}}\quad\implies\quad\lambda = \pm\frac{5}{\sqrt{14}}. \]

Therefore, the two possible vectors are:

\[ \overrightarrow{v} = \pm\frac{5}{\sqrt{14}}(3\mathbf{i}-\mathbf{j}-2\mathbf{k}). \]

Since \(\overrightarrow{v}\) must make an acute angle with the positive \(z\)-axis, we select the appropriate vector from these two. A vector \((a\mathbf{i}+b\mathbf{j}+c\mathbf{k})\) makes an acute angle with the positive \(z\)-axis if its dot product with \(\mathbf{k}\) is positive; this means its \(z\)-component \(c\) must be positive.

Here, the vector \(3\mathbf{i}-\mathbf{j}-2\mathbf{k}\) has a negative \(z\)-component (\(-2\)). Hence, to obtain a positive \(z\)-component, we choose the negative sign for \(\lambda\):

\[ \overrightarrow{v} = -\frac{5}{\sqrt{14}}(3\mathbf{i}-\mathbf{j}-2\mathbf{k}) = \frac{-15\mathbf{i}+5\mathbf{j}+10\mathbf{k}}{\sqrt{14}}. \]

This vector has length \(5\), is perpendicular to both \(\overrightarrow{a}\) and \(\overrightarrow{b}\), and clearly makes an acute angle with the positive \(z\)-axis. \(\blacksquare\)
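
The arithmetic above can be confirmed numerically; a short sketch in Python with numpy:

```python
import numpy as np

a = np.array([1.0, 1.0, 1.0])
b = np.array([1.0, -1.0, 2.0])

v = -(5 / np.sqrt(14)) * np.cross(a, b)   # the vector derived above

print(np.isclose(np.linalg.norm(v), 5.0))                 # length 5
print(np.isclose(v.dot(a), 0), np.isclose(v.dot(b), 0))   # perpendicular to a and b
print(v[2] > 0)   # positive z-component: acute angle with the z-axis
```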

The Usefulness of the Cross Product

The primary use of the cross product is to simplify the process of finding a vector perpendicular to two given vectors. Rather than solving multiple equations and performing extensive algebra each time, the cross product provides a concise, direct, and systematic method.

Given two vectors \(\overrightarrow{a}\) and \(\overrightarrow{b}\), their cross product \(\overrightarrow{a}\times\overrightarrow{b}\) is immediately perpendicular to both vectors. Thus, instead of repeatedly solving a system of linear equations, we simply evaluate the determinant:

\[ \overrightarrow{a}\times\overrightarrow{b} = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k}\\[6pt] a_1 & a_2 & a_3\\[6pt] b_1 & b_2 & b_3 \end{vmatrix}. \]

This greatly simplifies the computational effort and provides a compact and elegant method to directly obtain a vector perpendicular to two given vectors.

Properties of Cross Product

We have introduced a new kind of product on vectors—the cross product. Let us now learn and understand its properties thoroughly and explore how it behaves with respect to various operations.

Anti-commutativity:

The cross product, unlike scalar multiplication or dot product, is not commutative. Specifically, we have the important identity:

\[ \overrightarrow{a} \times \overrightarrow{b} = -\,(\overrightarrow{b} \times \overrightarrow{a}) \]

This property can be understood clearly using the determinant definition of the cross product. Recall,

\[ \overrightarrow{a} \times \overrightarrow{b} = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k}\\[6pt] a_1 & a_2 & a_3\\[6pt] b_1 & b_2 & b_3 \end{vmatrix}. \]

If we interchange vectors \(\overrightarrow{a}\) and \(\overrightarrow{b}\), this corresponds exactly to interchanging the second and third rows of the determinant:

\[ \overrightarrow{b} \times \overrightarrow{a} = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k}\\[6pt] b_1 & b_2 & b_3\\[6pt] a_1 & a_2 & a_3 \end{vmatrix}. \]

By a standard property of determinants, interchanging two rows changes the sign of the determinant. Thus,

\[ \overrightarrow{b} \times \overrightarrow{a} = -(\overrightarrow{a} \times \overrightarrow{b}). \]

Additionally, this anti-commutativity can also be understood intuitively by the right-hand rule. If we apply the right-hand rule by aligning fingers along \(\overrightarrow{a}\) and curling them toward \(\overrightarrow{b}\), the thumb points in one particular direction. Reversing the order, aligning fingers along \(\overrightarrow{b}\) first and then curling toward \(\overrightarrow{a}\), the thumb points exactly in the opposite direction. Hence, we again verify the anti-commutative property.

Thus, the cross product is fundamentally anti-commutative.

Scalar Multiplication:

Another important property of the cross product is its compatibility with scalar multiplication. If \( k \) is any scalar and \(\overrightarrow{a}\), \(\overrightarrow{b}\) are vectors, we have the following identity:

\[ k(\overrightarrow{a}\times\overrightarrow{b}) = (k\overrightarrow{a})\times\overrightarrow{b} = \overrightarrow{a}\times(k\overrightarrow{b}). \]

This can easily be understood using the determinant definition of the cross product:

Recall that

\[ \overrightarrow{a}\times\overrightarrow{b} = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k}\\[6pt] a_1 & a_2 & a_3\\[6pt] b_1 & b_2 & b_3 \end{vmatrix}. \]

Multiplying by scalar \( k \):

  • Multiplying vector \(\overrightarrow{a}\) by \(k\) corresponds to multiplying the second row of this determinant by \(k\):
\[ (k\overrightarrow{a})\times\overrightarrow{b} = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k}\\[6pt] ka_1 & ka_2 & ka_3\\[6pt] b_1 & b_2 & b_3 \end{vmatrix}. \]
  • Similarly, multiplying vector \(\overrightarrow{b}\) by \(k\) corresponds to multiplying the third row of the determinant by \(k\):
\[ \overrightarrow{a}\times(k\overrightarrow{b}) = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k}\\[6pt] a_1 & a_2 & a_3\\[6pt] kb_1 & kb_2 & kb_3 \end{vmatrix}. \]

By determinant properties, multiplying a row by a scalar \(k\) simply factors \(k\) out of the determinant. Thus, in each case, the scalar \(k\) can be factored out:

\[ (k\overrightarrow{a})\times\overrightarrow{b} = k(\overrightarrow{a}\times\overrightarrow{b}), \quad \overrightarrow{a}\times(k\overrightarrow{b}) = k(\overrightarrow{a}\times\overrightarrow{b}). \]

Hence, scalar multiplication distributes naturally over the cross product, confirming the above stated identities rigorously.

Distributivity over Vector Addition:

The cross product also satisfies a crucial property called distributivity over vector addition. Specifically, it holds that:

\[ \overrightarrow{a}\times(\overrightarrow{b}+\overrightarrow{c}) = \overrightarrow{a}\times\overrightarrow{b} + \overrightarrow{a}\times\overrightarrow{c} \quad \text{(Left Distributivity)}, \]

and similarly,

\[ (\overrightarrow{a}+\overrightarrow{b})\times\overrightarrow{c} = \overrightarrow{a}\times\overrightarrow{c} + \overrightarrow{b}\times\overrightarrow{c} \quad \text{(Right Distributivity)}. \]

These identities follow directly from the determinant definition of the cross product: adding \(\overrightarrow{b}\) and \(\overrightarrow{c}\) adds the corresponding rows of the determinant, and a determinant is additive in each of its rows.
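
All three properties (anti-commutativity, compatibility with scalar multiplication, and distributivity) are easy to spot-check numerically. A sketch in Python with numpy, using randomly chosen vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c = rng.standard_normal((3, 3))   # three random 3-vectors
k = 2.5

assert np.allclose(np.cross(a, b), -np.cross(b, a))           # anti-commutativity
assert np.allclose(np.cross(k*a, b), k*np.cross(a, b))        # (ka) x b = k(a x b)
assert np.allclose(np.cross(a, k*b), k*np.cross(a, b))        # a x (kb) = k(a x b)
assert np.allclose(np.cross(a, b + c),
                   np.cross(a, b) + np.cross(a, c))           # left distributivity
assert np.allclose(np.cross(a + b, c),
                   np.cross(a, c) + np.cross(b, c))           # right distributivity
print("all properties verified")
```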

Collinearity of Vectors and the Cross Product

Two nonzero vectors \(\overrightarrow{a}\) and \(\overrightarrow{b}\) are collinear if and only if their cross product equals the zero vector:

\[ \overrightarrow{a} \text{ and } \overrightarrow{b} \text{ are collinear } \iff \overrightarrow{a}\times\overrightarrow{b} = \overrightarrow{0}. \]

This follows directly from the definition of the cross product. Specifically, we know that:

\[ |\overrightarrow{a}\times\overrightarrow{b}| = |\overrightarrow{a}||\overrightarrow{b}|\sin\theta. \]

If vectors \(\overrightarrow{a}\) and \(\overrightarrow{b}\) are collinear, then the angle between them is either \(0\) or \(\pi\), so \(\sin\theta = 0\), which implies:

\[ |\overrightarrow{a}\times\overrightarrow{b}| = |\overrightarrow{a}||\overrightarrow{b}|\sin\theta = 0 \implies \overrightarrow{a}\times\overrightarrow{b}=\overrightarrow{0}. \]

Conversely, if \(\overrightarrow{a}\times\overrightarrow{b}=\overrightarrow{0}\), it implies that:

\[ |\overrightarrow{a}\times\overrightarrow{b}|=|\overrightarrow{a}||\overrightarrow{b}|\sin\theta=0. \]

Since we assume \(\overrightarrow{a}\) and \(\overrightarrow{b}\) are nonzero vectors, we must have \(\sin\theta=0\), which means \(\theta=0\) or \(\theta=\pi\). Thus, the vectors must be collinear.

Hence, we have rigorously established the necessary and sufficient condition:

\[ \overrightarrow{a}\times\overrightarrow{b}=\overrightarrow{0} \iff \overrightarrow{a} \text{ and } \overrightarrow{b}\text{ are collinear}. \]
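
This condition gives a practical collinearity test. A minimal sketch in Python (the tolerance is an arbitrary choice for floating-point comparison):

```python
import numpy as np

def are_collinear(a, b, tol=1e-12):
    """Nonzero a, b are collinear iff a x b is the zero vector."""
    return np.allclose(np.cross(a, b), 0.0, atol=tol)

print(are_collinear(np.array([1.0, 2.0, 3.0]),
                    np.array([-2.0, -4.0, -6.0])))   # True  (b = -2a)
print(are_collinear(np.array([1.0, 2.0, 3.0]),
                    np.array([1.0, 0.0, 0.0])))      # False
```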

Cross Product of Coordinate Unit Vectors

In a right-handed coordinate system, the cross products involving unit vectors along coordinate axes have simple and important results. Consider unit vectors \(\mathbf{i}\), \(\mathbf{j}\), and \(\mathbf{k}\) along the positive \(x\)-, \(y\)-, and \(z\)-axes, respectively. Then we have:

  1. The cross product of any vector with itself is the zero vector:

    \[ \mathbf{i}\times\mathbf{i}=\mathbf{0},\quad\mathbf{j}\times\mathbf{j}=\mathbf{0},\quad\mathbf{k}\times\mathbf{k}=\mathbf{0} \]

    (This is because the angle between a vector and itself is zero, and thus \(\sin 0 = 0\).)

  2. Cross products of distinct unit vectors follow the right-hand rule cyclically:

    \[ \mathbf{i}\times\mathbf{j}=\mathbf{k},\quad\mathbf{j}\times\mathbf{k}=\mathbf{i},\quad\mathbf{k}\times\mathbf{i}=\mathbf{j}. \]

    These relations can easily be verified using the determinant definition. For instance:

    \[ \mathbf{i}\times\mathbf{j}= \begin{vmatrix} \mathbf{i}&\mathbf{j}&\mathbf{k}\\[6pt] 1&0&0\\[6pt] 0&1&0 \end{vmatrix} = (0)\mathbf{i}-(0)\mathbf{j}+(1)\mathbf{k}=\mathbf{k}. \]

Thus, summarizing clearly:

  • \(\mathbf{i}\times\mathbf{i}=\mathbf{j}\times\mathbf{j}=\mathbf{k}\times\mathbf{k}=\mathbf{0}\).
  • \(\mathbf{i}\times\mathbf{j}=\mathbf{k},\quad\mathbf{j}\times\mathbf{k}=\mathbf{i},\quad\mathbf{k}\times\mathbf{i}=\mathbf{j}\).
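
These identities are also trivial to spot-check; a quick sketch in Python:

```python
import numpy as np

i, j, k = np.eye(3)   # the standard unit vectors, as rows of the identity
assert np.allclose(np.cross(i, j), k)
assert np.allclose(np.cross(j, k), i)
assert np.allclose(np.cross(k, i), j)
assert np.allclose(np.cross(i, i), np.zeros(3))
print("unit-vector identities hold")
```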

Cross Product and Basis in Three-dimensional Space

If \(\overrightarrow{a}\) and \(\overrightarrow{b}\) are two nonzero and noncollinear vectors, then the vectors

\[ \overrightarrow{a},\quad \overrightarrow{b},\quad \text{and}\quad \overrightarrow{a}\times\overrightarrow{b} \]

are always noncoplanar.

To understand why, recall that \(\overrightarrow{a}\times\overrightarrow{b}\) is perpendicular to both \(\overrightarrow{a}\) and \(\overrightarrow{b}\). If these three vectors were coplanar, then \(\overrightarrow{a}\times\overrightarrow{b}\) would lie in the plane formed by \(\overrightarrow{a}\) and \(\overrightarrow{b}\). But this is impossible, since by definition, \(\overrightarrow{a}\times\overrightarrow{b}\) must be perpendicular to that very plane. Thus, these three vectors cannot lie in the same plane and are therefore noncoplanar.

Consequently, by the fundamental theorem of three-dimensional geometry, any vector \(\overrightarrow{r}\) in three-dimensional space can be uniquely expressed as a linear combination of these three noncoplanar vectors. Formally, we have:

\[ \overrightarrow{r} = x\,\overrightarrow{a} + y\,\overrightarrow{b} + z\,(\overrightarrow{a}\times\overrightarrow{b}) \]

for some scalars \(x, y, z\). We will see later on how such a formulation is helpful.
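
Finding the scalars \(x, y, z\) for a given \(\overrightarrow{r}\) amounts to solving a \(3\times 3\) linear system whose columns are the three basis vectors. A sketch in Python (the vectors shown are arbitrary examples):

```python
import numpy as np

a = np.array([1.0, 0.0, 1.0])
b = np.array([0.0, 2.0, 1.0])
n = np.cross(a, b)                 # completes the noncoplanar triple

r = np.array([3.0, -1.0, 4.0])     # an arbitrary vector to decompose

M = np.column_stack([a, b, n])     # columns are the basis vectors
x, y, z = np.linalg.solve(M, r)    # solve r = x a + y b + z (a x b)

print(x, y, z)
print(np.allclose(x*a + y*b + z*n, r))   # True: the decomposition is exact
```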

Cross Product and Area

The relationship between the cross product and the area of a parallelogram formed by two vectors is quite surprising and insightful. To understand this clearly, let us revisit the cross product with a geometric perspective.

Consider two vectors \(\overrightarrow{a}\) and \(\overrightarrow{b}\), placed with their initial points coinciding (making them coinitial). Construct a parallelogram using these two vectors as adjacent sides—precisely the same way we do when applying the parallelogram law of vector addition.

Let us denote the parallelogram formed by vectors \(\overrightarrow{a}\) and \(\overrightarrow{b}\) as parallelogram \(OACB\), where \(O\) is the common initial point, \(OA = \overrightarrow{a}\), \(OB = \overrightarrow{b}\), and \(C\) is the opposite vertex formed by completing the parallelogram.

(Figure: the parallelogram \(OACB\) constructed on \(\overrightarrow{a}\) and \(\overrightarrow{b}\).)

Now, our question is simple yet fundamental: What is the area of this parallelogram formed by \(\overrightarrow{a}\) and \(\overrightarrow{b}\)?

We recall from basic geometry that the area of any parallelogram is given by the formula:

\[ \text{Area} = \text{base} \times \text{height}. \]

Taking \(OA\) as the base, the length of the base is \(|\overrightarrow{a}|\). Let \(\theta\) be the angle between vectors \(\overrightarrow{a}\) and \(\overrightarrow{b}\). Drop a perpendicular from vertex \(B\) onto the base \(OA\), meeting at point \(N\). Then, by simple trigonometry, the perpendicular height \(BN\) is given by:

\[ BN = |\overrightarrow{b}|\sin\theta. \]

Thus, the area of the parallelogram is:

\[ \text{Area} = |\overrightarrow{a}|\,|\overrightarrow{b}|\sin\theta. \]

However, we immediately recognize this quantity: It is precisely the magnitude of the cross product \(\overrightarrow{a}\times\overrightarrow{b}\). Indeed, we previously established that:

\[ |\overrightarrow{a}\times\overrightarrow{b}| = |\overrightarrow{a}|\,|\overrightarrow{b}|\,\sin\theta. \]

Thus, we arrive at a remarkable and fundamental geometric interpretation of the cross product:

\[ |\overrightarrow{a}\times\overrightarrow{b}| = \text{Area of parallelogram formed by } \overrightarrow{a} \text{ and } \overrightarrow{b}. \]

This connection between geometry and algebra highlights one of the most elegant and surprising features of the cross product.

Consequently, the area of the triangle formed by the two coinitial vectors \(\overrightarrow{a}\) and \(\overrightarrow{b}\) is exactly half the area of this parallelogram, since a diagonal divides a parallelogram into two congruent triangles.

Thus, the area of this triangle is given by:

\[ \text{Area of triangle} = \frac{1}{2}|\overrightarrow{a}\times\overrightarrow{b}|. \]
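
Both area formulas reduce to one line of code. A sketch in Python; the vectors are chosen so the base-times-height answer is obvious:

```python
import numpy as np

a = np.array([3.0, 0.0, 0.0])   # base along the x-axis, |a| = 3
b = np.array([1.0, 2.0, 0.0])   # height above that base = 2

parallelogram = np.linalg.norm(np.cross(a, b))
triangle = 0.5 * parallelogram

print(parallelogram)   # 6.0  (base 3 times height 2)
print(triangle)        # 3.0
```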

Area of Triangle

If we have a triangle \(ABC\) with vertices at points \(A\), \(B\), and \(C\), the area of this triangle can be neatly expressed using the cross product of vectors formed by its sides.

Specifically, the area of triangle \(ABC\) can be computed by considering the vectors formed by any two sides emanating from a common vertex:

\[ \text{Area of triangle } ABC = \frac{1}{2}\left|\overrightarrow{AB}\times\overrightarrow{AC}\right|\,=\,\frac{1}{2}\left|\overrightarrow{BA}\times\overrightarrow{BC}\right|\,=\,\frac{1}{2}\left|\overrightarrow{CA}\times\overrightarrow{CB}\right|. \]

If the points \( A, B, C \) have position vectors \( \overrightarrow{a}, \overrightarrow{b}, \overrightarrow{c} \) respectively, then the area of triangle \( ABC \) can be expressed directly using these position vectors.

First, recall the general formula we just derived:

\[ \text{Area of triangle }ABC = \frac{1}{2}\left|\overrightarrow{AB}\times\overrightarrow{AC}\right|. \]

Since \(\overrightarrow{AB} = \overrightarrow{b}-\overrightarrow{a}\) and \(\overrightarrow{AC} = \overrightarrow{c}-\overrightarrow{a}\), we have:

\[ \text{Area} = \frac{1}{2}\left|(\overrightarrow{b}-\overrightarrow{a})\times(\overrightarrow{c}-\overrightarrow{a})\right|. \]

Expanding using the distributivity of the cross product, we get:

\[ = \frac{1}{2}\left|\overrightarrow{b}\times\overrightarrow{c}-\overrightarrow{b}\times\overrightarrow{a}-\overrightarrow{a}\times\overrightarrow{c}+\overrightarrow{a}\times\overrightarrow{a}\right|. \]

Since \(\overrightarrow{a}\times\overrightarrow{a}=\overrightarrow{0}\), and using anti-commutativity to write \(-\overrightarrow{b}\times\overrightarrow{a} = \overrightarrow{a}\times\overrightarrow{b}\) and \(-\overrightarrow{a}\times\overrightarrow{c} = \overrightarrow{c}\times\overrightarrow{a}\), this simplifies neatly to:

\[ = \frac{1}{2}\left|\overrightarrow{b}\times\overrightarrow{c}+\overrightarrow{a}\times\overrightarrow{b}+\overrightarrow{c}\times\overrightarrow{a}\right|. \]

Thus, we have the elegant result:

\[ \text{Area of triangle }ABC = \frac{1}{2}\left|\overrightarrow{a}\times\overrightarrow{b}+\overrightarrow{b}\times\overrightarrow{c}+\overrightarrow{c}\times\overrightarrow{a}\right|. \]
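
One can check that the side-based and position-vector forms agree. A sketch in Python, using a 3-4-5 right triangle as the sample:

```python
import numpy as np

A = np.array([0.0, 0.0, 0.0])
B = np.array([4.0, 0.0, 0.0])
C = np.array([0.0, 3.0, 0.0])

area_sides = 0.5 * np.linalg.norm(np.cross(B - A, C - A))
area_pos = 0.5 * np.linalg.norm(np.cross(A, B) + np.cross(B, C) + np.cross(C, A))

print(area_sides, area_pos)              # 6.0 6.0
print(np.isclose(area_sides, area_pos))  # True
```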

Area of a Quadrilateral Using Cross Product

Consider a planar quadrilateral \(ABCD\) situated in three-dimensional space. We will now prove a rather surprising and elegant result: the area of this quadrilateral equals half the magnitude of the cross product of its diagonals. Formally stated, we have:

\[ \text{Area of quadrilateral }ABCD = \frac{1}{2}\left|\overrightarrow{AC}\times\overrightarrow{BD}\right|. \]

Proof:

To see why this is true, we begin by dividing the quadrilateral \(ABCD\) into two triangles by drawing diagonal \(AC\). Thus, the area of quadrilateral \(ABCD\) equals the sum of the areas of triangles \(ABC\) and \(CDA\):

\[ \text{Area of quadrilateral }ABCD = \text{Area of triangle }ABC + \text{Area of triangle }CDA. \]

Expressing these triangle areas using cross products, we have:

\[ = \frac{1}{2}\left|\overrightarrow{AB}\times\overrightarrow{AC}\right| + \frac{1}{2}\left|\overrightarrow{AC}\times\overrightarrow{AD}\right|. \]

Now consider the directions of these two cross products. Since \(ABCD\) is planar, both \(\overrightarrow{AB}\times\overrightarrow{AC}\) and \(\overrightarrow{AC}\times\overrightarrow{AD}\) are perpendicular to the plane of the quadrilateral, and thus point either in the same or in exactly opposite directions. With the vertices labeled in order around the quadrilateral (as done here), they point in the same direction.

If two vectors \(\overrightarrow{u}\) and \(\overrightarrow{v}\) have the same direction, then clearly:

\[ |\overrightarrow{u}| + |\overrightarrow{v}| = |\overrightarrow{u} + \overrightarrow{v}|. \]

Using this observation, we combine the magnitudes of our two cross products into a single magnitude:

\[ \text{Area of quadrilateral }ABCD = \frac{1}{2}\left|\overrightarrow{AB}\times\overrightarrow{AC} + \overrightarrow{AC}\times\overrightarrow{AD}\right|. \]

Factor out vector \(\overrightarrow{AC}\) carefully, noting the anti-commutative property \(\overrightarrow{AB}\times\overrightarrow{AC} = -\,\overrightarrow{AC}\times\overrightarrow{AB}\):

\[ = \frac{1}{2}\left|-\overrightarrow{AC}\times\overrightarrow{AB} + \overrightarrow{AC}\times\overrightarrow{AD}\right|. \]

This simplifies neatly by distributivity of the cross product:

\[ = \frac{1}{2}\left|\overrightarrow{AC}\times(\overrightarrow{AD}-\overrightarrow{AB})\right|. \]

But the vector difference \(\overrightarrow{AD}-\overrightarrow{AB}\) is precisely the diagonal vector \(\overrightarrow{BD}\). Hence, we have:

\[ \text{Area of quadrilateral }ABCD = \frac{1}{2}\left|\overrightarrow{AC}\times\overrightarrow{BD}\right|. \]

Thus, the area of a planar quadrilateral in space can be directly computed by taking half the magnitude of the cross product of its two diagonals. \(\blacksquare\)
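
As a final check, the diagonal formula can be compared against the two-triangle decomposition used in the proof. A sketch in Python; the four points are an arbitrary planar convex quadrilateral (all lie on the plane \(z = x + y\)):

```python
import numpy as np

A = np.array([0.0, 0.0, 0.0])
B = np.array([2.0, 0.0, 2.0])
C = np.array([3.0, 2.0, 5.0])
D = np.array([0.0, 2.0, 2.0])

# half the magnitude of the cross product of the diagonals AC and BD
area_diag = 0.5 * np.linalg.norm(np.cross(C - A, D - B))

# cross-check: sum of the areas of triangles ABC and CDA
area_tris = (0.5 * np.linalg.norm(np.cross(B - A, C - A))
             + 0.5 * np.linalg.norm(np.cross(C - A, D - A)))

print(area_diag, area_tris, np.isclose(area_diag, area_tris))  # ... True
```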