Product of Determinants

Determinant of Product of Matrices

For two square matrices \(A\) and \(B\) of order \(n\), the determinant of the product of \(A\) and \(B\) is the product of their determinants. That is,

\[ |AB| = |A| |B|. \]

We will not prove this property here, as the proof requires more advanced linear algebra concepts. However, we can easily verify this property with specific examples.

Let:

\[ A = \begin{pmatrix} 1 & 2 \\ 3 & 2 \end{pmatrix}, \quad B = \begin{pmatrix} 1 & 1 \\ 3 & 4 \end{pmatrix}. \]

First, calculate the determinants of \(A\) and \(B\).

Determinant of \(A\):

\[ |A| = \begin{vmatrix} 1 & 2 \\ 3 & 2 \end{vmatrix} = (1)(2) - (2)(3) = 2 - 6 = -4. \]

Determinant of \(B\):

\[ |B| = \begin{vmatrix} 1 & 1 \\ 3 & 4 \end{vmatrix} = (1)(4) - (1)(3) = 4 - 3 = 1. \]

Now, calculate the product \(AB\):

\[ AB = \begin{pmatrix} 1 & 2 \\ 3 & 2 \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 3 & 4 \end{pmatrix} = \begin{pmatrix} 1(1) + 2(3) & 1(1) + 2(4) \\ 3(1) + 2(3) & 3(1) + 2(4) \end{pmatrix} = \begin{pmatrix} 7 & 9 \\ 9 & 11 \end{pmatrix}. \]

Determinant of \(AB\):

\[ |AB| = \begin{vmatrix} 7 & 9 \\ 9 & 11 \end{vmatrix} = (7)(11) - (9)(9) = 77 - 81 = -4. \]

Finally, check if \( |AB| = |A||B| \):

\[ |A||B| = (-4)(1) = -4. \]

Thus, \( |AB| = |A||B| \), verifying the property for this example.
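
If you want to check this with software, here is a minimal sketch, assuming NumPy and floating-point determinants purely for illustration:

```python
import numpy as np

# Numerical check of |AB| = |A||B| for the matrices used above.
A = np.array([[1, 2], [3, 2]])
B = np.array([[1, 1], [3, 4]])

print(np.linalg.det(A @ B))                 # approximately -4.0
print(np.linalg.det(A) * np.linalg.det(B))  # approximately -4.0
```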

More generally, for square matrices \(A_1, A_2, \dots, A_k\), all of the same order, the determinant of their product is the product of their individual determinants. That is,

\[ |A_1 A_2 \cdots A_k| = |A_1| |A_2| \cdots |A_k|. \]

This property extends the rule for two matrices to any number of matrices.

In particular, taking all the matrices equal to a single matrix \(A\), we get, for every positive integer \(n\),

\[ |A^n| = |A|^n. \]
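
The power rule can be checked the same way; the sketch below again assumes NumPy, and the exponent \(n = 5\) is an arbitrary sample choice:

```python
import numpy as np

# Check |A^n| = |A|^n for the matrix A above, with a sample exponent n = 5.
A = np.array([[1, 2], [3, 2]])
n = 5

print(np.linalg.det(np.linalg.matrix_power(A, n)))  # approximately -1024.0
print(np.linalg.det(A) ** n)                        # (-4)^5 = -1024.0
```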

Multiplication of Determinants

Determinant multiplication can be handled in a manner similar to matrix multiplication. The key property is that:

\[ |A||B| = |AB|. \]

In standard matrix multiplication (row-column multiplication), we take the inner dot product of the rows of \(A\) with the columns of \(B\). However, the flexibility of determinants allows us to use various multiplication forms without changing the result.

Since:

\[ |A||B| = |AB^T| = |A||B^T|, \]

we can take the inner dot product of the rows of \(A\) with the rows of \(B\) (because the columns of \(B^T\) are the rows of \(B\)) to compute the same determinant. This method is known as row-row multiplication.

Similarly:

\[ |A||B| = |A^T B| = |A^T||B|, \]

indicates that we can perform column-column multiplication, where the columns of \(A\) are paired with the columns of \(B\) (because the rows of \(A^T\) are the columns of \(A\)).

Lastly:

\[ |A||B| = |A^T B^T| = |A^T||B^T|, \]

shows that we can also perform column-row multiplication, where the columns of \(A\) are paired with the rows of \(B\).

Thus we may freely choose between row-column, row-row, column-row, and column-column multiplication: all four forms yield the same value \( |A||B| \); only the intermediate array of dot products differs.
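
As a quick illustration of these identities, the following sketch checks that all four products have the same determinant; the use of NumPy and of random \(3 \times 3\) matrices are assumptions made purely for illustration:

```python
import numpy as np

# A B, A B^T, A^T B and A^T B^T are different matrices in general,
# but each one has determinant |A||B|.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

target = np.linalg.det(A) * np.linalg.det(B)
for P in (A @ B, A @ B.T, A.T @ B, A.T @ B.T):
    assert np.isclose(np.linalg.det(P), target)
print("all four products have determinant |A||B| =", target)
```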

Example:

Let

\[ A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}, \quad B = \begin{pmatrix} 2 & 0 \\ 1 & 3 \end{pmatrix}. \]

Directly, \(|A| = (1)(4) - (2)(3) = -2\) and \(|B| = (2)(3) - (0)(1) = 6\), so \(|A||B| = -12\). We will now recover this value using four methods: row-column, row-row, column-row, and column-column multiplication.

1. Row-Column Multiplication:

In row-column multiplication, we compute the inner dot product of the rows of \(A\) with the columns of \(B\).

\[ |A||B| = \begin{vmatrix} (1 \cdot 2 + 2 \cdot 1) & (1 \cdot 0 + 2 \cdot 3) \\ (3 \cdot 2 + 4 \cdot 1) & (3 \cdot 0 + 4 \cdot 3) \end{vmatrix} = \begin{vmatrix} 4 & 6 \\ 10 & 12 \end{vmatrix}. \]

Now calculate the determinant:

\[ |A||B| = (4 \cdot 12) - (6 \cdot 10) = 48 - 60 = -12. \]

2. Row-Row Multiplication:

In row-row multiplication, we compute the inner dot product of the rows of \(A\) with the rows of \(B\) (considering rows of \(B\) directly without transposing it).

\[ |A||B| = \begin{vmatrix} (1 \cdot 2 + 2 \cdot 0) & (1 \cdot 1 + 2 \cdot 3) \\ (3 \cdot 2 + 4 \cdot 0) & (3 \cdot 1 + 4 \cdot 3) \end{vmatrix} = \begin{vmatrix} 2 & 7 \\ 6 & 15 \end{vmatrix}. \]

Now calculate the determinant:

\[ |A||B| = (2 \cdot 15) - (7 \cdot 6) = 30 - 42 = -12. \]

3. Column-Row Multiplication:

In column-row multiplication, we compute the inner dot product of the columns of \(A\) with the rows of \(B\).

\[ |A||B| = \begin{vmatrix} (1 \cdot 2 + 3 \cdot 0) & (2 \cdot 2 + 4 \cdot 0) \\ (1 \cdot 1 + 3 \cdot 3) & (2 \cdot 1 + 4 \cdot 3) \end{vmatrix} = \begin{vmatrix} 2 & 4 \\ 10 & 14 \end{vmatrix}. \]

Now calculate the determinant:

\[ |A||B| = (2 \cdot 14) - (4 \cdot 10) = 28 - 40 = -12. \]

4. Column-Column Multiplication:

In column-column multiplication, we compute the inner dot product of the columns of \(A\) with the columns of \(B\).

\[ |A||B| = \begin{vmatrix} (1 \cdot 2 + 3 \cdot 1) & (1 \cdot 0 + 3 \cdot 3) \\ (2 \cdot 2 + 4 \cdot 1) & (2 \cdot 0 + 4 \cdot 3) \end{vmatrix} = \begin{vmatrix} 5 & 9 \\ 8 & 12 \end{vmatrix}. \]

Now calculate the determinant:

\[ |A||B| = (5 \cdot 12) - (9 \cdot 8) = 60 - 72 = -12. \]
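
The four intermediate arrays above can also be produced as ordinary matrix products, which gives a compact way to check the whole example; the sketch below assumes NumPy:

```python
import numpy as np

# Reproduce the four arrays computed above and confirm each determinant is -12.
A = np.array([[1, 2], [3, 4]])
B = np.array([[2, 0], [1, 3]])

products = {
    "row-column":    A @ B,        # [[4, 6], [10, 12]]
    "row-row":       A @ B.T,      # [[2, 7], [ 6, 15]]
    "column-row":    B @ A,        # [[2, 4], [10, 14]]  (transpose of A^T B^T)
    "column-column": A.T @ B,      # [[5, 9], [ 8, 12]]
}
for name, P in products.items():
    print(name, np.linalg.det(P))  # each is approximately -12.0
```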

No matter which of these forms we use, the value of the product determinant is the same. The structure of the resulting determinant, however, changes with the method of multiplication. Let us explore this with the following example.

Consider the determinant:

\[ \begin{vmatrix} a & b & c \\ c & a & b \\ b & c & a \end{vmatrix}. \]

We want to calculate the square of this determinant, which means multiplying it by itself:

\[ \begin{vmatrix} a & b & c \\ c & a & b \\ b & c & a \end{vmatrix} \cdot \begin{vmatrix} a & b & c \\ c & a & b \\ b & c & a \end{vmatrix}. \]

Row-Column Multiplication:

First, we will perform row-column multiplication. This involves taking the dot product of the rows of the first matrix with the columns of the second matrix. Let’s calculate the result of this product:

\[ \begin{vmatrix} (a \cdot a + b \cdot c + c \cdot b) & (a \cdot b + b \cdot a + c \cdot c) & (a \cdot c + b \cdot b + c \cdot a) \\ (c \cdot a + a \cdot c + b \cdot b) & (c \cdot b + a \cdot a + b \cdot c) & (c \cdot c + a \cdot b + b \cdot a) \\ (b \cdot a + c \cdot c + a \cdot b) & (b \cdot b + c \cdot a + a \cdot c) & (b \cdot c + c \cdot b + a \cdot a) \end{vmatrix}. \]

This results in the following matrix:

\[ \begin{vmatrix} a^2 + 2bc & c^2 + 2ab & b^2 + 2ac \\ b^2 +2ac & a^2 + 2bc & c^2 + 2ab\\ c^2 + 2ab & b^2+2ac & a^2 + 2bc \end{vmatrix}. \]

Row-Row Multiplication:

Now, let's perform row-row multiplication by taking the dot product of the rows of the first matrix with the rows of the second matrix. The multiplication will look like this:

\[ \begin{vmatrix} (a \cdot a + b \cdot b + c \cdot c) & (a \cdot c + b \cdot a + c \cdot b) & (a \cdot b + b \cdot c + c \cdot a) \\ (c \cdot a + a \cdot b + b \cdot c) & (c \cdot c + a \cdot a + b \cdot b) & (c \cdot b + a \cdot c + b \cdot a) \\ (b \cdot a + c \cdot b + a \cdot c) & (b \cdot c + c \cdot a + a \cdot b) & (b \cdot b + c \cdot c + a \cdot a) \end{vmatrix}. \]

This results in a matrix of a different structure:

\[ \begin{vmatrix} a^2 + b^2 + c^2 & ac + ab + bc & ab + ac + bc \\ ac + ab + bc & a^2 + b^2 + c^2 & ab + ac + bc \\ ab + ac + bc & ac + ab + bc & a^2 + b^2 + c^2 \end{vmatrix}. \]

This structure is completely different from the row-column result: every diagonal entry is the same sum of squares \(a^2 + b^2 + c^2\), and every off-diagonal entry is the same symmetric cross term \(ab + bc + ca\).

Although the determinant value remains the same in both the row-column and row-row methods, the structure of the resulting determinant is quite different. In the row-column multiplication, the entries are combinations of quadratic and cross terms, whereas in the row-row multiplication, the matrix structure emphasizes sums of squares and more uniform terms.

This shows how the determinant remains invariant in value but can have entirely different structures depending on the multiplication method.
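
A computer-algebra system can confirm this symbolically. The sketch below assumes SymPy; it builds both product structures and checks that each determinant equals the square of the original circulant determinant:

```python
import sympy as sp

# Symbolic check: the row-column and row-row products are different matrices,
# yet both determinants equal the square of the original circulant determinant.
a, b, c = sp.symbols('a b c')
M = sp.Matrix([[a, b, c], [c, a, b], [b, c, a]])

row_column = M * M       # entries like a^2 + 2bc, c^2 + 2ab, ...
row_row    = M * M.T     # entries like a^2 + b^2 + c^2, ab + bc + ca, ...

print(sp.expand(row_column.det() - M.det()**2))  # 0
print(sp.expand(row_row.det() - M.det()**2))     # 0
```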

Splitting a Determinant

Sometimes, complex-looking determinants can be simplified by splitting them into a product of two simpler determinants. This method doesn't work for all types of determinants, but for specific structures, it makes evaluation much easier.

Consider the determinant:

\[ \begin{vmatrix} a_1 p_1 + b_1 q_1 + c_1 r_1 & a_1 p_2 + b_1 q_2 + c_1 r_2 & a_1 p_3 + b_1 q_3 + c_1 r_3 \\ a_2 p_1 + b_2 q_1 + c_2 r_1 & a_2 p_2 + b_2 q_2 + c_2 r_2 & a_2 p_3 + b_2 q_3 + c_2 r_3 \\ a_3 p_1 + b_3 q_1 + c_3 r_1 & a_3 p_2 + b_3 q_2 + c_3 r_2 & a_3 p_3 + b_3 q_3 + c_3 r_3 \end{vmatrix}. \]

At first glance, this determinant looks complicated, but its structure allows us to break it into two simpler determinants. The \((i, j)\) entry is exactly the dot product \((a_i, b_i, c_i) \cdot (p_j, q_j, r_j)\), which is a row-row product of two simpler arrays. This suggests that we can factor the determinant into:

\[ \begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix} \cdot \begin{vmatrix} p_1 & q_1 & r_1 \\ p_2 & q_2 & r_2 \\ p_3 & q_3 & r_3 \end{vmatrix}. \]

This allows us to evaluate two smaller determinants instead of one large one. The first determinant involves the \(a_i\), \(b_i\), and \(c_i\) coefficients, and the second involves the \(p_i\), \(q_i\), and \(r_i\) terms. The original complex-looking determinant is then simply the product of these two determinants.

This technique shows how splitting a determinant into a product can simplify calculations dramatically, especially for structured determinants like the one above. However, this method only applies to certain types of determinants where such factorization is possible.
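
The factorization itself can be verified symbolically. The following sketch assumes SymPy and uses generic symbols \(a_i, b_i, c_i, p_j, q_j, r_j\):

```python
import sympy as sp

# Symbolic check that the determinant with entries a_i p_j + b_i q_j + c_i r_j
# equals det([a_i b_i c_i]) * det([p_j q_j r_j]).
a = sp.symbols('a1:4'); b = sp.symbols('b1:4'); c = sp.symbols('c1:4')
p = sp.symbols('p1:4'); q = sp.symbols('q1:4'); r = sp.symbols('r1:4')

big = sp.Matrix(3, 3, lambda i, j: a[i]*p[j] + b[i]*q[j] + c[i]*r[j])
ABC = sp.Matrix([[a[i], b[i], c[i]] for i in range(3)])
PQR = sp.Matrix([[p[i], q[i], r[i]] for i in range(3)])

print(sp.expand(big.det() - ABC.det() * PQR.det()))  # 0
```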

Example

Let \(\alpha\) and \(\beta\) be the roots of the equation \( ax^2 + bx + c = 0 \), and define \( S_n = \alpha^n + \beta^n \) for \( n \geq 1 \).

Evaluate the determinant:

\[ D = \begin{vmatrix} 3 & 1 + S_1 & 1 + S_2 \\ 1 + S_1 & 1 + S_2 & 1 + S_3 \\ 1 + S_2 & 1 + S_3 & 1 + S_4 \end{vmatrix} \]

Solution:

By definition,

\[ S_1 = \alpha + \beta, \quad S_2 = \alpha^2 + \beta^2, \quad S_3 = \alpha^3 + \beta^3, \quad S_4 = \alpha^4 + \beta^4. \]

This gives us the expanded determinant:

\[ D = \begin{vmatrix} 3 & 1 + \alpha + \beta & 1 + \alpha^2 + \beta^2 \\ 1 + \alpha + \beta & 1 + \alpha^2 + \beta^2 & 1 + \alpha^3 + \beta^3 \\ 1 + \alpha^2 + \beta^2 & 1 + \alpha^3 + \beta^3 & 1 + \alpha^4 + \beta^4 \end{vmatrix} \]

We write out each entry of the determinant as a sum of three products:

\[ D = \begin{vmatrix} 1\cdot1 + 1\cdot1 + 1\cdot1 & 1 + 1 \cdot \alpha + 1 \cdot \beta & 1 + 1 \cdot \alpha^2 + 1 \cdot \beta^2 \\ 1\cdot 1 + 1 \cdot \alpha + 1 \cdot \beta & 1 \cdot 1 + \alpha \cdot \alpha + \beta \cdot \beta & 1 \cdot 1 + \alpha \cdot \alpha^2 + \beta \cdot \beta^2 \\ 1 \cdot 1 + 1 \cdot \alpha^2 + 1 \cdot \beta^2 & 1 \cdot 1 + \alpha \cdot \alpha^2 + \beta \cdot \beta^2 & 1 \cdot 1 + \alpha^2 \cdot \alpha^2 + \beta^2 \cdot \beta^2 \end{vmatrix} \]

We can now clearly see that it splits into a product of two determinants:

\[ D = \begin{vmatrix} 1 & 1 & 1 \\ 1 & \alpha & \alpha^2 \\ 1 & \beta & \beta^2 \end{vmatrix} \begin{vmatrix} 1 & 1 & 1 \\ 1 & \alpha & \alpha^2 \\ 1 & \beta & \beta^2 \end{vmatrix} = {\begin{vmatrix} 1 & 1 & 1 \\ 1 & \alpha & \alpha^2 \\ 1 & \beta & \beta^2 \end{vmatrix}}^2 \]

We know that,

\[ \begin{vmatrix} 1 & 1 & 1 \\ 1 & \alpha & \alpha^2 \\ 1 & \beta & \beta^2 \end{vmatrix} = (\alpha - 1)(1 - \beta)(\alpha - \beta) \]

Thus:

\[ D = ((1 - \alpha)(1 - \beta)(\alpha - \beta))^2 \]

Simplifying, we get:

\[ D = (\alpha - \beta)^2 \left( (1 - \alpha)(1 - \beta) \right)^2 \]

Since \(\alpha\) and \(\beta\) are the roots of the quadratic equation \( ax^2 + bx + c = 0 \), we use:

\[ \alpha + \beta = -\frac{b}{a}, \quad \alpha \beta = \frac{c}{a}, \quad \alpha - \beta = \frac{\sqrt{b^2 - 4ac}}{a} \]

Also, \( (1 - \alpha)(1 - \beta) = 1 - (\alpha + \beta) + \alpha\beta = 1 + \frac{b}{a} + \frac{c}{a} = \frac{a + b + c}{a} \). Substituting these values,

\[ D = \left( \frac{\sqrt{b^2 - 4ac}}{a} \right)^2 \left( \frac{a + b + c}{a} \right)^2. \]

Simplifying:

\[ D = \frac{(b^2 - 4ac)(a + b + c)^2}{a^4} \]

which is the required value of the determinant.
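
As a final sanity check, the closed form can be compared with a direct numerical evaluation of \(D\); the quadratic \(x^2 + 5x + 6 = 0\) below is an arbitrary sample, and NumPy is assumed:

```python
import numpy as np

# Compare the closed form D = (b^2 - 4ac)(a + b + c)^2 / a^4 with a direct
# evaluation of the determinant for one sample quadratic.
a, b, c = 1.0, 5.0, 6.0
alpha, beta = np.roots([a, b, c])              # roots of ax^2 + bx + c = 0

S = [alpha**n + beta**n for n in range(1, 5)]  # S_1, S_2, S_3, S_4

D = np.array([
    [3,        1 + S[0], 1 + S[1]],
    [1 + S[0], 1 + S[1], 1 + S[2]],
    [1 + S[1], 1 + S[2], 1 + S[3]],
])

print(np.linalg.det(D))                        # approximately 144.0
print((b**2 - 4*a*c) * (a + b + c)**2 / a**4)  # 144.0
```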