
Properties of Determinants

Invariance under Transposition

One important property of determinants is that the determinant of a matrix does not change when you take the transpose of the matrix. In other words, if you switch the rows and columns of a matrix (i.e., take the transpose), the determinant remains the same:

\[ |A^T| = |A|. \]

This means that the determinant is invariant under transposition. Whether you calculate the determinant of the original matrix \(\mathbf{A}\) or its transpose \(\mathbf{A}^T\), the result will be identical.

Why is this true?

The reason behind this property is rooted in the way determinants are calculated. When you take the transpose of a matrix, you're essentially swapping its rows and columns. However, the determinant is computed by the same rule in either case: cofactor expansion uses the same minors and the same alternating signs whether you work along rows or along columns, so the calculation is symmetric with respect to rows and columns.

So, whether you expand along rows or along columns, the result is mathematically the same, which leads to:

\[ |A^T| = |A|. \]
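As an optional numerical sanity check (a minimal sketch assuming NumPy is available; the seed and matrix size are arbitrary), we can compare the two determinants directly:

```python
import numpy as np

rng = np.random.default_rng(0)   # arbitrary seed; any square matrix works
A = rng.random((4, 4))

# The determinant of the transpose equals the determinant of the matrix.
print(np.linalg.det(A), np.linalg.det(A.T))
assert np.isclose(np.linalg.det(A.T), np.linalg.det(A))
```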

Interchange of Rows and Columns

If two rows or two columns of a matrix are interchanged, the determinant of the new matrix is the negative of the original matrix’s determinant:

\[ |\mathbf{B}| = -|\mathbf{A}|. \]

Verification with a \(3 \times 3\) Matrix

Verifying this for a determinant of order 3 is sufficient for our purposes.

Let \(\mathbf{A}\) be a general \(3 \times 3\) matrix:

\[ \mathbf{A} = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}. \]

The determinant of \(\mathbf{A}\) is:

\[ |\mathbf{A}| = a_{11} \begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} - a_{12} \begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix} + a_{13} \begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix}. \]

Now, let’s swap the first and second rows of \(\mathbf{A}\), resulting in matrix \(\mathbf{B}\):

\[ \mathbf{B} = \begin{pmatrix} a_{21} & a_{22} & a_{23} \\ a_{11} & a_{12} & a_{13} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}. \]

Expand the determinant of \(\mathbf{B}\) along row 2 (which is identical to the original row 1 of \(\mathbf{A}\)):

\[ |\mathbf{B}| = - a_{11} \begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} + a_{12} \begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix} - a_{13} \begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix}. \]

This is exactly the expression for the determinant of \(\mathbf{A}\), but with the opposite sign. Thus,

\[ |\mathbf{B}| = -|\mathbf{A}|. \]

So, if we interchange rows or columns an even number of times, there is no change in the determinant. This is because each interchange reverses the sign of the determinant, and an even number of reversals brings the determinant back to its original sign:

\[ |\mathbf{B}| = (-1)^{\text{number of interchanges}} \cdot |\mathbf{A}|. \]

For even interchanges:

\[ (-1)^{\text{even}} = 1, \quad \text{so} \quad |\mathbf{B}| = |\mathbf{A}|. \]

For odd interchanges:

\[ (-1)^{\text{odd}} = -1, \quad \text{so} \quad |\mathbf{B}| = -|\mathbf{A}|. \]

Thus, if rows or columns are interchanged an even number of times, the determinant remains unchanged.
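The sign rule can be illustrated numerically; the short NumPy sketch below (illustrative only, with an arbitrary random matrix) performs one interchange and then a second one:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((3, 3))

B = A[[1, 0, 2], :]    # one interchange: swap rows 1 and 2
C = B[[0, 2, 1], :]    # second interchange: swap rows 2 and 3 of B

assert np.isclose(np.linalg.det(B), -np.linalg.det(A))   # odd number of swaps
assert np.isclose(np.linalg.det(C),  np.linalg.det(A))   # even number of swaps
```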

Zero Row or Column

If all elements of any row or any column of a matrix are zero, the determinant of the matrix is zero:

\[ \text{If } \text{row}_i = 0 \text{ or } \text{column}_j = 0, \quad |\mathbf{A}| = 0. \]

Multiplication by a Scalar

If you multiply all elements of a row or a column of a matrix by a scalar \(\lambda\), the determinant of the matrix is scaled by the same factor. In other words, multiplying a row or column by \(\lambda\) multiplies the determinant by \(\lambda\):

\[ |\mathbf{B}| = \lambda |\mathbf{A}|, \]

where \(\mathbf{B}\) is the matrix obtained by multiplying any row or column of \(\mathbf{A}\) by \(\lambda\).

Let's consider a matrix \(\mathbf{A}\) and multiply the first row by a scalar \(k\). The matrix \(\mathbf{B}\) becomes:

\[ \mathbf{B} = \begin{pmatrix} k \cdot a_{11} & k \cdot a_{12} & k \cdot a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}. \]

We will now expand the determinant of \(\mathbf{B}\) along the first row:

\[ |\mathbf{B}| = k \cdot a_{11} \begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} - k \cdot a_{12} \begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix} + k \cdot a_{13} \begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix}. \]

Factor out the common scalar \(k\):

\[ |\mathbf{B}| = k \left( a_{11} \begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} - a_{12} \begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix} + a_{13} \begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix} \right). \]

This is exactly \(k \cdot |\mathbf{A}|\), showing that multiplying one row by \(k\) scales the determinant by \(k\):

\[ |\mathbf{B}| = k \cdot |\mathbf{A}|. \]
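The same computation can be reproduced symbolically. Here is a small SymPy sketch (assuming SymPy is available) that scales the first row of a general \(3 \times 3\) matrix by \(k\) and confirms the factor:

```python
import sympy as sp

k = sp.symbols('k')
A = sp.MatrixSymbol('a', 3, 3).as_explicit()   # general 3x3 matrix of symbols
B = A.as_mutable()
B[0, :] = k * B[0, :]                          # multiply the first row by k

# det(B) equals k * det(A) identically in the entries of A.
assert sp.expand(B.det() - k * A.det()) == 0
```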

"Determinant of a Scalar Multiple of a Matrix"

For a scalar \(k\) and an \(n \times n\) matrix \(A\), the determinant of \(kA\) (where every element of \(A\) is multiplied by \(k\)) is given by:

\[ |kA| = k^n |A|. \]

Explanation:

When we multiply a matrix \(A\) by a scalar \(k\), we perform scalar multiplication, meaning that every element of the matrix is multiplied by \(k\). So, if:

\[ A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}, \]

then

\[ kA = \begin{pmatrix} k \cdot a_{11} & k \cdot a_{12} \\ k \cdot a_{21} & k \cdot a_{22} \end{pmatrix}. \]

Now, \( |kA| \) is the determinant of a matrix where every element has been multiplied by \(k\). The key idea is that we can take \(k\) out of each row of the determinant.

For example, for a \(2 \times 2\) matrix, we have:

\[ |kA| = \begin{vmatrix} k \cdot a_{11} & k \cdot a_{12} \\ k \cdot a_{21} & k \cdot a_{22} \end{vmatrix}. \]

Since each element in both rows has a factor of \(k\), we can factor out \(k\) from each row:

\[ |kA| = k \cdot k \cdot \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = k^2 |A|. \]

For an \(n \times n\) matrix, you can factor out \(k\) from each of the \(n\) rows. This gives:

\[ |kA| = k^n |A|. \]

So, multiplying every element of the matrix by \(k\) results in multiplying the determinant by \(k^n\), where \(n\) is the number of rows or columns in the matrix \(A\).
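A quick numerical check of this scaling law (a sketch with an arbitrary \(4 \times 4\) matrix and an arbitrary value of \(k\)):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 4, 2.5                    # arbitrary size and scalar
A = rng.random((n, n))

# Scaling every entry by k scales the determinant by k**n.
assert np.isclose(np.linalg.det(k * A), k**n * np.linalg.det(A))
```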

Two Rows or Columns Are Equal

If any two rows or any two columns of a matrix are equal, the determinant of the matrix is zero.

\[ |\mathbf{A}| = 0 \quad \text{if two rows or columns are identical}. \]

Verification Using a \(3 \times 3\) Matrix

Consider the matrix where the first and second rows are equal:

\[ \mathbf{A} = \begin{pmatrix} a_1 & b_1 & c_1 \\ a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \end{pmatrix}. \]

We will expand this determinant along row 3 to verify that the determinant is zero.

The determinant of \(\mathbf{A}\) is:

\[ |\mathbf{A}| = a_2 \begin{vmatrix} b_1 & c_1 \\ b_1 & c_1 \end{vmatrix} - b_2 \begin{vmatrix} a_1 & c_1 \\ a_1 & c_1 \end{vmatrix} + c_2 \begin{vmatrix} a_1 & b_1 \\ a_1 & b_1 \end{vmatrix}. \]

Now, let's compute each \(2 \times 2\) determinant:

  • \(\begin{vmatrix} b_1 & c_1 \\ b_1 & c_1 \end{vmatrix} = b_1 \cdot c_1 - b_1 \cdot c_1 = 0\),
  • \(\begin{vmatrix} a_1 & c_1 \\ a_1 & c_1 \end{vmatrix} = a_1 \cdot c_1 - a_1 \cdot c_1 = 0\),
  • \(\begin{vmatrix} a_1 & b_1 \\ a_1 & b_1 \end{vmatrix} = a_1 \cdot b_1 - a_1 \cdot b_1 = 0\).

Substituting these values back into the determinant expression:

\[ |\mathbf{A}| = a_2 \cdot 0 - b_2 \cdot 0 + c_2 \cdot 0 = 0. \]

Thus, if two rows or columns are equal, the determinant of the matrix is:

\[ |\mathbf{A}| = 0. \]
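A short symbolic check of this property, using the same matrix with two identical rows (a SymPy sketch, not part of the proof):

```python
import sympy as sp

a1, b1, c1, a2, b2, c2 = sp.symbols('a1 b1 c1 a2 b2 c2')
A = sp.Matrix([[a1, b1, c1],
               [a1, b1, c1],    # first two rows are identical
               [a2, b2, c2]])

# The determinant vanishes identically.
assert sp.expand(A.det()) == 0
```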

Two Rows or Columns Are Proportional

If two rows or columns of a matrix are proportional (i.e., one is a scalar multiple of the other), the determinant of the matrix is zero.

Consider the matrix:

\[ \mathbf{A} = \begin{pmatrix} a_1 & b_1 & c_1 \\ k \cdot a_1 & k \cdot b_1 & k \cdot c_1 \\ a_2 & b_2 & c_2 \end{pmatrix}. \]

Since the second row is a scalar multiple of the first row by \(k\), using the property that multiplying a row by a scalar \(k\) scales the determinant by \(k\), we can factor \(k\) out:

\[ |\mathbf{A}| = k \cdot \begin{vmatrix} a_1 & b_1 & c_1 \\ a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \end{vmatrix}. \]

Now, the determinant has two identical rows, and by the property that the determinant of a matrix with two equal rows (or columns) is zero, we have:

\[ |\mathbf{A}| = k \cdot 0 = 0. \]

Thus, if two rows or columns are proportional, the determinant of the matrix is zero.
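Numerically, the same conclusion holds; the sketch below builds a matrix whose second row is three times the first (an arbitrary illustrative choice) and confirms the determinant is zero up to rounding:

```python
import numpy as np

row = np.array([1.0, 2.0, 3.0])
A = np.vstack([row,
               3.0 * row,            # second row is 3 times the first
               [4.0, 1.0, 2.0]])

print(np.linalg.det(A))              # 0 up to floating-point rounding
assert np.isclose(np.linalg.det(A), 0.0)
```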

Splitting a Determinant as a Sum of Two Determinants

If each element of any row or column of a matrix is expressed as a sum of two or more terms, the determinant of the matrix can be written as the sum of two or more determinants. Specifically, if one row or column contains sums, the determinant can be split into separate determinants for each term.

For a matrix where the elements of the first row are sums:

\[ \begin{vmatrix} a_1 + \alpha & b_1 + \beta & c_1 + \gamma \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix} \]

this determinant can be expressed as the sum of two determinants:

\[ \begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix} + \begin{vmatrix} \alpha & \beta & \gamma \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix}. \]

To understand this, let’s expand the original determinant along the first row:

\[ \begin{vmatrix} a_1 + \alpha & b_1 + \beta & c_1 + \gamma \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix} = (a_1 + \alpha) \begin{vmatrix} b_2 & c_2 \\ b_3 & c_3 \end{vmatrix} - (b_1 + \beta) \begin{vmatrix} a_2 & c_2 \\ a_3 & c_3 \end{vmatrix} + (c_1 + \gamma) \begin{vmatrix} a_2 & b_2 \\ a_3 & b_3 \end{vmatrix}. \]

Now, apply the distributive property:

\[ = a_1 \begin{vmatrix} b_2 & c_2 \\ b_3 & c_3 \end{vmatrix} + \alpha \begin{vmatrix} b_2 & c_2 \\ b_3 & c_3 \end{vmatrix} - b_1 \begin{vmatrix} a_2 & c_2 \\ a_3 & c_3 \end{vmatrix} - \beta \begin{vmatrix} a_2 & c_2 \\ a_3 & c_3 \end{vmatrix} + c_1 \begin{vmatrix} a_2 & b_2 \\ a_3 & b_3 \end{vmatrix} + \gamma \begin{vmatrix} a_2 & b_2 \\ a_3 & b_3 \end{vmatrix}. \]

Grouping terms based on constants and sums:

\[ = \left( a_1 \begin{vmatrix} b_2 & c_2 \\ b_3 & c_3 \end{vmatrix} - b_1 \begin{vmatrix} a_2 & c_2 \\ a_3 & c_3 \end{vmatrix} + c_1 \begin{vmatrix} a_2 & b_2 \\ a_3 & b_3 \end{vmatrix} \right) + \left( \alpha \begin{vmatrix} b_2 & c_2 \\ b_3 & c_3 \end{vmatrix} - \beta \begin{vmatrix} a_2 & c_2 \\ a_3 & c_3 \end{vmatrix} + \gamma \begin{vmatrix} a_2 & b_2 \\ a_3 & b_3 \end{vmatrix} \right). \]

This clearly shows the determinant can be split into two separate determinants:

\[ \begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix} + \begin{vmatrix} \alpha & \beta & \gamma \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix}. \]
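This splitting can be verified symbolically; the SymPy sketch below mirrors the three determinants above:

```python
import sympy as sp

a1, b1, c1, a2, b2, c2, a3, b3, c3 = sp.symbols('a1 b1 c1 a2 b2 c2 a3 b3 c3')
al, be, ga = sp.symbols('alpha beta gamma')

rest = [[a2, b2, c2], [a3, b3, c3]]          # rows shared by all three determinants
combined = sp.Matrix([[a1 + al, b1 + be, c1 + ga]] + rest).det()
part1 = sp.Matrix([[a1, b1, c1]] + rest).det()
part2 = sp.Matrix([[al, be, ga]] + rest).det()

# The determinant with the summed first row equals the sum of the two determinants.
assert sp.expand(combined - part1 - part2) == 0
```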

Another perspective on this property is that it allows for the addition of two determinants. We can add two determinants that have exactly the same rows or columns, except possibly for one row or column, and the result is a new determinant where the differing row or column is the sum of the corresponding rows or columns in the original determinants.

Given two determinants \(\mathbf{A}_1\) and \(\mathbf{A}_2\) where all rows or columns are identical except for one, we can write:

\[ \begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix} + \begin{vmatrix} \alpha & \beta & \gamma \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix} = \begin{vmatrix} a_1 + \alpha & b_1 + \beta & c_1 + \gamma \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix}. \]

Here, both determinants have identical second and third rows, and only the first row differs. By adding the two determinants, the result is a new determinant in which the first row is the sum of the two original first rows.

The splitting can be applied repeatedly when more than one row contains sums. Consider the determinant

\[ \begin{vmatrix} a_1 + \alpha_1 & b_1 + \beta_1 & c_1 + \gamma_1 \\ a_2 + \alpha_2 & b_2 + \beta_2 & c_2 + \gamma_2 \\ a_3 & b_3 & c_3 \end{vmatrix}. \]

We begin by splitting this determinant along the first row. Applying the property that allows us to express the sum in a row as the sum of two determinants, we have:

\[ \begin{vmatrix} a_1 + \alpha_1 & b_1 + \beta_1 & c_1 + \gamma_1 \\ a_2 + \alpha_2 & b_2 + \beta_2 & c_2 + \gamma_2 \\ a_3 & b_3 & c_3 \end{vmatrix} = \begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 + \alpha_2 & b_2 + \beta_2 & c_2 + \gamma_2 \\ a_3 & b_3 & c_3 \end{vmatrix} + \begin{vmatrix} \alpha_1 & \beta_1 & \gamma_1 \\ a_2 + \alpha_2 & b_2 + \beta_2 & c_2 + \gamma_2 \\ a_3 & b_3 & c_3 \end{vmatrix}. \]

Next, each of the two determinants can be further split, as the second row also consists of sums. Using the same property, we break each determinant as follows:

\[ \begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 + \alpha_2 & b_2 + \beta_2 & c_2 + \gamma_2 \\ a_3 & b_3 & c_3 \end{vmatrix} = \begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix} + \begin{vmatrix} a_1 & b_1 & c_1 \\ \alpha_2 & \beta_2 & \gamma_2 \\ a_3 & b_3 & c_3 \end{vmatrix}, \]

and

\[ \begin{vmatrix} \alpha_1 & \beta_1 & \gamma_1 \\ a_2 + \alpha_2 & b_2 + \beta_2 & c_2 + \gamma_2 \\ a_3 & b_3 & c_3 \end{vmatrix} = \begin{vmatrix} \alpha_1 & \beta_1 & \gamma_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix} + \begin{vmatrix} \alpha_1 & \beta_1 & \gamma_1 \\ \alpha_2 & \beta_2 & \gamma_2 \\ a_3 & b_3 & c_3 \end{vmatrix}. \]

Thus, the original determinant is split into four determinants:

\[ \begin{vmatrix} a_1 + \alpha_1 & b_1 + \beta_1 & c_1 + \gamma_1 \\ a_2 + \alpha_2 & b_2 + \beta_2 & c_2 + \gamma_2 \\ a_3 & b_3 & c_3 \end{vmatrix} = \begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix} + \begin{vmatrix} a_1 & b_1 & c_1 \\ \alpha_2 & \beta_2 & \gamma_2 \\ a_3 & b_3 & c_3 \end{vmatrix} + \begin{vmatrix} \alpha_1 & \beta_1 & \gamma_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix} + \begin{vmatrix} \alpha_1 & \beta_1 & \gamma_1 \\ \alpha_2 & \beta_2 & \gamma_2 \\ a_3 & b_3 & c_3 \end{vmatrix}. \]

Next, consider a determinant whose first row depends on an index \(k\):

\[ D_k = \begin{vmatrix} f_1(k) & f_2(k) & f_3(k) \\ a & b & c \\ p & q & r \end{vmatrix}. \]

We now sum this determinant over \(k\), where \(k\) ranges from \(1\) to \(n\):

\[ \sum_{k=1}^{n} D_k = \sum_{k=1}^{n} \begin{vmatrix} f_1(k) & f_2(k) & f_3(k) \\ a & b & c \\ p & q & r \end{vmatrix}. \]

Using the property that allows the summation of determinants when only one row or column depends on \(k\), we can express the summation as a single determinant where the first row entries are the sums of the respective terms for each \(f_i(k)\):

\[ \sum_{k=1}^{n} D_k = \begin{vmatrix} \sum_{k=1}^{n} f_1(k) & \sum_{k=1}^{n} f_2(k) & \sum_{k=1}^{n} f_3(k) \\ a & b & c \\ p & q & r \end{vmatrix}. \]

This shows that when we sum the determinants over \(k\), the result is a single determinant where the first row consists of the sums \(\sum_{k=1}^{n} f_1(k)\), \(\sum_{k=1}^{n} f_2(k)\), and \(\sum_{k=1}^{n} f_3(k)\), while the second and third rows remain unchanged.
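As an illustration (a SymPy sketch; the choices \(f_1(k) = k\), \(f_2(k) = k^2\), \(f_3(k) = k^3\) are hypothetical examples, not taken from the text above), the sum of the determinants matches the determinant of the summed row:

```python
import sympy as sp

a, b, c, p, q, r, k, n = sp.symbols('a b c p q r k n')

def D(first_row):
    """Determinant with the given first row; the other two rows are fixed."""
    return sp.Matrix([first_row, [a, b, c], [p, q, r]]).det()

lhs = sp.summation(D([k, k**2, k**3]), (k, 1, n))          # sum of determinants
rhs = D([sp.summation(k,    (k, 1, n)),
         sp.summation(k**2, (k, 1, n)),
         sp.summation(k**3, (k, 1, n))])                    # determinant of sums

assert sp.simplify(lhs - rhs) == 0
```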

Adding a Multiple of One Row or Column to Another Row or Column

If you take any row (or column) of a matrix and add a multiple \(\lambda\) of another row (or column) to it, the value of the determinant remains unchanged. This operation is commonly used in row or column manipulation without affecting the determinant.

Mathematically:

Given a matrix \(\mathbf{A}\), if you perform the operation:

\[ R_i \rightarrow R_i + \lambda R_j \]

or

\[ C_i \rightarrow C_i + \lambda C_j \]

(where \(R_i\) and \(R_j\) represent rows, and \(C_i\) and \(C_j\) represent columns, and \(\lambda\) is any scalar), the determinant remains the same:

\[ |\mathbf{A}| = |\mathbf{A'}|, \]

where \(\mathbf{A'}\) is the matrix after adding \(\lambda R_j\) to \(R_i\), or \(\lambda C_j\) to \(C_i\).

We perform this kind of operation—adding a multiple of one row or column to another—to manipulate elements in a way that makes the determinant easier to work with or more informative. These operations are especially useful in simplifying determinants to a form that is more manageable for calculation.

For example, to simplify a determinant, we often use these row and column operations to create zeros in a row or column. A determinant with zeros in any row or column is much easier to evaluate, as it reduces the number of terms in the expansion.

Since adding a multiple of one row (or column) to another does not change the value of the determinant, we can strategically use this property to transform a complex determinant into a simpler form. For instance, if a row or column is filled with zeros, the expansion of the determinant involves fewer calculations, often leading to a quicker evaluation.

Example

Consider the determinant

\[ \begin{vmatrix} 1 & 2 & 3 \\ 2 & 3 & 1 \\ 4 & 1 & 2 \end{vmatrix}. \]

To simplify this determinant, we subtract two times the first row from the second row, and four times the first row from the third row. Subtracting two times the first row from the second row gives:

\[ \begin{pmatrix} 2 & 3 & 1 \end{pmatrix} - 2 \times \begin{pmatrix} 1 & 2 & 3 \end{pmatrix} = \begin{pmatrix} 0 & -1 & -5 \end{pmatrix}. \]

Next, subtracting four times the first row from the third row gives:

\[ \begin{pmatrix} 4 & 1 & 2 \end{pmatrix} - 4 \times \begin{pmatrix} 1 & 2 & 3 \end{pmatrix} = \begin{pmatrix} 0 & -7 & -10 \end{pmatrix}. \]

Thus, the determinant becomes:

\[ \begin{vmatrix} 1 & 2 & 3 \\ 0 & -1 & -5 \\ 0 & -7 & -10 \end{vmatrix}. \]

Now, we can expand this determinant along the first column, which contains two zeros. This simplifies to:

\[ 1 \cdot \begin{vmatrix} -1 & -5 \\ -7 & -10 \end{vmatrix}. \]

The \(2 \times 2\) determinant is:

\[ \begin{vmatrix} -1 & -5 \\ -7 & -10 \end{vmatrix} = (-1)(-10) - (-5)(-7) = 10 - 35 = -25. \]

Therefore, the original determinant evaluates to:

\[ -25. \]

This process demonstrates how row operations can simplify the determinant calculation by creating zeros.
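The example can be confirmed numerically (a NumPy sketch of the matrices above):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 3.0, 1.0],
              [4.0, 1.0, 2.0]])

B = A.copy()
B[1] -= 2 * B[0]     # R2 -> R2 - 2*R1
B[2] -= 4 * B[0]     # R3 -> R3 - 4*R1

# Both determinants are -25; the row operations do not change the value.
print(np.linalg.det(A), np.linalg.det(B))
assert np.isclose(np.linalg.det(A), -25.0)
assert np.isclose(np.linalg.det(B), -25.0)
```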

Example

Consider the determinant

\[ \begin{vmatrix} 1 & 1 & 1 \\ a & b & c \\ a^2 & b^2 & c^2 \end{vmatrix}. \]

To prove that this determinant equals \((a - b)(b - c)(c - a)\), we start by subtracting the first column from the second and third columns. This operation simplifies the determinant without changing its value. After performing the column operations \(C_2 \rightarrow C_2 - C_1\) and \(C_3 \rightarrow C_3 - C_1\), we get:

\[ \begin{vmatrix} 1 & 0 & 0 \\ a & b - a & c - a \\ a^2 & b^2 - a^2 & c^2 - a^2 \end{vmatrix}. \]

Next, we factor out the common terms from the second and third columns. Notice that \(b^2 - a^2 = (b - a)(b + a)\) and \(c^2 - a^2 = (c - a)(c + a)\), so the determinant becomes:

\[ (b - a)(c - a) \begin{vmatrix} 1 & 0 & 0 \\ a & 1 & 1 \\ a^2 & b + a & c + a \end{vmatrix}. \]

Since the first row contains two zeros, we can expand the determinant along the first row, leaving us with the smaller determinant:

\[ (b - a)(c - a) \cdot \begin{vmatrix} 1 & 1 \\ b + a & c + a \end{vmatrix}. \]

Now, we evaluate the \(2 \times 2\) determinant:

\[ \begin{vmatrix} 1 & 1 \\ b + a & c + a \end{vmatrix} = (c + a) - (b + a) = c - b. \]

Thus, the original determinant simplifies to:

\[ (b - a)(c - a)(c - b). \]

Finally, rearranging the factors gives us the desired result:

\[ (a - b)(b - c)(c - a). \]
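A symbolic confirmation of this identity (a SymPy sketch, not part of the derivation):

```python
import sympy as sp

a, b, c = sp.symbols('a b c')
D = sp.Matrix([[1, 1, 1],
               [a, b, c],
               [a**2, b**2, c**2]]).det()

assert sp.expand(D - (a - b) * (b - c) * (c - a)) == 0
```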

Example

Consider the determinant:

\[ \begin{vmatrix} 1 & 1 & 1 \\ a & b & c \\ a^3 & b^3 & c^3 \end{vmatrix}. \]

We will prove that:

\[ \begin{vmatrix} 1 & 1 & 1 \\ a & b & c \\ a^3 & b^3 & c^3 \end{vmatrix} = (a - b)(b - c)(c - a)(a + b + c). \]

First, subtract the first column from the second and third columns. This results in:

\[ \begin{vmatrix} 1 & 0 & 0 \\ a & b - a & c - a \\ a^3 & b^3 - a^3 & c^3 - a^3 \end{vmatrix}. \]

Factor out \((b - a)\) and \((c - a)\) from the second and third columns. Since \(b^3 - a^3 = (b - a)(b^2 + ab + a^2)\) and \(c^3 - a^3 = (c - a)(c^2 + ac + a^2)\), the determinant becomes:

\[ (b - a)(c - a) \begin{vmatrix} 1 & 0 & 0 \\ a & 1 & 1 \\ a^3 & b^2 + ab + a^2 & c^2 + ac + a^2 \end{vmatrix}. \]

Expanding along the first row gives:

\[ (b - a)(c - a) \begin{vmatrix} 1 & 1 \\ b^2 + ab + a^2 & c^2 + ac + a^2 \end{vmatrix}. \]

The \(2 \times 2\) determinant simplifies to:

\[ (b - a)(c - a)((c^2 + ac + a^2) - (b^2 + ab + a^2)) = (b-a)(c-a)(c^2-b^2+ac-ab) = (b-a)(c-a)((c-b)(c+b)+a(c-b))= (b - a)(c - a)(c - b)(a + b + c). \]

Thus, the determinant is:

\[ (a - b)(b - c)(c - a)(a + b + c). \]

Example

Prove that:

\[ \begin{vmatrix} 1 & 1 & 1 \\ a^2 & b^2 & c^2 \\ a^3 & b^3 & c^3 \end{vmatrix} = (a - b)(b - c)(c - a)(ab + bc + ca). \]

Solution:

First, subtract the first column from the second and third columns, yielding:

\[ \begin{vmatrix} 1 & 0 & 0 \\ a^2 & b^2 - a^2 & c^2 - a^2 \\ a^3 & b^3 - a^3 & c^3 - a^3 \end{vmatrix}. \]

Next, factor out \((b - a)\) from column 2 and \((c - a)\) from column 3. This simplifies the determinant to:

\[ (b - a)(c - a) \begin{vmatrix} 1 & 0 & 0 \\ a^2 & b + a & c + a \\ a^3 & b^2 + ab + a^2 & c^2 + ac + a^2 \end{vmatrix}. \]

Now, subtract column 2 from column 3:

\[ \begin{vmatrix} 1 & 0 & 0 \\ a^2 & b + a & (c + a) - (b + a) \\ a^3 & b^2 + ab + a^2 & (c^2 + ac + a^2) - (b^2 + ab + a^2) \end{vmatrix} = \begin{vmatrix} 1 & 0 & 0 \\ a^2 & b + a & c - b \\ a^3 & b^2 + ab + a^2 & (c^2 - b^2) + (ac - ab) \end{vmatrix}. \]

Simplifying the third column, factor out \((c - b)\) from column 3:

\[ (b - a)(c - a)(c - b) \begin{vmatrix} 1 & 0 & 0 \\ a^2 & b + a & 1 \\ a^3 & b^2 + ab + a^2 & a+b+c \end{vmatrix}. \]

Finally, expand the determinant along the first row:

\[ (b - a)(c - a)(c - b) \cdot \begin{vmatrix} b + a & 1 \\ b^2 + ab + a^2 & a + b + c \end{vmatrix}. \]

The \(2 \times 2\) determinant evaluates to:

\[ (b + a)(a+b+c) - (b^2 + ab + a^2) = ab + bc + ca. \]

Thus, the original determinant simplifies to:

\[ (b - a)(c - a)(c - b)(ab + bc + ca), \]

or equivalently:

\[ (a - b)(b - c)(c - a)(ab + bc + ca). \]
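Both this identity and the previous one can be confirmed symbolically in the same way (a SymPy sketch):

```python
import sympy as sp

a, b, c = sp.symbols('a b c')
base = (a - b) * (b - c) * (c - a)

cases = [
    # rows of the determinant                              expected extra factor
    ([[1, 1, 1], [a, b, c], [a**3, b**3, c**3]],           a + b + c),
    ([[1, 1, 1], [a**2, b**2, c**2], [a**3, b**3, c**3]],  a*b + b*c + c*a),
]

for rows, extra in cases:
    D = sp.Matrix(rows).det()
    assert sp.expand(D - base * extra) == 0
```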

Example

Prove that:

\[ \begin{vmatrix} p - q - r & 2p & 2p \\ 2q & q - r - p & 2q \\ 2r & 2r & r - p - q \end{vmatrix} = (p + q + r)^3. \]

Solution:

Consider the determinant:

\[ D = \begin{vmatrix} p - q - r & 2p & 2p \\ 2q & q - r - p & 2q \\ 2r & 2r & r - p - q \end{vmatrix}. \]

Perform the row operation \(R_1 \rightarrow R_1 + R_2 + R_3\), i.e., replace the first row by the sum of all three rows. This operation simplifies the first row:

\[ D = \begin{vmatrix} (p - q - r) + 2q + 2r & 2p + (q - r - p) + 2r & 2p + 2q + (r - p - q) \\ 2q & q - r - p & 2q \\ 2r & 2r & r - p - q \end{vmatrix} \]

This simplifies to:

\[ D = \begin{vmatrix} p + q + r & p + q + r & p + q + r \\ 2q & q - r - p & 2q \\ 2r & 2r & r - p - q \end{vmatrix}. \]

Next, factor out \( (p + q + r) \) from the first row:

\[ D = (p + q + r) \begin{vmatrix} 1 & 1 & 1 \\ 2q & q - r - p & 2q \\ 2r & 2r & r - p - q \end{vmatrix}. \]

Now, perform column operations: subtract the first column from the second column and from the third column, i.e., \(C_2 \rightarrow C_2 - C_1\) and \(C_3 \rightarrow C_3 - C_1\):

\[ D = (p + q + r) \begin{vmatrix} 1 & 0 & 0 \\ 2q & -(p + q + r) & 0 \\ 2r & 0 & -(p + q + r) \end{vmatrix}. \]

Factor \(-(p + q + r)\) from the second and third columns:

\[ D = (p + q + r) \cdot (-(p + q + r)) \cdot (-(p + q + r)) \begin{vmatrix} 1 & 0 & 0 \\ 2q & 1 & 0 \\ 2r & 0 & 1 \end{vmatrix}. \]

The remaining determinant is:

\[ \begin{vmatrix} 1 & 0 & 0 \\ 2q & 1 & 0 \\ 2r & 0 & 1 \end{vmatrix} = 1. \]

Thus, the determinant simplifies to:

\[ D = (p + q + r) \cdot (-(p + q + r)) \cdot (-(p + q + r)) = (p + q + r)^3. \]
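A quick numerical spot check of this identity (a sketch with arbitrary values of \(p\), \(q\), and \(r\)):

```python
import numpy as np

p, q, r = 1.3, -0.4, 2.7        # arbitrary test values
A = np.array([[p - q - r, 2 * p,     2 * p],
              [2 * q,     q - r - p, 2 * q],
              [2 * r,     2 * r,     r - p - q]])

assert np.isclose(np.linalg.det(A), (p + q + r) ** 3)
```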

Determinant of a Circulant Matrix

Prove that:

\[ \begin{vmatrix} a & b & c \\ c & a & b \\ b & c & a \end{vmatrix} = (a + b + c)(a^2 + b^2 + c^2 - ab - bc - ca) = a^3 + b^3 + c^3 - 3abc. \]

Solution:

We start with the determinant:

\[ D = \begin{vmatrix} a & b & c \\ c & a & b \\ b & c & a \end{vmatrix}. \]

First, we replace the first row by the sum of all three rows: \( R_1 \rightarrow R_1 + R_2 + R_3 \). Performing this operation, we get:

\[ D = \begin{vmatrix} (a + b + c) & (a + b + c) & (a + b + c) \\ c & a & b \\ b & c & a \end{vmatrix}. \]

Now, factor out \((a + b + c)\) from the first row:

\[ D = (a + b + c) \begin{vmatrix} 1 & 1 & 1 \\ c & a & b \\ b & c & a \end{vmatrix}. \]

Next, expand the \(3 \times 3\) determinant. We use cofactor expansion along the first row:

\[ \begin{vmatrix} 1 & 1 & 1 \\ c & a & b \\ b & c & a \end{vmatrix} = 1 \cdot \begin{vmatrix} a & b \\ c & a \end{vmatrix} - 1 \cdot \begin{vmatrix} c & b \\ b & a \end{vmatrix} + 1 \cdot \begin{vmatrix} c & a \\ b & c \end{vmatrix}. \]

Each of the \(2 \times 2\) determinants evaluates to:

\[ \begin{vmatrix} a & b \\ c & a \end{vmatrix} = a^2 - bc, \quad \begin{vmatrix} c & b \\ b & a \end{vmatrix} = ac - b^2, \quad \begin{vmatrix} c & a \\ b & c \end{vmatrix} = c^2 - ab. \]

Thus, the expression becomes:

\[ D = (a + b + c) \left[ (a^2 - bc) - (ac - b^2) + (c^2 - ab) \right]. \]

Simplifying inside the brackets:

\[ a^2 - bc - ac + b^2 + c^2 - ab = a^2 + b^2 + c^2 - ab - bc - ac. \]

So we get:

\[ D = (a + b + c)(a^2 + b^2 + c^2 - ab - bc - ac). \]

To prove the second part of the equation, we expand:

\[ (a + b + c)(a^2 + b^2 + c^2 - ab - bc - ac). \]

Using the distributive property:

\[ a(a^2 + b^2 + c^2 - ab - bc - ac) + b(a^2 + b^2 + c^2 - ab - bc - ac) + c(a^2 + b^2 + c^2 - ab - bc - ac), \]

which simplifies to:

\[ a^3 + b^3 + c^3 - 3abc. \]

Thus, the determinant also equals:

\[ a^3 + b^3 + c^3 - 3abc. \]

Hence, we have proven:

\[ \begin{vmatrix} a & b & c \\ c & a & b \\ b & c & a \end{vmatrix} = (a + b + c)(a^2 + b^2 + c^2 - ab - bc - ca) = a^3 + b^3 + c^3 - 3abc. \]
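Finally, a symbolic check of the circulant identity (a SymPy sketch):

```python
import sympy as sp

a, b, c = sp.symbols('a b c')
D = sp.Matrix([[a, b, c],
               [c, a, b],
               [b, c, a]]).det()

assert sp.expand(D - (a**3 + b**3 + c**3 - 3*a*b*c)) == 0
```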