
Inverse of a Matrix

Now that we have explored the determinant of a matrix, let’s define the inverse of a matrix.

The inverse is only defined for square matrices. The inverse of a square matrix \(A\) of order \(n\) is another square matrix \(X\) of the same order such that:

\[ AX = XA = I_n \]

Where \(I_n\) is the identity matrix of order \(n\). Essentially, the matrix \(X\) "undoes" the effect of \(A\), much like how the reciprocal of a number reverses its multiplication.

Invertibility of a Matrix:

It turns out that such a matrix \(X\) does not always exist. When it does, we say that the matrix \(A\) is invertible, and we denote the inverse matrix \(X\) by \(A^{-1}\). In a way, \(A^{-1}\) acts as the reciprocal of \(A\).

The critical condition for a matrix to have an inverse is:

\[ \boxed{|A| \neq 0} \]

If the determinant of \(A\) is non-zero, then \(A^{-1}\) exists, and \(A\) is invertible.

Singular Matrices:

When the determinant of \(A\) equals zero, i.e., \(|A| = 0\), we call \(A\) a singular matrix, and in this case, no inverse matrix exists for \(A\). In other words, a matrix is non-invertible or singular if its determinant is zero.

Finding the Inverse

The next logical step is understanding how to find the inverse of a matrix and why the inverse exists only when \(|A| \neq 0\).

Finding the Inverse Using the Gauss-Jordan Method

From the definition, we know that the inverse matrix \(A^{-1}\) is the matrix such that:

\[ AX = I_n \]

Here, \(X = A^{-1}\), and \(I_n\) is the identity matrix of order \(n\). We can think of this equation as breaking down into \(n\) systems of linear equations.

Let's represent the matrix \(X\) as:

\[ X = \begin{pmatrix} x_1 & x_2 & \dots & x_n \end{pmatrix} \]

where each \(x_i\) is a column matrix with \(n\) unknowns.

On the right-hand side, the identity matrix \(I_n\) can be written as:

\[ I_n = \begin{pmatrix} e_1 & e_2 & \dots & e_n \end{pmatrix} \]

where \(e_i\) is the \(i\)-th column of the identity matrix (i.e., \(e_1 = [1 \ 0 \ \dots \ 0]^T\), \(e_2 = [0 \ 1 \ \dots \ 0]^T\), and so on).

The equation \(AX = I_n\) can be viewed as \(n\) separate systems of linear equations:

  • \(Ax_1 = e_1\)
  • \(Ax_2 = e_2\)
  • \(Ax_3 = e_3\)
  • \( \dots \)
  • \(Ax_n = e_n\)

Each system \(Ax_i = e_i\) is a system of \(n\) linear equations in the unknowns of the column vector \(x_i\). However, instead of solving these systems separately, we can combine them into one augmented matrix equation.

We write all these systems together as:

\[ \left[ A \middle| e_1 \ \dots \ e_n \right] = \left[ A \middle| I_n \right] \]

The matrix \(\left[A \middle| I_n \right]\) is the augmented matrix where the matrix \(A\) is augmented with the identity matrix \(I_n\).

The goal of the Gauss-Jordan process is to apply row operations to transform \(A\) into the identity matrix \(I_n\) (if possible). If this is achievable, the inverse exists. If not, it indicates that the matrix \(A\) is singular (i.e., non-invertible).

The process can be summarized as follows:

  1. We start with the augmented matrix \(\left[A \middle| I_n\right]\).
  2. Apply row operations to the matrix \(A\) with the goal of converting it to \(I_n\).
  3. As we perform row operations on \(A\), we apply the same row operations to \(I_n\) on the right-hand side (each operation acts on an entire row of the augmented matrix).

When we successfully convert \(A\) to the identity matrix \(I_n\) on the left side, the matrix on the right-hand side becomes the inverse \(A^{-1}\). Therefore, the augmented matrix evolves as follows:

\[ \left[A \middle| I_n \right] \quad \xrightarrow{\text{Gauss-Jordan}} \quad \left[I_n \middle| A^{-1}\right] \]

Thus, by applying the Gauss-Jordan elimination process, we simultaneously find the inverse matrix \(A^{-1}\) (if it exists).
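
To make the procedure concrete, here is a minimal Python sketch of Gauss-Jordan inversion (the function name, the use of partial pivoting, and the tolerance `tol` are illustrative choices, not part of the method itself):

```python
def inverse_gauss_jordan(A, tol=1e-12):
    """Invert a square matrix by row-reducing the augmented matrix [A | I_n]."""
    n = len(A)
    # Build the augmented matrix [A | I_n] as a list of rows.
    aug = [list(map(float, row)) + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(A)]

    for col in range(n):
        # Partial pivoting: pick the row (at or below `col`) with the largest entry.
        pivot_row = max(range(col, n), key=lambda r: abs(aug[r][col]))
        if abs(aug[pivot_row][col]) < tol:
            raise ValueError("matrix is singular: no inverse exists")
        aug[col], aug[pivot_row] = aug[pivot_row], aug[col]

        # Scale the pivot row so the pivot entry becomes 1.
        pivot = aug[col][col]
        aug[col] = [x / pivot for x in aug[col]]

        # Eliminate this column from every other row.
        for r in range(n):
            if r != col:
                factor = aug[r][col]
                aug[r] = [x - factor * y for x, y in zip(aug[r], aug[col])]

    # The left half is now I_n, so the right half is A^{-1}.
    return [row[n:] for row in aug]


print(inverse_gauss_jordan([[2, 1], [5, 3]]))  # approximately [[3, -1], [-5, 2]]
```

If no non-zero pivot can be found in some column, the left block cannot be reduced to \(I_n\); this is exactly the singular case discussed next.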

When Does the Inverse Exist?

As you can see, the inverse \(A^{-1}\) exists only if we can convert \(A\) into \(I_n\) through row operations. This is possible if and only if the determinant of \(A\) is non-zero (\(|A| \neq 0\)). (The reason is that the elementary row operations used in the Gauss-Jordan process only scale the determinant by non-zero factors, so they never change whether it is zero; from this it can be shown that \(A\) can be reduced to \(I_n\) exactly when \(|A| \neq 0\).) If \(|A| = 0\), the matrix \(A\) is singular, it cannot be transformed into \(I_n\), and the inverse does not exist.

Finding the Inverse Using the Adjoint of a Matrix

Another method to find the inverse of a matrix involves using a concept called the adjoint or adjugate of a matrix.

Definition of Adjoint

Let \(A = [a_{ij}]_{n \times n}\) be a square matrix of order \(n\). The adjoint (or adjugate) of \(A\), denoted by \(\text{adj}(A)\), is the transpose of the cofactor matrix of \(A\).

Let’s consider the matrix \(A\) as:

\[ A = \begin{pmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \dots & a_{nn} \end{pmatrix} \]

The adjugate matrix, \(\text{adj}(A)\), is the transpose of the cofactor matrix:

\[ \text{adj}(A) = \begin{pmatrix} C_{11} & C_{12} & \dots & C_{1n} \\ C_{21} & C_{22} & \dots & C_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ C_{n1} & C_{n2} & \dots & C_{nn} \end{pmatrix}^T \]

So the elements \(C_{ij}\) are the cofactors corresponding to each entry \(a_{ij}\) of the matrix.
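
As a concrete illustration (the standard \(2 \times 2\) case), each cofactor is a single entry with an alternating sign, so

\[ A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, \qquad \text{cofactor matrix} = \begin{pmatrix} d & -c \\ -b & a \end{pmatrix}, \qquad \text{adj}(A) = \begin{pmatrix} d & -b \\ -c & a \end{pmatrix} \]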

How to use the adjoint to find the inverse

Let \(\mathbf{A} = [a_{ij}]_{n \times n}\) be a square matrix of order \(n\).

Firstly, we realize that multiplying a matrix by its adjugate leads to an important result. When we multiply the matrix \(\mathbf{A}\) by its adjoint \(\text{adj}(\mathbf{A})\), we get:

\[ \mathbf{A} \cdot \text{adj}(\mathbf{A}) = \begin{pmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \dots & a_{nn} \end{pmatrix} \cdot \begin{pmatrix} C_{11} & C_{21} & \dots & C_{n1} \\ C_{12} & C_{22} & \dots & C_{n2} \\ \vdots & \vdots & \ddots & \vdots \\ C_{1n} & C_{2n} & \dots & C_{nn} \end{pmatrix} \]

By using a property of determinants, we know that when a row (or column) of a matrix is multiplied by the cofactors of the same row (or column), the result is the determinant of the matrix. Specifically, for any row \(i\),

\[ a_{i1} C_{i1} + a_{i2} C_{i2} + \dots + a_{in} C_{in} = |\mathbf{A}| \]

where \(|\mathbf{A}|\) is the determinant of \(\mathbf{A}\). This holds true for any row or column of \(\mathbf{A}\).

Furthermore, if we multiply a row of \(\mathbf{A}\) by the cofactors of a different row, the result is zero:

\[ a_{i1} C_{j1} + a_{i2} C_{j2} + \dots + a_{in} C_{jn} = 0 \quad \text{if} \ i \neq j \]

Thus, the product of \(\mathbf{A}\) and its adjugate \(\text{adj}(\mathbf{A})\) is a diagonal matrix, where each diagonal entry is \(|\mathbf{A}|\) and all off-diagonal entries are zero:

\[ \mathbf{A} \cdot \text{adj}(\mathbf{A}) = \begin{pmatrix} |\mathbf{A}| & 0 & \dots & 0 \\ 0 & |\mathbf{A}| & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & |\mathbf{A}| \end{pmatrix} = |\mathbf{A}| \mathbf{I}_n \]

where \(\mathbf{I}_n\) is the identity matrix of order \(n\).

Similarly, we can show that:

\[ \text{adj}(\mathbf{A}) \cdot \mathbf{A} = |\mathbf{A}| \mathbf{I}_n \]

Therefore, we conclude that:

\[ \mathbf{A} \cdot \text{adj}(\mathbf{A}) = \text{adj}(\mathbf{A}) \cdot \mathbf{A} = |\mathbf{A}| \mathbf{I}_n \]

Now, if \(|\mathbf{A}| \neq 0\), we can divide both sides of the equation by \(|\mathbf{A}|\), which gives:

\[ \frac{\mathbf{A} \cdot \text{adj}(\mathbf{A})}{|\mathbf{A}|} = \frac{|\mathbf{A}| \mathbf{I}_n}{|\mathbf{A}|} = \mathbf{I}_n \]

This implies that:

\[ \mathbf{A}^{-1} = \frac{1}{|\mathbf{A}|} \cdot \text{adj}(\mathbf{A}) \]

Thus, the inverse of a matrix \(\mathbf{A}\) exists only when \(|\mathbf{A}| \neq 0\). When the determinant \(|\mathbf{A}|\) is zero, the matrix is singular and does not have an inverse. If \(|\mathbf{A}| \neq 0\), then the inverse is given by:

\[ \mathbf{A}^{-1} = \frac{1}{|\mathbf{A}|} \cdot \text{adj}(\mathbf{A}) \]
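
The formula translates directly into a short computation: form the cofactors, transpose, and divide by the determinant. Below is a small Python sketch (the helper names and the recursive cofactor-expansion determinant are illustrative and only practical for small matrices):

```python
def minor(M, i, j):
    """Submatrix of M with row i and column j removed."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(M) if k != i]

def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det(minor(M, 0, j)) for j in range(len(M)))

def adjugate(M):
    """Transpose of the cofactor matrix of M."""
    n = len(M)
    C = [[(-1) ** (i + j) * det(minor(M, i, j)) for j in range(n)] for i in range(n)]
    return [[C[i][j] for i in range(n)] for j in range(n)]  # transpose of C

def inverse_adjugate(M):
    """A^{-1} = adj(A) / |A|, valid only when |A| != 0."""
    d = det(M)
    if d == 0:
        raise ValueError("matrix is singular: no inverse exists")
    return [[entry / d for entry in row] for row in adjugate(M)]

print(inverse_adjugate([[1, 2], [3, 4]]))  # [[-2.0, 1.0], [1.5, -0.5]]
```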

Properties of Inverse:

Let’s explore the properties of the inverse with respect to other matrix operations:

Property 1: \(\left(\mathbf{A}^{-1}\right)^{-1} = \mathbf{A}\)
The inverse of the inverse of a matrix is the original matrix.

Property 2: \(\left(\mathbf{A}^T\right)^{-1} = \left(\mathbf{A}^{-1}\right)^T\)

Proof: Let \(\mathbf{A}\) be an invertible matrix. We need to show that the inverse of the transpose of \(\mathbf{A}\) is the transpose of the inverse of \(\mathbf{A}\).

We know the following basic properties of matrix transposition:

\[ \left(\mathbf{A} \mathbf{B}\right)^T = \mathbf{B}^T \mathbf{A}^T \]

and the definition of an inverse matrix states:

\[ \mathbf{A} \cdot \mathbf{A}^{-1} = \mathbf{I}_n \quad \text{and} \quad \mathbf{A}^{-1} \cdot \mathbf{A} = \mathbf{I}_n \]

Taking the transpose of both sides of the first equation:

\[ \left(\mathbf{A} \cdot \mathbf{A}^{-1}\right)^T = \mathbf{I}_n^T \]

which simplifies to:

\[ \left(\mathbf{A}^{-1}\right)^T \cdot \mathbf{A}^T = \mathbf{I}_n \]

This shows that \(\left(\mathbf{A}^{-1}\right)^T\) is the inverse of \(\mathbf{A}^T\), so:

\[ \left(\mathbf{A}^T\right)^{-1} = \left(\mathbf{A}^{-1}\right)^T \]

\(\blacksquare\)

Property 3: \(|\mathbf{A}^{-1}| = \frac{1}{|\mathbf{A}|} = |\mathbf{A}|^{-1}\)

Proof: The determinant of the inverse of a matrix \(\mathbf{A}\) is the reciprocal of the determinant of \(\mathbf{A}\).

We know that for an invertible matrix \(\mathbf{A}\), the product of the matrix and its inverse gives the identity matrix:

\[ \mathbf{A} \cdot \mathbf{A}^{-1} = \mathbf{I}_n \]

Taking the determinant of both sides:

\[ |\mathbf{A} \cdot \mathbf{A}^{-1}| = |\mathbf{I}_n| \]

Using the property of determinants that \(|\mathbf{A} \cdot \mathbf{B}| = |\mathbf{A}| \cdot |\mathbf{B}|\), we have:

\[ |\mathbf{A}| \cdot |\mathbf{A}^{-1}| = |\mathbf{I}_n| \]

Since \(|\mathbf{I}_n| = 1\), this simplifies to:

\[ |\mathbf{A}| \cdot |\mathbf{A}^{-1}| = 1 \]

Thus:

\[ |\mathbf{A}^{-1}| = \frac{1}{|\mathbf{A}|} = |\mathbf{A}|^{-1} \]

This shows that the determinant of the inverse of a matrix is the reciprocal of the determinant of the matrix. \(\blacksquare\)

Property 4: \(\left(\mathbf{A} \mathbf{B}\right)^{-1} = \mathbf{B}^{-1} \mathbf{A}^{-1}\)

Proof: Let \(\mathbf{A}\) and \(\mathbf{B}\) be two invertible matrices. We need to prove that the inverse of the product \(\mathbf{A} \mathbf{B}\) is \(\mathbf{B}^{-1} \mathbf{A}^{-1}\).

Start by multiplying the product \(\mathbf{A} \mathbf{B}\) by \(\mathbf{B}^{-1} \mathbf{A}^{-1}\):

\[ \left(\mathbf{A} \mathbf{B}\right) \left(\mathbf{B}^{-1} \mathbf{A}^{-1}\right) \]

Using the associative property of matrix multiplication, this becomes:

\[ \mathbf{A} \left( \mathbf{B} \mathbf{B}^{-1} \right) \mathbf{A}^{-1} \]

Since \(\mathbf{B} \mathbf{B}^{-1} = \mathbf{I}_n\), we have:

\[ \mathbf{A} \cdot \mathbf{I}_n \cdot \mathbf{A}^{-1} = \mathbf{A} \cdot \mathbf{A}^{-1} \]

Finally, since \(\mathbf{A} \mathbf{A}^{-1} = \mathbf{I}_n\), we get:

\[ \mathbf{I}_n \]

Thus, \(\mathbf{B}^{-1} \mathbf{A}^{-1}\) is the inverse of \(\mathbf{A} \mathbf{B}\), so:

\[ \left(\mathbf{A} \mathbf{B}\right)^{-1} = \mathbf{B}^{-1} \mathbf{A}^{-1} \]

This proves that the inverse of a product of two matrices is the product of their inverses in reverse order.\(\blacksquare\)
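
The reversal of order is easy to verify numerically. A quick NumPy check (the random seed and the \(3 \times 3\) size are arbitrary; random Gaussian matrices are invertible with probability 1):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

lhs = np.linalg.inv(A @ B)                   # (AB)^{-1}
rhs = np.linalg.inv(B) @ np.linalg.inv(A)    # B^{-1} A^{-1}
print(np.allclose(lhs, rhs))                 # True
```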

Property 5: In general, \(\left(\mathbf{A}_1 \mathbf{A}_2 \dots \mathbf{A}_n\right)^{-1} = \mathbf{A}_n^{-1} \dots \mathbf{A}_2^{-1} \mathbf{A}_1^{-1}\)

Property 6: \(\left(\mathbf{A}^n\right)^{-1} = \left(\mathbf{A}^{-1}\right)^n\)

Proof:

Let \(\mathbf{A}\) be an invertible matrix and \(n\) be a positive integer. We need to prove that the inverse of \(\mathbf{A}^n\) is \(\left(\mathbf{A}^{-1}\right)^n\).

We know that:

\[ \mathbf{A}^n = \underbrace{\mathbf{A} \cdot \mathbf{A} \cdot \dots \cdot \mathbf{A}}_{\text{n times}} \]

Now, take the inverse of both sides:

\[ \left(\mathbf{A}^n\right)^{-1} = \left(\mathbf{A} \cdot \mathbf{A} \cdot \dots \cdot \mathbf{A}\right)^{-1} \]

Using the property \(\left(\mathbf{A} \mathbf{B}\right)^{-1} = \mathbf{B}^{-1} \mathbf{A}^{-1}\), we can apply the inverse to each factor in reverse order:

\[ \left(\mathbf{A}^n\right)^{-1} = \mathbf{A}^{-1} \cdot \mathbf{A}^{-1} \cdot \dots \cdot \mathbf{A}^{-1} \]

which is simply:

\[ \left(\mathbf{A}^n\right)^{-1} = \left(\mathbf{A}^{-1}\right)^n \]

Thus, we have proven that:

\[ \left(\mathbf{A}^n\right)^{-1} = \left(\mathbf{A}^{-1}\right)^n \]

Property 7: \(\left(k\mathbf{A}\right)^{-1} = \frac{1}{k} \mathbf{A}^{-1}\)

Proof: Let \(k\) be a non-zero scalar and \(\mathbf{A}\) an invertible matrix. We need to show that the inverse of \(k\mathbf{A}\) is \(\frac{1}{k} \mathbf{A}^{-1}\).

Consider the matrix \(k\mathbf{A}\), where each element of \(\mathbf{A}\) is multiplied by the scalar \(k\). We want to find a matrix such that:

\[ (k\mathbf{A}) \cdot \left(\frac{1}{k} \mathbf{A}^{-1}\right) = \mathbf{I}_n \]

Since scalar factors can be moved freely through a matrix product, we have:

\[ (k\mathbf{A}) \cdot \left(\frac{1}{k} \mathbf{A}^{-1}\right) = k \cdot \mathbf{A} \cdot \frac{1}{k} \cdot \mathbf{A}^{-1} \]

The scalars \(k\) and \(\frac{1}{k}\) cancel out, leaving:

\[ \mathbf{A} \cdot \mathbf{A}^{-1} = \mathbf{I}_n \]

Thus, the inverse of \(k\mathbf{A}\) is:

\[ \left(k\mathbf{A}\right)^{-1} = \frac{1}{k} \mathbf{A}^{-1} \]

This property shows that multiplying a matrix by a scalar \(k\) affects the inverse by a factor of \(\frac{1}{k}\). \(\blacksquare\)

Property 8: \(\left(\text{adj}(\mathbf{A})\right)^{-1} = \text{adj}(\mathbf{A}^{-1})\)

Proof:

We start with the relation:

\[ \mathbf{A} \cdot \text{adj}(\mathbf{A}) = |\mathbf{A}| \mathbf{I}_n \]

Now, take the inverse of both sides:

\[ \left(\mathbf{A} \cdot \text{adj}(\mathbf{A})\right)^{-1} = \left(|\mathbf{A}| \mathbf{I}_n\right)^{-1} \]

Using the property \(\left(\mathbf{A} \mathbf{B}\right)^{-1} = \mathbf{B}^{-1} \mathbf{A}^{-1}\), this becomes:

\[ \left(\text{adj}(\mathbf{A})\right)^{-1} \cdot \mathbf{A}^{-1} = \frac{1}{|\mathbf{A}|} \mathbf{I}_n \]

Next, multiply both sides from the right by \(\text{adj}(\mathbf{A}^{-1})\):

\[ \left(\text{adj}(\mathbf{A})\right)^{-1} \cdot \mathbf{A}^{-1} \cdot \text{adj}(\mathbf{A}^{-1}) = \frac{1}{|\mathbf{A}|} \cdot \text{adj}(\mathbf{A}^{-1}) \]

We know that \(\mathbf{A}^{-1} \cdot \text{adj}(\mathbf{A}^{-1}) = |\mathbf{A}^{-1}| \mathbf{I}_n = \frac{1}{|\mathbf{A}|} \mathbf{I}_n\). We get:

\[ \left(\text{adj}(\mathbf{A})\right)^{-1} \cdot \frac{1}{|\mathbf{A}|} \mathbf{I}_n = \frac{1}{|\mathbf{A}|} \text{adj}(\mathbf{A}^{-1}) \]

Cancel \(\frac{1}{|\mathbf{A}|}\) from both sides:

\[ \left(\text{adj}(\mathbf{A})\right)^{-1} = \text{adj}(\mathbf{A}^{-1}) \]

Thus, the proof is complete: \(\left(\text{adj}(\mathbf{A})\right)^{-1} = \text{adj}(\mathbf{A}^{-1})\).

Property 9: If \(\lambda_i \neq 0\) for all \(i = 1, 2, \dots, n\), then

\[ \left(\text{diag}(\lambda_1, \lambda_2, \dots, \lambda_n)\right)^{-1} = \text{diag}\left(\frac{1}{\lambda_1}, \frac{1}{\lambda_2}, \dots, \frac{1}{\lambda_n}\right) \]

That is, if

\[ \mathbf{D} = \begin{pmatrix} \lambda_1 & 0 & \dots & 0 \\ 0 & \lambda_2 & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \lambda_n \end{pmatrix} \]

then,

\[ \mathbf{D}^{-1} = \begin{pmatrix} \frac{1}{\lambda_1} & 0 & \dots & 0 \\ 0 & \frac{1}{\lambda_2} & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \frac{1}{\lambda_n} \end{pmatrix} \]
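
A quick NumPy check of this property (the diagonal entries below are arbitrary non-zero values):

```python
import numpy as np

lam = np.array([2.0, -3.0, 0.5])                            # all entries non-zero
D = np.diag(lam)
print(np.allclose(np.linalg.inv(D), np.diag(1.0 / lam)))    # True
```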

Inverse is not distributive over addition

\(\left(\mathbf{A} + \mathbf{B}\right)^{-1} \neq \mathbf{A}^{-1} + \mathbf{B}^{-1}\)

It is important to note that the inverse of the sum of two matrices \(\mathbf{A}\) and \(\mathbf{B}\) is not, in general, the sum of their inverses:

\[ \left(\mathbf{A} + \mathbf{B}\right)^{-1} \neq \mathbf{A}^{-1} + \mathbf{B}^{-1} \]

This is a common misconception and arises because matrix addition and inversion do not distribute in the same way as multiplication.

Why This is Not True:

There is no general formula that expresses \(\left(\mathbf{A} + \mathbf{B}\right)^{-1}\) in terms of \(\mathbf{A}^{-1}\) and \(\mathbf{B}^{-1}\) alone. The sum \(\mathbf{A} + \mathbf{B}\) may not even be invertible, and when it is, its inverse depends on how \(\mathbf{A}\) and \(\mathbf{B}\) interact, not just on their individual inverses.

Example:

Consider:

\[ \mathbf{A} = \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}, \quad \mathbf{B} = \begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix} \]

The inverses of \(\mathbf{A}\) and \(\mathbf{B}\) are:

\[ \mathbf{A}^{-1} = \begin{pmatrix} 1 & 0 \\ 0 & \frac{1}{2} \end{pmatrix}, \quad \mathbf{B}^{-1} = \begin{pmatrix} \frac{1}{2} & 0 \\ 0 & 1 \end{pmatrix} \]

Now, let’s compute \(\mathbf{A} + \mathbf{B}\):

\[ \mathbf{A} + \mathbf{B} = \begin{pmatrix} 3 & 0 \\ 0 & 3 \end{pmatrix} \]

The inverse of \(\mathbf{A} + \mathbf{B}\) is:

\[ \left(\mathbf{A} + \mathbf{B}\right)^{-1} = \begin{pmatrix} \frac{1}{3} & 0 \\ 0 & \frac{1}{3} \end{pmatrix} \]

However, the sum of \(\mathbf{A}^{-1} + \mathbf{B}^{-1}\) is:

\[ \mathbf{A}^{-1} + \mathbf{B}^{-1} = \begin{pmatrix} 1 & 0 \\ 0 & \frac{1}{2} \end{pmatrix} + \begin{pmatrix} \frac{1}{2} & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} \frac{3}{2} & 0 \\ 0 & \frac{3}{2} \end{pmatrix} \]

Clearly, \(\left(\mathbf{A} + \mathbf{B}\right)^{-1} \neq \mathbf{A}^{-1} + \mathbf{B}^{-1}\).

Thus, it is crucial to remember that \(\left(\mathbf{A} + \mathbf{B}\right)^{-1} \neq \mathbf{A}^{-1} + \mathbf{B}^{-1}\).

Properties of Adjoint

Property 1: \(\mathbf{A} \cdot \text{adj}(\mathbf{A}) = \text{adj}(\mathbf{A}) \cdot \mathbf{A} = |\mathbf{A}| \mathbf{I}_n\)

This is true by definition as we saw above.

Property 2: \(|\text{adj}(\mathbf{A})| = |\mathbf{A}|^{n-1}\)

Proof:

We begin with the key property of matrices involving the adjoint:

\[ \mathbf{A} \cdot \text{adj}(\mathbf{A}) = |\mathbf{A}| \mathbf{I}_n \]

Now, take the determinant of both sides:

\[ |\mathbf{A} \cdot \text{adj}(\mathbf{A})| = \left| |\mathbf{A}| \mathbf{I}_n \right| \]

On the right-hand side, using the facts that \(|k\mathbf{A}| = k^n |\mathbf{A}|\) and \(|\mathbf{I}_n| = 1\), we have \(\left| |\mathbf{A}| \mathbf{I}_n \right| = |\mathbf{A}|^n\). On the left-hand side, apply the determinant property for products of matrices, \(|\mathbf{A} \mathbf{B}| = |\mathbf{A}| \cdot |\mathbf{B}|\). The equation therefore becomes:

\[ |\mathbf{A}| \cdot |\text{adj}(\mathbf{A})| = |\mathbf{A}|^n \]

Now, divide both sides by \(|\mathbf{A}|\) (assuming \(|\mathbf{A}| \neq 0\)):

\[ |\text{adj}(\mathbf{A})| = \frac{|\mathbf{A}|^n}{|\mathbf{A}|} = |\mathbf{A}|^{n-1} \]

Special Case (When \(|\mathbf{A}| = 0\)):

If \(|\mathbf{A}| = 0\), then:

\[ \mathbf{A} \cdot \text{adj}(\mathbf{A}) = |\mathbf{A}| \mathbf{I}_n = \mathbf{O} \]

This forces \(\text{adj}(\mathbf{A})\) to be singular: if \(\text{adj}(\mathbf{A})\) were invertible, multiplying the equation on the right by \(\left(\text{adj}(\mathbf{A})\right)^{-1}\) would give \(\mathbf{A} = \mathbf{O}\), and the adjoint of the zero matrix is itself the zero matrix (every cofactor vanishes), contradicting invertibility. Hence:

\[ |\text{adj}(\mathbf{A})| = 0 \]

This result is consistent with the formula \(|\text{adj}(\mathbf{A})| = |\mathbf{A}|^{n-1}\), as \(|\mathbf{A}|^{n-1} = 0^{n-1} = 0\) when \(|\mathbf{A}| = 0\).

Thus, the property holds in all cases:

\[ |\text{adj}(\mathbf{A})| = |\mathbf{A}|^{n-1} \]

\(\blacksquare\)

  • For a matrix of order 2: \(|\text{adj}(\mathbf{A})| = |\mathbf{A}|\)
  • For a matrix of order 3: \(|\text{adj}(\mathbf{A})| = |\mathbf{A}|^2\)
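
This relationship can also be checked numerically. For a non-singular matrix the inverse formula gives \(\text{adj}(\mathbf{A}) = |\mathbf{A}|\,\mathbf{A}^{-1}\), which the sketch below uses to form the adjugate (the seed and the order \(n = 4\) are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))
det_A = np.linalg.det(A)

adj_A = det_A * np.linalg.inv(A)      # adj(A) = |A| * A^{-1} for non-singular A
print(np.isclose(np.linalg.det(adj_A), det_A ** (n - 1)))   # True
```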

Property 3: \(|\text{adj}(\text{adj}(\dots \text{adj}(\mathbf{A}) \dots))|\) (applied \(n\) times)

For any matrix \(\mathbf{A}\) of order \(n\), applying the adjoint operation \(n\) times results in the following relationship for the determinant:

\[ |\text{adj}(\text{adj}(\dots \text{adj}(\mathbf{A}) \dots))| = |\mathbf{A}|^{(n-1)^n} \]

Explanation:

  1. First Adjoint: We know from the basic property of the adjoint matrix that:

    \[ |\text{adj}(\mathbf{A})| = |\mathbf{A}|^{n-1} \]
  2. Second Adjoint (Adjoint of the Adjoint): Applying the adjoint operation again to \(\text{adj}(\mathbf{A})\), we use the same property:

    \[ |\text{adj}(\text{adj}(\mathbf{A}))| = |\text{adj}(\mathbf{A})|^{n-1} = (|\mathbf{A}|^{n-1})^{n-1} = |\mathbf{A}|^{(n-1)^2} \]
  3. Third Adjoint: Applying the adjoint operation a third time, we get:

    \[ |\text{adj}(\text{adj}(\text{adj}(\mathbf{A})))| = |\text{adj}(\text{adj}(\mathbf{A}))|^{n-1} = |\mathbf{A}|^{(n-1)^3} \]
  4. General Case (Applying the Adjoint \(n\) Times): Continuing this process, after applying the adjoint \(n\) times, we get:

    \[ |\text{adj}(\text{adj}(\dots \text{adj}(\mathbf{A}) \dots))| = |\mathbf{A}|^{(n-1)^n} \]

Property 4: For a non-singular matrix \(\mathbf{A}\) of order \(n\), the adjoint of the adjoint of \(\mathbf{A}\) is given by:

\[ \text{adj}(\text{adj}(\mathbf{A})) = |\mathbf{A}|^{n-2} \mathbf{A} \]

Proof:

We start with the known property of the adjoint:

\[ \mathbf{A} \cdot \text{adj}(\mathbf{A}) = |\mathbf{A}| \mathbf{I}_n \quad \text{(1)} \]

Now, replacing \(\mathbf{A}\) with \(\text{adj}(\mathbf{A})\) in equation (1), we get:

\[ \text{adj}(\mathbf{A}) \cdot \text{adj}(\text{adj}(\mathbf{A})) = |\text{adj}(\mathbf{A})| \mathbf{I}_n \]

From Property 2, we know:

\[ |\text{adj}(\mathbf{A})| = |\mathbf{A}|^{n-1} \]

Thus, the equation becomes:

\[ \text{adj}(\mathbf{A}) \cdot \text{adj}(\text{adj}(\mathbf{A})) = |\mathbf{A}|^{n-1} \mathbf{I}_n \quad \text{(2)} \]

Next, pre-multiply both sides of equation (2) by \(\mathbf{A}\):

\[ \mathbf{A} \cdot \text{adj}(\mathbf{A}) \cdot \text{adj}(\text{adj}(\mathbf{A})) = \mathbf{A} \cdot |\mathbf{A}|^{n-1} \mathbf{I}_n \]

Since \(\mathbf{A} \cdot \text{adj}(\mathbf{A}) = |\mathbf{A}| \mathbf{I}_n\), we have:

\[ |\mathbf{A}| \mathbf{I}_n \cdot \text{adj}(\text{adj}(\mathbf{A})) = |\mathbf{A}|^{n-1} \mathbf{A} \]

This simplifies to:

\[ |\mathbf{A}| \cdot \text{adj}(\text{adj}(\mathbf{A})) = |\mathbf{A}|^{n-1} \mathbf{A} \]

Finally, divide both sides by \(|\mathbf{A}|\) (since \(\mathbf{A}\) is non-singular implies \(|\mathbf{A}| \neq 0\)):

\[ \text{adj}(\text{adj}(\mathbf{A})) = |\mathbf{A}|^{n-2} \mathbf{A} \]

Thus, we have proven the property:

\[ \text{adj}(\text{adj}(\mathbf{A})) = |\mathbf{A}|^{n-2} \mathbf{A} \]

\(\blacksquare\)

Property 5: For an \(n \times n\) matrix \(\mathbf{A}\) and a scalar \(k\), the adjoint of the matrix \(k\mathbf{A}\) is given by:

\[ \text{adj}(k\mathbf{A}) = k^{n-1} \text{adj}(\mathbf{A}) \]

Proof:

Let \(\mathbf{A} = [a_{ij}]\), a matrix of order \(n\), and consider the matrix \(k\mathbf{A} = [ka_{ij}]\), where each element of \(\mathbf{A}\) is scaled by the constant \(k\).

We first prove that the cofactor of the \((i,j)\)-th element of \(k\mathbf{A}\) is \(k^{n-1}\) times the cofactor of the \((i,j)\)-th element of \(\mathbf{A}\).

Cofactor of the \((i,j)\)-th element of \(k\mathbf{A}\):

The cofactor of the \((i,j)\)-th element of \(k\mathbf{A}\), denoted as \(C_{ij}(k\mathbf{A})\), is:

\[ C_{ij}(k\mathbf{A}) = (-1)^{i+j} \cdot \text{det}\left(\mathbf{M}_{ij}(k\mathbf{A})\right) \]

where \(\mathbf{M}_{ij}(k\mathbf{A})\) is the \((n-1) \times (n-1)\) submatrix obtained by deleting the \(i\)-th row and \(j\)-th column from \(k\mathbf{A}\).

Since each element of \(k\mathbf{A}\) is \(k\) times the corresponding element of \(\mathbf{A}\), the submatrix \(\mathbf{M}_{ij}(k\mathbf{A})\) is:

\[ \mathbf{M}_{ij}(k\mathbf{A}) = k \cdot \mathbf{M}_{ij}(\mathbf{A}) \]

Now, the determinant of the \((n-1) \times (n-1)\) submatrix \(\mathbf{M}_{ij}(k\mathbf{A})\) is:

\[ \text{det}\left(\mathbf{M}_{ij}(k\mathbf{A})\right) = k^{n-1} \cdot \text{det}\left(\mathbf{M}_{ij}(\mathbf{A})\right) \]

Thus, the cofactor of the \((i,j)\)-th element of \(k\mathbf{A}\) becomes:

\[ C_{ij}(k\mathbf{A}) = (-1)^{i+j} \cdot k^{n-1} \cdot \text{det}\left(\mathbf{M}_{ij}(\mathbf{A})\right) \]

This is equivalent to:

\[ C_{ij}(k\mathbf{A}) = k^{n-1} \cdot C_{ij}(\mathbf{A}) \]

Adjoint of \(k\mathbf{A}\):

Since the adjoint matrix \(\text{adj}(k\mathbf{A})\) is simply the transpose of the cofactor matrix, each element of \(\text{adj}(k\mathbf{A})\) is \(k^{n-1}\) times the corresponding element of \(\text{adj}(\mathbf{A})\). Therefore:

\[ \text{adj}(k\mathbf{A}) = k^{n-1} \cdot \text{adj}(\mathbf{A}) \]

Thus, we have proven that:

\[ \text{adj}(k\mathbf{A}) = k^{n-1} \cdot \text{adj}(\mathbf{A}) \]

Product of two matrices equal to the zero matrix: AB = O

In real numbers, if the product of two numbers \(xy = 0\), then either \(x = 0\) or \(y = 0\). However, when dealing with matrices, the equation \(\mathbf{A}\mathbf{B} = \mathbf{O}\) does not necessarily imply that \(\mathbf{A} = \mathbf{O}\) or \(\mathbf{B} = \mathbf{O}\).

For example, consider the following matrices:

\[ \mathbf{A} = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \quad \text{and} \quad \mathbf{B} = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} \]

Here, \(\mathbf{A} \neq \mathbf{O}\) and \(\mathbf{B} \neq \mathbf{O}\), but their product is:

\[ \mathbf{A} \mathbf{B} = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix} = \mathbf{O} \]

Thus, even though neither \(\mathbf{A}\) nor \(\mathbf{B}\) is the zero matrix, their product is the zero matrix.
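
A short NumPy check of this example, which also shows that both factors are singular, in line with Theorem 1 below:

```python
import numpy as np

A = np.array([[1.0, 0.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [0.0, 1.0]])

print(A @ B)                                  # the 2x2 zero matrix
print(np.linalg.det(A), np.linalg.det(B))     # 0.0 0.0: both are singular
```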

Theorem 1: If \(\mathbf{A}\mathbf{B} = \mathbf{O}\), then at least one of \(\mathbf{A}\) or \(\mathbf{B}\) must be singular.

Proof:

Assume, for the sake of contradiction, that both \(\mathbf{A}\) and \(\mathbf{B}\) are non-singular, meaning that \(|\mathbf{A}| \neq 0\) and \(|\mathbf{B}| \neq 0\). For non-singular matrices, we know:

\[ |\mathbf{A}\mathbf{B}| = |\mathbf{A}| \cdot |\mathbf{B}| \neq 0 \]

However, we are given that \(\mathbf{A}\mathbf{B} = \mathbf{O}\), and therefore:

\[ |\mathbf{A}\mathbf{B}| = |\mathbf{O}| = 0 \]

This creates a contradiction because the determinant of the zero matrix is zero, while the product of the determinants of non-singular matrices is non-zero. Thus, the assumption that both \(\mathbf{A}\) and \(\mathbf{B}\) are non-singular must be false. Hence, at least one of \(\mathbf{A}\) or \(\mathbf{B}\) must be singular, meaning \(|\mathbf{A}| = 0\) or \(|\mathbf{B}| = 0\).\(\blacksquare\)

Theorem 2: Suppose \(\mathbf{A}\mathbf{B} = \mathbf{O}\). If \(\mathbf{A}\) is non-singular (\(|\mathbf{A}| \neq 0\)), then \(\mathbf{B} = \mathbf{O}\). Similarly, if \(\mathbf{B}\) is non-singular (\(|\mathbf{B}| \neq 0\)), then \(\mathbf{A} = \mathbf{O}\).

Proof: When \(\mathbf{A}\) is non-singular:

If \(\mathbf{A}\) is non-singular, it means that its inverse \(\mathbf{A}^{-1}\) exists. Starting with the equation:

\[ \mathbf{A}\mathbf{B} = \mathbf{O} \]

we multiply both sides by \(\mathbf{A}^{-1}\):

\[ \mathbf{A}^{-1} \mathbf{A} \mathbf{B} = \mathbf{A}^{-1} \mathbf{O} \]

This simplifies to:

\[ \mathbf{I}_n \mathbf{B} = \mathbf{O} \]

where \(\mathbf{I}_n\) is the identity matrix. Thus, we have:

\[ \mathbf{B} = \mathbf{O} \]

Therefore, if \(\mathbf{A}\) is non-singular, \(\mathbf{B} = \mathbf{O}\).

When \(\mathbf{B}\) is non-singular:

Similarly, if \(\mathbf{B}\) is non-singular, meaning \(\mathbf{B}^{-1}\) exists, we start with the same equation \(\mathbf{A}\mathbf{B} = \mathbf{O}\), and multiply both sides by \(\mathbf{B}^{-1}\):

\[ \mathbf{A}\mathbf{B}\mathbf{B}^{-1} = \mathbf{O}\mathbf{B}^{-1} \]

This simplifies to:

\[ \mathbf{A} \mathbf{I}_n = \mathbf{O} \]

which implies:

\[ \mathbf{A} = \mathbf{O} \]

Therefore, if \(\mathbf{B}\) is non-singular, \(\mathbf{A} = \mathbf{O}\). \(\blacksquare\)

For any square matrices \(\mathbf{A}\) and \(\mathbf{B}\) of order \(n\), if \(\mathbf{A}\mathbf{B} = \mathbf{O}\) and \(\mathbf{A}\) is singular but not the zero matrix, then \(\mathbf{B}\) must also be singular.

To prove this, assume \(\mathbf{B}\) is non-singular, i.e., \(|\mathbf{B}| \neq 0\). Since \(\mathbf{B}\) is non-singular, we can multiply both sides of \(\mathbf{A}\mathbf{B} = \mathbf{O}\) by \(\mathbf{B}^{-1}\):

\[ \mathbf{A}\mathbf{B}\mathbf{B}^{-1} = \mathbf{O}\mathbf{B}^{-1} \]

This simplifies to:

\[ \mathbf{A}\mathbf{I}_n = \mathbf{O} \quad \Rightarrow \quad \mathbf{A} = \mathbf{O} \]

This contradicts our assumption that \(\mathbf{A} \neq \mathbf{O}\). Therefore, \(\mathbf{B}\) must be singular if \(\mathbf{A}\) is singular but not zero.

Additionally, when \(\mathbf{A}\) is singular, there are infinitely many matrices \(\mathbf{B}\) for which \(\mathbf{A}\mathbf{B} = \mathbf{O}\). To see this, express \(\mathbf{B}\) as \(\mathbf{B} = [\mathbf{b}_1 \dots \mathbf{b}_n]\), where \(\mathbf{b}_i\) represents the \(i\)-th column of \(\mathbf{B}\). The matrix equation \(\mathbf{A}\mathbf{B} = \mathbf{O}\) can then be written as:

\[ \mathbf{A}[\mathbf{b}_1 \ \dots \ \mathbf{b}_n] = [\mathbf{0} \ \dots \ \mathbf{0}] = \mathbf{O} \]

This implies that:

\[ \mathbf{A}\mathbf{b}_1 = \mathbf{0}, \quad \mathbf{A}\mathbf{b}_2 = \mathbf{0}, \quad \dots, \quad \mathbf{A}\mathbf{b}_n = \mathbf{0} \]

Thus, we have \(n\) homogeneous systems of linear equations.

Since \(\mathbf{A}\) is singular, the equation \(\mathbf{A}\mathbf{x} = \mathbf{0}\) has infinitely many solutions. Therefore, each column \(\mathbf{b}_i\) of \(\mathbf{B}\) is a solution to the homogeneous system \(\mathbf{A}\mathbf{b}_i = \mathbf{0}\). This means that there are infinite possible values for \(\mathbf{b}_1, \mathbf{b}_2, \dots, \mathbf{b}_n\), leading to infinitely many possible matrices \(\mathbf{B}\).
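
A small NumPy sketch of this last point: for one particular singular \(\mathbf{A}\), every column built from a null-space vector gives a valid \(\mathbf{B}\), so the choices are unlimited (the matrix and the scalars below are just one illustration):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])            # singular: second row is twice the first

null_vec = np.array([2.0, -1.0])      # satisfies A @ null_vec = 0
for s, t in [(1, 0), (0, 3), (-2, 5)]:
    B = np.column_stack([s * null_vec, t * null_vec])
    print(np.allclose(A @ B, 0))      # True for every choice of (s, t)
```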