
Introduction to Matrices

Definition

A matrix \( \mathbf{A} \) is a structured array of elements, either real or complex numbers, arranged in horizontal rows and vertical columns. Formally, an \( m \times n \) matrix is represented as:

\[ \mathbf{A} = \begin{bmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \dots & a_{mn} \end{bmatrix} \]

where \( a_{ij} \) denotes the element located in the \( i \)-th row and \( j \)-th column. Here, \( m \) is the number of rows (horizontal), and \( n \) is the number of columns (vertical), determining the matrix's dimension as \( m \times n \).

Key Points:

  1. Rows and Columns: The rows of a matrix are the horizontal lines of elements, while the columns are the vertical lines. For instance, in a \( 2 \times 3 \) matrix:

    \[ \mathbf{B} = \begin{bmatrix} b_{11} & b_{12} & b_{13} \\ b_{21} & b_{22} & b_{23} \end{bmatrix} \]

    \( \mathbf{B} \) has 2 rows and 3 columns.

  2. Dimensions: The dimension \( m \times n \) of a matrix specifies its structure. For example:

    • A \( 1 \times 3 \) matrix:

      \[ \mathbf{C} = \begin{bmatrix} c_{11} & c_{12} & c_{13} \end{bmatrix} \]

      is a single row with three elements.

    • A \( 3 \times 1 \) matrix:

      \[ \mathbf{D} = \begin{bmatrix} d_{11} \\ d_{21} \\ d_{31} \end{bmatrix} \]

      is a single column with three elements.

    • A \( 2 \times 2 \) matrix:

      \[ \mathbf{E} = \begin{bmatrix} e_{11} & e_{12} \\ e_{21} & e_{22} \end{bmatrix} \]

      has 2 rows and 2 columns.

  3. No Vacant Slots: Every position within a matrix must be filled with an element \( a_{ij} \); the matrix is not defined if any entry is missing. This requirement distinguishes matrices from sets: a set may be empty, but a matrix cannot be.

  4. Minimum Size: The smallest possible matrix is a \( 1 \times 1 \) matrix, such as:

    \[ \mathbf{F} = \begin{bmatrix} f_{11} \end{bmatrix} \]

    which contains a single element.

In summary, a matrix is a well-defined array with every slot filled by an element, and it must have at least one row and one column. The arrangement of elements into rows and columns gives the matrix its dimension, which is a fundamental characteristic of its structure.

Consider a \( 3 \times 4 \) matrix \( \mathbf{G} \), where the matrix has 3 rows and 4 columns. Each element in the matrix is distinct and denoted by specific numerical values. The matrix is represented as:

\[ \mathbf{G} = \begin{bmatrix} 7 & 13 & 5 & 9 \\ 2 & 8 & 14 & 11 \\ 4 & 1 & 12 & 6 \end{bmatrix} \]

In this matrix:

  • The first row consists of the elements \( 7, 13, 5, \) and \( 9 \).
  • The second row contains the elements \( 2, 8, 14, \) and \( 11 \).
  • The third row includes the elements \( 4, 1, 12, \) and \( 6 \).

Here, the element \( 8 \) is located in the second row and second column, denoted as \( g_{22} = 8 \). Similarly, \( g_{23} = 14 \) refers to the element in the second row and third column.
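The indexing above can be sketched in Python using a nested list of rows (a common convention; note that Python indices start at 0, while the mathematical indices \( i, j \) start at 1):

```python
# Represent G as a list of rows.
G = [
    [7, 13, 5, 9],
    [2, 8, 14, 11],
    [4, 1, 12, 6],
]

# Dimensions: m rows, n columns.
m = len(G)      # number of rows: 3
n = len(G[0])   # number of columns: 4

# The mathematical entry g_{ij} corresponds to G[i - 1][j - 1].
g22 = G[2 - 1][2 - 1]   # 8, second row, second column
g23 = G[2 - 1][3 - 1]   # 14, second row, third column
```

The off-by-one shift between \( g_{ij} \) and `G[i - 1][j - 1]` is a frequent source of bugs when translating matrix formulas into code.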

A matrix \( \mathbf{A} \) of dimension \( m \times n \) can be compactly represented using the notation:

\[ \mathbf{A} = \left[ a_{ij} \right]_{m \times n} \]

Here, \( a_{ij} \) denotes the element in the \( i \)-th row and \( j \)-th column of the matrix, where the indices \( i \) and \( j \) satisfy:

\[ 1 \leq i \leq m \quad \text{and} \quad 1 \leq j \leq n \]

This notation indicates that \( \mathbf{A} \) is an \( m \times n \) matrix, with \( m \) rows and \( n \) columns. Each element \( a_{ij} \) is uniquely identified by the pair of indices \( i \) and \( j \), where \( i \) is the row index and \( j \) is the column index.

Constructing a Matrix Using a Given Rule

Construct the matrix \( \mathbf{C} = \left[ c_{ij} \right]_{4 \times 3} \) using the rule \( c_{ij} = i^2 + j^2 \), where \( i \) is the row index and \( j \) is the column index.

Solution:

The matrix \( \mathbf{C} \) is a \( 4 \times 3 \) matrix, meaning it has 4 rows and 3 columns. To determine each element \( c_{ij} \), we apply the given rule \( c_{ij} = i^2 + j^2 \).

Step 1: Calculate the elements of the first row (\( i = 1 \)):

\[ \begin{aligned} c_{11} &= 1^2 + 1^2 = 1 + 1 = 2, \\ c_{12} &= 1^2 + 2^2 = 1 + 4 = 5, \\ c_{13} &= 1^2 + 3^2 = 1 + 9 = 10. \end{aligned} \]

Step 2: Calculate the elements of the second row (\( i = 2 \)):

\[ \begin{aligned} c_{21} &= 2^2 + 1^2 = 4 + 1 = 5, \\ c_{22} &= 2^2 + 2^2 = 4 + 4 = 8, \\ c_{23} &= 2^2 + 3^2 = 4 + 9 = 13. \end{aligned} \]

Step 3: Calculate the elements of the third row (\( i = 3 \)):

\[ \begin{aligned} c_{31} &= 3^2 + 1^2 = 9 + 1 = 10, \\ c_{32} &= 3^2 + 2^2 = 9 + 4 = 13, \\ c_{33} &= 3^2 + 3^2 = 9 + 9 = 18. \end{aligned} \]

Step 4: Calculate the elements of the fourth row (\( i = 4 \)):

\[ \begin{aligned} c_{41} &= 4^2 + 1^2 = 16 + 1 = 17, \\ c_{42} &= 4^2 + 2^2 = 16 + 4 = 20, \\ c_{43} &= 4^2 + 3^2 = 16 + 9 = 25. \end{aligned} \]

Step 5: Write the matrix \( \mathbf{C} \) with the calculated elements:

\[ \mathbf{C} = \begin{bmatrix} 2 & 5 & 10 \\ 5 & 8 & 13 \\ 10 & 13 & 18 \\ 17 & 20 & 25 \end{bmatrix} \]

Thus, the matrix \( \mathbf{C} \) constructed using the rule \( c_{ij} = i^2 + j^2 \) is:

\[ \mathbf{C} = \left[ c_{ij} \right]_{4 \times 3} = \begin{bmatrix} 2 & 5 & 10 \\ 5 & 8 & 13 \\ 10 & 13 & 18 \\ 17 & 20 & 25 \end{bmatrix} \]
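The rule-based construction above translates directly into a nested list comprehension; `build_matrix` is an illustrative helper name, not a standard function:

```python
def build_matrix(m, n, rule):
    """Build an m x n matrix whose (i, j) entry is rule(i, j), 1-based indices."""
    return [[rule(i, j) for j in range(1, n + 1)] for i in range(1, m + 1)]

# The rule c_ij = i^2 + j^2 for a 4 x 3 matrix.
C = build_matrix(4, 3, lambda i, j: i**2 + j**2)
# C == [[2, 5, 10], [5, 8, 13], [10, 13, 18], [17, 20, 25]]
```

The same helper works for any entry rule, e.g. `lambda i, j: i * j` for a multiplication table.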

Different Types of Matrices

1. Null Matrix

A Null Matrix (or Zero Matrix) is a matrix in which all elements are zero. For any matrix \( \mathbf{O} = [o_{ij}]_{m \times n} \):

\[ o_{ij} = 0 \quad \text{for all} \quad 1 \leq i \leq m, \, 1 \leq j \leq n \]

Example:

\[ \mathbf{O} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}_{2 \times 3} \]

2. Row Matrix

A Row Matrix is a matrix that consists of a single row. For a row matrix \( \mathbf{R} = [r_{1j}]_{1 \times n} \):

\[ \mathbf{R} = \begin{bmatrix} r_{11} & r_{12} & \dots & r_{1n} \end{bmatrix} \]

Example:

\[ \mathbf{R} = \begin{bmatrix} 3 & 5 & 7 \end{bmatrix}_{1 \times 3} \]

3. Column Matrix

A Column Matrix is a matrix that consists of a single column. For a column matrix \( \mathbf{C} = [c_{i1}]_{m \times 1} \):

\[ \mathbf{C} = \begin{bmatrix} c_{11} \\ c_{21} \\ \vdots \\ c_{m1} \end{bmatrix} \]

Example:

\[ \mathbf{C} = \begin{bmatrix} 4 \\ 6 \\ 8 \end{bmatrix}_{3 \times 1} \]

4. Rectangular Matrix

A Rectangular Matrix is a matrix where the number of rows is not equal to the number of columns, i.e., \( m \neq n \). For a matrix \( \mathbf{A} = [a_{ij}]_{m \times n} \) with \( m \neq n \):

\[ \mathbf{A} = \begin{bmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \end{bmatrix} \quad (m = 2, n = 3) \]

Example:

\[ \mathbf{A} = \begin{bmatrix} 2 & 4 & 6 \\ 8 & 10 & 12 \end{bmatrix}_{2 \times 3} \]

5. Square Matrix

A Square Matrix is a matrix in which the number of rows equals the number of columns, i.e., \( m = n \). For a square matrix \( \mathbf{B} = [b_{ij}]_{n \times n} \):

\[ \mathbf{B} = \begin{bmatrix} b_{11} & b_{12} & \dots & b_{1n} \\ b_{21} & b_{22} & \dots & b_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ b_{n1} & b_{n2} & \dots & b_{nn} \end{bmatrix} \]

Main Diagonal: The main diagonal of a square matrix consists of elements \( b_{ij} \) where \( i = j \). In a square matrix:

  • Elements above the main diagonal satisfy \( i < j \).
  • Elements below the main diagonal satisfy \( i > j \).

Example:

\[ \mathbf{B} = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}_{3 \times 3} \]

In this example, the main diagonal consists of elements \( 1, 5, 9 \), elements above the diagonal are \( 2, 3, 6 \), and elements below the diagonal are \( 4, 7, 8 \).

Further Categorization of Square Matrices

1. Identity Matrix

An Identity Matrix is a square matrix in which all the elements of the main diagonal are 1, and all other elements are 0. For an identity matrix \( \mathbf{I} = [i_{ij}]_{n \times n} \):

\[ i_{ij} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \neq j \end{cases} \]

Example:

\[ \mathbf{I} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}_{3 \times 3} \]

2. Scalar Matrix

A Scalar Matrix is a square matrix in which all the elements of the main diagonal are equal to some constant \( k \), and all other elements are 0. For a scalar matrix \( \mathbf{S} = [s_{ij}]_{n \times n} \):

\[ s_{ij} = \begin{cases} k & \text{if } i = j \\ 0 & \text{if } i \neq j \end{cases} \]

Example:

\[ \mathbf{S} = \begin{bmatrix} 5 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 5 \end{bmatrix}_{3 \times 3} \]

3. Diagonal Matrix

A Diagonal Matrix is a square matrix in which all the elements outside the main diagonal are 0. For a diagonal matrix \( \mathbf{D} = [d_{ij}]_{n \times n} \):

\[ d_{ij} = \begin{cases} d_{ii} & \text{if } i = j \\ 0 & \text{if } i \neq j \end{cases} \]

In diag notation, a diagonal matrix can be represented as:

\[ \mathbf{D} = \text{diag}(d_{11}, d_{22}, \dots, d_{nn}) \]

Example:

\[ \mathbf{D} = \begin{bmatrix} 7 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 4 \end{bmatrix}_{3 \times 3} \]

4. Lower Triangular Matrix

A Lower Triangular Matrix is a square matrix in which all the elements above the main diagonal are 0. For a lower triangular matrix \( \mathbf{L} = [l_{ij}]_{n \times n} \):

\[ l_{ij} = 0 \quad \text{for all } i < j \]

Example:

\[ \mathbf{L} = \begin{bmatrix} 4 & 0 & 0 \\ 2 & 5 & 0 \\ 1 & 3 & 6 \end{bmatrix}_{3 \times 3} \]

5. Upper Triangular Matrix

An Upper Triangular Matrix is a square matrix in which all the elements below the main diagonal are 0. For an upper triangular matrix \( \mathbf{U} = [u_{ij}]_{n \times n} \):

\[ u_{ij} = 0 \quad \text{for all } i > j \]

Example:

\[ \mathbf{U} = \begin{bmatrix} 1 & 2 & 3 \\ 0 & 5 & 6 \\ 0 & 0 & 9 \end{bmatrix}_{3 \times 3} \]
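The defining conditions for these square-matrix types can be written as small predicates, a sketch that mirrors the index conditions \( i = j \), \( i < j \), and \( i > j \) directly (the function names are illustrative):

```python
def is_square(A):
    return all(len(row) == len(A) for row in A)

def is_diagonal(A):
    # All off-diagonal entries (i != j) are zero.
    n = len(A)
    return is_square(A) and all(
        A[i][j] == 0 for i in range(n) for j in range(n) if i != j
    )

def is_scalar_matrix(A):
    # Diagonal, and every diagonal entry equals the same constant k.
    return is_diagonal(A) and all(A[i][i] == A[0][0] for i in range(len(A)))

def is_identity(A):
    # Scalar matrix with k = 1.
    return is_scalar_matrix(A) and A[0][0] == 1

def is_lower_triangular(A):
    # All entries above the main diagonal (i < j) are zero.
    n = len(A)
    return is_square(A) and all(
        A[i][j] == 0 for i in range(n) for j in range(n) if i < j
    )

def is_upper_triangular(A):
    # All entries below the main diagonal (i > j) are zero.
    n = len(A)
    return is_square(A) and all(
        A[i][j] == 0 for i in range(n) for j in range(n) if i > j
    )
```

Note the containment these predicates encode: every identity matrix is a scalar matrix, every scalar matrix is diagonal, and every diagonal matrix is both lower and upper triangular.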

When comparing matrices, the concept of "greater than" or "less than" does not apply as it does with individual numbers. Instead, matrices are primarily compared based on equality.

Operations on Matrices

Matrices are fundamental mathematical objects, much like real numbers. When we define any mathematical object, we also define the operations that can be performed on them. For real numbers, we have operations like addition, subtraction, multiplication, and division. Similarly, matrices have their own set of operations, such as addition, multiplication, and scalar multiplication.

Matrix Equality

Two matrices \( \mathbf{A} = [a_{ij}]_{m \times n} \) and \( \mathbf{B} = [b_{ij}]_{m \times n} \) are said to be equal if and only if:

  1. Same Dimensions: The matrices must have the same dimensions, i.e., they must have the same number of rows and the same number of columns. Formally, if \( \mathbf{A} \) is \( m \times n \), then \( \mathbf{B} \) must also be \( m \times n \).

  2. Element-wise Equality: Each corresponding element of the two matrices must be equal. That is:

\[ a_{ij} = b_{ij} \quad \text{for all} \quad 1 \leq i \leq m \quad \text{and} \quad 1 \leq j \leq n \]

Example of Matrix Equality

Consider two matrices \( \mathbf{A} \) and \( \mathbf{B} \):

\[ \mathbf{A} = \begin{bmatrix} 2 & 3 & 5 \\ 4 & 6 & 8 \end{bmatrix}, \quad \mathbf{B} = \begin{bmatrix} 2 & 3 & 5 \\ 4 & 6 & 8 \end{bmatrix} \]

Here, both matrices \( \mathbf{A} \) and \( \mathbf{B} \) are \( 2 \times 3 \) matrices, and corresponding elements are equal:

\[ a_{11} = b_{11} = 2, \quad a_{12} = b_{12} = 3, \quad a_{13} = b_{13} = 5 \]
\[ a_{21} = b_{21} = 4, \quad a_{22} = b_{22} = 6, \quad a_{23} = b_{23} = 8 \]

Thus, \( \mathbf{A} = \mathbf{B} \).

Non-Equality

If either the dimensions differ or any corresponding elements do not match, the matrices are not equal. For example, if:

\[ \mathbf{C} = \begin{bmatrix} 2 & 3 & 5 \\ 4 & 7 & 8 \end{bmatrix} \]

then \( \mathbf{A} \) and \( \mathbf{C} \) are not equal because \( a_{22} = 6 \) and \( c_{22} = 7 \), so \( a_{22} \neq c_{22} \).
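The two-part definition of equality (same dimensions, then element-wise comparison) can be sketched as a short Python function (`matrices_equal` is an illustrative name):

```python
def matrices_equal(A, B):
    """Equal iff same dimensions and every corresponding entry matches."""
    if len(A) != len(B):
        return False  # different number of rows
    if any(len(ra) != len(rb) for ra, rb in zip(A, B)):
        return False  # different number of columns
    return all(a == b for ra, rb in zip(A, B) for a, b in zip(ra, rb))

A = [[2, 3, 5], [4, 6, 8]]
B = [[2, 3, 5], [4, 6, 8]]
C = [[2, 3, 5], [4, 7, 8]]
# matrices_equal(A, B) -> True
# matrices_equal(A, C) -> False (entries at row 2, column 2 differ)
```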

Matrix Addition

Matrix addition is one of the fundamental operations defined for matrices, similar to addition for real numbers. When adding matrices, we combine them element-wise, provided they have the same dimensions.

Definition of Matrix Addition

Given two matrices \( \mathbf{A} = [a_{ij}]_{m \times n} \) and \( \mathbf{B} = [b_{ij}]_{m \times n} \), the sum \( \mathbf{C} = \mathbf{A} + \mathbf{B} \) is a matrix \( \mathbf{C} = [c_{ij}]_{m \times n} \) where each element \( c_{ij} \) is given by:

\[ c_{ij} = a_{ij} + b_{ij} \quad \text{for all} \quad 1 \leq i \leq m, \, 1 \leq j \leq n \]

This means that the corresponding elements of \( \mathbf{A} \) and \( \mathbf{B} \) are added together to produce the elements of \( \mathbf{C} \).

Conditions for Matrix Addition

  • Same Dimensions: The matrices \( \mathbf{A} \) and \( \mathbf{B} \) must have the same number of rows and columns. Matrix addition is not defined for matrices of different dimensions.

Example of Matrix Addition

Consider the following matrices \( \mathbf{A} \) and \( \mathbf{B} \):

\[ \mathbf{A} = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}, \quad \mathbf{B} = \begin{bmatrix} 7 & 8 & 9 \\ 10 & 11 & 12 \end{bmatrix} \]

To find the sum \( \mathbf{C} = \mathbf{A} + \mathbf{B} \), we add the corresponding elements:

\[ \mathbf{C} = \begin{bmatrix} 1+7 & 2+8 & 3+9 \\ 4+10 & 5+11 & 6+12 \end{bmatrix} = \begin{bmatrix} 8 & 10 & 12 \\ 14 & 16 & 18 \end{bmatrix} \]
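Element-wise addition with a dimension check can be sketched as follows (the helper name `mat_add` is illustrative):

```python
def mat_add(A, B):
    """Element-wise sum; A and B must share the same dimensions."""
    if len(A) != len(B) or any(len(ra) != len(rb) for ra, rb in zip(A, B)):
        raise ValueError("matrix addition requires identical dimensions")
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[1, 2, 3], [4, 5, 6]]
B = [[7, 8, 9], [10, 11, 12]]
# mat_add(A, B) -> [[8, 10, 12], [14, 16, 18]]
```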

Properties of Matrix Addition

Matrix addition possesses several important properties that make it a well-defined operation in linear algebra. These properties are analogous to those of addition for real numbers and are essential for understanding how matrices behave under addition.

  1. Commutativity

    Matrix addition is commutative, which means that the order in which matrices are added does not affect the result. Specifically, for any two matrices \( \mathbf{A} \) and \( \mathbf{B} \) of the same dimensions:

    \[ \mathbf{A} + \mathbf{B} = \mathbf{B} + \mathbf{A} \]

    Example:

    Let \( \mathbf{A} = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \) and \( \mathbf{B} = \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} \). Then:

    \[ \mathbf{A} + \mathbf{B} = \begin{bmatrix} 1+5 & 2+6 \\ 3+7 & 4+8 \end{bmatrix} = \begin{bmatrix} 6 & 8 \\ 10 & 12 \end{bmatrix} \]
    \[ \mathbf{B} + \mathbf{A} = \begin{bmatrix} 5+1 & 6+2 \\ 7+3 & 8+4 \end{bmatrix} = \begin{bmatrix} 6 & 8 \\ 10 & 12 \end{bmatrix} \]

    Since \( \mathbf{A} + \mathbf{B} = \mathbf{B} + \mathbf{A} \), matrix addition is commutative.

  2. Associativity

    Matrix addition is associative, which means that when adding three matrices, the grouping of the matrices does not affect the result. Specifically, for any three matrices \( \mathbf{A} \), \( \mathbf{B} \), and \( \mathbf{C} \) of the same dimensions:

    \[ (\mathbf{A} + \mathbf{B}) + \mathbf{C} = \mathbf{A} + (\mathbf{B} + \mathbf{C}) \]

    Example:

    Let \( \mathbf{A} = \begin{bmatrix} 1 & 0 \\ 2 & 3 \end{bmatrix} \), \( \mathbf{B} = \begin{bmatrix} 4 & 5 \\ 6 & 7 \end{bmatrix} \), and \( \mathbf{C} = \begin{bmatrix} 8 & 9 \\ 10 & 11 \end{bmatrix} \). Then:

    \[ (\mathbf{A} + \mathbf{B}) + \mathbf{C} = \begin{bmatrix} 1+4 & 0+5 \\ 2+6 & 3+7 \end{bmatrix} + \mathbf{C} = \begin{bmatrix} 5 & 5 \\ 8 & 10 \end{bmatrix} + \begin{bmatrix} 8 & 9 \\ 10 & 11 \end{bmatrix} = \begin{bmatrix} 13 & 14 \\ 18 & 21 \end{bmatrix} \]
    \[ \mathbf{A} + (\mathbf{B} + \mathbf{C}) = \mathbf{A} + \begin{bmatrix} 4+8 & 5+9 \\ 6+10 & 7+11 \end{bmatrix} = \mathbf{A} + \begin{bmatrix} 12 & 14 \\ 16 & 18 \end{bmatrix} = \begin{bmatrix} 1+12 & 0+14 \\ 2+16 & 3+18 \end{bmatrix} = \begin{bmatrix} 13 & 14 \\ 18 & 21 \end{bmatrix} \]

    Since \( (\mathbf{A} + \mathbf{B}) + \mathbf{C} = \mathbf{A} + (\mathbf{B} + \mathbf{C}) \), matrix addition is associative.

  3. Additive Identity

    The additive identity in matrix addition is the zero matrix. For any matrix \( \mathbf{A} \) of dimension \( m \times n \), there exists a zero matrix \( \mathbf{O} = [0]_{m \times n} \) such that:

    \[ \mathbf{A} + \mathbf{O} = \mathbf{O} + \mathbf{A} = \mathbf{A} \]

    The zero matrix is a matrix where all elements are zero.

    Example:

    Let \( \mathbf{A} = \begin{bmatrix} 2 & 3 \\ 4 & 5 \end{bmatrix} \) and \( \mathbf{O} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} \). Then:

    \[ \mathbf{A} + \mathbf{O} = \begin{bmatrix} 2+0 & 3+0 \\ 4+0 & 5+0 \end{bmatrix} = \begin{bmatrix} 2 & 3 \\ 4 & 5 \end{bmatrix} \]

    Since \( \mathbf{A} + \mathbf{O} = \mathbf{A} \), the zero matrix acts as the additive identity.

  4. Additive Inverse

    For any matrix \( \mathbf{A} \), there exists an additive inverse matrix \( -\mathbf{A} \) such that:

    \[ \mathbf{A} + (-\mathbf{A}) = (-\mathbf{A}) + \mathbf{A} = \mathbf{O} \]

    The additive inverse matrix \( -\mathbf{A} \) is obtained by negating each element of \( \mathbf{A} \).

    Example:

    Let \( \mathbf{A} = \begin{bmatrix} 3 & -2 \\ 4 & 6 \end{bmatrix} \). The additive inverse \( -\mathbf{A} \) is:

    \[ -\mathbf{A} = \begin{bmatrix} -3 & 2 \\ -4 & -6 \end{bmatrix} \]

    Now, adding \( \mathbf{A} \) and \( -\mathbf{A} \):

    \[ \mathbf{A} + (-\mathbf{A}) = \begin{bmatrix} 3+(-3) & -2+2 \\ 4+(-4) & 6+(-6) \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} = \mathbf{O} \]

    Since \( \mathbf{A} + (-\mathbf{A}) = \mathbf{O} \), \( -\mathbf{A} \) is indeed the additive inverse of \( \mathbf{A} \).

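All four properties can be checked numerically on small examples; this sketch uses plain nested lists and illustrative helper names:

```python
def mat_add(A, B):
    """Element-wise sum of two same-size matrices."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_neg(A):
    """Additive inverse: negate every entry."""
    return [[-a for a in row] for row in A]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[8, 9], [10, 11]]
O = [[0, 0], [0, 0]]

assert mat_add(A, B) == mat_add(B, A)                          # commutativity
assert mat_add(mat_add(A, B), C) == mat_add(A, mat_add(B, C))  # associativity
assert mat_add(A, O) == A                                      # additive identity
assert mat_add(A, mat_neg(A)) == O                             # additive inverse
```

These checks pass for any matrices of matching dimensions because each property reduces, entry by entry, to the corresponding property of real-number addition.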

Scalar Multiplication of Matrices

Scalar multiplication is an operation where a matrix is multiplied by a scalar (a single number). This operation is analogous to multiplying each element of a vector or a list by a number, but it is applied to all elements of a matrix.

Definition of Scalar Multiplication

Given a matrix \( \mathbf{A} = [a_{ij}]_{m \times n} \) and a scalar \( k \), the scalar multiplication of \( \mathbf{A} \) by \( k \) results in a new matrix \( \mathbf{B} = k\mathbf{A} = [b_{ij}]_{m \times n} \) where each element \( b_{ij} \) is given by:

\[ b_{ij} = k \cdot a_{ij} \quad \text{for all} \quad 1 \leq i \leq m, \, 1 \leq j \leq n \]

This means that the scalar \( k \) multiplies every element of the matrix \( \mathbf{A} \).

Example of Scalar Multiplication

Consider a matrix \( \mathbf{A} \) and a scalar \( k = 3 \):

\[ \mathbf{A} = \begin{bmatrix} 2 & -1 & 4 \\ 0 & 3 & -5 \end{bmatrix} \]

To find the matrix \( \mathbf{B} = 3\mathbf{A} \), multiply each element of \( \mathbf{A} \) by 3:

\[ \mathbf{B} = 3 \times \begin{bmatrix} 2 & -1 & 4 \\ 0 & 3 & -5 \end{bmatrix} = \begin{bmatrix} 3 \times 2 & 3 \times (-1) & 3 \times 4 \\ 3 \times 0 & 3 \times 3 & 3 \times (-5) \end{bmatrix} = \begin{bmatrix} 6 & -3 & 12 \\ 0 & 9 & -15 \end{bmatrix} \]

Thus, the resulting matrix \( \mathbf{B} \) is:

\[ \mathbf{B} = \begin{bmatrix} 6 & -3 & 12 \\ 0 & 9 & -15 \end{bmatrix} \]
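Scalar multiplication is a one-line comprehension in Python (`scalar_mul` is an illustrative name):

```python
def scalar_mul(k, A):
    """Multiply every entry of A by the scalar k."""
    return [[k * a for a in row] for row in A]

A = [[2, -1, 4], [0, 3, -5]]
# scalar_mul(3, A) -> [[6, -3, 12], [0, 9, -15]]
```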

Properties of Scalar Multiplication

Scalar multiplication of matrices has several important properties:

  1. Distributivity Over Matrix Addition: \( k(\mathbf{A} + \mathbf{B}) = k\mathbf{A} + k\mathbf{B} \), where \( \mathbf{A} \) and \( \mathbf{B} \) are matrices of the same dimensions and \( k \) is a scalar.

  2. Distributivity Over Scalar Addition: \( (k + l)\mathbf{A} = k\mathbf{A} + l\mathbf{A} \), where \( k \) and \( l \) are scalars and \( \mathbf{A} \) is a matrix.

  3. Associativity of Scalar Multiplication: \( k(l\mathbf{A}) = (kl)\mathbf{A} \), where \( k \) and \( l \) are scalars and \( \mathbf{A} \) is a matrix.

  4. Multiplication by 1: \( 1 \cdot \mathbf{A} = \mathbf{A} \), where \( 1 \) is the scalar one and \( \mathbf{A} \) is any matrix. Multiplying any matrix by 1 leaves it unchanged.

  5. Multiplication by 0: \( 0 \cdot \mathbf{A} = \mathbf{O} \), where \( 0 \) is the scalar zero and \( \mathbf{O} \) is the zero matrix of the same dimensions as \( \mathbf{A} \). Multiplying any matrix by 0 results in a zero matrix.

Matrix Subtraction

Matrix subtraction is defined as the addition of one matrix to the additive inverse (or negative) of another matrix. If you have two matrices \( \mathbf{A} \) and \( \mathbf{B} \), the subtraction \( \mathbf{A} - \mathbf{B} \) is performed by adding \( \mathbf{A} \) to the negative of \( \mathbf{B} \).

Definition of Matrix Subtraction

Given two matrices \( \mathbf{A} = [a_{ij}]_{m \times n} \) and \( \mathbf{B} = [b_{ij}]_{m \times n} \), the subtraction \( \mathbf{C} = \mathbf{A} - \mathbf{B} \) is defined as:

\[ \mathbf{C} = \mathbf{A} + (-\mathbf{B}) \]

Where \( -\mathbf{B} = [-b_{ij}]_{m \times n} \) is the matrix obtained by negating each element of \( \mathbf{B} \). The resulting matrix \( \mathbf{C} = [c_{ij}]_{m \times n} \) has elements:

\[ c_{ij} = a_{ij} - b_{ij} \quad \text{for all} \quad 1 \leq i \leq m, \, 1 \leq j \leq n \]

Example of Matrix Subtraction

Consider the matrices \( \mathbf{A} \) and \( \mathbf{B} \):

\[ \mathbf{A} = \begin{bmatrix} 5 & 8 & -3 \\ 7 & 2 & 4 \end{bmatrix}, \quad \mathbf{B} = \begin{bmatrix} 3 & 6 & -1 \\ 2 & 5 & -4 \end{bmatrix} \]

To find the matrix \( \mathbf{C} = \mathbf{A} - \mathbf{B} \), first determine the negative of \( \mathbf{B} \):

\[ -\mathbf{B} = \begin{bmatrix} -3 & -6 & 1 \\ -2 & -5 & 4 \end{bmatrix} \]

Now, add \( \mathbf{A} \) and \( -\mathbf{B} \):

\[ \mathbf{C} = \mathbf{A} + (-\mathbf{B}) = \begin{bmatrix} 5 & 8 & -3 \\ 7 & 2 & 4 \end{bmatrix} + \begin{bmatrix} -3 & -6 & 1 \\ -2 & -5 & 4 \end{bmatrix} \]

Perform the element-wise addition:

\[ \mathbf{C} = \begin{bmatrix} 5 + (-3) & 8 + (-6) & -3 + 1 \\ 7 + (-2) & 2 + (-5) & 4 + 4 \end{bmatrix} = \begin{bmatrix} 2 & 2 & -2 \\ 5 & -3 & 8 \end{bmatrix} \]

Thus, the result of the matrix subtraction \( \mathbf{A} - \mathbf{B} \) is:

\[ \mathbf{C} = \begin{bmatrix} 2 & 2 & -2 \\ 5 & -3 & 8 \end{bmatrix} \]
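The definition \( \mathbf{A} - \mathbf{B} = \mathbf{A} + (-\mathbf{B}) \) can be coded exactly as stated, by negating \( \mathbf{B} \) and then adding (helper names are illustrative):

```python
def mat_neg(B):
    """Negate every entry of B."""
    return [[-b for b in row] for row in B]

def mat_sub(A, B):
    """A - B, defined as A + (-B), element-wise."""
    return [[a + nb for a, nb in zip(ra, rb)] for ra, rb in zip(A, mat_neg(B))]

A = [[5, 8, -3], [7, 2, 4]]
B = [[3, 6, -1], [2, 5, -4]]
# mat_sub(A, B) -> [[2, 2, -2], [5, -3, 8]]
```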

Matrix Multiplication

Matrix multiplication can initially seem complex, but it becomes clearer when we break it down with a concrete example. Let's consider two matrices \( \mathbf{A} \) and \( \mathbf{B} \) with the following dimensions and elements:

\[ \mathbf{A} = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \\ 10 & 11 & 12 \end{bmatrix}_{4 \times 3}, \quad \mathbf{B} = \begin{bmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{bmatrix}_{3 \times 2} \]

To find the product \( \mathbf{AB} \), we need to perform the following steps:

Step-by-Step Calculation:

  1. First row of \( \mathbf{A} \) with all columns of \( \mathbf{B} \):

    • First row of \( \mathbf{A} \): \( [1, 2, 3] \)
    • First column of \( \mathbf{B} \): \( [1, 2, 3] \)

      \[ \text{Dot product} = 1 \cdot 1 + 2 \cdot 2 + 3 \cdot 3 = 1 + 4 + 9 = 14 \]
    • Second column of \( \mathbf{B} \): \( [4, 5, 6] \)

      \[ \text{Dot product} = 1 \cdot 4 + 2 \cdot 5 + 3 \cdot 6 = 4 + 10 + 18 = 32 \]
    • Resulting first row of \( \mathbf{AB} \): \( [14, 32] \)

  2. Second row of \( \mathbf{A} \) with all columns of \( \mathbf{B} \):

    • Second row of \( \mathbf{A} \): \( [4, 5, 6] \)
    • First column of \( \mathbf{B} \): \( [1, 2, 3] \)

      \[ \text{Dot product} = 4 \cdot 1 + 5 \cdot 2 + 6 \cdot 3 = 4 + 10 + 18 = 32 \]
    • Second column of \( \mathbf{B} \): \( [4, 5, 6] \)

      \[ \text{Dot product} = 4 \cdot 4 + 5 \cdot 5 + 6 \cdot 6 = 16 + 25 + 36 = 77 \]
    • Resulting second row of \( \mathbf{AB} \): \( [32, 77] \)

  3. Third row of \( \mathbf{A} \) with all columns of \( \mathbf{B} \):

    • Third row of \( \mathbf{A} \): \( [7, 8, 9] \)
    • First column of \( \mathbf{B} \): \( [1, 2, 3] \)

      \[ \text{Dot product} = 7 \cdot 1 + 8 \cdot 2 + 9 \cdot 3 = 7 + 16 + 27 = 50 \]
    • Second column of \( \mathbf{B} \): \( [4, 5, 6] \)

      \[ \text{Dot product} = 7 \cdot 4 + 8 \cdot 5 + 9 \cdot 6 = 28 + 40 + 54 = 122 \]
    • Resulting third row of \( \mathbf{AB} \): \( [50, 122] \)

  4. Fourth row of \( \mathbf{A} \) with all columns of \( \mathbf{B} \):

    • Fourth row of \( \mathbf{A} \): \( [10, 11, 12] \)
    • First column of \( \mathbf{B} \): \( [1, 2, 3] \)

      \[ \text{Dot product} = 10 \cdot 1 + 11 \cdot 2 + 12 \cdot 3 = 10 + 22 + 36 = 68 \]
    • Second column of \( \mathbf{B} \): \( [4, 5, 6] \)

      \[ \text{Dot product} = 10 \cdot 4 + 11 \cdot 5 + 12 \cdot 6 = 40 + 55 + 72 = 167 \]
    • Resulting fourth row of \( \mathbf{AB} \): \( [68, 167] \)

The resulting matrix \( \mathbf{AB} \) is:

\[ \mathbf{AB} = \begin{bmatrix} 14 & 32 \\ 32 & 77 \\ 50 & 122 \\ 68 & 167 \end{bmatrix}_{4 \times 2} \]

Understanding the Process

In matrix multiplication, each element of the product matrix \( \mathbf{AB} \) is obtained by taking the dot product of a row vector from \( \mathbf{A} \) with a column vector from \( \mathbf{B} \). This operation is called the inner product.

For matrix multiplication to be valid, the number of columns in \( \mathbf{A} \) must equal the number of rows in \( \mathbf{B} \). If this condition is not met, the matrices are not conformable, meaning they cannot be multiplied.

Defining Conformable Matrices

Matrices \( \mathbf{A} \) and \( \mathbf{B} \) are said to be conformable for multiplication if the number of columns in \( \mathbf{A} \) equals the number of rows in \( \mathbf{B} \). Specifically:

  • If \( \mathbf{A} \) has dimensions \( m \times p \) and \( \mathbf{B} \) has dimensions \( p \times n \), then the matrices are conformable.
  • The resulting product \( \mathbf{AB} \) will have dimensions \( m \times n \).

Definition of Matrix Multiplication

Let \( \mathbf{A} = [a_{ij}]_{m \times p} \) and \( \mathbf{B} = [b_{jk}]_{p \times n} \) be two conformable matrices. The product \( \mathbf{AB} = [c_{ik}]_{m \times n} \) is defined as:

\[ c_{ik} = \sum_{j=1}^{p} a_{ij} b_{jk} \]

This means that each element \( c_{ik} \) in the resulting matrix \( \mathbf{AB} \) is the sum of the products of corresponding elements from the \( i \)-th row of \( \mathbf{A} \) and the \( k \)-th column of \( \mathbf{B} \). In other words:

\[ c_{ik} = a_{i1}b_{1k} + a_{i2}b_{2k} + \dots + a_{ip}b_{pk} = \sum_{j=1}^{p} a_{ij} b_{jk} \]
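The summation formula for \( c_{ik} \) maps directly onto a triple loop, with a conformability check up front; `mat_mul` is an illustrative name for this sketch:

```python
def mat_mul(A, B):
    """Product of an m x p matrix A with a p x n matrix B."""
    m, p = len(A), len(A[0])
    if len(B) != p:
        raise ValueError("not conformable: columns of A must equal rows of B")
    n = len(B[0])
    # c_ik = sum over j of a_ij * b_jk
    return [
        [sum(A[i][j] * B[j][k] for j in range(p)) for k in range(n)]
        for i in range(m)
    ]

A = [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]   # 4 x 3
B = [[1, 4], [2, 5], [3, 6]]                          # 3 x 2
# mat_mul(A, B) -> [[14, 32], [32, 77], [50, 122], [68, 167]]  (4 x 2)
```

This reproduces the step-by-step calculation above: each output row is the set of dot products of one row of \( \mathbf{A} \) with every column of \( \mathbf{B} \).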

Non-Commutativity of Matrix Multiplication

Matrix multiplication is generally not commutative, meaning that the order in which matrices are multiplied affects the result. There are several reasons why this is the case:

  1. Non-Conformable Matrices:

One key reason for non-commutativity is that the product \( \mathbf{AB} \) might be defined, but the reverse product \( \mathbf{BA} \) may not be, due to the matrices being non-conformable. For example, if \( \mathbf{A} \) is a \( 3 \times 4 \) matrix and \( \mathbf{B} \) is a \( 4 \times 5 \) matrix, then \( \mathbf{AB} \) is defined and will be a \( 3 \times 5 \) matrix. However, \( \mathbf{BA} \) is not defined because \( \mathbf{B} \) has 5 columns, and \( \mathbf{A} \) has 3 rows, making them non-conformable for multiplication.

  2. Different Dimensions:

Even when both \( \mathbf{AB} \) and \( \mathbf{BA} \) are defined, their dimensions may differ, which is another reason they may not be equal. Suppose \( \mathbf{A} \) is an \( m \times n \) matrix, and \( \mathbf{B} \) is an \( n \times m \) matrix. The product \( \mathbf{AB} \) will be an \( m \times m \) matrix, while \( \mathbf{BA} \) will be an \( n \times n \) matrix. If \( m \neq n \), the dimensions of \( \mathbf{AB} \) and \( \mathbf{BA} \) do not match, and they cannot be directly compared.

  3. Square Matrices:

Even when both matrices are square (i.e., \( \mathbf{A} \) and \( \mathbf{B} \) are both \( n \times n \) matrices), the products \( \mathbf{AB} \) and \( \mathbf{BA} \) may still not be equal. This is because the elements of the matrices interact differently depending on the order of multiplication.

Example:

Consider the square matrices \( \mathbf{A} \) and \( \mathbf{B} \) where:

\[ \mathbf{A} = \begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix}, \quad \mathbf{B} = \begin{bmatrix} 3 & 0 \\ 4 & 5 \end{bmatrix} \]

First, calculate \( \mathbf{AB} \):

\[ \mathbf{AB} = \begin{bmatrix} 1 \cdot 3 + 2 \cdot 4 & 1 \cdot 0 + 2 \cdot 5 \\ 0 \cdot 3 + 1 \cdot 4 & 0 \cdot 0 + 1 \cdot 5 \end{bmatrix} = \begin{bmatrix} 3 + 8 & 0 + 10 \\ 4 & 5 \end{bmatrix} = \begin{bmatrix} 11 & 10 \\ 4 & 5 \end{bmatrix} \]

Next, calculate \( \mathbf{BA} \):

\[ \mathbf{BA} = \begin{bmatrix} 3 \cdot 1 + 0 \cdot 0 & 3 \cdot 2 + 0 \cdot 1 \\ 4 \cdot 1 + 5 \cdot 0 & 4 \cdot 2 + 5 \cdot 1 \end{bmatrix} = \begin{bmatrix} 3 & 6 \\ 4 & 8 + 5 \end{bmatrix} = \begin{bmatrix} 3 & 6 \\ 4 & 13 \end{bmatrix} \]

Here, \( \mathbf{AB} \) and \( \mathbf{BA} \) are both \( 2 \times 2 \) matrices, but:

\[ \mathbf{AB} = \begin{bmatrix} 11 & 10 \\ 4 & 5 \end{bmatrix}, \quad \mathbf{BA} = \begin{bmatrix} 3 & 6 \\ 4 & 13 \end{bmatrix} \]

Since \( \mathbf{AB} \neq \mathbf{BA} \), this example illustrates that even when matrices are square and conformable, matrix multiplication is generally not commutative.
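The non-commutativity example above can be checked numerically (`mat_mul` is an illustrative helper, assumed conformable inputs):

```python
def mat_mul(A, B):
    """Row-by-column product; assumes A is m x p and B is p x n."""
    p = len(B)
    return [
        [sum(A[i][j] * B[j][k] for j in range(p)) for k in range(len(B[0]))]
        for i in range(len(A))
    ]

A = [[1, 2], [0, 1]]
B = [[3, 0], [4, 5]]

AB = mat_mul(A, B)   # [[11, 10], [4, 5]]
BA = mat_mul(B, A)   # [[3, 6], [4, 13]]
# AB != BA: reversing the order of multiplication changes the result.
```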