Solving Systems of Linear Equations
Consider a system of linear equations:
This system involves three variables: \(x\), \(y\), and \(z\). Our goal is to determine the specific values of \(x\), \(y\), and \(z\) that simultaneously satisfy all three equations. These specific values, which make each equation true, are referred to as the solutions of the system of linear equations.
To find these solutions, we employ a systematic method known as elimination of variables. This method simplifies the system step by step, allowing us to isolate each variable and solve for its value.
The elimination process works by eliminating any two of the three variables and isolating the remaining variable to determine its value. To achieve this, we follow two simple rules: we can multiply any equation by any nonzero number, and we can add or subtract two equations. These operations do not alter the solution of the system of linear equations but help us systematically eliminate the unknown variables one by one.
Let us eliminate \(x\). To do so, we multiply equation \((1)\) by 2 and subtract the result from equation \((2)\):
Next, we multiply equation \((1)\) by 3 and subtract the result from equation \((3)\):
This gives us two new equations which do not contain \(x\):
These two equations can now be used to further isolate and solve for the variables \(y\) and \(z\).
Now, we substitute \( y = \frac{3}{2} \) back into equation \((5)\) to solve for \(z\):
Next, we substitute \( y = \frac{3}{2} \) and \( z = \frac{21}{2} \) into equation \((1)\) to solve for \(x\):
Therefore, the solution to the system of equations is:
Gaussian Elimination
Why are we learning this?
Gaussian elimination is important because it allows for the systematic solving of systems of linear equations, which is essential for handling large systems with many variables. While we can solve small systems of two or three variables manually, practical applications often involve hundreds or thousands of variables, making manual methods impractical. Gaussian elimination provides a clear, algorithmic approach that computers can execute, enabling the efficient solution of large systems. It also forms the basis for more advanced numerical techniques in linear algebra that you will study in the future.
In Gaussian elimination, we represent the system of linear equations in matrix form. This is because, if we observe carefully, the way we solve linear equations involves manipulating the coefficients directly, without needing to repeatedly write the variables \(x\), \(y\), \(z\), or the equals sign. Ancient Chinese mathematicians were among the first to recognize and use this efficient approach.
Matrix Representation
Consider the system of linear equations:
We can represent our system of linear equations in an array format, which we call an augmented matrix, as follows:
In this matrix:
- Each row corresponds to one equation in the system.
- Each column corresponds to the coefficients of one variable, with the last column representing the constants on the right-hand side of the equations.
For example:
- The first column contains the coefficients of \(x\).
- The second column contains the coefficients of \(y\).
- The third column contains the coefficients of \(z\).
- The last column contains the constants from the right side of the equations.
\( R_1, R_2, R_3 \) represent the rows of the matrix, while \( C_1, C_2, C_3, C_4 \) represent the columns of the matrix.
Elementary Row Operations
Based on our observation of solving linear equations, we then define three rules, known as elementary row operations, which help us manipulate the matrix without changing the solutions of the system.

Rule 1: Row Multiplication
We can multiply any row by any nonzero number. This is equivalent to multiplying all coefficients and the constant term of an equation by that number. This operation does not change the solution of the system.
For example, multiplying the first row by 2:
\[ \begin{pmatrix} 1 & -2 & 1 & \vert & 2 \\ 2 & 3 & 1 & \vert & 4 \\ 3 & 1 & 2 & \vert & 3 \\ \end{pmatrix} \rightarrow \begin{pmatrix} 2 & -4 & 2 & \vert & 4 \\ 2 & 3 & 1 & \vert & 4 \\ 3 & 1 & 2 & \vert & 3 \\ \end{pmatrix} \]This is equivalent to multiplying the first equation by 2.

Rule 2: Row Addition/Subtraction
We can add or subtract a multiple of one row to/from another row. This is equivalent to adding or subtracting the corresponding equations. This operation helps eliminate variables step by step.
For example, subtracting twice the first row from the second row:
\[ \begin{pmatrix} 1 & -2 & 1 & \vert & 2 \\ 2 & 3 & 1 & \vert & 4 \\ 3 & 1 & 2 & \vert & 3 \\ \end{pmatrix} \rightarrow \begin{pmatrix} 1 & -2 & 1 & \vert & 2 \\ 0 & 7 & -1 & \vert & 0 \\ 3 & 1 & 2 & \vert & 3 \\ \end{pmatrix} \]
Rule 3: Row Swapping
We can interchange two rows. This is equivalent to swapping the positions of two equations. This does not affect any solutions.
For example, swapping the first and second rows:
\[ \begin{pmatrix} 1 & -2 & 1 & \vert & 2 \\ 2 & 3 & 1 & \vert & 4 \\ 3 & 1 & 2 & \vert & 3 \\ \end{pmatrix} \rightarrow \begin{pmatrix} 2 & 3 & 1 & \vert & 4 \\ 1 & -2 & 1 & \vert & 2 \\ 3 & 1 & 2 & \vert & 3 \\ \end{pmatrix} \]
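These three operations translate directly into code. Below is a minimal Python sketch; the function names and the example matrix (with signs as reconstructed above) are our own, and each row stores its constant term as the last entry.

```python
def scale_row(M, i, c):
    """Rule 1: multiply row i by a nonzero number c."""
    assert c != 0
    M[i] = [c * v for v in M[i]]

def add_multiple(M, i, j, c):
    """Rule 2: add c times row j to row i (use a negative c to subtract)."""
    M[i] = [a + c * b for a, b in zip(M[i], M[j])]

def swap_rows(M, i, j):
    """Rule 3: interchange rows i and j."""
    M[i], M[j] = M[j], M[i]

# Augmented matrix stored row by row; the constant column is the last entry.
M = [[1, -2, 1, 2],
     [2, 3, 1, 4],
     [3, 1, 2, 3]]

add_multiple(M, 1, 0, -2)   # R2 <- R2 - 2*R1
# M[1] is now [0, 7, -1, 0], matching the Rule 2 example above
```

Because each operation is reversible (divide back, subtract back, swap back), applying them never changes the solution set of the underlying system.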
Applying Row Operations to Matrices
To solve a system of linear equations using Gaussian elimination, we start by applying row operations to its augmented matrix. Here's a step-by-step explanation:
Step 1: Choosing the Pivot Element
We begin with the top leftmost element of the matrix, which we call the pivot. The pivot is crucial for the elimination process.
- If the pivot (top leftmost element) is nonzero, we can use it directly.
- If the pivot is zero, we interchange this row with a row below that has a nonzero element in the first column. This ensures that we have a nonzero pivot to work with.
- If all elements in this column are zero, we move diagonally to the next row and column.
Step 2: Eliminate Elements Below the Pivot
Once we have a nonzero pivot, the next goal is to make all elements below this pivot zero. This simplifies the system by reducing the number of variables in the subsequent rows.
- To eliminate the elements below the pivot, we use row operations. Specifically, we multiply the pivot row by a suitable nonzero number and subtract it from the rows below.
- This step is equivalent to eliminating the variable associated with the pivot from the other equations.
Step 3: Move to the Next Pivot
After making the elements below the pivot zero, we move diagonally to the element in the second row and second column. This becomes our new pivot.
- If this new pivot is nonzero, we proceed with the next elimination step.
- If the new pivot is zero, we interchange this row with a row below that has a nonzero element in the same column. If all elements below are zero as well, we move diagonally to the next row and column.
Step 4: Repeat the Process
We repeat the process of eliminating elements below the pivot and moving to the next pivot until we reach the last row. By doing this, we systematically transform the matrix into an upper triangular form.
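The steps above can be sketched as a short routine. This is an illustrative implementation of the described procedure, not the only way to organize the loop: `Fraction` keeps the arithmetic exact, and when a column has no usable pivot we simply move on to the next column without advancing the pivot row.

```python
from fractions import Fraction

def row_echelon(M):
    """Reduce an augmented matrix to row echelon form, following the
    pivot-choosing and elimination steps described above."""
    M = [[Fraction(v) for v in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols - 1):           # the last column holds the constants
        # Step 1: find a row at or below r with a nonzero entry in column c.
        piv = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if piv is None:
            continue                    # whole column is zero: move right
        M[r], M[piv] = M[piv], M[r]     # swap it into the pivot position
        # Step 2: eliminate every entry below the pivot.
        for i in range(r + 1, rows):
            factor = M[i][c] / M[r][c]
            M[i] = [a - factor * b for a, b in zip(M[i], M[r])]
        r += 1                          # Step 3: move to the next pivot row
        if r == rows:                   # Step 4: stop at the last row
            break
    return M

ref = row_echelon([[0, 2, 1, 2],
                   [2, 3, 1, 4],
                   [3, 1, 2, 3]])
# ref is [[2, 3, 1, 4], [0, 2, 1, 2], [0, 0, 9/4, 1/2]]: upper triangular
```

The example call reduces the matrix of the worked example that follows.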
Let us look at the following example to understand how Gaussian elimination works.
Example
Consider the system of linear equations represented by the following augmented matrix:
\[ \begin{pmatrix} 0 & 2 & 1 & \vert & 2 \\ 2 & 3 & 1 & \vert & 4 \\ 3 & 1 & 2 & \vert & 3 \end{pmatrix} \]
The top leftmost element is zero. We interchange the first row with a row below that has a nonzero element in the first column:
\[ \begin{pmatrix} 2 & 3 & 1 & \vert & 4 \\ 0 & 2 & 1 & \vert & 2 \\ 3 & 1 & 2 & \vert & 3 \\ \end{pmatrix} \]
Now, the pivot is 2.

To make the element below the pivot (in the third row, first column) zero, we perform the following row operation:
\[ R_3 \leftarrow R_3 - \frac{3}{2} R_1 \rightarrow \begin{pmatrix} 2 & 3 & 1 & \vert & 4 \\ 0 & 2 & 1 & \vert & 2 \\ 0 & -\frac{7}{2} & \frac{1}{2} & \vert & -3 \end{pmatrix} \]
Move diagonally to the element in the second row and second column (2). This is our new pivot.

To make the element below the second pivot (in the third row, second column) zero, we perform the following row operation:
\[ R_3 \leftarrow R_3 + \frac{7}{4} R_2 \rightarrow \begin{pmatrix} 2 & 3 & 1 & \vert & 4 \\ 0 & 2 & 1 & \vert & 2 \\ 0 & 0 & \frac{9}{4} & \vert & \frac{1}{2} \end{pmatrix} \]
Move diagonally to the element in the third row and third column \(\left(\frac{9}{4}\right)\). This is our new pivot.
The matrix is now in upper triangular form:
\[ \begin{pmatrix} 2 & 3 & 1 & \vert & 4 \\ 0 & 2 & 1 & \vert & 2 \\ 0 & 0 & \frac{9}{4} & \vert & \frac{1}{2} \end{pmatrix} \]
Row Echelon Form
After Gaussian elimination in the example above, what we obtain is called the row echelon form of the matrix.
A matrix is in row echelon form if it meets the following criteria:
- Leading Element: The first nonzero number from the left (known as the leading element) in each row is to the right of the leading element in the row above it.
- Zeros Below Leading Elements: All entries below each leading element are zero.
- Zero Rows: Any rows consisting entirely of zeros are at the bottom of the matrix.
From the above example:
\[ \begin{pmatrix} 2 & 3 & 1 & \vert & 4 \\ 0 & 2 & 1 & \vert & 2 \\ 0 & 0 & \frac{9}{4} & \vert & \frac{1}{2} \end{pmatrix} \]
Let's analyze why this matrix is in row echelon form:

Leading Element Position:
- The first row's leading element is \(2\), which is the first nonzero number from the left.
- The second row's leading element is \(2\), positioned to the right of the first row's leading element.
- The third row's leading element is \(\frac{9}{4}\), positioned to the right of the second row's leading element.

Zeros Below Leading Elements:
- In the first column, below the leading element \(2\) in the first row, all elements are zero.
- In the second column, below the leading element \(2\) in the second row, all elements are zero.
- In the third column, below the leading element \(\frac{9}{4}\) in the third row, there are no rows left.

Zero Rows:
- There are no rows consisting entirely of zeros. If there were, they would be at the bottom of the matrix.
Thus, this matrix is in row echelon form because it satisfies all the conditions.
Back Substitution
After performing Gaussian elimination and obtaining the row echelon form, we convert the matrix back into a system of equations:
From the last equation, solve for \(z\):
Substitute \(z\) into the second equation to find \(y\):
Substitute \(y\) and \(z\) into the first equation to find \(x\):
This process is called back-substitution.
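Back-substitution is easy to express in code once the matrix is upper triangular: solve the last equation first, then work upward. A minimal sketch, assuming a square system with nonzero pivots; the example system here is a small hypothetical one, not taken from the text.

```python
from fractions import Fraction

def back_substitute(U):
    """Solve an augmented upper-triangular system by back-substitution,
    working from the last equation upward. Assumes nonzero pivots."""
    n = len(U)
    x = [Fraction(0)] * n
    for i in range(n - 1, -1, -1):
        # Subtract the already-known variables, then divide by the pivot.
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (Fraction(U[i][n]) - s) / Fraction(U[i][i])
    return x

# Hypothetical system: 2x + 3y + z = 4,  2y + z = 2,  4z = 8.
U = [[2, 3, 1, 4],
     [0, 2, 1, 2],
     [0, 0, 4, 8]]
# back_substitute(U) == [1, 0, 2], i.e. x = 1, y = 0, z = 2
```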
Example
Given the system of equations:
\[ \begin{cases} x + 2y - z = 6 \\ -x + y - 3z = 4 \\ 3x - y + 2z = -1 \end{cases} \]The corresponding augmented matrix is:
\[ \begin{pmatrix} 1 & 2 & -1 & \vert & 6 \\ -1 & 1 & -3 & \vert & 4 \\ 3 & -1 & 2 & \vert & -1 \end{pmatrix} \]
Step 1: Transforming to Row Echelon Form

Identify the pivot element in the first row:
\[ \begin{pmatrix} 1 & 2 & -1 & \vert & 6 \\ -1 & 1 & -3 & \vert & 4 \\ 3 & -1 & 2 & \vert & -1 \end{pmatrix} \] 
Perform row operations to create zeros below the pivot:
\[ R_2 \leftarrow R_2 + R_1 \]\[ R_3 \leftarrow R_3 - 3R_1 \]\[ \begin{pmatrix} 1 & 2 & -1 & \vert & 6 \\ 0 & 3 & -4 & \vert & 10 \\ 0 & -7 & 5 & \vert & -19 \end{pmatrix} \] 
Multiply the second row by 7 and the third row by 3:
\[ R_2 \leftarrow 7R_2 \]\[ R_3 \leftarrow 3R_3 \]\[ \begin{pmatrix} 1 & 2 & -1 & \vert & 6 \\ 0 & 21 & -28 & \vert & 70 \\ 0 & -21 & 15 & \vert & -57 \end{pmatrix} \] 
Add the second row to the third row to create a zero in the second column of the third row:
\[ R_3 \leftarrow R_3 + R_2 \]\[ \begin{pmatrix} 1 & 2 & -1 & \vert & 6 \\ 0 & 21 & -28 & \vert & 70 \\ 0 & 0 & -13 & \vert & 13 \end{pmatrix} \]
Step 2: Back-Substitution
Now we perform back-substitution:

From the third row:
\[ -13z = 13 \implies z = -1 \] 
Substitute \(z\) into the second row to solve for \(y\):
\[ 21y - 28(-1) = 70 \implies 21y + 28 = 70 \implies 21y = 42 \implies y = 2 \] 
Substitute \(y\) and \(z\) into the first row to solve for \(x\):
\[ x + 2(2) - (-1) = 6 \implies x + 4 + 1 = 6 \implies x + 5 = 6 \implies x = 1 \]
Thus, the solution to the system is:
\[ x = 1, \quad y = 2, \quad z = -1 \]
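As a quick sanity check, we can substitute the solution back into the system (with signs as reconstructed above) and confirm that every equation holds:

```python
# Check x = 1, y = 2, z = -1 against the reconstructed system:
#   x + 2y - z = 6,  -x + y - 3z = 4,  3x - y + 2z = -1
x, y, z = 1, 2, -1
assert x + 2*y - z == 6       # first equation
assert -x + y - 3*z == 4      # second equation
assert 3*x - y + 2*z == -1    # third equation
print("all three equations are satisfied")
```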
Example
Consider the system of linear equations:
\[ \begin{cases} x + y + z = 3 \\ 2x - y + 3z = -1 \\ 3x + 2y + z = 2 \end{cases} \]
To solve this system using Gaussian elimination, we first convert it into an augmented matrix and then perform row operations to transform it into row echelon form.
Step 1: Write the Augmented Matrix
The augmented matrix for the system is:
\[ \begin{pmatrix} 1 & 1 & 1 & \vert & 3 \\ 2 & -1 & 3 & \vert & -1 \\ 3 & 2 & 1 & \vert & 2 \end{pmatrix} \]
Step 2: Perform Row Operations to Get Row Echelon Form

Identify the pivot in the first column (Row 1, Column 1):
\[ \begin{pmatrix} \mathbf{1} & 1 & 1 & \vert & 3 \\ 2 & -1 & 3 & \vert & -1 \\ 3 & 2 & 1 & \vert & 2 \end{pmatrix} \] 
Eliminate the elements below the pivot:

Subtract 2 times Row 1 from Row 2:
\[ R_2 \leftarrow R_2 - 2R_1 \] 
Subtract 3 times Row 1 from Row 3:
\[ R_3 \leftarrow R_3 - 3R_1 \]
Resulting matrix:
\[ \begin{pmatrix} 1 & 1 & 1 & \vert & 3 \\ 0 & -3 & 1 & \vert & -7 \\ 0 & -1 & -2 & \vert & -7 \end{pmatrix} \] 

Identify the pivot in the second column (Row 2, Column 2):
\[ \begin{pmatrix} 1 & 1 & 1 & \vert & 3 \\ 0 & \mathbf{-3} & 1 & \vert & -7 \\ 0 & -1 & -2 & \vert & -7 \end{pmatrix} \] 
Eliminate the element below the pivot:

Interchange Row 2 and Row 3 so that the pivot becomes \(-1\), which avoids fractions in the next step:
\[ R_2 \leftrightarrow R_3 \]
Resulting matrix:
\[ \begin{pmatrix} 1 & 1 & 1 & \vert & 3 \\ 0 & -1 & -2 & \vert & -7 \\ 0 & -3 & 1 & \vert & -7 \end{pmatrix} \] 

Subtract 3 times Row 2 from Row 3:
\[ R_3 \leftarrow R_3 - 3R_2 \]Resulting matrix:
\[ \begin{pmatrix} 1 & 1 & 1 & \vert & 3 \\ 0 & -1 & -2 & \vert & -7 \\ 0 & 0 & 7 & \vert & 14 \end{pmatrix} \]
Now the matrix is in row echelon form.
Step 3: Back Substitution
Convert the row echelon form matrix back to equations and solve for the variables:

Last equation:
\[ 7z = 14 \implies z = 2 \] 
Second-to-last equation:
\[ -y - 2z = -7 \implies -y - 2(2) = -7 \implies -y - 4 = -7 \implies y = 3 \] 
First equation:
\[ x + y + z = 3 \implies x + 3 + 2 = 3 \implies x + 5 = 3 \implies x = -2 \]
Solution
The solution to the system of equations is:
\[ x = -2, \quad y = 3, \quad z = 2 \]
No Solution
Solve the system of linear equations using Gaussian elimination:
\[ \begin{cases} x - y + 2z = 3 \\ 2x + 2y + z = 5 \\ 3x + y + 3z = 7 \end{cases} \]
Solution:

Write the augmented matrix:
\[ \begin{pmatrix} 1 & -1 & 2 & \vert & 3 \\ 2 & 2 & 1 & \vert & 5 \\ 3 & 1 & 3 & \vert & 7 \end{pmatrix} \] 
Perform row operations to get the matrix in row echelon form:
- Step 1: Use the first row to eliminate the \(x\)-terms from the second and third rows.
\[ \begin{aligned} R_2 &\leftarrow R_2 - 2R_1: &\begin{pmatrix} 1 & -1 & 2 & \vert & 3 \\ 0 & 4 & -3 & \vert & -1 \\ 3 & 1 & 3 & \vert & 7 \end{pmatrix}\\ R_3 &\leftarrow R_3 - 3R_1: &\begin{pmatrix} 1 & -1 & 2 & \vert & 3 \\ 0 & 4 & -3 & \vert & -1 \\ 0 & 4 & -3 & \vert & -2 \end{pmatrix} \end{aligned} \] Step 2: Use the second row to eliminate the \(y\)-term from the third row.
\[ R_3 \leftarrow R_3 - R_2: \begin{pmatrix} 1 & -1 & 2 & \vert & 3 \\ 0 & 4 & -3 & \vert & -1 \\ 0 & 0 & 0 & \vert & -1 \end{pmatrix} \] 
Interpret the row echelon form:
\[ \begin{pmatrix} 1 & -1 & 2 & \vert & 3 \\ 0 & 4 & -3 & \vert & -1 \\ 0 & 0 & 0 & \vert & -1 \end{pmatrix} \]The corresponding system of equations is: \[ \begin{cases} x - y + 2z = 3 \\ 4y - 3z = -1 \\ 0 = -1 \end{cases} \]
The third equation \(0 = -1\) is a contradiction, indicating that the system of equations has no solution.
Infinitely Many Solutions
Solve the system of linear equations using Gaussian elimination:
\[ \begin{cases} 3x - 2y - z = 0 \\ x + 3y + 5z = 9 \\ 5x + 4y + 9z = 18 \end{cases} \]
Solution:

Write the augmented matrix:
\[ \begin{pmatrix} 3 & -2 & -1 & \vert & 0 \\ 1 & 3 & 5 & \vert & 9 \\ 5 & 4 & 9 & \vert & 18 \end{pmatrix} \] 
Perform row operations to get the matrix in row echelon form:
- Step 1: Swap \(R_1\) and \(R_2\) to make the leading coefficient of the first row 1.
\[ R_1 \leftrightarrow R_2: \begin{pmatrix} 1 & 3 & 5 & \vert & 9 \\ 3 & -2 & -1 & \vert & 0 \\ 5 & 4 & 9 & \vert & 18 \end{pmatrix} \] Step 2: Use the first row to eliminate the \(x\)-terms from the second and third rows.
\[ \begin{aligned} R_2 &\leftarrow R_2 - 3R_1: &\begin{pmatrix} 1 & 3 & 5 & \vert & 9 \\ 0 & -11 & -16 & \vert & -27 \\ 5 & 4 & 9 & \vert & 18 \end{pmatrix}\\ R_3 &\leftarrow R_3 - 5R_1: &\begin{pmatrix} 1 & 3 & 5 & \vert & 9 \\ 0 & -11 & -16 & \vert & -27 \\ 0 & -11 & -16 & \vert & -27 \end{pmatrix} \end{aligned} \] Step 3: Use the second row to eliminate the \(y\)-term from the third row.
\[ R_3 \leftarrow R_3 - R_2: \begin{pmatrix} 1 & 3 & 5 & \vert & 9 \\ 0 & -11 & -16 & \vert & -27 \\ 0 & 0 & 0 & \vert & 0 \end{pmatrix} \] 
Interpret the row echelon form:
\[ \begin{pmatrix} 1 & 3 & 5 & \vert & 9 \\ 0 & -11 & -16 & \vert & -27 \\ 0 & 0 & 0 & \vert & 0 \end{pmatrix} \]The corresponding system of equations is:
\[ \begin{cases} x + 3y + 5z = 9 \\ -11y - 16z = -27 \\ 0 = 0 \end{cases} \]Since the third equation \(0 = 0\) is always true, it indicates that we have infinitely many solutions.
To understand why we have infinite solutions, we note that \(z\) is a free variable. This situation arises because we have more unknowns than equations. Specifically, in this system, we have three variables (\(x\), \(y\), and \(z\)) but only two nontrivial equations after reducing the matrix to row echelon form.
When we have more unknowns than equations, it implies that at least one variable will not be constrained by the equations. This variable is referred to as a "free variable." In this example, \(z\) is chosen as the free variable. However, it's important to note that we can choose any one variable to be free. The choice of the free variable does not affect the nature of the solutions; it just provides a different perspective on how to express the solutions.
Assume \(z = \lambda\) where \(\lambda \in \mathbb{R}\).
From the second equation in the row echelon form:
\[ -11y - 16z = -27 \implies 11y + 16\lambda = 27 \implies y = \frac{27 - 16\lambda}{11} \]Now, substitute \(y\) and \(z\) into the first equation:
\[ x + 3\left(\frac{27 - 16\lambda}{11}\right) + 5\lambda = 9 \]\[ x + \frac{81 - 48\lambda}{11} + 5\lambda = 9 \]\[ x = 9 - \frac{81 - 48\lambda + 55\lambda}{11} \]\[ x = \frac{18 - 7\lambda}{11} \]Therefore, for any \(\lambda \in \mathbb{R}\):
\[ x = \frac{18 - 7\lambda}{11}, \quad y = \frac{27 - 16\lambda}{11}, \quad z = \lambda \]These expressions provide the parametric form of the solutions for the system, indicating an infinite number of solutions. For example, if \(\lambda = 0\):
\[ x = \frac{18}{11}, \quad y = \frac{27}{11}, \quad z = 0 \]This set of values satisfies the system of equations.
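We can verify the parametric form programmatically: for any choice of \(\lambda\), the resulting \((x, y, z)\) should satisfy all three equations of the system (with signs as reconstructed above). A short check using exact fractions:

```python
from fractions import Fraction

def solution(lam):
    """Parametric solution of the system for a given value of lambda."""
    lam = Fraction(lam)
    x = (18 - 7 * lam) / 11
    y = (27 - 16 * lam) / 11
    return x, y, lam

# Every choice of lambda satisfies all three reconstructed equations.
for lam in [0, 1, -5, 22]:
    x, y, z = solution(lam)
    assert 3*x - 2*y - z == 0
    assert x + 3*y + 5*z == 9
    assert 5*x + 4*y + 9*z == 18
```

Because the check passes for arbitrary \(\lambda\), this confirms that the free variable genuinely parameterizes an infinite family of solutions.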
Definition of Rank
The rank of a matrix is the number of nonzero rows in its row echelon form obtained through Gaussian elimination.
Example
Find the rank of the matrix:
Solution:
To determine the rank of the matrix, we need to transform it into row echelon form and count the number of nonzero rows. Follow these steps:
- Start with the given matrix:
- Perform the row operations:
This yields:
- Swap \( R_2 \) and \( R_3 \):
- Perform the row operation:
This gives:
From the final row echelon form, we observe there are 3 nonzero rows. Therefore, the rank of the matrix is:
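Following the definition, the rank can be computed by running Gaussian elimination and counting the nonzero rows. A minimal sketch (the helper name `rank` is our own):

```python
from fractions import Fraction

def rank(M):
    """Rank = number of nonzero rows after Gaussian elimination."""
    M = [[Fraction(v) for v in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        # Find a pivot at or below row r in column c.
        piv = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, rows):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
        if r == rows:
            break
    return r    # every row below r is now entirely zero

# Row 2 is twice row 1, so only two rows survive elimination:
print(rank([[1, 2, 3], [2, 4, 6], [1, 0, 1]]))   # 2
```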
To understand the concept of rank and why it is useful, let's consider a general system of linear equations with \(m\) equations and \(n\) unknowns.
Consider the system of linear equations:

Coefficient Matrix:
- The coefficient matrix \(A\) is formed by the coefficients of the unknowns \(x_1, x_2, \ldots, x_n\) in the system.
\[ A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix} \] 
Augmented Matrix:
- The augmented matrix \([A|b]\) is formed by appending the column of constants \(b_1, b_2, \ldots, b_m\) to the coefficient matrix.
\[ [Ab] = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} & \vert & b_1 \\ a_{21} & a_{22} & \cdots & a_{2n} & \vert & b_2 \\ \vdots & \vdots & \ddots & \vdots & \vert & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} & \vert & b_m \end{pmatrix} \] 
Gaussian Elimination and Rank:
- When we apply Gaussian elimination to the augmented matrix \([A|b]\), we transform it into its row echelon form. The rank of a matrix is the number of nonzero rows in its row echelon form.
- If we hide the last column of the augmented matrix during Gaussian elimination, we are effectively applying Gaussian elimination to the coefficient matrix \(A\).

Simultaneous Echelon Forms:
- The process of Gaussian elimination can be applied to both \(A\) and \([A|b]\) simultaneously, obtaining the echelon forms of both matrices.

Determining the Solution:

The relationship between the rank of \(A\) and the rank of \([A|b]\) helps determine the type of solution for a system of linear equations:

Unique Solution: If \(\text{rank}(A) = \text{rank}([A|b]) = n\), then the system has a unique solution.
Explanation: Here, \(n\) represents the number of unknowns (variables). When the ranks of the coefficient matrix \(A\) and the augmented matrix \([A|b]\) are both equal to \(n\), there are \(n\) nonzero rows in the row echelon form of the augmented matrix. This implies that we have exactly enough independent equations to solve for all \(n\) variables uniquely, without any free variables.

No Solution: If \(\text{rank}(A) \neq \text{rank}([A|b])\), then the system has no solution.
Explanation: When the rank of the coefficient matrix \(A\) is different from the rank of the augmented matrix \([A|b]\), it indicates the presence of a contradiction in the system of equations. This contradiction appears as a row in the augmented matrix where all the coefficients of the variables are zero, but the constant term is nonzero (e.g., \(0 = \text{nonzero number}\)). Such a row represents an inconsistent equation, making it impossible to find any solution.

Infinite Solutions: If \(\text{rank}(A) = \text{rank}([A|b]) < n\), then the system has infinitely many solutions.
Explanation: When the ranks of the coefficient matrix \(A\) and the augmented matrix \([A|b]\) are equal but less than \(n\), there are fewer independent equations than unknowns. This results in at least one variable being a free variable. In the row echelon form, this scenario is indicated by one or more rows with all zero coefficients for the variables (with zero as the constant term as well). These free variables can take any value, leading to infinitely many solutions for the system.
By applying these principles to the system of equations, we can determine the nature of the solutions based on the ranks of the matrices involved.
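The three criteria can be captured in a small helper. The function below (our own naming) takes precomputed ranks and the number of unknowns; the calls mirror the three worked examples above:

```python
def classify(rank_A, rank_Ab, n):
    """Classify a linear system from rank(A), rank([A|b]), and n unknowns."""
    if rank_A != rank_Ab:
        return "no solution"            # an inconsistent row 0 = c appeared
    if rank_A == n:
        return "unique solution"        # enough independent equations
    return "infinitely many solutions"  # n - rank(A) free variables

print(classify(3, 3, 3))   # unique solution
print(classify(2, 3, 3))   # no solution
print(classify(2, 2, 3))   # infinitely many solutions
```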


Gauss-Jordan Elimination Method
The Gauss-Jordan elimination method is an extension of Gaussian elimination. In this method, we go beyond obtaining a row echelon form by continuing the row operations until the matrix is in reduced row echelon form (RREF). This allows us to read the solution directly from the matrix. Here are the steps involved in the Gauss-Jordan elimination method:

Form the Augmented Matrix:
Write the system of linear equations as an augmented matrix \([A|b]\), where \(A\) is the coefficient matrix and \(b\) is the column of constants.
\[ \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} & \vert & b_1 \\ a_{21} & a_{22} & \cdots & a_{2n} & \vert & b_2 \\ \vdots & \vdots & \ddots & \vdots & \vert & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} & \vert & b_m \end{pmatrix} \] 
Forward Elimination (Gaussian Elimination):
Perform Gaussian elimination steps to transform the augmented matrix into row echelon form. In row echelon form:
- All nonzero rows are above any rows of all zeros.
- The leading entry (pivot) of each nonzero row is 1.
- Each leading 1 is to the right of the leading 1 in the previous row.
- All entries below a leading 1 are zeros.

Normalization and Backward Elimination (Gauss-Jordan Steps):
After obtaining the row echelon form, continue with the following steps to transform the matrix into reduced row echelon form (RREF):

Normalization: Ensure that each leading entry (pivot) in the matrix is 1. If the pivot is not 1, divide the entire row by the value of the pivot to make it 1.

Backward Elimination: Make all the entries above each pivot equal to zero by subtracting appropriate multiples of the pivot row from the rows above. This ensures that each leading 1 is the only nonzero entry in its column.


Interpret the Result:
The resulting matrix in reduced row echelon form will look like this:
\[ \begin{pmatrix} 1 & 0 & 0 & \cdots & 0 & \vert & d_1 \\ 0 & 1 & 0 & \cdots & 0 & \vert & d_2 \\ 0 & 0 & 1 & \cdots & 0 & \vert & d_3 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vert & \vdots \\ 0 & 0 & 0 & \cdots & 1 & \vert & d_n \end{pmatrix} \]If any row of the form \(0, 0, \ldots, 0 \mid c\) (where \(c \neq 0\)) appears, the system has no solution. If there are rows of the form \(0, 0, \ldots, 0 \mid 0\), the system has infinitely many solutions, and the variables whose columns contain no pivot are free variables.
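The Gauss-Jordan steps can be sketched as a single routine that, for each pivot, normalizes the pivot row and then eliminates the entries both above and below it. This is an illustrative implementation using exact fractions, not the only possible organization:

```python
from fractions import Fraction

def gauss_jordan(M):
    """Reduce an augmented matrix to reduced row echelon form (RREF)."""
    M = [[Fraction(v) for v in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols - 1):
        piv = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        M[r] = [v / M[r][c] for v in M[r]]   # normalization: pivot -> 1
        for i in range(rows):                # eliminate above AND below
            if i != r:
                f = M[i][c]
                if f != 0:
                    M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
        if r == rows:
            break
    return M

rref = gauss_jordan([[1, 1, 1, 1],
                     [2, -1, 2, 3],
                     [3, 2, -1, 4]])
# The last column of rref reads off the solution: [3/2, -1/3, -1/6]
```

The example call reduces the system solved step by step in the worked example that follows.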
Let's solve the following system of linear equations using Gauss-Jordan elimination:
\[ \begin{cases} x + y + z = 1 \\ 2x - y + 2z = 3 \\ 3x + 2y - z = 4 \end{cases} \]

Form the Augmented Matrix:
\[ \begin{pmatrix} 1 & 1 & 1 & \vert & 1 \\ 2 & -1 & 2 & \vert & 3 \\ 3 & 2 & -1 & \vert & 4 \end{pmatrix} \] 
Forward Elimination (Gaussian Elimination): Perform Gaussian elimination steps to transform the augmented matrix into row echelon form:
\[ \begin{aligned} R_2 &\leftarrow R_2 - 2R_1: \begin{pmatrix} 1 & 1 & 1 & \vert & 1 \\ 0 & -3 & 0 & \vert & 1 \\ 3 & 2 & -1 & \vert & 4 \end{pmatrix}\\ R_3 &\leftarrow R_3 - 3R_1: \begin{pmatrix} 1 & 1 & 1 & \vert & 1 \\ 0 & -3 & 0 & \vert & 1 \\ 0 & -1 & -4 & \vert & 1 \end{pmatrix} \end{aligned} \]Swap the second and third rows so that the pivot of the second row becomes \(-1\), which keeps the arithmetic simple:
\[ R_2 \leftrightarrow R_3: \begin{pmatrix} 1 & 1 & 1 & \vert & 1 \\ 0 & -1 & -4 & \vert & 1 \\ 0 & -3 & 0 & \vert & 1 \end{pmatrix} \]Subtract 3 times the second row from the third row:
\[ R_3 \leftarrow R_3 - 3R_2: \begin{pmatrix} 1 & 1 & 1 & \vert & 1 \\ 0 & -1 & -4 & \vert & 1 \\ 0 & 0 & 12 & \vert & -2 \end{pmatrix} \] 
Normalization and Backward Elimination (GaussJordan Steps):
Normalize the third row by dividing by 12:
\[ R_3 \leftarrow \frac{1}{12}R_3 \implies \begin{pmatrix} 1 & 1 & 1 & \vert & 1 \\ 0 & -1 & -4 & \vert & 1 \\ 0 & 0 & 1 & \vert & -\frac{1}{6} \end{pmatrix} \]Eliminate the entries above the pivot in the third column, in the second row and the first row:
\[ R_2 \leftarrow R_2 + 4R_3 \implies \begin{pmatrix} 1 & 1 & 1 & \vert & 1 \\ 0 & -1 & 0 & \vert & \frac{1}{3} \\ 0 & 0 & 1 & \vert & -\frac{1}{6} \end{pmatrix} \]\[ R_1 \leftarrow R_1 - R_3 \implies \begin{pmatrix} 1 & 1 & 0 & \vert & \frac{7}{6} \\ 0 & -1 & 0 & \vert & \frac{1}{3} \\ 0 & 0 & 1 & \vert & -\frac{1}{6} \end{pmatrix} \]Normalize the second row by dividing by \(-1\):
\[ R_2 \leftarrow -R_2 \implies \begin{pmatrix} 1 & 1 & 0 & \vert & \frac{7}{6} \\ 0 & 1 & 0 & \vert & -\frac{1}{3} \\ 0 & 0 & 1 & \vert & -\frac{1}{6} \end{pmatrix} \]Eliminate the entry above the pivot in the second column, in the first row:
\[ R_1 \leftarrow R_1 - R_2 \implies \begin{pmatrix} 1 & 0 & 0 & \vert & \frac{3}{2} \\ 0 & 1 & 0 & \vert & -\frac{1}{3} \\ 0 & 0 & 1 & \vert & -\frac{1}{6} \end{pmatrix} \] 
Interpret the Result:
The system is now in reduced row echelon form:
\[ \begin{pmatrix} 1 & 0 & 0 & \vert & \frac{3}{2} \\ 0 & 1 & 0 & \vert & -\frac{1}{3} \\ 0 & 0 & 1 & \vert & -\frac{1}{6} \end{pmatrix} \]The corresponding solutions are:
\[ \begin{cases} x = \frac{3}{2} \\ y = -\frac{1}{3} \\ z = -\frac{1}{6} \end{cases} \]Therefore, the solution to the system of equations is:
\[ x = \frac{3}{2}, \quad y = -\frac{1}{3}, \quad z = -\frac{1}{6} \]