Gaussian Elimination & LU Decomposition
Numerical Algorithms for Linear Systems
1. Gaussian Elimination
The most fundamental direct method for solving a system of linear equations $\boldsymbol{A}\boldsymbol{x} = \boldsymbol{b}$. It consists of "Forward Elimination", which transforms the matrix into upper triangular form, followed by "Backward Substitution", which determines the solution.
① Forward Elimination
In step $k$, eliminate the elements of the $k$-th column in the rows below it ($i > k$) using the $k$-th row.
Calculation of Multiplier $m_{ik}$:
\[ m_{ik} = \frac{a_{ik}^{(k)}}{a_{kk}^{(k)}} \]
Update of Elements:
\[ a_{ij}^{(k+1)} = a_{ij}^{(k)} - m_{ik} a_{kj}^{(k)} \quad (j = k+1, \dots, n) \]
\[ b_{i}^{(k+1)} = b_{i}^{(k)} - m_{ik} b_{k}^{(k)} \]
② Backward Substitution
Determine the unknowns one at a time, starting from the bottom, using the transformed upper triangular system:
\[ x_i = \frac{1}{a_{ii}^{(i)}} \left( b_i^{(i)} - \sum_{j=i+1}^{n} a_{ij}^{(i)} x_j \right) \quad (i = n, n-1, \dots, 1) \]
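The following is a minimal Python sketch of the two phases (without pivoting; the function name and the dense NumPy array layout are illustrative choices, not part of the original material):

```python
import numpy as np

def gaussian_elimination(A, b):
    """Solve A x = b by forward elimination followed by backward substitution (no pivoting)."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)

    # Forward elimination: in step k, zero out column k below the diagonal.
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]        # multiplier m_ik
            A[i, k:] -= m * A[k, k:]     # update a_ij for j = k, ..., n-1
            b[i] -= m * b[k]             # update the right-hand side

    # Backward substitution: solve the upper triangular system from the bottom up.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x
```

On the small system of Section 5 this returns $x_1 = 1$, $x_2 = 2$ even without pivoting, but pivoting (next section) is needed for robustness in general.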
2. Pivoting
An essential process to maintain the accuracy and stability of numerical calculations.
Partial Pivoting:
In each step $k$ of forward elimination, find the row whose entry has the largest absolute value among $|a_{kk}|, \dots, |a_{nk}|$ in the $k$-th column and swap it with the $k$-th row. This prevents the amplification of rounding errors caused by dividing by a pivot with a small absolute value.
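In code, the pivot search amounts to a few lines inserted at the top of step $k$ in the elimination sketch above (again an illustrative fragment, not taken from the original text):

```python
# At the start of step k, before computing the multipliers:
p = k + np.argmax(np.abs(A[k:, k]))   # row index of the largest |a_ik| for i = k, ..., n-1
if p != k:
    A[[k, p]] = A[[p, k]]             # swap rows k and p of A
    b[[k, p]] = b[[p, k]]             # swap the corresponding right-hand side entries
```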
3. LU Decomposition
A method that organizes the calculation process of Gaussian elimination to decompose matrix $\boldsymbol{A}$ into the product of a lower triangular matrix $\boldsymbol{L}$ and an upper triangular matrix $\boldsymbol{U}$ ($\boldsymbol{A} = \boldsymbol{LU}$).
Recurrence Formula (Doolittle Method, with $l_{kk} = 1$):
\[ u_{kj} = a_{kj} - \sum_{s=1}^{k-1} l_{ks} u_{sj} \quad (j = k, \dots, n) \]
\[ l_{ik} = \frac{1}{u_{kk}} \left( a_{ik} - \sum_{s=1}^{k-1} l_{is} u_{sk} \right) \quad (i = k+1, \dots, n) \]
Once the LU decomposition has been computed, the system can be solved quickly for any new right-hand side vector $\boldsymbol{b}$ using only forward and backward substitution ($O(n^2)$ operations), making it ideal for iterative methods such as Newton's method.
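A minimal Python sketch of the Doolittle recurrence and the two substitution passes follows (function names and the omission of pivoting are illustrative simplifications):

```python
import numpy as np

def lu_doolittle(A):
    """Factor A = L U with a unit diagonal in L (Doolittle form, no pivoting)."""
    n = A.shape[0]
    L = np.eye(n)
    U = np.zeros((n, n))
    for k in range(n):
        # Row k of U, then column k of L, directly from the recurrence formulas.
        U[k, k:] = A[k, k:] - L[k, :k] @ U[:k, k:]
        L[k + 1:, k] = (A[k + 1:, k] - L[k + 1:, :k] @ U[:k, k]) / U[k, k]
    return L, U

def lu_solve(L, U, b):
    """Solve L y = b (forward substitution), then U x = y (backward substitution)."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):
        y[i] = b[i] - L[i, :i] @ y[:i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x
```

Once `L` and `U` have been computed, each additional right-hand side costs only the two $O(n^2)$ substitution passes in `lu_solve`.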
4. Mathematical Background of Sparse Matrix Processing
In large-scale systems such as circuit matrices, mathematically optimizing the placement (structure) of non-zero elements is key to computational efficiency.
① Structural Analysis via Graph Theory
The non-zero structure of matrix $\boldsymbol{A}$ can be represented as a graph $G(\boldsymbol{A}) = (V, E)$ whose adjacency matrix has the same non-zero pattern as $\boldsymbol{A}$ (directed in general, undirected when the structure is symmetric). Pivot selection in the elimination process then corresponds to "node removal" in this graph and the accompanying "addition of edges (fill-ins)."
Theorem: Elimination Graph
When node $v_k$ is eliminated, a new edge (fill-in) is added between every pair of its neighbors that is not already connected. Preventing these neighborhoods from growing into a complete graph (clique) is the mathematical essence of maintaining sparsity.
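A tiny illustration of the node-removal rule, using adjacency sets for a made-up 4-node graph (not an example from the original text):

```python
# Undirected elimination graph stored as adjacency sets (4-cycle: 1-2, 1-3, 2-4, 3-4).
graph = {1: {2, 3}, 2: {1, 4}, 3: {1, 4}, 4: {2, 3}}

def eliminate(graph, v):
    """Remove node v and connect every unconnected pair of its neighbors (the fill-ins)."""
    neighbors = graph.pop(v)
    for a in neighbors:
        graph[a].discard(v)
    fill_ins = []
    for a in neighbors:
        for b in neighbors:
            if a < b and b not in graph[a]:
                graph[a].add(b)
                graph[b].add(a)
                fill_ins.append((a, b))
    return fill_ins

print(eliminate(graph, 1))   # eliminating node 1 creates the single fill-in edge (2, 3)
```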
② Mathematics of the Markowitz Criterion
In step $k$, the upper bound on the number of fill-ins generated when element $a_{ij}$ is selected as the pivot is given by the following **Markowitz Product**:
\[ M_{ij}^{(k)} = \left( r_i^{(k)} - 1 \right) \left( c_j^{(k)} - 1 \right) \]
Here, $r_i^{(k)}$ is the number of non-zero elements in the $i$-th row and $c_j^{(k)}$ is the number of non-zero elements in the $j$-th column of the active (not yet eliminated) submatrix. By choosing, at each elimination step, the pivot that minimizes $M_{ij}^{(k)}$ (a local optimization), the $O(N^3)$ cost of dense elimination is dramatically reduced.
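As a sketch of the selection rule, assuming the non-zero pattern of the active submatrix is available as a dense Boolean array (a simplification purely for illustration; real sparse solvers maintain the row and column counts incrementally):

```python
import numpy as np

def markowitz_pivot(pattern):
    """Return the (i, j) position of a non-zero entry minimizing (r_i - 1) * (c_j - 1)."""
    r = pattern.sum(axis=1)                                  # non-zeros per row
    c = pattern.sum(axis=0)                                  # non-zeros per column
    M = np.where(pattern, np.outer(r - 1, c - 1), np.inf)    # Markowitz products (inf on zeros)
    return np.unravel_index(np.argmin(M), M.shape)

pattern = np.array([[1, 1, 0],
                    [1, 1, 1],
                    [0, 1, 1]], dtype=bool)
print(markowitz_pivot(pattern))   # (0, 0): M = (2 - 1) * (2 - 1) = 1
```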
③ Symbolic Factorization
Before the numerical computation itself begins, the positions where fill-ins will occur are determined in advance using only the matrix structure (the index pattern). This provides the following benefits (a minimal sketch follows the list):
- Static Memory Allocation: Eliminates dynamic memory allocation during the actual numerical computation, maximizing cache efficiency.
- Operation Scheduling: Identifies operations with no dependencies, enabling optimization for parallel computing.
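The following sketch predicts fill-in positions from the structure alone, assuming rows are stored as sets of column indices, the diagonal is structurally non-zero, and elimination proceeds in the natural order (all simplifying assumptions for illustration):

```python
def symbolic_factorization(rows, n):
    """Predict the set of fill-in positions (i, j) using only the non-zero structure.

    rows[i] is the set of column indices that are non-zero in row i;
    the diagonal is assumed to be structurally non-zero.
    """
    rows = {i: set(cols) for i, cols in rows.items()}
    fill_ins = set()
    for k in range(n):
        for i in range(k + 1, n):
            if k in rows[i]:                                   # row i is updated by pivot row k
                new = {j for j in rows[k] if j > k} - rows[i]  # columns that become non-zero in row i
                fill_ins |= {(i, j) for j in new}
                rows[i] |= new
    return fill_ins

# Eliminating a dense first row/column fills in the entire remaining block:
rows = {0: {0, 1, 2, 3}, 1: {0, 1}, 2: {0, 2}, 3: {0, 3}}
print(sorted(symbolic_factorization(rows, 4)))
```

Reordering so that the dense row and column are eliminated last, which is exactly what the Markowitz criterion tends to do, produces no fill-in at all for this structure.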
Mathematically, one would always choose the element with the minimum Markowitz product, but numerical stability must also be considered: the absolute value of the pivot must not be too small. In implementations, "Threshold Pivoting" is therefore common: among the candidates whose magnitude exceeds a stability threshold, the element with the minimum Markowitz product is selected.
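A sketch of the combined rule, assuming dense value storage and a column-relative stability test with a hypothetical threshold parameter `tau` (not from the original text):

```python
import numpy as np

def threshold_markowitz_pivot(values, tau=0.1):
    """Pick the entry with the smallest Markowitz product among those passing
    the stability test |a_ij| >= tau * max_i |a_ij| (column-relative threshold)."""
    pattern = values != 0.0
    r = pattern.sum(axis=1)                                   # non-zeros per row
    c = pattern.sum(axis=0)                                   # non-zeros per column
    stable = np.abs(values) >= tau * np.abs(values).max(axis=0)
    M = np.where(pattern & stable, np.outer(r - 1, c - 1), np.inf)
    return np.unravel_index(np.argmin(M), M.shape)
```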
5. Concrete Calculation Example (2 Variables)
Solve the following system using Gaussian elimination with pivot selection.
\[ \begin{cases} 1x_1 + 2x_2 = 5 \\ 3x_1 + 4x_2 = 11 \end{cases} \]
Step 1: Pivot Selection
Since $|3| > |1|$, swap the 1st row and the 2nd row.
Step 2: Forward Elimination
Compute the multiplier $m_{21} = 1/3$, multiply the (new) 1st row by it, and subtract the result from the 2nd row.
Step 3: Backward Substitution
$\left(2 - \tfrac{4}{3}\right)x_2 = 5 - \tfrac{11}{3} \;\Rightarrow\; \tfrac{2}{3}x_2 = \tfrac{4}{3} \;\Rightarrow\; x_2 = 2$
$3x_1 + 4(2) = 11 \Rightarrow 3x_1 = 3 \Rightarrow x_1 = 1$
Result: $x_1 = 1, x_2 = 2$
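As a quick cross-check, the same system can be solved with NumPy (or with the elimination sketch from Section 1), which reproduces the result above:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
b = np.array([5.0, 11.0])

print(np.linalg.solve(A, b))   # [1. 2.]  ->  x1 = 1, x2 = 2
```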