Gauss-Seidel Method – An Iterative Method for Solving Linear Systems

Gauss-Seidel Method (via Wikipedia):

also known as the Liebmann method or the method of successive displacement, is an iterative method used to solve a linear system of equations. It is named after the German mathematicians Carl Friedrich Gauss and Philipp Ludwig von Seidel, and is similar to the Jacobi method. Though it can be applied to any matrix with non-zero elements on the diagonals, convergence is only guaranteed if the matrix is either strictly diagonally dominant, or symmetric and positive definite.

Extracting the pure technical information: the Gauss-Seidel method is an iterative method where, given Ax = b with A and b known, we can determine the values of x. The beauty of this method is that if the matrix is strictly diagonally dominant, or symmetric and positive definite, then given an initial guess for the x values the iteration is guaranteed to converge (it often converges even if these conditions are not met). The Gauss-Seidel method is similar to Jacobi’s method, both being iterative methods for solving systems of linear equations, but Gauss-Seidel converges somewhat quicker in serial applications[1].
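
As a quick aside (not from the original post), here is a minimal sketch, assuming NumPy, of how one might check whether a matrix is strictly diagonally dominant before relying on that convergence guarantee; the example matrix is made up:

import numpy as np

def is_strictly_diagonally_dominant(A):
    """Return True if |a_ii| > sum of |a_ij| for j != i, in every row i."""
    A = np.asarray(A, dtype=float)
    diag = np.abs(np.diag(A))
    off_diag = np.abs(A).sum(axis=1) - diag
    return bool(np.all(diag > off_diag))

# Made-up example: each diagonal entry dominates the rest of its row
print(is_strictly_diagonally_dominant([[4, -1, 1],
                                       [2, 6, -1],
                                       [1, 2, 5]]))  # True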

Using the Gauss-Seidel Method

The method is fairly straightforward. Given a standard system of linear equations, Ax = b, A is a matrix (often representing a series of equations), x is the vector of unknowns that the Gauss-Seidel method solves for, and b is the right-hand-side vector. In the Gauss-Seidel method we split the A matrix into an Upper (U) and a Lower (L) matrix (the lower matrix in this case also contains the diagonal), then iterate using the following update:

x^(k+1) = L^(-1) (b - U x^(k))

Where, given A:

[1, 1, 1]
[1, 1, 1]
[1, 1, 1]

Lower (L):

[1, 0, 0]
[1, 1, 0]
[1, 1, 1]

Upper (U):

[0, 1, 1]
[0, 0, 1]
[0, 0, 0]
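
In code, assuming NumPy, the same split can be written in one line per matrix (a sketch, not the original post's code):

import numpy as np

A = np.array([[1, 1, 1],
              [1, 1, 1],
              [1, 1, 1]], dtype=float)

L = np.tril(A)     # lower triangle, diagonal included
U = np.triu(A, 1)  # strictly upper triangle, diagonal excluded

print(L)
print(U)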

That is all there is to this method! Repeat the update tens to hundreds of times and the iterates converge to the solution x.
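
Written out element by element, one sweep of the update computes, for each row i:

x_i^(k+1) = ( b_i - Σ_{j<i} a_ij x_j^(k+1) - Σ_{j>i} a_ij x_j^(k) ) / a_ii

Each new component uses the components already updated during the current sweep, which is what distinguishes Gauss-Seidel from Jacobi (which only uses values from the previous sweep).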

Using Python it is relatively easy to program:
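
Below is a minimal sketch of one way to write it with NumPy; the particular A, b, initial guess, and iteration count are assumptions chosen here to be consistent with the output printed further down:

import numpy as np

# Assumed example system Ax = b (chosen to match the output below)
A = np.array([[ 4, -2,  1],
              [-2,  6, -4],
              [-1,  2,  6]], dtype=float)
b = np.array([1, -4, 3], dtype=float)

n = 20                          # number of iterations
x = np.array([1.0, 1.0, 1.0])   # initial guess
print('init', [1, 1, 1])

L = np.tril(A)     # lower triangle, diagonal included
U = np.triu(A, 1)  # strictly upper triangle

for i in range(n):
    # x^(k+1) = L^(-1) (b - U x^(k))
    x = np.linalg.solve(L, b - U.dot(x))
    print('%03d' % i, x)

print('Sol', x)
print('Act', np.linalg.solve(A, b))  # direct solve, for comparison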

In the Python program above, ‘n’ is the number of iterations, ‘A’ is the matrix and ‘b’ is the right-hand side of Ax = b, and ‘x’ is the vector we are attempting to solve for (starting from an initial guess).

Output (each iteration included):

init [1, 1, 1]

000 [ 0.5         0.16666667  0.52777778]
001 [ 0.20138889 -0.24768519  0.61612654]
002 [-0.02787423 -0.26520705  0.58375664]
003 [-0.02854268 -0.2870098   0.59091282]
004 [-0.0412331  -0.28646916  0.58861753]
005 [-0.04038896 -0.28771796  0.58917449]
006 [-0.04115261 -0.28760121  0.5890083 ]
007 [-0.04105268 -0.28767869  0.58905078]
008 [-0.04110204 -0.28766682  0.5890386 ]
009 [-0.04109306 -0.28767195  0.58904181]
010 [-0.04109643 -0.28767094  0.58904091]
011 [-0.0410957  -0.28767129  0.58904115]
012 [-0.04109593 -0.28767121  0.58904108]
013 [-0.04109588 -0.28767124  0.5890411 ]
014 [-0.04109589 -0.28767123  0.58904109]
015 [-0.04109589 -0.28767123  0.5890411 ]
016 [-0.04109589 -0.28767123  0.5890411 ]
017 [-0.04109589 -0.28767123  0.5890411 ]
018 [-0.04109589 -0.28767123  0.5890411 ]
019 [-0.04109589 -0.28767123  0.5890411 ]

Sol [-0.04109589 -0.28767123  0.5890411 ]
Act [-0.04109589 -0.28767123  0.5890411 ]

As you can see, by iteration 15 the iterative solution had stabilized and matched the directly computed actual solution! You can find the complete code for the Gauss-Seidel method as well as the Jacobi method on my github.
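
In practice you would usually stop once successive iterates stop changing rather than fixing the sweep count up front. Here is a minimal, self-contained sketch of that idea (the function name, tolerance, and sweep cap are all hypothetical, and A and b are the same assumed values as above):

import numpy as np

def gauss_seidel(A, b, x0, tol=1e-8, max_sweeps=100):
    # Iterate until successive sweeps agree to within tol (infinity norm)
    L, U = np.tril(A), np.triu(A, 1)
    x = np.asarray(x0, dtype=float)
    for i in range(max_sweeps):
        x_new = np.linalg.solve(L, b - U.dot(x))
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new, i + 1
        x = x_new
    return x, max_sweeps

A = np.array([[4, -2, 1], [-2, 6, -4], [-1, 2, 6]], dtype=float)
b = np.array([1, -4, 3], dtype=float)
x, sweeps = gauss_seidel(A, b, [1, 1, 1])
print(sweeps, x)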

Related Articles

Jacobi Method
Introduction to Markov Processes
Introduction to Monte Carlo Methods
Quadrature – Numerical Integration Comparison
Gaussian Quadrature
