To compare the performance of the SOR, AOR, and SSOR methods with that of the newly developed OSOR and OSSOR methods, several linear problems were tested.
5.1. Example 1
For this problem,
is found to be
With the initial guess
and under
, SOR converges within 27 iterations, as shown in
Figure 1a. The maximum error (ME) is
.
By using the OSOR with the optimal value
w in Equation (
85), it converges within 26 iterations, as shown in
Figure 1a; ME =
is obtained. In
Figure 1a, the residuals obtained by SOR and OSOR with the optimal value
w in Equation (
85) are compared.
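For reference, the classical SOR iteration compared here can be sketched as follows. This is a minimal sketch: since the paper's test matrix is not reproduced in this excerpt, a small symmetric positive definite system is used as a hypothetical stand-in, and the tolerance and iteration cap are illustrative choices.

```python
import numpy as np

def sor(A, b, x0, omega, tol=1e-12, max_iter=500):
    """Classical successive over-relaxation (SOR) iteration."""
    x = np.asarray(x0, dtype=float).copy()
    n = len(b)
    for k in range(1, max_iter + 1):
        for i in range(n):
            # Gauss-Seidel update blended with the old value by omega
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            return x, k  # converged after k sweeps
    return x, max_iter

# Hypothetical SPD test system (not the matrix of Example 1)
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([2.0, 4.0, 10.0])
x, steps = sor(A, b, np.zeros(3), omega=1.1)
```

The returned step count plays the role of NS in the tables, and the residual-based stopping rule mirrors the convergence criteria used in the examples.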
It is well known that, for SOR, the relaxation parameter must satisfy 0 < w < 2
for the stability of the iterative algorithm. When we take
, SOR is unstable, as shown in
Table 1; however, as shown in
Figure 1b, the spectral radius of OSOR is smaller than one at each step.
Table 1 reveals that OSOR can converge with 35 steps.
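The stability claim for SOR can be checked numerically by computing the spectral radius of its iteration matrix for a given relaxation parameter. The sketch below again uses a hypothetical SPD matrix rather than the paper's example; by Kahan's bound the radius is at least |w - 1|, so any w outside (0, 2) is unstable.

```python
import numpy as np

def sor_radius(A, omega):
    """Spectral radius of T = (D - omega*L)^{-1} ((1-omega)*D + omega*U),
    the SOR iteration matrix for the splitting A = D - L - U."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)   # strictly lower part (sign convention A = D - L - U)
    U = -np.triu(A, 1)    # strictly upper part
    T = np.linalg.solve(D - omega * L, (1 - omega) * D + omega * U)
    return max(abs(np.linalg.eigvals(T)))

# Hypothetical SPD matrix (not the paper's example)
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
radii = {omega: sor_radius(A, omega) for omega in (0.5, 1.0, 1.5, 2.5)}
# radii below one indicate convergence; at omega = 2.5 the radius
# exceeds |2.5 - 1| = 1.5, so SOR diverges there
```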
To further test the stability of SOR and OSOR, we take
. In
Figure 2a, we can see that SOR blows up very rapidly. Its values of
, as shown in
Figure 2b, also tend toward large negative values. In contrast, the OSOR method, as a stabilization of SOR, can obtain an accurate solution with ME =
within 46 steps. The values of
are kept constant, as seen in
Figure 2b.
As shown in
Figure 1b, the step-by-step spectral radii oscillate around one when
is negative. The total contraction rate defined in Equation (
64) is 0.6528, which is smaller than one; hence, the OSOR iteration is convergent.
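The point that oscillatory step-by-step radii can still yield overall contraction can be illustrated numerically. The factors below are purely illustrative values, not the data behind Figure 1b or Equation (64): what matters is that the product of the per-step amplification factors stays below one.

```python
import numpy as np

# Hypothetical per-step amplification factors oscillating around one
# (illustrative values only, not the paper's data)
factors = np.array([1.2, 0.7, 1.1, 0.6, 1.15, 0.65])

total = np.prod(factors)              # overall error amplification
rate = total ** (1.0 / len(factors))  # average contraction per step
# total < 1: the iteration contracts overall even though
# individual factors exceed one
```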
Table 1 compares ME and number of steps (NSs) obtained by SOR and OSOR for different values of
w. It can be seen that SOR is sensitive to the values of
w; when
, it requires more than 100 steps to converge, and when
, it blows up. When
, we find that many
in SOR are negative, which causes the slow convergence of SOR, requiring 367 steps. In contrast to SOR, OSOR is very stable and not sensitive to the value of
w. The OSOR method can not only stabilize SOR but also enhance the accuracy. For OSOR,
w can range in a large interval
.
For SSOR, the optimal value like that in Equation (
85) is not available. It is interesting that, when we use the sub-optimal value of
in SSOR obtained from Equation (
37), we can achieve a highly accurate solution within 19 steps, obtaining ME =
.
Table 2 compares the ME and NS obtained by SSOR and OSSOR for different values of
w. The improvement in NS achieved by OSSOR is evident. For SSOR, the range is
. For OSSOR,
w can range in a large interval
.
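SSOR performs one forward and one backward SOR sweep per iteration. A minimal sketch, again on a hypothetical SPD system rather than the paper's example:

```python
import numpy as np

def ssor(A, b, x0, omega, tol=1e-12, max_iter=500):
    """Symmetric SOR: forward then backward SOR sweep in each step."""
    x = np.asarray(x0, dtype=float).copy()
    n = len(b)
    for k in range(1, max_iter + 1):
        for i in range(n):                     # forward sweep
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
        for i in range(n - 1, -1, -1):         # backward sweep
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            return x, k
    return x, max_iter

# Hypothetical SPD test system (not the paper's example)
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([2.0, 4.0, 10.0])
x, steps = ssor(A, b, np.zeros(3), omega=1.2)
```

The symmetrized sweeps make the iteration matrix similar to a symmetric one for SPD systems, which is why SSOR typically tolerates a wider range of w than plain SOR.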
We take
.
Table 3 compares NS and ME obtained by using the optimal value of
for SOR, and the sub-optimal values of
w obtained from Equation (
37) for SOR, and from Equation (
67) for OSOR.
is obtained for SOR;
is obtained for OSOR.
As an application of Theorem 2, we can apply AOR to stabilize SOR when it is unstable with
, as shown in
Table 1.
From Equation (
27), the iteration matrix for AOR is given by
Unlike the iteration matrix of the OSOR method, the above is a constant matrix.
For the stability of AOR, the spectral radius
must be smaller than one. We plot the stability region in
Figure 3. When SOR, corresponding to
, is unstable with
, AOR can be stabilized by taking a suitable value of
, such that
lies inside the stable region.
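The stability region can be scanned numerically by evaluating the spectral radius of the AOR iteration matrix over a parameter grid. This sketch assumes the standard two-parameter AOR splitting (with SOR recovered when the two parameters coincide, and Gauss–Seidel at r = w = 1); the matrix is a hypothetical stand-in for the one in Equation (27).

```python
import numpy as np

def aor_radius(A, r, w):
    """Spectral radius of the AOR iteration matrix
    T = (D - r*L)^{-1} ((1-w)*D + (w-r)*L + w*U), with A = D - L - U."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)
    T = np.linalg.solve(D - r * L, (1 - w) * D + (w - r) * L + w * U)
    return max(abs(np.linalg.eigvals(T)))

# Hypothetical SPD matrix (not the paper's example)
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])

# Parameter pairs with radius < 1 form the stable region
stable = [(r, w)
          for r in np.linspace(0.0, 1.8, 10)
          for w in np.linspace(0.1, 1.9, 10)
          if aor_radius(A, r, w) < 1]
```

Plotting the pairs in `stable` reproduces, for this stand-in matrix, the kind of stability region shown in Figure 3.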
In
Table 5, we fix
and
, but use different values of
in AOR to compute the solution.
5.3. Example 3
Consider an example of Equation (
1) with
. Suppose that
, and exact solutions of
are unknown. The initial values are
.
With
, under
, OSOR and OSSOR using GSSA at each step converge within seven and six steps, respectively, as shown in
Figure 4. However, neither SOR nor SSOR converges.
For the stability of AOR, the spectral radius
must be smaller than one. We plot the stability region in
Figure 5. Although we take
and
in the stable region in
Figure 4, SOR is divergent under
. Therefore, we take the convergence criterion for the relative residual
. As shown in
Figure 6, SOR is convergent with six steps, and the AOR with
and
is convergent with four steps. However, the residual errors 1.287 and 1.685 indicate that the solutions obtained by SOR and AOR are incorrect.
If
is taken as the convergence criterion for the relative residuals, then SOR and AOR are divergent again, as shown in
Figure 6. In contrast, as shown in
Table 8, OSOR and OSSOR can compute accurate solutions very quickly.
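Because a stopping rule on the relative residual can fire even when the iterate is far from the true solution, it is worth checking the residual of the returned iterate explicitly, as done above with the values 1.287 and 1.685. A minimal helper, on a hypothetical system:

```python
import numpy as np

def relative_residual(A, b, x):
    """Relative residual ||b - A x|| / ||b||; values near one signal
    that the iterate does not actually solve the system."""
    return np.linalg.norm(b - A @ x) / np.linalg.norm(b)

# Hypothetical 2x2 system used only for illustration
A = np.array([[4.0, -1.0],
              [-1.0, 4.0]])
b = np.array([1.0, 2.0])
x_exact = np.linalg.solve(A, b)
x_bad = np.zeros(2)   # a "converged" but wrong iterate
```

Calling `relative_residual(A, b, x_bad)` returns a value of order one, flagging the spurious answer, while the exact solution gives a residual at machine precision.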
5.4. Example 4
We consider a linear system in Equation (
1) with [
28]
By using GSSA, the sub-optimal value for OSOR is found.
The initial values are
, and under
for the relative residual, OSOR converges within 40 steps. In
Table 9, ME and NS are compared to those obtained in [
28] by the RSORJ and RJSOR methods, which use two-step techniques; the optimal value
is used.
In
Table 10, the NSs obtained by OSOR and OSSOR are compared to those obtained in [
36] by using the algorithms of Gauss–Seidel (GS), refinement of Gauss–Seidel (RGS), second refinement of Gauss–Seidel (SRGS), and third refinement of Gauss–Seidel (TRGS).