Article

Image Noise Reduction and Solution of Unconstrained Minimization Problems via New Conjugate Gradient Methods

by
Bassim A. Hassan
1,
Issam A. R. Moghrabi
2,3,*,
Thaair A. Ameen
4,
Ranen M. Sulaiman
1 and
Ibrahim Mohammed Sulaiman
5,6
1
College of Computer Science and Mathematics, University of Mosul, Mosul 41002, Iraq
2
Department of Computer Science, University of Central Asia Naryn, Naryn 722918, Kyrgyzstan
3
Department of Information Systems and Technology, Kuwait Technical College, Kuwait City 32060, Kuwait
4
Mosul University Presidency, University of Mosul, Mosul 41002, Iraq
5
Faculty of Education and Arts, Sohar University, Sohar 311, Oman
6
Institute of Strategic Industrial Decision Modelling, School of Quantitative Sciences, Universiti Utara Malaysia, Sintok 06010, Malaysia
*
Author to whom correspondence should be addressed.
Mathematics 2024, 12(17), 2754; https://doi.org/10.3390/math12172754
Submission received: 5 July 2024 / Revised: 4 August 2024 / Accepted: 18 August 2024 / Published: 5 September 2024
(This article belongs to the Special Issue Mathematical Modeling, Optimization and Machine Learning, 2nd Edition)

Abstract
The conjugate gradient (CG) directions are among the important components of CG algorithms. These directions have proven effective in many applications, particularly in image processing, owing to their low memory requirements. In this study, we derive a new conjugate gradient coefficient based on the classical quadratic model. The derived algorithm is distinguished by its global convergence and descent properties, ensuring robust performance across diverse scenarios. Extensive numerical testing on image restoration and unconstrained optimization problems has demonstrated that the new formulas significantly outperform existing methods. Specifically, the proposed conjugate gradient scheme shows superior performance compared with the traditional Fletcher–Reeves (FR) conjugate gradient method. This advancement not only enhances computational efficiency on unconstrained optimization problems, but also improves the accuracy and quality of image restoration, making it a highly valuable tool in the field of computational imaging and optimization.

1. Introduction

Many real-world applications involve nonlinear optimization problems, particularly problems of very large dimensions, and thus require first-order schemes to obtain their solutions. Conjugate gradient (CG) methods, the most popular first-order approaches, have been widely demonstrated to be useful in handling challenging unconstrained and constrained problems, such as those arising in image processing. This is due to the efficiency of the CG algorithm in handling the sparse, large-scale systems that are common in such tasks. These methods are iterative and thus allow for early termination to save time, and they can be adapted to nonlinear functions, making the resulting algorithms versatile. In addition, CG schemes efficiently handle minimization problems with regularization, scale well with problem size, and can exploit parallel processing capabilities, improving their efficacy and speed for high-resolution image processing tasks.
In [1], the authors propose a two-phase strategy that combines the advantages of the adaptive median filter with the variational method in a single approach. The adaptive median filter [2] is used in the initial phase to detect salt-and-pepper noise. In such methods, $X$ represents the true image, $A = \{1, 2, \ldots, M\} \times \{1, 2, \ldots, N\}$ represents the index set of $X$, and $\mathcal{N} \subset A$ denotes the set of indices of the noisy pixels detected during the first phase. The problem is then to find an effective method for minimizing the following functional:
$$f_\alpha(x) = \sum_{(i,j)\in\mathcal{N}} \left[ \left| x_{i,j} - y_{i,j} \right| + \frac{\zeta}{2} \left( 2 \times S_{i,j}^1 + S_{i,j}^2 \right) \right],$$
where $\zeta$ is the regularization parameter, $\phi_\alpha(t) = \sqrt{\alpha + t^2}$, $\alpha > 0$, is the edge-preserving potential function, and
$$S_{i,j}^1 = \sum_{(m,n)\in P_{i,j}\cap \mathcal{N}^c} \phi_\alpha\!\left(x_{i,j} - y_{m,n}\right), \qquad S_{i,j}^2 = \sum_{(m,n)\in P_{i,j}\cap \mathcal{N}} \phi_\alpha\!\left(x_{i,j} - x_{m,n}\right).$$
Here, $P_{i,j}$ denotes the set of the four closest neighbors of the pixel at location $(i,j) \in A$, $y_{i,j}$ denotes the observed pixel value of the image at position $(i,j)$, and $x = [x_{i,j}]_{(i,j)\in\mathcal{N}}$ is a column vector of length $c = |\mathcal{N}|$ obtained by stacking the unknown noisy-pixel values lexicographically, row after row.
It has been demonstrated in [1,2] that the term $\left| x_{i,j} - y_{i,j} \right|$ in (1) allows for the detection of noisy pixels, but it also introduces a small bias in the restoration of corrupted pixels when used in conjunction with other techniques. Our method detects the set of all noisy pixels in its initial phase, so this term is no longer necessary during the restoration phase, and we remove it from (1). Consequently, we need only minimize the following functional:
$$f_\alpha(x) = \sum_{(i,j)\in\mathcal{N}} \left( 2 \times S_{i,j}^1 + S_{i,j}^2 \right).$$
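To make the minimization target concrete, the functional in (2) can be evaluated as in the following sketch (a minimal illustration only, assuming a grayscale image stored as a NumPy array and the two-phase noise detection of [1]; the names phi_alpha and restoration_objective are ours, not from the paper):

```python
import numpy as np

def phi_alpha(t, alpha=1e-2):
    # Edge-preserving potential phi_alpha(t) = sqrt(alpha + t^2), alpha > 0.
    return np.sqrt(alpha + t ** 2)

def restoration_objective(x, y, noisy_mask, alpha=1e-2):
    """Evaluate the functional in (2).

    y          : observed (noisy) image, shape (M, N)
    noisy_mask : boolean array, True where a pixel was flagged as noisy in phase 1
    x          : current restored values (same shape as y; only entries under
                 noisy_mask are treated as unknowns)
    """
    M, N = y.shape
    total = 0.0
    for i, j in zip(*np.nonzero(noisy_mask)):
        s1 = s2 = 0.0
        # P_{i,j}: the four nearest neighbours of pixel (i, j).
        for m, n in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if 0 <= m < M and 0 <= n < N:
                if noisy_mask[m, n]:            # noisy neighbour -> S^2 term
                    s2 += phi_alpha(x[i, j] - x[m, n], alpha)
                else:                           # clean neighbour -> S^1 term
                    s1 += phi_alpha(x[i, j] - y[m, n], alpha)
        total += 2.0 * s1 + s2
    return total
```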
In this study, we are interested in investigating the performance of a new conjugate gradient method on image restoration and on unconstrained optimization problems of the form:
$$\min f(x), \quad x \in \mathbb{R}^n,$$
where f is continuously differentiable (see [3,4,5]). The CG algorithm generates a sequence of iterative points via [6]:
$$x_{k+1} = x_k + \alpha_k d_k,$$
where $\alpha_k$ is the step length and the search direction $d_{k+1}$ is generated as:
$$d_{k+1} = -g_{k+1} + \beta_k d_k.$$
For further information on the possible choices of the conjugacy coefficient $\beta_k$, see [7,8]. In general, the global convergence characteristics of CG techniques are widely studied. According to [9], the Fletcher–Reeves (FR) method has been identified as having the best convergence results, while the Hestenes–Stiefel (HS) method has been recognized as one of the most efficient CG methods, with good numerical performance but failing to satisfy the global convergence properties under classical line search conditions [10]. The particular choices for the FR and HS methods are:
$$\beta_k^{FR} = \frac{g_{k+1}^T g_{k+1}}{g_k^T g_k}, \qquad \beta_k^{HS} = \frac{y_k^T g_{k+1}}{d_k^T y_k},$$
with $y_k = g_{k+1} - g_k$. The presentations in [11,12,13,14] provide excellent references to contemporary CG approaches that improve on the classical choices in (6). Because the Hestenes–Stiefel formula satisfies the conjugacy criterion, it is particularly appealing to require that (see [15])
$$d_{k+1}^T y_k = 0$$
be satisfied when new methods are developed. The derivations in [16,17] provide an outstanding summary of the evolution of multiple variants of nonlinear conjugate gradient methods, with a specific focus on global convergence properties. Perry [15] relaxed the conjugacy condition (7) to:
$$d_{k+1}^T y_k = -s_k^T g_{k+1},$$
where $s_k = \alpha_k d_k = x_{k+1} - x_k$.
In [17], the Hestenes–Stiefel (HS) method is modified to produce:
$$\beta_k^{WC} = \frac{y_k^T g_{k+1}}{d_k^T y_k} + \frac{2(f_k - f_{k+1}) + g_k^T s_k}{d_k^T y_k}.$$
As revealed by the numerical findings, the Wu and Chen [17] approach outperforms the HS method [10] numerically.
In [18], it is shown that for quadratic functions, the step size is determined exactly as:
$$\alpha_k = -\frac{g_k^T d_k}{d_k^T G d_k},$$
where $G$ denotes the Hessian matrix of $f$.
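As a quick numerical check of (10), the following sketch computes the exact step for a small, arbitrarily chosen quadratic (the test matrix and vectors are ours, purely for illustration):

```python
import numpy as np

# Quadratic f(x) = 0.5 x^T G x - b^T x, with gradient g(x) = G x - b.
G = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])

x = np.zeros(2)
g = G @ x - b
d = -g                               # initial steepest-descent direction

alpha = -(g @ d) / (d @ (G @ d))     # exact step size from (10)
x_new = x + alpha * d

# For a quadratic, this alpha minimizes f along d exactly, so the new
# gradient is orthogonal to d (the printed value is ~0 up to rounding).
print(d @ (G @ x_new - b))
```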
For non-quadratic problems, classical line searches, such as cubic interpolation, are employed to find a step size $\alpha_k$ along a generated search direction. For convergence purposes, $\alpha_k$ is usually required to satisfy the strong Wolfe–Powell (SWP) [19] line search conditions:
$$f(x_k + \alpha_k d_k) \le f(x_k) + \delta \alpha_k g_k^T d_k, \qquad \left| d_k^T g(x_k + \alpha_k d_k) \right| \le \sigma \left| d_k^T g_k \right|,$$
where $0 < \delta < \sigma < 1$ [20,21]. Such conditions are particularly beneficial in examining the convergence properties of CG methods. For more recent studies on CG methods, see [22,23,24,25,26,27].
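For non-quadratic functions, a candidate step can simply be tested against the two conditions in (11); the helper below is a minimal sketch (the function name is ours, and δ = 10⁻⁴, σ = 0.5 are the values used later in Section 4):

```python
import numpy as np

def satisfies_strong_wolfe(f, grad, x, d, alpha, delta=1e-4, sigma=0.5):
    """Check whether the step size alpha satisfies the SWP conditions (11)."""
    gd = grad(x) @ d                  # g_k^T d_k (negative for a descent direction)
    sufficient_decrease = f(x + alpha * d) <= f(x) + delta * alpha * gd
    curvature = abs(grad(x + alpha * d) @ d) <= sigma * abs(gd)
    return sufficient_decrease and curvature
```

On the small quadratic example above, for instance, the exact step computed from (10) passes both tests.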
Next, a new CG conjugacy parameter is developed by employing a quadratic model. This is followed by an analysis of the new method's convergence properties. The new derivation aims to improve the numerical behavior of CG methods.

2. Deriving the New Parameter Based on the Quadratic Model

In this section, we present the derivation of the new conjugate gradient formulas. The algorithm used for the computational experiments is presented at the end of the section. The motivation for constructing novel conjugate gradient parameters via the quadratic model is to enhance the convergence rate and accuracy of the CG algorithm. By leveraging second-order curvature information in modified formulas, the updated algorithm can provide improved direction and step-length adjustments, leading to more effective minimization, particularly for the large-scale, complex problems in image restoration and unconstrained optimization that require robust optimization procedures. The new CG parameter is derived using the quadratic model:
$$f_{k+1} = f_k + s_k^T g_k + \frac{1}{2} s_k^T Q(x_k) s_k,$$
where the corresponding gradient is given as:
$$g_{k+1} = g_k + Q(x_k) s_k.$$
The second-order curvature is derived from (12) and (13) to obtain:
$$s_k^T y_k = 2 s_k^T Q(x_k) s_k + 2 (f_{k+1} - f_k).$$
Using the above equation, together with the exact-step relation $s_k^T Q(x_k) s_k = -s_k^T g_k$ implied by (10), we obtain:
$$s_k^T Q(x_k) s_k = \frac{2 (s_k^T g_k)^2}{s_k^T y_k + 2(f_k - f_{k+1})} = \omega_k \, s_k^T y_k,$$
where
$$\omega_k^{BT1} = \frac{2 (s_k^T g_k)^2}{(s_k^T y_k)\left(s_k^T y_k + 2(f_k - f_{k+1})\right)}.$$
Substituting $s_k^T g_{k+1} = s_k^T g_k + s_k^T Q(x_k) s_k$ from (13), together with (15), into Perry's condition (8), we obtain:
$$d_{k+1}^T y_k = -\omega_k \, s_k^T y_k - s_k^T g_k.$$
Since $d_{k+1} = -g_{k+1} + \beta_k s_k$, this implies that:
$$\beta_k \, s_k^T y_k = g_{k+1}^T y_k - \omega_k \, s_k^T y_k - s_k^T g_k,$$
which yields:
$$\beta_k = \frac{g_{k+1}^T y_k}{s_k^T y_k} - \frac{\omega_k \, s_k^T y_k}{s_k^T y_k} - \frac{s_k^T g_k}{s_k^T y_k}.$$
Additionally, under an exact line search we have $s_k^T g_{k+1} = 0$, so that $s_k^T y_k = -s_k^T g_k = \alpha_k g_k^T g_k$, and (16) leads to two further suggested expressions:
$$\omega_k^{BT2} = \frac{2 (s_k^T g_k)^2}{(s_k^T y_k)\left(2(f_k - f_{k+1}) - s_k^T g_k\right)}$$
and
$$\omega_k^{BT3} = \frac{2 (s_k^T g_k)^2}{(s_k^T y_k)\left(2(f_k - f_{k+1}) + \alpha_k g_k^T g_k\right)}.$$
We refer to the three alternatives for $\omega_k$ given in (16), (20), and (21) as BT1, BT2, and BT3, respectively. Introducing multiple forms of $\omega_k$ offers flexibility in selecting the most suitable parameter for different problem contexts, improving the approach's performance and adaptability.
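The three parameter choices and the resulting $\beta_k$ in (19) translate directly into code; the sketch below follows our reading of (16), (19), (20), and (21) (the arguments are the vectors and function values defined above; the names omega_bt and beta_bt are ours):

```python
def omega_bt(variant, s, g, y, f_old, f_new, alpha=None):
    """omega_k for variant 'BT1' (16), 'BT2' (20), or 'BT3' (21)."""
    sg, sy = s @ g, s @ y
    df = 2.0 * (f_old - f_new)                 # 2 (f_k - f_{k+1})
    if variant == "BT1":
        denom = sy * (sy + df)
    elif variant == "BT2":
        denom = sy * (df - sg)
    else:                                      # 'BT3' uses the step size alpha_k
        denom = sy * (df + alpha * (g @ g))
    return 2.0 * sg ** 2 / denom

def beta_bt(s, g, g_new, y, omega):
    """Conjugacy parameter beta_k from (19)."""
    sy = s @ y
    return (g_new @ y) / sy - omega - (s @ g) / sy
```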
The algorithmic steps (Algorithm 1) for the derived method are summarized as:
Algorithm 1. The new conjugate gradient algorithm.
Initialization. Given $x_0 \in \mathbb{R}^n$, $\delta \in (0,1)$, $\sigma \in (\delta, 1)$; set $d_0 = -g_0$ and $k = 0$.
Stage 1. If $\|g_k\| \le \varepsilon$, then stop.
Stage 2. Find $\alpha_k$ by (10) or (11).
Stage 3. Let $x_{k+1} = x_k + \alpha_k d_k$, and compute $\beta_k$ by (19) with $\omega_k$ given by (16), (20), or (21).
Stage 4. Compute $d_{k+1} = -g_{k+1} + \beta_k d_k$.
Stage 5. Set $k = k + 1$ and go to Stage 1.
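Putting the pieces together, Algorithm 1 can be sketched as follows (an illustrative implementation under our own naming, reusing omega_bt and beta_bt from the previous sketch and SciPy's strong-Wolfe line search in place of Stage 2; it is not the authors' code):

```python
import numpy as np
from scipy.optimize import line_search   # returns a step satisfying the (strong) Wolfe conditions

def bt_conjugate_gradient(f, grad, x0, variant="BT1", eps=1e-6, max_iter=2000):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) <= eps:                                   # Stage 1
            break
        alpha = line_search(f, grad, x, d, gfk=g, c1=1e-4, c2=0.5)[0]  # Stage 2
        if alpha is None:                                              # line search failure
            break
        x_new = x + alpha * d                                          # Stage 3
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        omega = omega_bt(variant, s, g, y, f(x), f(x_new), alpha)
        beta = beta_bt(s, g, g_new, y, omega)
        d = -g_new + beta * d                                          # Stage 4
        x, g = x_new, g_new                                            # Stage 5
    return x
```

For example, calling bt_conjugate_gradient(f, grad, x0, variant="BT3") runs the BT3 variant on a given objective and gradient.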

3. Convergence Analysis for Uniformly Convex Functions

The global convergence analysis for the derived methods is considered in this section. The following assumptions are needed:
Assumption 1.
The objective function $f(x)$ is bounded below on $\mathbb{R}^n$, and the level set $\Psi = \{x \in \mathbb{R}^n : f(x) \le f(x_0)\}$ is bounded.
Assumption 2.
The gradient $g$ is Lipschitz continuous; i.e., there exists a non-negative constant $L$ such that
$$\|g(u) - g(w)\| \le L \|u - w\|, \quad \forall u, w \in \mathbb{R}^n.$$
Under these assumptions on the objective function, there exists a constant $\Gamma \ge 0$ such that $\|g(x)\| \le \Gamma$ for all $x \in \Psi$. More details can be found in [28,29].
We start by proving the descent property for the new algorithm in the following theorem.
Theorem 1.
Let $\{x_k\}$ and $\{d_k\}$ be generated by (4) and (5), with $\beta_k$ given by (19) and $\omega_k$ chosen from (16), (20), or (21); then $d_{k+1}$ is a descent direction.
Proof of Theorem 1.
Since $d_0 = -g_0$, we obtain $g_0^T d_0 = -\|g_0\|^2 \le 0$. Multiplying (5) by $g_{k+1}^T$ and using $\beta_k$ from (19), we have:
$$d_{k+1}^T g_{k+1} \le -\|g_{k+1}\|^2 + \frac{g_{k+1}^T y_k}{s_k^T y_k} s_k^T g_{k+1} - \frac{s_k^T g_{k+1}}{s_k^T y_k} s_k^T g_{k+1}.$$
By the Lipschitz condition:
$$y_k^T g_{k+1} \le L \, s_k^T g_{k+1}.$$
Combining (23) with (24), we obtain:
$$d_{k+1}^T g_{k+1} \le -\|g_{k+1}\|^2 + \frac{L \, s_k^T g_{k+1}}{s_k^T y_k} s_k^T g_{k+1} - \frac{s_k^T g_{k+1}}{s_k^T y_k} s_k^T g_{k+1}.$$
Hence, (25) yields:
$$d_{k+1}^T g_{k+1} \le -\|g_{k+1}\|^2 + (L - 1) \frac{(s_k^T g_{k+1})^2}{s_k^T y_k}.$$
Next, from (26), we have:
$$d_{k+1}^T g_{k+1} \le -\|g_{k+1}\|^2 < 0.$$
Hence, the generated directions are descent directions. □
To show that the new method converges globally, we employ the following lemma, which was proven in [19,20].
Lemma 1.
If positive constants $m$ and $M$ exist such that, for all $k \ge 0$,
$$\frac{\delta_k^T \gamma_k}{\|\delta_k\|^2} \ge m$$
and
$$\frac{\|\gamma_k\|^2}{\delta_k^T \gamma_k} \le M,$$
then, for any positive integer $t$, the inequality (44) holds for at least $t/2$ values of $k \in \{1, 2, \ldots, t\}$.
Using the conditions of Lemma 1, we can prove the following result.
Theorem 2.
Assume that $f$ is a uniformly convex function on $\mathbb{R}^n$; i.e., there exists a constant $\mu > 0$ such that
$$\left( g(u) - g(w) \right)^T (u - w) \ge \mu \|u - w\|^2, \quad \forall u, w \in \mathbb{R}^n.$$
If the conditions in Lemma 1 hold, then:
$$\lim_{k \to \infty} \inf \|g_{k+1}\| = 0.$$
Proof of Theorem 2:
The proof is similar to the ones in [20,21]. □

4. Numerical Results

In this section, numerical data are presented to demonstrate the efficiency of the BT1, BT2, and BT3 algorithms in reducing salt-and-pepper impulse noise by minimizing the functional in (2), and in solving unconstrained optimization problems. The parameters chosen for the line search (11) in the BT1, BT2, and BT3 procedures are $\delta = 0.0001$ and $\sigma = 0.5$. All simulations are conducted on a PC with MATLAB 2015a. The BT1, BT2, and BT3 techniques are compared to the FR method in terms of performance efficiency. It is important to emphasize that the speed at which the decrease in (2) is obtained is the primary focus of this research. The peak signal-to-noise ratio (PSNR) is used to evaluate the quality of the recovered image:
$$\mathrm{PSNR} = 10 \log_{10} \frac{255^2}{\frac{1}{MN}\sum_{i,j}\left(u_{i,j}^r - u_{i,j}^*\right)^2},$$
where $u_{i,j}^r$ and $u_{i,j}^*$ denote the pixel values of the restored image and the original image, respectively, and $M \times N$ is the image size. The stopping criteria for all techniques are as follows:
$$\frac{\left| f(u_k) - f(u_{k-1}) \right|}{\left| f(u_k) \right|} \le 10^{-4} \quad \text{and} \quad \|\nabla f(u_k)\| \le 10^{-4} \left( 1 + \left| f(u_k) \right| \right).$$
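For reference, the PSNR measure and the stopping test above can be coded as follows (a minimal sketch assuming 8-bit images with peak value 255; the function names are ours):

```python
import numpy as np

def psnr(restored, original):
    """Peak signal-to-noise ratio (in dB) between the restored and original images."""
    mse = np.mean((restored.astype(float) - original.astype(float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

def should_stop(f_curr, f_prev, grad_norm, tol=1e-4):
    """Stopping test: relative decrease of f and a scaled gradient norm."""
    relative_decrease = abs(f_curr - f_prev) <= tol * abs(f_curr)
    small_gradient = grad_norm <= tol * (1.0 + abs(f_curr))
    return relative_decrease and small_gradient
```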
Table 1 reports the computed peak signal-to-noise ratio (PSNR), in addition to the iteration count (NI) and number of function evaluations (NF), for each of the tested methods in comparison with the standard FR method.
As demonstrated in Table 1, the BT1, BT2, and BT3 methods are more efficient, requiring fewer iterations and function evaluations than the FR method. Furthermore, the PSNR values generated by all three new approaches are superior. The restoration results achieved using the FR, BT1, BT2, and BT3 algorithms are shown in Figure 1, Figure 2, Figure 3 and Figure 4. These findings demonstrate that the recommended image-restoration procedures BT1, BT2, and BT3 are reliable and effective.
Next, the numerical performance of the proposed methods was evaluated on a total of twenty-three unconstrained optimization problems under the strong Wolfe line search. Three metrics, the number of iterations (NOI), the number of function evaluations (NOF), and the CPU time (CPUT), are used to measure the efficiency of all algorithms. Each run is terminated when:
  • the gradient condition $\|g_{k+1}\| \le 10^{-6}$ is satisfied, or
  • the number of iterations exceeds 2000.
Table 2 presents the detailed performance of all the algorithms. The symbol (***) denotes a run in which an algorithm fails to satisfy the above conditions, and the symbol (##) denotes a failure to converge to a solution.
These results are further analyzed using the performance profile tool introduced by Dolan and Moré [30]. Figures 5 and 6 present visual illustrations of the results in Table 2.
Figures 5 and 6 plot, for each method, the fraction $P(\tau)$ of problems on which that method is within a factor $\tau$ of the best performance. The value of each curve at the left end of the horizontal axis indicates the percentage of test problems on which the corresponding method is fastest.
The figures indicate that the proposed algorithms outperform the classical FR algorithm with respect to the number of iterations and function evaluations. This shows that the new methods outrank the FR method and thus compete favorably with the existing approach.
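Performance profiles such as those in Figures 5 and 6 can be generated from the raw counts in Table 2 with a few lines of code (a generic sketch of the Dolan–Moré construction [30], not the authors' plotting script; failed runs are marked with infinity):

```python
import numpy as np
import matplotlib.pyplot as plt

def performance_profile(costs, labels, tau_max=10.0):
    """costs: array of shape (n_problems, n_solvers); np.inf marks a failed run."""
    costs = np.asarray(costs, dtype=float)
    best = np.min(costs, axis=1, keepdims=True)        # best cost per problem
    ratios = costs / best                              # performance ratios r_{p,s}
    taus = np.linspace(1.0, tau_max, 200)
    for s, label in enumerate(labels):
        # P(tau): fraction of problems solved within a factor tau of the best solver.
        p = [np.mean(ratios[:, s] <= tau) for tau in taus]
        plt.plot(taus, p, label=label)
    plt.xlabel("tau")
    plt.ylabel("P(tau)")
    plt.legend()
    plt.show()
```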

5. Conclusions

In conclusion, we have developed a new modified conjugate gradient formula and introduced three new conjugate gradient methods, BT1, BT2, and BT3, which offer different choices for the conjugacy parameter. The new choices are designed to enhance image processing tasks, particularly image restoration applications. By employing the Wolfe line search conditions, we established the global convergence properties of these new methods. Our comprehensive simulation studies demonstrated that the BT1, BT2, and BT3 methods significantly reduce the number of iterations and function evaluations required, thus improving computational efficiency on unconstrained optimization problems. Moreover, these methods were shown to effectively restore image quality, surpassing the performance of the traditional conjugate gradient method.
The results highlight the potential of the BT1, BT2, and BT3 methods to advance the field of image processing. Their ability to achieve higher accuracy with less computational effort makes them valuable tools for practitioners and researchers alike. Future work will focus on further optimizing these methods and exploring their application to a broader range of image processing challenges.

Author Contributions

Conceptualization, B.A.H. and I.A.R.M.; methodology, I.M.S. and I.A.R.M.; validation, I.M.S. and T.A.A.; writing—original draft, I.A.R.M., R.M.S. and B.A.H.; investigation, I.M.S. and R.M.S. All authors have read and agreed to the published version of the manuscript.

Funding

School of Arts and Sciences, University of Central Asia, Naryn, Kyrgyz Republic.

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Xue, W.; Ren, J.; Zheng, X.; Liu, Z.; Liang, Y. A new DY conjugate gradient method and applications to image denoising. IEICE Trans. Inf. Syst. 2018, 12, 2984–2990. [Google Scholar] [CrossRef]
  2. Yu, G.; Huang, J.; Zhou, Y. A descent spectral conjugate gradient method for impulse noise removal. Appl. Math. Lett. 2010, 23, 555–560. [Google Scholar] [CrossRef]
  3. Huang, T.; Yang, G.; Tang, G. A fast two-dimensional median filtering algorithm. IEEE Trans. Acoust. Speech Signal Process. 1979, 27, 13–18. [Google Scholar] [CrossRef]
  4. Sulaiman, I.M.; Supian, S.; Mamat, M. New class of hybrid conjugate gradient coefficients with guaranteed descent and efficient line search. IOP Conf. Ser. Mater. Sci. Eng. 2019, 621, 012021. [Google Scholar] [CrossRef]
  5. Awwal, A.M.; Yahaya, M.M.; Pakkaranang, N.; Pholasa, N. A New Variant of the Conjugate Descent Method for Solving Unconstrained Optimization Problems and Applications. Mathematics 2024, 12, 2430. [Google Scholar] [CrossRef]
  6. Malik, M.; Abas, S.S.; Mamat, M.; Mohammed, I.S. A new hybrid conjugate gradient method with global convergence properties. Int. J. Adv. Sci. Technol. 2020, 29, 199–210. [Google Scholar]
  7. Hager, W.W.; Zhang, H. A survey of nonlinear conjugate gradient methods. Pac. J. Optim. 2006, 2, 35–58. [Google Scholar]
  8. Hassan, B.A.; Jabbar, H.N.; Laylani, Y.A.; Moghrabi, I.A.R.; Alissa, A.J. An enhanced Fletcher–Reeves-like conjugate gradient method for image restoration. Int. J. Electr. Comput. Eng. 2023, 13, 6268–6276. [Google Scholar] [CrossRef]
  9. Fletcher, R. Function minimization by conjugate gradients. Comput. J. 1964, 7, 149–154. [Google Scholar] [CrossRef]
  10. Hestenes, M.R.; Stiefel, E. Methods of conjugate gradients for solving linear systems. J. Res. Natl. Bur. Stand. 1952, 49, 409–436. [Google Scholar] [CrossRef]
  11. Dai, Y.H.; Yuan, Y. A Nonlinear Conjugate Gradient Method with a Strong Global Convergence Property. SIAM J. Optim. 1999, 10, 177–182. [Google Scholar] [CrossRef]
  12. Fletcher, R. Practical Methods of Optimization; Wiley: Hoboken, NJ, USA, 1987. [Google Scholar]
  13. Liu, Y.; Storey, C. Efficient generalized conjugate gradient algorithms, part 1: Theory. J. Optim. Theory Appl. 1991, 69, 129–137. [Google Scholar] [CrossRef]
  14. Polak, E.; Ribière, G. Note sur la convergence de méthodes de directions conjuguées. Rev. Française Informat. Recherche Opérationnelle 1969, 3, 35–43. [Google Scholar]
  15. Perry, A. A Modified Conjugate Gradient Algorithm. Oper. Res. 1978, 26, 1073–1078. [Google Scholar] [CrossRef]
  16. Moghrabi, I.A.R. A new scaled secant-type conjugate gradient algorithm. In Proceedings of the 2017 European Conference on Electrical Engineering and Computer Science, EECS 2017, Bern, Switzerland, 17–19 November 2017. [Google Scholar] [CrossRef]
  17. Wu, C.; Chen, G. New type of conjugate gradient algorithms for unconstrained optimization problems. J. Syst. Eng. Electron. 2010, 21, 1000–1007. [Google Scholar] [CrossRef]
  18. Nocedal, J.; Wright, S.J. Numerical Optimization-Springer Series in Operations Research; Springer: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
  19. Wolfe, P. Convergence conditions for ascent methods. II: Some corrections. SIAM Rev. 1971, 13, 185–188. [Google Scholar] [CrossRef]
  20. Hassan, B.A.; Moghrabi, I.A.R. A modified secant equation quasi-Newton method for unconstrained optimization. J. Appl. Math. Comput. 2023, 69, 451–464. [Google Scholar] [CrossRef]
  21. Dai, Y.; Han, J.; Liu, G.; Sun, D.; Yin, H.; Yuan, Y.X. Convergence Properties of Nonlinear Conjugate Gradient Methods. SIAM J. Optim. 2000, 10, 345–358. [Google Scholar] [CrossRef]
  22. Ibrahim, S.M.; Salihu, N. Two sufficient descent spectral conjugate gradient algorithms for unconstrained optimization with application. Optim. Eng. 2024, 31, 1–26. [Google Scholar] [CrossRef]
  23. Hassan, B.A.; Taha, M.W.; Kadoo, F.H.; Mohammed, S.I. A new modification into Quasi-Newton equation for solving unconstrained optimization problems. In AIP Conference Proceedings; AIP Publishing LLC: Melville, NY, USA, 2022; Volume 2394. [Google Scholar]
  24. Salihu, N.; Kumam, P.; Sulaiman, I.M.; Arzuka, I.; Kumam, W. An efficient Newton-like conjugate gradient method with restart strategy and its application. Math. Comput. Simul. 2024, 226, 354–372. [Google Scholar] [CrossRef]
  25. Malik, M.; Mamat, M.; Abas, S.S.; Sulaiman, I.M. Performance Analysis of New Spectral and Hybrid Conjugate Gradient Methods for Solving Unconstrained Optimization Problems. IAENG Int. J. Comput. Sci. 2021, 48, 66–79. [Google Scholar]
  26. Salihu, N.; Kumam, P.; Muhammad Yahaya, M.; Seangwattana, T. A revised Liu–Storey Conjugate gradient parameter for unconstrained optimization problems with applications. Eng. Optim. 2024, 1–25. [Google Scholar] [CrossRef]
  27. Ibrahim, A.H.; Rapajić, S.; Kamandi, A.; Kumam, P.; Papp, Z. Relaxed-inertial derivative-free algorithm for systems of nonlinear pseudo-monotone equations. Comput. Appl. Math. 2024, 43, 239. [Google Scholar] [CrossRef]
  28. Hager, W.W.; Zhang, H. A new conjugate gradient method with guaranteed descent and an efficient line search. SIAM J. Optim. 2005, 16, 170–192. [Google Scholar] [CrossRef]
  29. Yuan, G.; Wei, Z.; Lu, X. Global convergence of BFGS and PRP methods under a modified weak Wolfe–Powell line search. Appl. Math. Model. 2018, 47, 811–825. [Google Scholar] [CrossRef]
  30. Dolan, E.D.; Moré, J.J. Benchmarking optimization software with performance profiles. Math. Program. 2002, 91, 201–213. [Google Scholar] [CrossRef]
Figure 1. Restoration results of the FR, BT1, BT2, and BT3 algorithms on the 256 × 256 Lena image.
Figure 2. Restoration results of the FR, BT1, BT2, and BT3 algorithms on the 256 × 256 House image.
Figure 3. Restoration results of the FR, BT1, BT2, and BT3 algorithms on the 256 × 256 Elaine image.
Figure 4. Restoration results of the FR, BT1, BT2, and BT3 algorithms on the 256 × 256 Cameraman image.
Figure 5. Performance profile based on NOI.
Figure 6. Performance profile based on NOF.
Table 1. Numerical results of FR, BT1, BT2, and BT3 algorithms.
Image | r (%) | FR NI | FR NF | FR PSNR (dB) | BT1 NI | BT1 NF | BT1 PSNR (dB) | BT2 NI | BT2 NF | BT2 PSNR (dB) | BT3 NI | BT3 NF | BT3 PSNR (dB)
Le | 50 | 82 | 153 | 30.5529 | 42 | 90 | 30.5077 | 55 | 109 | 30.4726 | 30 | 60 | 30.779
Le | 70 | 81 | 155 | 27.4824 | 45 | 97 | 27.3425 | 56 | 111 | 27.5176 | 53 | 107 | 27.2491
Le | 90 | 108 | 211 | 22.8583 | 53 | 113 | 22.9824 | 58 | 115 | 23.0099 | 54 | 109 | 22.8871
Ho | 50 | 52 | 53 | 30.6845 | 30 | 63 | 35.2072 | 35 | 70 | 34.9453 | 36 | 72 | 35.1792
Ho | 70 | 63 | 116 | 31.2564 | 32 | 66 | 30.9014 | 39 | 78 | 30.7493 | 29 | 58 | 30.9249
Ho | 90 | 111 | 214 | 25.287 | 36 | 74 | 25.1023 | 52 | 103 | 25.267 | 48 | 96 | 25.1356
El | 50 | 35 | 36 | 33.9129 | 24 | 48 | 33.8687 | 30 | 58 | 33.862 | 26 | 51 | 33.9353
El | 70 | 38 | 39 | 31.864 | 17 | 32 | 31.9634 | 30 | 58 | 31.7931 | 34 | 68 | 31.7348
El | 90 | 65 | 114 | 28.2019 | 39 | 80 | 28.2067 | 44 | 86 | 28.0416 | 44 | 88 | 28.1316
c512 | 50 | 59 | 87 | 35.5359 | 28 | 60 | 35.296 | 34 | 69 | 35.862 | 26 | 51 | 35.3528
c512 | 70 | 78 | 142 | 30.6259 | 34 | 72 | 30.6113 | 39 | 79 | 30.6145 | 34 | 68 | 30.6749
c512 | 90 | 121 | 236 | 24.3962 | 47 | 98 | 24.9266 | 50 | 101 | 24.8411 | 44 | 88 | 24.8521
Table 2. Performance comparison based on NOI, NOF, and CPU time.
No. | Function | DIM | Initial | BT1 NOI | BT1 NOF | BT1 CPUT | BT2 NOI | BT2 NOF | BT2 CPUT | BT3 NOI | BT3 NOF | BT3 CPUT | FR NOI | FR NOF | FR CPUT
1 | QUARTICM | 100 | (11,…,11) | 4 | 101 | 0.002562 | *** | *** | *** | *** | *** | *** | 17 | 108 | 0.00045
2 | QUARTICM | 1000 | (11,…,11) | *** | *** | *** | 4 | ## | 0.002876 | 2 | 20 | 0.00248 | 18 | 122 | 0.000572
3 | BIGGSB1 | 2 | (3,3) | 1 | 3 | 0.002401 | 1 | 3 | 0.00294 | 1 | 3 | 0.00303 | 1 | 3 | 0.002224
4 | BIGGSB1 | 2 | (11,11) | 1 | 3 | 0.002209 | 1 | 3 | 0.00192 | 1 | 3 | 0.00164 | 1 | 3 | 0.002617
5 | QUADRATIC QF | 2 | (0.01,0.01) | 2 | 4 | 0.002074 | 2 | 4 | 0.0038 | 2 | 4 | 0.00349 | 2 | 4 | 0.00311
6 | QUARTC | 100 | (11,…,11) | 4 | 101 | 0.010275 | *** | *** | *** | *** | *** | *** | 17 | 108 | 0.011307
7 | EXT PENALTY | 8000 | (1,1,…,1) | *** | *** | *** | *** | *** | *** | 3 | 30 | 0.00287 | *** | *** | ***
8 | DIAGONAL 6 | 1000 | (0.5,…,0.5) | 5 | 33 | 0.017629 | 5 | 32 | 0.02154 | *** | *** | *** | 11 | 12 | 0.003807
9 | DIAGONAL 6 | 10,000 | (0.5,…,0.5) | 5 | 56 | 0.05131 | *** | *** | *** | *** | *** | *** | *** | *** | ***
10 | DIAGONAL 6 | 50,000 | (0.5,…,0.5) | 3 | 33 | 0.13378 | *** | *** | *** | 4 | 58 | 0.23391 | *** | *** | ***
11 | EXT DENSCHNB | 10,000 | (1,1,…,1) | 1 | 3 | 0.001311 | 1 | 3 | 0.00078 | 1 | 3 | 0.00074 | 1 | 3 | 0.00144
12 | EXT DENSCHNB | 50,000 | (1,1,…,1) | 1 | 3 | 0.00241 | 1 | 3 | 0.00282 | 1 | 3 | 0.00293 | 1 | 3 | 0.002363
13 | EXT DENSCHNB | 100,000 | (1,1,…,1) | 1 | 3 | 0.005923 | 1 | 3 | 0.00398 | 1 | 3 | 0.00579 | 1 | 3 | 0.005875
14 | MATYAS | 2 | (1,1) | 6 | 17 | 0.001374 | 6 | 17 | 0.00073 | 6 | 17 | 0.00078 | 7 | 36 | 0.000423
15 | MATYAS | 2 | (0.5,0.5) | 6 | 17 | 0.00052 | 6 | 17 | 0.00083 | 6 | 17 | 0.00059 | 7 | 36 | 0.000422
16 | BRENT | 2 | (11,11) | 1 | 3 | 0.000396 | 1 | 3 | 0.0004 | 1 | 3 | 0.00049 | 1 | 3 | 0.00071
17 | BRENT | 2 | (13,13) | 1 | 3 | 0.000602 | 1 | 3 | 0.00068 | 1 | 3 | 0.00045 | 1 | 3 | 0.000803
18 | BRENT | 2 | (3,3) | 1 | 3 | 0.000679 | 1 | 3 | 0.00037 | 1 | 3 | 0.00054 | 1 | 3 | 0.000375
19 | Rotated Ellipse 2 | 2 | (0.5,-1) | 13 | 21 | 0.001451 | 13 | 21 | 0.00137 | 13 | 21 | 0.00127 | 10 | 17 | 0.000349
20 | Rotated Ellipse 2 | 2 | (5,-5) | 1 | 3 | 0.000438 | 1 | 3 | 0.00044 | 1 | 3 | 0.00056 | 1 | 3 | 0.000501
21 | DIAGONAL 1 | 2 | (1,1) | 13 | 24 | 0.005212 | 12 | 19 | 0.00367 | 12 | 20 | 0.00372 | 15 | 23 | 0.003264
22 | DIAGONAL 2 | 2 | (1,1) | 14 | 19 | 0.003093 | 9 | 12 | 0.00236 | 10 | 13 | 0.00229 | 12 | 13 | 0.00164
23 | Aluffi-Pentini | 2 | (1,1) | 5 | 8 | 0.000786 | 5 | 8 | 0.00073 | 5 | 8 | 0.00119 | 6 | 10 | 0.000516
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

