Numerical Methods for Solving Nonlinear Equations

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Computational and Applied Mathematics".

Deadline for manuscript submissions: closed (31 March 2023) | Viewed by 19211

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editors


Prof. Dr. Maria Isabel Berenguer
Guest Editor
Department of Applied Mathematics, University of Granada, 18071 Granada, Spain
Interests: applied mathematics; numerical analysis; fixed point theory; inverse problems; functional analysis

Prof. Dr. Manuel Ruiz Galán
Guest Editor
Department of Applied Mathematics, University of Granada, 18071 Granada, Spain
Interests: convex analysis; minimax inequalities; numerical analysis

Special Issue Information

Dear Colleagues,

As you know, many problems arising in areas such as medicine, biology, economics, finance, and engineering can be described in terms of nonlinear equations or systems of such equations. These can take many different forms, from algebraic, differential, integral, or integro-differential models to variational inequalities or equilibrium problems, to name only a few. For this reason, nonlinear problems constitute one of the most fruitful fields of study in pure and applied mathematics.

However, direct methods for their effective resolution are usually not available; hence the enormous interest in their numerical treatment.

This Special Issue is intended to collect recent advances in this area. Topics of interest include, but are not limited to, numerical methods for solving:

  • Nonlinear differential equations.
  • Nonlinear integral equations.
  • Nonlinear integro-differential equations.
  • Nonlinear variational equations.
  • Nonlinear optimization problems.
  • Nonlinear control problems.
  • Equilibrium problems.
  • Nonlinear algebraic equations.

In addition, applications of these methods to real-world problems are especially welcome.

Prof. Dr. Maria Isabel Berenguer
Prof. Dr. Manuel Ruiz Galán
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • nonlinear problems
  • numerical methods
  • differential, integral, integro-differential equations
  • optimization problems
  • variational equations
  • control problems
  • equilibrium problems
  • algebraic equations

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (10 papers)


Research

21 pages, 2694 KiB  
Article
Hybrid Newton–Sperm Swarm Optimization Algorithm for Nonlinear Systems
by Obadah Said Solaiman, Rami Sihwail, Hisham Shehadeh, Ishak Hashim and Kamal Alieyan
Mathematics 2023, 11(6), 1473; https://doi.org/10.3390/math11061473 - 17 Mar 2023
Cited by 6 | Viewed by 1819
Abstract
Several problems have been solved by nonlinear equation systems (NESs), including real-life issues in chemistry and neurophysiology. However, the accuracy of solutions is highly dependent on the efficiency of the algorithm used. In this paper, a Modified Sperm Swarm Optimization Algorithm called MSSO is introduced to solve NESs. MSSO combines Newton’s second-order iterative method with the Sperm Swarm Optimization Algorithm (SSO). Through this combination, MSSO’s search mechanism is improved, its convergence rate is accelerated, local optima are avoided, and more accurate solutions are provided. The method overcomes several drawbacks of Newton’s method, such as the initial points’ selection, falling into the trap of local optima, and divergence. In this study, MSSO was evaluated using eight NES benchmarks that are commonly used in the literature, three of which are from real-life applications. Furthermore, MSSO was compared with several well-known optimization algorithms, including the original SSO, Harris Hawk Optimization (HHO), Butterfly Optimization Algorithm (BOA), Ant Lion Optimizer (ALO), Particle Swarm Optimization (PSO), and Equilibrium Optimization (EO). According to the results, MSSO outperformed the compared algorithms across all selected benchmark systems in four aspects: stability, fitness values, best solutions, and convergence speed.
(This article belongs to the Special Issue Numerical Methods for Solving Nonlinear Equations)
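
As a rough illustration of the general idea of coupling a population-based global search with Newton refinement, the following minimal Python sketch solves a small invented 2x2 test system. The simplified "swarm" update, the test functions F and J, and all parameters are assumptions for illustration only; the sketch does not reproduce the MSSO algorithm or the paper's benchmarks.

    import numpy as np

    # Illustrative 2x2 test system F(x, y) = 0 (an assumption, not a benchmark from the paper).
    def F(p):
        x, y = p
        return np.array([x**2 + y**2 - 4.0, np.exp(x) + y - 1.0])

    def J(p):
        x, y = p
        return np.array([[2.0 * x, 2.0 * y], [np.exp(x), 1.0]])

    fitness = lambda p: np.linalg.norm(F(p))
    rng = np.random.default_rng(0)
    pop = rng.uniform(-3.0, 3.0, size=(40, 2))        # random population of candidate solutions

    # Global phase: a crude swarm-style update that pulls candidates toward the
    # current best solution while keeping some random exploration.
    for _ in range(50):
        best = pop[np.argmin([fitness(p) for p in pop])]
        pop = pop + 0.5 * (best - pop) + 0.1 * rng.normal(size=pop.shape)

    # Local phase: polish the best candidate with Newton's method.
    x = pop[np.argmin([fitness(p) for p in pop])]
    for _ in range(20):
        step = np.linalg.solve(J(x), F(x))
        x = x - step
        if np.linalg.norm(step) < 1e-12:
            break
    print(x, F(x))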

13 pages, 528 KiB  
Article
Finding an Efficient Computational Solution for the Bates Partial Integro-Differential Equation Utilizing the RBF-FD Scheme
by Gholamreza Farahmand, Taher Lotfi, Malik Zaka Ullah and Stanford Shateyi
Mathematics 2023, 11(5), 1123; https://doi.org/10.3390/math11051123 - 23 Feb 2023
Cited by 1 | Viewed by 1427
Abstract
This paper proposes a computational solver via the localized radial basis function finite difference (RBF-FD) scheme and the use of graded meshes for solving the time-dependent Bates partial integro-differential equation (PIDE) arising in computational finance. In order to avoid facing a large system of discretization systems, we employ graded meshes along both of the spatial variables, which results in constructing a set of ordinary differential equations (ODEs) of lower sizes. Moreover, an explicit time integrator is used because it can bypass the need to solve the large discretized linear systems in each time level. The stability of the numerical method is discussed in detail based on the eigenvalues of the system matrix. Finally, numerical tests revealed the accuracy and reliability of the presented solver.
(This article belongs to the Special Issue Numerical Methods for Solving Nonlinear Equations)
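
As a minimal sketch of the RBF-FD idea behind such solvers, restricted to one dimension with a Gaussian basis and parameters chosen purely for illustration (not the paper's Bates setting), the weights that approximate a second derivative on a local stencil are obtained by solving a small linear system:

    import numpy as np

    def rbf_fd_weights_d2(stencil, xc, eps=3.0):
        """Gaussian RBF-FD weights approximating u''(xc) from values at the stencil nodes."""
        x = np.asarray(stencil, dtype=float)
        # interpolation (Gram) matrix: phi(|x_i - x_j|) with phi(r) = exp(-(eps*r)^2)
        A = np.exp(-(eps * (x[:, None] - x[None, :])) ** 2)
        # right-hand side: second derivative of each basis function evaluated at xc
        d = xc - x
        b = (4.0 * eps**4 * d**2 - 2.0 * eps**2) * np.exp(-(eps * d) ** 2)
        return np.linalg.solve(A, b)

    # Quick check on u(x) = sin(x): the weighted combination should be close to u''(xc) = -sin(xc).
    xc = 0.3
    stencil = xc + 0.1 * np.arange(-2, 3)             # five equally spaced nodes
    w = rbf_fd_weights_d2(stencil, xc)
    print(w @ np.sin(stencil), -np.sin(xc))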

17 pages, 341 KiB  
Article
Approximation of the Fixed Point of the Product of Two Operators in Banach Algebras with Applications to Some Functional Equations
by Khaled Ben Amara, Maria Isabel Berenguer and Aref Jeribi
Mathematics 2022, 10(22), 4179; https://doi.org/10.3390/math10224179 - 9 Nov 2022
Cited by 1 | Viewed by 1332
Abstract
Making use of the Boyd-Wong fixed point theorem, we establish a new existence and uniqueness result and an approximation process of the fixed point for the product of two nonlinear operators in Banach algebras. This provides an adequate tool for deriving the existence and uniqueness of solutions of two interesting types of nonlinear functional equations in Banach algebras, as well as for developing an approximation method of their solutions. In addition, to illustrate the applicability of our results we give some numerical examples.
(This article belongs to the Special Issue Numerical Methods for Solving Nonlinear Equations)
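
The following toy Python sketch illustrates only the underlying successive-approximation idea for an equation of the form x = A(x)·B(x) in the Banach algebra of continuous functions, discretized on a grid. The operators A and B are invented for the example and their coefficients are small enough to give a plain Banach contraction; the Boyd-Wong setting of the paper is more general.

    import numpy as np

    # Toy equation x(t) = A(x)(t) * B(x)(t) on [0, 1], with invented operators
    #   A(x)(t) = 0.6 + 0.1*sin(x(t)),   B(x)(t) = 1 + 0.2*t*cos(x(t)).
    # For these coefficients the product map is a contraction, so successive
    # approximations converge to its unique fixed point.
    t = np.linspace(0.0, 1.0, 101)
    A = lambda x: 0.6 + 0.1 * np.sin(x)
    B = lambda x: 1.0 + 0.2 * t * np.cos(x)

    x = np.zeros_like(t)
    for k in range(200):
        x_new = A(x) * B(x)
        if np.max(np.abs(x_new - x)) < 1e-12:
            break
        x = x_new
    print(k, x[0], x[-1])   # iterations used and the approximate fixed point at the endpoints
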
13 pages, 440 KiB  
Article
Constructing a Class of Frozen Jacobian Multi-Step Iterative Solvers for Systems of Nonlinear Equations
by R. H. Al-Obaidi and M. T. Darvishi
Mathematics 2022, 10(16), 2952; https://doi.org/10.3390/math10162952 - 16 Aug 2022
Cited by 5 | Viewed by 1557
Abstract
In this paper, in order to solve systems of nonlinear equations, a new class of frozen Jacobian multi-step iterative methods is presented. Our proposed algorithms are characterized by a highly convergent order and an excellent efficiency index. The theoretical analysis is presented in detail. Finally, numerical experiments are presented for showing the performance of the proposed methods, when compared with known algorithms taken from the literature.
(This article belongs to the Special Issue Numerical Methods for Solving Nonlinear Equations)
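
A minimal Python sketch of the frozen-Jacobian principle, applied to a small invented test system rather than to any of the paper's examples: the Jacobian is factorized once per outer iteration and the factorization is reused for several cheap sub-steps.

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    # Invented 2x2 test system F(x, y) = 0 with a solution near (0.618, 0.786).
    def F(v):
        x, y = v
        return np.array([x**2 + y**2 - 1.0, x - y**2])

    def J(v):
        x, y = v
        return np.array([[2.0 * x, 2.0 * y], [1.0, -2.0 * y]])

    def frozen_jacobian_newton(x0, m=3, tol=1e-12, maxit=50):
        """Multi-step Newton: one LU factorization of J per outer step, reused for m sub-steps."""
        x = np.array(x0, dtype=float)
        for _ in range(maxit):
            lu, piv = lu_factor(J(x))            # factor the Jacobian once ("freeze" it)
            y = x
            for _ in range(m):                   # m cheap sub-steps with the frozen factorization
                y = y - lu_solve((lu, piv), F(y))
            if np.linalg.norm(y - x) < tol:
                return y
            x = y
        return x

    print(frozen_jacobian_newton([0.8, 0.6]))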

16 pages, 331 KiB  
Article
A Methodology for Obtaining the Different Convergence Orders of Numerical Method under Weaker Conditions
by Ioannis K. Argyros, Samundra Regmi, Stepan Shakhno and Halyna Yarmola
Mathematics 2022, 10(16), 2931; https://doi.org/10.3390/math10162931 - 14 Aug 2022
Viewed by 1186
Abstract
A process for solving an algebraic equation was presented by Newton in 1669 and later by Raphson in 1690. This technique is called Newton’s method or Newton–Raphson method and is even today a popular technique for solving nonlinear equations in abstract spaces. The objective of this article is to update developments in the convergence of this method. In particular, it is shown that the Kantorovich theory for solving nonlinear equations using Newton’s method can be replaced by a finer one with no additional and even weaker conditions. Moreover, the convergence order two is proven under these conditions. Furthermore, the new ratio of convergence is at least as small. The same methodology can be used to extend the applicability of other numerical methods. Numerical experiments complement this study.
(This article belongs to the Special Issue Numerical Methods for Solving Nonlinear Equations)
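
For reference, a few Newton steps on the cubic x^3 - 2x - 5 = 0 (the equation traditionally used to illustrate the 1669/1690 computations) show the quadratic error decay discussed in the abstract; the code below is just the textbook iteration, not the refined Kantorovich-type analysis of the paper.

    # Newton's method on f(x) = x^3 - 2x - 5; the root is approximately 2.0945515.
    f = lambda x: x**3 - 2.0 * x - 5.0
    df = lambda x: 3.0 * x**2 - 2.0

    x = 2.0
    for k in range(6):
        x_new = x - f(x) / df(x)
        print(k, x_new, abs(x_new - x))   # the size of the correction roughly squares at each step
        x = x_new
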
28 pages, 377 KiB  
Article
Generalized Three-Step Numerical Methods for Solving Equations in Banach Spaces
by Michael I. Argyros, Ioannis K. Argyros, Samundra Regmi and Santhosh George
Mathematics 2022, 10(15), 2621; https://doi.org/10.3390/math10152621 - 27 Jul 2022
Cited by 5 | Viewed by 1432
Abstract
In this article, we propose a new methodology to construct and study generalized three-step numerical methods for solving nonlinear equations in Banach spaces. These methods are very general and include other methods already in the literature as special cases. The convergence analysis of the specialized methods has been given by assuming the existence of high-order derivatives which are not shown in these methods. Therefore, these constraints limit the applicability of the methods to equations involving operators that are sufficiently many times differentiable although the methods may converge. Moreover, the convergence is shown under a different set of conditions. Motivated by the optimization considerations and the above concerns, we present a unified convergence analysis for the generalized numerical methods relying on conditions involving only the operators appearing in the method. This is the novelty of the article. Special cases and examples are presented to conclude this article.
(This article belongs to the Special Issue Numerical Methods for Solving Nonlinear Equations)
14 pages, 504 KiB  
Article
Gradient-Based Optimization Algorithm for Solving Sylvester Matrix Equation
by Juan Zhang and Xiao Luo
Mathematics 2022, 10(7), 1040; https://doi.org/10.3390/math10071040 - 24 Mar 2022
Cited by 2 | Viewed by 2311
Abstract
In this paper, we transform the problem of solving the Sylvester matrix equation into an optimization problem through the Kronecker product primarily. We utilize the adaptive accelerated proximal gradient and Newton accelerated proximal gradient methods to solve the constrained non-convex minimization problem. Their convergent properties are analyzed. Finally, we offer numerical examples to illustrate the effectiveness of the derived algorithms.
(This article belongs to the Special Issue Numerical Methods for Solving Nonlinear Equations)
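
A minimal gradient-descent sketch for the Sylvester equation AX + XB = C, viewed as the minimization of f(X) = 0.5*||AX + XB - C||_F^2, is given below. The crude step-size rule and the small test data are assumptions for illustration and do not reproduce the accelerated proximal gradient schemes of the paper.

    import numpy as np

    def sylvester_gradient_descent(A, B, C, tol=1e-10, maxit=20000):
        """Minimize f(X) = 0.5*||A X + X B - C||_F^2 by plain gradient descent."""
        X = np.zeros_like(C)
        # crude safe step size: 1 / L, with L an upper bound on the Lipschitz constant of grad f
        L = (np.linalg.norm(A, 2) + np.linalg.norm(B, 2)) ** 2
        step = 1.0 / L
        for _ in range(maxit):
            R = A @ X + X @ B - C                 # residual
            if np.linalg.norm(R, 'fro') < tol:
                break
            X -= step * (A.T @ R + R @ B.T)       # gradient of f is A^T R + R B^T
        return X

    # Small invented test: build C from a known X_true and recover it.
    A = np.array([[3.0, 1.0], [0.0, 2.0]])
    B = np.array([[1.0, 0.0], [1.0, 2.0]])
    X_true = np.array([[1.0, -2.0], [0.5, 4.0]])
    C = A @ X_true + X_true @ B
    print(np.linalg.norm(sylvester_gradient_descent(A, B, C) - X_true))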

10 pages, 295 KiB  
Article
An Iterative Algorithm for Approximating the Fixed Point of a Contractive Affine Operator
by María Isabel Berenguer and Manuel Ruiz Galán
Mathematics 2022, 10(7), 1012; https://doi.org/10.3390/math10071012 - 22 Mar 2022
Cited by 1 | Viewed by 1646
Abstract
First of all, in this paper we obtain a perturbed version of the geometric series theorem, which allows us to present an iterative numerical method to approximate the fixed point of a contractive affine operator. This result requires some approximations that we obtain using the projections associated with certain Schauder bases. Next, an algorithm is designed to approximate the solution of Fredholm’s linear integral equation, and we illustrate the behavior of the method with some numerical examples.
(This article belongs to the Special Issue Numerical Methods for Solving Nonlinear Equations)
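
A small Python sketch of the basic principle (successive approximations for an affine contraction, applied to a trapezoidal Nystrom discretization of a Fredholm integral equation of the second kind) is shown below. The kernel, the parameter lam, and the right-hand side are invented for illustration, and none of the Schauder-basis machinery of the paper is used.

    import numpy as np

    # Discretize x(s) = f(s) + lam * \int_0^1 K(s,t) x(t) dt with the trapezoidal rule,
    # giving the affine map x -> T x + f, which is contractive here since ||T|| < 1.
    n = 101
    s = np.linspace(0.0, 1.0, n)
    w = np.full(n, 1.0 / (n - 1)); w[0] *= 0.5; w[-1] *= 0.5    # trapezoidal weights
    lam = 0.8
    K = 0.5 * np.outer(s, s)                                    # invented kernel K(s,t) = 0.5*s*t
    f = np.exp(s)                                               # invented right-hand side
    T = lam * K * w                                             # matrix of the affine operator

    x = np.zeros(n)
    for k in range(500):
        x_new = T @ x + f
        if np.max(np.abs(x_new - x)) < 1e-12:
            break
        x = x_new
    print(k, x[0], x[-1])   # iterations used and the approximate solution at the endpoints
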
19 pages, 329 KiB  
Article
Extrapolation Method for Non-Linear Weakly Singular Volterra Integral Equation with Time Delay
by Li Zhang, Jin Huang, Hu Li and Yifei Wang
Mathematics 2021, 9(16), 1856; https://doi.org/10.3390/math9161856 - 5 Aug 2021
Cited by 2 | Viewed by 2004
Abstract
This paper proposes an extrapolation method to solve a class of non-linear weakly singular kernel Volterra integral equations with vanishing delay. After the existence and uniqueness of the solution to the original equation are proved, we combine an improved trapezoidal quadrature formula with an interpolation technique to obtain an approximate equation, and then we enhance the error accuracy of the approximate solution using the Richardson extrapolation, on the basis of the asymptotic error expansion. Simultaneously, a posteriori error estimate for the method is derived. Some illustrative examples demonstrating the efficiency of the method are given.
(This article belongs to the Special Issue Numerical Methods for Solving Nonlinear Equations)
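
As a pared-down illustration of the extrapolation principle, applied here to the plain composite trapezoidal rule on a smooth integrand rather than to the delayed weakly singular Volterra setting of the paper, one Richardson step combines two step sizes to cancel the leading error term:

    import numpy as np

    def trapezoid(f, a, b, n):
        """Composite trapezoidal rule with n subintervals."""
        x = np.linspace(a, b, n + 1)
        w = np.full(n + 1, (b - a) / n); w[0] *= 0.5; w[-1] *= 0.5
        return w @ f(x)

    exact = np.e - 1.0                          # \int_0^1 e^x dx
    T_h = trapezoid(np.exp, 0.0, 1.0, 8)        # step size h
    T_h2 = trapezoid(np.exp, 0.0, 1.0, 16)      # step size h/2
    R = (4.0 * T_h2 - T_h) / 3.0                # Richardson step: cancels the O(h^2) error term
    print(abs(T_h - exact), abs(T_h2 - exact), abs(R - exact))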

16 pages, 412 KiB  
Article
On the Convergence of a New Family of Multi-Point Ehrlich-Type Iterative Methods for Polynomial Zeros
by Petko D. Proinov and Milena D. Petkova
Mathematics 2021, 9(14), 1640; https://doi.org/10.3390/math9141640 - 12 Jul 2021
Viewed by 1808
Abstract
In this paper, we construct and study a new family of multi-point Ehrlich-type iterative methods for approximating all the zeros of a uni-variate polynomial simultaneously. The first member of this family is the two-point Ehrlich-type iterative method introduced and studied by Trićković and Petković in 1999. The main purpose of the paper is to provide local and semilocal convergence analysis of the multi-point Ehrlich-type methods. Our local convergence theorem is obtained by an approach that was introduced by the authors in 2020. Two numerical examples are presented to show the applicability of our semilocal convergence theorem.
(This article belongs to the Special Issue Numerical Methods for Solving Nonlinear Equations)
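
For orientation, the classical (one-point) Ehrlich simultaneous iteration that underlies such Ehrlich-type families can be sketched in a few lines of Python; the cubic test polynomial and the circular initial guesses below are assumptions for the example, not data from the paper.

    import numpy as np

    def ehrlich(coeffs, z0, tol=1e-12, maxit=100):
        """Simultaneously approximate all zeros of a polynomial with the Ehrlich iteration."""
        p = np.poly1d(coeffs)
        dp = p.deriv()
        z = np.array(z0, dtype=complex)
        for _ in range(maxit):
            w = p(z) / dp(z)                     # Newton corrections at the current approximations
            s = np.array([np.sum(1.0 / (z[i] - np.delete(z, i))) for i in range(len(z))])
            z_new = z - w / (1.0 - w * s)        # Ehrlich correction
            if np.max(np.abs(z_new - z)) < tol:
                return z_new
            z = z_new
        return z

    # Test polynomial (x - 1)(x - 2)(x - 3) with initial guesses on a circle around its zeros.
    coeffs = [1.0, -6.0, 11.0, -6.0]
    z0 = 2.0 + 2.5 * np.exp(2j * np.pi * np.arange(3) / 3)
    print(np.sort_complex(ehrlich(coeffs, z0)))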
