Article

Improvement of Unconstrained Optimization Methods Based on Symmetry Involved in Neutrosophy

by
Predrag S. Stanimirović
1,2,*,
Branislav Ivanov
3,
Dragiša Stanujkić
3,
Vasilios N. Katsikis
4,
Spyridon D. Mourtas
2,4,
Lev A. Kazakovtsev
2,5 and
Seyyed Ahmad Edalatpanah
6
1
Faculty of Sciences and Mathematics, University of Niš, Višegradska 33, 18000 Niš, Serbia
2
Laboratory “Hybrid Methods of Modelling and Optimization in Complex Systems”, Siberian Federal University, Prosp. Svobodny 79, Krasnoyarsk 660041, Russia
3
Technical Faculty in Bor, University of Belgrade, Vojske Jugoslavije 12, 19210 Bor, Serbia
4
Department of Economics, Division of Mathematics and Informatics, National and Kapodistrian University of Athens, Sofokleous 1 Street, 10559 Athens, Greece
5
Institute of Informatics and Telecommunications, Reshetnev Siberian State University of Science and Technology, Prosp. Krasnoyarskiy Rabochiy 31, Krasnoyarsk 660037, Russia
6
Department of Applied Mathematics, Ayandegan Institute of Higher Education, Tonekabon P.O. Box 46818-53617, Mazandaran, Iran
*
Author to whom correspondence should be addressed.
Symmetry 2023, 15(1), 250; https://doi.org/10.3390/sym15010250
Submission received: 1 December 2022 / Revised: 7 January 2023 / Accepted: 12 January 2023 / Published: 16 January 2023
(This article belongs to the Special Issue Nonlinear Analysis and Its Applications in Symmetry II)

Abstract
The influence of neutrosophy on many fields of science and technology, as well as its numerous applications, is evident. Our motivation is to apply neutrosophy for the first time in order to improve methods for solving unconstrained optimization. In particular, in this research, we propose and investigate an improvement of line search methods for solving unconstrained nonlinear optimization models. The improvement is based on the application of the symmetry involved in neutrosophic logic to determining an appropriate step size for the class of descent direction methods. Theoretical analysis is performed to show the convergence of the proposed iterations under the same conditions as for the related standard iterations. Mutual comparison and analysis of the generated numerical results reveal better behavior of the suggested iterations compared with analogous available iterations with respect to the Dolan–Moré performance profiles and statistical ranking. Statistical comparison also reveals the advantages of the neutrosophic improvements of the considered line search optimization methods.

1. Introduction, Preliminaries, and Motivation

We investigate applications of neutrosophic logic in determining an additional step size in gradient descent methods for solving the multivariate unconstrained optimization problem
$$\min f(x), \quad x \in \mathbb{R}^n,$$
in which the objective $f : \mathbb{R}^n \to \mathbb{R}$ is uniformly convex and twice continuously differentiable.
The most general iteration aimed at solving (1) is the descent direction (DD) method
$$x_{k+1} = x_k + t_k d_k,$$
such that $x_{k+1}$ is the current approximation, $x_k$ is the previous approximation, $t_k > 0$ is a step size, and $d_k$ is an appropriate search direction that satisfies the descent condition $g_k^T d_k < 0$, in which $g_k = \nabla f(x_k)$ stands for the gradient vector of the objective $f$. The most common choice is the antigradient direction $d_k = -g_k$, leading to the gradient descent (GD) iterations
$$x_{k+1} = x_k - t_k g_k,$$
in which the learning rate $t_k$ is typically determined by an inexact line search procedure. The iterative rule of the general quasi-Newton (QN) class of iterations with line search
$$x_{k+1} = x_k - t_k H_k g_k$$
utilizes an appropriate symmetric and positive definite approximation $B_k$ of the Hessian $G_k = \nabla^2 f(x_k)$ and $H_k = B_k^{-1}$ [1]. The update $B_{k+1}$ of $B_k$ is established based on the QN characteristic
$$B_{k+1}\,\varsigma_k = \xi_k, \quad \text{such that } \varsigma_k = x_{k+1} - x_k,\ \xi_k = g_{k+1} - g_k.$$
Computation of the Hessian or of its approximations that involve matrix operations is time-consuming and prohibitive. Following the goal of making optimization methods efficient in solving large-scale problems, we use the simplest scalar approximation of the Hessian [2,3]:
$$B_k = \gamma_k I, \quad \gamma_k > 0.$$
In this paper, we are interested in the following iterative scheme:
$$x_{k+1} = x_k - \gamma_k^{-1} t_k g_k.$$
Iterations (7) are known as improved gradient descent (IGD) methods. The roles of the additional step size $\gamma_k$ and the basic step length $t_k$ are clearly separated and complement each other. The quantity $t_k$ is defined as the output of an inexact line search methodology, while $\gamma_k$ is calculated based on the Taylor series of $f(x)$.
Diverse forms and improvements of the IGD iterative scheme (7) were suggested in [4,5,6,7,8]. The SM method proposed in [6] corresponds to the iteration
$$x_{k+1} = x_k - t_k (\gamma_k^{SM})^{-1} g_k,$$
where $\gamma_k^{SM} > 0$ is the gain parameter determined utilizing the Taylor approximation of $f\left(x_k - t_k(\gamma_k^{SM})^{-1} g_k\right)$, which results in
$$\gamma_{k+1}^{SM} = 2\gamma_k^{SM}\,\frac{\gamma_k^{SM}\Delta_k + t_k\|g_k\|^2}{t_k^2\|g_k\|^2},$$
such that $f_p := f(x_p)$, $\Delta_k := f_{k+1} - f_k$, and the computed value is kept positive by the truncation
$$(x) = \begin{cases} x, & x > 0,\\ 1, & x \le 0. \end{cases}$$
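To make the update above concrete, the following minimal Python sketch evaluates the gain parameter and applies the truncation; the function and variable names are illustrative, and the sketch assumes the reading in which the bracketed numerator is divided by $t_k^2\|g_k\|^2$.

```python
import numpy as np

def sm_gamma_update(gamma, f_next, f_curr, t, g):
    """One update of the SM gain parameter gamma_{k+1}.

    gamma  : current gain gamma_k (> 0)
    f_next : f(x_{k+1});  f_curr : f(x_k)
    t      : backtracking step size t_k
    g      : gradient g_k as a NumPy array
    """
    delta = f_next - f_curr                 # Delta_k = f_{k+1} - f_k
    g2 = float(np.dot(g, g))                # ||g_k||^2
    gamma_new = 2.0 * gamma * (gamma * delta + t * g2) / (t ** 2 * g2)
    return gamma_new if gamma_new > 0 else 1.0   # truncation keeps the gain positive

# Illustrative call with made-up values.
print(sm_gamma_update(gamma=1.0, f_next=0.8, f_curr=1.0, t=0.5, g=np.array([1.0, -1.0])))
```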
The modification of the SM method was defined as the transformation $MSM = M(SM)$ [9]:
$$x_{k+1} = M(SM)(x_k) = x_k - t_k\tau_k(\gamma_k^{MSM})^{-1} g_k,$$
where $t_k \in (0, 1)$ is defined by the backtracking search, $\tau_k = 1 + t_k - t_k^2$, and
$$\gamma_{k+1}^{MSM} = 2\gamma_k^{MSM}\,\frac{\gamma_k^{MSM}\Delta_k + t_k\tau_k\|g_k\|^2}{(t_k\tau_k)^2\|g_k\|^2}.$$
We propose improvements of line search iterative rules for solving (1). The main idea is based on the application of neutrosophic logic in determining an appropriate step length for various gradient descent rules. This idea builds on the hybridization principle proposed in [5,9,10], where an appropriate correction parameter $\alpha_k$ with a fixed value is used. A hybridization of the SM iterations (termed HSM) was introduced in [5] as the iterative rule
$$x_{k+1} = H(SM)(x_k) = x_k - (\eta_k + 1)(\gamma_k^{HSM})^{-1} t_k g_k,$$
such that $\eta_k$ is the correction quantity and $\gamma_k^{HSM}$ is the gain value defined as
$$\gamma_{k+1}^{HSM} = 2\gamma_k^{HSM}\,\frac{\gamma_k^{HSM}\Delta_k + (\eta_k + 1)t_k\|g_k\|^2}{(\eta_k + 1)^2 t_k^2\|g_k\|^2}.$$
The hybridizations of several IGD methods, including the MSM method, were proposed and investigated in [9,10]. An overview of methods derived by the hybridization of IGD iterations with the Picard–Mann, Ishikawa, and Khan iterative processes [11,12,13] was given in [14]. Some common fixed point results for fuzzy mappings were derived in [15]. A detailed numerical comparison between hybrid and nonhybrid IGD methods was performed in [14]. Four gradient descent algorithms with adaptive step size were proposed and investigated in [16].
Our goal in this paper is to use an adaptive neutrosophic logic parameter ν k instead of the fixed correction parameter η k + 1 in determining appropriate step sizes for various gradient descent methods. The parameter ν k in each iteration will be determined on the basis of the neutrosophic logic controller (NLC).
Consider the universe $U$. Fuzzy set theory relies on a membership function $T(u) \in [0, 1]$, $u \in U$ [17]. In addition, a fuzzy set $N$ over $U$ is a set of ordered pairs $N = \{\langle u, T(u)\rangle \mid u \in U\}$.
The intuitionistic fuzzy set (IFS) was established based on the nonmembership function $F(u) \in [0, 1]$, $u \in U$ [18]. Following the philosophy of using two opposing membership functions, an IFS $N$ in $U$ is defined as the set of ordered triples
$$N = \{\langle u, T(u), F(u)\rangle \mid u \in U\},$$
which are based on the independence of the members, that is, $T(u), F(u): U \to [0, 1]$ and $0 \le T(u) + F(u) \le 1$.
The IFS theory was extended by Smarandache in [19] and Wang et al. [20]. The novelty is the introduction of the indeterminacy-membership function $I(u)$, which symbolizes hesitation in a decision-making process. As a result, elements of a set in the neutrosophic theory are defined by three individualistic membership functions [19,20] defined by the rules of symmetry: the truth-membership function $T(u)$, the indeterminacy-membership function $I(u)$, and the falsity-membership function $F(u)$. A single-valued neutrosophic set (SVNS) $N$ over $U$ is the set of neutrosophic numbers of the form $N = \{\langle u, T(u), I(u), F(u)\rangle \mid u \in U\}$. The membership functions independently take values from $[0, 1]$, which implies $T(u), I(u), F(u): U \to [0, 1]$ and $0 \le T(u) + I(u) + F(u) \le 3$.
A neutrosophic set is symmetric in nature since the indeterminacy $I$ appears in the middle between the truth $T$ and the falsity $F$ [21,22]. Furthermore, a refined neutrosophic set with two indeterminacies $I_1$ and $I_2$ in the middle between $T$ and $F$ also includes a kind of symmetry [22]. In [23], the authors first introduced a normalized and a weighted symmetry measure of simplified neutrosophic sets and then proposed a neutrosophic multiple criteria decision-making method based on the introduced symmetry estimate.
Fuzzy logic (FL), intuitionistic fuzzy logic (IFL), and neutrosophic logic (NL) appear as efficient tools to handle mathematical models with uncertainty, fuzziness, ambiguity, inaccuracy, incomplete certainty, incompleteness, inconsistency, and redundancy. NL is one of the new theories based on the fundamental principles of neutrosophy; it belongs to the group of many-valued logics and represents an extension of FL. NL can also be considered a new branch of logic that addresses the shortcomings of classical logic, FL, and IFL. Some of the disadvantages of FL, such as the failure to handle inconsistent information, are significantly reduced by applying NL. Truth and falsity in NL are independent, while in IFL they are dependent. Neutrosophic logic can manipulate both incomplete and inconsistent data. Thus, there is a need to explore the use of NL in various domains, from medical treatment to recommendation systems, using new advanced computational intelligence techniques. NL is a better choice than FL and IFL for representing real-world data and their processing, because of the following reasons:
(a)
FL and IFL systems neglect the importance of indeterminacy. A fuzzy logic controller (FLC) is based on the membership and nonmembership of a particular element to a particular set and does not take into account the indeterminate nature of the generated data.
(b)
An FL or IFL system is further constrained by the fact that the sum of membership and nonmembership values is limited to 1. More details are available in [24].
(c)
NL reasoning clearly distinguishes the concepts of absolute truth and relative truth, assuming the existence of the absolute truth with the assigned value $1^+$.
(d)
NL is applicable in the situation of overlapping regions of the fuzzy systems [25].
Neutrosophic sets (NS) have important applications for denoising, clustering, segmentation, and classification in numerous medical image-processing applications. A utilization of neutrosophic theory in denoising medical images and their segmentation was proposed in [26], such that a neutrosophic image is characterized by three membership sets. Several applications of neutrosophic systems were described in [27]. An application of neutrosophy in natural language processing and sentiment analysis was investigated in [22].
Our goal in the present paper is to improve some of the main gradient descent methods for solving unconstrained nonlinear optimization problems utilizing the advantages of neutrosophic systems. Principal results of the current investigation are emphasized as follows.
(1)
We investigate applications of neutrosophic logic in determining an additional step size in line search methods for solving the unconstrained optimization problem.
(2)
Applications of neutrosophic logic in multiple step-size methods for solving unconstrained optimization problems are described and investigated.
(3)
Rigorous theoretical analysis is performed to show convergence of the proposed iterations under the same conditions as for the corresponding original methods.
(4)
A numerical comparison of the suggested algorithms with the corresponding available iterations, based on the Dolan–Moré benchmarking and statistical ranking, is presented.
The remaining sections are organized as follows. Optimization methods based on additional neutrosophic parameters are presented in Section 2. Convergence analysis is investigated in Section 3. Section 4 gives numerical experiments and compares the MSM, SM, and GD methods with their neutrosophic extensions, the FMSM, FSM, and FGD methods equipped with neutrosophic control. Moreover, the application of the new methods in regression analysis is given within this section. Some closing remarks and a vision of future investigation are presented in Section 5.

2. Fuzzy Optimization Methods

Fuzzy descent direction (FDD) iterations are defined as a modification of the DD iterations (2), as follows:
$$x_{k+1} = \Phi(DD)(x_k) = x_k + \nu_k t_k d_k,$$
where $\nu_k > 0$ is an appropriately defined fuzzy parameter. In general, $\nu_k$ should satisfy
$$\nu_k \begin{cases} < 1, & \text{if } \Delta_k > 0,\\ = 1, & \text{if } \Delta_k = 0,\\ > 1, & \text{if } \Delta_k < 0. \end{cases}$$
The main idea used in (13) is to decrease the composite step size $\nu_k t_k$ of iterations (12) in the case where $f$ increases and to increase $\nu_k t_k$ in the case when $f$ decreases.
We define the general fuzzy QN (FQN) iterative scheme with the line search as
$$x_{k+1} = \Phi(QN)(x_k) = \Phi(x_k - H_k g_k) = x_k - \nu_k H_k g_k.$$
The fuzzy GD method (FGD) is defined by
$$x_{k+1} = \Phi(GD)(x_k) = \Phi(x_k - t_k g_k) = x_k - \nu_k t_k g_k.$$
The fuzzy SM method (FSM) is defined as
$$x_{k+1} = \Phi(SM)(x_k) = x_k - \nu_k t_k(\gamma_k^{FSM})^{-1} g_k,$$
where
$$\gamma_{k+1}^{FSM} = 2\gamma_k^{FSM}\,\frac{\gamma_k^{FSM}\Delta_k + \nu_k t_k\|g_k\|^2}{(\nu_k t_k)^2\|g_k\|^2}.$$
Starting from (9) and (14), we define the fuzzy MSM method (FMSM) by
$$x_{k+1} = \Phi(MSM)(x_k) = x_k - \nu_k t_k\tau_k(\gamma_k^{FMSM})^{-1} g_k,$$
where
$$\gamma_{k+1}^{FMSM} = 2\gamma_k^{FMSM}\,\frac{\gamma_k^{FMSM}\Delta_k + \nu_k t_k\tau_k\|g_k\|^2}{(\nu_k t_k\tau_k)^2\|g_k\|^2}.$$
Table 1 summarizes the different step sizes utilized in the iterations considered in this paper, in which a dash denotes the absence of a suitable parameter.
Algorithm 1, restated from [6,28], is exploited to determine the step length t k .
Algorithm 1 The backtracking inexact line search.
Input: Goal function $f(x)$, a vector $d_k$ at $x_k$, and real quantities $0 < \sigma < 0.5$, $\beta \in (0, 1)$.
  1: $t = 1$
  2: While $f(x_k + t d_k) > f(x_k) + \sigma t g_k^T d_k$, perform $t := t\beta$.
  3: Output: $t_k = t$.
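A direct Python transcription of Algorithm 1 may be helpful; this is a sketch that assumes the objective is supplied as a callable and the gradient and direction as NumPy arrays, with names chosen for illustration.

```python
import numpy as np

def backtracking_line_search(f, x, g, d, sigma=1e-4, beta=0.8):
    """Backtracking inexact line search (Algorithm 1).

    f     : objective function, f(x) -> float
    x     : current iterate x_k
    g     : gradient g_k at x_k
    d     : descent direction d_k (must satisfy g^T d < 0)
    sigma : Armijo parameter, 0 < sigma < 0.5
    beta  : reduction factor, beta in (0, 1)
    """
    t = 1.0
    fx = f(x)
    gTd = float(np.dot(g, d))
    while f(x + t * d) > fx + sigma * t * gTd:
        t *= beta
    return t
```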
Algorithm 2 describes the general framework of the F D D class of methods.
Algorithm 2 Framework of F D D methods.
Input: Objective f ( x ) and an initial point x 0 dom ( f ) .
  1: Put $k = 0$, $\nu_0 = 1$, calculate $f(x_0)$, $g_0 = \nabla f(x_0)$, and generate a descent direction $d_0$.
  2: If stopping indicators are fulfilled, then stop; otherwise, go to the subsequent step.
  3: (Backtracking) Determine $t_k \in (0, 1]$ applying Algorithm 1.
  4: Compute $x_{k+1}$ using (12).
  5: Compute $f(x_{k+1})$ and generate the descent vector $d_{k+1}$.
  6: (Score function) Compute $\Delta_k := f_{k+1} - f_k$.
  7: (Neutrosophication) Compute $T(\Delta_k)$, $I(\Delta_k)$, $F(\Delta_k)$ using appropriate membership functions.
  8: Define the neutrosophic inference engine.
  9: (De-neutrosophication) Compute $\nu_k(\Delta_k)$ using the de-neutrosophication rule.
  10: k : = k + 1 and go to step 2.
  11: Output: { x k + 1 , f ( x k + 1 ) } .
It is worth mentioning that the general structure of fuzzy neutrosophic optimization methods follows the philosophy described in the diagram of Figure 1.

FMSM Method

To define the FMSM method, we need to define the steps Score function, Neutrosophication, and De-neutrosophication in Algorithm 2.
(1)
Neutrosophication. Using three membership functions, neutrosophic logic maps the input $\vartheta := f(x_k) - f(x_{k+1})$ into the neutrosophic triplet $(T(\vartheta), I(\vartheta), F(\vartheta))$.
The truth-membership function is defined as the sigmoid function
$$T(\vartheta) = \frac{1}{1 + e^{-c_1(\vartheta - c_2)}}.$$
The parameter $c_1$ is responsible for its slope at the crossover point $\vartheta = c_2$. The falsity-membership function is the opposite sigmoid function
$$F(\vartheta) = \frac{1}{1 + e^{c_1(\vartheta - c_2)}}.$$
The indeterminacy-membership function is the Gaussian function
$$I(\vartheta) = e^{-\frac{(\vartheta - c_2)^2}{2 c_1^2}},$$
where the parameter $c_1$ stands for the standard deviation and the parameter $c_2$ is the mean. The neutrosophication of a crisp value $\vartheta \in \mathbb{R}$ used in the implementation is the transformation of $\vartheta$ into $\langle\vartheta : T(\vartheta), I(\vartheta), F(\vartheta)\rangle$, where the membership functions are defined in (20)–(22).
Since the final goal is to minimize $f(x)$, it is reasonable to use $\Delta_k$ as a measure in the developed NLC. So, we consider the dynamic neutrosophic set (DNS) defined by $D := \langle T(\Delta_k), I(\Delta_k), F(\Delta_k)\rangle$, $\Delta_k \in \mathbb{R}$.
(2)
Neutrosophic inference engine: The neutrosophic rule between the fuzzy input set $I$ and the fuzzy output set under the neutrosophic format $O = \{T, I, F\}$ is described by the following "IF–THEN" rules:
$$R_1: \text{If } I = P \text{ then } O = \{T, I, F\}, \qquad R_2: \text{If } I = N \text{ then } O = \{T, I, F\}.$$
The notations $P$ and $N$ stand for fuzzy sets and indicate a positive and a negative error, respectively. Using the unification $R = R_1 \cup R_2$, we obtain $O_i = I \circ R_i$, $i = 1, 2$, where $\circ$ symbolizes the fuzzy transformation. Furthermore, it follows that $\kappa_{I\circ R}(\zeta) = \kappa_{I\circ R_1} \vee \kappa_{I\circ R_2}$ and $\kappa_{I\circ R_i}(\zeta) = \sup(\kappa_I \wedge \kappa_{O_i})$, $i = 1, 2$, where $\wedge$ (resp. $\vee$) denotes the $(\min, \max, \max)$ operator (resp. the $(\max, \min, \min)$ operator). The process of turning the fuzzy outputs into a single, crisp output value is known as defuzzification. There are various defuzzification methods that can be used to perform this procedure. The centroid method, the weighted average method, and the max or mean–max membership principles are some popular defuzzification methods. In this study, the following defuzzification method, called the centroid method, is employed to obtain a vector of crisp outputs $\zeta^* = [T(\Delta_k), I(\Delta_k), F(\Delta_k)] \in \mathbb{R}^3$ of the fuzzy vector $\zeta = \{T(\Delta_k), I(\Delta_k), F(\Delta_k)\}$:
$$\zeta^* = \frac{\int_O \zeta\,\kappa_{I\circ R}(\zeta)\,d\zeta}{\int_O \kappa_{I\circ R}(\zeta)\,d\zeta}.$$
(3)
De-neutrosophication. This step assumes the conversion $\langle T(\Delta_k), I(\Delta_k), F(\Delta_k)\rangle \to \nu_k(\Delta_k) \in \mathbb{R}$, resulting in a single (crisp) value $\nu_k(\Delta_k)$.
The following de-neutrosophication rule (24) is proposed to obtain the parameter $\nu_k(\Delta_k)$; it follows the constraints stated in (13):
$$\nu_k(\Delta_k) = \begin{cases} 1 - \left[T(\Delta_k) + I(\Delta_k) + F(\Delta_k)\right]/c_1, & \Delta_k > 0,\\ 1, & \Delta_k = 0,\\ 3 - \left[T(\Delta_k) + I(\Delta_k) + F(\Delta_k)\right], & \Delta_k < 0. \end{cases}$$
The condition $c_1 \ge 3$ maintains the bounds $0 \le \nu_k(\Delta_k) < 1$ in the case $\Delta_k > 0$. Moreover, definition (24) assumes that the membership functions satisfy $T(\Delta_k) + I(\Delta_k) + F(\Delta_k) < 2$ in the case $\Delta_k < 0$, which ensures $\nu_k(\Delta_k) > 1$.
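As a concrete illustration, the self-contained sketch below combines the membership functions (20)–(22) with the de-neutrosophication rule (24) to produce the correction parameter $\nu_k(\Delta_k)$. The sigmoid orientations and the default values of $c_1$ and $c_2$ are assumptions for illustration rather than the exact settings of Table 2.

```python
import numpy as np

def nu_from_delta(delta, c1=3.0, c2=0.0):
    """De-neutrosophication: crisp correction parameter nu_k(Delta_k).

    Membership functions as in (20)-(22), with assumed sigmoid orientation;
    c1 >= 3 keeps nu nonnegative when Delta_k > 0.
    """
    T = 1.0 / (1.0 + np.exp(-c1 * (delta - c2)))        # truth-membership
    F = 1.0 / (1.0 + np.exp(c1 * (delta - c2)))         # falsity-membership
    I = np.exp(-((delta - c2) ** 2) / (2.0 * c1 ** 2))  # indeterminacy-membership
    s = T + I + F
    if delta > 0:       # f increased: nu < 1 shrinks the composite step
        return 1.0 - s / c1
    if delta < 0:       # f decreased: nu > 1 enlarges the composite step
        return 3.0 - s
    return 1.0          # Delta_k = 0

for d in (-0.5, 0.0, 0.5):
    print(d, nu_from_delta(d))
```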
For better understanding, the NLC structure decomposed by the neutrosophic rules is presented in the diagram of Figure 2. It is crucial to remember that the NLC structure was built specifically to solve the issues discussed in this paper, including the choice of membership functions, the number of fuzzy rules, the defuzzification method, and the de-neutrosophication method. As a result, the NLC structure is heuristic, and different structures may be required for other applications.
The utilized settings for the NLC employed in all numerical experiments and graphs of this paper are presented in Table 2.
Our imperative requirement is $\nu_k(\Delta_k) \ge 0$. The fulfillment of this requirement follows immediately from the membership values $T(\Delta_k)$, $F(\Delta_k)$, $I(\Delta_k)$ generated during the neutrosophication process, which are presented in Figure 3a. The NLC output value $\nu_k(\Delta_k)$ obtained during the de-neutrosophication process is presented in Figure 3b.
Figure 3 clearly shows that (24) satisfies the basic requirements imposed in (13). More precisely, the graphs in Figure 3 show that $1 - \left[T(\Delta_k) + I(\Delta_k) + F(\Delta_k)\right]/c_1 < 1$ in the case $\Delta_k > 0$, and $3 - \left[T(\Delta_k) + I(\Delta_k) + F(\Delta_k)\right] \ge 1$ in the case $\Delta_k < 0$.
Remark 1.
During the iterations, the function decreases and tends to the minimum, so $\lim_{k\to\infty}\Delta_k = 0$, that is, $\lim_{k\to\infty}\nu_k(\Delta_k) = 1$. This observation leads to the conclusion that the deviation $\nu_k - 1$ decreases as we approach the minimum of the function, and thus the influence of neutrosophy on the gradient methods decreases. Such desirable behavior of $\nu_k(\Delta_k)$ was our intention.
Algorithm 3 is the algorithmic framework of the FMSM method.
Algorithm 3 Framework of F M S M method.
Input: Objective f ( x ) and appropriate initialization x 0 dom ( f ) .
  1: Put $k = 0$, compute $f(x_0)$, $g_0 = \nabla f(x_0)$, and take $\gamma_0 = 1$, $\nu_0 = 1$.
  2: If stopping criteria are satisfied, then stop; otherwise, go to the subsequent step.
  3: (Backtracking) Find the step size $t_k \in (0, 1]$ using Algorithm 1 utilizing the search direction $d_k = -\nu_k\tau_k(\gamma_k^{FMSM})^{-1} g_k$.
  4: Compute $x_{k+1}$ using (18).
  5: Calculate $f(x_{k+1})$ and $g_{k+1} = \nabla f(x_{k+1})$.
  6: Compute $\gamma_{k+1}^{FMSM}$ applying (19).
  7: Compute $\Delta_k := f_{k+1} - f_k$.
  8: Compute T ( Δ k ) , I ( Δ k ) , F ( Δ k ) using (20)–(22), respectively.
  9: Compute ζ * = [ T ( Δ k ) , I ( Δ k ) , F ( Δ k ) ] using (23).
  10: Compute ν k : = ν k ( Δ k ) using (24).
  11: Put k : = k + 1 , and go to Step 2.
  12: Return { x k + 1 , f ( x k + 1 ) } .
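Putting the pieces together, the sketch below is one possible self-contained Python rendering of the Algorithm 3 framework on a generic objective. It is illustrative only (for example, $\tau_k$ is recomputed inside the backtracking loop because it depends on $t_k$, and the tolerances are placeholders); it is not a reproduction of the authors' MATLAB implementation.

```python
import numpy as np

def fmsm(f, grad, x0, sigma=1e-4, beta=0.8, c1=3.0, c2=0.0, eps=1e-6, max_iter=10000):
    """Sketch of the FMSM framework (Algorithm 3)."""
    x = np.asarray(x0, dtype=float)
    gamma, nu = 1.0, 1.0
    fx, g = f(x), grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) <= eps:
            break
        # Backtracking (Algorithm 1); tau_k depends on t, so both are updated together.
        t = 1.0
        while True:
            tau = 1.0 + t - t ** 2
            d = -nu * tau * g / gamma
            if f(x + t * d) <= fx + sigma * t * np.dot(g, d):
                break
            t *= beta
        x_new = x + t * d
        f_new, g_new = f(x_new), grad(x_new)
        delta = f_new - fx
        # Gain parameter update (FMSM analogue of the SM rule), with truncation.
        step = nu * t * tau
        g2 = float(np.dot(g, g))
        gamma_new = 2.0 * gamma * (gamma * delta + step * g2) / (step ** 2 * g2)
        gamma = gamma_new if gamma_new > 0 else 1.0
        # Neutrosophic correction nu_k(Delta_k): neutrosophication + de-neutrosophication.
        T = 1.0 / (1.0 + np.exp(-c1 * (delta - c2)))
        F = 1.0 / (1.0 + np.exp(c1 * (delta - c2)))
        I = np.exp(-((delta - c2) ** 2) / (2.0 * c1 ** 2))
        s = T + I + F
        nu = 1.0 - s / c1 if delta > 0 else (3.0 - s if delta < 0 else 1.0)
        x, fx, g = x_new, f_new, g_new
    return x, fx

# Example on a simple quadratic (illustrative only).
x_opt, f_opt = fmsm(lambda v: float(v @ v), lambda v: 2.0 * v, [3.0, -4.0])
print(x_opt, f_opt)
```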

3. Convergence Analysis

The following assumptions are necessary, and the following auxiliary results are useful.
Assumption 1.
(1) The level set $\mathcal{M} = \{x \in \mathbb{R}^n \mid f(x) \le f(x_0)\}$, defined by the initial iterate $x_0$ of (2), is bounded.
(2) The objective $f$ is continuously differentiable in a neighborhood $\mathcal{P}$ of $\mathcal{M}$, and its gradient $g$ is Lipschitz continuous, i.e., there exists $L > 0$ such that
$$\|g(v) - g(w)\| \le L\|v - w\|, \quad \forall v, w \in \mathcal{P}.$$
Several useful results from [28,29,30] and [31,32] are restated for completeness. Let $d_k$ be chosen as a descent direction, and let the gradient $g(x)$ fulfill the Lipschitz requirement (25). The step length $t_k$ derived in the backtracking Algorithm 1 satisfies
$$t_k \ge \min\left\{1,\ \frac{\beta(1-\sigma)}{L}\,\frac{|g_k^T d_k|}{\|d_k\|^2}\right\}.$$
In the sequel, we distinguish two assumptions on the objective: that $f : \mathbb{R}^n \to \mathbb{R}$ is twice continuously differentiable and uniformly convex on $\mathbb{R}^n$, and the weaker assumption that $f$ is only uniformly convex on $\mathbb{R}^n$. From [31,32], it follows that Assumption 1 is satisfied if $f$ is twice continuously differentiable and uniformly convex.
Lemma 1
([31,32]). If $f$ is twice continuously differentiable and uniformly convex on $\mathbb{R}^n$, then there exist real numbers $m$, $M$ such that
$$0 < m \le 1 \le M.$$
Moreover, $f(p)$ possesses a unique minimizer $p^*$, such that
$$m\|q\|^2 \le q^T\nabla^2 f(p)\,q \le M\|q\|^2, \quad \forall p, q \in \mathbb{R}^n;$$
$$\tfrac{1}{2}m\|p - p^*\|^2 \le f(p) - f(p^*) \le \tfrac{1}{2}M\|p - p^*\|^2, \quad \forall p \in \mathbb{R}^n;$$
$$m\|p - q\|^2 \le (g(p) - g(q))^T(p - q) \le M\|p - q\|^2, \quad \forall p, q \in \mathbb{R}^n.$$
For simplicity, denote the S M and M S M iterations as
$$x_{k+1}^{(M)SM} = x_k^{(M)SM} - t_k\omega_k(\gamma_k^{(M)SM})^{-1} g_k,$$
where $x_k^{(M)SM}$ denotes $x_k^{SM}$ (resp. $x_k^{MSM}$) in the case of the SM (resp. MSM) method and $\omega_k = 1$ (resp. $\omega_k = \tau_k := 1 + t_k - t_k^2$) in the case of the SM (resp. MSM) method. Similarly, the FSM and FMSM iterations are denoted by the common notation
$$x_{k+1}^{F(M)SM} = x_k^{F(M)SM} - \nu_k t_k\omega_k(\gamma_k^{F(M)SM})^{-1} g_k,$$
where $x_k^{F(M)SM}$ denotes $x_k^{FSM}$ (resp. $x_k^{FMSM}$) in the case of the FSM (resp. FMSM) method and $\omega_k = 1$ (resp. $\omega_k = \tau_k$) in the case of the FSM (resp. FMSM) method. Since the scalar matrix approximation of the Hessian makes it possible to avoid the assumption that $f$ is twice continuously differentiable, instead of (28) and (27) we assume only the following bounds for $\gamma_k^{F(M)SM}$:
$$m \le \gamma_k^{F(M)SM} \le M, \quad 0 < m \le 1 \le M, \quad m, M \in \mathbb{R}.$$
In this way, the assumption that $f$ is twice continuously differentiable and uniformly convex reduces to the assumption that $f$ is uniformly convex.
Lemma 2 estimates the iterative decreasing of f ensured by S M and M S M iterations.
Lemma 2
([6,9]). Let $f$ be uniformly convex and let (31) be valid. Then, the SM sequence $\{x_k\}$ produced by (8) and the MSM sequence $\{x_k\}$ produced by (9) satisfy
$$f(x_k^{(M)SM}) - f(x_{k+1}^{(M)SM}) \ge \mu\|g_k\|^2,$$
such that
$$\mu = \min\left\{\frac{\sigma}{M},\ \frac{\sigma(1-\sigma)\beta}{L}\right\}.$$
Theorem 1 investigates the convergence of the F M S M and F S M iterative sequences.
Theorem 1.
Let $f$ be uniformly convex and let (31) be valid. Under these conditions, the FSM sequence induced by (16) and the FMSM sequence induced by (18) satisfy
$$f(x_k^{F(M)SM}) - f(x_{k+1}^{F(M)SM}) \ge \mu_{\nu_k}\|g_k\|^2,$$
such that
$$\mu_{\nu_k} = \min\left\{\frac{\sigma\nu_k}{M},\ \frac{\sigma(1-\sigma)\beta}{L}\right\}.$$
Proof. 
The FSM and FMSM iterations $x_{k+1}^{F(M)SM} = x_k^{F(M)SM} - t_k\nu_k\omega_k(\gamma_k^{F(M)SM})^{-1} g_k$ are of the general DD pattern $x_{k+1} = x_k + t_k d_k$ in the case $d_k = -\nu_k\omega_k(\gamma_k^{F(M)SM})^{-1} g_k$. According to the stopping condition used in Algorithm 1, it follows
$$f(x_k^{F(M)SM}) - f(x_{k+1}^{F(M)SM}) \ge -\sigma t_k g_k^T d_k, \quad \forall k \in \mathbb{N}.$$
In the occurrence $t_k < 1$, using (36) with $d_k = -\nu_k\omega_k(\gamma_k^{F(M)SM})^{-1} g_k$, one obtains
$$f(x_k^{F(M)SM}) - f(x_{k+1}^{F(M)SM}) \ge -\sigma t_k g_k^T d_k = \sigma t_k\nu_k\omega_k(\gamma_k^{F(M)SM})^{-1}\|g_k\|^2.$$
Now, (26) implies
$$t_k \ge \frac{\beta(1-\sigma)}{L}\cdot\frac{-g_k^T d_k}{\|d_k\|^2} = \frac{\beta(1-\sigma)}{L}\cdot\frac{\nu_k\omega_k(\gamma_k^{F(M)SM})^{-1}\|g_k\|^2}{\nu_k^2\omega_k^2(\gamma_k^{F(M)SM})^{-2}\|g_k\|^2} = \frac{\beta(1-\sigma)}{L}\cdot\frac{\gamma_k^{F(M)SM}}{\nu_k\omega_k}.$$
Now, (37), in conjunction with the last inequality, gives
$$f(x_k^{F(M)SM}) - f(x_{k+1}^{F(M)SM}) \ge \sigma t_k\nu_k\omega_k(\gamma_k^{F(M)SM})^{-1} g_k^T g_k \ge \sigma\,\frac{\beta(1-\sigma)}{L}\cdot\frac{\gamma_k^{F(M)SM}}{\nu_k\omega_k}\,\nu_k\omega_k(\gamma_k^{F(M)SM})^{-1} g_k^T g_k = \frac{\sigma(1-\sigma)\beta}{L}\|g_k\|^2.$$
According to (31), in the occurrence $t_k = 1$ (in which case also $\omega_k = 1$), we conclude
$$f(x_k^{F(M)SM}) - f(x_{k+1}^{F(M)SM}) \ge -\sigma g_k^T d_k = \sigma g_k^T\left(\nu_k\omega_k(\gamma_k^{F(M)SM})^{-1} g_k\right) = \sigma\nu_k(\gamma_k^{F(M)SM})^{-1}\|g_k\|^2 \ge \frac{\sigma\nu_k}{M}\|g_k\|^2.$$
Starting from the above two inequalities, we obtain (34) in both possible situations, t k < 1 and t k = 1 , which completes the statement. □
Remark 2.
Based on (32) and (34), respectively, it follows that
$f(x_k^{F(M)SM}) - f(x_{k+1}^{F(M)SM}) \in \left[\mu_{\nu_k}\|g_k\|^2, +\infty\right)$ and $f(x_k^{(M)SM}) - f(x_{k+1}^{(M)SM}) \in \left[\mu\|g_k\|^2, +\infty\right)$. According to (13), it follows that $\mu_{\nu_k} \ge \mu$ if $f(x_{k+1}^{F(M)SM}) < f(x_k^{F(M)SM})$. So,
$f(x_k^{F(M)SM}) - f(x_{k+1}^{F(M)SM}) \in \left[\mu_{\nu_k}\|g_k\|^2, +\infty\right) \subseteq \left[\mu\|g_k\|^2, +\infty\right)$. This means that the values $f(x_k^{F(M)SM}) - f(x_{k+1}^{F(M)SM})$ belong to an interval whose lower bound is greater than or equal to that of the interval containing the values $f(x_k^{(M)SM}) - f(x_{k+1}^{(M)SM})$. Furthermore, it means that the possibilities for the reduction of $f(x_{k+1}^{F(M)SM})$ compared with $f(x_k^{F(M)SM})$ are greater than or equal to the possibilities for the reduction of $f(x_{k+1}^{(M)SM})$ compared with $f(x_k^{(M)SM})$.
Theorem 2 confirms a linear convergence rate of the F ( M ) S M method for uniformly convex functions.
Theorem 2.
Let $f$ be uniformly convex and let (31) be valid. If the iterates $\{x_k\}$ are generated by Algorithm 3, it follows that
$$\lim_{k\to\infty}\|g_k^{F(M)SM}\| = 0,$$
and $\{x_k\}$ converges to $x^*$ with at least a linear convergence rate.
Proof. 
The proof is analogous to [6] (Theorem 4.1). □
In Lemma 3, we investigate the convergence of the F(M)SM method on the class of quadratic strictly convex functions
$$f(x) = \tfrac{1}{2}x^T A x - b^T x,$$
wherein $A$ is a real $n \times n$ symmetric positive definite matrix and $b \in \mathbb{R}^n$. Denote by $\lambda_1 \le \cdots \le \lambda_n$ the sorted eigenvalues of $A$. The gradient of (39) is given as
$$g_k = A x_k - b.$$
Lemma 3.
Let $f$ be the quadratic function defined in (39) by a positive definite symmetric matrix $A \in \mathbb{R}^{n\times n}$ with eigenvalues $\lambda_1 \le \cdots \le \lambda_n$. Then
$$\lambda_1 \le \frac{\gamma_{k+1}^{F(M)SM}}{t_{k+1}} \le \frac{2\lambda_n}{\sigma}, \quad \forall k \in \mathbb{N},$$
such that $\gamma_k^{F(M)SM}$ is determined by (17) and (19), and $t_k$ is defined in Algorithm 1.
Proof. 
Simple calculation leads to
$$f(x_{k+1}^{F(M)SM}) - f(x_k^{F(M)SM}) = \tfrac{1}{2}x_{k+1}^T A x_{k+1} - b^T x_{k+1} - \tfrac{1}{2}x_k^T A x_k + b^T x_k.$$
The replacement of (18) in (42) leads to
$$\begin{aligned} f(x_{k+1}^{F(M)SM}) - f(x_k^{F(M)SM}) ={}& \tfrac{1}{2}\left[x_k - \nu_k t_k\omega_k(\gamma_k^{F(M)SM})^{-1} g_k\right]^T A\left[x_k - \nu_k t_k\omega_k(\gamma_k^{F(M)SM})^{-1} g_k\right]\\ &- b^T\left[x_k - \nu_k t_k\omega_k(\gamma_k^{F(M)SM})^{-1} g_k\right] - \tfrac{1}{2}x_k^T A x_k + b^T x_k\\ ={}& -\tfrac{1}{2}\nu_k t_k\omega_k(\gamma_k^{F(M)SM})^{-1} x_k^T A g_k - \tfrac{1}{2}\nu_k t_k\omega_k(\gamma_k^{F(M)SM})^{-1} g_k^T A x_k\\ &+ \tfrac{1}{2}(\nu_k t_k\omega_k)^2(\gamma_k^{F(M)SM})^{-2} g_k^T A g_k + \nu_k t_k\omega_k(\gamma_k^{F(M)SM})^{-1} b^T g_k. \end{aligned}$$
Applying (40) in the previous equation, we conclude
$$\begin{aligned} f(x_{k+1}^{F(M)SM}) - f(x_k^{F(M)SM}) &= \nu_k t_k\omega_k(\gamma_k^{F(M)SM})^{-1}\left[b^T g_k - x_k^T A g_k\right] + \tfrac{1}{2}(\nu_k t_k\omega_k)^2(\gamma_k^{F(M)SM})^{-2} g_k^T A g_k\\ &= \nu_k t_k\omega_k(\gamma_k^{F(M)SM})^{-1}\left[b^T - x_k^T A\right]g_k + \tfrac{1}{2}(\nu_k t_k\omega_k)^2(\gamma_k^{F(M)SM})^{-2} g_k^T A g_k\\ &= -\nu_k t_k\omega_k(\gamma_k^{F(M)SM})^{-1} g_k^T g_k + \tfrac{1}{2}(\nu_k t_k\omega_k)^2(\gamma_k^{F(M)SM})^{-2} g_k^T A g_k. \end{aligned}$$
After replacing (43) into (19), the parameter γ k + 1 F ( M ) S M becomes
$$\begin{aligned} \gamma_{k+1}^{F(M)SM} &= 2\gamma_k^{F(M)SM}\,\frac{\gamma_k^{F(M)SM}(f_{k+1} - f_k) + \nu_k t_k\omega_k\|g_k\|^2}{(\nu_k t_k\omega_k)^2\|g_k\|^2}\\ &= 2\gamma_k^{F(M)SM}\,\frac{-\nu_k t_k\omega_k\|g_k\|^2 + \tfrac{1}{2}(\nu_k t_k\omega_k)^2(\gamma_k^{F(M)SM})^{-1} g_k^T A g_k + \nu_k t_k\omega_k\|g_k\|^2}{(\nu_k t_k\omega_k)^2\|g_k\|^2}\\ &= 2\gamma_k^{F(M)SM}\,\frac{\tfrac{1}{2}(\nu_k t_k\omega_k)^2(\gamma_k^{F(M)SM})^{-1} g_k^T A g_k}{(\nu_k t_k\omega_k)^2\|g_k\|^2} = \frac{g_k^T A g_k}{\|g_k\|^2}. \end{aligned}$$
The last identity implies that $\gamma_{k+1}^{F(M)SM}$ is the Rayleigh quotient of the real symmetric matrix $A$ at $g_k$. So,
$$\lambda_1 \le \gamma_{k+1}^{F(M)SM} \le \lambda_n, \quad \forall k \in \mathbb{N}.$$
The left inequality in (41) is implied by (44), due to $t_{k+1} \in (0, 1]$. To verify the right inequality from (41), we use the limit imposed by the line search
$$t_k > \frac{\beta(1-\sigma)}{L}\,\gamma_k^{F(M)SM},$$
which implies
$$\frac{\gamma_{k+1}^{F(M)SM}}{t_{k+1}} < \frac{L}{\beta(1-\sigma)}.$$
Taking into account (40) and the symmetry of $A$, we derive
$$\|g(x) - g(y)\| = \|A x - b - (A y - b)\| = \|A x - A y\| \le \|A\|\,\|x - y\| = \lambda_n\|x - y\|.$$
Based on the last inequality, it is concluded that the constant $L$ in (45) can be taken as the largest eigenvalue $\lambda_n$ of $A$. Considering the backtracking parameters $\sigma \in (0, 0.5)$, $\beta \in (\sigma, 1)$, it is obtained that
$$\frac{\gamma_{k+1}^{F(M)SM}}{t_{k+1}} < \frac{L}{\beta(1-\sigma)} = \frac{\lambda_n}{\beta(1-\sigma)} < \frac{2\lambda_n}{\sigma}.$$
Therefore, the right-hand side inequality in (41) is proved, and the proof is finished. □
In Theorem 3, we consider the convergence of the FSM and FMSM iterations under the additional assumption $\lambda_n < 2\lambda_1$.
Theorem 3.
Let $f$ be the strictly convex quadratic in (39). If the eigenvalues of $A$ satisfy $\lambda_n < 2\lambda_1$, then the FSM iterations (16) and the FMSM iterations (18) fulfill
$$(d_i^{k+1})^2 \le \delta^2 (d_i^k)^2,$$
wherein
$$\delta = \max\left\{1 - \frac{\sigma\lambda_1}{2\lambda_n},\ \frac{\lambda_n}{\lambda_1} - 1\right\},$$
and
$$\lim_{k\to\infty}\|g_k^{F(M)SM}\| = 0.$$
Proof. 
Let $\{x_k\}$ be the output of Algorithm 3 and let $\{v_1, \ldots, v_n\}$ be orthonormal eigenvectors of $A$. In this case, for an arbitrary vector $x_k$ in (40), there exist real constants $d_1^k, d_2^k, \ldots, d_n^k$ such that
$$g_k = \sum_{i=1}^n d_i^k v_i.$$
Now, using (18), we have
$$g_{k+1} = A x_{k+1} - b = A\left(x_k - \nu_k t_k\omega_k(\gamma_k^{F(M)SM})^{-1} g_k\right) - b = g_k - \nu_k t_k\omega_k(\gamma_k^{F(M)SM})^{-1} A g_k = \left(I - \nu_k t_k\omega_k(\gamma_k^{F(M)SM})^{-1} A\right)g_k.$$
Next, using (50), we obtain
$$g_{k+1} = \sum_{i=1}^n d_i^{k+1} v_i = \sum_{i=1}^n\left(1 - \nu_k t_k\omega_k(\gamma_k^{F(M)SM})^{-1}\lambda_i\right)d_i^k v_i.$$
To prove (47), it is enough to show that $\left|1 - \lambda_i\nu_k t_k\omega_k(\gamma_k^{F(M)SM})^{-1}\right| \le \delta$. Two cases are possible. Firstly, if $\lambda_i \le \gamma_k^{F(M)SM}/(\nu_k t_k\omega_k)$, using (41), we deduce
$$1 > \lambda_i\nu_k t_k\omega_k(\gamma_k^{F(M)SM})^{-1} \ge \frac{\sigma\lambda_1}{2\lambda_n} \;\Longrightarrow\; \left|1 - \lambda_i\nu_k t_k\omega_k(\gamma_k^{F(M)SM})^{-1}\right| \le 1 - \frac{\sigma\lambda_1}{2\lambda_n} \le \delta.$$
Now, let us examine the other case, $\gamma_k^{F(M)SM}/(\nu_k t_k\omega_k) < \lambda_i$. Since
$$1 < \lambda_i\nu_k t_k\omega_k(\gamma_k^{F(M)SM})^{-1} \le \frac{\lambda_n}{\lambda_1},$$
it follows that
$$\left|1 - \lambda_i\nu_k t_k\omega_k(\gamma_k^{F(M)SM})^{-1}\right| \le \frac{\lambda_n}{\lambda_1} - 1 \le \delta.$$
Now, we use the orthonormality of the eigenvectors $\{v_1, \ldots, v_n\}$ and (50) and obtain
$$\|g_k\|^2 = \sum_{i=1}^n (d_i^k)^2.$$
Since (47) holds and $0 < \delta < 1$, based on (55), it follows that (49) holds, which completes the proof. □

4. Numerical Experiments

In this section, we demonstrate the numerical efficiency of the gradient methods based on a dynamic neutrosophic set (DNS). We consider six methods, of which three, FMSM, FSM, and FGD, are based on the DNS, while the other three methods, MSM, SM, and GD, are well known in the literature. To this aim, we perform comparisons on standard test functions with given initial points from [33,34]. We compare the MSM, SM, GD, FMSM, FSM, and FGD methods on three criteria:
  • The CPU time in seconds—CPUts.
  • The number of iterative steps—NI.
  • The number of function evaluations—NFE.
The methods which participate in the comparison are presented in Section 2 (Table 1). Test problems in ten different dimensions, $n \in \{100, 500, 1000, 3000, 5000, 7000, 8000, 10{,}000, 15{,}000, 20{,}000\}$, are considered. The codes are tested in MATLAB R2017a on a laptop (Intel(R) Core(TM) i3-6006U, up to 2.0 GHz, 8 GB memory) with the Windows 10 Pro operating system.
Algorithms MSM, SM, GD, FSM, FGD, and FMSM are compared using the backtracking line search with parameters $\sigma = 0.0001$, $\beta = 0.8$ and the stopping criterion
$$\|g_k\| \le \epsilon \quad \text{and} \quad \frac{|\Delta_k|}{1 + |f_k|} \le \delta,$$
where $\epsilon = 10^{-6}$ and $\delta = 10^{-16}$. Specific parameters used only in the FSM, FGD, and FMSM methods are given in Table 2.
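Expressed as a small helper (names are illustrative), the stopping test above reads:

```python
import numpy as np

def stop_test(g, f_curr, f_prev, eps=1e-6, delta=1e-16):
    """Stopping criterion: ||g_k|| <= eps and |f_{k+1} - f_k| / (1 + |f_k|) <= delta."""
    return (np.linalg.norm(g) <= eps
            and abs(f_curr - f_prev) / (1.0 + abs(f_prev)) <= delta)
```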
In the following, we give a double analysis of the obtained numerical results. One analysis of the numerical results is based on the Dolan–Moré performance profile, and the other on the ranking of the optimization methods.

4.1. Comparison Based on the Dolan–Moré Performance Profile

In this subsection, we give numerical results for the F S M , F G D , and F M S M methods and then compare them with the numerical results obtained for the M S M , S M , and G D methods.
Summarized numerical results for the competition (between M S M , S M , G D , F S M , F G D , and F M S M methods), obtained by testing 30 test functions (300 tests), are given in Table 3, Table 4 and Table 5. Table 3, Table 4 and Table 5 include numerical results obtained by monitoring the criteria NI, NFE, and CPUts.
The performance profiles given in [35] are applied to compare numerical results for the criteria CPUts, NI, and NFE, generated by considered methods. The method that achieves the best results generates the upper performance profile curve.
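For readers who wish to reproduce this kind of comparison, a minimal sketch of how a Dolan–Moré performance profile can be computed from a solver-by-problem cost matrix is given below; it is a generic implementation of the profile idea from [35], not the exact plotting code used for Figures 4–6.

```python
import numpy as np

def performance_profile(costs, taus):
    """Dolan-More performance profile.

    costs : (n_problems, n_solvers) array of a cost metric (NI, NFE, or CPU time);
            use np.inf for failures.
    taus  : 1-D array of performance-ratio thresholds (tau >= 1).
    Returns rho with shape (len(taus), n_solvers), where rho[j, s] is the
    fraction of problems solved by solver s within tau_j times the best cost.
    """
    costs = np.asarray(costs, dtype=float)
    best = costs.min(axis=1, keepdims=True)          # best cost per problem
    ratios = costs / best                            # performance ratios
    rho = np.array([(ratios <= tau).mean(axis=0) for tau in taus])
    return rho

# Illustrative use with made-up costs for three solvers on four problems.
costs = [[10, 12, 30], [5, 4, 9], [7, 7, 8], [20, 18, 50]]
print(performance_profile(costs, taus=np.array([1.0, 2.0, 4.0])))
```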
In Figure 4 (resp. Figure 5), we compare the performance profiles NI (resp. NFE) for the M S M , S M , G D , F S M , F G D , and F M S M methods based on numerical values included in Table 3 (resp. Table 4). A careful analysis reveals that the FMSM method solves 20.00 % of the test problems, with the least NI compared with M S M ( 33.33 % ) , S M ( 26.67 % ) , F S M ( 33.33 % ) , GD ( 13.33 % ) , and F G D ( 10.00 % ) . From Figure 4, it is perceptible that the F M S M graph attains the top level first, which indicates that F M S M outperforms other methods with respect to NI.
From Figure 5, we see that the FMSM and FSM methods are more efficient than the MSM, SM, GD, and FGD methods with respect to NFE, since they solve 10.00% (FMSM) and 33.33% (FSM) of the test problems with the least NFE, compared with MSM (40.00%), SM (26.67%), GD (13.33%), and FGD (6.67%). From Figure 5, it can be observed that the FMSM and FSM graphs reach the top first, so that FMSM and FSM are the winners relative to NFE. On the other hand, the slowest iterations are GD and FGD.
Figure 6 shows the performance profile of the considered methods based on the CPUts for the numerical values included in Table 5. The F M S M method solves 23.33 % of the test problems with the least CPUts compared with M S M ( 30.00 % ) , S M ( 23.33 % ) , F S M ( 23.33 % ) , G D ( 6.67 % ) , and F G D ( 0 % ) . According to Figure 6, the F M S M and F S M graphs achieve the upper limit level 1 first, which verifies their dominance considering CPUts. Moreover, G D and F G D are the slowest methods.
Based on the data involved in Table 3, Table 4 and Table 5 and graphs in Figure 4, Figure 5 and Figure 6, it is noticed that the F M S M and F S M methods achieved the best results compared with the M S M , S M , G D , and F G D methods, with respect to three basic criteria: NI, NFE, and CPUts.
Table 6 contains the average CPU time, average number of iterations, and the average number of function evaluations for all 300 numerical experiments. Minimal values are marked in bold.
The average results in Table 6 confirm that the average results for F M S M and F S M are smaller with respect to the corresponding values for M S M and S M relative to NI, NFE, and CPUts. Such observation leads us to conclude that the use of a dynamic neutrosophic set (DNS) in gradient methods enables an improvement in the numerical results.

4.2. Closer Examination of the Optimization Methods

A closer examination of the optimization methods is presented in this subsection. The optimization methods GD, SM, MSM, FGD, FSM, and FMSM are used to solve two test functions from Table 3, Table 4 and Table 5 under different initial conditions (ICs). These functions are the Extended Penalty and the Diagonal 6, while the ICs were set to IC1: $1.5\cdot\mathbf{1}_{100}$, IC2: $\mathbf{1}_{100}$, and IC3: $4.5\cdot\mathbf{1}_{100}$ for the former and IC1: $1.5\cdot\mathbf{1}_{100}$, IC2: $2.5\cdot\mathbf{1}_{100}$, and IC3: $3.5\cdot\mathbf{1}_{100}$ for the latter. It is important to note that $\mathbf{1}_{100}$ denotes a vector of ones with dimensions $100\times 1$. The results of the optimization methods are depicted in Figure 7.
In the case of the Extended Penalty function, Figure 7a–c show, respectively, the convergence of the optimization methods with IC1, IC2 and IC3. Therein, the convergence of F G D and F S M are identical in the cases of IC1 and IC2, whereas the convergence of F G D is slightly faster than G D ’s, and the convergence of F S M is slightly faster than S M ’s in the case of IC3. The convergence of F M S M is faster than M S M ’s in the cases of IC2 and IC3, but it is slower than the convergence of F G D and F S M in the case of IC1. Additionally, F M S M finds the function’s minimum point for all ICs with greater accuracy than the other methods.
In the case of the Diagonal 6 function, Figure 7d–f show, respectively, the convergence of the optimization methods with IC1, IC2, and IC3. Therein, the convergence of G D and F G D are identical for all ICs, whereas the convergence of F S M is faster than S M ’s for all ICs. The convergence of F M S M is faster than M S M ’s in the cases of IC1 and IC2 and slower in the case of IC3. However, F M S M finds the function’s minimum point in the cases of IC2 and IC3 with greater accuracy than the other methods, while M S M finds the function’s minimum point in the case of IC1 with greater accuracy than the other methods. Additionally, G D and F G D have the fastest convergence in the case of IC1, while F S M has the fastest convergence in the cases of IC2 and IC3.
In general, all the optimization methods presented here were able to find the minimum of the Extended Penalty and the Diagonal 6 functions. The ICs have a significant impact on the optimization methods’ accuracy and speed of convergence. However, F G D , F S M , and F M S M have faster convergence than G D , S M , and M S M , respectively, in most cases.

4.3. Ranking the Optimization Methods

In this subsection, the performances of the optimization methods GD, SM, MSM, FGD, FSM, and FMSM in solving the 30 test functions included in Table 3, Table 4 and Table 5 are ranked from best to worst, i.e., from rank 1 to rank 6, respectively. After determining the rank for each test function for each method, it is necessary to calculate the final rank of the methods. The final rank of the methods is based on the average of the ranks obtained for each method in relation to the observed test functions. The method with the lowest average has the highest rank, i.e., rank 1, while the method with the highest average has the lowest rank, i.e., rank 6. We denote by $n_m$ (resp. $n_{tf}$) the number of methods (resp. the number of test functions). Given a set of methods $M$ and a set of functions $F$, the rank of the method $x$ on the function $y$ is denoted by $r_{x,y}$. In our case, $r_{x,y}$ stands for the rank of method $x$ on the observed test function $y$ and can take values from rank 1 to rank 6. The average rank of method $x \in M$ is calculated in the following way:
$$AR_x = \frac{\sum_{y\in F} r_{x,y}}{n_{tf}},$$
where A R x represents the average of all ranks of the observed method x. The final average rank in our case is obtained when all average ranks are ranked from best to worst, i.e., rank 1 to rank 6, respectively.
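The average-rank computation can be expressed compactly; the sketch below ranks a cost table per test function (rank 1 = best) and averages the ranks, with ties broken by ordinal position for simplicity, which is an assumption rather than the paper's tie-breaking rule.

```python
import numpy as np

def average_ranks(costs):
    """Average ranks of methods over test functions.

    costs : (n_test_functions, n_methods) array; smaller is better.
    Returns a 1-D array AR_x of average ranks (1 = best possible).
    """
    costs = np.asarray(costs, dtype=float)
    # argsort twice yields ordinal ranks 0..n-1 per row; +1 makes them 1-based.
    ranks = costs.argsort(axis=1).argsort(axis=1) + 1
    return ranks.mean(axis=0)

# Illustrative use: three methods on four test functions (made-up iteration counts).
print(average_ranks([[12, 10, 15], [8, 9, 20], [30, 25, 40], [5, 7, 6]]))
```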
Figure 8 shows the iterations’ performance rank of the optimization methods on 30 functions and their average iterations’ rank. Note that a method is regarded as rank 1 if it requires the fewest iterations out of all the considered methods. If a method has the second-fewest iterations compared with all the compared methods, it would be considered rank 2, and so on. Particularly, Figure 8a displays the number of functions in which each method is ranked as rank 1, rank 2, etc., while Figure 8b displays the final rank of the methods based on the average of the results presented in Figure 8a.
For example, in Figure 8a, M S M reached rank 1 in the same or a higher number of test functions than F S M and F M S M . However, because M S M achieved rank 6 in many more functions than F S M and F M S M in Figure 8b, M S M has an average rank 3, F S M an average rank 2, and F M S M an average rank 1. In other words, F M S M outperforms FSM and MSM in terms of iteration performance. Moreover, the fact that F M S M and F S M iterations outperform their corresponding original methods is another important discovery from Figure 8b.
Figure 9 shows the function evaluations performance ranking on 30 functions and their average rank. Note that a method is regarded as rank 1 if it requires the fewest number of function evaluations out of all the considered methods. If a method has the second-fewest function evaluations compared with all the compared methods, it would be considered rank 2, and so on. Particularly, Figure 9a displays the number of functions in which each method is ranked as rank 1, rank 2, etc., whereas Figure 9b displays the final function evaluation ranks of the methods based on the average of the results presented in Figure 9a.
M S M achieved rank 1 positions in a higher number of functions than all the methods considered in Figure 9a, whereas FGD was considered rank 6 in a higher number of functions than all the methods that were considered. As a result, M S M has the average rank 1, and F G D takes the average rank 6 in Figure 9b. That is, M S M outperforms all the considered methods in terms of function evaluation performance. Moreover, the fact that F S M , the fuzzy method, outperforms the original S M method is another crucial discovery from Figure 9b.
Figure 10 shows the CPU time consumption performance rank of the optimization methods on 30 functions and their average rank. A method is of rank 1 if it requires the least amount of CPU time compared with all the methods considered. A method achieves rank 2 if it requires the second-least amount of CPU time compared with all the methods, and so on. Particularly, Figure 10a displays the number of functions in which each method is ranked as rank 1, rank 2, etc., whereas Figure 10b displays the final rank of the methods, based on the average of the results presented in Figure 10a.
M S M is observed as rank 1 in a higher number of functions than all the methods considered in Figure 10a, whereas F G D was considered rank 6 in a higher number of functions than all the compared methods. As a result, M S M has an average rank 3 and F G D an average rank 6 in Figure 10b. If we look at Figure 10b, we can see that F M S M outperforms all the methods considered in terms of CPU time consumption performance.
To summarize, all the fuzzy methods work excellently in finding the minimum of the 30 functions. In general, F M S M has the best iteration performance, M S M has the best function evaluation performance, and F M S M has the best CPU time consumption performance.
We use the notation $M_i \succ M_j$ to signify that the method $M_i$ is ranked better than $M_j$.
  • Figure 8b leads to the conclusion $FMSM \succ FSM \succ MSM \succ SM \succ GD \succ FGD$.
  • Figure 9b leads to the conclusion $MSM \succ FSM \succ SM \succ FMSM \succ GD \succ FGD$.
  • Figure 10b leads to the conclusion $FMSM \succ SM \succ MSM \succ FSM \succ GD \succ FGD$.
In general, FMSM has the best iteration performance, MSM has the best function evaluation performance, and FMSM has the best CPU time consumption performance. An interesting conclusion is $GD \succ FGD$ in the last positions according to all criteria. A particularly interesting observation is that the proposed fuzzy parameter $\nu_k$ improves the SM and MSM methods, but it is not suitable for GD. The logical conclusion is that the fuzzy parameter $\nu_k$ should not be used as an isolated parameter; it is preferable to use it in combination with other scaling parameters.

4.4. Application of the Fuzzy Optimization Methods to Regression Analysis

Regression analysis is an important statistical tool commonly used in the fields of accounting, economics, management, physics, finance, and many more. This tool is used to study the interaction between independent and dependent variables of various data sets. The classical function of regression analysis is defined as
$$y = f(x_1, x_2, \ldots, x_k) + \epsilon,$$
where $x_i$, $i = 1, 2, \ldots, k$, $k > 0$, are the predictor variables, $y$ is the response variable, and $\epsilon$ is the error. The linear regression function is obtained by a straight-line relationship between $y$ and $x$:
$$y = a_0 + a_1 x_1 + a_2 x_2 + \cdots + a_k x_k + \epsilon,$$
where $a_0, a_1, \ldots, a_k$ are the parameters of the regression. The main aim of regression analysis is to estimate the parameters $a_0, a_1, \ldots, a_k$ so that the error $\epsilon$ is minimized. However, a linear relationship rarely occurs. Thus, a nonlinear regression scheme is frequently used. In this paper, we considered the quadratic regression model. The least squares method is the most popular approach to fitting a regression line, and the model is defined by
$$y = a_0 + a_1 x + a_2 x^2.$$
The errors for a set of data $(x_i, y_i)$, $i = 1, 2, \ldots, n$, are defined as follows:
$$E_i(a) = y_i - (a_0 + a_1 x_i + a_2 x_i^2), \quad a = (a_0, a_1, a_2).$$
The main goal is to fit the "best" line through the data in order to minimize the sum of the squared residual errors over all the available data:
$$\min_{a\in\mathbb{R}^3}\ \sum_{i=1}^n E_i^2(a), \quad a = (a_0, a_1, a_2).$$
The data set in Table 7 is a detailed description of people killed in traffic accidents in Serbia from 2012–2021. This set was considered based on the annual reports of the Agency for Traffic Safety of the Republic of Serbia. The ordinal number of the year of data collection is denoted by the x variable and the number of people killed in traffic accidents in Serbia is represented by the y variable. Moreover, only data from 2012–2020 would be considered for the data fitting, while data for 2021 would be reserved for the error analysis.
The least squares, FMSM, FSM, and FGD methods are used for fitting the regression models to the data collected. The least squares method is frequently used to solve overdetermined linear systems, which usually occurs when the given equations are greater than the number of unknowns [36]. The least squares method includes determining the best approximating line by comparing the total least squares error.
The approximate function for the nonlinear least squares method derived using the data in Table 7 is defined as follows:
$$f(x) = 0.5303030303031\,x^2 - 24.1030303030320\,x + 685.1666666666750.$$
For more details on how the approximate function (61) is calculated, see [36]. Let x i denote the ordinal number of the year and y i be the number of people killed in traffic accidents in that year. Then, the least squares method (58) is transformed into the following unconstrained minimization problems:
$$\min_{a\in\mathbb{R}^3} f(a) = \min_{a\in\mathbb{R}^3}\sum_{i=1}^n E_i^2(a) = \min_{a\in\mathbb{R}^3}\sum_{i=1}^n\left[y_i - (a_0 + a_1 x_i + a_2 x_i^2)\right]^2, \quad a = (a_0, a_1, a_2),$$
where $n = 9$, i.e., $i$ takes values from 1 to 9, corresponding to the years 2012 to 2020. The data from 2012–2020 are utilized to formulate the nonlinear quadratic model for the least squares method and the corresponding test function of the unconstrained optimization problem. However, the data for 2021 are excluded from the unconstrained optimization function so that they can be used to compute the relative errors of the predicted data. The relative error is calculated using the following formula to measure the precision of a regression model:
$$\text{Relative Error} = \frac{|\text{Exact value} - \text{Approximate value}|}{|\text{Exact value}|}.$$
The regression model with the least relative error is considered the best.
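As an illustration of the fitting and error computation described above, the sketch below fits the quadratic model by ordinary least squares and evaluates the relative error of a prediction on a held-out point. The data and the held-out "exact" value are synthetic placeholders, not the Table 7 accident counts, and the FMSM, FSM, and FGD methods of Section 2 could be applied to the same objective in place of the closed-form solver.

```python
import numpy as np

def fit_quadratic_least_squares(x, y):
    """Least-squares fit of y ~ a0 + a1*x + a2*x^2 (baseline regression model)."""
    X = np.vstack([np.ones_like(x), x, x ** 2]).T     # design matrix
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a                                          # (a0, a1, a2)

def relative_error(exact, approx):
    return abs(exact - approx) / abs(exact)

# Synthetic placeholder data (NOT the Table 7 values).
x = np.arange(1.0, 10.0)                              # ordinal years 1..9
y = 685.0 - 24.0 * x + 0.5 * x ** 2                   # hypothetical counts
a0, a1, a2 = fit_quadratic_least_squares(x, y)
exact_2021 = 540.0                                    # hypothetical held-out value
prediction = a0 + a1 * 10 + a2 * 100                  # model prediction for year 10
print(a0, a1, a2, relative_error(exact_2021, prediction))
```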
The application of the conjugate gradient method in regression analysis, i.e., to the optimization problems of finding the regression parameters $a_0, a_1, \ldots, a_k$, was considered in [37,38,39,40]. To overcome the difficulty of computing the values of $a_0$, $a_1$, and $a_2$ using the matrix inverse, we employed the proposed FMSM, FSM, and FGD methods to solve the test function (62), and the results are presented in Table 8.
The statistics of people killed in traffic accidents in Serbia is estimated using the proposed FMSM, FSM, FGD, least squares, and trend line methods. The trend line is plotted based on the real data obtained from Table 7 using Microsoft Excel and is shown in Figure 11. The equation for the trend line is in the form of a nonlinear quadratic equation
$$y = 0.5303\,x^2 - 24.103\,x + 685.17.$$
If we compare the approximation functions (61) and (64), as well as the regression parameters from Table 8 obtained using the FMSM, FSM, and FGD methods, we can see that there are small differences in the values of the parameters a 0 , a 1 , and a 2 .
The functions of the trend line (64) and the least square method (61) are compared with approximation functions from the FMSM, FSM, and FGD methods obtained by substituting the values of the parameters a 0 , a 1 , and a 2 in (58) for the initial point (1,1,1).
The primary aim of regression analysis is to estimate the parameters a 0 , a 1 , , a k such that the error ϵ is minimized. From Table 9, the proposed FMSM, FSM, and FGD methods have similar relative errors compared with the least square and trend line methods.
Thus, we can conclude that the proposed FMSM, FSM, and FGD methods are applicable to real-life situations.

5. Conclusions

It is known that iterations for solving nonlinear unconstrained minimization are based on the step size defined by an inexact line search. Such a step size ensures only a sufficient decrease in the value of the objective function. However, after that, there are plenty of possibilities for further adjustments based on the behavior of the objective function. Our goal is to use additional step length parameters to improve convergence. One of these parameters is the parameter $\gamma_k$, which was defined in previous works based on the Taylor expansion of the objective function. The second parameter, $\nu_k$, is defined in this paper using neutrosophic logic and the behavior of the objective function in two consecutive iterations. The enhancements of the main line search iterations for solving unconstrained optimization are provided based on the application of neutrosophic logic. Using an appropriate neutrosophic logic, we propose an additional gain parameter $\nu_k$ to handle the uncertainty in defining the parameters of nonlinear optimization methods. The parameter arises as the output of an appropriately defined neutrosophic logic system and is usable in various gradient descent methods as a corrective step size.
The performed theoretical analysis reveals the convergence of the novel iterations under the same conditions as for the corresponding original methods. Numerical comparison and statistical ranking point to better results generated by the proposed enhanced methods compared with some existing methods. Moreover, statistical measures reveal the advantages of the fuzzy and neutrosophic improvements compared with the original line search optimization methods. More precisely, our numerical experience shows that the neutrosophic parameter $\nu_k$ is particularly efficient as an additional step size composed with previously defined parameters, while its direct application is not as effective.
Additional research includes several new directions. First of all, other strategies in neutrosophication and de-neutrosophication are possible, as well as other frameworks parallel to neutrosophic sets, known as picture fuzzy sets and spherical fuzzy sets, discussed in the following articles [41,42]. These can be discussed in future research.
Empirical evaluation shows a high sensitivity of the results to the choice of the parameters that define the truth, falsity, and indeterminacy membership functions. Such experience confirms the assumption that a different configuration of parameters, as well as improvements in the neutrosophic logic engine, can lead to further improvements of the defined methods. The possibility of defining if–then rules in a more sophisticated way based on the history of the obtained values of $f(x)$ remains an open topic for future research. Another topic of future study is the investigation of a neutrosophic approach to enhance stochastic optimization methods. In addition, positive definite matrices $B_k$ are usable as more precise approximations of the Hessian compared with the simplest diagonal approximations. Finally, continuous-time nonlinear optimization assumes time-varying scaling parameters inside a selected time interval.

Author Contributions

Conceptualization, P.S.S. and V.N.K.; methodology, P.S.S., V.N.K. and L.A.K.; software, B.I. and S.D.M.; validation, V.N.K., P.S.S., D.S. and L.A.K.; formal analysis, P.S.S., S.D.M. and D.S.; investigation, P.S.S., S.D.M., V.N.K. and L.A.K.; resources, B.I. and S.D.M.; data curation, B.I. and S.D.M.; writing—original draft preparation, P.S.S., D.S. and S.A.E.; writing—review and editing, P.S.S., S.D.M. and S.A.E.; visualization, B.I. and S.D.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Ministry of Science and Higher Education of the Russian Federation (Grant No. 075-15-2022-1121).

Data Availability Statement

Data and code will be provided on request to authors.

Acknowledgments

Predrag Stanimirović is supported by the Science Fund of the Republic of Serbia, (No. 7750185, Quantitative Automata Models: Fundamental Problems and Applications—QUAM). This work was supported by the Ministry of Science and Higher Education of the Russian Federation (Grant No. 075-15-2022-1121).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Sun, W.; Yuan, Y.-X. Optimization Theory and Methods: Nonlinear Programming; Springer: Berlin/Heidelberg, Germany, 2006.
2. Brezinski, C. A classification of quasi-Newton methods. Numer. Algorithms 2003, 33, 123–135.
3. Nocedal, J.; Wright, S.J. Numerical Optimization; Springer: New York, NY, USA, 1999.
4. Petrović, M.J.; Stanimirović, P.S. Accelerated Double Direction method for solving unconstrained optimization problems. Math. Probl. Eng. 2014, 2014, 965104.
5. Petrović, M.J.; Rakocević, V.; Kontrec, N.; Panić, S.; Ilić, D. Hybridization of accelerated gradient descent method. Numer. Algorithms 2018, 79, 769–786.
6. Stanimirović, P.S.; Miladinović, M.B. Accelerated gradient descent methods with line search. Numer. Algorithms 2010, 54, 503–520.
7. Stanimirović, P.S.; Milovanović, G.V.; Petrović, M.J. A transformation of accelerated double step size method for unconstrained optimization. Math. Probl. Eng. 2015, 2015, 283679.
8. Petrović, M.J. An accelerated Double Step Size method in unconstrained optimization. Appl. Math. Comput. 2015, 250, 309–319.
9. Ivanov, B.; Stanimirović, P.S.; Milovanović, G.V.; Djordjević, S.; Brajević, I. Accelerated multiple step-size methods for solving unconstrained optimization problems. Optim. Methods Softw. 2021, 36, 998–1029.
10. Petrović, M.J.; Stanimirović, P.S.; Kontrec, N.; Mladenović, J. Hybrid modification of Accelerated Double Direction method. Math. Probl. Eng. 2018, 2018, 1523267.
11. Picard, E. Mémoire sur la théorie des équations aux dérivées partielles et la méthode des approximations successives. J. Math. Pures Appl. 1890, 6, 145–210.
12. Ishikawa, S. Fixed points by a new iteration method. Proc. Am. Math. Soc. 1974, 44, 147–150.
13. Khan, S.H. A Picard-Mann hybrid iterative process. Fixed Point Theory Appl. 2013, 2013, 69.
14. Rakočević, V.; Petrović, M.J. Comparative analysis of accelerated models for solving unconstrained optimization problems with application of Khan’s hybrid rule. Mathematics 2022, 10, 4411.
15. Humaira, M.S.; Tunç, C. Fuzzy fixed point results via rational type contractions involving control functions in complex-valued metric spaces. Appl. Math. Inf. Sci. 2018, 12, 861–875.
16. Vrahatis, M.N.; Androulakis, G.S.; Lambrinos, J.N.; Magoulas, G.D. A class of gradient unconstrained minimization algorithms with adaptive step-size. J. Comput. Appl. Math. 2000, 114, 367–386.
17. Zadeh, L.A. Fuzzy sets. Inf. Control 1965, 8, 338–353.
18. Atanassov, K.T. Intuitionistic fuzzy sets. Fuzzy Sets Syst. 1986, 20, 87–96.
19. Smarandache, F. A Unifying Field in Logics, Neutrosophy: Neutrosophic Probability, Set and Logic; American Research Press: Rehoboth, NM, USA, 1999.
20. Wang, H.; Smarandache, F.; Zhang, Y.Q.; Sunderraman, R. Single valued neutrosophic sets. Multispace Multistruct. 2010, 4, 410–413.
21. Khalil, A.M.; Cao, D.; Azzam, A.; Smarandache, F.; Alharbi, W.R. Combination of the single-valued neutrosophic fuzzy set and the soft set with applications in decision-making. Symmetry 2020, 12, 1361.
22. Mishra, K.; Kandasamy, I.; Kandasamy, W.B.V.; Smarandache, F. A novel framework using neutrosophy for integrated speech and text sentiment analysis. Symmetry 2020, 12, 1715.
23. Tu, A.; Ye, J.; Wang, B. Symmetry measures of simplified neutrosophic sets for multiple attribute decision-making problems. Symmetry 2018, 10, 144.
24. Smarandache, F. Neutrosophic Logic—A Generalization of the Intuitionistic Fuzzy Logic. 25 January 2016. Available online: https://ssrn.com/abstract=2721587 (accessed on 1 September 2021).
25. Ansari, A.Q. From fuzzy logic to neutrosophic logic: A paradigm shift and logics. In Proceedings of the 2017 International Conference on Intelligent Communication and Computational Techniques (ICCT), Jaipur, India, 22–23 December 2017; pp. 11–15.
26. Guo, Y.; Cheng, H.D.; Zhang, Y. A new neutrosophic approach to image denoising. New Math. Nat. Comput. 2009, 5, 653–662.
27. Christianto, V.; Smarandache, F. A Review of Seven Applications of Neutrosophic Logic: In Cultural Psychology, Economics Theorizing, Conflict Resolution, Philosophy of Science, etc. Multidiscip. Sci. J. 2019, 2, 128–137.
28. Andrei, N. An acceleration of gradient descent algorithm with backtracking for unconstrained optimization. Numer. Algorithms 2006, 42, 63–73.
29. Andrei, N. Relaxed Gradient Descent and a New Gradient Descent Methods for Unconstrained Optimization. Available online: https://camo.ici.ro/neculai/newgrad.pdf (accessed on 29 November 2022).
30. Shi, Z.-J. Convergence of line search methods for unconstrained optimization. Appl. Math. Comput. 2004, 157, 393–405.
31. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA; London, UK, 1970.
32. Rockafellar, R.T. Convex Analysis; Princeton University Press: Princeton, NJ, USA, 1970.
33. Andrei, N. An unconstrained optimization test functions collection. Adv. Model. Optim. 2008, 10, 147–161.
34. Bongartz, I.; Conn, A.R.; Gould, N.; Toint, P.L. CUTE: Constrained and unconstrained testing environments. ACM Trans. Math. Softw. 1995, 21, 123–160.
35. Dolan, E.D.; Moré, J.J. Benchmarking optimization software with performance profiles. Math. Program. 2002, 91, 201–213.
36. Dawahdeh, M.; Mamat, M.; Rivaie, M.; Sulaiman, I.M. Application of conjugate gradient method for solution of regression models. Int. J. Adv. Sci. Technol. 2020, 29, 1754–1763.
37. Moyi, A.U.; Leong, W.J.; Saidu, I. On the application of three-term conjugate gradient method in regression analysis. Int. J. Comput. Appl. 2014, 102, 1–4.
38. Sulaiman, I.M.; Bakar, N.A.; Mamat, M.; Hassan, B.A.; Malik, M.; Ahmed, A.M. A new hybrid conjugate gradient algorithm for optimization models and its application to regression analysis. Indones. J. Electr. Eng. Comput. Sci. 2021, 23, 1100–1109.
39. Sulaiman, I.M.; Malik, M.; Awwal, A.M.; Kumam, P.; Mamat, M.; Al-Ahmad, S. On three-term conjugate gradient method for optimization problems with applications on COVID-19 model and robotic motion control. Adv. Contin. Discret. Model. 2022, 2022, 1.
40. Sulaiman, I.M.; Mamat, M. A new conjugate gradient method with descent properties and its application to regression analysis. J. Numer. Anal. Ind. Appl. Math. 2020, 14, 25–39.
41. Mahmood, T.; Ullah, K.; Khan, Q.; Jan, N. An approach toward decision-making and medical diagnosis problems using the concept of spherical fuzzy sets. Neural Comput. Appl. 2019, 31, 7041–7053.
42. Ullah, K. Picture fuzzy Maclaurin symmetric mean operators and their applications in solving multiattribute decision-making problems. Math. Probl. Eng. 2021, 2021, 1098631.
Figure 1. The general structure of the fuzzy optimization methods.
Figure 2. The NLC structure decomposed by the neutrosophic rules.
Figure 3. Neutrosophication (20)–(22) and de-neutrosophication (24) under the parameters in Table 2. (a) Neutrosophication. (b) De-neutrosophication.
Figure 4. NI performance profiles for the MSM, SM, GD, FSM, FGD, and FMSM methods.
Figure 5. NFE performance profiles for the MSM, SM, GD, FSM, FGD, and FMSM methods.
Figure 6. CPUts performance profiles for the MSM, SM, GD, FSM, FGD, and FMSM methods.
Figure 7. Convergence of the optimization methods under different ICs. (a) Extended Penalty function with IC1. (b) Extended Penalty function with IC2. (c) Extended Penalty function with IC3. (d) Diagonal 6 function with IC1. (e) Diagonal 6 function with IC2. (f) Diagonal 6 function with IC3.
Figure 8. Iterations’ performance ranks of the optimization methods on 30 functions and their average rank. (a) Iterations’ performance. (b) Average of iterations’ performance.
Figure 9. Function evaluation performance ranks of the optimization methods on 30 functions and their average rank. (a) Function evaluations performance. (b) Average of function evaluation performance.
Figure 10. CPU time consumption performance ranks of the optimization methods on 30 functions and their average rank. (a) Time consumption’s performance. (b) Average of time consumption’s performance.
Figure 11. Nonlinear quadratic trend line for people killed in traffic accidents in Serbia.
Table 1. Parameters in gradient descent methods and neutrosophic modifications.

Method | First Step Size | Second Step Size | Third Step Size
GD | t k | - | -
FGD | ν k | t k | -
SM | t k | ( γ k S M ) −1 | -
FSM | ν k | t k | ( γ k S M ) −1
MSM | τ k | ( γ k M S M ) −1 | -
FMSM | ν k | τ k | ( γ k M S M ) −1
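Table 1 indicates that each neutrosophic modification (FGD, FSM, FMSM) places an additional step size ν k in front of the step sizes of its base method. As a hedged illustration only, the following Python sketch contrasts a classical gradient descent step with a step rescaled by such a factor; the backtracking routine, the test function, and the fixed value nu_k are assumptions made for illustration and do not reproduce the paper's neutrosophic logic controller.

```python
import numpy as np

def backtracking(f, x, g, t0=1.0, beta=0.5, sigma=1e-4):
    """Simple Armijo backtracking line search (illustrative choice)."""
    t, fx, d = t0, f(x), -g
    while f(x + t * d) > fx + sigma * t * (g @ d):
        t *= beta
    return t

def gd_step(f, grad, x):
    """Classical gradient descent step: x_{k+1} = x_k - t_k g_k."""
    g = grad(x)
    t = backtracking(f, x, g)
    return x - t * g

def fgd_step(f, grad, x, nu_k):
    """Neutrosophic-style step: the base step size t_k is additionally rescaled
    by nu_k (here supplied externally as a placeholder for the controller output)."""
    g = grad(x)
    t = backtracking(f, x, g)
    return x - nu_k * t * g

# Hypothetical usage on f(x) = ||x||^2 / 2
f = lambda x: 0.5 * (x @ x)
grad = lambda x: x
x = np.array([4.0, -3.0])
x = fgd_step(f, grad, x, nu_k=0.9)
```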
Table 2. Recommended parameters in NLC.

Set | Membership Function | c 1 | c 2 | Weight
Input (Truth) | Sigmoid | 1 | 3 | 1
Input (Falsity) | Sigmoid | 1 | 3 | 1
Input (Indeterminacy) | Gaussian | 6 | 0 | 1
Output | (24) | 3 | - | 1
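According to Table 2, the truth and falsity inputs use sigmoid membership functions with c 1 = 1 and c 2 = 3, while the indeterminacy input uses a Gaussian membership function with c 1 = 6 and c 2 = 0. Since the exact forms are defined by Equations (20)–(24) earlier in the paper, the snippet below is only a sketch of typical parametrized membership functions of these shapes; the functional forms, the roles of c 1 and c 2, and the complementary treatment of falsity are assumptions.

```python
import numpy as np

def sigmoid_mf(x, c1, c2):
    """Sigmoid membership function with slope c1 and center c2 (assumed form)."""
    return 1.0 / (1.0 + np.exp(-c1 * (x - c2)))

def gaussian_mf(x, c1, c2):
    """Gaussian membership function with spread c1 and center c2 (assumed form)."""
    return np.exp(-((x - c2) ** 2) / (2.0 * c1 ** 2))

# Parameters recommended in Table 2 (all weights equal to 1)
x = np.linspace(-5.0, 10.0, 301)
truth = sigmoid_mf(x, c1=1, c2=3)               # truth membership
falsity = 1.0 - sigmoid_mf(x, c1=1, c2=3)       # falsity, assumed complementary
indeterminacy = gaussian_mf(x, c1=6, c2=0)      # indeterminacy membership
```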
Table 3. Summary of NI results for MSM, SM, GD, FSM, FGD, and FMSM.

Test Function | No. of Iterations: MSM | FMSM | SM | FSM | GD | FGD
Extended Penalty Function65137754937212551250
Perturbed Quadratic function44,41975,43177,45874,473372,356369,992
Raydan 1 function12,96512,43715,91311,03558,74358,594
Raydan 2 function9087909467129
Diagonal 1 function52,52711,571895512,18941,20842,290
Diagonal 2 function26,21524,86630,91229,957543,249543,054
Diagonal 3 function754512,58613,89213,05062,12861,072
Hager function28,07380083981731042956
Generalized Tridiagonal 1 function290440270376656665
Extended TET function13024813022519741856
Extended quadratic penalty QP1 function328189246177563549
Extended quadratic penalty QP2 function1538210533023564134,401122,926
Quadratic QF2 function44,91114,20383,95711,488409,859411,364
Extended quadratic exponential EP1 function8710064109496528
Extended tridiagonal 2 function56842141941511451099
Almost perturbed quadratic function44,02978,45280,55979,793374,841375,518
ENGVAL1 function (CUTE)363298302291573557
QUARTC function (CUTE)185216246211524,612524,612
Diagonal 6 function9087909567129
Generalized quartic function15015015723814531751
Diagonal 7 function12411390136543570
Diagonal 8 function1008610389583573
Diagonal 9 function16,92017,22111,48717,752195,362195,155
HIMMELH function (CUTE)10090100909090
Extended Rosenbrock505050505050
Extended BD1 function (block diagonal)189204191223650682
NONDQUAR function (CUTE)423942353330
DQDRTIC function (CUTE)827635126349715,32015,398
Extended Beale function48098063983112,83412,826
EDENSCH function (CUTE)337314275275663705
Table 4. Summary of NFE results for MSM, SM, GD, FSM, FGD, and FMSM.

Test Function | No. of Function Evaluations: MSM | FMSM | SM | FSM | GD | FGD
Extended Penalty Function352725852394238847,37848,057
Perturbed quadratic function257,063438,335439,924423,19516,171,46616,069,927
Raydan 1 function89,50869,79187,50861,5951,667,2381,658,647
Raydan 2 function190233190235144291
Diagonal 1 function526,95856,91447,87458,1551,615,8281,664,760
Diagonal 2 function158,515144,005171,300166,5671,086,5081,086,118
Diagonal 3 function41,52871,02476,33670,5402,407,0252,364,254
Hager function271,94034023308316556,82454,818
Generalized tridiagonal 1 function10121587931144510,86711,432
Extended TET function44068144060119,80018,859
Extended quadratic penalty QP1 function191819922507184210,77111,268
Extended quadratic penalty QP2 function10,73114,28524,23426,5283,875,7683,545,317
Quadratic QF2 function245,407102,882465,61580,62619,072,36719,141,623
Extended quadratic exponential EP1 function80760458783013,64314,852
Extended tridiagonal 2 function255021232285211195709464
Almost perturbed quadratic function259,487452,388452,360445,02816,285,62116,309,931
ENGVAL1 function (CUTE)197427002098231587878593
QUARTC function (CUTE)4204925424721,049,2741,049,304
Diagonal 6 function229335229263158332
Generalized quartic function40947042378119,06225,071
Diagonal 7 function458547293109433484286
Diagonal 8 function32646298061239214078
Diagonal 9 function141,78190,94871,35389,0238,449,9468,455,412
HIMMELH Function (CUTE)210190210190190190
Extended Rosenbrock110110110110110110
Extended BD1 function (Block Diagonal)55869659869176608452
NONDQUAR function (CUTE)208420852057206025002501
DQDRTIC function (CUTE)4090280565182542395,014400,147
Extended Beale function2200472032773416207,852208,551
EDENSCH function (CUTE)11981213956872940310,615
Table 5. Summary of CPUts results for MSM, SM, GD, FSM, FGD, and FMSM.

Test Function | CPU Time (s): MSM | FMSM | SM | FSM | GD | FGD
Extended penalty function3.7341.9691.9691.84417.67219.078
Perturbed quadratic function167.063323.266298.813317.25010,163.6889771.406
Raydan 1 function46.81335.14150.95330.234727.281667.094
Raydan 2 function0.4530.2810.2810.3440.2500.531
Diagonal 1 function522.70386.50059.29799.9531836.7662091.281
Diagonal 2 function236.531228.188271.094276.2812105.2192158.156
Diagonal 3 function75.484172.250139.859157.5943842.6254025.688
Hager function384.4389.5949.4539.250116.922118.609
Generalized tridiagonal 1 function2.6563.1882.0003.79711.64114.875
Extended TET function0.9531.3130.9061.35915.92216.281
Extended quadratic penalty QP1 function1.6881.6251.8751.5784.2034.391
Extended quadratic penalty QP2 function5.8449.8917.20310.516746.328770.500
Quadratic QF2 function124.34447.875243.68835.3597611.6568436.359
Extended quadratic exponential EP1 function0.9690.5940.4691.1095.2817.297
Extended tridiagonal 2 function1.9061.3131.6091.2663.3593.766
Almost perturbed quadratic function135.484314.953238.625267.7509271.01613,902.047
ENGVAL1 function (CUTE)2.0311.7971.8441.8284.1254.422
QUARTC function (CUTE)2.8132.9843.2503.2196253.8288032.547
Diagonal 6 function0.3280.2190.3440.4840.2030.438
Generalized quartic function0.3440.2660.4380.6256.76611.922
Diagonal 7 function0.9530.7970.5311.8133.6724.406
Diagonal 8 function0.7810.9221.7971.0475.5784.469
Diagonal 9 function249.87574.48453.23477.2192478.4222705.781
HIMMELH function (CUTE)0.7970.5940.7810.7970.6090.641
Extended Rosenbrock0.2030.0940.1560.2030.2190.141
Extended BD1 function (block diagonal)0.7660.7660.8590.9694.9844.469
NONDQUAR function (CUTE)7.2668.8917.7979.0479.40610.406
DQDRTIC function (CUTE)2.5161.5002.9061.500118.250127.844
Extended Beale function7.21918.7349.76616.016488.328546.359
EDENSCH function (CUTE)6.1416.4224.0165.06324.67236.766
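Figures 4–6 report Dolan–Moré performance profiles [35] constructed from the NI, NFE, and CPUts results in Tables 3–5. The sketch below illustrates the standard construction of a profile curve ρ s ( τ ) from a cost matrix; the matrix T, the encoding of failures by np.inf, and the grid of τ values are illustrative assumptions and not the authors' benchmarking code.

```python
import numpy as np

def performance_profile(T, taus):
    """Dolan-More performance profiles.

    T    : (n_problems, n_solvers) array of costs (NI, NFE, or CPU time);
           np.inf marks a failure on that problem.
    taus : 1D array of performance-ratio thresholds (tau >= 1).
    Returns an (n_taus, n_solvers) array with rho_s(tau) = fraction of
    problems on which solver s is within a factor tau of the best solver.
    """
    T = np.asarray(T, dtype=float)
    best = T.min(axis=1, keepdims=True)   # best cost per problem
    ratios = T / best                     # performance ratios r_{p,s}
    return np.array([(ratios <= tau).mean(axis=0) for tau in taus])

# Hypothetical usage: 3 problems, 2 solvers
T = np.array([[100.0, 80.0],
              [250.0, 260.0],
              [90.0, 90.0]])
taus = np.linspace(1.0, 3.0, 50)
rho = performance_profile(T, taus)        # rho[:, s] is the curve for solver s
```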
Table 6. Average numerical outcomes for 30 test functions tested on 10 numerical experiments.

Average Performances | MSM | FMSM | SM | FSM | GD | FGD
Average no. of iterations | 9477.43 | 8493.20 | 11,086.33 | 8631.57 | 91,962.60 | 91,565.67
Average no. of funct. evaluation | 67,587.60 | 49,020.13 | 62,247.90 | 48,309.73 | 2,416,934.77 | 2,406,242.00
Average CPU time (s) | 66.44 | 45.21 | 47.19 | 44.51 | 1529.30 | 1783.27
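Figures 8–10 rank the six methods on each of the 30 test functions with respect to NI, NFE, and CPUts and then average the ranks. One possible way to compute such average ranks is sketched below; the tie-handling rule and the hypothetical input matrix are assumptions rather than the authors' exact ranking procedure.

```python
import numpy as np

def average_ranks(costs):
    """Rank solvers (1 = best) on every problem and average over problems.

    costs : (n_problems, n_solvers) array of a performance measure
            (iterations, function evaluations, or CPU time).
    Ties receive the mean of the ranks they would occupy.
    """
    costs = np.asarray(costs, dtype=float)
    n_problems, n_solvers = costs.shape
    ranks = np.empty_like(costs)
    for i in range(n_problems):
        order = np.argsort(costs[i])              # indices from best to worst
        r = np.empty(n_solvers)
        r[order] = np.arange(1, n_solvers + 1)    # provisional ranks 1..n
        for v in np.unique(costs[i]):             # average ranks over ties
            mask = costs[i] == v
            r[mask] = r[mask].mean()
        ranks[i] = r
    return ranks.mean(axis=0)                     # average rank per solver

# Hypothetical usage: rows = test functions, columns = MSM, FMSM, SM, FSM, GD, FGD
# avg = average_ranks(ni_matrix)
```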
Table 7. The number of people killed in traffic accidents in Serbia from 2012 to 2021.

Year | Number of Data (x) | The Number of People Killed in Traffic Accidents in Serbia (y)
2012 | 1 | 688
2013 | 2 | 650
2014 | 3 | 536
2015 | 4 | 599
2016 | 5 | 607
2017 | 6 | 579
2018 | 7 | 548
2019 | 8 | 534
2020 | 9 | 492
2021 | 10 | 521
Table 8. Test results for optimization of quadratic model for the FMSM, FSM, and FGD methods.

Method | Initial Point | NI | NFE | CPUts | a 0 | a 1 | a 2
FMSM | (1,1,1) | 28,998 | 119,898 | 1.484 | 685.166632504562 | −24.1030144870845 | 0.530301492634611
FSM | (1,1,1) | 29,612 | 120,545 | 1.609 | 685.166666629541 | −24.1030302889654 | 0.530303029090458
FGD | (1,1,1) | 173,004 | 7,861,471 | 35.125 | 685.161769964723 | −24.1009143873562 | 0.530114238129987
FMSM | (5,5,5) | 29,791 | 126,449 | 1.750 | 685.166627004962 | −24.102996538241 | 0.530299060289809
FSM | (5,5,5) | 29,504 | 119,706 | 1.406 | 685.166666659503 | −24.1030303019929 | 0.530303030290009
FGD | (5,5,5) | 172,876 | 7,855,584 | 36.812 | 685.161745521808 | −24.1009038359837 | 0.530113219772043
FMSM | (−1,−1,−1) | 29,259 | 120,695 | 1.484 | 685.166666761033 | −24.1030303425383 | 0.530303033790302
FSM | (−1,−1,−1) | 29,513 | 119,912 | 1.328 | 685.166388359794 | −24.1029100449169 | 0.530292483042678
FGD | (−1,−1,−1) | 173,698 | 7,893,030 | 37.797 | 685.161987072222 | −24.1010082057947 | 0.530122579942827
Table 9. Estimation point and relative errors for 2021 data.

Method | Estimation Point | Relative Error
FMSM | 497.16664 | 0.045745419
FSM | 497.16667 | 0.045745362
FGD | 497.16405 | 0.045750384
Least Square | 497.16667 | 0.045745361
Trend line | 497.17000 | 0.045738964
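Tables 7–9 concern the fitting of a quadratic model y ≈ a 0 + a 1 x + a 2 x^2 to the traffic-fatality data and the estimation of the 2021 value. The sketch below reproduces the "Least Square" row of Tables 8 and 9 by ordinary least squares under the assumption that the model is fitted to the 2012–2020 observations (x = 1, ..., 9) and then evaluated at x = 10; it is a consistency check, not the FMSM, FSM, or FGD solvers used in the paper.

```python
import numpy as np

# Data from Table 7 (x = year index, y = people killed); 2021 is held out
x = np.arange(1, 10)                                  # 2012..2020
y = np.array([688, 650, 536, 599, 607, 579, 548, 534, 492], dtype=float)

# Design matrix for the quadratic model y = a0 + a1*x + a2*x^2
A = np.vander(x, 3, increasing=True)                  # columns: 1, x, x^2
a0, a1, a2 = np.linalg.lstsq(A, y, rcond=None)[0]
# Expected: a0 ~ 685.1667, a1 ~ -24.1030, a2 ~ 0.5303 (cf. Table 8)

# Estimate for 2021 (x = 10) and its relative error against y = 521 (Table 9)
y_hat = a0 + a1 * 10 + a2 * 10 ** 2                   # ~ 497.1667
rel_err = abs(521 - y_hat) / 521                      # ~ 0.045745
print(f"a0={a0:.4f}, a1={a1:.4f}, a2={a2:.4f}, "
      f"estimate={y_hat:.5f}, relative error={rel_err:.9f}")
```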