Article

On New Generalized Viscosity Implicit Double Midpoint Rule for Hierarchical Problem

by Thanyarat Jitpeera 1, Anantachai Padcharoen 2 and Wiyada Kumam 3,*
1 Department of Science, Faculty of Science and Agricultural Technology, Rajamangala University of Technology Lanna (RMUTL), Chiangrai 57120, Thailand
2 Department of Mathematics, Faculty of Science and Technology, Rambhai Barni Rajabhat University, Chanthaburi 22000, Thailand
3 Applied Mathematics for Science and Engineering Research Unit (AMSERU), Program in Applied Statistics, Department of Mathematics and Computer Science, Faculty of Science and Technology, Rajamangala University of Technology Thanyaburi (RMUTT), Pathum Thani 12110, Thailand
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(24), 4755; https://doi.org/10.3390/math10244755
Submission received: 3 October 2022 / Revised: 8 December 2022 / Accepted: 9 December 2022 / Published: 14 December 2022

Abstract

The implicit midpoint rule is a powerful numerical technique, and in this article we study a class of viscosity iterative approximations based on the implicit double midpoint rule for hierarchical problems. We establish a strong convergence theorem to the unique solution of the hierarchical problem under suitable conditions imposed on the control parameters in Hilbert spaces. Furthermore, we propose applications to the constrained convex minimization problem, the nonlinear Fredholm integral equation and the variational inequality over a fixed point set. Moreover, some numerical examples are presented to illustrate the proposed methods and the convergence results. Our results combine the implicit double midpoint rule with the hierarchical problem.

1. Introduction

To begin with, we fix the notation used throughout the paper. Let H be a real Hilbert space with inner product $\langle \cdot , \cdot \rangle$ and induced norm $\| \cdot \|$, and let C be a closed convex subset of H. The notations $\rightharpoonup$ and $\to$ denote weak convergence and strong convergence, respectively.
Next, we recall some definitions that will be used in the remainder of the paper. We begin with the well-known variational inequality problem [1], which is to find $x^* \in C$ satisfying
$$\langle A x^*, x - x^* \rangle \ge 0, \quad \forall x \in C,$$
where C is nonempty. Its solution set is denoted by $VI(C,A)$, that is,
$$VI(C,A) = \{ x^* \in C : \langle A x^*, x - x^* \rangle \ge 0, \ \forall x \in C \}.$$
A contraction mapping $f : C \to C$ with constant $\rho \in [0,1)$ satisfies, for all $x, y \in C$,
$$\| f(x) - f(y) \| \le \rho \| x - y \| .$$
A self-mapping $A$ on $H$ is said to be $\alpha$-strongly monotone if there exists a positive real number $\alpha$ satisfying
$$\langle A x - A y, x - y \rangle \ge \alpha \| x - y \|^2, \quad \forall x, y \in H .$$
A self-mapping $A$ on $H$ is called $L$-Lipschitz continuous if there exists a real number $L > 0$ such that
$$\| A x - A y \| \le L \| x - y \|, \quad \forall x, y \in H .$$
A bounded linear operator $A$ on $H$ is called strongly positive if there exists a constant $\bar{\gamma} > 0$ such that
$$\langle A x, x \rangle \ge \bar{\gamma} \| x \|^2, \quad \forall x \in H .$$
A mapping $T$ is called nonexpansive if
$$\| T x - T y \| \le \| x - y \|$$
for all $x, y \in C$.
A point $x \in C$ is a fixed point of a mapping $T$ if $T x = x$. A mapping may have one, several, or no fixed points; when fixed points exist, we denote the fixed point set by $Fix(T)$, i.e., $Fix(T) = \{ x \in C : T x = x \}$.
If $T : C \to C$ is nonexpansive and $C$ is bounded, closed and convex, then $Fix(T)$ is nonempty [2].
Since the variational inequality problem has attracted many mathematicians seeking the best way to solve it, a new and interesting problem, known as the hierarchical problem, arose as a refinement of the classical variational inequality. Instead of considering the variational inequality over a closed convex set C, one considers it over the fixed point set of a nonexpansive mapping $T : C \to C$. This problem can be stated as follows:
Let $A : C \to H$ be a monotone continuous mapping and $T : C \to C$ a nonexpansive mapping. The hierarchical problem is to find $x^* \in Fix(T)$ such that
$$\langle A x^*, x - x^* \rangle \ge 0, \quad \forall x \in Fix(T),$$
where $Fix(T)$ is nonempty; we denote its solution set by $VI(Fix(T), A)$. There are many studies of this problem in the literature [3,4,5,6,7,8,9,10,11,12,13,14,15,16,17].
In 2011, Yao et al. [18] proposed an iterative algorithm that converges strongly to the unique solution of a variational inequality arising from a hierarchical problem. Their iterative algorithm generates the sequence $\{x_n\}$ by
$$x_{n+1} = \beta_n x_n + (1 - \beta_n) T P_C[ I - \alpha_n (A - \gamma f) ] x_n, \quad n \ge 0,$$
where $x_0 \in C$ is chosen arbitrarily and both sequences $\{\alpha_n\}$ and $\{\beta_n\}$ lie in $[0,1]$. Under appropriate assumptions, they guaranteed that the generated sequence converges to the unique solution $x^* \in Fix(T)$ of the following variational inequality:
$$\langle (A - \gamma f) x^*, x - x^* \rangle \ge 0, \quad \forall x \in Fix(T),$$
where $A : C \to H$ is a strongly positive bounded linear operator, $f : C \to H$ is a $\rho$-contraction and $T : C \to C$ is a nonexpansive mapping with $Fix(T)$ nonempty. They denoted the solution set of (1) by $\Omega_1 := VI(Fix(T), A - \gamma f)$.
Later in 2011, Ceng et al. [19] studied strong convergence to the unique solution of the variational inequality associated with a modified hierarchical problem. For $x_0 \in C$ chosen arbitrarily, the sequence $\{x_n\}$ is defined by
$$x_{n+1} = P_C[ \lambda_n \gamma ( \alpha_n f(x_n) + (1 - \alpha_n) S x_n ) + ( I - \lambda_n \mu F ) T x_n ], \quad n \ge 0,$$
where the sequences $\{\alpha_n\}$ and $\{\lambda_n\}$ lie in $[0,1]$. Then $\{x_n\}$ converges strongly to $x^* \in Fix(T)$, the unique solution of the variational inequality of finding $x^* \in Fix(T)$ satisfying
$$\langle (\mu F - \gamma S) x^*, x - x^* \rangle \ge 0, \quad \forall x \in Fix(T).$$
In algorithm (2), $F : C \to H$ is a Lipschitzian and strongly monotone operator, $f : C \to H$ is a contraction mapping, and $S, T$ are nonexpansive mappings with $Fix(T)$ nonempty, together with further conditions. They denoted the solution set of (3) by $\Omega_2 := VI(Fix(T), \mu F - \gamma S)$.
Next, in 2014, Kumam and Jitpeera [20] considered strong convergence to the unique solution of a hybrid hierarchical problem. They generated the sequence $\{x_n\}$ iteratively as follows:
$$x_{n+1} = \gamma \lambda_n \phi(x_n) + (I - \lambda_n \mu F) T P_C[ \beta_n S x_n + (1 - \beta_n) x_n ], \quad n \ge 0,$$
where $x_0 \in C$ is chosen arbitrarily and both sequences $\{\beta_n\}$ and $\{\lambda_n\}$ lie in $[0,1]$. They proved that the generated sequence $\{x_n\}$ converges strongly to the unique solution $x^* \in Fix(T)$ of the following variational inequality:
$$\text{Find } x^* \in Fix(T) \text{ such that } \langle (\mu F - \gamma \phi) x^*, x - x^* \rangle \ge 0, \quad \forall x \in Fix(T),$$
where $F : C \to H$ is a Lipschitzian and strongly monotone operator, $\phi : C \to C$ is a contraction mapping and $S, T$ are nonexpansive mappings with $Fix(T)$ nonempty. The solution set of (5) is denoted by $\Omega_3 := VI(Fix(T), \mu F - \gamma \phi)$.
In recent years, the implicit midpoint rule has been studied in many papers [21,22]. The implicit midpoint rule is one of the powerful methods for solving ordinary differential equations. In 2019, Dhakal and Sintunavarat [23] studied the viscosity method for the implicit double midpoint rule for a nonexpansive mapping. For $x_0 \in C$ chosen arbitrarily, the sequence $\{x_n\}$ is generated by the following algorithm:
$$x_{n+1} = \alpha_n f\left( \frac{x_n + x_{n+1}}{2} \right) + (1 - \alpha_n) T\left( \frac{x_n + x_{n+1}}{2} \right), \quad n \ge 0,$$
where the sequence $\alpha_n \in (0,1)$. Under some mild conditions, they showed that the generated sequence $\{x_n\}$ converges strongly to the unique solution $x^* \in Fix(T)$ of the following variational inequality:
$$\text{Find } x^* \in Fix(T) \text{ such that } \langle (I - f) x^*, x - x^* \rangle \ge 0, \quad \forall x \in Fix(T),$$
where $f : C \to C$ is a contraction mapping and $T$ is a nonexpansive mapping with $Fix(T)$ nonempty. They denoted the solution set of (6) by $\Omega_4 := VI(Fix(T), I - f)$.
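Since a rule of this type defines $x_{n+1}$ only implicitly, a common way to realize one step in practice is an inner fixed-point loop; because $f$ is a contraction and $T$ is nonexpansive, the inner map has Lipschitz constant at most $1/2$. The following minimal Python sketch is our own illustration (not code from [23]); the helper name implicit_double_midpoint_step and the sample mappings in the usage example are assumptions.

def implicit_double_midpoint_step(x_n, alpha_n, f, T, tol=1e-12, max_inner=1000):
    # Solve z = alpha_n*f((x_n + z)/2) + (1 - alpha_n)*T((x_n + z)/2) for z = x_{n+1}
    # by an inner fixed-point iteration; the inner map is a contraction (constant <= 1/2).
    z = x_n
    for _ in range(max_inner):
        mid = (x_n + z) / 2.0
        z_new = alpha_n * f(mid) + (1.0 - alpha_n) * T(mid)
        if abs(z_new - z) < tol:
            break
        z = z_new
    return z

# Example usage on the real line with T(x) = x/2 (nonexpansive) and f(x) = x/4 (contraction):
x = 1.0
for n in range(1, 20):
    x = implicit_double_midpoint_step(x, 1.0 / (n + 1), lambda t: t / 4.0, lambda t: t / 2.0)
print(x)  # the iterates tend toward the fixed point 0 of T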
Motivated by the aforementioned research, we consider a hybrid viscosity method using the implicit double midpoint rule to solve a hybrid hierarchical problem, stated as follows:
$$\begin{aligned} y_n &= P_C[ \beta_n S x_n + (1 - \beta_n) x_{n+1} ], \\ x_{n+1} &= \gamma \lambda_n \phi( w_n x_n + (1 - w_n) x_{n+1} ) + (I - \lambda_n \mu F) T y_n, \quad n \ge 0, \end{aligned}$$
where $S, T$ are nonexpansive mappings with $Fix(T)$ nonempty, $F : C \to H$ is a Lipschitzian and strongly monotone operator, $\phi : H \to H$ is a contraction mapping and the control sequences satisfy some mild conditions. The associated problem is stated as follows:
$$\text{Find } x^* \in Fix(T) \text{ such that } \langle (\mu F - \gamma \phi) x^*, x - x^* \rangle \ge 0, \quad \forall x \in Fix(T).$$
We denote its solution set by $\Omega := VI(Fix(T), \mu F - \gamma \phi)$, that is,
$$VI(Fix(T), \mu F - \gamma \phi) = \{ x^* \in Fix(T) : \langle (\mu F - \gamma \phi) x^*, x - x^* \rangle \ge 0, \ \forall x \in Fix(T) \}.$$
Under appropriate assumptions, we establish the strong convergence of the sequence $\{x_n\}$ generated by the proposed algorithm. The results improve the main theorems of Dhakal and Sintunavarat [23] and of Kumam and Jitpeera [20]. In particular, our solution set is $VI(Fix(T), \mu F - \gamma \phi)$, which is more general than $VI(Fix(T), I - f)$, and our new algorithm (7), which uses the double midpoint rule, is more general than (4).
The remainder of this paper is organized into six sections. In Section 1, we recall some definitions and properties to be used in the sequel. In Section 2, we provide the lemmas used in the proofs. In Section 3, we prove the strong convergence theorem for the hybrid hierarchical problem with the double midpoint rule in Hilbert spaces. Some deduced results are provided in Section 4. In Section 5, we present applications and numerical examples. The conclusion is given in the final section.

2. Preliminaries

In this section, we collect some definitions, properties and lemmas that will be needed in this paper. We start with the following inequality: $\| x + y \|^2 \le \| x \|^2 + 2 \langle y, x + y \rangle$ for all $x, y \in H$.
An operator $P_C : H \to C$ that maps every point $x \in H$ to the unique nearest point of $C$ is called the metric projection of $H$ onto $C$; that is, $\| x - P_C x \| = \min\{ \| x - y \| : y \in C \}$. From its definition, the following properties follow readily.
$$\langle x - y, P_C x - P_C y \rangle \ge \| P_C x - P_C y \|^2, \quad \forall x, y \in H ;$$
$$\langle x - P_C x, y - P_C x \rangle \le 0, \quad \forall x \in H, \ y \in C ;$$
$$\| x - y \|^2 \ge \| x - P_C x \|^2 + \| y - P_C x \|^2, \quad \forall x \in H, \ y \in C .$$
Furthermore, for a monotone mapping $A : C \to H$, property (8) implies that
$$x^* \in VI(C, A) \iff x^* = P_C(x^* - \lambda A x^*), \quad \forall \lambda > 0 .$$
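For intuition, the following minimal Python sketch (our own illustration, with NumPy assumed) computes the metric projection onto the box $C = [0,1]^5$ and numerically checks the second property above.

import numpy as np

def proj_box(x, lo=0.0, hi=1.0):
    # Metric projection onto the box C = [lo, hi]^n: clip each coordinate.
    return np.clip(x, lo, hi)

rng = np.random.default_rng(0)
x = 3.0 * rng.normal(size=5)          # an arbitrary point of H = R^5
y = rng.uniform(0.0, 1.0, size=5)     # an arbitrary point of C
px = proj_box(x)
# Characterization of the projection: <x - P_C x, y - P_C x> <= 0 for every y in C.
print(float(np.dot(x - px, y - px)) <= 1e-12)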
Next, we recall some lemmas that will be used in the proof.
Lemma 1
([24]). Let $\{a_n\}$ be a sequence of nonnegative real numbers such that
$$a_{n+1} \le (1 - \gamma_n) a_n + \delta_n, \quad n \ge 0,$$
where $\{\gamma_n\} \subset (0,1)$ and $\{\delta_n\}$ is a sequence in $\mathbb{R}$ such that
(i) $\sum_{n=1}^{\infty} \gamma_n = \infty$,
(ii) $\limsup_{n\to\infty} \delta_n / \gamma_n \le 0$ or $\sum_{n=1}^{\infty} |\delta_n| < \infty$.
Then $\lim_{n\to\infty} a_n = 0$.
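As a quick sanity check of Lemma 1 (our own illustration, not part of the original text), the Python snippet below iterates $a_{n+1} = (1-\gamma_n)a_n + \delta_n$ with the assumed choices $\gamma_n = 1/(n+1)$ and $\delta_n = \gamma_n/(n+1)$, which satisfy conditions (i) and (ii), and observes that $a_n$ decays toward 0.

# Numerical illustration of Lemma 1 with gamma_n = 1/(n+1) and delta_n = gamma_n/(n+1).
a = 1.0
for n in range(1, 100001):
    gamma = 1.0 / (n + 1)           # the series of gamma_n diverges
    delta = gamma / (n + 1)         # delta_n / gamma_n = 1/(n+1) -> 0
    a = (1.0 - gamma) * a + delta
print(a)  # close to 0, as Lemma 1 predicts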
Lemma 2
([25]). Let C be a nonempty closed and convex subset of a real Hilbert space H, and let $T : C \to C$ be a nonexpansive mapping with $Fix(T) \ne \emptyset$. If $\{x_n\}$ is a sequence in C such that $\{x_n\}$ converges weakly to x and $\{(I - T)x_n\}$ converges strongly to 0, where I is the identity mapping, then $T x = x$.

3. Main Results

In this section, we propose our algorithm for solving the hierarchical problem by using the viscosity method together with a generalized implicit double midpoint rule. We also verify the strong convergence of the generated sequence to a fixed point of a nonexpansive mapping, which is also the unique solution of the associated variational inequality.
Theorem 1.
Let C be a nonempty closed and convex subset of a real Hilbert space H. Let $F : C \to C$ be a $\kappa$-Lipschitzian and $\eta$-strongly monotone operator with constants $\kappa > 0$ and $\eta > 0$, $\phi : C \to C$ be a $\rho$-contraction with coefficient $\rho \in [0,1)$, $T : C \to C$ be a nonexpansive mapping with $Fix(T) \ne \emptyset$, and $S : H \to H$ be a nonexpansive mapping. Let $0 < \mu < 2\eta/\kappa^2$ and $0 < \gamma < \tau$, where $\tau = 1 - \sqrt{1 - \mu(2\eta - \mu\kappa^2)}$. Suppose $\{x_n\}$ is the sequence generated by the following algorithm, where $x_0 \in C$ is chosen arbitrarily:
$$\begin{aligned} y_n &= P_C[ \beta_n S x_n + (1 - \beta_n) x_{n+1} ], \\ x_{n+1} &= \gamma\lambda_n \phi( w_n x_n + (1 - w_n) x_{n+1} ) + (I - \lambda_n\mu F) T y_n, \quad n \ge 0, \end{aligned}$$
where $\{\lambda_n\} \subset (0,1)$ and $\{\beta_n\}, \{w_n\} \subset (0.5, 1)$ satisfy the following conditions:
(C1): $\beta_n \le k \lambda_n$ for some constant $k > 0$;
(C2): $\lim_{n\to\infty} \lambda_n = 0$, $\lim_{n\to\infty} \frac{|\lambda_n - \lambda_{n-1}|}{\lambda_n} = 0$, $\sum_{n=0}^{\infty} \lambda_n = \infty$;
(C3): $\lim_{n\to\infty} \frac{|\beta_n - \beta_{n-1}|}{\beta_n} = 0$.
Then, $\{x_n\}$ converges strongly to $x^* \in Fix(T)$, which is the unique solution of the following variational inequality:
$$\langle (\mu F - \gamma\phi) x^*, x - x^* \rangle \ge 0, \quad \forall x \in Fix(T),$$
where $\Omega = VI(Fix(T), \mu F - \gamma\phi)$. Equivalently, $x^*$ is the unique fixed point of $P_{Fix(T)}(I - \mu F + \gamma\phi)$, that is, $P_{Fix(T)}(I - \mu F + \gamma\phi)(x^*) = x^*$.
Proof. 
First, we show the existence of the sequence $\{x_n\}$ defined by (9). For a fixed $w \in C$ (playing the role of $x_n$), consider the mapping $S_n : C \to C$ defined by $S_n x = \gamma\lambda_n \phi( w_n w + (1-w_n)x ) + (I - \lambda_n\mu F) T P_C[ \beta_n S w + (1-\beta_n)x ]$ for all $x \in C$. We show that $S_n$ is a contraction mapping for every $n \in \mathbb{N}$. For each $n \in \mathbb{N}$ and $x, y \in C$, we have
$$\begin{aligned}
\| S_n x - S_n y \| &= \| \gamma\lambda_n \phi( w_n w + (1-w_n)x ) + (I - \lambda_n\mu F) T P_C[ \beta_n S w + (1-\beta_n)x ] \\
&\qquad - \gamma\lambda_n \phi( w_n w + (1-w_n)y ) - (I - \lambda_n\mu F) T P_C[ \beta_n S w + (1-\beta_n)y ] \| \\
&\le \gamma\lambda_n \| \phi( w_n w + (1-w_n)x ) - \phi( w_n w + (1-w_n)y ) \| \\
&\qquad + \| (I - \lambda_n\mu F) T P_C[ \beta_n S w + (1-\beta_n)x ] - (I - \lambda_n\mu F) T P_C[ \beta_n S w + (1-\beta_n)y ] \| \\
&\le \rho\gamma\lambda_n \| ( w_n w + (1-w_n)x ) - ( w_n w + (1-w_n)y ) \| + (1 - \lambda_n\tau) \| [ \beta_n S w + (1-\beta_n)x ] - [ \beta_n S w + (1-\beta_n)y ] \| \\
&= \rho\gamma\lambda_n (1-w_n) \| x - y \| + (1 - \lambda_n\tau)(1-\beta_n) \| x - y \| \\
&= [ \rho\gamma\lambda_n (1-w_n) + (1 - \lambda_n\tau)(1-\beta_n) ] \| x - y \| = \acute{\rho}\, \| x - y \| ,
\end{aligned}$$
where $\acute{\rho} = \rho\gamma\lambda_n(1-w_n) + (1-\lambda_n\tau)(1-\beta_n) \in [0,1)$ for all $n \in \mathbb{N}$. This shows that $S_n$ is a contraction mapping for every $n \in \mathbb{N}$. By the Banach contraction principle, $S_n$ has a unique fixed point for each $n \in \mathbb{N}$, which establishes the existence of the sequence $\{x_n\}$ defined by (9). We divide the proof into six steps.
Step 1. First, we claim that $\{x_n\}$ is bounded. Indeed, for any $x^* \in Fix(T)$, we can see that
$$\begin{aligned}
\| x_{n+1} - x^* \| &= \| \gamma\lambda_n \phi( w_n x_n + (1-w_n)x_{n+1} ) + (I - \lambda_n\mu F) T P_C[ \beta_n S x_n + (1-\beta_n)x_{n+1} ] - x^* \| \\
&\le \gamma\lambda_n \| \phi( w_n x_n + (1-w_n)x_{n+1} ) - \phi(x^*) \| + \| (I - \lambda_n\mu F) T P_C[ \beta_n S x_n + (1-\beta_n)x_{n+1} ] - (I - \lambda_n\mu F) T P_C x^* \| + \lambda_n \| \gamma\phi(x^*) - \mu F x^* \| \\
&\le \gamma\rho\lambda_n \{ w_n \| x_n - x^* \| + (1-w_n) \| x_{n+1} - x^* \| \} + (1 - \lambda_n\tau) \{ \beta_n \| x_n - x^* \| + (1-\beta_n) \| x_{n+1} - x^* \| + \beta_n \| S x^* - x^* \| \} + \lambda_n \| \gamma\phi(x^*) - \mu F x^* \| \\
&= ( \gamma\rho\lambda_n w_n + (1-\lambda_n\tau)\beta_n ) \| x_n - x^* \| + ( \gamma\rho\lambda_n(1-w_n) + (1-\lambda_n\tau)(1-\beta_n) ) \| x_{n+1} - x^* \| + (1-\lambda_n\tau)\beta_n \| S x^* - x^* \| + \lambda_n \| \gamma\phi(x^*) - \mu F x^* \| .
\end{aligned}$$
Moving the $\| x_{n+1} - x^* \|$ term to the left-hand side and writing, for brevity, $D_n := \gamma\rho\lambda_n w_n + (1-\lambda_n\tau)\beta_n + \lambda_n(\tau - \gamma\rho)$, we obtain
$$\begin{aligned}
\| x_{n+1} - x^* \| &\le \frac{ \gamma\rho\lambda_n w_n + (1-\lambda_n\tau)\beta_n }{ D_n } \| x_n - x^* \| + \frac{ (1-\lambda_n\tau)\beta_n }{ D_n } \| S x^* - x^* \| + \frac{ \lambda_n }{ D_n } \| \gamma\phi(x^*) - \mu F x^* \| \\
&\le \left( 1 - \frac{ \lambda_n(\tau - \gamma\rho) }{ D_n } \right) \| x_n - x^* \| + \frac{ \lambda_n(\tau - \gamma\rho) }{ D_n } \cdot \frac{1}{\tau - \gamma\rho} \big( k \| S x^* - x^* \| + \| \gamma\phi(x^*) - \mu F x^* \| \big) \\
&\le \max\left\{ \| x_n - x^* \|, \ \frac{1}{\tau - \gamma\rho} \big( k \| S x^* - x^* \| + \| \gamma\phi(x^*) - \mu F x^* \| \big) \right\} ,
\end{aligned}$$
where the second inequality uses condition (C1).
By induction, it follows that
$$\| x_n - x^* \| \le \max\left\{ \| x_0 - x^* \|, \ \frac{1}{\tau - \gamma\rho} \big( k \| S x^* - x^* \| + \| \gamma\phi(x^*) - \mu F x^* \| \big) \right\}, \quad n \ge 0 .$$
Therefore, $\{x_n\}$ is bounded.
Step 2. We verify that $\lim_{n\to\infty} \| x_{n+1} - x_n \| = 0$. For each $n \in \mathbb{N}$ with $n > 1$, we obtain
$$\begin{aligned}
\| x_{n+1} - x_n \| = \| &\gamma\lambda_n \phi( w_n x_n + (1-w_n)x_{n+1} ) + (I - \lambda_n\mu F) T P_C[ \beta_n S x_n + (1-\beta_n)x_{n+1} ] \\
&- \gamma\lambda_{n-1} \phi( w_{n-1} x_{n-1} + (1-w_{n-1})x_n ) - (I - \lambda_{n-1}\mu F) T P_C[ \beta_{n-1} S x_{n-1} + (1-\beta_{n-1})x_n ] \| .
\end{aligned}$$
So that
$$\begin{aligned}
\| x_{n+1} - x_n \| &= \| \gamma\lambda_n [ \phi( w_n x_n + (1-w_n)x_{n+1} ) - \phi( w_{n-1} x_{n-1} + (1-w_{n-1})x_n ) ] + \gamma(\lambda_n - \lambda_{n-1}) \phi( w_{n-1} x_{n-1} + (1-w_{n-1})x_n ) \\
&\qquad + (I - \lambda_n\mu F) T P_C[ \beta_n S x_n + (1-\beta_n)x_{n+1} ] - (I - \lambda_n\mu F) T P_C[ \beta_{n-1} S x_{n-1} + (1-\beta_{n-1})x_n ] \\
&\qquad + \mu(\lambda_{n-1} - \lambda_n) F T P_C[ \beta_{n-1} S x_{n-1} + (1-\beta_{n-1})x_n ] \| \\
&\le \gamma\rho\lambda_n \| (1-w_n)(x_{n+1} - x_n) + w_{n-1}(x_n - x_{n-1}) \| + \gamma|\lambda_n - \lambda_{n-1}| \, \| \phi( w_{n-1} x_{n-1} + (1-w_{n-1})x_n ) \| \\
&\qquad + (1 - \lambda_n\tau) \| (1-\beta_n)(x_{n+1} - x_n) + \beta_n(S x_n - S x_{n-1}) + (\beta_n - \beta_{n-1})(S x_{n-1} - x_n) \| \\
&\qquad + \mu|\lambda_n - \lambda_{n-1}| \, \| F T P_C[ \beta_{n-1} S x_{n-1} + (1-\beta_{n-1})x_n ] \| \\
&\le [ \gamma\rho\lambda_n(1-w_n) + (1 - \lambda_n\tau)(1-\beta_n) ] \| x_{n+1} - x_n \| + [ \gamma\rho\lambda_n w_{n-1} + (1 - \lambda_n\tau)\beta_n ] \| x_n - x_{n-1} \| \\
&\qquad + |\lambda_n - \lambda_{n-1}| M_1 + (1 - \lambda_n\tau)|\beta_n - \beta_{n-1}| M_2 ,
\end{aligned}$$
where $M_1 := \sup_{n\in\mathbb{N}} \{ \gamma \| \phi( w_{n-1} x_{n-1} + (1-w_{n-1})x_n ) \| + \mu \| F T P_C[ \beta_{n-1} S x_{n-1} + (1-\beta_{n-1})x_n ] \| \}$ and $M_2 := \sup_{n\in\mathbb{N}} \| S x_{n-1} - x_n \|$. Moving the $\| x_{n+1} - x_n \|$ term to the left-hand side, we can write
$$\| x_{n+1} - x_n \| \le (1 - \alpha_n) \| x_n - x_{n-1} \| + \delta_n$$
for all $n \in \mathbb{N}$ with $n > 1$, where
$$\alpha_n := \frac{\lambda_n(\tau - \gamma\rho)}{\gamma\rho\lambda_n w_n + (1-\lambda_n\tau)\beta_n + \lambda_n(\tau - \gamma\rho)}$$
and
$$\delta_n := \frac{|\lambda_n - \lambda_{n-1}|}{\gamma\rho\lambda_n w_n + (1-\lambda_n\tau)\beta_n + \lambda_n(\tau - \gamma\rho)}\, M_1 + \frac{(1-\lambda_n\tau)|\beta_n - \beta_{n-1}|}{\gamma\rho\lambda_n w_n + (1-\lambda_n\tau)\beta_n + \lambda_n(\tau - \gamma\rho)}\, M_2 .$$
Using conditions (C1)–(C3) and applying Lemma 1 to (10), we obtain
$$\lim_{n\to\infty} \| x_{n+1} - x_n \| = 0 .$$
Step 3. We show that $\lim_{n\to\infty} \| x_n - T x_n \| = 0$. For each $n \in \mathbb{N}$, we have
$$\begin{aligned}
\| x_n - T x_n \| &\le \| x_n - x_{n+1} \| + \| x_{n+1} - T P_C[ \beta_n S x_n + (1-\beta_n)x_{n+1} ] \| + \| T P_C[ \beta_n S x_n + (1-\beta_n)x_{n+1} ] - T x_n \| \\
&\le \| x_n - x_{n+1} \| + \lambda_n \| \gamma\phi( w_n x_n + (1-w_n)x_{n+1} ) - \mu F T P_C[ \beta_n S x_n + (1-\beta_n)x_{n+1} ] \| + \| \beta_n S x_n + (1-\beta_n)x_{n+1} - x_n \| \\
&\le \| x_n - x_{n+1} \| + \lambda_n \big( \gamma\| \phi( w_n x_n + (1-w_n)x_{n+1} ) \| + \mu\| F T P_C[ \beta_n S x_n + (1-\beta_n)x_{n+1} ] \| \big) + (1-\beta_n) \| x_n - x_{n+1} \| + \beta_n \| x_n - S x_n \| \\
&\le \| x_n - x_{n+1} \| + \lambda_n M_1 + (1-\beta_n) \| x_n - x_{n+1} \| + \beta_n \| x_n - x_{n+1} \| + \beta_n \| x_{n+1} - S x_n \| \\
&\le 2 \| x_n - x_{n+1} \| + \lambda_n M_1 + \beta_n M_2 .
\end{aligned}$$
From conditions (C1), (C2) and (11), we obtain
$$\lim_{n\to\infty} \| x_n - T x_n \| = 0 .$$
Step 4. We claim that $\omega_w(x_n) \subset Fix(T)$, where
$$\omega_w(x_n) := \{ x \in H : x_{n_i} \rightharpoonup x \text{ for some subsequence } \{x_{n_i}\} \text{ of } \{x_n\} \} .$$
Let $x \in \omega_w(x_n)$. Then there exists a subsequence $\{x_{n_i}\}$ of $\{x_n\}$ such that $x_{n_i} \rightharpoonup x$. From (12), we obtain
$$\lim_{i\to\infty} \| (I - T) x_{n_i} \| = \lim_{i\to\infty} \| x_{n_i} - T x_{n_i} \| = 0 .$$
This implies that $\{(I - T)x_{n_i}\}$ converges strongly to 0. Using Lemma 2, we obtain $T x = x$, and hence $x \in Fix(T)$. Thus, we conclude that $\omega_w(x_n) \subset Fix(T)$.
Step 5. We want to show that
$$\limsup_{n\to\infty} \langle x^* - \phi(x^*), x^* - x_n \rangle \le 0 ,$$
where $x^* \in Fix(T)$ is the unique fixed point of $P_{Fix(T)}\phi$, that is, $x^* = P_{Fix(T)}\phi(x^*)$. Since $\{x_n\}$ is bounded, there exists a subsequence $\{x_{n_i}\}$ of $\{x_n\}$ that converges weakly. Without loss of generality, we may assume that $x_{n_i} \rightharpoonup p$ as $i \to \infty$ for some $p \in H$ and
$$\limsup_{n\to\infty} \langle x^* - \phi(x^*), x^* - x_n \rangle = \lim_{i\to\infty} \langle x^* - \phi(x^*), x^* - x_{n_i} \rangle .$$
From Step 4, we obtain $p \in Fix(T)$. Using (8), we obtain
$$\limsup_{n\to\infty} \langle x^* - \phi(x^*), x^* - x_n \rangle = \lim_{i\to\infty} \langle x^* - \phi(x^*), x^* - x_{n_i} \rangle = \langle x^* - \phi(x^*), x^* - p \rangle \le 0 .$$
Step 6. Finally, we prove that $x_{n+1} \to x^*$. From (9), we note that
$$\begin{aligned}
\| x_{n+1} - x^* \|^2 &= \| \gamma\lambda_n \phi( w_n x_n + (1-w_n)x_{n+1} ) + (I - \lambda_n\mu F) T P_C[ \beta_n S x_n + (1-\beta_n)x_{n+1} ] - x^* \|^2 \\
&= \| \gamma\lambda_n [ \phi( w_n x_n + (1-w_n)x_{n+1} ) - \phi(x^*) ] + (I - \lambda_n\mu F) T P_C[ \beta_n S x_n + (1-\beta_n)x_{n+1} ] - (I - \lambda_n\mu F) x^* + \lambda_n( \gamma\phi(x^*) - \mu F x^* ) \|^2 \\
&\le \| \gamma\lambda_n [ \phi( w_n x_n + (1-w_n)x_{n+1} ) - \phi(x^*) ] + (I - \lambda_n\mu F) T P_C[ \beta_n S x_n + (1-\beta_n)x_{n+1} ] - (I - \lambda_n\mu F) x^* \|^2 + 2\lambda_n \langle \gamma\phi(x^*) - \mu F x^*, x_{n+1} - x^* \rangle \\
&\le \gamma^2\lambda_n^2 \| \phi( w_n x_n + (1-w_n)x_{n+1} ) - \phi(x^*) \|^2 + (1 - \lambda_n\tau)^2 \| T P_C[ \beta_n S x_n + (1-\beta_n)x_{n+1} ] - x^* \|^2 \\
&\qquad + 2\gamma\lambda_n(1 - \lambda_n\tau) \| \phi( w_n x_n + (1-w_n)x_{n+1} ) - \phi(x^*) \| \cdot \| T P_C[ \beta_n S x_n + (1-\beta_n)x_{n+1} ] - x^* \| + 2\lambda_n \langle \gamma\phi(x^*) - \mu F x^*, x_{n+1} - x^* \rangle \\
&\le \gamma^2\rho^2\lambda_n^2 \| w_n(x_n - x^*) + (1-w_n)(x_{n+1} - x^*) \|^2 \\
&\qquad + 2\gamma\rho\lambda_n(1 - \lambda_n\tau) \| w_n(x_n - x^*) + (1-w_n)(x_{n+1} - x^*) \| \cdot \| \beta_n(x_n - x^*) + (1-\beta_n)(x_{n+1} - x^*) + \beta_n(S x^* - x^*) \| + \eta_n .
\end{aligned}$$
Hence, expanding the squared norm and the product and using $2ab \le a^2 + b^2$ to collect the coefficients of $\| x_n - x^* \|^2$ and $\| x_{n+1} - x^* \|^2$, we obtain
$$\begin{aligned}
\| x_{n+1} - x^* \|^2 &\le \big[ \gamma^2\rho^2\lambda_n^2 w_n + \gamma\rho\lambda_n(1-\lambda_n\tau)(w_n + \beta_n) \big] \| x_n - x^* \|^2 + \big[ \gamma^2\rho^2\lambda_n^2 (1-w_n) + \gamma\rho\lambda_n(1-\lambda_n\tau)(2 - w_n - \beta_n) \big] \| x_{n+1} - x^* \|^2 \\
&\qquad + 2\gamma\rho\lambda_n\beta_n(1-\lambda_n\tau) \| S x^* - x^* \| \big( w_n \| x_n - x^* \| + (1-w_n)\| x_{n+1} - x^* \| \big) + \eta_n ,
\end{aligned}$$
where
$$\eta_n := (1 - \lambda_n\tau)^2 \| T P_C[ \beta_n S x_n + (1-\beta_n)x_{n+1} ] - x^* \|^2 + 2\lambda_n \langle \gamma\phi(x^*) - \mu F x^*, x_{n+1} - x^* \rangle .$$
This implies that
$$\begin{aligned}
\| x_{n+1} - x^* \|^2 &\le \frac{ \gamma^2\rho^2\lambda_n^2 w_n + \gamma\rho\lambda_n(1-\lambda_n\tau)(w_n + \beta_n) }{ 1 - \gamma^2\rho^2\lambda_n^2(1-w_n) - \gamma\rho\lambda_n(1-\lambda_n\tau)(2 - w_n - \beta_n) } \| x_n - x^* \|^2 \\
&\qquad + \frac{ 2\gamma\rho\lambda_n\beta_n(1-\lambda_n\tau) \| S x^* - x^* \| \big( w_n \| x_n - x^* \| + (1-w_n)\| x_{n+1} - x^* \| \big) }{ 1 - \gamma^2\rho^2\lambda_n^2(1-w_n) - \gamma\rho\lambda_n(1-\lambda_n\tau)(2 - w_n - \beta_n) } \\
&\qquad + \frac{ (1-\lambda_n\tau)^2 \| T P_C[ \beta_n S x_n + (1-\beta_n)x_{n+1} ] - x^* \|^2 }{ 1 - \gamma^2\rho^2\lambda_n^2(1-w_n) - \gamma\rho\lambda_n(1-\lambda_n\tau)(2 - w_n - \beta_n) } \\
&\qquad + \frac{ 2\lambda_n \langle \gamma\phi(x^*) - \mu F x^*, x_{n+1} - x^* \rangle }{ 1 - \gamma^2\rho^2\lambda_n^2(1-w_n) - \gamma\rho\lambda_n(1-\lambda_n\tau)(2 - w_n - \beta_n) } .
\end{aligned}$$
This completes the proof. □

4. Some Deduced Results

Corollary 1.
Let C be a nonempty closed and convex subset of a real Hilbert space H, $F : C \to C$ be a $\kappa$-Lipschitzian and $\eta$-strongly monotone operator with constants $\kappa > 0$ and $\eta > 0$, $T : C \to C$ be a nonexpansive mapping with $Fix(T) \ne \emptyset$, and $S : H \to H$ be a nonexpansive mapping. Let $0 < \mu < 2\eta/\kappa^2$ and $0 < \gamma < \tau$, where $\tau = 1 - \sqrt{1 - \mu(2\eta - \mu\kappa^2)}$. Suppose the sequence $\{x_n\}$ is generated by the following algorithm, where $x_0 \in C$ is chosen arbitrarily:
$$x_{n+1} = (I - \lambda_n\mu F) T P_C[ \beta_n S x_n + (1-\beta_n) x_{n+1} ], \quad n \ge 0,$$
where $\{\lambda_n\} \subset (0,1)$ and $\{\beta_n\} \subset (0.5,1)$ satisfy conditions (C1)–(C3). Then, $\{x_n\}$ converges strongly to $x^* \in Fix(T)$, which is the unique solution of the variational inequality
$$\langle \mu F x^*, x - x^* \rangle \ge 0, \quad \forall x \in Fix(T),$$
where $\Omega_a = VI(Fix(T), \mu F)$. Equivalently, $x^*$ is the unique fixed point of $P_{Fix(T)}(I - \mu F)$, that is, $P_{Fix(T)}(I - \mu F)(x^*) = x^*$.
Proof. 
Putting $\phi \equiv 0$ in Theorem 1, we immediately obtain the desired result. □
Corollary 2.
Let C be a nonempty closed and convex subset of a real Hilbert space H, $\phi : H \to H$ be a $\rho$-contraction with coefficient $\rho \in [0,1)$, $T : C \to C$ be a nonexpansive mapping with $Fix(T) \ne \emptyset$ and $S : H \to H$ be a nonexpansive mapping. Suppose $\{x_n\}$ is the sequence generated by the following algorithm with $x_0 \in C$ chosen arbitrarily:
$$\begin{aligned} y_n &= P_C[ \beta_n S x_n + (1-\beta_n) x_{n+1} ], \\ x_{n+1} &= \lambda_n \phi( w_n x_n + (1-w_n) x_{n+1} ) + (1-\lambda_n) T y_n, \quad n \ge 0, \end{aligned}$$
where $\{\lambda_n\} \subset (0,1)$ and $\{\beta_n\}, \{w_n\} \subset (0.5,1)$ satisfy conditions (C1)–(C3). Then, $\{x_n\}$ converges strongly to $x^* \in Fix(T)$, which is the unique solution of the variational inequality
$$\langle (I - \phi) x^*, x - x^* \rangle \ge 0, \quad \forall x \in Fix(T),$$
where $\Omega_b = VI(Fix(T), I - \phi)$. Equivalently, $x^*$ is the unique fixed point of $P_{Fix(T)}\phi$, that is, $P_{Fix(T)}\phi(x^*) = x^*$.
Proof. 
Putting $\gamma = 1$, $\mu = 2$ and $F \equiv \frac{I}{2}$ in Theorem 1, we immediately obtain the desired result. □
Corollary 3.
Let C be a nonempty closed and convex subset of a real Hilbert space H, $T : C \to C$ be a nonexpansive mapping with $Fix(T) \ne \emptyset$ and $S : H \to H$ be a nonexpansive mapping. Suppose $\{x_n\}$ is the sequence generated by the following algorithm with $x_0 \in C$ chosen arbitrarily:
$$x_{n+1} = (1-\lambda_n) T P_C[ \beta_n S x_n + (1-\beta_n) x_{n+1} ], \quad n \ge 0,$$
where $\{\lambda_n\} \subset (0,1)$ and $\{\beta_n\} \subset (0.5,1)$ satisfy conditions (C1)–(C3). Then $\{x_n\}$ converges strongly to $x^* \in Fix(T)$, which is the unique solution of the variational inequality
$$\langle (I - S) x^*, x - x^* \rangle \ge 0, \quad \forall x \in Fix(T),$$
where $\Omega_c = VI(Fix(T), I - S)$. Equivalently, $x^*$ is the unique fixed point of $P_{Fix(T)}S$, that is, $P_{Fix(T)}S(x^*) = x^*$.
Proof. 
Putting $\phi \equiv 0$ in Corollary 2, we immediately obtain the desired result. □
Corollary 4.
Let C be a nonempty closed and convex subset of a real Hilbert space H, $T : C \to C$ be a nonexpansive mapping with $Fix(T) \ne \emptyset$ and $S : C \to C$ be a nonexpansive mapping. Suppose $\{x_n\}$ is the sequence generated by the following algorithm with $x_0 \in C$ chosen arbitrarily:
$$x_{n+1} = \lambda_n x_n + (1-\lambda_n) T[ \beta_n S x_n + (1-\beta_n) x_n ], \quad n \ge 0,$$
where $\{\lambda_n\} \subset (0,1)$ and $\{\beta_n\} \subset (0.5,1)$ satisfy conditions (C1)–(C3). Then $\{x_n\}$ converges strongly to $x^* \in Fix(T)$, which is the unique solution of the variational inequality
$$\langle (I - S) x^*, x - x^* \rangle \ge 0, \quad \forall x \in Fix(T),$$
where $\Omega_d = VI(Fix(T), I - S)$. Equivalently, $x^*$ is the unique fixed point of $P_{Fix(T)}S$, that is, $P_{Fix(T)}S(x^*) = x^*$.
Proof. 
Putting $\phi \equiv I$, $w_n \equiv 1$ and $P_C \equiv I$ in Corollary 2, we immediately obtain the desired result. □

5. Applications and Numerical Examples

5.1. Nonlinear Fredholm Integral Equation

In this part, we consider the following nonlinear Fredholm integral equation:
$$x(r) = h(r) + \int_0^1 Q(r, t, x(t))\, dt, \quad r \in [0,1],$$
where $h$ is a continuous function on the interval $[0,1]$ and $Q : [0,1] \times [0,1] \times \mathbb{R} \to \mathbb{R}$ is a continuous function. If $Q$ satisfies the Lipschitz continuity condition
$$| Q(r,t,x) - Q(r,t,y) | \le | x - y |, \quad \forall r, t \in [0,1], \ x, y \in \mathbb{R},$$
then Equation (13) has at least one solution in $L^2[0,1]$ (see [26], Theorem 3.3). Define the mappings $S, T : L^2[0,1] \to L^2[0,1]$ by
$$(S x)(r) = h(r) + \int_0^1 Q(r, t, x(t))\, dt, \quad r \in [0,1],$$
and
$$(T x)(r) = h(r) + \int_0^1 Q(r, t, x(t))\, dt, \quad r \in [0,1].$$
Then, for any $x, y \in L^2[0,1]$, we have
$$\| S x - T y \|^2 = \int_0^1 | (S x)(r) - (T y)(r) |^2\, dr = \int_0^1 \left( \int_0^1 [ Q(r,t,x(t)) - Q(r,t,y(t)) ]\, dt \right)^2 dr \le \int_0^1 \left( \int_0^1 | x(t) - y(t) |\, dt \right)^2 dr \le \int_0^1 | x(t) - y(t) |^2\, dt = \| x - y \|^2,$$
which implies that $S$ and $T$ are nonexpansive mappings on $L^2[0,1]$. Consequently, finding a solution of Equation (13) is equivalent to finding a common fixed point of $S$ and $T$ in $L^2[0,1]$.
Theorem 2.
Let $Q : [0,1]\times[0,1]\times\mathbb{R} \to \mathbb{R}$ satisfy the Lipschitz continuity condition and let $h$ be a continuous function on the closed interval $[0,1]$. Let $S, T : L^2[0,1] \to L^2[0,1]$ be the mappings defined by (14) and (15), $F : L^2[0,1] \to L^2[0,1]$ be a $\kappa$-Lipschitzian and $\eta$-strongly monotone operator with constants $\kappa > 0$ and $\eta > 0$, and $\phi : L^2[0,1] \to L^2[0,1]$ be a $\rho$-contraction with coefficient $\rho \in [0,1)$. Let $0 < \mu < 2\eta/\kappa^2$ and $0 < \gamma < \tau$, where $\tau = 1 - \sqrt{1 - \mu(2\eta - \mu\kappa^2)}$. Suppose that $\{\beta_n\}$, $\{w_n\}$ and $\{\lambda_n\}$ are sequences in $(0,1)$ satisfying conditions (C1)–(C3) of Theorem 1. For any $x_0(r) \in L^2[0,1]$, let $\{x_n\}$ be the sequence generated by
$$\begin{aligned} y_n(r) &= \beta_n S x_n(r) + (1-\beta_n) x_{n+1}(r), \\ x_{n+1}(r) &= \gamma\lambda_n \phi( w_n x_n(r) + (1-w_n) x_{n+1}(r) ) + (I - \lambda_n\mu F) T y_n(r), \quad n \ge 0, \end{aligned}$$
where $r \in [0,1]$. Then, the sequence $\{x_n(r)\}$ converges strongly in $L^2[0,1]$ to a solution of the integral Equation (13).
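To make the scheme for the integral equation concrete, here is a minimal Python sketch of one possible discretization. The kernel $Q(r,t,x) = \tfrac{1}{2} r t \sin x$, the function $h(r) = r$, the choices $F = I$, $\phi(x) = x/4$, $\mu = 1$, $\gamma = 0.5$, the trapezoidal quadrature and the parameter sequences are all our own assumptions, not taken from the paper.

import numpy as np

m = 101
r = np.linspace(0.0, 1.0, m)
wq = np.full(m, 1.0 / (m - 1)); wq[0] *= 0.5; wq[-1] *= 0.5   # trapezoidal quadrature weights

h = lambda s: s
Q = lambda s, t, x: 0.5 * s * t * np.sin(x)    # assumed kernel; |Q(r,t,x) - Q(r,t,y)| <= |x - y|

def S(x):
    # (Sx)(r) = h(r) + int_0^1 Q(r, t, x(t)) dt, approximated on the grid
    return h(r) + (Q(r[:, None], r[None, :], x[None, :]) * wq[None, :]).sum(axis=1)

T = S                                   # here T coincides with S, as in (14) and (15)
phi = lambda x: 0.25 * x                # rho-contraction with rho = 1/4 (assumed)
F = lambda x: x                         # F = I (assumed): 1-Lipschitzian and 1-strongly monotone
mu, gamma = 1.0, 0.5

x_n = np.zeros(m)                       # x_0 = 0 in L^2[0,1]
for n in range(1, 51):
    beta, w, lam = 1/(10*n + 1), 1/(20*n + 1), 1/(30*n + 1)
    z = x_n.copy()                      # inner loop resolves the implicit x_{n+1}
    for _ in range(200):
        y = beta * S(x_n) + (1 - beta) * z
        z_new = gamma * lam * phi(w * x_n + (1 - w) * z) + (T(y) - lam * mu * F(T(y)))
        if np.max(np.abs(z_new - z)) < 1e-12:
            break
        z = z_new
    x_n = z

print(np.max(np.abs(x_n - S(x_n))))     # residual of the discretized integral equation

Under these assumptions the iterates settle quickly; the printed residual indicates how well the final iterate satisfies the discretized Equation (13).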

5.2. Application to Convex Minimization Problem

In this part, we consider the well-known optimization problem
$$\min_{x \in C} \Psi(x),$$
where $\Psi : C \to \mathbb{R}$ is a convex and differentiable function. Assume that (17) is consistent, and let $\Omega_+$ denote its (nonempty) solution set. The gradient projection method generates the sequence $\{x_n\}$ iteratively by
$$x_{n+1} = P_C( x_n - \mu \nabla\Psi(x_n) ),$$
where $0 < \mu < 2\eta/\kappa^2$, $\kappa > 0$ and $\Psi$ is (Gâteaux) differentiable. If $\nabla\Psi$ is $L$-Lipschitzian, then $\nabla\Psi$ is $\frac{1}{L}$-inverse strongly monotone, that is,
$$\langle \nabla\Psi(x) - \nabla\Psi(y), x - y \rangle \ge \frac{1}{L} \| \nabla\Psi(x) - \nabla\Psi(y) \|^2, \quad \forall x, y \in H, \ L > 0 .$$
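As an illustration of the gradient projection iteration above, the following minimal Python sketch (our own example, with NumPy assumed) minimizes $\Psi(x) = \tfrac{1}{2}\|Ax - b\|^2$ over the box $C = [0,1]^5$; here $\nabla\Psi$ is $L$-Lipschitz with $L = \|A^{\top}A\|_2$, and any step size $0 < \mu < 2/L$ works.

import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(8, 5))
b = rng.normal(size=8)

grad = lambda x: A.T @ (A @ x - b)            # gradient of Psi(x) = 0.5*||A x - b||^2
L = np.linalg.norm(A.T @ A, 2)                # Lipschitz constant of the gradient
mu = 1.0 / L                                  # step size in (0, 2/L)
proj_C = lambda x: np.clip(x, 0.0, 1.0)       # metric projection onto the box C = [0, 1]^5

x = np.zeros(5)
for _ in range(2000):
    x = proj_C(x - mu * grad(x))              # x_{n+1} = P_C(x_n - mu * grad Psi(x_n))
print(x, 0.5 * np.linalg.norm(A @ x - b) ** 2)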
Theorem 3.
Let C be a nonempty closed convex subset of a real Hilbert space H. For the minimization problem (17), assume that $\Psi$ is (Gâteaux) differentiable and that the gradient $\nabla\Psi$ is a $\frac{1}{L}$-inverse strongly monotone mapping with $L > 0$. Let $\phi : C \to C$ be a $\rho$-contraction with coefficient $\rho \in [0,1)$. Let $0 < \mu < 2\eta/\kappa^2$, $\kappa > 0$ and $0 < \gamma < \tau$, where $\tau = 1 - \sqrt{1 - \mu(2\eta - \mu\kappa^2)}$. Suppose that $\{\beta_n\}$, $\{w_n\}$ and $\{\lambda_n\}$ are sequences in $(0,1)$ satisfying conditions (C1)–(C3) of Theorem 1. For a given $x_0 \in C$, let $\{x_n\}$ be the sequence generated by
$$\begin{aligned} y_n &= \beta_n S x_n + (1-\beta_n) x_{n+1}, \\ x_{n+1} &= \gamma\lambda_n \phi( w_n x_n + (1-w_n) x_{n+1} ) + (I - \lambda_n\mu \nabla\Psi) P_C( I - \mu \nabla\Psi ) y_n, \quad n \ge 0. \end{aligned}$$
Then $\{x_n\}$ converges strongly to a solution $x^*$ of the minimization problem (17), which is also the unique solution of the variational inequality
$$\langle (\mu\nabla\Psi - \gamma\phi) x^*, x - x^* \rangle \ge 0, \quad \forall x \in \Omega_+,$$
where $\Omega := VI( Fix(T), \mu\nabla\Psi - \gamma\phi )$ with $T := P_C(I - \mu\nabla\Psi)$.

5.3. Application to Hierarchical Minimization

In this subsection, we consider the following hierarchical minimization problem (see [27] and the references therein).
Let $\Psi_0, \Psi_1 : H \to \mathbb{R}$ be lower semi-continuous convex functions. The hierarchical minimization problem reads as follows:
$$\min_{x \in \Omega_0} \Psi_1(x), \quad \text{where } \Omega_0 := \operatorname*{argmin}_{x \in H} \Psi_0(x).$$
Assume that $\Omega_0$ is nonempty. Let $\Omega_* := \operatorname*{argmin}_{x \in \Omega_0} \Psi_1(x)$ and assume $\Omega_* \ne \emptyset$.
Assume that $\Psi_0$ and $\Psi_1$ are differentiable and that their gradients satisfy the Lipschitz continuity conditions
$$\| \nabla\Psi_0(x) - \nabla\Psi_0(y) \| \le L_0 \| x - y \| \quad \text{and} \quad \| \nabla\Psi_1(x) - \nabla\Psi_1(y) \| \le L_1 \| x - y \| .$$
Note that condition (18) implies that $\nabla\Psi_i$ is $\frac{1}{L_i}$-inverse strongly monotone ($i = 0, 1$). Now let
$$T_0 = I - \gamma_0 \nabla\Psi_0 \quad \text{and} \quad T_1 = I - \gamma_1 \nabla\Psi_1,$$
where $\gamma_0 > 0$ and $\gamma_1 > 0$. Note that $T_i$ is nonexpansive if $0 < \gamma_i < \frac{2}{L_i}$ ($i = 0,1$). Furthermore, it is easily seen that $\Omega_0 = Fix(T_0)$.
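The following small Python check (our own illustration, for an assumed quadratic $\Psi_0(x) = \tfrac{1}{2}x^{\top}Mx - c^{\top}x$) verifies numerically that $T_0 = I - \gamma_0\nabla\Psi_0$ is nonexpansive for $0 < \gamma_0 < 2/L_0$ and that a minimizer of $\Psi_0$ is a fixed point of $T_0$, in line with $\Omega_0 = Fix(T_0)$.

import numpy as np

rng = np.random.default_rng(2)
B = rng.normal(size=(4, 4))
M = B.T @ B + np.eye(4)                    # positive definite: grad Psi_0(x) = M x - c
c = rng.normal(size=4)
L0 = np.linalg.norm(M, 2)                  # Lipschitz constant of grad Psi_0
gamma0 = 1.0 / L0                          # inside (0, 2/L_0)

T0 = lambda x: x - gamma0 * (M @ x - c)    # T_0 = I - gamma_0 * grad Psi_0

x, y = rng.normal(size=4), rng.normal(size=4)
print(np.linalg.norm(T0(x) - T0(y)) <= np.linalg.norm(x - y) + 1e-12)   # nonexpansiveness
x_star = np.linalg.solve(M, c)             # the unique minimizer of Psi_0
print(np.allclose(T0(x_star), x_star))     # the minimizer is a fixed point of T_0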
The optimality condition for $x^* \in \Omega_0$ to be a solution of the hierarchical minimization (19) is the variational inequality:
$$x^* \in \Omega_0, \quad \langle \nabla\Psi_1(x^*), x - x^* \rangle \ge 0, \quad \forall x \in \Omega_0 .$$
Theorem 4.
Assume that the hierarchical minimization problem (19) is solvable. Let $\phi : C \to C$ be a $\rho$-contraction with coefficient $\rho \in [0,1)$. Let $0 < \mu < 2\eta/\kappa^2$, $\kappa > 0$ and $0 < \gamma < \tau$, where $\tau = 1 - \sqrt{1 - \mu(2\eta - \mu\kappa^2)}$. Suppose that $\{\beta_n\}$, $\{w_n\}$ and $\{\lambda_n\}$ are sequences in $(0,1)$ satisfying conditions (C1)–(C3) of Theorem 1. Let $\{x_n\}$ be the sequence generated by
$$x_{n+1} = \gamma\lambda_n \phi( w_n x_n + (1-w_n) x_{n+1} ) + (I - \lambda_n\mu F) P_{\Omega_0}( I - \mu\nabla\Psi_1 )( \beta_n x_n + (1-\beta_n) x_{n+1} ), \quad n \ge 0.$$
If condition (18) is satisfied and $0 < \gamma_i < \frac{2}{L_i}$ ($i = 0,1$), then $\{x_n\}$ converges in norm to a solution $x^*$ of the hierarchical minimization problem (19), which also solves the variational inequality
$$\langle (I - \gamma\phi) x^*, x - x^* \rangle \ge 0, \quad \forall x \in \Omega_* .$$

5.4. Numerical Experiments

Example 1.
Let $C = [0,1]$ be a subset of the real Hilbert space $\mathbb{R}$ with the usual inner product $\langle \cdot, \cdot \rangle$, and define the mappings $S, T, F, \phi : C \to C$ by
$$S(x) = \frac{x}{3}, \quad T(x) = \frac{x}{2}, \quad F(x) = 2x, \quad \text{and} \quad \phi(x) = \frac{x}{4}.$$
Let the sequence $\{x_n\}$ be generated by algorithm (9), where $\beta_n = \frac{1}{10n+1}$, $w_n = \frac{1}{20n+1}$, $\lambda_n = \frac{1}{30n+1}$, $\mu = \frac{1}{4}$ and $\gamma = \frac{1}{4}$. Then the sequence $\{x_n\}$ converges strongly to $0$.
For the different initial points $x_0 = 0.25, 0.45, 0.65, 0.85$, the computational results of algorithm (9) are reported in Table 1 and Figure 1.
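The following minimal Python sketch shows one way to run algorithm (9) for Example 1, resolving the implicit $x_{n+1}$ by an inner fixed-point loop. It is our own reading of the scheme rather than the authors' code, so, depending on implementation details, the iterates need not reproduce Table 1 digit for digit; they do, however, decrease to 0 as the example asserts.

def run_example_1(x0, outer_steps=34):
    # Mappings of Example 1 on C = [0, 1]
    S = lambda x: x / 3.0
    T = lambda x: x / 2.0
    F = lambda x: 2.0 * x
    phi = lambda x: x / 4.0
    P_C = lambda x: min(max(x, 0.0), 1.0)         # metric projection onto [0, 1]
    mu, gamma = 0.25, 0.25

    xs = [x0]
    x_n = x0
    for n in range(1, outer_steps + 1):
        beta, w, lam = 1/(10*n + 1), 1/(20*n + 1), 1/(30*n + 1)
        z = x_n                                   # inner loop for the implicit x_{n+1}
        for _ in range(200):
            y = P_C(beta * S(x_n) + (1 - beta) * z)
            z_new = gamma * lam * phi(w * x_n + (1 - w) * z) + (T(y) - lam * mu * F(T(y)))
            if abs(z_new - z) < 1e-15:
                break
            z = z_new
        x_n = z
        xs.append(x_n)
    return xs

# Run from the initial points used in Table 1:
for x0 in (0.25, 0.45, 0.65, 0.85):
    print(x0, run_example_1(x0)[-1])              # the final iterates are essentially 0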

6. Conclusions

Given the importance and appeal of hierarchical problems, in this research we applied the viscosity technique together with a generalized implicit double midpoint rule to find a fixed point of a nonexpansive mapping in the framework of real Hilbert spaces. We obtained a strong convergence theorem for the proposed algorithm, whose limit solves the fixed point problem and is also the solution of the hierarchical problem under consideration. We also proposed deduced corollaries and showed how to apply the algorithm to other problems, including the nonlinear Fredholm integral equation, the convex minimization problem and hierarchical minimization. Moreover, we conducted a numerical experiment with different initial points to illustrate the effectiveness of the algorithm.

Author Contributions

Conceptualization, T.J., W.K. and A.P.; methodology, T.J.; writing—original draft preparation, T.J.; writing—review and editing, T.J. and W.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received funding from the Rajamangala University of Technology Thanyaburi and Rajamangala University of Technology Lanna.

Acknowledgments

This research was supported by the Science, Research and Innovation Promotion Funding (TSRI) (Grant no. FRB650070/0168). This research block grant was managed under Rajamangala University of Technology Thanyaburi (FRB65E0633M.2).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Hartman, P.; Stampacchia, G. On some nonlinear elliptic differential functional equations. Acta Math. 1966, 115, 271–310.
2. Kirk, W.A. A fixed point theorem for mappings which do not increase distances. Am. Math. Mon. 1965, 72, 1004–1006.
3. Combettes, P.L. A block-iterative surrogate constraint splitting method for quadratic signal recovery. IEEE Trans. Signal Process. 2003, 51, 1771–1782.
4. Gu, G.; Wang, S.; Cho, Y.J. Strong convergence algorithms for hierarchical fixed points problems and variational inequalities. J. Appl. Math. 2011, 2011, 164978.
5. Hirstoaga, S.A. Iterative selection method for common fixed point problems. J. Math. Anal. Appl. 2006, 324, 1020–1035.
6. Iiduka, H.; Yamada, I. A subgradient-type method for the equilibrium problem over the fixed point set and its applications. Optimization 2009, 58, 251–261.
7. Marino, G.; Xu, H.K. Explicit hierarchical fixed point approach to variational inequalities. J. Optim. Theory Appl. 2011, 149, 61–78.
8. Pakdeerat, N.; Sitthithakerngkiet, K. Approximating methods for monotone inclusion and two variational inequality. Bangmod Int. J. Math. Comp. Sci. 2020, 6, 71–89.
9. Slavakis, K.; Yamada, I. Robust wideband beamforming by the hybrid steepest descent method. IEEE Trans. Signal Process. 2007, 55, 4511–4522.
10. Slavakis, K.; Yamada, I.; Sakaniwa, K. Computation of symmetric positive definite Toeplitz matrices by the hybrid steepest descent method. Signal Process. 2003, 83, 1135–1140.
11. Yamada, I. The hybrid steepest descent method for the variational inequality problems over the intersection of fixed point sets of nonexpansive mappings. In Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications; 2001; Volume 8, pp. 473–504.
12. Yao, Y.; Cho, Y.J.; Liou, Y.C. Iterative algorithms for hierarchical fixed points problems and variational inequalities. Math. Comput. Model. 2010, 52, 1697–1705.
13. Yao, Y.; Cho, Y.J.; Liou, Y.C. Hierarchical convergence of an implicit double-net algorithm for nonexpansive semigroups and variational inequality problems. Fixed Point Theory Appl. 2011, 2011, 101.
14. Yao, Y.; Cho, Y.J.; Yang, P.X. An iterative algorithm for a hierarchical problem. J. Appl. Math. 2012, 2012, 320421.
15. Yao, Y.; Liou, Y.C.; Chen, C.P. Hierarchical convergence of a double-net algorithm for equilibrium problems and variational inequality problems. Fixed Point Theory Appl. 2010, 2010, 642584.
16. Yamada, I.; Ogura, N. Hybrid steepest descent method for variational inequality problem over the fixed point set of certain quasi-nonexpansive mappings. Numer. Funct. Anal. Optim. 2004, 25, 619–655.
17. Yamada, I.; Ogura, N.; Shirakawa, N. A numerically robust hybrid steepest descent method for the convexly constrained generalized inverse problems. Contemp. Math. 2002, 313, 269–305.
18. Yao, Y.; Liou, Y.C.; Kang, S.M. Algorithms construction for variational inequalities. Fixed Point Theory Appl. 2011, 2011, 794203.
19. Ceng, L.C.; Ansari, Q.H.; Yao, J.C. Iterative methods for triple hierarchical variational inequalities in Hilbert spaces. J. Optim. Theory Appl. 2011, 151, 489–512.
20. Kumam, P.; Jitpeera, T. Strong convergence of an iterative algorithm for hierarchical problems. Abstr. Appl. Anal. 2014, 2014, 678147.
21. Alghamdi, M.A.; Alghamdi, M.A.; Shahzad, N.; Xu, H.K. The implicit midpoint rule for nonexpansive mappings. Fixed Point Theory Appl. 2014, 2014, 96.
22. Xu, H.K.; Alghamdi, M.A.; Shahzad, N. The viscosity technique for the implicit midpoint rule of nonexpansive mappings in Hilbert spaces. Fixed Point Theory Appl. 2015, 2015, 41.
23. Dhakal, S.; Sintunavarat, W. The viscosity method for the implicit double midpoint rule with numerical results and its applications. Comput. Appl. Math. 2019, 38, 40.
24. Xu, H.K. Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66, 240–256.
25. Goebel, K.; Kirk, W.A. Topics in Metric Fixed Point Theory; Cambridge University Press: Cambridge, UK, 1990; Volume 28.
26. Nieto, J.J.; Xu, H.K. Solvability of nonlinear Volterra and Fredholm equations in weighted spaces. Nonlinear Anal. 1995, 24, 1289–1297.
27. Cabot, A. Proximal point algorithm controlled by a slowly vanishing term: Applications to hierarchical minimization. SIAM J. Optim. 2005, 15, 555–572.
Figure 1. Values of $x_n$.
Table 1. Approximate values generated by algorithm (9) for different initial points $x_0$.
Iterate    x_0 = 0.25      x_0 = 0.45      x_0 = 0.65      x_0 = 0.85
1          0.1255435730    0.2259784314    0.3264132898    0.4268481482
2          0.0629054853    0.1132298736    0.1635542618    0.2138786501
3          0.0314970836    0.0566947504    0.0818924172    0.1070900841
4          0.0157651322    0.0283772379    0.0409893436    0.0536014494
5          0.0078891945    0.0142005501    0.0205119057    0.0268232613
6          0.0039473573    0.0071052432    0.0102631290    0.0134210149
7          0.0019748611    0.0035547500    0.0051346389    0.0067145278
8          0.0009879478    0.0017783060    0.0025686642    0.0033590224
9          0.0004942037    0.0008895667    0.0012849297    0.0016802927
10         0.0002472053    0.0004449695    0.0006427338    0.0008404980
11         0.0001236497    0.0002225694    0.0003214891    0.0004204088
12         0.0000618464    0.0001113235    0.0001608006    0.0002102777
13         0.0000309331    0.0000556796    0.0000804262    0.0001051727
14         0.0000154712    0.0000278481    0.0000402251    0.0000526020
15         0.0000077377    0.0000139279    0.0000201181    0.0000263083
16         0.0000038699    0.0000069658    0.0000100617    0.0000131576
17         0.0000019354    0.0000034838    0.0000050321    0.0000065804
18         0.0000009679    0.0000017423    0.0000025166    0.0000032910
19         0.0000004841    0.0000008713    0.0000012586    0.0000016458
20         0.0000002421    0.0000004358    0.0000006294    0.0000008231
21         0.0000001211    0.0000002179    0.0000003148    0.0000004116
22         0.0000000605    0.0000001090    0.0000001574    0.0000002059
23         0.0000000303    0.0000000545    0.0000000787    0.0000001029
24         0.0000000151    0.0000000273    0.0000000394    0.0000000515
25         0.0000000076    0.0000000136    0.0000000197    0.0000000257
26         0.0000000038    0.0000000068    0.0000000098    0.0000000129
27         0.0000000019    0.0000000034    0.0000000049    0.0000000064
28         0.0000000009    0.0000000017    0.0000000025    0.0000000032
29         0.0000000005    0.0000000009    0.0000000012    0.0000000016
30         0.0000000002    0.0000000004    0.0000000006    0.0000000008
31         0.0000000001    0.0000000002    0.0000000003    0.0000000004
32         0.0000000001    0.0000000001    0.0000000002    0.0000000002
33         0.0000000001    0.0000000001    0.0000000001    0.0000000001
34         0.0000000001    0.0000000001    0.0000000001    0.0000000001