Article

Nonlinear Operators as Concerns Convex Programming and Applied to Signal Processing

by
Anantachai Padcharoen
1 and
Pakeeta Sukprasert
2,*
1
Department of Mathematics, Faculty of Science and Technology, Rambhai Barni Rajabhat University, Chanthaburi 22000, Thailand
2
Department of Mathematics and Computer Science, Faculty of Science and Technology, Rajamangala University of Technology Thanyaburi (RMUTT), Thanyaburi, Pathumthani 12110, Thailand
*
Author to whom correspondence should be addressed.
Mathematics 2019, 7(9), 866; https://doi.org/10.3390/math7090866
Submission received: 16 August 2019 / Revised: 10 September 2019 / Accepted: 13 September 2019 / Published: 19 September 2019
(This article belongs to the Special Issue Iterative Methods for Solving Nonlinear Equations and Systems)

Abstract

:
Splitting methods have recently received much attention because many nonlinear problems arising in applied areas such as signal processing and image restoration are modeled mathematically as a nonlinear equation in which the governing operator is decomposed into the sum of two nonlinear operators. Most investigations of splitting methods are carried out in Hilbert spaces. This work develops an iterative scheme in Banach spaces. We prove a convergence theorem for our iterative scheme and give applications to common zeros of accretive operators, the convexly constrained least squares problem, the convex minimization problem and signal processing.

1. Introduction

Let E be a real Banach space. The zero point problem is as follows:
find u ∈ E such that 0 ∈ Au + Bu,   (1)
where A : E → E is an operator and B : E → 2^E is a set-valued operator. This problem includes, as special cases, convex programming, variational inequalities, the split feasibility problem and the minimization problem [1,2,3,4,5,6,7]. To be more precise, some concrete problems in machine learning, image processing [4,5], signal processing and linear inverse problems can be modeled mathematically in the form of Equation (1).
Signal processing and numerical optimization are independent scientific fields that have always been mutually influencing each other. Perhaps the most convincing example where the two fields have met is compressed sensing (CS) [2]. Several surveys dedicated to these algorithms and their applications in signal processing have appeared [3,6,7,8].
Fixed point iteration is an important tool for solving various problems in a Banach space E. Let K be a nonempty closed convex subset of E and let S : K → K be an operator with at least one fixed point. Then, for u_1 ∈ K:
  • The Picard iterative scheme [9] is defined by:
    u_{n+1} = S u_n, n ∈ ℕ.
  • The Mann iterative scheme [10] is defined by:
    u_{n+1} = (1 − η_n) u_n + η_n S u_n, n ∈ ℕ,
    where {η_n} is a sequence in (0, 1).
  • The Ishikawa iterative scheme [11] is defined by:
    u_{n+1} = (1 − η_n) u_n + η_n S[(1 − ϑ_n) u_n + ϑ_n S u_n], n ∈ ℕ,
    where {η_n} and {ϑ_n} are sequences in (0, 1).
  • The S-iterative scheme [12] is defined by:
    u_{n+1} = (1 − η_n) S u_n + η_n S[(1 − ϑ_n) u_n + ϑ_n S u_n], n ∈ ℕ,
    where {η_n} and {ϑ_n} are sequences in (0, 1).
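As a concrete illustration (ours, not from the paper), the four schemes above can be sketched in Python for a single-valued mapping S; the contraction used in the demo and the constant parameter sequences are our own illustrative choices.

```python
# Illustrative sketch of the four classical fixed point iterations.
# S, the parameter sequences eta/theta, and the iteration counts are our choices.

def picard(S, u, n_iters):
    # u_{n+1} = S u_n
    for _ in range(n_iters):
        u = S(u)
    return u

def mann(S, u, eta, n_iters):
    # u_{n+1} = (1 - eta_n) u_n + eta_n S u_n
    for n in range(n_iters):
        u = (1 - eta(n)) * u + eta(n) * S(u)
    return u

def ishikawa(S, u, eta, theta, n_iters):
    # u_{n+1} = (1 - eta_n) u_n + eta_n S[(1 - theta_n) u_n + theta_n S u_n]
    for n in range(n_iters):
        v = (1 - theta(n)) * u + theta(n) * S(u)
        u = (1 - eta(n)) * u + eta(n) * S(v)
    return u

def s_iteration(S, u, eta, theta, n_iters):
    # u_{n+1} = (1 - eta_n) S u_n + eta_n S[(1 - theta_n) u_n + theta_n S u_n]
    for n in range(n_iters):
        v = (1 - theta(n)) * u + theta(n) * S(u)
        u = (1 - eta(n)) * S(u) + eta(n) * S(v)
    return u
```

For instance, with the contraction S(x) = 0.5x + 1 on ℝ (unique fixed point 2), all four schemes converge to 2 from any starting point.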
Recently, Sahu et al. [13] and Thakur et al. [14] independently introduced the same iterative scheme for nonexpansive mappings in a uniformly convex Banach space:
w_n = (1 − ξ_n) u_n + ξ_n S u_n,
z_n = (1 − ϑ_n) w_n + ϑ_n S w_n,
u_{n+1} = (1 − η_n) S w_n + η_n S z_n, n ∈ ℕ,
where {η_n}, {ϑ_n} and {ξ_n} are sequences in (0, 1). The authors proved that this scheme converges to a fixed point of a contraction mapping faster than all of the schemes above, and provided an example to support their claim.
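The claimed speed advantage can be seen on a toy example. The following sketch (ours; the contraction and constant parameter values are illustrative) runs the three-step scheme next to the Mann scheme:

```python
# Sketch of the three-step scheme of Sahu et al. / Thakur et al. for a single
# mapping S, with constant parameters (our illustrative choices), together
# with the Mann scheme for comparison.

def three_step(S, u, eta=0.5, theta=0.5, xi=0.5, n_iters=50):
    for _ in range(n_iters):
        w = (1 - xi) * u + xi * S(u)
        z = (1 - theta) * w + theta * S(w)
        u = (1 - eta) * S(w) + eta * S(z)
    return u

def mann(S, u, eta=0.5, n_iters=50):
    for _ in range(n_iters):
        u = (1 - eta) * u + eta * S(u)
    return u
```

For the contraction S(x) = 0.5x + 1 with fixed point 2, after the same number of iterations the three-step iterate is markedly closer to the fixed point than the Mann iterate, consistent with the comparison reported in [13,14].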
In this paper, we first develop an iterative scheme for calculating common solutions and using our results to solve the problem in Equation (1). Secondly, we find common solutions of convexly constrained least square problems, convex minimization problems and applied to signal processing.

2. Preliminaries

Let E be a real Banach space with norm ‖·‖ and let E* be its dual. The value of f ∈ E* at u ∈ E is denoted by ⟨u, f⟩. A Banach space E is called strictly convex if ‖(u + v)/2‖ < 1 for all u, v ∈ E with ‖u‖ = ‖v‖ = 1 and u ≠ v. It is called uniformly convex if lim_{n→∞} ‖u_n − v_n‖ = 0 for any two sequences {u_n}, {v_n} in E such that ‖u_n‖ = ‖v_n‖ = 1 and lim_{n→∞} ‖(u_n + v_n)/2‖ = 1.
The (normalized) duality mapping J from E into the family of nonempty (by the Hahn-Banach theorem) weak-star compact subsets of its dual E* is defined by
J(u) = { f ∈ E* : ⟨u, f⟩ = ‖u‖² = ‖f‖² }
for each u ∈ E, where ⟨·,·⟩ denotes the generalized duality pairing.
For an operator A : E → 2^E, we denote its domain, range and graph by
D(A) = { u ∈ E : Au ≠ ∅ },  R(A) = ∪{ Ap : p ∈ D(A) },
and
G(A) = { (u, v) ∈ E × E : u ∈ D(A), v ∈ Au },
respectively. The inverse A⁻¹ of A is defined by u ∈ A⁻¹v if and only if v ∈ Au. If for all u_i ∈ D(A) and v_i ∈ Au_i (i = 1, 2) there is j ∈ J(u_1 − u_2) such that ⟨v_1 − v_2, j⟩ ≥ 0, then A is called accretive.
An accretive operator A in a Banach space E is said to satisfy the range condition if cl D(A) ⊆ R(I + μA) for all μ > 0, where cl D(A) denotes the closure of the domain of A. For μ > 0, the resolvent of A is the single-valued mapping J_μ^A = (I + μA)⁻¹. We know that, for an accretive operator A which satisfies the range condition, A⁻¹(0) = Fix(J_μ^A) for all μ > 0.
A point u K is a fixed point of S provided S u = u . Denote by F i x ( S ) the set of fixed points of S , i.e., F i x ( S ) = { u K : S u = u } .
  • The mapping S is called L-Lipschitz, L > 0, if
    ‖Su − Sv‖ ≤ L‖u − v‖, for all u, v ∈ K.
  • The mapping S is called nonexpansive if
    ‖Su − Sv‖ ≤ ‖u − v‖, for all u, v ∈ K.
  • The mapping S is called quasi-nonexpansive if Fix(S) ≠ ∅ and
    ‖Su − v‖ ≤ ‖u − v‖, for all u ∈ K, v ∈ Fix(S).
From here on, H denotes a real Hilbert space. If A : H → 2^H is an m-accretive operator (see [15,16,17]), then A is called a maximal accretive operator [18], and R(I + μA) = H for all μ > 0 if and only if A is maximal monotone [19]. Denote by dom(h) the domain of a function h : H → (−∞, ∞], i.e.,
dom(h) = { u ∈ H : h(u) < ∞ }.
The subdifferential of h ∈ Γ₀(H) at u ∈ H is the set
∂h(u) = { z ∈ H : h(u) ≤ h(v) + ⟨z, u − v⟩, for all v ∈ H },
where Γ₀(H) denotes the class of all l.s.c. functions from H to (−∞, ∞] with nonempty domains.
Lemma 1
([20]). Let h ∈ Γ₀(H). Then, ∂h is maximal monotone.
We denote by B_λ[v] the closed ball with center v and radius λ:
B_λ[v] = { u ∈ E : ‖v − u‖ ≤ λ }.
Lemma 2
([21]). Let E be a Banach space, and let p > 1 and R > 0 be two fixed numbers. Then, E is uniformly convex if and only if there exists a continuous, strictly increasing, and convex function φ : [0, ∞) → [0, ∞) with φ(0) = 0 such that
‖αu + (1 − α)v‖^p ≤ α‖u‖^p + (1 − α)‖v‖^p − α(1 − α)φ(‖u − v‖),
for all u, v ∈ B_R[0] and α ∈ [0, 1].
Definition 1
([22]). A Banach space H is said to satisfy Opial's condition if, for each sequence {u_n} in H which converges weakly to a point u ∈ H,
lim inf_{n→∞} ‖u_n − u‖ < lim inf_{n→∞} ‖u_n − v‖, for all v ∈ H with v ≠ u.
Lemma 3
([23]). Let K be a nonempty subset of a Banach space E, let S : K → E be a uniformly continuous mapping, and let {u_n} ⊆ K be an approximating fixed point sequence of S. Then, {v_n} is an approximating fixed point sequence of S whenever {v_n} is in K and lim_{n→∞} ‖u_n − v_n‖ = 0.
Lemma 4
([16]). Let K be a nonempty closed convex subset of a uniformly convex Banach space E. If S : K → E is a nonexpansive mapping, then I − S has the demiclosed property with respect to 0.
A subset K of a Banach space E is called a retract of E if there is a continuous mapping Q from E onto K such that Qu = u for all u ∈ K. We call such a Q a retraction of E onto K. It follows that, if a mapping Q is a retraction, then Qv = v for all v in the range of Q. A retraction Q is called sunny if Q(Qu + λ(u − Qu)) = Qu for all u ∈ E and λ ≥ 0. If a sunny retraction Q is also nonexpansive, then K is called a sunny nonexpansive retract of E [24].
Let E be a strictly convex reflexive Banach space and K a nonempty closed convex subset of E. Denote by P_K the (metric) projection from E onto K; namely, for u ∈ E, P_K(u) is the unique point in K with the property
inf{ ‖u − v‖ : v ∈ K } = ‖u − P_K(u)‖.
Let a real Hilbert space H be equipped with an inner product ⟨·,·⟩ and the induced norm ‖·‖. If K is a nonempty closed convex subset of H, the nearest point projection P_K : H → K is the unique sunny nonexpansive retraction of H onto K. It is also known that P_K(u) ∈ K and
⟨u − P_K(u), P_K(u) − v⟩ ≥ 0, for all u ∈ H, v ∈ K.
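As a numerical illustration (ours; the choice of K as a closed Euclidean ball is illustrative), the projection and its variational characterization can be checked directly:

```python
import numpy as np

# Sketch (ours): metric projection onto the closed ball B_r[c] in R^n,
# with a numerical check of <u - P_K(u), P_K(u) - v> >= 0 for v in K.

def project_ball(u, center, radius):
    d = u - center
    dist = np.linalg.norm(d)
    if dist <= radius:
        return u.copy()               # u already lies in K
    return center + (radius / dist) * d   # radial projection onto the sphere

rng = np.random.default_rng(0)
center, radius = np.zeros(3), 1.0
u = 3.0 * rng.normal(size=3)          # a point, typically outside the ball
p = project_ball(u, center, radius)
for _ in range(100):                  # sample points v of K, test the inequality
    v = project_ball(rng.normal(size=3), center, radius)
    assert np.dot(u - p, p - v) >= -1e-10
```

The loop verifies the characterizing inequality above on sampled points of K; failure for any v ∈ K would mean p is not the nearest point.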

3. Main Results

Let K be a nonempty closed convex subset of a Banach space E with Q_K as a sunny nonexpansive retraction. We denote Ψ := Fix(S) ∩ Fix(T).
Lemma 5.
Let K be a nonempty closed convex subset of a Banach space E with Q_K as the sunny nonexpansive retraction, let S, T : K → E be quasi-nonexpansive mappings with Ψ ≠ ∅, and let {η_n}, {ϑ_n} and {ξ_n} be sequences in (0, 1) for all n ∈ ℕ. Let {u_n} be defined by Algorithm 1. Then, for each ū ∈ Ψ, lim_{n→∞} ‖u_n − ū‖ exists and
‖w_n − ū‖ ≤ ‖u_n − ū‖ and ‖z_n − ū‖ ≤ ‖u_n − ū‖, for all n ∈ ℕ.
Algorithm 1: Three-step sunny nonexpansive retraction
Given u_1 ∈ K and sequences {η_n}, {ϑ_n}, {ξ_n} in (0, 1), compute, for n ∈ ℕ:
w_n = Q_K[(1 − ξ_n) u_n + ξ_n S u_n],
z_n = Q_K[(1 − ϑ_n) w_n + ϑ_n T w_n],
u_{n+1} = Q_K[(1 − η_n) S w_n + η_n T z_n].
Proof. 
Let u ¯ Ψ . Then, we have
‖w_n − ū‖ = ‖Q_K[(1 − ξ_n) u_n + ξ_n S u_n] − ū‖
≤ ‖(1 − ξ_n)(u_n − ū) + ξ_n(S u_n − ū)‖
≤ (1 − ξ_n)‖u_n − ū‖ + ξ_n‖S u_n − ū‖
≤ (1 − ξ_n)‖u_n − ū‖ + ξ_n‖u_n − ū‖ = ‖u_n − ū‖,

‖z_n − ū‖ = ‖Q_K[(1 − ϑ_n) w_n + ϑ_n T w_n] − ū‖
≤ ‖(1 − ϑ_n)(w_n − ū) + ϑ_n(T w_n − ū)‖
≤ (1 − ϑ_n)‖w_n − ū‖ + ϑ_n‖T w_n − ū‖
≤ (1 − ϑ_n)‖w_n − ū‖ + ϑ_n‖w_n − ū‖ = ‖w_n − ū‖ ≤ ‖u_n − ū‖,

and

‖u_{n+1} − ū‖ = ‖Q_K[(1 − η_n) S w_n + η_n T z_n] − ū‖
≤ ‖(1 − η_n)(S w_n − ū) + η_n(T z_n − ū)‖
≤ (1 − η_n)‖S w_n − ū‖ + η_n‖T z_n − ū‖
≤ (1 − η_n)‖w_n − ū‖ + η_n‖z_n − ū‖
≤ (1 − η_n)‖u_n − ū‖ + η_n‖u_n − ū‖ = ‖u_n − ū‖.
Therefore,
‖u_{n+1} − ū‖ ≤ ‖u_n − ū‖ ≤ ⋯ ≤ ‖u_1 − ū‖, for all n ∈ ℕ.
Since {‖u_n − ū‖} is monotonically decreasing and bounded below, the sequence {‖u_n − ū‖} is convergent. □
From Lemma 5, we have the following results:
Theorem 1.
Let K be a nonempty closed convex subset of a Banach space E with Q_K as the sunny nonexpansive retraction, let S, T : K → E be quasi-nonexpansive mappings with Ψ ≠ ∅, and let {η_n}, {ϑ_n} and {ξ_n} be sequences of real numbers for which 0 < c_1 ≤ η_n ≤ ĉ_1 < 1, 0 < c_2 ≤ ϑ_n ≤ ĉ_2 < 1 and 0 < c_3 ≤ ξ_n ≤ ĉ_3 < 1 for all n ∈ ℕ. Let u_1 ∈ K, P_Ψ(u_1) = u and let {u_n} be defined by Algorithm 1. Then, we have the following:
(i) 
{u_n} lies in a closed convex bounded set B_λ[u] ∩ K, where λ is a constant in (0, ∞) such that ‖u_1 − u‖ ≤ λ.
(ii) 
If S is uniformly continuous, then lim_{n→∞} ‖u_n − S u_n‖ = 0 and lim_{n→∞} ‖u_n − T u_n‖ = 0.
(iii) 
If E satisfies Opial's condition and I − S and I − T are demiclosed at 0, then {u_n} converges weakly to an element of Ψ ∩ B_λ[u].
Proof. 
(i) Since u ∈ Ψ, from Lemma 5, we obtain
‖u_{n+1} − u‖ ≤ ‖u_n − u‖ ≤ ‖u_1 − u‖ ≤ λ, for all n ∈ ℕ.
Therefore, {u_n} lies in the closed convex bounded set B_λ[u] ∩ K.
(ii) Suppose that S is uniformly continuous. Using Lemma 5, we get that {u_n}, {z_n} and {w_n} lie in B_λ[u] ∩ K, and hence
‖T w_n − u‖ ≤ λ, ‖S w_n − u‖ ≤ λ and ‖S u_n − u‖ ≤ λ, for all n ∈ ℕ.
Using Lemma 2 with p = 2 and R = λ, from Algorithm 1, we obtain
‖u_{n+1} − u‖² ≤ ‖(1 − η_n)(S w_n − u) + η_n(T z_n − u)‖²
≤ (1 − η_n)‖S w_n − u‖² + η_n‖T z_n − u‖² − η_n(1 − η_n)φ(‖S w_n − T z_n‖)
≤ (1 − η_n)‖w_n − u‖² + η_n‖z_n − u‖² − η_n(1 − η_n)φ(‖S w_n − T z_n‖)
≤ (1 − η_n)‖u_n − u‖² + η_n‖u_n − u‖² − η_n(1 − η_n)φ(‖S w_n − T z_n‖)
= ‖u_n − u‖² − η_n(1 − η_n)φ(‖S w_n − T z_n‖),
which implies that
η_n(1 − η_n)φ(‖S w_n − T z_n‖) ≤ ‖u_n − u‖² − ‖u_{n+1} − u‖².
Note that c_1(1 − ĉ_1) ≤ η_n(1 − η_n). Thus,
c_1(1 − ĉ_1) Σ_{i=1}^{n} φ(‖S w_i − T z_i‖) ≤ ‖u_1 − u‖² − ‖u_{n+1} − u‖², for all n ∈ ℕ.
Letting n → ∞, we obtain
c_1(1 − ĉ_1) Σ_{n=1}^{∞} φ(‖S w_n − T z_n‖) ≤ ‖u_1 − u‖² < ∞.
Therefore, lim_{n→∞} φ(‖S w_n − T z_n‖) = 0, and the properties of φ give lim_{n→∞} ‖S w_n − T z_n‖ = 0. From the relations in Algorithm 1, we obtain
‖w_n − u‖² ≤ (1 − ξ_n)‖u_n − u‖² + ξ_n‖S u_n − u‖² − ξ_n(1 − ξ_n)φ(‖u_n − S u_n‖)
≤ (1 − ξ_n)‖u_n − u‖² + ξ_n‖u_n − u‖² − ξ_n(1 − ξ_n)φ(‖u_n − S u_n‖)
= ‖u_n − u‖² − ξ_n(1 − ξ_n)φ(‖u_n − S u_n‖)
and
‖z_n − u‖² ≤ ‖(1 − ϑ_n)(w_n − u) + ϑ_n(T w_n − u)‖²
≤ (1 − ϑ_n)‖w_n − u‖² + ϑ_n‖T w_n − u‖² − ϑ_n(1 − ϑ_n)φ(‖w_n − T w_n‖)
≤ (1 − ϑ_n)‖w_n − u‖² + ϑ_n‖w_n − u‖² − ϑ_n(1 − ϑ_n)φ(‖w_n − T w_n‖)
= ‖w_n − u‖² − ϑ_n(1 − ϑ_n)φ(‖w_n − T w_n‖)
≤ ‖u_n − u‖² − ϑ_n(1 − ϑ_n)φ(‖w_n − T w_n‖).
Combining the estimates above, we obtain
‖u_{n+1} − u‖² ≤ ‖(1 − η_n)(S w_n − u) + η_n(T z_n − u)‖²
≤ (1 − η_n)‖S w_n − u‖² + η_n‖T z_n − u‖² − η_n(1 − η_n)φ(‖S w_n − T z_n‖)
≤ (1 − η_n)‖w_n − u‖² + η_n‖z_n − u‖² − η_n(1 − η_n)φ(‖S w_n − T z_n‖)
≤ (1 − η_n)[‖u_n − u‖² − ξ_n(1 − ξ_n)φ(‖u_n − S u_n‖)] + η_n[‖u_n − u‖² − ϑ_n(1 − ϑ_n)φ(‖w_n − T w_n‖)] − η_n(1 − η_n)φ(‖S w_n − T z_n‖)
= ‖u_n − u‖² − (1 − η_n)ξ_n(1 − ξ_n)φ(‖u_n − S u_n‖) − η_n ϑ_n(1 − ϑ_n)φ(‖w_n − T w_n‖) − η_n(1 − η_n)φ(‖S w_n − T z_n‖).
Note that (1 − ĉ_1)c_3(1 − ĉ_3) ≤ (1 − η_n)ξ_n(1 − ξ_n) and c_1 c_2(1 − ĉ_2) ≤ η_n ϑ_n(1 − ϑ_n). Thus,
(1 − ĉ_1)c_3(1 − ĉ_3) Σ_{i=1}^{n} φ(‖u_i − S u_i‖) ≤ ‖u_1 − u‖² − ‖u_{n+1} − u‖², for all n ∈ ℕ.
It follows that lim_{n→∞} ‖u_n − S u_n‖ = 0. Note that
‖w_n − u_n‖ = ‖Q_K[(1 − ξ_n) u_n + ξ_n S u_n] − Q_K[u_n]‖ ≤ ξ_n‖S u_n − u_n‖ → 0 as n → ∞.
Since S is uniformly continuous, it follows from Lemma 3 that lim_{n→∞} ‖w_n − S w_n‖ = 0. Thus, from lim_{n→∞} ‖S w_n − T z_n‖ = 0, we obtain lim_{n→∞} ‖u_n − T u_n‖ = 0.
(iii) By assumption, E satisfies Opial's condition. From Lemma 5, lim_{n→∞} ‖u_n − w‖ exists for every w ∈ Ψ ∩ B_λ[u] ∩ K. Suppose there are two subsequences {u_{n_q}} and {u_{m_l}} which converge weakly to two distinct points p and q in B_λ[u] ∩ K, respectively. Then, since both I − S and I − T have the demiclosed property at 0, we have Sp = Tp = p and Sq = Tq = q. Moreover, using Opial's condition,
lim_{n→∞} ‖u_n − p‖ = lim_{q→∞} ‖u_{n_q} − p‖ < lim_{q→∞} ‖u_{n_q} − q‖ = lim_{n→∞} ‖u_n − q‖.
Similarly, we obtain
lim_{n→∞} ‖u_n − q‖ < lim_{n→∞} ‖u_n − p‖,
which is a contradiction. Therefore, p = q. Hence, the sequence {u_n} converges weakly to an element of Ψ ∩ B_λ[u] ∩ K.  □
Theorem 2.
Let K be a nonempty closed convex subset of a Banach space E with Q_K as the sunny nonexpansive retraction, let S, T : K → E be nonexpansive mappings with Ψ ≠ ∅, and let {η_n}, {ϑ_n} and {ξ_n} be sequences of real numbers for which 0 < c_1 ≤ η_n ≤ ĉ_1 < 1, 0 < c_2 ≤ ϑ_n ≤ ĉ_2 < 1 and 0 < c_3 ≤ ξ_n ≤ ĉ_3 < 1 for all n ∈ ℕ. Let u_1 ∈ K, P_Ψ(u_1) = u and let {u_n} be defined by Algorithm 1. Then, we have the following:
(i) 
{u_n} lies in a closed convex bounded set B_λ[u] ∩ K, where λ is a constant in (0, ∞) such that ‖u_1 − u‖ ≤ λ.
(ii) 
lim_{n→∞} ‖u_n − S u_n‖ = 0 and lim_{n→∞} ‖u_n − T u_n‖ = 0.
(iii) 
If E satisfies Opial's condition, then {u_n} converges weakly to an element of Ψ ∩ B_λ[u].
Proof. 
It follows from Theorem 1. □
Corollary 1.
Let K be a nonempty closed convex subset of a real Hilbert space H, let S, T : K → K be nonexpansive mappings with Ψ ≠ ∅, and let {η_n}, {ϑ_n} and {ξ_n} be sequences of real numbers for which 0 < c_1 ≤ η_n ≤ ĉ_1 < 1, 0 < c_2 ≤ ϑ_n ≤ ĉ_2 < 1 and 0 < c_3 ≤ ξ_n ≤ ĉ_3 < 1 for all n ∈ ℕ. Let {u_n} be defined by
w_n = (1 − ξ_n) u_n + ξ_n S u_n,
z_n = (1 − ϑ_n) w_n + ϑ_n T w_n,
u_{n+1} = (1 − η_n) S w_n + η_n T z_n, n ∈ ℕ.
Then, {u_n} converges weakly to an element of Ψ.
Proof. 
It follows from Theorem 1. □

4. Applications

4.1. Common Zeros of Accretive Operators

In the scheme of Corollary 1, we set S = J_μ^A and T = J_μ^B and inherit the convergence analysis for solving Equation (1).
Theorem 3.
Let K be a nonempty closed convex subset of a r.u.c. Banach space E satisfying Opial's condition. Let A : D(A) ⊆ K → 2^E and B : D(B) ⊆ K → 2^E be accretive operators for which D(A) ⊆ K ⊆ ∩_{μ>0} R(I + μA), D(B) ⊆ K ⊆ ∩_{μ>0} R(I + μB) and A⁻¹(0) ∩ B⁻¹(0) ≠ ∅. Let {η_n}, {ϑ_n} and {ξ_n} be sequences of real numbers for which 0 < c_1 ≤ η_n ≤ ĉ_1 < 1, 0 < c_2 ≤ ϑ_n ≤ ĉ_2 < 1 and 0 < c_3 ≤ ξ_n ≤ ĉ_3 < 1 for all n ∈ ℕ. Let μ > 0, u_1 ∈ K and P_{A⁻¹(0) ∩ B⁻¹(0)}(u_1) = u. Let {u_n} be defined by
w_n = (1 − ξ_n) u_n + ξ_n J_μ^A u_n,
z_n = (1 − ϑ_n) w_n + ϑ_n J_μ^B w_n,
u_{n+1} = (1 − η_n) J_μ^A w_n + η_n J_μ^B z_n, n ∈ ℕ.
Then, we have the following:
(i) 
{u_n} lies in a closed convex bounded set B_λ[u] ∩ K, where λ is a constant in (0, ∞) such that ‖u_1 − u‖ ≤ λ.
(ii) 
lim_{n→∞} ‖u_n − J_μ^A u_n‖ = 0 and lim_{n→∞} ‖u_n − J_μ^B u_n‖ = 0.
(iii) 
{u_n} converges weakly to an element of A⁻¹(0) ∩ B⁻¹(0) ∩ B_λ[u].
Proof. 
By the assumptions D(A) ⊆ K ⊆ ∩_{μ>0} R(I + μA) and D(B) ⊆ K ⊆ ∩_{μ>0} R(I + μB), we know that J_μ^A, J_μ^B : K → K are nonexpansive. Note that D(A) ∪ D(B) ⊆ K, and hence
u ∈ A⁻¹(0) ∩ B⁻¹(0)
⇔ u ∈ D(A) ∩ D(B) with 0 ∈ Au and 0 ∈ Bu
⇔ u ∈ K with J_μ^A u = u and J_μ^B u = u
⇔ u ∈ Fix(J_μ^A) ∩ Fix(J_μ^B) ⊆ K.
Next, set S = J_μ^A and T = J_μ^B. Hence, Theorem 3 follows in the same way as Theorem 2. □

4.2. Convexly Constrained Least Square Problem

We provide applications of Theorem 2 for finding common solutions of two convexly constrained least squares problems. We consider the following problem:
Let A, B ∈ B(H) and y, z ∈ H. Define φ, ψ : H → ℝ by
φ(u) = ‖Au − y‖² and ψ(u) = ‖Bu − z‖², for all u ∈ H,
where H is a real Hilbert space.
Let K be a nonempty closed convex subset of H. The objective is to find b ∈ K such that
b ∈ argmin_{u∈K} φ(u) ∩ argmin_{u∈K} ψ(u),
where
argmin_{u∈K} φ(u) := { ū ∈ K : φ(ū) = inf_{u∈K} φ(u) }.
Proposition 1
([8]). Let H be a real Hilbert space, A ∈ B(H) with adjoint A*, and y ∈ H. Let K be a nonempty closed convex subset of H. Let b ∈ K and δ ∈ (0, ∞). Then, the following statements are equivalent:
(i) 
b solves the following problem:
min_{u∈K} ‖Au − y‖².
(ii) 
b = P_K(b − δA*(Ab − y)).
(iii) 
⟨Av − Ab, y − Ab⟩ ≤ 0, for all v ∈ K.
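As an illustration of part (ii) used as an algorithm (our own sketch; the matrix A, the data y, the choice K = nonnegative orthant and the step size δ are illustrative), one can solve the constrained least squares problem by iterating the map b ↦ P_K(b − δA*(Ab − y)):

```python
import numpy as np

# Sketch (ours) of Proposition 1(ii) as an algorithm: iterate the map
# b -> P_K(b - delta * A^T (A b - y)) for the illustrative choice
# K = {x : x >= 0}; A, y and delta below are also illustrative.

def project_nonneg(u):
    # metric projection of u onto the nonnegative orthant
    return np.maximum(u, 0.0)

def constrained_lsq(A, y, delta, n_iters=5000):
    b = np.zeros(A.shape[1])
    for _ in range(n_iters):
        b = project_nonneg(b - delta * A.T @ (A @ b - y))
    return b

rng = np.random.default_rng(1)
A = rng.normal(size=(8, 4))
x_true = np.array([1.0, 0.5, 2.0, 0.0])      # a feasible (nonnegative) signal
y = A @ x_true
delta = 1.0 / np.linalg.norm(A, 2) ** 2       # delta in (0, 2/||A||^2)
b = constrained_lsq(A, y, delta)
```

On exit, b is numerically a fixed point of the projected gradient map, which by Proposition 1 means it solves min_{u∈K} ‖Au − y‖².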
Theorem 4.
Let K be a nonempty closed convex subset of a real Hilbert space H, let y, z ∈ H and A, B ∈ B(H), and suppose the solution set of the common minimization problem above is nonempty. Let {η_n}, {ϑ_n} and {ξ_n} be sequences of real numbers for which 0 < c_1 ≤ η_n ≤ ĉ_1 < 1, 0 < c_2 ≤ ϑ_n ≤ ĉ_2 < 1 and 0 < c_3 ≤ ξ_n ≤ ĉ_3 < 1 for all n ∈ ℕ. Let δ ∈ (0, 2 min{1/‖A‖², 1/‖B‖²}), u_1 ∈ K, P_{argmin_{u∈K} φ(u) ∩ argmin_{u∈K} ψ(u)}(u_1) = u and let {u_n} be defined by
w_n = (1 − ξ_n) u_n + ξ_n S u_n,
z_n = (1 − ϑ_n) w_n + ϑ_n T w_n,
u_{n+1} = (1 − η_n) S w_n + η_n T z_n, n ∈ ℕ,
where S, T : K → K are defined by Su = P_K(u − δA*(Au − y)) and Tu = P_K(u − δB*(Bu − z)) for all u ∈ K. Then, we have the following:
(i) 
{u_n} lies in the closed ball B_λ[u], where λ is a constant in (0, ∞) such that ‖u_1 − u‖ ≤ λ.
(ii) 
lim_{n→∞} ‖u_n − S u_n‖ = 0 and lim_{n→∞} ‖u_n − T u_n‖ = 0.
(iii) 
{u_n} converges weakly to an element of argmin_{u∈K} φ(u) ∩ argmin_{u∈K} ψ(u) ∩ B_λ[u].
Proof. 
Note that ∇φ(u) = 2A*(Au − y) for all u ∈ H; hence ‖∇φ(u) − ∇φ(v)‖ ≤ 2‖A‖²‖u − v‖ for all u, v ∈ H. Thus, ∇φ is 1/(2‖A‖²)-ism, and hence the mapping u ↦ u − δA*(Au − y) is nonexpansive from K into H for δ ∈ (0, 2/‖A‖²). Therefore, S and T as defined above are nonexpansive mappings from K into itself for δ ∈ (0, 2 min{1/‖A‖², 1/‖B‖²}). Hence, Theorem 4 follows in the same way as Theorem 2. □

4.3. Convex Minimization Problem

We give an application to common solutions of convex programming problems in a Hilbert space H. We consider the following problem:
Let g_1, g_2 : H → (−∞, ∞] be proper l.s.c. functions. The objective is to find x ∈ H such that
x ∈ (∂g_1)⁻¹(0) ∩ (∂g_2)⁻¹(0).
Note that J_μ^{∂g_1} = prox_{μ g_1}.
Theorem 5.
Let K be a nonempty closed convex subset of a real Hilbert space H. Let g_1, g_2 ∈ Γ₀(H), and suppose the solution set of the problem above is nonempty. Let {η_n}, {ϑ_n} and {ξ_n} be sequences of real numbers for which 0 < c_1 ≤ η_n ≤ ĉ_1 < 1, 0 < c_2 ≤ ϑ_n ≤ ĉ_2 < 1 and 0 < c_3 ≤ ξ_n ≤ ĉ_3 < 1 for all n ∈ ℕ. Let μ > 0, u_1 ∈ K and P_{(∂g_1)⁻¹(0) ∩ (∂g_2)⁻¹(0)}(u_1) = u. Let {u_n} be defined by
w_n = (1 − ξ_n) u_n + ξ_n prox_{μ g_1}(u_n),
z_n = (1 − ϑ_n) w_n + ϑ_n prox_{μ g_2}(w_n),
u_{n+1} = (1 − η_n) prox_{μ g_1}(w_n) + η_n prox_{μ g_2}(z_n), n ∈ ℕ.
Then, we have the following:
(i) 
{u_n} lies in the closed ball B_λ[u], where λ is a constant in (0, ∞) such that ‖u_1 − u‖ ≤ λ.
(ii) 
lim_{n→∞} ‖u_n − prox_{μ g_1}(u_n)‖ = 0 and lim_{n→∞} ‖u_n − prox_{μ g_2}(u_n)‖ = 0.
(iii) 
{u_n} converges weakly to an element of (∂g_1)⁻¹(0) ∩ (∂g_2)⁻¹(0) ∩ B_λ[u].
Proof. 
Using Lemma 1, ∂g_1 is maximal monotone, and hence R(I + μ∂g_1) = H. By the maximal monotonicity of ∂g_1, the resolvent J_μ^{∂g_1} = prox_{μ g_1} : H → H is nonexpansive. Similarly, J_μ^{∂g_2} = prox_{μ g_2} : H → H is nonexpansive. Hence, Theorem 5 follows in the same way as Theorem 2. □

4.4. Signal Processing

We consider some applications of our algorithm to inverse problems arising in signal processing. For example, consider the following underdetermined linear system:
y = Au + e,
where u ∈ ℝ^N is the signal to be recovered, y ∈ ℝ^M is the observed or measured data corrupted by the noise e, and A : ℝ^N → ℝ^M is a bounded linear observation operator, which models a measurement process with loss of information. A successful model for finding solutions of this linear inverse problem is the convex unconstrained minimization problem:
min_{u∈ℝ^N} (1/2)‖Au − y‖² + d‖u‖₁,
where d > 0 and ‖·‖₁ is the ℓ₁ norm. Thus, we can find a solution by applying our method with g_1(u) = (1/2)‖Au − y‖² and g_2(u) = d‖u‖₁. For any α ∈ (0, 2/L], where L is the Lipschitz constant of ∇g_1, the corresponding forward-backward operator J_α^{g_1, d‖·‖₁} is defined as
J_α^{g_1, d‖·‖₁}(u) = prox_{αd‖·‖₁}(u − α∇g_1(u)),
where g_1 is the squared loss term of the Lasso problem. The proximity operator of the ℓ₁ norm is the componentwise shrinkage operator
(prox_{αd‖·‖₁}(u))_i = max(|u_i| − αd, 0) · sgn(u_i), i = 1, …, N,
where sgn(·) is the signum function.
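A direct implementation of this shrinkage operator (our sketch; the test vectors are illustrative):

```python
import numpy as np

# Componentwise soft-thresholding: (shrink(u, t))_i = max(|u_i| - t, 0) * sgn(u_i),
# with threshold t = alpha * d. This is the proximity operator of t * ||.||_1.

def shrink(u, t):
    return np.maximum(np.abs(u) - t, 0.0) * np.sign(u)
```

Components whose magnitude does not exceed the threshold are set exactly to zero, which is what promotes sparsity in the recovered signal; shrink(u, t) is the minimizer of x ↦ ½‖x − u‖² + t‖x‖₁.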
where sgn ( · ) is the signum function. We apply the algorithm to the problem in Equation (22) follow as Algorithm 2:
Algorithm 2: Three-step forward-backward operator
Mathematics 07 00866 i002
In our experiment, we generated a sparse signal u ∈ ℝ^N with a prescribed number of nonzero spikes. The matrix A ∈ ℝ^{M×N} was generated from a normal distribution with mean zero and unit variance. The observation y was corrupted by Gaussian noise distributed normally with mean 0 and variance 10⁻⁴. We compared our Algorithm 2 with SPGA [12]. We let η_n = ϑ_n = ξ_n = 0.5, α = 0.1 and d = 0.01 in both Algorithm 2 and SPGA. The experiment was initialized with u_1 = A^T y and terminated when ‖u_{n+1} − u_n‖/‖u_n‖ < 10⁻⁴. The restoration accuracy was measured by the mean squared error MSE = ‖u* − u‖²/N, where u* is the estimated signal of u. All codes were written in Matlab 2016b and run on a Dell Core i5 laptop. We present the numerical comparison of the results in Figure 1, Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6.
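The experiment can be reproduced in outline with the following Python sketch (ours, not the authors' Matlab code). The dimensions are reduced to N = 256, M = 128 with 10 spikes so the script runs quickly, the sensing matrix is column-normalized, and we take α = 1/‖A‖² rather than the paper's fixed α = 0.1 because of the different scaling:

```python
import numpy as np

# Small-scale sketch (ours) of the experiment: recover a sparse signal from
# y = A u + e with the three-step scheme applied to the forward-backward
# operator of the Lasso problem. All dimensions and the step size differ
# from the paper's setup, as noted above.

rng = np.random.default_rng(42)
N, M, spikes = 256, 128, 10
u_true = np.zeros(N)
u_true[rng.choice(N, size=spikes, replace=False)] = rng.choice([-1.0, 1.0], size=spikes)
A = rng.normal(size=(M, N)) / np.sqrt(M)       # normalized Gaussian sensing matrix
y = A @ u_true + 1e-2 * rng.normal(size=M)     # noisy observations

d = 0.01                                       # regularization weight, as in the paper
alpha = 1.0 / np.linalg.norm(A, 2) ** 2        # step size in (0, 2/L]

def shrink(v, t):
    return np.maximum(np.abs(v) - t, 0.0) * np.sign(v)

def fb(v):                                     # forward-backward operator
    return shrink(v - alpha * A.T @ (A @ v - y), alpha * d)

eta = theta = xi = 0.5                         # parameters as in the paper
u = A.T @ y                                    # initial point u_1 = A^T y
for _ in range(1000):
    w = (1 - xi) * u + xi * fb(u)
    z = (1 - theta) * w + theta * fb(w)
    u_next = (1 - eta) * fb(w) + eta * fb(z)
    if np.linalg.norm(u_next - u) / max(np.linalg.norm(u), 1e-12) < 1e-6:
        u = u_next
        break
    u = u_next

mse = np.sum((u - u_true) ** 2) / N            # restoration accuracy
```

With these (illustrative) settings the recovered signal closely matches the true sparse signal, mirroring the behavior reported in the figures.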

5. Conclusions

In this work, we introduced a modified iterative scheme in Banach spaces and applied it to common zeros of accretive operators, the convexly constrained least squares problem, the convex minimization problem and signal processing. In the case of signal processing, the forward-backward method in Algorithm 2 was compared with SPGA, as proposed in [12]. The numerical results show that Algorithm 2 has better convergence behavior than SPGA when the same step sizes are used for both.

Author Contributions

Writing, original draft: A.P. and P.S.; data analysis: A.P. and P.S.; formal analysis and methodology: A.P. and P.S.

Funding

This research was funded by Rajamangala University of Technology Thanyaburi (RMUTT).

Acknowledgments

The first author thanks Rambhai Barni Rajabhat University for the support. Pakeeta Sukprasert was financially supported by Rajamangala University of Technology Thanyaburi (RMUTT).

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
l.s.c.: lower semicontinuous, convex
B(H): the set of all bounded and linear operators from H into itself
r.u.c.: real uniformly convex

References

  1. Kankam, K.; Pholasa, N.; Cholamjiak, P. On convergence and complexity of the modified forward–backward method involving new linesearches for convex minimization. Math. Meth. Appl. Sci. 2019, 42, 1352–1362.
  2. Candès, E.J.; Wakin, M.B. An introduction to compressive sampling. IEEE Signal Process. Mag. 2008, 25, 21–30.
  3. Suantai, S.; Kesornprom, S.; Cholamjiak, P. A new hybrid CQ algorithm for the split feasibility problem in Hilbert spaces and its applications to compressed sensing. Mathematics 2019, 7, 789.
  4. Kitkuan, D.; Kumam, P.; Padcharoen, A.; Kumam, W.; Thounthong, P. Algorithms for zeros of two accretive operators for solving convex minimization problems and its application to image restoration problems. J. Comput. Appl. Math. 2019, 354, 471–495.
  5. Padcharoen, A.; Kumam, P.; Cho, Y.J. Split common fixed point problems for demicontractive operators. Numer. Algorithms 2019, 82, 297–320.
  6. Cholamjiak, P.; Shehu, Y. Inertial forward-backward splitting method in Banach spaces with application to compressed sensing. Appl. Math. 2019, 64, 409–435.
  7. Jirakitpuwapat, W.; Kumam, P.; Cho, Y.J.; Sitthithakerngkiet, K. A general algorithm for the split common fixed point problem with its applications to signal processing. Mathematics 2019, 7, 226.
  8. Combettes, P.L.; Wajs, V.R. Signal recovery by proximal forward–backward splitting. Multiscale Model. Simul. 2005, 4, 1168–1200.
  9. Picard, E. Mémoire sur la théorie des équations aux dérivées partielles et la méthode des approximations successives. J. Math. Pures Appl. 1890, 231, 145–210.
  10. Mann, W.R. Mean value methods in iteration. Proc. Am. Math. Soc. 1953, 4, 506–510.
  11. Ishikawa, S. Fixed points by a new iteration method. Proc. Am. Math. Soc. 1974, 44, 147–150.
  12. Agarwal, R.P.; O'Regan, D.; Sahu, D.R. Iterative construction of fixed points of nearly asymptotically nonexpansive mappings. J. Nonlinear Convex Anal. 2007, 8, 61–79.
  13. Sahu, V.K.; Pathak, H.K.; Tiwari, R. Convergence theorems for new iteration scheme and comparison results. Aligarh Bull. Math. 2016, 35, 19–42.
  14. Thakur, B.S.; Thakur, D.; Postolache, M. New iteration scheme for approximating fixed point of non-expansive mappings. Filomat 2016, 30, 2711–2720.
  15. Chang, S.S.; Wen, C.F.; Yao, J.C. Zero point problem of accretive operators in Banach spaces. Bull. Malays. Math. Sci. Soc. 2019, 42, 105–118.
  16. Browder, F.E. Nonlinear mappings of nonexpansive and accretive type in Banach spaces. Bull. Am. Math. Soc. 1967, 73, 875–882.
  17. Browder, F.E. Semicontractive and semiaccretive nonlinear mappings in Banach spaces. Bull. Am. Math. Soc. 1968, 7, 660–665.
  18. Cioranescu, I. Geometry of Banach Spaces, Duality Mapping and Nonlinear Problems; Kluwer: Amsterdam, The Netherlands, 1990.
  19. Takahashi, W. Nonlinear Functional Analysis, Fixed Point Theory and Its Applications; Yokohama Publishers: Yokohama, Japan, 2000.
  20. Rockafellar, R.T. On the maximal monotonicity of subdifferential mappings. Pac. J. Math. 1970, 33, 209–216.
  21. Xu, H.K. Inequalities in Banach spaces with applications. Nonlinear Anal. 1991, 16, 1127–1138.
  22. Opial, Z. Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 1967, 73, 591–597.
  23. Sahu, D.R.; Pitea, A.; Verma, M. A new iteration technique for nonlinear operators as concerns convex programming and feasibility problems. Numer. Algorithms 2019.
  24. Goebel, K.; Reich, S. Uniform Convexity, Hyperbolic Geometry and Non Expansive Mappings; Marcel Dekker: New York, NY, USA; Basel, Switzerland, 1984.
Figure 1. From top to bottom: Original signal, observation data, recovered signal by Algorithm 2 and SPGA with N = 4096 , M = 1024 and 10 spikes, respectively.
Figure 2. Comparison of the MSE of the two algorithms for the recovered signal with N = 4096, M = 1024 and 10 spikes.
Figure 3. From top to bottom: Original signal, observation data, recovered signal by Algorithm 2 and SPGA with N = 4096 , M = 1024 and 30 spikes, respectively.
Figure 4. Comparison of the MSE of the two algorithms for the recovered signal with N = 4096, M = 1024 and 30 spikes.
Figure 5. From top to bottom: Original signal, observation data, recovered signal by Algorithm 2 and SPGA with N = 4096 , M = 1024 and 50 spikes, respectively.
Figure 6. Comparison of the MSE of the two algorithms for the recovered signal with N = 4096, M = 1024 and 50 spikes.
