Iterative Methods for the MSCFP for Strictly Pseudocontractive Mappings
Abstract
In this paper, we consider the multiple-set split common fixed point problem (MSCFP) in Hilbert spaces. We first study several key properties of strictly pseudocontractive mappings, in particular their stability under convex combination. By utilizing these properties, we propose new iterative methods for solving this problem as well as several related problems. Under mild conditions, we establish weak convergence of the proposed methods, which extends existing work from the case of two sets to the case of multiple sets. As an application, we apply the theoretical results to the multiple-set split equality problem and to elastic net regularization.
1. Introduction
Furthermore, in [19] we extended the above results from the class of firmly nonexpansive mappings to the class of strictly pseudocontractive mappings.
Inspired by the above work, we continue to develop and investigate methods for solving the MSCFP in Hilbert spaces. We first explore several properties of strictly pseudocontractive mappings and, in particular, establish their stability under convex combination. Exploiting these properties, we propose a new iterative algorithm for solving the MSCFP, as well as the MSFP. Under mild conditions, we obtain weak convergence of the proposed algorithm. Our results extend related work from the case of two sets to the case of multiple sets.
2. Preliminary
Moreover, the fixed point set of T is convex and closed. We now collect further properties of strictly pseudocontractive mappings.
Lemma 1. A mapping T : H⟶H is k-strictly pseudocontractive with k < 1 if and only if there is a nonexpansive mapping R : H⟶H such that

R = kI + (1 − k)T; equivalently, T = (R − kI)/(1 − k). (13)
Proof. “⇒” Assume T is k-strictly pseudocontractive. Let R = kI + (1 − k)T. It is easy to verify that R fulfils (13). It remains to show that R is nonexpansive. To this end, fix any x, z ∈ H. It then follows from (8) and the property of strictly pseudocontractive mappings that
Hence, we have ‖Rx − Rz‖ ≤ ‖x − z‖; that is, R is nonexpansive.
“⇐” Assume that there is a nonexpansive mapping R such that (13) follows. Choose any x, z ∈ H. It then follows from (8) and the property of nonexpansive mappings that
Hence, T is strictly pseudocontractive, and thus, the proof is complete.
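As a numerical illustration of Lemma 1 (not part of the original analysis), the following sketch takes T to be the metric projection onto the unit ball of ℝ², which is firmly nonexpansive and hence (−1)-strictly pseudocontractive, and checks empirically that R = kI + (1 − k)T is nonexpansive on random pairs of points. The specific mapping and sampling are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# T: metric projection onto the closed unit ball of R^2.
# Projections onto convex sets are firmly nonexpansive,
# hence (-1)-strictly pseudocontractive.
def T(x):
    n = np.linalg.norm(x)
    return x if n <= 1 else x / n

k = -1.0

# Lemma 1: R = kI + (1 - k)T should be nonexpansive (here R = 2T - I).
def R(x):
    return k * x + (1 - k) * T(x)

# Empirical check of nonexpansiveness on random pairs of points.
for _ in range(1000):
    x, z = rng.normal(size=2), rng.normal(size=2)
    assert np.linalg.norm(R(x) - R(z)) <= np.linalg.norm(x - z) + 1e-12
```

Here R = 2T − I is the reflection with respect to the unit ball, which is indeed nonexpansive, in agreement with the lemma.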
Remark 2. Note that a firmly nonexpansive mapping is (−1)-strictly pseudocontractive. It is well known that a mapping T is firmly nonexpansive if and only if there is a nonexpansive mapping R such that T = (I + R)/2. The following lemma can be regarded as an extension of this assertion.
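For concreteness, substituting k = −1 into the representation in Lemma 1 recovers exactly the classical characterization recalled in Remark 2:

```latex
R = kI + (1-k)T \,\big|_{k=-1} = -I + 2T
\quad\Longleftrightarrow\quad
T = \frac{I + R}{2}.
```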
Lemma 3. Assume that Ti : H⟶H is strictly pseudocontractive for each i = 1, 2, ⋯, t. Let T = w1T1 + w2T2 + ⋯ + wtTt, where wi > 0 and w1 + w2 + ⋯ + wt = 1. If ⋂iF(Ti) is nonempty, then F(T) = ⋂iF(Ti).
Proof. Since ⋂iF(Ti) ⊆ F(T) clearly holds, it suffices to show that F(T) ⊆ ⋂iF(Ti). Fix any z ∈ ⋂iF(Ti) and choose any x ∈ F(T). By our hypothesis, for each i there exists ki < 1 such that

⟨x − Tix, x − z⟩ ≥ ((1 − ki)/2)‖x − Tix‖².

Thus,

0 = ⟨x − Tx, x − z⟩ = Σi wi⟨x − Tix, x − z⟩ ≥ (1/2) Σi wi(1 − ki)‖x − Tix‖².

Since wi(1 − ki) > 0, we have ‖x − Tix‖ = 0 for all i = 1, 2, ⋯, t; that is, x ∈ ⋂iF(Ti). Moreover, since x is chosen arbitrarily, we get F(T) ⊆ ⋂iF(Ti). Hence, the proof is complete.
Lemma 4. For each i = 1, 2, ⋯, t, let 0 < wi < 1 with w1 + w2 + ⋯ + wt = 1, and let Ti : H⟶H be strictly pseudocontractive with ki < 1. Then, T = w1T1 + w2T2 + ⋯ + wtTt is strictly pseudocontractive with k = max{k1, k2, ⋯, kt}.
Proof. By our hypothesis, for each i = 1, 2, ⋯, t, there exists a nonexpansive mapping Ri such that Ti = (Ri − kiI)/(1 − ki). Now, let us define a mapping R as R = kI + (1 − k)T, where k = max{k1, k2, ⋯, kt}.

From Lemma 1, it remains to show that R is nonexpansive. To this end, choose any x, z ∈ H. Since

kI + (1 − k)Ti = ((k − ki)/(1 − ki))I + ((1 − k)/(1 − ki))Ri

is a convex combination of I and Ri, it is nonexpansive for each i, and hence

‖Rx − Rz‖ = ‖Σi wi[(kI + (1 − k)Ti)x − (kI + (1 − k)Ti)z]‖ ≤ Σi wi‖x − z‖ = ‖x − z‖.

Hence, R is nonexpansive, and thus, the proof is complete.
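Lemmas 3 and 4 can be checked numerically on a small example of our own choosing: two metric projections in ℝ² (each (−1)-strictly pseudocontractive), a convex combination T, and the mapping R = kI + (1 − k)T with k = max{k1, k2} = −1. The sets, weights, and starting points below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def P1(x):                    # projection onto the halfspace {x : x[0] <= 0}
    y = x.copy()
    y[0] = min(y[0], 0.0)
    return y

def P2(x):                    # projection onto the closed unit ball
    n = np.linalg.norm(x)
    return x if n <= 1 else x / n

w1, w2 = 0.3, 0.7             # weights with w1 + w2 = 1
T = lambda x: w1 * P1(x) + w2 * P2(x)

# Lemma 3: a point of the intersection is fixed by T.
x_in = np.array([-0.3, 0.4])
assert np.allclose(T(x_in), x_in)

# Iterating T from a random start converges into the intersection.
x = 5 * rng.normal(size=2)
for _ in range(1000):
    x = T(x)
assert x[0] <= 1e-6 and np.linalg.norm(x) <= 1 + 1e-6

# Lemma 4 with k = max{-1, -1} = -1: R = kI + (1 - k)T = 2T - I is nonexpansive.
R = lambda x: 2 * T(x) - x
for _ in range(500):
    u, v = rng.normal(size=2), rng.normal(size=2)
    assert np.linalg.norm(R(u) - R(v)) <= np.linalg.norm(u - v) + 1e-12
```

Note that 2T − I is here a convex combination of the two reflections 2P1 − I and 2P2 − I, which is why it is nonexpansive.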
3. The Case for Strictly Pseudocontractive Mappings
First, let us recall a weak convergence theorem of the iterative method (5) for approximating a solution of the two-set split common fixed point problem.
Theorem 5 ([19], Theorem 3.1). Let k, l ∈ (−∞, 1). Assume that U and T are, respectively, k - and l -strictly pseudocontractive mappings, and where
Then, the sequence {xn}, generated by (5), converges weakly to a solution of problem (3).
We next consider the MSCFP under the following basic assumptions:
- (A1)
the MSCFP is consistent; that is, it admits at least one solution
- (A2)
Ui : H1⟶H1, i = 1, 2, ⋯, t is ki-strictly pseudocontractive with ki < 1
- (A3)
Tj : H2⟶H2, j = 1, 2, ⋯, s is lj-strictly pseudocontractive with lj < 1
Algorithm 1. Let x0 be arbitrary. Given xn, update the next iteration via
Theorem 6. Assume that conditions (A1)-(A3) hold and {τn} is chosen so that
Then, the sequence {xn}, generated by Algorithm 1, converges weakly to a solution of MSCFP.
Proof. Let U = Σi αiUi and T = Σj βjTj. By Lemma 4, we conclude that U is k-strictly pseudocontractive with k = max{k1, k2, ⋯, kt}, and T is l-strictly pseudocontractive with l = max{l1, l2, ⋯, ls}. Hence, by formula (23), we have
Moreover, by Lemma 3, F(U) = ⋂iF(Ui) and F(T) = ⋂jF(Tj). Therefore, by applying Theorem 5, we at once get the assertion as desired.
It seems that the above choice of the stepsize requires prior knowledge of ki, lj, and the norm ‖A‖. However, as shown below, there is a special case in which the choice of stepsize is ultimately independent of ki, lj, and the norm ‖A‖.
Corollary 7. Assume that conditions (A1)-(A3) hold, and the stepsize is chosen so that
Then, the sequence {xn} generated by Algorithm 1 converges weakly to a solution of the MSCFP.
Notably, if the nonlinear mappings in (4) are all metric projections, then the MSCFP reduces to the MSFP. Consequently, we can apply our result to solve the MSFP. As an application of Algorithm 1, we obtain the following algorithm for solving problem (2).
Algorithm 2. Let x0 be arbitrary. Given xn, update the next iteration via
Corollary 8. Assume that MSFP is consistent. If the stepsize is chosen so that
Proof. Let U = Σi αiPCi and T = Σj βjPQj. By Lemma 4, we conclude that U and T are both (−1)-strictly pseudocontractive, that is, firmly nonexpansive. In this situation, we have k = l = −1, so the stepsize condition no longer involves ki and lj. By applying Theorem 6, we at once get the assertion as desired.
Corollary 9. Assume MSFP is consistent. If the stepsize is chosen so that
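The displayed formulas defining Algorithms 1 and 2 are not reproduced above. As a purely illustrative sketch, the code below assumes the standard simultaneous projection form xn+1 = xn − τ[Σi αi(xn − PCi xn) + A*Σj βj(Axn − PQj(Axn))] with a constant stepsize τ = 1/(1 + ‖A‖²); the update form, the stepsize, and the toy problem data (two balls in each space) are our own assumptions, not taken from the paper.

```python
import numpy as np

# Toy MSFP in R^2: find x in C1 ∩ C2 with Ax in Q1 ∩ Q2.
A = np.array([[1.0, 0.0],
              [1.0, 1.0]])

def proj_ball(u, c, r):            # metric projection onto the ball B(c, r)
    d = u - c
    n = np.linalg.norm(d)
    return u if n <= r else c + r * d / n

C = [(np.zeros(2), 2.0), (np.array([1.0, 0.0]), 2.0)]   # balls in H1
Q = [(np.zeros(2), 3.0), (np.array([0.5, 0.5]), 3.0)]   # balls in H2
alpha = [0.5, 0.5]                                       # weights summing to 1
beta = [0.5, 0.5]

tau = 1.0 / (1.0 + np.linalg.norm(A, 2) ** 2)            # hypothetical constant stepsize

x = np.array([10.0, -7.0])
for _ in range(5000):
    g1 = sum(a * (x - proj_ball(x, c, r)) for a, (c, r) in zip(alpha, C))
    Ax = A @ x
    g2 = A.T @ sum(b * (Ax - proj_ball(Ax, c, r)) for b, (c, r) in zip(beta, Q))
    x = x - tau * (g1 + g2)

# The limit should be feasible for all four sets.
assert all(np.linalg.norm(x - c) <= r + 1e-3 for c, r in C)
Ax = A @ x
assert all(np.linalg.norm(Ax - c) <= r + 1e-3 for c, r in Q)
```

The iteration is gradient descent on a weighted sum of squared distance functions, so any constant stepsize below 2/(Σi αi + ‖A‖² Σj βj) yields convergence in this toy setting.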
4. Applications
In this section, we first give an application of our theoretical results to the multiple-set split equality problem (MSEP), which is more general than the original split equality problem [22].
Example 1. The multiple-set split equality problem (MSEP) expects to find (x1, x2) ∈ H1 × H2 such that
We next consider the MSEP under the following basic assumptions:
- (B1)
the MSEP is consistent; that is, it admits at least one solution
- (B2)
Ui : H1⟶H1, i = 1, 2, ⋯, t is ki-strictly pseudocontractive with ki < 1
- (B3)
Tj : H2⟶H2, j = 1, 2, ⋯, s is lj-strictly pseudocontractive with lj < 1
Under this situation, we propose a new method for solving problem (32).
Algorithm 3. For an arbitrary initial guess (x0, y0), define (xn, yn) recursively by
where {τn} ⊂ (0, ∞) is a sequence of positive numbers.
To carry out the convergence analysis, we consider the product space H≔H1 × H2, in which the inner product and the norm are, respectively, defined by
where x = (x1, x2), y = (y1, y2) with x1, y1 ∈ H1, x2, y2 ∈ H2. Define a linear mapping A : H⟶H3 by
Let T be the metric projection onto the set {0} ⊆ H3, and define a nonlinear mapping U : H⟶H as
where αi and βj are as above.
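To make the product-space construction above concrete, the sketch below instantiates a toy MSEP in ℝ² × ℝ², taking the Ui and Tj to be metric projections onto balls and using the gradient-type update zn+1 = zn − τ[(I − U)zn + A*Azn], which exploits the fact that T = P{0} gives (I − T)Az = Az. The update form, stepsize, and problem data are our own assumptions for illustration.

```python
import numpy as np

# Toy MSEP: find x in ∩C, y in ∩Q with A1 x = A2 y.
A1 = np.array([[1.0, 0.0], [0.0, 1.0]])
A2 = np.array([[2.0, 0.0], [0.0, 2.0]])

def proj_ball(u, c, r):
    d = u - c
    n = np.linalg.norm(d)
    return u if n <= r else c + r * d / n

C = [(np.zeros(2), 1.0), (np.array([0.2, 0.0]), 1.0)]   # balls in H1
Q = [(np.zeros(2), 1.0), (np.array([0.0, 0.1]), 1.0)]   # balls in H2

# U acts componentwise as convex combinations of projections.
def U(x, y):
    ux = 0.5 * proj_ball(x, *C[0]) + 0.5 * proj_ball(x, *C[1])
    uy = 0.5 * proj_ball(y, *Q[0]) + 0.5 * proj_ball(y, *Q[1])
    return ux, uy

# A(x, y) = A1 x - A2 y, with adjoint A* w = (A1^T w, -A2^T w).
x, y = np.array([3.0, -2.0]), np.array([-1.0, 4.0])
normA2 = np.linalg.norm(A1, 2) ** 2 + np.linalg.norm(A2, 2) ** 2  # bound on ||A||^2
tau = 1.0 / (1.0 + normA2)                                        # hypothetical stepsize

for _ in range(5000):
    ux, uy = U(x, y)
    w = A1 @ x - A2 @ y
    x = x - tau * ((x - ux) + A1.T @ w)
    y = y - tau * ((y - uy) - A2.T @ w)

assert np.linalg.norm(A1 @ x - A2 @ y) <= 1e-3        # equality constraint
assert all(np.linalg.norm(x - c) <= r + 1e-3 for c, r in C)
assert all(np.linalg.norm(y - c) <= r + 1e-3 for c, r in Q)
```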
Lemma 10 ([23], Lemma 12). Let the mapping A be defined as in (35). Then, A is a bounded linear mapping. Moreover, for x = (x1, x2), it follows that
Lemma 11. Let the mapping U be defined as in (36). Then, F(U) = ⋂iF(Ui) × ⋂jF(Tj). Moreover, if conditions (B1)-(B3) are met, then U is k-strictly pseudocontractive with k = max{k1, ⋯, kt, l1, ⋯, ls}.
Proof. By Lemma 3, it is easy to verify the first assertion. To show the second assertion, fix any x, y ∈ H. By our hypothesis and Lemma 4, Σi αiUi is k-strictly pseudocontractive with k = max{k1, ⋯, kt}, and Σj βjTj is l-strictly pseudocontractive with l = max{l1, ⋯, ls}.
It then follows that
From (38), we obtain the result as desired.
Theorem 12. Assume that conditions (B1)-(B3) hold and that {τn} is chosen so that where Then, the sequence {(xn, yn)} generated by Algorithm 3 converges weakly to a solution of problem (32).
Proof. Let zn = (xn, yn) and let A, U, T be defined as above. Thus, problem (32) is equivalently changed into finding z ∈ H such that
Moreover, Algorithm 3 can be rewritten as
Note that by Lemma 11, U is k-strictly pseudocontractive and T is (−1)-strictly pseudocontractive. Hence, by Theorem 5, we conclude that {zn} converges weakly to some z = (x, y) such that
By Lemma 11, it is readily seen that x ∈ ⋂iF(Ui), y ∈ ⋂jF(Tj) and A1x = A2y.
We next give an application of our theoretical results to a problem derived from the real world. In statistics and machine learning, the least absolute shrinkage and selection operator (LASSO for short) is a regression analysis method that performs both variable selection and regularization in order to enhance the prediction accuracy and interpretability of the statistical model it produces. It was originally introduced by Tibshirani in [24], who coined the term and provided further insights into the observed performance.
Subsequently, a number of LASSO variants have been created in order to remedy certain limitations of the original technique and to make the method more useful for particular problems. Among them, elastic net regularization adds an additional ridge regression-like penalty which improves performance when the number of predictors is larger than the sample size, allows the method to select strongly correlated variables together, and improves overall prediction accuracy. More specifically, the LASSO is a regularized regression method with the L1 penalty, while the elastic net is a regularized regression method that linearly combines the L1 and L2 penalties of the LASSO and ridge methods. Here, the L1 penalty is defined as ‖x‖1 = Σi |xi|, and the L2 penalty is defined as ‖x‖2² = Σi xi².
Example 2 (see [25]). The elastic net requires solving the problem
Algorithm 4. Let x0 be arbitrary. Given xn, update the next iteration via
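The display defining Algorithm 4 is not reproduced here. As a hedged sketch, the elastic net problem min ½‖Ax − b‖² + λ1‖x‖1 + (λ2/2)‖x‖² can be handled by a proximal-gradient (ISTA-type) iteration xn+1 = soft(xn − τ∇f(xn), τλ1), where f is the smooth part of the objective; the synthetic data, parameters, and this particular solver are our own illustrative choices, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic elastic-net instance:
#   min_x 0.5*||Ax - b||^2 + lam1*||x||_1 + 0.5*lam2*||x||^2
A = rng.normal(size=(20, 10))
x_true = np.zeros(10)
x_true[[1, 4]] = [2.0, -1.5]              # sparse ground truth
b = A @ x_true + 0.01 * rng.normal(size=20)
lam1, lam2 = 0.5, 0.1

# Soft-thresholding: the proximal mapping of t*||.||_1.
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

L = np.linalg.norm(A, 2) ** 2 + lam2      # Lipschitz constant of the smooth gradient
tau = 1.0 / L

x = np.zeros(10)
for _ in range(20000):
    grad = A.T @ (A @ x - b) + lam2 * x   # gradient of the smooth part
    x = soft(x - tau * grad, tau * lam1)

# Optimality via the fixed-point characterization of the prox-gradient step.
assert np.allclose(
    x, soft(x - tau * (A.T @ (A @ x - b) + lam2 * x), tau * lam1), atol=1e-6
)
```

Since λ2 > 0 makes the smooth part strongly convex, this iteration converges linearly, which is why the fixed-point residual above is tiny.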
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Data Availability
No data were used to support this study.