Volume 2022, Issue 1, Article ID 3665713
Research Article
Open Access

Yosida Approximation Iterative Methods for Split Monotone Variational Inclusion Problems

Mohammad Dilshad
Computational & Analytical Mathematics and Their Applications Research Group, Department of Mathematics, Faculty of Science, University of Tabuk, Tabuk, Saudi Arabia

Abdulrahman F. Aljohani
Computational & Analytical Mathematics and Their Applications Research Group, Department of Mathematics, Faculty of Science, University of Tabuk, Tabuk, Saudi Arabia

Mohammad Akram
Department of Mathematics, Faculty of Sciences, Islamic University of Madinah, Madinah, Saudi Arabia

Ahmed A. Khidir (Corresponding Author)
Computational & Analytical Mathematics and Their Applications Research Group, Department of Mathematics, Faculty of Science, University of Tabuk, Tabuk, Saudi Arabia
Faculty of Technology of Mathematical Sciences and Statistics, Alneelain University, Khartoum, Sudan
First published: 25 January 2022
Citations: 3
Academic Editor: Nawab Hussain

Abstract

In this paper, we present two iterative algorithms involving Yosida approximation operators for the split monotone variational inclusion problem (SpMVIP). We prove weak and strong convergence of the proposed iterative algorithms to a solution of the SpMVIP in real Hilbert spaces. Our algorithms are based on Yosida approximation operators of monotone mappings, and their step sizes do not require prior calculation of the operator norm. To show the reliability and accuracy of the proposed algorithms, a numerical example is also constructed.

1. Introduction

The variational inequality problem (in short, VIP), introduced by Hartman and Stampacchia [1], plays an important role as a mathematical model in physics, economics, optimization, networking, structural analysis, and medical imaging. In 1994, Censor and Elfving [2] first presented the split feasibility problem (in short, SFP) as a model for medical image reconstruction. Over the last two decades, the SFP has been implemented widely in intensity-modulated radiation therapy treatment planning and other branches of applied science (see, e.g., [3–5]). Censor et al. [6] combined the VIP and the SFP and presented a new type of variational inequality problem, called the split variational inequality problem (in short, SVIP), as follows:
(1) Find x∗ ∈ VIP(V1; C) such that Ax∗ ∈ VIP(V2; Q),
where C and Q are closed, convex subsets of Hilbert spaces H1 and H2, respectively, A : H1 ⟶ H2 is a bounded linear operator, V1 : H1 ⟶ H1 and V2 : H2 ⟶ H2 are two operators, VIP(V1; C) = {y ∈ C : ⟨V1(y), x − y⟩ ≥ 0, ∀x ∈ C}, and VIP(V2; Q) = {z ∈ Q : ⟨V2(z), x − z⟩ ≥ 0, ∀x ∈ Q}.
Moudafi [7] generalized the SVIP to the split monotone variational inclusion problem (in short, SpMVIP) as follows:
(2) Find x∗ ∈ VI(V1, G1; H1) such that Ax∗ ∈ VI(V2, G2; H2),
where G1 : H1 ⟶ 2^{H1} and G2 : H2 ⟶ 2^{H2} are set-valued mappings on the Hilbert spaces H1 and H2, respectively, VI(V1, G1; H1) = {y ∈ H1 : 0 ∈ V1(y) + G1(y)}, and VI(V2, G2; H2) = {z ∈ H2 : 0 ∈ V2(z) + G2(z)}.
Moudafi [7] formulated the following iterative algorithm to find the solution of SpMVIP. Let λ > 0, select an arbitrary starting point x0 ∈ H1, and compute
(3) x_{n+1} = U(x_n + γA∗(T − I)Ax_n), n ≥ 0,
where A∗ is the adjoint operator of A, γ ∈ (0, 1/L) with L being the spectral radius of the operator A∗A, U = J_λ^{G1}(I − λV1), and T = J_λ^{G2}(I − λV2).
Let NC(x) = {z ∈ H1 : ⟨z, y − x⟩ ≤ 0, ∀y ∈ C} and NQ(x) = {w ∈ H2 : ⟨w, y − x⟩ ≤ 0, ∀y ∈ Q} be the normal cones to the closed and convex sets C and Q, respectively. If G1 = NC and G2 = NQ, then SpMVIP reduces to the SVIP (1); indeed, for the normal cone NC, 0 ∈ V1(y) + NC(y) if and only if ⟨V1(y), x − y⟩ ≥ 0 for all x ∈ C. If V1 = V2 = 0, then SpMVIP reduces to the split variational inclusion problem (in short, SpVIP) for set-valued maximal monotone mappings, introduced and studied by Byrne et al. [8]:
(4) Find x∗ ∈ VI(G1; H1) such that Ax∗ ∈ VI(G2; H2),
where VI(G1; H1) = {y ∈ H1 : 0 ∈ G1(y)} and VI(G2; H2) = {z ∈ H2 : 0 ∈ G2(z)}, and G1, G2 are the same as in (2). We denote the solution set of SpVIP by Δ. Moreover, Byrne et al. [8] presented the following iterative algorithm to find the solution of SpVIP. Let λ > 0, and select a starting point x0 ∈ H1. Then, compute
(5) x_{n+1} = J_λ^{G1}(x_n + γA∗(J_λ^{G2} − I)Ax_n), n ≥ 0,
where A∗ is the adjoint operator of A, L = ‖A∗A‖, γ ∈ (0, 2/L), and J_λ^{G1}, J_λ^{G2} are the resolvents of the monotone mappings G1, G2, respectively. It can be easily seen that x solves SpVIP if and only if x = J_λ^{G1}(x + γA∗(J_λ^{G2} − I)Ax). Kazmi and Rizvi [9] proposed the following iterative method for approximating common solutions of SpVIP and the fixed point problem of a nonexpansive mapping:
(6) u_n = J_λ^{G1}(x_n + γA∗(J_λ^{G2} − I)Ax_n), x_{n+1} = α_n f(x_n) + (1 − α_n)S(u_n), n ≥ 0,
where f is a contraction and S is a nonexpansive mapping. Later, Sitthithakerngkiet et al. [10] studied common solutions of SpVIP and a fixed point of an infinite family of nonexpansive mappings and introduced the following iterative method:
(7)

where u ∈ H1 is a given point and Wn is the W-mapping generated by an infinite family of nonexpansive mappings. Similar results related to SpVIP can be found in [11–17].

A common feature of the above iterative methods is that they use the resolvents of the associated monotone mappings and that the step size depends on the operator norm ‖A∗A‖. To avoid this obstacle, self-adaptive step size iterative algorithms have been introduced (see, for example, [18–24]). Lopez et al. [20] introduced a relaxed method with a self-adaptive step size for solving the split feasibility problem. Recently, Dilshad et al. [25] proposed two iterative algorithms to solve SpVIP in which precalculation of the operator norm ‖A∗A‖ is not required. They studied the weak and strong convergence of the proposed methods to a solution of SpVIP with a self-adaptive step size that does not depend on a precalculated operator norm.

The resolvent of a maximal monotone operator G is defined as J_λ^G = (I + λG)^{−1}, where λ is a positive real number. The resolvent of a maximal monotone operator is single valued and firmly nonexpansive. Since the zeros of a maximal monotone operator coincide with the fixed points of its resolvent, the resolvent associated with a set-valued maximal monotone operator plays an important role in finding the zeros of monotone operators. Following Byrne's iterative method (5), which is mainly based on the resolvents of monotone mappings, many researchers introduced and studied various iterative methods for SpVIP (see, for example, [7–9, 18, 25, 26] and references therein).

The Yosida approximation operator of a monotone mapping G with parameter λ > 0 is defined as Y_λ^G = (1/λ)(I − J_λ^G). It is well known that a set-valued monotone operator can be regularized into a single-valued monotone operator by this process, known as the Yosida approximation. The Yosida approximation is a tool for solving variational inclusion problems using the nonexpansive resolvent operator and has been used to solve various variational inclusions and systems of variational inclusions in linear and nonlinear spaces (see, for example, [18, 25–30]).
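As a concrete illustration (a worked computation added here for clarity, using the operator G1(x) = 2x + 3 that reappears in Section 5), the resolvent and Yosida approximation with λ = 1 can be written in closed form:

```latex
% Worked example (illustrative): G(x) = 2x + 3 on \mathbb{R}, \lambda = 1.
% Resolvent: y = J_1^{G}(x) means x = y + G(y) = 3y + 3, hence
\[
  J_1^{G}(x) = \frac{x - 3}{3},
  \qquad
  Y_1^{G}(x) = \frac{1}{1}\left(x - J_1^{G}(x)\right) = \frac{2x + 3}{3}.
\]
% The unique zero of Y_1^{G} is x = -3/2, which is exactly the zero of G,
% illustrating that the zeros of the Yosida approximation are the zeros of G.
```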

Since the zero of the Yosida approximation operator associated with a monotone operator G is a solution of the inclusion problem 0 ∈ G(x), and inspired by the work of Moudafi, Byrne, Kazmi, and Dilshad et al., our motive is to propose two iterative methods to solve SpMVIP. The rest of the paper is organized as follows.

The next section contains some fundamental results and preliminaries. In Section 3, we describe two iterative algorithms using Yosida approximations of the monotone mappings G1 and G2. Section 4 is devoted to the study of weak and strong convergence of the proposed iterative methods to the solution of SpMVIP. In the last section, we give a numerical example in support of our main results and show the convergence of the sequence obtained from the proposed algorithm to the solution of SpMVIP.

2. Preliminaries

Let H be a real Hilbert space endowed with norm ‖·‖ and inner product ⟨·, ·⟩. The strong and weak convergence of a sequence {xn} to x are denoted by xn ⟶ x and xn ⇀ x, respectively. An operator T : H ⟶ H is said to be a contraction if ‖T(x) − T(y)‖ ≤ κ‖x − y‖ for all x, y ∈ H and some κ ∈ (0, 1); if κ = 1, then T is called nonexpansive; T is firmly nonexpansive if ‖T(x) − T(y)‖² ≤ ⟨x − y, T(x) − T(y)⟩ for all x, y ∈ H; T is called τ-inverse strongly monotone if there exists τ > 0 such that ⟨T(x) − T(y), x − y⟩ ≥ τ‖T(x) − T(y)‖².

For each x ∈ H, there exists a unique nearest point in C, denoted by PC x, such that
(8) ‖x − PC x‖ ≤ ‖x − y‖, ∀y ∈ C.
PC x is called the projection of x onto C ⊂ H, which satisfies
(9) ⟨x − PC x, y − PC x⟩ ≤ 0, ∀y ∈ C.
Moreover, PCx is also characterized by the fact that
(10) ‖x − y‖² ≥ ‖x − PC x‖² + ‖y − PC x‖², ∀y ∈ C.
In Hilbert spaces, the following equality and inequality hold for all x, y, z ∈ H and α, β, γ ∈ [0, 1] with α + β + γ = 1:
(11) ‖αx + βy + γz‖² = α‖x‖² + β‖y‖² + γ‖z‖² − αβ‖x − y‖² − αγ‖x − z‖² − βγ‖y − z‖²,
(12) ‖x + y‖² ≤ ‖x‖² + 2⟨y, x + y⟩.
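The following short script (an illustrative sanity check added here, not part of the original article) numerically verifies the identity (11) and the inequality (12) in the forms written above for randomly drawn vectors:

```python
import numpy as np

rng = np.random.default_rng(0)

for _ in range(1000):
    x, y, z = rng.normal(size=(3, 4))           # random vectors in R^4
    a, b = sorted(rng.uniform(size=2))
    alpha, beta, gamma = a, b - a, 1.0 - b      # alpha + beta + gamma = 1, all in [0, 1]

    # Identity (11): squared norm of a convex combination of three points.
    lhs = np.linalg.norm(alpha * x + beta * y + gamma * z) ** 2
    rhs = (alpha * np.linalg.norm(x) ** 2
           + beta * np.linalg.norm(y) ** 2
           + gamma * np.linalg.norm(z) ** 2
           - alpha * beta * np.linalg.norm(x - y) ** 2
           - alpha * gamma * np.linalg.norm(x - z) ** 2
           - beta * gamma * np.linalg.norm(y - z) ** 2)
    assert abs(lhs - rhs) < 1e-9

    # Inequality (12): ||x + y||^2 <= ||x||^2 + 2 <y, x + y>.
    assert np.linalg.norm(x + y) ** 2 <= np.linalg.norm(x) ** 2 + 2 * np.dot(y, x + y) + 1e-12

print("Identity (11) and inequality (12) verified on random samples.")
```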

Let G : H ⟶ 2^H be a set-valued operator. The graph of G is defined by {(x, y) : y ∈ G(x)}, and the inverse of G is given by G^{−1} = {(y, x) : y ∈ G(x)}. A set-valued mapping G is said to be monotone if ⟨u − v, x − y⟩ ≥ 0 for all u ∈ G(x) and v ∈ G(y). A monotone operator G is called maximal monotone if there exists no other monotone operator whose graph properly contains the graph of G.
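For instance (an illustrative check added here, not from the original text), the operator G1(x) = 2x + 3 used in the numerical example of Section 5 is monotone, and it is maximal monotone by Minty's theorem since I + G1 is surjective:

```latex
% Monotonicity of G_1(x) = 2x + 3 on \mathbb{R}:
\[
  \langle G_1(x) - G_1(y),\, x - y \rangle = 2(x - y)^2 \ge 0 .
\]
% Maximality via Minty's theorem: I + G_1 is surjective, since for every w \in \mathbb{R}
% the equation x + G_1(x) = 3x + 3 = w has the (unique) solution x = (w - 3)/3.
```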

Lemma 1 (see [31]). If {an} is a sequence of nonnegative real numbers such that

(13) a_{n+1} ≤ (1 − βn)an + βnδn, n ≥ 0,

where {βn} is a sequence in (0, 1) and {δn} is a sequence in ℝ such that

  • (i)

    ∑_{n=1}^{∞} βn = ∞

  • (ii)

    limsup_{n⟶∞} δn ≤ 0

    or ∑_{n=1}^{∞} |βnδn| < ∞,

then lim_{n⟶∞} an = 0.

Lemma 2 (see [32]). Let H be a Hilbert space. A mapping F : H ⟶ H is τ-inverse strongly monotone if and only if I − τF is firmly nonexpansive, for τ > 0.

Lemma 3 (see [33]). Let H be a Hilbert space and {xn} be a bounded sequence in H. Assume there exists a nonempty subset C ⊂ H satisfying the properties

  • (i)

    lim_{n⟶∞} ‖xn − z‖ exists for every z ∈ C

  • (ii)

    ωw(xn) ⊂ C

Then, there exists x ∈ C such that {xn} converges weakly to x.

Lemma 4 (see [34]). Let {Γn} be a sequence of real numbers that does not decrease at infinity, in the sense that there exists a subsequence {Γ_{n_k}} of {Γn} such that Γ_{n_k} < Γ_{n_k+1} for all k ≥ 0. Also, consider the sequence of integers {σ(n)}_{n≥n_0} defined by

(14) σ(n) = max{k ≤ n : Γ_k < Γ_{k+1}}.

Then, {σ(n)}_{n≥n_0} is a nondecreasing sequence verifying lim_{n⟶∞} σ(n) = ∞ and, for all n ≥ n_0,

(15) Γ_{σ(n)} ≤ Γ_{σ(n)+1} and Γ_n ≤ Γ_{σ(n)+1}.

3. Yosida Approximation Iterative Methods

Let V1 : H1 ⟶ H1, V2 : H2 ⟶ H2 be single-valued monotone mappings and G1 : H1 ⟶ 2^{H1}, G2 : H2 ⟶ 2^{H2} be set-valued mappings such that V1 + G1 and V2 + G2 are set-valued maximal monotone mappings; let J_{λ1}^{V1+G1}, J_{λ2}^{V2+G2} and Y_{λ1}^{V1+G1}, Y_{λ2}^{V2+G2} be the resolvents and Yosida approximation operators of V1 + G1 and V2 + G2, respectively. We propose the following iterative methods to approximate the solution of SpMVIP.

Algorithm 1. For an arbitrary x0 ∈ H1, compute the (n + 1)th iteration as follows:

(16)
where γn and μn are defined as
(17)
(18)
where λ1 > 0, λ2 > 0 and θ = min{2λ1, 2λ2} such that τn ∈ (0, θ).

Algorithm 2. For an arbitrary x0 ∈ H1, compute the (n + 1)th iteration as follows:

(19)
where γn and μn are defined as
(20)
where αn, βn ∈ (0, 1), λ1 > 0, λ2 > 0, and θ = min{2λ1, 2λ2} such that τn ∈ (0, θ).

4. Main Results

We assume that the problem SpMVIP is consistent, and its solution set is denoted by Δ.

First, we prove the following lemma, which is used in the proof of our main results.

Lemma 5. Let V1 : H1 ⟶ H1 be a single-valued monotone mapping and G1 : H1 ⟶ 2^{H1} be a set-valued mapping such that V1 + G1 is a set-valued maximal monotone mapping. If J_{λ1}^{V1+G1} and Y_{λ1}^{V1+G1} are the resolvent and Yosida approximation operators of V1 + G1, respectively, then for λ1 > 0, the following are equivalent:

  • (i)

    x ∈ H1 is a solution of the inclusion 0 ∈ V1(x) + G1(x)

  • (ii)

    x = J_{λ1}^{V1+G1}(x)

  • (iii)

    Y_{λ1}^{V1+G1}(x) = 0

Proof. The proof is trivial and is an immediate consequence of the definitions of the resolvent and Yosida approximation operators of the maximal monotone mapping V1 + G1.☐
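For completeness, the equivalences can be spelled out as follows (a short argument supplied here, assuming the standard definitions J_{λ1}^{V1+G1} = (I + λ1(V1 + G1))^{−1} and Y_{λ1}^{V1+G1} = (1/λ1)(I − J_{λ1}^{V1+G1})):

```latex
% (i) <=> (ii):
\[
  0 \in V_1(x) + G_1(x)
  \;\Longleftrightarrow\;
  x \in x + \lambda_1 \bigl(V_1 + G_1\bigr)(x)
  \;\Longleftrightarrow\;
  x = \bigl(I + \lambda_1 (V_1 + G_1)\bigr)^{-1}(x) = J_{\lambda_1}^{V_1 + G_1}(x).
\]
% (ii) <=> (iii):
\[
  Y_{\lambda_1}^{V_1 + G_1}(x)
  = \tfrac{1}{\lambda_1}\bigl(x - J_{\lambda_1}^{V_1 + G_1}(x)\bigr) = 0
  \;\Longleftrightarrow\;
  x = J_{\lambda_1}^{V_1 + G_1}(x).
\]
```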

Theorem 6. Let H1, H2 be real Hilbert spaces; let V1 : H1 ⟶ H1, V2 : H2 ⟶ H2 be single-valued monotone mappings, G1 : H1 ⟶ 2^{H1}, G2 : H2 ⟶ 2^{H2} be set-valued maximal monotone mappings such that V1 + G1 and V2 + G2 are maximal monotone, and A : H1 ⟶ H2 be a bounded linear operator. Assume that θ = min{2λ1, 2λ2} and inf τn(θ − τn) > 0. Then, the sequence {xn} generated by Algorithm 1 converges weakly to a point z ∈ Δ.

Proof. Let z ∈ Δ. Since the Yosida approximation operator Y_{λ1}^{V1+G1} is λ1-inverse strongly monotone for λ1 > 0, by Algorithm 1 and (12), we have

(21)

Now, using (17), we estimate that

(22)

From (21) and (22), we get

(23)

Since Y_{λ2}^{V2+G2} is λ2-inverse strongly monotone, using (12), we estimate

(24)

By (18), it turns out that

(25)

It follows from (24) and (25) that

(26)

Combining (23) and (26), we get

(27)
(28)
which implies that {xn} is Fejér monotone with respect to Δ and hence bounded, which assures that lim_{n⟶∞} ‖xn − z‖ exists for all z ∈ Δ. Keeping in mind that θ = min{2λ1, 2λ2}, from (27), we have
(29)

Due to the assumption that inf τn(θ − τn) > 0 and the properties of convergent series, we conclude that

(30)

Hence, there exist constants K1 and K2 such that

(31)

By Algorithm 1 and (30), we get

(32)

Let x ∈ ωw(xn) and let {x_{n_k}} be a subsequence of {xn} that converges weakly to x, which implies that the corresponding subsequences {u_{n_k}} and {v_{n_k}} also converge weakly to x. Recall that Y_{λ1}^{V1+G1} is λ1-inverse strongly monotone and that {u_{n_k}} converges weakly to x; using (30), we get

(33)

Taking limit k⟶∞, we obtain

Replacing Y_{λ1}^{V1+G1} by Y_{λ2}^{V2+G2} and arguing in the same way, we obtain the corresponding conclusion for Ax; hence, x ∈ Δ, and the weak convergence of {xn} to a point of Δ follows from Lemma 3. This completes the proof.☐

Theorem 7. Let H1, H2 be real Hilbert spaces; let V1 : H1 ⟶ H1, V2 : H2 ⟶ H2 be single-valued monotone mappings, G1 : H1 ⟶ 2^{H1}, G2 : H2 ⟶ 2^{H2} be set-valued maximal monotone mappings such that V1 + G1 and V2 + G2 are maximal monotone, and A : H1 ⟶ H2 be a bounded linear operator. If {αn}, {βn} are real sequences in (0, 1) and θ = min{2λ1, 2λ2} such that τn ∈ (0, θ) and

(34)
then the sequence {xn} generated by Algorithm 2 converges strongly to z = PΔ(0).

Proof. Let z = PΔ(0); then, from (23) and (26) of the proof of Theorem 6, we have

(35)
(36)

Since τn ≤ min{2λ1, 2λ2}, we get ‖vn − z‖ ≤ ‖un − z‖ ≤ ‖xn − z‖. From Algorithm 2, we have

(37)
which implies that the sequence {xn} is bounded and hence, the sequences {un}, {vn}, and {Axn} are also bounded. Now,
(38)

Combining (35), (36), and (38), we obtain

(39)

We discuss the two possible cases.

Case 1. If the sequence {‖xn − z‖} is nonincreasing, then there exists a number k ≥ 0 such that ‖xn+1 − z‖ ≤ ‖xn − z‖ for each n ≥ k. Then, lim_{n⟶∞} ‖xn − z‖ exists and hence, lim_{n⟶∞}(‖xn − z‖ − ‖xn+1 − z‖) = 0. Thus, it follows from (39) that

(40)

From (40), we conclude that ‖un − xn‖⟶0 and ‖vn − un‖⟶0. We observe from Algorithm 2 that xn+1 − un = αn(vn − un) + γnun⟶0; thus,

(41)

This shows that the sequence {xn} is asymptotically regular. By Theorem 6, we have that ωw(xn) ⊂ Δ. Setting zn = (1 − αn)un + αnvn and rewriting xn+1 = (1 − βn)zn + αnβn(vn − un), we have

(42)

From (42) and Algorithm 2, we get

(43)
or
(44)
where an = ‖xn − z‖, bn = 2βn{αn⟨vn − un, xn+1 − z⟩ + ⟨−z, xn+1 − z⟩}.

Since ωw(xn) ⊂ Δ and z = PΔ(0), then using (40), we get

(45)

Thus, by Lemma 1, we obtain xnz.

Case 2. If the sequence {‖xn − z‖} is not nonincreasing, we can select a subsequence {‖x_{n_k} − z‖} of {‖xn − z‖} such that ‖x_{n_k} − z‖ < ‖x_{n_k+1} − z‖ for all k. In this case, as in Lemma 4, we define a sequence of integers σ(n)⟶∞ with the properties

(46)

If ‖xn+1z‖ > ‖xnz‖ for some n ≥ 0, then it follows from (39) that

(47)

Replacing n by σ(n) and taking limit n⟶∞, we get the following relation for the subsequences {xσ(n)}, {uσ(n)}, and {vσ(n)}:

(48)

Thus, we have ‖x_{σ(n)+1} − x_{σ(n)}‖⟶0 as n⟶∞ and ωw(x_{σ(n)}) ⊂ Δ. It remains to show that xn⟶z.

Replacing n by σ(n) in (47), using ‖x_{σ(n)} − z‖ < ‖x_{σ(n)+1} − z‖ and the boundedness of ‖xn − z‖, we have

(49)

Since z = PΔ(0) and ωw(x_{σ(n)}) ⊂ Δ, using ‖v_{σ(n)} − u_{σ(n)}‖⟶0 and ‖x_{σ(n)+1} − x_{σ(n)}‖⟶0, we have

(50)

From (49) and (50), we conclude that x_{σ(n)}⟶z and

(51)
that is, xn⟶z. This completes the proof.☐

For τn = 1, we have the following result for the convergence of Algorithm 2.

Corollary 8. Let H1, H2, V1, V2, G1, G2, A, and A∗ be the same as defined in Theorem 7. If {αn}, {βn} are sequences in (0, 1) and λ1 > 1/2, λ2 > 1/2 satisfy

(52)
then the sequence {xn} generated by Algorithm 2 (with τn = 1) converges strongly to z = PΔ(0).

For βn = 0, we have the following corollary for the convergence of Algorithm 2.

Corollary 9. Let H1, H2, V1, V2, G1, G2, A, and A∗ be the same as defined in Theorem 7. If {αn} is a sequence in (0, 1) and θ = min{2λ1, 2λ2} is such that τn ∈ (0, θ) and

(53)
then the sequence {xn} generated by the iterative method
(54)
where γn and μn are defined as in Algorithm 2, converges strongly to z ∈ Δ.

For τn = 1 and βn = 0, we have the following corollary for the convergence of Algorithm 2.

Corollary 10. Let H1, H2, V1, V2, G1, G2, A, and A∗ be the same as defined in Theorem 7. If {αn} is a sequence in (0, 1) and

(55)
then the sequence {xn} generated by the iterative method
(56)
where γn and μn are defined as in Algorithm 2 (with τn = 1), converges strongly to z ∈ Δ.

5. Numerical Example

Let H1 = H2 = ℝ and V1 = V2 = 0; let G1 : ℝ ⟶ ℝ and G2 : ℝ ⟶ ℝ be defined as G1(x) = 2x + 3 and G2(x) = 2(x + 1), respectively. One can easily check that G1 and G2 are monotone, and the Yosida approximation operators of G1 and G2 for λ1 = λ2 = 1 are computed as
(57) Y_1^{G1}(x) = (2x + 3)/3, Y_1^{G2}(x) = 2(x + 1)/3.
Let A : H1 ⟶ H2 be defined as A(x) = 2x/3; then, for τn = (2 − (e^{1/n}/2)) ∈ (0, 2), we compute the step size as
(58)
Then, for αn = (2 − (e1/n/3)) and two different values of (βn) (for example, βn = 1/(n + 5) and βn = 1/(n + 10)) and for arbitrary x0 (for example, x0 = −2 and x0 = 0), we compute the iterative sequences from Algorithm 2 as follows:
(59)

In Figures 1 and 2, we show that the obtained sequences {un}, {vn}, and {xn} converge to z = −(3/2) for the arbitrarily chosen initial values x0 = −2 and x0 = 0.

Figure 1: Convergence of the iterative sequences {un}, {vn}, and {xn} to z = −1.5 for βn = 1/(n + 5) and x0 = −2.
Figure 2: Convergence of the iterative sequences {un}, {vn}, and {xn} to z = −1.5 for βn = 1/(n + 10) and x0 = 0.
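The short script below is an illustrative sketch added here; it is not the authors' implementation of Algorithm 2, whose update rule is given in (59). It encodes the data of this example, verifies that z = −3/2 solves the SpMVIP, and, as a simple stand-in for the proposed methods, runs a plain Yosida-type iteration x_{n+1} = x_n − τn(Y1(xn) + A∗Y2(Axn)), which for this data also converges to −3/2:

```python
import numpy as np

# Data of the numerical example in Section 5 (V1 = V2 = 0, lambda1 = lambda2 = 1).
G1 = lambda x: 2 * x + 3           # monotone operator on R
G2 = lambda x: 2 * (x + 1)         # monotone operator on R
A = lambda x: 2 * x / 3            # bounded linear operator A(x) = 2x/3
A_adj = lambda y: 2 * y / 3        # its adjoint (multiplication by the same scalar)

# Resolvents J_1^G = (I + G)^{-1} and Yosida approximations Y_1^G = I - J_1^G.
J1 = lambda x: (x - 3) / 3         # solves y + G1(y) = x
J2 = lambda x: (x - 2) / 3         # solves y + G2(y) = x
Y1 = lambda x: x - J1(x)           # = (2x + 3)/3
Y2 = lambda x: x - J2(x)           # = 2(x + 1)/3

# Verify that z = -3/2 solves the SpMVIP: Y1(z) = 0 and Y2(A z) = 0.
z = -1.5
assert abs(Y1(z)) < 1e-12 and abs(Y2(A(z))) < 1e-12

# Hypothetical stand-in iteration (NOT Algorithm 2 of the paper):
# x_{n+1} = x_n - tau_n * (Y1(x_n) + A* Y2(A x_n)), with tau_n = 2 - e^{1/n}/2 as in the example.
x = -2.0                           # initial point x0 = -2 (x0 = 0 behaves similarly)
for n in range(1, 51):
    tau_n = 2 - np.exp(1 / n) / 2
    x = x - tau_n * (Y1(x) + A_adj(Y2(A(x))))

print(x)                           # approximately -1.5 = z
```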

6. Conclusions

We have proposed two iterative algorithms for SpMVIP which are mainly based on Yosida approximation operators. Since the zero of the Yosida approximation of the monotone mapping V1 + G1 is a solution of the inclusion 0 ∈ V1(x) + G1(x), we used the Yosida approximations of the monotone mappings V1 + G1 and V2 + G2 to solve SpMVIP. We proved the weak and strong convergence of the proposed iterative algorithms to a solution of SpMVIP under suitable assumptions, with step sizes whose estimation does not require any prior calculation of the operator norm ‖A∗A‖. To show the accuracy and efficiency of our algorithms, we have presented a numerical example and demonstrated its convergence using different parameters.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Data Availability

This is a theoretical work, and there is no associated data.
