Volume 2021, Issue 1, Article ID 9980309
Research Article
Open Access

Convergence Theorems for the Variational Inequality Problems and Split Feasibility Problems in Hilbert Spaces

Panisa Lohawech
Department of Mathematics, Faculty of Science, Naresuan University, Phitsanulok 65000, Thailand

Anchalee Kaewcharoen (Corresponding Author)
Department of Mathematics, Faculty of Science, Naresuan University, Phitsanulok 65000, Thailand

Ali Farajzadeh
Department of Mathematics, Faculty of Science, Razi University, Kermanshah 67146, Iran
First published: 03 June 2021
Academic Editor: Sergejs Solovjovs

Abstract

In this paper, we establish an iterative algorithm by combining Yamada’s hybrid steepest descent method and Wang’s algorithm for finding the common solutions of variational inequality problems and split feasibility problems. The strong convergence of the sequence generated by our suggested iterative algorithm to such a common solution is proved in the setting of Hilbert spaces under some suitable assumptions imposed on the parameters. Moreover, we propose iterative algorithms for finding the common solutions of variational inequality problems and multiple-sets split feasibility problems. Finally, we also give numerical examples for illustrating our algorithms.

1. Introduction

In 2005, Censor et al. [1] introduced the multiple-sets split feasibility problem (MSSFP), which is formulated as follows:
(1) find x* ∈ C ≔ ⋂_{i=1}^{N} C_i such that Ax* ∈ Q ≔ ⋂_{j=1}^{M} Q_j,
where C_i (i = 1, 2, …, N) and Q_j (j = 1, 2, …, M) are nonempty closed convex subsets of Hilbert spaces H_1 and H_2, respectively, and A : H_1 ⟶ H_2 is a bounded linear mapping. Denote by Ω the set of solutions of MSSFP (1). Many iterative algorithms have been developed to solve the MSSFP (see [1–3]). Moreover, the MSSFP arises in many real-world fields, such as the inverse problem of intensity-modulated radiation therapy, image reconstruction, and signal processing (see [1, 4, 5] and the references therein).
When N = M = 1, the MSSFP reduces to the split feasibility problem (SFP), first introduced by Censor and Elfving [5], which is formulated as follows:
(2) find x* ∈ C such that Ax* ∈ Q.

Denote by Γ the set of solutions of SFP (2).

Assume that the SFP is consistent (i.e., (2) has a solution). It is well known that x ∈ C solves (2) if and only if it solves the fixed point equation
(3) x = P_C(I − γA*(I − P_Q)A)x,
where γ is a positive constant, A* is the adjoint operator of A, and P_C and P_Q are the metric projections of H_1 and H_2 onto C and Q, respectively (for more details, see [6]).
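To make the fixed point characterization (3) concrete, the following is a minimal numerical sketch of the Picard iteration x_{n+1} = P_C(I − γA*(I − P_Q)A)x_n in ℝ^d, assuming purely for illustration that C is a box and Q is a Euclidean ball (hypothetical choices; any closed convex sets with computable projections would serve equally well):

```python
import numpy as np

def proj_box(x, lo, hi):
    # Metric projection onto the box C = {x : lo <= x <= hi} (componentwise).
    return np.clip(x, lo, hi)

def proj_ball(y, center, radius):
    # Metric projection onto the ball Q = {y : ||y - center|| <= radius}.
    d = y - center
    n = np.linalg.norm(d)
    return y if n <= radius else center + radius * d / n

def sfp_fixed_point(A, x0, lo, hi, center, radius, gamma, tol=1e-8, max_iter=100000):
    # Picard iteration x_{n+1} = P_C(x_n - gamma * A^T (I - P_Q) A x_n);
    # in R^d the adjoint A^* is the transpose A^T.
    x = x0
    for _ in range(max_iter):
        Ax = A @ x
        x_new = proj_box(x - gamma * (A.T @ (Ax - proj_ball(Ax, center, radius))), lo, hi)
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
gamma = 1.0 / np.linalg.norm(A, 2) ** 2      # a safe step size, gamma < 1/||A||^2
x = sfp_fixed_point(A, np.array([5.0, -5.0]), -1.0, 1.0, np.zeros(2), 1.0, gamma)
print(x)                                      # a point of C whose image Ax lies in Q
```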
The variational inequality problem (VIP) was introduced by Stampacchia [7]; it consists of finding a point
(4) x* ∈ C such that ⟨F(x*), x − x*⟩ ≥ 0 for all x ∈ C,
where C is a nonempty closed convex subset of a Hilbert space H and F : C ⟶ H is a mapping. The ideas behind the VIP are applied in many fields, including mechanics, nonlinear programming, game theory, and economic equilibrium (see [8–12]).
In [13], we see that x ∈ C solves (4) if and only if it solves the fixed point equation
(5) x = P_C(x − μF(x)), μ > 0.

Moreover, it is well known that if F is k-Lipschitz continuous and η-strongly monotone, then VIP (4) has a unique solution (see, e.g., [14]).
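Since P_C(I − μF) is a contraction whenever μ ∈ (0, 2η/k²) (see Lemma 3 below with t = 1), the unique solution of (4) can be approximated by Picard iteration on (5). A minimal sketch, with a hypothetical affine F and a ball C chosen only for illustration:

```python
import numpy as np

def proj_ball(x, center, radius):
    # Metric projection onto C = {x : ||x - center|| <= radius}.
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius * d / n

def solve_vip(F, proj_C, x0, mu, tol=1e-12, max_iter=100000):
    # Picard iteration for the fixed point equation x = P_C(x - mu * F(x)).
    x = x0
    for _ in range(max_iter):
        x_new = proj_C(x - mu * F(x))
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

# Hypothetical data: F(x) = Mx + q with M symmetric positive definite, so F is
# eta-strongly monotone and k-Lipschitz with eta = lambda_min(M), k = lambda_max(M).
M = np.array([[4.0, 1.0], [1.0, 3.0]])
q = np.array([-1.0, 2.0])
eigs = np.linalg.eigvalsh(M)
eta, k = eigs[0], eigs[-1]
mu = eta / k ** 2                      # safely inside (0, 2*eta/k^2)
x_star = solve_vip(lambda x: M @ x + q,
                   lambda x: proj_ball(x, np.zeros(2), 1.0),
                   np.zeros(2), mu)
print(x_star)
```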

The SFP and the VIP subsume several important special cases (see [15, 16]); indeed, the convex linear inverse problem and the split equality problem are special cases of the SFP, while the zero point problem and the minimization problem are special cases of the VIP. Jung [17] studied the common solution of the variational inequality problem and the split feasibility problem: find a point
(6) x* ∈ Γ such that ⟨F(x*), x − x*⟩ ≥ 0 for all x ∈ Γ,
where Γ is the solution set of SFP (2) and F : H_1 ⟶ H_1 is an η-strongly monotone and k-Lipschitz continuous mapping. After that, for solving problem (6), Buong [2] considered the following algorithms, which were proposed in [14, 18], respectively:
(7) x_{n+1} = (I − t_nμF)Tx_n,
(8)
where T = P_C(I − γA*(I − P_Q)A), under the following conditions:
  • (C1) t_n ∈ (0,1), t_n ⟶ 0 as n ⟶ ∞, and ∑_{n=1}^{∞} t_n = ∞.

  • (C2) 0 < liminf_{n⟶∞} α_n ≤ limsup_{n⟶∞} α_n < 1.

Moreover, Buong [2] showed that the sequence {x_n} generated by the following algorithm converges weakly to a solution of MSSFP (1):
(9) x_{n+1} = P_1(I − γA*(I − P_2)A)x_n,
where P_1 = P_{C_N} ⋯ P_{C_1} and P_2 = P_{Q_M} ⋯ P_{Q_1}, or P_1 = ∑_{i=1}^{N} α_iP_{C_i} and P_2 = ∑_{j=1}^{M} β_jP_{Q_j}, in which α_i and β_j, for 1 ≤ i ≤ N and 1 ≤ j ≤ M, are positive real numbers such that ∑_{i=1}^{N} α_i = ∑_{j=1}^{M} β_j = 1.

Motivated by the aforementioned works, we establish an iterative algorithm by combining algorithms (7) and (8) for finding the solution of problem (6) and prove the strong convergence of the sequence generated by our iterative algorithm to the solution of problem (6) in the setting of Hilbert spaces. Moreover, we propose iterative algorithms for solving the common solutions of variational inequality problems and multiple-sets split feasibility problems. Finally, we also give numerical examples for illustrating our algorithms.

2. Preliminaries

In order to establish our results, we recall the following definitions and preliminary facts, which will be used in the sequel. Throughout this section, let C be a nonempty closed convex subset of a real Hilbert space H with inner product ⟨⋅, ⋅⟩ and norm ‖⋅‖.

Definition 1. A mapping T : H ⟶ H is called

  (i) k-Lipschitz continuous, if ‖Tx − Ty‖ ≤ k‖x − y‖ for all x, y ∈ H, where k is a positive number;

  (ii) nonexpansive, if (i) holds with k = 1;

  (iii) η-strongly monotone, if η‖x − y‖² ≤ ⟨Tx − Ty, x − y⟩ for all x, y ∈ H, where η is a positive number;

  (iv) firmly nonexpansive, if ‖Tx − Ty‖² ≤ ⟨Tx − Ty, x − y⟩ for all x, y ∈ H;

  (v) α-averaged, if T = (1 − α)I + αN for some fixed α ∈ (0,1) and a nonexpansive mapping N.

It is known (see [5]) that the metric projection P_C : H ⟶ C is firmly nonexpansive and (1/2)-averaged.
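As a quick sanity check of these two properties, the firm nonexpansiveness inequality from Definition 1 (iv) can be verified numerically for a concrete projection; a small sketch using a box projection (an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(0)

def proj_box(x, lo=-1.0, hi=1.0):
    # Metric projection onto the box C = [lo, hi]^d.
    return np.clip(x, lo, hi)

# Firm nonexpansiveness: ||Px - Py||^2 <= <Px - Py, x - y> for all x, y.
for _ in range(1000):
    x, y = rng.normal(size=3) * 5, rng.normal(size=3) * 5
    Px, Py = proj_box(x), proj_box(y)
    assert np.linalg.norm(Px - Py) ** 2 <= (Px - Py) @ (x - y) + 1e-12
print("firm nonexpansiveness verified on random samples")
```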

We collect some basic properties of averaged mappings in the following results.

Lemma 1 (see [16]). We have the following:

  (i) The composite of finitely many averaged mappings is averaged. In particular, if T_i is α_i-averaged, where α_i ∈ (0,1) for i = 1, 2, then the composite T_1T_2 is α-averaged, where α = α_1 + α_2 − α_1α_2.

  (ii) If the mappings {T_i}_{i=1}^{N} are averaged and have a common fixed point, then

(10) ⋂_{i=1}^{N} Fix(T_i) = Fix(T_1 ⋯ T_N).
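For completeness, the two-mapping case of Lemma 1 (i) can be verified by a direct computation, writing each T_i = (1 − α_i)I + α_iN_i with N_i nonexpansive:

```latex
\begin{aligned}
T_1T_2 &= (1-\alpha_1)T_2 + \alpha_1 N_1T_2\\
       &= (1-\alpha_1)(1-\alpha_2)I + (1-\alpha_1)\alpha_2 N_2 + \alpha_1 N_1T_2\\
       &= (1-\alpha)I + \alpha N, \qquad \alpha := \alpha_1 + \alpha_2 - \alpha_1\alpha_2,
\end{aligned}
```

where N ≔ α⁻¹[(1 − α_1)α_2N_2 + α_1N_1T_2]; since the weights (1 − α_1)α_2/α and α_1/α sum to 1 and both N_2 and N_1T_2 are nonexpansive, N is nonexpansive, so T_1T_2 is α-averaged.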

Proposition 1 (see [19]). Let D be a nonempty subset of H, let m ≥ 2 be an integer, and let ϕ : (0,1)^m ⟶ (0,1) be defined by

(11) ϕ(α_1, …, α_m) = m / (m − 1 + 1/max_{1≤i≤m} α_i).

For every i ∈ {1, …, m}, let α_i ∈ (0,1) and let T_i : D ⟶ D be α_i-averaged. Then, T = T_1 ⋯ T_m is α-averaged, where α = ϕ(α_1, …, α_m).

The following properties of nonexpansive mappings will be useful in the sequel.

Lemma 2 (see [20]). Assume that H_1 and H_2 are Hilbert spaces. Let A : H_1 ⟶ H_2 be a bounded linear mapping such that A ≠ 0, and let T : H_2 ⟶ H_2 be a nonexpansive mapping. Then, for 0 < γ < 1/‖A‖², the mapping I − γA*(I − T)A is γ‖A‖²-averaged.

Proposition 2 (see [19]). Let C be a nonempty subset of H, and let {T_i}_{i∈I} be a finite family of nonexpansive mappings from C to H. Assume that {α_i}_{i∈I} ⊂ (0,1) and {δ_i}_{i∈I} ⊂ (0,1] are such that ∑_{i∈I} δ_i = 1. Suppose that, for every i ∈ I, T_i is α_i-averaged; then, T = ∑_{i∈I} δ_iT_i is α-averaged, where α = ∑_{i∈I} δ_iα_i.

The following results play a crucial role in the next section.

Lemma 3 (see [14]). Let t be a real number in (0,1]. Let F : H ⟶ H be an η-strongly monotone and k-Lipschitz continuous mapping. The mapping I − tμF, for each fixed μ ∈ (0, 2η/k²), is contractive with constant 1 − tτ, i.e.,

(12) ‖(I − tμF)x − (I − tμF)y‖ ≤ (1 − tτ)‖x − y‖ for all x, y ∈ H,
where τ = 1 − √(1 − μ(2η − μk²)).
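The contraction estimate (12) can be recovered by the standard argument along the lines of [14]; we record it here since it is used repeatedly:

```latex
\begin{aligned}
\|(I - t\mu F)x - (I - t\mu F)y\|^2
  &= \|x - y\|^2 - 2t\mu\langle Fx - Fy,\; x - y\rangle + t^2\mu^2\|Fx - Fy\|^2\\
  &\le \bigl(1 - 2t\mu\eta + t^2\mu^2k^2\bigr)\|x - y\|^2 =: g(t)^2\,\|x - y\|^2 .
\end{aligned}
```

The function g(t) = √(1 − 2tμη + t²μ²k²) is convex on [0,1], with g(0) = 1 and g(1) = √(1 − μ(2η − μk²)) = 1 − τ, so convexity yields g(t) ≤ (1 − t)g(0) + tg(1) = 1 − tτ for all t ∈ (0,1].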

Theorem 1 (see [21]). Let F be a k-Lipschitz continuous and η-strongly monotone self-mapping of H. Assume that {T_i}_{i=1}^{N} is a finite family of nonexpansive mappings from H to H such that C ≔ ⋂_{i=1}^{N} Fix(T_i) ≠ ∅. Then, the sequence {x_n} defined by the following algorithm converges strongly to the unique solution x* of the variational inequality (4):

(13) x_{n+1} = (I − t_nμF)T_N^n ⋯ T_1^n x_n,
where μ ∈ (0, 2η/k²), T_i^n ≔ (1 − β_n^i)I + β_n^iT_i for i = 1, …, N, and the following conditions hold:

  (i) t_n ∈ (0,1), t_n ⟶ 0 as n ⟶ ∞, and ∑_{n=1}^{∞} t_n = ∞.

  (ii) β_n^i ∈ [α, β] for some α, β ∈ (0,1), and |β_{n+1}^i − β_n^i| ⟶ 0 as n ⟶ ∞ (i = 1, …, N).

Theorem 2 (see [22]). Let F, C, {T_i}_{i=1}^{N}, {t_n}, and {β_n^i} be as in Theorem 1. Then, the sequence {x_n} defined by the following algorithm:

(14)
converges strongly to the unique solution x* of variational inequality (4).

3. Main Results

In this section, we consider the following iterative algorithm, obtained by combining Yamada's hybrid steepest descent method [14] and Wang's algorithm [18], for solving problem (6):
(15)
where T = P_C(I − γA*(I − P_Q)A). If we set α_n = 0 for all n, then (15) reduces to (7), studied by Buong [2]. On the other hand, in the Numerical Example section, we present an example illustrating that the two-step method (15) is more efficient than the one-step method (8) studied by Buong [2]: the sequence generated by (15) requires fewer iterations and converges faster than the sequence generated by (8).

Throughout our results, unless otherwise stated, we assume that H_1 and H_2 are two real Hilbert spaces and that A : H_1 ⟶ H_2 is a bounded linear mapping. Let F be an η-strongly monotone and k-Lipschitz continuous mapping on H_1 with positive constants η and k. Assume that μ ∈ (0, 2η/k²) is a fixed number.

Theorem 3. Let C and Q be two closed convex subsets of H_1 and H_2, respectively. Then, as n ⟶ ∞, the sequence {x_n} defined by (15), where the sequences {t_n} and {α_n} satisfy conditions (C1) and (C2), respectively, converges strongly to the solution of (6).

Proof. From Lemma 2, we have that I − γA*(I − P_Q)A is γ‖A‖²-averaged. Since T = P_C(I − γA*(I − P_Q)A), by Lemma 1 (i), we get that T is λ-averaged, where λ = (1 + γ‖A‖²)/2. Moreover, we obtain that z ∈ Γ if and only if z ∈ Fix(T). It follows from Definition 1 (v) that T = (1 − λ)I + λS, where S is nonexpansive. Then, iterative algorithm (15) can be rewritten as follows:

(16)
where λ = (1 + γ‖A‖²)/2 and T = (1 − λ)I + λS. Since (1 − λ)I + λS and I − t_nμF are nonexpansive, (I − t_nμF)T is also nonexpansive. Therefore, the strong convergence of (15) to the element x* in the solution set of (6) follows from Theorem 2.

In [23], Miao and Li established the weak convergence to an element of Fix(T) of the sequence {x_n} generated by the following algorithm:

(17)
in which {t_n} satisfies condition (C3). Next, we show the strong convergence of (17) when {t_n} satisfies condition (C1).

Theorem 4. Let C and Q be two closed convex subsets of H_1 and H_2, respectively. Then, as n ⟶ ∞, the sequence {x_n} defined by (17), where {t_n} satisfies condition (C1) and {β_n} and {α_n} satisfy condition (C2), converges strongly to the solution of (6).

Proof. As in the proof of Theorem 3, one can rewrite iterative algorithm (17) as follows:

(18)
where λ = (1 + γ‖A‖²)/2 and T = (1 − λ)I + λS. Since (I − t_nμF)T is nonexpansive, the strong convergence of (17) to the element x* in the solution set of (6) follows from Theorem 1.

Moreover, we obtain the following results for finding the common solution of the variational inequality problem and the multiple-sets split feasibility problem, i.e., for the problem: find a point

(19) x* ∈ Ω such that ⟨F(x*), x − x*⟩ ≥ 0 for all x ∈ Ω,
where Ω is the solution set of (1) and F : H_1 ⟶ H_1 is an η-strongly monotone and k-Lipschitz continuous mapping. This problem has been studied in [2].

Theorem 5. Let {C_i}_{i=1}^{N} and {Q_j}_{j=1}^{M} be two finite families of closed convex subsets of H_1 and H_2, respectively. Assume that γ ∈ (0, 1/‖A‖²), {t_n} and {α_n} satisfy conditions (C1) and (C2), respectively, and the parameters {δ_i} and {ζ_j} satisfy the following conditions:

  (a) δ_i > 0 for 1 ≤ i ≤ N such that ∑_{i=1}^{N} δ_i = 1.

  (b) ζ_j > 0 for 1 ≤ j ≤ M such that ∑_{j=1}^{M} ζ_j = 1.

Then, as n ⟶ ∞, the sequence {x_n}, defined by
(20)
with one of the following cases:

  • (A1) P_1 = P_{C_N} ⋯ P_{C_1} and P_2 = P_{Q_M} ⋯ P_{Q_1},

  • (A2) P_1 = ∑_{i=1}^{N} δ_iP_{C_i} and P_2 = ∑_{j=1}^{M} ζ_jP_{Q_j},

  • (A3) P_1 = P_{C_N} ⋯ P_{C_1} and P_2 = ∑_{j=1}^{M} ζ_jP_{Q_j},

  • (A4) P_1 = ∑_{i=1}^{N} δ_iP_{C_i} and P_2 = P_{Q_M} ⋯ P_{Q_1},

converges strongly to the element x* in the solution set of (19).

Proof. Let T = P_1(I − γA*(I − P_2)A). We will show that T is averaged.

In the case of (A1), P_1 = P_{C_N} ⋯ P_{C_1} and P_2 = P_{Q_M} ⋯ P_{Q_1}. Since P_{C_i} is (1/2)-averaged for all i = 1, …, N, by Proposition 1, we get that P_1 is λ_1-averaged, where λ_1 = N/(N + 1). Similarly, we have that P_2 is also averaged, and so P_2 is nonexpansive. By using Lemma 2, we deduce that I − γA*(I − P_2)A is λ_2-averaged, where λ_2 = γ‖A‖². It follows from Lemma 1 (i) that T is λ-averaged with λ = N/(N + 1) + γ‖A‖² − (N/(N + 1))γ‖A‖².

In the case of (A2), P_1 = ∑_{i=1}^{N} δ_iP_{C_i} and P_2 = ∑_{j=1}^{M} ζ_jP_{Q_j}; by using Proposition 2 and condition (a), we obtain that P_1 is (1/2)-averaged. From condition (b), and taking into account that P_{Q_j} is nonexpansive for all j = 1, …, M, we have that P_2 is also nonexpansive. It follows from Lemma 2 that I − γA*(I − P_2)A is γ‖A‖²-averaged. Thus, T is λ-averaged with λ = (1 + γ‖A‖²)/2.

Cases (A3) and (A4) are similar. This implies that T = (1 − λ)I + λS, where S is nonexpansive. Moreover, by Lemma 1 (ii), we get that

(21) Fix(T) = Fix(P_1) ∩ Fix(I − γA*(I − P_2)A) = Ω.

Then, iterative algorithm (20) can be rewritten as follows:

(22)
where λ is as above and T = (1 − λ)I + λS. Since (1 − λ)I + λS and I − t_nμF are nonexpansive, (I − t_nμF)T is nonexpansive. Thus, the strong convergence of (20) to the element x* in the solution set of (19) follows from Theorem 2.
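To illustrate how the operator T = P_1(I − γA*(I − P_2)A) is assembled in the four cases, here is a minimal sketch that builds P_1 and P_2 either as a composition or as a convex combination of the individual projections; the half-space and ball projections below simply borrow the sets used later in Example 2 and are otherwise illustrative:

```python
import numpy as np
from functools import reduce

def compose(projs):
    # P = P_N ... P_1: apply the projections successively, projs[0] first.
    return lambda x: reduce(lambda v, P: P(v), projs, x)

def convex_combination(projs, weights):
    # P = sum_i w_i P_i with w_i > 0 and sum_i w_i = 1 (conditions (a)/(b)).
    return lambda x: sum(w * P(x) for w, P in zip(weights, projs))

def make_T(P1, P2, A, gamma):
    # T = P_1(I - gamma A^T (I - P_2) A); A^T plays the role of A^* in R^d.
    return lambda x: P1(x - gamma * (A.T @ ((A @ x) - P2(A @ x))))

def proj_halfspace(a, b):
    # Projection onto {x : <a, x> <= b}.
    return lambda x: x if a @ x <= b else x - ((a @ x - b) / (a @ a)) * a

def proj_ball(c, r):
    # Projection onto {y : ||y - c|| <= r}.
    return lambda y: y if np.linalg.norm(y - c) <= r else c + r * (y - c) / np.linalg.norm(y - c)

Cs = [proj_halfspace(np.array([1.0 / i, -1.0]), 0.0) for i in range(1, 6)]  # N = 5
Qs = [proj_ball(np.array([1.0 / j, 0.0]), 1.0) for j in range(1, 5)]        # M = 4
A = np.array([[1.0, 0.5], [0.0, 1.0]])            # illustrative matrix
gamma = 0.5 / np.linalg.norm(A, 2) ** 2           # inside (0, 1/||A||^2)

# Case (A1): both compositions.  Case (A2): both convex combinations.
T_A1 = make_T(compose(Cs), compose(Qs), A, gamma)
T_A2 = make_T(convex_combination(Cs, [0.2] * 5),
              convex_combination(Qs, [0.25] * 4), A, gamma)
print(T_A1(np.array([-2.0, 1.0])), T_A2(np.array([-2.0, 1.0])))
```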

Theorem 6. Let {C_i}_{i=1}^{N}, {Q_j}_{j=1}^{M}, {δ_i}, and {ζ_j} be as in Theorem 5. Then, as n ⟶ ∞, the sequence {x_n}, defined by

(23)
with one of the cases (A1)–(A4), converges strongly to an element in the solution set of (19).

Proof. As in the proof of Theorem 5, one can rewrite iterative algorithm (23) as follows:

(24)
where λ is as in the proof of Theorem 5 and T = (1 − λ)I + λS. Since (I − t_nμF)T is nonexpansive, the strong convergence of (23) to the element x* in the solution set of (19) follows from Theorem 1.

4. Numerical Example

In this section, we present a numerical example comparing algorithm (8), given by Buong [2], and algorithm (15) (new method) on the following test problem from [2]: find an element x* ∈ Ω such that
(25) φ(x*) = min_{x∈Ω} φ(x),
where φ is a convex function having a strongly monotone and Lipschitz continuous derivative φ′(x) on the Euclidean space ℝ^n, where
(26) C_i = {x ∈ ℝ^n : ⟨a_i, x⟩ ≤ b_i}, with a_i = (a_1^i, …, a_n^i)^T ∈ ℝ^n and b_i ∈ ℝ, for 1 ≤ i ≤ N,
(27) Q_j = {y ∈ ℝ^m : ‖y − a_j‖ ≤ R_j}, with a_j = (a_1^j, …, a_m^j)^T ∈ ℝ^m and R_j > 0, for 1 ≤ j ≤ M,
and A is an n × m matrix.

Example 1. We consider test problem (25), where N = M = 1, n = m = 2, φ(x) = (1 − a)‖x‖²/2 for some fixed a ∈ (0,1), and

(28)

So, we have that F ≔ φ′ = (1 − a)I is a k-Lipschitz continuous and η-strongly monotone mapping with k = η = 1 − a. For each algorithm, we set a_i = (1/i, −1), b_i = 0 for all i = 1, …, N, and a_j = (1/j, 0), R_j = 1 for all j = 1, …, M. Taking a = 0.5 and γ = 0.3, the stopping criterion is defined by E_n = ‖x_{n+1} − x_n‖ < ε, where ε = 10⁻⁴ and 10⁻⁶. The numerical results are listed in Table 1 for different initial points x_1, where n is the number of iterations and s is the CPU time in seconds. In Figures 1 and 2, we present graphs illustrating the number of iterations for both methods using the stopping criterion defined above with the different initial points shown in Table 1.

Table 1. Computational results for Example 1 with different methods.

                                 ε = 10⁻⁴               ε = 10⁻⁶
Initial point   Method           n         s            n          s
(−2,1)^T        Buong method     29461     0.364595     2946204    31.362283
                New method       11784     0.241371     1178481    23.411679
(1,3)^T         Buong method     30632     0.565431     3063343    33.468210
                New method       12252     0.324808     1225336    25.570356
Figure 1. The convergence behavior of E_n with the initial point (−2,1)^T.
Figure 2. The convergence behavior of E_n with the initial point (1,3)^T.
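As a companion to Example 1, the following minimal sketch implements its shared ingredients (F = (1 − a)I, the half-plane C, the ball Q, and the operator T) and runs the baseline iteration (7); the matrix A below is a placeholder, since the entries in (28) are not reproduced here, so the iteration counts will not match Table 1:

```python
import numpy as np

a_par = 0.5                                   # 'a' in phi(x) = (1 - a)||x||^2 / 2
F = lambda x: (1.0 - a_par) * x               # F = phi' = (1 - a)I, eta = k = 1 - a
A = np.array([[1.0, 2.0], [3.0, 1.0]])        # placeholder for the matrix in (28)

a1, b1 = np.array([1.0, -1.0]), 0.0           # C = {x : <a1, x> <= b1}  (i = 1)
c1, R1 = np.array([1.0, 0.0]), 1.0            # Q = {y : ||y - c1|| <= R1} (j = 1)

def P_C(x):
    v = a1 @ x - b1
    return x if v <= 0 else x - (v / (a1 @ a1)) * a1

def P_Q(y):
    d = y - c1
    n = np.linalg.norm(d)
    return y if n <= R1 else c1 + R1 * d / n

gamma = 0.5 / np.linalg.norm(A, 2) ** 2       # inside (0, 1/||A||^2), as Lemma 2 requires
mu = 1.0                                      # inside (0, 2*eta/k^2) = (0, 2/(1 - a))

def T(x):
    Ax = A @ x
    return P_C(x - gamma * (A.T @ (Ax - P_Q(Ax))))

x, eps, n = np.array([-2.0, 1.0]), 1e-4, 0
while True:
    n += 1
    t_n = 1.0 / (n + 1)                       # satisfies (C1): t_n -> 0 and sum t_n diverges
    Tx = T(x)
    x_next = Tx - t_n * mu * F(Tx)            # x_{n+1} = (I - t_n mu F) T x_n, method (7)
    if np.linalg.norm(x_next - x) < eps:
        break
    x = x_next
print(n, x_next)
```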

Remark 1. From the numerical results in Table 1 and Figures 1 and 2, we see that algorithm (15) (new method) requires fewer iterations and converges faster than algorithm (8) (Buong method).

Example 2. In this example, we consider algorithm (23) for solving test problem (25), where N = 5 and M = 4. Let {C_i}_{i=1}^{N}, {Q_j}_{j=1}^{M}, φ, a, and A be as in Example 1. In the numerical experiment, we take the stopping criterion E_n < 10⁻⁴. The numerical results are listed in Table 2 for the different cases of P_1 and P_2. In Figures 3 and 4, we present graphs illustrating the number of iterations for all cases of P_1 and P_2 using the stopping criterion above with the different initial points appearing in Table 2. Moreover, Table 3 shows the effect of different choices of γ.

Table 2. Computational results for Example 2 with different methods.

Initial point        A1          A2          A3          A4
(−2,1)^T    n        28577       24264       28577       24264
            s        1.491225    1.355074    1.534414    1.282528
(1,3)^T     n        33407       31438       33407       31438
            s        1.746868    1.693069    1.816897    1.690618
Figure 3. The convergence behavior of E_n with the initial point (−2,1)^T.
Figure 4. The convergence behavior of E_n with the initial point (1,3)^T.
Table 3. Computational results for Example 2 with different γ.

γ                    0.1         0.2         0.3
(−2,1)^T    n        9675        19200       28577
            s        0.669508    1.245136    1.666702
(1,3)^T     n        11311       22447       33407
            s        0.764536    1.372600    1.958486

Remark 2. We observe from the numerical results in Table 2 that algorithm (23) converges fastest when P_1 and P_2 satisfy (A4) and slowest when they satisfy (A3). Moreover, fewer iteration steps and less CPU time are required for convergence when γ is chosen small and close to zero.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

Acknowledgments

The first author is thankful to the Science Achievement Scholarship of Thailand. The authors would like to thank the Department of Mathematics, Faculty of Science, Naresuan University (grant no. R2564E049), for the support.

Data Availability

No data were used to support this study.
