A Hybrid Gradient-Projection Algorithm for Averaged Mappings in Hilbert Spaces
Abstract
It is well known that the gradient-projection algorithm (GPA) is very useful for solving constrained convex minimization problems. In this paper, we combine a general iterative method with the gradient-projection algorithm to propose a hybrid gradient-projection algorithm, and we prove that the sequence generated by this algorithm converges in norm to a minimizer of the constrained convex minimization problem which, in addition, solves a variational inequality.
1. Introduction
Since the gradient ∇f is Lipschitz continuous with constant L, it is in fact 1/L-inverse strongly monotone (ism) [12, 13], so its complement I − λ∇f is an averaged mapping whenever 0 < λ < 2/L. Recall that a mapping T is nonexpansive if and only if it is Lipschitz with Lipschitz constant at most one; that a mapping is averaged if and only if it can be expressed as a proper convex combination of the identity mapping and a nonexpansive mapping; and that a mapping T is said to be ν-inverse strongly monotone, with ν > 0, if and only if 〈x − y, Tx − Ty〉 ≥ ν∥Tx − Ty∥² for all x, y ∈ H. Recall also that the composite of finitely many averaged mappings is averaged: if each of the mappings T1, …, TN is averaged, then so is the composite T1 ⋯ TN [14]. In particular, an averaged mapping is nonexpansive [15]. As a result, each GPA step ProjC(I − λ∇f) is the composite of a projection (which is averaged, being firmly nonexpansive) and an averaged mapping, and is therefore itself an averaged mapping.
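As a concrete illustration (ours, not the paper's), the following sketch runs the averaged GPA step ProjC(I − λ∇f) on a small quadratic objective over a box; the objective, the constraint set, and the step size λ = 1/L ∈ (0, 2/L) are all illustrative assumptions.

```python
import numpy as np

# Illustrative GPA sketch (our assumptions, not from the paper): minimize
# f(x) = 0.5*||A x - b||^2 over the box C = [0, 1]^d, so that
# grad f(x) = A^T (A x - b) is Lipschitz with constant L = ||A^T A||_2.
rng = np.random.default_rng(0)
d = 5
A = rng.standard_normal((8, d))
b = rng.standard_normal(8)
L = np.linalg.norm(A.T @ A, 2)        # Lipschitz constant of grad f
lam = 1.0 / L                         # any lam in (0, 2/L) makes I - lam*grad f averaged

def proj_C(x):
    """Projection onto C = [0, 1]^d (componentwise clip)."""
    return np.clip(x, 0.0, 1.0)

x = np.zeros(d)
for _ in range(500):
    # averaged composite: Proj_C applied after (I - lam * grad f)
    x = proj_C(x - lam * (A.T @ (A @ x - b)))

print("GPA iterate after 500 steps:", x)
```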
2. Preliminaries
This section collects some lemmas which will be used in the proofs for the main results in the next section. Some of them are known; others are not hard to derive.
Throughout this paper, we write xn ⇀ x to indicate that the sequence {xn} converges weakly to x, and xn → x to indicate that {xn} converges strongly to x. We denote by ωw(xn) := {x : xnj ⇀ x for some subsequence {xnj} of {xn}} the weak ω-limit set of the sequence {xn}.
Lemma 2.1 (see [17]). Assume that {an} is a sequence of nonnegative real numbers such that
an+1 ≤ (1 − γn)an + γnδn + βn, n ≥ 0,
where {γn} is a sequence in (0, 1) and {δn}, {βn} are sequences such that
- (i)
∑n γn = ∞;
- (ii)
either limsup n→∞δn ≤ 0 or ∑n γn|δn| < ∞;
- (iii)
βn ≥ 0 for all n and ∑n βn < ∞.
Then lim n→∞an = 0.
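For intuition, here is a small numerical illustration (with sequences chosen by us) in which all three conditions hold and an is driven to 0:

```python
import math

# Numerical illustration of Lemma 2.1 with illustrative sequences of ours;
# equality is taken in a_{n+1} <= (1 - g_n)a_n + g_n*d_n + b_n.
a = 1.0
for n in range(1, 200001):
    g = 1.0 / n                # condition (i):  sum g_n = infinity
    d = 1.0 / math.sqrt(n)     # condition (ii): limsup d_n <= 0, since d_n -> 0
    bn = 1.0 / n ** 2          # condition (iii): sum b_n < infinity
    a = (1 - g) * a + g * d + bn
print(f"a_n after 2*10^5 steps: {a:.4g}")  # decays to 0, roughly like 1/sqrt(n)
```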
Lemma 2.2 (see [18]). Let C be a closed and convex subset of a Hilbert space H, and let T : C → C be a nonexpansive mapping with Fix T ≠ ∅. If {xn} is a sequence in C weakly converging to x and if {(I − T)xn} converges strongly to y, then (I − T)x = y.
Lemma 2.3. Let H be a Hilbert space, C a nonempty closed and convex subset of H, h : C → C a contraction with coefficient 0 < ρ < 1, and F : C → C a κ-Lipschitzian and η-strongly monotone operator with κ, η > 0. Then, for 0 < γ < μη/ρ, the mapping μF − γh is strongly monotone with coefficient μη − γρ; that is,
〈x − y, (μF − γh)x − (μF − γh)y〉 ≥ (μη − γρ)∥x − y∥², x, y ∈ C.
Lemma 2.4. Let C be a nonempty closed and convex subset of a real Hilbert space H, and let x ∈ H and y ∈ C be given. Then y = PCx if and only if there holds the inequality
〈x − y, z − y〉 ≤ 0, ∀z ∈ C.
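As a quick sanity check (an illustration of ours, not part of the original text), the characterization in Lemma 2.4 can be verified numerically for the box C = [0, 1]^d, whose projection is the componentwise clip:

```python
import numpy as np

# Sanity check of Lemma 2.4 for the illustrative choice C = [0, 1]^d:
# y = P_C x  iff  <x - y, z - y> <= 0 for all z in C.
rng = np.random.default_rng(1)
d = 4
x = 3.0 * rng.standard_normal(d)      # a point, typically outside C
y = np.clip(x, 0.0, 1.0)              # candidate projection P_C x
for _ in range(1000):
    z = rng.uniform(0.0, 1.0, size=d) # random test points z in C
    assert np.dot(x - y, z - y) <= 1e-12
print("the inequality <x - y, z - y> <= 0 held for all sampled z in C")
```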
3. Main Results
Proposition 3.1. Let xs be defined by (3.6).
- (i)
{xs} is bounded for s ∈ (0, 1/τ).
- (ii)
lim s→0∥xs − ProjC(I − λs∇f)(xs)∥ = 0.
- (iii)
xs defines a continuous curve from (0,1/τ) into H.
Proof. (i) Take an x̄ ∈ S; then we have
(ii) By the definition of {xs}, we have
(iii) Take s, s0 ∈ (0,1/τ), and we have
Our main result below shows that {xs} converges in norm to a minimizer of (1.1) which, in addition, solves a variational inequality.
Theorem 3.2. Assume that {xs} is defined by (3.6). Then xs converges in norm, as s → 0, to a minimizer x* of (1.1) which solves the variational inequality
〈(μF − γh)x*, x − x*〉 ≥ 0, ∀x ∈ S. (3.12)
Proof. We first show the uniqueness of a solution of the variational inequality (3.12). By Lemma 2.3, μF − γh is strongly monotone, so the variational inequality (3.12) has at most one solution. Let x* ∈ S denote the unique solution of (3.12).
To prove that xs → x* (s → 0), we write, for a given x̄ ∈ S,
We next prove that each weak cluster point x̃ of {xs}, as s → 0, is a solution of the variational inequality (3.12). Since
Taking F = A and μ = 1 in Theorem 3.2, we get the following corollary.
Corollary 3.3. We have that {xs} converges in norm as s → 0 to a minimizer x* of (1.1) which solves the variational inequality
〈(A − γh)x*, x − x*〉 ≥ 0, ∀x ∈ S.
Taking F = I, μ = 1, γ = 1 in Theorem 3.2, we get the following.
Corollary 3.4. Let zs ∈ H be the unique fixed point of the contraction z ↦ sh(z)+(1 − s)ProjC(I − λs∇f)(z). Then, {zs} converges in norm as s → 0 to the unique solution z* of the variational inequality
〈(I − h)z*, z − z*〉 ≥ 0, ∀z ∈ S.
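Since the map z ↦ sh(z) + (1 − s)ProjC(I − λs∇f)(z) is a contraction for each fixed s, the point zs can be approximated by Picard iteration. The following sketch traces zs as s decreases; the problem data, the contraction h, and the constant step λ standing in for λs are our illustrative assumptions.

```python
import numpy as np

# Sketch of Corollary 3.4 (problem data and h are illustrative assumptions):
# approximate the fixed point z_s of the contraction
#   T_s(z) = s*h(z) + (1 - s)*Proj_C(z - lam*grad_f(z))
# by Picard iteration, and watch z_s settle as s -> 0.
rng = np.random.default_rng(2)
d = 5
A = rng.standard_normal((8, d))
b = rng.standard_normal(8)
lam = 1.0 / np.linalg.norm(A.T @ A, 2)   # constant step, standing in for lam_s

grad_f = lambda z: A.T @ (A @ z - b)
proj_C = lambda z: np.clip(z, 0.0, 1.0)
h = lambda z: 0.5 * z                    # a contraction with coefficient rho = 0.5

def z_s(s, iters=2000):
    z = np.zeros(d)
    for _ in range(iters):               # T_s has contraction factor <= 1 - s*(1 - rho)
        z = s * h(z) + (1 - s) * proj_C(z - lam * grad_f(z))
    return z

for s in (0.5, 0.1, 0.01):
    print(f"s = {s:4.2f}: z_s ≈ {np.round(z_s(s), 4)}")
```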
We now turn to an explicit scheme. Assume that the sequences {θn} ⊂ (0, 1) and {λn} satisfy the following conditions:
- (i)
θn → 0;
- (ii)
∑n θn = ∞;
- (iii)
∑n |θn+1 − θn| < ∞;
- (iv)
∑n |λn+1 − λn| < ∞.
Theorem 3.5. Assume that the minimization problem (1.1) is consistent and that the gradient ∇f satisfies the Lipschitz condition (1.2). Let {xn} be generated by algorithm (3.29) with the sequences {θn} and {λn} satisfying the above conditions. Then the sequence {xn} converges in norm to the point x* obtained in Theorem 3.2.
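To make the scheme concrete, here is a hedged numerical sketch. Since the display (3.29) is not reproduced here, we assume it has the explicit form xn+1 = θnγh(xn) + (I − θnμF)ProjC(xn − λn∇f(xn)), consistent with the implicit scheme of Corollary 3.4; the problem data, h, F, and the parameter choices are likewise our assumptions.

```python
import numpy as np

# Hedged sketch of an explicit hybrid gradient-projection step; the form
#   x_{n+1} = theta_n*gamma*h(x_n) + (I - theta_n*mu*F) Proj_C(x_n - lam_n*grad_f(x_n))
# is our assumed reading of (3.29). All problem data are illustrative.
rng = np.random.default_rng(3)
d = 5
A = rng.standard_normal((8, d))
b = rng.standard_normal(8)
Lip = np.linalg.norm(A.T @ A, 2)       # Lipschitz constant of grad f

grad_f = lambda x: A.T @ (A @ x - b)
proj_C = lambda x: np.clip(x, 0.0, 1.0)
h = lambda x: 0.5 * x                  # contraction with rho = 0.5
F = lambda x: x                        # F = I: kappa = eta = 1
mu, gamma = 1.0, 1.0                   # then 0 < gamma < mu*eta/rho = 2 holds

x = np.zeros(d)
for n in range(1, 5001):
    theta = 1.0 / n                    # theta_n -> 0 and sum theta_n = infinity
    lam = 1.0 / Lip                    # constant lam_n in (0, 2/L)
    y = proj_C(x - lam * grad_f(x))    # the GPA (averaged) step
    x = theta * gamma * h(x) + y - theta * mu * F(y)   # (I - theta*mu*F) applied to y

print("hybrid iterate x_n ≈", np.round(x, 4))
```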
Proof. (1) The sequence {xn} is bounded. Setting
(2) We prove that ∥xn+1 − xn∥→0 as n → ∞. Let M be a constant such that
(3) We prove that ωw(xn) ⊂ S. Let x̂ ∈ ωw(xn), and assume that xnj ⇀ x̂ for some subsequence {xnj} of {xn}. We may further assume that λnj → λ due to condition (1.4). Set V := ProjC(I − λ∇f). Notice that V is nonexpansive and Fix V = S. It turns out that
(4) We prove that xn → x* as n → ∞, where x* is the unique solution of the VI (3.12). First observe that there is a subsequence {xnj} of {xn} such that
We now compute
Corollary 3.6 (see [11]). Let {xn} be generated by the following algorithm:
Acknowledgments
Ming Tian is supported in part by the Fundamental Research Funds for the Central Universities (the Special Fund of Science in Civil Aviation University of China, No. ZXH2012K001) and by the Science Research Foundation of Civil Aviation University of China (No. 2012KYM03).