Volume 2012, Issue 1, Article ID 142862
Research Article
Open Access

Convergence of a Proximal Point Algorithm for Solving Minimization Problems

Abdelouahed Hamdi

Corresponding Author

Department of Mathematics, Prince Sultan University, P.O. Box 66833, Riyadh 11586, Saudi Arabia
M. A. Noor

Mathematics Department, COMSATS Institute of Information Technology, Islamabad, Pakistan
A. A. Mukheimer

Department of Mathematics, Prince Sultan University, P.O. Box 66833, Riyadh 11586, Saudi Arabia
First published: 17 April 2012
Academic Editor: Yonghong Yao

Abstract

We introduce and consider a proximal point algorithm for solving minimization problems using the technique of Güler. This proximal point algorithm is obtained by substituting the usual quadratic proximal term by a class of convex nonquadratic distance-like functions. It can be seen as an extragradient iterative scheme. We establish the convergence rate of this new proximal point method under mild assumptions. Furthermore, it is shown that this rate estimate is better than the available ones.

1. Introduction

The purpose of this paper is twofold. Firstly, it proposes an extension of the proximal point method introduced by Güler [1] in 1992, where the usual quadratic proximal term is substituted by a class of strictly convex distance-like functions, called Bregman functions. Secondly, it offers a general framework for the convergence analysis of the proximal point method of Güler. This framework is general enough to apply to different classes of Bregman functions and still yield simple convergence proofs. The methods analyzable in this context are called Güler's generalized proximal point algorithms and are closely related to the Bregman proximal methods [2–5]. The analysis we develop is different from the works in [4, 5], since our method is based on Güler's technique.

2. Preliminaries

To be more specific, we consider the minimization problem in the following form:
min { f(x) : x ∈ ℝⁿ },  (2.1)
where f : ℝⁿ → ℝ ∪ {+∞} is a closed proper convex function. To solve the problem (2.1), Teboulle [6], Chen and Teboulle [2, 6], Eckstein [4], and Burachik [3] proposed a general scheme using Bregman proximal mappings of the type
𝒥_c(y) := argmin_x { f(x) + (1/c) D_h(x, y) },  (2.2)
where D_h is given by
D_h(x, y) = h(x) − h(y) − 〈∇h(y), x − y〉,  (2.3)
where h is a strictly convex and continuously differentiable function.
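Two classical kernels illustrate (2.3) (standard examples, recalled here for convenience): the quadratic kernel h(x) = (1/2)∥x∥² gives D_h(x, y) = (1/2)∥x − y∥², the usual proximal term, while the entropy kernel h(x) = Σ_i x_i log x_i (on the positive orthant) gives the Kullback–Leibler distance D_h(x, y) = Σ_i [x_i log(x_i / y_i) − x_i + y_i].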

Throughout this paper, ∥·∥ denotes the ℓ2-norm and 〈·, ·〉 denotes the Euclidean inner product in ℝⁿ. Let G be a continuous single-valued mapping from ℝⁿ into ℝⁿ. The mapping G is Lipschitz continuous with Lipschitz constant L if, for all x, y ∈ ℝⁿ, ∥G(x) − G(y)∥ ≤ L∥x − y∥. We also denote by ρ(x, X) the distance of x to the set X, given by ρ(x, X) = min_{y∈X} ∥x − y∥. Further notation and definitions used in this paper are standard in convex analysis and may be found in Rockafellar's book [7].
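For illustration, when X is a finite set the distance ρ(·, X) can be evaluated directly; a minimal sketch with hypothetical data:

```python
import numpy as np

def rho(x, X):
    # rho(x, X) = min_{y in X} ||x - y||, here for a finite set X.
    return min(np.linalg.norm(x - y) for y in X)

X = [np.array([0.0, 0.0]), np.array([1.0, 1.0])]  # hypothetical finite set
x = np.array([0.5, 0.0])
print(rho(x, X))  # 0.5: the closest point of X is the origin
```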

This type of kernel was first introduced by Brègman [8] in 1967. The corresponding algorithm using these Bregman proximal mappings is called the generalized proximal point method (GPPM) and is also known under the terminology of Bregman proximal methods. These proximal methods solve (2.1) by considering a sequence of unconstrained minimization problems, which can be summarized as follows.

Algorithm 2.1. (1) Initialize x^0 ∈ ℝⁿ with f(x^0) < +∞ and c_0 > 0.

(2) Compute the solution xk+1 by the iterative scheme:

x^{k+1} = argmin_x { f(x) + (1/c_k) D_h(x, x^k) },  (2.4)
where {ck} is a sequence of positive numbers and Dh(·, ·) is defined by (2.3).

For D_h(x, y) = (1/2)∥x − y∥², Algorithm 2.1 coincides with the classical proximal point algorithm (PPA) introduced by Moreau [9] and Martinet [10].
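As a concrete illustration, here is a minimal sketch of Algorithm 2.1 with the quadratic kernel (i.e., the classical PPA) on a toy quadratic objective, for which the subproblem (2.4) has a closed-form solution; the test problem and the parameters c_k are hypothetical choices, not taken from the paper.

```python
import numpy as np

# Toy objective f(x) = 0.5 * x^T Q x (minimizer x* = 0); hypothetical data.
Q = np.array([[2.0, 0.0], [0.0, 10.0]])

def prox_step(x, c):
    # Subproblem (2.4) with the quadratic kernel:
    # argmin_u { 0.5 u^T Q u + (1/(2c)) ||u - x||^2 } = (c Q + I)^{-1} x.
    return np.linalg.solve(c * Q + np.eye(len(x)), x)

x = np.array([1.0, 1.0])      # x0 with f(x0) < +inf
for k in range(50):
    c_k = 1.0                 # positive proximal parameters {c_k}
    x = prox_step(x, c_k)

print(x, 0.5 * x @ Q @ x)     # both close to the minimum at (0, 0)
```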

Under mild assumptions on the data of (2.1), ergodic convergence was proved in [2, 5] when σ_n := Σ_{k=0}^{n} c_k → ∞, with the following global rate of convergence estimate:
f(z^n) − f* = O(1/σ_n),  (2.5)
where {z^n} denotes the sequence of weighted averages of the iterates (see Section 5).
Our purpose in this paper is to propose an algorithm of the same type as Algorithm 2.1 with a better convergence rate. To this end, we propose to combine Güler's scheme [1] with the Bregman proximal method. The main difference concerns the generation of an additional sequence {y^k} ⊂ ℝⁿ used in the unconstrained minimization (2.4), in such a way that
x^{k+1} = argmin_x { f(x) + (1/c_k) D_h(x, y^k) }.  (2.6)
We show (see Section 4) that this new proximal method possesses the following rate estimate:
f(x^n) − f* = O(1/γ_n²),  where γ_n := Σ_{k=0}^{n−1} √(c_k),  (2.7)
which is faster than (2.5). Further, convergence in terms of the objective values occurs when γ_n → ∞, which is weaker than σ_n → ∞.
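As a quick sanity check of this comparison, using the rate expressions displayed above: if c_k ≡ c > 0 for all k, then σ_n = (n + 1)c, so the bound (2.5) decreases like 1/n, whereas γ_n² = n²c, so the bound (2.7) decreases like 1/n².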
We briefly recall here the notion of Bregman functions, also called D-functions, introduced by Brègman ([8], 1967) and developed and used in proximal theory in [4, 6, 11–13]. Let S be an open subset of ℝⁿ, let h be a finite-valued, continuously differentiable function on S, and let D_h be defined by
D_h(x, y) = h(x) − h(y) − 〈∇h(y), x − y〉.  (2.8)

Definition 2.2. h is called a Bregman function with zone S or a D-function if:

  • (a)

h is continuously differentiable on S and continuous on the closure S̄ of S,

  • (b)

h is strictly convex on S̄,

  • (c)

for every λ ∈ ℝ, the partial level sets L_1(y, λ) = {x ∈ S̄ : D_h(x, y) ≤ λ} and L_2(x, λ) = {y ∈ S : D_h(x, y) ≤ λ} are bounded for every y ∈ S and x ∈ S̄, respectively,

  • (d)

if {y^k} ⊂ S is a convergent sequence with limit y*, then D_h(y*, y^k) → 0,

  • (e)

if {x^k} ⊂ S̄ and {y^k} ⊂ S are sequences such that y^k → y*, {x^k} is bounded, and D_h(x^k, y^k) → 0, then x^k → y*.

From the above definition, we extract the following properties (see, for instance, [6, 13]).

Lemma 2.3. Let h be a Bregman function with zone S. Then,

  • (i)

D_h(x, x) = 0 for x ∈ S, and D_h(x, y) ≥ 0 for x ∈ S̄ and y ∈ S,

  • (ii)

for all a, b ∈ S and c ∈ S̄,

    D_h(c, b) − D_h(c, a) − D_h(a, b) = 〈∇h(a) − ∇h(b), c − a〉,

  • (iii)

for all a, b ∈ S,

    D_h(a, b) + D_h(b, a) = 〈∇h(a) − ∇h(b), a − b〉,

  • (iv)

    for all

  • (v)

let {x^k} ⊂ S be such that x^k → x* ∈ S; then D_h(x*, x^k) → 0 and D_h(x^k, x*) → 0.

Lemma 2.4. (i) Let g : ℝⁿ → ℝ be a strictly convex function such that

()
then g is a Bregman function.

(ii) If g is a Bregman function, then g(x) + 〈c, x〉 + d, for any c ∈ ℝⁿ and d ∈ ℝ, is also a Bregman function.

Remark 2.5. D_h(·, ·) cannot be considered a distance because it lacks both symmetry and the triangle inequality. D_h(·, ·) is usually called an entropy distance.
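A quick numerical illustration of this asymmetry, using the standard entropy kernel h(x) = Σ_i x_i log x_i (a hypothetical choice for demonstration; any strictly convex nonquadratic h would do):

```python
import numpy as np

def D_entropy(x, y):
    # Bregman distance (2.3) for h(x) = sum_i x_i log x_i:
    # D_h(x, y) = sum_i (x_i log(x_i / y_i) - x_i + y_i).
    return float(np.sum(x * np.log(x / y) - x + y))

x, y = np.array([1.0, 2.0]), np.array([3.0, 1.0])
print(D_entropy(x, y), D_entropy(y, x))  # ~1.288 vs ~1.603: not symmetric
```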

The paper is organized as follows. In Section 3, we briefly recall Güler's proximal point method. Section 4 is devoted to the presentation and convergence analysis of the proposed algorithm, together with its finite convergence. In Section 5, we establish the convergence rate of the method. Finally, Section 6 concludes the paper.

3. Extragradient Algorithm

In 1992, Güler [1] developed a new proximal point approach, similar to the classical one (PPA), based on an idea stated by Nesterov [14].

Güler's proximal point algorithm (GPPA) can be summed up as follows.

Algorithm 3.1. (i) Initialize x^0 ∈ ℝⁿ with f(x^0) < +∞, c_0 > 0, and A > 0.

Define ν^0 := x^0, A_0 := A, k = 0.

(ii) Compute α_k ∈ [0,1[.

(iii) Compute the solution xk+1 by the iterative scheme:

()

For the convergence analysis, see Güler [1].
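To make the scheme concrete, the following minimal sketch runs a GPPA-type iteration on a toy quadratic objective. The specific update rules used below for α_k, ν^{k+1}, and A_{k+1} (a Nesterov-type momentum construction) are our reading of the scheme and should be checked against [1]; the test problem, the parameters c_k, and the iteration count are hypothetical choices for illustration only.

```python
import numpy as np

# Toy objective f(x) = 0.5 * x^T Q x, with minimizer x* = 0 (hypothetical data).
Q = np.array([[2.0, 0.0], [0.0, 10.0]])

def prox(y, c):
    # Exact proximal map of f: argmin_u { 0.5 u^T Q u + (1/(2c)) ||u - y||^2 }.
    return np.linalg.solve(c * Q + np.eye(len(y)), y)

x = np.array([1.0, 1.0])          # x0 with f(x0) < +inf
nu = x.copy()                     # nu0 = x0
A = 1.0                           # A0 > 0
for k in range(50):
    c_k = 1.0                     # proximal parameters c_k > 0
    b = c_k * A
    # Assumed rule: alpha_k in (0,1) solves alpha^2 = c_k * A_k * (1 - alpha).
    alpha = (-b + np.sqrt(b * b + 4.0 * b)) / 2.0
    y = (1.0 - alpha) * x + alpha * nu      # extrapolated point y^k
    x_new = prox(y, c_k)                    # proximal step taken at y^k, not x^k
    nu = nu + (x_new - y) / alpha           # auxiliary sequence nu^{k+1}
    A *= 1.0 - alpha                        # A_{k+1} = (1 - alpha_k) A_k
    x = x_new

print(x, 0.5 * x @ Q @ x)         # approaches the minimizer (0, 0)
```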

Remark 3.2. The GPPA can be seen as a suitable conjugate gradient type modification of the PPA of Rockafellar applied to (2.1).

4. Main Result

4.1. Introduction

The method that we are proposing is a modification of Güler's new proximal point approach (GPPA) discussed in Section 3, and it can be considered as a nonlinear (or nonquadratic) version of GPPA with Bregman kernels. In this paper, it is shown that this method, which we call the generalized Güler proximal point algorithm (GGPPA), retains the strong convergence results obtained by Güler [1]; therefore, this new scheme provides faster (global) convergence rates than the classical Bregman proximal point methods (cf. [2, 4–6, 11, 13, 15]). We propose the following algorithm, which generalizes Güler's proximal point algorithm and is summed up as follows.

Algorithm 4.1. (i) Initialize: x^0 ∈ ℝⁿ with f(x^0) < +∞, c_0 > 0, and A > 0.

Define ν^0 := x^0, A_0 := A, k = 0.

(ii) Compute α_k ∈ [0,1[.

(iii) Compute the solution xk+1 by the iterative scheme:

()
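To illustrate the nonquadratic proximal step that distinguishes this scheme from the quadratic one, the sketch below performs a single Bregman proximal step for a linear objective f(x) = 〈g, x〉 with the entropy kernel h(x) = Σ_i x_i log x_i, for which the minimizer is available in closed form; the data g, y, and c are hypothetical, and the extrapolation step producing y^k is omitted here.

```python
import numpy as np

# One Bregman proximal step for f(x) = <g, x> with the entropy kernel
# h(x) = sum_i x_i log x_i, so that D_h is the Kullback-Leibler distance.
# Optimality of argmin_x { <g, x> + (1/c) D_h(x, y) } reads
# g + (1/c) (log x - log y) = 0, hence x = y * exp(-c * g).
g = np.array([1.0, -0.5, 0.2])    # hypothetical (sub)gradient data
y = np.array([0.2, 0.5, 0.3])     # current point in the positive orthant
c = 0.8                           # proximal parameter c_k > 0

x_next = y * np.exp(-c * g)       # multiplicative, positivity-preserving update
print(x_next)
```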

In this section, we develop convergence results for the generalized Güler proximal point algorithm (GGPPA) presented above. Our analysis is based mainly on the following lemma.

Lemma 4.2 ([1, page 654]).   One has

()
for all αj ∈ [0,1[ and A > 0.

Theorem 4.3. For all x ∈ ℝⁿ such that f(x) < +∞, one has the following convergence rate estimate:

()

Proof. Using the fact that ϕ_0(x) := f(x^0) + A D_h(x, x^0), x^0 ∈ 𝒮, and Lemma 4.2, we obtain

()
Since , then (4.3) holds.

Theorem 4.4. Consider the sequence {x^k} generated by GGPPA and let x* be a minimizer of f on ℝⁿ. Assume that

  • (1)

    h is a Bregman function with zone 𝒮 such that ,

  • (2)

Im(∇h) = ℝⁿ or Im(∇h) is open,

  • (3)

∇h is Lipschitz continuous with coefficient L,

then
  • (a)

    for all x0 ∈ Dom (∇h), the sequence {xk} is well defined,

  • (b)

the GGPPA possesses the following convergence rate estimate:

    ()

  • (c)

f(x^k) − f* → 0, when γ_k → ∞, where γ_k is as in (2.7),

  • (d)

    ()
if c_k ≥ c > 0.

Proof. (a) Follows from [8, Theorem 4].

(b) We use assumption (3) together with the definition (2.3) of D_h in the following manner

()
and by taking x = x* in (4.3), we have
()
Since x* is arbitrary, (4.5) holds.

(c) Is obvious.

(d) It suffices to observe that if c_k ≥ c > 0, we have

()

4.2. Finite Convergence

Note that the finite convergence property was established for the classical proximal point algorithm in the case of sharp minima; see, for example, [16]. Recently, Kiwiel [5] extended this property to his generalized Bregman proximal method (BPM). In the following theorem, we prove that Algorithm 4.1 has this property. Our proof is based on Kiwiel's [5, Theorem 6.1, page 1151].

Definition 4.5. A closed proper convex function f : ℝⁿ → ℝ ∪ {+∞} is said to have a sharp minimum on ℝⁿ if and only if there exists τ > 0 such that

f(x) ≥ f* + τ ρ(x, X*)  for all x ∈ ℝⁿ,
where f* := min_{x∈ℝⁿ} f(x) and X* denotes the set of minimizers of f.
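For instance (a standard example, not taken from this paper): f(x) = ∥x∥ has a sharp minimum on ℝⁿ with τ = 1, since X* = {0} and f(x) − f* = ∥x∥ = ρ(x, X*); by contrast, f(x) = ∥x∥² has no sharp minimum, because f(x) − f* = ρ(x, X*)² < τ ρ(x, X*) near 0 for every fixed τ > 0.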

Theorem 4.6. Under the same hypotheses as in Theorem 4.4, if GGPPA is applied with f having a sharp minimum on ℝⁿ and {c_k} bounded, then there exists k such that 0 ∈ ∂f(x^k) and x^k ∈ X*.

Proof. Straightforward, using Theorem 4.4 and [5, Theorem 6.1, page 1151].

5. Convergence Rate of GGPPA

If {x^k} is a sequence of points, one forms the sequence {z^n} of weighted averages given by
z^n = (1/σ_n) Σ_{k=0}^{n} c_k x^k,  σ_n := Σ_{k=0}^{n} c_k,  (5.1)
where c_k > 0. If the sequence {z^n} converges, then {x^k} is said to converge ergodically.
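A minimal sketch of this weighted averaging, assuming the reconstruction in (5.1); the iterates and weights below are hypothetical:

```python
import numpy as np

def ergodic_average(xs, cs):
    # z^n = (1 / sum_k c_k) * sum_k c_k x^k, cf. (5.1).
    cs = np.asarray(cs, dtype=float)
    xs = np.asarray(xs, dtype=float)
    return (cs[:, None] * xs).sum(axis=0) / cs.sum()

xs = [[1.0, 0.0], [0.5, 0.1], [0.2, 0.0]]   # hypothetical iterates x^0..x^2
print(ergodic_average(xs, [1.0, 1.0, 2.0]))  # [0.475, 0.025]
```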

Theorem 5.1. GGPPA possesses the following convergence rate:

f(x^n) − f* = o(1/σ_n),  (5.2)
that is, σ_n (f(x^n) − f*) → 0. Furthermore, if c_k ≥ c > 0, then one has
f(x^n) − f* = o(1/n).  (5.3)

Proof. Let x* be a minimizer of f. For brevity, we denote Wk = f(xk) − f*,   Vk = f(yk) − f* and Δ = ∇h(yk) − ∇h(xk+1). At optimality in the unconstrained minimization in GGPPA, we can write

(1/c_k) Δ = (1/c_k)(∇h(y^k) − ∇h(x^{k+1})) ∈ ∂f(x^{k+1}),  (5.4)
and by the convexity of f, we have
f(x) ≥ f(x^{k+1}) + (1/c_k) 〈Δ, x − x^{k+1}〉  for all x ∈ ℝⁿ.  (5.5)
Setting x = xk in (5.5), we obtain
W_k − W_{k+1} ≥ (1/c_k) 〈Δ, x^k − x^{k+1}〉,  (5.6)
and for x = yk, we have
V_k − W_{k+1} ≥ (1/c_k) 〈Δ, y^k − x^{k+1}〉.  (5.7)
Or again, if we set x = x* in (5.5) and use the Cauchy–Schwarz inequality, we obtain
−W_{k+1} ≥ (1/c_k) 〈Δ, x* − x^{k+1}〉 ≥ −(1/c_k) ∥Δ∥ ∥x* − x^{k+1}∥,  (5.8)
that is,
W_{k+1} ≤ (1/c_k) ∥Δ∥ ∥x* − x^{k+1}∥.  (5.9)
Since h is convex, 〈xk+1yk, Δ〉 ≤ 0. Then we can write
()
that is,
()
Using the relation ∥Δ∥2L〈Δ, ykxk+1〉 and the inequality (5.7), we get the relation
W_{k+1}² ≤ (L/c_k) (V_k − W_{k+1}) ∥y^k − x*∥².  (5.12)
For short, we denote M_k = ∥y^k − x*∥; thus, (5.12) becomes
W_{k+1}² ≤ (L/c_k) (V_k − W_{k+1}) M_k².  (5.13)
Then by dividing both terms by , we get
()
Since the left-side term is positive, then
()
Now, following Güler [17, page 410], we use the fact that (1 + x)^{−1} ≤ 1 − 2x/3 for all x ∈ [0, 1/2]; indeed, (1 + x)(1 − 2x/3) = 1 + x/3 − 2x²/3 ≥ 1 whenever 0 ≤ x ≤ 1/2. To apply this inequality, it suffices to show that the ratio c_k W_{k+1}/(L M_k²) is less than or equal to 1/2. This can be deduced from the following relation (see Lemma 2.3 (ii)):
()
Indeed, since Dh(·, ·) ≥ 0, then (the proof of this next inequality can be found in the proof of Theorem 4.4-(b))
()
Therefore, and we obtain
W_{k+1} ≤ V_k (1 − 2 c_k W_{k+1} / (3 L M_k²)).  (5.18)
To continue the proof, we distinguish several cases.

Case 1.   If . Then Wk+1VkWk and we have . Thus, (5.18) becomes

()
and by summation from k = 0 to k = n, we get
()
that is,
()
Then,
()
Since x* is an arbitrary solution, we can write
()
and by multiplying both terms by , we obtain
()
Since y^k and x^{k+1} converge to the same point (indeed, we can see this via the formula giving ν^{k+1} in the algorithm GGPPA) and ρ(x^{k+1}, X*) → 0, then ; hence, we obtain
()
which implies
()
that is,
()

Case 2.   If . Then W_{k+1} ≤ W_k ≤ V_k and we have ; therefore, for n ≥ k we have . Thus, using inequality (5.18), we write

()
and by summation from k = 0 to k = n, we get
()
Then,
()
Since x* is an arbitrary solution, we can write
()
and by multiplying both terms by , we obtain
()
Since y^k and x^{k+1} converge to the same point (indeed, we can see this via the formula giving ν^{k+1} in the algorithm GGPPA) and ρ(x^{k+1}, X*) → 0, then ; hence, we obtain
()
which implies,
()
that is,
()

Case 3.   If . In this case, we observe that the sequence {f(x^k)} is increasing, which may imply divergence of the approach.

Since f is convex, the following convergence rate estimate can be derived directly.

Corollary 5.2. If one assumes that c_k ≥ c > 0 for all k, then

()

6. Conclusion

We have introduced an extragradient method for minimizing convex functions. The algorithm is based on a generalization of the technique originally proposed by Nesterov [14] and readapted by Güler in [1, 17], where the usual quadratic proximal term is substituted by a class of convex nonquadratic distance-like functions. The new algorithm has a better theoretical convergence rate than the available ones. This naturally motivates the study of the numerical efficiency of the new algorithm and of its application to solving variational inequality problems [18, 19]. Also, further efforts are needed to extend the present study to nonconvex situations and to apply it to nonconvex equilibrium problems [20].

Acknowledgments

The authors would like to thank Dr. Osman Güler for providing them with reference [12] and the English translation of the original paper of Nesterov [14].
