An Augmented Lagrangian Algorithm for Solving Semiinfinite Programming
Abstract
We present a smooth augmented Lagrangian algorithm for semi-infinite programming (SIP). For this algorithm, we establish a perturbation theorem under mild conditions. As a corollary of the perturbation theorem, we obtain a global convergence result: any accumulation point of the sequence generated by the algorithm is a solution of SIP. This global convergence result requires no boundedness or coercivity condition. Another corollary of the perturbation theorem shows that the perturbation function is lower semi-continuous at zero if and only if the algorithm forces the sequence of objective function values to converge to the optimal value of SIP. Finally, numerical results are given.
1. Introduction
Semi-infinite programming has wide applications in areas such as engineering design, optimal control, eigenvalue computation, and statistical design. Many methods have been proposed for solving semi-infinite programming problems (see [1–4]). As is well known, the main difficulty in solving SIP is that it has infinitely many constraints. By transforming the infinitely many constraints into a single integral constraint function, SIP (1.1) becomes equivalent to a nonlinear programming problem with finitely many constraints.
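To make this reformulation concrete, the sketch below illustrates one common way to carry it out (our own illustration; the precise definitions of (1.1) and (1.5) appear in equations not reproduced in this excerpt): the infinitely many constraints g(x, s) ≤ 0, s ∈ Ω, are replaced by the single integral constraint G(x) = ∫_Ω max(g(x, s), 0) ds = 0, which can be evaluated numerically by quadrature. The constraint function g used here is a hypothetical stand-in.

```python
import numpy as np

def g(x, s):
    # Hypothetical constraint function g(x, s); a stand-in for the problem's own g.
    return x[0] * s**2 + x[1] * s - 1.0

def G(x, num_nodes=201):
    # Aggregate violation G(x) = integral over Omega = [0, 1] of max(g(x, s), 0) ds,
    # approximated by the composite trapezoidal rule on a uniform grid.
    s = np.linspace(0.0, 1.0, num_nodes)
    v = np.maximum(g(x, s), 0.0)
    h = s[1] - s[0]
    return h * (v.sum() - 0.5 * (v[0] + v[-1]))

# G(x) = 0 holds exactly when g(x, s) <= 0 for every s in Omega, so the infinitely many
# inequality constraints and the single equality constraint G(x) = 0 define the same set.
print(G(np.array([-1.0, 0.5])))  # feasible point: prints 0.0
print(G(np.array([1.0, 1.0])))   # infeasible point: prints a positive value
```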
For nonlinear programming with finitely many equality constraints, Hestenes [5] and Powell [6] independently proposed an augmented Lagrangian function by adding a quadratic penalty term to the conventional Lagrangian function. This augmented Lagrangian function avoids the shortcoming that the conventional Lagrangian function is suitable only for convex problems, so it can be applied to nonconvex optimization problems. Later, the augmented Lagrangian function was extended to inequality-constrained optimization problems and thoroughly investigated by Rockafellar [7]. Recently, Yang and Teo [8] and Rückmann and Shapiro [9] introduced augmented Lagrangian functions for SIP (1.1). In [9], necessary and sufficient conditions for the existence of the corresponding augmented Lagrange multipliers were presented. In [8], a nonlinear Lagrangian method was proposed, and it was established that the sequence of optimal values of the nonlinear penalty problems converges to the optimal value of SIP (1.1), under the assumption that a level set of the objective function is bounded. In this paper, using the equivalence of the semi-infinite program (1.1) and the nonlinear program (1.5), and without any boundedness condition, we present an augmented Lagrangian algorithm for SIP (1.1).
We note that although NP (1.5) has only finitely many constraints, its constraint function is nonsmooth. Therefore, existing gradient-based optimization methods cannot be applied to NP (1.5) directly. To overcome this difficulty, we smooth the constraint function. For SIP (1.1), [10–13] presented semismooth Newton methods and smoothing Newton methods and proved that each accumulation point is a generalized stationary point of SIP (1.1). However, at each iteration of these methods a Hessian matrix must be computed, which becomes very expensive when the problem is large. Based on the exact l1 penalty function, approximated by a family of smoothing functions, a smoothed-penalty algorithm for solving NP (1.5) was proposed in [14]; it was proved that if the constraint set is bounded or the objective function is coercive, the algorithm generates a sequence whose accumulation points are solutions of SIP (1.1).
In this paper, for SIP (1.1), we present a smooth augmented Lagrangian algorithm obtained by smoothing the classical augmented Lagrangian function [7]. In this algorithm, we do not need an exact global optimal solution of the unconstrained subproblem at each iteration; it suffices to compute an inexact solution, which is not difficult to obtain whenever evaluating the integral function is not too expensive. For this algorithm, we establish a perturbation theorem under mild conditions. As a corollary of the perturbation theorem, we obtain a global convergence result: any accumulation point of the sequence generated by the algorithm is a solution of SIP (1.1). We obtain this result without any boundedness or coercivity condition. It is noteworthy that boundedness of the multiplier sequence is a standing assumption in much of the literature on Lagrangian methods (see [15–17]); in our algorithm, however, the multiplier sequence may be unbounded. Another corollary of the perturbation theorem shows that the perturbation function is lower semi-continuous at zero if and only if the algorithm forces the sequence of objective function values to converge to the optimal value of SIP (1.1).
The paper is organized as follows. In the next section, we present the smooth augmented Lagrangian algorithm. In Section 3, we establish the perturbation theorem for the algorithm; from this theorem we obtain a global convergence property and a necessary and sufficient condition under which the algorithm forces the sequence of objective function values to converge to the optimal value of SIP (1.1). Finally, we give some numerical results in Section 4.
2. Smooth Augmented Lagrangian Algorithm
Throughout this section, ϕ(·) denotes the smoothing function used to construct the smooth augmented Lagrangian function fρ(x, β, r); it is assumed to satisfy the following conditions:
- (a)
ϕ(·) is nonnegative and monotone increasing;
- (b)
for any t > 0, ϕ(t) ≥ t;
- (c)
lim t→+∞ (ϕ(t)/t) = 1.
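For instance, ϕ(t) = log(1 + e^t), one of the smoothing functions used in Section 4, satisfies (a)–(c): it is positive and increasing, log(1 + e^t) > t because 1 + e^t > e^t, and log(1 + e^t)/t → 1 as t → +∞. The short check below (a numerical illustration only; the scaled form ϕ(ρt)/ρ is our assumption, since the exact definition of fρ is given in equations not reproduced here) shows how such a ϕ yields a smooth approximation of max(t, 0) that tightens as ρ grows.

```python
import numpy as np

def smoothed_max(t, rho):
    # Assumed scaling phi(rho * t) / rho with phi(t) = log(1 + e^t).
    # np.logaddexp(0, u) computes log(1 + e^u) in a numerically stable way.
    return np.logaddexp(0.0, rho * t) / rho

t = np.linspace(-2.0, 2.0, 9)
for rho in (1.0, 10.0, 100.0):
    err = np.max(np.abs(smoothed_max(t, rho) - np.maximum(t, 0.0)))
    print(f"rho = {rho:6.1f}, max error = {err:.2e}")
# The worst-case gap is log(2)/rho (attained at t = 0), so the approximation
# error decreases like 1/rho as the smoothing parameter grows.
```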
Based on the smooth augmented Lagrangian function fρ(x, β, r), we present the following smooth augmented Lagrangian algorithm.
Algorithm 2.1. Set x0 ∈ Rn, r0 > 0, ε0 > 0, ρ0 > 0, , k : = 0.
Step 1. Compute
Since f(x) is bounded below and ϕ(·) is nonnegative, an inexact solution satisfying (2.10) always exists. Thus Algorithm 2.1 is well defined.
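To fix ideas, the following is a schematic Python sketch of an augmented Lagrangian outer loop of this type. The specific multiplier, penalty, smoothing, and tolerance updates shown are our own illustrative assumptions in the classical Hestenes-Powell style, since the equations (2.1)–(2.10) defining fρ(x, β, r) and the inexactness test are not reproduced in this excerpt.

```python
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian_sip(f, G_smooth, x0, beta0=0.0, r0=1.0, rho0=1.0,
                             eps0=1.0, max_outer=30, feas_tol=1e-8):
    """Schematic outer loop (illustrative assumptions, not the paper's exact rules).

    f        : smooth objective f(x)
    G_smooth : (x, rho) -> smoothed integral constraint value G_rho(x) >= 0
    """
    x, beta, r, rho, eps = np.asarray(x0, dtype=float), beta0, r0, rho0, eps0
    for _ in range(max_outer):
        def f_aug(z):
            # Classical augmented Lagrangian for the equality-type constraint G(x) = 0,
            # with the nonsmooth G replaced by its smoothed version G_rho.
            Gz = G_smooth(z, rho)
            return f(z) + beta * Gz + 0.5 * r * Gz**2

        # Inexact minimization of the unconstrained subproblem: a loose gradient
        # tolerance eps_k plays the role of the inexactness condition (2.10).
        res = minimize(f_aug, x, method="BFGS", options={"gtol": eps})
        x = res.x

        if G_smooth(x, rho) <= feas_tol:
            break
        beta += r * G_smooth(x, rho)  # assumed multiplier update
        r *= 2.0                      # assumed penalty growth
        rho *= 2.0                    # assumed tightening of the smoothing
        eps *= 0.5                    # assumed shrinking tolerance eps_k -> 0
    return x, beta
```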
3. Convergence Properties
In this section, using a perturbation theorem for Algorithm 2.1, we obtain a global convergence property and a necessary and sufficient condition under which Algorithm 2.1 forces the sequence of objective function values to converge to the optimal value of SIP (1.1). To prove the perturbation theorem, we first give the following two lemmas.
Let Ω+(x) = {s ∈ Ω∣g(x, s) > 0}, Ω0(x) = {s ∈ Ω∣g(x, s) = 0}, Ω−(x) = {s ∈ Ω∣g(x, s) < 0}.
Lemma 3.1. Suppose that the point sequence {xk} is generated by Algorithm 2.1. Then for any ε > 0, there exists a positive integer k0 such that xk ∈ Rε, for all k > k0.
Proof.
Case 1. When k → +∞, ρk tends to a finite number. From Algorithm 2.1, there exists a positive integer N1 such that for all k > N1. Notice that εk → 0, so for any ε > 0, there exists a positive integer N2 such that εk < ε for all k > N2. Therefore, when k > max {N1, N2}, we have .
Case 2. When k → +∞, ρk → +∞. We suppose that the conclusion does not hold. Then for , there exists an infinite subsequence K⊆N = {1,2, 3, …} such that for all k ∈ K, that is,
By using Lemma 3.1, we have the following Lemma 3.2.
Lemma 3.2. Suppose that the point sequence {xk} is generated by Algorithm 2.1. Then for every accumulation point x* of {xk}, one has x* ∈ R0.
Theorem 3.3. Suppose that the sequence {xk} is generated by Algorithm 2.1. Then
- (i)
;
- (ii)
;
- (iii)
.
Proof. Since θf(ε) is monotonically decreasing with respect to ε > 0 and bounded below, we know that lim ε→0+ θf(ε) exists and is finite. By Algorithm 2.1, we have εk ↓ 0. Then
On the other hand, by Lemma 3.1, for any ε > 0, when k is sufficiently large, we have xk ∈ Rε.
Now, we prove the global convergence of Algorithm 2.1.
Corollary 3.4. Suppose that the point sequence {xk} is generated by Algorithm 2.1. Then every accumulation point of {xk} is an optimal solution of problem (1.1).
Proof. Let x* be an accumulation point of {xk}; from Lemma 3.2, we have x* ∈ R0.
By using Theorem 3.3, we have the following Corollary 3.5.
Corollary 3.5. lim k→∞ f(xk) = θf(0) if and only if θf(ε) is lower semi-continuous at the point ε = 0.
4. Numerical Results
- k: number of iterations;
- x0: starting point;
- ϕ(t): smoothing function;
- xk: the final iteration point;
- λk: the final Lagrangian multiplier;
- f(xk): the function value of f(x) at the final point xk.
Table 1: Numerical results for Example 4.1.

| k  | ϕ(t)          | xk                | λk       | f(xk)  |
|----|---------------|-------------------|----------|--------|
| 26 | log(1 + e^t)  | (−0.0968, 0.0938) | 367.2453 | 2.1982 |
| 30 |               | (−0.0959, 0.0947) | 923.9.40 | 2.1993 |
| 19 | e^t           | (−0.0953, 0.0953) | 6.1129   | 2.1999 |
Table 2: Numerical results for Example 4.2 with n = 3, 6, 8.

| n | k  | ϕ(t)          | xk                                                     | λk      | f(xk)  |
|---|----|---------------|--------------------------------------------------------|---------|--------|
| 3 | 17 | log(1 + e^t)  | (0.0839, 0.4494, 1.0105)                               | 18.5016 | 0.6472 |
| 3 | 16 |               | (0.0839, 0.4494, 1.0108)                               | 18.5869 | 0.6472 |
| 3 | 16 | e^t           | (0.0873, 0.4248, 1.0460)                               | 12.3942 | 0.6484 |
| 6 | 16 | log(1 + e^t)  | (0.00, 1.03, −0.25, 1.24, −1.39, 0.94)                 | 5.3234  | 0.6161 |
| 6 | 17 |               | (−0.00, 1.03, −0.25, 1.23, −1.39, 0.94)                | 7.5705  | 0.6161 |
| 6 | 16 | e^t           | (−0.00, 1.02, −0.25, 1.24, −1.41, 0.95)                | 4.1742  | 0.6161 |
| 8 | 17 | log(1 + e^t)  | (−0.00, 1.00, −0.06, 0.77, −1.49, 2.77, −2.41, 0.96)   | 4.3033  | 0.6157 |
| 8 | 18 |               | (−0.00, 1.00, −0.06, 0.77, −1.47, 2.78, −2.40, 0.96)   | 7.0902  | 0.6157 |
| 8 | 16 | e^t           | (0.00, 1.00, −0.06, 0.78, −1.49, 2.78, −2.42, 0.97)    | 3.3116  | 0.6158 |
Example 4.1 (see [18]). Consider the following:
Example 4.2 (see [18]). Consider the following:
Throughout the computational experiments, we use a trust region method to solve the unconstrained optimization subproblem at each step. For the corresponding trust region subproblem, we directly use the trust function from the MATLAB toolbox. The test results for Example 4.1 are summarized in Table 1. We test three choices of the smoothing approximation function ϕ(t), including ϕ(t) = log(1 + e^t) and ϕ(t) = e^t. Here k denotes the number of iterations, λk denotes the approximate Lagrangian multiplier at the final iteration, and xk and f(xk) are the approximate solution and the objective function value at the final iteration. For Example 4.2, Table 2 reports results for n = 3, n = 6, and n = 8. The numerical results demonstrate that the augmented Lagrangian algorithm established in this paper is a practical and effective method for solving semi-infinite programming problems.
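For readers reproducing the experiments outside MATLAB, a comparable setup can solve each smooth unconstrained subproblem with a trust-region method; for instance (a Python analogue of ours, not the code used in the paper), scipy.optimize.minimize with method='trust-exact' solves each trust-region subproblem essentially exactly, playing a role similar to the trust function from the MATLAB toolbox.

```python
import numpy as np
from scipy.optimize import minimize

# A toy smooth objective standing in for one unconstrained subproblem of Algorithm 2.1
# (the actual subproblems minimize f_rho(x, beta, r), which is not reproduced here).
def f(x):
    return x[0]**2 + 2.0 * x[1]**2 + 0.1 * (x[0] + x[1])**4

def grad(x):
    c = 0.4 * (x[0] + x[1])**3
    return np.array([2.0 * x[0] + c, 4.0 * x[1] + c])

def hess(x):
    c = 1.2 * (x[0] + x[1])**2
    return np.array([[2.0 + c, c], [c, 4.0 + c]])

# 'trust-exact' solves each trust-region subproblem (nearly) exactly,
# analogous to how the trust function is used inside a trust-region loop.
res = minimize(f, x0=np.array([1.0, -1.0]), method="trust-exact", jac=grad, hess=hess)
print(res.x, res.fun)
```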
Acknowledgments
This work was supported by the National Natural Science Foundation under Grants 10971118, 10901096, and 11271226; by the Scientific Research Fund for the Excellent Middle-Aged and Youth Scientists of Shandong Province under Grant BS2012SF027; and by the Natural Science Foundation of Shandong Province under Grant ZR2009AL019.