Optimality Conditions for Nonsmooth Generalized Semi-Infinite Programs
Abstract
We consider a class of nonsmooth generalized semi-infinite programming problems. By applying results from parametric optimization to the lower level problems, we obtain estimates for their optimal value functions and thereby derive necessary optimality conditions for generalized semi-infinite programs. We also establish some new estimates for the value functions of the lower level problems in terms of generalized differentiation and use them to obtain further necessary optimality conditions.
1. Introduction
GSIP possesses a complex and distinctive structure, including possible nonclosedness of the feasible set, nonconvexity, nonsmoothness, and a bilevel structure, and is therefore a difficult problem to solve; see, for example, [2, 5, 6]. We also refer to [7–10] for some recent studies on the structure of GSIP.
In this paper we concentrate on optimality conditions for nonsmooth GSIP whose defining functions are Lipschitz continuous; related works include [14, 15]. Our approach is based on the lower level optimal value function reformulation, from which we derive necessary optimality conditions via generalized differentiation. One of the key steps is to estimate the generalized gradients of the lower level optimal value function, which involves parametric optimization. We consider two cases, with different approaches corresponding to the two reformulations of GSIP mentioned above. First, we develop optimality conditions via the min-max reformulation when the lower level optimal value function is Lipschitz. Second, we develop optimality conditions via the bilevel reformulation under a partial calmness assumption.
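For orientation, the GSIP referred to as problem (1) below is not reproduced in this excerpt; a sketch of the standard form, using the data functions f, g, and v that appear in Section 3, is:

```latex
% Sketch of the standard GSIP form (problem (1) is not reproduced here;
% notation matches the data functions f, g, v used in Section 3):
\min_{x}\ f(x)
\quad\text{s.t.}\quad g(x,y)\le 0 \ \ \text{for all } y\in Y(x),
\qquad Y(x)=\{\,y \mid v(x,y)\le 0\,\}.
% Lower level problem and its optimal value function:
%   Q(x):\ \max_{y\in Y(x)} g(x,y), \qquad \phi(x)=\sup_{y\in Y(x)} g(x,y),
% so the feasible set of GSIP can be written as \{x \mid \phi(x)\le 0\}
% (on the region where Y(x)\neq\emptyset).
```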
2. Preliminaries
Proposition 1 (see [16]). Let f be proper and lsc around x̄. Then
Proposition 2 (see [19]). If A is convex, or if is pointed, then the multifunction is closed at ; that is, for all , , one has .
Proposition 3. The Clarke normal cone has the robustness property
Proof. It suffices to prove that limsup_{y→x} co NA(y) ⊂ cl co NA(x). Let v ∈ limsup_{y→x} co NA(y). Then there are yk ∈ A and vik ∈ NA(yk), i = 1, …, n + 1, such that
The following definitions are required for further development.
Definition 4 (see [17, Definition 1.63]). Let S : X ⇉ Y be a set-valued mapping with x̄ ∈ dom S, the domain of S.
- (i)
Given ȳ ∈ S(x̄), we say that the mapping S is inner semicontinuous at (x̄, ȳ) if for every sequence xk → x̄ there is a sequence yk ∈ S(xk) converging to ȳ as k → ∞.
- (ii)
S is inner semicompact at x̄ if for every sequence xk → x̄ there is a sequence yk ∈ S(xk) that contains a subsequence convergent as k → ∞.
- (iii)
S is μ-inner semicontinuous at (x̄, ȳ) (resp., μ-inner semicompact at x̄) if, in the above two cases, the convergence xk → x̄ is replaced by xk → x̄ with μ(xk) → μ(x̄).
Here the concept of μ-inner semicontinuity/semicompactness is important for our considerations. Typically, the value function ϕ of the lower level problem Q(x) of GSIP fails to be continuous and may even take the value −∞.
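To make this concrete, here is a minimal numerical sketch with a made-up toy value function (ours, not the lower level problem of (1)) that is discontinuous and unbounded below:

```python
import numpy as np

# Toy value function phi(x) = inf { x*y : y >= 0 }  (a made-up example).
# Analytically,
#   phi(x) = 0      for x >= 0,
#   phi(x) = -inf   for x <  0,
# so phi is discontinuous at x = 0 and takes the value -infinity there.

def phi_approx(x, y_max=1e6, n=10_001):
    """Grid approximation of inf over 0 <= y <= y_max of x*y."""
    ys = np.linspace(0.0, y_max, n)
    return float((x * ys).min())

for x in (1.0, 0.0, -1.0):
    print(f"phi({x:+.1f}) ~ {phi_approx(x):.1f}")
# On this grid: phi(+1.0) ~ 0.0 and phi(-1.0) ~ -1000000.0;
# for x < 0 the infimum escapes to -infinity as y_max grows.
```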
Theorem 5 (subdifferentiation of maximum functions [17, Theorem 3.46]). Consider the maximum function of the form
Note that qualification (22) always holds if all related functions are locally Lipschitz.
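For locally Lipschitz functions, the conclusion of Theorem 5 can be recorded as follows (a sketch of the formula in [17, Theorem 3.46] as we recall it, since the displayed equations are not reproduced in this excerpt):

```latex
% Maximum rule (sketch): for f(x)=\max_{i=1,\dots,m} f_i(x) with each f_i
% locally Lipschitz around \bar{x},
\partial f(\bar{x}) \;\subset\;
\bigcup\Big\{\, \partial\Big(\textstyle\sum_{i=1}^{m}\lambda_i f_i\Big)(\bar{x})
\;:\; \lambda_i\ge 0,\ \textstyle\sum_{i=1}^{m}\lambda_i=1,\
\lambda_i\big(f_i(\bar{x})-f(\bar{x})\big)=0 \,\Big\}.
```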
The following two results are about continuity properties and estimates of subdifferentials of marginal functions which are crucial to our analysis for GSIP problems.
Proposition 6 (limiting subgradients of marginal functions [20]). Consider the parametric optimization problem
- (i)
Assume that M is μ-inner semicontinuous at (x̄, ȳ) ∈ gph M (the graph of M), that φ and all φi are Lipschitz continuous around (x̄, ȳ), and that the following qualification condition is satisfied:
()
One has the inclusions
- (ii)
Assume that M is μ-inner semicompact at x̄, that φ and all φi are Lipschitz continuous around (x̄, ȳ) for all ȳ ∈ M(x̄), and that qualification (25) holds for all ȳ ∈ M(x̄). Then
()
Proposition 7 (Lipschitz continuity of marginal functions [21, Theorem 5.2]). Continue to consider the parametric problem (24) in Proposition 6. Then the following assertions hold.
- (i)
Assume that M is μ-inner semicontinuous at (x̄, ȳ) and φ is locally Lipschitz around this point. Then μ is Lipschitz around x̄ provided that it is lsc around x̄ and G is Lipschitz-like around (x̄, ȳ).
- (ii)
Assume that M is μ-inner semicompact at x̄ and φ is locally Lipschitz around (x̄, ȳ) for all ȳ ∈ M(x̄). Then μ is Lipschitz around x̄ provided that it is lsc around x̄ and G is Lipschitz-like around (x̄, ȳ) for all ȳ ∈ M(x̄).
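As a simple illustration of Proposition 7 (our own toy example, not taken from [21]):

```latex
% Toy example: X=Y=\mathbb{R}, \varphi(x,y)=|x-y|, G(x)\equiv[0,1]
% (a constant, hence Lipschitz-like, mapping).  The marginal function is
\mu(x)=\min_{y\in[0,1]}|x-y|=\operatorname{dist}(x,[0,1])=
\begin{cases} -x, & x<0,\\ 0, & 0\le x\le 1,\\ x-1, & x>1,\end{cases}
% which is globally Lipschitz with modulus 1; the solution map
% M(x)=\operatorname{argmin}_{y\in[0,1]}|x-y| is nonempty- and
% compact-valued, so the assumptions of Proposition 7(ii) are met.
```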
3. Main Results
Theorem 8 (optimality for GSIP with Lipschitz lower level optimal value function). Consider the GSIP problem (1), and let x̄ be a locally optimal solution. Assume that all functions f, g, and v are Lipschitz continuous, that Y0 is ϕ-inner semicompact at x̄, and that Y is Lipschitz-like at (x̄, ȳ) for all ȳ ∈ Y0(x̄). Then there exist multipliers, i = 1, …, l, j = 1, …, k, such that the following conditions hold:
Proof. Under the regularity and Lipschitz continuity assumptions, the following calculus rule for basic subgradients holds (see [16, Corollary 10.11]):
Corollary 9. In addition to the assumptions of Theorem 8, assume that f, g, and v are continuously differentiable. Then the optimality condition at the optimal point x̄ is that there exist multipliers, i = 1, …, l, j = 1, …, k, such that the following relations hold:
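Since Corollary 9 involves only gradients, its conclusion can be cross-checked numerically on simple instances. The following sketch uses a made-up toy instance with a fixed index set (a standard SIP, i.e., the special case Y(x) ≡ Y of GSIP); the discretization approach and the instance are ours, not part of the paper:

```python
import numpy as np

# Toy instance (ours): min f(x) = (x - 2)^2
#                      s.t. g(x, y) = x*y - 1 <= 0 for all y in [0, 1].
# For x >= 0 the worst index is y = 1, so the feasible set is {x : x <= 1}
# and the minimizer is xbar = 1 with the single active index ybar = 1.

ys = np.linspace(0.0, 1.0, 101)        # discretized index set
xs = np.linspace(-2.0, 3.0, 5001)      # candidate grid for x
worst = np.max(np.outer(xs, ys) - 1.0, axis=1)   # max over y of g(x, y)
feas = xs[worst <= 1e-9]               # discretized feasible set
xbar = feas[np.argmin((feas - 2.0) ** 2)]
print(round(float(xbar), 3))           # -> 1.0

# Stationarity in the spirit of Corollary 9: with active index ybar = 1,
#   f'(xbar) + lam * g_x(xbar, ybar) = 0  holds for lam = 2 >= 0.
ybar = 1.0
lam = -2.0 * (xbar - 2.0) / ybar
print(round(float(lam), 3))            # -> 2.0
```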
Next we consider the case where the lower level value function ϕ may fail to be Lipschitz; we give estimates for the subdifferentials of ϕ and then derive the corresponding optimality conditions for GSIP. This, however, requires the use of Clarke subdifferentials.
Proposition 10. Consider the same parametric optimization problem as in (24):
- (i)
φ is lower semicontinuous at ;
- (ii)
M is μ-inner semicompact at x̄ and M(x̄) is nonempty and compact;
- (iii)
if , , , i ≤ n + 1, and ∑i ui + vi = 0, then ui = vi = 0, wi = 0;
- (iv)
the cones and for all are pointed.
Proof (sketch; we use the representation ∂̄ = cl co{∂ + ∂∞} of the Clarke subdifferential). The proof is divided into two parts. First, we show that the set on the right-hand side of the required inclusion, denoted by Λ, is closed. Second, we justify that ∂ + ∂∞ ⊂ Λ.
Let , , , and
By assumption (iii), we have (u1i, v1i) = (0, 0) and (u2i, v2i) = (0, 0). This contradicts the fact that (u1i, v1i, u2i, v2i) has norm 1, and thus {zk} is bounded. Hence the sequences involved have convergent subsequences. Thus u = ∑ λiui with ui = x1i + x2i; that is, Λ is closed.
Next, we justify that ∂ + ∂∞ ⊂ Λ. Assume that , . Under the semicompactness assumption, invoking [17, Theorem 1.108], one gets that
Employing the sum rule from [17, Theorem 3.36] to the two above leads to
Theorem 11 (convexified normal cone to an inequality system [22]). Consider G defined by the inequality system G(x) = {y∣ϕ(x, y) ≤ 0}. Let ϕ be Lipschitz, and let the following qualification condition (nonsmooth MFCQ) hold at the reference point:
Theorem 12 (necessary optimality conditions for GSIP). Let x̄ be an optimal solution of GSIP with , and . Let the data functions f, g, and v be Lipschitz, and let the partial calmness condition (55) hold at x̄ for some . Assume that the following conditions hold:
- (i)
Qualification (51) holds for Ω := {(x, y)∣v(x, y) ≤ 0, g(x, y) ≤ 0} at (x̄, ȳ).
- (ii)
Y0 is ϕ-inner semicompact at x̄, and Y0(x̄) is nonempty and compact.
- (iii)
If , , , i ≤ n + 1, and ∑i ui + vi = 0, then ui = vi = 0 and wi = 0.
- (iv)
The cones and for all are pointed.
Proof. Let Ω = {(x, y)∣v(x, y) ≤ 0, g(x, y) ≤ 0}. Under our assumptions, GSIP can be relaxed to problem (6), and (x̄, ȳ) also solves (6) for all ȳ ∈ Y0(x̄). By the partial calmness of (6), (x̄, ȳ) also solves (56). Under the qualification assumption, the necessary optimality condition for problem (56) reads (see, e.g., [24, Theorem 5.1] or [22, Theorem 6.2])
If v ∈ ∂+(−ϕ)(x), then . Indeed, by definition,
Applying Proposition 10 to −ϕ, there are λi ≥ 0, , i = 1, …, r such that and
So, noting that ,
This completes the proof.
Conflict of Interests
The authors declare that there is no conflict of interests.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (11001289) and the Key Project of the Chinese Ministry of Education (211151).