Volume 2013, Issue 1 724190
Research Article
Open Access

Optimal Control Problems for Nonlinear Variational Evolution Inequalities

Eun-Young Ju and Jin-Mun Jeong (Corresponding Author)

Department of Applied Mathematics, Pukyong National University, Busan 608-737, Republic of Korea

First published: 12 February 2013
Academic Editor: Ryan Loxton

Abstract

We deal with optimal control problems governed by semilinear parabolic type equations, in particular those described by variational inequalities. We also characterize the optimal controls by giving necessary conditions for optimality, which requires proving the Gâteaux differentiability of the solution mapping with respect to the control variables.

1. Introduction

In this paper, we deal with optimal control problems governed by the following variational inequality in a Hilbert space H:
()
Here, A is a continuous linear operator from V into V* which is assumed to satisfy Gårding's inequality, where V is a dense subspace of H. Let ϕ : V → (−∞, +∞] be a lower semicontinuous, proper convex function. Let 𝒰 be a Hilbert space of control variables, and let B be a bounded linear operator from 𝒰 into L2(0, T; H). Let 𝒰ad be a closed convex subset of 𝒰, which is called the admissible set. Let J = J(v) be a given quadratic cost function (see (61) or (103)). We then seek an element u ∈ 𝒰ad which attains the minimum of J(v) over 𝒰ad subject to (1).
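For orientation only, a typical way of writing such a parabolic variational inequality (using the sesquilinear form a(·, ·) associated with A introduced in Section 2; the authors' exact display is omitted above, and sign conventions may differ) is

(x′(t), x(t) − z) + a(x(t), x(t) − z) + ϕ(x(t)) − ϕ(z) ≤ (f(t, x(t)) + Bu(t), x(t) − z),  a.e. 0 < t ≤ T, for all z ∈ V,
x(0) = x0.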

Recently, initial and boundary value problems for permanent magnet technologies have been introduced via variational inequalities in [1, 2] and via nonlinear variational inequalities of semilinear parabolic type in [3, 4]. Few papers, however, treat variational inequalities with nonlinear perturbations. First, in Section 2, we deal with the existence of solutions and a variation of constants formula for the nonlinear functional differential equation (1) governed by the variational inequality in Hilbert spaces.

Based on the regularity results for solutions of (1), we establish the optimal control problem for the cost functions in Section 3. For optimal control problems of systems governed by variational inequalities, see [1, 5]. We refer to [6, 7] for applications of nonlinear variational inequalities. Necessary conditions for state-constrained optimal control problems governed by semilinear elliptic problems have been obtained by Bonnans and Tiba [8] using methods of convex analysis (see also [9]).

Let xu stand for the solution of (1) associated with the control u ∈ 𝒰. When the nonlinear mapping f is Lipschitz continuous from [0, ∞) × V into H, we obtain the regularity of solutions of (1) and the norm estimate of a solution of the above nonlinear equation in the desired solution space. Consequently, in view of the monotonicity of ∂ϕ, we show that the mapping u ↦ xu is continuous, in order to establish the necessary conditions of optimality of optimal controls for various observation cases.

In Section 4, we characterize the optimal controls by giving necessary conditions for optimality. For this, it is necessary to write down the necessary optimality condition following the theory of Lions [9]. The most important objective of such a treatment is to derive necessary optimality conditions that give complete information on the optimal control.

Since the optimal control problems governed by nonlinear equations are nonsmooth and nonconvex, the standard methods of deriving necessary conditions of optimality are inapplicable here. We therefore approximate the given problem by a family of smooth optimization problems and then pass to the limit in the corresponding optimal control problems. An attractive feature of this approach is that it allows the treatment of optimal control problems governed by a large class of nonlinear systems with general cost criteria.

2. Regularity for Solutions

If H is identified with its dual space, we may write V ⊂ H ⊂ V* densely, and the corresponding injections are continuous. The norms on V, H, and V* will be denoted by ||·||, |·|, and ||·||*, respectively. The duality pairing between an element v1 of V* and an element v2 of V is denoted by (v1, v2), which is the ordinary inner product in H if v1, v2 ∈ H.

For l ∈ V* we denote by (l, v) the value l(v) of l at v ∈ V. The norm of l as an element of V* is given by
()
Therefore, we assume that V has a stronger topology than H and, for brevity, we may regard that
()
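Because the two displays above are omitted, we record for orientation the usual dual norm and the customary normalization of the three norms (a standard convention; the authors' exact displays may differ):

||l||* = sup{|(l, v)| : v ∈ V, ||v|| ≤ 1},
||u||* ≤ |u| ≤ ||u||  for u ∈ V.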
Let a(·, ·) be a bounded sesquilinear form defined in V × V and satisfying Gårding's inequality
()
where ω1 > 0 and ω2 is a real number. Let A be the operator associated with this sesquilinear form:
()
Then −A is a bounded linear operator from V to V* by the Lax-Milgram theorem. The realization of A in H, which is the restriction of A to
()
is also denoted by A. From the following inequalities
()
where
()
is the graph norm of D(A), it follows that there exists a constant C0 > 0 such that
()
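Since the displays in this paragraph are omitted, we record for orientation the usual forms of the quantities involved: Gårding's inequality with the constants ω1 > 0 and ω2 above, the graph norm of D(A), and the interpolation-type estimate that typically follows from them (C0 being the constant named in the text):

Re a(u, u) ≥ ω1||u||2 − ω2|u|2,  u ∈ V,
||u||D(A) = (|Au|2 + |u|2)1/2,  u ∈ D(A),
||u|| ≤ C0 ||u||D(A)1/2 |u|1/2,  u ∈ D(A).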
Thus we have the following sequence
()
where each space is dense in the next one with continuous injection.

Lemma 1. With the notations (9) and (10), we have

()
where (V, V*)1/2,2 denotes the real interpolation space between V and V* (Section 1.3.3 of [10]).

It is also well known that A generates an analytic semigroup S(t) in both H and V*. For the sake of simplicity we assume that ω2 = 0 and hence the closed half plane {λ : Reλ ≥ 0} is contained in the resolvent set of A.

If X is a Banach space, L2(0, T; X) is the collection of all strongly measurable square integrable functions from (0, T) into X, and W1,2(0, T; X) is the set of all absolutely continuous functions on [0, T] such that their derivative belongs to L2(0, T; X). C([0, T]; X) will denote the set of all continuous functions from [0, T] into X with the supremum norm. If X and Y are two Banach spaces, ℒ(X, Y) is the collection of all bounded linear operators from X into Y, and ℒ(X, X) is simply written as ℒ(X). Here, we note that by using interpolation theory we have
()
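The interpolation-theoretic fact invoked here is, in its usual form (stated under the assumption that this is indeed the omitted display), the embedding

L2(0, T; V) ∩ W1,2(0, T; V*) ⊂ C([0, T]; H).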
First of all, consider the following linear system:
()

By virtue of Theorem 3.3 of [11] (or Theorem 3.1 of [12, 13]), we have the following result on the corresponding linear equation of (13).

Lemma 2. Suppose that the assumptions for the principal operator A stated above are satisfied. Then the following properties hold.

  • (1) For x0 ∈ V = (D(A), H)1/2,2 (see Lemma 1) and k ∈ L2(0, T; H), T > 0, there exists a unique solution x of (13) belonging to

    ()

    and satisfying

    ()

    where C1 is a constant depending on T.

  • (2) Let x0 ∈ H and k ∈ L2(0, T; V*), T > 0. Then there exists a unique solution x of (13) belonging to

    ()

    and satisfying

    ()

    where C1 is a constant depending on T.

Let f be a nonlinear single valued mapping from [0, ∞) × V into H.
  • (F) We assume that

    ()

    for every x1, x2 ∈ V.

Let Y be another Hilbert space of control variables and take 𝒰 = L2(0, T; Y) as stated in the Introduction. Choose a bounded subset U of Y and call it a control set. Let us define the admissible set 𝒰ad as
()
Noting that the subdifferential operator ∂ϕ is defined by
()
the problem (1) is represented by the following nonlinear functional differential problem on H:
()
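For the reader's convenience (the two displays above are omitted), the standard definition of the subdifferential and the usual form of the resulting evolution inclusion are recalled below; the authors' exact formulation may differ in minor notational details:

∂ϕ(x) = {x* ∈ H : ϕ(x) ≤ ϕ(y) + (x*, x − y) for all y ∈ H},

x′(t) + Ax(t) + ∂ϕ(x(t)) ∋ f(t, x(t)) + Bu(t),  0 < t ≤ T,
x(0) = x0.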

Referring to Theorem  3.1 of [3], we establish the following results on the solvability of (1).

Proposition 3. (1) Let the assumption (F) be satisfied. Assume that u ∈ L2(0, T; Y), B ∈ ℒ(Y, V*), and x0 ∈ cl D(ϕ), where cl D(ϕ) is the closure in H of the set D(ϕ) = {u ∈ V : ϕ(u) < ∞}. Then, (1) has a unique solution

()
which satisfies
()
where (∂ϕ)0 : H → H denotes the minimal section of ∂ϕ (i.e., (∂ϕ)0(x) is the element of minimum norm in ∂ϕ(x)), and there exists a constant C2 depending on T such that
()
where C2 is some positive constant and L2 ∩ C denotes L2(0, T; V) ∩ C([0, T]; H).

Furthermore, if B ∈ ℒ(Y, H), then the solution x belongs to W1,2(0, T; H) and satisfies

()

(2) We assume the following.

  • (A) A is symmetric and there exists h ∈ H such that for every ϵ > 0 and any y ∈ D(ϕ)

    ()

    where Jϵ = (I + ϵA)−1.

Then for u ∈ L2(0, T; Y), B ∈ ℒ(Y, H), and x0 ∈ D(ϕ), (1) has a unique solution

()
which satisfies
()

Remark 4. In terms of Lemma 1, the following inclusion

()
is well known as seen in (9) and is an easy consequence of the definition of real interpolation spaces by the trace method (see [4, 13]).

The following lemma is from Brézis [14, Lemma A.5].

Lemma 5. Let m ∈ L1(0, T; ℝ) satisfy m(t) ≥ 0 for all t ∈ (0, T), and let a ≥ 0 be a constant. Let b be a continuous function on [0, T] ⊂ ℝ satisfying the following inequality:

()
Then,
()
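For reference, the standard form of this lemma (Brézis [14, Lemma A.5]; the displays above are omitted, so the exact normalization may differ) states that if

(1/2)b(t)2 ≤ (1/2)a2 + ∫0t m(s)b(s)ds  for all t ∈ [0, T],
then
|b(t)| ≤ a + ∫0t m(s)ds  for all t ∈ [0, T].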

For each (x0, u) ∈ H × L2(0, T; Y), we can define the continuous solution mapping (x0, u) ↦ x. Now, we can state the following theorem.

Theorem 6. (1) Let the assumption (F) be satisfied, x0 ∈ H, and B ∈ ℒ(Y, V*). Then the solution x of (1) belongs to L2(0, T; V) ∩ C([0, T]; H) and the mapping

()
is Lipschitz continuous; that is, if (x0i, ui) ∈ H × L2(0, T; Y) and xi is the solution of (1) with (x0i, ui) in place of (x0, u) for i = 1, 2, then
()
where C is a constant.

(2) Let the assumptions (A) and (F) be satisfied, and let B ∈ ℒ(Y, H) and x0 ∈ D(ϕ). Then x ∈ L2(0, T; D(A)) ∩ W1,2(0, T; H), and the mapping

()
is continuous.

Proof. (1) Due to Proposition 3, we can infer that (1) possesses a unique solution x ∈ L2(0, T; V) ∩ C([0, T]; H) with the data condition (x0, u) ∈ H × L2(0, T; Y). Now, we will prove the inequality (33). For that purpose, we denote x1 − x2 by X. Then

()
Multiplying the above equation by X(t), we have
()
Put
()
By integrating the above inequality over [0, t], we have
()
Note that
()
integrating the above inequality over (0, t), we have
()
Thus, we get
()
Combining this with (38) it holds that
()
By Lemma 5, the following inequality
()
implies that
()
From (42) and (44) it follows that
()
Putting
()
The third term of the right hand side of (45) is estimated as
()
The second term of the right hand side of (45) is estimated as
()
Thus, from (47) and (48), we apply Gronwall's inequality to (15), and we arrive at
()
where C > 0 is a constant. Suppose that (x0n, un) → (x0, u) in H × L2(0, T; Y), and let xn and x be the solutions of (1) with (x0n, un) and (x0, u), respectively. Then, by virtue of (49), we see that xn → x in L2(0, T; V) ∩ C([0, T]; H).
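The form of Gronwall's inequality used at this step is the standard one, recalled here because the intermediate displays are omitted: if ψ is nonnegative and satisfies ψ(t) ≤ c + ∫0t k(s)ψ(s)ds with k ≥ 0 integrable, then

ψ(t) ≤ c exp(∫0t k(s)ds),  t ∈ [0, T].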

(2) It is easy to show that if x0 ∈ V and B ∈ ℒ(Y, H), then x belongs to L2(0, T; D(A)) ∩ W1,2(0, T; H). Let (x0i, ui) ∈ V × L2(0, T; H), and let xi be the solution of (1) with (x0i, ui) in place of (x0, u) for i = 1, 2. Then in view of Lemma 2 and assumption (F), we have

()
Since
()
we get, noting that |·|≤||·||,
()
Hence arguing as in (9) we get
()
Combining (50) and (53) we obtain
()
Suppose that
()
and let xn and x be the solutions of (1) with (x0n, un) and (x0, u), respectively. Let 0 < T1 ≤ T be such that
()
Then by virtue of (54) with T replaced by T1 we see that
()
This implies that (xn(T1), xn) → (x(T1), x) in V × L2(0, T1; D(A)). Hence the same argument shows that xn → x in
()
Repeating this process, we conclude that xn → x in L2(0, T; D(A)) ∩ W1,2(0, T; H).

3. Optimal Control Problems

In this section we study the optimal control problems for the quadratic cost function in the framework of Lions [9]. In what follows we assume that the embedding D(A) ⊂ V ⊂ H is compact.

Let Y be another Hilbert space of control variables, and let B be a bounded linear operator from Y into H; that is,
()
which is called a controller. By virtue of Theorem 6, we can define uniquely the solution map u ↦ x(u) from L2(0, T; Y) into L2(0, T; V) ∩ C([0, T]; H). We will call the solution x(u) the state of the control system (1).
Let M be a Hilbert space of observation variables. The observation of the state is assumed to be given by
()
where G is an operator called the observer. The quadratic cost function associated with the control system (1) is given by
()
where zd ∈ M is a desired value of x(v) and R ∈ ℒ(L2(0, T; Y)) is symmetric and positive; that is,
()
for some d > 0. Let 𝒰ad be a closed convex subset of L2(0, T; Y), which is called the admissible set. An element u ∈ 𝒰ad which attains the minimum of J(v) over 𝒰ad is called an optimal control for the cost function (61).
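In Lions' framework such a quadratic cost typically has the following structure, given here for orientation under the assumption that (61) is of this form with the observer G, the desired value zd, and the operator R introduced above:

J(v) = ||Gx(v) − zd||M2 + (Rv, v)L2(0,T;Y),  v ∈ 𝒰ad.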

Remark 7. The solution space 𝒲 of strong solutions of (1) is defined by

()
endowed with the norm
()

Let Ω be an open, bounded, and connected set of ℝn with smooth boundary. We consider the observation G of distributive and terminal values (see [15, 16]).

(1) We take M = L2((0, T) × Ω) × L2(Ω) and G ∈ ℒ(𝒲, M) and observe

()

(2) We take M = L2((0, T) × Ω) and G ∈ ℒ(𝒲, M) and observe

()
The above observations are meaningful in view of the regularity of (1) by Proposition 3.

Theorem 8. (1) Let the assumption (F) be satisfied. Assume that B ∈ ℒ(Y, V*) and x0 ∈ cl D(ϕ). Let x(u) be the solution of (1) corresponding to u. Then the mapping u ↦ x(u) is compact from L2(0, T; Y) to L2(0, T; H).

(2) Let the assumptions (A) and (F) be satisfied. If B ∈ ℒ(Y, H) and x0 ∈ D(ϕ), then the mapping u ↦ x(u) is compact from L2(0, T; Y) to L2(0, T; V).

Proof. (1) We define the solution mapping S from L2(0, T; Y) to L2(0, T; H) by

()
By virtue of Lemma 2, we have
()
Hence if u is bounded in L2(0, T; Y), then so is x(u) in L2(0, T; V) ∩ W1,2(0, T; V*). Since V is compactly embedded in H by assumption, the embedding L2(0, T; V) ∩ W1,2(0, T; V*) ⊂ L2(0, T; H) is also compact in view of Theorem 2 of Aubin [17]. Hence, the mapping u ↦ Su = x(u) is compact from L2(0, T; Y) to L2(0, T; H).

(2) Since D(A) is compactly embedded in V by assumption, the embedding

()
is compact. Hence, the proof of (2) is complete.

As indicated in the Introduction, we need to show the existence of an optimal control and to give its characterization. The existence of an optimal control u for the cost function (61) can be stated by the following theorem.

Theorem 9. Let the assumptions (A) and (F) be satisfied and let x0 ∈ D(ϕ). Then there exists at least one optimal control u for the control problem (1) associated with the cost function (61); that is, there exists u ∈ 𝒰ad such that

()

Proof. Since 𝒰ad is nonempty, there is a minimizing sequence {un} ⊂ 𝒰ad for the problem (70) satisfying

()
Obviously, {J(un)} is bounded. Hence by (62) there is a positive constant K0 such that
()
This shows that {un} is bounded in 𝒰ad. So we can extract a subsequence (denoted again by {un}) of {un} and find u ∈ 𝒰ad such that w-lim un = u in 𝒰. Let xn = x(un) be the solution of the following equation corresponding to un:
()
By (15) and (17) we know that {xn} and {dxn/dt} are bounded in L2(0, T; V) and L2(0, T; V*), respectively. Therefore, by Rellich's extraction theorem, we can find a subsequence of {xn}, say again {xn}, and find x such that
()
However, by Theorem 8, we know that
()
From (F) it follows that
()
By the boundedness of A we have
()
Since {ϕ(xn)} is uniformly bounded, it follows from (73)–(77) that
()
and, noting that ∂ϕ is demiclosed, we have that
()
Thus we have proved that x(t) satisfies a.e. on (0, T) the following equation:
()
Since G is continuous and ||·||M is lower semicontinuous, it holds that
()
It is also clear from the weak convergence un ⇀ u in L2(0, T; Y) that
()
Thus,
()
But since J(u) ≥ m by definition, we conclude that u ∈ 𝒰ad is a desired optimal control.

4. Necessary Conditions for Optimality

In this section we will characterize the optimal controls by giving necessary conditions for optimality. For this it is necessary to write down the necessary optimality condition
()
and to analyze (84) in view of the proper adjoint state system, where DJ(u) denotes the Gâteaux derivative of J(v) at v = u. Therefore, we have to prove that the solution mapping v ↦ x(v) is Gâteaux differentiable at v = u. Here we note that from Theorem 6 it follows immediately that
()
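For orientation, the necessary optimality condition (84) referred to above is, in Lions' framework, usually written in the variational form (the exact display is omitted)

DJ(u)(v − u) ≥ 0  for all v ∈ 𝒰ad.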
The solution map v ↦ x(v) of L2(0, T; Y) into L2(0, T; V) ∩ C([0, T]; H) is said to be Gâteaux differentiable at v = u if for any w ∈ L2(0, T; Y) there exists Dx(u) ∈ ℒ(L2(0, T; Y), L2(0, T; V) ∩ C([0, T]; H)) such that
()
The operator Dx(u) denotes the Gâteaux derivative of x(v) at v = u, and the function Dx(u)w ∈ L2(0, T; V) ∩ C([0, T]; H) is called the Gâteaux derivative in the direction w ∈ L2(0, T; Y); it plays an important part in nonlinear optimal control problems.
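The defining limit behind this notion (the display above is omitted) is the usual one:

limλ→0 ||(x(u + λw) − x(u))/λ − Dx(u)w||L2(0,T;V)∩C([0,T];H) = 0.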

First, as is seen in Corollary 2.2 of Chapter II of [18], let us introduce the regularization of ϕ as follows.

Lemma 10. For every ϵ > 0, define

()
where Jϵ = (I + ϵ∂ϕ)−1. Then the function ϕϵ is Fréchet differentiable on H and its Fréchet differential ∂ϕϵ is Lipschitz continuous on H with Lipschitz constant ϵ−1. In addition,
()
where (∂ϕ)0(x) is the element of minimum norm in the set ∂ϕ(x).
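For the reader's convenience, the standard Moreau–Yosida formulas behind this lemma (see, e.g., Corollary 2.2 of Chapter II of [18]; the displays above are omitted, so the authors' exact statements may differ slightly) are

ϕϵ(x) = inf{ |x − y|2/(2ϵ) + ϕ(y) : y ∈ H },
∂ϕϵ(x) = (x − Jϵx)/ϵ,  |∂ϕϵ(x)| ≤ |(∂ϕ)0(x)|,  limϵ→0 ϕϵ(x) = ϕ(x).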

Now, we introduce the smoothing system corresponding to (1) as follows.
()
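A natural form for this smoothing (approximating) system, written here only as a sketch with the Yosida regularization ∂ϕϵ in place of ∂ϕ (the authors' exact display is omitted), is

xϵ′(t) + Axϵ(t) + ∂ϕϵ(xϵ(t)) = f(t, xϵ(t)) + Bu(t),  0 < t ≤ T,
xϵ(0) = x0.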

Lemma 11. Let the assumption (F) be satisfied. Then the solution map v ↦ x(v) of L2(0, T; Y) into L2(0, T; V) ∩ C([0, T]; H) is Lipschitz continuous.

Moreover, let us assume the condition (A) in Proposition 3. Then the map v ↦ ∂ϕϵ(x(v)) of L2(0, T; Y) into L2(0, T; H) ∩ C([0, T]; V*) is also Lipschitz continuous.

Proof. We set w = v − u. From Theorem 6, it follows immediately that

()
so the solution map v ↦ x(v) of L2(0, T; Y) into L2(0, T; V) ∩ C([0, T]; H) is Lipschitz continuous. Moreover, since
()
by the assumption (A) and (2) of Theorem 6, it holds
()
and, by the relation (12),
()
So we know that the map v ↦ ∂ϕϵ(x(v)) of L2(0, T; Y) into L2(0, T; H) ∩ C([0, T]; V*) is also Lipschitz continuous.

Let the solution space 𝒲1 of strong solutions of (1) be defined by
()
as stated in Remark 7.
In order to obtain the optimality conditions, we require the following assumptions.
  • (F1)

    The Gâteaux derivative ∂2f(t, x) in the second argument for (t, x) ∈ (0, T) × V is measurable in t ∈ (0, T) for x ∈ V and continuous in x ∈ V for a.e. t ∈ (0, T), and there exist functions θ1, θ2 ∈ L2(ℝ+; ℝ) such that

    ()

  • (F2)

    The map x ↦ ∂ϕϵ(x) is Gâteaux differentiable, and D∂ϕϵ(x)Dx(u) is the Gâteaux derivative of the composite map u ↦ ∂ϕϵ(x(u)) at u ∈ L2(0, T; U), and there exist functions θ3, θ4 ∈ L2(ℝ+; ℝ) such that

    ()

Theorem 12. Let the assumptions (A), (F1), and (F2) be satisfied. Let u ∈ 𝒰ad be an optimal control for the cost function J in (61). Then the following inequality holds:

()
where y = Dx(u)(v − u) ∈ C([0, T]; V*) is the unique solution of the following equation:
()

Proof. We set w = v − u. Let λ ∈ (−1, 1), λ ≠ 0. We set

()
From (89), we have
()
Then as an immediate consequence of Lemma 11 one obtains
()
thus, in the sense of (F2), we have that y = Dx(u)(v − u) satisfies (98) and the cost J(v) is Gâteaux differentiable at u in the direction w = v − u. The optimality condition (84) is rewritten as
()

With every control u ∈ L2(0, T; Y), we consider the following distributional cost function expressed by
()
where the operator C is bounded from H to another Hilbert space X and zd ∈ L2(0, T; X). Finally, we are given that R is self-adjoint and positive definite:
()
Let xu(t) stand for the solution of (1) associated with the control u ∈ L2(0, T; Y). Let 𝒰ad be a closed convex subset of L2(0, T; Y).
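A typical cost of this kind is shown below for orientation, under the assumption that (103) has this standard structure with the operators C and R introduced above:

J(v) = ∫0T ||Cxv(t) − zd(t)||X2 dt + ∫0T (Rv(t), v(t))Y dt,  v ∈ 𝒰ad.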

Theorem 13. Let the assumptions in Theorem 12 be satisfied and let the operators C and R satisfy the conditions mentioned above. Then there exists an element u ∈ 𝒰ad such that

()
Furthermore, the following inequality holds:
()
where ΛY is the canonical isomorphism of Y onto Y* and pu satisfies the following equation:
()

Proof. Let x(t) = x0(t) be a solution of (1) associated with the control 0. Then it holds that

()
where
()
The form π(u, v) is a continuous form on L2(0, T; Y) × L2(0, T; Y) and, from the assumption that the operator R is positive definite, we have
()
If u is an optimal control, then, as in (97), (84) is equivalent to
()
Now we formulate the adjoint system to describe the optimality condition:
()

Taking into account the regularity result of Proposition 3 and the observation conditions, we can assert that (112) admits a unique weak solution pu after reversing the direction of time, t ↦ T − t, by referring to the well-posedness result of Dautray and Lions [19, pages 558–570].

We multiply both sides of (112) by y(t) of (98) and integrate it over [0, T]. Then we have

()
By the initial value condition of y and the terminal value condition of pu, the left hand side of (113) yields
()
Let u be the optimal control subject to (103). Then (111) is represented by
()
which is rewritten as (106). Note that C* ∈ ℒ(X*, H) and that, for ϕ and ψ in H, we have (C*ΛXCψ, ϕ) = ⟨Cψ, Cϕ⟩X, where the duality pairing is also denoted by (·, ·).

Remark 14. Identifying the antidual X* with X, we need not use the canonical isomorphism ΛX. However, in the case where X ⊂ V*, this identification leads to difficulties, since H has already been identified with its dual.

Acknowledgment

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2012-0007560).
