1 INTRODUCTION
The Mullins–Sekerka problem in a bounded geometry is a moving boundary problem which appears as the gradient flow of the area functional with respect to a suitable metric on the tangent space of all oriented hypersurfaces which enclose a fixed volume [23, 38]. It describes the evolution of two domains and together with the sharp interface that separates them in such a way that the volumes of are preserved and the area of is decreased [15, 23, 25]. The Mullins–Sekerka problem may also be derived as a singular limit of the Cahn–Hilliard problem when the thickness of the transition layer between the phases vanishes [3, 43]. This model was introduced by Mullins and Sekerka in [37] to study the solidification and liquefaction of materials of negligible specific heat.
Most of the mathematical studies regarding this two-phase problem consider a bounded geometry with being open subsets of a larger domain and either is a compact manifold without boundary [8, 19, 21, 36] or intersects the boundary of orthogonally [2, 4, 24]. Existence results in the setting of classical solutions have been established almost simultaneously in [12, 19, 21] under the assumption that is a compact -hypersurface without boundary in , and , with in [12] and in [19, 21]. Subsequently, the well-posedness of the Mullins–Sekerka problem for initial geometries, where , was proven in the recent monograph [41], see also [28]. The existence theory in the situation with a contact angle condition of was established only recently in [2, 24]. We also refer to [21, 24, 41] where stability issues are investigated and to [8, 22, 39, 45] for numerical studies pertaining to this problem. Finally, we mention the papers [10, 11, 26, 27, 42] where weak solutions to the Mullins–Sekerka problem are studied.
In this paper, we consider the situation when the two phases are both unbounded and we restrict to the two-dimensional case. To be more precise, we assume that at each time instant we have where , , is an unknown function. The same setting has also been considered in [13], where the authors establish convergence rates to a planar interface for global solutions (assuming they exist). Our goal is to establish the well-posedness of the Mullins–Sekerka problem in this unbounded regime for initial data whose regularity is close to being optimal. To be more precise, the equations of motion are described by the following system of equations
() for . Above, , , and are the unit normal which points into , the normal velocity, and the curvature of . Moreover, represents the jump of across in the normal direction. The system (1.1a) is supplemented by the initial condition
() Before presenting our main result, we emphasize that, under suitable conditions, the interface identifies at each time instant the functions uniquely, see Proposition 2.4. Therefore, from now on, we shall only refer to as being a solution to Equation (1.1). A further observation is that if is a solution to Equation (1.1) then, given , also the function with is a solution to Equation (1.1). Since where is the homogeneous Sobolev norm, we identify and as critical spaces for Equation (1.1). In Theorem 1.1, we establish the well-posedness of Equation (1.1) together with a parabolic smoothing property in all subcritical Sobolev spaces with . With respect to this point, we mention that all previous existence results in the setting of classical solutions [2, 12, 19, 21, 24, 41] consider initial data with at least -regularity.
The main result of this paper is the following theorem.
Theorem 1.1. Let and choose . Then, given , there exists a unique maximal solution to (1.1) such that where is the maximal existence time and is defined by Moreover, defines a semiflow on which is smooth in the open set and ()
In Theorem 1.1, we let denote the spatial derivative .
The strategy to prove Theorem 1.1 consists of several steps. To begin with, we first prove that if is known and belongs to , then the first three equations of Equation (1.1a) identify the functions uniquely, see Proposition 2.4. Furthermore, we can also represent the right side of Equation (1.1a) in terms of certain singular integral operators which involve only the function . In this way, we reformulate the problem as an evolution problem with only as unknown, see Equation (3.1). In the proof of Proposition 2.4, we rely on potential theory and some formulas, see Lemma 2.2 (iv), that relate the derivatives of certain singular integral operators evaluated at some density to the -adjoints of these operators evaluated at , which have already been used in the context of the Muskat problem in [14, 30]. Thanks to these formulas, we may formulate Equation (1.1), see Equation (3.1) in Section 3.1, as an evolution problem in , , with nonlinearities which are expressed as a derivative. Then, using a direct localization argument, we show in Section 3.2 that the problem is of parabolic type by identifying the right side of Equation (3.1) as the generator of an analytic semigroup. The proof of the main result is established in Section 3.3 and relies on the quasilinear parabolic theory presented in [5, 35].
1.1 Notation
Given Banach spaces and , we define as the space of bounded linear operators from to and . Moreover, is the open subset of which consists of isomorphisms and . Furthermore, , , is the space of -linear, bounded, and symmetric operators . The set of all locally Lipschitz continuous mappings from to is denoted by and is the set which consists only of smooth mappings from an open set to .
If is additionally densely embedded in , we set (following [6])
Given a Banach space , an interval , , and , we define as the set of all -times continuously differentiable functions and is its subset consisting of those functions which possess a locally -Hölder continuous th derivative. Moreover, is the Banach space of functions with bounded and uniformly continuous derivatives up to order and denotes its subspace which consists of those functions which have a uniformly -Hölder continuous th derivative. We also set Finally, if is open and , then is the set of functions defined on which possess uniformly continuous derivatives up to order .
2 SOLVABILITY OF SOME BOUNDARY VALUE PROBLEMS
Our strategy to solve Equation (1.1) is to reformulate this model as an evolution problem for the function only. To this end, we first solve via the method of potentials, for each given function , the (decoupled) boundary value problems for and given by the systems () where Below, is the outward unit normal at which points into . The corresponding existence and uniqueness result is provided in Proposition 2.4 below. Before stating this result, we first introduce some notation. Observe that is the image of the diffeomorphism defined by for . Then, the pulled-back curvature is given by the relation () Moreover, given functions , we set ()
2.1 Some singular integral operators
We now introduce some singular integral operators which are used when solving Equation (2.1). Given , we set () for , where is the principal value and Lemma 2.1 (i) below ensures that these singular integral operators belong to . Their -adjoints are given by the relations () An important observation is that the operators defined in Equations (2.4) and (2.5) can be represented in terms of a family of singular integral operators which we now introduce. Given and Lipschitz continuous mappings , we set () for . In particular, if is Lipschitz continuous we use the short notation () These operators have been defined in the context of the Muskat problem in [31]. It is a straightforward consequence of Equations (2.4)–(2.7) that () In view of the representation (2.8), several mapping properties for the operators introduced in Equations (2.4) and (2.5) can be derived from the following result.
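Since the family of operators just introduced is used throughout, we recall for the reader's convenience the general form of the operators as defined in the Muskat literature [31]; this transcription is our assumption of what the elided formula (2.6) looks like, and [31] should be consulted for the authoritative statement:

```latex
B_{n,m}(f_1,\dots,f_{n+m})[h](x)
  := \operatorname{PV}\!\int_{\mathbb{R}}
     \frac{\prod_{i=1}^{n}\bigl(\delta_{[x,s]}f_i/s\bigr)}
          {\prod_{i=1}^{m}\bigl[1+\bigl(\delta_{[x,s]}f_{n+i}/s\bigr)^{2}\bigr]}\,
     \frac{h(x-s)}{s}\,\mathrm{d}s,
\qquad \delta_{[x,s]}f := f(x)-f(x-s).
```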
Lemma 2.1. Let .
- (i) Let be Lipschitz continuous mappings. Then, there exists a positive constant such that for all Lipschitz continuous functions we have
Moreover, .
- (ii) Given , it holds that .
- (iii) Given , it holds that .
- (iv) Let and be given. Then, there exists a positive constant such that for all we have
Proof. The property (i) is established in [31, Lemma 3.1]. The claim (ii) is proven for in [34, Lemma 4.3] and the case follows from this result via induction. Moreover, (iii) is established in [33, Appendix C] and (iv) in [29, Lemma 2.5].
The next lemma collects some important properties of the operators defined in Equations (2.4) and (2.5).
Lemma 2.2. Let .
- (i) If , then .
- (ii) If , , then .
- (iii) If , then .
- (iv) If and , then and belong to with
- (v) If , then .
Proof. The property (i) follows from [31, Theorem 3.5] and (ii) is established in [1, Theorem 5] and [30, Proposition 3.4]. Moreover, the claim (iii) is proven in [31, Proposition 3.6 and Lemma 3.8] and (iv) in [30, Proposition 2.3]. The assertion (v) is a consequence of (iii) and (iv). Indeed, given , , and , the properties (iii) and (iv) imply that with
Hence, and the inequalities in the second last line of the formula (with a sufficiently small constant independent of and ) being a straightforward consequence of (iii). The assertion (v) now follows from this estimate via the method of continuity [6, Proposition I.1.1.1].
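Because the method of continuity is invoked here and again in the proof of Theorem 3.1, we recall its content in paraphrased form (see [6, Proposition I.1.1.1] for the precise statement):

```latex
% Let E and F be Banach spaces and let [0,1] \ni \tau \mapsto A_\tau \in
% \mathcal{L}(E,F) be continuous. If A_0 \in \mathrm{Isom}(E,F) and if there
% exists a constant c > 0 such that
\|A_\tau x\|_F \;\ge\; c\,\|x\|_E
\qquad \text{for all } \tau \in [0,1],\ x \in E,
% then A_\tau \in \mathrm{Isom}(E,F) for every \tau \in [0,1].
```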
2.2 The solvability of the boundary value problems ()
As a preliminary result, we provide in Proposition 2.3 the unique solvability of a transmission-type boundary value problem which is used to establish the uniqueness claim in Proposition 2.4.
Proposition 2.3. Given and , the boundary value problem () has a solution such that . Moreover, the solution is, up to an additive constant, unique.
Proof. We first prove the uniqueness of solutions in the class described above. Let therefore be a solution to the homogeneous problem associated with (2.9) (that is, with ). Setting Stokes' theorem leads us to the equation Hence, is the real part of a holomorphic function . Since is also holomorphic and is bounded and vanishes for , it follows that , meaning that is constant in .
In order to establish the existence of solutions, we set () Defining by the formula () and setting , we next show that is a solution to (2.9) with the required properties. To start, we note that and, for every , we have for and locally uniformly in . This shows that is well-defined and that integration and differentiation with respect to and may be commuted.
Furthermore, Equation (2.11) and [9, Lemma A.1, Lemma A.4], imply that with , the gradient being given by the formula
Using the matrix identity together with integration by parts, we obtain that () Combining Equation (2.13) and [9, Lemma A.1, Lemma A.4], we obtain that also satisfies Equation (2.9) and It is now easy to infer from Equation (2.13) that also Equation (2.9) holds true, and therewith we have established the existence of a solution.
We are now in a position to solve the boundary value problems (2.1) for and . To this end, we first motivate heuristically the explicit formula (2.15) for the gradient of the solution, which is the building block in the proof of Proposition 2.4 (the formula for is motivated similarly). The starting point is the observation that in , which implies there is a stream function such that . Set . Taking into account that in and using Stokes' formula, we deduce that the distribution is supported on the interface , and we presume that () with some unknown density function , that is We now formally obtain by taking the convolution of the right side of Equation (2.14) with the fundamental solution of the Laplacian given by hence Formally computing we arrive, in view of the relation , at the integral formula (2.15). In the proof of Proposition 2.4, we show, under suitable assumptions, that there exists a unique density such that the formula for identifies, via the relation , the unique solution to Equation (2.1).
Proposition 2.4. Given , there exist unique solutions to Equation (2.1) such that for some functions . Furthermore, in , where () and with density functions given by the relation ()
Proof.
- (i) Existence. According to Lemma 2.2 (iii), we have and, since , the density functions defined in Equation (2.16) are well-defined and belong to . We next infer from [9, Lemmas A.1, A.4] that the vector fields defined in Equation (2.15) belong to and () Moreover, satisfies the asymptotic boundary condition for and see [9, Lemma A.4]. Setting , the relation in ensures that the functions where and , satisfy in . Moreover, and, since are divergence free, Equation (2.1) is satisfied. It is clear that also the asymptotic boundary conditions (2.1) hold. Combining Equations (2.4), (2.17), and the relation on , we further have () In order to show that are derivatives of functions in we define by the relations () see Lemma 2.2 (v). We next differentiate Equation (2.19) with respect to and then infer from Lemma 2.2 (iii)–(iv) that and Setting , it follows from Equation (2.8) and Lemma 2.1 (ii) that . Moreover, Equation (2.18) leads to . As a final step, we show that the additive constants can be chosen such that also Equation (2.1) is satisfied. Indeed, in view of Equations (2.16) and (2.17), we have so that is a constant function. Therewith, we have established the existence of a solution to Equation (2.1).
- (ii) Uniqueness. It suffices to show that the homogeneous problems () have unique solutions with the required properties. We establish only the uniqueness of (that of follows by similar arguments). Let thus be the function which satisfies the relation . Setting and , we note that solves the boundary value problem (2.9) (with ) and it is thus given by formula (2.11). In particular, it follows from Equation (2.11) and [9, Lemma A.1] that and together with Lemma 2.2 (iv) we get However, as shown in [31, Equations (3.22) and (3.25)], there exists a positive constant such that for all . Therefore , hence also . We now infer from Equation (2.11) that , and the uniqueness claim is proven.
3 THE EVOLUTION PROBLEM AND THE PROOF OF THE MAIN RESULT
In this section, we first formulate the original problem (1.1) as an evolution problem for , see Equation (3.1). Subsequently, we prove that the linearization of the right side of Equation (3.1) is the generator of an analytic semigroup, see Theorem 3.1, and we conclude the section with the proof of the main result stated in Theorem 1.1.
3.1 The evolution problem
In order to formulate the system (1.1) as an evolution problem for we first infer from Proposition 2.4 that if is a solution to Equation (1.1) as stated in Theorem 1.1, then, for each , we have Together with Equation (1.1) we arrive at the following evolution equation: As we want to solve the latter equation in the phase space with , we encounter the problem that the curvature is in general not a function, but a distribution. However, taking full advantage of the quasilinear character of the curvature operator, we can formulate the system (1.1) as the following quasilinear evolution problem: () where is defined by the following formula: () with given by () We point out that, if , then is exactly the pulled-back curvature . Moreover, arguing as in [33, Appendix C], it is not difficult to prove that () Recalling Equation (2.8), it follows from Lemmas 2.1 (iii) and 2.2 (ii), by also using the smoothness of the map which associates to an isomorphism its inverse, that () Gathering Equations (3.2)–(3.5), we obtain in view of that ()
3.2 The parabolicity property
Our next goal is to prove that the problem (3.1) is of parabolic type in the sense that, for each , , the operator is the generator of an analytic semigroup in . This is the content of the next result.
Theorem 3.1. Given , , it holds that .
In the proof of Theorem 3.1, we exploit the fact that, given , the action is the derivative of a function which lies in . The proof of Theorem 3.1 is postponed to the end of this subsection and it relies on a strategy inspired by [16, 17, 20].
As a first step, we associate with the continuous path and we note that where is the Hilbert transform. In particular, is the Fourier multiplier defined by the symbol . As a second step, we locally approximate in Proposition 3.2 the operator by Fourier multipliers which coincide, up to some positive multiplicative constants, with . As a final third step, we establish for these Fourier multipliers suitable (uniform) resolvent estimates, see Equations (3.14) and (3.15). The proof of Theorem 3.1 then follows by combining the results established in these three steps.
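The role of the Hilbert transform as a Fourier multiplier can be illustrated numerically. The sketch below uses the common convention that the Hilbert transform is the multiplier with symbol $-i\,\mathrm{sgn}(\xi)$ (the paper's displayed symbol is not reproduced above, so this convention is an assumption) and checks the classical identity $H(\sin) = -\cos$ on a periodic grid:

```python
import numpy as np

def hilbert_transform(h):
    """Periodic Hilbert transform, implemented as the Fourier
    multiplier with symbol -i*sign(xi) (assumed convention)."""
    n = h.size
    k = np.fft.fftfreq(n, d=1.0 / n)      # integer wave numbers
    symbol = -1j * np.sign(k)
    return np.real(np.fft.ifft(symbol * np.fft.fft(h)))

x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
out = hilbert_transform(np.sin(x))
err = np.max(np.abs(out - (-np.cos(x))))  # H(sin) = -cos
```

The same multiplier viewpoint underlies the resolvent estimates in the third step, where the operator is compared with scalar multiples of this model operator.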
Before presenting Proposition 3.2, we choose for each , a finite -localization family, that is, a family with sufficiently large, such that , , and
- is an interval of length for ;
- ;
- if or
- for all .
To each finite -localization family, we associate a second family with the following properties:
- on for and ;
- is an interval of length and with the same midpoint as , .
It is not difficult to prove that, given and , there exists such that for all we have ()
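A concrete instance of such a localization family can be built from overlapping cosine bumps. The construction below is a simplified illustration only (the precise normalization and the cut-offs at infinity required by the conditions above are elided in the text and therefore omitted here); it verifies the partition-of-unity property on an interior interval:

```python
import numpy as np

def bump(x, center, eps):
    """cos^2 bump supported on [center - eps, center + eps]."""
    t = (x - center) / eps
    return np.where(np.abs(t) < 1.0, np.cos(0.5 * np.pi * t) ** 2, 0.0)

eps = 0.25
centers = [j * eps for j in range(-8, 9)]  # finitely many intervals of length ~eps
x = np.linspace(-1.0, 1.0, 1001)
total = sum(bump(x, c, eps) for c in centers)
dev = np.max(np.abs(total - 1.0))          # partition of unity away from the ends
```

On the overlap of two consecutive bumps the identity $\cos^2 + \sin^2 = 1$ gives the exact partition-of-unity property, which is the mechanism behind condition-type sums in such localization families.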
We are now in a position to establish the aforementioned localization result.
Proposition 3.2. Let , , and be given. Then, there exist , a -localization family , and a constant such that () for all , , and , where, letting , we set
Proof. In the following, and are constants that do not depend on , while constants denoted by may depend on .
Given , , and we have where, in view of Equations (2.8), (3.3), Lemmas 2.1 (i), and 2.2 (i), we have Since is a contraction, we have shown that () It remains to estimate the first term on the right side of Equation (3.9). To this end, several steps are needed.
Step 1. Given and we define as the unique solutions to () see Equation (3.4) and Lemma 2.1 (ii). In this step, we prove that there exists a constant such that for all , , , and we have () Indeed, after multiplying Equation (3.10) by , we arrive at and it can be easily shown that Moreover, since , the commutator estimate in Lemma A.1 together with Equation (2.8) yields The estimates (3.11) now follow from Lemma 2.2 (ii).
Step 2. Recalling Equation (2.8), we infer from Lemma A.2 if , respectively from Lemma A.3 if , that, if is sufficiently small, then for all , , and we have
The estimates (3.11) and the property (3.4) (with ) enable us to conclude that for all , , and it holds that provided that is sufficiently small, and therefore ()
Step 3. We show that, if is sufficiently small, then for all , , and we have () To start with, we note that since is an isometry we have and it remains to estimate the right side of the latter inequality. To this end, we first infer from Equation (3.10) that Noticing that and for uniformly with respect to , and using the estimate we have in view of that for all , , and , provided that is sufficiently small. Similarly, for we have Furthermore, appealing to Lemma A.2 if , respectively to Lemma A.3 if , we find together with the representation (2.8) of that, if is sufficiently small, then and, together with Equation (3.11) and the property (3.4) (with ), we get for all , , and . This proves Equation (3.13).
Combining the estimates (3.9), (3.12), and (3.13), we conclude that Equation (3.8) holds true and this completes the proof.
In Proposition 3.2, we have locally approximated by the Fourier multipliers , and, since is a bounded function, there exists a constant such that . Elementary Fourier analysis arguments enable us to conclude that there exists a constant such that for all and all we have () ()
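Resolvent estimates of the type (3.14)–(3.15) are generic for Fourier multipliers whose symbol has negative real part. Since the paper's symbol is not displayed above, the check below uses the hypothetical stand-in $a(\xi) = -(1+|\xi|^3)$ and verifies numerically the elementary lower bound $|\lambda - a(\xi)| \ge (|\lambda| + |a(\xi)|)/\sqrt{2}$ for $\operatorname{Re}\lambda \ge 0$, from which resolvent estimates of this kind follow:

```python
import numpy as np

def a(xi):
    # Hypothetical symbol standing in for the (elided) multiplier symbol.
    return -(1.0 + np.abs(xi) ** 3)

xi = np.linspace(-50.0, 50.0, 201)
lam = (np.linspace(0.0, 50.0, 41)[:, None]
       + 1j * np.linspace(-50.0, 50.0, 41)[None, :])

# Ratio |lam - a| / (|lam| + |a|) over the grid; for Re(lam) >= 0 and
# a(xi) < 0 it is bounded below by 1/sqrt(2).
ratio = np.min(
    np.abs(lam[..., None] - a(xi)[None, None, :])
    / (np.abs(lam[..., None]) + np.abs(a(xi))[None, None, :])
)
```

Such a uniform lower bound on $|\lambda - a(\xi)|$ in terms of $|\lambda| + |a(\xi)|$ is exactly what yields bounds of the form $\|\lambda(\lambda - A)^{-1}\| \le C$ for the associated multiplier operator.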
We are now in a position to establish Theorem 3.1.
Proof of Theorem 3.1. Let be as identified in Equation (3.15). Setting , Proposition 3.2 ensures that there exist , a -localization family , and a constant such that for all , , and we have Recalling Equation (3.15), we also have for all , , , and . Combining these estimates, we get Summing up over , the estimates (3.7), Young's inequality, and the interpolation property () cf., for example, [44, Section 2.4.2/Remark 2], where is the complex interpolation functor, imply that there exist constants and such that for all , , and we have () The property (3.14) together with the method of continuity [6, Proposition I.1.1.1] and Equation (3.17) now yields () The desired generator property now follows directly from Equation (3.17) (with ) and Equation (3.18), see [6, Chapter I].
3.3 The proof of the main result
We complete this section with the proof of the main result which exploits the abstract quasilinear parabolic theory presented in [5] (see also [35, Theorem 1.1]).
Proof of Theorem 1.1. Let , , and . Defining and , it holds that , , and . Theorem 3.1 together with the regularity property (3.6) (both with ) ensures that . This enables us to apply [35, Theorem 1.1] in the context of the quasilinear parabolic evolution problem (3.1). Consequently, given , there exists a unique maximal classical solution to Equation (3.1) such that () and () where is the maximal existence time and can be chosen arbitrarily small, cf. [35, Remark 1.2 (ii)]. Moreover, the mapping defines a semiflow on which is smooth in the open set We next prove that the uniqueness claim holds in the class of classical solutions, that is, of solutions which merely satisfy Equation (3.19). To this end, we prove that each such solution with the property (3.19) satisfies Equation (3.20) for some small . Let therefore be arbitrary but fixed. Then, there exists a positive constant such that for all we have
() Moreover, by virtue of Lemma 2.2 (i) and (ii), for all , and together with Equation (3.16) and the observation that we get that . Since, by Lemma 2.1 (i) and (ii) and Equation (2.8), the mapping is in particular continuous, we may choose sufficiently large to guarantee that for all it holds that () Therefore, setting , , we infer from Equations (3.21) and (3.22) that there exists a constant such that for all we have Above, is the -scalar product. Since for , see Equation (3.2), it now follows from Lemma 2.1 (iv) and Equation (2.8) that there exists a constant such that for all To derive the last inequality, we have used the continuity of the multiplication operator see [29, Equation (1.8)]. To summarize, we have shown that Since , the latter estimate together with the mean value theorem and the observation that for it holds that , see Equation (3.16), yields which proves Equation (3.20). Recalling Proposition 2.4, we have established the existence and uniqueness of maximal classical solutions to Equation (1.1). Finally, the parabolic smoothing property (1.2) may be shown by using a parameter trick employed also in other settings, see [7, 18, 32, 40]. Since the arguments are more or less identical to those used in [32, Theorem 1.3], we refrain from presenting them here.
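For the reader's orientation, we sketch the idea behind the parameter trick in schematic form, writing the evolution problem abstractly as $\dot u = F(u)$; the details, including the additional translation parameter used to gain smoothness in the spatial variable, are as in [32, Theorem 1.3]:

```latex
% For \lambda close to 1 consider the time-rescaled function
u_\lambda(t) := u(\lambda t), \qquad
\dot u_\lambda = \lambda\, F(u_\lambda), \qquad u_\lambda(0) = f_0.
% The solution map (\lambda, f_0) \mapsto u_\lambda is smooth by the implicit
% function theorem applied to the associated abstract parabolic problem.
% Differentiating u(\lambda t) with respect to \lambda at \lambda = 1 produces
% the factor t\,\dot u(t), so smooth dependence on \lambda translates into
% smoothness of the solution with respect to time (and, with a spatial
% translation parameter, also with respect to x) for t > 0.
```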
ACKNOWLEDGMENTS
We would like to express our thanks to the anonymous referees for the careful reading of the manuscript and for the valuable comments.
Open access funding enabled and organized by Projekt DEAL.
APPENDIX A: SOME PROPERTIES OF THE SINGULAR INTEGRAL OPERATORS
We recall some recent results that are available for the singular integral operators introduced in Equation (2.7) and which are used in the analysis in Section 3. We begin with a commutator-type estimate.
Lemma A.1. Let , , , and be given. Then, there exists a constant that depends only on , , , and such that for all we have
Proof. See [1, Lemma 12].
The next results describe how to localize the singular integral operators . They may be viewed as generalizations of the method of freezing the coefficients of elliptic differential operators.
Lemma A.2. Let , , , and be given. Let further and . For any sufficiently small , there exists a constant that depends only on , and (if ) such that for all and we have
Proof. See [1, Lemma 13] if , respectively [33, Lemma D.5] if .
Lemma A.3 describes how to localize the operators “at infinity.”
Lemma A.3. Let , , , and be given. Let further and . For any sufficiently small , there exists a constant that depends only on , and (if ) such that for all and
Proof. See [1, Lemma 15] if , respectively [33, Lemma D.6] if .