1. Introduction
We consider the existence of solutions at resonance to first-order three-point BVPs with nonlinear boundary conditions, using results developed in [1, 2].
Consider
x′ = A(t)x + E(t) + ɛF(t, x, ɛ), (1.1)
Mx(0) + Nx(η) + Rx(1) = ℓ + ɛg(x(0), x(η), x(1)), (1.2)
where M, N, and R are constant square matrices of order n, A(t) is an n × n matrix with continuous entries, E : [0,1] → ℝn is continuous, F : [0,1] × ℝn × (−ɛ0, ɛ0) → ℝn is a continuous function with ɛ0 > 0, ℓ ∈ ℝn, η ∈ (0,1), and g : ℝ3n → ℝn is continuous.
Our existence theorem uses the implicit function theorem; see, for example, Nagle [3]. Nagle [3] extended the alternative method considered by Hale [4] to handle the periodic case of non-self-adjoint problems subject to homogeneous boundary conditions. Our results extend the work of Feng and Webb [5] and Gupta [6] on three-point BVPs with linear boundary conditions, for α = 1 and αη = 1, to nonlinear boundary conditions. Feng and Webb [5] studied the existence of solutions of the following BVPs (1.3) and (1.4):
(1.3)
(1.4)
where η ∈ (0,1), α ∈ ℝ, f : [0,1] × ℝ2 → ℝ is a continuous function, and e : [0,1] → ℝ is a function in L1[0,1]. Both problems are resonance cases, under the assumption α = 1 for problem (1.3) and αη = 1 for problem (1.4). The problem of nonlinear boundary conditions for discrete systems has been studied by Rodriguez [7, 8]. Rodriguez [7] extended results of Halanay [9], who considered periodic boundary conditions, and also extended those of Rodriguez [10] and Agarwal [11], who considered linear boundary conditions. To our knowledge there appears to be no research in the literature on multipoint BVPs for systems of first-order equations with nonlinear boundary conditions at resonance. The results of this paper fill this gap in the literature.
Our results are analogues, for three-point boundary conditions, of the results on periodic boundary conditions for perturbed systems of first-order equations at resonance considered by Coddington and Levinson [12] and Cronin [13, 14]. Moreover, our results extend the work of Urabe [15], Liu [16], and Nagle [3], who solved the two-point BVP using the Cesari-Hale alternative method.
2. Preliminaries
We now state the following basic existence theorems for systems with a parameter and use them to formulate the existence results for the problem (1.1), (1.2).
Theorem 2.1 (see Coppel [17], page 19).
- (i)
Let F(t, x, ɛ) be a continuous function of (t, x, ɛ) for all points (t, x) in an open set D and all values of ɛ near .
- (ii)
Let x(t, c, ɛ) be any noncontinuable solution of the differential equation
(2.1)
If is defined on the interval [0,1] and is unique, then x(t, c, ɛ) is defined on [0,1] for all (c, ɛ) sufficiently near and is a continuous function of its three arguments at any point .
Theorem 2.2 (see Coppel [17], page 22).
- (i)
Let F(t, x, ɛ) be a continuous function of (t, x, ɛ) for all points (t, x) in a domain D and all values of the vector parameter ɛ near .
- (ii)
Let be a solution of the differential equation
(2.2)
defined on a compact interval [0,1].
- (iii)
Suppose that F has continuous partial derivatives Fx, Fɛ at all points with t ∈ [0,1].
Then for all (c, ɛ) sufficiently near the differential equation
(2.3)
has a unique solution
x(
t,
c,
ɛ) over [0,1] that is close to the solution
of (ii). The continuous differentiability of
F with respect to
x and
ɛ implies the additional property that the solution
x(
t,
c,
ɛ) is differentiable with respect to (
t,
c,
ɛ) for (
c,
ɛ) near
.
We recall the following results of [2].
Lemma 2.3 (see [2]). Consider the system
x′ = A(t)x, (2.4)
where A(t) is an n × n matrix with continuous entries on the interval [0,1]. Let Y(t) be a fundamental matrix of (2.4). Then the solution of (2.4) which satisfies the initial condition
x(0) = c (2.5)
is x(t) = Y(t)Y−1(0)c, where c is a constant n-vector. Abbreviate Y(t)Y−1(0) to Y0(t); thus x(t) = Y0(t)c.
Lemma 2.4 (see [2]). Let Y(t) be a fundamental matrix of (2.4). Then any solution of (1.1) and (2.5) can be written as
x(t) = Y0(t)c + Y0(t)∫₀ᵗ Y0−1(s)[E(s) + ɛF(s, x(s), ɛ)] ds. (2.6)
The solution of (1.1) satisfies the boundary conditions (1.2) if and only if
ℒc = ɛ𝒩(c, α, η, ɛ) + d, (2.7)
where ℒ = M + NY0(η) + RY0(1), 𝒩(c, α, η, ɛ) = ⋯ − g(c, x(η), x(1)), d = ⋯, and x(t, c, ɛ) is the solution of (1.1) given x(0) = c.
Thus (2.7) is a system of n real equations in ɛ, c1, …, cn where c1, …, cn are the components of c. The system (2.7) is sometimes called the branching equations.
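Numerically, the branching equations are easy to set up once ℒ is in hand. The sketch below (with hypothetical constant matrices A, M, N, R and η = 1/2, chosen only for illustration) forms ℒ = M + NY0(η) + RY0(1) as in Lemma 2.4 and tests for resonance, which occurs exactly when ℒ is singular.

```python
# Sketch: form L = M + N Y0(eta) + R Y0(1) (Lemma 2.4) and test for resonance.
# A is taken constant here purely for simplicity, so Y0(t) = expm(t*A).
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # hypothetical coefficient matrix
M = np.array([[0.0, 0.0], [0.0, 1.0]])   # hypothetical boundary matrices M, N, R
N = np.array([[-1.0, 0.0], [0.0, 0.0]])
R = np.array([[1.0, 0.0], [0.0, 0.0]])
eta = 0.5

def Y0(t):
    return expm(t * A)                   # fundamental matrix with Y0(0) = I

L = M + N @ Y0(eta) + R @ Y0(1.0)
rank = np.linalg.matrix_rank(L)
print("L =\n", L)
if rank < L.shape[0]:
    print("resonance: rank L =", rank, ", dim Ker L =", L.shape[0] - rank)
else:
    print("nonresonant: L is nonsingular")
```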
Next we suppose that ℒ is a singular matrix. This is sometimes called the resonance case or degenerate case. Now we consider the case rank ℒ = n − r, 0 < n − r < n. Let Er denote the null space of ℒ, and let En−r denote the complement in ℝn of Er; that is,
ℝn = Er ⊕ En−r. (2.8)
Let x1, …, xn be a basis for ℝn such that x1, …, xr is a basis for Er and xr+1, …, xn is a basis for En−r. Let Pr be the matrix projection onto Ker ℒ = Er, and Pn−r = I − Pr, where I is the identity matrix. Thus Pn−r is a projection onto the complementary space En−r of Er, and
(2.9)
Without loss of generality, we may assume
(2.10)
We will identify Prc with cr = (c1, …, cr) and Pn−rc with cn−r = (cr+1, …, cn) whenever it is convenient to do so. Let H be a nonsingular n × n matrix satisfying
Hℒ = Pn−r. (2.11)
The matrix H can be computed easily. The nature of the solutions of the branching equations depends heavily on the rank of the matrix ℒ.
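One concrete way to produce the projections and such a matrix H numerically is sketched below (an illustration only; it takes En−r to be the orthogonal complement of Ker ℒ, and the example ℒ is the hypothetical singular matrix produced by the previous sketch).

```python
# Sketch: projections P_r, P_{n-r} and a nonsingular H with H L = P_{n-r},
# taking E_{n-r} to be the orthogonal complement of Ker L for concreteness.
import numpy as np
from scipy.linalg import null_space, pinv

L = np.array([[0.0, 0.5],
              [0.0, 1.0]])        # hypothetical singular L (rank 1)

n = L.shape[0]
Kr = null_space(L)                # orthonormal basis of Ker L            (n x r)
Qr = null_space(L.T)              # orthonormal basis of (range L)^perp   (n x r)
Pr = Kr @ Kr.T                    # projection onto E_r = Ker L
Pnr = np.eye(n) - Pr              # projection onto E_{n-r}
# pinv(L) @ L is the orthogonal projector onto (Ker L)^perp = E_{n-r}, and
# Kr @ Qr.T annihilates L, so H below satisfies H L = P_{n-r} and is nonsingular.
H = pinv(L) + Kr @ Qr.T

print(np.allclose(H @ L, Pnr))          # True: (2.11) holds
print(abs(np.linalg.det(H)) > 1e-12)    # True: H is nonsingular
```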
Lemma 2.5 (see [2]). The matrix ℒ has rank n − r if and only if the three-point BVP (2.4) and Mx(0) + Nx(η) + Rx(1) = 0 has exactly r linearly independent solutions.
Next we give a necessary and sufficient condition for the existence of solutions x(t, c, ɛ) of three-point BVPs for ɛ > 0 such that the solution satisfies x(0) = c, where c = c(ɛ) for suitable c(ɛ).
We need to solve (2.7) for c when ɛ is sufficiently small. The problem of finding solutions to (1.1) and (1.2) is reduced to that of solving the branching equations (2.7) for c as a function of ɛ for |ɛ| < ɛ0. So consider (2.7), which is equivalent to
ℒ(Pr + Pn−r)c = ɛ𝒩((Pr + Pn−r)c, α, η, ɛ) + d. (2.12)
Multiplying (2.7) by the matrix H and using (2.11), we have
Pn−rc = ɛH𝒩((Pr + Pn−r)c, α, η, ɛ) + Hd, (2.13)
where H𝒩((Pr + Pn−r)c, α, η, ɛ) = ⋯ − g(c, x(η), x(1)) and Hd = ⋯.
Since the matrix H is nonsingular, solving (2.7) for c is equivalent to solving (2.13) for c. The following theorem due to Cronin [13, 14] gives a necessary condition for the existence of solutions to the BVP (1.1) and (1.2).
Theorem 2.6 (see [2]). A necessary condition that (2.13) can be solved for c, with |ɛ| < ɛ0, for some ɛ0 > 0, is PrHd = 0.
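This condition is a direct numerical check; for instance, continuing the hypothetical example above (Pr and H rounded from the previous sketch, and d chosen hypothetically):

```python
# Sketch: check the necessary condition P_r H d = 0 of Theorem 2.6.
import numpy as np

Pr = np.array([[1.0, 0.0], [0.0, 0.0]])       # projection onto Ker L (previous sketch)
H = np.array([[0.894, -0.447], [0.4, 0.8]])   # nonsingular H with H L = P_{n-r} (rounded)
d = np.array([0.15, 0.30])                    # hypothetical right-hand-side vector d

print("P_r H d = 0:", np.allclose(Pr @ (H @ d), 0.0, atol=1e-6))
```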
If ℒ is a nonsingular matrix then the implicit function theorem is applicable to solve (2.7) uniquely for c as a function of ɛ in a neighborhood of the initial solution c (see Cronin [14]). The implicit function theorem may be stated as in Voxman and Goetschel [18, page 222].
Theorem 2.7 (the implicit function theorem). Let Ω ⊂ ℝn × ℝm be an open set, and let F : Ω → ℝm be a function of class C1. Suppose F(x0, y0) = 0. Assume that
det[∂Fi(x0, y0)/∂yj] ≠ 0, i, j = 1, …, m, (2.14)
where F = (F1, …, Fm). Then there are open sets U ⊂ ℝn and V ⊂ ℝm, with x0 ∈ U and y0 ∈ V, and a unique function f : U → V such that
F(x, f(x)) = 0 (2.15)
for all x ∈ U, with y0 = f(x0). Furthermore, f is of class C1.
3. Main Results
In this section sufficient conditions are introduced for the existence of solutions to the BVP (1.1), (1.2). We recall the following definition from [2] to develop our main results.
Definition 3.1 (see [2]). Let Er denote the null space of ℒ, and let En−r denote the complement in ℝn of Er. Let Pr be the matrix projection onto Ker ℒ = Er, and Pn−r = I − Pr, where I is the identity matrix. Thus Pn−r is a projection onto the complementary space En−r of Er. If En−r is properly contained in ℝn, then Er is an r-dimensional vector space, where 0 < r < n. If c = (c1, …, cn), let Prc = cr and Pn−rc = cn−r; then define a continuous mapping Φɛ : ℝr → ℝr, given by
Φɛ(cr) = PrH𝒩(cr ⊕ cn−r(cr, ɛ), α, η, ɛ), (3.1)
where cn−r(cr, ɛ) = cn−r is a differentiable function of cr and ɛ. By abuse of notation we will identify Prc and cr when convenient and where the meaning is clear from the context, so that in defining Φɛ above we interpret PrH𝒩 as (H𝒩1, …, H𝒩r). Similarly we will sometimes identify Pn−rc and cn−r. Setting ɛ = 0, we have
(3.2)
where cn−r(cr, 0) = Pn−rHd; note that from the context cn−r(cr, 0) = Pn−rHd is interpreted as cn−r(cr, 0) = (Hdr+1, …, Hdn).
If Er = ℝn, then Pr = I and Pn−r = 0. Since Pn−r = 0, it follows that the matrix H is the identity matrix. Thus define a continuous mapping Φɛ : ℝn → ℝn, given by Φɛ(c) = 𝒩(c, α, η, ɛ). Setting ɛ = 0, we have Φ0(c) = 𝒩(c, α, η, 0).
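Computationally, the hypotheses of Theorem 3.2 below amount to locating a zero of Φ0 and checking that the Jacobian of Φ0 there is nonsingular. A minimal sketch, with a hypothetical scalar Φ0 standing in for the map assembled from (3.1) (in practice evaluating Φ0 requires solving (1.1) at ɛ = 0):

```python
# Sketch: locate a zero of Phi_0 and check that its Jacobian there is nonsingular,
# which is the kind of hypothesis required by Theorem 3.2.
import numpy as np
from scipy.optimize import fsolve

def Phi0(cr):
    # hypothetical Phi_0 : R^r -> R^r with r = 1; a stand-in for (3.1) at eps = 0
    return np.array([cr[0]**2 - 1.0e-3 * cr[0] - 1.5e-5])

def jacobian(F, x, h=1e-7):
    # forward-difference Jacobian of F at x
    x = np.asarray(x, dtype=float)
    f0 = F(x)
    J = np.zeros((f0.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += h
        J[:, j] = (F(xp) - f0) / h
    return J

cr_star = fsolve(Phi0, x0=np.array([0.01]))
print("zero of Phi_0:", cr_star)
print("Jacobian nonsingular:", abs(np.linalg.det(jacobian(Phi0, cr_star))) > 1e-12)
```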
The following theorem is the main theorem of this paper and gives sufficient conditions for the existence of solutions of (1.1), (1.2) for |ɛ| < ɛ0, for some ɛ0 > 0. The existence theorem can be established using the implicit function theorem; see Theorem 2.7.
Theorem 3.2. If c = (c1, …, cn) ∈ ℝn, let cr = (c1, …, cr). Let the conditions (i), (ii), and (iii) of Theorem 2.2 hold, and let k1 > 0, k > 0 and ɛ0 > 0 be small enough so that (1.1) has a unique n-vector x(t, c, ɛ) defined on . Let , given by
(3.3)
where cn−r(cr, ɛ) = cn−r is a differentiable function of cr and ɛ, and
(3.4)
for . If and
(3.5)
for some , then there is , , and δ > 0 such that (1.1), (1.2) has a unique solution x(t, c(ɛ), ɛ) for all such that and .
Proof. The existence and uniqueness of a solution x(t, c, ɛ) for |ɛ| < ɛ0 with x(0, c, ɛ) = c ∈ ℝn follows directly from conditions (i), (ii), and (iii) of Theorem 2.2. Now
(3.6)
for some ; thus it follows from the implicit function theorem that there is , such that (3.3) has a unique solution (c1, …, cr) = (c1(ɛ), …, cr(ɛ)), with , for all ɛ, . From this it follows that x(t, c(ɛ), ɛ) is a unique solution of the BVP (1.1), (1.2) which satisfies the initial value x(0, c(ɛ), ɛ) = c(ɛ) and and , where .
We now consider the BVP (1.1), (1.2) in the case r = n; that is, ℒ is the zero matrix, which is sometimes called the totally degenerate case.
Theorem 3.3 (compare with Theorem 3.8, page 69 of Cronin [14]). If r = n, a necessary condition that (2.7) has a solution for each ɛ with |ɛ| < ɛ0, for some ɛ0 > 0, is d = 0; that is,
(3.7)
Theorem 3.4. Let the conditions (i), (ii), and (iii) of Theorem 2.2 hold, and let k1 > 0, k > 0 and ɛ0 > 0 be small enough so that (1.1) has a unique solution x(t, c, ɛ) defined on . If r = n, d = 0, and
(3.8)
then there is , , and δ > 0 such that (1.1), (1.2) has a unique solution x(t, c(ɛ), ɛ) for all such that and .
Proof. If r = n and d = 0, then Pn−r = 0. This implies Pr = I. Since Pn−r = 0, it follows that H = I, the identity matrix.
The existence and uniqueness of a solution x(t, c(ɛ), ɛ) for with x(0, c, ɛ) = c ∈ ℝn follows directly from conditions (i), (ii), and (iii) of Theorem 2.2. Now
(3.9)
If ,
(3.10)
for some ; thus it follows from the implicit function theorem that there is , such that (3.8) has a unique solution c = c(ɛ), with , for all ɛ, . From this it follows that x(t, c(ɛ), ɛ) is a unique solution of the BVP (1.1), (1.2) which satisfies the initial values x(0, c(ɛ), ɛ) = c(ɛ) ∈ ℝn for all ɛ, such that and .
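In this totally degenerate case the computation is particularly simple: H is the identity, so one checks d = 0 (Theorem 3.3) and then solves 𝒩(c, α, η, 0) = 0 for the full vector c. A schematic sketch, with a hypothetical stand-in for 𝒩(·, α, η, 0) (the actual map involves the solution of (1.1) at ɛ = 0 and the boundary function g):

```python
# Sketch of the totally degenerate case r = n: verify d = 0, then solve
# N0(c) := curly_N(c, alpha, eta, 0) = 0 for the full initial vector c.
import numpy as np
from scipy.optimize import fsolve

d = np.zeros(2)                   # Theorem 3.3: d = 0 is necessary for solvability
assert np.allclose(d, 0.0)

def N0(c):
    # hypothetical stand-in for curly_N(c, alpha, eta, 0)
    return np.array([c[0] + 0.5 * c[1] - 0.2,
                     c[0] * c[1] - 0.01])

c0 = fsolve(N0, x0=np.array([0.1, 0.1]))
print("c(0) =", c0, " residual:", N0(c0))
```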
4. Some Examples
To find c for small ɛ using Theorem 2.6, we need to compute Φ0(c) from (3.3). We then apply Theorem 3.2 to show the existence of solutions.
Example 4.1. α = 1, rank ℒα=1 = 1 < 2, ℓi ≡ 0 for i = 1,2.
Consider the BVP
(4.1)
where f ∈ C([0,1] × ℝ2 × (−ɛ0, ɛ0); ℝ), e ∈ C[0,1], and g ∈ C(ℝ6; ℝ2). Then the BVP (4.1) is equivalent to
(4.2)
(4.3)
where
(4.4)
By Lemma 2.4, we find ℒ:
(4.5)
Resonance occurs if det(ℒ) = −1 + α = 0, that is, in the case α = 1. For α = 1, rank ℒα=1 = 1; that is,
(4.6)
Let E1 denote the null space of ℒα=1. Thus is a basis for Ker (ℒα=1), and Ker (ℒα=1) = Span e1. Let P1 be the matrix projection onto Ker (ℒα=1). . . Set H = so that Hℒα=1 = P2. In the system (4.2), (4.3) let , , and let g2(c1, c2, x1(1/2), x2(1/2), x1(1), x2(1)) = 2x1(1/2)/256π4. We need to show that P1Hd = 0, which is a necessary condition in order to apply Theorem 2.6:
(4.7)
Since , it follows that P1Hd = 0. From the boundary condition (4.3), we have x2(0) = c2 = 0. Then, by the variation of constants formula, we obtain
(4.8)
Thus the BVP (4.2), (4.3) has a solution for α = 1, ɛ = 0; namely, x1(t, c, 0) = c1 + ((1 − cos 4πt)/16π2), x2(t, c, 0) = sin 4πt/4π, with x1(0) = x1(1/2) = x1(1) = c1 and x2(0) = x2(1/2) = x2(1) = 0. Setting ɛ = 0, we thus have and g2(c1, x1(1/2), x1(1)) = 2c1/256π4. Hence
(4.9)
If c1 ≈ −3.5023 × 10−3 or c1 ≈ 4.2938 × 10−3, then Φ0(c1) = 0 and
(4.10)
Hence by Theorem 3.2 there is , and δ > 0 such that the BVP (4.2), (4.3) has a unique solution x(t, c(ɛ), ɛ) which satisfies the initial values x(0, c(ɛ), ɛ) = c(ɛ) ∈ ℝ2 for all such that c(0) = (c1, 0) and |c(ɛ) − c(0)| < δ.
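As a quick sanity check (not part of the original argument), the displayed ɛ = 0 solution can be verified symbolically: x2 is the t-derivative of x1, and both components take the stated values at t = 0, 1/2, 1.

```python
# Verification sketch for Example 4.1: the displayed eps = 0 solution satisfies
# x1' = x2 and the stated three-point values at t = 0, 1/2, 1.
import sympy as sp

t, c1 = sp.symbols('t c1')
x1 = c1 + (1 - sp.cos(4 * sp.pi * t)) / (16 * sp.pi**2)
x2 = sp.sin(4 * sp.pi * t) / (4 * sp.pi)

print(sp.simplify(sp.diff(x1, t) - x2) == 0)                                   # True
print([sp.simplify(x1.subs(t, v) - c1) for v in (0, sp.Rational(1, 2), 1)])    # [0, 0, 0]
print([sp.simplify(x2.subs(t, v)) for v in (0, sp.Rational(1, 2), 1)])         # [0, 0, 0]
```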
Example 4.2. Rank ℒ = 2 < 3.
Consider the BVP
(4.11)
(4.12)
where fi ∈ C([0,1] × ℝ3 × (−ɛ0, ɛ0); ℝ), i = 1, 2, 3, ℓ = (ℓ1, ℓ2, ℓ3) ∈ ℝ3, g ∈ C(ℝ9; ℝ3),
(4.13)
By Lemma 2.4, the problem of solving (4.11), (4.12) is reduced to that of solving ℒc = ɛ𝒩(c, α, η, ɛ) + d for c, provided solutions x(t, c, ɛ) of the initial value problems exist on [0,1] for each (c, ɛ). Thus we find ℒ:
(4.14)
Since rank ℒ = 2, it follows that the matrix ℒ is singular. Let E3 denote the null space of ℒ. Thus is a basis for Ker (ℒ), and Ker (ℒ) = Span e3. Let P3 be the matrix projection onto Ker (ℒ). . So . Set so that Hℒ = P2.
(4.15)
Since d = 0, it follows that P3Hd = 0. Thus the necessary condition of Theorem 2.6 holds. We also have P2Hd = 0. To obtain Φ0(c) we must first calculate x(t, c, 0), that is, the solution of x′ = A(t)x + e(t). By Lemma 2.3 and the boundary condition (4.12), x′ = A(t)x has a solution x(t) with x(0) = c = (c1, c2, c3)T. We note that at ɛ = 0, P2Hd = P2c, where and P2Hd = 0. Hence c1 = 0 and c2 = 0. Thus
(4.16)
Thus the BVP (4.11), (4.12) has a solution for ɛ = 0; namely, x1(t, c, 0) = x2(t, c, 0) = 0 and x3(t, c, 0) = c3, and thus ℓi = 0, i = 1, 2, x3(π) = c3 = −ℓ3, and x3(2π) = c3:
(4.17)
where
(4.18)
Thus Φɛ(c3) = 𝒩3(c3 ⊕ c2(c3, ɛ), α, η, ɛ), where c3 = P3c = (0, 0, c3) and c2 = P2c = (c1, c2, 0). Setting ɛ = 0, we have Φ0(c3) = 𝒩3(c3, α, η, 0), where c2(c3, 0) = P2Hd = 0. Writing out the components and setting ɛ = 0, we obtain x1(t, c, 0) = x2(t, c, 0) = 0 and x3(t, c, 0) = c3. Hence
(4.19)
where xi(π) = xi(2π) = 0, i = 1, 2, x3(π) = c3 = −ℓ3, and x3(2π) = c3. Let , and . Hence
(4.20)
If , then Φ0(c3) = 0 and
(4.21)
Hence by Theorem 3.2 there is , , and δ > 0 such that the BVP (4.11), (4.12) has a unique solution x(t, c(ɛ), ɛ) which satisfies the initial values x(0, c(ɛ), ɛ) = c(ɛ) ∈ ℝ3 for all such that c(0) = (0, 0, c3) and |c(ɛ) − c(0)| < δ.