1. Background
Let W be the self-adjoint Jacobi matrix operator acting on ℓ²(ℤ) as follows:
()
via
()
where an > 0 and bn ∈ ℝ. This operator can be viewed as the one-dimensional discrete Schrödinger operator if an = 1 for all n. A variety of papers have examined such operators; for example, we quote the work of Killip and Simon [1], who obtained sum rules for such Jacobi matrices. Additionally, Hundertmark and Simon [2] found spectral bounds for these operators; we state their result below.
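To fix ideas, a finite truncation of such a Jacobi matrix can be examined numerically; the row convention (an−1, bn, an placed around the diagonal) and the sample coefficients below are assumptions made for illustration only.

```python
# Finite N x N truncation of a Jacobi matrix (illustrative only; the row
# convention and the sample coefficients are assumptions of this sketch).
import numpy as np

N = 50
a = np.ones(N - 1)            # an -> 1: the discrete Schroedinger case
b = np.zeros(N)
b[N // 2] = -1.5              # a single compactly supported perturbation

W = np.diag(b) + np.diag(a, 1) + np.diag(a, -1)   # symmetric tridiagonal
eigs = np.linalg.eigvalsh(W)

# Eigenvalues of the truncation cluster in [-2, 2]; the perturbation can
# split off finitely many eigenvalues outside that interval.
print(eigs.min(), eigs.max())
```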
If an → 1 and bn → 0 rapidly enough as n → ±∞, the essential spectrum σess(W) of W is absolutely continuous and coincides with the interval [−2, 2] (see, e.g., [3]). Besides, W may have simple eigenvalues  where , and
()
Indeed, in [2] the authors found the following.
Theorem 1. If , , γ ≥ 1/2, then
()
where
()
The author (see [4]) then improved their result, achieving the smaller constant , by translating a well-known method employed by Dolbeault et al. in [5] to the discrete setting. They, in turn, used a simple argument of Eden and Foias (see [6]) to obtain improved constants for Lieb-Thirring inequalities in one dimension.
The aim of this paper is to answer the natural question of whether these methods can be generalised to give bounds for higher order Schrödinger-type operators and thus “polydiagonal” Jacobi-type matrix operators, which we will define below.
2. Notation and Preliminary Material
For a sequence , let D and D* be the difference operator and its adjoint, respectively, defined by Dφ(n) = φ(n + 1) − φ(n) and D*φ(n) = φ(n − 1) − φ(n). We then denote the discrete one-dimensional Laplacian by ΔD, that is, (ΔDφ)(n) := (D*Dφ)(n) = −φ(n + 1) + 2φ(n) − φ(n − 1). For , , and a sequence , with , we define  by
()
We note that ΔD being self-adjoint immediately implies that  is also self-adjoint.
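A minimal finite-section check of these operators (the boundary handling is a choice made only for this sketch):

```python
# Finite-section check of D, its adjoint, and D*D (sketch only; the truncated
# boundary rows play no role away from the edges).
import numpy as np

N = 40
D = np.zeros((N, N))
for n in range(N - 1):
    D[n, n] = -1.0
    D[n, n + 1] = 1.0        # (D phi)(n) = phi(n+1) - phi(n)

Dstar = D.T                  # adjoint: (D* phi)(n) = phi(n-1) - phi(n) away from edges
Laplacian = Dstar @ D        # (D*D phi)(n) = -phi(n+1) + 2 phi(n) - phi(n-1)

print(Laplacian[5, 4:7])     # expect [-1.  2. -1.] in the interior
```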
Finding an explicit formula for requires a few combinatorial techniques, all of which are standard. Let , for . Then we have the following: (i) , (ii) , and (iii) .
A simple induction argument then delivers our formula for the σth order discrete Laplacian operator as follows:
()
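Iterating ΔD produces interior coefficients (−1)^(k+σ) C(2σ, k) in front of φ(n + σ − k), for k = 0, …, 2σ; this expansion, derived here from the definition of ΔD rather than quoted from the text, can be checked numerically:

```python
# Compare the sigma-fold composition of the discrete Laplacian with the
# coefficients (-1)**(k + sigma) * C(2*sigma, k) at position n + sigma - k
# (our own expansion, inferred from the definition of Delta_D).
import numpy as np
from math import comb

sigma, N = 3, 60
Delta = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
Delta_sigma = np.linalg.matrix_power(Delta, sigma)

n = N // 2                                    # a row far from the boundary
expected = {n + sigma - k: (-1) ** (k + sigma) * comb(2 * sigma, k)
            for k in range(2 * sigma + 1)}
print(all(abs(Delta_sigma[n, m] - c) < 1e-9 for m, c in expected.items()))  # True
```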
Furthermore, in order to identify the essential spectrum, we apply the discrete Fourier transform as follows:
()
which, after some rearrangement, yields
()
The essential spectrum of the operator  will thus be the range of the above symbol, which can be found to be [0, 4^σ].
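Since ΔD acts by φ(n) ↦ −φ(n + 1) + 2φ(n) − φ(n − 1), its symbol is 2 − 2 cos θ, and the symbol of its σth power is (2 − 2 cos θ)^σ; a quick numerical confirmation of the range [0, 4^σ] (under this reading, which is derived from the definitions rather than quoted from the text) is:

```python
# The symbol of Delta_D is 2 - 2*cos(theta); its sigma-th power therefore has
# symbol (2 - 2*cos(theta))**sigma, whose range over [-pi, pi] is [0, 4**sigma].
import numpy as np

sigma = 2
theta = np.linspace(-np.pi, np.pi, 200001)
symbol = (2 - 2 * np.cos(theta)) ** sigma
print(symbol.min(), symbol.max(), 4 ** sigma)   # approximately 0.0, 16.0, 16
```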
3. Main Results
We now let , , be the orthonormal system of eigensequences in ℓ²(ℤ) corresponding to the negative eigenvalues  of the (2σ)th order discrete Schrödinger-type operator as follows:
()
where j ∈ {1, …, N} and we assume that bn ≥ 0 for all n ∈ ℤ. Our next result is concerned with estimating those negative eigenvalues.
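Assuming the operator has the form ΔD^σ − b with bn ≥ 0 (our reading of the setup, not a quotation of it), a finite truncation already exhibits the finitely many negative eigenvalues that Theorem 2 estimates; the sketch below uses illustrative sample data.

```python
# Sketch under the assumption that the operator has the form Delta_D**sigma - b
# with a compactly supported b_n >= 0 (our reading of the setup).
import numpy as np

sigma, N = 2, 200
Delta = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
b = np.zeros(N)
b[N // 2 - 1 : N // 2 + 2] = 3.0             # a small bump potential

H = np.linalg.matrix_power(Delta, sigma) - np.diag(b)
eigs = np.linalg.eigvalsh(H)
print(eigs[eigs < 0])                         # finitely many negative eigenvalues
```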
Theorem 2. Let bn ≥ 0, , γ ≥ 1. Then the negative eigenvalues of the operator satisfy the inequality
()
where
()
Remark 3. As the discrete spectrum of  lies in (−∞, 0] ∪ [4^σ, ∞), we shift our operator to the left by 4^σ and by analogy have an estimate for the positive eigenvalues of that operator, thus immediately obtaining Corollary 4.
Corollary 4. Let bn ≥ 0, , γ ≥ 1. Then the positive eigenvalues of the operator satisfy the inequality:
()
Finally, we will apply these results to obtain spectral bounds for the following operator. We let Wσ be a polydiagonal self-adjoint Jacobi-type matrix operator as follows:
()
viewed as an operator acting on ℓ²(ℤ) as follows: for , i ∈ {1, …, σ},
()
where , , for all i ∈ {1, …, σ}. We denote  where we understand {·} to mean . We are then interested in perturbations of the following special case:
()
where , and explicitly
()
called the free Jacobi-type matrix of order σ. In particular, we examine the case where  is compact. Thus, in what follows, we assume that our coefficient sequences tend to those of the free matrix rapidly enough; that is, , bn → 0, as n → ±∞. Then the essential spectrum σess(Wσ) is given by  and Wσ may have simple eigenvalues  where , and
()
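As a purely generic illustration of the (2σ + 1)-diagonal structure meant by “polydiagonal”, one may form a banded symmetric matrix with placeholder coefficient sequences (these are not the coefficients of Wσ or of the free matrix):

```python
# Generic (2*sigma + 1)-diagonal symmetric matrix, illustrating the banded shape
# of a polydiagonal Jacobi-type operator; the coefficients are placeholders,
# not those defining W_sigma or its free version.
import numpy as np

sigma, N = 2, 30
rng = np.random.default_rng(0)

W = np.diag(rng.normal(scale=0.1, size=N))             # b_n-type diagonal
for i in range(1, sigma + 1):
    band = 1.0 + rng.normal(scale=0.05, size=N - i)    # i-th band coefficients
    W += np.diag(band, k=i) + np.diag(band, k=-i)

assert np.allclose(W, W.T)                              # symmetric, hence self-adjoint
print(np.linalg.eigvalsh(W)[:3])                        # a few of the lowest eigenvalues
```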
Theorem 5. Let γ ≥ 1, , and for all i ∈ {1, …, σ}. Then for the eigenvalues of the operator Wσ we have
()
where
()
4. Auxiliary Results
We require the following discrete Kolmogorov-type inequality.
Lemma 6. For a sequence , and for n > k ≥ 1, we have the following inequality:
()
Proof. We proceed by induction, noting that the initial case, k = 1, n = 2, holds true, as the inequality
()
is in fact the simple inequality found by Copson in [7]. This case, in turn, if used repeatedly, shows that the inequality holds true for all k when n = k + 1. We then take the inductive step on the variable n. Hence we assume that we have the required inequality for k < n ≤ m, given a fixed k, and proceed to prove the statement for n = m + 1. Thus
()
We then apply our induction hypothesis, setting k = m − 1 and n = m, as follows:
()
We now return to the induction hypothesis as follows:
()
We are now equipped to prove an Agmon-Kolmogorov-type inequality.
Proposition 7. For a sequence , we have for any
()
Proof. First we use Lemma 6 with k = 1, n = σ as follows:
()
and we apply this estimate to the well-known discrete Agmon inequality (see [4]):
()
Proposition 8. Let be an orthonormal system of sequences in ; that is, 〈ψj, ψk〉 = δjk, and let . Then
()
Proof. Let . By Proposition 7, we have
()
Let  and, as ,
()
5. Proof of Theorem 2
We take the inner product of (10) with ψj(n) and sum both sides of the equation with respect to j. We obtain
()
We now use Proposition 8 and apply the appropriate form of Hölder’s inequality; that is,
()
We define
()
The latter inequality can be written as
()
The LHS is maximal when
()
Substituting this into (33), we obtain
()
Therefore,
()
We now lift this bound to general moments by using the standard Aizenman-Lieb procedure (see [8]). We let  be the negative eigenvalues of the operator . By the variational principle for the negative eigenvalues  of the operator  we have
()
By this estimate and (38) above, we find that
()
where B(x, y) = Γ(x)Γ(y)/Γ(x + y) is the well-known Beta function. Thus, after a change of variable,
()
completing our proof.
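For the reader's convenience, the lifting identity behind the Aizenman-Lieb procedure, in the form we believe is used above, reads, for γ > 1,
\[
|\lambda|^{\gamma} \;=\; \frac{1}{B(\gamma-1,\,2)}\int_{0}^{\infty} t^{\,\gamma-2}\,\bigl(|\lambda|-t\bigr)_{+}\,dt,
\]
as is seen from the change of variable t = |λ|s.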
6. Proof of Theorem 5
We have the following matrix bounds for square m × m matrices, as given in [2]. For , , we have
()
We thus use this on each block of indices of Wσ as follows:
()
where  is given by
()
that is,
()
Now we relate these to our Schrödinger-type operators:
()
()
Now () are positive eigenvalues of . Thus, by using (43) and the variational principle, we have
()
where  are the positive eigenvalues of
()
Let us now define (bn)+ := max(bn, 0), (bn)− := −min(bn, 0). Then, by Corollary 4 for the positive eigenvalues of our operator, we have
()
Thus, applying (48),
()
where
()
Similarly, using Theorem 2 on (47),
()
Using the following application of Jensen’s inequality, that is, for i ∈ {1, …, 2σ + 1}, let , with q ≥ 1,
()
to each of (51) and (53), we have
()
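The Jensen-type bound referred to above is, as we read it, the standard convexity estimate: for nonnegative terms and q ≥ 1,
\[
\Bigl(\sum_{i=1}^{2\sigma+1} x_i\Bigr)^{q} \;\le\; (2\sigma+1)^{\,q-1}\sum_{i=1}^{2\sigma+1} x_i^{\,q},
\]
which follows by applying Jensen's inequality to the convex function t ↦ t^q and the uniform average of the x_i.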
Summing these two inequalities, we arrive at
()
where
()
and the proof of Theorem 5 is complete.
Conflict of Interests
The author declares that there is no conflict of interests regarding the publication of this paper.