Fine Spectra of Tridiagonal Symmetric Matrices
Abstract
The fine spectra of upper and lower triangular banded matrices were examined by several authors. Here we determine the fine spectra of tridiagonal symmetric infinite matrices and also give the explicit form of the resolvent operator for the sequence spaces c0, c, ℓ1, and ℓ∞.
1. Introduction
The spectrum of an operator is a generalization of the notion of eigenvalues for matrices. The spectrum over a Banach space is partitioned into three parts: the point spectrum, the continuous spectrum, and the residual spectrum. Determining these three parts of the spectrum of an operator is called computing the fine spectrum of the operator.
The spectrum and fine spectrum of linear operators defined by particular limitation matrices over various sequence spaces have been studied by several authors. We summarize the existing literature concerning the spectrum and the fine spectrum. Wenger [1] examined the fine spectrum of the integer power of the Cesàro operator over c, and Rhoades [2] generalized this result to weighted mean methods. Reade [3] worked on the spectrum of the Cesàro operator over the sequence space c0. Gonzáles [4] studied the fine spectrum of the Cesàro operator over the sequence space ℓp. Okutoyi [5] computed the spectrum of the Cesàro operator over the sequence space bv. Recently, Rhoades and Yildirim [6] examined the fine spectrum of factorable matrices over c0 and c. Coşkun [7] studied the spectrum and fine spectrum of the p-Cesàro operator acting on the space c0. Akhmedov and Başar [8, 9] determined the fine spectrum of the Cesàro operator over the sequence spaces c0, ℓ∞, and ℓp. In a recent paper, Furkan et al. [10] determined the fine spectrum of B(r, s, t) over the sequence spaces c0 and c, where B(r, s, t) is a lower triangular triple-band matrix. Later, Altun and Karakaya [11] computed the fine spectra of Lacunary matrices over c0 and c.
In this work, our purpose is to determine the fine spectra of the operator, for which the corresponding matrix is a tridiagonal symmetric matrix, over the sequence spaces c0, c, ℓ1, and ℓ∞. Also we will give the explicit form of the resolvent for this operator and compute the norm of the resolvent operator when it exists and is continuous.
Let X ≠ {0} be a complex normed space, and let T : D(T) → X be a linear operator with domain D(T) ⊂ X. With T we associate the operator Tλ = T − λI, where λ is a complex number and I is the identity operator on D(T). If Tλ has an inverse which is linear, we denote it by Tλ−1 and call it the resolvent operator of T. A regular value λ of T is a complex number such that
- (R1) Tλ−1 exists,
- (R2) Tλ−1 is bounded, and
- (R3) Tλ−1 is defined on a set which is dense in X.
The resolvent set ρ(T) of T is the set of all regular values λ of T. Its complement σ(T) = ℂ∖ρ(T) in the complex plane ℂ is called the spectrum of T. Furthermore, the spectrum σ(T) is partitioned into three disjoint sets as follows: the point spectrum σp(T) is the set of λ such that Tλ−1 does not exist; a λ ∈ σp(T) is called an eigenvalue of T. The continuous spectrum σc(T) is the set of λ such that Tλ−1 exists and satisfies (R3) but not (R2). The residual spectrum σr(T) is the set of λ such that Tλ−1 exists but does not satisfy (R3).
Theorem 1.1 (cf. [13]). Let T be an operator with the associated matrix A = (ank).
- (i)
T ∈ B(c) if and only if
(1.6) sup_n Σ_k |ank| < ∞,
(1.7) lim_n ank = ak exists for each k,
(1.8) lim_n Σ_k ank exists.
- (ii)
T ∈ B(c0) if and only if (1.6) and (1.7) with ak = 0 for each k.
- (iii)
T ∈ B(ℓ∞) if and only if (1.6).
- (iv)
T ∈ B(ℓ1) if and only if
(1.10) sup_k Σ_n |ank| < ∞.
In cases (i)–(iii) the operator norm of T is ∥T∥ = sup_n Σ_k |ank|, and in case (iv) it is ∥T∥ = sup_k Σ_n |ank|.
Here S(q, r) denotes the tridiagonal symmetric infinite matrix with q on the main diagonal and r on the two neighboring diagonals:

S(q, r) =
[ q r 0 0 ⋯ ]
[ r q r 0 ⋯ ]
[ 0 r q r ⋯ ]
[ ⋮ ⋮ ⋮ ⋮ ⋱ ]

Corollary 1.2. Let μ ∈ {c0, c, ℓ1, ℓ∞}. Then S(q, r) : μ → μ is a bounded linear operator and ∥S(q, r)∥(μ:μ) = |q| + 2|r|.
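The norm identity of Corollary 1.2 can be checked numerically on finite truncations (this sketch is illustrative and not part of the original argument; the values q = 3, r = 2 and the truncation size are arbitrary). By Theorem 1.1, the ℓ∞ operator norm of a matrix is the supremum of the absolute row sums, and for S(q, r) every row beyond the first sums to |q| + 2|r|.

```python
import numpy as np

def truncated_S(q, r, n):
    """n x n section of the tridiagonal symmetric matrix S(q, r)."""
    return q * np.eye(n) + r * (np.eye(n, k=1) + np.eye(n, k=-1))

q, r, n = 3.0, 2.0, 50
S = truncated_S(q, r, n)

# l_infinity operator norm = maximum absolute row sum; for the infinite
# matrix this supremum equals |q| + 2|r| (attained on every interior row).
row_sup = np.max(np.abs(S).sum(axis=1))
assert row_sup == abs(q) + 2 * abs(r)
```

Since S(q, r) is symmetric, the supremum of the column sums (the ℓ1 norm of Theorem 1.1(iv)) gives the same value.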
2. The Spectra and Point Spectra
Theorem 2.1. σp(S, μ) = ∅ for μ ∈ {ℓ1, c0, c}.
Proof. Since ℓ1 ⊂ c0 ⊂ c, it is enough to show that σp(S, c) = ∅. Let λ be an eigenvalue of the operator S. An eigenvector x = (x0, x1, …) ∈ c corresponding to this eigenvalue satisfies the linear system of equations

qx0 + rx1 = λx0,
rx_{n−1} + qx_n + rx_{n+1} = λx_n (n ≥ 1).

With p = (q − λ)/r, this becomes the recurrence relation x_{n+1} + px_n + x_{n−1} = 0 with x1 = −px0, whose characteristic polynomial is x² + px + 1.
Case 1 (p = −2). Then the characteristic polynomial has only one root: α = 1. Hence, the solution of the recurrence relation is of the form x_n = (c1 + c2·n), and the initial conditions give x_n = (n + 1)x0, which does not converge unless x0 = 0. So, there is no eigenvalue in this case.
Case 2 (p = 2). Then the characteristic polynomial has only one root: α = −1. The solution of the recurrence relation, found as in Case 1, is x_n = (n + 1)(−1)^n x0, which is unbounded unless x0 = 0. So, there is no eigenvalue in this case.
Case 3 (p ≠ ±2). Then the characteristic polynomial has two distinct roots α1 ≠ ±1 and α2 ≠ ±1 with α1α2 = 1. Let |α1| ≥ 1 ≥ |α2|. The solution of the recurrence relation is of the form x_n = c1α1^n + c2α2^n. If |α1| > 1, then convergence of (x_n) forces c1 = 0, and the condition x1 = −px0 = (α1 + α2)x0 then forces c2 = 0. If |α1| = |α2| = 1, then since α1 ≠ ±1 the sequence (x_n) converges only when c1 = c2 = 0. In both cases x = 0, so there is no eigenvalue in this case either.
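The case analysis above can be explored numerically by iterating the recurrence x_{n+1} = −px_n − x_{n−1} with x1 = −px0 (a sketch with illustrative sample values of p):

```python
import numpy as np

def recurrence(p, n_terms, x0=1.0):
    """Iterate x_{n+1} + p x_n + x_{n-1} = 0 with x1 = -p x0."""
    x = [x0, -p * x0]
    for _ in range(n_terms - 2):
        x.append(-p * x[-1] - x[-2])
    return np.array(x)

# p = -2 (Case 1): the solution is x_n = (n + 1) x0, which grows linearly,
# so it cannot be an eigenvector in c.
x = recurrence(-2.0, 10)
assert np.allclose(x, np.arange(1, 11))

# |p| < 2 (Case 3 with both roots on the unit circle): the solution stays
# bounded but does not converge, so it lies in l_infinity but not in c.
x = recurrence(1.0, 1000)
assert np.max(np.abs(x)) <= 1.0
```

The bounded, non-convergent solutions for −2 < p < 2 are exactly what produces point spectrum over ℓ∞ but not over c.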
Repeating the steps of this proof for ℓ∞, where an eigenvector need only be bounded rather than convergent, we obtain the following.
Theorem 2.2. σp(S, ℓ∞) = (q − 2r, q + 2r).
Theorem 2.3. Let p = (q − λ)/r, and let α1 and α2 be the roots of the polynomial P(x) = x² + px + 1 with |α2| > 1 > |α1|. Then the resolvent operator over c0 is Sλ−1 = (snk), where

snk = (α1^(|n−k|+1) − α1^(n+k+3)) / (r(α1² − 1)), n, k = 0, 1, 2, ….
Proof. Let α1 and α2 be as stated in the theorem. From (1/r)Sλx = y we get the system of equations

px0 + x1 = y0,
x_{n−1} + px_n + x_{n+1} = y_n (n ≥ 1).

Solving these equations successively expresses x in terms of y and yields the entries of the resolvent matrix.
If T : μ → μ (μ is ℓ1 or c0) is a bounded linear operator represented by the matrix A, then it is known that the adjoint operator T* : μ* → μ* is represented by the transpose At of the matrix A. It should be noted that the dual space of c0 is isometrically isomorphic to the Banach space ℓ1, and the dual space of ℓ1 is isometrically isomorphic to the Banach space ℓ∞.
Corollary 2.4. σ(S, μ)⊂[q − 2r, q + 2r] for μ ∈ {ℓ1, c0, c, ℓ∞}.
Proof. Since S is a symmetric matrix, its adjoint is represented by the same matrix, and the spectrum of a bounded operator coincides with that of its adjoint; hence σ(S, c0) = σ(S*, ℓ1) = σ(S, ℓ1) and σ(S, ℓ1) = σ(S*, ℓ∞) = σ(S, ℓ∞). And by Cartlidge [14], if a matrix operator A is bounded on c, then σ(A, c) = σ(A, ℓ∞). Hence we have σ(S, c0) = σ(S, ℓ1) = σ(S, ℓ∞) = σ(S, c). What remains is to show that σ(S, c0)⊂[q − 2r, q + 2r]. By Theorem 2.3, there exists a resolvent operator of Sλ which is continuous and whose domain is the whole space c0 whenever the roots of the polynomial P(x) = x² + px + 1 satisfy |α2| > 1 > |α1|. Since α1α2 = 1, this condition fails only when |α1| = |α2| = 1, which happens exactly when p = (q − λ)/r ∈ [−2, 2], that is, when λ ∈ [q − 2r, q + 2r]. Hence every λ ∉ [q − 2r, q + 2r] is a regular value, and σ(S, c0)⊂[q − 2r, q + 2r].
Theorem 2.5. σ(S, μ) = [q − 2r, q + 2r] for μ ∈ {ℓ1, c0, c, ℓ∞}.
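Theorem 2.5 is consistent with the behavior of finite truncations: it is classical that the N × N tridiagonal Toeplitz section of S(q, r) has eigenvalues q + 2r·cos(kπ/(N + 1)), k = 1, …, N, which fill out [q − 2r, q + 2r] as N grows. A numerical sketch (parameters chosen only for illustration):

```python
import numpy as np

q, r, n = 3.0, 2.0, 200
S = q * np.eye(n) + r * (np.eye(n, k=1) + np.eye(n, k=-1))

eigs = np.linalg.eigvalsh(S)

# Classical closed form for the eigenvalues of a finite symmetric
# tridiagonal Toeplitz matrix.
expected = q + 2 * r * np.cos(np.arange(1, n + 1) * np.pi / (n + 1))
assert np.allclose(np.sort(eigs), np.sort(expected))

# All eigenvalues lie strictly inside [q - 2r, q + 2r] and approach
# the endpoints as n grows.
assert eigs.min() > q - 2 * r and eigs.max() < q + 2 * r
assert abs(eigs.min() - (q - 2 * r)) < 1e-2
assert abs(eigs.max() - (q + 2 * r)) < 1e-2
```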
3. The Continuous Spectra and Residual Spectra
Lemma 3.1 (see [15], page 59). T has a dense range if and only if T* is one to one.
Corollary 3.2. If T ∈ (μ : μ) then σr(T, μ) = σp(T*, μ*)∖σp(T, μ).
Theorem 3.3. σr(S, c0) = ∅.

Proof. By Corollary 3.2 and Theorem 2.1, σr(S, c0) = σp(S*, ℓ1)∖σp(S, c0) = σp(S, ℓ1)∖σp(S, c0) = ∅.
Theorem 3.4. σr(S, ℓ1) = (q − 2r, q + 2r).
Proof. As in the proof of the previous theorem, Corollary 3.2 together with Theorems 2.1 and 2.2 gives σr(S, ℓ1) = σp(S*, ℓ∞)∖σp(S, ℓ1) = σp(S, ℓ∞)∖σp(S, ℓ1) = (q − 2r, q + 2r).
Theorem 3.5. σr(S, c) = {q + 2r}.
Proof. Let x = (x0, x1, …) ∈ ℂ ⊕ ℓ1 be an eigenvector of S* corresponding to the eigenvalue λ. Then we have (2r + q)x0 = λx0 and Sx′ = λx′, where x′ = (x1, x2, …). By Theorem 2.1, x′ = (0, 0, …). Then x0 ≠ 0, since otherwise x = 0. And λ = 2r + q is the only value that satisfies (2r + q)x0 = λx0. Hence σp(S*, c*) = {2r + q}. Then, by Corollary 3.2, σr(S, c) = σp(S*, c*)∖σp(S, c) = {2r + q}.
Now, since the spectrum σ is the disjoint union of σp, σr, and σc, we can find σc over the spaces ℓ1, c0, and c. So we have the following.
Theorem 3.6. For the operator S, one has the following:
- (i) σc(S, c0) = [q − 2r, q + 2r],
- (ii) σc(S, c) = [q − 2r, q + 2r),
- (iii) σc(S, ℓ1) = {q − 2r, q + 2r}.
4. The Resolvent Operator
The following theorem is a generalization of Theorem 2.3.
Theorem 4.1. Let μ ∈ {c0, c, ℓ1, ℓ∞}. The resolvent operator S−1 over μ exists and is continuous, and the domain of S−1 is the whole space μ, if and only if 0 ∉ [q − 2r, q + 2r]. In this case, S−1 has a matrix representation (snk) defined by

snk = (α1^(|n−k|+1) − α1^(n+k+3)) / (r(α1² − 1)),

where α1 is the root of P(x) = rx² + qx + r with |α1| < 1.
Proof. Let μ be one of the sequence spaces in {c0, c, ℓ1, ℓ∞}. Suppose S has a continuous resolvent operator where the domain of the resolvent operator is the whole space μ. Then λ = 0 is not in σ(S, μ) = [q − 2r, q + 2r]. Conversely if 0 ∉ [q − 2r, q + 2r], then S has a continuous resolvent operator, and since S is bounded by Lemma 7.2-7.3 of [12] the domain of this resolvent operator is the whole space μ.
Now, suppose 0 ∉ [q − 2r, q + 2r]. Let α1 and α2 be the roots of the polynomial P(x) = rx² + qx + r, where |α1| ≤ |α2|. Since 0 ∉ [q − 2r, q + 2r], by the proof of Corollary 2.4 we have |α1| ≠ |α2|. Then |α1| < 1 < |α2|, since α1α2 = 1. So S satisfies the conditions of Theorem 2.3 with λ = 0. Hence the resolvent operator of S is represented by the matrix S−1 = (snk) defined by

snk = (α1^(|n−k|+1) − α1^(n+k+3)) / (r(α1² − 1)).
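The closed form for (snk) can be compared with a direct numerical inversion of a large truncation; truncation effects are only felt near the lower-right corner, so the upper-left block of the inverse matches the formula to high accuracy. The parameters below (q = 5, r = 1, so that 0 ∉ [3, 7]) and the truncation size are illustrative choices.

```python
import numpy as np

q, r = 5.0, 1.0          # spectrum [q - 2r, q + 2r] = [3, 7]; 0 lies outside
n = 400

# Root of r x^2 + q x + r with |alpha1| < 1.
alpha1, alpha2 = np.roots([r, q, r])
if abs(alpha1) > abs(alpha2):
    alpha1, alpha2 = alpha2, alpha1
assert abs(alpha1) < 1 < abs(alpha2)

S = q * np.eye(n) + r * (np.eye(n, k=1) + np.eye(n, k=-1))
S_inv = np.linalg.inv(S)

# Formula: s_nk = (alpha1^(|n-k|+1) - alpha1^(n+k+3)) / (r (alpha1^2 - 1)).
i, k = np.indices((30, 30))
formula = (alpha1 ** (np.abs(i - k) + 1) - alpha1 ** (i + k + 3)) / (r * (alpha1 ** 2 - 1))
assert np.allclose(S_inv[:30, :30], formula)
```

In particular the diagonal entry s00 = −α1/r, which the truncated inverse reproduces.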
Remark 4.2. If a matrix A is a triangle, we can see that the resolvent (when it exists) is the unique lower triangular left hand inverse of A. In our case, S is far from being a triangle, and the matrix S−1 of this theorem is not the unique left inverse of the matrix S for 0 ∉ [q − 2r, q + 2r]: other matrices T = (tnk) with TS = I can be constructed.
Theorem 4.3. Let 0 ∉ [q − 2r, q + 2r], and let α1 be the root of P(x) = rx² + qx + r with |α1| < 1. Then for μ ∈ {c0, c, ℓ1, ℓ∞} we have

∥S−1∥(μ:μ) = |α1| / (|r|(1 − |α1|)²).
Proof. Since S−1 is a symmetric matrix, the supremum of the ℓ1 norms of the rows is equal to the supremum of the ℓ1 norms of the columns. So, according to Theorem 1.1, what we need is to calculate the supremum of the ℓ1 norms of the rows of S−1. Denote the nth row of S−1 by sn for n = 0, 1, …. Now, let us fix the row n and calculate the ℓ1 norm of this row, ∥sn∥1 = Σ_k |snk|. Since |n − k| and n + k have the same parity and α1 is real with |α1| < 1, we have |α1^(|n−k|+1) − α1^(n+k+3)| = |α1|^(|n−k|+1) − |α1|^(n+k+3). By using Theorem 4.1, we have

∥sn∥1 = Σ_k (|α1|^(|n−k|+1) − |α1|^(n+k+3)) / (|r|(1 − α1²)) = |α1|(1 − |α1|^(n+1)) / (|r|(1 − |α1|)²),

which increases with n. Hence sup_n ∥sn∥1 = |α1| / (|r|(1 − |α1|)²), which proves the theorem.
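The row-sum computation can again be checked on truncations: the ℓ1 norms of the rows of the inverse increase toward the stated supremum. The parameters q = 5, r = 1 below are illustrative; for these values the closed form evaluates to 1/3.

```python
import numpy as np

q, r = 5.0, 1.0                             # 0 outside [q - 2r, q + 2r] = [3, 7]
alpha1 = min(np.roots([r, q, r]), key=abs)  # root with |alpha1| < 1

# Closed-form operator norm |alpha1| / (|r| (1 - |alpha1|)^2).
norm = abs(alpha1) / (abs(r) * (1 - abs(alpha1)) ** 2)

n = 500
S = q * np.eye(n) + r * (np.eye(n, k=1) + np.eye(n, k=-1))
row_sums = np.abs(np.linalg.inv(S)).sum(axis=1)

# Truncated row sums stay below the supremum and approach it.
assert row_sums.max() <= norm + 1e-9
assert abs(row_sums.max() - norm) < 1e-6
```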
Acknowledgment
The author thanks the referees for their careful reading of the original paper and for their valuable comments.