Left and Right Inverse Eigenpairs Problem for κ-Hermitian Matrices
Abstract
The left and right inverse eigenpairs problem for κ-hermitian matrices and its optimal approximation problem are considered. Based on the special properties of κ-hermitian matrices, an equivalent problem is obtained. Combining this with a new inner product of matrices, the necessary and sufficient conditions for the solvability of the problem and an expression for its general solution are derived. Furthermore, the optimal approximate solution and a calculation procedure to obtain it are provided.
1. Introduction
Throughout this paper we use the following notation. Let Cn×m be the set of all n × m complex matrices; UCn×n, HCn×n, and SHCn×n denote the sets of all n × n unitary matrices, hermitian matrices, and skew-hermitian matrices, respectively. Let Ā, AH, and A+ be the conjugate, conjugate transpose, and Moore-Penrose generalized inverse of A, respectively. For A, B ∈ Cn×m, 〈A, B〉 = re(tr (BHA)), where re(tr (BHA)) denotes the real part of tr (BHA), defines an inner product of the matrices A and B. The induced matrix norm is the Frobenius norm, that is, ∥A∥ = 〈A, A〉1/2 = (tr (AHA))1/2.
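For readers who want to check the computations numerically, the following Python/NumPy sketch implements this inner product and the induced Frobenius norm (the function names inner and fro_norm are ours, not from the paper).

```python
import numpy as np

def inner(A, B):
    """<A, B> = re(tr(B^H A)), the real inner product used in the paper."""
    return np.trace(B.conj().T @ A).real

def fro_norm(A):
    """Frobenius norm induced by the inner product: ||A|| = <A, A>^(1/2)."""
    return np.sqrt(inner(A, A))

# sanity check against NumPy's built-in Frobenius norm
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
assert np.isclose(fro_norm(A), np.linalg.norm(A, 'fro'))
```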
Hill and Waters [7] introduced the following matrices.
Definition 1. Let κ be a fixed product of disjoint transpositions, and let K be the associated permutation matrix, that is, K = (eκ(1), eκ(2), …, eκ(n)), K2 = In, where ei denotes the ith column of In. A matrix A = (aij) ∈ Cn×n is said to be a κ-hermitian matrix (skew κ-hermitian matrix) if and only if aij = āκ(j)κ(i) (aij = −āκ(j)κ(i)), i, j = 1, …, n. We denote the set of κ-hermitian matrices (skew κ-hermitian matrices) by KHCn×n (SKHCn×n).
From Definition 1, it is easy to see that hermitian matrices and perhermitian matrices are special cases of κ-hermitian matrices, with κ(i) = i and κ(i) = n − i + 1, respectively. Hermitian and perhermitian matrices, which are two of the twelve symmetry patterns of matrices considered in [8], are applied in engineering, statistics, and so on [9, 10].
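As a small illustration (the helper names permutation_matrix and is_kappa_hermitian are ours, and κ is assumed to be supplied as a 0-indexed involution array), the following Python/NumPy sketch builds K and tests κ-hermitian-ness via the characterization A = KAHK stated in conclusion (1) below; with κ(i) = i it recovers the hermitian case.

```python
import numpy as np

def permutation_matrix(kappa):
    """K with K e_i = e_{kappa(i)}; kappa is a 0-indexed product of disjoint transpositions."""
    n = len(kappa)
    K = np.eye(n)[:, kappa]
    assert np.allclose(K @ K, np.eye(n)), "kappa must be an involution"
    return K

def is_kappa_hermitian(A, K, tol=1e-12):
    """Check A = K A^H K (the characterization in conclusion (1) below)."""
    return np.allclose(A, K @ A.conj().T @ K, atol=tol)

n = 4
K_id  = permutation_matrix(np.arange(n))             # kappa(i) = i: the hermitian case
K_per = permutation_matrix(np.arange(n)[::-1])       # kappa(i) = n - i + 1: the perhermitian case

rng = np.random.default_rng(0)
H = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = 0.5 * (H + H.conj().T)                           # an ordinary hermitian matrix
assert is_kappa_hermitian(A, K_id)                   # hermitian matrices are kappa-hermitian for kappa(i) = i
```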
From Definition 1, it is easy to obtain the following conclusions.
(1) A ∈ KHCn×n if and only if A = KAHK.
(2) A ∈ SKHCn×n if and only if A = −KAHK.
(3) If K is a fixed permutation matrix, then KHCn×n and SKHCn×n are closed linear subspaces of Cn×n and satisfy
Cn×n = KHCn×n ⊕ SKHCn×n. (2)
(4) A ∈ KHCn×n if and only if there is a matrix Ã ∈ HCn×n such that A = KÃ.
(5) A ∈ SKHCn×n if and only if there is a matrix Ã ∈ SHCn×n such that A = KÃ.
Proof. (1) From Definition 1, if A = (aij) ∈ KHCn×n, then aij = āκ(j)κ(i), i, j = 1, …, n; since (KAHK)ij = āκ(j)κ(i), this is equivalent to A = KAHK.
(2) The conclusion (2) can be proved with the same method, so the proof is omitted.
(3) (a) For any A ∈ Cn×n, there exist A1 ∈ KHCn×n and A2 ∈ SKHCn×n such that
A = A1 + A2, (3)
where A1 = (1/2)(A + KAHK), A2 = (1/2)(A − KAHK).
(b) If there exist another Ã1 ∈ KHCn×n and Ã2 ∈ SKHCn×n such that
A = Ã1 + Ã2, (4)
then (3)-(4) yields
(A1 − Ã1) + (A2 − Ã2) = 0. (5)
Taking the conjugate transpose of (5) and multiplying it on the left and on the right by K, respectively, and according to (1) and (2), we obtain
(A1 − Ã1) − (A2 − Ã2) = 0. (6)
Combining (5) and (6) gives A1 = Ã1, A2 = Ã2.
(c) For any A1 ∈ KHCn×n and A2 ∈ SKHCn×n, we have
〈A1, A2〉 = re(tr (A2HA1)) = re(tr ((−KA2K)(KA1HK))) = −re(tr (A2A1H)) = −re(tr (A2HA1)) = −〈A1, A2〉. (7)
This implies 〈A1, A2〉 = 0. Combining (a), (b), and (c) gives the conclusion (3).
(4) Let Ã = KA. If A ∈ KHCn×n, then ÃH = AHK = (KAK)K = KA = Ã, so Ã ∈ HCn×n and A = KÃ. Conversely, if A = KÃ with Ã ∈ HCn×n, then KAHK = K(ÃK)K = KÃ = A, so A ∈ KHCn×n.
(5) The conclusion (5) can be proved with the same method. The proof is completed.
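A quick numerical spot-check of the conclusions (3) and (4) on a randomly generated matrix (the particular involution κ chosen here is arbitrary and not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
kappa = np.array([1, 0, 2, 4, 3])                    # an involution
K = np.eye(n)[:, kappa]

A  = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A1 = 0.5 * (A + K @ A.conj().T @ K)                  # A1 = (1/2)(A + K A^H K)
A2 = 0.5 * (A - K @ A.conj().T @ K)                  # A2 = (1/2)(A - K A^H K)

assert np.allclose(A, A1 + A2)                       # A = A1 + A2, as in (3)
assert np.allclose(A1, K @ A1.conj().T @ K)          # A1 in KHC^{n x n}
assert np.allclose(A2, -K @ A2.conj().T @ K)         # A2 in SKHC^{n x n}
assert abs(np.trace(A2.conj().T @ A1).real) < 1e-10  # <A1, A2> = 0

assert np.allclose(K @ A1, (K @ A1).conj().T)        # conclusion (4): K A1 is hermitian
```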
In this paper, we suppose that K is a fixed permutation matrix and let (λi, xi), i = 1, …, h, be right eigenpairs of A and (μj, yj), j = 1, …, l, be left eigenpairs of A. If we let X = (x1, …, xh) ∈ Cn×h, Λ = diag (λ1, …, λh) ∈ Ch×h, Y = (y1, …, yl) ∈ Cn×l, Γ = diag (μ1, …, μl) ∈ Cl×l, then the problems studied in this paper can be described as follows.
Problem 2. Given X ∈ Cn×h, Λ = diag (λ1, …, λh) ∈ Ch×h, Y ∈ Cn×l, Γ = diag (μ1, …, μl) ∈ Cl×l, find A ∈ KHCn×n such that
AX = XΛ, YTA = ΓYT.
Problem 3. Given B ∈ Cn×n, find Â ∈ SE such that
∥Â − B∥ = minA∈SE ∥A − B∥,
where SE is the solution set of Problem 2.
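One convenient way to generate test data for which Problem 2 is solvable is to start from a known κ-hermitian matrix and read off its right and left eigenpairs; a minimal sketch under that assumption (the names and the random data are ours, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
n, h, l = 6, 3, 3
kappa = np.array([1, 0, 3, 2, 4, 5])                 # an involution
K = np.eye(n)[:, kappa]

# start from a known kappa-hermitian matrix, so Problem 2 is solvable by construction
H = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A_true = K @ (H + H.conj().T)                        # K times a hermitian matrix (conclusion (4))

lam, V = np.linalg.eig(A_true)                       # right eigenpairs: A x = lambda x
mu,  W = np.linalg.eig(A_true.T)                     # left eigenpairs:  y^T A = mu y^T

X, Lam = V[:, :h], np.diag(lam[:h])
Y, Gam = W[:, :l], np.diag(mu[:l])

assert np.allclose(A_true @ X, X @ Lam)
assert np.allclose(Y.T @ A_true, Gam @ Y.T)
```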
This paper is organized as follows. In Section 2, we first obtain an equivalent problem using the properties of KHCn×n and then derive the solvability conditions of Problem 2 and an expression for its general solution. In Section 3, we first establish the existence and uniqueness theorem for Problem 3 and then present the unique approximate solution. Finally, we provide a calculation procedure to compute the unique approximate solution and a numerical experiment to illustrate the results obtained in this paper.
2. Solvability Conditions of Problem 2
We first discuss the properties of KHCn×n.
Lemma 4. Denote M = KEKGE, where E ∈ HCn×n; one has the following conclusions.
(1) If G ∈ KHCn×n, then M ∈ KHCn×n.
(2) If G ∈ SKHCn×n, then M ∈ SKHCn×n.
(3) If G = G1 + G2, where G1 ∈ KHCn×n, G2 ∈ SKHCn×n, then M ∈ KHCn×n if and only if KEKG2E = 0. In addition, one has M = KEKG1E.
Proof. (1) KMHK = KEGHKEKK = KE(KGK)KE = KEKGE = M.
Hence, we have M ∈ KHCn×n.
(2) KMHK = KEGHKEKK = KE(−KGK)KE = −KEKGE = −M.
Hence, we have M ∈ SKHCn×n.
(3) Since M = KEK(G1 + G2)E = KEKG1E + KEKG2E, we have KEKG1E ∈ KHCn×n and KEKG2E ∈ SKHCn×n from (1) and (2). If M ∈ KHCn×n, then M − KEKG1E ∈ KHCn×n, while M − KEKG1E = KEKG2E ∈ SKHCn×n. Therefore, from the conclusion (3) of Definition 1, we have KEKG2E = 0, that is, M = KEKG1E. On the contrary, if KEKG2E = 0, it is clear that M = KEKG1E ∈ KHCn×n. The proof is completed.
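A numerical spot-check of the conclusion (1) of Lemma 4 on random E ∈ HCn×n and G ∈ KHCn×n (illustrative only; the involution κ is our choice):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
kappa = np.array([2, 1, 0, 4, 3])                    # an involution
K = np.eye(n)[:, kappa]

E = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
E = 0.5 * (E + E.conj().T)                           # E in HC^{n x n}
G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
G = 0.5 * (G + K @ G.conj().T @ K)                   # G in KHC^{n x n}

M = K @ E @ K @ G @ E                                # M = KEKGE
assert np.allclose(M, K @ M.conj().T @ K)            # conclusion (1): M in KHC^{n x n}
```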
Lemma 5. Let A ∈ KHCn×n. If (λ, x) is a right eigenpair of A, then (λ̄, Kx̄) is a left eigenpair of A.
Proof. If (λ, x) is a right eigenpair of A, then we have
Ax = λx.
Combining this with the conclusion (1) of Definition 1 gives
(Kx̄)TA = x̄TKA = x̄TK(KAHK) = x̄TAHK = (Ax)HK = λ̄x̄TK = λ̄(Kx̄)T,
that is, (λ̄, Kx̄) is a left eigenpair of A.
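The lemma can be spot-checked numerically as follows (random κ-hermitian matrix; names and data are ours):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5
kappa = np.array([1, 0, 2, 4, 3])                    # an involution
K = np.eye(n)[:, kappa]

H = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = K @ (H + H.conj().T)                             # a kappa-hermitian matrix (conclusion (4))

lam, V = np.linalg.eig(A)
lam0, x = lam[0], V[:, 0]                            # a right eigenpair (lambda, x)

y = K @ x.conj()                                     # candidate left eigenvector K x-bar
assert np.allclose(y @ A, np.conj(lam0) * y)         # (lambda-bar, K x-bar) is a left eigenpair
```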
From Lemma 5, the given left eigenpairs of Problem 2 can be taken to be generated by the given right eigenpairs; that is, in the rest of this paper we let
Y = KX̄, Γ = Λ̄. (13)
Combining (13) and the conclusion (4) of Definition 1, it is easy to derive the following lemma.
Lemma 6. If X, Λ, Y, Γ are given by (13), then Problem 2 is equivalent to the following problem: find KA ∈ HCn×n such that
KAX = KXΛ.
Lemma 7 (see [11]). Given X ∈ Cn×h and B ∈ Cn×h, the matrix equation AX = B has a solution A ∈ HCn×n if and only if
BX+X = B, XHB = BHX.
Moreover, its general solution can be expressed as
A = BX+ + (BX+)H(In − XX+) + (In − XX+)G(In − XX+), ∀G ∈ HCn×n.
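Assuming the standard form of this lemma as reconstructed above (reference [11] is not reproduced here), the following sketch checks the solvability conditions and one member of the solution family on a consistent instance; all names and the random data are ours.

```python
import numpy as np

rng = np.random.default_rng(5)
n, h = 6, 3

# build a consistent right-hand side: B = A0 X with A0 hermitian
X  = rng.standard_normal((n, h)) + 1j * rng.standard_normal((n, h))
A0 = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A0 = 0.5 * (A0 + A0.conj().T)
B  = A0 @ X

Xp = np.linalg.pinv(X)                               # Moore-Penrose inverse X^+
I  = np.eye(n)

# solvability conditions of Lemma 7
assert np.allclose(B @ Xp @ X, B)
assert np.allclose(X.conj().T @ B, B.conj().T @ X)

# one member of the solution family (take the arbitrary hermitian G to be zero)
A = B @ Xp + (B @ Xp).conj().T @ (I - X @ Xp)
assert np.allclose(A, A.conj().T) and np.allclose(A @ X, B)
```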
Theorem 8. If X, Λ, Y, Γ are given by (13), then Problem 2 has a solution in KHCn×n if and only if
XHKXΛ = ΛHXHKX, XΛX+X = XΛ. (17)
Moreover, the general solution can be expressed as
A = A0 + K(In − XX+)G(In − XX+), ∀G ∈ HCn×n, (18)
where
A0 = XΛX+ + K(KXΛX+)H(In − XX+). (19)
Proof. Necessity: If there is a matrix A ∈ KHCn×n such that AX = XΛ, YTA = ΓYT, then from Lemma 6, there exists a matrix KA ∈ HCn×n such that KAX = KXΛ, and according to Lemma 7, we have
KXΛX+X = KXΛ, XHKXΛ = (KXΛ)HX. (20)
Since K2 = In, (20) is equivalent to (17).
Sufficiency: If (17) holds, then (20) holds. Hence, by Lemma 7, the matrix equation KAX = KXΛ has a solution KA ∈ HCn×n, and the general solution can be expressed as follows:
KA = KXΛX+ + (KXΛX+)H(In − XX+) + (In − XX+)G(In − XX+), ∀G ∈ HCn×n.
Multiplying this on the left by K and noting K2 = In gives (18) with A0 as in (19). By the conclusion (4) of Definition 1, A = K(KA) ∈ KHCn×n; moreover, AX = K(KAX) = K(KXΛ) = XΛ and, by Lemma 6, YTA = ΓYT. The proof is completed.
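The following sketch checks the conditions (17) and the solution family (18)-(19) as reconstructed above, on data generated from a known κ-hermitian matrix with Y, Γ chosen as in (13); all names and the random data are ours, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(6)
n, h = 6, 3
kappa = np.array([1, 0, 3, 2, 5, 4])                 # an involution
K = np.eye(n)[:, kappa]
I = np.eye(n)

# data generated from a known kappa-hermitian matrix, so (17) is expected to hold
H = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A_true = K @ (H + H.conj().T)
lam, V = np.linalg.eig(A_true)
X, Lam = V[:, :h], np.diag(lam[:h])
Y, Gam = K @ X.conj(), Lam.conj()                    # left eigen-data as in (13)

Xp = np.linalg.pinv(X)
S = X.conj().T @ K @ X @ Lam                         # solvability conditions (17)
assert np.allclose(S, S.conj().T)
assert np.allclose(X @ Lam @ Xp @ X, X @ Lam)

A0 = X @ Lam @ Xp + K @ (K @ X @ Lam @ Xp).conj().T @ (I - X @ Xp)   # (19)
G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
G = 0.5 * (G + G.conj().T)                           # an arbitrary hermitian G
A = A0 + K @ (I - X @ Xp) @ G @ (I - X @ Xp)         # a member of the family (18)

assert np.allclose(A, K @ A.conj().T @ K)            # A is kappa-hermitian
assert np.allclose(A @ X, X @ Lam)                   # right eigen-equations
assert np.allclose(Y.T @ A, Gam @ Y.T)               # left eigen-equations
```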
3. An Expression of the Solution of Problem 3
From (18), it is easy to prove that the solution set SE of Problem 2 is a nonempty closed convex set if Problem 2 has a solution in KHCn×n. We claim that for any given B ∈ Cn×n, there exists a unique optimal approximation for Problem 3.
Theorem 9. Given B ∈ Cn×n, if the conditions on X, Y, Λ, Γ are the same as those in Theorem 8, then Problem 3 has a unique solution Â. Moreover, Â can be expressed as
Â = A0 + K(In − XX+)KB1(In − XX+), (25)
where A0 is given by (19) and B1 is given by (28) below.
Proof. Denote E = XX+ and E1 = In − E. It is easy to prove that the matrices E and E1 are orthogonal projection matrices satisfying EE1 = 0. It is clear that the matrices KEK and KE1K are also orthogonal projection matrices satisfying (KEK)(KE1K) = 0. According to the conclusion (3) of Definition 1, for any B ∈ Cn×n, there exist unique B1 ∈ KHCn×n and B2 ∈ SKHCn×n such that
B = B1 + B2, B1 = (1/2)(B + KBHK), B2 = (1/2)(B − KBHK). (28)
For any A ∈ SE, (18) gives A = A0 + KE1GE1 with G ∈ HCn×n, and A − B = (A0 + KE1GE1 − B1) − B2, where A0 + KE1GE1 − B1 ∈ KHCn×n. By the orthogonality in the conclusion (3) of Definition 1,
∥A − B∥2 = ∥A0 + KE1GE1 − B1∥2 + ∥B2∥2,
so ∥A − B∥ = min if and only if ∥KE1GE1 − (B1 − A0)∥ = min. Since K is unitary, ∥KE1GE1 − (B1 − A0)∥ = ∥E1GE1 − K(B1 − A0)∥, where K(B1 − A0) ∈ HCn×n. The mapping G ↦ E1GE1 is an orthogonal projection with respect to 〈· , ·〉, so the minimum is attained if and only if E1GE1 = E1K(B1 − A0)E1. Noting that X+E1 = 0 and E1(X+)H = 0, we have E1KA0E1 = 0, and hence E1GE1 = E1KB1E1. Therefore the unique solution of Problem 3 is Â = A0 + KE1KB1E1, which is (25). The proof is completed.
Algorithm 10. (1) Input X, Λ, Y, Γ according to (13). (2) Compute XHKXΛ, ΛHXHKX, XΛX+X, and XΛ; if (17) holds, then continue; otherwise stop. (3) Compute A0 according to (19), and compute B1 according to (28). (4) Calculate Â according to (25).
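A hedged Python/NumPy implementation of Algorithm 10, based on the reconstructed formulas (17), (19), (25), and (28); the function solve_problem3 and the random demo data (standing in for the numerical values of Example 11) are ours, not from the paper.

```python
import numpy as np

def solve_problem3(X, Lam, K, B, tol=1e-10):
    """Sketch of Algorithm 10: the minimizer A_hat of ||A - B|| over the solution
    set of Problem 2, following the reconstructed formulas (17), (19), (25), (28).
    Y and Gamma are taken as in (13) and therefore not passed explicitly."""
    n = K.shape[0]
    I = np.eye(n)
    Xp = np.linalg.pinv(X)                                     # Moore-Penrose inverse X^+
    # step (2): check the solvability conditions (17)
    S = X.conj().T @ K @ X @ Lam
    if not (np.allclose(S, S.conj().T, atol=tol)
            and np.allclose(X @ Lam @ Xp @ X, X @ Lam, atol=tol)):
        raise ValueError("conditions (17) fail: Problem 2 has no solution in KHC^{n x n}")
    # step (3): A0 from (19) and B1 from (28)
    E1 = I - X @ Xp
    A0 = X @ Lam @ Xp + K @ (K @ X @ Lam @ Xp).conj().T @ E1
    B1 = 0.5 * (B + K @ B.conj().T @ K)
    # step (4): the unique approximate solution from (25)
    return A0 + K @ E1 @ K @ B1 @ E1

# demo on random consistent data (standing in for the numerical values of Example 11)
rng = np.random.default_rng(7)
n, h = 8, 4
kappa = np.arange(n)[::-1]                                     # kappa(i) = n - i + 1
K = np.eye(n)[:, kappa]
H = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A_true = K @ (H + H.conj().T)                                  # a kappa-hermitian matrix
lam, V = np.linalg.eig(A_true)
X, Lam = V[:, :h], np.diag(lam[:h])
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

A_hat = solve_problem3(X, Lam, K, B)
assert np.allclose(A_hat, K @ A_hat.conj().T @ K)              # A_hat is kappa-hermitian
assert np.allclose(A_hat @ X, X @ Lam)                         # A_hat solves Problem 2

# compare with another member of the solution set (18): A_hat should be no farther from B
Xp = np.linalg.pinv(X); E1 = np.eye(n) - X @ Xp
A0 = X @ Lam @ Xp + K @ (K @ X @ Lam @ Xp).conj().T @ E1
G = rng.standard_normal((n, n)); G = 0.5 * (G + G.T)           # an arbitrary (real) hermitian G
A_other = A0 + K @ E1 @ G @ E1
assert np.linalg.norm(A_hat - B) <= np.linalg.norm(A_other - B) + 1e-12
```

The final assertion compares Â with another member of the solution set and illustrates the minimality stated in Theorem 9.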
Example 11 (n = 8, h = l = 4).
[Numerical data of the example: the matrix B and the computed results, each displayed from the first column to the fourth column.]
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This research was supported by the National Natural Science Foundation of China (31170532). The authors are very grateful to the referees for their valuable comments and also thank the editor for his helpful suggestions.