Volume 2013, Issue 1 230408
Research Article
Open Access

Left and Right Inverse Eigenpairs Problem for κ-Hermitian Matrices

Fan-Liang Li (Corresponding Author)
Institute of Mathematics and Physics, School of Sciences, Central South University of Forestry and Technology, Changsha 410004, China

Xi-Yan Hu
College of Mathematics and Econometrics, Hunan University, Changsha 410082, China

Lei Zhang
College of Mathematics and Econometrics, Hunan University, Changsha 410082, China
First published: 11 April 2013
Academic Editor: Panayiotis J. Psarrakos

Abstract

The left and right inverse eigenpairs problem for κ-hermitian matrices and its optimal approximation problem are considered. Based on the special properties of κ-hermitian matrices, an equivalent problem is obtained. Combining this with a new inner product of matrices, the necessary and sufficient conditions for the solvability of the problem and its general solutions are derived. Furthermore, the optimal approximate solution and a calculation procedure to obtain it are provided.

1. Introduction

Throughout this paper we use the following notation. Let Cn×m be the set of all n × m complex matrices; UCn×n, HCn×n, and SHCn×n denote the sets of all n × n unitary matrices, hermitian matrices, and skew-hermitian matrices, respectively. Let $\overline{A}$, AH, and A+ be the conjugate, conjugate transpose, and Moore-Penrose generalized inverse of A, respectively. For A, B ∈ Cn×m, the inner product of matrices A and B is 〈A, B〉 = re(tr(BHA)), where re(tr(BHA)) denotes the real part of tr(BHA). The induced matrix norm is the Frobenius norm; that is, ∥A∥ = 〈A, A〉1/2 = (tr(AHA))1/2.

The left and right inverse eigenpairs problem is a special inverse eigenvalue problem. That is, given partial left and right eigenpairs (eigenvalue and corresponding eigenvector) (λi, xi), i = 1, …, h; (μj, yj), j = 1, …, l, and a special matrix set S, find a matrix A ∈ S such that
(1) $Ax_i = \lambda_i x_i,\quad i = 1, \ldots, h; \qquad y_j^TA = \mu_j y_j^T,\quad j = 1, \ldots, l.$
This problem, which usually arises in perturbation analysis of matrix eigenvalues and in recursive matters, has a profound application background [1–6]. When the matrix set S is different, different left and right inverse eigenpairs problems are obtained. For example, we studied the left and right inverse eigenpairs problems of skew-centrosymmetric matrices and generalized centrosymmetric matrices, respectively [5, 6]. Based on the special properties of the left and right eigenpairs of these matrices, we derived the solvability conditions of the problem and its general solutions. In this paper, combining the special properties of κ-hermitian matrices and a new inner product of matrices, we first obtain the equivalent problem and then derive the necessary and sufficient conditions for the solvability of the problem and its general solutions.

Hill and Waters [7] introduced the following matrices.

Definition 1. Let κ be a fixed product of disjoint transpositions, and let K be the associated permutation matrix, that is, $K = (e_{k(1)}, e_{k(2)}, \ldots, e_{k(n)})$, $K^2 = I_n$, where $e_i$ denotes the ith column of the identity matrix $I_n$. A matrix A = (aij) ∈ Cn×n is said to be κ-hermitian (skew κ-hermitian) if and only if $a_{ij} = \bar{a}_{k(j)k(i)}$ ($a_{ij} = -\bar{a}_{k(j)k(i)}$), i, j = 1, …, n. We denote the set of κ-hermitian matrices (skew κ-hermitian matrices) by KHCn×n (SKHCn×n).

From Definition 1, it is easy to see that hermitian matrices and perhermitian matrices are special cases of κ-hermitian matrices, with k(i) = i and k(i) = n − i + 1, respectively. Hermitian matrices and perhermitian matrices, which are two of the twelve symmetry patterns of matrices [8], are applied in engineering, statistics, and so on [9, 10].
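To make Definition 1 concrete, the following NumPy sketch (our illustration, not part of the original paper) builds the permutation matrix K associated with a product of disjoint transpositions, checks the κ-hermitian condition A = KAHK stated in conclusion (1) below, and shows that k(i) = i and k(i) = n − i + 1 reproduce the hermitian and perhermitian cases.

```python
import numpy as np

def permutation_matrix(k):
    """Permutation matrix K with K e_i = e_{k(i)} for a 0-based permutation k."""
    n = len(k)
    K = np.zeros((n, n))
    K[k, np.arange(n)] = 1.0
    return K

def is_kappa_hermitian(A, K, tol=1e-12):
    """Check the condition A = K A^H K."""
    return np.allclose(A, K @ A.conj().T @ K, atol=tol)

n = 4
k = [2, 3, 0, 1]                         # product of disjoint transpositions (1 3)(2 4), written 0-based
K = permutation_matrix(k)
assert np.allclose(K @ K, np.eye(n))     # K^2 = I_n

# A generic kappa-hermitian matrix, obtained by symmetrizing with respect to K
M = np.random.randn(n, n) + 1j * np.random.randn(n, n)
A = 0.5 * (M + K @ M.conj().T @ K)
print(is_kappa_hermitian(A, K))          # True

# k(i) = i gives hermitian matrices; k(i) = n - i + 1 gives perhermitian matrices
K_id  = permutation_matrix(list(range(n)))               # identity permutation
K_per = permutation_matrix(list(range(n - 1, -1, -1)))   # reversal permutation
print(np.allclose(K_id, np.eye(n)), np.allclose(K_per, np.fliplr(np.eye(n))))
```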

From Definition 1, it is also easy to prove the following conclusions.
  • (1)

    A ∈ KHCn×n if and only if A = KAHK.

  • (2)

    A ∈ SKHCn×n if and only if A = −KAHK.

  • (3)

    If K is a fixed permutation matrix, then KHCn×n and SKHCn×n are closed linear subspaces of Cn×n and satisfy

    (2) $C^{n\times n} = KHC^{n\times n} \oplus SKHC^{n\times n}.$

The notation V1 ⊕ V2 stands for the orthogonal direct sum of the linear subspaces V1 and V2.
  • (4)

    A ∈ KHCn×n if and only if there is a matrix $\widetilde{A} \in HC^{n\times n}$ such that $A = K\widetilde{A}$.

  • (5)

    A ∈ SKHCn×n if and only if there is a matrix $\widetilde{A} \in SHC^{n\times n}$ such that $A = K\widetilde{A}$.

Proof. (1) From Definition 1, if A = (aij) ∈ KHCn×n, then $a_{ij} = \bar{a}_{k(j)k(i)}$; this implies A = KAHK, for the (i, j) entry of KAHK is $e_{k(i)}^TA^He_{k(j)} = \bar{a}_{k(j)k(i)} = a_{ij}$. The converse follows by reversing the argument.

(2) The conclusion can be proved in the same way, so the proof is omitted.

(3) (a) For any A ∈ Cn×n, there exist A1 ∈ KHCn×n and A2 ∈ SKHCn×n such that

(3) $A = A_1 + A_2,$
where A1 = (1/2)(A + KAHK), A2 = (1/2)(A − KAHK).

  •  (b)

    If there exist another $\widetilde{A}_1 \in KHC^{n\times n}$, $\widetilde{A}_2 \in SKHC^{n\times n}$ such that

    (4) $A = \widetilde{A}_1 + \widetilde{A}_2,$
    then subtracting (4) from (3) yields
    (5) $(A_1 - \widetilde{A}_1) + (A_2 - \widetilde{A}_2) = 0.$
    Multiplying (5) on the left and on the right by K, respectively, and according to (1) and (2), we obtain
    (6) $(A_1 - \widetilde{A}_1) - (A_2 - \widetilde{A}_2) = 0.$
    Combining (5) and (6) gives $A_1 = \widetilde{A}_1$, $A_2 = \widetilde{A}_2$.

  •  (c)

    For any A1 ∈ KHCn×n, A2 ∈ SKHCn×n, we have

    (7) $\langle A_1, A_2\rangle = \mathrm{re}(\mathrm{tr}(A_2^HA_1)) = \mathrm{re}(\mathrm{tr}(KA_2^HK\,A_1^H)) = -\mathrm{re}(\mathrm{tr}(A_2A_1^H)) = -\langle A_1, A_2\rangle.$
    This implies 〈A1, A2〉 = 0. Combining (a), (b), and (c) gives the conclusion (3).

(4) Let $\widetilde{A} = KA$. If A ∈ KHCn×n, then $\widetilde{A}^H = A^HK = (KAK)K = KA = \widetilde{A}$, that is, $\widetilde{A} \in HC^{n\times n}$ and $A = K\widetilde{A}$. If $A = K\widetilde{A}$ with $\widetilde{A} \in HC^{n\times n}$, then $KA^HK = K\widetilde{A}^HKK = K\widetilde{A} = A$, and hence A ∈ KHCn×n.

(5) The conclusion can be proved in the same way, so the proof is omitted.
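The direct sum in conclusion (3) and the orthogonality used in part (c) of the proof can be verified numerically. The sketch below is our own illustration: it splits an arbitrary complex matrix into its κ-hermitian and skew κ-hermitian parts and checks that they are orthogonal under the inner product 〈A, B〉 = re(tr(BHA)).

```python
import numpy as np

def inner(A, B):
    """<A, B> = re(tr(B^H A)), the inner product used throughout the paper."""
    return np.real(np.trace(B.conj().T @ A))

n = 5
k = [1, 0, 2, 4, 3]                      # product of disjoint transpositions (1 2)(4 5), 0-based
K = np.eye(n)[:, k]                      # K e_i = e_{k(i)}

A = np.random.randn(n, n) + 1j * np.random.randn(n, n)
A1 = 0.5 * (A + K @ A.conj().T @ K)      # kappa-hermitian part
A2 = 0.5 * (A - K @ A.conj().T @ K)      # skew kappa-hermitian part

assert np.allclose(A1, K @ A1.conj().T @ K)     # A1 in KHC^{n x n}
assert np.allclose(A2, -K @ A2.conj().T @ K)    # A2 in SKHC^{n x n}
assert np.allclose(A, A1 + A2)                  # A = A1 + A2 as in (3)
print(abs(inner(A1, A2)))                       # ~ 0, so the sum is orthogonal
```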

In this paper, we suppose that K is a fixed permutation matrix and assume that (λi, xi), i = 1, …, h, are right eigenpairs of A and (μj, yj), j = 1, …, l, are left eigenpairs of A. If we let X = (x1, …, xh) ∈ Cn×h, Λ = diag (λ1, …, λh) ∈ Ch×h; Y = (y1, …, yl) ∈ Cn×l, Γ = diag (μ1, …, μl) ∈ Cl×l, then the problems studied in this paper can be described as follows.

Problem 2. Given X ∈ Cn×h, Λ = diag (λ1, …, λh) ∈ Ch×h; Y ∈ Cn×l, Γ = diag (μ1, …, μl) ∈ Cl×l, find A ∈ KHCn×n such that

(8) $AX = X\Lambda, \qquad Y^TA = \Gamma Y^T.$

Problem 3. Given B ∈ Cn×n, find $\hat{A} \in S_E$ such that

(9) $\|\hat{A} - B\| = \min_{A \in S_E}\|A - B\|,$
where SE is the solution set of Problem 2.

This paper is organized as follows. In Section 2, we first obtain the equivalent problem using the properties of KHCn×n and then derive the solvability conditions of Problem 2 and an expression for its general solution. In Section 3, we first establish the existence and uniqueness of the solution of Problem 3 and then present the unique approximation solution. Finally, we provide a calculation procedure to compute the unique approximation solution and a numerical experiment to illustrate the results obtained in this paper.

2. Solvability Conditions of Problem 2

We first discuss the properties of KHCn×n.

Lemma 4. Let M = KEKGE, where E ∈ HCn×n. One has the following conclusions.

  • (1)

    If G ∈ KHCn×n, then M ∈ KHCn×n.

  • (2)

    If G ∈ SKHCn×n, then M ∈ SKHCn×n.

  • (3)

    If G = G1 + G2, where G1 ∈ KHCn×n, G2 ∈ SKHCn×n, then M ∈ KHCn×n if and only if KEKG2E = 0. In addition, one has M = KEKG1E.

Proof. (1) $KM^HK = K(EG^HKEK)K = KEG^HKE = KE(KGK)KE = KEKGE = M.$

Hence, we have M ∈ KHCn×n.

(2) $KM^HK = K(EG^HKEK)K = KEG^HKE = KE(-KGK)KE = -KEKGE = -M.$

Hence, we have M ∈ SKHCn×n.

(3) Since M = KEK(G1 + G2)E = KEKG1E + KEKG2E, we have KEKG1E ∈ KHCn×n and KEKG2E ∈ SKHCn×n from (1) and (2). If M ∈ KHCn×n, then M − KEKG1E ∈ KHCn×n, while M − KEKG1E = KEKG2E ∈ SKHCn×n. Therefore, from the conclusion (3) of Definition 1, we have KEKG2E = 0, that is, M = KEKG1E. Conversely, if KEKG2E = 0, it is clear that M = KEKG1E ∈ KHCn×n. The proof is completed.
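Lemma 4 is easy to check numerically; the sketch below (ours, for illustration only) forms M = KEKGE for a hermitian E and for the κ-hermitian and skew κ-hermitian parts of a random G.

```python
import numpy as np

n = 5
k = [1, 0, 2, 4, 3]
K = np.eye(n)[:, k]

E = np.random.randn(n, n) + 1j * np.random.randn(n, n)
E = 0.5 * (E + E.conj().T)               # an arbitrary hermitian E

G = np.random.randn(n, n) + 1j * np.random.randn(n, n)
G1 = 0.5 * (G + K @ G.conj().T @ K)      # G1 in KHC^{n x n}
G2 = 0.5 * (G - K @ G.conj().T @ K)      # G2 in SKHC^{n x n}

M1 = K @ E @ K @ G1 @ E
M2 = K @ E @ K @ G2 @ E
print(np.allclose(M1, K @ M1.conj().T @ K))     # conclusion (1): M1 is kappa-hermitian
print(np.allclose(M2, -K @ M2.conj().T @ K))    # conclusion (2): M2 is skew kappa-hermitian
```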

Lemma 5. Let A ∈ KHCn×n. If (λ, x) is a right eigenpair of A, then $(\bar{\lambda}, K\bar{x})$ is a left eigenpair of A.

Proof. If (λ, x) is a right eigenpair of A, then we have

(10) $Ax = \lambda x.$
From the conclusion (1) of Definition 1, it follows that
(11) $KA^HKx = \lambda x, \quad \text{that is,} \quad A^HKx = \lambda Kx.$
This implies
(12) $(K\bar{x})^TA = \bar{\lambda}(K\bar{x})^T.$
So $(\bar{\lambda}, K\bar{x})$ is a left eigenpair of A.
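A quick numerical check of Lemma 5 (our illustration; the pair $(\bar{\lambda}, K\bar{x})$ is the reconstruction used above): for a κ-hermitian A with right eigenpair (λ, x), the vector y = Kx̄ satisfies yTA = λ̄yT.

```python
import numpy as np

n = 6
k = [1, 0, 3, 2, 5, 4]                   # (1 2)(3 4)(5 6), 0-based
K = np.eye(n)[:, k]

M = np.random.randn(n, n) + 1j * np.random.randn(n, n)
A = 0.5 * (M + K @ M.conj().T @ K)       # a random kappa-hermitian matrix

lam, V = np.linalg.eig(A)
l, x = lam[0], V[:, 0]                   # one right eigenpair: A x = l x
y = K @ x.conj()                         # candidate left eigenvector from Lemma 5

print(np.allclose(A @ x, l * x))               # right eigenpair
print(np.allclose(y @ A, np.conj(l) * y))      # (conj(l), K conj(x)) is a left eigenpair
```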

From Lemma 5, without loss of generality, we may assume that the data of Problem 2 are of the form
(13) $l = h, \qquad Y = K\overline{X}, \qquad \Gamma = \overline{\Lambda}.$

Combining (13) and the conclusion (4) of Definition 1, it is easy to derive the following lemma.

Lemma 6. If X, Λ, Y, Γ are given by (13), then Problem 2 is equivalent to the following problem: find KA ∈ HCn×n such that

(14) $KAX = KX\Lambda.$

Lemma 7 (see [11]). Given X ∈ Cn×h and B ∈ Cn×h, the matrix equation $\widetilde{A}X = B$ has a solution $\widetilde{A} \in HC^{n\times n}$ if and only if

(15) $X^HB = B^HX, \qquad BX^+X = B.$
Moreover, the general solution can be expressed as
(16) $\widetilde{A} = BX^+ + (BX^+)^H(I_n - XX^+) + (I_n - XX^+)\widetilde{G}(I_n - XX^+), \qquad \forall\, \widetilde{G} \in HC^{n\times n}.$
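The following sketch exercises Lemma 7 under the reconstruction of (15)-(16) given above (the standard hermitian-solution formula for $\widetilde{A}X = B$); it is our illustration, not code from the paper. A consistent right-hand side is built from a hermitian matrix, the conditions (15) are checked, and one hermitian solution is recovered from (16).

```python
import numpy as np

def hermitian_solution(X, B, G=None):
    """One hermitian solution of A X = B, following the reconstructed formula (16)."""
    n = X.shape[0]
    Xp = np.linalg.pinv(X)                    # Moore-Penrose inverse X^+
    P = np.eye(n) - X @ Xp                    # I_n - X X^+
    if G is None:
        G = np.zeros((n, n), dtype=complex)   # arbitrary hermitian parameter
    BXp = B @ Xp
    return BXp + BXp.conj().T @ P + P @ G @ P

n, h = 6, 3
X = np.random.randn(n, h) + 1j * np.random.randn(n, h)
H = np.random.randn(n, n) + 1j * np.random.randn(n, n)
H = 0.5 * (H + H.conj().T)                    # a hermitian matrix
B = H @ X                                     # consistent right-hand side

Xp = np.linalg.pinv(X)
assert np.allclose(X.conj().T @ B, B.conj().T @ X)   # first condition in (15)
assert np.allclose(B @ Xp @ X, B)                    # second condition in (15)

A = hermitian_solution(X, B)
print(np.allclose(A, A.conj().T), np.allclose(A @ X, B))   # hermitian and solves A X = B
```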

Theorem 8. If X, Λ, Y, Γ are given by (13), then Problem 2 has a solution in KHCn×n if and only if

(17) $X^HKX\Lambda = \Lambda^HX^HKX, \qquad X\Lambda X^+X = X\Lambda.$
Moreover, the general solution can be expressed as
(18) $A = A_0 + KEKGE, \qquad \forall\, G \in KHC^{n\times n},$
where
(19) $A_0 = X\Lambda X^+ + K(X\Lambda X^+)^HK(I_n - XX^+), \qquad E = I_n - XX^+.$

Proof. Necessity: If there is a matrix A ∈ KHCn×n such that AX = XΛ, YTA = ΓYT, then from Lemma 6, there exists a matrix KA ∈ HCn×n such that KAX = KXΛ, and according to Lemma 7, we have

(20) $X^H(KX\Lambda) = (KX\Lambda)^HX, \qquad KX\Lambda X^+X = KX\Lambda.$
It is easy to see that (20) is equivalent to (17).

Sufficiency: If (17) holds, then (20) holds. Hence, the matrix equation KAX = KXΛ has a solution KA ∈ HCn×n. Moreover, by Lemma 7 the general solution can be expressed as follows:

(21) $KA = KX\Lambda X^+ + (KX\Lambda X^+)^H(I_n - XX^+) + (I_n - XX^+)\widetilde{G}(I_n - XX^+), \qquad \forall\, \widetilde{G} \in HC^{n\times n}.$
Let

(22) $A_0 = K\left[KX\Lambda X^+ + (KX\Lambda X^+)^H(I_n - XX^+)\right].$
This implies $A_0 = X\Lambda X^+ + K(X\Lambda X^+)^HKE$ with $E = I_n - XX^+$. Combining the definition of K, E and the first equation of (17), we have
(23) $KA_0^HK = K(X\Lambda X^+)^HK + KEK\,X\Lambda X^+ = X\Lambda X^+ + K(X\Lambda X^+)^HKE = A_0.$
Hence, A0 ∈ KHCn×n. Combining the definition of K, E, (13), and (17), we have
(24) $A_0X = X\Lambda X^+X + K(X\Lambda X^+)^HKEX = X\Lambda, \qquad Y^TA_0 = X^HKA_0 = (A_0X)^HK = \overline{\Lambda}X^HK = \Gamma Y^T.$
Therefore, A0 is a special solution of Problem 2. Combining the conclusion (4) of Definition 1, Lemma 4, and E = In − XX+ ∈ HCn×n, it is easy to prove that A = A0 + KEKGE ∈ KHCn×n if and only if G ∈ KHCn×n. Hence, the solution set of Problem 2 can be expressed as (18).
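Under the reconstructions of (13), (17), and (19) above, Theorem 8 can be exercised numerically. The sketch below (our illustration) draws eigenpairs of a random κ-hermitian matrix, verifies (17), forms the special solution A0, and checks that it solves Problem 2.

```python
import numpy as np

n, h = 6, 3
k = [1, 0, 3, 2, 5, 4]
K = np.eye(n)[:, k]

M = np.random.randn(n, n) + 1j * np.random.randn(n, n)
A_true = 0.5 * (M + K @ M.conj().T @ K)         # kappa-hermitian matrix supplying eigenpairs

lam, V = np.linalg.eig(A_true)
X, Lam = V[:, :h], np.diag(lam[:h])             # h right eigenpairs
Y, Gam = K @ X.conj(), np.diag(lam[:h].conj())  # data in the form (13)

Xp = np.linalg.pinv(X)
E = np.eye(n) - X @ Xp

# solvability conditions (17)
S = X.conj().T @ K @ X @ Lam
assert np.allclose(S, S.conj().T)               # X^H K X Lam = Lam^H X^H K X
assert np.allclose(X @ Lam @ Xp @ X, X @ Lam)   # X Lam X^+ X = X Lam

# special solution A0 from (19)
XLXp = X @ Lam @ Xp
A0 = XLXp + K @ XLXp.conj().T @ K @ E

assert np.allclose(A0, K @ A0.conj().T @ K)     # A0 in KHC^{n x n}
print(np.allclose(A0 @ X, X @ Lam), np.allclose(Y.T @ A0, Gam @ Y.T))
```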

3. An Expression of the Solution of Problem 3

From (18), it is easy to prove that the solution set SE of Problem 2 is a nonempty closed convex set if Problem 2 has a solution in KHCn×n. We claim that, for any given B ∈ Cn×n, there exists a unique optimal approximation for Problem 3.

Theorem 9. Given B ∈ Cn×n, if the conditions on X, Y, Λ, Γ are the same as those in Theorem 8, then Problem 3 has a unique solution $\hat{A} \in S_E$. Moreover, $\hat{A}$ can be expressed as

(25) $\hat{A} = A_0 + KEKB_1E,$
where A0, E are given by (19) and B1 = (1/2)(B + KBHK).

Proof. Denote E1 = In − E. It is easy to prove that the matrices E and E1 are orthogonal projection matrices satisfying EE1 = 0. It is clear that the matrices KEK and KE1K are also orthogonal projection matrices satisfying (KEK)(KE1K) = 0. According to the conclusion (3) of Definition 1, for any B ∈ Cn×n, there exist unique

(26) $B_1 \in KHC^{n\times n}, \qquad B_2 \in SKHC^{n\times n}$
such that
(27) $B = B_1 + B_2,$
where
(28) $B_1 = \tfrac{1}{2}(B + KB^HK), \qquad B_2 = \tfrac{1}{2}(B - KB^HK).$
Combining Theorem 8, for any A ∈ SE, we have

(29) $\|A - B\|^2 = \|A_0 + KEKGE - B_1 - B_2\|^2 = \|A_0 + KEKGE - B_1\|^2 + \|B_2\|^2.$
It is easy to prove that KEKA0E = 0 according to the definitions of A0 and E. So we have
(30) $\|A_0 + KEKGE - B_1\|^2 = \|KEKGE - KEKB_1E\|^2 + \|A_0 - (B_1 - KEKB_1E)\|^2.$
Obviously, minimizing ∥A − B∥ over A ∈ SE is equivalent to
(31) $KEKGE = KEKB_1E.$
Since EE1 = 0 and (KEK)(KE1K) = 0, it is clear that G = B1 is a solution of (31). Substituting this result into (18), we obtain (25).

Algorithm 10. (1) Input X, Λ, Y, Γ according to (13). (2) Compute XHKXΛ, ΛHXHKX, XΛX+X, and XΛ; if (17) holds, then continue; otherwise stop. (3) Compute A0 according to (19), and compute B1 according to (28). (4) Calculate $\hat{A}$ according to (25).
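A compact NumPy transcription of Algorithm 10, written under the reconstructions of (13), (17), (19), (25), and (28) given above; this is our sketch (the paper's own computation used MATLAB), not the authors' code.

```python
import numpy as np

def solve_problem3(X, Lam, Y, Gam, B, K, tol=1e-10):
    """Algorithm 10: return the unique solution A_hat of Problem 3, or None if (17) fails."""
    n = X.shape[0]
    # Step (1): the input is assumed to satisfy (13): Y = K conj(X), Gamma = conj(Lambda)
    assert np.allclose(Y, K @ X.conj()) and np.allclose(Gam, Lam.conj())

    # Step (2): check the solvability conditions (17)
    Xp = np.linalg.pinv(X)
    S = X.conj().T @ K @ X @ Lam
    if not (np.allclose(S, S.conj().T, atol=tol)
            and np.allclose(X @ Lam @ Xp @ X, X @ Lam, atol=tol)):
        return None

    # Step (3): special solution A0 from (19) and B1 from (28)
    E = np.eye(n) - X @ Xp
    XLXp = X @ Lam @ Xp
    A0 = XLXp + K @ XLXp.conj().T @ K @ E
    B1 = 0.5 * (B + K @ B.conj().T @ K)

    # Step (4): optimal approximation (25)
    return A0 + K @ E @ K @ B1 @ E
```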

Example 11 (n = 8, h = l = 4). The matrices X, Λ, Y, Γ are given by

(32) [the 8 × 4 matrices X, Y and the 4 × 4 diagonal matrices Λ, Γ; entries omitted]

and the matrix B is given by

(33) [columns one to four of B; entries omitted]
(34) [columns five to eight of B; entries omitted]

It is easy to see that the matrices X, Λ, Y, Γ satisfy (17). Hence, there exists a unique solution for Problem 3. Using the software MATLAB, we obtain the unique solution $\hat{A}$ of Problem 3:

(35) [columns one to four of $\hat{A}$; entries omitted]
(36) [columns five to eight of $\hat{A}$; entries omitted]
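Since the numerical entries of (32)-(36) are not reproduced above, a randomly generated instance of the same size (n = 8, h = l = 4) can be used to exercise the procedure; this driver is our illustration only and does not reproduce the printed data of Example 11. It assumes the function solve_problem3 from the sketch after Algorithm 10.

```python
import numpy as np

np.random.seed(0)
n, h = 8, 4
k = [1, 0, 3, 2, 5, 4, 7, 6]                     # a product of four disjoint transpositions
K = np.eye(n)[:, k]

M = np.random.randn(n, n) + 1j * np.random.randn(n, n)
A_true = 0.5 * (M + K @ M.conj().T @ K)          # kappa-hermitian matrix supplying the eigenpairs

lam, V = np.linalg.eig(A_true)
X, Lam = V[:, :h], np.diag(lam[:h])
Y, Gam = K @ X.conj(), np.diag(lam[:h].conj())   # data of the form (13)

B = np.random.randn(n, n) + 1j * np.random.randn(n, n)
A_hat = solve_problem3(X, Lam, Y, Gam, B, K)     # Algorithm 10 (sketch above)

print(np.allclose(A_hat @ X, X @ Lam))           # A_hat solves Problem 2
print(np.allclose(Y.T @ A_hat, Gam @ Y.T))
print(np.allclose(A_hat, K @ A_hat.conj().T @ K))  # A_hat is kappa-hermitian
```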

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This research was supported by the National Natural Science Foundation of China (31170532). The authors are very grateful to the referees for their valuable comments and also thank the editor for his helpful suggestions.
