A Note on k-Potence Preservers on Matrix Spaces over Complex Field
Abstract
Let ℂ be the field of all complex numbers, Mn the space of all n × n matrices over ℂ, and Sn the subspace of Mn consisting of all symmetric matrices. Suppose the map ϕ : Sn → Mn satisfies the condition that A − λB being k-potent in Sn implies that ϕ(A) − λϕ(B) is k-potent in Mn, for every λ ∈ ℂ. Then there exist an invertible matrix P ∈ Mn and ϵ ∈ ℂ with ϵk = ϵ such that ϕ(X) = ϵP−1XP for every X ∈ Sn. Moreover, the inductive method used in this paper can be adapted to characterize similar maps from Mn to Mn.
1. Introduction
Let ℂ be the field of all complex numbers, Mn the space of all n × n matrices over ℂ, Tn the subspace of Mn consisting of all triangular matrices, and Sn the subspace of Mn consisting of all symmetric matrices. For a fixed integer k ≥ 2, A ∈ Mn is called a k-potent matrix if Ak = A; in particular, A is an idempotent matrix when k = 2. A map ϕ : Sn → Mn such that A − λB being a k-potent matrix in Sn implies that ϕ(A) − λϕ(B) is a k-potent matrix in Mn, for every λ ∈ ℂ, is called a weak preserver. If "implying that" is replaced by "if and only if," then ϕ is called a strong preserver. Obviously, a strong preserver must be a weak preserver, while a weak preserver need not be a strong preserver.
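Since every argument below ultimately reduces to checking the condition Ak = A, a quick numerical sanity check may help fix ideas. The following sketch (the helper name is_k_potent is ours, not notation from the paper) tests k-potence with numpy:

```python
import numpy as np

def is_k_potent(A, k, tol=1e-9):
    """Return True when A^k equals A entrywise (up to tol), i.e. A is k-potent."""
    return np.allclose(np.linalg.matrix_power(A, k), A, atol=tol)

# k = 2: an idempotent matrix (a rank-1, non-orthogonal projection).
P = np.array([[1.0, 1.0],
              [0.0, 0.0]])
assert is_k_potent(P, 2)

# k = 3: eigenvalues may be 0, 1, or -1, so A^3 = A while A^2 != A.
A = np.diag([1.0, -1.0])
assert is_k_potent(A, 3) and not is_k_potent(A, 2)
```

The eigenvalues of a k-potent matrix satisfy λk = λ, that is, λ = 0 or λk−1 = 1, which is exactly why the sets Λ and Δ introduced below appear throughout the paper.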
The preserver problem in this paper arises from linear preserver problems (LPPs) but without the linearity assumption (see [1–3] for more details on LPPs). You and Wang characterized the strong k-potence preservers from Mn to Mn in [4]; Song and Cao then extended the result to weak preservers from Mn to Mn in [5]. In [6], Wang and You characterized the strong k-potence preservers from Tn to Mn. In this paper, we characterize the weak k-potence preservers from Sn to Mn and prove the following theorem.
Theorem 1. Suppose ϕ : Sn → Mn satisfies the condition that A − λB being a k-potent matrix in Sn implies that ϕ(A) − λϕ(B) is a k-potent matrix in Mn, for every λ ∈ ℂ. Then there exist an invertible P ∈ Mn and ϵ ∈ ℂ with ϵk = ϵ such that ϕ(X) = ϵP−1XP for every X ∈ Sn.
Furthermore, we can derive the following corollary from Theorem 1.
Corollary 2. Suppose ϕ : Sn → Sn satisfies the condition that A − λB being a k-potent matrix in Sn implies that ϕ(A) − λϕ(B) is a k-potent matrix in Sn, for every λ ∈ ℂ. Then there exist an invertible P ∈ Mn and ϵ ∈ ℂ with ϵk = ϵ such that ϕ(X) = ϵP−1XP for every X ∈ Sn, where PPt = aIn for some nonzero a ∈ ℂ.
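The extra condition PPt = aIn in Corollary 2 is what forces X ↦ P−1XP to map Sn into Sn: then Pt = aP−1, so (P−1XP)t = P−1XP. A minimal numerical sketch, using an orthogonal (rotation) matrix of our own choosing as P:

```python
import numpy as np

theta = 0.7
# A rotation matrix satisfies P @ P.T = I, i.e. P P^t = a*In with a = 1.
P = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
assert np.allclose(P @ P.T, np.eye(2))

X = np.array([[2.0, 3.0],
              [3.0, 5.0]])           # symmetric
Y = np.linalg.inv(P) @ X @ P         # the map X -> P^{-1} X P
assert np.allclose(Y, Y.T)           # the image is again symmetric
```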
In fact, the proof of Theorem 1, with some adjustments, also applies to weak k-potence preservers from Mn to Mn; more details can be found in the remarks.
2. Notations and Lemmas
Γn denotes the set of all k-potent matrices in Mn, and SΓn = Γn ∩ Sn. Λ denotes the set of all complex numbers ϵ satisfying ϵk−1 = 1, and Δ = Λ ∪ {0}. Eij denotes the matrix in Mn with 1 in the (i, j) position and 0 elsewhere, and In denotes the identity matrix in Mn. 〈n〉 denotes the set of all integers s satisfying 1 ≤ s ≤ n. GLn denotes the general linear group consisting of all invertible matrices in Mn. Dn denotes an arbitrary diagonal matrix in Mn. For A, B ∈ Mn, A and B are orthogonal if AB = BA = 0. ℂn×1 denotes the space of all n × 1 matrices over ℂ. Φn denotes the set of all maps ϕ : Sn → Mn such that A − λB being a k-potent matrix in Sn implies that ϕ(A) − λϕ(B) is a k-potent matrix in Mn, for every λ ∈ ℂ.
For an arbitrary matrix X ∈ Mn, we denote by X[i, j] the entry in the (i, j) position of X, and by X[i1, …, is; j1, …, jt] the s × t matrix whose (p, q) entry equals X[ip, jq], where i1 < ⋯ < is and j1 < ⋯ < jt. Moreover, we denote by X{i1, …, is; j1, …, jt} the n × n matrix whose (ip, jq) entry equals X[ip, jq] and whose remaining entries equal 0; we simplify it to X{i1, …, is} when s = t and il = jl for every l ∈ 〈s〉. Naturally, X{i} = X[i, i]Eii for every i ∈ 〈n〉.
Without fixing X, X{i1, …, is; j1, …, jt} also denotes a matrix in Mn with 0 in its (p, q) position whenever p ∉ {i1, …, is} or q ∉ {j1, …, jt}, where 1 ≤ i1 < ⋯ < is ≤ n and 1 ≤ j1 < ⋯ < jt ≤ n.
First, we need the following Lemmas 3, 4, 5, and 7, which concern k-potent matrices and mutually orthogonal matrices.
Lemma 3 (see [2]). Suppose X, Y ∈ Γn and X + ϵY ∈ Γn for every ϵ ∈ Λ; then X and Y are orthogonal.
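Lemma 3 is the hard direction; its easy converse is simple to verify numerically: if XY = YX = 0 and Xk = X, Yk = Y, then (X + ϵY)k = Xk + ϵkYk = X + ϵY for every ϵ ∈ Λ, since orthogonality kills all cross terms and ϵk−1 = 1. A sketch with k = 4 (the diagonal test matrices are our own choice):

```python
import numpy as np

k = 4
# Lambda: the (k-1)-th roots of unity, i.e. eps with eps^(k-1) = 1.
Lambda = [np.exp(2j * np.pi * m / (k - 1)) for m in range(k - 1)]

# Mutually orthogonal k-potent matrices: XY = YX = 0, X^k = X, Y^k = Y.
X = np.diag([1.0 + 0j, 0.0, 0.0])
Y = np.diag([0.0, 1.0 + 0j, 0.0])
assert np.allclose(X @ Y, 0) and np.allclose(Y @ X, 0)

for eps in Lambda:
    S = X + eps * Y
    # (X + eps*Y)^k = X + eps^k * Y = X + eps*Y because eps^(k-1) = 1.
    assert np.allclose(np.linalg.matrix_power(S, k), S)
```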
Lemma 4 ([7, Lemma 1]). Suppose A1, A2, …, An are n × n mutually orthogonal nonzero k-potent matrices; then there exists P ∈ GLn such that P−1AiP = ciEii with ci ∈ Λ for every i ∈ 〈n〉.
Lemma 5. Suppose Z ∈ Mn−1, p, q, g, h ∈ ℂ(n−1)×1 with ght ≠ 0, δ ∈ ℂ, for arbitrary nonzero α ∈ ℂ with htg + α2 ≠ 0 and τ = (α−1htg + α) −1, . Then Z = 0, δ = 0, and there exist λ1, λ2 ∈ ℂ with (λ1 + 1)(λ2 + 1) = 1 such that p = λ1g and q = λ2h.
Proof. By the assumptions on α and τ, the matrix in the hypothesis is idempotent. Denote this matrix by X; then we get the following equation:
Since the matrices on both sides of X satisfy the following equation:
We denote by A the following matrix:
Let , then we calculate it and get the following equations:
It is easy to get and the following equation:
By the assumption of α, we have Z = 0 and δ = 0. Then the following equations are true:
Now, we calculate the upper left part of τ−3k+2(⋯) 2.
When k = 2, τ−3k+2(⋯) 2 = τ−4A2, of which the upper left part is τ−4[pqt(In−1 − τα−1ght) − qtpα−1gτht] = τ−4[pqt − τα−1pqtght − τα−1qtpght]. Then in the upper left part of , the highest degree of α is 4, and the coefficient matrix is pqt + gqt + pht.
When k > 2, if appears in the left (or right) end of an additive item of τ−3k+2(⋯) 2, then the upper left part of this item is 0. So, the upper left part of τ−3k+2(⋯) 2 is equal to the upper left part of ; that is, the upper left part is , and the highest degree of α is 3k − 2 with pqt as the coefficient matrix of α3k−2.
Since α is arbitrary, we have pqt + gqt + pht = 0.
Since ght ≠ 0, we have g ≠ 0, h ≠ 0, and p = 0 if and only if q = 0. When p ≠ 0, we get p = λ1g from p(qt + ht) + gqt = 0 and q = λ2h from (p + g)qt + pht = 0, where λ1 and λ2 satisfy λ1λ2ght + λ2ght + λ1ght = 0; that is, λ1λ2 + λ2 + λ1 = 0 since ght ≠ 0, which is equivalent to (λ1 + 1)(λ2 + 1) = 1. When p = q = 0, λ1 = λ2 = 0.
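The parametrization at the end of the proof can be sanity-checked: if (λ1 + 1)(λ2 + 1) = 1 and p = λ1g, q = λ2h, then pqt + gqt + pht = (λ1λ2 + λ1 + λ2)ght = 0. A numerical sketch with vectors and a value of λ1 of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(0)
g = rng.standard_normal((4, 1))      # column vectors, as in C^{(n-1)x1}
h = rng.standard_normal((4, 1))

lam1 = 0.5
lam2 = 1.0 / (lam1 + 1.0) - 1.0      # solves (lam1 + 1)*(lam2 + 1) = 1
p = lam1 * g
q = lam2 * h

# p q^t + g q^t + p h^t collapses to (lam1*lam2 + lam1 + lam2) * g h^t = 0.
M = p @ q.T + g @ q.T + p @ h.T
assert np.allclose(M, 0)
assert np.isclose((lam1 + 1) * (lam2 + 1), 1.0)
```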
Remark 6. If ght ≠ 0 is replaced with ght = 0 in Lemma 5, then g = 0 implies p = 0 or q + h = 0, and h = 0 implies q = 0 or p + g = 0. These cases do not appear in the proof of Theorem 1 but are necessary for the weak preservers from Mn to Mn.
Lemma 7. Suppose for arbitrary a, b ∈ ℂ with a ≠ b, where λ : ℂ → ℂ is a map satisfying λ(x) ≠ 0 for every x ∈ ℂ. Then there exists nonzero λ0 ∈ ℂ such that λ(x) = λ0 for every x ∈ ℂ.
Proof. Since the trace of A is equal to 1, we have (λ(a) − λ(b))(λ−1(a) − λ−1(b))/(a − b)2 = 0 or −1; in particular, when it equals −1, k − 1 = 6p for some p ∈ Z+. Denote λ(a)/λ(b) by y and a − b by c; then (2 − y − y−1)/c2 = 0 or −1.
- (1)
If (2 − y − y−1)/c2 = 0, then y = 1; that is, λ(a) = λ(b);
- (2)
if (2 − y − y−1)/c2 = −1, then . When c = 1, ; when c = 2, . But implies λ(b + 2)/λ(b) = (λ(b + 2)/λ(b + 1))(λ(b + 1)/λ(b)) = 1, or . This is a contradiction! So it is impossible that (2 − y − y−1)/c2 = −1.
Hence, there exists nonzero λ0 ∈ ℂ such that λ(x) = λ0 for every x ∈ ℂ.
We can prove the following Lemmas 8 and 9 similarly to Lemmas 4 and 5 in [4].
Lemma 8 (see [4, Lemma 4]). Suppose ϕ ∈ Φn and A and B are n × n orthogonal k-potent matrices; then ϕ(A) and ϕ(B) are orthogonal.
Lemma 9 (see [4, Lemma 5]). Suppose ϕ ∈ Φn; then ϕ is homogeneous; that is, ϕ(λX) = λϕ(X) for every X ∈ Sn and every λ ∈ ℂ.
Corollary 10. Suppose ϕ ∈ Φn, A + B, C ∈ SΓn, and for every ϵ ∈ Λ, A + B + ϵC ∈ SΓn and ϕ(B + ϵC) = ϕ(B) + ϕ(ϵC). Then ϕ(A) + ϕ(B) and ϕ(C) are orthogonal.
Proof. By the assumption and Lemma 9, we have ϕ(A) + ϕ(B) ∈ Γn, ϕ(C) ∈ Γn, ϕ(A) + ϕ(B + ϵC) = ϕ(A) + ϕ(B) + ϵϕ(C) ∈ Γn. By Lemma 3, ϕ(A) + ϕ(B) and ϕ(C) are orthogonal.
Corollary 11. Suppose ϕ ∈ Φn and ϕ(Dn) = Dn for an arbitrary diagonal matrix Dn ∈ Mn. Then for every i, j ∈ 〈n〉 with i ≠ j, , where λij ∈ ℂ depends only on i and j.
Proof. Let A = (1/2)(Eij + Eji + Dn), B = (1/2)(Eii + Ejj − Dn), and C = ∑l≠i,j Ell; then A, B, and C satisfy the assumption of Corollary 10, so ϕ(A) + ϕ(B) and ϕ(C) are orthogonal; that is, ϕ(Eij + Eji + Dn) = αiiEii + βijEij + γjiEji + δjjEjj + Dn for some αii, βij, γji, δjj ∈ ℂ.
Since (η−1 + η)−1[(Eij + Eji + Dn) − (Dn − η−1Eii − ηEjj)] = (η−1 + η)−1(η−1Eii + Eij + Eji + ηEjj) ∈ SΓn for arbitrary nonzero η ∈ ℂ with 1 + η2 ≠ 0, after applying ϕ, we have (η−1 + η)−1[αiiEii + βijEij + γjiEji + δjjEjj + η−1Eii + ηEjj] = (η−1 + η)−1[αiiEii + (βij − 1)Eij + (γji − 1)Eji + δjjEjj] + (η−1 + η)−1(η−1Eii + Eij + Eji + ηEjj) ∈ Γn. By Lemma 5, αii = δjj = 0 and βijγji = 1.
Let Dn = ∑l∈〈n〉 xlEll, where xl ∈ ℂ for every l ∈ 〈n〉; then βij is a function of i, j, and x1, …, xn, and we denote by βij(Dn) the value of βij at x1, …, xn, i, and j.
Fix i, j, and Dn and add a free variable x to xl for some l ∈ 〈n〉; then βij(Dn + xEll) becomes a map of x. Since (1/(a − b))(Eij + Eji + Dn + aEjj) − (1/(a − b))(Eij + Eji + Dn + bEjj) ∈ SΓn for arbitrary a, b ∈ ℂ with a − b ≠ 0, we can derive that the corresponding difference of images lies in Γn. By Lemma 7, βij(Dn + aEjj) = βij(Dn + bEjj) for fixed i, j, and Dn; that is, βij(Dn + xEjj) = βij(Dn) for arbitrary x ∈ ℂ. Similarly, we can prove βij(Dn + xEii) = βij(Dn) for arbitrary x ∈ ℂ.
In fact, we have proved that βij(Dn + xEii) = βij(Dn) and βij(Dn + yEjj) = βij(Dn) for arbitrary x, y ∈ ℂ and arbitrary Dn; then βij(Dn + xEii + yEjj) = βij(Dn + xEii)( = βij(Dn + yEjj)) = βij(Dn) follows.
Since βij(Dn + xEjj + yEll) = βij(Dn + yEll) for fixed i, j, and l with l ≠ i, j, and arbitrary x, y ∈ ℂ, then (1/(a − b))(Eij + Eji + Dn + (a − b)Ejj + aEll)−(1/(a − b))(Eij + Eji + Dn + bEll) ∈ SΓn implies ∈Γn. By Lemma 7, we can get βij(Dn + aEll) = βij(Dn + bEll) for arbitrary a and b ∈ ℂ with a − b ≠ 0; that is, βij(Dn + xEll) = βij(Dn) for arbitrary x ∈ ℂ.
So far, we have proved that βij(Dn) takes the same value for every diagonal Dn; that is, βij depends only on i and j.
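The matrices (η−1 + η)−1(η−1Eii + Eij + Eji + ηEjj) driving the proof above are rank-one idempotents (trace 1, zero determinant on the 2 × 2 block), which is easy to confirm numerically; a minimal sketch with real η of our own choosing (the helper name is ours):

```python
import numpy as np

def corner_idempotent(eta):
    """The 2x2 block (on rows/columns {i, j}) of
    (eta^-1 + eta)^-1 * (eta^-1*Eii + Eij + Eji + eta*Ejj)."""
    B = np.array([[1.0 / eta, 1.0],
                  [1.0,       eta]])
    return B / (1.0 / eta + eta)

for eta in (0.5, 2.0, 3.0):          # nonzero, and 1 + eta^2 != 0 over the reals
    M = corner_idempotent(eta)
    assert np.allclose(M @ M, M)     # idempotent, hence k-potent for every k
    assert np.isclose(np.trace(M), 1.0)
```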
Remark 12. The proof of Corollary 11 illustrates the basic procedure of the proof of Theorem 1. In order to determine the image of a matrix A, we use Corollary 10 together with the images of B and C, which usually are diagonal matrices or matrices whose images have already been determined.
If ϕ is a weak preserver from Mn to Mn, then Corollary 11 is still true. Let A = Eij + Dn, B = −(Eij + Eji + Dn) + Eii, and C = ∑l≠i,j Ell; then we can prove ϕ(A) = aiiEii + aijEij + ajiEji + ajjEjj + Dn similarly to proving ϕ(Eij + Eji + Dn) = αiiEii + βijEij + γjiEji + δjjEjj + Dn, and . Since α−1A + α−1(−(Eij + Eji + Dn) + αEii) = −α−1Eji + Eii ∈ Γn for arbitrary nonzero α, the following matrix is k-potent:
If ; then (1/3)Eij + (1/3)(Eij + Eji + 2Eii + Ejj) = (1/3)(2Eij + Eji + 2Eii + Ejj) ∈ Γn implies ; that is, −2/9 ∈ Δ, which is a contradiction. Hence, it is impossible that or .
If ϕ(Eij) = λijEij and ϕ(Eji) = λijEij, then (1/2)(Eii + Eij + Eji + Ejj) ∈ Γn implies (1/2)(ϕ(Eij) + ϕ(Eii + Eji + Ejj)) ∈ Γn; that is, (1/2)(Eii + 2λijEij + Ejj) ∈ Γn, which is a contradiction. Hence, we proved that ϕ(Eij) = λijEij and , or and ϕ(Eji) = λijEij.
3. Proof of Theorem 1
Suppose ϕ ∈ Φn; then we can derive Theorem 1 from Propositions 13, 14, and 16.
Proposition 13. Suppose i, j ∈ 〈n〉 with i ≠ j; then ϕ(Eii) = 0 if and only if ϕ(Ejj) = 0.
Proof. Suppose, on the contrary, that ϕ(Eii) = 0 and ϕ(Ejj) ≠ 0 for some i, j ∈ 〈n〉 with i ≠ j. First, we prove that ϕ(aEii + Ejj) = ϕ(Ejj) for arbitrary a ∈ ℂ. Since the equation already holds when a = 0, we assume a ≠ 0 in the following proof.
Let A = a−1(aEii + Ejj), B = −a−1Ejj, and C = Ejj; then it is easy to verify that A, B, and C satisfy the assumption of Corollary 10. So ϕ(a−1(aEii + Ejj)) + ϕ(−a−1Ejj) and ϕ(Ejj) are orthogonal. Moreover, we can derive ϕ(aEii + Ejj) ∈ Γn from (aEii + Ejj) − aEii ∈ SΓn and ϕ(Eii) = 0. Let a−1(ϕ(aEii + Ejj) − ϕ(Ejj)) = D; then D and ϕ(Ejj) are orthogonal k-potent matrices. Since ϕ(aEii + Ejj) ∈ Γn implies aD + ϕ(Ejj) ∈ Γn, we get aD ∈ Γn. There are two cases on a.
- (1)
If a ∉ Λ, then D = 0; that is, ϕ(aEii + Ejj) = ϕ(Ejj);
- (2)
if a ∈ Λ, we can derive (1/3)ϕ(aEii + Ejj) − (1/3)ϕ[(a − 3)Eii + Ejj] ∈ Γn from (1/3)(aEii + Ejj) − (1/3)[(a − 3)Eii + Ejj] ∈ SΓn. Note that a − 3 ∉ Λ, so ϕ[(a − 3)Eii + Ejj] = ϕ(Ejj); that is, (1/3)ϕ(aEii + Ejj) − (1/3)ϕ(Ejj) = (a/3)D ∈ Γn. Finally, we derive D = 0 from a/3 ∉ Λ and D ∈ Γn, so again ϕ(aEii + Ejj) = ϕ(Ejj).
In either case, ϕ(aEii + Ejj) = ϕ(Ejj) for arbitrary a ∈ ℂ.
Since (b−1 + b)−1(b−1Eii + Eij + Eji + bEjj) ∈ SΓn for every nonzero b ∈ ℂ with 1 + b2 ≠ 0, we have (b−1 + b)−1[ϕ(Eij + Eji) + ϕ(b−1Eii + bEjj)] ∈ Γn, and hence (b−1 + b)−1[ϕ(Eij + Eji) + bϕ(Ejj)] ∈ Γn by ϕ(b−1Eii + bEjj) = bϕ(Ejj). The equation (b−1 + b)−k[ϕ(Eij + Eji) + bϕ(Ejj)]k = (b−1 + b)−1[ϕ(Eij + Eji) + bϕ(Ejj)] is equivalent to bk−1[ϕ(Eij + Eji) + bϕ(Ejj)]k = (1 + b2)k−1[ϕ(Eij + Eji) + bϕ(Ejj)]. Note that ϕ(Eij + Eji) is the constant term of this polynomial identity in b; since the identity holds for infinitely many b, we get ϕ(Eij + Eji) = 0, and (b−1 + b)−1bϕ(Ejj) ∈ Γn follows. Then we can derive ϕ(Ejj) = 0, which contradicts the assumption.
Proposition 14. Suppose ϕ(Eii) = 0 for every i ∈ 〈n〉; then ϕ(X) = 0 for arbitrary X ∈ Sn.
Proof. The proof will be completed by induction on the following equation for arbitrary X ∈ Sn with X[i, i] = xi for every i ∈ 〈n〉:
When m = 1, (10) is equivalent to for arbitrary .
At first, by the assumption, it is already true that ϕ(Eii) = 0 for every i ∈ 〈n〉.
Suppose for every s ∈ 〈n − 1〉 with 1 ≤ i1 < ⋯<is ≤ n; then by the homogeneity of ϕ, we just need to prove the following equation for is+1 with is < is+1 ≤ n:
- (1)
If Bs ∉ SΓn, then there exists l ∈ 〈s〉 such that , and the following statements are true:
Note that ϕ(Bs) = 0 and by the assumption; then the following statements are true:
Since , then , and follows.
- (2)
If Bs ∈ SΓn, then we have the following statements:
Since ; then by case 1, and follows. While , hence we get .
In either case, the required equation is proved; then, by induction, (10) is true for m = 1.
Suppose (10) is true for m ∈ 〈n − 1〉; then we prove the case of m + 1.
Let Xm = X[1,…,m; 1,…,m], g = X[1,…,m;m+1], ; then we have gt = X[m+1; 1,…,m] and the following equation:
For arbitrary nonzero α ∈ ℂ with gtg + α2 ≠ 0, the following n × n matrix B is idempotent:
Note that and An−m−1 satisfy the following equation:
After applying ϕ to the above matrices, we have τϕ(Xm+1 ⊕ An−m−1) ∈ Γn by the inductive assumption. Then ϕ(Xm+1 ⊕ An−m−1) = 0 since α is arbitrary; that is, (10) holds for m + 1.
Finally, by induction, we have proved that ϕ(X) = 0 for every X ∈ Sn.
Remark 15. If ϕ is a weak k-potence preserver from Mn to Mn, then Propositions 13 and 14 still hold (replacing gt with ht for arbitrary X ∈ Mn in the proof of Proposition 14), since Corollary 10 remains true under this assumption.
Proposition 16. Suppose ϕ(Eii) ≠ 0 for every i ∈ 〈n〉; then there exist P ∈ GLn and c ∈ Λ such that ϕ(X) = cP−1XP for every X ∈ Sn.
Proof. The proof will be completed in the following 4 steps.
Step 1. ϕ(Eii) = ciEii, where ci ∈ Λ, for every i ∈ 〈n〉.
Since each ϕ(Eii) is a nonzero k-potent matrix and ϕ(E11), …, ϕ(Enn) are mutually orthogonal by Lemma 8, we can derive from Lemma 4 that there exists P1 ∈ GLn such that P1−1ϕ(Eii)P1 = ciEii for every i ∈ 〈n〉, where ci ∈ Λ. It is obvious that the map φ : X ↦ P1−1ϕ(X)P1 satisfies φ ∈ Φn and φ(Eii) = ciEii for every i ∈ 〈n〉.
Step 2. , for arbitrary diagonal matrix .
The proof of this step can be found in Step 3 of Section 3 in [5].
Step 3. ci = c ∈ Λ for every i ∈ 〈n〉.
Let A = (1/2)(Eij + Eji), B = (1/2)(Eii + Ejj), and C = ∑l∈〈n〉∖{i,j} Ell; then we can derive the following equation from Step 2 and Corollary 10:
Note that pEii + q(Eij + Eji) + (1 − p)Ejj ∈ SΓn for p, q ∈ ℂ with q2 = p(1 − p); in fact, 0 and 1 are the only eigenvalues of this matrix, so it is idempotent. Applying ϕ to the matrix q(Eij + Eji) + [pEii + (1 − p)Ejj], we have H(p) = q(α0Eii + β0Eij + γ0Eji + δ0Ejj) + pciEii + (1 − p)cjEjj = (pci + qα0)Eii + qβ0Eij + qγ0Eji + ((1 − p)cj + qδ0)Ejj ∈ Γn.
Since k is fixed, Δ is a finite set containing all eigenvalues of H(p), and there exists w ∈ {c + d ∣ c, d ∈ Δ} such that the trace of H(p) equals w for infinitely many choices of p; that is, there exists a pair (p1, p2) with p1 ≠ p2 such that the traces of H(p1) and H(p2) both equal w; then we have the following equation:
Naturally, there are infinitely many choices of p2 for fixed p1 such that the above equation is true. If (q1 − q2)/(p2 − p1) equals some a ∈ ℂ, where p2 ≠ p1 and p1, q1 are fixed, then we can derive from the following equation:
Since α0 + δ0 and ci − cj are both fixed for a fixed ϕ, α0 + δ0 ≠ 0 would imply at least two different values of ci − cj = ((q1 − q2)/(p2 − p1))(α0 + δ0) for fixed p1 and infinitely many choices of p2, which is a contradiction. So α0 + δ0 = 0, and ci = cj follows. Hence ci = c ∈ Λ for every i ∈ 〈n〉.
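The one-parameter family used in Step 3 can be checked numerically: whenever q2 = p(1 − p), the 2 × 2 block of pEii + q(Eij + Eji) + (1 − p)Ejj has trace 1 and determinant 0, hence is idempotent. A sketch (the sample values of p are our own):

```python
import numpy as np

# p*Eii + q*(Eij + Eji) + (1-p)*Ejj with q^2 = p*(1-p), restricted to the
# 2x2 block on rows/columns {i, j}; it is idempotent, hence k-potent for all k.
for p in (0.25, 0.5, 0.9, 1.3 + 0j):
    q = np.sqrt(complex(p * (1 - p)))     # q may be non-real when p(1-p) < 0
    M = np.array([[p, q],
                  [q, 1 - p]], dtype=complex)
    assert np.allclose(M @ M, M)
    assert np.isclose(np.trace(M), 1.0)   # eigenvalues are exactly 0 and 1
```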
Step 4. ϕ(X) = X for every X ∈ Sn.
After the discussion in Steps 1, 2, and 3, we already have the following equation:
The proof in this step will be completed by induction on the following equation for arbitrary X ∈ Sn with X[i, i] = xi for every i ∈ 〈n〉:
When m = 2, (25) is equivalent to ϕ(Eij + Eji + Dn) = Eij + Eji + Dn for an arbitrary diagonal matrix Dn ∈ Sn and i, j ∈ 〈n〉 with i < j, since ϕ is homogeneous. The proof will be completed in the following parts (1) and (2).
- (1)
ϕ(Eii+1 + Ei+1i + Dn) = Eii+1 + Ei+1i + Dn for every i ∈ 〈n − 1〉.
We have already derived from Corollary 11 that for every i ∈ 〈n − 1〉, where λi ∈ ℂ depends only on i.
Suppose the map ρ : Sn → Mn satisfies the following equation for every X ∈ Sn,
Without loss of generality, we can assume ϕ(Eii+1 + Ei+1i + Dn) = Eii+1 + Ei+1i + Dn for every i ∈ 〈n − 1〉 and arbitrary Dn.
- (2)
Suppose ϕ(Eij + Eji + Dn) = Eij + Eji + Dn for every i, j with 1 ≤ j − i < s < n − 1; then ϕ(Eij + Eji + Dn) = Eij + Eji + Dn for every i, j with j − i = s.
First, we have to prove that ϕ(xii+1(Eii+1 + Ei+1i) + xi+1i+m(Ei+1i+m + Ei+mi+1) + Dn) = xii+1(Eii+1 + Ei+1i) + xi+1i+m(Ei+1i+m + Ei+mi+1) + Dn for arbitrary nonzero xii+1 and xi+1i+m ∈ ℂ.
By the assumption, we already have the following equations:
Let X1 = xii+1(Eii+1 + Ei+1i) + xi+1i+m(Ei+1i+m + Ei+mi+1) + Dn, X2 = xii+1(Eii+1 + Ei+1i) + Dn, and X3 = xi+1i+m(Ei+1i+m + Ei+mi+1) + Dn. Then the following statements are true
Let A = X1, B = −(X2 − ai+1Ei+1i+1 − ai+mEi+mi+m), and , then A, B, and C satisfy the assumption of Corollary 10. Hence we get ϕ(A) + ϕ(B) and ϕ(C) are orthogonal; that is,
Similarly, we can derive the following equation from Corollary 10:
Comparing the above two equations, we have zi = yi+m = 0, zi+1 = yi+1, zii+1 = zi+1i = xii+1, and yi+1i+m = yi+mi+1 = xi+1i+m; that is, ϕ(X1) = X1 + yi+1Ei+1i+1.
We will prove zi+1 = yi+1 = 0. For arbitrary nonzero α with , let , and ; then τX1 + τX4 ∈ SΓn implies τϕ(X1) + τϕ(X4) ∈ Γn; that is, the following matrix is k-potent, since ϕ(X4) = X4 by the assumption:
Now we prove ϕ(Eii+m + Ei+mi + Dn) = Eii+m + Ei+mi + Dn.
By Corollary 11, we already have .
For arbitrary nonzero α with 2 + α2 ≠ 0, (2α−1 + α)−1(Eii+m + Ei+mi + Dn) − (2α−1 + α)−1(−α−1(Eii + Eii+1 + Ei+1i + Ei+1i+1) − Ei+1i+m − Ei+mi+1 − αEi+mi+m + Dn) = (2α−1 + α)−1(α−1(Eii + Eii+1 + Ei+1i + Ei+1i+1) + (Eii+m + Ei+1i+m) + (Ei+mi + Ei+mi+1) + αEi+mi+m) is idempotent.
After applying ϕ to the above matrices, we have (2α−1 + α)−1ϕ(Eii+m + Ei+mi + Dn) − (2α−1 + α)−1ϕ(−α−1(Eii + Eii+1 + Ei+1i + Ei+1i+1) − Ei+1i+m − Ei+mi+1 − αEi+mi+m + Dn) = .
Then λii+m = 1 by Lemma 5.
By induction, we have proved that ϕ(Eij + Eji + Dn) = Eij + Eji + Dn for every i, j with 1 ≤ i < j ≤ n.
- (3)
Suppose (25) is true for every s with 2 ≤ s < m ≤ n; then we prove that it holds for m.
For arbitrary X ∈ Sn with X[i, i] = xi for every i ∈ 〈n〉, let A, B, U, V, , and τ satisfy the following equations:
Note that and ϕ(C) = C by the assumption; then τϕ(A) + τϕ(C) and are orthogonal by Corollary 10; that is, for some Y ∈ Mn.
On the other hand, implies ∈Γn by τϕ(A) + τϕ(C) ∈ Γn. By Lemma 5, we can derive the following equations:
Let B1 and B2 satisfy the following equations:
Comparing the above three sets of equations, we get ϕ(A) = A, which is equivalent to (25) for m.
By induction, we have proved that ϕ(X) = X for arbitrary X ∈ Sn.
Remark 17. If ϕ is a weak k-potence preserver from Mn to Mn, then the proofs in Steps 1, 2, and 3 of Proposition 16 still hold, and in Step 4 we prove ϕ(X) = X or ϕ(X) = Xt. We omit the detailed proof since the case of Xt is entirely analogous after changing the relevant notation.