Volume 2011, Issue 1 164371
Research Article
Open Access

On Generalized Transitive Matrices

Jing Jiang (Corresponding Author)

School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu, Sichuan 611731, China

Mathematics and Statistics College, Chongqing University of Arts and Sciences, Chongqing 402160, China

Lan Shu

School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu, Sichuan 611731, China

Xinan Tian

Modern Education Technology Center, Chongqing University of Arts and Sciences, Chongqing 402160, China
First published: 31 October 2011
Academic Editor: Vu Phat

Abstract

Transitivity of generalized fuzzy matrices over a special type of semiring is considered. The semiring is called incline algebra which generalizes Boolean algebra, fuzzy algebra, and distributive lattice. This paper studies the transitive incline matrices in detail. The transitive closure of an incline matrix is studied, and the convergence for powers of transitive incline matrices is considered. Some properties of compositions of incline matrices are also given, and a new transitive incline matrix is constructed from given incline matrices. Finally, the issue of the canonical form of a transitive incline matrix is discussed. The results obtained here generalize the corresponding ones on fuzzy matrices and lattice matrices shown in the references.

1. Introduction

Generalized transitive matrices [1] over a special type of semiring are introduced. The semiring is called incline algebra. Boolean algebra, fuzzy algebra, and distributive lattices are inclines, and Boolean matrices, fuzzy matrices, and lattice matrices are the prototypical examples of incline matrices. Inclines are useful tools in diverse areas such as the design of switching circuits, automata theory, medical diagnosis, information systems, and clustering. Besides, inclines are applied to nervous systems, probabilistic reasoning, dynamic programming, and decision theory.

Transitive matrices are an important type of generalized matrices which represent transitive relations (see, e.g., [2–6]). Transitive relations play an important role in clustering, information retrieval, preference, and so on [5, 7, 8]. The transitivity problems of matrices over some special semirings have been discussed by many authors (see, e.g., [9–17]). In 1982, Kim [18] introduced the concept of transitive binary Boolean matrices. Hashimoto [11] presented the concept of transitive fuzzy matrices and considered the convergence of powers of transitive fuzzy matrices. Kołodziejczyk [10] gave the concept of s-transitive fuzzy matrices and considered the convergence of powers of s-transitive fuzzy matrices. Tan [17, 19] discussed the convergence of powers of transitive lattice matrices. Han and Li [1] studied the convergence of powers of incline matrices. In [12, 13], the canonical form of a transitive matrix over the fuzzy algebra was established, and, in [14, 15, 17], the canonical form of a transitive matrix over a distributive lattice was characterized. In [9, 16, 20], some properties of compositions of generalized fuzzy matrices and lattice matrices were examined.

In this paper, we continue to study transitive incline matrices. In Section 3, the transitive closure of an incline matrix is discussed. In Section 4, the convergence of powers of transitive incline matrices is considered. In Section 5, some properties of compositions of incline matrices are given and a new transitive incline matrix is constructed from given incline matrices. In Section 6, the issue of the canonical form of an incline matrix is further discussed. Some results in this paper generalize the corresponding results in [14, 17, 20].

2. Definitions and Preliminary Lemmas

In this section, we give some definitions and lemmas.

Definition 2.1 (see [1]). A nonempty set L with two binary operations + and · is called an incline if it satisfies the following conditions:

  • (1)

    (L, +) is a semilattice;

  • (2)

    (L, ·) is a commutative semigroup;

  • (3)

    x(y + z) = xy + xz for all x, y, z ∈ L;

  • (4)

    x + xy = x for all x, y ∈ L.

In an incline L, define a relation ≤ by x ≤ y ⇔ x + y = y. Obviously, xy ≤ x for all x, y ∈ L. It is easy to see that ≤ is a partial order relation over L and satisfies the following properties.
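A minimal sketch, assuming the fuzzy incline ([0, 1], max, min): it spot-checks conditions (1)–(4) of Definition 2.1 and the induced order x ≤ y ⇔ x + y = y on a grid of sample values; the helper names are illustrative.

```python
# Fuzzy incline: L = [0, 1], addition = max, multiplication = min.
from itertools import product

plus = max            # incline addition
times = min           # incline multiplication
samples = [i / 4 for i in range(5)]   # 0, 0.25, 0.5, 0.75, 1

for x, y, z in product(samples, repeat=3):
    assert plus(x, y) == plus(y, x) and plus(plus(x, y), z) == plus(x, plus(y, z))  # (1) semilattice
    assert times(x, y) == times(y, x)                                               # (2) commutative semigroup
    assert times(x, plus(y, z)) == plus(times(x, y), times(x, z))                   # (3) x(y + z) = xy + xz
    assert plus(x, times(x, y)) == x                                                # (4) x + xy = x

for x, y in product(samples, repeat=2):
    assert (plus(x, y) == y) == (x <= y)      # x <= y iff x + y = y
    assert plus(times(x, y), x) == x          # xy <= x
print("Definition 2.1 holds on the sampled fuzzy incline")
```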

Proposition 2.2 (see [21]). Let L be an incline and a, b, c ∈ L. Then,

  • (1)

    0 ≤ a ≤ 1;

  • (2)

    if a ≤ b, then a + c ≤ b + c, ac ≤ bc, and ca ≤ cb;

  • (3)

    a ≤ a + b, and a + b is the least upper bound of a and b;

  • (4)

    ab ≤ a and ab ≤ b. In other words, ab is a lower bound of a and b;

  • (5)

    a ≤ cb ⇒ a ≤ b;

  • (6)

    a + b = 0 if and only if a = b = 0;

  • (7)

    ab = 1 if and only if a = b = 1.

Boolean algebra ({0,1}, ∨, ∧), fuzzy algebra ([0,1], ∨, T) (T is a t-norm), and distributive lattices are inclines. Let (L, ≤) be a poset and a, b ∈ L. If a ≤ b or b ≤ a, then a and b are called comparable. Otherwise, a and b are called incomparable, in notation a ∥ b. If any two elements a, b ∈ L are comparable, then ≤ is linear and L is called a chain. An unordered poset is a poset in which a ∥ b for all a ≠ b. A chain B in a poset L is a nonempty subset of L which, as a subposet, is a chain. An antichain B in a poset L is a nonempty subset which, as a subposet, is unordered. The width of a poset L, denoted by w(L), is n, where n is a natural number, iff there is an antichain in L of n elements and all antichains in L have at most n elements. A poset (L, ≤) is called an incline if L satisfies Definition 2.1. It is clear that any chain is an incline, which is called a linear incline.

An element a of an incline L is said to be idempotent if a2 = a. The set of all idempotent elements in L is denoted by I(L), that is, I(L) = {a ∈ L ∣ a2 = a}.

A matrix is called an incline matrix if its entries belong to an incline. In this paper, the incline (L, ≤, +, ·) is always supposed to be a commutative incline with the least and greatest elements 0 and 1, respectively. Let Mn(L) be the set of all n × n matrices over L. For any A in Mn(L), we will denote by aij or Aij the element of L which stands in the (i, j)th entry of A. For convenience, we will use N to denote the set {1,2, …, n}, and Z+ denotes the set of all positive integers.
  • For any A, B, C in Mn(L) and a in L, we define

  • A + B = C iff cij = aij + bij for all i, j in N;

  • AB = C iff $c_{ij} = \sum_{k \in N} a_{ik}b_{kj}$ for all i, j in N;

  • AT = C iff cij = aji for all i, j in N;

  • aA = C iff cij = a·aij for all i, j in N;

  • A ≤ B iff aij ≤ bij for all i, j in N, and A ≥ B iff B ≤ A;

  • In = (tij), where

(2.1) $t_{ij} = \begin{cases} 1, & i = j, \\ 0, & i \ne j. \end{cases}$
  • For any A in Mn(L), the powers of A are defined as follows:

  • A0 = In, Al = Al−1A, l ∈ Z+.

  • The (i, j)th entry of Al is denoted by $a_{ij}^{(l)}$ (l ∈ Z+), and obviously

    (2.2) $a_{ij}^{(l)} = \sum_{1 \le k_1, k_2, \ldots, k_{l-1} \le n} a_{ik_1}a_{k_1k_2}\cdots a_{k_{l-1}j}$.
    The following properties will be used in this paper.

  • (1)

    Mn(L) is a semigroup with the identity element In with respect to the multiplication;

  • (2)

    (Mn(L), +, ·) is a semiring.

If A2 ≤ A, then A is called transitive; if A2 = A, then A is called idempotent; if AT = A, then A is called symmetric; if A ≤ A2, then A is called increasing; if A ≥ In, then A is called reflexive; if aii = 0 for all i ∈ N, then A is called irreflexive; if Am = 0 for some m ∈ Z+, then A is called nilpotent; if aij = 0 for i, j = 1, 2, …, n, then A is called the zero matrix and denoted by 0n; A is called a permutation matrix if exactly one of the elements of each of its rows and each of its columns is 1 and the others are 0.
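A minimal sketch of the matrix operations and matrix classes above, assuming the fuzzy incline ([0, 1], max, min); the function names (mat_mul, is_transitive, and so on) are illustrative.

```python
# Matrix operations of Section 2 over the fuzzy incline ([0, 1], max, min).
def mat_mul(A, B):
    n = len(A)
    return [[max(min(A[i][k], B[k][j]) for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_le(A, B):                      # A <= B entrywise
    return all(a <= b for ra, rb in zip(A, B) for a, b in zip(ra, rb))

def transpose(A):
    return [list(col) for col in zip(*A)]

def is_transitive(A):  return mat_le(mat_mul(A, A), A)         # A^2 <= A
def is_idempotent(A):  return mat_mul(A, A) == A               # A^2 = A
def is_symmetric(A):   return A == transpose(A)                # A^T = A
def is_reflexive(A):   return all(A[i][i] == 1 for i in range(len(A)))   # A >= I_n

A = [[0.5, 0.3, 0.3],
     [0.2, 0.6, 0.2],
     [0.3, 0.3, 0.7]]
print(is_transitive(A), is_symmetric(A), is_reflexive(A))      # True False False
```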

Let B ∈ Mn(L). The matrix B is called the transitive closure of A if B is transitive and A ≤ B, and, for any transitive matrix C in Mn(L) satisfying A ≤ C, we have B ≤ C. The transitive closure of A is denoted by A+. It is clear that if A has a transitive closure, then it is unique.

For any A ∈ Mn(L) with index, the sequence
(2.3) $A, A^2, A^3, \ldots, A^m, \ldots$
is of the form
(2.4) $A, A^2, \ldots, A^{k-1}, A^{k}, \ldots, A^{k+d-1}, A^{k}, A^{k+1}, \ldots,$
where k = k(A) is the least integer such that Ak = Ak+d for some d > 0. The least integers k = k(A) and d = d(A) are called the index and the period of A, respectively.
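A brute-force sketch of the index k(A) and period d(A), assuming the fuzzy incline ([0, 1], max, min): it lists the powers of A until one repeats. Termination relies on the fact that, under max–min, every entry of every power of A is an entry of A.

```python
# Index k(A) and period d(A) of the power sequence A, A^2, A^3, ...
def mat_mul(A, B):
    n = len(A)
    return [[max(min(A[i][k], B[k][j]) for k in range(n)) for j in range(n)]
            for i in range(n)]

def index_and_period(A):
    powers, P = [], A
    while P not in powers:            # terminates: max-min powers reuse the entries of A
        powers.append(P)
        P = mat_mul(P, A)
    k = powers.index(P) + 1           # least k with A^k = A^(k+d) for some d > 0
    d = len(powers) - powers.index(P) # the corresponding least period d
    return k, d

A = [[0.0, 0.8, 0.0],
     [0.0, 0.0, 0.8],
     [0.8, 0.0, 0.0]]
print(index_and_period(A))            # (1, 3): A = A^4, so k(A) = 1 and d(A) = 3
```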

The following definition will be used in this paper.

Definition 2.3. A matrix A = (aij) ∈ Mn(L) is said to be

  • (1)

    row diagonally dominant if aij = aiiaij for all i, j ∈ N;

  • (2)

    column diagonally dominant if aij = aijajj for all i, j ∈ N;

  • (3)

    weakly diagonally dominant if for any i ∈ N, either aiiaij = aij for all j ∈ N or aiiaji = aji for all j ∈ N;

  • (4)

    strongly diagonally dominant if aij = aiiaij = aijajj for all i, j ∈ N;

  • (5)

    nearly irreflexive if aiiaij = aii for all i, j ∈ N.
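The five conditions of Definition 2.3 can be written as predicates; a minimal sketch, assuming the fuzzy incline ([0, 1], max, min), where the incline product ab is min(a, b).

```python
# Predicates for Definition 2.3 over the fuzzy incline ([0, 1], max, min).
def row_dd(A):        # (1) a_ij = a_ii a_ij for all i, j
    return all(A[i][j] == min(A[i][i], A[i][j]) for i in range(len(A)) for j in range(len(A)))

def col_dd(A):        # (2) a_ij = a_ij a_jj for all i, j
    return all(A[i][j] == min(A[i][j], A[j][j]) for i in range(len(A)) for j in range(len(A)))

def weak_dd(A):       # (3) for each i, one of the two conditions holds for all j
    n = len(A)
    return all(all(min(A[i][i], A[i][j]) == A[i][j] for j in range(n)) or
               all(min(A[i][i], A[j][i]) == A[j][i] for j in range(n)) for i in range(n))

def strong_dd(A):     # (4) a_ij = a_ii a_ij = a_ij a_jj for all i, j
    return row_dd(A) and col_dd(A)

def nearly_irreflexive(A):   # (5) a_ii a_ij = a_ii for all i, j
    return all(min(A[i][i], A[i][j]) == A[i][i] for i in range(len(A)) for j in range(len(A)))

A = [[0.9, 0.4], [0.3, 0.8]]
print(row_dd(A), col_dd(A), weak_dd(A), strong_dd(A), nearly_irreflexive(A))   # True True True True False
```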

Lemma 2.4. I(L) is a distributive lattice, where I(L) = {a ∈ L ∣ a2 = a}.

The proof can be seen in [1].

3. Transitive Closure of an Incline Matrix

In this section, some properties of the transitive closure of an incline matrix are given and an algorithm for computing the transitive closure of an incline matrix is posed.

Lemma 3.1. For any A in Mn(L), we have A+ = A + A2 + ⋯ + An.

The proof can be seen in [21].

Lemma 3.2. Let A ∈ Mn(L). Then,

  • (1)

    for any i, j ∈ N with i ≠ j and any k ≥ n, there exists s (s ∈ {1, 2, …, n − 1}) such that $a_{ij}^{(k)} \le a_{ij}^{(s)}$;

  • (2)

    for any i ∈ N and any k ≥ n, there exists t ∈ N such that $a_{ii}^{(k)} \le a_{ii}^{(t)}$.

Proof. (1) Let $T = a_{ii_1}a_{i_1i_2}\cdots a_{i_{k-1}j}$ be any term of $a_{ij}^{(k)}$, where k ≥ n and 1 ≤ i, i1, i2, …, ik−1, j ≤ n. Since the number of indices in T is greater than n, a repetition among them must occur. Let us call the sequence of entries between two occurrences of one index a cycle. If we drop the cycle, a new expression T1 with m1 < k entries is obtained. If m1 ≥ n, there must be a cycle in T1; then we delete the cycle and obtain a new expression T2 with m2 < m1 entries. The deleting method can be applied repeatedly until the new expression Ts contains ms < n entries. According to the properties of the operation “·”, T ≤ T1 ≤ ⋯ ≤ Ts, but Ts is a term of the (i, j)th entry of $A^{m_s}$ with ms < n, so we have $T \le a_{ij}^{(m_s)}$, and so $a_{ij}^{(k)} \le a_{ij}^{(m_s)}$. This completes the proof.

(2) Let $T = a_{ii_1}a_{i_1i_2}\cdots a_{i_{k-1}i}$ be any term of $a_{ii}^{(k)}$, where k ≥ n + 1 and 1 ≤ i, i1, i2, …, ik−1 ≤ n. Since the number of indices in T is greater than n, there must be two indices iu and iv such that iu = iv for some u, v (u < v). Then, we delete $a_{i_ui_{u+1}}\cdots a_{i_{v-1}i_v}$ from T and obtain a new expression T1 with m1 < k entries. If m1 ≥ n + 1, there are still two identical numbers among the subscripts i, i1, …, iu−1, iu, iv+1, …, ik−1; then we apply the deleting method used above. The method can be applied repeatedly until the subscripts left are pairwise different. Finally, we get a new term Tt with mt ≤ n entries. According to the properties of the operation “·”, T ≤ T1 ≤ ⋯ ≤ Tt, but Tt is a term of the (i, i)th entry of $A^{m_t}$ with mt ≤ n, so we have $T \le a_{ii}^{(m_t)}$, and so $a_{ii}^{(k)} \le a_{ii}^{(m_t)}$. This completes the proof.

Lemma 3.3. Let A ∈ Mn(L). Then,

  • (1)

    for any k ≥ 1;

  • (2)

    for any mn.

Proof. From Lemma 3.2, the proof is obvious.

Proposition 3.4. If A+ is reflexive, then (A+) + = A+.

Proof. Since A+ ≥ In, we have A+ ≤ (A+)2. On the other hand, by Lemmas 3.1 and 3.3, we see that (A+)2 ≤ A+. Hence, (A+)+ = A+.

Proposition 3.5. For any A = (aij) ∈ Mn(L) with index, if A is column (or row) diagonally dominant, then A+ = As, where As is transitive.

Proof. We only consider the case A is column diagonally dominant.

For any integer l > 0, we have AlsAs since As is transitive, and so , for all i, jN. Since A is column diagonally dominant, we have (because is the sum of some term in ) . Hence, , then A+As. On the other hand, since for any mn, we have A+As. Therefore, A+ = As.

Corollary 3.6. For any A = (aij) ∈ Mn(L) with index, if A is strongly diagonally dominant, then A+ = As, where As is transitive.

Proof. Obviously, any strongly diagonally dominant matrix is column (or row) diagonally dominant. Hence, the conclusion follows from Proposition 3.5.

Proposition 3.7. For any A = (aij) ∈ Mn(L) with index, if A is weakly diagonally dominant, then A+ = As, where As is transitive.

Proof. Since As is transitive, we have AsAls, and so , for any i, jN and any integer l > 0. Since A is weakly diagonally dominant, we have for any iN, either aiiaij = aij for all jN or aiiaji = aji for all jN.

Let

(3.1)

Case 1. If for all jN, we have . Then,

(3.2)

Case 2. If for all jN, we have . Then,

(3.3)

From above, we see that for any i, jN. Hence, , then A+As. On the other hand, since , we have A+As. Therefore, A+ = As.

Proposition 3.8. Let A = (aij) ∈ Mn(L). If the entries of A satisfy aiiajk = ajk (1 ≤ i, j, k ≤ n), then

  • (1)

    A2 ≤ A3;

  • (2)

    $a_{ii}^{(s)} = a_{ii}$ for all i, s ∈ N;

  • (3)

    A converges to Ak(A) with k(A) ≤ n − 1.

Proof. (1) Since any term T of the (i, j)th entry of A2 is of the form , we have (1 ≤ i, j, kn). Because the hypothesis aiiajk = ajk, we can get . Since aikakkakj is a term of , we have .

Thus, A2 ≤ A3.

(2) Because the hypothesis aiiajk = ajk, we have aiiaiiaii = aii. Hence,

(3.4)
then (for all sN).

(3) Since A2 ≤ A3, we have Ak−1 ≤ Ak (for any integer k ≥ 3), and so An−1 ≤ An. Now we prove that An−1 ≥ An. By (2), it is sufficient only to show that $a_{ij}^{(n-1)} \ge a_{ij}^{(n)}$ for i ≠ j. Let

(3.5)
Since the number of indices in T is n + 1, there must be two indices iu and iv such that iu = iv for some u, v (u < v). Then, T ≤ T′, where T′ is obtained from T by deleting $a_{i_ui_{u+1}}\cdots a_{i_{v-1}i_v}$. Since T′ is a term of $a_{ij}^{(m)}$, we have $T \le a_{ij}^{(m)}$ and so $a_{ij}^{(n)} \le a_{ij}^{(m)}$, that is, An ≤ Am (m = n − (v − u) < n) ≤ An−1 (because Ak−1 ≤ Ak for any integer k ≥ 3). Therefore, An−1 = An. This proves the proposition.

Lemma 3.9. Let A = (aij) ∈ Mn(L). If the entries of A satisfy aiiajk = ajk (1 ≤ i, j, k ≤ n), then adj(A) = An−1.

The proof can be seen in [22].

Proposition 3.10. Let A = (aij) ∈ Mn(L). If the entries of A satisfy aiiajk = ajk (1 ≤ i, j, k ≤ n), then (adjA)+ = An−1.

Proof. By the Proposition 3.8, we have An−1 = An. By Lemma 3.9, we can get adj(A) = An−1. Thus, (adjA) + = (An−1) + = An−1.

Corollary 3.11. Let A = (aij) ∈ Mn(L). If A is reflexive, then

  • (1)

    A ≤ A2;

  • (2)

    for all iN;

  • (3)

    A converges to Ak(A) with k(A) ≤ n − 1;

  • (4)

    (adjA) + = An−1;

  • (5)

    ((adjA) +) + = An−1.

Proof. From Propositions 3.10 and 3.8, the proof is obvious.

Proposition 3.12. Let A = (aij) ∈ Mn(L). If L has no nilpotent elements and A is nilpotent, then

  • (1)

    (adjA) 2 = 0;

  • (2)

    (adjA) + = adjA.

Proof. (1) The proof can be seen in [23].

(2) From (1), the proof is obvious.

Proposition 3.13. Let A = (aij) ∈ Mn(L), A+ = (tij) n×n, , then

  • (1)

    ;

  • (2)

    ;

  • (3)

    (for all i, jN and ij).

Proof. (1) Since (In + A) is reflexive, we have (In + A) is increasing. By Lemma 3.1, we can get the conclusion.

(2) By Corollary 3.11, (In + A) converges to with k(In + A) ≤ n − 1. Thus, the conclusion is obtained.

(3) From (1) and (2), the proof is obvious.

At the end of this section, we may establish the following algorithm to find (A+)ij (i ≠ j).

Let A ∈ Mn(L) and B = In + A.

Step 1. Compute successively

(3.6)
In (3.6), find pk − 1 such that , but . Then, . Go to Step 2.

Step 2. Find (A+)ij (i ≠ j).

Let (ij). Stop.
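A hedged rendering of Steps 1 and 2, assuming the fuzzy incline ([0, 1], max, min): the reflexive matrix B = In + A is squared repeatedly until the power stabilises, and the off-diagonal entries of the limit are read off as those of A+ (the reading of Proposition 3.13 adopted here). The helper names are illustrative.

```python
# Transitive closure via B = I_n + A over the fuzzy incline ([0, 1], max, min).
def mat_mul(A, B):
    n = len(A)
    return [[max(min(A[i][k], B[k][j]) for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_add(A, B):
    return [[max(a, b) for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def closure_offdiagonal(A):
    n = len(A)
    I = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    B = mat_add(I, A)
    P = B
    while True:                       # Step 1: square B, B^2, B^4, ... until the power stabilises
        Q = mat_mul(P, P)
        if Q == P:
            break
        P = Q
    return P                          # Step 2: for i != j, (A+)_ij = P_ij

A = [[0.0, 0.5, 0.0],
     [0.0, 0.0, 0.7],
     [0.2, 0.0, 0.0]]
C = closure_offdiagonal(A)
print(C[0][2], C[2][1])               # 0.5 0.2: closure strengths of the paths 1->2->3 and 3->1->2
```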

4. Convergence of Powers of Transitive Incline Matrices

In this section, the convergence of powers of transitive incline matrices in Mn(L) will be discussed.

Definition 4.1. Let A = (aij) ∈ Mn(L). For any i, j ∈ N, if aijaik ≠ aik implies aijakj = akj for all k ∈ N, then A is called a strongly transitive matrix.

Theorem 4.2. If A = (aij) ∈ Mn(L) is a strongly transitive matrix, then A is transitive.

Proof. Since A is strongly transitive, for any i, j ∈ N, we have aijaik = aik for all k ∈ N, or aijaik ≠ aik and aijakj = akj for all k ∈ N.

Case 1. For any i, j ∈ N, suppose aijaik = aik for all k ∈ N; then we can get aij ≥ aijaik = aik ≥ aikakj for all k ∈ N, so $a_{ij} \ge \sum_{k \in N} a_{ik}a_{kj} = a_{ij}^{(2)}$, which means that A is transitive.

Case 2. For any i, j ∈ N, suppose aijaik ≠ aik and aijakj = akj for all k ∈ N; then we can get aij ≥ aijakj = akj ≥ aikakj for all k ∈ N, so $a_{ij} \ge a_{ij}^{(2)}$, which means that A ≥ A2. This completes the proof.
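A randomised spot check of Theorem 4.2, assuming the fuzzy incline ([0, 1], max, min). The predicate below encodes the per-index reading of Definition 4.1 (for every i, j, k, either aijaik = aik or aijakj = akj); every matrix it accepts is verified to be transitive.

```python
# Strongly transitive (per-index reading) implies transitive: a randomised check.
import random

def mat_mul(A, B):
    n = len(A)
    return [[max(min(A[i][k], B[k][j]) for k in range(n)) for j in range(n)]
            for i in range(n)]

def is_transitive(A):
    return all(x <= y for rx, ry in zip(mat_mul(A, A), A) for x, y in zip(rx, ry))

def is_strongly_transitive(A):        # for all i, j, k: a_ij a_ik = a_ik  or  a_ij a_kj = a_kj
    n = len(A)
    return all(min(A[i][j], A[i][k]) == A[i][k] or min(A[i][j], A[k][j]) == A[k][j]
               for i in range(n) for j in range(n) for k in range(n))

random.seed(0)
found = 0
for _ in range(2000):
    A = [[random.choice([0.0, 0.3, 0.6, 1.0]) for _ in range(3)] for _ in range(3)]
    if is_strongly_transitive(A):
        found += 1
        assert is_transitive(A)       # Theorem 4.2
print("strongly transitive samples checked:", found)
```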

Remark 4.3. A transitive matrix A is not necessarily a strongly transitive matrix.

Example 4.4. Consider the incline L = {0, a, b, c, 1} whose diagram is shown in Figure 1.

Let now

(4.1)

Then, A2 ≤ A, which means that A is transitive. But a11a12 = ca < a12 = a (because ca < a), while a11a21 = cb < a21 = b (because cb < b); that is, a11a12 ≠ a12 does not imply a11a21 = a21, which means that A is not a strongly transitive matrix.

Figure 1. The diagram of the incline L = {0, a, b, c, 1}.

Theorem 4.5. Let A = (aij) ∈ Mn(L). If A ≤ A2 and aii ∈ I(L) (for all i ∈ N), then

  • (1)

    for all i ∈ N;

  • (2)

    A converges to Ak(A) with k(A) ≤ n.

Proof. (1) By the hypothesis AA2, it follows that AkAk+1 (for all kN). Hence, AnAn+1 and (for all sN). Since aiiI(L), we can get (for all sN). Thus and .

(2) By (1), it is sufficient to verify An ≥ An+1.

Now, any term T of the (i, j)th entry of An is of the form , where 1 ≤ i1, i2, …, in−1n. Since the number of indices in T is greater than n, there must be two indices iu and iv such that iu = iv for some u, v (0 ≤ u < vn, i0 = i, in = j). Then, (because is a term of ) (because (1) (for all i, sN) and aiiI(L)) (because is the sum of some term in ) (because AkAk+1 for any kN). Thus, , and so . Therefore, AnAn+1. This completes the proof.
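A numerical spot check of Theorem 4.5, assuming the fuzzy incline ([0, 1], max, min), in which every element is idempotent so the hypothesis aii ∈ I(L) holds automatically; reflexive matrices are used as a convenient family of increasing matrices.

```python
# Theorem 4.5 over the fuzzy incline: increasing matrices converge by the n-th power.
import random

def mat_mul(A, B):
    n = len(A)
    return [[max(min(A[i][k], B[k][j]) for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_le(A, B):
    return all(a <= b for ra, rb in zip(A, B) for a, b in zip(ra, rb))

def power(A, m):
    P = A
    for _ in range(m - 1):
        P = mat_mul(P, A)
    return P

random.seed(1)
n = 4
for _ in range(200):
    A = [[random.choice([0.0, 0.25, 0.5, 0.75, 1.0]) for _ in range(n)] for _ in range(n)]
    for i in range(n):
        A[i][i] = 1.0                          # reflexive, hence A <= A^2 and a_ii idempotent
    assert mat_le(A, mat_mul(A, A))            # A is increasing
    assert power(A, n) == power(A, n + 1)      # convergence with k(A) <= n
print("checked 200 reflexive (hence increasing) fuzzy matrices")
```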

Corollary 4.6. If A = (aij) ∈ Mn(L) is strongly transitive, then A converges to Ak(A) with k(A) ≤ n.

Proof. Since A is strongly transitive, by Theorem 4.2, we have AA2 and aiiI(L) (for all iN), the conclusion is obtained.

Theorem 4.7. Let A = (aij) ∈ Mn(L) be transitive. If B = (bij) ∈ Mn(I(L)) and diag(A) ≤ B ≤ A, where diag(A) = (cij) with cii = aii (for all i ∈ N) and cij = 0 (i ≠ j, i, j ∈ N). Then,

  • (1)

    B converges to Bk(B) with k(B) ≤ n;

  • (2)

    if A satisfies (or ) for some k in N, then B converges to Bk(B) with k(B) ≤ n − 1;

  • (3)

    if B satisfies (or ) for some k in N, then B converges to Bk(B) with k(B) ≤ n − 1.

Proof. (1) Firstly, bii = aii (for all iN) since diag (A) ≤ BA. Since biiI(L), we have aiiI(L). By Theorem 4.5, we can get for all iN.

It follows that any term T of the (i, j)th entry of Bn is of the form , where 1 ≤ i1, i2, …, in−1n. Since {i, i1, i2, …, in−1, j}⊂{1,2, …, n} and n + 1 > n, there are u, v such that iu = iv (0 ≤ u < vn, i0 = i, in = j). Then, and . Since A is transitive, we have AAk for all k ≥ 1, and so for all i, jN. Thus, (because BI(L)) (because AB) . Since is the sum of some term in , we have (because BIn(L)), and so (because T is any term of ). Thus, Bn−1Bn.

Certainly, BnBn+1Bn+2 ≥ ⋯.

On the other hand, for any term of , there must be two indices iu and iv such that iu = iv for some u, v (0 ≤ u < vn, i0 = i, in = j). Then, (because is a term of ) (because BMn(I(L))) (because is the sum of some term in ) (because BkBk+1 for any kn − 1). Thus, , and so . Therefore, BnBn+1.

Consequently, we have Bn = Bn+1. This completes the proof.

(2) By the proof of (1), we have BnBn−1. Hence, Bn−1BnBn+1Bn+2 ≥ ⋯. In the following, we will show that Bn−1Bn. It is clear that any term T of the (i, j)th entry of Bn−1 is of the form , where 1 ≤ i, ii, i2, …, in−2, jn. Let i0 = i and in−1 = j.

Case 1. If iu = iv for some u and v (u < v, vu ≥ 1), then (because is a term of ) (because BMn(I(L))) (because is the sum of some term in ) (because Bn−1BnBn+1 ≥ ⋯). Thus, , and so . Therefore, Bn−1Bn.

Case 2. Suppose that iuiv for all uv. By the hypothesis for some k, we have (because AB) . Then, (because BMn(I(L))) (because ) (because is a term of ). Thus, , and so . Therefore, Bn−1Bn.

The case of is similar to that of the hypothesis .

Consequently, we have Bn−1 = Bn.

(3) The proof of (3) is similar to that of (2). This completes the proof.

Theorem 4.7 generalizes and develops Theorem  4.1 of Tan [17].

5. Compositions of Transitive Matrices

In this section, we construct a transitive matrix from given incline matrices and construct a new incline matrix with some special properties from transitive matrices. We know that any idempotent matrix is also a transitive matrix.

Theorem 5.1. Let A = (aij) ∈ Mn(L). If A is transitive and row diagonally dominant, then B = (bij) is idempotent, where bij = aijaji.

Proof. For any i, j in N, bikbkj = (aikaki)(akjajk) ≤ (aikakj)(ajkaki) ≤ aijaji = bij (because A ≥ A2). Thus, B2 ≤ B. On the other hand, since biibij = aii(aijaji) = (aiiaij)aji = aijaji = bij (because A is row diagonally dominant), we have (B2)ij ≥ biibij = bij. Thus, B2 ≥ B. Therefore, B2 = B. This completes the proof.
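A spot check of Theorem 5.1, assuming the fuzzy incline ([0, 1], max, min): for A transitive and row diagonally dominant, the matrix B with bij = aijaji comes out idempotent. The test matrices are idempotent reflexive limits of powers, a convenient (not exhaustive) family satisfying the hypotheses.

```python
# Theorem 5.1: b_ij = a_ij a_ji is idempotent when A is transitive and row diagonally dominant.
import random

def mat_mul(A, B):
    n = len(A)
    return [[max(min(A[i][k], B[k][j]) for k in range(n)) for j in range(n)]
            for i in range(n)]

random.seed(2)
n = 4
for _ in range(200):
    A = [[random.choice([0.0, 0.25, 0.5, 0.75]) for _ in range(n)] for _ in range(n)]
    for i in range(n):
        A[i][i] = 1.0                  # reflexive, hence row diagonally dominant
    while True:                        # iterate to an idempotent (hence transitive) limit
        A2 = mat_mul(A, A)
        if A2 == A:
            break
        A = A2
    B = [[min(A[i][j], A[j][i]) for j in range(n)] for i in range(n)]
    assert mat_mul(B, B) == B          # B is idempotent
print("Theorem 5.1 spot-checked on 200 matrices")
```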

Definition 5.2. AD = C iff for any i, jN.

Theorem 5.3. Let A = (aij) ∈ Mn(L) be a symmetric and nearly irreflexive matrix. Then, the matrix M = C∘A is idempotent, where C = (cij) with cii = ci ∈ L and cij = 0 (i ≠ j).

Proof. By the definitions of the matrices A, C, and M, we have

(5.1)
The (i, j)th entry of M2 is .

Case 1 (i ≠ j). In this case, = ∑ki,jmikmkj + miimij + mijmjj = ∑ki,jakkajj + (aii + ∏lialici)ajj + ajj(ajj + ∏ljaljcj) = ajj (because A is nearly irreflexive) = mij.

Case 2 (i = j). In this case, (because A is nearly irreflexive) = mii.

Consequently, we can get M2 = M. Therefore, M = CA is idempotent.

Theorem 5.3 generalizes Theorem  3.5 of Tan [20].

Lemma 5.4. Let A, B ∈ Mm×n(L), C ∈ Mn×l(L), and D ∈ Mp×m(L). Then,

  • (1)

    (BC)T = CTBT;

  • (2)

    if A ≤ B, then DA ≤ DB and AC ≤ BC.

The proof is trivial.

Lemma 5.5. Let A = (aij) ∈ Mm×n(L) be a nearly irreflexive matrix, then AAT is nearly irreflexive and symmetric.

Proof. Let S = AAT. Then, (because A is nearly irreflexive). Thus, , that is, S = AAT is nearly irreflexive. By Lemma 5.4, we have (AAT) T = (AT) TAT = AAT.

Lemma 5.5 generalizes Theorem  3.3 of Tan [20].

Corollary 5.6. Let A = (aij) ∈ Mn(L) be a nearly irreflexive matrix. Then, C∘(AAT) is idempotent, where C = (cij) with cii = ci ∈ L and cij = 0 (i ≠ j).

Proof. It follows from Theorem 5.3 and Lemma 5.5.

Proposition 5.7. Let A = (aij) ∈ Mn(L) be a symmetric and nearly irreflexive matrix. Then,

  • (1)

    AAA;

  • (2)

    AA is symmetric and nearly irreflexive.

Proof. (1) Let R = AA. Then, (because A is nearly irreflexive) = aij, so that AAA.

(2) Let R = AA. Since , we have R is symmetric. Since (because A is nearly irreflexive), we can get . Thus, R is nearly irreflexive.

Proposition 5.7 generalizes Proposition  3.1 of Tan [20].

Corollary 5.8. Let A = (aij) ∈ Mn(L) be a symmetric and nearly irreflexive matrix. Then, C∘(AA) is idempotent, where C = (cij) with cii = ci ∈ L and cij = 0 (i ≠ j).

Proof. It follows from Theorem 5.3 and Proposition 5.7.

Corollary 5.8 generalizes Corollary  3.7 (1) of Tan [20].

Corollary 5.9. Let A = (aij) ∈ Mn(L) be a nearly irreflexive matrix. Then, (AAT)∘C is idempotent, where C = (cij) with cii = ci ∈ L and cij = 0 (i ≠ j).

The proof is trivial.

Corollary 5.9 generalizes Corollary  3.8 of Tan [20].

Proposition 5.10. Let AMn(L) be irreflexive and transitive. Then,

  • (1)

    AAT = 0, 0 ∈ Mn(L);

  • (2)

    ATA = 0, 0 ∈ Mn(L).

Proof. (1) Let R = AAT. Then, (because A is irreflexive) = aijajiaii (because A is transitive) = 0. Therefore, R = 0.

The proof of (2) is similar to that of (1). This proves the Proposition.

Proposition 5.10 generalizes Proposition  3.9 of Tan [20].

Definition 5.11. An incline L is said to be a Brouwerian incline if for any a, b ∈ L, there exists an element b → a ∈ L such that bx ≤ a if and only if x ≤ b → a.

Obviously, b → a is the largest element x satisfying bx ≤ a.

Definition 5.12. A ⊙ D = C iff $c_{ij} = \bigwedge_{k \in N} (d_{kj} \to a_{ik})$ for any i, j ∈ N.

Lemma 5.13. Let L be a Brouwerian incline. Then, for any a ∈ L,

  • (1)

    a → a = 1;

  • (2)

    1 → a = a.

The proof is trivial.

Theorem 5.14. Let L be a Brouwerian incline and A = (aij) ∈ Mn(L). Then, A ⊙ AT is reflexive and transitive.

Proof. Let R = A ⊙ AT. Then, $r_{ij} = \bigwedge_{k \in N}(a_{jk} \to a_{ik})$. Obviously, $r_{ii} = \bigwedge_{k \in N}(a_{ik} \to a_{ik}) = 1$ (by Lemma 5.13(1)). Thus, R is reflexive. Furthermore, since ajk(alk → aik)(ajk → alk) = (alk → aik)(ajk(ajk → alk)) ≤ (alk → aik)alk ≤ aik, we have (alk → aik)(ajk → alk) ≤ ajk → aik. Therefore, rilrlj ≤ rij for all i, j, l ∈ N, and so R2 ≤ R. This proves the theorem.

Theorem 5.14 generalizes Lemma  4.1 of Tan [20].
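A spot check of Theorem 5.14, assuming the fuzzy incline ([0, 1], max, min), which is Brouwerian with residuum b → a equal to 1 if b ≤ a and to a otherwise, and assuming the residuated reading of Definition 5.12, rij = ⋀k (ajk → aik); both that reading and the helper names are assumptions of this sketch.

```python
# Theorem 5.14 over the fuzzy incline, with the Godel residuum and the assumed composition.
import random

def residuum(b, a):                    # b -> a: the largest x with min(b, x) <= a
    return 1.0 if b <= a else a

def compose(A):                        # assumed Definition 5.12 applied to (A, A^T): r_ij = min_k (a_jk -> a_ik)
    n = len(A)
    return [[min(residuum(A[j][k], A[i][k]) for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_mul(A, B):
    n = len(A)
    return [[max(min(A[i][k], B[k][j]) for k in range(n)) for j in range(n)]
            for i in range(n)]

random.seed(3)
n = 4
for _ in range(500):
    A = [[random.choice([0.0, 0.25, 0.5, 0.75, 1.0]) for _ in range(n)] for _ in range(n)]
    R = compose(A)
    assert all(R[i][i] == 1.0 for i in range(n))                          # R is reflexive
    R2 = mat_mul(R, R)
    assert all(R2[i][j] <= R[i][j] for i in range(n) for j in range(n))   # R is transitive
print("Theorem 5.14 spot-checked on 500 random fuzzy matrices")
```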

Theorem 5.15. Let L be a Brouwerian incline and A = (aij) ∈ Mn(L). Then, the following conditions are equivalent.

  • (1)

    A is reflexive and transitive;

  • (2)

    A ⊙ AT = A.

Proof. (1) ⇒ (2). Let R = A ⊙ AT. Then, $r_{ij} = \bigwedge_{k \in N}(a_{jk} \to a_{ik}) \le a_{jj} \to a_{ij} = 1 \to a_{ij} = a_{ij}$ (by Lemma 5.13(2)), and so R ≤ A. On the other hand, since aik ≥ aijajk for all i, j, k ∈ N, we have aij ≤ ajk → aik, and so $a_{ij} \le \bigwedge_{k \in N}(a_{jk} \to a_{ik})$ = rij, that is, A ≤ R. Consequently, we have R = A ⊙ AT = A.

(2) ⇒ (1). It follows from Theorem 5.14.

Theorem 5.15 generalizes Proposition  4.2 of Tan [20].

Theorem 5.16. Let L be a Brouwerian incline and A = (aij) ∈ Mn(L). Then, (A ⊙ AT)A = A.

Proof. Let R = (A ⊙ AT)A. Then, $r_{ij} = \sum_{l \in N}\Bigl(\bigwedge_{k \in N}(a_{lk} \to a_{ik})\Bigr)a_{lj} \le \sum_{l \in N}(a_{lj} \to a_{ij})a_{lj} \le a_{ij}$, so that R ≤ A. On the other hand, since A ⊙ AT is reflexive (by Theorem 5.14), we have R = (A ⊙ AT)A ≥ A. Therefore, (A ⊙ AT)A = A.

Theorem 5.16 generalizes Lemma  4.3 of Tan [20].

6. On Canonical Form of an Incline Matrix

In this section, we will discuss the canonical form of an incline matrix. Let A be an n × n incline matrix. If there exists an n × n permutation matrix P such that F = PAPT = (fij) satisfies fij ≥ fji for i > j, then F is called a canonical form of A. The main results obtained here generalize the previous results on the canonical form of a lattice matrix (see, e.g., [17]) and a fuzzy matrix (see, e.g., [12]).

Definition 6.1. Let A = (aij) ∈ Mn(L). If, for any i, j, k ∈ N with i ≠ j, i ≠ k, and j ≠ k, aik > aki and akj > ajk imply aij > aji, then A is called an especially strongly transitive matrix.

Lemma 6.2. If A = (aij) ∈ Mn(L) is an especially strongly transitive matrix and A is put in the block forms

(6.1)
where A1, A2 ∈ Mn−1(L), then A1, A2, and PAPT are especially strongly transitive matrices for any permutation matrix P.

The proof is similar to that of Lemma  3.1 in [14].

Lemma 6.3. Let A = (aij) ∈ Mm×n(L) and B = (bij) ∈ Mn×m(L). If A ≤ BT, then APT ≤ BTPT holds for any n × n permutation matrix P.

The proof is omitted.

Theorem 6.4. If A = (aij) ∈ Mn(L) is especially strongly transitive, then A has a canonical form.

The proof is similar to that of Theorem  3.1 in [14].

Theorem 6.5. Let L be an incline and n an integer with n ≥ 4. Then, for any n × n transitive matrix A over L, there exists an n × n permutation matrix P such that F = PAPT satisfies fij ≥ fji for i > j only if L is a linear incline.

Proof. Suppose that L is not a linear incline. Then, w(L) ≥ 2, and so there must be two elements a and c in L such that a ∥ c. Therefore, ac < a < a + c and ac < c < a + c. Now, let A = aE12 + cE23 + aE34 + cE41 + acJn ∈ Mn(L), where Jn = (1)n×n.

It is easy to see that A2 ≤ A. This means A is transitive. Let P be any n × n permutation matrix. Then, there exists a unique permutation τ of the set {1, 2, …, n} such that $P = \sum_{i=1}^{n} E_{\tau(i)i}$, and so $PE_{ij}P^{T} = E_{\tau(i)\tau(j)}$. Therefore, F = (fij)n×n = PAPT = aEτ(1)τ(2) + cEτ(2)τ(3) + aEτ(3)τ(4) + cEτ(4)τ(1) + acJn. Thus, fτ(1)τ(2) = fτ(3)τ(4) = a, fτ(2)τ(3) = fτ(4)τ(1) = c, and fτ(2)τ(1) = fτ(4)τ(3) = fτ(3)τ(2) = fτ(1)τ(4) = ac. Since τ is a permutation, we have τ(i) ≠ τ(j) (i ≠ j). By the hypothesis that F = PAPT satisfies fij ≥ fji for i > j, we have τ(1) > τ(2), τ(2) > τ(3), τ(3) > τ(4), and τ(4) > τ(1). This implies τ(1) > τ(1), which is a contradiction. This proves the theorem.

Theorem 6.5 generalizes Theorem  5.2 of Tan [17].
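A concrete check of the construction in the proof of Theorem 6.5, assuming the non-linear incline L = {0, 1} × {0, 1} with componentwise join and meet and the incomparable pair a = (1, 0), c = (0, 1); here ac = 0, so the acJn term vanishes. The sketch verifies that A is transitive and that no permutation of rows and columns yields a canonical form.

```python
# The counterexample of Theorem 6.5 in the non-linear incline {0,1} x {0,1}.
from itertools import permutations

def plus(x, y):  return (max(x[0], y[0]), max(x[1], y[1]))   # componentwise join
def times(x, y): return (min(x[0], y[0]), min(x[1], y[1]))   # componentwise meet
def le(x, y):    return plus(x, y) == y                      # x <= y iff x + y = y

zero, a, c = (0, 0), (1, 0), (0, 1)                          # a and c are incomparable, ac = 0
n = 4
A = [[zero] * n for _ in range(n)]
A[0][1], A[1][2], A[2][3], A[3][0] = a, c, a, c              # A = aE12 + cE23 + aE34 + cE41

def mat_mul(X, Y):
    out = [[zero] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            s = zero
            for k in range(n):
                s = plus(s, times(X[i][k], Y[k][j]))
            out[i][j] = s
    return out

A2 = mat_mul(A, A)
assert all(le(A2[i][j], A[i][j]) for i in range(n) for j in range(n))    # A^2 <= A: A is transitive

def is_canonical(F):                   # f_ij >= f_ji whenever i > j
    return all(le(F[j][i], F[i][j]) for i in range(n) for j in range(n) if i > j)

exists = any(is_canonical([[A[p[i]][p[j]] for j in range(n)] for i in range(n)])
             for p in permutations(range(n)))
print("some PAP^T is in canonical form:", exists)            # False, as the proof predicts
```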

Remark 6.6. It is easy to verify that A always has a canonical form for any transitive matrix AM2(L).

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grant no. 11071178) and the Fostering Plan for Young and Middle Age Leading Research of UESTC (Grant no. Y02018023601033).
