Volume 2008, Issue 1, Article ID 190836
Research Article, Open Access

A Markov Chain Approach to Randomly Grown Graphs

Michael Knudsen and Carsten Wiuf

Bioinformatics Research Center, University of Aarhus, Høegh-Guldbergs Gade 10, Building 1090, Århus C 8000, Denmark

First published: 16 March 2008
Academic Editor: Rahul Roy

Abstract

A Markov chain approach to the study of randomly grown graphs is proposed and applied to some popular models that have found use in biology and elsewhere. For most randomly grown graphs used in biology, it is not known whether the graph or properties of the graph converge (in some sense) as the number of vertices becomes large. Particularly, we study the behaviour of the degree sequence, that is, the number of vertices with degree 0, 1, …, in large graphs, and apply our results to the partial duplication model. We further illustrate the results by application to real data.

1. Introduction

Over the past decade, networks have played a prominent role in many different disciplines including theoretical physics, technology, biology, and sociology [1–9]. Particularly in biology, networks have become fundamental for the description of complex data structures. The appeal of networks may, at least partly, be due to the fact that in addition to resting on a rigorous mathematical foundation [10–14], they also provide a convenient graphical representation of the data which allows for visual interpretation. Examples of complex data structures that can be described by networks include food webs in ecology, sexual partner networks in sociology, and protein interaction networks in biology.

The canonical model in random graph theory has been the Erdős–Rényi random graph, where each of a fixed number of vertices has a Poisson-distributed number of links to other vertices. A Poisson-distributed number of links has turned out to be barely realistic for many empirically observed networks, and other models have been suggested to accommodate the discrepancies between theory and observation. Barabási and Albert [2] proposed a simple stochastic model, the preferential attachment (PA) model, whereby the network is built up gradually by adding one vertex at a time until the network reaches the desired size. This model is able to account for the scale-free degree distribution that is observed in some empirical networks, but not for many of the other features and motifs that are found in real networks (e.g., [15–18]). Therefore, for mathematical and statistical analysis of network data, many other stochastic models have been proposed, in particular models that fall in the class of randomly grown graphs (RGGs; see the next section for a definition), which share the property of gradual growth with the PA model. Overviews of different models and their properties can be found in [13, 16, 19, 20].

While the PA model has been under close mathematical scrutiny (e.g., [20]), other RGGs have been treated less extensively (e.g., [19, 21]) and mostly in the context of a continuous time approximation to the original discrete time Markov process (e.g., [13, 22, 23]). In this paper, we specifically address the question of the behavior of the vertex degrees as the number of vertices grows large. For a class of RGGs (including the PA model), the existence of a limiting degree distribution has been proven and its analytical form has been derived [21]. However, for most RGGs applied in biology, it is not known whether a limiting distribution exists, let alone its form.

Biologically, it is of great interest to know whether the network stabilizes as it grows, or whether the degree distribution is a function of the size of the network, even for large network sizes. This relates to the question of whether the network, in an evolutionary perspective, reaches an equilibrium, such that adding new vertices does not change the overall connectivity of the network. For example, in relation to protein interaction networks, where vertices represent proteins and edges represent physical interactions between proteins, both scenarios seem a priori possible. Proteins may be able to engage in an unlimited number of interactions, or the number of interactions may be restricted by a number of factors such as space, time, and protein production rates. With the increasing statistical interest in analyzing complex biological networks with respect to evolutionary and functional properties [1, 5, 9, 13, 14, 24], it is also becoming of interest to understand the mathematical properties of the models.

We study a large class of RGGs that allows the construction of a simple, but time-inhomogeneous, Markov chain. For a given RGG, the corresponding Markov chain can be used to address questions about the RGG, for example, questions about the degree distribution. In particular, we focus on a special RGG, the partial duplication model, which has recently been used in the study of biological protein interaction networks [16, 18, 25, 26] and has formed the basis for new and more biologically realistic models (e.g., [16, 27]). The partial duplication model has two parameters (p and q) and we give conditions under which the chain is ergodic or transient. Further, based on the time-inhomogeneous Markov chain, we define a time-homogeneous Markov chain and a continuous time, time-homogeneous Markov process, and demonstrate that these, in general, are easier to study and apply than the original chain. Proofs rely on general theory of discrete Markov processes, which makes it easy to prove similar results for other RGGs.

Finally, we apply our results to a collection of real protein interaction data.

2. RGGs

An RGG is a Markov chain {G_t}_{t≥s} on undirected graphs such that G_t has exactly t vertices, and the set of vertices of G_t is a subset of the set of vertices of G_{t+1} for all t ≥ s. Note that we do not require the set of edges of G_t to be a subset of the set of edges of G_{t+1}.

Denote by n_t(k) the expected number of vertices of degree k at time t. Since, by assumption, the graph G_t has exactly t vertices, the expected relative frequency of vertices of degree k at time t is f_t(k) = n_t(k)/t. For many RGGs, one can derive a recursive formula for n_t(k), often referred to as the master equation [13]. Here, we consider a very general class of master equations given by
(2.1)  n_{t+1}(k) = ∑_{j≥0} A(t)_{k,j} n_t(j),  k ≥ 0,

where A(t) for all t ≥ s is an infinite real matrix with A(t)_{k,j} = 0 for k > j + 1, and such that all columns sum to the same number a(t). Furthermore, assume for suitable real numbers b_{k,j} that

(2.2)  A(t)_{k,j} = b_{k,j}/t for k ≠ j,  and  A(t)_{j,j} = 1 − b_{j,j}/t,

with b_{k,j} = 0 for k > j + 1. The latter condition guarantees that the vertex degree can increase by at most one. By construction, n_t(k) = 0 for k + 1 > t, and hence A(t) is effectively a (t + 1) × t matrix. We assume that the entries (2.2) in this submatrix are positive.
One particular example of a model fulfilling the conditions above is the partial duplication model (details are found in Section 4). The master equation is given by

(2.3)  n_{t+1}(k) = (1 − (q + kp)/t) n_t(k) + ((1 − q)/t) ∑_{j≥k} C(j, k) p^k (1 − p)^{j−k} n_t(j) + ((q + (k − 1)p)/t) n_t(k−1) + (q/t) ∑_{j≥k−1} C(j, k − 1) p^{k−1} (1 − p)^{j−k+1} n_t(j),

where C(j, k) denotes a binomial coefficient.
For several other models, the master equation takes a similar form. Among these models are the duplication-divergence model [16], an approximation to the duplication-mutation model [22, 23], and the models discussed in [21] after a suitable modification (see Section 5.2). Generally, (2.1) is fulfilled whenever the expected degree change in a vertex depends on the degree only, and not on the degrees of the other vertices.
It follows immediately from (2.1) that

(2.4)  f_{t+1}(k) = ∑_{j≥0} B(t)_{j,k} f_t(j),  k ≥ 0,

where B(t) is the transpose of (t/(t + 1))A(t), and by assumption all rows of B(t) sum to b(t) = (t/(t + 1))a(t). It follows that

(2.5)  1 = ∑_{k≥0} f_{t+1}(k) = b(t) ∑_{j≥0} f_t(j) = b(t),

that is, b(t) = 1 and the matrices {B(t)}_{t≥s} describe a Markov chain with time-dependent transition probabilities.

Proposition 2.1. Assume that ∑_{k≥0} b_{j,k} < ∞ for all j ≥ 0. If f_t(j) → f(j) pointwise for all j ≥ 0, then {f(j)}_{j≥0} satisfies

(2.6)  (1 + b_{j,j}) f(j) = ∑_{k≠j} b_{j,k} f(k),  j ≥ 0,

and, moreover, ∑_{j≥0} f(j) ≤ 1.

Proof. The second part of the proposition is a simple application of Fatou's lemma. By using (2.4), the definition of B(t), and ∑_{k≥0} b_{j,k} < ∞, it follows that

(2.7)  (t + 1) f_{t+1}(j) − t f_t(j) = ∑_{k≠j} b_{j,k} f_t(k) − b_{j,j} f_t(j) → f(j) + d_j  as t → ∞,

for some real number d_j, and it remains to prove that d_j = 0. Note that

(2.8)  (t + 1) f_{t+1}(j) = s f_s(j) + ∑_{u=s}^{t} ((u + 1) f_{u+1}(j) − u f_u(j)),

and by using Cesàro's lemma, we get

(2.9)  f(j) = lim_{t→∞} f_{t+1}(j) = lim_{t→∞} (1/(t + 1)) ∑_{u=s}^{t} ((u + 1) f_{u+1}(j) − u f_u(j)) = f(j) + d_j,

and hence d_j = 0.

Consider the jump chain corresponding to the Markov chain {B(t)}_{t≥s}, that is, the Markov chain with transition probabilities B(t)_{j,k}/(1 − B(t)_{j,j}) for j ≠ k, unless B(t)_{j,j} = 1 in which case the probability is put to 0. The jump chain has time-independent transition probabilities given by

(2.10)  p_{j,k} = b_{k,j}/(1 + b_{j,j}),  j ≠ k,

and p_{j,j} = 0 for all j ≥ 0. If 1 + b_{j,j} = 0, then p_{j,k} = 0. Occasionally, we consider a slightly modified jump chain (still with time-independent transition probabilities) which is allowed to stay in the same state with positive probability.
If a stationary distribution {π_j}_{j≥0} for the jump chain exists, it fulfills

(2.11)  π_k = ∑_{j≠k} π_j p_{j,k} = ∑_{j≠k} (b_{k,j}/(1 + b_{j,j})) π_j.

Assume that inf_{j≥0}(1 + b_{j,j}) > 0 and put f̃(j) = π_j/(1 + b_{j,j}). Then we obtain that

(2.12)  ∑_{k≠j} b_{j,k} f̃(k) = ∑_{k≠j} π_k p_{k,j} = π_j = (1 + b_{j,j}) f̃(j),

and hence f̃ is a solution to the equation in Proposition 2.1. Furthermore, we may normalize f̃ to get a distribution, and hence (2.11) and (2.12) may be used to transfer a stationary distribution for the jump chain to the limit of the time-inhomogeneous Markov chain and vice versa.
In our main example, the partial duplication model (see Section 4 for details), we have b_{0,0} = 2q − 1 and

(2.13)  b_{j,j} = q + jp − (1 − q) p^j − qj p^{j−1} (1 − p),  j ≥ 1,

and hence the assumption inf_{j≥0}(1 + b_{j,j}) > 0 is fulfilled if q > 0.
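This transfer can be illustrated numerically. The sketch below (a minimal illustration, assuming the entries b_{k,j} of the partial duplication model in the form derived in Section 4, and an arbitrary truncation level K) computes the stationary distribution of the truncated jump chain (2.10) by power iteration, transfers it via f(j) ∝ π_j/(1 + b_{j,j}), and checks the equation (2.6) of Proposition 2.1.

```python
import numpy as np
from math import comb

p, q, K = 0.3, 0.4, 120                # illustrative parameters; K is a truncation level

def bpmf(j, k):
    if k < 0 or k > j:
        return 0.0
    return comb(j, k) * p**k * (1 - p)**(j - k)

def b_off(k, j):
    """Off-diagonal entry b_{k,j} (source degree j, target degree k != j)."""
    v = (1 - q) * bpmf(j, k) + q * bpmf(j, k - 1)
    if k == j + 1:
        v += q + j * p
    return v

b_diag = np.array([q + j * p - (1 - q) * bpmf(j, j) - q * bpmf(j, j - 1)
                   for j in range(K)])   # b_{j,j} as in (2.13)

# Jump chain (2.10): p_{j,k} = b_{k,j} / (1 + b_{j,j}) for k != j, p_{j,j} = 0.
P = np.array([[b_off(k, j) / (1 + b_diag[j]) if k != j else 0.0
               for k in range(K)] for j in range(K)])

pi = np.full(K, 1.0 / K)               # stationary distribution by power iteration
for _ in range(5_000):
    pi = pi @ P
    pi /= pi.sum()                     # renormalize against truncation leakage

f = pi / (1 + b_diag)                  # transfer as in (2.12)
f /= f.sum()

# Check (2.6): (1 + b_{j,j}) f(j) = sum_{k != j} b_{j,k} f(k), away from the edge.
lhs = (1 + b_diag) * f
rhs = np.array([sum(b_off(j, k) * f[k] for k in range(K) if k != j)
                for j in range(K)])
print("max deviation:", np.abs(lhs - rhs)[: K // 2].max())
```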

3. A Continuous Time Approximation

In this section, we show that the time-inhomogeneous Markov chain converges to a continuous time, time-homogeneous Markov process after a suitable time transformation.

Denote by T_i the time of the ith jump in the time-inhomogeneous chain after a given time t_0, and let J_i be the state to which it jumps. Set T_0 = t_0 and J_0 = j_0, the state of the chain at time t_0. To simplify notation further, introduce s_i = (t_i, j_i) and S_i = (T_i, J_i).

Note that at time t, the probability of staying in state j is B(t)_{j,j} = 1 − (b_{j,j} + 1)/(t + 1). In particular, if we let α_i = 1 + b_{j_{i−1},j_{i−1}}, then

(3.1)  P(T_i > t_i | S_{i−1} = s_{i−1}) = ∏_{t=t_{i−1}}^{t_i−1} (1 − α_i/(t + 1)) ≈ (t_{i−1}/t_i)^{α_i}

for large t_{i−1} and t_i. Now consider the transformation Z_i = log T_i − log T_{i−1} = log T_i/T_{i−1}. It follows that

(3.2)  P(Z_i > z | S_{i−1} = s_{i−1}) → e^{−α_i z}

as t_{i−1} → ∞. That is, in the limit, the transformed waiting time is exponentially distributed with parameter α_i.

Proposition 3.1. Let X^{s_0}_z, z ≥ 0, take the value of the time-inhomogeneous Markov chain at time t, where t = ⌊t_0 e^z⌋ and ⌊x⌋ denotes the integer part of x. At time 0, X^{s_0}_0 = j_0. For fixed j_0, the process converges to a continuous time, time-homogeneous Markov process as t_0 → ∞.

Proof. Clearly, the process X^{s_0}_z, z ≥ 0, is Markovian by definition. Let Z^{s_0}_i be the time of the ith jump, that is, Z^{s_0}_i = log T_i − log t_0 and Z^{s_0}_i − Z^{s_0}_{i−1} = Z_i in the notation above. It follows from (3.2) that

(3.3)  P(Z^{s_0}_i − Z^{s_0}_{i−1} > z | S_{i−1} = s_{i−1}) → e^{−α_i z}

for t_0 → ∞. (The index s_0 in Z^{s_0}_i is used to underline the implicit dependency on s_0 = (t_0, j_0).) Recall the transition probabilities (2.10) in the original jump chain. It follows immediately that

(3.4)  P(J_i = j_i | J_{i−1} = j_{i−1}) = p_{j_{i−1},j_i} = β_i/α_i,

where β_i = b_{j_i,j_{i−1}}. Combined with (3.3) this shows that, in the limit as t_0 → ∞, the rate of jumping to j_i from j_{i−1} is β_i. More precisely, it demonstrates that X^{s_0}_z, z ≥ 0, converges to a continuous time, time-homogeneous Markov process with transition rate matrix Q = {q_{j,k}}_{j,k≥0} given by q_{j,k} = b_{k,j} for j ≠ k, and q_{j,j} = −q_j, where q_j = ∑_{k≠j} q_{j,k}. This sum is indeed finite because by assumption b_{k,j} = 0 for k > j + 1 (see Section 2).

Note that a stationary distribution for the continuous time Markov process fulfills the equation in Proposition 2.1 with f(j) replaced by π_j.

4. The Partial Duplication Model

Consider the model {G_t}_{t≥s}, where G_s is a simple graph with s vertices, and where G_{t+1} is obtained from G_t in the following way: introduce a new vertex v and choose u ∈ G_t uniformly. With probability q, connect v and u. Independently of each other, connect each neighbor of u to v with probability p.
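One growth step is straightforward to simulate directly. The sketch below (the function name and data layout are our own choices; the parameter values are the P. falciparum estimates from Table 1) grows a network from two vertices joined by an edge, the same initial condition used in the simulations of Section 4.2.

```python
import random

def pd_step(adj, p, q, rng=random):
    """One step of the partial duplication model: duplicate a uniformly chosen
    vertex u into a new vertex v, linking v to each neighbor of u independently
    with probability p, and to u itself with probability q."""
    u = rng.randrange(len(adj))
    v = len(adj)
    adj.append(set())
    for w in list(adj[u]):
        if rng.random() < p:
            adj[v].add(w)
            adj[w].add(v)
    if rng.random() < q:
        adj[v].add(u)
        adj[u].add(v)

adj = [{1}, {0}]                       # two vertices connected by an edge
while len(adj) < 1271:                 # the size of the P. falciparum network
    pd_step(adj, p=0.026, q=0.789)
print("mean degree:", sum(len(nb) for nb in adj) / len(adj))
```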

In this section, we follow the path outlined in the previous section. That is, we first find the jump chain corresponding to the partial duplication model. As already stated in Section 2, the master equation is given by

(4.1)  n_{t+1}(k) = (1 − (q + kp)/t) n_t(k) + ((1 − q)/t) ∑_{j≥k} C(j, k) p^k (1 − p)^{j−k} n_t(j) + ((q + (k − 1)p)/t) n_t(k−1) + (q/t) ∑_{j≥k−1} C(j, k − 1) p^{k−1} (1 − p)^{j−k+1} n_t(j).

It can be seen in the following way: the first term corresponds to the case where a vertex of degree k keeps its degree, and this is the case unless one of two things happens: (i) the vertex is copied and receives a link to the new vertex, or (ii) it receives a link because one of its k neighbors is copied. The probabilities of these two events are q/t and kp/t, respectively. Similarly, the third term corresponds to the case where a vertex of degree k − 1 gets a new link in one of the above-mentioned ways. The two remaining terms correspond to the cases where the new vertex has degree k. The new vertex has degree k when a vertex of degree ≥k is copied and receives exactly k links to the neighbors of the copied vertex and no link to the copied vertex, or if a vertex of degree ≥k − 1 is copied and receives a link to the copied vertex and exactly k − 1 links to the neighbors of the copied vertex.

The cases q = 0 and q = 1 have been studied in [26] and [19], respectively. Note, however, that the master equation given in [26] is incorrect. For general q, the model has been discussed in [18]. It follows immediately that

(4.2)  n_{t+1}(k) = (1 − (q + kp)/t) n_t(k) + ((q + (k − 1)p)/t) n_t(k−1) + (1/t) ∑_{j≥0} ((1 − q) b_j(k) + q b_j(k − 1)) n_t(j),

where we, in order to simplify notation, define

(4.3)  b_j(k) = C(j, k) p^k (1 − p)^{j−k} for 0 ≤ k ≤ j,  and b_j(k) = 0 otherwise.
From (4.2), we may read off the description of the matrix B(t). Its entries satisfy that

(4.4)  B(t)_{j,k} = ((1 − q) b_j(k) + q b_j(k − 1))/(t + 1) for k < j,
       B(t)_{j,j} = 1 − (1 + q + jp − (1 − q) b_j(j) − q b_j(j − 1))/(t + 1),
       B(t)_{j,j+1} = (q + jp + q b_j(j))/(t + 1),

and B(t)_{j,k} = 0 otherwise. An easy calculation shows that

(4.5)  1 + b_{j,j} = 1 + q + jp − (1 − q) b_j(j) − q b_j(j − 1),

from which it follows that the probability of jumping from state j is

(4.6)  1 − B(t)_{j,j} = (1 + q + jp − (1 − q) b_j(j) − q b_j(j − 1))/(t + 1).

Motivated by this formula, we allow the modified jump chain to stay in state j with probability ((1 − q) b_j(j) + q b_j(j − 1))/(1 + q + jp), and it follows that the transition probabilities p_{j,k} in the modified jump chain satisfy that

(4.7)  p_{j,k} = ((1 − q) b_j(k) + q b_j(k − 1))/(1 + q + jp) for k ≤ j,
       p_{j,j+1} = (q + jp + q b_j(j))/(1 + q + jp),

and p_{j,k} = 0 otherwise.

In particular, the chain is irreducible if and only if 0 < q < 1. If q = 0, the state 0 is absorbing, and if q = 1, the state 0 is not reachable from any other state. If state 0 is ignored, the resulting chain is irreducible for q = 1.

4.1. Classification of States

We first recall a theorem from [28]. The theorem is reformulated in [29], and we will use that formulation. If q = 1, then we ignore the state 0, and since in this case all p_{j,0} are zero, the conditions stated in the theorems below stay the same.

Theorem 4.1. Let {X_t}_{t≥0} be a Markov chain on the nonnegative integers with transition probabilities p_{j,k}. If there exist a sequence of nonnegative real numbers {x_j}_{j≥0} with x_j → ∞ and an integer N ≥ 1 such that

(4.8)  ∑_{k≥0} p_{j,k} x_k ≤ x_j for all j ≥ N,

then the chain is ultimately recurrent.

Applied to the partial duplication model, the theorem states that if there is a sequence of nonnegative real numbers {x_j}_{j≥0} with x_j → ∞ such that

(4.9)  ∑_{k=0}^{j+1} p_{j,k} x_k ≤ x_j for all j ≫ 0,

then, if q = 0, the probability of ultimate absorption in 0 is 1. If q ≠ 0, the conclusion of the theorem is that all states are persistent.

The solution p of log (p) + p = 0, where log denotes the natural logarithm, is known as the omega constant, and we denote it by Ω. We have Ω ≈ 0.5671.
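Numerically, Ω is easily obtained, for example by a few Newton steps on g(p) = log(p) + p (a small illustrative computation; equivalently, Ω = W(1) in terms of the Lambert W function):

```python
from math import log

# Newton iteration for g(p) = log(p) + p = 0, with g'(p) = 1/p + 1.
p = 0.5
for _ in range(20):
    p -= (log(p) + p) / (1 / p + 1)
print(p)   # 0.567143..., the omega constant
```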

Proposition 4.2. Let p < Ω in the partial duplication model. If q = 0, the probability of ultimate absorption in 0 is 1, and if q > 0, the Markov chain is persistent.

In [26] it is claimed that for q = 0 there exists a limiting distribution different from the one we find.

Proof. Let x_j = log(j + 1). Then {x_j}_{j≥0} is a nonnegative sequence of real numbers with x_j → ∞, and hence it suffices to show that, for the choices of p and q in the proposition, the sequence satisfies (4.9). Since log is a concave function, Jensen's inequality implies that E(log(X + 1)) ≤ log(E(X) + 1) for a positive random variable X. In particular, using this for binomially distributed random variables, we get

(4.10)  ∑_{k=0}^{j+1} p_{j,k} x_k ≤ ((1 − q) log(jp + 1) + q log(jp + 2) + (q + jp + q b_j(j)) log(j + 2))/(1 + q + jp),

and hence we need only prove that the right-hand side of this inequality is less than or equal to log(j + 1) for j ≫ 0. This may, for j ≫ 0, be rewritten as

(4.11)  q log((jp + 2)/(jp + 1)) + q b_j(j) log(j + 1) + log((jp + 1)/(j + 1)) + (q + jp + q b_j(j)) log((j + 2)/(j + 1)) ≤ 0,

and here the first two log-terms converge to 0, while the two remaining terms converge to log(p) and p, respectively. Here we have used that

(4.12)  lim_{j→∞} (q + jp + q b_j(j)) log((j + 2)/(j + 1)) = p.

Note that since p < Ω by assumption, we have log(p) + p < 0, and hence the inequality in (4.11) holds for all j ≫ 0.

Since zero is the only absorbing state, it follows that for q = 0 and p ≥ Ω, a limiting distribution takes the form (a_0, 0, 0, …), with a_0 ≤ 1. To infer the behaviour of the Markov chain for other values of q, we first recall a result proved in [30].

Theorem 4.3. Let {X_t}_{t≥0} be an irreducible, aperiodic Markov chain with transition probabilities p_{j,k}. If there exist a sequence of positive real numbers {x_j}_{j≥0} with x_j → 0 and an integer N ≥ 1 with

(4.13)  ∑_{k≥0} p_{j,k} x_k ≤ x_j for all j ≥ N,

then the chain is transient.

Let Φ denote the golden ratio conjugate. That is, Φ is the unique positive real number p satisfying that 1/p = p + 1. We have Φ ≈ 0.6180.

Proposition 4.4. Let q > 0 in the partial duplication model. Then the Markov chain is transient for all p > Φ.

Proof. Put x_j = 1/(j + 1) for all j ≥ 0. Then x_j > 0 for all j ≥ 0, and x_j → 0. Thus, in order to apply Theorem 4.3, we only need to verify that {x_j}_{j≥0} is a solution to the inequality in (4.13).

It follows from a straightforward calculation that

(4.14)  (1 + q + jp) ∑_{k=0}^{j+1} p_{j,k} x_k ≤ 1/((j + 1)p) + (q + jp + q b_j(j))/(j + 2),

such that {x_j}_{j≥0} is a solution if the right-hand side of this inequality is less than or equal to (1 + q + jp)/(j + 1) for j ≫ 0. This is equivalent to

(4.15)  1/p − (q + jp)/(j + 2) + (j + 1) q b_j(j)/(j + 2) ≤ 1,

and the left-hand side converges to 1/p − p as j → ∞. Since p > Φ, it follows that 1/p − p < 1, and hence the inequality in (4.15) holds for all j ≫ 0.

Let q > 0 such that the chain is irreducible. One may ask for which p the chain is ergodic. By Proposition 4.4, a necessary condition is p < Φ. However, as we will see, this may not be sufficient. To see this, we first recall another theorem from [28].

Theorem 4.5. Let {X_t}_{t≥0} be an irreducible, aperiodic Markov chain with transition probabilities p_{j,k}. If there exist an integer N ≥ 1, an ε > 0, and a nonnegative sequence of real numbers {x_j}_{j≥0} such that

(4.16)  ∑_{k≥0} p_{j,k} x_k ≤ x_j − ε for all j ≥ N,  and  ∑_{k≥0} p_{j,k} x_k < ∞ for all j < N,

then the chain is ergodic.

In the partial duplication model, the second condition in the theorem is always fulfilled since p_{j,k} = 0 for k > j + 1. Let X_t denote the state of the chain at time t. If there exist N ≥ 0 and ε > 0 such that

(4.17)  E(X_{t+1} − X_t | X_t = j) ≤ −ε for all j ≥ N,

then this N, together with x_j = j/ε, will work in Theorem 4.5. This is pointed out in [28].

Proposition 4.6. Let q > 0. Then the Markov chain is ergodic for all p < 1/2.

Proof. We find

(4.18)  E(X_{t+1} − X_t | X_t = j) = (j(2p − 1) + 2q)/(1 + q + jp) → 2 − 1/p  as j → ∞.

Note that p < 1/2 implies 2 − 1/p < 0, and hence 2 − 1/p ≤ −ε for all sufficiently small ε > 0. That is, for a large N, (4.17) is fulfilled.
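The drift in (4.18) can also be checked by Monte Carlo. The sketch below samples the modified jump chain using the transition probabilities (4.7) in the form given above (the sampler and the parameter values are illustrative assumptions): with probability (q + jp + q b_j(j))/(1 + q + jp) the degree grows by one; otherwise the new state is drawn from the mixture of Bin(j, p) and Bin(j, p) + 1 with weights 1 − q and q, rejecting values above j.

```python
import random

def jump(j, p, q, rng=random):
    """One step of the modified jump chain (4.7) of the partial duplication model."""
    denom = 1 + q + j * p
    if rng.random() < (q + j * p + q * p**j) / denom:
        return j + 1                   # the degree grows by one
    while True:                        # otherwise k <= j, sampled by rejection
        k = sum(rng.random() < p for _ in range(j)) + (rng.random() < q)
        if k <= j:
            return k

# Monte Carlo estimate of the drift at a large state j; by (4.18) it should be
# close to 2 - 1/p, which is negative precisely when p < 1/2.
p, q, j, n = 0.4, 0.5, 200, 20_000
drift = sum(jump(j, p, q) - j for _ in range(n)) / n
print(drift, "limit:", 2 - 1 / p)
```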

In general, it is not an easy task to actually find the stationary distribution of the jump chain or the time-inhomogeneous Markov chain. For q = 1, an attempt to solve (2.4) has been made in [19]. They assume that f_t converges and show that, under this assumption, the limit (for p > 0) has a power-law tail. However, this does not establish the existence of a stationary distribution. Further, the power law they provide for p > Ω is in fact not a distribution. In the special case p = 0, the stationary distribution is π_j = (1/2)^j for j ≥ 1.

It is natural to ask what happens for the values of p not covered in the propositions above. In general, this is difficult. However, if Ω is not the maximal upper bound in Proposition 4.2, the culprit must be the particular choice of {x_j}_{j≥0}. Indeed, the damage caused by the use of Jensen's inequality is not severe. This may be seen in the following way: denote by μ_k(j) the kth central moment of a binomially distributed random variable X with parameters j and p. From [31], we get μ_k(j) = O(j^{k/2}), and by expanding log(X + 1) as a Taylor series around jp, it follows that E(log(X + 1)) = log(jp + 1) + O(j^{−1}).

4.2. Application to Protein Interaction Networks

We used the computer program developed for [18] to estimate the parameters under the partial duplication model for different protein interaction networks; the estimates are shown in Table 1. The Plasmodium falciparum (P. falciparum) dataset is obtained from [32], and the remaining datasets are downloaded from the Database of Interacting Proteins (http://dip.doe-mbi.ucla.edu). Curiously, we note that according to Proposition 4.6, all estimated pairs of p and q correspond to ergodic Markov chains, indicating that the networks stabilize as the number of vertices becomes large.

For one of the networks, P. falciparum, we conducted some further experiments where 50 networks were simulated with the same number of vertices as in P. falciparum (1271) and the degree distribution was computed. All simulations were started from an initial network of two vertices connected by an edge. Furthermore, 1271 runs of the corresponding Markov chain were performed, and the degree distribution was calculated and compared to the degree distribution obtained from the simulated networks. Here, the initial state of the Markov chain is 1. The length of the runs was varied, as shown in Figure 1.

Figure 1. Shown is the distribution of vertex degrees of 50 simulated networks (solid) and that of numbers simulated from the corresponding Markov chain (dashed), using parameters estimated for the P. falciparum dataset. In addition, the observed degree distribution for P. falciparum is shown (dot-dashed).

The simulations indicate that the Markov chain approach may be used to approximate the degree distribution. This is particularly useful for simulation of large networks in terms of memory usage; storing the connections between vertices requires memory capacity proportional to the number of vertices times the average number of connections. Simulation of the corresponding Markov chain requires memory capacity proportional to the current value of the chain.

The empirical degree distribution for P. falciparum shows that the partial duplication model does not provide a perfect fit. For example, no zero-degree vertices are included in the dataset (experimenter's choice), and this needs to be incorporated into the model.

5. Other Models

We have applied the Markov chain approach to other models, and in this section we briefly present some of the results.

5.1. The Duplication-Divergence Model

The duplication-divergence model is an extension of the partial duplication model, and it has been used for analysis of protein interaction networks as well [15, 16, 27, 33]. However, the model is slightly more complicated than the partial duplication model, and it has three parameters p, q, and r. A step in this model is as follows: pick a vertex u in the graph uniformly, and add a new vertex v. Connect u and v with probability q, and create an edge e_w between v and w whenever there is an edge e′_w between the vertices u and w. Now modify the pairs (e_w, e′_w) independently of each other in the following way: with probability p, keep both edges; otherwise, with probability r, keep e_w and delete e′_w, and with probability 1 − r, keep e′_w and delete e_w.
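As with the partial duplication model, a step is easy to simulate directly; the following sketch is a minimal implementation of the rule just described (the function name and adjacency-set representation are our own choices). With r = 0 it reduces to a partial duplication step.

```python
import random

def dd_step(adj, p, q, r, rng=random):
    """One step of the duplication-divergence model: duplicate u into v and
    resolve each edge pair (e_w = v-w, e'_w = u-w) as described above."""
    u = rng.randrange(len(adj))
    v = len(adj)
    adj.append(set())
    if rng.random() < q:
        adj[v].add(u)
        adj[u].add(v)
    for w in list(adj[u] - {v}):       # the neighbors of u before duplication
        if rng.random() < p:           # keep both e_w and e'_w
            adj[v].add(w)
            adj[w].add(v)
        elif rng.random() < r:         # keep e_w, delete e'_w
            adj[v].add(w)
            adj[w].add(v)
            adj[u].discard(w)
            adj[w].discard(u)
        # else: keep e'_w, delete e_w -- nothing to change
```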

One can derive a master equation and go through the construction of the modified jump chain. In this case, the transition probabilities p_{j,k} satisfy that
(5.1)
and p_{j,k} = 0 otherwise. Here ψ = (1 − p)(1 − r), and b_j(k, s) is the binomial probability from (4.3) with p replaced by s.
In order to apply Theorem 4.1, we put x_j = log(j + 1). It follows from simple calculations, again using Jensen's inequality, that {x_j}_{j≥0} is a solution to (4.9) if p and r satisfy that

(5.2)  log(p + ψ) + log(1 − ψ) + p ≤ 0.
Note that in the special cases r = 0 and r = 1, the left-hand side of the inequality reduces to log(p) + p, the same expression as seen earlier. Actually, for r = 0 the model is the partial duplication model. It follows that if r = 0 or r = 1, a solution p of (5.2) must satisfy that p < Ω. For 0 < r < 1, an exact upper bound on p is harder to derive. For these values of r, the solution p is less than Ω and attains a minimum p ≈ 0.5235 for r = 1/2.

5.2. Another Class of Models

We believe that the Markov chain approach presented in this paper may be used to infer the behavior of other classes of models. In [21], simple models with master equations of the form

(5.3)  n_{t+1}(k) = n_t(k) + (a_{k−1} n_t(k−1) − a_k n_t(k))/t + c_k,
where a_k and c_k are nonnegative numbers, are studied. The resulting master equation for the relative frequencies f_t(k) may be written in matrix form as

(5.4)  (t + 1) [1; f_{t+1}] = [t 1^T; c A(t)] [1; f_t],

where c and f_t are the column vectors consisting of all the numbers c_k and f_t(k), respectively. The matrix A(t) is given by

(5.5)  A(t)_{k,k} = t − a_k,  A(t)_{k+1,k} = a_k,  and A(t)_{k,j} = 0 otherwise.

Note that the columns of the partitioned matrix in (5.4) sum to t + 1. That is, when divided by t + 1, the transpose of this matrix represents a Markov chain with time-dependent transition probabilities. We identify the countable set of states with ℕ ∪ {−}, where the artificial state − accounts for the first row and the first column in the partitioned matrix.

We may compute the corresponding jump process, and again it turns out that its transition probabilities p_{j,k} are time-independent. We may get rid of the state − by simply forgetting the time we spend there. That is, for j, k ≥ 0, we replace p_{j,k} by the sum p_{j,k} + p_{j,−} p_{−,k}, and this leads to a Markov chain with transition probabilities given by

(5.6)  p_{j,k} = (a_j δ_{k,j+1} + c_k)/(1 + a_j).

These jump chains are in fact all ergodic, and the stationary distribution of the time-inhomogeneous Markov chains has been derived in [21].

5.3. Other Extensions

Still other models do not fall under the conditions and assumptions introduced in this paper. For example, the master equation of the most general form of the duplication-mutation model [22, 23] depends on terms of order O(1/t^2); the columns of A(t) do not sum to the same number a(t), because of these O(1/t^2) terms; and the requirement A(t)_{k,j} = 0 for k > j + 1 is not fulfilled.

Some of these problems may be circumvented at the cost of a more technical and elaborate exposition, but often the results need to be stated as limiting results. For example, if the columns of A(t) do not sum to the same number, the jump chain in (2.10) should be considered as emerging in the limit t → ∞.

Furthermore, one may choose to ignore terms of order O(1/t^2) in the master equation. As t → ∞, the influence from higher-order terms often becomes insignificant, justifying such an approximation. This is, for example, the case for the duplication-mutation model.

Table 1. Parameters estimated from protein interaction data.
Species Vertices Edges p q
H. pylori 675 1291 0.263 0.052
P. falciparum 1271 2642 0.026 0.789
C. elegans 2368 3767 0.315 0.105
S. cerevisiae 4968 17530 0.131 0.263

Acknowledgments

M. Knudsen is supported by the Centre for Theory in the Natural Sciences, University of Aarhus. C. Wiuf is supported by the Danish Cancer Society and the Danish Research Councils. The authors would like to thank an anonymous reviewer for valuable suggestions that improved the clarity of the paper.
