Abstract
We consider independent multiple random walks on graphs and, by obtaining exact density functions or expectations, compare meeting times and infection times across various conditions on the random walks.
1 Introduction
Let G=(V,E) be a finite connected graph and \(X^{(0)}_{t}, \ldots, X^{(k)}_{t}\) be k+1 independent continuous time or discrete time random walks on G. We suppose |V|≥k+1, regard \(X^{(0)}_{t}\) as an infected particle, and consider the infection time \(t_{infe}\): the first time by which \(X^{(0)}_{t}\) has met all the other particles. Formally, it is given by
$$t_{infe} = \max_{i \in \{1,\ldots,k\}} t_{meet}(i),$$
where
$$t_{meet}(i) = \inf\left\{t \ge 0 ; X^{(0)}_{t} = X^{(i)}_{t}\right\}$$
is the first time that \(X^{(0)}_{t}\) meets \(X^{(i)}_{t}\), called the meeting time. In this paper, we investigate the exact distributions or expectations of meeting times and infection times. By the principle of inclusion-exclusion, we see that
$$t_{infe} = \sum_{\emptyset \neq \Lambda \subset \{1,\ldots,k\}} (-1)^{|\Lambda|+1} t_{meet}(\Lambda)$$
for every path of the random walks, where
$$t_{meet}(\Lambda) = \min_{i \in \Lambda} t_{meet}(i).$$
From this fact, our central aim is to obtain the density functions or expectations of \(t_{meet}(\Lambda)\) for each Λ⊂{1,…,k}. We achieve this with the theorems derived in Section 2. It is well known that computing first hitting times can be reduced, via the Laplace transform, to solving a system of linear equations. In Section 2, we derive a basic relation for the Laplace transform of meeting times in Theorems 1 and 2. This result is a kind of harmonic relation extending the one for first hitting times, and it plays an important role in obtaining exact values of meeting and infection times. Indeed, by solving the resulting system of linear equations for meeting times, we obtain exact densities and expectations for several graphs in Section 3. Moreover, we discuss comparison results across various conditions on the random walks: starting points, number of random walks, parameters of the exponential holding times, underlying graphs, and continuous versus discrete time. In Section 4, we present another tool for analyzing the meeting time of two random walks. It is mentioned in [1] that the meeting time of two random walks on certain graphs can be regarded as the first hitting time of a single random walk; we give a generalization of this fact.
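The pathwise inclusion-exclusion identity can be illustrated numerically. The following is a minimal sketch of our own (not part of the argument): for k=2, it simulates three 0.5-lazy simple walks on a 7-cycle and checks that \(\max(t_{meet}(1), t_{meet}(2)) = t_{meet}(1) + t_{meet}(2) - t_{meet}(\{1,2\})\) on every sampled path. All names and parameter choices here are ours.

```python
import random

def lazy_cycle_step(x, n, p=0.5):
    # one step of the p-lazy simple random walk on the cycle Z_n
    if random.random() < p:
        return x
    return (x + random.choice([-1, 1])) % n

def meeting_times(n=7, p=0.5, t_max=10**6):
    # X0 is the infected walker; returns t_meet(1), t_meet(2), t_meet({1,2})
    x0, x1, x2 = 0, 2, 4
    t1 = t2 = t12 = None
    for t in range(1, t_max + 1):
        x0 = lazy_cycle_step(x0, n, p)
        x1 = lazy_cycle_step(x1, n, p)
        x2 = lazy_cycle_step(x2, n, p)
        if t1 is None and x0 == x1:
            t1 = t
        if t2 is None and x0 == x2:
            t2 = t
        if t12 is None and (x0 == x1 or x0 == x2):
            t12 = t
        if t1 is not None and t2 is not None:
            break
    return t1, t2, t12

random.seed(0)
samples = [meeting_times() for _ in range(200)]
# pathwise: t_infe = max(t1, t2) = t1 + t2 - min(t1, t2) = t1 + t2 - t_meet({1,2})
ok = all(max(t1, t2) == t1 + t2 - t12 for t1, t2, t12 in samples)
```

By construction \(t_{meet}(\{1,2\})\) is the first time either meeting occurs, so it coincides with the minimum of the two individual meeting times on each path.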
1.1 Related work
Several mathematical models of infection have been proposed and investigated. They fall into two classes: models in which the number of infected particles increases through meetings between particles, and models in which it changes through factors other than meetings.
The latter class, covering classical infectious-disease models, has been studied for a long time (cf. [8]).
In the former class, the particles move on finite or infinite graphs. Aldous [1], Aldous and Fill [2], Bshouty et al. [3], Cooper et al. [4] and Coppersmith et al. [5] investigated the expected meeting time of two independent Markov chains on a finite graph. Using their results, Draief and Ganesh [7] derived upper bounds on the expected time until all particles are infected for complete graphs and regular graphs. In their model, the infection probability depends on the coincidence time with infected particles and on the infection rate. Such a model is also studied by Datta and Dorlas [6] and is related to ours; we may take the infection-rate parameter to be infinity. Another model similar to ours is studied in [14], where upper and lower bounds are derived for the time until all particles are informed (the broadcasting time) on a finite square grid. Kurtz et al. [10] and Machado et al. [12] studied frog models on complete graphs, whose infection rule is the same as ours although non-infected particles are immobile; they derived limit theorems for the number of sites visited by the infected particles. Kurkova et al. [9] investigated a model with the same infection rule as ours on the infinite square grid.
1.2 Models and notations
Recall that G=(V,E) is a finite connected graph and \(X^{(0)}_{t}, \ldots, X^{(k)}_{t}\) are independent continuous time or discrete time Markov chains on V with the same transition matrix \(P=(p(x,y))_{x,y \in V}\). In this paper we call them random walks on G if P satisfies \(p(x,y), p(y,x)>0\) if and only if \(xy \in E\). The p-lazy version of P is the transition matrix \(pI+(1-p)P\), where I is the identity matrix. We often consider the lazy version for convenience, especially for discrete time random walks, because lazy chains are aperiodic on any graph. For continuous time random walks, let \(\frac {1}{\theta _{i}} > 0\) be the mean of the exponential holding time of \(X^{(i)}\) for each i∈{0,…,k}. Note that these means are allowed to take different values. We write \(\mathbb {P}_{x_{0},\ldots,x_{k}}\) for the probability measure under which the random walks \((X^{(i)}_{t})_{i=0,\ldots,k}\) start from \((x_{i})_{i=0,\ldots,k}\), respectively.
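For concreteness, the p-lazy matrix \(pI+(1-p)P\) can be built directly; the sketch below (with numpy, and a star graph of our own choosing as the example) checks that the result is stochastic and has positive diagonal, hence is aperiodic.

```python
import numpy as np

def transition_matrix(adj):
    # simple random walk kernel: step to a uniformly chosen neighbor
    deg = adj.sum(axis=1, keepdims=True)
    return adj / deg

def lazy(P, p):
    # p-lazy version p*I + (1-p)*P; rows stay stochastic, self-loops appear
    return p * np.eye(len(P)) + (1 - p) * P

# star graph with center 0 and leaves 1..4
N = 4
A = np.zeros((N + 1, N + 1))
A[0, 1:] = A[1:, 0] = 1.0
P = transition_matrix(A)
Q = lazy(P, 0.3)
```

Since every diagonal entry of Q is at least p, the lazy chain is aperiodic regardless of the graph, which is the convenience mentioned above.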
2 Laplace transform of meeting times
We give the key tools to calculate the distribution or the expectation of the meeting time \(t_{meet}(\Lambda)\) for any Λ⊂{1,…,k}.
Theorem 1.
(Discrete time). Let Λ⊂{1,…,k}. For discrete time random walks, the Laplace transform of \(t_{meet}=t_{meet}(\Lambda)\) is given by
if \(x_{0} \neq x_{j}\) for all j∈Λ, and \(\mathbb {E}_{x_{0},\ldots,x_{k}} \left [e^{-\lambda t_{meet}(\Lambda)} \right ] = 1\) if \(x_{0} = x_{j}\) for some j∈Λ. As a corollary, we have
if \(x_{0} \neq x_{j}\) for all j∈Λ.
Theorem 2.
(Continuous time). Let Λ⊂{1,…,k}. For continuous time random walks, the Laplace transform of \(t_{meet}=t_{meet}(\Lambda)\) is given by
if \(x_{0} \neq x_{j}\) for all j∈Λ, and \(\mathbb {E}_{x_{0},\ldots,x_{k}} \left [e^{-\lambda t_{meet}} \right ] = 1\) if \(x_{0} = x_{j}\) for some j∈Λ. As a corollary, we have
Remark 1.
We mention the two product graphs \(\mathbf{G}=(\mathbf{V},\mathbf{E})\) with \(\mathbf{V}=V^{k+1}\) that underlie our arguments; in the proofs below, an extended Markov chain (cf. [11]) moves on them. Let \(\mathbf{x}=(x_{0},\ldots,x_{k}), \mathbf{y}=(y_{0},\ldots,y_{k}) \in \mathbf{V}\). For discrete time random walks, \(\mathbf{x}\mathbf{y} \in \mathbf{E}\) if and only if
$$x_{i}y_{i} \in E \quad \text{for all } i \in \{0,\ldots,k\}.$$
For continuous time random walks, \(\mathbf{x}\mathbf{y} \in \mathbf{E}\) if and only if there exists a unique i∈{0,…,k} such that
$$x_{i}y_{i} \in E \quad \text{and} \quad x_{j}=y_{j} \text{ for all } j \neq i.$$
The former graph is called the tensor product of graphs and the latter the Cartesian product of graphs.
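At the level of adjacency matrices, these two products are Kronecker constructions; a small sketch for k=1 (the helper names are ours): the tensor product has adjacency \(A \otimes B\) and the Cartesian product \(A \otimes I + I \otimes B\).

```python
import numpy as np

def tensor_product(A, B):
    # (x0,x1) ~ (y0,y1) iff x0~y0 and x1~y1: both walkers move (discrete time)
    return np.kron(A, B)

def cartesian_product(A, B):
    # (x0,x1) ~ (y0,y1) iff exactly one coordinate moves along an edge (continuous time)
    return np.kron(A, np.eye(len(B))) + np.kron(np.eye(len(A)), B)

K2 = np.array([[0.0, 1.0], [1.0, 0.0]])  # a single edge
T = tensor_product(K2, K2)      # two disjoint edges
C = cartesian_product(K2, K2)   # the 4-cycle
```

The example K2 with itself already shows the difference: the tensor product of an edge with itself is disconnected (two edges), while the Cartesian product is the 4-cycle.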
Proof of Theorem 1.
We consider the Markov chain on \(V^{k+1}\) given by \(\mathbf {X}_{t} = \left (X^{(0)}_{t}, \ldots, X^{(k)}_{t}\right)\). Note that the i-th marginal process of \(\mathbf{X}_{t}\) is equal in law to \(X^{(i)}_{t}\), and the transition matrix of this chain is given by
Moreover, we notice that \(t_{meet}(\Lambda)\) is equal to the first hitting time of \(\mathbf{X}_{t}\) to the set
$$\left\{(x_{0},\ldots,x_{k}) \in V^{k+1} ; x_{0} = x_{j} \text{ for some } j \in \Lambda\right\}.$$
Thus, the assertion is obtained by an ordinary first-step analysis of first hitting times. □
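The first-step analysis here is, in practice, a linear solve: off the meeting set, \(a(\mathbf{x}) = e^{-\lambda}\sum_{\mathbf{y}} \mathbf{P}(\mathbf{x},\mathbf{y})\, a(\mathbf{y})\), with a ≡ 1 on the meeting set. The sketch below (our own setup: k=1, a 0.3-lazy walk on a 5-cycle) carries this out numerically, and as a sanity check compares \((1-a)/\lambda\) at small λ with the expected meeting time, which solves its own linear system \(b = \mathbf{1} + \mathbf{P}_{NN} b\).

```python
import numpy as np

n, p, lam = 5, 0.3, 1e-6
# p-lazy simple random walk on the cycle Z_5
P = np.zeros((n, n))
for x in range(n):
    P[x, x] += p
    P[x, (x + 1) % n] += (1 - p) / 2
    P[x, (x - 1) % n] += (1 - p) / 2

Q = np.kron(P, P)  # both walkers step independently each time unit
diag = [n * x + x for x in range(n)]            # meeting set {(x,x)}
off = [s for s in range(n * n) if s not in diag]
Qnn = Q[np.ix_(off, off)]                       # steps avoiding a meeting
hit = Q[np.ix_(off, diag)].sum(axis=1)          # one-step meeting probability

# Laplace transform a and expected meeting time b on non-meeting states
a = np.linalg.solve(np.eye(len(off)) - np.exp(-lam) * Qnn, np.exp(-lam) * hit)
b = np.linalg.solve(np.eye(len(off)) - Qnn, np.ones(len(off)))

s0 = off.index(2)            # walkers start at 0 and 2
deriv = (1 - a[s0]) / lam    # numerical -d/d(lambda) at lambda ~ 0
```
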
In order to prove Theorem 2, we let \(t_{jump}^{X^{(i)}} = \inf \left \{t \ge 0 ; X^{(i)}_{t} \neq X^{(i)}_{0} \right \}\) be the first jump time of \(X^{(i)}_{t}\), and set \(t_{jump} = \min _{i \in \{0,\ldots,k\}} t_{jump}^{X^{(i)}}\).
Lemma 1.
where 1 A is the indicator function of a set A.
Proof.
We remark that
Thus,
□
Proof of Theorem 2.
By Lemma 1, we have
□
3 Exact computations for graphs
In this section we study exact computations of \(t_{infe}=t_{infe}(k)=\max_{i\in\{1,\ldots,k\}} t_{meet}(i)\) for certain graphs. Note that \(t_{infe}\) is nothing but \(t_{meet}(1)\) when k=1.
3.1 Star graphs
We consider the p-lazy simple random walk on a star graph for the case k=1. Let G=(V,E) be a star graph with V={0,…,N} and E={0x;x=1,…,N}.
Proposition 1.
Set
Then,
if \(x_{0},x_{1} \neq 0\) and \(x_{0} \neq x_{1}\),
if \(x_{0}=0\) and \(x_{1} \neq 0\), and
if \(x_{0} \neq 0\) and \(x_{1}=0\).
Corollary 1.
We see that
if and only if \(\theta_{0} \le \theta_{1}\), and
for any \(\theta_{0},\theta_{1}>0\).
Remark 2.
It is shown in [1, 2] that \(\mathbb {E}_{x_{0},x_{1}}[t_{infe}]\) can be bounded by the maximum expected hitting time for a reversible Markov chain, that is,
where \(t_{hit}(y) = \inf \left \{t\ge 0;X^{(0)}_{t}=y\right \}\). It is also remarked there that this bound is not tight for a star graph; this can be verified from Proposition 1 as follows. Note that the maximum expected hitting time is
where f(N)∼g(N) means \({\lim }_{N \to \infty } f(N)/g(N) = 1\). On the other hand, from Proposition 1, we see that
Therefore, \(\max _{x_{0},x_{1}} \mathbb {E}_{x_{0},x_{1}}[t_{infe}] = O(1)\) and \(\max _{x,y \in V} \mathbb {E}_{x}[t_{hit}(y)] = O(N)\).
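The O(1) behavior can also be seen numerically. The sketch below (our own setup: continuous time, \(\theta_0=\theta_1=1\), no laziness) computes \(\mathbb{E}_{1,2}[t_{infe}]\) on the star by solving the first-step equations on the product chain; a short by-hand reduction of the same equations, using the symmetry between the two walkers, gives \(2N/(N+1)\) in this special case, which stays bounded as N grows.

```python
import numpy as np

def star_meeting_expectation(N):
    # E_{1,2}[t_meet] for two rate-1 continuous-time simple walks on the star
    # with center 0 and leaves 1..N (theta_0 = theta_1 = 1, no laziness)
    V = N + 1
    P = np.zeros((V, V))
    P[0, 1:] = 1.0 / N     # center -> uniformly chosen leaf
    P[1:, 0] = 1.0         # leaf -> center
    R = 2.0                # total jump rate theta_0 + theta_1
    # embedded chain: with prob 1/2 walker X steps, otherwise walker Y steps
    K = 0.5 * np.kron(P, np.eye(V)) + 0.5 * np.kron(np.eye(V), P)
    off = [s for s in range(V * V) if s // V != s % V]   # states with x != y
    Knn = K[np.ix_(off, off)]
    # expected time: b = (mean holding time) + K b off the diagonal, b = 0 on it
    b = np.linalg.solve(np.eye(len(off)) - Knn, np.full(len(off), 1.0 / R))
    return b[off.index(1 * V + 2)]   # start: X at leaf 1, Y at leaf 2

e10, e40 = star_meeting_expectation(10), star_meeting_expectation(40)
```
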
Lemma 2.
The Laplace transform of \(t_{infe}\) is given by
if \(x_{0},x_{1} \neq 0\) and \(x_{0} \neq x_{1}\),
if \(x_{0}=0\) and \(x_{1} \neq 0\), and
if \(x_{0} \neq 0\) and \(x_{1}=0\), where A, B, C and D are given in Proposition 1.
Proof.
Put \(\alpha = \mathbb {E}_{1,2}[e^{-\lambda t_{infe}}], \beta = \mathbb {E}_{0,1}\left [e^{-\lambda t_{infe}}\right ]\) and \(\gamma = \mathbb {E}_{1,0}[e^{-\lambda t_{infe}}]\). From Theorem 2, they satisfy the following:
or, equivalently,
By solving the above equations, we obtain the assertion. □
Proof of Proposition 1.
The density functions immediately follow from Lemma 2 and
The expectations are obtained by computing \(- \frac {\partial }{\partial \lambda } \mathbb {E}_{x_{0},x_{1}}[e^{-\lambda t_{infe}}] \big |_{\lambda =0}\) in Lemma 2. Noting that
if \(x_{0},x_{1} \neq 0\) and \(x_{0} \neq x_{1}\), we have
If \(x_{0}=0\) and \(x_{1} \neq 0\),
If \(x_{0} \neq 0\) and \(x_{1}=0\),
□
Proof of Corollary 1.
The inequality (1) follows immediately from Proposition 1. The equality (2) is shown as follows. Put \(\alpha = \max\{\theta_{0},\theta_{1}\}\) and \(\beta = \min\{\theta_{0},\theta_{1}\}\). Note that \(\alpha\beta = \theta_{0}\theta_{1}\) and \(\alpha+\beta = \theta_{0}+\theta_{1}\). Then,
From the above two equalities and \(\Theta = \frac {1}{2}\left (\frac {\beta }{\alpha } + \frac {\alpha }{\beta }\right)\), we obtain
□
3.2 Cycle graphs
We consider a random walk on a cycle graph in the case k=1. Let V={0,1,…,N−1} and define the transition probability by
where 0≤p,q≤1 and p+q=1. Note that Theorem 3, stated in Section 4, applies when the random walk is simple, that is, p=q=1/2.
Proposition 2.
Set
Then, the Laplace transform of \(t_{infe}\) is given by
where \(f(z) = z \bmod N\).
Corollary 2.
Put \(\nu = \frac {\theta _{0}q + \theta _{1}p}{\theta _{0}p + \theta _{1}q}\) and \(f(z) = z \bmod N\). Then,
if \(p \neq q\) and \(\theta_{0} \neq \theta_{1}\), and
if \(p=q\) or \(\theta_{0}=\theta_{1}\).
Remark 3.
-
1.
It is worth noting that the distribution of \(t_{infe}\) is independent of p if \(\theta_{0}=\theta_{1}\), because of the definition of \(\mu_{\pm}\).
-
2.
Let \(t_{hit}(y) = \inf \left \{t\ge 0;X^{(0)}_{t}=y\right \}\) and
$$\begin{array}{@{}rcl@{}} \tilde{\mu}_{\pm} = \frac{\lambda+\theta_{0} \pm \sqrt{(\lambda+\theta_{0})^{2}-4{\theta_{0}^{2}}pq}}{2\theta_{0}p}. \end{array} $$Then, we can verify that
$$\begin{array}{@{}rcl@{}} E_{x}\!\left[e^{-\lambda t_{hit}(y)}\right] \,=\, \frac{\tilde{\mu}_{+}^{f(x-y)}\left(1 - \tilde{\mu}_{-}^{N}\right) \!+ \tilde{\mu}_{-}^{f(x-y)}\left(\tilde{\mu}_{+}^{N} - 1\right)} {\tilde{\mu}_{+}^{N}-\tilde{\mu}_{-}^{N}}. \end{array} $$Therefore the Laplace transform of the hitting time has exactly the same expression as Proposition 2 by replacing μ ± with \(\tilde {\mu }_{\pm }\). We notice that \({\lim }_{\theta _{1} \to 0} \mu _{\pm } = \tilde {\mu }_{\pm }\).
-
3.
We see from Corollary 2 that
$$\begin{array}{@{}rcl@{}} E_{x,y}\left[t_{infe}\right] =\left\{ \begin{array}{ll} O(N), & p \neq q \text{~and~} \theta_{0} \neq \theta_{1}, \\ O(N^{2}), & p = q \text{~or~} \theta_{0} = \theta_{1}. \end{array} \right. \end{array} $$Note that \((p-q)(\theta_{0}-\theta_{1}) \neq 0\) is equivalent to \(\nu \neq 1\).
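The closed form in item 2 of Remark 3 can be verified numerically: solve the first-jump equations \(h(x) = \frac{\theta_0}{\lambda+\theta_0}\bigl(p\,h(x+1)+q\,h(x-1)\bigr)\) with h(0)=1, and compare with the \(\tilde\mu_\pm\) expression. A sketch (we assume the walk steps +1 with probability p and −1 with probability q; the parameter values are ours):

```python
import numpy as np

N, theta0, p, lam = 7, 1.3, 0.6, 0.5
q = 1.0 - p
c = theta0 / (theta0 + lam)   # E[e^{-lam * (exponential holding time)}]

# first-jump equations for h(x) = E_x[e^{-lam t_hit(0)}] on states 1..N-1,
# with boundary values h(0) = h(N) = 1
M = np.zeros((N - 1, N - 1))
rhs = np.zeros(N - 1)
for i, x in enumerate(range(1, N)):
    if x + 1 <= N - 1:
        M[i, i + 1] = c * p
    else:
        rhs[i] += c * p          # wraps to h(N) = 1
    if x - 1 >= 1:
        M[i, i - 1] = c * q
    else:
        rhs[i] += c * q          # wraps to h(0) = 1
h = np.linalg.solve(np.eye(N - 1) - M, rhs)

# closed form via mu_tilde_{+-}
disc = np.sqrt((lam + theta0) ** 2 - 4 * theta0**2 * p * q)
mu_p = (lam + theta0 + disc) / (2 * theta0 * p)
mu_m = (lam + theta0 - disc) / (2 * theta0 * p)
h_cf = np.array([(mu_p**x * (1 - mu_m**N) + mu_m**x * (mu_p**N - 1))
                 / (mu_p**N - mu_m**N) for x in range(1, N)])
```

Indeed \(\tilde\mu_\pm\) are exactly the roots of the characteristic equation \(\theta_0 p \mu^2 - (\lambda+\theta_0)\mu + \theta_0 q = 0\) of this recurrence, and the boundary conditions h(0)=h(N)=1 fix the two coefficients.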
Proof of Proposition 2.
Put \(a_{x,y} = E_{x,y}[e^{-\lambda t_{meet}(\Lambda)}]\). We first remark that
for any x,y∈V, because of the definition of the transition probability. So it suffices to consider \(\tilde {a}_{z} = E_{z,0}[e^{-\lambda t_{meet}(\Lambda)}]\) for z∈V. From Theorem 2, these satisfy
Here we used \(a_{f(z+1),0}=a_{z,f(0-1)}\) and \(a_{f(z-1),0}=a_{z,f(0+1)}\) for the last equality, which follow from (5). This system of equations together with the boundary conditions \(\tilde {a}_{0}=\tilde {a}_{N}=1\) has the solution
□
Proof of Corollary 2.
We prove the corollary in a fashion similar to the proof of Proposition 2. Put \(b_{x,y} = E_{x,y}[t_{meet}(\Lambda)]\) and \(\tilde {b}_{z} = E_{z,0}[t_{meet}(\Lambda)]\). Then, we see that
for x,y∈V and
for z∈V. This system of equations together with the boundary conditions \(\tilde {b}_{0}=\tilde {b}_{N}=0\) has the solution
if p≠q and θ 0≠θ 1, and
if p=q or θ 0=θ 1. □
3.3 Complete graphs
We consider the p-lazy simple random walk on the complete graph with N+1 vertices. For simplicity, we use the following notations:
3.3.1 The case k=1
Proposition 3.
Let \(x_{0} \neq x_{1}\). For continuous time random walks, it holds that
For discrete time random walks, it holds that
Remark 4.
We notice from Proposition 3 that \(t_{infe}\) has an exponential distribution with parameter \(\frac {\theta _{01}}{\tilde {N}}\) for continuous time random walks, and a geometric distribution with parameter \(\frac {(1+p)\tilde {N} - 1}{\tilde {N}^{2}}\) for discrete time random walks.
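Writing \(\tilde N = N/(1-p)\) (our reading of the notation set up in Section 3.3), the geometric parameter can be checked directly: for two independent p-lazy walks started at distinct vertices of the complete graph, the one-step meeting probability is \((P^2)_{x_0 x_1}\), since P is symmetric, and by symmetry of the complete graph the number of steps to meet is geometric with this parameter.

```python
import numpy as np

def one_step_meeting_prob(N, p):
    # p-lazy simple random walk on the complete graph with N+1 vertices
    V = N + 1
    P = p * np.eye(V) + (1 - p) * (np.ones((V, V)) - np.eye(V)) / N
    # P(X' = Y') for independent steps from x != y equals (P P^T)[x,y] = (P^2)[x,y]
    return (P @ P)[0, 1]

def geometric_parameter(N, p):
    # parameter from Remark 4, assuming N_tilde = N / (1 - p)
    Nt = N / (1.0 - p)
    return ((1 + p) * Nt - 1) / Nt**2

vals = [(one_step_meeting_prob(N, p), geometric_parameter(N, p))
        for N in (3, 8) for p in (0.2, 0.5)]
```
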
Corollary 3.
Let \(t^{\text {(dis.)}}_{infe}\) be the infection time of a discrete time random walk and \(t^{\text {(con.)}}_{infe}\) the infection time of a continuous time random walk. Then,
if and only if \(p \le 1 + \frac {(\theta _{01}-2)N}{N+1}\).
Proof of Proposition 3.
Put \(\alpha = \mathbb {E}_{x_{0},x_{1}}[e^{-\lambda t_{infe}}]\) for fixed \(x_{0},x_{1} \in V\) with \(x_{0} \neq x_{1}\). From Theorem 2, α satisfies the following:
Thus, the density function is obtained from
The expectation is also obtained by \(- \frac {\partial \alpha }{\partial \lambda } \big |_{\lambda =0}\). On the other hand, from Theorem 1, α satisfies the following:
Thus, the density function is obtained from
The expectation is also obtained by \(- \frac {\partial \alpha }{\partial \lambda } \big |_{\lambda =0}\). □
Proof of Corollary 3.
From Proposition 3, the inequality
holds if and only if
□
3.3.2 The case k=2
Proposition 4.
Let \(x_{0} \neq x_{1},x_{2}\) and
Then, the density function of \(t_{infe}\) is given by
if \(x_{1} \neq x_{2}\), and
if \(x_{1}=x_{2}\).
Lemma 3.
Consider the Laplace transform of \(t_{meet}=t_{meet}(\Lambda)\) for Λ={1,2}. If \(x_{0} \neq x_{1}, x_{0} \neq x_{2}\) and \(x_{1} \neq x_{2}\), then
and if \(x_{0} \neq x_{1}, x_{0} \neq x_{2}\) and \(x_{1}=x_{2}\), then
where
Proof.
Put \(\alpha = \mathbb {E}_{0,1,2}\left [e^{-\lambda t_{meet}}\right ], \beta = \mathbb {E}_{0,1,1}\left [e^{-\lambda t_{meet}}\right ]\). From Theorem 2, they satisfy the following:
By solving the above equations, we obtain the assertion. □
Proof of Proposition 4.
Let Λ={1,2}. From Lemma 3, we see that
if \(x_{0} \neq x_{1}, x_{0} \neq x_{2}\) and \(x_{1} \neq x_{2}\), and
if \(x_{0} \neq x_{1}, x_{0} \neq x_{2}\) and \(x_{1}=x_{2}\). Note that
Taking the inverse Laplace transform as in (3) and (4), we have
if \(x_{0} \neq x_{1}, x_{0} \neq x_{2}\) and \(x_{1} \neq x_{2}\), and
if \(x_{0} \neq x_{1}, x_{0} \neq x_{2}\) and \(x_{1}=x_{2}\). Therefore, combining the results above with Proposition 3, we can compute
□
Corollary 4.
We denote the infection time of the k+1 multiple random walks by \(t_{infe}(k)\). Then,
for any \(\theta_{0},\theta_{1},\theta_{2}>0\) and any mutually distinct \(x_{0},x_{1},x_{2} \in V\).
Lemma 4.
Suppose \(x_{0} \neq x_{1}\) and \(x_{0} \neq x_{2}\). Then,
and
where
Proof.
The expectations of \(t_{meet}(\{1,2\})\) are obtained from Lemma 3 by computing \(- \frac {\partial }{\partial \lambda } \mathbb {E}_{x_{0},x_{1},x_{2}}[e^{-\lambda t_{meet}(\Lambda)}] \big |_{\lambda =0}\). The expectations of \(t_{infe}\) are obtained by computing
□
Proof of Corollary 4.
We observe from Lemma 4 that
since N≥1. Thus the first inequality of (7) holds from
Here the second equality follows from the principle of inclusion-exclusion. The second inequality of the corollary also follows from
since \(\mathbb {E}_{x_{0},x_{1},x_{2}}\left [t_{meet}(\{1,2\})\right ] \le \mathbb {E}_{x_{0},x_{1},x_{1}}\left [t_{meet}(\{1,2\})\right ]\) by Lemma 4.
□
4 Meeting times \(t_{meet}(1)\)
Finally, we give another tool for the meeting time \(t_{meet}(\Lambda)\) with Λ={1}.
A couple of examples of graphs are mentioned in [1] for which the "difference" \(X_{t}^{(0)} - X_{t}^{(1)}\) of two random walks on the graph behaves precisely as \(X_{2t}^{(0)}\). If this property holds, then the meeting time of the two random walks is equal in law to half of the first hitting time of a single random walk. Of course, an additive operation on V must be defined, so that \(X_{t}^{(0)} - X_{t}^{(1)} \in V\). The remaining assumptions are stated below.
Theorem 3.
We consider \(t_{meet}=t_{meet}(\{1\})\) of continuous time random walks on G=(V,E) such that an additive operation on V is defined. Let \(t_{hit}(x)=\inf\{t\ge 0; Y_{t}=x\}\) for x∈V, where \(Y_{2t} = X^{(0)}_{t} - X^{(1)}_{t}\) is a continuous time random walk whose exponential holding time has mean \(\frac {2}{\theta _{0}+\theta _{1}}\). Then,
with respect to \(\mathbb {P}_{x_{0},x_{1}}\), if the transition matrix P=(p(x,y)) satisfies the following:
-
1.
p(x,y)=p(y,x) for all x,y∈V,
-
2.
p(x,y)=p(x−y,0) for all x,y∈V.
Proof of Theorem 3.
We remark that the hypotheses 1 and 2 in the theorem are equivalent to
Then, for any function \(f: \mathbb {R} \to \mathbb {R}\), we have
This completes the proof. □
Remark 5.
Theorem 3 also holds for discrete time random walks under the obvious modifications.
Example 1.
We demonstrate how to define an additive operation on V for continuous time simple random walks on G=(V,E).
-
1.
Let G be a cycle graph with vertex set V={0,…,n−1} and edge set \(E=\{xy ; x-y \equiv \pm 1 \bmod n\}\). Then, Theorem 3 holds if x−y is given by \(x-y \bmod n\). In this setting, we recover the result of Section 3.2 in the case p=q=1/2.
-
2.
Let G be a Hamming graph with vertex set \(V=\{0,\ldots,n-1\}^{d}\) and edge set \(E=\{xy ; |\{i ; x_{i} \neq y_{i}\}|=1\}\). Then, Theorem 3 holds if \((x-y)_{i}\) is given by \(x_{i}-y_{i} \bmod n\) for each i=1,…,d. Hitting times for Hamming graphs are investigated in [13].
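Theorem 3 together with Example 1(1) can be checked exactly by linear algebra: on the cycle, the expected meeting time of two simple continuous time walks with rates \(\theta_0, \theta_1\) equals half the expected hitting time of 0 for a single walk Y with holding-time mean \(2/(\theta_0+\theta_1)\), started at \(x_0 - x_1 \bmod n\). A sketch with parameter choices of our own:

```python
import numpy as np

n, th0, th1 = 6, 1.0, 2.0
x0, x1 = 2, 5

P = np.zeros((n, n))
for x in range(n):
    P[x, (x + 1) % n] = P[x, (x - 1) % n] = 0.5   # simple walk kernel

# expected meeting time via the product chain (embedded jump chain at rate th0+th1)
R = th0 + th1
K = (th0 / R) * np.kron(P, np.eye(n)) + (th1 / R) * np.kron(np.eye(n), P)
off = [s for s in range(n * n) if s // n != s % n]
b = np.linalg.solve(np.eye(len(off)) - K[np.ix_(off, off)],
                    np.full(len(off), 1.0 / R))
t_meet = b[off.index(x0 * n + x1)]

# expected hitting time of 0 for Y with jump rate (th0+th1)/2, from x0-x1 mod n
r = R / 2.0
states = list(range(1, n))
h = np.linalg.solve(np.eye(n - 1) - P[np.ix_(states, states)],
                    np.full(n - 1, 1.0 / r))
t_hit = h[states.index((x0 - x1) % n)]
```

Here the difference walk starts at distance 3 on the 6-cycle, where the classical formula k(n−k) = 9 for the discrete hitting time gives an expected meeting time of 9/(θ0+θ1) = 3.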
References
Aldous, D.: Meeting times for independent Markov chains. Stoch. Process. Appl. 38, 185–193 (1991).
Aldous, D., Fill, J.: Reversible Markov chains and random walks on graphs. Monograph available at http://www.stat.berkeley.edu/~aldous/RWG/book.html (2002).
Bshouty, N., Higham, L., Warpechowska-Gruca, J.: Meeting times of random walks on graphs. Inf Process Lett. 69(5), 259–265 (1999).
Cooper, C., Frieze, A., Radzik, T.: Multiple random walks in random regular graphs. SIAM J Discrete Math. 22(4), 1738–1761 (2009).
Coppersmith, D., Tetali, P., Winkler, P.: Collisions among random walks on a graph. SIAM J Discrete Math. 6(3), 363–374 (1993).
Datta, N., Dorlas, T.C.: Random walks on a complete graph: a model for infection. J Appl Probab. 41, 1008–1021 (2004).
Draief, M., Ganesh, A.: A random walk model for infection on graphs: spread of epidemics & rumours with mobile agents. Discrete Event Dyn Syst. 21, 41–61 (2011).
Kuperman, M.: Invited review: Epidemics on social networks. Papers in Physics. 5, art. 050003 (2013).
Kurkova, I., Popov, S., Vachkovskaia, M.: On infection spreading and competition between independent random walks. Electron. J. Probab. 9(11), 1–22 (2004).
Kurtz, T., Lebensztayn, E., Leichsenring, A., Machado, F.: Limit theorems for an epidemic model on the complete graph. Alea. 4, 45–55 (2008).
Levin, D., Peres, Y., Wilmer, E.: Markov Chains and Mixing Times. American Mathematical Society, Providence, RI (2009).
Machado, F., Mashurian, H., Matzinger, H.: CLT for the proportion of infected individuals for an epidemic model on a complete graph. Markov Process. Related Fields. 17, 209–224 (2011).
Markowsky, G.: Simple random walk on distance-regular graphs. Preprint, arXiv:1301.6394 (2013).
Pettarin, A., Pietracaprina, A., Pucci, G.: Infectious random walks. Preprint, arXiv:1007.1604v2 (2011).
Acknowledgements
The author is grateful to Mikio Shibuya for stimulating discussions, in particular regarding Theorem 3. The author was supported by JST, ERATO, Kawarabayashi Large Graph Project.
Rights and permissions
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.
Ohwa, T. Exact computation for meeting times and infection times of random walks on graphs. Pac. J. Math. Ind. 7, 5 (2015). https://doi.org/10.1186/s40736-015-0016-2