On the internal distance in the interlacement set
Abstract.
We prove a shape theorem for the internal (graph) distance on the interlacement set of the random interlacement model on Z^d, d ≥ 3. We provide large deviation estimates for the internal distance of distant points in this set, and use these estimates to study the internal distance on the range of a simple random walk on a discrete torus.
Key words and phrases:
shape theorem, simple random walk, intersections of random walks, capacity
2010 Mathematics Subject Classification:
Primary 60K35, 82B43
1. Introduction and the results
We study properties of the interlacement set of the random interlacement model. We are mainly interested in its connectivity properties, in particular in the internal distance (sometimes called the chemical distance) on the interlacement cluster.
The random interlacement model was introduced in [Szn10] in order to describe the microscopic structure in the bulk which arises when studying the disconnection time of a discrete cylinder or the vacant set of random walk on a discrete torus. It can be informally described as a dependent site percolation on Z^d, d ≥ 3, which is ‘generated’ by a Poisson cloud of independent simple random walks whose intensity is driven by a nonnegative multiplicative parameter u. The set covered by these random walks is called the interlacement set at level u and is denoted by I^u. As the precise definition of I^u is rather lengthy, we postpone it to Section 2 and state our results first.
Let be the conditional distribution given that the origin is in the interlacement set . For we define to be the internal distance between and within the interlacement set :
where denotes the norm in . As we shall see below, the set is a.s. connected for all , so for all and . Assuming that , let be the ball centred at with radius in the internal distance. We abbreviate .
The first main result of this paper is the shape theorem for large balls in the internal distance.
Theorem 1.1.
For every and there exists a compact convex set such that for any there exists an a.s. finite random variable such that
for all .
Remark 1.2.
Clearly, the set is symmetric under rotations and reflections of and for all . It is straightforward to show that as ; it would be interesting, however, to be able to say something about the behaviour of when (e.g., does the shape become close to the Euclidean ball, and what can be said about the size of as ?).
The key technical step in the proof of Theorem 1.1 is the fact, which is of independent interest, that the distance within the interlacement cluster is typically of the same order as the usual distance.
Theorem 1.3.
For every and there exist constants and such that
A corresponding result for Bernoulli percolation on was proved by Antal and Pisztora; in their case the constant equals one, which is optimal, see [AP96, Theorem 1.1]. We did not try to optimise the constant in Theorem 1.3.
Remark 1.4.
The methods used to show Theorem 1.1 also imply the following result.
Theorem 1.5.
It holds that .
Previously it was known that for every fixed u > 0, the set I^u is a.s. connected (see (2.21) in [Szn10]); the above theorem means that a.s. there are no ‘exceptional values’ of the parameter u. Note also that much more is known about the connectivity of I^u for fixed u, see [PT11, RS12].
Theorems 1.1 and 1.3 indicate that at large scales the interlacement set looks very much like Z^d. In the same direction, Ráth and Sapozhnikov recently proved that the interlacement set percolates in slabs [RS11a], and that random walk on I^u is transient [RS11b].
Theorem 1.3 can also be used to answer a related question: ‘How much does the range of the random walk on the torus resemble the torus?’ To this end we let be a simple random walk on the discrete -dimensional torus of size , , and write for its law when started from the uniform distribution. We let denote the range of the random walk up to time ,
Let be the minimal distance between and within , defined similarly to , and let be the usual graph distance on the torus.
Theorem 1.6.
For large enough and , we have
This theorem improves a result of Shellef [She10], where a similar claim was proved for growing very slowly with , using entirely different methods. More precisely, [She10] requires , where is the -times iterated logarithm, being arbitrary. On the other hand, Shellef needs only ; we do not have control over the size of this constant.
The main difficulty of the paper lies in proving our results for , in particular for . In fact, for there is a rather simple argument, based on the results of [RS11b], which shows Theorem 1.3 with , and which we sketch in the Appendix. This argument uses the fact that for the random interlacement restricted to a thick enough two-dimensional slab dominates, in some sense, standard Bernoulli percolation, which allows an application of [AP96]. Heuristically, in large dimensions it is possible to construct ‘long straight connections’ within locally, independently of the connections in other places.
It seems that this argument cannot be extended to . It is much harder to construct the straight connections locally in an independent manner. This we do in Section 6, where we dominate the internal distance between the origin and the point by the sum of a sequence of random variables with a finite range of dependence and stretched-exponential tails, cf. (6.11) below. To obtain the finite range of dependence, we must show that connections within a large box of size can be constructed using fewer than random walk trajectories (which is the typical number of random walks intersecting this box; here and in the sequel we write when for positive constants we have for all ). In fact, in Proposition 4.2 we will show that a ‘backbone’ of in this box can be constructed using trajectories only, . This also means that for every the interlacement set is ‘largely supercritical’, that is, it remains locally connected even when considerably thinned.
The paper is organised as follows. After introducing the notation in Section 2, we collect in Section 3 some estimates on the hitting probabilities of sets and on the range of the simple random walk. Section 4 contains the key technical result of this paper, Proposition 4.2. This proposition roughly states that all points in (a possibly thinned version of) the set within a box of size are at internal distance , with very high probability. Using this proposition, we give in Section 5 a short proof of Theorem 1.5. Sections 6–8 contain the proofs of Theorems 1.3, 1.1, and 1.6.
Acknowledgements. The authors would like to thank Augusto Teixeira for many useful discussions, and Balázs Ráth for pointing out Shellef’s paper [She10]. The work of Serguei Popov was partially supported by CNPq (301644/2011–0) and FAPESP (2009/52379–8).
2. Preliminaries
In this section we fix the notation and recall the definition of the random interlacement model.
Let be the set of natural numbers. We denote by the coordinate vectors in , and write for the Euclidean, , and norms, respectively. We use to denote the closed ball centred at with radius , and abbreviate . We say that is connected if for any there is a nearest-neighbour path that lies fully inside and connects to . We write for the cardinality of , for its diameter in the norm, and for its internal boundary.
Let us write for the law of a discrete-time simple random walk on started from . For we denote by , and the entrance time in , the hitting time of , and the exit time from :
(2.1) 
Given finite, we define the equilibrium measure of by
and denote by its total mass.
We now recall the definition of the random interlacement from [Szn10]. In order to do this we need to introduce further notation which is, however, mostly used only locally. Let be the space of doubly-infinite nearest-neighbour trajectories in which tend to infinity at positive and negative infinite times, and let be the space of equivalence classes of trajectories in modulo time-shift. (These spaces are equipped with σ-algebras , as in (1.2), (1.10) of [Szn10].) The random interlacement is defined via a Poisson point process taking values in the space of point measures on the space , with the intensity measure . We denote by the law of this process.
To describe the measure appearing in the intensity of the Poisson point process, for , , we denote by the mapping from to the space of point measures on which selects from the trajectories with labels smaller than intersecting and parametrises them so that they enter at time . Formally, for , , , we define
(2.2) 
where for an arbitrary in the equivalence class of , and is the unique in this equivalence class such that , , . As follows from [Szn10], Theorem 1.1, the measure is uniquely determined by the following two properties which we will frequently use:

For every finite set , under , the number of trajectories in with labels smaller than entering has the Poisson distribution with parameter .

Let , . Then, under , are i.i.d., independent of , with the law given by
for any measurable set in the space of singly-infinite nearest-neighbour paths. This means that , restricted to nonnegative times, are i.i.d. simple random walk trajectories started from the normalised equilibrium measure .
The interlacement set at level is then defined as the trace of all trajectories in with labels smaller than ,
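The two defining properties above already pin down the simplest local statistic of the interlacement set: a fixed site belongs to I^u with probability 1 - exp(-u cap({0})) = 1 - exp(-u/g(0)), where g(0) is the Green function of simple random walk at the origin. The following Monte Carlo sketch (our own illustration, not taken from the paper; d = 3, with a truncated time horizon, so it slightly underestimates g(0)) makes this quantitative:

```python
import math
import random

def estimate_green_at_origin(d=3, steps=2000, walks=2000, seed=1):
    """Monte Carlo estimate of g(0): the expected number of visits to the
    origin (counting time 0) of a simple random walk on Z^d started at 0,
    truncated after `steps` steps.  In d = 3 the exact value is 1.5163..."""
    rng = random.Random(seed)
    total = 0
    for _ in range(walks):
        pos = [0] * d
        visits = 1  # the visit at time 0
        for _ in range(steps):
            i = rng.randrange(d)
            pos[i] += rng.choice((-1, 1))
            if not any(pos):  # all coordinates zero: back at the origin
                visits += 1
        total += visits
    return total / walks

g0 = estimate_green_at_origin()
u = 1.0
# Density of the interlacement set: P[0 in I^u] = 1 - exp(-u * cap({0}))
# with cap({0}) = 1/g(0); at u = 1 in d = 3 this is roughly 0.48.
density = 1.0 - math.exp(-u / g0)
```

The truncation and the Monte Carlo noise are both small here, so the estimate lands close to the exact value; increasing `steps` and `walks` tightens it further.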
We now explain the conventions for the use of constants in this paper. We denote by the ‘global’ constants, that is, those that are used throughout the paper, and by the ‘local’ constants, that is, those that are used only in a small neighbourhood of the place where they appear for the first time. For the local constants, we restart the numeration either at the beginning of each subsection or at the beginning of each long proof. All these constants are positive and finite and may depend on the dimension, , and other quantities that are supposed to be fixed; usually we omit expressions like ‘there exist positive constants such that …’ and just insert the ’s directly into the formulas.
Also, the reader will notice that very frequently in this paper the probabilities of events (indexed by some integer parameter, say ) will be bounded from above by or from below by , where is typically (but not necessarily) between and . So, we decided to use the following definition:
Definition 2.1.
We say that is s.e.-small (where s.e. stands for ‘stretched-exponentially’) if for all it holds that
and write .
Observe that for any fixed . So, it is quite convenient to use this notation e.g. in the following situation: assume that we have at most events, each of probability bounded from above by as well. Then the probability of their union is
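As a quick sanity check of this union bound (our own numerical illustration, with arbitrary exponents C = 10 and α = 1/2 standing in for the constants above): a union of polynomially many s.e.-small events is still s.e.-small, because the comparison, carried out in log scale to avoid floating-point underflow, reduces to C log n ≤ n^α / 2 for large n.

```python
import math

def log_union_bound(n, C=10.0, alpha=0.5):
    """log of n^C * exp(-n^alpha): a union bound over at most n^C events,
    each of probability at most exp(-n^alpha).  Working with logarithms
    avoids underflow, since exp(-n^alpha) itself is 0.0 in floating point
    already for moderate n."""
    return C * math.log(n) - n ** alpha

# The polynomial factor is absorbed: for n = 10^6 the union bound is
# still below exp(-n^alpha / 2).
n = 10 ** 6
assert log_union_bound(n) < -(n ** 0.5) / 2.0
```

The same computation with any fixed C and any α in (0, 1) succeeds once n is large enough, which is exactly the stability property of the class of s.e.-small sequences used throughout the paper.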
3. Estimates on hitting probabilities
In this section we collect several estimates on hitting probabilities of subsets of by random walk trajectories. We recall that denotes the law of the simple random walk in , , starting at . We denote by the ‘stopped’ Green function:
and write for . For the case it holds that is finite for all , , and, for all
(3.1)  
(3.2) 
for all . The upper bound (3.2) follows directly from Theorem 1.5.4 of [Law91]. The lower bound (3.1) can be proved easily, by adapting the proof of the same theorem.
For , let
be the probability that, starting from , the simple random walk enters before time . We use the abbreviation for the hitting probabilities of one-point sets, and for the probability that the simple random walk ever enters the set . It is elementary to obtain that for all and (see e.g. Theorem 2.2 of [AMP02])
(3.3) 
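For the infinite-horizon version of this one-point estimate there is in fact an exact identity, which we record for the reader's convenience (standard, and consistent with (3.1)–(3.2); here G(x, y) denotes the unstopped Green function): by the strong Markov property applied at the hitting time of the point x,

```latex
G(0,x) \;=\; P_0[H_x < \infty]\, G(x,x) \;=\; P_x[H_0 < \infty]\, G(0,0),
\qquad\text{so}\qquad
P_x[H_0 < \infty] \;=\; \frac{G(0,x)}{G(0,0)} \;\asymp\; |x|^{2-d},
```

where the last comparison follows from (3.1) and (3.2).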
Next, for and a finite set , define
Clearly, is the expected number of visits to up to time , starting from . As before, we set .
The following lemma will be used repeatedly to estimate the hitting probabilities:
Lemma 3.1.
For all , finite , and
(3.4) 
Proof.
Using the definition of and the strong Markov property,
Since , the second inequality in (3.4) follows. The first inequality is then implied by
together with . ∎
Let us use the notation for the maximal distance between and the points of . The following two simple lemmas contain lower bounds on hitting probabilities of sets.
Lemma 3.2.
Suppose that is a connected finite subset of , containing at least two sites. Then, for all and ,
Proof.
Since is connected, it is possible to find a (not necessarily connected) set with the following properties:

,

one can represent in such a way that for all .
Indeed, the size of the projection of onto one of the coordinate axes is at least , and this projection is an interval; then, for each point in the projection, pick exactly one element of that projects there, and erase the ‘unnecessary’ points of . Then, by (3.1) we have for any
and, by (3.2), for any ,
Since for all , the claim follows from Lemma 3.1. ∎
The previous lemma works well for sparse connected sets. For more densely packed sets we need another estimate:
Lemma 3.3.
For all , finite containing at least two sites, and all ,
Proof.
Again using (3.1), we have for any
To obtain an upper bound on for , we observe that
So, using (3.2), we have for
where we have used an obvious worst-case estimate (all the points of are grouped around , forming roughly a ball of radius ) on the passage from the first to the second line of the above display. Then, applying Lemma 3.1, we conclude the proof of Lemma 3.3. ∎
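The worst-case estimate invoked in this proof can be spelled out as follows (our reconstruction in standard notation, writing A for the set, y for the reference point, and C, C' for constants): if all points of A are packed into a ball of radius of order |A|^{1/d} around y, then, since there are at most C j^{d-1} lattice points at distance j from y, the upper bound (3.2) gives

```latex
\sum_{z \in A} |y-z|^{2-d}
  \;\le\; C \sum_{j=1}^{C |A|^{1/d}} j^{d-1} \cdot j^{2-d}
  \;=\; C \sum_{j=1}^{C |A|^{1/d}} j
  \;\le\; C' |A|^{2/d}.
```

Any other configuration of A spreads the points further from y and only decreases the left-hand side.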
We end this section by stating a few well-known facts about the behaviour of the set of sites visited by a simple random walk by time . As we could not locate suitable references, we also sketch their proofs.
Lemma 3.4.
Suppose that and let be the set of sites visited by a simple random walk by time . Then, for any fixed ,
Proof.
The upper bound on the diameter follows from any convenient large deviation bound on the displacement of the simple random walk (e.g. Lemma 1.5.1 of [Law91]).
To control the diameter and the number of visited sites from below, we use the following simple argument. We divide the temporal interval into subintervals of length , for a large enough . Clearly, on each subinterval of length the maximal displacement of the simple random walk is at least with a constant probability, e.g. by the central limit theorem. Noting that by time the number of visited sites is at most , and that the expectation of this number is at least (it is straightforward to obtain this from (3.1)), we deduce that, also with at least constant probability, the number of different sites visited by the random walk during a fixed temporal interval of length is at least (if is large enough); here we use that for any random variable with a.s. and , it is true that . Finally, to estimate the probability that the event of interest occurs on at least one of the subintervals, use independence. The claim then follows easily. ∎
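The two asymptotics underlying this proof, a diameter of order n^{1/2} and of order n distinct visited sites, are easy to check by simulation. A sketch (our own illustration for d = 3; the function name and parameters are ours, and the constants are not those of the lemma):

```python
import random

def walk_range_stats(n=20000, d=3, seed=7):
    """Simulate n steps of a simple random walk on Z^d; return the number
    of distinct sites visited and the diameter of the range in the sup
    norm (max over coordinates of the coordinate-wise spread)."""
    rng = random.Random(seed)
    pos = [0] * d
    visited = {tuple(pos)}
    lo = [0] * d
    hi = [0] * d
    for _ in range(n):
        i = rng.randrange(d)
        pos[i] += rng.choice((-1, 1))
        lo[i] = min(lo[i], pos[i])
        hi[i] = max(hi[i], pos[i])
        visited.add(tuple(pos))
    diam = max(h - l for h, l in zip(hi, lo))
    return len(visited), diam

size, diam = walk_range_stats()
# In d = 3, size/n tends to the escape probability 1/g(0), roughly 0.66,
# while diam is of order sqrt(n).
```

Running this for increasing n shows `size` growing linearly and `diam` growing like the square root, which is the separation of scales exploited in the lemma.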
We also need an estimate on the number of different sites visited by several random walks:
Lemma 3.5.
Consider independent simple random walks started from arbitrary points , and denote , . Assume that for some fixed . Then, for any we have
Proof.
We use a similar argument to that in the previous proof. We divide the walks into groups, each containing walks. Consider the walks of, say, the first group, and suppose that they are labelled from to . Let
be the set of sites visited by the walks from the first group. For , define
to be the number of walks of the first group that start at distance at most from . By (3.3), using , we have
So, if is large enough
Since, trivially, , it holds that with at least a constant probability. As the same reasoning applies to each of the groups, the claim of the lemma follows by independence. ∎
4. Intersections of random walks
In this section we show that the set of points visited by sufficiently many walks started in is typically well connected; the precise statement of this fact is contained in Proposition 4.2.
To state this proposition we need some notation. We consider two sequences of positive random variables satisfying and
(4.1)  
(4.2) 
for some and . Let be independent simple random walks starting from some sites . We write for the joint distribution of these walks. Let be the set of distinct sites visited by the th random walk until time . We write , for the entrance and hitting times of by the th random walk (recall (2.1)).
Definition 4.1.
For integers we say that is connected to if there exists a sequence of integers such that
(We do not indicate the dependence on in order to keep the notation light.)
In words, the definition says that the trajectories are connected if one can go from the starting point of the th trajectory to the starting point of the th trajectory within the cluster of the first trajectories, changing trajectories no more than times, and using at most sites at the beginning of each trajectory.
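Definition 4.1 can be read as plain graph connectivity on the trajectories: join two indices when the used pieces of their ranges intersect, and ask for a short path in this graph. A minimal sketch of this reading (our own; the function `min_switches` and the toy data are illustrative, and we ignore the restriction to initial pieces of the walks, which the definition also imposes):

```python
from collections import deque

def min_switches(ranges, i, j):
    """ranges[k] is the set of sites used from the k-th trajectory.
    Return the minimal number of trajectory changes needed to go from
    trajectory i to trajectory j, or None if they are not connected."""
    dist = {i: 0}
    queue = deque([i])
    while queue:  # breadth-first search on the intersection graph
        k = queue.popleft()
        if k == j:
            return dist[k]
        for m in range(len(ranges)):
            if m not in dist and ranges[k] & ranges[m]:
                dist[m] = dist[k] + 1
                queue.append(m)
    return None

# Three toy 'trajectories': 0 meets 1 at (1, 0), and 1 meets 2 at (2, 5),
# so going from trajectory 0 to trajectory 2 takes two changes.
r = [{(0, 0), (1, 0)}, {(1, 0), (2, 5)}, {(2, 5), (9, 9)}]
```

The definition bounds both the number of changes and how much of each trajectory may be used; the sketch keeps only the first constraint, which is the one quantified in the discussion that follows.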
Let us define for the following set of integers:
and let
(4.3) 
be the index set of the walks that do not come back to after the time .
For and , define
(4.4) 
(in fact, this quantity represents the necessary number of steps in the recursive construction used in the proof of Proposition 4.2, see (4.8) and (4.15); at this point we only observe that is finite since ).
The following proposition plays the key role in this paper:
Proposition 4.2.
Let , and , , be as above. Then
(4.5) 
Moreover,
(4.6) 
and
(4.7) 
Remark 4.3.
(a) The estimates in the above proposition depend only on the number of walks that we consider; they are uniform with respect to the choice of the starting positions.
(b) Typically, when applying Proposition 4.2 to the interlacement set (say, in the ball ), the variables , will be of order , so that . The proposition implies that the model of random interlacements is ‘far from criticality’ with respect to the connectedness of the interlacement cluster; we typically need far fewer than walks to ensure that the interlacement set is ‘well connected’.
(c) In the most important case , it holds that , but then . Comparing this with the results of [RS12, PT11] (where it is proved that every two points in can be joined by a path switching the trajectory at most times) indicates that the constants are not optimal. The authors did not check if the formula (4.4) can be further simplified, but it is clear that as . In any case, for our needs it is enough to know that is finite for any and , and this fact is quite obvious.
First, let us describe informally the idea of the proof in the particular case (one may note that there are many similarities with the proof of Theorem 3.2 of [AMP02], and with techniques used in [RS12]). Consider the random walk and run it up to time . Then is typically of order , so any other random walk hits the set with probability at least of order roughly (with a logarithmic correction for ), by Lemma 3.2. Since there are other available walks, with high probability will be hit by different other walks. In dimension , running these walks for time units more after the respective hitting moments of is already enough to meet all the other trajectories (again applying Lemma 3.2, one obtains that the probability that any other trajectory hits none of those walks is almost exponentially small in ). In dimension , however, this argument just barely fails.
So, what can be done in dimension ? Consider those trajectories (of length ) that intersect the initial one. Together with the initial trajectory, they form a connected set of cardinality roughly . We then apply Lemma 3.3 to obtain that a random walk starting somewhere on the boundary of will hit such a set with probability at least of order . Since (recall that now ) we have walks in total, typically of them will hit that set. Since in four dimensions Lemma 3.2 gives a lower bound of order for the hitting probability of the initial piece of length of a generic trajectory, running these walks a bit more we meet all the other trajectories with high probability (see Figure 1 for an illustration of the proof for ).
Again, in dimension this fails, since Lemma 3.2 now gives a lower bound of order . However, iterating the above construction, we obtain roughly independent walks at each stage, and, since , finitely many iterations suffice.
If we recursively define the sequence
(4.8) 
then the necessary number of iterations can be calculated as follows:
Since it is straightforward to obtain from the recursion (4.8) that
we see that the above definition of agrees with (4.4).
In order to make the above argument rigorous, we have to address several issues, for example:

Deal with the dependence of the walks that participate in different stages of the above construction. This can be done by dividing the walks we use into groups and using one group at each stage.

In fact, the trajectories can go back to the ball at later epochs (i.e., much later than ). To prove (4.7), we have to ensure that the random walks constructed at the th stage meet these pieces of the trajectories too; otherwise we would have no good control on the distance within the interlacement cluster. So, we have to control the ‘total number of returns’ (see (4.11) below). In addition, in the above construction we shall use only the walks conditioned on not returning to after time (in order to avoid conditioning on too detailed a description of the future behaviour of the trajectory).

Finally, all the events described in the informal construction should not only be ‘typical’ in some sense, but should hold with probability at least on each stage; for that, we need to ‘adjust’ (by sufficiently small amounts) the values in the power of .
Proof of Proposition 4.2.
We start with the formal proof of Proposition 4.2. To simplify the notation we write . Recall (4.3) and define for
Since, clearly, there is a constant such that for all we have
(4.9) 
we obtain that
(4.10) 
Inequality (4.9) further implies that for every
(4.11) 
In the sequel, we will repeatedly use the following observation. For a simple random walk , let be the piece of trajectory of the walk up to time . Then there is a constant such that for any event which depends only on the initial piece of the trajectory of length
(4.12) 
Indeed, to prove (4.12), we write
and use (4.9) to argue that the last term is at least of constant order.