Analysis of distributed systems via quasi-stationary distributions

Abstract We present a new probabilistic analysis of distributed systems. Our approach relies on the theory of quasi-stationary distributions (QSD) and on results recently developed by the first and third authors. We establish properties of the deadlock time and of the distribution of the model before deadlock, both for discrete and diffusion models. Our results apply to any finite values of the involved parameters (time, number of resources, number of processes, etc.) and reflect the real behavior of these systems, with potential applications to deadlock prevention.


Introduction
Today's distributed systems involve a huge (but finite) number of processes sharing common resources, i.e. they are massively parallel. These systems are inherently fragile: for example, if a process runs out of some resource, it can stall the whole system and a deadlock may appear. Distributed systems have been the object of a huge number of publications over the last decades; let us mention that [1] is an excellent book on this topic, which we strongly recommend (as well as the references therein).
Usually, the analysis of distributed systems leads to asymptotic results (where the parameters of interest tend to infinity) and to characterizations of limit laws (laws of large numbers, central limit theorems, etc.). But non-asymptotic results, although usually less precise, may be closer to the real behavior of the distributed systems under consideration. Known results are usually asymptotic and assume small values for the number of processes and resources, typically one or two (see the references given in Section 2 below). The purpose of this paper is to present an analysis of distributed systems which works for any finite number of processes and any finite number of resources, and to provide both asymptotic and non-asymptotic results.
Our approach relies on the theory of quasi-stationary distributions (QSD) recently developed by Champagnat, Villemonais et al. [2][3][4][5][6]. We also refer the reader to [7] and [8, Chapter 7] for links between quasi-stationary distributions and Feynman-Kac formulae, h-processes, Schrödinger operators and particle absorption models. As far as we know, this paper is the first attempt to use this theory for analyzing distributed systems.
In particular, our analysis requires the study of quasi-stationarity for diffusion processes with both absorbing and reflecting boundaries, which is new to our knowledge and is the main contribution of this paper. We illustrate these results by simulations relying on the well-established approximation methods based on mean-field particle systems, as discussed e.g. in [8][9][10].
The organization of this paper is as follows: simple distributed systems are presented in Section 2. Section 3 contains generalities on quasi-stationary distributions in finite state spaces. Our main results are stated and proved in Section 4: we provide non-asymptotic estimates on the deadlock time and the state of the system at deadlock for discrete models (Section 4.1), and asymptotic results which hold true for any number of processes and resources for diffusion models of distributed systems (Section 4.2). Simulations are presented in Section 5.

Examples
We describe two models of distributed systems with possible deadlocks, which will be studied numerically in Section 5.

Colliding stacks
This example is one of the simplest dynamic storage allocation algorithms.
The presentation of this example is based on [11]. For pedagogical reasons, we consider only two stacks but, of course, real storage allocation algorithms involve a huge number of stacks.
Assume that two stacks are to be maintained inside a shared (contiguous) memory area of a fixed size m. A trivial algorithm lets them grow from both ends of that memory area until their cumulative sizes fill the initially allocated storage (m cells), at which point the algorithm stops, having exhausted the available memory. That shared storage allocation algorithm is to be compared with another option, namely allocating a separate zone of size m/2 to each of the two stacks. This separate storage allocation method halts as soon as one of the two stacks reaches size m/2. Several measures may be introduced to compare these two schemes. One of them is the number of operations that can be treated by the algorithms under some appropriate probabilistic model. Another interesting measure of the efficiency of the shared allocation, proposed by Knuth [12], is the size of the largest stack when both stacks meet and the algorithm runs out of storage. Flajolet [11] completely analyzed (combinatorially) this problem and thus solved a question posed by Knuth [12] (Vol. 1, Exercise 2.2.2.13). Partial results had been obtained earlier by Yao [13], but it appears that covering all cases of the original problem cannot be achieved by an extension of Yao's methods. As has been noticed since the problem was initially posed by Knuth [12], the natural formulation is in terms of random walks. Here the random walk takes place in a triangle of a 2-dimensional lattice: a state is the pair formed by the sizes of the two stacks. The random walk has two reflecting barriers along the axes (a deletion has no effect on an empty stack) and one absorbing barrier parallel to the second diagonal (the algorithm stops when the combined sizes of the stacks exhaust the available storage).
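The shared-allocation scheme can be simulated directly. The sketch below is ours for illustration (the symmetric push/pop probability is not a parameter taken from the paper); it runs the two stacks until the m shared cells are exhausted:

```python
import random

def colliding_stacks(m, p_push=0.5, rng=None):
    """Two stacks grow from both ends of a shared memory of m cells.
    Each operation picks a stack uniformly and pushes (prob. p_push)
    or pops one cell; a pop on an empty stack has no effect (reflecting
    barrier).  The walk is absorbed when the cumulative size reaches m.
    Returns (number of operations, final sizes of the two stacks)."""
    rng = rng or random.Random()
    sizes = [0, 0]
    ops = 0
    while sizes[0] + sizes[1] < m:
        i = rng.randrange(2)          # which stack the operation targets
        if rng.random() < p_push:
            sizes[i] += 1             # push one cell
        elif sizes[i] > 0:
            sizes[i] -= 1             # pop (ignored on an empty stack)
        ops += 1
    return ops, tuple(sizes)
```

Knuth's measure of efficiency is then the size of the larger stack at absorption, max(sizes).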

Banker algorithm
This example is a simple financial model.
For simplicity, we restrict the presentation (as for the colliding stacks) to dimension 2. Consider two customers P_1 and P_2 sharing a fixed quantity m of a given resource R (money, say). There are fixed upper bounds m_1, m_2 on how much of the resource each of the customers may need at any time. The banker decides to allocate the required units to customer P_i (i = 1, 2) only if the remaining units are sufficient to fulfill the requirements of P_j (j = 1, 2; j ≠ i). The situation is modeled by a random walk in a rectangle with a broken corner, i.e. in {(x_1, x_2) : 0 ≤ x_1 ≤ m_1, 0 ≤ x_2 ≤ m_2, x_1 + x_2 ≤ m}, where the last constraint generates the broken corner. The random walk is reflected on the sides parallel to the axes and absorbed on the sloping side.
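A minimal simulation of this broken-corner walk can be sketched as follows (the transition probabilities are illustrative, not those of the paper):

```python
import random

def banker_walk(m1, m2, cap, rng=None):
    """Random walk on the rectangle [0, m1] x [0, m2] with the corner
    cut off by the absorbing line x1 + x2 = cap (the 'sloping side').
    Moves that would leave the rectangle through an axis-parallel side
    are ignored (reflection).  Returns (deadlock time, final state)."""
    rng = rng or random.Random()
    x = [0, 0]
    bounds = (m1, m2)
    t = 0
    while x[0] + x[1] < cap:
        i = rng.randrange(2)                  # which customer moves
        y = x[i] + rng.choice([-1, 1])        # request or release one unit
        if 0 <= y <= bounds[i]:
            x[i] = y                          # reflected otherwise
        t += 1
    return t, tuple(x)
```

The returned time is the deadlock time of the path, i.e. the number of operations served before the walk reaches the absorbing sloping side.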
Probabilistic analyses of this algorithm have been presented in [14,15,17] for two customers and in [18,19] for d ∈ ℕ customers.
Maier and Schott [20] proved partial results for d customers and r ∈ ℕ resources.

Description of the model in higher dimension
We consider the interaction of q processes P_1, P_2, …, P_q, each with its own resource needs. We allow the processes to access r different, non-substitutable resources R_1, R_2, …, R_r. We model resource limitations, and define resource exhaustion, as follows. At any time s, process P_i is assumed to have allocated some quantity y_i^j(s) of resource R_j, which may take discrete values (as for the random walks of the previous examples) or continuous values (as for the diffusion processes considered in [18]). Process P_i is assumed to have some maximum need m_ij of resource R_j, so that 0 ≤ y_i^j(s) ≤ m_ij for all s ≥ 0. The constant m_ij may be infinite; if finite, it is a hard limit which the process P_i never attempts to exceed. The resources R_j are limited, so that m_j − 1 is the total amount of resource R_j available for allocation. Recall that resource exhaustion occurs when some process P_i issues an unfulfillable request for a quantity of some resource R_j. Here "unfulfillable" means that fulfilling the request would violate one of the inequalities (2). The state space of the memory allocation system is a convex polytope with faces defined by hyperplanes H_1, …, H_k which are either reflecting or absorbing.
For example, we can consider a model with d processors and a single resource with limit m, so that the state space of the random walk is E = {x ∈ ℝ_+^d : ⟨x, v⟩ < m}, where v is a vector of ℝ_+^d \ {0} with nonnegative coordinates, and with absorbing state space ∂ = {x ∈ ℝ_+^d : ⟨x, v⟩ = m}. General case: for d processors and r resources,

E = {x ∈ (ℝ_+)^d : ⟨x, v_j⟩ < m_j for all j = 1, …, r},   (3)

where the v_j are vectors of (ℝ_+)^d \ {0} with nonnegative coordinates and m_j is the maximum amount of resource j, and with absorbing state space

∂ = {x ∈ (ℝ_+)^d : ⟨x, v_j⟩ ≤ m_j for all j, with equality for some j}.   (4)
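The notion of an unfulfillable request can be made concrete with a small feasibility check. The sketch below is ours for illustration (names and data layout are not from the paper): a request by process P_p is grantable iff it keeps P_p within its declared maximum needs m_pj and keeps the total allocation of each resource R_j within the available amount m_j − 1.

```python
def fulfillable(alloc, max_need, limit, p, request):
    """alloc[i][j]: amount of resource R_j currently held by process P_i;
    max_need[i][j]: the hard limit m_ij that P_i never attempts to exceed;
    limit[j]: total units of R_j available (m_j - 1 in the text's notation);
    request[j]: extra units of each resource asked for by process p.
    Returns True iff granting the request violates no constraint."""
    for j, extra in enumerate(request):
        if alloc[p][j] + extra > max_need[p][j]:
            return False                       # P_p exceeds its own cap
        if sum(a[j] for a in alloc) + extra > limit[j]:
            return False                       # resource R_j exhausted
    return True
```

Resource exhaustion, i.e. deadlock in the sense of this section, occurs precisely when some process issues a request for which this check fails.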

Quasi-stationary distributions
The goal of this section is to give a short survey of the main results on quasi-stationary distributions for absorbed Markov processes and their implications for deadlock prevention and analysis. To keep things simple, we focus here on the case of continuous-time processes taking values in a finite state space, as in Section 2.2. We consider a Markov process (X_t, t ≥ 0) taking values in a finite state space E ∪ ∂, where ∂ is absorbing, meaning that X_t ∈ ∂ a.s. for all t ≥ τ_∂ := inf{t ≥ 0 : X_t ∈ ∂}. We assume that τ_∂ < ∞ a.s., i.e. ∂ is accessible from any state in E. In the context of the colliding stacks and the banker models, τ_∂ is the deadlock time. For all x ≠ y ∈ E ∪ ∂, we denote by q_{x,y} ≥ 0 the transition rate from x to y and we set as usual q_{x,x} = −Σ_{y ≠ x} q_{x,y}. We assume in all this section that the matrix Q := (q_{x,y})_{x,y∈E} is irreducible, so that the Perron-Frobenius theorem applies to the exponential of the matrix Q: when t → +∞,

(e^{tQ})_{x,y} = e^{−λ_0 t} v_x u_y + O(e^{−λ_1 t}),   (5)

where −λ_0 is the spectral radius of the matrix Q, u and v are the normalized, positive left and right eigenvectors of Q for the eigenvalue −λ_0, i.e. uQ = −λ_0 u and Qv = −λ_0 v, and −λ_1 := max{Re(z) : z ∈ Sp(Q) \ {−λ_0}}, where Sp(Q) is the spectrum of the matrix Q in ℂ and Re(z) is the real part of z ∈ ℂ. Note that, since the matrix Q is irreducible and sub-conservative in the sense that Q1 ≤ 0 with at least one negative coordinate, where 1 is the vector of ℝ^E with all coordinates equal to 1, Perron-Frobenius theory entails that −λ_1 < −λ_0 < 0. The next result was first proved in [21] and follows easily from the formulas above.

Proposition 3.1. There exists a constant C such that, for all x, y ∈ E and all t ≥ 0,

|e^{λ_0 t} P_x(X_t = y) − v_x u_y| ≤ C e^{−(λ_1−λ_0)t},   (6)

and the probability measure u = (u_x, x ∈ E) on E is a quasi-stationary distribution, in the sense that

P_u(X_t = y | t < τ_∂) = u_y for all y ∈ E and t ≥ 0,   (7)

where P_u = Σ_{x∈E} u_x P_x. In addition, for all x ∈ E and t ≥ 0,

|e^{λ_0 t} P_x(t < τ_∂) − v_x| ≤ C e^{−(λ_1−λ_0)t}.   (8)

Quasi-stationary distributions like u satisfy general properties as explained for example in [6], summarized in the next proposition.
Proposition 3.2. When the initial population is distributed according to the quasi-stationary distribution u, the absorption time τ_∂ is exponentially distributed with parameter λ_0, i.e. P_u(τ_∂ > t) = e^{−λ_0 t} for all t ≥ 0.
Moreover, τ_∂ is independent of (X_{τ_∂−}, X_{τ_∂}), where X_{τ_∂−} is the position just before exit and X_{τ_∂} is the exit position. In addition, the joint law of (X_{τ_∂−}, X_{τ_∂}) under P_u is given by

P_u(X_{τ_∂−} = y, X_{τ_∂} = z) = λ_0^{−1} u_y q_{y,z}, for all y ∈ E and z ∈ ∂.

Note that, as a consequence, the exit position is distributed under P_u as

P_u(X_{τ_∂} = z) = λ_0^{−1} Σ_{y∈E} u_y q_{y,z}, z ∈ ∂.

This indeed defines a probability distribution on ∂ since Σ_{z∈∂} Σ_{y∈E} u_y q_{y,z} = λ_0. In addition, the position just before exit is distributed under P_u as

P_u(X_{τ_∂−} = y) = λ_0^{−1} u_y Σ_{z∈∂} q_{y,z}, y ∈ E.

This is the quasi-stationary distribution biased by the exit rate Σ_{z∈∂} q_{y,z} of the process. Although the method of proof is quite standard, we give the proof for the sake of completeness. Note that the first part of Proposition 3.2 already appears in a more general setting in [22,23] and [10, Section 1.3].
Proof. The first property follows from the Markov property and the definition (7) of a quasi-stationary distribution: for all t, s ≥ 0,

P_u(τ_∂ > t + s) = P_u(τ_∂ > t) P_u(τ_∂ > t + s | τ_∂ > t) = P_u(τ_∂ > t) P_u(τ_∂ > s),

where the last equality comes from the Markov property and the fact that, conditionally on {τ_∂ > t}, X_t is distributed according to u. This is the "loss of memory" property characterizing exponential random variables, hence τ_∂ is exponentially distributed. It follows from (5) that the parameter of the exponential distribution is λ_0.
The independence between τ_∂ and X_{τ_∂} follows from a similar computation: for all bounded measurable functions f on ∂, conditioning on {τ_∂ > t} leaves the law of X_{τ_∂} unchanged, since X_t is then distributed according to u. The independence between τ_∂ and X_{τ_∂−} can be proved in exactly the same way. Finally, due to the above independence, the joint law of (X_{τ_∂−}, X_{τ_∂}) can be computed by decomposing the trajectory along the sequence (J_i)_{i≥1} of jump times of the process (X_t, t ≥ 0). Standard computations for discrete Markov processes then yield the formulas of the statement. This concludes the proof of the Proposition. □
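For a finite chain, the objects above are directly computable. The following sketch (with a small illustrative rate matrix Q of our own, not one of the paper's models) extracts λ_0 and the QSD u as the Perron left eigenpair of Q, and checks two consequences of Propositions 3.1 and 3.2: the mean exit rate under u equals λ_0, and P_u(t < τ_∂) = e^{−λ_0 t}.

```python
import numpy as np

# Illustrative sub-conservative rate matrix Q on E = {0, 1, 2}; the
# deficit of each row sum is the exit rate into the deadlock state.
Q = np.array([[-2.5, 1.0, 0.5],
              [ 1.0, -2.5, 1.0],
              [ 0.5, 1.0, -2.0]])
kill = -Q.sum(axis=1)                 # exit rates towards the absorbing set

# Perron-Frobenius: -lam0 is the eigenvalue of Q with largest real part,
# u the associated normalized left eigenvector (the QSD).
w, V = np.linalg.eig(Q.T)
k = np.argmax(w.real)
lam0 = -w[k].real
u = np.abs(V[:, k].real)
u /= u.sum()

# Check that lam0 equals the mean exit rate under the QSD.
assert abs(lam0 - u @ kill) < 1e-10

# Check P_u(t < tau_partial) = exp(-lam0 t) via the matrix exponential of Q.
t = 0.7
wq, Vq = np.linalg.eig(Q)
expQt = (Vq * np.exp(wq * t)) @ np.linalg.inv(Vq)
survival = (u @ expQt).sum().real
assert abs(survival - np.exp(-lam0 * t)) < 1e-10
```

The exit-law formula of Proposition 3.2 is visible in the first check: λ_0 normalizes the measure u_y q_{y,z} on E × ∂.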

Main results
We study models with discrete state space, such as those presented above, and diffusion models with boundary conditions as in Section 2.3. We obtain estimates on the QSD in both cases, and on the Perron-Frobenius eigenvector (resp. Dirichlet eigenfunction) in the discrete case (resp. continuous case).
hence the distribution of the deadlock time τ_∂ has an exponential tail. We can also deduce from (8) estimates on the expectation of functions of τ_∂. Note that these estimates are accurate provided that λ_0 ≪ λ_1: this is the regime where the state of the process can be approximated by the quasi-stationary distribution at intermediate times, as explained in the next proposition.
Proposition 4.1. For all x, y ∈ E and t ≥ 0, the conditional distribution P_x(X_t = y | t < τ_∂) satisfies a two-sided estimate (9) in terms of u_y. In addition, for all x, y ∈ E and t ≥ 0, we obtain a two-sided estimate (10) comparing P_x(X_t = y) with u_y. In the case where λ_0 ≪ λ_1, for 1/λ_1 ≪ t ≪ 1/λ_0, recalling that v_x is bounded, we deduce that P_x(X_t = y) ≈ u_y for all x, y ∈ E.

Proof. The upper bound in (9) follows directly from (8). For the lower bound, we use the upper bound applied to all states y ≠ x, together with Σ_{y∈E} u_y = 1; the lower bound in (9) follows. To obtain (10), we combine (6), (8) and (9), which yields an error bounded by

C e^{−(λ_1−λ_0)t} (v_x e^{−λ_0 t} + C e^{−λ_1 t}) + C u_y e^{−λ_1 t} + C u_y e^{−λ_1 t} + u_y u_x (1 − e^{−λ_0 t} + C e^{−λ_1 t}). □

We also obtain estimates on the distribution of (X_{τ_∂−}, X_{τ_∂}): an estimate (11) holds for all x, y ∈ E, all z ∈ ∂ and all t ≥ 0. Again, the inequality (11) gives an accurate estimate of the distribution of (X_{τ_∂−}, X_{τ_∂}) in the case where λ_0 ≪ λ_1: taking 1/λ_1 ≪ t ≪ 1/λ_0, it follows that P_x(X_{τ_∂−} = y, X_{τ_∂} = z) ≈ λ_0^{−1} u_y q_{y,z} for all x, y ∈ E and z ∈ ∂.

Proof. The proof makes use of the estimates of Proposition 3.1 and of the properties of Proposition 3.2: for all bounded measurable functions f on E × ∂, these estimates yield an error term bounded by

C ‖f‖_∞ e^{−(λ_1−λ_0)t} + C ‖f‖_∞ e^{−λ_1 t}.

We conclude from (9) and Proposition 3.2. □

The previous results are sharp under the condition λ_0 ≪ λ_1, which means that absorption takes a long time (the typical absorption time is 1/λ_0) and that the process has a tendency to stay away from the absorbing boundaries. This is for example the case when the random walk converges to a deterministic process for which the interior of the domain is stable (see [24][25][26]). Our estimates can be applied to the examples of Section 2 by numerically computing the eigenvalues λ_0 and λ_1.
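The regime λ_0 ≪ λ_1 and the convergence of the conditional law towards u can also be explored numerically. Continuing with a small illustrative rate matrix of our own (not one of the paper's models), the sketch below computes λ_0 and λ_1 and checks that the total-variation distance between P_x(X_t = · | t < τ_∂) and u decays in t, at a rate governed by the spectral gap λ_1 − λ_0:

```python
import numpy as np

Q = np.array([[-2.5, 1.0, 0.5],    # illustrative sub-conservative rate matrix
              [ 1.0, -2.5, 1.0],
              [ 0.5, 1.0, -2.0]])

w, V = np.linalg.eig(Q.T)
order = np.argsort(-w.real)            # eigenvalues by decreasing real part
lam0 = -w[order[0]].real               # -lam0 is the spectral radius
lam1 = -w[order[1]].real               # second eigenvalue: sets the gap
u = np.abs(V[:, order[0]].real)
u /= u.sum()                           # the QSD

def conditional_law(x, t):
    """P_x(X_t = . | t < tau_partial), via the matrix exponential of Q."""
    wq, Vq = np.linalg.eig(Q)
    row = ((Vq * np.exp(wq * t)) @ np.linalg.inv(Vq))[x].real
    return row / row.sum()

# TV distance to the QSD shrinks roughly like exp(-(lam1 - lam0) t).
tv = lambda t: 0.5 * np.abs(conditional_law(0, t) - u).sum()
assert lam1 > lam0 > 0
assert tv(4.0) < tv(0.1)
```

For this matrix the gap λ_1 − λ_0 is of order one, so the conditional law is already close to u at moderate times, illustrating the regime 1/λ_1 ≪ t ≪ 1/λ_0 discussed above.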

The multi-dimensional diffusion model
The previous section gave results on absorbed Markov processes in a finite state space, like those of the examples of Section 2. It is also common to model deadlocks by replacing discrete random walks with diffusion processes in subsets of ℝ^d with partly absorbing and partly reflecting boundaries, as in the example presented in Section 2.3. In this case, quasi-stationary distributions may still be defined, although asymptotic properties such as those of Proposition 3.1, and hence estimates as in Proposition 4.1, are harder to obtain. A general criterion for such results was recently obtained in [3] and has been applied to various classes of stochastic processes in [2][3][4][5]. A particular case of diffusion in a domain delimited by hyperplanes was studied in [5] using non-linear Lyapunov criteria. However, the study of the convergence of conditional distributions of diffusions for which parts of the boundary are absorbing and other parts reflecting is, to our knowledge, new. We are able to obtain the next result for general colliding stacks models.

Theorem 4.3. Consider the diffusion process X evolving in E ∪ ∂ as defined in (3) and (4), with m_j = 1 for j ∈ {1, …, r}, arbitrary nonnegative vectors v_j, and a_i = +∞ for i ∈ {1, …, d}, with the hyperplanes x_i = 0, i ∈ {1, …, d}, as normal reflecting boundaries and the hyperplanes ⟨x, v_j⟩ = 1 as absorbing boundaries. Assume that the infinitesimal generator of X is given, for all smooth functions f vanishing on the hyperplanes ⟨x, v_j⟩ = 1 and with zero normal gradient on the hyperplanes x_i = 0, by

Lf(x) = Σ_{i,j=1}^d a_{ij}(x) ∂²f/(∂x_i ∂x_j)(x) + Σ_{i=1}^d b_i(x) ∂f/∂x_i(x),   (12)

where the matrix a = (a_{ij})_{1≤i,j≤d} is symmetric and uniformly elliptic and b is uniformly bounded, both Hölder continuous on E ∪ ∂. Then X admits a unique quasi-stationary distribution α and there exist positive constants C, c > 0 such that, for all t ≥ 0 and all probability measures μ on E,

‖P_μ(X_t ∈ · | t < τ_∂) − α‖_TV ≤ C e^{−ct}.   (13)

The conclusion of Theorem 4.3 is equivalent to Condition (A) in [3], which has several implications. For instance, e^{λ_0 t} P_x(t < τ_∂) converges when t → +∞, uniformly in x, to a positive, bounded eigenfunction g of L with Dirichlet boundary condition at the absorbing boundary, for the eigenvalue −λ_0, characterized by the relation P_α(t < τ_∂) = e^{−λ_0 t} for all t ≥ 0 [3, Proposition 2.3]. Moreover, it implies a spectral gap property [3, Corollary 2.4], the existence and exponential ergodicity of the so-called Q-process, defined as the process X conditioned to never hit the absorbing part of the boundary [3, Theorem 3.1], and a conditional ergodic property [4].
Note that the assumption that the constants m_j are all equal to 1 may be relaxed by applying a linear scaling of the coordinates, possibly different for each coordinate. Such a scaling has no impact on the required assumptions on the coefficients of the diffusion.
Proof of Theorem 4.3. We start by considering the case where b_i(x) = 0 whenever x_i = 0. We shall extend our result, in a second step, to the general case.
We extend the definition of the coefficients a and b to E′ := {x ∈ ℝ^d : |x| ∈ E}, where |x| is defined as the vector (|x_1|, …, |x_d|), by symmetry over the hyperplanes x_i = 0, i ∈ {1, …, d}: more precisely, for all x ∈ E, we set a(y) = a(x) if |y| = x, and b_i(y) = sign(y_i) b_i(x) for all i = 1, …, d. Since b_i(x) = 0 whenever x_i = 0, the extended coefficients are also Hölder continuous on E′. Hence, we can define a diffusion process Y evolving in E′ with infinitesimal generator (12), absorbed at the boundary of E′. It follows from standard properties of diffusions with normal reflection on hyperplanes that the process (|Y_t^1|, …, |Y_t^d|)_{t≥0} has the same law as the process X defined in the statement of Theorem 4.3. In the remainder of the proof, we show that (13) holds true for Y, so that it holds true for X.
Since L is assumed to be elliptic, there exist two constants k̄ ≥ k > 0 such that k̄‖x‖² ≥ ⟨x, a(x)x⟩ ≥ k‖x‖² for all x ∈ E′. In order to prove that Condition (A) of Champagnat and Villemonais [3] holds true, we use the Lyapunov-type criterion proved in [5, Proposition 2.7], with the functions (1 − ⟨εv_j, x⟩)^a, where εv_j is intended as the componentwise product of ε ∈ {−1, 1}^d and v_j, and where v̄ = sup_{1≤j≤r, 1≤i≤d} v_{j,i} and v̲ = inf_{i,j : v_{j,i}>0} v_{j,i}. Note that both V and u are smooth functions on E′. Assumptions 2 and 4 in [5] are satisfied because L is, in particular, locally elliptic with Hölder coefficients (the arguments are detailed in [5, Section 4]). Assumption 3 in [5] is an easy consequence of the boundedness of E′ and of the uniform ellipticity of L. One easily checks by standard arguments (see for instance [2, Section 3]) that, for all t_0 > 0, there exists a constant A > 0 such that the corresponding bound holds. Since d(x, ∂E′) = min_{ε∈{−1,1}^d} min_{1≤j≤r} (1 − ⟨εv_j, x⟩), we deduce that (1 − ⟨εv_j, x⟩)^{1/2} ≤ d r A V(x), which is Condition (2.9) of Assumption 1 in [5]. Hence, it remains to prove that there exist a compact set K ⊂ E′ and positive constants δ, c, c′, c″ > 0 such that, for all x ∈ E′, the drift conditions (14) hold. Indeed, by [5, Proposition 2.7], this implies that Assumption 1 in [5] is satisfied, so that (13) holds. Now,

‖v_j‖² (1 − ⟨εv_j, x⟩)² ≤ 2^d d r v̄ max_{1≤j≤r} |1 − ⟨v_j, |x|⟩|².

Let K′ = {x ∈ E′ : ∀j ∈ {1, …, r}, ⟨v_j, x⟩ ≤ 1/2}. Our goal is to prove that ⟨r, f⟩(x) ≥ C′ ‖f(x)‖² for all x ∉ K′, for an appropriate constant C′. We can assume without loss of generality that x ≥ 0, meaning that all its coordinates are nonnegative. We have a bound for all 1 ≤ i ≤ d in terms of x^{(i)}, where x^{(i)} denotes the vector of ℝ^{d−1} obtained from x by suppressing its i-th coordinate. Let j_0 be such that ⟨v_{j_0}, x⟩ is maximal. In particular, it is larger than 1/2, and there exists i_0 such that v_{j_0 i_0} x_{i_0} ≥ 1/2d. This yields the desired lower bound on ⟨r, f⟩(x).

So, by definition of a,

Lu(x) ≥ a u(x) (‖f(x)‖² − √d ‖b‖_∞ ‖f(x)‖).

But ‖f‖ converges to +∞ when x → ∂E′, so that the first part of (14) holds true.

Moreover,

LV(x) + V(x)^{1+δ} ≤ −(bk/(4r2^d)) V(x) ‖f(x)‖² + B + 1,

where the right-hand side is non-positive in a vicinity of the boundary ∂E′, since V(x) ≥ d(x, ∂E′) as was proved above. Since u is uniformly bounded from below by a positive constant on any compact subset of E′ and is positive on E′, we deduce that LV(x) + V(x)^{1+ε}/u(x)^ε is smaller than c″ u(x) for some constant c″ > 0. As a consequence, the right-hand side of (14) holds true, which concludes the proof in the case where b_i(x) = 0 whenever x_i = 0.
It only remains to extend the last result to the cases where b_i(x) does not vanish when x_i = 0. Since, by symmetry, the functions V and u satisfy the Neumann boundary condition on the reflecting boundary of E, they both belong to the domain of the generator of the process X; hence we can reproduce the computations above, which are actually valid for any bounded measurable b, and conclude using the same criterion. This concludes the proof of Theorem 4.3. □
We can deduce from (13) estimates similar to those of Proposition 4.1, with c playing the role of λ_1 − λ_0, and with the additional difficulty that we cannot obtain pointwise estimates on g and α, but only estimates on their mean values over small balls. Again, since g is bounded, these estimates are good when λ_0 ≪ c.

Proposition 4.4. For all t ≥ 0 and x ∈ E,

g(x) ≤ e^{λ_0 t} + C e^{−ct}   (16)

and, for all measurable A ⊂ E such that α(A) > 0,

(1/α(A)) ∫_A g(x) dα(x) ≥ e^{λ_0 t} + C e^{−ct} − (e^{λ_0 t} − 1 + C e^{−ct})/α(A).   (17)

In addition, for all measurable B ⊂ E such that α(B) > 0, we define the probability measure α_{|B} as 1_{x∈B} dα(x)/α(B). Then, for all such B, all t ≥ 0 and all measurable A ⊂ E, we have

|P_{α_{|B}}(X_t ∈ A) − α(A)| ≤ (α(A)/α(B)) (1 − e^{−λ_0 t} + C e^{−(λ_0+c)t}) + C e^{−(λ_0+c)t} (‖g‖_∞ + 2α(A) + C e^{−ct}).

Proof. As explained above, the property (13) implies (see [3, Proposition 2.3]) that

|e^{λ_0 t} P_x(t < τ_∂) − g(x)| ≤ C′ e^{−c′t},   (19)

for constants C′ and c′ that can be assumed, without loss of generality, equal to the constants C and c of (13). In addition, ∫_E g(x) dα(x) = 1. We first deduce from (19) the inequality (16). Combining this with α(E) = 1, we get

∫_A g(x) dα(x) = 1 − ∫_{E\A} g(x) dα(x) ≥ 1 − (e^{λ_0 t} + C e^{−ct}) α(E \ A) = 1 − e^{λ_0 t} − C e^{−ct} + α(A)(e^{λ_0 t} + C e^{−ct}).

This entails (17).

Simulations
In this section, we focus on the examples presented in Sections 2.1 and 2.2. As an illustration of the different models studied above, we present simulation results for the density of the QSD, as well as the value of λ_0, for a diffusion model of two colliding stacks and for a discrete state-space model of a 2-dimensional banker algorithm. The numerical method applies to general models; we restrict ourselves to the case of two stacks and two consumers for pedagogical reasons.
We use a specific class of particle systems with singular mean-field interaction which arises in the study of distributions of absorbed Markov processes conditioned to non-absorption [8,9,27]. In these systems, particles move independently following the Markovian dynamics of the underlying process (here, colliding stacks or banker algorithms) until one gets absorbed, in which case it is immediately sent to the position of another particle, chosen uniformly at random. This method overcomes the unavoidable degeneracy of classical Monte Carlo techniques in the setting of absorbed processes, by maintaining a constant number of significant particles. It is known in general that this method approximates the conditional distributions of the underlying Markov process in the limit of infinitely many particles [8,9,28,29,30]. In our case, since we want to approximate the quasi-stationary distribution, we need to simulate the associated mean-field particle process for a sufficiently long time. In practice, we compute the ergodic mean of the simulated system and stop the simulation when its variation goes below a threshold.
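In a discrete-time setting, the particle scheme just described can be sketched in a few lines. The underlying chain below (a lazy walk on {0, …, 9}, reflected at 0 and absorbed at 10) is ours for illustration; the empirical occupation measure approximates the QSD, and the absorption frequency plays the role of the absorption rate:

```python
import random
from collections import Counter

def fleming_viot(step, absorbed, init, n=200, horizon=2000, rng=None):
    """Mean-field particle approximation of the QSD: n copies of the
    chain evolve independently; when a particle is absorbed it jumps
    to the current position of a uniformly chosen particle.  Returns
    the empirical occupation measure and the average absorption rate."""
    rng = rng or random.Random()
    xs = [init] * n
    hits = 0
    occupation = Counter()
    for _ in range(horizon):
        for i in range(n):
            y = step(xs[i], rng)
            if absorbed(y):
                hits += 1
                y = xs[rng.randrange(n)]      # resample among the particles
            xs[i] = y
        occupation.update(xs)
    return occupation, hits / (n * horizon)

def step(x, rng):                             # lazy walk, reflected at 0
    move = rng.choice([-1, 0, 1])
    return max(x + move, 0)

occ, rate = fleming_viot(step, lambda x: x >= 10, init=0,
                         rng=random.Random(42))
```

Here rate estimates the per-step absorption probability under the QSD, the discrete-time analogue of λ_0.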
In this setting, the eigenvalue λ_0 is obtained as the average rate of absorption of particles. This approximation relies on the unbiased estimator introduced in the proof of Theorem 2.1 of Villemonais [31].

Simulations for two colliding stacks
We simulate a diffusive model of two colliding stacks, given by the solution X_t = (X_t^{(1)}, X_t^{(2)}) of a stochastic differential equation driven by two independent Brownian motions B^{(1)} and B^{(2)}, with a correlation parameter ρ ≥ 0. The process is killed when X_t^{(1)} + X_t^{(2)} = 1 and reflected when X_t^{(1)} = 0 or X_t^{(2)} = 0. The case ρ = 0 corresponds to two independent Brownian motions, one for each coordinate; the parameter ρ governs the correlation between the two coordinates.
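The precise noise structure below is our assumption, chosen so that ρ interpolates between independent coordinates (ρ = 0) and identical coordinates (ρ = 1), consistently with the description above. A reflected Euler scheme then gives the deadlock time of one path:

```python
import math, random

def deadlock_time(rho, dt=1e-3, x0=0.25, rng=None):
    """One Euler path of a two-dimensional diffusion with correlated
    Brownian noise, normally reflected at x = 0 and y = 0 (via absolute
    values) and killed on the line x + y = 1.  With rho = 1 both
    coordinates receive the same noise, so x = y along the whole path."""
    rng = rng or random.Random()
    x = y = x0
    t, s = 0.0, math.sqrt(dt)
    while x + y < 1.0:
        g1, g2 = rng.gauss(0.0, s), rng.gauss(0.0, s)
        x = abs(x + g1)                                  # reflection at 0
        y = abs(y + rho * g1 + math.sqrt(1.0 - rho * rho) * g2)
        t += dt
    return t

t_deadlock = deadlock_time(0.5, rng=random.Random(7))
```

Averaging the absorption behavior over many such paths, or using the particle system described above, gives access to λ_0; larger ρ should produce faster deadlocks, consistently with the values of λ_0 reported below.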
The numerical results are presented in Figure 1, where the density of the QSD is plotted for different values of ρ. We observe that small values of ρ have little influence on the density of the QSD (Figure 1(a) and (b)). Larger values of ρ tend to concentrate the density close to the line x = y, due to the stronger correlation between the two coordinates (with the limiting case ρ = 1, where X^{(1)} = X^{(2)}). In all simulations, the density of the QSD vanishes at the absorbing boundary of the domain, as expected, and decreases with respect to X^{(1)} and X^{(2)}. Hence the largest density is obtained at the point (0, 0), farthest from the absorbing boundary.
The computed eigenvalues λ_0 increase with ρ, meaning that the deadlock of the system occurs faster for larger values of ρ. This is due to the stronger correlation between the two coordinates, which makes the process move preferentially, and faster, in the direction orthogonal to the absorbing boundary. In the extreme case ρ = 1, the process only moves in this direction and does not explore the major part of the domain. Although the densities of the QSD for ρ = 0 and ρ = 0.5 are quite similar, the values of λ_0 are significantly different.

Simulations for the banker algorithm in the case of two consumers
We consider here a Markov chain on ℕ², namely a random walk X_n = (X_n^{(1)}, X_n^{(2)}) moving from a state (i, j) to neighboring states such as (i+1, j−1), (i−1, j+1) and (i−1, j−1), each with probability 1/6, with additional killing of the process when it reaches X_n^{(1)} + X_n^{(2)} = 100, and with various schemes of reflection. In all cases, the first (resp. second) coordinate of the process is reflected when X_n^{(1)} = 0 (resp. X_n^{(2)} = 0). In addition, reflection occurs when X_n^{(1)} or X_n^{(2)} reaches positive thresholds m_1 and m_2, which vary in the following simulations. Note that if m_1 ≥ 100 and m_2 ≥ 100, we are back to a model of two colliding stacks. We also allow the process to remain at the same position, in order to avoid periodicity problems. The numerical results are presented in Figure 2, where the QSD is plotted for different values of m_1 and m_2, corresponding to various shapes of the domain. We observe again that the QSD vanishes at the absorbing boundary and increases with the distance to this boundary. In all three simulations, we observe positive values of the QSD at the reflecting boundaries.
The computed eigenvalues λ_0 depend both on the shape of the domain of the process (the larger the domain, the smaller λ_0 should be) and on the size of the absorbing part of the boundary (the larger it is, the larger λ_0 should be). Hence, the dependence of λ_0 on m_1 and m_2 is non-trivial. For example, when comparing Figure 2(c) to (a), both the domain and the absorbing boundary are larger. In this case, this produces a larger value of λ_0, hence a faster deadlock, although the amount of available resources is larger.