P_Count(t).(4) ⟹ P_Count(t + 1)
If this were not the case, S_x would belong to {Holder(t), ..., S_i} and, since P_HolderMonitored(t) ensures that Holder(t) ∈ D(S_i, t), S_x would belong to D(S_i, t) too. However, as S_i called the function UseBackup, S_x would have to be faulty, which contradicts the assumption that S_x is non-faulty.
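The set-membership step above relies on detection sets being contiguous segments of ring predecessors. A minimal sketch (function and variable names are our own, not the paper's pseudocode) of why membership of one site forces membership of every site between it and the owner of the set:

```python
# Toy model (hypothetical naming): on a ring of n sites, a detection set
# D(S_j) of `length` sites is the set of continuous predecessors of S_j.
# If some site S_i belongs to D(S_j), every site between S_i and S_j on
# the ring belongs to D(S_j) as well.

def detection_set(j, length, n):
    """Continuous ring predecessors of site j: S_{j-1}, ..., S_{j-length}."""
    return {(j - k) % n for k in range(1, length + 1)}

n = 6
d = detection_set(4, 3, n)   # D(S_4) = {S_3, S_2, S_1}
assert d == {1, 2, 3}
# S_1 in D(S_4) implies the intermediate sites S_2 and S_3 are in it too.
assert 1 in d and 2 in d and 3 in d
```

Because the set is a contiguous arc ending just before S_j, membership is closed under moving forward along the ring toward S_j, which is exactly what the contradiction argument exploits.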
it updates its count such that count(S_i, t + 1) = count(S_i, t) + #(D(S_i, t)) − 1 (line 1). Hence, as shown in Figure 7, we can include S_i in {Holder(t), ..., S_x} in order to exploit this increment. Thus, we prove that the function UseBackup preserves the counter property on all sites: P_Count(t).(3) ⟹ P_Count(t + 1).
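The counter update performed by UseBackup can be sketched as follows (a hedged illustration with our own naming, mirroring only the formula above, not the paper's full pseudocode):

```python
# Hypothetical sketch of the UseBackup counter update: a site that
# regenerates the token after detecting the holder's failure advances
# the counter by #(D(S_i, t)) - 1, accounting for the lost increments
# of the faulty sites in its detection set, so that P_Count is preserved.

def use_backup_count(count, detection_set):
    """Return count(S_i, t + 1) for a site S_i calling UseBackup.

    count         -- count(S_i, t), the value S_i last observed
    detection_set -- D(S_i, t), the monitored predecessor sites
    """
    # count(S_i, t + 1) = count(S_i, t) + #(D(S_i, t)) - 1
    return count + len(detection_set) - 1

# Example: S_i observed count 7 and monitors three predecessors.
assert use_backup_count(7, {"S1", "S2", "S3"}) == 9
```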
At t + 1, site S_i can call one of the three functions: 1. SafeReceiveToken(TOKEN, S_T, count_recv): since P_HolderMonitored(t) is true, Holder(t)
S_i in the ring: Holder(t + 1) = S_{i+1}. Since a non-empty D of a non-faulty site S_j is always composed of the set of continuous predecessors of S_j in the ring, and since P_HolderMonitored(t) ensures that Holder(t) = S_i belongs to this set, S_{i+1} belongs to every non-empty set D(S_j) such that S_j ≠ S_i (notice that in line 1, S_i empties its D). Furthermore, for
To prove it, we must show that S_i = Holder(t + 1) (Lemma 1) belongs to every non-empty detection set. We can easily prove it by contradiction. Let us suppose that there exists one non-faulty site S_x with a non-empty detection set
Based on the previous lemmas, we can easily prove by induction that the three properties P_SafetyCond, P_Count, and P_HolderMonitored are always verified. At t = 0, the function Initialisation is executed and the three properties are thus verified:
• P_SafetyCond(0): S_0 is the only token holder (line 1) and there exists no pending TOKEN message
• P_Count(0): all sites have the same count value (line 1)
• P_HolderMonitored(0): each node with a non-empty detection set monitors the token
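As a hedged sketch of the induction base (function and variable names are our own assumptions, not the paper's pseudocode), the t = 0 state installed by Initialisation can be rendered as:

```python
# Hypothetical sketch of the state at t = 0: S_0 is the only token
# holder, every site starts with the same count value, and all detection
# sets are empty, so P_HolderMonitored(0) holds trivially (no site has a
# non-empty detection set that would oblige it to monitor the holder).

def initialisation(n):
    holder = 0                             # P_SafetyCond(0): only S_0 holds the token
    count = [0] * n                        # P_Count(0): identical count values
    detection = [set() for _ in range(n)]  # no site monitors anything yet
    return holder, count, detection

holder, count, detection = initialisation(4)
assert holder == 0
assert len(set(count)) == 1                  # all counts equal
assert all(d == set() for d in detection)    # every detection set empty
```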
S_{i+f+1} will eventually receive the message and will not discard it. Line 1 will thus set the D of S_{i+f+1} to {S_{i+1}, ..., S_{i+f+1}}. By assumption, the failures of sites {S_{i+1}, ..., S_{i+f}} are (resp. will be) detected, which will result in the calling of the UseBackup function of line 1 (resp. line 1).
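This scenario can be illustrated with a toy sketch (our own assumptions and naming, not the paper's pseudocode): the token is sent past f faulty successors, and the first live site records the skipped arc in its detection set so that the failures it contains can later trigger UseBackup.

```python
# Toy sketch: S_i sends the token; its f immediate successors are faulty;
# S_{i+f+1} is the first live site to accept it. On reception it installs
# the detection set {S_{i+1}, ..., S_{i+f+1}}: all sites strictly after
# the sender up to and including itself, modulo the ring size.

def installed_detection_set(sender, receiver, n):
    """Detection set installed by `receiver` for a token sent by `sender`."""
    d = set()
    k = (sender + 1) % n
    while True:
        d.add(k)
        if k == receiver:
            return d
        k = (k + 1) % n

n, i, f = 8, 2, 3             # S_2 holds the token; S_3, S_4, S_5 are faulty
receiver = (i + f + 1) % n    # S_6 is the first live successor
assert installed_detection_set(i, receiver, n) == {3, 4, 5, 6}
```

Once the failures of S_3, S_4, S_5 are detected, the non-empty detection set of S_6 is what entitles it to call UseBackup and regenerate the token.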