A Theory of Integrating Tamper Evidence with Stabilization

Abstract. We propose the notion of tamper-evident stabilization, which combines stabilization with the concept of tamper evidence, for computing systems. At first glance, these notions are contradictory; stabilization requires that eventually the system functionality is fully restored, whereas tamper evidence requires that the system functionality is permanently degraded in the event of tampering. Tamper-evident stabilization captures the intuition that the system will tolerate perturbation up to a limit. In the event that it is perturbed beyond that limit, it will exhibit permanent evidence of tampering, and it may provide reduced (possibly no) functionality. We compare tamper-evident stabilization with (conventional) stabilization and with active stabilization, and propose an approach to verify tamper-evident stabilizing programs in polynomial time. We demonstrate tamper-evident stabilization with two examples and argue how approaches for designing stabilization can be used to design tamper-evident stabilization. We also study issues of composition in tamper-evident stabilization. Finally, we point out how tamper-evident stabilization can effectively be used to provide a tradeoff between fault prevention and fault tolerance.

This paper focuses on tamper-resistant systems that also stabilize. A tamper-resistant system ensures that an effort to tamper with the system makes the system less useful/inoperable (e.g., by zeroing out sensitive data in a chip or voiding the warranty). The notion of tamper resistance is contradictory to the notion of stabilization in that stabilization requires that, in spite of any possible tampering, the system eventually regains its usefulness.
Intuitively, the notion of tamper-evident stabilization is based on the observation that all tamper-resistant systems tolerate some level of tampering without making the system less useful/inoperable. For example, a tamper-resistant chip may have circuitry that performs some rudimentary checks on the input and discards the input if a check fails. A communication protocol may use a CRC to ensure that most random bit flips in a message are tolerated without affecting the system. However, if the tampering goes beyond an acceptable level, the system becomes less useful/inoperable. Based on this intuition, we observe that a tamper-evident stabilizing system will recover to its legitimate states if its perturbation is within an acceptable limit. However, if it is perturbed outside this boundary, it will make itself inoperable. Moreover, once the system enters the mode of making itself inoperable, it is necessary that this process cannot be prevented.
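As a concrete illustration of tolerating corruption up to a limit, the following sketch shows the CRC idea mentioned above: a receiver discards any message whose checksum fails, so random bit flips are detected and dropped rather than silently corrupting the state. The framing helpers are hypothetical, not part of any protocol discussed in this paper.

```python
import zlib

def frame(payload: bytes) -> bytes:
    # Append a CRC-32 checksum so the receiver can detect corruption.
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def receive(data: bytes):
    # Return the payload if the checksum matches, None otherwise
    # (the "discard the input if the check fails" behavior above).
    payload, crc = data[:-4], int.from_bytes(data[-4:], "big")
    return payload if zlib.crc32(payload) == crc else None

msg = frame(b"token=42")
corrupted = bytes([msg[0] ^ 0x01]) + msg[1:]   # flip one bit
assert receive(msg) == b"token=42"
assert receive(corrupted) is None
```

A single bit flip is caught by the checksum; the corrupted message is simply discarded, leaving the system state unaffected.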
Thus, if the system is outside its normal legitimate states, it is in one of two modes: recovery mode, where it is trying to restore itself to a legitimate state, or tamper-evident mode, where it is trying to make itself inoperable. The recovery mode is similar to the typical stabilizing systems in that the recovery should be guaranteed after external perturbations stop. However, in the tamper-evident mode, it is essential that the system makes itself inoperable even if outside perturbations continue.
To realize the last requirement, we need to make certain assumptions about which external perturbations can be performed during the tamper-evident mode. For example, if these perturbations could restore the system to a legitimate state then designing tamper-evident stabilizing systems would be impossible. Hence, we view the system execution as consisting of (1) program executions in the absence of faults and the adversary; (2) program executions in the presence of faults; and (3) program executions in the presence of the adversary.
Faults are random events that perturb the system rarely and in arbitrary ways. By contrast, the adversary actively tries to prevent the system from making itself inoperable. However, unlike faults, the adversary may not be able to perturb the system to an arbitrary state. Also, unlike faults, the adversary may continue to execute forever. Even if the adversary executes forever, it is necessary that system actions receive some fairness during execution. Hence, we assume that the system can make some number of steps between two steps of the adversary (captured by the parameter k, strictly greater than 1, in our formal definitions).
The contributions of the paper are as follows. We formally define the notion of tamper-evident stabilization; compare it with (conventional) stabilization and with active stabilization, where a system stabilizes in spite of the interference of an adversary [7]; analyze the cost of automated verification of tamper-evident stabilization; present theorems about composing tamper-evident stabilizing systems; and identify how methods for designing stabilizing programs can be used in designing tamper-evident stabilizing systems, along with potential obstacles in using those methods. Finally, we identify potential applications of tamper-evident stabilization and illustrate them with two examples.
Organization. The rest of the paper is organized as follows. In Section 2, we present preliminary concepts on stabilization. We introduce the notion of tamper-evident stabilization, illustrate it with two examples, and compare it with (conventional) stabilization and active stabilization in Section 3. Section 4 presents an algorithm for automatic verification of tamper-evident stabilizing programs. We evaluate the composition of tamper-evident stabilizing systems in Section 5 and discuss a design methodology for tamper-evident stabilizing programs in Section 6. The relationship between tamper-evident stabilization and other stabilization techniques is discussed in Section 7, and finally, Section 8 concludes the paper.

Preliminaries
Our program modeling utilizes the standard approach for defining interleaving programs, stabilization [3,11,12], and active stabilization [7]. A program includes a finite set of variables, each with a finite domain (or a finite abstraction of an infinite-state system). It also includes guarded commands (a.k.a. actions) [11] that update those program variables atomically. Since these internal variables are not needed in the definitions in this section, we describe a program in terms of its state space S p and its transitions δ p ⊆ S p × S p , where S p is obtained by assigning each variable of p a value from its domain.

Definition 1 (Program).
A program p is of the form ⟨S p , δ p ⟩ where S p is the state space of program p and δ p ⊆ S p × S p .

Definition 2 (State Predicate).
A state predicate of p is any subset of S p .

Definition 3 (Computation).
Let p be a program with state space S p and transitions δ p . We say that a sequence ⟨s 0 , s 1 , s 2 , ...⟩ is a computation of p iff
- ∀j ≥ 0 :: s j ∈ S p , and
- ∀j ≥ 0 :: (s j , s j+1 ) ∈ δ p .

Definition 4 (Closure). A state predicate S of p is closed in p iff ∀s 0 , s 1 :: (s 0 ∈ S ∧ (s 0 , s 1 ) ∈ δ p ) ⇒ s 1 ∈ S.

Definition 5 (Invariant). A state predicate S is an invariant of p iff S is closed in p.
Remark 1. Normally, the definition of invariant (legitimate states) also includes a requirement that computations of p that start from an invariant state are correct with respect to its specification. The theory of tamper-evident stabilization is independent of the behaviors of the program inside legitimate states; it only focuses on the behavior of p outside its legitimate states. We have defined the invariant in terms of the closure property alone since it is the only property relevant to the definitions, theorems, and examples in this paper.

Definition 6 (Convergence). Let p be a program with state space S p and transitions δ p . Let S and T be state predicates of p. We say that T converges to S in p iff
- S ⊆ T ,
- S and T are closed in p, and
- every computation of p that starts from a state in T contains a state in S.

Definition 7 (Stabilization).
Let p be a program with state space S p and transitions δ p . We say that program p is stabilizing for invariant S iff S p converges to S in p.
Using the approach in [7,15], we define the adversary as follows and define the notion of tamper-evident stabilization with respect to the capabilities of the given adversary in Section 3.

Definition 8 (Adversary).
We define an adversary for program p = ⟨S p , δ p ⟩ to be a subset of S p × S p .
Next, we define a computation of the program, say p, in the presence of the adversary, say adv.

Definition 9 (⟨p, adv, k⟩-computation). Let p be a program with state space S p and transitions δ p . Let adv be an adversary for program p and let k be an integer greater than 1. We say that a sequence ⟨s 0 , s 1 , s 2 , ...⟩ is a ⟨p, adv, k⟩-computation iff
- ∀j ≥ 0 :: s j ∈ S p , and
- ∀j ≥ 0 :: (s j , s j+1 ) ∈ δ p ∪ adv, and
- ∀j ≥ 0 :: ((s j , s j+1 ) ̸∈ δ p ) ⇒ (∀l | j < l < j + k :: (s l , s l+1 ) ∈ δ p ).

Observe that a ⟨p, adv, k⟩-computation guarantees that there are at least k − 1 program transitions between any two adversary transitions, for k > 1. Moreover, the adversary is not required to execute in a ⟨p, adv, k⟩-computation.
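The three conditions of Definition 9 can be checked mechanically on a finite prefix of an execution. The following sketch is one way to do so; the helper name is hypothetical, and transitions are represented as sets of state pairs.

```python
def is_p_adv_k_prefix(seq, delta_p, adv, k):
    """Check a finite prefix against Definition 9: every step is a program
    or adversary transition, and each adversary step is followed by at
    least k - 1 program steps (within the prefix)."""
    for j in range(len(seq) - 1):
        step = (seq[j], seq[j + 1])
        if step not in delta_p and step not in adv:
            return False
        if step not in delta_p:  # adversary step at position j
            for l in range(j + 1, min(j + k, len(seq) - 1)):
                if (seq[l], seq[l + 1]) not in delta_p:
                    return False
    return True

delta_p = {(0, 1), (1, 2), (2, 0)}
adv = {(2, 1)}
assert is_p_adv_k_prefix([0, 1, 2, 1, 2, 0], delta_p, adv, k=2)
assert not is_p_adv_k_prefix([2, 1, 1], delta_p, adv, k=2)
```

In the first sequence the adversary step (2, 1) is immediately followed by a program step, as k = 2 requires; the second sequence contains the pair (1, 1), which is neither a program nor an adversary transition.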

Remark 2 (Fairness among program transitions).
The above definition and Definition 3 only consider fairness between program actions and adversary actions. If a program requires fairness among its own actions to ensure stabilization, these definitions can be strengthened accordingly. For reasons of space, this issue is outside the scope of this paper.

Definition 10 (Convergence in the presence of adversary).
Let p be a program with state space S p and transitions δ p . Let S and T be state predicates of p. Let adv be an adversary for p and let k be an integer greater than 1. We say that T ⟨adv, k⟩-converges to S in p iff
- S ⊆ T ,
- S and T are closed in δ p ∪ adv, and
- every ⟨p, adv, k⟩-computation that starts from a state in T contains a state in S.

Definition 11 (Active stabilization). Let p be a program with state space S p and transitions δ p . Let adv be an adversary for program p and let k be an integer greater than 1. We say that program p is k-active stabilizing with adversary adv for invariant S iff S p ⟨adv, k⟩-converges to S in p.

Tamper-Evident Stabilization
This section defines the notion of tamper-evident stabilization, illustrates it in the context of two examples, and compares it with the notion of (conventional) stabilization and active stabilization.

The Definition of Tamper-Evident Stabilization
In this section, we define the notion of tamper-evident stabilization.

Definition 12 (Tamper-evident stabilization).
Let p be a program with state space S p and transitions δ p . Let adv be an adversary for program p. And, let k be an integer greater than 1. We say that program p is k-tamper-evident stabilizing with adversary adv for invariants ⟨S1, S2⟩ iff there exists a state predicate T of p such that
- T converges to S1 in p, and
- ¬T ⟨adv, k⟩-converges to S2 in p.
From the above definition (especially the closure of T and ¬T ), it follows that S1 and S2 must be disjoint (see Figure 1(a)). In addition, tamper-evident stabilization provides no guarantees about program behaviors if the adversary executes in T .

Remark 3.
Observe that in the above definition k must be greater than 1, as k = 1 allows the adversary to prevent the program from executing entirely. Among the permitted values of k, k = 2 provides the maximum power to the adversary. Hence, in most cases in this paper we consider k = 2 and omit the value of k; in other words, tamper-evident stabilizing is the same as 2-tamper-evident stabilizing.

Also observe that, by closure of T , S1 should be a subset of T . Given this constraint, if S1 = T then the program corresponds to a pure tamper-evident system: if such a system is perturbed to a non-legitimate state, it is guaranteed to recover to S2 even in the presence of an adversary. And if T = S p , then it corresponds to a stabilizing program (cf. Theorem 3). Thus, tamper-evident stabilization captures a range of systems, from those that are purely tamper-evident to those that are purely stabilizing.
The notion of tamper-evident stabilization prescribes the behavior of the program from all possible states. In this respect, it is similar to the notion of stabilizing fault tolerance. In [3], the authors introduce the notion of nonmasking fault tolerance, which prescribes behaviors only in a subset of states. We can extend the notion of tamper-evident stabilization in a similar manner; we do so by simply overloading the definition of tamper-evident stabilization.

Definition 13 (Tamper-evident stabilization in environment U ).
Let p be a program with state space S p and transitions δ p . Let adv be an adversary for program p, and let U be a state predicate. Moreover, let k be an integer greater than 1. We say that program p is k-tamper-evident stabilizing with adversary adv for invariants ⟨S1, S2⟩ in environment U iff there exists a state predicate T such that
- S1, S2, and T are subsets of U ,
- T converges to S1 in p, and
- (U ∧ ¬T ) ⟨adv, k⟩-converges to S2 in p.

Observe that if U equals true then the above definition is identical to Definition 12.

The Token Ring Program
This section describes the well-known token ring program [10] and then shows that this program is tamper-evident stabilizing.

The program consists of N processes arranged in a ring. Each process j, 0 ≤ j ≤ N −1, has a variable x.j with the domain {0, 1, · · · , N −1}. To model the impact of adversary actions on a process j, we add an auxiliary variable up.j, where process j has failed iff up.j is false. We say that a process j, 1 ≤ j ≤ N −1, has the token iff processes j and j −1 have not failed and x.j ̸= x.(j −1). If process j, 1 ≤ j ≤ N −1, has the token then it copies the value of x.(j −1) to x.j. Process 0 has the token iff processes 0 and N −1 have not failed and x.(N −1) = x.0. If process 0 has the token then it increments its value in modulo-N arithmetic (we denote modulo-N addition by + N ). Thus, the actions of the program are as follows:

T R1 : up.j ∧ up.(j −1) ∧ x.j ̸= x.(j −1) −→ x.j := x.(j −1)  (for 1 ≤ j ≤ N −1)
T R2 : up.0 ∧ up.(N −1) ∧ x.(N −1) = x.0 −→ x.0 := x.0 + N 1

Adversary action. The adversary can cause any process to fail. Hence, the adversary action can be represented as

T R adv : up.j −→ up.j := false

Tamper-evident stabilization of the program. To show that the token ring program T R is tamper-evident stabilizing in the presence of the adversary T R adv , we define the predicate T tr as the set of states in which no process has failed (i.e., ∀j :: up.j), the invariant S1 tr as the subset of T tr in which exactly one process has the token, and the invariant S2 tr as the set of states in which some process has failed and no process has the token.

Theorem 1. The token ring program T R is tamper-evident stabilizing with adversary T R adv for invariants ⟨S1 tr , S2 tr ⟩.
Proof. If T tr is true then the program is essentially the same as the token ring program from [11] and, hence, it stabilizes to S1 tr . If T tr is violated then the token cannot go past the failed process(es); hence, S2 tr is eventually satisfied. Note that, for the second constraint, the adversary action (which may fail a process) cannot prevent the program from reaching S2 tr . ⊓⊔
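The behavior established by Theorem 1 can be observed on a small simulation of the ring. This is a sketch under the assumption that the adversary fails exactly one process; the variable names follow the description above.

```python
N = 4
x = [0] * N          # x.j for each process
up = [True] * N      # up.j models whether process j has failed

def has_token(j):
    if j == 0:
        return up[0] and up[N - 1] and x[N - 1] == x[0]
    return up[j] and up[j - 1] and x[j] != x[j - 1]

def step():
    # Execute one enabled token-ring action, if any.
    for j in range(N):
        if has_token(j):
            x[j] = (x[0] + 1) % N if j == 0 else x[j - 1]
            return True
    return False

for _ in range(20):                 # intact ring: the token keeps circulating
    assert step()

up[2] = False                       # adversary action: fail process 2
while step():                       # tokens cannot pass the failed process,
    pass                            # so all activity eventually ceases
assert not any(has_token(j) for j in range(N))
```

With all processes up, some process always holds a token, so the loop of twenty steps never stalls; after the adversary fails a process, the token is absorbed and the ring reaches a state in which no process holds a token.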

Tamper-Evident Stabilizing Traffic Controller Program
This section describes another tamper-evident stabilizing program: a traffic light controller that (1) recovers to normal operation from perturbations that do not cause the system to reach an unsafe state, and (2) permanently preserves the evidence of tampering if perturbations cause the system to reach an unsafe state. This example also illustrates why tamper-evident stabilization is desirable over (conventional) stabilization in some circumstances. Moreover, it can be used as a part of multiphase recovery [6], where quick recovery to safe states is provided first and complete recovery to legitimate states is obtained later (possibly with human intervention).
Description of the program. In this program, we have an intersection with two one-way roads [5]. Each road is associated with a signal that can be either green (G), yellow (Y ), red (R), or flashing (F ). As expected, in any normal state, at least one of the signals should be red to ensure that traffic accidents do not occur. If such a system is perturbed by an adversary where an adversary can somehow affect the signal operation causing safety violations then it is crucial that such an occurrence is noted for potential investigation. (These adversary actions can be triggered with simple transient faults that reset clock variables. For simplicity, we omit the cause of such adversary actions and only consider their effects.) In this example, we consider the requirement that if both signals are simultaneously yellow or green then the system must reach a state where both signals are flashing to indicate a signal malfunction due to adversary.
The program consists of two variables, sig 0 and sig 1 , and five actions. The first two actions are responsible for normal operation, where a signal changes from G to Y to R and back to G. The third action considers the case where the system is perturbed outside its legitimate states (e.g., by transient faults) and it is desirable that the system recovers from that state. The fourth action considers the case where the adversary perturbs the system beyond an acceptable level and, hence, it is necessary that the system enters the tamper-evident state. (In these actions, j is instantiated to be either 0 or 1, and k is instantiated to be 1 − j.)

The fifth action notifies the user that the system is in S2.
Adversary actions. The adversary T C adv can cause a red signal to become either yellow or green. Hence, the adversary actions can be represented as (j = 0, 1):

T C adv : sig j = R −→ sig j := G | Y

Tamper-evident stabilization of the program. To show that the program T C is tamper-evident stabilizing in the presence of adversary T C adv , we define the predicate T tc as the set of states in which neither signal is flashing and at least one signal is red, the invariant S1 tc as the legitimate states of the traffic controller, and the invariant S2 tc as the set of states in which both signals are flashing.

Theorem 2. The traffic controller program T C is tamper-evident stabilizing with adversary T C adv for invariants ⟨S1 tc , S2 tc ⟩.
Proof. If T tc is true then the program is essentially the same as the traffic control program from [5] and, hence, it stabilizes to S1 tc . If the adversary T C adv violates T tc , the action T C4 can execute and one of the signals will be flashing. As a result, the other signal eventually becomes flashing and S2 tc is satisfied (see Figure 1(b)). ⊓⊔
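The flashing-propagation argument in the proof can be illustrated with a small simulation. The encoding below is an assumption on our part (the paper's actions TC1–TC5 are summarized rather than reproduced here); it only captures the tamper-evidence behavior used in the proof.

```python
G, Y, R, F = "G", "Y", "R", "F"
sig = [G, R]

def unsafe():
    # Both signals are simultaneously green or yellow.
    return all(s in (G, Y) for s in sig)

def step():
    # One round of the (assumed) tamper-evidence actions.
    for j in (0, 1):
        k = 1 - j
        if sig[k] == F and sig[j] != F:
            sig[j] = F              # spread the flashing evidence (cf. TC4/TC5)
        elif unsafe():
            sig[j] = F              # enter the tamper-evident mode

sig = [G, G]                        # adversary-induced unsafe state
step(); step()
assert sig == [F, F]                # evidence of tampering is permanent
```

Once a signal starts flashing, no action (and no adversary action, which only affects red signals) ever changes it back, so the state [F, F] is closed.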

Stabilization, Tamper-evident Stabilization, and Active Stabilization
In this section, we compare the notions of (conventional) stabilization, active stabilization, and tamper-evident stabilization. Specifically, Theorem 3 considers the case where p is stabilizing and evaluates whether it is tamper-evident stabilizing, and Theorem 4 considers the reverse direction. The relation with active stabilization follows trivially from these theorems.

Theorem 3.
If a program p is stabilizing for invariant S, then p is k-tamper-evident stabilizing with adversary adv for invariants ⟨S, ∅⟩, for any adversary adv and any k ≥ 2.
Proof. To prove tamper-evident stabilization, we need to identify a value of T . We set T = true, representing the entire state space of p. Now, we need to show that S p converges to S in p and that ¬true ⟨adv, k⟩-converges to ∅ in p. The former holds since p is stabilizing for invariant S, and the latter holds trivially since ¬true corresponds to the empty set. ⊓⊔

Corollary 1. If program p is k-active stabilizing with adversary adv for invariant S, where k ≥ 2, then p is k-tamper-evident stabilizing with adversary adv for invariants ⟨S, ∅⟩.
Note that if there exist k and adv such that program p is k-active stabilizing with adversary adv for invariant S, then p is stabilizing for invariant S.

Theorem 4. If program p = ⟨S p , δ p ⟩ is k-tamper-evident stabilizing with adversary adv for invariants ⟨S1, S2⟩, then p is stabilizing for invariant (S1 ∨ S2).
Proof. Since program p is tamper-evident stabilizing, the two constraints in the definition of tamper-evident stabilization hold. If program p starts from T , it converges to S1. If p starts from ¬T then, in the presence or absence of adversary adv, it converges to S2. This completes the proof. ⊓⊔
However, a similar result relating tamper-evident stabilization and active stabilization is not valid. In other words, it is possible to have a program p that is k-tamper-evident stabilizing with adversary adv for invariants ⟨S1, S2⟩ but is not k-active stabilizing with adversary adv for invariant (S1 ∨ S2). This is because, if the program begins in T , there is no guarantee that it recovers to S1 in the presence of the adversary.

Verification of Tamper-evident Stabilization
To prove tamper-evident stabilization of a given program, we need to determine the predicate T (from Definition 12). Based on Definition 12, from every state in ¬T , we must eventually reach a state in S2. Hence, from ¬T , we cannot reach a state in S1. Also, from every state in T , we must reach a state in S1. Thus, the only possible choice for T is the set of states from which the program can reach S1. Therefore, Algorithm 1 starts with the construction of T (Lines 1-3) and checks the closure property of the predicates T and ¬T and the invariants S1 and S2 (Lines 4-6). Thereafter, we utilize CheckCycle() to detect whether program p has cycles in T − S1. Notice that if there is a cycle in a state predicate Y , then the following holds for any state s 0 in the cycle: ∃s 1 ∈ Y : (s 0 , s 1 ) ∈ p. As such, the absence of any cycle in Y requires the negation of this expression to hold (see Line 16). This is the basic idea behind the CheckCycle routine (Lines 15-19). If some state in T − S1 is never removed, then some of the remaining states form a cycle. If such a cycle exists then p is not tamper-evident stabilizing. Utilizing the ideas in [7], we construct p 1 , which accounts for the effect of adversary adv, and check for cycles of p 1 in ¬T − S2 (Lines 8-9). In this construction, reach(s 0 , s 1 , l) denotes that s 1 can be reached from s 0 by executing exactly l transitions within ¬T . If no such cycle of p 1 exists then p is tamper-evident stabilizing.
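The core of CheckCycle can be sketched as a fixpoint computation that repeatedly removes states of Y with no successor inside Y (the negated condition discussed above); whatever remains must lie on, or lead to, a cycle within Y . This is a minimal sketch with transitions represented as a set of state pairs; the function name is ours, not the paper's.

```python
def has_cycle(states, delta, Y):
    """Return True iff delta contains a cycle lying entirely inside Y."""
    Y = set(Y)
    changed = True
    while changed:
        changed = False
        for s in list(Y):
            # Remove s if it has no successor remaining in Y.
            if not any((s, t) in delta and t in Y for t in states):
                Y.discard(s)
                changed = True
    return bool(Y)

# Recovery inside T - S1 must be cycle-free for stabilization:
delta = {(1, 2), (2, 3), (3, 3)}
assert not has_cycle({1, 2, 3}, delta, {1, 2})   # 1 -> 2 exits the predicate
assert has_cycle({1, 2, 3}, delta, {1, 2, 3})    # self-loop at state 3
```

Each pass removes at least one state or terminates, so the loop runs at most |Y | times over at most |Y |·|states| pair checks, which is consistent with the polynomial-time claim for verification.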

Composing Tamper-evident Stabilization
In this section, we evaluate the composition of tamper-evident stabilizing systems by investigating different types of compositions considered for stabilizing systems.
Parallel Composition. A parallel composition of two programs considers the case where two independent programs run in parallel on a weakly fair scheduler, so that each program is guaranteed to execute its enabled actions. Weak fairness ensures that any action that is continuously enabled is executed infinitely often. Thus, during the parallel execution, the behavior of one program does not affect the behavior of the other. Hence, if we have two programs p and q that do not share any variables such that p is stabilizing for S and q is stabilizing for R, then the parallel composition of p and q is stabilizing for S ∧ R. Now, consider two programs p and q that are tamper-evident stabilizing for ⟨S1, S2⟩ and ⟨S1 ′ , S2 ′ ⟩, respectively, and that do not share any variables. Is the parallel composition of p and q (denoted p[]q) also tamper-evident stabilizing?

Theorem 6 (Parallel Composition). Let p and q be programs that do not share variables. Then:

(p is tamper-evident stabilizing with adversary adv for ⟨S1, S2⟩ ∧ q is tamper-evident stabilizing with adversary adv for ⟨S1 ′ , S2 ′ ⟩) ⇒ p[]q is tamper-evident stabilizing with adversary adv for ⟨S1 ∧ S1 ′ , S2 ∨ S2 ′ ⟩.

Note that in the parallel composition of two tamper-evident stabilizing programs, the first predicate is combined by conjunction whereas the second is combined by disjunction. However, we could make p[]q tamper-evident stabilizing for ⟨S1 ∧ S1 ′ , S2 ∧ S2 ′ ⟩ provided we add actions to p (respectively, q) so that it checks whether q (respectively, p) is in a state in S2 ′ (respectively, S2); accordingly, p (respectively, q) can change its own state to be in S2 (respectively, S2 ′ ).
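The predicate combination in Theorem 6 can be made concrete on the product state space. The component predicates below are hypothetical placeholders, chosen only to show why the recovered predicate composes by conjunction while the tamper-evident predicate composes by disjunction.

```python
from itertools import product

# Hypothetical component predicates: p recovers to "a" and tamper-evidences
# to "z"; q recovers to "b" and tamper-evidences to "w".
S1_p, S2_p = {"a"}, {"z"}
S1_q, S2_q = {"b"}, {"w"}

states = set(product({"a", "z", "m"}, {"b", "w", "n"}))

# Composed predicates per Theorem 6: conjunction for S1, disjunction for S2.
S1 = {(u, v) for (u, v) in states if u in S1_p and v in S1_q}
S2 = {(u, v) for (u, v) in states if u in S2_p or v in S2_q}

assert ("a", "b") in S1            # both components must have recovered
assert ("z", "b") in S2            # one tampered component suffices
assert ("a", "b") not in S2
```

The asymmetry reflects the semantics: the composed system is legitimate only when both components are, but it carries evidence of tampering as soon as either component does.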

Superposition.
We can also superpose two tamper-evident stabilizing systems in a similar manner. For example, consider the case where program p is superposed on program q, i.e., p has read-only access to variables of q and q does not have access to variables of p.
Transitivity. Tamper-evident stabilization provides a transitivity property in a manner similar to stabilizing programs; we can infer this property from the following theorem.

Designing Tamper-evident Stabilization by Local Detection and Global/Local Correction
In this section, we identify some possible approaches for designing tamper-evident stabilization. Specifically, we evaluate the use of some of the existing approaches for designing stabilization in designing tamper-evident stabilization.

Local Detection and Global Correction
One approach for designing stabilization is via local detection and global correction. In such a system, the invariant S of the system is of the form ∀j :: S.j, where S.j is a local predicate that can be checked by process j. Each process j is responsible for checking its own predicate. If the system is outside its legitimate states then the local predicate of at least one process is violated. Hence, this process is responsible for initiating a global correction (such as a distributed reset [19]) to restore the system to a legitimate state.
In this case, the actions of process j to obtain tamper-evident stabilization are as follows:

¬T.j ∧ ¬S2.j −→ satisfy S2.j
T.j ∧ ¬S1.j −→ initiate global correction to restore S1

To utilize such an approach to design tamper-evident stabilization, we need to make some changes to the global correction and put some reasonable constraints on what the adversary can do. In particular, the global correction to restore S1 involves changes to all processes. For tamper-evident stabilization, however, process j executes its part of the global correction only if T.j is true. Also, if process j observes that T.k is false for some neighbor k, then j satisfies S2.j. This guarantees that if T.j is false for some process then the program eventually reaches a state in S2. The definition of tamper-evident stabilization requires that ¬T is closed in the adversary actions. This assumption is essential: if the adversary could move the system from a state in ¬T to a state in T , then the system would have forgotten that it was tampered with beyond acceptable levels. In the context of this example, it is therefore necessary that the adversary cannot move a process j from a state where T.j is false to a state where T.j is true.
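The per-process rule can be sketched as follows. All names are hypothetical, and the step in which a process records ¬T.j after observing a tampered neighbor is our assumption about how the evidence spreads.

```python
def local_step(j, T, S1, S2, neighbors, initiate_global_correction):
    """One guarded step of process j; T, S1, S2 are per-process booleans."""
    if not T[j]:
        if not S2[j]:
            S2[j] = True                     # -> satisfy S2.j
    elif any(not T[k] for k in neighbors[j]):
        T[j] = False                          # record the tamper evidence
        S2[j] = True                          # -> satisfy S2.j
    elif not S1[j]:
        initiate_global_correction(j)         # e.g., a distributed reset [19]

# A ring of three processes in which process 1 has been tampered with:
T = [True, False, True]
S1 = [True, True, True]
S2 = [False, False, False]
neighbors = {0: [2, 1], 1: [0, 2], 2: [1, 0]}
for _ in range(3):
    for j in range(3):
        local_step(j, T, S1, S2, neighbors, lambda j: None)
assert not any(T) and all(S2)   # the evidence spreads to every process
```

Since no action ever sets T[j] back to true, ¬T is closed under the program; the closure of ¬T under the adversary is the separate assumption discussed above.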

Local Detection and Local Correction
We can also utilize the above approach in the context of local detection and local correction [3] to add tamper-evident stabilization if the invariant S1 is of the form ∀j :: S1.j, the predicates of different processes are arranged in a partial order, and the actions that correct S1.j preserve all predicates that come earlier in the order. In such a system, when process j finds that T.j ∧ ¬S1.j is true, it only locally satisfies S1.j. Given the partial order, we eventually reach a state where S1.j is true for every process j.
Effect of the structure of the predicate T . Intuitively, in tamper-evident stabilization, we have two convergence requirements. T converges to S1 and ¬T converges to S2 in the presence of an adversary. If T is a conjunctive predicate then ¬T is a disjunctive predicate. Hence, a reader may wonder what would happen if T were a disjunctive predicate instead of a conjunctive predicate. We argue that this is likely to be a harder problem than the case where T is a conjunctive predicate.

The Relationship between Tamper-evident Stabilization and other Stabilization Techniques
Starting with Dijkstra's seminal work [10] on stabilizing algorithms for token circulation, several variations of stabilizing algorithms have been proposed during the past decades. These algorithms can be classified into two categories: stronger and weaker stabilizing algorithms. The algorithms in the first category not only guarantee stabilization but also satisfy some additional properties. Examples of this category include fault-containment stabilization, Byzantine stabilization, fault-tolerant self-stabilization (FTSS), multitolerance, and active stabilization. Fault-containment stabilization (e.g., [14,25]) refers to stabilizing programs that ensure that if one fault (respectively, a small number of faults) occurs then quick recovery to the invariant is provided. Byzantine stabilizing programs (e.g., [21,22]) tolerate scenarios where a subset of processes is Byzantine. FTSS (e.g., [4]) covers stabilizing programs that tolerate permanent crash faults. Multitolerant stabilizing systems (e.g., [13,19]) ensure that, in addition to stabilization, the program masks a certain class of faults. Finally, active stabilization [7] requires that the program recover to the invariant even if it is constantly perturbed by an adversary.
By contrast, every stabilizing program satisfies the constraints of the weaker versions of stabilization; however, a program that provides a weaker version of stabilization may not be stabilizing. Examples of this category include weak stabilization, probabilistic stabilization, and pseudo stabilization. Weak stabilization (e.g., [9,16]) requires that, starting from any initial configuration, there exists an execution that eventually reaches a point from which its behavior is correct; however, the program may execute along a path where such a legitimate state is never reached. Probabilistic stabilization [18] refers to programs that, starting from any initial configuration, converge to their legitimate states with probability 1. Nonmasking fault tolerance (e.g., [1,2]) targets programs that recover from the states reached in the presence of a limited class of faults; this limited set of states may not cover the set of all states. Pseudo stabilization [8] relaxes the notion of the point in the execution from which the behavior is correct: every execution has a suffix that exhibits correct behavior, yet the time before reaching this suffix is unbounded.
The aforementioned stabilizing algorithms consider several problems, including mutual exclusion, leader election, consensus, graph coloring, clustering, routing, and overlay construction. However, none of them considers the problem of tampering (e.g., [20,23,24]). In part, this is due to the fact that stabilization and tamper evidence are potentially conflicting requirements.
Tamper-evident stabilization is in some sense a weaker version of stabilization: by Theorem 3, every stabilizing program is also tamper-evident stabilizing. In particular, a stabilizing program guarantees that from all states the program eventually recovers to legitimate states; by contrast, a tamper-evident stabilizing program has the option of recovering to tamper-evident states. (Although Theorem 4 suggests that every tamper-evident stabilizing program can be thought of as a stabilizing program, the invariant of such a stabilizing program is of the form ⟨S1, S2⟩, where S2 consists of states in which the system has reduced, possibly no, functionality.)

Tamper-evident stabilization is stronger than the notion of nonmasking fault tolerance. In particular, nonmasking fault tolerance also has the notion of a fault-span (similar to T in Definition 12) from which recovery to the invariant is provided. In tamper-evident stabilization, if the program reaches a state in ¬T , it is required to stay in ¬T ; by contrast, in nonmasking fault tolerance, the program may recover from ¬T to T .
Tamper-evident stabilization can be considered a special case of failsafe-nonmasking multitolerance, where a program that is subject to two types of faults, F f and F n , provides (i) failsafe fault tolerance when F f occurs, (ii) nonmasking tolerance in the presence of F n , and (iii) no guarantees if both F f and F n occur in the same computation. We have previously identified [13] sufficient conditions for efficient stepwise design of failsafe-nonmasking multitolerant systems, where F f and F n do not occur simultaneously and their scopes of perturbation outside the invariant are disjoint. Based on the role of T in Definition 12, we can ensure that these conditions are satisfied for tamper-evident stabilization (for reasons of space, the proof is omitted). This suggests that efficient algorithms can be designed for tamper-evident stabilization based on the approach in [13].

Conclusion and Future Work
This paper introduces the notion of tamper-evident stabilization, which captures the requirement that if a system is perturbed within an acceptable limit then it restores itself to legitimate states; however, if it is perturbed beyond this boundary then it permanently preserves evidence of tampering. Moreover, the latter operation is unaffected even if the adversary attempts to stop it. We formally defined tamper-evident stabilization and investigated how it relates to stabilization and active stabilization. We argued that tamper-evident stabilization is weaker than stabilization in that every stabilizing system is indeed tamper-evident stabilizing. Also, tamper-evident stabilization captures a spectrum of systems, from pure tamper-evident systems to pure stabilizing systems. We demonstrated two examples in which we design tamper-evident stabilizing token passing and traffic control protocols, and identified how methods for designing stabilizing programs can be leveraged to design tamper-evident stabilizing programs. We showed that verifying whether a given program is tamper-evident stabilizing takes time polynomial in the state space of the program. We note that the problem of adding tamper-evident stabilization to a given high-atomicity program can be solved in polynomial time; however, the problem is NP-hard for distributed programs. Moreover, we find that parallel composition of tamper-evident stabilizing systems works in a manner similar to that of stabilizing systems, although the superposition and transitivity requirements of tamper-evident stabilization differ somewhat from those for stabilizing systems.
We are currently investigating the design and analysis of tamper-evident stabilizing System-on-Chip (SoC) systems in the context of the IEEE SystemC language. Our objective here is to design systems that facilitate reasoning about what they do and what they do not do in the event of tampering. Second, we will leverage our existing work on model repair and synthesis of stabilization in automated design of tamper-evident stabilization. Third, we plan to study the application of tamper-evident stabilization in game theory (and vice versa).