Characterizing Polynomial and Exponential Complexity Classes in Elementary Lambda-Calculus

Abstract. In this paper an implicit characterization of the complexity classes k-EXP and k-FEXP, for k ≥ 0, is given, by a type assignment system for a stratified λ-calculus, where types for programs are witnesses of the corresponding complexity class. Types are formulae of Elementary Linear Logic (ELL), and the hierarchy of complexity classes k-EXP is characterized by a hierarchy of types.


Introduction
Context. Early work on the study of complexity classes by means of programming languages has been carried out by Neil Jones [10,11], in particular using functional programming. The interest of these investigations is twofold: from the computational complexity point of view, they provide new characterizations of complexity classes, which abstract away from machine models; from the programming language point of view, they are a way to analyze the impact on complexity of various programming features (higher-order types, recursive definitions, read/write operations). This fits more generally in the research line of implicit computational complexity (ICC), whose goal is to study complexity classes without relying on explicit bounds on resources, but instead by considering restrictions on programming languages and calculi. Seminal research in this direction has been carried out in the fields of recursion theory [4,13], λ-calculus [15] and linear logic [9]. These contributions usually exhibit a new specific language or logic for each complexity class, for instance PTIME, PSPACE, LOGSPACE: let us call monovalent the characterizations of this kind. We think however that the field would benefit from some more uniform presentations, which would consist in both (1) a general language and (2) a family of static criteria on programs of this language, each of which characterizes a particular complexity class. We call such a setting a polyvalent characterization; we believe that this approach is more promising for providing insights on the relationships between complexity classes. Polyvalent characterizations of this nature have been given in [11,14], but their criteria for reaching point (2) referred to the construction steps of the programs. Here we are interested in defining a polyvalent characterization where (2) is expressed by means of the program's type in a dedicated system.
Stratification and Linear Logic. A ubiquitous notion in implicit complexity is that of stratification, by which we informally designate here the fact of organizing computation into distinct strata. This intuition underlies several systems: ramified and safe recursion [13,4], in which data is organized into strata; stratified comprehension [14], where strata are used for quantification; variants of linear logic [9], where programs are divided into strata thanks to a modality. More recently, stratification of data has been related fruitfully to type systems for non-interference [18]. The linear logic approach to ICC is based on the proofs-as-programs correspondence. This logic indeed provides a powerful system to analyse the duplication and sharing of arguments in functional computation: this is made possible by a specific logical connective for the duplication of arguments, the ! modality. As in functional computation the reuse of an argument can cause a complexity explosion, the idea is to use weak versions of ! to characterize complexity classes. This intuition is illustrated by elementary linear logic (ELL) [9,8], a simple variant of linear logic which provides a monovalent characterization of elementary complexity, that is to say computation in time bounded by a tower of exponentials of fixed height. Other variants of linear logic provide characterizations of PTIME, but they use either a more complicated language [9] or a more specific programming discipline [12].

Contribution and Comparison.
In [2] a polyvalent characterization in ELL proof-nets of the complexity classes k-EXP = ∪_{i∈N} DTIME(2_k^{n^i}), for all k ≥ 0, has been obtained. However this approach has some shortcomings:
1. The complexity soundness proof uses a partly semantic argument ([2], Lemma 3, p. 10), and so it does not provide a syntactic way to evaluate the programs within the given complexity bound.
2. The characterization is given for classes of predicates, and not for classes of functions. Moreover it is not so clear how to extend this result to functions, because of the semantic argument mentioned above.
3. The language of proof-nets is not as standard and widespread as, say, that of λ-calculus.
In the present work, we wish to establish an analogous polyvalent characterization in the setting of λ-calculus, with a stronger complexity soundness result based on a concrete evaluation procedure. We think this could provide a more solid basis to explore other characterizations of this kind.
In particular we define the λ!-calculus, a variant of λ-calculus with explicit stratifications, which allows us both to recover the results of [2] and to characterize also the function complexity classes k-FEXP, by two distinct hierarchies of types. In fact, the characterization obtained through a standard representation of data-types as in [2] does not account for some closure properties of the function classes k-FEXP, in particular composition, so we propose a new, perhaps less natural, representation in order to capture these properties. Our language makes it easier to define such a non-standard representation.
Technical Approach. One could expect that the results of [2] might be extended to the λ!-calculus by considering a translation of terms into proof-nets. However it is not so straightforward: term reduction cannot be directly simulated by the evaluation procedure in [2], because (i) it follows a specific cut-elimination strategy and (ii) ultimately it uses a semantic argument. For this reason we give here a direct proof of the result in the λ!-calculus, which requires defining new measures on terms and is not a mere adaptation of the proof-net argument.
Related Works. The first results on ELL [9,8], as well as later works [19,6], have been carried out in the setting of proof-nets. Other syntaxes have then been explored. First, specific term calculi corresponding to the related system LLL and to ELL have been proposed [22,17,16]. Alternatively, [5] used standard λ-calculus with a type system derived from ELL. The λ!-calculus we use here has a syntax similar to e.g. [21,7], and our type system is inspired by [5].
Outline. In the following we first introduce the λ!-calculus as an untyped calculus, delineate a notion of well-formed terms and study the complexity of the reduction of these terms (Sect. 2). We then define a type system inspired by ELL and exhibit two families of types corresponding respectively to the hierarchies k-EXP and k-FEXP for k ≥ 0 (Sect. 3). Finally we introduce a second characterization of this hierarchy, based on a non-standard data-type (Sect. 4). A conclusion follows.
A version of this work with a technical appendix containing detailed proofs is available as [3].

Terms and Reduction
We use a calculus, the λ!-calculus, which adds to ordinary λ-calculus a ! modality and distinguishes two notions of λ-abstraction:

M, N ::= x | λx.M | λ!x.M | MN | !M

where x ranges over a countable set of term variables Var. The usual notion of free variables is extended with FV(λ!x.M) = FV(M) \ {x} and FV(!M) = FV(M). As usual, terms are considered modulo renaming of bound variables, and = denotes syntactic equality modulo this renaming.

Contexts. We consider the class of (one-hole) contexts generated by the following grammar:

C ::= [·] | λx.C | λ!x.C | CM | MC | !C

As usual, capture of variables may occur. An occurrence of a term N in M is a context C such that M = C[N]; in practice we simply write N for the occurrence if there is no ambiguity, and call it a subterm of M.
Depth. The depth of the occurrence C in M, denoted by δ(C, M), is the number of ! modalities surrounding the hole of C in M.
Moreover, the depth δ(M) of a term M is the maximal nesting of ! in M.
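As an illustration, the depth δ(M) can be computed in a few lines of Python, assuming a hypothetical encoding of λ!-terms as nested tuples (the encoding is ours, introduced only for these sketches, not part of the paper):

```python
# Terms of the λ!-calculus as nested tuples (a hypothetical encoding):
#   ('var', x) | ('lam', x, M) | ('lam!', x, M) | ('app', M, N) | ('bang', M)

def depth(t):
    """Maximal nesting of ! in a term, as in the definition of δ(M)."""
    tag = t[0]
    if tag == 'var':
        return 0
    if tag in ('lam', 'lam!'):
        return depth(t[2])
    if tag == 'app':
        return max(depth(t[1]), depth(t[2]))
    if tag == 'bang':
        return 1 + depth(t[1])
    raise ValueError(tag)

# !(λx. x !y) has depth 2: the ! around y is itself under one !.
example = ('bang', ('lam', 'x', ('app', ('var', 'x'), ('bang', ('var', 'y')))))
```

Here `depth(example)` evaluates to 2, matching the count of nested modalities.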
Dynamics. The reduction → is the contextual closure of the following rules:

(λx.M)N → M[N/x]        (λ!x.M)!N → M[N/x]

where [N/x] denotes the capture-free substitution of x by N, whose definition is the obvious extension of the corresponding one for λ-calculus.
Observe that a term such as (λ!x.M)P is a redex only if P = !N for some N; the intuition behind these two kinds of redexes is that the abstraction λ expects an input at depth 0, while λ! expects an input at depth 1.
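The two redex kinds above can be sketched as a naive evaluator, again over our hypothetical tuple encoding of terms; substitution is capture-naive, which suffices for the closed examples below:

```python
# A naive evaluator for the λ!-calculus (our sketch), with terms encoded as
# ('var',x) | ('lam',x,M) | ('lam!',x,M) | ('app',M,N) | ('bang',M).

def subst(t, x, n):
    """Substitute n for x in t (capture-naive)."""
    tag = t[0]
    if tag == 'var':
        return n if t[1] == x else t
    if tag in ('lam', 'lam!'):
        return t if t[1] == x else (tag, t[1], subst(t[2], x, n))
    if tag == 'app':
        return ('app', subst(t[1], x, n), subst(t[2], x, n))
    return ('bang', subst(t[1], x, n))

def step(t):
    """Return a reduct of t, or None if t has no redex."""
    tag = t[0]
    if tag == 'app':
        f, a = t[1], t[2]
        if f[0] == 'lam':                       # (λx.M)N → M[N/x]
            return subst(f[2], f[1], a)
        if f[0] == 'lam!' and a[0] == 'bang':   # (λ!x.M)!N → M[N/x]
            return subst(f[2], f[1], a[1])
        r = step(f)
        if r is not None:
            return ('app', r, a)
        r = step(a)
        if r is not None:
            return ('app', f, r)
    elif tag in ('lam', 'lam!'):
        r = step(t[2])
        if r is not None:
            return (tag, t[1], r)
    elif tag == 'bang':
        r = step(t[1])
        if r is not None:
            return ('bang', r)
    return None

def normalize(t):
    while True:
        r = step(t)
        if r is None:
            return t
        t = r

# (λ!x.!x) !v reduces to !v, while (λ!x.!x) v is stuck: the argument lacks !.
redex = ('app', ('lam!', 'x', ('bang', ('var', 'x'))), ('bang', ('var', 'v')))
stuck = ('app', ('lam!', 'x', ('bang', ('var', 'x'))), ('var', 'v'))
```

The `stuck` example makes concrete the observation that (λ!x.M)P is a redex only when P carries a ! marker.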
A subterm at depth i in M is an occurrence C in M such that δ(C, M) = i; we denote by →_i the reduction of a redex occurring at depth i. As usual, →* (→*_i) denotes the reflexive and transitive closure of → (→_i). We say that a term is in i-normal form if it does not have any redex at depth less than or equal to i; then M is in normal form iff it is in δ(M)-normal form. We denote by nf_i the set of terms in i-normal form.
We have a confluence property, whose proof is adapted from [20], taking into account the notion of depth.

We consider a specific subclass of terms, inspired by elementary linear logic (ELL) [9,17]:

Definition 1 (Well-formed Term). A term M is well-formed (w.f.) if and only if, for any subterm N of M which is an abstraction, we have:
1. if N = λx.P, then x occurs at most once, and at depth 0, in P;
2. if N = λ!x.P, then x can only occur at depth 1 in P.
The motivation behind such a definition is that the depth of subterms in a w.f. term does not change during reduction: if an abstraction expects an input at depth 0 (resp. 1), which is the case of λ (resp. λ!), then the substitutions occur at depth 0 (resp. 1), as each occurrence of its bound variable is at depth 0 (resp. 1).
The class of w.f. terms is preserved by reduction, and their depth does not increase during reduction:

Lemma 1. If M is w.f. and M → M′, then M′ is w.f., and δ(M′) ≤ δ(M).
From now on, we assume that all terms are well formed.
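The two clauses of Definition 1 can be checked mechanically; here is a sketch over our hypothetical tuple encoding of terms (our illustration, not the paper's):

```python
# A checker for Definition 1, with terms as tuples
# ('var',x) | ('lam',x,M) | ('lam!',x,M) | ('app',M,N) | ('bang',M).

def occurrences(t, x, d=0):
    """List of depths at which the free variable x occurs in t."""
    tag = t[0]
    if tag == 'var':
        return [d] if t[1] == x else []
    if tag in ('lam', 'lam!'):
        return [] if t[1] == x else occurrences(t[2], x, d)
    if tag == 'app':
        return occurrences(t[1], x, d) + occurrences(t[2], x, d)
    return occurrences(t[1], x, d + 1)  # 'bang'

def well_formed(t):
    tag = t[0]
    if tag == 'var':
        return True
    if tag == 'lam':    # bound variable: at most once, at depth 0
        occ = occurrences(t[2], t[1])
        return len(occ) <= 1 and all(d == 0 for d in occ) and well_formed(t[2])
    if tag == 'lam!':   # bound variable: only at depth 1 (any multiplicity)
        occ = occurrences(t[2], t[1])
        return all(d == 1 for d in occ) and well_formed(t[2])
    if tag == 'app':
        return well_formed(t[1]) and well_formed(t[2])
    return well_formed(t[1])  # 'bang'

# λ!f.!(f f) is w.f. (both occurrences of f are at depth 1);
# λf.f f is not (f occurs twice under a plain λ).
ok  = ('lam!', 'f', ('bang', ('app', ('var', 'f'), ('var', 'f'))))
bad = ('lam', 'f', ('app', ('var', 'f'), ('var', 'f')))
```

The `bad` example is exactly the self-application that plain λ forbids, which is the source of the duplication control in the calculus.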
Sizes. In order to study the reduction, it is useful to examine the size of M at depth i, denoted by |M|_i, defined as follows:
- if M = x, then |x|_0 = 1 and |x|_i = 0 for i > 0;
- if M = λx.N or M = λ!x.N, then |M|_0 = |N|_0 + 1 and |M|_i = |N|_i for i > 0;
- if M = NP, then |M|_0 = |N|_0 + |P|_0 + 1 and |M|_i = |N|_i + |P|_i for i > 0;
- if M = !N, then |M|_0 = 1 and |M|_{i+1} = |N|_i.

The definition is extended to contexts, where |[·]|_i = 0 for i ≥ 0. We consider how the size of a term changes during reduction; the resulting statements, Prop. 1 and Lemma 2, are used repeatedly below.

Strategy. The fact that, by Prop. 1.(i), reducing a redex does not create any redex at strictly lower depth suggests considering the following, non-deterministic, level-by-level reduction strategy: if the term is not in normal form, reduce (non-deterministically) a redex at depth i, where i ≥ 0 is the minimal depth such that M ∉ nf_i. A level-by-level reduction sequence is a reduction sequence following the level-by-level strategy. We say that a reduction sequence is maximal if either it is infinite, or it finishes with a normal term.

Proposition 2. Any reduction of a term by the level-by-level strategy terminates.
It follows that a maximal level-by-level reduction sequence of a term M has the shape shown in (1), where →_i denotes one reduction step according to the level-by-level strategy, performed at depth i. We call round i the subsequence of →_i steps starting from M_1^i. Note that, for all i and j > i, M_1^j ∈ nf_i. We simply write → when we do not refer to a particular depth.
In a particular case, namely in Lemma 3, we use a deterministic version of the level-by-level strategy, called leftmost-by-level, which proceeds at every level from left to right, taking into account the shape of the different redexes in our calculus. That is to say, it chooses at every step the leftmost subterm of the shape MN where M is an abstraction; in case it is already a redex it reduces it, and in case it is of the shape (λ!x.P)N where N ≠ !Q for any Q, it looks for the next redex inside N. This corresponds to using the call-by-name discipline for β-redexes and call-by-value for !-redexes [20].
M =⇒ N denotes that N is obtained from M by performing one reduction step according to the leftmost-by-level strategy. All the notations for → are extended to =⇒ in a straightforward way.

Representation of Functions
In order to represent functions, we first need to encode data.For booleans we can use the familiar encoding true = λx.λy.x and false = λx.λy.y.
For tally integers, the usual encoding of Church integers does not give w.f. terms; instead, we use the following encodings for Church integers and Church binary words, with all occurrences of f, f_0, f_1 at depth 1, under a single !:

n = λ!f.!(λx.f(f(··· (f x) ···)))        w = λ!f_0.λ!f_1.!(λx.f_{w_1}(f_{w_2}(··· (f_{w_n} x) ···)))   for w = w_1 ··· w_n ∈ {0,1}*

By abuse of notation we also denote by 1 the term λ!f.!f. Observe that the terms encoding booleans are of depth 0, while those representing Church integers and Church binary words are of depth 1. We denote the length of a word w ∈ {0,1}* by length(w).
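The iteration scheme underlying these encodings can be illustrated with plain Python closures; note that the ! markers of the λ!-calculus versions are elided here, so this sketch shows only the untyped behaviour, not the stratification:

```python
# Ordinary Church encodings as Python closures (stratification elided).

true  = lambda x: lambda y: x
false = lambda x: lambda y: y

def church(n):
    """Church integer: n = λf.λx.f(f(... (f x) ...)) with n copies of f."""
    def with_f(f):
        def with_x(x):
            for _ in range(n):
                x = f(x)
            return x
        return with_x
    return with_f

def church_word(w):
    """Church word: w = λf0.λf1.λx.f_{w1}(... (f_{wn} x) ...)."""
    def with_f0(f0):
        def with_f1(f1):
            def with_x(x):
                for c in reversed(w):       # innermost application = last letter
                    x = (f0 if c == '0' else f1)(x)
                return x
            return with_x
        return with_f1
    return with_f0
```

For example, `church(3)(lambda k: k + 1)(0)` evaluates to 3, and a word can be read back by letting f_0 and f_1 prepend the corresponding letter.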
We represent computation on a binary word by considering applications of the form P!w, with a ! modality on the argument, because the program should be able to duplicate its input. Concerning the form of the result, since we want to allow computation at arbitrary depth, we require the output to be of the form !^k D, where k ∈ N and D is one of the data representations above.
We thus say that a function f : {0,1}* → {true, false} is represented by a term (program) P if P is a closed normal term and there exists k ∈ N such that, for any w ∈ {0,1}* and D = f(w) ∈ {true, false}, we have P!w →* !^k D. This definition can be adapted to functions with other domains and codomains.

Complexity of Reduction
We study the complexity of the reduction of terms of the form P!w. Actually it is useful to analyze the complexity of the reduction of such terms to their k-normal form, i.e. by reducing until depth k, for k ∈ N. We define the tower notation 2_i^x in the following way: 2_0^x = x and 2_{i+1}^x = 2^{2_i^x}.
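The tower notation can be sketched directly:

```python
def tower(i, x):
    """The tower 2^x_i: 2^x_0 = x and 2^x_{i+1} = 2 ** (2^x_i)."""
    for _ in range(i):
        x = 2 ** x
    return x
```

So `tower(0, n)` is polynomial-sized, `tower(1, n)` is a single exponential, and so on; this is the family of bounds indexed by k throughout the section.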
Proposition 3. Given a program P, for any k ≥ 2, there exists a polynomial q such that, for any w ∈ {0,1}*, P!w reduces to its (k−1)-normal form in at most 2_{k−2}^{q(n)} steps, where n = length(w). In particular, in the case where k = 2 we have a polynomial bound q(n).
In the rest of this section we prove Prop. 3. Let M = P!w and consider a level-by-level reduction sequence of M, using the notations of (1). By Lemma 2 we know that the number of steps at depth i is bounded by |M_1^i|_i and that there are (d + 1) rounds. In order to bound the total number of steps it is thus sufficient to bound the sizes |M_1^i|_i.

Proof (Prop. 3). We proceed by induction on k ≥ 2. We assume that P is of the form λ!y.Q (otherwise P!w is already a normal form).
- Case k = 2: We consider a level-by-level reduction sequence of P!w. We need to examine reduction at depths 0 and 1. At depth 0 there is the single step (λ!y.Q)!w →_0 Q[w/y] = M_1^1, and by Lemma 2 the reduction at depth 1 is done in at most c steps, where |M_1^1|_1 ≤ c and c does not depend on n; by Lemma 3 the size of the terms involved is then polynomial in n, and the statement is proved for k = 2.
- Assume the property holds for k, and let us prove it for k + 1.
By induction hypothesis M reduces to M_1^k in at most 2_{k−2}^{q(n)} steps, and by Lemma 2 and Lemma 3 we obtain a bound of the form 2_{k−1}^{q′(n)}, for some polynomial q′(n).
Approximations. From Prop. 3 we can easily derive a 2_{k−2}^{q(n)} bound on the number of steps of the reduction of P!w not only to its (k−1)-normal form, but also to its k-normal form M_1^{k+1}. Unfortunately this does not directly yield a time bound O(2_{k−2}^{q(n)}) for the simulation of this reduction on a Turing machine, because during round k the size of the term at depth k + 1 could grow exponentially. However, if we are only interested in the result at depth k, the subterms at depth k + 1 are actually irrelevant. For this reason we introduce a notion of approximation, inspired by the semantics of stratified coherence spaces [1], which allows us to compute up to a certain depth k while ignoring what happens at depth k + 1.
We extend the calculus with a constant ∗; its sizes |∗|_i are defined as for variables. If M is a term and i ∈ N, we define its i-th approximation ⌊M⌋_i by ⌊!N⌋_0 = !∗ and ⌊!N⌋_{i+1} = !⌊N⌋_i, while on all other constructions ⌊·⌋_i acts as the identity, e.g. ⌊MN⌋_i = ⌊M⌋_i ⌊N⌋_i.
So ⌊M⌋_i is obtained by replacing in M all subterms at depth i + 1 by ∗.
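This truncation is easy to sketch on our hypothetical tuple encoding of terms (the encoding and the function are ours, for illustration only):

```python
# The i-th approximation: replace every subterm at depth i+1 by ∗.
# Terms as tuples ('var',x) | ('lam',x,M) | ('lam!',x,M) | ('app',M,N)
# | ('bang',M), plus the extra constant ('star',).

def approx(t, i):
    tag = t[0]
    if tag in ('var', 'star'):
        return t
    if tag in ('lam', 'lam!'):
        return (tag, t[1], approx(t[2], i))
    if tag == 'app':
        return ('app', approx(t[1], i), approx(t[2], i))
    # tag == 'bang': the body sits one depth further down
    if i == 0:
        return ('bang', ('star',))
    return ('bang', approx(t[1], i - 1))

# Truncating !(x !y) at level 1 keeps x (depth 1) but cuts y (depth 2).
m = ('bang', ('app', ('var', 'x'), ('bang', ('var', 'y'))))
```

Here `approx(m, 1)` yields the encoding of !(x !∗), while `approx(m, 0)` collapses the whole body to !∗.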
For instance we have ⌊!!x⌋_1 = !!∗.

Proposition 4. Given a program P, for any k ≥ 2, there exists a polynomial q such that, for any w ∈ {0,1}*, the reduction of ⌊P!w⌋_k to its k-normal form can be computed in time O(2_{k−2}^{q(n)}) on a Turing machine, where n = length(w).
Proof. Observe that ⌊P!w⌋_k = ⌊P⌋_k !w. By Prop. 3 and Lemma 4.(i), it reduces to its k-normal form in at most 2_{k−2}^{q(n)} steps. We can then conclude by using the fact that one reduction step in a term M can be simulated in time p(|M|) on a Turing machine, for a suitably chosen polynomial p.

Type System
We introduce a type assignment system for the λ!-calculus, based on ELL, such that all typed terms are also w.f. and the previous results are preserved. Typing judgements have the form Γ | ∆ | Θ ⊢ M : σ, with three bases: Γ for linear variables, ∆ for modal variables, and Θ for parking variables, which have a "temporary" status, awaiting to be moved to the modal basis. We say that a term M is well-typed iff there is a derivation Π of Γ | ∆ | ∅ ⊢ M : σ for some Γ, ∆, σ: indeed parking variables are only considered as an intermediary status before becoming modal variables. When all three bases are empty we denote the derivation by Π ⊢ M : σ. The main difference w.r.t. the type system of [5] is the (!) rule: here we allow only the parking context to be non-empty, in order to ensure that typable terms are well formed; this is the key to obtaining a 2_k^{poly(n)} complexity bound, for a specific k depending on the type, instead of just an elementary bound.
Both the type and the depth of a term are preserved during reduction (Theorem 1).

Proposition 5. If a term is well-typed, then it is also well-formed.
The proof comes easily from the following proposition, stating that if Γ | ∆ | Θ ⊢ M : σ, then:
- if x ∈ dom(Γ) ∪ dom(Θ), then x can only occur at depth 0 in M;
- if x ∈ dom(∆), then x can only occur at depth 1 in M.

Datatypes
In Section 2.2 we introduced w.f. terms encoding data, for which we now define the following types, adapted from system F, representing respectively booleans, Church tally integers and Church binary words:

B = ∀a.a ⊸ a ⊸ a        N = ∀a.!(a ⊸ a) ⊸ !(a ⊸ a)        W = ∀a.!(a ⊸ a) ⊸ !(a ⊸ a) ⊸ !(a ⊸ a)

The following properties ensure that, given a datatype, every derivation having such a type reduces to a term having the desired shape:

Complexity Soundness and Completeness
We are interested in giving a precise account of the hierarchy of classes characterized by this typed λ!-calculus. Denote by FDTIME(F(n)) and by DTIME(F(n)) respectively the class of functions and the class of predicates on binary words computable on a deterministic Turing machine in time O(F(n)); the complexity classes we are interested in, for k ≥ 0, are:

k-EXP = ∪_{i∈N} DTIME(2_k^{n^i})        k-FEXP = ∪_{i∈N} FDTIME(2_k^{n^i})

In particular, observe that 0-EXP = PTIME and 0-FEXP = FPTIME.

Soundness. Let F(σ) denote the set of closed terms representing functions to which type σ can be assigned: we prove that F(!W ⊸ !^{k+2}B) ⊆ k-EXP (Theorem 2).
Complexity soundness can be proved for functions by a similar proof, in which Prop. 7.(ii) is used in order to read the output as a Scott word: the reduction can then be computed in time 2_k^{p(n)}, where p is a polynomial.
Completeness. We proved that F(!W ⊸ !^{k+2}B) ⊆ k-EXP and F(!W ⊸ !^{k+2}W_S) ⊆ k-FEXP; now we want to strengthen this result by examining the converse inclusions. To do so we simulate k-EXP time bounded Turing machines, by an iteration, so as to prove the following results:

Theorem 4 (Extensional Completeness).
- Let f be a binary predicate in k-EXP, for any k ≥ 0; then there is a term M representing f such that ⊢ M : !W ⊸ !^{k+2}B.
- Let g be a function on binary words in k-FEXP, for k ≥ 0; then there is a term M representing g such that ⊢ M : !W ⊸ !^{k+2}W_S.
Note that this characterization, for k = 0, does not account for the fact that FPTIME is closed under composition: indeed, programs of type !W ⊸ !^{k+2}W_S cannot be composed, since we do not have any coercion from W_S to W. For this reason, we explore an alternative characterization.

Refining Types for an Alternative Characterization
Our aim is to take a pair ⟨n, w⟩ to represent the word w′ such that: w′ = w if length(w) ≤ n, and w′ is the prefix of w of length n otherwise.
For this reason, we introduce a new data-type using the connective ⊗, defined by σ ⊗ τ := ∀a.(σ ⊸ τ ⊸ a) ⊸ a. Note that we cannot define the pair abstraction in the usual way, i.e. λ(x_1 ⊗ x_2).M := λx.x(λx_1.λx_2.M), otherwise we could not type pairs in a uniform way; moreover, when applied to a pair, this term reduces to the usual one. The associated reduction rule is (λ(x_1 ⊗ x_2).M)(N_1 ⊗ N_2) → M[N_1/x_1][N_2/x_2]. We represent a pair ⟨n, w⟩ through a term !n ⊗ !^2 w of type !N ⊗ !^2 W_S, i.e. a combined data-type containing a Church integer !n and a Scott word !^2 w: in practice, n is meant to represent the length of a list, whose content is described by w. In order to maintain this invariant, when computing on elements !n ⊗ !^2 w of this data-type, the property that the length of w is at most n is preserved.
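The intended semantics of the pair ⟨n, w⟩ can be stated in one line; the function name below is ours, for illustration:

```python
def represented_word(n, w):
    """Word represented by the pair ⟨n, w⟩: w itself if length(w) ≤ n,
    otherwise the prefix of w of length n."""
    return w if len(w) <= n else w[:n]
```

The invariant length(w) ≤ n mentioned above means that, on well-behaved pairs, the truncation branch is never taken and the pair faithfully denotes w.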
As before, we need to be able to extract the result, in this case a pair:

The occurrences of y in Q are at depth 1; denote by b the number of occurrences of y in Q, which does not depend on n. Since |Q[w/y]|_1 ≤ |Q|_1 + b·|w|_0 and |w|_0 = 2 (by definition of the encoding), we have that |M_1^1|_1 = |Q[w/y]|_1 ≤ |Q|_1 + 2b. Let c be |Q|_1 + 2b, which does not depend on n: then, by Lemma 2, the number of steps at depth 1 is bounded by c. This proves the first part of the statement. Let M_1^2 ∈ nf_1 be the term obtained after reduction at depth 1. By Prop. 1.(ii) we have that M_1^1 =⇒*_1 M_1^2.

Recall the datatypes for booleans, Church integers and Church binary words:

B = ∀a.a ⊸ a ⊸ a        N = ∀a.!(a ⊸ a) ⊸ !(a ⊸ a)        W = ∀a.!(a ⊸ a) ⊸ !(a ⊸ a) ⊸ !(a ⊸ a)

We also use Scott binary words, defined inductively as

ε := λf_0.λf_1.λx.x        0w := λf_0.λf_1.λx.f_0 w        1w := λf_0.λf_1.λx.f_1 w

having type W_S := µb.∀a.(b ⊸ a) ⊸ (b ⊸ a) ⊸ a ⊸ a.
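The contrast between Church and Scott words can be illustrated with Python closures (our sketch, with the linear/modal typing elided): a Scott word dispatches on its first letter instead of iterating over the whole word.

```python
# Scott binary words as Python closures (illustration only).

eps = lambda f0, f1, x: x                       # ε  = λf0.λf1.λx.x
def cons0(w): return lambda f0, f1, x: f0(w)    # 0w = λf0.λf1.λx.f0 w
def cons1(w): return lambda f0, f1, x: f1(w)    # 1w = λf0.λf1.λx.f1 w

def decode(w):
    """Read a Scott word back as a Python string."""
    return w(lambda r: '0' + decode(r), lambda r: '1' + decode(r), '')
```

For example, the word 01 is built as `cons0(cons1(eps))`; each constructor exposes only the head letter and the tail, which is why Scott words are suited to reading off an output without further iteration.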

Theorem 2 (Soundness). Let ⊢ P : !W ⊸ !^{k+2}B where P is a program, and let ⊢ w : W where length(w) = n; then the reduction P!w →* !^{k+2}D can be computed in time 2_k^{p(n)}, where D is either true or false and p is a polynomial.

Proof. Recall that a program P is a typed closed term in normal form; we denote by M the normal form of P!w. By Prop. 4 we know that ⌊P!w⌋_{k+2} can be reduced to a term N in nf_{k+2} in time O(2_k^{p(n)}) on a Turing machine, where n = length(w). Moreover, by Lemma 4.(i) and Prop. 1.(iii) we have that ⌊M⌋_{k+2} = N. Now, as P!w has type !^{k+2}B, by Theorem 1 the term M is a closed term of type !^{k+2}B and, by Prop. 7.(i), it is equal to !^{k+2}true or !^{k+2}false. Then N = ⌊M⌋_{k+2} = M, so P!w can be computed in time O(2_k^{p(n)}).