**Abstract**: The earliest and most popular use of logic in computer science views computation as something that happens independently of logic: e.g., registers change, tokens move in a Petri net, messages are buffered and retrieved, and a tape head advances along a tape. Logics (often modal or temporal logics) are used to make statements about such computations. Model checkers and Hoare proof systems employ this computation-as-model approach. Early in the 20th century, logicians invented various computational systems, such as Turing machines, Church's λ-calculus, and Post correspondence systems, all of which were shown to compute the same set of recursive functions. With the introduction of high-level programming languages, such as LISP, Pascal, Ada, and C, it became clear that any number of ad hoc computational systems could be designed to compute these same functions. Eventually, the many different programming languages were classified via four paradigms: imperative, object-oriented, functional, and logic programming. The latter two can be viewed as attempts to make computational systems less ad hoc by relying on aspects of symbolic logic. Unlike most programming languages, symbolic logic is a formal language that has well-defined semantics and that has been studied using model theory [17], category theory [8, 9], recursion theory [5, 6], and proof theory [3, 4]. The computation-as-deduction approach to programming languages takes as its computational elements objects from logic, namely terms, formulas, and proofs. This approach has the potential to allow the direct application of logic's rich metatheory to proving properties of specific programs and of entire programming languages.

**The influence of proof theory on logic programming**

The first thing that proof theory has offered the logic programming paradigm is a clean and straightforward means of differentiating itself from functional programming.
From the proof-theory perspective, functional programs correspond to proofs (usually in natural deduction), and computation corresponds to proof normalization: that is, programs correspond to non-normal proofs, and computation is seen as a series of normalization steps (using either β-reduction or cut-elimination). This programs-as-proofs correspondence is known as the Curry-Howard isomorphism [16]. In contrast, proof search is a good characterization of computation in logic programming. Here, quantificational formulas are used to encode both programs and goals (think of the rules and queries in database theory). Sequents are used to encode the state of a computation, and (cut-free) proofs are used to encode computation traces: changes in sequents model the dynamics of computation. Although cut-elimination is not part of computation, it can be used to reason about computation. Also, the proof-theoretic notions of inference rule, schematic variable, proof checking, and proof search are directly implementable. The proof-normalization and proof-search styles of computational specification remain distinct even in light of numerous recent developments in proof theory: for example, linear logic, game semantics, and higher-order quantification have all served to illustrate differences, not similarities, between these two styles.
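The proof-search view of computation can be made concrete with a toy interpreter. The sketch below, a minimal illustration not taken from the paper, implements backward-chaining search over variable-free Horn clauses: the program is a set of rules, the goal is an atom to be proved, and the recursive search for a (cut-free) proof *is* the computation. The predicate names are purely illustrative.

```python
# A minimal sketch of computation-as-proof-search: backward chaining
# over variable-free Horn clauses. Each rule is a pair (head, body);
# facts are rules with an empty body. The names are hypothetical.

program = [
    ("parent_tom_bob", []),                                  # fact
    ("parent_bob_ann", []),                                  # fact
    ("ancestor_tom_ann", ["parent_tom_bob", "parent_bob_ann"]),  # rule
]

def prove(goal, rules):
    """Search for a proof of `goal` from `rules`.

    Each call corresponds to trying to close one branch of a proof
    tree; the trace of recursive calls is the computation.
    """
    for head, body in rules:
        # A rule applies if its head matches the goal and every
        # subgoal in its body can itself be proved.
        if head == goal and all(prove(subgoal, rules) for subgoal in body):
            return True
    return False  # no rule yields a proof of this goal

print(prove("ancestor_tom_ann", program))  # True: a proof exists
print(prove("ancestor_ann_tom", program))  # False: no rule applies
```

A real logic programming language adds quantified variables, unification, and a search strategy (e.g., SLD resolution with backtracking), but the structure is the same: success means a cut-free proof of the goal sequent was found, and the sequence of intermediate goals encodes the computation trace described above.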