\input{header.tex}
\section{Introduction}
\label{sec:intro}
\only{\Short}{\vspace{-1ex}}
Function types in traditional type systems provide information only
about the arguments and return values of functions, but not about the
data captured in function closures. Such function types naturally lead
to simple and compositional type systems.
Recently, syntax-directed type systems have been increasingly used to
statically verify strong program properties such as resource
usage~\cite{LagoP13,Jost10,HoffmannAH12}, information
flow~\cite{HeintzeR98,SabelfeldM03}, and
termination~\cite{Abel07,Chin01,BartheGR08}.
%
In such type systems, it is sometimes necessary and natural to include
information in the function types about the data that is captured by
closures. To see why, assume that we want to design a type system to
verify resource usage. Now consider, for example, the curried append
function for integer lists, which has the following type in OCaml.
\[ % \abovedisplayskip 0.3em \belowdisplayskip 0.6em
\code{append} : \type{int\ list} \to \type{int\ list} \to \type{int\ list}
\]
At first glance, we might say that the time complexity of
\code{append} is $O(n)$ if $n$ is the length of the first argument.
But a closer inspection of the definition of \code{append} reveals
that this is a gross simplification. In fact, the complexity of the
partial function call \code{app\_par = append $\ell$} is constant.
Moreover, the complexity of the function \code{app\_par} is
linear---not in the length of the argument but in the length of the
list $\ell$ that is captured in the function closure.
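This cost behavior can be checked concretely with a small instrumented
OCaml sketch (the counter and the name \code{app\_par} are ours, used
only for illustration):

```ocaml
(* Cost-instrumented append: [conses] counts cons-cell allocations,
   the resource we want to bound. *)
let conses = ref 0

let rec append l1 l2 =
  match l1 with
  | [] -> l2
  | x :: xs -> incr conses; x :: append xs l2

let () =
  let l = [1; 2; 3] in
  let app_par = append l in        (* partial application: no cons yet *)
  assert (!conses = 0);
  ignore (app_par [4; 5]);         (* now pays one cons per element of l *)
  assert (!conses = List.length l)
```

The partial application allocates nothing; the full application
allocates exactly one cons cell per element of the captured list
$\ell$, i.e., its cost is linear in $|\ell|$ and independent of the
second argument.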
In general, we have to describe the resource consumption of a curried
function $f : A_1 \to \cdots \to A_n \to A$ with $n$ expressions
$c_i(a_1,\ldots,a_i)$ such that $c_i$ describes the complexity of the
computation that takes place after $f$ is applied to $i$ arguments
$a_1,\ldots,a_i$. We are not aware of any existing type system that can
verify a statement of this form.
To express the aforementioned statement in a type system, we have to
decorate the function types with additional information about the data
that is captured in a function closure. It is however not sufficient to
directly describe the complexity of a closure in terms of its arguments
and the data captured in the closure. Admittedly, this would work to
accurately describe the resource usage in our example function
\code{append} because the first argument is directly captured in the
closure. But in general, the data captured in a closure $f a_1 \cdots
a_i$ can be any data that is computed from the arguments
$a_1,\ldots,a_i$ (and from the data in the environment). Referencing
this data in the types would not only be meaningless for a user; it
would also hamper the compositionality of the type system. It is for
instance unclear how to define subtyping for closures that capture
different data (which is needed, e.g., in the two branches of a
conditional).
To preserve the compositionality of traditional type systems, we
propose to describe the resource usage of a closure as a function of
its argument and the data that is visible in the current environment.
To this end we introduce \emph{open closure types}, function types
that refer to their arguments and to the data in the current
environment.
More formally, consider a typing judgment of the form $\Gamma \der e :
\sigma$, in a type system that tracks fine-grained intensional
properties characterizing not only the shape of values, but the
behavior of the reduction of $e$ into a value (e.g., resource
usage). A typing rule for open closure types, $\Gamma,\Delta \der e :
\tfun{\Gamma'}{x \of \sigma}{\tau}$, captures the idea that, under a
weak reduction semantics, the computation of the closure itself, and
later the computation of the closure \emph{application}, will have
very different behaviors; these are captured by two different typing
environments $\Gamma$ and $\Gamma'$ over the same domain, the free
variables of $e$. To describe the complexity of \code{append}, we
might for instance have a statement
\[ \abovedisplayskip 0.3em \belowdisplayskip 0.6em
\ell \of \type{int\ list} \der \code{append}\ \ell :
\tfun{\ell \of \type{int\ list}}{y \of \type{int\ list}}{int\ list} \; .
\]
This puts us in a position to use type annotations to describe the
resource usage of $\code{append}\ \ell$ as a function of $\ell$ and
the future argument $y$. For example, using type-based amortized
analysis~\cite{HoffmannAH12}, we can express a bound on the number of
created list nodes in \code{append} with the following open closure type.
\[ \abovedisplayskip 0.3em \belowdisplayskip 0.6em
\code{append} :
\tfun{}{ x \of \type{int\ list}^0}
{\tfun{x \of \type{int\ list}^1}{y \of \type{int\ list}^0}{int\ list}^0} \; .
\]
The intuitive meaning of this type for \code{append} is as follows.
To pay for the cons operations in the evaluation of $\code{append}\,
\ell_1$ we need $0 {\cdot} |\ell_1|$ resource units, and to pay for the
cons operations in the evaluation of $\code{append}\, \ell_1 \,
\ell_2$ we need $1 {\cdot} |\ell_1| + 0 {\cdot}|\ell_2|$ resource
units.
The development of a type system for open closure types entails some
interesting technical challenges: term variables now appear in types,
which requires mechanisms for scope management not unlike dependent type
theories. If $x$ appears in $\sigma$, the context
$\Gamma,x\of\tau,y\of\sigma$ is not exchangeable with
$\Gamma,y\of\sigma,x\of\tau$. Similarly, the judgment $\Gamma, x \of
\tau \der e_2 : \sigma$ will not entail $\Gamma \der
\clet{x}{e_1}{e_2} : \sigma$, as the return type $\sigma$ may contain
open closures scoping over $x$; we therefore need to substitute
variables in types.
The main contribution of this paper is a type theory of open closure
types and the proof of its main properties. We start from the simply-typed
lambda calculus, and consider the simple intensional property of
data-flow tracking, annotating each simply-typed lambda-calculus type
with a single boolean variable. This allows us to study the metatheory of
open closure types in a clean and straightforward way. This is the first
important step for using such types in more sophisticated type systems for
resource usage and termination.
Our type system for data-flow tracking captures higher-order data-flow
information. As a byproduct, we get our secondary contribution, a
non-interference property in the sense of information flow theory:
high-level inputs do not influence the (low-level) results of
computations.
To experiment with our type system, we implemented a software
prototype in OCaml (see Section~\ref{sec:impl}).
\begin{version}{\Or\Short\Medium}
A full version of this article, containing the
full proofs and additional details and discussion, is available
online.\footnote{\url{http://gallium.inria.fr/~scherer/research/open_closure_types/scherer-hoffmann-closures-long.pdf}}
\end{version}
\only{\Short}{\vspace{-2ex}}
\subsubsection{Related Work}
In our type system we maintain the invariant that open closure types
only refer to variables that are present in the current typing
context. This is a feature that distinguishes open closure types from
existing formalisms for closure types.
Contextual types~\cite{NanevskiPP08,PientkaD08,StampoulisS12} also
decorate types with context information. However, it is not necessary
in contextual modal type theory that the context that is captured in a
type is related to the current context. % Another difference is that we
% only annotate function types with typing contexts.
Furthermore, our
goal of describing properties that may depend on previous function
arguments and other visible variables is quite different from the main
applications of contextual types in programming language support for
manipulating proof terms and meta-variables.
Closure types carrying a set of captured variables have appeared in
the literature, for example in Leroy~\cite{Leroy92}, which uses
closure types to keep track of \emph{dangerous type variables} that
cannot be generalized without breaking type safety, or in the
higher-order lifetime analysis of Hannan et al.~\cite{Hannan97},
where variable sets denote variables that must be kept in memory.
However, these works have no need to vary function types in different
typing contexts, and subtyping can be defined using set inclusion,
which makes the metatheory significantly simpler. In contrast, our
scoping mechanism allows us to study more complex properties, such as
value dependencies and non-interference.
The classical way to understand value capture in closures in a typed
way is through the \emph{typed closure conversion} of Minamide et
al.~\cite{MinamideMH96}. They use existential types to account for
hidden data in function closures without losing compositionality, by
abstracting over the difference between functions capturing from
different environments. Our system retains this compositionality,
albeit in a less apparent way: we get finer-grained information about
the dependency of a closure on the ambient typing environment. % This
% information is substituted when moving to a smaller, more general
% environment, so that functions of the same input and output types can
% always be mixed together in a given environment.
Typed closure conversion is still possible, and could be typed in
a more precise way, abstracting only over values that are outside the
lexical context.
Petricek et al.~\cite{PetricekUnpub} study \emph{coeffects} systems
with judgments of the form $C^r \Gamma \der e : \tau$ and function
types $C^s \sigma \to \tau$, where $r$ and $s$ are coeffect
annotations over an indexed comonad $C$. Their work is orthogonal to
the present one, as the two cover very different topics: on one side, the
comonadic semantic structure of coarse-grained effect indices; on
the other, the syntactic scoping rules that arise from tracking each
variable of the context separately.
types on term variables would make a semantic study significantly more
challenging, and conversely that use cases of open closure types are
not in general characterized by a comonadic structure.
The non-interference property that we prove is different from the
usual treatment in type systems for information flow like the SLam
Calculus~\cite{HeintzeR98}. In SLam, the information flow into a
closure is accounted for at abstraction time. In contrast, we account
for the information flow into the closure at application time.
\section{A Type System for Open Closures}
\label{sec:types}
\only{\Short}{\vspace{-1ex}}
We define a type system for the simplest problem domain that exhibits
a need for open closure types. Our goal is to determine statically,
for an open term $e$, on which variables of the environment
the value of $e$ depends.
We are interested in weak reduction, and assume a call-by-value
reduction strategy. In this context, an abstraction $\lam x e$ is
already a value, so reducing it does not depend on the environment at
all. More generally, for a term $e$ evaluating to a function
(closure), we make a distinction between the part of the environment
the reduction of $e$ depends on, and the part that will be used when
the resulting closure will be applied. For example, the term
$(y, \lam {x} z)$ depends on the variable $y$ at evaluation time, but
will not need the variable $z$ until the closure in the right pair
component is applied.
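In OCaml, this distinction between evaluation-time and
application-time dependencies can be observed directly; here is a toy
analogue of the example term, with bindings of our choosing:

```ocaml
(* Building the pair reads [y] immediately; the closure [fun () -> z]
   is already a value, so [z] is read only when it is applied. *)
let y = 1
let z = 2
let p = (y, fun () -> z)
let () =
  assert (fst p = y);        (* evaluation-time dependency on y *)
  assert ((snd p) () = z)    (* application-time dependency on z *)
```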
This is where we need open closure types. Our function types are of
the form $\tfun{\Gamma^\Phi}{x \of \sigma^\phi}{\tau}$, where the
mapping $\Phi$ from variables to Booleans indicates on which variables
the evaluation depends at application time. The Boolean $\phi$
indicates whether the argument $x$ is used in the function body. We
call $\Phi$ the dependency annotation of
$\Gamma$. Our previous example would for instance be typed as follows.
\[ \abovedisplayskip 0.5em
y \of \sigma^1, z \of \tau^0 \der (y, \lam {x} z) :
\sigma * (\tfun{y \of \sigma^0, z \of \tau^1}{x \of \rho^0}{\tau}) \]
The typing expresses that the result of the computation depends on the
variable $y$ but not on the variable $z$. Moreover, the result of the
function in the second component of the pair depends on $z$ but not on $y$.
In general, types are defined by the following grammar.
\[ \abovedisplayskip 0.5em
\begin{array}{l@{~}l@{~}l@{\quad}r}
\mathtt{Types} \ni \sigma,\tau,\rho & ::= & & \mbox{types} \\
& \mid & \alpha & \mbox{atoms} \\
& \mid & \tau_1 * \tau_2 & \mbox{products} \\
& \mid & \tfun{\Gamma^\Phi}{x \of \sigma^\phi}{\tau} & \mbox{closures} \\
\end{array}
\]
The closure type $\tfun{\Gamma^\Phi}{x \of \sigma^\phi}{\tau}$ binds
the new argument variable $x$, but not the variables occurring in
$\Gamma$, which refer to variables bound in the current typing
context. Such a type is \emph{well-scoped} only when all the variables
it closes over are actually present in the current context. In
particular, it has no meaning in an empty context, unless $\Gamma$ is
itself empty.
We define well-scoping judgments on contexts ($\Gamma \der$) and types
($\Gamma \der \sigma$). The judgments are defined simultaneously in
Figure~\ref{fig/scoping} and refer to each other. They use
non-annotated contexts: the dependency annotations characterize
data-flow information of \emph{terms}, and are not needed to state the
well-formedness of static types and contexts.
\begin{mathparfig}{fig/scoping}{Well-scoping of types and contexts}
\inferrule[Scope-Context-Nil]
{}
{\emptyset \der}
\inferrule[Scope-Context]
{\Gamma \der \sigma}
{\Gamma, x \of \sigma \der}
\\
\inferrule[Scope-Atom]
{\Gamma \der}
{\Gamma \der \alpha}
\inferrule[Scope-Product]
{\Gamma \der \tau_1\\ \Gamma \der \tau_2}
{\Gamma \der \tau_1 * \tau_2}
\inferrule[Scope-Closure]
{\Gamma_0, \Gamma_1 \der\\
\Gamma_0 \der \sigma\\
\Gamma_0, x \of \sigma \der \tau}
{\Gamma_0, \Gamma_1 \der \tfun{\Gamma_0^\Phi}{x \of \sigma^\phi}{\tau}}
\end{mathparfig}
Notice that the closure contexts appearing in the return type of
a closure, $\tau$ in our rule \Rule{Scope-Closure}, may capture the
variable $x$ corresponding to the function argument, which is why we
chose the dependent-arrow--like notation $(x \of \sigma) \to \tau$
rather than only $\sigma \to \tau$. There is no dependency of types on
terms in this system; the notation is used only for scope tracking.
Note that $\Gamma \der \sigma$ implies $\Gamma \der$ (as proved by
direct induction until an atom or a function closure is reached). Note also that a closure
type $\tfun{\Gamma_0}{x\of\sigma}{\tau}$ is well-scoped in any larger
environment $\Gamma_0, \Gamma_1$: the context information may only mention
variables existing in the typing context, but it need not mention all
of them. As a result, well-scoping is preserved by context extension: if
$\Gamma_0 \der \sigma$ and $\Gamma_0, \Gamma_1 \der$, then
$\Gamma_0, \Gamma_1 \der \sigma$.
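These judgments can be transcribed as a small checker; the following
OCaml sketch (our own simplified representation, with contexts as
lists, newest binding first) mirrors the rules of
Figure~\ref{fig/scoping}:

```ocaml
(* Simplified types: one atom, products, and closure types [Γ0](x:σ)→τ,
   where Γ0 lists the captured variables with their types. *)
type ty =
  | Atom
  | Prod of ty * ty
  | Clos of (string * ty) list * string * ty * ty

(* Γ0 must be a (possibly empty) tail of the current context Γ. *)
let rec is_tail g0 g =
  g0 = g || (match g with [] -> false | _ :: tl -> is_tail g0 tl)

let rec ok_ctx = function
  | [] -> true                                  (* Scope-Context-Nil *)
  | (_, s) :: g -> ok_ty g s                    (* Scope-Context *)
and ok_ty g = function
  | Atom -> ok_ctx g                            (* Scope-Atom *)
  | Prod (t1, t2) -> ok_ty g t1 && ok_ty g t2   (* Scope-Product *)
  | Clos (g0, x, s, t) ->                       (* Scope-Closure *)
    ok_ctx g && is_tail g0 g && ok_ty g0 s && ok_ty ((x, s) :: g0) t
```

In particular, a closure type over $\Gamma_0$ checks in any extension
$\Gamma_0, \Gamma_1$, but not in a context missing $\Gamma_0$.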
\only{\Short}{\vspace{-2ex}}
\subsubsection{A Term Language, and a Naive Attempt at a Type System}
Our term language is the lambda calculus with pairs, let bindings, and
fixpoints. This language is sufficient to discuss the most
interesting problems that arise in an application of closure types in
a more realistic language.
\[
\begin{array}{l@{~}l@{~}l@{\quad}r}
\mathtt{Terms} \ni t,u,e & ::= & & \mbox{terms} \\
& \mid & x & \mbox{variables} \\
& \mid & (e_1, e_2) & \mbox{pairs} \\
& \mid & \pi_i(e) & \mbox{projections ($i \in \{1,2\}$)} \\
& \mid & \lam x e & \mbox{lambda abstractions} \\
& \mid & t \app u & \mbox{applications} \\
& \mid & \clet{x}{e_1}{e_2} & \mbox{let declarations} \\
% & \mid & \fix f x e & \mbox{function fixpoints}
\end{array}
\]
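As a concrete representation, this grammar corresponds to an OCaml
datatype of the following shape (a sketch; the prototype of
Section~\ref{sec:impl} may name things differently):

```ocaml
(* The term language as an OCaml datatype. *)
type var = string

type term =
  | Var of var
  | Pair of term * term
  | Proj of int * term              (* i in {1,2} *)
  | Lam of var * term
  | App of term * term
  | Let of var * term * term
```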
For didactic purposes, we start with an intuitive type system
presented in Figure~\ref{fig/naive-typing}. The judgment
$\Gamma^\Phi \der e : \sigma$ means that the expression $e$ has type
$\sigma$, in the context $\Gamma$ carrying the intensional information
$\Phi$. Context variables mapped to $0$ in $\Phi$ are not used during
the reduction of $e$ to a value. We will show that the rules
\Rule{App-Tmp} and \Rule{Let-Tmp} are not correct, and introduce a new
judgment to develop correct versions of the rules.
\begin{mathparfig}{fig/naive-typing}{Naive rules for the type system}
\inferrule[Var]
{\Gamma, x \of \sigma, \Delta \der}
{\Gamma^0, x \of \sigma^1, \Delta^0 \der x:\sigma}
\inferrule[Product]
{\Gamma^{\Phi_1} \der e_1 : \tau_1\\
\Gamma^{\Phi_2} \der e_2 : \tau_2}
{\Gamma^{\Phi_1 + \Phi_2} \der (e_1, e_2) : \tau_1 * \tau_2}
\inferrule[Proj]
{\Gamma^\Phi \der e : \tau_1 * \tau_2}
{\Gamma^\Phi \der \pi_i(e) : \tau_i}
\inferrule[Lam]
{\Gamma^\Phi, x \of \sigma^\phi \der t : \tau}
{\Gamma^0 \der \lam x t : \tfun{\Gamma^\Phi}{x \of \sigma^\phi}{\tau}}
\Fixpoint{
\inferrule[Fix]
{\Gamma^\Phi,
f \of (\tfun{\Gamma^\Psi}{x \of \sigma^\phi}{\tau})^\chi,
x : \sigma^\phi
\der e : \tau}
{\Gamma^0 \der \fix f x e : \tfun{\Gamma^\Psi}{x \of \sigma^\phi}{\tau}}
}{}
\inferrule[App-Tmp]
{(\Gamma_0, \Gamma_1)^\Phifun \der
t :\tfun{\Gamma_0^\Phiclos
}{x \of \sigma^\phi}{\tau}\\
(\Gamma_0, \Gamma_1)^\Phiarg \der u : \sigma}
{(\Gamma_0, \Gamma_1)^{\Phifun+\Phiclos+\phi.\Phiarg} \der
t \app u : \tau}
\hspace{2em}
\inferrule[Let-Tmp]
{\Gamma^\Phidef \der e_1 : \sigma\\
\Gamma^\Phibody, x \of \sigma^\phi \der e_2 : \tau}
{\Gamma^{\phi.\Phidef+\Phibody} \der
\clet{x}{e_1}{e_2} : \tau}
\end{mathparfig}
In a judgment
$\Gamma^0 \der \lam x t : \tfun{\Gamma^\Phi}{x \of \sigma^\phi}{\tau}$,
$\Gamma$ is bound only in one place (the context), and
$\alpha$-renaming any of its variables necessitates a mirroring change
in its right-hand-side occurrences (in $\Gamma^\Phi$, but also in $\sigma$
and $\tau$), while $x$ is independently bound in the term and in the
type, so the aforementioned type is equivalent to
$\tfun{\Gamma^\Phi}{y \of \sigma^\phi}{\tau[y/x]}$. In particular,
variables occurring in types do \emph{not} reveal implementation
details of the underlying term.
The syntax $\phi.\Phi$ used in the \Rule{App-Tmp} and \Rule{Let-Tmp}
rules is a product, or conjunction, of the single boolean dependency
annotation $\phi$, and of the vector dependency annotation $\Phi$. The
sum $\Phi_1 + \Phi_2$ is the disjunction. In the \Rule{Let-Tmp} rule
for example, if the typing of $e_2$ determines that the evaluation of
$e_2$ does not depend on the definition $x = e_1$ ($\phi$ is $0$),
then $\phi.\Phidef$ will mark all the variables used by $e_1$ as not
needed as well (all $0$), and only the variables needed by $e_2$ will
be marked in the result annotation $\phi.\Phidef + \Phibody$.
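Concretely, the annotation algebra used in \Rule{App-Tmp} and
\Rule{Let-Tmp} can be sketched in OCaml as boolean-valued finite maps
(the module and function names are ours):

```ocaml
(* Dependency annotations Φ as maps from variables to booleans. *)
module M = Map.Make (String)

type annot = bool M.t

(* Φ1 + Φ2: pointwise disjunction. *)
let sum : annot -> annot -> annot =
  M.union (fun _ b1 b2 -> Some (b1 || b2))

(* φ.Φ: conjunction with a scalar; if φ = 0 the whole annotation is
   nullified, as in the Let-Tmp rule. *)
let scale (phi : bool) : annot -> annot = M.map (fun b -> phi && b)
```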
% We can erase dependency annotations from any such derivation, and get
% a correct derivation in the underlying type system without dependency
% information -- here the simply-typed lambda-calculus.
\begin{version}{\Short}
In the scoping judgment
$\Gamma \der \tfun{\Gamma^\Phi}{x \of \sigma}{\tau}$, the repetition
of the context $\Gamma$ is redundant: we could simply write
$\tfun{\Phi}{x \of \sigma}{\tau}$, because in our simplified
setting the intensional information $\Phi$ can easily be separated
from the rest of the typing information, which corresponds to
simply-typed types. However, we found that such a reformulation
makes the technical development harder to follow; the $\Gamma^\Phi$
form allows one to keep track precisely of the domain of the
dependency annotation, and domain changes are precisely the
difficult technical aspect of open closure types. For a more
detailed discussion of this design point, see the full version of
this article.
\end{version}
\begin{version}{\Long}
In the introduction we present closure types of the form
$\tfun{\Gamma}{x \of \sigma}{\tau}$, while we here use the apparently
different form $\tfun{\Gamma^\Phi}{x \of \sigma^\phi}{\tau}$. This new
syntax is actually a simpler special case of the previous one: we
could consider a type grammar of the form $\sigma^\phi$, and the type
$\tfun{\Gamma}{x \of \sigma}{\tau}$ would then capture all the needed
information, as each type in $\Gamma$ would come with its own
annotation. Instead, we do not embed dependency information in the
types directly, and use annotated contexts $\Gamma^\Phi$ to carry
equivalent information. This simplification makes it easier to control
the scoping correctness: it is easier to notice that $\Gamma^\Phi$ and
$\Gamma^\Psi$ are contexts ranging over the same domain than if we
wrote $\Gamma$ and $\Gamma'$. It is made possible by two specific
aspects of this simple system:
\begin{itemize}
\item Our intensional information has a very simple structure, only
a boolean, that does not apply to the types in depth. The
simplification would not work, for example, in a security type
system where products could have components of different security
levels ($\tau^l * \sigma^r$), but the structure of the rules would
remain the same.
\item In this example, we are interested mostly in intensional
information on the contexts, rather than the return type of
a judgment. The general case rather suggests a judgment of the form
$\Gamma^\Phi \der e : \sigma^\phi$, but with only boolean
annotations this boils down to a judgment
$\Gamma^\Phi \der e : \sigma^1$, when we are interested in the value
being type-checked, and a trivial judgment
$\Gamma^0 \der e : \sigma^0$, used to type-check terms that will not
be used in the rest of the computation, and which degenerates to
a check in the simply-typed lambda-calculus. Instead, we define
a single judgment $\Gamma^\Phi \der e : \sigma$ corresponding to the
case where $\phi$ is $1$, and use the notation $\phi.\Phi$ to
nullify the dependency information coming from $e$ when the outer
computation does not actually depend on it ($\phi$ is $0$).
\end{itemize}
While dependency annotations make the development easier to follow,
they do not affect the generality of the type theory, as the common
denominator of open closure type systems is more concerned with the
scoping of closure contexts than the structure of the intensional
information itself.
\end{version}
\only{\Short}{\vspace{-2ex}}
\subsubsection{Maintaining Closure Contexts}
As pointed out, the rules \Rule{App-Tmp} and \Rule{Let-Tmp} of the
system above are wrong (hence the ``temporary'' name): the
left-hand side of the rule \Rule{App-Tmp} assumes that the closure
captures the same environment $\Gamma$ that it is computed in. This
property is initially true in the closure of the rule \Rule{Lam}, but
is not preserved by \Rule{Let-Tmp} (for the body type) or
\Rule{App-Tmp} (for the return type). This means that the intensional
information in a type may become stale, mentioning variables that have
been removed from the context. We will now fix the type system to
never mention unbound variables.
We need a \emph{closure substitution mechanism} to
explain the type $\tau_f = \tfun{\Gamma^\Phi,y \of \rho^\chi}{x \of
\sigma^\phi}{\tau^\psi}$ of a closure $f$ in the smaller environment
$\Gamma$, given dependency information for $y$ in $\Gamma$. Assume
for example that $y$ was bound in a let binding $\code{let } y = e \ldots$
and that the type $\tau_f$ leaves the scope of $y$. Then we
have to adapt the type rules to express the following. ``If $f$
depends on $y$ (at application time) then $f$ depends on the
variables of $\Gamma$ that $e$ depends on.''
We define in Figure~\ref{fig/type-substitution} the judgment
$\Gamma, y \of \rho, \Delta \der \sigma \rewto{y}{\Psi} \Gamma, \Delta' \der \tau$. Assuming
that the variable $y$ in the context $\Gamma, y\of\rho, \Delta$ was
let-bound to a definition with usage information $\Gamma^\Psi$, this
judgment transforms any type $\sigma$ in this context into a type $\tau$
in a context $\Gamma, \Delta'$ that does not mention $y$ anymore. Note
that $\Delta$ and $\Delta'$ have the same domain; only their
intensional information changes: any mention of $y$ in a closure type
of $\Delta$ is removed in $\Delta'$. Also note that
$\Gamma, y \of \rho, \Delta$ and $\Gamma, \Delta'$, or $\sigma$ and
$\tau$, are not annotated with dependency annotations themselves: this
is only a scoping transformation that depends on the dependency
annotations of $y$ \emph{in the closures} of $\sigma$ and $\Delta$.
As for the scope-checking judgment, we simultaneously define the
substitutions on contexts themselves
$\Gamma, y \of \rho, \Delta \rewto{y}{\Psi} \Gamma, \Delta'$.
%
There are two rules for substituting a closure type. If the variable
being substituted is not part of the closure type context
(rule \Rule{Subst-Closure-Notin}), this closure type is
unchanged. Otherwise (rule \Rule{Subst-Closure}) the substitution is
performed in the closure type, and the neededness annotation for $y$
is propagated to its definition context $\Gamma$.
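The two closure rules can be sketched as a recursive function on a
simplified type syntax (a hypothetical representation of ours, with
closure contexts as maps from captured variables to their dependency
booleans):

```ocaml
module M = Map.Make (String)

type ty =
  | Atom of string
  | Prod of ty * ty
  | Clos of bool M.t * string * ty * ty   (* [ctx](x : sigma) -> tau *)

(* [subst y psi t] removes [y] from every closure context of [t]; when
   a closure depended on [y] (flag [true]), the dependencies [psi] of
   y's definition are added to its context instead. *)
let rec subst y (psi : bool M.t) : ty -> ty = function
  | Atom a -> Atom a
  | Prod (t1, t2) -> Prod (subst y psi t1, subst y psi t2)
  | Clos (ctx, x, s, t) ->
    (match M.find_opt y ctx with
     | None ->                               (* Subst-Closure-Notin *)
       Clos (ctx, x, s, t)
     | Some chi ->                           (* Subst-Closure *)
       let ctx' =
         M.union (fun _ b1 b2 -> Some (b1 || b2))
           (M.remove y ctx)
           (M.map (fun b -> chi && b) psi)
       in
       Clos (ctx', x, subst y psi s, subst y psi t))
```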
The following lemma verifies that this substitution preserves
well-scoping of contexts and types.
\begin{lemma}[Substitution preserves scoping]
\label{lem:substitution_preserves_scoping}
If $\Gamma, y \of \rho, \Delta \der$ and
$\Gamma, y \of \rho, \Delta \rewto{y}{\Psi} \Gamma, \Delta'$ hold,
then $\Gamma, \Delta' \der$ holds.
%
If
$\Gamma, y \of \rho, \Delta \der \sigma$ and
$\Gamma, y \of \rho, \Delta \der \sigma \rewto{y}{\Psi} \Gamma,\Delta' \der \tau$ hold,
then $\Gamma,\Delta' \der \tau$ holds.
\end{lemma}
% TODO: mention an explicit weakening lemma; we probably have to add
% variables not just at the end of the context, but also in between.
\begin{version}{\Long}
\begin{proof}
By mutual induction on the judgments
${\Gamma, y \of \rho, \Delta \rewto{y}{\Psi} \Gamma, \Delta'}$
and\\
${\Gamma, y \of \rho, \Delta \der \sigma
\rewto{y}{\Psi} \Gamma, \Delta' \der \tau}$.
\begin{caselist}
\nextcase \Rule{Subst-Context-Nil}: by inversion of
\Rule{Scope-Context}, $\Gamma, y \of \rho \der$ implies
$\Gamma \der \rho$, which in turn implies $\Gamma \der$.
\nextcase \Rule{Subst-Context}: from our hypothesis
$\Gamma, y \of \rho, \Delta, x \of \sigma \der$ we deduce
$\Gamma, y \of \rho, \Delta \der \sigma$. By induction we can
deduce $\Gamma, \Delta' \der \tau$, which gives context
well-formedness $\Gamma, \Delta' \der$.
\nextcase \Rule{Subst-Atom}: direct by \Rule{Scope-Atom} and
induction hypothesis.
\nextcase \Rule{Subst-Product}: by inversion, the last rule of the
derivation of $\Gamma, y \of \rho \der (\sigma_1 * \sigma_2)$ is
\Rule{Scope-Product}, so we can proceed by direct induction on the
premises of both judgments.
\nextcase \Rule{Subst-Closure}:
Using our induction hypothesis on
$\Gamma, y \of \rho, \Delta, \Gamma_1
\rewto{y}{\Psi} \Gamma, \Delta', \Gamma'_1$ we can deduce that
$\Gamma, \Delta', \Gamma'_1 \der$ and in particular $\Gamma, \Delta' \der$.
By inversion, the last rule of
the derivation of
$\Gamma, y \of \rho, \Delta, \Gamma_1 \der
\tfun
{\Gamma^{\Phi_1}, y \of \rho^\chi, \Delta^{\Phi_2}}
{x \of \sigma_1^{\phi_1}}{\sigma_2}$
is \Rule{Scope-Closure}. Its premises are
${\Gamma, y \of \rho, \Delta \der \sigma_1}$ and
$\Gamma, y \of \rho, \Delta, x \of \sigma_1 \der \sigma_2$, from which we
deduce by induction hypothesis $\Gamma, \Delta' \der \tau_1$ and
$\Gamma, \Delta', x \of \tau_1 \der \tau_2$ respectively, allowing us to
deduce that
$\Gamma, \Delta' \der
\tfun
{\Gamma^{\Phi_1+\chi.\Psi}, {\Delta'}^{\Phi_2}}
{x \of \tau_1^{\phi_1}}{\tau_2}$,
which allows us to conclude by weakening with the well-scoped $\Gamma'_1$.
\nextcase \Rule{Subst-Closure-Notin}: direct by induction and inversion.
\end{caselist}
\qed
\end{proof}
\end{version}
\begin{mathparfig}{fig/type-substitution}{Type substitution}
\inferrule[Subst-Context-Nil]
{}
{\Gamma, y \of \rho, \emptyset \rewto{y}{\Psi} \Gamma}
\inferrule[Subst-Context]
{\Gamma, y \of \rho, \Delta \der \sigma
\rewto{y}{\Psi}
\Gamma, \Delta' \der \tau}
{\Gamma, y \of \rho, \Delta, x \of \sigma
\rewto{y}{\Psi}
\Gamma, \Delta', x \of \tau}
\inferrule[Subst-Atom]
{\Gamma, y \of \rho, \Delta \rewto{y}{\Psi} \Gamma, \Delta'}
{\Gamma, y \of \rho, \Delta \der \alpha \rewto{y}{\Psi} \Gamma, \Delta' \der \alpha}
\inferrule[Subst-Product]
{\Gamma, y \of \rho, \Delta \der \sigma_1
\rewto{y}{\Psi} \Gamma, \Delta' \der \tau_1\\
\Gamma, y \of \rho, \Delta \der \sigma_2
\rewto{y}{\Psi} \Gamma, \Delta' \der \tau_2}
{\Gamma, y \of \rho, \Delta \der \sigma_1 * \sigma_2
\rewto{y}{\Psi} \Gamma, \Delta' \der \tau_1 * \tau_2}
\inferrule[Subst-Closure-Notin]
{\Gamma_0, \Gamma_1, y \of \rho, \Delta
\rewto{y}{\Psi} \Gamma_0, \Gamma_1, \Delta'
}
{\Gamma_0, \Gamma_1, y \of \rho, \Delta \der
\tfun{\Gamma_0^{\Phi}}
{x \of \sigma_1^{\phi}}
{\sigma_2}
\rewto{y}{\Psi} \Gamma_0, \Gamma_1, \Delta' \der
\tfun{\Gamma_0^{\Phi}}
{x \of \sigma_1^{\phi}}
{\sigma_2}
}
\inferrule[Subst-Closure]
{\Gamma, y \of \rho, \Delta, \Gamma_1
\rewto{y}{\Psi} \Gamma, \Delta', \Gamma'_1\\
\Gamma, y \of \rho, \Delta \der \sigma_1
\rewto{y}{\Psi} \Gamma, \Delta' \der \tau_1\\
\Gamma, y \of \rho, \Delta, x \of \sigma_1 \der \sigma_2
\rewto{y}{\Psi} \Gamma, \Delta', x \of \tau_1 \der \tau_2
}
{\Gamma, y \of \rho, \Delta, \Gamma_1 \der
\tfun{\Gamma^{\Phi_1},y \of \rho^\chi, \Delta^{\Phi_2}}
{x \of \sigma_1^{\phi}}
{\sigma_2}
\rewto{y}{\Psi} \Gamma, \Delta', \Gamma'_1 \der
\tfun{\Gamma^{\Phi_1+\chi.\Psi}, {\Delta'}^{\Phi_2}}
{x \of \tau_1^{\phi}}
{\tau_2}
}
\end{mathparfig}
We can now give the correct rules for binders:
\begin{mathpar}
\inferrule[Let]
{\Gamma^\Phidef \der e_1 : \sigma\\
\Gamma^\Phibody, x \of \sigma^\phi \der e_2 : \tau\\
\Gamma, x \of \sigma \der \tau \rewto{x}{\Phidef} \Gamma \der \tau'}
{\Gamma^{\phi.\Phidef+\Phibody} \der
\clet{x}{e_1}{e_2} : \tau'}
\inferrule[App]
{(\Gamma_0, \Gamma_1)^\Phifun \der
t :\tfun{\Gamma_0^\Phiclos}{x \of \sigma^\phi}{\tau}\\
(\Gamma_0, \Gamma_1)^\Phiarg \der u : \sigma\\
\Gamma_0, \Gamma_1, x \of \sigma \der \tau
\rewto{x}{\Phiarg} \Gamma_0, \Gamma_1 \der \tau'}
{(\Gamma_0,\Gamma_1)^{\Phifun+\Phiclos+\phi.\Phiarg} \der
t \app u : \tau'}
\end{mathpar}
%TODO give example of typing
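As a small worked example (of our own), consider the term
$\clet{y}{x}{\lam z y}$ in a context $x \of \alpha$, where $\alpha$
and $\beta$ are atoms. The rule \Rule{Lam} types the lambda as
\[
x \of \alpha^0, y \of \alpha^0 \der \lam z y :
\tfun{x \of \alpha^0, y \of \alpha^1}{z \of \beta^0}{\alpha} \; ,
\]
a closure type that mentions $y$, which is about to leave scope.
Typing the definition gives $\Phidef = (x \mapsto 1)$, and the
substitution premise of \Rule{Let} rewrites the closure type:
\[
x \of \alpha, y \of \alpha \der
\tfun{x \of \alpha^0, y \of \alpha^1}{z \of \beta^0}{\alpha}
\rewto{y}{\Phidef}
x \of \alpha \der
\tfun{x \of \alpha^1}{z \of \beta^0}{\alpha} \; ,
\]
so that the conclusion
$x \of \alpha^0 \der \clet{y}{x}{\lam z y} :
\tfun{x \of \alpha^1}{z \of \beta^0}{\alpha}$
mentions only variables in scope: the application-time dependency on
$y$ has been rerouted to $x$.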
\begin{lemma}[Typing respects scoping]
\label{lem:typing_preserves_scoping}
If $\Gamma \der t : \sigma$ holds,
then $\Gamma \der \sigma$ holds.
\end{lemma}
This lemma guarantees that we fixed the problem of stale intensional
information: types appearing in the typing judgment are always
well-scoped.
\begin{version}{\Long}
\begin{proof}
By induction on the derivation of $\Gamma \der t : \sigma$.
\begin{caselist}
\nextcase \Rule{Var}: from the premise
$\Gamma^0, x:\sigma^1, \Delta^0 \der$ we have
$\Gamma \der \sigma$.
\nextcase \Rule{Product}: direct by induction.
\nextcase \Rule{Proj}: the induction hypothesis is
$\Gamma \der \tau_1 * \tau_2$, from which we get
$\Gamma \der \tau_i$ (for $i \in \{1,2\}$) by inversion.
\nextcase \Rule{Lam}: the induction hypothesis is
$\Gamma, x \of \sigma \der \tau$. From this we get
$\Gamma, x \of \sigma \der$ and therefore $\Gamma \der \sigma$, which
allows us to conclude with \Rule{Scope-Closure}.
\Fixpoint{
\nextcase \Rule{Fix}: the hypothesis implies
$\Gamma, f \of \tfun{\Gamma^\Psi}{x \of \sigma^\phi}{\tau} \der$,
which in turn implies
$\Gamma \der \tfun{\Gamma^\Psi}{x \of \sigma^\phi}{\tau}$.
}{}
\nextcase \Rule{App}: using our induction hypothesis on the first
premise gives us that
$\Gamma_0, \Gamma_1 \der \tfun{\Gamma_0^{\Phi_{\code{clos}}}}{x \of \sigma}{\tau}$,
so by inversion $\Gamma_0, \Gamma_1 \der$ and
$\Gamma_0, x \of \sigma \der \tau$. The latter fact can be weakened into
$\Gamma_0, \Gamma_1, x \of \sigma \der \tau$, and then combined
with the last premise
$\Gamma_0, \Gamma_1, x \of \sigma \der \tau \rewto{x}{\Phi_{\code{arg}}}
\Gamma_0, \Gamma_1 \der \tau'$
and Lemma~\ref{lem:substitution_preserves_scoping} to get our goal
$\Gamma_0, \Gamma_1 \der \tau'$.
\nextcase \Rule{Let}: the reasoning is similar to the \Rule{App} case. By
induction on the middle premise, we have
$\Gamma, x \of \sigma \der \tau$; combined with the right premise
$\Gamma, x \of \sigma \der \tau \rewto{x}{\Phidef} \Gamma \der \tau'$,
this gives $\Gamma \der \tau'$.
\end{caselist}
\qed
\end{proof}
\end{version}
We now introduce a convenient derived notation
$\Gamma^\Phi \der \tau \rewto{y}{\Psi} \Gamma'^\Phip \der \tau'$, defined below.
This substitution relation not only removes $y$ from the open closure
types in $\Gamma$; it also updates the dependency annotation on
$\Gamma$ to add the dependency $\Psi$, corresponding to all the
variables that $y$ itself depended on, whenever $y$ is marked as needed.
%
\begin{mathpar} \abovedisplayskip 0em \belowdisplayskip 0em
\inferrule[]
{\Gamma, y \of \rho, \Delta \der \tau \rewto{y}{\Psi} \Gamma, \Delta' \der \tau'}
{\Gamma^{\Phi_1}, y \of \rho^\chi, \Delta^{\Phi_2} \der \tau \rewto{y}{\Psi} \Gamma^{\Phi_1 + \chi.\Psi}, {\Delta'}^{\Phi_2} \der \tau'}
\end{mathpar}
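As a concrete illustration (the annotations are boolean and chosen for
exposition only), consider a single prefix variable $x$ and suppose
$y$ is marked as needed ($\chi = 1$). Substituting $y$ away transfers
its dependency $\Psi$ onto the prefix:
\[
x \of \tyc{int}^{0},\; y \of \tyc{bool}^{1} \der \tau
\;\rewto{y}{\Psi}\;
x \of \tyc{int}^{0 + 1.\Psi(x)} \der \tau'
\]
so $x$ ends marked as needed exactly when $\Psi(x) = 1$, that is, when
the value of $y$ depended on $x$. If $y$ had instead been marked
unneeded ($\chi = 0$), the annotation on $x$ would remain $0$.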
%%
\begin{version}{\Long}
It is interesting to see that substituting $y$ away in
$\Gamma^{\Phi_1}, y \of \rho, \Delta^{\Phi_2}$ changes the annotation
on $\Gamma$, but not its types ($\Gamma$ is unchanged in the output as
its types may not depend on $y$), while it changes the types in
$\Delta$ but not its annotation ($\Phi_2$ is unchanged in the output
as a value for $y$ may only depend on variables from $\Gamma$, not
$\Delta$).
\end{version}
\begin{version}{\Long}
The following technical results allow us to permute substitutions on
unrelated variables. They will be used in the typing soundness proof
of the next section~(Theorem~\ref{thm:sound}).
\begin{lemma}[Confluence]
\label{lem:confluence}
If $\Gamma_1 \der \tau_1 \rewto{x_a}{\Psi_a} \Gamma_{2a} \der \tau_{2a}$
and $\Gamma_1 \der \tau_1 \rewto{x_b}{\Psi_b} \Gamma_{2b} \der \tau_{2b}$
then there exists a unique $\Gamma_3 \der \tau_3$ such that
\[ \Gamma_{2a} \der \tau_{2a} \rewto{x_b}{(\Psi_b + \Psi_a . \Psi_b(x_a))} \Gamma_3 \der \tau_3
\qquad \textrm{and} \qquad
\Gamma_{2b} \der \tau_{2b} \rewto{x_a}{(\Psi_a + \Psi_b . \Psi_a(x_b))} \Gamma_3 \der \tau_3 \]
\end{lemma}
\begin{proof}
Without loss of generality we can assume that $x_a$ appears before
$x_b$ in $\Gamma_1$, so in particular $\Psi_a(x_b) = 0$. For any subcontext of the form
\[ {\Delta_1}^{\Psi_1}, x_a \of {\sigma_a}^{\rho_a}, {\Delta_2}^{\Psi_2}, x_b \of {\sigma_b}^{\rho_b} \],
assume that substituting $\Psi_a$ for $x_a$ first results in
\[ {\Delta_1}^{\Psi_1 + \rho_a . \Psi_a}, {\Delta_{2a}}^{\Psi_2}, x_b \of {\sigma_{ba}}^{\rho_b} \],
while substituting $\Psi_b$ for $x_b$ first results in
\[ {\Delta_1}^{\Psi_1 + \rho_b . \Psi_b(\Delta_1)}, x_a \of {\sigma_a}^{\rho_a + \rho_b . \Psi_b(x_a)}, {\Delta_2}^{\Psi_2 + \rho_b . \Psi_b(\Delta_2)} \].
%
Substituting $\Psi_b + \Psi_a . \Psi_b(x_a)$ for $x_b$ in
\[ {\Delta_1}^{\Psi_1 + \rho_a . \Psi_a}, {\Delta_{2a}}^{\Psi_2}, x_b \of {\sigma_{ba}}^{\rho_b} \]
results in
\[ {\Delta_1}^{\Psi_1 + \rho_a . \Psi_a + \rho_b . (\Psi_b + \Psi_a . \Psi_b(x_a))(\Delta_1)}, {\Delta_{2a}}^{\Psi_2 + \rho_b . (\Psi_b + \Psi_a . \Psi_b(x_a))(\Delta_{2a})} \]
which simplifies to
\[ {\Delta_1}^{\Psi_1 + \rho_a . \Psi_a + \rho_b . \Psi_b(\Delta_1) + \rho_b . \Psi_b(x_a) . \Psi_a}, {\Delta_{2a}}^{\Psi_2 + \rho_b . \Psi_b(\Delta_{2a})} \]
%
Substituting $\Psi_a + \Psi_b . \Psi_a(x_b) = \Psi_a$ for $x_a$ in
\[ {\Delta_1}^{\Psi_1 + \rho_b . \Psi_b(\Delta_1)}, x_a \of {\sigma_a}^{\rho_a + \rho_b . \Psi_b(x_a)}, {\Delta_2}^{\Psi_2 + \rho_b . \Psi_b(\Delta_2)} \]
results in
\[ {\Delta_1}^{\Psi_1 + \rho_b . \Psi_b(\Delta_1) + (\rho_a + \rho_b . \Psi_b(x_a)) . \Psi_a}, {\Delta_{2a}}^{\Psi_2 + \rho_b . \Psi_b(\Delta_2)} \]
which simplifies to
\[ {\Delta_1}^{\Psi_1 + \rho_b . \Psi_b(\Delta_1) + \rho_a . \Psi_a + \rho_b . \Psi_b(x_a) . \Psi_a}, {\Delta_{2a}}^{\Psi_2 + \rho_b . \Psi_b(\Delta_2)} \]
Given that $\Delta_2$ and $\Delta_{2a}$ have the same domain
(only different types), the restrictions $\Psi_b(\Delta_2)$ and
$\Psi_b(\Delta_{2a})$ are equal, allowing us to conclude that the two substitution orders indeed end in the same sequent
\[ {\Delta_1}^{\Psi_1 + (\rho_a + \rho_b . \Psi_b(x_a)) . \Psi_a + \rho_b . \Psi_b(\Delta_1)}, {\Delta_{2a}}^{\Psi_2 + \rho_b . \Psi_b(\Delta_2)} \]
Note that we can make sense, informally, of this resulting sequent. The variables used by this final context are
\begin{itemize}
\item the variables of $\Delta_1$ used in the initial judgment ($\Psi_1$)
\item the variables of $\Delta_2$ (updated in $\Delta_{2a}$ to remove references to the substituted variable $x_a$) used in the initial judgment ($\Psi_2$)
\item the variables used by $\Psi_b$, if $x_b$ is itself used ($\rho_b$ is $1$)
\item the variables used by $\Psi_a$, if either $x_a$ was used ($\rho_a$ is $1$), or $x_b$ is used ($\rho_b$ is $1$) and itself uses $x_a$ ($\Psi_b(x_a)$ is $1$).
\end{itemize}
To build this intuition, we have again read the annotations as
booleans, but note that the equivalence proof was carried out in
a purely algebraic manner. It should therefore be preserved in future
work where the intensional information has a richer structure.
\qed
\end{proof}
\begin{corollary}[Reordering of substitutions]
\label{lem:reordering}
If $\Psi_a$ and $\Psi_b$ have domain $\Gamma$, and \[ {\Gamma_1}^{\Phi_1} \der \tau_1 \rewto{x_a}{\Psi_a} {\Gamma_2}^{\Phi_2} \der \tau_2 \rewto{x_b}{(\Psi_b + \Psi_b(x_a) . \Psi_a)} {\Gamma_3}^{\Phi_3} \der \tau_3 \]
then there exists ${\Gammap_2}^{\Phip_2} \der \tau'_2$ such that
\[ {\Gamma_1}^{\Phi_1} \der \tau_1 \rewto{x_b}{\Psi_b} {\Gammap_2}^{\Phip_2} \der \tau'_2 \rewto{x_a}{(\Psi_a + \Psi_a(x_b) .\Psi_b)} {\Gamma_3}^{\Phi_3} \der \tau_3 \]
\end{corollary}
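To see the composite dependencies at work, consider a small boolean
instance (all values chosen for illustration only). Take a context
$z \of \tyc{int}^{0}, x_a \of \sigma_a^{0}, x_b \of \sigma_b^{1} \der \tau$
with $\Psi_a(z) = 1$, $\Psi_b(z) = 0$ and $\Psi_b(x_a) = 1$: the
needed variable $x_b$ depends on $x_a$, which in turn depends on $z$.
Substituting $x_a$ first leaves $z$ annotated
$0 + 0 . \Psi_a(z) = 0$; the subsequent substitution of $x_b$ uses the
composite dependency $\Psi_b + \Psi_a . \Psi_b(x_a)$, whose value at
$z$ is $0 + 1 . 1 = 1$, so $z$ ends annotated $1$. In the other order,
substituting $x_b$ first annotates $x_a$ with
$0 + 1 . \Psi_b(x_a) = 1$ and leaves $z$ at $0 + 1 . \Psi_b(z) = 0$;
the subsequent substitution of $x_a$ (with composite
$\Psi_a + \Psi_b . \Psi_a(x_b) = \Psi_a$) again ends with $z$
annotated $1$. Both orders agree, as the lemma predicts: $z$ is needed
because the needed variable $x_b$ depends on it through $x_a$.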
\end{version}
\begin{version}{\Long}
\subsubsection{On open closure types on the left of function types}
Note that the rule \Rule{Subst-Closure} handles the function type on
the left-hand side of the arrow, $\sigma_1$, in a specific and subtle
way: it must be unchanged by the substitution judgment. In a slightly
simplified form, the judgment reads:
\begin{mathpar}
\inferrule
{\Gamma, y \of \rho, \Delta \vdash \sigma_1
\rewto{y}{\Psi} \Gamma, \Delta' \der \sigma_1\\
\Gamma, y \of \rho, \Delta, x \of \sigma_1 \der \sigma_2
\rewto{y}{\Psi} \Gamma, \Delta', x \of \sigma_1 \der \tau_2
}
{\Gamma, y \of \rho, \Delta \der
\tfun{\Gamma^{\Phi_1},y \of \rho^\chi, \Delta^{\Phi_2}}
{x \of \sigma_1^{\phi}}
{\sigma_2}
\rewto{y}{\Psi} \Gamma, \Delta' \der
\tfun{\Gamma^{\Phi_1+\chi.\Psi}, {\Delta'}^{\Phi_2}}
{x \of \sigma_1^{\phi}}
{\tau_2}
}
\end{mathpar}
This corresponds to the usual ``change of direction'' on the left of
the arrow type. A substitution
$\Gamma, y \of \rho \der \tau \rewto{y}{\Psi} \Gamma \der \tau'$ is
a lossy transformation, as we forget how $y$ is used in $\tau$ and
instead merge its definition information with the rest of the context
information in $\tau'$. Such a loss makes sense for the return type of
a function: we forget information about the return value. But by
contravariance of the input argument, we should instead \emph{refine}
the argument types.
But as this gain or loss of precision corresponds to variables going
out of scope, such a refinement could only happen in smaller nested
scopes. On the contrary, when going out to a wider scope, the only
possibility is that the closure type does not depend on the particular
variable being substituted (so the type $\sigma$ is preserved:
$\Gamma, y\of\rho, \Delta \der \sigma \rewto{y}{\Psi} \Gamma, \Delta' \der \sigma$).
If the variable was used, a loss of precision would be possible: such
a substitution must be rejected.
% \pagebreak
Consider the following example:
\begin{lstlisting}
(* in context $\Gamma$ *)
let x : int = e_x in
let y : bool = e_y in
let f (g : $\tfun{\Gamma^0, x\of\tyc{int}^1}{z:\tyc{unit}^0}{\tyc{int}}$) : $\tyc{int}$ = g () in
f ($\lambda$z. x);
f
\end{lstlisting}
In the environment $\Gamma, x \of \tyc{int}, y \of \tyc{bool}$, the
type of \code{f}'s function argument \code{g} describes a function
whose result depends on \code{x}. We can still express this dependency
when substituting the variable \code{y} away, that is when considering
the type of the expression \code{(let y = ... in let f g = ... in f)}
as a whole: the argument will still have type
$\tfun{\Gamma^0, x \of \tyc{int}^1}{z:\tyc{unit}}{\tyc{int}}$. However,
this dependency on \code{x} cannot make sense anymore if we remove
\code{x} itself from the context: the substitution does not preserve
this function type. This makes the whole expression
\code{(let x = ... in (let y = ... in let f g = ... in f))} ill-typed,
as \code{x} escapes its scope in the argument function type.
One way to understand this requirement is that there are two parts to
making an analysis fully ``higher-order''. First, it must handle
programs that take functions as input; second, it must handle programs
that return functions as the result of computations. Some languages
only pass functions as parameters (this is in particular the case of
C with pointers to global functions), while constructions such as
currying fundamentally rely on function creation with environment
capture. Our system proposes a new way to handle this second part, and
is intentionally simplistic, to the point of being restrictive, on the
rest.
In a non-toy language one would want to add subtyping of context
information, which would allow a controlled loss of precision to, for
example, create lists of functions with slightly different context
information. Another useful feature would be context-information
polymorphism, to express functions that are parametric with respect to
the context information of their argument. This is intentionally left
to future work.
\end{version}
\section{A Big-Step Operational Semantics}
\only{\Short}{\vspace{-1ex}}
In this section, we define an operational semantics for our term
language and use it to prove the soundness of the type system
(Theorem~\ref{thm:sound}). Our semantics is equivalent to the usual
call-by-value big-step reduction semantics for the lambda calculus in
the sense that computation happens at the same time. There is,
however, a notable difference.
Function closures are not built in the same way as they are in classical
big-step semantics. Usually, we have a rule of the form
$ \inferrule{}{V \der \lam x t \redto{} (V, \lam x t)} $ where the
closure for $\lam x t$ is a pair of the value environment $V$
(possibly restricted to its subset appearing in $t$) and the function
code. In contrast, we capture no values at closure creation time in
our semantics:
$ \inferrule{}{V \der \lam x t \redto{} (\emptyset, \lam x t)}$. The
captured values will be added to the closure incrementally, during the
reduction of binding forms that introduced them in the context.
Consider for example the following two derivations; one in the
classic big-step reduction, and the other in our alternative
system.
\begin{mathpar}
\inferrule[Classic-Red-Let]
{x \of v \der x \credto v\\
x \of v, y \of v \der \lam z y \credto ((x \mapsto v, y \mapsto v), \lam z y)}
{x \of v \der \clet{y}{x}{\lam z y} \credto ((x \mapsto v, y \mapsto v), \lam z y)}
\inferrule[Our-Red-Let]
{x \of v \der x \redto v\\
x \of v, y \of v \der \lam z y \redto (\fvenv{x,y}, \emptyset, \lam z y)\\
(\fvenv{x,y}, \emptyset, \lam z y) \rewto{y}{v} (\fvenv{x}, y \mapsto v, \lam z y)}
{x \of v \der \clet{y}{x}{\lam z y} \redto (\fvenv{x}, y \mapsto v, \lam z y)}
\end{mathpar}
Rather than capturing the whole environment in a closure, we store
no values at all at the beginning (merely recording the variable
names), and add values incrementally, just before they are popped from
the environment. This is done by the \emph{value substitution}
judgment $w \rewto{x}{v} w'$ that we will define in this section. The
reason for this choice is that it closely corresponds to our typing
rules, value substitution being a runtime counterpart of substitution
in types $\Gamma \der \sigma \rewto{x}{\Phi} \Gamma' \der \sigma'$;
this common structure is essential to the proof of type soundness
(Theorem~\ref{thm:sound}).
Note that derivations in this modified system and in the original one
are in one-to-one correspondence. It should not be considered a new
dynamic semantics, but rather a reformulation that is convenient for
our proofs, as it mirrors our static judgment structure.
\only{\Short}{\vspace{-2ex}}
\subsubsection{Values and Value Substitution}
Values are defined as follows.
\[
\begin{array}{l@{~}l@{~}l@{\quad}r}
\mathtt{Val} \ni v,w & ::= & & \mbox{values} \\
& \mid & \code{v}_{\alpha} & \mbox{value of atomic type} \\
& \mid & (v, w) & \mbox{value tuples} \\
& \mid & (\fvenv{x_j}_\jJ, (x_i \mapsto v_i)_\iI, \lam x t)
& \mbox{function closures} \\
\Fixpoint{
& \mid & (\fvenv{x_j}_\jJ, (x_i \mapsto v_i)_\iI, \fix f x t)
& \mbox{recursive function closures} \\
}{}
\end{array}
\]
The set of variables bound in a closure is split into an ordered
mapping $(x_i \mapsto v_i)_\iI$ for variables that have already been
substituted by their values, and a simple list $\fvenv{x_j}_\jJ$ of
variables whose values have not yet been captured. Both are binding
occurrences of variables bound in $t$; $\alpha$-renaming them is
correct as long as $t$ is updated as well.
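As an illustration of incremental capture (a hypothetical instance;
the precise judgment on closure values is the one announced above and
defined in this section), substituting values for the pending
variables of a closure moves them, one at a time, from the list into
the ordered mapping:
\[
(\fvenv{x,y},\, \emptyset,\, \lam z\, (x,y))
\;\rewto{y}{w}\;
(\fvenv{x},\, y \mapsto w,\, \lam z\, (x,y))
\;\rewto{x}{v}\;
(\fvenv{},\, (x \mapsto v, y \mapsto w),\, \lam z\, (x,y))
\]
The function body is unchanged throughout; only the closure
environment records which variables have been captured so far.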
To formulate our type soundness result, we define a typing judgment on
values $\Gamma \der v : \sigma$ in Figure~\ref{fig/value-typing}. An
intuition for the rule \Rule{Value-Closure} is the
following. Internally, the term $t$ has a dependency $\Gamma^\Phi$ on
the ambient context, but also dependencies $(\tau_i^{\psi_i})$ on the
captured variables. But externally, the type may not mention the
captured variables, so it reports a different dependency
$\Gamma^\Phip$ that corresponds to the internal dependency
$\Gamma^\Phi$, combined with the dependencies $(\Psi_i)$ of the
captured values. Both families $(\psi_i)_\iI$ and $(\Psi_i)_\iI$ are
existentially quantified in this rule.
In the judgment rule, the notation $(x_j:\tau_j)_{j