The course on typed tagless-final embeddings of domain-specific languages was presented at the Spring School on Generic and Indexed Programming (SSGIP) < http://www.comlab.ox.ac.uk/projects/gip/school.html > at Wadham College, Oxford, UK, from 22nd to 26th March 2010. This page collects the notes for the course, in the form of extensively commented Haskell and OCaml code.
These lecture notes introduce the final approach slowly and in detail, highlighting extensibility, the solution to the expression problem, and the seemingly impossible pattern-matching. We develop the approach further, to type-safe cast, run-time type representation, Dynamics, and type reconstruction. We finish with telling examples of type-directed partial evaluation and encodings of type-and-effect systems and linear lambda-calculus.
We will be talking about ordinary data types and (generic) operations on them. The expression problem will make its appearance. The first-order case makes it easier to introduce de-serialization and seemingly non-compositional operations.
Symantics: parameterization of terms by interpreters
Initial and Final, Deep and Shallow, First-class
Algebraic data types are indeed not extensible
Adding a new expression form to the final view: solving the expression problem
Serialization and de-serialization
De-serializing the extended language
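The items above can be summarized in a minimal sketch of the first-order tagless-final embedding. The class and method names below (ExpSYM, MulSYM, lit, neg, add, mul) follow the conventions of the course code, but the details of the actual files may differ; the sketch shows the two interpreters and the extension with multiplication that solves the expression problem.

```haskell
{-# LANGUAGE FlexibleInstances #-}

-- The language of integer literals, negation and addition:
-- a term is parameterized by the interpreter repr
class ExpSYM repr where
  lit :: Int -> repr
  neg :: repr -> repr
  add :: repr -> repr -> repr

-- One interpretation: evaluation
instance ExpSYM Int where
  lit n     = n
  neg e     = negate e
  add e1 e2 = e1 + e2

eval :: Int -> Int
eval = id

-- Another interpretation: showing (serialization in spirit)
instance ExpSYM String where
  lit n     = show n
  neg e     = "(-" ++ e ++ ")"
  add e1 e2 = "(" ++ e1 ++ " + " ++ e2 ++ ")"

view :: String -> String
view = id

-- Extending the language with multiplication: a new class and new
-- instances; nothing above is touched or recompiled
class MulSYM repr where
  mul :: repr -> repr -> repr

instance MulSYM Int    where mul e1 e2 = e1 * e2
instance MulSYM String where mul e1 e2 = "(" ++ e1 ++ " * " ++ e2 ++ ")"

-- A sample term of the extended language, interpretable many ways
tf :: (ExpSYM repr, MulSYM repr) => repr
tf = add (lit 8) (neg (mul (lit 1) (lit 2)))
```

Here eval tf yields 6 and view tf yields "(8 + (-(1 * 2)))"; the same tf can be fed to any present or future interpreter.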
The traditional application of objects to represent extensible data types. Alas, the set of operations on these data types is not extensible.
Tagless-final embedding using modules
Tagless-final embedding with objects emulating type-class dictionaries. Both the language and the set of its interpretations are extensible.
Pushing the negation down: the initial implementation
Pushing the negation down: the final implementation
Pushing the negation down for extended tagless-final terms
Flattening of additions, the initial and the final implementations
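The negation-pushing transformation in the final style can be sketched as follows. It is a hedged reconstruction of the technique described above (names are illustrative): a term is interpreted as a function from the negation context, and the result is again a tagless-final term, open to further interpretation.

```haskell
{-# LANGUAGE FlexibleInstances #-}

class ExpSYM repr where
  lit :: Int -> repr
  neg :: repr -> repr
  add :: repr -> repr -> repr

instance ExpSYM String where
  lit n     = show n
  neg e     = "(-" ++ e ++ ")"
  add e1 e2 = "(" ++ e1 ++ " + " ++ e2 ++ ")"

view :: String -> String
view = id

-- The context: are we under an odd number of negations?
data Ctx = Pos | Neg

-- Interpret a term as a function from the negation context;
-- negation is pushed down until it meets a literal
instance ExpSYM repr => ExpSYM (Ctx -> repr) where
  lit n Pos   = lit n
  lit n Neg   = neg (lit n)
  neg e Pos   = e Neg
  neg e Neg   = e Pos
  add e1 e2 c = add (e1 c) (e2 c)

-- The transformation proper: start in the positive context.
-- The result is a tagless-final term, interpretable in any way.
push_neg :: (Ctx -> repr) -> repr
push_neg e = e Pos
```

The coverage of all cases is syntactically patent, and the result type shows that well-typedness is preserved by construction.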
The final-initial isomorphism, and its use for implementing arbitrary pattern-matching operations on tagless-final terms.
< http://www.comlab.ox.ac.uk/ralf.hinze/SSGIP10/Slides.pdf >
Ralf Hinze, in Part 7 of his Spring School course, derived this `initial-final' isomorphism rigorously, generally and elegantly from the point of view of Category Theory. In the first-order case, `initial' and `final' are the left and the right views of the same Initial Algebra. The `final' view is, in the first-order case, the ordinary Church encoding.
Higher-order languages are data types with binding, so to speak. In the first part, only the interpreters were typed; we could get away with our object language being unityped. Now the object language itself becomes typed, bringing in the interesting issues of interpreting a typed language in a typed metalanguage while ensuring type preservation. It is this part that explains the attributes `typed' and `tagless' in the title of the course.
The illustration of problems of embedding a typed DSL into a typed metalanguage
Either the Universal type (and hence spurious partiality, type tags and inefficiency), or fancy type systems seem inevitable. The problem stems from algebraic data types' being too broad: they express not only well-typed DSL terms but also ill-typed ones.
< http://www.iis.sinica.edu.tw/~scm/2008/typed-lambda-calculus-interprete/ >
Shin-Cheng Mu: Typed Lambda-Calculus Interpreter in Agda. September 24, 2008
Shin-Cheng Mu solves the problem of the type-preserving tagless interpretation of simply-typed lambda-calculus, relying on dependent types and type functions in full glory.
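The problem can be seen in a minimal sketch (a hypothetical unityped encoding; names are illustrative). The algebraic data type below admits ill-typed terms such as A (B True) (B False), so the interpreter must inject all values into a Universal type with tags, and becomes spuriously partial.

```haskell
-- de Bruijn variables
data Var = VZ | VS Var

-- The data type is too broad: it also expresses ill-typed terms
data Exp = V Var | B Bool | L Exp | A Exp Exp

-- The Universal type of values, with type tags
data U = UB Bool | UA (U -> U)

-- The interpreter is partial: it may crash on a tag mismatch
-- or an open term, although well-typed closed terms never do
eval :: [U] -> Exp -> U
eval env (V v)     = lookp v env
  where
    lookp VZ     (x:_)  = x
    lookp (VS w) (_:xs) = lookp w xs
    lookp _      []     = error "open term"
eval _   (B b)     = UB b
eval env (L e)     = UA (\x -> eval (x:env) e)
eval env (A e1 e2) = case eval env e1 of
  UA f -> f (eval env e2)
  _    -> error "type error: applying a non-function"
```

The tags UB and UA, and the run-time checks on them, are exactly what the typed tagless-final approach eliminates.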
Tagless-initial and Tagless-final evaluators
Typed, tagless, final, with de Bruijn indices: Expressing the type system of simply-typed lambda-calculus in Haskell98. No dependent types are necessary after all. The types of methods in the Symantics type class read like the axioms and inference rules of the implication fragment of minimal logic.
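A minimal sketch of that embedding, under the naming conventions of the course (z, s, lam, app; the actual file may differ in details): the method types below can indeed be read as the axiom, weakening, implication introduction and elimination of the implication fragment of minimal logic.

```haskell
-- Typed tagless-final embedding with de Bruijn indices.
-- repr h a is a term of type a in the environment h (a nested pair).
class Symantics repr where
  int :: Int -> repr h Int
  add :: repr h Int -> repr h Int -> repr h Int
  z   :: repr (a, h) a                    -- the most recent binding
  s   :: repr h a -> repr (b, h) a        -- weakening: skip one binding
  lam :: repr (a, h) b -> repr h (a -> b) -- implication introduction
  app :: repr h (a -> b) -> repr h a -> repr h b

-- The evaluator: a term denotes a function of its environment
newtype R h a = R { unR :: h -> a }

instance Symantics R where
  int n     = R (const n)
  add e1 e2 = R (\h -> unR e1 h + unR e2 h)
  z         = R fst
  s e       = R (unR e . snd)
  lam e     = R (\h a -> unR e (a, h))
  app e1 e2 = R (\h -> unR e1 h (unR e2 h))

-- Closed terms are interpreted in the empty environment
eval :: R () a -> a
eval e = unR e ()
```

Ill-typed object terms are simply not expressible, so eval needs no tags and no run-time checks; this is Haskell98 except for the inessential sugar.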
Typed, tagless, final, in the higher order abstract syntax (HOAS). We illustrate extending the DSL with more constants, types, and expression forms.
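In the HOAS variant, object-language binding is represented by metalanguage binding. The following is a hedged sketch (illustrative names) of the class, the evaluator, and an extension with a new expression form:

```haskell
-- Typed tagless-final embedding in higher-order abstract syntax
class Symantics repr where
  int :: Int -> repr Int
  add :: repr Int -> repr Int -> repr Int
  lam :: (repr a -> repr b) -> repr (a -> b)
  app :: repr (a -> b) -> repr a -> repr b

-- The evaluator: the interpretation of a term of type a is just an a
newtype R a = R { unR :: a }

instance Symantics R where
  int n     = R n
  add e1 e2 = R (unR e1 + unR e2)
  lam f     = R (unR . f . R)
  app e1 e2 = R (unR e1 (unR e2))

eval :: R a -> a
eval = unR

-- Extending the DSL with a new expression form: a new class,
-- with instances for each existing interpreter
class MulSYM repr where
  mul :: repr Int -> repr Int -> repr Int

instance MulSYM R where
  mul e1 e2 = R (unR e1 * unR e2)
```

Adding new constants or types proceeds in the same way, leaving all existing code untouched.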
Initial-final isomorphism, in the higher-order case
De-serialization: (Dynamic) Type Checking
In contrast to an earlier version of the type checker, we use de Bruijn indices and obtain much clearer code. The code is quite similar to Baars and Swierstra's ``Typing Dynamic Typing'' (ICFP02). However, the result of our type-checking is an embedded DSL expression that can be interpreted many times and in many ways, rather than being merely evaluated. The set of possible interpretations is open. Also, our code is written to expose more properties of the type-checker for verification by the Haskell type-checker; for example, that closed source terms are de-serialized into closed target terms.
Type representation, equality and the type-safe generalized cast
We present an above-the-board version of Data.Typeable, in the tagless-final style. Our implementation uses no GHC internal operations, no questionable extensions, nor even a hint of unsafe operations.
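The core of such a representation can be sketched as follows (a hedged, illustrative fragment: the actual course code also derives type equality and the type-safe gcast from this class; here we show only the representation and a show interpreter):

```haskell
-- A tagless-final representation of simple types:
-- trepr t is the representation of the object type t
class TSYM trepr where
  tint :: trepr Int
  tarr :: trepr a -> trepr b -> trepr (a -> b)

-- One interpreter of type representations: showing a type
newtype ShowT a = ShowT String

viewT :: ShowT a -> String
viewT (ShowT s) = s

instance TSYM ShowT where
  tint                      = ShowT "Int"
  tarr (ShowT a) (ShowT b)  = ShowT ("(" ++ a ++ " -> " ++ b ++ ")")
```

Because the set of interpreters is open, the same type representations later serve for equality testing and safe casting, with no unsafeCoerce anywhere.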
< http://www.comlab.ox.ac.uk/projects/gip/school/tc.hs >
Stephanie Weirich some time ago wrote a very similar type-checker, but in the initial style, using GADTs. The comparison with the tagless-final style here is illuminating.
Our goal here is not to evaluate, view or serialize a tagless-final term, but to transform it into another one. The result is a tagless-final term, which we can interpret in multiple ways: evaluate, view, or transform again. We first came across transformations of tagless-final terms when we discussed pushing the negation down in the simple, unityped language of addition and negation. The general case is more complex. It is natural to require that the result of transforming a well-typed term be well-typed. In the tagless-final approach that requirement is satisfied automatically: after all, only well-typed terms are expressible. We impose a more stringent requirement that a transformation be total. In particular, the fact that the transformation handles all possible cases of the source terms must be patently, syntactically clear. The coverage must be so clear that the metalanguage compiler can verify it without the aid of extra tools.
Since the only thing we can do with tagless-final terms is to interpret them, the CPS transformer is written in the form of an interpreter. It interprets source terms yielding transformed terms, which can be interpreted in many ways. In particular, the terms can be interpreted by the CPS transformer again, yielding 2-CPS terms. CPS transformers are composable, as expected.
A particular complication of the CPS transform is that the type of the result is different from the type of the source term: the CPS transform translates not only terms but also types. Moreover, the CPS type transform and the arrow type constructor do not commute. For that reason, we have to introduce an extended Symantics class, ESymantics, which makes the meaning of function types dependent on a particular interpreter. As it turns out, we do not have to re-write the existing Symantics terms: we can re-interpret any old terms in the extended Symantics. Conversely, any extended Symantics term can be interpreted using any old, legacy, Symantics interpreter. The CPS transform is an extended Symantics interpreter proper.
The ordinary (Fischer or Plotkin) CPS transform introduces many administrative redexes, which make the result too hard to read. Danvy and Filinski proposed a one-pass CPS transform, which relies on the metalanguage to get rid of the administrative redexes. The one-pass CPS transform can be regarded as an example of normalization-by-evaluation.
Ordinary and one-pass CPS transforms and their compositions
Olivier Danvy and Andrzej Filinski. Representing Control: A Study of the CPS Transformation.
Mathematical Structures in Computer Science, 1992.
Tagless-final presentation of type-directed partial evaluation
The imported module with sample functions to reify. Compiling this module makes for a nicer example.
< http://www.brics.dk/~danvy/tdpe-ln.pdf >
Olivier Danvy: Lecture notes on type-directed partial evaluation. The lecture notes are based on his POPL96 paper.
We build on the embedding of the ordinary simply typed lambda calculus with de Bruijn indices described earlier. An object term of the type a was represented as a value of the type repr h a, where the binary type constructor repr is a member of the class Symantics. Here h stands for the type environment, assigning types to the free variables (`hypotheses') of a term. Linear lambda calculus regards bound variables as representing resources; referencing a variable consumes the resource. We use the type environment for tracking the state of resources: available or consumed. The type environment becomes the type state. We follow the approach described in Variable (type)state `monad'.
We represent linear lambda terms by Haskell values of the type repr hi ho a, where hi stands for the variable state before evaluating the term and ho stands for the state after evaluating the term. To be more precise, hi and ho are the types of the variable states. We can determine the types, and hence the state of the variables, statically: as usual, the type checker does abstract interpretation. In our tagless-final encoding, lam has the following type:
lam :: repr (F a,hi) (U,ho) b -> repr hi ho (a->b)
The expression (lam e) has the type repr hi ho (a->b) provided the body of the abstraction, e, has the type repr (F a,hi) (U,ho) b. That is, in the environment extended with a term of the type a, the body must produce a value of the type b. The body must consume the term at the top of the environment, changing the type of the first environment cell from F a (the type of an available variable of the type a) to U (the type of the used variable).
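A minimal sketch of this encoding (the F and U markers and the type of lam follow the description above; the show interpreter below is illustrative, and the course code LinearLC.hs additionally gives an evaluator):

```haskell
-- Markers for the state of a variable in the type environment
data F a = F a   -- available: holds a value of the type a
data U   = U     -- used

-- Linear lambda calculus: repr hi ho a is a term of type a
-- transforming the variable state from hi to ho
class LSymantics repr where
  int :: Int -> repr hi hi Int
  z   :: repr (F a, h) (U, h) a          -- referencing consumes
  s   :: repr hi ho a -> repr (b, hi) (b, ho) a
  lam :: repr (F a, hi) (U, ho) b -> repr hi ho (a -> b)
  app :: repr hi h (a -> b) -> repr h ho a -> repr hi ho b

-- A show interpreter; the state indices are phantom here,
-- but they still constrain which terms can be built at all
newtype S hi ho a = S { unS :: Int -> String }

instance LSymantics S where
  int n     = S (\_ -> show n)
  z         = S (\d -> "v" ++ show (d - 1))
  s e       = S (\d -> unS e (d - 1))
  lam e     = S (\d -> "\\v" ++ show d ++ ". " ++ unS e (d + 1))
  app e1 e2 = S (\d -> "(" ++ unS e1 d ++ " " ++ unS e2 d ++ ")")

-- Closed linear terms: start and end in the empty state,
-- so every bound variable must end up used
view :: S () () a -> String
view e = unS e 0
```

A term that discards or duplicates a variable, such as lam (lam z), fails to type-check: the cell of the unused variable still has the type F a where U is required.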
A trivial modification turns the embedding of the linear lambda-calculus into that of the affine lambda-calculus, which allows bound variables to be ignored. The K combinator becomes expressible.
Commented code defining the typed linear lambda calculus and its two interpreters, to evaluate and to show linear lambda terms. Later we add general abstractions imposing no constraints on the use of bound variables.
Jeff Polakow: typed final-tagless HOAS interpreter for linear lambda calculus
Message posted on Haskell-Cafe on March 26, 2013.
< http://www.haskell.org/pipermail/haskell-cafe/2013-March/107215.html >
Polakow's tagless-final linear lambda-calculus interpreter relies on higher-order abstract syntax, rather than the de Bruijn indices of LinearLC.hs. The interpreter explained in the posted message has only arrow types and integers. Attached to that message is the interpreter for the full linear lambda calculus with additives and units.
oleg-at-pobox.com or oleg-at-okmij.org
Your comments, problem reports, questions are very welcome!
Converted from HSXML by HSXML->HTML