Staging, Program Generation, Meta-Programming

 

 

Sound and Efficient Language-Integrated Query -- Maintaining the ORDER

As SQL moved from an English-like language for ad hoc queries by business users to its present status as the universal language for relational database access, the lack of abstraction and compositionality in its original design has been felt more and more acutely. The recently added subqueries and common table expressions compensate, albeit generally inefficiently. The inadequacies of SQL motivated language-integrated query systems such as (T-)LINQ, which offer an applicative, programming-like query language compiled to efficient SQL.

However, the seemingly straightforward ranking operations ORDER BY and LIMIT are not supported efficiently, consistently, or at all in subqueries. The SQL standard defines their behavior only when applied to the whole query. Language-integrated query systems do not support them either: naively extending ranking to subexpressions breaks the distributivity laws of UNION ALL that underlie optimizations and compilation.
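
To see concretely what breaks, here is a minimal model of our own (plain OCaml, not the paper's notation), with queries as lists of rows. Filtering distributes over UNION ALL -- the law that lets an optimizer flatten subqueries -- but the analogous law for LIMIT fails:

    (* Queries modeled as lists of rows; all names are ours, for illustration *)
    let union_all = ( @ )                               (* UNION ALL *)
    let where p q = List.filter p q                     (* WHERE     *)
    let limit n q = List.filteri (fun i _ -> i < n) q   (* LIMIT     *)

    let q1 = [1; 2; 3] and q2 = [4; 5]

    (* WHERE distributes over UNION ALL -- the law optimizers rely on: *)
    let ok = where (fun x -> x > 1) (union_all q1 q2)
             = union_all (where (fun x -> x > 1) q1) (where (fun x -> x > 1) q2)
    (* ok = true *)

    (* The analogous law for LIMIT fails: *)
    let lhs = limit 2 (union_all q1 q2)                 (* [1; 2]       *)
    let rhs = union_all (limit 2 q1) (limit 2 q2)       (* [1; 2; 4; 5] *)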

We present the first compositional semantics of ORDER BY and LIMIT, which in the limit reproduces the standard-prescribed SQL behavior, yet applies to arbitrarily composed query expressions and preserves the distributivity laws. We introduce the relational calculus SQUR, which includes ordering and subranging and whose normal forms correspond to efficient, portable, subquery-free SQL. Treating these operations as effects, we describe a type-and-effect system for SQUR and prove its soundness. Our denotational semantics leads to provably correctness-preserving normalization-by-evaluation. An implementation of SQUR thus becomes a sound and efficient language-integrated query system maintaining the ORDER.

Version
The current version is November 2017
References
sqr.pdf [372K]
doi:10.1007/978-3-319-71237-6_18
The paper (joint work with Tatsuya Katsushima) published in Proc. APLAS 2017, November 27-29, 2017, Suzhou, China. Lecture Notes in Computer Science, vol. 10695, 2017, pp. 364-383

sqr-talk.pdf [334K]
Slides of the talk presented at APLAS 2017, November 29, 2017. Suzhou, China

 

Finally, Safely-Extensible and Efficient Language-Integrated Query

Language-integrated query is an embedding of database queries into a host language to code queries at a higher level than the all-too-common concatenation of strings of SQL fragments. The eventually produced SQL is ensured to be well-formed and well-typed, and hence free from embarrassing (security) problems. Language-integrated query takes advantage of the host language's functional and modular abstractions to compose and reuse queries and build query libraries. Furthermore, language-integrated query systems like T-LINQ generate efficient SQL, by applying a number of program transformations to the embedded query. Alas, the set of transformation rules is not designed to be extensible.

We demonstrate a new technique of integrating database queries into a typed functional programming language, so as to write well-typed, composable queries and execute them efficiently on any SQL back-end as well as on an in-memory noSQL store. A distinct feature of our framework is that both the query language and the transformation rules needed to generate efficient SQL are safely user-extensible, to account for the many variations among SQL back-ends as well as for domain-specific knowledge. The transformation rules are guaranteed to be type-preserving and hygienic by their very construction. They can be built from separately developed and reusable parts and arbitrarily composed into optimization pipelines.
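
The paper's actual interface is richer, but the following minimal sketch (with made-up names) conveys the tagless-final idea it builds on: the query language is an OCaml module signature, an interpreter is a module implementing it, and a transformation rule is a functor that re-interprets the signature -- here eliminating a statically true WHERE clause. Such a rule is type-preserving and hygienic by construction, being ordinary well-typed OCaml:

    (* A toy query signature; the real one covers a large subset of SQL *)
    module type QUERY = sig
      type 'a repr
      val table : string -> 'a repr          (* a named source table *)
      val bool  : bool -> bool repr
      val where : bool repr -> 'a repr -> 'a repr
    end

    (* An interpreter that prints (pseudo-) SQL *)
    module Print = struct
      type 'a repr = string
      let table n   = n
      let bool b    = string_of_bool b
      let where c t = "SELECT * FROM (" ^ t ^ ") WHERE " ^ c
    end

    (* A rule as a functor over an arbitrary interpreter X: pair each term
       with its statically known boolean value, if any, and simplify *)
    module ElimTrueWhere (X : QUERY) = struct
      type 'a repr = { dyn : 'a X.repr; known : bool option }
      let table n   = { dyn = X.table n; known = None }
      let bool b    = { dyn = X.bool b;  known = Some b }
      let where c t = match c.known with
        | Some true -> t                     (* WHERE true q  -->  q *)
        | _         -> { dyn = X.where c.dyn t.dyn; known = None }
      let observe t = t.dyn                  (* hand off to the interpreter X *)
    end

    (* Rules compose into pipelines by functor composition *)
    module Q = ElimTrueWhere (Print)
    let ex = Q.(observe (where (bool true) (table "products")))
    (* ex = "products" *)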

With this technique we have embedded into OCaml a relational query language that supports a very large subset of SQL including grouping and aggregation. Its types cover the complete set of intricate SQL behaviors.

Joint work with Kenichi Suzuki and Yukiyoshi Kameyama.

Version
The current version is January 2016
References
Sound and Efficient Language-Integrated Query -- Maintaining the ORDER is the next version, which accomplishes all that T-LINQ does and adds ORDER and LIMIT. Whereas the present paper relies on operational semantics and performs syntactic normalization-by-rewriting, the new SQUR relies on denotational semantics and performs normalization-by-evaluation.

quel.pdf [425K]
doi:10.1145/2847538.2847542
The paper published in the Proceedings of the 2016 ACM SIGPLAN Workshop on Partial Evaluation and Program Manipulation (PEPM), 2016, St. Petersburg, FL, USA, January 18-19, 2016, pp. 37-48

<http://logic.cs.tsukuba.ac.jp/~ken/quel/>
The source code of the system. It is distributed under the MIT License.

 

Let-insertion without pain or fear or guilt

We describe the so-called let-insertion and its safe and surprisingly convenient implementation as a BER MetaOCaml library. Implementors of domain-specific languages (DSL) embedded in MetaOCaml no longer have to meticulously specify where to place let bindings. Rather, they can program an automatic insertion at the `right' and assuredly safe place. The end-users of the DSL do not have to worry about let-bindings or even know about them. This safe and modular let-insertion has become possible in MetaOCaml for the first time.

Program generators, like programs in general, greatly benefit from compositionality: the meaning of a complex expression is determined by its structure and the meanings of its constituents. Building code fragments using MetaOCaml's brackets and escapes is compositional. For example, in

    let sqr: int code -> int code = fun e -> .<.~e * .~e>.
the meaning of sqr e -- that is, the code it generates -- is the multiplication of two copies of the expression generated by e. We are sure of that even if we know nothing of e except that it is pure (that is, free of side-effects including divergence). Likewise, in
    let make_incr_fun : (int code -> int code) -> (int -> int) code = fun body ->
    .<fun x -> x + .~(body .<x>.)>.
  
    let test1 = make_incr_fun (fun x -> sqr .<2+3>.)
     (* .<fun x_24  -> x_24 + ((2 + 3) * (2 + 3))>. *)
make_incr_fun body should produce the code for an OCaml function. The result, shown in the comments, confirms our expectations.

Compositionality lets us think about programs in a modular way, helping make sure the program result is the one we had in mind. We ought to strive for compositional programs and libraries. The trick however is selecting the `best' meaning for an expression. Earlier we took as the meaning of a code generator the exact form of the expression it produces. Under this meaning, compositionality requires the absence of side-effects. Our test1 is pure and so its result is easy to predict: it has to be the code of a function whose body contains two copies of the expression 2+3. Although the confidence in the result is commendable, the result itself is not. Duplicate expressions make the code large and inefficient: imagine something complex in 2+3's place. Furthermore, 2+3 in test1 does not depend on x and can be lifted out of the function's body -- so that it can be computed once rather than on each application of the function. In short, we would like the result of test1 to look like

    let test_desired = .<let t = 2+3 in fun x_24  -> x_24 + (t * t)>.

One may remark that a sufficiently smart compiler should be able to transform the code produced by test1 into test_desired, performing common-subexpression elimination and invariant code motion. Although these optimizations may be expected for code as simple as test1, an expression more complex than 2+3 will thwart them. First of all, it is harder to see the commonality of two large expressions. Second, the two optimizations are sound only if the expression to move and share is pure; if it calls external functions, the compiler will likely be unable to check that those functions are pure. Finally, if 2+3 were replaced with read_config "foo" where read_config does IO, the compiler must not de-duplicate or move such an expression. The DSL designer, however, may know that read_config, albeit effectful, is safe to move and eliminate: its side-effect, reading a file, is irrelevant to the end users of the program. Therefore, the DSL designer needs a way to explicitly perform optimizations like duplicate elimination and invariant code motion, without relying on the compiler. After all, the aspiration of staging is to let DSL designers express expert knowledge and use it for domain-specific optimizations -- in effect turning a general-purpose compiler into a domain-specific optimization toolkit.

We change the meaning of code generators: sqr e now means the multiplication of two copies of the expression generated by e, or of a variable let-bound to that expression. That is, the meaning is the generated code with possible let-expressions. Compositionality now permits effectful generators, whose side-effect is let-insertion. MetaOCaml could offer the simplest such generators as built-ins. But adding more built-ins to MetaOCaml -- just like adding more system calls to an OS -- makes the system harder to maintain and its correctness harder to ensure. Generally, what can be efficiently implemented at the `user level' ought to be implemented there. Let-insertion can be written as an ordinary library, relying on the delimited control library delimcc.

The first attempt at let-insertion takes only a few lines:

    open Delimcc
    
    let genlet : 'w code prompt -> 'a code -> 'a code = fun p cde -> 
       shift p (fun k -> .<let t = .~cde in .~(k .<t>.)>.)
    
    let with_prompt : ('w prompt -> 'w) -> 'w = fun thunk ->
      let p = new_prompt () in
      push_prompt p (fun () -> thunk p)
The function genlet takes a code expression and let-binds it -- at the place marked by with_prompt. The two functions communicate via the so-called prompt. The evaluation sequence for a simple example below demonstrates the process:
    with_prompt (fun p -> sqr (genlet p .<2+3>.))
    --> (* evaluating genlet *)
    with_prompt (fun p -> sqr (
       shift p (fun k -> .<let t = .~(.<2+3>.) in .~(k .<t>.)>.)))
    --> (* evaluating shift, capturing the continuation up to the prompt, binding it to k *)
    with_prompt (fun p ->
     let k = fun hole -> push_prompt p (fun () -> sqr hole) in
       .<let t = .~(.<2+3>.) in .~(k .<t>.)>.)
    -->
    with_prompt (fun p ->
     let k = fun hole -> push_prompt p (fun () -> sqr hole) in
       .<let t = 2+3 in .~(k .<t>.)>.)
    --> (* applying the captured continuation k *)  
    with_prompt (fun p -> 
       .<let t = 2+3 in .~(push_prompt p (fun () -> sqr .<t>.))>.)
    -->* (* evaluating (sqr .<t>.) *)
    with_prompt (fun p -> .<let t = 2+3 in .~(.<t * t>.)>.)
    -->* 
    .<let t = 2+3 in t * t>.

Indeed, genlet took .<2+3>. and let-bound it where with_prompt was. The let-binding of 2+3 eliminated the expression duplication, saving us, or the compiler, the trouble of searching for common subexpressions. The binding place can be arbitrarily far from genlet:

    with_prompt (fun p ->
      make_incr_fun (fun x -> sqr (genlet p .<2+3>.)))
     (* .<let t_17 = 2 + 3 in fun x_16  -> x_16 + (t_17 * t_17)>. *)
thus realizing invariant code motion: 2+3 is moved out of the function body. Code motion is desirable, but can also be dangerous:
    with_prompt (fun p ->
      make_incr_fun (fun x -> sqr (genlet p .<.~x+3>.)))
     (* BEFORE N100: .<let t_17 = x_16 + 3 in fun x_16  -> x_16 + (t_17 * t_17)>. *)
     (* BER N101: exception pointing out that in .<.~x+3>. the variable x
        escapes its scope *)
attempting to move the expression that contains x out of the scope of its binding! Prior to BER N100, the attempt succeeded, generating the shown code that exhibits the so-called scope extrusion. In the current version of BER MetaOCaml, the example still type checks; its evaluation, however, no longer succeeds: running the generator throws an exception with an informative message.

We have just seen code generation with control effects. We have seen that let-insertion is highly desirable and highly dangerous -- and that, in the present MetaOCaml, it is finally safe. It is safe in the following sense: if the generator successfully finishes producing the code, the result is well-typed and well-scoped.

Although the naive attempt to program let-insertion works and is now safe, it is not convenient. One has to explicitly mark the let-insertion place, and when several places are marked, one has to choose among them. We want to automate such choices. We would like the result of test1 to be the desired code test_desired, with let-bindings. The operations sqr and make_incr_fun -- which are programmed by the DSL designer -- may be re-defined. However, test1 should be left as it was: it is written by the end user, who should neither know nor care that let-insertion is taking place, let alone pass prompts around.

Recall that trying to insert let at a wrong place raises an exception. This dynamic exception lets us program genlet so as to try the insertion at various places -- farther and farther up the call chain -- until an attempt raises the exception. The best place to insert let is the one farthest from genlet that causes no exception. We arrive at the following simplified interface for let-insertion:

    val genlet    : 'a code -> 'a code
    val let_locus : (unit -> 'w code) -> 'w code
The interface no longer mentions any prompts. The function let_locus marks possible places for inserting let; genlet chooses the best place among the marks and let-binds its argument expression there. This let-insertion is safe, and the safety is easy to prove: since the interface is implemented as an ordinary library -- an elaboration of the naive implementation, with no compiler magic -- the static guarantees of MetaOCaml continue to hold. In particular, an attempt to generate ill-scoped code throws an exception. By contraposition, if no exception is thrown, the generated code has no scope extrusion. As an example, we re-define the sqr and make_incr_fun primitives for let-insertion, fully reusing the earlier versions without re-implementing them: the new sqr let-binds its argument, and the new make_incr_fun declares the places just before and just after the function binder as good places for let-insertion.
    let sqr e = sqr (genlet e)
    let make_incr_fun body = 
      let_locus @@ fun () ->
        make_incr_fun @@ fun x ->
          let_locus @@ fun () -> 
            body x
With these new sqr and make_incr_fun, the very same test1 produces the desired code (shown in comments below):
    let test1 = make_incr_fun (fun x -> sqr .<2+3>.)
       (* .<let t_17 = 2 + 3 in fun x_16  -> x_16 + (t_17 * t_17)>. *)
  
    let test2 = make_incr_fun (fun x -> sqr .<.~x + 3>.)
       (* .<fun x_18  -> x_18 + (let t_19 = x_18 + 3 in t_19 * t_19)>. *)
The slightly modified test2 also inserts let -- at a different, safe place.

We have demonstrated, for the first time, self-adjusting, safe and convenient let-insertion with static guarantees. The scope-extrusion check meant to crash bad generators surprisingly helps implement good ones -- more convenient than those possible otherwise.

Version
The current version is June 2014
References
genlet_example.ml [2K]
A simple illustration of let-insertion, its convenience and dangers (the example from the FLOPS 2014 talk)
The example uses the `traditional' implementation of let-insertion, which is not convenient.

gengenlet.ml [6K]
The safe and convenient let-insertion

talk-FLOPS.pdf [196K]
The annotated slides of the talk presented at FLOPS 2014 in Kanazawa, Japan on June 4, 2014.

loop_motion_gen.ml [7K]
Loop-invariant code motion with the convenient let-insertion (the main example in the FLOPS 2014 talk)

Let-insertion as a primitive, since MetaOCaml N104

Generating Mutually Recursive Definitions: let- and letrec-insertion, without any continuation or state effects

 

Why a program in CPS specializes better

It has become folk knowledge that writing a program in continuation-passing style (CPS) helps a partial evaluator do more operations at specialization time and hence produce a better specialized program. The reason for that is not as widely known. The following example, distilled from the papers by Bondorf and Lawall & Danvy, may help.

The example is the following simple Haskell expression.

    	\i -> (if i then 2 else 3) + 4
To partially evaluate it, we first annotate all sub-expressions as static S (known at compile time) or dynamic D (known only at run-time). That is, we perform the `binding-time analysis' (BTA) and add binding-time annotations. For clarity, we show all the steps of the analysis. First, since we see no applications of our function, we do not statically know i:
    	\i_D -> (if i_D then 2 else 3) + 4
The if-test with the dynamic condition is dynamic:
    	\i_D -> (if i_D then 2 else 3)_D + 4
and the addition expression with a dynamic operand is dynamic:
    	\i_D -> ((if i_D then 2 else 3)_D + 4)_D
All non-literal sub-expressions are dynamic and hence there are no opportunities for partial evaluation.

Let's re-write the original expression in CPS:

    \i k -> let k' = \v -> k (v+4) in
            if i then k' 2 else k' 3
and do the binding-time analysis again. As before, the function arguments, i and k, must be dynamic:
    \i_D k_D -> let k' = \v -> k_D (v+4) in
                if i_D then k' 2 else k' 3
The variable k' is static since its value, the \v-> ... abstraction, is statically known. Constants are obviously static:
    	\i_D k_D -> let k'_S = \v -> k_D (v+4) in
    	            if i_D then k'_S 2_S else k'_S 3_S
There are now two applications of the statically known function k' to statically known arguments. We can perform them statically, at specialization (`compile') time, obtaining:
    	\i k -> if i then k 6 else k 7
which is certainly an improvement: there are no longer any additions left for the run-time.

Bondorf (1992) showed an alternative to writing the source program in CPS: write the specializer (partial evaluator) itself in CPS and skillfully massage the continuations to achieve binding-time improvements. The partial evaluator becomes harder to write, but source programs may remain in the original, direct style and still benefit from improved specialization. Lawall and Danvy (1994) showed that delimited control helps keep even the partial evaluator in direct style. Thus a specializer capable of the improvements shown here becomes only a little more difficult to write.
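
For concreteness, here is a tiny model specializer as a stand-alone OCaml sketch -- our own names and simplifications (booleans are encoded as integers, with 0 as false), not Bondorf's or Lawall and Danvy's actual code. The specializer itself is written in CPS while the source program stays in direct style: when residualizing a dynamic conditional, the specializer duplicates its continuation -- the static context, here the pending +4 -- into both branches, where it meets the static constants:

    (* Object language and binding-time-separated values *)
    type exp = Int of int | Var of string
             | Add of exp * exp | If of exp * exp * exp
    type value = S of int | D of exp    (* static integer or dynamic code *)

    let residualize = function S n -> Int n | D e -> e

    (* The specializer in CPS: k is the context awaiting the value *)
    let rec spec (e : exp) (k : value -> exp) : exp = match e with
      | Int n -> k (S n)
      | Var x -> k (D (Var x))
      | Add (a, b) ->
          spec a (fun va -> spec b (fun vb ->
            match va, vb with
            | S m, S n -> k (S (m + n))          (* static addition *)
            | _ -> k (D (Add (residualize va, residualize vb)))))
      | If (c, t, f) ->
          spec c (function
            | S 0 -> spec f k
            | S _ -> spec t k
            | D ce ->
                (* dynamic test: push the context k into both branches *)
                If (ce, spec t k, spec f k))

    (* \i -> (if i then 2 else 3) + 4, with i dynamic *)
    let example = spec (Add (If (Var "i", Int 2, Int 3), Int 4)) residualize
    (* example = If (Var "i", Int 6, Int 7): no additions are left *)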

Version
The current version is January 2012
References
Anders Bondorf: Improving Binding Times Without Explicit CPS-Conversion
Conference on Lisp and Functional Programming, 1992
The seminal paper on binding-time improvements.

Julia L. Lawall and Olivier Danvy: Continuation-Based Partial Evaluation
Conference on Lisp and Functional Programming, 1994

 

The Next Stage of Staging

Staging is a program generation paradigm with a clean, well-investigated semantics, which statically ensures that the generated code is always well-typed and well-scoped. Staging is often used for specializing programs to known properties or parts of the data so as to improve efficiency, but so far it has been limited to generating terms. This short paper describes our ongoing work on extending staging, with its strong safety guarantees, to the generation of non-terms, focusing on ML-style modules. The purpose is to map out the promises and challenges, and to pose a question soliciting the community's expertise in evaluating how essential our extensions are for applying staging beyond the realm of terms.

We demonstrate our extensions' use in specializing functor applications to eliminate their (currently large) overhead in OCaml. We explain the challenges that those extensions bring and identify a promising line of attack. Unexpectedly, however, it turns out that we can avoid module generation altogether by representing modules, possibly containing abstract types, as polymorphic records. With the help of first-class modules, module specialization reduces to ordinary term specialization, which can be done with conventional staging. The extent to which this hack generalizes is unclear. Thus we have a question to the community: is there a compelling use case for module generation? With these insights and questions, we offer a starting point for a long-term program in the next stage of staging research.
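
The following fragment sketches that hack with our own illustrative names: a module whose signature contains an abstract type is represented as a record polymorphic in that type, with sealing recovered by existential quantification (in OCaml, a GADT, or equivalently a first-class module). A `functor' then becomes an ordinary function on records, amenable to conventional term-level staging:

    (* The module view: an abstract type t with operations *)
    module type COUNTER = sig
      type t
      val zero : t
      val incr : t -> t
      val get  : t -> int
    end

    (* The record view: the abstract type becomes a type parameter *)
    type 't counter = { zero : 't; incr : 't -> 't; get : 't -> int }

    (* Sealing recovered by existential quantification *)
    type any_counter = Counter : 't counter -> any_counter

    (* A "functor" is now a mere function on records, and hence can be
       specialized by ordinary term-level staging *)
    let skip (step : int) (c : int counter) : int counter =
      { c with incr = (fun x -> x + step) }

    let base : int counter = { zero = 0; incr = succ; get = (fun x -> x) }
    let by2  = skip 2 base
    (* by2.get (by2.incr by2.zero) = 2 *)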

Joint work with Jun Inoue and Yukiyoshi Kameyama.

Version
The current version is January 2016
References
StagingNG.pdf [169K]
Staging Beyond Terms: Prospects and Challenges
doi:10.1145/2847538.2847548
The paper published in the Proceedings of the 2016 ACM SIGPLAN Workshop on Partial Evaluation and Program Manipulation (PEPM), 2016, St. Petersburg, FL, USA, January 18-19, 2016, pp. 103-108

 

A monadic approach for avoiding code duplication when staging memoized functions

Building program generators that do not duplicate generated code can be challenging. At the same time, code duplication can easily increase both generation time and runtime of generated programs by an exponential factor. We identify an instance of this problem that can arise when memoized functions are staged. Without addressing this problem, it would be impossible to effectively stage dynamic programming algorithms. Intuitively, direct staging undoes the effect of memoization. To solve this problem once and for all, and for any function that uses memoization, we propose a staged monadic combinator library. Experimental results confirm that the library works as expected. Preliminary results also indicate that the library is useful even when memoization is not used.

The paper introduces the continuation+state monad for code generation with let-insertion, and briefly describes abstract interpretation for generating efficient code without intensional analysis of the generated code. These are foundational techniques for typeful multi-stage (MetaOCaml) programming.
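
To give the flavor of that monad, here is a minimal MetaOCaml sketch of its continuation component (our simplification: the state part and the paper's full interface are omitted). The answer type of a generator computation is code, so a computation may wrap a let around all of the code generated after it:

    (* A computation producing 'a while generating code of eventual type 'w *)
    type ('a, 'w) m = ('a -> 'w code) -> 'w code

    let ret (x : 'a) : ('a, 'w) m = fun k -> k x
    let bind (m : ('a, 'w) m) (f : 'a -> ('b, 'w) m) : ('b, 'w) m =
      fun k -> m (fun a -> f a k)

    (* let-insertion: emit the binding now; the rest of the generation
       sees only the bound variable *)
    let genlet (c : 'a code) : ('a code, 'w) m =
      fun k -> .<let t = .~c in .~(k .<t>.)>.

    let run (m : ('w code, 'w) m) : 'w code = m (fun c -> c)

    (* Share the expensive expression instead of duplicating it *)
    let shared = run (bind (genlet .<2+3>.) (fun x -> ret .<.~x * .~x>.))
    (* .<let t = 2+3 in t * t>. *)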

Joint work with Walid Taha, Kedar N. Swadi, and Emir Pasalic.

Version
The current version is January 2006
References
Proceedings of the 2006 ACM SIGPLAN Workshop on Partial Evaluation and Semantics-based Program Manipulation (PEPM), 2006, Charleston, South Carolina, USA, January 9-10, 2006, pp. 160-169. doi:10.1145/1111542.1111570

Shifting the Stage: Staging with Delimited Control
An alternative, notationally more convenient approach relying on delimited control rather than monads

 

Type(class)-directed symbolic differentiation

We show how to take an ordinary numeric Haskell function, e.g., a term of the type D a => a -> a where D is a Floating-like class, and produce the function that is the symbolic derivative of the former. The original function can be given to us in a separately compiled module with no available source code. The derived function is an ordinary numeric Haskell function and can be applied to numeric arguments -- or differentiated further. The Floating-like class D currently supports arithmetic and a bit of trigonometry. We also support partial derivatives.
    test1f x = x * x + fromInteger 1
    test1 = test1f (2.0::Float)      -- 5.0
   
    test1f' x = diff_fn test1f x
    test1' = test1f' (3.0::Float)    -- 6.0
    
    test1f'' x = diff_fn test1f' x
    test1'' = test1f'' (10.0::Float) -- 2.0
The original function test1f can be evaluated numerically (see test1) or differentiated symbolically: test1f'. The result is again an ordinary numeric function (namely, 2x), which can be applied to a numeric argument, see test1'. Alternatively, test1f' can be differentiated further.

The original function is emphatically not represented as an algebraic data type -- it is a numeric function like sin or tan. Still, we are able to differentiate it symbolically (rather than numerically or automatically). The key insight is that Haskell98 supports a sort of reflection -- or, to be precise, type-directed partial evaluation and hence term reconstruction.
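
The linked Haskell code is the real thing; the following OCaml transcription (with our own, hypothetical names) may help convey the trick. The numeric operations are kept abstract, so running the very same function at a symbolic instance reconstructs its body as a term, which the textbook rules then differentiate; interpreting the derivative term at a numeric instance recovers an ordinary numeric function:

    (* The abstract numeric interface the user's function is written against *)
    module type NUM = sig
      type t
      val lit   : float -> t
      val ( + ) : t -> t -> t
      val ( * ) : t -> t -> t
    end

    (* f x = x*x + 1, oblivious to the concrete number type *)
    module F (N : NUM) = struct
      open N
      let f x = (x * x) + lit 1.
    end

    (* A symbolic instance: running f at it reconstructs the term *)
    module Sym = struct
      type t = Lit of float | Var | Add of t * t | Mul of t * t
      let lit c = Lit c
      let ( + ) a b = Add (a, b)
      let ( * ) a b = Mul (a, b)
    end

    let body = let module FS = F (Sym) in FS.f Sym.Var
    (* body = Add (Mul (Var, Var), Lit 1.) *)

    (* Symbolic differentiation by the textbook rules *)
    let rec d : Sym.t -> Sym.t = function
      | Sym.Lit _      -> Sym.Lit 0.
      | Sym.Var        -> Sym.Lit 1.
      | Sym.Add (a, b) -> Sym.Add (d a, d b)
      | Sym.Mul (a, b) -> Sym.Add (Sym.Mul (d a, b), Sym.Mul (a, d b))
    (* d body represents 2x, up to algebraic simplification *)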

Our approach also shows off the specification of the differentiation rules via type classes (which makes the rules extensible) and the emulation of GADTs via type classes. In 2006, we improved the approach by developing an algebraic simplifier and by avoiding any interpretative overhead.

Version
The current version is 1.1, Nov 24, 2004
References
differentiation.lhs [11K]
The complete literate Haskell code with more examples, including those of partial differentiation. It was originally posted as Typeful symbolic differentiation of compiled functions on the Haskell mailing list on Wed Nov 24 04:05:12 EST 2004.

Symbolic differentiation and optimization of compiled numeric functions

 

Symbolic differentiation and optimization of compiled numeric functions

We show symbolic differentiation of a wide class of numeric functions without imposing any interpretative overhead. The functions to differentiate can be given to us in a compiled form (in .hi files); their source code is not needed. We produce a (compiled, if needed) function that is an exact, algebraically simplified analytic derivative of the given function.

Our approach relies on reifying code into its `dictionary view', on intensional analysis of typed code expressions, and on staging so as to evaluate under lambda. We improve on the earlier, 2004 approach by algebraically simplifying the result of symbolic differentiation, and by removing interpretative overhead with the help of Template Haskell (TH). The computed derivative can be compiled down to machine code, and so runs at full speed, as if it had been written by hand to start with.

In the process, we develop a simple type system for a subset of TH code expressions (TH is, sadly, completely untyped) -- so that accidental errors can be detected early. We introduce a few combinators for the intensional analysis of such typed code expressions. We also show how to reify an identifier like (+) to TH.Name -- by applying TH to itself. Effectively we obtain more than one stage of computation.
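
TypedCode.hs does this for Template Haskell; the following self-contained OCaml sketch (illustrative, not the actual interface) shows the underlying idea: wrap the untyped syntax in an abstract type with a phantom parameter and expose only combinators that respect it, so ill-typed code expressions are rejected by the host type checker:

    module TypedCode : sig
      type 'a code                   (* phantom 'a: the expression's type *)
      val int_ : int -> int code
      val lam  : ('a code -> 'b code) -> ('a -> 'b) code
      val app  : ('a -> 'b) code -> 'a code -> 'b code
      val show : 'a code -> string
    end = struct
      type exp = Lit of int | Var of string
               | Lam of string * exp | App of exp * exp
      type 'a code = exp             (* untyped underneath *)
      let gensym = let c = ref 0 in fun () -> incr c; "x" ^ string_of_int !c
      let int_ n  = Lit n
      let lam f   = let x = gensym () in Lam (x, f (Var x))
      let app f a = App (f, a)
      let rec pr = function
        | Lit n      -> string_of_int n
        | Var x      -> x
        | Lam (x, e) -> "\\" ^ x ^ " -> " ^ pr e
        | App (f, a) -> "(" ^ pr f ^ " " ^ pr a ^ ")"
      let show = pr
    end

    (* well-typed: (\x -> x) applied to 1 *)
    let ok = TypedCode.(show (app (lam (fun x -> x)) (int_ 1)))
    (* TypedCode.(app (int_ 1) (int_ 2)) is rejected by the type checker *)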

Our technique can be considered the inverse of the TH splicing operation: given a (compiled) numeric expression of a host program, we obtain its source view as a TH representation. The latter can be spliced back into the host program and compiled -- after, perhaps, simplification, partial evaluation, or symbolic differentiation. As an example, given the definition of the ordinary numeric, Floating a => a->a function test1f x = let y = x * x in y + 1 (which can be located in a separately compiled file), we reify it into a TH code expression, print it, differentiate it symbolically, and algebraically simplify the result:

    *Diff> test1cp
    \dx_0 -> GHC.Num.+ (GHC.Num.* dx_0 dx_0) 1
    *Diff> test1dp
    \dx_0 -> GHC.Num.+ (GHC.Num.+ (GHC.Num.* 1 dx_0) (GHC.Num.* dx_0 1)) 0
    *Diff> test1dsp
    \dx_0 -> GHC.Num.+ dx_0 dx_0
The output is produced by TH's pretty-printer. We can splice the result into a Haskell program as $(reflectQC test1ds) 2.0 and use it as the ordinary, hand-written function \x -> x+x.

Version
The current version is 1.1, Dec 3, 2006
References
Diff.txt [8K]
The message with the explanation of optimality and more examples. It was originally posted as Jones-optimal, typed, symbolic differentiation of (compiled) functions on the Haskell mailing list on Sun, 3 Dec 2006 17:56:27 -0800 (PST).

Diff.hs [9K]
The main implementation file, which defines the reification of code into TH representation, differentiation rules and algebraic simplification rules, all via the intensional analysis of the typed code. The file also includes many examples, including those of partial differentiation.

DiffTest.hs [<1K]
Running the splicing tests from Diff.hs. Due to the TH requirement, this code must be in a separate module.

TypedCode.hs [5K]
This file introduces the type Code a of typed TH code expressions. The (phantom) type parameter is the expression's type. The file defines combinators for building and analyzing these typed expressions.

TypedCodeAux.hs [2K]
Obtain the Name that corresponds to a top-level (Prelude-level) Haskell identifier by applying TH to itself.

 

Editorial: PEPM2012 Special Issue of `Higher-Order and Symbolic Computation'

The five articles in this special issue are the extended, journal versions of papers first presented at the ACM SIGPLAN 2012 Workshop on Partial Evaluation and Program Manipulation. PEPM'12 took place in January 2012 in Philadelphia, Pennsylvania, USA. Out of the 19 papers presented at PEPM 2012, the program committee selected six and invited their authors to submit an extended and improved version to the special issue. Each submission was reviewed by three reviewers according to rigorous journal standards; five were recommended for publication.

The PEPM Symposium/Workshop series is about the theory and practice of program transformation understood broadly, ranging from program manipulation such as partial evaluation, to program analyses in support of program manipulation, to treating programs as data objects (metaprogramming). PEPM focuses on techniques, supporting theory, tools, and applications of the analysis and manipulation of programs. PEPM specifically stresses that each technique or tool of program manipulation should have a clear, although perhaps informal, statement of desired properties, along with an argument how these properties could be achieved. The papers included in this special issue reflect the entire scope of PEPM, its interplay of theory and practice, and its stress on rigor and clarity.

Version
The current version is July 2013
References
PEPM2012-Special-issue-editorial.pdf [43K]
The full Editorial describing the five papers
Higher-Order and Symbolic Computation. August-September 2013. Special issue with selected papers from PEPM 2012. Edited together with Julia Lawall and Simon Thompson
doi:10.1007/s10990-013-9092-0