On this page:
Welcome
Classical Denotational Semantics
Classical Operational Semantics
Defining Macros
Hygienic Macros
Higher-Order Contracts
The Operational Semantics of Gradual Typing
Automatic Differentiation
Meetings
Hindley-Milner Type Inference
System F Type Systems: From Theory to Practice
Ideas: From Conception to Programming
PER Semantics of Types
The PI Calculus
Session Types
Functional Reactive Programming
Tracing JIT Compilation
Meetings
Programming Languages and Operating Systems
Array Programming Languages
Relational Programming
How (Not) to Benchmark
Logical Relations
The Operational Semantics of Multi-Language Systems
Intermediate Compiler Forms for Functional Languages
The ML Module System
Refinement Types
The Evolution of Dependent Types
The Implementation of Dependent-Type Proof Assistants
Programming Languages for Computational Media
Teaching Languages
Good Bye
Bibliography

Lectures

This page simultaneously serves as a repository for lecture abstracts and as a presentation schedule. It will remain a permanent record of this instance of HoPL.

Welcome

19 Jan 2021

I will give a brief presentation on the dawn of programming language history and how researchers began to tease out essential concepts—syntax, scope, types, semantics, pragmatics—and the first ideas that showed up within these categories.

We will also discuss why/how the lambda calculus quickly appeared as the central idea to study these concepts, culminating in the thoroughly under-appreciated dissertation of Morris [1968].

At the end, I will explain how this course works, its focus, and its organization.

Classical Denotational Semantics

Matthias Felleisen

As the Welcome explained and as you may recall from your undergraduate course on PPL, formulating a semantics for a programming language emerged as the first hard problem. People imagined that, like syntax, semantics could be specified at a rather high level and that, like parsers, code generators could be derived from such a specification.

Scott’s denotational semantics [Stoy 1977] of the (typed and untyped) lambda calculus emerged as the first dominant approach. Because denotational semantics seems to leverage ordinary mathematical ideas (algebra, geometry, topology, etc.), researchers hoped that tools from these areas could be used to analyze program behavior and that these analyses could then play a role in code generation and optimization. Sadly, this research direction ran into a rather unsolvable problem, now often called the “full abstraction” problem [Milner 1977; Plotkin 1977b].

While (classical) denotational semantics has disappeared (in the US), modern operational semantics directly draws on its ideas and will continue to do so in the future.

Classical Operational Semantics

Matthias Felleisen

The very first efforts in operational semantics focused on two different directions: interpreters [McCarthy 1960] and abstract machines [Landin 1964].

Compared to denotational semantics, people recognized early on that both approaches to operational semantics feel ad hoc and seem to make it difficult to study the meaning of programs systematically. Reynolds [1972]’s devastating critique of operational semantics—he uses the phrase meta-interpreter, but the critique applies equally to related works—looked like a death sentence.

In parallel, though, Plotkin [1977a; 1981] developed two more frameworks for operational semantics, both of which play a significant role nowadays: lambda calculus as a semantics and so-called structural operational semantics. Clement et al. [1985]’s development of the closely related natural semantics demonstrated that this idea could be useful for compiler-generation frameworks.

Note These ideas are often covered to varying degrees in iPPL (7400). This presentation is from my perspective, recalling my own days as a second-year PhD student.

Defining Macros

Jared Gentner

Some language families do not respect the idea of fixed syntax, scope, semantics, and so on. The Lisp family is one of them.

From almost the very beginning, Lisp languages have offered programmers mechanisms for changing almost every aspect of the language. The key mechanism is a macro—a function from (concrete) syntax tree to syntax tree—which a programmer can add to the front end of the compiler. Programmers can use macros to eliminate syntactic patterns and even to build finely integrated domain-specific languages.
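In this sense, a macro really is just a tree-to-tree function. Here is a minimal sketch—no particular Lisp’s macro system is modeled—using nested Python lists to stand in for s-expressions, with a hypothetical `when` macro and an `expand` pass that applies macros until none remain:

```python
# A macro is a function from (concrete) syntax tree to syntax tree.
# Syntax trees are nested Python lists standing in for s-expressions.

def when_macro(form):
    # (when test body ...) => (if test (begin body ...) #f)
    _, test, *body = form
    return ["if", test, ["begin", *body], "#f"]

MACROS = {"when": when_macro}

def expand(form):
    """Repeatedly apply macros at the head, then recur into subforms."""
    while isinstance(form, list) and form and form[0] in MACROS:
        form = MACROS[form[0]](form)
    if isinstance(form, list):
        return [expand(f) for f in form]
    return form

print(expand(["when", [">", "x", 0], ["print", "x"]]))
# ['if', ['>', 'x', 0], ['begin', ['print', 'x']], '#f']
```

The front end of a Lisp compiler runs such an expansion pass before any other processing, which is what lets user-defined syntax blend in with built-in forms.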

The challenge is to write these syntax-to-syntax functions effectively. Beginning in the 1980s, researchers paid attention to this problem and took four significant steps:

Hygienic Macros

Michael Ballantyne

Parallel to the development of macro-definition systems, researchers also worked on understanding how syntax-to-syntax functions “mess” with the lexical scope of a program. Concretely, using a macro may inadvertently capture identifiers in a way that violates a natural understanding of the surface syntax and its binding rules. The resulting bugs are rather insidious.

Kohlbecker et al. [1986] pointed out the problem and offered a first, simplistic solution, drawing on Barendregt’s tradition of treating the lambda calculus as a system of terms “modded out” under alpha equivalence (“the hygiene condition”). In this context, alpha renaming—known to “younglings” as identifier-rename refactoring—means picking a different concrete representative for the same equivalence class. Kohlbecker et al. describe an algorithm that properly preserves the apparent scope uniformly for all programs: the hygienic macro expansion algorithm.
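The capture problem and the renaming fix can be illustrated with a toy sketch. This is plain gensym-style renaming—far simpler than Kohlbecker et al.’s actual algorithm—and the `or`-style macro is invented for illustration:

```python
import itertools

_counter = itertools.count()

def gensym(base):
    """A fresh identifier that cannot clash with user-written names."""
    return f"{base}%{next(_counter)}"

# An unhygienic `or` macro: (or a b) => (let ((t a)) (if t t b)).
# If the user's second expression mentions its own `t`, the
# macro-introduced binding captures it.
def or_unhygienic(a, b):
    return ["let", [["t", a]], ["if", "t", "t", b]]

# Renaming macro-introduced binders to fresh names preserves the
# apparent scope of the surface program.
def or_hygienic(a, b):
    t = gensym("t")
    return ["let", [[t, a]], ["if", t, t, b]]

bad = or_unhygienic("x", "t")   # user's `t` is captured by the macro's `t`
good = or_hygienic("x", "t")    # user's `t` stays free
print(bad)
print(good)
```

The hygienic expansion algorithms go much further: they rename uniformly across nested macro expansions and also keep macro-introduced references pointing at the macro definition’s scope.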

Following this tradition, Dybvig et al. [1993] generalizes this expansion algorithm and improves its performance in significant ways. Their algorithm was the gold standard for nearly three decades.

In 2016, Flatt [2016] presented an entirely different algorithm, which he implemented for Racket. The algorithm was quickly used to implement macros for JavaScript.

Higher-Order Contracts

Cameron Moy

In the late 1960s, software engineering researchers coined the phrase “software crisis.” With computational power accelerating at an exponential rate, everyone expected programmers’ ability to deliver complex software to improve substantially, but alas it didn’t. So researchers began to ponder how to deal with this problem.

McIlroy [1968] is the first to propose a component marketplace that could supply high-quality, generic—as in interchangeable—and configurable modules, for use in many different systems. Parnas [1972] recognizes that, to be practical, such components would need to be accompanied by a formal interface specification.

The Eiffel programming language represents the first large-scale attempt to realize these aspirations. Meyer [1992b]’s contract system for Eiffel allows programmers to express pre-conditions that a caller must satisfy in a method signature and post-conditions that the result is guaranteed to satisfy. (From a software engineering perspective, this system encourages programmers to replace defensive programming—over-burdening method definitions with extensive checks—with concise interface contracts, which is where such checks belong for the client programmer.) If a method’s pre-condition fails, it is the caller’s fault; if the post-condition fails, the contract system points to the callee as the potential source of error. In fact, in Meyer [1992a]’s eyes, the contract system is the underpinning of his entire software design philosophy: “design by contract.”

Some ten years later, Findler and Felleisen [2001] point out that contracts in Eiffel violate the Liskov substitution principle and, as a result, occasionally point to the wrong component as the faulty one. To rectify this flaw, they propose a “monitoring” approach that goes beyond checking just the pre- and post-conditions.

By this time, programming languages also begin to include higher-order elements, such as Scheme’s lambda or Python’s functions on classes. Correctly handling these now-common programming language features with the simplistic contract system of Eiffel, however, proves impossible. In response, Findler and Felleisen [2002] extend the contract system to higher-order functions and develop a “blame assignment” system, setting off a flurry of research into the subject.
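The core of a higher-order contract system—delayed checking plus swapped blame for arguments—fits in a few lines. The following Python sketch is loosely modeled on Findler and Felleisen’s design; the names (`flat`, `arrow`, `monitor`) and the string blame labels are invented for illustration:

```python
class ContractError(Exception):
    def __init__(self, blamed):
        super().__init__(f"contract violation: blame {blamed}")
        self.blamed = blamed

def flat(pred):
    """A first-order contract: just a predicate on values."""
    return ("flat", pred)

def arrow(dom, rng):
    """A higher-order contract on functions."""
    return ("arrow", dom, rng)

def monitor(ctc, value, pos, neg):
    """Attach a contract; pos is blamed for the value's promises,
    neg for its uses (a sketch of Findler-Felleisen blame)."""
    if ctc[0] == "flat":
        if not ctc[1](value):
            raise ContractError(pos)
        return value
    # arrow: checking is delayed until the function is applied
    _, dom, rng = ctc
    def wrapped(x):
        # arguments flow from the client, so the blame roles swap
        checked = monitor(dom, x, neg, pos)
        return monitor(rng, value(checked), pos, neg)
    return wrapped

is_int = flat(lambda v: isinstance(v, int))
f = monitor(arrow(is_int, is_int), lambda x: x + 1, "server", "client")
print(f(3))  # 4
```

Calling `f("a")` blames `"client"` (bad argument), while a server returning a non-integer is blamed as `"server"`—exactly the role swap that makes higher-order blame assignment subtle.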

Subsequent work turns to many aspects and applications of higher-order contracts: semantics (what does it mean for a component to satisfy its contracts, what is “blame”), pragmatics (does blame assignment help programmers), implementation technology (how can contracts effectively and efficiently protect mutable objects), and more. The idea also finds its way into gradual typing via the system’s re-use in the construction of Typed Scheme. The most notable development investigated the correctness and completeness of blame assignment—resulting in the complete monitoring property. It demands that a contract system assign blame correctly and monitor all channels of communication between modules [Dimoulas et al. 2012].

The Operational Semantics of Gradual Typing

Olek Gierczak

Gradual typing seeks to combine static with dynamic typing while maintaining as many benefits of each as possible, such as the program-analysis benefits of static typing and the legacy-code support of dynamic typing. The prehistory of gradual typing is work on interoperability between statically and dynamically typed languages, one notable example being interoperability between Java and Scheme [Gray et al. 2005]. Echoing interoperability’s emphasis on how values flow between languages, research on gradually typed systems emphasizes how a value is checked at the boundary between the statically and the dynamically typed portions of a program.

The original core formalizations of gradually typed systems introduce the Natural semantics for checking values at boundaries [Siek and Taha 2006; Tobin-Hochstadt and Felleisen 2006]. Alternatives are due to Vitousek et al. [2014] (also see [Vitousek et al. 2017]), who propose “transient checks,” and to industrial systems, such as TypeScript, which “erase” boundaries [Bierman et al. 2014].

For Natural, checks are inserted and propagated so that when a value needs to have a certain type, it is either checked fully against that type or wrapped in a delayed-checking monitor, similar to higher-order contracts. For Transient, the translation adds checks that ensure only first-order properties; for example, when a function type is needed, Transient checks merely that the value is a function of some type. For Erasure, as the name suggests, no checks are added in the translation. The type is “erased,” meaning values flow without restriction between different regions of code.
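The three checking regimes can be contrasted with a toy boundary-crossing sketch. This is not any published formalization: types are encoded as the string `"int"` or a `(dom, rng)` pair, and plain assertions stand in for proper blame tracking:

```python
def erasure(ty, v):
    # no check at all: the type is simply erased
    return v

def transient(ty, v):
    # first-order check only: is the value the right kind of thing?
    if ty == "int":
        assert isinstance(v, int), "boundary error"
        return v
    dom, rng = ty                     # ty = (dom, rng) for functions
    assert callable(v), "boundary error"
    return v                          # the function's behavior is unchecked

def natural(ty, v):
    if ty == "int":
        assert isinstance(v, int), "boundary error"
        return v
    dom, rng = ty
    assert callable(v), "boundary error"
    # wrap in a monitor that checks deeply at every application
    return lambda x: natural(rng, v(natural(dom, x)))

untyped = lambda x: "hi"              # an untyped function crossing at int -> int
print(transient(("int", "int"), untyped)(1))  # hi
```

Under Transient the lying function slips through with only its “function-ness” checked; under Natural the same call raises at the application, because the wrapper re-checks the result against `int`.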

Given these varying semantics, researchers sought to compare them and justify their particular design choices with formal properties of the languages. In general, Erasure either fails the properties or satisfies them trivially. The most familiar property is type soundness. In the case of Natural, values have exactly the expected type, but in Transient, a value is guaranteed only a matching top-level constructor, because first-order checks do not look deeper [Greenman and Felleisen 2018].

More recently, Greenman et al. [2019] show that Natural satisfies interesting properties that Transient fails. Technically, they present and prove a version of complete monitoring for gradually typed languages. Originally developed for the contract world, complete monitoring demands that whenever a value goes through a type boundary it is possible to check it completely for all possible uses. They show that Natural does satisfy this property while Transient does not.

As an alternative approach to comparing these semantics of gradual typing, Chung et al. [2018] give three example programs in a common framework—Kafka—and use them to demonstrate differences between the three semantics. These litmus tests exercise the occurrence of unrelated types, incompatible types, and crossing through multiple boundaries. They show all three run in Erasure, the unrelated types example signals a problem in Transient as well as Natural, and the crossing multiple boundaries example signals a violation in Natural only.

Automatic Differentiation

Daniel Melcer

Automatic differentiation allows a programmer to calculate the derivative of a function or pieces of the Jacobian matrix when a function has several inputs or outputs.

Wengert [1964] presents the first Western description of automatic differentiation using dual numbers, that is, structures that track the value of a function for a given input along with its derivative at that same input. (Before the “iron curtain” came down, applied mathematicians and theoretical computer scientists in the West were unaware of advances in the East, such as Beda et al. [1959]’s work on forward-mode automatic differentiation.) This approach is later refined to use structs instead of size-2 vectors [Rall 1984].
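A dual number is just a value/derivative pair propagated through arithmetic by the chain rule. A minimal Python sketch—operator overloading stands in for Wengert’s original formulation:

```python
class Dual:
    """A dual number: a value together with its derivative."""
    def __init__(self, val, dot):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o, 0.0)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        # product rule: (uv)' = u'v + uv'
        o = o if isinstance(o, Dual) else Dual(o, 0.0)
        return Dual(self.val * o.val,
                    self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__

def derivative(f, x):
    # seed the input with derivative 1.0 and read off the result
    return f(Dual(x, 1.0)).dot

print(derivative(lambda x: x * x + 3 * x, 2.0))  # 7.0
```

Running the function once on dual inputs yields the derivative with respect to the seeded input, which is exactly why forward mode must be re-run once per input variable.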

This dual-number approach is effective when there are many output variables but few input parameters. But it must be run O(n) times with respect to the number of input variables, imposing an inefficiency as the number of inputs grows. Speelpenning [1980] introduces (an implementation of) the “reverse mode,” which keeps a log of computations. Using this log, it is possible to back-propagate the derivative of a single output value through all intermediate values, eventually obtaining the gradient with respect to the entire input. Speelpenning’s tool is a precompiler, transforming FORTRAN source code to calculate derivatives. Source-transformation AD tools continued to improve—covering more of the full language—resulting in ADIFOR, the most popular tool for a long time [Bischof et al. 1992].
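Reverse mode can likewise be sketched in a few lines: record local derivatives while computing forward, then sweep the log backwards from the output. (A toy: a real implementation processes the log in reverse topological order, which this simple stack walk handles only for expressions like the one shown.)

```python
class Var:
    """A node in the computation log ("tape") for reverse-mode AD."""
    def __init__(self, val, parents=()):
        self.val, self.parents, self.grad = val, parents, 0.0
    def __add__(self, o):
        return Var(self.val + o.val, [(self, 1.0), (o, 1.0)])
    def __mul__(self, o):
        return Var(self.val * o.val, [(self, o.val), (o, self.val)])

def backprop(out):
    """Accumulate d(out)/d(node) backwards through the log."""
    out.grad = 1.0
    stack = [out]
    while stack:
        node = stack.pop()
        for parent, local in node.parents:
            parent.grad += node.grad * local
            stack.append(parent)

x, y = Var(2.0), Var(3.0)
z = x * y + x           # z = xy + x
backprop(z)
print(x.grad, y.grad)   # 4.0 2.0
```

One backward sweep yields the gradient with respect to every input at once, at the cost of storing the whole log—the memory issue that check-pointing later addresses.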

In 2007, Siskind and Pearlmutter [2007b,a] reported on a new, reification-based technique for AD in the world of higher-order languages. But also see Manzyuk et al. [2019] for unsolved problems in this context.

Reverse mode has higher memory requirements than forward mode. Griewank [1992] partially mitigates this issue through check-pointing (while increasing the amount of required computation). These source transformation tools cannot handle every program, notably failing upon encountering a recursive function. Griewank et al. [1996] finally introduces a C++ runtime library that handles the full reverse mode, avoiding the many pitfalls of source transformations.

Meetings

16 Feb 2021

I will meet with everyone 1-1 on this day to discuss your progress for your first topic. Here is the schedule:

time      student
09:50am   Helena
10:00am   Josh
10:10am   Ryan
10:20am   Nathan
10:30am   Mitch
10:40am   Andrew
10:50am   Donald
11:00am   Emily
11:10am   Lucy
11:20am   Ankit
11:30am   Olek
11:40am   Jared

We will use the normal class link for these meetings.

Each meeting will discuss the following items in order:
  • your reading progress

  • your abstract

  • your note taking

  • a “dress rehearsal”

  • if you are a PhD student, your second topic; for others, an optional second topic.

MS/undergraduate students who have delivered a lecture are excused.

Hindley-Milner Type Inference

Josh Goldman

The goal of type inference is to reduce the programmer’s workload. Generally speaking, type inference is an algorithm that restores the type specifications of variables that the programmer leaves out. Every such omitted type specification is a variable in a system of constraints, determined by the surrounding program context. As with ordinary mathematical equation systems, a type inference algorithm may find no solution, an exact solution for all variables, or an infinite number of them. In this last case, the algorithm assigns many types to a function, say, and this function can be used at any of these types.

The idea was pre-discovered by an office mate of Milner’s, Hindley, ten years before, which is why it is known as Hindley-Milner type inference. Milner [1978] was the first to recognize the practical value of such an algorithm in the context of programming languages. He simultaneously injected the idea of a “well-typed program,” as characterized by Semantic and Syntactic Soundness Theorems. With his student Luis Damas, Milner later constructed Algorithm W to find the most general type for any expression in a functional subset of ML [Damas and Milner 1982]. Similar algorithms had been in PL folklore for some years.
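At the heart of Algorithm W is unification over type terms. Here is a minimal sketch—not Damas and Milner’s presentation: type variables are lowercase strings, arrow types are tuples, and the occurs check is omitted:

```python
def resolve(t, subst):
    """Follow substitution links for a type variable."""
    while isinstance(t, str) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst):
    """Solve the constraint a = b, extending subst (no occurs check)."""
    a, b = resolve(a, subst), resolve(b, subst)
    if a == b:
        return subst
    if isinstance(a, str) and a.islower():       # lowercase = type variable
        return {**subst, a: b}
    if isinstance(b, str) and b.islower():
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple):  # ("->", dom, rng)
        subst = unify(a[1], b[1], subst)
        return unify(a[2], b[2], subst)
    raise TypeError(f"cannot unify {a} and {b}")

# inference step: if f : a -> b is applied to an Int, then a = Int
s = unify(("->", "a", "b"), ("->", "Int", "c"), {})
print(resolve("a", s))  # Int
```

Algorithm W walks the program, generates such constraints at every application, and generalizes the leftover variables into polymorphic type schemes.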

Hindley-Milner type inference turned out to be less optimal than thought, in terms of both performance and pragmatics. First, programmers initially thought of W as a linear-time, or at least polynomial-time, algorithm until Mairson [1989] proved typability in ML complete for deterministic exponential time. Second, people expected that the Hindley-Milner form of type inference could be extended to a broad range of language features. In 1993, Henglein [1993] reduced type inference for the Milner-Mycroft variant to semi-unification, a problem known to be undecidable. Due to both limitations, researchers and programmers have questioned the status of type inference for the past two decades.

System F Type Systems: From Theory to Practice

Ryan Guo

In the simply typed lambda calculus (STLC), functions such as the identity function must be re-defined for arguments of different types, even though these functions all share the same behavior. This duplication becomes more severe as programs become more complex.

An extension of the STLC, due to Reynolds [1974], introduces abstraction over types, i.e., parametrically polymorphic functions. This solves the aforementioned duplication issue. Reynolds proves a Representation Theorem for this system, which shows that different representations of primitive types do not affect the behavior of a typed program in this extension. The extension is better known as System F, named by Girard, who independently discovered the same system in the context of proof theory in logic.

Cardelli and Wegner [1985] propose to use the “Fun” language for studying extensions of this system. It brings together the idea of parametric polymorphism from System F, existential quantification, and bounded quantification. However, bounded quantification cannot model objects that are elements of recursively-defined types. In response, Canning et al. [1989] introduced F-bounded quantification.

Modern developers now rely on some of these ideas in the context of mainstream programming languages, a development that took three steps. The first is a language dubbed Pizza, due to Odersky and Wadler [1997]. The language runs on the JVM and syntactically extends Java, making use of F-bounded quantification and existential types to implement type-sound generic classes and methods. Pizza comes with abstract data types and higher-order functions, demonstrating their compatibility with Java. But the soundness constraints on these generics disallow linking Pizza code with certain Java libraries—raising a serious compatibility issue. To fix this flaw, Bracha et al. [1998] propose a small change, dubbed the GJ language. It is a conservative extension of Java that adds only generics; GJ introduces raw types to handle backward compatibility and retro-fitting to handle forward compatibility.

Ideas: From Conception to Programming

PER Semantics of Types

Nathaniel Yazdani

abstract and citations needed

The PI Calculus

Mitch Gamburg

The PI-calculus is the most well-known and most well-developed formal system—among dozens—for studying concurrent and communicating software. Its immediate predecessor, the Calculus of Communicating Systems, describes a software component, called an agent, as its interactions with an external observer, which may also be an agent [Milner 1980]. All interactions take place as message exchanges via dedicated channels. Extended CCS  [Engberg and Nielsen 1986] allows agents to send channels over channels. As a result, agents may dynamically change their link-based connections. The PI-calculus itself is the result of removing all distinctions between channels and other values (such as numbers)  [Milner et al. 1992].

While the simplification of the value universe makes the PI-calculus more expressive than CCS and simplifies it at the same time, it also injects a sense of chaos and confusion. As Milner [1991] argues, the best way to inject discipline into such a world is to add a form of type system, which results in the polyadic PI-calculus. To facilitate communication in this context, the calculus also comes with tuple values.

Among its many variants, Pierce and Sangiorgi [1994]’s interface-oriented PI-calculus stands out. It extends Milner’s sorting discipline with specification of a channel’s interface: a channel could be read-only, write-only, or both read and write. By imposing a linearity constraint on such interfaces, Kobayashi et al. [1999] lay the foundation for behavioral types—also known as session types—for the agents of the PI-calculus.

Session Types

Andrew Wagner

The development of process calculi in the 1970s and 1980s triggered the search for type systems that could banish deadlocks and races from process-oriented programs.

One promising approach, due to Honda [1993], identifies a core set of types for interactions between two processes. Central to ensuring communication safety in Honda’s theory are the properties of linearity and duality. Takeuchi et al. [1994] builds upon this set of types but also introduces the notion of a session, which is analogous to a type abstraction from ML. Rounding out the formative phase of the theory, Honda et al. [1998] add recursion and also channel delegation, a feature inspired by concurrent object-oriented programming.
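Duality is easy to make concrete: whatever one endpoint sends, the other must receive. A small sketch, using an invented tuple encoding rather than Honda’s notation:

```python
# A binary session type, sketched as nested tuples:
#   ("send", payload, cont), ("recv", payload, cont), or "end".

def dual(s):
    """The dual endpoint: every send becomes a receive and vice versa."""
    if s == "end":
        return "end"
    op, payload, cont = s
    flipped = {"send": "recv", "recv": "send"}[op]
    return (flipped, payload, dual(cont))

# A protocol: send an int, then receive a bool, then stop.
client = ("send", "int", ("recv", "bool", "end"))
server = dual(client)
print(server)  # ('recv', 'int', ('send', 'bool', 'end'))
```

Together with linearity—each channel endpoint is used exactly once per step—this pairing is what rules out communication mismatches in the binary theory.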

Generalizing from two to many parties poses a daunting challenge, because duality is a linchpin of the theory. To accomplish it, Honda et al. [2008] replaces duality with a new notion of consistency, called coherence. It is defined over global types that jointly describe the multiparty interaction. To prove coherence, one projects global types onto local types, which are much like the binary session types of old.

Session types abound in the literature, but there is hardly a canonical system. One approach to finding a canonical system equates session types with propositions [Caires and Pfenning 2010; Wadler 2012], developing a correspondence with logic in the spirit of Curry and Howard. Another approach encodes session-typed processes [Dardha et al. 2017], using a variant of the linear π-calculus from the previous lecture.

Functional Reactive Programming

Donald Pinckney

As computer hardware shrank in size, software-controlled reactive systems became a center of focus. In response, a flurry of research in the 1980s produced a new breed of languages, so-called “dataflow” languages [Benveniste et al. 1991; Berry and Gonthier 1991; Halbwachs et al. 1991; Wadge and Ashcroft 1985]. Behind their design was the idea of writing down equations that specify how input signal data flows into output—and of generating the program from these equations. (Esterel is alive and well; Airbus uses it to design some control software for its aircraft.)

In search of a GUI programming model for purely functional languages, Elliott and Hudak [1997] recognized the similarity of the problem to reactive control systems. Their adaptation of the idiom came to be known as Functional Reactive Programming (FRP). Since they were working with Haskell, FRP differed from the original dataflow languages in two ways: first, they exploited Haskell’s call-by-need semantics, and second, they added continuous time, which allowed for a natural expression of animations. But time is really based on sampling, which poses a new semantic problem.

Wan and Hudak [2000] explored approximate sampling in FRP, specifically conditions for getting a sampling-based semantics to agree with the continuous semantics. Their research uncovered pathological FRP terms, including Zeno’s paradox. Wan et al. [2001] then identified a pathological class of terms needing unbounded space. The understanding of these classes of pathological terms provided most of the motivations for foundational FRP research going forward. Over the next two decades, a number of researchers [Krishnaswami et al. 2012; Ploeg and Claessen 2015] used a variety of techniques to eliminate these pathological classes of programs from FRP.

In parallel with the development of FRP, Hughes [1991] developed the notion of Arrows, a restricted computational model consisting of reified functions joined together by combinators. Nilsson et al. [2002] made use of Hughes’s Arrows to present an “arrowized” version of FRP. This revised semantics allowed for dynamically switching the dataflow graph structure without losing space bounds.

Early work on FRP rested on the implicit assumption that a call-by-need semantics was needed to program in this manner. Cooper and Krishnamurthi [2006] showed the independence of FRP from call-by-need and, in the process, exposed the fundamental implementation principles of graph-based re-computation in a higher-order world. This work inspired the implementations of Flapjax [Meyerovich et al. 2009] and Elm [Czaplicki and Chong 2013], practical FRP languages for building Web pages.
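The essence of graph-based re-computation—a change to a signal pushes new values to its dependents—can be sketched in a few lines. (A toy: the `Cell`/`Lift` names are invented, and real implementations such as FrTime schedule updates in topological order to avoid glitches.)

```python
class Cell:
    """A time-varying value; dependents are recomputed on change."""
    def __init__(self, value=None):
        self.value, self.dependents = value, []
    def set(self, value):
        self.value = value
        for dep in self.dependents:
            dep.recompute()

class Lift(Cell):
    """Apply an ordinary function to signals: FRP's `lift`."""
    def __init__(self, fn, *inputs):
        super().__init__()
        self.fn, self.inputs = fn, inputs
        for i in inputs:
            i.dependents.append(self)
        self.recompute()
    def recompute(self):
        self.set(self.fn(*(i.value for i in self.inputs)))

x = Cell(1)
y = Lift(lambda v: v * 10, x)
x.set(3)
print(y.value)  # 30
```

Note that nothing here requires call-by-need: eager update propagation through an explicit dependency graph suffices, which is exactly the independence result mentioned above.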

Tracing JIT Compilation

Emily Herbert

Over the past two decades, the use and study of just-in-time (JIT) compilers have overtaken those of ahead-of-time (AOT) compilers. Within the family of JIT compilers, the tracing variant has come to dominate the field. Roughly speaking, a tracing JIT compiler identifies repeatedly executed traces of instructions and compiles those.

McCarthy [1960] hinted at the basic idea of JIT compilation in the context of LISP, which came with an implementation that allowed programmers to selectively compile certain functions to machine language and store them in core memory. Ungar’s Self project—a prototype-oriented language with strong influences on JavaScript—pushed JIT compilation in the late 1980s [Chambers and Ungar 1989]. This Self compiler generated machine code on demand and cached it for later use. While the work on Self came to a halt in 1998, when Sun abandoned the language in favor of Java, Cramer et al. [1997] transferred the JIT compilation technology to a compiler for the new language.

Tracing JITs began to take shape a little later, when Bala et al. [2000] developed Dynamo, a system that applied JIT optimizations to machine code. To this end, they borrowed the concept of an “execution trace” from systems work. Inspired by Dynamo, Gal et al. [2006] introduced the “trace tree” concept for a Java compiler. This work became the gateway to modern systems such as the TraceMonkey JavaScript engine [Gal et al. 2009], PyPy for Python [Bolz et al. 2009], SPUR for CIL [Bebenita et al. 2010], and, more recently, a PHP 8 JIT compiler.
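The first step of any tracing JIT—finding a hot loop—amounts to counting backward branches. This toy interpreter for a hypothetical two-instruction bytecode shows the idea (the threshold, instruction set, and recording are all invented; a real JIT would record and compile the trace where this sketch merely notes its start):

```python
# Bytecode: ("dec",) decrements the counter; ("jnz", target) jumps
# back to target if the counter is nonzero.
HOT = 3  # how many times a backward branch fires before its loop is "hot"

def interpret(program, counter):
    pc, hot_counts, traces = 0, {}, []
    while pc < len(program):
        op = program[pc]
        if op[0] == "dec":
            counter -= 1
            pc += 1
        elif op[0] == "jnz":
            target = op[1]
            if counter != 0 and target <= pc:   # taken backward branch
                hot_counts[target] = hot_counts.get(target, 0) + 1
                if hot_counts[target] == HOT:
                    # a real tracing JIT would now record the loop body,
                    # compile it, and patch in the compiled code
                    traces.append(target)
                pc = target
            else:
                pc += 1
    return counter, traces

print(interpret([("dec",), ("jnz", 0)], 10))  # (0, [0])
```

Hot-counter heuristics of roughly this shape appear from Dynamo onward; the interesting engineering lies in what happens after the counter trips.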

Meetings

16 Mar 2021

I will meet with every PhD student 1-1 to discuss the submitted reading list for the second topic. Here is the schedule:

time      student   topic
09:50am   Olek      Logical Relations
10:05am   Jared     Compiler Intermediate Forms
10:20am   Andrew    Designing Multi-Language Systems
10:35am   Donald    ML Modules
10:50am   Emily     Visual Teaching Languages
11:05am   Lucy      The Evolution of Dependent Types
11:20am   Ankit     The Implementation of Dependent-Type Proof Systems

The primary goal of this meeting is to assess the reading list and the theme within the topic, and to prune the list where possible.

Programming Languages and Operating Systems

Lucy Amidon

In 2021, three similar operating systems dominate the general-purpose market. They imply a common set of abstractions that an operating system supplies in order to allow convenient and multiplexed access to hardware. On top of these abstractions, programming language runtimes implement their own abstractions. Throughout the history of languages and operating systems, however, there have been many attempts to unify these two layers, reducing the friction sometimes caused by the impedance mismatches between them.

The LISP Machine [Bawden et al. 1977] is one of the earliest and most famous examples of a system that unified language and “operating-system” abstractions. As the name suggests, LISP machines famously used special-purpose hardware optimized for executing LISP, which, at the time, was a necessity for reasonable performance. At the same time, two alternative projects with similar goals emerged at the Xerox Palo Alto Research Center: Smalltalk [Ingalls 2001, 2021], created by language designers, and Pilot [Redell et al. 1980], created by systems designers. Pilot introduced the idea of providing modularized, standardized interfaces to the “operating-system” components of the runtime, as well as using the properties of the language for some process protection [Lampson 1974]. Oberon is yet another “languages as operating systems” project from this era [Wirth and Gutknecht 1991]; it focused on language-level protection and component boundaries.

While the research area went dormant for almost two decades, its resurrection in the late 1990s caused systems researchers to think of their own area as “irrelevant,” as Rob Pike so memorably wrote in a widely discussed blog post [Pike 2000]. The focus of these new language-based projects was mostly linguistically enforced safety:
  • Bershad et al. [1995] used the properties of the Modula-3 compiler and runtime system to enable loading “untrusted” kernel extensions. Active research on this idea continues to this day, with a focus on using language-based systems for purposes akin to kernel extensions in very small computers [Levy et al. 2017].

  • Flatt et al. [1999] introduced concepts surrounding scheduling and protection to the world of LISP, allowing safety mechanisms for resource sharing—traditionally the domain of “real operating systems”—to be expressed inside a language, expressing the last major operating system abstraction intralinguistically.

  • Hunt and Larus [2007] developed a prototype of a complete general-purpose operating system that used linguistic mechanisms and static analysis to enforce safety and safe communication.

Array Programming Languages

Ankit Kumar

In 1957, Kenneth E. Iverson developed a mathematical notation to describe numerical processing for his courses at Harvard. The notation was implemented as the APL language during his time at IBM in 1965 [Iverson 1962].

APL, mostly known now for its distinct character set for mathematical operations, supports primitives for manipulating matrices of any rank—so-called rank polymorphism, also known as “implicit scaling.” APL operators deal with large multi-dimensional data as a unit, which allows the language to dispense with explicit control-flow structures such as iteration or recursion (for many classes of programs). They also eliminate the conventional von Neumann view of processing one word of data at a time.
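Implicit scaling can be sketched by lifting a scalar operation over nested lists of any depth. This is a toy model of the idea, not APL’s actual conformability rules:

```python
def scale(op):
    """Lift a scalar operation so it maps over arrays of any rank."""
    def lifted(a, b):
        if isinstance(a, list) and isinstance(b, list):
            # equal-rank case: pair up corresponding elements
            return [lifted(x, y) for x, y in zip(a, b)]
        if isinstance(a, list):
            # scalar on the right: spread it across the array
            return [lifted(x, b) for x in a]
        if isinstance(b, list):
            return [lifted(a, y) for y in b]
        return op(a, b)                 # two scalars: apply the operation
    return lifted

add = scale(lambda x, y: x + y)
print(add([[1, 2], [3, 4]], 10))   # [[11, 12], [13, 14]]
print(add([1, 2], [10, 20]))       # [11, 22]
```

The ambiguities mentioned below arise exactly here: with nested arrays, it is unclear at which depth such a lifted operator should stop recurring.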

Rank polymorphism relies on an ad-hoc semantics and a complex implementation. To eliminate some of this complexity, only APL’s built-in operators scale. But, with the introduction of nested arrays, it becomes difficult to control the grain of scaling. Two research projects address this problem in two different ways:
  • Thatte [1991] introduces a coercion-based semantics for implicit scaling, based on simple structural subtyping rules. This approach pragmatically eliminates ambiguities in a large class of programs.

  • Jay [1995] develops shape polymorphism and its semantics. This framework allows the parameterization of user-defined functions over “shapes,” but the shape of the result may depend only on the shape of the input(s) not their element type. While this framework supports the detection of shape errors at compile time, the class of programs benefitting from this technique is rather limited.

Modern Haskell accommodates various forms of shape polymorphism for arrays through the use of type classes and type families [Keller et al. 2010]. However, this implementation retains Jay’s restriction on shape polymorphism.

Remora [Slepak et al. 2014] is the first theoretical framework to overcome some of these limitations. In particular, it is expressive enough to describe array computations whose dimensions cannot be determined statically.

Relational Programming

Helena Dworak

Relational programming seeks to remedy the “extra-logical” features that plague logic programming. Relational programming languages, such as miniKanren, evolved in response to the broken guarantees of logic languages or, more specifically, Prolog.

Like many language developments outside the mainstream, classical AI, specifically natural-language processing, inspired the creation of the first logic programming languages. More generally, though, logic programming promised an intriguing new method of solving computational problems: describe the problem, and let the “computer” figure out how to find the solution [Emden and Kowalski 1976; Kowalski 1974].

The passive voice of this sentence is the implicit acknowledgment of Francophone researchers that the Brits might just have had something to do with this story too.

Prolog, the best-known logic programming language, was developed in 1971 as a means to express traditional logic as a grammar. Prolog leverages unification, a powerful tool that can be thought of as an advanced pattern matcher, to answer queries in either direction [Colmerauer and Roussel 1996].
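Unification itself is compact enough to sketch. The following toy Python version (omitting the occurs-check, which standard Prolog also skips by default) solves equations between variables and first-order terms:

```python
# A minimal sketch of first-order unification, the engine behind
# Prolog's query answering. Variables are strings starting with an
# uppercase letter; compound terms are tuples ('functor', arg1, ...).

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    # Follow variable bindings to their current value.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst=None):
    # Return an extended substitution, or None on failure.
    subst = dict(subst or {})
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        subst[a] = b
        return subst
    if is_var(b):
        subst[b] = a
        return subst
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None
```

Note the bidirectionality: `unify(('father', 'X', 'bob'), ('father', 'alice', 'Y'))` binds `X` to `alice` and `Y` to `bob` in one pass, regardless of which side holds the variable.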

In the early 1980s, Japan launched the Fifth Generation Computer Systems project with the goal of solving all problems for good. The research team chose Prolog as the means to accomplish this goal and also as a goal, because they realized that the existing Prolog language was too weak. The overly ambitious project failed, and AI research involving Prolog rapidly began to diminish [Emden 2010].

Prolog did not “die” for its association with a failed project alone. Its numerous extra-logical facilities—cut, assert, retract—caused a deep rift between theoreticians and practitioners of logic programming [Byrd 2014].

The goal of relational programming is the original goal of logic programming. It bans extra-logical features, abiding strictly by relations and their conjunctions and disjunctions. Additionally, relational programming improves on both the implementation of unification and Prolog’s search-and-resolution algorithm. As a result, relational programming, as materialized in miniKanren, allows the straightforward creation of intriguing programming tools: program generators, or generators of quines and twines [Friedman et al. 2018].

How (Not) to Benchmark

Cameron Moy

The history of performance evaluation is the history of doing it wrong:
  • People developed benchmarks to compare new run-time systems and compiler optimizations. Those benchmarks were wrong [Blackburn et al. 2006; Richards et al. 2011].

  • People measured whether a new technique or technology had a positive impact on performance. Those measurements were wrong [Mytkowicz et al. 2009].

  • People developed a standard set of metrics to report when conducting an experiment. Those metrics were wrong [Georges et al. 2007].

  • People made fundamental assumptions about the properties of run-time systems and baked those into experiments. Those assumptions were wrong [Barrett et al. 2017].

The first step toward accurate performance evaluation is understanding and rectifying past mistakes, knowing that there are surely more to correct in the future. For some constructive hints, see the SIGPLAN checklist.

Logical Relations

Olek Gierczak

Proof by logical relations is a technique for establishing properties about programming languages. The idea is to (1) define a relation on elements of a language that is inductive over types, (2) prove that the relation contains all relevant elements in the language, and (3) show the relation implies the desired property. It is common to start with the third step and synthesize the relation via generalization.

The first known uses of logical relations for programming languages concern normalization of the STLC [Tait 1967] and System F [Girard 1972]. Plotkin [1977b] employs a logical relation to prove computational adequacy for a domain-theoretic model of PCF; that is, a reduction sequence for a program p is finite if and only if the denotation of p is not bottom. Since PCF includes the Y combinator to support recursive function definitions, Plotkin combines the logical-relations technique with a form of step indexing to unfold recursion up to a specified, finite depth. However, these early results deal with terminating languages or simply-typed ones with non-termination as the only effect. Over the years, a number of researchers expanded the technique to cope with allocation [Pitts and Stark 1993], quantification [Pitts 2005], and recursive types [Birkedal and Harper 1997].
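For the normalization results above, a Tait-style relation for the simply typed λ-calculus can be sketched as follows (the notation is a modern reconstruction, not that of the original papers):

```latex
% Unary logical relation for strong normalization of STLC (a sketch).
% SN(e) means "e is strongly normalizing"; b ranges over base types.
\[
\begin{aligned}
  \mathcal{R}_{b} &= \{\, e \mid e : b \ \wedge\ \mathrm{SN}(e) \,\} \\
  \mathcal{R}_{\tau_1 \to \tau_2} &= \{\, e \mid e : \tau_1 \to \tau_2
      \ \wedge\ \mathrm{SN}(e)
      \ \wedge\ \forall e' \in \mathcal{R}_{\tau_1}.\; e\,e' \in \mathcal{R}_{\tau_2} \,\}
\end{aligned}
\]
```

Step (2) of the recipe then amounts to showing that every well-typed term inhabits the relation at its type, and step (3) is immediate, since membership includes SN.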

Appel and McAllester [2001] are the first to apply logical relations in a purely operational setting, with a step-indexed unary logical relation that suffices to prove a form of type soundness for a lambda calculus with recursive types. The paper is concerned with providing a relatively simple and widely applicable model for foundational proof-carrying code (PCC), but the lasting idea is their proof technique.

Finally, Ahmed [2004] generalizes this technique to languages with mutable references, a truly interesting effect. Naive definitions of type invariants for languages with mutable references suffer from a circularity in the definition of the logical relation, specifically in the possible-worlds model of the store. Ahmed’s solution is to apply the step-indexing approach to the possible-worlds model for mutable references. More generally, Ahmed’s dissertation makes the logical-relations technique accessible by developing new notation for, and a thorough explanation of, the technique. Several examples demonstrate how the technique applies beyond “toy” languages, including languages with multiple effects.

The Operational Semantics of Multi-Language Systems

Andrew Wagner

Cross-language interoperability comes in many shapes and sizes; e.g., FFIs, frameworks like .NET, and interop-first languages like Scala. As of the early 2000s, research on these multi-language systems had largely focused on the “bits and bytes” of building efficient implementations, while the semantic consequences of mixing languages had not been studied (much).

Matthews and Findler [2009a] outline a general approach for describing the operational semantics of multi-language programs. Reminiscent of gradual-typing techniques (see The Operational Semantics of Gradual Typing), the core idea is to use boundary terms to regulate the flow of values and control between languages, acting like “cross-language casts.” Matthews and Findler showcase the approach on an ML–Scheme mix and demonstrate that much of the existing meta-theoretic toolbox can be upcycled to the multi-language setting.

The similarities are not a coincidence. The problem was “in the air” in the Racket community, and Jacob presented his first ideas in a NU PRL seminar before he published at POPL ’07 [Matthews and Findler 2009b], which inspired Sam and me to adapt our presentations to this framework.

Ahmed and Blume [2011] show that the Matthews-and-Findler approach is useful not only for reasoning about surface languages, but also for proving the correctness of compilers. They develop a multi-language semantics for mixing the source and target languages of a compiler and use it to argue about the equivalence of source programs and their compiled counterparts.

Patterson et al. [2017] apply the Matthews-and-Findler technique to yet another high-level/low-level mix, a mix of a functional language and a typed assembly language. Achieving truly fine-grained interoperability requires an especially careful treatment of the boundaries, since the control flow and organization of an assembly component is drastically different from the functional contexts that may surround it (and vice versa).
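The boundary idea can be sketched as a toy Python guard (an illustration only, not the Matthews-Findler calculus): flat values are checked immediately when they cross into the "typed" side, while functions are wrapped so that checks recur at every call, just as a cross-language cast would.

```python
# A toy "boundary" in the Matthews-Findler spirit (a sketch): values
# crossing from the untyped side to the typed side are checked; a
# function is wrapped so that its argument and result are checked at
# each call, like a cross-language cast.

def to_typed(value, typ):
    if typ is int:
        if not isinstance(value, int):
            raise TypeError("boundary violation: expected int")
        return value
    if typ == "int -> int":
        def wrapped(x):
            if not isinstance(x, int):
                raise TypeError("boundary violation: argument")
            return to_typed(value(x), int)  # check the result, too
        return wrapped
    raise ValueError("unknown type")
```

For instance, wrapping an untyped `lambda x: x + 1` at type `int -> int` yields a function that behaves normally on `3` but raises at the boundary when applied to a string, blaming the crossing rather than failing deep inside the callee.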

Intermediate Compiler Forms for Functional Languages

Jared Gentner

In the 1980s, compiler writers thought call-by-value functional languages were at a disadvantage because the by-value proof system (λv) is weaker than the by-name one (λn) on the same syntax. For programs in continuation-passing style (CPS), the two proof theories coincide [Plotkin 1977a], and such programs make control flow explicit—a property that makes CPS well suited as an intermediate compiler representation.

In Compiling with Continuations, Appel [1991] argues along these lines for the superiority of the CPS representation and justifies its use in the SML implementation of Bell Labs in New Jersey. (The use of CPS is inherited from the Orbit compiler for the T dialect of Scheme.)

In response, Sabry and Felleisen [1992] investigate what is missing from λv compared to λn. They discover a small set A of additional axioms that, when added to λv, makes the theory equivalent to λn:

for all e1, e2:   λv + A ⊢ e1 = e2   if and only if   λn ⊢ cps(e1) = cps(e2).

They also notice that, when oriented properly as reductions, the equations in A satisfy a strong normalization theorem. Flanagan et al. [1993] exploit strong normalization to show that practical compilers effectively "undo" the CPS transformation, in a way that yields programs in A-normal form (ANF) rather than in CPS.
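The difference between the two representations is easy to see on a small example, here sketched in Python rather than in a compiler IR: in CPS every function takes an explicit continuation, while ANF merely let-binds intermediate results and lets calls return normally.

```python
# Direct style, ANF, and CPS for one tiny computation.

def add(x, y):
    return x + y

def square(x):
    return x * x

# Direct style: evaluation order is implicit in the nesting.
def hypot_sq(a, b):
    return add(square(a), square(b))

# ANF: every intermediate result is named, but calls return normally.
def hypot_sq_anf(a, b):
    a2 = square(a)
    b2 = square(b)
    return add(a2, b2)

# CPS: each function takes an explicit continuation k, so control
# flow is fully explicit and nothing ever "returns."
def square_k(x, k):
    k(x * x)

def add_k(x, y, k):
    k(x + y)

def hypot_sq_k(a, b, k):
    square_k(a, lambda a2:
        square_k(b, lambda b2:
            add_k(a2, b2, k)))
```

Calling `hypot_sq_k(3, 4, print)` delivers 25 to the continuation; "undoing" the CPS transform on `hypot_sq_k` recovers essentially `hypot_sq_anf`.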

Fifteen years later, Kennedy [2007] renews the case for CPS, revisiting issues with ANF that come up during optimization. One of these is that conditionals seem to pose a challenge to ANF because control flow branches. In response, Maurer et al. [2017] turn ANF into a graph representation with join points for conditionals. This simple extension to ANF re-enables the standard optimizations for conditionals.

To this day, it remains an open question as to what extent ANF can replace CPS as an intermediate compiler representation or whether compilers can benefit from a mix.

The ML Module System Refinement Types

Donald Pinckney

As Milner transitions ML from the tactic metalanguage of the LCF proof assistant to a general-purpose programming language, MacQueen [1984] recognizes the need to augment ML with a module system in support of “programming in the large.” This separation of interface vs. implementation harks back to Parnas’s ideas (see Higher-Order Contracts) and Wirth’s Modula and Modula-2 languages. This first module system proposes two essential elements: signatures, which are the public interfaces of modules, and structures, which are their private implementations. Additionally, it makes provisions for functors—functions from structures to structures—and nested modules—structures encapsulated in structures.

Shortly after, MacQueen [1986] sketches out the first type-theoretic foundations for his ML module system, using both existential and dependent types. While this foundation is substantially expanded, first by Harper and Mitchell [1988] and then by Harper et al. [1990]—work originally dubbed XML, or the “essence of ML”—these foundations fail to account for several subtle issues, especially generativity and type-sharing constraints.

Concurrently, Harper and Lillibridge [1994] and Leroy [1994; 1995] observe that this failure is due to a trade-off concerning information hiding. The existing models either disallow any structure information from leaking out—formalized by weak sum types, and unworkable for real programming—or allow all structure information to leak out—formalized by strong sum types, and thus endangering separate compilation. To remedy this problem, each works out a novel type system that allows the careful propagation of partial information about types from one module to another, dubbed translucent and manifest types, respectively.

Russo [1999] shows that ML modules can be formalized without the full power of dependent types, using purely second-order quantification; he also shows that generativity—previously considered to be extra-logical—actually corresponds to existential quantification over types. Eventually, Rossberg et al. [2014] clarify this idea even further with a direct translation of ML modules into System F-omega.

The Evolution of Dependent Types

Lucy Amidon

Dependent type systems underpin most proof assistants. Active research seeks to integrate them into programming languages to enable the correct-by-verified-construction development of programs.

Curry [1934] proposes the formulae-as-types notion, observing that the types of the primitives of his combinator calculus correspond with logical axioms. The legacy of this style of thinking may still be seen today in, for example, the HOL family of proof assistants. Although the history of the lambda calculus is deeply intertwined with the story of combinators, initial work attempts to equate λ-terms with propositions directly.

Howard [1969] is the first to suggest that propositions might instead correspond to types, with concrete terms representing proofs of these types, and with the inference rules of the logical system in question corresponding directly to the typing rules of the calculus. (Howard circulated this idea via a set of notes, published only fifteen years after their initial formulation.) The same observation is independently incorporated into the first dependently typed proof assistant, AUTOMATH [Bruijn 1970].

Eventually, Martin-Löf [1982] refines the treatment of equality and recognizes the need for a predicative universe hierarchy. He also combines the types-as-propositions notion with the terms-as-programs notion to produce a dependently typed model of a programming language. Coquand [1988b,a] simplifies the dependently typed core calculus with a different universe hierarchy, resulting in the Calculus of Constructions but missing inductive definitions. The Calculus of Inductive Constructions [Paulin-Mohring 2014] reintroduces inductive types to the Calculus of Constructions, resulting in a still-simple and small, but powerful calculus—essentially that used by the popular Coq proof assistant.

The Implementation of Dependent-Type Proof Assistants

Ankit Kumar

LCF [Milner 1972, 1979] is a mechanized system for proving theorems in the Logic of Computable Functions, which was devised by Dana Scott in 1969 but remained unpublished until 1993. Nowadays, the “LCF approach” refers to three key ideas: (1) there exists a special abstract type thm of theorems; (2) the constructors of the abstract type are the inference rules of the logical system; and (3) the system is implemented in a strongly typed high-level language.
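The three ideas can be sketched in a few lines (a toy in Python, which can only approximate the protection that ML's abstract types enforce statically): theorems are values of one type, and the only way to mint them is through the inference rules.

```python
# A toy LCF-style kernel (a sketch, not Milner's design in detail):
# Theorem values can only be minted by the inference rules below,
# which hold a private token. In ML, the abstract type thm enforces
# this statically; Python can only approximate it.

_KERNEL = object()   # private capability held by the inference rules

class Theorem:
    def __init__(self, prop, _token=None):
        if _token is not _KERNEL:
            raise RuntimeError("theorems arise only from inference rules")
        self.prop = prop

def axiom_refl(t):
    # axiom:  |- t = t
    return Theorem(("eq", t, t), _token=_KERNEL)

def rule_sym(thm):
    # rule:  from |- a = b  derive  |- b = a
    _, a, b = thm.prop
    return Theorem(("eq", b, a), _token=_KERNEL)
```

Any `Theorem` in scope was therefore produced by some chain of rule applications, which is exactly the soundness argument the LCF approach rests on.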

NuPRL (the prefix “nu” is Greek and pronounced “new”) is a family of instances of the Program Refinement Logic [Constable 1997; Bates 1983]. The system is an automated theorem prover, constructed to assist with the logic-based program development of software systems. It is based on the Martin-Löf intuitionistic type theory [Martin-Löf 1982], which, in turn, is derived from the principles of mathematical constructivism, which requires any existence proof to contain a witness. In NuPRL, a witness is the program implicit in the proof.

Coq [Coquand and Huet 1985] is an implementation of the Calculus of Constructions (CoC), launched in 1984 by G. Huet and T. Coquand. The core of this system is a proof checker for CoC seen as a typed λ-calculus, called the Constructive Engine. This engine is operated through a high-level language permitting the declaration of axioms and parameters, the definition of mathematical types and objects, and the explicit construction of terms in typed λ-calculus (proof objects). An interactive theorem prover permits the incremental construction of proof trees in a top-down manner, subgoaling recursively and backtracking from dead ends by executing tactics. Keeping with the LCF approach, a basic set of tactics is predefined; users can extend this set via a tactic language. Version 4.4 is the first version with inductive types, encoded impredicatively in the calculus using a subsystem, Rec, due to Christine Paulin [Paulin-Mohring 2014].

HOL [Gordon 1991], short for Higher Order Logic, is a direct descendant of LCF. It uses Classic ML, LCF’s metalanguage, as the language in which proofs are conducted. HOL is based on classical logic and uses strict compile-time type checking to protect theorems, which are values of an abstract type. It features a simple core logic consisting of a small set of primitive terms, axioms, and primitive inference rules. Proofs are constructed by repeated application of inference rules to axioms or to previously proved theorems.

Programming Languages for Computational Media

Michael Ballantyne

In the 1970s, researchers began to consider a future in which ubiquitous personal computing would transform the ways we communicate and think, as had the introduction of earlier forms of media such as writing, printed books, and television. Alternate designs for computing environments could shape the impact of this new medium. The question arose as to which design might lead to the best possible future.

Alan Kay imagined a “personal dynamic medium” wherein users create their own interactive content and content-creation tools via a graphical interface and a programming language with objects closely tied to graphical elements. He saw simulation as the key new capacity of computing as a medium. Rather than merely describe a scientific model, dynamic media can simulate it so readers can explore its consequences. As a start towards this vision, his group’s Smalltalk 7X programming systems [Center 1976] integrated a minimalist object-oriented language with a graphical environment that introduced many of the GUI concepts we now take for granted.

Successor “computational media” systems integrate programming into other forms of documents while bringing along the behaviors we expect from document-editing tools, like live updates and direct manipulation. The document structure also provides a concrete, spatial context for programming. Examples include Boxer [diSessa and Abelson 1986], Apple’s HyperCard, and Dynamicland.

Teaching Languages

Emily Herbert

Logo [Feurzeig and Lukas 1972] is a dialect of Lisp, developed in 1967 as a tool for teaching children about mathematics via programming [Watson et al. 1992]. Since that time, over 300 successors have appeared.

StarLogo [Resnick 1996] and NetLogo [Tisue and Wilensky 2004], the first wave of Logo successors, exploit the massively parallel programming model of Thinking Machines’ novel hardware. Using StarLogo, Klopfer et al. [2005] develop materials for teaching middle-school students about the emergent behavior of dynamic systems. To this day, NetLogo serves both as a teaching environment and as a research tool for large simulations.

Logo’s recent successors, most notably the Scratch system [Maloney et al. 2010], introduce block-based programming into the mix, with the explicit goal of supporting introductory programming curricula [Meerbaum-Salant et al. 2013]. Current research on Scratch is focused on adding higher-order functions [Harvey and Mönig 2010].

Good Bye

20 Apr 2021

Scientists ask questions. Often that’s the most difficult part. Students should ask questions, too, especially at the end of a course, though they tend to have trouble asking them.

What did HoPL teach me?

Where do I go from here?

Are hot topics good for me?

Bibliography

Amal Ahmed. Semantics of Types for Mutable State. Ph.D. dissertation, Princeton University, 2004. URL

Amal Ahmed and Matthias Blume. An equivalence-preserving CPS translation via multi-language semantics. In Proc. ACM International Conference on Functional Programming, pp. 431–444, 2011. URL

Andrew Appel. Compiling with Continuations. Cambridge University Press, 1991.

Andrew W. Appel and David Allen McAllester. An indexed model of recursive types for foundational proof-carrying code. ACM Transactions on Programming Languages and Systems, pp. 657–683, 2001. URL

V. Bala, E. Duesterwald, and S. Banerjia. Dynamo: A transparent dynamic optimization system. In Proc. ACM Conference on Programming Language Design and Implementation, pp. 1–12, 2000. URL

Edd Barrett, Carl Friedrich Bolz-Tereick, Rebecca Killick, Sarah Mount, and Laurence Tratt. Virtual machine warmup blows hot and cold. In Proc. ACM Conference on Object-Oriented Programming, Systems, Languages and Applications, pp. 1–27, 2017. URL

Alan Bawden, Richard Greenblatt, Jack Holloway, Thomas Knight, David Moon, and Daniel Weinreb. Lisp Machine Progress Report. Massachusetts Institute of Technology, AI Memo No. 444, 1977. URL

M. Bebenita, F. Brandner, M. Fahndrich, F. Logozzo, W. Schulte, N. Tillmann, and H. Venter. SPUR: a trace-based JIT compiler for CIL. In Proc. ACM Conference on Object-Oriented Programming, Systems, Languages and Applications, pp. 708–725, 2010. URL

E. Beda, L. N. Korolev, N. V. Sukkikh, and T. S. Frolova. Programs For Automatic Differentiation For The Machine BESM. Institute for Precise Mechanics and Computation Techniques, Academy of Science, 1959.

Albert Benveniste, Paul Le Guernic, and Christian Jacquemot. Synchronous Programming with Events and Relations: the SIGNAL Language and Its Semantics. Science of Programming 16(2), pp. 103–149, 1991. URL

Gérard Berry and Georges Gonthier. The Esterel Synchronous Programming Language: Design, Semantics, and Implementation. Science of Programming 19(1), pp. 87–152, 1991. URL

Brian Bershad, S. Savage, P. Pardyak, E. G. Sirer, M. E. Fiuczynski, D. Becker, C. Chambers, and S. Eggers. Extensibility, Safety and Performance in the SPIN Operating System. In Proc. ACM Symposium on Operating Systems Principles, pp. 267–283, 1995. URL

Gavin M. Bierman, Martín Abadi, and Mads Torgersen. Understanding TypeScript. In Proc. European Symposium on Programming, pp. 257–281, 2014. URL

Lars Birkedal and Robert Harper. Relational interpretations of recursive types in an operational setting. In Proc. Theoretical Aspects of Computer Science, pp. 458–490, 1997. URL

Christian Bischof, Alan Carle, George Corliss, Andreas Griewank, and Paul Hovland. ADIFOR–Generating Derivative Codes from Fortran Programs. Scientific Programming 1(1), pp. 11–29, 1992. URL

Stephen M. Blackburn, Robin Garner, Chris Hoffmann, Asjad M. Khang, Kathryn S. McKinley, Rotem Bentzur, Amer Diwan, Daniel Feinberg, Daniel Frampton, Samuel Z. Guyer, Martin Hirzel, Antony Hosking, Maria Jump, Han Lee, J. Eliot B. Moss, Aashish Phansalkar, Darko Stefanović, Thomas VanDrunen, Daniel von Dincklage, and Ben Wiedermann. The DaCapo benchmarks: Java benchmarking development and analysis. In Proc. ACM Conference on Object-Oriented Programming, Systems, Languages and Applications, pp. 169–190, 2006. URL

C.F. Bolz, A. Cuni, M. Fijalkowski, and A. Rigo. Tracing the meta-level: PyPy’s tracing JIT compiler. In Proc. Workshop on the Implementation, Compilation, Optimization of Object-Oriented Languages and Programming Systems, pp. 18–25, 2009. URL

Gilad Bracha, Martin Odersky, David Stoutamire, and Philip Wadler. Making the Future Safe for the Past: Adding Genericity to the Java Programming Language. In Proc. ACM Conference on Object-Oriented Programming, Systems, Languages and Applications, pp. 183–200, 1998. URL

N. G. de Bruijn. The mathematical language AUTOMATH, its usage, and some of its extensions. In Proc. Symposium on Automatic Demonstration, pp. 29–61, 1970. URL

William Byrd. Logic and Relational Programming, miniKanren. InfoQ, An Interview, 2014. URL

L. Caires and F. Pfenning. Session Types as Intuitionistic Linear Propositions. In Proc. International Conference on Concurrency Theory, pp. 222–236, 2010. URL

Peter Canning, William Cook, Walter Hill, Walter Olthoff, and John C. Mitchell. F-bounded Polymorphism for Object-Oriented Programming. In Proc. ACM International Conference on Functional Programming Languages and Computer Architecture, pp. 273–280, 1989. URL

Luca Cardelli and Peter Wegner. On understanding types, data abstraction, and polymorphism. ACM Computing Surveys 17(4), pp. 471–523, 1985. URL

Learning Research Group, Xerox Palo Alto Research Center. Personal Dynamic Media. 1976. URL

Craig Chambers and David Ungar. Customization: Optimizing compiler technology for SELF, a dynamically-typed object-oriented programming language. In Proc. ACM Conference on Programming Language Design and Implementation, pp. 146–160, 1989. URL

Benjamin Chung, Paley Li, Francesco Zappa Nardelli, and Jan Vitek. KafKa: Gradual Typing for Objects. In Proc. European Conference on Object-Oriented Programming, 2018. URL

Dominique Clement, Joelle Despeyroux, Thierry Despeyroux, Laurent Hascoet, and Gilles Kahn. Natural semantics on the computer. INRIA Sophia-Antipolis, RR-0416 00076140, 1985. URL

William D. Clinger, Jonathan A. Rees, H. Abelson, R Kent Dybvig, Christopher Thomas Haynes, Guillermo Juan Rozas, Norman I. Adams, Daniel Paul Friedman, Eugene E. Kohlbecker, Guy L. Steele Jr., David H Bartley, Robert H Halstead, D. Oxley, Gerald Jay Sussman, G. Brooks, Chris Hanson, Kent M. Pitman, and Mitch Wand. The revised4 report on SCHEME: a dialect of LISP. ACM SIGPLAN Lisp Pointers IV(3), pp. 1–55, 1991. URL

Alain Colmerauer and Philippe Roussel. The Birth of Prolog. In Proc. History of Programming Languages II, pp. 331–367, 1996. URL

Robert L. Constable. The Structure of Nuprl’s Type Theory. In Proc. Logic of Computation, pp. 123–155, 1997. URL

Gregory H. Cooper and Shriram Krishnamurthi. Embedding Dynamic Dataflow in a Call-by-Value Language. In Proc. European Symposium on Programming, pp. 294–308, 2006. URL

T. Coquand. An Analysis of Girard’s Paradox. 1988a. URL

T. Coquand. The Calculus of Constructions. Information and Computation, pp. 95–120, 1988b. URL

Thierry Coquand and Gérard Huet. Constructions : a higher order proof system for mechanizing mathematics. In Proc. European Conference on Computer Algebra, pp. 151–184, 1985. URL

T. Cramer, R. Friedman, T. Miller, D. Seberger, R. Wilson, and M. Wolczko. Compiling Java just in time. IEEE Micro 17(3), pp. 36–43, 1997. URL

Ryan Culpepper and Matthias Felleisen. Fortifying macros. In Proc. ACM International Conference on Functional Programming, pp. 235–246, 2010. URL

H. B. Curry. Functionality in Combinatory Logic. Proc. Natl. Acad. Sci. USA 20(11), pp. 584–590, 1934. URL

Evan Czaplicki and Stephen Chong. Asynchronous Functional Reactive Programming for Guis. In Proc. ACM Conference on Programming Language Design and Implementation, pp. 1–12, 2013. URL

Luis Damas and Robin Milner. Principal Type-Schemes for Functional Programs. In Proc. ACM Symposium on Principles of Programming Languages, pp. 207–212, 1982. URL

O. Dardha, E. Giachino, and D. Sangiorgi. Session Types Revisited. Information and Computation 256, 2017. URL

Christos Dimoulas, Sam Tobin-Hochstadt, and Matthias Felleisen. Complete Monitors for Behavioral Contracts. In Proc. European Symposium on Programming, pp. 214–233, 2012.

Andrea A. diSessa and Harold Abelson. Boxer: A Reconstructible Computational Medium. Communications of the ACM 29(9), pp. 859–868, 1986. URL

Kent R. Dybvig. Writing hygienic macros in Scheme with syntax-case. Indiana University, Department of Computer Science, TR356, 1992. URL

Kent R. Dybvig, Robert Hieb, and Carl Bruggeman. Syntactic abstraction in Scheme. LISP and Symbolic Computation 5, pp. 295–326, 1993. URL

Conal Elliott and Paul Hudak. Functional Reactive Animation. In Proc. ACM International Conference on Functional Programming, pp. 263–273, 1997. URL

M. H. Van Emden. Who killed Prolog? Van Emden, A Programmer’s Place, WordPress Blog, 2010. URL

M. H. Van Emden and R. A. Kowalski. The Semantics of Predicate Logic as a Programming Language. Journal of the ACM 23(4), pp. 733–742, 1976. URL

Uffe Engberg and Mogens Nielsen. A Calculus of Communicating Systems with Label-Passing. In Proc. Proof, Language, and Interaction : Essays in Honour of Robin Milner. MIT Press., pp. 599–622, 1986. URL also available as DAIMI Tech. Rpt. 208, University of Aarhus

Wallace Feurzeig and George Lukas. LOGO—A programming language for teaching mathematics. Educational Technology 12(3), pp. 39–46, 1972. URL

Robert B. Findler and Matthias Felleisen. Contracts for Higher-Order Functions. In Proc. ACM International Conference on Functional Programming, pp. 48–59, 2002.

Robert Bruce Findler and Matthias Felleisen. Contract Soundness for Object-Oriented Languages. In Proc. ACM Conference on Object-Oriented Programming, Systems, Languages and Applications, pp. 1–15, 2001. URL

Cormac Flanagan, Amr Sabry, Bruce Duba, and Matthias Felleisen. The Essence of Compiling with Continuations. In Proc. ACM Conference on Programming Language Design and Implementation, pp. 1–12, 1993. URL

Matthew Flatt. Bindings as sets of scopes. In Proc. ACM Symposium on Principles of Programming Languages, pp. 705–717, 2016. URL

Matthew Flatt, Robert Bruce Findler, Shriram Krishnamurthi, and Matthias Felleisen. Programming Languages as Operating Systems (or Revenge of the Son of the Lisp Machine). In Proc. ACM International Conference on Functional Programming, pp. 138–147, 1999. URL

Daniel P. Friedman, William E. Byrd, Oleg Kiselyov, and Jason Hemann. The Reasoned Schemer, 2nd Edition. MIT Press, 2018. URL

A. Gal, B. Eich, M. Shaver, D. Mandelin, D. Anderson, M. R. Haghighat, B. Kaplan, G. Hoare, B. Zbarsky, J. Orendorff, and J. Ruderman. Trace-based just-in-time type specialization for dynamic languages. In Proc. ACM Conference on Programming Language Design and Implementation, pp. 465–478, 2009. URL

A. Gal, C.W. Probst, and M. Franz. HotpathVM: An effective JIT compiler for resource-constrained devices. In Proc. International Conference on Virtual Execution Environments, pp. 144–153, 2006. URL

Andy Georges, Dries Buytaert, and Lieven Eeckhout. Statistically rigorous java performance evaluation. In Proc. ACM Conference on Object-Oriented Programming, Systems, Languages and Applications, pp. 57–76, 2007. URL

Jean-Yves Girard. Interprétation fonctionnelle et élimination des coupures de l’arithmétique d’ordre supérieur. Thèse d’État dissertation, Université Paris Diderot - Paris 7, 1972. URL

Mike Gordon. Introduction to the HOL system. In Proc. International Workshop on the HOL Theorem Proving System and Its Applications, pp. 1–2, 1991. URL

Kathryn E. Gray, Robert B. Findler, and Matthew Flatt. Fine-Grained Interoperability through Contracts and Mirrors. In Proc. ACM Conference on Object-Oriented Programming, Systems, Languages and Applications, pp. 231–245, 2005.

Ben Greenman and Matthias Felleisen. A Spectrum of Type Soundness and Performance. Proceedings of the ACM on Programming Languages (ICFP) 2(71), pp. 1–32, 2018.

Ben Greenman, Matthias Felleisen, and Christos Dimoulas. Complete Monitors for Gradual Types. In Proc. ACM Conference on Object-Oriented Programming, Systems, Languages and Applications, 2019.

Andreas Griewank. Achieving logarithmic growth of temporal and spatial complexity in reverse automatic differentiation. Optimization Methods and Software 1(1), pp. 35–54, 1992. URL

Andreas Griewank, David Juedes, and Jean Utke. Algorithm 755: ADOL-C: a package for the automatic differentiation of algorithms written in C/C++. Transactions on Mathematical Software 22(2), pp. 131–167, 1996. URL

The DynamicMedia Group. Dynamicland. 2018. URL

N. Halbwachs, P. Caspi, P. Raymond, and D. Pilaud. The Synchronous Data Flow Programming Language LUSTRE. Proceedings of the IEEE 79(9), pp. 1305–1320, 1991. URL

Robert Harper and Mark Lillibridge. A type-theoretic approach to higher-order modules with sharing. In Proc. ACM Symposium on Principles of Programming Languages, pp. 123–137, 1994. URL

Robert Harper and John Mitchell. The essence of ML. In Proc. ACM Symposium on Principles of Programming Languages, pp. 28–46, 1988. URL

Robert Harper, John Mitchell, and Eugenio Moggi. Higher-order modules and the phase distinction. In Proc. ACM Symposium on Principles of Programming Languages, pp. 28–46, 1990. URL

Brian Harvey and Jens Mönig. Bringing no ceiling to Scratch: Can one language serve kids and computer scientists. In Proc. Constructionism, pp. 1–10, 2010. URL

Fritz Henglein. Type Inference with Polymorphic Recursion. ACM Transactions on Programming Languages and Systems 15(2), pp. 253–289, 1993. URL

K. Honda. Types for Dyadic Interaction. In Proc. International Conference on Concurrency Theory, LNCS 715, pp. 509–523, 1993. URL

K. Honda, N. Yoshida, and M. Carbone. Multiparty Asynchronous Session Types. In Proc. ACM Symposium on Principles of Programming Languages, pp. 273–284, 2008. URL

Kohei Honda, Vasco T. Vasconcelos, and Makoto Kubo. Language Primitives and Type Discipline for Structured Communication-Based Programming. In Proc. European Symposium on Programming, pp. 122–138, 1998. URL

William A. Howard. The Formulae-as-types Notion of Construction, in To H. B. Curry: Essays on Combinatory Logic, Lambda Calculus, and Formalism. Academic Press, 1969. URL

John Hughes. Generalising Monads to Arrows. Science of Computer Programming 37(1–3), pp. 67–111, 2000. URL

Galen Hunt and James Larus. Singularity: Rethinking the Software Stack. Operating Systems Review 41(2), pp. 37–49, 2007. URL

Apple Computer, Inc. HyperCard User’s Guide. 1987. URL

Daniel H. H. Ingalls. Design Principles Behind Smalltalk. University of Virginia, CS655: Programming Languages, 2001. URL

Daniel H. H. Ingalls. The evolution of Smalltalk: from Smalltalk-72 through Squeak. In Proc. History of Programming Languages IV, 2021. URL

Kenneth E. Iverson. A Programming Language. John Wiley & Sons, 1962. URL

C. Barry Jay. A Semantics for Shape. Science of Computer Programming 25(2), pp. 251–283, 1995. URL

Joseph L. Bates and Robert L. Constable. Proofs as Programs. ACM Transactions on Programming Languages and Systems 7(1), pp. 113–136, 1985. URL

G. Keller, M. Chakravarty, R. Leshchinskiy, S. Peyton Jones, and B. Lippmeier. Regular, Shape-Polymorphic, Parallel Arrays in Haskell. In Proc. ACM International Conference on Functional Programming, pp. 261–272, 2010. URL

Andrew Kennedy. Compiling with continuations, continued. In Proc. ACM Conference on Programming Language Design and Implementation, pp. 1–12, 2007. URL

Clemens N. Klokmose, James R. Eagan, Siemen Baader, Wendy Mackay, and Michel Beaudouin-Lafon. Webstrates: Shareable Dynamic Media. In Proc. ACM Symposium on User Interface Software and Technology, pp. 280–290, 2015. URL

Eric Klopfer, Susan Yoon, and Tricia Um. Teaching complex dynamic systems to young students with StarLogo. Journal of Computers in Mathematics and Science Teaching 24(2), pp. 157–178, 2005. URL

Naoki Kobayashi, Benjamin Pierce, and David Turner. Linearity and the PI-Calculus. ACM Transactions on Programming Languages and Systems 21(5), pp. 914–947, 1999. URL

Eugene E. Kohlbecker, Daniel P. Friedman, Matthias Felleisen, and Bruce D. Duba. Hygienic macro expansion. In Proc. LISP and Functional Programming, pp. 151–161, 1986. URL

Eugene E. Kohlbecker and Mitch Wand. Macros by example. In Proc. ACM Symposium on Principles of Programming Languages, pp. 77–84, 1987. URL

Robert A. Kowalski. Predicate Logic as Programming Language. In Proc. Information Processing, Proceedings of the 6th IFIP Congress, pp. 569–574, 1974. URL

Neelakantan R. Krishnaswami, Nick Benton, and Jan Hoffmann. Higher-Order Functional Reactive Programming in Bounded Space. In Proc. ACM Symposium on Principles of Programming Languages, pp. 45–58, 2012. URL

Butler W. Lampson. Protection. Operating Systems Review 8(1), pp. 18–24, 1974. URL

Peter J. Landin. The mechanical evaluation of expressions. The Computer Journal 6(4), pp. 308–320, 1964. URL

Xavier Leroy. Manifest types, modules, and separate compilation. In Proc. ACM Symposium on Principles of Programming Languages, pp. 109–122, 1994. URL

Xavier Leroy. Applicative functors and fully transparent higher-order modules. In Proc. ACM Symposium on Principles of Programming Languages, pp. 142–153, 1995. URL

Amit Levy, Bradford Campbell, Branden Ghena, Daniel B. Giffin, Pat Pannuto, Prabal Dutta, and Philip Levis. Multiprogramming a 64kB Computer Safely and Efficiently. In Proc. ACM Symposium on Operating Systems Principles, pp. 234–251, 2017. URL

David MacQueen. Modules for Standard ML. In Proc. LISP and Functional Programming, pp. 198–207, 1984. URL

David MacQueen. Using dependent types to express modular structure. In Proc. ACM Symposium on Principles of Programming Languages, pp. 277–286, 1986. URL

Harry Mairson. Deciding ML Typability is Complete for Deterministic Exponential Time. In Proc. ACM Symposium on Principles of Programming Languages, pp. 382–401, 1989. URL

John Maloney, Mitchel Resnick, Natalie Rusk, Brian Silverman, and Evelyn Eastmond. The Scratch programming language and environment. Transactions on Computing Education 10(4), pp. 1–15, 2010. URL

Oleksandr Manzyuk, Barak A. Pearlmutter, Alexey Andreyevich Radul, David R. Rush, and Jeffrey Mark Siskind. Perturbation Confusion in Forward Automatic Differentiation of Higher-Order Functions. Journal of Functional Programming 29, 2019. URL

Per Martin-Löf. Constructive Mathematics and Computer Programming. Philosophical Transactions of the Royal Society of London. Series A, Mathematical and Physical Sciences 312, pp. 501–518, 1982. URL

Jacob Matthews and Robert B. Findler. Operational Semantics for Multi-Language Programs. ACM Transactions on Programming Languages and Systems 31(3), pp. 12:1–12:44, 2009a.

Jacob Matthews and Robert B. Findler. Operational Semantics for Multi-Language Programs. In Proc. ACM Symposium on Principles of Programming Languages, pp. 3–10, 2009b. URL

Luke Maurer, Paul Downen, Zena Ariola, and Simon Peyton Jones. Compiling without continuations. In Proc. ACM Conference on Programming Language Design and Implementation, pp. 1–12, 2017. URL

John McCarthy. Recursive functions of symbolic expressions and their computation by machine, Part I. Communications of the ACM 3(4), pp. 184–195, 1960. URL

M. D. McIlroy. Mass Produced Software Components. In Proc. NATO Software Engineering Conference. NATO Scientific Affairs Division, pp. 79–87, 1968. URL

Orni Meerbaum-Salant, Michal Armoni, and Mordechai Ben-Ari. Learning computer science concepts with Scratch. Computer Science Education 23(3), pp. 239–264, 2013. URL

Bertrand Meyer. Applying Design by Contract. IEEE Computer 25(10), pp. 45–51, 1992a.

Bertrand Meyer. Eiffel: The Language. Prentice Hall, 1992b.

Leo A. Meyerovich, Arjun Guha, Jacob Baskin, Gregory H. Cooper, Michael Greenberg, Aleks Bromfield, and Shriram Krishnamurthi. Flapjax: A Programming Language for Ajax Applications. In Proc. ACM Conference on Object-Oriented Programming, Systems, Languages and Applications, pp. 1–12, 2009. URL

Robin Milner. Logic for Computable Functions: description of a machine implementation. Stanford University, AIM-169, 1972. URL

Robin Milner. Fully abstract models of typed λ-calculi. Theoretical Computer Science 4(1), pp. 1–22, 1977. URL

Robin Milner. A Theory of Type Polymorphism in Programming. Journal of Computer and System Sciences 17, pp. 348–375, 1978. URL

Robin Milner. LCF: A way of doing proofs with a machine. In Proc. Mathematical Foundations of Computer Science , pp. 146–159, 1979. URL

Robin Milner. A Calculus of Communicating Systems. Lecture Notes in Computer Science, 1980. URL

Robin Milner. The Polyadic PI-Calculus: A Tutorial. Logic and Algebra of Specification, pp. 203–246, 1991. URL

Robin Milner, Joachim Parrow, and David Walker. A Calculus of Mobile Processes. Information and Computation 100(1), pp. 1–77, 1992. URL

James Hiram Morris. Lambda-Calculus Models of Programming Languages. Ph.D. dissertation, Massachusetts Institute of Technology, 1968.

Todd Mytkowicz, Amer Diwan, Matthias Hauswirth, and Peter F. Sweeney. Producing wrong data without doing anything obviously wrong! In Proc. ACM Conference on Architectural Support for Programming Languages and Operating Systems, pp. 265–276, 2009. URL

Henrik Nilsson, Antony Courtney, and John Peterson. Functional Reactive Programming, Continued. In Proc. ACM International Conference on Functional Programming, pp. 51–64, 2002. URL

Martin Odersky and Philip Wadler. Pizza into Java: Translating Theory into Practice. In Proc. ACM Symposium on Principles of Programming Languages, pp. 146–159, 1997. URL

D. L. Parnas. A Technique for Software Module Specification with Examples. Communications of the ACM 15(5), pp. 330–336, 1972. URL

Daniel Patterson, Jamie Perconti, Christos Dimoulas, and Amal Ahmed. FunTAL: reasonably mixing a functional language with assembly. In Proc. ACM Conference on Programming Language Design and Implementation, pp. 495–509, 2017. URL

C. Paulin-Mohring. Introduction to the Calculus of Inductive Constructions. 2014. URL

Benjamin Pierce and Davide Sangiorgi. Typing and Sub-typing of Mobile Processes. In Proc. Logic in Computer Science, pp. 376–385, 1994. URL

Rob Pike. Systems Software Research is Irrelevant. Bell Labs, Lucent Technologies, A Plan 9 Polemic, 2000. URL

Andrew M. Pitts. Typed operational reasoning. In Advanced Topics in Types and Programming Languages, pp. 245–289, 2005. URL

Andrew M. Pitts and Ian D. B. Stark. Observable properties of higher order functions that dynamically create local names, or: What’s new? In Proc. Mathematical Foundations of Computer Science, pp. 122–141, 1993. URL

Atze van der Ploeg and Koen Claessen. Practical Principled FRP: Forget the Past, Change the Future, FRPNow! In Proc. ACM International Conference on Functional Programming, pp. 302–314, 2015. URL

Gordon D. Plotkin. Call-by-name, call-by-value, and the lambda calculus. Theoretical Computer Science 1(2), pp. 125–159, 1977a. URL

Gordon D. Plotkin. LCF considered as a programming language. Theoretical Computer Science 5(3), pp. 223–255, 1977b. URL

Gordon D. Plotkin. A structural approach to operational semantics. University of Aarhus, DAIMI FN-19, 1981. URL

L. B. Rall. Differentiation in PASCAL-SC. Transactions on Mathematical Software 10(2), pp. 161–184, 1984. URL

David D. Redell, Yogen K. Dalal, Thomas R. Horsley, Hugh C. Lauer, William C. Lynch, Paul R. McJones, Hal G. Murray, and Stephen C. Purcell. Pilot: An Operating System for a Personal Computer. Communications of the ACM 23(2), pp. 81–92, 1980. URL

Mitchel Resnick. StarLogo: an environment for decentralized modeling and decentralized thinking. In Proc. Conference Companion on Human Factors in Computing Systems, pp. 11–12, 1996. URL

John C. Reynolds. Definitional interpreters for higher-order programming languages. In Proc. ACM Annual Conference, pp. 717–740, 1972. URL

John C. Reynolds. Towards a Theory of Type Structure. In Proc. Programming Symposium, pp. 408–425, 1974. URL

Gregor Richards, Andreas Gal, Brendan Eich, and Jan Vitek. Automated construction of JavaScript benchmarks. In Proc. ACM Conference on Object-Oriented Programming, Systems, Languages and Applications, pp. 677–694, 2011. URL

Andreas Rossberg, Claudio Russo, and Derek Dreyer. F-ing modules. Journal of Functional Programming 24(5), pp. 529–607, 2014. URL

Claudio Russo. Non-dependent Types for Standard ML Modules. In Proc. ACM International Conference on Principles and Practice of Declarative Programming, pp. 80–97, 1999. URL

Amr Sabry and Matthias Felleisen. Reasoning about Programs in Continuation Passing Style. In Proc. LISP and Functional Programming, pp. 288–298, 1992. URL

Jeremy G. Siek and Walid Taha. Gradual Typing for Functional Languages. In Proc. Workshop on Scheme and Functional Programming, pp. 81–92, 2006.

Jeffrey Mark Siskind and Barak A. Pearlmutter. First-Class Nonstandard Interpretations by Opening Closures. In Proc. ACM Symposium on Principles of Programming Languages, pp. 71–76, 2007a. URL

Jeffrey Mark Siskind and Barak A. Pearlmutter. Lazy Multivariate Higher-Order Forward-Mode AD. In Proc. ACM Symposium on Principles of Programming Languages, pp. 155–160, 2007b. URL

Justin Slepak, Olin Shivers, and Panagiotis Manolios. An Array-Oriented Language with Static Rank Polymorphism. In Proc. European Symposium on Programming, pp. 27–46, 2014. URL

B. Speelpenning. Compiling Fast Partial Derivatives of Functions Given by Algorithms. Ph.D. dissertation, University of Illinois, Urbana-Champaign, 1980. URL

Joe Stoy. Denotational Semantics: The Scott-Strachey Approach to Programming Language Theory. MIT Press, 1977. URL

W. W. Tait. Intensional Interpretations of Functionals of Finite Type I. The Journal of Symbolic Logic 32(2), pp. 198–212, 1967. URL

K. Takeuchi, K. Honda, and M. Kubo. An Interaction-based Language and its Typing System. In Proc. International Conference on Parallel Architectures and Languages Europe, pp. 398–413, 1994. URL

Satish Thatte. A Type System for Implicit Scaling. Science of Computer Programming 17(1), pp. 217–245, 1991. URL

Seth Tisue and Uri Wilensky. NetLogo: A simple environment for modeling complexity. In Proc. International Conference On Complex Systems, pp. 16–21, 2004. URL

Sam Tobin-Hochstadt and Matthias Felleisen. Interlanguage Migration: from Scripts to Programs. In Proc. Dynamic Languages Symposium, pp. 964–974, 2006.

John Underwood. HyperCard and Interactive Video. Computer Assisted Language Instruction Consortium 6(3), pp. 7–20, 1989. URL

Michael M. Vitousek, Andrew Kent, Jeremy G. Siek, and Jim Baker. Design and Evaluation of Gradual Typing for Python. In Proc. Dynamic Languages Symposium, pp. 45–56, 2014. URL

Michael M. Vitousek, Cameron Swords, and Jeremy G. Siek. Big Types in Little Runtime: Open-World Soundness and Collaborative Blame for Gradual Type Systems. In Proc. ACM Symposium on Principles of Programming Languages, pp. 762–774, 2017. URL

William W. Wadge and Edward A. Ashcroft. Lucid, the Dataflow Programming Language. Academic Press, 1985. URL

P. Wadler. Propositions as Sessions. In Proc. ACM International Conference on Functional Programming, pp. 273–286, 2012. URL

Zhanyong Wan and Paul Hudak. Functional Reactive Programming from First Principles. In Proc. ACM Conference on Programming Language Design and Implementation, pp. 242–252, 2000. URL

Zhanyong Wan, Walid Taha, and Paul Hudak. Real-time FRP. In Proc. ACM International Conference on Functional Programming, pp. 45–58, 2001. URL

J. Allen Watson, Garrett Lange, and Vickie M. Brinkley. Logo mastery and spatial problem-solving by young children: Effects of Logo language training, route-strategy training, and learning styles on immediate learning and transfer. Journal of Educational Computing Research 8(4), pp. 521–540, 1992. URL

R. E. Wengert. A Simple Automatic Derivative Evaluation Program. Communications of the ACM 7, pp. 463–464, 1964. URL

Niklaus Wirth and Jürg Gutknecht. Project Oberon: The Design of an Operating System and Compiler. ACM Press, 1991. URL