Is this Lam or Fish?
lambda fish

In this blog post I’ll talk about how to implement and understand the foundations of semantics, type theory, and computation. I’ve implemented over a dozen lambda calculi, each with its own AST, parser, and REPL. This means you can take whiteboard examples of these calculi and run them. The functionality is also tested using QuickCheck.

The code can be found here, with instructions to build and run each language.


I’ve been studying the Foundations of Programming Languages, Semantics, and Type Theory. I decided to implement some of the common Lambda Calculi to solidify my understanding.

The naming of this repo was inspired in part by Andrej Bauer’s plzoo.

One aim of the repo is to implement popular (functional) languages and their extensions to show how the theory translates into practice.

The languages are written in Haskell and are intentionally simple. That is, they do not use advanced features of Haskell, relying instead on minimal use of data types, recursion, and plain functional programming.

The intention here is to maximise your understanding of language design whilst minimising the need to understand Haskell. Of course it helps if you know it!

The following list of languages proceeds (roughly) in order of complexity up the lambda cube. We detour past some other seminal calculi along the way, and if one is missing, by all means please add it!


  1. ULC: Alonzo Church’s Untyped Lambda Calculus (Church Style)
  2. SKI: Moses Schönfinkel’s SKI Combinator Calculus. In essence an (untyped) combinator calculus equivalent in computational power to ULC, but without abstraction.
  3. STLC: Alonzo Church’s Simply-Typed Lambda Calculus (Church) with one base type and function types
  4. SystemT: Kurt Gödel’s System T. In essence the STLC with the base type swapped out for Nat, plus primitive recursion on Nats.
  5. PCF: Gordon Plotkin’s Programming Computable Functions. In essence it’s System T but using the Y combinator for general recursion instead of primitive recursion.
  6. Mu: Michel Parigot’s Lambda-Mu. In essence it’s STLC with continuations that don’t rely on the reduction strategy used.
  7. SystemF: John Reynolds’ System F. In essence it’s STLC with parametric polymorphism built in.
  8. SOL: John Mitchell and Gordon Plotkin’s SOL. In essence it’s System F but with existential types made explicit.
  9. Cata: In essence it’s STLC with inductive types.
  10. Ana: In essence it’s STLC with coinductive types.
  11. Sub: Benjamin Pierce’s Lambda Calculus with Subtypes. In essence it’s STLC with generalised records and subtype polymorphism.
  12. Omega: Renardel de Lavalette’s L (or λω). In essence it’s STLC with kinding and type-operators.
  13. FOmega: Jean-Yves Girard’s FOmega. In essence it’s System F + Omega, which enables higher-order polymorphism.
  14. LF: Bob Harper, Furio Honsell, and Gordon Plotkin’s Edinburgh Logical Framework. In essence it’s STLC with pure first-order dependent types.
  15. C: Thierry Coquand and Gérard Huet’s Calculus of Constructions. In essence it is FOmega + LF written in a pure type systems style. This serves as the apex of the lambda cube and a constructive foundation of mathematics.
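To make the starting point of this list concrete, here is a minimal sketch of what an untyped lambda calculus in this style might look like. The names (`Term`, `subst`, `step`, `normalise`) are illustrative, not the repo’s actual identifiers, and substitution is deliberately naive:

```haskell
-- A minimal, illustrative ULC core: named variables, abstraction, application.
data Term
  = Var String
  | Abs String Term
  | App Term Term
  deriving (Show, Eq)

-- Naive substitution: subst x s t replaces free occurrences of x in t with s.
-- (A real implementation must avoid variable capture; omitted for brevity.)
subst :: String -> Term -> Term -> Term
subst x s (Var y)
  | x == y    = s
  | otherwise = Var y
subst x s (Abs y b)
  | x == y    = Abs y b              -- x is shadowed, stop here
  | otherwise = Abs y (subst x s b)
subst x s (App f a) = App (subst x s f) (subst x s a)

-- One step of normal-order (leftmost-outermost) beta reduction.
step :: Term -> Maybe Term
step (App (Abs x b) a) = Just (subst x a b)
step (App f a) = case step f of
  Just f' -> Just (App f' a)
  Nothing -> App f <$> step a
step (Abs x b) = Abs x <$> step b
step (Var _)   = Nothing

-- Reduce to normal form (may diverge, e.g. on Omega).
normalise :: Term -> Term
normalise t = maybe t normalise (step t)
```

For example, `normalise (App (Abs "x" (Var "x")) (Var "y"))` yields `Var "y"`.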

See each repo for details on installation/use.


Submit a PR if there’s something you want to add or fix! Bear in mind a few things:

  1. Compile your code with -W; this catches any warnings, and there shouldn’t be any.
  2. Use hlint to handle code linting and suggestions. As with the warnings, there should be no suggestions for file Foo.hs when running hlint Foo.hs.
  3. Ensure code has 100% Haddock coverage. This keeps everything documented should we ever need it.
  4. Keep in mind the motivations above: this code is not meant to be advanced Haskell, but simple code for demonstration, so try not to use advanced techniques if you can avoid them.
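As a rough sketch of what full Haddock coverage looks like in practice, here is a pair of documented functions. The functions themselves (`churchNat`, `unchurch`) are hypothetical examples, not part of the repo:

```haskell
-- | 'churchNat' encodes an 'Int' as a Church numeral:
--   @churchNat n s z@ applies @s@ to @z@ exactly @n@ times.
churchNat :: Int -> (a -> a) -> a -> a
churchNat n s z = iterate s z !! max 0 n

-- | 'unchurch' recovers the 'Int' from a Church numeral by
--   counting applications of the successor function.
unchurch :: ((Int -> Int) -> Int -> Int) -> Int
unchurch n = n (+ 1) 0
```

Every top-level binding carries a `-- |` comment, which is what the 100% coverage check is after.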

This is a work in progress, so coverage of features may vary! Check out the project issues for more information.


  1. Each language is designed using algebraic data types in Haskell. This is a natural fit for higher-order and recursive terms in functional languages. Each language’s AST file contains the equivalence, typing, reduction, and substitution rules it needs.
  2. The parser for each language is based around monadic parser combinators. The idea behind this is that we may compose parsers for several terms into a parser for all of them. The relationship between the resulting parser and the unambiguous grammar is thus a direct one.
  3. The REPL for each language is a simple recursive function that carries an environment with terms (and maybe types). Future work would replace these with Haskeline so that history and tab completion are available.
  4. The languages are tested using QuickCheck. We randomly generate terms and check that pretty-printing then parsing them yields an equivalent term. Future work would see typing relations and reductions tested via type-guided generation.
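Putting points (1), (2), and (4) together, here is a dependency-free sketch of the idea: a tiny term type, a hand-rolled monadic parser combinator, and the round-trip property that QuickCheck would test on randomly generated terms. All names are illustrative, the grammar is deliberately simplified to single-letter variables, and the real repo’s parsers and printers are richer:

```haskell
import Control.Applicative (Alternative (..))
import Data.Char (isLower)

-- A tiny illustrative term type with single-character variables.
data Term = Var Char | Abs Char Term | App Term Term
  deriving (Show, Eq)

-- Pretty-printer: the "show" direction of the round trip.
pretty :: Term -> String
pretty (Var x)   = [x]
pretty (Abs x b) = "\\" ++ [x] ++ "." ++ pretty b
pretty (App f a) = "(" ++ pretty f ++ " " ++ pretty a ++ ")"

-- A minimal monadic parser combinator over String.
newtype Parser a = Parser { runParser :: String -> Maybe (a, String) }

instance Functor Parser where
  fmap f (Parser p) = Parser $ \s -> fmap (\(a, r) -> (f a, r)) (p s)

instance Applicative Parser where
  pure a = Parser $ \s -> Just (a, s)
  pf <*> pa = Parser $ \s -> do
    (f, s1) <- runParser pf s
    (a, s2) <- runParser pa s1
    Just (f a, s2)

instance Monad Parser where
  p >>= f = Parser $ \s -> do
    (a, s1) <- runParser p s
    runParser (f a) s1

instance Alternative Parser where
  empty = Parser (const Nothing)
  p <|> q = Parser $ \s -> runParser p s <|> runParser q s

satisfy :: (Char -> Bool) -> Parser Char
satisfy ok = Parser f
  where
    f (c:cs) | ok c = Just (c, cs)
    f _             = Nothing

char :: Char -> Parser Char
char c = satisfy (== c)

-- Grammar mirroring pretty: var | \x.t | (t t)
term :: Parser Term
term = var <|> lam <|> app
  where
    var = Var <$> satisfy isLower
    lam = Abs <$> (char '\\' *> satisfy isLower) <*> (char '.' *> term)
    app = App <$> (char '(' *> term) <*> (char ' ' *> term <* char ')')

-- The property QuickCheck would check on random terms:
-- parsing the printed term gives back exactly the same term.
roundTrips :: Term -> Bool
roundTrips t = runParser term (pretty t) == Just (t, "")
```

Each grammar production becomes one small parser, and alternatives are combined with `<|>`, which is what makes the correspondence between parser and grammar so direct.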