In the previous post, we added polymorphism to the simply typed lambda calculus and implemented a type checker for the polymorphic lambda calculus. In this post, we’ll explore *type inference* or *reconstruction*.

In the polymorphic lambda calculus, we can write polymorphic (generic) functions that work on all types, using *parametric polymorphism*. This is a major benefit over the simply typed lambda calculus, because it reduces duplication: for example, we no longer have to write an identity function for every type that we might need one for, but can write exactly one identity function that works on all types.

But, as you might have noticed, it is quite some work to use such polymorphic functions. Where we could define \(\mathsf{const}\) as \(\lambda x. \lambda y. x\) and use it like \(\mathsf{const}\ (\lambda x. x)\ (\lambda f. \lambda x. f\ x)\) in the untyped lambda calculus, in the polymorphic lambda calculus we have to type \(\mathsf{const} = \Lambda X. \Lambda Y. \lambda x : X. \lambda y : Y. x\) and use it like the following for the same example: \[ \begin{align*} \mathsf{const}\ & (\forall X. X \rightarrow X) \\ & (\forall A. \forall B. (A \rightarrow B) \rightarrow A \rightarrow B) \\ & (\Lambda X. \lambda x : X. x) \\ & (\Lambda A. \Lambda B. \lambda f : A \rightarrow B. \lambda x : A. f\ x) \end{align*} \]

We have to do a whole lot of typing to make the type checker happy. Wouldn’t it be nice if we could write our terms like in the untyped lambda calculus, with the same static safety as in the polymorphic lambda calculus? It turns out that we can actually implement a type checker that *infers* or *reconstructs* the types from a fully untyped program. This technique is called *type inference* or *type reconstruction*, and the corresponding type system is called the *Hindley-Milner type system*.

To write programs without any type information, we remove all types from the syntax of terms. So no more type abstractions, type applications or lambda abstractions with explicit types (e.g., we’ll write \(\lambda x. x\) instead of \(\lambda x : X. x\)).

The AST looks like this:

```
data Term
  = TmTrue
  -- ^ True value
  | TmFalse
  -- ^ False value
  | TmInt Integer
  -- ^ Integer value
  | TmVar String
  -- ^ Variable
  | TmAbs String Term
  -- ^ Lambda abstraction
  | TmApp Term Term
  -- ^ Application
  | TmAdd Term Term
  -- ^ Addition
  | TmIf Term Term Term
  -- ^ If-then-else conditional
  | TmLet String Term Term
  -- ^ Let-in
  deriving (Show, Eq)
```

You might notice that this is just the syntax of the untyped lambda calculus (`TmVar`, `TmAbs`, `TmApp`) with the syntax constructs of the simply typed lambda calculus (`TmTrue`, `TmFalse`, `TmInt`, `TmAdd`, `TmIf`), plus the addition of the `TmLet` constructor, which is used for terms of the form \(\mathbf{let}\ x = t\ \mathbf{in}\ t'\). The addition of let-in terms is not strictly necessary, but it is needed if we actually want to use polymorphism. (This will be discussed later.)

For the syntax of types, we do have to make a substantial change, though. We must restrict our usage of polymorphism: we can only use \(\forall\)’s at the top level; no more \((\forall A. A \rightarrow A) \rightarrow (\forall B. B \rightarrow B)\), for example. We have to do this, because type inference for the polymorphic lambda calculus as we saw it in the previous post is undecidable. We will therefore split our type syntax into two: *monotypes* and *polytypes* (or *type schemes*).

The syntax for *polytypes* (for which we’ll write \(\sigma\)) is very simple:

\[ \begin{align*} \sigma ::=\ & \forall \vec{X}. \tau & \text{(polytype)} \\ \end{align*} \]

Here \(\tau\) is a monotype, and \(\vec{X}\) is a (possibly empty) list of type variables.

In Haskell, this is (we’ll use just `Type` to refer to monotypes):
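Judging from the `TyForall` usage later in the post, a definition along these lines (a list of bound type variables plus a monotype) fits:

```haskell
data Polytype
  = TyForall [String] Type
  -- ^ Polytype: binds a list of type variables over a monotype
  deriving (Show, Eq)
```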

The syntax for monotypes looks like this:

\[ \begin{align*} \tau ::=\ & X & \text{(type variable)} \\ \mid\ & \tau \rightarrow \tau' & \text{(function type)} \\ \mid\ & \mathsf{Bool} & \text{(boolean type)} \\ \mid\ & \mathsf{Int} & \text{(integer type)} \end{align*} \]

Or in Haskell:

```
data Type
  = TyVar String
  -- ^ Type variable
  | TyFun Type Type
  -- ^ Function type
  | TyBool
  -- ^ Boolean type
  | TyInt
  -- ^ Integer type
  deriving (Show, Eq)
```

The type for the identity function (which we now write as just \(\lambda x. x\)), \(\forall X. X \rightarrow X\), is written in Haskell as:

```
tmId :: Term
tmId = TmAbs "x" (TmVar "x")
tyId :: Polytype
tyId = TyForall ["X"] $ TyFun (TyVar "X") (TyVar "X")
```

And \(\mathsf{const}\):

```
tmConst :: Term
tmConst = TmAbs "a" (TmAbs "b" (TmVar "a"))
tyConst :: Polytype
tyConst = TyForall ["A", "B"] $ TyFun (TyVar "A") (TyFun (TyVar "B") (TyVar "A"))
```

Type inference is quite a bit harder than type checking the simply typed lambda calculus or the polymorphic lambda calculus *with* explicit type annotations. We will use a constraint-based type inference algorithm, based on *Types and Programming Languages*, Benjamin C. Pierce, Chapter 22.3. I have found this to be the most intuitive approach. I will deviate a bit from Pierce’s approach, though, to make the rules somewhat easier to read.^{1}

For type inference, we will use a different typing relation than the one we used for the simply typed and the polymorphic (but explicitly typed) lambda calculus. Before, we used the relation \(\Gamma \vdash t : \tau\), which could be read something like: *\(\Gamma\) entails that \(t\) has type \(\tau\)*. Now, we will use the typing relation written as follows: \(\Gamma \vdash t : \tau \mid C\). This can be read as: *\(\Gamma\) entails that \(t\) has type \(\tau\) if the constraints of \(C\) are satisfied*. Our type inference program will generate a set of *constraints*, which ought to be *satisfied* for the type checker to succeed. (Another change is the context \(\Gamma\), which will now contain pairs \(x : \sigma\) of variables and *polytypes* instead of pairs \(x : \tau\) of variables and monotypes.)

A *constraint* \(\tau \sim \tau'\) states that \(\tau\) and \(\tau'\) should be *unified*. The constraint \(A \sim B \rightarrow \mathsf{Int}\), for example, asserts that the type variable \(A\) should be equal to the type \(B \rightarrow \mathsf{Int}\). A *constraint set* \(C\) is a set (or a list) of constraints. We want to write a function that *unifies* a constraint set. This unification function will generate a substitution \(\mathcal{S}\), such that the substitution *unifies* all constraints in \(C\): for all constraints \(\tau \sim \tau'\), \(\mathcal{S} \tau\) (the substitution \(\mathcal{S}\) applied to the type \(\tau\)) should be equal to \(\mathcal{S} \tau'\).

In Haskell, we will create the following `Constraint` type, with the infix constructor `(:~:)` that corresponds to the \(\sim\) in a constraint:
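Given how constraints are built later (e.g. `TyVar "X" :~: TyInt`), the type might be defined as:

```haskell
data Constraint = Type :~: Type
  deriving (Show, Eq)
```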

For substitutions, we use a map:
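A substitution maps type-variable names to types, so a `Map` from `Data.Map` is a natural fit:

```haskell
type Subst = Map String Type
```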

The `substType` function will apply a substitution to a type. Applying substitutions to monotypes (i.e., without \(\forall\)s) is quite easy, because we don’t have to worry about renaming.

When we come across a type variable, we replace it by the corresponding type in the substitution, or keep it when the variable does not occur in the substitution:

For function types, we just apply the substitution recursively:
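Putting the cases together (the base types contain no type variables and are left untouched), `substType` can be written as:

```haskell
substType :: Subst -> Type -> Type
substType s ty = case ty of
  -- Replace a type variable by its mapping, or keep it if unmapped
  TyVar x -> Map.findWithDefault (TyVar x) x s
  -- Apply the substitution recursively to both sides of a function type
  TyFun t1 t2 -> TyFun (substType s t1) (substType s t2)
  -- Base types are unchanged
  TyBool -> TyBool
  TyInt -> TyInt
```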

With the `substType` function, we can very easily apply a substitution to a constraint, by applying the substitution to the left-hand side and the right-hand side:

```
substConstraint :: Subst -> Constraint -> Constraint
substConstraint s (t1 :~: t2) = substType s t1 :~: substType s t2
```

We can also apply a substitution to a polytype \(\forall \vec{X}. \tau\), which applies the substitution to \(\tau\), with all elements from the substitution with a key from \(\vec{X}\) removed:

```
substPolytype :: Subst -> Polytype -> Polytype
substPolytype s (TyForall xs ty) =
  let s' = foldr Map.delete s xs
  in TyForall xs (substType s' ty)
```

As we’ve seen in the previous post, substitution is generally quite hard for types which bind type variables, because the programmer might use the same type variable twice in different contexts, causing them to clash in some cases. Luckily, this won’t be a problem here, since the programmer doesn’t write any type variables. Instead, all type variables that we use are generated by the inference algorithm, which makes sure they are all unique (or *fresh*). This will be explained later.

We also need to be able to compose two substitutions. In mathematical notation, we write \(\mathcal{S}_1 \circ \mathcal{S}_2\) for the composition of \(\mathcal{S}_1\) and \(\mathcal{S}_2\), where \(\mathcal{S}_2\) is applied first. We want \((\mathcal{S}_1 \circ \mathcal{S}_2)\tau\) for any type \(\tau\) to be equal to \(\mathcal{S}_1(\mathcal{S}_2\tau)\). We first apply \(\mathcal{S}_1\) to the codomain (that is, the *values*, not the keys, of the `Map`) of \(\mathcal{S}_2\), and then return the union of the result and \(\mathcal{S}_1\), where values of the first substitution are preferred:
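Following that recipe, composition can be written using the left-biased `Map.union`:

```haskell
composeSubst :: Subst -> Subst -> Subst
composeSubst s1 s2 =
  -- Apply s1 to the values of s2, then prefer those entries over s1's
  fmap (substType s1) s2 `Map.union` s1
```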

Then, we can write the unification function for a single constraint. When unification fails, it will return a `UnifyError`.

To unify two equal simple types, we don’t have to apply any substitution, so we’ll just return an empty substitution:

To unify two function types, we just need to unify both parameter types and both target types. We do this using the `solve` function, which can unify a list of constraints. We’ll define `solve` later.

To unify a type variable with another type, we use the `bind` helper function, which we’ll also define later.

Any other constraint is unsolvable, so we’ll just throw an error:
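Assembling these cases, and assuming a `UnifyError` type with the two error cases that show up in the `solve` examples further down (`CannotUnify` and `InfiniteType`), `unify` might look like:

```haskell
data UnifyError
  = CannotUnify Type Type
  -- ^ The two types cannot be unified
  | InfiniteType String Type
  -- ^ Unification would require an infinite type
  deriving (Show, Eq)

unify :: Constraint -> Either UnifyError Subst
unify (t1 :~: t2) = case (t1, t2) of
  -- Two equal simple types: nothing to substitute
  (TyBool, TyBool) -> Right Map.empty
  (TyInt, TyInt) -> Right Map.empty
  -- Function types: unify the parameter types and the target types
  (TyFun a1 b1, TyFun a2 b2) -> solve [a1 :~: a2, b1 :~: b2]
  -- A type variable on either side is bound to the other type
  (TyVar x, t) -> bind x t
  (t, TyVar x) -> bind x t
  -- Anything else is unsolvable
  _ -> Left (CannotUnify t1 t2)
```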

For unifying a type variable with another type, we use the `bind` function:

When `t` is the same as the type variable `x`, we don’t have to do any substituting:

When the type variable `x` occurs freely in `t` (and it is not `x` itself, which we have checked in the previous case), we cannot unify them, since that would require infinite types. The constraint \(X \sim X \rightarrow X\), for example, has no solution:

Otherwise, we can just return the substitution which substitutes `x` by `t`:
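In code, the three cases just described come out as guards:

```haskell
bind :: String -> Type -> Either UnifyError Subst
bind x t
  -- Binding a variable to itself requires no substitution
  | t == TyVar x = Right Map.empty
  -- An occurrence of x inside t would lead to an infinite type
  | x `occursIn` t = Left (InfiniteType x t)
  -- Otherwise, substitute x by t
  | otherwise = Right (Map.singleton x t)
```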

The `occursIn` function is very straightforward:

```
occursIn :: String -> Type -> Bool
x `occursIn` t = case t of
  TyBool -> False
  TyInt -> False
  TyFun t1 t2 -> x `occursIn` t1 || x `occursIn` t2
  TyVar y -> x == y
```

Finally, we can solve a list of constraints:

Solving an empty list of constraints just corresponds to doing nothing:

To solve a non-empty list of constraints, we first unify the constraint `c`, which gives us the substitution `s1`. We apply this substitution to the rest of the constraints and solve the result, giving us the substitution `s2`, and then return the composition of `s2` and `s1`:
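The two cases of `solve` then look like this:

```haskell
solve :: [Constraint] -> Either UnifyError Subst
-- Solving an empty list of constraints requires no substitution
solve [] = Right Map.empty
solve (c:cs) = do
  s1 <- unify c
  -- Apply s1 to the remaining constraints before solving them
  s2 <- solve (fmap (substConstraint s1) cs)
  pure (s2 `composeSubst` s1)
```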

Some examples:

```
solve [TyVar "X" :~: TyInt]
=> Right (fromList [("X",TyInt)])
solve [TyInt :~: TyBool]
=> Left (CannotUnify TyInt TyBool)
solve [TyInt :~: TyVar "X", TyVar "X" :~: TyFun TyBool TyBool]
=> Left (CannotUnify TyInt (TyFun TyBool TyBool))
solve [TyInt :~: TyVar "X", TyVar "Y" :~: TyBool]
=> Right (fromList [("X",TyInt),("Y",TyBool)])
solve [TyVar "X" :~: TyFun (TyVar "X") (TyVar "X")]
=> Left (InfiniteType "X" (TyFun (TyVar "X") (TyVar "X")))
```

We can also test whether `solve` has the desired behaviour, namely that the resulting substitution unifies the constraints. To do this, we’ll use the QuickCheck library.

We will first need an instance of `Arbitrary` for `Type` and `Constraint`. The instance for `Type` is adapted from the lambda calculus example. The frequencies for `TyInt` and `TyBool` are relatively low, because a frequent occurrence of these simple types in the generated arbitrary types results in a lot of failed calls to `solve`.

```
instance Arbitrary Type where
  arbitrary = sized arbType
    where
      arbType n = frequency $
        [ (10, TyVar <$> arbVar)
        , (1, pure TyInt)
        , (1, pure TyBool)
        ] <>
        [ (5, TyFun <$> arbType (n `div` 2) <*> arbType (n `div` 2))
        | n > 0
        ]
      arbVar = elements [[c] | c <- ['A'..'Z']]

instance Arbitrary Constraint where
  arbitrary = (:~:) <$> arbitrary <*> arbitrary
```

Then we write the function `unifies`, which checks whether a substitution unifies the constraints. (Remember: a substitution \(\mathcal{S}\) satisfies a list of constraints \(C\) if for all constraints \(\tau \sim \tau'\) in \(C\), \(\mathcal{S}\tau = \mathcal{S}\tau'\).)

```
unifies :: Subst -> [Constraint] -> Bool
unifies s cs =
  let cs' = fmap (substConstraint s) cs
  in all (\(t1 :~: t2) -> t1 == t2) cs'
```

Now we can write our property, which will check whether every successful `solve` returns a substitution that unifies the list of constraints. We will discard errors of `solve`, since they occur quite often for arbitrary constraints, but aren’t useful for checking the property.

```
prop_solveUnifies :: [Constraint] -> Property
prop_solveUnifies cs =
  case solve cs of
    -- Discard errors
    Left _ -> property Discard
    Right subst -> property $ unifies subst cs
```

Now we can check the property:
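Running the property (in GHCi, for example) should report that all non-discarded test cases pass:

```
quickCheck prop_solveUnifies
```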

Looks good!

Now we know how to solve constraints, but we don’t know how to actually generate them. The typing rules will generate the constraints that should be solved afterwards.

Let’s first look at some easy rules. The rules for the values of the simple types are still the same as for the simply typed lambda calculus, with the addition of \(\ldots \mid \varnothing\) at the end of the judgement, which states that the rules don’t generate any constraints (an empty set):
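In full (these rules appear again next to their implementation further down):

\[ \text{T-True: } \frac{ }{ \varnothing \vdash \mathsf{True} : \mathsf{Bool} \mid \varnothing } \]

\[ \text{T-False: } \frac{ }{ \varnothing \vdash \mathsf{False} : \mathsf{Bool} \mid \varnothing } \]

\[ \text{T-Int: } \frac{ }{ \varnothing \vdash n : \mathsf{Int} \mid \varnothing } \]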

The rule for applications is also not that hard:

\[ \text{T-App: } \frac{ \begin{array}{c} \Gamma \vdash t_1 : \tau_1 \mid C_1 \\ \Gamma \vdash t_2 : \tau_2 \mid C_2 \\ X \text{ fresh} \\ C' = C_1 \cup C_2 \cup \{\tau_1 \sim \tau_2 \rightarrow X\} \end{array} }{ \Gamma \vdash t_1\ t_2 : X \mid C' } \]

When type checking the application \(t_1\ t_2\), we first type check \(t_1\) and \(t_2\). We then generate a new constraint set which consists of all the constraints of \(C_1\), all of \(C_2\) and the constraint \(\tau_1 \sim \tau_2 \rightarrow X\). (The \(\cup\) symbol is mathematical notation for the *union* of two sets.) Because \(t_1\) is applied to \(t_2\), \(t_1\) should be a function with a parameter of the type of \(t_2\). We can’t yet know the resulting type, so we use a fresh type variable, denoted by \(X\), for which we add the constraint that \(\tau_1\) should be equal to \(\tau_2 \rightarrow X\).

To state that \(X\) should be a freshly chosen type variable, we write \(X \text{ fresh}\) in the typing rule. A fresh type variable is a type variable which is not already used elsewhere. Because all terms are implicitly typed (that is, they don’t contain types in their syntax), we can confidently use a predefined list of fresh type variables, since there is no chance of them clashing with type variables written by the programmer (because they don’t exist).

Other rules might add constraints regarding \(X\). The type inference of \(t_1\ t_2 + 3\), for example, will add the constraint \(X \sim \mathsf{Int}\).

The typing rules for if-then-else terms and addition terms are very easy: they are almost the same as for the simply typed lambda calculus, but now we can use constraints to specify that the condition of an if-then-else term must be a boolean, etc.:

\[ \text{T-If: } \frac{ \begin{array}{c} \Gamma \vdash t_1 : \tau_1 \mid C_1 \\ \Gamma \vdash t_2 : \tau_2 \mid C_2 \\ \Gamma \vdash t_3 : \tau_3 \mid C_3 \\ C' = C_1 \cup C_2 \cup C_3 \cup \{\tau_1 \sim \mathsf{Bool}, \tau_2 \sim \tau_3\} \end{array} }{ \Gamma \vdash \mathbf{if}\ t_1\ \mathbf{then}\ t_2\ \mathbf{else}\ t_3 : \tau_2 \mid C' } \]

\[ \text{T-Add: } \frac{ \begin{array}{c} \Gamma \vdash t_1 : \tau_1 \mid C_1 \\ \Gamma \vdash t_2 : \tau_2 \mid C_2 \\ C' = C_1 \cup C_2 \cup \{\tau_1 \sim \mathsf{Int}, \tau_2 \sim \mathsf{Int}\} \end{array} }{ \Gamma \vdash t_1 + t_2 : \mathsf{Int} \mid C' } \]

The rule for variables is a bit more involved. It looks like this:

\[ \text{T-Var: } \frac{ \begin{array}{c} x : \sigma \in \Gamma \\ \tau = \mathit{inst}(\sigma) \end{array} }{ \Gamma \vdash x : \tau \mid \varnothing } \]

Remember that the context \(\Gamma\) contains polytypes, but our typing relation uses monotypes (\(\Gamma \vdash t : \tau\) instead of \(\Gamma \vdash t : \sigma\)). To fix this, we use a function called \(\mathit{inst}\) (short for ‘instantiate’), which takes as its parameter a polytype \(\forall \vec{X}. \tau\). For every type variable \(X_i\) in \(\vec{X}\) (which is a list of type variables), it generates a new, fresh type variable \(Y_i\). It then performs the substitution \([X_1 := Y_1, \ldots, X_n := Y_n]\) on \(\tau\) and returns the result.

This trick is necessary for *let-polymorphism* (which I’ll discuss in more detail for the typing rule for let-in terms). When inferring the type of the term \[
\begin{array}{l}
\mathbf{let}\ \mathsf{id} = \lambda x. x\ \mathbf{in} \\
\mathbf{if}\ \mathsf{id}\ \mathsf{True}\ \mathbf{then}\ \mathsf{id}\ 4\ \mathbf{else}\ 5
\end{array}
\] we would add \(\mathsf{id} : \forall A. A \rightarrow A\) to the context. When we come across the term \(\mathsf{id}\ \mathsf{True}\), we would (without using \(\mathit{inst}\)) add the constraint \(A \sim \mathsf{Bool}\). But later, when we type check \(\mathsf{id}\ 4\), we would also add the constraint \(A \sim \mathsf{Int}\). This results in an error, since the unification algorithm can’t unify \(\mathsf{Bool} \sim \mathsf{Int}\) (and rightly so). \(\mathit{inst}\) prevents this problem, as we’ll see when looking at T-Let.

The rule for lambda abstractions looks like this:

\[ \text{T-Abs: } \frac{ \begin{array}{c} X \text{ fresh} \\ \Gamma, x : \forall \varnothing. X \vdash t : \tau \mid C \end{array} }{ \Gamma \vdash \lambda x. t : X \rightarrow \tau \mid C } \]

This can be read as follows: *if \(X\) is a fresh type variable and \(\Gamma, x : \forall \varnothing. X\) entails that \(t\) has type \(\tau\) with the generated constraints \(C\), then \(\Gamma\) entails that \(\lambda x. t\) has type \(X \rightarrow \tau\) with the same generated constraint set \(C\).* Since the constraint set stays the same, the T-Abs rule does not introduce any constraints.

Because lambda abstractions are no longer annotated with the type of the parameter (\(\lambda x : \tau. t\)), we don’t know what type we should give \(x\) in the context to type check the body of the lambda abstraction (\(t\)). We therefore use a fresh type variable \(X\) as \(x\)’s type. But, since the context contains polytypes, we can’t just add the pair \(x : X\). We instead add the pair \(x : \forall \varnothing. X\).

Not binding \(X\) with a \(\forall\) (i.e., adding \(x : \forall \varnothing. X\) instead of \(x : \forall X. X\)) prevents \(\mathit{inst}\) from applying let-polymorphism to the parameters of lambda abstractions. The above example using a let-in term would not work as a lambda abstraction: \((\lambda \mathsf{id}. \mathbf{if}\ \mathsf{id}\ \mathsf{True}\ \mathbf{then}\ \mathsf{id}\ 4\ \mathbf{else}\ 5)\ (\lambda x. x)\) would fail to type check.

The rule for let-in terms, finally, looks like this:

\[ \text{T-Let: } \frac{ \begin{array}{c} \Gamma \vdash t_1 : \tau_1 \mid C_1 \\ \mathcal{S} = \mathit{solve}(C_1) \\ \sigma = \mathit{gen}(\mathcal{S}\Gamma, \mathcal{S}\tau_1) \\ \Gamma, x : \sigma \vdash t_2 : \tau_2 \mid C_2 \end{array} }{ \Gamma \vdash \mathbf{let}\ x = t_1\ \mathbf{in}\ t_2 : \tau_2 \mid C_2 } \]

This rule is executed in the following steps:

- The type of \(t_1\) is determined.
- The constraints generated while inferring the type of \(t_1\) are solved using the `solve` function, giving us the substitution \(\mathcal{S}\).
- The substitution is applied to the context \(\Gamma\) and to \(\tau_1\), and the resulting type is *generalised* (using the \(\mathit{gen}\) function). The \(\mathit{gen}\) function creates a polytype \(\sigma\) of the form \(\forall \vec{X}. \mathcal{S}\tau_1\) for the monotype \(\mathcal{S}\tau_1\), in which all free type variables \(\vec{X}\) of \(\mathcal{S}\tau_1\) (not occurring in \(\mathcal{S}\Gamma\)) are bound by a \(\forall\).
- The type of \(t_2\) is determined with \(x : \sigma\) added to the context.

This rule adds *let-polymorphism* to the language. These quite complicated steps are necessary to actually make use of polymorphism. As we saw before, we want lambda abstractions to not support polymorphism, so a parameter can only be used on one concrete type. But for let-in terms, we do want to be able to use the bound variable on multiple concrete types: the identity function on booleans, integers, integer-to-boolean functions, etc.

In the rule for variables, T-Var, we introduced the \(\mathit{inst}\) function. It creates a fresh type variable for every type variable bound in a polytype. To prevent it from generalising the parameters of lambda abstractions, we didn’t bind any type variables in the polytype we added to the context: \(\forall \varnothing. X\). For let-in terms, however, we do want \(\mathit{inst}\) to create another instance for the bound variable for every occurrence. Therefore, we find the most general type for the variable, and add it to the context. When type checking the term \(\mathbf{let}\ \mathsf{id} = \lambda x. x\ \mathbf{in}\ \mathsf{id}\ 1\), for example, \(\mathsf{id}\) is added to the context with its most general type: \(\forall X. X \rightarrow X\). When typing the body of the let-in term, then, the type of \(\mathsf{id}\) is instantiated as \(Y \rightarrow Y\) for example. Then the constraint \(Y \sim \mathsf{Int}\) is generated, because \(\mathsf{id}\) is applied to \(1\), but \(X\) is still untouched.

With these typing rules, we can move on to implementing the type inference algorithm.

For the implementation, we will use so-called monad transformers. However, you should not need to understand how monad transformers work in order to understand the implementation.

Our inference monad looks like this:

```
type Infer a = RWST Context [Constraint] [String] (Except TypeError) a
type Context = Map String Polytype
```

The `TypeError` type covers the two ways inference can fail: a variable was not bound by a lambda abstraction, or an error occurred during unification.
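A definition that matches the `UnifyError (CannotUnify …)` values seen in the examples at the end of the post; the name of the first constructor is an assumption:

```haskell
data TypeError
  = UnboundVariable String
  -- ^ The variable was not bound by a lambda abstraction
  -- (the constructor name here is an assumption)
  | UnifyError UnifyError
  -- ^ An error occurred during unification
  deriving (Show, Eq)
```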

The inference monad is a `Reader` monad for the `Context`, which is practically the same as having a `Context` parameter for every function inside the `Infer` monad, which is what we did before. Everywhere inside the `Infer` monad we can get the context, but we can’t change it. `Infer` is also a `Writer` for a list of `Constraint`s, which means that we can write to a list of constraints. This list of constraints is the \(\ldots \mid C\) in the typing rules. `Infer` is furthermore a `State` for a list of `String`s, which will be the supply of fresh type variables. And lastly, `Infer` can throw `TypeError`s.

Using `runInfer`, we can convert a value of `Infer a` to an `Either TypeError (a, [String], [Constraint])`:

```
runInfer :: Context
  -> [String]
  -> Infer a
  -> Either TypeError (a, [String], [Constraint])
runInfer ctx fs m = runExcept $ runRWST m ctx fs
```

First, we need a function that generates a fresh type variable. The state should be an infinite list of type variable names, so we should always be able to get the following element from the list:

```
fresh :: Infer String
fresh = do
  freshVars <- get
  case freshVars of
    [] -> error "Non-infinite list of fresh type variables."
    (f:fs) -> do
      put fs
      pure f
```

With `get :: Infer [String]` we can get the list of type variables. When it’s empty, we just use `error`, since the programmer has made a mistake by not using an infinite list of fresh type variables. When the list is non-empty, we return the `head`, and we use the `tail` as the new state by using `put :: [String] -> Infer ()`, which replaces the state.

For the initial state of fresh variables, we will use the following:

```
freshVariables :: [String]
freshVariables = concatMap (\n -> [l : n | l <- ['A'..'Z']]) $
  "" : fmap show [1..]
```

This list will look something like:
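The letters ‘A’ through ‘Z’ first, then each letter with suffix 1, then suffix 2, and so on:

```
["A","B","C", ..., "Z", "A1", "B1", ..., "Z1", "A2", ...]
```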

We will also need the `inst` function:

```
inst :: Polytype -> Infer Type
inst (TyForall xs ty) = do
  ys <- mapM (const fresh) xs
  let subst = Map.fromList $ zip xs (fmap TyVar ys)
  pure $ substType subst ty
```

For every type variable \(X\) bound by the \(\forall\), we create a fresh type variable \(Y\). Then we apply the substitution which replaces every \(X_i\) by \(Y_i\).

We also need the `gen` function, but before we can write it, we need to be able to get the set of free type variables from a type:

```
freeVarsType :: Type -> Set String
freeVarsType TyBool = Set.empty
freeVarsType TyInt = Set.empty
freeVarsType (TyVar x) = Set.singleton x
freeVarsType (TyFun t1 t2) = freeVarsType t1 `Set.union` freeVarsType t2
```

And the free type variables from a polytype, which are the free type variables in the monotype that are not bound by the \(\forall\).

```
freeVarsPolytype :: Polytype -> Set String
freeVarsPolytype (TyForall xs ty) = freeVarsType ty `Set.difference` Set.fromList xs
```

And also from the context, which corresponds to the union of the free type variables of all polytypes in the context:
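In code, this is the union over all polytypes in the context’s values:

```haskell
freeVarsContext :: Context -> Set String
freeVarsContext ctx = Set.unions (fmap freeVarsPolytype (Map.elems ctx))
```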

Now we can write `gen`. We will write it outside the `Infer` monad, because it will be useful elsewhere too.

```
gen :: Context -> Type -> Polytype
gen ctx ty =
  let xs = Set.toList (freeVarsType ty `Set.difference` freeVarsContext ctx)
  in TyForall xs ty
```

`gen` just finds the free type variables of `ty` which don’t occur in the context, and returns a polytype in which those type variables are bound.

We will also need to be able to apply a substitution to a context, by applying the substitution to every polytype in the context:
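Using the `substPolytype` function from earlier:

```haskell
substContext :: Subst -> Context -> Context
substContext s = fmap (substPolytype s)
```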

Now we can finally implement the type inference algorithm:

\[ \text{T-False: } \frac{ }{ \varnothing \vdash \mathsf{False} : \mathsf{Bool} \mid \varnothing } \]

\[ \text{T-True: } \frac{ }{ \varnothing \vdash \mathsf{True} : \mathsf{Bool} \mid \varnothing } \]

\[ \text{T-Int: } \frac{ }{ \varnothing \vdash n : \mathsf{Int} \mid \varnothing } \]

Values of the simple types are, of course, easy:
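Their clauses are direct, and emit no constraints:

```haskell
infer :: Term -> Infer Type
infer TmTrue = pure TyBool
infer TmFalse = pure TyBool
infer (TmInt _) = pure TyInt
```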

\[ \text{T-App: } \frac{ \begin{array}{c} \Gamma \vdash t_1 : \tau_1 \mid C_1 \\ \Gamma \vdash t_2 : \tau_2 \mid C_2 \\ X \text{ fresh} \\ C' = C_1 \cup C_2 \cup \{\tau_1 \sim \tau_2 \rightarrow X\} \end{array} }{ \Gamma \vdash t_1\ t_2 : X \mid C' } \]

For applications:

We first infer the types of `t1` and `t2`, and generate a fresh type variable `f`. Then we generate the constraint `ty1 :~: TyFun ty2 f`, which we can add to the list of constraints using the `tell :: [Constraint] -> Infer ()` function. Finally, we return the fresh type variable as the type:
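Put together, the application case reads:

```haskell
infer (TmApp t1 t2) = do
  ty1 <- infer t1
  ty2 <- infer t2
  -- X fresh: the (yet unknown) result type of the application
  f <- TyVar <$> fresh
  -- t1 must be a function from t2's type to the fresh variable
  tell [ty1 :~: TyFun ty2 f]
  pure f
```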

\[ \text{T-If: } \frac{ \begin{array}{c} \Gamma \vdash t_1 : \tau_1 \mid C_1 \\ \Gamma \vdash t_2 : \tau_2 \mid C_2 \\ \Gamma \vdash t_3 : \tau_3 \mid C_3 \\ C' = C_1 \cup C_2 \cup C_3 \cup \{\tau_1 \sim \mathsf{Bool}, \tau_2 \sim \tau_3\} \end{array} }{ \Gamma \vdash \mathbf{if}\ t_1\ \mathbf{then}\ t_2\ \mathbf{else}\ t_3 : \tau_2 \mid C' } \]

For if-then-else terms, we generate the constraints that the condition should be a boolean and that the arms should be of the same type:

```
infer (TmIf t1 t2 t3) = do
  ty1 <- infer t1
  ty2 <- infer t2
  ty3 <- infer t3
  tell [ty1 :~: TyBool, ty2 :~: ty3]
  pure ty2
```

\[ \text{T-Add: } \frac{ \begin{array}{c} \Gamma \vdash t_1 : \tau_1 \mid C_1 \\ \Gamma \vdash t_2 : \tau_2 \mid C_2 \\ C' = C_1 \cup C_2 \cup \{\tau_1 \sim \mathsf{Int}, \tau_2 \sim \mathsf{Int}\} \end{array} }{ \Gamma \vdash t_1 + t_2 : \mathsf{Int} \mid C' } \]

The operands of an addition should be integers, and the result is also an integer:

```
infer (TmAdd t1 t2) = do
  ty1 <- infer t1
  ty2 <- infer t2
  tell [ty1 :~: TyInt, ty2 :~: TyInt]
  pure TyInt
```

\[ \text{T-Var: } \frac{ \begin{array}{c} x : \sigma \in \Gamma \\ \tau = \mathit{inst}(\sigma) \end{array} }{ \Gamma \vdash x : \tau \mid \varnothing } \]

For variables, we use the `inst` function. We can get the context using `ask`, and look up `x` in it. When it doesn’t exist in the context, we use `throwError :: TypeError -> Infer ()` to throw an error. Otherwise, we use `inst` on the type:
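Assuming an `UnboundVariable` constructor in `TypeError` for the error case (the exact name is a guess), the variable case is:

```haskell
infer (TmVar x) = do
  ctx <- ask
  case Map.lookup x ctx of
    -- The variable is not in the context
    Nothing -> throwError (UnboundVariable x)
    -- Instantiate the polytype with fresh type variables
    Just sigma -> inst sigma
```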

\[ \text{T-Abs: } \frac{ \begin{array}{c} X \text{ fresh} \\ \Gamma, x : \forall \varnothing. X \vdash t : \tau \mid C \end{array} }{ \Gamma \vdash \lambda x. t : X \rightarrow \tau \mid C } \]

Then lambda abstractions. Using `local :: (Context -> Context) -> Infer a -> Infer a` we can update the context for a local sub-computation. To infer the type of `t`, we need to add `x`’s type to the context, so we use `local`. Note that the context is not changed in the outer computation:

```
infer (TmAbs x t) = do
  f <- TyVar <$> fresh
  ty <- local (Map.insert x (TyForall [] f)) $ infer t
  pure $ TyFun f ty
```

\[ \text{T-Let: } \frac{ \begin{array}{c} \Gamma \vdash t_1 : \tau_1 \mid C_1 \\ \mathcal{S} = \mathit{solve}(C_1) \\ \sigma = \mathit{gen}(\mathcal{S}\Gamma, \mathcal{S}\tau_1) \\ \Gamma, x : \sigma \vdash t_2 : \tau_2 \mid C_2 \end{array} }{ \Gamma \vdash \mathbf{let}\ x = t_1\ \mathbf{in}\ t_2 : \tau_2 \mid C_2 } \]

And, finally, let-in terms. We first get the context. Then we use `listen :: Infer a -> Infer (a, [Constraint])` to ‘listen’ to the constraints generated by `infer t1`; this gives us the list of constraints generated while inferring `t1`’s type. Now we try to solve these constraints. If they’re not solvable, we just throw an error. Otherwise, we obtain a substitution, which we apply to `t1`’s type `ty1`, giving us `ty1'`. We then generalise `ty1'` in the context to which we have also applied the substitution, giving us a polytype `s`. Finally, we add `s` to the context and infer `t2`’s type:
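Combining those steps (a sketch using mtl’s `listen`):

```haskell
infer (TmLet x t1 t2) = do
  ctx <- ask
  -- Listen to the constraints generated while inferring t1's type
  (ty1, cs) <- listen (infer t1)
  case solve cs of
    Left err -> throwError (UnifyError err)
    Right subst -> do
      -- Apply the substitution and generalise to a polytype
      let ty1' = substType subst ty1
          s = gen (substContext subst ctx) ty1'
      -- Infer t2's type with x : s added to the context
      local (Map.insert x s) (infer t2)
```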

That’s it! We’ve written a function which runs the inference algorithm on a term, giving us a type and a list of constraints.

Now, we still need to solve the constraints and apply the substitution to the type. We will write the function `polytypeOf`, which runs the inference algorithm, solves the constraints, applies the substitution, and turns the resulting type into a polytype.

First, we run the inference algorithm in an empty context^{2}, giving us a type `ty`, a list of fresh variables `fs` and a list of constraints `cs`.

Then we solve the constraints to obtain a substitution. Because `solve` returns an `Either UnifyError Subst`, we need to turn its error into a `TypeError`, which we can do by applying the `UnifyError` constructor to it. To do this, we use `first :: Bifunctor p => (a -> b) -> p a c -> p b c`.

Finally, we apply the substitution to `ty`, generalise the result in an empty context, giving us the polytype `s`, and return `s`:
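The whole pipeline, then (`first` comes from `Data.Bifunctor`):

```haskell
polytypeOf :: Term -> Either TypeError Polytype
polytypeOf t = do
  -- Run inference in an empty context
  (ty, _fs, cs) <- runInfer Map.empty freshVariables (infer t)
  -- Solve the constraints, turning a UnifyError into a TypeError
  subst <- first UnifyError (solve cs)
  -- Apply the substitution and generalise in an empty context
  let ty' = substType subst ty
  pure (gen Map.empty ty')
```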

Let’s try it!

The type of `id`:
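Since `freshVariables` starts at `"A"`, the inferred and generalised type comes out as:

```
polytypeOf tmId
=> Right (TyForall ["A"] (TyFun (TyVar "A") (TyVar "A")))
```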

That is \(\forall A. A \rightarrow A\), correct!

The type of `const`:

```
polytypeOf tmConst
=> Right (TyForall ["A","B"] (TyFun (TyVar "A") (TyFun (TyVar "B") (TyVar "A"))))
```

\(\forall A\ B. A \rightarrow B \rightarrow A\), again correct!

Now let’s try to use let-polymorphism, by trying the term: \[ \begin{array}{l} \mathbf{let}\ \mathsf{id} = \lambda x. x\ \mathbf{in} \\ \mathbf{if}\ \mathsf{id}\ \mathsf{True}\ \mathbf{then}\ \mathsf{id}\ 4\ \mathbf{else}\ 5 \end{array} \]

```
polytypeOf (TmLet "id" (TmAbs "x" (TmVar "x")) (TmIf (TmApp (TmVar "id") TmTrue) (TmApp (TmVar "id") (TmInt 4)) (TmInt 5)))
=> Right (TyForall [] TyInt)
```

And the same term, but using a lambda abstraction:

\[ (\lambda \mathsf{id}. \mathbf{if}\ \mathsf{id}\ \mathsf{True}\ \mathbf{then}\ \mathsf{id}\ 4\ \mathbf{else}\ 5)\ (\lambda x. x) \]

```
polytypeOf (TmApp (TmAbs "id" (TmIf (TmApp (TmVar "id") TmTrue) (TmApp (TmVar "id") (TmInt 4)) (TmInt 5))) (TmAbs "x" (TmVar "x")))
=> Left (UnifyError (CannotUnify TyBool TyInt))
```

Just like we expected, it can’t unify \(\mathsf{Bool} \sim \mathsf{Int}\).

One more: \[ \begin{array}{l} \mathbf{let}\ \mathsf{id} = \lambda x. x\ \mathbf{in} \\ \mathbf{let}\ \mathsf{const} = \lambda a. \lambda b. a\ \mathbf{in} \\ \mathsf{const}\ \mathsf{id}\ \mathsf{const} \end{array} \]

```
polytypeOf $ TmLet "id" tmId $ TmLet "const" tmConst $ TmApp (TmApp (TmVar "const") (TmVar "id")) (TmVar "const")
=> Right (TyForall ["F"] (TyFun (TyVar "F") (TyVar "F")))
```

It returns \(\forall F. F \rightarrow F\), which is exactly the type of \(\mathsf{id}\).

We’ve explored Hindley-Milner type inference, and implemented a type inference algorithm! This language is already quite close to Haskell.

Some exercises you might like to do:

- Write a function `simplPolytype` which ‘simplifies’ a polytype. It should rename the bound variables in a polytype to names in the beginning of the alphabet (or: the beginning of `freshVariables`). The polytype of the last example is \(\forall F. F \rightarrow F\), for example, but it would be nicer if `polytypeOf` returned \(\forall A. A \rightarrow A\).
- Extend the language using other simple types and operations for them.

If you have trouble understanding some parts, try experimenting with them a lot. And feel free to ask questions on Reddit.

Other resources you might find useful:

- *Types and Programming Languages*, Benjamin C. Pierce, Chapters 22 and 23.
- *Hindley-Milner type system* on Wikipedia.
- *Hindley-Milner inference*, chapter 6 of Stephen Diehl’s *Write You a Haskell*.

Pierce uses the typing relation \(\Gamma \vdash t : \tau \mid_X C\), where the set \(X\) keeps track of the used type variables. This is very useful to formally reason about the type inference algorithm, but it makes the typing rules more complex than necessary for a Haskell implementation. Instead, I will just write \(X \text{ fresh}\) for a type variable \(X\). This approach is more informal, since it doesn’t formally specify when a variable is *fresh*, but I think it is easier.↩︎

If you want to extend the language by having declarations, or by making a REPL, you might want to run `infer` in a specific context, so declarations aren’t lost. You would also have to run `gen` with this context, instead of an empty context.↩︎

In the previous post, we have explored the simply typed lambda calculus (STLC), an extension of the untyped lambda calculus with simple types. In this post, we’ll take a look at the *polymorphic lambda calculus*, also called *System F*, an extension of the STLC with *polymorphism*.

We have seen in the previous post how to write the identity function on booleans: \(\lambda x : \mathsf{Bool}. x\). We have also seen the identity function on boolean-to-integer functions: \(\lambda x : \mathsf{Bool} \rightarrow \mathsf{Int}. x\). As you can see, these definitions are very similar: only the type of \(x\) is different, but the rest of the term is exactly the same.

This is suboptimal, because it means that we have duplication: in a large codebase, we may need the identity function on booleans, on integers, on boolean-to-boolean functions, on integer-to-boolean functions, etc. Not only is it annoying to write all those definitions, but what if we later realise we’ve made a mistake?^{1} We have to change *all* definitions, for every type!

To prevent such needless labour, we want to use *abstraction*: we want to be able to write the identity function for *all* types, with only *one* definition. We will therefore extend the STLC with *(parametric) polymorphism*. The result is called the *polymorphic lambda calculus* or *System F*.

To incorporate polymorphism in the STLC, we add two new sorts of types:

- *Type variables*. These are just like ‘normal’, term-level variables, but instead of ranging over values, they range over types. We’ll write them with capital letters.
- *Polymorphic types*. These are written in formal syntax as \(\forall X. \tau\), where \(X\) is a type variable, and \(\tau\) a type. (\(\forall\) is the mathematical symbol with the meaning ‘for all’.) In more Haskell-like syntax, we may write `forall X. τ`.

An example of a polymorphic type is \(\mathsf{id} : \forall X. X \rightarrow X\), which is the type of a function that accepts a value of any type, and returns that value. (All terms with that type turn out to be equivalent to the identity function.)

The new syntax of types is thus:

\[ \begin{align*} \tau ::=\ & X & \text{(type variable)} \\ \mid\ & \forall X. \tau & \text{(polymorphic type)} \\ \mid\ & \tau \rightarrow \tau' & \text{(function type)} \\ \mid\ & \mathsf{Bool} & \text{(boolean type)} \\ \mid\ & \mathsf{Int} & \text{(integer type)} \end{align*} \]

The new AST type for types looks like this:

```
data Type
  = TyVar String
  -- ^ Type variable
  | TyForall String Type
  -- ^ Polymorphic type
  | TyFun Type Type
  -- ^ Function type
  | TyBool
  -- ^ Boolean type
  | TyInt
  -- ^ Integer type
  deriving (Show, Eq)
```

Having updated the syntax of types, we also need to update the syntax of terms: we need terms that introduce and interact with polymorphic types. These are the terms we add:

- *Type abstractions*. Type abstractions are just like normal abstractions, but instead of introducing a variable that ranges over values, they introduce a type variable that ranges over types. We write type abstractions with an uppercase lambda, to distinguish them from normal abstractions: \(\Lambda X. t\) for a type variable \(X\) and a term \(t\). In Haskell-like syntax, we write `/\X. t`. Using type abstractions, we can write the generic identity function whose type we’ve seen above: \(\mathsf{id} = \Lambda X. \lambda x : X. x\). In the right-hand side of the type abstraction, after the period, we can now refer to \(X\), but only in types. So we can create an abstraction that accepts a parameter of type \(X\).
- *Type applications*. Type applications are used to *instantiate* a term with a specific type. If we want to use the identity function on an integer, we need to indicate that the type variable \(X\) in the definition of \(\mathsf{id}\) should be replaced by \(\mathsf{Int}\). In formal syntax, type applications are generally written the same as normal applications: \(\mathsf{id}\ \mathsf{Int}\). But to be more explicit, we can use the Haskell syntax^{2}: `id @Int`.

We add the following to the syntax of terms:

\[ \begin{align*} t ::=\ & \ldots \\ \mid\ & \Lambda X. t & \text{(type abstraction)} \\ \mid\ & t\ \tau & \text{(type application)} \end{align*} \]

The updated AST for terms adds constructors for type abstractions and type applications; the rest is exactly the same as in the STLC:

```
data Term
  = TmTyAbs String Term
  -- ^ Type abstraction
  | TmTyApp Term Type
  -- ^ Type application
  | TmTrue
  -- ^ True value
  | TmFalse
  -- ^ False value
  | TmInt Integer
  -- ^ Integer value
  | TmVar String
  -- ^ Variable
  | TmAbs String Type Term
  -- ^ Lambda abstraction
  | TmApp Term Term
  -- ^ Application
  | TmAdd Term Term
  -- ^ Addition
  | TmIf Term Term Term
  -- ^ If-then-else conditional
  deriving (Show, Eq)
```

Let’s look at some examples.^{3} We’ve already seen the polymorphic identity function:

And its type:

We can also write the \(\mathsf{const}\) function, which returns its first parameter and ignores its second: \(\mathsf{const} = \Lambda A. \Lambda B. \lambda a : A. \lambda b : B. a\). In the Haskell AST:

And its type, \(\mathsf{const} : \forall A. \forall B. A \rightarrow B \rightarrow A\):
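In the Haskell AST, these example definitions presumably look as follows (reconstructed from the type-checker output shown later in the post):

```
tmId = TmTyAbs "X" (TmAbs "x" (TyVar "X") (TmVar "x"))
tyId = TyForall "X" (TyFun (TyVar "X") (TyVar "X"))

tmConst = TmTyAbs "A" (TmTyAbs "B" (TmAbs "a" (TyVar "A") (TmAbs "b" (TyVar "B") (TmVar "a"))))
tyConst = TyForall "A" (TyForall "B" (TyFun (TyVar "A") (TyFun (TyVar "B") (TyVar "A"))))
```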

And we can try to use \(\mathsf{const}\) to return a value. The term \(\mathsf{const}\ \mathsf{Bool}\ \mathsf{Int}\ \mathsf{False}\ 5\) should evaluate to \(\mathsf{False}\), so its type should be \(\mathsf{Bool}\):

```
tmConstFalse5 = TmApp (TmApp (TmTyApp (TmTyApp tmConst TyBool) TyInt) TmFalse) (TmInt 5)
tyConstFalse5 = TyBool
```

Now that we understand the syntax, we can move on to type checking.

Describing the type checking of the polymorphic lambda calculus isn’t actually that difficult. We will only add two typing rules: one for type abstractions and one for type applications. The rest of the rules will be exactly the same as those of the STLC.

The first rule we add is the one for type abstractions:

\[ \text{T-TyAbs: } \frac{ \Gamma \vdash t : \tau }{ \Gamma \vdash \Lambda X. t : \forall X. \tau } \]

This rule is quite simple: if \(t\) has type \(\tau\), then \(\Lambda X. t\) has type \(\forall X. \tau\). This is the introduction rule for polymorphic types, since it is the only typing rule that ‘produces’ a \(\forall\).

The rule for type applications is the elimination rule for polymorphic types: it ‘removes’ a \(\forall\). The rule is:

\[ \text{T-TyApp: } \frac{ \Gamma \vdash t : \forall X. \tau }{ \Gamma \vdash t\ \tau' : \tau[X := \tau'] } \]

This rule says: if \(t\) has type \(\forall X. \tau\), then \(t\ \tau'\) (\(t\) applied to type \(\tau'\)) has type \(\tau[X := \tau']\). This type is the result of a *substitution*; \(\tau[X := \tau']\) means: substitute every free occurrence of the type variable \(X\) in \(\tau\) with \(\tau'\). But, as we will see, that’s easier said than done…

First, let’s look at some examples of substitution:

\[ \begin{align*} X[X := \mathsf{Int}] & \rightarrow \mathsf{Int} \\ (X \rightarrow X)[X := \mathsf{Bool}] & \rightarrow (\mathsf{Bool} \rightarrow \mathsf{Bool}) \\ (X \rightarrow Y)[X := \mathsf{Int} \rightarrow \mathsf{Bool}] & \rightarrow ((\mathsf{Int} \rightarrow \mathsf{Bool}) \rightarrow Y) \\ (X \rightarrow (\forall X. X))[X := Y] & \rightarrow (Y \rightarrow (\forall X. X)) \end{align*} \]

We’ll try to write a function that performs a substitution. We write `subst x ty' ty` for \(\mathit{ty}[x := \mathit{ty'}]\).

Defining `subst` for the simple types is easy, because they do not contain any free variables.

Applying a substitution to a function type is also not that difficult: we just apply the substitution to the source and to the target type.

When we come across a type variable `y`, we should replace it with `ty'` if `x` is equal to `y`. Otherwise, we keep `y`.

When we apply the substitution to a polymorphic type, we need to be careful: we only want to apply the substitution to *free* variables, and the \(\forall\) binds the variable next to it. So only if the type abstraction binds a variable with a name different from `x` should we apply the substitution to the right-hand side of the polymorphic type.
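Putting the cases together, a self-contained sketch of `subst` (repeating the `Type` definition from above so the snippet runs on its own):

```haskell
-- Naive substitution on types: subst x ty' ty computes ty[x := ty'].
-- (As the post shows later, this version can capture variables.)
data Type
  = TyVar String
  | TyForall String Type
  | TyFun Type Type
  | TyBool
  | TyInt
  deriving (Show, Eq)

subst :: String -> Type -> Type -> Type
subst _ _ TyBool = TyBool
subst _ _ TyInt = TyInt
subst x ty' (TyFun ty1 ty2) = TyFun (subst x ty' ty1) (subst x ty' ty2)
subst x ty' (TyVar y)
  | x == y = ty'            -- the variable we are substituting for
  | otherwise = TyVar y     -- some other variable: keep it
subst x ty' (TyForall y ty)
  | x == y = TyForall y ty  -- x is shadowed by the forall: stop here
  | otherwise = TyForall y (subst x ty' ty)
```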

Let’s check some examples. Applying a substitution to the simple types should do nothing.

If we apply the substitution \([X := \mathsf{Bool} \rightarrow \mathsf{Int}]\) to the type variable `"X"`, it should be replaced. But if we apply it to `"Y"`, it should not be replaced.

The substitution should only happen on polymorphic types when `"X"` is not bound:

```
subst "X" (TyFun TyBool TyInt)
(TyForall "Y" (TyFun (TyVar "Y") (TyVar "X")))
=> TyForall "Y" (TyFun (TyVar "Y") (TyFun TyBool TyInt))
subst "X" (TyFun TyBool TyInt)
(TyForall "X" (TyFun (TyVar "Y") (TyVar "X")))
=> TyForall "X" (TyFun (TyVar "Y") (TyVar "X"))
```

Looks good, right?

Implementing the type checker for the added terms is now very easy:

```
typeOf ctx (TmTyAbs x t) = TyForall x <$> typeOf ctx t
typeOf ctx (TmTyApp t1 ty2) = do
  ty1 <- typeOf ctx t1
  case ty1 of
    TyForall x ty12 -> Right $ subst x ty2 ty12
    _ -> Left $ TypeApplicationNonPolymorphic t1 ty1
```

The rest of the type checker is exactly the same as the type checker for the STLC, which we’ve developed in the previous post:
```
typeOf ctx TmTrue = Right TyBool
typeOf ctx TmFalse = Right TyBool
typeOf ctx (TmInt n) = Right TyInt
typeOf ctx (TmVar x) =
  case Map.lookup x ctx of
    Nothing -> Left $ UnboundVariable x
    Just ty -> Right ty
typeOf ctx (TmAbs x ty t) =
  let ctx' = Map.insert x ty ctx
      ty' = typeOf ctx' t
  in TyFun ty <$> ty'
typeOf ctx (TmApp t1 t2) = do
  ty1 <- typeOf ctx t1
  ty2 <- typeOf ctx t2
  case ty1 of
    TyFun ty11 ty12 ->
      if ty2 == ty11
        then Right ty12
        else Left $ ApplicationWrongArgumentType t1 ty1 t2 ty2
    _ -> Left $ ApplicationNotFunction t1 ty1
typeOf ctx (TmAdd t1 t2) = do
  ty1 <- typeOf ctx t1
  when (ty1 /= TyInt) $
    Left $ AdditionNonInteger t1 ty1
  ty2 <- typeOf ctx t2
  when (ty2 /= TyInt) $
    Left $ AdditionNonInteger t2 ty2
  Right TyInt
typeOf ctx (TmIf t1 t2 t3) = do
  ty1 <- typeOf ctx t1
  when (ty1 /= TyBool) $
    Left $ NonBooleanCondition t1 ty1
  ty2 <- typeOf ctx t2
  ty3 <- typeOf ctx t3
  when (ty2 /= ty3) $
    Left $ ArmsOfDifferentType t2 ty2 t3 ty3
  Right ty2
```

And the other necessary definitions:

```
typeOf :: Context -> Term -> Either TypeError Type

data TypeError
  = UnboundVariable String
  | AdditionNonInteger Term Type
  | NonBooleanCondition Term Type
  | ArmsOfDifferentType Term Type Term Type
  | ApplicationWrongArgumentType Term Type Term Type
  | ApplicationNotFunction Term Type
  | TypeApplicationNonPolymorphic Term Type
  deriving (Show, Eq)

type Context = Map String Type
```

We can try some examples:

```
typeOf Map.empty tmId
=> Right (TyForall "X" (TyFun (TyVar "X") (TyVar "X")))
typeOf Map.empty tmId == Right tyId
=> True
typeOf Map.empty tmConst
=> Right
(TyForall "A" (TyForall "B"
(TyFun
(TyVar "A")
(TyFun (TyVar "B") (TyVar "A")))))
typeOf Map.empty tmConst == Right tyConst
=> True
typeOf Map.empty tmConstFalse5
=> Right TyBool
typeOf Map.empty tmConstFalse5 == Right tyConstFalse5
=> True
```

Looks pretty good, doesn’t it? But there’s a sneaky problem, and it has to do with our definition of `subst`.^{4}

Let’s say we want to write a function that flips the type arguments of \(\mathsf{const}\), so \(\Lambda A. \Lambda B. \lambda a : A. \lambda b : B. a\) should become \(\Lambda A. \Lambda B. \lambda a : B. \lambda b : A. a\). And we’re going to write it using the definition of \(\mathsf{const}\) we’ve already written. Writing this function is quite easy: \(\mathsf{constFlip} = \Lambda A. \Lambda B. \mathsf{const}\ B\ A\).

The type of \(\mathsf{const}\) is \(\forall A. \forall B. A \rightarrow B \rightarrow A\), so what should the type of \(\mathsf{constFlip}\) be? Well, that should be \(\forall A. \forall B. B \rightarrow A \rightarrow B\), right? Let’s ask our type checker:

```
typeOf Map.empty tmConstFlip
=> Right (TyForall "A" (TyForall "B" (TyFun (TyVar "A") (TyFun (TyVar "A") (TyVar "A")))))
```

Let’s make that a bit nicer to read: our type checker says that \(\mathsf{constFlip}\) has type \(\forall A. \forall B. A \rightarrow A \rightarrow A\).

What‽ That’s not right! We have lost all our \(B\)’s!

Indeed, we’ve made a mistake, namely in our definition of `subst`. Let’s look at the type checking process of \(\mathsf{constFlip}\). The first step is:

\[ \text{T-TyApp: } \frac{ \Gamma \vdash \mathsf{const} : \forall A. \forall B. A \rightarrow B \rightarrow A }{ \Gamma \vdash \mathsf{const}\ B : (\forall B. A \rightarrow B \rightarrow A)[A := B] } \]

Applying the substitution with our definition of `subst` gives: \(\forall B. B \rightarrow B \rightarrow B\). Note that the \(B\)’s that were first \(A\)’s are now *captured* by the \(\forall B\), which means that they now refer to something they shouldn’t refer to!

The next step:

\[ \text{T-TyApp: } \frac{ \Gamma \vdash \mathsf{const}\ B : \forall B. B \rightarrow B \rightarrow B }{ \Gamma \vdash \mathsf{const}\ B\ A : (B \rightarrow B \rightarrow B)[B := A] } \]

Applying this substitution gives: \(A \rightarrow A \rightarrow A\). In the following steps, the quantifiers are added back, so our end result is: \(\forall A. \forall B. A \rightarrow A \rightarrow A\).

The problem we run into here, is that we should rename some type variables. We can, for example, write \(\mathsf{const}\) as \(\Lambda C. \Lambda D. \lambda a : C. \lambda b : D. a\). The type is then \(\forall C. \forall D. C \rightarrow D \rightarrow C\). Now, if we type check \(\mathsf{constFlip}\), we get the right result:

```
tmConst' = TmTyAbs "C" (TmTyAbs "D" (TmAbs "a" (TyVar "C") (TmAbs "b" (TyVar "D") (TmVar "a"))))
tmConstFlip' = TmTyAbs "A" (TmTyAbs "B" (TmTyApp (TmTyApp tmConst' (TyVar "B")) (TyVar "A")))
```

```
typeOf Map.empty tmConstFlip'
=> Right (TyForall "A" (TyForall "B" (TyFun (TyVar "B") (TyFun (TyVar "A") (TyVar "B")))))
```

That is \(\forall A. \forall B. B \rightarrow A \rightarrow B\), exactly what we wanted.

To solve this problem, we should let our `subst` function rename some type variables to *fresh* (i.e., not already used) variables. This isn’t *very* hard to implement, but there is a nicer solution that is easier to reason about.

We will use *De Bruijn-indices*. These indices will replace our type variable names, for which we used strings. Instead, we’ll use integers. The integer \(n\) will refer to the \(n\)th binding \(\forall\), counting outwards from the variable and starting from zero^{5}. So the type for \(\mathsf{const}\), which is \(\forall A. \forall B. A \rightarrow B \rightarrow A\), will be written as \(\forall. \forall. 1 \rightarrow 0 \rightarrow 1\). (We’ll actually keep the bound names in the AST: \(\forall A. \forall B. 1 \rightarrow 0 \rightarrow 1\), but that is not necessary.)

To apply these changes to the Haskell AST, we won’t just change `TyVar String` into `TyVar Int`, but we’ll write:

```
data Type x
  = TyVar x
  -- ^ Type variable
  | TyForall String (Type x)
  -- ^ Polymorphic type
  | TyFun (Type x) (Type x)
  -- ^ Function type
  | TyBool
  -- ^ Boolean type
  | TyInt
  -- ^ Integer type
  deriving (Show, Eq)
```

This allows us to construct the ordinary types as well as the types with De Bruijn-indices. We choose to do this because it makes writing a parser significantly easier: the parser can return a `Type String`, and we can later turn this into a `Type Int`. The `deBruijn` function does just that:

```
deBruijn :: [String] -> Type String -> Either String (Type Int)
deBruijn ctx (TyVar x) = case elemIndex x ctx of
  Nothing -> Left x
  Just i -> Right (TyVar i)
deBruijn ctx (TyForall x ty) = TyForall x <$> deBruijn (x : ctx) ty
deBruijn ctx (TyFun ty1 ty2) = TyFun <$> deBruijn ctx ty1 <*> deBruijn ctx ty2
deBruijn ctx TyBool = Right TyBool
deBruijn ctx TyInt = Right TyInt
```

The `deBruijn` function turns an ordinary type into a type with De Bruijn-indices. It walks the abstract syntax tree recursively. When it comes across a \(\forall\), it adds the bound type variable to the context, which here is a list of `String`s. When it sees a variable, it tries to find it in the context; if it is found, it is replaced by its index in the context. If the variable is not found in the context, we return `Left x`, to indicate that the function failed because `x` was unbound.
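As a quick check, running `deBruijn` on the type of \(\mathsf{const}\) (a self-contained snippet, repeating the definitions above):

```haskell
import Data.List (elemIndex)

data Type x
  = TyVar x
  | TyForall String (Type x)
  | TyFun (Type x) (Type x)
  | TyBool
  | TyInt
  deriving (Show, Eq)

deBruijn :: [String] -> Type String -> Either String (Type Int)
deBruijn ctx (TyVar x) = case elemIndex x ctx of
  Nothing -> Left x
  Just i -> Right (TyVar i)
deBruijn ctx (TyForall x ty) = TyForall x <$> deBruijn (x : ctx) ty
deBruijn ctx (TyFun ty1 ty2) = TyFun <$> deBruijn ctx ty1 <*> deBruijn ctx ty2
deBruijn _ TyBool = Right TyBool
deBruijn _ TyInt = Right TyInt

-- forall A. forall B. A -> B -> A becomes forall. forall. 1 -> 0 -> 1
tyConstDB :: Either String (Type Int)
tyConstDB = deBruijn []
  (TyForall "A" (TyForall "B"
    (TyFun (TyVar "A") (TyFun (TyVar "B") (TyVar "A")))))
```

An unbound variable, e.g. `deBruijn [] (TyVar "X")`, yields `Left "X"`.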

We can also restore the names (because we haven’t removed the names that are bound by the \(\forall\)’s)^{6}:

```
restore :: Type Int -> Maybe (Type String)
restore = go []
  where
    go ctx (TyVar i) = TyVar <$> nth i ctx
    go ctx (TyForall x ty) = TyForall x <$> go (x : ctx) ty
    go ctx (TyFun ty1 ty2) = TyFun <$> go ctx ty1 <*> go ctx ty2
    go ctx TyBool = Just TyBool
    go ctx TyInt = Just TyInt

    -- Get the @n@th element of a list, or 'Nothing'
    -- if the length of the list is smaller than @n@.
    -- As far as I can see, there is no such function
    -- in base.
    nth :: Int -> [a] -> Maybe a
    nth _ [] = Nothing
    nth 0 (x:_) = Just x
    nth n (_:xs) = nth (n - 1) xs
```

Having changed `Type`, we also need to change `Term`, since terms can contain types. Doing this is very straightforward and quite boring, but you can view the new definition here:

`Term x`

```
data Term x
  = TmTyAbs String (Term x)
  -- ^ Type abstraction
  | TmTyApp (Term x) (Type x)
  -- ^ Type application
  | TmTrue
  -- ^ True value
  | TmFalse
  -- ^ False value
  | TmInt Integer
  -- ^ Integer value
  | TmVar String
  -- ^ Variable
  | TmAbs String (Type x) (Term x)
  -- ^ Lambda abstraction
  | TmApp (Term x) (Term x)
  -- ^ Application
  | TmAdd (Term x) (Term x)
  -- ^ Addition
  | TmIf (Term x) (Term x) (Term x)
  -- ^ If-then-else conditional
  deriving (Show, Eq)
```

The substitution function for types with De Bruijn-indices is as follows.

The simple types are again very simple. For function types, we just apply the substitution left and right. When we see a variable, we only substitute it if `x` equals `y`.

And here is the tricky bit. A \(\forall\) binds a type variable, so to make `x` still refer to the same \(\forall\) it was bound by, we need to increment it by one. But we also need to shift all free type variables in `ty'` by one, because they would otherwise be bound by a different \(\forall\). (This is the problem we ran into before, which we can now solve using De Bruijn-indices.)

Let’s look at the substitution \((\forall X. 0_X \rightarrow 2_Z)[1_Z := 0_Y]\). We’re working in a context \(Z, Y\) so the term \(Z\ Y\) should be written like \(1_Z\ 0_Y\). (I’ve added subscripts with the names to make the terms easier to read.) When we see the \(\forall X. \ldots\), another name is bound, so \(1\) no longer refers to \(Z\) but to \(Y\), and \(0\) no longer refers to \(Y\) but to \(X\). We need to shift \(1_Z\) by one, so it becomes \(2_Z\), and we need to shift \(0_Y\) by one, so it becomes \(1_Y\). The above substitution is then equal to \(\forall X. (0_X \rightarrow 2_Z)[2_Z := 1_Y]\). For this substitution, we don’t need to do any shifting, so the result is \(\forall X. 0_X \rightarrow 1_Y\).

It becomes more complicated when we want to substitute for a polymorphic type that binds some type variables. Let’s say we’re working in the context \(Y, B\) and we want to evaluate \((\forall A. A \rightarrow B)[B := \forall X. X \rightarrow Y]\). In De Bruijn-indices, this is: \((\forall A. 0_A \rightarrow 1_B)[0_B := \forall X. 0_X \rightarrow 2_Y]\). We see \(\forall A. \ldots\), so we need to shift the variables in the substitution up by one. Naïvely, we would just increment all type variables by one, so we get: \(\ldots[1 := \forall X. 1 \rightarrow 3]\). I’ve deliberately not written the subscripts, because they have changed. The \(0_X\) has become a \(1_B\), so the substitution has become a different one.

To solve this, we need to keep track of a *cutoff* (\(c\)). This value denotes the ‘depth’ of the type, that is, how many type variables are bound by \(\forall\)’s. The function `shift c i ty` will shift the free type variables above a cutoff `c` by `i`.

There are no free variables in the simple types, so there is nothing to shift. We shift function types by just shifting recursively. When we see a \(\forall\), we need to increase the cutoff, since another bound variable is introduced. And finally, when we come across a variable, we should only shift it when it is free (and thus not bound); that is the case when the variable is greater than or equal to the cutoff.
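Putting these descriptions together, a self-contained sketch of `shift` and `subst` for types with De Bruijn-indices (repeating the `Type` definition so it runs on its own):

```haskell
data Type x
  = TyVar x
  | TyForall String (Type x)
  | TyFun (Type x) (Type x)
  | TyBool
  | TyInt
  deriving (Show, Eq)

-- | Shift the free type variables above cutoff @c@ by @i@.
shift :: Int -> Int -> Type Int -> Type Int
shift _ _ TyBool = TyBool
shift _ _ TyInt = TyInt
shift c i (TyFun ty1 ty2) = TyFun (shift c i ty1) (shift c i ty2)
shift c i (TyForall x ty) = TyForall x (shift (c + 1) i ty)  -- one more binder
shift c i (TyVar y)
  | y >= c = TyVar (y + i)  -- free: shift it
  | otherwise = TyVar y     -- bound: leave it alone

-- | @subst x ty' ty@ computes ty[x := ty'].
subst :: Int -> Type Int -> Type Int -> Type Int
subst _ _ TyBool = TyBool
subst _ _ TyInt = TyInt
subst x ty' (TyFun ty1 ty2) = TyFun (subst x ty' ty1) (subst x ty' ty2)
subst x ty' (TyVar y)
  | x == y = ty'
  | otherwise = TyVar y
subst x ty' (TyForall y ty) =
  -- under a forall, the substituted variable and the free
  -- variables of ty' all move up by one
  TyForall y (subst (x + 1) (shift 0 1 ty') ty)
```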

Some examples:

```
shift 0 1 (TyForall "X"
(TyFun (TyVar 0 {- bound: X -})
(TyVar 1 {- free -})))
=> TyForall "X" (TyFun (TyVar 0) (TyVar 2))
shift 0 1 (TyForall "X" (TyForall "Y"
(TyFun (TyVar 0 {- bound: X -})
(TyVar 1 {- bound: Y -}))))
=> TyForall "X" (TyForall "Y" (TyFun (TyVar 0) (TyVar 1)))
```

And let’s try the substitutions we’ve seen above. \((\forall X. 0_X \rightarrow 2_Z)[1_Z := 0_Y]\):

```
subst 1 (TyVar 0) (TyForall "X" (TyFun (TyVar 0) (TyVar 2)))
=> TyForall "X" (TyFun (TyVar 0) (TyVar 1))
```

That is: \(\forall X. 0_X \rightarrow 1_Y\).

And \((\forall A. 0_A \rightarrow 1_B)[0_B := \forall X. 0_X \rightarrow 2_Y]\):

```
subst 0 (TyForall "X" (TyFun (TyVar 0) (TyVar 2))) (TyForall "A" (TyFun (TyVar 0) (TyVar 1)))
=> TyForall "A" (TyFun (TyVar 0) (TyForall "X" (TyFun (TyVar 0) (TyVar 3))))
```

That is: \(\forall A. 0_A \rightarrow (\forall X. 0_X \rightarrow 3_Y)\).

Now that we have written our definition of substitution, we can *almost* move on to implementing the type checker. But first, we need to turn the `Term String`s into `Term Int`s. Note that we only use De Bruijn-indices for types, so terms still use variables with a string name:

```
deBruijnTerm :: [String] -> Term String -> Either String (Term Int)
deBruijnTerm ctx TmTrue = Right TmTrue
deBruijnTerm ctx TmFalse = Right TmFalse
deBruijnTerm ctx (TmInt n) = Right (TmInt n)
deBruijnTerm ctx (TmVar x) = Right (TmVar x)
```

Type abstractions introduce a type variable, so we should add it to the context:

```
deBruijnTerm ctx (TmTyAbs x t) = TmTyAbs x <$> deBruijnTerm (x : ctx) t
deBruijnTerm ctx (TmTyApp t ty) = TmTyApp <$> deBruijnTerm ctx t <*> deBruijn ctx ty
deBruijnTerm ctx (TmAbs x ty t) = TmAbs x <$> deBruijn ctx ty <*> deBruijnTerm ctx t
deBruijnTerm ctx (TmApp t1 t2) = TmApp <$> deBruijnTerm ctx t1 <*> deBruijnTerm ctx t2
deBruijnTerm ctx (TmAdd t1 t2) = TmAdd <$> deBruijnTerm ctx t1 <*> deBruijnTerm ctx t2
deBruijnTerm ctx (TmIf t1 t2 t3) = TmIf <$> deBruijnTerm ctx t1 <*> deBruijnTerm ctx t2 <*> deBruijnTerm ctx t3
```

Some examples:

```
deBruijnTerm [] tmId
=> Right (TmTyAbs "X" (TmAbs "x" (TyVar 0) (TmVar "x")))
deBruijnTerm [] tmConst
=> Right (TmTyAbs "A" (TmTyAbs "B" (TmAbs "a" (TyVar 1) (TmAbs "b" (TyVar 0) (TmVar "a")))))
deBruijnTerm [] tmConstFlip
=> Right (TmTyAbs "A" (TmTyAbs "B" (TmTyApp (TmTyApp (TmTyAbs "A" (TmTyAbs "B" (TmAbs "a" (TyVar 1) (TmAbs "b" (TyVar 0) (TmVar "a"))))) (TyVar 0)) (TyVar 1))))
```

Now we can implement the type checker:

```
type Context = Map String (Type Int)
typeOf :: Context -> Term Int -> Either (TypeError Int) (Type Int)
```

`TypeError` is as before, with one constructor per kind of error:

- `UnboundVariable`: the variable was not bound by a lambda abstraction.
- `AdditionNonInteger`: an operand of an addition term was not an integer.
- `NonBooleanCondition`: the condition of an if-then-else term is not a boolean.
- `ArmsOfDifferentType`: the arms of an if-then-else term have different types.
- `ApplicationWrongArgumentType`: a function is applied to an argument of the wrong type.
- `ApplicationNotFunction`: a term of a non-function type is the left part of an application.
- `TypeApplicationNonPolymorphic`: a type is applied to a term with a non-polymorphic type.

Type checking a type abstraction is still pretty simple. But type checking a type application is a bit more involved. We don’t just apply the substitution, but do some shifting around it. With the pattern matching, we assert that `ty1` is of the form \(\forall X. \mathsf{ty12}\) for some type variable \(X\) and some type \(\mathsf{ty12}\). We need to shift \(\mathsf{ty2}\) up, because its context is one smaller than the context of \(\mathsf{ty12}\). And we need to shift \(\mathsf{ty12}\) one down after the substitution, because we have removed \(X\) from the context by pattern matching on `ty1`:

```
typeOf ctx (TmTyApp t1 ty2) = do
  ty1 <- typeOf ctx t1
  case ty1 of
    TyForall x ty12 -> Right $
      shift 0 (-1) (subst 0 (shift 0 1 ty2) ty12)
    _ -> Left $ TypeApplicationNonPolymorphic t1 ty1
```

Most of `typeOf` is still the same:

```
typeOf ctx TmTrue = Right TyBool
typeOf ctx TmFalse = Right TyBool
typeOf ctx (TmInt n) = Right TyInt
typeOf ctx (TmVar x) =
  case Map.lookup x ctx of
    Nothing -> Left $ UnboundVariable x
    Just ty -> Right ty
typeOf ctx (TmAbs x ty t) =
  let ctx' = Map.insert x ty ctx
      ty' = typeOf ctx' t
  in TyFun ty <$> ty'
typeOf ctx (TmAdd t1 t2) = do
  ty1 <- typeOf ctx t1
  when (ty1 /= TyInt) $
    Left $ AdditionNonInteger t1 ty1
  ty2 <- typeOf ctx t2
  when (ty2 /= TyInt) $
    Left $ AdditionNonInteger t2 ty2
  Right TyInt
```

But we also have to update how we type check normal applications and if-then-else terms. To check whether the argument type matches the parameter type of the left-hand side, we test whether they are equal. Similarly, for if-then-else terms we check whether the types of the arms are equal. But the `Eq` instance for `Type` is derived, so two polymorphic types `TyForall x ty1` and `TyForall y ty2` are equal if and only if `x == y` and `ty1 == ty2`. Yet \(\forall X. 0_X\) and \(\forall Y. 0_Y\) are clearly the same type. Since we are using De Bruijn-indices, which don’t have to be renamed, we can simply ignore the first parameter of `TyForall` when comparing types. We’ll use the `tyEq` function for testing whether two types are equal^{7}:

```
typeOf ctx (TmApp t1 t2) = do
  ty1 <- typeOf ctx t1
  ty2 <- typeOf ctx t2
  case ty1 of
    TyFun ty11 ty12 ->
      if tyEq ty2 ty11
        then Right ty12
        else Left $ ApplicationWrongArgumentType t1 ty1 t2 ty2
    _ -> Left $ ApplicationNotFunction t1 ty1
typeOf ctx (TmIf t1 t2 t3) = do
  ty1 <- typeOf ctx t1
  when (ty1 /= TyBool) $
    Left $ NonBooleanCondition t1 ty1
  ty2 <- typeOf ctx t2
  ty3 <- typeOf ctx t3
  when (not (tyEq ty2 ty3)) $
    Left $ ArmsOfDifferentType t2 ty2 t3 ty3
  Right ty2

tyEq :: Type Int -> Type Int -> Bool
tyEq (TyVar x) (TyVar y) = x == y
tyEq (TyForall _ ty1) (TyForall _ ty2) = tyEq ty1 ty2
tyEq (TyFun ty11 ty12) (TyFun ty21 ty22) = tyEq ty11 ty21 && tyEq ty12 ty22
tyEq TyBool TyBool = True
tyEq TyInt TyInt = True
tyEq _ _ = False
```
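To see the difference between `tyEq` and the derived `(==)`, here is a small self-contained check (repeating the relevant definitions from above):

```haskell
data Type x
  = TyVar x
  | TyForall String (Type x)
  | TyFun (Type x) (Type x)
  | TyBool
  | TyInt
  deriving (Show, Eq)

tyEq :: Type Int -> Type Int -> Bool
tyEq (TyVar x) (TyVar y) = x == y
tyEq (TyForall _ ty1) (TyForall _ ty2) = tyEq ty1 ty2
tyEq (TyFun ty11 ty12) (TyFun ty21 ty22) = tyEq ty11 ty21 && tyEq ty12 ty22
tyEq TyBool TyBool = True
tyEq TyInt TyInt = True
tyEq _ _ = False

tyA, tyB :: Type Int
tyA = TyForall "X" (TyFun (TyVar 0) (TyVar 0))  -- forall X. X -> X
tyB = TyForall "Y" (TyFun (TyVar 0) (TyVar 0))  -- forall Y. Y -> Y
-- tyEq tyA tyB is True, but the derived tyA == tyB is False,
-- because (==) also compares the remembered binder names.
```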

And with that, we should have a working type checker for the polymorphic lambda calculus! Let’s try it:

```
let Right tmConstDB = deBruijnTerm [] tmConst
in typeOf Map.empty tmConstDB
=> Right (TyForall "A" (TyForall "B" (TyFun (TyVar 1) (TyFun (TyVar 0) (TyVar 1)))))
```

We can also `restore`

the term:

```
let Right tmDB = deBruijnTerm [] tmConst
Right ty = typeOf Map.empty tmDB
in restore ty
=> Just (TyForall "A" (TyForall "B"
(TyFun (TyVar "A")
(TyFun (TyVar "B")
(TyVar "A")))))
```

\(\mathsf{const} : \forall A. \forall B. A \rightarrow B \rightarrow A\), just what we expected!

Now let’s try \(\mathsf{constFlip}\), which failed previously:

```
let Right tmDB = deBruijnTerm [] tmConstFlip
Right ty = typeOf Map.empty tmDB
in restore ty
=> Just (TyForall "A" (TyForall "B"
(TyFun (TyVar "B")
(TyFun (TyVar "A")
(TyVar "B")))))
```

\(\mathsf{constFlip} : \forall A. \forall B. B \rightarrow A \rightarrow B\), hurray!

And let’s also check that we can apply polymorphic functions, \((\lambda \mathsf{id} : (\forall X. X \rightarrow X). \mathsf{id}\ \mathsf{Int}\ 6)\ (\Lambda Y. \lambda y : Y. y)\):

```
let tm = TmApp
(TmAbs "id" (TyForall "X" (TyFun (TyVar "X") (TyVar "X")))
(TmApp (TmTyApp (TmVar "id") TyInt) (TmInt 6)))
(TmTyAbs "Y" (TmAbs "y" (TyVar "Y") (TmVar "y")))
Right tmDB = deBruijnTerm [] tm
Right ty = typeOf Map.empty tmDB
in ty
=> TyInt
```

Cool! (Writing this example, I wished I had written a parser…)

Note, however, that restoring does not always work:

```
let tm = TmTyAbs "B" (TmTyApp tmConst (TyVar "B"))
Right tmDB = deBruijnTerm [] tm
Right ty = typeOf Map.empty tmDB
in (ty, restore ty)
=> ( TyForall "B" (TyForall "B" (TyFun (TyVar 1) (TyFun (TyVar 0) (TyVar 1))))
, Just (TyForall "B" (TyForall "B" (TyFun (TyVar "B") (TyFun (TyVar "B") (TyVar "B")))))
)
```

The first type, using De Bruijn-indices, is correct: \(\forall B. \forall B. 1_B \rightarrow 0_B \rightarrow 1_B\). The second, restored type, however, is: \(\forall B. \forall B. B \rightarrow B \rightarrow B\). If we turn this back into a `Type Int`, we get \(\forall B. \forall B. 0_B \rightarrow 0_B \rightarrow 0_B\), which is not equal to the original. To solve this, you would need to do some renaming.

Some more examples:

```
everything :: Term String -> Type String
everything =
  fromJust . restore
    . fromRight oops . typeOf Map.empty
    . fromRight oops . deBruijnTerm []
  where
    oops = error "everything: expected Right but found Left"
```

\(\mathsf{id}\ \mathsf{Bool}\ \mathsf{True}\):
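Judging by the following `const` example, this one presumably reads:

```
everything (TmApp (TmTyApp tmId TyBool) TmTrue)
=> TyBool
```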

\(\mathsf{const}\ \mathsf{Int}\ (\mathsf{Int} \rightarrow \mathsf{Bool})\ (10 + 20)\ (\mathsf{const}\ \mathsf{Bool}\ \mathsf{Int}\ \mathsf{False})\):

```
everything (TmApp (TmApp (TmTyApp (TmTyApp tmConst TyInt) (TyFun TyInt TyBool)) (TmAdd (TmInt 10) (TmInt 20))) (TmApp (TmTyApp (TmTyApp tmConst TyBool) TyInt) TmFalse))
=> TyInt
```

\((\mathbf{if}\ \mathsf{False}\ \mathbf{then}\ (\Lambda A. \lambda a : A. a)\ \mathbf{else}\ (\Lambda B. \lambda b : B. b))\ \mathsf{Int}\ 5\)

```
everything (TmApp (TmTyApp (TmIf TmFalse (TmTyAbs "A" (TmAbs "a" (TyVar "A") (TmVar "a"))) (TmTyAbs "B" (TmAbs "b" (TyVar "B") (TmVar "b")))) TyInt) (TmInt 5))
=> TyInt
```

We have explored the polymorphic lambda calculus (or System F), which allows for more abstraction than the simply typed lambda calculus. We have met the trouble of substitution, and we have seen how we can solve it using De Bruijn-indices.

Most exercises for the STLC can also be applied to the polymorphic lambda calculus. Some other exercises:

- Add a pair type (tuple with two elements) with a constructor (you could use \((t, t')\) if you’re writing a parser; otherwise it doesn’t really matter for the abstract syntax tree) and `fst` and `snd` to project elements out of the pair. Write the typing rules and extend the type checker.
- Write a `restore` function that works on all types with De Bruijn-indices. You would need to keep track of the context, i.e., what type variables are used. And you need to be able to generate fresh type variables; you can try to add primes (`'`) to the first parameter of `TyForall` until the name is not bound in the context, for example.

In the next post, I will explore *type inference*, which will allow us to eliminate *all* types in the syntax of terms. No more \(\mathsf{const} = \Lambda A. \Lambda B. \lambda a : A. \lambda b : B. a\), but just \(\mathsf{const} = \lambda a. \lambda b. a\). And instead of \(\mathsf{const}\ \mathsf{Int}\ \mathsf{Bool}\ 19\ \mathsf{True}\), we will write just \(\mathsf{const}\ 19\ \mathsf{True}\).

If you want to read more about De Bruijn-indices, shifting and substitution, you might find the following resources useful:

- *CS 4110 – Programming Languages and Logics, Lecture #15: De Bruijn, Combinators, Encodings*
- *Types and Programming Languages*, Benjamin C. Pierce, Chapter 6.

These resources are about using De Bruijn-indices in the untyped lambda calculus, but this knowledge can also be applied to types. If you find shifting and substitution for De Bruijn-indices a bit hard to grasp (I did when I first learnt about them), I recommend you try to work out some examples by hand.

Making a mistake writing the identity function is perhaps a bit silly. But in more complex programs, such as a sorting function, this could very well happen.↩︎

With the `TypeApplications` extension.↩︎

You might notice that I don’t specify the types of these examples, i.e., I don’t write `tmId :: Term`. I haven’t forgotten them, but I purposefully omitted them. You’ll later see why.↩︎

There is also another problem: the definition of `(==)` for types isn’t correct. We will later fix that problem.↩︎

It is also common to start counting from one, but since we will use lists and their indices (which in Haskell’s `Prelude` start from zero), it is more convenient to start counting from zero.↩︎

The `restore` function does not work in general, but it should work on types generated by `deBruijn`. An example that doesn’t work: \(\forall X. \forall X. 0 \rightarrow 1\). Both \(0\) and \(1\) will be replaced by \(X\), and they will both refer to the inner \(X\), but the \(1\) should refer to the outer \(X\).↩︎

Testing whether a type is equal to `TyBool` or `TyInt` can still be done using `(==)`.↩︎

Our exploration of type systems starts quite simple, with the *simply typed lambda calculus* (STLC). This type system is the foundation of more complex type systems such as Haskell’s. The simply typed lambda calculus is based on the *(untyped) lambda calculus*. To understand the simply typed lambda calculus, you do *not* have to understand the untyped lambda calculus, but it could be beneficial, as I will refer to some of its properties. If you want to read about the untyped lambda calculus, the following articles might be helpful:

The syntax of a (programming) language describes how the language is written. The syntax of the simply typed lambda calculus consists of two things: *terms* and *types*.

One major difference between the untyped lambda calculus and the simply typed one is that the latter has a notion of *types*. The STLC contains two different sorts of types:

- *Function types*. We write the type of a function that accepts a parameter of type \(\tau\) and returns a value of type \(\tau'\) as \(\tau \rightarrow \tau'\). The identity function on booleans, for example, accepts a parameter of type \(\mathsf{Bool}\) (boolean), and returns a value of the same type. Its type is thus written as \(\mathsf{Bool} \rightarrow \mathsf{Bool}\). We also add that the function arrow is *right-associative*: \(\tau \rightarrow \tau' \rightarrow \tau''\) is the same as \(\tau \rightarrow (\tau' \rightarrow \tau'')\).
- *Simple types* (also called *constant types*). These types are what makes the STLC the *simply typed* lambda calculus. The simple types are the types of the constant values: `True` has type `Bool` (boolean), `8` has type `Int` (integer), et cetera.

We can choose the simple types however we like. Here, we’ll use booleans and integers, and add the if-then-else construct and addition. Adding operations like subtraction, multiplication, etc., is very straight-forward when you know how to handle addition, so I won’t explicitly explain how they work.

In more formal syntax, we write:

\[ \begin{align*} \tau ::=\ & \tau \rightarrow \tau' & \text{(function type)} \\ \mid\ & \mathsf{Bool} & \text{(boolean type)} \\ \mid\ & \mathsf{Int} & \text{(integer type)} \end{align*} \]

You can read the symbol \(::=\) as ‘*is defined by the following rules*’. The symbol \(\mid\) separates rules, and you can read it as ‘*or*’. The grammar description starts with a \(\tau\) (Greek letter tau, commonly used for denoting types); whenever you see a \(\tau\) or a \(\tau\) with any number of primes (which are used to make clear that these types may differ), it means that the syntax ‘expects’ another type there. The syntax of types is thus defined recursively. (This notation of grammars is called Backus-Naur form (BNF).)

Translating such a syntax definition to Haskell is quite easy. We define a type called `Type`, which contains the *abstract syntax tree* (AST) for types. The AST does not directly correspond to the actual syntax of the types; we don’t encode in the AST how whitespace should be handled, how comments are written, that the function arrow is right-associative, etc. That’s why it’s called an *abstract* syntax tree. The Haskell data type for the AST of types looks like this:

```
data Type
  = TyFun Type Type
  -- ^ Function type. The type @TyFun ty1 ty2@
  -- corresponds to @ty1 -> ty2@.
  | TyBool
  -- ^ Boolean type
  | TyInt
  -- ^ Integer type
  deriving (Show, Eq)
```

There are five sorts of terms in the STLC. These are based on the terms of the untyped lambda calculus, with some additions: the syntax for lambda abstractions is a bit different and values and computation constructs are added. The terms of the STLC consist of:

- *Variables*. These are names for values. We generally use strings of characters as variable names, but we could just as well use integers^{1}. What strings are valid variable names is not very important here, since we aren’t writing a parser. Variable names generally consist of alphanumeric characters, starting with an alphabetic character. We’ll use this as an informal rule.
- *(Lambda) abstractions*. Lambda abstractions (or in short: abstractions) are functions. They accept one^{2} parameter and return a value. We write them like in the untyped lambda calculus, but add the type of the parameter. The identity function on booleans, \(\mathsf{id}_\mathsf{Bool}\), for example, is written like \(\lambda x : \mathsf{Bool}. x\). (Or, in more Haskell-like syntax: `\x : Bool. x`.) This function accepts a boolean parameter named \(x\). In the return value (which is written after the period), we can use the variable name \(x\) to refer to the value that was *bound* (i.e., introduced) by the abstraction.
- *Applications*. This is just function application. We write it using juxtaposition: \(f\) applied to \(x\) is written as \(f\ x\). Applications only really make sense when the left value is an abstraction (or a term that evaluates to one).
- *(Constant) values*. These are values like integers (`3`), booleans (`True`), characters (`'f'`), et cetera. These values cannot be evaluated any further, and are pretty useless on their own, so we also need:
- *Computation constructs*. These are terms like conditionals (`if a then b else c`), binary operations (`x + y`), et cetera. The key aspect of these constructs is that they have some sense of computation: `if True then a else b` should evaluate to `a`, `5 + 6` should evaluate to `11`. We add these terms to the lambda calculus when adding simple types, because without them, we can’t ‘do anything’ with the values we added.

More formally, we describe the grammar of terms as follows:

\[ \begin{align*} t ::=\ & \mathsf{False} & \text{(false)} \\ \mid\ & \mathsf{True} & \text{(true)} \\ \mid\ & n & \text{(integer)} \\ \mid\ & x & \text{(variable)} \\ \mid\ & \lambda x : \tau.\ t & \text{(lambda abstraction)} \\ \mid\ & t\ t' & \text{(application)} \\ \mid\ & t + t' & \text{(addition)} \\ \mid\ & \mathbf{if}\ t\ \mathbf{then}\ t'\ \mathbf{else}\ t'' & \text{(if-then-else)} \end{align*} \]

We write \(x\) for variables, without explicitly defining what \(x\) can be. And for integers we write \(n\), also without explicitly specifying what valid values of \(n\) are. That’s because, as explained above, it doesn’t really matter what set of strings we allow as variable names for reasoning about programs. And it also doesn’t matter that much whether we use 32-bit, 64-bit, signed, unsigned, or unbounded integers.

Again, writing the Haskell definition is quite easy:

```
data Term
  = TmTrue
  -- ^ True value
  | TmFalse
  -- ^ False value
  | TmInt Integer
  -- ^ Integer value
  | TmVar String
  -- ^ Variable
  | TmAbs String Type Term
  -- ^ Lambda abstraction. @TmAbs x ty t@
  -- corresponds to @\x : ty. t@.
  | TmApp Term Term
  -- ^ Application
  | TmAdd Term Term
  -- ^ Addition
  | TmIf Term Term Term
  -- ^ If-then-else conditional
  deriving (Show, Eq)
```

Let’s look at some examples. The abstract syntax tree of the identity function on booleans, which we’ve seen before, is written like this in Haskell:
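It could be defined like this (the binding name `tmId` is taken from a later footnote):

```
tmId = TmAbs "x" TyBool (TmVar "x")
```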

Another example is the `not` function, which inverts its boolean argument: \(\lambda x : \mathsf{Bool}. \mathbf{if}\ x\ \mathbf{then}\ \mathsf{False}\ \mathbf{else}\ \mathsf{True}\). In Haskell:
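A sketch (the name `tmNot` matches the term shown in a type error later on):

```
tmNot = TmAbs "x" TyBool (TmIf (TmVar "x") TmFalse TmTrue)
```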

A function that adds its two arguments: \(\lambda x : \mathsf{Int}. \lambda y : \mathsf{Int}. x + y\). In Haskell:
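A sketch (the binding name `tmAdd` is an assumption):

```
tmAdd = TmAbs "x" TyInt (TmAbs "y" TyInt (TmAdd (TmVar "x") (TmVar "y")))
```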

And its type, \(\mathsf{Int} \rightarrow \mathsf{Int} \rightarrow \mathsf{Int}\), which is the same as \(\mathsf{Int} \rightarrow (\mathsf{Int} \rightarrow \mathsf{Int})\), is in Haskell:
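As a value of the `Type` data type (the binding name `tyAdd` is an assumption):

```
tyAdd = TyFun TyInt (TyFun TyInt TyInt)
```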

Now that we know the syntax of terms and types, we can move on to the relation between the two.

A type checker checks that all values are used correctly, i.e., that they have the right type. Type checking is useful, because it can help us spot mistakes in our program. Without a type checker, if we were to evaluate the expression \(1 + \mathsf{True}\), the program would crash; it does not make sense to add a boolean and an integer. A type checker can prevent the program from crashing, because it will reject faulty programs before they are interpreted or compiled.

To express that a term has a certain type, we use a *typing judgement*. The judgement will look something like this in mathematical notation: \(\Gamma \vdash t : \tau\). You can read it as: *the context \(\Gamma\) entails that \(t\) has type \(\tau\)*.

The *context* is a set of *bindings*: variables and their types. Contexts are generally written like this:

- \(\varnothing\) denotes the empty context;
- \(\Gamma, x : \tau\) denotes the context \(\Gamma\) extended with \(x\) and its type \(\tau\).

The context \(\varnothing, x : \mathsf{Bool}, f : \mathsf{Bool} \rightarrow \mathsf{Int}\) contains two bindings: the boolean \(x\) and the boolean-to-integer function \(f\).

We can combine typing judgements to form *typing rules*. We use *inference rules* to make statements about how to reason about terms and types. These inference rules consist of a number of premises, a horizontal bar, and the conclusion. An example is *modus ponens*:

\[ \frac{ \begin{array}{c} A \\ A \rightarrow B \end{array} }{ B } \]

You can read this as: *if we have \(A\) and \(A \rightarrow B\) *(if \(A\) then \(B\))*, then we conclude \(B\).*

We use this notation for typing rules. The most simple rules are the rules for boolean and integer values:

\[ \text{T-True: } \frac{}{\varnothing \vdash \mathsf{True} : \mathsf{Bool}} \]

T-True is the name of the rule. This rule has no premises, and states that we can conclude in an empty context that \(\mathsf{True}\) has type \(\mathsf{Bool}\).

Instead of writing \(\varnothing \vdash t : \tau\), the \(\varnothing\) is usually omitted: \(\vdash t : \tau\). So, the rule for \(\mathsf{False}\) is:

\[ \text{T-False: } \frac{}{\vdash \mathsf{False} : \mathsf{Bool}} \]

And the rule for integers:

\[ \text{T-Int: } \frac{}{\vdash n : \mathsf{Int}} \]

Now let’s write some more complex rules. To find the type of variables, we look them up in the context. To denote that \(x\) has type \(\tau\) in \(\Gamma\), we write: \(x : \tau \in \Gamma\). So, the rule for variables is:

\[ \text{T-Var: } \frac{ x : \tau \in \Gamma }{ \Gamma \vdash x : \tau } \]

The rule for lambda abstractions looks like this:

\[ \text{T-Abs: } \frac{ \Gamma, x : \tau \vdash t : \tau' }{ \Gamma \vdash \lambda x : \tau. t : \tau \rightarrow \tau' } \]

To type check abstractions, we add \(x : \tau\) to the context (because \(t\) might use \(x\)) and check the type of \(t\). We then know that the abstraction takes an argument of type \(\tau\) and has a return type of the type of \(t\).

For applications, we need to have a term with a function type on the left side, that accepts an argument with the type of the right side:

\[ \text{T-App: } \frac{ \begin{array}{c} \Gamma \vdash t : \tau \rightarrow \tau' \\ \Gamma \vdash t' : \tau \end{array} }{ \Gamma \vdash t\ t' : \tau' } \]

For an addition, we require that the two operands are both integers. The type of the addition is then also an integer:

\[ \text{T-Add: } \frac{ \begin{array}{c} \Gamma \vdash t : \mathsf{Int} \\ \Gamma \vdash t' : \mathsf{Int} \end{array} }{ \Gamma \vdash t + t' : \mathsf{Int} } \]

When typing if-then-else terms, we expect the condition to be a boolean, and the two arms to have the same type:

\[ \text{T-If: } \frac{ \begin{array}{c} \Gamma \vdash t_1 : \mathsf{Bool} \\ \Gamma \vdash t_2 : \tau \\ \Gamma \vdash t_3 : \tau \end{array} }{ \Gamma \vdash \mathbf{if}\ t_1\ \mathbf{then}\ t_2\ \mathbf{else}\ t_3 : \tau } \]

These are all the typing rules we will be working with.

To determine the type of a more complex term, we can combine the typing rules. The type of \(\lambda n : \mathsf{Int}. 3 + n\), for example, is determined as follows:

\[ \text{T-Abs: } \dfrac{ \text{T-Add: } \dfrac{ \text{T-Int: } \dfrac{}{ \vdash 3 : \mathsf{Int} } \quad \text{T-Var: } \dfrac{ n : \mathsf{Int} \in \varnothing, n : \mathsf{Int} }{ \varnothing, n : \mathsf{Int} \vdash n : \mathsf{Int} } }{ \varnothing, n : \mathsf{Int} \vdash 3 + n : \mathsf{Int} } }{ \vdash \lambda n : \mathsf{Int}. 3 + n : \mathsf{Int} \rightarrow \mathsf{Int} } \]

Using these rules, we can implement a type checker in Haskell.

For the context, we’ll use a `Map`:
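A sketch of the definition, assuming the alias name `Context` (which also appears in a later exercise):

```
type Context = Map String Type
```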

The function `typeOf` will determine the type of a term in a certain context, or will throw a type error. Its type is:
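Given the `Right`/`Left` values used below, the signature is presumably:

```
typeOf :: Context -> Term -> Either TypeError Type
```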

A `TypeError` can signal one of the following problems:

- The variable was not bound by a lambda abstraction.
- An operand of an addition term was not an integer.
- The condition of an if-then-else term is not a boolean.
- The arms of an if-then-else term have different types.
- A function is applied to an argument of the wrong type.
- A term of a non-function type is the left part of an application.
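A sketch of the corresponding data type; the constructor names are taken from the error values that appear later, except `ApplicationNotFunction`, which is an assumption:

```
data TypeError
  = UnboundVariable String
  | AdditionNonInteger Term Type
  | NonBooleanCondition Term Type
  | ArmsOfDifferentType Term Type Term Type
  | ApplicationWrongArgumentType Term Type Term Type
  | ApplicationNotFunction Term Type
  deriving (Show, Eq)
```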

The rules for boolean and integer values are really easy to implement:

```
typeOf ctx TmTrue = Right TyBool
typeOf ctx TmFalse = Right TyBool
typeOf ctx (TmInt n) = Right TyInt
```

We can implement T-Var with a simple lookup:

```
typeOf ctx (TmVar x) =
  case Map.lookup x ctx of
    Nothing -> Left $ UnboundVariable x
    Just ty -> Right ty
```

For lambda abstractions, we add `x` with the type `ty` to the context, determine the type of `t` in the new context, and return the function type from `ty` to `ty'`.

(Note that `TyFun ty <$> ty'` is the same as an explicit `case`-`of` on `ty'`. But using the fact that `Either` is a `Functor` allows us to use `fmap`, or the infix version `(<$>)`, which is more succinct than an explicit `case`-`of`. In the `case`-`of` expression, `ty'` has type `Type`, but above, `ty' :: Either TypeError Type`.)
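A sketch of this clause, with the explicit `case`-`of` version shown in a comment:

```
typeOf ctx (TmAbs x ty t) = TyFun ty <$> ty'
  where
    ty' = typeOf (Map.insert x ty ctx) t

-- which is the same as the explicit version:
--
-- typeOf ctx (TmAbs x ty t) =
--   case typeOf (Map.insert x ty ctx) t of
--     Left err  -> Left err
--     Right ty' -> Right (TyFun ty ty')
```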

For type checking applications, we use the fact that `Either` is a `Monad`, and use the `do`-notation. We first determine the types of `t1` and `t2`. The type of `t1` should be a function type, and the type of `t2` should be the same as the type of `t1`’s argument, `ty11`. If `t1` doesn’t have a function type, then we can’t apply it.
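A sketch of the whole clause (the `ApplicationNotFunction` constructor name is an assumption):

```
typeOf ctx (TmApp t1 t2) = do
  ty1 <- typeOf ctx t1
  ty2 <- typeOf ctx t2
  case ty1 of
    TyFun ty11 ty12
      | ty2 == ty11 -> Right ty12
      | otherwise -> Left $ ApplicationWrongArgumentType t1 ty1 t2 ty2
    _ -> Left $ ApplicationNotFunction t1 ty1
```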

For addition, if the two operands are integers, then the result is too:

```
typeOf ctx (TmAdd t1 t2) = do
  ty1 <- typeOf ctx t1
  when (ty1 /= TyInt) $
    Left $ AdditionNonInteger t1 ty1
  ty2 <- typeOf ctx t2
  when (ty2 /= TyInt) $
    Left $ AdditionNonInteger t2 ty2
  Right TyInt
```

We can also prevent duplication:

```
typeOf ctx (TmAdd t1 t2) = do
    check t1
    check t2
    Right TyInt
  where
    check t = do
      ty <- typeOf ctx t
      when (ty /= TyInt) $
        Left $ AdditionNonInteger t ty
```

When type checking if-then-else terms, we want the condition to be a boolean, and the arms to be of the same type:

```
typeOf ctx (TmIf t1 t2 t3) = do
  ty1 <- typeOf ctx t1
  when (ty1 /= TyBool) $
    Left $ NonBooleanCondition t1 ty1
  ty2 <- typeOf ctx t2
  ty3 <- typeOf ctx t3
  when (ty2 /= ty3) $
    Left $ ArmsOfDifferentType t2 ty2 t3 ty3
  Right ty2
```

And that’s it! We’ve now implemented our type checker. Let’s try it!

Let’s start with some terms we have already defined. The type of the identity function on booleans, \(\mathsf{id}_\mathsf{Bool}\), is:
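For example, assuming the identity function on booleans is bound to `tmId`:

```
typeOf Map.empty tmId
=> Right (TyFun TyBool TyBool)
```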

We see that type checking has been successful, since we’ve got a `Right` value back. And the type is indeed what we were expecting: \(\mathsf{Bool} \rightarrow \mathsf{Bool}\).

Let’s also define the identity function on boolean-to-integer functions:
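A sketch (the binding name `tmId'` is an assumption):

```
tmId' = TmAbs "f" (TyFun TyBool TyInt) (TmVar "f")
```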

We expect its type to be \((\mathsf{Bool} \rightarrow \mathsf{Int}) \rightarrow (\mathsf{Bool} \rightarrow \mathsf{Int})\), and indeed:

The type of \(\mathsf{not}\) should be \(\mathsf{Bool} \rightarrow \mathsf{Bool}\):
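Assuming \(\mathsf{not}\) is bound to `tmNot`:

```
typeOf Map.empty tmNot
=> Right (TyFun TyBool TyBool)
```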

And the type of \(\mathsf{add}\) should be \(\mathsf{Int} \rightarrow \mathsf{Int} \rightarrow \mathsf{Int}\):
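Assuming the addition function is bound to `tmAdd`:

```
typeOf Map.empty tmAdd
=> Right (TyFun TyInt (TyFun TyInt TyInt))
```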

So far, so good. Let’s also take a look at terms that should be rejected.

We expect our type checker to reject the term \(\mathsf{True} + 1\), since we can’t add booleans and integers:
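Something like:

```
typeOf Map.empty (TmAdd TmTrue (TmInt 1))
=> Left (AdditionNonInteger TmTrue TyBool)
```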

Hurray, one mistake prevented!

We can’t refer to variables that are not bound:
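For example:

```
typeOf Map.empty (TmVar "x")
=> Left (UnboundVariable "x")
```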

But if the variable is defined in the context, that should be no problem:
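Something like:

```
typeOf (Map.fromList [("x", TyBool)]) (TmVar "x")
=> Right TyBool
```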

We should also reject \(\mathsf{not}\ 14\), because \(\mathsf{not}\) expects a boolean parameter:

```
typeOf Map.empty (TmApp tmNot (TmInt 14))
=> Left
     (ApplicationWrongArgumentType
        (TmAbs "x" TyBool (TmIf (TmVar "x") TmFalse TmTrue))
        (TyFun TyBool TyBool)
        (TmInt 14)
        TyInt)
```

It would be nice to display these errors in a more user-friendly way, but that’s left as an exercise to the reader!

Let’s try applying to a non-function value:
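Presumably something like this (the `ApplicationNotFunction` constructor name is an assumption):

```
typeOf Map.empty (TmApp TmTrue (TmInt 0))
=> Left (ApplicationNotFunction TmTrue TyBool)
```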

And if-then-else terms with a non-boolean condition:

```
typeOf Map.empty (TmIf (TmAbs "x" TyBool (TmInt 0)) (TmInt 3) (TmInt 4))
=> Left
     (NonBooleanCondition
        (TmAbs "x" TyBool (TmInt 0))
        (TyFun TyBool TyInt))
```

Or with non-matching arms:

```
typeOf Map.empty (TmIf TmTrue (TmInt 10) TmFalse)
=> Left (ArmsOfDifferentType (TmInt 10) TyInt TmFalse TyBool)
```

We’ve written a type checker for the simply typed lambda calculus!

If you want to play a bit more with this type checker, you might want to try one of the following exercises, which I highly recommend:

- Add other binary operators on integers, such as subtraction, multiplication, etc. Extend the abstract syntax, write the typing rules for these terms and extend the type checker to follow these rules.
- Add support for another simple type, such as characters or strings. Extend the abstract syntax, write the typing rules and extend the type checker. Also add some computation constructs that interact with these values: for characters, for example, you might want to add functions like Haskell’s `ord :: Char -> Int` and `chr :: Int -> Char`.
- Write an evaluator for the STLC.
- Write a parser for STLC terms. You might want to take a look at Parsec, or find an introduction to *parser combinators*.
- Rewrite the type checker using monad transformers. The type checker can be written in the `ReaderT Context (Except TypeError)` monad. *Learn You a Haskell for Great Good* has an introduction to monad transformers.

In the next post, I’ll describe how we can add more support for abstraction to the simply typed lambda calculus, and we’ll take a look at the *polymorphic lambda calculus*.

Using integers as variables is actually a well-known technique. It is useful for writing an evaluator of the lambda calculus, because it is a lot easier to define substitution that way. If you want to know more, read about *De Bruijn-indices*.↩︎

Instead of having support for functions with multiple parameters, we choose to write functions that return other functions. A function that adds its two integer parameters, for example, is written like \(\lambda a : \mathsf{Int}. \lambda b : \mathsf{Int}. a + b\). This is called Currying.↩︎

In this series, I will explain various type systems and their implementations in Haskell. The aim for this series is to be an approachable way to learn about type systems. I will try to cover both the theoretical aspects, such as formal (mathematical) notation, and the practical aspects, consisting of a Haskell implementation. After reading this series, you should have an understanding of the basics of type systems.

You can find a list of the series’ posts here.

- *Types and Programming Languages*, Benjamin C. Pierce.
- *Software Foundations, volume 2: Programming Language Foundations*, Benjamin C. Pierce et al.
- *Programming Language Foundations in Agda*, Philip Wadler, Wen Kokke and Jeremy Siek.

In this blog post, I explain how to use Emacs in a local Nix environment for all modes, without needing mode-specific configuration.

Recently, I was trying to get `haskell-mode` in Emacs to work inside a (local) Nix environment, à la `nix-shell`. I use Nix to manage my Haskell dependencies^{1}. Those dependencies aren’t installed globally (or rather, aren’t in my `$PATH`), and I don’t want them to be installed by Cabal, so building the project and running GHCI should happen inside a Nix environment.

With `haskell-mode`, you can run the function `haskell-process-load-file` to run GHCI inside Emacs. If you set `haskell-process-type` to `'cabal-repl` (or `'cabal-new-repl`), GHCI will use Cabal to manage dependencies, but it will run the `cabal` in your `$PATH`, or the program specified in `haskell-process-path-cabal`.

So I created a script with the following contents:
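A sketch of such a wrapper script (the exact contents and flags are an assumption):

```
#!/bin/sh
exec nix-shell --run "cabal $*"
```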

…, and set `haskell-process-path-cabal` to the path to the script.

This works quite well, but isn’t very elegant.

Then I discovered the `haskell-mode` option `haskell-process-wrapper-function`, which ‘wraps or transforms Haskell process commands (…)’, according to the documentation. The documentation even contains an example value which makes the process commands run inside a `nix-shell` (simplified a bit here):

```
(setq haskell-process-wrapper-function
      (lambda (argv)
        (list "nix-shell"
              "-I"
              "."
              "--command"
              (mapconcat 'identity argv " "))))
```

This works well, and is a lot more elegant than the script above. But it only works for `haskell-mode`: when I want to run Python with packages managed by Nix inside Emacs, I’ll have to search `python-mode` for an option similar to `haskell-process-wrapper-function`. And when I want to use yet another language, …

So I tried to find a general solution, and found lorri. lorri integrates direnv with Nix. With lorri, you don’t need `nix-shell` anymore, since direnv automatically changes your path, and lorri automatically builds your shell environment. (See the lorri demonstration.)

The direnv home page explains how to install a direnv hook into your shell, but you can also add direnv to Emacs: the emacs-direnv package adds direnv support. It’s as simple as adding the following to your Emacs configuration (if you use `use-package`):
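A sketch, following the emacs-direnv README:

```
(use-package direnv
  :config
  (direnv-mode))
```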

If you now visit a file in a directory where lorri is initialised, your environment variables will be updated, and you can run all sorts of processes (`haskell-mode`’s GHCI, `python-mode`’s REPL, `eshell`, etc.) inside a Nix environment.

direnv changes the environment variables (such as `$PATH`) that were generated by lorri in a local directory. Because your `$PATH` is changed when you visit a file in that directory, there is no need for any mode-specific Emacs configuration.

Read how to do this in the Nixpkgs manual.↩︎