Inductive types in Lean 4

Lean allows us to define a wide range of inductive data structures. In its simplest form, an inductive type is declared as follows:

inductive Foo where
  | constructor₁ : ... → Foo
  | constructor₂ : ... → Foo
  ...
  | constructorₙ : ... → Foo

where the ellipses stand for a potentially empty sequence of types A₁ → A₂ → ..., giving the types of the arguments of each constructor. Although the type of each argument may contain recursive references to the inductive data type being defined (i.e. Foo), only a limited form is allowed: Aₖ := D₁ → ... → Foo, where D₁, ... contain no recursive references. This restriction is called strict positivity, and it is a sufficient condition for the data types to be well-founded, meaning that no loops/contradictions can be derived from their recursion schemes.

Examples of strictly positive data types are

inductive ListNat where
  | nil : ListNat
  | cons : Nat → ListNat → ListNat

inductive Foo where
  | bar : Foo
  | baz : (Bool → Foo) → Foo

while an inductive such as the following is not allowed

inductive Bad where
  | bad : (Bad → Bool) → Bad

Dependency on other values

Inductive data types can have a dependency on values in two different ways: as a parameter and as an index. We'll first look at parameters.

To add parameters to an inductive declaration, we simply add bindings of the form (a : A), as such

inductive Foo (a₁ : A₁) ... where
  | constructor₁ : ... → Foo a₁ ...
  | constructor₂ : ... → Foo a₁ ...
  ...
  | constructorₙ : ... → Foo a₁ ...

and we are allowed to use these variables anywhere inside the inductive declaration, with one restriction: all recursive references to the inductive must be applied to all variables a₁ ... in order.

As an example, we may define the parametrized list type

inductive List (A : Type) where
  | nil : List A
  | cons : (a : A) → (as : List A) → List A

Now, a data type with a dependency on an index is quite a bit more general. To add indices, we give the inductive an explicit type, as such:

inductive Foo : (a₁ : A₁) ... → Sort u where
  | constructor₁ : ... → Foo ...
  | constructor₂ : ... → Foo ...
  ...
  | constructorₙ : ... → Foo ...

and although each recursive reference must be applied to as many expressions as there are indices, these may be arbitrary expressions, as long as each has the same type as its corresponding index. We call such inductive definitions inductive families.

With indices it is possible to define, say, an inductive family over n : Nat that is only populated when n is even:

inductive Even : (n : Nat) → Type where
  | mk : (n : Nat) → Even (2*n)

or even an inductive family of vectors of known size

inductive Vector (A : Type) : (n : Nat) → Type where
  | nil : Vector A 0
  | cons : {n : Nat} → A → Vector A n → Vector A (n+1)
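
As a quick sanity check (a sketch using the two definitions above), both families can be inhabited by picking constructor arguments that make the indices come out right:

example : Even 4 := Even.mk 2  -- accepted, since 2*2 reduces to 4
example : Vector Nat 2 := Vector.cons 1 (Vector.cons 2 Vector.nil)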

Now, you might be asking: if indices are more general than parameters, why bother having parameters at all? Well, as an example, if you try to define polymorphic lists with an index

inductive List : (A : Type) → Type where
  | nil : {A : Type} → List A
  | cons : {A : Type} → A → List A → List A

you get the following error: (kernel) universe level of type_of(arg #1) of 'List.nil' is too big for the corresponding inductive datatype. This says that the universe level of List.nil (the same happens for List.cons), namely 2, is bigger than the universe level of List, namely 1. Constructors cannot live in a higher universe than their corresponding inductive, in order to prevent certain paradoxes that would lead to an inconsistent theory.

We see, however, that when A : Type is a parameter, there is no explicit binding {A : Type} inside the types of List.nil or List.cons, and thus List A lives in the same universe as A. Parameters are therefore useful for defining data types that depend parametrically on other types or type functions. It must be stressed, though, that parameters need not be types, nor can every dependency on a type be expressed parametrically. As an example, the following inductive is completely valid, having a value as a parameter:

inductive NatOrString (b : Bool) where
  | NatOrString : (if b then Nat else String) → NatOrString b
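
As a sketch of how this is used: the if in the constructor's type reduces as soon as b is known, so each instantiation accepts exactly one of the two payload types:

example : NatOrString true  := NatOrString.NatOrString (3 : Nat)
example : NatOrString false := NatOrString.NatOrString "three"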

And here is an example of a dependency on a type that can only be expressed with an index, and which, moreover, does not force the inductive into a higher universe:

inductive IsNat : (A : Type) → Type where
  | inhabited : IsNat Nat
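
A sketch of why such a family is useful: pattern matching on a value of type IsNat A refines A to Nat, so the helper below (a hypothetical name, just for illustration) can treat a value of the abstract type A as a natural number:

example : IsNat Nat := IsNat.inhabited

def toNat {A : Type} (h : IsNat A) (a : A) : Nat :=
  match h, a with
  | IsNat.inhabited, n => n  -- in this branch, A has been refined to Nat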

Now, how do we decide whether a value should be a parameter or an index? In general, for simplicity's sake, we opt for parameters whenever possible. The thing to keep in mind is that a parametrized dependency can always be removed by instantiating the parameter with a concrete value, and that instance of the data type can then be defined as a standalone datatype. So, for example, List Nat can be given its own definition, as in a previous example:

inductive ListNat where
  | nil : ListNat
  | cons : Nat → ListNat → ListNat

Note, however, that we cannot, in general, do this with indices. For instance, Vector Nat 4 does not stand on its own; you would also need to define Vector Nat 3, Vector Nat 2, and so on. That is, instances of a parametrized inductive type do not refer to other instances (they stand on their own), while, in general, inductive families may relate different instances with one another. In fact, in the type theory, parametrized inductive types are simply ordinary inductive types defined under a particular context that binds free variables to types, i.e., inside lambda expressions; the actual implementation simply mimics such definitions, though at the top level. Inductive families, on the other hand, are a genuine extension of the concept of inductive types.

Recursion schemes

For each inductive definition, Lean defines an elimination rule, or recursion scheme. Such a recursion scheme is the computational content of the induction principle corresponding to the inductive data type. On simple enum types, it amounts to a pattern match on each variant. For example, if we define

inductive Three where
  | A : Three
  | B : Three
  | C : Three

then Lean will automatically define a constant Three.rec with type

Three.rec :
  {motive : Three → Sort u} →
  motive Three.A →
  motive Three.B →
  motive Three.C →
  (t : Three) →
  motive t

(i.e., the induction principle on Three), which you can check by writing #print Three.rec in your source code. It has the following computational rules:

Three.rec {P} caseA caseB caseC Three.A = caseA
Three.rec {P} caseA caseB caseC Three.B = caseB
Three.rec {P} caseA caseB caseC Three.C = caseC

that is, it reduces exactly like a match:

Three.rec {P} caseA caseB caseC x = match x with
| Three.A => caseA
| Three.B => caseB
| Three.C => caseC
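
We can check these rules directly in Lean; the rfl below succeeds precisely because the recursor reduces as described (the motive is supplied explicitly, since there is nothing from which to infer it):

example : Three.rec (motive := fun _ => Nat) 0 1 2 Three.B = 1 := rfl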

In fact, Lean does not have general pattern matching in the type theory; it mimics it using recursion schemes. To give you an example, consider the following function

def foo (x : Three) : Nat :=
  match x with
  | Three.A => 0
  | _ => 1

If you print foo (with #print foo), Lean will display the body of foo as if match were a primitive element of the language. However, this is just the work of Lean's pretty printer. In actuality, there is another, automatically generated function, foo.match_1, and the pretty printer prints a match in its place. For each match inside the body of a function f, Lean defines, in order, the functions f.match_1, f.match_2, etc. Now, printing foo.match_1, you will see that it is defined in terms of yet another function, Three.casesOn, which is finally defined in terms of Three.rec.

The reason for generated functions such as foo.match_1 is that the Lean compiler compiles them to native pattern matching instead of unfolding them into applications of Three.rec. Recursion schemes are in general less efficient than actual pattern matching, since they always require a separate case for each constructor of the data type. So, for example, in an enum with a hundred variants, you need to give the recursor a hundred cases, even if they are all equal. With pattern matching, however, you are free to use the _ pattern, which matches all of the remaining cases. On the other hand, it is hard to formalize general pattern matching in the type theory, so we are stuck with recursion schemes for computations at the type level, which are run by the typechecker's reducer.

Having said that, let's explore more complicated inductive types. First, consider the natural numbers, which are defined recursively (i.e., the definition contains references to the type being defined):

inductive Nat where
  | zero : Nat
  | succ : (n : Nat) → Nat

Its recursion scheme, Nat.rec, has as its type the induction principle for natural numbers:

Nat.rec :
  {motive : Nat → Sort u} →
  motive Nat.zero →
  ((n : Nat) → motive n → motive (Nat.succ n)) →
  (t : Nat) →
  motive t

Such an induction principle is justified by its computational rules

Nat.rec {P} caseZero caseSucc Nat.zero = caseZero
Nat.rec {P} caseZero caseSucc (Nat.succ n) =
  caseSucc n (Nat.rec {P} caseZero caseSucc n)

Notice how caseSucc requires not only the argument of the constructor Nat.succ, namely n, but also, because n is a recursive argument, an induction step of type P n. In general, the case associated with a constructor Foo.cnstr will take as arguments all the arguments of Foo.cnstr plus induction steps for all of its recursive arguments.
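
Again, we can watch these rules fire (a sketch): the successor case below unfolds three times, doubling its input by iterated addition, and rfl checks the computation:

example :
    Nat.rec (motive := fun _ => Nat) 0 (fun _ ih => ih + 2) 3 = 6 := rfl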

To see how induction principles interact with parameters and indices, we can consider the more complicated Vector type

inductive Vector (A : Type) : (n : Nat) → Type where
  | nil : Vector A 0
  | cons : {n : Nat} → (a : A) → (as : Vector A n) → Vector A (n+1)

Its induction principle is:

Vector.rec :
  {A : Type} →
  {motive : (n : Nat) → Vector A n → Sort u} →
  motive 0 Vector.nil →
  ({n : Nat} → (a : A) → (as : Vector A n) → motive n as → motive (n + 1) (Vector.cons a as)) →
  {n : Nat} →
  (t : Vector A n) →
  motive n t

We can see from this how parameters and indices are treated differently. All parameters are taken before the motive, and they fix the types of everything that comes after them: the motive, the cases for each constructor (also called minor premises), and the value to be matched (also called the major premise). Indices, on the other hand, are only taken after all parameters, motives, and cases; but the motive is a function of the indices as well.

Let's take a look at the reduction rules for Vector.rec:

Vector.rec {A} {P} caseNil caseCons {m} (Vector.nil {A}) = caseNil     -- where m must be equal to 0
Vector.rec {A} {P} caseNil caseCons {m} (Vector.cons {A} {n} a as) =   -- where m must be equal to n+1
  caseCons {n} a as (Vector.rec {A} {P} caseNil caseCons {n} as)

Notice that I chose to write {m} instead of {0} or {n+1} to reinforce the fact that this is an independent argument to Vector.rec; it is the typechecker that restricts m so that m == 0 or m == n+1. This also shows that m is not used at all on the left-hand side of the rules. This is actually a general fact about indices: the indices of the major premise are not relevant to the computational rules.
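
A short sketch confirming this: summing the elements of a concrete vector with Vector.rec, leaving the index argument to be inferred:

def v : Vector Nat 2 := Vector.cons 1 (Vector.cons 2 Vector.nil)

example :
    Vector.rec (motive := fun _ _ => Nat)
      0                        -- case for nil
      (fun a _ ih => a + ih)   -- case for cons
      v = 3 := rfl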

Mutually recursive inductive types

Lean also supports mutually recursive inductive types, declared inside a mutual block. The restrictions are that mutually recursive references, too, may appear only in strictly positive positions, and that all inductives inside the mutual block must have exactly the same parameters. As an example, consider the types Even and Odd, defined inside a mutual block:

mutual
  inductive Even : Nat → Type where
    | even_zero : Even 0
    | even_succ : {n : Nat} → (o : Odd n) → Even (n + 1)

  inductive Odd : Nat → Type where
    | odd_succ : {n : Nat} → (e : Even n) → Odd (n + 1)
end
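
For instance, a proof of Even 2 alternates between the two types' constructors:

example : Even 2 := Even.even_succ (Odd.odd_succ Even.even_zero)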

Looking at the induction principles of Even and Odd

Even.rec :
  {motive_1 : (a : Nat) → Even a → Type} →
  {motive_2 : (a : Nat) → Odd a → Type} →
  motive_1 0 Even.even_zero →
  ({n : Nat} → (o : Odd  n) → motive_2 n o → motive_1 (n + 1) (Even.even_succ {n} o)) →
  ({n : Nat} → (e : Even n) → motive_1 n e → motive_2 (n + 1) (Odd.odd_succ  {n} e)) →
  {a : Nat} →
  (t : Even a) →
  motive_1 a t

Odd.rec :
  {motive_1 : (a : Nat) → Even a → Type} →
  {motive_2 : (a : Nat) → Odd a → Type} →
  motive_1 0 Even.even_zero →
  ({n : Nat} → (o : Odd  n) → motive_2 n o → motive_1 (n + 1) (Even.even_succ {n} o)) →
  ({n : Nat} → (e : Even n) → motive_1 n e → motive_2 (n + 1) (Odd.odd_succ  {n} e)) →
  {a : Nat} →
  (t : Odd a) →
  motive_2 a t

we notice a couple of interesting things: they now both take two motives instead of one, and they both take all of each other's minor premises. Their reduction rules are also intertwined:

Even.rec {P} {Q} caseEvenZ caseEvenS caseOddS {m} Even.even_zero = caseEvenZ  -- where m = 0
Even.rec {P} {Q} caseEvenZ caseEvenS caseOddS {m} (Even.even_succ {n} o) =    -- where m = n+1
  caseEvenS {n} o (Odd.rec  {P} {Q} caseEvenZ caseEvenS caseOddS {n} o)

Odd.rec  {P} {Q} caseEvenZ caseEvenS caseOddS {m} (Odd.odd_succ   {n} e) =    -- where m = n+1
  caseOddS  {n} e (Even.rec {P} {Q} caseEvenZ caseEvenS caseOddS {n} e)

Though it might seem that mutually defined inductives are a proper extension of the theory of inductive types, they are merely a syntactic convenience (that is, a conservative extension): for each block of mutually defined inductives, there is a single inductive that subsumes the whole block. To give a concrete example, we can construct the following inductive:

inductive EvenOdd : Bool → Nat → Type where
  | even_zero : EvenOdd true 0
  | even_succ : {n : Nat} → (o : EvenOdd false n) → EvenOdd true  (n + 1)
  | odd_succ  : {n : Nat} → (e : EvenOdd true  n) → EvenOdd false (n + 1)

It is easy to check that EvenOdd true n and EvenOdd false n are naturally isomorphic to Even n and Odd n, respectively. In fact, manually replacing EvenOdd b by if b then Even else Odd in the constructor signatures gets us

  | even_zero : Even 0
  | even_succ : {n : Nat} → (o : Odd  n) → Even (n + 1)
  | odd_succ  : {n : Nat} → (e : Even n) → Odd  (n + 1)

exactly like in the mutual definition. The induction principle is given by

EvenOdd.rec :
  {motive : (b : Bool) → (n : Nat) → EvenOdd b n → Type} →
  motive true 0 EvenOdd.even_zero →
  ({n : Nat} (o : EvenOdd false n) → motive false n o → motive true  (n + 1) (EvenOdd.even_succ {n} o)) →
  ({n : Nat} (e : EvenOdd true  n) → motive true  n e → motive false (n + 1) (EvenOdd.odd_succ  {n} e)) →
  {b : Bool} →
  {n : Nat} →
  (t : EvenOdd b n) →
  motive b n t

with recursion rules:

EvenOdd.rec {motive} caseEvenZ caseEvenS caseOddS {b} {m} EvenOdd.even_zero = caseEvenZ  -- where m = 0,   b = true
EvenOdd.rec {motive} caseEvenZ caseEvenS caseOddS {b} {m} (EvenOdd.even_succ {n} o) =    -- where m = n+1, b = true
  caseEvenS {n} o (EvenOdd.rec {motive} caseEvenZ caseEvenS caseOddS {false} {n} o)
EvenOdd.rec {motive} caseEvenZ caseEvenS caseOddS {b} {m} (EvenOdd.odd_succ  {n} e) =    -- where m = n+1, b = false
  caseOddS  {n} e (EvenOdd.rec {motive} caseEvenZ caseEvenS caseOddS {true}  {n} e)

Now, although it takes a single motive, we can interpret motive as a pair, where motive true = P, a motive for Even, and motive false = Q, a motive for Odd. It is left as an exercise to check that this induction principle for EvenOdd is derivable from the induction principles of Even and Odd, and vice versa, given the following relations

EvenOdd b = if b then Even else Odd
motive b  = if b then P else Q

to go from Even and Odd to EvenOdd, and

Even = EvenOdd true
Odd  = EvenOdd false

P = motive true 
Q = motive false 

to go from EvenOdd to Even and Odd.
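
One direction of this correspondence can be spelled out as a pair of mutually recursive functions (a sketch, with names of our choosing; each recursive call is on a structurally smaller proof, so Lean's termination checker accepts the recursion):

mutual
  def evenToEvenOdd {n : Nat} : Even n → EvenOdd true n
    | Even.even_zero   => EvenOdd.even_zero
    | Even.even_succ o => EvenOdd.even_succ (oddToEvenOdd o)

  def oddToEvenOdd {n : Nat} : Odd n → EvenOdd false n
    | Odd.odd_succ e => EvenOdd.odd_succ (evenToEvenOdd e)
end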

It is easy to see that this extends to larger mutual blocks, provided we define an appropriate enum type to play the role of Bool.

Nested inductive types

Lean also allows nested inductive types. These are inductives defined in terms of themselves, where one or more recursive references occur inside another inductive type. As an example, we can define the type of n-ary trees

inductive Tree (A : Type) where
  | branch : (a : A) → (trees : List (Tree A)) → Tree A
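
For instance, a small tree with one root and two leaves, using the ordinary list literal notation:

def t : Tree Nat := Tree.branch 1 [Tree.branch 2 [], Tree.branch 3 []]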

Nested inductive types are also a conservative extension, for they can be rewritten as mutual inductive types:

mutual
  inductive Tree (A : Type) where
    | branch : (a : A) → (trees : TreeList A) → Tree A

  inductive TreeList (A : Type) where
    | nil : TreeList A
    | cons : (t : Tree A) → (ts : TreeList A) → TreeList A
end

where we can see clearly that TreeList A and List (Tree A) are isomorphic. It is quite inconvenient, however, to work with such a definition, since you would have to replicate every list function you might eventually want to use on TreeList A. Lean therefore allows nested inductives natively, replicating the induction principles and recursion rules as if the types were defined in a mutual block. Thus, for instance, we can expect Tree's induction principle to have two motives:

Tree.rec :
  {A : Type} →
  {motive_1 : Tree A → Sort u} →
  {motive_2 : List (Tree A) → Sort u} →
  ((a : A) → (trees : List (Tree A)) → motive_2 trees → motive_1 (Tree.branch a trees)) →
  motive_2 [] →
  ((head : Tree A) → (tail : List (Tree A)) → motive_1 head → motive_2 tail → motive_2 (head :: tail)) →
  (t : Tree A) →
  motive_1 t

But this is an incomplete induction scheme, since there must be a new induction principle on List as well, corresponding to TreeList. Indeed, a new recursion scheme is also defined, by the name Tree.rec_1:

Tree.rec_1 :
  {A : Type} →
  {motive_1 : Tree A → Sort u} →
  {motive_2 : List (Tree A) → Sort u} →
  ((a : A) → (trees : List (Tree A)) → motive_2 trees → motive_1 (Tree.branch a trees)) →
  motive_2 [] →
  ((head : Tree A) → (tail : List (Tree A)) → motive_1 head → motive_2 tail → motive_2 (head :: tail)) →
  (t : List (Tree A)) →
  motive_2 t

The computational rules corresponding to the induction principles can also be derived, as in the case of mutual definitions:

Tree.rec {A} {P} {Q} caseBranch caseNil caseCons (Tree.branch {A} a trees) =
  caseBranch a trees (Tree.rec_1 {A} {P} {Q} caseBranch caseNil caseCons trees)

Tree.rec_1 {A} {P} {Q} caseBranch caseNil caseCons (List.nil {Tree A}) =
  caseNil
Tree.rec_1 {A} {P} {Q} caseBranch caseNil caseCons (List.cons {Tree A} tree trees) =
  caseCons tree trees
    (Tree.rec   {A} {P} {Q} caseBranch caseNil caseCons tree)
    (Tree.rec_1 {A} {P} {Q} caseBranch caseNil caseCons trees)

The restrictions on nested inductive types are exactly those of mutual definitions, once we convert the nesting into a mutual block. In particular, this means that not only must the compound inductive type (e.g. List (Tree A)) appear in strictly positive positions in the inductive definition, but the recursive type must also be strictly positive inside the outer inductive type (e.g. Tree A must appear strictly positively inside List). So, for instance, this is not allowed

inductive Bad where
| bad : (List Bad → Nat) → Bad

for List Bad appears in a negative position, but also

inductive Endo (A : Type) where
| endo : (A → A) → Endo A

inductive Bad where
| bad : Endo Bad → Bad

is not allowed, for, in this case, Bad is not strictly positive inside Endo Bad.

Finally, though it might be overwhelming to absorb every nuance of inductive types from such a short explanation, all of these ideas (parameters, indices, recursion rules, etc.) become more intuitive the more you work with inductives. For anyone trying to really get a grasp of these concepts, I suggest going through the examples, writing your own inductives, and trying to derive, by hand, their induction principles and recursion rules.