Lists can be defined as:
data List a = Nil | Cons a (List a)
If we translate this into our type algebra, we get:
List(a) = 1 + a * List(a)
But we can now substitute List(a) into the right-hand side of its own equation: one substitution gives List(a) = 1 + a * (1 + a * List(a)) = 1 + a + a*a * List(a), and repeating the process indefinitely yields:
List(a) = 1 + a + a*a + a*a*a + a*a*a*a + ...
This makes sense if we see a list as a type that can contain no values at all, as in []; or exactly one value of type a, as in [x]; or two values of type a, as in [x,y]; and so on. The theoretical definition of List that we would get from this expansion is:
-- Not working Haskell code!
data List a = Nil
            | One a
            | Two a a
            | Three a a a
            | ...
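We can check this interpretation by enumerating lists directly. The following is a minimal sketch (the helper name listsOfLength is a hypothetical choice, not part of the text above): it builds every list of a given length over a finite type, so there are |a|^n lists of length n, matching the a*a*...*a terms in the expansion.
-- Sketch: enumerate all lists of exactly n elements drawn from xs.
-- listsOfLength is an illustrative name, not a standard function.
listsOfLength :: Int -> [a] -> [[a]]
listsOfLength 0 _  = [[]]
listsOfLength n xs = [ x : rest | x <- xs, rest <- listsOfLength (n - 1) xs ]
For a = Bool, which has two inhabitants, there are 2^3 = 8 lists of length 3: length (listsOfLength 3 [False, True]) == 8.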
We can do the same thing with binary trees, for example. If we define them as:
data Tree a = Empty | Node a (Tree a) (Tree a)
We get the expression:
Tree(a) = 1 + a * Tree(a) * Tree(a)
And if we make the same substitutions again and again, collecting equal terms, we obtain:
Tree(a) = 1 + a + 2*(a*a) + 5*(a*a*a) + 14*(a*a*a*a) + ...
The coefficients we get here are the Catalan numbers, and the n-th Catalan number is precisely the number of possible binary trees with n nodes.
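We can confirm these coefficients by enumeration as well. The sketch below assumes the Tree definition above is in scope and uses a hypothetical helper, treesWithNodes, that builds every shape of Tree () with exactly n nodes by splitting the remaining nodes between the two subtrees.
-- Sketch: all trees with exactly n nodes, reusing the Tree definition above.
-- treesWithNodes is an illustrative name, not from the original text.
treesWithNodes :: Int -> [Tree ()]
treesWithNodes 0 = [Empty]
treesWithNodes n =
  [ Node () l r
  | k <- [0 .. n - 1]            -- k nodes go to the left subtree
  , l <- treesWithNodes k
  , r <- treesWithNodes (n - 1 - k)
  ]
Evaluating map (length . treesWithNodes) [0..4] gives [1,1,2,5,14], the first Catalan numbers.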