In another post, I dissected how to program recursion using anonymous functions only. This ultimately leads to the famous $Y$-combinator. Actually, there are multiple versions of it, all achieving the recursion trick by some form of self-application. The running example in that blog post was the inevitable factorial function.

If that was not weird enough, here is a different and, arguably, even stranger way to program factorial. Namely:

without self-application, without any of the fixed-point combinators like $Y$ (and of course without actual recursion, and without other cheap tricks like using while or whatever loop constructs the Lisp/Scheme dialect may offer).

In the last post, the solution for factorial was straightforwardly generalizable to any recursive function, and that generalization was the $Y$-combinator (and its variations). This time, we won’t be able to generalize the construction, at least not for all recursive functions.

Intuitively, that’s easy to understand. The $Y$-combinator covers all recursive definitions, including those that result in non-terminating procedures. Recursion is the higher-order, functional counterpart of general while-loops from imperative languages, and those can lead to non-termination as well. There also exist looping constructs that are guaranteed to terminate; those are conventionally called for-loops. Never mind that some concrete programming languages like Java use the keyword for for general loops, including those we (and others) call while-loops, to distinguish them from their “weaker” siblings, the for-loops.

If we come up with a scheme that captures something akin to for-loops, it means we cannot expect to capture non-terminating functions. But the factorial will be fine.

The $Y$-combinator (and its variations) are somewhat convoluted expressions using only (anonymous) functions applied to themselves. They can be given in the untyped $λ$-calculus, and one can program $Y$ in Scheme, though one has to be careful to take into account that Scheme is a language using eager evaluation, an aspect not typically considered when dealing with a $λ$-calculus (though of course one could focus on an eager $λ$-calculus, if one is interested in that).

The factorial function has other aspects, which are not actually part of the purest of $λ$-calculi. Pure here not in the sense of purely functional and without side effects, but pure in the sense of “functions only!”. Remember: the first two chapters of SICP cover “building abstractions with procedures” and “building abstractions with data”, and the treatment of procedures comes before that of (compound) data.

Of course, without data to calculate on, there are not many things procedures can work with and compute on. One can of course define weird and powerful stuff working purely with procedures, like the $Y$-combinator, but that’s hardly a way to start a book teaching functional programming (actually, SICP hardly even mentions the $Y$-combinator; it just crops up in a footnote in some exercises). Besides, applying $Y$ to some function, say $F$, still does not compute some real stuff: $Y F$ defines a function that can behave recursively, but to run it, we need to give some further input. So in order to get going, the procedures need some non-procedural data to operate on.

Like all programming languages, Scheme supports a number of built-in primitive data types. The most readily available ones are perhaps numbers, and that’s why many initial examples in SICP are “numerical” in nature (lists, for instance, come later). Maybe one walks away with the (initial) impression that Scheme’s prime application area is number crunching. That’s actually more or less the opposite of what Lisp (and thus Scheme) was originally intended for. It was originally touted as a language for “symbolic” computations, working on symbols, and as the language of choice for artificial intelligence. If we take the trinity of early, major, and surviving programming languages, Fortran, COBOL, and Lisp, then Fortran was for number crunching, COBOL for “business” and managing data records, and Lisp, as said, for symbolic computations and AI.

Ok, Scheme supports numbers, but the pure $λ$-calculus does not.

In the lecture, we saw that higher-order procedures are powerful. As discussed, one can express recursion with them. Also, in connection with data structures, it was shown how pairs can be expressed by higher-order procedures. Pairs, like numbers, are built into Scheme, but SICP and the lecture showed how to program the constructor and selectors for pairs (cons, car, and cdr) using procedures. Not that there would be a need for that, as they are built in, but if wished, it can be done, as in the sketch below.
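For reference, here is one classic version of that exercise (a sketch; the my- prefix is our own choice, to avoid shadowing the built-ins):

  ;; A pair is represented as a procedure that hands its two components
  ;; to whatever selector procedure it is applied to.
  (define my-cons (lambda (x y) (lambda (m) (m x y))))
  (define my-car  (lambda (p) (p (lambda (x y) x))))
  (define my-cdr  (lambda (p) (p (lambda (x y) y))))

  (my-car (my-cons 1 2))   ; => 1
  (my-cdr (my-cons 1 2))   ; => 2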

Ok then, what about (natural) numbers? At this point one may have guessed what the answer will be: yes, natural numbers can be encoded in the $λ$-calculus. At a general level, that should not be too surprising. If one has heard that the $λ$-calculus is Turing-complete, i.e., expressive enough to compute everything a Turing machine can compute (and thus everything that a full-blown programming language can compute), it’s implicit that somehow it must be possible (if maybe tricky).

Encoding numbers by procedures may seem like a pointless thing to do, and anyway not needed (in Scheme), as they are built in. That’s a valid standpoint, but one should also not forget that built-in is not the same as God-given. Numbers and other data structures may be built in, but they won’t be directly processable by the ultimate hardware or platform. There will be an encoding there as well, and it’s likewise complex. To work on standard hardware, someone (maybe not us) ultimately needs to encode numbers by $0$’s and $1$’s and to encode operations on numbers, like addition, by some manipulations of those bits. The encoding of course goes on behind the scenes (by the compiler or interpreter), and we operate on the numbers with standard notation and operations which behave the way we are used to. But someone has to take care of the encoding to maintain the convenient abstraction for us.

Encoding the numbers (and pairs and lists) as procedures inside Scheme is of course not advisable from a standpoint of efficiency. Typical hardware can manipulate standard binary encodings of numbers fast, and some basic procedures like addition may be directly supported by hardware. Other procedures (like factorial) of course not; ultimately they too need to be represented in binary form to be run on a machine (or the interpreter that runs them is, encoded in machine code). Standard hardware may be suited to calculate basic stuff on numbers, but not to juggle higher-order functions, at least not directly. Interestingly, there have been attempts to build tailor-made computer hardware for Lisp; those were known as Lisp machines (and they went the way of the dodo…)

Encoding numbers as procedures may seriously degrade performance, but it’s an interesting exercise, and it will allow us to program factorial without recursion! The encoding is actually well-known under the name Church numerals. The natural numbers are only one example of an infinite data structure; lists and trees are others that could be treated similarly. Actually, finite data structures can be handled too, for instance Booleans. All of those data structures could be encoded analogously, if one worked out the principles behind the Church numerals more clearly than we will do. The technique is also called Church encoding.

But we mostly stick to natural numbers as data structure. We approach the task from two angles: what’s the interface for numbers, and how are they represented. While we mostly stick to numbers concerning the encoding, later we will generalize beyond numbers as far as interfaces are concerned.

The constructor interface of the encoding

The interface angle should be familiar from the lecture material about data abstraction. The goal is to have an encoding that, seen from the outside, works like the usual one (only quite a bit slower, perhaps). Then what belongs to the interface for numbers? As we did in other examples, for instance when encoding pairs, the first question to answer is: how can I get natural numbers? What are the constructors of the data structure?

The two constructors of natural numbers are $0$ and $\operatorname{succ}$, maybe called zero and succ in Scheme.

Note that we do not mean here the (built-in) number $0$; rather, there are two procedures in the interface, and we call them, not unreasonably, $0$ and $\mathit{succ}$ to remind us what they are intended for. We have not yet settled how to encode them properly, but once we have, we can obviously represent all natural numbers using that constructor interface. For instance (in Scheme) we could write

  (define seven (succ (succ (succ (succ (succ (succ (succ zero))))))))

Fair enough, that looks like a plausible way of writing down the number we pronounce “seven” in English. We mentioned that the encoding will degrade the performance. Besides that, the encoding is also not very space efficient: the above construct is a value, it’s a notation for 7. We are used to so-called positional number systems, so much so that we tend not to think about it at all. For instance, $137$ is a fairly compact encoding for a number that would be fairly long if we were forced to write it with a tower of succ’s… The encoding that the built-in $137$ typically uses in hardware, the binary representation with $0$’s and $1$’s, is also short, thanks to the positional representation. One could see the succ-notation as a unary number system (like tally marks, a prehistoric “number system”). To say it again: $0$ and $\operatorname{succ}$, resp. zero and succ, are not the solution of how to encode numbers as procedures; they are the interface, the symbolic names of the two central procedures whose internal representation we still have to figure out.

If $n$ is a function, what does that function do?

Being able to write down numbers using those constructor names is all well and good, but in isolation it is of little use. We need to be able to do something with them, like computing with them.

So, what do we want to do with them? The most obvious thing to do is to calculate with numbers, like adding two numbers, calculating the square of numbers etc. Sure, that’s what one often does with numbers.

But perhaps we should switch perspective. Asking how to calculate with numbers as inputs and outputs, combining them in different ways, is too “conventional”, too closed-minded. Remember, we are doing functional programming, where the boundary between data and procedures is blurred. In particular, in a pure (non-applied) $λ$-calculus, everything is a procedure, and we actually intend to encode numbers as functions. So it’s not only that procedures are first-class citizens in the sense that procedures can be handled in the same way as ordinary data, like serving as input or output to other procedures in the same way as numbers. It’s even more radical: everything is in fact a function, including those things we intend to use as natural numbers.

We learned to think of numbers not as procedures, but as data. Silly us, but that was before we learned about higher-order functions… If we are about to encode numbers as procedures, we need to make up our mind (and open up our mind) about what kind of procedure, for instance, the number (succ (succ (succ zero))) is. If a number is represented not as a bit string or a number in decimal notation or some passive piece of conventional data, but as a function, the number can be used to do something when applied to an argument. So the switch in perspective is

Don’t ask what you want to do with numbers, ask what you want numbers to do for you!

Okeeeh… As everything in a purely functional setting is a function, the question is: what can a “procedural” natural number reasonably do when presented with a procedural argument? Church’s idea was roughly:

The number $n$ should be encoded as the function that, when given a function as input, iterates it $n$ times!

The computational essence of a number is its potential to iterate: it represents a for-loop with $n$ rounds (resp. its functional equivalent, since for-loops are a structure typical of imperative languages).

One may swallow that switch of perspective, but still may want to complain a bit. It may sound interesting to see numbers as some kind of “loop”. But interesting as that looks, aren’t we missing an important fact? There is a good reason that standard presentations or encodings of numbers treat them as data. After all, calculating with them, doing additions, multiplications etc., is certainly at least as important as having numbers as iterators, or not?

But on second thought, there is no reason not to have both. After all, having numbers be procedures does not mean they cannot be treated as data as well. Procedures are first-class citizens, right: data is functions and functions are data.

So let’s just do some good old calculations, like addition. But what’s the addition of two numbers other than iterating the successor function: $7 + 4$ would be $\mathit{succ}^7 (4)$ (or the other way around), where $\mathit{succ}^7$ means applying $\mathit{succ}$ 7 times to (the encoding of) $4$. Similarly, after having defined addition, multiplication can be explained as iterated addition. This way, it seems, standard calculations can be achieved.
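In the iterator reading, both can be written down directly; as a sketch (the names $\mathit{plus}$ and $\mathit{times}$ are our own):

\[\mathit{plus} = \lambda n. \lambda m.\ n\ \mathit{succ}\ m \qquad\qquad \mathit{times} = \lambda n. \lambda m.\ n\ (\mathit{plus}\ m)\ 0\]

Addition iterates the successor $n$ times, starting from $m$; multiplication iterates the addition of $m$, $n$ times, starting from $0$.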

Since the sketched idea of the construction corresponds to for-loops and iteration, and not to while-loops resp. general recursion, it’s obvious that there are calculations on numbers that cannot be done. Impossible are in particular functions that do not terminate (on some or all inputs). Where exactly the boundary lies, i.e., what can be represented with Church numerals and iteration only (and without doing general recursion), is a question we don’t explore here. But the boundary is not that all terminating functions can be represented iteratively and only the non-terminating ones are out of reach. There is another post on the Ackermann function and primitive recursive functions that discusses some aspects of that question.

The encoding itself

But then, what is the encoding? Actually, it’s fairly easy. What we intend is the following behavior:

\[n f = f^n\]

$n$ applied to a function $f$ corresponds to the $n$-time application of $f$. Another way of writing the same is

\[n f = \lambda x. \underbrace{f( f (f \ldots (f}_n x)))\]

The task then is to program $0$ and $\mathit{succ}$ accordingly (or zero and succ in Scheme). Here it is. Let’s start with $0$. It’s supposed to take a function and iterate it $0$ times, so not at all. So $0\ f$ has no effect at all, and thus corresponds to the identity function $\lambda z. z$.

With $n$ as input, the successor returns $n+1$. Both numbers are encoded as the iterators we are in the process of programming. In other words, with an $n$-iterator as input, $\mathit{succ}$ returns a function that iterates $n+1$ times. That leads to the following definitions:

\[\begin{array}[t]{rcl} 0 & = & \lambda s. \lambda z. z \\ \mathit{succ} & = & \lambda n. \lambda s. \lambda z. s\ (n\ s\ z) \end{array}\]

And here’s the same in Scheme

  (define zero (lambda (s) (lambda (z) z)))
  (define succ
    (lambda (n)
      (lambda (s)
        (lambda (z)
          (s ((n s) z))))))
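To convince oneself that these behave as intended, here is a small test sketch (our own additions; the helper church->int, which iterates the built-in successor on 0, is for inspection only and not part of the encoding proper):

  ;; Turn a Church numeral back into a built-in number, for testing.
  (define church->int
    (lambda (n) ((n (lambda (x) (+ x 1))) 0)))

  ;; Addition as iterated successor, multiplication as iterated addition,
  ;; exactly as sketched earlier.
  (define plus  (lambda (n) (lambda (m) ((n succ) m))))
  (define times (lambda (n) (lambda (m) ((n (plus m)) zero))))

  (define three (succ (succ (succ zero))))
  (church->int ((times three) three))   ; => 9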

The factorial

Actually, it’s straightforward. Let’s start by repeating a conventional, recursive definition of the factorial procedure

(define fac
  (lambda (n)
    (if (= n 0)
        1
        (* n (fac (- n 1))))))

Calculating the factorial on some input $n$ means going through the body of the function multiple times, building up a large multiplication $n \times (n-1) \times \ldots \times 2 \times 1 \times 1$ until hitting the base case, and then calculating the result $n!$. So let’s first give a name to the function that calculates one round of going through the body of the factorial; let’s call it $F$.

(define F
  (lambda (f)
    (lambda (n)
      (if (= n 0)
          1
          (* n (f (- n 1)))))))

The body covers the base case, but for the recursion case, it uses its functional argument $f$ as a continuation. If we pile up $n$ of those $F$’s, we can go through the body $n$ times. However, if we need to go through the body more often than the number of $F$’s piled up, we fall out at the bottom, and so we need to plug in some continuation for $f$ for that case.

Of course we don’t intend to fall out at the bottom; we will arrange for a pile of $F$’s high enough for the concrete input. In other words, it should not matter what we choose for that bottom continuation. Let’s just raise an error in Scheme, and call the function f0:

(define f0 (lambda (x) (error "Ouch! you should not see this...")))

This represents raising an error, and exceptions don’t fit well with pure forms of the $λ$-calculus. For instance, raising an error does not return a value, so errors don’t evaluate to something; they rather derail an ongoing evaluation. In the $λ$-calculus, non-termination is the only way to program a situation that doesn’t evaluate to something (and that would need recursion or something like the $Y$-combinator). At any rate, the traditional symbol for an “undefined” expression is $\bot$ (“bot” or “bottom”), and that’s what we use as well. If concerned that one would need recursion to achieve non-termination (representing being undefined): note that we simply need some function to plug in at the bottom, and we don’t care which one, it can be any. So one can interpret $\bot$ also as being undefined in the sense of being unspecified or arbitrary. And, as shown, in Scheme we raise an error.

As a side remark: note that we have defined f0 as a procedure, not as (error "some message"). The latter would not work. Remember that Scheme is an eager language, using applicative order, and without the wrapping lambda, the definition of f0 would immediately raise the exception and derail the planned iteration before it even starts.
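Before mechanizing the piling with Church numerals, one can stack the $F$’s by hand to see the idea (a small demo of our own):

  ;; Three F's let us go through the body three times: enough for 2!,
  ;; since the bottom f0 is never reached.
  ((F (F (F f0))) 2)    ; => 2

  ;; With one F too few, the iteration falls out at the bottom:
  ;; ((F (F f0)) 2)     ; => error: "Ouch! you should not see this..."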

Now, to calculate $n!$, we need to iterate $F$ at least $n+1$ times, like this:

\[\underbrace{F (F (F \ldots (F}_{n+1}\ \bot) \ldots ))\]

Since the Church numerals capture exactly this form of iteration, we can simply write:

\[\mathit{fac} = \lambda n.\ (((\mathit{succ}\ n)\ F)\ \bot)\ n\]

Note: we would use the very same $F$ if we used (a proper variant of) the famous $Y$-combinator instead of Church numerals, and (Y F) would give the factorial. That’s described in a different post.
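As a hedged sketch of how this looks in Scheme: the $F$ above computes on built-in numbers, so we pass the loop counter twice, once as a Church numeral (driving the iteration) and once as a built-in number (fed to the piled-up $F$’s). The name church-fac and the two-argument packaging are our own:

  ;; c is the Church numeral for n, n the built-in number. (succ c) piles
  ;; n+1 copies of F on top of the bottom continuation f0, as in the formula.
  (define church-fac
    (lambda (c n)
      ((((succ c) F) f0) n)))

  ;; The Church numeral is written out by hand -- no recursion anywhere.
  (church-fac (succ (succ (succ zero))) 3)   ; => 6

A fully Church-encoded factorial would instead write the body of $F$ against the numbers’ interface (the zero-test, predecessor, and multiplication), which is what the next sections sketch.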

What’s missing for $n!$? Or: The rest of the interface

We will not spell out the rest of the solution in full detail and only sketch what would need to be done. The above iteration of $F$ works fine, but we have not properly written up the body of $F$.

To say it differently: we have encoded or implemented $\operatorname{succ}$ and $0$, and we have hinted that addition and multiplication can straightforwardly be defined, plus as iterated successor and multiplication as iterated addition. So we have covered the two constructors from the interface for natural numbers, and we were able to define some more useful functions like $+$ and $\times$, but two other central aspects of the interface are lacking: the selectors and the predicates. What we and SICP call selectors is sometimes also called destructors (for instance in the Haskell community). One has to be a bit careful: C++ and similar languages also use the word “destructor” in connection with (object-oriented) data structures, but there it means something unrelated, having to do with object finalization and memory management, releasing memory at the end of the lifespan of some object. Functional languages, from the beginning, starting with Lisp, have automatic memory management, i.e., garbage collection; no functional programmer needs to clean up personally…

Selectors are the inverse of constructors: constructors compose a larger structure from smaller parts, and selectors allow access to the parts of a composed one. As far as Lisp and Scheme are concerned, the constructor for pairs is cons, and the two destructors are car and cdr. For lists, the constructors are '() and cons, and the destructors are called the same as for pairs. It might be clearer if one had used separate names, like left and right for the selectors for pairs and head and tail for the selectors for lists. Many (typed) functional languages would insist that two different abstract data types use different sets of constructors, so a cons (or whatever name or notation would be used for the constructor) would either construct a pair or a list; it can’t serve for both constructions (even if internally both might be represented by something like cons-cells). Lisp and Scheme, being statically untyped, see nothing wrong with it.

So then, what are the selectors for natural numbers? The selectors have to give access to “sub-structures” of a compound data structure. $0$ is not compound, it has no parts, so there is no selector corresponding to it. Numbers $n>0$ are structured: they are constructed as $\operatorname{succ} (n-1)$, with $n-1$ taking the role of a substructure. A good name for this destructor or selector is $\operatorname{pred}$, for predecessor.

Of course, the predecessor of $0$ is undefined; $0$ is the smallest number. Analogously, the selectors for lists can be applied to non-empty lists only, and applying car or cdr to the empty list '() is undefined, resp. raises an error. So natural numbers have one selector, the predecessor, and it is indeed the inverse of the constructor:

\[\operatorname{pred} (\operatorname{succ}\ n) = n \qquad \operatorname{succ} (\operatorname{pred}\ n) = n \quad (\text{the latter for } n>0)\]

Note that we have not spelled out the implementation of $\operatorname{pred}$ as $λ$-expression or in Scheme. We specified its behavior in terms of how it works together with the constructors (inverting their effect). So that’s the interface contract the implementation of $\operatorname{pred}$ has to fulfill.

We won’t give an actual encoding or implementation of $\operatorname{pred}$ for our Church numerals here; it’s not really hard, but a bit more convoluted (and inefficient). If interested, it can easily be found on the internet. The actual encoding is an interesting exercise; of more general interest are the underlying principles, like the fact that the central ingredients of the structure’s interface are grouped into constructors and selectors/destructors, with the requirement that they are each other’s inverses. That principle generalizes to other such data structures as well; they are generally called inductive data types.

$+$ and $\times$ are also important functions when talking about numbers. Useful as they are, they are not central to the inductive data type; they are built on top, and a natural-number package could of course have very many useful functions besides $+$ and $\times$, for instance $n!$ etc.

But besides constructors and selectors, there is a third class of functions central to the interface (and used in $F$). In order to be generally useful, one needs a way of “comparing” numbers. At the very core of this: we need to check if a number equals $0$ or not. Without that, we cannot separate the base case from the “recursion” case. As discussed at the beginning of this text, we are in a setting without the full power of recursion, and we mentioned that the selector/constructor way of presenting data structures leads to inductive data types. Thus, the recursion case(s) are alternatively also called induction or inductive case(s). Basically, we are doing inductive definitions, with induction being a restricted, more disciplined form of recursion, working on, well, inductive data types. Induction as a proof principle (over natural numbers or over other inductively given structures) is likewise closely connected…

At any rate, for the natural numbers, the most basic form of “comparison” is the one for zero-ness. It’s a predicate that checks whether the base case applies or not. That’s exactly what we also need to program $F$ for the factorial.

With this basic predicate covered, other comparisons and useful predicates can be defined, like $n =^? m$ or also $n \leq^? m$ etc.

The check itself is actually encoded pretty simply. Remember that $0$ is an iterator, namely one that iterates its first argument function $0$ times, which means not at all, and thus gives back its second argument. For $n>0$, the first argument is iterated at least one time. So we just have to arrange that the second argument corresponds to true, and that in all other situations we return false:

\[\operatorname{iszero?} = \lambda n.n\ (\lambda x.\operatorname{false})\ \operatorname{true}\]

or in Scheme:

(define iszero?
  (lambda (n)
    ((n (lambda (x) #f)) #t)))
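A quick check (our own):

  (iszero? zero)          ; => #t
  (iszero? (succ zero))   ; => #f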

How to generalize beyond $n!$ and numbers: inductive data types and pattern matching

The Church numerals are, on the one hand, a perhaps strange exercise in exploring the power of higher-order functions, and not useful as an actual implementation of numbers. But the encoding is not “arbitrary”; it follows patterns that open the way to likewise encode other data structures. For instance, lists (another inductive data structure) could be done like that, or also Booleans. In the above code we allowed ourselves to use the built-in #t and #f, but we could have pushed further and encoded the Booleans as well. We could do so by asking the same question we started with: what can Booleans do when seeing them not as data, but as procedures? (And the answer would be: make a decision between two alternatives.)
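For the record, that answer leads to the well-known Church Booleans, sketched here with names of our own choosing (note that, Scheme being eager, both alternatives are evaluated before the choice is made):

  ;; A Boolean is the procedure that chooses between two alternatives.
  (define c-true  (lambda (t) (lambda (e) t)))
  (define c-false (lambda (t) (lambda (e) e)))

  ;; "if-then-else" is then just application:
  (define c-if
    (lambda (b)
      (lambda (thn)
        (lambda (els)
          ((b thn) els)))))

  (((c-if c-true) 'yes) 'no)   ; => yes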

The general principles behind such encodings may get buried underneath mountains of parentheses and $\lambda$’s. More interesting, it seems to me, is to focus on the interface, talking about the three crucial ingredients of such structures: constructors, selectors, and basic predicates. The latter are needed to discriminate between the various constructors, making the appropriate case distinctions.

In Scheme, when designing and implementing structured data, such as trees, etc., one of course does not do a Church encoding of those. One relies on recursion, builds, say, trees using symbols as tags, and checks the tags by properly programmed predicates. The central built-in data structure of lists, which conceptually is an inductive data structure, of course also has the corresponding predicate, called null?. So the flexibility of Scheme allows one to build inductive data structures in a disciplined manner (mostly relying on the flexibility of nested lists), as in the sketch below. Not that it’s recommended, but as discussed, one could even push Scheme’s flexibility to the limit and use the Church numerals as inspiration to encode the data structures not by the built-in lists but by procedures.
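For illustration, such a disciplined, tag-based version of the natural numbers could look as follows (a sketch with names of our own):

  ;; Constructors build tagged lists, the predicate inspects the tag,
  ;; the selector extracts the part (defined on succ-values only).
  (define nat-zero '(zero))
  (define nat-succ  (lambda (n) (list 'succ n)))
  (define nat-zero? (lambda (n) (eq? (car n) 'zero)))
  (define nat-pred  (lambda (n) (cadr n)))

  (nat-pred (nat-succ nat-zero))   ; => (zero)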

Not all functional languages allow the almost boundless flexibility of Scheme. In particular, typed functional languages impose much more discipline on the programmer. For instance, the $Y$-combinator will in all likelihood no longer be programmable in a type-safe manner, and the Church tricks likewise run into trouble and may no longer be accepted by the type system, which in itself seems like not a big loss… However, the type system might easily get in the way by forbidding the flexible nesting of lists that serves to build disciplined inductive data types.

But inductive data structures are eminently important, and a programming language needs to support or at least allow them, without the type system getting in the way. Typed functional languages typically “integrate” inductive types as part of the type system. After all, the structures are called abstract or inductive data types for good reason. In Scheme, without a static type level that would impose some discipline, following some discipline is the programmer’s responsibility: coming up with a collection of procedures, perhaps grouping them personally into selectors, constructors, and predicates, and conceptually thinking of all that as an (inductive) data type. But for Scheme, it’s all procedures and a collection of lists and cons-pairs, and it falls on the shoulders of the programmer to be disciplined enough to use the interface, and not to directly use combinations of cons, car, and cdr, exploiting knowledge of how the interface is implemented using nested lists… Type systems supporting such data types enforce the discipline.

Using some concrete typed functional language as an example, one could define the natural numbers inductively as follows. The code is concretely in OCaml, an ML-inspired language, but many typed functional languages support more or less similar notations.

type num = Zero | Succ of num

Zero and Succ are the two constructors (constructors have to start with capital letters, at least in OCaml), and num is the name given to the inductive data type. We are no longer talking about Church numerals, which are mainly about encoding such inductive data structures in a fancy way; we focus on the principles underlying the interface of such structures that we distilled when discussing the encoding. Of course the interpreter and the compiler will have to come up with some encoding; we don’t really care how it’s done, but we can be pretty sure it’s not encoded by higher-order procedures…

Of course, OCaml and similar languages have numbers built in already, so we would actually not need to define the type num. We use it for illustration.

With the type definition we have covered the constructors of the interface. We can construct numbers like Succ (Succ (Succ Zero)) as a representation of $3$. Of course, these numbers won’t work as iterators (and we don’t actually miss that aspect much either). But what about selectors and predicates?

Actually, those are conventionally combined in an elegant manner in typed functional languages, namely by pattern matching. A typical use is in combination with a “case switch” to cover the different shapes of the data structure; for our nats there are two cases, one base case and one inductive case (and we use recursion):

let rec plus n m =
  match n with         (* matching *)
  | Zero -> m
  | Succ n' -> Succ (plus n' m)

In Scheme, we used the predicate iszero?, which covers the base case. The match here explicitly checks both possible cases, and note how the pattern match combines the check of which case applies with the selection or deconstruction of $n$: the predecessor $n'$ of the argument is simply bound in the matched pattern.

If we explicitly needed the predecessor selector mentioned earlier, we could program it as shown below. Typically one would see no need to do so, as its functionality is typically exploited in combination with matching and a case distinction. Not just because that’s “elegant”, but to protect against run-time errors: remember that pred on $0$ is an error, so it’s good practice to combine the use of pred with a prior check that iszero? is false. And that’s exactly what the above body of plus does with the match and the case branches.

Anyway, if one needs the unsafe pred, the selector that does not care to check whether there’s something to select, one could simply write

  let pred (Succ n') = n';;

The real selection functionality is done by matching the argument against the pattern Succ n', so that’s elegant enough as a selection mechanism. That’s why we said above that one typically might not even see the need to give that match a name like pred.

The type system may prevent some of the flexibility offered by Scheme, but on the other hand it can warn us if we have uncovered cases. For instance, for the definition of pred it warns us:

Warning [partial-match]: this pattern-matching is not exhaustive.
Here is an example of a case that is not matched: Zero

We could get rid of the warning by issuing a tailor-made error message (or doing something else for $0$).

let pred n =
  match n with
  | Succ n' -> n'
  | _ -> raise (Failure "Pred of Zero undefined");;

Still, pred is only partially defined; as long as we don’t include negative numbers as well, something won’t fit if we really want a predecessor of $0$. And as said, the selection is best done in combination with a case match covering all cases, to avoid running into those troubles.

To sum up

We have sketched the idea of Church numerals, a procedural encoding of the natural numbers. Each number $n$ is represented as a function that corresponds to an iterator or a loop of length $n$. All numbers can be defined by two constructors, which we could call $0$ and $\operatorname{succ}$. Since these numbers are actually iterators, one can use them to define further useful functions (by iteration). The full power of recursion can’t be achieved this way; all procedures will terminate, but it’s enough to program factorial.

Focusing on the interface, we stressed that besides constructors, the core of the interface of a data structure like nat involves selectors and predicates to make the necessary case distinctions. Those kinds of data structures are called inductive data types. Typed languages support them by allowing the introduction of types via the specification of their constructors. The functionality of selectors and predicates is achieved in a combined manner by pattern matching.

