In the introductory chapter, as a sort of “motivation”, I mention something about an employment guarantee for compiler writers (it’s not on the slides, it’s mentioned only in the script version). The proper term is
Full employment theorem (FET) for compiler writers.
There are other motivations to study compiler technology, of course, and the lecture mentions some: there will always be new languages, new platforms, there will always be a need to translate from one format to another, etc. Last but not least, the techniques and principles we learn in connection with compilers are often also relevant in other contexts.
The FET is of a different nature. It’s not actually a motivational statement promising good job opportunities, but an actual theorem: a mathematically proven fact with a peculiar name.
Concretely, it’s about the optimization of compilers, or rather about optimizing compilers.
Improvement and optimizations
It goes without saying that the compiler is (supposed to be) correct. An optimizing compiler is one that makes some effort to perform well, without breaking correctness. What optimization means exactly varies, but it generally refers to the quality of the produced code: it should run fast, be memory efficient, the code itself should be compact, etc. Beyond the quality of the produced code, there may be other criteria, such as the speed of compilation itself, but the latter is not what the FET is about.
Different legitimate optimization goals may clearly contradict each other. For example, there is often a trade-off between memory usage and speed. The theorem, however, is not bothered by the fact that goals may contradict each other. The argument works for all criteria that refer to the behavior of the produced code, for instance one of the simplest: the code size.
One can find two slightly different formulations of the FET for compiler writers. They are closely related; actually, the second one is a pretty direct elaboration of the first (and both are pretty direct elaborations of the well-known halting problem).
One could say the first formulation is concerned with optimizing compilers, the second one with the problem of optimizing a compiler. It’s the second version that is mentioned in the lecture.
As mentioned, an optimizing compiler is a compiler that makes an effort to produce good code, given some criterion for that. As a very simple criterion, one can take the size of the produced code. That may not be the most important criterion in practice, but the argument works independently of what one chooses as optimization target. Using the size of the program just makes the argument particularly easy.
The word “optimizing” is not meant as finding optimal code. The word “optimal” means “best”: better than, or at least as good as, all alternatives. And optimizing means, as said, producing “good” code, not optimal code. A (hypothetical) compiler that produces optimal code is called a fully optimizing compiler.
Full optimization in that sense is impossible, and that fact is the core of the FET for compiler writers.
The proof is actually very simple. Very simple, at least, if one makes use of a central result of computability theory: the undecidability of the halting problem. The discovery, formulation, and proof of that result required a giant like Alan Turing, but the FET is a very easy consequence. Like many such impossibility results, it works as a proof by contradiction: if such a thing as a fully optimizing compiler existed, one could use it to solve the halting problem as well. As that is impossible … case closed.
Slightly more explicitly: assume as optimization criterion, as said, the size of the program. Programs that the compiler wants to (fully) optimize may or may not terminate, and in general it is undecidable which is the case. Of course, given one particular program, it may well be possible, maybe even easy, to determine whether it terminates or not. It’s just that there is no algorithm (a decision procedure) that decides the termination status for all programs.
Let’s assume the smallest possible way to express non-terminating behavior is something like while true do skip, i.e., just a single non-terminating loop. Now, a fully optimizing compiler would optimize all non-terminating programs into that non-terminating loop. Terminating programs, on the other hand, are optimized into other programs. Which case applies is easily decidable syntactically, by looking at the optimized code.
It may be the case that the language to compile to has two or even several equally small ways of expressing non-termination. For instance, the language may support repeat skip until false, which may cost the same as the while-formulation in terms of memory. But that does not invalidate the argument: if there are variations, all of them are clearly representations of an empty, infinite loop, which can be determined syntactically. It’s important that the detection is syntactic, not semantic. The question “is a program semantically equivalent to a non-terminating program?” is nothing but a slightly roundabout formulation of the halting problem.
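The reduction just described can be sketched in a few lines of Python, treating programs as strings. The fully_optimize parameter stands for the hypothetical oracle whose existence the FET refutes; the toy stand-in below only “knows” two programs and is purely illustrative.

```python
# Minimal loop that every non-terminating program would be optimized into.
LOOP = "while true do skip"

def halts(program: str, fully_optimize) -> bool:
    # If a fully optimizing compiler existed, this purely syntactic check
    # on its output would decide the halting problem -- a contradiction.
    return fully_optimize(program) != LOOP

# Toy stand-in "oracle" that only knows two programs, for illustration only;
# a real fully_optimize cannot exist, which is the point of the argument.
def toy_oracle(program: str) -> str:
    return {"skip": "skip", "while true do x := x + 1": LOOP}[program]

print(halts("skip", toy_oracle))                      # True: terminates
print(halts("while true do x := x + 1", toy_oracle))  # False: loops forever
```

The stand-in only makes the shape of the reduction visible; the theorem says precisely that no total, correct implementation of fully_optimize can be written.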
Optimizing a compiler
A second formulation of the FET for compiler writers takes the argument a step further. It does not stop at the fact that a fully optimizing compiler is plainly impossible, but points out that there is always room for improvement:
It can be proven that for each “optimizing compiler” there is another one that beats it, and which is therefore “more optimal”.
That means any compiler, no matter how “optimized” and tuned already, can be improved, so compiler writers will never be out of work (even in the unlikely event that no new programming languages or hardware platforms are developed in the future …).
The proof of that elaboration of the FET is likewise easy. It goes like this. Assume you have an optimizing compiler, say Opt. It can be turned into a better one in the following way, where “better” is understood in the sense that it produces smaller code. We simply take the insight from above one step further to make a tiny “improvement” to Opt. It’s clear that there are (non-trivial) programs that don’t terminate (actually, there are infinitely many), and say
NT is such a program. Then one can improve the optimizing compiler Opt by checking whether the input is (syntactically) NT, and optimizing that particular program into the minimal infinite-loop program (say it’s called LOOP):

Input(P): if P = NT then output LOOP else Opt(P)
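The construction can be rendered directly in Python, again treating programs as strings. Opt is passed in as a function; the concrete strings chosen for NT and LOOP are invented for illustration.

```python
LOOP = "while true do skip"
# Some fixed non-terminating program, larger than LOOP (string is invented):
NT = "while true do begin skip; skip end"

def improved(P: str, opt) -> str:
    if P == NT:       # purely syntactic comparison, no semantics needed
        return LOOP   # strictly smaller output on this one input ...
    return opt(P)     # ... and the same output as Opt everywhere else

# With the identity function as a trivial "optimizing compiler" Opt:
identity_opt = lambda P: P
print(improved(NT, identity_opt))       # while true do skip
print(improved("skip", identity_opt))   # skip
```

The wrapper is correct because NT and LOOP are semantically equivalent (both loop forever), yet the check that triggers the replacement is syntactic and therefore computable.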
Of course, in practice, that makes no sense!
It’s useless to improve a compiler in such a case-by-case manner. Furthermore, the proof is not constructive, in that it gives no concrete construction or algorithm for how to actually optimize a given compiler. Note that it also relies on identifying non-terminating programs like NT. It’s easy to construct arbitrarily many non-terminating programs: one simply takes the non-terminating program LOOP and massages it a bit (like adding commands to the body, or simply appending some random code Q after the loop). But that’s a very fake and useless form of optimization: adding a check for one hand-crafted variation of a non-terminating program after the other.
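The “massaging” can be made concrete with a small, purely illustrative generator that pads the loop body: every output is syntactically distinct, yet each one is obviously an infinite loop.

```python
def loop_variant(n: int) -> str:
    # Pad the loop body with n extra skips; every result still loops forever.
    body = "; ".join(["skip"] * (n + 1))
    return f"while true do begin {body} end"

variants = [loop_variant(n) for n in range(4)]
assert len(set(variants)) == 4   # all four are syntactically distinct
print(variants[0])               # while true do begin skip end
```

Each variant would require its own hand-added check in the “improved” compiler, which is exactly why this form of optimization is useless in practice.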
So, the compiler writer can not only point out that a given optimizing compiler can always be improved, but also that such improvement cannot meaningfully be performed automatically. Hence, the services of a smart compiler writer will always be in demand. And provably so!
As a side remark: adding individual checks for whether a piece of code matches a known formulation is not altogether meaningless. It resembles the way virus scanners are improved and updated (“optimized”). It is undecidable whether a piece of code downloaded from some page, or received via email, is harmful. That’s again a consequence of the fact that non-trivial semantic properties of programs are undecidable. Thus, companies maintaining virus scanners keep databases of known viruses and their variations and mutations. Keeping a virus scanner up to date means adding signatures for viruses that have in the meantime been detected as harmful. A signature is just a syntactic pattern that can be matched against a virus. A virus scanner may not be a compiler, but it uses techniques like scanning and perhaps parsing to scrutinize code.
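As a purely illustrative sketch of such signature matching (the signature names and strings below are invented, not real virus signatures):

```python
# A toy "signature database": name -> syntactic pattern. Real scanners use
# far more sophisticated patterns, but the matching is likewise syntactic.
SIGNATURES = {
    "toy-wiper": "rm -rf /",
    "toy-loop-bomb": "while true do fork",
}

def scan(code: str) -> list[str]:
    # Report the names of all known signatures occurring in the code.
    return [name for name, sig in SIGNATURES.items() if sig in code]

print(scan("x := 1; rm -rf /; y := 2"))  # ['toy-wiper']
print(scan("x := 1"))                    # []
```

Updating the scanner corresponds to adding entries to SIGNATURES, just as the “improved” compiler above adds one syntactic check per known non-terminating program.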
Do only compiler writers have a guarantee of full employment?
As seen, the FET is a fancy way to state the formal impossibility of automatic optimization, in this case for compilers. Optimization is a wide field in which one often faces undecidability. Hence FETs exist for other fields as well.