G12FCO -- Formal Computation

Lecture 18 -- More limits of Turing machines

Discussion

You might [reasonably!] argue that the resolution of the halting problem as discussed in the last lecture is rather specialised. The question was about all TMs and all initial tapes [or, equivalently, about all programs (say, written in C) and all possible inputs]; and we found one TM and one initial tape which could not be properly analysed. Even that is a slight exaggeration: what we showed was that if the problem were solvable, then we could find one TM and one initial tape which could not be properly analysed [and therefore the problem isn't solvable]. Perhaps this is just a semi-trivial counterexample, and if we somehow exclude [say] self-referential programs, then all becomes OK? No: self-reference turns out not to be the real obstacle. To see why, let's think about how we might partially solve the halting problem.

We have two principal weapons, both of which can be used. Firstly, we can look at the structure of the program [we assume, for convenience, that we are looking at a computer program in a language similar to QBasic or C or Pascal, rather than a state matrix]. If the program has no uncounted loops [WHILE ... ENDWHILE as opposed to FOR ... ENDFOR] and no recursive function or procedure calls, then it must be making progress and must eventually reach the end. Even with uncounted loops or recursive calls, there may be invariants [which may be suggested in the program, either as comments or, in some languages, as `assertions'] whose nature demonstrates that progress is being made, and that termination is inevitable. We saw this idea in action in the first part of this module. So there are many real-life programs whose termination we can guarantee. There are plenty of others that one could write down whose non-termination we can guarantee: recursions that clearly never unwind, loops that return to an exact previous state, invariants which are evolving in the `wrong' direction, etc. So, a structural analysis of the program, with or without reference to the actual input data, may yield an answer quite quickly; but there will be many other possible programs where this analysis fails to give an answer.
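
To make the `invariant/variant' idea concrete, here is a minimal sketch in C [the example, and everything in it, is our own illustration, not part of the module]. The variant is b: a non-negative integer which strictly decreases on every pass through the uncounted loop, so termination is guaranteed.

    /* Euclid's algorithm: terminates because the variant b is a
       non-negative integer that strictly decreases on every pass
       through the WHILE loop (r = a % b is always less than b). */
    #include <stdio.h>

    static unsigned gcd(unsigned a, unsigned b)
    {
        while (b != 0) {        /* uncounted (WHILE) loop ...     */
            unsigned r = a % b; /* ... but 0 <= r < b ...         */
            a = b;
            b = r;              /* ... so the variant b decreases */
        }
        return a;
    }

    int main(void)
    {
        printf("%u\n", gcd(252, 105));  /* prints 21 */
        return 0;
    }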

For these cases, we can turn to our second weapon -- emulation. We want to know whether a program halts, so we try it. If it halts, then we know the answer. If it doesn't halt, then `it must be in a loop', so we monitor its state and `detect the loop'. Sadly, although this is in one sense correct, it is a false dichotomy. At any given moment as the emulation proceeds, we are in one of not two but three states: the program has halted, or it is looping, or it is still running and has not yet entered a loop. It's the third case that kills us -- we just have to keep going, and wait for one of the other two things to happen. The trouble is that it may be that neither of them ever happens -- which is why `it must be in a loop' was in quotes above.
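
Here, as an illustration only, is what loop-detection looks like when it can work: a toy `machine' with only finitely many configurations, emulated in C [the machine and all names are invented for this sketch]. Recording every configuration seen is guaranteed to end in `halts' or `loops' -- but only because the configuration space is finite. A real TM has an unbounded tape, hence unboundedly many configurations, and no such table can be complete; that is exactly the third case above.

    /* Emulation with loop detection, for a toy machine with only
       NCONF possible configurations.  For a real TM no finite table
       of seen configurations can suffice. */
    #include <stdio.h>

    #define NCONF    1000
    #define HALTCONF 7            /* hypothetical halting configuration */

    static int step(int x)        /* hypothetical transition function */
    {
        return (3 * x + 2) % NCONF;
    }

    int main(void)
    {
        static char seen[NCONF];  /* zero-initialised */
        int x = 0;                /* initial configuration */

        for (;;) {
            if (x == HALTCONF) { printf("halts\n"); return 0; }
            if (seen[x])       { printf("loops\n"); return 0; }
            seen[x] = 1;
            x = step(x);
        }
    }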

But, you may say, this is a nonsense. `If the program doesn't halt, then it must keep going indefinitely, so it doesn't halt.' Yes; but you don't know that it isn't going to halt, so this is a circular argument: if we knew how to tell that it had gone past the point at which it could still halt, then we could solve the halting problem; but we can't, so we don't.

OK, you reply, here is a more sophisticated version: `Consider the set of all programs and initial non-blank parts of their tapes whose total length is N. There are only finitely many such combinations, so this is a finite set. Some of these combinations in fact halt, some of them don't. Consider the set of those that do halt, a subset of a finite set, and hence itself a finite set. Emulate just these programs; they are all known to halt, so if we wait long enough we will find out how long we need to run the emulation for them all to halt. This defines an emulation length L_N. We measure the length N of our given program and initial tape, emulate for L_N steps, then if it's still going it can never halt. What's wrong with that, eh?' What's wrong with that is that the considered subset is not computable [in general -- it may be for some values of N!], and so L_N is not computable -- if it were, then the halting problem would indeed be solvable.
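
If L_N were computable, a halting tester would be only a few lines long. Here is its shape, as a sketch in C [every name here is ours, purely for illustration]; the function L is precisely the part that cannot be written, so the stub simply gives up.

    /* The shape of the would-be halting tester.  Everything below is
       routine EXCEPT the function L: computing L(N) is exactly as hard
       as the halting problem itself.  emulate() is also stubbed, but a
       real emulator is easy to write. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static unsigned long L(unsigned long n)   /* NOT computable */
    {
        fprintf(stderr, "L(%lu) is uncomputable -- that is the point\n", n);
        exit(1);
        return 0;                             /* not reached */
    }

    /* Run prog on tape for at most max_steps steps;
       return 1 if it halted within the bound, 0 otherwise. */
    static int emulate(const char *prog, const char *tape,
                       unsigned long max_steps)
    {
        (void)prog; (void)tape; (void)max_steps;
        return 0;                             /* stub */
    }

    static int halts(const char *prog, const char *tape)
    {
        unsigned long n = strlen(prog) + strlen(tape);
        /* still running after L(n) steps => it can never halt */
        return emulate(prog, tape, L(n));
    }

    int main(void)
    {
        printf("%d\n", halts("some program", "some tape"));
        return 0;
    }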

But, you claim, we don't really care what L_N is, we only need a bound, not the least such bound, on the number of steps taken to halt. Quite correct; so what we have just shown is that no sufficiently quickly increasing sequence of integers is computable [any computable sequence must fall below L_N infinitely often].

Summary: You can slice it and dice it as much as you like, but the halting problem remains elusive. You can build TMs [or C programs] that will look at programs and their data and always correctly say YES, NO or DON'T KNOW -- even useful such programs! -- and you can build TMs that always say YES when that is the answer, sometimes say NO, and sometimes never give you an answer, but you still can't solve the halting problem.

Another point worth noting about the nature of the given proof: although it was a proof by contradiction [`If this can be done, then blah, blah, blah, which is impossible, so it can't be done'], and hence discusses properties of TMs which do not, in fact, exist, it is also a somewhat constructive proof. In other words, if you come to me with your machine A, and make weird and wonderful claims about its capabilities, then I can construct from it a machine C which demolishes your claim. `You say that A always works, but here is a C on which your machine fails.' We might think of it as a destructive proof.

A glance at a slightly different problem may give an insight into why rapidly-growing sequences become uncomputable. [Note that rapidly-growing here does not mean mere exponential growth, nor even doubly-exponential growth, nor anything you can imagine writing a program for, but stupendously, unimaginably rapid growth!] This is ...

The busy beaver problem

This is another silly competition! For some given N, you are invited to design a Turing machine with N states, and using only the symbols 0 [blank] and 1, which, when started with a completely blank tape, writes some 1s on the tape and then halts. [It is, of course, trivial to write even a tiny TM that writes 1s on the tape and never halts -- you can do it with a single state.] In general, there are many such machines; the winner is the one that produces the most 1s.

Now the number of such machines is finite, so there is an absolute winner of this contest. The number of 1s written by this winner is the Busy Beaver function, BB(N). Again, this is mathematically a perfectly well-defined function, and the first few values, BB(1), BB(2), ... are known [by exhaustive analysis of a handful of TMs]. It starts to get difficult around N = 6, when some of the TMs being analysed are universal and have extremely complicated behaviour.
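
For concreteness, the two-state champion is known: it writes BB(2) = 4 ones and halts after six moves. Here is a minimal simulation of it in C [the encoding of the machine as three small tables is our own choice, not a standard one].

    /* The known 2-state, 2-symbol busy-beaver champion: it writes
       BB(2) = 4 ones and halts after 6 moves.  Tables are indexed
       [state][symbol read]. */
    #include <stdio.h>

    #define TAPELEN 32
    enum { A, B, H };                                    /* H = halt */

    int main(void)
    {
        static const int wr[2][2] = {{1, 1}, {1, 1}};    /* write */
        static const int mv[2][2] = {{+1, -1}, {-1, +1}};/* move  */
        static const int nx[2][2] = {{B, B}, {A, H}};    /* next  */

        int tape[TAPELEN] = {0};
        int pos = TAPELEN / 2, state = A, steps = 0, ones = 0, i;

        while (state != H) {
            int c = tape[pos];
            tape[pos]  = wr[state][c];
            pos       += mv[state][c];
            state      = nx[state][c];
            steps++;
        }
        for (i = 0; i < TAPELEN; i++)
            ones += tape[i];
        printf("halted after %d steps with %d ones\n", steps, ones);
        return 0;
    }

Running it prints `halted after 6 steps with 4 ones'.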

There are quite small TMs for functions like N!, N^N^N, and so on, so BB(N) grows quite quickly. As you are no doubt expecting, it is in fact not computable. To be slightly more precise, there is no TM which, given N on its initial tape, will halt with BB(N) written on its tape.

The proof of this is a somewhat similar contradiction to the proof given in relation to the halting problem, but rather easier to comprehend. Basically, it derives from the fact that if we were given a BB machine, we could construct a BB-busting TM. We can assume, without loss of generality, that the BB machine already uses just a binary alphabet of blanks and 1s, and we can assume that it represents N and BB(N) in unary [e.g. representing seven as 1111111], for if it doesn't, then a simple program can do the conversion. However, it doesn't start with a blank tape. We can soon cure that; we add on a few extra states at the start so that its first action is to write a suitable N on its tape. How many states? Well, with k extra states we can write BB(k) 1s onto the tape [simply use a k-state busy-beaver champion as the added states], and then our BB machine [which we can suppose to have originally had, say, A states] writes BB(BB(k)) 1s onto the tape. In other words, an A+k state TM can write BB(BB(k)) 1s; and so BB(A+k) >= BB(BB(k)). Since it is clear that BB(k) is strictly increasing with k, this is possible only if A+k >= BB(k), but this is clearly absurd for large k. Filling in all the detail is left as an exercise!
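
The count at the heart of the contradiction, restated in display form [A and k as above]:

    \[
      BB(A+k) \;\ge\; BB(BB(k))
      \quad\Longrightarrow\quad
      A+k \;\ge\; BB(k),
    \]

and the right-hand inequality must fail once BB(k) outgrows A+k, which it certainly does for large k.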

Extensions to the halting problem

The halting problem can be used to show that lots of other problems are also unsolvable. The trick is to show that if we could solve the other problem, then we could solve the halting problem; this is usually done by showing how to convert an arbitrary instance of the halting problem into an instance of the other problem. Here are some examples:
The printing problem: Does a machine M ever write some specified symbol when started with tape T?
Start with an instance of the halting problem. If the machine/tape already use the given symbol, then first replace every occurrence of it by some other symbol, if necessary by extending the alphabet. Next, extend the machine by adding a new state; replace all instances of the halt state by the new state, and define the new state to write the given symbol and then halt. The new machine writes the given symbol if and only if the original machine halts, so a TM which could solve the printing problem could be adapted to solve the halting problem. [A sketch of this construction, in C, follows the examples below.] Much the same idea disposes of any similar question about whether a TM ever enters a particular state.
The blank-tape problem: Does a machine M halt if started with a blank tape?
[This is a special case of the halting problem, so it is conceivable that this case might be solvable even though the general case isn't.] Start with an instance of the halting problem. Replace its machine and tape by a new machine which has some additional initial states; these new states produce a copy of the desired tape. So, the halting problem for the initial machine/tape pair is equivalent to the blank-tape problem for the new machine. A blank-tape solver could thus be tweaked to be a halting-problem solver.
The uniform-halting problem: Does a machine M halt no matter what its initial tape?
Start with, for a change, an instance of the blank-tape problem. Choose some extra marker symbols which denote the `clean' area of tape, and add to the given machine some extra states which systematically push out the markers and replace them by blank tape whenever they occur as input [details left as exercise]. A uniform-halting solver which could determine the status of this tweaked machine would determine the status of the original machine on a blank tape.
The machine-equivalence problem: Do two machines have the same behaviour for every initial tape?
Tweak the machines similarly to the above so that when they would otherwise halt they first erase everything they've written. Let one of the machines be one that always halts [leaving a blank tape]; then the equivalence problem for the pair is the same as the uniform-halting problem for the other. [Details again left as an exercise.]
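
As promised above, here is a sketch of the printing-problem construction in C [the encoding of machines as transition tables, and the assumption that M does not already use the watched symbol, are ours; this is an illustration, not a definitive implementation].

    /* The printing-problem reduction: given a machine M [assumed, for
       brevity, not to use the watched symbol already], build M' which
       writes that symbol if and only if M halts. */
    #include <stdio.h>
    #include <string.h>

    #define MAXQ  64     /* maximum number of states                */
    #define NSYM   8     /* alphabet size                           */
    #define WATCH  7     /* the symbol whose appearance is watched  */
    #define HALT  -1

    struct machine {
        int nq;                 /* number of states                 */
        int wr[MAXQ][NSYM];     /* symbol written                   */
        int mv[MAXQ][NSYM];     /* +1 = right, -1 = left            */
        int nx[MAXQ][NSYM];     /* next state, or HALT              */
    };

    /* One fresh state catches every would-be halt, writes WATCH,
       and only then halts. */
    static void printing_reduction(struct machine *m)
    {
        int fresh = m->nq++, q, c;

        for (q = 0; q < fresh; q++)
            for (c = 0; c < NSYM; c++)
                if (m->nx[q][c] == HALT)
                    m->nx[q][c] = fresh;    /* divert the halt  */
        for (c = 0; c < NSYM; c++) {
            m->wr[fresh][c] = WATCH;        /* write the symbol */
            m->mv[fresh][c] = +1;
            m->nx[fresh][c] = HALT;         /* ... then halt    */
        }
    }

    int main(void)
    {
        struct machine m;

        memset(&m, 0, sizeof m);
        m.nq = 1;                  /* toy machine: halts at once */
        m.nx[0][0] = HALT;
        printing_reduction(&m);
        printf("M' has %d states\n", m.nq);   /* prints 2 */
        return 0;
    }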

The perhaps slightly depressing conclusion from all this is that basically it is impossible in general to determine anything interesting about the behaviour of Turing machines [and hence of computers, or of any other model of computing that is Turing-equivalent].

There is one mildly interesting counter-example: it is possible to determine whether or not a TM ever writes a non-blank symbol given an initially blank tape. This is because within the number of moves given by its number of states it must either change the tape or else repeat a state and [blank] input symbol with the tape unchanged, after which it will repeat this cycle of moves indefinitely. This can be extended to non-blank initial tapes provided that there is an initial bound given on the size of the non-blank part of the tape -- you can then bound the number of moves within which the TM must either change the tape or else be cycling away down the [unbounded] blank part of the tape, and only a bounded number of changes can be made before the tape becomes blank. [Q: Why does this not contradict the printing problem above?]
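
A sketch of that decision procedure in C [same home-made machine encoding as the earlier sketches; only the blank-tape case is shown].

    /* The decidable question: does M, started on a blank tape, ever
       write a non-blank symbol?  Run M for nq moves [nq = number of
       states]: by then it has either written a 1, or halted, or
       repeated a state while reading blank on an all-blank tape --
       and in that last case it is doomed to cycle for ever. */
    #include <stdio.h>

    #define MAXQ 64
    #define HALT -1

    struct machine {
        int nq;
        int wr[MAXQ][2];   /* symbol written on reading 0 / 1 */
        int mv[MAXQ][2];   /* +1 = right, -1 = left           */
        int nx[MAXQ][2];   /* next state, or HALT             */
    };

    static int ever_writes(const struct machine *m)
    {
        int state = 0, steps;

        for (steps = 0; steps < m->nq; steps++) {
            if (state == HALT)
                return 0;            /* halted; tape still blank */
            if (m->wr[state][0] != 0)
                return 1;            /* writes a non-blank       */
            state = m->nx[state][0]; /* tape stays blank         */
        }
        return 0;   /* nq moves without writing: a state must now
                       repeat on blank input, so it cycles for ever */
    }

    int main(void)
    {
        struct machine m = { 1 };    /* one state, all entries 0  */
        m.nx[0][0] = 0;              /* loop for ever, writing 0s */
        printf("%d\n", ever_writes(&m));   /* prints 0 */
        return 0;
    }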

Copyright © Dr A. N. Walker, 1996, 2016.