Hackenstrings, and the 0.999... ?= 1 FAQ

A. N. Walker

School of Mathematical Sciences,

University of Nottingham, UK

Hackenstrings

Let's play a game. We each have some objects, say pebbles, labelled or coloured to indicate whether they are mine or yours. They are laid out in rows; for example:
A B B A B B A B B A
B A A B A B B A B
A A A A
B B B A B B B
B
where B denotes one of your pebbles, and A denotes one of mine. When it's your turn to move, you select one of your pebbles, and remove it and any pebbles to its right (whether yours or mine). Similarly, I choose one of mine and remove it and the rest of its row. Eventually, one of us will run out of moves, because all of our pebbles have been removed; whoever first cannot move is the loser. For example, in the given position, supposing it is your move, you might first choose the third B in the fourth row, and I might reply by taking the last A in the second row, giving the position
A B B A B B A B B A
B A A B A B B
A A A A
B B
B
If you now (somewhat foolishly) take the entire second row, then I can take the entire first row, and I am obviously winning, because I have a row of four, which I can take one-by-one from the right, while you have only three pebbles, which must therefore run out first. To save vertical space, we will in future show positions with fewer rows, but using '+' as a punctuation mark to indicate an internal end-of-row; thus the first position above can equally be given as
A B B A B B A B B A + B A A B A B B A B
A A A A + B B B A B B B + B

If each row contains pebbles of only one colour, then the game is very simple. If I have more pebbles than you, then I win by always selecting a pebble from the end of a row; if you have more than me, then you win similarly; and if we have the same number, then whoever plays first loses (as any move leaves a position in which the other player has more). For example,

A A A A + B B B B B + B + A A A
is a win for me because 4 + 3 > 5 + 1, and
A A A + A A + B B B B B
is a loss for whoever is to move because 3 + 2 - 5 = 0. Equivalently, you can count along the rows scoring +1 for each A and -1 for each B; if the result is positive then I win, if negative then you win, and if zero then whoever plays first loses.

What if there are mixed rows? Look first at the simplest such row, A B. This is better for me than nothing, as I win whether I go first or second. But it's not as good as A, because you do have a move if you play first, and indeed you win A B + B even if you go first, by taking the second pebble from the first row. So 0 < A B < 1. We suspect A B = 1/2, and we verify this by noting that

A B + A B + B = 0
(i.e. is a first-player loss); if I go first, then you win as just shown, while if you go first, then I win by taking whichever of my rows you didn't move in. Similarly, A B A is better for me than A B but worse than A, so we suspect that it is 3/4, which we verify by playing the game
A B A + A B A + B A + B
or 2 × A B A - 1/2 - 1 and discovering that this too is a first-player loss with best play (details left to the reader). Continuing, we find that A B B = 1/4, B A A B = -3/8, and so on.
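
For small positions, verifications of this kind can also be done by brute force. The following is a minimal Python sketch (the name wins and the encoding of a position as a tuple of row-strings are merely illustrative choices, not standard notation): it tries every move for the player whose turn it is, and reports a win exactly when some move leaves the opponent lost.

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def wins(position, player):
        # True if `player` ('A' or 'B'), moving first from `position` (a sorted
        # tuple of rows such as ("AB", "AB", "B")), can win with best play.
        other = 'B' if player == 'A' else 'A'
        for r, row in enumerate(position):
            for i, pebble in enumerate(row):
                if pebble != player:
                    continue
                # remove the chosen pebble and everything to its right
                nxt = position[:r] + (row[:i],) + position[r + 1:]
                nxt = tuple(sorted(s for s in nxt if s))   # drop emptied rows
                if not wins(nxt, other):
                    return True        # this move leaves the opponent lost
        return False                   # every move loses, or there is no move

    # A B + A B + B and A B A + A B A + B A + B are both first-player losses:
    for rows in (("AB", "AB", "B"), ("ABA", "ABA", "BA", "B")):
        position = tuple(sorted(rows))
        print(wins(position, 'A'), wins(position, 'B'))   # -> False False, twice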

The detailed rule for evaluating a row is as follows. If the first pebble is mine, then count +1, if yours then -1. Count a further +1 or -1 for each succeeding pebble of the same colour, up to the first change (or the end of the row). Thereafter, count +2^(-n) for the pebble in the nth position after the change if it is mine, and -2^(-n) if it is yours. The total gives the value of the row. If you add up the values of all the rows, then if the result is positive, I can win with best play; if it is negative, you win; and if it is zero, then whoever is to move loses, again with best play.
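
This rule is easy to mechanise. Below is a minimal Python sketch of it (the names row_value and position_value are mine, chosen only for this illustration); exact arithmetic with Fraction keeps the halves, quarters and so on exact.

    from fractions import Fraction

    def row_value(row):
        # value of one row, written as a string such as "ABBA": +/-1 per pebble
        # up to the first colour change, then +/-1/2, +/-1/4, ... thereafter
        value, unit, changed = Fraction(0), Fraction(1), False
        for i, p in enumerate(row):
            if not changed and i > 0 and p != row[i - 1]:
                changed = True
            if changed:
                unit /= 2
            value += unit if p == 'A' else -unit
        return value

    def position_value(rows):
        # sum of the row values: positive means I win, negative means you win,
        # and zero means that whoever moves first loses
        return sum(row_value(r) for r in rows)

    print(row_value("ABA"), row_value("ABB"), row_value("BAAB"))
    # -> 3/4 1/4 -3/8, agreeing with the values found above
    print(position_value(["AAAA", "BBBBB", "B", "AAA"]))
    # -> 1, so the earlier all-monochromatic example is a win for me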

For example, the first given position is worth

+ 1 - 1/2 - 1/4 + 1/8 - 1/16 - 1/32 + 1/64 - 1/128 - 1/256 + 1/512 [first row] +
- 1 + 1/2 + 1/4 - 1/8 + 1/16 - 1/32 - 1/64 + 1/128 - 1/256 [second row] +
+ 1 + 1 + 1 + 1 [third row] +
- 1 - 1 - 1 + 1/2 - 1/4 - 1/8 - 1/16 [fourth row] +
- 1 [fifth row]
or 147/512 - 91/256 + 4 - 47/16 - 1 = -3/512. Since this is negative, you were in fact winning (whether or not you were expected to move first).

From the above rules, it is easy to see that the best move is always to select the rightmost of your pebbles from that row which has most pebbles after the change. (So, in the example, you should have taken your rightmost pebble from the top row, changing the total value to -1/256; you could also have kept the win by choosing the rightmost pebble from the second row, changing the value to -1/512, but any other move would have lost.) Every possible move makes your position worse, in terms of the values given, and this move changes it by the least possible amount. All of these results are fairly easily proved by induction.
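
Re-using position_value (and Fraction) from the sketch above, this can be confirmed by brute force; the helper moves below is again only illustrative. Ranking all of your replies in the example position shows that exactly the two moves just described keep the total negative.

    def moves(rows, player):
        # every position reachable by `player` removing one of their own pebbles
        # (together with everything to its right) from some row
        for r, row in enumerate(rows):
            for i, pebble in enumerate(row):
                if pebble == player:
                    yield rows[:r] + [row[:i]] + rows[r + 1:]

    rows = ["ABBABBABBA", "BAABABBAB", "AAAA", "BBBABBB", "B"]
    results = sorted(position_value(p) for p in moves(rows, 'B'))
    print(results[:3])
    # -> [Fraction(-1, 256), Fraction(-1, 512), Fraction(1, 256)]
    # only the first two replies leave the total negative; every other move loses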

This game is called Hackenstrings as it is a simplified version of the game Hackenbush (described by Conway [1976] and Berlekamp [1982]), in which the general graphs of red-blue Hackenbush are replaced by the rows of pebbles. The results just quoted are examples of a very general result (Conway, op. cit.), according to which the positions of a very wide class of two-player games can be characterised by rational numbers with a finite binary expansion.

Hackenstrings and Numbers

In our ordinary Western civilisation, we all learn about numbers, and fractions, and perhaps even binary expansions, at an early age; so it is natural for us to investigate Hackenstrings as a `new' field of study, and to relate the results back to our familiar numbers. But it is only mildly perverse to imagine a civilisation which first learned Hackenstrings, and later learned our form of arithmetic. In such a civilisation, counting would be as natural as for us, being based on monochromatic rows of pebbles; the concept of addition is natural, consisting merely of laying out adjacent rows; negation, and hence subtraction, is natural, consisting of swapping the ownership of each pebble; the empty game, or zero, would, as with us, be a conceptual advance. Place notation for integers would arise, as with us, as a notational convenience to save dealing with large numbers of pebbles. Multiplication and division would be replicated addition or subtraction, just as with us. The techniques for manipulating fractions would be very similar to ours; addition involves lining up rows with the change in the same place, and learning a table of digit additions, including a carry from the place to the right. There would be an extra complication, in that illegal gaps can arise in the rows; these have to be filled by shifting left the next pebble to the right by one place, and replacing that pebble by an opposite one (corresponding to the result ¼ = ½ - ¼). In compensation, there are no extra rules for subtraction, which is simply adding the negation instead of having to learn another table of digit subtractions and borrows. This 'balanced binary' system of arithmetic is quite closely related computationally to the better-known 'balanced ternary', which uses 'trits' representing 0, +1 and -1 as its digits (see, for example, Knuth [1981], pp. 179-197); it seems to have escaped attention until its use in game theory.
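
The correspondence between finite rows and this balanced binary representation can be made concrete: every dyadic rational (a fraction whose denominator is a power of two) is the value of a finite row, which can be built greedily. A minimal sketch, re-using row_value and Fraction from the earlier code (the name row_for is mine):

    def row_for(target, max_len=64):
        # build a row with the given dyadic-rational value: append an A while
        # the row is worth too little, a B while it is worth too much
        row = ""
        while row_value(row) != target:
            row += 'A' if row_value(row) < target else 'B'
            if len(row) > max_len:
                raise ValueError("no finite row: target is not a dyadic rational")
        return row

    print(row_for(Fraction(3, 4)), row_for(Fraction(-3, 8)))   # -> ABA BAAB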

Eventually, our hypothetical civilisation could learn about recurring `decimals' and irrationals, represented by infinite rows such as

1/3 = A B B A B A B A B A B A B A B A B A ... ,
pi = A A A A B B B A B B A B B B B A A A ... ,
and so on, no doubt with some notational conveniences such as grouping the pebbles in threes or fours for an octal or hexadecimal representation. In short, the whole of mathematics would be as accessible to them as to us.

Would there be any important differences between their mathematics and ours? I suggest that there would be one; namely, that their notational conveniences and advanced axioms would be driven by the game from which their mathematics was derived, rather than by counting and measurement. One way in which this would show up could be in the treatment of recurring decimals.

In conventional mathematics, a recurring decimal such as 0.333... is understood to mean the limit, as n tends to infinity, of the sequence whose nth term is 0.333...3 with n 3's after the decimal point. In Hackenstrings, there is no need to introduce the idea of a limit at this stage; instead, we can retain the idea of a game, with the player who moves in a `recurring' row still being required to choose an appropriate pebble. After this choice, the row becomes finite, and the game proceeds in the usual way. The development of the theory will be more simply described if we first extend the Hackenstrings notation to include recurring rows.

Extended Hackenstrings

We denote the recurring Hackenstrings row, shown above, for 1/3 by A B ( B A ). Temporarily, we allow only one pair of parentheses, which must be closed at the end of the row; we shall relax these constraints in due course. To move, you, as before, must choose one of your pebbles. If (and only if) the chosen pebble is within parentheses, then you must next replace the parenthesised row by as many copies as you choose of its contents, deleting the parentheses. There are now several copies of your chosen pebble; you must choose one of these copies and, as before, delete it and the pebbles to its right.

For example, if you are to move in A B ( B A ), you might choose the parenthesised B, and decide to make five copies of the BA, giving the row

A B B A B A B A B A B A
in which the last five B's (but not the first) are the copies of your chosen B. If you choose the third copy, then the row after your move is
A B B A B A
whose value is 1 - 1/2 - 1/4 + 1/8 - 1/16 + 1/32 = 11/32 = 1/3 + 1/96. It is not hard to see that any move that you make results in a row whose value is greater than 1/3, while any move that I make results in a value less than 1/3.
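
(With the row_value sketch given earlier, this bookkeeping is a one-line check.)

    print(row_value("ABBABA"), row_value("ABBABA") - Fraction(1, 3))
    # -> 11/32 1/96 : the cut row exceeds 1/3 by a positive dyadic amount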

We can verify that A B ( B A ) has value 1/3 by playing the game

A B ( B A ) + A B ( B A ) + A B ( B A ) + B
and noting that whoever plays first will lose with best play. The reason for this is that the first player will be the one who first has to choose where to cut an arbitrarily long recurring row; the other player can then choose to cut the next recurring row to a sufficiently longer length to leave the total with the right sign. For example, if you play first, following the example just given, to
A B B A B A + A B ( B A ) + A B ( B A ) + B
then I can reply to
A B B A B A + A B B A B A B + A B ( B A ) + B
whose value is 1/3 + 1/96 + 1/3 - 1/192 + 1/3 - 1 = 1/192, which is positive, so I am winning.

There are no limit processes involved in this game. In conventional mathematics, the recurring decimal must be given a single value, and we must therefore take the limit before we are able to calculate with the resulting number. In Hackenstrings, we can play the game without knowing its value -- indeed, it is the game that determines the value -- and the limit process is replaced by the indeterminacy in the permitted moves, which in turn can be thought of as simply giving the player the choice of how far along the (indefinitely long) row to choose a pebble. The row is never 'infinite'; it is merely as long as the player wants it to be. The player who moves first is disadvantaged by having to make that choice, which in turn enables the opponent to choose a sufficiently longer row.

In Hackenstrings, there is no reason why there should not be further pebbles after the closing parenthesis; we can play with the row

A B ( B A ) A
for example. The value of this row is slightly greater than 1/3, for we note that the game
A B ( B A ) A + B A ( A B )
is a win for me whoever plays first. In particular, if I play first, I can choose my extra pebble in the first row, thus reaching a position which is obviously zero, by symmetry, and in which it is your move. However, the value is less than that of any finite non-recurring row whose value is greater than 1/3; if I play first in this row, I must make its value no more than 1/3, while if you play first, you can select a length long enough to make the value a rational between 1/3 and the larger value.

This simple notation for recurring rows gives us more. For example, the simplest such row, ( A ), is better for me than any non-recurring row; when I move, I merely have to select a length which is longer than your total number of pebbles. The row ( A ) plays the role of infinity. The row A ( A ) is equivalent; but the row ( A ) A is different, and gives a row whose value is `one more than' infinity; it is clear that

( A ) A + B + ( B ) = 0.
Similarly, ( A ) A B can be thought of as `infinity + 1/2', ( A ) A B ( B A ) as `infinity + 1/3', and ( A ) ( A ) as `infinity + infinity', or `2 × infinity'. On the other hand, ( A ) ( B ) is not zero, but is greater than any finite number, and less than infinity by more than any finite number. Exercise: Show that in fact it is `infinity/2'.

Having allowed sequential parentheses, we had better allow nested parentheses: ( ( A ) ) is clearly `infinity times infinity', and is much better for me than any finite number of infinities. These numbers are not being used in the same way as the classical Western `infinity'; some of them are Cantor's infinite ordinals, often denoted omega, thus `omega + 1', `omega^2', etc., but many of them, such as ( A ( A B ) ), are not; they are new `numbers'. Note however that, by construction, there can be only countably many of them, at least until our hypothetical civilisation discovers a game-theoretic way of generating the reals. See, for alternative developments of the theory, Gonshor [1986], Knuth [1974] and Conway [1976]. Gonshor in particular develops the theory of the `surreals' as ordinal (and therefore potentially infinite) terminating binary sequences.

There is an ambiguity in nested parentheses. Do you have to expand the inner parentheses first, or the outer? Indeed, if there are more than two levels, could you expand first an intermediate level? Simplest is to give the player the choice: each set of nesting parentheses that surrounds the selected pebble (but only such sets) must be expanded into a finite set of copies without the parentheses, in whatever order the player chooses, before the final cut is made. Different rules give different games, but we shall not explore the differences in this article.

Hackenstrings and 0.999 ...

A Frequently Asked Question (`FAQ') in the sci.math Internet newsgroup concerns the relationship between 0.999 ... and 1; indeed, this question is FAQ #1 for that group. Usually, some student or amateur cannot quite believe that these two numbers are the same, and asks for reassurance or proof. Of course, there is a satisfactory analytic proof, using limits; but this is often thought too difficult to present to the questioner, so a variety of other `proofs' are also seen. Typical of these are:

1/3 = 0.333 ..., so multiplying both sides by 3 gives 1 = 0.999 ... ;
if x = 0.999 ..., then 10 × x = 9.999 ..., so 9 × x = 10 × x - x = 9, whence x = 1.

However, in Hackenstrings, the corresponding question would be about A B ( A ), compared with A; that is, in more conventional notation, we are invited to compare 1 - 1/2 + 1/4 + 1/8 + 1/16 + ... with 1. In Hackenstrings, these numbers clearly differ, and A is the greater (equivalently, A B ( A ) + B is a win for you, with or without first move). What happens to the above `proofs'?

The real proof breaks down, as indeed would any result involving `traditional' limits, because it relies on the Archimedean property of the real numbers, i.e. that there are no infinitesimal reals. Hackenstrings includes numbers such as A ( B ), which are positive, but so small that no finite number of them accumulate to one. We find that A B ( A ) + A ( B ) = A, so A B ( A ) is infinitesimally less than 1, and there are numbers such as A B ( A ) A, analogous to 0.999 ... 5, which lie strictly between A B ( A ) and A.

(In relation to Hackenstrings, it would be more consistent to write ordinary decimal numbers in binary or octal, so that the FAQ concerns the identity of 0.777 ... and 1, so that 1/3 = 0.252525 ..., and so that A B ( A ) A is analogous to 0.777 ... 4; but this change would be more confusing than helpful.)

What about the second proof? If our Hackenstring civilisation regards 0.999 ... as infinitesimally less than 1, is 0.333 ... infinitesimally less than 1/3? No! As we saw above, A B ( B A ) is 1/3 exactly, in that three copies of it are equivalent in Hackenstrings to A. The objection to the `multiply 0.333 ... by 3' pseudo-proof is rather that the ordinary processes of arithmetic do not give 0.999 ... when we do this.

Conventionally, we multiply two finite strings of digits together from the right, propagating `carries' to the left as necessary. If we work from the left, then we must always be prepared to change the `result-so-far' in the light of later carries. For finite strings, the final results are equivalent, but in working from the left arbitrarily many digits may be only provisionally known at any one time, so that (for example) the computation cannot be performed by a finite-state machine even if the multiplier is specified in advance. So much the worse if either or both strings are infinite. Thus, if we multiply 0.333...3d... by 3, where the next digit, d, is thus far unknown, then all we know is that the answer is no less than 0.999...90 and no more than 1.000...20. If d turns out to be less than 3, then the start of the result is confirmed to be 0.999..., if greater than 3 then 1.000...; but if d is 3, then the decision is postponed. If the 3's persist indefinitely, then so does the postponement of the decision. Of course, if we know that the multiplicand is 1/3, then we know already that the 3's will persist indefinitely, and we can make out a special case for known rational numbers; but the theory required to do this will be entirely equivalent to the formal analytic proof that 0.999... = 1, and there is no didactic gain.
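
The point about provisional digits can be made concrete with a little interval bookkeeping. The sketch below (my own framing, with illustrative names) multiplies a left-to-right stream of decimal digits by 3, brackets the unseen tail of the input between 000 ... and 999 ..., and reports only those digits of the product that no later carry can change.

    from fractions import Fraction

    def settled_product(digits, multiplier=3):
        # digits: the known leading digits of 0.d1 d2 d3 ...
        # returns the digits of the product that are already beyond doubt,
        # i.e. those that no carry from the unseen tail can alter
        prefix, scale = Fraction(0), Fraction(1)
        settled = ""
        for d in digits:
            scale /= 10
            prefix += d * scale
            low = multiplier * prefix             # if the unseen tail were 000...
            high = multiplier * (prefix + scale)  # if the unseen tail were 999...
            settled, k = "", 0
            while k < len(digits):
                k += 1
                lo_k, hi_k = int(low * 10 ** k), int(high * 10 ** k)
                if lo_k != hi_k:
                    break                         # the kth place still depends on the tail
                settled = f"{lo_k // 10 ** k}.{lo_k % 10 ** k:0{k}d}"
        return settled or "nothing settled yet"

    print(settled_product([3, 3, 3, 3, 3]))   # -> nothing settled yet
    print(settled_product([3, 3, 3, 2]))      # -> 0.999  (the product begins 0.999...)
    print(settled_product([3, 3, 3, 4]))      # -> 1.000  (the carry has arrived)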

The `result' 0.333... × 3 = 0.999... is thus seen to be an optical delusion. This is confirmed by the observation that most mathematicians are somewhat less happy with the similar `result' 0.14285714... × 7 = 0.999..., where it is clearer that something must be done about the carries. Similarly, the `results' in the third proof that 10 × 0.999... = 9.999... and 9.999... - 0.999... = 9 are visually attractive, but can be justified only by making special cases, which are as hard as the original FAQ. (The Hackenstrings version of this is left to the reader.)

It should not be thought that the Hackenstrings approach in any way shows that conventional mathematics is `wrong'. Rather, the axioms of Hackenstrings arithmetic are different from those of the real numbers, one consequence being that in Hackenstrings 0.999... < 1. So any proof that 0.999... = 1 must fail when applied to Hackenstrings, which in turn must mean that one of the `different' axioms has been used. In particular the `optical' proofs are making an indirect use of the Archimedean axiom; but this use is rarely made explicit, which makes the proofs misleadingly simple or even inadequate.

Generalisations of Hackenstrings

There are two obvious generalisations. One of these involves generalising the concept of `row' to other topological structures of pebbles. This turns out to give nothing new. Any such generalisation is equivalent to some version of red-blue Hackenbush, and the resulting positions are still rationals with finite binary expansions, or extensions thereof along the lines developed previously.

More interesting is to allow other pebbles. There are many ways in which this can be done -- for example, we might allow `blocking' pebbles which can be removed only when certain conditions are met -- but the simplest is to allow `neutral' pebbles, here denoted by C, which are available to either side.

With this modification, it is no longer the case that moving first is always a disadvantage. The simplest example is C itself, or indeed any game composed of a single row headed by a C, which is won (trivially) by whoever plays first taking the entire row. If there are two rows of neutral pebbles, then by symmetry the game is zero if the rows are of equal length and is a win for the first player (who can equalise the rows) if they are unequal. In particular, C + C = 0, so rows containing neutral pebbles are not (conventional) numbers, even by Hackenstring standards. It is easy to see that games composed entirely of neutral pebbles are equivalent to Nim, which is again a special case of a very general result discussed in Conway [1976] and Berlekamp [1982].
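
For small positions, these facts can again be checked by brute force, using a variant of the earlier wins sketch in which a neutral pebble may be taken by either player (names once more illustrative).

    from functools import lru_cache, reduce
    from itertools import product

    @lru_cache(maxsize=None)
    def wins_n(position, player):
        # as `wins` above, except that a 'C' pebble may be taken by either player
        other = 'B' if player == 'A' else 'A'
        for r, row in enumerate(position):
            for i, pebble in enumerate(row):
                if pebble not in (player, 'C'):
                    continue
                nxt = position[:r] + (row[:i],) + position[r + 1:]
                nxt = tuple(sorted(s for s in nxt if s))
                if not wins_n(nxt, other):
                    return True
        return False

    print(wins_n(("C", "C"), 'A'), wins_n(("C", "C"), 'B'))   # -> False False : C + C = 0

    # rows consisting entirely of neutral pebbles behave exactly like Nim heaps
    for heaps in product(range(4), repeat=3):
        rows = tuple(sorted('C' * n for n in heaps if n))
        nim_win = reduce(lambda a, b: a ^ b, heaps, 0) != 0
        assert wins_n(rows, 'A') == wins_n(rows, 'B') == nim_win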

Rows headed by neutral pebbles are always infinitesimal. With any game containing neutral pebbles, there can be associated a reduced game constructed by removing from each row the first neutral pebble and its followers. If the reduced game is positive (won for me), then so is the game including the neutral pebbles. The strategy for this is quite simple: whenever it is my turn, I take a neutral pebble if possible, and otherwise play my winning strategy in whatever is left of the reduced game. Since any move by you in the reduced game makes your position worse, this strategy wins. This result applies even if the reduced game is infinitesimal; for example, A ( B ) + C ... + C ... + ... is a win for me no matter how many rows beginning with C there are and no matter how those rows continue. So, neutrally-headed rows are infinitesimally infinitesimal. This does not stop them from having their own algebra, which will apply if the reduced game is zero; for example, C A + C is a win for me, and so positively infinitesimal; if I move first, I take the A, while if you move first you must kill one of the rows, leaving me to kill the other. More generally, in C A ... + C B ..., the player who is first forced to take a header pebble will lose, so the result is the same as that of the game obtained by deleting the two headers.
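
Using wins_n from the sketch above, the example C A + C comes out positive as claimed, and C A + C B (the simplest instance of the last remark) is indeed a first-player loss.

    print(wins_n(("C", "CA"), 'A'), wins_n(("C", "CA"), 'B'))
    # -> True False : C A + C is a win for me whoever starts, so it is positive
    print(wins_n(("CA", "CB"), 'A'), wins_n(("CA", "CB"), 'B'))
    # -> False False : C A + C B is a first-player loss, like A + B with the headers deleted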

References

Berlekamp, Elwyn R., John H. Conway and Richard K. Guy [1982], Winning Ways, Volume 1, Academic Press Inc. (London) Limited. ISBN 0-12-091101-9.

Conway, John H. [1976], On Numbers and Games, (London Mathematical Society monographs #6), Academic Press, London and New York. ISBN 0-12-186350-6.

Gonshor, Harry [1986], An Introduction to the Theory of Surreal Numbers, (London Mathematical Society lecture note series #110), Cambridge University Press, Cambridge. ISBN 0-521-31205-1.

Knuth, Donald E. [1974], Surreal Numbers: How two ex-students turned on to pure mathematics and found total happiness, a mathematical novelette, Addison-Wesley, Reading, Mass.

Knuth, Donald E. [1981], The Art of Computer Programming, vol. 2, Seminumerical Algorithms, 2nd edition, Addison-Wesley, Reading, Mass. ISBN 0-201-03822-6.


Copyright © Dr A. N. Walker, 1999.