As this is not an engineering module, we are not primarily concerned [and they certainly weren't in the 1930s and before!] with the actual circuits, transistors, valves, string, sealing wax, silicon, paper or magnetic tape, and other contraptions that go to make up a computer. We are concerned only with the behaviour of these components. More specifically, we assume that each component is a `black box', which has some inputs and some outputs associated with it. The inputs [0, 1 or more of them] come from other components or from the outside world -- e.g. a temperature sensor, or a photo-electric voltage representing `hole' or `no-hole' in a punched paper tape, or a user typing something at a keyboard or moving a mouse. The outputs [1 or more] go to other components or to the outside world, either as information written to a screen or other device or as a voltage [or whatever] that drives some external device. A component with no outputs is essentially useless; one with no inputs may be used as, for example, a source of clock pulses or of random numbers.
Since we are concerned with logical behaviour and not physical behaviour, we assume that inputs, outputs, time, and other parameters of the system are quantised. That means that time proceeds in definite `ticks'; at each tick, our black box can inspect its inputs and will find each of them in one of a number of definite `states', will itself be in one of a number of definite states, and will set each of its outputs to one of a number of definite states.
This idea is much more familiar to us today than it was to mathematicians in the 1930s when these ideas were first being discussed. Many apparently continuous devices [photographs, telephone conversations, gramophone records, handwriting] are routinely handled by digital means today [television raster scans or CCD arrays, computerised telephone exchanges, CDs, keyboards]. Early papers spend much time explaining how a continuous function can be approximated by a sufficiently fine set of discrete values.
Any device can then be characterised by a matrix. Each row of this matrix corresponds to one of the internal states of the device; each column to one of the possible states of the inputs; each entry to what happens to the device if it is in this state and detects this input. For deterministic machines, this entry will state what the next state of the device will be, and what the output will be; for probabilistic machines, it will have to include information about the probabilities with which the consequent events occur. At least in the simple case, this slightly unconventional matrix can be replaced by two perfectly normal matrices, one for state, one for output.
Example: A one-bit memory has two states: 0 [off] and 1 [on]. It has three inputs, each of which is either inactive [0] or active [1]: one to set the memory `on', one to clear it `off', and one to read its contents. There is one output, which is a copy of the memory if the `read' input is active and is otherwise `off'. This component can be described by the following pair of matrices:
                  next state                |              output
         000 001 010 011 100 101 110 111   |  000 001 010 011 100 101 110 111
    0     0   0   0   0   1   1   0   0    |   0   0   0   0   0   0   0   0
    1     1   1   0   0   1   1   1   1    |   0   1   0   1   0   1   0   1

[Rows are labelled by the current state of the memory; columns by the input
triple `set clear read'.]
Notes:
[Assumes that if both `set' and `clear' inputs are active simultaneously,
then the status quo continues.]
See also the example below.
Note that it is unusual that there should be only 0s and 1s in these
matrices -- a more complicated component might have dozens [or hundreds
or billions] of states and possible outputs.
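By way of illustration, the two matrices translate directly into a few lines of code. Here is a minimal Python sketch [the names and layout are merely illustrative, not part of the formal model]: the input triple (set, clear, read) is read as a three-bit column number, and each tick simply looks up the next state and the output.

    # Next-state and output matrices for the one-bit memory, transcribed
    # from the tables above.  Rows: current state 0 or 1.  Columns: the
    # input triple (set, clear, read) as a 3-bit number 000..111.
    NEXT_STATE = [
        [0, 0, 0, 0, 1, 1, 0, 0],   # current state 0
        [1, 1, 0, 0, 1, 1, 1, 1],   # current state 1
    ]
    OUTPUT = [
        [0, 0, 0, 0, 0, 0, 0, 0],   # state 0: the output is always off
        [0, 1, 0, 1, 0, 1, 0, 1],   # state 1: the output copies the `read' input
    ]

    def tick(state, set_, clear, read):
        """One tick of the black box: return (next state, output)."""
        column = set_ * 4 + clear * 2 + read    # the input triple as a column number
        return NEXT_STATE[state][column], OUTPUT[state][column]

    # Start off, set the memory, then read it back.
    state = 0
    state, out = tick(state, set_=1, clear=0, read=0)   # set: state becomes 1, output 0
    state, out = tick(state, set_=0, clear=0, read=1)   # read: state stays 1, output 1
    print(state, out)                                   # prints: 1 1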
PLEASE PUT THE NEXT DISC INTO THE DRIVE, PRESS ANY KEY WHEN READY
TO PROCEED.
As long as we can supply new discs whenever needed, and recover old
ones upon request, a finite computer has been given an unbounded amount
of storage.
Note that as long as we keep the discs in order, the computer can
ask for the next disc or the previous disc till the cows come home;
but it cannot in general ask for a specifically numbered disc,
for in an unbounded calculation the number of the disc will eventually
become too large for the computer to store, even with the help of
the entire hard disc and the floppy drive.
Naturally, this is a theoretical rather than practical constraint!
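Purely as an illustration [the class and its names are invented for this sketch], the store behaves like an unbounded sequence of fixed-size discs that the machine can only step through one at a time, never address by number:

    class DiscStore:
        """An unbounded store made of fixed-size discs, accessed one at a time."""

        def __init__(self, disc_size=1440):         # disc capacity chosen arbitrarily
            self.discs = [bytearray(disc_size)]     # the operator supplies blank discs
            self.position = 0                       # which disc is currently in the drive

        def next_disc(self):
            """Ask for the next disc [a fresh blank one if none exists yet]."""
            self.position += 1
            if self.position == len(self.discs):
                self.discs.append(bytearray(len(self.discs[0])))
            return self.discs[self.position]

        def previous_disc(self):
            """Ask for the previous disc back."""
            if self.position > 0:
                self.position -= 1
            return self.discs[self.position]

        def current_disc(self):
            return self.discs[self.position]

    store = DiscStore()
    store.current_disc()[0] = 1      # write something on the disc in the drive
    store.next_disc()                # swap in a fresh disc
    store.previous_disc()            # ask for the first one back
    print(store.current_disc()[0])   # prints: 1 -- nothing was lost

The list index here is only bookkeeping for the operator; the machine itself never refers to it, which is the point of the constraint above.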
Anyway, we now look briefly at what finite machines look like and what they can and cannot do, and then we do likewise for unbounded machines. We start with an example of a model for finite machines that was initially developed for a quite different purpose, just to show that computers are not necessarily made of transistors and bits and bytes.
A full discussion would take us too far afield, but the
examples left show that we can easily construct cells to emulate
logical and, or and not.
So it is easy to construct arbitrarily complicated logical functions,
and hence the whole of mathematics!
Well, more-or-less.
There is much more about all this in Minsky.
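To make the cell idea concrete, here is a minimal Python sketch of such threshold cells [a simplified rendering of the McCulloch-Pitts rule: a cell fires if enough excitors fire and no inhibitor does; the exact wiring of the not-cell is one of several possible choices]:

    def cell(excitors, inhibitors, threshold):
        """A threshold cell: fires iff no inhibitor fires and at least
        `threshold' of the excitors fire."""
        if any(inhibitors):
            return 0
        return 1 if sum(excitors) >= threshold else 0

    def and_cell(a, b):            # fires only when both inputs fire
        return cell([a, b], [], threshold=2)

    def or_cell(a, b):             # fires when either input fires
        return cell([a, b], [], threshold=1)

    def not_cell(a):               # a constantly-firing excitor, inhibited by the input
        return cell([1], [a], threshold=1)

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, '->', and_cell(a, b), or_cell(a, b), not_cell(a))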
For a slightly more extended example, here is a single bit of memory.
The memory itself is the lower cell;
note the simple feedback loop which enables it to retain its state
of firing or quiet.
It will be switched on whenever the excitor fires, and
off whenever the inhibitor fires.
[Note that, in contrast to the matrix given above, if both inputs fire
simultaneously, the cell is switched off, not left as is.]
At every tick, the memory state is `copied' to the upper and-cell,
which therefore copies the memory to the output whenever [and only
when] the `read' fibre fires;
so a memory bank of these cells can be set or cleared in any combination,
and arbitrary subsets of them `read', emulating the real behaviour
of computer memory.
[However, there is an inherent delay;
the output fibre's firing reflects the state of the memory cell
before the previous tick, not its current state.]
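By way of illustration, here is a minimal Python sketch of this two-cell network [the coding is arbitrary; only the update rule matters]. Both cells are updated synchronously, each tick's firing being computed from the previous tick's, which is exactly where the delay comes from:

    def step(memory, set_, clear, read):
        """Given the fibres firing at one tick, return (memory cell, output fibre)
        firing at the next tick."""
        # Lower cell: kept firing by its own feedback loop or by the `set' excitor,
        # switched off by the `clear' inhibitor [which wins if both fire at once].
        new_memory = 0 if clear else (1 if (set_ or memory) else 0)
        # Upper and-cell: fires iff the memory cell and the `read' fibre both fired.
        new_output = 1 if (memory and read) else 0
        return new_memory, new_output

    memory, output = 0, 0                                    # firing pattern at tick 0
    memory, output = step(memory, set_=1, clear=0, read=0)   # `set' fires at tick 0; memory on at tick 1
    memory, output = step(memory, set_=0, clear=0, read=1)   # `read' fires at tick 1; output on at tick 2
    print(memory, output)                                    # prints: 1 1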
It would not be incredibly hard to build a neuron network that modelled the behaviour of a real computer circuit. Minsky shows that an arbitrary finite machine can be emulated by a neuron network that uses only the and, or and not cells previously shown.