The objection from Gödel’s theorem he answered in the same way as he had in 1947, by separating ‘intelligence’ from ‘infallibility’. This time he gave an example of how the intelligent approach could be wrong, and the accurate one stupid:
It is related that the infant Gauss was asked at school to do the addition 15 + 18 + 21 + … + 54 (or something of the kind) and that he immediately wrote down 483, presumably having calculated it as (15 + 54)(54 − 12)/2·3 … One can … imagine a situation where the children were given a number of additions to do, of which the first 5 were all arithmetic progressions, but the 6th was say 23 + 34 + 45 … + 100 + 112 + 122 … + 199. Gauss might have given the answer to this as if it were an arithmetic progression, not having noticed that the 9th term was 112 instead of 111. This would be a definite mistake, which the less intelligent children would not have been likely to make.
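The arithmetic in the quoted anecdote can be checked directly. A quick sketch: the divisor 2·3 reads as 2 × 3, so (54 − 12)/3 = 14 counts the terms of the progression, and the sixth, 'trap' sum differs by exactly one from what the arithmetic-progression shortcut would give:

```python
# Gauss's shortcut for 15 + 18 + 21 + ... + 54:
# (first + last) * (number of terms) / 2, with (54 - 12)/3 = 14 terms.
assert sum(range(15, 55, 3)) == (15 + 54) * (54 - 12) // (2 * 3) == 483

# The 'trap' series: an arithmetic progression 23, 34, 45, ..., 199,
# except that the 9th term is 112 instead of 111.
trap = [23 + 11 * k for k in range(17)]  # the genuine progression up to 199
trap[8] = 112                            # the anomalous 9th term
guess = (23 + 199) * len(trap) // 2      # answer assuming a true progression
print(sum(trap), guess)                  # the two differ by exactly 1
```

The shortcut answer undercounts by one, the 'definite mistake' Turing describes.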
More pertinent, perhaps, would have been the unmentionable fact that although careless about detail in the cryptanalytic work, he had still been regarded as the brain behind it. Implicitly, this argument appealed to the imitation principle of ‘fair play for machines’. This also lay behind his answer to a fifth objection, that
(e) In so far as a machine can show intelligence this is to be regarded as nothing but a reflection of the intelligence of its creator.
This he described as
similar to the view that the credit for the discoveries of a pupil should be given to his teacher. In such a case the teacher would be pleased with the success of his methods of education, but would not claim the results themselves unless he had actually communicated them to his pupil. He would certainly have envisaged in very broad outline the sort of thing his pupil might be expected to do, but would not expect to foresee any sort of detail. It is already possible to produce machines where this sort of situation arises in a small degree. One can produce ‘paper machines’ for playing chess. Playing against such a machine gives a definite feeling that one is pitting one’s wits against something alive.
This idea of teaching a machine to improve its behaviour into ‘intelligence’ was the key to most of the positive proposals of this essay. This time he would be putting the imitation principle to a constructive use. The real point of this paper was that he was beginning to think seriously about the nature of human intelligence, and how it resembled, or differed from, that of a computer. More and more clearly as time went on, the computer became the medium for his thought, not about mathematics, but about himself and other people.
He could see two possible lines of development. There was the ‘instruction note’ view, according to which better and better programs would be written, allowing the machine to take more and more over for itself. He thought this should be done. But his dominant interest now lay in the ‘states of mind’ approach to ‘building a brain’. His guiding idea was that ‘the brain must do it somehow’, and that it had not become capable of thought by virtue of some higher being writing programs for it. There must be a way in which a machine could learn for itself, according to this line of argument, just as the brain did. He explained his view that ‘intelligence’ had not been wired into the brain at birth, in a passage showing the influence of his recent research into physiology and psychology:
Many parts of a man’s brain are definite nerve circuits required for quite definite purposes. Examples of these are the ‘centres’ which control respiration, sneezing, following moving objects with the eyes, etc: all the reflexes proper (not ‘conditioned’) are due to the activities of these definite structures in the brain. Likewise the apparatus for the more elementary analysis of shapes and sounds probably comes into this category. But the more intellectual activities of the brain are too varied to be managed on this basis. The difference between the languages spoken on the two sides of the Channel is not due to differences in development of the French-speaking and English-speaking parts of the brain. It is due to the linguistic parts having been subjected to different training. We believe then that there are large parts of the brain, chiefly in the cortex, whose function is largely indeterminate. In the infant these parts do not have much effect: the effect they have is uncoordinated. In the adult they have great and purposive effect: the form of this effect depends on the training in childhood. A large remnant of the random behaviour of infancy remains in the adult.
All this suggests that the cortex of the infant is an unorganised machine, which can be organised by suitable interfering training. The organising might result in the modification of the machine into a universal machine or something like it.
Although expressed in more modern terms, reflecting the great debate between the protagonists of nature and nurture, this was little more than could have come from Natural Wonders, with its little homilies on the virtues of training the brain in childhood, and how languages and other skills could be best incorporated in the ‘remembering place’ while the brain was still receptive.
According to this view, therefore, it would be possible to start with an ‘unorganised’ machine, which he thought of as made up in a rather random way from neuron-like components, and then ‘teach’ it how to behave:
…by applying appropriate interference, mimicking education, we should hope to modify the machine until it could be relied on to produce definite reactions to certain commands.
The education that he had in mind was of the public school variety, by the carrot and stick that the Conservative party was currently accusing Attlee of taking away from the British donkey worker:
…The training of the human child depends largely on a system of rewards and punishments, and this suggests that it ought to be possible to carry through the organising with only two interfering inputs, one for ‘pleasure’ or ‘reward’ (R) and the other for ‘pain’ or ‘punishment’ (P). One can devise a large number of such ‘pleasure-pain’ systems. …Pleasure interference has a tendency to fix the character, i.e. towards preventing it changing, whereas pain stimuli tend to disrupt the character, causing features which had become fixed to change, or to become again subject to random variation. …It is intended that pain stimuli occur when the machine’s behaviour is wrong, pleasure stimuli when it is particularly right. With appropriate stimuli on these lines, judiciously operated by the ‘teacher’, one may hope that the ‘character’ will converge towards the one desired, i.e. that wrong behaviour will tend to become rare.
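The scheme in the quoted passage can be sketched in miniature. This is a toy interpretation, not Turing's actual construction: undetermined entries of the machine's 'character' are tentative random guesses; pleasure fixes a tentative entry, pain returns it to random variation:

```python
import random

class PleasurePainMachine:
    """A toy 'pleasure-pain' system in the spirit of the quoted passage."""
    def __init__(self, n_outputs):
        self.n_outputs = n_outputs
        self.fixed = {}      # entries of the 'character' fixed by pleasure
        self.tentative = {}  # current random guesses, subject to variation

    def act(self, stimulus):
        if stimulus in self.fixed:
            return self.fixed[stimulus]
        self.tentative[stimulus] = random.randrange(self.n_outputs)
        return self.tentative[stimulus]

    def reward(self, stimulus):
        # Pleasure: fix the character, preventing it changing.
        if stimulus in self.tentative:
            self.fixed[stimulus] = self.tentative.pop(stimulus)

    def punish(self, stimulus):
        # Pain: disrupt, making the entry again subject to random variation.
        self.fixed.pop(stimulus, None)
        self.tentative.pop(stimulus, None)

# The 'teacher' trains the machine to invert a bit, rewarding right
# behaviour and punishing wrong behaviour.
random.seed(0)
machine = PleasurePainMachine(2)
desired = {0: 1, 1: 0}
for _ in range(1000):
    for s in (0, 1):
        if machine.act(s) == desired[s]:
            machine.reward(s)
        else:
            machine.punish(s)

# After training, wrong behaviour has become rare (here: absent).
assert all(machine.act(s) == desired[s] for s in (0, 1))
```

The 'character' converges towards the one desired because a rewarded entry can no longer vary, while a punished one is thrown back into random variation until it happens to be right.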
If the object were simply to produce a universal machine, it would be better to design and build it directly. The point, however, was that the machine thus educated would not merely acquire the capacity to carry out complicated instructions, which Alan described as being like a person who ‘would have no common sense, and would obey the most ridiculous orders unflinchingly’. It would not only be able to do its duty, but would have that elusive ‘initiative’ that characterised intelligence. Nowell Smith, worrying about the development of independence of character within a system of mere routine, could not better have formulated the problem:
If the untrained infant’s mind is to become an intelligent one, it must acquire both discipline and initiative. So far we have been considering only discipline. To convert a brain or machine into a universal machine is the extremest form of discipline. Without something of this kind one cannot set up proper communication. But discipline is certainly not enough in itself to produce intelligence. That which is required in addition we call initiative. This statement will have to serve as a definition. Our task is to discover the nature of this residue as it occurs in man, and to try and copy it in machines.
Alan would have liked the idea of training a boy machine to take the initiative. He did work out an example of modifying a ‘paper machine’ into a universal machine by these means, but decided it was a ‘cheat’, because his method amounted to working out the exact internal structure, the ‘character’ of the machine, and then correcting it – more like codebreaking than teaching.
It was all very laborious on paper, and he was eager to take the next step:
I feel that more should be done on these lines. I would like to investigate other types of unorganised machine, and also to try out organising methods that would be more nearly analogous to our ‘methods of education’. I made a start on the latter but found the work altogether too laborious at present. When some electronic machines are in actual operation I hope that they will make this more feasible. It should be easy to make a model of any particular machine that one wishes to work on within such a [computer]* instead of having to work with a paper machine as at present. If also one decided on quite definite ‘teaching policies’ these could also be programmed into the machine. One would then allow the whole system to run for an appreciable period, and then break in as a kind of ‘inspector of schools’ and see what progress had been made.
It was a happy thought that, like a public school, the machine could grind its way along quite deterministically, but without anyone knowing what was going on inside. They would see only the end products. There was a distinctly behaviourist flavour to all this talk of pain and pleasure buttons, but his wry use of the words ‘training’, ‘discipline’, ‘character’ and ‘initiative’ showed how this was the behaviourism of Sherborne School.
More precisely, it was the official description of the school process, albeit presented as rather a joke. It bore little relationship to his own mental growth. No one had pressed any pleasure buttons to reward his initiative; precious few pleasure buttons at all, while the pain had been dispensed freely in order to enforce patterns of behaviour that had nothing to do with intellectual advance. The only hint of contact with his own experience was the remark that discipline was necessary for the sake of communication, for certainly he had to be pushed into conventional communication in order to advance. Yet even there, it had not been jabs of pain and pleasure that had stimulated his willingness to communicate, but the aura – was it pain? was it pleasure? – that surrounded the figure of Christopher Morcom. As Victor Beuttell had often said to him, it was a mystery from where his ‘intelligence’ derived, for no one had been able to teach him mathematics.
Wittgenstein also liked to talk about learning and teaching. But his ideas derived not from the example of an English public school, but from his experience in an Austrian elementary school, where he had explicitly tried to get away from the repressive rote-learning that Alan had endured. By this time Alan had compared his school experience with Robin, who had had a much happier time at Abbotsholme, the progressive boys’ boarding school where Edward Carpenter’s ideas had enjoyed an influence, and ‘Dear Love of Comrades’ was the school song. Alan, speaking to Robin of Sherborne, had said: ‘The great thing about a public school education is that afterwards, however miserable you are, you know it can never be quite so bad again.’† But there was no trace of his criticism of the Sherborne process in this essay, except inasmuch as he was enjoying a sally at the pompous old masters by talking of replacing them by machines. There was a gap here, a certain lack of seriousness. It was rather like Samuel Butler in Erewhon, wittily transposing the values attached to ‘sin’ and to ‘sickness’ in order to tease the official Victorian mentality, yet never questioning that beatings would be the appropriate ‘treatment’ for ‘sin’.
But in other ways, he certainly did recognise that his machine model of the brain was deprived of some very significant features of human reality. This was where he began to question the isolated puzzle-solver as a model for the understanding of Mind:
…in so far as a man is a machine he is one that is subject to very much interference. In fact interference will be the rule rather than the exception. He is in frequent communication with other men, and is continually receiving visual and other stimuli which themselves constitute a form of interference. It will only be when the man is ‘concentrating’ with a view to eliminating these stimuli or ‘distractions’ that he approximates a machine without interference … although a man when concentrating may behave like a machine without interference, his behaviour when concentrating is largely determined by the way he has been conditioned by previous interference.
In a soaring flight of imagination, he supposed it possible to equip a machine with ‘television cameras, microphones, loudspeakers, wheels and “handling servo-mechanisms” as well as some sort of “electronic brain”’. Tongue in cheek, he proposed that it should ‘roam the countryside’ so that it ‘should have a chance of finding things out for itself’, on the human analogy, and perhaps thinking of his own country walks at Bletchley, where his odd behaviour had attracted the spy-conscious citizen’s suspicion. But he admitted that even so well-equipped a robot would still ‘have no contact with food, sex, sport, and many other things of interest to the human being’ – and certainly of interest to Alan Turing. His conclusion was that it was necessary to investigate what
can be done with a ‘brain’ which is more or less without a body, providing at most organs of sight, speech and hearing. We are then faced with the problem of finding suitable branches of thought for the machine to exercise its powers in.
The suggestions he made were simply the activities that had been pursued on and off duty in Hut 8 and Hut 4, rather surprisingly brought into the open:
(i) Various games e.g. chess, noughts and crosses, bridge, poker
(ii) The learning of languages
(iii) Translation of languages
(iv) Cryptography*
(v) Mathematics.
Of these (i), (iv) and to a lesser extent (iii) and (v) are good in that they require little contact with the outside world. For instance in order that the machine should be able to play chess its only organs need be ‘eyes’ capable of distinguishing the various positions on a specially made board, and means for announcing its own moves. Mathematics should preferably be restricted to branches where diagrams are not much used. Of the above possible fields the learning of languages would be the most impressive, since it is the most human of these activities. This field seems however to depend rather too much on sense organs and locomotion to be feasible.
The field of cryptography will perhaps be the most rewarding. There is a remarkably close parallel between the problems of the physicist and those of the cryptographer. The system on which a message is enciphered corresponds to the laws of the universe, the intercepted messages to the evidence available, the keys for a day or a message to important constants which have to be determined. The correspondence is very close, but the subject matter of cryptography is very easily dealt with by discrete machinery, physics not so easily.
There was more to Intelligent Machinery than this. One feature was that he laid down definitions of what was meant by ‘machine’, in such a way that it connected the 1936 Turing machine with the real world. He distinguished first:
‘Discrete’ and ‘Continuous’ machinery. We may call a machine ‘discrete’ when it is natural to describe its possible states as a discrete set. …The states of ‘continuous’ machinery on the other hand form a continuous manifold. …All machinery can be regarded as continuous, but when it is possible to regard it as discrete it is usually best to do so.
and then:
‘Controlling’ and ‘Active’ machinery. Machinery may be described as ‘controlling’ if it only deals with information. In practice this condition is much the same as saying that the magnitude of the machine’s effects may be as small as we please. … ‘Active’ machinery is intended to produce some definite physical effect.
He then gave examples:
A Bulldozer: Continuous Active
A Telephone: Continuous Controlling
A Brunsviga: Discrete Controlling
A Brain: probably Continuous Controlling, but very similar to much discrete machinery
The ENIAC, ACE, etc.: Discrete Controlling
A Differential Analyser: Continuous Controlling
A ‘Brunsviga’ was a standard make of desk calculator, and the point was that such a machine, like an Enigma, a Bombe, a Colossus, the ENIAC or the planned ACE was best regarded as a ‘controlling’ device. In practice it would have a physical embodiment, but the nature of the embodiment, and the magnitude of its physical effects, were essentially irrelevant. The Turing machine was the abstract version of such a ‘discrete controlling’ machine, and the cipher machines and decipherment machines were physical versions of them. They had taken up much of his working life. And the fundamental thesis of Intelligent Machinery was that the brain could also be ‘best regarded as’ a machine of this kind.
The paper also included a short calculation which bridged the two descriptions of a machine such as a computer, the logical description and the physical description. He showed that in a job taking more than 10¹⁰ steps, a physical storage mechanism would be virtually certain to jump into the ‘wrong’ discrete state, because of the ever-present effects of random thermal noise. This was hardly a practical constraint. He might have made a similar calculation regarding the effect of quantum indeterminacy, and the upshot would have been the same. The determinism of the logical machine, although it could never be rendered with absolute perfection, was still effectively independent of all the ‘Jabberwocky’ of physics. This part of the paper integrated his several interests in logic and physics, mapped out where his own work stood within a wider framework, and summed up a long chapter of unfulfilled ambitions.
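The flavour of such a calculation can be illustrated with assumed figures; the per-step error probability below is hypothetical, not Turing's. If each of N steps independently carries a tiny chance p of a thermal error, the chance of at least one error over the whole job is 1 − (1 − p)^N ≈ 1 − e^(−pN), which approaches certainty once pN is large:

```python
import math

# Illustrative only: p is a hypothetical per-step probability that thermal
# noise flips a storage element into the 'wrong' discrete state.
p = 1e-9       # assumed per-step error probability
N = 10**10     # number of steps in the job

# Probability of at least one error over the whole job.
p_any_error = 1 - (1 - p) ** N
print(p_any_error)  # close to 1 - exp(-10): an error is virtually certain
```

With these assumed numbers the job fails with probability above 0.9999, which is the qualitative point: over enough steps, even a fantastically reliable physical mechanism will almost surely depart from its logical description.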
Alan Turing: The Enigma The Centenary Edition Page 60