On Turing’s “Computing Machinery and Intelligence”

I recently submitted the following paper for a research skills course as part of my program at Cambridge. I have decided to post it here since people may be interested in learning more about Turing’s life and work given his recent royal pardon. The focus is on his “Computing Machinery and Intelligence” paper, but his other work and life are also covered. I welcome any discussion in the comment section below.

1.1 Background on the Author

Alan M. Turing was a remarkable man, whose breadth and quality of contributions make him deserving of mention alongside such greats as Newton, Darwin, and Maxwell. Turing the person is nearly as fascinating as Turing the mathematician, and more closely resembles the tortured souls seen in great artists than the figure typically seen in great scientists.

Turing was born in London, on 23 June 1912, to a middle-class family [Turing, 2012]. From an early age he was seen as bright, inventive, and skilled with both words and mathematics. In 1930, he won a scholarship to study the Mathematics Tripos at King’s College, Cambridge. He attained only a second in his Part I, but went on to be elected a Fellow of the college upon graduating, at age 22. A close friend, Christopher Morcom, died just before Turing was to start at Cambridge. This affected him throughout his life – at times seeming to provide him with motivation, and at others, loneliness.

The first contribution to bring attention to Turing was his solution to Hilbert and Ackermann’s “Entscheidungsproblem”, which they had proposed in 1928 [Beeson, 2004]. The problem asks whether an algorithm exists that can take a set of axioms and a conjecture and, in a finite amount of time, either determine the proof of the conjecture from the axioms or state that no proof exists. Turing solved the problem by considering his Universal Machine, proving shortly after Alonzo Church that some problems are unsolvable in a finite amount of time. The Universal (Turing) Machine was one of Turing’s great contributions. It has a tape and a read/write head. The tape holds many cells from which information is read, or to which it is written, by the head. The head also knows what state it is in. From these elements alone, this quite simple machine was proven by Turing to be theoretically able to model any other machine, and it formed the inspiration for all subsequent computer architectures.
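
To make the tape-and-head model concrete, here is a minimal sketch of a Turing machine simulator in Python; the function name, transition-table format, and the bit-flipping example are my own illustrative choices, not from Turing’s paper:

```python
# Minimal Turing machine simulator: a tape, a read/write head, and a state,
# exactly the ingredients described above. All names here are illustrative.

def run_turing_machine(transitions, tape, state="start", max_steps=10_000):
    """transitions maps (state, symbol) -> (symbol_to_write, move, next_state)."""
    cells = dict(enumerate(tape))  # sparse tape; missing cells read as blank "_"
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")
        if (state, symbol) not in transitions:
            break  # no applicable rule: the machine stops
        write, move, state = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Example program: flip every bit, halting at the first blank cell.
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "L", "halt"),
}
print(run_turing_machine(flip_bits, "10110"))  # prints "01001_"
```

The universality result is that one such transition table can be written to read any other table from the tape and simulate it; this is the insight behind the stored-program computer.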

Turing went to Princeton to work with Church, earning a PhD after two years of effort [Turing, 2012]. Following that, he returned to King’s College until the outbreak of World War Two. Throughout the war, he was largely based at Bletchley Park, working as one of the lead cryptographers and playing a major role in defeating the Enigma code. This role was instrumental to the Allied war effort in protecting the Allied merchant fleet from German U-Boats in the North Atlantic. He was appointed an Officer of the Order of the British Empire (OBE) for his efforts.
Following the war, Turing joined the National Physical Laboratory to design some of the first computers. Through this time, he wrote internal reports and discussed machine intelligence with colleagues. But it wasn’t until he joined the faculty at the University of Manchester, having left the NPL in frustration with the slow progress of building computers there, that he published “Computing Machinery and Intelligence”. This paper is one of the seminal papers in the field of artificial intelligence, and it continues to be controversial in the fields of psychology and philosophy due to the implications of having intelligent machines.

Having already made major contributions in computer science, cryptography, and artificial intelligence, Turing again wrote a seminal paper, “The Chemical Basis of Morphogenesis” (1952), which probably makes Turing the father of computational biology as well [Turing, 2012].

Turing died from cyanide poisoning on 7 June 1954, in an apparent suicide. A few years earlier, Turing had admitted to a homosexual relationship, then a crime in Britain [Leavitt, 2007]. He was sentenced to estrogen injections, lost his government security clearance, and was no doubt publicly embarrassed. To have lived his life and accomplished what he did, first in secrecy and then under persecution, adds to the legend of Alan Turing. In December 2013, Turing was finally issued a royal pardon by the British Government.

1.2 Summary of the Paper

“Computing Machinery and Intelligence” proposes a way to test whether machines can think. In doing so, Turing provided a jolt to the field of Artificial Intelligence, and the paper provided motivation for much of the early work in the field. While AI eventually began to focus on individual applications, the paper remains relevant in the fields of psychology and philosophy, where Turing’s ideas offer a more systematic way to study the human mind [Millican and Clark, 1999].

Turing proposes an imitation game, in which an examiner communicates with a hidden human and a hidden computer through teletype. The examiner must determine which of the examinees is which. If the computer is able to fool the examiner, it is said to have passed the (Turing) test (the name “Turing Test” was attached to the imitation game only after Turing proposed it in the paper). The test is based on a game in which the examinees are a man and a woman, a setup that has drawn interesting commentary from Leavitt and others in the context of Turing’s homosexuality and his awkwardness around women [Leavitt, 2007].
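
The blind, text-only protocol is the heart of the test. The toy sketch below shows its structure; the channel labels, the placeholder machine, and the scoring are all hypothetical, and the “AI” is deliberately trivial:

```python
# Toy structure of the imitation game: the examiner sees only text from two
# anonymous channels and must guess which one hides the machine.
import random

def machine_reply(question: str) -> str:
    return "I would rather not say."        # placeholder "AI", trivially bad

def human_reply(question: str) -> str:
    return input(f"(human) {question} > ")  # a real person answers here

def imitation_game(questions):
    respondents = [machine_reply, human_reply]
    random.shuffle(respondents)             # hide which channel is which
    channels = dict(zip("AB", respondents))
    for q in questions:
        print(f"Examiner: {q}")
        for label, reply in channels.items():
            print(f"  {label}: {reply(q)}")
    guess = input("Which channel is the machine, A or B? ").strip().upper()
    return channels.get(guess) is machine_reply  # True if the examiner is right

# The machine "passes" when the examiner's guess is no better than chance:
# imitation_game(["What is your favourite poem?", "Add 34957 to 70764."])
```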

While the (Turing) test is probably the most famous outcome of the paper, only a minority of the manuscript is dedicated to it. Instead, Turing extends beyond the test to propose potential qualities of a machine that might eventually pass it, and to address nine potential criticisms of the test. Turing’s description of the type of digital computer that might pass his test is remarkable in its similarity to how computer design has since advanced. The basic architecture of storage, executive unit, and controller remains unchanged. He correctly predicted massive gains in storage and processing power, and anticipated that software would be the major limiting factor in AI. Anticipating the difficulty of manually coding a variety of behaviors, Turing proposes machine learning as a way to program an AI machine.

Of Turing’s nine possible objections, three can probably be discarded today as lacking scientific merit: the objections from theology, consequences-are-too-dreadful, and extra-sensory perception. The remaining objections and Turing’s responses will be summarized.

The mathematical objection comes from the theory of Turing, Church, and others proving that discrete-state machines have inherent limits on their capabilities. Turing’s response is that while machines may have inherent limitations, human intellect probably does as well. He maintains that the imitation game remains a good test even in light of the mathematical objection.

Lack of consciousness is the second continuing objection, and it has since formed the distinction between strong AI (has consciousness) and weak AI (does not). Turing recognizes that an AI could pass his imitation test without having consciousness, and he somewhat dismisses consciousness as irrelevant to the ability to think. In part, his argument relies on the fact that consciousness is not well understood even in humans, so it should not be used in a test of other entities. This objection was later revived in Searle’s famous “Chinese Room” paper, which will be described in the next section.

A third objection to Turing’s test lumps together claims that a machine may not have X, where X is a personified quality such as kindness, a sense of humor, the ability to love, the ability to learn, or the capacity to enjoy strawberries and cream. The response is twofold: Turing objects to the arbitrariness of these qualities as a test of intelligence, and he believes most of them will eventually be possible for a machine.

A particular quality that Turing treats separately is inventiveness, also known as Lady Lovelace’s objection. In the context of Babbage’s early computers, Lady Lovelace commented that computers are limited in that they cannot do anything new; they can only do what their programmers have previously instructed them to do. Turing objects, believing machine learning will allow machines to originate new knowledge. Today, this objection holds little ground given the importance of computers in so many fields, including algorithms that are able to produce mathematical proofs that humans cannot.

The continuous nature of the nervous system is raised as a possible objection to discrete or digital systems thinking. Turing defeats this quite easily by arguing that, with probability and sufficient decimal precision, continuous systems can be modeled by digital ones. Today, we would probably consider the nervous system to be more similar to a digital system than Turing supposed.
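
Turing’s reply can be illustrated with a small sketch: a discrete machine approximates a continuous process to any desired tolerance simply by refining its step size (the differential equation below is an arbitrary illustrative choice):

```python
# A digital (discrete-step) approximation of a continuous system: Euler
# integration of dy/dt = y, whose exact solution at t = 1 is e.
import math

def euler(f, y0, t_end, steps):
    y, dt = y0, t_end / steps
    for i in range(steps):
        y += dt * f(i * dt, y)   # one discrete update of the continuous law
    return y

for steps in (10, 100, 1000):
    approx = euler(lambda t, y: y, 1.0, 1.0, steps)
    print(steps, abs(approx - math.e))  # the error shrinks as steps grow
```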

Finally, the informal nature of human behavior compared to that of machines is mentioned as an objection. Turing uses the example of knowing what action to perform at a changing traffic light. His proposal is for the machine to have rules imposed on it to act more human: rules to make it seem less governed by rules, as it were. The commentary is interesting, coming from an eccentric man who seemed to have little patience for social norms himself [Hodges, 1992]. Today, uncertainty and probability are major focuses of AI, and would probably be the first approach for making a computer seem more informal in its behavior.

At the end of the paper, Turing concludes with speculation on the future of the field. Two main approaches are proposed: focusing on an abstract activity, such as chess, or giving a machine sensory “organs” and teaching it like one would teach a child. In this, he foresees computers playing chess at the level of humans (we now know that they surpassed our ability in 1997, when Deep Blue defeated World Champion Garry Kasparov). Machine learning also now plays a great role in AI, but it is the first approach, of specialized, expert applications, that has proved to be the main role of AI in the modern world. The paper ends with a highly quotable phrase, applicable to many fields of science in general:

“We can only see a short distance ahead, but we can see plenty there that needs to be done.”

1.3 Importance, Impact, and Fallout of the Paper

Turing’s paper is special in the breadth of its impact – over sixty years later it maintains relevance in computer science, philosophy, psychology, and the cognitive sciences. It is one of the foundational papers for AI and machine learning. Even with the benefit of hindsight, Turing’s predictions are remarkably correct considering how young the field was when he made them. As explained in other sections, the impact of AI arrived more slowly than expected, but it is substantial in the modern world.

The strongest criticism of the paper came from Searle in his “Chinese Room” argument [Searle, 1980]. Searle takes us through a thought experiment, proposing that he is trapped in a room with a cypher of Chinese characters. Written Chinese messages are fed to him, and he uses the cypher to write responses. The implication is that he has no idea what the messages actually mean; the scenario is, of course, an analogy to a machine answering questions in a similar way, without understanding their meaning. Searle does concede that the definition of “understand” is considerably more difficult than it appears at first glance. While Turing does not get the benefit of being able to respond to Searle’s attack, his original paper did anticipate it, so we may assume he would have maintained his position that consciousness is different from the ability to think. A further line of response is to question the room-confined Searle himself: if he is unable to understand the Chinese characters being passed to him, does this make him unable to think?
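
A toy version of Searle’s room makes the point about symbol manipulation vivid: the program below produces responses purely by looking up shapes in a table, with no grasp of meaning anywhere (the “rulebook” entries are invented placeholders, not from Searle’s paper):

```python
# A toy "Chinese room": replies come from a rulebook of symbol patterns,
# so the operator (or program) manipulates shapes it does not understand.
RULEBOOK = {
    "你好": "你好！",              # "hello" -> "hello!"
    "你会思考吗？": "当然会。",      # "can you think?" -> "of course."
}

def chinese_room(message: str) -> str:
    # Matching is purely by shape; "understanding" never enters into it.
    return RULEBOOK.get(message, "请再说一遍。")  # "please say that again"

print(chinese_room("你会思考吗？"))  # prints 当然会。 with zero comprehension
```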

Boden comes to Turing’s defense against Searle’s attack, largely by pointing out that the appeals Searle makes to the uniqueness of biological systems compared to engineered ones are at best irrelevant and at worst incorrect [Boden, 2003]. Boden argues that there is nothing “special” about humans: computer vision can functionally equal our own, and our neurons behave in a digital way that is not unlike a silicon transistor.

This argument is in some ways strengthened by the perspective of Newell and Simon [Newell and Simon, 1976]. Even in 1976, they viewed computer science as an empirical science – a far cry from the logic-based mathematics initiated by Turing and his colleagues. The increased complexity of computers led the duo to regard them as “the programmed living machine…the organism we study”.

2.1 Description of Artificial Intelligence and Machine Learning

Turing’s “Computing Machinery and Intelligence” is a seminal paper in artificial intelligence and machine learning, and it continues to be debated in robotics, computer science, psychology, and philosophy today. While the paper had far-reaching and interdisciplinary implications, this analysis will focus on its implications for the applied sciences.

Even the very definition of Artificial Intelligence (AI) is controversial. When trying to describe intelligence, one soon recognizes that one’s description is limited by the observations and experiences of a human, and of only one human at that. Intelligence is considered an important distinction of the human experience, but we find that in defining intelligence we declare agency in a highly personified way.

Worse, it is our instinct to assign intelligence by factors that, on examination, are arbitrary: use of language, maths, or tools, for example. We find ourselves no further along than Descartes, who ascribed agency to himself by declaring, “I think, therefore I am”. Unfortunately, this leaves us unable to evaluate other entities.

Turing’s solution, as will be outlined and analyzed in more detail in the second part of this paper, was to avoid any one criterion for assessing intelligence, and instead to propose that an intelligent machine is one that is able to successfully imitate a human in written conversation. This trial has since been named the “Turing Test”, and it probably remains the best tool for evaluating artificial intelligence, although little recent effort has been put into designing a machine to defeat it [Rich and Knight, 1991] [Haugeland, 1989].

It is perhaps difficult to separate early computing and artificial intelligence, as the young fields had yet to specialize. It could be argued that the field of computing started quite early, with philosophers such as Descartes and Hobbes contemplating the nature of the mind and the machine, and the similarities between the two. Early mechanical computers were proposed by Wilhelm Schickard (1592-1635), Blaise Pascal (1623-1662), Gottfried Leibniz (1646-1716), and Charles Babbage (1792-1871) [Haugeland, 1989]. Arguably, Babbage’s design was the first that could be considered a computer rather than a calculator, and Lady Lovelace’s contribution to the field in documenting and explaining Babbage’s never-built machine should be noted. In 1936, Turing proposed the Universal (Turing) Machine, a theoretical architecture for a computer that inspired the von Neumann architecture used in nearly all computers today [Haugeland, 1989]. A Universal Turing Machine has a tape and a read/write head: the tape holds many cells from which information is read, or to which it is written, by the head, which also knows what state it is in. From these elements alone, this quite simple machine was proven by Turing to be theoretically able to model any other machine.

Artificial Intelligence was first seriously considered around the time of the first computers, around 1950, when Turing published his paper proposing the “Turing Test”. From 1952 to 1969, good academic progress was made in AI, led by MIT, Stanford, Carnegie Mellon University, Dartmouth, and IBM [Boden, 1990].

Many of the early proposed problems in the field were solved, most being simple examples of playing games, word recognition, algorithmic problem solving, machine learning, or solving maths problems [Rich and Knight, 1991]. However, progress soon slowed, as managing the complexity of problems proved more difficult than had been anticipated.

The 1990s again saw considerable progress, with improvements in speech recognition, autonomous vehicles, industrial systems monitoring, and computers playing board games better than the best humans [Mitchell, 1997] [Russell and Norvig, 2003]. Many of these have since been commercialized, or are on the cusp of being so. Through this process, AI has become application-specific, and most work is focused on applications with more utility than passing the Turing Test [Russell and Norvig, 2003]. Dennett, who was Chair of the Loebner Prize Competition (a Turing Test challenge) in the 1990s, questions whether the Turing Test will ever be beaten, and considers attempting to pass it not to be useful research for serious modern AI [Dennett, 2004]. Turing himself anticipated the test would be challenging to pass: he once predicted that by 2000 a machine would exist with a 30% chance of passing the test in a five-minute conversation [Russell and Norvig, 2003]. Later, he said in a radio interview that he expected it would be over 100 years (from 1952) until a machine was built that would reliably win the imitation game [Proudfoot and Copeland, 2004].

Theoretical AI has, in response, created some divisions within itself. The first is between strong and weak AI: weak AI can act as if it is intelligent, whereas strong AI is intelligent and conscious [Haugeland, 1989]. A criticism of the Turing Test is that it may allow an entity that is merely weak AI to pass. Modern AI has also distanced itself from Good Old Fashioned AI (GOFAI), an approach based more on strict logic [Haugeland, 1989]. Present interest is focused more on the management of uncertainty through probabilistic systems.

An important technique in AI is machine learning, which is defined as performance at a task improving with experience [Mitchell, 1997]. Learning was proposed by Turing in the paper as a possible technique to enable flexibility in a machine. The first examples were seen as early as 1955 [Proudfoot and Copeland, 2004]. But it wasn’t until the 1990s that the field of computing advanced sufficiently for great advances to be made [Mitchell, 1997]. The general learning technique is similar to the one Turing proposed: the program is given a feedback function such that it learns by trying to avoid the previously experienced “pain” of a mistake. However, the actual implementation of these algorithms in a practical sense is probably more difficult than Turing and his contemporaries anticipated [Mitchell, 1997]. This in part explains the slower-than-expected progress in AI.
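
A minimal sketch of this learn-from-pain loop is a perceptron whose weights are nudged only when a prediction is wrong; the task (logical AND), the learning rate, and the epoch count below are illustrative choices, far simpler than modern practice:

```python
# Learning by penalizing mistakes, in the spirit of Turing's "pain" signal:
# a perceptron adjusts its weights only when its prediction is wrong.
def train_perceptron(samples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
            error = label - pred              # nonzero only on a mistake
            w[0] += lr * error * x1           # the "pain" drives correction
            w[1] += lr * error * x2
            b += lr * error
    return w, b

# Learn logical AND purely from experience of its own errors.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
       for (x1, x2), _ in data])             # prints [0, 0, 0, 1]
```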

However, machine learning, and artificial intelligence in general, both benefit from and produce useful cross-pollination with other fields, including statistics, philosophy, biology, cognitive science, and control theory [Mitchell, 1997].

2.2 Importance to Engineering and Industrial Practice

While Turing discussed AI in a quite general and academic sense, the field has since become application-specific, with programs written for niche tasks. Robotics, including autonomous vehicles, is probably the most mechanical of the applications of AI, and is of growing importance [Proudfoot, 1999]. Industrial processes and systems are increasingly controlled by AI and AI-inspired systems [Russell and Norvig, 2003].

In some ways, AI remains heavy on potential compared to the benefits currently realized from it. Futurist Ray Kurzweil envisions AI as a key technology in a process that will create unprecedented change in our society [Kurzweil, 2001]. He argues that technology improvement and adoption follow an exponential rather than linear growth curve, as is seen in Moore’s Law for computer transistors. This is in part because improved technology allows the design of further improved technology. Assuming AI follows a similar trajectory, the relatively slow early progress is to be expected, with accelerated returns to come.
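
The arithmetic behind the exponential-growth claim is worth making explicit. The sketch below assumes a fixed doubling period (the classic two-year Moore’s Law figure, used purely for illustration, not as a measurement):

```python
# Compound growth under a fixed doubling period: capability relative to
# year zero. The two-year doubling period is an assumed, illustrative value.
def capability(years, doubling_period=2.0):
    return 2 ** (years / doubling_period)

for years in (2, 10, 20, 40):
    print(years, f"{capability(years):,.0f}x")  # 2x, 32x, 1,024x, 1,048,576x
```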

This leads to what Kurzweil calls the Singularity, which is the point when the slope of this technology-growth curve becomes so steep as to be effectively asymptotic. Kurzweil predicts similar and inter-connected growth in neuroscience and (especially bio-flavored) nanotechnology. According to Kurzweil, when this all happens, we will have essentially infinite access to technology. One outcome of this that Kurzweil is fond of mentioning is that we would see our own immortality [Kurzweil, 2001]. So, at least according to Kurzweil, the implications of AI for engineering, industry, and beyond are vast.

Conclusions

Alan Turing made considerable contributions to computer science and affiliated fields. “Computing Machinery and Intelligence” is one of several seminal papers contributing to the legend of its author. The paper has had wide impact, reaching beyond engineering and computer science to shake the very foundations of how people view themselves. Like Galileo and Copernicus, Turing has forced us to reflect on our own state of being. In a more practical sense, he pioneered in this paper the fields of AI and machine learning, which have recently proven to be of considerable value. One can only anticipate that computer-based intelligence will play a greater role in the future of humankind.

In addition to his notable contributions to science and academics, Alan Turing was a fascinating man, and one to whom history is only beginning to give proper recognition. One can hope that the future will give Turing the acknowledgment that his contributions to an increasingly important field deserve.

References

M. Beeson. The Mechanization of Mathematics. In Alan Turing: Life and Legacy of a Great Thinker. 2004. URL http://cs.sjsu.edu/~beeson/Papers/turing2.pdf

M. Boden. Escaping from the Chinese room. 2003. URL http://scholar.google.co.uk/scholar?hl=en&q=boden+escaping+the+chinese+room&btnG=&as_sdt=1%2C5&as_sdtp=#1

M. Boden. The Philosophy of Artificial Intelligence (Oxford Readings in Philosophy). OUP Oxford, 1990. ISBN 0198248547. URL http://www.amazon.co.uk/Philosophy-Artificial-Intelligence-Oxford-Readings/dp/0198248547

Daniel Dennett. Can Machines Think? In C. Teuscher, editor, Alan Turing: Life and Legacy of a Great Thinker, pages 121–145. 2004. URL http://www.citeulike.org/group/884/article/633858

John Haugeland. Artificial Intelligence: The Very Idea. A Bradford Book, 1989. ISBN 0262580950. URL http://www.amazon.com/Artificial-Intelligence-The-Very-Idea/dp/0262580950

Andrew Hodges. Alan Turing: The Enigma. Vintage, 1992. ISBN 0099116413. URL http://www.amazon.co.uk/Alan-Turing-Enigma-Andrew-Hodges/dp/0099116413

Ray Kurzweil. The Law of Accelerating Returns, 2001. URL http://www.baxtek.com/products/wireless/files/law-of-accelerating-returns.pdf

David Leavitt. The Man Who Knew Too Much: Alan Turing and the Invention of the Computer. Phoenix, 2007. ISBN 0753822008. URL http://www.amazon.co.uk/The-Man-Who-Knew-Much/dp/0753822008

P. Millican and A. Clark, editors. Machines and Thought: The Legacy of Alan Turing. Oxford University Press, 1999.

Tom M. Mitchell. Machine Learning (McGraw-Hill International Edition). McGraw-Hill Higher Education, 1997. ISBN 0071154671. URL http://www.amazon.co.uk/MACHINE-LEARNING-Mcgraw-Hill-International-Edit/dp/0071154671

A. Newell and H. A. Simon. Computer science as empirical inquiry: Symbols and search. Communications of the ACM, 1976. URL http://dl.acm.org/citation.cfm?id=360022

S. Russell and P. Norvig. Artificial Intelligence: A Modern Approach. 2003. URL http://jmvidal.cse.sc.edu/lib/russell03a.html

D. Proudfoot. Robots and Rule Following. In P. Millican and A. Clark, editors, Machines and Thought: The Legacy of Alan Turing, Volume I, page 312. Oxford University Press, USA, 1999. ISBN 0198238762. URL http://www.amazon.com/Machines-Thought-Legacy-Turing-Volume/dp/0198238762

D. Proudfoot and B. J. Copeland. The Computer, Artificial Intelligence, and the Turing Test. In Christof Teuscher, editor, Alan Turing: Life and Legacy of a Great Thinker. Springer Berlin Heidelberg, Berlin, Heidelberg, 2004. ISBN 978-3-642-05744-1. doi: 10.1007/978-3-662-05642-4. URL http://link.springer.com/10.1007/978-3-662-05642-4

Elaine Rich and Kevin Knight. Artificial Intelligence. McGraw-Hill, 1991. ISBN 0070522634. URL http://books.google.co.uk/books/about/Artificial_intelligence.html?id=eH9QAAAAMAAJ&pgis=1

J. R. Searle. Minds, brains, and programs. Behavioral and Brain Sciences, 1980. URL http://journals.cambridge.org/production/action/cjoGetFulltext?fulltextid=6573600

Sara Turing. Alan M. Turing: Centenary Edition. Cambridge University Press, 2012. ISBN 1107020581. URL http://books.google.com/books?hl=en&lr=&id=07_ckaHY-2QC&pgis=1
