On Turing’s “Computing Machinery and Intelligence”

I recently submitted the following paper for a research skills course as part of my program at Cambridge.  I have decided to post it here since people may be interested in learning more about his life and work given his recent royal pardon.  The focus is on his “Computing Machinery and Intelligence” paper, but his other work and life are also included.  I welcome any discussion in the comment section below. 

1.1 Background on the Author

Alan M. Turing was a remarkable man, whose breadth and quality of contributions make him deserving of mention alongside such greats as Newton, Darwin, and Maxwell. Beyond Turing the mathematician, Turing the person is nearly as fascinating, resembling the tortured souls of great artists more than the typical great scientist.

Turing was born in London, on 23 June 1912, to a middle-class family [Turing, 2012]. He was seen as bright from an early age: inventive, and skilled in both words and mathematics. In 1930, he won a scholarship to study the Mathematics Tripos at King’s College, Cambridge. He attained only a second in his Part I, but went on to be elected a Fellow of the college upon graduating, at age 22. A close friend, Christopher Morcom, died just before Turing was to start at Cambridge. This affected him throughout his life – at times seeming to provide him with motivation, and at others, loneliness.

The first contribution to bring attention to Turing was his solution to Hilbert and Ackermann’s “Entscheidungsproblem”, which they had proposed in 1928 [Beeson, 2004]. The problem asks whether an algorithm exists that can take a set of axioms and a conjecture and, in a finite amount of time, either produce a proof of the conjecture from the axioms or state that no proof exists. Turing solved the problem by considering his Universal Machine, proving shortly after Alonzo Church that some problems cannot be solved in a finite amount of time. The Universal (Turing) Machine was one of Turing’s great contributions. It has a tape and a read/write head. The tape holds many ‘bits’ of information, which the head reads from or writes to. The head also tracks which state it is in. From these elements alone, this quite simple machine was proven by Turing to be able, in theory, to model any other machine, and it formed the inspiration for all subsequent computer architectures.
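The mechanics Turing described are simple enough to sketch in a few lines of modern code. The following is a toy simulator; the bit-flipping transition table is an invented example for illustration, not a machine Turing himself gave:

```python
# Minimal Turing machine: a tape, a read/write head, and a state table.
def run(transitions, tape, state="start", steps=1000):
    """Execute a table of rules: (state, symbol) -> (write, move, next_state)."""
    cells = dict(enumerate(tape))  # sparse tape; "_" is the blank symbol
    head = 0
    for _ in range(steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")
        write, move, state = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Invented example machine: flip every bit, then halt at the blank end.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run(flip, "0110"))  # -> 1001
```

Everything the machine "knows" lives in the transition table; swapping in a different table gives a different machine, which is the sense in which a single universal machine can model any other.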

Turing went to Princeton to work with Church, earning a PhD for two years of his efforts [Turing, 2012]. Following that, he returned to King’s College until the outbreak of World War Two. Throughout the war, he was largely based at Bletchley Park, working as one of the lead cryptographers and playing a major role in breaking the Enigma code. This role was instrumental to the Allied war effort in protecting the Allied merchant fleet from German U-Boats in the North Atlantic. He was appointed an Officer of the Order of the British Empire for his efforts.
Following the war, Turing joined the National Physical Laboratory to design some of the first computers. During this time, he wrote internal reports and discussed machine intelligence with colleagues. But it wasn’t until he joined the faculty at the University of Manchester, having left the NPL in frustration with the slow progress of building computers there, that he published “Computing Machinery and Intelligence”. This paper is one of the seminal papers in the field of artificial intelligence, and continues to be controversial in the fields of psychology and philosophy due to the implications of having intelligent machines.

Having already made major contributions in computer science, cryptography, and artificial intelligence, Turing again wrote a seminal paper entitled “The Chemical Basis of Morphogenesis” in 1952, which probably makes Turing the father of computational biology as well [Turing, 2012].

Turing died from cyanide poisoning on 7 June 1954, in an apparent suicide. A few years before, Turing had admitted to being homosexual, a crime in Britain at the time [Leavitt, 2007]. He was sentenced to estrogen injections, lost his government security clearance, and was no doubt embarrassed publicly. To have accomplished what he did while living first in secrecy and then under persecution adds to the legend of Alan Turing. In December 2013, Turing was finally issued a pardon by the British Government.

1.2 Summary of the Paper

“Computing Machinery and Intelligence” proposes a way to test whether machines can think. In doing so, Turing provided a jolt to the field of Artificial Intelligence, and the paper provided motivation for much of the early work in the field. While AI eventually began to focus on individual applications, the paper remains relevant in the fields of psychology and philosophy, where Turing’s ideas offer a more systematic way to study the human mind [Millican and Clark, 1999].

Turing proposes an imitation game, where an examiner communicates with a hidden human and a hidden computer through teletype. The examiner must determine which of the examinees is which. If the computer is able to fool the examiner, it is said to have passed the (Turing) test – the name “Turing Test” was given to the imitation game only after Turing proposed it in the paper. The test is based on a game where the examinees are a man and a woman, the description of which has drawn interesting commentary, in the context of Turing’s homosexuality and awkwardness with women, from Leavitt and others [Leavitt, 2007].

While the (Turing) test is probably the most famous outcome of the paper, only a minority of the manuscript is dedicated to it. Instead, Turing extends beyond the test to propose potential qualities of a machine that might eventually pass it, as well as to address nine potential criticisms of the test. Turing’s description of the type of digital computer that may pass his test is remarkable in its similarities to how computer design has advanced. The basic architecture of storage, executive unit, and controller remains unchanged. He correctly predicted massive gains in storage and processing power, and anticipated that software would be the major limiting factor in AI. Anticipating the difficulty of manually coding a variety of behaviors, Turing proposes machine learning as a way to program an AI machine.

Of Turing’s nine possible objections, three can probably be discarded today as lacking scientific merit: the objections from theology, consequences-are-too-dreadful, and extra-sensory perception. The remaining objections and Turing’s responses are summarized below.

The mathematical objection stems from the theory of Turing, Church, and others proving that discrete-state machines have inherent limits on their capabilities. Turing’s response is that while machines may have inherent limitations, human intellect probably does as well. He feels the imitation game remains a good test even in light of the mathematical objection.
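The limit in question can be sketched as a short reductio: assume a perfect halting oracle exists, then build a program that defies it. The `halts` function below is hypothetical by construction – the whole point of the sketch is that no such function can ever be completed:

```python
# Sketch of the diagonal argument behind the halting problem.
def halts(program, argument):
    """Hypothetical oracle: True iff program(argument) would halt.
    No total, correct implementation can exist -- that is the theorem."""
    raise NotImplementedError("no such function can be written")

def contrary(program):
    """Do the opposite of whatever the oracle predicts for program(program)."""
    if halts(program, program):
        while True:
            pass  # loop forever, contradicting the oracle's prediction
    # otherwise halt immediately, again contradicting the oracle

# Asking whether contrary(contrary) halts has no consistent answer:
# it halts if and only if halts(contrary, contrary) says it does not.
```

Turing's observation was that a discrete-state machine can never answer such questions in general; his rejoinder is that we have no evidence humans can either.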

Lack of consciousness is the second enduring objection, and it has since formed the distinction between strong AI (has consciousness) and weak AI (does not). Turing recognizes that AI could pass his imitation test without having consciousness, and somewhat dismisses consciousness as being relevant to the ability to think. In part, his argument relies on the fact that consciousness is not well understood in humans, so it should not be used in a test of other entities. This objection was later raised again in Searle’s famous “Chinese Room” paper, which will be described in the next section.

A third objection to Turing’s test lumps together claims that a machine may not have X, where X is a personified quality such as kindness, a sense of humor, the ability to love, the ability to learn, or the capacity to enjoy strawberries and cream. The response is twofold: Turing objects to the arbitrariness of these qualities as a test of intelligence, and he believes most of these things are possible for a machine to do.

A particular quality that Turing treats separately is inventiveness, also known as Lady Lovelace’s objection. In the context of Babbage’s early computers, Lady Lovelace commented that computers are limited in that they cannot do anything new; they can only do what their programmers have previously instructed them to do. Turing objects, believing machine learning will allow machines to originate new knowledge. Today, this objection holds little ground given the importance of computers in so many fields, including algorithms able to produce mathematical proofs that humans cannot.

The continuous nature of the nervous system is raised as a possible objection to the possibility of discrete, digital systems thinking. Turing defeats this quite easily by arguing that, with probability and decimals, continuous systems can be modeled by digital ones. Today, we would probably consider the nervous system to be more similar to a digital system than Turing assumed.
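Turing's reply is easy to illustrate numerically: sample a continuous signal on a discrete grid, and the worst-case error shrinks as the grid gets finer. The signal and step sizes in this toy sketch are arbitrary choices, not anything from the paper:

```python
import math

# Approximate a continuous signal (sin) by discrete samples: the finer the
# step, the smaller the worst-case gap between the digital model and the
# original -- a digital machine can get as close as desired.
def max_error(step):
    """Worst-case difference between sin(t) and its nearest sampled value."""
    samples = [math.sin(k * step) for k in range(int(2 * math.pi / step) + 2)]
    worst = 0.0
    t = 0.0
    while t < 2 * math.pi:
        nearest = samples[round(t / step)]  # value from the discrete model
        worst = max(worst, abs(math.sin(t) - nearest))
        t += step / 10
    return worst

# Halving the grid roughly halves the worst-case error.
assert max_error(0.5) > max_error(0.1) > max_error(0.01) > 0
```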

Finally, the informal nature of human behavior compared to machines is mentioned as an objection. Turing uses the example of knowing what action to perform at a changing traffic light. Turing’s proposal is for the machine to have rules imposed on it to act more human – rules to seem as if it were less governed by rules, as it were. The commentary is interesting, coming from an eccentric man who seemed to have little patience for social norms himself [Hodges, 1992]. Today, uncertainty and probability are major focuses in AI, and would probably be the first approach for making a computer seem more informal in its behavior.

At the end of the paper, Turing concludes with speculation on the future of the field. Two main approaches are proposed: focusing on an abstract activity, such as chess, or giving a machine sensory “organs” and teaching it like one would teach a child. In this, he foresees computers playing chess at the level of humans (we know now that they surpassed our ability in chess in the 1990s, when Deep Blue defeated World Champion Garry Kasparov). Machine learning also now plays a great role in AI, but it is the first approach, of specialized, expert applications, that has proved to be the main role of AI in the modern world. The paper ends with a highly quotable phrase, applicable to many fields of science in general:

“We can only see a short distance ahead, but we can see plenty there that needs to be done.”

1.3 Importance, Impact, and Fallout of the Paper

Turing’s paper is special in the breadth of its impact – over sixty years later it maintains relevance in computer science, philosophy, psychology, and the cognitive sciences. It is one of the foundational papers for AI and machine learning. Even with the benefit of hindsight, Turing’s predictions are remarkably correct considering how young the field was when he made them. As explained in other sections, the impact of AI was slower than expected, but is substantial in the modern world.

The strongest criticism of the paper came from Searle in his “Chinese Room” argument [Searle, 1980]. Searle takes us through a thought experiment, proposing he is trapped in a room with a cipher of Chinese characters. Written Chinese messages are fed to him, and he uses the cipher to write responses. The implication is that he has no idea what the messages actually mean – an analogy, of course, to a machine answering questions in a similar way, without understanding meaning. Searle does concede that the definition of “understand” is considerably more difficult than it appears at first glance.

While Turing does not get the benefit of being able to respond to Searle’s attack, his original paper did anticipate it, so we may assume he would maintain his position that consciousness is different from the ability to think. A further reply is to question the room-confined Searle himself: if he is unable to understand the Chinese characters being passed to him, does this make him unable to think?

Boden comes to Turing’s defense against Searle’s attack, largely by pointing out that the appeals Searle makes to the uniqueness of biological systems compared to engineered systems are at best irrelevant and at worst incorrect [Boden, 2003]. Boden argues that there is nothing “special” about humans: computer vision can functionally equal our own, and our neurons behave in a digital way that is not unlike a silicon transistor.

This argument is in some ways strengthened by the perspective of Newell and Simon [Newell and Simon, 1976]. Even in 1976, they viewed computer science as an empirical science – a far cry from the logic-based mathematics initiated by Turing and his colleagues. The increased complexity of computers led to their being regarded by the duo as “the programmed living machine…the organism we study”.

2.1 Description of Artificial Intelligence and Machine Learning

Turing’s “Computing Machinery and Intelligence” is a seminal paper in artificial intelligence and machine learning, and continues to be debated in robotics, computer science, psychology, and philosophy today. While the paper had far-reaching and interdisciplinary implications, this analysis will focus on the implications for the applied sciences.

Even the very definition of Artificial Intelligence (AI) is controversial. When trying to describe intelligence, one soon recognizes that one’s description is limited by the observations and experiences of a human, and only one human at that. Intelligence is considered an important distinction of the human experience, but we find that in defining intelligence we declare agency in a highly personified way.

Worse, it is our instinct to assign intelligence by factors that, on examination, are arbitrary: use of language, maths, or tools, for example. We find ourselves no further along than Descartes, who ascribed agency to himself by declaring, “I think, therefore I am”. Unfortunately, this leaves us unable to evaluate other entities.

Turing’s solution, as will be outlined and analyzed in more detail in the second part of this paper, was to avoid any one criterion for assessing intelligence, and instead to propose that an intelligent machine is one able to successfully imitate a human in written conversation. This trial has since been named the “Turing Test”, and probably remains the best tool for evaluating artificial intelligence, although little recent effort has been put into designing a machine to defeat the test [Rich and Knight, 1991] [Haugeland, 1989].

It is perhaps difficult to separate early computing and artificial intelligence, as the young fields had yet to specialize. It could be argued that the field of computing started quite early, with philosophers such as Descartes and Hobbes contemplating the nature of the mind and machine, and the similarities between the two. Early mechanical computers were proposed by Wilhelm Schickard (1592-1635), Blaise Pascal (1623-1662), Gottfried Leibniz (1646-1716), and Charles Babbage (1791-1871) [Haugeland, 1989]. Arguably, Babbage’s design was the first that could be considered a computer rather than a calculator, and Lady Lovelace’s contribution to the field in documenting and explaining Babbage’s never-built machine should be noted. In 1936, Turing proposed the Universal (Turing) Machine, a theoretical architecture for a computer that inspired the von Neumann architecture used in nearly all computers now [Haugeland, 1989]. A Universal Turing Machine has a tape and a read/write head. The tape holds many ‘bits’ of information, which the head reads from or writes to, and the head also tracks which state it is in. From these elements alone, this quite simple machine was proven by Turing to be able, in theory, to model any other machine.

Artificial Intelligence was first seriously considered around the time of the first computers, around 1950 when Turing published his paper proposing the “Turing Test”. From 1952-1969, quite good academic progress was made in AI, led by MIT, Stanford, Carnegie Mellon University, Dartmouth, and IBM [Boden, 1990].

Many of the early proposed problems in the field were solved, most being simple examples of playing games, word recognition, algorithmic problem solving, machine learning, or solving maths problems [Rich and Knight, 1991]. However, progress soon slowed as managing the complexity of problems proved more difficult than had been anticipated.

The 1990s again saw considerable progress, with substantial improvements in speech recognition, autonomous vehicles, industrial systems monitoring, and computers playing board games better than the best humans [Mitchell, 1997] [Norvig and Russell, 2003]. Many of these have since been commercialized, or are on the cusp of being so. Through this process, AI has become application-specific, and most work is focused on applications with more utility than passing the Turing Test [Norvig and Russell, 2003]. Dennett, who was Chair of the Loebner Prize Competition in the 1990s (a Turing Test challenge), questions whether the Turing Test will ever be beaten, and considers attempting to pass the test not useful research for serious modern AI [Dennett, 2004]. Turing himself anticipated the test would be challenging to pass: he once predicted that by 2000 a machine would exist that would have a 30% chance of passing the test in a five-minute conversation [Norvig and Russell, 2003]. Later, he said in a radio interview that he expected it would be over 100 years (from 1952) until a machine was built that would reliably win the imitation game [Proudfoot and Copeland, 2004].

Theoretical AI has, in response, created some divisions within itself. The first is between strong and weak AI: weak AI can act as if it is intelligent, whereas strong AI is intelligent and conscious [Haugeland, 1989]. A criticism of the Turing Test is that it may allow an entity that is only weak AI to pass. Modern AI has also distanced itself from Good Old Fashioned AI (GOFAI), an approach based more on strict logic [Haugeland, 1989]. The present interest is more focused on the management of uncertainty through probabilistic systems.

An important technique in AI is machine learning, which is defined as performance at a task improving with experience [Mitchell, 1997]. Learning was proposed by Turing in the paper as a possible technique to enable flexibility in a machine. The first examples were seen as early as 1955 [Proudfoot and Copeland, 2004]. But it wasn’t until the 1990s that the field of computing advanced sufficiently for great advances to be made [Mitchell, 1997]. The general learning technique is similar to what Turing proposed: the program is given a function such that it learns by trying to avoid the previously experienced “pain” of a mistake. However, the actual implementation of these algorithms in a practical sense is probably more difficult than Turing and his contemporaries anticipated [Mitchell, 1997]. This in part explains the slower-than-expected progress in AI.
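The “pain of a mistake” idea can be sketched as a toy penalty-based learner. The task here (finding the one rewarding button among several) and all the numbers are invented for illustration, not taken from Turing or Mitchell:

```python
import random

# A minimal learner in the spirit Turing proposed: actions that led to "pain"
# (a mistake) are penalised and chosen less often; actions that led to
# "pleasure" are reinforced. Toy task: learn which button gives a reward.
def train(correct_button, buttons=5, trials=500, seed=0):
    rng = random.Random(seed)
    score = [0.0] * buttons  # learned preference for each button
    for _ in range(trials):
        # Explore occasionally; otherwise exploit the best-scoring button.
        if rng.random() < 0.1:
            choice = rng.randrange(buttons)
        else:
            choice = score.index(max(score))
        # "Pleasure" for the right button, "pain" for a mistake.
        score[choice] += 1.0 if choice == correct_button else -1.0
    return score.index(max(score))

print(train(correct_button=3))  # -> 3: the learner settles on the right button
```

The performance measure (how often the right button is pressed) improves with experience, which is exactly Mitchell's definition; the practical difficulty, as noted above, lies in scaling this idea beyond toy tasks.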

However, machine learning, and artificial intelligence in general, both benefit from and produce useful cross-pollination with other fields including statistics, philosophy, biology, cognitive science, and control theory [Mitchell, 1997].

2.2 Importance to Engineering and Industrial Practice

While Turing discussed AI in a quite general and academic sense, the field has since become application-specific, with programs written for niche tasks. Robotics, including autonomous vehicles, is probably the most mechanical of the applications of AI, and is of growing importance [Proudfoot, 1999]. Industrial processes and systems are increasingly controlled by AI and AI-inspired systems [Norvig and Russell, 2003].

In some ways, AI remains heavy on potential compared to the current benefits realized from it. Futurist Ray Kurzweil envisions AI being a key technology of a process that will create unprecedented change in our society [Kurzweil, 2001]. He argues that technology improvement and adoption follow an exponential rather than linear growth curve, as is seen in Moore’s Law of computer transistors. This is in part because improved technology allows the design of further improved technology. Assuming AI follows a similar progression, the relatively slow early progress is to be expected, with accelerated returns to be seen in the future.
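The exponential-versus-linear point is easy to make concrete. The rates below are assumptions chosen purely for illustration, not Kurzweil's figures:

```python
# Exponential vs. linear improvement: doubling every 2 years (Moore's-Law-like)
# versus adding a fixed amount each year. The rates here are illustrative only.
def linear(start, per_year, years):
    return start + per_year * years

def exponential(start, doubling_years, years):
    return start * 2 ** (years / doubling_years)

# Over 20 years, doubling every 2 years multiplies capability by 2**10 = 1024x,
# while even generous linear growth (10x the start, every year) reaches 201x.
assert exponential(1, 2, 20) == 1024
assert linear(1, 10, 20) == 201
```

The exponential curve starts slowly and only later overwhelms the linear one, which is the shape of Kurzweil's argument that AI's modest early decades are consistent with dramatic returns later.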

This leads to what Kurzweil calls the Singularity, which is the point when the slope of this technology-growth curve becomes so steep as to be effectively asymptotic. Kurzweil predicts similar and inter-connected growth in neuroscience and (especially bio-flavored) nanotechnology. According to Kurzweil, when this all happens, we will have essentially infinite access to technology. One outcome of this that Kurzweil is fond of mentioning is that we would see our own immortality [Kurzweil, 2001]. So, at least according to Kurzweil, the implications of AI for engineering, industry, and beyond are vast.


Alan Turing made considerable contributions to computer science and affiliated fields. “Computing Machinery and Intelligence” is one of several seminal papers contributing to the legend of its author. The paper has had wide impact, reaching beyond engineering and computer science to shake the very foundations of how people view themselves. Like Galileo and Copernicus, Turing has forced us to reflect on our own state of being. In a more practical sense, he pioneered in this paper the fields of AI and machine learning, which have recently proven to be of considerable value. One can only anticipate that computer-based intelligence will play a greater role in the future of humankind.

In addition to his notable contributions to science and academics, Alan Turing is a fascinating man, and one to whom history is only beginning to give proper recognition. One can hope that the future will give Turing the acknowledgment that his contributions to an increasingly important field deserve.


M Beeson. The Mechanization of Mathematics. In Alan Turing: Life and Legacy of a Great Thinker. 2004. URL http://cs.sjsu.edu/~beeson/Papers/turing2.pdf

M Boden. Escaping from the Chinese room. 2003. URL http://scholar.google.co.uk/scholar?hl=en&q=boden+escaping+the+chinese+room&btnG=&as_sdt=1%2C5&as_sdtp=#1

M Boden. The Philosophy of Artificial Intelligence (Oxford Readings in Philosophy). OUP Oxford, 1990. ISBN 0198248547. URL http://www.amazon.co.uk/Philosophy-Artificial-Intelligence-Oxford-Readings/dp/0198248547

Daniel Dennett. Can Machines Think? In C Teuscher, editor, Alan Turing: Life and Legacy of a Great Thinker, pages 121 – 145. 2004. URL http://www.citeulike.org/group/884/article/633858

John Haugeland. Artificial Intelligence: The Very Idea. A Bradford Book, 1989. ISBN 0262580950. URL http://www.amazon.com/Artificial-Intelligence-The-Very-Idea/dp/0262580950

Andrew Hodges. Alan Turing: The Enigma. Vintage, 1992. ISBN 0099116413. URL http://www.amazon.co.uk/Alan-Turing-Enigma-Andrew-Hodges/dp/0099116413

Ray Kurzweil. The Law of Accelerating Returns, 2001. URL http://www.baxtek.com/products/wireless/files/law-of-accelerating-returns.pdf

David Leavitt. The Man Who Knew Too Much: Alan Turing and the invention of computers: Alan Turing and the Invention of the Computer. Phoenix, 2007. ISBN 0753822008. URL http://www.amazon.co.uk/The-Man-Who-Knew-Much/dp/0753822008

Millican and Clark, editors. Machines and Thought: The Legacy of Alan Turing. Oxford University Press, 1999.

Tom M. Mitchell. Machine Learning (McGraw-Hill International Edition). McGraw-Hill Higher Education, 1997. ISBN 0071154671. URL http://www.amazon.co.uk/MACHINE-LEARNING-Mcgraw-Hill-International-Edit/dp/0071154671

A Newell and HA Simon. Computer science as empirical inquiry: Symbols and search. Communications of the ACM, 1976. URL http://dl.acm.org/citation.cfm?id=360022

P Norvig and S Russell. Artificial Intelligence: A Modern Approach, 2003. URL http://jmvidal.cse.sc.edu/lib/russell03a.html

Proudfoot. Robots and Rule Following. In C. Teuscher, editor, Machines and Thought: The Legacy of Alan Turing, Volume I, page 312. Oxford University Press, USA, 1999. ISBN 0198238762. URL http://www.amazon.com/Machines-Thought-Legacy-Turing-Volume/dp/0198238762

Proudfoot and Copeland. The Computer, Artificial Intelligence, and the Turing Test. In Christof Teuscher, editor, Alan Turing: Life and Legacy of a Great Thinker. Springer Berlin Heidelberg, Berlin, Heidelberg, 2004. ISBN 978-3-642-05744-1. doi: 10.1007/978-3-662-05642-4. URL http://link.springer.com/10.1007/978-3-662-05642-4

Elaine Rich and Kevin Knight. Artificial Intelligence. McGraw-Hill, 1991. ISBN 0070522634. URL http://books.google.co.uk/books/about/Artificial_intelligence.html?id=eH9QAAAAMAAJ&pgis=1

JR Searle. Minds, brains, and programs. Behavioral and brain sciences, 1980. URL http://journals.cambridge.org/production/action/cjoGetFulltext?fulltextid=6573600

Sara Turing. Alan M. Turing: Centenary Edition. Cambridge University Press, 2012. ISBN 1107020581. URL http://books.google.com/books?hl=en&lr=&id=07_ckaHY-2QC&pgis=1

I can’t draw: I’m not an artist

Growing up, I thought I was good at the sciences, less good at reading and writing, and lousy at the arts.  How did I know?  That was what my grades told me in school.

The problem is, for young minds, this labelling is false and harmful.

Art in school is about how straight you can draw lines and whether you can paint without crossing over them.  At least, this is what art is until the later years of school.  Then, you are taught what art really is: communicating a message between the creator and the person experiencing the art.  But by the time I was in high school, I had convinced myself that art wasn’t something I was good at or needed to be good at.


I don’t consider myself an artist today, but one closely-related role I do play is that of a designer.  This happened through an education in mechanical engineering, where my focus became medical devices, with additional interests in energy and space exploration.  This was around 2008, so not that long ago, but pre-iPad and pre-Kickstarter.

The result is that my design education and journey have taken place in an era where product success is so intertwined with “soft” design: user experience, human factors, and communicating a message to the user.  This part of design engineering is far, far more about art than it is math or physics.

I now find the “art” of design to often be more enjoyable than the “science” of it.  But I feel like I’m playing catch-up; I wish I hadn’t been pigeon-holed as “not an artist” when I was far too young for anyone to know what I was going to be.

I think feedback and even ranking systems are good, even for grading young students.  Feedback helps to improve future performance, and comparative ranking by grades provides competition: a reflection of life beyond school.


The problem is that performance in subject X isn’t a good predictor of whether a person will do well at X in the real world.  Studying X in school isn’t doing X.  School is designed to teach basic skills (arithmetic, writing essays, researching), the ability to learn, and, lastly, content.  Performance in school is judged on how well and quickly the student can learn these skills and content, which is different from how well the student can succeed in that field.

Premature judgements are made on this basis that can affect a student’s career.  I have heard of a lot of people who picked a particular career because they were good at it in school, or avoided another because they were bad at it.  It is unfair to both students and society that we are doing a poor job of helping people toward careers they will enjoy and be good at.

We need to develop a culture, in schools and the institutions that surround them, in which failure is OK.  Labelling people as “good” or “bad” at certain things in school needs to stop, especially for young students.  These self-images can remain with students for a long time.  Instead, we should try different approaches with young students who are struggling.  We shouldn’t let the student or anyone else say they are “good” or “bad” at something before anyone can possibly know.

Saving Brick and Mortar

The big news story of the past week in Britain has been the cold temperatures and snowfall, in which, as a Canadian, I am free to find amusement at the fear caused by comparatively mild weather.

The second biggest story is the recent collapse of four major retailers here (from Yahoo): Comet, Jessops, HMV, and Blockbuster.  Comet is an electronics retailer, Jessops does photography, HMV sells music and video, and Blockbuster is a film and game rental company.  Such stories are not limited to the UK, as worldwide recessions and the growth of e-retailers have hit retailers stateside also.

One simple explanation for the demise of these companies is disruption from online services, and perhaps also the rise of digital cameras and smartphones for the case of Jessops.  The Yahoo article linked to above provides a good summary of other reasons for the fall of these rather large companies.

Does this signal the beginning of the end for traditional, brick and mortar retailers?

Brick and mortar is facing a multi-pronged assault.  First is from online retailers and distributors, such as Amazon, Netflix, and iTunes.  Second is an emerging threat from home- and local mini-manufacturing, such as desktop 3D printing.  I am personally pretty bearish on this happening, although such systems have a chance to become mainstream if part quality improves, costs continue to come down, and the ability to work with multiple materials improves.  A more likely threat is from local mini-manufacturers, using technologies like 3D printing, waterjet cutting, and injection molding to make semi-custom products on demand.  The advantages are less machine down-time and distributed costs.  Additionally, staff at a mini-manufacturer will be able to assist with designing, design selection, machine operation, and assembly.

Each of these distribution methods has its own benefits and shortcomings.  Some are detailed in the table below.


Home manufacturing and mini-manufacturers are still in relative infancy, and it is hard to assess how great a threat they are to brick and mortar retail.  Personally, I think the processes most often suggested for home or mini-manufacturing have inherent weaknesses related to quality and multi-material products.  Additionally, the main value-add of this kind of manufacturing is customization, which I don’t think will have the mainstream appeal to justify higher costs over mass-produced products in most instances.  I have previously written in more detail on my opinions here.

For brick and mortar sales, a lot of value comes from being able to interact with the product.  Look and fit are much easier to judge in person than through a web browser.  Personal and expensive products like wedding rings and cars are things that people usually don’t buy without first interacting with them.  For these kinds of high-end products, knowledgeable sales staff are also valuable in choosing the right purchase for you.  This contrasts with economy-minded products, where online reviews are often more helpful than sales associates.

Groceries are a retail category that has been slow to gain popularity in e-retail.  This is due to a trade-off between time and choice.  Being perishable, groceries are a category where 2-day shipping is insufficient.  Further, people like to select the best produce, meats, and breads from those displayed at the store.  In cases where quick access to the product is required, brick and mortar is the preferred type of retailer.

Apple stores have been lauded as an example of a great brick and mortar model, evidenced in part by Tesla Motors modeling their showrooms after Apple stores.  Both are an additional vertical for products and experiences that are already highly controlled by their respective companies.  This allows the companies to control the entire customer experience, including selection and purchase along with usage.  Additionally, there is a marketing and advertising aspect to having such a visible front for the respective products.

There are segments where brick and mortar seems unlikely to be able to compete with e-retailers.  When the product is information, near-instantaneous free transfer, near-zero inventory cost, and convenience make a clear case for digital distribution: Netflix for films and TV, iTunes for music and other media, and the shift from newspapers and magazines to web-based equivalents.  The only option for brick and mortar in these industries may be to hope for a time machine, to travel back in time to develop or acquire digital distribution.

A second example is Dell, especially the company as it was around 2005.  Dell had a great direct-order model, where the buyer could semi-customize their purchase and also get great value compared to buying in-store.  In some ways, the model led to the cannibalization of the consumer computer hardware industry.  Competition and commoditization led to a collapse of margins.  It is said that in an efficient market, there is no money to be made.  That is what has happened in computer hardware – it's great for consumers but not for manufacturers.

Distribution through online sales reduces regional market inefficiencies.  A customer in San Francisco no longer has to choose between local stores: they have access to stores across a wide area, subject only to tariffs and shipping costs.  This lowers prices because a store in San Francisco is now competing with online stores all over the world, in addition to local stores.

A discussion on e-retail would be incomplete without mentioning Amazon, the giant in the space.  Amazon wins due to a massive diversity of products stocked, quick and cheap shipping, convenient use, and meaningful user reviews.  Hidden from the purchaser, they have the infrastructure and distribution centers to make the experience work.  With that infrastructure and its momentum, the burden is probably on any other retailer – web-based or not – to show how it will beat Amazon.

How can this be done?  I have a few ideas and suggestions:

1. e-retail is only as fast as the postman

For things that are urgent or perishable, brick and mortar has an advantage over e-retail.  While there is convenience to shopping from home, there is also convenience in buying something and getting it right away.

2. Quality sales advice

Brick and mortar stores should be much better than e-retail at consumer education, and they usually are for high-end and very personal products.  There is no reason e-retail resources such as price comparisons and user/expert reviews can't be as accessible in-store as online.  Apps and interactive displays are starting on this, but I think there is further mileage to be had.

However, the key advantage of brick and mortar should be knowledgeable and caring sales staff.  People interact with products in a very personal way, and a salesperson should be much better suited to understand the user’s needs than a web based script or robot.  Consumer information and marketing could be a key area for innovation for brick and mortar retail.

3. Beating e-retail on price is trench warfare

Competing on price alone is rarely a sustainable business model, and brick and mortar probably has an inherent disadvantage to online retail due to higher overhead.  The lone advantage may be in shipping in bulk for brick and mortar, compared to by unit for e-retail.  In this, a model like Costco may be able to remain competitive due to high volume, low number of products, and low-overhead operations.

I see a continued shift to e-retail from brick and mortar.  The consequences of this could be quite far-reaching.  Not only will business be transferred from traditional to online retailers, but there are implications for employment, international trade, a surplus of retail real estate, and a decline in the cultural pastime of shopping.  It's still too early to draw conclusions on home and mini-manufacturing, but I don't see these as major threats in the space, and especially not in the near term.

There exists a continued opportunity to disrupt the retail space.  For example, mobile devices are still relatively greenfield for retail apps, without any dominant players.  Further, the social aspect of shopping should not be overlooked.  Perhaps this factor will support brick and mortar, or some innovation will improve the social aspect of online retail.

What do you think?  I'm always interested to hear your comments below.

Like it’s your last

Do it like it’s the last time you ever will.  Because eventually, it will be.

I mean this in a less morbid way than the phrase is usually used, but also with seriousness.  I put a lot of value on the experience of life, and, even aside from death, there is a real probability that things you take for granted today will change.  For an infinite number of reasons, the routines of today are not how they will be tomorrow.

Relationships and friendships begin, change, and end.  Things break, are replaced, or are improved. Perspectives and experiences mold us into doing different things and experiencing them differently.  People change geographies, and geographies change around people.

In technology industries, we get giddy over disruption.  Disruption is opportunity.  Disruption is change.

But change is also an end.  Even the most mundane task could be a cherished experience if you knew it was the last time you would experience it.

The trouble is that you rarely know, while doing something, that it will be the last time.  The solution, I think, is to try for a frame of mind where you experience everything like it's your last.

How to Build it: Lean Prototyping Techniques for Hardware

I recently wrote on my vision for the future of 3D printers, largely on their use for manufacturing.  I wanted to expand more broadly on my thoughts on prototyping technologies, and particularly for rapid and lean prototyping for mechanical designs.

"Lean" started in the context of manufacturing automobiles, and has since been adopted to describe prototyping and customer development for software start-ups.  Many software/web start-ups do not win because of a science or technology invention.  Instead, user experiences and marketing are what drive success.  I think people are realizing that this can apply to hardware as well, and the increasing ease of prototyping is helping to drive the increase in hardware-based projects and start-ups such as those seen in Kickstarter's design section.  Of course, hardware continues to have the challenge that production and distribution are more difficult than for software.

I will outline here tools and methods I use in prototyping hardware.  What do you do? (please post in the comment field below)

The Dollar Store

Duct tape, super glue, spray paint, and a dollar store full of imagination are possibly the best (and maybe least expected) prototyping tools.  I’m a strong advocate of the super-alpha prototype: the more you can build quickly, the faster you can find what you don’t know.  It’s also easier to get excited about a project when you have something tangible to show people (potential customers!)

Don’t forget the spray paint!  A prototype that looks sketchy automatically throws off people you show it to.  Civilians will discount even the best features on a prototype if it looks unprofessional, unfinished, and ugly.

Amazon, electronics stores, and hardware stores are also great resources, especially once you have enough of an idea what you are building so that you can specify a specific part.  Before that, quick, cheap, and convenient should be the main criteria for finding parts and materials.

Pen and Paper

Very quick calculations can prove your idea violates the laws of physics.  Save yourself embarrassment and make sure that you are the one to do these calculations, not someone else (like an investor), and that you do them before you invest too much time in a project.  Such calculations can also help decide between design alternatives and optimize design choices.

Simple sketches can help realize ideas and form them to guide physical prototypes.  There are often many different ways to build or do something, and having the options on paper can help in deciding which direction to take.  Sketches can also express your ideas quickly to other people.

Computer Modeling

CAD Model and Development, Image courtesy Nikola LK

A tried-and-true method for professional mechanical designers, some computer aided design (CAD) programs have come down in price a lot recently.  Alibre, for instance, costs about the same as Microsoft Office Home and Business, and gives probably 70-80% of the functionality of professional design software.  Entry-level CAD systems often don't have simulation applications to test the physics of parts, but some open-source packages do.  Simulation also requires training to make reliable models.

I'm not sure why CAD isn't getting more publicity for maker, hacker, and hobbyist use – a physical model is often easy to make from a fully rendered CAD model. CAD models can be changed more easily, more quickly, and at less cost.  Design iteration in CAD can be as quick as modifying software code.

However, if the final widget uses parts that interact with one another, a CAD model may not be able to prove everything works together.  This is especially true for moving parts or embedded electronics.

2D Cutting

75mm thick steel, cut by waterjet, Image courtesy Fromthecorner

Waterjet and laser cutters etch or remove a pattern from sheet metal (and other materials).  The sheet can then be folded to a 3D final shape.  These can be very cheap and quick: for example a small part could be made in as little as five minutes and at a cost of $5.  The size of the machines makes them practical for anything from smartphone to laptop size, with exceptions either way for certain applications.

The machines are not common in people's houses, and they take a somewhat different design approach: you have to think about your 3D project as a 2D sheet.  Even if you don't have one at home, even a small city should have several companies able to cut your part.

Additive Manufacturing

The fancy name for "3D printing", additive manufacturing has become popular with hobbyists and the media.  It is fascinating to watch a part grow in front of you, and a variety of metals, plastics, and rubber-like materials are available, though generally not on the same machine.  Machines are also now small, cheap, and usable enough that they are no longer restricted to industrial use.  Assemblies that would otherwise require several components can be built as a single part on a 3D printer.  3D printers allow for making parts that are impossible with other processes, for instance parts with internal holes and voids.  They can also be used to make quick, inexpensive tooling for molds to make parts from.  A prototype can be made for $20-$100+ depending on what it is.


CNC Machining

Computer numeric control (CNC) usually refers to a milling machine that cuts a big chunk of metal (or another material) into a finished part.  It was probably the first type of "prototyping machine", but it is often used for production as well.  Usually people don't have these at home (although the hobbyist and homemade CNC community seems to be growing), and CNC parts can be more expensive than other contracted parts, usually in the $150+ range.

Molding and Fibreglass

Carbon fibre aeroshell from a fibreglass mold, UBC Solar Car team

Molding and fibreglass are great for making irregular-shaped parts or if you need several copies of the same part.  There can be a lot more initial work to make a mold than other processes, but quick molds using hobbyist and film-industry materials can be made pretty quickly.  Some chemicals involved in fibreglass and some molds are toxic and require gloves and/or ventilation.  Materials can be quite cheap, $50-$100 is enough to make most small-medium sized parts.


Welding

Welding allows the joining of metal.  It is useful for many different parts, including building frames from metal tubes or turning sheet metal into 3D parts.  Like molding, there can be a lot of set-up time in making jigs to hold parts in place while they are being welded.  Spot welders are good for quickly joining metal pieces and require much less skill to operate; they are particularly useful for joining 2D sheet metal projects that were cut on a waterjet or laser into 3D parts.  Often, glues are easier to use and will suffice for a prototype.


Microcontrollers

Arduino and other microcontrollers are an easy and cheap way to prototype and integrate electronics into a project.  There are lots of examples and support for the platform: someone else has probably already solved the problem you are having and is willing to help. Sparkfun and others have good sensors and other electrical accessories that work with Arduino and other platforms.

User Feedback

If you are making something for more than a few people to use, you have to talk to the people you hope will use it.  Live demos, or letting potential users play with your prototypes, are important.  But it also matters who you pick to ask for feedback and how you let them use it.  With this feedback, you build and improve the next round of prototypes, until the project is ready.  I expect there are many parallels to Lean software development here.

How I (try to) pick people for feedback:

Open to Change

Don’t take away Milton’s stapler, Image courtesy Devinpoore

If someone is too happy with what they already have, they will be resistant to change.  Even worse is a user who doesn't want to change but thinks their boss will force them to.  These types of people will think of any reason your prototype won't work, and it can be tough to convince them otherwise.  Try to take away Milton's stapler and he'll burn the place down (a reference to the movie Office Space).

Will Give a Fair Assessment

Like the above person who will only say negative things about your work, try to avoid people who will only say positive things.  Your mother is not the person to get good feedback from, assuming she is supportive of everything you do.

Some people get excited about anything just for being new.  Feedback from them can be motivating but may require coaching and interpreting to make the advice constructive.

Is Sympathetic to How Prototypes Are

Prototype for a hovercraft, Image courtesy Timothyrfries

Many people are never exposed to how things are made.  Stuff comes from Amazon or Walmart, and it better be perfect.  If it breaks, looks ugly, sounds funny, or crashes, it’s a bad product and the company that made it may never be trusted again.  Unfortunately for people looking for feedback, I expect most people fall into this category.

These people need to have their hands held if you choose them for product feedback, as they are often disappointed with what you show them. You need to manage expectations and teach them what exactly your prototype is showing.  If they understand the prototype is only testing a few features of a final product, they will be more understanding.  These people are why spray paint and making your prototype look good is so important: for early stage design, discussion should be about ideas and features, not distracted by aesthetics.


Prototyping is cheaper and easier than ever.  In my opinion, a prototype for many Kickstarter-ready design projects could be made for $1000 in parts and materials, and some for even $100.  Like software development, the larger investment is the time put in by the designers. Of course, several (or sometimes many) stages of prototypes are needed to arrive at a final design.  Good user feedback is essential, and this feedback should guide the next round of prototypes.  It is an iterative cycle.  The key to making good products is making mistakes early and learning from them.  This is best done through prototyping and user feedback.


Many of my ideas and views on prototyping were formed in the University of British Columbia Mechanical Engineering program, and particularly by the design faculty.  Some thoughts are inspired by work from the Center for Design Research group at Stanford and the Engineering Design Centre at Cambridge.  All three groups are great places to look for more depth on these points.



Start of some good discussion on HackerNews: http://news.ycombinator.com/item?id=4790562

Perspective on 3D printers from a mechanical designer

3D printed part that would be (extremely) difficult to make another way. Photo credit: Axel Hindemith Lizenz: Creative Commons CC-by-sa-3.0 de

This is motivated in part by Jon Evans's recent article on TechCrunch (http://techcrunch.com/2012/11/03/one-of-these-things-is-not-like-the-other/), but more by a wish to add to the discussion of new manufacturing technologies from experiences that have been brewing for a while.  I agree with Evans's thesis that 3D printing is like 2D printing in name only, but my experience is that 3D printers are a great tool, not the total revolution in manufacturing that some suggest.

I am starting a PhD in Mechanical Engineering soon and have been using 3D printers, waterjet machines, and laser micromachines for three or four years now on a prototyping basis.  I am writing this mainly directed for the software tech crowd that has recently become more interested in hardware.  Some of my conclusions:

1. We will not see 3D printers in everyone’s home

I don’t see 3D printing as being a new fixture in everyone’s home.  This is because 3D printing:

  1. requires design input, which requires developed skill and time
  2. is a slow process, and
  3. uses materials that are (currently) poor quality from an engineering perspective

I do see growth in residential use, but only by the same kinds of people who have a woodworking shop and welder in their garage, or who do I/O software or robotics projects. They are a nice cross-over between software and hardware.

3D printing needs design input, and for anything more than the most trivial parts, this requires computer aided design software and the skill to use it.  It's far more work than most people are willing to invest.  And if you are just going to make parts from other people's designs, why not outsource the production to them as well?

3D printing is slow, with even small parts taking hours to make.  Unless it's a custom part (that you've designed yourself), it's much quicker (and cheaper) to find something off the shelf.

3D printing materials are apparently better than they used to be, but I still find they crack more often and more easily than would be acceptable in most uses.  Yes, there are examples of 3D printed parts that work fine for their use, but a molded, cast, or machined part will be stronger.

2. 3D printing will not revolutionize manufacturing

For the same reasons listed above, 3D printing is not a great production technology.  More importantly, though, the economics of 3D printing are only favorable below about 10-20 parts.  After that, casting or injection molding is typically cheaper, except in a few cases I will discuss later.  At large volumes, the per-unit cost of an injection-molded plastic part can be well under a dollar, while the same part 3D printed could be over $100.  The exception is where 3D printing makes a part that is "impossible" by other methods or that would otherwise require multiple parts and assembly.
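To make the volume argument concrete, here is a minimal sketch of how mold amortization drives the break-even point.  All prices are illustrative assumptions of my own, not real quotes:

```python
# Rough break-even sketch: 3D printing vs. injection molding.
# All numbers are illustrative assumptions, not real quotes.

def printed_unit_cost(qty: int, per_part: float = 100.0) -> float:
    """3D printing: no tooling, so the cost per part is roughly flat."""
    return per_part

def molded_unit_cost(qty: int, tooling: float = 2000.0,
                     per_part: float = 0.75) -> float:
    """Injection molding: an expensive mold amortized over the run,
    plus a very low per-part cost."""
    return tooling / qty + per_part

def cheaper_process(qty: int) -> str:
    """Which process gives the lower per-unit cost at this volume?"""
    if printed_unit_cost(qty) < molded_unit_cost(qty):
        return "3D printing"
    return "injection molding"

for qty in (5, 20, 100, 10_000):
    print(f"{qty:>6} parts -> {cheaper_process(qty)} "
          f"(molded: ${molded_unit_cost(qty):.2f}/part)")
```

With these made-up numbers the crossover lands around 20 parts; a cheaper mold or a pricier print only moves that point, it doesn't change the shape of the curves.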

3. 3D printing is good for prototyping and one-offs (but not the only way)

For doing something once, whether for prototyping or if you only need one, 3D printers can be a great tool.  One cool application is surgical planning or even custom implants by 3D printing (such as http://www.bbc.co.uk/news/technology-16907104).  And it makes sense, because everyone is different enough to justify a one-off part for them (and medical device costs are high enough to allow it).  But replacing overseas injection-molded parts with a 3D printer in your garage?  It doesn't make sense in nearly all instances.

There are other rapid prototyping methods, such as CNC waterjet cutting and laser machining, that get less attention although they are, in my opinion, more useful prototyping tools.  Typically, sheet metal is cut, then folded into a 3D prototype.  These are nice because they are typically stronger than a 3D printed part, are quicker, and their time scales with complexity rather than size. There are also many types of 3D printers, including those that "print" rubber-like materials, hard plastics, and metals.

New rapid prototyping methods are game-changing for developing new products on a shoe-string budget, and I would wager that most of the recent success stories on Kickstarter (www.kickstarter.com) were developed with the help of 3D printing prototypes.  In this, the technology really does allow for cheap innovation where design is the major innovation.  I would go as far as to say these technologies are lowering the barrier-to-entry of hardware projects to a similar level as software, at least until the product goes to production.  Great news for hardware entrepreneurs!

As an aside on Kickstarter since I brought it up, it is interesting they recently banned virtual renderings of design projects.  Rapid prototyping allows for moving from virtual models to prototyped models easily, quickly, and cheaply.  The problem is that the prototypes in no way prove the company is ready to handle the demands of transitioning into production, or that the prototype has had any reliability testing.

4. 3D printers allow for making things that are impossible any other way

This is probably the second biggest advantage of 3D printers, after their usefulness in rapid prototyping.  3D printing allows for making shapes that are impossible using other methods, or that would require multiple parts and assembly with other methods.  In particular, it allows for printing parts with irregular voids or curving holes that are very difficult to make any other way.  Mechanical Engineering Magazine (http://memagazine.asme.org/) has had a few good articles on this over the past year, and the image I posted above is a good example.

5. 3D printing (and other rapid prototyping) machines are less reliable

3D printing and other automated machines, especially lower-end ones, continue to have reliability issues.  Most of my experience is with industrial machines (costing in the $50-100k range), and even these have a crippling amount of downtime.  There are also issues that occur while parts are being built.  At best, you catch these early and can restart the part.  At worst, you come back later to find your part is a mess and the machine is damaged.  These machines typically don't have feedback, so the machine can't tell if it's made a mistake.  Good ol' lathes and mills, even of the CNC variety, are much more reliable.  But rapid prototyping machines may improve in the future.

6. 3D printing users need to decide between ownership and out-sourced services

3D printing machines are becoming less expensive, but they still lock up a lot of capital.  For most users, a machine will probably spend most of its time waiting for a job, and only a fraction actually printing.  I have used a few services; my favorite is Protogenic by Spectrum Plastics (http://www.spectrumplasticsgroup.com/protogenic).  I have found they have the best prices and the best customer service of any vendor I've ever worked with.  Typically a part is delivered within 5 business days of ordering, and prices aren't much more than the cost of material in your own machine.  Therefore, the only reasons I can see for owning a 3D printer are:

  1. High usage
  2. Need for extremely quick turnaround times
  3. Desire for confidentiality (although most vendors will agree to NDAs)
  4. Teaching CNC control theory (i.e. in engineering schools)
  5. To geek out

For most reading this article, this last point may be the main selling point for getting a 3D printer.  They are certainly fun and interesting toys.  And they do have niche roles in manufacturing, design, and maker culture.  But it’s time for a reality check: 3D printing is not the beginning of the end for injection molding, milling, casting, and other traditional manufacturing technologies.

Edit: there has been a vibrant discussion of this on HackerNews.  Thank you all for your thoughts and comments: http://news.ycombinator.com/item?id=4751489

Why I got a BlackBerry

BlackBerry 8900. Photo credit: By Pizue [CC-BY-SA-3.0 (http://creativecommons.org/licenses/by-sa/3.0)], via Wikimedia Commons

My first smartphone was an iPhone 3GS, which I bought knowingly about a month before the iPhone 4 came out. That iPhone 3GS served me well for two and a half years, including a swim in a lake, dozens of drops, and a screen replacement.  Ultimately, that abuse caught up with it, and a chocolate bar melting into the microphone finally did it in.

I am leaving the continent in a few months and a new phone without a plan discount was out of the question.  I looked at Android, but had found the platform to be at times slow and with a tendency to crash.  There are also so many phones on the platform (with similar names!) that I found it difficult to figure out which of the second-hand ones I was looking at was any good.

I thought about getting an updated iPhone.  I was used to the interface and Apple certainly makes it easier to stay with their own devices, but I’ve been turned off by some of Apple’s products forcing their choices on the user.  Using Flash should have been up to the user, not the manufacturer.  Switching to an Apple-made maps app on the newest OS was also not for the good of the user.  The iTunes interface makes it difficult to switch to another platform by making it difficult to liberate contacts and music.  I was ready for something else.

While I could say I picked Blackberry through attrition, it was actually preference that made me want to try the platform.  Why?


Patriotism.

Blackberry is made by Canadian company Research in Motion (RIM), and Canada is a country with vanishingly few winning tech companies to inspire young geeks like me.  Canada has been home to top-class tech companies (Corel, Nortel, Angiotech) that all went supernova.  #1 companies can be built here, but can they continue to compete with the world?

Cheering for the underdog.

Canadians have a strong history of cheering for the underdog, so maybe this falls under patriotism.  RIM has had a tough year-plus, with botched or delayed launches, a leadership change, and a stock price that values the company at only a few bucks more than the cash it has in the bank.  It would be great to see them turn things around.

Hope for the next one.

Blackberry 10 has long been hailed as what will save Blackberry, especially after the Playbook didn't.  I'm holding out hope for this, and I saw this phone as a test: a chance to get used to the Blackberry platform and see whether I'd want to move to Blackberry 10 when it comes out and I'm up for a new-contract-subsidized phone.

To complete the set.

I bought a Playbook when they started selling them off at what must be near cost, although I admit it only really sees use during travel.  While a Macbook-iPad-iPhone could be the best "package set-up", I was hoping to see if I'd use my tablet more if I had a Blackberry phone.  So far I haven't, but I also haven't set up the Blackberry Bridge yet.


Security.

No, I'm not overly paranoid about the government, nor do I have items of national security in my inbox, but I do value my privacy and hope that some of my ideas may one day develop into industry trade secrets.  If Blackberry's security is enough to upset some foreign governments, I see that as a good thing.

Of course, all of these played into my decision.  I am happy with it, even though setting it up as an unlocked phone was a pain, largely due to my service provider.  I find the OS stability better than the iPhone's, but the browser is slower, the media player clunkier, and the maps UI isn't as good as the old version of Apple's. These are the major apps I use on either system. There are definitely some quirks in the system (ok... bugs) and I still haven't set up the bridge to my Playbook (after almost a month), but I'm happy with my choice. I like the physical QWERTY keyboard a lot.  It's not perfect, but I wouldn't consider it inferior to iOS or Android.  I like that the system feels both secure and highly tunable, but I don't think this tuning is intuitive or easy for most people.

This may be the heart of RIM’s problem with Blackberry: what they have built is a Hummer when most people today want a Prius. Like many, I hold hope for the Blackberry 10 to live up to its promise, and I’ll be getting one on launch day if it does.


Update: Just set up the Blackberry Bridge.  It is really cool! One issue I've noticed is that messages I read on the Playbook don't immediately get marked as read on the handheld.