Category Archives: Uncategorized

On Turing’s “Computing Machinery and Intelligence”

I recently submitted the following paper for a research skills course as part of my program at Cambridge.  I have decided to post it here since people may be interested in learning more about his life and work given his recent royal pardon.  The focus is on his “Computing Machinery and Intelligence” paper, but his other work and life are also included.  I welcome any discussion in the comment section below. 

1.1 Background on the Author

Alan M Turing was a remarkable man, whose breadth and quality of contributions make him deserving of mention alongside such greats as Newton, Darwin, and Maxwell. Beyond Turing the mathematician, Turing the person is nearly as fascinating, and more closely resembles the tortured souls of great artists than the typical great scientist.

Turing was born in London, on 23 June 1912, to a middle class family [Turing, 2012]. He was seen as bright at an early age, inventive, and skilled in both words and mathematics. In 1930, he won a scholarship to study the Mathematics Tripos at King’s College, Cambridge. He attained only a second in his Part 1, but went on to be elected a Fellow of the college upon graduating, at age 22. A close friend, Christopher Morcom, died just before Turing was to start at Cambridge.

This affected him throughout his life, at times seeming to provide him with motivation, and at others, loneliness.

The first contribution to bring attention to Turing was his solution to Hilbert and Ackermann’s “Entscheidungsproblem”, which they had posed in 1928 [Beeson, 2004]. The problem asks whether an algorithm exists that, given a set of axioms and a conjecture, can in a finite amount of time produce a proof of the conjecture from the axioms, or state that no proof exists. Turing solved the problem by considering his Universal Machine, proving shortly after Alonzo Church that some problems are unsolvable in a finite amount of time. The Universal (Turing) Machine was one of Turing’s great contributions. It has a tape and a read/write head. The tape holds many ‘bits’ of information, which the head either reads or writes. The head also knows what state it is in. From this alone, this quite simple machine was proven by Turing to be theoretically able to model any other machine, and it formed the inspiration for all subsequent computer architectures.
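The machine is simple enough to sketch in a few lines of modern code. The example program below (a machine that inverts a binary string) is my own invention for illustration, not one of Turing’s, but it shows how little mechanism the model needs: a tape, a head position, a state, and a transition table.

```python
# A minimal sketch of a single-tape Turing machine. The transition table
# maps (state, symbol) -> (new_state, symbol_to_write, head_move).

def run_turing_machine(transitions, tape, state="start", blank="_", max_steps=1000):
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, cells[head], move = transitions[(state, symbol)]
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Example program: scan right, inverting 0 <-> 1, and halt at the blank.
invert = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}

print(run_turing_machine(invert, "10110"))  # -> 01001
```

Everything the machine "knows" lives in the transition table; swapping in a different table yields a different machine, which is the intuition behind universality.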

Turing went to Princeton to work with Church, earning a PhD for two years of his efforts [Turing, 2012]. Following that, he returned to King’s College until the outbreak of World War Two. Through the war, he was largely based at Bletchley Park, working as one of the lead cryptographers, and played a major role in defeating the Enigma code. This role was instrumental to the Allied war effort, protecting the Allied merchant fleet from German U-Boats in the North Atlantic. He was awarded an Order of the British Empire for his efforts.
Following the war, Turing joined the National Physical Laboratory to design some of the first computers. Through this time, he wrote internal reports and discussed machine intelligence with colleagues. But it wasn’t until he joined the faculty at the University of Manchester that he published “Computing Machinery and Intelligence”, having left the NPL in frustration with the slow progress of building computers there. This paper is one of the seminal papers in the field of artificial intelligence, and continues to be controversial in the fields of psychology and philosophy due to the implications of having intelligent machines.

Having already made major contributions in computer science, cryptography, and artificial intelligence, Turing again wrote a seminal paper, “The Chemical Basis of Morphogenesis”, in 1952, which probably makes Turing the father of computational biology as well [Turing, 2012].

Turing died from cyanide poisoning on 7 June 1954, in an apparent suicide. A few years before, Turing had admitted to being homosexual, a crime in Britain at the time [Leavitt, 2007]. He was sentenced to estrogen injections, lost his government security clearance, and was no doubt embarrassed publicly. That he accomplished what he did while living first in secrecy and then under persecution adds to the legend of Alan Turing. In December 2013, Turing was finally issued a royal pardon by the British Government.

1.2 Summary of the Paper

“Computing Machinery and Intelligence” proposes a way to test whether machines can think. In doing so, Turing provided a jolt to the field of artificial intelligence, and the paper provided motivation for much of the early work in the field. While AI eventually began to focus on individual applications, the paper remains relevant in psychology and philosophy, where Turing’s ideas support a more systematic study of the human mind [Millican and Clark, 1999].

Turing proposes an imitation game, where an examiner communicates with a hidden human and a hidden computer through teletype. The examiner must determine which of the examinees is which. If the computer is able to fool the examiner, it is said to have passed the (Turing) test (the parenthesis indicates that the name “Turing Test” was given to the imitation game only after Turing proposed it in the paper). The test is based on a game where the examinees are a man and a woman, a description that has drawn interesting commentary from Leavitt and others in the context of Turing’s homosexuality and awkwardness with women [Leavitt, 2007].

While the (Turing) test is probably the most famous outcome of the paper, only a minority of the manuscript is dedicated to it. Instead, Turing extends beyond the test to propose potential qualities of a machine that might eventually pass it, and to address nine potential criticisms of the test. Turing’s description of the type of digital computer that might pass his test is remarkable in its similarity to how computer design has actually advanced. The basic architecture of storage, executive unit, and controller remains unchanged. He correctly predicted massive gains in storage and processing power, and anticipated that software would be the major limiting factor in AI. Anticipating the difficulty of manually coding a variety of behaviors, Turing proposes machine learning as a way to program an AI machine.

Of Turing’s nine possible objections, three can probably be discarded today as lacking scientific merit: the objections from theology, consequences-are-too-dreadful, and extra-sensory perception. The remaining objections and Turing’s responses will be summarized.

The mathematical objection draws on the theory of Turing, Church, and others proving that discrete-state machines have inherent limits on their capabilities. Turing’s response is that while machines may have inherent limitations, human intellect probably does as well. He feels the imitation game remains a good test even in the context of the mathematical objection.

Lack of consciousness is the second continuing objection, which has since formed the distinction between strong AI (has consciousness) and weak AI (does not). Turing recognizes that an AI could pass his imitation test without having consciousness, and somewhat dismisses consciousness as relevant to the ability to think. In part, his argument relies on the fact that consciousness is not well understood in humans, so it should not be used in a test of other entities. This objection was later raised again in Searle’s famous “Chinese Room” paper, which will be described in the next section.

A third objection to Turing’s test lumps together claims that a machine may never have X, where X is a personified quality such as kindness, a sense of humor, the ability to love, the ability to learn, or the capacity to enjoy strawberries and cream. The response is twofold: Turing objects to the arbitrariness of these qualities as a test of intelligence, and he believes most of these things will be possible for a machine to do.

A particular quality that Turing treats separately is inventiveness, also known as Lady Lovelace’s objection. In the context of Babbage’s early computers, Lady Lovelace commented that computers are limited in that they cannot do anything new; they can only do what their programmers have previously instructed them to do. Turing objects, believing machine learning will allow machines to originate new knowledge. Today, this objection holds little ground given the importance of computers in so many fields, including algorithms that are able to produce mathematical proofs that humans cannot.

The continuous nature of the nervous system is raised as a possible objection to the possibility of discrete or digital systems thinking. Turing defeats this quite easily by arguing that, with probability and sufficient decimal precision, continuous systems can be modeled by digital ones. Today, we would probably consider the nervous system to be more similar to a digital system than Turing did.

Finally, the informal nature of human behavior compared to machines is raised as an objection. Turing uses an example of knowing what action to perform at a changing traffic light. His proposal is for the machine to have rules imposed on it to act more human: rules to seem as if it were less governed by rules, as it were. The commentary is interesting, coming from an eccentric man who seemed to have little patience for social norms himself [Hodges, 1992]. Today, uncertainty and probability are major focuses in AI, and would probably be the first approach for making a computer seem more informal in its behavior.

At the end of the paper, Turing concludes with speculation on the future of the field. Two main approaches are proposed: focusing on an abstract activity, such as chess, or giving a machine sensory “organs” and teaching it like one would teach a child. In this, he foresees computers playing chess at a human level (we now know that they surpassed our ability in 1997, when Deep Blue defeated World Champion Garry Kasparov). Machine learning also now plays a great role in AI, but it is the first approach, of specialized, expert applications, that has proved to be the main role of AI in the modern world. The paper ends with a highly quotable phrase, applicable to many fields of science in general:

“We can only see a short distance ahead, but we can see plenty there that needs to be done.”

1.3 Importance, Impact, and Fallout of the Paper

Turing’s paper is special in the breadth of its impact – over sixty years later it maintains relevance in computer science, philosophy, psychology, and cognitive sciences. It is one of the foundational papers for AI and machine learning. Even with the benefit of hindsight, Turing’s predictions are remarkably correct considering how young the field was when he made them. As explained in other sections, the impact of AI was slower than expected, but is substantial in the modern world.

The strongest criticism of the paper came from Searle in his “Chinese Room” argument [Searle, 1980]. Searle takes us through a thought experiment, proposing he is trapped in a room with a cipher of Chinese characters. Written Chinese messages are fed to him, and he uses the cipher to write responses. The implication is that he has no idea what the messages actually mean; it is of course an analogy to a machine answering questions in a similar way, without understanding their meaning. Searle does concede that the definition of “understand” is considerably more difficult than it appears at first glance.

Turing did not get the benefit of being able to respond to Searle’s attack, but his original paper did anticipate it, so we may assume he would maintain his position that consciousness is different from the ability to think. A further point is that one must question the room-confined Searle: if he is unable to understand the Chinese characters being passed to him, does this make him unable to think?
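Searle’s room can be sketched as a simple lookup program. The rulebook entries below are invented placeholders, not from Searle’s paper, but they make the point concrete: the program matches symbols to symbols with no model of what either side means.

```python
# A toy sketch of the Chinese Room: the "cipher" is just a lookup table.
# The rules and messages are hypothetical examples for illustration.

rulebook = {
    "你好吗": "我很好",      # "How are you?" -> "I am well"
    "今天天气好": "是的",    # "Nice weather today" -> "Yes"
}

def chinese_room(message):
    # Mechanically match the incoming symbols against the rulebook,
    # exactly as the room-confined Searle would; unknown input gets a
    # canned deflection ("Please say that again").
    return rulebook.get(message, "请再说一遍")

print(chinese_room("你好吗"))  # -> 我很好
```

The program’s answers may look sensible from outside the room, which is precisely Searle’s worry about behavioral tests like the imitation game.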

Boden comes to Turing’s defense against Searle’s attack, largely by pointing out that the appeals Searle makes to the uniqueness of biological systems compared to engineered systems are at best irrelevant and at worst incorrect [Boden, 2003]. Boden argues that there is nothing “special” about humans: computer vision can functionally equal our own, and our neurons behave in a digital way not unlike a silicon transistor.

This argument is in some ways strengthened by the perspective of Newell and Simon [Newell and Simon, 1976]. Even in 1976, they viewed computer science as an empirical science – a far cry from the logic-based mathematics initiated by Turing and his colleagues. The increased complexity of computers led to their being regarded as “the programmed living machine…the organism we study” by the duo.

2.1 Description of Artificial Intelligence and Machine Learning

Turing’s “Computing Machinery and Intelligence” is a seminal paper in artificial intelligence and machine learning, and continues to be debated in robotics, computer science, psychology, and philosophy today. While the paper had far-reaching and interdisciplinary implications, this analysis will focus on the implications for applied sciences.

Even the very definition of Artificial Intelligence (AI) is controversial. When trying to describe intelligence, one soon recognizes that the description is limited by the observations and experiences of a human, and only one human at that. Intelligence is considered an important distinction of the human experience, but we find that in defining intelligence we declare agency in a highly personified way.

Worse, it is our instinct to assign intelligence by factors that, on examination, are arbitrary: use of language, maths, or tools, for example. We find ourselves no further along than Descartes, who ascribed agency to himself by declaring, “I think, therefore I am”. Unfortunately, this leaves us unable to evaluate other entities.

Turing’s solution, as will be outlined and analyzed in more detail in the second part of this paper, was to avoid any one criterion for assessing intelligence, and instead to propose that an intelligent machine is one able to successfully imitate a human in written conversation. The test has since been named the “Turing Test”, and probably remains the best tool for evaluating artificial intelligence, although little recent effort has been put into designing a machine to beat it [Rich and Knight, 1991] [Haugeland, 1989].

It is perhaps difficult to separate early computing and artificial intelligence, as the young fields had yet to specialize. It could be argued that the field of computing started quite early, with philosophers such as Descartes and Hobbes contemplating the nature of mind and machine, and the similarities between the two. Early mechanical computers were proposed by Wilhelm Schickard (1592-1635), Blaise Pascal (1623-1662), Gottfried Leibniz (1646-1716), and Charles Babbage (1792-1871) [Haugeland, 1989]. Arguably, Babbage’s design was the first that could be considered a computer rather than a calculator, and Lady Lovelace’s contribution to the field in documenting and explaining Babbage’s never-built machine should be noted. In 1936, Turing proposed the Universal (Turing) Machine, a theoretical architecture for a computer that inspired the von Neumann architecture used in nearly all computers now [Haugeland, 1989]. A Universal Turing Machine has a tape and a read/write scanner. The tape holds many ‘bits’ of information, which the head either reads or writes.

The head also knows what state it is in. From this alone, this quite simple machine was proven by Turing to be theoretically able to model any other machine. Artificial intelligence was first seriously considered around the time of the first computers, around 1950, when Turing published his paper proposing the “Turing Test”. From 1952 to 1969, quite good academic progress was made in AI, led by MIT, Stanford, Carnegie Mellon University, Dartmouth, and IBM [Boden, 1990].

Many of the early proposed problems in the field were solved, most being simple examples of playing games, word recognition, algorithmic problem solving, machine learning, or solving maths problems [Rich and Knight, 1991]. However, progress soon slowed, as managing the complexity of problems proved more difficult than had been anticipated.

The 1990s again saw considerable progress, with improvements in speech recognition, autonomous vehicles, industrial systems monitoring, and computers playing board games better than the best humans [Mitchell, 1997] [Norvig and Russell, 2003]. Many of these have since been commercialized, or are on the cusp of being so. Through this process, AI has become application-specific, and most work is focused on applications with more utility than passing the Turing Test [Norvig and Russell, 2003]. Dennett, who was Chair of the Loebner Prize Competition in the 1990s (a Turing Test challenge), questions whether the Turing Test will ever be beaten, and considers attempting to pass the test not useful research for serious modern AI [Dennett, 2004]. Turing himself anticipated the test would be challenging to pass: he once predicted that by 2000 a machine would exist that would have a 30% chance of passing the test in a five-minute conversation [Norvig and Russell, 2003]. Later, he said in a radio interview that he expected it would be over 100 years (from 1952) until a machine was built that would reliably win the imitation game [Proudfoot and Copeland, 2004].

Theoretical AI has, in response, created some divisions within itself. The first is between strong and weak AI: weak AI can act like it is intelligent, whereas strong AI is intelligent and conscious [Haugeland, 1989]. A criticism of the Turing Test is that it may allow an entity to pass which is only weak AI. Modern AI has also distanced itself from Good Old Fashioned AI (GOFAI), an approach based more on strict logic [Haugeland, 1989]. Present interest is focused more on the management of uncertainty through probabilistic systems.

An important technique in AI is machine learning, which is defined as performance at a task improving with experience [Mitchell, 1997]. Learning was proposed by Turing in the paper as a possible technique to enable flexibility in a machine. The first examples were seen as early as 1955 [Proudfoot and Copeland, 2004]. But it wasn’t until the 1990s that the field of computing advanced sufficiently for great advances to be made [Mitchell, 1997]. The general learning technique is similar to that proposed by Turing: the program is given a function such that it learns by trying to avoid the previously experienced “pain” of a mistake. However, the actual implementation of these algorithms in a practical sense is probably more difficult than Turing and his contemporaries anticipated [Mitchell, 1997]. This in part explains the slower than expected progress in AI.
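The pleasure/pain idea can be illustrated with a toy learner. The task (choosing the right response to a signal) and the penalty scheme below are invented for the example, but the behavior matches Mitchell’s definition: performance at the task improves with experience.

```python
# A toy sketch of learning by avoiding remembered "pain": the program
# accumulates a penalty for each mistake and comes to prefer the action
# that has hurt it least. Task and penalty scheme are illustrative only.

def train(correct_action, actions, episodes=30):
    pain = {a: 0 for a in actions}  # accumulated penalty per action
    mistakes = 0
    for _ in range(episodes):
        # Pick the action with the least remembered pain (ties: list order).
        choice = min(actions, key=lambda a: pain[a])
        if choice != correct_action:
            pain[choice] += 1  # a mistake hurts and is remembered
            mistakes += 1
    return choice, mistakes

# The learner errs early, then settles on the correct action.
print(train("stop", ["go", "stop", "wait"]))  # -> ('stop', 1)
```

After a handful of painful episodes the mistakes stop entirely, which is the improvement-with-experience that defines learning here.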

However, machine learning, and artificial intelligence in general, both benefit from and produce useful cross-pollination with other fields, including statistics, philosophy, biology, cognitive science, and control theory [Mitchell, 1997].

2.2 Importance to Engineering and Industrial Practice

While Turing discussed AI in a quite general and academic sense, the field has since become application-specific, where programs are written for a niche task. Robotics, including autonomous vehicles, is probably the most mechanical of the applications of AI, and is of growing importance [Proudfoot, 1999]. Industrial processes and systems are increasingly controlled by AI and AI-inspired systems [Norvig and Russell, 2003].

In some ways, AI remains heavy on potential compared to the benefits currently realized from it. Futurist Ray Kurzweil envisions AI as a key technology in a process that will create unprecedented change in our society [Kurzweil, 2001]. He argues that technology improvement and adoption follow an exponential rather than linear growth curve, as is seen in Moore’s Law of computer transistors. This is in part because improved technology allows the design of further improved technology. Assuming AI follows a similar progression, the relatively slow early progress is to be expected, with accelerating returns to be seen in the future.
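Kurzweil’s argument is easy to see numerically. Assuming a doubling per period, as in Moore’s Law (the starting values below are arbitrary), exponential growth looks unremarkable at first and then rapidly dwarfs a linear trend:

```python
# Compare linear growth (add one unit per period) against exponential
# growth (double every period, as in Moore's Law).

def linear(t):
    return 1 + t

def exponential(t):
    return 2 ** t

for t in (1, 5, 10, 20):
    print(t, linear(t), exponential(t))
# by t = 20 the exponential curve is roughly 50,000x the linear one
```

The early periods look similar on both curves, which is why, on Kurzweil’s account, slow early progress in AI says little about its eventual trajectory.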

This leads to what Kurzweil calls the Singularity, the point when the slope of this technology-growth curve becomes so steep as to be effectively asymptotic. Kurzweil predicts similar and inter-connected growth in neuroscience and (especially bio-flavored) nanotechnology. According to Kurzweil, when this all happens, we will have essentially infinite access to technology. One outcome he is fond of mentioning is that we would see our own immortality [Kurzweil, 2001]. So, at least according to Kurzweil, the implications of AI for engineering, industry, and beyond are vast.


Alan Turing made considerable contributions to computer science and affiliated fields. “Computing Machinery and Intelligence” is one of several seminal papers contributing to the legend of the author. The paper has had wide impact, reaching beyond engineering and computer science to shake the very foundations of how people view themselves. Like Galileo and Copernicus, Turing has forced us to reflect on our own state of being. More practically, he pioneered in this paper the fields of AI and machine learning, which have recently proven to be of considerable value. One can only anticipate that computer-based intelligence will play a greater role in the future of humankind.

In addition to his notable contributions to science and academia, Alan Turing was a fascinating man, and one to whom history is only beginning to give proper recognition. One can hope that the future will give Turing the acknowledgment that his contributions to an increasingly important field deserve.


M Beeson. The Mechanization of Mathematics. In Alan Turing: Life and Legacy of a Great Thinker. 2004.

M Boden. Escaping from the Chinese Room. 2003.

M Boden. The Philosophy of Artificial Intelligence (Oxford Readings in Philosophy). OUP Oxford, 1990. ISBN 0198248547.

Daniel Dennett. Can Machines Think? In C Teuscher, editor, Alan Turing: Life and Legacy of a Great Thinker, pages 121-145. 2004.

John Haugeland. Artificial Intelligence: The Very Idea. A Bradford Book, 1989. ISBN 0262580950.

Andrew Hodges. Alan Turing: The Enigma. Vintage, 1992. ISBN 0099116413.

Ray Kurzweil. The Law of Accelerating Returns, 2001.

David Leavitt. The Man Who Knew Too Much: Alan Turing and the Invention of the Computer. Phoenix, 2007. ISBN 0753822008.

Millican and Clark, editors. Machines and Thought: The Legacy of Alan Turing. Oxford University Press, 1999.

Tom M. Mitchell. Machine Learning (McGraw-Hill International Edition). McGraw-Hill Higher Education, 1997. ISBN 0071154671.

A Newell and HA Simon. Computer Science as Empirical Inquiry: Symbols and Search. Communications of the ACM, 1976.

P Norvig and S Russell. Artificial Intelligence: A Modern Approach, 2003.

Proudfoot. Robots and Rule Following. In C Teuscher, editor, Machines and Thought: The Legacy of Alan Turing, Volume I, page 312. Oxford University Press, USA, 1999. ISBN 0198238762.

Proudfoot and Copeland. The Computer, Artificial Intelligence, and the Turing Test. In Christof Teuscher, editor, Alan Turing: Life and Legacy of a Great Thinker. Springer Berlin Heidelberg, Berlin, Heidelberg, 2004. ISBN 978-3-642-05744-1. doi: 10.1007/978-3-662-05642-4.

Elaine Rich and Kevin Knight. Artificial Intelligence. McGraw-Hill, 1991. ISBN 0070522634.

JR Searle. Minds, Brains, and Programs. Behavioral and Brain Sciences, 1980.

Sara Turing. Alan M. Turing: Centenary Edition. Cambridge University Press, 2012. ISBN 1107020581.

I can’t draw: I’m not an artist

Growing up, I thought I was good at the sciences, less good at reading and writing, and lousy at the arts.  How did I know?  That was what my grades told me in school.

The problem is, for young minds, this labelling is false and harmful.

Art in school is about how straight you can draw lines and whether you can paint without crossing over them.  At least, this is what art is until the later years of school.  Then, you are taught what art really is: communicating a message between the creator and the person experiencing the art.  But by the time I was in high school, I had convinced myself that art wasn’t something I was good at or needed to be good at.


I don’t consider myself an artist today, but one closely related role I do play is that of a designer.  This happened through an education in mechanical engineering, where my focus became medical devices, with additional interests in energy and space exploration.  This was around 2008, so not that long ago, but pre-iPad and pre-Kickstarter.

The result is that my design education and journey have taken place in an era where product success is so intertwined with “soft” design: user experience, human factors, and communicating a message to the user.  This part of design engineering is far, far more about art than it is about math or physics.

I now find the “art” of design to often be more enjoyable than the “science” of it.  But I feel like I’m playing catch-up; I wish I hadn’t been pigeon-holed as “not an artist” when I was far too young for anyone to know what I was going to be.

I think feedback and even ranking systems are good in grading, even for young students.  Feedback helps to improve future performance, and comparative ranking by grades provides competition: a reflection of life beyond school.


The problem is that performance in subject X isn’t a good predictor of whether a person will do well at X in the real world.  Studying X in school isn’t doing X.  School is designed to teach basic skills (arithmetic, writing essays, researching), the ability to learn, and, lastly, content.  Performance in school is judged on how well and quickly the student can learn these skills and content, which is different from how well the student can succeed in that field.

Premature judgements based on school performance can affect a student’s career.  I have known a lot of people who picked a particular career because they were good at it in school, or avoided another because they were bad at it.  It is unfair to both the student and society that we are doing such a poor job of helping people toward careers they will enjoy and be good at.

We need to develop a culture, in schools and the institutions that surround them, where failure is OK.  Labelling people as “good” or “bad” at certain things in school needs to stop, especially for young students.  These self-images can stay with students for a long time.  Instead, we should try different approaches with young students who are struggling.  We shouldn’t let the student or anyone else say they are “good” or “bad” at something before anyone can possibly know.

Saving Brick and Mortar

The big news story of the past week in Britain has been the cold temperatures and snowfall, which, as a Canadian, I am free to find amusing, given the fear caused by comparatively mild weather.

The second biggest story is the recent collapse of four major retailers here (from Yahoo): Comet, Jessops, HMV, and Blockbuster.  Comet is an electronics retailer, Jessops does photography, HMV sells music and video, and Blockbuster is a film and game rental company.  Such stories are not limited to the UK, as worldwide recessions and the growth of e-retailers have hit retailers stateside also.

One simple explanation for the demise of these companies is disruption from online services, and perhaps also the rise of digital cameras and smartphones for the case of Jessops.  The Yahoo article linked to above provides a good summary of other reasons for the fall of these rather large companies.

Does this signal the beginning of the end of traditional, brick and mortar retailors?

Brick and mortar is facing a multi-pronged assault.  First is from online retailers and distributors such as Amazon, Netflix, and iTunes.  Second is an emerging threat from home and local mini-manufacturing, such as desktop 3D printing.  I am personally pretty bearish on home manufacturing becoming mainstream, although such systems have a chance if part quality improves, costs continue to come down, and the ability to work with multiple materials improves.  A more likely threat is from local mini-manufacturers, using technologies like 3D printing, waterjet cutting, and injection molding to make semi-custom products on demand.  The advantages are less machine down-time and distributed costs.  Additionally, staff at a mini-manufacturer would be able to assist with design, design selection, machine operation, and assembly.

Each of these distribution methods has its own benefits and shortcomings.  Some are detailed in the table below.


Home manufacturing and mini-manufacturers are still in relative infancy, and it is hard to assess how great a threat they are to brick and mortar retail.  Personally, I think the processes most often suggested for home or mini-manufacturing have inherent weaknesses related to quality and multi-material products.  Additionally, the main value-add of this kind of manufacturing is customization, which I don’t think will have the mainstream appeal to justify higher costs over mass-produced products in most instances.  I have previously written in more detail on my opinions here.

For brick and mortar sales, a lot of value comes from being able to interact with the product.  Look and fit are much easier to judge in person than through a web browser.  Personal and expensive products like wedding rings and cars are things people usually don’t buy without first interacting with them.  For these kinds of high-end products, knowledgeable sales staff are also valuable in choosing the right purchase for you.  This contrasts with economy-minded products, where online reviews are often more helpful than sales associates.

Groceries are a retail category that has been slow to gain popularity in e-retail.  This is due to a trade-off between time and choice.  Being perishable, groceries are a category where 2-day shipping is insufficient.  Further, people like to select the best produce, meats, and breads from those displayed at the store.  In cases where quick access to the product is required, brick and mortar is the preferred type of retailer.

Apple stores have been lauded as an example of a great brick and mortar model, evidenced in part by Tesla Motors modeling their showrooms after Apple stores.  Both are an additional vertical in products and experiences that are already highly controlled by their respective companies.  This allows the companies to control the entire customer experience, including selection and purchase along with usage.  Additionally, there is a marketing and advertising aspect to having such a visible front for the respective products.

There are segments where brick and mortar seems unlikely to be able to compete with e-retailers.  When the product is information, near-instantaneous free transfer, near-zero inventory cost, and convenience make a clear case for digital distribution, as with Netflix for films and TV, iTunes for music and other media, and the shift from newspapers and magazines to web-based equivalents.  The only option for brick and mortar in these industries may be to hope for a time machine, to travel back in time and develop or acquire digital distribution.

A second example is Dell, especially the company as it was around 2005.  Dell had a great direct-order model, where the buyer could semi-customize their purchase and also get great value compared to buying in-store.  In some ways, the model led to the cannibalization of the consumer computer hardware industry.  Competition and commoditization led to a collapse of margins.  It is said that in an efficient market there is no money to be made.  That is what has happened in computer hardware: it’s great for consumers but not for manufacturers.

Distribution through online sales reduces regional market inefficiencies.  A customer in San Francisco no longer has to choose between local stores: they have access to a wide array of stores, subject only to tariffs and shipping costs.  This lowers prices because a store in San Francisco is now competing with online stores all over the world, in addition to local ones.

A discussion on e-retail would be incomplete without mentioning Amazon, the giant in the space.  Amazon wins due to a massive diversity of products stocked, quick and cheap shipping, convenient use, and meaningful user reviews.  Hidden from the purchaser, they have the infrastructure and distribution centers to make the experience work.  With that infrastructure and its momentum, the burden is probably on any other retailer, web-based or not, to show how they will beat Amazon.

How can this be done?  I have a few ideas and suggestions.

1. e-retail is only as fast as the postman

For things that are urgent or perishable, brick and mortar has an advantage over e-retail.  While there is convenience in shopping from home, there is also convenience in buying something and getting it right away.

2. Quality sales advice

Brick and mortar stores should be much better than e-retail at consumer education, and they usually are for high-end and very personal products.  There is no reason e-retail resources such as price comparisons and user/expert reviews can’t be as accessible in-store as online.  There are apps and interactive displays starting on this, but I think there is further mileage to be had.

However, the key advantage of brick and mortar should be knowledgeable and caring sales staff.  People interact with products in a very personal way, and a salesperson should be much better suited to understand the user’s needs than a web based script or robot.  Consumer information and marketing could be a key area for innovation for brick and mortar retail.

3. Beating e-retail on price is trench warfare

Competing on price alone is rarely a sustainable business model, and brick and mortar probably has an inherent disadvantage to online retail due to higher overhead.  The lone advantage may be in shipping in bulk for brick and mortar, compared to by unit for e-retail.  In this, a model like Costco may be able to remain competitive due to high volume, low number of products, and low-overhead operations.

I see a continued shift from brick and mortar to e-retail.  The consequences could be quite far-reaching.  Not only will business transfer from traditional to online retailers, but there are implications for employment, international trade, a surplus of retail real estate, and the decline of the cultural pastime of shopping.  It’s still early to draw conclusions on home and mini-manufacturing, but I don’t see these as major threats in the space, especially not in the near term.

There exists a continued opportunity to disrupt the retail space.  For example, mobile devices are still relatively greenfield for retail apps, without any dominant players.  Further, the social aspect of shopping should not be overlooked.  Perhaps this factor may lead to support brick and mortar, or some innovation improves the social aspect of online retail.

What do you think?  I’m always interested to hear your comments below.