“The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.” – Edsger W. Dijkstra, computer scientist.

Many articles about chatbots focus on their use of A.I. and argue that recent advances in artificial intelligence are making bots viable in a way that they hadn't been until now. Unfortunately, this argument is not only misguided, it is actually damaging to those who are trying to make and sell useful bots.

Artificial intelligence isn’t here. It isn’t close. It may never exist.

But A.I. isn’t necessary for useful bots.

Artificial intelligence, in the strict sense, is intelligence that makes a computer indistinguishable from a human. The concept is based on the ideas of Alan Turing, who arguably did more to win World War II than any other individual. Turing was in charge of Hut 8, a section of the British intelligence facility at Bletchley Park tasked with decoding German naval messages. He devised a range of code-breaking tools, including a device called the Bombe, which successfully countered the Germans' infamous Enigma machine. Although it is clearly impossible to quantify the exact impact of Turing's contributions, some historians estimate that without his work the war would have continued for at least another two years, at a cost of two million more lives.

For most people, developing a code-breaking device that hastened the end of the war would be the outstanding achievement of a great life. But Turing was such an exceptional man that his code-breaking was not his most important work. He is now probably most famous for his invention of the Turing machine, an idealized universal computer, and the Turing test, the canonical test for artificial intelligence. To pass the test, a machine must give replies that a human judge cannot distinguish from those of a human test subject answering the same questions.

If this is the standard, then we are still a long way from building true A.I. There are occasional stories about programs that pass the Turing test. In 2014, a chatbot called Eugene Goostman convinced 10 of the 30 judges at a Turing test event held at the University of Reading that it was human. Some people declared this result equivalent to passing the Turing test, even though it clearly isn't. Further, it turns out that if Eugene were a human, he would be a fairly stupid one. Not only did his programmers excuse his odd answers by claiming he was a 13-year-old Ukrainian for whom English was a second language, but he made all the mistakes chatbots before him have made: he dodged questions, he changed the subject, and he gave vague or evasive answers.

But the lack of real A.I. doesn't matter when we are trying to build useful chatbots. Granted, a good chatbot should appear to be intelligent, but that intelligence only has to cover a limited domain. Think of what happens when you go into a large clothing store. A salesperson will approach and offer help. If you ask about buying a shirt, this person will be able to help. But if you veer slightly off topic and ask about shoes, they will need to hand you over to a different salesperson. And if you ask about the effects of Brexit on the U.K. economy, you will likely be out of luck. Salespeople are able to help only within a very well-defined domain.

Similarly, a bot on a shoe site won't need to know about anything else, and if we restrict the necessary area of knowledge enough, we can use brute force to make the bot knowledgeable enough to be a helpful aid. And, as anyone who has had anything at all to do with computers knows, brute force becomes more effective every year, a trend Gordon Moore was the first to observe.
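
To make this concrete, here is a minimal sketch of what such a brute-force, narrow-domain bot might look like. It is a toy in Python with entirely hypothetical intents and replies, not a production design: there is no learning and no understanding, just a lookup table that covers the shoe domain and hands off everything else.

```python
# A toy, narrow-domain bot: no A.I., just brute-force keyword matching
# against a hand-built table. All intents and replies are hypothetical.

RESPONSES = {
    "price": "Our shoes range from $40 to $180. Which style interests you?",
    "size": "We stock sizes 5 through 13. What size do you usually wear?",
    "returns": "Unworn shoes can be returned within 30 days for a refund.",
}

KEYWORDS = {
    "price": ("price", "cost", "how much", "expensive"),
    "size": ("size", "fit", "sizing"),
    "returns": ("return", "refund", "exchange"),
}

def reply(message: str) -> str:
    """Map a customer message to a canned reply by keyword lookup."""
    text = message.lower()
    for intent, words in KEYWORDS.items():
        if any(word in text for word in words):
            return RESPONSES[intent]
    # Out of domain: hand off, like the clothing-store salesperson.
    return "Let me connect you with someone who can help with that."

print(reply("How much do your running shoes cost?"))     # -> price reply
print(reply("What about Brexit and the U.K. economy?"))  # -> handoff
```

Everything this bot "knows" is enumerated by hand, which is exactly why restricting the domain is what makes brute force feasible.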

Here is another way to look at the same idea. An autopilot is smart enough to fly a plane in limited circumstances, but clearly an autopilot isn't intelligent. The first ones were just gyroscopes and an altimeter. (They were built in 1912, the same year Turing was born.) That didn't stop them from flying a plane in benign conditions. A more modern example is the adaptive cruise control on my car. It is just a set of sensors connected to the throttle and brakes, not in any way genuinely intelligent, yet it is a superb safety system. With it engaged, I can drive on the expressway without using the pedals at all, and it is very hard to plow into another vehicle. Self-driving cars are just an evolution of this type of system. But again, these driving aids aren't at all intelligent. They are just machines that appear intelligent because they are as good as humans at executing a well-defined task.
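
To see how little intelligence that set of sensors connected to the throttle and brakes requires, here is a toy sketch of the kind of proportional feedback rule such a system might use. The constants and the interface are invented for illustration; a real automotive controller is of course far more elaborate.

```python
# A toy proportional controller in the spirit of adaptive cruise control:
# sensor reading in, throttle/brake command out. All constants here are
# illustrative, not anything resembling a real automotive system.

def cruise_control_step(gap_m: float, desired_gap_m: float = 30.0,
                        gain: float = 0.05) -> float:
    """Turn the measured gap to the car ahead into a command in [-1, 1].

    Positive values open the throttle; negative values apply the brakes.
    The rule is purely reactive: too much gap, speed up; too little, brake.
    """
    error = gap_m - desired_gap_m        # distance from the target gap
    command = gain * error               # proportional response
    return max(-1.0, min(1.0, command))  # clamp to actuator limits

print(cruise_control_step(10.0))  # tailgating at 10 m -> -1.0, full braking
print(cruise_control_step(80.0))  # open road at 80 m  ->  1.0, full throttle
```

There is no understanding anywhere in that loop, yet at expressway speeds it does the job as well as a human foot: a well-defined task, executed well, looks intelligent without being so.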

Sometimes, we in technology can forget this simple idea: Utility does not require intelligence (artificial or otherwise). This is true in general, and it is specifically true for chatbots.
