
Artificial intelligence: How afraid should we be?


Does artificial intelligence threaten mankind? It’s an idea recently suggested by Stephen Hawking, perhaps the most famous scientist alive today, and his remarks made headlines around the world. It attracted my attention because artificial intelligence has been my life’s work — it was the subject of my PhD and I cofounded a business based on bringing its power to smartphones. The exact business, in fact, that Professor Hawking was referring to when he made his remarks.

I have huge respect for Professor Hawking; he’s the best in the world at taking complex physics and making it mainstream. My company, SwiftKey, has spent nearly three years working behind the scenes on the communication system Professor Hawking uses to write and speak. We partnered with Intel, integrating the next-word prediction software SwiftKey is known for into their system for Professor Hawking. The work has been hugely fulfilling — we believe we’ve doubled his typing speed, and it’s been a fascinating project for our engineers.


But does the kind of work emerging from AI companies today threaten mankind? I believe the danger is at the very least a long way off. If the history of AI is characterized by anything, it’s over-optimism. Alan Turing, the father of modern computing, thought we would have truly intelligent computers by the close of the 20th century, and many since then have also overestimated our ability to replicate the intelligence of the human brain.


We systematically underestimate the incredible complexity of both the natural world and the human brain, the most complex object in the known universe. Machines today process and analyze huge volumes of complex data and can mimic human intelligence in narrow areas through clever machine learning and statistical algorithms. But the human mind tackles problems of such complexity, integrating such a staggering diversity of data sources, that it still confounds our most powerful machines. Plus, it’s not as simple as just reproducing the decision-making patterns of human brains. The human brain sits within the body, which itself inhabits the natural world. I believe that a truly intelligent machine must also inhabit a similarly complex “body” and have the ability to interact with the world in meaningful ways. This idea is known as “embodied cognition.”

The speculation that “full AI,” the kind that could outsmart us and consistently outpace human development, is imminent remains at the moment just that: speculation.

That’s not to say there hasn’t been huge progress in AI; the communication system helping Professor Hawking write and speak is just one example. This progress can also be seen in computers’ growing ability to translate text between many languages, making the content of the web far more widely available on a global scale. It’s the same kind of problem-solving that’s enabling advanced trials of self-driving cars. This is all the result of “narrow AI” (as distinct from the scarier “full AI” type), and it represents significant progress on specific problems, thanks to powerful machine learning techniques and the growing accessibility of big data.

These kinds of problems have traditionally been difficult for computers to solve because of their inherent complexity and ambiguity, but within the last five years or so the reliability of such software has improved markedly, though it’s still far from perfect. The problem SwiftKey solves is the hassle of typing on mobile phones. We use AI to learn from individual users; our apps understand the way people use language and continually adapt, autocorrecting even the most unusual words and phrases and predicting what you’ll type next. Our algorithm learns from and adjusts to your writing style, even if you’re juggling up to three languages simultaneously.
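To make that adaptation idea concrete, here is a deliberately simplified sketch in Python — not SwiftKey’s actual, proprietary algorithm, just a toy bigram language model that blends general language statistics with counts learned from an individual user, so its suggestions drift toward that person’s style:

```python
from collections import Counter, defaultdict

class ToyNextWordPredictor:
    """A toy bigram next-word predictor (illustrative only).

    It blends a general "background" model with counts learned from
    one user's own text, so predictions adapt to personal style.
    """

    def __init__(self, user_weight=0.7):
        # Each model maps a previous word to a Counter of next-word counts.
        self.background = defaultdict(Counter)
        self.personal = defaultdict(Counter)
        self.user_weight = user_weight  # how strongly personal usage dominates

    @staticmethod
    def _train(model, text):
        words = text.lower().split()
        for prev, nxt in zip(words, words[1:]):
            model[prev][nxt] += 1

    def train_background(self, corpus):
        """Learn general language statistics from a large corpus."""
        self._train(self.background, corpus)

    def learn_from_user(self, text):
        """Continually adapt to an individual user's typing."""
        self._train(self.personal, text)

    def predict(self, prev_word, k=3):
        """Return the k most likely next words after prev_word."""
        scores = Counter()
        weighted = ((self.background, 1.0 - self.user_weight),
                    (self.personal, self.user_weight))
        for model, weight in weighted:
            counts = model.get(prev_word.lower(), Counter())
            total = sum(counts.values())
            for word, count in counts.items():
                scores[word] += weight * count / total
        return [word for word, _ in scores.most_common(k)]

predictor = ToyNextWordPredictor()
predictor.train_background("i am going to the shop and then i am going home")
predictor.learn_from_user("i am going to the gym i am going to the gym")
print(predictor.predict("going"))  # personal usage pushes "to" to the top
```

A production keyboard engine layers far more on top of this (error-tolerant handling of touch input, multilingual modeling, and context longer than a single word), but the weighted blend of general and personal language data captures the adaptive behavior described above.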

The future potential of AI lies in being able to harvest data from diverse sources and build complex conceptual structures to tackle problems that are far more general in nature — for example, how to solve climate change or cure cancer. While I believe there is clear potential for AI to play a central role in solving problems at this scale, we’re not there yet and probably won’t be for a long time to come.

That’s not to say the debate triggered by Professor Hawking isn’t hugely valuable. It’s high time we had an open and honest dialogue about the implications of our progress. With any transformative technology there are pros and cons, beneficial applications and harmful ones. In the past we’ve seen similar debates emerge around atomic technology and nanotechnology, for instance. Talking about the implications for us as citizens, business leaders, and academics is vital, and because AI is a different type of technology, it needs a debate of its own.


What’s certainly true is that AI can no longer be just a subject for academia. It’s out there in the consumer market; you probably have the power of advanced “narrow AI” in your pocket right now. More and more technologies driven by artificial intelligence are permeating our individual technology experiences — our smartphones, our homes, our cars: from Google Now telling you the most efficient commute route to predictive apps that learn from each user and anticipate their needs and future behavior. Plus, we’ve seen IBM’s Watson outsmart two human contestants to win the $1 million grand prize on Jeopardy!, a computer reportedly pass the Turing Test, and Google announce the acquisition of British AI company DeepMind.

A new era for AI is on its way, and we need to discuss its likely ethical implications, from data security as machines analyze ever more of our lives to whether self-driving cars should prioritize the safety of the driver over the well-being of pedestrians who stray into their path. But are we currently at risk of being extinguished by our own creations? Not for a long time, in my view.

Ben Medlock is cofounder and CTO of SwiftKey.
