With such growing disasters as global climate change and President Donald Trump looming ever closer, I thought it would be a good idea to see whether I needed to add “dangerous artificial intelligence” to my bulging list of Things to Fear.

So, I recently did a quick sanity check with two experts: John Underkoffler, CEO of Oblong Industries, and Charles Ortiz, senior principal manager of AI and senior research scientist at voice recognition pioneer Nuance. Underkoffler’s credits include designing the interface for Minority Report — an interface that Oblong is now moving into the real world — and serving as an advisor on such major sci-fi films as Aeon Flux and Iron Man.

Other experts have been pitching the idea that I should not only add Evil AI to my Fear list, but do so in big letters. Those sounding this alarm include such science/tech superstars as Stephen Hawking, Bill Gates, Ray Kurzweil, and Elon Musk.

Hawking has raised questions about whether we’ll be able to control superintelligent devices such as autonomous killing machines. Musk has said that AI may be “more dangerous than nukes.” Gates has said he doesn’t understand “why some people are not concerned.”

Both Underkoffler and Ortiz told me: Don’t worry.

First of all, Underkoffler said, we’re still a long way from true AI.

“What people are [now] calling AI is ‘weak AI,’” he said, primarily pattern matching or machine learning that updates itself based on acquired knowledge.

“An algorithm to recognize broken Triscuits — that’s what people call AI [today],” he said, adding that even IBM is not claiming that its Jeopardy-winning Watson supercomputer is true AI.
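
To make that concrete, here’s a minimal, entirely hypothetical sketch of what “weak AI” in the Triscuit-sorting sense amounts to: a nearest-neighbor pattern matcher over a couple of invented features (the feature names, thresholds, and training data are all made up for illustration). It “learns” only by storing labeled examples, and there is no introspection anywhere in it.

```python
# A toy sketch of "weak AI": a nearest-neighbor pattern matcher over
# hand-picked, hypothetical features. It stores labeled examples and
# copies the label of whichever stored example is closest.
import math

# Each example: (area_ratio, edge_roughness) -> label.
# area_ratio: cracker area relative to an intact one; edge_roughness: 0 to 1.
TRAINING_DATA = [
    ((0.98, 0.05), "intact"),
    ((1.00, 0.10), "intact"),
    ((0.55, 0.80), "broken"),
    ((0.40, 0.90), "broken"),
]

def classify(features):
    """Label a cracker by its single nearest labeled neighbor."""
    _, label = min(
        TRAINING_DATA,
        key=lambda example: math.dist(example[0], features),  # Python 3.8+
    )
    return label

print(classify((0.50, 0.85)))  # -> broken
print(classify((0.99, 0.07)))  # -> intact
```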

Real AI “needs a kind of introspection, some form of self-awareness,” he said. That awareness is the key part of the definition of consciousness.

Underkoffler expects that, “sooner or later,” consciousness may emerge from computing processes, an event he said is at least a decade away. Ortiz agreed, saying that “we could reach this point of creating entities like humans.”

“Our best understanding of human consciousness is it is an emergent property,” Underkoffler said, one that simply arises once the conditions are ripe.

The evidence?

“Here we are,” he said.

There’s one small problem, however. We don’t actually know what consciousness is.

But we’re already at a point where we don’t fully understand everything going on inside advanced computer systems either, Ortiz noted. As an example, he pointed to the mystery of how neural nets, which are loosely modeled on biological systems, reach their conclusions.
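
To see why even a tiny network resists that kind of inspection, consider this toy two-layer net (the weights below are invented for illustration, not trained on anything). Every number is right there on the page, yet none of them reads as a reason for the output.

```python
# A minimal two-layer neural net with invented weights. Even at this
# toy scale, the "reasoning" lives in opaque numeric weights: you can
# read every number and still not say why the net favors one output.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical weights: 2 inputs -> 2 hidden units -> 1 output.
W_HIDDEN = [[0.9, -1.3], [-0.4, 2.1]]  # per-hidden-unit input weights
B_HIDDEN = [0.1, -0.2]
W_OUT = [1.7, -0.8]
B_OUT = 0.05

def predict(inputs):
    hidden = [
        sigmoid(sum(w * x for w, x in zip(weights, inputs)) + b)
        for weights, b in zip(W_HIDDEN, B_HIDDEN)
    ]
    return sigmoid(sum(w * h for w, h in zip(W_OUT, hidden)) + B_OUT)

print(predict([0.2, 0.7]))  # a number, but no human-readable rationale
```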

The jury is still out, Ortiz said, on whether we will be able to understand if self-awareness or feelings are actually occurring inside a computer — unless it tells us.

That’s how humans interact with computers, using one kind of language or another — programming or, more recently, natural language. At least until we have extensive, direct mental connections to computing processes.

More sophisticated conversations are on the drawing board. In five years, Ortiz predicted, “home systems [will be able] to have conversations, in context.” You could ask about dinner options, and the system would remember last week’s conversation about the cuisines and restaurants you like.
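
One way such a system might carry context between sessions (a hypothetical sketch, not a description of any shipping assistant) is to persist facts from earlier conversations and consult them when a related question comes up later:

```python
# Hypothetical sketch: a home system that remembers last week's chat
# about food and uses it to answer a dinner question in context.
from datetime import date

# Facts accumulated during an earlier conversation (all invented).
conversation_memory = {
    "likes_cuisines": ["Thai", "Italian"],
    "disliked_restaurants": ["Joe's Diner"],
    "last_discussed": date(2017, 2, 10),  # hypothetical timestamp
}

def suggest_dinner(memory):
    """Build a reply that folds in remembered preferences."""
    cuisines = ", ".join(memory["likes_cuisines"])
    avoid = ", ".join(memory["disliked_restaurants"])
    return (f"Last time we talked, you liked {cuisines}. "
            f"I'll skip {avoid}. Thai tonight?")

print(suggest_dinner(conversation_memory))
```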

Of course, the famous Turing Test for computer intelligence relies on conversation: if a human judge can’t tell the difference between a text exchange with a computer and one with another human, the computer passes.

But neither a Turing test, nor contextual conversations, nor conversations that reflect a computer’s common sense — which Ortiz said is coming in the near future — will tell us whether the computer is aware and conscious. In other words, it’s not yet clear how we’ll know when we’re dealing with a conscious entity.

We will just have to infer it, like we do with humans.

In other words, Underkoffler told me, “We’ll be able to recognize it if it happens.”

And that recognition will happen over time, he said, not all of a sudden.

“We’d see a more primitive species first,” he said, so we’d have time to get ready.

If it’s gradual, we can make darn sure that the AI doesn’t have the means to cause lots of trouble. No machine body designed for killing, no online connection to every nuclear-tipped missile.

In any case, Ortiz told me he doesn’t “buy the premise” that smarter computers will necessarily become evil. Underkoffler suggested that this assumption was “a weird displacement,” with humans projecting their own tendencies onto machines.

So, consciousness is probably coming to computing systems, but it’s a ways off, it won’t necessarily be evil, and while we may not know exactly when it gets here, we’ll have plenty of time to keep the AI away from the scissors of the world.

Now, about Donald Trump …
