I was fascinated to meet Masayoshi Son, the CEO of SoftBank, one of the world’s biggest tech companies, this week at the ARM TechCon conference in Santa Clara, Calif. Son believes in doing really big things. He bought Sprint for $21.6 billion in 2013. That didn’t work out so great at first, but Sprint is on the verge of making some money. Then, in September, SoftBank bought chip designer ARM for $31 billion. And now he’s raising a $100 billion fund with a Saudi Arabian group. This billionaire struck me as something like a character out of a video game.
In a small breakfast meeting, Son said he wants to make the Singularity happen. The Singularity, a term coined by futurists such as Ray Kurzweil and Vernor Vinge, refers to the day when machine intelligence will exceed all human intelligence combined. Son believes that we’ll first see the Internet of Things produce more than a trillion connected devices in the next 20 years. And those devices will lead to the Singularity. To do what he wants, Son confessed that $100 billion isn’t enough.
At first blush, Son’s talk didn’t seem to have much to do with the game business. But the more I thought about it, the more I concluded that he should be talking to a wider group of people and that video game storytellers and futurists are among them. On top of that, video game creators should now wake up, poke their heads above the borders of their own business, and take a look around at the real world.
We should all take notice that we are living in an accelerated time. Artificial intelligence technologies have taken off exponentially, with advances in computing power and deep learning neural networks enabling machines to handle recognition tasks that only humans were good at in the past. A.I. is moving fast, and it has begun to command a huge amount of investment in Silicon Valley and the rest of the world. The engine has started. The timetable for creating the Singularity is about 30 years, according to Son. Some people think it’s all hype, but it may happen sooner than you think. As Microsoft cofounder Bill Gates often says, it’s easy to overlook gradual changes that lead to a profound revolution over a long period of time.
Investors like Son can take capital investment and accelerate the timeline. They can marry capital with Moore’s Law to give technological progress a huge boost, said Rick Thompson, partner at Signia Venture Partners, in an interview with VentureBeat. But I think it’s important to bring in the storytellers — the science fiction writers and video game designers who have been warning about A.I. — into the conversation. Should we make the Singularity happen?
The billionaires of our time are taking stands on this question. Bill Gates, Tesla founder Elon Musk, and physicist Stephen Hawking have expressed their fears that unbridled A.I. could go beyond our control and spell the doom of humanity — much like James Cameron warned us about with Skynet and the rise of the machines in The Terminator. Son, Google’s Larry Page, and Amazon’s Jeff Bezos are racing to make A.I. happen.
Video game creators have been at the leading edge of A.I. as well. They’ve been on a quest to make believable humans, and they have almost gotten there with the quality of computer graphics. Cubic Motion, Epic Games, and Ninja Theory made some great strides on that front earlier this year. But game developers in general are also on an epic quest to create A.I. behavior that beats human players when it comes to contests of skill. Google took a big step in this direction when its DeepMind A.I. beat the best human Go player. In the race to make A.I. better, have they thought about the end game?
Hollywood is weighing in on the question, as HBO’s fascinating show Westworld is delving into the ethics of enslaving androids in the service of human fantasy. Blade Runner, the classic Ridley Scott film about hunting down rogue androids, is getting a sequel. I recently watched the 2015 film Ex Machina, about a billionaire’s quest to create an artificial human. When asked why he was doing it, the billionaire in the film answered, “Somebody is going to do it.” It might as well be him.
It struck me that Son said the same thing, referring to the explosion of the Internet of Things and the subsequent Singularity.
“Whether we like it or not, this Cambrian Explosion is happening,” Son said. “Would you like to be on the side to be able to eat or be eaten?”
The video game creators are weighing in as well. This summer, Deus Ex: Mankind Divided delved into the ethics of human augmentation. And David Cage, a video game designer at Quantic Dream who has been obsessed with creating lifelike humans, has a game coming called Detroit: Become Human, about an android who must hunt down rogue servant androids who have turned on their masters.
I asked Son whether the Singularity was a good thing or something dangerous, as Elon Musk has warned. He replied, “If you think about the history of mankind, fire made man’s life dramatically change. It’s dangerous if you misuse it, but if you use it in the right way it makes life dramatically better. It’s a double-edged sword. The singularity, if misused, could be super dangerous, but if we use it in the right way, it can help people lead longer and healthier lives. It can help people be more productive, more happy. We might never see accidents on the highway again.”
He added, “There are lots of great results we can achieve with this technology. If someone could use it, it could be dangerous. But I’m always thinking on the brighter side. I believe in good faith and good will.”
I believe that all of these people — the billionaires, the venture capitalists, the technologists, the science fiction writers, the video game creators — should all get in the same room and talk about this issue. This is perhaps the most important issue of our time, and it is a mistake to think that we can postpone this discussion for another time. It’s going to come. When it does, hopefully everyone will remember the “Three Laws of Robotics,” created by sci-fi author Isaac Asimov in 1942. Those laws call for robots to protect human beings.
I’m reminded of a talk I heard by Mike Abrash, chief scientist at Oculus, a few weeks ago. He captured my imagination when he told a story about a conversation he had five years ago with engineer Atman Binstock about why he should get involved in remaking reality if it would happen anyway, whether he worked on it or not.
He told Binstock about the “myth of technological inevitability, this idea that because technologies are possible, they will just happen naturally.” Abrash said, as recounted by Binstock, “Instead, the way technological revolutions actually happen involves smart people working hard on the right problems at the right time. And if I wanted a revolution, and I thought I was capable of contributing, I should be actively pushing it forward.”
In this case, I think we should pour a lot of engineering resources into making sure that the Singularity doesn’t happen. Or rather, into making sure that the worst fears about A.I. won’t come about. This task will be exceedingly hard, as it is difficult to stop a technological innovation — such as the spread of the atom bomb — from happening. I would love to see people across industries start discussing this problem. I would wager that Cage and Son haven’t had a conversation. If the genie gets out of the bottle, it’s too late. We should get everyone together in the same room and hash this out.