Generally speaking, Sam Altman, president of Silicon Valley incubator Y Combinator, thinks technology gets regulated too much. But it’s different when it comes to superhuman machine intelligence (SMI) — machines that can get smarter on their own and possess greater computing power than humans in many respects.
“The U.S. government, and all other governments, should regulate the development of SMI,” Altman declared in a blog post today. “In an ideal world, regulation would slow down the bad guys and speed up the good guys — it seems like what happens with the first SMI to be developed will be very important.”
The remarks follow recent comments on AI from public figures like Elon Musk and Bill Gates.
Altman’s perspective stems from the possibility of artificially intelligent machines becoming capable of killing people. So today he is setting forth some guidelines for regulation.
“I think it’s definitely a good thing when the survival of humanity is in question,” Altman wrote of regulation.
For one thing, Altman wants regulations to include a system for measuring the benefits of using or training machine intelligence. And perhaps there should also be standards for disclosing research.
“The companies shouldn’t have to disclose how they’re doing what they’re doing (though when governments get serious about SMI they are likely to out-resource any private company), but periodically showing regulators their current capabilities seems like a smart idea,” Altman wrote.
Altman also would like regulations to impose some sort of external review for the development of AI systems.
“For example, beyond a certain checkpoint, we could require development [to] happen only on airgapped computers, require that self-improving software require human intervention to move forward on each iteration, require that certain parts of the software be subject to third-party code reviews, etc.,” Altman wrote.
What’s more, the regulations should mandate that the first SMI system be unable to harm people, though it should be able to sense other SMI systems becoming operational, Altman wrote.
Further, he’d like to see funding for research and development flowing to groups that agree to these rules.
And finally, Altman seeks “a longer-term framework for how we figure out a safe and happy future for coexisting with SMI — the most optimistic version seems like some version of ‘the human/machine merge.'”
Altman doesn’t say he’d like these regulations to be implemented by a certain date. But it’s clear he feels a sense of urgency.
“Many people seem to believe that SMI would be very dangerous if it were developed, but think that it’s either never going to happen or definitely very far off,” he wrote in a blog post on a similar subject last week. “This is sloppy, dangerous thinking.”