
Google cars designed to speed because obeying the law can be dangerous

Image Credit: Flickr User DanDeChiaro

Google’s self-driving cars are designed to exceed the speed limit by up to 10 miles per hour because stubbornly obeying the law can be a danger to everyone on the road. The legal and philosophical consequences of this fact are fascinating.

In a recent Reuters review of the Google car, reporter Paul Ingrassia was told:


Google’s driverless car is programmed to stay within the speed limit, mostly. Research shows that sticking to the speed limit when other cars are going much faster actually can be dangerous … so its autonomous car can go up to 10 mph (16 kph) above the speed limit when traffic conditions warrant.
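
Reuters’ description implies a simple policy: obey the posted limit by default, but match faster traffic up to a 10 mph overage. Here is a minimal sketch of what such a rule might look like in Python. The function name, the inputs, and the exact logic are assumptions drawn from the quote above, not Google’s actual code.

```python
# A minimal sketch of the policy Reuters describes: hold the speed limit
# by default, but drift up to 10 mph over it when surrounding traffic is
# moving faster. All names and the exact rule are illustrative
# assumptions, not Google's actual implementation.

MAX_OVERAGE_MPH = 10  # the cap Reuters reports

def target_speed(speed_limit_mph: float, traffic_speed_mph: float) -> float:
    """Pick a cruising speed from the posted limit and ambient traffic speed."""
    if traffic_speed_mph <= speed_limit_mph:
        # Traffic is at or below the limit: just obey the sign.
        return speed_limit_mph
    # Traffic is faster: match it, but never exceed limit + 10 mph.
    return min(traffic_speed_mph, speed_limit_mph + MAX_OVERAGE_MPH)

# Example: with a 60 mph limit, traffic at 68 mph yields 68;
# traffic at 75 mph yields the capped value of 70.
print(target_speed(60, 68))  # 68.0 -> 68
print(target_speed(60, 75))  # 70
```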

This is a fascinating quandary for Google’s engineers. No one knows for sure who is responsible when an automated vehicle breaks the law. If a flawed algorithm sends a driver down a one-way road because a road sign is missing, who is at fault? The Google engineer, or perhaps the construction crew that screwed up the sign?

As states slowly allow for automated car testing on their roads, they’re piecing together the legality one situation at a time. The old standards of determining guilt may not apply to robots.


“Criminal law is going to be looking for a guilty mind, a particular mental state — should this person have known better?” University of Washington’s Ryan Calo, an expert in tech law, told the New York Times. “If you’re not driving the car, it’s going to be difficult.”

But the idea of programming a car to break the law ahead of time is perhaps the most fascinating legal question. Sometimes the spirit of the law contradicts the written law. Laws are designed to save lives, but they aren’t flexible enough to deal with every single situation.

Thanks to the big data gathered on car accidents, Google will know when speeding is actually safe and when it isn’t. Right now, Google engineers may suspect that exceeding the speed limit when other cars are also speeding is the safest thing to do. And a law could be designed to allow Google’s self-driving cars to legally speed under those general circumstances.

But as Google collects more data, its decisions will become increasingly unpredictable.

Perhaps it is safest to exceed the speed limit by more than 20 miles per hour on St. Patrick’s Day, because driving at or near the limit around drunk drivers agitates them and makes them more reckless. We doubt a law could ever be flexible enough to specify every circumstance in which Google should be permitted to speed.

And even if a robot is allowed to break the law, who decides the conditions? Is it permissible to speed when doing so would reduce the chance of a traffic fatality by 10 percent? What about 5 percent?
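
To make that question concrete, here is a hedged sketch of what a regulator-set threshold might look like in code. The 10 percent figure comes from the question above; everything else, including the names and the risk inputs, is hypothetical.

```python
# A hypothetical regulator-set threshold, following the question above.
# The risk estimates would come from some predictive model; here they
# are simply inputs. All names and numbers are illustrative assumptions.

FATALITY_RISK_REDUCTION_THRESHOLD = 0.10  # the 10 percent from the example

def speeding_permitted(risk_at_limit: float, risk_when_speeding: float) -> bool:
    """Allow speeding only if it cuts estimated fatality risk enough."""
    if risk_at_limit <= 0:
        return False  # no risk to reduce, so no justification to speed
    relative_reduction = (risk_at_limit - risk_when_speeding) / risk_at_limit
    return relative_reduction >= FATALITY_RISK_REDUCTION_THRESHOLD

# Example: speeding drops estimated risk from 0.020 to 0.017, a 15 percent
# relative reduction, which clears a 10 percent threshold.
print(speeding_permitted(0.020, 0.017))  # True
```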


These are questions that will ultimately have to be answered by a regulatory body. For now, it’s all in Google’s hands.
