AUSTIN, Texas (Reuters) – The Alamo Drafthouse Cinema Plaza here is a strip mall with a pet-accessories store, a Thai restaurant and a yogurt shop, an unlikely venue to display the high-tech future.
But one Saturday morning in March, Google did just that. A small convoy of its driverless cars cruised into the fading asphalt parking lot to give test drives – test rides, actually – to American mayors visiting Austin’s annual South by Southwest tech-and-culture festival.
Mayor Richard J. Berry of Albuquerque, New Mexico, was impressed with how the cars dodged pedestrians and fallen tree limbs. Sam Liccardo, mayor of San Jose, California, right in Google’s backyard, was impressed that he got to see the cars at all. “These things are crawling all over my city” in tests, “but I had to come to Austin to ride in one,” said Liccardo. “This is going to change cities.”
But before that happens, Google needs to change regulations – the federal, state and local edicts that cover everything from whether cars must have steering wheels to who’s at fault if a driverless car hits another vehicle. And so behind the technology display here in Austin was something as formidable as the technology but far less noticed: Google is mounting a lobbying and public-relations campaign across America to win acceptance for “autonomous vehicles,” as they’re formally known, and to shape the rules of the driverless road.
Last year, Google made Austin the first site outside Silicon Valley where it tests its driverless cars on public streets. This year, it has added Kirkland, Washington, and Phoenix, Arizona.
Expanding to more cities is aimed at winning public acceptance for driverless cars, Google executives acknowledge, in addition to testing them in different driving conditions. The test cars all have company-trained drivers, and none yet offer rides to the public. But in each city Google does “public outreach,” in its words, such as town-hall meetings to explain the technology and tout its safety.
In Washington, Google has enlisted four former senior officials of America’s federal traffic-safety agency to help convince their former government colleagues of the merits of the company’s preferred path to autonomy.
“Ready or not, it’s coming”
Self-driving technology has developed far faster than experts envisioned when Google started developing it in 2009. Cars with partial autonomy, such as Tesla’s Model S, are on the road now, and limited trials of fully self-driving vehicles are popping up. They could appear in urban transit systems within a few years, though the transition in the broad consumer market could take far longer.
“I’ve gone from hoping this would happen to thinking it might happen to knowing it will happen,” Chris Urmson, the 39-year-old Canadian who’s director of Google’s driverless-car program, told Reuters in an interview.
Other companies, ranging from global automotive giants to Silicon Valley startups, are also developing driverless cars. And they, too, are lobbying state and federal regulators in an effort to massage the rules to their liking.
But Google, one of the first companies to launch a development effort, has emerged as the most visible and assertive lobbyist, with the most ambitious view of how self-driving vehicles should be deployed. Scorning the gradualist approach, Google is pushing for fully autonomous vehicles.
The company won an important early victory in February when the National Highway Traffic Safety Administration (NHTSA) ruled that the artificial intelligence system piloting a self-driving car could be considered the driver under federal law. That paves the way for regulators to make subsequent rulings that autonomous vehicles don’t need steering wheels, brake pedals, accelerator pedals or other things that humans use to control motor vehicles.
On Tuesday, Google and four allies – Ford, Lyft, Uber and Volvo Cars – said they were forming “the Self-Driving Coalition for Safer Streets.” The group will push for “one clear set of federal standards” for autonomous vehicles and try to build support for the technology among businesses and local governments. The coalition’s public face will be David Strickland – the former head of NHTSA.
NHTSA has promised to issue driverless-car guidelines by July. The agency is holding a public meeting on Wednesday at Stanford University in California to gather input.
“The mission I have is we’ve got a clock ticking,” Transportation Secretary Anthony Foxx, who oversees NHTSA, told Reuters. “This technology is coming. Ready or not, it’s coming.”
But in California, Google’s home state, officials want all autonomous vehicles to have steering wheels, brake pedals and accelerator pedals – which amounts to ensuring that driverless cars can have drivers. California’s regulators believe there’s no substitute, at least not for some years, for autonomous vehicles to be designed with a “handoff” system that allows a driver to retake control in an emergency. But California hasn’t yet finalized its regulations.
In the U.S. government’s lexicon, California’s incrementalist approach is called Level Three autonomy, or “L3,” on a scale running from L0, where a driver does everything, to a fully autonomous L4 vehicle that needs no human intervention. L3 advocates are “certainly not ready for humans to be completely taken out of the driver’s seat,” as one L3 champion, Mary Cummings, director of Duke University’s Humans and Autonomy Laboratory, told a U.S. Senate committee in March.
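For the code-minded, the scale maps naturally onto a simple enumeration. What follows is a minimal sketch based on the levels as described here; the one-line descriptions of the intermediate levels are paraphrased for illustration, not NHTSA’s official wording.

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """U.S. government autonomy scale, circa 2016 (descriptions paraphrased)."""
    L0 = 0  # Human driver does everything
    L1 = 1  # A single function is automated (e.g., cruise control)
    L2 = 2  # Several functions combined (e.g., lane-keeping plus adaptive cruise)
    L3 = 3  # The car drives itself, but a human must be ready to retake control
    L4 = 4  # Fully autonomous; no human intervention needed

def needs_human_fallback(level: AutonomyLevel) -> bool:
    # The policy dispute described above: L3 keeps a human in the loop;
    # L4 is what would let a car shed its steering wheel entirely.
    return level < AutonomyLevel.L4

print(needs_human_fallback(AutonomyLevel.L3))  # True
print(needs_human_fallback(AutonomyLevel.L4))  # False
```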
Skeptical of human drivers
Google, though, is “definitely an L4 company,” one executive says. The company maintains that requiring human controls makes driverless cars useless for elderly, blind and disabled people who can’t operate a vehicle, and even makes the cars dangerous.
“Developing a car that can shoulder the entire burden of driving is crucial to safety,” Urmson told the same Senate panel. “Human drivers can’t always be trusted to dip in and out of the task of driving when the car is encouraging them to sit back and relax.”
Google concluded that in 2012, when it asked employee volunteers to test its driverless cars. The volunteers agreed to watch the road at all times and be ready to retake control if needed. Google filmed them in action in the car.
The technology lulled many volunteers into “silly behavior,” as Google put it. One turned around to search for his laptop in the back seat while traveling 65 miles an hour. Google gave up on L3.
The Google philosophy got some confirmation last fall when Tesla launched an L3 technology called Autopilot. Soon after, online videos posted by Tesla drivers showed near misses, prompting CEO Elon Musk to warn owners against doing “crazy things.”
Toyota, meanwhile, is pushing for another alternative. In its proposed “guardian angel” system, as the company calls it, humans would do most of the driving, but the car would take over automatically when it senses a collision is imminent.
That system, say Toyota officials, could have avoided the well-publicized collision between a Google car and a bus in Mountain View, California, Google’s headquarters city, in mid-February.
The car was a Lexus RX 450h hybrid SUV modified for autonomous driving, though it still had a steering wheel and control pedals. (Google’s latest driverless car, a bubble-shaped prototype, is designed to have neither.) The Google Lexus was trying to get around sandbags in a construction zone, with a bus approaching from behind on the left.
The Google car assumed the bus would give way, but the bus driver didn’t yield. The car’s left front fender hit the bus’s right front corner, and the car slid back alongside until it reached the bus’s midsection.
Google acknowledged “some responsibility” instead of full fault because the situation was ambiguous, Urmson told Reuters. Because the car was going just two miles an hour and nobody got hurt, there was no police report to affix blame officially. And the bus driver won’t discuss the incident.
However, Toyota officials have analyzed video footage of the accident in detail. Their conclusion: “The car made a prediction about what the bus driver was going to do, and that prediction was wrong,” says Gill Pratt, the scientist who is CEO of the Toyota Research Institute, a new $1 billion investment in artificial-intelligence and robotics research. “Their model was incorrect.”
Google declined to comment on the Toyota analysis of the accident.
A moral choice
After the fender-bender, Google temporarily pulled its 56 test vehicles off public roads and developed 300,000 new driving scenarios for its self-driving software to digest. But perfection remains elusive, even impossible, Google executives acknowledge, and at some point a driverless vehicle will cause a more serious, perhaps fatal, accident.
When that day comes, society will begin facing a difficult moral choice. On the whole, machine-driven cars will almost certainly kill fewer people than human drivers do. But will the public accept any human traffic deaths at the hands of an automaton?
Regulators already recognize the dilemma. A key question, Transportation Secretary Foxx said, is whether the public might judge a few traffic deaths caused by driverless cars more harshly than the many more fatalities caused by human drivers. In the event of a serious self-driving car failure, Foxx said, there could be a “huge reaction.” Americans, he added, need to have “reasonably set ideas about what’s possible here.”
It’s possible NHTSA’s new guidelines will allow several paths to automotive autonomy, including L3, L4 and the “guardian angel,” on the theory that different types of vehicles might best suit different people and different conditions. Already, on Teslas and some other upscale cars, drivers can switch into autonomous mode in traffic jams but switch out of it for leisurely weekend drives. Foxx says the government needs to protect public safety and also encourage innovation, but he won’t say how the NHTSA guidelines are shaping up.
The debate over the proper path to automotive autonomy is heating up because the stakes are huge. Autonomous vehicles combine the two transformative inventions that bracketed the 20th century, the automobile and the Internet, both of which left the world far different than it was before them. Their combination in the early years of the 21st century could be equally momentous.
That tech trend is converging with a related one: the rise of the electric car. Automakers – everyone from upstart electric specialist Tesla to legacy giants Nissan and General Motors – are pouring billions of dollars into the creation of battery-powered vehicles that would emit far less climate-warming carbon into the atmosphere than petroleum-powered cars do.
Autonomy advocates foresee an enormous reduction in automotive fatalities – now numbering 1.25 million globally every year, far more people than are killed in wars. That’s because computers, unlike human drivers, won’t get distracted, fatigued or drunk. Human error is a major contributor to 94% of traffic accidents, NHTSA studies show. The agency also found that in 2014, 6.1 million auto accidents were reported to police in America (many more, mostly minor, weren’t), causing more than 32,000 deaths, 2.3 million injuries and $836 billion in economic loss.
“Self-driving cars surely will make a huge contribution to society,” Jen-Hsun Huang, chief executive officer of Nvidia Corp., said in January when unveiling the company’s newest autonomous-car brain, which packs the power of 150 MacBook Pro computers into a panel the size of a car’s license plate.
“We’ll be able to redesign the urban environment so that parks will replace parking lots,” Huang added. “Think of the money we’ll save, the reduction in accidents and the incredible freedom this will provide people who can’t drive today.”
“Deep learning”
Incredible riches, meanwhile, could accrue to companies that control self-driving technology. So tech companies, automakers, components suppliers and a host of startups are investing big in artificial intelligence, the key to autonomous driving.
Until four years ago, teaching machines to think meant downloading human knowledge into computers. Because many human endeavors are complex, imparting intelligence to machines was slow. The breakthrough was finding that flooding computers with data and prompting them to make choices would let them teach themselves.
In essence, the computer brains that pilot driverless cars have learned to recognize images from advanced sensors – cameras, radar and lasers – and react to other vehicles, pedestrians, road obstructions and other things. It’s called “deep learning,” and is similar to the process Google used to train its AlphaGo computer program, which recently beat the world’s premier human master of Go, the complex Asian board game.
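To make the idea concrete, here is a toy illustration of that data-driven approach, and emphatically not Google’s system: a tiny two-layer network, given nothing but labeled synthetic “sensor readings,” teaches itself to separate two classes instead of following hand-written rules.

```python
# Toy illustration of learning from data rather than hand-coded rules.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 200 two-feature "sensor readings," labeled 0 or 1.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

# One hidden layer, trained by gradient descent on squared error.
W1 = rng.normal(scale=0.5, size=(2, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(2000):
    h = sigmoid(X @ W1)                       # hidden activations
    p = sigmoid(h @ W2)                       # predicted probability
    grad_p = (p - y) * p * (1 - p)            # error signal at the output
    grad_h = grad_p @ W2.T * h * (1 - h)      # error signal at the hidden layer
    W2 -= 0.5 * h.T @ grad_p / len(X)
    W1 -= 0.5 * X.T @ grad_h / len(X)

accuracy = ((p > 0.5) == y).mean()
print(f"accuracy after training: {accuracy:.2%}")  # typically well above 90%
```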
Last year, Delphi Automotive took an L3-equipped Audi SQ5 from San Francisco to New York, driving in autonomous mode 99% of the time. The Michigan-based car-components giant is considering a sequel, perhaps from Paris to Beijing.
Most automakers and components companies such as Delphi prefer L3 autonomy to the full-on L4 level: They say it’s a faster and cheaper way to reduce traffic fatalities. It also suits their business models. Features such as collision-avoidance radar, self-steering and self-parking boost profit per vehicle. But the debate is more nuanced than Silicon Valley vs. Detroit.
Ford Motor is developing both L3 and L4 cars, says Ken Washington, vice president for research and advanced engineering. But Ford prefers L4 technology because, like Google, it doesn’t think a quick handoff from machine to human is feasible.
Before joining Ford two years ago, Washington was in the aerospace industry. There, autopilot systems that fly planes most of the time have sharply reduced airline crashes. But some accidents, including the 2009 crash of an Air France jet en route from Rio de Janeiro to Paris that killed 228 people, have been blamed on pilots’ inability to retake control promptly when autopilot fails.
One possible solution: “driver state sensing,” sensors inside the car that monitor the driver. The sensors could sound an alarm or vibrate the seat if, say, the driver’s head nods off. Being monitored in their own car might strike drivers as freaky, but Delphi will launch such systems with two automakers (it won’t name them) at year-end.
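Delphi hasn’t disclosed its design, so the following is a purely illustrative sketch of what such alarm logic might look like, assuming a hypothetical eye-closure signal sampled several times a second.

```python
# Purely illustrative sketch of "driver state sensing" alarm logic --
# the actual production designs are not public.
from collections import deque

class DrowsinessMonitor:
    def __init__(self, window: int = 10, threshold: float = 0.8):
        self.samples = deque(maxlen=window)  # recent eye-closure readings
        self.threshold = threshold           # fraction of "closed" samples that fires an alert

    def update(self, eyes_closed: bool) -> bool:
        """Feed one sensor reading; return True if an alert should fire."""
        self.samples.append(eyes_closed)
        if len(self.samples) < self.samples.maxlen:
            return False  # not enough history yet
        return sum(self.samples) / len(self.samples) >= self.threshold

monitor = DrowsinessMonitor()
for reading in [False] + [True] * 9:  # driver's eyes stay closed
    if monitor.update(reading):
        print("ALERT: sound alarm / vibrate seat")
```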
Experiments with full L4 autonomy, meanwhile, are cautiously spreading. A six-passenger driverless shuttle called WEpod will soon begin service between two Dutch towns, Wageningen and Ede, 8 kilometers (5 miles) apart. Dutch officials are limiting WEpod’s speed to just 25 kilometers an hour (15 mph) – which in early tests has frustrated human drivers on the same road. So, the French-made WEpod now carries a banner on its back that reads: “Autonomous Vehicle. Keep Your Distance.”
That’s good advice. Most mishaps involving Google’s test cars occur when human drivers hit them from behind, often when trying to drive through a yellow light while the Google car decides to stop.
Winning over Austin
Seven American cities are finalists in a “Smart Cities” contest for $40 million in federal money and at least $10 million in private funding to provide driverless shuttle buses in contained environments. The winner, to be announced in June, must deploy them within three years. Among the finalists is Austin, whose proposal includes driverless shuttles at its airport.
“To me, the most exciting applications are buses and mass transit,” Steve Adler, Austin’s mayor, told Reuters. The biggest transit-budget cost, he says, is salaries for bus drivers, which prevents the city from expanding bus service to ease traffic congestion, some of America’s worst. Austin’s traffic fatalities topped 100 last year, surpassing the prior record of 82.
Driverless technology “isn’t perfect, but you can’t be afraid,” the mayor says. “Great cities do big things.”
Google’s initial approach to Austin about driverless-car tests, in the spring of 2015, was almost comically secretive. Local lobbyist Gerardo Interiano told mayoral aides that Google executives wanted a meeting – but couldn’t disclose the topic. (They wanted to avoid leaks, company officials explained later to Reuters.) The mayor wouldn’t meet without knowing the subject, and Google relented.
At the meeting, Google executives made no bones about wanting regulations more accommodating than California’s. The Texans assured Google there was no state impediment to deploying driverless cars without steering wheels and foot pedals. City officials sought assurances on safety and liability, which Google agreed to assume.
The discussions were lubricated by Google’s local connections: Interiano is a former senior state legislative aide, and the local head of the Google Fiber high-speed Internet service is a former state legislator.
When Mayor Adler announced the driverless-car test, he used verbatim talking points from Interiano about potentially “enormous” benefits to the city. The scripting caused a minor kerfuffle in the local press. “It isn’t unusual for me to go to an event with prepared remarks,” the mayor told Reuters. “I never say anything I don’t mean.” Interiano declined to be interviewed.
The only accident in the Austin test occurred in March, when a Google autonomous vehicle stopped at a traffic light was rear-ended by a Volkswagen Passat going 10 miles an hour. Nobody was injured.
Plugged in
In Washington, as in Austin, Google is plugged in.
The ex-NHTSA staffers enlisted to help formulate and plead its case include Strickland, the agency’s former administrator, and Chan Lieu, the former congressional liaison, both now at the Washington public-policy law firm Venable LLP. Daniel Smith, who ran the agency’s Office of Vehicle Safety, is now a Google consultant. Ron Medford, a former deputy administrator of the agency, joined Google in 2012 and serves as its autonomous-vehicle safety director. Just one of the four, Lieu, has registered to lobby on Google’s behalf.
It’s a classic example, hardly unique to Google, of Washington’s revolving door: ex-regulators hired to help deal with their former colleagues. Google says the four provide critical expertise as well as knowledge of the ways of Washington. Transportation Secretary Foxx says: “As far as I can tell, that issue has not been a difference-maker internally” in the agency’s driverless-car decisions.
Google also actively courts “the disability community,” as company officials call it. In 2012 the company invited Steve Mahan, a blind Californian, to become the first non-Google employee to ride behind the wheel of a Google driverless car. “It was just a pure delight,” Mahan recently recalled. Last year, the Foundation Fighting Blindness honored Google; Urmson, head of the driverless car program, attended the award dinner.
Despite Google’s outreach efforts, a California non-profit group called Consumer Watchdog remains unconvinced. It cites a report Google made to California regulators showing that company drivers overrode the self-driving system 341 times over a 15-month period in 2014-2015, or an average of 22.7 times a month.
“Self-driving vehicles aren’t ready to safely manage many routine traffic situations without human intervention,” Consumer Watchdog said in a letter to Foxx. The group urged the transportation department to be open and transparent in crafting its guidelines.
Urmson doesn’t question the numbers, which come from Google, after all. But he disputes Consumer Watchdog’s interpretation. Google set a “quite conservative” threshold, Urmson says, for its test drivers to retake the wheel.
There were only 13 situations, he says, where the car would have hit something without the driver taking over. Of those, eight occurred in the first four months of the reporting period. Only five occurred over the last 11 months.
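Both readings rest on arithmetic simple enough to verify from the figures cited above; a quick check:

```python
# Quick check of the figures from Google's report to California regulators.
overrides, months = 341, 15
print(f"{overrides / months:.1f} overrides per month")  # 22.7, as Consumer Watchdog says

# Urmson's counterpoint: of 13 situations where the car would have hit
# something, 8 came in the first 4 months and 5 in the remaining 11.
print(f"early: {8 / 4:.2f} critical events per month")   # 2.00
print(f"late:  {5 / 11:.2f} critical events per month")  # 0.45
```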
“The technology is getting better,” he says, “and it’s getting better quickly.”
(By Paul Ingrassia, Alexandria Sage and David Shepardson. Edited by Michael Williams)