My prediction is that in fewer than 15 years, we will be debating whether human beings should be allowed to drive on highways.

After all, we are prone to road rage, rush headlong into traffic jams, break rules, get distracted, and crash into each other. That is why our automobiles need tank-like bumpers and military-grade crumple zones. And it is why we need speed limits and traffic police.

Self-driving cars won’t have our limitations. They will prevent tens of thousands of fatalities every year and improve our lives. They will do to human drivers what the horseless carriage did to the horse and buggy.

Tesla’s announcement of an autopilot feature in its next-generation Model S takes us much closer to this future. Yes, there are still technical and logistical hurdles; some academics believe it will take decades for robotic cars to learn to navigate the complexities of the “urban jungle”; and policy makers are undecided about the rules and regulations.


But just as Tesla produced an electric vehicle that I liken to a spaceship that travels on land, so too will it keep adding software upgrades until its autopilot doesn’t need a human operator at the steering wheel. I expect this to happen within a decade — despite the obstacles. I have already placed an order for the new model so that I can be part of this evolution.

Tesla isn’t alone in developing semi-automated driving assistants. Most car manufacturers now offer options in their high-end vehicles to keep them within their lane, adjust speed, warn of pedestrians, and stop in the event of an impending accident. These technologies work well. The Insurance Institute for Highway Safety tested the automatic braking systems of 24 vehicles and gave 21 a ranking of “superior” or “advanced.”

The new Tesla will be better than all of these. It will have sensors for image recognition, a 360-degree sonar system that can “see” its surroundings, and long-range radar to recognize signs and pedestrians. It will be able to change lanes on its own, obey speed-limit signs, avoid accidents, and even park itself. It will also be able to pick us up at our front doors in the morning after driving itself out of the garage. Because Tesla controls practically everything with software — including the driving, suspension, and climate — it can keep adding new features. Its cars are Internet-connected, and software updates are downloaded automatically, usually every month.

Google is far ahead of Tesla in the race to build robotic cars. It already has several on the roads in California and says that they have logged 700,000 autonomous miles. But Google is going for all or nothing. Its new prototype vehicles don’t even have a steering wheel.

The challenge this creates — and a problem that Tesla and the other carmakers will also face — is that the driving system has to be perfect before it can be allowed on the road without a human co-pilot. This also creates many legal and ethical issues. Who is responsible, for example, when a fully autonomous car has an accident?

The liability issues regarding fully driverless cars will be the easy part: The car’s manufacturer or software maker will be responsible for any accident unless it can be shown that a human driver was at fault.

But the hard part is what Ryan Calo, a University of Washington law professor, calls the “social meaning” of technology. He observes that a driverless car may always be better than a human at avoiding a shopping cart. And it may always be better than a human at avoiding a stroller. But what if the car confronts a shopping cart and a stroller at the same time? A human would plough into the shopping cart to avoid the stroller; a driverless car might not. Meanwhile, the headline would read: “Robot Car Kills Baby to Avoid Groceries.” This could end autonomous driving in America.

There will be many difficult choices and endless debates about ethics. But we can work these out. The number of fatalities caused by robotic cars will be a tiny fraction of the millions that humans have caused, after all. And if political leaders and lawyers in the United States try to stop progress, other countries will still adopt the new technologies; they are unstoppable. We may just end up playing catch-up with the rest of the world.

The big advantage that self-driving cars will have is that they don’t need the safeguards and controls that humans do. They can communicate with each other to negotiate right of way and speed, warn each other of traffic hazards, and see in the dark, so they don’t need blinding high-beams.

The real risks for robotic cars are the hazards that unpredictable humans create. That is why we will need to get humans out of the drivers’ seats.

I am looking forward to having my wasted driving time turned into work and leisure. Robotic cars will enable major fuel savings because they won’t need the bumpers or steel cages and so will be lighter. We won’t have to worry about parking spots, because our cars will be able to drop us off where we need to go and pick us up when we are ready.

We won’t even need to own our own cars, because transportation will be available on demand through our smartphones.

I can’t wait for the traffic jams to disappear because our cars won’t rush headlong into traffic as mindlessly as we do.


Vivek Wadhwa is a fellow at the Rock Center for Corporate Governance at Stanford University, director of research at the Center for Entrepreneurship and Research Commercialization at Duke’s engineering school, and distinguished scholar at Singularity and Emory universities. His past appointments include Harvard Law School and the University of California, Berkeley.
