No more accidents caused by human error. More independence for those who can’t drive (Making a car for blind drivers; Google’s driverless car). A chauffeur experience. An extra-private Uber ride. The dream of fully autonomous cars took its first real steps in the 1980s with Carnegie Mellon’s Navlab series. Now, in 2015, Google, Uber and Audi among others have been testing self-driving cars on public roads. Since Consumer Watchdog became involved, Google has begun releasing its testing data in monthly reports (June 2015 report), which show few incidents (14 in 1.7 million miles, roughly one per 120,000 miles), all caused by other humans on the road erring (Google driverless cars in accidents again, humans at fault — again). Statistically, human error by drivers causes 94% of all crashes (The View from the Front Seat of the Google Self-Driving Car). Driverless cars could make public roads much safer and crashes less frequent. Service provider Uber wants in, and has enticed many of Carnegie Mellon’s prized robotics researchers away to create the first driverless ride service (Carnegie Mellon Reels After Uber Lures Away Researchers).

The cars “see” the road using LIDAR (light detection and ranging), a laser-based sensing technology whose name blends “light” and “radar” (How a driverless car sees the road). By matching the laser-rendered shapes of pedestrians or cyclists, for example, against scores of stored examples, the cars “recognise” these road users, often before a human driver would. The cars see traffic lights and cone diversions, and can cope with unusual situations, including a family of ducks crossing the road. This odd example illustrates how sensitive the cars’ perception is.
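To make that recognition idea concrete, here is a toy sketch: reduce a cluster of LIDAR returns to a few shape-and-motion features, then match it against labelled examples of known road users. Everything here (the feature choices, the example values, the nearest-neighbour matching) is an illustrative assumption, not Google’s actual perception pipeline, which works on far richer 3D point-cloud data with trained classifiers.

```python
# Toy sketch of LIDAR-based recognition: compare an incoming laser
# "shape" against stored examples of known road users. Illustrative
# only; real systems use full 3D point clouds and trained classifiers.
import math

# Hypothetical feature vectors: (height_m, width_m, speed_m_per_s)
# extracted from a cluster of LIDAR returns.
LABELLED_EXAMPLES = {
    "pedestrian": (1.7, 0.5, 1.4),
    "cyclist":    (1.8, 0.6, 5.0),
    "car":        (1.5, 1.8, 13.0),
}

def classify(cluster):
    """Nearest-neighbour match of a LIDAR cluster to a known road user."""
    return min(
        LABELLED_EXAMPLES,
        key=lambda label: math.dist(cluster, LABELLED_EXAMPLES[label]),
    )

# A tall, narrow object moving at ~4.5 m/s is closest to the cyclist
# example, so that is what the car would "recognise" it as.
print(classify((1.75, 0.55, 4.5)))  # -> "cyclist"
```

A nearest-neighbour match like this is about the simplest possible classifier; its appeal here is that it makes “recognise by comparing against stored examples” visible in a dozen lines.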
Some opponents of driverless cars have said that though the cars follow the rules of the road, some of the 14 accidents could have been caused because the cars do not act the way a human-controlled car would, or the way people expect a robot car to. In other words, drivers expect the car ahead of them, robot or human, to pull away from a green light without hesitation. This seems like a poor excuse. Take the most recent accident Google’s driverless car was in as an example (YouTube clip; The View from the Front Seat of the Google Self-Driving Car, Chapter 2): though there was a green light, the robot car didn’t go, because the lane beyond the intersection was blocked with traffic. By this argument, the reason the car behind the robocar did not brake at all, and instead rear-ended the Google car, is that its driver expected the car ahead to move forward regardless of the blocked lane, as a “normal” driver would have done. Not only that, the Google car was not even first in the lane; there were three “normal” cars in front of it, so there is no way it could have moved forward without rear-ending someone itself. The autonomous car came to a natural stop, giving plenty of notice. The argument that the “unusual behaviour” of robocars driving like responsible humans will cause accidents is invalid, as by that logic conscientious human drivers cause accidents.
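The rule the car appears to have followed is easy to state as logic. The sketch below is a hypothetical reconstruction, not Google’s control code; the `Intersection` fields and the `should_proceed` function are assumptions made purely for illustration.

```python
# A hypothetical reconstruction (not Google's control code) of the rule
# the car appears to follow: do not enter an intersection, even on a
# green light, unless it can also clear the far side.
from dataclasses import dataclass

@dataclass
class Intersection:
    light_is_green: bool
    exit_lane_free_metres: float  # free road beyond the intersection
    cars_ahead: int               # vehicles queued in front of us

CAR_LENGTH_M = 5.0  # assumed average vehicle length, for illustration

def should_proceed(state: Intersection) -> bool:
    """Proceed only on green, with no queue ahead, and enough room
    beyond the intersection to avoid blocking it."""
    if not state.light_is_green:
        return False
    if state.cars_ahead > 0:
        # The Google car had three cars in front of it, so it could
        # not have moved forward even if it had wanted to.
        return False
    return state.exit_lane_free_metres >= CAR_LENGTH_M

# The disputed scenario: green light, blocked exit, three cars ahead.
print(should_proceed(Intersection(True, 0.0, 3)))  # False: stopping is right
```

Under any such rule, stopping was the only correct output; the driver behind simply failed to brake.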
Ethical dilemmas are another roadblock for driverless cars. Since the success (or failure, depending on your expectations) of the recent DARPA finals (DARPA Robotics Challenge: Amazing Moments, Lessons Learned, and What’s Next; The DARPA Robotics Challenge Was A Bust), a looming question has been how to keep robots ethical when faced with lesser-of-two-evils problems. The A-robot created by Alan Winfield was programmed to save H-robots (standing in for humans in danger). Through dozens of tests, the A-robot saved its charge every time. But then the A-robot was presented with a moral dilemma: two H-robots wandering into danger simultaneously. In almost half of the trials, the A-robot dithered helplessly and let both perish. Correcting this would require extra rules about how to make such choices. And what if one H-robot were an adult and the other a child? Which should the A-robot save first? On ethical matters like these, a consensus even among humans is difficult (Machine ethics: The robot’s dilemma). This applies to driverless cars too: if a pedestrian steps out unexpectedly, should the car be programmed to swerve, endangering its passengers and other cars? Other issues have also been raised: who is liable for the crash of an autonomous car? There is also the risk of hacking and the added cost of these features. Hopefully, as the technology improves enough for the cars to handle snow and fog, these other issues can be resolved too, along with public perception of the cars (What’s putting the brakes on driverless cars?).
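Winfield’s result is easier to see with a toy model. The sketch below assumes a structure for his experiment (it is not his published code): an A-robot that commits to the nearest endangered H-robot has no answer when two are equidistant, and the physical oscillation of the real robot is collapsed here into returning nothing. An explicit tie-break rule, however crude, removes the dithering; the “save the child first” rule is one assumed example, not a settled answer.

```python
# A toy model of Winfield's experiment (assumed structure, not his code):
# an A-robot that heads for the nearest endangered H-robot has no answer
# when two are equidistant; an explicit tie-break rule removes the dither.

def choose_target(h_robots, tie_break=False):
    """Return the H-robot to save, or None if the A-robot cannot decide."""
    in_danger = [h for h in h_robots if h["in_danger"]]
    if not in_danger:
        return None
    nearest = min(h["distance"] for h in in_danger)
    ties = [h for h in in_danger if h["distance"] == nearest]
    if len(ties) > 1 and not tie_break:
        return None  # dithers helplessly: no rule says which to commit to
    # One possible extra rule (an assumption, not a settled ethic):
    # save a child before an adult, and break remaining ties by id.
    ties.sort(key=lambda h: (not h.get("is_child", False), h["id"]))
    return ties[0]

two_equidistant = [
    {"id": 1, "in_danger": True, "distance": 2.0, "is_child": False},
    {"id": 2, "in_danger": True, "distance": 2.0, "is_child": True},
]
print(choose_target(two_equidistant))                  # None: both may perish
print(choose_target(two_equidistant, tie_break=True))  # saves the child, id 2
```

The hard part, as the article says, is not writing the tie-break line but agreeing on what it should say.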