When it comes to the future of transportation, the first thing that comes to mind is the possibility of flying cars. It's easy to imagine an urban utopia with vehicles that float through the air, swerving around buildings, reaching toward the heavens.
While Back to the Future Part II wrongly predicted that we would have this technology in 2015, autonomous vehicles—which are currently being tested—may just be the stepping stone to making this a reality. Who would've thought robot cars would be our present?
No matter what side you stand on in the safety debate, even those who have concerns still agree that this innovative technology is the way of the future.
Companies like Google, Delphi Automotive, Bosch, Tesla, Nissan, Mercedes-Benz, Uber and Audi have already begun testing self-driving cars on the roads. According to a June 2015 monthly report from Google, the company began testing 25 of its prototype vehicles on public streets in Mountain View, Calif., joining its 23 Lexus RX450h autonomous SUVs.
Delphi's Roadrunner autonomous vehicle also recently completed a nearly 3,400-mile trip from San Francisco to New York City.
Robot cars like Google's use GPS to know their location, and are equipped with sensors, light detection and ranging (LIDAR), radar, high-powered cameras and learning-algorithm software to judge how close they are to other vehicles and to detect the surrounding environment, identifying signage, pedestrians and cyclists near the car. The cars use these sensing capabilities and the software to choose a safe speed, accelerate, brake and make turns, while anticipating other vehicles' actions to plan their own movements. This means commuters can sit back, sleep, eat or even play Candy Crush while the car does the driving for them.
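To make that sense-plan-act flow a little more concrete, here is a minimal sketch in Python. It is not Google's actual software; the Detection type, the distance thresholds and the coarse brake/accelerate/hold commands are illustrative assumptions meant only to show the general shape of the loop: take fused sensor detections, choose a safe speed, then act.

```python
# Illustrative sketch only -- not any manufacturer's real control code.
from dataclasses import dataclass

@dataclass
class Detection:
    kind: str          # e.g. "vehicle", "pedestrian", "cyclist", "stop_sign"
    distance_m: float  # distance from our car, in meters

def safe_speed(speed_limit_mps: float, detections: list) -> float:
    """Choose a target speed given what the sensors currently see."""
    target = speed_limit_mps
    for d in detections:
        if d.kind == "stop_sign" and d.distance_m < 20:
            target = 0.0                                  # stop for signage
        elif d.kind in ("pedestrian", "cyclist") and d.distance_m < 30:
            target = min(target, 3.0)                     # crawl near vulnerable road users
        elif d.kind == "vehicle" and d.distance_m < 15:
            target = min(target, speed_limit_mps * 0.5)   # keep a safe gap
    return target

def control(current_mps: float, target_mps: float) -> str:
    """Turn the speed decision into a coarse actuator command."""
    if target_mps < current_mps:
        return "brake"
    if target_mps > current_mps:
        return "accelerate"
    return "hold"

# Example: cruising at 13 m/s (about 30 mph) when a cyclist appears 25 m ahead.
detections = [Detection("cyclist", 25.0), Detection("vehicle", 60.0)]
print(control(13.0, safe_speed(13.0, detections)))  # -> "brake"
```

Real systems layer far more on top of this (trajectory planning, prediction of other road users, redundancy), but the loop of localize, detect, decide and actuate is the core the article is describing.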
Because this technology is so new, is it too soon for them to share the roads with human drivers? Would you trust a robot or computer system with your life? How safe are they, and should we be putting our lives in the hands of robot technology?
Let's take a look at the pros and cons.
Pros Of Autonomous Vehicles
Some predict fully autonomous vehicles will be on the road as early as the end of 2016, while others say it won't happen in our lifetime. But according to Ryan Hagemann, a civil liberties policy analyst at the libertarian advocacy organization Niskanen Center and a robotics fellow at the think tank TechFreedom who specializes in auto robotics and automation, it's more likely that this will happen in the coming years, probably by 2025.
Hagemann says it's not a question of if this technology will be integrated, but when. "The question isn't a technological question, it's a regulatory and policy question of how to incorporate vehicles on the roadways while we still have motor vehicles that are operating fully by people behind the wheel," he says in an interview with Tech Times.
In a 2014 study for the Mercatus Center at George Mason University, scholars Adam Thierer and Hagemann argue that policymakers should clear the existing roadblocks (PDF) to developing smart car technology and consider its economic and social benefits.
The Technology Could Save Lives
They argue that the technology would reduce the rate of car accidents that result from human error. According to the June 2015 early estimate of motor vehicle fatalities from the U.S. National Highway Traffic Safety Administration (NHTSA), an estimated 32,675 people died in car crashes in 2014. The Eno Center for Transportation, a think tank, notes that "driver error is believed to be the main reason behind over 90 percent of all crashes," with drunk driving, distracted driving, failure to remain in one lane and failing to yield the right of way among the main causes.
Because the majority of these accidents are caused by human error, self-driving cars could reduce the rate of automobile-related deaths—and save the U.S. over $400 billion (about 2 percent of U.S. GDP) in the total annual cost of accidents.
"In theory, if you have 100 percent fully autonomous vehicles on the road," Hagemann says, "while you still might have accidents on the margin in rare situations, you're basically looking at anywhere from a 95 to 99.99 percent reduction in total fatalities and injuries on the road."
However, Hagemann pointed out that to have these reduced numbers, all of the vehicles on the road would have to be autonomous.
Clearing Up The Roads
Along with possibly reducing the rate of motor vehicle accidents caused by human error, autonomous vehicles could also cut traffic congestion, since robot cars would be able to travel at higher speeds and closer together without the risk of hitting one another. That would ultimately decrease the amount of time Americans spend traveling, while also increasing productivity, since people wouldn't actually have to drive.
According to the paper by Thierer and Hagemann, congestion caused drivers to spend an extra 5.5 billion hours on the road and burn an extra 2.9 billion gallons of fuel in 2011, at a total cost of $121 billion.
"It will also end up increasing fuel savings across the board because you have computers determining when to stop and when to go, and they're optimizing all of those components," Hagemann says.
While there are many benefits to level 4 autonomous vehicles (vehicles that perform all safety-critical driving functions where human drivers are not expected to take control at any time), we already have level 1 and 2 automation features on the roadways.
The National Highway Traffic Safety Administration has established a rating system (PDF) that scores vehicles on their level of automation, on a scale of 0 (no automation) to 4 (fully automated). It also recognizes that different streams of technological innovation are developing concurrently, so it evaluates them along what it calls a continuum: from in-vehicle crash-avoidance systems that provide warnings and limited automated control of safety functions, to vehicle-to-vehicle communications that support various crash-avoidance applications, to fully self-driving vehicles.
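For readers who think in code, NHTSA's 0-to-4 scale maps naturally onto a small data structure. The sketch below is my own framing, with descriptions paraphrased from the policy summarized above; only the 0-4 numbering and the general meaning of each level come from NHTSA.

```python
# A sketch of NHTSA's automation levels as an enum -- paraphrased, not official text.
from enum import IntEnum

class AutomationLevel(IntEnum):
    NO_AUTOMATION = 0         # driver is in complete control at all times
    FUNCTION_SPECIFIC = 1     # one specific function automated (e.g. cruise control)
    COMBINED_FUNCTION = 2     # at least two functions work together (e.g. adaptive cruise + lane centering)
    LIMITED_SELF_DRIVING = 3  # car drives itself in some conditions; driver must be ready to take over
    FULL_SELF_DRIVING = 4     # car performs all safety-critical functions; no driver intervention expected

def expects_human_fallback(level: AutomationLevel) -> bool:
    """Levels 0 through 3 still rely on a human driver being available."""
    return level < AutomationLevel.FULL_SELF_DRIVING

print(expects_human_fallback(AutomationLevel.LIMITED_SELF_DRIVING))  # True
print(expects_human_fallback(AutomationLevel.FULL_SELF_DRIVING))     # False
```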
"Robot technology has been incremental," John M. Simpson, director of consumer relations at Consumer Watchdog, told Tech Times in an interview. We already have features like cruise control, technology that allows cars to stay in their lanes, autonomous braking, and self-parking sensors.
"Those are all kinds of autonomic technology that essentially works with the human and enhances the human drivers' abilities, and presumably adds safety to the vehicle because of the interaction. I see that as a logical step forward," he adds.
However, the innovative technology is not free from flaws.
Cons Of Autonomous Vehicles
"Google is putting everything out with a positive spin, but a lot of issues the technology can't cope with in a very basic way, which presumably is why the California law that allows them to test these vehicles requires that there be a driver behind the wheel who can take over and take control over an emergency," Simpson says.
The cars cannot drive in heavy rain or snow, and they have a hard time making a left turn when there is oncoming traffic. They also cannot read hand signals, such as those of a traffic cop directing vehicles. While Simpson believes these issues will be solved over time, he says the technology at this stage is still very much in development.
"In the long run, I think that some of the autonomous technology will probably enhance safety, but I think that its going to be quite a while until we get to that point—if we even can—when cars can be so-called level 4 where they completely drive themselves without any human intervention whatsoever," Simpson says, "and that's what Google's striving for."
Advocates say driverless cars will make the road safer, but it took efforts from the nonprofit public interest group Consumer Watchdog to finally get Google to release accident reports involving its driverless cars in California.
Consumer Watchdog was involved with the early decisions surrounding the California law that would govern autonomous vehicles. "Not only are there safety issues, but you also have issues surrounding the data they gather, and the privacy of the data they gather," Simpson says.
In 2012, the nonprofit wanted a provision added saying the information gathered could be used only to navigate the vehicles, and not for other purposes without the permission of the car's owner or operator, a provision Google refused to accept.
Since then, the consumer group has been concerned about whether the testing is being done safely and about a lack of transparency surrounding the accidents. The DMV at first refused to release accident reports, confirming only that accidents had occurred, but it later had a change of heart. The California DMV has since released the reports to the public, with the identities of the people involved kept anonymous.
After those reports came out, Google announced it would publish monthly reports with a synopsis of any incidents its autonomous vehicles were involved in.
"I think, and we have called for this, that any time that any of these cars are involved in any kind of accident, that a police officer should have to go to the scene and investigate it and take witness statements, and file an objective third-party report. I don't think it's good enough for us to trust Google," Simpson says
According to Google's June 2015 accident report, the cars have been involved in 14 minor accidents over the past six years and more than 1.8 million miles of autonomous and manual driving combined. Two of those accidents occurred in June, and according to the accident reports filed with the California Department of Motor Vehicles, both were the fault of the other, human drivers.
When contacted for comment, Google declined.
On June 4, a vehicle rear-ended a Lexus autonomous model that was stopped at a red light. On June 18, a vehicle in the left-turn lane that tried to go straight hit another Lexus model after the signal turned green for left turns only.
Despite the accidents, Chris Urmson, the head of Google's self-driving car program, confirmed that the minor accidents were again the fault of other drivers and couldn't be blamed on the company's technology.
At the Automated Vehicle Symposium on July 22 in Ypsilanti, Mich., Urmson walked through a few scenarios showing how the cars react, such as when ducks are crossing the road (Google's smart car waits until they pass, while a human might take off at the first opportunity) and when a cyclist crosses an intersection just as the light turns green (a human driver in the right lane might proceed, while Google's car waits).
Even so, if Google's car reaches a level 4 rating as the company plans, Simpson says, "you would get in and be at the mercy of Google."
"It could be that Google says none of the accidents that they've been involved in were the fault of the robot car, but it may be that that's technically true, but not exactly true," Simpson says.
Here's what he means: Generally speaking, if a car is rear-ended, the driver in back is considered at fault, on the presumption that they were following too closely to stop in time.
"The problem with some of the crashes, while technically not Google's fault, they are in fact [at fault] because in some sense people don't understand how to react with a driverless vehicle," Simpson says. "I think what may have happened in some of the cases is that it could be that Google cars are not reacting the way that human drivers would."
The expectation is that a car will start moving the instant the light turns green, but the Google cars apparently wait a second and a half before taking off. A driver behind might step on the gas, assuming the vehicle in front will move when it doesn't. Likewise, the driverless cars are programmed not to go through yellow lights when most human drivers would.
Ethical Issues
One of the most concerning aspects of the technology is how it would respond in emergency situations. For example, if a child ran out to chase a ball in the middle of the road, would the car swerve into oncoming traffic, potentially totaling itself and threatening the lives of its passenger and the other vehicle's occupants, or would it know to stop immediately?
"There are fundamental ethical situations, and its difficult to understand how exactly the robot cars will be programed to deal with those kinds of kinds," Simpson says.
However, Hagemann says incidents like these would be extremely rare circumstances. "Most of those circumstances would probably just be resolved by the computer telling the car to brake instead of swerving one way or another."
It doesn't matter that robots don't feel or emote, Hagemann says, since we are looking at split-second decisions. "Whether a human driver is making the decision or the algorithms in the car are, it's never going to be a perfect scenario."
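Hagemann's "just brake" point can be illustrated with a simple, hedged sketch. The function below is not any manufacturer's decision logic; the deceleration figure and the maneuver names are assumptions, and the point is only that a predictable brake-in-lane rule covers most of the split-second cases he describes.

```python
# A hedged illustration of a "brake first" emergency rule -- not real vehicle code.

def emergency_action(obstacle_distance_m: float, speed_mps: float,
                     max_decel_mps2: float = 8.0) -> str:
    """Return a coarse emergency maneuver for an obstacle directly ahead."""
    stopping_distance = speed_mps ** 2 / (2 * max_decel_mps2)  # v^2 / (2a)
    if obstacle_distance_m >= stopping_distance:
        return "brake_in_lane"   # the car can stop in time: stay predictable
    # Even when a full stop isn't possible, braking sheds energy and buys time;
    # swerving would only be considered if an adjacent lane were known to be clear.
    return "brake_and_warn"

# A child 20 m ahead while traveling at 13 m/s (~30 mph):
print(emergency_action(20.0, 13.0))  # stopping distance ~10.6 m -> "brake_in_lane"
```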
Hacking Threats
After a report surfaced this month that hackers were able to remotely disable a Jeep Cherokee, Fiat Chrysler Automobiles announced a safety recall of 1.4 million vehicles so the company can update its software to prevent wireless attacks. If it's so easy to hack cars already on the road, what about autonomous vehicles that would one day rely solely on computers?
The short answer is yes, the computers in driverless cars could be hacked. But, "all the cars on the roadways right now can already be hacked through Wi-Fi signals," Hagemann says.
While it's a possibility, Hagemann says it's no different than the concerns we already face. But politicians are already pushing to regulate the technology to prevent this. On July 21, Sens. Ed Markey (D-Mass.) and Richard Blumenthal (D-Conn.) proposed a bill to keep computer-enabled autonomous cars from getting hacked.
The bill, called the SPY Car Act, would force car manufacturers to use "reasonable measures" to protect their software. The Federal Trade Commission would also develop a window sticker rating a car's vulnerability to such cyberattacks so that consumers can evaluate its security, similar to the stickers that let consumers check a vehicle's fuel economy.
Conclusion
Could this autonomous vehicle technology be too creepy for consumers? Would they actually buy such a vehicle, assuming they could afford it? Hagemann estimates that Google's driverless car technology would add $30,000 to $40,000 to the cost of a vehicle, while IHS Automotive forecasts that self-driving technology will add between $7,000 and $10,000 to a car's sticker price in 2025, a figure it expects to drop to around $5,000 by 2030.
"People see new technologies, and they see all the cost associated with it because it's something new ... but once these types of products are actually in consumer's faces and in their daily lives more and they see the benefits, then you see people starting to make those trade-offs between the potential costs versus what the actual benefits are," Hagemann says.
"It's also important to recognize that a lot of what people have started talking about in the recent months regarding autonomous vehicles is the creation of autonomous vehicle Ubering services. It may be that when autonomous vehicles actually hit the roadways and are near 100 percent market penetration rate, it may be the case that Americans wind up not buying cars anymore and completely subscribe to autonomous vehicle ride-sharing services that Google and Uber may be offering."
It might not be a question of whether the cars themselves are too expensive to buy, but of whether Americans are ready to give up owning vehicles altogether.
We may one day be living in a world where robots, summoned via Uber, drive us to our destinations.