Humans Are Too Trusting of Self-Driving Cars

San Francisco was recently the site of a malfunction involving 50 Waymo self-driving cars, which appeared daily in one neighborhood to make U-turns. (Courtesy of Twitter)

Self-driving automobiles have the potential to completely transform how people and products are transported. Car accidents could become significantly less common, and trips could become much more efficient. Daily commutes could become more productive and enjoyable as drivers gain the freedom to attend to other things. However, self-driving cars are not without their faults, and those faults currently make them more dangerous than helpful.

Unlike human beings, self-driving cars lack the creativity to solve the problems that come with autonomy. Simply put, they cannot improvise. These machines do what we tell them to and can only perform within set parameters. Though they can execute pre-programmed instructions with near-perfect accuracy, even machines are subject to errors. Driving is a precise skill that leaves very little room for error, and self-driving cars remain too experimental to be useful when human drivers must still be ready to react at any moment.

A recent incident involving Waymo cars is a perfect example of the temperamental nature of self-driving vehicles. It was not just one Waymo car that went astray, but a whopping 50 that appeared daily in a neighborhood solely to make U-turns on a dead-end street. Had it been fewer cars, I would be tempted to take a different stance and label it a minor malfunction, with hopes that the technology could improve in the future. However, the fact that so many of these cars experienced the same glitch, and that Waymo spokespeople were not more thorough in explaining the issue, leads me to believe that these cars are still far from safe.

Even more troubling, self-driving cars have already been responsible for human fatalities. In 2018, a woman was struck by an Uber self-driving car and died of her injuries. This death cannot be blamed solely on the car's manufacturers, since there was a safety driver inside the vehicle. The driver did not feel the need to keep her eyes on the road, placing her trust in the car instead. Had she been paying attention, the death could have been prevented. This is especially concerning given that self-driving vehicles were designed to decrease automotive-related crashes.

Based on this incident, the technology behind self-driving cars is a serious problem, but it is people's dependence on that technology that makes these cars so dangerous. If a company sells a vehicle that claims to drive itself, the average consumer will not feel the need to challenge that assertion. For the most part, consumers will behave like the woman inside the Uber, placing all of their trust in a machine that can be just as fallible as the human drivers of today. And when these errors inevitably occur, the next issue will be deciding who is truly to blame, a question that self-driving cars dangerously blur.

With the way technology is currently advancing, it would not be surprising for self-driving cars to become the next yellow taxi cab. However, I hope we are still far from that day. As it stands, these machines are too experimental, and people are too naive, to function alongside one another. Humans need an easy way to override the machines in the case of mechanical or technical failure, and they need to be made aware that such failures are more common than they might think. The way I see it, for self-driving cars to truly become the new norm, either the vehicles must become wholly infallible or drivers must become far more skeptical.

Carolyn Branigan, FCRH ’24, is a film and English major from Tinton Falls, N.J.