Laura Boccio

The Code Controls the Crashes

Updated: Dec 14, 2018

You’re late to a meeting and eating fries while you drive there; the container starts to slip from the cup holder, so you instinctively lean over to grab it and !!!

You’re listening to the radio and a contest is announced; you briefly grab your phone to dial in while driving and !!!

You’re driving along and you hear your phone buzz; you look down to check it and !!!

You’ve been drinking at a bar and are ready to leave, but your Uber is taking too long; you grab your keys, think “it’ll be fine,” and !!!


We like to think that these things are avoidable and won’t happen to us. However, it is this same naïve, idealistic mindset that allows human error to remain the leading cause of car crashes. It’s time to face the fact that no matter how much driver education and how many precautionary measures we put in place, there are still too many opportunities for a quick lapse in judgment. The solution is simple: eliminate human error.


That is why self-driving cars are the wave of the future. Although those words might conjure images from futuristic movies, the reality is closer than one might think. “No,” said Lilly Neff when asked her opinion on the matter. “Have you ever seen a robot movie? No.” Neff, an engineering student at WPI, firmly rejects the idea of self-driving cars. She, like many other people, has a difficult time getting on board with the concept. It seems like fantasy and fairy tale, but it is quickly becoming reality.


Patrick Flinn, a mechanical engineering student at WPI with a concentration in motor vehicles, interned with Volvo this past summer. There he saw firsthand how big car companies like Volvo are already pushing technological advances. They have created car-share programs where customers pay a monthly membership fee instead of purchasing a vehicle; members simply use an app to request a car and return it when they are done. Flinn commented on what this model could mean for the mass adoption of self-driving cars.


“You’ll order the car you want from an app on your phone. It will pick you up, drive you there, and take you back,” said Flinn. “No one will need to own their own cars anymore.” While this idea is entirely possible, I believe owning your own car will remain more common in suburban and rural areas. Regardless, the future of self-driving cars, and the reduced risk that comes with eliminating human error, has enormous potential benefits.


Let’s consider the following scenario: you’re driving on the highway, and the truck trailer in front of you comes loose and impedes your path. You have to make a split-second decision weighing all the potential risks: the value of your life, your passengers’ lives, and the lives of those around you in other vehicles on the road. Before you have time to think it through, your decision is made and an action is taken. It all depends on your ethical priorities and those of the drivers around you.


Now imagine this with a self-driving car. The car would have the capability to survey the entire surrounding area, assess all the options, and run a risk analysis in the split seconds before impact. Additionally, it could communicate with the other vehicles on the road and work with them to find an optimal response, reducing the damage from the loose trailer.


The biggest question then becomes: how does the car quantify the risk of each possible course of action in response to the loose trailer? What goes into the decisions it makes, and how does it prioritize lives? Brett Frischmann and Evan Selinger paint this picture well in their article “How Self-Driving Car Policy Will Determine Life, Death, and Everything In-Between.” There they argue that the code written for these vehicles will inevitably encode some ethical reasoning. “There’s no value-free way to determine what the autonomous car should do… Sophisticated algorithms, machine learning, or any form of artificial intelligence… These tools can help evaluate and execute options, but ultimately, someone—some human beings—must choose and have their values baked into the software,” they state in their article.
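To make that point concrete, here is a toy sketch, entirely invented and not based on any manufacturer’s actual software, of how pre-chosen, value-laden weights could turn that split-second choice into arithmetic. The maneuver names, harm probabilities, and weights are all hypothetical; the “ethics” live in the numbers a human chose in advance.

```python
# Toy sketch: score each possible maneuver by weighted expected harm,
# then pick the lowest. All values below are invented for illustration.

def expected_harm(option, weights):
    """Sum of (probability of harm to each party) * (weight placed on that party)."""
    return sum(option["risk"][party] * weights[party] for party in weights)

def choose_maneuver(options, weights):
    """Return the maneuver whose weighted expected harm is lowest."""
    return min(options, key=lambda opt: expected_harm(opt, weights))

# Hypothetical scenario: a trailer comes loose on the highway ahead.
weights = {"occupants": 1.0, "other_drivers": 1.0, "pedestrians": 1.0}
options = [
    {"name": "brake_hard",   "risk": {"occupants": 0.3, "other_drivers": 0.2, "pedestrians": 0.0}},
    {"name": "swerve_left",  "risk": {"occupants": 0.1, "other_drivers": 0.6, "pedestrians": 0.0}},
    {"name": "swerve_right", "risk": {"occupants": 0.1, "other_drivers": 0.0, "pedestrians": 0.7}},
]

best = choose_maneuver(options, weights)
print(best["name"])  # brake_hard (0.5 is the lowest weighted total)
```

Notice that changing the weights changes the answer: weight the occupants heavily enough and the car swerves instead. That is exactly Frischmann and Selinger’s point about values being baked into the software before the crash ever happens.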


While I agree with their logic, I believe the potential of self-driving cars far exceeds what driver education can achieve in reducing human error. Yes, the prioritization in a self-driving car is pre-determined, set by car manufacturers, technologists, insurance companies, or some combination of them, through the code behind the vehicle. The alternative, however, is a time-pressed individual making the call under the pressure of an impending crash.


Yes, there will still be error, but it will be significantly less than the human error that exists on the roads today.


If you wish to learn more about the statistics on human error in road safety, or about current self-driving technology and its 360-degree sensing, take a look at the work Google has done at https://waymo.com/.


1 comment

Amelia Papi
Dec 12, 2018

It'd be interesting to see how this would change the rates of crashes in young drivers specifically.