Season Investments


The Paradox of Automation

Posted on October 20, 2015

“It's going to change people's perception of the future quite radically.” – Elon Musk on the future of automated driving

Last week Tesla CEO and industry innovator Elon Musk announced a new autopilot feature that is now available to a number of Tesla Model S owners through a software update Tesla just rolled out. One of the most interesting aspects of Tesla’s autopilot program, other than being able to drive your car without touching the steering wheel, is that it learns from all the users who are actively engaging it and shares that information with the entire fleet. In other words, human drivers are training Tesla’s autopilot program on how to correctly operate a car.

This type of intersection between man and machine is not new, even when it comes to the idea of a driverless car. Just for fun, we’ve linked a short promotional video put out by GM back in 1956 extolling the virtues of a fully automated car. For those who don’t enjoy family sing-alongs, you might want to skip to the 1:34 mark.

That being said, when it comes to automation, the normal human tendency is to resist the change out of distrust that the machine will be able to properly execute the task at hand. A perfect example is this short video put out by Fortune showing an associate testing out the new Tesla autopilot feature for the first time while stating, “It’s a little bit scary, I must admit.”

NPR’s Planet Money ran a podcast this summer entitled The Big Red Button on the topic of automation adoption, specifically that of self-driving cars. In the podcast, they talked about the challenges of introducing automation to people because it feels scary. The two best ways to combat this fear are to either 1) give humans an element of control by providing the option to override the automated system or 2) make the automated technology look and feel friendly and harmless.

Around the turn of the previous century, people were wary about the introduction of the first fully automated elevators. Up until that point, elevators relied on a person to operate the controls. The problem was, human operators made human errors which led to cases of severe injury or even death. In a push to make elevators safer, manufacturers decided to introduce a fully automated elevator with push button controls and safety bumpers on the door. At first, people hated the idea of a fully automated elevator because they didn’t trust a machine to do the job previously done by a person. In order to combat this fear, advertisements were run showing grandmas and little children pushing the buttons on the elevator while safely riding it up and down. Additionally, manufacturers added calming voice recordings to instruct people on how to use the elevator and provide updates such as “the doors are closing.” Lastly, they introduced a big red STOP button which gave passengers a sense of control. These measures all worked to increase the acceptance of the new technology by calming the psyche of the passengers while reducing the number of accidents through automation.

Today, technology and innovation companies face the same problem with the adoption of a driverless car. Although the number of auto-related deaths here in the US has declined over the past decade, over 30,000 people still die every year in an automobile accident. Of these deaths, the vast majority can be attributed to some sort of human error (intoxication, falling asleep, making bad decisions, etc.). Because of this, companies such as Tesla, Apple, and Google see a huge opportunity (and profit potential) to increase the safety of driving through automation. But just like the automated elevator at the turn of the previous century, there is a huge mental barrier for people handing over the control of their car to a machine.

Google has decided to combat this barrier in much the same way as the automated elevator. The embedded video below was produced by Google to show a wide array of everyday people having a lot of fun riding in its “cute” driverless car, set to light-hearted, bubbly music.

In the video, you may have noticed that one of the passengers commented on the lack of a steering wheel. The team at Google made the conscious decision not to include a steering wheel or a brake in their driverless car prototype because they felt those controls would simply increase accidents due to human error. For example, if someone were to become comfortable enough with a driverless car to fall asleep while it was in motion, only to be startled awake and try to take over control of the car in that split moment, bad things could happen. Additionally, the more we rely on self-driving cars in the future, the less qualified we will all become to drive a car due to our lack of practice and our reliance on automation.

Airplanes are a great example of this automation paradox. Autopilot was first introduced to help pilots fly straight and level. Over time more of the pilot’s responsibilities became automated, to the point where planes practically fly themselves these days. Since the 1980s, when the shift toward fully automated planes began, the safety record in aviation has improved five-fold, to the point where there is now only one fatal accident for every five million flights. Unfortunately, accidents still happen, as was the case with Air France Flight 447, which crashed into the Atlantic Ocean back in 2009. In that instance, the autopilot disengaged when all three airspeed probes froze over, making it impossible for the autopilot system to measure an accurate speed. When the pilots took over, they panicked and made a series of bad decisions that caused the plane to stall and eventually crash into the ocean, killing everyone aboard.

The story of Flight 447 is a tragic one, but one that could have easily been avoided had the pilots simply maintained the speed and pitch of the plane at the point where they took over control from the autopilot system. Nobody knows exactly why the pilots did what they did, but some have speculated that it was a byproduct of becoming overly reliant on an automated system. From a Vanity Fair article about the Flight 447 crash:

This is another unintended consequence of designing airplanes that anyone can fly: anyone can take you up on the offer. Beyond the degradation of basic skills of people who may once have been competent pilots, the fourth-generation jets have enabled people who probably never had the skills to begin with and should not have been in the cockpit. As a result, the mental makeup of airline pilots has changed.

From that same article, a chief engineer at Boeing was quoted saying:

We say, ‘Well, I’m going to cover the 98 percent of situations I can predict, and the pilots will have to cover the 2 percent I can’t predict.’ This poses a significant problem. I’m going to have them do something only 2 percent of the time. Look at the burden that places on them. First they have to recognize that it’s time to intervene, when 98 percent of the time they’re not intervening. Then they’re expected to handle the 2 percent we couldn’t predict.

And therein lies the paradox of automation. We automate to make things better and safer by taking human error out of the equation, but by doing so, we become reliant on the machines we build. When those machines fail, who steps in to take back the controls, and what qualifications do they have to do so? Tesla’s idea is to use a learning algorithm to increase the accuracy and scope of its automated driving system. In theory this should be a highly effective way to “teach” a computer program how to drive a car. But even if this program can correctly identify 99.99% of the adjustments a human being might have to make while driving a car, what happens when it is faced with that 0.01% situation? Will Tesla, Google, and other innovative companies fall in line with Boeing’s sentiment on automated flight and rely on human intervention in those cases? There is no doubt in my mind that automated driving will save lives and make driving much safer, but unfortunately, it will most likely mean that we all become that much more reliant on automation at the expense of maintaining a refined skill set.


Author Elliott Orsillo, CFA is a founding member of Season Investments and serves on the investment committee overseeing the management of client assets. He spent nearly ten years as a financial analyst and portfolio manager working primarily with institutional clients prior to co-founding Season Investments. Elliott earned a bachelor's degree in Engineering from Oral Roberts University and a master's degree from Stanford University in Management Science & Engineering with an emphasis in Finance. Elliott and his wife Gigi have three children and like to spend their time outdoors enjoying everything the great state of Colorado has to offer.




Transparency is one of the defining characteristics of our firm. As such, it is our goal to communicate with our clients frequently and in a straightforward way about what we are doing in their portfolios and why. This information is not to be construed as an offer to sell or the solicitation of an offer to buy any securities. It represents only the opinions of Season Investments. Any views expressed are provided for informational purposes only and should not be construed as an offer, an endorsement, or inducement to invest.