“[I]f we're going to be smart humans, we must learn to be humble in situations where our intuitive judgment simply is not as good as a set of simple rules.” – Farnam Street Blog
I recently read Michael Lewis’s new book, The Undoing Project, which chronicles the history of behavioral economics. For those unfamiliar with the term, behavioral economics “studies the effects of psychological, social, cognitive, and emotional factors on the economic decisions of individuals and institutions” (Wikipedia). My first introduction to the field was during my graduate studies at Stanford. The exposure to this discipline created a paradigm shift in my understanding of human behavior and forever changed the trajectory of my career. In this week’s Insight we will share one of the many stories from The Undoing Project and explain how it relates to investment decisions.
Back in the 1960s an individual by the name of Lew Goldberg, who was working for the Oregon Research Institute, conducted a study with a group of radiologists to determine how they diagnosed stomach cancer and to test whether their decisions could be modeled into an algorithm. The doctors indicated that there were seven major signs they looked for, including the ulcer’s size and shape along with five other cues.
There were obviously many different plausible combinations of these seven cues, and the doctors had to grapple with how to make sense of them in each of their many combinations. The size of an ulcer might mean one thing if its contours were smooth, for instance, and another if its contours were rough. Goldberg pointed out that, indeed, experts tended to describe their thought processes as subtle and complicated and difficult to model.
Next, the researchers showed the group of doctors 96 different stomach ulcer images and asked each doctor to rate each ulcer on a seven-point scale from “definitely malignant” to “definitely benign.” Without telling the doctors, the researchers also showed each image twice, randomly mixing in duplicates to test whether individual doctors would assign the same score to the exact same image. Lastly, the researchers created their own simple model which equally weighted the seven factors the radiologists had provided and scored each image on the same 1-7 scale using their algorithm.
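The equal-weighting idea is simple enough to sketch in a few lines of code. The sketch below is illustrative only: the article names just two of the seven cues (size and shape), so the example ratings and the straight averaging are our assumptions about what such a model might look like, not Goldberg’s actual formula.

```python
def equal_weight_score(cue_ratings):
    """Average seven 1-7 cue ratings into a single 1-7 malignancy score.

    Each cue contributes equally -- no cue interactions, no special
    weighting, which is exactly what made the model 'simple'.
    """
    assert len(cue_ratings) == 7, "the radiologists named seven cues"
    return sum(cue_ratings) / len(cue_ratings)

# Hypothetical ratings for one ulcer image (1 = benign, 7 = malignant)
ratings = [6, 5, 6, 4, 5, 6, 5]
print(round(equal_weight_score(ratings), 2))  # -> 5.29
```

Note that nothing about the doctors’ subtle interdependencies between cues survives in this model; each cue simply counts the same as every other.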
The results of the experiment were captured on punch cards and sent off to UCLA for processing (this was the 1960s after all). When the results came back, the research team was aghast at what they learned.
In the first place, the simple model that the researchers had created as their starting point for understanding how doctors rendered their diagnoses proved to be extremely good at predicting the doctors’ diagnoses. The doctors might want to believe that their thought processes were subtle and complicated, but a simple model captured these perfectly well…More surprisingly, the doctors’ diagnoses were all over the map. The experts didn’t agree with each other. Even more surprisingly, when presented with duplicates of the same ulcer, every doctor had contradicted himself and rendered more than one diagnosis…
If you wanted to know whether you had cancer or not, you were better off using the algorithm that the researchers had created than you were asking the radiologist to study the X-ray. The simple algorithm had outperformed not merely the group of doctors; it had outperformed even the single best doctor.
The main reason why the research team was so shocked by the results of the experiment is that their simple model was just a starting point. Going into the experiment, the expectation was that it would take a number of iterations and fine-tuning to capture all the interdependencies and complexities of the radiologists’ decision making process. What they found is that a simple, equally weighted model which isn’t subject to all the “noise” of human decision making worked surprisingly well. The idea that simple models are often just as good as complex ones was unpacked in the book Thinking, Fast and Slow by Daniel Kahneman, who is one of the main characters in Lewis’s book and is widely considered one of the grandfathers of behavioral economics.
Simple equally weighted formulas based on existing statistics or on common sense are often very good predictors of significant outcomes. In a memorable example, Dawes showed that marital stability is well predicted by a formula: frequency of lovemaking minus frequency of quarrels…You don't want your result to be a negative number.
The important conclusion from this research is that an algorithm that is constructed on the back of an envelope is often good enough to compete with an optimally weighted formula, and certainly good enough to outdo expert judgment. This logic can be applied in many domains, ranging from the selection of stocks by portfolio managers to the choices of medical treatments by doctors or patients.
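Dawes’ formula really is back-of-the-envelope arithmetic, which is the whole point. As a sketch (the time unit and the sample numbers below are our assumptions; the quote specifies only a subtraction):

```python
def marital_stability(lovemaking_per_week, quarrels_per_week):
    """Dawes' index from the quote: frequency of lovemaking
    minus frequency of quarrels. Positive is good; per the quote,
    you don't want the result to be negative."""
    return lovemaking_per_week - quarrels_per_week

print(marital_stability(3, 1))  # -> 2 (positive: a good sign)
print(marital_stability(1, 4))  # -> -3 (negative: not what you want)
```

Two counts and a subtraction: no optimal weights, no expert judgment, yet per Kahneman it predicts the outcome well.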
Behavioral economics explains many of the basic biases humans are subject to when it comes to our decision making. So many things can influence our decisions that it is almost impossible to eradicate all biases from our thought process. Without going into a whole lot of detail, another study was done a couple of years ago showing how the odds of being paroled were directly tied to how long it had been since the judge had last eaten. The embedded chart from the study shows the probability of being paroled on the vertical axis against time of day on the horizontal axis, with the dotted lines representing breaks for food (snack, lunch, or dinner). Judges, who are supposed to be the bastions of impartiality, are subject to human biases just like the rest of us!
There are mountains of research supporting the thesis that human beings are far from purely rational decision makers. As such, and as the opening quote would indicate, we must “learn to be humble in situations where our intuitive judgment simply is not as good as a set of simple rules.” We agree with this sentiment wholeheartedly, which is why we have developed a computer-based, trend-following model to manage risk in our public equity (stocks) and hard assets allocations. Our trend-following model, which we have named MarketVANE, isn’t rocket science by any stretch of the imagination. But oftentimes relying on a simple model without human biases is better than listening to expert opinion. Does that mean that the simple model will always outperform human judgment? Absolutely not. But we are still firm believers that over long periods of time, a simple model that manages risk and reduces the cost of being human will eventually produce superior results to investment decisions that rely on gut feel and expert opinion.
Author Elliott Orsillo, CFA is a founding member of Season Investments and serves on the investment committee overseeing the management of client assets. He spent nearly ten years as a financial analyst and portfolio manager working primarily with institutional clients prior to co-founding Season Investments. Elliott earned a bachelor's degree in Engineering from Oral Roberts University and a master's degree from Stanford University in Management Science & Engineering with an emphasis in Finance. Elliott and his wife Gigi have three children and like to spend their time outdoors enjoying everything the great state of Colorado has to offer.
Transparency is one of the defining characteristics of our firm. As such, it is our goal to communicate with our clients frequently and in a straightforward way about what we are doing in their portfolios and why. This information is not to be construed as an offer to sell or the solicitation of an offer to buy any securities. It represents only the opinions of Season Investments. Any views expressed are provided for informational purposes only and should not be construed as an offer, an endorsement, or inducement to invest.