Implications of Human Irrationality on the
Managerial Decision Making Process
A thesis submitted to the Bucerius/WHU Master of Law and Business Program in partial
fulfillment of the requirements for the award of the Master of Law and Business (“MLB”)
Degree
Etai Biran July 26, 2013
13.348 words (excluding footnotes)
Supervisor 1: Prof. Matthias Meyer
Supervisor 2: Mr. Yoram Abramov
Table of Contents
Introduction
Chapter 1 - Standard Economics vs. Behavioral Economics
1.1 The difference between standard and behavioral economics
1.2 The concept of “Free Lunches”
1.3 Criticism of behavioral economics
Chapter 2 - Basic Structure of the Decision Making Process
Chapter 3 - Bounded Rationality, Heuristics and Biases
3.1 The limitations of human rationality
3.2 Heuristics and biases
3.2.1 Information selection biases
3.2.2 Information processing biases
3.2.3 Decision biases
3.2.4 Evaluation of the decision biases
3.3 When CEOs go wrong - a real life case of irrational managerial decision making
Chapter 4 - Improving the Decision Making Process
4.1 Strategies for improving the decision making process
4.1.1 Shifting a decision maker from System 1 to System 2 thinking
4.1.2 Debiasing the decision making process
4.1.3 Understanding biases in others
4.2 Why improving the decision making process is important
Summary
Bibliography
Introduction
How many times have we told ourselves we want to eat better, start exercising and quit smoking? We know these are the right things to do, yet we do none of them. Such behavior is the exact opposite of what characterizes the classical economic human (homo economicus). The classical economic human makes rational decisions and knows how to weigh the advantages and disadvantages of each decision in order to maximize value and profit for himself. The classical economic human is virtually flawless except for one fatal flaw - he does not exist (Lambert, 2006).
When it comes to making decisions in our lives, we all believe that we are in control. Few people realize that some of the choices we make are not only influenced by external factors beyond our control, but are also, contrary to what we think, consistently irrational. The troubling truth is that we continue to make these mistakes over and over again on a daily basis without being aware that we are actually making them (Ariely, 2009).
In today’s ever-growing corporate world, managerial decisions are made daily and
have great impact on a company’s future. According to a study conducted by Paul Nutt, professor
of management sciences at Ohio State University, one of the common causes of business distress is management failure and poor managerial decision making. Moreover, Nutt’s study found that “managers make the same mistakes over and over again as they formulate the decisions" (Nutt,
2002).
In this paper I will try to demonstrate that poor managerial decisions can often be a result
of managers being obstructed by “dark forces” that obscure their ability to make rational choices
for the company’s benefit. Moreover, managers are the victims of inherent biases that often lead them to make bad managerial decisions. What are these hidden forces and biases? How do they affect
managers while making decisions (mainly investment decisions)? Can managers overcome this
phenomenon? These are only some of the questions I wish to answer in this paper.
Moreover, I will try to demonstrate how these misguided behaviors are neither random nor
senseless. They are systematic and predictable. If I am able to prove that, I can then try to suggest
a starting point for improving the decision making process.
The discipline that will help me answer the questions at hand is referred to as behavioral
economics. Behavioral economics is a relatively new field. Its research revolves around aspects of
both psychology and economics. It serves as an antithesis to the conventional and classical
economic theory that represents our basic compelling belief that we, human beings, acting as
rational creatures, are capable of making the right decisions for ourselves.
My emphasis in this paper will be on managerial decision making while focusing on
investment decision making in companies. Nonetheless, these theories can be applied to any kind of decision making process, as well as to general everyday decisions. Throughout this paper I will
present the effect of irrational behavior and biases on the managerial decision making process using
real life examples and experiments held in the field of behavioral economics.
The research will be conducted in the following order: In the first chapter I will introduce
the differences between conventional and behavioral economics. I will try to justify the need to
shift from the conventional economics perception of human rationality to the behavioral economics
understanding that human beings are irrational creatures. In the second chapter I will introduce
the basic structure and anatomy of the decision making process. In the third chapter I will
demonstrate how bounded rationality, heuristics and biases influence the managerial decision
making process. In the fourth chapter I will present recommendations to overcome irrational
behavior and biases and ways to improve the decision making process. Finally, I will present a
summary.
Chapter 1 - Standard Economics vs. Behavioral Economics
In this chapter I will describe what behavioral economics is and how it differs from standard
economics theory. Introducing this difference will help me build the foundation on which this research relies. Whereas standard economic theory assumes absolute human rationality,
behavioral economics claims that human beings have bounded rationality. This is a key
understanding that will later help explain why managers make poor decisions.
1.1 The difference between standard and behavioral economics
Behavioral economics is a relatively new sub-field of standard economics. It seeks to shed light on some of the fundamental assumptions that standard economics relies on. Behavioral
economics incorporates insights from human psychology and investigates what happens in markets
where agents operating within these markets display human limitations and complications
(Mullainathan and Thaler, 2000).
Standard economics assumes that humans are rational in an absolute way. According to
standard economics, a rational human being is expected to know all the relevant information
regarding his decisions and is therefore able to calculate the value of the different options he faces.
Furthermore, rational human beings are not cognitively constrained in any way while they weigh the
ramifications of each potential choice they face. The result is that we are presumed to be making
logical and sensible decisions at all times. If we somehow do come to make a wrong decision, we
are still able to quickly adapt and improve, either on our own or with the help of market forces
(Fama, 1995).
Accordingly, while facing a decision, we humans are fully capable of assessing the worth
of all goods and services and the amount of utility all decisions are likely to produce. This is a
fundamental belief in the classical theory which defines the market as “a market where there are
large numbers of rational profit maximizers actively competing… where important current
information is almost freely available to all participants” (Fama, 1995, p. 76). Under these
assumptions, every individual in the marketplace is trying to maximize profit and is striving to
optimize his experiences.
Behavioral economics seeks to question these assumptions by applying scientific research
(some of which will be presented throughout this paper) on human and social cognitive and
emotional patterns. The scientific research conducted in this field (see for example Tversky and Kahneman, 1974, 1979, 1981; Rabin, 1998; Thaler, 2000; Ariely, 2009) has demonstrated that humans are influenced by irrelevant emotions and by shortsightedness stemming from their immediate environment, which lead them to make irrational decisions.
In my view, behavioral economics picks up where classical economic theory
falls short. It focuses on how people actually behave rather than assuming how they should behave.
This key difference paves the road to understanding why people behave the way they do. This is a
question standard economics fails to answer because it relies on the assumption that human
decisions are unquestionable (human beings are rational creatures that are expected to make the
best decision because they are “programmed” to maximize utility). The understanding that human
beings do not always behave rationally and that we make mistakes in our decisions implies that
there are ways to improve our decisions.
1.2 The concept of “Free Lunches”
Milton Friedman claimed that “there’s no such thing as a free lunch” (Friedman, 1975).
From an economic point of view, Friedman simply claims that it is impossible to get something
for nothing. This simple idea has been a core assumption of standard economics theory for a long
time. Standard economics theory assumes that all agents within the marketplace are rational agents
working effectively to maximize their own welfare. Since all agents are assumed to be in a state where they are all maximizing returns, it follows that there are no “free lunches”. Even if there
were any free lunches, someone would have already found them and extracted all their value.
Behavioral economics, on the other hand, assumes that the marketplace suffers from various
inefficiencies and that the agents within the marketplace do not always maximize their own
welfare. Instead, agents follow suboptimal decision strategies and fall prey to different decision
traps. This depressing view of human irrationality has a silver lining though. The human mistakes
that we make provide opportunities for improvement. Moreover, if these mistakes are systematic,
we can surely develop tools and methods that will help us make better decisions and improve our
overall well-being. This is the exact meaning of free lunches from a behavioral economics point of
view (Ariely, 2009).
With the understanding of the concept of “free lunches” we can point out another important
difference between standard and behavioral economics. While standard economics does offer a
rather optimistic view about human nature, behavioral economics offers, in my opinion, a more
realistic one. Rather than appealing to the tempting idea that our human reasoning
is limitless and that we are capable of making optimal decisions, it might be better to acknowledge
that we humans are imperfect. It is only with this understanding that we can take advantage of the
free lunches offered to us and thus improve our overall well-being. Otherwise we would just be
clinging to an appealing belief that we maximize utility in the best possible way because we are
naturally programmed to do so. It is this naive presumption that would paradoxically lead to a
decrease in utility maximization and eventually to market inefficiencies as well.
1.3 Criticism of behavioral economics
According to Dan Ariely, one of the common critiques by standard economists revolves
around the fact that most of the findings and the conclusions of behavioral economics are based on
surveys and experiments held on college students. The main argument is that this population lacks
“real life” experience and therefore they make decisions in a way that does not reflect society as a
whole. However, Ariely claims that “many decision-making studies in behavioral economics have
shown that young adults do not act much differently than adult adults when it comes down to their
core behavior” (taken from www.danariely.com – last visited on 04.07.2013). To refute this claim,
Ariely presents a study where the implications of the endowment effect (see below) were tested both on college students and on mature, experienced adults. The results of this study did not demonstrate any differences in behavior between the examined groups.
Moreover, traditional economists (Levitt and List, 2007) are skeptical of the experimental and survey-based techniques that are used in behavioral economics research. They claim that
behavioral economists use experiments done with humans to try to explain actions in the outside
world. These economic experiments are not the same as scientific experiments and what is shown
in an economic experiment does not always coincide with what is really occurring in the real world.
These arguments have been dismissed by behavioral economists who claim that consistent results
that are obtained in multiple situations and geographies can produce good theoretical insight
(Rabin, 1998).
It seems to me that the main criticism of standard economists revolves around technical criteria and does not question the core principles and findings of behavioral economics theory. Although it is beyond the scope of this research to examine the criticism of behavioral economics in full, I find it important to emphasize the validity of this field of research by confronting this criticism. As of today, the field of behavioral economics continues to grow and to produce significant results despite the objections of traditional economists.
Chapter 2 - Basic Structure of the Decision Making Process
In this chapter I will try to explain how the human decision making process is structured.
Most people lack an understanding as to how our minds work and how we come to make a decision.
The fact that our brain does not come with a “user’s manual” and that we do not hold a basic
understanding of how it works has profound consequences. Without this understanding, we are unable to anticipate when the cognitive processes that serve us so well every day are likely to misguide us.
Over the years, psychological research has revealed many of the shortcuts on which our brains rely to help us get through the day (see for example Tversky and Kahneman, 1974). These
systematic shortcuts that our brains take are common to all human beings – even the brightest ones.
They can sometimes lead to small, harmless problems (such as choosing the wrong dish for lunch or buying the wrong TV) and at other times to bigger, more significant ones (insolvency, bad investment decisions etc.). Recognizing the errors that occur in our brains can help us improve our judgment when it comes to making decisions.
The cognitive aspects that affect the decision making process are the ones that define our
judgment. To help us improve our judgment, it is crucial for us to first identify the components that
comprise a good decision making process. Hammond, Keeney and Raiffa (1999) suggest eight
steps that define a good decision making process:
1. Define the right problem;
2. Specify objectives;
3. Create alternatives;
4. Understand the consequences of each possible choice;
5. Consider tradeoffs;
6. Clarify uncertainties;
7. Consider risk tolerance;
8. Consider linked decisions.
These steps may provide a useful direction to define what a good decision making process might
look like.
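As a purely illustrative aside, the eight steps can be sketched as a simple checklist that a decision maker works through. The step texts below restate Hammond, Keeney and Raiffa's list verbatim; the helper function and its name are my own illustrative assumptions and are not part of their framework.

```python
# A hypothetical sketch of the eight steps as a checklist. The step texts
# come from Hammond, Keeney and Raiffa (1999); the helper function is an
# illustrative assumption, not part of their framework.
EIGHT_STEPS = [
    "Define the right problem",
    "Specify objectives",
    "Create alternatives",
    "Understand the consequences of each possible choice",
    "Consider tradeoffs",
    "Clarify uncertainties",
    "Consider risk tolerance",
    "Consider linked decisions",
]

def remaining_steps(completed):
    """Return the steps a decision maker has not yet worked through."""
    return [step for step in EIGHT_STEPS if step not in completed]

# Example: a manager who has only defined the problem and set objectives
done = {"Define the right problem", "Specify objectives"}
print(remaining_steps(done))  # six steps still to consider
```

Walking through such a checklist is, of course, no guarantee of a good decision; it merely makes the deliberate effort explicit.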
The question at hand, however, is whether people really follow these steps when they need to make a decision. Stanovich and West (2000) describe two systems of cognitive functioning. In the
professional literature the two systems are referred to as “System 1” and “System 2” thinking.
System 1 thinking refers to what we consider our intuitive thinking. This usually refers to fast,
effortless, implicit, automatic and emotional decision making. Stanovich and West claim that we
make most of our decisions in life based on System 1 thinking. On the other hand, System 2
thinking refers to reasoning that is slower, effortful, explicit, conscious and logical (Kahneman,
2003). The eight steps of Hammond, Keeney and Raiffa described above exemplify the kind of behavior that is identified with System 2 thinking.
For most of our decisions in life, System 1 thinking is sufficient. Moreover, it would be
inefficient to utilize System 2 thinking in every decision we make in life (such as what to wear in
the morning). System 2 thinking should preferably influence our most important decisions, those
that have great implications and consequences for our lives. However, people do not always follow this pattern. The more people have on their plates and on their minds, and the busier their lives become, the more they rely on System 1 thinking, which is quicker and easier to execute (even when considering a significant decision). To be more specific to our context, the
frantic pace of the managerial life suggests that executives will often rely on System 1 thinking
(Chugh, 2004). This is not to imply that managers should use System 2 thinking for every
managerial decision they face, but rather to suggest that they strive to identify those situations that
require System 2 thinking. Once they identify those situations, decision makers can consider
implementing the eight steps mentioned above to try to improve their decision making process.
To challenge the System 1 intuitive thinking pattern (which most people seem to have a
great deal of confidence in) consider the following diagram (Shepard, 1990):
[Figure: Shepard’s “turning the tables” illusion - two parallelogram tabletops of identical shape drawn at different orientations]
Which table is longer?
Most people intuitively think that the table on the left is longer than the table on the right when in fact both tables are of equal length (the right table is simply wider than the left one). This is an example of how our System 1 thinking sometimes fails us. Regardless of how smart people are, we all fall prey to these traps and errors, which occur more often in System 1 than in System 2.
Understanding the basic elements of a good decision making process and acknowledging
that we make decisions in different types of thinking patterns can help managers improve their
judgment. As a first step, managers should try to identify situations in which they need to shift
from intuitive System 1 thinking to logical System 2 thinking. Later, managers have to know what
kind of traps their System 1 thinking sets up for them and how to avoid these traps (which may
appear in System 2 as well). In behavioral economics, these traps are referred to as biases and
heuristics that are both a result of human irrationality (or bounded rationality as it may be referred
to throughout this paper).
Chapter 3 - Bounded Rationality, Heuristics and Biases
In this chapter I will describe what bounded rationality is and how heuristics and biases,
which are a form of human irrationality, influence the managerial decision making process. To
help me demonstrate this, I will present experiments conducted by behavioral economists that help draw conclusions about human irrationality. Please note that this paper is far from covering all the known biases and heuristics; it demonstrates only some of the relevant biases and heuristics that affect managerial investment decisions.
3.1 The limitations of human rationality
Most economists define the term rationality by referring to the decision making process and
judgment of an individual. A rational human being is expected to reach optimal results in the
decisions he makes while taking into consideration his values, attributes and his risk preferences.
Herbert Simon (1957) was one of the first to suggest that human beings are bounded in their ability
to be completely rational (as standard economists assume). Simon suggested that humans behave
in an irrational manner due to a lack of important information that would help them define the problem before making a decision. Moreover, human beings are subject to time and cost constraints that prevent them from obtaining all the information they need to make a rational choice. Furthermore, decision makers are only capable of retaining small amounts of data in their usable memory. Finally, limitations to a decision maker’s intelligence constrain his ability to accurately calculate what his optimal choice is from all the choices available to him. As a result, human beings are bounded in their ability to make rational choices and will forgo the best possible decision in favor of the first one they find reasonable or acceptable.
Simon’s findings simply demonstrate that human judgment deviates from rationality.
However, his study does not explain how our judgment is biased. This is where Tversky and
Kahneman’s (1974) work fills the gap. Their work and the work of behavioral economists that
followed, led to our understanding of judgment as we know it today. Specifically, they found
that human beings rely on “rules of thumb” when making decisions. These rules of thumb (or
shortcuts), are referred to as heuristics. Heuristics serve as a mechanism for us to cope with the
complex environment that surrounds our decisions. They may be useful at times but can also lead to severe errors. These heuristics eventually cause humans to deviate from rational
behavior during the decision making process.
It is important to note that the decision making process may be influenced by other factors
besides bounded rationality. For example, Mullainathan and Thaler (2000) have demonstrated how
decision making is bounded in two additional ways besides bounded rationality. The first is bounded willpower, meaning that we are more focused on present concerns than on future ones. As a result, our motivation is not consistent with our long term interests, which results in bad decision making (not having a retirement plan, for example). Second, Mullainathan and Thaler suggested that our self-interest is bounded, meaning we care about the outcomes of others, which leads us to focus on the wrong things and eventually to make bad decisions. Moreover, bounded awareness (the tendency to overlook important, obvious and readily available information when it lies beyond our immediate attention) and bounded ethicality (the notion that our ethics are bounded in ways we are unaware of) may also affect the decision making process. For simplicity, in this paper I will refer to all these examples as bounded rationality although they are not precisely captured by this concept.
3.2 Heuristics and biases
As mentioned, this paper will focus on managerial biases along the decision making
process. Specifically, I will examine how biases affect managers in investment decisions. This
specific field is referred to as behavioral finance (a related field to behavioral economics). Much
like behavioral economics, behavioral finance aims to prove what is wrong with the neoclassical
approach and its efficient market hypothesis (see above Fama, 1995). Furthermore, it demonstrates
how individual behavior is subject to psychological biases which are not in accordance with the
neoclassical assumptions of rationality. The following diagram categorizes these psychological
biases along the decision making process:
[Diagram, reconstructed as a list:]
Information Selection: Availability Bias; Selective Perception Bias
Information Processing: Representativeness Bias; Anchoring Bias; Framing Bias; Overconfidence Bias; Optimism Bias; Conservatism Bias and Herding Bias; Mental Accounting Bias
Decision: Prospect Theory; Endowment Bias and Sunk Costs Bias
Evaluation of the Decision: Hindsight Bias
3.2.1 Information selection biases
Being aware of our biased behavior during the information selection stage has significant implications for the rest of the decision making process. Selecting the right information to form a decision will have great impact on the decision’s outcome. Using the wrong information to evaluate a situation will have a “domino effect” on the rest of the decision making process and will eventually lead to bad judgment and bad decisions. If the information selection process is biased, it may well be that the final decision turns out to be a bad one because it was based on the wrong information all along. Moreover, biased behavior during this stage is likely to “contaminate” our
judgment with biased behavior in the following stages of the decision making process as well.
Therefore, high awareness and avoidance of biased behavior should be recognized as a top priority
in this stage.
Availability
People assess the frequency, probability or likely cause of an event according to the degree
to which examples or occurrences of that event are readily available in their memory (Tversky and
Kahneman, 1973). Usually, specific and clear events that can be easily imagined and that evoke
emotions will be more available than vague, amorphous, unemotional and difficult to imagine
events. For example, a manager considering an investment project will base his assessment of the probability of the project’s success on his recollection of the successes or failures of similar projects from the recent past. The ease of access to available information may cause managers to rest their judgment on information that is not necessarily relevant or adequate to the decision at hand.
To demonstrate this, Tversky and Kahneman (1973) conducted an experiment in which two groups were presented with two pairs of lists containing names: one pair with the names of 19 famous women and 20 less famous men, and another pair with 19 famous men and 20 less famous women. The first group of participants was asked to recall as many names as possible from the lists, and the second group was asked to judge whether the lists contained more names of men or of women. The famous names were more easily recalled, and the majority of the participants erroneously judged the class consisting of more famous names to be more frequent. This shows that we make judgments with reference to thoughts that are easily drawn from our brains. The first group was able to recall more famous names than less famous ones because the information regarding famous figures was readily available, making it easy to access and recall. The second group’s misconception is a result of the more famous names on the list being more available to their brains than the less famous ones. Accordingly, the second group concluded that the class of famous people was more frequent than the class of less famous people although in fact it was not. Nonetheless, the availability heuristic can be very
useful sometimes because our minds generally recall events that occur more frequently. This might
lead to accurate judgment in some situations.
Selective perception
Selective perception defines a type of behavior humans exhibit while obtaining new
information. This bias is described as a situation where the brain is only capable of gathering certain
information that is related to a particular frame of reference while ignoring other important
information that deviates from this frame. This frame of reference can be our existing values and
beliefs, certain anchors (see section 3.2.2 below), influences from our environment and other factors
embedded in our personality (Chabris and Simons, 2011).
To demonstrate this, Chabris and Simons conducted a visual test (video available at: www.theinvisiblegorilla.com - last visited on 19.07.13) in which they asked participants to watch a short video presenting two teams - one team wearing white and the other team wearing black. Each team passes a basketball among its members, and participants are asked to count how many times the white team passes the ball. During the test, a person in a black gorilla suit walks across the screen. When the test is over, participants are asked to give their answer to the counting question (which is irrelevant in this case). They are then asked if they spotted the gorilla that crossed the screen during the test. About 50% of the participants do not spot it.
The failure to spot the gorilla is a result of the inability to attend to it while engaged in the difficult task of counting the passes among the white team’s players (the task we were “framed” to focus on). These results indicate that we sometimes have a tendency not to see what we are not looking for, even when we are looking directly at it. This bias has severe implications for the managerial decision making process. Bounded awareness leads most decision makers to overlook important information that is readily available in their immediate environment. By doing so they fail to obtain all the available (and possibly essential) information they need in order to make a completely informed decision. Managers who focus on a certain spectrum of the data available to them may overlook other important information that exists right in front of them.
3.2.2 Information processing biases
This stage of the decision making process is where most biases tend to occur. In this stage
our mind tends to take shortcuts to make information processing easier for us. While processing
information, our brain’s cognitive activity is more intense than in other stages of the decision making process, and this is therefore when our judgment is most likely to be biased.
Representativeness
When people need to make a judgment about someone or something, they look for traits
that someone or something might have which correspond with typical stereotypes they are familiar
with (Tversky and Kahneman, 1974). When people rely on representativeness to make judgments,
they are likely to judge incorrectly because the fact that something is more representative does not
necessarily make it more likely. This heuristic is used because it offers us a quick and familiar
reference point for assessing our judgment calls. We refer to what we know although we are not
really sure that our reference point is absolutely accurate.
For example, managers may disregard an investment opportunity in an emerging country
because the common belief may be that emerging countries are considered to be primitive and
underdeveloped. In this case, the representativeness bias prevails and overcomes other rational
reasoning as to why it is worthwhile to invest in emerging countries (potential growth in the area,
lack of competition, lower operating costs etc.). In another example, the representativeness bias
may explain why investors tend to invest in state bonds. State bonds are often mistakenly regarded
as a “sure investment”. For most people the decision to invest in state bonds is represented by the
common belief that state bonds never fail and always guarantee a sure return. This was proven
wrong with Greece’s default in 2012.
To test the representativeness bias, Tversky and Kahneman (1983) conducted an experiment
in which participants were provided with the following description of a woman:
Linda is 31 years old, single, outspoken and very bright. She majored in philosophy. As a student,
she was deeply concerned with issues of discrimination and social justice and also participated in
anti-nuclear demonstrations.
After reading the text, the participants were asked to rank the following statements by their
probability from 1-8 (1 being most probable and 8 being least probable):
1. Linda is an elementary school teacher
2. Linda works in a book store and takes yoga classes
3. Linda is active in the feminist movement (F)
4. Linda is a psychiatric social worker
5. Linda is a member of the League of Women Voters
6. Linda is a bank teller (T)
7. Linda is an insurance sales person
8. Linda is a bank teller and active in the feminist movement (T+F)
The description of Linda was constructed to be representative of an active feminist (F) and
unrepresentative of a bank teller (T) and the results indeed confirm this (F>T). However, results
showed that 85% of the participants ranked the conjunctional option 8 as more probable than the
less representative option 6 (F > T+F > T). However, the probability of two conditions (T+F)
occurring simultaneously is necessarily lower than the probability of just one (T) occurring. In
other words, it is more likely that Linda is “just” a bank teller than that Linda is both a bank teller
and a feminist. This bias occurs because the conjunctional option (T+F) seems more
"representative" of Linda, although it is mathematically less probable.
In some cases the representativeness bias can offer a good initial evaluation (for example
determining that F>T), focusing us on our better options. However, it can also lead to serious errors
and bad judgment (for example, in the past people believed that diseases were caused by evil spirits
and therefore rejected medical help, which resulted in many unnecessary deaths).
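The conjunction rule underlying the Linda problem can be stated in one line of probability: P(T and F) = P(T) · P(F | T), which can never exceed P(T). A minimal sketch (the probabilities below are illustrative assumptions, not figures from the experiment):

```python
# Conjunction rule: P(T and F) = P(T) * P(F | T) <= P(T),
# because P(F | T) is at most 1. Illustrative numbers only.
p_teller = 0.05                 # assumed P(T): Linda is a bank teller
p_feminist_given_teller = 0.90  # assumed P(F | T): even if very high...

p_both = p_teller * p_feminist_given_teller  # P(T and F)

# ...the conjunction can never be more probable than T alone.
assert p_both <= p_teller
print(f"P(T) = {p_teller:.3f}, P(T and F) = {p_both:.3f}")
```

However the two assumed probabilities are chosen, the assertion always holds, which is exactly why ranking option 8 above option 6 violates probability theory.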
Conservatism and Herding:
The conservatism bias describes a behavior where people are too slow (too conservative) in
adjusting their beliefs in response to new information (Edwards, 1968). An example of this might
be managers who initially underreact to news about their firm, so that prices and other factors
reflect the new information only gradually. Conservatism is
usually a result of the brain referring to information it is already familiar with and comfortable with
processing. New information requires adjustment and processing and therefore people tend to
“stick” with what they already know.
Managers biased by conservatism may lean towards avoiding new information that requires
adaptation. The discomfort of dealing with new information causes conservative
decision makers to incorporate only familiar knowledge and information into their judgment and
disregard new information that might prove to be important and essential.
The herding bias, on the other hand, occurs when investors think that a new trend is so
popular that they simply have to be part of it – hence they follow the herd. This bias results from
the comfort of being part of the crowd (and the feeling of security that comes along with it). People
tend to feel that being part of the herd is less risky and more comfortable than being in a position
on their own. However, the popularity of a particular trend is no guarantee that following it is a
good decision.
The herding bias can be illustrated by the recent initial public offering of “Facebook”.
Prior to its listing, many investors were desperate not to miss out on buying Facebook shares.
This created a trend and a herding effect where people were buying shares simply because it was
the popular thing to do. However, buyers did not really know (or try to find out) much about
Facebook’s inflated valuation and its implications for their newly purchased shares. Facebook
investors have since seen the value of their shares decrease over time.
Anchoring:
The anchoring bias describes a human tendency to rely on an initial piece of information
offered to us (the “anchor”) when making decisions (Epley, 2004). After the anchor has been set,
a decision an individual makes becomes relative to the anchor (which might be an irrelevant point
of reference for the decision at hand). Moreover, Tversky and Kahneman (1974) claim that even
when people realize that an anchor is irrelevant to the decision at hand, they still find it hard to
adjust away from it. The anchor leads people to create reasons and access information that
is consistent with that anchor and avoid information that is inconsistent with it (Mussweiler and
Strack, 1999).
To demonstrate the anchoring bias, Tversky and Kahneman (1974) conducted an experiment
in which participants were assigned a random anchor and were later asked to answer a question. To
set an anchor, participants were given a random number (in some experiments the anchor was set
by asking participants to write down the last two digits of their phone number or social security
number). After the anchor was set, participants were asked to state whether the percentage of
African nations among UN member states was higher or lower than the anchor. They were then
asked to give their best estimate of the percentage of UN member states that are African. The
random anchor that
was given to the participants had substantial impact on their estimates. For example, people who
started with the number 10 had an average estimate of 25% whereas people who started with the
number 65 had an average estimate of 45%. Despite the fact that participants were aware of the
anchor being random and unrelated to their judgment task, it still had a significant effect on their
judgment.
Dan Ariely (2009) took this experiment a step further. His findings suggest that initial
anchors not only affect our judgment regarding the current decision we face but also affect all the
related decisions we make in the future. For example, an anchor set when purchasing a car affects
not only the current purchase but our future purchases of new cars as well (see also Simonsohn
and Loewenstein, 2006).
In investment decisions, anchors can vary from current stock prices to peer investment
quotes and evaluations. Managers who are anchored to these figures are likely to underestimate or
overestimate an investment’s actual worth and therefore exhibit bad judgment behavior.
Framing:
The decisions we make tend to be affected by how our choices are framed (Tversky and
Kahneman, 1981). The term framing refers to alternative wording of the same objective information
that significantly changes the decisions people make. We would not expect different wording
(framing) of the decision to affect our ability to make rational choices, when in fact it does.
To demonstrate the framing bias Tversky and Kahneman (1986) asked people the following
question regarding medical treatment in the case of lung cancer:
If you were diagnosed with lung cancer and had been given the following statistics, which treatment
would you prefer?
Surgery: Of 100 people having surgery 90 live through the postoperative period, 68 are
alive at the end of the first year and 34 are alive at the end of five years.
Radiation therapy: Of 100 people having radiation therapy all live through the
treatment, 77 are alive at the end of the first year and 22 are alive at the end of five
years.
In this case 82% chose surgery and 18% chose radiation therapy.
A second group of different people were given the same instructions only this time they had to
choose from the following options:
Surgery: Of 100 people having surgery 10 die during surgery or the post-operative
period, 32 die by the end of the first year and 66 die by the end of five years.
Radiation therapy: Of 100 people having radiation therapy none die during treatment,
23 die by the end of the first year and 78 die by the end of five years.
In this case 56% chose surgery and 44% chose radiation therapy.
The difference in the framing of the statistics produced a change in the preference towards
the radiation treatment (from 18% in the survival frame to 44% in the mortality frame). The
advantage of radiation therapy over surgery grows larger when it is stated as a reduction of the risk
of immediate death from 10% to 0% rather than as an increase from 90% to 100% in the rate of
survival.
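That the two frames carry identical information can be checked mechanically: each “alive” figure in the survival frame is 100 minus the corresponding “die” figure in the mortality frame. A small sketch using the numbers from the experiment above:

```python
# Survival frame: of 100 patients, how many are alive at each stage
# (end of treatment, end of year one, end of year five).
survival = {"surgery": [90, 68, 34], "radiation": [100, 77, 22]}
# Mortality frame: of 100 patients, how many have died by each stage.
mortality = {"surgery": [10, 32, 66], "radiation": [0, 23, 78]}

# The frames are logically equivalent: alive + dead = 100 throughout.
for treatment in survival:
    for alive, dead in zip(survival[treatment], mortality[treatment]):
        assert alive + dead == 100
print("Both frames encode exactly the same statistics.")
```

A rational decision maker should therefore choose identically under either frame; the observed shift from 18% to 44% in favor of radiation therapy is precisely what marks the framing bias.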
The way information is framed may have an effect on a manager’s decision whether to
engage in an investment or not. Managers might find themselves dismissing an investment
opportunity just because the information was presented in a certain way (for example, analysis of
predicted losses rather than possible gains). A certain investment opportunity may be framed in
negative terms and therefore be rejected whereas the same investment may be framed in positive
terms and therefore be accepted.
Overconfidence:
People tend to overestimate the accuracy of what they believe in and they tend to
overestimate their abilities as well. One example of this bias is provided by Barber and Odean
(2001), who compared trading activity and average returns in brokerage accounts of men and
women. They found that men (especially single men) trade more actively than women. This
corresponds with the greater overconfidence that men are considered to have as documented in
psychological literature.
Russo and Schoemaker (1992) proposed an “overconfidence test” that asked participants to
answer 10 quantitative general knowledge questions. For each question, participants were asked to
give a lower and an upper bound such that they were 90% confident the true value lay within that
range (see table below).
(Participants filled in a minimum and a maximum estimate for each item.)
1. Age of Martin Luther King when he died
2. Length of the river Nile (km)
3. Number of OPEC member states
4. Number of books in the old testament
5. Diameter of the moon
6. Weight of an empty Boeing
7. Birth year of Mozart
8. Gestation period of an Asian elephant
9. Airline distance London – Tokyo
10. Deepest known point in the world seas (m)
The results show that most people are unable to answer more than 70% of the questions correctly
(despite aiming to be 90% confident that their ranges would cover the true values). The reason is
that most of us are overconfident about the precision of our beliefs and do not acknowledge our
true uncertainty. It is simple to achieve a perfect score on the overconfidence test by setting very
wide ranges (which the instructions do not forbid). However, we limit ourselves to narrow ranges
because we are overconfident that our answers will be correct within these ranges.
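The logic of the test can be sketched as a simple calibration check: count how many of a respondent's 90% confidence intervals actually contain the true value. The intervals and values below are entirely hypothetical stand-ins, not data from Russo and Schoemaker:

```python
# Each entry: (stated minimum, stated maximum, true value).
# Hypothetical answers; a well-calibrated respondent's 90% intervals
# should contain the truth roughly 9 times out of 10.
answers = [
    (30, 45, 39), (10, 20, 25), (100, 300, 250),
    (5, 8, 9), (1000, 2000, 1500), (40, 60, 75),
]

hits = sum(low <= truth <= high for low, high, truth in answers)
coverage = hits / len(answers)
print(f"{hits} of {len(answers)} intervals contain the true value "
      f"({coverage:.0%} coverage vs. the 90% claimed)")
```

Coverage well below 90%, as for this made-up respondent, is the signature of overconfidence: the stated intervals are too narrow for the stated confidence.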
This bias may have severe consequences when it comes to managers making investment
decisions. While a manager’s confidence in his ability to succeed is important, overconfidence can
be an obstacle to effective decision making. Overconfident managers (high rank brings power and
confidence) tend to make impulsive and irrational decisions that may lead to the meltdown of an
entire group. A good case to demonstrate this is the story of former WorldCom CEO, Bernard
Ebbers.
WorldCom was “once the dominant company in the telecommunications industry” where
Ebbers served as CEO for over 20 years. Ebbers was notorious for his temper and overconfidence
and was regularly seen in cowboy boots and hat. Fearing Ebbers, WorldCom employees were
“reluctant to present Ebbers with company information that he didn’t like”. This ultimately put
WorldCom in serious financial trouble. “Ebbers led WorldCom through over 60 acquisitions in a
period of 15 years. He grew annual revenues from $1 million in 1984 to over $17 billion in 1998”.
However, Ebbers had little regard for long-term planning. Consequently, he avoided making large
strategic decisions while WorldCom was accumulating increasing debt. “Company employees who
tried to bring initial problems to Ebbers’s attention were discouraged. Ebbers made it clear he only
wanted to receive good news and then based his decisions on this news only”. WorldCom
employees noted that with Ebbers it was either “his way or the highway”. Ebbers’s
overconfidence eventually led WorldCom into serious financial problems, which he avoided
confronting (the increasing debt was disregarded). This resulted in the group’s meltdown in 2002.
In 2006, WorldCom was purchased for $7.6 billion and subsequently integrated into Verizon. In
2005, Ebbers began serving a 25-year jail sentence (case taken from Carpenter et al., 2009, p. 825).
Optimism
Optimism is closely related to overconfidence but is still quite different. Overly confident
managers will tend to hold unwarranted optimism regarding future success and profitability of their
investment decision. Moreover, they will usually maintain this optimism even when disappointing
results regarding their investment become available. Optimism helps investors maintain their
confidence in the superiority of their investment decisions. Much like confidence, it is important
for managers to have a fair measure of optimism in the decisions they make. Every decision
involves some measure of risk and uncertainty. Pessimistic managers may tend to avoid making
decisions because they are already convinced they are unlikely to succeed. On the other hand,
overly optimistic managers may tend to make bad decisions because they are overly positive about
the future success of their decisions (Chapin and Coleman, 2009).
3.2.3 Decision biases
Mental accounting
Mental accounting is considered to be a specific form of framing where people mentally
categorize and segregate certain decisions from one another (Thaler, 2004). Humans hold within
their brains a variety of “mental accounts” that they use to evaluate different types of financial
activities. From a managerial point of view, a manager may demonstrate risk loving behavior in
one investment account (the company’s) but take a very conservative and risk averse approach with
another account (his children’s college fund). This suggests that we apply different decision rules
to each and every one of our mental accounts.
To demonstrate this phenomenon Thaler (1985) described the following scenario. A woman
walks by a store front and notices a beautiful coat. She examines the coat closely and finds it even
more beautiful. Then she sees the price tag and discovers that the coat is twice as expensive as she
had originally guessed and was willing to pay. After some minutes of thought, she decides she
cannot justify paying so much for the coat and goes on her way. When she arrives home, she finds
that her significant other has surprised her and purchased the coat for her with money from their
joint account. Thaler claims that most people would be thankful for the gift and would not consider
returning it, although they had already concluded it was well over their budget. The reason is that
she gets what she wants without having to face the guilt associated with the purchase. Although
the coat was bought from the same physical account the couple shares, it was not bought out of
the woman’s mental account, and therefore she has no problem accepting the gift.
In a different example, Thaler presents the following scenario:
You have recently subjected yourself to a weekly budget and are going to purchase a $6 sandwich
for lunch. As you are waiting in line, one of the following things occurs:
1. You find out that you have a hole in your pocket and have lost $6; or
2. You buy the sandwich, but as you plan to take a bite, you stumble and drop the sandwich
on the floor.
In either case (assuming you still have enough money), would you buy another sandwich?
In the first situation, most people would not consider the money they lost as being part of
their lunch budget. This money was never allocated to their “lunch account” and therefore was not
spent from it. Consequently, most people are likely to buy another sandwich in this situation.
In the second situation, by contrast, most people would not buy another sandwich because the
limited amount of money in the “lunch account” had already been spent.
In this last example, Tversky and Kahneman (1981) demonstrate how mental accounting
may cause us to evaluate the same amount of money differently depending on the account to
which we allocate it. The example describes a man facing two different daily tasks. In
the first task, the man wishes to buy a pen. He arrives at the office supply store and picks up a pen
that costs $14. Just before he pays for the pen, the cashier informs the man that the same exact pen
is sold for half the price in another store which is just 15 minutes away. In the second task, the man
wishes to purchase a suit. He arrives at the store and picks out a $500 suit. The salesperson informs
the man that the same exact suit is sold for $493 in another store just 15 minutes away. Tversky
and Kahneman claim that it is more likely that the man would make the trip in the first case than
in the second case, although in both cases it is the same $7. The reason is that we have two
separate mental accounts, one for inexpensive items and the other for expensive items. In the first
account, $7 is considered a significant amount for which we are willing to make the extra effort,
whereas in the second account $7 is an insignificant sum that does not justify the trip to the other
store.
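The pen/suit asymmetry can be made explicit with a line of arithmetic: the absolute saving is the same $7 in both cases, but mental accounting evaluates it relative to the price of the item. A minimal sketch:

```python
# Same absolute saving, very different relative saving.
savings = 7
pen_price, suit_price = 14, 500

pen_relative = savings / pen_price    # 7/14  = 50% off the pen
suit_relative = savings / suit_price  # 7/500 = 1.4% off the suit

print(f"Pen:  save ${savings} of ${pen_price} ({pen_relative:.1%})")
print(f"Suit: save ${savings} of ${suit_price} ({suit_relative:.1%})")
# A rational buyer weighs the same $7 against the cost of the trip in
# both cases; mental accounting compares 50% with 1.4% instead.
```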
While creating mental accounts and allocating money for specific uses may be essential for
our self-control, it is also a mental shortcut that can limit investment returns and profit. Loss-averse
managers may be reluctant to close underperforming business units or get rid of unworthy
investments because they weigh the loss solely against the money invested within that “account”,
and selling or closing the account would register it as a “loss”. This also ties up money that could be
more efficiently invested elsewhere and generate profit. The segregation of our different mental
accounts prevents us from evaluating the situation as a whole and focuses us on factors relevant
only to that specific account. By doing this, we miss out on the “big picture”.
Endowment bias/Sunk costs
The endowment effect describes a behavior where an individual places more value on what
he owns. Most people would sell an item they own for more than what they would be willing to
pay for it if they did not own it (Thaler, 1980). Objectively, the evaluation of an item should be
based on its true value. However, sellers usually add an extra cost to the item’s value which
represents their attachment to it. This effect is demonstrated by an experiment Thaler conducted
with MBA students at the University of Chicago. Thaler asked two groups of students one of the
following questions (which were realistic during the time of the experiment):
Group 1:
It is 1998, Michael Jordan and the Chicago Bulls are about to play their final championship
game. You would very much like to attend. The game is sold out and you won’t have another
opportunity to see Michael Jordan play for a long time, if ever. You know someone who has
a ticket for sale. What is the most you would be willing to pay for the ticket?
Group 2:
It is 1998, Michael Jordan and the Chicago Bulls are about to play their final championship
game. You have a ticket to the game and would very much like to attend. The game is sold
out and you won’t have another opportunity to see Michael Jordan play for a long time, if
ever. What is the least that you would accept to sell your ticket?
Thaler’s findings were that the average amount the first group of students was willing to
pay for the ticket was $330 whereas in the second group the average amount the students
demanded for the ticket was $1,920 (for a similar experiment that was held on Duke students
see Ariely, 2009, p. 127). This demonstrates how the endowment effect causes people to
overvalue what they own. Ownership creates value that does not accord with a rational
evaluation of what the commodity is really worth to the individual.
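The size of the endowment effect in Thaler's experiment can be expressed as the ratio of the sellers' asking price (willingness to accept) to the buyers' offer (willingness to pay):

```python
# Figures from Thaler's ticket experiment described above.
willingness_to_pay = 330      # group 1: does not own the ticket
willingness_to_accept = 1920  # group 2: owns the identical ticket

ratio = willingness_to_accept / willingness_to_pay
print(f"Owners demand about {ratio:.1f} times what buyers will pay")
```

For a rational agent valuing the identical ticket, this ratio should be close to 1; a ratio near 6 quantifies how strongly ownership inflates perceived value.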
From a manager’s point of view, this understanding is critical for making wise assessments
of the value one places on his investment decisions. Managers will have a tendency to overvalue
their current investments and include the sunk costs that were put into the investment. These
sunk costs can be both monetary and emotional (hard work, time, effort etc.). This results in
deviation from the real value of the investment and leads to an irrational evaluation of it.
Managers who are affected by this bias tend to hold on to their investments too long, when it
would make sense to give them up for a good offer that comes along (before it’s too late).
Moreover, the sunk cost bias and the “attachment” to the investment may lead managers to keep
investing in an unprofitable venture. Biased managers who fail to recognize sunk costs as
historical and irrecoverable do not have an objective measure of their investment, because their
reference point refers to the past and not to the present state. As a result, they misinterpret the
evaluation of future costs and benefits associated with the investment, which eventually leads
them to make bad investment decisions.
3.2.4 Evaluation of the decision biases
Hindsight bias
The hindsight bias is defined as “a tendency to change a previous judgment in the direction
of newly provided information” (Mazzoni and Vannucci, 2007, p. 204). People tend to look back
on their own judgment and the judgment of others and to reconstruct their judgment according to
information that reveals the results of their decisions. Fischhoff (1975) examined this bias with an
experiment in which participants were asked to judge the outcome of a historical event. Four groups
of participants were asked to provide the outcome of a famous battle out of four possible
alternatives. Each group was told that a different alternative was the true outcome of the
battle. Later, each participant was asked to assess the probability of each of the four outcomes as
if he had not been given the reported outcome beforehand. Most participants claimed
that even if they had not been given an outcome, they would have chosen the same outcome they
were told had happened as being the most probable one. Fischhoff’s experiment demonstrates how
knowing an outcome of a situation increases an individual’s belief as to how he would have
predicted that outcome without that prior knowledge.
Due to the hindsight bias, people are less likely to evaluate decisions objectively and learn
from their past mistakes. After managers are provided with new information that sheds light on
past decisions, they may be biased to assess the decision incorrectly according to that new
information. The hindsight bias reduces the chance of managers learning from their mistakes and
increases the chance that managers will repeat the same mistakes over and over again. Biased
managers are likely to use new information to help justify their decisions rather than to
retrospectively learn where they went wrong.
Prospect theory
Standard financial theory assumes that higher wealth provides higher satisfaction (or
“Utility”) for a rational individual, but at a diminishing rate (see figure below).
The curve flattens (the increase in utility diminishes) as an individual becomes wealthier, giving
rise to risk aversion. A gain of one unit of wealth increases utility by less than a loss of that
same unit decreases it. Therefore, investors are likely to reject risky investments that do not offer
a risk premium.
Prospect theory modifies the analytic description of rational risk-averse investors described
above in three main aspects. First, it describes how people choose between risky alternatives
whose outcome probabilities are known (Kahneman and Tversky, 1979). Prospect theory
recognizes that people do not make decisions solely on the basis of the inherent utility that a
certain option possesses for them, but according to how their brain edits and evaluates information
(see heuristics and biases above). Tversky and Kahneman claim that people make decisions based
on the potential value of losses and gains rather than on the final outcome. Utility does not depend
on the level of wealth but on changes in wealth from the current level (see figure below).
Moreover, the choice depends on the way decisions are framed (see framing bias above).
Second, prospect theory offers an S-shaped value curve as opposed to the concave utility
curve of the classical theory. The reference point denotes no change from current wealth. The left
side of this point indicates a decrease in wealth (losses) and the right side an increase in wealth
(gains). In a series of experiments Tversky and Kahneman were able to show that individuals tend
to be risk averse in the domain of gains and relatively risk loving in the domain of losses. Therefore
the curve is S-shaped (convex in the losses domain and concave in the gains domain). This fact
has significant implications. Whereas a concave utility function rules out risk loving behavior, the
S-shaped function gives rise to risk loving behavior in the domain of losses. This helps explain the
irrational behavior and bad judgment of managers who have experienced recent losses and
therefore tend to get involved more easily in risky investments.
To demonstrate this, the following example is given:
You are given the chance to participate in the following sets of games:
Set #1:
Game A: You lose $100; OR
Game B: You play a lottery where you have a 50% chance of losing nothing and a 50% chance of
losing $200.
Set #2:
Game C: You win $100; OR
Game D: You play a lottery where you have a 50% chance of winning nothing and a 50% chance of
winning $200.
The results show that in Set #1 most people prefer game B over game A, and in Set #2 game
C is preferred over game D. According to classical rational theory (with utility linear in money),
the corresponding games generate the same expected utility:
U(A) = -100 and U(B) = 0.5 * 0 + 0.5 * (-200) = -100
U(C) = 100 and U(D) = 0.5 * 0 + 0.5 * 200 = 100
If this were the case, we should not expect rational individuals to prefer one game over the
other within each set. Rational individuals should be willing to play either game because both
generate the same expected utility. However, this has been shown to be false. As mentioned, the
experiment demonstrated that individuals tend to be risk averse in the domain of gains (preferring
a sure win of $100 over playing a lottery) and relatively risk loving in the domain of losses
(preferring to gamble on the chance of losing nothing rather than accept a sure loss of $100).
Third, the S-shaped value function is steeper in the domain of losses than in that of gains.
This implies relative loss aversion and explains why humans react more strongly to losing money
than to winning the same amount (thus reinforcing risk loving behavior when it comes to losses).
Loss aversion is exemplified by the endowment effect, whereby humans value what they own more
than they value an equal alternative owned by others (see above).
Prospect theory explains why investors have a strong preference to hold on to unprofitable
investments and to sell profitable ones quickly. Decision makers are likely to compare outcomes
to some reference point. Some managers consider the current sunk costs in the investment as a
reference point. If an investment is profitable, managers can either sell it and make a guaranteed
profit (making them “winners”) or hold on to it and risk that profit for an unknown and uncertain
return. Therefore, managers tend to be risk averse and sell the investment to guarantee the gain. On
the other hand, if an investment is unprofitable, managers are faced with a sure loss or with holding
the investment for an uncertain return. Therefore with losses, managers tend to be risk loving and
take the risk of holding on to a bad investment in the hope it will become a good one.
3.3 When CEOs go wrong – a real-life case of irrational managerial decision making
After reviewing all the above-mentioned biases and heuristics, I will demonstrate how these
behaviors may have affected a CEO in a real-life example. I will describe the decision of Quaker’s
former CEO, William Smithburg, to acquire two different companies and the effect his decisions
had on Quaker in each scenario. Furthermore, I will try to describe which biases and heuristics
might have affected the CEO during his decision making process (case taken from
www.investopedia.com – last visited on 07.07.2013).
The Quaker – Snapple acquisition
In 1983 Quaker, an American food conglomerate, decides to buy Stokely-Van Camp, the
original makers of “Gatorade”. After the acquisition, Quaker begins aggressive segmenting and
promotion of the brand and turns Gatorade into the number one selling sports drink. Quaker
managed to convert Gatorade from an underexploited niche product into a megabrand, building
the modestly successful business from $87 million in annual sales into a $600 million giant.
Quaker’s CEO at the time, William Smithburg, claims that Quaker “built Gatorade from scratch”
and turned it into the megabrand it was back then and still is today. In 1996, supermarket sales
alone exceed the competition by more than tenfold. Smithburg’s Gatorade investment is considered
a huge success.
In 1994, after misadventures in pet foods and toys, Quaker’s CEO Smithburg tries to tie
in with the Gatorade strategy. Smithburg decides to buy “Snapple”, a brand of tea and juice drinks,
for $1.7 billion. With the Snapple acquisition, Smithburg was aiming for a remake of his Gatorade
success story. After closing the deal, Quaker’s president of the beverage
division Don Uzzi claimed that “Quaker knows how to advance businesses. Our expectation is that
we do the same as we take Snapple as well as Gatorade to the next level”. However, the Snapple
investment did not turn out to be a success like the Gatorade investment and in 1997 Snapple was
sold to Triarc for $300 million (resulting in a $1.6 billion loss). Not long after that, Smithburg
retired.
Why did Smithburg fail to remake the Gatorade success with Snapple? In 1994, prior to the
finalization of the deal, Wall Street was warning Quaker that it was overpaying by
$1 billion. In addition to overpaying, Smithburg didn’t really know how to “bring specific value-
added skills sets and expertise” to the Snapple operation. He relied solely on his Gatorade success,
hoping to duplicate it while thinking it was the same investment scheme all over again. At the time
of the acquisition, Snapple sales were already dropping from $700 million to $500 million.
Moreover, Quaker’s management thought it could leverage Snapple’s sales through its current
relationship with supermarkets and large retailers who were purchasing Gatorade from them.
However, Smithburg failed to realize that “about half of Snapple's sales came from smaller
channels, such as convenience stores, gas stations and related independent distributors”. While
Quaker was struggling with all these problems, its rivals, Coca-Cola and PepsiCo, “launched a
barrage of competing new products that ate away at Snapple's positioning in the beverage market”,
which eventually led Quaker to sell Snapple for $300 million in 1997.
What biases may have affected Smithburg’s decision to acquire Snapple?
Availability – To assess the probability of success of the Snapple investment, Smithburg might
have drawn on the clear and readily available example in his memory of a similar recent
successful investment: the Gatorade investment. By doing so, he neglected other important
information and focused solely on the seemingly high probability that the Snapple investment
would succeed much like the Gatorade investment had. This might explain why Smithburg ignored
Wall Street’s warnings that Snapple was overpriced.
Selective perception – The Gatorade investment might have acted as a framing reference point for
Smithburg’s evaluation of the Snapple investment. Evaluating the likelihood of success of the
Snapple investment with reference to the Gatorade investment might have led Smithburg to miss
essential information. This information was readily available to him, but he overlooked it because
he was not really looking for it. Smithburg might have focused his attention on the same
information and criteria he used to assess the Gatorade investment, which would result in
overlooking available information that was relevant to the Snapple investment. This may explain
why Smithburg failed to realize that Quaker’s existing distribution channels were inadequate
for selling Snapple.
Overconfidence and Optimism – Smithburg might have been overconfident after coming out a
“winner” with the Gatorade investment. He probably felt sure that if he had managed to turn
Gatorade into a megabrand, he could do the same with Snapple. This overconfidence might have
caused Smithburg to rush into the Snapple investment while relying solely on his success with
Gatorade. Moreover, Smithburg might have been overly optimistic about the success of the Snapple
investment. Overoptimism in this case could also have been fueled by the huge success of the
Gatorade investment (if Gatorade was a huge success, why couldn’t Snapple be one too?).
Sunk costs bias – It did not take Smithburg much time to realize that Snapple was a bad
investment. However, Quaker only sold Snapple almost three years after realizing this. Perhaps
Smithburg did not sell Snapple because the large sunk costs in this investment made it hard for
him to let go. Moreover, Smithburg might have been concerned about coming out a “loser”. This
might have been the reason for holding on to the investment for too long, even though it was
generating huge losses for Quaker.
As we can see, heuristics and biases that cause people to behave in an irrational manner
have severe implications for the decision making process and may lead to unwanted results.
However, there is a silver lining to all this. The experiments in the field of behavioral economics
mentioned above demonstrate how these behaviors are systematic and predictable. Because of this,
we can offer solutions to overcome these cognitive failures and improve the way we make
decisions. The next chapter will focus on recommendations for improving the decision making
process.
Chapter 4 - Improving the Decision Making Process
In the previous chapters, I described why and how human judgment is flawed. I started by
explaining how behavioral economics questions the standard economic theory and its assumptions
regarding human rationality. Later, by describing how the human decision making process is
structured and demonstrating how heuristics and biases occasionally affect this process, I was able
to point out how irrational behavior is neither random nor senseless. It is systematic and
predictable. With this understanding, I can now offer solutions for overcoming irrational
behavior and ways to improve the decision making process.
The study of biases has substantial practical value when it comes to managerial decisions
in companies. Studying how and when managers fail can provide managers with useful lessons to
help them succeed and do better. Organizations and managers that understand this can improve
their decision making processes and make them significantly more effective. Moreover, the study
of managerial failures is not intended to question managerial intelligence but rather to suggest ways
of overcoming biased behaviors that all humans are expected to demonstrate while they make
decisions. The strategies proposed in this chapter are in no way a sure solution for
eliminating biases and curing all decision making problems, but they do serve as a good starting
point for making better decisions.
Before I discuss strategies for improving the decision making process, it is important to
mention how difficult finding solutions for biased decision making has proven to be. Baruch
Fischhoff (1981) examined four commonly proposed solutions for biased decision making and
tested their effectiveness in improving the decision making process. The proposed solutions were:
1) Offering warnings about the possibility of biases occurring;
2) Describing the direction of a bias;
3) Providing feedback for specific biased decisions;
4) Offering an extended program of training with feedback, coaching and other tools designed to
improve ongoing decision making.
Fischhoff’s studies (which examined the effectiveness of these solutions for over 25 years) have
shown that the first three strategies yielded little success in improving the decision making
process, while the fourth showed only moderate improvements. This demonstrates how difficult it
has been over the years to provide a clear and concrete solution for our misguided decision
making behaviors.
4.1 Strategies for improving the decision making process
The main benefit of the behavioral economics theory over the standard theory is the
opportunity for “free lunches” as described in chapter 1. As mentioned, the understanding that we
make mistakes offers an opportunity for improvement. The following are proposed strategies for
improving the decision making process:
4.1.1 Shifting a decision maker from System 1 to System 2 thinking
Stanovich and West’s (2000) two systems of cognitive functioning, described in chapter
2, offer a good starting point for proposing effective strategies to improve the decision making
process. As mentioned in chapter 2, the frantic pace of a manager’s life is likely to lead him to
rely on System 1 thinking, thus increasing the chances of cognitive biases occurring. A number of
strategies may help overcome specific decision making biases by shifting decision makers from
System 1 to System 2 thinking (note that this does not succeed in overcoming every bias).
A very successful strategy to do so relies on replacing intuition with specific decision
analysis tools. One useful tool is a linear model. A linear model uses “a formula that weights and
adds up the relevant predictor variables” of a decision (Bazerman and Moore, 2008, p. 181). The
result of the formula provides a quantitative prediction of the decision’s outcome (Dawes and
Corrigan, 1974). For example, a linear model to evaluate an investment may be constructed by
using data such as known expenses, prior profits from similar investments and other available
market data relevant to the investment. Researchers have found that linear models outperform
decisions based on intuition and may often produce predictions that are superior to an expert’s
analysis (Dawes, 1971).
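The weighted-sum form of such a model is simple enough to sketch in a few lines of Python. The predictor variables and weights below are purely illustrative assumptions, not figures from any of the cited studies; in practice they would be estimated from market data or from the outcomes of past decisions:

```python
def linear_score(weights, predictors):
    """Weighted sum of predictor variables, the linear-model form
    described by Dawes and Corrigan (1974)."""
    return sum(w * x for w, x in zip(weights, predictors))

# Hypothetical predictors for two candidate investments, each scaled
# to a 0-10 range: projected revenue growth, fit with existing
# distribution channels, and prior profits from similar deals.
weights = [0.5, 0.3, 0.2]          # illustrative weights only

investment_a = [8, 7, 9]           # solid channel fit, proven record
investment_b = [9, 2, 4]           # high growth but poor channel fit

score_a = linear_score(weights, investment_a)  # 0.5*8 + 0.3*7 + 0.2*9 = 7.9
score_b = linear_score(weights, investment_b)  # 0.5*9 + 0.3*2 + 0.2*4 = 5.9
print(score_a, score_b)
```

Even a crude model like this forces the decision maker to state explicitly which factors matter and by how much, which is precisely where unaided intuition tends to be inconsistent.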
A linear model forces decision makers to consider relevant information in order to construct
an effective formula. Therefore, this model can help avoid many of the information selection
biases. However, we do not always know how to construct these formulas and which information
to include in them. Moreover, coming up with an effective formula may be costly and time
consuming. In my opinion, a linear model may serve as a good solution for relatively simple
decisions and will be harder to implement for hard and complex decisions.
Another strategy to help shift people to System 2 thinking involves taking an outsider’s
perspective or getting an expert’s opinion. Taking an outsider’s perspective may help reduce a
decision maker’s overconfidence about his knowledge. A second opinion might raise questions and
problems the decision maker was not aware of because he was overly sure he already possessed all
the information he needed to come to a decision (Gigerenzer et al., 1991). Kahneman and Lovallo
(1993) suggest that taking an outsider’s perspective may help mentally remove a decision maker
from the situation. By doing so, a decision maker can objectively evaluate an outsider’s opinion
and the situation itself. However, this does not guarantee that the outsider himself is not
cognitively biased while evaluating the decision. Nonetheless, an outsider’s perspective may help
raise important questions and issues that the decision maker himself was not aware of, thus
leading to System 2 thinking and better decision making.
Other research on shifting people from System 1 to System 2 thinking encourages decision
makers to consider the opposite alternative of the decision they are about to make (Larrick, 2004).
By questioning his own decisions, an individual can reduce errors in the information processing
stage and can overcome biases such as overconfidence, optimism and anchoring.
4.1.2 Debiasing the decision making process
Debiasing refers to the process of trying to eliminate or reduce the occurrence of biases
throughout the decision making process. As mentioned above, Fischhoff (1981) examined four
techniques to help decision makers debias their judgment. However, his research showed that these
techniques yield little success. Fischhoff’s conclusion was that debiasing is an extremely difficult
process that has to be closely monitored by psychological experts that can later offer coaching for
decision makers. Training and coaching decision makers about their systematically biased behavior
can help raise their awareness of the pattern of their misguided behavior and will eventually effect
change. However, this is not always successful. Fischhoff’s (1977) research on the hindsight bias
has demonstrated that even when participants are guided to avoid the bias, it still remains in some
cases. However, later research (Larrick, 2004) shows that humans do have the ability to overcome
biases through training. For this to be effective, feedback and training have to be conducted in
proximity to the decision and have to be personally tailored to the decision maker (there is no
“one size fits all” solution).
Lewin (1947) offered a different model for debiasing decisions called the “change
management model”. According to this model, in order to improve the decision making process
and make it sustainable over time, individuals must force themselves to engage in a three step
process. In the first step, individuals have to unfreeze the notion that their decision making process
does not require improvement. In practice, this is easier said than done. Individuals typically rely
on their intuitive strategy for years. Therefore, changing this strategy forces people to admit that
they were doing something wrong all these years. Most successful managers will probably consider
their intuition as being a talent rather than a flaw. What helps in this case is demonstrating with the
use of experiments (some of which were described in chapter 3) how all of us, including the
brightest ones, are victims of biased behavior. Usually, people who test themselves in these studies
(for example, the roughly 50% of people who fail to spot the gorilla in the “invisible gorilla”
test) and discover that they are making mistakes are ready to unfreeze themselves and start
learning how they can perform better.
In the second stage, once an individual has unfrozen himself, he becomes willing to
consider alternatives. This step represents the change process of the individual. However, change
is not necessarily guaranteed. Internal resistance to change is likely to cause the individual to
constantly question the necessity of change. If influenced by this urge, the individual is then
dragged back to the first step where he needs to unfreeze himself again from the notion that his
current decision making strategies are flawless. There are three critical steps to changing one’s
decision making process (Bazerman and Moore, 2008):
1. Clarifying the existence of a specific decision bias;
2. Explaining the roots of this bias;
3. Reassuring the individual that these biases are not to be taken as a threat to his self-esteem.
From a managerial point of view, to ensure an effective change process, managers need to
consider a slightly different change process than “regular” individuals (Lovallo and Sibony, 2010).
First, managers need to decide which decisions warrant the effort to change. Not all decisions are
worth changing, because change is sometimes a time consuming and costly process. Therefore,
managers have to identify the crucial decisions in which their biased behavior most needs fixing.
Second, managers need to identify the biases that are most likely to affect these critical
decisions. This is not a simple task, as most managers do not hold sufficient knowledge
regarding biases. Encouraging open group discussion can help raise awareness of possible
biases that may affect the decision and can prove effective in this step. Moreover, hiring
professional consultants to help identify biased behavior may help as well. Finally, after identifying
the possible biases that may affect the decision, managers need to select practices and tools to
counter the most relevant biases. The strategies offered in this chapter can serve as a good starting
point for this phase.
After change occurs, the third, “refreezing” step needs to take place. Even when an individual
makes changes in the way he makes decisions, it is still tempting for him to go back to past practices
and bad habits. Old biases continue to exist and can easily reappear. The new change is something
foreign and must be embedded into the individual’s pattern of behavior. This process takes time.
As we consciously use new “fixed” strategies, they slowly become part of our behavior and replace
old patterns. For effective refreezing, an individual should routinely check up on and evaluate his
decisions while remaining aware of the limitations of his judgment.
4.1.3 Understanding biases in others
The nature of the managerial decision making process involves considering the decisions
of others as well. Evaluating another person’s decision is a completely different task from
evaluating your own. Moreover, research conducted in the field of behavioral economics has shown
that all human decisions, regardless of whether they are made by managers or “simple” workers,
are subject to the same set of biases. The question is: to what extent should a manager adjust his
decisions with respect to the decisions of others (knowing that other people’s decisions may be
biased too)?
Tversky and Kahneman (1982) offer a three phase procedure for adjusting our decisions
with respect to other people’s decisions. With the help of this model managers can recognize the
existence of biases in other people’s decisions and then adjust their own decisions accordingly. In
the first phase the manager has to accurately perceive and examine the context in which the other
decision is being made (usually using reference to other similar decisions from the past). After
understanding the context of the decision, the manager has to try and forecast which potential biases
may affect the decision and the decision maker in this context (past experience may help here).
This stage requires the manager to have a fundamental understanding of biases and how they affect
human decisions. If he does not have sufficient knowledge he may want to refer to experts who
can identify biased behaviors. In the last stage, the manager has to identify the appropriate
adjustment measures for that decision and adjust his final decision according to them. The
adjustment measures will be determined by the context of the decision and by the degree to which
the other decision was biased. One common adjustment measure is reducing the reliability on the
other decision and using it solely for reinforcement or questioning of the final decision (thus
refraining from using the other decision as the decisive judgment for the final decision). This
method can be used to evaluate and adjust decisions made by others as well as our own. However,
applying this technique requires managers to have fair knowledge and understanding of biased
behaviors.
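As a purely illustrative sketch of this idea, the adjustment phase can be pictured as a small routine that lowers reliance on another person’s recommendation in proportion to the biases forecast for its context. The bias labels and discount factors below are my own assumptions, not values given by Tversky and Kahneman:

```python
# Hypothetical discount per suspected bias; these numbers are
# illustrative assumptions, not empirical values.
BIAS_DISCOUNTS = {
    "overconfidence": 0.25,
    "availability": 0.25,
    "sunk_costs": 0.125,
}

def adjusted_reliance(suspected_biases):
    """Given the biases forecast for the decision context (phase two),
    reduce reliance on the other person's recommendation (phase three)."""
    weight = 1.0
    for bias in suspected_biases:
        weight -= BIAS_DISCOUNTS.get(bias, 0.0)
    return max(weight, 0.0)  # floor at "ignore the recommendation"

# A recommendation suspected of both overconfidence and availability
# bias would be relied on at half strength:
print(adjusted_reliance(["overconfidence", "availability"]))  # 0.5
```

The mechanism mirrors the common adjustment measure described above: the more biased the other decision is judged to be, the more it is demoted from decisive judgment to mere reinforcement.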
4.2 Why improving the decision making process is important
The importance of this matter may be somewhat self-evident, yet managers still fail to
acknowledge it. Managerial decisions shape important outcomes in companies. Therefore, if
managers had a better understanding of how to improve those outcomes, companies would do
better and all the stakeholders would benefit. Errors caused by biased judgment lead to bad
decisions. Given the massive cost (not only monetary) that could be involved in these bad
decisions, it is crucial to increase managerial knowledge regarding strategies that can lead to better
decisions. Moreover, over time, errors will become even costlier for two reasons: first,
the economy nowadays has shifted to a dependence on knowledge, where a manager’s primary
deliverable is a good decision; and second, with globalization, each biased decision is
likely to affect a greater part of society (Milkman et al., 2009).
Summary
Throughout this paper I have tried to demonstrate how our minds fool us while engaging in
the everyday activity of decision making. If I were to draw one main conclusion to take away from
this research, it would be the notion that we are all players in a game mastered by our brains. We
usually consider ourselves captains of our ships (our brains, in this case), navigating our way
through the decisions we make in life. We believe we are in full control of our decisions, but we
fail to accept that, regardless of how smart we are, we all fall prey to the same traps our minds
set for us.
Admitting to this is not an easy task. We all want to believe that we are fully capable of
making the best decisions out of the possibilities that lay before us. However, we cannot ignore
what behavioral economics studies have proven to us time and time again – human beings are
irrational creatures who are susceptible to cognitive failures that are embedded in their nature. With
this somewhat depressing understanding, we can either choose to deny it and believe that we are
all homo economicus (supermen of rationality, fully capable of making perfectly rational choices
for ourselves) or accept it and search for ways to improve ourselves where we fall short.
If we do adhere to the notion that our rationality is flawed, we can start looking for ways to
improve ourselves and overcome the cognitive failures that lead us astray. An optimistic but naïve
view would be to believe that after we understand the behavioral economics concepts, ideas and
recommendations, we would be instantly capable of making perfect decisions. A more realistic and
correct approach would be to use these new insights and start an “unfreezing” process in our heads.
Once we unfreeze ourselves from the notion that our decision making process does not require
improvement, we can start making changes where the process is damaged. However, change is a
long and difficult process that requires us, as with most things in life, to keep practicing in order
to get better.
Albert Einstein said “we cannot solve our problems with the same thinking we used when
we created them”. According to standard economics, we are automatically capable of fixing
problems by using the same tools, behavior and information that created them in the first place.
Standard economics assumes that all relevant information and knowledge exists in the marketplace
and is accessible to us at all times. If this is true, how is it possible to find solutions to our problems
if we keep using the same information and behavior that created them?
Behavioral economics acknowledges the fact that we make mistakes in our decisions but
challenges us to take a different approach while trying to fix them. If we are able to understand
where and when we are likely to make bad decisions, we can try to be more attentive and force
ourselves to think differently in these situations. Moreover, if we succeed in turning our mistakes
into opportunities, we will all be able to enjoy “free lunches”.
Finally, I wish to address the importance of these conclusions from a managerial
perspective. The decisions an individual makes have an impact on his life (and perhaps the lives
of those who surround him). However, the decisions a manager makes have an impact on his life
and on the future of an entire company and its stakeholders as well. The great influence the
decision of a single person has on a company emphasizes how important it is for managers to put
effort into improving
the managerial decision making process. Engaging in constant improvement of the managerial
decision making process has proven to have significant practical value for companies.
Organizations and managers that understand this have the opportunity to increase their
performance and may gain a competitive advantage over companies whose managers fail to do so.
This may be expressed in higher revenues, better company culture, attraction of quality personnel
and enhanced overall development.
Bibliography
ARIELY, D. 2009. Predictably irrational, revised and expanded edition: The hidden forces that
shape our decisions, HarperCollins.
BARBER, B. M. & ODEAN, T. 2001. Boys will be boys: Gender, overconfidence, and common
stock investment. The Quarterly Journal of Economics, 116, 261-292.
BAZERMAN, M. H. & MOORE, D. A. 2008. Judgment in managerial decision making.
CARPENTER, M., BAUER, T. & ERDOGAN, B. 2009. Principles of management, Flat World
Knowledge.
CHABRIS, C. F. & SIMONS, D. J. 2011. The invisible gorilla: And other ways our intuitions
deceive us, Random House Digital, Inc.
CHAPIN, J. & COLEMAN, G. 2009. Optimistic bias: What you think, what you know or who you
know? North American Journal of Psychology, 11, 121-132.
CHUGH, D. 2004. Societal and managerial implications of implicit social cognition: Why
milliseconds matter. Social Justice Research, 17, 203-222.
DAWES, R. M. 1971. A case study of graduate admissions: Application of three principles of
human decision making. American Psychologist, 26, 180.
DAWES, R. M. & CORRIGAN, B. 1974. Linear models in decision making. Psychological
bulletin, 81, 95.
EDWARDS, W. 1968. Conservatism in human information processing. Formal representation of
human judgment, 17-52.
EPLEY, N. 2004. A tale of tuned decks? Anchoring as accessibility and anchoring as adjustment.
The Blackwell handbook of judgment and decision making, 240-256.
FAMA, E. F. 1995. Random walks in stock market prices. Financial Analysts Journal, 75-80.
FISCHHOFF, B. 1975. Hindsight is not equal to foresight: The effect of outcome knowledge on
judgment under uncertainty. Journal of Experimental Psychology: Human perception and
performance, 1, 288.
FISCHHOFF, B. 1977. Cognitive liabilities and product liability. Journal of Products Liability, 1,
207-219.
FISCHHOFF, B. 1981. Debiasing. In: KAHNEMAN, D., SLOVIC, P. & TVERSKY, A. (eds.)
Judgment under uncertainty: Heuristics and biases. New York: Cambridge University
Press.
FRIEDMAN, M. 1975. There's no such thing as a free lunch, Open Court La Salle, IL.
GIGERENZER, G., HOFFRAGE, U. & KLEINBÖLTING, H. 1991. Probabilistic mental models:
a Brunswikian theory of confidence. Psychological review, 98, 506.
HAMMOND, J. S., KEENEY, R. L. & RAIFFA, H. 1999. Smart Choices: A Practical Guide to
Making Better Decisions. Medical decision making, 19, 364-365.
KAHNEMAN, D. 2003. A perspective on judgment and choice: mapping bounded rationality.
American psychologist, 58, 697.
KAHNEMAN, D. & LOVALLO, D. 1993. Timid choices and bold forecasts: A cognitive
perspective on risk taking. Management science, 39, 17-31.
KAHNEMAN, D. & TVERSKY, A. 1979. Prospect theory: An analysis of decision under risk.
Econometrica: Journal of the Econometric Society, 263-291.
KAHNEMAN, D. & TVERSKY, A. 1982. The simulation heuristic. In: KAHNEMAN, D.,
SLOVIC, P. & TVERSKY, A. (eds.) Judgment under uncertainty: Heuristics and biases.
New York: Cambridge University Press.
LAMBERT, C. 2006. The marketplace of perceptions. Harvard Magazine, 108, 50.
LARRICK, R. P. 2004. Debiasing. In: KOEHLER, D. J. & HARVEY, N. (eds.) Blackwell
Handbook of Judgment and Decision Making. Oxford, England: Blackwell.
LEVITT, S. D. & LIST, J. A. 2007. What do laboratory experiments measuring social preferences
reveal about the real world? The Journal of Economic Perspectives, 21, 153-174.
LEWIN, K. 1947. Group decision and social change. In: NEWCOMB, T. M. & HARTLEY, E. L.
(eds.) Readings in social psychology. New York: Holt, Rinehart and Winston.
LOVALLO, D. & SIBONY, O. 2010. The case for behavioral strategy. McKinsey Quarterly, 30-
43.
MAZZONI, G. & VANNUCCI, M. 2007. Hindsight bias, the misinformation effect, and false
autobiographical memories. Social cognition, 25, 203-220.
MILKMAN, K. L., CHUGH, D. & BAZERMAN, M. H. 2009. How can decision making be
improved? Perspectives on Psychological Science, 4, 379-383.
MULLAINATHAN, S. & THALER, R. H. 2000. Behavioral economics. National Bureau of
Economic Research.
MUSSWEILER, T. & STRACK, F. 1999. Hypothesis-consistent testing and semantic priming in
the anchoring paradigm: A selective accessibility model. Journal of Experimental Social
Psychology, 35, 136-164.
NUTT, P. C. 2002. Why decisions fail: Avoiding the blunders and traps that lead to debacles,
Berrett-Koehler Store.
RABIN, M. 1998. Psychology and economics. Journal of economic literature, 36, 11-46.
RUSSO, J. E. & SCHOEMAKER, P. J. 1992. Managing overconfidence. Sloan Management
Review, 33, 7-17.
SHEPARD, R. N. 1990. Mind sights: Original visual illusions, ambiguities, and other anomalies,
with a commentary on the play of mind in perception and art, WH Freeman/Times
Books/Henry Holt & Co.
SIMON, H. A. 1957. Models of man; social and rational.
SIMONSOHN, U. & LOEWENSTEIN, G. 2006. Mistake #37: The effect of previously
encountered prices on current housing demand. The Economic Journal, 116, 175-199.
STANOVICH, K. E. & WEST, R. F. 2000. Individual differences in reasoning: Implications for
the rationality debate? Behavioral and brain sciences, 23, 645-665.
THALER, R. 1980. Toward a positive theory of consumer choice. Journal of Economic Behavior
& Organization, 1, 39-60.
THALER, R. 1985. Mental accounting and consumer choice. Marketing science, 4, 199-214.
THALER, R. H. 2000. From homo economicus to homo sapiens. The Journal of Economic
Perspectives, 14, 133-141.
THALER, R. H. 2004. Mental accounting matters, Russell Sage Foundation. Princeton, NJ:
Princeton University Press.
TVERSKY, A. & KAHNEMAN, D. 1973. Availability: A heuristic for judging frequency and
probability. Cognitive psychology, 5, 207-232.
TVERSKY, A. & KAHNEMAN, D. 1974. Judgment under uncertainty: Heuristics and biases.
Science, 185, 1124-1131.
TVERSKY, A. & KAHNEMAN, D. 1983. Extensional versus intuitive reasoning: The conjunction
fallacy in probability judgment. Psychological review, 90, 293.
TVERSKY, A. & KAHNEMAN, D. 1986. Rational choice and the framing of decisions. Journal
of business, S251-S278.
TVERSKY, A. & KAHNEMAN, D. 1981. The framing of decisions and the psychology of
choice. Science, 211, 453-458.