Author:: [[Shortform]]
DateFinished:: 3/15/2023
Rating:: 9
Tags:: #📚

# Thinking, Fast and Slow by Daniel Kahneman

## 🚀 The Book in 3 Sentences
-
### 🎨 Impressions
-
### 👤 Who Should Read It?
-
### ☘️ How the Book Changed Me
-
### ✍️ My Top 3 Quotes
-
# Summary
## Highlights
An individual has been described by a neighbor as follows: "Steve is very shy and withdrawn, invariably helpful but with little interest in people or in the world of reality. A meek and tidy soul, he has a need for order and structure, and a passion for detail." Is Steve more likely to be a librarian or a farmer? The resemblance of Steve's personality to that of a stereotypical librarian strikes everyone immediately, but equally relevant statistical considerations are almost always ignored. Did it occur to you that there are more than 20 male farmers for each male librarian in the United States? Because there are so many more farmers, it is almost certain that more "meek and tidy" souls will be found on tractors than at library information desks. However, we found that participants in our experiments ignored the relevant statistical facts and relied exclusively on resemblance. We proposed that they used resemblance as a simplifying heuristic (roughly, a rule of thumb) to make a difficult judgment. The reliance on the heuristic caused predictable biases (systematic errors) in their predictions.
**Note:** We use resemblance as a way of simplifying judgments. Despite the fact that there are 20 male farmers for every male librarian, people said Steve would be a librarian because he resembled one.
**Tags:** pink
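The base-rate logic behind the Steve problem can be sketched with Bayes' rule. The 20:1 farmer-to-librarian ratio comes from the highlight above; the two likelihoods are made-up numbers for illustration only:

```python
# Base rates vs. resemblance for the "Steve" problem.
p_meek_given_librarian = 0.10  # assumed: 10% of librarians fit the description
p_meek_given_farmer = 0.01     # assumed: only 1% of farmers do
librarians, farmers = 1, 20    # base rate from the book: 20 farmers per librarian

# Expected number of "meek and tidy" people in each group
meek_librarians = librarians * p_meek_given_librarian  # 0.1
meek_farmers = farmers * p_meek_given_farmer           # 0.2

# Bayes' rule: P(librarian | meek and tidy)
p_librarian = meek_librarians / (meek_librarians + meek_farmers)
print(round(p_librarian, 2))  # 0.33
```

Even with resemblance evidence ten times stronger for librarians, the base rate still makes Steve more likely to be a farmer.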
People tend to assess the relative importance of issues by the ease with which they are retrieved from memory, and this is largely determined by the extent of coverage in the media. Frequently mentioned topics populate the mind even as others slip away from awareness. In turn, what the media choose to report corresponds to their view of what is currently on the public's mind. It is no accident that authoritarian regimes exert substantial pressure on independent media. Because public interest is most easily aroused by dramatic events and by celebrities, media feeding frenzies are common.
We have all heard such stories of expert intuition: the chess master who walks past a street game and announces "White mates in three" without stopping, or the physician who makes a complex diagnosis after a single glance at a patient. Expert intuition strikes us as magical, but it is not. Indeed, each of us performs feats of intuitive expertise many times each day. Most of us are pitch-perfect in detecting anger in the first word of a telephone call, recognize as we enter a room that we were the subject of the conversation, and quickly react to subtle signs that the driver of the car in the next lane is dangerous. Our everyday intuitive abilities are no less marvelous than the striking insights of an experienced firefighter or physician, only more common.
The psychology of accurate intuition involves no magic. Perhaps the best short statement of it is by the great Herbert Simon, who studied chess masters and showed that after thousands of hours of practice they come to see the pieces on the board differently from the rest of us. You can feel Simon's impatience with the mythologizing of expert intuition when he writes: "The situation has provided a cue; this cue has given the expert access to information stored in memory, and the information provides the answer. Intuition is nothing more and nothing less than recognition."
This is the essence of intuitive heuristics: when faced with a difficult question, we often answer an easier one instead, usually without noticing the substitution.
## Part 1 Two Systems
#### 1 The Characters of the Story
System 2 allocates attention to the effortful mental activities that demand it, including complex computations. The operations of System 2 are often associated with the subjective experience of agency, choice, and concentration.
In rough order of complexity, here are some examples of the automatic activities that are attributed to System 1:
- Detect that one object is more distant than another.
- Orient to the source of a sudden sound.
- Complete the phrase "bread and...".
- Make a "disgust face" when shown a horrible picture.
- Detect hostility in a voice.
- Answer to 2 + 2 = ?
- Read words on large billboards.
- Drive a car on an empty road.
- Find a strong move in chess (if you are a chess master).
- Understand simple sentences.
- Recognize that a "meek and tidy soul with a passion for detail" resembles an occupational stereotype.
The highly diverse operations of System 2 have one feature in common: they require attention and are disrupted when attention is drawn away. Here are some examples:
- Brace for the starter gun in a race.
- Focus attention on the clowns in the circus.
- Focus on the voice of a particular person in a crowded and noisy room.
- Look for a woman with white hair.
- Search memory to identify a surprising sound.
- Maintain a faster walking speed than is natural for you.
- Monitor the appropriateness of your behavior in a social situation.
- Count the occurrences of the letter a in a page of text.
- Tell someone your phone number.
- Park in a narrow space (for most people except garage attendants).
- Compare two washing machines for overall value.
- Fill out a tax form.
- Check the validity of a complex logical argument.
The often-used phrase "pay attention" is apt: you dispose of a limited budget of attention that you can allocate to activities, and if you try to go beyond your budget, you will fail.
**Note:** The brain can't multitask. It switches between different tasks at incredibly fast rates.
Intense focusing on a task can make people effectively blind, even to stimuli that normally attract attention. The most dramatic demonstration was offered by Christopher Chabris and Daniel Simons in their book The Invisible Gorilla
The gorilla study illustrates two important facts about our minds: we can be blind to the obvious, and we are also blind to our blindness.
System 2 is activated when an event is detected that violates the model of the world that System 1 maintains.
The division of labor between System 1 and System 2 is highly efficient: it minimizes effort and optimizes performance. The arrangement works well most of the time because System 1 is generally very good at what it does: its models of familiar situations are accurate, its short-term predictions are usually accurate as well, and its initial reactions to challenges are swift and generally appropriate. System 1 has biases, however, systematic errors that it is prone to make in specified circumstances. As we shall see, it sometimes answers easier questions than the one it was asked, and it has little understanding of logic and statistics. One further limitation of System 1 is that it cannot be turned off. If you are shown a word on the screen in a language you know, you will read it, unless your attention is totally focused elsewhere.
The question that is most often asked about cognitive illusions is whether they can be overcome. The message of these examples is not encouraging. Because System 1 operates automatically and cannot be turned off at will, errors of intuitive thought are often difficult to prevent. ... Even when cues to likely errors are available, errors can be prevented only by the enhanced monitoring and effortful activity of System 2. As a way to live your life, however, continuous vigilance is not necessarily good, and it is certainly impractical.
Constantly questioning our own thinking would be impossibly tedious, and System 2 is much too slow and inefficient to serve as a substitute for System 1 in making routine decisions. The best we can do is a compromise: learn to recognize situations in which mistakes are likely and try harder to avoid significant mistakes when the stakes are high. The premise of this book is that it is easier to recognize other people's mistakes than our own.
#### 2 Attention and Effort
the pupils are sensitive indicators of mental effort: they dilate substantially when people multiply two-digit numbers, and they dilate more if the problems are hard than if they are easy.
mental life (today I would speak of the life of System 2) is normally conducted at the pace of a comfortable walk, sometimes interrupted by episodes of jogging and on rare occasions by a frantic sprint. The Add-1 and Add-3 exercises are sprints, and casual chatting is a stroll. We found that people, when engaged in a mental sprint, may become effectively blind. The authors of The Invisible Gorilla had made the gorilla "invisible" by keeping the observers intensely busy counting passes.
As you become skilled in a task, its demand for energy diminishes. Studies of the brain have shown that the pattern of activity associated with an action changes as skill increases, with fewer brain regions involved. Talent has similar effects. Highly intelligent individuals need less effort to solve the same problems, as indicated by both pupil size and brain activity. A general "law of least effort" applies to cognitive as well as physical exertion. The law asserts that if there are several ways of achieving the same goal, people will eventually gravitate to the least demanding course of action. In the economy of action, effort is a cost, and the acquisition of skill is driven by the balance of benefits and costs. Laziness is built deep into our nature.
**Note:** We will always revert to the easiest possible way of doing a task
What makes some cognitive operations more demanding and effortful than others? What outcomes must we purchase in the currency of attention? What can System 2 do that System 1 cannot? We now have tentative answers to these questions.
**Note:** System 2 is capable of comparing distinct things, implementing rules, and holding in memory multiple things at once. System 1 is better for making implicit relations.
Effort increases the more a task requires the System 2 capabilities mentioned above. Time pressure forces more effort as well, as does switching between tasks. Finally, having to hold many things in working memory requires effort.
A crucial capability of System 2 is the adoption of "task sets": it can program memory to obey an instruction that overrides habitual responses.
**Note:** This is the essence of building virtue. Virtue is simply the habit of good action. Most of the time, our System 1 responses will cause us to behave with vice. It's the habit of good action that lets us override this.
One of the significant discoveries of cognitive psychologists in recent decades is that switching from one task to another is effortful, especially under time pressure.
#### 3 The Lazy Controller
Accelerating beyond my strolling speed completely changes the experience of walking, because the transition to a faster walk brings about a sharp deterioration in my ability to think coherently. As I speed up, my attention is drawn with increasing frequency to the experience of walking and to the deliberate maintenance of the faster pace. My ability to bring a train of thought to a conclusion is impaired accordingly. At the highest speed I can sustain on the hills, about 14 minutes for a mile, I do not even try to think of anything else. In addition to the physical effort of moving my body rapidly along the path, a mental effort of self-control is needed to resist the urge to slow down. Self-control and deliberate thought apparently draw on the same limited budget of effort.
**Note:** Self control and deliberate thought draw on the same limited budget of effort.
Riding a motorcycle at 150 miles an hour and playing a competitive game of chess are certainly very effortful. In a state of flow, however, maintaining focused attention on these absorbing activities requires no exertion of self-control, thereby freeing resources to be directed to the task at hand.
**Note:** Both self control and cognitive effort are forms of mental work.
However, to enter flow, one must lose consciousness of the self and become completely immersed in the activity. In other words, one must not need to exert self-control frequently during the activity.
It is now a well-established proposition that both self-control and cognitive effort are forms of mental work. Several psychological studies have shown that people who are simultaneously challenged by a demanding cognitive task and by a temptation are more likely to yield to the temptation. Imagine that you are asked to retain a list of seven digits for a minute or two. You are told that remembering the digits is your top priority. While your attention is focused on the digits, you are offered a choice between two desserts: a sinful chocolate cake and a virtuous fruit salad. The evidence suggests that you would be more likely to select the tempting chocolate cake when your mind is loaded with digits. System 1 has more influence on behavior when System 2 is busy, and it has a sweet tooth.
People who are cognitively busy are also more likely to make selfish choices, use sexist language, and make superficial judgments in social situations.
Too much concern about how well one is doing in a task sometimes disrupts performance by loading short-term memory with pointless anxious thoughts. The conclusion is straightforward: self-control requires attention and effort.
Baumeister's group has repeatedly found that an effort of will or self-control is tiring; if you have had to force yourself to do something, you are less willing or less able to exert self-control when the next challenge comes around. The phenomenon has been named ego depletion.
**Note:** Future Aidan: research whether ego depletion is real.
The evidence is persuasive: activities that impose high demands on System 2 require self-control, and the exertion of self-control is depleting and unpleasant. Unlike cognitive load, ego depletion is at least in part a loss of motivation.
A disturbing demonstration of depletion effects in judgment was recently reported in the Proceedings of the National Academy of Sciences. The unwitting participants in the study were eight parole judges in Israel.
**Tags:** pink
The proportion spikes after each meal, when about 65% of requests are granted. During the two hours or so until the judges' next feeding, the approval rate drops steadily, to about zero just before the meal.
**Tags:** pink
tired and hungry judges tend to fall back on the easier default position of denying requests for parole. Both fatigue and hunger probably play a role.
**Note:** More of the judges' consciousness was filled with feelings of hunger, leading them to make less empathetic decisions.
**Tags:** pink
A bat and ball cost $1.10. The bat costs one dollar more than the ball. How much does the ball cost? A number came to your mind. The number, of course, is 10: 10¢. The distinctive mark of this easy puzzle is that it evokes an answer that is intuitive, appealing, and wrong. Do the math, and you will see. If the ball costs 10¢, then the total cost will be $1.20 (10¢ for the ball and $1.10 for the bat), not $1.10. The correct answer is 5¢.
**Note:** An example of how system 1 can cause us to answer overconfidently
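The puzzle is just two linear equations, and a few lines of arithmetic expose why the intuitive answer fails (a sketch; the prices are from the highlight above):

```python
# Bat-and-ball puzzle: bat + ball = 1.10 and bat = ball + 1.00.
# Substituting gives 2 * ball + 1.00 = 1.10.
ball = (1.10 - 1.00) / 2
bat = ball + 1.00
print(round(ball, 2), round(bat, 2))  # 0.05 1.05

# The intuitive answer fails the first equation: a 10-cent ball plus a
# bat costing a dollar more sums to 1.20, not 1.10.
intuitive_total = 0.10 + (0.10 + 1.00)
print(round(intuitive_total, 2))  # 1.2
```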
many people are overconfident, prone to place too much faith in their intuitions. They apparently find cognitive effort at least mildly unpleasant and avoid it as much as possible.
Intelligence is not only the ability to reason; it is also the ability to find relevant material in memory and to deploy attention when needed.
**Note:** This is why you can improve your intelligence over time.
You can ingrain relevant material in memory and train your attentional faculties. However, your raw information-processing power will likely not change.
In one of the most famous experiments in the history of psychology, Walter Mischel and his students exposed four-year-old children to a cruel dilemma. They were given a choice between a small reward (one Oreo), which they could have at any time, or a larger reward (two cookies) for which they had to wait 15 minutes under difficult conditions. They were to remain alone in a room, facing a desk with two objects: a single cookie and a bell that the child could ring at any time to call in the experimenter and receive the one cookie. As the experiment was described: "There were no toys, books, pictures, or other potentially distracting items in the room. The experimenter left the room and did not return until 15 min had passed or the child had rung the bell, eaten the rewards, stood up, or shown any signs of distress." The children were watched through a one-way mirror, and the film that shows their behavior during the waiting time always has the audience roaring in laughter. About half the children managed the feat of waiting for 15 minutes, mainly by keeping their attention away from the tempting reward. Ten or fifteen years later, a large gap had opened between those who had resisted temptation and those who had not. The resisters had higher measures of executive control in cognitive tasks, and especially the ability to reallocate their attention effectively. As young adults, they were less likely to take drugs. A significant difference in intellectual aptitude emerged: the children who had shown more self-control as four-year-olds had substantially higher scores on tests of intelligence.
**Note:** Children with higher levels of control are more likely to grow up with fewer drug problems and to score higher on intelligence tests.
**Tags:** pink
characters of our psychodrama have different "personalities." System 1 is impulsive and intuitive; System 2 is capable of reasoning, and it is cautious, but at least for some people it is also lazy.
some people are more like their System 2; others are closer to their System 1.
#### 4 The Associative Machine
associative activation: ideas that have been evoked trigger many other ideas, in a spreading cascade of activity in your brain. The essential feature of this complex set of mental events is its coherence. Each element is connected, and each supports and strengthens the others. The word evokes memories, which evoke emotions, which in turn evoke facial expressions and other reactions, such as a general tensing up and an avoidance tendency. The facial expression and the avoidance motion intensify the feelings to which they are linked, and the feelings in turn reinforce compatible ideas. All this happens quickly and all at once, yielding a self-reinforcing pattern of cognitive, emotional, and physical responses that is both diverse and integrated; it has been called associatively coherent.
**Note:** Holy shit! This describes the thought patterns that come about when I get mad at myself for thinking about food or having to do chem. I try to be aware of the present moment and just let them go, but the thought of them triggers the same thought patterns that always happen. Why are you so basic? Should you care more about your friends? Be productive? Are you doing the right thing with your time? This is it…
I will adopt an expansive view of what an idea is. It can be concrete or abstract, and it can be expressed in many ways: as a verb, as a noun, as an adjective, or as a clenched fist. Psychologists think of ideas as nodes in a vast network, called associative memory, in which each idea is linked to many others. There are different types of links: causes are linked to their effects (virus → cold); things to their properties (lime → green); things to the categories to which they belong (banana → fruit).
One way we have advanced beyond Hume is that we no longer think of the mind as going through a sequence of conscious ideas, one at a time. In the current view of how associative memory works, a great deal happens at once. An idea that has been activated does not merely evoke one other idea. It activates many ideas, which in turn activate others. Furthermore, only a few of the activated ideas will register in consciousness; most of the work of associative thinking is silent, hidden from our conscious selves. The notion that we have limited access to the workings of our minds is difficult to accept because, naturally, it is alien to our experience, but it is true: you know far less about yourself than you feel you do.
"What is the first word that comes to your mind when you hear the word DAY?" The researchers tallied the frequency of responses, such as "night," "sunny," or "long." In the 1980s, psychologists discovered that exposure to a word causes immediate and measurable changes in the ease with which many related words can be evoked. If you have recently seen or heard the word EAT, you are temporarily more likely to complete the word fragment SO_P as SOUP than as SOAP. The opposite would happen, of course, if you had just seen WASH. We call this a priming effect and say that the idea of EAT primes the idea of SOUP, and that WASH primes SOAP.
**Note:** Priming works by exposing your brain to an idea. You will then be more likely to bring to mind things associated with that idea in the moments that follow.
Priming effects take many forms. If the idea of EAT is currently on your mind (whether or not you are conscious of it), you will be quicker than usual to recognize the word SOUP when it is spoken in a whisper or presented in a blurry font.
**Note:** Cultures prime us to think in certain ways. There is probably evidence that Eastern cultures tend to think more of others than Western ones do.
Another major advance in our understanding of memory was the discovery that priming is not restricted to concepts and words. You cannot know this from conscious experience, of course, but you must accept the alien idea that your actions and your emotions can be primed by events of which you are not even aware. In an experiment that became an instant classic, the psychologist John Bargh and his collaborators asked students at New York University (most aged eighteen to twenty-two) to assemble four-word sentences from a set of five words (for example, "finds he it yellow instantly"). For one group of students, half the scrambled sentences contained words associated with the elderly, such as Florida, forgetful, bald, gray, or wrinkle. When they had completed that task, the young participants were sent out to do another experiment in an office down the hall. That short walk was what the experiment was about. The researchers unobtrusively measured the time it took people to get from one end of the corridor to the other. As Bargh had predicted, the young people who had fashioned a sentence from words with an elderly theme walked down the hallway significantly more slowly than the others.
**Note:** Priming is not restricted to words and concepts. The students who primed their brains with words associated with the elderly walked to the end of the hall more slowly than those who did not.
**Tags:** pink
This remarkable priming phenomenon, the influencing of an action by the idea, is known as the ideomotor effect.
**Note:** Walking slowly after hearing words connoting old age, for example.
The ideomotor link also works in reverse.
**Note:** For example, nodding makes us more likely to accept something, and shaking our head makes us more likely to reject it.
Reciprocal links are common in the associative network. For example, being amused tends to make you smile, and smiling tends to make you feel amused. Go ahead and take a pencil, and hold it between your teeth for a few seconds with the eraser pointing to your right and the point to your left. Now hold the pencil so the point is aimed straight in front of you, by pursing your lips around the eraser end. You were probably unaware that one of these actions forced your face into a frown and the other into a smile. College students were asked to rate the humor of cartoons from Gary Larson's The Far Side while holding a pencil in their mouth. Those who were "smiling" (without any awareness of doing so) found the cartoons funnier than did those who were "frowning." In another experiment, people whose face was shaped into a frown (by squeezing their eyebrows together) reported an enhanced emotional response to upsetting pictures: starving children, people arguing, maimed accident victims.
**Note:** The association effect in action. When we are smiling, we tend to react more happily, because smiling is associated with happiness. The students who were frowning were more strongly affected by the sad content of the pictures.
**Tags:** pink
Simple, common gestures can also unconsciously influence our thoughts and feelings. In one demonstration, people were asked to listen to messages through new headphones. They were told that the purpose of the experiment was to test the quality of the audio equipment and were instructed to move their heads repeatedly to check for any distortions of sound. Half the participants were told to nod their head up and down while others were told to shake it side to side. The messages they heard were radio editorials. Those who nodded (a yes gesture) tended to accept the message they heard, but those who shook their head tended to reject it. Again, there was no awareness, just a habitual connection between an attitude of rejection or acceptance and its common physical expression. You can see why the common admonition to "act calm and kind regardless of how you feel" is very good advice: you are likely to be rewarded by actually feeling calm and kind.
**Note:** Act calm and kind regardless of how you feel and in most circumstances it will cause you to feel calm and kind.
**Tags:** pink
In another experiment in the series, participants were told that they would shortly have a get-acquainted conversation with another person and were asked to set up two chairs while the experimenter left to retrieve that person. Participants primed by money chose to stay much farther apart than their nonprimed peers (118 vs. 80 centimeters). Money-primed undergraduates also showed a greater preference for being alone. The general theme of these findings is that the idea of money primes individualism: a reluctance to be involved with others, to depend on others, or to accept demands from others.
**Note:** Money-primed people try to act more independent. The people primed by money sat farther away from their fellow student than those that didn't.
**Tags:** pink
the idea of money primes individualism: a reluctance to be involved with others, to depend on others, or to accept demands from others.
#### 5 Cognitive Ease
cognitive ease, and its range is between "Easy" and "Strained." Easy is a sign that things are going well: no threats, no major news, no need to redirect attention or mobilize effort. Strained indicates that a problem exists, which will require increased mobilization of System 2. Cognitive strain is affected by both the current level of effort and the presence of unmet demands.
A reliable way to make people believe in falsehoods is frequent repetition, because familiarity is not easily distinguished from truth.
**Note:** The exposure effect.
However, the cognitive ease with which something is processed can also heighten belief. Sentences that rhyme, names that are easy to pronounce, and words that are easy to read on the page all heighten belief.
Suppose you must write a message that you want the recipients to believe. Of course, your message will be true, but that is not necessarily enough for people to believe that it is true. It is entirely legitimate for you to enlist cognitive ease to work in your favor, and studies of truth illusions provide specific suggestions that may help you achieve this goal. The general principle is that anything you can do to reduce cognitive strain will help, so you should first maximize ease of understanding.
**Note:** To write persuasively, anything that makes the message easier for your audience to understand will usually help.
If it takes 5 machines 5 minutes to make 5 widgets, how long would it take 100 machines to make 100 widgets? 100 minutes OR 5 minutes
In a lake, there is a patch of lily pads. Every day, the patch doubles in size. If it takes 48 days for the patch to cover the entire lake, how long would it take for the patch to cover half of the lake? 24 days OR 47 days
The correct answers to both problems are in a footnote at the bottom of the page.* The experimenters recruited 40 Princeton students to take the CRT. Half of them saw the puzzles in a small font in washed-out gray print. The puzzles were legible, but the font induced cognitive strain. The results tell a clear story: 90% of the students who saw the CRT in normal font made at least one mistake in the test, but the proportion dropped to 35% when the font was barely legible. You read this correctly: performance was better with the bad font. Cognitive strain, whatever its source, mobilizes System 2, which is more likely to reject the intuitive answer suggested by System 1.
**Note:** Cognitive strain, whatever its source, mobilizes System 2. Only 35% of the students who read these problems in a barely legible font made a mistake, compared to 90% of those who read them in a normal font. The answers are 5 minutes and 47 days.
**Tags:** pink
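Both CRT answers can be checked mechanically (a sketch using only the numbers given in the highlight):

```python
# Machines: 5 machines make 5 widgets in 5 minutes, so each machine
# makes one widget every 5 minutes.
widgets_per_machine_minute = 5 / (5 * 5)           # 0.2
minutes_for_100 = 100 / (100 * widgets_per_machine_minute)
print(minutes_for_100)  # 5.0

# Lily pads: the patch doubles daily and covers the lake on day 48,
# so halve backwards from full coverage to find the half-coverage day.
coverage, day = 1.0, 48
while coverage > 0.5:
    coverage /= 2
    day -= 1
print(day)  # 47
```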
the effect of repetition on liking is a profoundly important biological fact, and that it extends to all animals. To survive in a frequently dangerous world, an organism should react cautiously to a novel stimulus, with withdrawal and fear.
**Note:** We favor messages that we can consume with cognitive ease for an evolutionary reason.
Environments that aren't novel are more likely to be safe.
Mood evidently affects the operation of System 1: when we are uncomfortable and unhappy, we lose touch with our intuition.
A happy mood loosens the control of System 2 over performance: when in a good mood, people become more intuitive and more creative but also less vigilant and more prone to logical errors.
#### 6 Norms, Surprises, and Causes
A capacity for surprise is an essential aspect of our mental life, and surprise itself is the most sensitive indication of how we understand our world and what we expect from it. There are two main varieties of surprise. Some expectations are active and conscious: you know you are waiting for a particular event to happen. When the hour is near, you may be expecting the sound of the door as your child returns from school; when the door opens you expect the sound of a familiar voice. You will be surprised if an actively expected event does not occur. But there is a much larger category of events that you expect passively; you don't wait for them, but you are not surprised when they happen. These are events that are normal in a situation, though not sufficiently probable to be actively expected.
**Note:** The two forms of surprise are active and passive. Active surprises involve events that you are actively ready for or waiting to happen. Passive surprises are events that you don't wait for but are not surprised by when they happen. Passive expectations can become active ones simply by occurring: even though the events are still very unlikely, we will not be surprised when they happen again.
Finding such causal connections is part of understanding a story and is an automatic operation of System 1. System 2, your conscious self, was offered the causal interpretation and accepted it.
**Note:** System 1 is primed toward causal thinking: it finds cause and effect between things that might be only correlational in nature.
#### 7 A Machine for Jumping to Conclusions
Jumping to conclusions is efficient if the conclusions are likely to be correct and the costs of an occasional mistake acceptable, and if the jump saves much time and effort. Jumping to conclusions is risky when the situation is unfamiliar, the stakes are high, and there is no time to collect more information.
**Note:** This reminds me of Signal Detection Theory: when the stakes and the signal are high you should respond, but when the stakes are low and the noise is high you shouldn't.
System 1 does not keep track of alternatives that it rejects, or even of the fact that there were alternatives. Conscious doubt is not in the repertoire of System 1; it requires maintaining incompatible interpretations in mind at the same time, which demands mental effort. Uncertainty and doubt are the domain of System 2.
Gilbert sees unbelieving as an operation of System 2, and he reported an elegant experiment to make his point. The participants saw nonsensical assertions, such as "a dinca is a flame," followed after a few seconds by a single word, "true" or "false." They were later tested for their memory of which sentences had been labeled "true." In one condition of the experiment subjects were required to hold digits in memory during the task. The disruption of System 2 had a selective effect: it made it difficult for people to "unbelieve" false sentences. In a later test of memory, the depleted participants ended up thinking that many of the false sentences were true. The moral is significant: when System 2 is otherwise engaged, we will believe almost anything. System 1 is gullible and biased to believe, System 2 is in charge of doubting and unbelieving, but System 2 is sometimes busy, and often lazy. Indeed, there is evidence that people are more likely to be influenced by empty persuasive messages, such as commercials, when they are tired and depleted.
**Note:** Unbelieving is an operation of System 2. We are more likely to fall back on System 1 and blindly believe things when we are tired or have too many things on our mind.
**Tags:** pink
If you like the president's politics, you probably like his voice and his appearance as well. The tendency to like (or dislike) everything about a person—including things you have not observed—is known as the halo effect.
The sequence in which we observe characteristics of a person is often determined by chance. Sequence matters, however, because the halo effect increases the weight of first impressions, sometimes to the point that subsequent information is mostly wasted. Early in my career as a professor, I graded students' essay exams in the conventional way. I would pick up one test booklet at a time and read all that student's essays in immediate succession, grading them as I went.
**Note:** Our first impressions profoundly shape later evaluations, as confirmation bias leads us to seek information that confirms them.
If a student had written two essays, one strong and one weak, I would end up with different final grades depending on which essay I read first. I had told the students that the two essays had equal weight, but that was not true: the first one had a much greater impact on the final grade than the second. This was unacceptable. I adopted a new procedure. Instead of reading the booklets in sequence, I read and scored all the students' answers to the first question, then went on to the next one.
decorrelate error! To understand how this principle works, imagine that a large number of observers are shown glass jars containing pennies and are challenged to estimate the number of pennies in each jar. As James Surowiecki explained in his best-selling The Wisdom of Crowds, this is the kind of task in which individuals do very poorly, but pools of individual judgments do remarkably well. Some individuals greatly overestimate the true number, others underestimate it, but when many judgments are averaged, the average tends to be quite accurate. The mechanism is straightforward: all individuals look at the same jar, and all their judgments have a common basis. On the other hand, the errors that individuals make are independent of the errors made by others, and (in the absence of a systematic bias) they tend to average to zero. However, the magic of error reduction works well only when the observations are independent and their errors uncorrelated. If the observers share a bias, the aggregation of judgments will not reduce it. Allowing the observers to influence each other effectively reduces the size of the sample, and with it the precision of the group estimate.
**Note:** Decorrelating error means preventing bias from entering someone's judgment through exposure to someone else's opinion. When many people in a group talk something over, they often all end up adopting the group's shared biases.
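A quick simulation of the penny-jar logic (all numbers here are made up for illustration): independent errors average out, but a shared bias survives averaging.

```python
import random

random.seed(1)
TRUE_COUNT = 850  # hypothetical number of pennies in the jar

def guess(shared_bias=0.0):
    # one observer's estimate: truth + any shared bias + independent noise
    return TRUE_COUNT + shared_bias + random.gauss(0, 200)

# Independent judgments: individually poor, collectively accurate.
independent = [guess() for _ in range(1000)]
avg_independent = sum(independent) / len(independent)

# Correlated judgments (everyone nudged the same way, e.g. by discussion):
# averaging does nothing to remove the shared bias.
correlated = [guess(shared_bias=300) for _ in range(1000)]
avg_correlated = sum(correlated) / len(correlated)

print(round(avg_independent))  # near 850
print(round(avg_correlated))   # near 1150: the shared bias remains
```

Averaging shrinks the independent noise by roughly the square root of the group size, but it cannot touch whatever error everyone shares.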
To derive the most useful information from multiple sources of evidence, you should always try to make these sources independent of each other. This rule is part of good police procedure. When there are multiple witnesses to an event, they are not allowed to discuss it before giving their testimony. The goal is not only to prevent collusion by hostile witnesses, it is also to prevent unbiased witnesses from influencing each other.
**Note:** This is an example of the Wisdom of Crowds.
The principle of independent judgments (and decorrelated errors) has immediate applications for the conduct of meetings, an activity in which executives in organizations spend a great deal of their working days. A simple rule can help: before an issue is discussed, all members of the committee should be asked to write a very brief summary of their position.
**Note:** This is to stop the first speakers from having too much influence on the opinions of subsequent speakers.
System 1 excels at constructing the best possible story that incorporates ideas currently activated, but it does not (cannot) allow for information it does not have.
**Note:** This reminds me of the availability heuristic, as well as our tendency to over-incorporate present states into imaginations of the future and retrievals of memory.
One great way to temper this bias is to assess beforehand which factors are important to a decision. In other words, frame the decision.
WYSIATI, which stands for what you see is all there is. System 1 is radically insensitive to both the quality and the quantity of the information that gives rise to impressions and intuitions.
Overconfidence: As the WYSIATI rule implies, neither the quantity nor the quality of the evidence counts for much in subjective confidence. The confidence that individuals have in their beliefs depends mostly on the quality of the story they can tell about what they see, even if they see little.
Framing effects: Different ways of presenting the same information often evoke different emotions. The statement that "the odds of survival one month after surgery are 90%" is more reassuring than the equivalent statement that "mortality within one month of surgery is 10%." Similarly, cold cuts described as "90% fat-free" are more attractive than when they are described as "10% fat." The equivalence of the alternative formulations is transparent, but an individual normally sees only one formulation, and what she sees is all there is.
Situations are constantly evaluated as good or bad, requiring escape or permitting approach. Good mood and cognitive ease are the human equivalents of assessments of safety and familiarity.
we are endowed with an ability to evaluate, in a single glance at a stranger's face, two potentially crucial facts about that person: how dominant (and therefore potentially threatening) he is, and how trustworthy he is, whether his intentions are more likely to be friendly or hostile. The shape of the face provides the cues for assessing dominance: a "strong" square chin is one such cue. Facial expression (smile or frown) provides the cues for assessing the stranger's intentions. The combination of a square chin with a turned-down mouth may spell trouble. The accuracy of face reading is far from perfect: round chins are not a reliable indicator of meekness, and smiles can (to some extent) be faked. Still, even an imperfect ability to assess strangers confers a survival advantage.
Participants in one of the numerous experiments that were prompted by the litigation following the disastrous Exxon Valdez oil spill were asked their willingness to pay for nets to cover oil ponds in which migratory birds often drown. Different groups of participants stated their willingness to pay to save 2,000, 20,000, or 200,000 birds. If saving birds is an economic good it should be a sum-like variable: saving 200,000 birds should be worth much more than saving 2,000 birds. In fact, the average contributions of the three groups were $80, $78, and $88 respectively. The number of birds made very little difference. What the participants reacted to, in all three groups, was a prototype—the awful image of a helpless bird drowning, its feathers soaked in thick oil. The almost complete neglect of quantity in such emotional contexts has been confirmed many times.
**Note:** We are terrible at using quantity in emotional contexts. No matter how many birds were said to be saved, people donated about the same.
**Tags:** pink
You do not automatically count the number of syllables of every word you read, but you can do it if you so choose. However, the control over intended computations is far from precise: we often compute much more than we want or need. I call this excess computation the mental shotgun.
What are the correct responses for the following sentences? Some roads are snakes. Some jobs are snakes. Some jobs are jails. All three sentences are literally false. However, you probably noticed that the second sentence is more obviously false than the other two—the reaction times collected in the experiment confirmed a substantial difference. The reason for the difference is that the two difficult sentences can be metaphorically true.
**Note:** An example of the mental shotgun.
#### 9 Answering an Easier Question
I propose a simple account of how we generate intuitive opinions on complex matters. If a satisfactory answer to a hard question is not found quickly, System 1 will find a related question that is easier and will answer it. I call the operation of answering one question in place of another substitution. I also adopt the following terms: The target question is the assessment you intend to produce. The heuristic question is the simpler question that you answer instead.
**Note:** For example, in the Exxon Valdez seabird experiment above, different groups were asked their willingness to pay to save 2,000, 20,000, or 200,000 birds, yet average contributions were $80, $78, and $88. The target question was "How much am I willing to pay to save this many birds?"; the heuristic question actually answered was "How do I feel about drowning birds?" The statistics had almost no influence on the amount.
How are the heuristic answers fitted to the original target question? Another capability of System 1, intensity matching, is available to solve that problem. Recall that both feelings and contribution dollars are intensity scales. I can feel more or less strongly about dolphins and there is a contribution that matches the intensity of my feelings. The dollar amount that will come to my mind is the matching amount.
heuristic is a simple procedure that helps find adequate, though often imperfect, answers to difficult questions.
Similar intensity matches are possible for all the questions. For example, the political skills of a candidate can range from pathetic to extraordinarily impressive, and the scale of political success can range from the low of "She will be defeated in the primary" to a high of "She will someday be president of the United States."
**Note:** Intensity matching refers to when we match the intensity of how we feel about something to answer a question.
How happy are you these days? How many dates did you have last month? The experimenters were interested in the correlation between the two answers. Would the students who reported many dates say that they were happier than those with fewer dates? Surprisingly, no: the correlation between the answers was about zero. Evidently, dating was not what came first to the students' minds when they were asked to assess their happiness. Another group of students saw the same two questions, but in reverse order: How many dates did you have last month? How happy are you these days? The results this time were completely different. In this sequence, the correlation between the number of dates and reported happiness was about as high as correlations between psychological measures can get. What happened? The explanation is straightforward, and it is a good example of substitution. Dating was apparently not the center of these students' life (in the first survey, happiness and dating were uncorrelated), but when they were asked to think about their romantic life, they certainly had an emotional reaction. The students who had many dates were reminded of a happy aspect of their life, while those who had none were reminded of loneliness and rejection. The emotion aroused by the dating question was still on everyone's mind when the query about general happiness came up.
**Note:** WYSIATI does a ton to affect reported happiness. Substituting an easier question for a big, complicated one like "How happy are you?" is much more likely when an easy substitute is handy. When the dating question was asked first, it gave students an easy way to assess their happiness, and the two variables became highly correlated.
**Tags:** pink
In the context of attitudes, however, System 2 is more of an apologist for the emotions of System 1 than a critic of those emotions—an endorser rather than an enforcer. Its search for information and arguments is mostly constrained to information that is consistent with existing beliefs, not with an intention to examine them.
**Note:** In most of the book System 2 is described as a clever, smart fixer of System 1's errors. In the case of attitudes, however, System 2 is prone to validating the beliefs System 1 already holds rather than examining them.
#### Characteristics of System 1
- generates impressions, feelings, and inclinations; when endorsed by System 2 these become beliefs, attitudes, and intentions
- operates automatically and quickly, with little or no effort, and no sense of voluntary control
- can be programmed by System 2 to mobilize attention when a particular pattern is detected (search)
- executes skilled responses and generates skilled intuitions, after adequate training
- creates a coherent pattern of activated ideas in associative memory
- links a sense of cognitive ease to illusions of truth, pleasant feelings, and reduced vigilance
- distinguishes the surprising from the normal
- infers and invents causes and intentions
- neglects ambiguity and suppresses doubt
- is biased to believe and confirm
- exaggerates emotional consistency (halo effect)
- focuses on existing evidence and ignores absent evidence (WYSIATI)
- generates a limited set of basic assessments
- represents sets by norms and prototypes, does not integrate
- matches intensities across scales (e.g., size to loudness)
- computes more than intended (mental shotgun)
- sometimes substitutes an easier question for a difficult one (heuristics)
- is more sensitive to changes than to states (prospect theory)*
- overweights low probabilities*
- shows diminishing sensitivity to quantity (psychophysics)*
- responds more strongly to losses than to gains (loss aversion)*
- frames decision problems narrowly, in isolation from one another
Now imagine the population of the United States as marbles in a giant urn. Some marbles are marked KC, for kidney cancer. You draw samples of marbles and populate each county in turn. Rural samples are smaller than other samples. Just as in the game of Jack and Jill, extreme outcomes (very high and/or very low cancer rates) are most likely to be found in sparsely populated counties. This is all there is to the story. We started from a fact that calls for a cause: the incidence of kidney cancer varies widely across counties and the differences are systematic. The explanation I offered is statistical: extreme outcomes (both high and low) are more likely to be found in small than in large samples. This explanation is not causal. The small population of a county neither causes nor prevents cancer; it merely allows the incidence of cancer to be much higher (or much lower) than it is in the larger population. The deeper truth is that there is nothing to explain. The incidence of cancer is not truly lower or higher than normal in a county with a small population, it just appears to be so in a particular year because of an accident of sampling. If we repeat the analysis next year, we will observe the same general pattern of extreme results in the small samples, but the counties where cancer was common last year will not necessarily have a high incidence this year. If this is the case, the differences between dense and rural counties do not really count as facts: they are what scientists call artifacts, observations that are produced entirely by some aspect of the method of research—in this case, by differences in sample size.
**Note:** Our brains are terrible at thinking statistically. We love cause-and-effect thinking, which leads us to believe it's the rural nature of a county, or something else about it, that causes higher rates of kidney cancer. In fact it's the small sample sizes that produce artifacts, making the apparent prevalence of kidney cancer extreme in some areas.
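This sampling artifact is easy to reproduce. A sketch with made-up rates and county sizes: every "county" below has the exact same true incidence, yet the extreme observed rates all come from the small ones.

```python
import random

random.seed(0)
P = 0.005  # identical true incidence everywhere (illustrative number)

def observed_rate(population):
    # count how many residents are "cases" in one simulated county
    cases = sum(random.random() < P for _ in range(population))
    return cases / population

small_counties = [observed_rate(200) for _ in range(200)]      # rural
large_counties = [observed_rate(20_000) for _ in range(200)]   # urban

# Small samples produce both the highest and the lowest observed rates,
# even though the underlying rate never varies.
print(min(small_counties), max(small_counties))
print(min(large_counties), max(large_counties))
```

The small counties' rates scatter widely around 0.005 (including zeros), while the large counties cluster tightly near it, which is exactly the urn story.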
In a telephone poll of 300 seniors, 60% support the president. If you had to summarize the message of this sentence in exactly three words, what would they be? Almost certainly you would choose "elderly support president." These words provide the gist of the story. The omitted details of the poll, that it was done on the phone with a sample of 300, are of no interest in themselves; they provide background information that attracts little attention. Your summary would be the same if the sample size had been different. Of course, a completely absurd number would draw your attention ("a telephone poll of 6 [or 60 million] elderly voters…"). Unless you are a professional, however, you may not react very differently to a sample of 150 and to a sample of 3,000. That is the meaning of the statement that "people are not adequately sensitive to sample size."
**Note:** People are not adequately cognizant of sample size
The exaggerated faith in small samples is only one example of a more general illusion—we pay more attention to the content of messages than to information about their reliability, and as a result end up with a view of the world around us that is simpler and more coherent than the data justify. Jumping to conclusions is a safer sport in the world of our imagination than it is in reality.
Statistics produce many observations that appear to beg for causal explanations but do not lend themselves to such explanations. Many facts of the world are due to chance, including accidents of sampling. Causal explanations of chance events are inevitably wrong.
Amos and I once rigged a wheel of fortune. It was marked from 0 to 100, but we had it built so that it would stop only at 10 or 65. We recruited students of the University of Oregon as participants in our experiment. One of us would stand in front of a small group, spin the wheel, and ask them to write down the number on which the wheel stopped, which of course was either 10 or 65. We then asked them two questions: Is the percentage of African nations among UN members larger or smaller than the number you just wrote? What is your best guess of the percentage of African nations in the UN? The spin of a wheel of fortune—even one that is not rigged—cannot possibly yield useful information about anything, and the participants in our experiment should simply have ignored it. But they did not ignore it. The average estimates of those who saw 10 and 65 were 25% and 45%, respectively.
**Note:** The anchoring effect in action. Because the two numbers on the wheel were 65 and 10, the students used them as anchors when guessing the percentage of African nations in the UN, even though the spin had nothing to do with the question.
**Tags:** pink
anchoring effect. It occurs when people consider a particular value for an unknown quantity before estimating that quantity. What happens is one of the most reliable and robust results of experimental psychology: the estimates stay close to the number that people considered—hence the image of an anchor.
Amos liked the idea of an **adjust-and-anchor heuristic** as a strategy for estimating uncertain quantities: **start from an anchoring number, assess whether it is too high or too low, and gradually adjust your estimate by mentally "moving" from the anchor.** The adjustment typically ends prematurely, because people stop when they are no longer certain that they should move farther.
**Note:** Typically you will stay closer to the anchor the more agreeable you are feeling and the more your System 2 is depleted or overloaded.
Is the height of the tallest redwood more or less than 1,200 feet? What is your best guess about the height of the tallest redwood? The "high anchor" in this experiment was 1,200 feet. For other participants, the first question referred to a "low anchor" of 180 feet. The difference between the two anchors was 1,020 feet. As expected, the two groups produced very different mean estimates: 844 and 282 feet. The difference between them was 562 feet. The anchoring index is simply the ratio of the two differences (562/1,020) expressed as a percentage: 55%. The anchoring measure would be 100% for people who slavishly adopt the anchor as an estimate, and zero for people who are able to ignore the anchor altogether. The value of 55% that was observed in this example is typical. Similar values have been observed in numerous other problems.
**Note:** This is how you calculate the anchoring index: the difference between the mean estimates divided by the difference between the anchors.
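The redwood numbers from the passage, run through that calculation (a sketch; `anchoring_index` is just my name for it):

```python
def anchoring_index(high_anchor, low_anchor, high_estimate, low_estimate):
    # spread of the mean estimates relative to the spread of the anchors
    return (high_estimate - low_estimate) / (high_anchor - low_anchor)

# Redwood-height experiment: anchors of 1,200 ft and 180 ft,
# mean estimates of 844 ft and 282 ft.
index = anchoring_index(1200, 180, 844, 282)
print(f"{index:.0%}")  # 55%
```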
Powerful anchoring effects are found in decisions that people make about money, such as when they choose how much to contribute to a cause. To demonstrate this effect, we told participants in the Exploratorium study about the environmental damage caused by oil tankers in the Pacific Ocean and asked about their willingness to make an annual contribution "to save 50,000 offshore Pacific Coast seabirds from small offshore oil spills, until ways are found to prevent spills or require tanker owners to pay for the operation." This question requires intensity matching: the respondents are asked, in effect, to find the dollar amount of a contribution that matches the intensity of their feelings about the plight of the seabirds. Some of the visitors were first asked an anchoring question, such as, "Would you be willing to pay $5…," before the point-blank question of how much they would contribute. When no anchor was mentioned, the visitors at the Exploratorium—generally an environmentally sensitive crowd—said they were willing to pay $64, on average. When the anchoring amount was only $5, contributions averaged $20. When the anchor was a rather extravagant $400, the willingness to pay rose to an average of $143. The difference between the high-anchor and low-anchor groups was $123. The anchoring effect was above 30%, indicating that increasing the initial request by $100 brought a return of $30 in average willingness to pay.
**Note:** Anchoring effects have a powerful influence on how much people are willing to spend for something.
**Tags:** pink
The power of random anchors has been demonstrated in some unsettling ways. German judges with an average of more than fifteen years of experience on the bench first read a description of a woman who had been caught shoplifting, then rolled a pair of dice that were loaded so every roll resulted in either a 3 or a 9. As soon as the dice came to a stop, the judges were asked whether they would sentence the woman to a term in prison greater or lesser, in months, than the number showing on the dice. Finally, the judges were instructed to specify the exact prison sentence they would give to the shoplifter. On average, those who had rolled a 9 said they would sentence her to 8 months; those who rolled a 3 said they would sentence her to 5 months; the anchoring effect was 50%.
**Note:** Random anchors can cause unsettling results. Despite meaning nothing, a roll of three led to sentences that averaged three months shorter than a roll of nine.
**Tags:** pink
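The same estimate-spread-over-anchor-spread ratio checks out against the two money examples above (my arithmetic, using the figures from the text):

```python
# Judges' dice experiment: anchors of 9 and 3 months,
# mean sentences of 8 and 5 months.
dice_index = (8 - 5) / (9 - 3)
print(f"{dice_index:.0%}")  # 50%

# Exploratorium seabird study: anchors of $400 and $5,
# mean contributions of $143 and $20.
seabird_index = (143 - 20) / (400 - 5)
print(f"{seabird_index:.0%}")  # 31%, i.e. "above 30%" as the text says
```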
My advice to students when I taught negotiations was that if you think the other side has made an outrageous proposal, you should not come back with an equally outrageous counteroffer, creating a gap that will be difficult to bridge in further negotiations. Instead you should make a scene, storm out or threaten to do so, and make it clear—to yourself as well as to the other side—that you will not continue the negotiation with that number on the table.
If the content of a screen saver on an irrelevant computer can affect your willingness to help strangers without your being aware of it, how free are you? Anchoring effects are threatening in a similar way. You are always aware of the anchor and even pay attention to it, but you do not know how it guides and constrains your thinking, because you cannot imagine how you would have thought if the anchor had been different (or absent). However, you should assume that any number that is on the table has had an anchoring effect on you, and if the stakes are high you should mobilize yourself (your System 2) to combat the effect.
**Note:** To combat the anchoring effect, be aware of the fact that any number in front of you has an influence on the things you say next.
The availability heuristic, like other heuristics of judgment, substitutes one question for another: you wish to estimate the size of a category or the frequency of an event, but you report an impression of the ease with which instances come to mind.
In a famous study, spouses were asked, "How large was your personal contribution to keeping the place tidy, in percentages?" They also answered similar questions about "taking out the garbage," "initiating social engagements," etc. Would the self-estimated contributions add up to 100%, or more, or less? As expected, the self-assessed contributions added up to more than 100%. The explanation is a simple availability bias: both spouses remember their own individual efforts and contributions much more clearly than those of the other, and the difference in availability leads to a difference in judged frequency. The bias is not necessarily self-serving: spouses also overestimated their contribution to causing quarrels, although to a smaller extent than their contributions to more desirable outcomes. The same bias contributes to the common observation that many members of a collaborative team feel they have done more than their share and also feel that the others are not adequately grateful for their individual contributions.
**Note:** A classic example of the availability heuristic in action. Spouses overestimated their contributions to both bad and good things as it was easier for them to call their own efforts to mind.
**Tags:** pink
Imagine that you had been asked for twelve instances of assertive behavior (a number most people find difficult). Would your view of your own assertiveness be different? Schwarz and his colleagues observed that the task of listing instances may enhance the judgments of the trait by two different routes: the number of instances retrieved, and the ease with which they come to mind. The request to list twelve instances pits the two determinants against each other. On the one hand, you have just retrieved an impressive number of cases in which you were assertive. On the other hand, while the first three or four instances of your own assertiveness probably came easily to you, you almost certainly struggled to come up with the last few to complete a set of twelve; fluency was low. Which will count more—the amount retrieved or the ease and fluency of the retrieval? The contest yielded a clear-cut winner: people who had just listed twelve instances rated themselves as less assertive than people who had listed only six. Furthermore, participants who had been asked to list twelve cases in which they had not behaved assertively ended up thinking of themselves as quite assertive!
**Note:** If you cannot easily come up with instances of meek behavior, you are likely to conclude that you are not meek at all. Self-ratings were dominated by the ease with which examples came to mind: the experience of fluent retrieval trumped the number of instances retrieved.
**Tags:** pink
Psychologists enjoy experiments that yield paradoxical results, and they have applied Schwarz's discovery with gusto. For example, people:
- believe that they use their bicycles less often after recalling many rather than few instances
- are less confident in a choice when they are asked to produce more arguments to support it
- are less confident that an event was avoidable after listing more ways it could have been avoided
- are less impressed by a car after listing many of its advantages
**Note:** Ease of recall affects our self-estimates more than the sheer number of examples does. Paradoxically, coming up with more examples of something makes the later ones harder to retrieve, which lowers the judgment.
They told the participants they would hear background music while recalling instances and that the music would affect performance in the memory task. Some subjects were told that the music would help, others were told to expect diminished fluency. As predicted, participants whose experience of fluency was "explained" did not use it as a heuristic; the subjects who were told that music would make retrieval more difficult rated themselves as equally assertive when they retrieved twelve instances as when they retrieved six. Other cover stories have been used with the same result: judgments are no longer influenced by ease of retrieval when the experience of fluency is given a spurious explanation by the presence of curved or straight text boxes, by the background color of the screen, or by other irrelevant factors that the experimenters dreamed up.
**Note:** Judgments are no longer influenced by ease of retrieval when the experience of fluency is given a spurious explanation by some outside stimulus. Unlike before, people did not paradoxically rate themselves as less assertive after recalling twelve instances rather than six; they had the music to blame for their difficulty recalling.
**Tags:** pink
The affect heuristic is one in which people make judgments and decisions by consulting their emotions: Do I like it? Do I hate it? How strongly do I feel about it? In many domains of life, Slovic said, people form opinions and make choices that directly express their feelings and their basic tendency to approach or avoid, often without knowing that they are doing so. The affect heuristic is an instance of substitution, in which the answer to an easy question (How do I feel about it?) serves as an answer to a much harder question (What do I think about it?).
An availability cascade is a self-sustaining chain of events, which may start from media reports of a relatively minor event and lead up to public panic and large-scale government action. On some occasions, a media story about a risk catches the attention of a segment of the public, which becomes aroused and worried. This emotional reaction becomes a story in itself, prompting additional coverage in the media, which in turn produces greater concern and involvement. The cycle is sometimes sped along deliberately by "availability entrepreneurs," individuals or organizations who work to ensure a continuous flow of worrying news. The danger is increasingly exaggerated as the media compete for attention-grabbing headlines.
**Note:** .qa What is an availability cascade? a self-sustaining chain of events, which may start from media reports of a relatively minor event and lead up to public panic and large-scale government action.
**Tags:** qa
The Alar tale illustrates a basic limitation in the ability of our mind to deal with small risks: we either ignore them altogether or give them far too much weight - nothing in between. Every parent who has stayed up waiting for a teenage daughter who is late from a party will recognize the feeling. You may know that there is really (almost) nothing to worry about, but you cannot keep images of disaster from coming to mind.
To decide whether a marble is more likely to be red or green, you need to know how many marbles of each color there are in the urn. The proportion of marbles of a particular kind is called a base rate.
**Note:** .qa What is a base rate? The proportion of items of a particular kind within a larger population (e.g., the proportion of red marbles in the urn).
**Tags:** qa
Judging probability by representativeness has important virtues: the intuitive impressions that it produces are often - indeed, usually - more accurate than chance guesses would be.
**Note:** .qa What is representativeness? The degree to which a description of something or someone matches a preconceived stereotype of it.
**Tags:** qa
One sin of representativeness is an excessive willingness to predict the occurrence of unlikely (low base-rate) events. Here is an example: you see a person reading The New York Times on the New York subway. Which of the following is a better bet about the reading stranger? She has a PhD. She does not have a college degree. Representativeness would tell you to bet on the PhD, but this is not necessarily wise. You should seriously consider the second alternative, because many more nongraduates than PhDs ride in New York subways. And if you must guess whether a woman who is described as "a shy poetry lover" studies Chinese literature or business administration, you should opt for the latter option. Even if every female student of Chinese literature is shy and loves poetry, it is almost certain that there are more bashful poetry lovers in the much larger population of business students.
There is one thing you can do when you have doubts about the quality of the evidence: let your judgments of probability stay close to the base rate. Don't expect this exercise of discipline to be easy; it requires a significant effort of self-monitoring and self-control.
**Note:** When predicting the probability of something, stay close to the base rate if you are worried about internal bias. Adjust the probabilities only slightly as you learn more information.
There are two ideas to keep in mind about Bayesian reasoning and how we tend to mess it up. The first is that base rates matter, even in the presence of evidence about the case at hand. This is often not intuitively obvious. The second is that intuitive impressions of the diagnosticity of evidence are often exaggerated. The combination of WYSIATI and associative coherence tends to make us believe in the stories we spin for ourselves. The essential keys to disciplined Bayesian reasoning can be simply summarized: Anchor your judgment of the probability of an outcome on a plausible base rate. Question the diagnosticity of your evidence.
**Note:** .qa What are the two essential ideas to disciplined Bayesian reasoning? Anchor your judgment of the probability of an outcome on a plausible base rate and question the significance of your evidence.
**Tags:** qa
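The two keys reduce to a one-line Bayes update in odds form. A minimal sketch: the 1-librarian-per-20-farmers base rate comes from the Steve example earlier in these notes, but the likelihood ratio of 4 is an invented stand-in for how diagnostic the "meek and tidy" description might be, and the function name is mine:

```python
def bayes_posterior(prior, likelihood_ratio):
    """Update a base-rate probability with evidence, in odds form."""
    prior_odds = prior / (1 - prior)                 # anchor on the base rate
    posterior_odds = prior_odds * likelihood_ratio   # weigh the evidence
    return posterior_odds / (1 + posterior_odds)

# Steve: roughly 1 male librarian per 20 male farmers -> prior = 1/21.
# Hypothetical assumption: the description is 4x as likely for a
# librarian as for a farmer (likelihood ratio = 4).
p_librarian = bayes_posterior(1 / 21, 4.0)
```

Even evidence that favors "librarian" four to one leaves the posterior around 17%, well under half, which is the point of anchoring on a plausible base rate before weighing the evidence.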
The word fallacy is used, in general, when people fail to apply a logical rule that is obviously relevant. Amos and I introduced the idea of a conjunction fallacy, which people commit when they judge a conjunction of two events (here, bank teller and feminist) to be more probable than one of the events (bank teller) in a direct comparison.
**Note:** .qa What is a conjunction fallacy in probability? When people judge a conjunction of two events to be more probable than one of the events in a direct comparison.
**Tags:** qa
To appreciate the role of plausibility, consider the following questions: Which alternative is more probable? Mark has hair. Mark has blond hair. and Which alternative is more probable? Jane is a teacher. Jane is a teacher and walks to work. The two questions have the same logical structure as the Linda problem, but they cause no fallacy, because the more detailed outcome is only more detailed - it is not more plausible, or more coherent, or a better story. The evaluation of plausibility and coherence does not suggest an answer to the probability question. In the absence of a competing intuition, logic prevails.
**Note:** Logic prevails in the absence of a competing intuition.
Christopher Hsee, of the University of Chicago, asked people to price sets of dinnerware offered in a clearance sale in a local store, where dinnerware regularly runs between $30 and $60. There were three groups in his experiment. The display below was shown to one group; Hsee labels that joint evaluation, because it allows a comparison of the two sets. The other two groups were shown only one of the two sets; this is single evaluation. Joint evaluation is a within-subject experiment, and single evaluation is between-subjects.

| | Set A: 40 pieces | Set B: 24 pieces |
| --- | --- | --- |
| Dinner plates | 8, all in good condition | 8, all in good condition |
| Soup/salad bowls | 8, all in good condition | 8, all in good condition |
| Dessert plates | 8, all in good condition | 8, all in good condition |
| Cups | 8, 2 of them broken | |
| Saucers | 8, 7 of them broken | |

Assuming that the dishes in the two sets are of equal quality, which is worth more? This question is easy. You can see that Set A contains all the dishes of Set B, and seven additional intact dishes, and it must be valued more. Indeed, the participants in Hsee's joint evaluation experiment were willing to pay a little more for Set A than for Set B: $32 versus $30. The results reversed in single evaluation, where Set B was priced much higher than Set A: $33 versus $23. We know why this happened. Sets (including dinnerware sets!) are represented by norms and prototypes. You can sense immediately that the average value of the dishes is much lower for Set A than for Set B, because no one wants to pay for broken dishes. If the average dominates the evaluation, it is not surprising that Set B is valued more. Hsee called the resulting pattern less is more. By removing 16 items from Set A (7 of them intact), its value is improved.
**Note:** Less is more. If people judge how much to pay for something by the average value of the items, removing the lowest-quality items can actually increase what they will pay.
**Tags:** pink
To head off the possible objection that the conjunction fallacy is due to a misinterpretation of probability, we constructed a problem that required probability judgments, but in which the events were not described in words, and the term probability did not appear at all. We told participants about a regular six-sided die with four green faces and two red faces, which would be rolled 20 times. They were shown three sequences of greens (G) and reds (R), and were asked to choose one. They would (hypothetically) win $25 if their chosen sequence showed up. The sequences were:
1. RGRRR
2. GRGRRR
3. GRRRRR
Because the die has twice as many green as red faces, the first sequence is quite unrepresentative - like Linda being a bank teller. The second sequence, which contains six tosses, is a better fit to what we would expect from this die, because it includes two G's. However, this sequence was constructed by adding a G to the beginning of the first sequence, so it can only be less likely than the first.
**Note:** The conjunction fallacy in action. People used representativeness instead of logic to determine which sequence was most likely.
**Tags:** pink
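The die example can be checked with simple arithmetic. A quick sketch: the 2/3 green probability follows from the four-green, two-red die described in the passage, while the helper function name is mine:

```python
def seq_prob(seq, p_green=2 / 3):
    """Probability of rolling an exact sequence of (G)reen/(R)ed faces."""
    p = 1.0
    for face in seq:
        p *= p_green if face == "G" else (1 - p_green)
    return p

# Adding a roll multiplies the probability by a number below 1, so the
# longer, more "representative" second sequence must lose to the first.
p1, p2, p3 = seq_prob("RGRRR"), seq_prob("GRGRRR"), seq_prob("GRRRRR")
```

The first sequence comes out near 0.0082 and the second near 0.0055: the unrepresentative-looking option is the better bet, which is exactly the conjunction rule at work.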
Statistical base rates are facts about a population to which a case belongs, but they are not relevant to the individual case. Causal base rates change your view of how the individual case came to be. The two types of base-rate information are treated differently: Statistical base rates are generally underweighted, and sometimes neglected altogether, when specific information about the case at hand is available. Causal base rates are treated as information about the individual case and are easily combined with other case-specific information.
**Note:** .qa What are statistical and causal base rates? Statistical base rates are facts about a population to which a case belongs but are not seen as relevant to the individual case; they tend to be underweighted or neglected. Causal base rates change your view of how the individual case came to be and are easily combined with other case-specific information.
**Tags:** qa
The experiment was conducted a long time ago by the social psychologist Richard Nisbett and his student Eugene Borgida, at the University of Michigan. They told students about the renowned "helping experiment" that had been conducted a few years earlier at New York University. Participants in that experiment were led to individual booths and invited to speak over the intercom about their personal lives and problems. They were to talk in turn for about two minutes. Only one microphone was active at any one time. There were six participants in each group, one of whom was a stooge. The stooge spoke first, following a script prepared by the experimenters. He described his problems adjusting to New York and admitted with obvious embarrassment that he was prone to seizures, especially when stressed. All the participants then had a turn. When the microphone was again turned over to the stooge, he became agitated and incoherent, said he felt a seizure coming on, and asked for someone to help him. The last words heard from him were, "C-could somebody-er-er-help-er-uh-uh-uh [choking sounds]. I…I'm gonna die-er-er-er I'm…gonna die-er-er-I seizure I-er [chokes, then quiet]." At this point the microphone of the next participant automatically became active, and nothing more was heard from the possibly dying individual. What do you think the participants in the experiment did? So far as the participants knew, one of them was having a seizure and had asked for help. However, there were several other people who could possibly respond, so perhaps one could stay safely in one's booth. These were the results: only four of the fifteen participants responded immediately to the appeal for help. Six never got out of their booth, and five others came out only well after the "seizure victim" apparently choked. The experiment shows that individuals feel relieved of responsibility when they know that others have heard the same request for help.
**Note:** We are surprisingly unwilling to give help if we know that others might do it instead of us.
**Tags:** pink
To apply Bayesian reasoning to the task the students were assigned, you should first ask yourself what you would have guessed about the two individuals if you had not seen their interviews. This question is answered by consulting the base rate. We have been told that only 4 of the 15 participants in the experiment rushed to help after the first request. The probability that an unidentified participant had been immediately helpful is therefore 27%. Thus your prior belief about any unspecified participant should be that he did not rush to help. Next, Bayesian logic requires you to adjust your judgment in light of any relevant information about the individual. However, the videos were carefully designed to be uninformative; they provided no reason to suspect that the individuals would be either more or less helpful than a randomly chosen student. In the absence of useful new information, the Bayesian solution is to stay with the base rates. Nisbett and Borgida asked two groups of students to watch the videos and predict the behavior of the two individuals. The students in the first group were told only about the procedure of the helping experiment, not about its results. Their predictions reflected their views of human nature and their understanding of the situation. As you might expect, they predicted that both individuals would immediately rush to the victim's aid. The second group of students knew both the procedure of the experiment and its results. The comparison of the predictions of the two groups provides an answer to a significant question: Did students learn from the results of the helping experiment anything that significantly changed their way of thinking? The answer is straightforward: they learned nothing at all. Their predictions about the two individuals were indistinguishable from the predictions made by students who had not been exposed to the statistical results of the experiment.
They knew the base rate in the group from which the individuals had been drawn, but they remained convinced that the people they saw on the video had been quick to help the stricken stranger.
**Note:** Teaching psychology might not be as easy as we hoped. Even when students knew the results showed that participants felt less responsible and mostly failed to help, they still predicted that the individuals they saw would help.
**Tags:** pink
Teachers of psychology should not despair, however, because Nisbett and Borgida report a way to make their students appreciate the point of the helping experiment. They took a new group of students and taught them the procedure of the experiment but did not tell them the group results. They showed the two videos and simply told their students that the two individuals they had just seen had not helped the stranger, then asked them to guess the global results. The outcome was dramatic: the students' guesses were extremely accurate. To teach students any psychology they did not know before, you must surprise them. But which surprise will do? Nisbett and Borgida found that when they presented their students with a surprising statistical fact, the students managed to learn nothing at all. But when the students were surprised by individual cases - two nice people who had not helped - they immediately made the generalization and inferred that helping is more difficult than they had thought. Nisbett and Borgida summarize the results in a memorable sentence: Subjects' unwillingness to deduce the particular from the general was matched only by their willingness to infer the general from the particular.
**Tags:** pink
People who are taught surprising statistical facts about human behavior may be impressed to the point of telling their friends about what they have heard, but this does not mean that their understanding of the world has really changed. The test of learning psychology is whether your understanding of situations you encounter has changed, not whether you have learned a new fact. There is a deep gap between our thinking about statistics and our thinking about individual cases. Statistical results with a causal interpretation have a stronger effect on our thinking than noncausal information. But even compelling causal statistics will not change long-held beliefs or beliefs rooted in personal experience. On the other hand, surprising individual cases have a powerful impact and are a more effective tool for teaching psychology because the incongruity must be resolved and embedded in a causal story. That is why this book contains questions that are addressed personally to the reader. You are more likely to learn something by finding surprises in your own behavior than by hearing surprising facts about people in general.
**Note:** You are more likely to learn something by finding surprises in your own behavior or another individual case than by hearing surprising facts about people in general.
Rewards for improved performance work better than punishment of mistakes. This proposition is supported by much evidence from research on pigeons, rats, humans, and other animals.
Naturally, he praised only a cadet whose performance was far better than average. But the cadet was probably just lucky on that particular attempt and therefore likely to deteriorate regardless of whether or not he was praised. Similarly, the instructor would shout into a cadet's earphones only when the cadet's performance was unusually bad and therefore likely to improve regardless of what the instructor did. The instructor had attached a causal interpretation to the inevitable fluctuations of a random process.
**Note:** .qa What is regression to the mean? People who perform significantly better or worse than average on one attempt are likely to move back toward the mean on the next, because extreme performances are partly luck and luck does not persist.
**Tags:** qa
The discovery I made on that day was that the flight instructors were trapped in an unfortunate contingency: because they punished cadets when performance was poor, they were mostly rewarded by a subsequent improvement, even if punishment was actually ineffective. Furthermore, the instructors were not alone in that predicament. I had stumbled onto a significant fact of the human condition: the feedback to which life exposes us is perverse. Because we tend to be nice to other people when they please us and nasty when they do not, we are statistically punished for being nice and rewarded for being nasty.
If the correlation between the intelligence of spouses is less than perfect (and if men and women on average do not differ in intelligence), then it is a mathematical inevitability that highly intelligent women will be married to husbands who are on average less intelligent than they are (and vice versa, of course). The observed regression to the mean cannot be more interesting or more explainable than the imperfect correlation.
**Note:** This is why highly intelligent women tend to marry less intelligent men: it is a mathematical consequence of imperfect correlation (regression to the mean), not something that needs a causal explanation.
#### 18 Taming Intuitive Predictions
Recall that the correlation between two measures - in the present case reading age and GPA - is equal to the proportion of shared factors among their determinants. What is your best guess about that proportion? My most optimistic guess is about 30%. Assuming this estimate, we have all we need to produce an unbiased prediction. Here are the directions for how to get there in four simple steps:
1. Start with an estimate of average GPA.
2. Determine the GPA that matches your impression of the evidence.
3. Estimate the correlation between your evidence and GPA.
4. If the correlation is .30, move 30% of the distance from the average to the matching GPA.
Step 1 gets you the baseline, the GPA you would have predicted if you were told nothing about Julie beyond the fact that she is a graduating senior. In the absence of information, you would have predicted the average. (This is similar to assigning the base-rate probability of business administration graduates when you are told nothing about Tom W.) Step 2 is your intuitive prediction, which matches your evaluation of the evidence. Step 3 moves you from the baseline toward your intuition, but the distance you are allowed to move depends on your estimate of the correlation. You end up, at step 4, with a prediction that is influenced by your intuition but is far more moderate.
This approach to prediction is general. You can apply it whenever you need to predict a quantitative variable, such as GPA, profit from an investment, or the growth of a company. The approach builds on your intuition, but it moderates it, regresses it toward the mean. When you have good reasons to trust the accuracy of your intuitive predictionâa strong correlation between the evidence and the predictionâthe adjustment will be small.
**Note:** Directions for making quantifiable predictions without systematic bias:
1. Start with an estimate of the average GPA.
2. Determine the GPA that matches your impression of the evidence.
3. Estimate the correlation between your evidence and GPA.
4. If the correlation is .30, move 30% of the distance from the average to the matching GPA.
This approach is much more accurate than coming to a predictive question with no analysis of the base rate and no understanding of regression to the mean. By establishing a baseline before asking the question, you protect yourself from being overly influenced by in-the-moment System 1 thinking.
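The four steps collapse into a single interpolation between the baseline and the intuitive estimate. A minimal sketch: the .30 correlation is the one discussed above, but the 3.0 average GPA and 3.8 evidence-matched GPA are invented numbers for illustration:

```python
def regressive_prediction(baseline, intuitive_estimate, correlation):
    """Move from the baseline toward the intuition by the correlation."""
    return baseline + correlation * (intuitive_estimate - baseline)

# Hypothetical Julie: average GPA 3.0, evidence-matched GPA 3.8, r = .30.
prediction = regressive_prediction(3.0, 3.8, 0.30)  # about 3.24
```

With a correlation of 1.0 the formula returns the intuitive estimate unchanged, and with 0 it stays at the baseline, matching the two extreme cases the text describes.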
A general limitation of the human mind is its imperfect ability to reconstruct past states of knowledge, or beliefs that have changed. Once you adopt a new view of the world (or of any part of it), you immediately lose much of your ability to recall what you used to believe before your mind changed.
Your inability to reconstruct past beliefs will inevitably cause you to underestimate the extent to which you were surprised by past events. Baruch Fischhoff first demonstrated this "I-knew-it-all-along" effect, or hindsight bias, when he was a student in Jerusalem. Together with Ruth Beyth (another of our students), Fischhoff conducted a survey before President Richard Nixon visited China and Russia in 1972. The respondents assigned probabilities to fifteen possible outcomes of Nixon's diplomatic initiatives. Would Mao Zedong agree to meet with Nixon? Might the United States grant diplomatic recognition to China? After decades of enmity, could the United States and the Soviet Union agree on anything significant? After Nixon's return from his travels, Fischhoff and Beyth asked the same people to recall the probability that they had originally assigned to each of the fifteen possible outcomes. The results were clear. If an event had actually occurred, people exaggerated the probability that they had assigned to it earlier. If the possible event had not come to pass, the participants erroneously recalled that they had always considered it unlikely. Further experiments showed that people were driven to overstate the accuracy not only of their original predictions but also of those made by others.
**Note:** We are notoriously bad at reconstructing past beliefs. As soon as our world views change, we struggle to think back accurately on what we felt before they changed.
**Tags:** pink
Hindsight bias has pernicious effects on the evaluations of decision makers. It leads observers to assess the quality of a decision not by whether the process was sound but by whether its outcome was good or bad.
Indeed, the halo effect is so powerful that you probably find yourself resisting the idea that the same person and the same behaviors appear methodical when things are going well and rigid when things are going poorly. Because of the halo effect, we get the causal relationship backward: we are prone to believe that the firm fails because its CEO is rigid, when the truth is that the CEO appears to be rigid because the firm is failing. This is how illusions of understanding are born.
What happened was remarkable. The global evidence of our previous failure should have shaken our confidence in our judgments of the candidates, but it did not. It should also have caused us to moderate our predictions, but it did not. We knew as a general fact that our predictions were little better than random guesses, but we continued to feel and act as if each of our specific predictions was valid. I was reminded of the Müller-Lyer illusion, in which we know the lines are of equal length yet still see them as being different. I was so struck by the analogy that I coined a term for our experience: the illusion of validity. I had discovered my first cognitive illusion.
**Note:** .qa What is the illusion of validity? Even when we know that our predictions are barely better than chance and biased by things like the representativeness heuristic, we still feel and act as if each specific prediction is valid.
**Tags:** qa
my questions about the stock market have hardened into a larger puzzle: a major industry appears to be built largely on an illusion of skill. Billions of shares are traded every day, with many people buying each stock and others selling it to them. It is not unusual for more than 100 million shares of a single stock to change hands in one day. Most of the buyers and sellers know that they have the same information; they exchange the stocks primarily because they have different opinions. The buyers think the price is too low and likely to rise, while the sellers think the price is high and likely to drop. The puzzle is why buyers and sellers alike think that the current price is wrong. What makes them believe they know more about what the price should be than the market does? For most of them, that belief is an illusion.
Cognitive illusions can be more stubborn than visual illusions. What you learned about the Müller-Lyer illusion did not change the way you see the lines, but it changed your behavior. You now know that you cannot trust your impression of the length of lines that have fins appended to them, and you also know that in the standard Müller-Lyer display you cannot trust what you see. When asked about the length of the lines, you will report your informed belief, not the illusion that you continue to see. In contrast, when my colleagues and I in the army learned that our leadership assessment tests had low validity, we accepted that fact intellectually, but it had no impact on either our feelings or our subsequent actions.
**Note:** People are more willing to accept visual illusions than cognitive ones.
The idea that the future is unpredictable is undermined every day by the ease with which the past is explained. As Nassim Taleb pointed out in The Black Swan, our tendency to construct and believe coherent narratives of the past makes it difficult for us to accept the limits of our forecasting ability. Everything makes sense in hindsight, a fact that financial pundits exploit every evening as they offer convincing accounts of the day's events. And we cannot suppress the powerful intuition that what makes sense in hindsight today was predictable yesterday. The illusion that we understand the past fosters overconfidence in our ability to predict the future.
Those who know more forecast very slightly better than those who know less. But those with the most knowledge are often less reliable. The reason is that the person who acquires more knowledge develops an enhanced illusion of her skill and becomes unrealistically overconfident. "We reach the point of diminishing marginal predictive returns for knowledge disconcertingly quickly," Tetlock writes. "In this age of academic hyperspecialization, there is no reason for supposing that contributors to top journals - distinguished political scientists, area study specialists, economists, and so on - are any better than journalists or attentive readers of The New York Times in 'reading' emerging situations." The more famous the forecaster, Tetlock discovered, the more flamboyant the forecasts.
**Note:** People need to realize that blind luck accounts for a huge amount of what happens in history. So many different factors affect any one outcome that it is almost impossible to guess what will happen in the future.
Tetlock also found that experts resisted admitting that they had been wrong, and when they were compelled to admit error, they had a large collection of excuses: they had been wrong only in their timing, an unforeseeable event had intervened, or they had been wrong but for the right reasons. Experts are just human in the end.
Why are experts inferior to algorithms? One reason, which Meehl suspected, is that experts try to be clever, think outside the box, and consider complex combinations of features in making their predictions. Complexity may work in the odd case, but more often than not it reduces validity. Simple combinations of features are better. Several studies have shown that human decision makers are inferior to a prediction formula even when they are given the score suggested by the formula! They feel that they can overrule the formula because they have additional information about the case, but they are wrong more often than not.
Another reason for the inferiority of expert judgment is that humans are incorrigibly inconsistent in making summary judgments of complex information. When asked to evaluate the same information twice, they frequently give different answers. The extent of the inconsistency is often a matter of real concern. Experienced radiologists who evaluate chest X-rays as "normal" or "abnormal" contradict themselves 20% of the time when they see the same picture on separate occasions.
The surprising success of equal-weighting schemes has an important practical implication: it is possible to develop useful algorithms without any prior statistical research. Simple equally weighted formulas based on existing statistics or on common sense are often very good predictors of significant outcomes. In a memorable example, Dawes showed that marital stability is well predicted by a formula: frequency of lovemaking minus frequency of quarrels. You don't want your result to be a negative number.
**Note:** Formulas are often better predictors of things than expert prediction. Even ones made with limited statistical knowledge can often outdo experts.
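A Dawes-style equal-weighting scheme can be sketched as: standardize each attribute across cases, then add the z-scores with equal weights. The attribute values below are invented and the function is mine, not from the book:

```python
from statistics import mean, stdev

def equal_weight_scores(rows):
    """rows: one tuple of attribute values per case.
    Standardize each attribute across cases, then sum with equal weights."""
    cols = list(zip(*rows))
    z_cols = []
    for col in cols:
        m, s = mean(col), stdev(col)                 # sample statistics
        z_cols.append([(x - m) / s for x in col])    # z-score the column
    return [sum(zs) for zs in zip(*z_cols)]

# Three hypothetical cases scored on two attributes (e.g., lovemaking
# frequency and quarrels with the sign flipped so higher is better).
scores = equal_weight_scores([(2, 1), (4, 3), (6, 5)])
```

No regression weights are fitted anywhere; as the passage says, equal weights based on existing statistics or common sense are often good enough.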
The aversion to algorithms making decisions that affect humans is rooted in the strong preference that many people have for the natural over the synthetic or artificial. Asked whether they would rather eat an organic or a commercially grown apple, most people prefer the "all natural" one. Even after being informed that the two apples taste the same, have identical nutritional value, and are equally healthful, a majority still prefer the organic fruit. Even the producers of beer have found that they can increase sales by putting "All Natural" or "No Preservatives" on the label.
**Note:** Why people don't want to accept algorithms or formulas even though they can predict things better than experts
Fortunately, I had read Paul Meehl's "little book," which had appeared just a year earlier. I was convinced by his argument that simple, statistical rules are superior to intuitive "clinical" judgments. I concluded that the then current interview had failed at least in part because it allowed the interviewers to do what they found most interesting, which was to learn about the dynamics of the interviewee's mental life. Instead, we should use the limited time at our disposal to obtain as much specific information as possible about the interviewee's life in his normal environment. Another lesson I learned from Meehl was that we should abandon the procedure in which the interviewers' global evaluations of the recruit determined the final decision. Meehl's book suggested that such evaluations should not be trusted and that statistical summaries of separately evaluated attributes would achieve higher validity.
**Note:** Daniel was instructed to design an interview that would be more useful but would not take more time. He was also told to try out the new interview and to evaluate its accuracy. The previous interviews hadn't worked because of the variability in questions from interviewer to interviewer. Daniel knew better.
I decided on a procedure in which the interviewers would evaluate several relevant personality traits and score each separately. The final score of fitness for combat duty would be computed according to a standard formula, with no further input from the interviewers. I made up a list of six characteristics that appeared relevant to performance in a combat unit, including âresponsibility,â âsociability,â and âmasculine pride.â I then composed, for each trait, a series of factual questions about the individualâs life before his enlistment, including the number of different jobs he had held, how regular and punctual he had been in his work or studies, the frequency of his interactions with friends, and his interest and participation in sports, among others. The idea was to evaluate as objectively as possible how well the recruit had done on each dimension.
**Note:** Danielâs strategy for making a more statistical interview. By focusing on standardized, factual questions, he hoped to combat the halo effect, where favorable first impressions influence later judgments. As a further precaution against halos, he instructed the interviewers to go through the six traits in a fixed sequence, rating each trait on a five-point scale before going on to the next.
Several hundred interviews were conducted by this new method, and a few months later we collected evaluations of the soldiersâ performance from the commanding officers of the units to which they had been assigned. The results made us happy. As Meehlâs book had suggested, the new interview procedure was a substantial improvement over the old one. The sum of our six ratings predicted soldiersâ performance much more accurately than the global evaluations of the previous interviewing method, although far from perfectly. We had progressed from âcompletely uselessâ to âmoderately useful.â
**Note:** The results of Daniel using a much more statistical interviewing procedure to evaluate soldiers' combat capabilities.
The big surprise to me was that the intuitive judgment that the interviewers summoned up in the âclose your eyesâ exercise also did very well, indeed just as well as the sum of the six specific ratings. I learned from this finding a lesson that I have never forgotten: intuition adds value even in the justly derided selection interview, but only after a disciplined collection of objective information and disciplined scoring of separate traits. I set a formula that gave the âclose your eyesâ evaluation the same weight as the sum of the six trait ratings. A more general lesson that I learned from this episode was do not simply trust intuitive judgmentâyour own or that of othersâbut do not dismiss it, either.
Implementing interview procedures in the spirit of Meehl and Dawes requires relatively little effort but substantial discipline. Suppose that you need to hire a sales representative for your firm. If you are serious about hiring the best possible person for the job, this is what you should do. First, select a few traits that are prerequisites for success in this position (technical proficiency, engaging personality, reliability, and so on). Donât overdo itâsix dimensions is a good number. The traits you choose should be as independent as possible from each other, and you should feel that you can assess them reliably by asking a few factual questions. Next, make a list of those questions for each trait and think about how you will score it, say on a 1â5 scale. You should have an idea of what you will call âvery weakâ or âvery strong.â These preparations should take you half an hour or so, a small investment that can make a significant difference in the quality of the people you hire. To avoid halo effects, you must collect the information on one trait at a time, scoring each before you move on to the next one. Do not skip around. To evaluate each candidate, add up the six scores. Because you are in charge of the final decision, you should not do a âclose your eyes.â Firmly resolve that you will hire the candidate whose final score is the highest, even if there is another one whom you like betterâtry to resist your wish to invent broken legs to change the ranking. A vast amount of research offers a promise: you are much more likely to find the best candidate if you use this procedure than if you do what people normally do in such situations, which is to go into the interview unprepared and to make choices by an overall intuitive judgment such as âI looked into his eyes and liked what I saw.â
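The hiring procedure above reduces to a tiny algorithm: score six traits separately on a 1-5 scale, sum them, and commit to the highest total. A minimal sketch, assuming hypothetical trait names (only the first three come from the text):

```python
# Six prerequisite traits, chosen in advance of any interview.
TRAITS = ["technical proficiency", "engaging personality", "reliability",
          "work ethic", "organization", "communication"]

def candidate_score(ratings):
    # ratings: dict mapping each trait to a 1-5 score, collected one
    # trait at a time to limit halo effects.
    assert set(ratings) == set(TRAITS)
    assert all(1 <= r <= 5 for r in ratings.values())
    return sum(ratings.values())

def hire(candidates):
    # candidates: dict of name -> ratings. Firmly take the top total,
    # with no "close your eyes" override by the person deciding.
    return max(candidates, key=lambda name: candidate_score(candidates[name]))
```

The point of the sketch is the commitment device: the final line leaves no room for an overall intuitive impression to reorder the ranking.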
#### 22 Expert Intuition: When Can We Trust It?
I quoted Herbert Simonâs definition of intuition in the introduction, but it will make more sense when I repeat it now: âThe situation has provided a cue; this cue has given the expert access to information stored in memory, and the information provides the answer. Intuition is nothing more and nothing less than recognition.â
**Note:** Experts are people with fantastic memories.
Intuition is nothing more and nothing less than recognition.â This strong statement reduces the apparent magic of intuition to the everyday experience of memory. We marvel at the story of the firefighter who has a sudden urge to escape a burning house just before it collapses, because the firefighter knows the danger intuitively, âwithout knowing how he knows.â However, we also do not know how we immediately know that a person we see as we enter a room is our friend Peter. The moral of Simonâs remark is that the mystery of knowing without knowing is not a distinctive feature of intuition; it is the norm of mental life.
Klein and I eventually agreed on an important principle: the confidence that people have in their intuitions is not a reliable guide to their validity. In other words, do not trust anyoneâincluding yourselfâto tell you how much you should trust their judgment.
**Note:** This is because experts can have valid intuitions in areas where they have built skill, but not in areas where they haven't. And because we don't know what we don't know, experts who believe their expertise extends to those areas bring overconfidence into them.
If subjective confidence is not to be trusted, how can we evaluate the probable validity of an intuitive judgment? When do judgments reflect true expertise? When do they display an illusion of validity? The answer comes from the two basic conditions for acquiring a skill:
- an environment that is sufficiently regular to be predictable
- an opportunity to learn these regularities through prolonged practice
**Note:** .qa What are the two prerequisites that must be met for an intuitive judgment to be valid? 1. The environment must be sufficiently regular to be predictable. 2. There must be an opportunity to learn these regularities through prolonged practice.
**Tags:** qa
When do judgments reflect true expertise? When do they display an illusion of validity? The answer comes from the two basic conditions for acquiring a skill:
- an environment that is sufficiently regular to be predictable
- an opportunity to learn these regularities through prolonged practice
**Note:** This is why chess players, weather forecasters, and surgeons are likely to have highly skilled intuitions.
Some environments are worse than irregular. Robin Hogarth described âwickedâ environments, in which professionals are likely to learn the wrong lessons from experience. He borrows from Lewis Thomas the example of a physician in the early twentieth century who often had intuitions about patients who were about to develop typhoid. Unfortunately, he tested his hunch by palpating the patientâs tongue, without washing his hands between patients. When patient after patient became ill, the physician developed a sense of clinical infallibility. His predictions were accurateâbut not because he was exercising professional intuition!
Among medical specialties, anesthesiologists benefit from good feedback, because the effects of their actions are likely to be quickly evident. In contrast, radiologists obtain little information about the accuracy of the diagnoses they make and about the pathologies they fail to detect. Anesthesiologists are therefore in a better position to develop useful intuitive skills. If an anesthesiologist says, âI have a feeling something is wrong,â everyone in the operating room should be prepared for an emergency.
When can you trust an experienced professional who claims to have an intuition? Our conclusion was that for the most part it is possible to distinguish intuitions that are likely to be valid from those that are likely to be bogus. As in the judgment of whether a work of art is genuine or a fake, you will usually do better by focusing on its provenance than by looking at the piece itself. If the environment is sufficiently regular and if the judge has had a chance to learn its regularities, the associative machinery will recognize situations and generate quick and accurate predictions and decisions. You can trust someoneâs intuitions if these conditions are met.
#### 23 The Outside View
The inside view is the one that all of us, including Seymour, spontaneously adopted to assess the future of our project. We focused on our specific circumstances and searched for evidence in our own experiences.
**Note:** When we imagine the future we must use our perceptions of the present and memories of the past.
However, if we have never before done what we are imagining, we are bound to produce a faulty estimate. This is why, when assessing the timeline or feasibility of a project you have little experience with, you should take the outside view and look at how similar projects have gone for others in the past.
Extrapolating was a mistake. We were forecasting based on the information in front of usâWYSIATIâbut the chapters we wrote first were probably easier than others, and our commitment to the project was probably then at its peak. But the main problem was that we failed to allow for what Donald Rumsfeld famously called the âunknown unknowns.â There was no way for us to foresee, that day, the succession of events that would cause the project to drag out for so long.
In light of both the outside-view forecast and the eventual outcome, the original estimates we made that Friday afternoon appear almost delusional. This should not come as a surprise: overly optimistic forecasts of the outcome of projects are found everywhere. Amos and I coined the term planning fallacy to describe plans and forecasts that:
- are unrealistically close to best-case scenarios
- could be improved by consulting the statistics of similar cases
The prevalent tendency to underweight or ignore distributional information is perhaps the major source of error in forecasting. Planners should therefore make every effort to frame the forecasting problem so as to facilitate utilizing all the distributional information that is available.
**Note:** The planning fallacy can be mitigated by taking the outside view.
Optimistic individuals play a disproportionate role in shaping our lives. Their decisions make a difference; they are the inventors, the entrepreneurs, the political and military leadersânot average people. They got to where they are by seeking challenges and taking risks. They are talented and they have been lucky, almost certainly luckier than they acknowledge. They are probably optimistic by temperament; a survey of founders of small businesses concluded that entrepreneurs are more sanguine than midlevel managers about life in general. Their experiences of success have confirmed their faith in their judgment and in their ability to control events.
Their self-confidence is reinforced by the admiration of others. This reasoning leads to a hypothesis: the people who have the greatest influence on the lives of others are likely to be optimistic and overconfident, and to take more risks than they realize. The evidence suggests that an optimistic bias plays a role, sometimes the dominant role, whenever individuals or institutions voluntarily take on significant risks. More often than not, risk takers underestimate the odds they face, and do not invest sufficient effort to find out what the odds are. Because they misread the risks, optimistic entrepreneurs often believe they are prudent, even when they are not. Their confidence in their future success sustains a positive mood that helps them obtain resources from others, raise the morale of their employees, and enhance their prospects of prevailing.
**Note:** Optimistic people have the most influence on other people's lives.
When one has just had a door slammed in oneâs face by an angry homemaker, the thought that âshe was an awful womanâ is clearly superior to âI am an inept salesperson.â I have always believed that scientific research is another domain where a form of optimism is essential to success: I have yet to meet a successful scientist who lacks the ability to exaggerate the importance of what he or she is doing, and I believe that someone who lacks a delusional sense of significance will wilt in the face of repeated experiences of multiple small failures and rare successes, the fate of most researchers.
The procedure is simple: when the organization has almost come to an important decision but has not formally committed itself, Klein proposes gathering for a brief session a group of individuals who are knowledgeable about the decision. The premise of the session is a short speech: âImagine that we are a year into the future. We implemented the plan as it now exists. The outcome was a disaster. Please take 5 to 10 minutes to write a brief history of that disaster.â
**Tags:** qa
Bernoulli observed that most people dislike risk (the chance of receiving the lowest possible outcome), and if they are offered a choice between a gamble and an amount equal to its expected value they will pick the sure thing. In fact a risk-averse decision maker will choose a sure thing that is less than expected value, in effect paying a premium to avoid the uncertainty. One hundred years before Fechner, Bernoulli invented psychophysics to explain this aversion to risk.
**Note:** the expected value of: 80% chance to win $100 and 20% chance to win $10 is $82 (0.8 Ă 100 + 0.2 Ă 10). Now ask yourself this question: Which would you prefer to receive as a gift, this gamble or $80 for sure? Almost everyone prefers the sure thing. If people valued uncertain prospects by their expected value, they would prefer the gamble, because $82 is more than $80. Bernoulli pointed out that people do not in fact evaluate gambles in this way.
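The arithmetic in the note is easy to check with a generic expected-value helper (my own code, not from the book):

```python
def expected_value(gamble):
    # gamble: list of (probability, payoff) pairs
    return sum(p * x for p, x in gamble)

# 80% chance to win $100, 20% chance to win $10
gamble = [(0.8, 100), (0.2, 10)]
print(expected_value(gamble))  # 82.0 -- yet most people take $80 for sure
```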
The longevity of the theory is all the more remarkable because it is seriously flawed. The errors of a theory are rarely found in what it asserts explicitly; they hide in what it ignores or tacitly assumes. For an example, take the following scenarios: Today Jack and Jill each have a wealth of 5 million. Yesterday, Jack had 1 million and Jill had 9 million. Are they equally happy? (Do they have the same utility?) Bernoulliâs theory assumes that the utility of their wealth is what makes people more or less happy. Jack and Jill have the same wealth, and the theory therefore asserts that they should be equally happy, but you do not need a degree in psychology to know that today Jack is elated and Jill despondent.
**Note:** Bernoulliâs theory is seriously flawed.
In utility theory, the utility of a gain is assessed by comparing the utilities of two states of wealth. For example, the utility of getting an extra $500 when your wealth is $1 million is the difference between the utility of $1,000,500 and the utility of $1 million. And if you own the larger amount, the disutility of losing $500 is again the difference between the utilities of the two states of wealth. In this theory, the utilities of gains and losses are allowed to differ only in their sign (+ or â). There is no way to represent the fact that the disutility of losing $500 could be greater than the utility of winning the same amountâthough of course it is. As might be expected in a situation of theory-induced blindness, possible differences between gains and losses were neither expected nor studied. The distinction between gains and losses was assumed not to matter, so there was no point in examining it.
**Note:** How utility theory works according to Bernoulli. The problem is it doesnât take into account if you are losing or gaining money. Loss aversion states we will hate losing money more than gaining.
A principle of diminishing sensitivity applies to both sensory dimensions and the evaluation of changes of wealth. Turning on a weak light has a large effect in a dark room. The same increment of light may be undetectable in a brightly illuminated room. Similarly, the subjective difference between $900 and $1,000 is much smaller than the difference between $100 and $200.
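Bernoulli modeled this diminishing sensitivity with logarithmic utility. Under that assumption (the log form is Bernoulli's; the dollar figures are the book's), the $100-to-$200 jump really is subjectively larger than the $900-to-$1,000 jump:

```python
import math

def utility(wealth):
    # Bernoulli's assumption: utility grows logarithmically with wealth.
    return math.log(wealth)

jump_low = utility(200) - utility(100)    # about 0.693
jump_high = utility(1000) - utility(900)  # about 0.105
print(jump_low > jump_high)  # True
```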
It is difficult to accept changes for the worse. For example, the minimal wage that unemployed workers would accept for new employment averages 90% of their previous wage, and it drops by less than 10% over a period of one year.
tastes are not fixed; they vary with the reference point. Second, the disadvantages of a change loom larger than its advantages, inducing a bias that favors the status quo. Of course, loss aversion does not imply that you never prefer to change your situation; the benefits of an opportunity may exceed even overweighted losses. Loss aversion implies only that choices are strongly biased in favor of the reference situation (and generally biased to favor small rather than large changes).
the endowment effect is not universal. If someone asks you to change a $5 bill for five singles, you hand over the five ones without any sense of loss. Nor is there much loss aversion when you shop for shoes. The merchant who gives up the shoes in exchange for money certainly feels no loss. Indeed, the shoes that he hands over have always been, from his point of view, a cumbersome proxy for money that he was hoping to collect from some consumer. Furthermore, you probably do not experience paying the merchant as a loss, because you were effectively holding money as a proxy for the shoes you intended to buy. These cases of routine trading are not essentially different from the exchange of a $5 bill for five singles. There is no loss aversion on either side of routine commercial exchanges.
Selling goods that one would normally use activates regions of the brain that are associated with disgust and pain. Buying also activates these areas, but only when the prices are perceived as too highâwhen you feel that a seller is taking money that exceeds the exchange value. Brain recordings also indicate that buying at especially low prices is a pleasurable event.
No endowment effect is expected when owners view their goods as carriers of value for future exchanges, a widespread attitude in routine commerce and in financial markets. The experimental economist John List, who has studied trading at baseball card conventions, found that novice traders were reluctant to part with the cards they owned, but that this reluctance eventually disappeared with trading experience.
**Note:** If dealing with an item that is meant to be traded, the endowment effect decreases as the trader becomes more experienced.
At a convention, List displayed a notice that invited people to take part in a short survey, for which they would be compensated with a small gift: a coffee mug or a chocolate bar of equal value. The gifts were assigned at random. As the volunteers were about to leave, List said to each of them, âWe gave you a mug [or chocolate bar], but you can trade for a chocolate bar [or mug] instead, if you wish.â In an exact replication of Jack Knetschâs earlier experiment, List found that only 18% of the inexperienced traders were willing to exchange their gift for the other. In sharp contrast, experienced traders showed no trace of an endowment effect: 48% of them traded! At least in a market environment in which trading was the norm, they showed no reluctance to trade.
**Note:** The endowment effect decreases with the more experience someone has as a trader
**Tags:** pink
Recent studies of the psychology of âdecision making under povertyâ suggest that the **poor are another group in which we do not expect to find the endowment effect**. **Being poor**, in prospect theory, **is living below oneâs reference point**. There are goods that the poor need and cannot afford, so **they are always âin the losses.â Small amounts of money** that they receive are **therefore perceived as a reduced loss, not as a gain**. The money helps one climb a little toward the reference point, but the poor always remain on the steep limb of the value function.
The brains of humans and other animals contain a mechanism that is designed to give priority to bad news. By shaving a few hundredths of a second from the time needed to detect a predator, this circuit improves the animalâs odds of living long enough to reproduce. The automatic operations of System 1 reflect this evolutionary history. No comparably rapid mechanism for recognizing good news has been detected. Of course, we and our animal cousins are quickly alerted to signs of opportunities to mate or to feed, and advertisers design billboards accordingly. Still, threats are privileged above opportunities, as they should be.
the long-term success of a relationship depends far more on avoiding the negative than on seeking the positive. Gottman estimated that a stable relationship requires that good interactions outnumber bad interactions by at least 5 to 1. Other asymmetries in the social domain are even more striking. We all know that a friendship that may take years to develop can be ruined by a single action.
Loss aversion refers to the relative strength of two motives: we are driven more strongly to avoid losses than to achieve gains. A reference point is sometimes the status quo, but it can also be a goal in the future: not achieving a goal is a loss, exceeding the goal is a gain. As we might expect from negativity dominance, the two motives are not equally powerful. The aversion to the failure of not reaching the goal is much stronger than the desire to exceed it.
People often adopt short-term goals that they strive to achieve but not necessarily to exceed. They are likely to reduce their efforts when they have reached an immediate goal, with results that sometimes violate economic logic. New York cabdrivers, for example, may have a target income for the month or the year, but the goal that controls their effort is typically a daily target of earnings. Of course, the daily goal is much easier to achieve (and exceed) on some days than on others. On rainy days, a New York cab never remains free for long, and the driver quickly achieves his target; not so in pleasant weather, when cabs often waste time cruising the streets looking for fares. Economic logic implies that cabdrivers should work many hours on rainy days and treat themselves to some leisure on mild days, when they can âbuyâ leisure at a lower price. The logic of loss aversion suggests the opposite: drivers who have a fixed daily target will work many more hours when the pickings are slim and go home early when rain-drenched customers are begging to be taken somewhere.
**Note:** We struggle to motivate ourselves to exceed goals we have already met
**Tags:** pink
A hardware store has been selling snow shovels for $15. The morning after a large snowstorm, the store raises the price to $20. Please rate this action as: Completely Fair / Acceptable / Unfair / Very Unfair. The hardware store behaves appropriately according to the standard economic model: it responds to increased demand by raising its price. The participants in the survey did not agree: 82% rated the action Unfair or Very Unfair. They evidently viewed the pre-blizzard price as a reference point and the raised price as a loss that the store imposes on its customers, not because it must but simply because it can.
**Note:** A basic rule of fairness they found is that exploiting market power to impose losses on others is unacceptable, especially when the business doesn't need to raise prices to survive but does so only because it can.
The firm has its own entitlement, which is to retain its current profit. If it faces a threat of a loss, it is allowed to transfer the loss to others. A substantial majority of respondents believed that it is not unfair for a firm to reduce its workersâ wages when its profitability is falling. We described the rules as defining dual entitlements to the firm and to individuals with whom it interacts. When threatened, it is not unfair for the firm to be selfish. It is not even expected to take on part of the losses; it can pass them on.
Neuroeconomists (scientists who combine economics with brain research) have used MRI machines to examine the brains of people who are engaged in punishing one stranger for behaving unfairly to another stranger. Remarkably, **altruistic punishment is accompanied by increased activity in the âpleasure centersâ of the brain.** It appears that **maintaining the social order** and the rules of fairness in this fashion **is its own reward**. Altruistic punishment could well be the glue that holds societies together. **However, our brains are not designed to reward generosity as reliably as they punish meanness.** Here again, we find a marked asymmetry between losses and gains.
The expected value of a gamble is the average of its outcomes, each weighted by its probability. For example, the expected value of â20% chance to win $1,000 and 75% chance to win $100â is $275. In the pre-Bernoulli days, gambles were assessed by their expected value. Bernoulli retained this method for assigning weights to the outcomes, which is known as the expectation principle, but applied it to the psychological value of the outcomes. The utility of a gamble, in his theory, is the average of the utilities of its outcomes, each weighted by its probability.
Possibility and certainty have similarly powerful effects in the domain of losses. When a loved one is wheeled into surgery, a 5% risk that an amputation will be necessary is very badâmuch more than half as bad as a 10% risk. Because of the possibility effect, we tend to overweight small risks and are willing to pay far more than expected value to eliminate them altogether. The psychological difference between a 95% risk of disaster and the certainty of disaster appears to be even greater; the sliver of hope that everything could still be okay looms very large.
**Note:** .qa Is the certainty or possibility effect stronger? The certainty effect is stronger.
**Tags:** qa
Probabilities that are extremely low or high (below 1% or above 99%) are a special case. It is difficult to assign a unique decision weight to very rare events, because they are sometimes ignored altogether, effectively assigned a decision weight of zero. On the other hand, when you do not ignore the very rare events, you will certainly overweight them. Most of us spend very little time worrying about nuclear meltdowns or fantasizing about large inheritances from unknown relatives. However, when an unlikely event becomes the focus of attention, we will assign it much more weight than its probability deserves.
**Note:** Furthermore, people are almost completely insensitive to variations of risk among small probabilities. A cancer risk of 0.001% is not easily distinguished from a risk of 0.00001%, although the former would translate to 3,000 cancers for the population of the United States, and the latter to 30.
People overestimate the probabilities of unlikely events. People overweight unlikely events in their decisions.
The probability of a rare event is most likely to be overestimated when the alternative is not fully specified.
You read that âa vaccine that protects children from a fatal disease carries a 0.001% risk of permanent disability.â The risk appears small. Now consider another description of the same risk: âOne of 100,000 vaccinated children will be permanently disabled.â The second statement does something to your mind that the first does not: it calls up the image of an individual child who is permanently disabled by a vaccine; the 99,999 safely vaccinated children have faded into the background.
**Note:** As predicted by denominator neglect, low-probability events are much more heavily weighted when described in terms of relative frequencies (how many) than when stated in more abstract terms of âchances,â ârisk,â or âprobabilityâ (how likely). As we have seen, System 1 is much better at dealing with individuals than categories.
The effect of the frequency format is large. In one study, people who saw information about âa disease that kills 1,286 people out of every 10,000â judged it as more dangerous than people who were told about âa disease that kills 24.14% of the population.â The first disease appears more threatening than the second, although the former risk is only half as large as the latter! In an even more direct demonstration of denominator neglect, âa disease that kills 1,286 people out of every 10,000â was judged more dangerous than a disease that âkills 24.4 out of 100.â
**Note:** Denominator neglect in action
**Tags:** pink
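The denominator-neglect numbers check out with one line of arithmetic each (my own variable names):

```python
rate_frequency = 1286 / 10000   # "1,286 out of every 10,000" = 12.86%
rate_percent = 24.14 / 100      # "24.14% of the population"

# The frequency-framed disease was judged MORE dangerous,
# even though its actual risk is roughly half as large.
print(rate_percent / rate_frequency)  # about 1.88
```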
As in many other choices that involve moderate or high probabilities, people tend to be risk averse in the domain of gains and risk seeking in the domain of losses.
- narrow framing: a sequence of two simple decisions, considered separately
- broad framing: a single comprehensive decision, with four options
**Note:** Humans are by nature narrow framers of information because it makes processing information easier. Broad framing is better in almost every circumstance.
I sympathize with your aversion to losing any gamble, but it is costing you a lot of money. Please consider this question: Are you on your deathbed? Is this the last offer of a small favorable gamble that you will ever consider? Of course, you are unlikely to be offered exactly this gamble again, but you will have many opportunities to consider attractive gambles with stakes that are very small relative to your wealth. You will do yourself a large financial favor if you are able to see each of these gambles as part of a bundle of small gambles and rehearse the mantra that will get you significantly closer to economic rationality: you win a few, you lose a few.
**Note:** This is the sermon Kahneman has ready for Sam if he rejects the offer of a single highly favorable gamble played once, and for anyone who shares his unreasonable aversion to losses.
The outside view shifts the focus from the specifics of the current situation to the statistics of outcomes in similar situations. The outside view is a broad frame for thinking about plans.
**Note:** .qa What is the outside view in economics/statistics? The outside view shifts the focus from the specifics of the current situation to the statistics of outcomes in similar situations. This makes it much less likely you will fall for the optimism of the planning fallacy.
**Tags:** qa
Mental accounts come in several varieties. We hold our money in different accounts, which are sometimes physical, sometimes only mental. We have spending money, general savings, earmarked savings for our childrenâs education or for medical emergencies. There is a clear hierarchy in our willingness to draw on these accounts to cover current needs. We use accounts for self-control purposes, as in making a household budget, limiting the daily consumption of espressos, or increasing the time spent exercising.
finance research has documented a massive preference for selling winners rather than losersâa bias that has been given an opaque label: the disposition effect.
The disposition effect is an instance of narrow framing. The investor has set up an account for each share that she bought, and she wants to close every account as a gain. A rational agent would have a comprehensive view of the portfolio and sell the stock that is least likely to do well in the future, without considering whether it is a winner or a loser.
At least in the United States, taxes provide a strong incentive: realizing losses reduces your taxes, while selling winners exposes you to taxes. This elementary fact of financial life is actually known to all American investors, and it determines the decisions they make during one month of the yearâinvestors sell more losers in December, when taxes are on their mind. The tax advantage is available all year, of course, but for 11 months of the year mental accounting prevails over financial common sense.
A rational decision maker is interested only in the future consequences of current investments. Justifying earlier mistakes is not among the Econ's concerns. The decision to invest additional resources in a losing account, when better investments are available, is known as the sunk-cost fallacy,
Imagine a company that has already spent $50 million on a project. The project is now behind schedule and the forecasts of its ultimate returns are less favorable than at the initial planning stage. An additional investment of $60 million is required to give the project a chance. An alternative proposal is to invest the same amount in a new project that currently looks likely to bring higher returns. What will the company do? All too often a company afflicted by sunk costs drives into the blizzard, throwing good money after bad rather than accepting the humiliation of closing the account of a costly failure.
**Note:** The sunk cost fallacy
people expect to have stronger emotional reactions (including regret) to an outcome that is produced by action than to the same outcome when it is produced by inaction.
In a compelling demonstration of the power of default options, participants played a computer simulation of blackjack. Some players were asked "Do you wish to hit?" while others were asked "Do you wish to stand?" Regardless of the question, saying yes was associated with much more regret than saying no if the outcome was bad! The question evidently suggests a default response, which is, "I don't have a strong wish to do it." It is the departure from the default that produces regret.
**Note:** If you feel it was your action or choice that caused a bad outcome, you will have more regret.
Losses are weighted about twice as much as gains in several contexts: choice between gambles, the endowment effect, and reactions to price changes. The loss-aversion coefficient is much higher in some situations. In particular, you may be more loss averse for aspects of your life that are more important than money, such as health. Furthermore, your reluctance to "sell" important endowments increases dramatically when doing so might make you responsible for an awful outcome.
You can also take precautions that will inoculate you against regret. Perhaps the most useful is to be explicit about the anticipation of regret. If you can remember when things go badly that you considered the possibility of regret carefully before deciding, you are likely to experience less of it. You should also know that regret and hindsight bias will come together, so anything you can do to preclude hindsight is likely to be helpful. My personal hindsight-avoiding policy is to be either very thorough or completely casual when making a decision with long-term consequences.
**Note:** Hindsight is worse when you think a little, just enough to tell yourself later, "I almost made a better choice."
The emotional reactions of System 1 are much more likely to determine single evaluation; the comparison that occurs in joint evaluation always involves a more careful and effortful assessment, which calls for System 2.
**Note:** .qa What is preference reversal? Preference reversal is when a preference flips between single and joint evaluation: two options judged one at a time (single evaluation) can rank the opposite way when judged side by side (joint evaluation), and vice versa.
**Tags:** qa
Salespeople quickly learn that manipulation of the context in which customers see a good can profoundly influence preferences. Except for such cases of deliberate manipulation, there is a presumption that the comparative judgment, which necessarily involves System 2, is more likely to be stable than single evaluations, which often reflect the intensity of emotional responses of System 1. We would expect that any institution that wishes to elicit thoughtful judgments would seek to provide the judges with a broad context for the assessments of individual cases.
A bad outcome is much more acceptable if it is framed as the cost of a lottery ticket that did not win than if it is simply described as losing a gamble. We should not be surprised: losses evoke stronger negative feelings than costs.
choices between gambles and sure things are resolved differently, depending on whether the outcomes are good or bad. Decision makers tend to prefer the sure thing over the gamble (they are risk averse) when the outcomes are good. They tend to reject the sure thing and accept the gamble (they are risk seeking) when both outcomes are negative.
The high-donation countries have an opt-out form, where individuals who wish not to donate must check an appropriate box. Unless they take this simple action, they are considered willing donors. The low-contribution countries have an opt-in form: you must check a box to become a donor. That is all. The best single predictor of whether or not people will donate their organs is the designation of the default option that will be adopted without having to check a box.
**Note:** This is because System 2 would have to engage and think about checking the box to opt out.
### Part 5 Two Selves
#### 35 Two Selves
How can experienced utility be measured? How should we answer questions such as "How much pain did Helen suffer during the medical procedure?" or "How much enjoyment did she get from her 20 minutes on the beach?" The British economist Francis Edgeworth speculated about this topic in the nineteenth century and proposed the idea of a "hedonimeter," an imaginary instrument analogous to the devices used in weather-recording stations, which would measure the level of pleasure or pain that an individual experiences at any moment.
Peak-end rule: The global retrospective rating was well predicted by the average of the level of pain reported at the worst moment of the experience and at its end.
**Note:** .qa What is the peak-end rule? The tendency for people to judge the memory of an experience by its most intense moment (the peak) and by how it ended, while largely neglecting its duration and the rest of the experience.
**Tags:** qa
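The peak-end prediction can be written directly as a toy calculation. The pain series below are made up for illustration, not taken from Kahneman's experimental data:

```python
def remembered_pain(pain_per_minute):
    """Peak-end rule: the retrospective rating is predicted by the
    average of the worst moment and the final moment; duration is
    largely neglected."""
    return (max(pain_per_minute) + pain_per_minute[-1]) / 2

short_trial = [2, 8, 7]          # ends near its peak
long_trial = [2, 8, 7, 5, 3, 1]  # same peak, tapers off gently
```

The longer trial contains strictly more total pain, yet is remembered as milder (4.5 versus 7.5), mirroring the pattern of the colonoscopy and cold-hand studies.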
The experiencing self is the one that answers the question: "Does it hurt now?" The remembering self is the one that answers the question: "How was it, on the whole?" Memories are all we get to keep from our experience of living, and the only perspective that we can adopt as we think about our lives is therefore that of the remembering self.
A comment I heard from a member of the audience after a lecture illustrates the difficulty of distinguishing memories from experiences. He told of listening raptly to a long symphony on a disc that was scratched near the end, producing a shocking sound, and he reported that the bad ending "ruined the whole experience." But the experience was not actually ruined, only the memory of it. The experiencing self had had an experience that was almost entirely good, and the bad end could not undo it, because it had already happened.
An inconsistency is built into the design of our minds. We have strong preferences about the duration of our experiences of pain and pleasure. We want pain to be brief and pleasure to last. But our memory, a function of System 1, has evolved to represent the most intense moment of an episode of pain or pleasure (the peak) and the feelings when the episode was at its end. A memory that neglects duration will not serve our preference for long pleasure and short pains.
**Note:** Our remembering self is bad at taking duration into account. Only our experiencing self really takes duration into account when deciding whether something is pleasurable or painful.
"This is a bad case of duration neglect. You are giving the good and the bad part of your experience equal weight, although the good part lasted ten times as long as the other."
**Tags:** qa
The psychologist Ed Diener and his students wondered whether duration neglect and the peak-end rule would govern evaluations of entire lives. They used a short description of the life of a fictitious character called Jen, a never-married woman with no children, who died instantly and painlessly in an automobile accident. In one version of Jen's story, she was extremely happy throughout her life
**Tags:** blue
Another version added 5 extra years to Jen's life, so that she now died at either 35 or 65. The extra years were described as pleasant, but less so than before.
**Tags:** blue
The results provided clear evidence of both duration neglect and a peak-end effect. In a between-subjects experiment (different participants saw different forms), doubling the duration of Jen's life had no effect whatsoever on the desirability of her life, or on judgments of the total happiness that Jen experienced.
**Tags:** blue
In many cases we evaluate touristic vacations by the story and the memories that we expect to store. The word memorable is often used to describe vacation highlights, explicitly revealing the goal of the experience.
Odd as it may seem, I am my remembering self, and the experiencing self, who does my living, is like a stranger to me.
"You seem to be devoting your entire vacation to the construction of memories. Perhaps you should put away the camera and enjoy the moment, even if it is not very memorable?"
When happily in love, we may feel joy even when caught in traffic, and if grieving, we may remain depressed when watching a funny movie. In normal circumstances, however, we draw pleasure and pain from what is happening at the moment, if we attend to it. To get pleasure from eating, for example, you must notice that you are doing it.
**Note:** Our obsession with time in the modern era has made it harder to enjoy moments for what they are. We constantly feel a need to go, go, go.
The gigantic samples allow extremely fine analyses, which have confirmed the importance of situational factors, physical health, and social contact in experienced well-being. Not surprisingly, a headache will make a person miserable, and the second best predictor of the feelings of a day is whether a person did or did not have contacts with friends or relatives. It is only a slight exaggeration to say that happiness is the experience of spending time with people you love and who love you.
**Note:** Measurements of experienced well-being on a broad scale. I don't know if I agree with this statement anymore. In the long term, being in contact with people you love will lead to more sustained happiness. However, there have been days where I was consumed by the flow state and felt perfectly content.
people's evaluations of their lives and their actual experience may be related, but they are also different. Life satisfaction is not a flawed measure of their experienced well-being, as I thought some years ago. It is something else entirely.
"The easiest way to increase happiness is to control your use of time. Can you find more time to do the things you enjoy doing?"
"Beyond the satiation level of income, you can buy more pleasurable experiences, but you will lose some of your ability to enjoy the less expensive ones."
Women who have a mate spend less time alone, but also much less time with friends. They spend more time making love, which is wonderful, but also more time doing housework, preparing food, and caring for children, all relatively unpopular activities. And of course, the large amount of time married women spend with their husband is much more pleasant for some than for others. Experienced well-being is on average unaffected by marriage, not because marriage makes no difference to happiness but because it changes some aspects of life for the better and others for the worse.
**Note:** The explanation for why marriage doesn't have a huge effect on life satisfaction in the long term.
The goals that people set for themselves are so important to what they do and how they feel about it that an exclusive focus on experienced well-being is not tenable. We cannot hold a concept of well-being that ignores what people want. On the other hand, it is also true that a concept of well-being that ignores how people feel as they live and focuses only on how they feel when they think about their life is also untenable. We must accept the complexities of a hybrid view, in which the well-being of both selves is considered.
**Note:** Well-being includes two things:
1. How you feel about your life reflecting on it.
2. How you feel about your life experiencing it.
This takes into account both the remembering and experiencing self.
This is the essence of the focusing illusion, which can be described in a single sentence: Nothing in life is as important as you think it is when you are thinking about it.
**Tags:** qa
To appreciate how strong this illusion is, take a few seconds to consider the question: How much pleasure do you get from your car?
An answer came to your mind immediately; you know how much you like and enjoy your car. Now examine a different question: "When do you get pleasure from your car?" The answer to this question may surprise you, but it is straightforward: you get pleasure (or displeasure) from your car when you think about your car, which is probably not very often.
Daniel Gilbert and Timothy Wilson introduced the word miswanting to describe bad choices that arise from errors of affective forecasting. This word deserves to be in everyday language. The focusing illusion (which Gilbert and Wilson call focalism) is a rich source of miswanting. In particular, it makes us prone to exaggerate the effect of significant purchases or changed circumstances on our future well-being.
**Note:** In reality we won't get nearly as much joy as we think we will from achieving something because of hedonic adaptation.
We believe that duration is important, but our memory tells us it is not. The rules that govern the evaluation of the past are poor guides for decision making, because time does matter. The central fact of our existence is that time is the ultimate finite resource, but the remembering self ignores that reality. The neglect of duration combined with the peak-end rule causes a bias that favors a short period of intense joy over a long period of moderate happiness. The mirror image of the same bias makes us fear a short period of intense but tolerable suffering more than we fear a much longer period of moderate pain.
A moment can also gain importance by altering the experience of subsequent moments. For example, an hour spent practicing the violin may enhance the experience of many hours of playing or listening to music years later.
**Note:** A few days spent honing your memorization systems can allow you to have better experiences in every future occasion where you need to remember things.
The issue of which of the two selves matters more is not a question only for philosophers; it has implications for policies in several domains, notably medicine and welfare. Consider the investment that should be made in the treatment of various medical conditions, including blindness, deafness, or kidney failure. Should the investments be determined by how much people fear these conditions? Should investments be guided by the suffering that patients actually experience? Or should they follow the intensity of the patients' desire to be relieved from their condition and by the sacrifices that they would be willing to make to achieve that relief?
**Note:** Is the remembering self or experiencing self more important?
The only test of rationality is not whether a person's beliefs and preferences are reasonable, but whether they are internally consistent. A rational person can believe in ghosts so long as all her other beliefs are consistent with the existence of ghosts. A rational person can prefer being hated over being loved, so long as his preferences are consistent. Rationality is logical coherence, reasonable or not. Econs are rational by this definition, but there is overwhelming evidence that Humans cannot be.
What can be done about biases? How can we improve judgments and decisions, both our own and those of the institutions that we serve and that serve us? The short answer is that little can be achieved without a considerable investment of effort. As I know from experience, System 1 is not readily educable. Except for some effects that I attribute mostly to age, my intuitive thinking is just as prone to overconfidence, extreme predictions, and the planning fallacy as it was before I made a study of these issues. I have improved only in my ability to recognize situations in which errors are likely:
The key insight from the above example is that evaluations of utility are not purely dependent on the current state. Utility depends on changes from one's reference point. Utility is attached to changes of wealth, not states of wealth. And losses hurt more than gains ([View Highlight](https://www.shortform.com/app/highlights/5f76af43-ae8a-4739-bbb1-9b0b55e01943))
- Note: Context is everything. The main reason the rich don't support higher taxes, despite the fact that the outcome could leave them better off, is that they fail to account for the fact that everyone would be taxed, not just them, so relatively they would keep the same position.
## New highlights added 10-03-2023 at 12:41 PM
(Shortform note: the idea of anchoring can be taken beyond numbers into ideas. If someone tells you an extremely outrageous idea, then later gives you a second idea that is less extreme, the second idea sounds less controversial than if he had presented it to you first. That's because you've anchored to the first extreme idea.) ([View Highlight](https://www.shortform.com/app/highlights/b018579f-e81c-4e0f-810e-3eb3e99839d7))
Related to hindsight bias, outcome bias is the tendency to evaluate the quality of a decision when the outcome is already known. People who succeeded are assumed to have made better decisions than people who failed.
This causes a problem where people are rewarded and punished based on outcome, not on their prior beliefs and their appropriate actions. People who made the right decision but failed are punished more than those who took irresponsible risks that happened to work out. ([View Highlight](https://www.shortform.com/app/highlights/9a0f1372-3985-456b-9e0b-708b98b438f0))
Simple algorithms are surprisingly good predictors. Even formulas that put equal weighting on its factors can be as accurate as multiple-regression formulas, since they avoid accidents of sampling. Here are a few examples of simple algorithms that predict surprisingly accurately: ([View Highlight](https://www.shortform.com/app/highlights/c0798785-6ed9-46fd-a691-92fe05e4d822))
When can you trust human intuition? Kahneman argues accurate human intuition is developed in situations with two requirements:
1. An environment that is sufficiently regular to be predictable, with fast feedback
2. Prolonged practice to learn these regularities ([View Highlight](https://www.shortform.com/app/highlights/f1621bb7-fab7-4924-9fe9-7d1261879f16))
Humans are efficient learners and generally don't miss obvious predictors. However, algorithms do win at detecting signals within noisy environments. ([View Highlight](https://www.shortform.com/app/highlights/72f505c5-2c80-4813-8f2d-83fbb133861d))
Utility depends on changes from one's reference point. Utility is attached to changes of wealth, not states of wealth. And losses hurt more than gains ([View Highlight](https://www.shortform.com/app/highlights/5a211545-918a-452f-9dbe-9e59550aed4d))
- Note: Prospect theory is an offshoot of utility theory that measures the utility of an outcome by its change relative to a reference point, not by simple absolute metrics.
Prospect Theory in 3 Points
1. When you evaluate a situation, you compare it to a neutral reference point.
Usually this refers to the status quo you currently experience. But it can also refer to an outcome you expect or feel entitled to, like an annual raise. When you don't get something you expect, you feel crushed, even though your status quo hasn't changed.
2. Diminishing marginal utility applies to changes in wealth (and to sensory inputs).
Going from $100 to $200 feels much better than going from $900 to $1,000. The more you have, the less significant the change feels.
3. Losses of a certain amount trigger stronger emotions than a gain of the same amount.
Evolutionarily, the organisms that treated threats more urgently than opportunities tended to survive and reproduce better. We have evolved to react extremely quickly to bad news. ([View Highlight](https://www.shortform.com/app/highlights/0d6629d7-f839-453b-8b7d-60d254f5b0e9))
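The three points above can be captured in the standard prospect-theory value function. The exponent 0.88 and loss-aversion coefficient 2.25 are the median estimates from Tversky and Kahneman's 1992 follow-up paper, used here as illustrative defaults rather than canonical constants:

```python
def prospect_value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value function: concave for gains, convex
    for losses (diminishing sensitivity in both directions), with
    losses weighted lam times as heavily as gains."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** alpha)
```

Checking the properties: the step from $0 to $100 feels bigger than the step from $100 to $200 (point 2), and a $100 loss carries roughly 2.25 times the weight of a $100 gain (point 3). Point 1 is built in, since `x` is measured as a change from the reference point, not as total wealth.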
While Bernoulli presented utility as an absolute logarithmic scale starting from 0, prospect theory calibrates the curve to the reference point. People feel differently depending on whether theyâre gaining or losing money. ([View Highlight](https://www.shortform.com/app/highlights/3880c99a-9d79-4d0c-9f70-61ca083b3013))
Why did it take so long for someone to notice the problems with Bernoulli's conception of utility? Kahneman notes that once you have accepted a theory and use it as a tool in your thinking, it is very difficult to notice its flaws. Even if you see inconsistencies, you reason them away, with the impression that the model somehow takes care of it, and that so many smart people who agree with your theory can't all be wrong ([View Highlight](https://www.shortform.com/app/highlights/cb27d776-89ab-4205-94b1-7e47efdcf566))
Prospect theory has holes in its reasoning as well. Kahneman argues that it can't handle disappointment: not all zeroes are the same. Consider two scenarios:
1. 1% chance to win $1 million and 99% chance to win nothing
2. 99% chance to win $1 million and 1% chance to win nothing.
In both these cases, prospect theory would assign the same value to "winning nothing." But losing in case 2 clearly feels worse. The high probability of winning has set up a new reference point, possibly at, say, $800k.
Prospect theory also can't handle regret, in which failing to win a line of gambles causes losses to become increasingly more painful.
People have developed more complicated models that do factor in regret and disappointment, but they haven't yielded enough novel findings to justify the extra complexity. ([View Highlight](https://www.shortform.com/app/highlights/7ef3d56c-ef25-4104-a2e0-4f773cfc7cec))
We are driven more to avoid failing a goal than to exceed it. Failing a goal is perceived as a loss; exceeding the goal is a gain ([View Highlight](https://www.shortform.com/app/highlights/33e49d27-6dd4-4475-abcc-c160744e0c5a))
- Note: This is why I generally prefer punishments for not reaching goals over rewards for reaching them.
First, I should do the activity for the activity itself, not for a reward.
Second, we are more driven by loss than by gain. One caveat: black-hat motivation can be de-energizing and tiring in the long run.
In another reframing of loss aversion, we are biased toward keeping the status quo. Two effects are at play here: 1) the endowment effect exaggerates the value of what you have, warping your prior indifference curve, and 2) loss aversion makes you hesitant to take on risky bets, since losses are more painful than gains. ([View Highlight](https://www.shortform.com/app/highlights/12d02c80-e0c0-42c1-84bc-08d101783043))
In negotiations, concessions are painful because they represent losses from the status quo. Both parties are trying to make concessions, but because losses outweigh gains, the concessions from the other side don't make up for your personal concessions. This is why negotiations can end up feeling like everyone walks away unhappy ([View Highlight](https://www.shortform.com/app/highlights/b6b0983b-f2d7-48a2-b5b6-9d9c8c22f00a))
In another twist, the feeling of regret depends on your default action and whether you deviate from it. If you do something uncharacteristic and fail, you're more likely to feel regret, but others are less likely to blame you. ([View Highlight](https://www.shortform.com/app/highlights/edcf0a01-b7e5-4e8e-ad4f-f35456c782c1))
- Note: We need to realize NOT doing something is in itself a decision.
(Sadly, this might drive some people to blame victims of rape, who allegedly were "asking for it" through their typical dress or behavior.) ([View Highlight](https://www.shortform.com/app/highlights/bb0b07bd-0df7-4026-a737-f2df9a7f503c))
As you journal your decisions, note the possibility of regret before deciding. Then if a bad outcome happens, remember that you considered the possibility of regret before you made your decision. This avoids the hindsight bias and the feeling of "I almost made a better choice and I should have known better." ([View Highlight](https://www.shortform.com/app/highlights/130cdbab-296b-4fc4-84ab-9808a39c3231))
When an event is made specific or vivid, people become less sensitive to probability (lower chances are overestimated and higher chances are underestimated).
When an event is specifically defined, your mind constructs a plausible scenario in which it can happen ([View Highlight](https://www.shortform.com/app/highlights/5f3586ed-063c-4ec9-9605-8acc9854256c))
People were asked to estimate the chances that each of 8 NBA playoff teams had of winning the championship. The sum of chances for all teams should of course total 100%. But for these subjects, the sum was 240%! The reason: when considering each team in isolation, it was easy to construct a plausible path to winning, while the alternative (one of the 7 other teams winning) was a diffuse possibility. Thus each individual team's chances were overestimated when attention was focused on that team.
This effect disappeared when the scenario was simplified, and subjects were asked to estimate the chance of the winning team coming from the Eastern vs the Western conference. In this case, the event and its alternative were equally specific, and it was clearer that the probabilities should add to 100%. ([View Highlight](https://www.shortform.com/app/highlights/c4190fd0-b030-4cbd-97f1-0e709086dbcc))
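One crude antidote is to force a set of isolated estimates to behave like an exhaustive probability distribution. A minimal sketch, with hypothetical team estimates chosen to sum to 240% like the subjects' answers:

```python
def normalize(estimates):
    """Rescale isolated per-option estimates so they sum to 1,
    as chances over an exhaustive set of outcomes must. This
    fixes the overall inflation, though not any relative
    distortions between options."""
    total = sum(estimates.values())
    return {k: v / total for k, v in estimates.items()}

# Hypothetical isolated estimates that sum to 240%.
raw = {"A": 0.40, "B": 0.35, "C": 0.35, "D": 0.30,
       "E": 0.30, "F": 0.25, "G": 0.25, "H": 0.20}
```

Rescaling does mechanically what the Eastern-vs-Western framing did psychologically: it makes the full set of alternatives, not just the focal one, visible in each estimate.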
In some cases, you might exploit these biases for your own gain to overcome your hesitation:
If you're hesitant because you know you have a small chance of success (like starting a new business), paint a vivid picture of the success you could enjoy. This will help you overestimate the chances of success. (Though use this with caution, because you might not want to delude yourself too much.) ([View Highlight](https://www.shortform.com/app/highlights/9f93d237-56b0-48e3-be3a-30f3b69d8680))
Antidotes to Specificity and Denominator Neglect ([View Highlight](https://www.shortform.com/app/highlights/621ac0b6-d679-4045-99d6-b5328689e651))
When you hear a vivid story about how things will work, strip away the irrelevant details to regain sensitivity to probabilities. ([View Highlight](https://www.shortform.com/app/highlights/9f01a6f8-987d-4dd9-89ae-87c1d335bf2d))
When you evaluate a decision, you're prone to focus on the individual decision, rather than the big picture of all decisions of that type. A decision that might make sense in isolation can become very costly when repeated many times. ([View Highlight](https://www.shortform.com/app/highlights/a9470df9-1824-461a-9118-f563086d8dbf))
This is the difference between narrow framing and broad framing. The ideal broad framing is to consider every combination of options to find the optimum. This is obviously more cognitively taxing, so instead you use the narrow heuristic: what is best for each decision at each point?
An analogy here is to focus on the outcome of a single bet, rather than assembling a portfolio of bets.
Yet each single decision in isolation can be hampered by probability misestimations and inappropriate risk aversion/seeking. When you repeat this single suboptimal decision over and over, you can rack up large costs over time ([View Highlight](https://www.shortform.com/app/highlights/845b27f8-d31c-46cd-8070-382d99f01771))
- Note: One practical implication is to always take small gambles whose expected gain exceeds the expected loss, and ignore your loss aversion. Over a long enough time scale, this will pay off massively.
Judgment and preferences are coherent within categories, but may be incoherent when comparing across categories ([View Highlight](https://www.shortform.com/app/highlights/159d9823-6c26-48c2-99b4-a3e39ea114e4))
Antidotes to Narrow Framing and Reversals ([View Highlight](https://www.shortform.com/app/highlights/b021df24-69c3-4004-b470-0bb7a55ed794))
Reduce all decisions down to a single fungible metric, in the broadest account possible, to allow global calibration.
Decisions around human life can be assessed in terms of a single metric like Quality-adjusted life years per $. This way, repairing glaucoma can be compared to childhood cancers.
Projects and investments should be considered in terms of ROI or rate of return, so that all possible spending can be assessed on the same scale from a global account.
Personal decisions can be considered in terms of happiness per $ or per hour. ([View Highlight](https://www.shortform.com/app/highlights/96c7e170-9ad0-41d3-9067-77ab07b2d354))
- Note: While I can see the logic in these examples, some depth can be lost by reducing everything down to a single metric.
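A minimal sketch of the "single fungible metric" idea, with entirely hypothetical QALY and cost figures:

```python
def rank_by_value(options):
    """Rank options by QALYs gained per dollar, best value first,
    so unlike interventions can be compared on one scale."""
    return sorted(options,
                  key=lambda k: options[k]["qalys"] / options[k]["cost"],
                  reverse=True)

# Illustrative figures only, not real cost-effectiveness data.
interventions = {
    "glaucoma repair": {"qalys": 4.0, "cost": 20_000},
    "childhood cancer treatment": {"qalys": 30.0, "cost": 300_000},
}
```

The ranking is only as good as the metric: as the note above says, collapsing everything to one number trades depth for comparability.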
We've shown that humans are not rational in the decisions they make. Unfortunately, when society believes in human rationality, it also promotes a libertarian ideology in which it is immoral to protect people against their choices. "Rational people make the best decisions for themselves." ([View Highlight](https://www.shortform.com/app/highlights/c6838b85-3c41-49f4-8a15-cb266843504c))
This belief in rationality also leads to a harsher conclusion: people apparently deserve little sympathy for putting themselves in worse situations. Elderly people who don't save get little more sympathy than people who complain about a bill after ordering at a restaurant. Rational agents don't make mistakes. ([View Highlight](https://www.shortform.com/app/highlights/8cf1ab1c-24b0-40b2-8e6c-19b6ff2eb6ba))
Behavioral economists believe people do make mistakes and need help to make more accurate judgments. They believe freedom is a virtue worth having, but it has a cost borne by individuals who make bad choices (that are not completely their fault) and by a society that feels obligated to help them. ([View Highlight](https://www.shortform.com/app/highlights/4fcfd814-b242-4866-8754-efcf43f8c200))