In the 1970s, I convinced some officials in the Israeli Ministry of Education that high school students needed a course in judgment and decision-making. The team I assembled to design the curriculum and write the textbook included several experienced teachers, some of my psychology students, and Seymour Fox, then dean of the School of Education at Hebrew University and an expert in curriculum development.
After meeting every Friday afternoon for about a year, we had developed a detailed syllabus, written a few chapters, and run a few sample lessons. We all felt we were making good progress. Then, while we were discussing procedures for estimating uncertain quantities, an exercise occurred to me. I asked everyone to write down an estimate of how long it would take us to deliver a finished textbook to the Ministry of Education. I was following a procedure we had already planned to build into the curriculum: the proper way to elicit information from a group is not to begin with an open discussion but to collect each person's judgment privately and independently. I gathered the estimates and wrote the results on the board. They clustered narrowly around two years: one and a half years at the low end, two and a half years at the high end.
I then turned to Seymour, our curriculum expert, and asked whether he could think of other teams like ours that had developed a course from scratch. Seymour said he could think of quite a few, and it turned out he was familiar with the details of several. I asked him to think of those teams when they were at the same stage in the process that we had reached. How long did it take them, from that point, to complete their textbook projects?
He fell silent. When he finally spoke, it seemed to me that he was blushing, embarrassed by his own answer: "You know, I never realized this before, but the truth is, not all the teams at a stage comparable to ours ever finished their task. A substantial fraction of them never completed the job at all."
This was worrisome; we had never considered the possibility that we might fail. With rising anxiety, I asked him how large he estimated that fraction to be. "About 40 percent," he answered. At that moment, a pall seemed to settle over the room.
"And those who finished," I asked, "how long did it take them?"
"I can't think of any team that did it in less than seven years," Seymour said, "nor any team that took more than a decade."
I grasped at straws: "When you compare our skills and resources to those of the other groups, how good are we? How would you rank us against those teams?"
Seymour did not hesitate long this time.
"We're below average," he said, "but not by much."
This came as a complete surprise to all of us, including Seymour, whose own earlier estimate had been squarely within the group's optimistic consensus. Until I prompted him, there was no connection in his mind between what he knew about the history of other teams and his forecast of our future.
We should have quit that day. None of us was willing to invest six more years of work in a project with a 40 percent chance of failure. Yet although we must have sensed that persisting was unreasonable, the warning did not provide an immediately compelling reason to quit. After a few minutes of desultory debate, we gathered ourselves and carried on as if nothing had happened. Facing a choice, we gave up rationality rather than give up the enterprise.
The book was finally completed eight years later. By then I was no longer living in Israel and had long since ceased to be part of the team, which completed the task after many unpredictable difficulties. The Ministry of Education's initial enthusiasm for the idea had faded by the time the textbook was delivered, and it was never used.
Why the inside view doesn't work
This embarrassing episode remains one of the most instructive experiences of my professional life. I had stumbled onto a distinction between two profoundly different approaches to forecasting, which Amos Tversky and I later labeled the inside view and the outside view.
The inside view is the one that all of us, including Seymour, spontaneously adopted to assess the future of our project. We focused on our specific circumstances and searched for evidence in our own experience. We had a sketchy plan: we knew how many chapters we were going to write, and we knew how long it had taken us to write the two chapters we had already done. The more cautious among us probably added a few months to that estimate as a margin of error.
But extrapolating was a mistake. We were forecasting on the basis of the information in front of us, yet the chapters we wrote first were probably easier than the rest, and our commitment to the project was probably then at its peak. The main problem was that we failed to allow for what Donald Rumsfeld famously called the "unknown unknowns." There was no way for us to foresee, that day, the succession of events that would cause the project to drag on for so long: divorces, illnesses, crises of coordination with bureaucracies. Such events not only slow the writing itself; they also produce long stretches during which little or no progress is made at all. The same must have been true, of course, of the other teams Seymour knew about. Like us, the members of those teams did not know the odds they were facing. There are many ways for any plan to fail, and although most of them are too improbable to be anticipated individually, the likelihood that something will go wrong in a big project is high.
How an outside perspective can help
The second question I asked Seymour directed his attention away from us and toward a class of similar cases. Seymour estimated the base rate of success for that reference class: a 40 percent failure rate, and seven to ten years to completion. His informal survey certainly fell short of scientific standards of evidence, but it provided a reasonable basis for a baseline prediction: the prediction you make about a case when you know nothing except the category to which it belongs. That baseline should be the anchor for further adjustments. If you are asked to guess the height of a woman about whom you know only that she lives in New York City, for example, your baseline prediction is your best guess of the average height of women in that city. If you are then given case-specific information—say, that the woman's son is the starting center of his high school basketball team—you adjust your estimate away from that average in the appropriate direction. Seymour's comparison of our team to the others suggested that the forecast of our outcome was slightly worse than the baseline prediction, which was already grim.
The spectacular accuracy of the outside-view forecast in our case was surely a fluke and should not count as evidence for the validity of the outside view. The argument for the outside view should be made on general grounds: if the reference class is properly chosen, the outside view will give an indication of where the ballpark is, and it may suggest, as it did in our case, that the inside-view forecasts are not even close to it.
Daniel Kahneman is professor emeritus of psychology and public affairs at the Woodrow Wilson School of Public and International Affairs at Princeton University. He received the 2002 Nobel Prize in Economics for his pioneering work on prospect theory, which challenged rational models of judgment and decision-making. This article is an edited excerpt from his new book, Thinking, Fast and Slow, published by Farrar, Straus and Giroux (US), Doubleday (Canada), and Allen Lane (UK). Copyright © 2011 Daniel Kahneman. All rights reserved.