The behavioral scientist/psychologist explains why people are not really rational in the way that economists like to think they are.
Economists work under the assumption that people make rational decisions. Psychologists don’t, at least not in the way traditional economists think about the rational model. Behavioral scientist Eldar Shafir straddles both worlds. In his quest to relieve the tension between the rational model and real life, Shafir reminds us that conflict, context, and uncertainty can’t be fully accounted for by that model.

Eldar Shafir is the William Stewart Tod Professor of Psychology and Public Affairs at Princeton University, and co-founder and scientific director at ideas42, a social science R&D lab. His current research focuses on decision making in contexts of poverty and on the application of behavioral research to policy. Shafir was a keynote speaker at the Federal Reserve Bank of Cleveland’s Policy Summit on Housing, Human Capital, and Inequality in September 2013. Mark Sniderman, the Cleveland Fed’s executive vice president and chief policy officer, interviewed Shafir during his visit. An edited transcript follows.
Sniderman: You began your career as a cognitive scientist, but now are in the business of behavioral economics. How did that happen?
Shafir: While at MIT, I attended a series of lectures by Amos Tversky [a cognitive psychologist who challenged economic theory by showing that people frequently do not behave rationally]. I didn’t even know the topic before, but I was blown away and thought it was wonderful and ended up going to work with him. Soon after, I wrote my first real economics-focused paper with Amos and Peter Diamond [an economist and professor at MIT].
To me, it felt very much like psychology. We were just asking about people’s perception of money, just like any psychologist would do, but it ended up fitting very well with an interesting set of issues having to do with economics. I found myself reading about the Phillips curve much more than I would have otherwise! But the research felt very behavioral. It didn’t feel like I had stopped doing something and now I was doing something else; I was doing the same thing.
Sniderman: Are there fundamental differences, nevertheless, in how economists and psychologists think about the way people make decisions?
Shafir: Yes. There are some young behavioral economists who are starting to change those intuitions a bit. But I think the classical economists really are kind of enamored with and believe in the idea that people make—on the whole, to a large extent—rational decisions. That might fail occasionally, but altogether, when people attend to things and care enough, they make rational decisions as in the rational model. I think many psychologists would rarely even consider that a possibility.
Sniderman: Economists make the fundamental assumption that if people had all the information, they would process it in a logical way. Do you think that can or should change in economics?
Shafir: I can divide economists into two camps. There are those who do their work on rational agents, and it’s beautiful work and quite sophisticated and they see no reason to go any further. It’s about an idealized world in which people are rational. Then there are economists who want to have an impact on real life, who want to enter policy issues and have work that’s relevant to real human affairs. I think the latter will have to change their assumptions because it’s pretty clear right now those assumptions do not capture very well what people do.
One thing that’s good to keep in mind is that the rational model of economics, though it fails to describe people, is an empirical product in a sense. It correctly describes what people think it means to be rational; it’s not just a philosophical creation that went nowhere. Rational analysis is actually a pretty deep empirical description—not of what people do—but of what they consider to be a rational way of going about making decisions. It’s not a theory that was debunked and thrown away; it’s a theory that really captures the heart and mind of many people. It’s very appealing and also has a force of being right in some deep way. It’s not about what we do, but how we would like to act. It’s a little bit like ethics. You’re not going to throw away the Ten Commandments because it turns out people violate them. They’re still there, they’re still good; it’s just they’re not a good description of how we act. So it’s not that there is a good theory and a bad theory, but rather opposing weights on what people consider is the right thing to do and what people in fact do in practice.
Sniderman: Could we simply relax the assumption that people are processing information in a rational way, allowing for a certain type of bias in the model and then still use all the math to solve it?
Shafir: People have tried that within very limited models on specific domains, to keep the structure and relax some assumptions, but I think in the grander scheme of things, apart from selectively modeling very specific phenomena, the changes would have to be massive.
Maybe we’re lacking imagination. Maybe there will be a new cohort or ways of thinking about it that will retain some of the beautifully sophisticated economic instruments, and yet manage to become more faithful to what people actually do. But for now at least, we have on the one hand models that are very impressive, but even with all the relaxations, still fail to capture what people actually do. And then we have the psychology, which sort of lacks a good general model. Psychology provides a collection of interesting phenomena and interesting observations and some very nice theories, but they’re always very specific to the area where you work. There isn’t a generalized model that anybody can just grab and use in the behavioral sciences. We just haven’t reached anything like that. It looks from here like we never will, but you never know.
Sniderman: Is there an equivalent in behavioral psychology to economists’ rational agents?
Shafir: Psychologists don’t think that way. Those who study decision making basically study some specific aspect of individuals’ decisions, though they may use mathematical structures to try to model what they’re studying. The same holds for the study of vision, color perception, divided attention, language acquisition, stereotyping, conformity, or other more or less technical areas. It’s always highly compartmentalized to specific areas of cognitive function or behavior, and it never gets to the generalized level of formalism and cleanliness that you get from economic theorizing.
Sniderman: You’ve pointed out that sometimes people behave in the real world in ways that are not at all what you would predict from lab experiments. Can you expand on this?
Shafir: This problem is true for my field, but also for pharmaceuticals, nutrition, experimental game theory, or anything else. Studies do not always capture what happens when people start living their normal lives. It’s certainly true in the behavioral sciences. I think to some extent one develops a bit of an intuition: What are the kinds of cases that will extend more easily than others? If I study your capacity to retain a number of items in short-term memory, it’s not clear why everyday life would be very different from the lab. But if I decide to study your tendency to contribute to charitable organizations, or to exercise, then of course in a lab you can do, and believe you would do, things you’d never do in real life. So a lot depends on the extent to which you might expect a divide between lab-based and real-world or field experiments.
Having said that, the dream in many cases is to basically do both. You often start from the lab to see if you have a phenomenon that seems to be real, then you go out there and see if it replicates or if it extends to real settings. It gets even messier than that because in behavioral science, even when you replicate it in Washington it doesn’t mean you replicate it, in precisely the same format, in Stockholm. It gets trickier. There are general principles that you can replicate. So, for example, we know that people pay more attention to some things and that they pay less attention to others, and that’s going to influence what they do, and that’s going to replicate. But what it is they pay attention to and what they neglect might differ from place to place. So, again, it might take a little bit of intuitive juggling to understand what might replicate from place to place and what might not, without some relevant changes.
We did a big study a few years ago where we sent letters to clients of a money lender in South Africa inviting them to apply for a loan. Among the things we manipulated was the interest rate that was offered to people randomly—from 3 to 12 percent monthly. Along with that, we manipulated several other dimensions of the letter, like whether you had a picture of a man or a woman or no picture at all at the bottom of the letter. We did all the comparisons among 60,000 participants, and they showed that as the interest rate went up, the take-up of the loans went down (as you would expect). But when we switched the picture on the bottom of this letter from a man to a woman, the effect on take-up of the loan (these were all real loans, amounting to roughly three months of people’s actual income) was equivalent to lowering the interest rate by roughly four and a half percentage points monthly.
So, what do we learn there? We learn, from my perspective, that little manipulations can have an impact much greater than we thought. I wouldn’t say we learn that switching the picture from a woman to a man or from a man to a woman in, say, Germany, would get the same result of four percentage points. You might have to make a bigger switch or a more subtle one. Those nuances will change from place to place, but the fact that small elements people are unaware of capture their attention and shape what they do—that’s what I would say is rather universal. So that’s the game you play: You can try to find universals that drive human behavior, realizing that the nuances, the details, and the extent of the effects will change from one context to another, and between the lab and the field.
Sniderman: Is it possible that people will become more aware of their environment being manipulated and, consequently, learn to neutralize it?
Shafir: Awareness is good. When you’re offered three for the price of two, you detect and you notice that manipulation. You might grow more accustomed to it and be less enthralled every time this happens. But a lot of things having to do with persuasion and behavior change are at a level that you don’t even perceive. You wouldn’t even know that it’s happening to you at that moment. You wouldn’t understand that it is a manipulation. So in those cases, it’s very unlikely people will learn to be immune to it.
Sniderman: It can be tempting to say that the way people make decisions just reveals their personality. Does that have any meaning to a cognitive scientist?
Shafir: Very little. [It has relevance] to some psychologists studying individual differences, but the cognitive view is largely focused on understanding how the machinery we carry behind the eyes and between the ears works and the assumption is the differences between us are not particularly interesting. How does vision work? How does memory work? It’s memory that makes you who you are as opposed to any other creature in the animal kingdom. And the fact that your memory is a bit better (whatever that means) than mine makes very little difference. That’s how cognitive psychologists typically think about it.
Sniderman: Do traits, like procrastination for example, end up having cognitive roots to them?
Shafir: Yes, at least to some degree, procrastination can arise from distraction, for example. I decide to exercise today, and then I don’t simply because I forget. Or, as part of my diet, I might decide to refrain from eating bread and then sit down and eat it, not because of weakness of will, but simply because I am distracted and I fail to notice it. That gets into an area that is more subtle, where cognition meets the contexts and cultures in which people behave. Those things are messy. The behavior we exhibit is a mix of what we bring from within and the context in which we function. Most behavioral researchers are strongly biased toward thinking about context. So if we take classic studies, like [Stanley] Milgram’s obedience experiments [in which participants believed they were administering electric shocks], it turns out that it doesn’t matter who you take; except for the occasional outlier, everybody will be heavily influenced by contextual nuance that we never thought would have a big impact. Context clearly trumps small individual differences.
Sniderman: What is the difference between the term “context” from a cognitive science perspective and the term “culture”?
Shafir: Context is what characterizes the situation now: “Who is in the room?”, “What do you see?”, “How is it presented?”, “How is it described?”, “What are you thinking of at that moment?” Some of it, of course, could occasionally correlate with culture, but it’s much more malleable from moment to moment than culture tends to be. Most of the inconsistencies across behavioral studies have to do with slight manipulations of context, all occurring within the same culture; you just shift the description, the surroundings, the set of options, and things shift and change.
Sniderman: When it comes to cognitive behavior, can people train themselves to change or is it pretty much the way a person is wired?
Shafir: There are gradations. Some things are really inherent to the way the brain functions and some are things that one could develop a better intuition about. Sunk cost is a great example because that’s a fallacy that economists discovered, not psychologists. This is the idea that you order an expensive meal and it’s terrible and you eat it anyway because it was expensive. Economists say “This is silly, you already paid the money, you’re $20 poorer. If the food is not good, don’t eat it.” That’s a classic fallacy people make: They initially invest in something and then persist with it because they invested. There is some evidence that this is something you can learn about and can become aware of doing; once people get lectured about sunk costs, they tend to do it a little bit less. They realize “Wait a second. Yes, I paid for that but the money is gone. Why am I suffering now?” They learn.
Many other fallacies that are really part of the way we process information are not going to go anywhere. If I ask you if there are more words in the English language that start with the letter “R” or instead have “R” in the third position, it just turns out that it’s very easy to come up with examples that start with “R” and it’s very hard to think of examples that have “R” in the third position. There’s not much you can do to train yourself otherwise and if you make the inference that “If I thought of more examples, there must be more of them” you’re going to make a classic error. Those are cases that are really fundamentally a part of how we process information and it’s not going to change. You can’t avoid simple cognitive processes, but you might be able to improve on more conscious elements that have to do with assumptions we make (like sunk cost, or awareness of regression to the mean).
Sniderman: There’s been a lot of discussion in the last several years about decision architecture, especially in regard to public programs. Many believe that people should manage their own affairs and question if it’s appropriate for the government to get involved. You have said that you don’t think it’s fair to blame people for not making good decisions. Can you elaborate on that?
Shafir: The mistrust of government at some level is a perfectly healthy thing, and we can come back to that. As far as choice architecture and blaming people for not making good decisions goes, to my mind it gets almost silly. Let’s say I assume that people can walk anywhere they want really quickly. Then I announce a conference in Washington, DC, this evening and tell you to be there. You can’t do it. Now I could hold you responsible for not having enough motivation—you didn’t walk from Cleveland to DC in an hour. That’s sort of what happens with some of our assumptions. There are clear limitations to what people do that have been carefully investigated and documented. Some things we just can’t do well. Assigning blame for not doing them well borders on the comical.
Consider the amount of bandwidth a person has available at any one time to process information. Some people have too much going on and too many things to take care of to be able to manage their finances well, but you’re assuming otherwise and then holding them responsible for doing things badly.
Sniderman: So you take issue with the assumption part?
Shafir: It’s the misunderstanding of what motivates people, what they’re capable of, how they divide their attention, what they’re able and not able to do at any point in time that leads you to leave them responsible for things that—no matter how good they are—they’re bound to fail at very often. You either have to digest this and accept that your assumptions are not right and change them, or you’re going to get into a situation where people are just conducting less successful lives and are blamed for it.
Sniderman: In the private markets there are those who are engaged in overtly appealing to consumers’ biases to lead them to make choices that they might regret but would be profitable to the company. Do you think it is appropriate for the government to try to neutralize that through various kinds of interventions?
Shafir: I think people are going to be influenced by things that they wish they weren’t—everything from commercials, to smells, to all kinds of at-the-moment, urgent offers that are highly appealing. We have a lot of questionable players in the market who take advantage of and hurt people. There are two options: you outlaw them or you enter the game yourself—“you” being well-intentioned policy and government organizations. I think outlawing all of them is probably inadvisable and unlikely, so I think the best thing to do is swallow our pride a little bit and enter a world where we do some “publicity.” We typically feel it is below us to appeal to people in ways that are not respectable and on the table, but that’s how people’s behavior is shaped so we need to take it seriously.
The other option is to question far more seriously the actions of the bad guys. I don’t think the notion that a free market needs to incorporate a lot of unethical behavior is necessarily in the original idea. It’s a recent, to some extent American, development. And the idea that you’re playing the free market, with minimal consumer protections, and also are allowed to be dishonest maybe should change. I think more restrictions and consumer protections and heavier penalties on unfair players who are predatory on the less capable seem very plausible, but in today’s climate it’s probably not going to happen tomorrow.
Sniderman: Where do you weigh in on the nudge debate—where government doesn’t tell you what to do, but gently biases the context so that you find it easier to do things you think are in your own self-interest?
Shafir: It’s compelling enough to give it a real try. Of course, not everything is “nudge-able.” Perhaps, in times when something else is needed, the attempt to nudge could go too far, or not be enough. You can either not do enough and have people fail or you can do too much and have them fail. I don’t know if trial and error is the best approach, but we need to institute systems that help us see how the work can be adjusted for the best outcome, which is what we do with everything else. We create new inventions, new medications, new treatments, and then we see what works and what doesn’t and we adjust. In some sense, I think we’re going to have to do that here, too. Clearly at some point if you take it too far, there is the sense that people could lose their own individualism and responsibility and the sense that government might not choose well. It’s a fine balancing game.
Sniderman: Have private companies tried to make positive changes for employees or customers by nudging them to make good decisions?
Shafir: Sure. Opower is a company that’s gotten a lot of press in recent years. It’s a company that’s devoted to saving energy. It’s well-informed on the behavioral literature and is using various interventions and installing all kinds of gadgets in people’s homes; contraptions that change color and beep according to energy consumption, as well as letters and flyers that are carefully designed. These things can have an enormous impact on energy savings. Another example: The GlowCap is a privately produced little plastic capsule for delivering medication that seems to have had an enormous impact on people’s compliance and adherence to taking their medications. It’s very cheap, it’s sold privately, and it’s having a huge positive effect on health. I think we’re going to see more and more of that.
Sniderman: You recently served as a member on the President’s Advisory Council on Financial Capability, a group representing the financial industry, foundations, public interest groups, and financial regulators. What did you learn?
Shafir: What I learned, which I knew I would, is the difficulty of implementing any good idea in the real world. That’s something that you saw very clearly in these meetings, where people are trying to do anything from financial education to workplace environments. You encounter the big obstacles once you have some good ideas. I got a much better sense of all that.
I think people were very open to the behavioral notions to which many of them had not been significantly exposed before. Many of them were really very eager to get a better sense of what the behavioral perspective could contribute. It figured very clearly in the final report we gave to the President. I think it’s a mini, mini step, but it’s in the right direction.
Sniderman: Do you see the application of this way of thinking about people and their decision making spreading to all areas of social sciences?
Shafir: In some disciplines I assume that specific assumptions about human behavior play a bigger role than in others. In general, though, I think that our emerging conception of what drives people and how they behave is not quite the one that we would have anticipated intuitively. Studies show that it’s quite different.
The Woodrow Wilson School of Public and International Affairs [at Princeton University] now has a required course that all the MPA students take: psychology for policy. It’s a course that students resented enormously initially and now they mostly love, and we’re adding more courses because they want more of this stuff. I think a lot of policymakers will be trained now to have a better conception of what drives people. Various social science disciplines will probably differ a lot in what they do with this more nuanced understanding, and how much it comes to form a central element of their conception of people, but I think it will, in fact, enter many disciplines.
Sniderman: Do people in other countries think about behavior differently, especially when it comes to financial capability, than those in the United States? If so, how?
Shafir: I think there’s a sense in which the support systems here are less developed than they are in parts of Western Europe, for example. Here, there’s a little bit more of a sense that people are responsible for their own destiny and should be left to their own devices.
Sniderman: What sort of support systems do you mean?
Shafir: Jails, for example, are friendlier and more pleasant in Europe because the perception is that you’re there partly because of bad luck, whereas here it’s something you did and, consequently, you deserve the miserable conditions you get. And this way of thinking pertains to the poor, too. There’s this ethic that says you’re responsible for your destiny and if things go wrong, it’s you who did it and we owe you very little. I think the Europeans’ perception—and there’s some research on this—is that when you’re poor or incarcerated or whatever, a substantial proportion is attributed to bad luck. “There, but for the grace of God (go I).” That influences how policy is conducted.
Sniderman: Some research done on poverty suggests that decisions made early in life (having to do with education, the age of having a child, and getting married, for example) may enormously influence one’s relationship to poverty. In your work, you seem to stress that poverty itself can contribute to poor conditions. Can you talk more about this causality issue?
Shafir: There’s no doubt that if you grow up in contexts of poverty, you suffer educationally, biologically, culturally, and in every way possible. So you clearly grow up handicapped in many ways. The question is, what happens next? And are you able to transcend it? There’s been some evidence of programs that help.
What Sendhil Mullainathan and I focus on in our book [Scarcity: Why Having Too Little Means So Much] is essentially the cognitive life inhabited by the poor, which is, to a large extent, ahistorical. It’s moment-to-moment: how you spend your mental bandwidth taking care of all the things you need to take care of. And what our studies show is that if you take anybody and put them in the context of poverty, they start doing things less well. Everything suffers. If you take them out of poverty, they start paying attention outside the confines of juggling the day-to-day and start doing well elsewhere. All this certainly doesn’t argue against the notion that you suffer biologically in ways that take a very long time—if ever—to recover from if you’re raised in abject poverty. And we’re talking about America, we’re not even talking about the third world where poverty can be more extreme.
There are, by the way, issues of relative poverty that are quite intricate. There’s a world in which when you talk about the American poor, people come and say, “What are you talking about? Everybody in America has air conditioning and toasters and TVs.” Adam Smith resolved that puzzle 250 years ago with his example of the English laborer who once had no need of a linen shirt to go to work; now that he is expected to have one, Smith explains, if he cannot afford one, he is poor. So clearly standards change.
And what it means to be poor changes with them. Recently the Heritage Foundation reported that most of the poor do not suffer from material hardship, as exemplified by the fact that most people defined as poor have air conditioning, microwaves and DVD players. My guess is that this characterization of what it means to be poor makes initial sense to many people, because unless you think about it carefully, it sort of sounds reasonable. But it’s not. Imagine if the report had said all the poor have running water.
The context in which you live determines what is considered minimally acceptable. Running water was once a luxury, but now it is considered part of a minimally acceptable American life. So if you don’t have it, you may feel poor. There are certain things that you expect to have for a minimally acceptable American life today if you are in America or Swedish life if you are in Sweden. That’s a very behavioral notion, a simple psychological notion. Internet was a major luxury a while ago, but now if you can’t afford it, you feel poor. Sometimes it’s hard to deal with this issue because some people’s perception of poverty comes close to something approaching starvation. So they think that not having internet, or a car, has nothing to do with being poor. But, in fact, internet and a car and a TV, like water, have become part of basic American life. As per Adam Smith, if you cannot live a minimally acceptable life in the time and place in which you live, you’re going to feel poor. And when you live poor, behaviorally what we find is that contexts in which you feel you do not have enough tend to capture your mind and make you attend to them at the expense of other things, and that ends up impoverishing you in other ways.
Sniderman: What are you working on today that you hope will bear fruit in the next five or ten years?
Shafir: We’re hoping to start a center for behavioral policy at the Woodrow Wilson School at Princeton that will bring together researchers and students from different disciplines who are focused on these behavioral issues. I think figuring out to what extent we can infiltrate and have some real impact on policymaking anywhere from government to nonprofits will be the agenda for the next few years. It seems like a good moment. The United Kingdom has the Behavioural Insights Team (also known as the “nudge unit”); that’s David Cameron’s actual office for doing behaviorally informed work, and the White House and Treasury are now starting a similar project. So I think there’s going to be more of that behavioral perspective entering policymaking, and that’s something I’d devote some serious attention to.
Sniderman: Thank you.