Meeting at Grand Central: Understanding the Social and Evolutionary Roots of Cooperation

Lee Cronk and Beth L. Leech

Print publication date: 2012

Print ISBN-13: 9780691154954

Published to Princeton Scholarship Online: October 2017

DOI: 10.23943/princeton/9780691154954.001.0001


Cooperation and the Individual

Chapter:
Chapter 4 Cooperation and the Individual
Source:
Meeting at Grand Central
Author(s):

Lee Cronk

Beth L. Leech

Publisher:
Princeton University Press
DOI: 10.23943/princeton/9780691154954.003.0004

Abstract and Keywords

This chapter examines the evolutionary roots of the proximate psychological mechanisms that underlie cooperation. The idea that there are specific biological mechanisms behind at least some aspects of cooperation is supported by recent work in behavior genetics. One common technique in behavior genetics is to compare identical twins to fraternal twins. Another study, using a different technique, found a relationship between voter turnout and two specific genes. Hormones provide another window onto the proximate psychological mechanisms underlying cooperation. The chapter first considers the most basic form of cooperation, reciprocity, before discussing its relation to culture, the avoidance of individuals prone to free riding, and detection of cheaters. It also explores indirect reciprocity, generosity as performance, and hard-to-fake signals.

Keywords:   evolution, cooperation, reciprocity, culture, free riding, cheaters, indirect reciprocity, generosity

From the proximate to the distal

Every approach in the social sciences begins with some theory of the individual. The approaches to collective action described in chapter 3 are no exception to this rule. However, most such theories attempt to explain behavior at only the proximate, mechanistic level. Consider, for example, Peter Clark and James Q. Wilson’s idea that people join groups that supply public goods in order to experience “solidary benefits.” In other words, people join groups because it makes them feel good to join with like-minded people and accomplish something they feel is important. Of course, this is true enough. But to say that solidary benefits explain the existence of groups is simply to say that people join groups because they like to. At best, this is simplistic. At worst, it is circular (as Olson himself pointed out). We can escape from this circularity if we remember the lessons from chapter 2 about levels of explanation. Clark and Wilson answered the question of what motivates individuals to contribute to a public good at the proximate level, but they left the other levels—developmental, distal, and evolutionary—unexplored. The solution is a dose of evolutionary thinking. The evolutionary approach to behavior recognizes the importance of proximate mechanisms but also asks how they came into existence in the first place. In this way, explanations provided by evolutionary scientists and those provided by social scientists can be recognized and appreciated for what they usually are: complementary to one another, not competing. In this chapter, we explore the ways in which evolutionary scientists are shedding light on the evolutionary roots of the proximate psychological mechanisms that underlie cooperation.

Skeptics will wonder—quite rightly—whether we are justified in making claims about the evolutionary foundations of proximate psychological mechanisms. After all, evolution acts ultimately on DNA, so if we want to claim that cooperation is influenced by evolution, then we would do well to find some evidence that human cooperativeness is grounded in biological processes that could conceivably be encoded in our genes. The idea that there really are specific biological mechanisms behind at least some aspects of cooperation is supported by recent work in behavior genetics. One common technique in behavior genetics is to compare identical twins, who are identical genetically as well as in appearance, to fraternal twins, who are no more similar genetically than regular full siblings. Identical twins are far more likely than fraternal twins to have similar rates of political participation.1 Another study, using a different technique, found a relationship between voter turnout and two specific genes.2 If such genes encode proximate mechanisms that influence behavior, then the brain may be the best place to look for them. Functional magnetic resonance imaging (fMRI), which reveals the areas in the brain in which oxygen use is greatest, is useful because it can show which parts of the brain are most active during cooperation (or anything else). And indeed, several fMRI studies have shown that certain parts of the brain are more active than others during cooperation.3 Among the more relevant findings is that subjects who were more cooperative while playing experimental games showed activity in areas of the brain associated with reward. This suggests that there are intrinsic benefits at the neurochemical level for behaving cooperatively.4 In other words, just as Clark and Wilson claimed, it makes us feel good to work with others to achieve common goals.

Hormones provide another window onto the proximate psychological mechanisms that underlie cooperation. Oxytocin, the so-called “cuddle chemical,” is best known for its role in milk letdown in nursing mothers, but it also has been associated in nonhumans with the ability to form normal social attachments more generally—certainly necessary for many forms of cooperation. A study by a group of economists and psychologists found that subjects who received nasal spray containing oxytocin were more generous in an experimental game than subjects who received a placebo. The authors of the study suggest that oxytocin “affects an individual’s willingness to accept social risks arising through interpersonal interactions.”5 Oxytocin may also have a darker side, but still one that may be important for cooperation. A team of researchers from the Netherlands has shown that nasally administered oxytocin increased subjects’ willingness to trust and cooperate with members of their own group, but it also decreased their concern for out-group members. In one study, subjects were presented with a famous moral dilemma involving a runaway trolley car. The trolley is headed toward a group of five people, but between the trolley and the group of people is a switch. If you don’t hit the switch, all five will die. However, if you do hit the switch, the trolley will head off in a new direction and kill one other innocent person, instead. Do you do nothing, or do you act in a way that causes the death of one person but saves the lives of five others? The Dutch subjects of this experiment were more likely to sacrifice the one to save the many if the doomed individual was given a name indicative of an out-group, such as Germans (e.g., Helmut) or Arabs (e.g., Ahmed) rather than a typical Dutch name (e.g., Maartens), but only if they had inhaled oxytocin rather than a placebo.6

However, neither genes nor hormones are destiny. We are only beginning to understand the complex relationships among genes, hormones, environments, and behavior. Consider, for example, a recent study of the effects of testosterone on cooperation conducted by members of the same team that looked at oxytocin’s effects on play in economic games.7 Female subjects in a double-blind experiment were given doses of either testosterone or a placebo underneath their tongues and then played the Ultimatum Game. Players who had received the testosterone made significantly higher offers than those who received the placebo. Crucially, the experimenters also asked the players whether they believed that they had been given testosterone or the placebo. Those who believed that they had received a dose of testosterone (and these beliefs were wrong about half the time) made significantly lower offers in the game. The researchers speculate that their subjects were acting on their folk theories of how testosterone makes people act. Clearly, our belief systems remain crucially important elements in understanding human behavior, and much work is left to be done on the proximate mechanisms that underlie cooperation.

Another approach to the causal chain that underlies cooperation is to come at it from the other end, that is, to imagine social situations that might have occurred among our ancestors and provided selection pressure in favor of a willingness and an ability to cooperate. This focus on ultimate or distal explanations is the approach that has dominated the evolutionary study of cooperation, and so it is the one that we will take in the rest of this chapter.

Reciprocity and the identification of cooperators

While we were writing the first draft of this book, a Stanford business professor named Robert Sutton published The No Asshole Rule, a slim, pithy volume that quickly reached the top of the best-seller lists. Sutton explained how to avoid and deal with the “bullies, creeps, jerks, weasels, tormentors, tyrants, serial slammers, despots [and] unconstrained egomaniacs” that can turn any workplace into a nightmare.8 Sutton’s book captured an important insight about cooperation: the first step toward successful cooperation is to avoid noncooperators and free riders. This is true not only of businesses and other large cooperative endeavors, but also of the most basic form of cooperation of all: reciprocity.

According to game theorist Ken Binmore, reciprocity was discovered in a manner much like the Americas: it kept happening until everyone finally realized that it had happened.9 As with so many things in the study of social behavior, David Hume had the basic idea back in 1740: “I learn to do service to another, without bearing him any real kindness, because I foresee, that he will return my service in expectation of another of the same kind, and in order to maintain the same correspondence of good offices with me and others. And accordingly, after I have serv’d him and he is in possession of the advantage arising from my action, he is induc’d to perform his part, as foreseeing the consequences of his refusal.”10 When ethnographers discovered that the world’s societies contain a wide variety of reciprocal gift-giving systems, some simple and others quite elaborate, their study became a mainstay of the new discipline of anthropology.11 By the time game theorist and eventual Nobel Prize winner Robert Aumann formalized the idea in the 1950s, he thought that it was already so generally known that he called it not the “Aumann Theorem” but rather simply the “Folk Theorem.”12 A few years later, Robert Trivers rediscovered the idea on behalf of biology.13 The Christopher Columbus of the story, according to Binmore, is political scientist Robert Axelrod. Realizing that reciprocity was already a well-established area of research in the social sciences, however, Axelrod did not claim to have discovered any new continents. Axelrod’s great contribution was a book, The Evolution of Cooperation, that was so compelling and widely read that people in disciplines across the social and life sciences finally realized that they had all been studying the same thing.14

Reciprocity’s contribution to the study of cooperation is well summarized by Robert Aumann: “[C]ooperation may be explained by the fact that the ‘games people play’—i.e., the multiperson decision situations in which they are involved—are not one-time affairs, but are repeated over and over.”15 Repetition is the key. If individuals have a good chance of interacting with each other in the future, cooperation can develop because they can hold each other accountable for favors given and owed.

Another way to say “repetition” is “iteration,” and iterated games have been a very important tool for understanding how reciprocity can evolve. The single most important game in the literature on this topic, for good or ill, is the iterated Prisoner’s Dilemma. In a noniterated, one-round Prisoner’s Dilemma, two players must choose between two strategies, one labeled “cooperate” and one “defect.” Imagine two individuals who had collaborated in a crime and agreed not to talk to the police if they were ever caught. “Cooperating” means adhering to that agreement; “defecting” means violating the agreement by talking to the police. If they both cooperate, they both get moderate payoffs, that is, light jail terms. If they both defect, they both get low payoffs (i.e., long jail terms), but not the lowest payoffs possible (i.e., very long jail terms). The dilemma arises because of the payoffs they receive when one defects and the other cooperates. In that situation, the defector gets the highest payoff possible in the game (no time in jail) while the cooperator gets the lowest possible payoff (a very long jail term). It is easy to see that in a one-round game, the best strategy is to defect, because it is the only way to avoid the lowest possible payoff and because both players know that the other will be tempted by the high payoff associated with defection when the other party cooperates. This is essentially a two-person collective action dilemma. The Prisoner’s Dilemma would thus seem to be a bad way to model cooperation. Its value becomes apparent when it is played repeatedly. In an iterated Prisoner’s Dilemma, cooperation can emerge as a successful strategy as long as the game is likely to continue. An iterated Prisoner’s Dilemma game presents players not so much with a collective action dilemma as with an assurance problem: It’s in both of our best interests to remain in the “cooperate/cooperate” box, earning steady, moderate payoffs round after round, but how can we trust each other to do so?
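
The payoff structure is easy to make concrete. The short sketch below uses the canonical ordering of Prisoner’s Dilemma payoffs (temptation > reward > punishment > sucker’s payoff); the specific numbers are illustrative choices of our own, not values from the chapter:

```python
# A minimal sketch of the one-round Prisoner's Dilemma described above.
# The numbers are illustrative; all that matters is the ordering T > R > P > S.
PAYOFFS = {
    ("C", "C"): (3, 3),  # R, R: both cooperate (light jail terms)
    ("C", "D"): (0, 5),  # S, T: the lone cooperator gets the worst outcome
    ("D", "C"): (5, 0),  # T, S: the lone defector gets the best outcome
    ("D", "D"): (1, 1),  # P, P: mutual defection (long jail terms)
}

# In a single round, defection dominates: whatever the other player does,
# "D" pays more than "C" against the same move.
for other in ("C", "D"):
    assert PAYOFFS[("D", other)][0] > PAYOFFS[("C", other)][0]
```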

The dynamics of the iterated Prisoner’s Dilemma were explored as long ago as the 1950s, but the game became central to the study of cooperation when Axelrod invited game theorists to submit strategies for playing the Prisoner’s Dilemma and then played them against each other.16 Tit-for-Tat, a very simple strategy submitted by game theorist Anatol Rapoport, emerged as the winner. A Tit-for-Tat player first cooperates. Thereafter, Tit-for-Tat does whatever the other player did in the previous round. Like other high-scoring strategies, Tit-for-Tat is “nice,” meaning that it is never the first to defect. Such strategies rarely get the high one-round scores associated with defection while the other player cooperates, but they rack up large scores when both parties cooperate for round after round. In addition to showing that cooperation can become common even in a universe populated only by selfish actors, this now famous finding provided a starting point for an enormous number of additional studies based on the Prisoner’s Dilemma. Among the many important findings in that literature is the observation that Tit-for-Tat can be beaten if some of the assumptions built into Axelrod’s tournament are relaxed. For example, Tit-for-Tat cannot deal well with errors, such as defecting when you really should have cooperated.17 When mistakes are a problem, an alternative strategy called Generous Tit-for-Tat, which sometimes cooperates even when the other player has defected, can do better than plain Tit-for-Tat because it can correct mistakes and avoid the low scores associated with mutual defection. A third simple strategy, Win-Stay, Lose-Shift (also known as Pavlov), in which each player starts by cooperating and then changes from cooperation to defection or vice versa depending on whether it is doing well or doing poorly, also does well when there are errors.
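
To see how these strategies differ, here is a small simulation of an iterated Prisoner’s Dilemma with implementation errors. It is our own toy sketch, not Axelrod’s tournament code, and the error and forgiveness rates are arbitrary choices:

```python
import random

PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    # "Nice": cooperate first, then copy the partner's previous move.
    return their_hist[-1] if their_hist else "C"

def generous_tft(my_hist, their_hist, forgive=0.3):
    # Like Tit-for-Tat, but sometimes cooperates after a defection,
    # which lets it break out of chains of mutual retaliation.
    move = tit_for_tat(my_hist, their_hist)
    return "C" if move == "D" and random.random() < forgive else move

def win_stay_lose_shift(my_hist, their_hist):
    # "Pavlov": repeat the last move after a good payoff, switch after a bad one.
    if not my_hist:
        return "C"
    if PAYOFFS[(my_hist[-1], their_hist[-1])][0] >= 3:
        return my_hist[-1]
    return "D" if my_hist[-1] == "C" else "C"

def play(strat_a, strat_b, rounds=500, error=0.05):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_a, hist_b), strat_b(hist_b, hist_a)
        # Implementation errors: a player occasionally does the opposite
        # of what its strategy intended.
        if random.random() < error:
            a = "D" if a == "C" else "C"
        if random.random() < error:
            b = "D" if b == "C" else "C"
        pa, pb = PAYOFFS[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

for strat in (tit_for_tat, generous_tft, win_stay_lose_shift):
    print(strat.__name__, play(strat, strat))
```

In noisy runs of this kind, pairs of plain Tit-for-Tat players tend to score below pairs of Generous Tit-for-Tat or Win-Stay, Lose-Shift players, because a single error can lock Tit-for-Tat into a long run of retaliation.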

In Tit-for-Tat and Win-Stay, Lose-Shift, players are stuck dealing with each other even if they are not enjoying themselves. But of course in the real world we usually get to pick our cooperative partners and stop interacting with people whom we find to be uncooperative. In Robert Sutton’s terms, we can try to avoid the assholes. Thus, a simple way to make the iterated Prisoner’s Dilemma more realistic is simply to add the possibility of movement through space to the scenario. Athena Aktipis did this with a modified Win-Stay, Lose-Shift strategy called Walk Away.18 Walk Away is essentially Win-Stay, Lose-Move: a player who is dissatisfied with how things are going with another particular player can simply move away and find someone else with whom to interact. Walk Away beats Tit-for-Tat and Win-Stay, Lose-Shift because unhappy players can avoid defectors and find other cooperators.
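
The intuition behind Aktipis’s result can be conveyed with a toy simulation of the partner-leaving rule. This is our own drastic simplification (unconditional cooperators who abandon any partner who defects, paired against unconditional defectors), not her actual model:

```python
import random

def simulate(n_walkaway=50, n_defectors=50, rounds=100):
    # "W" players always cooperate but abandon any partner who defects;
    # "D" players always defect and stay put.
    pool = ["W"] * n_walkaway + ["D"] * n_defectors
    payoffs = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
               ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
    scores = [0] * len(pool)
    ids = list(range(len(pool)))
    random.shuffle(ids)
    pairs = [(ids[i], ids[i + 1]) for i in range(0, len(ids) - 1, 2)]
    for _ in range(rounds):
        next_pairs, free = [], []
        for i, j in pairs:
            a = "C" if pool[i] == "W" else "D"
            b = "C" if pool[j] == "W" else "D"
            pa, pb = payoffs[(a, b)]
            scores[i] += pa
            scores[j] += pb
            # Walk Away: a dissatisfied cooperator leaves, breaking the pair.
            if (pool[i] == "W" and b == "D") or (pool[j] == "W" and a == "D"):
                free += [i, j]
            else:
                next_pairs.append((i, j))
        random.shuffle(free)  # leavers re-pair at random with other free players
        next_pairs += [(free[k], free[k + 1]) for k in range(0, len(free) - 1, 2)]
        pairs = next_pairs
    mean = lambda kind: sum(s for s, p in zip(scores, pool) if p == kind) / pool.count(kind)
    print(f"mean score: walk-away cooperators {mean('W'):.0f}, defectors {mean('D'):.0f}")

simulate()
```

Because cooperator pairs persist while pairs containing a defector keep dissolving, cooperators end up assorted with one another and outscore the defectors, which is the heart of the Walk Away result.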

Walk Away highlights the fact that one of the most fundamental aids to cooperation is simply assortment, that is, enabling cooperative individuals to spend more time interacting with each other and less time interacting with cheaters, free riders, and other uncooperative individuals. One way to think of walking away from an uncooperative individual is as a form of punishment. It is well known that punishment can make cooperation more likely. However, many forms of punishment are costly to the punisher and so constitute a public good. Any model purporting to explain cooperation that assumes the existence of costly punishment is essentially assuming that the collective action dilemma has been solved, which amounts to assuming away one of the biggest obstacles to cooperation. Another way to say this is that costly punishment constitutes a second-order collective action dilemma. If you hit a norm violator, he might hit back. To imprison the person, you would need to build a jail. Both activities are costly, and it would be easier to leave them for someone else to do, leaving the norm violator unpunished. Punishing those who fail to punish is no solution because, if such punishment is costly, then it is simply a third-order collective action dilemma. One way out of this logical tailspin is to look for ways that individuals can punish noncooperators at no cost to themselves. Walk Away provides a hint: just walk away. Ostracism is a simple way to punish someone for being uncooperative that not only costs you relatively little but also yields benefits in the form of the time you save by not trying to work things out with someone who won’t work with you.19

The success of the Walk Away strategy also brings home an important point about psychological adaptations to reciprocity. Given that people vary in their quality as cooperators and assuming that our ancestors had at least some limited choice regarding their cooperative partners, it follows that we should have mechanisms that help us choose good partners.20 In a species with long-term pair bonds and biparental care of offspring, perhaps the most important such choice would be that of a mate. After all, we expect our long-term mates not only to have sex with us but also to help us care for our offspring and to be committed to our relationship with them. Cross-cultural research shows that many of the characteristics people look for in long-term mates are similar to those we would expect them to look for in any cooperative partner. Dependability, emotional stability, intelligence, and sociability all rank highly for both men and women when looking for a long-term mate, and of course these are also the kinds of things we look for in cooperative partners outside mating, as well.21 It follows that we should have the ability to identify such characteristics in others, even from minimal cues, and research shows that we do. In one recent study, men were able to accurately assess women’s sexual attitudes (and thus, perhaps, their risk of being cuckolded) simply by looking at photographs of women’s expressionless faces.22 The key was facial masculinity (e.g., length of the lower jaw), which is an indicator of both testosterone exposure during development and a variety of personality and behavioral tendencies, including aggressiveness and a reduced affinity for children.23

Our ability to choose good long-term mates is one aspect of a broader ability to choose good cooperative partners. This ability appears early. Preverbal infants will choose to play with a toy that has been depicted as helpful to other toys over one that has been depicted as hindering other toys in their efforts to reach a goal.24 One good way to associate with more cooperative people is to avoid uncooperative ones. Accordingly, in laboratory experiments people are better at remembering faces of people whom they are led to believe are untrustworthy or uncooperative than those of people who are supposedly trustworthy and cooperative.25 People also remember the faces of uncooperative people better than those of cooperative ones. Toshio Yamagishi and his colleagues first obtained photographs of people who had played a one-round Prisoner’s Dilemma game, keeping track of which ones cooperated and which ones defected. They then used those photographs in a test of other subjects’ memories. Even though the second group of subjects knew nothing about how the people in the photographs had played the game, they were better at recalling the faces of the defectors than those of the cooperators.26 Other studies have shown that people can tell the difference between more and less altruistic people simply by watching them on videotapes. The key seems to be the fact that altruists more often display Duchenne smiles, which involve not only the mouth but also the muscles around the eye and which are known to be better indicators of actual feelings than the forced “say cheese!” smiles that involve only the muscles around the mouth.27 Sometimes a potential partner’s cooperativeness may be less important than his other qualities, such as the ability to win a fight, and people across cultures are able to make accurate assessments of men’s physical strength simply by looking at photos of their faces or listening to their voices.28 Selection may have originally favored this ability because it helped our ancestors size up potential opponents, but it could easily have been co-opted for cooperative partner choice. After all, when push comes to shove, it’s best to be on the side with the best shovers.

Another way to tell whether someone else is cooperative is simply to observe whether they cooperate. Anthropologist Michael Price has shown that the Shuar, an isolated group of horticulturalists in the Ecuadorian Amazon, do exactly that.29 Shuar routinely work in groups, accomplishing such important jobs as clearing fields, harvesting crops, and building houses. Shuar estimates of one another’s work effort correspond closely to Price’s more systematic observations of such effort. Shuar also pay attention to whether those who shirk do so because they have chosen to shirk or because something is preventing them from contributing to the group project, and their impressions in this regard are a good match for systematically recorded excused and unexcused absences from work parties. Not surprisingly, those who were perceived as working harder and who had fewer unexcused absences had better overall reputations. These results are almost certainly not peculiar to the Shuar. As Price pointed out, every successful scheme for community management of shared resources involves some system of monitoring the behaviors of the members of the scheme.30

Price also predicted that people’s willingness to help with collective work projects should be greater if they think others are also likely to help. Although data from the Shuar are not available to test this theory, public goods experiments in laboratory settings do support it. When subjects are able to move from group to group, more cooperative people leave groups populated by less cooperative people and go in search of fellow cooperators.31 Similarly, when subjects are given information about each other’s contributions to past Public Goods Games and an opportunity to form new groups for future rounds of the game, highly cooperative people tend to form groups with one another, thus reducing free rider problems. The simple ability to re-form groups results in average earnings that are higher not only than when subjects are stuck in their initial groups but also than when they are able to pay to punish each other, regardless of whether they can re-form groups. Although punishment does result in higher average contributions, its cost reduces average earnings.32

Reciprocity, culture, and the avoidance of cheaters

At the same time that we humans ought to be trying to identify cooperators, we also need to be avoiding free riders and other kinds of cheaters. Fortunately, we seem to be very good at figuring out whether people have violated social rules. In the words of evolutionary psychologists Leda Cosmides and John Tooby, people have a dedicated “cheater detection mechanism.”33 A cheater, in this body of work, is someone who violates a social rule that requires a person to pay a particular cost in order to get a particular benefit. Evidence for the existence of such a mechanism comes mainly from the Wason selection task, a logic problem of the if-p-then-q variety.34 A research subject is presented with four cards and a rule regarding what is written on the cards such as, “If a card has a D on one side, it must have a 3 on the other side.” The four cards read D, K, 3, and 7. Subjects know that if there is a letter on one side there will be a number on the other side, and vice versa, but they do not know whether the rule has been followed exactly. Their task is to identify the cards that they need to turn over in order to find out whether the rule has been followed. Try it—the answer is in an endnote.35

If you chose the wrong cards to turn over, don’t feel bad. Only a small minority of people are skilled enough at abstract logic problems of this kind to get it right.

However, performance on the Wason selection task improves greatly when exactly the same logical problem is presented not as an abstract puzzle but rather as a quest to find out whether people are violating social rules. Again, you can try it yourself. Imagine that the rule is “If you borrow my car, you must fill the tank” and the cards read “borrowed car,” “did not borrow car,” “full tank,” and “empty tank.” Easy, right? If the car was not borrowed, you don’t care what’s on the other side because the rules do not apply. Similarly, if the tank is full, you don’t care if the car was borrowed because even if it was, the rule has been followed. So you turn over the “borrowed car” and “empty tank” cards, which are logically equivalent to the correct answers in the abstract version. Almost everyone gets this version right. This contrast is displayed not only by the typical subjects of psychological experiments (college students) but also by the Shiwiar, a very isolated people in the Ecuadorian Amazon who have no system of formal education.36 Others have found that what counts as “cheating” depends on one’s perspective: people are better at spotting cheaters who work against their own interests than they are at spotting cheaters who are on their side.37
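
The logic of the task is compact enough to write down directly. In the sketch below (our own illustration, not code from the studies), each card is encoded by the proposition its visible side establishes, which makes it clear why only the p card and the not-q card are worth turning over:

```python
# A rule "if p then q" is falsified only by a case where p holds and q fails,
# so only the p card and the not-q card are informative.
def cards_to_turn(cards):
    return [label for label, side in cards.items()
            if side in {("p", True), ("q", False)}]

# Abstract version: "If a card has a D on one side, it must have a 3 on the other."
abstract = {"D": ("p", True), "K": ("p", False),
            "3": ("q", True), "7": ("q", False)}
print(cards_to_turn(abstract))  # ['D', '7']

# Social-contract version: "If you borrow my car, you must fill the tank."
social = {"borrowed car": ("p", True), "did not borrow car": ("p", False),
          "full tank": ("q", True), "empty tank": ("q", False)}
print(cards_to_turn(social))  # ['borrowed car', 'empty tank']
```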

Cosmides, Tooby, and their colleagues argue that the cheater detection mechanism reflects past selection in favor not of a general ability to deal with logical problems but rather in favor of an ability to identify, and then perhaps avoid or even punish, individuals who violate social rules. More specifically, they identify the selection pressures behind the cheater detection mechanism as arising from social exchange, starting with simple reciprocity of the you-scratch-my-back-I’ll-scratch-yours variety. Another possibility was recently suggested by Natalie and Joseph Henrich.38 They argued that because such rules are unavoidably cultural, such an ability should be seen as a product of gene-culture coevolution, an idea we described briefly in chapter 2. This observation is based on the fact that “cheating” in this body of work is different from defecting in the Prisoner’s Dilemma game and other kinds of noncooperation in that it presupposes the existence of a shared social rule, and therefore of culture. We think both views have merit. The situation might be clarified by a realization that the scenarios used in Cosmides and Tooby’s experiments come in two broad varieties. In some of them, the benefit is a favor from someone else, such as borrowing her car, and the cost is doing a favor for her in return, such as leaving the car with a full fuel tank. That scenario is based on reciprocity, so our ability to think through it may indeed be grounded in selection pressures arising from reciprocal social exchange. In other scenarios, the cost is something more arbitrary, such as getting a painful tattoo, and the benefit is in the form of a privilege, such as the right to consume an aphrodisiac. Despite the fact that this kind of scenario involves no reciprocity, people are still good at identifying cheaters in them. It may be that an ability to notice when one’s favors are not reciprocated, which would not require gene-culture coevolution, was co-opted, via gene-culture coevolution, into an ability to identify people who violate other kinds of social rules regarding benefits and associated costs.

In our view, reciprocity is an essential part of any explanation of human cooperation. However, we would be remiss if we did not recognize that some scholars are skeptical about this. For example, Robert Boyd and Peter J. Richerson have argued that reciprocity cannot explain the development of cooperation in large groups.39 They came to this conclusion by assuming that cooperative acts must benefit everyone in a group, not just the two reciprocators. If that were truly necessary in order for selection to favor reciprocity, then reciprocity would indeed have a hard time explaining very much about human cooperation. But of course the theory of reciprocity includes no such requirement. For reciprocity to be the basis of widespread cooperation, it is only necessary for each individual in a reciprocating pair to benefit.40 It does take two to tango, but, fortunately, two can tango just fine without anyone else benefitting from the performance.

Generosity as performance, part 1: Indirect reciprocity

Human society would be impossible without the ability of each of us to know, individually, a variety of neighbors. We learn that Mr. X is a noble gentleman and that Mr. Y is a scoundrel. A moment of reflection should convince anyone that these relationships may have much to do with evolutionary success.

George Williams 1966:93

Cooperative partner choice and cheater detection are good examples of social selection, the process introduced in chapter 2 that occurs when organisms generate selection pressure on members of their own species. This form of social selection must have helped shape our species in important ways. Clearly, if people can detect and associate with cooperators and detect and avoid cheaters, then it would behoove them to also have the ability to prove to others that they are cooperators and not cheaters. This is the basic idea behind two closely related evolutionary approaches to cooperation: indirect reciprocity and hard-to-fake signals.

The idea behind indirect reciprocity was around long before it had a name or a formal model. Williams captured the gist of it in the quote above. Whereas plain old reciprocity involves just two individuals (say, Abe and Ben), indirect reciprocity involves a third (Charlie).41 If Charlie is pleased by what he sees Abe do for Ben, he may treat Abe nicely in the future, perhaps seeking him out for some cooperative venture. If Charlie is displeased by Abe’s behavior toward Ben, he may avoid Abe or perhaps even punish him in some way. Either way, he may tell someone else (Doris) about Abe’s treatment of Ben. Even if Charlie himself has been treated well by Abe, he might think again about working with Abe in the future if Abe treats Ben badly. As humorist Dave Barry once noted, “A person who is nice to you, but rude to the waiter, is not a nice person.”42 While Williams may have been the first evolutionary biologist to recognize this possibility, it was Richard Alexander who gave it a name that stuck and who explored its relationship to human moral systems.43 In recent years, however, Martin Nowak and his colleagues have been indirect reciprocity’s main proponents, arguing not only that it is the key to the puzzle of human cooperation but, because of its importance to the evolution of social intelligence and language, that it is also the key to understanding our species’ distinctiveness.44

Indirect reciprocity has been the subject of a large number of modeling and simulation studies. One early model imagined cooperative players being rewarded by others connected through looped chains and found that indirect reciprocity was likely to lead to cooperation only in small groups with tightly looped networks of players.45 However, this model was missing a crucial real-world element: reputation. Including reputations turns out to be the key not only to indirect reciprocity in the real world but also to making it work in simulations. One way to provide agents in simulation models with reputations is through a method called “image scoring,” an approach developed by Martin Nowak and Karl Sigmund. Despite the name, image scoring has nothing to do with actual visual images. In image scoring, individuals who give help to others lose resources but get higher image scores, while those that pass up such opportunities get lower image scores but keep their resources. A player will then give help (i.e., resources) to another depending in part on whether the recipient’s image score is above a certain threshold value. Under a variety of different assumptions about the strategies players use and how they gain knowledge about other players’ image scores, this leads to populations in which strategies that accept the short-term cost of helping another in exchange for the boost to their reputations—and hence to the amount of help they receive from others in the future—predominate.46 This finding soon received support from a laboratory study on the effects of reputation in an experimental economic game. Claus Wedekind and Manfred Milinski set up a game in which individuals could choose to give each other money at some cost to themselves, with their generosity being made known to the other players.47 As Nowak and Sigmund (and everyday experience) would predict, players who gave more also received more.48
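
The mechanics of image scoring can be sketched in a few lines. This is a stripped-down rendering of the idea, not Nowak and Sigmund’s actual model; the population mix, score bounds, and payoff values are illustrative choices of our own:

```python
import random

# Agents help a would-be recipient only if the recipient's image score is at
# least the agent's threshold k. Discriminators use k = 0; unconditional
# defectors use an unreachable threshold and so never help.
N, ROUNDS, COST, BENEFIT = 100, 10_000, 1, 4
agents = [{"score": 0, "payoff": 0, "k": random.choice([0, 0, 0, 99])}
          for _ in range(N)]

for _ in range(ROUNDS):
    donor, recipient = random.sample(agents, 2)
    if recipient["score"] >= donor["k"]:
        donor["payoff"] -= COST                       # helping is costly...
        donor["score"] = min(donor["score"] + 1, 5)   # ...but raises image
        recipient["payoff"] += BENEFIT
    else:
        # Refusing lowers the donor's image even when the refusal is justified,
        # the flaw that the "standing" models discussed below are meant to fix.
        donor["score"] = max(donor["score"] - 1, -5)

for kind, k in (("discriminators", 0), ("defectors", 99)):
    group = [a for a in agents if a["k"] == k]
    print(kind, sum(a["payoff"] for a in group) / len(group))
```

Run for long enough, defectors sink to the bottom of the image scale and stop receiving help, while discriminators keep help flowing among themselves, which is the qualitative result described above.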

Although Nowak and Sigmund referred to the helping that players do in this model as “cooperation,” it does not fit the definition of cooperation given in chapter 1 because it is an act by an individual rather than two or more individuals working together. Nevertheless, their model was an important step in the evolutionary analysis of human cooperation, and it has inspired a large number of subsequent studies. One important development is a shift from image scoring to “standing,” an idea inspired by Robert Sugden’s discussion of “good standing.”49 While a player’s image score merely reflects his history of donating or not donating, his standing reflects his intentions as well as his actions by distinguishing between justified failures to donate (i.e., when the recipient is of low standing) and unjustified ones (i.e., when the recipient is of high standing). This brings us closer to how reputations work among real people: helping a deserving person is an action worthy of praise, but helping a criminal or miscreant, even a needy one, is condemned. The analytical payoff of standing versus image scoring becomes apparent when there is a chance that players will make errors when they play. Olof Leimar and Peter Hammerstein showed that players who attend to standing usually outcompete those who use image scores, while Karthik Panchanathan and Robert Boyd demonstrated that when players make errors, cooperation is more stable when it is based on standing than when it is based on image scores.50

Later, Panchanathan and Boyd made a contribution to the literature on the collective action dilemma by showing how indirect reciprocity can eliminate the second-order free rider problem.51 As we have seen in previous chapters, collective action is undermined not only by those who fail to contribute to the public good but also by those who fail to punish those who fail to contribute. This is the second-order free rider problem. Panchanathan and Boyd imagined a situation in which individuals first have the choice of contributing or not in a collective action game. Then, individuals play an indirect reciprocity game in which they follow one of three strategies. “Defectors” do not contribute in the collective action game and do not help anyone in the indirect reciprocity game. “Cooperators” contribute in the collective action game and help those who need help in the indirect reciprocity game, though an error term included in the model ensures that they will sometimes fail to do so. “Shunners” contribute in the collective action game and never help anyone who is in bad standing in the indirect reciprocity game. Shunners also help needy individuals with good standing in the indirect reciprocity game, though, like Cooperators, they sometimes make mistakes. Failure to contribute in the collective action game is more damaging to a player’s reputation than merely failing to help someone in the indirect reciprocity game. A population of Shunners can resist invasion by both Defectors and Cooperators, creating an incentive for contributing to the collective action. Because Shunners themselves benefit from not helping players in bad standing, there is no second-order free rider problem. This model was inspired by a laboratory study by Manfred Milinski and his colleagues in which players alternated between Public Goods Games and indirect reciprocity games, one after another. Stinginess by a player in the Public Goods Games resulted in lower donations to him by other players in the indirect reciprocity game, creating an incentive to contribute more to future Public Goods Games. Thus, indirect reciprocity helped solve the collective action dilemma.52 From an empiricist’s point of view, the appeal of both Panchanathan and Boyd’s model and Milinski et al.’s laboratory study is that they resemble the real world, where reputations are not isolated within particular realms of behavior (e.g., collective action situations versus opportunities to help other individuals) but rather reflect individuals’ behaviors across many types of social interactions.
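
The three strategies amount to simple decision rules. The sketch below renders just those rules, in our own notation; it omits the evolutionary dynamics and payoffs of the full model, and the error rate is an illustrative value:

```python
import random

ERROR = 0.05  # Cooperators and Shunners occasionally fail to help by mistake

def contributes(strategy):
    # Stage 1: the collective action game.
    return strategy in ("cooperator", "shunner")

def helps(strategy, recipient_in_good_standing):
    # Stage 2: the indirect reciprocity game.
    if strategy == "defector":
        return False                        # never helps anyone
    if strategy == "cooperator":
        return random.random() > ERROR      # helps anyone, barring error
    # Shunners refuse anyone in bad standing; because that refusal is
    # justified, it costs them nothing, so shunning is punishment for free.
    if not recipient_in_good_standing:
        return False
    return random.random() > ERROR          # otherwise help, barring error
```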

For social scientists, who have long recognized reputation’s crucial role in human cooperation,53 evolutionary theory’s discovery of this fact might seem to come a bit late. Indeed, everyone is familiar with the effect an audience, or the lack of one, has on human behavior. Consider, for example, the rudeness of drivers versus the politeness of pedestrians. The Internet has created such a large space for anonymous rudeness that it has led to a folk theory regarding anonymity’s impact on behavior. The Greater Internet Fuckwad Theory “posits that the combination of a perfectly normal human being, total anonymity and an audience will result in a cesspit.” In other words, when Internet users are anonymous, inhibition and empathy seem to go out the window. Some website managers have responded to this problem by requiring users to register under their own names, thus removing the veil of anonymity, before posting comments.54

What evolutionary science can contribute is its ability to identify specific psychological adaptations associated with reputational concerns. Several recent studies have made progress on this front. All demonstrate the important effect that an audience, even one that is only hinted at, can have on rates of cooperation. One of the most interesting was conducted using the simplest materials available: a coffee pot, a donation jar, and photographs of eyes. Like many offices, the Department of Psychology at the University of Newcastle maintains a common coffee pot alongside a donation jar and a sign asking for people to make voluntary contributions to help pay for the cost of the coffee, tea, and milk. Three members of the department set their colleagues up as unknowing experimental subjects by posting two different kinds of images on the wall above the pot. One week, there would be a photograph of flowers. The next week, there would be a photograph of a pair of eyes. This went on for ten weeks. The amount of tea and coffee consumed was estimated by keeping track of the amount of milk used. Every time the picture shifted from flowers to eyes, the amount in the jar at the end of the week increased. Every time the picture shifted from eyes to flowers, the amount decreased.55 In a follow-up study conducted in a cafeteria, pictures of eyes and flowers were paired either with a sign admonishing people to clean up after themselves or one asking them to consume only food and drink purchased in the cafeteria. Regardless of which sign they were posted alongside, the eyes generated more table clearing than the flowers.56

More controlled studies of the same phenomenon have revealed a similar effect. An image of a robot on a computer screen increased donations in a Public Goods Game.57 Stylized eyespots on a computer screen were enough to increase donations in a Dictator Game.58 Even the most minimal hint of a face is enough to elicit this effect. Mary Rigdon and her colleagues divided their subjects into two groups and had them play a Dictator Game. Some were given a sheet on which to record how much, if any, of $10 to allocate to an anonymous subject that included three dots arranged to look vaguely like a face, with two on top and one on the bottom. This was inspired by an earlier finding that such a “face” is enough to stimulate the part of the brain responsible for face recognition. The rest of the subjects were given a decision sheet that was identical except that the “face” was upside down, with two dots on the bottom and one on the top. Even a depiction of a face this minimalistic was enough to make people more generous.59 Just as a false sense that an audience is present can be stimulated very easily, so can a false sense of anonymity. Chen-Bo Zhong and his colleagues created an illusory sense of anonymity among their experimental subjects by simply dimming the lights in a room or having them wear sunglasses. Both led people to cheat more and to behave more selfishly than people in rooms with normal lighting and no sunglasses.60

These findings are significant for reasons that go beyond the study of indirect reciprocity and cooperation. A common critique of evolutionary psychology is that it is simply folk psychology with a Darwinian makeover. While it is indeed true that folk psychology and evolutionary psychology sometimes make similar predictions, they also frequently contradict each other. More often, though, evolutionary psychology addresses issues and makes predictions that folk psychology never considers. The effectiveness of even minimalistic depictions of faces in making people more generous is a case in point. We know of no folk psychological theories that would predict such a thing. What evolutionary psychology, and evolutionary theory more broadly, bring to the table is the ability to derive novel predictions and test them in a rigorous fashion.

Even the specter of a supernatural observer is enough to make people more generous. In an experiment using the Dictator Game, subjects who were primed with God concepts through a task involving the unscrambling of sentences gave considerably more than unprimed subjects. Interestingly, the effect was seen in both theist and atheist subjects.61 Of course, actual audiences also have effects on behavior. In a study involving English high school students, average contributions to a public good increased (and, correspondingly, the retention of resources by individuals decreased) when everyone’s actions were made known to each other, but not when privacy was maintained.62 In two experiments where subjects were given a chance to pay to punish others who had committed moral violations, punishment increased when other subjects were to be told of their choice. Even when only the experimenter was aware of subjects’ choices, more subjects punished and spent more to do so than when their choice was completely anonymous.63

Because reputations are usually spread through language, the theory of indirect reciprocity also gives language a leading role in the evolution of our species’ high degree of cooperativeness.64 Although the origins of language are beyond the scope of this volume, it does not take much imagination to see how even a rudimentary ability to share information about others’ behavior could have provided our ancestors with a variety of benefits and encouraged the further development of our linguistic abilities as well as our ability to engage in indirect reciprocity. Today, gossip about others constitutes about two-thirds of what people talk about in casual everyday conversations.65 The ability to talk about others when they are not actually there is an example of a broader ability found in human language but not in the signaling systems of nonhumans: displacement. While nonhuman signals make sense only in reference to things that are present (“Look out! It’s a leopard!”),66 humans can discuss things that are not there. Someone can tell you how to get from Penn Station to Grand Central even if you are currently sitting at home on your couch. Or, someone can tell you about his colleagues, neighbors, and relatives, spreading good and bad reputational information in the process.

Although language differs from nonhuman signaling systems in many ways besides just displacement, some linguists identify it as language’s single most important innovation.67 Given that displacement is essential to our ability to gossip about reputations and that such gossip is a key to the power of indirect reciprocity, it is easy to imagine that indirect reciprocity and language coevolved in a positive feedback loop. Because language is an aspect of culture, indirect reciprocity is thus best understood as a product of gene-culture coevolution.68 We know from studies of other kinds of selection (e.g., the effects of female mate choice on male courtship displays) that social selection feedback loops of this kind may be very powerful, making indirect reciprocity indispensable for understanding our highly social and cooperative species’ rapid trip down an unusual evolutionary pathway.

Although we are convinced that indirect reciprocity is a key to our species’ high levels of cooperation, we must acknowledge the fact that others are more skeptical. For example, Natalie and Joseph Henrich have claimed that “a standard indirect reciprocity model cannot sustain significant cooperation in groups much larger than dyads or triads.”69 If that were true, then indirect reciprocity would be of little help in explaining our species’ high levels of cooperation. However, Henrich and Henrich’s argument against indirect reciprocity is similar to Boyd and Richerson’s argument against reciprocity: They assume that the costs and benefits of cooperative and uncooperative acts must be felt by everyone in the group. Sometimes, this is indeed the case. When we work with a group and we decide to withhold cooperation from someone else in the group, we must simultaneously withhold it from other individuals in the group.70 Certainly, this is a real problem that people face every day. But it does not mean that there is no work for the theory of indirect reciprocity to do in explaining cooperation beyond dyads and triads. People live, work, reciprocate, and cooperate in networks. Reputations gained in dyadic and triadic interactions spill over into group contexts, and vice versa. Henrich and Henrich have argued that people living in large populations will have trouble directing their aid and cooperation toward others with good reputations because it will be hard to spread and store all of the necessary information about individuals’ reputations. In our view, this ignores humans’ remarkable capacity, due in large part to language, for doing exactly that.71 Indeed, this seems to us to be a very large part of human social life.

Generosity as performance, part 2: Hard-to-fake signals

It pays to have a reputation for such things as generosity, kindness, trustworthiness, a willingness to follow even arbitrary norms, a willingness to punish those who do not show these characteristics, and a willingness to share accurate information about others’ social behaviors. But how can you make sure that people know that you have these characteristics, particularly when they might be skeptical about your claims? This is where signaling theory can play a role in our understanding of human cooperation. Not all signals are equally believable. The ones that are most likely to convince a skeptical receiver are those that are hard to fake, that is, signals that individuals without the qualities being advertised find difficult to pull off. Such signals are often referred to as “costly,” and it is indeed true that one way to make a signal hard to fake is to make it costly in a way that only honest signalers can afford. However, because sometimes signals either are hard to fake for reasons other than their cost or are costly for reasons other than a need to be believable, we prefer to refer to them as “hard to fake.”72 Hard-to-fake signals that are not costly are called indexical signals, a bit of terminology borrowed from semiotics.73 For example, tigers mark their territories by scratching as high on trees as they can.74 This signals to other tigers not only that they have been there but also their size. A scratch high on a tree trunk is not a particularly costly signal, but, at least until tigers learn to stand on boxes, it is a hard one for a small tiger to fake. Signals can be costly without being hard to fake simply because they need to be loud or otherwise conspicuous in order to get through to a receiver in a noisy environment. Signaling theorists refer to this as “efficacy cost” to distinguish it from the “strategic costs” that make some signals hard to fake.75 For example, people yell in crowded nightclubs not to show off their powerful vocal cords but simply to be heard above the background noise of amplified music and everybody else who is also yelling just to be heard.

The value of hard-to-fake signals was recognized independently by both biologists and economists. Although biologists are most interested in signals designed by natural selection, economists (and other social scientists) are concerned mostly with signals designed by humans (e.g., advertising agents). However, because the same principles of signal design apply to both, we can use the same body of theory to study signals regardless of how they were designed. In biology, the classic problem is mate choice, which very often involves females choosing among males. In such a situation, selection would favor males who find ways of displaying the high quality of their genes in ways that cannot be imitated by low-quality males. This results in the great variety of conspicuous and sometimes even dysfunctional characteristics that are often easiest to spot by comparing the males and females of a particular species—peacocks versus peahens, for example.76

In economics, Michael Spence’s theory of job market signaling provides an example of a hard-to-fake signal that is both familiar and clear.77 Firms would like to hire focused, hard-working people, but how can they distinguish them from the unfocused and lazy ones? Focused, hard-working people, for their part, want to make sure that prospective employers can tell the difference between them and their unfocused, lazy competitors. One way to show that you are capable of lots of focused, hard work is to gain admission to an elite university and earn honors while you are there. Thus, employers have a reason to attend to educational achievements even if they have nothing to do with the specific job for which they are hiring. Hard-to-fake workplace signaling may continue even after one has been hired. We all know some model workplace citizens, people who work late, volunteer to take on additional tasks, bring donuts to meetings, and so on. Sabrina Deutsch Salamon and Yuval Deutsch have suggested that those sorts of “organizational citizenship behaviors” may constitute a hard-to-fake signal of otherwise intangible qualities such as conscientiousness, thoughtfulness, and commitment to the organization.78 Political scientist Ken Kollman has applied Spence’s idea to the phenomenon of grassroots lobbying. Organizations that do have the support of large numbers of people find it easy to create grassroots lobbying campaigns. Others find such campaigns to be very expensive and are less likely to attempt them at all.79
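
The logic can be made explicit with a standard textbook formalization of Spence’s model; the notation below is our own illustration, since the chapter presents the idea only verbally.

```latex
% One standard formalization of Spence's separating condition (our own
% illustration, not from the chapter). Two worker types with productivities
% \theta_H > \theta_L; education level e adds nothing to productivity but
% costs e/\theta, so it is cheaper for the able. Employers pay signalers
% \theta_H and nonsignalers \theta_L. A level e^* separates the types when
% mimicry does not pay for the low type and signaling does pay for the high:
\[
  \theta_H - \frac{e^*}{\theta_L} \;\le\; \theta_L
  \qquad\text{and}\qquad
  \theta_H - \frac{e^*}{\theta_H} \;\ge\; \theta_L ,
\]
\[
  \text{i.e.}\qquad
  \theta_L\,(\theta_H - \theta_L) \;\le\; e^* \;\le\; \theta_H\,(\theta_H - \theta_L).
\]
% The credential is believable not because it creates skill but because only
% high-productivity workers can afford to acquire it.
```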

Although it is often said that hard-to-fake signals arise when there is a conflict of interests between signalers and receivers, this is not quite right. Hard-to-fake signals are most useful when there is a broad conflict of interests between classes of signalers and receivers (e.g., males and females; job applicants and employers), but where particular signalers and particular receivers have at least a temporary confluence of interests. Peahens have an evolved skepticism toward peacocks’ claims about their own quality as mates, but it is in their best interests to mate with peacocks who are of truly high quality. Employers have a learned skepticism regarding job applicants’ claims about themselves, but it is in their best interests to hire applicants who are of truly high quality. The link between the signal and the underlying quality it displays makes it both informative and believable.80 While these sorts of situations do involve conflicts of interest between broad classes of signalers and receivers, the confluences of interest between receivers and high-quality signalers, whether males or job applicants, mean that these interactions have much to do with coordination, as well. As we will see in chapter 6, coordination problems are solved through the creation of shared knowledge: Everyone needs to know the solution to the problem, and they also need to know that everyone knows it. Hard-to-fake signals create common knowledge by making it clear to receivers that, while they may have reason to suspect that their interests and those of the signalers don’t coincide, in fact they do.

The process employers use to select new employees (and vice versa) is a form of social selection. Social selection creates incentives to send signals about one’s quality as an ally in a variety of circumstances, and not all of them will lead to signals that one might call “prosocial.” For instance, if you are looking for people to be on your side in an armed conflict, you might want to forego such characteristics as agreeableness and focus instead on aggressiveness. However, it is often the case that social signals do take prosocial forms, such as public generosity and participation in group defense.81 One reason for this is broadcast efficiency.82 If you want to get a message out to a lot of people, you need to get their attention. A simple way to do this is to do something that they will appreciate, such as provide a public good. Thus, public generosity may serve as a signal of one’s status and ability to control resources or one’s cooperative nature.83 Such generosity is not cooperation as we have defined it in this book, but it is relevant to the study of cooperation more broadly. By making it worthwhile for an individual to provide a public good without help from others, such “competitive altruism,” as evolutionary theorist Gilbert Roberts has called it, has the potential to solve the collective action dilemma simply by removing it. By creating a system of unambiguous signals of cooperativeness, it also has the potential to enhance the effects of reciprocity and indirect reciprocity.

One of the most intriguing examples of generosity as a signal to others is provided not by any human society but rather by a bird called the Arabian babbler. Amotz Zahavi and his colleagues have been studying babblers in an Israeli nature preserve since 1970.84 Babblers are highly social creatures. They live in groups, and each group defends a territory against both neighboring groups and individual babblers who are not part of a territory-holding group. Babblers do lots of nice things for each other. When they eat, one of them will act as a sentry, on the lookout for predators, and not eat despite being hungry. Adult babblers give each other food and feed each other’s offspring, again despite being hungry themselves. Rather than flying away when the group is under attack, they risk their own lives by attacking dangerous predators such as raptors and snakes. All of this is fascinating. But what is most intriguing about Arabian babblers is that they actually compete for the right to engage in these altruistic behaviors. For example, higher-ranking babblers expel lower-ranking subordinates from the sentry post, sometimes even shoving and hitting the more stubborn ones. Lower-ranking babblers also seek sentry duty, but rather than trying to expel a higher-ranking individual from the guard post, they position themselves nearby and wait for an opening. Keep in mind that sentries cannot eat, so their eagerness to engage in this behavior is truly an altruistic act. Babblers also compete with each other for the right to feed the group’s young, and feeding among adult babblers generally goes only one way: from higher-ranking individuals to lower-ranking ones. Zahavi interprets this pattern in terms of signaling theory. Babblers use altruism to signal their quality to each other. The payoff is social status, which, in turn, pays off in terms of reproductive success. Social status is a limited resource, and those who seek it are playing a zero-sum game. Such high stakes can lead to strong selection pressure, even in favor of helping one’s competitors.

Like Arabian babblers, people often compete for opportunities to behave generously toward others. In a laboratory study involving college students, people were more generous in a continuous Prisoner's Dilemma game when they knew that a third party would observe their behavior while choosing between them and others for inclusion in a cooperative task in which they could earn additional money.85 Outside the lab, possible examples of competitive altruism are widespread, ranging from sponsorship of expensive potlatch ceremonies among Northwest Coast Indians,86 to charity auctions, to donations of new buildings on college campuses by wealthy philanthropists. A detailed study of public generosity as a hard-to-fake signal was conducted on Mer, an island in the Torres Strait.87 Although Mer is administered by Australia, the Meriam are culturally and linguistically closer to the peoples of New Guinea than to Australia's Aborigines. Meriam eat a variety of foods, including large green turtles (Chelonia mydas), each of which yields about fifty kilos (more than one hundred pounds) of meat. There are two ways to catch turtles. When they are nesting, they can easily be collected off the beaches by just about anybody: men, women, children, and the elderly. Turtle meat obtained that way is mostly shared privately among just a few households. During the non-nesting season, however, the only way to catch a turtle is to head out to sea and capture one in its own element. This is a difficult, risky, and expensive proposition, but some men still do it. Furthermore, when they do, the meat is never consumed privately but rather widely shared at public ceremonies. Why provide turtle meat, or any other public good, when so many free riders will consume it? Because others notice. On average, more than a third of the island's population attends any one feast, which supports the idea that public generosity is a particularly good way to signal one's quality because of its broadcast efficiency. The attention turtle hunters receive eventually results not only in adulation but also in more mates and children.88

The example of Mer turtle hunting helps answer a longstanding question about human evolution: How did our ancestors get past the collective action dilemma and become so good at hunting big game? Unlike most other hunting species, human hunters routinely take down prey much larger than themselves. Although it was once thought that this had a direct impact on hunters' reproductive success by supplying their families with meat, careful ethnographic work among living hunter-gatherers indicates that big game is often much more widely shared. Big game hunting may represent the original public good and, thus, the original collective action dilemma. Kristen Hawkes broke through the haze on this issue with a bold idea called the "show-off hypothesis." Hawkes first pointed out that hunters typically forego small, reliable prey that is shared within families in favor of large, risky game that is more widely shared.89 She then suggested that the benefit to good hunters was reproductive: because good hunters are valuable to the whole community and because group membership among hunter-gatherers is usually quite flexible, people have an incentive to keep good hunters happy. This may mean taking better care of their children or tolerating the affairs they have with other men's wives. The problem is that this creates a second-order collective action dilemma: Why should any one person do whatever it takes to keep the good hunter in the community? Why not let someone else carry that burden? In short, why not be a free rider? Recognizing big game hunting as a costly signal breaks us out of this second-order dilemma. If big game hunting is an honest signal of a man's quality as a mate or ally, then receivers who need high-quality mates and allies have their own selfish reasons for attending to it. Although the result of all of this is a public good, only private benefits need be invoked to explain it.90
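
The honest-signaling logic can be reduced to a simple condition: the display must cost a low-quality signaler more than the social reward is worth, while costing a high-quality signaler less. The sketch below is our own illustration with made-up numbers, not Meriam data.

REWARD = 5.0                    # mates and allies gained by a successful displayer
COST = {"high quality": 3.0,    # the display is affordable for an able hunter
        "low quality": 8.0}     # and ruinously expensive for a faker

for hunter_type, cost in COST.items():
    net = REWARD - cost
    choice = "displays" if net > 0 else "stays home"
    print(f"{hunter_type}: net payoff of displaying = {net:+.1f} -> {choice}")

# Because only high-quality hunters display, receivers can trust the
# signal, and the widely shared meat (a public good) arrives as a
# byproduct of purely private incentives.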

In addition to letting others know about one's quality as a mate or ally, signals can also convey information about one's commitments. The role of commitments in strategic interactions was first explored by Thomas Schelling, who pointed out that it is often in our long-term best interests to convince others that we will not act in our short-term best interests.91 For example, most apartment owners require renters to sign leases that commit them to at least a year of residence and that specify penalties if the lease is broken. Renters sign leases because they want a commitment from the apartment owner that they will not be evicted so long as they pay the specified rent. Renters agree to forego opportunities to move to better apartments, and apartment owners agree to forego opportunities to rent to people willing to pay more. Because leases are legal contracts backed by the courts, they represent very believable commitments.92 A more interesting situation arises when there is no legal recourse if a commitment is broken. In that case, signalers must find ways of making their commitments believable to skeptical receivers. Once again, hard-to-fake signals provide a solution.
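
The lease example reduces to a comparison a skeptical landlord can do in their head. The toy calculation below is our own construction, not Schelling's; the dollar figures are arbitrary.

GAIN_FROM_MOVING = 300.0   # renter's benefit if a better apartment turns up
LEASE_PENALTY = 500.0      # contractual cost of breaking the lease

def renter_moves(penalty):
    # The renter moves only if the gain outweighs the penalty.
    return GAIN_FROM_MOVING - penalty > 0

print("without a lease: renter moves?", renter_moves(0.0))           # True
print("with a lease:    renter moves?", renter_moves(LEASE_PENALTY)) # False

# By attaching a penalty to one's own future defection, the renter
# makes staying a best response, and the promise becomes credible.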

The question for a signaler is exactly how to make a believable signal of commitment. Receivers sometimes give signalers a helping hand in this regard by specifying what signalers must do for their commitment to be convincing. This is often done by groups that need believable commitments from their members in order to hold together. A common feature of groups is that members must forego their own short-term best interests either to join or to remain in the group. This can take mild forms, such as membership and initiation fees, but many groups require much more onerous signals of commitment. Religious groups can be particularly demanding. The kinds of acts and sacrifices required by religion are familiar to everyone: tithing and other contributions of wealth and labor, dietary restrictions, frequent prayer, distinctive clothing, and participation in various rituals, which may involve discomfort if not actual pain.

An intangible but important sign of commitment is often demanded by the very content of religious beliefs. Many religious beliefs involve things that are impossible: virgin birth, resurrection after death, statues that drink milk, lamps that burn far longer than they should on a given amount of fuel, and so on. To nonbelievers, such things remain impossible. But precisely because they are impossible, believing in them displays one's willingness to suspend one's rational faculties and thus signals one's commitment. This was clearly recognized by Paul, who, in his correspondence with early Christians, encouraged them to embrace beliefs that non-Christians would find "scandalous" and even "moronic."93 The payoff for religious groups of imposing such costs is confidence that their members are truly committed. The payoff to members of paying such high costs is membership in a cooperative group and the benefits of cooperation that come with it.94
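
The screening logic behind such costly demands can be stated numerically. In the sketch below (the values are illustrative assumptions of ours), a requirement that costs more than membership is worth to a free rider, but less than it is worth to the truly committed, filters out the former.

REQUIREMENT_COST = 6.0        # tithes, rituals, restrictions, time

# What membership in the cooperative group is worth to each type:
VALUE = {"committed": 10.0,   # intends to stay and reciprocate
         "free rider": 3.0}   # wants the benefits without contributing

for member_type, value in VALUE.items():
    joins = value - REQUIREMENT_COST > 0
    print(f"{member_type}: joins? {joins}")

# Only the committed find the cost worth paying, so everyone who pays
# can be trusted: the demand itself makes the signal believable.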

This idea has recently been tested in two very different religious settings. Richard Sosis and his colleagues compared religious and secular kibbutzim in Israel. Belonging to any sort of kibbutz means fulfilling some group demands, but religious kibbutzim typically expect more signs of commitment than do those grounded in secular ideologies. Accordingly, religious men made more cooperative decisions in a common-pool resource game than secular men did, and religious men who participated in thrice-daily communal prayers were the most cooperative of all.95 Inspired by Sosis's work, Montserrat Soler conducted a similar study in a very different setting: congregations (terreiros) belonging to an Afro-Brazilian religion called Candomblé. Each terreiro is dedicated to one of several gods (orixás) in a pantheon derived from African sources. Members of terreiros are required to engage in a wide range of costly behaviors, including sponsorship of large feasts, participation in exhausting dances and other rituals, adherence to a variety of taboos and regulations regarding food and clothing, and completion of difficult and time-consuming initiation rites. Terreiros help members who are in financial trouble, and some members even live at the terreiro. Like Sosis, Soler found a correspondence between religiosity and cooperativeness. Terreiro members who made more displays of commitment to the religion made more cooperative choices in a Public Goods Game, and members who were more likely to need support from the terreiro made more displays of commitment. A possible criticism of this kind of work is that it does not distinguish between cooperation that occurs because of the signaling among members of religious congregations and cooperation that occurs because religious believers fear supernatural punishment, whether in this life or the next, if they fail to cooperate. One advantage of Soler's study is that Candomblé includes no beliefs in supernatural punishment or in a judgmental afterlife. Signals appear to be doing all of the work.96
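
Both studies scored cooperativeness with standard economic games. For readers unfamiliar with them, the sketch below shows the payoff structure of a generic Public Goods Game; the parameters are textbook defaults, not the ones Sosis or Soler actually used.

ENDOWMENT = 10.0    # each player's stake
MULTIPLIER = 1.6    # pooled contributions are multiplied by this...
N_PLAYERS = 4       # ...and the pot is split evenly among all players

def payoffs(contributions):
    pot = MULTIPLIER * sum(contributions)
    share = pot / N_PLAYERS
    return [ENDOWMENT - c + share for c in contributions]

# Because MULTIPLIER / N_PLAYERS = 0.4 < 1, each unit contributed
# returns only 0.4 to the contributor: free riding pays individually,
# even though full contribution maximizes the group total.
print(payoffs([10, 10, 10, 10]))  # all cooperate: 16.0 each
print(payoffs([0, 10, 10, 10]))   # one free rider: 22.0 vs. 12.0 each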

The irrationality of religious beliefs and practices enhances their effectiveness as signals of commitment. Because committing oneself to act against one's own short-term interests is in itself irrational, irrational acts can be convincing displays of such commitments. Robert Frank has suggested that emotions may play an important role in such displays.97 Consider, for example, romantic love. Committing oneself to a single man or woman is, in most circumstances, irrational. Unless the person in question is of the highest possible quality in every way and will remain so forever, someone more desirable, at least in one way or another, is quite likely to come along eventually. But unless one commits to staying with a romantic partner even in the face of temptation, one is likely to be quite lonely. Although the commitment requires one to forego opportunities for gain should they arise, it guarantees access to something that is both real and good enough to justify the trade-off. One way we accomplish this is through the legalities of formal marriage, but it is also accomplished simply through the emotion of romantic love. By blinding us both to our partner's shortcomings and to the competing charms of other prospective mates, romantic love makes our commitment believable. The power of emotional signals is also evident when we compare religious signals, which typically have a powerful emotional element, with secular ones, which typically appeal more to the head than to the heart. This is shown not only by Richard Sosis's findings regarding religious versus secular kibbutzim but also by his comparative study of nineteenth-century American communes. Religious communes tended both to last much longer and to make more demands upon their members than did secular communes. Furthermore, increasing the number of costly requirements for membership had no effect on secular commune longevity, but it did increase religious commune longevity.

From individuals to organizations

Members of terreiros, kibbutzim, and other religious organizations are able to send signals of commitment to one another because they share not only a common belief system but also a common organizational structure. Such structures are not limited to religious congregations; they include firms, bureaucracies, sports teams, and a wide variety of other kinds of groups. On an evolutionary time scale, such groups are short-lived, which might tempt some evolutionarily minded scholars into concluding that we need not study them in order to understand cooperation. That might save us all a lot of work (and shorten this book considerably!), but it would be a mistake. Individual groups may not last long, but cooperating within organizational structures is an enduring feature of human existence. Thus, if we wish to understand cooperation not only on an evolutionary scale but also on a human scale, we need to understand the important role that institutions and organizations play in making cooperation possible as we go about our everyday lives. That is the focus of the next chapter.

Notes:

(1.) See Alford and Hibbing 2008 for a review of these studies and an excellent discussion of the broader role of biology in political science. Also see Alford et al. 2005; Fowler et al. 2008.

(18.) Aktipis 2004, 2011.

(29.) Price 2006; see also Humphrey (1997). Price framed his findings in terms of a controversial approach to cooperation called "green beard theory." The idea behind green beard theory is simple: If cooperative individuals could simply identify each other, it would be easy for cooperative behavior to spread in a population. As W. D. Hamilton recognized, one way to do this would be for there to be some outward marker of one's willingness to cooperate with others who have the same marker and the same willingness to cooperate with others who have it. Ever since Richard Dawkins made the somewhat facetious suggestion that green beards might do the trick, this has been known as green beard theory (Dawkins 1976). For this to work as the basis for a genetically encoded tendency toward cooperativeness, the genes that code for the cooperative behavior, those that code for the green beard, and those that code for the ability to recognize green beards must all be closely linked. As Dawkins pointed out, the problem is that such linkage is unlikely to last very long. When the gene for cooperativeness is inherited separately from the gene for the green beard, green beards will no longer be reliable indicators of a willingness to cooperate with others who have green beards. When that happens, uncooperative individuals with green beards will abound, refusing to cooperate with other green-bearded individuals but benefiting from the generosity of individuals for whom green beards remain linked with cooperativeness. Furthermore, because green beard genes indicate the presence of only a single gene (or a bundle of tightly linked genes) rather than genetic similarity between individuals across their entire genomes, selection on the rest of the genome will favor genes that lead individuals to ignore green beards (West et al. 2007; Gardner and West 2010). Despite how unlikely it seems at first glance, green beard theory continues to receive attention from theorists (e.g., Riolo et al. 2001, 2002; Roberts and Sherratt 2002; Jansen and van Baalen 2006; Traulsen and Schuster 2003), and examples have begun to show up in species as diverse as amoebas (Queller et al. 2003), yeast (Smukalla et al. 2008), and fire ants (Keller and Ross 1998). Green beard theory may even help explain how mammalian mothers and fetuses interact (Haig 1996; Summers and Crespi 2005). The intellectual payoff of green beard theory for the study of human interactions outside the womb, however, is less clear. Price has attempted to solve the problem of a weak link between green beards and actual cooperativeness by suggesting that cooperativeness itself serves as the green beard. The result is a set of predictions very similar to those derived from reciprocity theory regarding cooperative partner choice and from indirect reciprocity theory regarding the importance of reputation.

(33.) A good place to start reading the large literature on this topic is Cosmides and Tooby (2005).

(35.) You need to turn over the D and 7 cards. The D card is obvious, and almost everyone gets it. The 7 card is trickier. You need to turn it over to check whether there is a D on the other side. The rule says nothing about the K card, so you don’t need to turn it over. Many people want to turn over the 3 card because it is mentioned in the rule, but it is not necessary. The rule states only that if there is a D on one side, then there must be a 3 on the other. It does not say that if there is a 3 on one side then it must have a D on the other.

(41.) In order to distinguish between reciprocity and indirect reciprocity, many writers now refer to "direct reciprocity." We resist this move on the grounds that "reciprocity" is a perfectly good description of the phenomenon in question (see Box 4.2). Technically, "indirect reciprocity" is not reciprocity at all. Although we are tempted to encourage people to adopt some alternative name for it, we fear that ship has already sailed.

(43.) Alexander 1977, 1987.

(48.) See Bshary and Grutter (2006) for a nonhuman example of image scoring.

(53.) See, for example, Chong (1991). For a recent review of the literature on reputation management, see Tennie et al. (2010).

(54.) The Greater Internet Fuckwad Theory originated in an online comic called Penny Arcade (www.penny-arcade.com). We first heard about it while listening to Marketplace, a radio program produced by American Public Media, on August 2, 2010. The quote is from Eva Galperin of the Electronic Frontier Foundation, a civil liberties organization.

(64.) Smith 2003, 2010.

(66.) The only possible exceptions of which we are aware are recruitment signals in ants and bees, and possibly also ravens (see Bickerton 2009 for more on displacement, animal signaling systems, and the evolution of language).

(67.) See, for example, Bickerton (2009); see also Cronk (2004a).

(71.) Smith 2003, 2010.

(89.) Hawkes 1990, 1991, 1993.

(93.) This is a reference to 1 Corinthians 1:18-25. The original Greek words skandalon and moros, from which the English words “scandalous” and “moronic” are derived, appear in most English translations of the Bible as “stumbling block” and “foolishness” (Strong 1890).

(95.) Sosis and Ruffle 2003, 2004.