Web of Lies
The underlying motivation behind verification is the presence of a widely held norm of truth-telling [13]. This means that when false information is identified, people can be expected to make that falsity known and not spread lies, even when doing so goes against their self-interest. Despite this normative expectation, the effectiveness of verification may be compromised, as people do not act in a vacuum; rather, they act in naturally occurring social networks in which those connected to one another have similar dispositions, interests and incentives [14,15]. It has been shown that social networks have become more polarized over time [16,17,18,19], which may lead people to prioritize fitting in and supporting views shared by other group members, thereby reinforcing group identity [19,20,21], instead of incorporating contradictory information [22,23] and telling the truth [13,24].
Adding to these benefits, the experiment we design also helps us better understand the mechanisms that may drive the effectiveness of verification, such as the psychological cost that individuals experience when telling lies or the reputational cost they perceive when identified as liars [13]. We interrogate these mechanisms through experimental manipulations that vary the presence and type of verification to test which of these channels may enhance the effect of verification. Our findings can help inform the design of effective interventions and policies to prevent the spread and amplification of lies on social networks in real-world settings, where people who share information are surrounded by others similar to them [25,26,27].
We design a one-shot sequential game, which we call the web of lies game (see Fig. 1), where three players are assigned to different positions in a linear communication network: first, F, intermediate, I, and last, L. At the beginning of the game, player F chooses a card from a \(12 \times 12\) grid, which reveals an integer, x, between 1 and 30 written on the card. The number x is observed only by F and is referred to as the hidden number. Player F then sends a number, \(x_F\), also between 1 and 30, to player I, reporting on x. Player I observes \(x_F\), but not x, and reports a number, \(x_I\), under the same conditions to player L. Finally, player L observes \(x_I\), but not x or \(x_F\), and reports the final number, \(x_L\), this time to the experimenter.
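To make the information structure concrete, here is a minimal sketch of one round of the game in Python. The strategy functions and the uniform placeholder draw are our own illustrative assumptions (the actual draw is skewed toward smaller numbers, as described next); only what each player observes follows the design above.

```python
import random

def play_web_of_lies(report_F, report_I, report_L, rng=random.Random(0)):
    """Simulate one round of the web of lies game (illustrative sketch).

    Each report_* argument is a strategy: a function from the number a player
    observes to the number (1-30) he or she passes on. Per the design, only F
    sees the hidden number x; I sees only x_F; L sees only x_I.
    """
    x = rng.randint(1, 30)   # placeholder uniform draw; the real card grid
                             # makes smaller numbers more likely (see below)
    x_F = report_F(x)        # F reports on x to I
    x_I = report_I(x_F)      # I reports on x_F to L
    x_L = report_L(x_I)      # L reports to the experimenter
    return {"x": x, "x_F": x_F, "x_I": x_I, "x_L": x_L}

# Example: F inflates the hidden number by 5 (capped at 30); I and L pass on
# whatever they receive.
round_outcome = play_web_of_lies(
    report_F=lambda x: min(x + 5, 30),
    report_I=lambda r: r,
    report_L=lambda r: r,
)
```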
We choose a distribution of hidden numbers where any integer between 1 and 30 has a positive probability of being drawn by player F, which ensures that no reports are obvious lies. As there are more cards with smaller numbers, the probability is higher for smaller numbers to be drawn, which is known to all (see Supplementary Information Sect. 7 for the full instrument). In sum, the truth is costly, as far as the monetary incentives of the players are concerned, and lies may be suspected based on the size of the reported number but are never evident without verification. From a normative perspective, however, lies come at a cost, as players have been informed how they should play the game.
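The exact card counts are given in Supplementary Information Sect. 7; as a stand-in, the sketch below draws the hidden number with hypothetical linearly decreasing weights, which reproduces the two properties stated above: every integer from 1 to 30 has positive probability, and smaller numbers are more likely.

```python
import random

NUMBERS = list(range(1, 31))
# Hypothetical weights: number n gets weight 31 - n, so 1 is the most likely
# draw and 30 the least likely. The real 12 x 12 card grid realizes some such
# skewed distribution (Supplementary Information Sect. 7).
WEIGHTS = [31 - n for n in NUMBERS]

def draw_hidden_number(rng=random.Random()):
    """Draw x with P(x) positive for every x in 1..30 and decreasing in x."""
    return rng.choices(NUMBERS, weights=WEIGHTS, k=1)[0]
```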
To preview the structure of our analysis and main results: first, we quantify the effect of verification on the spread of lies at the group level and find that verification has limited effectiveness in preventing lies. To establish these findings, we use the three treatments already described: no, endo, and exo. Second, we attempt to enhance the effectiveness of verification using two strategies. In two additional treatments, we increase the psychological cost of lying by introducing passive players whose payoffs are reduced by false reports. In three further treatments, we increase the reputational cost of lying by making evident who lies. Of these two strategies, only the latter works.
To evaluate how effective verification is in reducing the spread of false reports, we focus on two key measures: the likelihood of lying and the size of the lies told. We begin by analyzing group-level outcomes and then turn to the behavior of participants in each position in the communication chain.
At the group level, a lie is reported if the final report is different from the hidden number, \(x_L \ne x\), and the size of the lie told is the magnitude of that difference, \(x_L - x\) (see bars in Fig. 2A). We compare the share of lying groups and the average size of lies told in each treatment with verification to the treatment with no verification. We find that only endogenous verification is effective in preventing the spread of lies, as groups in endo lie 22 percentage points less often (\(p = 0.004\)) and tell smaller lies, 7.7 versus 10.4 (\(p = 0.084\), i.e., marginally significant), than those in no. In contrast, compared to groups in no, groups in exo lie at the same rate (\(p = 0.118\)) and by an indistinguishable amount (10.4 vs. 9.5, respectively, \(p = 0.576\)).
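Under these definitions, the group-level measures reduce to a few lines of arithmetic per treatment. A sketch, assuming rounds are stored as dictionaries like those returned by the simulation earlier; averaging lie sizes over lying groups only is our reading of the measure, not something stated explicitly here.

```python
from statistics import mean

def group_outcomes(rounds):
    """Share of lying groups and average lie size for a list of rounds.

    A group lies if the final report differs from the hidden number
    (x_L != x); the size of a lie is x_L - x. Restricting the average to
    lying groups is an assumption of this sketch.
    """
    lied = [r["x_L"] != r["x"] for r in rounds]
    sizes = [r["x_L"] - r["x"] for r in rounds if r["x_L"] != r["x"]]
    return {
        "share_lying": mean(lied),  # booleans average as 0/1
        "avg_lie_size": mean(sizes) if sizes else 0.0,
    }
```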
Importantly, this moderate effect on preventing lies in the treatments with verification is not due to low levels of actual verification of the truth (see diamonds in Fig. 2A). Verification was randomly assigned to a large share of groups in exo and was chosen by an even larger share in endo (\(75\%\)).
Second, we analyze the reports made by player F in the network. The average reports by player F were 15.0, 15.3, and 13.4 in no, exo and endo, respectively. Player F lies by reporting a different number than the hidden number that he or she drew (\(x_F - x \ne 0\)), and the size of the lie is the magnitude of that difference (see the gap between the first and second bars in Fig. 2A). Neither the probability of lying nor the size of the lies is affected by the anticipation of exogenous (\(p=0.788\) and \(p=0.976\)) or endogenous verification (\(p=0.407\) and \(p=0.468\)) when compared to the baseline condition of no verification. This suggests that knowing that the last player could identify whether the number reported to him or her is false does not affect the lying behavior of first players.
Third, we consider the reports made by players in the intermediate position, I. The average reports by player I were 18.7, 18.2, and 15.8 in no, exo and endo, respectively. Player I lies by reporting a different number than the one he or she received from player F (\(x_I - x_F \ne 0\)), and the size of the lie is the magnitude of that difference (see the gap between the second and third bars in Fig. 2A). Note that player I may pass on a lie without lying himself or herself, by repeating a false report received from player F. Compared to the baseline, there are no significant differences in the magnitude of the lies told by player I in endo (\(p=0.244\)) or in exo (\(p=0.732\)). However, the probability that player I lies is significantly smaller in the endogenous verification condition (\(p=0.046\)). When we analyze how the reports from the intermediate player differ from the hidden number (see the gap between the first and third bars in Fig. 2A), we find that a significantly lower share of false reports reaches the final player in endo than in no (\(p = 0.045\)). Moreover, smaller numbers are reported by the intermediate player (\(15.9\) …).
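The position-level measures used in the last two paragraphs follow the same pattern as the group-level ones, except that each player's lie is judged against the number he or she observed, not against the hidden number. A sketch under the same data layout as above:

```python
def position_lies(r):
    """Lie indicator and (signed) lie size for each position in one round.

    F lies relative to the hidden number x; I and L lie relative to the
    report they received. Note that "I did not lie" does not imply "I's
    report is true": I can truthfully repeat a false report from F.
    """
    return {
        "F": {"lied": r["x_F"] != r["x"],   "size": r["x_F"] - r["x"]},
        "I": {"lied": r["x_I"] != r["x_F"], "size": r["x_I"] - r["x_F"]},
        "L": {"lied": r["x_L"] != r["x_I"], "size": r["x_L"] - r["x_I"]},
    }
```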
Our analyses reveal that endogenous verification is a more effective intervention than exogenous verification, which is in turn more effective than no verification. We identify two avenues through which this occurs. On the one hand, endogenous verification reduces the number of lies told by those who anticipate that their lies will be identified. On the other hand, it appears to motivate those who have the agency to verify the truth to correct lies and report truthfully. However, our data reveal that groups still report lies in endo and that the size of the lies told is only moderately lower than in no. The literature on lying behavior and the effects of transparency points to multiple mechanisms behind truth-telling, of which the two most prominently discussed are the psychological cost of lying and concern for reputation [13]. We explore six new treatments that aim to raise these costs and thereby increase the effectiveness of verification when verification is endogenous.
First, we address how increasing the psychological cost of lying may enhance the efficacy of verification. For this, we design two additional treatments based on endo in which we add one or two passive players, creating victims of lies, whose payoffs decrease as a function of the lies told by player L. We label these treatments vctm when there is a single passive player and vctms when there are two. Victims make no decisions, and their payoffs are 5 cents \(\times \,(2x - x_L)\). This means that victims earn the same as the active players when the last report is truthful (\(x_L = x\)), but their payoffs are negatively affected by lies. This simple point is made clear to participants by stating that truth-telling results in an equal payoff for everyone. Unlike in endo, in vctm and vctms, lying by reporting a number different from the hidden number hurts others, which is expected to increase the psychological cost of lying for final players. Thus, we evaluate whether the presence of negative externalities decreases the share of groups that lie or reduces the size of the lies that groups tell relative to endo.
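To see how the victim payoff formula behaves, consider the arithmetic below. The assumption that active players earn 5 cents \(\times\, x_L\) is our inference from the statement that truthful reporting equalizes payoffs; it is not spelled out in this section.

```python
def victim_payoff_cents(x, x_L):
    """Victim payoff in vctm/vctms: 5 cents x (2x - x_L)."""
    return 5 * (2 * x - x_L)

# With a hidden number x = 10:
#   truthful final report (x_L = 10): victim earns 5 * (20 - 10) = 50 cents,
#     matching the active players' inferred 5 * x_L = 50 cents;
#   inflated report (x_L = 20):       victim earns 5 * (20 - 20) = 0 cents,
#     while the active players' payoff rises with x_L.
assert victim_payoff_cents(10, 10) == 50
assert victim_payoff_cents(10, 20) == 0
```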