Abstract: People often extrapolate from data samples, inferring properties of the population such as the rate of some event, class, or group (e.g., the percent of female scientists, the crime rate, the chances of suffering some illness). In many circumstances, though, the observed sample is non-random, i.e., affected by sampling bias. For instance, news media rarely display (intentionally or not) a balanced view of the state of the world, focusing in particular on dramatic and rare events. In this respect, a recent literature in Economics suggests that people often fail to account for sample selection in their inferences. We offer evidence of this phenomenon at the individual level in a tightly controlled lab setting and explore the conditions for its occurrence. We conjecture that people tend to update their beliefs as if no selection issues existed, unless they have extremely strong evidence about the data-generating process and the inference problem is simple enough. Consistent with this conjecture, we find no evidence of selection neglect in an experimental treatment in which subjects must infer the frequency of some event from a non-random sample while knowing the exact selection rule. In two treatments where the selection rule is ambiguous, in contrast, people extrapolate as if sampling were random. Further, they grow increasingly confident in the accuracy of their guesses as the experiment proceeds, even when the accumulated evidence patently signals a selection issue and hence warrants caution in the inferences made. This holds even when the instructions give explicit clues about potential sampling issues.

Keywords: Beliefs, Experiments, Extrapolation, Sampling bias, Selection problem
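To illustrate the inference problem the abstract describes, the following minimal simulation sketch (our own illustrative construction, not drawn from the paper; all rates and selection probabilities are assumed values) shows how a naive frequency estimate from a selected sample overstates the true rate, and how knowing the exact selection rule permits a reweighted, unbiased estimate:

```python
import random

random.seed(0)

TRUE_RATE = 0.10       # true population frequency of the event (assumed)
P_REPORT_EVENT = 0.90  # assumed chance a "dramatic" event enters the sample
P_REPORT_OTHER = 0.20  # assumed chance a non-event enters the sample
N = 100_000

# Draw a population and apply the (non-random) selection rule.
sample = []
for _ in range(N):
    is_event = random.random() < TRUE_RATE
    p_report = P_REPORT_EVENT if is_event else P_REPORT_OTHER
    if random.random() < p_report:
        sample.append(is_event)

# Selection neglect: treat the selected sample as if it were random.
naive = sum(sample) / len(sample)

# Correction via inverse-probability weighting, feasible only when the
# selection rule is known (as in the unambiguous treatment).
weights = [1 / P_REPORT_EVENT if s else 1 / P_REPORT_OTHER for s in sample]
corrected = sum(w for s, w in zip(sample, weights) if s) / sum(weights)

print(f"true rate       {TRUE_RATE:.3f}")
print(f"naive estimate  {naive:.3f}")    # biased upward by oversampling
print(f"IPW estimate    {corrected:.3f}")  # recovers the true rate
```

With these assumed parameters, the naive estimate lands near 0.33 while the reweighted estimate recovers the true rate of 0.10, which is the gap a subject neglecting selection would fail to close.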