Hyperactive agency detection (HAD), also known as the hyperactive agent-detection device (HADD), is the most widely accepted explanation for religious belief in biology, psychology, and sociology. It offers a naturalistic account of the origin of the beliefs that form the basis of every religion. Because of this, you can expect many religious believers to be skeptical of its claims. Some of them call it a "just-so" story, part of "atheist mythology." The irony of religionists making this charge, when their own beliefs often rest on the mere testimony of religious texts that are chock-full of just-so stories, is stupendous. A just-so story is "an unverifiable and unfalsifiable narrative explanation for a cultural practice, a biological trait, or behavior of humans or other animals." Is the HADD hypothesis unverifiable and unfalsifiable? It must be both in order to meet the criteria of a just-so story. Here I want to list some of the evidence supporting the HADD hypothesis and defend the view that it is a valid scientific explanation.
In their 2008 paper "The evolution of superstitious and superstition-like behaviour," Harvard biologist Kevin R. Foster and University of Helsinki biologist Hanna Kokko model the origin of superstitious behaviours as an incorrect assignment of cause and effect, and "conclude that behaviours which are, or appear, superstitious are an inevitable feature of adaptive behaviour in all organisms, including ourselves."
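The core of the Foster–Kokko argument is a cost-benefit asymmetry: when false alarms are cheap and missed threats are catastrophic, selection favors organisms that over-attribute causes. Here is a minimal toy sketch of that logic in Python; the probabilities and costs are illustrative assumptions of mine, not values from their paper:

```python
# Toy expected-cost comparison in the spirit of Foster & Kokko (2008).
# An organism hears a rustle; with probability p it is a predator.
# It can always flee (over-cautious, "superstitious") or never flee.
# All numbers are illustrative, not taken from the paper.

def expected_costs(p_predator, cost_fleeing, cost_eaten):
    """Return expected cost per rustle for each strategy."""
    always_flee = cost_fleeing            # pays a small cost every time
    never_flee = p_predator * cost_eaten  # pays a huge cost when wrong
    return always_flee, never_flee

# Predators cause only 2% of rustles; fleeing is cheap, being eaten is not.
flee, stay = expected_costs(p_predator=0.02, cost_fleeing=1.0, cost_eaten=100.0)
print(flee, stay)  # 1.0 2.0 -- over-detection is the cheaper strategy
```

Even at a 98% false-alarm rate, the jumpy organism outperforms the skeptical one, which is the sense in which superstition-like behaviour is "an inevitable feature of adaptive behaviour."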
This is theoretical support for what Michael Shermer termed patternicity, the tendency to find meaningful patterns in both meaningful and meaningless noise. He writes:
Unfortunately, we did not evolve a baloney-detection network in the brain to distinguish between true and false patterns. We have no error-detection governor to modulate the pattern-recognition engine. The reason has to do with the relative costs of making Type I and Type II errors in cognition, which I describe in the following formula:
P = C(TI) < C(TII)

Patternicity (P) will occur whenever the cost (C) of making a Type I error (TI) is less than the cost (C) of making a Type II error (TII).
The problem is that assessing the difference between a Type I and Type II error is highly problematic—especially in the split-second timing that often determines the difference between life and death in our ancestral environments—so the default position is to assume that all patterns are real; that is, assume that all rustles in the grass are dangerous predators and not the wind.

A Type I error is a false positive: "believing something is real when it is not." A Type II error is a false negative: "believing something is not real when it is."
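Shermer's inequality can be made concrete: defaulting to "the pattern is real" is favored exactly when a false positive costs less than a false negative. A minimal sketch, with illustrative costs of my own choosing:

```python
# Shermer's rule: patternicity (P) occurs whenever the cost of a
# Type I error (false positive) is less than the cost of a Type II
# error (false negative). The cost values below are assumptions.

def patternicity_favored(cost_type1, cost_type2):
    """True when assuming the pattern is real is the cheaper default."""
    return cost_type1 < cost_type2

# Rustle in the grass: fleeing needlessly is cheap, ignoring a predator is not.
print(patternicity_favored(cost_type1=1.0, cost_type2=100.0))  # True

# Contexts that heavily penalize false positives invert the default.
print(patternicity_favored(cost_type1=50.0, cost_type2=5.0))   # False
```

The point of the second call is that the bias is not irrational per se; it tracks the cost structure of the environment, and our ancestral environment punished misses far more than false alarms.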
This is the basis for the evolution of all forms of patternicity, including superstition and magical thinking. There was natural selection for the cognitive process of assuming that all patterns are real and that all patternicities represent real and important phenomena. We are the descendants of the primates who most successfully employed patternicity. (The Believing Brain, p. 60)