Understanding how much consumers value products is crucial for marketers. Though past research has assumed that pricing measures (e.g., willingness-to-pay) and rating measures (e.g., enjoyment) are interchangeable in determining value, across seven studies (N = 3919), we find that these measures are distinct under uncertainty. Namely, when considering pricing measures (e.g., willingness-to-pay), uncertainty is evaluated negatively, whereas when considering rating measures (e.g., enjoyment), unce...
Dijksterhuis and van Knippenberg (1998) reported that participants primed with a category associated with intelligence (“professor”) subsequently performed 13% better on a trivia test than participants primed with a category associated with a lack of intelligence (“soccer hooligans”). In two unpublished replications of this study designed to verify the appropriate testing procedures, Dijksterhuis, van Knippenberg, and Holland observed a smaller difference between conditions (2%–3%) as well as a ...
We describe why we wrote “False-Positive Psychology,” analyze how it has been cited, and explain why the integrity of experimental psychology hinges on the full disclosure of methods, the sharing of materials and data, and, especially, the preregistration of analyses.
In 2010–2012, a few largely coincidental events led experimental psychologists to realize that their approach to collecting, analyzing, and reporting data made it too easy to publish false-positive findings. This sparked a period of methodological reflection that we review here and call Psychology's Renaissance. We begin by describing how psychologists’ concerns with publication bias shifted from worrying about file-drawered studies to worrying about p-hacked analyses. We then review the methodo...
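To make the p-hacking concern concrete, here is a minimal simulation sketch (ours, not code from the papers under review): when no true effect exists, a researcher who measures several dependent variables and reports whichever one reaches p < .05 will obtain "significant" results far more often than 5% of the time. The group size, number of dependent variables, and t-test decision rule below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
n_per_group, n_dvs, n_sims, alpha = 20, 3, 5000, 0.05

false_positives = 0
for _ in range(n_sims):
    # Null world: both "conditions" are drawn from the same distribution,
    # so any significant difference is a false positive.
    control = rng.normal(size=(n_per_group, n_dvs))
    treatment = rng.normal(size=(n_per_group, n_dvs))
    pvals = [ttest_ind(control[:, j], treatment[:, j]).pvalue for j in range(n_dvs)]
    # Flexible reporting rule: count the study as "significant" if any DV works.
    if min(pvals) < alpha:
        false_positives += 1

print(f"False-positive rate with {n_dvs} DVs: {false_positives / n_sims:.3f}")
# Expected to land near 1 - 0.95**3, i.e. roughly 0.14, well above the nominal 0.05.
```

With three independent dependent variables the expected rate is about 1 − 0.95³ ≈ 14%, and combining this with other flexible choices (optional stopping, covariates, dropping conditions) inflates it further, which is the basic point the p-hacking literature makes.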
Redundant or excessive information can sometimes lead people to lean on it unnecessarily. Certain experimental designs can sometimes bias results in the researcher's favor. And sometimes interesting effects are too small to be studied practically, or are simply zero. We believe a confluence of these factors led to a recent paper (Isaac & Brough, 2014, JCR). This initial paper proposed a new means by which probability judgments can be led astray: the category size bias, by which an individua...
Across 4,151 participants, the authors demonstrate a novel framing effect, attribute matching, whereby matching a salient attribute of a decision frame with that of a decision’s options facilitates decision-making. This attribute matching is shown to increase decision confidence and, ultimately, consensus estimates by increasing feelings of metacognitive ease. In Study 1, participants choosing the more attractive of two faces or rejecting the less attractive face reported greater confidence in a...