Bias against Novelty in Science: A Cautionary Tale for Users of Bibliometric Indicators
Research that explores uncharted waters has a high potential for major impact but also carries a high risk of having minimal impact. Such explorative research is often described as taking a novel approach. This study examines the complex relationship between pursuing a novel approach and impact. We measure novelty by examining the extent to which a published paper makes first-time-ever combinations of referenced journals, taking into account the difficulty of making such combinations. We apply this newly developed measure of novelty to a set of one million research articles across all scientific disciplines. We find that highly novel papers, defined as those that make more (distinct) new combinations, are more than three times as likely to be a top 1% highly cited paper when a sufficiently long citation time window is used to assess impact. Moreover, follow-on papers that cite highly novel research are themselves more likely to be highly cited. However, novel research is also risky, as it has a higher variance in citation performance. These findings are consistent with the “high risk/high gain” characteristic of novel research. We also find that novel papers are typically published in journals with a lower-than-expected Impact Factor and are less cited when a short time window is used. Our findings suggest that science policy, in particular funding decisions that rely heavily on traditional bibliometric indicators based on short-term direct citation counts and Journal Impact Factors, may be biased against novelty.
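The core of the novelty measure is counting first-time-ever pairings of referenced journals. As a minimal sketch of this counting step only (the paper's full measure additionally weights combinations by their difficulty, which is omitted here), the function and corpus below are hypothetical illustrations, not the authors' implementation:

```python
from itertools import combinations

def count_new_pairs(ref_journals, seen_pairs):
    """Count journal pairs in a paper's reference list that have never
    been co-cited before, and record them as seen.

    ref_journals: list of journal names cited by the paper
    seen_pairs: set of frozensets holding all previously co-cited pairs
    Returns the number of first-time pairs.
    """
    new_pairs = 0
    # Deduplicate and sort so each unordered pair is visited once
    for a, b in combinations(sorted(set(ref_journals)), 2):
        pair = frozenset((a, b))
        if pair not in seen_pairs:
            new_pairs += 1
            seen_pairs.add(pair)
    return new_pairs

# Toy corpus processed in publication order (journal names are made up)
seen = set()
papers = [
    ["Nature", "Science"],          # Nature-Science is new: 1
    ["Nature", "Science", "Cell"],  # two new pairs involving Cell: 2
    ["Nature", "Cell"],             # already seen: 0
]
for refs in papers:
    print(count_new_pairs(refs, seen))  # prints 1, then 2, then 0
```

A paper's raw novelty under this sketch is simply its count of new pairs; highly novel papers in the study are those making more such distinct new combinations.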