
Peer review versus bibliometrics: Which method better predicts the scholarly impact of publications?

Published on Jul 11, 2019 in Scientometrics 2.77 · DOI: 10.1007/s11192-019-03184-y
Giovanni Abramo (National Research Council), Estimated H-index: 27
Ciriaco Andrea D’Angelo (University of Rome Tor Vergata), Estimated H-index: 19
Emanuela Reale (National Research Council), Estimated H-index: 11
Abstract
In this work, we try to answer the question of which method, peer review versus bibliometrics, better predicts the future overall scholarly impact of scientific publications. We measure the agreement between peer review evaluations of Web of Science indexed publications submitted to the first Italian research assessment exercise and long-term citations of the same publications. We do the same for an early citation-based indicator. We find that the latter shows stronger predictive power, i.e. it more reliably predicts late citations in all the disciplinary areas examined, and for any citation time window starting 1 year after publication.
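The agreement measure described in the abstract can be illustrated with a small sketch: compute the rank correlation between each early signal (a peer-review score, an early-citation indicator) and long-term citations, and compare which signal tracks late citations more closely. The data below are invented toy values, not the paper's corpus, and Spearman correlation is just one plausible agreement measure.

```python
# Toy comparison of two early signals as predictors of long-term citations,
# using Spearman rank correlation (hypothetical data, illustrative only).

def rank(values):
    # Assign ranks 1..n, averaging ranks over ties.
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    # Pearson correlation of the rank vectors.
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# Hypothetical per-paper data: peer-review score, 1-year citations, 10-year citations
peer = [3, 2, 4, 1, 5, 2]
early = [1, 0, 5, 0, 8, 2]
late = [10, 3, 40, 1, 60, 12]

print(spearman(peer, late))
print(spearman(early, late))
```

Whichever signal yields the higher correlation with late citations "agrees" better with long-term impact; the paper's finding is that the early-citation indicator wins in all areas examined.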
References (61)
Published on Jan 1, 2018 in Palgrave Communications
Vincent A. Traag (LEI: Leiden University), Estimated H-index: 11
Ludo Waltman (LEI: Leiden University), Estimated H-index: 38
When performing a national research assessment, some countries rely on citation metrics whereas others, such as the UK, primarily use peer review. In the influential Metric Tide report, a low agreement between metrics and peer review in the UK Research Excellence Framework (REF) was found. However, earlier studies observed much higher agreement between metrics and peer review in the REF and argued in favour of using metrics. This shows that there is considerable ambiguity in the discussion on ag...
Published on Feb 1, 2019 in Journal of Informetrics 3.88
Giovanni Abramo (National Research Council), Estimated H-index: 27
Ciriaco Andrea D’Angelo (University of Rome Tor Vergata), Estimated H-index: 19
Giovanni Felici (National Research Council), Estimated H-index: 15
Abstract: The ability to predict the long-term impact of a scientific article soon after its publication is of great value towards accurate assessment of research performance. In this work we test the hypothesis that good predictions of long-term citation counts can be obtained through a combination of a publication's early citations and the impact factor of the hosting journal. The test is performed on a corpus of 123,128 WoS publications authored by Italian scientists, using linear regression m...
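The regression idea in this abstract (early citations plus journal impact factor as predictors of long-term citations) can be sketched with ordinary least squares on toy data. The coefficients, data, and column choices below are invented for illustration; they are not the authors' fitted model.

```python
# Hedged sketch: OLS regression of long-term citations on early citations and
# journal impact factor, solving the normal equations (X^T X) beta = X^T y.

def ols(X, y):
    n, p = len(X), len(X[0])
    # Build normal equations.
    A = [[sum(X[k][i] * X[k][j] for k in range(n)) for j in range(p)] for i in range(p)]
    b = [sum(X[k][i] * y[k] for k in range(n)) for i in range(p)]
    # Gaussian elimination with partial pivoting.
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution.
    beta = [0.0] * p
    for i in reversed(range(p)):
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, p))) / A[i][i]
    return beta

# Columns: intercept, early (1-year) citations, journal impact factor (toy values)
X = [[1, 0, 1.2], [1, 2, 2.5], [1, 5, 3.9], [1, 1, 0.8], [1, 8, 4.5], [1, 3, 2.8]]
y = [2, 11, 30, 4, 52, 17]  # hypothetical citations after 10 years

beta = ols(X, y)
predictions = [sum(b * x for b, x in zip(beta, row)) for row in X]
```

The paper's corpus-scale version would fit such a model per field and citation window; this only shows the mechanical form of the predictor.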
Published on Jan 1, 2019 in SAGE Open
Dag W. Aksnes, Estimated H-index: 17
Liv Langfeldt, Estimated H-index: 6
Paul Wouters (LEI: Leiden University), Estimated H-index: 25
Citations are increasingly used as performance indicators in research policy and within the research system. Usually, citations are assumed to reflect the impact of the research or its quality. What is the justification for these assumptions and how do citations relate to research quality? These and similar issues have been addressed through several decades of scientometric research. This article provides an overview of some of the main issues at stake, including theories of citation and the int...
Published on Oct 29, 2018 in arXiv: Applications
Alberto Baccini, Estimated H-index: 9
Lucio Barabesi, Estimated H-index: 15
Giuseppe De Nicolao, Estimated H-index: 26
Two experiments for evaluating the agreement between bibliometrics and informed peer review - depending on two large samples of journal articles - were performed by the Italian governmental agency for research evaluation. They were presented as successful and as warranting the combined use of peer review and bibliometrics in research assessment exercises. However, the results of both experiments were supposed to be based on a stratified random sampling of articles with a proportional allocation,...
Published on Aug 1, 2018 in Journal of Informetrics 3.88
Giovanni Abramo (National Research Council), Estimated H-index: 27
Abstract: The development of scientometric indicators and methods for evaluative purposes requires a multitude of assumptions, conventions, limitations, and caveats. Given this, we cannot permit ambiguities in the key concepts forming the basis of scientometric science itself, or research assessment exercises would rest on quicksand. This conceptual work attempts to spell out some principles leading to a clear definition of “impact” of research, and above all, of the appropriate scientometric in...
Published on Jul 1, 2018 in Research Evaluation 2.88
Elizabeth S. Vieira (University of Porto), Estimated H-index: 7
José A. N. F. Gomes (University of Porto), Estimated H-index: 23
Published on Apr 1, 2018 in Scientometrics 2.77
Drahomira Herrmannova (OU: Open University), Estimated H-index: 7
Robert M. Patton (ORNL: Oak Ridge National Laboratory), Estimated H-index: 8
Christopher G. Stahl (ORNL: Oak Ridge National Laboratory), Estimated H-index: 2
This work presents a new approach for analysing the ability of existing research metrics to identify research which has strongly influenced future developments. More specifically, we focus on the ability of citation counts and Mendeley reader counts to distinguish between publications regarded as seminal and publications regarded as literature reviews by field experts. The main motivation behind our research is to gain a better understanding of whether and how well the existing research metrics ...
Published on Jul 1, 2017 in Research Evaluation 2.88
Emanuela Reale, Estimated H-index: 11
Antonio Zinilli, Estimated H-index: 1
Evaluation for the allocation of project-funding schemes devoted to sustaining academic research often undergoes changes in the rules for ex-ante selection, which are supposed to improve the capability of peer review to select the best proposals. How do modifications of the rules produce a more accountable evaluation result? Do the changes suggest an improved alignment with the program’s intended objectives? The article addresses these questions by investigating Research Project of National Interest,...
Published on Feb 1, 2017 in Journal of Informetrics 3.88
Emanuel Kulczycki (Adam Mickiewicz University in Poznań), Estimated H-index: 5
Marcin Korzeń (West Pomeranian University of Technology), Estimated H-index: 3
Przemyslaw Korytkowski (West Pomeranian University of Technology), Estimated H-index: 7
This article discusses the metrics used in the national research evaluation in Poland for the period 2009–2012. The Polish system uses mostly parametric assessments to make the evaluation more objective and independent of peers. We have analysed data on one million research outcomes and assessment results of 962 scientific units in the period 2009–2012. Our study aims to determine how much data the research funding system needs to proceed with evaluation. We have used correlation analysis, ...
Published on Dec 1, 2016 in Scientometrics 2.77
Giovanni Abramo (National Research Council), Estimated H-index: 27
Ciriaco Andrea D'Angelo (National Research Council), Estimated H-index: 19
The prediction of the long-term impact of a scientific article is a challenging task, addressed by the bibliometrician by resorting to a proxy whose reliability increases with the breadth of the citation window. In national research assessment exercises using metrics, the citation window is necessarily short, but in some cases it is sufficient to advise the use of simple citations. For the Italian VQR 2011–2014, the choice was instead made to adopt a linear weighted combination of citations...
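The "linear weighted combination" mentioned in this abstract can be illustrated with a minimal sketch. The weights, the choice of journal metric, and the input values below are invented for illustration; they are not the VQR's actual formula.

```python
# Hedged illustration of a linear weighted combination of an early-citation
# score and a journal-metric score (both assumed pre-normalized to [0, 1]).

def combined_score(citation_score, journal_score, w_cit=0.7, w_jour=0.3):
    # Hypothetical weights favouring early citations over the journal metric.
    return w_cit * citation_score + w_jour * journal_score

# A well-cited paper in a mid-tier journal vs. an uncited paper in a top journal
print(combined_score(0.9, 0.5))
print(combined_score(0.0, 0.95))
```

Under weights like these, early citations dominate the composite, which is the kind of trade-off the abstract's discussion of short citation windows is about.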
Cited By (0)
Published in Journal of Informetrics 3.88
Giovanni Abramo (National Research Council), Estimated H-index: 27
Ciriaco Andrea D’Angelo (University of Rome Tor Vergata), Estimated H-index: 19
Flavia Di Costa, Estimated H-index: 13
Abstract: This study falls within the stream of research on the perverse effects that PBRF systems can induce in the subjects evaluated. The authors’ opinion is that, more often than not, it is the doubtful scientific basis of the evaluation criteria that leaves room for opportunistic behaviors. The work examines the 2004–2010 Italian national research assessment (VQR) to test for possible opportunistic behavior by universities intended to limit the penalization of their performance (and fund...