
The objectivity of national research foundation peer review in South Africa assessed against bibliometric indexes

Published on Nov 1, 2013 in Scientometrics (impact factor 2.77)
DOI: 10.1007/s11192-013-0981-0
J. W. Fedderke (University of the Witwatersrand), Estimated H-index: 1
Abstract
This paper examines the strength of association between the outcomes of National Research Foundation (NRF) peer-review-based rating mechanisms and a range of objective measures of researcher performance. The analysis is conducted on 1,932 scholars who have received an NRF rating or an NRF research chair. We find that, on average, scholars with higher NRF ratings record higher performance against research output and impact metrics. However, we also record anomalies in the probabilities of different NRF ratings when assessed against bibliometric performance measures, and a disproportionately large incidence of scholars with high peer-review-based ratings but low levels of recorded research output and impact. Moreover, we find strong cross-disciplinary differences in the impact that objective levels of performance have on the probability of achieving different NRF ratings. Finally, we report evidence that NRF peer review is less likely to reward multi-authored research output than single-authored output. Claims of a lack of bias in NRF peer review are thus difficult to sustain.
References (59)
#1 Anne-Wil Harzing, H-Index: 46
19 Citations
#1 Giovanni Abramo (National Research Council), H-Index: 28
#2 Ciriaco Andrea D’Angelo (University of Rome Tor Vergata), H-Index: 28
#3 Flavia Di Costa (University of Rome Tor Vergata), H-Index: 12
Development of bibliometric techniques has reached such a level as to suggest their integration or total substitution for classic peer review in the national research assessment exercises, as far as the hard sciences are concerned. In this work we compare rankings lists of universities captured by the first Italian evaluation exercise, through peer review, with the results of bibliometric simulations. The comparison shows the great differences between peer review and bibliometric rankings for ex...
44 Citations · Source
#1 Giovanni Abramo (University of Rome Tor Vergata), H-Index: 28
#2 Ciriaco Andrea D’Angelo (University of Rome Tor Vergata), H-Index: 28
National research assessment exercises are becoming regular events in ever more countries. The present work contrasts the peer-review and bibliometrics approaches in the conduct of these exercises. The comparison is conducted in terms of the essential parameters of any measurement system: accuracy, robustness, validity, functionality, time and costs. Empirical evidence shows that for the natural and formal sciences, the bibliometric methodology is by far preferable to peer-review. Setting up nat...
83 Citations · Source
#1 Andreas Thor (Leipzig University), H-Index: 18
#2 Lutz Bornmann (ETH Zurich), H-Index: 48
Purpose – The single publication h index has been introduced by Schubert as the h index calculated from the list of citing publications of one single publication. This paper aims to look at the calculation of the single publication h index and related performance measures. Design/methodology/approach – In this paper a web application is presented where the single publication h index and related performance measures (the single publication m index, h2 lower, h2 centre, and h2 upper) can be automat...
18 Citations · Source
240 Citations · Source
Hirsch's h index is becoming the standard measure of an individual's research accomplishments. The aggregation of individuals' measures is also the basis for global measures at institutional or national levels. To investigate whether the h index can be reliably computed through alternative sources of citation records, the Web of Science (WoS), PsycINFO and Google Scholar (GS) were used to collect citation records for known publications of four Spanish psychologists. Compared with WoS, PsycINFO i...
69 Citations · Source
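Several of the references above turn on Hirsch's h-index and on how citation databases disagree when computing it. For readers unfamiliar with the measure, it is simple to compute from a list of per-paper citation counts; the sketch below is a generic illustration, not code from any of the cited papers:

```python
def h_index(citations):
    """Hirsch's h-index: the largest h such that the author has
    h papers cited at least h times each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4
print(h_index([25, 8, 5, 3, 3]))  # 3
```

Because Web of Science, Scopus, and Google Scholar report different citation counts for the same papers, the same function yields different h values depending on the source of the input, which is precisely the reliability issue the studies above examine.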
#1 Péter Jacsó (UH: University of Hawaii), H-Index: 24
Purpose – Google Scholar (GS) has shed the beta label on the fifth anniversary of launching its service. This paper aims to address this issue. Design/methodology/approach – As good as GS is – through its keyword search option – to find information about tens of millions of documents, many of them in open access full text format, it is as bad for metadata‐based searching when, beyond keywords in the title, abstract, descriptor and/or full text, the searcher also has to use author name, journal ti...
92 Citations · Source
#1 Gemma Derrick (USYD: University of Sydney), H-Index: 10
#2 Heidi Sturk (UQ: University of Queensland), H-Index: 8
Last: Wayne Hall (UQ: University of Queensland), H-Index: 89 (of 5 authors)
Reliability of citation searches is a cornerstone of bibliometric research. The authors compare simultaneous search returns at two sites to demonstrate discrepancies that can occur as a result of differences in institutional subscriptions to the Web of Science and Web of Knowledge. Such discrepancies may have significant implications for the reliability of bibliometric research in general, but also for the calculation of individual and group indices used for promotion and funding decisions. The ...
9 Citations · Source
#1 Abhaya V. Kulkarni, H-Index: 46
#2 Brittany Aziz, H-Index: 2
Last: Jason W. Busse, H-Index: 40 (of 4 authors)
Context Until recently, Web of Science was the only database available to track citation counts for published articles. Other databases are now available, but their relative performance has not been established. Objective To compare the citation count profiles of articles published in general medical journals among the citation databases of Web of Science, Scopus, and Google Scholar. Design Cohort study of 328 articles published in JAMA, Lancet, or the New England Journal of Medicine between Oct...
354 Citations · Source
#1 Niclas Adler (Jönköping University), H-Index: 10
#2 Maria Elmquist (Chalmers University of Technology), H-Index: 15
#3 Flemming Norrgren (Chalmers University of Technology), H-Index: 8
Contemporary and future challenges when managing research involve coping with emerging prerequisites which include, among other things, a new knowledge production discourse, new research funding methods and new ways for international collaboration. Managers for boundary-spanning research activities need to combine the sometimes opposing logics and perspectives of the multiple stakeholders--the individual researchers searching for independence, sustainability and freedom and others searching for ...
57 Citations · Source
Cited By (8)
#2 Gherardo Chirici (UniFI: University of Florence), H-Index: 27
Last: G. Bucci, H-Index: 12 (of 6 authors)
Since 2010, the Italian Ministry of University and Research issued new evaluation protocols to select candidates for University professorships and assess the bibliometric productivity of Universities and Research Institutes based on bibliometric indicators, i.e. scientific paper and citation numbers and the h-index. Under this framework, the objective of this study was to quantify the bibliometric productivity of the Italian forest research community during the 2002-2012 period. We examined the ...
Source
#1 Ana Ramos, H-Index: 1
#2 Cláudia S. Sarrico (University of Lisbon), H-Index: 15
Research units in Portugal undergo a formal evaluation process based on peer review which is the basis for distributing funding from the national research council. This article analyzes the evaluation results and asks how good they are at predicting future research performance. Better research evaluations mean the institution receives more funding, so the key question is to what extent research evaluations are able to predict future performance as measured by bibliometric indicators. We use data...
4 Citations · Source
#1 Nikolay K. Vitanov (MPG: Max Planck Society), H-Index: 27
The understanding of dynamics of research organizations and research production is very important for their successful management. In the text below, selected deterministic and probability models of research dynamics are discussed. The idea of the selection is to cover mainly the areas of publications dynamics, citations dynamics, and aging of scientific information. From the class of deterministic models we discuss models connected to research publications (SI-model, Goffmann–Newill model, mode...
Source
#2 M. Goldschmidt (PSU: Pennsylvania State University), H-Index: 1
In this study we evaluate whether a substantial increase in public funding to researchers is associated with a material difference in their productivity. We compare performance measures of researchers who were granted substantial funding against researchers with similar scholarly standing who did not receive such funding. We find that substantial funding is associated with raised researcher performance – though the increase is moderate, is strongly conditional on the quality of the researcher wh...
13 Citations · Source
#1 Zaida Chinchilla (CSIC: Spanish National Research Council), H-Index: 16
#2 Sandra Miguel Izquierdo (UNLP: National University of La Plata), H-Index: 10
#3 Félix De-Moya-Anegón (CSIC: Spanish National Research Council), H-Index: 33
Argentina's patterns of publication in the humanities and social sciences were studied for the period 2003–2012, using the Scopus database and distinguishing the geographic realm of the research. The results indicate that "topics of national scope" have grown and gained international visibility. They can be broadly characterized as having Spanish as the language of publication, and a marked preference for single authorship; in contrast, the publication of "global topics", not geographically lim...
16 Citations · Source
#2 Tamás Fleiner (BME: Budapest University of Technology and Economics), H-Index: 13
Last: Eva Potpinková, H-Index: 2 (of 3 authors)
Peer evaluation of research grant applications is a crucial step in the funding decisions of many science funding agencies. Funding bodies take various measures to increase the independence and quality of this process, sometimes leading to difficult combinatorial problems. We propose a novel method based on network flow theory to find assignments of evaluators to grant applications that obey the rules formulated by the Slovak Research and Development Agency.
1 Citation · Source
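The abstract above describes reducing reviewer-to-application assignment to a network-flow problem; the Slovak agency's actual rules are not reproduced here. As a simplified stand-in, conflict-free one-to-one assignment is the unit-capacity special case of max-flow and can be solved with augmenting paths. All names and data in this sketch are hypothetical:

```python
def assign_reviewers(eligible):
    """Assign at most one reviewer per application and one application
    per reviewer, via augmenting paths (unit-capacity max-flow).
    eligible[app] lists the reviewers permitted for that application,
    e.g. after conflict-of-interest filtering."""
    match = {}  # reviewer -> application

    def augment(app, visited):
        # Try to place `app`, displacing earlier assignments if needed.
        for r in eligible[app]:
            if r in visited:
                continue
            visited.add(r)
            if r not in match or augment(match[r], visited):
                match[r] = app
                return True
        return False

    for app in eligible:
        augment(app, set())
    return {app: r for r, app in match.items()}

# Hypothetical data: "A2" is only eligible for reviewer "r1",
# which forces "A1" onto "r2".
print(assign_reviewers({"A1": ["r1", "r2"], "A2": ["r1"]}))
```

Real funding-agency constraints (multiple evaluators per application, workload caps) add capacities to the flow network, but the augmenting-path core stays the same.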
#1 Zongmin Li (Drexel University), H-Index: 7
#2 Merrill W. Liechty (Drexel University), H-Index: 8
Last: Benjamin Lev (Drexel University), H-Index: 15 (of 4 authors)
Individual research output (IRO) evaluation is both practically and theoretically important. Current research tends to only consider either bibliometric measures or peer review in IRO evaluation. This paper argues that bibliometric measures and peer review should be applied simultaneously to evaluate IRO. Moreover, in real life situations IRO evaluations are often made by groups and inevitably contain evaluators' subjective judgments. Accordingly, this paper develops a fuzzy multi-criteria group...
12 Citations · Source
#1 Nick Deschacht (Katholieke Universiteit Leuven), H-Index: 4
#2 Tim C. E. Engels (University of Antwerp), H-Index: 13
This chapter explores the potential for informetric applications of limited dependent variable models, i.e., binary, ordinal, and count data regression models. In bibliometrics and scientometrics such models can be used in the analysis of all kinds of categorical and count data, such as assessments scores, career transitions, citation counts, editorial decisions, or funding decisions. The chapter reviews the use of these models in the informetrics literature and introduces the models, their unde...
2 Citations · Source