‘Integrity index’ flags universities with high retraction rates

A score based on indicators related to research quality could help to prevent institutions gaming the metrics that feed into conventional rankings

It can be difficult to detect problems at an institution level using conventional metrics. Credit: SusanneB/Getty
A university-scoring method that highlights research-integrity ‘red flags’ could make it easier to spot institutions that are chasing conventional publishing metrics at the expense of rigorous science, researchers say.
The Research Integrity Risk Index, described in a preprint on arXiv last month1, categorizes institutions according to how many of their papers are retracted and how many are published in journals delisted from the scholarly databases Scopus and Web of Science. Researchers say that the index could improve university ranking systems that currently reward quantity of research output over quality.
“Universities want to be seen as rising stars” but it is not always clear whether they are “rising on solid ground or statistical quicksand”, says Lokman Meho, an information scientist at the American University of Beirut, who says he developed the index as a way to help identify “environments where integrity may be compromised by pressure to maximize metrics” such as publication rates. The measure is not designed to identify research misconduct, he stresses, “but reveal vulnerabilities warranting further review”.
The index is “a good first step” and “should be used by the ranking agencies”, says Achal Agrawal, a data scientist in Raipur, India, and founder of India Research Watch, an online group of researchers and students who highlight integrity issues. It “offers a strong and timely correction to a system that too often equates research excellence with sheer volume”, says Kiran Sharma, a data scientist at BML Munjal University in Gurugram, India. “It shifts attention from quantity to integrity, helping realign academic incentives with genuine scholarly value”.
Ranking rethink
Global university rankings often track how many papers an institution publishes and how often its research gets cited. Meho says that such ‘positive data’ can be misleading — and that some institutions exploit them to climb the rankings. “I wanted to go the other way, the research-integrity part,” he says.
Meho’s index ranks universities by the proportion of their papers that are published in delisted journals and by how often their papers have been retracted over the past two years. Meho says he chose these two criteria because they are ‘objective’ and based on publicly available data, although he explored other ways to quantify integrity concerns, such as the presence of hyper-prolific authors and whether there had been a decrease in the proportion of papers that listed an institution's researchers as first or corresponding authors. The tool places institutions into one of five categories, ranging from ‘low risk’ to ‘red flag’; the latter indicates high rates on both measures and a need for urgent scrutiny.
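The preprint defines the index's exact formula; the sketch below is only a hypothetical illustration of the two-criterion, five-tier idea described above. The cutoff values, the intermediate tier names and the classification logic are all assumptions made for this example, not details taken from Meho's work.

```python
# Illustrative two-criterion integrity-risk classifier.
# All thresholds and the intermediate tier names are HYPOTHETICAL,
# chosen only to show the shape of such a scoring scheme.

def risk_tier(delisted_share: float, retraction_rate: float) -> str:
    """Assign an institution to one of five illustrative tiers.

    delisted_share: fraction of the institution's papers appearing in
        journals delisted from Scopus or Web of Science (0.0 to 1.0).
    retraction_rate: retractions per 1,000 papers over two years.
    """
    # Count how many of the two indicators exceed an illustrative cutoff.
    flags = (delisted_share > 0.05) + (retraction_rate > 1.5)
    if flags == 2:
        return "red flag"        # high rates on both measures
    if delisted_share > 0.10 or retraction_rate > 3.0:
        return "high risk"       # extreme on one measure alone
    if flags == 1:
        return "watch list"
    if delisted_share > 0.01 or retraction_rate > 0.5:
        return "moderate risk"
    return "low risk"

print(risk_tier(0.12, 4.0))   # both indicators high -> "red flag"
print(risk_tier(0.001, 0.2))  # both low -> "low risk"
```

A real index would also need to normalize for field and output volume; this sketch deliberately omits those corrections.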
While developing the index, Meho analysed 18 institutions — located in Saudi Arabia, India, Lebanon and the United Arab Emirates — which had seen ‘extreme publication growth’ and rapidly climbed international rankings over the past few years.
doi: https://doi.org/10.1038/d41586-025-01727-3
This story originally appeared on Nature. Author: Miryam Naddaf