AI intensifies fight against ‘paper mills’ that churn out fake research
Text- and image-generating tools present a new hurdle for efforts to tackle the growing number of fake papers making their way into the academic literature
Advances in artificial intelligence (AI) are complicating publishers’ efforts to tackle the growing problem of paper mills — companies that produce fake scientific papers to order. Generative AI tools, including chatbots such as ChatGPT and image-generating software, provide new ways of producing paper-mill content, which could prove particularly difficult to detect. These were among the challenges discussed by research-integrity experts at a summit on 24 May, which focused on the paper-mill problem.
“The capacity of paper mills to generate increasingly plausible raw data is just going to be skyrocketing with AI,” says Jennifer Byrne, a molecular biologist and publication-integrity researcher at New South Wales Health Pathology and the University of Sydney in Australia.
“I have seen fake microscopy images that were just generated by AI,” says Jana Christopher, an image-data-integrity analyst at the Federation of European Biochemical Societies (FEBS) Press in Heidelberg, Germany. But proving beyond doubt that images are AI-generated remains a challenge, she says.
Language-generating AI tools such as ChatGPT pose a similar problem. “As soon as you have something that can show that something’s generated by ChatGPT, there’ll be some other tool to scramble that,” says Christopher.
A stream of papers
Anna Abalkina, a social scientist at the Free University of Berlin and an independent research-integrity analyst, suspects that there might be a delay in these AI tools becoming more apparent in the academic literature because of the length of the peer-review process. Perhaps in the next few months, “we will see the first stream of papers”, she says.
Byrne, Christopher and Abalkina were participants at last week’s UNITED2ACT summit, convened by the Committee on Publication Ethics (COPE), a non-profit organization based in Eastleigh, UK, that focuses on ethics in academic publishing, and the International Association of Scientific, Technical and Medical Publishers (STM), based in Oxford, UK. The summit brought together international researchers, including independent research-integrity analysts, as well as representatives from funding bodies and publishers.
“It was the first time we had a group of people come together and co-create a set of actions which we’re going to take forward to combat this problem,” says Deborah Kahn, a trustee of COPE and a research-integrity consultant based in London. The group intends to publish its joint action plan soon.
When it comes to detecting paper-mill works, “there is absolutely an additional challenge which is posed by synthetic images, synthetic text, et cetera”, says Joris van Rossum, programme director for STM Solutions, a subsidiary of STM. “There is a general realization that there is the potential of screening becoming more difficult,” he says.
AI assistance
Kahn says that, although there will undoubtedly be positive uses of AI to support researchers writing papers, it will still be necessary to distinguish between legitimate papers written with AI and those that have been completely fabricated. “We have to really look at how we identify those things, and how we make sure that people have actually done the research. And there are various ways we can do that,” she says.
One strategy discussed during the summit was to require authors to provide the raw data from experiments, potentially with digital watermarks that would enable publishers to confirm that those data are genuine.
Currently, requirements for submitting raw data vary significantly between publishers, says Christopher. Establishing a uniform set of requirements for the submission of raw data across publishers, taking into account differences between fields of research, could therefore be helpful, she says.
Sabina Alam, director of publishing ethics and integrity at Taylor & Francis, a publisher based in Abingdon, UK, agrees but says that such standards will take time to implement. “I can’t imagine it being an overnight flip, because the reality is many institutions don’t actually have the resources to offer data-management infrastructure,” she says. “We don’t want to penalize actual research.”
Sharing information
The summit also discussed other strategies for tackling the problem of paper mills more broadly, including organizing an awareness day or week for researchers, as well as identifying ways for publishers to share relevant information on suspected paper mills — for example when publishers simultaneously receive submissions — without breaching data-protection rules.
STM is continuing to develop its own paper-mill detection software, while also collating resources on similar tools available elsewhere through its integrity hub. The apparent rise in paper mills increases demand for such techniques — both for detecting fake papers at the point of submission and for identifying those that are already published.
Taylor & Francis is among the publishers making use of such tools, and Alam says that a growing number of ethics cases — instances of potential misconduct that are flagged for further investigation — are being escalated to her team. Roughly half of these cases stem from paper mills, according to Alam. Her team saw the number of ethics cases increase more than tenfold from 2019 to 2022 — and so far this year, there have been almost as many cases as during the whole of 2022. “It seems to have been commercialized and scaled up,” she says.
doi: https://doi.org/10.1038/d41586-023-01780-w
This story originally appeared in Nature. Author: Layal Liverpool.