Is it OK for AI to write science papers? Nature survey shows researchers are split

Poll of 5,000 researchers finds contrasting views on when it’s acceptable to involve AI and what needs to be disclosed
How much is the artificial intelligence (AI) revolution altering the process of communicating science? With generative AI tools such as ChatGPT improving so rapidly, attitudes about using them to write research papers are also evolving. The number of papers with signs of AI use is rising rapidly (D. Kobak et al. Preprint at arXiv https://doi.org/pkhp; 2024), raising questions around plagiarism and other ethical concerns.
To capture a sense of researchers’ thinking on this topic, Nature posed a variety of scenarios to some 5,000 academics around the world, to understand which uses of AI are considered ethically acceptable.
The survey results suggest that researchers are sharply divided on what they feel are appropriate practices. Whereas academics generally feel it’s acceptable to use AI chatbots to help to prepare manuscripts, relatively few report actually using AI for this purpose — and those who have often say they didn’t disclose it.
Past surveys reveal that researchers also use generative AI tools to help them with coding, to brainstorm research ideas and for a host of other tasks. In some cases, most in the academic community already agree that such applications are either appropriate or, as in the case of generating AI images, unacceptable. Nature’s latest poll focused on writing and reviewing manuscripts — areas in which the ethics aren’t as clear-cut.
A divided landscape
Nature’s survey laid out several scenarios in which a fictional academic, named Dr Bloggs, had used AI without disclosing it — such as to generate the first draft of a paper, to edit their own draft, to craft specific sections of the paper and to translate a paper. Other scenarios involved using AI to write a peer review or to provide suggestions about a manuscript Dr Bloggs was reviewing (see Supplementary information for full survey, data and methodology, and you can also test yourself against some of the survey questions).
Survey participants were asked what they thought was acceptable and whether they had used AI in these situations, or would be willing to. They were not informed about journal policies, because the intent was to reveal researchers’ underlying opinions. The survey was anonymous.
The 5,229 respondents were contacted in March, through e-mails sent to randomly chosen authors of research papers recently published worldwide and to some participants in Springer Nature’s market-research panel of authors and reviewers, or through an invitation from Nature’s daily briefing newsletter. They do not necessarily represent the views of researchers in general, because of inevitable response bias. However, they were drawn from all around the world — of those who stated a country, 21% were from the United States, 10% from India and 8% from Germany, for instance — and represent various career stages and fields. (Authors in China are under-represented, mainly because many didn’t respond to e-mail invitations.)
The survey suggests that current opinions on AI use vary among academics — sometimes widely. Most respondents (more than 90%) think it is acceptable to use generative AI to edit one’s research paper or to translate it. But they differ on whether the AI use needs to be disclosed, and in what format: for instance, through a simple disclosure, or by giving details about the prompts given to an AI tool.

When it comes to generating text with AI — for instance, to write all or part of one’s paper — views are more divided. In general, a majority (65%) think it is ethically acceptable, but about one-third are against it.

Asked about using AI to draft specific sections of a paper, most researchers felt it was acceptable to do this for the paper’s abstract, but more were opposed to doing so for other sections.

Although publishers generally agree that substantive AI use in academic writing should be declared, the response from Nature’s survey suggests that not all researchers have the same opinion, says Alex Glynn, a research literacy and communications instructor at the University of Louisville in Kentucky. “Does the disconnect reflect a lack of familiarity with the issue or a principled disagreement with the publishing community?”
Using AI to generate an initial peer-review report was more frowned upon — with more than 60% of respondents saying it was not appropriate (about one-quarter of these cited privacy concerns). But the majority (57%) felt it was acceptable to use AI to assist in peer review by answering questions about a manuscript.

“I’m glad to see people seem to think using AI to draft a peer-review report is not acceptable, but I’m more surprised by the number of people who seem to think AI assistance for human reviewers is also out of bounds,” says Chris Leonard, a scholarly-communications consultant who writes about developments in AI and peer review in his newsletter, Scalene. (Leonard also works as a director of product solutions at Cactus Communications, a multinational firm in Mumbai, India.) “That hybrid approach is perfect to catch things reviewers may have missed.”
AI still used only by a minority
In general, few academics said they had actually used AI for the scenarios Nature posed. The most popular category was using AI to edit one’s research paper, but only around 28% said they had done this (another 43%, however, said they’d be willing to). Those numbers dropped to around 8% for writing a first draft, making summaries of other articles for use in one’s own paper, translating a paper and supporting peer review.

A mere 4% of respondents said they’d used AI to conduct an initial peer review.

Overall, about 65% reported that they had never used AI in any of the scenarios given, with people earlier in their careers being more likely to have used AI for at least one scenario. But when respondents did say they had used AI, they more often than not said they hadn’t disclosed it at the time.
“These results validate what we have also heard from researchers — that there’s great enthusiasm but low adoption of AI to support the research process,” says Josh Jarrett, a senior vice-president at Wiley, the multinational scholarly publisher, which has also surveyed researchers about use of AI.
Split opinions
When given the opportunity to comment on their views, researchers’ opinions varied drastically. On the one hand, some said that the broad adoption of generative AI tools made disclosure unnecessary. “AI will be, if not already is, a norm just like using a calculator,” says Aisawan Petchlorlian, a biomedical researcher at Chulalongkorn University in Bangkok. “‘Disclosure’ will not be an important issue.”
On the other hand, some said that AI use would always be unacceptable. “I will never condone using generative AI for writing or reviewing papers, it is pathetic cheating and fraud,” said an Earth-sciences researcher in Canada.
Others were more ambivalent. Daniel Egan, who studies infectious diseases at the University of Cambridge, UK, says that although AI is a time-saver and excellent at synthesizing complex information from multiple sources, relying on it too heavily can feel like cheating oneself. “By using it, we rob ourselves of the opportunities to learn through engaging with these sometimes laborious processes.”
Respondents also raised a variety of concerns, from ethical questions around plagiarism and breaching trust and accountability in the publishing and peer-review process to worries about AI’s environmental impact.
Some said that although they generally accepted that the use of these tools could be ethical, their own experience revealed that AI often produced sub-par results — false citations, inaccurate statements and, as one person described it, “well-formulated crap”. Respondents also noted that the quality of an AI response could vary widely depending on the specific tool that was used.
There were also some positives: many respondents pointed out that AI could help to level the playing field for academics for whom English was not a first language.
Several also explained why they supported certain uses, but found others unacceptable. “I use AI to self-translate from Spanish to English and vice versa, complemented with intensive editing of the text, but I would never use AI to generate work from scratch because I enjoy the process of writing, editing and reviewing,” says a humanities researcher from Spain. “And I would never use AI to review because I would be horrified to be reviewed by AI.”
Career stage and location
Perhaps surprisingly, academics’ opinions didn’t generally seem to differ widely by their geographical location, research field or career stage. However, respondents’ self-reported experience with AI for writing or reviewing papers did correlate strongly with having favourable opinions of the scenarios, as might be expected.
Career stage did seem to matter when it came to the most popular use of AI — to edit papers. Here, younger researchers were both more likely to think the practice acceptable, and more likely to say they had done it.

And respondents from countries where English is not a first language were generally more likely than those in English-speaking nations to have used AI in the scenarios. Their underlying opinions on the ethics of AI use, however, did not seem to differ greatly.

Related surveys
Various researchers and publishers have conducted surveys of AI use in the academic community, looking broadly at how AI might be used in the scientific process. In January, Jeremy Ng, a health researcher at the Ottawa Hospital Research Institute in Canada, and his colleagues published a survey of more than 2,000 medical researchers, in which 45% of respondents said they had previously used AI chatbots (J. Y. Ng et al. Lancet Dig. Health 7, e94–e102; 2025). Of those, more than two-thirds said they had used it for writing or editing manuscripts — meaning that, overall, around 31% of the people surveyed had used AI for this purpose. That is slightly more than in Nature’s survey.
“Our findings revealed enthusiasm, but also hesitation,” Ng says. “They really reinforced the idea that there’s not a lot of consensus around how, where or for what these chatbots should be used for scientific research.”
In February, Wiley published a survey examining AI use in academia by nearly 5,000 researchers around the world (see go.nature.com/438yngu). Among other findings, this revealed that researchers felt most uses of AI (such as writing up documentation and increasing the speed and ease of peer review) would be commonly accepted in the next few years. But less than half of the respondents said they had actually used AI for work, with 40% saying they’d used it for translation and 38% for proofreading or editing of papers.
Nature 641, 574-578 (2025)
doi: https://doi.org/10.1038/d41586-025-01463-8
Richard Van Noorden co-designed, conducted and analysed the survey.
This story originally appeared in Nature. Author: Diana Kwon