How are researchers using AI? Survey reveals pros and cons for science
Despite strong interest in using artificial intelligence to make research faster, easier and more accessible, researchers say they need more support to navigate its possibilities
Using artificial intelligence (AI) tools for processes such as preparing manuscripts, writing grant applications and peer review will become widely accepted within the next two years, suggests a survey of nearly 5,000 researchers in more than 70 countries by the publishing company Wiley.
The survey asked researchers how they are currently using generative AI tools — which include chatbots such as ChatGPT and DeepSeek — as well as how they feel about various potential applications of the technology. The results suggest that the majority of researchers see AI becoming central to scientific research and publishing (see ‘Acceptable use’). More than half of the respondents think that AI currently outperforms humans at more than 20 of the tasks given as example use cases, including reviewing large sets of papers, summarizing research findings, detecting errors in writing, checking for plagiarism and organizing citations. More than half of the survey participants expect AI to become mainstream in 34 out of 43 use cases in the next two years.
“What really stands out is the imminence of this,” says Sebastian Porsdam Mann at the University of Copenhagen, who studies the practicalities and ethics of using generative AI in research. “People that are in positions that will be affected by this — which is everyone, but to varying degrees — need to start” addressing this now, he adds.
Wiley, headquartered in Hoboken, New Jersey, posted the survey findings online on 4 February. Josh Jarrett, senior vice-president and general manager of the publisher’s AI growth team, says he hopes they will serve as a road map for innovators and start-ups looking for opportunities to develop AI tools. “There's broad acceptance that AI is going to reshape the research field.”
Limited uses
The survey polled 4,946 researchers worldwide, 27% of whom are early-career researchers. Perhaps surprisingly, says Jarrett, the results show that “people aren't really using these tools much in their day-to-day work”. Only 45% of the first wave of respondents (1,043 researchers) said that they had actually used AI to help with their research, and the most common uses they cited were translation, proofreading and editing manuscripts (see ‘Uses of AI’).
Although 81% of these 1,043 respondents said they had used OpenAI’s ChatGPT for personal or professional purposes, only one-third had heard of other generative-AI tools such as Google’s Gemini and Microsoft’s Copilot. However, there are clear differences across countries and disciplines, with researchers in China and Germany, as well as computer scientists, being the most likely to use AI in their work.
The majority of survey participants expressed interest in expanding their AI use. About 72% want to use AI for preparing manuscripts in the next two years — for tasks such as detecting errors in writing, plagiarism checks and organizing citations. Sixty-two per cent think that AI already outperforms humans in these tasks (see ‘Who does it better: humans or AI?’).
Around 67% of respondents also expressed interest in using AI to handle large amounts of information, for example helping to review the literature, summarizing papers and processing data. Early-career researchers showed greater interest than did more senior colleagues in using AI for writing grant applications and finding potential collaborators. “These are both things that come easier with experience and seniority,” says Porsdam Mann. “Using AI will help even those things out a little bit.”
However, researchers are less convinced about AI’s capabilities in more-complex tasks such as identifying gaps in the literature, choosing a journal to submit manuscripts to, recommending peer reviewers or suggesting relevant citations. Although 64% of respondents are open to using AI for these tasks in the next two years, the majority thinks that humans still outperform AI in these areas.
Who does it better: humans or AI?
Researchers think AI can already perform some tasks better than people.
| Use case | Examples | Who currently does it better: humans or AI? |
| --- | --- | --- |
| Manuscript preparation | Detecting errors/bias in own writing; checking for unintended plagiarism; writing assistance (e.g. copyediting, translation); formatting; populating citations | AI (62% of respondents) |
| Handling information | Reviewing a large number of studies; AI agent to monitor key new publications; automated processing of unstructured data (e.g. cataloguing events in hours of video footage); data visualization | AI (60%) |
| Sharing findings | ‘Plain language’ summaries of article findings; knowledge-management agent to help make information accessible | AI (58%) |
| Increasing impact | Generating educational content based on a paper; generating multimedia; video abstract or ‘explainer’ generator | Humans (51%) |
| Enhancing research methods and collaboration | Writing up documentation; optimizing experimental design; identifying potential collaborators; tools to optimize allocation of shared resources; advanced simulations | Humans (58%) |
| Peer review | Peer-reviewer recommendation tool based on article comparisons; adapting reviewer feedback into a standardized format; automated feedback to reviewers | Humans (59%) |
Source: ExplanAItions report, Wiley
Obstacles and opportunities
doi: https://doi.org/10.1038/d41586-025-00343-5
This story originally appeared on Nature. Author: Miryam Naddaf