AI for research: the ultimate guide to choosing the right tool

Curious about using artificial intelligence to boost your research? Here are the programs you shouldn’t miss.

When Mohammed Shafi, a PhD student in civil engineering at the Indian Institute of Technology Guwahati, first saw his friends testing out artificial intelligence (AI) tools back in late 2022, he didn’t immediately see the appeal. Mostly, people seemed to be using generative AI platforms such as OpenAI’s ChatGPT as a replacement for Google, or as a novelty for drumming up ideas for practical jokes and pet names. “They were fun to play around with, but I didn’t necessarily sense any relevance to my own coursework or my research,” he says.

He quickly came around, however, when he started seeing more AI tools being built to meet the needs of students and scientists. Now a daily user of AI, Shafi has pieced together an entire pipeline of AI-powered platforms that feed into one another. These update him on new research, break down complex topics, troubleshoot experiments, organize his writing and citations, and help him to navigate the demands of classes and research.

Shafi now says that the arrival of AI has been “a revolution for research”, a sentiment seemingly shared by others. Surveys show that many university students and scientists are using AI in their work, often on a weekly or even daily basis. And whereas many educators and academic institutions initially responded with wariness, academia seems increasingly willing to allow students to use AI, albeit in controlled ways. Although it wouldn’t be impossible to go back to the way he did things before, Shafi says, “it’s hard to imagine wanting to”.

Here, Nature explores how academics and students can harness AI to streamline various parts of the research process.

Sharpen your literature review

Daniel Weld, chief scientist at the academic search engine Semantic Scholar, who is based in Seattle, Washington, says that many popular AI platforms have “advanced enormously” in an area called active learning — a method that mimics how a person would approach a research question. Programs such as Google’s Gemini Deep Research and OpenAI’s Deep Research offer the most powerful tools in this regard, and many companies are launching similar products.

Students can enter a query, supported by their own data or documents, and then step away as these advanced models conduct in-depth searches over 30 minutes or so. The final report might include text, figures and visualizations, and all output is thoroughly referenced — another jump over past iterations, says Isa Fulford, a technical staff researcher at OpenAI in San Francisco, California, who helped to develop Deep Research. “Especially in the context of scientific research, we recognize that veracity is critical, and we think this model is better at including the proper citations than any other model we’ve released,” she says.

Chuck Downing, a PhD student in accounting at the Massachusetts Institute of Technology (MIT) in Cambridge, says that these deep-research tools have been especially useful when digging into unfamiliar topics. During one project, Downing used OpenAI’s Deep Research to create a report ranking various approaches for reducing emissions at manufacturing plants. “I didn’t know much going in, but I learnt quite a bit, and so I use these deep dives all the time now,” he says. “It’s better than anything else I’ve used so far at finding good papers and in presenting the information in a way that I can easily understand.”

Other programs enable students to delve more deeply into a single document or small collection of papers. The student-focused AI platform SciSpace, for example, has a ‘Chat with PDF’ function. Users can upload a paper and ask questions about its content, a feature shared by other platforms such as Claude, NotebookLM and PDF.ai.

For David Tompkins, a PhD student in human development at Cornell University in Ithaca, New York, this approach has helped him to stay on top of the burgeoning scientific literature. Tompkins often goes to journal-club meetings having used Claude to generate a summary of a chosen paper, which he then follows up with more-targeted questions based on the group’s discussion. “I’m still a big believer in actually reading papers to fully understand them, but it’s become much easier to do my prep when I’m feeling stretched,” he says. “In some ways, I feel I’m engaging more with the material through these tools than I did before them.”

Create your hypothesis

The ability of AI to pull together many threads of information has seemingly made it easier to identify research gaps and connect ideas — although a recent survey suggests that an over-reliance on generative AI could dampen a person’s critical-thinking skills. Weld, who is also an AI researcher at the Allen Institute for AI in Seattle, says there has been so much demand for tools that assist with ideation that he and his team are developing hypothesis-generation and detection products that attempt to combine ideas across papers into something new. “We have them running internally, but we’re trying to make sure that they’re working robustly,” he says, adding that his Allen Institute group hopes to release them publicly in the next few months.

Shafi turned to programs such as the visualization tool Research Rabbit when working on his dissertation, which focuses on how microplastics are transported through soil and into groundwater. Research Rabbit takes a single ‘seed paper’ and generates an interconnected web of research linked by topic, author, methodology or other key features. By piping its results into a chatbot such as ChatGPT, “it’s possible to query the body of work for hidden links or new ideas”, Shafi says.
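There is no off-the-shelf bridge between Research Rabbit and ChatGPT described here, so the sketch below is only an illustration of that ‘pipe the results into a chatbot’ step. It assumes the paper list has been exported as a CSV with title and abstract columns, that the OpenAI Python client is installed with an API key configured, and the model name and prompt are placeholders.

```python
# A minimal sketch, not an official integration: assumes a CSV export from a
# literature-mapping tool with 'title' and 'abstract' columns, the OpenAI
# Python client, and an API key in the OPENAI_API_KEY environment variable.
import csv

from openai import OpenAI

client = OpenAI()

# Read the exported papers and build a compact description of the collection.
with open("research_rabbit_export.csv", newline="", encoding="utf-8") as f:
    papers = list(csv.DictReader(f))

corpus = "\n\n".join(
    f"Title: {p['title']}\nAbstract: {p['abstract']}" for p in papers[:30]
)

# Ask the chatbot to look for hidden links or unexplored gaps across the set.
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any capable chat model would do
    messages=[
        {"role": "system", "content": "You are a careful research assistant."},
        {
            "role": "user",
            "content": (
                "Here is a collection of related papers on microplastic "
                "transport through soil. Point out connections between them "
                "that are not explicit, and suggest two untested hypotheses.\n\n"
                + corpus
            ),
        },
    ],
)

print(response.choices[0].message.content)
```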


AI-powered programs are also proving increasingly capable as experimental assistants. As a PhD student at MIT, Zhichu Ren created the software Copilot for Real-world Experimental Scientist (CRESt), which combines several AI technologies into an enhanced chatbot (Z. Ren et al. Preprint at ChemRxiv https://doi.org/pdwv; 2023). Users can chat with CRESt as they would with a colleague, and it can help to craft and run experiments by retrieving and analysing data, turning equipment on and off using digital switches, powering robotic arms, documenting findings and alerting scientists by e-mail when issues arise or protocols end. In a 2023 conference paper, CRESt assisted researchers by prioritizing candidate alloys for a new fuel cell, and suggested experiments that the group might run to test them. “I wanted to create a tool that can continue to help even as your needs change,” says Ren, who now works at the AI start-up firm Labig in Cambridge, Massachusetts. “AI can do that in a way that static, written documentation cannot.”

But even for students who do not have access to something as advanced as CRESt, AI can still function as a helpful colleague. Gemini Deep Research, for example, can generate a “personalized multi-point research plan”, among other features, and resources such as Scite and Elicit are billed as research assistants. Users can give these programs a handful of papers or a working hypothesis and ask for a set of experiments to test it.

Joseph Fernandez, a PhD student in biomedical engineering at the University of Colorado Anschutz Medical Campus in Aurora, says he continues to use ChatGPT for most things, including troubleshooting his experiments. In the past, the bot has helped him to brainstorm explanations when one of his assays was returning unusual values, and to calculate serial dilutions to avoid wasting expensive reagents. ChatGPT has also served as a stand-in for his committee, generating pointed questions to test his research proposal before his exams.
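Those dilution calculations are also easy to sanity-check by hand or in a few lines of code. The sketch below is purely illustrative rather than Fernandez’s workflow: the stock concentration, dilution factor and volumes are made-up values, and it simply works out the concentration in each tube of a serial dilution along with the volumes to transfer and top up.

```python
# A minimal sketch of serial-dilution arithmetic; all values are hypothetical.
def serial_dilution(stock_conc_mg_ml, dilution_factor, steps, final_volume_ml):
    """Return (step, concentration, transfer volume, diluent volume) per tube."""
    transfer_ml = final_volume_ml / dilution_factor   # volume carried over
    diluent_ml = final_volume_ml - transfer_ml        # volume of buffer added
    plan = []
    conc = stock_conc_mg_ml
    for step in range(1, steps + 1):
        conc /= dilution_factor
        plan.append((step, conc, transfer_ml, diluent_ml))
    return plan

# Example: a 1:10 series starting from a 100 mg/ml stock, 1 ml per tube.
for step, conc, transfer, diluent in serial_dilution(100.0, 10, 5, 1.0):
    print(f"Tube {step}: {conc:.4f} mg/ml "
          f"(transfer {transfer:.2f} ml, add {diluent:.2f} ml diluent)")
```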

“I think you’re really only limited by your imagination, even if some uses are more mundane than others,” he says. “Nowadays, if a question or task pops into my mind, I’m generally wondering if ChatGPT can help with it.”

Streamline your statistics

AI coding assistants such as GitHub’s Copilot, Amazon’s CodeWhisperer (now Amazon Q Developer) and Anysphere’s Cursor editor aim to make it easy for beginners to write code that organizes data, builds analysis pipelines, runs descriptive statistics and generates visualizations. Such tools, researchers note, have also largely overtaken websites such as GitHub and Stack Exchange as the main resources for troubleshooting. Rather than spending hours looking for answers, users can simply highlight a section of code and ask a chatbot to fix it, Downing says.

“Thinking about what a PhD student primarily does, the largest tasks are increasingly coding and data analysis, at least for computational fields, so anything that helps there is just disproportionately useful,” he says. Although he already considered himself an adequate coder, he says that his preferred tool, Cursor, has made him better by removing the more tedious aspects and making it easier to probe specific parts of a data set. Instead of spending all of his time debugging and cleaning up code, he says, he is “putting more effort into really getting to know the underlying data and engaging with my code in ways that help me learn. If I get curious about something, it’s very easy to generate descriptive statistics, something like a chart.”
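The sort of quick check Downing describes often comes down to only a few lines once they are written. Here is a minimal sketch of that ‘descriptive statistics plus a quick chart’ step, assuming a CSV of results with hypothetical column names and using the pandas and matplotlib libraries:

```python
# A minimal sketch of the 'descriptive statistics plus a quick chart' workflow.
# The file name and column names are hypothetical.
import matplotlib.pyplot as plt
import pandas as pd

df = pd.read_csv("experiment_results.csv")

# Summary statistics (count, mean, std, quartiles) for every numeric column.
print(df.describe())

# A quick histogram of one measurement to eyeball its distribution.
df["measurement"].hist(bins=30)
plt.xlabel("measurement")
plt.ylabel("count")
plt.title("Distribution of measurement values")
plt.savefig("measurement_hist.png", dpi=150)
```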

Tompkins has likewise found tools such as Claude to be essential for writing code for compelling, dynamic visualizations. Creating a good graph, particularly if it’s interactive, can require hundreds of lines of code, and Tompkins says that, in the past, that level of effort had put him off. “But once I started using Claude, I was able to have it write out those literal hundreds of lines of code,” he says. The resulting visualizations have gone a long way towards helping others to understand his research, which describes how small changes in the way in which people experience information can drive their reactions to it.
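The code an assistant produces for a polished interactive figure runs much longer than this, but a pared-down sketch shows the general shape. The data file and column names below are hypothetical; the example uses the Plotly library to write a standalone HTML file that colleagues can open in any browser.

```python
# A minimal sketch of an interactive figure; the data set and column names are
# hypothetical, and a production figure would carry far more customization.
import pandas as pd
import plotly.express as px

df = pd.read_csv("survey_responses.csv")

fig = px.scatter(
    df,
    x="information_framing_score",
    y="reaction_intensity",
    color="condition",                 # colour points by experimental group
    hover_data=["participant_id"],     # show extra details on hover
    title="Reactions by how information was framed",
)

# Write a self-contained HTML file that opens in any browser.
fig.write_html("reactions_interactive.html")
```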

Researcher Zhichu Ren created an AI-based system called CRESt to run experiments. Credit: Jason Sparapani/MIT Department of Materials Science and Engineering

He adds, however, that he still always writes his own code for statistical analyses: “I want to make sure that whatever I’m reporting is something that I fully understand and can stand behind when I submit a paper.”

These AI programs focus on generating new code, but Gaurav Ragtah, who founded a platform called CatalyzeX, saw an opportunity to repurpose existing code. If researchers write a new analysis pipeline for each experiment, for example, it can make it more challenging for others to reproduce their work, particularly if the documentation is poor or a developer stops updating their code. Instead, Ragtah, who is based in San Francisco, wanted to make it easier to locate and share code that others have published. CatalyzeX uses a web platform and browser extension to flag open-source code shared in papers indexed on websites such as Google Scholar or PubMed, and researchers can search for code through the platform using keywords. Someone interested in using machine learning to aid in cancer detection, for example, could search for a modified data-processing pipeline that helps to address the fact that publicly available data often involve small sample sizes.

“As amazing as some of these code generators are, we don’t need to reinvent the wheel every time when there are excellent examples of what you’re trying to do,” Ragtah says. “Open source keeps people from having to start from scratch and gives them a scaffold on which to build and improve, while making research more easily comparable.”


Nature 640, 555-557 (2025)

doi: https://doi.org/10.1038/d41586-025-01069-0

Author: Amanda Heidt