As educators debate whether it’s even possible to use AI safely in research and education, students are taking a role in shaping its responsible use

Ready or not, AI is coming to science education — and students have opinions

Leo Wu, an economics student at Minerva University in San Francisco, California, founded a group to discuss how AI tools can help in education. Credit: AI Consensus

The world had never heard of ChatGPT when Johnny Chang started his undergraduate programme in computer engineering at the University of Illinois Urbana–Champaign in 2018. All the public knew then about assistive artificial intelligence (AI) was that the technology powered joke-telling smart speakers and somewhat fitful smartphone assistants.

But, by his final year in 2023, Chang says, it became impossible to walk through campus without catching glimpses of generative AI chatbots lighting up classmates’ screens.

“I was studying for my classes and exams and as I was walking around the library, I noticed that a lot of students were using ChatGPT,” says Chang, who is now a master’s student at Stanford University in California. He studies computer science and AI, and is a student leader in the discussion of AI’s role in education. “They were using it everywhere.”

ChatGPT is one example of the large language model (LLM) tools that have exploded in popularity over the past two years. These tools take user input in the form of written prompts or questions and generate human-like responses on the basis of patterns learned from vast amounts of training text, much of it drawn from the Internet. As such, generative AI produces new content based on the information it has already seen.
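In practice, the interaction these tools expose is simple: a text prompt goes in, generated text comes out. The sketch below is illustrative only, assuming the OpenAI Python SDK (the openai package) and an OPENAI_API_KEY environment variable; the model name is a stand-in, not one named in this article.

```python
# Minimal sketch of the prompt-in, text-out interaction that underpins
# chatbots such as ChatGPT. Assumes the `openai` package is installed
# and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # stand-in model name; any chat model works
    messages=[
        {"role": "user",
         "content": "Summarize this lecture transcript as five flashcards: ..."},
    ],
)

# The model returns newly generated text, not retrieved documents.
print(response.choices[0].message.content)
```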

However, these newly generated data — from works of art to university papers — often lack accuracy and creative integrity, ringing alarm bells for educators. Across academia, universities have been quick to place bans on AI tools in classrooms to combat what some fear could be an onslaught of plagiarism and misinformation. But others caution against such knee-jerk reactions.

Victor Lee, who leads Stanford University’s Data Interactions & STEM Teaching and Learning Lab, says that data suggest that levels of cheating in secondary schools did not increase with the roll-out of ChatGPT and other AI tools. He says that part of the problem facing educators is the fast-paced changes brought on by AI. These changes might seem daunting, but they’re not without benefit.

Educators must rethink the model of written assignments “painstakingly produced” by students using “static information”, says Lee. “This means many of our practices in teaching will need to change — but there are so many developments that it is hard to keep track of the state of the art.”

Despite these challenges, Chang and other student leaders think that blanket AI bans are depriving students of a potentially revolutionary educational tool. “In talking to lecturers, I noticed that there’s a gap between what educators think students do with ChatGPT and what students actually do,” Chang says. For example, rather than asking AI to write their final papers, students might use AI tools to make flashcards based on a video lecture. “There were a lot of discussions happening [on campus], but always without the students.”

Computer-science master’s student Johnny Chang started a conference to bring educators and students together to discuss the responsible use of AI. Credit: Howie Liu

To help bridge this communications gap, Chang founded the AI x Education conference in 2023 to bring together secondary and university students and educators to have candid discussions about the future of AI in learning. The virtual conference included 60 speakers and more than 5,000 registrants. This is one of several efforts set up and led by students to ensure that they have a part in determining what responsible AI will look like at universities.

Over the past year, at events in the United States, India and Thailand, students have spoken up to share their perspectives on the future of AI tools in education. Although many students see benefits, they also worry about how AI could damage higher education.

Enhancing education

Leo Wu, an undergraduate student studying economics at Minerva University in San Francisco, California, co-founded a student group called AI Consensus. Wu and his colleagues brought together students and educators in Hyderabad, India, and in San Francisco for discussion groups and hackathons to collect real-world examples of how AI can assist learning.

From these discussions, students agreed that AI could be used to disrupt the existing learning model to make it more accessible to students who have different learning styles or who face language barriers. For example, Wu says that students shared stories about using multiple AI tools to summarize a lecture or a research paper and then turn the content into a video or a collection of images. Others used AI to transform data points collected in a laboratory class into an intuitive visualization.

For people studying in a second language, Wu says that “the language barrier [can] prevent students from communicating ideas to the fullest”. Using AI to translate these students’ original ideas or rough drafts crafted in their first language into an essay in English could be one solution to this problem, he says. Wu acknowledges that this practice could easily become problematic if students relied on AI to generate ideas, and the AI returned inaccurate translations or wrote the paper altogether.

Jomchai Chongthanakorn and Warisa Kongsantinart, undergraduate students at Mahidol University in Salaya, Thailand, presented their perspectives at the UNESCO Round Table on Generative AI and Education in Asia–Pacific last November. They point out that AI can have a role as a custom tutor to provide instant feedback for students.

“Instant feedback promotes iterative learning by enabling students to recognize and promptly correct errors, improving their comprehension and performance,” wrote Chongthanakorn and Kongsantinart in an e-mail to Nature. “Furthermore, real-time AI algorithms monitor students’ progress, pinpointing areas for development and suggesting pertinent course materials in response.”

Although private tutors could provide the same learning support, some AI tools offer a free alternative, potentially levelling the playing field for students with low incomes.

Jomchai Chongthanakorn gave his thoughts on AI at a UNESCO round table in Bangkok. Credit: UNESCO/Jessy & Thanaporn

Despite the possible benefits, students also express wariness about how using AI could negatively affect their education and research. ChatGPT is notorious for ‘hallucinating’ — producing incorrect information but confidently asserting it as fact. At Carnegie Mellon University in Pittsburgh, Pennsylvania, physicist Rupert Croft led a workshop on responsible AI alongside physics graduate students Patrick Shaw and Yesukhei Jagvaral to discuss the role of AI in the natural sciences.

“In science, we try to come up with things that are testable — and to test things, you need to be able to reproduce them,” Croft says. But, he explains, it’s difficult to know whether things are reproducible with AI because the software operations are often a black box. “If you asked [ChatGPT] something three times, you will get three different answers because there’s an element of randomness.”
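Croft's point is straightforward to demonstrate: chatbots typically sample from a probability distribution over possible next words, so the same prompt can yield a different answer each time. A minimal sketch, again assuming the OpenAI Python SDK; the temperature parameter controls how much randomness the sampling allows.

```python
# Illustration of the randomness Croft describes: the same prompt,
# asked three times, can produce three different answers because the
# model samples from a distribution over possible continuations.
# Assumes the `openai` package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()
prompt = "In one sentence, why is the sky blue?"

for attempt in range(3):
    response = client.chat.completions.create(
        model="gpt-4o-mini",      # stand-in model name
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,          # higher values allow more varied sampling
    )
    print(f"Answer {attempt + 1}:", response.choices[0].message.content)

# Setting temperature=0 makes outputs more deterministic, but exact
# reproducibility is still not guaranteed across runs or model updates.
```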

And because AI systems are prone to hallucinations and can give answers only on the basis of data they have already seen, truly new information, such as research that has not yet been published, is often beyond their grasp.

Croft agrees that AI can assist researchers, for example, by helping astronomers to find planetary research targets in vast arrays of data. But he stresses the need for critical thinking when using the tools. To use AI responsibly, Croft argued in the workshop, researchers must understand the reasoning that led to an AI's conclusion. Taking a tool's answer on its word alone would be irresponsible.

“We’re already working at the edge of what we understand” in scientific enquiry, Shaw says. “Then you’re trying to learn something about this thing that we barely understand using a tool we barely understand.”

These lessons also apply to undergraduate science education, but Shaw says that he’s yet to see AI play a large part in the courses he teaches. At the end of the day, he says, AI tools such as ChatGPT “are language models — they’re really pretty terrible at quantitative reasoning”.

Shaw says it’s obvious when students have used an AI on their physics problems, because they are more likely to have either incorrect solutions or inconsistent logic throughout. But as AI tools improve, those tells could become harder to detect.

Chongthanakorn and Kongsantinart say that one of the biggest lessons they took away from the UNESCO round table was that AI is a “double-edged sword”. Although it might help with some aspects of learning, they say, students should be wary of over-reliance on the technology, which could reduce human interaction and opportunities for learning and growth.

“In our opinion, AI has a lot of potential to help students learn, and can improve the student learning curve,” Chongthanakorn and Kongsantinart wrote in their e-mail. But “this technology should be used only to assist instructors or as a secondary tool”, and not as the main method of teaching, they say.

Equal access

Tamara Paris is a master’s student at McGill University in Montreal, Canada, studying ethics in AI and robotics. She says that students should also carefully consider the privacy issues and inequities created by AI tools.

Some academics avoid using certain AI systems owing to privacy concerns about whether AI companies will misuse or sell user data, she says. Paris notes that widespread use of AI could create “unjust disparities” between students if knowledge or access to these tools isn’t equal.

Tamara Paris says not all students have equal access to AI tools. Credit: McCall Macbain Scholarship at McGill

“Some students are very aware that AIs exist, and others are not,” Paris says. “Some students can afford to pay for subscriptions to AIs, and others cannot.”

One way to address these concerns, says Chang, is to teach students and educators about the flaws of AI and its responsible use as early as possible. “Students are already accessing these tools through [integrated apps] like Snapchat” at school, Chang says.

In addition to learning about hallucinations and inaccuracies, students should also be taught how AI can perpetuate the biases already found in our society, such as discriminating against people from under-represented groups, Chang says. These issues are exacerbated by the black-box nature of AI — often, even the engineers who built these tools don’t know exactly how an AI makes its decisions.

Beyond AI literacy, Lee says that proactive, clear guidelines for AI use will be key. At some universities, academics are carving out these boundaries themselves, with some banning the use of AI tools in certain classes and others asking students to engage with AI for assignments. Scientific journals are also implementing guidelines for AI use in writing papers and peer reviews, ranging from outright bans to requirements for transparent use.

Lee says that instructors should clearly communicate to students when AI can and cannot be used for assignments and, importantly, signal the reasons behind those decisions. “We also need students to uphold honesty and disclosure — for some assignments, I am completely fine with students using AI support, but I expect them to disclose it and be clear how it was used.”

For instance, Lee says he’s OK with students using AI in courses such as digital fabrication — AI-generated images are used for laser-cutting assignments — or in learning-theory courses that explore AI’s risks and benefits.

For now, the application of AI in education is a constantly moving target, and the best practices for its use will be as varied and nuanced as the subjects it is applied to. The inclusion of student voices will be crucial to help those in higher education work out where those boundaries should be and to ensure the equitable and beneficial use of AI tools. After all, these tools aren't going away.

“It is impossible to completely ban the use of AIs in the academic environment,” Paris says. “Rather than prohibiting them, it is more important to rethink courses around AIs.”

Nature 628, 459-461 (2024)

doi: https://doi.org/10.1038/d41586-024-01002-x

Author: Sarah Wells