What should we do if AI becomes conscious? These scientists say it’s time for a plan

Researchers call on technology companies to test their systems for consciousness and create AI welfare policies

Some researchers worry that if AI systems become conscious and people neglect or treat them poorly, they might suffer. Credit: Pol Cartie/Sipa/Alamy

The rapid evolution of artificial intelligence (AI) has brought to the fore ethical questions that were once confined to the realms of science fiction: if AI systems could one day ‘think’ like humans, for example, would they also be able to have subjective experiences like humans? Would they experience suffering, and, if so, would humanity be equipped to care for them properly?

A group of philosophers and computer scientists are arguing that AI welfare should be taken seriously. In a report posted last month on the preprint server arXiv, ahead of peer review, they call on AI companies not only to assess their systems for evidence of consciousness and the capacity to make autonomous decisions, but also to put in place policies for how to treat the systems if these scenarios become reality.

They point out that failing to recognize that an AI system has become conscious could lead people to neglect it, harming it or causing it to suffer.

Some think that, at this stage, the idea that there is a need for AI welfare is laughable. Others are sceptical, but say it doesn’t hurt to start planning. Among them is Anil Seth, a consciousness researcher at the University of Sussex in Brighton, UK. “These scenarios might seem outlandish, and it is true that conscious AI may be very far away and might not even be possible. But the implications of its emergence are sufficiently tectonic that we mustn’t ignore the possibility,” he wrote last year in the science magazine Nautilus. “The problem wasn’t that Frankenstein’s creature came to life; it was that it was conscious and could feel.”

The stakes are getting higher as we become increasingly dependent on these technologies, says Jonathan Mason, a mathematician based in Oxford, UK, who was not involved in producing the report. Mason argues that developing methods for assessing AI systems for consciousness should be a priority. “It wouldn’t be sensible to get society to invest so much in something and become so reliant on something that we knew so little about — that we didn’t even realize that it had perception,” he says.

People might also be harmed if AI systems aren’t tested properly for consciousness, says Jeff Sebo, a philosopher at New York University in New York City and a co-author of the report. If we wrongly assume a system is conscious, he says, welfare funding might be funnelled towards its care, and therefore taken away from people or animals that need it, or “it could lead you to constrain efforts to make AI safe or beneficial for humans”.

A turning point?

The report contends that AI welfare is at a “transitional moment”. One of its authors, Kyle Fish, was recently hired as an AI welfare researcher by the AI firm Anthropic, based in San Francisco, California. It is the first position of its kind at a top AI firm, according to the report’s authors. Anthropic also helped to fund initial research that led to the report. “There is a shift happening because there are now people at leading AI companies who take AI consciousness and agency and moral significance seriously,” Sebo says.

Nature contacted four leading AI firms to ask about their plans for AI welfare. Three — Anthropic, Google and Microsoft — declined to comment, and OpenAI, also based in San Francisco, did not respond.

Some are yet to be convinced that AI consciousness should be a priority. In September, the United Nations High-level Advisory Body on Artificial Intelligence issued a report on how the world should govern AI technology. The document did not address the subject of AI consciousness, despite a call from a group of scientists for the body to support research assessing consciousness in machines.

“This speaks to a deeper challenge or difficulty with communicating this issue to the wider community,” Mason says.

Operating under uncertainty

Although it remains unclear whether AI systems will ever achieve consciousness — a state that’s difficult to assess even in humans and animals — uncertainty shouldn’t discourage efforts to develop protocols for evaluating the situation, Sebo says. As a preliminary step, a group of scientists last year published a checklist of criteria that could help to identify systems with a high chance of being conscious. “Even an imperfect initial framework can still be better than the status quo,” Sebo says.

Nevertheless, the authors of the latest report say that the discussion about AI welfare should not come at the expense of other important issues, such as making AI development safe for people. “You can commit to working to make AI systems safe and beneficial for all,” the authors say in the report. “Including humans, animals, and — if and when the time comes — AI systems.”

doi: https://doi.org/10.1038/d41586-024-04023-8

This story originally appeared in Nature. Author: Mariana Lenharo