The US Congress is taking on AI — this computer scientist is helping

Kiri Wagstaff, who temporarily shelved her academic career to provide advice on federal AI legislation, talks about life inside the halls of power

Half a dozen AI scientists have moved to Washington DC to advise the US Congress. Credit: Mandel Ngan/AFP via Getty

Regulation of artificial intelligence (AI) is booming in the United States. Since 2016, federal lawmakers have passed 23 AI-related bills into law [1], far more than any other country. Now AI scientists are joining the action, trading academia for Capitol Hill on a mission to feed technical advice into proposed laws on AI.

Among those scientists who have gone to Washington is Kiri Wagstaff, a computer scientist who temporarily left her teaching position at Oregon State University in Corvallis to work for a year in the office of Senator Mark Kelly, an Arizona Democrat and former astronaut. Wagstaff is one of six AI researchers now serving in Congress through the Science & Technology Policy Fellowships programme run by the American Association for the Advancement of Science. The fellows’ expertise is unlikely to go to waste. In 2023, 181 AI-related bills were proposed at the federal level — more than twice as many as in 2022.

Wagstaff spoke with Nature about the US AI regulation boom as seen through a scientist’s eyes.

What’s your background in AI?

I spent about two decades at the NASA Jet Propulsion Laboratory [JPL, in Pasadena, California], developing ways to apply AI and machine learning to space exploration. This was about analysing very large data sets, but also about what we could put on board our rovers and orbiters to help them be a little smarter. The Mars Science Laboratory rover, for example, has a laser spectrometer; it can point a laser at a rock metres away and get information about the composition of that rock. In 2016, JPL gave it a software update that allowed it to take images of a scene, rank all the rocks by science priorities and autonomously decide which ones it should aim the laser at. That was very, very productive because we typically only have an opportunity to talk to the rover and give it instructions a few times a day.
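To make the rank-and-select idea concrete, here is a minimal, hypothetical sketch of the kind of onboard targeting loop Wagstaff describes. It is not JPL’s flight software: the Rock fields, the scoring weights and the 7-metre range cut-off are all invented for illustration.

```python
# Toy illustration of autonomous laser-target selection, loosely inspired by
# the rover workflow described above. NOT flight software: all fields,
# weights and thresholds here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Rock:
    rock_id: int
    size_cm: float      # apparent size in the image
    brightness: float   # 0..1, a crude albedo proxy
    distance_m: float   # range from the rover

def science_score(rock: Rock) -> float:
    """Rank a rock by hypothetical science priorities: prefer larger,
    brighter targets that are still within (assumed) laser range."""
    in_range = 1.0 if rock.distance_m <= 7.0 else 0.0
    return in_range * (0.6 * min(rock.size_cm / 30.0, 1.0)
                       + 0.4 * rock.brightness)

def choose_target(rocks: list[Rock]) -> Rock | None:
    """Autonomously pick the highest-priority rock, with no ground in the loop."""
    ranked = sorted(rocks, key=science_score, reverse=True)
    return ranked[0] if ranked and science_score(ranked[0]) > 0 else None

if __name__ == "__main__":
    scene = [Rock(1, 12.0, 0.8, 3.5),   # small, bright, close
             Rock(2, 25.0, 0.4, 6.0),   # large, dull, near range limit
             Rock(3, 40.0, 0.9, 12.0)]  # best rock, but out of range
    target = choose_target(scene)
    print(f"Aim laser at rock {target.rock_id}" if target else "No suitable target")
```

The point of the design is that the whole ranking runs on board, so the rover can pick a target during the long gaps between its few daily communication windows with Earth.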

Computer scientist Kiri Wagstaff. Credit: Dutch Slager

How did you come to this fellowship?

I’ve been working in applied machine learning for my whole career, so I care a lot about what happens when you try to solve real-world problems with these techniques. When this opportunity with Congress came through, I thought, this is perfect. I was super excited the instant I saw it.

The AAAS sent out the call in late July [2023], with a submission deadline of the first week of August. The six of us who were chosen to be AI fellows reported to Washington DC on September 1 of last year. It was a whirlwind. They [the fellowship organizers] don’t usually do things this way; it usually takes about a year. They realized they didn’t want to wait to bring in AI experts and get this ball rolling.

What do you do day-to-day?

If staffers or anyone in the congressional offices have ideas about ways to encourage AI innovation, or to regulate it or keep it safe, I’m able to assess them from a technical perspective and say, first of all, whether the wording makes sense, whether it’s feasible and what might be overlooked.

I get to review many bill proposals. AI is so broad: it’s touching on finance, jobs, education, copyright … everything. The ubiquity is such that asking if your topic touches on AI is getting to be like asking if you use a computer or electricity.

What has been the scope of legislative action?

There have been more than 300 AI-related bills introduced in this congressional session [beginning in January 2023]. They range all over the place, from controlling misinformation to stimulating AI innovation and research.

Does some of this legislation touch on things relevant to the upcoming election?

There’s a cluster of bills that have been proposed on what to do about misinformation.

Some of these bills suggest that if a campaign uses generative AI in any way, whether for misinformation or not, that use requires a label or disclaimer. Others outright prohibit what they call deceptive AI: portraying something that didn’t actually happen or wasn’t actually said. They say that should be illegal and punishable.

Certain kinds of falsehoods are already illegal, of course, and if you use generative AI and it falls into that category, you can just use existing law to deal with that. The real question before us is: where does existing law fall short?

Where are those holes in the law that need to be patched?

There’s actually a bill aimed at finding that out: the ASSESS AI Act would task a commission with going through all the relevant laws and identifying places where AI creates new issues that aren’t being covered.

One development that I think is important and exciting is a growing recognition that AI systems themselves have a pretty large environmental impact, in terms of energy use and also water consumption for cooling the data centres. There’s a bill out there to really measure those impacts.

Europe is usually seen as the leader in global AI regulation. What do you make of the European Union’s AI Act, which passed in March 2024?

This is an excellent opportunity for us in the United States, because we’re watching another entity charge forward trying to solve the same problems that we are, but in a more proactive way. That means we get to see what points of disagreement [EU countries] run into, and how those play out. We reap a little benefit by not being the first adopter; we get to learn from their example.

But it’s really important to remember what’s different about our situation. The really big difference is the First Amendment [which protects freedom of speech]; it pops up everywhere, and that’s not a constraint that most other countries work under. Take generative AI: if it offends someone, how much of that do we allow to just stand, without restrictions? We have to draw that line carefully.

What direction does AI policy need to take next in the United States?

We’re all talking about AI, but there’s a rising parallel threat concerning data. Who owns your data? What is it worth? What should you have control over? What should you opt in or out of? That’s almost as important as the AI part.

doi: https://doi.org/10.1038/d41586-024-01354-4

Wagstaff declined the use of an AI-based service to transcribe this interview because of questions surrounding the subsequent use of that data. This conversation has been edited for length and clarity.

This story originally appeared in Nature. Author: Nicola Jones