OpenAI's Altman warns the U.S. is underestimating China's next-gen AI threat

OpenAI CEO Sam Altman warned that the U.S. may be underestimating the complexity and seriousness of China's progress in artificial intelligence, and said export controls alone likely aren't a reliable solution.

"I'm worried about China," he said.

Over Mediterranean tapas in San Francisco's Presidio — just five miles north of OpenAI's original office in the Mission — Altman offered a rare on-the-record briefing to a small group of reporters, including CNBC. He warned that the U.S.–China AI race is deeply entangled — and more consequential than a simple who's-ahead scoreboard.

"There's inference capacity, where China probably can build faster. There's research, there's product; a lot of layers to the whole thing," he said. "I don't think it'll be as simple as: Is the U.S. or China ahead?"

Despite escalating U.S. export controls on semiconductors, Altman is unconvinced that the policy is keeping up with technical reality.

Asked whether it would be reassuring if fewer GPUs were reaching China, Altman was skeptical. "My instinct is that doesn't work," he said.

"You can export-control one thing, but maybe not the right thing… maybe people build fabs or find other workarounds," he added, referring to semiconductor fabrication facilities, the specialized factories that produce the chips powering everything from smartphones to large-scale AI systems.

"I'd love an easy solution," added Altman. "But my instinct is: That's hard."

His comments come as Washington adjusts its policies designed to curb China's AI ambitions. The Biden administration initially tightened export controls, but in April, President Donald Trump went further — halting the supply of advanced chips altogether, including models previously designed to comply with Biden-era rules.

Last week, however, the U.S. carved out an exception for certain "China-safe" chips, allowing sales to resume under a controversial and unprecedented agreement requiring Nvidia and AMD to give the federal government 15% of their China chip revenue.

The result is a patchwork regime that may be easier to navigate than enforce. And while U.S. firms deepen their dependence on chips from Nvidia and AMD, Chinese companies are pushing ahead with alternatives from Huawei and other domestic suppliers — raising questions about whether cutting off supply is having the intended effect.

Open source and China

China's AI progress has also influenced how OpenAI thinks about releasing its own models.

While the company has long resisted calls to make its technology fully open source, Altman said competition from Chinese models — particularly open-source systems like DeepSeek — was a factor in OpenAI's recent decision to release its own open-weight models.

"It was clear that if we didn't do it, the world was gonna head to be mostly built on Chinese open source models," Altman said. "That was a factor in our decision, for sure. Wasn't the only one, but that loomed large."

Earlier this month, OpenAI released two open-weight language models — its first since GPT-2 in 2019 — marking a significant shift in strategy for the company that has long kept its technology gated behind application programming interfaces, or APIs.

The new text-only models, called gpt-oss-120b and gpt-oss-20b, are designed as lower-cost options that developers, researchers, and companies can download, run locally, and customize.

An AI model is considered open weight if its parameters — the values learned during training that determine how the model generates responses — are publicly available. While that offers transparency and control, it's not the same as open source. OpenAI is still not releasing its training data or full source code.

With this release, OpenAI joins the wave of open-weight releases and, for now, stands alone as the only major U.S. foundation model company actively leaning into a more open approach.

While Meta has embraced openness with its Llama models, CEO Mark Zuckerberg suggested on the company's second-quarter earnings call that it may pull back on that strategy going forward.

OpenAI, meanwhile, is moving in the opposite direction, betting that broader accessibility will help grow its developer ecosystem and strengthen its position against Chinese rivals. Altman had previously acknowledged that OpenAI had been "on the wrong side of history" by locking up its models.

Ultimately, OpenAI's move shows it wants to keep developers engaged and within its ecosystem. That push comes as Meta reconsiders its open-source stance and Chinese labs flood the market with models designed to be flexible and widely adopted.

Still, the open-weight debut has drawn mixed reviews.

Some developers have called the models underwhelming, noting that many of the capabilities that make OpenAI's commercial offerings so powerful were stripped out.

Altman didn't dispute that, saying the team intentionally optimized for one core use case: locally run coding agents.

"If the kind of demand shifts in the world," he said, "you can push it to something else."


This story originally appeared on CNBC. Author: MacKenzie Sigalos