Americans’ trust in artificial intelligence is declining even as global advances in the field accelerate. That points to a potential national-security problem: lawmakers across partisan lines, industry leaders, think tanks, and others have warned that falling behind China on AI would put the United States at a disadvantage. Negative public sentiment could undermine congressional and financial support for research and development in the field.
But some AI companies are modifying their products to give government customers more control—over model behavior, over data inputs, even over the power source that runs the system. Could this mollify the public?
A recent deal shows how far companies are willing to go to meet the government’s needs. For billions of people, ChatGPT is an abstraction viewed through their web browser. But earlier this month, the AI chatbot took on physical form when OpenAI delivered several hard drives containing the o3 model weights to Los Alamos National Laboratory. The lab aims to use them to examine classified data in search of particle-physics insights that could reshape the pursuit of energy and the development of nuclear weapons.
Those hard drives were the “most valuable” ones on Earth, the physical embodiment of OpenAI’s $300 billion valuation, the company’s government lead, Katrina Mulligan, told Defense One at a recent AI event in Washington, D.C.
“We are absolutely losing money on our deal with the national labs,” Mulligan said. “Every engineering resource I’m using to do the work that I’m doing with government could be 10 times more lucrative if I applied that resource to the commercial work that we do.”
Days later, OpenAI announced a $200 million Pentagon contract to “prototype how frontier AI can transform its administrative operations—from improving how service members and their families get health care, to streamlining how they look at program and acquisition data, to supporting proactive cyber defense. All use cases must be consistent with OpenAI’s usage policies and guidelines.”
At the Special Competitive Studies Project’s AI Expo in Washington, D.C., an OpenAI representative demonstrated how the company’s tools can serve national-security tasks: geolocating images with no obvious clues, scanning Telegram logs for indicators of cyber activity, or identifying the origin of drone parts retrieved from the battlefield.
The demo leader said the company’s latest reasoning models not only outperform past versions but now permit secure input of classified data in accordance with DOD guidelines. Unlike the public-facing ChatGPT, the government version also shows users how the model prioritizes data sources, offering transparency that lets analysts fine-tune the logic and understand how the program reached its conclusions.
That visibility is essential for national-security use, said Mulligan—far more than for consumers who want answers and rarely ask how they’re made.
“It produces a pretty detailed chain of thought that tells you how it arrived at its conclusion, what information it considered that wasn’t possible in the earlier paradigm,” she said, adding that people long believed such models simply could not be explained, that “these models were always going to be—a black box.”
Explainability, data portability, and local infrastructure control—allowing labs to run models on their own supercomputers—are emerging as the baseline requirements for AI use in government. Giving users more agency builds trust.
OpenAI isn’t alone. Amazon Web Services is quietly becoming a critical player in defense AI. It recently released a version of its Bedrock service, which lets users build generative-AI applications from a menu of foundation models, with classified-level security for DOD customers.
The Pentagon isn’t just hiring AI companies; it’s putting their top executives in uniform. This month the Army announced an “Innovation Corps” whose inaugural members include Palantir CTO Shyam Sankar, Meta CTO Andrew “Boz” Bosworth, and Kevin Weil and Bob McGrew of OpenAI. These tech execs, who are receiving commissions as lieutenant colonels in the Army Reserve, will advise the service about adding commercial technology to DOD workflows. It’s not just a reform of procurement. It’s also a gesture of confidence.
But while trust is deepening between government and Silicon Valley around the shared goal of advancing and deploying AI, the public is heading in the opposite direction.
China and the AI Trust Gap
A March survey by Edelman showed that trust in AI has fallen from 50 percent in 2019 to just 35 percent. And the mistrust spans political lines: just 38 percent of Democrats trust AI, compared to 25 percent of independents and 24 percent of Republicans. That follows other surveys that have shown increasingly sour public sentiment toward AI, even as professionals who have incorporated AI into their work report higher levels of performance.
But the real divergence is geopolitical. “Trust in AI in the United States and everywhere else in the Western world is low, and trust in AI in China and the rest of the developing world is in the high 70s. So that’s not nothing,” Mulligan said.
She worries that those disparities will translate into real-world gaps in adoption—with China pulling ahead in productivity, economic growth, and quality of life. She likens today’s AI debate to early resistance to electrification.
“Electricity was dangerous. There were home fires. There were actually bad outcomes… But a whole bunch of really good things also happened, and we did end up figuring out more or less how to harness what was good… We’re going to have to do the same thing with AI. But that involves getting out of the defensive crouch.”
At the AI Expo, former Google CEO Eric Schmidt predicted that a new cold war would emerge along the lines of divergent worldviews on AI. “You’re going to end up with a bifurcation—the Western models, which are democratic and messy, and the Chinese model, which will be very controlled and very powerful. And that’s a fight.”
Mulligan said excessive risk aversion could hamper U.S. AI innovation at a moment when China, backed by large, strategic government investment, is already advancing rapidly.
“I was pretty confident then that we were maybe six months to a year ahead of our competitors. And that is no longer true. I would guess that we are at most a couple of months ahead of where China is.”
Schmidt said that the widespread use of open-source models by Chinese AI firms helps them proliferate and speed ahead. But he cautioned against the illusions of “free” software, especially when the PRC is absorbing the cost.
“You want this AI future to be built with American values…Imagine if the burgeoning AI system gets built on Chinese principles, which include surveillance…You wouldn’t like it at all, nor would I.”
Schmidt said the Pentagon’s AI ethics guidelines were a good example of the contrast between AI when it emerges from a democracy, where public leaders face accountability, versus from an authoritarian regime.
Yet public skepticism about AI also reflects a broader alienation from tech leaders. Elon Musk’s public arguments with his AI, Grok—accusing it of being “woke”—or Oracle CEO Larry Ellison’s comments in September about AI and population surveillance (“Citizens will be on their best behavior because we’re constantly recording and reporting”) do little to build confidence.
It also doesn’t help that new AI tools are being deployed in polarizing domains. Israel plans strikes in Gaza with help from Lavender and other targeting or tracking models. U.S. Customs and Border Protection uses facial recognition and driver monitoring. These examples deepen the suspicion that AI is not about empowerment, but control.
The argument that liberal democracies must embrace AI or risk falling under autocratic influence is unpersuasive to citizens who increasingly see their leaders behaving autocratically.
Still, rejecting AI altogether won’t halt its progress—only ensure that others shape it first.
A Democratized Future for AI?
If building public trust in AI is critical for national security and U.S. competitiveness, how do you do it? One idea: give users more control. The AI Now Institute, a nonprofit focused on public-interest technology, recently released a roadmap aimed at reclaiming public agency in AI development. Its top proposals:
- Enforce and expand privacy laws, especially around third-party data brokers.
- Break up compute monopolies by supporting open-source infrastructure.
- Establish independent audits of AI systems to prevent abuse.
These are not abstract ideals. In fact, they echo what OpenAI is already delivering to national labs and the Defense Department: user control, especially over data; infrastructure flexibility; and model transparency.
If those features help the government trust AI, perhaps they could help the public trust it too.