The United States will seek to “achieve global dominance” in artificial intelligence under a plan released Wednesday by the White House, which aims to accelerate military adoption, fast-track permits for data centers, support open-source models, and take other steps.
The plan was met with approval by some military and technical experts, but others voiced concern about its threat to withhold federal funds from states that restrict AI development or deployment, even when those limits are tied to civil liberties, such as restrictions on facial-recognition systems.
On the military front, the plan declares the United States “must aggressively adopt AI within its Armed Forces if it is to maintain its global military preeminence,” while maintaining reliability and safety. It directs the Pentagon to build a dedicated, AI-enabled “virtual proving ground” to test new autonomy and AI solutions under various scenarios.
While the Defense Department has already incorporated AI into exercises and experiments, such as demonstrations of AI piloting capabilities against human pilots, it currently lacks a standing virtual proving environment of this kind.
The plan also grants the Defense Department priority access to commercial cloud computing in times of crisis, with a goal to “codify priority access to computing resources in the event of a national emergency so that DoD is prepared to fully leverage these technologies during a significant conflict.”
Training also plays a key role. The plan calls for “talent development programs” within DOD “to meet AI workforce requirements and drive the effective employment of AI-enabled capabilities”—a move that could improve the job-readiness of veterans in AI-driven workplaces.
Mina Narayanan, a research analyst at the Center for Security and Emerging Technology, told Defense One in an email that the plan’s “recommendations for the Departments of Labor and Commerce to help states identify workers dislocated by AI, leverage funds to proactively upskill workers, and conduct pilots to meet rapid retraining needs can support a U.S. AI workforce that is well-equipped to address present and future AI challenges.”
The plan also calls for the creation of AI innovation and research hubs at senior military colleges to “foster AI-specific curriculum, including in AI use, development, and infrastructure management.”
On the global front, the plan may lessen concerns raised by officials and analysts about China’s access to semiconductors and other elements of AI hardware. It calls for tracking requirements for advanced chips, including “location verification features on advanced AI compute,” aimed at closing export-control loopholes. It also urges the United States to retake leadership in international standards-setting bodies, a move intended to reassure critics who feared the U.S. was ceding influence in shaping norms for emerging technologies.
“Denying our foreign adversaries access to this resource… is a matter of both geostrategic competition and national security,” the plan reads.
Given President Donald Trump’s well-known aversion to multilateral alliances, it’s somewhat surprising that the plan also recommends a “technology diplomacy strategic plan for an AI global alliance.”
The plan has already won praise from several industry-aligned groups.
“President Trump’s AI Action Plan is a giant leap forward in the race to secure American leadership in artificial intelligence,” said Doug Kelly, CEO of the American Edge Project, a group that advocates on behalf of U.S. technology companies. “By prioritizing innovation, infrastructure, talent, and global reach, the plan confronts key barriers to American competitiveness, begins to fill long-standing gaps in our national strategy, and helps position the U.S. to beat China in this high-stakes tech race.”
Ylli Bajraktari, president of the Special Competitive Studies Project, said in a statement: “This AI Action Plan provides a critical component for winning the techno-economic competition of the 21st Century… It correctly identifies that our national security and economic prosperity, as well as America’s global leadership position, are now intertwined with leadership in AI. We are committed to helping transform this strategic vision into enduring national policy.”
A ‘wish list from Silicon Valley’
However, the plan also includes provisions that some say could threaten civil liberties, privacy, and even national security.
At the center of those concerns is a push to eliminate regulations that may “unnecessarily hinder AI development or deployment.”
The plan builds on Trump’s January executive order rescinding Biden-era rules that had established guardrails around AI safety and research, and it pressures states to abandon their own restrictions or risk losing federal funding. It also urges the Office of Management and Budget to consider, when distributing AI-related federal funds, whether a state’s “regulatory regimes may hinder the effectiveness of that funding or award.”
This issue was the subject of a recent debate in Congress, where a controversial moratorium that would have barred states from restricting certain AI uses failed overwhelmingly. The Senate stripped that measure, part of the so-called “Big, Beautiful Bill,” in a 99–1 vote. More than 40 state attorneys general and 17 Republican governors also opposed the provision.
Sarah Myers West, co-executive director of the AI Now Institute, told Defense One that the new action plan “amounts to a workaround” of that failed provision.
“The action plan, at its highest level, reads just like a wish list from Silicon Valley,” she said.
States from Maine to Montana have passed or are considering legislation to restrict uses of AI, including facial recognition for law enforcement. States have also adopted rules limiting AI in hiring decisions or insurance claims—domains where algorithmic errors could lead to discrimination.
Narayanan warned that the federal plan “risks chilling the very type of activity that is needed to protect U.S. national security and strengthen the domestic AI ecosystem.” She noted that states play a crucial role in areas such as workforce development, equitable access to computing resources for small businesses, and the creation of information-sharing institutions for emerging AI risks.
Of particular relevance to the U.S.-China AI rivalry are state and local bans on facial recognition. Not only is the technology unpopular among American citizens, but it also carries potential security risks: law-enforcement and personal data collected through AI systems could end up exposed to adversaries, including China. That is because China holds a dominant global market share in both surveillance cameras and facial-recognition software, the same types of products local law enforcement agencies might procure today.
The plan does take steps to monitor surveillance equipment and software from China for data-leakage risks and to restrict imports. But those safeguards will take time to implement. In the meantime, state and local agencies looking to deploy facial recognition may turn to commercial products already on the market, some of which could present security vulnerabilities.
West said much will depend on how executive orders are implemented, but added, “The prospect of a gap is certainly a concern to look into now.”
More broadly, she warned that the plan could reinforce the market dominance of a few large AI firms, choking out the very innovation it claims to foster, especially in the context of other White House moves to shrink U.S. public research budgets.
“This action plan is sort of doubling down on the idea that if we create these national monopolies in AI, that’s going to serve U.S. interests,” she said. “But if there’s anything history shows us, it’s that these kinds of monopolies end up stifling innovation in the long term.”