The Pentagon’s third AI-acceleration strategy in four years sets up seven “pace-setting projects” that will “unlock critical foundational enablers” for other U.S. military efforts, the department announced Monday.
The six-page document also directs the department’s many components to fulfill a four-year goal of making their data centrally available for AI training and analysis. It omits any mention of ethical use of AI and casts suspicion on the concept of AI responsibility, while banning the use of models that incorporate DEI-related “ideological ‘tuning.’”
Also on Monday, Secretary Pete Hegseth announced that Pentagon networks, including classified ones, would enable access to Grok, the Elon Musk-owned, Saudi- and Qatari-backed AI chatbot noted for its partisan, even Nazi, slant and its willingness to create sexually explicit images of children.
In some key respects, the new strategy has much in common with its 2023 Biden-administration predecessor, which also emphasized the rapid adoption of commercially available frontier AI models across the military.
The new strategy, however, offers far more specific pathways for that adoption across various military activities. A project called “Swarm Forge” will “iteratively discover, test, and scale” new ways of using AI in combat. Another project aims to rapidly incorporate agentic AI—foundation models that can complete specific tasks autonomously—for “enabled battle management and decision support, from campaign planning to kill chain execution,” and a third aims to promote AI in scenario planning.
One intelligence-related project aims to “turn intel into weapons in hours not years”; another, to make posture planning more “dynamic.”
Another project aims to make AI tools—including Grok and Google’s Gemini—available to department personnel at “Information Level (IL-5) and above classification levels.”
Perhaps most significantly, the new strategy lays out a mandate to eliminate “blockers” to data sharing within the Department and institute open-architecture systems, a move generally seen as favorable to startups and faster innovation.
The strategy is equally notable for the concepts it rejects: “responsible AI,” ethical considerations, and DEI. Under a section titled “Clarifying ‘Responsible AI’ at the [Department of War] – Out with Utopian Idealism, In with Hard-Nosed Realism,” the strategy declares: “Diversity, Equity, and Inclusion and social ideology have no place in the DoW, so we must not employ AI models which incorporate ideological ‘tuning’ that interferes with their ability to provide objectively truthful responses to user prompts.”
The strategy also mandates that the defense undersecretary for research and engineering “incorporate standard ‘any lawful use’ language into any DoW contract through which AI services are procured within 180 days.” That means any use of AI need only meet the legal standard the Department applies to the use of force in general, human or not, rather than a higher standard requiring “meaningful human control” over the use of autonomy in war. But the strategy does not explicitly rescind the long-stated preference for meaningful human control either, which could lead to confusion as different commanders interpret “meaningful” and “control” in different ways, or, potentially, choose to ignore the preference altogether.
It’s exactly the sort of ambiguity that the Department’s AI ethics principles, conspicuously absent from the new strategy, were meant to address. (The strategy also fails to acknowledge the growing chorus of lawmakers voicing concern about the administration’s ability to understand or follow the law, whether in attacking unarmed boats or taking lethal action against civilians on U.S. streets.)
The Pentagon is launching the strategy at a time when Russia and China are accelerating their own AI adoption, but also when public trust in AI is collapsing across the U.S. political spectrum. It also arrives as many European allies are turning away from U.S. tech companies due to the administration’s aggression toward democracies.