Pentagon leaders say workers are using new agentic AI tools to compress weeks of work into hours. But the same tools are opening new frontiers of digital crime and changing the very nature of cybersecurity.
The rollout of agentic tools on the department’s GenAI.mil platform since December has been a “tremendous success,” Emil Michael, defense undersecretary for research and engineering, told reporters at the Pentagon on Tuesday.
Michael said people were using the tools to do “the mundane part of their job” and take a “two-week task and compress it down to three hours.”
The platform recently added Google’s Gemini and is looking to add more models.
We’re “trying to have options so we’re not single-threaded on any one vendor, and each of these models is trained in a somewhat different way on different data. So we’re going to learn which ones are more capable on which dimensions,” he said.
In its quest for AI tools that can help find vulnerabilities, the Pentagon is even evaluating Mythos, a powerful agentic model made by Anthropic—a company that has officially been labeled a national-security risk. Anthropic has sued the government over the designation.
Michael said the government is in a “testing and evaluation period” with Mythos, which is already being used by agencies and a select group of large companies to find vulnerabilities. He tried to explain why the Pentagon was still using tools from a company that allegedly threatens national security, saying that Mythos is “a different product in some ways. Different probably than the company itself.”
What the Pentagon and the rest of the federal government must do now, Michael said, is “look at what this model can do, not only to the government software and hardware infrastructure, but to the private sector…for the rural hospitals, for the wastewater treatment plants, to all the things. So that we have the ability to patch them before adversaries get the same ability.”
When criminals use agentic AI
Michael said agent-based AI tools like Mythos, which can find and patch vulnerabilities without human oversight, will become more standard.
“All the big tech companies now are using these cyber models to find vulnerabilities. They’re trying to make automatic patching using agents and using the same models. So we expect that to grow across the industry, across the government.”
But that won’t be enough to protect against future AI-enabled attacks.
Jackson Reed, founder of AI startup Barding Defense, says that agentic tools will change cybersecurity in ways that many institutions don’t yet appreciate.
“We’re going to see criminal groups look a lot more like state actors,” Reed said.
What does that mean? Today, most cybercriminals focus on fast-payoff attacks like stealing data or encrypting it for ransom. But soon, Reed said, they will mimic some state-backed Chinese and Russian groups by trying to stay inside a network to spy, move “laterally,” or manipulate data.
“Changes in attacker skill are going to produce entire new taxonomies of attack (like the industrialized insider trading example, or industry-wide ransomware deployments) that will pose risks to society and raise questions about the feasibility of current constitutional approaches,” Reed said in a followup email.
That will create new business models for cybercriminals and for states, such as Russia, that routinely work with those criminal groups. Reed said using AI to automatically detect and patch software holes won’t protect against that. For instance, Opus 4.6, Anthropic’s latest model for coding and reasoning, can find and fix software vulnerabilities, but it misses things like lateral movement, he said.
Reed is working with Breakpoint Labs, a cybersecurity company that works with the U.S. military, to develop a new sort of agent platform to help cybersecurity professionals find the new kinds of attacks that agentic AI tools enable but can’t spot.