A mask of darkness had fallen over the Gobi Desert training grounds at Zhurihe when the Blue Force unleashed a withering strike intended to wipe Red Force artillery off the map. Plumes rose from “destroyed” batteries as the seemingly successful fire plan took out its targets in waves. But it had all been a trap.
When Blue began to shift positions to avoid counter-battery fire, exercise control called a halt and revealed that, far from defeating the enemy, more than half of Blue’s fire units had already been destroyed. After the exercise, the Red commander explained the ruse: he had salted the range with decoy guns and what he called “professional stand-ins” that mimicked the signatures of real units and troops. The decoys not only tricked Blue’s sensors and AI-assisted targeting into shooting at phantoms, but also prompted Blue’s batteries to reveal their own firing points.
It was just one example of how China’s military is building for a battlefield where humans and AI seek not just to fight each other, but to fool each other.
Under the banner of “counter-AI warfare,” the People’s Liberation Army is teaching troops to fight the model as much as the soldier. Forces are learning to alter how vehicles appear to cameras, radar, and heat sensors so that AI misidentifies them, to feed junk or poisoned data into an opponent’s pipeline, and to swamp battlefield computers with noise. Leaders are also drilling their units to spot when their own machines are wrong. The goal is simple: make an enemy’s military AI chase phantoms and miss the real threat.
The PLA conceives its counter-AI playbook as a triad that targets data, algorithms, and computing power. In May, PLA Daily described the concept in its Intelligentized Warfare Panorama series. It argued that the most reliable way to “break intelligence” is to hit all three at once.
First, counter-data operations inject junk data, skew what sensors see, slip in corrupted examples, and reshape a vehicle’s radar, heat, and visual signatures with coatings and emitters that mimic another platform’s profile, even its engine vibration, to mislead AI-assisted intelligence, surveillance, and reconnaissance. Second, counter-algorithm operations exploit model weak spots with logic tricks and crafted inputs, confusing AIs by breaking their “reward” signals and leading them to waste time in fruitless searches. Finally, attacks on computing power include “hard-kill” kinetic and cyber strikes on data centers and links, and “soft-kill” saturation attacks that flood the battlespace with electromagnetic noise, tying down scarce computing resources and clogging decision loops. A 2024 study by PLA researchers lists soft-kill techniques such as data pollution, reversal, backdoor insertion, and adversarial attacks that manipulate machine learning models.
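Several of those soft-kill techniques have well-documented open-literature analogues. As a point of reference, the sketch below shows one such “crafted input” method, the Fast Gradient Sign Method, assuming a PyTorch image classifier; the model, image, and label here are hypothetical placeholders, not anything drawn from PLA or vendor systems.

```python
# Minimal sketch of a crafted-input (adversarial example) attack, using the
# open-literature Fast Gradient Sign Method against a PyTorch classifier.
# `model`, `image`, and `true_label` are hypothetical placeholders.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.03):
    """Return a copy of `image` nudged toward misclassification while
    remaining nearly identical to the original to a human eye."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step in the direction that most increases the loss, bounded by epsilon.
    return (image + epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()
```

In the research literature the same idea extends to physical-world perturbations such as adversarial patches, which is why coatings and emitters that reshape a platform’s signature belong to the same conceptual family.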
Commentary from PLA analysts casts the contest as algorithm-versus-algorithm in joint operations. It urges planners to defeat enemy algorithms by probing how those algorithms make decisions, scrambling the signals that guide drone swarms, and maneuvering in unexpected ways to break the patterns those systems are trained to favor, all with the aim of tricking enemy sensors and models into misidentifying targets.
In sum, instead of fearing an enemy’s use of AI, the PLA defines the adversary’s AI as a target set, and assigns work to hit each part.
The PLA is already putting these ideas into action. In August 2023, an Air Force UAV regiment added “real and fake targets” to target-unmasking drills, forcing pilots to sort decoys from real targets. Similarly, PLA air-defense training now treats ultra-low-altitude penetration as a priority, with studies framing the fight as the meeting point of decoys, deceptive signatures, and AI-aided or intelligent recognition. In the maritime arena, a 2024 study builds a framework for unmanned underwater vehicles to detect and ignore acoustic decoys when attacking a surface vessel.
PLA writers also give sustained attention to the human half of the team. In April, PLA Daily warned that commanders can slide into technology dependency and amplify bias baked into training data. The prescribed remedy is to train commanders to judge when to trust the AI and when to overrule it: add deception scenarios to simulations and run human-machine wargames so operators practice spotting bad advice and overriding it. Follow-on commentary argued for “cognitive consistency” between operator and tool. In this model, wargames embed adversary behavior and develop rapid courses of action so instructors can see when officers override a wrong algorithm and explain why.
Human-in-the-loop command remains the baseline, with humans continuing to play the role of operator, fail-safe, and moral arbiter. Lt. Gen. He Lei echoed this view in 2024, urging tight limits on wartime AI and insisting that life-and-death authority stay with humans. Recent guidance adds rules for how units collect, label, and track data from start to finish, and those rules feed training scenarios, post-exercise reviews, and performance scores.
Industry’s role
Reflecting this shift in PLA thinking, Chinese companies have begun to market counter-AI products across three categories: physical deception, electronic warfare, and software. Huaqin Technology markets multispectral camouflage that hides radar, infrared, and visual signatures. Yangzhou Spark offers camouflage nets and suits, stealth coatings, radar-absorbing materials, smoke generators, signature simulators, and radar reflectors. JX Gauss advertises inflatable, full-scale radar-vehicle decoys with remote-controlled moving parts. Together, these products support the counter-data playbook by changing how vehicles appear to radar, infrared, and visual sensors, planting convincing decoys, and tricking AI-enabled surveillance into locking onto the wrong signals.
Electronic-warfare vendors sell gear to jam communications links and network connections, in line with the PLA’s soft-kill concept of attacking computing resources. Saturating the spectrum with clutter and false signals forces an enemy’s AI and limited computing power to waste time and resources, while friendly forces maintain a clear picture. Chengdu M&S Electronics lists gear that generates false target signals, fields radar decoy rounds, and provides simulators that play back hostile radar and communications signals to confuse receivers. Balu Electronics sells communications-jamming simulators that build complex electromagnetic environments and drive multi-target interference.
Meanwhile, Chinese tech firms are developing counter-AI software. Tencent Cloud runs a large-model red-team program and offers tools that monitor and lock down model inputs and outputs to block prompt injection, tainted data, and leaks. Qi’anxin’s model protection fence and GPT-Guard add tools that simulate attacks and watch inputs and outputs for tampering, and RealAI’s RealSafe automatically builds test cases that try to fool models and checks how well they hold up. Marketed as defense, these tools also sharpen tradecraft for pressuring an opponent’s algorithms.
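To make the input/output screening concrete, the sketch below shows the general pattern such guard tools describe: scan prompts for injection phrasing before they reach a model and redact secret-looking strings from its responses. The patterns and function names are illustrative assumptions, not any vendor’s actual interface.

```python
# Illustrative sketch of prompt-injection screening and output redaction.
# The regex patterns and function names are hypothetical examples only.
import re

INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"disregard your (system|safety) prompt",
]
SECRET_PATTERN = re.compile(r"(api[_-]?key|password)\s*[:=]\s*\S+", re.IGNORECASE)

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt matches a known injection pattern."""
    return any(re.search(p, prompt, re.IGNORECASE) for p in INJECTION_PATTERNS)

def screen_response(response: str) -> str:
    """Redact obvious secret-looking strings from model output."""
    return SECRET_PATTERN.sub("[REDACTED]", response)
```

Real products layer far more on top, including classifier-based detectors and red-team-generated test suites, but the dual-use point stands: the same harness that probes a friendly model can be turned against an adversary’s.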
U.S. response
U.S. planners need not look to China to understand that they must assume their AI will be contested in future battles. The PLA’s work in this space reflects lessons from Ukraine, where deception operations have taken on new importance on a battlefield saturated with sensors. It also heightens concern about a growing “deception gap”: if the U.S. military and its partners cannot master today’s emerging tools, they may fall behind in a critical field.
Answering that playbook begins with structured red-teaming and rigorous test and evaluation, not just one-off demos. The U.S. already has building blocks, including DARPA’s GARD on adversarial robustness, IARPA’s TrojAI on backdoor detection, NIST’s AI Risk Management Framework for evaluation and risk controls, and DOT&E guidance for continuous test and evaluation across the enterprise. Planners must harden pipelines and models by protecting data provenance, detecting anomalies, preserving safe fallbacks, and monitoring model health in the field.
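As one concrete illustration of the “monitor model health in the field” point, the sketch below tracks a classifier’s recent prediction confidence and raises a flag when it sags, a crude proxy for drifted or adversarially manipulated inputs. The class, window size, and threshold are assumptions for illustration, not drawn from any fielded system.

```python
# Illustrative model-health monitor: flag sustained drops in prediction
# confidence so decisions can fall back to a human or a simpler system.
# Window size and threshold are hypothetical values.
from collections import deque

class ModelHealthMonitor:
    def __init__(self, window: int = 200, min_avg_confidence: float = 0.7):
        self.recent = deque(maxlen=window)
        self.min_avg_confidence = min_avg_confidence

    def record(self, confidence: float) -> None:
        """Log the confidence of the latest prediction."""
        self.recent.append(confidence)

    def degraded(self) -> bool:
        """True when the rolling average confidence falls below threshold."""
        if len(self.recent) < self.recent.maxlen:
            return False  # Not enough history yet to judge.
        return sum(self.recent) / len(self.recent) < self.min_avg_confidence
```

When the flag trips, the safe fallback described above would route decisions to a human operator or a hardened, simpler pipeline rather than continuing to trust the model.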
Keeping humans decisively on top of the loop remains essential and is codified in DoD Directive 3000.09 on autonomy in weapons. Units should also upgrade the opposing forces they train against, giving them AI-enabled reconnaissance and deception kits and ensuring that “real and fake” targets are part of every major exercise.
Failure to do so will mean that the American military’s enthusiastic embrace of AI brings not new advantages but new vulnerabilities, and even defeat, in this crucial new arena of warfare.