A few days ago, the Ukrainian online newspaper Pravda published news of a Western coalition's plan to supply a large number of autonomous war drones to the Ukrainian army. The original story comes from Bloomberg, which is usually paywalled.
Rob Bauer, chairman of NATO's military committee, told Bloomberg that:
the use of drones in Ukraine "in combination with artificial intelligence" could be more successful than artillery shelling by the Russians.
Beyond that, the heart of the story lies not only in the number of drones to be supplied, but in the capabilities that artificial intelligence will bring to them... However, I am highly dubious about the real technical feasibility of this plan, and particularly skeptical about the possibility of implementing this system on drone swarms.
Nowadays, there are thousands of signal jammers on the Ukrainian front, which greatly limits the attacking capacity of UAVs. Both Ukrainian and Russian forces are pursuing the development of autonomous aerial weapons, but probably not in the sense that lives in the public imagination.
“The war in Ukraine is spurring a revolution in drone warfare using AI,” claimed a Washington Post headline last July. Then, in the fall, various reports mentioned that both Russia and Ukraine had deployed small drones that used artificial intelligence to identify and strike targets. Having on-board AI meant that the drones wouldn’t need a human operator to guide them all the way to impact. Humans would only need to pilot them to the edge of the jammed area; once the drone flew autonomously, jammers would be useless as a defense against this kind of attack.
If this AI had proved itself in battle, it really would have been a revolution. Skilled drone pilots could have been replaced by thousands of algorithms quickly trained to point-and-click on potential targets. And instead of every drone requiring an operator staring at its video feed full-time, a single human soldier could cover a swarm of lethal machines.
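The hand-off described above can be reduced to a very simple control idea. The toy sketch below is purely my own illustration (not any real drone's software, whose logic is unknown): the operator steers the drone while the radio link is up, and once jamming cuts the link, on-board guidance keeps homing on the last locked target position.

```python
# Toy sketch of operator-to-autonomy hand-off under jamming.
# Entirely hypothetical; 2-D positions, unit-speed drone.

def guidance_step(pos, target, link_up, operator_cmd, speed=1.0):
    """Return the drone's next 2-D position.

    While the link is up, follow the operator's command vector.
    When jamming drops the link, steer autonomously toward the
    last known target coordinates.
    """
    if link_up:
        dx, dy = operator_cmd                            # human in the loop
    else:
        dx, dy = target[0] - pos[0], target[1] - pos[1]  # autonomous homing
    norm = max((dx * dx + dy * dy) ** 0.5, 1e-9)
    return (pos[0] + speed * dx / norm, pos[1] + speed * dy / norm)

# Simulate: the operator flies east until jammers kill the link at step 5.
pos, target = (0.0, 0.0), (10.0, 0.0)
for step in range(12):
    link_up = step < 5          # jammed area reached at step 5
    pos = guidance_step(pos, target, link_up, operator_cmd=(1.0, 0.0))
print(pos)  # the drone still converges on the target: (10.0, 0.0)
```

The point of the sketch is that, once the target is locked, the jammer attacks a link the drone no longer needs.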
However, in military news, hype and exaggeration are common weapons. In early February, a detailed report from the Center for a New American Security dismissed the AI drones in a few lines. In the words of Stacie Pettyjohn, CNAS's defense program director:
The Lancet-3 was advertised as having autonomous target identification and engagement, although these claims are unverified.
Both parties claim to be using artificial intelligence to improve the drone’s ability to hit its target, but likely its use is limited.
Then, on February 14, an independent analysis suggested that the Russians, at least, had turned their Lancet’s AI-guidance feature off. Videos of Lancet operators’ screens, posted online since the fall, often included a box around the target, one capable of moving as the target moved, and a notification saying “target locked,” freelance journalist David Hambling reported on Forbes. Those features would require some form of algorithmic object recognition (similar to the older smartphone features that identify your face, mouth and eyes in photos), although it’s impossible to tell from video alone whether it was merely highlighting the target for the human operator or actively guiding the drone to hit it.
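That kind of moving "target locked" box needs nothing more exotic than classic template matching, which is exactly why the video proves so little. The sketch below is my own toy illustration (not the Lancet's software): slide a small template over each frame and place the box where the match score is best, so the box follows the target as it moves.

```python
# Toy template matching: find where a small template best fits a frame,
# scoring by sum of squared differences. Frames are plain 2-D lists of
# pixel intensities; entirely illustrative, not any real tracker.

def lock_box(frame, template):
    """Return the (row, col) of the best template match in the frame."""
    th, tw = len(template), len(template[0])
    best_score, best_pos = None, None
    for r in range(len(frame) - th + 1):
        for c in range(len(frame[0]) - tw + 1):
            score = sum(
                (frame[r + i][c + j] - template[i][j]) ** 2
                for i in range(th) for j in range(tw)
            )
            if best_score is None or score < best_score:
                best_score, best_pos = score, (r, c)
    return best_pos

# A bright 2x2 "target" sits at different spots in two successive frames:
template = [[9, 9], [9, 9]]
frame1 = [[0, 0, 0, 0], [0, 9, 9, 0], [0, 9, 9, 0], [0, 0, 0, 0]]
frame2 = [[0, 0, 9, 9], [0, 0, 9, 9], [0, 0, 0, 0], [0, 0, 0, 0]]
print(lock_box(frame1, template), lock_box(frame2, template))  # (1, 1) (0, 2)
```

A box drawn this way tells us the software can recognize and follow the target on screen; it says nothing about whether that output is ever fed to the flight controls, which is precisely Hambling's ambiguity.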
Military AI hype
It’s impossible to confirm Hambling’s analysis without access to Russian military documents or the drone’s software code. But Pettyjohn agreed that Hambling’s interpretation was not only plausible but probable.
According to Samuel Bendett, one of the leading experts in this area:
This technology needs a lot of testing and evaluation, this technology needs a lot of iteration, and many times the technology isn’t ready. I think it’s a slow roll because both sides want to get it right. Once they get it right, they’re going to scale it up.
I am sure you are aware of the Boston Dynamics robot videos. The team releases the one perfect take out of hundreds of failed attempts, which means their technology is not as consistently flawless as it appears. Perhaps we should apply the same skepticism to the footage David Hambling analyzed.
The war in Ukraine has shown that cheap, scalable, robust and technically unremarkable solutions are the ones that work with drones. So I am almost sure that the British and American plan to provide Ukraine with intelligent drone swarms also carries a bigger dose of propaganda than technical reality. We’ll see.
Indeed, I agree with you. The war in Ukraine is revealing many new realities. Most of us thought that cyberattacks would be a decisive weapon, or that Russia could not survive without oil exports.
In the case of UAVs, I think many countries practised with them in conflicts with less media coverage, such as those in Sudan or Ethiopia. Both sides are always trying new ideas.
The law of nature teaches us that it is much more expensive to achieve something than to pretend it. Animals use plumage and movement to appear larger than they are; primates, for example, make deeper sounds than would be expected for their body size. It is often said that the first casualty of war is truth, so it is possible, as you say, that where AI is boasted there is mostly propaganda.
However, I think this issue is not only about gaining better positions through propaganda. The war in Ukraine could be, as the Spanish Civil War was before the Second World War, a testing ground for future conflicts on a larger scale, for example around Taiwan.