Op-Ed: AI tanks, autonomous military platforms, the game is changing

The sheer volume of rhetoric and shill-like babble about military AI is already gigantic. A constant stream of new AI in military roles makes headlines every day. Everyone has an opinion; answers to the questions, maybe not.
The likely effect of super-lethal AI-operated weapons is one of those questions. “Slaughter” is the usual answer. This is before the systems even become operational. It’s anyone’s guess what else will come out of the genius factory before any meaningful facts become available.
The current truth seems to be the traditional mix of disingenuous fanaticism about new technologies versus reality. Some facts are available. AI can definitely do things human operators can’t. There are too many elements in a fifth- or sixth-generation fighter, for example, for the pilot to manage, and the weapons systems can be anything from nukes to combat systems to communications. Drones are also heading that way tactically, and fast. Drone swarms in particular have developed rapidly from their not-so-innocuous beginnings into serious tactical problems.
AI tanks
Perhaps the best example of AI in military service is the new furor over AI-operated tanks. The new tanks are almost at the point of “no humans required”, with a huge list of caveats.
One of the problems with so-called “heavy metal” tanks and tank theory is their sheer size, cost, and heavy support needs. Tanks are high-value military assets.
Much has been said about how easily tanks can be knocked out, but they remain a serious threat in combat. A modern Main Battle Tank (MBT) can do unbelievable damage. Tanks can’t be ignored for that reason.
A tank is simply a mobile weapons platform with added survivability. One of the reasons they were introduced in World War One was to save lives in the face of insane casualties on the Western Front. If some tanks got knocked out, you might lose a few people. If infantry were involved, you could lose multiples of those numbers, hundreds or thousands of people in the same environment.
Tanks are mobile. They can travel long distances, allowing for maneuver and tactical/strategic deployment in any sort of scenario. Tanks are versatile. Anything that can be fired at a tank can be fired by a tank, or tank variant.
Future tanks will NEED to be designed with the same principles in mind. Combat effectiveness and efficiency are the name of the game, whatever the technologies involved.
That’s why AI tanks are such an important step. AI handles basics like targeting well. The time lag between finding a target and hitting it is critical, and AI can close that gap seamlessly.
This also applies to self-defence. Self-defence systems for tanks are now approaching puberty. These systems can manage even close-range threats like RPGs, helicopter attacks, and more. Onboard defence systems capable of countering IEDs are becoming increasingly likely. This is the usual story of adaptation to threats. From this perspective, AI is an inevitable part of combat systems for tanks.
Autonomous weapons – this is not the time to be naïve
The big threat from military AI is seen as autonomous weapons systems, with some very good reasons. What level of trust can you put in an AI-operated platform with large amounts of lethal weaponry? What safeguards are in place? (I’ve covered this subject before, but the stakes are getting bigger by the day, both in terms of lives and money.)
The “killer robots” idea, as now being applied to things like drone swarms, however, is getting the wrong end of the argument. Simply saying they’re dangerous, which they are, is missing the point.
The point being missed is any discussion of how to deal with these things. Banning them will have no effect whatsoever. Drone swarms are likely to be highly effective in combat. Nobody will just merrily sign off on the idea of not having a system to counter what the enemy has, either.
Nor will anyone give up a major technological advantage. It’s hard to see the United States swearing off AI systems while China and Russia develop more advanced systems. That’s not even slightly credible.
Even the idea of defining autonomous systems for some sort of nominal ban can be easily corrupted. A bit of AI isn’t autonomous, right? A few separate bits of AI aren’t autonomous, right? Hook the bits up and that weapons system is effectively autonomous.
The usual story with arms limitations is that everyone simply develops weapons systems around the theoretical and legal obstacles. It’s too naïve to assume that won’t happen on a routine basis.
Getting around the practical obstacles, however, is a very different ballgame. Realistically, these systems will also need intensive teething. No system is instantly workable for combat purposes. A rogue AI system could be a serious liability to its own side; it could even start a war all by itself. AI systems will require extreme vetting and quality control before being trusted on a battlefield. That’s just common sense. Even so, these are real issues, right now. Full autonomy is still some distance away, but a high level of it is already easily visible.
Remember also that, like all modern weapons systems, AI systems will be global. There’s no way out of these arguments but to solve the problems.
The human role in war is about to change
The rise of the technician/soldier in the last century has changed war drastically. Lethality is now many multiples of what it once was. Platoons and even squads can now have the firepower of the battalions of 100 years ago, and far more combat capability.
Combat units now operate a huge variety of systems using minimum numbers of personnel. The skills bar has been raised through the roof many times. The risk factor has also gone up. Precision targeting alone has made the combat zone a much more dangerous place.
In one of the greatest ironies in human history, the range of human combat operations is now inevitably global. The human element in war is therefore far more advanced because it needs to be. AI is the latest, perhaps greatest, perhaps most dangerous, challenge.
The problem with AI isn’t theory; it’s the quality and depth of intelligence. AI can learn, and it learns well. That doesn’t mean it can outsmart human opponents. It can react faster, sure – and react in exactly the wrong way. If you give your AI software a shoot/don’t-shoot option as its working parameters, sooner or later both of those options are going to be wrong.
Caveat – That doesn’t mean this problem hasn’t occurred to the people writing the software. It means combat situations can be ambiguous. How do you factor in ambiguity?
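To make the point concrete, here’s a minimal, purely hypothetical sketch in Python. The threat scores and thresholds are invented for illustration; no real targeting system reduces a combat scene to a single number. The first function is the forced binary option described above; the second adds one common hedge, an abstention band in which the system refuses to decide and defers to a human.

```python
# Hypothetical sketch: a forced shoot/don't-shoot choice vs. one that can abstain.
# The scores and thresholds below are invented, not drawn from any real system.
from enum import Enum


class Decision(Enum):
    SHOOT = "shoot"
    HOLD = "hold"
    DEFER = "defer to human operator"


def binary_decision(threat_score: float) -> Decision:
    # Forced binary choice: every ambiguous case still gets an answer,
    # and sooner or later that answer is wrong.
    return Decision.SHOOT if threat_score >= 0.5 else Decision.HOLD


def decision_with_abstention(threat_score: float,
                             low: float = 0.2, high: float = 0.9) -> Decision:
    # One way to factor in ambiguity: a middle band where the system
    # refuses to decide and hands the call to a human.
    if threat_score >= high:
        return Decision.SHOOT
    if threat_score <= low:
        return Decision.HOLD
    return Decision.DEFER


for score in (0.05, 0.45, 0.55, 0.95):
    print(score, binary_decision(score).value, decision_with_abstention(score).value)
```

Even this toy version shows the design problem: the abstention band just moves the hard question to wherever the thresholds are set, and someone still has to set them.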
The role of the human in warfare could well become that of exercising judgment in combat scenarios. There are some decisions AI can’t necessarily make, and it almost certainly won’t make the same decisions as a human.
“Complex warfare” is just that – complex. Can AI factor in civilians, risks, tactical changes, situational no-go zones? Even with a truly gigantic range of information available, can it do the job properly? In theory, it might, but theory and combat are two very different environments.
The takeaway from this rather thankless series of much-too-real issues is that military AI is too complex to be left to rhetorical positions. Critical thinking is required, and a lot of it.

Written By

Editor-at-Large based in Sydney, Australia.
