Op-Ed: Military AI — Hitting the targets, but missing the point

The endless controversy over military A.I. is as ferocious, and as ineffective, as ever. The current state of military A.I. development has two faces, as you'd expect: the public face and the secret face. The public face is bad enough. Robot tanks, swarms of explosive-laden drones and the rest are all fodder for the ongoing march of the world's first robot armies.
Machines can be programmed to minimise damage, stick to specific targets, and play nice. Is it likely they will be? No. Function matters more in combat than niceties. Inevitably, some "laws" won't be applied, as in any war. Autonomy also means that A.I. can, and must, interpret the rules and make decisions based on something like a "weighted logic" scenario (a rough sketch of that logic follows the list below).
For example:
• The mission is to hit the target.
• There are other people in the target area.
• The machine is programmed to avoid any unnecessary loss of life, or something to that effect.
• The mission takes priority, and anyone else in the target area is automatically at risk.
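A minimal sketch of how that weighted logic might play out, in Python. The weights, names and numbers here are entirely hypothetical, invented only to illustrate the point that mission value can be set to outweigh collateral harm.

```python
# Illustrative only: hypothetical weights for a "weighted logic" targeting decision.
MISSION_PRIORITY = 0.9        # hit the target
COLLATERAL_PENALTY = 0.4      # avoid unnecessary loss of life, "or something similar"

def engage(target_value, expected_bystander_harm):
    """Return True if the weighted mission value outweighs the weighted harm."""
    score = MISSION_PRIORITY * target_value - COLLATERAL_PENALTY * expected_bystander_harm
    return score > 0

# The mission takes priority: even with bystanders in the area,
# the weights can tip the decision toward engagement.
print(engage(target_value=1.0, expected_bystander_harm=1.0))  # True
```

Nothing in that sketch is malicious; the outcome simply follows from which number somebody set higher.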
A.I. can and does learn. In a military operation, however, learning isn’t the primary function. Algorithms dictate parameters, but not situations.
In a strategic sense, this has instant, serious, and potentially major global ramifications. A scenario: a US nuclear sub encounters an aggressive anti-submarine force. It takes evasive action, but is cornered. The A.I. knows how to fight, but there's a minor software problem: its core survival strategy is an all-out defensive response. By simple oversight, nobody thought to make sure it doesn't release its nukes as part of that response. It simply never occurred to anyone that the provision would be necessary. Result: World Wars 3 through 7.
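To make the oversight concrete, here is a purely illustrative sketch of a hypothetical escalation policy. Every name in it is invented for illustration and describes no real system; the point is that when no one writes the constraint down, the "most effective" option wins by default.

```python
# Purely illustrative: a hypothetical escalation policy with a missing constraint.
ESCALATION_OPTIONS = [
    # (option, estimated effectiveness against the threat)
    ("evasive_maneuver", 0.3),
    ("decoys_and_jamming", 0.5),
    ("conventional_torpedoes", 0.7),
    ("launch_nuclear_weapons", 0.99),  # the oversight: never explicitly excluded
]

def select_response(options, forbidden=()):
    """Pick the most 'effective' option that isn't explicitly forbidden."""
    allowed = [(name, score) for name, score in options if name not in forbidden]
    return max(allowed, key=lambda pair: pair[1])[0]

# Nobody thought to forbid the nuclear option, so the survival logic picks it.
print(select_response(ESCALATION_OPTIONS))  # launch_nuclear_weapons

# One line of foresight changes the answer.
print(select_response(ESCALATION_OPTIONS,
                      forbidden=("launch_nuclear_weapons",)))  # conventional_torpedoes
```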
If a nuclear false alarm occurs, can A.I. systems cross-check to confirm that the alarm is false? How? Has anyone thought to tell the A.I. it needs confirmation? Probably, but what if it doesn't get it, or the confirmation source is destroyed? If a bioweapon is used on New York, can an A.I. system do the job of monitoring, instituting countermeasures, managing an evacuation, and running threat assessments and projections? With what assets?
The trouble with military A.I. is that it is by definition grafted onto a whole spectrum of non-military and political situations. It may or may not be able to assess those situations, or get the information it needs to assess them properly. A.I. decisions have to be based on what the A.I. is capable of doing, both passively and actively.
Inevitability, both ways
It’s quite inevitable that A.I. will become part of war. “Fight by wire” systems are very much part of the mix in modern militaries around the world. That makes the weapons more dangerous, more effective, and, some say, with or without justification, more likely to be used. A smart rifle is still a rifle. You can’t do your knitting with it. It will act as a gun, and that’s the whole story.
Huge combat A.I. systems are exactly the same. They’re designed to fight wars. If they respond to anything, it won’t be with a few GIFs and emojis, although they could do that. They’ll open fire.
Expecting A.I. to have human instincts and interpretations is also asking too much of it. You may know something isn't a threat; A.I. probably won't. Again, function has priority.
Facial recognition machine guns? Where development and fiction meet
A lot is being added to military A.I., notably facial recognition, of all things, for machine guns. A person hit in the face by machine gun fire wouldn't have much of a face left to recognize, but if you have the wrong face, you're a target. What about identical twins?
Facial recognition is hardly an exact science. It's also a pretty strange military priority, in one way. A million enemies have a million faces. The face itself isn't the enemy. The enemy could defeat facial recognition with simple masks. Why is this an issue at all? Yet, it is.
This is an example of how and where military A.I. development and reality need to get down to hard cases. Shooting someone on the basis of facial recognition software is hardly a great recommendation for anything. Just about everyone has a doppelganger and all facial recognition software has flaws, so you're building in a good chance of hitting the wrong target, "on principle"?
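The arithmetic behind that worry is simple. A quick sketch, using a hypothetical false match rate rather than a measurement of any real system:

```python
# Hypothetical figures, for illustration only.
false_match_rate = 0.001      # 0.1% of non-targets wrongly matched
faces_scanned = 1_000_000     # "a million enemies have a million faces"
actual_targets = 10

false_matches = false_match_rate * (faces_scanned - actual_targets)
print(round(false_matches))   # roughly 1,000 wrong faces flagged for a handful of real ones
```

Whatever the real error rate turns out to be, multiplied across a battlefield's worth of faces it stops being a rounding error.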
Come off it. This is the classic case of function added to function. Remember, this tech is very much about huge money. It's Christmas for the military-industrial complex, which means any amount of assorted crap will be sold to the world's militaries for decades. It'll also be like the Internet of Things. You'll have light operas added to tanks. ICBMs with Get Well Soon cards. The usual military-industrial spiel is to "add features" to the point where you've spent a few more trillion on whatever the current fad is.
The fictional part of A.I. is implied. It's wonderful. It's economical (it is, to a point). It's the system to beat all the other systems that don't exist yet.
All at a massive potential risk. A.I. is a blank slate. It’s inexcusably lazy to assume that all A.I. entities, and there will be billions of them, will be Safe For Baby.
Many of the military A.I. units will have some degree of slapdash, “didn’t think of that” stuff included. SNAFU is a proven military reality, and A.I. won’t get it. It won’t know how to interpret a cluster of mistakes and inaccurate data. All it can do is pull its trigger.
Military A.I. bottom line
The only safe way to manage military A.I. is to ensure that it’s properly modelled in tactical and decision-making scenarios. It shouldn’t even be deployed until there’s justification for doing so.
With all due respect to the science, I think the military professionals are the better judges of what’s needed, where and when. A.I. can’t have the instincts of the professional military, or the depth of knowledge outside its own parameters.
Regulation definitely isn't the answer. Wars are often fought with no regard at all for laws. The miserable record of enforcing the rules of war, at the expense of millions of lives, is well known. What's needed is an Off switch, both in combat and at the theoretical level, to make sure these systems can be managed.
My advice for the grunts would be forensic-level assessment, particularly of tactical situational awareness and manageability under extreme conditions. You also need a good fit for actual needs, not fictional needs. That won't make war any safer, but it will make it much less annoying for everyone involved.

Written By

Editor-at-Large based in Sydney, Australia.
