Op-Ed: Google’s Project Maven — A tech too far?

The risk of “autonomous killing machines” has been, and still is, debated long and loud around the world. The general consensus may be against them, mainly because of the risk of unleashing God knows what on hideous modern battlefields, but the practical issues are even less appealing. Google’s entry into this field means that the major league is now involved.
The worry is real enough. Google staff have gone so far as to petition management to end Project Maven, Google’s venture into military AI, and some are even said to have quit.
Project Maven is classic AI: it’s about teaching machines to act autonomously, and it has direct applications to military hardware such as drones and other systems.
Why The Military Is Interested In AI
There’s a reason for the military interest in this type of AI. AI allows systems to respond instantly on the spot, rather than from a remote, and perhaps compromised, location. Reaction times in modern combat have to be instantaneous. A best guess from a few continents away isn’t good enough anymore.
However, that level of autonomy is exactly what’s bothering critics. Can AI distinguish between civilians and enemy combatants? What if the software goes wrong and the AI decides to take out everything in the vicinity, friend or foe?
The military may not be too keen on “AI on blue” (friendly fire) scenarios, either. Some systems carry enough firepower to do catastrophic damage. There are obvious requirements for safeguards, and above all, “Off” switches to avoid these risks.
AI also theoretically reduces the risk of human casualties in high-risk situations. You can replace a machine, but replacing a trained human with another trained human takes a long time. A lower body bag count has its merits, too.
The Other Side of AI: Enemy AI Is Coming
You can expect a fairly standard arms race in military AI. China in particular sees AI as an area in which to challenge the United States, and it’s highly unlikely that military considerations would be left out.
If one side has a weapon, the other side wants a better weapon. It’s the unavoidable logic of arms superiority since the Bronze Age, and not much has changed.
There’s a cut-off point to the similarities this time. Modern weapons and AI are a deadly combination, and being at a combat disadvantage in this type of warfare really isn’t an option. If an enemy drone can knock billions of dollars’ worth of your hardware out of action at a fraction of the price of manned systems, the cheap kill is the only real choice available.
The US currently has an edge in AI. Momentum is building to introduce AI across the entire economic and social spectrum, another compelling, and hard to stop, force in play. Arguably, US AI could be the ultimate deterrent, a machine system which can respond even if the conventional systems can’t.
Scenario
Imagine an AI nuclear submarine. It’s a lot smaller than manned subs. It carries enough firepower to flatten a civilization. It’s mobile, stealthy and hard to find, let alone target. Your enemy is very wary of this wild card, which can do colossal damage.
When you don’t know where the threat is, you have to act according to general risk values, and an AI sub would rate as a serious risk. Your own AI may not be quick enough, or carry enough smart systems, to find these subs. So you think twice, three times, about a conflict.
OK, let’s add a few considerations to this scenario:
1. The AI sub, for whatever reason, can malfunction. It’s either out of play or (perhaps overdramatically) it goes rogue. It initiates its attack sequence, partly because that’s what it’s designed to do, and starts a world war.
2. The AI sub isn’t all that easy for your own side to find, either. Rightly, its location is a secret, but the enemy will have tried to hack your systems to find it, or to sabotage your link with it. Do you have an asset, or an added security risk? Can you fix any problems fast enough?
3. Military systems using AI are equally risky, both as assets and as high-maintenance operational liabilities. Your AI will be your first line of attack and defence. It HAS to be 200% reliable. Do you use a constant monitoring system, which could itself be compromised, or some other option?
4. The “autonomous killing machine” isn’t entirely out of the question, however careful you are in creating it. Consider the learning requirements alone: AI has to learn a lot, and operate in a very fast-moving combat environment. Remember, too, that SNAFU is a traditional military expression for Murphy’s Law.
Having cheered you up with all that:
There is zero possibility that other nations won’t develop military AI, partly because they’ll assume you already have. Bluffing is not an option. AI can’t behave the same way as human-operated systems. The minute it’s deployed, it will be recognised by the way it acts.
Military AI is a step over a threshold, and there’s no going back. Consider well, and consider wisely. There are no refunds on this one.

Written By

Editor-at-Large based in Sydney, Australia.
