Op-Ed: Military robots, the future of war where nobody needs to show up?

By Paul Wallis     Mar 16, 2009 in Technology
In military circles, and maybe cubes, a debate is getting personal: the rise of the armed robot. The robots make a lot of sense, in terms of IEDs and body bag shortages. The trouble is that they bring a whole new type of war.
Wars have historically had a limit: The people fighting them. Even the mindless generational wars of recent times have had only so many actual combatants.
The argument is broadly about the ramifications of introducing unlimited numbers of combatants with theoretically unlimited capacities. Added to this is the fact that machines don’t have rationales.
It’s the second part of the argument that’s causing the worries at this stage. The “human factor”, ironically, is supposed to be the rational element. Robots can theoretically slaughter millions of people, and you can innocently say it’s a software error.
No doubt there’ll be much mirth in the graves.
Giggle, rot, necrotize, chuckle. Political expediency at its best. Can a hair dryer commit a war crime?
However, this isn’t really an issue that can be avoided.
The start of wars between industrial societies, like the First World War, was the beginning of the dehumanization of warfare. The tank was designed to overcome conditions in which humans couldn’t survive, and this is really the next logical step. Mechanized warfare ironically costs fewer lives than the footslogger’s war.
It’s that logic which is pushing the military robots. Just about every country on Earth has some form of robot. The US military has literally thousands, scouring the caves looking for terrorists. The armed form, particularly a fire and forget version loaded with high explosive, could also demolish caves as hiding places.
Interesting to think humanity is returning to the caves in this way, isn’t it? We’ve come so far, you know… spend a few hundred thousand years and come up with a new way of hitting people on the head… dazzling…
With the robots comes an arms race that could make the Cold War look like a daisy-chain-making competition. The robots make tactical sense. Robots aren’t limited in function. Nor are they limited in theoretical destructive power. A relatively basic robot could carry a 50kt nuclear charge quite easily. That’d really shake up your cornflakes.
If there are going to be limits, they have to be practical limits. The obvious and very relevant limit at the moment isn’t weaponry, but decision making. Robots don’t yet have the judgment or instinctive reflexes of humans. Software and hardware have come a long way, but not to this real higher brain level.
Fortunately, the military is being wary of the idea of armed robots as the answer to everything. Just as well, because they aren’t. They could be a total tactical liability, if they can’t adapt to situations. It’s expecting a lot of any kind of robot to expect it to instantly comprehend things which would have a human flapping their arms and flying away.
Machines might not get exhausted, go insane or die of body odor, but they do break down. An average of 30% of any military inventory is maintenance fodder at any given moment. Advanced machines create large, expensive, logistic chains for that reason.
A machine telling you about a syntax error in the middle of a firefight may not be quite as idyllic as it sounds, either. It’s anyone’s guess how many people have died cursing since the first bow and arrow misfired. System reliability is a generational thing, and a lot of operators get wiped out before the bugs get ironed out.
The robots of the current generation, however, are showing up with another useful function. The BigDog robots of the US Marines are a case in point. These are quadrupeds (no minor technical feat, given power-to-weight ratios), and they’re an indication of how different robot roles in warfare could become.
On a more basic level, the Squad Mission Support System (SMSS), which looks like a cut-down, reworked LAV and is designed to do the heavy lifting for troops, is an instantly recognizable, useful workhorse.
Sorry, I should have said MULE. The US Army’s Multi-Utility Logistics Equipment (MULE) vehicles are semi-independent. MULEs can be remotely operated in several different modes, and they can already look after themselves to some degree. They can find their own way around obstacles. They can also carry weapons. The current idea is to equip them with Javelin anti-tank missiles and machine guns by 2015. That’s a lot of firepower, more than a Bradley.
“Multi Utility” is likely to be the working phrase for these machines as they evolve. Processors that are perfectly capable of playing chess won’t have a lot of trouble with basic number crunching like targeting. As it is, humans don’t do much of the work of targeting and other operations. Fighters have been computer dependent for decades for that reason. Robots can do multiple tasks the minute they get the software. Add a few stray humans for the decisions, and you have ten robots with the abilities of a battalion.
You can see the obvious interaction: Part machine, part human operation. That really opens up the floodgates, in terms of possible technologies. It’s also a genie that isn’t going to be hanging around in the bottle for long.
The current forms of warfare have one serious problem: They’re annoying. Complex warfare, if nothing else, gets on the nerves of combatants. It’s now a scratchable itch, if you have machines to do the work. If you can get around the operational ramifications of that annoyance with a bit of code and a click or two, that’s the way things will work. A few considered accidents are quite likely, and you could write the script pretty easily.
Given humanity’s ability to kick itself in the teeth with its own technology, there have to be problems waiting nobody’s yet been stupid enough to discover. Something for future generations to enjoy.
At the moment, the problem is avoiding turning all this convenience into yet another form of self-destruction on a massive scale. There’s nothing but budgets and sanity standing between robots and their use as strategic weaponry.
Concepts are a bit thin on the ground for dealing with this quite likely form of escalation of military capacity. On the tactical level, something like Asimov’s First Law of Robotics seems to be the ballpark idea. Telling friends and foes apart is one problem.
Complex warfare also involves making judgment calls that robots can’t make. What’s the difference between a family with a shopping cart and an enemy combatant? We know, but a robot wouldn’t. This is currently an area where robots definitely can’t operate, and shouldn’t.
Not yet covered are nano-robots and emerging technology. I innocently searched military+robots and got 2.3 million images.
Some of the information and headings related to these images, like killer robots which can change shape, take a bit of groaning about, but the designs are interesting, and so are the design concepts.
Shape changing killer robots aren't impossible, just a bit passé as an expression. What's wrong with "acrimonious", or "argumentative and heavily armed" instead of "killer"?
Robot “swarms” are another issue. Collaborative intelligence, in groups of robots. Fine, but check out the extended logic of that idea. To achieve what? Build a nest? Basically, they’re an amorphous network. There are tactical applications for that, but we’re not yet at the stage of much more than basic gaming technology. Real world applications of swarming robots, vs. shotgun, are still pretty much on the side of the shotgun.
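For readers wondering what “collaborative intelligence” in a swarm actually amounts to, here’s a toy sketch in the spirit of the gaming technology mentioned above: each agent steers using only its neighbours’ positions (a cohesion pull plus a short-range separation push), with no central controller, and the group still converges into a loose cluster. Every name and parameter here is illustrative, not any real military system.

```python
# Toy swarm: decentralized cohesion + separation rules per agent.
# Purely illustrative; parameters are arbitrary assumptions.
import math

def step(positions, radius=5.0, cohesion=0.05, separation=1.0):
    """Advance every agent one tick using only local information."""
    new = []
    for i, (x, y) in enumerate(positions):
        nx = ny = 0.0   # running sum of neighbour positions
        count = 0
        sx = sy = 0.0   # separation (push away from crowding)
        for j, (ox, oy) in enumerate(positions):
            if i == j:
                continue
            d = math.hypot(ox - x, oy - y)
            if d < radius:
                nx += ox; ny += oy; count += 1
                if 0 < d < separation:
                    sx += (x - ox) / d; sy += (y - oy) / d
        vx = vy = 0.0
        if count:
            # cohesion: drift toward the average neighbour position
            vx += (nx / count - x) * cohesion
            vy += (ny / count - y) * cohesion
        vx += sx * 0.1; vy += sy * 0.1
        new.append((x + vx, y + vy))
    return new

def spread(positions):
    """Mean distance of agents from their centroid."""
    cx = sum(p[0] for p in positions) / len(positions)
    cy = sum(p[1] for p in positions) / len(positions)
    return sum(math.hypot(x - cx, y - cy) for x, y in positions) / len(positions)
```

Run a few dozen ticks on four scattered agents and the spread shrinks until the separation rule holds them apart: coordinated behaviour emerges with no commander in the loop, which is both the appeal and, as argued above, the worry.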
Stories abound, too. There are too damn many techno-illuminati for my tastes, and it’d be nice if they’d check their facts. They’re like the superstitious villagers in a B movie.
I did, however, find one site, Computer Science for Fun, with some less technophobic materials:
This is interesting stuff, and heading to the ballpark for realistic applications.
It also includes the Three Laws of Robotics, by Isaac Asimov:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
From the look of this lot, I’d say what we really need is a robot version of Hippocrates, in partnership with a robot Diogenes, searching the net for an honest robot. Then they’d be cynical enough to fight wars without us.
Who knows, the robots might even come up with a logical reason for them.
Nah, by that time they'd be able to talk, and they'd be insufferable. We'd have to kill them.
This opinion article was written by an independent writer. The opinions and views expressed herein are those of the author and are not necessarily intended to reflect those of DigitalJournal.com