US Army tanks using AI for automatic targeting to help crews

By Ken Hanly     Mar 7, 2019 in Technology
Just last month the US Army issued a call to private companies for proposals to improve its planned semi-autonomous, AI-driven targeting system for tanks.
The system, called ATLAS (Advanced Targeting and Lethality Automated System), is designed to acquire, identify, and engage targets at least three times as fast as the current manual process.
A recent government announcement is for an Industry Day later in March:

"The Army Contracting Command - Aberdeen Proving Ground Belvoir Division (ACC-APG Belvoir) is announcing an Industry Day on behalf of the U.S. Army Combat Capabilities Development Command (CCDC) and the Army Science and Technology (S&T) community, for the Advanced Targeting and Lethality Automated System (ATLAS) program. An Industry Day will be held at Fort Belvoir, VA on 12-13 March 2019 at the Officer's Club. The purpose of this meeting is to provide Industry and Academia with an overview of the program and to solicit sources capable of providing technical solutions to support ATLAS.

"The Army has a desire to leverage recent advances in computer vision and Artificial Intelligence / Machine Learning (AI/ML) to develop autonomous target acquisition technology, that will be integrated with fire control technology, aimed at providing ground combat vehicles with the capability to acquire, identify, and engage targets at least 3X faster than the current manual process. The ATLAS will integrate advanced sensors, processing, and fire control capabilities into a weapon system to demonstrate these desired capabilities. The goal of this industry day is to provide developments achieved regarding these technologies within the traditional defense community, as well as the private sector, including those firms and academic institutions that do not traditionally do work with the U.S. Army."
The language led some observers to fear that the Army was developing fully autonomous AI-powered killing machines.
Humans will still be involved in decisions to fire
In response, the Defense Department did not change the language of the invitation, but explained that its policy had not changed: "All development and use of autonomous and semi-autonomous functions in weapon systems, including manned and unmanned platforms, remain subject to the guidelines in the Department of Defense (DoD) Directive 3000.09, which was updated in 2017. Nothing in this notice should be understood to represent a change in DoD policy towards autonomy in weapon systems. All uses of machine learning and artificial intelligence in this program will be evaluated to ensure that they are consistent with DoD legal and ethical standards."
Directive 3000.09 requires that humans be able to "exercise appropriate levels of human judgment over the use of force". Hence, the US does not intend to release a fully autonomous tank onto the battlefield, but only tanks with AI software to aid crews in finding, identifying, and engaging targets. The tank cannot independently decide to fire at a target; a human makes the final decision. This is similar to the situation with drones, where the operator makes the final decision to fire, although unlike drones the tanks will still carry a crew. The drone experience also points to a risk of automation bias: operators may defer to the AI and fire at targets simply on the basis of the data the system presents about them.
The problem of autonomous tanks and other robotic weapons still exists
At present the US Army is developing tanks with AI capabilities that help the human crew find, identify, and engage targets. The AI system is like having another soldier in the turret to help out the crew. However, the Army could in time decide to do away with the human factor altogether and develop self-driving tanks that find, identify, and engage targets entirely on the basis of AI. The prospect of tanks capable of operating without human decisions is surely worrying enough.
Stuart Russell, an AI scientist and activist at Berkeley and a prominent critic of ATLAS, said: "Even if the human is 'in the loop' [currently], the 'approval' step could easily be eliminated … meaning that this is lethal autonomy in all but name."