
How much independence should machines have in our future wars?


Beyond AI: Are we doomed to the dangers of autonomous weapons systems?

By Girish Linganna

Artificial Intelligence will inevitably transform warfare, as it will other aspects of society. The important thing is to strive for an evolutionary rather than an apocalyptic or catastrophic outcome. Advances in technology have consistently changed the way wars are fought, from chariots and the saddle to gunpowder and nuclear weapons, and now drones, as illustrated by the ongoing Russia-Ukraine conflict.
In the 19th-century Battle of Koeniggraetz, the Prussians trounced the Austrians with advanced breech-loading rifles, which could be reloaded quickly even while the shooter lay prone. The Austrians were saddled with slower muzzle-loading rifles, which had to be fired standing up. That technological edge was a crucial factor in the Prussian victory and ultimately helped ensure that Germany was unified under Berlin's leadership, not Vienna's.
If artificial intelligence were that kind of technology, either the US or China, competing for dominance in the field, might hope to gain at least a temporary military edge. But as a military tool, AI is less like the breech-loading rifle and more like the telegraph, the internet or electricity. In other words, it is not just a weapon but a foundational technology that will progressively change many things, including military operations.
This transformation is already underway. American satellites and surveillance drones generate far more data than human analysts can sift through quickly enough to give the Ukrainians timely, useful insights into Russian troop movements. AI takes on that task, assisting soldiers much as it helps doctors navigate large volumes of X-ray data, as Andreas Kluth of Bloomberg has reported.
The next phase involves integrating AI into robotic devices that will serve as automated wingmen for fighter pilots. A human pilot will still fly the aircraft, accompanied by a swarm of drones equipped with sensors and AI to identify and, with the pilot's authorization, destroy enemy air defenses or ground troops. The robots, or bots, will not mind if they are destroyed in the process. In this way, AI could not only reduce casualties and costs but also free humans to focus on the overall mission.
The important point is that these robots must obtain human permission before taking any lethal action. Algorithms are thought to lack the contextual understanding needed to judge accurately whether, for instance, people in civilian clothing are civilians or combatants; even humans often struggle to make that distinction. Kluth also argues that AI should not decide whether the human casualties a mission would cause are justified by its strategic objective.
The main question, then, is not about AI itself. The key issue, as Paul Scharre of the Center for a New American Security, a respected author on the subject, has argued, is how much independence we allow our machines, Bloomberg reported. Will the algorithm support soldiers, officers and leaders, or will it take over their roles entirely?
A similar issue existed before the emergence of AI, during the Cold War. Moscow developed "dead-hand" systems such as Perimeter, an automated protocol designed to launch nuclear strikes if an attack left the Kremlin's human leadership incapacitated. The goal was to convince the enemy that even a successful first strike would result in Mutual Assured Destruction. The concern now is what would happen if an upgraded Russian Perimeter were to malfunction or launch by accident.
The issue is the extent to which machines are allowed to make decisions on their own. Whether the weapons are nuclear or other "lethal autonomous weapons systems" (LAWS), such as killer robots, the consequences are enormous and potentially catastrophic.
An algorithm may well make sound decisions that reduce casualties; that is why some air-defense systems already use AI, which is faster and more proficient than humans. But an algorithm can also malfunction, or even be deliberately programmed to maximize suffering. Would you be comfortable with Russian President Vladimir Putin or Hamas using killer robots?
In its 2022 Nuclear Posture Review, the US stated that a human will always be involved in any decision to launch nuclear weapons, ensuring human oversight and control. Russia and China have made no similar commitment. The US, for its part, last year released a declaration on the responsible military use of Artificial Intelligence and Autonomy; endorsed so far by 52 countries and still open to others, it advocates various safeguards on lethal autonomous weapons systems (LAWS).
However, the declaration does not call for a ban on LAWS. Here the US could play a more constructive role in international law. Parties to the UN Convention on Certain Conventional Weapons, which seeks to restrict particularly harmful methods of killing such as landmines, have been working towards a complete ban on autonomous killer robots. The US is among the countries opposed to such a ban. It should instead support one and encourage China, and then other countries, to do the same.
Even if the world rejects Lethal Autonomous Weapons Systems (LAWS), AI will continue to pose new risks. It can accelerate military decision-making to the point where humans no longer have time to assess a situation, leading to fatal errors or to blind reliance on algorithms under pressure. That tendency is known as automation bias: people trust automated systems, like a car's GPS, so completely that they can be led into danger, driving into a pond or off a cliff.
The risk has risen steadily with each advance in military technology, ever since humans first attached stone tips to spears. Yet throughout history we have largely managed these new dangers. As long as humans, and not machines, make the ultimate and most critical decisions, we have a chance to adapt and progress alongside Artificial Intelligence rather than being destroyed by it. (IPA Service)
(The author is a Defence, Aerospace & Political Analyst based in Bengaluru.)
