
Thursday 31 March 2022

Thinking hard on AI

India’s pursuit of AI-based weapon systems, though a step in the right direction, must grapple with various legal and ethical conundrums

 

By Ajai Shukla

Business Standard, 1st April 22

 

The intellectual roots of Artificial Intelligence (AI) date back to Greek mythology, but the term entered popular discourse only after science fiction (sci-fi) films, such as “The Terminator”, gave the public a fictional glimpse of combat between AI beings and humans. An example of an autonomous weapon in use today is the Israeli Harpy drone, which is programmed to fly to a designated area, hunt for specific targets and then destroy them with a high-explosive warhead, in what is often described as a “fire and forget” mode of operation.

 

At its simplest, AI is a field of computer science that enables computers and machines to perform intelligent tasks by mimicking human behaviour and actions. Most of us encounter some form of AI daily, in music streaming services, speech recognition and personal assistants such as Siri or Alexa. In 1950, in a paper titled “Computing Machinery and Intelligence”, Alan Turing considered the question “Can machines think?” And in 1956, John McCarthy coined the term Artificial Intelligence.

 

In July 2015, at the International Joint Conference on Artificial Intelligence (IJCAI) in Buenos Aires, researchers warned in an open letter about the dangers of an AI arms race and called for a “ban on offensive autonomous weapons beyond meaningful human control.” The letter has since been signed by over 4,500 AI/robotics researchers and some 26,215 other individuals, including distinguished personalities from the fields of physics, engineering and technology innovation.

 

Despite that concern, global powers such as China, Russia, the US and India are competing to develop AI-based weaponry. At the 2018 meeting of the UN Convention on Certain Conventional Weapons (CCW), the US, Russia, South Korea, Israel and Australia opposed proposals to “take negotiations on fully autonomous weapons powered by AI to a formal level that could lead to a treaty banning them”. More recently, two Indian researchers – Gaurav Sharma of Sentilius Consulting Services and Dr Rupal Rautdesai, formerly of Symbiosis Law School – wrote an unpublished paper, “Artificial Intelligence and the armed forces: Legal and ethical concerns,” from which this column draws.

 

Sharma and Rautdesai consider AI to be of broadly two types. Narrow AI performs specific tasks such as music and shopping recommendations or medical diagnosis. General AI, by contrast, is a system “that exhibits apparently intelligent behaviour at least as advanced as a person across the full range of cognitive tasks”. The broad consensus is that general AI is still a few decades away. There is, however, no formal definition of it, given that the word ‘intelligence’ is itself difficult to define.
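To make the distinction concrete, the sketch below is a purely illustrative example of the kind of narrow AI most of us meet daily: a toy content-based music recommender that suggests songs by comparing feature vectors. The song names, the feature values and the similarity measure are all invented for this illustration and are not drawn from the paper.

```python
# A toy "narrow AI": content-based music recommendation by cosine similarity.
# All song names and feature values below are invented for illustration.
import numpy as np

# Each song is described by hand-picked features: [tempo, energy, acousticness]
catalogue = {
    "Song A": np.array([0.90, 0.80, 0.10]),
    "Song B": np.array([0.20, 0.30, 0.90]),
    "Song C": np.array([0.85, 0.70, 0.20]),
}

def cosine_similarity(a, b):
    """How alike two feature vectors are, on a 0-to-1 scale."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def recommend(liked_song, top_n=2):
    """Return the songs most similar to the one the listener liked."""
    liked = catalogue[liked_song]
    scores = {
        name: cosine_similarity(liked, features)
        for name, features in catalogue.items()
        if name != liked_song
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("Song A"))  # ['Song C', 'Song B']
```

The point of the sketch is that such a system has no competence outside this one task; general AI, by contrast, would have to perform across the full range of cognitive tasks.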

 

As AI enters everyday use, especially in the armed forces, numerous legal concerns are likely to arise, key amongst them being its regulation. However, the government and policymakers would require a clear definition of AI before even attempting to regulate it. In August 2017, the Ministry of Commerce and Industry set up an AI task force (AITF) to “explore possibilities to leverage AI for development across various fields”. The AITF submitted its report in March 2018. In its recommendations to the Government of India, the AITF was largely silent on the various legal issues that would need to be addressed.

 

One of the most important and interesting uses of AI is in military operations. There are potentially tremendous benefits for militaries in harnessing AI for tactical advantage, especially in big data analytics, where large volumes of data must be gathered, analysed and disseminated across multiple fronts during a war. Of equal, or even greater, interest is the use of autonomous weapons. AI-based analytics are not lethal by themselves; they are merely tools that help humans take decisions. Rebecca Crootof of the Yale Law School has defined an autonomous weapon system as “a weapon system that, based on conclusions derived from gathered information and pre-programmed constraints, is capable of independently selecting and engaging targets.” Where human intervention is required before any action is taken, the system is considered “semi-autonomous.”
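The distinction Crootof draws between autonomous and semi-autonomous systems is, at bottom, a question of where the human sits in the decision loop. The schematic sketch below is an assumption for illustration only: the function names, the `human_in_the_loop` flag, the confidence threshold and the simulated sensor tracks are all hypothetical, meant only to show how a single switch separates a system that asks for human authorisation from one that engages targets on its own.

```python
# Illustrative-only sketch of a targeting decision loop.
# Everything here (functions, data, threshold) is hypothetical; the aim is to
# show where human authorisation sits in a "semi-autonomous" system and where
# it disappears in a fully "autonomous" one.
from dataclasses import dataclass

@dataclass
class Track:
    track_id: str
    classification_confidence: float  # 0.0 to 1.0, from an upstream classifier

def select_targets(tracks, threshold=0.95):
    """Pre-programmed constraint: only high-confidence tracks become candidates."""
    return [t for t in tracks if t.classification_confidence >= threshold]

def request_human_authorisation(track):
    """Semi-autonomous path: a human must approve before any engagement."""
    answer = input(f"Engage {track.track_id}? (y/n): ")
    return answer.strip().lower() == "y"

def decision_loop(tracks, human_in_the_loop=True):
    for track in select_targets(tracks):
        if human_in_the_loop:
            if request_human_authorisation(track):
                print(f"Engaging {track.track_id} (human-approved)")
        else:
            # Fully autonomous: the machine itself takes the engagement decision.
            print(f"Engaging {track.track_id} (no human intervention)")

# Simulated sensor picture: only T-01 clears the confidence threshold.
decision_loop([Track("T-01", 0.97), Track("T-02", 0.62)], human_in_the_loop=True)
```

In this toy, flipping `human_in_the_loop` to `False` is all it takes to cross from a semi-autonomous to an autonomous system; that single line is where most of the legal and ethical weight falls.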

 

While there is no internationally agreed definition of “autonomous weapon system”, the International Committee of the Red Cross (ICRC), in its report prepared for the 32nd International Conference of the Red Cross and Red Crescent at Geneva, Switzerland, in December 2015, proposed that “autonomous weapon systems” be considered as:

“An umbrella term that would encompass any type of weapon systems, whether operating in the air, on land or at sea, with autonomy in its ‘critical functions,’ meaning a weapon that can select (i.e. search for or detect, identify, track, select) and attack (i.e. use force against, neutralize, damage or destroy) targets without human intervention.”

 

Technologically advanced militaries already have significant capabilities in AI-based weapon systems, and they are putting in additional effort to research and develop autonomous weapon systems. The US is investing heavily in intelligent weapon systems, including computers that can “explain their decisions to military commanders.” Such systems, currently in the realm of science fiction, could soon become a reality.

 

India is no exception to the growing interest in deploying AI-based weapons systems for the military. In February 2018, the Ministry of Defence (MoD) set up a task force to study the use and feasibility of AI in India’s military. The contents of the task force’s report, which was handed over to the MoD on June 30, 2018, remain classified but the accompanying press release states that the report, inter alia, has “made recommendations for policy and institutional interventions that are required to regulate and encourage robust AI-based technologies for defence sector in the country.” 

 

India’s consideration of AI-based weapon systems is a step in the right direction, given our hostile neighbours and our peculiar problem of Naxalism. However, due regard would need to be given to the various legal and ethical conundrums India would face if the use and deployment of such systems is not well regulated.

 

The AI-based weapon systems that pose the most significant threat – also referred to as “killer robots” – are known as Lethal Autonomous Weapons Systems (LAWS). They are designed to require no human involvement once activated. They pose difficult legal and ethical challenges, since it would effectively be a machine taking the decision to engage or kill targets.

 

At the international level, the 2013 Meeting of State Parties to the Convention on Certain Conventional Weapons (CCW) decided that an informal Meeting of Experts would be held in 2014 to discuss issues relating to LAWS. India’s stand at the various meetings held since 2014 has been to ensure that such weapon systems “meet the standards of international humanitarian law”, that systemic controls on international armed conflict do not widen the technology gap, and that their use is not insulated from the dictates of public conscience.

 

Arguments in favour of AI-based weapons range from access to remote areas to reduced casualties amongst soldiers and non-combatants. On the flip side, the objection to the use of AI-based weapons, especially autonomous systems, is that they would make it easier for countries to go to war and that civilian and collateral casualties could be far greater.

 

Mr Sharma and Dr Rautdesai conclude that both sides of the argument have merit and that it is fruitless to weigh one against the other. Suffice it to say that a plethora of legal and ethical issues arise when a country sets out to deploy AI-based weapon systems, especially those like LAWS.

