Il Ponte – a student periodical based at Bratislava International School of Liberal Arts (BISLA)

AI in Warfare

SUMMARY The integration of artificial intelligence (AI), particularly the use of autonomous weapon systems (AWS), into military operations in the Israeli-Palestinian conflict has raised serious ethical and humanitarian concerns. According to a major investigation published in April 2024 by the Israeli outlet +972 Magazine, AI systems are being used to identify and target individuals with minimal human oversight (that is, with a person who merely affirms or contradicts the AI's recommendation), leading to a high number of civilian casualties and accusations of genocide.

OVERVIEW A major investigation released in April 2024 by the Israeli publication +972 Magazine revealed that Israel has been using AI to select targets for elimination, with minimal human involvement in the decision-making process, particularly during the initial phases of the conflict (Kwet, 2024). Reports based on the investigation state that AI systems are being used to identify and target individuals with minimal human oversight, leading to a high number of civilian casualties and accusations of genocide (Samuel, 2024).

Israel currently employs several AI systems, such as "The Gospel," "Lavender," and "Where's Daddy?", to guide its military actions in Gaza. This practice prioritizes speed and efficiency in targeting, often at the expense of accuracy and ethical considerations (McKernan & Davies, 2024). "Gospel marks buildings that it says Hamas militants are using. Lavender, which is trained on data about known militants, then trawls through surveillance data about almost everyone in Gaza — from photos to phone contacts — to rate each person's likelihood of being a militant. It puts those who get a higher rating on a kill list. And Where's Daddy? tracks these targets and tells the army when they're in their family homes because it's easier to bomb them there than in a protected military building" (Samuel, 2024). One source stated that human personnel often served only as a "rubber stamp" for the machine's decisions, adding that, normally, they would personally devote only about "20 seconds" to each target before authorizing a bombing, just to make sure the Lavender-marked target is male (Abraham, 2024). Intelligence officers testified that Lavender played a central role in the war, processing masses of data to rapidly identify potential "junior" operatives to target; four of the sources said that, at one stage early in the war, Lavender listed as many as 37,000 Palestinian men whom the AI system had linked to Hamas or Palestinian Islamic Jihad (PIJ) (McKernan & Davies, 2024).

There are numerous reasons to initiate change. Samuel (2024) observes that "while AI advocates often say that technology is neutral ('it's just a tool') or even argue that AI will make warfare more humane ('it'll help us be more precise'), Israel's reported use of military AI arguably shows just the opposite." The practice continues because of the perceived advantages of AI, such as faster decision-making and the capacity to process vast amounts of data (McKernan & Davies, 2024). Change is nevertheless urgent for three reasons: (a) high civilian casualties, as AI-generated targeting decisions have led to significant collateral damage, including thousands of civilian deaths; (b) international disapproval, as the high death toll and reports of indiscriminate targeting have provoked a wave of international criticism and accusations of genocide; and (c) ethical concerns, as the use of AI in warfare raises pointed questions about the delegation of life-and-death decisions to machines. Two sources said that during the early weeks of the war they were permitted to kill 15 or 20 civilians during airstrikes on low-ranking militants.
Attacks on such targets were typically carried out using unguided munitions known as "dumb bombs," the sources said, destroying entire homes and killing all their occupants (Abraham, 2024). According to conflict experts, if Israel has been using dumb bombs to flatten the homes of thousands of Palestinians who were linked, with the assistance of AI, to militant groups in Gaza, that could help explain the shockingly high death toll of the war (McKernan & Davies, 2024).

Several policy options are available. Increasing the level of human involvement in AI-generated targeting decisions could ensure more ethical and accurate outcomes. Another option would be to implement strict, perhaps international, regulations on the use of AI in military operations and to increase transparency to allow for better accountability. A further option is a temporary suspension of the use of AI in targeting decisions until consensus is reached and ethical guidelines and safeguards are developed. International collaboration is yet another option: working with international bodies to establish global norms and standards for the use of AI in military contexts.

The recommended course of action is to enhance human oversight of AI-generated targeting decisions. With increased human oversight, moral and ethical considerations are factored into targeting decisions, and, while the efficiency benefits of AI are maintained, human reviewers can correct potential errors, leading to more precise and justified military actions. This course would also align with international humanitarian law and reduce the risk of legal action and sanctions against Israel. Addressing ethical concerns and reducing civilian casualties would improve Israel's international standing and reduce the widespread criticism of its military operations. By implementing this policy, Israel could leverage the advantages of AI technology while adhering to ethical standards and minimizing harm to civilians, thus fostering a more humane approach to warfare.
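The scale implied by these testimonies can be illustrated with simple arithmetic. The following sketch (Python) is purely illustrative: the 37,000 figure and the 15 to 20 civilian allowance come from the reporting cited above, the roughly 10 percent error rate is the figure the +972 investigation attributes to internal checks of Lavender, and the share of listed targets actually struck is a hypothetical assumption introduced here only to show how quickly the numbers compound.

```python
# Illustrative back-of-envelope arithmetic only; not a model of any real system.
# Figures from the cited reporting: 37,000 listed men (McKernan & Davies, 2024),
# a permitted allowance of 15-20 civilians per strike on a junior militant, and
# an error rate of roughly 10% (Abraham, 2024).
marked_targets = 37_000
assumed_error_rate = 0.10        # share of listed people reportedly misidentified
collateral_allowance = (15, 20)  # civilians permitted per strike on a junior target

misidentified = marked_targets * assumed_error_rate
print(f"People misidentified at a 10% error rate: {misidentified:,.0f}")

# Hypothetical assumption (not from the reporting): suppose only 10% of the
# listed targets were ever struck, each at the permitted collateral ceiling.
assumed_strike_share = 0.10
strikes = int(marked_targets * assumed_strike_share)
low, high = (strikes * c for c in collateral_allowance)
print(f"Civilian deaths permitted across {strikes:,} strikes: {low:,} to {high:,}")
```

Even under these deliberately modest assumptions, the arithmetic yields thousands of misidentified people and a permitted civilian toll in the tens of thousands, which helps explain why experts connect AI-assisted mass targeting to the war's death toll.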

ANALYTICAL FRAMEWORK

The Universal Declaration of Human Rights, adopted by the General Assembly of the United Nations (UN) in Paris on 10 December 1948 (General Assembly resolution 217 A), and all subsequent international covenants and conventions (see Annex I) enshrine the universal application to all human beings of the rights and principles of equality, security, liberty, integrity, and dignity, complementing the UN's Sustainable Development Goals. In addition, the Geneva Convention relative to the Protection of Civilian Persons in Time of War (see Annex II), concluded on 12 August 1949 by the Plenipotentiaries of the Governments represented at the Diplomatic Conference held in Geneva from 21 April to 12 August 1949, embodies the idea that the purpose of war is political domination, not war itself.

Currently, the utilization of AI in warfare includes (What you need, 2024):

  • Weapon systems, particularly autonomous weapon systems (AWS).

  • Cyber and information operations.

  • Military decision support systems.

    That is to say, the most urgent issue concerning AI in warfare is its autonomy, as international law applies only to human beings. This affects AWS in particular (What you need, 2024):

  • The use of AWS raises challenges for compliance with international law, including humanitarian law, above all in terms of liability. Under Rule 151 (Individual Responsibility) of customary international law, applicable in both international and non-international armed conflicts, individuals are criminally responsible for the war crimes they commit; in other words, AI itself is not liable (Henckaerts & Doswald-Beck, 2005).

  • Because the quality of AWS-driven action is unpredictable in terms of the types of targets, duration, geographical scope, scale of use, situation of use, and degree of autonomy, AWS bring risks of harm to those affected by armed conflict, both civilians and combatants, as well as dangers of conflict escalation.

  • With politics being a domain of human beings and the purpose of war being political domination, the use of AWS raises fundamental ethical concerns for humanity: substituting human decisions about life and death with sensors, software, and machines contradicts that very idea.

    In conclusion, the implication of international law is that AI in warfare must be regulated in terms of the types of targets, duration, geographical scope, scale of use, and situation of use, and that a human-machine interaction requirement must be imposed, in order to maintain the proportionality of war, ensure liability, and protect civilians and civilian objects.
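To make these regulatory parameters more tangible, the sketch below (Python) expresses them as a set of machine-checkable deployment constraints. This is a hypothetical illustration only: no existing treaty or standard defines such a structure, and every field name and value is an assumption introduced here.

```python
from dataclasses import dataclass

# Hypothetical sketch: the regulatory parameters named above, expressed as
# constraints that would have to be fixed before any AWS deployment.
@dataclass(frozen=True)
class AWSDeploymentConstraints:
    permitted_target_types: frozenset[str]  # types of targets (never persons)
    max_duration_hours: float               # duration of autonomous operation
    geographic_bounds: tuple[float, float, float, float]  # geographical scope (lat/lon box)
    max_engagements: int                    # scale of use per deployment
    permitted_situations: frozenset[str]    # situation of use
    human_authorization_required: bool      # human-machine interaction requirement

# Example instantiation consistent with this paper's recommendations:
# material targets only, and every engagement gated on a human decision.
example = AWSDeploymentConstraints(
    permitted_target_types=frozenset({"materiel", "structures"}),
    max_duration_hours=6.0,
    geographic_bounds=(31.2, 34.2, 31.6, 34.6),
    max_engagements=10,
    permitted_situations=frozenset({"international_armed_conflict"}),
    human_authorization_required=True,
)
```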

STAKEHOLDERS IN CONFLICT TRANSFORMATION

This policy paper is addressed to the General Assembly of the United Nations. Nevertheless, there are other stakeholders who vary in their influence on, and interest in, the use of AWS in warfare, each having legal, ethical, economic, political, technological, societal, ecological, and environmental interests:

  • High influence, high stakes: the UN, local governments, and the armed forces of the involved states.

  • Low influence, high stakes: Palestinians and the people currently present in the affected area.

  • High influence, low stakes: the International Criminal Court, the International Court of Justice, the International Committee of the Red Cross, Human Rights Watch, Amnesty International, the European Court of Human Rights, other states, international organisations, corporations, and businesses.

  • Low influence, low stakes: Israeli citizens and the general public worldwide.

CURRENT POLICIES

Regulation of autonomous weapon systems is currently sparse, lacking a universal and legally binding framework. Nevertheless, several key proposals have been made at the international level by a variety of actors and institutions, including the European Union, the Council of Europe, the Department of State of the United States of America, and the United Nations. Specifically, the European Union has recently adopted the Artificial Intelligence Act (2024). While the Act does not concern itself with the use of artificial intelligence in the military, it does limit its employment in other spheres of human activity (Future of Life Institute, n.d.). Most importantly, however, the Act identifies high-risk AI systems, areas where the use of artificial intelligence has the potential of causing serious damage or human harm, such as healthcare, infrastructure, education, or border control (Future of Life Institute, n.d.). In a similar spirit, the military could be classified as one of these high-risk areas in the future.

On top of the Artificial Intelligence Act, the European Parliament adopted Guidelines for military and non-military use of Artificial Intelligence in 2021. Although this document is not legally binding, it calls for a European-Union-wide legal framework on the use of artificial intelligence, including its deployment in the military (European Parliament, 2021). In these guidelines, the European Parliament stressed the need for human control when artificial intelligence systems are used for combat purposes, so that humans can "assume responsibility and accountability for their use" (European Parliament, 2021, para. 6). Additionally, a draft resolution aimed at regulating lethal autonomous weapon systems is in the works at the Council of Europe. Its Committee on Legal Affairs and Human Rights also stresses the need for human control over AI systems, which should be exercised at the stage of the systems' development, at the point of their activation, and during their operation (Cottier, 2022). Moreover, the draft resolution also calls for legal accountability for the use of such systems.

POLICY RECOMMENDATIONS Based on ethical principles, legal concerns, the urgency of the present situation, and the current policies regarding the use of artificial intelligence in the military, it is highly recommended to continue multilateral talks between states on the regulation of autonomous weapon systems on a platform provided by the United Nations. These talks should culminate in a proposal for the regulation of autonomous weapon systems, to be brought before the General Assembly of the United Nations for adoption. As part of this convention, the following measures should be considered:

(1) A universal, legally binding definition of autonomous weapon systems needs to be adopted.

(2) At the very least, a moratorium on the use of autonomous weapon systems in combat needs to be introduced.

(3) Research on autonomous weapon systems should continue, as the deployment of these systems in combat can have several advantages, such as the eventual and complete removal of human personnel from the war front. However, certain limits must first be placed on the way these systems are employed. These limits should include:

(a) The prohibition of the use of autonomous weapon systems against human beings.

(b) The establishment of a minimal level of human input whenever autonomous weapon systems are deployed in combat or otherwise used by the military, as illustrated in the sketch below.
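What a minimal, enforceable level of human input could look like in practice is sketched below (Python). This is a hypothetical illustration of the principle behind recommendation (3)(b), not a description of any deployed system: the function names, the five-minute review floor, and the evidence format are all assumptions introduced here, chosen to contrast with the "20 seconds" of review reported in the +972 investigation.

```python
import time
from dataclasses import dataclass

# Assumed review floor: far above the "20 seconds" reported by +972.
MIN_REVIEW_SECONDS = 300

@dataclass
class Recommendation:
    """A machine-generated targeting recommendation awaiting human review."""
    target_id: str
    machine_confidence: float
    supporting_evidence: list[str]

def authorize(rec: Recommendation, reviewer_id: str) -> bool:
    """Require an explicit human decision, and refuse authorization outright
    if the review was too fast to count as meaningful deliberation."""
    started = time.monotonic()
    print(f"[{reviewer_id}] reviewing {rec.target_id} "
          f"(machine confidence {rec.machine_confidence:.2f})")
    for item in rec.supporting_evidence:
        print("  evidence:", item)
    decision = input("confirm recommendation? (yes/no): ").strip().lower() == "yes"
    elapsed = time.monotonic() - started
    if elapsed < MIN_REVIEW_SECONDS:
        # The system, not the operator, enforces the deliberation floor.
        print(f"review took {elapsed:.0f}s, below the {MIN_REVIEW_SECONDS}s floor; denied")
        return False
    return decision
```

The design point is that the deliberation floor is enforced by the system rather than left to the operator's discretion, so the pressure for speed and scale described above cannot quietly reduce human oversight to a rubber stamp.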

LIST OF REFERENCES

Abraham, Y. (2024). 'Lavender': The AI machine directing Israel's bombing spree in Gaza. +972 Magazine. Retrieved from https://www.972mag.com/lavender-ai-israeli-army-gaza/

Bureau of Arms Control, Deterrence, and Stability. (2023, November 9). Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy. Retrieved from U.S. Department of State: https://www.state.gov/political-declaration-on-responsible-military-use-of-artificial-intelligence-and-autonomy-2/

Cottier, D. (2022). Emergence of lethal autonomous weapons systems (LAWS) and their necessary apprehension through European human rights law. Council of Europe. Retrieved from https://assembly.coe.int/LifeRay/JUR/Pdf/TextesProvisoires/2022/20221116-LawsApprehension-EN.pdf

European Parliament. (2021, January 20). Guidelines for military and non-military use of Artificial Intelligence. Retrieved from News European Parliament: https://www.europarl.europa.eu/news/en/press-room/20210114IPR95627/guidelines-for-military-and-non-military-use-of-artificial-intelligence

Future of Life Institute. (n.d.). The AI Act Explorer. Retrieved June 2, 2024, from EU Artificial Intelligence Act: https://artificialintelligenceact.eu/ai-act-explorer/

Geneva Convention Relative to the Protection of Civilian Persons in Time of War (Fourth Geneva Convention). (1949). 75 U.N.T.S. 973. Retrieved from https://www.refworld.org/legal/agreements/icrc/1949/en/32227

Henckaerts, J.-M., & Doswald-Beck, L. (2005). Customary International Humanitarian Law, Volume I: Rules. International Committee of the Red Cross (ICRC). Retrieved from https://www.refworld.org/reference/research/icrc/2005/en/98261

Kwet, M. (2024). How US Big Tech supports Israel's AI-powered genocide and apartheid. Al Jazeera. Retrieved from https://www.aljazeera.com/opinions/2024/5/12/how-us-big-tech-supports-israels-ai-powered-genocide-and-apartheid

McKernan, B., & Davies, H. (2024). 'The machine did it coldly': Israel used AI to identify 37,000 Hamas targets. The Guardian. Retrieved from https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai-database-hamas-airstrikes

Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts (Protocol I). (1977). Retrieved from https://www.ohchr.org/en/instruments-mechanisms/instruments/protocol-additional-geneva-conventions-12-august-1949-and

Samuel, S. (2024). Some say AI will make war more humane. Israel's war in Gaza shows the opposite. Vox. Retrieved from https://www.vox.com/future-perfect/24151437/ai-israel-gaza-war-hamas-artificial-intelligence

UN General Assembly. (1948). Universal Declaration of Human Rights (217 [III] A). Paris. Retrieved from https://www.un.org/en/about-us/universal-declaration-of-human-rights

United Nations. (2024). Convention on Certain Conventional Weapons – Group of Governmental Experts on Lethal Autonomous Weapons Systems. Retrieved from United Nations Office for Disarmament Affairs: https://meetings.unoda.org/meeting/71623

United Nations. (2024). Global issues: The General Assembly as a forum for adopting multilateral treaties. Retrieved from https://www.un.org/en/global-issues/international-law-and-justice

What you need to know about autonomous weapons. (2024). International Committee of the Red Cross. Retrieved from https://www.icrc.org/en/document/what-you-need-know-about-autonomous-weapons

The graphic design of this policy paper is inspired by the design used by the UN in its policy papers.

Turning a Blind Eye to Genocide: The Response and Approach of the European Union to the Situation of the Rohingya Minority in Myanmar

Of Lice and Men: Dignity and Dehumanization in the Language of Slovak Politics