
AI Weapons

The potential consequences of AI weapons

Robot holding a gun. Photo taken from iStock

The rapid development and advancement of artificial intelligence (AI) has opened the door to a new class of military technology: Lethal Autonomous Weapons Systems (LAWS). These weapons, capable of operating without human intervention, are remarkable in both concept and capability, but they also raise serious ethical and legal concerns. 


These autonomous weapon systems are military devices capable of selecting and engaging targets without human intervention. They range from drones to fully robotic systems equipped with AI that can perform complex tasks, including identifying and attacking targets based on pre-programmed criteria. Major military powers, such as the United States and China, are actively incorporating these technologies into their arsenals: for example, the U.S. Department of Defense has launched the “Replicator” initiative, which aims to develop and deploy all-domain attritable autonomous systems, including drones and uncrewed surface vehicles, to reduce the risk to human life in combat. These systems can execute predefined missions autonomously, reacting to changing environments and threats with minimal human input. 


From a technological perspective, autonomous weapons could reduce military casualties by taking on high-risk combat operations and could carry out missions with greater precision and efficiency. However, the speed and autonomy of these AI systems also raise the risk of triggering or escalating unnecessary conflicts, since AI-driven decisions may lead to unintended consequences. These systems can also malfunction, be hacked, or exhibit biases against certain groups of people. Additionally, their reliability in accurately distinguishing between civilians and combatants remains an open question; before any of these weapons are deployed, thorough testing and validation must be carried out to prevent unintended harm. 


Alongside these potential benefits, there has been intense ethical debate, particularly over the moral implications of allowing machines to make life-and-death decisions. Stuart Russell, a computer scientist and AI researcher at the University of California, Berkeley, explains, “[t]he technical capability for a system to find a human being and kill them is much easier than to develop a self-driving car. It’s a graduate-student project.” This capability raises serious questions about whether such systems can adhere to international humanitarian law, especially the principles of distinction and proportionality, which are designed to protect civilian lives during conflicts.


UN Secretary-General António Guterres and others have called for tighter regulations, and even outright bans, on these weapons. However, because these systems are so complex, regulators have struggled to reach a consensus at venues like the UN Convention on Certain Conventional Weapons (CCW). Going forward, the world must ensure that the deployment of autonomous systems does not outpace our ability to control them and to understand their consequences. 

©2024 International Review in STEM (IRIS)
