Artificial Systems in Warfare
As we enter the second half of 2024, the realm of warfare is undergoing a dramatic transformation. Lethal autonomous weapons systems (LAWS) guided by artificial intelligence have emerged on battlefields worldwide. This development has sparked intense debates among researchers, legal experts, and ethicists about how to control and regulate these high-tech killing machines. As these technologies become more accessible, their proliferation is accelerating. Israel, Russia, South Korea, and Turkey have reportedly deployed weapons with autonomous capabilities, while Australia, Britain, China, and the United States are investing heavily in LAWS development.
The Good Robot January newsletter covered one such transformation within the Israel Defense Forces: the targeting of Hamas fighters with an AI-led geospatial classification system known as "Gospel." Serious risks attend placing bombing decisions in the hands of a machine learning classifier deployed over one of the most densely populated areas in the world (Gaza). As Gospel drove up the frequency of strikes, concern deepened into alarm at reports that surveillance data may form the primary training set behind the system, a foundation whose accuracy in distinguishing civilians from Hamas fighters would be critically flawed.
The Russia-Ukraine conflict has brought the use of AI weapons into sharp focus. Reports suggest that both sides have employed drones with varying degrees of autonomy. Russia has allegedly deployed the KUB-BLA, a loitering munition that uses AI targeting to attack ground targets. Meanwhile, Ukraine has utilized Turkish-made Bayraktar TB2 drones with some autonomous capabilities, as well as US-designed Switchblade drones, which can loiter over targets and identify them using algorithms. Regrettably, the sheer volume of drones being used along the front lines is driving both sides towards greater automation in their weapons systems. With no internationally agreed norms governing such weapons, these first field-deployed autonomous systems may needlessly normalize a form of combat with weak humanitarian guardrails.
In a significant move, the United Nations has placed LAWS on the agenda of the UN General Assembly meeting this September. Secretary-General António Guterres is pushing for a ban, by 2026, on weapons that operate without human oversight, underscoring the urgency of the ethical and legal questions mounting around AI-powered weapons. The international community faces the challenge of balancing potential military advantages against ethical concerns and the need for human control. While these risks remain unaddressed, the next two to three years will be crucial in shaping the future of warfare and the role of AI within it.
Further reading: