Lavender’s lethal legacy: AI-driven warfare and the human cost in Gaza

This shift towards algorithmic target selection has ostensibly streamlined operations, but at a significant cost.

Image Credit: Molly Mendoza for Reveal

The unveiling of Lavender, an artificial intelligence system employed by the Israel Defense Forces (IDF), has ignited a firestorm of controversy and concern. This AI tool, designed to identify and mark targets in the complex and densely populated landscapes of Gaza, represents a new frontier in military technology. However, its deployment raises profound ethical questions, particularly regarding the accuracy of its target selection and the consequent civilian casualties.

The AI at war: Lavender’s role in targeting operations

Lavender has been a pivotal tool in the IDF’s arsenal during recent operations in Gaza. Unlike its predecessors, which focused on infrastructure, Lavender marks individuals, integrating data points from various sources to generate a “kill list.” This shift towards algorithmic target selection has ostensibly streamlined operations, but at a significant cost.

Yuval Abraham of +972 Magazine starkly outlines Lavender’s operational ethos: “Lavender marks people—and puts them on a kill list.” This encapsulates the chilling transition from human-led decision-making to reliance on algorithms.

An intelligence officer, anonymized as B., shared operational insights with +972 Magazine: “At first, we did checks to ensure that the machine didn’t get confused. But at some point, we relied on the automatic system, and we only checked that [the target] was a man—that was enough.”

The human toll

The deployment of Lavender has had dire consequences. In the early weeks of the conflict, the system marked some 37,000 Palestinians, along with their residences, as potential targets. Reliance on an AI system with an inherent margin of error has led to devastating mistakes, including strikes on civilians and other non-combatants.

The ethical quandary

The IDF’s adoption of Lavender’s recommendations without rigorous verification has sparked a debate on the ethics of AI in warfare. The system’s error rate, reported to be around 10 percent, translates into a significant risk of misidentifying targets, raising questions about compliance with international humanitarian law, which emphasizes the protection of civilians in conflict zones.
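
Taken at face value, those two reported figures, roughly 37,000 people marked and an error rate of about 10 percent, suggest that the number of people wrongly flagged would be on the order of several thousand:

$$0.10 \times 37{,}000 \approx 3{,}700 \text{ people potentially misidentified}$$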

The intelligence officers who spoke to +972 Magazine and Local Call describe a troubling pattern of dependence on Lavender’s output, often without adequate human oversight. The rush to meet operational demands led officers to accept the system’s recommendations, sidelining the critical human judgment needed to distinguish combatants from civilians.

As the dust settles, the implications of Lavender’s use in Gaza continue to reverberate. The system’s role in the high civilian death toll has led to introspection and criticism of the IDF’s tactics. The reliance on an imperfect AI tool, without sufficient safeguards to prevent civilian casualties, marks a contentious chapter in the annals of military technology.

“It’s industrialized extermination,” said entrepreneur Arnaud Bertrand, “the likes of which we haven’t seen since… you know when.”

“A ratio of 20 civilians killed for one target works out to about 95% civilian deaths.”
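
That percentage follows directly from the stated ratio: if 20 civilians are killed for every one intended target, civilians account for 20 of every 21 deaths.

$$\frac{20}{20 + 1} \approx 0.952 \approx 95\%$$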
