Privacy advocates decry EU’s failure to fully address surveillance concerns in landmark legislation

Privacy groups have expressed disappointment, labeling the Act a 'missed opportunity' for not including a complete ban on live facial recognition.
The European Union’s ambitious Artificial Intelligence (AI) Act, finalized on Friday, has drawn significant attention for its sweeping efforts to regulate AI technology. However, it has also raised concerns for not imposing an outright ban on live facial recognition, a move privacy advocates deem crucial for protecting civil liberties.

After intense negotiations that spanned 37 hours, representatives from the European Commission, Council, and Parliament hashed out the provisions of the AI Act. Key EU member states, including France, Germany, and Italy, played a significant role, with some pushing to dilute aspects of the bill in the final stages.

This omission contrasts starkly with an earlier draft of the legislation, which had proposed exactly such a prohibition.

Amnesty Tech, Amnesty International’s technology-focused branch, voiced strong objections. Mher Hakobyan, Amnesty Tech’s advocacy adviser, criticized the EU institutions for greenlighting “dystopian digital surveillance” and setting a “devastating precedent globally concerning AI regulation.”

The AI Act aims to shield Europeans from AI risks, including job automation, misinformation, and security threats. It mandates rigorous testing and risk assessments for AI applications, especially in critical areas like self-driving vehicles and hiring practices.

New rules will enforce transparency requirements for AI systems such as chatbots and deepfake generators, so that people are not manipulated without their awareness. The Act also bans the indiscriminate scraping of images to build facial recognition databases.

Despite these advancements, the Act allows exemptions for law enforcement, permitting the use of live facial recognition to locate human trafficking victims, prevent terrorist attacks, and apprehend suspects of certain crimes.

The bill’s failure to ban the export of harmful AI technologies, especially for social scoring systems, has led to accusations of a dangerous double standard. Hakobyan highlighted this concern, noting the inconsistency in recognizing the harm these technologies can cause while allowing their export.

The AI Act’s provisions are set to take effect within 12 to 24 months. Andreas Liebl, managing director of the AppliedAI Initiative, acknowledged that the law will likely constrain how tech companies develop and deploy AI within the EU.

Kris Shrishak, a senior fellow at the Irish Council for Civil Liberties, emphasized the importance of strong enforcement. Without it, the Act risks becoming ineffective, failing to live up to its regulatory potential.

The EU’s decision to forgo a full ban on live facial recognition in public spaces sets a concerning precedent in global AI regulation. In concluding remarks, Mher Hakobyan lamented, “Lawmakers also failed to ban the export of harmful AI technologies… Allowing European companies to profit off from technologies that the law recognizes impermissibly harm human rights in their home states establishes a dangerous double standard.”
