On March 18, 2026, two key European Parliament committees — the Internal Market and Consumer Protection (IMCO) and the Civil Liberties, Justice and Home Affairs (LIBE) — voted overwhelmingly (101 in favor, 9 against, 8 abstentions) to approve an amendment to the EU's Artificial Intelligence Act that would explicitly ban AI systems capable of generating non-consensual sexually explicit images. The prohibition, included in the Digital Omnibus on artificial intelligence, represents the first explicit parliamentary ban on so-called "nudification" applications anywhere in the world.
Why It Matters
The EU's nudification ban sets a global regulatory precedent. While dozens of countries and U.S. states have enacted or proposed laws targeting deepfake pornography after the fact — through civil liability, criminal penalties, or takedown mandates — the EU is the first major jurisdiction to prohibit the AI tools themselves at the point of creation. This upstream approach could fundamentally reshape how AI companies build and deploy generative image models, requiring consent verification and output filtering to be baked into the technology rather than addressed through post-hoc moderation. For the sex tech industry, the implications ripple outward: legitimate AI applications in sexual wellness (personalized content, AI companions, therapeutic tools) will need to navigate an increasingly complex regulatory environment that draws sharp lines between consensual and non-consensual AI-generated intimate content. The Grok controversy proved that even major tech platforms can become vectors for mass-produced non-consensual imagery, making the EU's tool-level ban a test case for whether regulation can outpace the technology.
The amendment outlaws any AI system that generates realistic images "so as to depict sexually explicit activities or the intimate parts of an identifiable natural person" without their consent, including the creation of child sexual abuse material. The vote was propelled by global outrage earlier in 2026 when users of Elon Musk's Grok AI tool on X generated and shared thousands of sexualized images of adults and children, prompting the European Commission to describe the output as "appalling" and "clearly illegal" with "no place in Europe."
"Today Parliament has drawn a red line," said co-rapporteur Michael McNamara, an independent MEP aligned with Renew Europe. "AI must never be used to humiliate, exploit or endanger people. These tools inflict real harm on real people. This was not in the Commission's original proposal, but Renew pushed for it and we got it over the line." IMCO shadow rapporteur Svenja Hahn was also instrumental in advancing the prohibition. The committees simultaneously voted to extend the compliance deadline for high-risk AI systems beyond the original August 2, 2026 target while shortening the watermarking compliance deadline to November 2, 2026.
The measure now advances to a full Parliament plenary vote expected March 26, 2026, followed by trilogue negotiations among the Parliament, Council, and European Commission anticipated through March and April. Final plenary adoption is targeted for June 2026, with the amended AI Act provisions slated to enter into force on August 1, 2026.
Sources
- EU Advances Ban on AI Tools Creating Non-Consensual Sexual Images — Bloomberg
- MEPs back proposed ban on 'nudification' apps — RTÉ News
- Renew delivers nudification ban and simplified AI Rules — Renew Europe
- EU lawmakers deal to ban AI non-consensual intimate deepfakes — The Next Web
Update — 2026-03-24
Initial entry — story first created.