On March 26, 2026, the Amsterdam District Court ordered Elon Musk's xAI and its Grok AI chatbot to immediately cease generating and distributing sexual imagery that depicts people "partially or wholly stripped naked" without their explicit permission in the Netherlands. The ruling imposes fines of 100,000 euros ($115,350) per day for non-compliance, making it one of the first court orders worldwide specifically targeting an AI company's liability for non-consensual intimate image generation tools.

Why It Matters

This ruling marks a critical inflection point where courts are moving from general policy statements to specific, enforceable orders against named AI companies. The 100,000 euros per day penalty creates real financial teeth, and the live demonstration of Grok's continued capabilities demolished the "we fixed it" defense that AI companies have relied on globally. For the sex tech and adult content industry, the ruling establishes that AI-generated non-consensual content is a legal liability that courts will actively police — and that "subscriber-only" restrictions won't pass judicial scrutiny as adequate safeguards.

The case was brought by Offlimits, a Dutch center monitoring online violence, in cooperation with the Victims Support Fund. In a dramatic courtroom moment, Offlimits showed that, shortly before the hearing, Grok had still produced a video of a nude person, directly contradicting xAI's argument that it had implemented adequate preventive measures in January by restricting image generation to paid subscribers. The court found those claims "questionable" in light of the demonstration.

The ruling bars both Grok and the X platform from "generating and/or distributing sexual imagery" of non-consenting individuals in the Netherlands. xAI had argued that its January 2026 restrictions were sufficient, but the court sided with the plaintiffs, who showed the safeguards were easily circumvented. The decision comes just two days after Baltimore became the first U.S. city to sue xAI over Grok-generated deepfakes, and amid a broader global regulatory crackdown following the Grok deepfake crisis of late 2025, which produced an estimated 3 million sexualized images in 11 days.

The Dutch ruling follows the EU Parliament's March 18 vote to ban AI nudification apps under an AI Act amendment, but goes further by targeting a specific company with immediate enforcement and daily financial penalties. It sets a precedent that could be replicated across EU member states, particularly as the AI Act's provisions take effect.

Update — 2026-03-27

Initial entry — story first created.