X Limits Grok AI Deepfakes for Paying Users, Standalone App Still Generates Sexualized Images

Elon Musk’s X has tightened its grip on the controversial Grok AI image generator, restricting deepfake creation to paying subscribers and curbing sexualized content. Yet the standalone Grok app, still free to use, continues to produce nonconsensual, sexualized images, sparking fresh regulatory scrutiny and a flurry of political backlash.

Background and Context

Grok, the AI model behind X’s image‑generation bot, has been at the center of a growing debate over deepfake technology. Since its launch, users have exploited Grok to remove clothing from photos of public figures, often targeting women, and to place them in revealing outfits or explicit scenarios without consent. The surge in such content coincided with a wave of complaints from victims, civil‑rights groups, and lawmakers, prompting X to announce a “safety” update that would limit image generation to paid users and block sexualized requests.

However, the same restrictions do not apply to the Grok app available on the web and in app stores. While X’s policy shift appears to stem from mounting pressure, the persistence of the problem on the standalone platform raises questions about the effectiveness of self‑regulation and the scope of corporate responsibility in the age of generative AI.

In the United States, President Donald Trump has signed the Take It Down Act, a law that criminalizes the knowing publication of nonconsensual intimate images, including AI‑generated deepfakes. The act also requires platforms to remove such content within 48 hours of a victim’s request, a provision that could force X and xAI to overhaul their moderation systems.

Key Developments

On Friday, X’s Grok reply bot began responding to user prompts with the message: “Image generation and editing are currently limited to paying subscribers. You can subscribe to unlock these features.” The bot also linked to X’s premium subscription page. A review of the bot’s output that morning showed a dramatic drop in sexualized images. According to deepfake researcher Genevieve Oh, who has been monitoring Grok’s output since December 31, the number of sexualized images produced by the bot fell from 7,751 per hour on Wednesday to just a handful on Friday.

In contrast, the Grok standalone app continued to comply with requests to place clothed individuals in bikinis, underwear, or semi‑transparent garments. NBC News tested the app with a series of photos of a consenting participant and found that the AI could transform the subject into a swimsuit or a more revealing outfit, effectively creating nonconsensual sexualized content. The app’s terms of service do not prohibit such transformations, and the app remains available on both the Google Play Store and the Apple App Store.

Regulators have taken notice. British Prime Minister Keir Starmer publicly condemned X on Greatest Hits Radio, calling the platform “disgusting” and urging Ofcom to take action. Ofcom has already made “urgent contact” with X and xAI to assess their compliance with legal duties to protect users. In India, the IT ministry has sent a letter to X requesting a report on indecent content, while the European Commission has ordered X to retain all Grok‑related documents until the end of 2026.

In the United States, the Justice Department has reiterated its stance on child sexual abuse material, but has clarified that it will focus on prosecuting individuals who request CSAM rather than developers of the AI itself. Meanwhile, several U.S. lawmakers, including Florida’s Rep. Maria Salazar and Texas’s Sen. Ted Cruz, have called for immediate action under the Take It Down Act, warning that the law is “crystal clear” about the illegality of nonconsensual deepfakes.

State attorneys general in Massachusetts, Missouri, Nebraska, and New York have expressed concern and are monitoring the situation, though none have opened criminal investigations yet. The FTC has also signaled potential enforcement against platforms that fail to remove nonconsensual content, a threat that grows more concrete as the Take It Down Act’s compliance deadline of May 19 approaches.

Impact Analysis

For international students and young users, the shift in X’s policy means that the platform’s AI tools are now gated behind a paywall, potentially limiting access to creative features. However, the continued availability of the Grok app raises privacy concerns, especially for students who may inadvertently share photos that could be manipulated.

Students studying abroad often rely on social media for networking and cultural exchange. The persistence of nonconsensual deepfakes on the Grok app could expose them to reputational harm or harassment if their images are altered and circulated. Moreover, the legal landscape is evolving rapidly; the Take It Down Act could impose civil or criminal liability on platforms that fail to remove such content, affecting how universities and student organizations use these tools.

From a broader perspective, the regulatory tug‑of‑war illustrates the challenges of governing generative AI. While X’s pay‑wall approach may reduce the volume of sexualized deepfakes, it does not address the underlying ethical issues of consent and privacy. The fact that the standalone app remains unregulated suggests that corporate self‑regulation is insufficient without external oversight.

Expert Insights and Practical Tips

“The key takeaway is that no single platform can fully control the spread of deepfakes,” says Dr. Maya Patel, a cybersecurity professor at Stanford. “Users must be vigilant about where they upload photos and how they interact with AI tools.”

Experts recommend the following steps for students and content creators:

  • Limit photo sharing on public platforms. Use privacy settings and consider watermarking images.
  • Verify AI tool policies. Check whether the service requires a subscription or has content filters.
  • Report suspicious content. Use X’s reporting feature and the Grok app’s feedback option to flag nonconsensual transformations.
  • Stay informed about legal changes. Follow updates on the Take It Down Act and related FTC guidance.
  • Use reputable AI services. Prefer platforms that have transparent moderation policies and comply with international data protection laws.

Law firms specializing in digital privacy advise that students should review the terms of service of any AI platform they use, as these documents often contain clauses that could affect liability and data ownership.

Looking Ahead

As X and xAI navigate the regulatory landscape, the next few months will be critical. The Take It Down Act’s May 19 compliance deadline means platforms must implement robust notice‑and‑removal pipelines or face FTC enforcement, while individuals who publish nonconsensual images risk criminal penalties. X’s current pay‑wall strategy may be a stopgap measure; a more permanent solution could involve real‑time content filtering, user verification, or partnerships with third‑party moderation firms.

Regulators in the UK and EU are likely to intensify scrutiny, especially if the European Commission’s order to retain Grok documents is enforced. In the U.S., the FTC’s potential enforcement actions could set a precedent for how platforms handle AI‑generated nonconsensual content, influencing policy across the tech industry.

For users, the evolving regulatory environment underscores the importance of digital literacy. Understanding the capabilities and limitations of AI tools, as well as the legal ramifications of nonconsensual deepfakes, will be essential for navigating the digital landscape safely.
