GrokAI, NCII, and the Fight to Protect Women and Girls Online 

By: Rachel E. Barkley, JD, MBA, Staff Attorney, National Center for Legal Approaches to Prevent Family Violence

In late December 2025, X users began leveraging GrokAI, X's automated chatbot, to create nonconsensual, sexually suggestive and explicit images of women and children. These images, generated through Grok's image tools and its recently launched "spicy mode," have sparked concerns about the safety of women and minors on the internet.

A 2025 survey by the National Organization for Women and Incogni found that 25% of women have experienced technology-facilitated harassment and abuse, including the dissemination of AI-generated deepfakes. Similarly, 12.6% of children and young people worldwide have experienced the nonconsensual taking or sharing of, or exposure to, sexual images and videos. Advances in the accessibility of generative AI have made it easier than ever to create realistic, explicit imagery with minimal effort. GrokAI makes this technology more readily available and "frictionless," increasing the prevalence and dissemination of deepfake nudes and other forms of nonconsensual intimate imagery.

The psychological and social impacts of the dissemination of nonconsensual intimate imagery include severe emotional distress, reputational damage, financial exploitation, and, in extreme cases, self-harm and suicide. In cases where fully clothed images are transformed into sexually explicit photos without consent, feelings of powerlessness, lost autonomy, and shame are compounded. When users post photos online, they are exercising self-determination: sharing pieces of their lives with the world or celebrating milestones. Using these tools to violate that right to digital privacy replicates the coercive control exercised by abusers and others who cause harm.

In addition to the violation this trend inflicts on women and girls each time one of these images is created, it has a profound impact on the planet, exacerbating the effects of climate change and increasing the risk of natural disasters. Colossus, the X supercomputer facility that powers GrokAI, runs on dozens of methane gas turbines, which spew toxic fumes and worsen air pollution in a historically Black neighborhood in South Memphis, Tennessee, increasing rates of asthma and other respiratory conditions. At the same time, cooling the chips that power Colossus requires up to 1 million gallons of water, exploiting a resource that is already rapidly depleting. These environmental effects are massive and place undue strain on women and children within the community and globally.

As of January 15, X had restricted Grok's image editing function to paid users. However, the Associated Press found that the tool remained available to free users on X via the "edit image" function, as well as through Grok's website and app. Regardless, the efficacy of this response is questionable at best: it does not undo the harm already caused by the chatbot, nor does it change the broader landscape of the online world, where women, children, and other marginalized people are particularly at risk of digital violence and harm.

Countries like Malaysia and Indonesia have blocked X's chatbot over the dissemination of these sexualized AI images, and the UK and the state of California have launched investigations into whether the proliferation of these images violates online safety laws. Though laws like Section 230 protect platforms from liability for user-generated content, GrokAI and X are themselves creating and distributing these images based on prompts from users. With this in mind, X itself may face consequences for these images. Similarly, because individuals who create or distribute nonconsensual sexual images can be held civilly or criminally liable under laws prohibiting sexual exploitation, harassment, and abuse, individual users can be found responsible for deepfakes and other nonconsensual intimate imagery created through Grok. Furthermore, the Take It Down Act, a federal law that requires social media platforms to remove nonconsensual intimate imagery and deepfakes within 48 hours and criminalizes the creation and dissemination of these materials, has the potential to serve as a protective measure against future harm.

Though the legal system and legislative regulation are valid forms of recourse for those who have faced or will face the real-world and online implications of this trend, education is crucial. Teaching communities, including young people and children, about the damaging effects of using these tools to create sexualized images is a key step in preventing these harms in the future. Similarly, dismantling systems of oppression like misogyny, particularly in online spaces, and encouraging practices like healthy masculinity and the equitable treatment of people of all genders can shape cultural attitudes about the worth of women and girls. GrokAI is just the latest technological advancement being used to enact sexual violence. It certainly won't be the last, and education is one of the best methods of protecting our communities against future harm.


#Gender Based Violence #News
