40 real headlines that show what happens when technology meets misogyny at scale
“Women use AI to vibecode and 3D print a tiny guillotine.”
“Women use AI to generate millions of non-consensual sexual images of men.”
“50,000-member Telegram group uncovered where women share intimate photos of partners without consent, laughing at their size.”
We never see headlines like this.
But imagine if we did. Imagine the outrage if women started using technology to suppress, intimidate and threaten men. The hand-wringing about what this says about women, about feminism, about society. It would dominate the news cycle for a week.
Now consider the headlines we actually get.
The headlines we actually get
These are real. Published by major outlets. Barely a ripple.
“Nude deepfakes of Taylor Swift went viral on X, evading moderation and sparking outrage” NBC News, January 2024 — 27 million views in 19 hours before the account was suspended.
“AI ‘Nudify’ Apps That Undress Women in Photos Soaring in Use” Bloomberg, December 2023 — 24 million people visited undressing websites in a single month. Links to these apps increased 2,400% on social media.
“Schools face a new threat: ‘nudify’ sites that use AI to create realistic, revealing images of classmates” CBS News / 60 Minutes, December 2024 — Nearly 30 incidents identified in U.S. schools over 20 months. The first victim profiled was 14 years old.
“Boys at her school shared AI-generated, nude images of her. After a fight, she was the one expelled” Associated Press, December 2025 — A 13-year-old girl. AI-generated nudes circulated by classmates. She fought back and got expelled. The boys were charged weeks later.
“Grok AI still being used to digitally undress women and children despite suspension pledge” The Guardian, January 2026 — Over 50,000 mentions analyzed. A 12-year-old actress among those targeted.
“EU investigates Musk’s AI chatbot Grok over sexual deepfakes” PBS NewsHour, January 2026 — 4.4 million images generated in nine days. 1.8 million were sexualized depictions of women. An estimated 23,000 sexualized images of children in eleven days.
“South Korea: The deepfake crisis engulfing hundreds of schools” BBC News, September 2024 — Over 500 schools affected. A Telegram channel with 220,000 members distributed the material. 74% of suspects were aged 10–19.
“‘Manfluencers’ are filming themselves trying to pick up women. Smart glasses are their perfect tool” CNN, February 2026 — Women secretly filmed via Meta Ray-Ban glasses. One video hit 23 million views. The LED recording indicator can be covered with a sticker that costs less than the dinner.
“AI and Meta smart glasses can turn a person’s photo into personal info, Harvard students find” The Boston Globe, October 2024 — Name, address, and phone number retrieved in 90 seconds. One co-creator noted: some dude could just find some girl’s home address on the train.
“Russian ‘pick-up artist’ accused of secretly filming women in Ghana” BBC News, February 2026 — Hidden camera in sunglasses. Footage posted online. Ghana requested extradition. Up to 25 years in prison.
“This facial recognition website can turn anyone into a cop — or a stalker” The Washington Post, May 2021 — PimEyes scanned 900 million images. Described as “wildly popular among strangers looking to essentially stalk women.”
“Porn Stars and Sex Workers Targeted With Facial Recognition App” Newsweek, April 2016 — Users of a Russian forum used FindFace to identify porn performers and sex workers, then contacted their families.
“Inside the Telegram Groups Doxing Women for Their Facebook Posts” WIRED, February 2025 — Men from dating Telegram groups shared non-consensual intimate images of women, along with phone numbers, usernames, and locations.
“Rape and revenge porn: Serbian Telegram groups preying on women” France 24, September 2024 — Members shared revenge porn, child pornography, and video of a rape. Serbia lacked specific laws on the practice.
“Millions creating deepfake nudes on Telegram as AI tools drive global wave of digital abuse” The Guardian, January 2026 — At least 150 Telegram channels across the UK, Brazil, China, Nigeria, Russia, and India.
“Telegram ‘rape chat groups’ with up to 70,000 members uncovered in German investigation” The Telegraph / ARD STRG_F, December 2024 — A year-long investigation revealed groups where men shared instructions on how to drug and rape women — including wives, sisters, and mothers. Members swapped tips on sedatives, shared live videos of assaults, and linked to shops selling drugs disguised as hair products.
“The viral AI avatar app Lensa undressed me — without my consent” MIT Technology Review, December 2022 — A female reporter received heavily sexualized, often nude portraits. Male colleagues got astronauts and warriors.
“Exclusive: Meta created flirty chatbots of Taylor Swift, other celebrities without permission” Reuters, March 2025 — Bots made sexual advances and generated images of celebrities in lingerie. One received over 10 million interactions.
“1 in 6 Congresswomen Targeted by AI-Generated Sexually Explicit Deepfakes” The Markup, December 2024 — Over 35,000 mentions across deepfake websites. Women members of Congress were 70 times more likely than men to be targeted.
“Apple, Google host dozens of AI ‘nudify’ apps like Grok, report finds” CNBC, January 2026 — 55 nudify apps on Google Play. 47 in the Apple App Store. Available to anyone with a phone.
“Women and girls are taking Grok to court over sexualized AI deepfakes” The 19th, March 2026 — Class-action suit filed by three Tennessee teenagers. Federal prosecutors had not yet pursued criminal charges.
The numbers behind the headlines
These are not edge cases. They are the main use case.
98% of all deepfake content online is non-consensual pornography. 99% of it targets women. Deeptrace / Sensity AI, 2019; confirmed by UN Women, 2024
Deepfake porn videos increased 550% between 2019 and 2023. WIRED, October 2023
In South Korea, 387 people were arrested in 2024 for deepfake sex crimes. 80% of them were teenagers. CBS News, September 2024; NPR, September 2024
24 million people visited AI undressing websites in September 2023 alone. Links to nudify apps increased 2,400% on social media. Bloomberg / TIME, December 2023
Grok generated 4.4 million images in nine days. 1.8 million were sexualized depictions of women. An estimated 23,000 were sexualized images of children. PBS NewsHour / Associated Press, January 2026; EU Commission investigation data
90% of custom deepfake requests on Civitai, an AI marketplace backed by Andreessen Horowitz, targeted women. MIT Technology Review, January 2026; Stanford and Indiana University research
1 in 6 U.S. Congresswomen have been targeted by sexually explicit AI deepfakes. Women members were 70 times more likely than men to be targeted. The Markup / The 19th, December 2024
1.8 billion women and girls worldwide still lack legal protection from online harassment and technology-facilitated abuse. Less than 40% of countries have laws addressing cyber harassment. UN Women / UN News, November 2025
The tools are free or nearly free. They run on consumer hardware. They require no technical skill. The infrastructure for abuse at scale is now a product feature.
The pattern no one wants to name
Across every category of emerging technology, from generative AI and facial recognition to smart glasses, messaging platforms, and consumer apps, the same thing happens. Tools built without safeguards are immediately weaponized against women (and other marginalized groups).
Not sometimes. Not as an edge case. As the primary use. And the responses are always the same: “it’s not all men,” “it’s just tech,” “it’s not the tech, it’s the misuse,” “don’t overreact, it’s fun.”
No accountability. Not for the companies building these tools. Not for the people weaponizing them. It all gets reframed as content creation or inevitable AI behavior. What once would have caused outrage now passes as a product feature.
We don’t see women building AI systems to systematically expose, humiliate, monitor, or intimidate men at scale. Not because women are morally superior. And not because women aren’t building or using AI. Women are engineers. Founders. Researchers. Power users.
The difference isn’t capability. It’s conditioning. We are all operating inside systems that have long normalized certain directions of harm, systems in which surveillance, humiliation, and coercion flowing toward women barely register as an aberration.
The legislation is always late
The U.S. passed the Take It Down Act in April 2025, the first federal law targeting non-consensual intimate imagery. The House voted 409–2. It was signed into law in May 2025. This was years after the crisis began.
South Korea criminalized possession of deepfake sexual images in September 2024, after 6,000 people marched in Seoul and the crisis had already engulfed hundreds of schools.
The EU opened investigations into Grok in January 2026, after millions of images had already been generated.
UN Women reports that 1.8 billion women and girls still lack legal protection from online harassment and technology-facilitated abuse. Less than 40% of countries have laws addressing cyber harassment. The legislation always arrives years after the harm. And the tools keep getting cheaper, faster, and more capable.
The tiny guillotine
So here we are. If a woman vibecoded a tiny guillotine, the internet would lose its mind. Meanwhile, a man builds an app that strips the clothes off any woman’s photo, and it gets 24 million visitors in a month.
When harm flows predominantly one way, and we grow accustomed to it, something structural is being revealed.
AI didn’t invent misogyny. It removed friction.
And the normalization of who absorbs the consequences may be the clearest signal of all.
Sources
All headlines and statistics cited in this article are from published reporting. Full list below.
Headlines cited
NBC News — Nude deepfakes of Taylor Swift went viral on X, evading moderation and sparking outrage (January 2024) [link]
PBS News — X blocks some Taylor Swift searches as deepfake explicit images circulate (January 2024) [link]
Bloomberg — AI ‘Nudify’ Apps That Undress Women in Photos Soaring in Use (December 2023) [link]
TIME — ‘Nudify’ Apps That Use AI to ‘Undress’ Women in Photos (December 2023) [link]
CBS News / 60 Minutes — Schools face a new threat: ‘nudify’ sites that use AI to create realistic, revealing images of classmates (December 2024) [link]
Associated Press — Boys at her school shared AI-generated, nude images of her. After a fight, she was the one expelled (December 2025) [link]
The Guardian — Grok AI still being used to digitally undress women and children despite suspension pledge (January 2026) [link]
PBS NewsHour — EU investigates Musk’s AI chatbot Grok over sexual deepfakes (January 2026) [link]
BBC News — South Korea: The deepfake crisis engulfing hundreds of schools (September 2024) [link]
CNN — ‘Manfluencers’ are filming themselves trying to pick up women. Smart glasses are their perfect tool (February 2026) [link]
CNN — Men are using smart glasses to secretly film women (February 2026) [link]
The Boston Globe — AI and Meta smart glasses can turn a person’s photo into personal info, Harvard students find (October 2024) [link]
BBC News — Russian ‘pick-up artist’ accused of secretly filming women in Ghana (February 2026) [link]
The Washington Post — This facial recognition website can turn anyone into a cop — or a stalker (May 2021) [link]
Newsweek — Porn Stars and Sex Workers Targeted With Facial Recognition App (April 2016) [link]
WIRED — Inside the Telegram Groups Doxing Women for Their Facebook Posts (February 2025) [link]
France 24 — Rape and revenge porn: Serbian Telegram groups preying on women (September 2024) [link]
The Guardian — Millions creating deepfake nudes on Telegram as AI tools drive global wave of digital abuse (January 2026) [link]
MIT Technology Review — The viral AI avatar app Lensa undressed me — without my consent (December 2022) [link]
Reuters / Variety — Meta created flirty chatbots of Taylor Swift, other celebrities without permission (March 2025) [link]
The Markup / The 19th — 1 in 6 Congresswomen Targeted by AI-Generated Sexually Explicit Deepfakes (December 2024) [link]
CNBC — Apple, Google host dozens of AI ‘nudify’ apps like Grok, report finds (January 2026) [link]
The 19th — Women and girls are taking Grok to court over sexualized AI deepfakes (March 2026) [link]
The Telegraph / ARD STRG_F — Telegram ‘rape chat groups’ with up to 70,000 members uncovered (December 2024) [link]
Statistics cited
Deeptrace / Sensity AI — The State of Deepfakes (2019) — 98% non-consensual pornography, 99% targeting women [link]
UN Women — AI-powered online abuse: How AI is amplifying violence against women (2024) [link]
WIRED — Deepfake Porn Is Out of Control (October 2023) — 550% increase [link]
CBS News — South Korea set to criminalize possessing or watching sexually explicit deepfake videos (September 2024) — 387 arrests [link]
NPR — South Korea investigates Telegram over alleged sexual deepfakes (September 2024) — 74% of suspects aged 10–19 [link]
MIT Technology Review — Inside the marketplace powering bespoke AI deepfakes of real women (January 2026) — 90% targeting women [link]
UN News — AI and anonymity fuel surge in digital violence against women (November 2025) — 1.8 billion women lacking protection [link]
Legislation cited
The Washington Post — Take It Down Act, addressing nonconsensual deepfakes and ‘revenge porn,’ passes (April 2025) [link]
TIME — Congress Just Passed Its First Bill Tackling AI Harms (2025) [link]
Al Jazeera — Trump signs bill outlawing ‘revenge porn’ (May 2025) [link]
Al Jazeera — EU probes Musk’s Grok AI feature over deepfakes of women, minors (January 2026) [link]
Additional reporting and analysis
Bellingcat — Behind a Secretive Global Network of Non-Consensual Deepfake Pornography (February 2024) [link]
Southern Poverty Law Center — How Grok’s Deepfake Technology Is Being Weaponized Against Women Online (2026) [link]
CBS News — Mom of one of Elon Musk’s kids says AI chatbot Grok generated sexual deepfake images of her (January 2026) [link]
NBC News — Dark web users cite Grok as tool for making ‘criminal imagery’ of kids, U.K. watchdog says (January 2026) [link]
PetaPixel — University Issues Warning Over Man Using Ray-Ban Meta Sunglasses to Record Women (October 2025) [link]
TechCrunch — Meta plans to add facial recognition to its smart glasses, report claims (February 2026) [link]
Byline Supplement — AI Search Engine PimEyes Facilitates Image-Based Sexual Abuse of Women… Then Sells Them the Solution (2023) [link]
Euronews — Naked deepfake images of teenage girls shock Spanish town (September 2023) [link]
UN Women — When justice fails: Why women can’t get protection from AI deepfake abuse (2025) [link]
Ms. Magazine — The Digital War on Women: Sexualized Deepfakes, Weaponized Data and Stalkerware (November 2024) [link]
Transparency & Sources
This article was created with the support of AI tools to brainstorm ideas, improve clarity, and enhance readability. AI also assisted in generating visual assets and identifying relevant sources. Tools used: ChatGPT, Claude
I care deeply about giving credit where it’s due and strive to include all references with working links. If you notice a missing source or believe your work should be credited, feel free to DM me and I will do my best to correct (where possible).