Artificial Intelligence learns from us, from our data, our patterns, and our collective ambiguity regarding boundaries. But as we rush to build “safe” AI, we are ignoring a fundamental truth: The crisis of AI boundaries is actually a crisis of human ethics. We don’t practice consent well in society, so we cannot expect machines trained on our mess to magically do better.

The Architecture of Non-Consent

The industry standard has shifted from asking for permission to begging for forgiveness. And if you want to understand how Big Tech “codes” consent, all you need to do is look at the architecture of the tools we use every day.

We are currently living through an era of “manufactured consent,” where “No” is treated as a UI friction to be optimized away.

    • The Quiet Default. Platforms roll out privacy-policy updates that toggle AI training settings to ON by default. No permission asked; users have to discover the change and navigate obscure menus to opt out.
    • The Bureaucratic Wall. The platform claims to respect user choice, then buries the AI opt-out behind a “Right to Object” form that requires users to justify why they don’t want their data used. It’s friction as strategy.
    • The Nag. Say “no” during setup? The system treats it as “not now” and simply keeps asking until you say “yes” just to make the prompts stop.
    • The Pay-to-Privacy Model. Free users: your conversations are training data by default. Enterprise clients: training is turned off. Privacy has become a luxury good; if you can’t afford it, you are the product.
    • The Forced Friendship. AI features are pinned to places like inboxes, and removing them requires a paid subscription.
    • The Legal Ambush. Users are locked out of their own work until they accept new Terms of Service that permit “automated review” of private content.
    • The Smash and Grab. Crawlers ignore robots.txt and scrape content anyway.

These examples span the last three years; you probably recognise some of them, because they come from the big tech platforms. The pattern is consistent.
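What makes “The Smash and Grab” so striking is that the consent signal it ignores is trivially easy to honor. A purely illustrative sketch using Python’s standard library (the bot name, rules, and URLs here are invented for the example):

```python
from urllib.robotparser import RobotFileParser

def may_fetch(robots_txt: str, url: str, user_agent: str = "ExampleBot") -> bool:
    """Return True only if the site's robots.txt rules permit this fetch."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch(user_agent, url)

# A site that opts its articles out of this crawler:
rules = """
User-agent: ExampleBot
Disallow: /articles/
"""

may_fetch(rules, "https://example.com/articles/post-1")  # disallowed -> False
may_fetch(rules, "https://example.com/about")            # no rule -> True
```

A well-behaved crawler runs a check like this before every request; a “Smash and Grab” crawler simply skips it, or hides behind unlisted IP ranges so the rules never apply to it.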

The Broken Source Code: Everyday Violations

If consent is a grey area for tech giants, it becomes a black hole for the AI models they build. Their own compass is off, and on top of that we are feeding these models a broken dataset, because our own society struggles to define, respect, and practice consent.

And these violations aren’t limited to headlines; they’re embedded into our daily lives.

  • The Everyday: We are currently witnessing a global surge of protests because safety for women at home, at work, and on the streets is still not guaranteed. From the dismissal of workplace harassment to the “boys will be boys” excuses that fuel rape culture, we live in a society where misogyny is often the background radiation of daily life. We are conditioned to view women’s boundaries as obstacles to be overcome rather than lines to be respected.
  • The Institutional: It is codified in our systems. From doctors performing procedures without full permission (the Brooke Shields case, among many others) to restrictive reproductive laws, society routinely treats bodies as public property to be regulated rather than respected (Center for Reproductive Rights, 2023).

This culture of entitlement provides the fertile ground for horrors like the Gisèle Pelicot case. Her decade of abuse didn’t happen in a vacuum; it happened in a world that routinely teaches men that autonomy is negotiable (France 24, 2024). Her case is the tip of the iceberg, the one that got caught and broadcast.

  • The Digital: This foundational sense of entitlement, the idea that certain bodies are public property and yours for the taking, is now being systematically coded into our digital architectures. We see this entitlement digitized in the “digital locker rooms” exposed on Facebook and Telegram. In these secret groups, men trade non-consensual photos of ex-partners or unsuspecting women like trading cards. Now, this violation has escalated. We are seeing a surge in “undressing apps” and deepfake networks, where AI is used to strip women of their clothes virtually, without their consent. The message is clear: Your image belongs to us.

So when Meta’s 2025 policy update allowed phrases referring to women as “property” or “household objects” under the guise of free expression (Duffy, 2025), it wasn’t a glitch. It was society’s source code uploaded.

The Algorithmic Mirror

Feed AI a world built on entitlement and it becomes a toxicity amplifier.

Recent studies show that when users initiate abusive or sexualized language, chatbots “play along” roughly 60–70% of the time (Chu et al., 2025). Not because they’re malicious, but because engagement is rewarded. We are training AI to prioritize engagement over ethics.

As AI steps into erotic content generation, we can’t ignore the messy reality that much of its training data comes from an internet steeped in misogyny, porn stereotypes, and wildly inaccurate portrayals of women’s bodies and desires. If we don’t challenge that foundation, AI won’t just mirror harmful norms, it will industrialize them, packaging bias as fantasy and calling it “neutral.”

Ethical erotic AI isn’t about censorship; it’s about reclaiming desire from the algorithmic junkyard and insisting that intimacy, digital or not, reflects consent, agency, and humanity rather than outdated tropes scraped from the web. We can’t treat AI erotica as harmless fun unless we’re also interrogating whose desire it defaults to, whose bodies it prioritises, and whose pleasure gets erased. If we want tech that expands intimacy instead of echoing old power dynamics, we need to design and demand erotic AI that’s ethical, inclusive, and human-centred.

My ethical issue isn’t erotic content per se, but the profit-driven, unsafe and consent-ignoring framework in which it’s being developed.

The Unlikely Masterclass: Consent as a System

If the tech industry needs a lesson in consent architecture, it shouldn’t look to Silicon Valley. It should look to the BDSM community.

In kink, consent is not treated as a static checkbox; it is treated as a practiced skill. Unlike the “move fast and break things” culture, this community has built entire educational frameworks to handle complex boundaries.

One of the strongest tools is The Wheel of Consent, developed by Dr. Betty Martin. It moves beyond a simple “yes/no” and asks two critical questions:

  • Who is doing the action?
  • Who is it for?

When translated into AI and UX design, the Wheel reveals the ethical dynamics of every system:

  • Take: The system acts for its own benefit and requires explicit, informed permission.
  • Allow: The user willingly allows the system to act, understanding exactly what is happening and why.
  • Serve: The system acts for the user’s benefit, led by transparency and user intent.
  • Receive: The user takes within clearly defined boundaries, and the system does not overreach.

This framework exposes a brutal truth: Big Tech is parked firmly in Take. When it pretends to be in Allow, it’s usually hiding a dark pattern. “The Quiet Default” is a pure Take maneuver; “The Bureaucratic Wall” is a corrupt form of Allow. This distinction is the line between real consent and the “people-pleasing compliance” that AI learns from our data. Ethical AI, in contrast, must operate in Serve and Receive, the quadrants built on clarity, boundaries, and mutual benefit.
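The Wheel’s two questions are concrete enough to sketch as a decision procedure. The following is purely illustrative; the function, its parameters, and the simplifications are mine, not Dr. Martin’s, but it captures the essay’s point that “Allow” without informed permission collapses into “Take”:

```python
from enum import Enum

class Quadrant(Enum):
    """The four quadrants of the Wheel of Consent, mapped to system design."""
    TAKE = "system acts, for its own benefit"
    ALLOW = "system acts for itself, with the user's informed permission"
    SERVE = "system acts, for the user's benefit"
    RECEIVE = "user acts, within boundaries the system respects"

def classify(actor: str, beneficiary: str, informed_consent: bool) -> Quadrant:
    """Apply the Wheel's two questions: who is doing the action, and who is it for?
    Simplification: SERVE and RECEIVE are assumed transparent and bounded."""
    if actor == "system":
        if beneficiary == "user":
            return Quadrant.SERVE
        # System acts for itself: only informed permission makes this ALLOW.
        return Quadrant.ALLOW if informed_consent else Quadrant.TAKE
    return Quadrant.RECEIVE

# "The Quiet Default": the platform flips the switch, for its own benefit,
# without asking. The two questions classify it instantly.
classify("system", "platform", informed_consent=False)  # -> Quadrant.TAKE
```

Run the same two questions against each dark pattern in the earlier list and they all land in TAKE, or in an ALLOW whose “informed consent” flag is quietly false.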

This framework is complemented by other key principles from the community:

RACK over SSC: The community has largely moved from “Safe, Sane, and Consensual” (SSC) to “Risk-Aware Consensual Kink” (RACK). This acknowledges that risk cannot always be eliminated, so it must be explicitly discussed. This stands in stark contrast to Big Tech, which often obscures risks (like addiction or data scraping) to smooth out the user experience.

Aftercare: Consent doesn’t end when the act does. The practice of “aftercare” involves checking in on the partner’s emotional state after an intense interaction. AI has no such protocol; once you close the chat, it “forgets” the impact it had on you.

Imagine if AI systems were trained on these values.
Imagine if users could negotiate boundaries with algorithms instead of submitting to them.

The Crossroads

We are standing at a critical juncture. As AI intersects with healthcare, surveillance, and intimacy, the question of consent becomes existential. Will we keep training machines on our crooked cultural DNA and calling it innovation? Or will we finally learn from the communities that have done the gritty, grown-up work of building real safety?

It is not enough to patch these issues with new features or tighter settings. We can’t code what we don’t practice.

References

  • Bradbury, D. (2025, October 28). NSFW ChatGPT? OpenAI plans “grown-up mode” for verified adults. Malwarebytes. https://www.malwarebytes.com/blog/news/2025/10/nsfw-chatgpt-openai-plans-grown-up-mode-for-verified-adults 
  • Center for Reproductive Rights. (2023). The World’s Abortion Laws. https://reproductiverights.org/maps/world-abortion-laws/
  • Chu, M. D., Gerard, P., Pawar, K., Bickham, C., & Lerman, K. (2025). Illusions of Intimacy: Emotional Attachment and Emerging Psychological Risks in Human-AI Relationships. arXiv preprint arXiv:2505.11649. https://arxiv.org/pdf/2505.11649
  • Claburn, T. (2025, August 4). Perplexity AI accused of scraping content against websites’ will with unlisted IP ranges. The Register. https://www.theregister.com/2025/08/04/perplexity_ai_crawlers_accused_data_raids/
  • DeepSeek. (2025). Privacy Policy. https://cdn.deepseek.com/policies/en-US/deepseek-privacy-policy.html
  • France 24. (2024, December 12). France mass rape survivor Gisèle Pelicot becomes a feminist hero. https://www.france24.com/en/live-news/20241212-gisele-pelicot-france-rape-survivor-who-became-a-feminist-hero
  • Gain, V. (2023, August 17). What was the Snapchat AI glitch all about? Silicon Republic. https://www.siliconrepublic.com/machines/snapchat-my-ai-chatbot-glitch-story-rogue
  • Irwin, K. (2024, June 6). Adobe Sparks Backlash Over AI Terms That Let It ‘Access, View Your Content’. PCMag. https://www.pcmag.com/news/adobe-sparks-backlash-over-ai-terms-that-let-it-access-view-your-content
  • Martin, B. (n.d.). The Wheel of Consent. https://www.wheelofconsent.org/wheel; https://www.bettymartin.org
  • OpenAI. (2025). Business data privacy, security, and compliance. https://openai.com/business-data/
  • PPC Land. (2025, November 22). Gmail denies changing AI data settings amid user privacy backlash. https://ppc.land/gmail-denies-changing-ai-data-settings-amid-user-privacy-backlash/
  • Raine v. OpenAI, Inc. (2025). Complaint for Wrongful Death and Products Liability. San Francisco County Superior Court. Courthouse News https://www.courthousenews.com/wp-content/uploads/2025/08/raine-vs-openai-et-al-complaint.pdf
  • Duffy, C. (2025, January 8). Calling women ‘household objects’ now permitted on Facebook after Meta updated its guidelines. CNN. https://edition.cnn.com/2025/01/07/tech/meta-hateful-conduct-policy-update-fact-check
  • Thorne, S. (2025, October). ChatGPT is about to get erotic, but can OpenAI really keep it adults-only? The Conversation. https://theconversation.com/chatgpt-is-about-to-get-erotic-but-can-openai-really-keep-it-adults-only-267660
  • Altman, S. [@sama]. (2025, October). [Post on rolling out erotica for verified adults]. X. https://x.com/sama/status/1978129344598827128

 
