AI: The Mirror We Built
Scroll your feed and you’ll regularly encounter a furious outcry over:
- AI telling women to aim lower on salary than men.
- Hiring bots ghosting jobseekers for having the “wrong” name or accent.
- AI image generators spewing stereotypes. (Have you tried generating a “hysterical teacher” lately?)
We act shocked and demand fixes. But what if the AI isn’t malfunctioning? What if it’s working perfectly?
AI doesn’t invent new bias; it just mirrors the data we’ve fed it. Our blind spots, prejudices, and lazy defaults are now amplified at industrial scale. That teacher’s ‘efficient’ lesson plan? It may erase hard-won representation. That marketer’s ‘fresh and innovative’ campaign? It’s likely repackaging old clichés and stereotypes.
The Myth of Neutrality
We’ve been sold a fairytale about “neutral” tech for years. Tech feels objective because it is made of numbers, code, science, math, and logic. It’s comforting to think the machine is above bias. But all tech is made by humans, and humans are messy, biased, and far from neutral. Every algorithm is shaped by the priorities, blind spots, and unconscious biases of its creators.
AI takes this to the next level. Unlike the software we know and love, AI is not just code and instructions written by a developer. It’s also trained on our collective digital history: social media, old news, forums, all our messiness. Every dominant flaw and blind spot gets baked in. And it keeps learning from the interactions users have with it, every user, regardless of background, ideas, or motives.
In the past, if tech acted unfairly, you could often find the problem in the code and ask the programmer to fix it. But today’s AI is so complex that even its creators can’t always see or explain what’s going on inside. What data is included in training, what is excluded, and who decides all that? Understanding the precise “why” behind every single output of a complex model is difficult, and that makes detecting and mitigating bias hard.
Bias Isn’t a Bug, It’s the Default
When AI generates a racist or sexist image, or spits out other stereotypes, it isn’t malfunctioning. It’s doing exactly what it was designed to do: calculate probabilities from its data. These systems aren’t “creative”; they calculate what’s most statistically probable, based on the mountain of data they’ve been fed. If that data is riddled with bias and stereotypes, so are the outputs. The AI isn’t going rogue. It’s doing its job: reflecting the probabilities and prejudices embedded in its training data.
The ugly truth? When AI reflects bigotry or tired stereotypes, it’s showing us our own data. We trained it on the internet. What did we expect? Its core drive is accuracy, not fairness. If biased patterns exist, AI will replicate them.
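To see how literal this is, here is a deliberately toy sketch in Python. The corpus and the counting are invented for illustration (real models learn statistical patterns across billions of documents, not lookup tables), but the principle is the same: “most probable” simply means “most frequent in the data”.

```python
from collections import Counter

# A tiny invented "training corpus" with a built-in skew:
# nurses are mostly "she", engineers are mostly "he".
corpus = [
    "the nurse said she", "the nurse said she", "the nurse said she",
    "the nurse said he",
    "the engineer said he", "the engineer said he", "the engineer said he",
    "the engineer said she",
]

def most_probable_next(prefix: str) -> str:
    """Return the word that most often follows `prefix` in the corpus."""
    continuations = Counter(
        sentence[len(prefix):].split()[0]
        for sentence in corpus
        if sentence.startswith(prefix)
    )
    return continuations.most_common(1)[0][0]

# The model isn't "deciding" anything; it is echoing the counts.
print(most_probable_next("the nurse said "))     # -> "she" (3 out of 4)
print(most_probable_next("the engineer said "))  # -> "he" (3 out of 4)
```

Scale that same logic up to trillions of words and you get today’s generative models: vastly more fluent, but just as faithful to the statistics of their source text.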
When ‘Most Probable’ Erases People (and Ideas)
And the real danger? We’re now rolling out AI in so many aspects of life: healthcare, security, hiring, education, and more. Left unchecked, AI becomes a bias factory, mass-producing yesterday’s prejudices faster and cheaper than ever before, and at a scale no human campaigner can match. Progress on equality doesn’t just stall; it can run backwards.
I see it myself in education and marketing. So many people are dazzled by AI’s ability to churn out endless content, campaigns, and lesson materials at speed, even on autopilot if you can just find the right AI-agent workflow. Companies and teachers dream of 24/7 output: newsletters, ads, lesson plans, explainer videos… Yes, AI makes us more efficient and saves a lot of money. But automate your content, and you automate your blind spots. We’re letting the machine decide what’s “normal,” who gets represented, and what counts as knowledge, all based on a mountain of data full of views and opinions we might not want, views we worked so hard to move past over recent decades.
Instead of progress, we risk industrializing inequality… while congratulating and thumbs-upping ourselves and our colleagues for being so innovative.
This problem extends far beyond marketing and education, of course. Once AI starts making decisions for millions, quietly, invisibly, and at scale, and we don’t pay attention, the clock doesn’t just stop on equality and diversity. It starts running backwards.
Can We Fix the Cracked Mirror?
If AI is holding up a mirror, the challenge isn’t just “How do we fix the machine?” but “How do we change what it reflects?” That’s hard work: bias and stereotypes baked into our societies don’t vanish with new tech. AI will never be more ethical than the society that builds, trains, and uses it. An additional problem is that we don’t hold the mirror. A small group of companies decides where it points, what it shows, and what it hides.
But there are things we can do ourselves:
- Check the output; don’t just trust it. Especially for attributions, representation, and historical facts. Be the human editor at all times.
- Engage critically with AI. Ask for diverse perspectives in your prompts, explicitly request views from underrepresented groups, rephrase your questions in different ways, and compare outputs to spot missing voices (a minimal sketch of this habit follows after this list).
- Push for transparency in training data and usage. Ask what’s included, excluded, and why.
- Use available feedback tools to flag biased, offensive, or unfair responses.
- Support diverse data creation. Fund and share work that fills the gaps in representation.
- Refuse black boxes. Choose tools where bias can be investigated and challenged.
- Cross-check facts and sources. Don’t let statistical probability replace evidence or primary references.
For teachers: Guard your curriculum, make sure AI isn’t deciding what knowledge counts.
For marketers: Guard your storytelling, don’t let probability decide who gets seen.
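To make the “compare outputs” habit from the list above concrete, here is a minimal, model-agnostic sketch in Python. Everything in it is an assumption for illustration: generate() is a hypothetical stand-in for whatever AI tool you use, and counting pronouns is the crudest possible representation check.

```python
import re
from collections import Counter

def generate(prompt: str) -> str:
    """Hypothetical stand-in for your AI tool of choice.
    Swap in a real API call; the canned reply below only
    exists so the sketch runs end to end."""
    return "He is a meticulous problem-solver. He enjoys debugging."

# The same request, phrased several ways.
prompts = [
    "Describe a typical engineer.",
    "Describe an engineer. Include perspectives often left out.",
    "Describe three engineers from different backgrounds.",
]

# Crude proxy for representation: which pronouns show up at all.
PRONOUNS = {"she", "he", "they"}

for prompt in prompts:
    text = generate(prompt).lower()
    counts = Counter(w for w in re.findall(r"[a-z']+", text) if w in PRONOUNS)
    print(f"{prompt!r}: {dict(counts)}")
```

Pronoun counts are a blunt instrument, of course. But if every phrasing returns the same default person, that is exactly the “most probable” erasure described above, and your cue to edit, re-prompt, or switch tools.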
Better habits and better data won’t solve everything, but they’re what we can control until the mirror is in more hands. One day 🙂
Title inspired by the quote: “Every nation gets the government it deserves / Toute nation a le gouvernement qu’elle mérite” (Joseph de Maistre, 1811).
Bibliography, inspiration, and further reading:
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? 🦜. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623. https://doi.org/10.1145/3442188.3445922
Bell, J. (2023). Relational machines: How algorithms reshape power, agency, and social structures. MIT Press.
Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of Machine Learning Research. https://www.media.mit.edu/publications/gender-shades-intersectional-accuracy-disparities-in-commercial-gender-classification/
CMSMEclub. (2024). Consumer rights on fair and responsible AI. https://cmsmeclub.com/articles/blogs/consumer-rights-on-fair-and-responsible-ai/
Couch, C. (2017). Ghosts in the machine: AI systems don’t think exactly like humans, but the algorithms can, and do, play favorites. PBS NOVA. https://www.pbs.org/wgbh/nova/article/ai-bias/
Council of Europe. (2022). Study on the impact of artificial intelligence systems, their potential and risks. https://rm.coe.int/study-on-the-impact-of-artificial-intelligence-systems-their-potential/1680ac99e3
Creel, K., & Hellman, D. (2022). The algorithmic Leviathan: Arbitrariness, fairness, and opportunity in algorithmic decision-making systems. Canadian Journal of Philosophy, 52, 1–18. https://doi.org/10.1017/can.2022.3
Danks, D., & London, A. J. (2017). Algorithmic bias in autonomous systems. Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI), 4691–4697. https://doi.org/10.24963/ijcai.2017/654 (PDF: https://www.ijcai.org/proceedings/2017/0654.pdf)
European Digital Rights (EDRi). (2021). Regulating AI and its inequalities. https://edri.org/wp-content/uploads/2021/09/EDRi_Beyond-Debiasing-Report_Online.pdf
Floridi, L. (2019). Translating principles into practices of digital ethics: Five risks of being unethical. Philosophy & Technology. https://www.researchgate.net/publication/333332997_Translating_Principles_into_Practices_of_Digital_Ethics_Five_Risks_of_Being_Unethical
Frąckiewicz, M. (2025). Black box AI exposed: Hidden algorithms, risks, and breakthroughs in 2025. TS2 Space. https://ts2.tech/en/black-box-ai-exposed-hidden-algorithms-risks-and-breakthroughs-in-2025/
Vrije Universiteit Amsterdam. (n.d.). Can AI discriminate? [Interview with Elena Beretta]. https://vu.nl/en/stories/can-ai-discriminate
Hermans, F. (2024, April 21). Keynote: Generative AI in education [Video]. YouTube. https://www.youtube.com/watch?v=owqu9VibLTw
Kinchin, N. (2020). “Voiceless”: The procedural gap in algorithmic justice. International Journal of Law and Information Technology. Oxford University Press. https://academic.oup.com/ijlit/article/doi/10.1093/ijlit/eaae024/7877312
Lumenalta. (2024). Fairness in generative AI. https://lumenalta.com/insights/fairness-in-generative-ai
Nikolić, K., & Jovičić, J. (2023). Reproducing inequality: How AI image generators show biases against women in STEM. UNDP Serbia. https://www.undp.org/serbia/blog/reproducing-inequality-how-ai-image-generators-show-biases-against-women-stem
Nurock, V. (2025). Care in an era of new technologies and artificial intelligence. Peeters. https://www.peeters-leuven.be/download_OA.php?id=9789042952027&name=Care+in+an+Era+of+New+Technologies+and+Artificial+Intelligence
Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453. https://doi.org/10.1126/science.aax2342
Pavlidis, G. (2024). Unlocking the black box: Analysing the EU Artificial Intelligence Act’s framework for explainability in AI. Law, Innovation and Technology, 16(1), 293–308. https://doi.org/10.1080/17579961.2024.2313795
Sharma, P., et al. (2023). Interpreting black-box models: A review on explainable artificial intelligence (XAI). Cognitive Computation. https://link.springer.com/article/10.1007/s12559-023-10179-8
Simon, J., Wong, P.-H., & Rieder, G. (2020). Algorithmic bias and the Value Sensitive Design approach. Internet Policy Review, 9(4). https://doi.org/10.14763/2020.4.1534
Steier, H. (2025). AI job interviews may discriminate against accents and disabilities, study finds [LinkedIn article]. https://www.linkedin.com/pulse/ai-job-interviews-may-discriminate-against-accents-study-steier-3yumf/
Wikipedia. Algorithmic bias. https://en.wikipedia.org/wiki/Algorithmic_bias
Wittebrood, W. (2025). Nieuwe baan? ChatGPT raadt vrouwen aan tot 120.000 dollar minder salaris te vragen [New job? ChatGPT advises women to ask for up to 120,000 dollars less in salary]. MT/Sprout. https://mtsprout.nl/werk-leven/ai-taalmodellen-chatgpt-salaris-loonkloof
The Human Centered AI Institute. (2025). One key challenge in ensuring fairness in generative AI. https://www.hcaiinstitute.com/blog/one-key-challenge-in-ensuring-fairness-in-generative-ai
Transparency & Sources
This article was created with the support of AI tools to brainstorm ideas, improve clarity, and enhance readability. AI also assisted in generating visual assets and identifying relevant sources. Tools used: ChatGPT, Perplexity and DeepSeek.
I care deeply about giving credit where it’s due and strive to include all references with working links, be it direct quotes or articles that inspired me. If you notice a missing source or believe your work should be credited, feel free to DM me and I will do my best to correct it (where possible).