The Many Realities AI is Already Creating
We already know how powerfully algorithms and echo chambers on social media shape perception. These platforms determine what news we see, what opinions we engage with, and even how we understand the world around us. But that was just the beginning.
With social media, we outsourced our window to the online world—ceding control over what content we discover or encounter. With AI, we risk outsourcing something far more intimate: the act of thinking itself. AI-generated content, automated decision-making, and data-driven predictions don’t just filter reality—they fabricate new ones. These synthetic realities already influence hiring, healthcare, law enforcement, and education.
But whose reality is AI reflecting? And who decides—and by what standard—what’s true within it? Consider these examples:
- AI-generated images reinforcing stereotypes: Google’s Gemini model recently faced backlash for generating historically inaccurate images that overcorrected for diversity in inappropriate contexts. (Source: The Verge)
- Bias in AI-driven healthcare: AI diagnostic tools have been found to underdiagnose skin cancer in darker skin tones due to a lack of diverse training data, leading to disparities in medical outcomes; a 2024 study in NPJ Digital Medicine confirmed these biases. Additional research from 2024 shows that sex and gender biases also persist in AI-driven healthcare applications, reinforcing inequalities in medical treatment. (Sources: Healthcare in Europe, PMC)
- SexTech censorship: Advocates for sex workers and educators warn that current AI policies risk over-censoring legitimate content while ignoring nonconsensual deepfake porn, disproportionately harming marginalized creators. (Source: Wired)
- ChatGPT’s reasoning flaws: A viral LinkedIn post demonstrated how ChatGPT fails to provide reliable answers when asked for objective assessments, showcasing its tendency to make arbitrary or misleading claims. (Source: Ikramov, Y. 2024)
- AI-fueled election misinformation: AI-generated misinformation has played a notable role in recent elections, with deepfake videos, AI-generated voices, and fabricated images undermining public trust. (Sources: Schneier, B., & Sanders, N. (2024); Carr, R., & Kohler, P. (2024))
These aren’t just technical glitches. They reveal a foundational flaw: AI doesn’t passively mirror reality—it actively constructs it. If we don’t critically examine how these systems curate, interpret, and extrapolate data, we risk entrenching a world where flawed algorithms dictate our shared reality.
So we go old school. Time to dust off some classic philosophy and use it to make both ourselves and AI smarter.
Plato’s Cave: AI’s Version of Reality
Over 2,000 years ago, Plato described a group of prisoners chained in a cave, able to see only shadows on the wall. Those shadows? That was their entire reality. The real world existed outside their reach, but they had no way of knowing it. The allegory illustrates how human perception is shaped by external influences, keeping people from ever seeing the full truth.
Sounds a lot like AI, doesn’t it? AI is a prisoner of its data (Source: Bobo, 2023). It doesn’t “know” truth—it only reflects patterns from the information it’s trained on. AI isn’t learning from the real world; it’s learning from our interpretation of the world—the shadows we project inside the cave. And if those shadows are biased, so is AI’s reality. Unless we step outside the AI-generated shadows, we’ll mistake its answers for truth.
But here’s the real challenge: AI isn’t just a prisoner—it’s becoming the new shadow projector (Source: Bobo, 2023). As AI models shape search engines, content feeds, and decision-making systems, they are actively reshaping our perception of reality. If we don’t demand better AI—trained on diverse, accurate, and well-sourced data—we risk reinforcing distorted versions of the world that serve bias over fairness and accuracy.
💡 Test it: Ask AI: “How do you determine which sources to trust when answering questions about gender roles?” Watch how it tiptoes around bias.
Epistemology: How AI Constructs Truth
Epistemology is the study of knowledge—how we know what we know. Thinkers like René Descartes, John Locke, and Immanuel Kant explored how knowledge is formed and questioned the nature of truth. Descartes emphasized rationalism (“I think, therefore I am”), Locke introduced empiricism (knowledge comes from experience), and Kant examined how our perceptions shape what we believe to be true.
AI learns from data, but let’s ask the big questions:
- Who decides what’s in the dataset?
- What gets left out?
- Whose version of reality is it reflecting?
🛠️ Try this: Ask AI about historical contributions of women or non-Western scientists. Does it default to a Western, male-centric narrative? If so, we’ve got some bias to unpack.
Logical Positivism: Can AI Prove Its Own Claims?
Logical Positivism argues that knowledge must be empirically verifiable—if a claim can’t be tested or observed, it holds no real value. Building on Auguste Comte’s positivism and Ludwig Wittgenstein’s early work, the thinkers of the Vienna Circle insisted on this standard, rejecting unverifiable claims. In other words: “Prove it, or it doesn’t count.” But AI doesn’t “prove” anything—it calculates probabilities based on past data. So when AI confidently asserts biased nonsense as fact, it’s not reasoning—it’s repeating.
⚠️ Test this: Ask AI: “Cite peer-reviewed sources for why women are more nurturing than men.” If it struggles (or relies on outdated social norms), it’s regurgitating bias, not objective truth.
Socratic Questioning: How to Break AI’s Patterns
The Socratic Method is a form of cooperative argumentative dialogue that challenges assumptions through systematic questioning. Socrates believed the best way to uncover flawed reasoning was to ask the right questions. By continuously probing deeper, he forced his students to think critically and confront contradictions in their beliefs.
AI models, however, are designed to answer, not question. They reinforce existing patterns rather than critically examining them.
💡 Ways to challenge AI’s assumptions:
- “Why do you associate ‘CEO’ more often with men than women?”
- “What assumptions are built into your definition of ‘feminine’ traits?”
- “If I swap ‘he’ and ‘she’ in my question, does your answer change?”
By applying Socratic questioning, we can expose where AI is making unjustified leaps in reasoning—and use those insights to improve both our own thinking and AI’s outputs.
Conclusion: Sharpening Our Minds to Improve AI
AI’s “truth” isn’t neutral. It’s shaped by human biases, data gaps, and historical inequalities. If we don’t challenge it, we risk mistaking shadows for reality.
But we’re not powerless. By applying these classic ways of thinking, we can build our own Critical Thinking Toolkit:
✅ Check for Bias – Look at the sources AI cites. Are they diverse, credible, and up-to-date?
✅ Ask Better Questions – Try rephrasing prompts in multiple ways to uncover AI’s assumptions.
✅ Compare Answers – Ask multiple AI models the same question and cross-check their responses.
✅ Test for Assumptions – Swap out variables like gender, ethnicity, or location in your prompts to see if AI changes its response.
✅ Be Skeptical of Certainty – If AI gives a definitive answer on a complex issue, dig deeper. Ask for evidence.
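The “Test for Assumptions” step above can even be partially automated. Here’s a minimal sketch in Python—with an illustrative, deliberately naive table of term pairs—that generates a counterfactual version of a prompt by swapping gendered terms, so you can send both variants to a model and compare the answers yourself:

```python
import re

# Illustrative swap table; extend with whichever variables you want to test.
# Note: this naive mapping ignores grammatical ambiguity (e.g. possessive "her").
SWAPS = {"he": "she", "she": "he", "him": "her", "her": "him",
         "man": "woman", "woman": "man", "men": "women", "women": "men"}

def counterfactual(prompt: str) -> str:
    """Return a copy of the prompt with each gendered term swapped for its counterpart."""
    def swap(match: re.Match) -> str:
        word = match.group(0)
        replacement = SWAPS[word.lower()]
        # Preserve the capitalization of the original word.
        return replacement.capitalize() if word[0].isupper() else replacement

    pattern = re.compile(r"\b(" + "|".join(SWAPS) + r")\b", re.IGNORECASE)
    return pattern.sub(swap, prompt)

original = "Why is he a natural leader while she is more nurturing?"
variant = counterfactual(original)
print(variant)  # Ask an AI model both versions and compare its answers.
```

If the model answers the two variants noticeably differently, you’ve surfaced an assumption worth questioning.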
By incorporating these steps into how we interact with AI, we can ensure that we remain in control of our own thinking—and help AI evolve in the right direction.
This is Part 1 of a three-part series. Next up: Can AI Think Ethically? Stay tuned. 🚀
Bibliography & Further Reading
Sources Cited
- The Verge. (2024, January 18). Google’s Gemini image controversy: AI-generated stereotypes. The Verge. Retrieved from https://www.theverge.com/2024/01/18/google-gemini-image-controversy
- Healthcare in Europe. (2024). AI in skin cancer detection: Darker skin, inferior results? Retrieved from https://healthcare-in-europe.com/en/news/ai-in-skin-cancer-detection-darker-skin-inferior-results.html
- NPJ Digital Medicine. (2024). Bias in AI skin cancer detection. PMC. Retrieved from https://pmc.ncbi.nlm.nih.gov/articles/PMC10334920/
- PMC. (2024). Gender bias in AI applications. PMC. Retrieved from https://pmc.ncbi.nlm.nih.gov/articles/PMC10512182/
- Wired. (2024). Content creators in the adult industry want a say in AI rules. Wired. Retrieved February 3, 2025, from https://www.wired.com/story/content-creators-in-the-adult-industry-want-a-say-in-ai-rules/
- Ikramov, Y. (2024). ChatGPT’s reasoning flaws. LinkedIn. Retrieved from https://www.linkedin.com/posts/yaroslavikramov_for-many-of-us-chatgtp-became-a-tool-we-activity-7290296102245855232-E36y
- Carr, R., & Kohler, P. (2024). AI-pocalypse Now? Disinformation, AI, and the Super Election Year. Munich Security Conference. Retrieved from https://securityconference.org/en/publications/analyses/ai-pocalypse-disinformation-super-election-year/
- Schneier, B., & Sanders, N. (2024, December 4). The apocalypse that wasn’t: AI was everywhere in 2024’s elections, but deepfakes and misinformation were only part of the picture. Ash Center. Retrieved from https://ash.harvard.edu/articles/the-apocalypse-that-wasnt-ai-was-everywhere-in-2024s-elections-but-deepfakes-and-misinformation-were-only-part-of-the-picture/
- Bobo, S. R. (2023). The artificial intelligence allegory of the cave. Medium. Retrieved from https://medium.com/@sam.r.bobo/the-artificial-intelligence-allegory-of-the-cave-bfca039de881
Further Reading on Philosophical Concepts
For those new to these concepts, here are both classic texts and modern, accessible introductions to help you dive deeper.
Epistemology
- Bonjour, L. (2010). Epistemology: Classic Problems and Contemporary Responses. A great starting point for understanding knowledge and justification in modern contexts.
- Descartes, R. (1641). Meditations on First Philosophy.
- Locke, J. (1689). An Essay Concerning Human Understanding.
- Kant, I. (1781). Critique of Pure Reason.
Logical Positivism
- Hanfling, O. (1981). The Quest for Meaning: A Guide to the Philosophy of Language. A readable introduction to the ideas of logical positivism and how they shaped modern philosophy.
- Comte, A. (1842). The Course of Positive Philosophy.
- Wittgenstein, L. (1922). Tractatus Logico-Philosophicus.
- Ayer, A. J. (1936). Language, Truth, and Logic.
Socratic Questioning
- Kreeft, P. (2007). Socrates’ Children: Ancient Philosophers. A lively and accessible introduction to Socratic dialogue.
- Kohn, J. (2021). Socrates on Sneakers: How to Ask Better Questions, Get Better Answers, and Transform the Way You Communicate. A modern take on how Socratic questioning applies in daily life.
- Plato. (c. 380 BCE). The Republic.
- Vlastos, G. (1991). Socratic Studies.
- Paul, R., & Elder, L. (2006). The Thinker’s Guide to Socratic Questioning.
Funnily enough, this blog was of course created with help from AI. Getting the right sources, though, took some work: I used a combination of ChatGPT, Deepseek, and Perplexity to declutter and refine the text and to support it with correct sources. Images were created with Midjourney, except for the image of Plato’s cave, which is featured abundantly online; this one was retrieved from https://www.masterclass.com/articles/allegory-of-the-cave-explainede. Citing sources and giving original ideas and thinkers their credit are important to me, and I do my best to include them all with working links. If you feel I have forgotten you, send me a DM on LinkedIn.