The Ethics Crisis in AI: Why We Must Stay Critical
Imagine your child coming home from school devastated. They never sent any nude photos, yet AI-generated images of their body are circulating online. All it took was a profile picture and an AI tool. This is already happening all over the world. Fake images destroy lives just as much as real ones: created with ease, shared without consent, and often without consequence (although legislation is catching up in many places). With just your LinkedIn photo, I could generate a nude image of you. And even animate it.
In the US, 1 in 8 young people personally knows someone who has been targeted, and 1 in 17 teens (6%) has had deepfake nudes made of them (Thorn, 2024). Comparable Europe-wide research does not exist, but a 2023 Belgian study found that over half of teens knew about deepnudes, and of those, 60.5 per cent had already tried to make one themselves. Two years on, that percentage has very likely only increased, and other countries likely show similar numbers.
Now imagine that same AI deciding who should be hired. Or which neighborhood gets more police patrols. Or who gets saved in an emergency. Artificial intelligence now shapes healthcare, education, and justice systems, domains once guided by human conscience and ethics. But if we can’t agree on ethical frameworks for ourselves, how can we program them into machines? And what happens when our children inherit a world where moral choices are made by algorithms we don’t fully understand, because we have stopped thinking and left it to AI?
It sounds like sci-fi—but it’s not.
AI is entering spaces that require deep ethical reasoning. The problem? Current AI can’t reason. It just calculates.
What happens when AI becomes agentic and we have forgotten how to think for ourselves? What kind of world do we let AI shape: one without ethics, without context, without empathy? Generative AI really doesn’t care, and we should not be fooled into thinking it does, even if it mimics empathy and love to perfection.
The Real Issue: AI Lacks Ethics / Let’s Make Sure We Don’t
Current AI doesn’t think critically. It mimics patterns. If we engage passively, accepting biased or surface-level outputs, we reinforce AI’s blind spots. But if we challenge it, if we ask sharper, more ethical questions, we train it better. The way we prompt and train AI determines the future of its reasoning. This blog isn’t about fighting AI. It’s about sharpening our minds with classical ethical frameworks (Utilitarianism, Deontology, Confucianism, and Philosophy of Mind) so we can shape AI ethically, critically, and creatively.
Utilitarianism: Don’t Let AI Decide What “Good” Means / The “Greater Good” Trap
Utilitarianism argues that the morally right action maximizes happiness for the greatest number (Bentham, Mill). This philosophy focuses on outcomes: the best decision is the one that leads to the most benefit for the most people. AI systems often use this framework by optimizing for measurable goals like efficiency, cost savings, or majority satisfaction. But who defines “happiness”—and at whose expense? In the context of AI, utilitarian logic can easily sideline minority needs, ethical nuance, or human dignity if those don’t maximize the algorithm’s primary metric.
Core Challenge: Optimization often sacrifices fairness.
Business Impact: AI that prioritizes efficiency over equity risks reputational damage, legal liability, and stakeholder distrust.
AI Examples: 🚗 Self-driving cars prioritize saving the most lives—even if it means sacrificing passengers. 💊 Healthcare AI allocates resources to those with higher survival odds. 📢 Content moderation boosts engagement while suppressing controversial truths.
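To make the trap concrete, here is a toy calculation in Python (all numbers invented for illustration): an optimizer that maximizes the population-weighted sum of welfare will pick the option that leaves the minority far worse off, because the minority’s loss simply disappears into the total.

```python
# Toy illustration of the "greater good" trap. A purely utilitarian
# optimizer picks whichever policy maximizes the population-weighted
# sum of welfare, even if the minority ends up far worse off.
# All numbers are invented for illustration.

populations = {"majority": 900, "minority": 100}

# Average welfare per person in each group under two candidate policies
options = {
    "fair": {"majority": 70, "minority": 70},
    "efficient": {"majority": 75, "minority": 30},
}

def total_welfare(welfare_by_group):
    return sum(populations[group] * w for group, w in welfare_by_group.items())

best = max(options, key=lambda name: total_welfare(options[name]))
for name, welfare in options.items():
    print(f"{name}: total={total_welfare(welfare)}, minority avg={welfare['minority']}")
print(f"Utilitarian optimizer picks: {best}")  # "efficient": 70,500 beats 70,000
```

A Rawlsian check (maximize the welfare of the worst-off group instead) would pick “fair”; the choice of objective function is where the ethics lives.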
Prompt Inspiration:
- “What metrics define the ‘greater good’ in this decision?”
- “Who benefits most from this outcome?”
- “What are the trade-offs, and who is harmed?”
- “Simulate a debate between a Utilitarian and a Rawlsian theorist on the fairness of this AI decision.”
- “Identify three stakeholder groups excluded from the ‘greater good’ calculation in this model, and propose mitigations.”
- “What long-term business risks arise from prioritizing short-term efficiency gains here?” (For how to put prompts like these to work programmatically, see the sketch right after this list.)
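These prompts work outside the chat window too. Below is a minimal sketch, assuming the `openai` Python package, an `OPENAI_API_KEY` environment variable, and a placeholder model name; the decision description is invented and stands in for whatever your own system does.

```python
# A minimal sketch: run utilitarian-audit prompts against a description
# of an AI-driven decision. Assumes the `openai` package and an
# OPENAI_API_KEY environment variable; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

decision = (
    "A triage model allocates ICU beds to patients with the highest "
    "predicted survival odds."  # invented example of a decision to audit
)

audit_prompts = [
    "What metrics define the 'greater good' in this decision?",
    "Who benefits most from this outcome?",
    "What are the trade-offs, and who is harmed?",
]

for prompt in audit_prompts:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder: any chat-capable model
        messages=[
            {"role": "system",
             "content": "You are a critical ethics auditor. Be specific about who is harmed."},
            {"role": "user",
             "content": f"Decision under review: {decision}\n\n{prompt}"},
        ],
    )
    print(f"Q: {prompt}\nA: {response.choices[0].message.content}\n")
```

The same loop works for the deontological, Confucian, and Philosophy of Mind prompts in the sections that follow.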
Deontology: When AI’s Rules Mirror Bias / Rules vs. Reality
Deontology says some actions are always wrong, regardless of outcome (Kant). It emphasizes moral duties and principles—like honesty or respect for autonomy—that must be upheld, no matter the consequences. In the context of AI, this matters because AI systems often replicate rules or policies without understanding whether they uphold universal ethical standards. If these rules are biased or incomplete, they can cause harm under the guise of neutrality. AI needs strong ethical constraints—not just efficient algorithms—to ensure it aligns with values like human dignity and justice.
Core Challenge: Blind adherence to rules can perpetuate harm.
Business Impact: AI that ignores context amplifies systemic biases (e.g., loan denial algorithms using zip codes as proxies for risk).
AI Examples: 🚔 Predictive policing disproportionately targets marginalized communities. 💻 Deepfake platforms ignored consent until laws like Spain’s Digital Dignity Act.
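Here is a minimal sketch of how a “neutral” rule smuggles in bias (all data invented): the rule below never consults a protected attribute, yet because zip code correlates with group membership, outcomes still split along group lines.

```python
# Toy example of proxy bias: a rule-based loan filter that never looks
# at a protected attribute, yet discriminates through a correlated
# proxy (zip code). All data below is invented for illustration.

HIGH_RISK_ZIPS = {"10451", "60621"}  # e.g., flagged for historic default rates

applicants = [
    {"name": "A", "zip": "10451", "group": "X", "income": 85_000},
    {"name": "B", "zip": "60621", "group": "X", "income": 92_000},
    {"name": "C", "zip": "94301", "group": "Y", "income": 70_000},
]

def rule_based_decision(applicant):
    # The rule looks "neutral": group membership is never consulted.
    return "deny" if applicant["zip"] in HIGH_RISK_ZIPS else "approve"

for a in applicants:
    print(a["name"], a["group"], a["income"], rule_based_decision(a))
# Group X is denied despite higher incomes: the zip code carries the
# bias the rule pretends not to have.
```

This is exactly the deontological worry: the rule is followed perfectly, and the harm happens anyway.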
Prompt Inspiration:
- “What ethical constraints are built into this system?”
- “Does this output respect universal rights?”
- “Is there a rule being followed here that could be harmful?”
- “Identify a scenario where this AI’s ethical constraints could harm marginalized groups. Propose a context-aware override.”
- “Critique this output through both deontological (rule-based) and consequentialist lenses. Where do they conflict?”
- “How would Kant’s categorical imperative reshape this AI’s decision hierarchy?”
Confucian Ethics: Design for Trust, Not Just Efficiency / AI as a Social Harmony Tool
Confucianism prioritizes social harmony and moral relationships. Rooted in Eastern philosophy, Confucian ethics emphasize the importance of duty, mutual respect, and moral development within relationships—between students and teachers, citizens and governments, individuals and families. In the context of AI, Confucian thinking challenges us to ask whether AI systems are designed to support societal well-being and cohesion, rather than just optimize for efficiency or individual gain.
Core Challenge: Efficiency-driven AI erodes trust.
Business Impact: Teams using AI for layoff decisions or productivity monitoring risk collapsing employee morale and loyalty.
AI Examples: 🎓 South Korea’s AI tutors now include empathy modules. 🤝 New Jersey students co-created “Digital Respect” policies after a deepfake scandal.
Prompt Inspiration:
- “Does this output strengthen trust or alienate communities?”
- “How would this decision impact long-term relationships?”
- “Is this AI’s output promoting social harmony or just optimizing?”
- “Map how this AI policy impacts trust across employees, customers, and regulators. Recommend harmony-focused adjustments.”
- “Design a feedback loop where frontline workers can challenge this AI’s social impact.”
- “What five-year societal ripple effects could this AI decision create?”
Philosophy of Mind: AI Mimics, But Doesn’t Understand / Why AI Can’t “Understand” Harm
AI doesn’t comprehend harm—it just scales it. This is where the philosophy of mind comes in: it examines the nature of consciousness, awareness, and understanding. According to thinkers like John Searle and his famous “Chinese Room” argument, AI can process language or simulate empathy without truly understanding it. That means it can only replicate ethical language or decisions without grasping the moral weight behind them.
Core Challenge: AI mimics patterns, not empathy.
Business Impact: Systems that “mirror” historical data (e.g., toxic chat logs, biased hiring records) replicate past harms at scale.
AI Example: The deepfakes discussed above. Creating one is so easy that it barely feels harmful, and that is exactly the problem: the system scales the harm without ever comprehending it.
Prompt Inspiration:
- “Why is [ethical issue] harmful beyond legal consequences?”
- “Explain this ethical issue like I’m five.”
- “What’s missing in your understanding of consent/empathy?”
- “What harmful patterns are you replicating?”
Even “Deep Research” Modes Are Still Just Calculating—Not Reasoning.
Models like DeepSeek, GPT-4 Turbo, or Claude Opus may simulate deep reasoning, but under the hood they’re still doing what all large language models do: predicting the next most likely word or token based on patterns in their training data. Even when they use tools like citation checking, multi-step logic chains, or agent-style memory, they’re still calculating, just at more sophisticated levels. There is no intelligence combined with compassion, care, love, or empathy. Only the mimicked kind. And then it matters whom the AI is mimicking.
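You can watch the calculating for yourself. Below is a minimal sketch in Python using the small open GPT-2 model via Hugging Face’s transformers library (my choice for demonstration; nothing here is specific to the commercial models named above, which work the same way in principle at vastly larger scale). Given an “empathetic” prompt, all the model produces is a probability distribution over possible next tokens.

```python
# A minimal demonstration of what a language model actually does:
# score every possible next token and sample from the most probable.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "I am so sorry for your loss, and I"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every token in the vocabulary

# Turn the scores at the last position into a probability distribution
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)

print(f"After {prompt!r}, the model's most likely next tokens are:")
for p, idx in zip(top.values, top.indices):
    print(f"  {tokenizer.decode(idx.item())!r}  p={p.item():.3f}")
# There is no grief anywhere in this computation, only token statistics.
```

Every “empathetic” sentence an LLM produces is this step, repeated one token at a time.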
[Diagram: True Reasoning (Human) vs. Simulated Reasoning (AI)]
Download my free sample lesson plan to teach students critical thinking with AI.
Sources and recommended reading:
Utilitarianism:
- Mill, J. S. (1863). Utilitarianism.
- Bentham, J. (1789). An Introduction to the Principles of Morals and Legislation.
- Singer, P. (2011). Practical Ethics (3rd ed.). Cambridge University Press.
Deontology:
- Kant, I. (1785). Groundwork of the Metaphysics of Morals.
- Stanford Encyclopedia of Philosophy. Deontological Ethics (a great introduction).
- Sandel, M. J. (2009). Justice: What’s the Right Thing to Do? (Ch. 5 covers Kant).
Confucian Ethics:
- Confucius. (c. 500 BCE). The Analects.
- Ames, R. T., & Rosemont, H. (1998). The Analects of Confucius: A Philosophical Translation.
- Li, C. (2014). The Confucian Philosophy of Harmony. Routledge.
Philosophy of Mind & AI:
- Searle, J. (1980). Minds, Brains, and Programs. Behavioral and Brain Sciences, 3(3), 417–424.
- Chalmers, D. J. (1996). The Conscious Mind: In Search of a Fundamental Theory.
- Haraway, D. (1991). A Cyborg Manifesto.
- Education Week. (2024). Why Schools Need to Wake Up to the Threat of AI Deepfakes and Bullying.
- BBC. (2023). AI-generated naked child images shock Spanish town of Almendralejo.
- BBC. (2024). The racist AI deepfake that fooled and divided a community.
- Common Sense Media. (2024). Teen AI deepfake abuse report.
- Thorn. (2024). Youth and deepfake abuse.
- European Commission. (2024). EU AI Act overview.
- ProPublica. (2023). Bias in predictive policing.
- World Economic Forum. (2025). 6 ways AI is transforming global health.
Images created with Midjourney. Special thanks to ChatGPT and DeepSeek for AI-powered writing assistance in decluttering and refining this blog, and to Perplexity for finding some additional sources. Sources are important to me, and I do my best to include them all with working links. If you feel I have forgotten you, send me a DM on LinkedIn.