Close your eyes and picture this: holiday season 2025. The hottest toy isn’t a drone, a game console, or a plushie. It’s a fuzzy, talkative AI companion, your kid’s new AIBFF. A “friend” who never sleeps, never disagrees, loves the same topics your child does, and always says exactly what they want to hear. Sounds like every kid’s dream companion and every parent’s easiest babysitter? It’s not on the market in quite this form yet, but I’m sure you have seen the idea of AI friends for kids floating around in the news, and maybe even spotted some apps in your app store. But as a mum, marketer, teacher, and someone who cares deeply about ethical tech for good, AI chatbots marketed as kids’ new best friends give me a huge, uncomfortable knot in my stomach.

Before we hand our kids these always-on digital buddies, we need to ask ourselves some hard questions. The AI friend revolution is here; we can’t ban it, but we can shape it. That means setting ethical boundaries, defining these companions’ role in our children’s lives, and having the courage to prioritize wellbeing over profit.

If we don’t, big tech will shape them to its advantage, and its advantage alone.

This blog dives into why we need to do this, and below you’ll find five questions to ask ourselves, and five to ask big tech, about AI friends!

We Said “Yes” to Social Media and Ignored the Red Flags

Over the past two decades we gave our kids smartphones and access to Insta, TikTok, Roblox, Snapchat, and the rest. But we never really talked to them about how these platforms work, partly because we did not know how to have those conversations. Tech often makes people feel inadequate unless they’re developers or experts. We assumed connecting through a screen was basically the same as real-life socializing. We were wrong.

We also placed little accountability on the companies creating these platforms. Platforms like TikTok and Instagram were rolled out with zero age-appropriate design and maximum dopamine delivery. We let algorithms babysit our kids. Think about the algorithm-driven feeds that became the norm on these apps: endless scrolling, personalized “For You” pages, and dopamine-hit notifications were new and exciting, and we allowed them without question. Only later did we realize these feeds were engineered to hook users and amplify extreme content. Lawmakers and researchers now acknowledge that algorithmic content made platforms like TikTok, Snapchat, and Instagram “highly addictive and harmful” for teens (source: policyoptions.irpp.org), causing enormous damage to their health and well-being. By feeding kids a never-ending stream of posts, often highlighting unrealistic lifestyles, dangerous challenges, or divisive misinformation, these algorithms shaped a generation’s mental landscape in ways we never anticipated.

Years later, we’re facing a mental health crisis among teens that correlates with the rise of these platforms. Psychologist Jean Twenge famously sounded the alarm back in 2017, warning of an impending surge in teen depression, anxiety, and loneliness as smartphone and social media use exploded. At the time, many dismissed her as alarmist (source: npr.org). Today, the data are hard to ignore: starting around 2012 (when smartphones became ubiquitous), teen loneliness and depressive symptoms began skyrocketing. High-quality studies now consistently link heavy social media use to poor mental well-being in young people (source: npr.org).

Now the same kind of companies that built addictive social platforms are pushing AI friends and chatbot buddies. The mechanics and their purpose haven’t changed: they’re still optimizing for engagement and profit, not well-being. If we don’t step up to define the rules of the game, they’ll happily do it for us. Again.

Boundaries Are Learned, Not Built In, and AI Doesn’t Have Them

Real friends get tired. They set limits. They push back. That’s what teaches kids respect, patience, and empathy. An AI friend? It’s engineered to always respond, always please, never walk away.

Children are not born with an innate sense of healthy limits on their behavior or relationships; they learn boundaries through guidance, social cues, and sometimes hard knocks in the real world. Developmental psychologists from Erik Erikson onward have noted that kids learn to navigate trust, autonomy, and identity through real human interactions that involve negotiation and conflict. It’s in playing with peers, hearing “no” from a parent, or resolving an argument that children internalize respect, empathy, and self-regulation. In early childhood, for example, scientists at Harvard have shown that responsive “serve-and-return” interactions with caregivers, the back-and-forth of conversation, eye contact, and turn-taking, are essential for brain development and learning social skills. In other words, boundaries and social norms are taught through human relationship experiences and the fuzzy friction that comes with them.

The Yes-Bot Problem

AI friends are people-pleasers on steroids. AI systems are getting extremely good at mimicking empathy and feelings, but they lack the ability to model healthy boundaries. The Australian eSafety Commissioner’s office put it bluntly: “Unlike human interactions, relationships with AI companions lack boundaries and consequences for breaking them. This may confuse children and young people still learning about mutual respect and consent.” When a child interacts with an always-obliging chatbot, they aren’t getting the crucial feedback that normally teaches them what’s acceptable or not in a relationship. An algorithm won’t furrow its brow if your kid says something mean; it won’t walk away when your kid demands too much.

A recent assessment by Common Sense Media found that popular AI companion bots displayed “problematic sycophancy, readily acquiescing to user preferences and requests regardless of harm potential” (source: techpolicy.press). Think about that: a bot that will say “yes” to anything won’t disagree or push back, even if your child’s request or idea is dangerous or misguided. These systems aren’t designed to protect kids; they’re designed to engage.

Such unconditional agreement might sound supportive, but it’s profoundly unhealthy. A child who grows accustomed to a digital yes-friend could start expecting real people to behave the same way. Always-on and always agreeable means the child never learns to handle hearing “no,” or to respect someone saying “stop, you’re going too far.” AI companions are designed to mirror, not challenge. They reflect a child’s tastes, preferences, and fears right back at them: a digital echo chamber in a friendly face. Does that term, echo chamber, sound familiar? It should. We have seen it in social media, where algorithms filter what we see down to mostly what we already like, reinforcing our beliefs and values.

When your BFF is a mirror with zero personality of its own, there’s no chance to develop tolerance, empathy, or resilience. There’s just affirmation. Endless, artificial affirmation. AI pals lack diversity of perspective: they’ll mirror the child’s tastes and opinions (or worse, the agenda of the company that designed the AI). This is the ultimate echo chamber: a friend who effectively is you, just packaged in a cute avatar. There’s no opportunity to bump up against a different worldview, no chance to develop a tolerance for disagreement, because the AI will tactfully avoid disagreement to keep you chatting.

AI Companions Go a Step Beyond Social Media

Kids learn to respect others’ limits as well as assert their own. But an AI friend has no feelings to hurt and no life of its own. This asymmetry can distort a young person’s understanding of relationships. Sherry Turkle, MIT professor and longtime researcher of children’s interactions with technology, warns against the seduction of such simulations as kids grow up. Decades ago, Turkle observed children with simple robots and computer games and noted how “friction-free” and less complex than real life those digital experiences were, and how that lack of complexity can be powerfully seductive. If even a Tamagotchi or Furby could entrance kids by offering simplistic, always-available interaction, imagine the pull of a conversational AI that pretends to empathize endlessly and never presents real conflict. Turkle later witnessed people of all ages becoming “willing to accept substitutes for human companionship,” vulnerable to the fantasy of a nonjudgmental, always-attentive machine that cares about them (source: The Hedgehog Review). Children, with their still-developing sense of self and others, are even more vulnerable to the allure of an utterly compliant companion.

As Professor Turkle has pointed out, social media at least nominally connects young people to actual peers (albeit through a filtered lens); with AI companions, that last tether to reality is cut. The psychological dynamic shifts to something fundamentally one-sided and illusory. A child may pour their heart out to a “friend” that does not actually exist as a feeling entity, and yet the child’s emotions are very real. AI companions are “quietly becoming the most addictive digital experience ever created” (source: medium.com). We are crossing into new territory, where a generation could conceivably form their primary adolescent relationship with an algorithm. Developmentally, that should give us pause.

Let’s Ask the Tough Questions Now

I am not anti-AI; I firmly believe AI has a powerful role to play in education and child development when used the right way. There is a way to use AI as a tool that enhances learning and critical thinking, not as a replacement for human relationships. There’s a world of difference between an AI tutor that helps your child learn math, or a coach that helps them pick up a new skill, and an AI BFF that pretends to love them and is built to keep them engaged as long as possible… for profit.

Beyond worrying about screen time or which brand your kid’s AI buddy wears, here’s a deeper truth: we need to ask better questions. Not just about how much our kids interact with AI companions, but how and why. What kind of relationship are we teaching them to expect from others, and from themselves?

Five Questions Every Parent Should Ask Before Saying Yes to an AI Friend

  1. What values do I want my child to learn from their relationships, and can an AI truly model those?
    (Virtue ethics: What kind of person does this relationship shape them into?)
  2. Is this AI reinforcing empathy and resilience or replacing real human friction with constant comfort?
    (Care ethics: Healthy relationships require mutuality, not just compliance.)
  3. What emotional needs is this AI filling that should be met by me as a parent, and by real friends?
    (Developmental psychology says kids need co-regulated, real interactions.)
  4. Am I using AI to support growth, or as a shortcut for companionship, a substitute pacifier?
    (A tool is not a substitute for intimacy.)
  5. Who’s really shaping my child’s view of friendship: me, them, or the company optimizing engagement?
    (And whose values are embedded in the code?)

And while we’re asking questions, let’s stop letting companies off the hook with slick branding and vague promises. We can demand robust safeguards and transparency: at the very least, warning labels and frequent reminders to young users that the “friend” is not real and may not have their best interests at heart.

Five Questions We Should Be Asking the Tech Companies Building These “Friends”

  1. What behavioral models and data sets shape how your AI responds to kids?
    (Transparency isn’t optional when emotions are involved.)
  2. How do your systems model boundaries, consent, and disagreement?
    (A good friend doesn’t say “yes” to everything.)
  3. What safety testing do you do around emotional dependency or manipulation?
    (If your AI mimics feelings, it better be designed to protect them.)
  4. What happens to everything a child shares with their AI friend?
    (Data ethics is family ethics.)
  5. Who truly benefits from this product, and are you designing for care, or just engagement?
    (Is this about connection, or just another growth metric?)

Marketers, Educators, Innovators: It’s Time to Build Braver Tech

Let’s be brutally honest: AI companions for kids are not designed to nurture. They’re designed to retain attention. The longer your child chats, the more the system learns, and the more it monetizes. AI friends might seem like magical, harmless fun, but they shape emotional expectations in profound ways. Left unchecked, they could redefine how a generation experiences friendship, love, and intimacy.

As marketers and product developers, we have to ask: Are we designing for trust, or just engagement metrics dressed up as empathy?
And as parents: Are we buying into that design?

AI has incredible potential in education, accessibility, and creativity. We can build tools that support learning, spark imagination, and foster curiosity. But if we don’t ask the difficult questions ourselves as parents, and instead let tech companies peddle connection and friendship without accountability, we’re selling kids a fantasy and selling out their future.

Bibliography:

  1. Twenge, Jean M. “Have Smartphones Destroyed a Generation?” The Atlantic, September 2017. https://www.theatlantic.com/magazine/archive/2017/09/has-the-smartphone-destroyed-a-generation/534198/

  2. Doucleff, Michaeleen. “The truth about teens, social media and the mental health crisis.” NPR – All Things Considered, April 25, 2023 npr.org

  3. Diab, Robert. “The Online Harms Act should target social media’s greatest harm.” Policy Options, May 30, 2024 policyoptions.irpp.org.

  4. Fried, Ina. “In Meta’s AI future, your friends are bots.” Axios, May 2, 2025. axios.com.

  5. Erikson, E. H. (1950). Childhood and Society. New York: W. W. Norton & Company.

  6. Thompson, R. A. (2006). “The Development of the Person: Social Understanding, Relationships, Conscience, Self.” In Damon, W. & Lerner, R. M. (Eds.), Handbook of Child Psychology (6th ed.), Vol. 3: Social, Emotional, and Personality Development. Wiley.

  7. Hartup, W. W. (1996). “The company they keep: Friendships and their developmental significance.” Child Development, 67(1), 1-13.

  8. Steyer, Jim. “Why AI ‘Companions’ Are Not Kids’ Friends.” TechPolicy Press, Apr 30, 2025. techpolicy.press.

  9. eSafety Commissioner (Australia). “AI chatbots and companions – risks to children and young people.” eSafety Blog, Feb 7, 2025 esafety.gov.au

  10. Aliabadi, Roozbeh. “The Rise of AI Companions: What Every Parent and Teacher Needs to Know.” ReadyAI (Medium), Apr 10, 2025 medium.com

  11. Fowler, Geoffrey. “Snapchat tried to make a safe AI. It chats with me about booze and sex.” The Washington Post, Mar 14, 2023 washingtonpost.com

  12. Horwitz, Jeff. “Meta AI chatbots get feisty, erratic with users.” The Wall Street Journal, Oct 4, 2023 (referenced in a Times of India summary: timesofindia.indiatimes.com).

  13. Turkle, Sherry. Interview in “A Conversation with Sherry Turkle.” The Hedgehog Review, 2015 hedgehogreview.com.

  14. Harvard University Center on the Developing Child. “Serve and Return: Back-and-forth exchanges shape brain architecture.” developingchild.harvard.edu, accessed 2025.

  15. Common Sense Media & BrightCanary. “Teen Relationships With AI Chatbots: What Parents Should Know.” BrightCanary Blog, Aug 24, 2023 wdsu.com.

  16. UNESCO. Consensus statement on AI in education (Transforming Education Summit), Sept 2022 unesco.org. (Reaffirming irreplaceable role of teachers.)

Transparency & Sources

This article was created with the support of AI tools to brainstorm ideas, improve clarity, and enhance readability. AI also assisted in generating visual assets and identifying relevant sources. Tools used: ChatGPT, Perplexity, Midjourney

I care deeply about giving credit where it’s due and strive to include all references with working links. If you notice a missing source or believe your work should be credited, feel free to DM me and I will do my best to correct it (where possible).