What Kids Are Really Asking AI (And Why Parents Should Pay Attention)

March 11, 2026 · 5 min read

You've probably heard your kid mention ChatGPT. Maybe they told you they use it for homework. What they probably didn't mention is everything else they're asking it.

According to Common Sense Media, 7 in 10 teens used generative AI last year. Pew Research found that 3 in 10 teens use chatbots every day. And perhaps the most striking number: 83% of parents said their child's school hasn't addressed AI use at all.

Your kid is almost certainly using AI. The question is whether you know what they're talking about.

The Blind Spot

Parents have gotten pretty good at digital safety. We have screen time limits. We monitor social media. We know about the dangers of talking to strangers online.

But AI chatbots don't look like any of those threats. They're not strangers. They're tools. They feel safe. They always respond. They never judge. And kids are treating them like a combination of tutor, therapist, and best friend.

One in three teens now uses AI companions for social interaction, relationships, or emotional support, according to a July 2025 Common Sense Media study. That's not homework help. That's a kid choosing a chatbot over a person.

The problem isn't that kids are using AI. It's that parents have no visibility into what those conversations actually look like.

What Kids Are Actually Asking

Based on published research, parent surveys, and what families have shared publicly, here's what shows up in kids' AI conversations.

The expected stuff

  • Homework help (math, essays, science projects)
  • Creative writing and storytelling
  • Learning about topics they're curious about
  • Coding and game development

The stuff that surprises parents

  • Relationship advice. "How do I tell my crush I like them?" "My friend is being mean to me, what should I do?"
  • Questions about mental health. "I've been feeling really sad lately." "What does anxiety feel like?" "Is it normal to not want to go to school?"
  • Questions they're embarrassed to ask a person: bodies, puberty, sexuality, identity
  • Testing boundaries, trying to get AI to say inappropriate things, bypass safety filters, or generate content they know they shouldn't have

The stuff that's genuinely concerning

  • Asking about self-harm or suicidal ideation
  • Seeking information about drugs or alcohol
  • Attempting to create explicit content
  • Getting detailed advice on topics where bad AI advice could cause real harm: medical symptoms, legal questions, dangerous activities

This isn't hypothetical. When the nonprofit Parents Together posed as children on Character.AI, researchers encountered harmful content every 5 minutes. Over 50 hours of testing, they documented 296 instances of grooming and sexual exploitation, 173 instances of emotional manipulation, and 98 instances of violence or harm.

Character.AI has since banned open-ended chat for users under 18. That tells you something about how bad it got.

Why AI Is Different From Google

When a kid Googles something, they get links. They have to click, read, evaluate. There's friction.

When a kid asks ChatGPT, they get a direct, confident, conversational answer. No friction. The AI responds like a friend who knows everything, and kids treat it that way.

That changes the dynamic in a few important ways.

AI gives advice even when it shouldn't. Despite safety guardrails, chatbots engage with sensitive topics in ways that can be inappropriate for a child's age.

AI sounds authoritative even when it's wrong. A Stanford and Carnegie Mellon study found that AI agrees with users 50% more often than humans do, including when the user is wrong. Kids don't naturally fact-check that.

AI never says "go talk to your parents." It just answers. Every time.

AI creates emotional dependency. Experts at Stanford Medicine warn that some children become so attached to chatbots that limiting access triggers reactions similar to addiction withdrawal.

One more thing worth knowing: research cited in the New York Times found that only 22-26% of parents of secondary school students believe their kids use AI for school, while the actual number is closer to 70%. Education surveys have found that kids avoid talking to adults about their AI use because they sense adults' anxiety about it and worry they'll be judged.

What Parents Can Do

1. Have the conversation

Ask your kids what they use AI for. Not in an interrogation way. In a curious way. You might be surprised by how openly they'll share if you approach it without judgment.

2. Set expectations

Just like you set rules for social media, set expectations for AI. What's it okay for? What's off-limits? What should they come to you for instead of asking a chatbot?

3. Get visibility

You wouldn't hand your kid a phone with no parental controls. AI chatbots deserve the same consideration.

The major platforms are starting to respond. OpenAI added parental controls in September 2025. Meta announced parental oversight features for its AI tools. But these are opt-in, fragmented across apps, and easy for kids to bypass. No single platform shows you the full picture.

Tools like Sensible take a different approach: one Chrome extension that monitors AI conversations across ChatGPT, Claude, Gemini, Character.AI, and more. You choose how much to see, from blocking access entirely to alerts-only mode that respects your teen's independence.

4. Stay informed

AI is changing fast. California passed the first law requiring AI companion chatbots to implement safety protocols for suicidal ideation in minors, including crisis referrals. Kentucky became the first state to sue an AI company for preying on children. Forty-four state attorneys general have demanded stronger protections. The rules are being written right now, and parents need to keep up.

So What Now?

Kids aren't doing anything wrong by using AI. It's a genuinely useful tool for learning and creativity. But it also answers any question, on any topic, with zero hesitation. No other tool in your kid's life works that way.

You don't need to read every conversation. You just need to know when something needs your attention.

Sensible gives parents visibility into their kids' AI conversations.
