Fact Checking and Curiosity: When AI Sounds Confident But Wrong

"Did you know all the blood vessels in your body would stretch 100,000 kilometers if you laid them end to end?"

My son brought this up at dinner one night.

"Where'd you hear that?" I asked.

"I think it was from a YouTube video. Or maybe school?"

After dinner, I saw an opportunity. "Let's check that together," I said. We opened ChatGPT and asked, "How long are all the blood vessels in the human body?"

The answer came back confidently: 100,000 kilometers.

So was it right? We did a quick Google search. Two medical-school-backed websites confirmed it. But then we found Wikipedia, which added nuance: the “100,000 km” figure turns out to be a widely repeated myth that more recent research is starting to challenge. The AI gave us the popular answer, not the scientifically accurate one. That was a lightbulb moment. AI is great at repeating the internet, but it's not great at vetting it.

And that's the challenge we're facing now. AI doesn't just make mistakes. It makes them with complete confidence.

The Confidence Problem

When my kids ask me a question and I don't know the answer, I tell them: "I'm not sure. Let's look it up together."

AI doesn't do that. Instead, it answers every question with the same confident tone. To a 10-year-old (or honestly, to most adults), that confidence feels like authority. But authority and accuracy aren't the same thing. If we want our kids to use AI wisely, we need to teach them how to verify what they're learning—without turning research into a huge, tedious chore.

The Hallucination Problem

AI's confidence becomes especially problematic when it starts inventing answers.

This past summer, I couldn’t figure out how to start the washing machine in a rental apartment, so I asked ChatGPT for help. I gave it the brand and model and uploaded a photo of the touchscreen. It told me the "Start" button was in the upper left corner. It wasn't. I corrected it, and it confidently assured me the button was actually in the lower right. Also wrong. It was just making it up. Eventually, my husband figured it out and we were able to wash our clothes.

That's what AI "hallucination" looks like—confidently wrong answers that sound completely believable. And here's the problem: it never tells you when it's guessing. Whether the topic is volcanoes or washing machine buttons, it sounds equally sure of itself.

Here's why this happens: AI tools aren't databases of verified facts. They're fundamentally text generators that predict the most probable next word based on patterns learned from their training data. Think of AI like the predictive text on your phone, but trained on the entire internet. If you type “Have a great”, your phone suggests “day”. It doesn't know what a day is; it just knows that “day” usually follows “great”. ChatGPT, Claude, Gemini, and the rest are doing the same thing, just at a vastly larger scale, producing whole paragraphs instead of single words. They prioritize sounding smooth over being right.
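(For the curious, here's a toy sketch of that predictive-text idea in Python. It simply counts which word most often follows each word in a made-up sample sentence; real AI models use neural networks trained on vastly more text, but the core move, picking the most probable next word, is the same.)

```python
# A toy "next word" predictor, like phone autocomplete in miniature.
# The sample text is invented for illustration.
from collections import Counter, defaultdict

sample_text = "have a great day have a great weekend have a nice day have a great day"

# Count how often each word follows each other word.
followers = defaultdict(Counter)
words = sample_text.split()
for current, nxt in zip(words, words[1:]):
    followers[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent follower -- statistics, not understanding."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else "?"

print(predict_next("great"))  # -> "day", because that pattern is most common
```

The predictor has no idea what a “day” is; it only knows the counts. Scale that up enormously and you get something that sounds fluent without ever verifying a fact.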

These tools do try to prioritize authoritative sources, but their training data ranges from peer-reviewed medical journals to social media posts. Because AI learns from human-generated content, including the less reliable corners of the internet, it can reproduce the biases and misinformation it encountered during training.

So when we (or our kids) use AI, the goal isn't blind trust or constant suspicion—it's awareness. AI is a brilliant assistant, but a terrible fact-checker.

That's why the next skill matters so much—not avoiding AI, but learning quick ways to check it.

The 30-Second Habit That Changes Everything

Here's a simple strategy for checking accuracy, and it's one professional fact-checkers actually use: lateral reading. Instead of diving deeper into one AI answer, open a few new tabs and check:

  • Who's behind this source?

  • Where else is this information mentioned?

  • Do credible sites agree?

It's called lateral reading because you're reading sideways instead of down. And it takes about 30 seconds. That blood vessels moment at dinner? That was lateral reading in action.

What we did:

  1. Asked ChatGPT for the answer

  2. Opened a few new tabs to verify

  3. Found medical school websites that confirmed it

  4. Found Wikipedia with updated research that questioned it

Within a few minutes, my son went from repeating a fact he'd heard to understanding that scientific knowledge evolves. He learned that even widely accepted numbers can be challenged by new research, and that's part of what makes science exciting.

AI tools sometimes include citations with their responses. Those can be a great starting point for lateral reading, but I also encourage you and your kids to go beyond them — to look for independent sources that verify or challenge what the AI says. 

Start With Trusted Sources for Important Topics

For topics that really matter, like health, science, or history, try reversing the order. Start with a trusted source first, then bring AI in to help explain.

For example:

  • Health questions? Start with the Mayo Clinic or CDC

  • Space and science? Start with NASA or National Geographic

  • History? Start with the Smithsonian or primary source documents

Once you've found reliable information, you can ask AI: "Can you explain this in simpler terms?" or "Can you summarize the key points for me?"

This approach lets facts lead and AI clarify. Use lateral reading when you're checking AI's answers. Use this trusted-sources-first approach when accuracy is critical from the start.

Let AIs Check Each Other (But Be Careful)

This one is interesting. Try using multiple AI tools together.

Here's what we do:

  • Ask ChatGPT a question

  • Copy its answer and paste it into Claude or Gemini

  • Ask: "Is this accurate? What sources support it?"

You can also type the same question into multiple AI tools and compare their responses. Different answers mean it's time for lateral reading.

Important note: Even when multiple AIs agree, they can all still be wrong. They often draw from similar training data, so they might repeat the same mistakes. That's why lateral reading with credible sources is still essential.
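(If you're comfortable with a little code, this cross-check can even be scripted. Here's a minimal sketch using the official OpenAI and Anthropic Python SDKs; it assumes you have API keys set in your environment, and the model names are examples that may change over time.)

```python
# A minimal sketch of the "let AIs check each other" idea.
# Assumes OPENAI_API_KEY and ANTHROPIC_API_KEY are set in the environment;
# the model names below are examples and may need updating.
from openai import OpenAI
import anthropic

question = "How long are all the blood vessels in the human body?"

# Step 1: ask ChatGPT.
openai_client = OpenAI()
first = openai_client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": question}],
)
first_answer = first.choices[0].message.content
print("ChatGPT says:", first_answer)

# Step 2: paste that answer into Claude and ask it to vet the claim.
claude_client = anthropic.Anthropic()
review = claude_client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=500,
    messages=[{
        "role": "user",
        "content": f"Is this accurate? What sources support it?\n\n{first_answer}",
    }],
)
print("Claude says:", review.content[0].text)
```

Even scripted, the caveat above still applies: agreement between models isn't proof, so keep lateral reading in the loop.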

The Conversation I'm Having With My Kids

I don't want my kids to distrust AI entirely. That's not realistic, and it's not helpful. But I do want them to pause before accepting everything AI says as fact. So here's the question I keep asking them: "How do you know that's true?" Not in a challenging way. Just curious.

"ChatGPT told me."

"Okay, but how does ChatGPT know? Where could we check?"

Over time, this question has become automatic for them. They're starting to ask it themselves.

What About When Kids Are Just Exploring?

I don't ask my kids to fact-check every single thing they ask AI. If my son wants to know what it would be like to live in a different galaxy or my daughter is brainstorming fantasy creatures for a story, I'm not going to make them verify every detail.

But when the topic shifts to something real, like science homework, health information, or historical events, that's when I nudge them toward checking.

Why This Feels Urgent to Me

I keep thinking about how we handled the arrival of smartphones and social media. Back then, most of us, parents and kids alike, were learning as we went, hoping schools, tech companies, or policymakers would figure out the guardrails for us. They didn’t.

But this time, we have a head start. We know what happens when we treat a new technology as something kids will just “figure out”. And we also know how quickly families can learn, adapt, and build healthy habits when we get curious together.

AI brings new challenges, but also an incredible opportunity. Unlike social media, it can be a tool for learning, creativity, and even connection, if we approach it with awareness and intention.

So instead of worrying that our kids will believe anything AI says, we can teach them how to ask better questions, spot red flags, and think critically. That’s empowerment.

Curiosity + Checking = Thinking

Every time my kids pause to verify a claim, compare two answers, or question a confident-sounding statement, they're building something bigger than fact-checking skills.

They're building critical thinking skills. They're learning that curiosity isn't just about asking questions. It's about questioning answers. And they're discovering that the most important tool they have isn't the AI itself. It's their own judgment about when to trust it, when to check it, and when to dig deeper.

That's the skill I want them to carry forward. Not fear of AI. Not blind trust. Just healthy, curious skepticism.

This Week's Try-It

Next time your child uses AI to learn something new:

  1. Ask them: "How do you know that's true?" Not as a test, but as genuine curiosity.

  2. Try lateral reading together. Open a few tabs, check credible sources, see if the information holds up.

  3. If you find a contradiction, celebrate it. Say, "Great catch! Let's figure out which one is right."

You're not trying to make your kid a professional fact-checker. You're just teaching them to pause, check, and think before accepting AI's confident answers as truth.

What's Next

Next up, I'll share how kids can use AI as a debate partner, one that challenges their ideas and presents different viewpoints to strengthen their reasoning and clarify what they actually believe.

If you're finding this series helpful, subscribe below to follow along with the full set of AI explorations.

Sources

For more information about lateral reading, check out Stanford University’s Civic Online Reasoning website - https://cor.inquirygroup.org/curriculum/collections/teaching-lateral-reading/

A Note on Process

I used Claude, ChatGPT and Gemini to help draft and refine this post. I started by outlining the main ideas I wanted to cover—lateral reading, trusted sources, and the "how do you know that's true?" question. Then I worked with the AI tools to find the right structure and tone, reading sections aloud to make sure they sounded like me. The examples of AI in my family’s life are all true, not made up by AI.
