"Am I AI-Hallucinating?"

"Am I AI-Hallucinating?"

A unique lens on why AI systems sometimes mix fact and fiction.

Image generated by Leonardo AI from the author's prompt: “A warm sunlit artist's studio with multiple paintbrushes with a single canvas on an easel, a painting on canvas of different patterns that blend harmoniously, soft golden light, impressionist style.”

Through my lens as both an artist and technologist, I see AI hallucinations differently than most. It's not a simple story of truth versus fiction, or real versus fake. Understanding it, like any creative process, requires different-sized paintbrushes to paint the full picture of what's actually happening.

You know that moment when you're talking with a friend, and they combine two different memories into one story? They're not making things up; their brain is just connecting dots in a way that seems logical but isn't quite right. That's surprisingly similar to what's happening when AI "hallucinates."

When people hear "AI hallucination," they often imagine a system gone rogue, inventing wild stories out of nowhere. But that's not what's happening at all. It's more like when you're learning a new language and occasionally mix up idioms: you're using real pieces of language, just not always in the right way.

I've seen this play out in fascinating ways. Just the other day, I asked an AI to recommend a John Grisham legal thriller I hadn't read yet. It described what sounded like the perfect book: a compelling case about a small-town lawyer taking on a tech giant, with all the usual Grisham twists. The only problem? The book didn't exist. The AI had taken elements from real Grisham novels and combined them into something that sounded right but wasn't real.

This happens because AI learns much like we do, by taking in lots of information and finding patterns. Think about how you learned your profession. You didn't just memorize procedures from a manual; you absorbed knowledge from many sources: technical documentation, case studies, mentors' experiences, industry publications, and practical hands-on work. Each source taught you something different about how to be effective in your role.

AI does something similar but at a much larger scale. It learns from:

  • Scientific papers that teach precise technical language
  • Stories and plays that show how to handle emotion and metaphor
  • News articles that demonstrate factual reporting
  • Technical documents that explain step-by-step processes
  • And yes, creative works that help it understand the human experience

That last point about creative works often raises eyebrows. Why include fiction and art in AI training at all? But think about how much we humans learn about conflict resolution, emotional intelligence, and social dynamics through stories. A great novel or film can teach us more about human nature than a dozen psychology textbooks. Just as we need both technical knowledge and creative understanding, AI needs these same reference points to understand the nuanced ways we communicate and interact.

Here's what's really interesting: AI doesn't just file away this information like a giant digital filing cabinet. Instead, it uses probability to make educated guesses about how to combine everything it's learned. Most of the time, this works brilliantly: the system understands context, responds appropriately, and even shows a bit of creativity. But sometimes, like our friend mixing up memories, it connects dots that shouldn't be connected.
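
To make that concrete, here is a deliberately tiny sketch in Python. The word table and its probabilities are invented for illustration; a real language model works with billions of learned weights, but the core move, picking the next word by probability, is the same:

```python
import random

# A toy next-word model: for each context word, the probabilities of what
# might follow. These values are invented for illustration, not learned
# from any real corpus.
next_word_probs = {
    "small-town": {"lawyer": 0.6, "diner": 0.3, "secret": 0.1},
    "lawyer": {"takes": 0.5, "argues": 0.3, "wins": 0.2},
    "takes": {"on": 0.7, "notes": 0.2, "risks": 0.1},
    "on": {"a": 0.5, "the": 0.4, "trial": 0.1},
}

def generate(start: str, max_words: int = 5) -> str:
    """Build a phrase one word at a time, sampling by probability."""
    words = [start]
    for _ in range(max_words):
        options = next_word_probs.get(words[-1])
        if options is None:
            break  # no learned pattern to continue from
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("small-town"))
# e.g. "small-town lawyer takes on a" -- plausible-sounding, yet nothing
# here ever checks whether the combination describes anything real.
```

Notice that the sampler never asks "is this real?" The output only has to be statistically plausible, which is exactly how a convincing but nonexistent legal thriller can get assembled.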

That's why leading AI companies are constantly working on solutions. They're addressing critical challenges by building systems that can (see the sketch after this list):

  • Express uncertainty when they're not quite sure
  • Cross-check information against reliable sources
  • Track where their information comes from
  • Understand and acknowledge their own limitations
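
As a rough illustration of the first three ideas, here is what grounding an answer might look like in miniature. This is a sketch only, not any company's actual implementation; the trusted-source table and function name are hypothetical stand-ins:

```python
# A minimal sketch: answer only when a claim can be traced to a trusted
# source, and admit uncertainty otherwise. The source table below is a
# hand-made stand-in for a real retrieval system.

TRUSTED_SOURCES = {
    "The Firm": "John Grisham, 1991",
    "A Time to Kill": "John Grisham, 1989",
}

def answer_with_grounding(book_title: str) -> str:
    """Answer only when the claim can be traced to a known source."""
    source = TRUSTED_SOURCES.get(book_title)
    if source is None:
        # Express uncertainty instead of inventing a plausible answer.
        return f"I can't verify that '{book_title}' exists."
    # Track where the information comes from by citing it.
    return f"'{book_title}' is a real book (source: {source})."

print(answer_with_grounding("The Firm"))
print(answer_with_grounding("The Jury's Shadow"))  # an invented title
```

Real systems operate at far greater scale, retrieving from large document stores rather than a hand-made dictionary, but the principle is the same: check before you claim, and say so when you can't.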

But it goes beyond just technical fixes. There's important work happening around protecting and crediting creative works. Companies are developing ways to compensate creators, building systems that track attribution, and creating clear guidelines for how creative works can be used in training. Because at the end of the day, this isn't just about making AI more accurate—it's about making sure it enhances rather than replaces human creativity.

I'm committed to keeping a seat at the table in this conversation, helping make these complex topics more digestible for business leaders and technologists through my lens, and building understanding between creators and developers.

By better understanding challenges like hallucinations, we can work toward a shared future where AI truly lifts all boats: from the developers building it, to the creators whose work informs it, to the users whose lives it impacts. 

When we paint that picture with every paintbrush, we build a better tomorrow.