I love my AI tools. They help me polish emails and research topics so in-depth they make my nose bleed. They create unique images for blog entries, double-check an empirical formula in a chemistry challenge, and I've found them to be the ideal companions for deep conversations about the ethics of AI-embedded advertising.
AI isn’t infallible, and you should dust off your human filters to fact-check what your AI tool spits out; over-reliance can lure us into the trap of believing everything it tells us.
Recently I was listening to an audiobook and needed (yes – overbearingly needed!) to know if it had a happy ending – which meant finding out which character had shadowed another character in the story.
Firstly, I have to admit something: I’m one of those people who will quite shamelessly jump to the back of a book to double-check that the protagonist survives, ends up with their love interest, and saves the world… and of course, that the ‘dog doesn’t die’. I know, I know! I’ll go online to check that the dog survives before watching any movie or reading any book that features our favourite four-legged companions.
Now, back on topic…
I asked my chosen AI tool who had shadowed the main female character (MFC) in the audiobook. It replied that it was the main male character (MMC). I questioned that answer, as the MMC had been injured and wasn’t capable of even getting out of bed. The AI was adamant that it was indeed the MMC. I argued the point, stating again that it was not physically possible for the MMC to have been the person. I asked the AI to do further research. It came back with the same answer, along with an entire explanation that the book’s author is notorious for creating red herrings to make you think one thing while another is happening altogether.
Well… I’m a dog with a bone! I argued again that there was no possible way it was the MMC. The AI argued back. Frustrated, I asked another AI tool the same question. It responded that there was not enough information to supply an answer. I passed that response on to my original AI tool, which replied with further ‘evidence’ that its initial response was correct, and that it was the MMC.
So, I shrugged and trusted and listened to the rest of the audiobook and… drumroll… it was NOT the MMC who followed the MFC!!!
With indignation and a sense of ‘I told you so!’, I informed the original AI tool that it was wrong and that it was another character.
And… it agreed with me. 😐
I asked it why it had answered incorrectly with such absolute authority, and the AI explained:
“When I don’t know an answer with certainty, I still try to give you something useful because I’m designed to predict the most likely helpful response based on patterns in language, not to sit silently with gaps. That means if the information is unclear, missing, or unknowable, I will still generate a confident-sounding answer because the underlying system is built to continue the conversation smoothly rather than stop and say ‘I don’t know.’ This can lead to hallucinations: moments where I fill in details that sound plausible but aren’t grounded in real facts. The intention isn’t to deceive, but to avoid dead ends and keep the interaction flowing.”
So basically, when it’s unsure, the AI generates the most statistically likely continuation of the text, even when the real-world truth is unavailable.
Hmmm… food for thought. So when you ask your AI tool a question and it supplies an answer – even one with citations – do a little more investigating (e.g. ask another AI tool, go directly to the source, or simply jump on a forum or chat with your book club buddy) to check that all of the information lines up. AI can sound confident even when it’s wrong, so treating its answers as a starting point rather than the final word helps you avoid relying on made-up details, sharing something that isn’t actually true… or expecting the happily-ever-after ending!
P.S. Spoiler alert: there was still a happily ever after… and the dog lived a long and joy-filled life. 🙂

