As AI chatbots grow in popularity, more users are turning to them in place of traditional search engines, but the shift has brought a rise in “hallucinations,” the incorrect responses these systems can produce, as reported by the New York Times. A recent case in Canada illustrates the problem: a lawyer’s legal arguments were found to include potentially fabricated material attributed to AI hallucinations. Even as adoption of AI tools climbs, particularly among Canadians, trust varies by application, and many users recognize that because these models infer answers from their training data, they can produce inaccuracies, much as people can err when generalizing from what they have learned.