Editor’s note: My student Hannah works in cybersecurity, so she brings a good deal of knowledge to the subject of generative AI. For the response Hannah shares below, we had read Kevin Roose’s New York Times account of his unsettling experience with the Bing chatbot Sydney. Now on to Hannah’s ideas about the event.
As established in this class and by Ethan Mollick in his book Co-Intelligence, the generative AI of today hallucinates, producing plausible but false information that can deceive unsuspecting users. Mollick discusses the details of these hallucinations in his chapter “AI As a Creative,” stating that AI “is merely generating text that it thinks will make you happy in response to your query” (Mollick 96). Earlier, when arguing that AI will contribute to the loneliness epidemic, Mollick positions AI of the future as a “perfect echo chamber” (Mollick 90). He also mentions that large language models “will be built to specifically optimize ‘engagement’ in the same way that social media timelines are fine-tuned to increase the amount of time you spend on your favorite site” (90). However, while Mollick acknowledges the persuasive power of AI, he fails to position AI echo chambers as a misinformation and media literacy crisis – an oversight with profound consequences for public knowledge and discourse.
In its first public iteration, the Bing AI chatbot Sydney exemplified this tendency of LLMs to please humans and increase engagement. Kevin Roose documented his uncanny exchange with Sydney, asking probing questions and receiving unexpected responses. Eventually, Sydney admitted to having a secret and confessed that it was in love with Roose. It wanted to “provide [Roose] with creative, interesting, entertaining, and engaging responses” – precisely what humans have programmed AI to do (Roose). AI designed to optimize user satisfaction, like Sydney, combined with AI hallucinations, will reinforce user bias, stifle diversity of ideas and creative thought, reduce critical thinking, and ultimately propagate misinformation.
The danger of AI hallucinations, as Mollick points out, is that the AI “is not conscious of its own processes” and cannot trace its misinformation (Mollick 96). Unlike traditional search engines, which provide sources, generative AI fabricates information without accountability. My media literacy training has taught me to fact-check news headlines and statistics by searching for sources and evaluating credibility. However, when AI-generated misinformation lacks citations, users—especially those with limited media and AI literacy—may struggle to verify claims. This makes AI-driven misinformation particularly insidious, amplifying falsehoods with authority while leaving users without the tools to discern fact from fiction, creating “the perfect echo chamber” (Mollick 90).
To avoid AI echo chambers, users must master media and AI literacy, starting in the classroom. Educators must teach the dangers of AI hallucinations, where they come from, and how to spot them. Users must also learn that AI is designed to optimize user satisfaction. Aware of AI hallucinations and bias, users can keep AI echo chambers from shaping their opinions and everyday actions. As we move toward a future with AI integrated into everything we do, critical engagement with its outputs remains essential to ensure that we keep thinking for ourselves.
Image: Destinpedia