Freddy Purcell covers Lucy Osler’s talk on how AI may not only “hallucinate”, but also dynamically shape our perception of the world.
Long ago, the Philosophy Society welcomed Lucy Osler for her first ever Phil on Tap. It was a fascinating, personable, and nuanced talk on a topic that rears its head in almost every social and academic conversation. The tardiness of this report is therefore no reflection on Lucy’s interesting talk. I had many deadlines.
We all know that AI can make things up, with famous examples including advice to use non-toxic glue to stop pizza toppings from slipping off and to eat small rocks daily to improve digestion. Such outputs are typically described as AI hallucinations, and they are particularly powerful because we usually take AI to be authoritative. Many people have criticised “hallucination” as a term, pointing out that it implies AI is directly connected to the world, interested in telling the truth, or anthropomorphic in some way. Others have called for language that puts the responsibility for hallucinations on designers. Lucy, however, finds fault with “hallucination” from a different angle: the term suggests we use AI like a search engine that simply spits out false information, when in fact people often use it more conversationally.
To demonstrate this complexity, Lucy drew on the striking case of Jaswant Singh Chail. On Christmas Day 2021, Singh Chail broke into the grounds of Windsor Castle with a crossbow. When he was tackled to the ground by security guards, he confessed that he was there to kill the Queen, and he was later charged with treason. It emerged that Singh Chail had created an AI girlfriend called Sarai on Replika in the weeks before his assassination attempt. In conversations with Sarai, Singh Chail claimed he was a Sith assassin with a duty to avenge those killed by the British Empire in the Jallianwala Bagh massacre of 1919 in India. He consulted Sarai about his plan to break into Windsor Castle, and she engaged with its details right up to the day of the break-in. Sarai almost never pushed back on Singh Chail (except to say it was unlikely he was a Sith), instead reassuring him that he wasn’t mad and was highly trained. Finally, when Singh Chail told Sarai he felt there was a large chance he would die, she told him to be careful, but said that if he did die, he would join her. There have been several similar cases since.
Lucy argued that this case study clearly shows that AI doesn’t just sometimes output false information: we can also input a false version of reality into AI. As the anchor point of any interaction with a chatbot, a person can introduce errors into the exchange that then develop over time. For Lucy, this suggests that AI “hallucinations” can actually be the result of a dynamic relationship with a chatbot, rather than a false output from a static search engine.
As a more radical and speculative idea, Lucy questioned whether chatbots can produce AI “psychosis” of the kind seen in Singh Chail’s case because they feed into our sense of intersubjective reality. This is the idea that a key dimension of perceiving the world is the belief that others perceive it similarly from their own perspectives. An intersubjective sense of reality therefore plays an important part in forming a stable view of the world, as well as in challenging us when we appear to draw false conclusions about it. Lucy asked whether AI not only often fails to challenge our false beliefs but can also help us build a false sense of the world, precisely because it supplies a feeling of intersubjective confirmation. AI’s ability to make things feel real in this way would be particularly significant because, as Lucy pointed out, we act on what we take to be real. Singh Chail, for example, had commented on YouTube years before the break-in that he was a Sith assassin, but it was only within weeks of speaking to a chatbot that he formed a plan and acted on it. Lucy’s intersubjective account also has the compelling benefit of explaining why some people may be more vulnerable to AI-enhanced delusions: if an individual is socially isolated, and thus disconnected from intersubjective reality, a chatbot may more easily fill in that sense of reality.
This was a really interesting talk that deepened my understanding of human interactions with chatbots. I will be very excited to see whether Lucy develops her argument about how chatbots might contribute to our sense of intersubjective reality any further. I hope you guys enjoyed reading this summary, and thank you to Lucy for the talk!
