Teaching critical AI literacy is a must

OpenAI’s ChatGPT, a generative artificial intelligence (GenAI) chatbot, was launched in November 2022. Users interact with the bot through prompts written in natural language, and it responds in natural language. Within just a few months of its release, ChatGPT was estimated to have 100 million users; it also took academia by storm. (Dobrin 2023, 17-18; Mollick & Mollick 2023.) Although ChatGPT is likely the most well-known GenAI chatbot, there are many others, such as Microsoft’s Copilot, Google’s Gemini, and Anthropic’s Claude. AI hype abounds, and students everywhere are interacting with GenAI chatbots (Terry 2023).
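Under the hood, a chatbot exchange is simply a prompt sent to a model and a reply sent back, and the same cycle is available to programmers. Below is a minimal sketch, assuming OpenAI’s Python SDK; the model name and prompt are my own illustrative choices, and other chatbots expose similar interfaces.

    from openai import OpenAI

    client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

    # Send one natural-language prompt and print the natural-language reply.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model name
        messages=[{"role": "user", "content": "What is critical AI literacy?"}],
    )
    print(response.choices[0].message.content)

Chatbot web interfaces wrap this same prompt-and-response cycle in a conversational page.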

To use GenAI chatbots responsibly, students must understand the risks involved. Whatever a teacher’s stance, whether aboard the AI hype train or digging in their heels against the AI storm, teaching critical AI literacy should be on every teacher’s agenda.

“Your first name is Sarah” ‒ GenAI hallucinating

Are GenAI chatbots on acid? They are said to hallucinate, meaning that they sometimes spew out nonsense (Dobrin 2023, 39-40). According to one study, this may happen anywhere between 3 and 27 percent of the time (Metz 2023). Hallucinations are a risk, all the more so because GenAI chatbots generate plausible, well-structured answers, so the user may not immediately notice when a chatbot’s answer is nonsense. Yet sometimes these hallucinations are obvious (Image 1).

[Alt text: screenshot of a dialogue between the author and an artificial intelligence that announces that the author’s first name is Sarah.]
Image 1. ChatGPT 3.5 hallucinating and calling me “Sarah”. (Image: Hamid Guedra)

Besides hallucinations, there are other risks. These include academic integrity violations, copyright infringements, AI anthropomorphism, algorithmic bias, the environmental impact of using GenAI, the exploitation of workers to train GenAI, loss of agency and cognitive outsourcing, and more. Among these, understanding AI anthropomorphism is key when interacting with GenAI chatbots.

AI anthropomorphism

It is common to project human-like qualities onto new technologies and anthropomorphize them. The term artificial intelligence itself is a case in point. According to Gibbons, Mugunthan and Nielsen (2023), there are four degrees of AI anthropomorphism:

  • Courtesy: Being polite to the AI.
  • Reinforcement: Giving praise to the AI.
  • Roleplay: Assigning the AI the role of a person with specific traits or qualifications.
  • Companionship: Building a friendship with the AI.

AI anthropomorphism happens for two reasons: it serves a functional role, making the AI perform better, and a connection role, making the experience more pleasant (Gibbons et al. 2023).
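The roleplay degree, in particular, doubles as a practical prompting technique. As a rough sketch of what it can look like in code, the example below assigns the chatbot a persona through a system message before the user’s question is sent; the persona, model name, and prompts are my own illustrative assumptions, not part of Gibbons, Mugunthan and Nielsen’s article.

    from openai import OpenAI

    client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

    # Roleplay: the "system" message assigns the chatbot a persona with
    # specific traits before the user's actual question is sent.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You are an experienced English teacher who gives concise, friendly feedback."},
            {"role": "user",
             "content": "Please comment on this sentence: 'Me and him goes to school.'"},
        ],
    )
    print(response.choices[0].message.content)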

The risk, then? If users attribute human-like traits to GenAI chatbots, they may come to assume that the bots have actual human abilities, even sentience (De Cosmo 2022), maybe even the ability to show empathy. However, there is no “ghost in the machine” and no natural language understanding in GenAI chatbots, and some have dubbed them stochastic parrots (Bender et al. 2021). Moreover, critics such as Sherry Turkle have emphasized that the empathy of GenAI chatbots is not real but simulated (Lithub 2021).

Image 2. Students wearing critical AI literacy hats. (Image: Ideogram.ai / Hamid Guedra)

When interacting with powerful GenAI chatbots, no matter how plausible or human-like their answers appear, students need to be aware of the risks involved and keep their critical AI literacy hats on.

Author

Hamid Guedra is a Senior Lecturer at the Language Centre and teaches English for professional and academic purposes at LAB University of Applied Sciences. Despite ChatGPT’s persistence, Hamid is not Sarah.

References 

Bender, E.M., Gebru, T., McMillan-Major, A. & Shmitchell, S. 2021. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? FAccT ’21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, March 2021, 610–623. Cited 8 Mar 2024. Available at https://doi.org/10.1145/3442188.3445922

De Cosmo, L. 2022. Google Engineer Claims AI Chatbot Is Sentient: Why That Matters. Scientific American. Cited 8 Mar 2024. Available at https://www.scientificamerican.com/article/google-engineer-claims-ai-chatbot-is-sentient-why-that-matters/

Dobrin, S.I. 2023. AI and Writing. Peterborough (ON), Canada: Broadview Press.

Gibbons, S., Mugunthan, T. & Nielsen, J. 2023. The 4 Degrees of Anthropomorphism of Generative AI. Nielsen Norman Group. Cited 8 Mar 2024. Available at https://www.nngroup.com/articles/anthropomorphism/

Terry, O.K. 2023. I’m a Student. You Have No Idea How Much We’re Using ChatGPT. Chronicle of Higher Education. 5/26/2023, Vol. 69 Issue 19, 1-1. Cited 8 Mar 2024. Available at https://www.chronicle.com/article/im-a-student-you-have-no-idea-how-much-were-using-chatgpt

Lithub. 2021. Sherry Turkle on AI and the Perils of Pretend Empathy. Literary Hub. Cited 8 Mar 2024. Available at https://lithub.com/sherry-turkle-on-ai-and-the-perils-of-pretend-empathy/

Metz, C. 2023. Chatbots May ‘Hallucinate’ More Often Than Many Realize. The New York Times. Cited 8 Mar 2024. Available at https://www.nytimes.com/2023/11/06/technology/chatbots-hallucination-rates.html

Mollick, E. & Mollick, L. 2023. Why All Our Classes Suddenly Became AI Classes. Harvard Business Review. Cited 8 Mar 2024. Available at https://hbsp.harvard.edu/inspiring-minds/why-all-our-classes-suddenly-became-ai-classes