Hacking AI Chatbots for Critical AI Literacy in the Library

Publisher: Taylor & Francis
Publication Type: Journal Article
Citation: Journal of the Australian Library and Information Association, 2026, 75(1)
Issue Date: 2026-03-01
AI is seeping into the fabric of our information environment as generative AI tools are increasingly used to search for and discover information. Despite their promise of improved efficiency, AI systems regularly produce errors (also known as 'hallucinations'), demonstrating that uncertainty is a feature rather than a bug of such systems. Even so, we regularly hear stories about people who have mistakenly used false information provided by these tools in their communications and outputs, from lawyers' reports to government hearings. There is wide agreement that AI literacy is needed to use AI effectively and ethically, but less consensus on how AI literacy is best achieved. A key component of many AI literacy frameworks is an understanding of how AI works. Using a case study of a critical AI literacy intervention in four Greater Sydney libraries, we argue that instead of learning only about how AI works, AI literacy might involve learning when, how, and why AI doesn't work. The concept of socio-technical error and uncertainty is a useful heuristic for understanding AI, particularly in the context of information search and discovery, a primary practice in both public and academic libraries.