California Academic & Research Libraries
Presentation Abstracts 2026
Beyond Pre-Packaged AI: Teaching Students to Build Custom Research Tutors Using the ACRL Framework
Alexis Pavenick, CSU Long Beach
This interactive workshop will show librarians how to teach students to build their own AI research tutors: custom generative AI bots designed to guide critical thinking and information literacy skills. Rather than relying on pre-packaged AI tools like Elicit or Perplexity, which undermine the research process by delivering quick answers, teaching students to build their own research support bots puts human intelligence and pedagogical expertise at the center of student genAI use.
Students are already using AI tools for research, often in ways that undermine the development of essential information literacy skills. Tools like Perplexity and Elicit promise quick answers to research questions, but they bypass the critical evaluation, query refinement, and source analysis processes that enable students to become independent researchers. Meanwhile, librarians wonder how to integrate AI ethically into instruction without replacing the human expertise and critical thinking we work very hard to teach and model.
This workshop will demonstrate how to teach students to build their own research tutor bots using natural language prompts in platforms like ChatGPT or Microsoft Copilot. These are not bots built from scratch with code. The presenter will show how to use the customization features in ChatGPT and Microsoft Copilot to create bots that focus on research support, not agentic actions, and how to configure and continuously refine these support bots.
The innovation is in the design philosophy. Students learn to build bots that ask them questions rather than provide answers, and that guide their database search query refinement rather than conducting searches for them. The focus is on query and keyword evaluation, followed by the evaluation of sources rather than summaries of content. Students can be shown how to incorporate the ACRL Framework's six frames into their bot's build so that the genAI becomes a tool for developing information literacy rather than circumventing it. Students can also be shown how to continually refine their bot so that it remains a tutor rather than becoming an answer engine.
Learning Outcomes
Participants will leave this workshop equipped to:
This approach directly addresses librarians' ethical responsibilities in AI oversight by teaching students to be critical designers of AI tools rather than uncritical consumers. Students learn that AI outputs require verification, that bots need continuous refinement based on human judgment, and that the goal is skill development, not shortcut-taking. By teaching students to build tools that scaffold their learning, we prepare them for college research and for a lifetime of ethical AI use in professional contexts.
We cannot stop students from using AI, but we can transform how they use it. By teaching students to build research tutors rather than just use answer generators, we center human intelligence, critical thinking, and librarian expertise in the AI-integrated future of academic research. This workshop offers librarians a concrete, innovative strategy for staying at the forefront of information literacy instruction while embracing and shaping the role of AI in student learning.
This presentation aligns with the conference's call for radical and innovative approaches that leverage technology in new ways, empower students as critical thinkers and independent researchers, and future-proof librarianship by preparing for the opportunities and challenges ahead.
AI-Powered or Pressured? Navigating Gen AI Adoption in the California State University System
Talor Wald, Mount St. Mary's University; Ariana Varela, Cal State Los Angeles
In 2024, two California State University (CSU) librarians were voluntold to lead a pilot for a new AI-powered tool that, according to its developer, held great potential to help students achieve academic and professional success. In this presentation, three library workers in the CSU system describe their experience implementing a pilot initiative of an AI-powered guided reading tool. The presenters will engage participants in critiquing AI-powered edtech and the discourse surrounding it, and in creating new approaches to library assessment that center community. This presentation will explore the evolving role of librarians as implementers and stewards of AI technologies. It will also advocate for human expertise and an ethics of care as automation continues to emerge in and saturate libraries and academic institutions.
First, the presenters will draw on the work of Critical Technology scholars like Roderic N. Crooks and Ruha Benjamin to critique the idea, promoted by administrators, vendors, and educators, that access to and use of AI will ensure students’ academic and professional success. Though generative AI-powered edtech is new, the hype surrounding it is rooted in the Access Doctrine, a term introduced to describe the Clinton administration's push to expand access to broadband internet as a solution to poverty and other social ills.
Then, the presenters will share how the CSU pilot failed to lend credence to the current hype around AI. Though the pilot did not support student learning outcomes, it did generate profit for the technology vendor and as one administrator said, “made the library look good.” During the pilot program, the CSUs also invested millions of dollars into a system-wide license with OpenAI amidst extreme budget cuts to become the “first AI-powered University,” a decision that blindsided faculty, staff, and students. The pilot and subsequent administrative decisions around AI adoption left the authors with several questions: Who gets to make decisions about AI adoption and integration? Who is the authority on what students need to advance their education and “workplace readiness” skills? Why are we turning to tech companies to solve problems they perpetuate?
Finally, the presenters will draw on Relational Cultural Theory (RCT) and Veronica Arellano Douglas's idea of Assessment as Care to argue that decisions about AI pilots, licensing, and adoption in higher education should be returned to the communities most impacted by these decisions: students, faculty, staff, and librarians. RCT, which posits that interpersonal human connections are foundational to development and growth, serves as a necessary interruption to dominant perspectives that legitimize the inevitability of AI technologies, normalize the collection of ever more (student) data, and pressure academic institutions to be early adopters. The presenters will conclude by leading a short discussion or interactive activity in which participants can dream together and begin formulating approaches to library assessment that center community.
After attending the presentation, participants will be able to:
Prompting Against the Machine: Teaching Critical Data Literacy Through AI Interrogation
Dykee Gorrell, San Jose State University
As artificial intelligence tools become embedded in everyday research, writing, and learning, the role of librarians is shifting from gatekeepers of information to facilitators of inquiry. This session reframes prompting—often taught as a technical skill for extracting better AI outputs—as a pedagogical practice for cultivating critical thinking and epistemic awareness. Rather than asking students to perfect their prompts for efficiency, this approach teaches them to challenge AI responses: What assumptions shape this answer? Whose voices and data are absent? What social, historical, or political conditions inform what AI “knows”?
Drawing on critical data studies and information literacy frameworks, this session explores how data-driven systems rewrite what counts as knowledge, privileging certain ways of knowing while erasing others. Participants will engage with classroom activities and discussion models that use AI-generated text as sites of inquiry—spaces for uncovering bias, assessing authority, and reconstructing erased perspectives.
By positioning librarians as stewards of critical data pedagogy and counter-archives, this session highlights how libraries can empower learners to interrogate machine intelligence and assert human judgment, ethics, and imagination at the center of research practice. Ultimately, “prompting for critical thinking” becomes not a technical fix, but an act of intellectual and social resistance—teaching students to read AI critically and to write themselves back into data-driven narratives.
Ctrl+Alt+SIFT: Rebooting Info Literacy for AI Times
Rayheem Eskridge, Cal Poly Pomona
This presentation argues that the role of librarians is not to normalize AI's flaws or serve as its facilitators, but to foreground practices that truly build discernment. By centering information literacy over technological novelty, librarians empower students to ask better questions, demand credible sources, and see through the deceptive confidence of AI. In short, this session reframes the SIFT method and other evaluative practices as a vital counterbalance to the unchecked proliferation of AI in education.

This presentation challenges the growing perception of AI tools like ChatGPT as accurate, authoritative, and even essential. Despite the hype, these platforms often produce text that is polished but misleading, eroding critical thinking skills while creating a false sense of authority. Education technology companies frame AI as a replacement for evaluative work, but such tools are inherently prone to error, bias, and manipulation. In this environment, evaluative tools like the SIFT method become more crucial than ever. Rather than teaching students and faculty to accommodate or "fix" AI outputs, librarians must focus on equipping learners with enduring evaluative skills. These skills form the guardrails that help users resist the lure of authoritative-sounding misinformation, whether generated by humans or machines.
AI-Enhanced Liberation: Empowering Neurodivergent Learners
Clarissa P. Moreno, University of Southern California
Neurodivergent learners often face challenges with organization, time management, and sensory overstimulation—barriers that traditional education models frequently overlook. This presentation explores how trauma-informed pedagogy, enhanced by Artificial Intelligence (AI), can create more inclusive, supportive environments that affirm neurodiverse strengths and empower neurodivergent learners through autonomy, self-advocacy, and reduced cognitive load. AI technologies—used as communication aids, executive function tools, and adaptive feedback—offer personalized support that fosters academic and adaptive coping skills. Central to this transformation are academic librarians, who play a vital role in creating inclusive, student-centered learning environments by expanding access to technology and building critical literacy skills. Through resource curation and collaboration with educators, libraries can develop AI-integrated spaces that empower neurodivergent learners to thrive.
Mapping Evidence Synthesis Production in the CSUs: Reproducibility, the Role of Librarians, and Opportunities for the Future
CARL Research Grant Recipients: Mary Catherine Ellis, San Diego State University; Essy Barroso-Ramirez, San Jose State University; Yuqi He, San Jose State University; Dawn Hackman, San Jose State University
This paper presentation will share initial findings from a mapping review of the evidence synthesis output of the California State University (CSU) system produced between 2014 and 2025.
The objectives of this mapping review were threefold: to understand and assess the current output of published evidence synthesis projects within the CSU system, in terms of quantity, reproducibility, and discipline; to understand how many evidence synthesis projects produced by CSU-affiliated authors include a librarian as an author or report consultation with a librarian; and to identify opportunities for CSU librarians to develop an evidence-informed service model and outreach plan, while assessing resource needs for supporting evidence synthesis.
Inspired by and modeled after a study conducted by librarians at the University of Texas at Arlington, the authors conducted a mapping review using the JBI Scoping Review methodology as a framework. The protocol was published on OSF on January 15, 2025. The primary search was developed in Ovid Medline and then translated into the appropriate syntax for 23 additional databases representing the wide range of majors offered by the undergraduate and graduate programs of the CSU system. Inclusion criteria were: published journal article or conference paper; at least one author identified as affiliated with one of the 23 campuses in the CSU system; the study self-identified as a structured evidence synthesis; published in English; and published no earlier than 2014, the year author affiliations became a metadata standard in the primary database. Two rounds of screening and one round of data extraction were performed independently and blinded in Covidence by two authors, with conflicts resolved by a third author or by consensus. Data were charted and analyzed to determine the effect of librarian involvement on the rigor of evidence synthesis studies published by CSU-affiliated researchers to date.
The database searches identified 20,565 references representing 20,562 studies. A total of 9,644 duplicates were identified by Covidence and manually by the authors. The authors screened 10,918 studies by title and abstract. The full text of 2,529 studies was screened, and 1,584 studies were excluded. Data were extracted from 945 studies. Analysis is currently underway, and completed work will be reported.
This project has been funded by a 2025 CARL Research Grant.
Speeding Up or Slowing Down? AI & Slow Librarianship in Academic Libraries
Amanda Makula, University of San Diego; Heather Sardis, MIT Libraries; Meredith Farkas, Portland Community College
AI – often touted as a tool to enhance productivity and increase efficiency – appears at first glance in stark opposition to “Slow Librarianship,” a movement which advocates for a rejection of "urgency culture" in favor of intentional, values-driven work that prioritizes relationship-building, workplace equity, and intellectual curiosity. Libraries/librarians who have embraced slower, more intentional work now find themselves in a peculiar position, asking: How do we respond to the rapid proliferation of and changes brought on by AI amid an organizational dedication to human-centered library work?
This panel presentation will define slow librarianship and explore whether it is fundamentally in opposition to AI adoption. Are the two contradictory, or might there be points of intersection? We ask: What might we lose if patrons gravitate toward GenAI tools for research support and information, rather than seeking out the library – and/or what might be gained in leveraging GenAI to provide broader support? Does AI necessarily pressure us toward superficial engagement? Is it indeed antithetical to slow, human-centered librarianship? What does the research show? Do the new cognitive burdens required to “keep up” in the AI landscape create more fragmentation of our time and energy, isolate us further from one another, and essentially introduce an even heavier workload?
Or: can AI help us automate routine tasks and free us for higher-level thinking, reflection, collaboration, and meaningful interactions? Can the established – and evolving – risks of GenAI be mitigated, and what is the role of libraries/librarians in such mitigation? Can library leaders utilize AI within a human-centered context, and frame it in such a way that is supportive and encouraging for their teams? How do we build community when individuals have different interests, attitudes, and experiences around AI? What tasks or activities are ripe for AI, and what should remain strictly under the domain of humans? Do we need a framework for thinking about when it’s appropriate and effective – and in line with slow librarianship – to delegate work to AI?
Panelists include: Meredith Farkas, Faculty Librarian at Portland Community College in Oregon, who has written and spoken extensively about slow librarianship, and whose chapter, “Neoliberal Time and the Promise of Slow Librarianship,” is forthcoming in Slow Librarianship: Reflections and Practices; Heather Sardis, Associate Director for Technology and Strategic Planning & AI Strategy Lead at the MIT Libraries, who authored the chapter “We Have Always Been Ready: Libraries, AI, and 100 Years of Information” in the forthcoming ACRL Press book, AI in Academic Libraries; and Amanda Makula, Digital Initiatives Librarian at the University of San Diego, who serves on the AI Committee and delivers AI workshops for faculty at USD, helped launch a new ACRL-AI Interest Group, and is a topic editor for Authorship in AI Literacy as part of the Project on Open Evolving Metaliteracies (POEM) through Carnegie Mellon University.
The Human Brand: Why Libraries Should Be the Last to Automate Their Voice
John Jackson, Loyola Marymount University
Nobody wants to see AI used in library outreach. At the William H. Hannon Library of Loyola Marymount University, we have forgone any use of artificial intelligence in our marketing, outreach, and external communications work. Instead, we have intentionally focused on the “faces and places” of the library, centering the people and the community who make it a welcoming space. This presentation will explore the ways in which we center human intelligence, expertise, and community, sometimes in direct opposition to artificial intelligence, in order to build trust with our users and strengthen the library’s brand.
Research on higher ed best practices has shown that retention and academic success are tied to sense of belonging. From within our field, we know that library anxiety can be reduced (and sense of belonging strengthened) through a variety of high-impact practices, including giving students the opportunity to build relationships with library staff and connecting them with spaces and services where they feel seen and appreciated. Through our outreach and marketing efforts, we have placed a high premium on these personal connections. We push out stories, not fliers. We celebrate our “library fans.” And we focus on the humans who keep our services up and running.
As generative AI becomes increasingly embedded in communication platforms (e.g. social media, email marketing, graphic design software), libraries face a choice on how to engage with it. This session explores our deliberate decision to resist the use of AI in outreach and marketing efforts, not out of fear or technological lag, but as a principled stance rooted in building trust, authenticity, and human connection.
The presenter will provide practical examples from campus campaigns, student engagement efforts, and faculty communications, showing how the library’s voice is not just a tool: it’s a reflection of institutional values. By centering human intelligence in our messaging, we model the kind of critical thinking and ethical discernment we hope to cultivate in our community. We also push back against the normalization of algorithmic content, which often prioritizes efficiency over empathy and personalization over personhood.
The presenter will briefly discuss the research behind sense of belonging in higher ed and how that research informs academic library outreach efforts. Attendees will leave with a framework for evaluating the role of AI in their own outreach work, questions to ask before adopting new tools, and strategies for preserving the human brand in an increasingly automated landscape. This session invites library workers to consider not just what AI can do, but what it should do, and what we risk losing when we outsource our voice.
The presentation will include an interactive portion that will give attendees an opportunity to identify their “library fans,” craft key messages, and formulate immediate next steps toward creating a human-centered library brand. Any presentation files, handouts, exercises, etc. created for the session will be available for attendees to download.
Cross-Campus Collaboration for Accessible Course Materials
Sylvia Page, UCLA; Emilie Eshbaugh, UCLA; Courtney Hoffner, UCLA
As a public university, UCLA is committed to fostering an inclusive digital environment that complies with Title II of the Americans with Disabilities Act (ADA) and is reinforced by the university's accessibility policy. New ADA Title II updates require the university to work toward WCAG (Web Content Accessibility Guidelines) 2.1 AA conformance for all web content.
In light of these new requirements, library workers have had to work quickly and creatively to address the accessibility of our collections and content. It is crucial that we work collaboratively with campus partners and within our library to prioritize and strategize our approaches.
This poster highlights one such partnership focused on digitally accessible course materials. Using library expertise on course reserves workflows and technology (the Leganto tool for Canvas), we have worked with colleagues in the UCLA Disabilities and Computing Program, the Bruin Learn Center for Excellence (our LMS administrators), and the Teaching and Learning Center to develop a strategy for prioritizing the accessibility of required course materials. We will outline challenges inherent in working within a campus-wide and university-wide structure, including labor and budget considerations, communication silos, and a lack of clear delegation of responsibility. To counter these challenges, we will highlight opportunities for partnership and creativity across departments, and the necessity of scalable work in the face of looming deadlines.
LLM—Let’s Learn More: A Deep Research Experimentation
Katherine Kapsidelis, UCLA; Michelle Brasseur, UCLA; Eli Edwards, UCLA Law; Jamie Hazlitt, UCLA; Courtney Hoffner, UCLA
With Deep Research, a new search tool, OpenAI claims that complex research tasks can be completed in a matter of minutes. This poster will showcase a project in which five librarians experimented with this new generative AI tool to explore the implications of its search functionality for university teaching and scholarship, as well as its practical implications for the quotidian work of the Library.
Participants experimented with the tool individually, shared their results during project meetings, and discussed the ethical implications of its use. Examples of how the tool was used included: streamlining internal department workflows; assisting with revisions to a book review article; creating an agenda for a project meeting; preparing research reports; and brainstorming information relevant to a search committee.
The committee also specifically explored the implications of Deep Research for student learning and library instruction, and held well-attended professional development sessions open to all library staff.
The poster will highlight our experimentation and reflect on organizing AI-focused professional development initiatives within academic libraries.
Cracking the Case: A Survey on Resource Usefulness and Librarian Assistance among First-Year Health Sciences Students
Jennifer A. Silverman, University of Southern California Wilson Dental Library
Background:
At a university, Doctor of Dental Surgery (DDS) students participate in a problem-based learning (PBL) curriculum, which uses patient cases to teach biomedical sciences. The library supports the curriculum by providing patient case guides, recommended textbooks, ebook platforms, and librarian assistance. To assess the students’ perceived usefulness of these resources and support, a survey was conducted in the first and last trimesters of their first year. The survey results were intended to guide changes to future library resources and services. Librarians will find this project valuable because it offers a practical method for gathering student feedback on library resources and services while demonstrating how survey findings can directly inform enhancements based on student needs.
Description:
The objective of this survey was to assess students' perceptions of PBL library resources and librarian assistance. The survey was created using Springshare LibWizard. Students received a link to the optional feedback survey via the school’s listserv in the first and last trimesters of their first year of dental school. Repeating the survey in the final trimester was intended to measure any potential changes in responses over time. The survey examined students’ perceived usefulness of the library’s PBL resources and asked about the frequency of reaching out for librarian assistance, with space for comments. Questions about the PBL resources focused on the PBL case guides, the list of Recommended Textbooks for PBL, and ebook platforms. Most questions used a Likert scale with the responses “very useful,” “moderately useful,” “not at all useful,” and “I did not use [xyz resource].” Students were also asked how often they contacted the librarian for help, and those who did not seek librarian help were asked to state why. To encourage a high response rate, anyone who completed the survey was entered into a raffle to win a Starbucks gift card.
Conclusion:
Out of 145 eligible students, 34 responded to the survey in the first trimester and 32 in the final trimester, with no significant difference in results between the trimesters. Nearly 70% of respondents found each PBL resource “very useful,” and 60% did not seek librarian help because they did not need it. All who requested librarian assistance rated the experience as “moderately helpful” or “very helpful.” The results indicate the library’s current PBL resources provide effective asynchronous support, and librarian assistance is valued. Conducting a similar survey would benefit other librarians by systematically collecting student feedback to inform targeted improvements to library services.
Centering Ethics in Library Instruction for Human Flourishing in an AI Future
Brittany Harden, Point Loma Nazarene University; Robin Lang, Point Loma Nazarene University
This conference session will demonstrate how instructional services librarians can address the challenge posed by generative AI by implementing a virtues-based framework in their research instruction. Generative AI has had a profound impact on higher education in a short amount of time, with updates and innovations emerging almost daily. While opinions vary widely about the potential benefits and drawbacks of AI use, many educators in higher education are concerned about the decline of students’ critical thinking abilities and the dilution of learning outcomes. Additionally, several AI companies have eliminated their ethics teams, leaving the weighing of ethical considerations up to the end user. Some generative AI companies claim to have designed a more ethical product (such as Anthropic's Claude, which is marketed as safe, accurate, and secure), but no single generative AI product can replicate the wisdom possible through ethically centered human intelligence. Further, many educators are concerned that the same problems plaguing search engines (i.e., algorithmic bias, personalized results) are replicated in generative AI outputs, yet many students use generative AI without considering its ethical implications.
This presentation will demonstrate how to center human intelligence in the form of ethics in a learning environment rife with the temptations offered by AI. Teaching a virtues-based curriculum aids students in developing habits of mind, attitudes, and dispositions that will equip them to slow down and more carefully evaluate how best to use generative AI in their research and studies overall. This presentation offers hope that, despite the overwhelming effect AI has had in higher education, librarians can mitigate negative outcomes by grounding information literacy instruction within an ethical structure that implements virtues through practical application. This presentation will outline how instructional services librarians collaborated with a composition program to update their information literacy curriculum around a virtues-based framework to mitigate the problems posed by generative AI.
Our academic library has been collaborating with our university’s college composition program for over three decades, but recent changes involving a new master’s in writing program and a new director of the college composition program provided an opportunity for a more integrated and robust collaboration, while the introduction of generative AI also called for an updated curriculum. The Instructional Services Librarians teach two library research sessions for each section of ENG 1010, a required GE course for all freshmen. The newly adopted core course text used in ENG 1010 applies virtues to writing, research, and rhetoric.
The three virtues centered in both the college composition course and the two library research sessions are ‘humble listening,’ ‘loving argumentation,’ and ‘hopeful timekeeping.’ These virtues are introduced in the college composition class during the first unit on ‘writing as conversation.’ Librarians then build on this foundation by introducing the ACRL Information Literacy frame ‘research as conversation.’ Further, librarians mapped these virtues to several additional frames from the ACRL Framework for IL, which, in turn, structure each research session. The virtue ‘humble listening’ is paired with ‘research as conversation’; ‘loving argumentation’ is contextualized within ‘authority is constructed and contextual’; and ‘hopeful timekeeping’ is mapped to ‘searching as strategic exploration.’
Students are primed to develop a stronger set of information literacy attitudes and dispositions when key concepts from the Framework are grounded with virtues, better equipping students who are grappling with how to effectively implement generative AI into their education and daily lives. Attendees will be given time during the active learning section of the session to consider and discuss ways to incorporate virtues as applied to the ACRL IL Framework into their instruction, therefore centering human intelligence in an environment forever changed by generative AI. This presentation will accomplish two learning outcomes: 1) Attendees will be able to map key concepts from the ACRL Framework for Information Literacy to virtues already inherent in their library work. 2) Attendees will identify practical strategies for implementing several virtues and their accompanying IL key concepts into a library research session.
AI SWOT
Amy Catania, Contra Costa College; Erica Watson, Contra Costa College
In the evolving landscape of education, simply talking about AI one-on-one is far from sufficient. Instead, we need to center experiential learning and a greater conversation that fosters curiosity, leads to action, and creates opportunities for further inquiry into the ever-changing technologies we encounter almost daily. Our presentation will feature the many different avenues we have explored and discuss the successes and challenges we have encountered along the way. One of the activities we will highlight is our highly successful AI jumpstart workshop, where students explore different AI platforms and discuss their experiences using AI; the content is fluid and develops organically based on the attendees (20 students attended our last workshop). We will also share the successes and challenges of our three-part series on AI, which features the showing of a documentary, a library-hosted discussion panel with faculty, staff, and students, and a guest speaker who researched AI in education during her sabbatical. In addition, we will talk about a great positive from our information literacy class: two discussions, one comparing ChatGPT with OneSearch (which we call “Search the Library”) and one comparing chatting with an AI program or bot versus chatting with a human. We will also discuss a very eye-opening experience interviewing students about their own encounters with AI. These are just a few of the many programs, events, and opportunities with which we have experimented, and during our presentation, we will share the good, the bad, and the ugly. We will also provide opportunities for small-group discussion via breakout rooms.
Do We Have Anything to Say About It? Why and How to Critically Evaluate Embedded AI Tools in Library Databases
Jen Pesek, Santa Clara University
What do librarians have to say about AI tools in library databases? And why does it matter? Several frameworks, critical reflection tools, and rubrics have been developed by librarians over the last few years to support evaluation of GenAI tools from an information literacy perspective. This is part of our profession’s collective work to expand and deepen AI literacy. However, most AI evaluation tools treat AI functions as a process separate from and outside of library databases. Their evaluative questions tend to assume that the prospective user has the choice to use or reject the tool, or to use it only for specific purposes.
In the past year, major library database publishers have rushed to incorporate AI tools into their databases. And many institutions, not wanting to be left out of the AI revolution, have rushed to adopt them. Because this change is so rapid, librarians have not had time to evaluate, or even develop evaluative measures of, these AI tools before they are expected to learn them, use them, and teach their use to students. AI tools are still being developed as they are being rolled out, and expectations about what makes a product “finished” enough for release are expanding in this experimental environment. Therefore, it is more important than ever that librarians evaluate embedded AI tools in library databases so that they can guide students to understand how they work, what they can and cannot do reliably, how effective they are, and where their limitations lie.
In response to this need, the presenter created an evaluation rubric focused on AI in library databases, with an accompanying set of testing criteria. Based on a synthesis of various frameworks in our field, the rubric and criteria were implemented at our library’s instructional retreat this year as a first step towards developing a community voice about AI implementation. During the retreat participants got a chance to try out the rubric, do some testing, and discuss the role of librarians in the development of AI in library databases.
This presentation will introduce librarians to the special capabilities, issues, and concerns of AI tools embedded in library databases. It will allow participants to discuss and reflect on their own experiences with AI, and help them consider if and how they would like to engage in evaluative work. The presentation will guide them through the evaluative rubric developed by the presenter and give them a chance to try it out. Lastly, the session will conclude with practical tips and best practices to help librarians engage in the larger discussion of AI use in library databases.
Bringing Librarian Ethics to the AI Discussion
Dr. Mary Jordan, St. Cloud Technical and Community College
Artificial intelligence (AI) is becoming increasingly significant in academic life for students, staff, and faculty. Academic librarians are uniquely positioned to help steer AI conversations across the campus community as its members begin to encounter and to use AI in different ways.
The core values of librarians are access, equity, intellectual freedom and privacy, public good, and sustainability. These are also key issues in the ethical and effective use of AI tools. In addition to providing instruction for community members in the use of AI tools, librarians can be instrumental in working across the campus community to establish a campus-specific set of ethical standards that speak to the needs of the institution.
In this presentation, we will discuss some basic strategies for building these connections and starting conversations across campus. We will walk through some procedures for bringing academic values to the process of setting up AI use procedures, for individual instructional areas or classes and for departments or the school as a whole. And we will look at some templates currently in use for building ethical standards in higher education. Throughout the process of developing any strategy for effectively using AI in education, the importance of centering human judgment and evaluation will be emphasized.
Inductive Learning as an Approach in Information Literacy (IL) Instruction Integrating the Utilization of Artificial Intelligence (AI) Tools
Wil Weston, San Diego State University
There are several significant challenges to information literacy (IL) instruction. The largest of these challenges is the problem of misinformation and, now, the potential for disinformation to be propagated by artificial intelligence (AI). AIs like ChatGPT are large language models (LLMs) trained on an enormous textual corpus. The problem with LLMs being trained on such a corpus is that “discrimination is also embedded in computer code and, increasingly, in artificial intelligence technologies that we are reliant on” (Noble, 2018). However, inductive learning may provide a useful approach to information literacy when discussing AI and misinformation, helping students identify when biases are being replicated in AI results. Inductive learning has been used as a psychological intervention designed to improve true and fake news discernment (Modirrousta-Galian, Higham, & Seabrooke, 2023), and inductive learning in media literacy interventions has been found to affect their efficacy (Motz, Fyfe, & Guba, 2023). The approach with AI technologies would be the same as that used when examining media bias: students learn by doing, and by confronting their particular experiences with this technology they may learn not to take AI results at face value.
Modirrousta-Galian, A., Higham, P. A., & Seabrooke, T. (2023). Effects of inductive learning and gamification on news veracity discernment. Journal of Experimental Psychology: Applied, 29(3), 599–619. https://doi.org/10.1037/xap0000458
Motz, B. A., Fyfe, E. R., & Guba, T. P. (2023). Learning to call bullshit via induction: Categorization training improves critical thinking performance. Journal of Applied Research in Memory and Cognition, 12(3), 310–324. https://doi.org/10.1037/mac0000053
Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.
AI Literacy’s Hidden Prerequisite: Why Having Information Literacy Is Essential For Understanding And Using AI
Sophia Mosbe, Santa Clara University
How does one become AI literate? This question echoes across industries, disciplines, and institutions as professionals attempt to define the skills necessary to navigate an AI-saturated world. While terminology and qualifiers may vary, one foundational point remains consistent: AI literacy cannot exist without information literacy.
The American Library Association defines information literacy as “a set of abilities requiring individuals to recognize when information is needed and have the ability to locate, evaluate, and use effectively the needed information.” These core abilities form the essential scaffolding for more advanced competencies. Without them, learners lack the critical thinking, evaluative habits, and contextual awareness required to engage meaningfully with generative AI systems.
As educators and information professionals, we cannot skip to the ‘advanced chapter’ simply because societal or institutional pressures emphasize rapid AI adoption. When foundational literacies are underdeveloped, learners are more likely to misunderstand AI outputs, misapply tools, or overestimate system capabilities. Conversely, strong grounding in information literacy primes individuals to approach AI with a critical, informed, and intentional mindset.
Instead of treating information literacy and AI literacy as separate or competing frameworks, I assert that we should conceptualize them as a continuum. AI literacy emerges as an advanced extension of information literacy—building upon the same analytical skills, evaluative practices, and ethical considerations, but applying them within the context of algorithmic systems and machine-generated content.
This 10-minute lightning talk argues for an integrated model in which AI literacy is not taught in isolation but situated within the established principles of information literacy. By reinforcing foundational competencies, instructors can equip learners not only to use AI tools effectively but also to understand their limitations, navigate their biases, and engage with them responsibly.
CARL Grant Recipients
A Scoping Review and Bibliometric Analysis of Evidence Synthesis in Human Factors Engineering: A Journal-Set–Restricted Search
Yuqi He, San Jose State University; Delaney Ubellacker, San Jose State University
Evidence synthesis, including scoping and systematic reviews, is increasingly important in engineering, yet its use in Human Factors Engineering (HFE) is not well understood. This work-in-progress project uses a journal-set–restricted search to map evidence synthesis in HFE, assess methodological application, evaluate adherence to reporting guidelines, and identify research trends. Findings will address a key gap in understanding evidence synthesis practices in HFE.
“Some Light of Joy”: Using Collective Biography to Investigate Academic Librarian Well-Being in the Workplace
Katherine Luce, Cal Poly Maritime Academy; Margot Hanson, UC Berkeley; Annette Marines, UC Santa Cruz; Cameron Bluford, Los Medanos College; Annie Pho, University of San Francisco; Tamar Kirschner, Diablo Valley College
The existing literature on toxicity and low morale in library workplaces is extensive. There is little discussion, however, of the opposite experience in academic libraries: well-being, and even joy, for workers.
A body of recent writing addresses workers’ affective experiences and the need to bring joy to workplaces, focusing on medical settings affected by the intractable challenges of COVID-19, worker shortages, and grueling and dispiriting work, as well as on academic libraries. Efforts to promote joy at work are often initiated from a top-down management perspective. Writers propose joy as a means to make workers more productive, with or without their direct input. Increasing workers’ autonomy and input, and facilitating interpersonal connections, can improve well-being, but do these changes bring the alluring concept of joy within reach?
Work by Gannon and others using collective biography on experiences in academic workplaces found that joy was interstitial, localized in isolated moments (2019). The methodology of collective biography offers a path to a deeper qualitative understanding of individuals’ experiences in their historical context, well-suited to illuminating librarians’ shared and individual environments (Cowman, 2022; Gannon et al., 2019). The focus on joy reflects how far many workplaces are from inspiring joy, how difficult it is to change the work itself, and how humans crave the experience of transcendent delight. As technology and business imperatives, including “AI,” change workplaces, collective biography offers a human-centered research method that places embodied experience at the center of its investigations.
Collective biography often involves a closely connected group of participants who conduct their research over years, meeting for research retreats lasting several days and developing the collective biographies as co-researchers through an iterative, democratic process. This methodology provides opportunities to explore subjective topics that are not well-suited to traditional social science approaches, but it is extremely resource- and time-intensive, so it is not feasible for many researchers.
This project has two goals: to better understand the subjective, embodied experiences of academic librarians around the concept of joy in the workplace, and to test and evaluate the methodology of collective biography with a less deeply connected group of participants, briefer research retreats, and a shorter time frame than is usual for this methodology.
An initial one-day collective biography retreat with six academic librarian research participants, selected to represent a range of institutions and demographics, took place in July 2025, funded by a Cal Poly Maritime Academy Research, Scholarship and Creative Activity grant. The researchers have received a CARL Research Grant to allow an additional daylong collective biography workshop to take place in early 2026. During the workshops, the researcher-participants collect embodied memories of joyful experiences connected to their work and then discuss their writings. The proposed conference panel will include as many of the participant-researchers as possible, who will discuss initial findings from the workshops, including the suitability of collective biography as a human-centered methodology, and embodied workplace experiences of joy.
Cowman, K. (2022). Collective Biography. In L. Faire & S. Gunn (Eds.), Research Methods for History (pp. 85–103). Edinburgh University Press. https://doi.org/10.1515/9781474408745
Gannon, S., Taylor, C., Adams, G., Donaghue, H., Hannam-Swain, S., Harris-Evans, J., Healey, J., & Moore, P. (2019). “Working on a rocky shore”: Micro-moments of positive affect in academic work. Emotion, Space and Society, 31, 48–55. https://doi.org/10.1016/j.emospa.2019.04.002
Carrie Fry, University of San Diego; Jennifer Bidwell, University of San Diego
In this talk, we will discuss personalizing a large language model (LLM) tool to develop a customized AI space that is primed and pre-prompted to reflect your core values. Most large language models include personalization options, which can be used to establish our core professional identity and values. Using our disciplinary specializations, business and nursing, we will demonstrate how we customized Gemini Gems and ChatGPT Projects for our professional outputs. Example uses include lesson planning for information literacy instruction, survey development, and searching for gaps or blind spots in our professional output. We will also discuss the importance of disclosing AI use and show the AI Disclosure Framework as a possible workflow tool.
Learning Objectives
By the end of this workshop, participants will be able to: