Day 1 (August 25)
All times are US Eastern. Click on time to find the time in your local time zone.
Presenters: Souvick Ghosh, Andrew Cox, Lingzi Hong, Darra Hoffman
This panel will provide a comprehensive introduction to artificial intelligence, demystifying the concept and clarifying its various dimensions. Attendees will learn about the history of AI, key terminologies, and the different types of AI systems, including narrow AI, general AI, and superintelligent AI. By exploring real-world applications and potential future developments, participants will gain a clear understanding of what AI is, what it can do, and how it is shaping our world.
Presenter: Andrew Cox
AI is likely to significantly impact libraries both directly through uses by the library and indirectly through how it changes user behavior. As a result, the range and depth of impacts is hard to fully understand, but it is likely to affect all library operations and library roles. The session seeks to describe some of this context. It will also explore the drivers of change and the challenges, including of defining responsible AI in the library context.
The session will also include horizon scanning to discuss how AI might evolve in the near term and how to monitor such changes.
Learning Objectives:
- To appreciate the range of library uses of AI and indirect impacts
- To evaluate potential uses of AI in the participant’s context, taking into account benefits and challenges
- To understand how to effectively monitor the changing scene and its implications
Presenter: Vishnu Pendyala
The next-generation artificial intelligence (AI) is evolving from generating incredible content into enabling entirely autonomous systems. The rise of agentic artificial intelligence and autonomous systems ushers in a new era of innovation, but it also introduces significant challenges. These intelligent systems, capable of independent decision-making and adaptive behavior, amplify existing concerns and pose unique issues. Key challenges include ethical considerations, such as mitigating bias, ensuring accountability, and aligning autonomous actions with human values. Technical barriers, like improving interpretability and scalability, remain critical for trust and reliability. The environmental toll of energy-intensive models and the protection of data privacy further complicate development. Additionally, safety and control mechanisms are paramount to managing risks associated with highly autonomous systems. The economic and societal impacts, including workforce disruptions and potential misuse, require forward-thinking policies and interdisciplinary collaboration. Addressing these hurdles is essential to unlock the transformative potential of agentic AI and autonomous technologies responsibly. The talk will give insights into these aspects.
Presenter: Souvick Ghosh
This session will cover the fundamental concepts and techniques of machine learning (ML). Attendees will learn about the different types of ML, including supervised and unsupervised, and the algorithms associated with each type. The session will introduce key concepts such as training data, model fitting, and prediction. Participants will also explore practical applications of ML – both code and no-code approaches – using real-world datasets. This course aims to provide a solid foundation in ML, equipping attendees with the knowledge to understand and leverage ML technologies in their respective fields.
Upon successful completion of this course, students will be able to:
- Understand the theory behind various machine learning algorithms and their application to real-world data problems
- Learn to develop machine learning algorithms with or without coding
- Analyze and compare machine learning tools and algorithms and apply them suitably for solving problems
- Create simple visualizations to explore and analyze data
Day 2 (August 26)
All times are US Eastern. Click on time to find the time in your local time zone.
Presenter: Claire Baytas, Ithaka S+R
Claire Baytas is a senior analyst on the Libraries, Scholarly Communication, and Museums team, working in the research enterprise program area. Her work at Ithaka S+R to date has focused on the effects of generative AI on teaching, learning, and research. Before joining Ithaka S+R, Claire completed a PhD in Comparative Literature at the University of Illinois, where she studied the relationship between memory and migration in contemporary literature and cinema and taught undergraduate humanities courses.
Presenter: Andrew Cox
The session builds on the Identifying opportunities and challenges session to dig deeper into library strategies for responding to AI. It will use tools such as policy analysis, force field analysis, capability modelling and roadmapping to engage participants in a process to define considerations for a strategy relevant to their context.
Learning Objectives:
- To appreciate the importance of library positioning in relation to relevant national, sectoral and institutional policy trends
- To be able to apply force field analysis to the issue of AI adoption
- To be able to apply AI capability modelling to their own context
- To understand the steps in designing a roadmap for AI in their context
Presenter: Brady Lund
This session presents key findings from the presenter’s recent research on the integration of artificial intelligence in higher education, offering practical insights for library and information professionals and setting the stage for further inquiry. The session will focus on three major areas of research related to the adoption of generative AI in academic environments:
- Algorithmic Transparency: We will examine the importance of understanding specifically how large language models work, emphasizing the need for transparency to protect students and reduce institutional risk.
- Copyright and Legal Risks: We will explore how the use of large language models by students and researchers could lead to violations of traditional copyright laws, potentially exposing universities and individuals to legal action.
- Academic Integrity: We will discuss how AI has shifted perceptions of plagiarism and cheating among students, and what this means for upholding academic standards.
These three studies will be linked through the broader concern of academic “brain drain” – the growing dependency on AI tools that may be eroding students’ and researchers’ independent thinking and intellectual engagement.
Presenter: Charlene Chou
In light of the growing use of machine learning (ML) and natural language processing (NLP) to improve the discoverability of digital resources, this presentation aims to assess the outcomes of several pilot tests conducted in recent years. These efforts, carried out in collaboration with data scientists, domain experts, and metadata librarians, focus on enhancing the discovery of academic resources, particularly through semantic search and human-centered AI. The pilot projects explored the use of various generative AI tools to support transliteration of multilingual resources, subject heading recommendation, and metadata remediation and generation. The results were mixed: some tools performed well, while others revealed critical issues such as hallucination and poor-quality recommendations.
This year, we conducted a pilot to strengthen interdisciplinary collaboration across domains, including faculty, students from the Center for Data Science, and metadata librarians, in alignment with NYU’s strategic pathways. Effective communication and mutual training are key to achieving shared goals, in alignment with the NYU Center for Responsible AI’s mission to make “responsible AI” synonymous with “AI.” Beyond NYU’s initiatives, this presentation also offers an environmental scan of global AI and metadata projects and outlines strategic directions for future research.
Presenter: Lingzi Hong & Xiaoying Song
In public and academic libraries, librarians often face questions across a wide range of topics from patrons of diverse backgrounds and with varying levels of literacy. This includes addressing health misinformation, offering mental health support, and navigating sensitive or high-stakes conversations with patrons. These situations require not only domain-specific knowledge but also adaptive communication strategies, and can be challenging even for experienced librarians.
In this talk, we present the design and development of a large language model (LLM) powered chatbot aimed at supporting librarians in this role. The application is capable of retrieving evidence-based information from health knowledge bases and generating tailored, trustworthy responses appropriate for patrons with different levels of health information literacy. The model-generated responses can serve as references to support librarians in responding to various inquiries. In this specific health misinformation debunk application, we showcase how the chatbot generates nuanced counterspeech to debunk health misinformation while adjusting its language style, tone, and complexity to meet users' needs and understanding levels. By integrating domain expertise with conversational adaptability, this tool offers an approach to enhancing information services in libraries and supporting librarians. Through hands-on demonstrations and real-world examples, librarians will discover how to deploy these advanced AI tools to effectively assist communication-related services.
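The retrieve-then-respond pattern described above can be illustrated with a small sketch: look up evidence for a claim in a knowledge base, then adapt the reply's tone and complexity to the patron's literacy level. All names, data, and the keyword-overlap retrieval here are hypothetical placeholders; a production system would use an embedding index and an LLM rather than string matching.

```python
# Hypothetical mini knowledge base of debunking evidence.
KNOWLEDGE_BASE = {
    "vaccines cause autism": "Large epidemiological studies have found no link between vaccines and autism.",
    "antibiotics treat viruses": "Antibiotics act on bacteria; they are not effective against viral infections.",
}

def retrieve(claim: str):
    """Return the evidence entry whose key shares the most words with the claim."""
    words = set(claim.lower().split())
    best_key, best_overlap = None, 0
    for key in KNOWLEDGE_BASE:
        overlap = len(words & set(key.split()))
        if overlap > best_overlap:
            best_key, best_overlap = key, overlap
    return KNOWLEDGE_BASE[best_key] if best_key else None

def respond(claim: str, literacy: str = "general") -> str:
    """Compose a counterspeech reply, adjusting wording to the literacy level."""
    evidence = retrieve(claim)
    if evidence is None:
        return "I could not find evidence on that claim; a librarian can help investigate further."
    if literacy == "plain":
        return f"That claim isn't supported. In short: {evidence}"
    return f"According to the evidence base, this claim is inaccurate. {evidence}"
```

In the talk's chatbot, the retrieval step queries evidence-based health knowledge bases and the response step is generated by the LLM, but the division of labor is the same.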
Day 3 (August 27)
All times are US Eastern. Click on time to find the time in your local time zone.
Presenter: Alamir Novin
How is AI changing the way people look for and think about information? Given the ubiquity of AI, how can educators and librarians focus their efforts on compensating for the limitations of AI systems? Alamir Novin will share findings from his lab's studies and experiments on the effects of AI on student learning. Novin applies the findings in his classroom using software he developed and released free for everyone to use.
Presenter: Darra Hoffman
Despite their many differences, AI and blockchain have similar lessons for librarians in the adoption of new technologies. Both AI and blockchain have been extremely hyped, with enthusiasts heralding them as revolutions. However, for librarians, archivists, and other information professionals tasked with evaluating the suitability of AI, blockchain, and other emerging technology solutions for their work, it is necessary to develop the ability to critically evaluate the affordances, constraints, and impacts of such solutions, and to understand their role in the broader information ecosystem. In this session, participants will look at these two technologies to identify affordances and constraints based on both the design of solutions and the requirements, examining their intersection through the problem of data privacy.
Learning Objectives:
- Evaluate the affordances and constraints of emerging technologies, including AI and blockchain.
- Examine the impact of solution design on the appropriateness of the solution to particular information problems.
- Apply technology evaluation to a particular information problem.
Presenter: Lingzi Hong & Jinyu Liu
Large language models (LLMs) have demonstrated superior performance across a wide range of natural language processing tasks, including content comprehension, classification, and summarization. Their growing adoption across domains has sparked interest among information professionals and researchers in exploring their potential for library information organization, particularly in cataloging tasks.
In this session, we explore how LLMs can be adapted and applied to support cataloging workflows, such as subject analysis and classification. Beginning with an overview of the recent development of LLMs, we will highlight their potential in cataloging tasks, including classification and subject analysis to enhance metadata. Next, we demonstrate techniques to train LLMs for subject analysis tasks, including prompt engineering, fine-tuning, and retrieval-augmented generation. Additionally, we introduce techniques such as post-processing and normalization to further optimize our framework’s performance. Finally, we discuss opportunities and challenges in integrating these models into library systems. Through practical examples and experimental findings, we highlight how LLMs can enhance efficiency in cataloging while supporting the evolving role of metadata professionals in the age of AI. This session will equip librarians and researchers with the knowledge of LLMs and basic training methods to leverage LLMs for library organization tasks.
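Two of the steps named above, prompt engineering and output normalization, can be sketched in a few lines. The prompt template, example records, and controlled vocabulary below are hypothetical; a real workflow would send the assembled prompt to an LLM and validate its answer against an authority file such as LCSH.

```python
# Hypothetical few-shot examples: (title, assigned subject heading).
FEW_SHOT_EXAMPLES = [
    ("Introduction to Quantum Computing", "Quantum computing"),
    ("A History of the Roman Empire", "Rome--History"),
]

# Stand-in for a controlled vocabulary / authority file.
CONTROLLED_VOCABULARY = {"quantum computing", "rome--history", "machine learning"}

def build_prompt(title: str) -> str:
    """Assemble a few-shot prompt asking the model for one subject heading."""
    lines = ["Assign one subject heading to each title."]
    for ex_title, heading in FEW_SHOT_EXAMPLES:
        lines.append(f"Title: {ex_title}\nSubject: {heading}")
    lines.append(f"Title: {title}\nSubject:")
    return "\n\n".join(lines)

def normalize(raw_heading: str):
    """Post-process a model answer: trim, lowercase, keep only vocabulary terms."""
    cleaned = raw_heading.strip().rstrip(".").lower()
    return cleaned if cleaned in CONTROLLED_VOCABULARY else None
```

Rejecting headings outside the vocabulary is one simple guard against the hallucinated or poor-quality recommendations noted in the earlier pilots.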
Presenter: Satanu Ghosh
As we continue to develop increasingly sophisticated intelligent systems—ranging from classic machine learning models to general-purpose AI agents—we face growing challenges in evaluating their capabilities meaningfully. Traditional benchmarks, once considered gold standards, are now frequently surpassed, gamed, or rendered obsolete. This fragility raises serious questions about our ability to measure progress in a fast-evolving landscape.
In this session, we’ll explore why evaluation metrics and benchmarks are struggling to keep pace with advances in AI, and why “better than human” scores often fail to reflect real-world utility or alignment with our goals. Drawing on examples from both traditional supervised models and generative systems, we’ll uncover how benchmarks can mislead, how overfitting to static tests can mask brittleness, and how evaluation metrics can be decoupled from practical success.
We’ll then turn to more promising directions, including dynamic evaluation setups, human-in-the-loop assessments, real-world task validation, and customized evaluation frameworks aligned with deployment contexts. Whether you're working on classifiers, recommendation engines, reinforcement learning agents, or generative AI, this session will help you rethink evaluation from the ground up.
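The gap between static benchmark scores and real-world robustness can be shown with a toy example: a model that scores perfectly on a fixed test set yet fails once inputs are lightly perturbed. The "model" here is a deliberately brittle keyword rule, and all data is made up for illustration.

```python
def brittle_classifier(text: str) -> str:
    """Labels a review positive only on an exact lowercase keyword match."""
    return "positive" if "great" in text else "negative"

# Static benchmark: the classifier gets everything right.
STATIC_TEST = [("this is great", "positive"), ("this is bad", "negative")]

# Perturbed inputs: a simple case change breaks the keyword rule.
PERTURBED_TEST = [("this is GREAT", "positive"), ("this is bad!", "negative")]

def accuracy(model, dataset) -> float:
    return sum(model(x) == y for x, y in dataset) / len(dataset)

static_acc = accuracy(brittle_classifier, STATIC_TEST)      # 1.0 on the static set
robust_acc = accuracy(brittle_classifier, PERTURBED_TEST)   # 0.5 after perturbation
```

Overfitting to the static set masks the brittleness; only the perturbed evaluation reveals it, which is the motivation for the dynamic and task-based approaches discussed above.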
By the end of this session, participants will be able to:
- Identify the limitations of traditional benchmarks in evaluating modern AI systems.
- Analyze cases where evaluation metrics failed to reflect real-world performance or robustness.
- Compare and contrast different evaluation approaches—such as static testing, task-based performance, and interactive/human-in-the-loop methods.
- Design an evaluation strategy that aligns with specific goals, stakeholders, or deployment contexts.
- Reflect on the broader ethical and societal implications of what we choose to measure—and what we ignore—when evaluating AI.
Day 4 (August 28)
All times are US Eastern. Click on time to find the time in your local time zone.
Presenter: Jiangen He
More Information Coming Soon
Presenter: Satanu Ghosh
By the end of this session, participants will be able to:
- Explain the core principles behind generative language models, including how they are trained and how they generate text
- Differentiate between statistical language modeling and human intelligence, and understand why LLMs are not truly "intelligent" in a human sense
- Identify common limitations and pitfalls of generative models, such as hallucinations, bias, and context sensitivity
- Evaluate real-world applications of generative language models across various domains (e.g., education, creative industries, research, coding)
- Apply ethical and practical considerations to the use of generative models in their own work or studies
- Formulate strategies for using these models effectively, including prompt design, critical evaluation of outputs, and human-in-the-loop workflows
Presenter: Souvick Ghosh
More Information Coming Soon
Presenter: Norman Mooradian
The presentation identifies central issues in the ethics of artificial intelligence, explains their relation to different conceptions of AI (narrow, general), and traces their evolution from information ethics. It then reviews AI ethics principles and key ethical concepts and discusses the importance of using these frameworks and concepts in policy development and decision making.
Learning Objectives:
After attending this presentation, participants will be able to:
- Identify and explain central AI issues and risks as they impact individuals and society.
- Identify ways in which AI issues and risks arise in LIS contexts.
- Use AI ethical principles and concepts to analyze issues when formulating policy or supporting decisions.