AI and User Data
by Robert Wilson
In January 2022, my role changed to Systems and Analytics Librarian at Middle Tennessee State University’s Walker Library. In this role, I dedicate time to reporting, including formalizing and automating past data analytics and data visualization work as well as developing predictive analytics. This work includes improving and automating descriptive reports and creating predictive models of future outreach and collection development needs that incorporate Machine Learning (ML) tools and techniques. While this work will be invaluable for identifying underserved students, outreach needs, and collection development priorities, it is also invaluable for demonstrating the library’s positive impact on student success, a major initiative at MTSU and at so many other higher education institutions, as we must consistently demonstrate our value to university administration.
Much of this work involves user data. I’ve had reservations about using this type of data and how it might undermine library privacy values. This work is related in some ways to Learning Analytics (LA), a growing area of predictive analytics that measures and collects information at numerous data points revolving around individual students. There is much debate about the kind of environment this is creating in education and about the risks and rewards of this evolving practice.
Talking about my project and past work in this area at the IDEA Institute for AI with the PIs, instructors, and other IDEA Institute Fellows over the course of the six-day intensive workshop was invaluable for better understanding the ethical concerns around this work and other types of LA work. Not only did we participate in group and individual activities discussing and building our visions for our individual projects, we also talked about the ethical issues of AI, including its inequities and the many ways bias can creep into and subvert AI work, whether intentionally or not. We also examined the power structures of AI through concepts informed by Catherine D’Ignazio and Lauren F. Klein’s book, Data Feminism, and, with Shalini Kantayya’s 2020 documentary, Coded Bias, we reviewed AI technologies in use by governments and in the commercial realm that are actively discriminating against groups of people and undermining an individual’s right to privacy in a democratic society.
For my project, thinking about these problems and evaluating the use of data sources within this context is extremely important. Increased integration of different data sources, or inappropriate use of this data, demands constant monitoring, feedback, and review of how this data is being used versus how it should be used ethically.
We’re in the midst of a major shift in how AI is used in our daily lives, with little to no regulation in place on the appropriate use of these technologies. While I’m hopeful that will change as government regulation catches up with technological advancement, in the meantime libraries must be among the leaders in ensuring we do not use this technology inappropriately and in informing as many of our community members as possible of both the opportunities and the risks of AI.
Becoming an IDEA Institute Fellow has greatly aided my understanding of the need for libraries to be a voice on this subject. Working with other fellows and learning how AI and ML techniques and tools fundamentally work makes this all the more possible. I look forward to continuing to learn from other participants and the program’s instructors as I work on my project, and I hope to be a voice at my institution on the appropriate use of AI.