Ethics, Care, and AI in Practice: Reflections from ASIS&T 2025 – Elaine Kong
This is part 2 of our blog series sharing reflections from the 2025 NEASIS&T Conference Support Award recipients, offering a glimpse into the value and impact of the 2025 ASIS&T Annual Meeting. Through their experiences, we aim to highlight the learning, networking, and growth that make this event so worthwhile.
Attending the 2025 ASIS&T Annual Meeting both in person and virtually was a meaningful and rewarding experience that supported my intellectual growth. The conference structure allowed me to engage across multiple days and contexts, extending my learning beyond session summaries to substantive reflection on key issues such as ethics, care, narrative, and inclusive design. As a doctoral student focusing on health information behavior and digital equity, I came away with a clearer sense of how my work fits into ongoing scholarly dialogues about responsibility, agency, and human dignity in information systems.
Two workshops were especially influential in shaping my perspective. The virtual workshop Best Practices for Ethics of Care When Engaging Vulnerable Communities prompted me to reconsider how ethical responsibility functions in research beyond formal protocols. The discussion emphasized that ethical decision-making involves ongoing attention to the lived experiences and emotional realities of participants, especially those from vulnerable groups. This approach resonated deeply with my research on equitable health information access for cancer survivors, who often encounter power imbalances and emotional complexities when navigating digital information environments. I left the workshop with a richer understanding of how care, accountability, and reflexivity can be integrated throughout research design and practice.
The in-person workshop Multivocal Writing: A Workshop on the Thematic Narrative shifted how I think about academic communication. Rather than presenting findings in a single authoritative voice, the workshop encouraged scholars to weave multiple perspectives into thematic narrative structures that honor the voices of participants, communities, and researchers equally. This idea has significant implications for my work, which aims to foreground patient experiences with AI-mediated health information without reductive abstraction. The workshop inspired me to think more critically about how narrative forms can shape the visibility and legitimacy of experiential knowledge.
I also co-presented a short paper titled Measuring Socio-Ethical Engagement with Generative AI: Scale Development via Exploratory Factor Analysis. Presenting this work allowed me to engage directly with peers about how ethical engagement with AI can be conceptualized and measured. The feedback I received helped me refine my thinking about balancing quantitative rigor with qualitative depth, particularly when dealing with complex constructs like trust, fairness, and perceived ethical relevance. These conversations reaffirmed the importance of methodological pluralism in my future research.
In addition to workshops and my own presentation, several panel sessions anchored my reflections. The session Brenda Dervin’s Sense-Making Methodology: What Has Been Achieved and Why It Matters Now? reminded me how foundational sense-making continues to be in the study of information behavior. Revisiting Dervin’s work in this contemporary context helped clarify for me how people interpret emotionally charged health information, particularly when mediated by technology. This session encouraged me to consider how sense-making theory might inform the development of tools and systems that bridge personal human contexts with machine processes.
Another session, A Critical Dialogue on Ethics and Practices for Digital Research with “Difficult” to Reach Populations, brought forward nuanced ethical challenges associated with digital research involving communities that are often overlooked or underserved. The panel raised important questions about recruitment, consent, representation, and digital equity that I had not fully considered before. Hearing from researchers who navigate ethical tensions in real-world projects expanded my view of what responsible research looks like when traditional recruitment and engagement avenues are unavailable or inappropriate.
I also attended paper and poster sessions related to health information behavior and AI. Two papers that stood out were I Post Because I Have Been Down This Road for So Freaking Long: Reading and Sharing of Personal Narratives among COVID Long-Haulers and Navigating Barriers: Disability, Healthcare Information Seeking, and AI-Enabled Chatbots. Both emphasized how information practices are deeply contextual, embodied, and socially situated. These presentations helped me see that information systems are not neutral technologies but are shaped by human meaning, emotion, and experience. In particular, they reinforced my belief that patient stories, user experiences, and socio-ethical concerns should be central, not peripheral, to the design and evaluation of AI tools in health contexts. The poster session also allowed me to engage in informal but meaningful conversations with other scholars. Discussing poster projects related to health information and AI helped me reflect on how research ideas evolve through dialogue. These interactions strengthened my sense of belonging and reminded me that knowledge in LIS is shaped through ethical reflection.
Beyond formal sessions, networking also played an important role in my experience. I attended the SIG-HLTH dinner and the lunch reception co-sponsored by NEASIS&T, SIG-USE, and SIG-AI, where informal conversations with researchers and practitioners enriched my understanding of emerging work at the intersection of health, user studies, and AI. These interactions helped me situate my research within communities of practice and gave me confidence that my work contributes to important ongoing conversations.
Overall, ASIS&T 2025 was a deeply reflective experience that strengthened my commitment to conducting ethically grounded, human-centered research in information science. The conference encouraged me to move beyond viewing AI as a purely technical system and toward understanding it as a sociotechnical practice shaped by values, power, and care. I leave the conference with a clearer sense of how my work can contribute to advancing equity and responsibility in health information systems, and with renewed motivation for my future academic journey. I am grateful to NEASIS&T for their travel support, which made it possible for me to attend ASIS&T 2025 and fully engage in these scholarly exchanges.