WEDNESDAY, 14 NOVEMBER
9:00 am – 5:00 pm
Big Metadata Analytics: Setting A Research Agenda for the Data-Intensive Future
Jian Qin, Syracuse University; Chaomei Chen, Drexel University; Jeff Hemsley, Syracuse University; Dietmar Wolfram, University of Wisconsin–Milwaukee
Big metadata from research data repositories, catalog systems, and indexing databases is a unique data source for studying collaboration, the history of science, knowledge diffusion, and many other phenomena emerging from the knowledge creation process, and it offers opportunities for building theories and methodologies for a new research area. The quality of big metadata and its readiness for analysis, however, are major obstacles to using this vast data source for research. This workshop will bring together researchers who have used or are using big metadata in their projects to share their research methods and findings. Through group discussions, participants will develop a research agenda for big metadata analytics.
8:00 am – 12:00 pm
Building a Foundation for Integrating AI and Text Analytics
Tom Reamy, KAPS Group, United States of America
While new AI techniques are generating a lot of press, they have some severe limits when applied to text rather than pattern-based perception. These limits can be overcome with the addition of a range of text analytics techniques – text mining, machine learning, noun phrase extraction, auto-categorization, auto-summarization, and social media/sentiment analysis. The essential trick is how to integrate two very disparate fields that barely speak the same language. This workshop, based on the recent book, Deep Text: Using Text Analytics to Overcome Information Overload, Get Real Value from Social Media, and Add Big(ger) Text to Big Data, will take attendees through the entire process of creating a text analytics foundation that provides the means for that integration. The workshop will include exercises designed to deepen the participants' appreciation for the practical process of building text analytics applications and, at the same time, exemplify some of the key theoretical issues.
1:00 – 5:00 pm
Deep Learning for Social Media Processing
Muhammad Abdul-Mageed, University of British Columbia
Deep learning, an approach to machine learning inspired by information processing in the human brain, has recently broken records on a range of tasks (e.g., speech recognition, machine translation). Be it its use in self-driving cars or in health and well-being, the technology is transforming many aspects of our lives. Deep learning is also currently a multi-billion-dollar industry, and its applications are expected to have far-reaching impacts in many fields and domains, including those tightly related to information science and technology. Especially due to its large volume and availability, social media data lend themselves to a wide range of deep learning tasks. This tutorial will introduce the use of deep learning for processing social media, with a focus on tasks like emotion detection, sentiment analysis, and detection of fake news.
8:00 am – 12:00 pm
Developing a Meaningful and Sustainable Research Identity
George Buchanan & Dana McKay, University of Melbourne
This tutorial is focused on early career researchers and will address a range of issues of identity. It is not a branding workshop, rather it is aimed at helping novice researchers understand what is important to them about their research and reflect that in the way they publish. Participants will learn about the importance of author names and select their own with evidence-based guidance. They will learn–with examples and exercises–the ways in which publication titles reflect venue and author as well as publication, and how they can be used to attract the right reviewers and readers. Finally, participants will learn how to write abstracts that reflect not just their work, but their research ethos, further attracting the right readers and reviewers.
8:00 am – 12:00 pm
GitHub Pages as a Learning Management System for IS Education
Elliott Hauser, UNC Chapel Hill, United States of America
GitHub Pages is a simple yet powerful static website hosting service. With a little setup, it can be turned into an excellent learning management tool for a variety of information science subjects. By using GitHub's web-based collaboration flow, students can directly contribute posts to a collaborative website. Using a Markdown-enabled web editor, students can report on their progress on assignments, post personal reflections, or continue class discussions. The collaborative merging of pull requests introduces students to a valuable professional skill that project managers, designers, and, of course, developers will use daily in technical careers. This is an excellent addition to programming, database, data analysis, and other technical courses, and can even be used in non-technical courses if desired. Participants will be led through posting to a shared website and will leave with their own site, ready to customize for their own teaching.
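GitHub Pages sites are typically built with Jekyll, which collects Markdown files named `_posts/YYYY-MM-DD-title.md` with YAML front matter into a blog-style site. As an illustration of the student-posting workflow described above (a hedged sketch, not material from the tutorial; the site name, author, and post content are hypothetical), a post file can be generated like so:

```python
# Sketch: generate a Jekyll-style post for a GitHub Pages class site.
# The file name and front matter follow Jekyll's _posts convention;
# "class-site", the title, and the author below are made-up examples.
import datetime
import pathlib
import re

def slugify(title):
    """Lowercase the title and replace non-alphanumeric runs with hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

def write_post(site_dir, title, author, body, date=None):
    """Create _posts/YYYY-MM-DD-slug.md with YAML front matter."""
    date = date or datetime.date.today()
    posts = pathlib.Path(site_dir) / "_posts"
    posts.mkdir(parents=True, exist_ok=True)
    path = posts / f"{date:%Y-%m-%d}-{slugify(title)}.md"
    front_matter = (
        "---\n"
        "layout: post\n"
        f'title: "{title}"\n'
        f"author: {author}\n"
        "---\n\n"
    )
    path.write_text(front_matter + body + "\n")
    return path

# A student drafting a progress report before opening a pull request:
post = write_post("class-site", "Week 3 Progress", "student-a",
                  "This week I finished the database assignment.",
                  date=datetime.date(2018, 11, 14))
print(post.name)  # 2018-11-14-week-3-progress.md
```

In the workflow the tutorial describes, a file like this would be committed on a branch and merged into the shared repository through a pull request, at which point GitHub Pages rebuilds the site automatically.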
9:00 am – 5:00 pm
Information Experience Design: Uniting Information Research with Practice
Kate Davis, University of Southern Queensland; Elham Sayyad Abdi, Queensland University of Technology
This tutorial introduces the related concepts of information experience (IX) and information experience design (IXD). Information experience explores how people engage with information in a given context, while information experience design is an approach to designing interventions to improve user experiences of information, informed by information experience research.
Over the past several years, library and information organisations have adopted methodologies like design thinking to design their services, spaces, products, and programs. These methodologies put the customer at the centre of the design process, but do not necessarily focus on the information component of their experience. Information experience design bridges that gap by marrying design methodologies with our disciplinary knowledge about people's information experience to improve or enhance those experiences.
This tutorial will equip participants with tools and strategies for understanding users’ everyday information experience and designing interventions that enhance those experiences. Participants will work with real data to extract insights about a user’s information experience and use these insights in an information experience design process. Participants will leave this workshop with knowledge of the information experience research landscape, approaches to information experience research, an information experience design toolkit, and practical experience working through a design process.
9:00 am – 5:00 pm
Understanding, Visualizing, and Managing Research Data (SIG-DL)
Ekatarina Grguric & Nushrat Khan, McGill University
Data skills are growing in importance, and competencies vary widely across different areas of the information profession. The goal of this tutorial is to introduce early career practitioners to best practices and common strategies for working with, understanding, visualizing, and managing research data, while illustrating an approach to further developing these data skills. Knowing how to effectively collect data to answer a research question is one piece of the puzzle, and it is easy for early career practitioners to be overwhelmed by the abundance of tools available for working with the collected data. These tools often have steep learning curves and require significant time commitments. Rather than surveying many different tools, we have chosen to focus on a few and outline workflows that cover different kinds of data and best practices for managing it.
This tutorial is offering a $40 discount for regular registrants and a $100 discount for student registrants! Use discount codes DL40 (regular) or DL100 (student) at registration.
1:00 pm – 5:00 pm
Using Word2vec and Node2vec Algorithms in Information Science
Yi Bu, Indiana University; Yong Huang, Wuhan University; Ying Ding, Wuhan University
Word2vec is a group of related models used to produce word embeddings. These models are shallow, two-layer neural networks trained to reconstruct the linguistic contexts of words. Node2vec is an algorithmic framework for representation learning on graphs (networks): given any graph, it can learn continuous feature representations for the nodes, which can then be used for various downstream machine learning tasks. Words and networks are the main focuses of Word2vec and Node2vec, respectively. Learning how to apply and implement Word2vec and Node2vec is important for information scientists, because words and graphs (networks) are two typical objects of study in information science, such as the transcripts of a qualitative interview in a human information behavior study (words) and scholarly relationships extracted in scientometrics (networks). This tutorial will present the principles of the two algorithms, provide step-by-step hands-on instructions, and prompt discussion of how to apply them in real information science research.
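The two core ideas can be sketched in a few lines (an illustrative sketch, not the presenters' code; the example sentence and co-authorship graph are made up): Word2vec's skip-gram model trains on (target, context) word pairs drawn from a sliding window, while Node2vec turns a graph into "sentences" by taking biased random walks over nodes and then feeds those walks to the same word-embedding machinery.

```python
# Minimal sketch of the training data each algorithm consumes.
import random

def skipgram_pairs(sentence, window=2):
    """Yield (target, context) pairs from one tokenized sentence."""
    pairs = []
    for i, target in enumerate(sentence):
        lo, hi = max(0, i - window), min(len(sentence), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((target, sentence[j]))
    return pairs

def random_walks(graph, walk_length=4, walks_per_node=2, seed=0):
    """Uniform random walks (Node2vec with p = q = 1) over an adjacency dict."""
    rng = random.Random(seed)
    walks = []
    for start in graph:
        for _ in range(walks_per_node):
            walk = [start]
            while len(walk) < walk_length:
                neighbors = graph[walk[-1]]
                if not neighbors:
                    break
                walk.append(rng.choice(neighbors))
            walks.append(walk)
    return walks

# A tokenized interview snippet (words) ...
pairs = skipgram_pairs(["users", "seek", "information", "online"], window=1)
# ... and a tiny, hypothetical co-authorship network (graph).
graph = {"qin": ["chen", "ding"], "chen": ["qin"], "ding": ["qin"]}
walks = random_walks(graph)
# Each walk is a "sentence" of node names; feeding the walks to a
# skip-gram model yields node embeddings, just as sentences yield
# word embeddings.
```

Full Node2vec additionally biases each step with return and in-out parameters (p, q) to interpolate between breadth-first and depth-first exploration; the uniform walk above is the special case p = q = 1.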