Nailing Jello to a Wall: Metrics, Frameworks, & Existing Work for Metadata Assessment
With the increasing number of repositories, standards, and resources we manage for digital libraries, there is a growing need to assess, validate, and analyze our metadata - beyond traditional approaches such as writing XSD or generating CSVs for manual review. Being able to analyze our metadata further and determine measures of its quality helps us better manage our data and data-driven development, particularly as the shift to Linked Open Data leads many institutions into large-scale migrations. Yet the semantically rich metadata desired by many Cultural Heritage Institutions, and the granular expectations of some of our data models, make performing assessment - much less determining quality or performing validation - that much trickier. How do we handle analysis of the rich understandings we have built into our Cultural Heritage Institutions' metadata, and enable ourselves to perform this analysis with the systems and resources we have?
This webinar takes up this question and proposes guidelines, best practices, tools, and workflows for evaluating the metadata used by and for digital libraries and Cultural Heritage Institution repositories. What metrics have other researchers or practitioners applied to measure their definition of quality? How do these metrics or definitions of quality compare across examples - from the large and aggregation-focused, like Europeana, to the relatively small and project-focused, like Cornell University Library's own SharedShelf instance? Do any metadata assessment frameworks exist, and how do they compare to the approaches proposed in the core literature of this area, such as Thomas Bruce and Diane Hillmann's 2004 article, "The Continuum of Metadata Quality"? The Digital Library Federation Assessment Interest Group (DLF AIG) has a Metadata Working Group that has been attempting to build a framework that can be used broadly for digital repository metadata assessment - the state of this work, and the issues it has raised, will be discussed in this webinar as well. Finally, how does one begin to approach metadata assessment - what tools, applications, or efforts exist for common digital repository applications or data publication mechanisms?
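To make the idea of a quality metric concrete: completeness - one of the criteria Bruce and Hillmann discuss - can be measured as the fraction of records supplying a non-empty value for each expected field. Below is a minimal, hedged sketch in Python; the `field_completeness` function and the sample records are hypothetical illustrations, and real repository exports (MODS, Dublin Core, etc.) would first need to be flattened into this dict shape.

```python
# Hypothetical sketch of a simple completeness metric for metadata records.
# Records are modeled as plain dicts; this is an illustration, not a tool
# drawn from the webinar itself.

def field_completeness(records, fields):
    """Return the fraction of records with a non-empty value for each field."""
    totals = {field: 0 for field in fields}
    for record in records:
        for field in fields:
            value = record.get(field)
            if value not in (None, "", [], {}):
                totals[field] += 1
    return {field: count / len(records) for field, count in totals.items()}

# Invented sample records for illustration only:
records = [
    {"title": "Ithaca Falls", "creator": "Unknown", "date": "1905"},
    {"title": "Cayuga Lake", "creator": "", "date": "1910"},
    {"title": "Campus View", "date": ""},
]

print(field_completeness(records, ["title", "creator", "date"]))
```

A report like this - every `title` present, two-thirds of `date` values filled, one-third of `creator` values - is the kind of baseline measurement that assessment frameworks build on before tackling harder criteria such as accuracy or conformance to expectations.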
This webinar aims to answer these questions by drawing on existing literature, existing work, and examples of metadata assessment happening 'on the ground'. The goal is for participants to walk away prepared to handle their own metadata assessment needs, using the work outlined here and with a better awareness of the open questions in this domain.
Christina Harlow works on metadata operations for the Cornell University Library. This work involves building out data infrastructure, ETL (extract, transform, load) functions, and Linked Open Data usage in service of distributed metadata management for Cornell's library repositories and systems.