
An Evaluation Model for the National Consortium of Institutional Repositories of Korean Universities

Hyunhee Kim, Professor, Department of Library and Information Science, Myongji University; Yongho Kim, Professor, Department of Communication, Pukyong National University

ASIS&T Annual Meeting - 2006 (ASIS&T 2006)
Austin, Texas, November 3-9, 2006


Abstract

In the open access environment, academic institutions are building institutional repositories (IRs) to form a nationwide knowledge distribution infrastructure. The Korean Education and Research Information Service (KERIS) proposed organizing institutional repositories into a consortium, called "dCollection," composed of more than 40 universities, to give users greater access to materials and to give authors more readers. As the KERIS project proceeds, its member libraries are working to implement and promote the dCollection system, which has developed more slowly than planned, especially in terms of building content. This study, funded by KERIS, aims to build an evaluation model for diagnosing and solving problems that have arisen during the system's development: the model can be employed either by KERIS to assess the dCollection system as a whole or by a member library to self-evaluate its own system. The evaluation model was built in two steps. First, based on a literature review covering digital library evaluation and institutional repositories, and on case studies of university repositories, a conceptual framework was composed of four categories: content (diversity, currency, size, and metadata); system and network (interoperability, integration, and the dCollection homepage); uses, users, and submitters (material use rate, user satisfaction, submitter satisfaction, and support for users and submitters); and management and policy (budget and staff, awareness of the IR, copyright management, marketing strategies, formal agreements, policies and procedures, and archiving methods). Second, the framework was refined through three methods: a Delphi study, an expert forum, and a pilot test, yielding an evaluation model with four categories and 34 indicators. The evaluation model's validity issues were discussed in detail.
The IR experts and university librarians agreed on the importance of the indicators included in the model. The consistency of their relative-importance ratings and the converging tendency of the standard deviations support the validity of the model as a whole.
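The convergence argument above can be sketched in code. This is a minimal illustration, not the study's actual procedure: the ratings below are hypothetical importance scores (on a 1-5 scale) for a single indicator across two Delphi rounds, showing how a shrinking standard deviation between rounds is read as panel consensus.

```python
from statistics import mean, stdev

# Hypothetical panel ratings (1-5 importance scale) for one indicator
# across two Delphi rounds; a real analysis would cover all 34 indicators.
rounds = {
    "round1": [3, 5, 4, 2, 5, 4, 3, 5],
    "round2": [4, 4, 4, 3, 5, 4, 4, 5],
}

def summarize(ratings):
    """Return the mean importance and sample standard deviation."""
    return mean(ratings), stdev(ratings)

m1, s1 = summarize(rounds["round1"])
m2, s2 = summarize(rounds["round2"])

# A smaller standard deviation in the later round signals that the
# panel's judgments are converging toward consensus on this indicator.
converging = s2 < s1
```

A high mean indicates agreement that the indicator matters; the falling standard deviation is what justifies treating that agreement as stable rather than coincidental.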

