Feature

An Overview of the Annual Review of Information Science and Technology and Summary of Volume 32

by Martha E. Williams

The Annual Review of Information Science and Technology is published for the American Society for Information Science. The series reflects research, technology and practice in the field of information science and technology and, particularly, the interests and concerns of ASIS members and the information science and technology community at large. Each annual volume contains 8-12 chapters on topics of current and continuing interest to information professionals. The set of chapters is designed to fit within a generic master plan, thereby providing continuity to the series and diversity within each volume. Each volume contains one or more chapters dealing with “Planning for Information Systems and Services,” “Basic Techniques and Technologies” and “Applications,” and every two to four years there is a chapter dealing with “The Profession.”

Volumes 1-10 of ARIST were edited by Carlos A. Cuadra. Subsequent volumes have been edited by Martha E. Williams. The contents of the chapters in volume 32, which is now in production, are discussed below.

I. Planning for Information Systems and Services

Much has been written in the 1990s about evaluation of information retrieval (IR) systems, due at least in part to (1) the large-scale evaluation of STAIRS, IBM's large full-text IR system; (2) the Text REtrieval Conferences (TREC), a research initiative sponsored by the Advanced Research Projects Agency (ARPA) and coordinated by the National Institute of Standards and Technology (NIST); and (3) the development of search engines for use on the Internet. Until the 1980s it had not been feasible to run large-scale IR tests because large full-text collections in machine-readable form did not exist, storage capacity was limited and computer processing speeds were insufficient.

In a chapter entitled “Evaluation of Information Retrieval Systems: Approaches, Issues and Methods,” Stephen P. Harter and Carol A. Hert of Indiana University define an IR system as a system that retrieves documents or references to documents, not numerical data. As historical background for their review, they start by explaining the classic Cranfield studies that have served as a standard for retrieval testing since the 1950s. They discuss the Cranfield model and its relevance-based measures of retrieval effectiveness. They detail some of the problems with the Cranfield instruments and issues of validity and reliability, generalizability, usefulness and basic concepts.

Harter and Hert discuss evaluation of Internet search engines in light of the Cranfield model and note the very real differences between evaluation of what are basically batch systems, such as Cranfield, and interactive systems, such as Internet search engines. For example, the Internet search engine databases are dynamic; they are continually updated with new information as search robots move from site to site extracting data and adding it to their indexes. Because the collection is not fixed, it is impossible to determine recall as a measure of retrieval effectiveness. Cranfield-like experiments have been conducted on AltaVista, Excite, HotBot, Infoseek, Lycos and OpenText, but with no attempt to use recall. The authors also discuss various problems of system/user interaction that occur in online systems. They note that over decades of research on IR, individual differences among the humans involved -- whether they be indexers, searchers or users -- have been observed to affect retrieval performance.
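
To make the contrast concrete, the Cranfield measures can be stated compactly. The sketch below is ours, with hypothetical document IDs, not an example from the chapter; it shows why recall presupposes a complete, fixed set of relevant documents, which a continually re-indexed web collection cannot supply.

```python
# Illustrative sketch (not from the chapter): the classic Cranfield-style
# effectiveness measures, computed over hypothetical document-ID sets.

def precision(retrieved: set, relevant: set) -> float:
    """Fraction of retrieved documents that are relevant."""
    return len(retrieved & relevant) / len(retrieved) if retrieved else 0.0

def recall(retrieved: set, relevant: set) -> float:
    """Fraction of all relevant documents that were retrieved.
    Requires knowing the complete relevant set -- exactly what a
    dynamic, continually updated web collection cannot provide."""
    return len(retrieved & relevant) / len(relevant) if relevant else 0.0

retrieved = {"d1", "d2", "d3", "d4"}   # hypothetical search output
relevant = {"d2", "d4", "d7"}          # hypothetical relevance judgments
print(precision(retrieved, relevant))  # 0.5
print(recall(retrieved, relevant))     # 0.666...
```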

The authors say “the history of IR evaluation has been dominated by the Cranfield paradigm but punctuated by many commentaries, critiques and suggestions for [improving] the evaluation of IR systems.” In the balance of the chapter they treat emerging themes in IR evaluation and extend their survey to include user behavior, human-computer interaction and contexts for evaluation such as digital libraries.

The following emerging themes are considered by the authors:

Dimensions of design and evaluation activities include extensiveness, effectiveness, efficiency, cost, benefit, cost-effectiveness, cost-benefit, cost performance benefit, system quality, information quality, use, user satisfaction, individual impact, organizational impact, technical infrastructure, content, services, support, management, network services, content services, government structures and more. Different researchers have used different sets of dimensions in their studies.

Evaluation from the perspective of system stakeholders covers users of IR systems, user perceptions and attitudes, measures of user-system interactions and IR system stakeholders other than users (system managers, system designers, vendors and content producers).

Among the methods used in IR evaluation studies are recall and precision measures, transaction monitoring, questionnaires, structured interviews, focus group discussions, filtering software, observations and verbal protocols.

The last section of the chapter looks at the future and suggests ways of realizing the potential the authors envision. They consider two themes: different theoretical perspectives on the IR phenomenon, and the proliferation, diversification and hybridization of systems and how these might shape IR evaluation efforts. In conjunction with these, Harter and Hert propose a research agenda for IR evaluation that includes theory building and related empirical work, alternative strategies for developing evaluation models, extending the scope of IR, comparing experimental systems with operational systems and using multiple evaluation methods.

II. Basic Techniques and Technologies

Section II includes four chapters, all on topics appearing in ARIST for the first time. Howard D. White and Katherine W. McCain of Drexel University write about “Visualization of Literatures” and Edie M. Rasmussen of the University of Pittsburgh writes about “Indexing Images.” Walter J. Trybula of SEMATECH reviews the strategy of “Data Mining and Knowledge Discovery,” and Péter Jacsó reviews “Content Evaluation of Databases.”

White and McCain open their chapter by positing that information science is the interface between people and literatures. Literatures are large, and people have been helped to understand them through graphic representations of bibliographic data as maps or charts. Among the ways that literatures can be modeled are bibliographic models, editorial models, bibliometric models, user models and synthetic models. Frequently the latter four kinds of models are expressed in terms of the first; that is, they are reduced to entries, citations, subject headings or keywords. In these reduced forms they can be visualized or displayed. This chapter presents recent models of literatures that offer visual cues to relationships among writings, often based on term occurrences and co-occurrences. Two-dimensional and three-dimensional displays of relationships transcend the potential of the one-dimensional lists used by early bibliometricians. White and McCain agree with the authors of the chapter on visualization that appeared in ARIST volume 30 in defining visualization as “a computer-assisted means of helping human beings form ‘a mental image of a domain space.’” In the current chapter, the authors limit their review of literatures mainly to bibliographic text.
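
As a concrete illustration of the raw material behind such displays, the minimal sketch below (ours, over hypothetical records, not the authors’ method) counts descriptor co-occurrences; mapping techniques such as clustering or multidimensional scaling then turn counts like these into two- or three-dimensional pictures.

```python
# Minimal sketch (our illustration): counting descriptor co-occurrences
# across hypothetical bibliographic records. Co-occurrence counts are a
# typical starting point for literature maps.
from collections import Counter
from itertools import combinations

records = [                       # hypothetical indexed documents
    {"retrieval", "evaluation", "recall"},
    {"retrieval", "visualization"},
    {"visualization", "bibliometrics", "retrieval"},
]

cooc = Counter()
for terms in records:
    for pair in combinations(sorted(terms), 2):
        cooc[pair] += 1

for pair, n in cooc.most_common(3):
    print(pair, n)   # ('retrieval', 'visualization') appears in 2 records
```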

The major sections of the White and McCain chapter deal with online visualizations, offline visualizations, a critique of visualization research and a conclusion. In “Online Visualizations” they detail examples of users’ models of literatures, discuss the concept of the associative thesaurus, detail examples of bibliometric models, and applaud the growing number of studies that critically examine the new visualization techniques. In “Offline Visualizations” they discuss methodologies used between 1985 and 1996, and the problem of visualizing changing literatures in a static medium (hard-copy print). In the same section they provide analyses of several application areas for visualization: visualizing interactivity, visualizing interdisciplinarity, mapping of organizational membership, supporting science policy, and supporting library services and research collections.

In critiquing the research, White and McCain consider the insufficient attention given to user-friendly visual design to be a major problem. The chapter concludes with the assumption that future literature visualization is likely to be done on a computer screen, in which case attention should be paid to the following questions. Is the display an improvement over a simple list? Does it provide new capabilities? Is it rapidly intelligible? Is it helpful in real time? Is it tied to an important collection? Can it be scaled up to larger collections? Is it readily available at reasonable cost? The authors express the hope that, in the future, the same visualization interface used for bibliographic domain analysis will be used for document retrieval.

Rasmussen opens her chapter, “Indexing Images,” by observing that, until recently, large collections of images were relatively inaccessible because of the limitations of duplication and dissemination. Now, through digitization, they are widely accessible. Image collections exist in the application areas of culture, education, commerce and science. In the past few years information scientists and computer scientists have combined computerized retrieval of images or graphics with computerized retrieval of documents.

This chapter is concerned with access to digital image collections by means of manual and automatic indexing. Rasmussen distinguishes between concept-based indexing, in which images and the objects therein are manually identified and described in terms of what they are and what they represent, and content-based indexing, in which features of images (e.g., color) are automatically identified and extracted.
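
As an illustration of the kind of feature content-based indexing extracts, the sketch below computes a coarse color histogram; it is ours, with hypothetical pixel data, and is not drawn from Rasmussen’s chapter.

```python
# Illustrative sketch: a coarse RGB color histogram, a typical low-level
# feature extracted automatically, with no human describing what the
# image depicts. Pixel data here are hypothetical.

def color_histogram(pixels, bins_per_channel=4):
    """Bucket (r, g, b) pixels (0-255 per channel) into coarse color bins."""
    step = 256 // bins_per_channel
    hist = {}
    for r, g, b in pixels:
        key = (r // step, g // step, b // step)
        hist[key] = hist.get(key, 0) + 1
    return hist

sunset = [(250, 120, 30), (240, 110, 20), (200, 90, 40)]  # tiny toy "image"
ocean = [(20, 80, 200), (30, 90, 210), (25, 85, 190)]     # another one
print(color_histogram(sunset))  # mass lands in red/orange buckets
print(color_histogram(ocean))   # mass lands in blue buckets
```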

Following her introduction, Rasmussen’s chapter is organized into six major sections. The section titled “Studies of Image Systems and their Use” discusses specific image collections (visual resources), their users, the needs of those users, uses of image systems (indicated by queries submitted to the collections) and human-computer interaction issues. “Approaches to Indexing Images” covers techniques used by several researchers. “Image Attributes” covers the indexable features used to describe an image.

“Concept-based Indexing” covers approaches to describing concepts based on controlled vocabulary or natural language. “Content-based Indexing” argues that automatic identification of indexable features is more consistent than manual indexing, which is limited by the subjectivity of the indexer, who cannot necessarily anticipate future uses for an image. “Browsing in Image Retrieval” describes how browsing helps users refine queries or select relevant images. Rasmussen concludes her chapter by observing that research on user issues is just beginning and that the field needs well-developed standards and techniques for assessing performance so that researchers can evaluate, benchmark and compare image retrieval systems.

Trybula focuses on numeric databases in his review of the literature on “Data Mining and Knowledge Discovery.” Because computer storage space has become much less expensive in recent years, more data are being stored. Unfortunately, a lot of data are being generated and collected without considering the applications that may be needed to access them. These data will need to be restructured so that information can be extracted from them and turned into knowledge.

The majority of the material reviewed by Trybula covers the time period from 1994 through 1996. He explains the various uses of terminology in the field and observes that, while the terms data mining (DM) and knowledge discovery (KD) are relatively new, a search of Lexis/Nexis yielded more than 2,200 articles containing the term data mining in the 12 months prior to February 1997. Data mining is defined as the automated process of evaluating data and finding relationships. Knowledge discovery is defined as the automated process of extracting information, especially unpredicted relationships or previously unknown patterns among the data. In computer science, knowledge discovery in databases (KDD) encompasses both KD and DM; Trybula treats KD and DM separately but as a part of the KDD process. Other terms/processes he explains include data cleaning, data warehouse, discovery-driven data mining, online analytical processing, online transaction processing, undiscovered public knowledge and validation.
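
As a toy illustration of data mining in the sense just defined -- automatically evaluating data and finding relationships -- the sketch below flags strongly correlated numeric fields. The field names and data are hypothetical, and real systems use far richer methods (association rules, clustering and the like).

```python
# Toy data-mining sketch (our illustration, not Trybula's): scan pairs of
# numeric fields and report those with strong linear relationships.
import statistics

table = {                                  # hypothetical numeric database
    "ad_spend":  [10, 20, 30, 40, 50],
    "revenue":   [12, 24, 33, 41, 55],
    "employees": [7, 7, 8, 7, 8],
}

def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) *
           sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den if den else 0.0

fields = list(table)
for i, a in enumerate(fields):
    for b in fields[i + 1:]:
        r = correlation(table[a], table[b])
        if abs(r) > 0.9:                   # flag strong relationships
            print(f"{a} ~ {b}: r = {r:.2f}")   # ad_spend ~ revenue: r = 1.00
```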

Major sections in Trybula’s chapter include the knowledge acquisition process, data mining, evaluation methods and knowledge discovery. He concludes by pointing out that better coordinated methods are needed to give users improved means of structuring searches that explore the data for relationships.

Jacsó has written a chapter titled “Content Evaluation of Databases.” He notes that database quality is judged by many criteria, including content, ease of use, accessibility, customer support, documentation and value-to-cost ratio. The principal factor in determining database quality is content. Database content is defined by the scope and coverage of the database and its currency, accuracy, consistency and completeness. The scope of a database is determined by its composition and coverage, including the time period (length), number of journals and other primary sources (width), number of articles included from journals (depth) and geographic and language distribution. The currency of a database is measured by the time lag between publication of the primary source and availability of the corresponding records in the database. Database accuracy is the extent to which the records are free of errors such as misspellings. Consistency is the extent to which records within the database follow the same rules with regard to record structure, format and representation. Record completeness is measured by the consistency with which applicable data elements are assigned to all the records in the database.
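
To show how two of these criteria might be quantified, the sketch below (ours, over hypothetical records, not Jacsó’s method) scores currency as the mean publication-to-availability lag and completeness as the share of applicable data elements actually assigned.

```python
# Illustrative scoring of two of Jacsó's content criteria over
# hypothetical bibliographic records.
from datetime import date

records = [
    {"published": date(1997, 1, 10), "loaded": date(1997, 3, 1),
     "fields": {"title": "X", "abstract": "...", "descriptors": None}},
    {"published": date(1997, 2, 5), "loaded": date(1997, 2, 20),
     "fields": {"title": "Y", "abstract": None, "descriptors": "IR"}},
]

# Currency: mean lag between primary publication and database availability.
lags = [(r["loaded"] - r["published"]).days for r in records]
print(f"mean lag: {sum(lags) / len(lags):.1f} days")

# Completeness: share of applicable data elements actually filled in.
filled = sum(1 for r in records for v in r["fields"].values() if v is not None)
total = sum(len(r["fields"]) for r in records)
print(f"completeness: {filled / total:.0%}")
```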

These criteria can be evaluated qualitatively and/or quantitatively in order to determine the profile of a database and to identify any defensive search strategies that may be needed. Jacsó reviews the major contributions of the past few years to the literature on content evaluation methods, techniques and results, and provides a background summary of milestone studies.

III. Applications

There are three applications chapters in this volume, “Chemical Structure Handling by Computer” by C. Gregory Paris of Novartis Pharmaceuticals Corporation, “Information Ethics” by Martha Montague Smith of Indiana University and “Legal Informatics: Application of Information Technology in Law” by Sanda Erdelez and Sheila O’Hare.

In his chapter on “Chemical Structure Handling by Computer,” Paris notes that the need to represent chemical structure diagrams in computer software has created a subdomain of information retrieval (IR) that joins the requirements of research chemists with the graph-theoretic algorithms and database designs of computer science. Paris’s review identifies and discusses the current research topics in this area, covers selected portions of the literature, which exploded between 1989 and 1996, and addresses the general issues of representation, comparison and matching algorithms, and retrieval strategies. This IR research, and the resultant implementation and application of chemical structure retrieval software, is discussed in the context of the contemporary pharmaceutical and chemical user environment, with a bias toward drug discovery research. Paris reviews not only the handling of 2D chemical structure diagrams but also newly developed techniques for representing and searching flexible 3D chemical models. Additional special topics include quality control of chemical database content, chemical similarity and clustering, query refinement, visualization, chemical structure “corpus linguistics” and molecular diversity. Paris concludes by identifying current trends in both research and application of chemical IR. He discusses some unexpected gaps in the research literature, including a dearth of visualization tools for analysis of aggregates of chemical structures.
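
As a simplified illustration of the representation problem (ours, not an example from Paris’s review), a 2D structure diagram can be treated as a labeled graph, and cheap screens can weed out candidate molecules before the expensive atom-by-atom subgraph match is attempted.

```python
# Simplified sketch: a hydrogen-suppressed structure diagram as a labeled
# graph, plus an element-count screen of the kind used to filter database
# candidates cheaply before exact substructure matching.
from collections import Counter

ethanol = {                                # C2H6O, hydrogens suppressed
    "atoms": {0: "C", 1: "C", 2: "O"},
    "bonds": [(0, 1), (1, 2)],             # bonds between atom indices
}
query = {"atoms": {0: "C", 1: "O"}, "bonds": [(0, 1)]}   # C-O fragment

def passes_screen(query, molecule):
    """Cheap necessary condition: the molecule must contain at least as many
    atoms of each element as the query. Passing the screen does not prove a
    substructure match; a full graph-matching step would still be required."""
    need = Counter(query["atoms"].values())
    have = Counter(molecule["atoms"].values())
    return all(have[el] >= n for el, n in need.items())

print(passes_screen(query, ethanol))       # True: run the full match
```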

Smith’s chapter, “Information Ethics,” deals with information ethics, computer ethics and cyberethics. She starts by explaining the relationship between her chapter and those of several prior ARIST authors who dealt with aspects of ethics. The focus of the chapter is on the emergence of the terminology of applied ethics through channels of professional practice and scholarly communication, not on specific issues such as censorship, intellectual freedom, professional ethics and codes of ethics. Smith surveys the beginnings of information ethics from 1988 through 1996, the development of computer ethics and cyberethics, and the contributions of the philosophy of information and information technology to ethical discourse. While these three areas of applied ethics reflect different disciplinary and metaphorical backgrounds, they share a concern for the impact of new technologies and thus new realities on individuals and society. Major themes that pervade the literature of the three fields are privacy, access, ownership, property, accuracy, security and democracy.

A growing literature concerned with philosophical issues such as the nature, meaning and purpose of information and information technology promises to provide the needed foundation for ethical reflection. The term information ethics has been used for nearly 10 years and the literature from that period shows a progression from library and information science to public policy to a more global perspective. In the coming decade, with the emergence of terms such as global information ethics, Smith suggests that information ethics may become the umbrella term used to unify the conceptual boundaries of computer ethics, cyberethics, network ethics and machine ethics, as well as other areas of applied ethics in information science and technology.

Erdelez and O’Hare discuss the use of automated processing of information in the legal setting and the use of information technology for the legal profession. In their chapter, “Legal Informatics: Application of Information Technology in Law,” they review scholarly and selected nonscholarly writings from 1988 to 1996, the period since the last ARIST chapter on information systems and the law in volume 23. They begin with a short overview of terminology explaining the concept of legal informatics, its etiology and background, and they explain its distinction from computer law and information law. Unlike the term medical informatics, which has been accepted for more than a decade, the term legal informatics is seldom used in the United States.

The specific topics addressed in this chapter include electronic access to legal information (including computer-assisted legal research and legal information systems), artificial intelligence and the law, information technology use in law offices and related environments (e.g., law schools, courts and law libraries) and law and the Internet. The chapter concludes with a discussion of the impact of information technology on the legal profession.


Professor Martha E. Williams is the editor of ARIST. She can be reached at the University of Illinois, Coordinated Science Laboratory, Computer & Systems Research Lab, 1308 W. Main, Urbana, IL 61801; by phone at 217/333-1074 or 217/333-8462; or by fax at 217/762-3956.

Topics from Recent ARIST Issues

Recent volumes of the Annual Review of Information Science and Technology (ARIST) have included chapters on such diverse topics as information retrieval techniques, social informatics of digital libraries, policy for the ’Net, browsing, the history of information science, environmental scanning, social intelligence, user acceptance of information technology, connectionist models and information retrieval, query expansion, natural language processing, speech synthesis and recognition, electronic image information, health informatics, visualization, virtual reality, parallel information processing, relevance and information behavior, the human-computer interface for retrieval, artificial intelligence, information pricing, information technology standards, expert systems, information law, and cataloging and classification for the Internet.

The Annual Review of Information Science and Technology (ARIST), Volume 32, can be ordered from:

Information Today, Inc.
143 Old Marlton Pike
Medford, NJ 08055