ASIS Midyear Meeting, May 1999
Contributed Papers
(Contributed Papers are listed alphabetically by author under the day and time they are to speak.)

MONDAY, MAY 24, 1999
8:30-10:00 a.m.

Ongoing Evaluation of Internet Resources: Policies and Procedures
Rachael Bower
 
The Internet Scout Project, sponsored by the National Science Foundation, is located in
the Computer Sciences Department at the University of Wisconsin-Madison. The Project promotes
the progress of research and education in the United States by improving the Internet's information
infrastructure. The Internet Scout Project was among the first, and remains one of the most
respected, organizations to recognize the importance of ongoing evaluation and provision of Internet
resources that are content-rich, authoritative, and effectively presented. A 1998 user assessment
determined that over 60% of the largest academic research libraries in the United States link to the
Internet Scout Project to provide evaluated networked resources for their patrons.
 
The Project publishes the Scout Report, a weekly guide to new, high-quality Internet resources. The
report has been in continuous publication since 1994, with a focus on providing links to and
annotations of new sites of interest to the academic community. To give academics sites more
closely suited to their fields of study, three subject-specific Scout Reports were added in
1997. Editors and subject specialists monitor hundreds of sources to locate new Internet resources,
which are evaluated for authority, currency, scope of information, relevancy, and overall
quality prior to selection for the Scout Report. Sites included in any of the Scout Reports are then
evaluated for inclusion in Signpost, a searchable, browseable database of resources from the Scout
Reports, and are evaluated again for subject content at the time of cataloging, when Library of
Congress Subject Headings and abbreviated Library of Congress Classification are applied. All
resources in Signpost are evaluated for currency on an ongoing basis using automated and human
means.
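The automated side of that currency checking can be sketched as a simple link checker that flags records whose URLs no longer respond, queuing them for human review. (This is a hypothetical illustration, not the Internet Scout Project's actual tooling; the record fields are invented.)

```python
# Hypothetical sketch of automated currency checking for a resource
# database such as Signpost: flag records whose URLs no longer respond.
# Not the Internet Scout Project's actual tooling.
import urllib.request
import urllib.error

def check_url(url, timeout=10):
    """Return True if the resource still responds, False otherwise."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 400
    except (urllib.error.URLError, ValueError):
        return False

def stale_records(records):
    """Return records whose URLs fail the automated check;
    these would then be queued for human review."""
    return [r for r in records if not check_url(r["url"])]
```

Records that pass the automated check stay in the database untouched; only the failures consume human reviewers' time.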

HOW THE SERVICES ARE BUILT
The Project's flagship, the Scout Report, is in its fifth year of weekly publication and is published in
a variety of formats to suit individual audience needs. With a readership in excess of 250,000 a
week, the Report highlights new and newly discovered Internet resources of broad academic
interest. In 1999 a new feature called In The News was added to the Scout Report. This feature
focuses on a current event "in the news" by linking users to stories in appropriate online sources (The
New York Times, for example) and also links readers to sites reviewed in past Scout Reports that
provide in-depth background on the topic of the featured news story. This gives users a link to past
reports and an overview of a current event that they may be hearing about from other media
sources.

To give users access to enriching, authoritative sites related to their professional and/or academic
interests, three subject-specific scout reports were added in 1997: the Scout Reports for Science &
Engineering, Social Sciences, and Business & Economics. Tailored to the concerns of their
respective audiences, these biweekly reports are published in multiple formats. To locate the quality
resources featured in the Scout Reports, editors and subject specialists from UW-Madison monitor
some 200 listservs and Usenet newsgroups, read major newspapers, search the Internet and
metasites, and follow up on suggestions from readers.

In 1997 it became apparent that, to make the information contained in both current and past
Scout Reports available to the academic community, a good system for searching past archives
of the reports was needed. In answer to this problem, Scout Report Signpost
was created. Signpost is a searchable, browseable database that contains over 5,600 of the
resource descriptions that have appeared in the Scout Report and the subject-specific Scout Reports.
This service, offered by the Scout Project, is the result of five years of resource discovery. Over
2,600 records have been cataloged using Library of Congress Subject Headings, making it possible
to search them by author, title, subject heading, and annotation. The content of Signpost can be
browsed by subject, including close to 7,000 subject headings; it is also possible to browse Signpost
by classification using Library of Congress classification areas. Resources included are provided by
government agencies and research laboratories, educational institutions, organizations, and
educational networks.
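A Signpost-style catalog record and the fielded search it supports might look like the following minimal sketch. (Field names and sample values are assumptions for illustration, not Signpost's actual schema.)

```python
# Minimal sketch of a Signpost-style catalog record and fielded search.
# Field names and sample data are illustrative assumptions, not the
# actual Signpost schema.
record = {
    "title": "The Scout Report",
    "author": "Internet Scout Project",
    "annotation": "Weekly guide to new, high-quality Internet resources.",
    "lcsh": ["Computer network resources"],  # LC Subject Headings
    "lcc": "ZA4201",                         # abbreviated LC Classification
    "url": "http://scout.cs.wisc.edu/",
}

def search(records, field, term):
    """Match `term` (case-insensitive) against one searchable field:
    author, title, subject heading (lcsh), or annotation."""
    term = term.lower()
    hits = []
    for r in records:
        value = r[field]
        text = " ".join(value) if isinstance(value, list) else value
        if term in text.lower():
            hits.append(r)
    return hits
```

For example, `search(records, "lcsh", "network")` would retrieve all records carrying a matching subject heading, mirroring the subject-heading access described above.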

Part of the Signpost Project's research has been directed toward developing an alternative
classification scheme. The principles behind this approach include the need for a dynamic,
facet-based subject access scheme. One component of this work is studying the feasibility of
mapping Library of Congress Subject Headings (LCSH) to this newly developed alternative
classification scheme.

The Isaac Network
In addition to our past experience with producing, maintaining and disseminating a local collection of
valuable educational resources, we are now embarking on a new research project that will further
enhance access to Internet resources. When the Isaac Network is implemented, it will link together
geographically dispersed collections of highly selective Internet resources into a single search facility,
thereby allowing educators to search multiple collections with a single search interface. Each
academic institution participating in the Isaac Network will adhere to set criteria for resource
description.

Our presentation will discuss all the services and publications offered by the Internet Scout Project,
as well as future plans for dissemination of information via the Isaac Network. The presentation
will include handouts for those in attendance, such as a "webliography" of suggested resources
from Scout Report Signpost and copies of the Scout Report and subject-specific Scout Reports, as well
as a demonstration of the Project's web site.

Attendees will be able to find out how evaluation criteria can be applied to the discovery of Internet
resources to ensure that selected resources are of high quality. The Isaac Network will also be
running, with dozens of sites accessible, allowing attendees to learn about it and giving us an
opportunity to receive their feedback for further improvements.
 
 

"Beam it up Scotty"-Industrial Designers Using Networks to Design and Distribute Products
Mark Evans and Paul Wormald
Department of Design and Technology,
Loughborough University, UK
 
Industrial design focuses on the appearance and usability of mass-manufactured products, and has evolved a variety of techniques for the acquisition and distribution of information to assist the practitioner. In recent years this acquisition and distribution has been significantly enhanced and extended by employing networked resources and services. This paper evaluates how a specific professional group is exploiting the internet, using the authors' findings from university teaching, research and professional practice. Many professions require access to information, but an area of
particular interest within industrial design is the ability to translate a three-dimensional object into a digital data file, transmit the data widely over distributed networks (including the internet), and then reproduce the item as a physical object. Whilst not quite in the same league as the "Transporter Room" on the "Starship Enterprise", such capabilities offer an insight into the way new product development has evolved to embrace the potential of the internet. In evaluating the use of networked
information and resources by industrial design students and professionals, the authors have identified the following generic applications:

1. Design research (information sources on the www such as patents and indexes to published journals, manufacturer web sites, use of consumer knowledge through newsgroups).

2. Design interaction (teleconferencing, videoconferencing, computer supported collaborative working, shared workspace).

3. Design distribution (designer web spaces, uploading/downloading of model data/images).

The paper examines the impact that networks and the internet have had on industrial design practice by discussing its contribution to information retrieval and dispersal. The gathering of information by designers is explored, along with means of contributing designerly information to the pool that
exists through such media as newsgroups, mailing lists and web pages. Computer supported collaborative design (CSCD) and computer supported collaborative working (CSCW) illustrate how teams can undertake design tasks without the constraints of geographical boundaries. This is extended to encompass the transmission of virtual product geometry to enable the building of a
physical model in a remote location (a technology that was once restricted to the realms of science fiction now appears to be becoming a reality). An example of this involves the transmission of a computer aided design (CAD) model over the internet, and then using rapid prototyping (e.g. stereolithography) to build it on the other side of the world. Another example is when a physical
object is converted into a digital model by employing three dimensional laser scanning. The scanned object data can then be globally distributed over the internet, re-built using rapid prototyping, and paint finished to reproduce the item that was originally scanned.

A case study involving the design of children's cutlery is used to illustrate how the applications discussed throughout the paper have been applied during professional practice. The paper concludes that in a profession where time to market is a key attribute, employing the internet and other networked systems can significantly reduce lead-times, bringing about both increased and enhanced design output and commercial benefits.
 
 

How to Make Information on the Internet More Verifiable
Don Fallis
School of Information Resources and Library Science
University of Arizona
1515 East First Street
Tucson, AZ 85719
fallis@u.arizona.edu

 A major focus of information science is on the development of techniques that make it easier for
people to find information on the Internet. However, as Peter Hernon points out, "it is not enough
that information is readily available; before relying on any data or information, it may be
important to ascertain, for example, the veracity of the content" (Hernon, 1995, p. 133).
Unfortunately, since almost anyone can put almost anything on the Internet, there is a lot of
inaccurate information out there and it is difficult to distinguish the accurate from the inaccurate
information. As a result, it is difficult for people to acquire knowledge using the Internet.

After noting that the Internet can "deliver misinformation and uncorroborated opinion with equal
ease," Vinton Cerf concludes that "we have but one tool to apply: critical thinking" (Cerf 1998).
Following this suggestion, several recent articles (see, e.g., Tate and Alexander 1996 and Fitzgerald
1997) lay out some of the specific critical thinking skills that Internet users need to have. However,
improving people's critical thinking skills is not the only way we can address this problem. We can
also address this problem by making the information itself easier to verify.

In the first part of this paper, I describe what the authors of web pages need to do in order to make
their information more verifiable. In general terms, they have to provide Internet users with easier
access to evidence that the information is accurate (or that it is inaccurate). More precisely, a
successful verifiability technique must satisfy three basic criteria: (1) reliability, (2)
convincingness, and (3) economy. (The details of this theoretical framework are based in part on
signaling theory as described in Fallis 1998.)

In the second part of this paper, I use these criteria to evaluate some specific verifiability techniques
that have been proposed. The specific techniques that I look at are derived from the aforementioned
articles that lay out critical thinking skills for Internet users. Such articles typically identify features of
web pages that Internet users are supposed to look for and that are supposed to indicate that a web
page contains accurate information. I investigate the suggestion that authors can make their
information more verifiable by including these features in their web pages.

REFERENCES

Cerf, Vinton G. 1998. "Truth and the Internet."
http://info.isoc.org/internet/conduct/truth.shtml.

Fallis, Don. 1998. "Signaling Theory and Internet Epistemology."
Proceedings of WebNet-98 - World Conference of WWW, Internet, and Intranet.

Fitzgerald, Mary A. 1997. "Misinformation on the Internet: Applying
Evaluation Skills to Online Information." Emergency Librarian 24(3):9-14.

Hernon, Peter. 1995. "Disinformation and Misinformation Through the Internet: Findings of an
Exploratory Study." Government Information Quarterly 12(2):133-39.

Tate, Marsha and Jan Alexander. 1996. "Teaching Critical Evaluation Skills for World Wide Web
Resources." Computers in Libraries (November/December):49-55.
 
 

Web-based Course Delivery:  The Effect of the Medium on Student Perceptions of Learning and Their Use of Library-based Resources
Vicki L. Gregory and James O. Carey

Comparisons of student learning derived from traditional classroom instruction versus mediated instructional technologies have experienced cycles of replication beginning with references to military training films in the 1940's, and continuing with various forms of what we now call "distance
learning": instructional television in the 1950's and 1960's,  multi-image presentations in the 1970's, computers and multimedia in the 1980's and 1990's, and today the Web.  The results of such comparisons have been unequivocal: the technology used to deliver instruction is not reliably
predictive of student learning and achievement.  The factors that do predict the quantum of student learning are the types of teaching and learning strategies employed within the delivery method, whether through the traditional classroom lecture approach or otherwise.

Several caveats must be respected in reviewing these findings.  First, the medium used for delivering instruction must be capable of representing such stimuli (verbal, visual, auditory, motion, etc.) as may be required by the content of the course; second, although student performance does not seem to
suffer from the use of distance learning per se, student attitudes and motivation often do.  Negative factors cited include difficulty in communicating with the professor, the impersonal nature of the courses, lack of peer interaction, and the absence of the "group momentum" that helps
students manage course assignments and projects.  Such negative factors often result in low instructor ratings and high rates of withdrawals and incompletes.

In curriculum meetings during the 1997/98 academic year, concerns were expressed regarding a broad range of outcomes from Web-delivered courses when compared to other types of distance courses that we deliver at various sites within Florida.  Those of us involved in teaching Web courses designed a formative evaluation survey for administration to our students, supplemental
to the traditional university evaluation survey, seeking student perceptions of various aspects of Web classes, including how their Web class experience compared to that of traditional face-to-face classes.  We sought perceptions as to the value of the courses taken, motivation for taking distance classes, appropriateness of the form of distance education to the subject matter at hand, and the kinds of resources students used (and where they used or located them) in the preparation of assignments and projects. Students were asked their preferences for types of resources (e.g., electronic reserves, the World Wide Web, electronic or print journals, etc.)  for future class use, whether in face-to-face or Web-delivered environments.

To date, five classes taught by four different faculty members have participated in the survey. Of the 50-60% responding, 57% felt that the depth of new information acquired in the Web course was "more" or "much more" than that acquired in the face-to-face classes they had taken, and, when adding the respondents that indicated that the depth of new information was the "same," 89% felt it was the same or more. As to interaction with classmates, 61% felt they had "less" or "much less" interaction and 53% felt that they were "less" or "much less" a member of a group; however, when queried as to how much they had learned from classmates, 60% indicated that they had learned "more" or "much more" from their classmates in the Web course – an intriguing reversal of their response concerning interaction and affiliation with classmates.

Our presentation will focus on interpretation of data obtained from the surveys along with instructors' perceptions of the level of student achievement, the effective use of library and other resources, and related issues.
 
 

The role of user evaluation in design and ongoing development of a
WWW browser-based interface to library and networked information
resources at the University of Western Australia

Jane E Klobas
The Graduate School of Management
The University of Western Australia
 

Over a period of two years, the Library at the University of Western Australia transformed the interface between users and library information resources from a simple on-line public access catalogue to a flexible, complex, and dynamic networked information resource (CygNET Online, <http://www.library.uwa.edu.au/>). Throughout this period, the development team monitored user response to the developing resource. This paper focuses on the methods used for formal evaluation of user response at two key stages: during a pilot implementation, and after one year’s implementation of the new resource across the full library system. The paper is illustrated with extracts from the evaluation instruments.

The formal evaluations were based on an established instrument for measurement of interface usability, the QUIS (developed by the Human-Computer Interaction Laboratory at the University of Maryland). The QUIS was combined with instruments to measure users' confidence and attitudes to use of CygNET Online. Additional items were included to confirm, from a larger sample of users, observations made by individuals in e-mailed comments and informal discussions. Users were also asked how they intended to use CygNET in the future. This enlarged set of items helped evaluators to identify the system and social characteristics most closely associated with future use and to recommend that developers concentrate their efforts on improving those characteristics.

The CygNET Online interface was first implemented in parallel with the then existing Innopac online public access system during a six month pilot period in one of the university’s science libraries. Three months into the pilot period, CygNET users were asked to complete a survey form which included a QUIS for Innopac and a QUIS for CygNET. From this comparative evaluation, it was possible both to identify that users found the CygNET Online interface to electronic information resources an improvement over the Innopac interface, and to measure the extent to which they found it an improvement. Specific aspects of the interface and the context of use were identified as areas for improvement.

The success of the pilot implementation encouraged the Library’s management and staff to implement CygNET Online as the standard system-wide interface to information resources. One year after the pilot implementation (and 7 months after withdrawal of the text-based interface), the survey was repeated. In addition to questions which enabled comparison with earlier responses, questions which reflected learning and users’ comments during the year were added to the survey form.  Improvements that had been implemented as the result of responses to the pilot study were well received, and new areas for improvement were identified.

Formal, systematic evaluation in the field enabled the sponsors and developers of CygNET Online to gain a good understanding of users’ response to the networked information resource. By measuring users’ perceptions of the interface itself, their attitudes to its use, and characteristics of the context of use, and by identifying those characteristics of both resource and context that are most closely associated with use, they have been able to identify and to respond positively to users’ recommendations for improvement.
 
 

Development of an Internet site evaluation tool for use by information management students
M Middleton and S Edwards
Queensland University of Technology, School of Information Systems

QUT School of Information Systems carried out a project in association with Griffith University and QUT libraries. The project involved the identification and evaluation of electronic and print resources. These were described within the framework of UKOLN ROADS (Resource Organisation And Discovery in Subject-based services) software. They are being made available via the libraries' Web sites as Infoquest subject guides.

A group of students completing the graduate library and information studies course at QUT undertook the resource discovery role as part of their professional practice, working in conjunction with professional librarians. Students were each allocated specific subject areas. Their tasks were to review current Internet guidance for the subject area, undertake resource discovery, evaluate the material found according to standard evaluation guidelines, and report the material for incorporation into a Web site. Twenty-five students participated in the project. They were involved in 24 different subject areas supervised by 12 different librarians from the two universities. Subject areas ranged widely and included Japanese studies, environmental engineering, forensic sciences and ethics. Email surveys of all students were performed before and after the exercise, and we also independently conducted small focus groups of staff and students. We report on the student participation with respect to improved Internet and library skills, and understanding of resource evaluation. This has had many positive outcomes such as professional experience, shared workload, recourse to library subject experts, and cooperation between faculty and libraries.

As an outcome of the project, we are developing an instrument that consolidates the experience gained from the exercise along with material from established guides to Internet site evaluation.  We are reviewing criteria for site evaluation, and comparing these with evaluation criteria for databases and for printed publications. We are developing a guide that provides a structured approach to site evaluation. It will be available through the Faculty's Web-based Integrated Learning Environment to be used by both information technology and library studies students as they undertake comparison of sites located by search engines. The guide provides a categorised approach to site evaluation that takes into account features such as functionality, organisation, accessibility, content, level and range.

The guide is also to provide support for carrying out metainformation-creation exercises when constructing Web pages. These description, classification and indexing exercises are carried out with reference to evolving standards such as the Dublin Core and the AGLS (Australian Government Locator Service). The instrument therefore includes a connection to software that supports creation of metainformation content.
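A metainformation-creation exercise of the kind described might render a Dublin Core description as HTML `<meta>` tags, along these lines. (A hedged sketch: the element values are invented, and real exercises would follow the full Dublin Core and AGLS element sets.)

```python
# Hypothetical sketch: rendering a Dublin Core description as HTML
# <meta> tags, as in the metainformation-creation exercises described.
# Element values are invented for illustration.
dc = {
    "DC.Title": "Forensic Sciences Subject Guide",
    "DC.Creator": "QUT School of Information Systems",
    "DC.Subject": "Forensic sciences; Evidence",
    "DC.Date": "1999-05-24",
    "DC.Format": "text/html",
}

def to_meta_tags(elements):
    """Return one HTML <meta> tag per Dublin Core element."""
    return "\n".join(
        f'<meta name="{name}" content="{content}">'
        for name, content in elements.items()
    )
```

The resulting tags would be pasted into the `<head>` of the student's Web page, making the description harvestable by metadata-aware services.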
 
 

Evaluating a Gateway to Faculty Syllabi (GFS) on the Internet
Sam G. Oh
Assistant Professor
Sung Kyun Kwan University
Department of Library and Information Science
53, 3-Ga Myungryun-Dong
Chongro-Gu, Seoul, Korea

samoh@yurim.skku.ac.kr
Web: http://slisnet.skku.ac.kr/~ohs/
 

 The Gateway to Faculty Syllabi (GFS) is an attempt to provide faculty and students with fast access to quality syllabi available on the Internet. There are many syllabi available on the Internet, but access to them requires many steps and much time. The GFS system was constructed to alleviate such discovery problems (http://lis.skku.ac.kr/gemfac/) through human evaluation of the syllabi available on the Internet.  Even though one might question the time and effort needed to maintain this kind of system, the benefits to faculty and students throughout the world will prove its usefulness.  The system is based on the Dublin Core (DC) and adopts other elements that are not covered by the DC but are necessary for effective discovery of faculty syllabi. After deciding all the elements needed to describe syllabi on the Internet, a conceptual schema of those elements was designed using the Entity-Relationship model. The metadata about faculty syllabi are stored in a relational DBMS.  Only syllabi that contain information useful to other faculty are included in the GFS system; in other words, a syllabus must be fully developed to be included.  The system currently holds only syllabi metadata related to library and information science and will be expanded to other areas in the future. Active Server Pages technology is employed to provide browsing and searching capabilities over those metadata.

A user can search the GFS system by actual course title and faculty name.  The search can be limited by discipline, by university, and by whether assignments or reading lists are available.  The system also provides faculty and students with flexible browsing methods.  There are two ways to browse the system. One is to choose a discipline (e.g., Library and Information Science), then a major area (Information Organization) within that discipline, and then a uniform title (Cataloging and Classification) within that area. All the actual courses associated with that uniform title are then displayed using a short display form (course title, professor, and affiliation). The course title in the short display is linked to the full display record, while the course title in the full display is linked to the site that hosts the actual syllabus.  The other browsing method is to go directly to the list of uniform course titles without going through major areas.
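The browsing hierarchy described (discipline, then major area, then uniform title, then actual course) maps naturally onto a relational schema of the kind an Entity-Relationship design would yield. The sketch below, using an in-memory SQLite database, shows one plausible shape; table names, column names, and sample rows are assumptions, not the actual GFS schema.

```python
# Minimal relational sketch of the GFS browsing hierarchy:
# discipline -> major area -> uniform title -> actual course.
# Table/column names and sample rows are assumptions, not the real schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE discipline (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE major_area (id INTEGER PRIMARY KEY, name TEXT,
                         discipline_id INTEGER REFERENCES discipline);
CREATE TABLE uniform_title (id INTEGER PRIMARY KEY, name TEXT,
                            area_id INTEGER REFERENCES major_area);
CREATE TABLE course (id INTEGER PRIMARY KEY, title TEXT, professor TEXT,
                     affiliation TEXT, syllabus_url TEXT,
                     uniform_title_id INTEGER REFERENCES uniform_title);
""")
conn.executemany("INSERT INTO discipline VALUES (?, ?)",
                 [(1, "Library and Information Science")])
conn.executemany("INSERT INTO major_area VALUES (?, ?, ?)",
                 [(1, "Information Organization", 1)])
conn.executemany("INSERT INTO uniform_title VALUES (?, ?, ?)",
                 [(1, "Cataloging and Classification", 1)])
conn.executemany("INSERT INTO course VALUES (?, ?, ?, ?, ?, ?)",
                 [(1, "Cataloging and Classification", "S. Oh",
                   "Sung Kyun Kwan University", "http://example.edu/syl", 1)])

def browse(uniform_title_name):
    """Short-display browse: the courses filed under one uniform title."""
    return conn.execute(
        """SELECT c.title, c.professor, c.affiliation
           FROM course c JOIN uniform_title u ON c.uniform_title_id = u.id
           WHERE u.name = ?""", (uniform_title_name,)).fetchall()
```

The short-display query returns course title, professor, and affiliation, as in the interface described; the full display and syllabus link would follow from `syllabus_url`.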

A user study will be conducted to test whether the major areas and uniform course titles used in the GFS are understandable to users.  The subjects will be asked to find an actual course using the major-area browsing list and also to find an actual course using the uniform title list. The steps taken to find the course, as well as any difficulties using the classification scheme, will be noted and used for further improvement. The GFS system has the potential to save a great deal of faculty and student time by helping them locate the syllabi they need quickly.  Providing two mirror sites (USA and Europe) for the GFS system should further enhance its accessibility.
 
 

What is Web Usability Anyway?  A Conceptual Study on Usability in the Web Environment
Ping Zhang and Jiangping Chen

School of Information Studies
Syracuse University
 

With the web increasing at an exponential rate, web usability study becomes extremely important. The good news is that web usability is a widely used term and seems well studied in recent years. The bad news is that the term is used as if it were well defined, yet no common understanding or formal description can be found. Different web usability studies cover different issues and criteria. Some studies extend traditional user interface usability results directly to the web environment. Others "invent" their own criteria, claimed to be specific to the web environment. In general, the current web usability literature shows a lack of conceptual discussion of what web usability is about and a lack of systematic investigation of web usability.

Several immediate problems occur when there is no formal description or definition of web usability. First, for web designers or evaluators who want to study the usability of websites of interest, it is frustrating to decide what to do without any formal reference. Second, it is questionable whether usability has been achieved even when a web usability study is conducted successfully. Third, it is difficult to develop standard user interface building tools for web applications that ensure or enhance good usability.

In this paper, we review the traditional user interface usability studies, analyze the uniqueness of the web environment, and then present a model of web usability.

Usefulness is a key component of the practical acceptability of a system [Nielsen 93]. The same is true for the web. We first construct a model of web usefulness, which is composed of the usefulness of the browser and the usefulness of websites. For website usefulness, we consider two aspects of a website: its functionality, and its information or content. Usability concerns how well the corresponding utility or functionality serves its intended use. Consequently, web usability as a whole comprises browser utility usability, website functionality usability, and website content usability. Web usability can be defined as how well users can use the web functionality (both browser and website) and content to accomplish their tasks.

Our model clarifies the concept of web usability and draws distinctions among its different components. This can help designers detect and solve usability problems by considering these aspects as a whole.  Although the model needs further refinement, especially in how web usability is measured, its differentiation of components may improve our understanding of the usability concept in the web environment and can eventually guide us to design easier-to-use web applications.
 
 

MONDAY, MAY 24, 1999
10:30 a.m.-noon
PANELS

 

MONDAY, MAY 24, 1999
noon-1:30 p.m.
LUNCH
 

MONDAY, MAY 24, 1999
1:30-3:00 p.m.
PANELS

 

MONDAY, MAY 24, 1999
3:30-5:00 p.m.

Community Networking Initiative Research and Evaluation Activities:
Methodological Issues Related to Evaluating User Needs and Outcomes
Related to Community Information Systems

Ann Peterson Bishop, Tonyia Tidline, Susan Shoemaker, and Pamela Salela
Graduate School of Library and Information Science
University of Illinois at Urbana-Champaign
501 East Daniel Street
Champaign, IL 61820
abishop@uiuc.edu
(217) 333-3280 (V)
(217) 244-3302 (F)

This paper will report on research and evaluation associated with the Community Networking Initiative (CNI), a project funded by the U.S. Telecommunications and Information Infrastructure Assistance Program (TIIAP) in the Department of Commerce.  CNI represents a collaboration of
the Urban League of Champaign County, Prairienet (a computer-based network providing access to local information and Internet services), and the University of Illinois. The project is aimed at building the participation of low-income residents and organizations representing them in community
networking through: the distribution of computers to low-income households, nonprofit organizations serving low-income audiences, and public access sites; training and support in information technology use; and a redesign of Prairienet that facilitates resource-sharing and coordinates information across organizations in a more problem-centered manner.

During the first year of the CNI project, our primary research goal has centered on conducting a community analysis in order to uncover problems facing low-income residents of the Champaign-Urbana area, learn what information is useful in addressing these problems and how it is currently obtained, and explore attitudes related to computer use.  Our primary evaluation goals for the first year were to document the extent and nature of CNI services delivered, the expectations and characteristics of community participants, and the experiences of both CNI staff and community participants.

This paper will describe findings from CNI research and evaluation activities and address policy issues related to redressing the balance between haves and have-nots in the information age. A particular focus will be methodological issues in evaluating user needs and outcomes related to community information systems. These include:

-- the need to employ multiple methods in order to develop realistic scenarios of use related to the information practices of low-income neighborhood residents and the community-based organizations with which they are affiliated;

-- difficulties in recruiting research participants;

-- the importance of exploiting appropriate conceptual foundations from across domains (e.g., the role of social networks in information exchange, informal collaboration in learning); and

-- utilization of research results.
 
 

Winners and Survivors:  Evolution of Digital Community Networks
Linda Schamber
Telephone: 940-565-2445
Fax: 940-565-3101
Email: schamber@lis.admin.unt.edu

Terry Sullivan
Telephone: 940-484-1897
Email: Terry@pantos.org

University of North Texas
Denton, TX

Community-based digital networks are a social phenomenon, arising more or less spontaneously in cities, towns, and villages around the world as networking technologies become more affordable and accessible. In the United States, the last couple of years have seen an explosion in development of local community sites on the World Wide Web, much of this apparently a bandwagon effect following closely the advent of inexpensive Web access in smaller communities. Networks that may have been intended originally to provide local citizens with access to rather mundane local public services have become significant providers of information, not only locally but also (to the surprise of some) nationally and internationally. As recently as three years ago, only a few dozen sites were truly operational; the rest were prototypes. Now many community sites are interesting, colorful, and highly interactive, and present a wide variety of resources relating to government, education, business, and entertainment.

This paper reports on the results of a followup survey of community network managers to assess changes in their situations and networks since they responded to an identical survey more than two years ago. At that time, many networks were grass-roots
operations, instigated and implemented by volunteers who were mainly fascinated by the technologies.

The preliminary results reveal a combination of winners and survivors. Out of 25 community network sites surveyed earlier, 6 have disappeared, 11 show a date of last update (ranging from 10/97 to 2/11/99), 10 name a responsible party, and 3 show active sponsorship. The current survey both replicates and expands on the earlier survey, emphasizing user needs analysis and management considerations. Site managers describe, in their own words, needs and uses of the network by the public, successful and popular features of the network, funding sources, and maintenance schedule. They offer then-and-now comparisons with responses from the earlier survey, including changes in advice they would offer a city considering setting up a community network.

Among these networks, there seems to be an identifiable relationship between having had a systematic startup process and eventual success, or at least survival with some visible activity. Generally, the most active networks show considerable evolution in nearly all areas, including:

- Formalization of ownership and management functions

- Stabilized funding

- Wider range and sophistication of content

- Increasingly interactive and attractive interfaces

- Systematized user evaluation program

- Broader and more diverse user groups

The social and commercial implications are enormous. With the recognition by local officials that a Web site--not a billboard--has become the true gateway welcoming visitors to the community, the digital network shifts the community's self-image to what may be an entirely new view of its place in the world. With the realization also that a Web site is an ideal tool for promoting the region and luring money--through commercial development, recreational events, and so forth--local businesses and institutions are now faced with competition from unexpected quarters. This includes information-intensive institutions, such as libraries, for which the community network provides both access and, in itself, competition as an information provider.
 
 
 
TUESDAY, MAY 25, 1999
8:30-10:00 a.m.

Browser Caching and Web Log Analysis
John Fieber
Indiana University School of Library and Information Science
 

Electronic information systems routinely generate records of their use as a byproduct of normal operation.  These records are frequently analyzed to evaluate system performance, from low level evaluation of hardware and software components to high level evaluation of user information seeking behavior.  This paper reports on an investigation into the nature of transaction records generated by World Wide Web servers and on the limitations of inferences drawn from these data.

When all components of an information system are under the administrative control of the information provider, transaction logging can capture a reasonably accurate record of use for analysis.  In moving a system to the World Wide Web, the provider relinquishes control over two thirds of the system: 1) the client and 2) the communications protocol used between the client and server.  The client can no longer be instrumented directly, data collection from the client is limited to what the protocol will carry, and the protocol itself becomes a negotiated agreement between the client and server.

The Hypertext Transfer Protocol (HTTP) used in Web-based information systems carries little data directly relevant to the analysis of user behavior.  Efforts to expand data collection by layering higher level protocols, such as "cookies", on top of HTTP are ultimately controlled by the user, not the producer.  Even more important, the design of HTTP encourages network bandwidth conservation through the mechanisms of document caching and proxy servers.  Reliability and validity problems arise for analysis of user behavior and content use because important user interactions are hidden by the HTTP protocol.

This paper reports on an investigation into the nature and quantity of these hidden interactions and the mechanisms conspiring to hide them.  The foundation is an empirical study showing the complex
influence of browser caching on Web server log data.  Cache rates were found to be too high to dismiss--45% on average and as high as 94%--and dependent on factors including session length, the
position of a given page in the navigational structure of a Web site, the cache validation policy set in the Web browser, and the browsing style of the user.  The study also reveals subtle aspects of a Web site's technical construction that further complicate the caching equation.
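As a hedged illustration of the caching effect the study measures (not taken from the paper; the page names and navigation path are invented), the sketch below shows how back-button revisits served from a browser cache never reach the server log:

```python
# Sketch: how browser caching hides page views from a server log.
# Assumes an idealized browser that caches every page after its first
# retrieval and never revalidates; real cache policies vary.

def observed_requests(path):
    """Return the requests a server would log for a navigation path,
    given that revisited pages are served from the local cache."""
    cached = set()
    logged = []
    for page in path:
        if page not in cached:
            logged.append(page)   # cache miss: the request reaches the server
            cached.add(page)      # subsequent visits are served locally
    return logged

# A session with heavy back-button use: 9 page views, but only 6 logged.
path = ["home", "search", "results", "doc1", "results", "doc2",
        "results", "doc3", "home"]
logged = observed_requests(path)
cache_rate = 1 - len(logged) / len(path)
print(len(path), len(logged), f"{cache_rate:.0%}")  # 9 6 33%
```

Navigation structure matters here exactly as the study reports: hub pages such as "results" are revisited most and therefore disappear most often from the log.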

Interactions that are visible to the server have reliability problems of their own.  The presence of proxy servers, multi-user systems, Web anonymizers, and other proactive privacy mechanisms makes reliable linking of transactions into sessions impossible without the aid of higher level protocols such as HTTP cookies.  Even transaction time stamps and absolute sequence cannot always be relied upon.

Despite these problems, many commercial Web log analysis packages generate reports with inferences about user behavior such as session length, popular paths through a site, entry and exit points, and page view times.  This study shows that the data necessary for such inferences are simply not present in standard Web server log files. Efforts to collect more complete data require the  cooperation of the users, a difficult task in the loosely controlled environment of the World Wide Web.
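A minimal sketch of the fragile inference at issue (the grouping rule and timestamps are invented, not drawn from any particular analysis package): log analyzers typically reconstruct "sessions" by grouping hits from one IP address whenever gaps stay under an idle timeout, which silently merges distinct users behind a shared proxy:

```python
# Sketch of naive sessionization as commonly performed on server logs.
# The 30-minute idle threshold is a conventional default, assumed here.

TIMEOUT = 30 * 60  # seconds

def sessionize(hits, timeout=TIMEOUT):
    """hits: list of (ip, unix_time) pairs sorted by time.
    Returns a list of (ip, [timestamps]) "sessions", keyed only by IP
    address and inter-request gap -- the only signals a log provides."""
    last_seen = {}   # ip -> (current session list, last timestamp)
    sessions = []
    for ip, t in hits:
        if ip in last_seen and t - last_seen[ip][1] <= timeout:
            session, _ = last_seen[ip]
            session.append(t)            # continue the open session
        else:
            session = [t]                # idle gap exceeded: new session
            sessions.append((ip, session))
        last_seen[ip] = (session, t)
    return sessions

# Several people interleaved behind one proxy IP collapse into a single
# fictitious session:
hits = [("proxy", 0), ("proxy", 60), ("proxy", 120), ("other", 200)]
print(len(sessionize(hits)))  # 2
```

The "proxy" hits here may come from three different readers, yet the log offers no way to tell; this is precisely why session length and path reports built from such data are unreliable.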
 
 

Sampling Methodologies for Using Electronic Surveys to Evaluate Networked Information Resources
Jonathan Lazar and Jennifer Preece
Department of Information Systems
University of Maryland Baltimore County
Baltimore, MD 21250
jlazar1@umbc.edu
preece@umbc.edu

Introduction

     In recent years, local, state, and federal governments, as well as universities, libraries,
organizations, and companies have begun to make many resources available to the public through
the Internet. How can these networked resources be evaluated? Among the tools that have been
used to evaluate these resources are surveys, interviews, and focus groups. While "traditional"
paper surveys can be used, electronic surveys are becoming more prevalent in
evaluation. If one uses electronic surveys, it is very possible that one will not receive a
representative sample of the population of interest. What types of sampling methodologies can be
used to ensure that the survey responses are not biased? This paper focuses on sampling
methodologies for use with electronic surveys.

Electronic Surveys

     Electronic surveys can be implemented using either e-mail or web pages.
There are many advantages to using electronic surveys instead of paper surveys, including
eliminating postage and copying costs, and possibly lowering data entry time. These advantages
will be discussed. There are also some differences between e-mail and web-based surveys, and
these differences will also be discussed.

Sampling Techniques

     With traditional paper surveys, procedures for selecting a sample are well-established.
These sampling procedures are necessary to make accurate population estimates. Without these
sampling procedures, the survey responses may be biased and might not represent the true
population. How does one select a sample using electronic surveys? Some different approaches to
this problem are beginning to appear. The approaches chosen may be influenced by whether the
population of users is well-defined.

Populations That Are Well-Defined

     Federal, state and local government networks, library networks, and educational networks
usually offer a great deal of information to the general public, via the Internet. Many of these
networks also offer communication tools to users, including listservers, newsgroups, and bulletin
boards. In these resource networks, the population of users is well-defined. Users are required to
subscribe, register, or log in to use these resources. Even without communication tools, some
networked resources may require users to login to access specific database resources. For
instance, resource usage might be limited to citizens of a certain state, or employees of a certain
organization. In all of these situations, the population of users is automatically known
through the network transaction logs. With a well-defined population, traditional random sampling
techniques can be modified and used with electronic surveys to make true population estimates.
Examples from state and regional networks will be presented.
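As a hedged sketch of the well-defined case (the user IDs and sample size are invented; this is not an example from the paper), a registration roster drawn from transaction logs supports textbook simple random sampling:

```python
# Sketch: drawing a simple random sample of registered users for an
# electronic survey. The roster below stands in for the population a
# network's transaction logs would reveal.
import random

def simple_random_sample(population, n, seed=None):
    """Draw n users without replacement from a well-defined population.
    Sorting first makes the draw reproducible for a given seed."""
    rng = random.Random(seed)
    return rng.sample(sorted(population), n)

registered = {f"user{i:03d}" for i in range(1, 501)}  # e.g. from login logs
invitees = simple_random_sample(registered, 50, seed=42)
print(len(invitees), len(set(invitees)))  # 50 50
```

Because every member of the population has a known, equal chance of selection, responses from such a sample support the "true population estimates" the abstract describes.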

Populations That Are Not Well-Defined

     In some networked information resources, the population of users is not well-defined. In
these networks, there are usually no communication tools. Users are not required to register, or
be a part of a specific organization, or meet any requirements (such as citizenship of a certain
state). The target audience for the resources is generally broad. In these networks, there is no way
to know the true population of users. However, that doesn't mean that researchers cannot learn
more about the population of interest. Researchers can aim for a random sampling of usage (not
users), which may mean that certain users are over-represented. Or, researchers may aim for a
diverse response. Techniques to ensure a diverse response include having demographic questions
and examining web site logs. These techniques will be discussed in-depth.
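The distinction between sampling usage and sampling users can be made concrete with a small invented example (the request counts are hypothetical, not data from the paper): when survey invitations are triggered on randomly sampled log events, one heavy user can dominate the responses:

```python
# Sketch: random sampling of *usage* (log events) over-represents heavy
# users. One user generates 90 of 100 requests; ten others generate one
# request each.
import random
from collections import Counter

log = ["heavy"] * 90 + [f"light{i}" for i in range(10)]  # 100 requests
rng = random.Random(0)
sample = rng.sample(log, 20)          # e.g. survey prompt on sampled hits
share = Counter(sample)["heavy"] / len(sample)
print(f"heavy user's share of sampled usage: {share:.0%}")
```

The heavy user is 1 of 11 people but supplies roughly 90% of sampled events, which is why the abstract notes that usage sampling "may mean that certain users are over-represented" and why demographic questions and log inspection are needed to check diversity.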

Network Access

     It is important to note that electronic surveys will only be able to provide access to those
who are already using the networked resource in question. If researchers want to know more
about why users AREN'T using a networked resource, it is important to go to the population of
people who are potential users of the networked resource but are not using it. Tools to access
potential users of the networked resource include paper surveys, interviews, and focus groups.
These may be used in conjunction with electronic surveys. Examples from the literature will be
presented.

Summary

     Sampling methodologies for use with electronic surveys are just beginning to appear.
However, a full set of sampling methodologies for use with electronic surveys is needed. Some
evaluation approaches use electronic surveys exclusively, while some methods use a combination
of electronic surveys and traditional paper surveys, interviews, or focus groups. Sampling
methodologies that can meet all of these different needs and populations must be developed.
Otherwise, there will be no way to ensure that surveys implemented over the Internet have valid
results.

Selected References

     Anderson, S., and Gansneder, B. (1995). Using electronic mail surveys and computer-monitored data for studying computer-mediated communication systems. Social Science Computer Review, 13(1), 33-46.

     Anderson, S., and Harris, J. (1997). Factors associated with amount of use and benefits
obtained by users of a statewide educational telecomputing network. Educational Technology
Research and Development, 45(1), 19-50.

     Bertot, J., and McClure, C. (1996). Electronic surveys: Methodological implications for
using the world wide web to collect survey data. Proceedings of the 59th Annual Meeting of the
American Society for Information Science, 173-185.

     Bertot, J., McClure, C., Moen, W., and Rubin, J. (1997). Web usage statistics:
Measurement issues and analytical techniques. Government Information Quarterly, 14(4), 373-395.

     Fowler, F. (1993). Survey Research Methods. (2nd ed.). Newbury Park, California: Sage
Publications.

     Lazar, J., and Preece, J. (1999). Designing and Implementing Web-Based Surveys. Journal
of Computer Information Systems (In press).

     Lazar, J., Tsao, R., and Preece, J. (1999). One Foot in Cyberspace and the Other on the
Ground: A Case Study of Analysis and Design Issues in a Hybrid Virtual and Physical
Community. WebNet Journal: Internet Technologies, Applications, and Issues (In press).
 
 

North Carolina State University Libraries Evaluates the Use of Network Resources
Keith Morgan, North Carolina State University

For 1997/98, the North Carolina State University Libraries selected the area
 of electronic resources and services as the second topic in its systematic
 program of user surveys. Over the last decade, library expenditures for
 digital resources and computing equipment had increased dramatically, and
 the library had introduced new services and staff to serve users'
 information needs.   Consequently, library administration recognized the
 importance of ensuring that library expenditures truly address user needs on
 this campus and of determining what those needs are in the rapidly changing
 digital environment.

 The  project was assigned to the newly formed Digital Library
 Initiatives Department, whose responsibilities include the creation of
 innovative, user-responsive electronic services and resources for
 campus-wide instruction and research.  Because we wanted to learn as much as
 we could not only about users' actual experiences with electronic resources
 and services but also about their perceptions of them, the survey team
 decided that the focus group interview technique was appropriate for our
 situation.  This methodology was chosen as appropriate for idea generation
 and needs assessment in the early phases of usability engineering.

 This presentation will delineate and assess the findings of this study,
 evaluate the usefulness of the focus group methodology, and consider what
 can be extrapolated from these observations about the use of networked
 resources.  Among the findings of the survey that conference participants
 should find useful are faculty and student reactions to increased network
 database offerings, including requests for increases in quantity, quality,
 and speed. Congruent with these requests are clearly stated desires for
 greater and easier accessibility (especially via remote access) to Web
 resources, but also that librarians make the Web easier to use, with pages
 customized to individual interests.  The presentation will conclude with a
 demonstration of the new
 "MyLibrary@NCState" portal, which illustrates one method of applying network
 technologies in response to user perception of information overload.
 
 

Online Journal Use in a Segment of Academe
George S. Porter
Sherman Fairchild Library of Engineering & Applied Science
California Institute of Technology
Mail Code 1-43, Pasadena, CA  91125
Telephone (626) 395-3409 Fax (626) 431-2681

Judith Jo Nollar
George S. Porter
Ian Roberts
Caroline Smith
Ed Sponsler

The Caltech Library System (CLS) Web Committee has gathered data on usage trends of printed volumes and analyzed server logs from the CLS web site and the online catalog.  ht://Dig, from San Diego State University, has been employed to map the adoption rates of online journals into the web pages of research groups and individuals on the Caltech campus.
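A hedged stand-in for the link-mapping step (this is not ht://Dig, and the journal hosts and sample HTML are invented): counting links on a campus page that point at known journal hosts is enough to gauge adoption page by page:

```python
# Minimal sketch of mapping online-journal adoption: count anchors on a
# page whose targets fall on known journal hosts. Host list and HTML are
# hypothetical examples only.
from html.parser import HTMLParser
from urllib.parse import urlparse

JOURNAL_HOSTS = {"link.springer.com", "pubs.acs.org"}  # assumed list

class LinkCounter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.journal_links = []

    def handle_starttag(self, tag, attrs):
        # Record <a href="..."> targets that resolve to a journal host.
        if tag == "a":
            href = dict(attrs).get("href", "")
            if urlparse(href).netloc in JOURNAL_HOSTS:
                self.journal_links.append(href)

page = ('<a href="https://pubs.acs.org/journal/jacsat">JACS</a>'
        '<a href="https://example.edu/group">Group page</a>')
counter = LinkCounter()
counter.feed(page)
print(len(counter.journal_links))  # 1
```

Run over a crawl of research-group pages, tallies like this would show which groups link directly to licensed content rather than going through the library site.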

We have explored the role played by the library in the promotion and adoption of online journal use.  We sought to determine whether either the library web site or the online catalog was the primary gateway for the campus community to licensed content.  We documented the roles of personal and group web pages in providing access to online journals.  We examined the organizational schemes employed in customized constellations of networked information resources to determine whether they could, or should, be emulated in library web site design.

Physics, astronomy, and mathematics are often thought of as closely allied academic disciplines.  We examined the inter-relatedness of subject matter in the production and use of online journals within these fields.  Use studies of library materials indicate that online availability of content coincided with a marked decrease in the use of bound astronomy volumes.  We sought to establish whether there is a causal relationship between these observations.  An attempt was made to examine the broader implications of this trend throughout the sciences.

The CLS Web Committee is focusing on these three disciplines in an effort to better understand the patterns of use in our community. Through a better understanding of use and adoption patterns, we hope to gain insight into organizational schemes that will promote the use of online journals.

Additional data is being acquired from the publishers of journals.  We have gained an unexpected insight into the field of electronic publishing: there are no standards for publishers in gathering or disseminating usage data.  A thorough review is being undertaken of the data elements which content licensors need to receive from content providers in order to make reasoned judgments of the utility of journals in online form.  These same data elements are necessary to correlate documented print usage with reported online accesses.

The acquisition of online usage information presents several other concerns not ordinarily envisioned.  Care must be exercised with respect to privacy issues.  Data linking specific research patterns to specific workstations, while potentially interesting, is anathema to the privacy rights advocated and respected throughout academe and the library sphere.  At the same time, easy measures of Internet traffic between a content provider and a campus, for instance byte counts, are roughly equivalent to measuring the tonnage of mail passing between the physical entities.

Useful data, such as the file types requested, fall into an entirely different category, providing libraries with an opportunity to proactively support information display in the formats most desirable to their clientele.  In the print paradigm, the printed page and adequate lighting were the only relevant factors; electronic publishing significantly increases the number of variables in the quest to acquire and view information.
 
 

Alexandria Digital Library User and Use Evaluation: Experiments with a Neural Network
Method for Log Data Analysis

Philip Sallis
Professor of Computer Science and Vice-President (Research), Auckland Institute of Technology
philip.sallis@ait.ac.nz

Linda Hill, Mary Larsgaard, Kevin Lovette, Catherine Masi, Mary-Anna Rae
Alexandria Digital Library Project
UCSB, Santa Barbara
{lhill, mary, kal, masi, mrae}@alexandria.ucsb.edu
 

The Alexandria Digital Library (ADL) Project at the University of California, Santa Barbara, is
primarily a research project to develop a digital library of georeferenced information where a key
descriptive and retrieval attribute is the spatial, map-based representation of place (longitude and
latitude coordinates). It therefore presents new evaluation issues involving user expectations,
choices, results, and reactions in a new operational environment where spatially georeferenced
information and library services are merged with the latest capabilities of the web environment. Early
ADL user and use studies were presented at the 1997 ASIS Annual Conference. This paper
describes current experience with a methodology for managing and statistically analyzing user and
session log data. As an example of the type of analysis that can be done, summations of the session
log data are used to determine such user profile characteristics as frequency and duration of
sessions. Arbitrary value ranges are assigned fuzzy labels, such as low, medium and high frequency
of use, and short, medium and long session duration. The data analyzed can be extended to other
activities such as the collections searched and the query attributes used.  Each session instance is
summed for individual users, thus producing unique profile descriptions, which can be further
summed to generate typical user profiles for classes of users determined from the registration
data. Connectionist methods such as fuzzy clustering and Kohonen self-organizing maps (SOMs) are
described as appropriate analytical tools for this investigation. Finally, other sources of user and use
evaluations will be used as an interpretative framework for the log analysis results.
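The profile-labeling step described above can be sketched as follows (the numeric thresholds are invented stand-ins for the paper's "arbitrary value ranges"; this is not the ADL code):

```python
# Sketch: map per-user session statistics onto the fuzzy labels used in
# the ADL analysis -- low/medium/high frequency, short/medium/long
# duration. Cut points below are assumptions for illustration.

def label(value, cuts, names):
    """Return the first label whose upper cut the value does not exceed;
    values above every cut get the last label."""
    for cut, name in zip(cuts, names):
        if value <= cut:
            return name
    return names[-1]

def profile(sessions_per_month, mean_minutes):
    return {
        "frequency": label(sessions_per_month, (2, 8), ("low", "medium", "high")),
        "duration":  label(mean_minutes, (5, 20), ("short", "medium", "long")),
    }

print(profile(sessions_per_month=12, mean_minutes=7))
# {'frequency': 'high', 'duration': 'medium'}
```

Summing such per-session labels over each user, and then over each registered "role", yields the composite group profiles the abstract describes; the fuzzy clustering and SOM methods then operate on these summarized vectors.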

This is research-in-progress; the work we have done so far with profile development and user
surveying is considered to be a testing of the approach.  We defined a "good" test user group as
having at least three repeat users with enough (>100) "good" sessions to support analysis. A repeat
user is one who has used ADL more than once. A "good" session has at least one query. Using
these criteria, we were able to form two initial test user groups based on "role" (researcher and
library/information specialist). Registration, session, and query data were used to develop composite
profiles of these user groups through connectionist methods and statistical averaging. Individual
members of the test groups were contacted and asked to comment on the group profiles and rate
them from their points-of-view as well as provide evaluative comments about ADL.

Out of 3818 sessions for which we have session log data, 1955 are by repeat users and contain at
least one query. Out of a total of 143 registered users, 53 are repeat users and 40 of these have
sessions that include at least one query. Two users with substantially more sessions and queries than
the others were removed from the group, leaving 38 registered users that met our criteria.
Twenty-nine (29) of these are in our two initial user groups, with 17 in one and 12 in the other, with
a total of 627 sessions.

Criteria used to profile the groups include sex, age, affiliation, educational degree, proficiencies with
geospatial data, online searching, WWW, and computers, experience with ADL (frequency and
duration of sessions and use of previous versions of ADL), search characteristics, and queries per
session.
 
 

Gee--Look What We Did!
Methods of Measurement and Evaluation of the Effectiveness of
Departmental and Individual Web Sites in Academic Libraries

Jeanie M. Welch
University of North Carolina at Charlotte

This paper discusses the methods of measuring and evaluating departmental and individual Web sites in academic libraries, especially the web pages of public service departments and public service librarians.  For years the responsibilities of public service departments have been measured in both quantitative and qualitative terms.  Such traditional public service functions as individual desk service, bibliographic instruction, and collection development have been quantified through statistical logs.  The creation of departmental and individual Web pages, which began in the early and mid-1990s, needs to be formalized in the same manner as other traditional functions.

This discussion includes the types of Web sites (homepage, directional, reference, and combination) and the areas in which Web-related initiatives in public services can be formalized, including:
 

Suggested methods of measurement and evaluation of departmental Web sites include:
  Individual Web sites are those created and maintained by individual members of the department.  These can include a "virtual reference desk" Web page and subject- and course-specific Web pages created by subject specialists.  Suggested methods of measurement and evaluation of individual Web sites are similar and include:
  There are tools to assist in evaluating Web pages.  These include log file analysis software (usage) and Web page analysis and link validation Web sites (e.g., Dr. HTML) (content).  An example of patron feedback is the annual Georgia Tech World Wide Web User Survey.  According to this survey, the two biggest complaints of Web users are:
  The last section discusses the implications and possible political advantages of such usage measurements and evaluation.  These include the impact of Web pages on library-wide and departmental policies and procedures.  On a library-wide level the Web needs to be included in the library's service mission and annual reports.  Web page governance should be delineated as to the assignment of the roles and responsibilities of Webmasters.  The library should also have a voice in Web governance on the institutional level, and librarians should be able to document Web work in their self-evaluations.

Over the years libraries have accommodated other new technologies (e.g., online databases, bibliographic utilities, and CD-ROM products).  The Web has become an integral part of libraries' service missions.  The "make-do" and "catch-as-catch-can" days of library Web creation and management should now be over.  Web site usage measurement and Web site evaluation can facilitate the inclusion of Web-related activities in library policies and procedures.
 

TUESDAY, MAY 25, 1999
10:30 a.m.-noon
PANELS
 
 

TUESDAY, MAY 25, 1999
noon-1:30 p.m.
LUNCH
 

TUESDAY, MAY 25, 1999
1:30-3:00 p.m.

The Effective Electronic Library
Peter Brophy

New models of information and library services are needed if we are to develop
meaningful measures of performance in the digital networked environment. At the
Centre for Research in Library & Information Management (CERLIM), Manchester
Metropolitan University, UK a number of studies have been undertaken under the
generic name of the Management Information for the Electronic Library (MIEL)
Programme. These include projects based on the approaches of quality management,
using ISO9000, the European ‘Model for Business Excellence’ and Garvin’s
‘Dimensions of Quality’; a study for the UK Higher Education Funding Councils
designed to extend the traditional library model developed under a strand of work
published as The Effective Academic Library; a public library sector project entitled
The Value & impact of end-user IT access in Libraries (VITAL); and a pan-European
research project, EQUINOX, which draws together a multinational team to provide a
broad range of service and software development perspectives from a variety of cultural
traditions (UK, Germany, Spain, Ireland, Sweden).

An initial set of electronic library performance indicators was published by CERLIM in
1997 for the UK academic sector. Subsequent work has refined the underlying model
and the CERLIM team is currently examining its impact on our understanding of the use
and impacts of electronic information services. At the same time, a software
specification is being produced within the EQUINOX project to enable performance
indicators to be generated by pulling datasets from disparate service providers (e.g.
web servers, library management systems, CD-ROM servers) within a standard
structure. Based on a DTD specified within an XML ‘wrapper’, this approach should
enable the community to encourage system suppliers to provide management
information within a standards-based framework.
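To make the "wrapper" idea concrete, here is a hedged sketch of pulling per-service datasets into one XML structure (the element and attribute names are invented for illustration; they are not the EQUINOX DTD):

```python
# Sketch: assemble performance datasets from disparate providers into a
# single XML wrapper. Element names and counts are hypothetical.
import xml.etree.ElementTree as ET

datasets = {                      # assumed per-service measures
    "web-server": {"page-views": "120430"},
    "library-system": {"loans": "8412"},
    "cdrom-server": {"sessions": "934"},
}

root = ET.Element("performance-data", period="1999-04")
for provider, measures in datasets.items():
    ds = ET.SubElement(root, "dataset", provider=provider)
    for name, value in measures.items():
        ET.SubElement(ds, "measure", name=name).text = value

xml = ET.tostring(root, encoding="unicode")
print(xml.startswith("<performance-data"))  # True
```

The value of such a wrapper is exactly what the abstract argues: once every supplier emits measures in one agreed structure, indicators can be computed across web servers, library management systems, and CD-ROM servers without bespoke parsing for each.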

Just as international agreement on performance indicators for the traditional library has
been achieved through extensive discussion and consultation, there is a need to reach
agreement on common approaches to the measurement of the performance of
electronic services. Members of the EQUINOX consortium are active within the relevant
ISO and IFLA bodies, and the intention is that the recommendations of EQUINOX will
be put forward to these bodies for consideration within the international standards
framework.

In this paper progress towards standard performance indicators for the networked
environment will be reported in the context of emerging theoretical service models. The
initial models explored three managerial perspectives (strategic, planning, operational)
and eight dimensions (performance, features, reliability, conformance, durability,
serviceability, aesthetics and perceived quality). Recent work has extended this to a
functional approach which builds on other networked information service models using
the core functions of resource discovery, location, request and delivery. This
multidimensional approach enables a rich picture of the networked service environment
to be presented.
 
 

Measuring the Impact of the Web on Non-work Based Tasks
John D’Ambra
School of Information Systems
The University of New South Wales
Sydney 2052 Australia
Phone: +61 02 9385 4854
Fax: +61 02 9662 4061
Email: j.dambra@unsw.edu.au

Many service providers are now providing applications on the Web that encourage people to do
business and satisfy information needs. The research reported in this paper attempts to investigate
the perceived value of these information services for meeting the information needs of users by
answering the question: What is the impact of information services of the Web on non-work based
tasks? The emphasis on non-work based tasks allows the research to focus on the use of the Web
in satisfying information needs that arise outside the work-place/task domain.
 

Three streams of literature are considered: usage of the Web, user satisfaction with the Web, and the
complementary field of individual performance and the impact of information technology. The issues
that arise from the usage and satisfaction literature are the predominant use of the Web as a
communication medium and the use of the Web as a form of recreation and entertainment. Two
models emerge from the individual performance and the impact of information technology literature
which may be useful in attempting to measure the impact of the Web on non-work based tasks: The
theory of planned behavior and the technology to performance chain. Klobas (1995) tested the
ability of three models of information resource use to explain the use of the Internet. The three
models were: Information Use (Allen, 1990); the Technology Acceptance Model (TAM) (Davis, 1989);
and the Theory of Planned Behaviour (Ajzen, 1993). Klobas found that the Theory of Planned
Behaviour best explained the use of the Internet. The good performance of the Theory of Planned
Behaviour in the study showed that information resource use is motivated by similar factors to other
human behaviours. This is an interesting finding, as unlike other information technology the use of the
Web is not mandatory, it is voluntary. Goodhue and Dale(1995) pursue the optional use of
technology and the investigation of other factors as contributing to the successful utilisation of
technology in a proposed model, the technology-to-performance chain. This paper argues that the
approach adopted by Goodhue and Dale is useful in measuring the impact of the web on non-work
based tasks because of the recognition that the use of the technology is optional and that the
technology should fit the task. These two major issues reflect the environment of web usage.
 

The paper then pursues the possibility of using the technology-to-performance model in measuring
the satisfaction of web users. To this end the following research propositions were explored through
a research method including focus groups and content analysis:
 

Proposition 1: Can a scale measuring the task technology fit of using the Web for non-work
based tasks be developed?

Proposition 2: What is the dimensionality of a task technology fit construct for Web usage?
 

The output from the content analysis is a series of items that constitute an untested scale for
measuring the task-technology fit of the Web for non-work based tasks. The number of items that
emerged from the content analysis demonstrates that the issues operationalised are of interest
to the sample that participated in the focus groups. On inspection, the items give some indication
of the dimensionality of the task-technology fit of the Web, and of the depth of that possible
dimensionality. Some discussion of the groupings is offered.
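As a rough illustration only (not the authors' actual procedure), a content analysis of focus-group transcripts might shortlist candidate scale items by keeping only themes raised in more than one session. All session data and theme codes below are hypothetical:

```python
from collections import Counter

def shortlist_items(coded_sessions, min_groups=2):
    """Keep only themes coded in at least `min_groups` focus-group
    sessions as candidate scale items."""
    seen = Counter()
    for session_codes in coded_sessions:
        for code in set(session_codes):   # count each theme once per session
            seen[code] += 1
    return [code for code, n in seen.items() if n >= min_groups]

# Hypothetical coded transcripts, one list of theme codes per session
sessions = [
    ["speed", "trust", "fun"],
    ["trust", "coverage", "speed"],
    ["fun", "trust"],
]
items = shortlist_items(sessions)
```

A recurrence threshold like this is one simple, transparent way to separate widely shared concerns from idiosyncratic ones before any scale testing.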
 
 

Networked Information Resources: A Review of Research Methodologies
Kathleen R. Murray
University of North Texas

    The rapid growth of networked information resources, both within organizations and in the global
networked environment, has created an organizational imperative to participate in this arena with all
due haste; that haste, in turn, has necessitated a "learn-as-you-build" approach. This
approach is exemplified in pilot projects and trials, which often seek to demonstrate technical
feasibility and identify implementation barriers, and frequently are not concerned with formal
evaluation processes.

    Diverse investigations and experiments into various aspects of networked information resources
are underway in both private and public sectors of society. Many organizations lack methodologies
and practices for measurement and evaluation of either their pilot projects or strategic information
networks, whether these networks are organizational intranets, inter-organizational extranets, or
the public Internet. In the spirit of discovery that frequently characterizes strategic, leading-edge
technology projects, formative and summative evaluations tend to be loosely structured or not given
priority attention by implementers and researchers. In many cases, technical and financial factors are
major evaluation criteria. In other cases, evaluations are conducted subsequent to full
implementations and involve extensive criteria and various methodologies. It is of interest to discover
if emerging technology trials involving networked information are necessarily characterized by the
lack of structured evaluation processes. Additionally, when projects and trials do include
evaluations, what criteria are used to evaluate network-based services and information activities?
What evaluation methods are used? Are both objective system performance measures as well as
subjective quality measures employed?

    Considering the projected growth of networked information resources in all sectors of society, it
is worthwhile to review the scope of activities in this arena in order to: (a) classify existing areas of
study, (b) develop appropriate research methods, and (c) guide future research agendas. In the past,
reviews of work in progress from multiple perspectives have assisted researchers, informed policy,
and added to core knowledge in specific areas of study. This review considers a range of networked
information use and evaluation projects, as well as policy statements and strategic agendas from
organizations involved in networked information. Many of these strategic activities have multiple
participants and are funded under the umbrella research and development activities pursuant to
implementation of the National Information Infrastructure. Other strategic projects involve
communities of common interest, such as associations with common goals, or industry alliances
reflecting corporations with common requirements.

    To ensure a wide breadth of coverage, perspectives from commercial, academic, and
governmental arenas are considered. The final review (a) classifies current activities regarding the use
and evaluation of networked information resources, (b) identifies research methods for the evaluation
of networked information resources, and (c) proposes a research agenda for networked information
use and evaluation.
 
 

TUESDAY, MAY 25, 1999
3:30-5:00 p.m.

Structural & Administrative Metadata for Digital Libraries: The Making of America II Project
Howard Besser
UC Berkeley/UCLA

The Making of America (MoA II) Testbed Project is a Digital Library Federation (DLF) coordinated,
multi-phase endeavor that is investigating important issues in the creation of an integrated, but
distributed, digital library of archival materials (i.e., digitized surrogates of primary source materials
found in archives and special collections). The project began with a written White Paper that defined
the standards needed for creating and encoding digital archival objects (e.g., a digitized photograph, a
digital book or diary, etc.). The project has focused on the administrative metadata standards needed to
manage this type of digital object and structural metadata standards needed to navigate through the
object. The focus is on distributed repositories and the metadata and tools that will be needed for
interoperability.

The UC Berkeley Library is leading an effort on behalf of DLF institutions to come to common
agreement on structural and administrative metadata for the digitized versions of half a dozen different
document types, and to create a testbed to examine interoperability issues. Thus far (as of 2/99) the
UCB Library has defined the metadata elements, and produced tools to support the capture of
administrative and structural metadata during the creation of digitized archival materials. The
participating libraries have used these definitions and tools to create digital objects. Berkeley has built
tools that operate on the metadata and in the near future these will be tested with display of digitized
materials to archive users. Shortly after the ASIS Midyear conference the interoperable testbed should
be populated and ready to test retrieval across distributed repositories.

In this presentation the speaker will discuss the actual structural and administrative metadata that the
project has recommended. He will also talk about the consensus process which led to that selection.
He will discuss the set of "best practices" that the project recommends. And he will go over the
underlying systems architecture that the project assumes. He will also examine the ways in which the
project can be evaluated.

Further information on the project is available at http://sunsite.berkeley.edu/moa2/
 

 
WEDNESDAY, MAY 26, 1999
8:30-10:00 a.m.

Privacy, Electronic Commerce Resources, and the Web: An
Actor-Theoretic Examination of the P3P Project

Mark S. Ackerman
Associate Professor
Computing, Organizations, Policy, and Society (CORPS)
Information and Computer Science
University of California, Irvine

Electronic commerce allows the capture of enormous amounts of personal information, and requires trans-national flows of that information (Cate 1997).  The Platform for Privacy Preferences Project (P3P), sponsored by the World Wide Web Consortium (W3C), is one attempt to find technical solutions for what will be a growing privacy problem on the Web (Cranor and Reagle, 1998).

This paper consists of two parts.  The first is an explanation of the P3P project, a general description of the actors involved in the P3P project, and an analysis of their competing and cooperating agendas. (All specifics, however, have been disguised for publication.)

This will provide an empirically-based analysis of the emerging privacy issues in Web-based electronic commerce, including those that are envisioned.  This analysis is based on actor theory (Hughes 1993, Latour 1993).  Many actors are extremely powerful, attempting to maintain political control over what has been geographically-based regulation of information flows.  Other powerful actors are attempting to control the electronic commerce infrastructure for the next twenty years; these actors largely look at personal privacy as a market feature.  Other actors are relatively minor players looking for a "piece of the action" or to promulgate their policy position.  As with other actor-theory analyses, an examination of the actors, their agendas, and their trajectories will lead to a fruitful discussion of the basic issues involved in creating privacy policy.  It should be noted that privacy policy strongly affects the public's future information needs and that the resulting formation of the actor network will largely determine what are considered relevant information issues in the e-commerce age.

Data for this section are from a participant-observation study of the P3P project over the last 1.5 years.  Field work included direct participation in working group meetings and activities as well as at
the W3C.

The second part of this paper is an analysis of how "usability" must shift to "social usability" (Kling and Elliott, 1995) in a Web-based environment.  If the first part of the paper could be considered as
how the social world mediates the development of technology, the second part considers how the technical will mediate social use. Tools mediating information access cannot merely incorporate narrowly construed notions of usability.  The P3P technology must fit into a myriad of social worlds; implementations and adoption strategies must consider these differing cultural and regulatory worlds, and evaluations must note these differences. For example, users may have competing social definitions of "privacy" and "proper use", and straightforward HCI measures of usability do not work in this context. As an alternative, social usability considers users as actors in their own networks, a more fruitful way to examine even such seemingly minor issues as defaults in products shipped by software vendors.
 
 

Texts and Users: The Relevance of Humanities-Based Theories of Text
Production to the Study of Networked Information Use

Dr. D. Grant Campbell
Faculty of Information and Media Studies
Middlesex College
University of Western Ontario
London
Ontario N6A 5B7
PHONE: 519-661-3542
FAX: 518055103505
gcampbel@julian.uwo.ca
 

User studies have a long history in the field of Information Science, and have often been used to measure the use of specific information types and sources: card catalogues, microfilm catalogues, OPACs, online databases and now, more recently, the World Wide Web.  However, the work of  Darnton (1990) in the history of the book has shown us that information use occurs within a broad cycle of text production, ranging from writing, through publication and printing, to dissemination, use and reader response.  All stages of this cycle, including information use, are inevitably affected by the social, legal, economic and technical contexts in which text is physically produced and distributed.  User studies centered on digital media, therefore, must be wary of importing study designs born in the print domain without considering the fresh contexts in which those studies must take place.  In particular, they must take into account three fundamental shifts in the nature of textual production:

1. Mutability of content: as networked information can easily be revised, and as the revised information replaces the information it supplanted, users are no longer dealing with stable semantic content.

2. An emerging stability based on textual structure: standards are emerging at the structural level of electronic information, rather than at the semantic level, in the form of markup standards and metadata architectures.

3. A wider range of information media: as digital information increasingly crosses traditional media boundaries, access methods are expanding beyond traditional indexing methods to incorporate sound recognition, form and visual shape, all of which employ new cognitive responses.

The use of networked electronic information, therefore, is bound up with the circumstances and dynamics of electronic text production, and information researchers require new theories and new models to inform both qualitative and quantitative user study design.  As information access continues to be linked  to the development of information technology, evaluation approaches for networked information, therefore, will require an increased intercommunication between the systems stream and the needs and uses stream of library and information science.  Furthermore, text-centered approaches in humanities scholarship, particularly in the fields of literary studies and critical theory, could provide a source of new models and approaches. This is already happening, to some extent:
Dillon and Vaughan (1997) and Toms and Campbell (1999) have used genre theory, as developed in literary studies, to account for the ways in which users respond to the visual features of information.  But other insights could be found as well. In particular, the growing rift between mutable content and permanent structure could make use of the concepts of literary structuralism: the isolation of essential structural features of text, features which can be perceived on a mythical level (as in the work of Northrop Frye) or on a linguistic level (as in the work of Roland Barthes), and which play a key role in the text's essential nature and its effect on the reader, over and above the semantic content of the text itself. This approach could have significant advantages in studying the effect of structure-based markup languages such as SGML and XML, and provide new ways of evaluating the effectiveness of the metadata systems, document type definitions and electronic stylesheets that will be used to present, organize and retrieve digital information.
 

Darnton, R. (1990).  The kiss of Lamourette.  New York: Norton.

Dillon, A. and Vaughan, M.W. (1997).  It's the journey and the destination: Shape and the emergent property of genre in digital documents.  New Review of Hypermedia and Multimedia 3: 91-106.

Toms, E.G. and Campbell, D.G. (1999). Genre as interface metaphor: Exploiting form and function in digital environments. In Proceedings of the Hawaii International Conference
 
 

Use, Users, and Usability of Academic Library Websites
Kyung H. Kim
SCILS, Rutgers University
Email: kyunghye@scils.rutgers.edu

Libraries, as a networked community, are expanding their presence on the World Wide Web to
meet the needs of the diverse communities they serve. Library web sites have proliferated as a major avenue of access to their collections and services. These websites have multiple purposes: bibliographic instruction, question/answering tools, public relations, and interactive customer services such as online recall or online interlibrary loan. The libraries function as gateways to internal and
external information resources. Indeed, as LeClair (1998) notes: "Library websites have evolved dramatically from their beginnings as a convenient index to electronic information resources to an electronic representation of the library itself." In this regard, one can argue that library websites are virtual representations of the traditional library which extend and redefine the basic library functions. These websites offer networked information services to enrich individuals' information seeking and enhance the quality of the interactions between the user and the library.

A survey of recent library website literature reveals that guidelines for designing such sites abound; nonetheless, only a few studies have examined the impact of these websites on their users. Further, the emphasis during this development phase has been on providing access rather than on building user-sensitive systems. As a result, our understanding of how users actually make use of library websites, and of users' underlying motives and purposes, may be unnecessarily limited. There is a
growing consensus, however, that in order to evaluate the library in a networked environment it is important to approach this issue from the user's point of view. In addition, it may be appropriate to study this problem using mixtures of qualitative and quantitative methods. The research offered here has two phases: (1) administer a paper/electronic survey to randomly selected individuals from a
large research university to assess their experience and expectations with using their library's websites; and, (2) test the usability of the websites using proxy users by videotaping their use and capturing their search history with built-in software. Thus, data from these quantitative and qualitative approaches is expected to enrich the overall results and possibly make them more representative of the patterns of use which might be made of library websites. This paper focuses on the first study, the
survey, and it aims to present the implications for the library's roles in a networked environment by improving service while addressing user-interface design issues.
 
 
Use, Benefits and Constraints of Electronic Communications in Africa: Lessons Learned In Trying to Assess Them

Menou, M.J., CIDEGI, France *
Asaba J.F.K., National AIDS Documentation and Information Centre, Kampala, Uganda.
Bazirake, B.B., Systems Librarian, Makerere University, Kampala, Uganda.
Chifwepa V., D.L.I.S., University of Zambia, Lusaka, Zambia.
Hafkin, N., Coordinator, African Information Society Initiative, U.N. Economic Commission for Africa , Addis Ababa, Ethiopia.
Rorissa, A., Systems Librarian, University of Namibia, Windhoek, Namibia.

* To whom correspondence should be addressed: Michel.Menou@wanadoo.fr; B.P. 15, F-49350 Les Rosiers sur Loire, France
 

Between 1995 and 1997 a study was conducted in 4 African countries: Ethiopia, Senegal, Uganda and Zambia in order to investigate the benefits resulting from the use of electronic mail. Its scope was extended to Internet access as it became more widely accessible. The study was carried out with the support of the Canadian International Development Research Centre (IDRC) as part of the
second phase of the international research program called "Impact of information on development". The first phase of this program produced a preliminary framework for the study of the impact of information on development which individual projects in phase 2 were intended to check against real life cases.

The work was carried out by researchers from each country working in a coordinated fashion and using, in principle, standardized instruments. The basic procedure included an initial broadly based survey by Email of all subscribers to the only Email services originally available (Fidonet centers)
to collect general and usage data, followed by interviews of a structured sample of 50 users (20 to 50% of the total initial population), focusing on impact issues. The interviews were to be repeated after some 6 months, with a view to validating the earlier replies. The identified benefits and drawbacks were supposed to be discussed with the stakeholders toward the end of the project.
The samples were designed to ensure, as far as possible, representativeness of the actual user population in each country, as well as coherence across the countries with regard to users' occupations and frequency of use. While the first phase was completed more or less as planned, from January 1995 through May 1996, the second phase slipped somewhat behind schedule in an attempt to obtain more replies. As a result of the advent of full Internet access, the second phase (April 1996 - December 1997) became a complement and extension of the first rather than a validation.

The study was designed for a captive audience of Fidonet users, but within a few months Internet mail and full Internet access became available, together with competition among various ISPs. The level of exposure and experience among users thus varied greatly and was constantly changing. In the meantime it proved very difficult to track those in the sample, even over this relatively short period. The fast-changing scene of Internet services in Africa is likely to make the heterogeneity and volatility of the user population a lasting constraint on the continent for a relatively long time. Informal exchanges of opinion through Email are not yet common and suffer from connectivity limitations. The gathering of factual as well as qualitative information is further affected by institutional and cultural biases such as security, hierarchical status, gender, age, etc.

As noted in many other studies, e.g. Saracevic and Kantor (1997), the concept of the value and benefit of information is very hard to grasp for most people. Because of the subject of the study and the relative novelty of the phenomenon, respondents tended to focus on connectivity and use aspects. They could address impact only in very general terms. The time for the latter to mature may well be over 5 years. Most users in the study had 2 years or less of exposure to electronic communications.

Users in the study are predominantly male professionals in the public or not-for-profit sector. They consider themselves computer literate. Most of them do not have a connected computer and use an institutional account, even though home or dual use reached some 30%. They seek contacts and information outside Africa for the fulfillment of their work duties. The major benefit so far has been an improvement in the cost-effectiveness of international communications. Limitations of the infrastructure (including in-house computers, networks and maintenance) and the cost of telecommunications are the main obstacles, especially in rural areas, which remain under-represented in the user population and suffer more material and financial constraints. Interestingly, at this early stage, lack of training and technical support, and information overload, were pointed to as worrying factors.

Both the nature of the phenomenon and the resources usually available make a nationwide investigation most difficult. Focusing on particular communities, and considering the changes brought about in their major problems rather than their use of the Internet, might bring more clarity. A truly longitudinal and participative approach involving all stakeholders is also recommended. It would be advisable for ISPs to closely monitor their customers and their behavior. Panels or focus groups may also be used at regular intervals in order to complement the basic data and traffic records.

REFERENCES

The full set of reports produced by the project in Spring 1998 can be found at http://www.bellanet.org/partners/aisi/
It includes:
Asaba,  J.F.K.;  Bazirake Bamuhiiga, B.  Connectivity in Africa: use, benefits and constraints of electronic communication - Uganda Phase 1  and Phase 2 (2 reports).
Chifwepa, V.  Connectivity in Africa: use, benefits and constraints of electronic communication - Zambia Phase 1 and Phase 2 (2 reports).
Diop, O.  La connectivité en Afrique: Utilisation, avantages et contraintes des communications électroniques - Sénégal Phase 1 and Phase 2 (2 reports).
Menou, M.J.  Connectivity in Africa: use, benefits and constraints of electronic communication -Synthesis Report : Part 1 Methodological issues - Part 2 Overview of the findings of the project. (2 reports).
Rorissa, A. Connectivity in Africa: use, benefits and constraints of electronic communication - Ethiopia Phase 1 and 2 .
Rorissa, A. Connectivity in Africa: use, benefits and constraints of electronic communication - Results Obtained from a questionnaires survey of participants in the April 1995 Regional Symposium on Telematics for Development in Africa .

Evaluation Unit. Department of Community Health, Addis Ababa University (1994).  HealthNet: Satellite communications research for development. Evaluation report. Draft.

Hafkin, N. & Menou, M.J. (1995).  Impact of electronic communication on development in Africa.  In: P. McConnell  (ed.), Making a difference. Measuring the impact of information on development: Proceedings of a workshop held in Ottawa, Canada, 10-12 July 1995 (pp. 71-85). Ottawa: IDRC.
(http://www.idrc.ca/books/focus/783/index.html).

Menou, M.J., (ed.) (1993)  Measuring the impact of information on development. Ottawa, IDRC. (http://www.idrc.ca/books/708.html).

Saracevic, T.; Kantor, P.B. (1997).  Studying the value of library and information services. Part 2. Methodology and taxonomy.  Journal of the American Society for Information Science, 48(6), 543-563.
 
 

Private Lives and Public Spaces

Gretchen Whitney
School of Information Sciences
University of Tennessee
804 Volunteer Blvd
Knoxville, TN 37909
gwhitney@utk.edu
Phone 423-974-7919
Fax 423-974-4967

Electronic discussion groups form virtual communities for mutual support and information sharing.  This is particularly true for patients coping with severe conditions such as Hodgkin's lymphoma: sufferers trade information about their personal lives to gain emotional support as well as to help others.  In the Spring of 1998, three patient discussion groups were identified and queried on information issues, and three months of the logs of their discussions were sampled and analyzed.

The groups were distinctive in the average number of messages per poster (1.47, 2.01, and 3.08).  The 80/20 rule applied in no case regarding posters to postings: discussion was spread relatively evenly among participants. The messages were further categorized by their function and content: whether the poster was asking a question or providing an answer, "cheers" messages of support, advice, and so forth.  The groups showed clear differences by these measures.  One used "cheers" messages five times as frequently (27% of messages) as another, for example, and the third was midway between the two. In one group, informational articles were posted twice as often (10% of messages) as in the second, and in the third there were none. All three, however, were clearly information-seeking communities: questions were asked in a third or more of messages.
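The per-poster tallying and "80/20" concentration check described in this kind of log analysis can be sketched minimally as follows; the log sample here is invented for illustration and is not the study's data:

```python
from collections import Counter

def poster_stats(posters):
    """posters: one poster ID per message. Returns (average messages
    per poster, share of all messages sent by the most active 20% of
    posters -- an '80/20' concentration check)."""
    counts = Counter(posters)
    total = sum(counts.values())
    average = total / len(counts)
    ranked = sorted(counts.values(), reverse=True)
    top_n = max(1, round(0.2 * len(ranked)))
    top_share = sum(ranked[:top_n]) / total
    return average, top_share

# Invented log sample: poster IDs, one entry per message
sample = ["ann", "bob", "ann", "cara", "dee", "bob", "ann", "eli", "fay", "dee"]
avg, share = poster_stats(sample)
```

If `top_share` stays well below 0.8, the 80/20 rule does not hold and discussion is relatively evenly spread, as the study reports for all three groups.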

While it was interesting to characterise these discourses, the more challenging results of the study were the ethical dilemmas posed in its conduct. The original approach was a survey of the online and offline information seeking practices of individuals actively participating in such groups. The reaction of the participants in the groups to the study prompted close consideration of the ethics of such work, and a shift in direction.

One of the central issues is the degree to which one treats public discussions of private and personal matters as a public record, available for study, or as a private discussion, to be left alone.  These were public discussions, readily available on the Web. However this is tempered by the fact that when messages are created, the poster perceives and acts as if the discussion is private.  Does the interest of the researcher take precedence, or that of the poster? If the interests of the poster are
foremost, critical research (that is, research that criticizes the behaviour of the participants), may be inhibited.  If those of the researcher are put first, the posters may feel violated and, as they have,
retaliate.  How does the  concept of informed consent apply in this electronic environment?

While the research concerns mirror some traditional environments, the electronic environment raises unique problems. For example, electronic communities such as this are different from a "real life" (RL) group in that they can speak and react as a single body.  Without the body language and other cues of a RL group which can indicate quietly to the group that the speaker, for example, is not to be taken seriously, one voice can sternly silence the entire group electronically.  What happens when the
interests of one poster and researcher collide?

The opinion of the posters has been clearly expressed:  archives of discussions are increasingly put behind passwords, and welcome messages are increasingly stating "No research may be conducted here."
 
 
WEDNESDAY, MAY 26, 1999
10:30 a.m.- noon

Designing Informative Information Retrieval Devices for
Undergraduates Researching a Social Science Term Paper
Charles Cole
Concordia University

Undergraduates researching and writing term papers for a social science course constitute an important proportion of information retrieval system (IRS) users. For these users, the objective of researchers following the cognitive approach is to link the factors that encourage IRS use, and the evaluation of the success or failure of that use, into a usefulness measure based on the term "information". Because of its innovative definitions of IR terms such as information, interaction and feedback (Cole, in press, JASIS), the cognitive approach creates an IR user-system interaction model which is powerful enough to attain these objectives.

Information, the key term, is defined by cognitive researchers as a modification of the cognitive  "state" of the user, and true interaction between user and system is measured by how much the user becomes informed during the interaction. This information focus is a major difference between cognitive oriented design and traditional IR design. Traditional IR systems are not designed to inform the user while the user is online interacting with the system; the system is designed to direct the user to potential information away from the system. Because of the disconnect between IR use and information, IR users often find traditional IR use discouraging rather than encouraging.

The purpose of this paper is to describe the cognitive approach to feedback in information retrieval, then apply its principles and definitions to the design of "enabling" devices for undergraduates
researching and writing a term paper for a social science course. The devices stimulate the undergraduate to become informed during the interaction with the system.

Information-based IRS Design

Because traditional IR systems direct the user to sources of potential information in books, articles or web sites that are "off-line," their designers are forced to evaluate the user's experience with the system by asking the user to judge whether or not the records in the system's output are relevant. Relevance is an imprecise criterion (Mizzaro, 1997). System-oriented relevance asks the user to evaluate the system's performance in matching inputted search terms with topically relevant records in the system. However, if the user becomes informed while reading through the system output, which is the case with a user-oriented relevance criterion like psychological relevance (Harter, 1992), output that topically matches the user's original query to the system is judged no longer relevant. A performance measure based on how well the IR system stimulates the user to become informed while interacting with the system avoids this negative constraint on "interaction," freeing system feedback to challenge the user's original formulation of the search parameters so that the eventual query to the system better reflects the user's real information need.

An Information Measurement Device

This paper proposes a different model of system evaluation based on how informed the user becomes during the interaction with the IR system. It does this by taking a quantitative snapshot of the user's cognitive state at the beginning of the search and then again at the end of the search
(Cole, 1997). The instrument measures the change. IR devices that stimulate such a change, that stimulate the user to become informed "on-line," are described and discussed.
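Cole's Shannon-based approach (Cole, 1997) suggests one way such a snapshot could be quantified. The following is only a minimal illustrative sketch (the concept terms and weights are invented, not the study's actual instrument): the change in the user's cognitive state is expressed as the difference in Shannon entropy between pre-search and post-search distributions of the concepts the user associates with the topic.

```python
import math

def entropy(weights):
    """Shannon entropy (in bits) of a distribution given as raw weights."""
    total = sum(weights)
    probs = [w / total for w in weights if w > 0]
    return -sum(p * math.log2(p) for p in probs)

# Hypothetical snapshots: weights the user assigns to concepts
# describing the topic before and after the search session.
before = {"economy": 1, "history": 1}
after = {"economy": 3, "history": 1, "trade policy": 2, "tariffs": 2}

# A positive change indicates a richer post-search conceptual state.
change = entropy(after.values()) - entropy(before.values())
print(f"information change: {change:.3f} bits")
```

A richer, more differentiated post-search vocabulary yields a positive change, which on this view is evidence that the user became informed during the interaction.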

Cole, C. (1997). Calculating the information content of an information process of a domain expert using Shannon's mathematical theory of communication: A preliminary analysis. Information Processing & Management, 33, 715-726.

Cole, C. (in press). The activity of understanding a problem during interaction with an "enabling" information retrieval system: Modeling information flow. Journal of the American Society for Information Science.

Harter, S.P. (1992). Psychological relevance and information science. Journal of the American Society for Information Science, 43, 602-615.

Mizzaro, S. (1997). Relevance: The whole history. Journal of the American Society for Information Science, 48, 810-832.
 
 

"Show Me the Pictures! or Evaluation Studies of Art
Image Retrieval"

S.K. Hastings and T.J. Russell
University of North Texas
School of Library and Information Sciences
POB 311068 * ISB205F
Denton, Texas 76203-1068
940-565-2445
940-565-3101 (FAX)
hastings@lis.unt.edu
trussell@unt.edu

Both authors will present the paper.

The current study continues work in the analysis of user-supplied index terms and investigates user evaluation in a broader context for retrieval of images on the Web.  Methods include online user surveys, log analysis, collection of user-supplied access terms, and correlation of user queries to retrieved sets of images.  This report of research in progress will look at new evaluation data from user interaction with the Bryant Caribbean Art Collection.  Preliminary research results show the need for user feedback, flexible indexing, and availability of images without subject categorization.  An overview of current image retrieval systems used by art museums will be presented, with a discussion of the changing role of information science in the study of the arts and humanities.  Variables for evaluation studies in the networked environment will be identified and methods for continued study discussed.  Research agendas and models for future study will be presented.
 
 

Networked information resource use as
planned behaviour in context: a reflection on
the role of definition and measurement in
quantitative user studies

Dr Jane E Klobas
The Graduate School of Management
The University of Western Australia
 

Researchers in several fields have proposed theories that may be used to explain discretionary use of
networked information resources (NIRs). Cost-benefit theories in library and information science
suggest that people act to minimize effort; they use information resources that are accessible and
easy to use. If we act on these theories, we will invest in the usability of web sites, intranets, and
other NIRs. By contrast, cost-benefit theories of information technology use suggest that people will
overcome barriers associated with difficult-to-use interfaces if the perceived usefulness of the
technology is high. If we act on these theories, we will accept some imperfections in an NIR's
human-computer interface in exchange for access to information resources that contribute to finding
useful information and to getting work done well. A quite different approach is taken in models of
communication system use which suggest that differences in use reflect differences in task,
technology, and individual characteristics such as confidence in use, as well as differences in the
social influences of others.

This paper explores reasons for such a range of possible explanations of networked information
resource use by examining the validity of quantitative user studies. This approach may seem
surprising at a time when many fields are adopting interpretive theories and qualitative research
methodologies to deal with the apparent failure of quantitative research, yet this review shows that
attention to sound quantitative methods can improve our understanding of factors associated with
networked information resource use.

Research on campus wide information system (CWIS) use in Australian universities, conducted
within the framework of an established social psychological model of human behavior (the theory of
planned behavior, Ajzen, 1991) is used to demonstrate how attention to theory building, modeling
and measurement can be used to synthesise the diverse findings of research conducted both within
and across different information fields. A "planned behaviour in context" model of networked
information resource use, which incorporates the social and technical context of use, potential users'
perceptions of NIR characteristics (including information quality and interface usability), and
potential users' attitudes to the outcomes of use is proposed as a model which may contribute to an
understanding of the "social informatics" of networked information resource use. This discussion will
be illustrated by models of CWIS use drawn from several theories of information use, analysis of
common concepts (such as perceived information quality and perceived ease of use) and their
definitions, items that have been used to measure factors in the models, and discussion of the metric
properties of items and scales that may be used to measure factors associated with use of CWIS
and other intranets and NIRs.
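The theory of planned behaviour models intention to act as a function of attitude toward the behaviour, subjective norm, and perceived behavioural control. As a toy sketch of the shape such a model takes (the weights here are invented for illustration, not regression estimates from the CWIS studies):

```python
def intention_to_use(attitude, subjective_norm, perceived_control,
                     w_att=0.5, w_norm=0.2, w_pbc=0.3):
    """Predict behavioural intention as a weighted sum of the three
    theory-of-planned-behaviour antecedents (each scored on, say, a
    1-7 scale). Weights are illustrative placeholders."""
    return (w_att * attitude
            + w_norm * subjective_norm
            + w_pbc * perceived_control)

# A user with a favourable attitude, moderate social pressure,
# and reasonable confidence in their ability to use the NIR:
print(intention_to_use(attitude=6, subjective_norm=4, perceived_control=5))
```

In the quantitative studies discussed here, the weights would be estimated from survey items measuring each construct, which is why the metric properties of those items and scales matter.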

Rather than dismissing quantitative research on the basis of a poor research tradition, the analysis
presented here shows that close attention to theory building, modeling of user behaviour, and
research methods (both qualitative and quantitative) can provide a good understanding of why
people use networked information resources. This understanding can, in turn, show how malleable
networked information resource characteristics, such as usability and information quality, are
associated with networked information resource use.

References:

Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision
Processes, 50, 179-211.

Allen, T. J. (1977). Managing the flow of technology: Technology transfer and the dissemination
of technological information within the R&D organization. Cambridge, MA: MIT Press.
 
 

Evaluation of a Visualization System for Information Retrieval at the Front End and the Back End
Gregory B. Newby
School of Information and Library Science
University of North Carolina at Chapel Hill
Chapel Hill, NC, 27599-3360
 gbnewby@ils.unc.edu

Visualization systems for information retrieval systems have qualities that differ from many other
visualization systems (such as wayfinding or mapping aids, tutorials, and graphical interfaces to
operating systems). The most obvious point of difference is the lack of a direct visual metaphor for
presenting most text. Visual IR system designers need to make decisions about how to present visual
representations of documents, terms, queries, and other components of IR. These front-end
decisions may be evaluated for usability, intuitiveness, learnability, and other desirable factors in a
similar fashion to the evaluation of other types of visualization systems.

A less obvious point of difference between visual IR systems and other types of visual systems is that
the underlying organization, representation and storage scheme for a document collection, and the
means for querying or otherwise accessing it, can differ vastly. IR approaches at the back-end of
visualization systems may range from Boolean keywords, to vector spaces, to probabilistic systems.
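As an illustration of one such back-end family, a minimal vector-space ranking can be sketched as term-frequency vectors compared by cosine similarity (the documents and query below are invented; this is not YAVI's actual representation model):

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two term-frequency vectors (Counters)."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# A toy two-document collection and a query.
docs = {
    "d1": Counter("visual interface for retrieval".split()),
    "d2": Counter("probabilistic retrieval model".split()),
}
query = Counter("visual retrieval".split())

# Rank documents by similarity to the query.
ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
print(ranked)
```

A visual front end would typically project these document and query vectors into two or three dimensions for display, which is why the choice of back-end representation constrains what the interface can show.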

This contribution will review the issues of evaluation for visual IR systems, and discuss the different
factors that may come into play for evaluation. Comparisons to evaluation methods for other types of
visualization systems will be made.

Next, the methodology and results of an evaluation of a prototype visual IR system will be presented.
The new system, called YAVI, is based on an underlying representation model similar to the Vector
Space Model, but with the benefit of being better suited for a visual interface. Evaluation of YAVI
included three major aspects. First is evaluation of the underlying back-end retrieval engine, as
accomplished through participation in the international TREC experiments. Second is a comparison
of this back-end method for document representation to a psychometric study of human perceptions
of key concepts from a small document collection. Third is a user-based investigation into the
usability, utility, and learnability of the YAVI visual interface.
 
 

Inside the Processes of User-Web Interactions: Cognitive, Affective, and Physical Behaviors; Individual Differences and System Factors
Peiling Wang
University of Tennessee Knoxville
peilingw@utk.edu

This presentation is based on an empirical study of users' interaction with World Wide Web resources in finding factual information. The purpose of the study is twofold: (1) to bring new understanding to Web searching from a user-oriented view; (2) to examine the Web by analyzing user behaviors during the interactions and the problems encountered with elements of the Web and its interfaces. The results of the study have implications for further studies of Web users as well as for the improvement and design of the Web and its interfaces. The user's interaction with the Web is seen as a communication process in which a user initiates and carries on a series of dialogues with a highly complex networked information system, mediated by an interface. This interaction is a cognitive process that is under the control of the user.
In order to study user-Web interaction holistically, the study first proposes a user-oriented multidimensional model consisting of three main components: (1) User, (2) Interface, and (3) the Web. In each component, several elements are under study (Figure 1). The User component is the focus of the study and is examined in terms of users' situations, cognitive behaviors, affective behaviors, and physical behaviors. The second component, Interface, is investigated by relating user behaviors to individual elements of interfaces. Interfaces consist of elements such as access methods, navigational tools, messages, I/O devices, etc. Behind the screen is the accessible Web, composed of individual objects and their spaces (a Web space is a partitioned portion of the Web), organizational schemes, and metadata that describe and provide access to objects and Web spaces.
Second, the design of the study applies a process-tracing technique to capture users' entire interaction, with timestamps, in three formats: textual log files on visited sites and pages, audio data on concurrent verbalization during the interaction, and video data on continuous screens and mouse and keyboard actions. Besides the process data, pre-search situational data and post-search evaluation data were also gathered to identify each participant's computer and information experience, cognitive style tendency, affective states, and perception of the search. Participants were twenty-four volunteers from an information science graduate program at a state university. They were given two imposed factual questions to search for answers. Situational data on each participant were collected using self-report questionnaires and standard psychological tests. A self-report post-search evaluation form was given to each participant at the termination of the interaction. The form asked the participant to rate confidence in the results and allowed written comments on the interaction and the system. The three sets of collected process data were transcribed and integrated into a single textual format for content analysis.
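The integration step can be pictured as a timestamp-ordered merge of the three streams. The events below are invented for illustration; this is only a sketch of the idea, not the study's actual transcription procedure:

```python
import heapq

# Hypothetical timestamped events from the three process-data streams.
log_events   = [(5.0, "log",   "visited library catalog page"),
                (12.0, "log",  "opened results page")]
audio_events = [(4.0, "audio", '"I will try the catalog first"')]
video_events = [(4.5, "video", "mouse click on search box"),
                (11.0, "video", "typing query")]

# Merge the pre-sorted streams by timestamp into a single transcript
# suitable for line-by-line content analysis.
transcript = list(heapq.merge(log_events, audio_events, video_events))
for t, stream, event in transcript:
    print(f"{t:6.1f}s [{stream}] {event}")
```

Interleaving the three formats this way lets a coder see what the participant said, did, and retrieved at each moment of the interaction.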
Reported in this presentation is the second part of the study: the results of in-depth analyses of the recorded processes, focusing on (1) what happened during the interactions and (2) how these behaviors relate to situational variables, elements of the Interface, and elements of the Web. First, user behaviors are categorized into three classes: cognitive, affective, and physical. In each category, variables are identified. In the cognitive category, for example, behaviors include pre-terminal question analysis, choice of access methods, selection of Web spaces, query formulation, evaluation of results, paths and moves, problem-solving strategies, presentation of results, etc. The affective behaviors include attitudes toward the Web, ease with the interface, reaction to results, confidence in strategies, willingness to explore, tolerance of ambiguity, feelings at specific points, contentment after termination of the search, etc. The physical behaviors in the Web interaction environment are the sensorimotor control involved in using and responding to input/output devices; variables in this category include information processing (of the presented information), hand-eye coordination, hand-mouth coordination, keystrokes, mouse control (positioning and clicking), etc. These categories are not mutually exclusive in nature; an observed phenomenon may belong to more than one variable. Consider, for example, a typo in a query statement. One of several scenarios may be true. If the person spelled the term aloud correctly but made a typo on the keyboard, the error clearly belongs to physical behavior: a keystroke error due to hand-mouth coordination. If he spelled the term aloud incorrectly and typed it exactly as he spelled it, the error is cognitive rather than physical. If he misspelled the term aloud but typed a misspelling different from the one he spoke, the typo is analyzed as both a cognitive and a physical error.
If, in addition, a tone in his verbalization indicates an affective state such as frustration, that will be coded accordingly in the affective category. Overlap in categorization of an observed unit is therefore a necessary measure to decompose a single phenomenon. The observed cognitive, affective, and physical behaviors will be further examined to reveal possible relationships to participants' situational variables or system variables. The presentation of the findings addresses: (1) the nature of the roles that situational variables such as computer and information experience, cognitive style tendency, and affective states play in user-Web interaction; (2) which elements of the interfaces and the Web (see Figure 1) contributed to the problems that participants encountered. The figure, in PDF format, is linked to this file.
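The typo-coding decision just described can be restated as a small decision procedure. The function and its parameter names are hypothetical, written only to summarize the scheme, not the study's actual coding instrument:

```python
def code_typo(intended, spoken, typed):
    """Code a query typo as cognitive and/or physical.

    intended: the correct spelling of the term;
    spoken:   what the participant verbalized while typing;
    typed:    what actually appeared in the query.
    """
    codes = set()
    if spoken == intended and typed != intended:
        codes.add("physical")        # spoken correctly, typo on keyboard
    elif spoken != intended and typed == spoken:
        codes.add("cognitive")       # misspelling carried faithfully to keys
    elif spoken != intended and typed != spoken:
        codes.update({"cognitive", "physical"})  # misspoken AND mistyped
    return codes

print(code_typo("retrieval", "retrieval", "retreival"))
```

An affective code (e.g., a frustrated tone) would be layered on top of whichever of these codes apply, which is why overlapping categorization is needed.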

Figure 1: (http://www.sis.utk.edu/peilingw/asis99m.ppt)

 
 
