ASIS Midyear Meeting, May 1999
 

PLENARY SESSIONS

ASIS Mid-Year Conference: Panel Discussion:
"The Importance of Evaluating Networked Information Services and Resources”
 
Peter Brophy

Peter Brophy is Professor of Information Management and Director of the Centre for Research in Library & Information Management at the Manchester Metropolitan University, U.K. Prior to moving to Manchester a year ago, he spent 15 years as a University Librarian, latterly running a converged library and IT service. He is currently President of the UK Institute of Information Scientists.

THE ‘YES’ FACTOR: DELIVERING VALUE TO CUSTOMERS OF INFORMATION SERVICES

Peter Brophy
Director
Centre for Research in Library & Information Management
Manchester Metropolitan University
Geoffrey Manton Building
Rosamond Street West
Manchester M15 6LL
UK
Email: p.brophy@mmu.ac.uk

I’m never sure whether it’s apocryphal, but there’s a story often told of how the first Japanese automobiles imported to the West Coast of the USA turned out to be incapable of getting up the hills of San Francisco. Whether there really was an oversight of this magnitude or not, the story brings back memories of the days when any item stamped ‘Made in Japan’ was sure to fall apart within days of its arrival. How things have changed. Now Lexus leads the customer satisfaction league tables in the USA, while Subaru has just won the equivalent status in the UK.

I have long been intrigued by how these changes have taken place and, more appositely for our conference theme, how the principles involved can be applied to information services and information systems. The quality management movement is very well known on both sides of the Atlantic (though, interestingly, less so throughout continental Europe than in the UK). Its basic premises are taught in the first year of every undergraduate business studies course: quality is conformance to the customer’s requirements; quality is fitness for the customer’s purposes. However, the concepts are less well embedded in the information professions, and there are those who argue that they are inappropriate for the public sector in general and universities in particular. Ivory towers should not be sullied by such things!

But this is a world of global competition, and universities are not immune. Companies like Amazon.com have shown us that we have real competition out there – I know of at least one research centre, not my own, which recently took a formal decision to stop ordering books via the university library – whether for stock or via inter-library loan – and to order everything from Amazon, with virtually guaranteed 48-hour delivery. The speed and convenience of delivery outweigh the additional direct costs. So information and library services are going to have to compete seriously for our business.

There are two particular approaches that I have found useful in trying to develop the linkages between the quality movement and information & library services. The first is the Kano map; the second is Garvin’s quality attributes.

First it’s important to acknowledge that what Garvin calls ‘Primary Operating Features’ are at the core of quality services. When we evaluate any kind of service, the first thing we’re looking for is confirmation that it meets the most basic requirements. Returning to the automobile example, we expect our new purchase to have four wheels, an engine and, thoughtfully provided, a fuel filler pipe and a steering wheel! If any of these are missing, the whole purchase falls down at first base. So in evaluating information services, there are primary requirements that simply have to be met. The Internet search engine, for example, that simply fails ever to deliver meaningful results (maybe it only indexes web pages from Central Nowhere High School) doesn’t make the grade. Primary Operating Features are the bottom line, and hugely important.

In the UK we now have a national system of higher education quality assurance which has learning resources, including libraries and computing services, as one of six core foci. Most of its concerns are about basics: it demands evidence that learning resources are integrated into academic programmes, that users are satisfied, that quantitative measures of delivery are at acceptable levels and that the services are efficient and economic. The q.a. system works on a four-point scale for each element: 1 means you’re dead in the water; 2 means that vital life signs have been detected; 3 is OK; but the Holy Grail is grade 4 – that means you are excellent. Experience shows that while the basics are very important, and you won’t get to 3 without them, getting to grade 4 requires something else.

The grading is done by evaluators who know all the tricks of the trade. They talk to you about your services, about their strengths and weaknesses, and they make notes. Then they take away reports, files and statistical summaries to read. They talk to University management. They talk to faculty. Then they go and talk to students. And all the time they’re asking the same questions: what is that service like to use? Does it fit your needs? Is it effective? Is it efficient? Is it integrated into the university’s business? Then they come back and talk to you – the manager – again. But none of this gets you to the magic grade ‘4’. To get there you need something more; you need the student who – preferably without being bribed – will tell the evaluator that this is the most fantastic service ever invented, that the staff are great (they’ll die before they let an unsatisfied customer out of the door), that the facilities are great, and that she never wants to go home at night because she can’t tear herself away from the Library. She has experienced the ‘Yes!’ factor.

To compete in the global marketplace we need that kind of reaction from our users. How do we get there?

Kano helps by pointing out the difference in effect on the user between expected and unexpected performance. Again using a transport analogy, if I board a train (as I do daily to commute between my home town of Lancaster and my work in Manchester) and find there are no seats (I don’t mean that they’re full, I mean there is no seating!), then, no matter that all the normal performance criteria are fulfilled (the train’s on time, it’s clean, the staff are friendly, and so on), I will regard it as a very poor quality service and, like as not, tomorrow I’ll drive – a basic or ‘Primary Operating Feature’ has been missed. The interesting thing about this is that the presence of seating is actually neutral – I take it for granted, so that aspect of the service only has the ability to induce dissatisfaction.

The opposite effect can be produced by unanticipated features. So, I board the same train tomorrow and this time it has seats. But then, to my amazement, a steward comes round with fresh orange juice, breakfast cereals, fresh coffee, a full fried English breakfast, the day’s newspaper (ironed for me!). I didn’t expect any of this, so if it hadn’t been there it wouldn’t have affected my general satisfaction with the service. But being there, it has shifted my perception from ‘this is an OK train company’ to ‘this is the best’. And tomorrow? I’m first in the queue at the station! Yes!
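To make Kano’s asymmetry concrete, here is a minimal sketch in Python. The category names, example attributes and numeric effects are invented for illustration only; alongside the ‘basics’ and the unexpected delighters described above, a middle ‘performance’ category is included for attributes where more is simply better.

    # A rough codification of the Kano asymmetry described above. The three
    # categories and the numeric effects are illustrative assumptions only.
    EFFECT = {
        # (category, attribute present?) -> effect on perceived satisfaction
        ("basic", True): 0,         # taken for granted: seats on the train
        ("basic", False): -2,       # a missing basic wrecks the whole service
        ("performance", True): +1,  # "more is better": punctuality, cleanliness
        ("performance", False): -1,
        ("delighter", True): +2,    # the ironed newspaper: the 'Yes!' factor
        ("delighter", False): 0,    # nobody misses what they never expected
    }

    def satisfaction(attributes):
        """Sum the (crude) effects of a list of (category, present) pairs."""
        return sum(EFFECT[(category, present)] for category, present in attributes)

    # The same train with and without the surprise breakfast:
    ordinary   = [("basic", True), ("performance", True), ("delighter", False)]
    delightful = [("basic", True), ("performance", True), ("delighter", True)]
    print(satisfaction(ordinary), satisfaction(delightful))   # prints: 1 3

The point of the asymmetry is that missing basics can only hurt, while delighters can only help – which is why an evaluation that stops at the basics will never find the ‘Yes!’ factor.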

Now, how do we apply all this to information services? There’s a long history in ILS of treating satisfaction as an individual, mental construct (e.g. Applegate, 1993). Bruce (1998) addressed this approach in a study which used a technique called magnitude estimation to try to measure satisfaction defined in relation to perception. There’s also been some debate in the UK (some for, some against) about the SERVQUAL methodology, which tries to measure the gap between expectation and experience – see Brooks, Revill and Shelton (1997). However, this still leaves the question of what aspects or attributes of services contribute to the perception not just of satisfaction but of excellence.
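For readers who have not met SERVQUAL, the arithmetic at its core is simple: each service attribute is rated twice, once for what the customer expected and once for what she perceives she actually received, and the ‘gap’ is the difference, averaged within each dimension. The sketch below is purely illustrative – the dimension names, ratings and the 1-7 scale are assumptions standing in for the full paired-statement instrument:

    # A minimal sketch of a SERVQUAL-style gap calculation. The dimensions and
    # scores below are invented; the published instrument pairs expectation and
    # perception statements grouped into a small number of dimensions.
    from statistics import mean

    responses = {
        # dimension: (expectation, perception) pairs from one respondent, 1-7 scale
        "reliability":    [(7, 5), (6, 6), (7, 4)],
        "responsiveness": [(6, 6), (5, 6)],
        "tangibles":      [(5, 5), (4, 6)],
    }

    def gap_scores(responses):
        """Perception minus expectation, averaged per dimension.
        Negative gaps: the service fell short of expectations; positive: it exceeded them."""
        return {dim: mean(p - e for e, p in pairs) for dim, pairs in responses.items()}

    print(gap_scores(responses))
    # prints roughly {'reliability': -1.67, 'responsiveness': 0.5, 'tangibles': 1.0}

Gap scores of this kind tell us whether a service has fallen short of, or exceeded, expectations, but not which attributes of the service are doing the work.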

This is where Garvin’s attributes start to become helpful. Garvin identified eight and, with some changes of emphasis and one significant change of concept, they seem to apply pretty well to ILS. They are listed below; a rough sketch of how they might be combined into a scoring rubric follows the list.

1. Performance
As we have seen, these are the primary operating features of the product or service.

2. Features
These are the secondary operating attributes, which add to a product or service in the customer’s eyes but are not essential to it. For an automobile they might include a free sun roof or alloy wheels; for an information service they might range from a link to document delivery to relevant advertising. I remember being impressed by my library’s inter-library loans service when, after years of drumming into us the notion that loans had to be collected in person, it suddenly started posting them out. It is not always easy to distinguish ‘performance’ characteristics from ‘features’, especially as what is essential to one customer may be an optional extra to another, and there is always a tendency for ‘features’ to become ‘performance’ over time – air conditioning in automobiles being a good example (and now ILL document delivery – I’d be really annoyed to be told to turn up in person again). Nevertheless there is a valid distinction to be made, and features hold much of the key to the ‘Yes!’ factor.

3. Reliability
Customers place high value on being able to rely on a product or service. For products this usually means that they perform as expected (or better). For example, a key issue for an automobile buyer is the probability of avoiding breakdowns or even minor malfunctions, so reliability is often at the top of the list of issues when automobile league tables are compiled. For information services, a major issue is usually availability of the service. However, a link to more traditional evaluative approaches occurs through the concept of ‘correct service’, e.g. correct answers given to queries, links working correctly. There is some evidence that because users are not persistent – they try something, and if it doesn’t work they go away – broken links and the like can be a massive turn-off.

4. Conformance
The question here is whether the product or service meets the agreed standard. These may be national or international standards or locally determined service standards. The standards themselves, however they are devised, must of course relate to customer requirements. For us as information professionals it will be important to determine whether they address interoperability issues and how they utilise emerging standards such as XML, RDF, Dublin Core, Z39.50 and so on. But for users the issue is whether meaningful service standards for information services actually exist and are being met. Do your customers know what your guaranteed repair or replacement time is when their PC blows up? At the weekend? Do they know the guaranteed delivery time for books ordered via the Library OPAC? And how does it compare with Amazon’s service standards?

5. Currency
Garvin uses the term ‘durability’, defined as ‘the amount of use the product will provide before it deteriorates to the point where replacement or discard is preferable to repair’. For most users of information services, however, the issues will centre on the currency of the information, i.e. on how up to date the information provided is when it is retrieved. For this reason we prefer the term ‘currency’ to ‘durability’ to describe the application of this aspect to information services. There is a ‘Yes!’ element to this, most obviously seen in ‘push’ services that deliver to the desktop.

6. Serviceability
When things go wrong, how easy will it be to put them right? How quickly can they be repaired? How much inconvenience will be caused to the customer, and how much cost? This last will include not just the cost of the repair itself, but the inconvenience and consequential losses the customer faces. So Ford or GM may score highly on serviceability because of the number of repair facilities they have available, and this may count with customers more than some other criteria. As well as obvious issues like PC repair or replacement services, the ‘serviceability’ issues occur in document delivery services if the wrong item is supplied, no matter whose ‘fault’ it may be. The heading of ‘serviceability’ also includes such factors as whether the customer is treated with courtesy when things go wrong: an issue that may be of central concern for information service evaluation.

7. Aesthetics and Image
While this is a highly subjective area, it is of prime importance to customers. Last month I gave a paper in Sweden entitled “Why supermarkets put fresh fruit and vegetables near the entrance”. One of my points was the contrast between information and library services and successful service organisations in the private sector. The supermarket uses part of its product range to project an image of clean, healthy living; my own university library greets its customers with a man in uniform wearing a peaked cap and demanding your ID – some image! In electronic environments this attribute brings in the whole debate about what constitutes good design. In a web environment, the design of the home page may be the basis for user selection of services, and this may have little to do with actual functionality. You may have a great information service behind that home page, but do the users ever find it?

8. Perceived Quality
This is one of the most interesting of the attributes, because it recognises that all customers make their judgements on incomplete information. They do not carry out detailed surveys of ‘hit rates’ or examine rival systems’ performance in retrieving a systematic sample of records. Most users do not read the service’s mission statement or service standards, and they do their best to by-pass the instructions pages. If it isn’t intuitive, you’ve lost the battle – back to Amazon again. Customers will quickly come to a judgement about the service based on its reputation among their colleagues and acquaintances, their preconceptions, and their instant reactions to it.
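As an illustration of how the eight attributes might be operationalised – and only an illustration: the weights, scores and the example service below are invented rather than taken from Brophy (1998) – a simple rubric can combine per-attribute ratings into a single figure for comparison across services or over time:

    # A minimal scoring rubric built on the eight attributes above. The weights
    # and scores are illustrative assumptions; in practice each attribute would
    # be unpacked into measurable indicators agreed with the service's stakeholders.
    ATTRIBUTES = [
        "performance", "features", "reliability", "conformance",
        "currency", "serviceability", "aesthetics_and_image", "perceived_quality",
    ]

    def weighted_score(scores, weights):
        """Combine per-attribute scores (0-4, echoing the UK q.a. scale) into one figure."""
        total_weight = sum(weights[a] for a in ATTRIBUTES)
        return sum(scores[a] * weights[a] for a in ATTRIBUTES) / total_weight

    # A hypothetical OPAC-plus-document-delivery service, rated by one evaluator:
    scores  = {"performance": 3, "features": 4, "reliability": 2, "conformance": 3,
               "currency": 3, "serviceability": 2, "aesthetics_and_image": 3,
               "perceived_quality": 4}
    weights = {a: 1.0 for a in ATTRIBUTES}   # equal weighting as a starting assumption
    print(round(weighted_score(scores, weights), 2))   # prints: 3.0

Such a figure is only as good as the judgements behind it, of course; its value lies in forcing the evaluator to consider all eight attributes rather than the basics alone.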

One of the basic principles of these approaches (and it really comes back to those basic definitions) is that they push us into a user-centred model of evaluation. It’s interesting that some of the emerging models of library/information services in networked environments have shifted away from process models towards broker models that are much more user centred (see Owen and Wiercx, 1996 for example). The user-centred models are much more helpful when personal preferences and requirements are factored in – so, for example, usability to a blind person may mean something quite different to usability to a sighted person. Incidentally, I reported on an initial attempt to apply the Garvin approach to ILS at the VALA conference last year (Brophy, 1998).

The approach also maps well to many of the quality assurance approaches which government (in the UK) is sponsoring – for example in public libraries the talk is now of ‘Best Value’. In European business circles, the talk is of ‘business excellence’ (probably ten years after the USA!) and the European Foundation for Quality Management has re-titled its annual award as ‘The European Award for Business Excellence’. One aspect of this that has become important recently is its emphasis on the satisfaction of all the stakeholders.

There is a down-side to user-centred evaluation, and QM approaches are vulnerable to it. Basically, a lot of customers can be ‘wrongly satisfied’. Some UK studies (Head & Marcella, 1993, for example) have demonstrated that even when a majority of reference question answers were erroneous, customers still went away satisfied. The danger is that expert knowledge will be ignored in favour of the ‘quick fix’ (or even ‘any fix’) solution, especially where customers are non-expert – such as undergraduates in our universities. Bruce’s finding that “Australian academic users generally have a high expectation of success and are satisfied with information seeking, regardless of how frequently they use the Internet or whether they have received any formal training” could indicate that the ‘wrongly satisfied’ syndrome is alive and well. This also brings in the issue of what experts are there for (for example, one of the reasons we employ professional librarians in academic libraries is because they have the expertise to evaluate sources, so it’s a bit odd if we don’t ask their views when we’re evaluating things).

So maybe what is needed is the application of the modified Kano/Garvin/SERVQUAL and similar approaches in a stakeholder setting, which acknowledges that many different individuals and groups have a valid and valuable perspective on quality, and hence on evaluation and hence on value. Building cars that can get up the hills of San Francisco was yesterday’s problem: today’s is to deliver networked information services which will excite customers to say ‘Yes!’ – and that will distinguish not the winners in the global networked service environment, but the survivors.
 

Applegate, R. Models of user satisfaction: understanding false positives. Reference Quarterly 32 (1993), pp. 525-539.

Brooks, P., Revill, D. and Shelton, T. The development of a scale to measure the quality of an academic library from the perspective of its users. In Quality Management & Benchmarking in the Information Sector (ed. J. Brockman). London: Bowker-Saur, 1997, pp. 263-304.

Brophy, P. It may be electronic but is it any good? Measuring the performance of electronic services. In Robots to Knowbots: the wider automation agenda: Proceedings of the Victorian Association for Library Automation 9th Biennial Conference, January 28-30 1998. Melbourne: VALA, 1998, pp. 217-230.

Bruce, H. User satisfaction with information seeking on the Internet. JASIS 49(6), 1998, pp. 541-556.

Garvin, D.A. Competing on the eight dimensions of quality. Harvard Business Review, 1987, pp. 101-109. Also Garvin, D.A. Managing Quality.

Head, M.C. and Marcella, R.A. Testing question: the quality of reference services in Scottish public libraries. Library Review 42(6), 1993.

Kano’s model is described at http://www.servqual.com/kano.html

Owen, J.S.M. and Wiercx, A. Knowledge Models for Networked Library Services: Final Report (Report PROLIB/KMS 10119). Luxembourg: Office for Official Publications of the European Communities, January 1996.
 

Ann Bishop

"SOCIALLY GROUNDED EVALUATION OF NETWORKED INORMATION SERVICES AND RESOURSES"

Ann Bishop
Assistant Professor
Graduate School of Library and Information Science
University of Illinois
501 E. Daniel St.
Champaign, IL 61820
abishop@uiuc.edu

In advocating for socially grounded evaluation, I have two very simple principles in mind:

• Evaluation of networked information services should encompass an assessment of social value and impact;

• Evaluation should encompass the study of social practices inherent in networked information service use.

Societal impact and the study of social practice are related in that it is impossible to evaluate social consequences without a close investigation of social practices surrounding use.  I suspect that these two principles can be readily accepted in principle by just about anyone involved in planning or implementing the  evaluation of networked information services.  The challenge comes in the degree to which they are actually enacted in each and every evaluation over which we have any control as researchers, system implementers, funders, or policymakers.  Too often, we focus on quantitative measures of system performance or extent of use.  Too often, we assume that social impacts are beyond our control or antithetical to the conduct of objective, scientific research.

I wish to argue, specifically, for greater attention in the evaluation of networked information services to the needs of, and outcomes for, marginalized members of society.

As professionals, we have a responsibility to critically examine the effects of our activities on society and to strive for positive social outcomes.  Virtually every profession related to the growth and exchange of knowledge has its strong advocates for social responsibility.  For example, Jay Rosen (1995) makes a case for “public journalism,” arguing that:

... Journalists should do what they can to support public life.  The press should help citizens participate and take them seriously when they do.  It should nourish or create the sort of public talk that might get us somewhere, what some of us would call a deliberative dialogue.  The press should change its focus on the public world so that citizens aren’t reduced to spectators in a drama dominated by professionals and technicians.

One example of the practice of public journalism cited by Rosen is a local newspaper’s attempt to ground its campaign coverage in a list of issues identified by residents as important.  Campaign speeches were mapped against this citizens’ agenda so that it was easy for residents to tell what was said about their concerns.

The profession of librarianship, of course, is fundamentally grounded in a sense of social responsibility, expressed prominently in advocacy for intellectual freedom and broad, equitable access to information resources.  And many information professionals are affiliated with the Computer Professionals for Social Responsibility, whose mission statement [http://www.cpsr.org/cpsr/about-cpsr.html#mission] includes assessment as a fundamental need, grounded in concerns about the impact of information systems on society:

As technical experts, CPSR members provide the public and policymakers with realistic assessments of the power, promise, and limitations of computer technology. As concerned citizens, we direct public attention to critical choices concerning the applications of computing and how those choices affect society.

Every project we undertake is based on five principles:

• We foster and support public discussion of, and public responsibility for decisions involving the use of computers in systems critical to society.

• We work to dispel popular myths about the infallibility of technological systems.

• We challenge the assumption that technology alone can solve political and social problems.

• We critically examine social and technical issues within the computer profession, both nationally and internationally.

• We encourage the use of information technology to improve the quality of life.

Chip Bruce (Bruce and Hogan, 1998) presents a particularly cogent statement regarding the nexus of social practice and social outcomes in the evaluation of information technology.  He argues that researchers need to undertake situated studies that closely examine how technologies are realized in given settings and how ideology operates within situations where technology and humans interact.  It is only by exploring how technology becomes so embedded in the living process that, for some users, it “disappears” (i.e., is so easy and natural to use that its use becomes automatic) that we can understand how technologies either “promote or forestall equality.” The users of a system who are most like that system’s designers are, in other words, the most likely to find the system natural and easy to use. Bruce and Hogan use the analogy of climbing up stairs: for many people, this is an automatic process, accomplished without thinking. But for someone using a wheelchair, a staircase is certainly not an invisible tool. They stress the need to critically examine what has become commonplace, and for whom.  Information system access and use is always “partial, restricted, and stratified,” but we need to pay closer attention to whom the technology marks as “full, marginal, or nonparticipating” users.

In the research of Brenda Dervin, Rob Kling, Leigh Star, Bonnie Nardi and Vicky O’Day, Phil Agre, Chuck McClure and John Bertot, and Milton Mueller and Jorge Schement (to name just a few), we can point to a spectrum of research related to computer-based information services that embodies some blend of critical attention to social practice and social value in design and evaluation.

What can we do to improve the social grounding of evaluations of networked information services?

First, we need to develop and promote evaluation approaches that address both social practice and social impact.  I really like the use of the word “services” in our panel title.  Unlike the phrase “networked information systems,” it foregrounds the use of information technology to achieve some human goal.  Our evaluation approaches should also emphasize use and impact in relation to people’s goals.  And they should specifically seek to identify and address the ways in which networked information service design choices cause certain people to become “full, marginal, or nonparticipating” users (Bruce and Hogan).  We must do a better job of gaining the participation of traditionally underserved users in both design and evaluation -- if you wait until the evaluation stage, it’s too late.

One promising approach is the creation of use scenarios to guide design and evaluation.  In contrast to top-down, technology-driven, abstract and formal approaches, scenarios provide concrete descriptions of use, focus on particular instances, elicit envisioned outcomes, and are open-ended, rough, and colloquial (Carroll).  Through scenarios, evaluation becomes a process of identifying enablers and barriers to use.  The trick here is to predict who will be a marginalized or nonparticipating user and make sure that representatives of those groups are given equal voice in contributing their scenarios.  For example, undergraduates with little computer experience should participate in the development of networked information services for academic libraries.  Low-income African Americans should participate in the development of networked community information services.  Most importantly, the scenarios contributed by potential users should serve as the basis of evaluation of outcomes and impact.  Evaluators should assess how well the service addressed the needs, goals, and barriers articulated in the use scenarios, and the scenarios created by marginalized groups should be given equal weight in the  evaluation stage.

Close studies of social practice are becoming increasingly common in research on networked information services.  Social informatics (Kling, 1999) and information ecologies (Nardi and O’Day, 1999) are becoming buzzwords among academic information technology researchers.  Here again, the crucial question is whether marginalized users are included in the mix of people studied and whether system success is gauged against their needs, interests and capabilities.

Participatory design is an approach that insists that the goals of intended users drive technology and policy choices.  But, in keeping with my preferred emphasis on services and social equity, I think that participatory action research may be even more fruitful as an approach for the development and assessment of networked information services.  Participatory action research demands that producing relevant outcomes for marginalized members of society takes precedence over the needs and interests of academic researchers.  It seeks to enhance the problem-solving capacities of local community members by actively involving them in every phase of research--from setting the problem to deciding how project outcomes will be assessed.  In this approach, the intended users of an information service participate as researchers, not subjects (Reardon):

Participatory action research focuses on the information and analytical needs of society’s most economically, politically, and socially marginalized groups and communities, and pursues research on issues determined by the leaders of these groups.  It actively involves local residents as co-investigators on an equal basis with university-trained scholars in each step of the research process, and is expected to follow a nonlinear course throughout the investigation as the problem being studied is ‘reframed’ to accommodate new knowledge that emerges.  [. . .]  Participatory action researchers intentionally promote social learning processes that can develop the organizational, analytical, and communication skills of local leaders and their community-based organizations.  They place a premium on the discovery of knowledge that can lead to immediate improvements in local conditions.

Another approach that could be applied to the evaluation of networked information services is “community based research.” While community based research does not demand that community members participate as researchers, it is driven by community groups that are eager to know the research results and to use them in practical efforts to achieve constructive social change (Loka Institute, 1998).

As far as I know, participatory action research and community based research have not entered the mainstream of LIS evaluation.  I think they deserve serious consideration as a means to reframe our thinking about how to evaluate the use and impact of networked information services among marginalized audiences.

In my view, we should include socially-grounded approaches--like use scenarios, close qualitative studies of the social practices of use, community based research, and participatory action research--in all LIS system evaluation curricula.  I also think we need to work harder to guarantee that socially-grounded evaluation is actually included as a part of networked information service R&D projects.  Too often, evaluation of use and social outcomes is treated as an add-on that can easily be dispensed with if time or resources are scarce.  The study of social practice tends to take precedence over the study of social impacts.  And marginalized society members are marginalized as system users -- since they represent neither a major commercial market nor the “typical” user, the assumption is that there is no need to include them in design and evaluation studies.

My remarks really center on developing new ways of thinking about knowledge and how it is created and used: by researchers, networked information service designers, librarians, and marginalized members of society.  Donald Schon argues that universities should admit new forms of scholarship, like action research, and a new “epistemology of reflective practice” that must “account for and legitimize not only the use of knowledge produced in the academy, but the practitioner’s generation of actionable knowledge [...] that can be carried over, by reflective transfer, to new practice situations” (1995).  In efforts to promote the reshaping of universities as institutions more fully engaged in society (Kellogg Commission on the Future of State and Land-Grant Universities, 1999), we can look to “service learning” as a way to provide LIS students with experience related to socially-grounded evaluation and reflective practice.

Service learning is more than a new term for encouraging student participation in volunteerism, community service, or other forms of experiential learning.  It is distinguished by six key elements. The student: (1) provides some meaningful service (work), that (2) meets a need or goal that is (3) defined by community members.  The service provided by the student (4) flows from and into course objectives, (5) is integrated into the course by means of assignments that require some form of reflection on the service in light of course objectives, and (6) the assignment is evaluated accordingly (Weigert, 1998).  Examples of service learning related to the development and use of networked information services include the citizen GIS projects that students participating in the East St. Louis Action Research Project undertake (http://www.imlab.uiuc.edu/eslarp), the collaboration of students with “community information officers” in a course offered by Joan Durrance and Paul Resnick at the University of Michigan (http://www.si.umich.edu/Classes/697/), and student involvement in training low-income neighborhood residents in information system use in a course I teach at the University of Illinois (http://www.lis.uiuc.edu/%7Ebishop/450CIsyl.html).

Resources

Bruce, B. C. and Hogan, M. P. (1998). The disappearance of technology: Toward an ecological model of literacy. In D. Reinking, M. McKenna, L. Labbo, & R. Kieffer (Eds.), Handbook of literacy and technology: Transformations in a post-typographic world (pp. 269-281). Hillsdale, NJ: Erlbaum. Available: http://www.ed.uiuc.edu/facstaff/chip/Publications/Disappearance.html.

Carroll, J. M. (1995). Introduction: The scenario perspective on system development. In John M. Carroll, (Ed.), Scenario-based design: Envisioning work and technology in system development (pp. 1-17). New York: Wiley.

Kellogg Commission on the Future of State and Land-Grant Universities. (1999). Returning to our roots: The engaged university. Washington, DC: National Association of State Universities and Land-Grant Colleges. [http://www.nasulgc.org/Kellogg/]

Kling, R. (1999). What is social informatics and why does it matter? D-Lib Magazine, 5(1).

Loka Institute. (1998). Community-based research in the United States -- including comparison with the Dutch science shops and the mainstream American research system. Executive Summary. Amherst, MA: The Loka Institute. [http://www.loka.org/crn/pubs/comreprt.htm]

McClure, C. R., & Bertot, J. C. (1998). Public library use in Pennsylvania: Identifying uses, benefits, and impacts. Harrisburg, PA: Pennsylvania Department of Education.

Mueller, M. L., & Schement, J. R. (1996). Universal service from the bottom up: A study of telephone penetration in Camden, New Jersey. The Information Society, 12, 273-292.

Nardi, B. A., & O’Day, V. L. (1999). Information ecologies: Using technology with heart. Cambridge, MA: MIT Press.

Reardon, K. M. (1998). Participatory action research as service learning.  In Rhoads, R. A., and J. P. F. Howard (Eds.). Academic service learning: A pedagogy of action and reflection (pp. 57-64).  San Francisco: Jossey-Bass.

Rosen, J.  (1995).  Public journalism: A case for public scholarship. Change (May/June), 34-38.

Schon, D. A. (1995). Knowing-in-action: The new scholarship requires a new epistemology. Change, 27 (6), 27-34.

Schuler, D.  (1996).  Appendix F: Notes on Community-wide projects.  In New community networks: Wired for change (pp. 487-496).  New York: ACM Press.

Star, S. Leigh, & Ruhleder, K. (1996). Steps toward an ecology of infrastructure: Design and access for large information spaces. Information Systems Research, 7 (1), 111-134.

Weigert, K. M. (1998). Academic service learning: Its meaning and relevance.  In Rhoads, R. A., and J. P. F. Howard (Eds.). Academic service learning: A pedagogy of action and reflection (pp. 3-10).  San Francisco: Jossey-Bass.

Whyte, W. (Ed.). (1991). Participatory action research. Newbury Park, CA: Sage.
 

Ron Larsen
