
[Library Quarterly, vol. 81, no. 1, pp. 7–25]

© 2011 by The University of Chicago. All rights reserved.

0024-2519/2011/8101-0003$10.00

THE LIBRARY QUARTERLY
Volume 81, Number 1, January 2011

LIBRARY ASSESSMENT: THE WAY WE HAVE GROWN

Fred Heath¹

This essay served as the basis for the opening keynote speech during the Library Assessment Conference in Baltimore in 2010. It set the stage by tracing the demands for accountability in the beginning of the century and outlining progress made. It provided important background and context for the following four keynote papers delivered by Megan Oakleaf on teaching and learning, Danuta Nitecki on assessment of library spaces, Joe Matthews on performance measures, and Stephen Town on library value. This essay emphasizes some of the work the Association of Research Libraries and its partners have supported over the past decade and places ARL developments in the larger context of assessment activities across the profession and around the globe.

If I have seen a little further it is by standing on the shoulders of giants. (Isaac Newton)

Introduction

It is my privilege to write this essay based on an invitation to deliver the opening keynote speech for the 2010 Library Assessment Conference hosted in Baltimore by the Association of Research Libraries, the University of Virginia, and the University of Washington. In due course others will deliver keynotes that build upon this essay. The Baltimore event is the third time that administrators, educators, and practitioners of the sciences of evaluation and assessment as they relate to libraries have gathered in North America since 2006 [1–2]. And I am persuaded that, through the work of many on the diverse aspects of library assessment, we are beginning to develop a corpus of data and knowledge that will serve this group well as we undertake to attain the conference theme: the development of effective, sustainable, practical assessment.

1. Vice Provost and Director of Libraries, University of Texas; E-mail [email protected].

The conference itself has four other keynote speakers whose essays are published in this special issue of the Library Quarterly. An essay by Megan Oakleaf treats us to research on how libraries and library services affect learning outcomes, and Danuta Nitecki, whose work in the field of library assessment is extensive, shares with us aspects of the use of our library spaces in changing times. Joseph Matthews, whose two 2007 books on the topic are evidence of and guide to the growing corpus of research and findings on library assessment [3–4], focuses on performance measurement and the balanced scorecard. And the last keynote essay is on a topic particularly important in these troubled economic times: Stephen Town discusses how we can measure and convey the value and impact of library services. During the conference a large number of breakout sessions by grassroots practitioners and discipline experts guide us through the many daunting aspects of the assessment challenge, and they are organized primarily along the keynote topics outlined in this special issue.

In this essay I will attempt to convey an overview of the strides we have made over the past decade in library assessment—an account of the way we have grown as a library assessment community. In carving out this overview, I will focus in particular on the role that the Association of Research Libraries (ARL) has played in knitting together diverse pieces of the library assessment movement into a coherent suite of services while at the same time creating a constructive space in which other voices can contribute to the assessment dialogue.

But before doing that, I must remind us that the library assessment “movement” did not emerge from the nest full grown a decade ago. The successes we have enjoyed have progenitors that reach back considerably in time. As we look back, we quickly become aware that concerns with service quality and library effectiveness have occupied both practitioners and researchers for at least a century.

Evaluation and assessment are synonymous with higher education in our time. Everyone attending a North American conference on library assessment is familiar with the roll call of regional accrediting organizations that oversee higher education quality, planning, and improvement in the United States. And most of us, on at least one occasion, have had to drag some aspect of our library operations underneath the lens of one of those accrediting bodies to affirm that we were faithfully upholding our part of the university compact with teachers and learners in our community. Our communities care very much about the quality of teaching and learning on our campuses, as well as the quality of the research that steadily advances the frontiers of knowledge and understanding. We are comfortable with accountability and transparency, and we are ready to demonstrate the return on investment, the value received, to all who may be interested.

The Prescriptive Years

Evaluation was not always a component of higher education. In the nineteenth century, colleges and universities mirrored the chaotic scene of a rapidly industrializing America so graphically portrayed in the novels of Upton Sinclair and Sinclair Lewis. One observer describes higher education institutions of that era as a “variegated hodgepodge of uncoordinated practices . . . which had never undergone any screening from anybody, and many [of] which were shoddy, futile, and absurd beyond anything we now conceive” [5, p. 333]. Slowly, however, to meet the needs of the Industrial Age, education began to be managed, harnessed, and directed. In the public sector, schools tended to follow a common manager-centric model. As David Tyack noted in his book The One Best System, control of public schools in urban settings was the province of elites, of “successful men.” Boards were composed of business and professional stalwarts, who turned over the administration of public schools to powerful superintendents charged to shape public education to the economic needs and social conditions of urban and industrial America [6, p. 126].

Like the public schools, colleges and universities came to be subjected to oversight and review. The rising tide of regulation saw the emergence of accrediting societies. The New England Association of Colleges and Secondary Schools was established first, in 1885, to be followed in short order by the Middle States, North Central, and Southern Associations [5, p. 334]. University libraries followed a similar path.

There was little in the way of benchmarking for libraries in the first quarter of the twentieth century. Then in 1928, the Carnegie Corporation established an advisory committee, under the leadership of William Warner Bishop, for the purpose of extending over a million dollars in acquisitions grants to college and university libraries. That prestigious group of college presidents, deans, and library directors quickly discovered there were essentially no established or recognized standards with which to guide Carnegie’s investments [7]. And so the Carnegie Corporation and College Libraries Standards were born, remarkable in their brevity and explicitness, vestiges of which remain with us to this day. Some twenty-one standards embraced the range of library operations, from seating (25 percent of the student body) to collection size, staffing, cataloging and classification, and the like. By 1934, most of the accrediting associations had settled on minimum college library collections of 8,000 volumes and expenditures of five dollars per student [8, pp. 204–6].


For the most part, however, early assessment of research university adequacy was prescriptive, and the powerful advisors to the Carnegie Corporation personally wielded great influence. For the first time, an effort to develop a “scorecard” measuring library effectiveness was established, though it was then largely abandoned by the board [7, pp. 15–16]. University administrators and librarians often turned to visits by, or the writings of, eminent academic librarians for guidance on how to conduct their affairs. One member of the Carnegie circle, for example, personally visited 125 of the 200 supplicants for Carnegie aid [7, p. 18]. James Thayer Gerould’s book The College Library Building and William Randall’s The College Library were underwritten by the powerful Carnegie Corporation [9, p. 466]. Other leaders of the early twentieth century included Louis Round Wilson [10], Maurice Tauber [11], and Guy Lyle [12]. Like the works of Gerould and Randall, their writings were hugely influential during their time. Library leaders, it can be said, knew a good library when they saw one.

Collection checklists also played an important role in this prescriptive era. The Carnegie board soon discovered, through its efforts at scorecards and on-site surveys by luminaries, that simple volume counts were insufficient means by which to assess eligibility for Carnegie largesse. The Carnegie-funded List of Books for College Libraries, by C. B. Shaw of Swarthmore, first published in 1930, served primarily as a means for evaluating holdings and only secondarily as a purchase guide [7, p. 16; 13, p. 149]. Keeping the accrediting societies in the game, the Southern Association of Colleges and Schools published its own guide, edited by William Stanley Hoole [14].

The Quantitative Years

Subsequently, the influence of the Carnegie Corporation, its interests redirected, began to shift away from libraries. In time, the numerical benchmarks of accrediting societies, such as they were, also began to evanesce. In some ways, the change resulted from a strategic retreat by the societies from that space. As a result of the Great Depression and the draining capital requirements of World War II, colleges and universities were fiscally stressed, and the accrediting societies began to replace specific library benchmarks (and other measures of institutional adequacy) with more flexible guidelines. And as accrediting societies were permitting institutions to measure library adequacy beneath the lens of institutional purpose, library leaders moved into the vacated quantitative space. The size of collections and the scale of institutional investment in their acquisition mattered, directors asserted. The American Library Association, an increasingly influential organization, filled the breach and in 1943 adopted standards for collection size and expenditures, staffing size and compensation [15, p. 8]. In 1957, the Association of College and Research Libraries undertook to prepare a new set of standards. Completed in 1959, that six-page document served to guide the rapid buildup of college and university libraries in the post-Sputnik era, defining the dimensions of adequacy as the Higher Education Act of 1965, Title IIA, inaugurated a national effort to improve America’s college and university libraries [8, p. 209]. The Clapp-Jordan Formula, first published in 1965, was another gesture toward quantitative standards as the measuring stick of adequacy. Reliance on checklists and other more qualitative approaches, said the authors, was “slow, tiresome, and costly” [13, p. 150].

The quantitative measures have never really gone away. In 1963, the Association of Research Libraries assumed oversight of what has come to be known as the ARL Statistics, a statistical compendium based upon Gerould’s, with data reaching back to 1908 [16, p. 8; 17]. Building on the ARL Statistics, Kendon Stubbs developed the ARL Index in the early 1980s, metrics that became widely recognized among the membership and beyond [18, pp. 1–62; 19, pp. 527–38; 20, pp. 18–20; 21, pp. 79–85; 22, pp. 231–35; 23, pp. 117–22]. The ARL Index is now the oldest, most stable, and most highly regarded measure of research library operations, measuring inputs on collection size, library expenditures, staffing, and services to produce an annual ranking of research libraries across North America [24, p. 5]. And in that context it must be placed at the apex of the quantitative indices that research librarians use to assess the relative strength of their library programs. That index itself went through tweaks over the years. Originally based on ten variables selected or determined by factor analysis, a five-variable index has been employed since 1986 [16, p. 9]. From 2005–6 data forward, the Expenditures-Focused Index, or, as it is now known, the ARL Investment Index, shed some of the artifact-derived factors in its algorithm to produce rankings based on total library expenditures, collection expenditures, salaries and wages, and total number of staff [25]. The annual publication of the ARL Index is always a much-anticipated event, with an institution’s placement in the rankings often a matter of concern to library director and university president alike.
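
Because the Investment Index is described here as a factor-analysis-derived composite of expenditure inputs, a small worked sketch may help make the mechanics concrete. It is purely illustrative: the library names and dollar figures are invented, and the log transformation, standardization, and first-principal-component weighting are assumptions chosen to echo the factor-analysis lineage described above, not ARL's published algorithm.

```python
import numpy as np

# Invented inputs echoing the four variables named above: total library
# expenditures, collection expenditures, salaries and wages, total staff.
libraries = {
    "Library A": [38_000_000, 12_000_000, 17_000_000, 420],
    "Library B": [22_000_000, 7_500_000, 10_000_000, 260],
    "Library C": [55_000_000, 18_000_000, 24_000_000, 610],
}

X = np.log(np.array(list(libraries.values()), dtype=float))  # log scale (assumption)
Z = (X - X.mean(axis=0)) / X.std(axis=0)                     # standardize each variable

# Use the first principal component of the standardized data as the
# composite score; factor analysis on the logged inputs would be analogous.
_, _, vt = np.linalg.svd(Z, full_matrices=False)
weights = vt[0]
if weights.sum() < 0:  # orient the component so larger inputs yield larger scores
    weights = -weights
scores = Z @ weights

# Rank libraries by composite score, highest first.
for (name, _), score in sorted(zip(libraries.items(), scores), key=lambda t: -t[1]):
    print(f"{name}: index score {score:+.2f}")
```

The point of the sketch is only that a single ranked score can be distilled from several correlated input measures; the actual variables, weights, and transformations are ARL's to define.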

Evolving Culture of Assessment

But even as the quantitative Index has grown in sophistication and acceptance as a longitudinal input measure, so too has the recognition that a complete program of assessment requires a broader perspective. That shift in mind-set goes back several decades, as ARL began to actively grapple with the role that process and qualitative measures played in effective organizational assessment. In 1970 the ARL Office of University Library Management Studies was established. Later renamed the Office of Leadership and Management Studies (OLMS), it guided library directors through efforts at organizational development and improvement until its discontinuance in 2006 [16, p. 8]. Indeed, you can trace the taproot of the new culture of assessment that now characterizes ARL back to Duane Webster’s arrival in 1970. In 1971 Webster authored Planning Aids for the University Library Director. With the book’s emphasis on planning and development, Webster pointed out to beleaguered library directors that proper assessment of the requirements for change was one of the essential elements of an effective planning program. A companion study by his office the next year underscored the need to focus on organizational improvement and the development of staff capabilities [26]. In 1973, the Management Review and Analysis Program (MRAP) was born, and only a couple of years later the first MRAP studies were completed at Iowa State, Purdue, and Rochester [16, p. 9]. For those of you who may remember the acronyms, assessment was what MRAP and CAP (the Collection Analysis Program) were all about—informed decision making based upon carefully assembled information [27, p. 335].

It was during this period that ARL began to develop the management tools still in use by North America’s research libraries. The focus on process was evident from the time OLMS came into being, as was the influence of organizational development gurus such as Chris Argyris [28] and Rensis Likert [29]. A new generation of library leaders appeared to direct assessment and evaluation in research libraries. The 1973 work by Robert B. Downs and Arthur McAnnally, “Changing Roles of Directors of University Libraries,” shifted the focus away from traditional hierarchical management structures and inputs and issued a call for the embrace of participatory management—a call soon to be echoed by Maurice Marchant, William Birdsall, and others [30, p. 156; 31, pp. 103–25; 32].

In 1978, ARL adopted the Standards for University Libraries that had been a decade in preparation by an ACRL/ARL joint committee, funded at least in part by the Council on Library Resources (CLR). The committee was chaired by Downs and included among its members Clifton Brock, Gus Harrer, John Heussman, Jay Lucker, John McDonald, and Ellsworth Mason [33, p. 35]. The work of the Downs committee was completed in 1975, and the final report was presented to the ARL membership that year; a new joint committee, chaired by Eldred Smith, was convened in the same year to complete the work. New measures began jostling for recognition alongside the Index. According to Beverly Lynch, the larger, wealthier institutions opposed numbers, fearing minimal standards that would not serve to sustain momentum or justify continuing library investment at those institutions. Support for the quantitative approach, such as it was, came from the smaller, less wealthy, and generally public member libraries. In their final version, the Standards for University Libraries were service oriented, advocating processes that would support the instruction and research programs of the universities [33, pp. 35–46].

As library researchers and managers sought effectiveness measures that ranged beyond input measures and book lists, much of the groundbreaking work took place in the library schools and on the university campuses—a trend that would continue into the mid-1990s. Frederick W. Lancaster developed both an interest and an expertise in the field and through his mentorship opened the doors to many other researchers [34, 35]. As Lancaster observed in his first work, “present standards are largely based on current practices at existing institutions that, in some sense, are considered ‘good.’ They emphasize inputs rather than outputs (services). . . . Perhaps what is needed is standards by which individual institutions can evaluate their own performance in relation to the needs of their user population” [34]. Among the early thought leaders in research library circles in those days was Tom Shaughnessy, then director at the University of Minnesota. His own writings from that era evince an awareness of movements and leaders, such as Total Quality Management (TQM) in the first instance and Deming in the second, as well as a concern with how to map those ideas toward organizational improvement in research libraries [36, pp. 7–10; 37]. In an important issue of Library Trends, Shaughnessy squarely joined the issue of the relationship between the inputs that had traditionally driven the research library community and the outcomes that the larger research community was seeking. The question of the relationship between expenditures and quality was joined. That important issue of Library Trends added sparks to the ongoing research on library effectiveness, with far-reaching implications [38].

Peter Hernon and Chuck McClure also established their early reputations at least in part in the fields of evaluation and assessment [39, pp. 71–86; 40, pp. 37–41]. Danuta Nitecki partnered with Peter Hernon to explore the concepts of service quality and user satisfaction on the Yale campus and elsewhere [41, pp. 9–17; 42, pp. 259–73]. Their careful work overlapped and anticipated the research being done elsewhere that became the immensely popular LibQUAL+®. Steve Hiller [43, p. 4; 44] and Jim Self [45, p. 2; 46, p. 3] were establishing national reputations for themselves as they developed strong campus-based assessment programs at the University of Washington and the University of Virginia, respectively—as were Amos Lakos [47, p. 4; 48, p. 3] and Shelley Phipps [49, pp. 635–61; 50, p. 1]. From the Columbia study to the assessment of user satisfaction at Yale by Hernon and Nitecki, the library community appeared increasingly ready, and able, to take up Lancaster’s admonition to evaluate performance in the context of local needs and expectations.

In the meantime, in Europe, a strong assessment climate was also building. The Department of Information and Library Management at the University of Northumbria at Newcastle in many ways served to facilitate the European dialogue. The first international conference on assessment can fairly be said to be the First Northumbria International Conference on Performance Measurement in Library and Information Services, held in Northumberland in 1995 [51]. The first conference proceedings documented the rich diversity of inquiry across Europe and included contributions from such stalwarts as Stephen Town, Roswitha Poll, and Ian Winkworth. The proceedings have had an international flavor since that first year, when there were keynote addresses by U.S. and South African speakers.

With the blossoming of Web-based information technologies in the second half of the 1990s, large-scale, collaborative assessment projects became increasingly feasible, and a new chapter was about to begin. As Karen Coyle has observed, the tension between qualitative and quantitative measures of library performance began to take another turn in the mid-1990s as physical holdings and the acquisition of printed materials began to share prominence with digital formats and licensed resources [24, pp. 602–3]. Or, as Danuta Nitecki put it plainly, “A measure of library quality based solely on collections has become obsolete” [52, p. 181]. As volume counts and ARL rankings based on such inputs became less useful, ARL began to develop other measures to provide information on adequacy and return on investment.

In the winter of 1999, many of the leaders in library assessment and development met in Tucson, Arizona, to consider the need to develop alternatives to expenditure metrics as measures of library performance [53, p. 11; 54, pp. 1–8]. Carla Stoffle (University of Arizona) and Paul Kobulnicky (University of Connecticut) were among the leaders who facilitated the conversation [54, p. 2]. There, ARL’s New Measures Initiative was born, led by Stoffle and the ARL Statistics and Measurement Committee. New Measures, according to ARL, was to become a suite of services that libraries use to solicit, track, understand, and act upon users’ opinions of service quality. Results have been used to develop a better understanding of perceptions of library service quality, to interpret user feedback systematically over time, and to identify best practices across institutions. Recent years have seen a collaborative culture of assessment reach its full maturity. Methodologists, anthropologists, statisticians, and others have joined librarians to produce an array of tools that enable library directors to direct resources with greater precision to areas of highest client priority or greatest need. For example, the anthropological work of Susan Gibbons [55] has been popularized through ACRL publications and presentations and has influenced the establishment of a key strategic direction for ARL, initially articulated as the contributions of libraries to Research, Teaching, and Learning (RTL) but more recently refocused on the Transformation of Research Libraries (TRL).

The New Measures Initiative, now rebranded as the StatsQUAL Gateway to indicate its place within ARL, underscores the convergence of qualitative and quantitative methodologies. According to a recent publication, the goals are now almost entirely outcome focused: “The goal is to establish an integrated suite of library assessment tools that tell users’ library success stories, emphasize customer-driven libraries and demonstrate responsiveness and engagement in improving customer service” [30, p. 150]. It is probably worth taking a quick look at some of those instruments, and there are a number of good overviews available [56, pp. 1–31].

The StatsQUAL Era

The StatsQUAL suite provides managers access to five protocols: ARL Statistics, LibQUAL+®, DigiQUAL, ClimateQUAL, and MINES for Libraries (StatsQUAL can be accessed at http://www.statsqual.org). They share some common characteristics. First of all, they are born of colloquy and common purpose, as researchers, administrators, and methodologists have come together to pool their best ideas toward common goods. Second, they continue the time-honored commitment of ARL to develop longitudinal data that allow the community to assess its individual libraries over time while allowing for the emergence of useful benchmarks, applicable best practices, and sharing and learning from each other. From early practices that were limited by the boundaries of individual universities has grown a suite of services that can be meaningfully employed by libraries in ARL, in North America generally, and in the world [57, p. 4].

LibQUAL+®.—In 1998, the year that Google first burst upon the scene, Colleen Cook (then a PhD student at Texas A&M and subsequently dean of libraries), Bruce Thompson (then TAMU professor of educational psychology), Yvonna Lincoln, and others began developing a modified version of the SERVQUAL protocol, long a standard in the for-profit sector for measuring user satisfaction [58]. The team proposed to ARL the development of a tailored, service-quality assessment tool, subsequently named “LibQUAL+®,” which, when fully tested, would be given to ARL for nonprofit use in improving libraries [59, pp. 103–12].

In January 2000, the American Library Association held its midwinter meeting in San Antonio, and at that conference the representatives of a dozen ARL libraries met in a classroom of a TAMU–San Antonio facility to discuss the possibility of pilot-testing LibQUAL+®. Agreement was reached, and the first baby steps in user satisfaction assessment were under way. Martha Kyrillidou, ARL director of statistics, and the TAMU team submitted through ARL a proposal to the Fund for the Improvement of Post-Secondary Education (FIPSE) [60, pp. 129–38]. Upon successfully securing a three-year grant, ARL brought together a forum of notable speakers who had worked extensively in helping libraries with service-quality improvements, and the papers from that event were published as a special issue of Library Trends on “Measuring Service Quality” [61, pp. 541–780]. At that time, eleven years ago and not too far from where we are now, the ARL forum captured the latest thinking in assessment and provided the platform for a rich exchange of ideas that flourished in the coming years with the rapid expansion of the LibQUAL+® service.

LibQUAL+® includes the quantitative data yielded from the twenty-two core items, but it also includes qualitative data provided by users in the form of open-ended comments. Consistently, across libraries, a striking percentage of participants—roughly 40 percent—provided comments, which flesh out users’ service-quality perceptions and make specific recommendations for service-quality improvements. In its brief life, LibQUAL+® has collected data from more than 1 million library users across more than a thousand institutions. It has been used in the United States, Canada, Mexico, the Bahamas, French Polynesia, Australia, New Zealand, Singapore, the United Kingdom, France, Ireland, the Netherlands, Belgium, Switzerland, Germany, Denmark, Finland, Norway, Sweden, Cyprus, Egypt, Israel, the United Arab Emirates, China, Japan, and South Africa. Currently, the protocol supports eighteen language variations: Afrikaans, American English, British English, Chinese (traditional), Danish, Dutch, Finnish, French (Belgian), French (Canadian), French (France), German, Greek, Hebrew, Japanese, Norwegian, Spanish, Swedish, and Welsh. A version in Arabic is currently under development. The various editions of LibQUAL+® have been used over a period of ten years [58]. Of the tools in the StatsQUAL suite, LibQUAL+® perhaps brings ARL the closest yet to recognizing Lancaster’s admonition to “evaluate their own performance in relation to the needs of their user population.”
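
For readers curious how such item ratings are typically analyzed, below is a minimal sketch of the gap-scoring approach commonly described for LibQUAL+®-style surveys, in which each core item is rated three times, for minimum acceptable, desired, and perceived levels of service, and the analysis examines the gaps between them. The items, values, and nine-point scale here are illustrative assumptions, not actual survey content or data.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass(frozen=True)
class ItemRating:
    minimum: float    # lowest acceptable service level (1-9 scale, assumed)
    desired: float    # level the respondent personally wants
    perceived: float  # level the respondent believes the library provides

def gap_scores(ratings):
    """Mean adequacy and superiority gaps across a respondent's items.

    adequacy gap    = perceived - minimum  (positive: service clears the floor)
    superiority gap = perceived - desired  (often negative in practice)
    """
    return {
        "adequacy": mean(r.perceived - r.minimum for r in ratings),
        "superiority": mean(r.perceived - r.desired for r in ratings),
    }

# Two invented item ratings from one hypothetical respondent.
responses = [
    ItemRating(minimum=6, desired=8, perceived=7),
    ItemRating(minimum=5, desired=9, perceived=6),
]
print(gap_scores(responses))  # {'adequacy': 1.0, 'superiority': -2.0}
```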

DigiQUAL.—LibQUAL+® needed to be repurposed to address and assess the services provided by digital libraries. A grant from the National Library Foundation helped ARL initiate research in this area by attempting a summative evaluation protocol for digital libraries. The DigiQUAL tool, researched with support from the National Science Foundation’s National Science Digital Library (NSDL) Program, articulated important dimensions of digital library service quality but has yet to achieve wide appeal or to fulfill the promise of bringing together a community of developers and evaluators focused on the success of digital library services, from a service and user perspective, across different institutions and implementations. Like all the tools in the StatsQUAL suite, DigiQUAL is the fruit of multi-institutional collaboration (Texas A&M University, the University of Texas, and ARL, as well as NSDL partner projects and services) [57, p. 4].

ClimateQUAL.—ClimateQUAL, administered at the University of Texas for the first time in the spring of 2010, is also the newest protocol in the assessment tool kit. In many ways it harkens back to the days of Duane Webster’s arrival at ARL, the early studies of the Columbia University Libraries, MRAP, and the first visible commitment of the association to organizational development. Born of the work of Paul Hanges and Benjamin Schneider at the University of Maryland, the instrument originated there as the Organizational Climate and Diversity Assessment (OCDA) protocol. Indeed, its library developers, Charles Lowry and Sue Baughman, are now executive director and associate deputy director, respectively, of ARL. The data set is proprietary and belongs to the University of Maryland and ARL. In the words of its authors and owners, ClimateQUAL “uses deep assessment of a library’s staff to plumb the dimensions of climate and organizational culture important for a healthy organization in a library setting” [30, p. 156; 62, pp. 154–57]. Participants in the protocol commit to sharing ideas and strategies that promise to improve organizational climate and service delivery [57, p. 4].

MINES for Libraries.—In some ways MINES for Libraries, whose developers are pragmatically aware of how the information revolution has changed the way researchers and learners interact with the research library, may be the most interesting protocol of all. MINES stands for Measuring the Impact of Networked Electronic Services. The roots of MINES lie partly in the ARL E-Metrics project, a partnership of ARL and the Florida State University Information Use Management and Policy Institute. Led by Sherrie Schmidt (Arizona State University) and Rush Miller (University of Pittsburgh), the E-Metrics project undertook to create a better understanding of how the growing body of electronic resources was used by the university community and how it contributed to user success and satisfaction [54, p. 11; 63, pp. 161–77]. The ARL E-Metrics work was incorporated into the ARL Supplementary Statistics insofar as those data focus on institutional elements (usage, digital libraries, e-books) [64]. The user component of this work is addressed effectively with the MINES for Libraries protocol.

MINES for Libraries focuses on the purpose of use of electronic resources, the demographics of the users, and the location of use. The protocol was developed by Brinley Franklin, vice provost for libraries at the University of Connecticut, and Terry Plum, of the Simmons School of Library and Information Science [65, pp. 28–40; 66, pp. 41–47], and has its roots in a long-standing tradition of indirect cost studies. MINES for Libraries, like ClimateQUAL and LibQUAL+®, is accessible to the library community via ARL’s StatsQUAL portal, and the application of the protocol does involve local networking expertise and capacity. It has been successfully implemented in consortia like the Ontario Council of University Libraries (OCUL) [67], but it has also been successful as a local institutional application at the University of Iowa and the University of Macedonia in Thessaloniki, Greece [68]. Whereas LibQUAL+® measures the lingering commitment of the student to the library as place, MINES for Libraries acknowledges that many library users are no longer constrained to frequent the physical library to make use of resources that are increasingly accessible digitally [57, p. 4; 69–70].
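
Since the protocol's distinguishing feature is capturing purpose, demographics, and location at the point of use, a brief sketch of that kind of record may be useful. The field names and category labels below are invented placeholders for illustration; they are not the actual MINES for Libraries instrument.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class UsageResponse:
    purpose: str     # e.g., "funded research", "instruction", "coursework"
    user_group: str  # e.g., "faculty", "graduate", "undergraduate"
    location: str    # e.g., "in library", "on campus", "off campus"

# Invented point-of-use survey responses, of the kind a web intercept
# might collect when a user opens a networked electronic resource.
responses = [
    UsageResponse("funded research", "faculty", "off campus"),
    UsageResponse("coursework", "undergraduate", "in library"),
    UsageResponse("funded research", "graduate", "on campus"),
]

# Simple tallies answer the questions the text describes: who is using
# networked resources, from where, and to what end.
for field in ("purpose", "user_group", "location"):
    print(field, dict(Counter(getattr(r, field) for r in responses)))
```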

This protocol also has the potential for expansion into a direction that library assessment is newly emphasizing: valuation studies. Building upon important work by Paula Kaufman [71] at the University of Illinois at Urbana-Champaign and Carol Tenopir [72, pp. 199–207; 73–75] at the University of Tennessee, ARL staff have partnered with them in pursuing a systematic investigation and awareness of library valuation methodologies. Lib-Value, a three-year grant supported with funding from the Institute of Museum and Library Services (IMLS), attempts to address the limitations of, and expand the perspectives of, return-on-investment studies implemented in public libraries or sponsored by vendors. The researchers have a broad perspective on library valuation methods, and their goal is to expand the debate on these issues over the coming years [76].

The Globalization of Assessment

The decade since the ARL forum on library service quality was a period of rapid convergence in library assessment. The important work taking place in North America was mirrored by similar developments in Europe and elsewhere. The International Federation of Library Associations (IFLA) has fostered the conversation through conferences and publications. Roswitha Poll’s influential study Measuring Quality has now been published in two editions and in six languages and serves as a guide to practitioners, with many indicators for performance assessment [77]. The European tradition is well documented in the biennial Northumbria conference on performance measurement and metrics [78]. From the first conference at Newcastle in 1995, the rich diversity of research in library assessment was evident. The Northumbria conference has taken place mostly in the United Kingdom but also in places such as the United States, South Africa, and Italy, when it was scheduled adjacent to IFLA conferences. With each succeeding biennial conference, participation has become more richly diverse. The eighth conference, held in Florence in the late summer of 2009, included some forty-two papers from all around the globe. North American presenters included John Bertot, Brinley Franklin, Martha Kyrillidou, Charles Lowry, Steve Hiller, Wanda Dole, and others. Presenters from at least sixteen nations contributed to the colloquy [79].

In 2006, eleven years after the first Northumbria conference, ARL brought to North America its very first Library Assessment Conference. More than 200 participants from seven nations attended, representing over 100 libraries, associations, library systems, or vendors. Some forty papers were presented on the vast tool kit assembled to assist librarians in their work [1]. Paul Hanges keynoted there on his work with the ClimateQUAL protocol, and Brinley Franklin shared additional information on MINES. In 2008, the stakeholders and participants in the library assessment movement assembled again, this time in Seattle. Some 375 professionals attended from around the globe, and some sixty-five papers were offered. As the editors of the conference proceedings proudly noted, it was the largest library assessment conference ever held [2]. Here, for the first time and perhaps emblematic of the maturation of the movement itself, Library Assessment Career Achievement Awards were presented, to Duane Webster, Amos Lakos, and Shelley Phipps [80, p. 9].

More recently, the library assessment movement has also been reaching communities in eastern Europe, Africa, and Asia by bringing them together in the Qualitative and Quantitative Research Methods in Libraries (QQML) conference. The first and second QQML events took place in Chania, Crete, in 2009 and 2010, respectively. Keynote speakers included Peter Hernon and Danuta Nitecki in 2009 and F. W. Lancaster and Roswitha Poll in 2010. The organizing committee is currently planning events for the coming years.

Summary

And so, for a decade now, ARL leaders and contributing collaborators have been at work developing and promoting innovative means of assessing research libraries, with an eye toward their continual improvement. A methodological suite of protocols has been developed that recognizes and draws upon the descriptive statistics in the ARL Index, which trace their roots to the beginning of an earlier century, and that now includes such tools as LibQUAL+®, MINES for Libraries, and ClimateQUAL. A new generation of assessment experts, such as Steve Hiller, Jim Self, Stephen Town [81, pp. 29–39], Danuta Nitecki, Peter Hernon, Brinley Franklin, Colleen Cook, Bruce Thompson, Bettina Koeper, and Sayeed Choudhury, contribute to and draw upon the evolving suite of assessment protocols. Colleen Cook [82] and Martha Kyrillidou [83] have subjected the protocols to the rigor of the dissertation process. And the new leadership of ARL, Charles Lowry and Sue Baughman, bring their own distinguished backgrounds to the challenges of evaluation and assessment.

If there is a hallmark, a defining characteristic of this decade, it is a new era of colloquy—where methodologists from all sectors actively collaborate to advance the assessment of research library effectiveness. Major contributions to the study of user behaviors over the past decade have been made by OCLC, CLIR, and Ithaka [84–86]. The current conversation is both global and inclusive, as practitioners and researchers learn from one another, combining and melding their instruments in order to optimize the investments in and improve the effectiveness of library operations. Jim Self and Steve Hiller have served the library community as Visiting Program Officers at ARL to answer a call critical to our constrained times: “to assist libraries in developing effective, sustainable, and practical assessment programs that demonstrate the libraries’ contributions to teaching, learning, and research” [87, pp. 171–77]. Open to all libraries, the lessons of sustainability are brought to the local campus by the program officers. Administration of the StatsQUAL protocols, interpretation of the result sets, development of local assessment plans, preparation for regional accreditation, and the establishment of benchmarks and performance standards are now within the grasp of the local library.

Brinley Franklin reintroduced me to John Cotton Dana, whose writings take us back to the beginning of the century, a long journey we have covered all too briefly in this essay. “All public institutions,” said Dana, “should give returns for their costs; and those returns should be in good degree positive, definite, visible, and measurable. . . . Common sense demands that a publicly supported institution do something for its supporters and that some part at least of what it does be capable of clear description and downright valuation” [88, p. 11]. It is clear that our best efforts at accountability and at demonstrating value and return on investment have not spared libraries from the challenges of the current fiscal climate. What our culture of assessment can do is allow us to concentrate with precision on the assignment of available resources to the goods and services our communities most value. If we listen, and if we act purposefully, we will remain indispensable to teaching and learning.

The Library Assessment Conference in Baltimore offers an opportunity to learn of the additional steps that we in the profession have taken to make our libraries better in service to the communities they serve. We learn how close we have come to answering the admonitions of Mr. Dana and Professor Lancaster, two of the giants upon whose shoulders we stand.


REFERENCES

1. Association of Research Libraries. Proceedings of the 2006 Library Assessment Conference: Building Effective, Sustainable, Practical Assessment. September 25–27, 2006, Charlottesville, VA. http://libraryassessment.org/bm~doc/proceedings-lac-2006.pdf.

2. Association of Research Libraries. Proceedings of the 2008 Library Assessment Conference: Building Effective, Sustainable, Practical Assessment. August 4–7, 2008, Seattle. http://libraryassessment.org/bm~doc/proceedings-lac-2008.pdf.

3. Matthews, Joseph R. The Evaluation and Measurement of Library Services. Westport, CT: Libraries Unlimited, 2007.

4. Matthews, Joseph R. Library Assessment in Higher Education. Westport, CT: Libraries Unlimited, 2007.

5. Carnovsky, Leon. “The Evaluation and Accreditation of Library Schools.” Library Quarterly 37, no. 4 (October 1967): 333–47.

6. Tyack, David. The One Best System. Cambridge, MA: Harvard University Press, 1974.

7. Bishop, William Warner. Carnegie Corporation and College Libraries, 1929–1938. New York: Carnegie Corporation, 1938.

8. Brown, Helen. “College Library Standards.” Library Trends 21 (October 1972): 204–18.

9. Bishop, William Warner. “Centralized Purchasing for American College Libraries.” Library Quarterly 7, no. 4 (October 1937): 465–70.

10. Wilson, Louis Round. The Geography of Reading: A Study of the Distribution and Status of Libraries in the United States. Chicago: University of Chicago Press, 1938.

11. Wilson, Louis Round, and Tauber, Maurice F. The University Library: Its Organization, Administration and Functions. Chicago: University of Chicago Press, 1945.

12. Lyle, Guy R. The Administration of the College Library. New York: Wilson, 1945.

13. Broadus, Robert N. “Evaluation of Academic Library Collections: A Survey of Recent Literature.” Library Acquisitions: Practice and Theory 1, no. 3 (1977): 149–55.

14. Hoole, William Stanley. The Classified List of Reference Books and Periodicals for College Libraries. Rev. ed. Atlanta: Southern Association of Colleges and Schools, 1947.

15. Kaser, David. “Standards for College Libraries.” Library Trends 31 (Summer 1982): 7–19.

16. Association of Research Libraries. A Gala Celebration of the 75th Anniversary of ARL’s Founding, 1932–2007. October 10, 2007. Washington, DC: Association of Research Libraries, 2007.

17. Molyneux, Robert. The Gerould Statistics. Washington, DC: Association of Research Libraries, 2010. http://www.arl.org/bm~doc/arlthegerouldstatistics.pdf.

18. Stubbs, Kendon. “The ARL Library Index and Quantitative Relationships in the ARL.” Report prepared for the Committee on ARL Statistics, November 1980.

19. Stubbs, Kendon. “University Libraries: Standards and Statistics.” College & Research Libraries 42 (November 1981): 527–38.

20. Stubbs, Kendon. “On the ARL Library Index.” In Research Libraries: Measurement, Management, Marketing. Minutes of the 108th Meeting of the Association of Research Libraries, May 1–2, 1986, Minneapolis. Washington, DC: Association of Research Libraries, 1986.

21. Stubbs, Kendon. “Lies, Damned Lies . . . and ARL Statistics.” In Research Libraries: Measurement, Management, Marketing. Minutes of the 108th Meeting of the Association of Research Libraries, May 1–2, 1986, Minneapolis. Washington, DC: Association of Research Libraries, 1986.

22. Stubbs, Kendon. “Apples and Oranges and ARL Statistics.” Journal of Academic Librarianship 14 (September 1988): 231–35.

23. “Access and ARL Membership Criteria.” In Proceedings of the 125th Meeting of the Association of Research Libraries. Washington, DC: Association of Research Libraries, 1993.

24. Coyle, Karen. “Data, Raw and Cooked.” Preprint. Published in Journal of Academic Librarianship 33, no. 5 (September 2007). http://www.kcoyle.net/jal_33_5.html.

25. “ARL Introduces Expenditures-Focused Index of University Library Members.” ARL Press Releases and Announcements, Washington, DC. http://www.arl.org/news/pr/arl-index-aug07.shtml.

26. Booz Allen Hamilton, Inc. Organization and Staffing of the Libraries of Columbia University: A Summary of the Case Study Sponsored by the Association of Research Libraries in Cooperation with the American Council on Education under a Grant from the Council on Library Resources. Washington, DC: University Library Management Studies Office, Association of Research Libraries, 1972. ERIC ED061948. Full text available at http://www.eric.ed.gov/ERICWebPortal/contentdelivery/servlet/ERICServlet?accno=ED061948.

27. Heath, Fred. “A Salute to a Leader: ARL’s Assessment Protocol Initiatives.” portal: Libraries and the Academy 9, no. 3 (July 2009): 333–38.

28. Argyris, Chris. Understanding Organizational Behavior. Homewood, IL: Dorsey, 1960.

29. Likert, Rensis. The Human Organization: Its Management and Value. New York: McGraw-Hill, 1967.

30. Kyrillidou, Martha; Lowry, Charles; Hanges, Paul; Aiken, Juliet; and Justh, Kristina. “ClimateQUAL® Organizational Climate and Diversity Assessment.” In Proceedings of the Fourteenth National Conference of the Association of College and Research Libraries: Pushing the Edge; Explore, Engage, Extend. Edited by Dawn M. Mueller. March 12–15, 2009, Seattle. Chicago: American Library Association, 2009.

31. McAnnally, Arthur M., and Downs, Robert B. “Changing Roles of Directors of University Libraries.” College & Research Libraries 34 (March 1973): 103–25.

32. Marchant, Maurice P. Participative Management in Academic Libraries. Westport, CT: Greenwood.

33. Lynch, Beverly. “University Library Standards.” Library Trends 31 (Summer 1982): 37–47.

34. Lancaster, F. W., and Jonich, M. J. The Measurement and Evaluation of Library Services. Arlington, VA: Information Resources Press, 1977.

35. Lancaster, F. W. If You Want to Evaluate Your Library. Champaign: University of Illinois, Graduate School of Library and Information Science, 1988.

36. Shaughnessy, Thomas W. “Benchmarking, Total Quality Management, and Libraries.” Library Administration and Management 7 (Winter 1993): 7–12.

37. Shaughnessy, Thomas W. “Think Quality: The Deming Approach Does Work in Libraries.” Library Journal 117, no. 9 (1992): 57–61.

38. Shaughnessy, Thomas W., issue ed. “Perspectives on Quality in Libraries.” Library Trends 44, no. 3 (Winter 1996).

39. McClure, Charles R., and Hernon, Peter. “Unobtrusive Testing and the Role of Library Management.” Reference Librarian 7, no. 18 (Summer 1987): 71–85.

40. McClure, Charles R., and Hernon, Peter. “Unobtrusive Reference Testing: The 55 Percent Rule.” Library Journal 111, no. 7 (April 1986): 37–42.

41. Hernon, Peter; Nitecki, Danuta; and Altman, Ellen. “Service Quality and Customer Satisfaction: An Assessment and Future Directions.” Journal of Academic Librarianship 25, no. 1 (January 1999): 9–17.

42. Nitecki, Danuta, and Hernon, Peter. “Measuring Service Quality at Yale University’s Libraries.” Journal of Academic Librarianship 26, no. 4 (July 2000): 259–73.

43. Hiller, Steve. “Assessing User Needs, Satisfaction, and Library Performance at the University of Washington Libraries.” Library Trends 49, no. 4 (2001): 605–25.

44. Hiller, Steve. “Another Tool in the Assessment Toolbox: Integrating LibQUAL+ into the University of Washington Libraries Assessment Program.” Journal of Library Administration 40, nos. 3/4 (2004): 121–37.

45. Willis, Alfred. “Interview: Using the Balanced Scorecard at the University of Virginia Library: An Interview with Jim Self and Lynda White.” Library Administration and Management 18, no. 2 (2004): 64–67.

46. Hiller, Steve; Kyrillidou, Martha; and Self, Jim. “When the Evidence Is Not Enough: Organizational Factors That Influence Effective and Successful Library Assessment.” Performance Measurement and Metrics 9, no. 3 (2008): 223–30.

47. Lakos, Amos. “Performance Measurement in Libraries and Information Sciences.” College and Research Libraries News 59, no. 4 (1998): 250–51.

48. Lakos, Amos, and Phipps, Shelley. “Creating a Culture of Assessment: A Catalyst for Organizational Change.” portal: Libraries and the Academy 4, no. 3 (2004): 345–61.

49. Phipps, Shelley. “Beyond Measuring Service Quality: Learning from the Voices of the Customers, the Staff, the Processes, and the Organization.” Library Trends 49, no. 4 (2001): 635–61.

50. Phipps, Shelley. “The Systems Design Approach to Organizational Development: The University of Arizona Model.” Library Trends 53, no. 1 (2004): 68–111.

51. Proceedings of the 1st Northumbria International Conference on Performance Measurement in Libraries and Information Services. Edited by Pat Wressell. Newcastle upon Tyne: University of Northumbria at Newcastle, 1995.

52. Nitecki, Danuta A. “Changing the Concept and Measure of Service Quality in Academic Libraries.” Journal of Academic Librarianship 22, no. 3 (1996): 181–90.

53. Blixrud, Julia. “The Continuing Quest for New Measures.” ARL Newsletter: A Bimonthly Report on Research Library Issues and Actions from ARL, CNI, and SPARC, no. 207 (December 1999): 11.

54. Blixrud, Julia. “Mainstreaming New Measures.” ARL Newsletter: A Bimonthly Report on Research Library Issues and Actions from ARL, CNI, and SPARC, nos. 230–31 (October–December 2003): 1–8.

55. Foster, Nancy Fried, and Gibbons, Susan. Studying Students: The Undergraduate Research Project at the University of Rochester. Chicago: Association of College and Research Libraries, American Library Association, 2007.

56. “Special Double Issue on New Measures.” ARL Newsletter: A Bimonthly Report on Research Library Issues and Actions from ARL, CNI, and SPARC, nos. 230–31 (October–December 2003).

57. Kyrillidou, Martha, and Cook, Colleen. “The Evolution of Measurement and Evaluation of Libraries: A Perspective from the Association of Research Libraries.” Library Trends 56, no. 4 (Spring 2008): 888–909.

58. Thompson, Bruce. The Origins/Birth of LibQUAL+. Washington, DC: Association of Research Libraries. http://www.libqual.org/about/about_lq/birth_lq.

59. Cook, Colleen; Heath, Fred; Thompson, Bruce; and Thompson, Russel. “The Search for New Measures: The ARL LibQUAL+ Project—a Preliminary Report.” portal: Libraries and the Academy 1, no. 1 (2001): 103–12.

60. Thompson, Bruce; Cook, Colleen; and Heath, Fred. “How Many Dimensions Does It Take to Measure Users’ Perceptions of Libraries? A LibQUAL+ Study.” portal: Libraries and the Academy 1, no. 2 (2001): 129–38.

61. Kyrillidou, Martha, and Heath, Fred, eds. “Measuring Service Quality.” Library Trends 49, no. 4 (2001): 541–780.

62. Kyrillidou, Martha, and Baughman, M. Sue. “ClimateQUAL: Organizational Climate and Diversity Assessment.” College and Research Libraries News 70, no. 3 (March 2009). http://www.acrl.org/ala/mgrps/divs/acrl/publications/crlnews/2009/mar/climatequal.cfm.

63. Miller, Rush; Schmidt, Sherrie; and Kyrillidou, Martha. “New Initiatives in Performance Measures.” In Global Issues in 21st Century Research Librarianship. Helsinki: NORDINFO Publication no. 48, 2002.

64. Association of Research Libraries. ARL Supplementary Statistics, 2007–2008. Washington, DC: Association of Research Libraries, 2009.

65. Franklin, Brinley, and Plum, Terry. “Successful Web Survey Methodologies for Measuring the Impact of Networked Electronic Services (MINES for Libraries).” IFLA Journal 32, no. 1 (2006): 28–40.

66. Franklin, Brinley, and Plum, Terry. “Assessing the Value and Impact of Digital Content.” Journal of Library Administration 48, no. 1 (2008): 41–47.

67. Kyrillidou, Martha; Olshen, Toni; Franklin, Brinley; and Plum, Terry. MINES for Libraries®: Measuring the Impact of Networked Electronic Services and the Ontario Council of University Libraries’ Scholar Portal, Final Report, January 26, 2006. Washington, DC: Association of Research Libraries, 2006. http://www.libqual.org/documents/admin/FINAL%20REPORT_Jan26mk.pdf.

68. Alvanoudi, N.; Kolovos, F.; and Kyrillidou, M. “MINES for Libraries: Measuring the Impact of Networked Electronic Resources” [in Greek]. Paper presented at the 17th Annual Conference of Greek Academic Libraries, Ioannina. http://www.libqual.org/documents/admin/MINESMacedoniaAlvanoudiKolovosKyrillidou.doc.

69. Franklin, Brinley; Kyrillidou, Martha; and Plum, Terry. “From Usage to User: Library Metrics and Expectations for the Evaluation of Digital Libraries.” In Evaluation of Digital Libraries: An Insight into Useful Applications and Methods. Edited by Giannis Tsakonas and Christos Papatheodorou. Oxford: Chandos, 2009.

70. Kyrillidou, Martha; Cook, Colleen; and Lincoln, Yvonna. “Digital Library Service Quality: What Does It Look Like?” In Evaluation of Digital Libraries: An Insight into Useful Applications and Methods. Edited by Giannis Tsakonas and Christos Papatheodorou. Oxford: Chandos, 2009.

71. Kaufman, Paula. “The Library as Strategic Investment: Results of the Illinois Return on Investment Study.” Liber Quarterly 18, nos. 3/4 (December 2008): 424–36.

72. Tenopir, Carol, and King, Donald W. “Perceptions of Value and Value beyond Perceptions: Measuring the Quality and Value of Journal Article Readings.” Serials 20, no. 3 (November 2007): 199–207.

73. King, Donald W.; Tenopir, Carol; and Clarke, Michael. “Measuring Total Reading of Journal Articles.” D-Lib Magazine 12, no. 10 (October 2006). http://www.dlib.org/dlib/october06/king/10king.html.

74. Luther, Judy. University Investment in the Library: What’s the Return? A Case Study at the University of Illinois at Urbana-Champaign. Library Connect White Paper no. 1. San Diego: Elsevier, 2008. http://libraryconnect.elsevier.com/whitepapers/0108/lcwp010801.html.

75. Tenopir, Carol; Love, Amy; Park, Joseph; Wu, Lei; Kingma, Bruce; King, Donald; Baer, Andrea; and Mays, Regina. University Investment in the Library, Phase II: An International Study of the Library Value to the Grant Process. Library Connect White Paper no. 2. San Diego: Elsevier, 2010.

76. Mays, Regina; Tenopir, Carol; and Kaufman, Paula. “Lib-Value: Measuring Value and Return on Investment of Academic Libraries.” Research Library Issues: A Bimonthly Report from ARL, CNI, and SPARC, no. 271 (August 2010): 36–40.

77. Poll, Roswitha, and Boekhorst, Peter, eds. Measuring Quality: Performance Measurement in Libraries. 2nd rev. ed. Munich: K. G. Saur, 2007.

78. Performance Measurement in Libraries and Information Services: Proceedings of the 6th Northumbria International Conference. IFLA satellite conference on the impact and outcomes of library and information services: “Performance Measurement for a Changing Information Environment.” Collingwood College, Durham, England, August 22–25, 2005. Bradford: Emerald Publishing Group, 2007.

79. “Libraries Plus: Adding Value in the Cultural Community.” 8th Northumbria International Conference on Performance Measurement in Libraries and Information Services (PM8). An IFLA satellite preconference sponsored by the Statistics and Evaluation Section, chair and cochairs Margaret Graham, Mike Heaney, and Stephen Thornton. Florence, August 17–20, 2009.

80. “Library Assessment Conference Draws Hundreds of Attendees.” ARL Newsletter: A Bimonthly Report on Research Library Issues and Actions from ARL, CNI, and SPARC, no. 259 (August 2008). http://www.arl.org/resources/pubs/br/br259.shtml.

81. Town, Stephen. “Academic Library Performance, Quality and Evaluation in the UK and Europe.” In Conference Papers, Library Assessment Conference, Thessaloniki, 13–15 June 2005. Edited by Mersini Moreleli-Cacouris, 29–39. Washington, DC: Association of Research Libraries, 2006.

82. Cook, Colleen. “A Mixed-Methods Approach to the Identification and Measurement of Academic Library Service Quality Constructs: LibQUAL+.” Doctoral diss., Texas A&M University, 2001. Dissertation Abstracts International 62, 2295A (University Microfilms no. AAT3020024).

83. Kyrillidou, Martha. “Item Sampling in Service Quality Assessment Surveys to Improve Response Rates and Reduce Respondent Burden: The ‘LibQUAL+ Lite’ Randomized Control Trial (RCT).” Doctoral diss., University of Illinois at Urbana-Champaign, 2009.

84. De Rosa, Cathy; Cantrell, Joanne; Hawk, Janet; and Wilson, Alane. College Students’ Perceptions of Libraries and Information Resources. Dublin, OH: OCLC Online Computer Library Center, 2006.

85. Tenopir, Carol. Use and Users of Electronic Library Resources: An Overview and Analysis of Recent Research Studies. Washington, DC: Council on Library and Information Resources, 2003.

86. Schonfeld, Roger C., and Housewright, Ross. “Faculty Survey 2009: Key Strategic Insights for Libraries, Publishers, and Societies.” Ithaka S+R (April 7, 2010), 37 pp.

87. Hiller, Steve; Kyrillidou, Martha; and Self, Jim. “Keys to Effective, Sustainable, and Practical Library Assessment.” In Proceedings of the 2006 Library Assessment Conference: Building Effective, Sustainable, Practical Assessment, September 25–27, 2006, Charlottesville, VA, pp. 171–77. http://www.arl.org/stats/initiatives/esp/index.shtml.

88. Dana, John Cotton. The New Museum: Selected Writings by John Cotton Dana. Edited by William Penniston. Washington, DC: American Association of Museums, 1999. Quoted in “MINES for Libraries: Measuring the Impact of Networked Information Services,” March–June 2008 report, p. 11. Association of Research Libraries, Washington, DC, 2008.