
METRICS · Whither Metrics? Tools for Assessing Publication Impact of Academic Library Practitioners

by Christopher Stewart

Available online 21 August 2010

Christopher Stewart is Dean of Libraries, Illinois Institute of Technology, 35 West 33rd Street, Chicago, IL 60616-3793, USA <[email protected]>.

At a time when the role of academic librarians is shifting to more active engagement in teaching, content management and development, and even data curation, it has become more important for librarians to engage in scholarship in a variety of ways. Original, published research in scholarly journals is a key measure of our productivity in this regard,1 and, as in any discipline, monitoring publication patterns is a necessary endeavor.2 As academic librarians, we use metrics to measure many things, yet the emphasis has more frequently been on using metrics to assess library service activities. To be sure, the importance of using tools such as LibQUAL as core activities in a culture of assessment, especially service assessment, cannot be overstated. Nitecki, for example, stresses the value of metrics for assessing our "impact on librarianship,"3 which can be interpreted to encompass our impact on our users through any number of means and measures. Clearly, we impact each other across the profession by sharing our successes and cultivating quality through shared stories, case studies, and our professional networks. But how are we measuring our scholarly output? How are we assessing it at the individual and, perhaps more importantly, the institutional level, and to what effect? The purpose of this column is to discuss the use of metrics to assess our scholarly output as library practitioners. Going forward, we will need to further define and arrive at greater consistency in regard to the tools we use. There is more than one choice and, quite expectedly, more than one strategy.

Before looking at specific metrical tools, a brief overview of the basic methods for metrical analysis of scholarly output may be helpful. In general, there are three basic tools, one qualitative and two quantitative, for assessing scholarly output and, to some degree, quality: perception studies, citation counts, and publication counts.4

One of the most well known and widely cited perception studies was Kohl and Davis's 1985 survey of ARL library directors as well as LIS deans, in which both groups rated the prestige of LIS journals for promotion and tenure.5 The Kohl and Davis study was recently replicated and, while there was "considerable continuity" in the rankings over the 20-year period, there were notable differences between the ranked orders of LIS journals between the two populations.6 For example, in the more recent of the two studies, only two journals shared spots in the ten highest ranked journals on the ARL directors' and LIS deans' lists. As practitioners, our leadership values a different core group of LIS publications than their colleagues in library schools. This has obvious implications for where practitioners seek to publish their work. It also may require a different approach to measuring scholarly output, at both the individual and institutional levels, for academic library practitioners.

Citation counts are another method for assessing the scholarly output of library practitioners. Citation counts tally the number of times an article is cited in the literature. In a recent study examining the effectiveness of a variety of citation analysis tools, including Google Scholar and Thomson Reuters' ISI Web of Knowledge, Google Scholar was found to be a "convenient substitute" for Web of Knowledge when used in context.7 As illustrated in other studies and data that will be presented in this column, a significant portion of published research by practitioners is generated by librarians at large research universities, many of which are members of the Association of Research Libraries.8 To that point, publication counts are also one of the most common methods for measuring scholarly productivity and typically involve summing the number of an individual's (or group of individuals') publications from resumes, vitae, or databases.9

While the methods and discussion that will be used in this analysis are informed by elements of the three methods described above, the primary focus here will be library practitioners as a group rather than individual practitioner authors. For this purpose, journal impact assessment is the tool that will be employed. While perception studies rely on the opinion of journal prestige held by library leaders and LIS scholars, and citation counts measure the volume of citations for individual authors, journal impact assessment measures the importance of a journal based on the volume of citations to articles in that journal. Most academic collection development librarians are familiar with journal assessment tools, and use them to inform subscription decisions for journals and journal bundles across the disciplines. The most widely used tool, the impact factor, derived from Thomson Reuters' ISI Web of Knowledge Journal Citation Reports (JCR), is also the most controversial.

Before continuing with impact factors, it is worth discussing the value of current models for assessing the scholarly impact of practitioner scholarship. As in any model, even one based on so many existing tools and prior research, such as the studies referenced in these pages, a fundamental question to be asked is always whether the thing being measured is indeed the correct thing to be measured. While there is surely value in keeping regular track of our research output as practitioners, are there alternative ways to assess the impact of our work and its effects on our colleagues and the institutions we serve? For example, even if the ranked list of leading impact factor library science journals were expanded to 20 or 25 (there are ten included in this analysis), would this provide enough coverage of our research output?


What other sources, including tools for tracking citations in the growing pool of open access, peer-reviewed library science journals, are available? Finally, while citation and journal impact analysis provide some measure of our scholarly activity, are there supplementary tools for further assessing the impact of our work? For example, are the numbers of downloads of articles from these library science journals also valid ways of surmising their influence on professional practice? After all, not all practitioners have the time or incentive to write research articles, but we all find ways of learning from each other, especially by reading the professional literature. New models for measuring the impact of our scholarly work are surely being debated and likely being devised. Tools such as citation analysis and impact factors will be important elements in broader, more inclusive models for measuring the impact and reach of our scholarly work in the coming years.

According to ISI Web of Knowledge, Journal Citation Reports offer a "systematic, objective means to evaluate the world's leading journals" and "help determine a publication's impact and influence in the global research community."10 A journal's impact factor is the "average number of times articles from the journal published in the past two years have been cited in the JCR year," and is calculated by dividing the number of citations in the current JCR year by the total number of articles published in the one to two previous years.11 An impact factor of 1.3 means that, on average, articles in the journal have been cited 1.3 times over the past two years. Impact factors are normalized for journal size and age. The metric that will be used in our current discussion is the JCR 5-year impact factor. The 5-year impact factor is the "average number of times articles from the journal published in the past five years have been cited in the JCR year."12 The 5-year impact factor can be considered a more meaningful measure of a journal's impact because it allows more time for published work to be disseminated through the discipline and, if useful, cited by other researchers.
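
To make the arithmetic concrete, the following is a minimal sketch of the JCR-style calculation. The citation and article counts are invented for illustration; real values come from JCR.

```python
# Minimal sketch of the JCR-style impact factor arithmetic.
# The counts below are hypothetical, not JCR data.

def impact_factor(citations_to_window: int, items_published_in_window: int) -> float:
    """Citations received in the JCR year to items published in the window,
    divided by the number of citable items published in that window."""
    return citations_to_window / items_published_in_window

# Two-year impact factor for JCR year 2009:
# citations in 2009 to articles published in 2007-2008, divided by the
# number of citable articles published in 2007-2008.
two_year = impact_factor(citations_to_window=91, items_published_in_window=70)

# Five-year impact factor: the same idea over the 2004-2008 window.
five_year = impact_factor(citations_to_window=260, items_published_in_window=200)

print(f"2-year impact factor: {two_year:.3f}")   # 1.300, i.e., "cited 1.3 times on average"
print(f"5-year impact factor: {five_year:.3f}")  # 1.300
```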

There are, of course, limitations to the impact factor, which many in the profession are quick to point out. The most common question raised is whether the number of times an article is cited is a valid measure of quality. A tallying of citations is just that: the quality of the citations is not factored into the impact factor equation.13 In addition, because there is no method for assessing the quality of the citing author or journal, there also remains the possibility that references to articles may be of a negative, justifiably critical nature when the quality of the research is called into question. There are other journal assessment tools that at least partially address some of these limitations, namely the Eigenfactor™ Metrics. Eigenfactor scores are built on an algorithm that positions journals as hubs in a network where journal impact is based not only on the number of citations received, but also on the quality and level of connectivity in the network ("well connected journals") of the citing journals.14 JCR includes Eigenfactor Metrics such as the Article Influence™ score, which is derived from the same "underlying journal citation data" as the impact factor.15
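
The network logic behind such scores can be illustrated with a much-simplified eigenvector-centrality calculation on a toy cross-citation matrix. This is not the published Eigenfactor algorithm (which also handles self-citations and article counts); it is only a sketch of the idea that citations from well-connected journals count for more, using invented numbers for three hypothetical journals.

```python
import numpy as np

# Toy cross-citation matrix: entry C[i, j] is the number of citations
# from journal j to journal i (invented numbers for journals A, B, C).
C = np.array([
    [0, 12,  3],   # citations received by A
    [8,  0,  6],   # citations received by B
    [2,  4,  0],   # citations received by C
], dtype=float)

# Column-normalize so each citing journal distributes one unit of influence.
P = C / C.sum(axis=0)

# Power iteration: a journal's score is the citation-weighted sum of the
# scores of the journals that cite it (the "well connected journals" idea).
score = np.full(3, 1.0 / 3)
for _ in range(100):
    score = P @ score
    score = score / score.sum()

for name, s in zip("ABC", score):
    print(f"Journal {name}: {s:.3f}")
```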

Despite the limitations, the impact factor remains a "widely used and well understood"16 tool for assessing journal quality. In library science, the JCR impact factor is widely used for journal rankings.17 In his recent study of library science publication outputs of institutions, Seaman observed that, "imperfect as they may be, impact factors are frequently used as a rough estimate of journal quality."18 For library practitioners, particularly academic library practitioners, publication in high impact journals is an important measure of scholarly output that can often have specific implications for promotion and tenure and general implications for institutional prestige.

For the current analysis and discussion, ten journals were selected from the JCR subject category "Information Science and Library Science" within the Social Science Citation Index. Sixty-five journals are included in this very broad subject category that "covers resources on a wide variety of topics, including bibliographic studies, cataloging, categorization, database construction and maintenance, electronic libraries, information ethics, information processing and management, interlending, preservation, scientometrics, serials librarianship, and special libraries."19 As in the Seaman study, only publications that primarily cover library science were included in the analysis.20

Numerous journals in the "Information Science and Library Science" subject category include publications in which librarian practitioners publish very few articles, if any at all. Included in this group, for example, are journals such as the Journal of Computer-Mediated Communication. Journals that primarily cover the "library science" half of the subject category were retained. Also, as with the Seaman study, the focus of this analysis was on North American scholarly publications; international journals were likewise excluded from the analysis. Unlike the Seaman study, the current analysis included more journals, a shorter time frame, and, as will be discussed later, an intentional exclusion of LIS faculty. Because the goal of this analysis and discussion is to provide a broad view of practitioner and institutional scholarly output in the most general sense, journals covering highly specialized areas of librarianship, such as the Journal of the Medical Library Association, were excluded from the analysis. Journals were selected for the current JCR year (2009) and ranked by 5-year impact factor. The recently released 5-year impact factor for 2009 is used. All journals on the list are peer-reviewed, as verified through information provided by the publishers, and all 2009 volumes and issues of each of the ten highest ranked journals are accounted for in the analysis.
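
A sketch of how such a selection might be reproduced from a JCR category export appears below. The file names and column labels are assumptions for illustration, not actual JCR field names, and the exclusion list would be curated by hand using the criteria just described.

```python
import pandas as pd

# Hypothetical export of the JCR "Information Science and Library Science"
# category; column names are assumptions, not actual JCR field labels.
jcr = pd.read_csv("jcr_2009_info_library_science.csv")

# Hand-curated exclusions mirroring the criteria described above:
# non-library-science titles, non-North-American journals, and highly
# specialized titles (e.g., Journal of the Medical Library Association).
excluded = set(pd.read_csv("excluded_titles.csv")["journal"])

candidates = jcr[~jcr["journal"].isin(excluded)]

# Rank the remaining library science journals by the 2009 five-year impact
# factor and keep the ten highest-ranked peer-reviewed titles.
top_ten = (candidates.sort_values("five_year_impact_factor", ascending=False)
                     .head(10)
                     .reset_index(drop=True))

print(top_ten[["journal", "issn", "five_year_impact_factor", "publisher"]])
```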

When considering any metric, the veracity of the sources of data that form the foundation of that metric is very important. To that end, the majority of the most well recognized and respected publications in library science are included in JCR.21 This coverage is supported by the perception studies mentioned earlier: of the ten journals listed in Table 1, eight appear in the top ten rated by ARL directors in the Nisonger and Davis perception study, while the remaining two titles are ranked at numbers 12 and 13 on the ARL directors' list.22 Table 1 also shows the 5-year impact factors of these journals. It is interesting to note that two publishers, Elsevier and the American Library Association (through its divisions), are responsible for seven of the ten highest impact factor journals in library science. The majority of articles published in these seven publications are authored by academic library practitioners, indicating that, at least for working librarians, there is a high concentration of only a few publishers among leading journal venues for our research.

As publication counts are closely related to journal impact metrics, it is appropriate here to provide an overview of the numbers and types of articles published in the leading library science journals in 2009 as part of the current analysis. The list is large (781 articles), all of which were identified individually by reviewing each 2009 issue of each of the ten journals. This process will need to be repeated for each year this analysis is extended. As Table 2 illustrates, the majority of material published in these journals in 2009 consisted of research articles (60%) and book reviews (31%). While each author of a multiple-authored article was counted, articles themselves were only counted once.

As this is a discussion of metrics for assessing the scholarly output of practitioners and their associated institutions, the next step was to separate practitioner authors from LIS faculty. LIS faculty members were identified by title and, when affiliation was not immediately clear, their role was verified through further research (e.g., departmental websites). Graduate student assistants in LIS programs were considered LIS faculty in this analysis. In the few instances where faculty from other disciplines co-authored with LIS faculty, the former were considered LIS faculty for the sake of this analysis. Practitioners were identified by title and other biographical information provided to the journal. This was, for the most part, a fairly straightforward process.

Table 1. Top Ten 2009 Library Science Journals by 5-Year Impact Factor

Rank | Journal | ISSN | 2009 Volume(s) | 2009 5-Year Impact Factor | Publisher
1 | Library & Information Science Research | 0740-8188 | 31 | 1.418 | Elsevier
2 | College & Research Libraries | 0010-0870 | 70 | 1.087 | Association of College and Research Libraries (ALA)
3 | portal: Libraries and the Academy | 1531-2542 | 9 | 1.086 | Johns Hopkins University Press
4 | Journal of Academic Librarianship | 0099-1333 | 35 | 0.925 | Elsevier
5 | The Library Quarterly | 0024-2519 | 79 | 0.854 | University of Chicago Press
6 | Information Technology and Libraries | 0730-9295 | 28 | 0.696 | Library and Information Technology Association (ALA)
7 | Library Resources and Technical Services | 0024-2527 | 53 | 0.613 | Association for Library Collections & Technical Services (ALA)
8 | Library Trends | 0024-2594 | 57, 58 | 0.566 | Johns Hopkins University Press
9 | Reference & User Services Quarterly | 1094-9054 | 49 | 0.507 | Reference and User Services Association (ALA)
10 | Library Collections, Acquisitions, and Technical Services | 1464-9055 | 53 | 0.496 | Elsevier

Table 2. Types of Articles in 2009 Leading Library Science Journals: All Authors

Type of article | Frequency | Percent | Cumulative Percent
Book review | 244 | 31.2 | 31.2
Column | 18 | 2.3 | 33.5
Discovery | 2 | 0.3 | 33.8
Editorial | 44 | 5.6 | 39.4
President's message | 4 | 0.5 | 39.9
Research agenda | 2 | 0.3 | 40.2
Research article | 467 | 59.8 | 100.0
Total | 781 | 100.0 | 100.0

It should be noted here that a very small number of practitioners were not academic librarians but were, rather, employed at foundations, a few library-related industries, and libraries not affiliated with colleges and universities (e.g., the Library of Congress). These "non-academic" practitioners were still included as practitioners, as they would not affect the rankings of academic institutions in the final analysis. Finally, a good amount of published material that, while important to the discipline and profession on a variety of levels, is not typically subject to peer review was removed from the analysis. This material includes book reviews, columns, editorials, and other types of articles listed in Table 2. These exclusions are similar to parameters established in other studies of publication counts.23-25 Only research articles were retained for the analysis. As shown in Table 2, there were 467 research articles published in the top-rated impact factor journals in library science in 2009. Going forward in the analysis, we now have a clearly defined unit of measurement for publication counts.
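
A sketch of how this unit of measurement might be derived from the hand-coded article records follows. The file, column names, and coded values are assumptions for illustration; the actual coding rules (including the handling of multiple authors) are those described in the text above.

```python
import pandas as pd

# Hypothetical hand-coded records for the 781 items identified in the ten
# journals; column names and coded values are assumptions for illustration.
records = pd.read_csv("articles_2009_coded.csv")

# Drop material not typically subject to peer review (book reviews, columns,
# editorials, etc.), retaining only research articles as the unit of measurement.
research = records[records["article_type"] == "Research article"]

# Tally authorship by group, mirroring the logic of Tables 3 and 4.
all_items_by_group = records["author_group"].value_counts()   # practitioner vs. LIS faculty, all material
research_by_group = research["author_group"].value_counts()   # research articles only

print(all_items_by_group)
print(research_by_group)
```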

Now, to return to library practitioners. As shown in Table 3, the majority (71%) of material published in the top ten impact factor journals was authored by library practitioners. This proportion narrows somewhat for research articles, with practitioner authorship of 289 research articles versus LIS faculty authorship of 178 articles (see Table 4). Here, the balance of articles is divided roughly 60/40 between practitioners and LIS faculty, indicating that, on balance, practitioners publish more material that is not subject to peer review. However, it should also be noted here that, of the ten journals, only two are known for publishing mainly faculty research. The majority of research published in the remaining eight journals was authored by academic library practitioners. Further refinement of the model being described here would account for these two journals known for primarily faculty-authored research. Still, it is interesting to note that most of the highest impact factor journals in library science are primarily venues for practitioner research.

Turning now to institutional output for scholarly publication in library science, a final comparison between faculty and practitioner production is possible. As previously mentioned, each author, whether single or part of a multiple-authored article, is credited by authorship and by institution. In most cases of shared authorship, faculty authors remained within their own ranks. In cases where an article was written by both a practitioner(s) and an LIS faculty member(s), each author received an institutional affiliation. For research articles, there were a total of 206 institutions represented. Forty-three of these institutions, however, are not North American academic institutions. Most are foreign universities, non-affiliated research organizations, and libraries. (It is important to note here that, because most of the non-North American university authors are faculty members in those institutions' library schools or related departments, these author/institutional affiliations had little effect on the practitioner author/institutional affiliations reported in this essay.) Therefore, when factoring in the Canadian institutions represented in this sample year, only a small fraction (3%) of the 4,391 U.S. institutions of higher education26 (most of which we can assume offer library services) had author affiliations in the top ten most impactful library science journals in North America in 2009.


Table 3. Total Articles in 2009 Leading Library Science Journals: Practitioners and LIS Faculty

Author group | Frequency | Percent | Cumulative Percent
Practitioner | 558 | 71.4 | 71.4
LIS faculty | 223 | 28.6 | 100.0
Total | 781 | 100.0 | 100.0

Table 4. Research Articles in 2009 Leading Library Science Journals: Practitioners and LIS Faculty

Author group | Frequency | Percent | Cumulative Percent
Practitioner | 289 | 61.9 | 61.9
LIS faculty | 178 | 38.1 | 100.0
Total | 467 | 100.0 | 100.0

Table 5. Top Author/Institutional Affiliations for Research Articles: All Authors

Rank | Institution (N=14) | Authors (N=147) | Percent | Cumulative Percent
1 | Florida State University | 22 | 4.7 | 4.7
2 | University of North Carolina Chapel Hill | 17 | 3.6 | 8.4
3 | University of Illinois Urbana-Champaign | 16 | 3.4 | 11.8
4 | Ohio State University | 14 | 3.0 | 14.8
5 | University of Nevada Las Vegas | 11 | 2.4 | 17.1
6 | University of Alberta | 10 | 2.1 | 19.3
6 | University of Michigan | 10 | 2.1 | 21.4
7 | Brigham Young University | 8 | 1.7 | 23.1
7 | Texas A&M University | 8 | 1.7 | 24.8
8 | Stanford University | 7 | 1.5 | 26.3
8 | University of Western Ontario | 7 | 1.5 | 27.8
9 | University of New Mexico | 6 | 1.3 | 29.1
10 | Kent State University | 5 | 1.1 | 31.5
10 | University of Windsor | 5 | 1.1 | 32.6

Table 6. Top Author/Institutional Affiliations for Research Articles: Practitioners Only

Rank | Institution (N=155) | Authors (N=289) | Percent | Cumulative Percent
1 | Ohio State University | 14 | 4.8 | 4.8
2 | University of Nevada Las Vegas | 11 | 3.8 | 8.7
3 | Brigham Young University | 8 | 2.8 | 11.4
3 | Texas A&M University | 8 | 2.8 | 14.2
4 | University of New Mexico | 6 | 2.1 | 16.3
5 | University of Illinois Urbana-Champaign | 5 | 1.7 | 18.0
6 | Multiple institutions (N=9) | 4 | 12.6 | 30.4
7 | Multiple institutions (N=18) | 3 | 18.0 | 49.1
8 | Multiple institutions (N=25) | 2 | 17.5 | 66.4
9 | Multiple institutions (N=97) | 1 | 33.9 | 100.0


A final step in refining a model for measuring the scholarly output of academic libraries is quantifying and separating author/institutional affiliations for LIS faculty and practitioners. The results are interesting. Table 5 shows the top North American institutional author affiliations (N=141) for all research articles. When accounting for all authors, it is possible to derive a ranked list of the top ten institutions. There are four two-way ties. These 14 institutions comprise nearly a third of all research article output in the top ten library science journals in 2009.

The picture is quite different, however, when library practitioner author/institution affiliations are separated from LIS faculty/institution affiliations. The results are far more dispersed, at least for the sample year chosen for this model. Table 6 shows results for practitioner/institution affiliations. After the top five institutions are listed (there is one two-way tie), the remaining 149 (approximately 80% of the total number of institutions represented) become so dispersed as to prevent a ranked list from being provided without resulting in a tied score consisting of numerous institutions. For example, 97 institutions had one practitioner author affiliation. Within the top five institutions, however, there are three that also appear on the all-author/institution affiliation list. Four of these six institutions, led by Ohio State University and the University of Nevada Las Vegas, do not have graduate schools of library and information science, indicating a strong culture of practitioner research at those institutions. The fifth, the University of Illinois at Urbana-Champaign, shows roughly a third of its 2009 library science articles authored by practitioners. The two institutions that led in overall research article output with 22 and 17 research articles respectively, Florida State University and the University of North Carolina Chapel Hill, produced no research articles authored by practitioners in the top ten journals according to impact factor. Finally, practitioners at 3% of the institutions produced nearly 20% of the research articles appearing in the ten most impactful journals.
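
A sketch of how the practitioner author/institution counts behind Tables 5 and 6 might be tallied and ranked follows. As before, the file and field names are assumptions, and the tie handling shown is simply dense ranking so institutions with the same author count share a rank.

```python
import pandas as pd

# Hypothetical author-level records for the 467 research articles; each row
# is one author with an institutional affiliation and an author group.
authors = pd.read_csv("research_article_authors_2009.csv")

practitioners = authors[authors["author_group"] == "practitioner"]

# Count practitioner author affiliations per institution (the Table 6 logic).
counts = practitioners.groupby("institution").size().sort_values(ascending=False)

# Dense ranking so tied institutions share a rank (the two-way ties noted above).
ranked = counts.to_frame("authors")
ranked["rank"] = ranked["authors"].rank(method="dense", ascending=False).astype(int)
ranked["percent"] = 100 * ranked["authors"] / len(practitioners)

print(ranked.head(10))
```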

The attempt here has been to show how one strategy, based on established metrics, can be used to measure the scholarly output of library practitioners in leading library science journals and to establish author/institutional affiliations to determine which institutions are producing the most output. Other studies, such as Seaman, Wiberley, and Kohl-Davis/Nisonger-Davis,27 have used a variety of tools including publication counts, perception studies, and impact factors. This model proposes to expand the use of the impact factor to include a larger number of journals while focusing on academic library practitioners. It is a model that can be infused with new data annually as complete JCR years become available. With adjustments, other journal impact assessment tools such as the Eigenfactor Metrics may also be used. Once a core list of journals is established based on impact factors and applicability across the widest range of the practitioner population, as it has been in this example, the number of years to be covered can be expanded to gain a more comprehensive picture of practitioner publication rates and their relationship to a range of institutional variables. These variables can be included in a more refined statistical analysis. Institutional variables that could have particular significance in the analysis include the type of institution, for example, by research activity as defined by the recently revised Carnegie Classifications. Other institutional information could be included that could reveal meaningful relationships between practitioner research and institution type. Analysis could begin with correlative tests but possibly extend to other inferential inquiry, which could include library funding data, staffing levels, the availability of tenure for librarians (Seaman started down this path), research funding, institutional governance, and other variables. Of course, with publication data in hand, there are also a number of issues of organizational culture and management that can be explored to determine the types of academic library environments that encourage and produce practitioner research, and those that do not.
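
The kind of correlative test suggested here could look like the following sketch. The institution-level file and every variable name in it are hypothetical, and rank-based correlation is simply one reasonable first pass given how skewed publication counts are across institutions.

```python
import pandas as pd

# Hypothetical institution-level table: practitioner research article counts
# joined with institutional variables (library expenditures, staffing, tenure
# availability for librarians, Carnegie class, etc.). All fields are
# assumptions for illustration, not data from this analysis.
inst = pd.read_csv("institution_variables.csv")

# Rank-based (Spearman) correlations are a reasonable first pass because
# publication counts are highly skewed across institutions.
corr_funding = inst["practitioner_articles"].corr(inst["library_expenditures"], method="spearman")
corr_staffing = inst["practitioner_articles"].corr(inst["librarian_fte"], method="spearman")

print(f"articles vs. expenditures (Spearman rho): {corr_funding:.2f}")
print(f"articles vs. staffing (Spearman rho): {corr_staffing:.2f}")

# Group comparison: do institutions that offer tenure for librarians produce
# more practitioner research articles on average?
print(inst.groupby("tenure_for_librarians")["practitioner_articles"].describe())
```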

The ongoing evolution of academic librarianship is marked by (but not limited to) the expansion of the academic librarian's pedagogical role, more collaborative relationships with faculty, repository development and content management, and the library's unique position in the interdisciplinary environments that have become so important to many research universities. New levels of practitioner-driven scholarship and research will advance our evolving work while chronicling our achievements. Tracking this output is crucial and will enable us to quantify our impact on each other's work and research, on our institutions, and even on those in other disciplines. When combined with other data, journal impact factors can provide a solid baseline for assessing the scope of our scholarly output on the whole.

If you would like to join this discussion on metrics, I welcome yourideas at [email protected].

NOTES AND REFERENCES

1. Seaman, Scott, "North American Institutions Most Frequently Represented in High Impact Journals," LIBRES 18, no. 2, http://libres.curtin.edu.au/libres18n2/. (Accessed May 21, 2010).

2. Wiberley, Stephen E., Hurd, Julie M., and Weller, Ann C., "Publication Patterns of U.S. Academic Librarians from 1993 to 1997," College & Research Libraries 60, no. 4: 352-362.

3. Nitecki, Danuta, "Joy to the World of Metrics," Journal of Academic Librarianship 36, no. 2: 117-118, p. 117.

4. Seaman, Scott, "North American Institutions Most Frequently Represented in High Impact Journals," LIBRES 18, no. 2, http://libres.curtin.edu.au/libres18n2/. (Accessed May 21, 2010).

5. Kohl, David, and Davis, Charles H., "Ratings of Journals by ARL Library Directors and Deans of Library and Information Science Schools," College & Research Libraries 46, no. 1: 40-47.

6. Nisonger, Thomas E., and Davis, Charles H., "The Perception of Library and Information Science Journals by LIS Education Deans and ARL Library Directors: A Replication of the Kohl-Davis Study," College & Research Libraries 66, no. 4: 341-377, p. 341.

7. Martell, Charles, "A Citation Analysis of College & Research Libraries Comparing Yahoo, Google, Google Scholar, and ISI Web of Knowledge with Implications for Tenure and Promotion," College & Research Libraries 70, no. 5: 460-462, p. 170.

8. Wiberley, Stephen E., Hurd, Julie M., and Weller, Ann C., "Publication Patterns of U.S. Academic Librarians from 1993 to 1997," College & Research Libraries 60, no. 4: 352-362.

9. Seaman, Scott, "North American Institutions Most Frequently Represented in High Impact Journals," LIBRES 18, no. 2, http://libres.curtin.edu.au/libres18n2/. (Accessed May 21, 2010).

10. "ISI Web of Knowledge [v.4.9] - Web of Science Additional Resources." http://apps.isiknowledge.com/additional_resources.do?highlighted_tab=additional_resources&product=WOS&SID=4F1JHOHdnLBeFi4cliP&cacheurl=no. (Accessed June 12, 2010).

11. "Journal Citation Reports Help." http://admin-apps.isiknowledge.com/JCR/help/h_impfact.htm#impact_factor. (Accessed May 30, 2010).

12. Ibid.

13. West, Jevin D., Bergstrom, Theodore C., and Bergstrom, Carl T., "The Eigenfactor Metrics: A Network Approach to Assessing Scholarly Journals," College & Research Libraries 71, no. 3: 236-244, p. 236.

14. Ibid.

15. Ibid.

16. Ibid.

17. Nisonger, Thomas E., and Davis, Charles H., "The Perception of Library and Information Science Journals by LIS Education Deans and ARL Library Directors: A Replication of the Kohl-Davis Study," College & Research Libraries 66, no. 4: 341-377.

18. Seaman, Scott, "North American Institutions Most Frequently Represented in High Impact Journals," LIBRES 18, no. 2, http://libres.curtin.edu.au/libres18n2/. (Accessed May 21, 2010).

19. "Scope Notes 2010 Social Science Citation Index." http://admin-apps.isiknowledge.com/JCR/static_html/scope_notes/SOCIAL/2009/SCOPE_SOC.htm#NU (Accessed May 22, 2010).

20. Seaman, Scott, "North American Institutions Most Frequently Represented in High Impact Journals," LIBRES 18, no. 2, http://libres.curtin.edu.au/libres18n2/. (Accessed May 21, 2010).

21. Ibid.

22. Nisonger, Thomas E., and Davis, Charles H., "The Perception of Library and Information Science Journals by LIS Education Deans and ARL Library Directors: A Replication of the Kohl-Davis Study," College & Research Libraries 66, no. 4: 341-377, p. 341.

23. Wiberley, Stephen E., Hurd, Julie M., and Weller, Ann C., "Publication Patterns of U.S. Academic Librarians from 1993 to 1997," College & Research Libraries 60, no. 4: 352-362.

24. Wiberley, Stephen E., Hurd, Julie M., and Weller, Ann C., "Publication Patterns of U.S. Academic Librarians from 1998 to 2002," College & Research Libraries 67, no. 3: 205-216.

25. Seaman, "North American Institutions Most Frequently Represented in High Impact Journals."

26. "Carnegie Classifications | Basic Classification Summary Tables." http://classifications.carnegiefoundation.org/summary/basic.php. (Accessed June 16, 2010).

27. Seaman used impact factors but limited his journal list to the top five library science journals. One major journal, portal: Libraries and the Academy, had not yet been published for five years and was not included in his analysis.
