


METRICS · Whither Metrics, Part II. Tools for Assessing Publication Impact of Academic Library Practitioners

by Christopher Stewart

Available online 10 August 2011

Christopher Stewart, Ed.D., Assistant Professor, Dominican University, Graduate School of Library and Information Science, 7900 W. Division St., River Forest, IL 60305, USA <[email protected]>.

Last year on these pages, I outlined a method for assessing the publication output of academic library practitioners. Using 2009 as a base year, I tracked academic library practitioner publication rates across a range of leading journals in our field.1 Using 2010 journal publication data and metrics, this year's column offers a follow-up. Over the past year, academic librarians have continued to become more actively engaged in teaching, content management and development, data curation, marketing and outreach, and a range of other evolving roles. And we are writing about it. Original, published research in scholarly journals is a key measure of our productivity in this regard,2 and, as in any discipline, monitoring publication activities and patterns is a necessary and useful exercise. We impact each other across the profession by exchanging ideas and cultivating quality through shared stories, case studies, and original research. Indeed, while a key performance metric for assessing library value is establishing linkages between the library's work and faculty publication and research,3 so too is our own work as part of our institutions' scholarly output. Tracking this output is an important metric in measuring our performance as a profession as well as part of the larger academic community. How we measure our scholarly output at the individual and institutional levels reflects the importance we place on this activity. The purpose of this column is to add another year of data on academic library practitioner scholarly output by using the same tools of analysis to identify differences, similarities, and, possibly, emerging patterns over the two-year span. The tools used here are part of an evolving matrix, built on established as well as emerging strategies for quantifying scholarly output, that are being used in the profession today.

As in last year's article, before looking at specific metrical tools, a brief overview of the established methods for analysis of scholarly output is in order. There are three basic tools, one qualitative and two quantitative, for assessing scholarly output and, to some degree, quality: perception studies, and citation and publication counts.4

Recent perception studies include (but are not limited to) investigations by Nisonger and Davis5 and Wiberley, Hurd, and Weller.6 Newer versions of both studies are continuations of earlier studies. The most recent version of the Kohl–Davis study found that, while there was general continuity in the measured value placed on certain journals over a 20-year period, there was little crossover in the top ten journal rankings of LIS deans and academic library deans.7 These data serve to inform practitioners and LIS faculty on where they may choose to publish their work. These results also point to the value of perception studies as important elements for contextualizing quantitative publication data.

Another method for assessing the scholarly output of library practitioners is the citation count. Citation counts extend the basic exercise of publication counts by simply tallying the number of times an article is cited in the literature. The effectiveness of this method, of course, depends on the extent to which the literature is discoverable. In last year's column, I reported on a study that outlined the effectiveness of a variety of citation analysis tools, including Google Scholar and Thomson Reuters ISI Web of Knowledge. Google Scholar was found to be a "convenient substitute" for Web of Knowledge when used in context.8 In general, publication and citation counts are expected to be higher at institutions that require publication for tenure and promotion.

This analysis will replicate the process used last year by including publication counts by author/institution affiliation, citation impact as part of journal quality assessment, and perception studies in the discussion. The baseline for this investigation begins with journal impact assessment built on citation analysis, specifically the Journal Citation Reports (JCR) published annually as part of the Thomson Reuters ISI Web of Knowledge platform. While perception studies rely on the opinions of library leaders and LIS scholars about journal prestige, and citation counts measure the volume of citations for individual authors, journal impact assessment measures the importance of a journal based on the volume of citations to articles in that journal. Journal impact assessment, while often controversial, is one of the most commonly used tools for assessing journal quality and, by direct extension, the value placed on scholarship published in these journals. The JCR Impact Factor is commonly used for journal rankings, and the majority of the most well-regarded publications in Library Science are included in JCR.9 For more information on impact factors as well as other journal and article assessment tools such as Eigenfactor™ Metrics and Article Influence (AI) scores, the reader can refer to my earlier column.10 For the purposes of this follow-up, however, a brief overview of the type of impact factor used here should be sufficient for our discussion. Impact factors serve as assessment mechanisms of a publication's use and influence in specific disciplines as well as in research and scholarship on the whole. An impact factor measures the average number of times articles from a specific journal have been cited over a certain period of time. The 5-year impact factor is the "average number of times articles from the journal published in the past five years have been cited in the JCR year."11 The 5-year impact factor may be considered a more substantive measure of a journal's impact because it allows more time for published work to be disseminated through the discipline and, if useful, cited by other researchers.12
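To make the quoted JCR definition concrete, it can be written as a simple ratio (the notation here is introduced for illustration and is not JCR's own):

\[
\mathrm{IF}_{5}(J, y) \;=\; \frac{\sum_{i=y-5}^{y-1} c_{y}(J, i)}{\sum_{i=y-5}^{y-1} n(J, i)}
\]

where \(c_{y}(J, i)\) is the number of citations received during JCR year \(y\) by items that journal \(J\) published in year \(i\), and \(n(J, i)\) is the number of citable items \(J\) published in year \(i\).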

The top ten library science journals for 2010 by 5-year impact factor are listed in Table 1. This list comprises titles in JCR's Social Science Citation Index and, like last year's analysis, excludes journals in which library practitioners publish little if at all (although LIS faculty may publish widely in these journals), international journals (the focus of this analysis is on North American scholarly output), and, finally, journals covering specialized areas of academic librarianship. While the 2010 list contains the same ten journals as 2009, the ranked order has changed somewhat, most notably with Library Resources and Technical Services dropping from the seventh spot to the tenth. As the journals on the 2010 list have not changed, I can restate the link between the quantitative element of the impact factor and the qualitative element of the perception study with regard to these journals. The majority (eight of ten) are also highly rated by ARL directors,13 which indicates a solid connection between journals perceived as being of high quality by academic library leaders and where practitioners are publishing. Finally, while the journals on last year's list remained on this year's, impact factor scores for four of the top five journals (the same top five as in 2009) dropped, as did scores for many of the other journals on the list. These data likely indicate expanding citation of articles across different LIS publications, open access journals, and other sources, all of which diversify the range of scholarship in our field.

Table 1. Top Ten 2010 General Library Science Journals by Five-Year Impact Factor

Rank | Journal | ISSN | 2010 Volume(s) | 2010 Five-Year Impact Factor | Publisher
1 | Library & Information Science Research | 0740-8188 | 32 | 1.248 | Elsevier
2 | portal: Libraries and the Academy | 1531-2542 | 10 | 1.008 | Johns Hopkins University Press
3 | Journal of Academic Librarianship | 0099-1333 | 36 | 0.909 | Elsevier
4 | College & Research Libraries | 0010-0870 | 71 | 0.902 | Association of College and Research Libraries (ALA)
5 | The Library Quarterly | 0024-2519 | 80 | 0.738 | University of Chicago Press
6 | Information Technology and Libraries | 0730-9295 | 29 | 0.639 | Library and Information Technology Association (ALA)
7 | Library Trends | 0024-2594 | 59 | 0.588 | Johns Hopkins University Press
8 | Library Collections, Acquisitions, and Technical Services | 1464-9055 | 34 | 0.392 | Elsevier
9 | Reference & User Services Quarterly | 1094-9054 | 50 | 0.378 | Reference and User Services Association (ALA)
10 | Library Resources and Technical Services | 0024-2527 | 54 | 0.351 | Association for Library Collections & Technical Services (ALA)

Overall individual article counts rose in 2010, mainly due to an increase in non-peer-reviewed content. As with last year, each issue of the entire 2010 volume for each journal was reviewed and individual articles tallied. As shown in Table 2, there were 821 pieces published in the ten leading journals, compared to 781 pieces in the previous year. The number of research articles dropped slightly, comprising 55% of the total works published in 2010 versus 60% in 2009. The number of book reviews published remained nearly the same. While each author was counted with each institutional affiliation, individual articles were tallied as well, allowing for further analysis, if warranted, based on the number of articles per institution rather than authors.

Table 2. Types of Articles in 2010 Leading Library Science Journals: All Authors

Type of Article | Frequency | Percent | Cumulative Percent
Book Review | 239 | 29.1 | 29.1
Column | 18 | 2.2 | 31.3
Editorial | 45 | 5.5 | 36.8
Research Article | 448 | 54.6 | 91.4
Other | 71 | 8.7 | 100.0
Total | 821 | 100.0 | 100.0
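The percent and cumulative percent columns in Table 2 are straightforward arithmetic; the short sketch below reproduces them (a minimal illustration: the records list is reconstructed from the published frequencies, and rounding in the output may differ from the table by a tenth of a point):

```python
from collections import Counter

# Reconstructed piece-level records: one label per published item,
# using the frequencies reported in Table 2 (821 pieces total).
pieces = (["Book Review"] * 239 + ["Column"] * 18 + ["Editorial"] * 45
          + ["Research Article"] * 448 + ["Other"] * 71)

counts = Counter(pieces)
total = sum(counts.values())  # 821

cumulative = 0.0
for article_type in ["Book Review", "Column", "Editorial", "Research Article", "Other"]:
    freq = counts[article_type]
    percent = 100.0 * freq / total
    cumulative += percent
    print(f"{article_type:<17}{freq:>6}{percent:>8.1f}{cumulative:>8.1f}")
print(f"{'Total':<17}{total:>6}{100.0:>8.1f}")
```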

Practitioners published in these journals at a rate of approximately three to one compared to LIS faculty. To briefly review the method: LIS faculty were identified by title and institution (and verified through further investigation when necessary). Faculty from other disciplines (a small but noticeable number) were considered LIS faculty for the sake of this investigation, as were LIS students. The important task was to identify academic library practitioners, which was achieved by analyzing title and institutional affiliation (and, again, verified through further investigation when necessary). A small number of authors were non-library affiliated (e.g., foundations and associations) and were considered practitioners. When author and/or institutional information was incomplete or could otherwise not be identified, the author/institution was omitted from the count. As shown in Table 3, the number of library practitioner authors was 598 for all types of work (articles, book reviews, etc.), which is 7% higher than in 2009, when 558 works were authored by library practitioners.
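The coding rules just described can be summarized programmatically. The following is a sketch under simplified assumptions: the record fields and unit categories are illustrative inventions, not the study's actual coding instrument.

```python
from collections import Counter
from dataclasses import dataclass
from typing import Optional

@dataclass
class Author:
    title: Optional[str]        # e.g., "Reference Librarian" (illustrative)
    institution: Optional[str]  # e.g., "Oregon State University"
    unit: Optional[str]         # e.g., "Library", "LIS school", "Foundation"

def classify(author: Author) -> Optional[str]:
    """Code one author record as practitioner or LIS faculty."""
    # Records with incomplete author/institution information are
    # omitted from the count, per the methodology above.
    if not (author.title and author.institution and author.unit):
        return None
    # LIS faculty and students, and the small number of faculty from
    # other disciplines, are all grouped as LIS faculty.
    if author.unit in ("LIS school", "Other academic department"):
        return "LIS Faculty"
    # Library staff, plus non-library-affiliated authors such as
    # foundations and associations, are counted as practitioners.
    return "Practitioner"

sample = [
    Author("Reference Librarian", "Oregon State University", "Library"),
    Author("Associate Professor", "Indiana University", "LIS school"),
    Author(None, None, None),  # incomplete record: omitted
]
tallies = Counter(c for a in sample if (c := classify(a)) is not None)
print(tallies)  # Counter({'Practitioner': 1, 'LIS Faculty': 1})
```

Summing such tallies across every piece in the ten journals would yield frequencies of the kind reported in Tables 3 and 4.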

Refining the analysis to research articles only (Table 4), one finds that articles published by practitioners comprised nearly two-thirds of all research articles published in the top 5-year impact factor journals. This percentage (62%) remained virtually unchanged from 2009. Journals that are held in high regard by library practitioners are also places where one will find a measurable proportion of LIS faculty research. However, while academic library practitioners publish the bulk of the non-peer-reviewed material in these journals, they also publish most of the peer-reviewed material (all of these journals are peer-reviewed). The next step in the analysis involves looking at the institutions where publication activity was highest.

Table 3. Total Articles in 2010 Leading Library Science Journals: Practitioners and LIS Faculty

Author Type | Frequency | Percent | Cumulative Percent
Practitioner | 598 | 73.6 | 73.6
LIS Faculty | 215 | 26.4 | 100.0
Total* | 813 | 100.0 | 100.0

* Eight articles were deleted from the count due to incomplete author/institution information.

Table 4. Research Articles in 2010 Leading Library Science Journals: Practitioners and LIS Faculty

Author Type | Frequency | Percent | Cumulative Percent
Practitioner | 279 | 62.4 | 62.4
LIS Faculty | 168 | 37.6 | 100.0
Total | 447 | 100.0 | 100.0



Looking at the entire population of author/institution affiliations, as in 2009, only a small fraction of the 4,391 U.S. institutions of higher education14 is represented in the top ten most impactful North American Library Science journals. There is also a non-North American institution on the 2010 list: Queensland University of Technology in Australia. While there were two Canadian universities (both research universities) on the 2009 list, there are none in 2010.


Just as interesting, only two institutions from the 2009 list appear on the 2010 list: the University of Illinois at Urbana-Champaign and the University of North Carolina at Chapel Hill, both with highly ranked LIS programs. Several institutions on the leaders' list have graduate schools of Library and Information Science. There was also a great deal more author/institution dispersion in 2010, with a larger number of authors spread across more institutions at the top of the list, resulting in multiple ties for several places and a very high number of institutions with affiliations of one or two authors. Because of the greater number and dispersion of institutions with multiple author affiliations, institutions at the top of the list (three or more author affiliations) account for nearly half of the total research articles published in the top journals in 2010, up from about one third in 2009. However, the institutional picture is rather different after one refines the analysis to academic library practitioner output.

For 2010, after eliminating articles not authored by library practitioners (including several institutions where LIS faculty comprised most of the publication output), only one institution from the 2009 list of leading institutions is found on the 2010 list: the University of Illinois at Urbana-Champaign. Unlike the 2009 list, all of the institutions with five or more author affiliations are public universities, and all but one are members of ARL, indicating that the top performing libraries are among the nation's larger research universities. Nearly two-thirds (N=106) of the population of institutions is represented by a single practitioner author, which reiterates the point that most of the published output of academic librarians is solo-authored.15 Finally, because of greater author and institution dispersal, the proportion of institutions producing the most author affiliations is slightly larger than in the prior year: 5% of institutions are linked with approximately 18% of all authors in the 2010 analysis, versus 3% of institutions to 20% of author affiliations in 2009. The movement of institutions on the leading list, along with a larger dispersal of authors, indicates annual fluctuations in publishing output at large research universities (and likely at smaller institutions as well). Pennsylvania State University, for example, which topped the 2010 list with 12 author affiliations and nine articles, did not appear on the top author/institution affiliation list in 2009 (Tables 5 and 6).
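The concentration figure just cited can be recomputed mechanically from the Table 6 counts; the following is a minimal sketch, assuming the per-institution counts are read directly off the table's rows.

```python
# Per-institution practitioner author-affiliation counts, from Table 6.
# Note the rows sum to 163 institutions although the table header
# reports N=164; the totals below follow the row data.
counts = ([12] + [7] * 2 + [6] * 2 + [5] * 3
          + [4] * 6 + [3] * 10 + [2] * 33 + [1] * 106)
counts.sort(reverse=True)

n_institutions = len(counts)  # 163
n_authors = sum(counts)       # 279

k = round(0.05 * n_institutions)  # top 5% of institutions (8 of them)
share = sum(counts[:k]) / n_authors
print(f"Top {k} institutions ({k / n_institutions:.0%} of institutions) "
      f"account for {share:.0%} of practitioner author affiliations")
```

The computed share (about 19% for the top eight institutions) lands near the approximately 18% cited above; the exact figure depends on where the 5% cutoff is drawn and on rounding.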


Table 5. Leading Author/Institutional Affiliations for Research Articles (Partial List)

Rank | Institution (N=43) | Authors (N=203) | Percent | Cumulative Percent
1 | Pennsylvania State University | 13 | 2.9 | 2.9
2 | University of North Carolina Chapel Hill | 11 | 2.2 | 5.3
3 | Queensland University of Technology (Australia) | 9 | 2.0 | 7.6
4 | University of Oklahoma | 9 | 2.0 | 9.6
5 | Oregon State University | 7 | 1.6 | 11.1
5 | University of Illinois Urbana Champaign | 7 | 1.6 | 12.7
5 | University of Maryland | 7 | 1.6 | 14.3
6 | Colorado State University | 6 | 1.3 | 15.6
6 | Eastern Michigan University | 6 | 1.3 | 16.9
7 | Multiple institutions (N=8) | 5 per institution | | 25.8
8 | Multiple institutions (N=10) | 4 per institution | | 34.7
9 | Multiple institutions (N=16) | 3 per institution | | 45.4


Table 6. Author/Institutional Affiliations for Research Articles: Practitioners Only

Rank | Institution (N=164) | Authors (N=279) | Percent | Cumulative Percent
1 | Pennsylvania State University | 12 | 4.3 | 4.3
2 | Oregon State University | 7 | 2.5 | 6.8
2 | University of Illinois Urbana-Champaign | 7 | 2.5 | 9.3
3 | Colorado State University | 6 | 2.2 | 11.5
3 | Eastern Michigan University | 6 | 2.2 | 13.6
4 | James Madison University | 5 | 1.8 | 15.4
4 | University of Florida | 5 | 1.8 | 17.2
4 | University of Illinois Chicago | 5 | 1.8 | 19.0
5 | Multiple institutions (N=6) | 4 per institution | | 27.6
6 | Multiple institutions (N=10) | 3 per institution | | 38.4
7 | Multiple institutions (N=33) | 2 per institution | | 62.0
8 | Multiple institutions (N=106) | 1 per institution | | 100.0

The strategy outlined here for assessing the scholarly output of academic library practitioners, while built around journal ranking based on a 5-year impact factor, also incorporates publication counts and analysis and recognizes the importance of perception studies. It creates another year's data that can be used as a building block for a multi-year analysis that may add a new dimension to a growing body of research and, equally important, prove useful for academic library practitioners. The goal of this year's and last year's studies has been to illustrate how one strategy, based on established metrics, can be used to measure scholarly output by library practitioners in leading Library Science journals while identifying author affiliations to determine which institutions are producing the most output. As the amount of publication data is broad, the right strategy for assessing scholarly impact may well be specific to the types of institutions and academic libraries being studied. For example, a recently released study by researchers at the University of Arkansas focused on academic librarian publication activity in leading journals of agriculture, engineering, science, and medicine between 2000 and 2010.16

Certainly, the diverse range of institutional categories across higher education provides opportunities for the kind of "apples to apples" comparisons of scholarly impact that practitioners, particularly those at smaller and mid-sized institutions, may find useful. Moreover, as I pointed out in my earlier column, new tools for assessment are necessary to capture the wide range of scholarly activity in which academic librarians are now engaged. As academic librarianship continues to evolve, so too will the metrics we devise for measuring our impact.

NOTES AND REFERENCES

1. Stewart, Christopher. "Whither Metrics? Tools for Assessing Publication Impact of Academic Library Practitioners." Journal of Academic Librarianship 36, no. 5 (2010): 449–453.

2. Seaman, Scott. "North American Institutions Most Frequently Represented in High Impact Journals," LIBRES 18, no. 2, http://libres.curtin.edu.au/libres18n2/. (Accessed May 21, 2010).

3. Association of College and Research Libraries, and Megan Oakleaf. 2010. "The Value of Academic Libraries: A Comprehensive Research Review and Report," http://www.acrl.ala.org/value, p. 17. (Accessed June 26, 2010).

4. Seaman, Scott. "North American Institutions Most Frequently Represented in High Impact Journals," LIBRES 18, no. 2, http://libres.curtin.edu.au/libres18n2/. (Accessed May 21, 2010).


5. Nisonger, Thomas E., and Davis, Charles H. "The Perception of Library and Information Science Journals by LIS Education Deans and ARL Library Directors: A Replication of the Kohl–Davis Study." College & Research Libraries 66, no. 4 (2005): 341.

6. Wiberley, Stephen E., Hurd, Julie M., and Weller, Ann C. "Publication Patterns of U.S. Academic Librarians from 1998 to 2002," College & Research Libraries 67, no. 3 (2006): 205–216.

7. Nisonger, Thomas E., and Davis, Charles H. "The Perception of Library and Information Science Journals by LIS Education Deans and ARL Library Directors: A Replication of the Kohl–Davis Study," College & Research Libraries 66, no. 4 (2005): 341.

8. Martell, Charles. "A Citation Analysis of College & Research Libraries Comparing Yahoo, Google, Google Scholar, and ISI Web of Knowledge with Implications for Tenure and Promotion," College & Research Libraries 70, no. 5 (2009): 170.

9. Ibid.

10. Stewart, Christopher. "Whither Metrics? Tools for Assessing Publication Impact of Academic Library Practitioners," Journal of Academic Librarianship 36, no. 5 (2010): 450.

11. "Journal Citation Reports Help," http://admin-apps.isiknowledge.com/JCR/help/h_impfact.htm#impact_factor. (Accessed May 30, 2010).

12. Stewart, Christopher. "Whither Metrics? Tools for Assessing Publication Impact of Academic Library Practitioners." Journal of Academic Librarianship 36, no. 5 (2010): 449.

13. Nisonger, Thomas E., and Davis, Charles H. "The Perception of Library and Information Science Journals by LIS Education Deans and ARL Library Directors: A Replication of the Kohl–Davis Study." College & Research Libraries 66, no. 4 (2005): 341.

14. "Carnegie Classifications | Basic Classification Summary Tables," http://classifications.carnegiefoundation.org/summary/basic.php. (Accessed June 16, 2010).

15. Harkin, Amy, and Stankus, Tony. "The Affiliations of U.S. Academic Librarians in the Most Prominent Journals of Science, Engineering, Agricultural, and Medical Librarianship." Science & Technology Libraries 30, no. 2 (2011): 147.

16. Ibid.