Assessing Scholarly Productivity: The Numbers Game

To evaluate the work of scholars objectively, funding agencies and tenure committees may attempt to quantify both its quality and impact. Quantifying scholarly work is fraught with danger, but the current emphasis on assessment in academe suggests that such measures can only become more important. There are a number of descriptive statistics associated with scholarly productivity. These fall broadly into two categories: those that describe individual researchers and those that describe journals.

Rating Researchers

Raw Citation Counts

One way to measure the impact of a paper is to simply count how many times it has been cited by others. This can be accomplished by finding the paper in Google Scholar and noting the "Cited by" value beneath the citation. Such numbers may be added together, or perhaps averaged over a period of years, to provide an informal assessment of scholarly productivity. Better yet, use Google Scholar Citations to keep a running list of your publications and their "cited by" numbers. For more information on determining where, by whom, and how often an article has been cited, see IC Library's guide on Cited Reference Searching.
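
The arithmetic behind such informal measures is simple addition and averaging. The following minimal sketch (Python, with invented "Cited by" numbers; no official Google Scholar API is assumed) shows the idea:

    # Hypothetical "Cited by" counts copied by hand from a Google Scholar profile
    cited_by = [52, 20, 8, 3, 0]

    total_citations = sum(cited_by)                  # 83
    avg_per_paper = total_citations / len(cited_by)  # 16.6

    # Averaged over a ten-year publishing window:
    avg_per_year = total_citations / 10              # 8.3

    print(total_citations, avg_per_paper, avg_per_year)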

An article comparing Google Scholar, Scopus, and Web of Science, with the pros and cons of each.

Google Scholar as a new data source for citation analysis.

H-index

The h-index, created by Jorge E. Hirsch of the University of California, San Diego, is described by its creator as follows:

A scientist has index h if h of his/her Np papers have at least h citations each, and the other (Np - h) papers have no more than h citations each.1

In other words, if I have an h-index of 5, my five most-cited papers have each been cited at least five times. This can be visualized with a graph on which each point represents a paper. The scholar's papers are ranked along the x-axis in decreasing order of number of citing papers, while the actual number of citing papers is given by the point's position along the y-axis. The grey line represents equality of paper rank and number of citing articles. The h-index is equal to the number of points above the grey line.
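
The definition translates directly into a short computation. Here is a minimal sketch in Python (the citation counts are invented for illustration):

    def h_index(citations):
        """Return the h-index for a list of per-paper citation counts."""
        # Rank papers by decreasing citation count, as on the x-axis above
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank  # this point still lies above the rank = citations line
            else:
                break
        return h

    print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with at least 4 citations each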

The value of h will depend on the database used to calculate it.2 Thomson Reuters' Web of Science and Elsevier's Scopus (neither is available at IC) offer automated tools for calculating this value. In November 2011, Google Scholar Citations became generally available; it calculates h based on the Google Scholar database. An add-on for Firefox called the Scholar H-Index Calculator is also based on Google Scholar data.

Google Scholar Metrics includes lists of top-ranked journals by h5-index in a variety of subject areas.

Comparisons of h are only valid within a discipline, since standards of productivity vary widely between fields. Researchers in the life sciences, for instance, will generally have higher h values than those in physics.1

A large number of modifications to the h-index have been proposed, many attempting to correct for factors such as length of career and co-authorship.

ImpactStory (currently in beta) is a service that attempts to show the impact of research not only through citations but also through social media (e.g., how often an article has been tweeted about or saved to social bookmarking services).

Researchers at the National Institutes of Health have developed a method to quantify the influence of a research article by making novel use of its co-citation network to field-normalize the number of citations it has received. The beta version of iCite can be used to calculate the Relative Citation Ratios of articles listed in PubMed.
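
In rough terms (a simplified restatement, not iCite's exact formula), the Relative Citation Ratio compares an article's citation rate with the rate expected for its field, where the field is defined by the article's co-citation network:

    \mathrm{RCR} = \frac{\text{article's citations per year}}{\text{expected citations per year in its co-citation field}}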

Altmetrics is the creation and study of new metrics, based on the Social Web, for analyzing and informing scholarship. Wiley now makes altmetrics available for its fully open access journals. Other scholarly publishers, such as Elsevier and Sage, also offer altmetrics information at the article level, including comments and shares made by readers via social media channels, blogs, newspapers, etc. See the example below, taken from the ScienceDirect database.

Rating Journals

Rightly or wrongly, the quality of a paper is sometimes judged by the reputation of the journal in which it is published. Various metrics have been devised to describe the importance of a journal.

Impact Factor

The Impact Factor (IF) is a proprietary measure calculated annually by Thomson Reuters (formerly by ISI). It is based on how often papers published in a given journal during the preceding two years are cited during the current year; that count is divided by the number of "citable items" the journal published during the same two years to arrive at the IF. Weaknesses of this metric include sensitivity to inflation caused by extensive self-citation within a journal and by single highly cited articles. For more information about the IF, see the essays of Dr. Eugene Garfield, founder of ISI. Determining a journal's IF requires access to Thomson Reuters Journal Citation Reports, not available at IC Library.
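
Stated as a formula (a standard restatement of the definition above, using 2013 as the example year):

    \mathrm{IF}_{2013} = \frac{\text{citations received in 2013 by items published in 2011--2012}}{\text{citable items published in 2011--2012}}

For example, a journal whose 2011-2012 items received 500 citations in 2013, and which published 250 citable items in those two years, would have a 2013 IF of 500 / 250 = 2.0.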

Eigenfactor

The Eigenfactor is a more recent, freely available metric devised at the University of Washington by Jevin West and Carl Bergstrom.3 Where the IF counts every citation equally, the Eigenfactor weights citations according to the impact of the citing journal. Its creators assert that it can be viewed as "a rough estimate of how often a journal will be used by scholars." Eigenfactor values may be looked up at eigenfactor.org.
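
The idea of weighting citations by the prestige of the citing journal can be illustrated with a toy eigenvector computation. The Python sketch below uses a made-up three-journal citation matrix and plain power iteration; the real Eigenfactor algorithm adds damping, normalization, and other refinements described at eigenfactor.org:

    # cites[i][j] = fraction of journal j's outgoing citations that go to journal i
    # (each column sums to 1; the numbers are invented)
    cites = [
        [0.0, 0.5, 0.3],
        [0.7, 0.0, 0.7],
        [0.3, 0.5, 0.0],
    ]

    scores = [1 / 3, 1 / 3, 1 / 3]  # start with equal prestige
    for _ in range(50):             # iterate until the scores stabilize
        scores = [sum(cites[i][j] * scores[j] for j in range(3))
                  for i in range(3)]
        total = sum(scores)
        scores = [s / total for s in scores]  # renormalize each round

    # Journals cited heavily by high-scoring journals end up with high scores
    print(scores)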

SCImago Journal Rank Indicator

The SCImago Journal Rank indicator (SJR) is another freely available metric.4 It uses an algorithm similar to Google's PageRank. Currently, this metric is available only for journals covered in Elsevier's Scopus database. Values may be found at scimagojr.com.
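
In simplified form, a PageRank-style recurrence assigns each journal a prestige score that depends on the scores of the journals citing it (this is the generic PageRank formula, not SJR's exact computation, which adds further normalization described in reference 4):

    P_i = \frac{1 - d}{N} + d \sum_{j \in C(i)} \frac{P_j}{L_j}

Here N is the number of journals, d is a damping factor (commonly 0.85), C(i) is the set of journals citing journal i, and L_j is the number of distinct journals that journal j cites.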


Journal Article Acceptance Rates

Locating acceptance rates for individual journals or for specific disciplines can be difficult, yet is necessary information for promotion and tenure. Journals with lower article-acceptance rates are frequently considered to be more prestigious and more “meritorious.”

The method of calculating acceptance rates varies among journals. Some journals use all manuscripts received as the base for computing the rate. Others allow the editor to choose which papers are sent to reviewers and calculate the acceptance rate only on the papers actually reviewed, a smaller pool than the total manuscripts received (the worked example below shows how much this choice matters). Also, many editors do not maintain accurate records on this data and can provide only a rough estimate. Furthermore, the number of people working in a particular area of specialization influences the acceptance rate: if only a few people can write papers in an area, the journal's acceptance rate tends to be higher. Some journals include the acceptance rate in the "information for authors" area of the print journal or on the journal's home page.
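
A small worked example (the figures are hypothetical) shows how much the choice of denominator matters:

    # Hypothetical figures for one volume year
    received = 400   # all manuscripts received
    reviewed = 250   # manuscripts the editor sent out for review
    accepted = 60    # manuscripts ultimately accepted

    rate_all = accepted / received        # 0.15 -> 15% acceptance rate
    rate_reviewed = accepted / reviewed   # 0.24 -> 24% acceptance rate

    print(f"{rate_all:.0%} vs. {rate_reviewed:.0%}")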

Some sources to find journal acceptance rates are as follows:

Cabell's Directories of Publishing Opportunities - Ithaca College School of Business has a subscription covering the following areas: management, marketing, accounting, economics, and finance. Go to Your Access, where you can either browse or search for specific journals. Note: on-campus access only, as an Ithaca College IP address is required.

MLA International Bibliography - Choose Advanced Search, then Directory of Periodicals. You can then look up the periodical you are interested in.

American Psychological Association (APA) Journal Statistics and Operations Data - These PDFs provide information about manuscript rejection rates, circulation data, publication lag time, and other journal statistics.

Association for the Advancement of Computers in Education (AACE) - Submission review policy, acceptance rate, and indices.



Cited Reference Searching

How to Find Cited References.

References

1. Hirsch, J.E. An index to quantify an individual's scientific research output. Proceedings of the National Academy of Sciences of the United States of America 102, 16569-16572 (2005).
2. Meho, L.I. & Yang, K. Impact of data sources on citation counts and rankings of LIS faculty: Web of Science versus Scopus and Google Scholar. Journal of the American Society for Information Science and Technology 58, 2105-2125 (2007).
3. Bergstrom, C. Eigenfactor: Measuring the value and prestige of scholarly journals. College & Research Libraries News 68, (2007).
4. González-Pereira, B., Guerrero-Bote, V.P. & Moya-Anegón, F. The SJR indicator: A new indicator of journals' scientific prestige.
