What do your metrics tell you about impact?
This post was written by David Beales, engineering librarian, and Marisa Ramirez, digital scholarship services librarian.
With increasing pressure on scholars and research centers to demonstrate the return and value of their work, research impact has become a topic of great interest in academia. This interest has intensified as grant recipients are expected to report on the impact of a project’s published output, and as scholars are asked to demonstrate the impact of their scholarship in promotion and tenure portfolios. The established measures of impact are based on journal and author citation metrics. There is also an emerging field of altmetrics, which harvests data from the web and social media in an attempt to capture real-world impact.
Journal metrics use article citation data to quantify a journal’s impact. As a scholar, being able to show that your work is published in the most influential journals of your discipline is evidence that it is highly regarded within your field. The most common measure is the Journal Impact Factor: the number of citations received in a given year by the articles a journal published in the previous two years, divided by the number of citable articles the journal published in those two years. As any academic will tell you, this is a relatively crude measure, and it does not account well for the different citation patterns that occur across complex subject areas.
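The arithmetic behind the Impact Factor is straightforward. Here is an illustrative calculation using made-up numbers (the real figures come from the Journal Citation Reports database described below):

```python
# Illustrative Journal Impact Factor calculation with hypothetical numbers.
# A journal's 2023 Impact Factor = citations received in 2023 to items it
# published in 2021-2022, divided by the citable articles it published
# in 2021-2022.

citations_to_recent_articles = 1200  # citations in 2023 to 2021-2022 articles
citable_articles = 400               # citable items published in 2021-2022

impact_factor = citations_to_recent_articles / citable_articles
print(f"Impact Factor: {impact_factor:.1f}")  # Impact Factor: 3.0
```

Note what this number hides: a journal with a handful of very highly cited papers and many uncited ones can have the same Impact Factor as a journal whose citations are spread evenly.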
Eigenfactor: a better alternative?
There is a more sophisticated alternative to the Impact Factor: the Eigenfactor. It starts from the same citation data, but it does not consider all citations to be equal. Instead, using an algorithm similar to Google’s PageRank, the Eigenfactor places more value on citations from influential journals. This helps compensate for differing citation patterns, but both the Eigenfactor and the Impact Factor use the same initial data, and both are poor indicators of impact for subjects in the humanities and social sciences.
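To see what “citations from influential journals count for more” means in practice, here is a toy sketch of the PageRank-style weighting. This is not the actual Eigenfactor algorithm (which also excludes self-citations and adjusts for article counts); the citation matrix and parameters are invented for illustration:

```python
# Toy PageRank-style ranking over a journal citation network.
# cites[i][j] = number of citations from journal i to journal j.
# A journal's score grows when it is cited by other high-scoring journals,
# so not all citations are weighted equally.

def rank_journals(cites, damping=0.85, iterations=100):
    n = len(cites)
    scores = [1.0 / n] * n  # start with equal influence
    for _ in range(iterations):
        new_scores = [(1 - damping) / n] * n
        for i in range(n):
            outgoing = sum(cites[i])
            if outgoing == 0:
                continue  # journal i cites no one; its influence is not passed on
            for j in range(n):
                # journal i passes its score to the journals it cites,
                # in proportion to how often it cites each one
                new_scores[j] += damping * scores[i] * cites[i][j] / outgoing
        scores = new_scores
    return scores

# Hypothetical network: journals B and C each cite journal A heavily.
cites = [
    [0, 1, 1],  # A cites B and C once each
    [5, 0, 1],  # B cites A five times, C once
    [5, 1, 0],  # C cites A five times, B once
]
scores = rank_journals(cites)
print(scores)  # journal A (index 0) comes out with the highest score
```

Journal A ends up ranked highest not just because it receives the most citations, but because those citations carry the influence of the journals making them.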
Author metrics have an unfortunate history, in large part because they were born from the inappropriate use of journal-level methods to measure the work of individual scholars. The use of such a crude measure, which takes no account of the length of a scholar’s career and cannot distinguish a solid track record of quality work from one or two highly cited papers, has been widely discredited and largely replaced by the Hirsch index. The h-index, a combined measure of a scholar’s productivity and impact, is currently the accepted standard measure of author impact, although it has many competitors, each with its own claim to overcome the perceived failings of the others.
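The h-index has a simple definition: a scholar has an h-index of h if h of their papers have each been cited at least h times. A short sketch of that calculation, using invented citation counts:

```python
# Compute an h-index: the largest h such that h of the author's papers
# have at least h citations each.

def h_index(citation_counts):
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, citations in enumerate(counts, start=1):
        if citations >= rank:
            h = rank  # the top `rank` papers all have >= rank citations
        else:
            break
    return h

# Hypothetical citation counts for one author's six papers:
print(h_index([25, 8, 5, 3, 3, 1]))  # 3
```

Notice how the measure combines productivity and impact: one paper with 25 citations cannot raise the h-index by itself, and neither can many papers with one citation each.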
Find your own journal and author metrics
If you want to find out the Impact Factor of a journal and how it compares to other journals in your subject area, the library provides access through Journal Citation Reports. There is a lot of information in these reports, so this short video shows how to get the best from them.
You can use the Web of Science database’s citation reports feature to find your own author metrics, including your h-index. This video will show you how. If you find that your metrics are lower than expected, it could be because Web of Science does not have great coverage of your subject area. Many people find Google Scholar Citations works better for them.
Google Scholar Citations is a free and simple way for authors to track the use of their work by gathering the online citations to it. Get started with Google Scholar Citations.
One of the simplest ways to ensure that all of your work is tracked and credited to you is to create an ORCID iD. This is particularly useful if you have a common name, have changed your surname, or have moved institutions. Find out more about ORCID.
Whether you use Journal Impact Factors, Eigenfactors, the h-index, or its competing metrics, they all share the same limitation: they look at impact mainly from the narrow perspective of citations in academic journals. Until recently, this was all that was available, but there is an emerging alternative approach that attempts to provide a more accurate picture of how discoveries travel through an increasingly networked world. The use of these “altmetrics” to measure research impact is of interest to research centers and principal investigators looking for new ways to demonstrate the reach of their scholarship.
Altmetrics draw on a diverse set of social media and online sources to deliver (arguably) broader, richer, and timelier assessments of current and potential scholarly impact.
Impact Story is a free service that enables scholars to tell data-driven stories about their impact. Point this tool to your articles, slides, datasets, or code, and in a few minutes an impact profile is created for you. You can embed this in a CV or even include it in grant applications to show how your work is being used online. Click here for an example of a profile.
Publishers are also taking notice of altmetrics to track usage of the articles they publish (often called “article-level metrics”). Altmetric is a publisher-focused tool that collects data from social media sites and creates an Altmetric score to demonstrate impact at the article level. It is used by publishers such as Nature Publishing Group, Springer, and the Public Library of Science (PLoS).
The jury is still out on whether blog posts, tweets, and other social media activity are sophisticated and reliable enough to capture true impact, and on what measures, if any, can be taken to prevent “gaming” of these metrics.
To learn more about altmetrics:
- The weakening relationship between the Impact Factor and papers’ citations in the digital age by George A. Lozano, Vincent Lariviere, Yves Gingras.
- Altmetrics in the Wild: Using Social Media to Explore Scholarly Impact by J Priem, H Piwowar, and B Hemminger, University of North Carolina at Chapel Hill and National Evolutionary Synthesis Center (NESCent). Using correlation and factor analysis, the results in this study suggest that citation and altmetrics indicators track related but distinct impacts, with neither able to describe the complete picture of scholarly use alone.
- The Spread of Scientific Information: Insights from the Web Usage Statistics in PLOS Article-Level Metrics by K Yan and M Gerstein, Department of Molecular Biophysics and Biochemistry, Yale University, U.S.A. This peer-reviewed article in PLOS ONE details correlation studies, usage decay patterns, and other analyses of the PLOS Article-Level Metrics data set.
- Article-level metrics on FriendFeed. An open discussion forum where the community collates and discusses various article-level metrics developments.