Journal metrics are usually based on the number of citations received by articles in a particular journal over a specific period of time. They attempt to signify a journal's importance and influence in a particular field. Unless they are weighted, journal metrics are not comparable across disciplines or databases. Nor are they available for all journals; new or emerging journals may not have had time to accrue enough data to appear in rankings.
Journal metrics are by their nature controversial. As with all measures of scholarly impact, they should be used in the appropriate context and never in isolation. See our dedicated page for more information on the responsible use of research metrics.
Below is a list of common journal metrics.
Journal Impact Factor (JIF)
The Impact Factor (IF) or Journal Impact Factor (JIF) is a citation-based metric that reflects the yearly average number of citations to articles published in a given journal during the preceding two years. It is frequently used as a proxy for the relative importance of a journal within its field; journals with higher impact factors are often deemed to be more important, or to carry more intrinsic prestige in their respective fields, than those with lower impact factors.
Impact factors can be used as one of a range of measures when comparing journals. However, impact factors do not account for citation patterns across different fields, so you cannot use them to accurately compare journals across different disciplines. As a journal-level metric, impact factors should not be used to evaluate the merit of individual articles or researchers. This aligns with recommendation one in the San Francisco Declaration on Research Assessment (DORA).
How is an impact factor calculated?
In any given year, the two-year journal impact factor is the ratio between the number of citations received in that year for publications in that journal that were published in the two preceding years, and the total number of citable outputs published in that journal during the two preceding years (see Wikipedia: https://en.wikipedia.org/wiki/Impact_factor). An example is provided below.
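As a simplified, hypothetical illustration (the figures are invented, not real journal data): suppose a journal published 200 citable items across 2022 and 2023 combined, and those items were cited 500 times during 2024. Its 2024 impact factor would then be:

JIF (2024) = citations in 2024 to items published in 2022-2023 / citable items published in 2022-2023 = 500 / 200 = 2.5

In other words, the average article from that journal's previous two years of output was cited 2.5 times in 2024.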
Criticism of impact factors
Impact factors are generally viewed in the scholarly community as fundamentally flawed in that they present the mean of data that are not normally distributed, and so they are influenced heavily by a small number of highly cited papers in a journal. For example, about 90% of Nature's 2004 impact factor was based on only a quarter of its publications, so the actual number of citations for a single article in the journal is in most cases much lower than the impact factor would suggest. Impact factors can also be easily gamed. Journals can adopt editorial policies such as limiting the number of citable outputs (thereby reducing the denominator in the equation); they can publish outputs expected to be highly cited early in the calendar year, to allow maximum time for those papers to accrue citations; and some journals even ask authors to add extraneous citations to an article before it will be accepted (a practice known as coercive citation).
CiteScore
CiteScore is a citation-based metric similar to the Journal Impact Factor (JIF). It is based on the number of citations to documents in a journal over four consecutive years, divided by the number of documents of the same types indexed in Scopus and published in those same four years. CiteScore includes a wider variety of document types than the Journal Impact Factor and, in order to capture the citation peak for the majority of disciplines, it spans four years of publications rather than two. CiteScore is frequently used as a proxy for the relative importance of a journal within its field; journals with a higher CiteScore are often deemed to be more important, or to carry more intrinsic prestige in their respective fields, than those with a lower CiteScore.
CiteScore can be used as one of a range of measures when comparing journals. However, CiteScore does not account for citation patterns across different fields, so you cannot use it to accurately compare journals across different disciplines. As a journal-level metric, CiteScore should not be used to evaluate the merit of individual articles or researchers. This aligns with recommendation one in the San Francisco Declaration on Research Assessment (DORA).
How is CiteScore calculated?
In any given year, the CiteScore of a journal is the number of citations received in that year and the previous three years for documents published in the journal during that four-year period, divided by the total number of published documents (articles, reviews, conference papers, book chapters, and data papers) in the journal during the same four-year period (see Wikipedia: https://en.wikipedia.org/wiki/CiteScore). An example is provided below.
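As a simplified, hypothetical illustration (again with invented figures): suppose a journal published 1,000 eligible documents between 2021 and 2024, and those documents received 4,000 citations over the same 2021-2024 window. Its CiteScore 2024 would then be:

CiteScore (2024) = citations in 2021-2024 to documents published in 2021-2024 / documents published in 2021-2024 = 4,000 / 1,000 = 4.0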
Criticism of CiteScore
Many of the criticisms levelled at the Journal Impact Factor can also be levelled at CiteScore. Although CiteScore includes a wider range of document types and is calculated over a longer period of time, it is open to much of the same gaming that can be used to inflate an impact factor. Helpfully, Elsevier make CiteScore information freely available via Scopus, so the methodology and underlying data are open to interrogation.
Source Normalized Impact per Paper (SNIP)
Source Normalized Impact per Paper (SNIP) is a journal metric that accounts for field-specific differences in citation practices. It does so by comparing each journal’s citations per publication with the citation potential of its field, defined as the set of publications citing that journal. SNIP therefore measures contextual citation impact and enables direct comparison of journals in different subject fields, since the value of a single citation is greater for journals in fields where citations are less likely, and vice versa. SNIP is calculated annually and is freely available from Scopus.
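As a simplified, hypothetical illustration of the normalization (the numbers are invented): suppose a mathematics journal averages 2 citations per paper in a field whose citation potential is 1 (citations are comparatively rare), while a cell biology journal averages 6 citations per paper in a field whose citation potential is 3 (citations are plentiful). Dividing each journal's citations per paper by its field's citation potential gives both journals a SNIP of 2, indicating comparable contextual impact despite very different raw citation counts.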
SCImago Journal Rank (SJR)
SCImago Journal Rank (SJR) accounts for both the number of citations received by a journal and the importance or prestige of the journals where the citations come from. A journal's SJR indicator is a numeric value representing the average number of weighted citations received during a selected year per document published in that journal during the previous three years.
Drawing on an approach similar to Google's PageRank algorithm, which assumes that important websites are linked to by other important websites, SJR weights each incoming citation to a journal by the SJR of the citing journal, with a citation from a high-SJR source counting for more than a citation from a low-SJR source. Higher SJR values are intended to indicate greater journal prestige. SJR also accounts for journal size by averaging across recent publications, and is calculated annually. SJR is freely available from Scopus.
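To make the PageRank analogy concrete, below is a minimal Python sketch of the general idea: prestige scores are passed iteratively from citing journals to cited journals until they stabilise. This is an illustration only, not SCImago's actual algorithm, which additionally applies a damping factor, a three-year citation window, and size normalization; the journal names and citation counts are invented.

# Minimal sketch of PageRank-style journal weighting, for intuition only.
# NOT SCImago's actual SJR computation; all data below are invented.

journals = ["A", "B", "C"]

# citations[i][j] = citations FROM journal i TO journal j (invented)
citations = [
    [0, 10, 2],   # A cites B heavily
    [5, 0, 1],    # B cites A heavily
    [20, 30, 0],  # C cites others often but is rarely cited itself
]

# Start with equal prestige, then iterate: each journal distributes its
# prestige to the journals it cites, in proportion to its outgoing citations.
prestige = [1.0 / len(journals)] * len(journals)
for _ in range(50):
    new = [0.0] * len(journals)
    for i, row in enumerate(citations):
        out_total = sum(row)
        if out_total == 0:
            continue  # journal with no outgoing citations passes nothing on
        for j, cites in enumerate(row):
            new[j] += prestige[i] * cites / out_total
    prestige = new

for name, score in zip(journals, prestige):
    print(f"Journal {name}: prestige {score:.3f}")

With these invented numbers, journal B ends up with the highest score: it receives most of its citations from A, which is itself well cited, while C scores lowest because citing others confers no prestige of its own.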