Article impact and journal impact factors

In the scientometric literature we are very often warned not to use journal impact factors to judge the performance of researchers or research groups. For this statement I always refer back to Seglen (1997). Seglen showed that in three chemistry journals, 50% of the articles accounted for 90% of the citations to those journals; in other words, the other half of the articles contributed only 10% of the citation impact. It is one of those illustrations of the long tail in scientometrics.

In my courses on citation analysis I always point to this fact, and elaborate on the use of journal impact factors in journal selection as part of a publication strategy. A simple piece of advice: submit your best work to the journals with the highest impact factors.

In the latest NOWT analysis of research performance in the Netherlands, my university is placed in the second division of Dutch universities ranked by citation impact. One point the report made quite clear was that the field-corrected journal impact of our articles was far below the national average. In fact, my university was the second worst in this respect; only Tilburg University fared worse (NOWT 2008, table 4.5 on p. 40).

I think we need to pay more attention to this fact at our university. In an informal citation analysis for one of our chair groups I will elaborate on this point a bit further.

Relative impact versus Journal impact factors

If you plot the relative impact of the articles published in the period 1998-2005 against the journal impact factors, you get a widely scattered diagram. Fitting a regression line seems almost meaningless: the slope is barely positive, and the R² is only 0.0048. The problem, of course, is that the relative impacts of the articles are far from normally distributed. The average of the per-article relative impacts is 1.35, whereas the median is 0.92: most articles have a relative impact below the world average. If instead you calculate the aggregate impact for this group as the sum of citations divided by the sum of the baseline citations, the relative impact is 1.28.
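The gap between these two aggregates is easy to reproduce. The sketch below uses purely hypothetical citation counts (not the group's actual data) to show how, for a skewed distribution, the mean of the per-article ratios, the ratio of the summed citations, and the median can all differ:

```python
from statistics import mean, median

# Hypothetical per-article data: (citations, field baseline citations).
# Illustrative numbers only, not the chair group's real counts.
articles = [(2, 5), (1, 4), (12, 6), (3, 5), (0, 4), (25, 8), (2, 6), (4, 5)]

# Relative impact per article: citations divided by the field baseline.
ratios = [c / b for c, b in articles]

# Two aggregates that diverge when the distribution is skewed:
mean_of_ratios = mean(ratios)   # average of per-article relative impacts
ratio_of_sums = sum(c for c, _ in articles) / sum(b for _, b in articles)
med = median(ratios)            # the "typical" article

print(f"mean of ratios: {mean_of_ratios:.2f}")  # pulled up by a few highly cited articles
print(f"ratio of sums:  {ratio_of_sums:.2f}")
print(f"median:         {med:.2f}")             # below the mean, as in the skewed real data
```

A handful of highly cited articles pulls the mean well above the median, which is exactly the pattern behind the 1.35 versus 0.92 figures above.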

For me the picture became much clearer when I drew the lines for the median citation impact and the median journal impact factor. Of the articles below the median citation impact line, most are concentrated in the lower journal impact factor quadrant: 36 versus 16. Of the higher-impact articles, most are concentrated in the higher journal impact factor quadrant: 35 against 16. Remarkably, those 35 articles were published in only 14 different journals.
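The quadrant count itself is a simple classification against the two median lines. A minimal sketch, again with hypothetical (relative impact, journal impact factor) pairs rather than the group's real data:

```python
from statistics import median

# Hypothetical (relative_impact, journal_impact_factor) pairs per article.
# Illustrative values only.
points = [(0.2, 1.1), (0.5, 0.9), (2.4, 3.5), (1.8, 2.9),
          (0.7, 2.2), (1.3, 1.0), (0.1, 0.8), (3.0, 4.1)]

# The two median lines that split the scatter plot into four quadrants.
med_ri = median(ri for ri, _ in points)
med_jif = median(jif for _, jif in points)

# Count articles per quadrant (impact side / journal-impact side).
quadrants = {"low/low": 0, "low/high": 0, "high/low": 0, "high/high": 0}
for ri, jif in points:
    key = ("high" if ri > med_ri else "low") + "/" + ("high" if jif > med_jif else "low")
    quadrants[key] += 1

print(quadrants)
```

With real data the interesting quadrants are the diagonal ones: "high/high" points to the journals worth targeting, "low/low" to the ones to avoid.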

[Figure: Relative impact versus journal impact factors, with the median lines]

Perhaps this research group should focus its publication output on those 14 journal titles and stay away from the 21 journals associated with the lower left quadrant. I found this approach quite revealing.

References
NOWT (2008). Wetenschaps- en Technologie- Indicatoren 2008. Maastricht, Nederlands Observatorium van Wetenschap en Technologie (NOWT). http://www.nowt.nl/docs/NOWT-WTI_2008.pdf
Seglen, P. O. (1997). Why the impact factor of journals should not be used for evaluating research. BMJ 314(7079): 497-502. http://bmj.bmjjournals.com/cgi/content/full/314/7079/497