On Impact Factors and article quality

I just found this quote:

Thus, while it is incorrect to say that the impact factor gives no information about individual papers in a journal, the information is surprisingly vague and can be dramatically misleading.

(Adler et al. 2008)

The report is a highly critical discussion of the use and abuse of impact factors and the h-index.

Reference:
Adler, R., J. Ewing, et al. (2008). Citation Statistics: A report from the International Mathematical Union (IMU) in cooperation with the International Council of Industrial and Applied Mathematics (ICIAM) and the Institute of Mathematical Statistics (IMS), Joint Committee on Quantitative Assessment of Research. 26 p. http://www.mathunion.org/fileadmin/IMU/Report/CitationStatistics.pdf

Hattip: Sidi

Article impact and journal impact factors

In the scientometric literature we are very often warned not to use journal impact factors to judge the performance of researchers or research groups. For this statement I always refer back to Seglen (1997). Seglen showed that in three chemistry journals just 50% of the articles contributed 90% of the citations to those journals; in other words, the other half of the articles contributed only 10% of the citation impact. It is one of those illustrations of the long tail of scientometrics.
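Seglen's point is easy to reproduce with made-up numbers. The sketch below uses a hypothetical, skewed list of citation counts (not Seglen's actual data) to show how the most-cited half of the articles can collect around 90% of all citations:

```python
# Hypothetical citation counts for ten articles in one journal,
# sorted from most to least cited. The skew is typical: a few
# articles attract most of the citations.
citations = sorted([120, 60, 40, 25, 15, 6, 4, 3, 2, 1], reverse=True)

total = sum(citations)                       # all citations to the journal
top_half = citations[: len(citations) // 2]  # the 5 most-cited articles
share = sum(top_half) / total                # their share of the citations

print(f"Top 50% of articles receive {share:.0%} of the citations")
```

With these toy numbers the most-cited half accounts for roughly 94% of the citations, which is why the journal's impact factor says so little about any individual article in it.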

In my courses on citation analysis I always point to this fact, and elaborate on the use of journal impact factors in journal selection as part of a publication strategy. A simple piece of advice: submit your best work to the journals with the highest impact factors.

In the latest analysis by the NOWT of research performance in the Netherlands, my university is placed in the second division of Dutch universities ranked by citation impact. One of the points the report made quite clear was that the field-corrected journal impact of our articles was far below the national average. In fact, my university was the second worst in this respect; only Tilburg University fared worse (NOWT 2008, table 4.5 on p. 40).

I think we need to pay more attention to this fact at our university. In an informal citation analysis for one of our chair groups I am going to elaborate on this point a bit further.

Relative impact versus Journal impact factors

If you plot the relative impact of their articles published in the period 1998-2005 against the journal impact factors, you get a large scatter diagram. Drawing a regression line seems rather meaningless: the slope is just barely positive, and the R² is only 0.0048. The problem is of course that the relative impacts of the articles are far from normally distributed. The average of the relative impacts per article is 1.35, whereas the median is 0.92: most articles have a relative impact below the world average. If you instead calculate the average impact for this group as the sum of citations divided by the sum of the baseline citations, the relative impact is 1.28.
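The gap between the per-article average (1.35), the median (0.92), and the sum-based figure (1.28) comes straight from the skewed distribution. A minimal sketch with made-up citation counts and baselines (not the group's actual data) shows how the three indicators can diverge:

```python
from statistics import median

# Hypothetical per-article data: citations received and the field
# baseline (expected citations) for each article.
citations = [9, 1, 0, 3, 20]
baselines = [3.0, 2.0, 4.0, 5.0, 20.0]

# Relative impact per article: observed / expected citations
per_article = [c / b for c, b in zip(citations, baselines)]

# Average of per-article relative impacts: each article weighs equally,
# so a few highly cited articles pull the mean up
mean_of_ratios = sum(per_article) / len(per_article)

# Median relative impact: unaffected by the extremes
median_ratio = median(per_article)

# Sum-based relative impact: articles with large baselines dominate
ratio_of_sums = sum(citations) / sum(baselines)

print(mean_of_ratios, median_ratio, ratio_of_sums)
```

Here the mean (1.02) sits above the sum-based figure (about 0.97), while the median (0.6) lags well behind both, mirroring the pattern in the real data: a skewed distribution where most articles score below the average.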

For me the picture became much clearer when I drew in the lines for the median citation impact and the median journal impact factor. Of the articles below the median citation impact line, most are concentrated in the lower journal impact factor quadrant: 36 versus 16. Of the higher impact articles, most are concentrated in the higher journal impact factor quadrant: 35 against 16. Moreover, those 35 articles were published in only 14 different journals.

Relative impact versus Journal impact factors with the median lines

Perhaps this research group should focus its publication output on those 14 journal titles, and stay away from the 21 journals associated with the lower left quadrant. I found this approach quite revealing.
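The quadrant count behind this picture is simple to automate. The sketch below classifies hypothetical (relative impact, journal impact factor) pairs (invented for illustration, not the group's 1998-2005 data) against the two median lines:

```python
from statistics import median

# Hypothetical (article_relative_impact, journal_impact_factor) pairs
articles = [(0.3, 0.8), (0.5, 1.1), (2.1, 3.0), (1.4, 2.6),
            (0.9, 1.0), (0.2, 0.7), (1.8, 2.9), (0.6, 2.4)]

ri_med = median(ri for ri, _ in articles)    # median article impact line
jif_med = median(jif for _, jif in articles) # median journal IF line

# Count articles per quadrant relative to the two median lines
quadrants = {"low/low": 0, "low/high": 0, "high/low": 0, "high/high": 0}
for ri, jif in articles:
    ri_side = "high" if ri > ri_med else "low"
    jif_side = "high" if jif > jif_med else "low"
    quadrants[f"{ri_side}/{jif_side}"] += 1

print(quadrants)
```

In this toy set the diagonal quadrants dominate (three articles each in low/low and high/high), the same pattern as in the real analysis: high-impact articles cluster in high-impact-factor journals, and vice versa.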

References
NOWT (2008). Wetenschaps- en Technologie- Indicatoren 2008. Maastricht, Nederlands Observatorium van Wetenschap en Technologie (NOWT). http://www.nowt.nl/docs/NOWT-WTI_2008.pdf
Seglen, P. O. (1997). Why the impact factor of journals should not be used for evaluating research. BMJ 314(7079): 497-502. http://bmj.bmjjournals.com/cgi/content/full/314/7079/497

Some musings on the JCR

Last year some of our researchers asked me what had happened to the Impact Factor of the journal Water Science and Technology. In the 2005 edition it was still included in the JCR and showed an increasing trend in Impact Factor. Not the top of all journals, but a good player. After correspondence with ISI (Thomson Reuters Scientific) we found out that it had indeed been excluded from the JCR because it lacked the desired quality. Later I understood from one of the editors that too many conference papers were perhaps the cause of this problem. The editors changed their editorial policies and worked with ISI to upgrade the standards. After these improvements the journal was set for inclusion again in 2007.

Indeed the journal has appeared again in the latest edition of the JCR, with a shining IF of 1.240, which is higher than ever. The IF for 2006 has been calculated and presented in the 2007 edition as well. A wee bit low, but it is important that there is a continuous set of data. What really amazes me, though, is that when you search the 2006 edition of the JCR you still don't find this journal, while in the 2005 edition it is there. It strikes me as odd: they are still hanging on to the old idea of a paper edition.

Another pain point of mine with the JCR is the strange division between the Science edition and the Social Science edition. Today I had to check the impact factors for a set of journals. Each time you have to guess whether the journal would be included in the Science edition or the Social Science edition.

I can imagine there is a sales argument for selling either smaller set to smaller institutions. But when you subscribe to the complete set, I can't see any reason whatsoever why we have to live with this barrier in the database. It seems a relic from times long gone.

JCR 2007 releases new impact factors

June is always the time to look out for the newest update of the Journal Citation Reports. Yesterday I checked and the new figures weren't there yet. Today the JCR was updated to include the 2007 figures.

You could leave it at that: we subscribe to this database, and it has been updated. That's all.

For the Journal Citation Reports, which is updated only once a year, that simple message will not suffice in my opinion. But Thomson Reuters Scientific doesn't appear to share my view. The JCR is an important database. On the release of the latest figures, armies of researchers want to consult it to see whether the journal on whose editorial board they sit has increased its Impact Factor, or to judge where to submit their next set of articles.

When the Essential Science Indicators are updated, once every two months, the event is accompanied by a slew of information from Thomson. When the even more popular JCR is updated, we don't receive any information whatsoever.

We have to find out for ourselves that the coverage of journals has been expanded, growing from 6166 in the 2006 JCR Science edition to 6417 in the 2007 edition. For the JCR Social Science edition the number of journals covered increased by 97, to a total of 1865. Which journals? We are left to guess for ourselves. Some of the Spanish journals have been worked out.

Thomson Reuters Scientific knows, but they haven't told us (yet). Some journals have also been dropped from the list; again we have to find out for ourselves which ones. The increase in journals this year is only a prelude to the increase we might expect next year, since some 700 new regional journals have been included in the Web of Science.

I might be mistaken, but at first sight there must be some interesting, newsworthy facts in the yearly update of the JCR. Worthy of informing at least your subscribing librarians, who can in turn inform their users. We want to inform our users about these events, and we are more than willing to promote your products. Thomson, you could facilitate this work a whole lot by informing us a whole lot better than this.

Towards a publication strategy

This afternoon we had the opportunity to inform some of our participants in the Graduate School VLAG about the procedures in the preparation of the external peer review which will take place next year. The first part of our presentation was, on my part, quite straightforward, explaining the details of the bibliometric analysis which is part of the self-assessment in preparation for the external peer review.

The second part of our presentation, presented by Marianne, was much more speculative, and perhaps more interesting. It dealt with the opportunities to enhance your publication impact. There are no hard guidelines on this subject whatsoever. We had to stretch our imagination to the limit, but I think we found quite a balanced set of rules to set out for our audience.
