Publishing for impact

It has been a while. Yes.

But here is a link to a presentation on publication strategy that I gave a little while back to some 300 PhD students at our university. The presentation was titled “Publishing for impact”.


If you are interested in the actual presentation, then you need to have a look at the recording of the whole symposium on writing a world class paper; I start somewhere around 2:51. The other presentations that afternoon were interesting as well; for those, see the news item in our newsletter.

Related: Towards a publication strategy

The role of university rankings in university marketing

So far I have not come across any proper research on the role of university rankings in university marketing. Of course, I am aware of many instances where the importance of university rankings has been mentioned in this respect, but evidence to substantiate these claims is rare.

I was therefore pleasantly surprised by the research of Liang-Hsuan Chen (2008), which only crossed my screen today. She found that rankings played an important role in university selection for Asian graduate students attending Canadian universities:

Graduate students enrolled in professional programs ranked factors such as the ranking of the program and affordability of tuition with high importance in choosing a Canadian graduate school. The fact that the ranking of program was ranked with the highest importance by this group of students was in part due to the availability of program ranking information and marketing efforts (e.g., the MBA Tour) undertaken by the programs.

My impression from this piece of research is that, whether you like it or not, rankings do play a role in how international students perceive and choose a university for their graduate education. Rankings serve different purposes, Chen explains:

Reputational ranking became a proxy for the quality of education. Although much criticized by academics for its lack of both validity and reliability, reputational ranking serves three purposes: first, it is a promotional tool for higher education institutions to recruit students; second, it is an assessing tool for international students to screen out competitive choices; and third, it is a marketing and signaling tool for students themselves after they graduate.

So it is not only important to be present in the various university rankings. You had better make sure you rank well!

References
Chen, Liang-Hsuan (2008) Internationalization or International Marketing? Two Frameworks for Understanding International Students’ Choice of Canadian Universities, Journal of Marketing For Higher Education, 18(1): 1-33, http://dx.doi.org/10.1080/08841240802100113 (Subscription required)

Journal quality, an unexpected improvement of the JCR

It is odd to say, but for researchers the journal as an entity is disappearing. Scientists search for information in online databases and decide from title and abstract whether an article suits their needs. The days when scientists visited the library and browsed the tables of contents of the most important journals to keep up with their field are long gone.

Still, there is a lot of emotion around journal titles. Scientists want to publish their research in the best possible journal. Earlier this year the NOWT (2008) published a report on the performance of Dutch universities, which clearly showed that the field-normalized citation impact of each university correlated positively with the field-normalized quality of the journals it published in.
Journal quality versus Citation impact

Looking at this graph, it is clear that there is considerable reason to select the best journals in your field to publish your results in. However, until recently the only widely available journal quality indicator has been the journal impact factor. There has been a lot of criticism of the uses and abuses of impact factors, but they have stood the test of time; all scientists are at least aware of them. For years ISI, now Thomson Reuters, was in fact the sole gatekeeper of journal quality rankings.
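For readers unfamiliar with the indicator in the graph above: a field-normalized citation impact divides a unit’s actual citations by the citations you would expect given the world average for each paper’s field. A minimal sketch of that idea, with invented numbers and field baselines (this is an illustration of the principle, not the actual NOWT/CWTS methodology):

```python
# Field-normalized citation impact: actual citations divided by the
# expected (world-average) citations for papers in the same fields.
# A score above 1.0 means the unit performs above world average.

def field_normalized_impact(papers, field_baselines):
    """papers: list of (citations, field) tuples, one per paper;
    field_baselines: world-average citations per paper, per field."""
    actual = sum(cites for cites, _ in papers)
    expected = sum(field_baselines[field] for _, field in papers)
    return actual / expected

# Hypothetical unit with three papers in two fields
papers = [(12, "plant science"), (3, "plant science"), (8, "genetics")]
baselines = {"plant science": 5.0, "genetics": 9.0}  # invented averages

print(round(field_normalized_impact(papers, baselines), 2))  # → 1.21
```

Here 23 actual citations against 19 expected gives a normalized impact of about 1.21, i.e. roughly 20% above world average.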

Over the last few years a number of products, free and fee-based, have tried to come up with new and competing journal ranking measures: SCImago Journal Rank (based on Scopus data), the Journal Analyzer in Scopus, Eigenfactor.org and, of course, the data from Thomson’s own Essential Science Indicators.

This week Thomson Reuters announced that they will update the Journal Citation Reports. From the 1st of February we get an entirely new JCR. From the press release:

  • Five-Year Impact Factor – provides a broader range of citation activity for a more informative snapshot over time.
  • Journal “Self Citations” – An analysis of journal self citations and their contribution to the Journal Impact Factor calculation.
  • Graphic Displays of Impact Factor “Box Plots” – A graphic interpretation of how a journal ranks in different categories.
  • Rank-in-Category Tables for Journals Covering Multiple Disciplines – Allows a journal to be seen in the context of multiple categories at a glance rather than only a single one.

It is highly unusual to see two JCR updates in a single year. But it is interesting to note how Thomson Reuters is moving under the pressure of some competition.

Literature:
NOWT (2008). Wetenschaps- en Technologie- Indicatoren 2008. Maastricht, Nederlands Observatorium van Wetenschap en Technologie (NOWT). http://www.nowt.nl/docs/NOWT-WTI_2008.pdf (in Dutch)

Self citations do work

In a very extensive article, van Raan has studied the effect of self-citations on the total citations to a group’s work. In the concluding paragraph van Raan writes:

[…] external citations are enhanced by self-citations, so that we have the “chain reaction:” Larger size leads to more self-citations, which lead to more external citations. This mechanism is strongest for the lower impact journals—they “make size work”—as well as for higher performance groups. In other words, lower impact journals enable research groups more than do higher impact journals to “advertise” their other work by means of self-citations.

The most interesting thing to note about this article is that van Raan cited himself 11 times out of 28 references in total. That may seem a bit excessive, but it stresses his point excellently.
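His own reference list makes the arithmetic easy to check: 11 self-citations out of 28 references is a self-citation share of roughly 39%. A small sketch of counting that share from a reference list, using invented author lists and naive surname matching:

```python
# Self-citation share: the fraction of a paper's references that
# include the citing author. Naive name matching on invented data.

def self_citation_share(author, references):
    """references: list of author-name lists, one per cited work."""
    self_cites = sum(1 for ref in references if author in ref)
    return self_cites / len(references)

# van Raan (2008): 11 of the 28 references are to his own work
refs = [["van Raan"]] * 11 + [["Garfield"], ["Moed"]] * 8 + [["Hirsch"]]

print(f"{self_citation_share('van Raan', refs):.0%}")  # → 39%
```

In practice citation databases match on more than surnames, but the share itself is computed exactly this simply.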

Another point that I always stress within the theme of publication strategy is to consider Open Access publishing. Over the last few years I have noticed that van Raan publishes his articles in OA on arXiv. His group has not (yet) scientifically demonstrated the citation advantage of OA publishing, but the master of scientometrics is putting it into practice anyway. Something every researcher should consider very seriously.

Reference
van Raan, A. F. J. (2008). Self-citation as an impact-reinforcing mechanism in the science system. Journal of the American Society for Information Science and Technology, 59(10): 1631-1643. http://arxiv.org/ftp/arxiv/papers/0801/0801.0524.pdf

The mysterious ways of Web of Science

A while back, one of our researchers asked me how Steven Salzberg arrived at the number of citations for the paper on the Arabidopsis genome in Nature. When he checked Web of Science, it delivered zero citations, and that couldn’t be true for such a breakthrough paper. Salzberg had found 2689 citations! How did he do that?

I first checked the paper in Web of Science myself as well, and also found zero citations.

Zero citations from Web of Science for the Arabidopsis papers

I was not entirely surprised, since I realized it was one of those consortium papers; I knew Thomson had had problems with a consortium paper in the past. But annoying it was.

I first checked the issue around the Human Genome Project and found it mentioned even in Science Watch from Thomson. But from that article it appeared that Thomson had only improved the tracking of citations to the Human Genome Project paper itself, not addressed the underlying issue. Even though the Arabidopsis paper was even older, the citations to it had not been corrected. Something in the way WoS searched for or tracked citations was going wrong, but where was the error being made?

I made a few futile attempts in the cited reference search with Arabidopsis as author, or Arabidopsis*. I also searched the cited references for Kaul as author (listed at the end of the original article as the first author), but that only resulted in some 130 citations, not enough to account for Steven Salzberg’s number. I did not want to use the cited reference search to look for all cited articles from Nature in 2000: that gives a very large result set, and you have to wade through innumerable pages of results, since you cannot refine this type of search by volume or page numbers. (Wouldn’t that be nice?)

To reassure my inquisitive researcher I pointed him to Scopus (sorry, Thomson), where he could see a reassuring 3000+ citations for himself. Meanwhile, I did not have a quick fix for the problem.

It was only later, when I looked into the problem again, that I was somehow forwarded to the All Databases search rather than the Web of Science search tab that I normally use. To my utter amazement, the title search this time delivered two records. Both had zero citations, but more importantly, next to [Anon] it showed Ar Gen In as the author.

The Arabidopsis paper in the All Databases search, showing Ar Gen In as the author

Now the problem was simple: I had found the author. A cited reference search indeed yielded nearly the 2689 citations that Steven Salzberg had found.

The cited reference search for Ar Gen In

But these figures are not entirely correct either, since there are an additional 131 citations to be found with Kaul as the first author of a reference to Nature with the correct volume and page numbers.

Of course I requested a correction of the citation data from Web of Science, but forgot to include Kaul’s citations. Hopefully this will be repaired at a later date.

But what really makes me wonder is the slight, but very important, difference in record presentation between the All Databases search and the Web of Science search on Web of Knowledge. For me, the standard entry into Web of Knowledge is the Web of Science tab; in my normal working routine I would never go to the All Databases tab to look up a number of citations. Only by luck did I find the right author name on this occasion. But that shouldn’t have to become the standard way to perform searches, should it?