Some observations during the bibliometrics session at the Österreichische Bibliothekartag

Although the program consistently refers to the Österreichische Bibliothekartag (singular), the library day actually spans four days. One would have expected at least Österreichische Bibliothekartage (plural), but they insist on mentioning only one day. Of those four days, I was present only during part of the morning of the third day, so this is a very limited report on the Österreichische Bibliothekartag. Looking at the program, it is very comprehensive and interesting. I never thought you could fill a complete session, five presentations, talking about cooking books (no pun intended). It only reflects that bibliometrics was a small part of the program amongst the many other subjects covered. I noticed a lot of presentations on e-book platforms, many digitization projects, plenty on mobile, less on library 2.0 than you would expect (is the hype over?), and open access also played a very limited role. What struck me as interesting for conference organizers is that the commercial presentations were distributed evenly throughout the sessions. Just a sign of taking the sponsors seriously.

So much for the conference as a whole, of which I actually experienced too little. On to the bibliometrics session. The session was chaired by Juan Gorraiz, a bubbly Spaniard who has been working in Austria for years. Give him the opportunity and he will take the floor, and he would happily fill the slots of all the planned presentations with the time available.

The first presentation was on a piece of research that should result in a master's thesis at some point, but some preliminary results were presented in this session by Christian Gumpenberger. The focus of the research was on the acceptance of, and familiarity with, bibliometrics among Austrian researchers. The results were not really shocking: most researchers stated that they were familiar with impact factors, but for the moment there was no clue as to whether they were aware of details such as the two-year citation window, or the difference between citable and non-citable items that inflates the impact factors of journals like Nature and Science. Christian sketched sunny skies for bibliometrics in Austria, but in the subsequent discussion this sunny view was criticized quite a bit. Nevertheless, I would like to have a look at this master's thesis when it becomes available.
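For readers who have never looked at the mechanics behind the impact factor, a toy sketch of the two-year citation window may help; all numbers here are invented for the illustration.

```python
# Toy sketch of the journal impact factor's two-year citation window.
# All numbers are invented for the illustration.

def impact_factor(citations_to_last_two_years, citable_items_last_two_years):
    """Citations received in year Y to items published in years Y-1 and Y-2,
    divided by the number of *citable* items (articles, reviews)
    published in Y-1 and Y-2."""
    return citations_to_last_two_years / citable_items_last_two_years

# Citations to non-citable items (editorials, news features) still count in
# the numerator, but the items themselves are excluded from the denominator,
# which inflates the factor for journals that publish many of them.
print(impact_factor(1200, 300))  # 4.0
```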

The second presentation, by Nicola De Bellis, was of Italian origin. Nicola has written an interesting book on citation analysis in which he stresses the sociological, philosophical and historical aspects of bibliometric analyses. It is always interesting to hear a presentation like this, away from the fact-finding, number-crunching approach I normally take, and to dream a bit about what in an ideal world should be done on a subject like this. Quite a lot, but some of it is beyond being practical. When you carry out bibliometric analyses in the library at some scale, dealing with 18,000 papers that have collected 265,000 citations as we do in our library, you can only be practical. So there is an interesting conflict between his presentation (which will be online soon, I hope) and mine, which followed Nicola's.

I don’t want to cover all aspects of Nicola’s presentation. Go and read the book, which I am going to do as well. But at one point during his presentation I strongly disagreed with him: he stated that only mediocre scientists have an interest in bibliometrics, and that top scientists normally don’t. My experience is quite the contrary. In the first place, it was one of Wageningen’s top scientists who urged the library to take out a subscription to Web of Science back in 2001, and made it possible with a special contribution from his top institute. He knew he was a highly cited scientist, but somehow he needed Web of Science to confirm his reputation. Later on as well, apart from the discussions with scholars in the social sciences department, it has always been the top-performing groups that invited me to give a presentation on this subject, rather than the groups lagging behind on the bibliometric performance indicators. To me it has always appeared that those who are leading the pack are also interested in staying ahead of the rest, and invite the library to explain the results obtained and to enhance their performance in the future.

The second point in Nicola’s presentation where he went far beyond the practical was his insistence that, for any given publication, all citations to it should be retrieved from the three general databases (Web of Science, Scopus and Google Scholar) in the first place, supplemented with citations from at least one citation-enriched subject-specific database. Well, that is a lot of work for a single publication in the first place, leading to deduplication errors if you are not very careful. Secondly, it should be well known that Google Scholar, albeit attractive because of tools like Harzing’s Publish-or-Perish, is not a reliable database for citation counts at this moment (Jacsó 2008). Google Scholar still has serious problems with ordinary counting and deduplication and should therefore not be used for serious citation analyses. The third argument against the use of multiple databases goes a bit further into the theory of bibliometrics and relies on approaches described by Waltman et al. (2011) and Leydesdorff et al. (2011). The key point is that a citation count in itself has no meaning. It should be related to the citations of related documents in the same field of science. You can do that by normalizing on the mean citation rate in the field (Waltman et al. 2011), or by the perhaps more sophisticated approach sketched by Leydesdorff et al. (2011), based on the citation distributions in the field to which the paper belongs. The latter approach is very novel and has not really been widely tested yet. Both these approaches rely on the availability of all the citations to the publications of a certain field of science, a certain age and a certain document type.
You can expect the field means or citation distributions to be available when you work with a single database (for WoS there is plenty of experience; for Scopus it is coming with SciVal Strata; for Google Scholar it does not exist yet), but that is beyond reality when you derive citation data from three or four databases at the same time.
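To make the normalization idea concrete, here is a minimal sketch of mean-based field normalization in the spirit of Waltman et al. (2011); the field baselines and citation counts are invented for the example.

```python
# Minimal sketch of mean-based field normalization (in the spirit of
# Waltman et al. 2011); all baselines and citation counts are invented.

def normalized_citation_score(citations, field_mean):
    """Citations of a paper divided by the mean citation rate of papers
    of the same field, publication year and document type."""
    return citations / field_mean

# A paper with 10 citations in a field averaging 5 scores 2.0,
# i.e. twice the average for comparable papers.
print(normalized_citation_score(10, 5.0))  # 2.0

# Averaging these ratios over a group's papers gives a mean
# normalized citation score for the whole group.
papers = [(10, 5.0), (2, 4.0), (30, 10.0)]  # (citations, field mean)
mncs = sum(c / m for c, m in papers) / len(papers)
print(round(mncs, 2))  # 1.83
```

The point of the theoretical argument above is simply that the field baseline must come from one well-characterized database; citation counts merged from several databases have no matching baseline to divide by.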

But apart from these critical points, I liked the presentation by De Bellis very much. For those interested in similar views on citation practice, I really recommend reading MacRoberts & MacRoberts (1996) as well.

The session closed with my presentation, which is embedded here:

Bibliometric analysis tools on top of the university’s bibliographic database, new roles and opportunities for library outreach

Presentation by Wouter Gerritsma

The session ended with some discussion, but soon all 30 or so participants hurried off to the coffee.


De Bellis, N. (2009). Bibliometrics and citation analysis: From the Science Citation Index to cybermetrics. The Scarecrow Press, 450p. ISBN 9780810867130
Jacsó, P. (2008). The pros and cons of computing the h-index using Google Scholar. Online Information Review, 32(3): 437-451
Leydesdorff, L., L. Bornmann, R. Mutz & T. Opthof (2011). Turning the tables on citation analysis one more time: Principles for comparing sets of documents. Journal of the American Society for Information Science and Technology, n/a-n/a
MacRoberts, M. H. & B. R. MacRoberts (1996). Problems of citation analysis. Scientometrics, 36(3): 435-444
Waltman, L., N. J. van Eck, T. N. van Leeuwen, M. S. Visser & A. F. J. van Raan (2011). Towards a new crown indicator: Some theoretical considerations. Journal of Informetrics, 5(1): 37-47.

2 thoughts on “Some observations during the bibliometrics session at the Österreichische Bibliothekartag”

  1. Hi Wouter

    The bibliometrics session in Innsbruck was very interesting for me too. I enjoyed your presentation, and your work with the metadata repository at Wageningen is really top-notch, congratulations. On the other hand, I disagree with you on the interpretation of bibliometrics-informed evaluations. This is not the place for detailed arguments on such a complex issue. I just wish to restate some points of my presentation that might have been misunderstood.

    I never said that “only the mediocre scientists have an interest in bibliometrics” … that’s clearly wrong and misrepresents my ideas. What I meant in the incriminated slide has nothing to do with the quality or intellectual stature of the scientists turning to the library for bibliometric advice. I’m in no position to judge them as scientists, and neither – I would say – are other librarians and bibliometricians on account of bibliometric evidence alone. What I meant is simply that, since the distribution of bibliometric virtues such as productivity and citation impact is universally and ‘scale-invariantly’ skewed, it is very likely that most of the individual researchers undergoing a bibliometric evaluation will belong to the medium-to-low range of the curve … please note: the curve of bibliometric virtues, not of scientific abilities, value or quality.
    In fact, I strongly maintain that citation impact cannot be equated either with actual influence or with quality or contribution to the advancement of knowledge, at least not at the individual level of analysis. Many counter-examples could be invoked to invalidate such a connection. But it’s not even necessary to call in historical evidence: the equation

    more citations = more impact = more influence = more quality

    is simply misleading. The positive statistical correlation between highly influential publications by prominent scientists and their top citation scores is not a valid argument for its general validity. A correlation doesn’t say anything about causes and cannot be used to make explanations or predictions of any sort. Obviously there’s something at work behind groundbreaking contributions that might be ascribed to a pure quality dimension, however you measure it (citations, peer review, prizes, the number and extent of applications, etc.). But the lack of that correlation in not-so-groundbreaking publications doesn’t convey any insight into what’s happening behind the scenes. To know more you have to dig deeper, outside the citation pond: is it a really bad article, or just the wrong article in the wrong place? Is the paper actually worse, or is it simply less visible than many other functionally equivalent contributions?

    You’re definitely right on one point: my model of contextualized individual citation analysis does not lend itself to quick-and-dirty managerial decisions, but quick-and-dirty procedures are exactly what should be avoided when the personal and professional lives of individual scientists are at stake. Quite the contrary: slow-and-clean is the way to get it right. And to make it slow-and-clean we need to roll up our sleeves and try harder than just retrieving crude citation scores and baseline figures for bibliometric comparisons. Of course, as you say, when bibliometric analyses are carried out on 18,000 papers, you cannot afford to be very subtle. But that’s not a good reason to skip the how-it-should-be part of the job, is it?

Comments are closed.