Consistent search interfaces, oh so difficult

One of my annoyances when searching for journals in Web of Science has always been that in the standard search you have to fill in the full journal title, but when you search for a journal in the cited reference search you have to use the abbreviated journal title. A very inconvenient way of doing searches in the same database, albeit in a different index. Explain that in your classes on searching databases. Another small gripe in this respect is that the title abbreviations between, or even within, different ISI products are not the same either, so you are always left guessing.

This afternoon I had to check since when the Journal of Environmental Planning and Management has been indexed in Web of Science. The answer was found quite quickly: the journal only started to be covered by WoS this year. So I had to look up some citation data using a cited reference search. Easier said than done.

Using the official journal abbreviation list in the cited reference search, the journal appeared not to be there, even though it has been indexed in WoS since the beginning of this year. Moving over to the new interface, assuming they would have updated matters there a lot more, brought me more disappointment. The journal list in the new interface was not up to date either.

Guessing the abbreviation, I quickly arrived at the following abbreviations being used within WoS for the same journal:


This list is certainly not exhaustive, but it illustrates my point of different abbreviations for the same journal (how do they ever calculate the right impact factor, you might wonder?).

My idea is that when you have such a major overhaul of your web platform, you look at the search ergonomics as well. Full-title search in the normal search and abbreviated-title search in the cited reference search must have been reported back to ISI headquarters by marketeers and sales people on many different occasions. So this little annoyance should have been rectified in the latest extensive product overhaul.

That journal abbreviation lists are not up to date with the latest additions of newly indexed periodicals is a sign of very sloppy maintenance of the databases. For an important database such as Web of Science I would have expected higher standards of accuracy.

It seems that the competition has not yet fully awakened this giant in database land. Please, Thomson, wake up!

Students’ expectations of databases

A Swedish research project comparing students’ search behaviour for information with Google Scholar and Metalib concluded: “The study concludes that overall, students were not very satisfied with either tool”. I could leave it at that, but there was this really important paragraph towards the end:

Our study showed that almost half of searches launched in Metalib by users without training resulted in 0 results and a large part of the reason was the expectation that a search was Google-like in nature, in other words keyword searching with quotation marks used to indicate a phrase. Instead Metalib often uses a default phrase search. The result is a disaster. Libraries need to work with Libris and Fujitsu to do whatever possible to change this discrepancy between student expectations and search rules in Metalib otherwise the product will remain seriously flawed. [Emphasis added]

We, at our library, have to give this conclusion really serious attention. Databases like Scopus, and nowadays WoK as well, have adapted themselves to become Google-like. Our own catalogue, however, is more Metalib-like.
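The mismatch the report describes can be shown with a toy sketch. The titles and the matching logic below are hypothetical simplifications, not Metalib's actual engine: a default phrase search demands the literal phrase, while a Google-like keyword search only demands that every term occurs somewhere.

```python
# Hypothetical document titles
titles = [
    "Search behaviour of students using Google Scholar",
    "Students and their experience of federated search",
]

def phrase_search(query, docs):
    # Metalib-style default: the query must occur as a literal phrase
    return [d for d in docs if query.lower() in d.lower()]

def keyword_search(query, docs):
    # Google-like default: every keyword must occur somewhere in the document
    terms = query.lower().split()
    return [d for d in docs if all(t in d.lower() for t in terms)]

query = "students search behaviour"
print(len(phrase_search(query, titles)))   # 0 - no literal phrase match
print(len(keyword_search(query, titles)))  # 1 - first title contains all terms
```

The same query that a student would expect to succeed returns nothing under the phrase default, which is exactly the zero-result experience the report describes.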

Reading tip: more than half of the report consists of appendices.

Hat tip: Nicole C. Engard

Nygren, E., G. Haya & W. Widmark (2007). Students experience of Metalib and Google Scholar. Stockholm, Sweden, Stockholm university library. 158 pp.

Reprise: Impact factors calculated with Scopus compared to JCR

Yesterday I reported on the first preprint article that compared impact factors calculated with JCR and with Scopus; later that day a second article covering the same subject was published on E-LIS. Gorraiz and Schloegl (2007) took the analysis a real step further than Pislyakov (2007). Not only did they include a larger set of journals in their sample (100 compared to 20), they also looked at another bibliometric indicator, the immediacy index.

Interesting is the determination of the authors to look for journals in the chosen subject area, pharmacology, that were not included in the JCR but should have been there on the basis of their citations. In Thomson's journal selection process some other factors are taken into account, but in practice we expect all top journals in a certain category to be included in the JCR/WoS database. So it is interesting to learn that there are a number of journals that, on the basis of citation data in Thomson's own databases, should have been included.

At the beginning of the article the authors state:

Since there are more journals included in Scopus than in WoS, a journal in Scopus has a higher chance to get cited in general. Therefore the values for the impact factor and the immediacy index should also be higher in Scopus.

This might sound plausible, but in actual fact the effect of a larger journal base is much smaller. Because Web of Science already covers virtually all top journals in the subject category, it also covers the journals where most citations take place. Outside the top journals relatively little citation traffic takes place. This has been demonstrated by Ioannidis (2006) and is also indicated in the journal selection policy of Thomson, where they refer to some of their own research:

More recently, an analysis of 7,528 journals covered in the 2005 JCR® revealed that as few as 300 journals account for more than 50% of what is cited and more than 25% of what is published in them. A core of 3,000 of these journals accounts for about 75% of published articles and over 90% of cited articles.
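The concentration described in that quote is easy to reproduce with hypothetical numbers: rank journals by citations received and count how many are needed to account for half of all citations.

```python
# Hypothetical, heavily skewed citation counts for ten journals
citations = sorted([1000, 800, 600, 50, 40, 30, 20, 10, 5, 5], reverse=True)
total = sum(citations)

# Walk down the ranking until half of all citations is accounted for
cumulative, n = 0, 0
for c in citations:
    cumulative += c
    n += 1
    if cumulative >= total / 2:
        break

print(f"{n} of {len(citations)} journals account for 50% of {total} citations")
```

With these made-up figures just two of the ten journals carry half the citation traffic, which is the same skew Thomson reports at the scale of the whole JCR.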

What is really disturbing in both the articles of Gorraiz and Schloegl (2007) and Pislyakov (2007) is that neither database is one hundred percent reliable when it comes to the number of articles published in a given year. For Scopus we can expect some minor discrepancies, since we are dealing with a young database that still shows some fluctuations in content; Elsevier still has some work to do. For WoS it is sometimes just sloppiness in indexing, and that is unforgivable.

Gorraiz, J. & C. Schloegl (2007). A bibliometric analysis of pharmacology and pharmacy journals: Scopus versus Web of Science. Journal of Information Science 00(00): 00-00.
Ioannidis, J. P. A. (2006). Concentration of the Most-Cited Papers in the Scientific Literature: Analysis of Journal Ecosystems. PLoS ONE 1(1): e5.
Pislyakov, V. (2007). Comparing two “thermometers”: Impact factors of 20 leading economic journals according to Journal Citation Reports and Scopus. E-Lis.

Impact factors calculated with Scopus compared to JCR

You only had to wait for it. With the rich resource of citation data available in Scopus, somebody was going to use it to calculate impact factors. Quantitative journal evaluation was once the sole domain of Thomson Scientific (formerly ISI), but nowadays they face more and more competition. Elsevier, with Scopus, has so far hesitated to step into the arena of journal evaluation, but Vladimir Pislyakov (2007) has made a start for the 20 top journals in economics.
He compared the impact factor from the JCR with the impact factor he constructed for the same journals with citation data from Scopus. In his methodology he made a small mistake by not excluding the non-citable items, which is quite easy to do in Scopus, but this does not invalidate his results. As was to be expected, and confirming our experience of higher citation counts in Scopus compared to Web of Science, overall more citations per article were found in Scopus. This resulted in slightly higher impact factors as calculated with Scopus. What is more interesting is that the ranking of the journals based on Scopus data differed from the ranking based on the JCR impact factors. Overall they correlated well, but looking at the details, one journal dropped from rank 5 to 13, another from 11 to 18. So there is merit in investigating this on a larger scale than those 20 journals in economics.
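For readers unfamiliar with the calculation, the two-year impact factor is simply a ratio, and the small methodological mistake mentioned above is easy to see in a sketch. The numbers below are hypothetical; the second call shows how wrongly counting non-citable items (editorials, letters) in the denominator deflates the value.

```python
# Two-year impact factor for year Y:
#   citations in Y to items published in Y-1 and Y-2,
#   divided by the number of citable items published in Y-1 and Y-2.
def impact_factor(citations, citable_items):
    return citations / citable_items

# Hypothetical journal: 150 citations in 2006 to its 2004-2005 output,
# of which 100 items were citable articles and reviews.
print(impact_factor(150, 100))        # 1.5  - citable items only
print(impact_factor(150, 100 + 20))   # 1.25 - 20 non-citable items wrongly included
```

Since the error is the same sign for every journal, it shifts the absolute values more than it shifts the rankings, which is why it does not invalidate the comparison.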
In the end the author makes a big mistake when he states:

“Since impact factor is considered to be one of the crucial citation indicators which is widely used in research assessment and science administration, it is important to examine it critically from various points of view and investigate the environment in which it is calculated.”

Those are practices we should stay away from. The IF as such is only of interest to scientists when they select a journal for publication. The IF should not be used for research evaluation or the assessment of grant applications.

Pislyakov, V. (2007). Comparing two “thermometers”: Impact factors of 20 leading economic journals according to Journal Citation Reports and Scopus. E-Lis.