institutional impact factors

This is wrong. This is so wrong. Impact factors are a measure of the number of citations of a scientific journal, typically used as a proxy for the relevance of that journal in its field (Wikipedia dixit). Following the example in Wikipedia, the 2013 impact factor of a journal is defined as the number of times that articles published in 2011 and 2012 were cited in 2013, divided by the total number of articles published in 2011 and 2012. It is, roughly, the average number of citations per article that a journal has gathered over a one-to-two-year window.
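In formula form, that definition (just restating the Wikipedia example above) reads:

.. math::

   \mathrm{IF}_{2013} = \frac{\text{citations received in 2013 by articles published in 2011 and 2012}}{\text{number of articles published in 2011 and 2012}}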

Impact factors are somewhat controversial, because they depend on the citation behavior of each discipline, so comparing the impact factor of a journal in bio and another in math is like comparing apples and bananas. Likewise, their use to judge the merits of a particular research article is questionable, because it ignores the fact that the number of citations varies greatly from article to article within the same journal. So to even try to define the impact factor for a research institution is just wrong, wrong, wrong. Buyers beware.

That said, the impact factor is an alluring concept, because it reduces everything to a single number, inviting you to forget that life is actually a complicated place. And, sometimes, it is fun to do stuff even if it is wrong, especially when you don’t work in that field and you are not facing reviewers. Sort of a guilty pleasure. So what I did was go to Web of Science and play around with the 2013 impact factor of different institutions, taking advantage of the fact that you can search by organization. I decided to refine the results and select only articles, so strictly speaking I am departing from the more conventional definition of impact factor, which uses all the citable items in a journal. To test the results, I went to the Academic Ranking of World Universities 2013 and took a random sample of 10 universities among the top 100 (I really did, using random.sample from Python), and compared their rankings with their institutional impact factors. The result? A Pearson coefficient of -0.82.

.. image:: ../_images/IF_ARWU.png
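For what it’s worth, here is a minimal sketch of that sanity check in Python. The rank-to-impact-factor table below is fabricated (a noisy linear trend standing in for the values I actually pulled from ARWU and Web of Science); only the sampling and the correlation step mirror what I did:

.. code-block:: python

    import random
    from scipy.stats import pearsonr

    # Placeholder table: ARWU 2013 rank -> institutional impact factor.
    # These are made-up numbers with a rough downward trend, NOT the
    # real values, which came from ARWU and Web of Science searches.
    top100 = {rank: 7.0 - 0.04 * rank + random.uniform(-0.5, 0.5)
              for rank in range(1, 101)}

    # Random sample of 10 universities among the top 100, as in the text.
    sampled_ranks = random.sample(sorted(top100), 10)
    impact_factors = [top100[rank] for rank in sampled_ranks]

    # Pearson correlation between ranking position and impact factor.
    # A negative coefficient is what you expect: better (lower) rank,
    # higher institutional impact factor.
    r, p_value = pearsonr(sampled_ranks, impact_factors)
    print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")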

Emboldened by this brush with statistical significance, I then went on to calculate the impact factors of some of my favorite institutions. For instance, I compared a few of the national laboratories in the US (my apologies to Brookhaven and Pacific Northwest):

.. image:: ../_images/IF_natlabs.png

With the overall conclusion that Argonne rocks.

I also know that the impact factors of the University of Sevilla and the Ruhr University Bochum are 2.2 and 3.3, respectively. The University of Illinois at Urbana-Champaign stands at 3.8, Northwestern University at 5.1, and Argonne National Laboratory at a surprising 6.1, so I can now frame my biography as a constant search for institutional impact factor excellence. If you go to the stratosphere of scientific institutions, places like Caltech have earned bragging rights with an institutional impact factor of 6.8.

Given the fact that there are very few new ideas, it is likely that somewhere there is a community of enthusiastic researchers specialized in institutional impact factors that I have overlooked. I also wonder whether there is actually such a large difference between a metric that lumps together publications related only by the fact that they have been published in the same journal and one that lumps together publications related only by being affiliated with the same institution.