A small working party from the Stats Society recently went through a journal ranking exercise for the government’s ERA. I was part of this group but made a rather modest contribution to the discussions as I was busy with the business and management journal rankings in my role here at MBS. The final result was a list of recognised journals in the field of Statistics and Probability. They were to be divided into four tiers, with some distributional restrictions. The A* journals we ultimately agreed upon were
Annals of Statistics, Annals of Probability, Biometrics, Biometrika, Biostatistics, JASA, JRSSB, Probability Theory and Related Fields, JCGS.
I have several observations.
First, the working party acted in good faith throughout. Mediocre journals were not recommended for promotion based on private agendas – as far as I could see – apart from a natural tendency of people to value top journals in their own corner of statistics.
There was some discussion about whether JCGS and Biostatistics should really be in the top tier. Five years ago we would not have said so, and I know from my discussions with a broader range of academics that not everyone agrees that these two journals are comparable to JASA or JRSSB. But I took these inclusions to be partly pre-emptive, as the fields of biostatistics and highly computational methods are likely to be action-packed areas for the medium term, and these two are the best journals in those fields.
Predictably, much of the discussion focused on journals near the boundaries between tiers, most importantly between A* and A. At one point, consideration was given to dropping Biometrics in order to make room for another probability journal. I mainly held my tongue in these discussions so as not to derail them. But it is about time we started talking openly about how silly and destructive the whole ARC ranking exercise is.
Statisticians are very familiar with the concepts of variability and bias. These are natural labels for my two main complaints about the ARC ranking exercise.
Discrete tiers (Variability).
We got into the absurd situation of saying Biometrics was not a top journal because Annals of Applied Probability may (possibly) be better. Such a conflict should never occur. It occurs because of an arbitrary set of guidelines from the ARC that impose (a) four discrete tiers and (b) a distribution on these tiers. These together mean that there can only be 9 tier A* journals in Statistics and Probability. Who decided this? Certainly not us. No competent statistician attempting to measure performance would ever propose a tiered system with an imposed distribution.
Deliberately discretising a measurement of performance simply adds statistical noise – and very unsmooth noise at that. We see the effects of the unsmooth noise at the boundaries of the tiers. A* journals get dropped to A by mathematical fiat. We see wider effects of the noise in journals of somewhat varying quality being put in the same tier. It is like measuring human heights in units of 6 inches. Quality should be measured smoothly, and I think that the academic community have shot themselves in the foot by ever agreeing to tiers. We should simply give a score out of 100 based on some combination of pre-existing rankings from professional bodies as well as citation data. No doubt the ARC would point out that we academics give grades to students. However, grading is mainly imposed by tradition, and we also give numeric scores out of 100 and a GPA over the degree.
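The boundary effect is easy to demonstrate with a toy simulation. This is a minimal sketch, not the ARC's actual procedure: I assume 180 journals with "true" quality scores drawn uniformly on a 0–100 scale, and illustrative tier cut-offs of 5% / 15% / 30% / 50% for A*, A, B, C.

```python
import random

random.seed(1)

# Hypothetical "true" quality scores for 180 journals, sorted so that
# index order matches quality order (the 0-100 scale is an assumption).
quality = sorted(random.uniform(0, 100) for _ in range(180))

def tier(rank, n):
    """Impose an ARC-style fixed distribution on quality ranks.

    The cut-offs (top 5% -> A*, next 15% -> A, next 30% -> B,
    bottom 50% -> C) are illustrative, not the ARC's published rules.
    """
    frac = rank / n
    if frac >= 0.95:
        return "A*"
    elif frac >= 0.80:
        return "A"
    elif frac >= 0.50:
        return "B"
    return "C"

tiers = [tier(i, len(quality)) for i in range(len(quality))]

# The imposed distribution fixes the number of A* journals in advance,
# regardless of how good the journals actually are:
print(tiers.count("A*"))  # 5% of 180 = 9 journals, by fiat

# Two journals of nearly identical quality can straddle a boundary:
boundary = int(0.95 * len(quality))
print(quality[boundary - 1], tiers[boundary - 1])  # just below the A* cut
print(quality[boundary], tiers[boundary])          # just above it
```

Whatever the quality gap between the journals ranked 9th and 10th, the tier labels report the same one-notch difference as between journals 9 and 170, which is exactly the unsmooth quantization noise described above.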
I have the journal rankings from three separate exercises HERE.*** I only list the journals that the SSAI decided to have on the Statistics and Probability list. The rankings are A*, A, B, C, and the ARC limits the number of A* journals to 5%, which means 7 journals. I also have ratings of these same journals by the Maths Society, which included a much larger list of journals – and seems not to have a C rating. This list was to have been used for the RQF. Finally, I have the ratings of the ARC, which were released a few months ago for comment and which motivated the SSAI to produce their alternative list in response.
The journals where there are significant disagreements are:
Biometrika: maths=A*, ssai=A*, arc=C
Statistical Science: maths=A*, ssai=A, arc=A*
Statistical Methods in Medical Research: maths=A, ssai=B, arc=A*
Journal of Quality Technology: maths=B, ssai=C, arc=A*
Annals of Applied Statistics: maths=A, ssai=A, arc=C
Operations Research Letters: maths=A, ssai=C, arc=C
Probability Surveys: maths=A, ssai=C, arc=C
The points of disagreement between SSAI and Maths are not so surprising. They are for journals outside what we would consider the core list of statistical research journals: Operations Research Letters, Statistical Science (which is a survey journal), Probability Surveys (which is a more mathematical journal). I am sure that the Maths Society and SSAI would easily resolve these ranking clashes with a short meeting. I think that the bottom line is that a bunch of academics can be trusted to come up with a journal ranking that has a very high correlation with “the truth”.
By way of contrast, how could the ARC rate the Annals of Applied Statistics and Biometrika as tier C? When did the Journal of Quality Technology become an A* journal in Statistics and Probability? We cannot answer these questions because we are not told of the process by which the ARC moved from the MathSoc list to their list. I am told that it was based on input from a single international but unnamed academic.
Things were even worse in the business domain, where A* journals were dropped from the list entirely and a whole bunch of journals starting with Asian, Korean and Japanese appeared from nowhere in the A* list. If you were going to start promoting mediocre regional journals into the A* tier you would start with the local journals, wouldn’t you think?
This ERA process is not really consultation. It is an appearance of consultation. The rules of engagement are set up without our input. Our initial recommendations within this imposed system are repudiated. We are finally asked to make further input with no guarantee that our views will prevail. This is politics and it is ugly, corrupt and unfair.
*** Peter Hall has pointed out to me that I have been using the penultimate SSAI ranking. The final ranking actually removed Operations Research Letters and upgraded Statistical Science to tier A. So, as suggested in the text, Maths and Stats did resolve some of their ranking differences.