Much of U.S. academia has been anxiously awaiting the National Research Council's overdue rankings of academic programs, but I've been hearing rumors of a brewing fight over those rankings between the NRC and the Computing Research Association. The specific issue is how to measure the research productivity of computer science faculty: the story goes that the NRC will use only ISI-listed journal publications for this measurement, despite very clear statements from major professional bodies that ISI data should not be used in computer science, because it omits most of the field's leading publication venues. Given this omission, rankings based on ISI data would likely create the false appearance of lower productivity in CS than in other subjects, and distort comparisons within the field, since researchers whose specialties happen to fall within ISI-listed venues would inaccurately appear to be the most productive.
As far as I can tell, the options seem to be:
NRC backs down and uses some other source of publication data: very unlikely.
NRC doesn't rank computer science programs at all: possible.
NRC doesn't include research productivity measures when ranking computer science programs: also possible.
NRC continues to use ISI data and releases an inaccurate set of rankings, leading to a much more public fight with CRA and other computer science research bodies, as well as to fights within the administrations of a lot of universities over the interpretation of the rankings: most likely.
It's our own damn fault for publishing in conferences instead of journals and we should conform to the rest of academia or suffer the consequences: maybe, but the NRC rankings should be descriptive not prescriptive.
None of this is true and I shouldn't listen to unfounded rumors: also possible.
Is anyone else in possession of more hard information concerning this issue and willing to share?
Really, computer science should embrace them. Often I try to read a computer science paper, realize halfway through that it makes no sense, then realize it is a conference paper and has therefore never been peer-reviewed in any meaningful way. No one was forced to revise it, no one fact-checked it, no one did anything but thumb through it, see their friends' names at the top, and accept it to the conference. It is degrading the field and making it impossible for other fields to use the results from computer science. And if all the other disciplines just making up algorithms on their own won't scare computer scientists into writing journal papers, I don't know what will. If the NRC ranks CS schools based on journal papers, so much the better.
The better conferences do some real reviewing. But the way it's supposed to work is that the authors revise the conference paper and turn it into a more complete and more fully reviewed journal paper. There are too many instances where that hasn't happened, and I agree that can be a problem.
As for people in other disciplines realizing the usefulness of algorithms research and contributing to it themselves, why should this be something to scare me? It seems just the opposite: more people appreciate the kind of research I do. What could be bad about that? I don't want to keep the field small in order to reduce competition.
Attempting to publish in journals is really unappealing when they aren't as widely read as conference proceedings and also take much longer to review and (usually) much, much longer to publish. These are, of course, fixable problems. For example, biology seems to have largely solved the time-to-review-and-publish problem. We should find out how they do it and then do it that way.
I'm not sure biology is really the area we should be comparing to: the issues involved in reviewing biology papers are too different. I think it makes more sense to compare to mathematics. And my experience with publishing in mathematics journals is that they seem to be only a little faster than the theoretical CS ones, despite journal publishing mattering much more in math than in CS.