The feedback to authors from SoCG is now out, and with that I feel that I can comment on my own impressions (from an author's point of view) of the rebuttal system used this year, in which the program committee sent the authors questions on their papers, gave the authors a week to answer (in a limited amount of space), and then deliberated for some more weeks after the answers were received. I submitted four papers, of which one was accepted; for once, the one the committee liked was also the one that I thought was my own strongest work, so although I'd be happier with more acceptances I thought the overall results were reasonable. However, that in itself doesn't say much about whether rebuttals were or were not a pointless waste of everyone's time. So let's look in more depth at the four cases:

Paper #1: accepted. Two different PC members sent questions: one concerned a possible simplification of our results (which I think overlooked some of the difficulties of the problem) while the other asked for more motivation for some of the results. I was a little worried at this point, because my general feeling was that any questioning implied something other than a slam-dunk accept by the committee, but it was accepted after all. And in the author feedback, the committee specifically thanked us for what they saw as helpful answers. Verdict: rebuttal probably helped here.

Paper #2: rejected. The PC "question" basically consisted of a long negative review to which little response was possible. It was obvious that, if the PC member sending this comment was representative of the rest of the committee, then the paper would be rejected. Nevertheless we made some attempt at response. The author feedback acknowledged that we did so but to my mind didn't show any evidence that we'd made any difference. (However, one of my co-authors disagrees, writing "It looks like we made up some ground with the rebuttal, but not enough.") Verdict: It was probably a slight plus to have the committee's decision telegraphed early in this way, in that it softened the blow of the later real rejection, but otherwise rebuttal was a pointless waste of time.

Paper #3: rejected. The PC did not submit any questions and the author feedback made it clear that they understood the paper well enough and just weren't sufficiently excited by it. Rebuttal would not likely have helped. Verdict: A good choice by the committee not to waste anyone's time with rebuttal.

Paper #4: rejected. The PC did not submit any questions. The author feedback contains some inaccuracies that probably hurt this paper (e.g. a description of the submission as an improvement to some other work that was actually subsequent) but it also makes it clear that all the reviewers on the PC found this paper confusing in many ways. Rebuttal could probably have helped a little, but not enough to tip the balance in favor of this paper. Verdict: because rebuttal was not used, it failed to prevent the sort of error it was intended to prevent, in which the PC made a decision based in part on an easily-corrected mistake. But the much bigger failure was from the SODA program committee, whose author feedback from an earlier rejection did not give me any idea just how confusing this paper would be to the SoCG committee. Instead I got the impression that I could just resubmit without serious revision, and that seems to have been the main factor behind the rejection this time.

In general, I think the cost of doing rebuttal is not so high, so that if it makes a few improvements to the committee's decisions then it's probably worth it. And that's especially true when (as in cases #3 and #4, and unlike case #2) the committee avoids using rebuttal when it can be predicted to be pointless. So, based on the possibility that rebuttal may have helped in case #1, I think having a rebuttal phase was a good idea.

I would be very interested to hear more from the committee, though, about whether they thought it was helpful, since they have a broader perspective across a larger set of papers. For instance, what fraction of papers had their scores changed in any significant way by the rebuttal phase? Perhaps we can hear more from the chair at the conference business meeting, using aggregate statistics that don't compromise the confidentiality of any particular submissions?

PS Suresh has a related post praising the SoCG committee for the high quality of their reviews. It's tangential to the point of this post, but I completely agree. He also posts his rebuttal experiences which seem similar to my cases #2 and #3.


Two comments:

1. I would think that if the feedback phase continues in our area of CS (even just in SoCG), then the reviewers will get better at judging when it is helpful to ask questions and when it's just a waste of everyone's (including their own) time.

My experience was similar to yours in that one paper had only a simple clarification requested, and answering it may have helped the paper come close to acceptance. Another had a set of questions that made it clear that we had a lot of ground to make up with the reviewers. Despite our detailed response, those same questions reappeared in the reviewer comments. My guess is that they were from a sub-referee, and were pasted into the feedback section by the main reviewer.

2. I want to say I was very impressed with the level of detail we got back in our reviews. The summary reviews (I guess written after the committee's discussion of the paper) were very informative about how close we might have been to getting accepted. And the reviews themselves provided many helpful suggestions and really showed where improvement was needed. Thank you, reviewers, if you are reading this!



I am not from the Computational Geometry community and this was my first submission to SoCG. My paper got rejected, but the reviews kind of surprised me. Two of the reviewers had suggested that my paper be accepted, while one of them was neutral. All of them, however, had mentioned that the paper was mathematically and technically dense, which I now feel might have been the cause for rejection. I had gotten two rebuttal questions from one of the reviewers, which we completely understood and replied to, but one of those questions trickled into the final reviews as well. It didn't turn out too well for me, but I guess I wouldn't represent the statistical mean in the reviewing process.


Well, as you can see from this post (and from the Twitter feeds at the time the acceptance/rejection notices came out), you're far from alone in having a decent paper rejected from the conference, so don't feel bad about that. And normally some level of mathematical density is taken by these committees as a good thing, but I guess not if it's to the point where they don't understand the paper.