The lists of accepted papers from STOC (with abstracts) and SoCG (sadly, without) recently came out, and both look like they have a large number of interesting papers. Usually I expect STOC to offer only a handful of talks I want to see, mixed in among a large amount of other material, which is why I attend it much less frequently than SODA or SoCG. But this year's program looks genuinely tempting, with several strong papers on graph minors, FPT algorithms, data structures, exponential-time algorithms, and even graph drawing (Chuzhoy on crossing numbers); I'm also interested in Chakrabarti and Regev's paper on communication complexity, for reasons I may explain in another post. I didn't submit to STOC myself, but my results from SoCG match those of Suresh and Sariel; I'll hold off on the details until I have a preprint available to share.
Instead, I wanted to discuss a related issue: the synchronization of conference deadlines. The SoCG results were sent to the authors on Sunday; the ICALP deadline was today, two days later. That was enough time to send your paper unchanged to the next conference, but not enough time to receive the more detailed feedback from the previous conference, or to make any substantive changes to your paper before resubmitting.
I have no inside knowledge of how the ICALP deadline was decided, but I do know of other conferences where the deadlines are deliberately set this way. Of course, they aren't deliberately set up to prevent revision, but the submission deadlines are deliberately placed after some other conference's notification date. And when that happens, optimization of other criteria (such as the amount of time the program committee has to read the papers) pushes the deadline to fall soon after the notification date, too soon for serious revision.
I'm starting to think that coordinating the deadlines like this is a bad idea.
I'm not against the recycling of submissions; that would be hypocritical, since I do it frequently myself. I think that a large fraction of the papers that are rejected from top conferences, despite their rejection, present worthwhile ideas that deserve their place in the literature. I think that aiming high and getting some rejections is an important part of how we calibrate the strength of our papers and of our conferences. And I think that resubmitting a rejected paper can be a powerful antidote to the arbitrariness and trendiness of the program committee process.
No, what I think is a bad idea is the immediate resubmission of a paper without waiting for and using the feedback from the previous rejection, and deadlines that are timed to encourage this sort of behavior. As an author, you should want your papers to be as well written as possible, and that means taking advantage of the feedback you get from peer review. And from the point of view of a conference program chair, more submissions are a good thing, but wouldn't it be better not to have to deal with submissions that still have the same problems that caused them to be rejected the previous time?
Everything you write is true. But there is one more element in favor of synchronization: without it, there would be a very strong game element in submitting to conferences. That is, one would have to choose between a more prestigious conference A with a smaller chance of acceptance and a less prestigious conference B with a higher chance. Or, more precisely, one would have to choose between the chains A→C and B→D (submit to A and, if rejected, resubmit to C, versus submitting to B and then D). It would be very unhealthy if people were forced to make such decisions and rewarded or punished for being brave or conservative.
I'm not quite sure why that would be so unhealthy — it would lead people to put more attention into self-evaluation, not necessarily a bad thing. But I see the point about encouraging people to aim high.
The synchronization doesn't always go from more prestigious conferences to less prestigious ones. Next week we have the WADS submission deadline, and then in mid-April the WADS notifications come (perhaps only coincidentally) a few days before the ESA submission deadline. But I think ESA may be a bit more prestigious than WADS.
There are conferences that do exactly as Daniel suggests. For example, this year the AISTATS results conflict with the ICML deadline; ICML clashes with KDD, whose results come out too late for NIPS, whose results come out too late for ICDM. KDD and ICDM are data mining conferences, and AISTATS, ICML, and NIPS are machine learning conferences, all with a fair bit of cross-submission potential.
Theory is actually somewhat reasonable in this regard: there are usually two weeks between FOCS and SODA, and a week or more between SODA and ALENEX if you are so inclined.
We also have way too many conferences if you count WADS, ICALP, ESA, ISAAC, etc. So if you miss one deadline, there's always another one coming soon anyway.