A thought inspired by Cliff's presentation at the SODA business meeting of keywords appearing often in accepted and rejected papers (with a lot of overlap between the two lists): someone should try teaching one of those Bayesian spam filters how to recognize good algorithms paper titles. I'm not suggesting that any of the papers I saw for the STOC PC fell to the level of spam, and I don't think any committee member would seriously think about using such a tool instead of their eyes, but it might be a useful early warning for authors that the committee is likely to be skeptical about their paper...


So which keywords were good and which were bad?
I didn't record them, and I don't know if Cliff is making his slides available, but there's a brief recap in Jeff's business meeting report:
Good title: "Approximate number games: A Way to Design improved graph algorithms without data bounds"
Bad title: "Efficiently routing and scheduling random dynamic trees with complexity bounds."
Of course, this doesn't take length into account; for example, a long string of modifiers is likely to indicate a highly specialized result of little interest to most of the committee.
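For the curious, here is a minimal sketch of what such a title filter might look like: a naive Bayes classifier over title words, in the style of the classic spam filters. The training titles below are made-up placeholders standing in for lists of accepted and rejected titles, not real conference data, and `score` is a hypothetical helper, not anything Cliff actually built.

```python
from collections import Counter
import math

# Made-up placeholder titles standing in for accepted/rejected lists.
good_titles = [
    "approximate number games",
    "improved graph algorithms",
]
bad_titles = [
    "efficiently routing random dynamic trees",
    "scheduling with complexity bounds",
]

def train(titles):
    """Count word occurrences across a list of titles."""
    counts = Counter()
    for t in titles:
        counts.update(t.split())
    return counts

good_counts = train(good_titles)
bad_counts = train(bad_titles)

def score(title):
    """Log-odds that a title looks 'good' rather than 'bad',
    with add-one (Laplace) smoothing for unseen words.
    Positive means the title resembles the accepted list."""
    vocab = set(good_counts) | set(bad_counts)
    g_total = sum(good_counts.values()) + len(vocab)
    b_total = sum(bad_counts.values()) + len(vocab)
    s = 0.0
    for w in title.split():
        s += math.log((good_counts[w] + 1) / g_total)
        s -= math.log((bad_counts[w] + 1) / b_total)
    return s
```

With this toy training set, `score("improved graph algorithms")` comes out positive and `score("random dynamic trees")` negative, which is exactly the kind of early-warning signal an author might want before submitting.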