Nine new graph algorithm papers at JGAA
I'm not sure why the Journal of Graph Algorithms and Applications still doesn't have an RSS feed, but anyway they just published another batch of papers.
There are three regular submissions: one on threshold-width (embedding a graph as a subgraph of a threshold graph with low chromatic number), one on partial cube recognition (my own paper), and one on genus distribution (that is, if you pick a random cyclic ordering of the edges at each vertex, what is the distribution of the genus of the resulting embedding?). There is also a special issue of papers from WALCOM 2009 (a South Asian graph algorithm conference that I hadn't previously known of), with six papers: on graphs determined by their degree sequence, on approximating weighted graphs by sets of trees, on adding edges to make graphs Hamiltonian while minimizing crossings, on low-area grid drawings of outerplanar graphs, and two on enumeration of planar graphs.
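For concreteness, the genus of a single rotation system can be computed by tracing faces and applying Euler's formula, and the distribution can then be estimated by sampling random rotations. Here is a minimal Python sketch of that computation (my own illustration, not code from the JGAA paper; the representation of a rotation system as a dict of neighbor lists and the function names are my inventions):

    import random

    def embedding_genus(rotation):
        # rotation: dict mapping each vertex to a list of its neighbors,
        # read as a cyclic order -- i.e. a rotation system.
        # Assumes a connected simple undirected graph.
        darts = {(u, v) for u in rotation for v in rotation[u]}
        unvisited = set(darts)
        faces = 0
        while unvisited:
            faces += 1
            u, v = next(iter(unvisited))
            while (u, v) in unvisited:
                unvisited.remove((u, v))
                # face tracing: after entering v along (u, v), leave v
                # along the edge that follows u in v's cyclic order
                i = rotation[v].index(u)
                u, v = v, rotation[v][(i + 1) % len(rotation[v])]
        V, E = len(rotation), len(darts) // 2
        return (2 - V + E - faces) // 2  # Euler's formula: V - E + F = 2 - 2g

    def sample_genus_distribution(rotation, samples=1000):
        # Estimate the genus distribution by shuffling each vertex's
        # neighbor list uniformly at random and recording the genus.
        counts = {}
        rot = {v: list(nbrs) for v, nbrs in rotation.items()}
        for _ in range(samples):
            for nbrs in rot.values():
                random.shuffle(nbrs)
            g = embedding_genus(rot)
            counts[g] = counts.get(g, 0) + 1
        return counts

    K4 = {v: [u for u in range(4) if u != v] for v in range(4)}
    print(sample_genus_distribution(K4))  # K4 embeddings have genus 0 or 1

A sketch like this only estimates the distribution, of course; it's meant just to make the definition concrete.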
These days we are having to become more careful about adding sufficient new material to our conference papers to make them into journal papers. In general I'm not very sympathetic to this requirement: the conference paper should report as much as will fit into the conference format, regardless of whether that exceeds 70% of the total length of the full paper, and the full paper should be as long as it needs to be to be complete, not puffed out to some magical percentage of new content. Imposing other requirements only makes it less likely that some conference papers will ever become journal papers, which I see as a bad thing, because not having fully refereed versions of theoretical papers calls into question their reliability and the reliability of any future papers that depend on them.
In any case, for my partial cube paper (the journal version of a paper from SODA'08), the requirement of having enough new material was not so onerous: while revising it I came to the conclusion that it needed more detail throughout, and I also wanted to add a section describing my implementation of the algorithm, something that wasn't in the conference version. Then, at the last minute, one of the referees asked me to add some computational experiments using the implementation, which I did. It turns out that, although the two parts of my algorithm have equal asymptotic worst-case running times, one of them is significantly faster than the other on the graphs I tried. So the extra work of adding this to the paper was at least helpful in that it gave me a more focused idea of what to work on if I want to improve the algorithm.
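The experiments themselves aren't reproduced here, but the shape of such a comparison is simple enough; a minimal sketch, where phase1 and phase2 are hypothetical stand-ins for the two parts of the algorithm and inputs stands in for the test graphs:

    import time

    def compare_phases(phase1, phase2, inputs):
        # Wall-clock the two phases on the same inputs; equal worst-case
        # bounds say nothing about which one dominates in practice.
        for g in inputs:
            t0 = time.perf_counter()
            phase1(g)
            t1 = time.perf_counter()
            phase2(g)
            t2 = time.perf_counter()
            print(f"n={len(g)}: phase1 {t1 - t0:.4f}s, phase2 {t2 - t1:.4f}s")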
PS I just experimented with turning anonymous comments back on for my LiveJournal posts, and within 15 minutes got more spam. So they're staying off, but I believe you should be able to comment using OpenID without having to register with LiveJournal.
Comments:
2011-07-20T03:53:05Z
These days we are having to become more careful about adding sufficient new material to our conference papers to make them into journal papers.
We are? Who's actually pushing for this?
2011-07-20T04:09:22Z
I'm not sure I should go into too much identifiable detail, since it came from a private mailing list, but (as part of a discussion on conference page limits) I saw an email from another respected algorithms researcher whose paper had already been rejected from two decent journals, not because of any judgment of the paper's actual content but because it was too similar to the conference version. In the second rejection letter, the editor of the journal stated that they had calculated the overlap between the conference and journal versions to be 85% (your guess as to what this number actually means is as good as mine), expressed fears that such a large overlap could lead to the journal being sued by the conference, and advised the author to plan more carefully how much of their work to publish in conferences.
So the answer is: some journal editors, and some journal referees. And since it's their journal, they're of course free to do this.
2011-07-20T04:25:52Z
Sued? Really?
[facepalm]
According to John Iacono, Michael Fredman was unwilling to publish an abbreviated paper in this year's WADS proceedings, because he noticed that the language in Springer's copyright forms says that Springer owns the rights to all derivative works. Read strictly, this language implies that if you publish a conference paper in an LNCS volume, then publishing a full version of the same paper in a journal violates Springer's copyright. (It's unclear whether publishing the full version in a Springer journal would be legally allowed.) Long story short: Fredman's WADS paper will run longer than the page limit everyone else is held to.
2011-07-20T04:55:00Z
What I hear from my machine learning colleagues is that rules like this have basically shut down journal publication for them altogether. They tell me that their journals have even stricter rules: papers must contain significant new content that is intellectually novel, new since the conference paper, not just the details (or proofs) that didn't fit into the conference proceedings. And if you're going to generate that much new content, you'll get more mileage out of it by making a new conference paper rather than sending it to a journal. So, no journal versions for them. I hope theory publishing doesn't go the same way.