# Top ten algorithms preprints of 2011

Last year, when I rounded up the algorithms preprints in the cs.DS section of the arXiv, it contained 643 preprints. This year, there are 798. So the growth rate is slowing a little, down to around 5/4 from a higher rate of 3/2 from 2008 to 2009, but in absolute numbers cs.DS is still growing significantly.

Here is a very personal top 10, in chronological order. The same ground rules as last time apply: I'm only listing papers that appeared as preprints in cs.DS (so that rules out, for instance, Virginia Williams' improvement to matrix multiplication as well as some other arXiv preprints I've already posted about), and I'm excluding my own papers. Additionally, because I'm still reviewing papers for SoCG 2012, I didn't include computational geometry papers that have not appeared elsewhere and so might plausibly have been submitted there.

**Solving connectivity problems parameterized by treewidth in single exponential time**, Marek Cygan, Jesper Nederlof, Marcin Pilipczuk, Michał Pilipczuk, Johan van Rooij, and Jakub Onufry Wojtaszczyk, arXiv:1103.0534, and FOCS 2011. One of the papers I listed last year showed that, for many graph optimization problems parameterized by the treewidth of the graph, naive dynamic programming algorithms can't be improved without some surprising consequences. This paper shows the opposite: for some other problems (such as finding a Hamiltonian path or a minimum connected dominating set) it is possible to achieve a running time singly exponential in the treewidth, even though the naive dynamic programs are instead exponential in \( w\log w \).

**3-SAT faster and simpler – Unique-SAT bounds for PPSZ hold in general**, Timon Hertli, arXiv:1103.2165, and FOCS 2011. The new record holder for the fastest (though of course still exponential) time bound for 3-SAT, \( O(1.308^n) \), achieved by using a modified instance cost function to show that a previous method designed for problems with a unique solution works equally well for other problems.

**The Grothendieck constant is strictly smaller than Krivine's bound**, Mark Braverman, Konstantin Makarychev, Yury Makarychev, and Assaf Naor, arXiv:1103.6161, and FOCS 2011. Grothendieck's inequality states that, if a matrix \( A \) has small inner product with all matrices \( B \) whose coefficients are bounded, then \( A \) also has small inner product with all matrices \( C \) formed as the outer product of two unit vectors. Grothendieck's constant quantifies this inequality by giving the maximum ratio between the two kinds of inner product. This paper proves a new bound on the value of this constant, tight enough to disprove a previous conjecture on its value.
Expressed in these terms, the result would seem to have little to do with algorithms, but in a separate preprint, arXiv:1108.2464, Khot and Naor survey many applications of the inequality to approximation algorithms for several concrete and important problems. For instance, in a FOCS 2007 paper, Charikar, Makarychev, and Makarychev used this inequality as part of a method for approximating the feedback arc set of a graph (or more precisely, the difference between the size of this set and \( m/2 \)), a problem that in turn has applications, for instance, in layered graph drawing.

**Subexponential parameterized algorithm for minimum fill-in**, Fedor V. Fomin and Yngve Villanger, arXiv:1104.2230, and SODA 2012. Although phrased in the language of graph theory, the problem considered here is essentially the same as that of solving sparse systems of linear equations by Gaussian elimination, carefully choosing the order of pivots and variables to be eliminated so that the fill-in (the number of matrix coefficients that were initially zero but become nonzero during the elimination process) is as small as possible. The authors describe exact algorithms for this problem (that is, finding the exact optimal solution rather than trying to approximate it) whose dependence on the amount of fill-in is subexponential, greatly reduced from previous solutions.

**Minimum weight cycles and triangles: equivalences and algorithms**, Liam Roditty and Virginia Vassilevska Williams, arXiv:1104.2882, and FOCS 2011. In graphs with integer-weighted edges, finding the shortest weighted cycle is not significantly harder than finding the shortest weighted triangle in a denser graph with the same number of nodes and roughly the same range of edge weights.
Since integer-weighted triangle-finding can be solved with fast matrix multiplication, so can the problem of finding short cycles, significantly faster than the matrix multiplication methods for all pairs shortest paths.

**Confluent persistence revisited**, Sebastien Collette, John Iacono, and Stefan Langerman, arXiv:1104.3045, and SODA 2012. Persistence is the ability to access old versions of a data structure; confluent persistence is a very strong form of persistence in which updates may combine multiple old versions. Although efficient methods are known for achieving weaker forms of persistence for arbitrary data structures, less was known for confluent persistence. This new paper shows that it can be achieved efficiently for any pointer-based data structure as long as updates do not create a structure that includes two different versions of a single node. In this way, the version structure of each individual node is a tree rather than a DAG, allowing dynamic tree structures to be used to navigate the version history.

**Circle graph recognition in time \( O((n+m)\alpha(n+m)) \)**, Emeric Gioan, Christophe Paul, Marc Tedder, and Derek Corneil, arXiv:1104.3284. Circle graphs are the intersection graphs of sets of chords of a circle. The new near-linear time bound for finding a set of chords representing a given graph (or showing that it cannot be represented in this way) comes from an incremental algorithm for split decomposition (partitioning a graph by complete bipartite cuts) combined with a LexBFS ordering of its vertices. (As an aside, I don't care for the notation in the title – I think it's generally a bad idea to put notation in titles, and in this case the alpha should go inside the O.)

**Multiple-source multiple-sink maximum flow in directed planar graphs in near-linear time**, Glencora Borradaile, Philip N. Klein, Shay Mozes, Yahav Nussbaum, and Christian Wulff-Nilsen, arXiv:1105.2228, and FOCS 2011.
The maximum flow problem is usually defined for only a single source vertex and a single sink vertex, because a problem with multiple sources can easily be transformed into a problem with a single source by adding a new source vertex connected to all of the old sources. But for planar graphs (or other special graph classes) this doesn't work, because the added vertex and all of its edges destroy the planarity of the graph. Part of my interest in this variant of the flow problem involves trying to generalize fast flow algorithms to minor-closed graph families; by the graph structure theorem, graphs in any such family can be decomposed into subgraphs embedded onto surfaces together with some non-embedded parts in the form of clique-sums, apexes, and vortices. One of my own recent papers made steps towards handling clique-sums, and this one makes steps towards handling apexes, so it makes me hopeful that a more general solution is possible.

**Counting perfect matchings as fast as Ryser**, Andreas Björklund, arXiv:1107.4466, and SODA 2012. In 1963, Ryser showed how to count the perfect matchings in an \( n \)-vertex bipartite graph in time within a polynomial factor of \( 2^{n/2} \). Finally, 48 years later, the same speedup can be shown to work for non-bipartite graphs.

**13/9-approximation for graphic TSP**, Marcin Mucha, arXiv:1108.1130, and STACS 2012. The latest in a series of recent papers improving on the previous best traveling salesman approximation, Christofides' algorithm, for the metric spaces described by shortest path distances on unweighted undirected graphs. The first improvement over Christofides came in a FOCS 2011 paper by Gharan, Saberi, and Singh, but the actual improvement was tiny. Also in FOCS 2011, Mömke and Svensson (arXiv:1104.3090) reduced the approximation ratio to 1.461, and this paper cuts it down further to \( 13/9 \approx 1.444 \) by improving the analysis of Mömke and Svensson's method.
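For readers who haven't seen the baseline that Björklund's matching result refers to, here is a minimal sketch of Ryser's classical inclusion-exclusion formula for the permanent, which counts perfect matchings in a bipartite graph with \( k \) vertices on each side (so \( n = 2k \) total, giving the \( 2^{n/2} \) bound). The function name and the naive subset loop are my own illustration; this is the old bipartite method, not the paper's new non-bipartite algorithm:

```python
def count_perfect_matchings(biadj):
    """Count perfect matchings of a bipartite graph with k vertices on
    each side, given as a k x k 0/1 biadjacency matrix, via Ryser's
    inclusion-exclusion formula for the permanent:
        perm(A) = sum over subsets S of columns of
                  (-1)^(k-|S|) * prod_i (sum_{j in S} A[i][j])
    Runs in time O(2^k * k^2), i.e. 2^(n/2) up to a polynomial factor."""
    k = len(biadj)
    total = 0
    for mask in range(1 << k):  # each mask encodes a subset S of columns
        prod = 1
        for i in range(k):
            # row sum of row i restricted to the columns in S
            s = sum(biadj[i][j] for j in range(k) if (mask >> j) & 1)
            prod *= s
            if prod == 0:
                break  # a zero row sum kills the whole product
        sign = -1 if (k - bin(mask).count("1")) % 2 else 1
        total += sign * prod
    return total
```

For example, the 2×2 all-ones matrix (a four-cycle) has 2 perfect matchings, and the 3×3 all-ones matrix (\( K_{3,3} \)) has \( 3! = 6 \).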

One surprise to me in compiling this list was how many of these papers were from FOCS 2011: I've come to think of SODA as the top place to find interesting algorithms papers, and STOC and FOCS as conferences mainly for the other sorts of theory that are less connected with my own interests. Maybe I should revise my thinking.