Monday at SODA
Second day of SODA. My attendance at talks is getting more sporadic, as are my notes from the ones I did attend.
Unsurprisingly, the conference program features many sets of talks that interest me but that, thanks to parallel scheduling, I can't attend without a time machine; it also features plenty of blocks of time in which the topics of all the parallel talks are too divergent from my own interests to draw me to any of them. There must be some interesting optimization problems to be solved here: collecting data from prospective attendees about which talks they're likely to want to attend, grouping the talks into tuples to be offered in parallel so as to minimize conflicts, and then grouping the parallel tuples of talks into sessions so as to minimize the number of times everyone has to change sessions. Of course these problems are all likely to be NP-hard, but maybe some approximation is possible.
Antoine Vigneron spoke on some nice results concerning very general algorithms for optimizing geometric quantities described as sums of algebraic functions over low-dimensional Euclidean spaces. The idea was quite simple: for each of the functions in the sum, draw a family of level sets for an exponentially spaced sequence of values, partition the space into cells by overlaying all the level sets, and try a candidate point within each cell. For instance, he mentioned the problem of finding a circle that best fits a set of \( n \) points in the plane, where the quality of a fit is measured as the sum of the Euclidean distances of the points to the circle. This quality measure can be described as a sum of algebraic functions in a three-dimensional space (coordinatized by the center and radius of the circle), and he gets a \( (1+\varepsilon) \)-approximation algorithm that (while exhibiting worse dependence on \( n \)) improves the dependence on \( \varepsilon \) compared to a previous algorithm of Har-Peled, from \( (1/\varepsilon)^{60} \) to \( (1/\varepsilon)^3 \).
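To make the flavor of this concrete, here's a minimal Python sketch of the circle-fitting objective together with a toy candidate search over a grid with exponentially spaced radii. The names fit_quality and approx_best_circle are my own, and the actual algorithm probes one candidate per cell of the overlaid level sets of all \( n \) distance functions rather than per cell of a uniform grid.

```python
import itertools
import math

def fit_quality(points, cx, cy, r):
    """Sum of Euclidean distances from each point to the circle with
    center (cx, cy) and radius r: the objective from the talk."""
    return sum(abs(math.hypot(px - cx, py - cy) - r) for px, py in points)

def approx_best_circle(points, eps=0.1):
    """Toy stand-in for the cell decomposition: probe one candidate per
    cell of a grid whose radii are exponentially spaced, mimicking the
    exponentially spaced level sets.  Assumes the points lie in the
    unit square."""
    # exponentially spaced radii covering roughly (eps, 2]
    radii = [eps * (1 + eps) ** k
             for k in range(int(math.log(2 / eps, 1 + eps)) + 1)]
    # uniformly spaced candidate centers in the unit square
    steps = [i * eps for i in range(int(1 / eps) + 1)]
    return min((fit_quality(points, cx, cy, r), (cx, cy, r))
               for cx, cy, r in itertools.product(steps, steps, radii))
```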
Next after Vigneron, and one of the most egregious examples of strange scheduling, was Petr Hliněný's talk on crossing numbers, scheduled opposite a session that was largely about planar graphs. Hliněný's result is that, for certain bounded-genus graphs, a very simple algorithm (that cuts open handles one at a time along dual cycles and then, once all the handles have been cut, reroutes the cut edges through the resulting planar graph) provides a constant-factor approximation to the minimum possible number of crossings in a planar drawing. A key tool is a new parameter of embedded graphs that he calls stretch: the minimum product of the lengths of any two cycles that cross at exactly one point (such cycles must be nonseparating). It was also an amusing surprise to see one of my drawings of the Nauru graph used as the illustration for the concept of graphs embedded on surfaces.
Back in the planar session, the talks described above were bookended by talks by Christian Wulff-Nilsen and Jeff Erickson that used very similar techniques to very different ends. Christian's algorithm was for finding the set of shortest paths in all the different graphs formed by deleting one edge from a given planar digraph, and Jeff's was on solving planar max-flow by reformulating it as the same sort of parametric negative cycle-detection problem that I used in my WADS paper with Kevin Wortman on minimum dilation stars. But they both perform a sort of network simplex algorithm for finding shortest paths in which a shortest path tree from a problem with slightly different edge lengths is rearranged by swapping edges in and out, they both use dynamic tree data structures on the complementary dual tree to find which swaps to perform, and in both cases a bound of \( O(n) \) on the number of swaps leads to an overall \( O(n \log n) \) time bound.
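Stripped of the dynamic-tree machinery, the pivoting loop both talks build on might look something like the following Python sketch. The name repair_tree and its calling convention are my own invention for illustration; it assumes the modified graph has no negative cycle, and it rescans all edges instead of finding the next swap in logarithmic time via the dual tree.

```python
from collections import defaultdict

def repair_tree(n, edges, dist, parent, source=0):
    """Swap-based repair of a shortest path tree after edge lengths
    change: repeatedly find an edge (u, v, w) with negative reduced
    cost dist[u] + w - dist[v], re-parent v under u, and push the
    improvement down v's subtree.  Assumes no negative cycles, so u is
    never inside v's subtree when the swap happens."""
    children = defaultdict(set)
    for v in range(n):
        if v != source:
            children[parent[v]].add(v)
    changed = True
    while changed:
        changed = False
        for u, v, w in edges:
            if v == source:
                continue
            delta = dist[u] + w - dist[v]
            if delta < 0:
                children[parent[v]].discard(v)  # the swap: old tree edge out,
                parent[v] = u                   # edge (u, v) in
                children[u].add(v)
                stack = [v]                     # shift v's whole subtree
                while stack:
                    x = stack.pop()
                    dist[x] += delta
                    stack.extend(children[x])
                changed = True
    return dist, parent
```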
I also liked the talk by Ryan Williams on his results with Pătraşcu on the relation between lower bounds on exponential time and polynomial time algorithms. The so-called "exponential time hypothesis" is that 3SAT, clique finding, and many related problems require time \( 2^{\Omega(n)} \) to solve exactly; it's consistent with the algorithms we know and, if any one of these problems could be solved more quickly, they all could. But as Williams and Pătraşcu show, the ETH can be translated downwards to imply that \( k \)-SUM problems require time \( n^{\Omega(k)} \), the assumption that CNF-SAT can't be solved in time \( (2-\varepsilon)^n \) can be translated downwards to a lower bound of \( n^k \) on finding \( k \)-dominating sets, etc. I have some vague memory of a talk by Gerhard Woeginger at IWPEC 2004 on similar subjects (in Woeginger's case, an upward translation from algorithms for \( k \)-SUM to algorithms for subset sum, which could equally well be viewed as a downward translation from hardness of subset sum to hardness of \( k \)-SUM). Williams and Pătraşcu cite Woeginger's paper but don't really mention this connection; on the other hand, subset sum isn't one of the problems covered by the ETH.
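As a point of comparison for that \( n^{\Omega(k)} \) lower bound, here's a minimal Python sketch of the standard meet-in-the-middle upper bound for \( k \)-SUM, running in roughly \( n^{\lceil k/2 \rceil} \) time; the function name and interface are mine.

```python
from itertools import combinations

def k_sum(nums, k, target=0):
    """Meet-in-the-middle k-SUM: hash the sums of all (k//2)-subsets of
    indices, then look up the complement of each remaining subset's
    sum, requiring the two index sets to be disjoint.  Returns the
    indices of a solution, or None."""
    half = k // 2
    table = {}
    for combo in combinations(range(len(nums)), half):
        table.setdefault(sum(nums[i] for i in combo), []).append(set(combo))
    for combo in combinations(range(len(nums)), k - half):
        need = target - sum(nums[i] for i in combo)
        for other in table.get(need, []):
            if other.isdisjoint(combo):
                return sorted(other | set(combo))
    return None
```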
Talk slides for my shameless attempt to cash in on the irrational enthusiasm for approximation algorithms, in the same session as Williams' talk, are here.
Comments:
2010-01-19T09:36:08Z
I see that the images on all your slides are properly licensed! Wow.
For the record, the longest path approximation of Andreas Björklund and myself was improved (to super-polylogarithmic length) by Hal Gabow in 2004.
Harold N. Gabow: Finding Paths and Cycles of Superpolylogarithmic Length. SIAM J. Comput. 36(6): 1648-1671 (2007)
—Thore Husfeldt
2010-01-19T15:10:14Z
Yes, even the xkcd strip. Thanks for the reference; I'll add it to future versions of the paper.
2010-01-19T15:16:49Z
> Unsurprisingly, the conference program features many sets of talks that interest me but that, thanks to parallel scheduling, I can't attend without a time machine; it also features plenty of blocks of time in which the topics of all the parallel talks are too divergent from my own interests to draw me to any of them.
Hear, hear. I wanted to see the talk on a model for MapReduce, but alas, we both missed it (my talk was at the same time).
> I have some vague memory of a talk by Gerhard Woeginger at IWPEC 2004 on similar subjects (in Woeginger's case, an upward translation from algorithms for k-SUM to algorithms for subset sum, which could equally well be viewed as a downward translation from hardness of subset sum to hardness of k-SUM). Williams and Pătraşcu cite Woeginger's paper but don't really mention this connection; on the other hand, subset sum isn't one of the problems covered by the ETH.
This is a good point. I think a subexponential algorithm for SUBSET SUM would also imply that ETH is false, because with subexponential SUBSET SUM you can solve "1-in-k SAT" in subexponential time. (Our reduction is something like the following.)
For each variable x, make two (m+n)-bit vectors V_x and V_{not(x)}, where
V_x[i] = 1 iff x is in clause i, for i = 1, ..., m
V_x[m+j] = 1 iff x is the jth variable, for j = 1, ..., n
V_{not(x)}[i] = 1 iff not(x) is in clause i, for i = 1, ..., m
V_{not(x)}[m+j] = 1 iff x is the jth variable, for j = 1, ..., n
Now the task is to find a subset of vectors that sums exactly to the all-ones vector. Note that, for each x, such a subset cannot contain both V_x and V_{not(x)}. By a "Freiman isomorphism" that maps these vectors to numbers over a sufficiently large base, we get a SUBSET SUM instance where the target T is the number that the all-ones vector maps to.
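A quick Python sketch of this construction, with the function name and clause encoding invented for illustration:

```python
def one_in_k_sat_to_subset_sum(n, clauses):
    """clauses is a list of clauses, each a list of literals (variable
    v as +v, its negation as -v, variables numbered 1..n).  Each of the
    2n literal vectors becomes an (m+n)-digit number in a base too
    large for digit sums to carry (the "Freiman isomorphism"), and the
    target T encodes the all-ones vector."""
    m = len(clauses)
    base = max(len(c) for c in clauses) + 2  # digit sums stay below base
    def encode(bits):
        return sum(b * base ** i for i, b in enumerate(bits))
    numbers = []
    for v in range(1, n + 1):
        for lit in (v, -v):
            bits = [0] * (m + n)
            for i, clause in enumerate(clauses):
                if lit in clause:
                    bits[i] = 1
            bits[m + v - 1] = 1  # both literals of v mark variable slot v
            numbers.append(encode(bits))
    return numbers, encode([1] * (m + n))
```

A subset of the resulting numbers sums to the target exactly when it picks one of V_x, V_{not(x)} for each variable and makes exactly one literal true in each clause.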
Cheers, ryan
2010-01-21T03:37:33Z
Perhaps I'm just repeating what Ryan said, but it's pretty easy to assume hardness for a-SUM and derive hardness for b-SUM, for b < a. Simply compute the sums of all (a/b)-tuples.
So the challenge is indeed to show hardness of Subset Sum.
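In code, the grouping trick is a one-liner; this is a sketch only, since a careful hardness reduction would also have to force the chosen tuples to be disjoint.

```python
from itertools import combinations

def a_sum_to_b_sum(nums, a, b):
    """Grouping trick (assuming b divides a): replace the input by the
    sums of all (a/b)-tuples, so a b-SUM solution among the new numbers
    yields an a-SUM solution among the old ones."""
    return [sum(t) for t in combinations(nums, a // b)]
```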
2010-01-21T03:38:00Z
Last comment was by mihai.
2010-01-21T03:46:18Z
Actually, I think you're repeating what Gerhard said.
2010-01-19T22:36:18Z
> ... Of course these problems are all likely to be NP-hard
> but maybe some approximation is possible.
There are O(1) talks and O(1) time slots, so there is an optimal O(1) time algorithm. Who needs approximations? :-)
- David Wood
2010-01-20T06:22:32Z
Yes, but some constants are more constant than others :)
Daniel