Day Two (Monday Morning Madness)
For me, the early Monday morning sessions of SODA had the most collisions of the whole conference: talks I wanted to see in all three parallel sessions.
I ended up seeing:
- Embedding metrics...
by Abraham, Bartal, and Neiman. There's been a lot of work lately on embedding finite metrics into "nice" spaces, which is what this talk and its whole session were about. Generally, one measures the quality of an embedding by looking at how much it distorts each individual distance and then combining these distortion numbers somehow. This one finds embeddings of any metric into a tree-like metric (of a couple of different types) in such a way that the embedding is simultaneously good for many different Lp-like ways of combining the distortions.
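To pin down the terminology, here is one standard formalization (my own paraphrase, which may differ in details from the paper's definitions): the distortion of a single pair under an embedding f from (X,d) into (Y,ρ), and an Lq-style way of combining these numbers over all pairs.

```latex
% Per-pair distortion of an embedding f : (X,d) -> (Y,\rho)
\[
  \mathrm{dist}_f(x,y) = \max\!\left(
    \frac{\rho(f(x),f(y))}{d(x,y)},\;
    \frac{d(x,y)}{\rho(f(x),f(y))}\right),
\]
% and an L_q-style combination over all pairs
% (q = \infty recovers the usual worst-case distortion)
\[
  \ell_q\text{-}\mathrm{dist}(f) =
  \Bigl( \mathrm{mean}_{x \ne y}\, \mathrm{dist}_f(x,y)^{q} \Bigr)^{1/q}.
\]
```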
- Thresholds for k-orientability
Two, two, two talks in one talk slot! By Vijaya Ramachandran, Peter Sanders, and several of their co-authors. Both are about how to orient the edges of an undirected graph so that the maximum in-degree of the resulting directed graph is minimized; I'd been trying to track down the literature on exactly this problem recently for a graph drawing application. I could find two 2006 papers claiming polynomial time algorithms for optimal orientation, but there's an easy linear-time reduction to max flow (to test whether a graph is k-orientable, make a flow network with capacity k from a source to each vertex, capacity one from each vertex to each of its incident edges, and capacity one from each edge to the sink, and test whether m units of flow are possible) that must have been known for much longer. The Sanders paper at least gives a 1994 reference for an equivalent load balancing problem, but I'm not sure that's good enough to justify a claim that the minimum maximum-in-degree orientation problem itself was long known to be polynomial. My 1991 paper on low-degree orientation is no help either. The actual content of the two SODA papers sharing the single talk slot is a pair of simpler heuristics that can be used to characterize the k-orientability thresholds for random graphs; according to Sanders' experiments, these heuristics are very likely to find the optimal orientation even for graphs whose number of edges is near the threshold.
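To make that flow reduction concrete, here's a minimal sketch (my own illustration, not code from either paper), using networkx's max-flow routine; the function name is_k_orientable is mine.

```python
# Minimal sketch of the max-flow test described above (my illustration, not
# code from either SODA paper).  A graph is k-orientable -- its edges can be
# oriented so that every vertex has in-degree at most k -- exactly when m
# units of flow fit through this network.
import networkx as nx

def is_k_orientable(vertices, edges, k):
    F = nx.DiGraph()
    source, sink = "source", "sink"          # assumes no vertex uses these labels
    for v in vertices:
        F.add_edge(source, v, capacity=k)    # each vertex may be the head of at most k edges
    for i, (u, v) in enumerate(edges):
        e = ("edge", i)                      # one node per undirected edge
        F.add_edge(u, e, capacity=1)         # the edge can be assigned to either endpoint...
        F.add_edge(v, e, capacity=1)
        F.add_edge(e, sink, capacity=1)      # ...but contributes only one unit of in-degree
    flow_value, _ = nx.maximum_flow(F, source, sink)
    return flow_value == len(edges)          # all m edges oriented <=> m units of flow

# A triangle is 1-orientable: orient it as a directed cycle.
print(is_k_orientable([1, 2, 3], [(1, 2), (2, 3), (3, 1)], 1))   # True
```

An integral maximum flow also tells you the orientation itself: point each edge toward whichever endpoint its unit of flow came from.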
- L∞2 is easy, L∞3 is NP-complete
Back to the metric embedding session for a talk by Jeff Edmonds on testing whether a finite metric embeds into a low-dimensional sup-norm space. He speeds up an old two-dimensional embedding algorithm of Malitz and Malitz from cubic to near-quadratic (that is, near linear in the input size since there are quadratically many distances to specify). ETA: I forgot to mention the best part! It works by turning the problem into one of dynamic graph connectivity, an old interest of mine. He then uses a construction involving a collection of rectangles connected diagonally corner-to-corner into a Möbius strip to reduce subset sum to three-dimensional embedding. Of course, if a metric space embeds into a low-dimensional sup-norm space, so does its tight span, so this seemingly says something about the difficulty of tight span construction.
- Ultra-succinct representation of trees
From the third session, on data structures. The problem here is to represent trees using extremely few bits while still allowing various navigation operations. For instance, representing a tree by a balanced parenthesis sequence in the obvious way uses only 2n bits, but that representation needs to be augmented to allow navigation. Wing-Kin Sung and his co-authors instead represent a tree by writing, for each node in pre-order, a unary representation of its number of children. If you start with a gratuitous open paren and then write each node's degree in unary as a string of one open paren per child followed by a single close paren, you get a balanced parenthesis sequence representing the tree, but one that differs from the obvious representation. By compressing this sequence of child counts more cleverly, rather than storing the parentheses verbatim, you get a code in which the length of the encoding of a tree is proportional to a suitable definition of its entropy; for instance, binary trees use cn bits for some constant c better than 2. But I'm left wondering: if the main application is compressing suffix trees of strings, why is this better than Ferragina et al.'s string indexing data structures that directly compress the string itself?
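Here's a toy sketch of that encoding (my own illustration; the nested-list tree format and function name are not from the paper):

```python
# Toy sketch of the unary degree-sequence encoding described above (my own
# illustration, not the paper's code).  A tree is given as nested lists:
# each node is simply the list of its children, in order.
def encode(tree):
    out = []
    def visit(node):
        out.append("(" * len(node) + ")")    # node's degree in unary, then a close paren
        for child in node:
            visit(child)                     # pre-order traversal
    visit(tree)
    return "(" + "".join(out)                # the gratuitous leading open paren

# Root with two children; the first child has one child of its own (4 nodes).
example = [[[]], []]
print(encode(example))   # '((()()))' -- a balanced sequence of 2n = 8 parentheses
```

The pre-order sequence of child counts (here 2, 1, 0, 0) is what gets compressed to reach the entropy bound.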
- d-Dimensional Linear Arrangement
Charikar et al. assign distinct (one-dimensional) integer coordinates to a graph's vertices so as to minimize some Lp combination of the edge lengths. The "dimension" in the title is 1/p, because some previous paper related this problem to a 1/p-dimensional arrangement problem, but that relation doesn't work for the arrangement algorithm here. Nevertheless, I think I should look at this paper more carefully, because it also shows an approximate equivalence between their arrangement problem and a kind of optimal hierarchical clustering, which interests me in connection with my squarepants paper.
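Written out, the one-dimensional problem as I understood it from the talk (my notation, not the paper's) asks for a bijective placement minimizing an Lp norm of the edge lengths:

```latex
% One-dimensional L_p arrangement (my notation): place the n vertices at
% distinct integer positions via a bijection \pi and minimize the L_p norm
% of the edge lengths.
\[
  \min_{\pi : V \to \{1,\dots,n\}\ \text{bijective}}
  \;\Bigl( \sum_{uv \in E} |\pi(u) - \pi(v)|^{p} \Bigr)^{1/p}
\]
```

For p = 1 this is the classical minimum linear arrangement problem.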
I also would have liked to see Alon and Schwartz's expander construction talk, but something had to give. The rest of the day was less hectic but still had several good talks: Monika Henzinger told us how algorithms have changed the world (or at least made Google's search results better), I hope to update the WP Scheinerman's conjecture page based on lots of good references from Gonçalves' 1-STRING talk, I was very pleased with myself for solving in real time a small conjecture from Babai and Gorodezky's paper on sandpiles, and we had two very intriguing geometry talks on generalized Voronoi diagrams by Nielsen and then Asano.