The 27th ACM-SIAM Symposium on Discrete Algorithms, in Arlington, Virginia, just finished, along with its satellite workshops ALENEX (experimental algorithmics, one of the topics most often rejected from SODA itself) and ANALCO (analytic combinatorics). With four parallel sessions going at most times, there's no way to take in everything, so here are my impressions of the small fraction I saw.

The first talk I saw, Sunday morning by Rasmus Pagh, concerned locality-sensitive hashing, a technique for performing approximate nearest-neighbor searches in high dimensions with subquadratic preprocessing and sublinear query time (with exponents depending on approximation quality). We can do this in linear space, or (now with Pagh's paper) without false negatives: every point within the query radius of the query point is reported, possibly together with some farther-away points. Pagh asked as an open question whether we can get similar bounds while simultaneously achieving both linear space and no false negatives.
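To make the technique concrete, here is a minimal sketch of classical bit-sampling LSH for Hamming space (this is not Pagh's construction, and the parameter choices are purely illustrative): hash each point several times by its values on random subsets of coordinates, and answer queries from the matching buckets.

    import random
    from collections import defaultdict

    def build_lsh_index(points, dim, num_tables=8, bits_per_table=6, seed=0):
        """Index binary vectors (encoded as Python ints) by their values on a
        few randomly sampled bit positions, once per table."""
        rng = random.Random(seed)
        tables = []
        for _ in range(num_tables):
            positions = rng.sample(range(dim), bits_per_table)
            buckets = defaultdict(list)
            for p in points:
                key = tuple((p >> i) & 1 for i in positions)
                buckets[key].append(p)
            tables.append((positions, buckets))
        return tables

    def query(tables, q, radius):
        """Report points colliding with q in some table and verified to lie
        within the radius. Near points that hash differently in every table
        are missed -- exactly the false negatives that Pagh's paper removes."""
        result = set()
        for positions, buckets in tables:
            key = tuple((q >> i) & 1 for i in positions)
            for p in buckets.get(key, ()):
                if bin(p ^ q).count('1') <= radius:
                    result.add(p)
        return result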

In the early afternoon I switched between SODA and ALENEX. Aditya Bhaskara spoke about distributed peer-to-peer network reorganization: making local changes to network connectivity to improve communication. Random graphs are good in many ways, and one way of obtaining one (with fixed vertex degrees) is to repeatedly replace a randomly chosen pair of disjoint edges by a different pair of edges on the same four endpoints, but that doesn't work in a distributed setting because it can disconnect the graph. Instead, it works better to choose a random three-edge path and reconnect its two endpoints to the two middle vertices. Both kinds of random change will eventually produce a random graph, but only slowly (in polynomial time, but a polynomial of very high degree). Bhaskara showed that the path-based flips instead reach an expander (almost as good for most purposes) much more quickly.
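Here is a toy version of the path-based move as I understood it from the talk (a sketch only, with an adjacency-set graph representation of my own choosing); the point is that the surviving middle edge keeps the graph connected, which the classical two-edge swap does not guarantee.

    import random

    def path_flip(adj, rng=random):
        """One degree-preserving flip: pick a random 3-edge path a-b-c-d and
        replace edges a-b and c-d by a-c and b-d. The middle edge b-c
        survives, so connectivity is preserved, unlike in a double edge swap."""
        b = rng.choice(sorted(adj))
        if len(adj[b]) < 2:
            return False
        a, c = rng.sample(sorted(adj[b]), 2)
        candidates = sorted(adj[c] - {a, b})
        if not candidates:
            return False
        d = rng.choice(candidates)
        if c in adj[a] or d in adj[b]:
            return False  # would create a multi-edge; skip this attempt
        adj[a].discard(b); adj[b].discard(a)
        adj[c].discard(d); adj[d].discard(c)
        adj[a].add(c); adj[c].add(a)
        adj[b].add(d); adj[d].add(b)
        return True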

Back at ALENEX, Markus Blumenstock shaved a log factor off an algorithm for pseudoarboricity (the maximum over subgraphs of the ratio of edges to vertices, rounded up to an integer) by using an approximation algorithm in place of an exact algorithm in the earlier steps of a binary search. He asked for other situations where the same trick works.
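I didn't take detailed notes, but the shape of the trick can be sketched generically (this is my reconstruction of how such a scheme can work, not necessarily the paper's exact method): use a cheap approximation, with accuracy tuned to a rough estimate of the answer, to shrink the uncertainty interval to O(1) integers before paying for any expensive exact tests. For pseudoarboricity the exact test is, as I recall, a maximum-flow computation.

    def search_with_approximation(exact_test, approx, rough):
        """Find an integer answer without O(log n) expensive exact tests.
        approx(eps) is assumed to return a value v with
        answer <= v <= (1 + eps) * answer, and `rough` is any constant-factor
        estimate of the answer. Choosing eps ~ 1/rough leaves an uncertainty
        interval containing only O(1) integers, so only O(1) exact tests
        (monotone in their argument) remain."""
        eps = 1.0 / (2 * rough)
        v = approx(eps)
        lo = int(v / (1 + eps))      # the answer lies in [v/(1+eps), v]
        hi = int(v)
        for t in range(lo, hi + 1):  # O(1) candidates when rough = Theta(answer)
            if exact_test(t):
                return t
        return hi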

Later in the afternoon, Benjamin Raichel spoke on his work with Avrim Blum and Sariel Har-Peled on approximating high-dimensional convex hulls (in the sense that every input point should be at small distance from the approximation) as the hull of a small sample of points, by repeatedly adding the point farthest from the current hull to the sample. Despite the simplicity of the algorithm, the analysis is complicated and shows that both the number of sample points and the best distance you could achieve with a sample of that size are approximated to within an \( \epsilon^{-2/3} \) factor, independent of dimension.
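The greedy algorithm itself is short enough to sketch. The only nontrivial step is computing the distance from a point to the hull of the current sample; in high dimensions this can be approximated by a few Frank–Wolfe iterations, which is my choice for this sketch (using numpy) rather than necessarily what the authors do.

    import numpy as np

    def dist_to_hull(x, S, iters=64):
        """Approximate distance from x to conv(S) by Frank-Wolfe on
        min ||y - x||^2: repeatedly step toward the sample point best
        aligned with the residual direction x - y."""
        y = S[0].astype(float).copy()
        for t in range(1, iters + 1):
            g = x - y
            s = S[np.argmax(S @ g)]       # linear optimization over vertices
            y += (2.0 / (t + 2)) * (s - y)
        return float(np.linalg.norm(x - y))

    def greedy_hull_sample(P, eps):
        """Greedy sketch: repeatedly add the input point farthest from the
        hull of the current sample, until every point is within eps of it."""
        P = np.asarray(P, dtype=float)
        sample = [0]                      # indices into P; start anywhere
        while True:
            dists = [dist_to_hull(p, P[sample]) for p in P]
            far = int(np.argmax(dists))
            if dists[far] <= eps:
                return P[sample]
            sample.append(far)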

One of the good things about conferences like this is meeting up with people you otherwise wouldn't be likely to encounter. For instance, while grabbing coffee at a nearby bagel shop on Monday morning I ran into Brent Heeringa, whom I had previously met on the boat ride excursion at WADS 2011. Long ago I wrote a paper on reset sequences (synchronizing words) of finite automata, which included an algorithm for finding such a sequence by repeatedly appending the shortest sequence that merges two states. It's not hard to see (although it hadn't occurred to me to mention) that the result is an \( n \)-approximation, and Brent pointed me to recent work by Gawrychowski and Straszak at MFCS 2015 using PCP theory to prove that (unless P = NP) no significantly sublinear approximation ratio is possible. This turns out to be an obstacle to proving the Černý conjecture on the existence of quadratic-length reset sequences, since the same linear factor shows up as the ratio between the known quadratic lower bound and cubic upper bound on reset sequence length.
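That greedy algorithm is short enough to sketch in code (my reconstruction, assuming a synchronizing automaton given as a transition table delta[state][letter], not code from the paper):

    from collections import deque
    from itertools import combinations

    def greedy_reset_word(states, alphabet, delta):
        """Greedy synchronizing-word heuristic: while more than one state is
        still possible, find (by BFS on unordered state pairs) a shortest
        word merging some pair of the remaining states, and append it.
        Assumes the automaton is synchronizing; gives an O(n)-approximation."""
        current = set(states)
        word = []
        while len(current) > 1:
            seeds = {frozenset(pr) for pr in combinations(sorted(current), 2)}
            parent = {p: None for p in seeds}
            queue = deque(seeds)
            found = None
            while queue and not found:
                pair = queue.popleft()
                p, q = tuple(pair)
                for a in alphabet:
                    p2, q2 = delta[p][a], delta[q][a]
                    if p2 == q2:
                        found = (pair, a)
                        break
                    nxt = frozenset((p2, q2))
                    if nxt not in parent:
                        parent[nxt] = (pair, a)
                        queue.append(nxt)
            pair, a = found
            suffix = [a]                    # reconstruct the merging word
            while parent[pair] is not None:
                pair, letter = parent[pair]
                suffix.append(letter)
            suffix.reverse()
            word += suffix
            for letter in suffix:           # apply it to the whole state set
                current = {delta[s][letter] for s in current}
        return word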

We have long known that the power of two choices could be used to prevent conflicts in talk scheduling: after scheduling all the talks once as usual, schedule an equal number of parallel sessions in which every speaker presents their talk a second time, in a random order. This would act much like a cuckoo hash table and allow anyone who doesn't have too long a list of must-see talks to find a schedule for seeing everything they want.

Ironically, one of this year's scheduling victims was Mr. Power of Two Choices himself, Michael Mitzenmacher, who spoke about two data-structural applications of this idea in ANALCO directly opposite the data structures session of SODA, where John Iacono and Stefan Langerman presented an entertaining talk on a problem related to the dynamic optimality conjecture for binary search trees. One property you'd like a binary search tree to have is finger searching: if you search for two items near each other in the tree, the cost should be the log of their distance. Another is the working set property: if you're accessing only a subset of the items, the cost per access should be the log of the size of the subset. A third is static optimality: you should do as well as top-down searches in any static tree. If you combine working sets and fingers, you get a property equivalent to static finger trees: you should do as well as searches starting from the previously accessed item in any static tree. On a long train ride, Iacono and Langerman tried unsuccessfully to understand Cole's proof of the finger searching property for splay trees; instead, they ended up proving the static finger property for GreedyASS trees, a different data structure defined using their previous work on the geometry of binary search trees.
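For concreteness, the first two properties can be written symbolically (standard formulations from the binary search tree literature, my paraphrase rather than anything from the talk):

\[
\begin{aligned}
\text{finger search:}\quad & \mathrm{cost}(x_i) = O\bigl(\log |r(x_i) - r(x_{i-1})|\bigr), \\
\text{working set:}\quad & \mathrm{cost}(x_i) = O\bigl(\log w(x_i)\bigr),
\end{aligned}
\]

where \( r(x) \) is the rank of \( x \) among the stored items and \( w(x_i) \) is the number of distinct items accessed since the previous access to \( x_i \).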

In his invited talk Monday, Sariel Har-Peled surveyed the theory and algorithmics of graphs of polynomial expansion, including his work with Quanrud at ESA 2015 on using this theory to get approximate independent sets for intersection graphs of fat objects. There's a trick there: these graphs may be dense, but the graphs you get by overlaying two independent sets of fat objects have polynomial expansion and therefore a good separator theorem. This property turns out to be enough for a local optimization algorithm to find good independent sets: if your independent set is not close enough to the optimum, then in its overlay with the optimum there is a small piece, bounded by a separator, within which swapping your solution's vertices for the optimum's gives an improvement. Sariel has already put his slides online if you want to learn more.
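A brute-force toy version of such a local search (my own illustration, exponential in the swap size and nowhere near the efficiency of the real algorithm) looks like this:

    from itertools import combinations

    def local_search_independent_set(graph, b=2):
        """Toy local search for independent set, for a graph given as a dict
        from vertices to neighbor sets: repeatedly look for t <= b+1 new
        vertices that are independent and conflict with fewer than t current
        solution vertices, and swap them in. The separator theorem for the
        overlay graphs is what shows bounded-size swaps like these suffice
        to get near-optimal solutions for fat objects."""
        solution = set()
        improved = True
        while improved:
            improved = False
            outside = sorted(set(graph) - solution)
            for t in range(1, b + 2):
                for add in combinations(outside, t):
                    if any(v in graph[u] for u, v in combinations(add, 2)):
                        continue                    # `add` is not independent
                    conflicts = set().union(*(graph[u] for u in add)) & solution
                    if len(conflicts) < t:          # net improvement: swap
                        solution = (solution - conflicts) | set(add)
                        improved = True
                        break
                if improved:
                    break
        return solution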

My own talk Monday was about a class of 4-polytopes whose graphs can be recognized in polynomial time, generalizing the 3-dimensional polyhedra formed from the Halin graphs. We don't know whether recognizing the graphs of all 4-polytopes is polynomial, although I suspect not (it should be complete for the existential theory of the reals). My work also connects this problem to another problem with open complexity, clustered planarity. And the graphs from my paper turn out to have polynomial expansion. Following Sariel's example, here are my talk slides.

Later in the afternoon I wanted to see the talk for a paper by Igor Pak and Scott Garrabrant on counting permutation patterns, but nobody showed up to deliver it. Instead I saw two talks about spanners, sparse graphs that approximate the distances of denser graphs (or of finite metric spaces). Arnold Filtser proved that any metric space has a spanning tree whose weight is proportional to that of the minimum spanning tree and whose average distortion (the ratio of tree distance to ambient distance, averaged over all pairs of points) is constant. And, in one of the two best-paper winners (I didn't see the other one, by student Mohsen Ghaffari, on distributed independent sets), Shiri Chechik showed with Christian Wulff-Nilsen that every graph has a spanner (with distortion \( 2k \), a constant for fixed \( k \), for all pairs of points) whose weight and number of edges are both within a factor of \( n^{1/k} \) of those of the minimum spanning tree. This is optimal modulo a conjecture of Erdős, according to which graphs of girth \( 2k+2 \) can have as many as \( \Omega(n^{1+1/k}) \) edges; deleting any edge from such a graph would give distortion greater than \( 2k \) for the endpoints of the deleted edge.
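For background, the classic greedy spanner (sketched below) already achieves stretch \( 2k-1 \) with \( O(n^{1+1/k}) \) edges by the same girth argument; the weight bound of Chechik and Wulff-Nilsen needs a more careful construction, so this is context rather than their algorithm.

    import heapq

    def greedy_spanner(vertices, edges, k):
        """Classic greedy spanner: scan edges in increasing weight order and
        keep an edge only if the current spanner distance between its
        endpoints exceeds (2k-1) times its weight. The kept subgraph has
        stretch 2k-1 and girth > 2k, hence O(n^{1+1/k}) edges."""
        adj = {v: [] for v in vertices}
        kept = []
        for w, u, v in sorted(edges):        # edges as (weight, u, v) tuples
            if bounded_dist(adj, u, v, limit=(2 * k - 1) * w) > (2 * k - 1) * w:
                adj[u].append((v, w))
                adj[v].append((u, w))
                kept.append((w, u, v))
        return kept

    def bounded_dist(adj, s, t, limit):
        """Dijkstra from s to t, abandoning any path longer than limit."""
        dist = {s: 0.0}
        heap = [(0.0, s)]
        while heap:
            d, x = heapq.heappop(heap)
            if x == t:
                return d
            if d > dist.get(x, float('inf')):
                continue                     # stale heap entry
            for y, w in adj[x]:
                nd = d + w
                if nd <= limit and nd < dist.get(y, float('inf')):
                    dist[y] = nd
                    heapq.heappush(heap, (nd, y))
        return float('inf')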

The final session on Monday was on computational geometry. Kyle Fox showed how to find paths approximating the minimum of the integral of inverse local feature size (a measure of how well the path avoids obstacles), complementing a paper by Nayyeri from WADS on approximating the integral of local feature size (a measure of how well the path stays near to a sampled submanifold). Sariel spoke again, on approximate levels in arrangements; I learned from his talk that lizards are coming, that the convex hulls of the tiles in a tiling form pseudocircles, and that one can triangulate these pseudocircles to get a tiling again by simpler-shaped pieces within the original ones.

Monday evening was the business meeting. Next year's conference will be in Barcelona (according to the presenter at the meeting, home of the world's best soccer team), but at a hotel 20 minutes from downtown. Phil Klein gets the unenviable task of chairing the program committee.

Timothy Chan gave two talks on derandomization: one Monday about low-dimensional linear programming, and a second Tuesday about shaving super-polylogarithmic factors from all-pairs shortest paths (with Ryan Williams). The method also applies to testing whether pairs of vectors are orthogonal, which is almost the same as testing whether pairs of sets are disjoint, which in turn is almost the same as testing whether a pair of graph vertices is not in any triangle, problems for which Tsvi Kopelowitz (with Porat and Pettie) gave 3SUM-based lower bounds in the same session. For instance, one can list all triangles in a \( d \)-degenerate graph in time \( O(md) \) (maybe with a log shaved?), and this bound is tight even for graphs that contain significantly fewer than \( md \) triangles.
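The chain of near-equivalences is worth spelling out, since it is what lets bounds transfer between these problems (my illustration, in the sets-as-indicator-vectors direction):

    def indicator(s, universe):
        """A set over a fixed universe, as a 0/1 vector."""
        return [1 if x in s else 0 for x in universe]

    def disjoint(s, t, universe):
        """Two sets are disjoint iff their indicator vectors are orthogonal."""
        u, v = indicator(s, universe), indicator(t, universe)
        return sum(a * b for a, b in zip(u, v)) == 0

    def edge_in_triangle(adj, u, v):
        """An edge uv lies in a triangle iff the neighborhoods of u and v
        are NOT disjoint, i.e. iff their indicators are not orthogonal."""
        return not disjoint(adj[u], adj[v], universe=adj.keys())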

Jakub Łącki closed out the morning's contributed talks with one on graphs whose degree distribution obeys a power law. He pointed out that for a random graph with this property, one can also use the power law to bound the number of higher-degree neighbors of each vertex (to a function significantly smaller than its degree). Random graphs are unrealistic but real-world graphs seem to obey similar local bounds, and this can be used to develop efficient algorithms. For instance, to find the triangles in such a graph, one can orient every edge from lower to higher degree (giving a DAG) and then search for pairs of outgoing edges.
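That algorithm is simple enough to sketch (for a graph given as a dict from vertices to neighbor sets, with degree ties broken by vertex label so that the orientation is acyclic):

    def triangles(adj):
        """List each triangle exactly once: orient every edge from lower to
        higher degree (ties broken by label), so that each vertex has few
        out-neighbors under the power-law bounds, then test which pairs of
        out-neighbors are themselves adjacent."""
        rank = {v: (len(adj[v]), v) for v in adj}
        out = {v: sorted(u for u in adj[v] if rank[u] > rank[v]) for v in adj}
        for v in adj:
            for i, u in enumerate(out[v]):
                for w in out[v][i + 1:]:
                    if w in adj[u]:
                        yield (v, u, w)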

My favorite talk of Tuesday afternoon was the one by Veit Wiechert, on his work with Gwen Joret and Piotr Micek on sparsity properties of the covering graphs of partially ordered sets. If the covering graph is series-parallel (or simpler) then the underlying partial order must have bounded order-dimension, but there are orders of unbounded dimension whose covering graphs are planar graphs of bounded treewidth. Nevertheless, one can go much farther in this connection between sparsity and dimension by adding a height constraint: the main result of the paper is that for covering graphs of bounded expansion, partial orders of bounded height have bounded dimension. One can't go much farther, because there are bounded-height orders of unbounded dimension whose covering graphs are nowhere dense. This result implies, for instance, that in my WADS paper on parametric closure problems, the polynomial bound for incidence posets of bounded-treewidth graphs immediately extends to incidence posets of bounded-expansion graphs.

Of course, there's much more that I haven't written about (even among the stuff I saw), and I'd be interested in seeing similar reports that any other attendees might write. Please link them in the comments.