I just returned from San Diego, where ALENEX, ANALCO, and SODA were held this year. I'm only going to write about a fraction of what happened at these conferences, in part because (with four different sessions running in parallel much of the time) any one person could only see a fraction of it. Also, I already posted bits about the ALENEX/ANALCO business meeting and the SODA business meeting, so I won't repeat those here.

Sunday's scheduled plenary talk by Peter Winkler on pursuit games was unfortunately cancelled because of illness; instead we got a nice talk on locally decodable codes by Sergey Yekhanin. I also missed a couple of Sunday afternoon talks I wanted to see (Timothy Chan on the Four Russians and Julia Chuzhoy on the wall theorem) because they conflicted with another session of interest to me. In it, Michael Walter described how to list all short lattice vectors in exponential time (but a better exponential than before). Friedrich Eisenbrand showed that an old greedy algorithm for approximating the largest simplex within the convex hull of a high-dimensional point set is better than previously thought, by a nice analysis that involves forming an orthogonal basis for the space that has the same volume as the simplex, grouping the basis vectors by their lengths, and approximating the convex hull by a product of balls, one for each group. And Sepideh Mahabadi showed that if you want to construct a data structure for a set of lines that can find the approximate-nearest line to a query point, the cost is only about the same as for the more standard problem of finding the nearest point to a query point.
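I don't have the paper at hand, but the old greedy algorithm here is, as far as I know, the classical one that repeatedly adds the point farthest from the affine hull of the points chosen so far; the orthogonal basis in the analysis is then just the set of residual vectors that this greedy process computes along the way. A minimal sketch of that greedy step (the function name, starting rule, and general-position assumption are mine, not the paper's):

```python
import math
import numpy as np

def greedy_simplex(points, k):
    """Greedily choose k+1 points whose simplex has large volume.

    Each step adds the point farthest from the affine hull of the points
    chosen so far; the residual vectors form an orthogonal basis whose
    product of lengths, divided by k!, equals the simplex volume.
    Assumes the points affinely span at least k dimensions.
    """
    points = np.asarray(points, dtype=float)
    n, _ = points.shape
    # start at the point farthest from the centroid (any start would do for a sketch)
    start = int(np.argmax(np.linalg.norm(points - points.mean(axis=0), axis=1)))
    chosen = [start]
    basis = []  # orthogonal basis for the affine hull of the chosen points
    for _ in range(k):
        best_i, best_residual, best_dist = None, None, -1.0
        for i in range(n):
            if i in chosen:
                continue
            v = points[i] - points[chosen[0]]
            for b in basis:  # Gram-Schmidt: subtract the projection onto the hull so far
                v = v - (np.dot(v, b) / np.dot(b, b)) * b
            dist = float(np.linalg.norm(v))
            if dist > best_dist:
                best_i, best_residual, best_dist = i, v, dist
        chosen.append(best_i)
        basis.append(best_residual)
    volume = np.prod([np.linalg.norm(b) for b in basis]) / math.factorial(k)
    return chosen, volume
```

The product of the residual lengths divided by \( k! \) is exactly the volume of the chosen simplex, which is the connection to the orthogonal basis whose vector lengths get grouped in the analysis.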

The mystery of the late Sunday session was the meaning of the talk title "The parameterized complexity of K", by Bingkai Lin, the winner of both the best student paper and best overall paper awards. It turned out to be a typo in the program: "K" should have been "\( k \)-biclique". The problem Lin studied is finding a complete bipartite subgraph \( K_{k,k} \) in a given graph; one would expect it to be \( \mathsf{W}[1] \)-hard, like clique-finding, but this wasn't known. Lin proved the expected hardness result by a reduction from clique-finding in which he took a graph product of a graph possibly containing a large clique with another graph having some sort of Ramsey property, and showed that the product graph contains a large biclique if and only if the original graph contains a large clique. He gave two constructions for the Ramsey graph, a randomized one that only blows up the parameter \( k \) (the size of the clique one is trying to find) by a polynomial factor, and a deterministic one that blows it up by a factorial factor. So there is a big gap between deterministic and randomized, but to me that's not surprising when Ramsey theory is involved. The bigger question to me is whether the polynomial blowup of even the randomized reduction can be reduced to constant blowup, so that we could extend the known result that \( n^{O(k)} \) is optimal for clique-finding (unless some form of the exponential time hypothesis is false) to a similar result for biclique-finding.
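For comparison, the brute-force baseline that parameterized hardness results of this kind are measured against is the obvious one: try every \( k \)-subset on one side and check whether at least \( k \) other vertices are adjacent to all of it, roughly \( n^{k+O(1)} \) time. A minimal sketch (my own graph representation, nothing from the paper):

```python
from itertools import combinations

def has_k_biclique(adj, k):
    """Brute force: does the graph contain K_{k,k} as a subgraph?

    adj maps each vertex to its set of neighbors.  We try every k-subset A
    and check whether at least k vertices outside A are adjacent to all of A,
    which is roughly n^(k+1) work -- the kind of running time whose
    optimality is at stake in these hardness results.
    """
    vertices = list(adj)
    for A in combinations(vertices, k):
        common = set(adj[A[0]])
        for a in A[1:]:
            common &= adj[a]
        if len(common - set(A)) >= k:
            return True
    return False
```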

Monday I ended up spending the whole day at ALENEX. UCI student Jenny Lam started the day with online algorithms for a version of caching in which the items to be cached have sizes (like a web cache and unlike a virtual memory system) and must be assigned contiguous locations within the cache memory. A system of breaking the cache memory into blocks, grouping items into FIFO queues of items with similar priority, and representing each queue by a linked sequence of blocks turns out to work well. It gives up a little in performance compared to caching systems that don't worry about the memory placement part of the problem, but most of what it gives up comes from using FIFO instead of LRU within each queue, rather than from the fixed memory placement. Also on Monday morning, Claire Mathieu gave the second of the invited talks, on moving from worst-case inputs to a "noisy input" model, in which one assumes that the input comes from some kind of ground truth with a nice structured solution that has been randomly perturbed; one would like to recover the solution (with high probability) to the extent possible. This turns out to be mathematically almost the same as the "planted solution" model of generating test inputs, in which a random input is perturbed by giving it a solution of higher quality than a random input is likely to have on its own, and then one asks whether this planted solution can be recovered from among the randomness. However, the motivation is different: not to generate test cases, but to capture the fact that real-world inputs are often nicer than worst-case inputs.
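The planted-solution model is easy to make concrete. Here is a minimal sketch of the classic planted-clique instance generator (my choice of illustration, not something from the talk, which treated the model much more generally): generate a random graph, overwrite a random \( k \)-subset with a clique, and ask whether an algorithm can recover it.

```python
import random

def planted_clique_instance(n, k, p=0.5, seed=None):
    """Generate G(n, p) and plant a clique on a random k-subset of the vertices.

    Returns (adjacency sets, planted clique), so one can test whether a
    recovery algorithm finds the planted solution among the randomness.
    """
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    clique = rng.sample(range(n), k)
    for u in clique:
        for v in clique:
            if u != v:
                adj[u].add(v)
                adj[v].add(u)
    return adj, set(clique)
```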

In the first afternoon talk, Sandor Fekete set up an integer program for finding triangulations that maximize the length of the shortest edge. He discussed two lower bounds for the problem, one being the shortest convex hull edge and the second being the shortest interior diagonal that is not crossed by any other diagonal; the second turns out to be more powerful than the first, and he posed as an open question (or actually a conjecture, but with a conjectured answer I don't believe) the problem of determining the expected length of this shortest uncrossed diagonal for random inputs (say, uniform in a square). And Maarten Löffler and Irina Kostitsyna gave what I think was the most entertaining talk of the conference, with hidden party hats under the seats of some audience members and gummy bears on the slides, about algorithms for approximating certain geometric probabilities (whether two random points from two given distributions can see each other around given obstacles). The most memorable talk for me from the late afternoon session was the last one, by my academic brother Pino Italiano, on 2-connectivity in directed graphs. In undirected graphs, you can define a 2-connected block to be a maximal subgraph that can't be disconnected by a vertex deletion, or you can define a 2-connected component to be an equivalence class of the pairwise relation of having two vertex-disjoint paths, but these two definitions give you the same thing. In directed graphs, the blocks and the components are not the same, and you can construct the blocks in linear time but the best algorithms for the components are quadratic. In his ALENEX paper (he also had a SODA paper in this area), Pino implemented and tested several algorithms for these problems, with the surprising result that even though the worst-case performance of the component algorithms is quadratic, their practical performance seems to be linear. So this probably means there are still theoretical improvements to be made.
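Going back to Fekete's talk for a moment: the two quantities he used as bounds are straightforward to compute directly, which also makes his open question about random inputs easy to experiment with. A quick self-contained sketch (my own code, just a literal reading of the two definitions as I stated them above):

```python
import math
import random
from itertools import combinations

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(pts):
    """Andrew's monotone chain; returns hull vertices in counterclockwise order."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def segments_cross(p, q, r, s):
    """Do segments pq and rs cross properly (at a single interior point)?"""
    d1, d2 = cross(p, q, r), cross(p, q, s)
    d3, d4 = cross(r, s, p), cross(r, s, q)
    return (d1 > 0) != (d2 > 0) and (d3 > 0) != (d4 > 0)

def fekete_bounds(pts):
    """Shortest convex hull edge, and shortest diagonal not crossed by any other diagonal."""
    hull = convex_hull(pts)
    h = len(hull)
    hull_edges = {frozenset((hull[i], hull[(i + 1) % h])) for i in range(h)}
    shortest_hull = min(math.dist(hull[i], hull[(i + 1) % h]) for i in range(h))
    diagonals = [e for e in combinations(pts, 2) if frozenset(e) not in hull_edges]
    uncrossed = [math.dist(p, q) for p, q in diagonals
                 if not any(segments_cross(p, q, r, s)
                            for r, s in diagonals if {r, s}.isdisjoint({p, q}))]
    return shortest_hull, (min(uncrossed) if uncrossed else None)

# the random model from the open question: points uniform in a unit square
random.seed(1)
pts = [(random.random(), random.random()) for _ in range(30)]
print(fekete_bounds(pts))
```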

That brings me to today. In the morning, Natan Rubin spoke about systems of Jordan curves that all intersect each other (it is conjectured that most pairs of curves must intersect in more than one point, and he confirmed this for some important special cases) and Andrew Suk spoke about geometric Ramsey theory (for instance, among \( n \) points in general position in the plane you can find a subset of logarithmic size for which all triples have the same order type, meaning the subset is in convex position); he significantly increased the size of the subset one can find for several similarly-defined problems. And Avrim Blum gave the third invited talk, on algorithmic problems in machine learning.
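The "same order type" condition for triples is easy to state in code: order the points (say lexicographically) and ask whether the standard orientation test gives the same nonzero sign for every triple. A point set passing this test forms a convex chain, hence is in convex position. A tiny sketch (my own code, not from the talk):

```python
from itertools import combinations

def orientation(p, q, r):
    """+1 if p, q, r make a left turn, -1 for a right turn, 0 if collinear."""
    v = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (v > 0) - (v < 0)

def all_triples_same_type(pts):
    """True if every triple of the lexicographically ordered points has the
    same nonzero orientation, in which case the points are in convex position."""
    pts = sorted(pts)
    signs = {orientation(p, q, r) for p, q, r in combinations(pts, 3)}
    return len(signs) == 1 and 0 not in signs
```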

In the early afternoon, there were again two sessions I wanted to go to in parallel, so I decided to try switching between them. In one of the two, Joshua Wang spoke on his work with Vassilevska-Williams, Williams, and Yu (how likely is VVW to be first alphabetically among so many authors?) on subgraph isomorphism. For finding three-vertex induced subgraphs, the hardest graphs to find are triangles or their complements, which can be found in matrix-multiplication time. The authors of this work showed that the same time bounds extend to certain four-vertex subgraphs, such as the diamond (\( K_4 \) minus an edge). A randomized algorithm for finding diamonds turns out to be easy, by observing that a certain matrix product counts diamonds plus six times the number of \( K_4 \)s, and then choosing a random subgraph to make it likely that, if a diamond exists, the number of diamonds is nonzero mod six. The harder part of the paper was making all this deterministic. In the other session, Topi Talvitie formulated a natural geometric version of the \( k \)-shortest paths problem: find \( k \) shortest paths that are locally unimprovable, or equivalently find the \( k \) shortest homotopy classes of paths. It can be solved by a graph \( k \)-shortest paths algorithm on the visibility graph of the obstacles, or by a continuous Dijkstra algorithm that he explained with a nice analogy to the levels of a parking garage (see the demo at http://dy.fi/wsn). Luis Barba showed that there are still more tricks to be extracted from Dobkin-Kirkpatrick hierarchies: by using one hierarchy of each type and switching between a polyhedron and its polar as necessary, it is possible to find either a separating plane or an intersection point of two polyhedra in logarithmic time. And Jeff Erickson's student Chao Xu spoke on weakly simple polygons; these are polygons whose edges are allowed to overlap each other without crossing, and both defining them properly and recognizing them efficiently turn out to be interesting problems.
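One counting identity of the kind described in the diamond result is easy to check by brute force (this is my reconstruction; I'm not claiming it is exactly the product used in the paper): if \( A \) is the adjacency matrix, then \( \sum_{uv \in E} \binom{(A^2)_{uv}}{2} \) counts each induced diamond exactly once, through its central edge, and each \( K_4 \) six times, once per edge. A quick sanity check:

```python
from itertools import combinations
import numpy as np

def diamond_k4_identity(A):
    """Check that the sum over edges uv of C((A^2)_{uv}, 2) equals
    (#induced diamonds) + 6 * (#K_4s), by brute-force counting."""
    A = np.asarray(A)
    n = len(A)
    A2 = A @ A  # (A^2)[u][v] = number of common neighbors of u and v
    lhs = sum(int(A2[u][v]) * (int(A2[u][v]) - 1) // 2
              for u in range(n) for v in range(u + 1, n) if A[u][v])
    diamonds = k4s = 0
    for S in combinations(range(n), 4):
        edges = sum(A[u][v] for u, v in combinations(S, 2))
        if edges == 6:
            k4s += 1
        elif edges == 5:  # four vertices spanning five edges induce a diamond
            diamonds += 1
    return lhs, diamonds + 6 * k4s

# sanity check on a random graph
rng = np.random.default_rng(0)
n = 12
A = np.triu(rng.integers(0, 2, size=(n, n)), 1)
A = A + A.T  # symmetric 0/1 adjacency matrix with zero diagonal
print(diamond_k4_identity(A))  # the two numbers should agree
```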

In the final session, I learned from Daniel Reichman about contagious sets: sets of vertices in a graph with the property that if you repeatedly augment the set by adding vertices that have at least two neighbors already in the set, it eventually grows to cover the whole graph. In a \( d \)-regular expander, it turns out that the size of such a set should be around \( n/d^2 \), and Reichman presented several partial results in this direction. Loukas Georgiadis gave the more theoretical of the two talks on directed 2-connectivity. Jakub Tarnawski showed how to generate uniformly random spanning trees in time \( O(m^{4/3}) \), the best known bound for sparse graphs. (The problem can also be solved by taking a random walk until it has covered the whole graph and then selecting the first edge into each vertex, but that is slower.) And Hongyang Zhang formulated a notion of connectivity for pairs of vertices in which one chooses a random spanning forest of a graph and then looks at the probability that the pair belongs to the same tree; this apparently has connections to liquidity in certain financial webs of trust.
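The spreading process behind contagious sets is simple enough to state directly in code; here is a minimal sketch of the threshold-two process (my own representation, adjacency sets keyed by vertex):

```python
from collections import deque

def seed_is_contagious(adj, seed):
    """Simulate the threshold-two process: an uninfected vertex becomes
    infected once at least two of its neighbors are infected.  Returns True
    if the seed set eventually infects the whole graph."""
    infected = set(seed)
    count = {v: sum(1 for u in adj[v] if u in infected) for v in adj}
    queue = deque(v for v in adj if v not in infected and count[v] >= 2)
    while queue:
        v = queue.popleft()
        if v in infected:
            continue
        infected.add(v)
        for u in adj[v]:
            if u not in infected:
                count[u] += 1
                if count[u] == 2:
                    queue.append(u)
    return len(infected) == len(adj)
```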

The ALENEX, ANALCO, and SODA proceedings are all online, there's plenty more of interest beyond what I've mentioned here, and it all appears to be freely accessible without any need for institutional subscriptions.