I'm in Portland, Oregon for SODA, ALENEX, and ANALCO (don't pronounce it the way it looks). There are four parallel sessions – three for SODA, one today for ALENEX, and one tomorrow for ANALCO – so plenty of good talks to choose among, and plenty that for one reason or another I had to miss. Here are some of the highlights of my first day:

Dániel Marx gave a SODA talk on what I think is a very important new result, joint with Sylvain Guillemot: finding patterns in permutations can be done in fixed-parameter tractable time. The methods are also interesting, based on analogies with graph minor theory: he defines a "grid" in a permutation to be a partition of its scatterplot into a grid of non-empty cells by vertical and horizontal lines, and shows (constructively, by a polynomial time algorithm) that every permutation either has a grid of size at least \( r \) or a decomposition of width at most an (exponential) function of \( r \). Here a decomposition is a sequence of operations in which the bounding boxes of two subsets of points in the scatterplot are merged, and its width is the maximum number of bounding boxes that can be hit by an axis-parallel line at any step of the merging process. A grid is a kind of superpattern, so if a large enough one exists then whatever pattern you're looking for also exists. And if on the other hand you get a low-width decomposition, you can do some dynamic programming on it to find whether your pattern exists. (In fact, the largest grid size in a permutation and the largest \( k \) for which it is a \( k \)-superpattern are polynomially related: \( k \) is at least the side length of the grid and less than the area of the next larger grid.)
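(To pin down the problem: a permutation contains a pattern if some subsequence of it has the same relative order as the pattern. Here's a naive brute-force check of my own, exponential in the length of the text permutation, which is exactly the behaviour their fixed-parameter algorithm avoids; none of the grid or decomposition machinery appears in it.)

```python
from itertools import combinations

def contains_pattern(perm, pattern):
    """Naive check of classical permutation pattern containment: is there
    a subsequence of perm whose entries are in the same relative order as
    pattern?  Both inputs are assumed to have distinct values.  This takes
    time exponential in len(perm); the Guillemot-Marx result is that the
    problem is fixed-parameter tractable in len(pattern)."""
    k = len(pattern)
    # Two sequences of distinct values have the same relative order
    # exactly when their argsorts agree.
    pattern_ranks = tuple(sorted(range(k), key=lambda i: pattern[i]))
    for positions in combinations(range(len(perm)), k):
        subseq = [perm[p] for p in positions]
        if tuple(sorted(range(k), key=lambda i: subseq[i])) == pattern_ranks:
            return True
    return False

# 1 3 2 appears in 4 1 5 3 2 (for instance as the subsequence 1 5 3).
print(contains_pattern([4, 1, 5, 3, 2], [1, 3, 2]))   # True
print(contains_pattern([3, 2, 1], [1, 2]))            # False: no increasing pair
```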

At ALENEX, Aurick Qiao spoke on his work with Kushagra, Lopez-Ortiz, and Munro on what appears to be the fastest known variant of quicksort, based on doing a four-way split with three pivots (chosen randomly as the second, fourth, and sixth of seven random selections). They apparently also tried seven pivots but it wasn't as good. Their main insight is that the time is controlled more by cache misses than by the number of comparisons or the number of swaps; these three parameters are all \( \Theta(n\log n), \) but with different constants, and the three-pivot version has the best cache constants.
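Here's a toy sketch of the recursion structure (mine, not theirs): it builds new lists instead of partitioning in place, so it says nothing about the cache behaviour that their speedup actually comes from.

```python
import random

def three_pivot_quicksort(a):
    """Sketch of quicksort with three pivots and a four-way split.
    Illustration of the recursion only; the paper's version partitions
    in place with careful attention to cache misses."""
    if len(a) <= 7:
        return sorted(a)
    # Pivots: the 2nd, 4th and 6th smallest of seven random samples.
    s = sorted(random.sample(a, 7))
    p1, p2, p3 = s[1], s[3], s[5]
    # Four-way split; elements equal to a pivot are set aside so that
    # every recursive call is on a strictly shorter list.
    lo, m1, m2, hi = [], [], [], []
    eq1, eq2, eq3 = [], [], []
    for x in a:
        if x < p1:
            lo.append(x)
        elif x == p1:
            eq1.append(x)
        elif x < p2:
            m1.append(x)
        elif x == p2:
            eq2.append(x)
        elif x < p3:
            m2.append(x)
        elif x == p3:
            eq3.append(x)
        else:
            hi.append(x)
    return (three_pivot_quicksort(lo) + eq1 + three_pivot_quicksort(m1) +
            eq2 + three_pivot_quicksort(m2) + eq3 + three_pivot_quicksort(hi))

data = [random.randrange(1000) for _ in range(200)]
assert three_pivot_quicksort(data) == sorted(data)
```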

Herbert Edelsbrunner's invited talk concerned alpha-shapes, persistence, and recent developments in topological stability, describing a research program that has unfolded over 30 years. He made it seem like it all fit together as a single story, despite warning us that he had no idea where it was all going as it happened.

The first afternoon session at ALENEX contained two graph drawing talks (my student has another at ANALCO tomorrow, but there appear to be none in SODA). Markus Chimani spoke on using weighted SAT solvers (in which the goal is to find the maximum-weight satisfying assignment) to exactly minimize crossings in upward planar drawings (a problem that has been standard in graph drawing for years but until now only solved heuristically). Carsten Gutwenger also described a way of turning a graph drawing problem into a system of Boolean equations, but one that can be solved in polynomial time (linear equations over \( GF(2) \)). The problem he approached with this method is testing whether a hierarchically clustered graph is clustered-planar; his algorithm will say yes, no, or maybe, so it doesn't constitute a polynomial-time solution, but in practice the maybe answer is very rare.
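I don't know the details of their encoding, but the polynomial-time part is just Gaussian elimination over \( GF(2) \); here's a generic solver sketch, with the connection to clustered planarity left entirely to the imagination.

```python
def solve_gf2(A, b):
    """Solve A x = b over GF(2) by Gaussian elimination.  A is a list of
    rows of 0/1 coefficients, b a list of 0/1 right-hand sides.  Returns
    one solution (free variables set to 0) or None if inconsistent.
    Generic solver only; the encoding of the clustered-planarity test
    into such a system is the content of the paper, not reproduced here."""
    rows = [row[:] + [rhs] for row, rhs in zip(A, b)]   # augmented matrix
    n = len(A[0])
    pivot_of = {}                 # column -> index of its pivot row
    r = 0
    for c in range(n):
        # Find a row at or below r with a 1 in column c.
        pivot = next((i for i in range(r, len(rows)) if rows[i][c]), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        # Eliminate column c from every other row (full reduction).
        for i in range(len(rows)):
            if i != r and rows[i][c]:
                rows[i] = [x ^ y for x, y in zip(rows[i], rows[r])]
        pivot_of[c] = r
        r += 1
    # Inconsistent if some all-zero equation has right-hand side 1.
    if any(all(v == 0 for v in row[:-1]) and row[-1] for row in rows):
        return None
    x = [0] * n
    for c, i in pivot_of.items():
        x[c] = rows[i][-1]
    return x

# x0 + x1 = 1, x1 + x2 = 0, x0 + x2 = 1 has the solution x = (1, 0, 0).
print(solve_gf2([[1, 1, 0], [0, 1, 1], [1, 0, 1]], [1, 0, 1]))
```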

In the final ALENEX session, Benjamin Burton gave a fast-paced talk on computational methods in low-dimensional topology. He pointed out that for him, mathematics counts as an application area of algorithms, and he divided algorithmic problems in topology into five classes: theoretically and practically easy (e.g. classifying two-dimensional surfaces), practically easy but theoretically unclear (e.g. recognizing the unknot), hard but still possible, theoretically solvable but practically unusable (e.g. classifying three-manifolds), and theoretically undecidable (e.g. classifying four-manifolds). In many cases, the problems in the second and third categories can be solved by finding the right kind of normal surface within a three-dimensional triangulation. The integer vectors describing normal surfaces form a non-convex subset of a convex polyhedral cone in a high-dimensional space, and in some of the practically easy problems the surface you want can be found among the extremal rays of this cone. But for the problems he was considering today, you instead need the minimal points (the Hilbert basis) among the points interior to the cone; the main point of his paper was that these can be found more quickly by folding the non-convex constraints into the algorithm for finding the Hilbert basis instead of testing them later.
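To illustrate what a Hilbert basis is (and no more than that: this toy has none of the normal surface machinery, the high dimension, or the non-convex constraints of Burton's setting), here's a brute-force computation for a two-dimensional cone, using the fact that for a simplicial cone every Hilbert basis element lies in the fundamental parallelepiped of the generators.

```python
from fractions import Fraction
from itertools import product

def hilbert_basis_2d(u, v):
    """Brute-force Hilbert basis of the 2D cone spanned by linearly
    independent integer vectors u and v: the minimal set of lattice points
    of the cone from which every other lattice point of the cone can be
    built by nonnegative integer combinations.  Toy illustration only."""
    det = u[0] * v[1] - u[1] * v[0]

    def cone_coords(p):
        # Coordinates (a, b) with p = a*u + b*v, by Cramer's rule.
        a = Fraction(p[0] * v[1] - p[1] * v[0], det)
        b = Fraction(u[0] * p[1] - u[1] * p[0], det)
        return a, b

    # Nonzero lattice points of the cone inside the fundamental
    # parallelepiped {a*u + b*v : 0 <= a, b <= 1}; every Hilbert basis
    # element of a simplicial cone lies in this region.
    corners = [(0, 0), u, v, (u[0] + v[0], u[1] + v[1])]
    xs = range(min(c[0] for c in corners), max(c[0] for c in corners) + 1)
    ys = range(min(c[1] for c in corners), max(c[1] for c in corners) + 1)
    candidates = set()
    for p in product(xs, ys):
        a, b = cone_coords(p)
        if p != (0, 0) and 0 <= a <= 1 and 0 <= b <= 1:
            candidates.add(p)

    # Keep the irreducible points: those not expressible as a sum of two
    # nonzero lattice points of the cone (which must also be candidates).
    return [p for p in sorted(candidates)
            if not any((p[0] - q[0], p[1] - q[1]) in candidates
                       for q in candidates)]

# The cone spanned by (1, 0) and (1, 3) needs the interior lattice points
# (1, 1) and (1, 2) in its Hilbert basis as well as the two generators.
print(hilbert_basis_2d((1, 0), (1, 3)))   # [(1, 0), (1, 1), (1, 2), (1, 3)]
```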

At this point I shifted back to SODA for some graph theory talks to end the day. One of these, by Martin Vatshelle (with Daniel Lokshtanov and Yngve Villanger), finally closes out the problem of classifying the complexity of weighted independent set problems on families of graphs characterized by a forbidden induced subgraph of at most five vertices. If the forbidden subgraph has a cycle or a degree-four vertex, the problem is \( \mathsf{NP} \)-complete; otherwise it's solvable in polynomial time. But Martin thinks it's unlikely that such a \( \mathsf{P} \)–\( \mathsf{NPC} \) dichotomy continues to hold for all larger forbidden induced subgraphs.
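The criterion is easy enough to code up for a given five-vertex forbidden subgraph; this is just a transcription of the dichotomy as stated in the talk, nothing from the paper itself.

```python
def dichotomy_side(n, edges):
    """Which side of the dichotomy (as stated above) the forbidden induced
    subgraph H falls on: weighted independent set on H-free graphs is
    NP-complete if H has a cycle or a degree-four vertex, and polynomial
    otherwise.  H is given as a vertex count and an edge list."""
    degree = [0] * n
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    # Cycle detection by union-find: an edge inside one component closes a cycle.
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    has_cycle = False
    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra == rb:
            has_cycle = True
        parent[ra] = rb
    return "NP-complete" if has_cycle or max(degree) >= 4 else "polynomial"

# Forbidding an induced P5 (path on five vertices) leaves the problem
# polynomial; forbidding C5 (a five-cycle) leaves it NP-complete.
print(dichotomy_side(5, [(0, 1), (1, 2), (2, 3), (3, 4)]))          # polynomial
print(dichotomy_side(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]))  # NP-complete
```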