I’m writing this from Helsinki, where ALGO 2018 just finished. ALGO is the conglomeration of the European Symposium on Algorithms with multiple other satellite symposia and workshops; this year it included the 4th International Symposium on Algorithmic Aspects of Cloud Computing (ALGOCLOUD), 14th International Symposium on Algorithms and Experiments for Wireless Networks (ALGOSENSORS), 18th Workshop on Algorithmic Approaches for Transportation Modeling, Optimization, and Systems (ATMOS), 13th International Symposium on Parameterized and Exact Computation (IPEC), 18th Workshop on Algorithms in Bioinformatics (WABI), and 16th Workshop on Approximation and Online Algorithms (WAOA). My two papers were in IPEC, and I mostly attended talks from ESA and IPEC, but with a few others from the other workshops as well.

I’ve already written about my paper on finding obstacle-avoiding subsets of point sets. I’ll make a separate post on my other paper (on parameterized recognition of sparse leaf power graphs) once we have a preprint online; IPEC does its proceedings after the conference, so we haven’t completed our updates from the conference peer review yet. But I thought I’d post here about a few of the talks that caught my attention. These are a fraction of the ones I saw out of a fraction of the ones that could have been seen at the conference, so they’re probably not a very representative sample, but regardless here they are:

  • My favorite contributed talk of Monday morning was by Shay Solomon on his paper with Nicole Wein, “Improved dynamic graph coloring”. Graph coloring can be hard, so this paper encapsulates that hardness by assuming you have a subroutine that can \(k\)-color subgraphs of the graph you’re given, which changes by insertions and deletions of edges. They show how to maintain a coloring with \(O(k\log^2 n)\) colors, using amortized constant vertex-recolorings per update. The trick is to maintain a partition of the vertices of the graph into statically-colored subgraphs, each having a number of vertices that is a power of two and persisting for a number of update steps equal to the same power of two. These subgraphs together cover most of the edges of the graph, in the sense that the remaining edges form a subgraph with logarithmic vertex degree. There are logarithmically many static subgraphs at any time, each using \(k\) colors, so the total number of colors used by them all is \(O(k\log n)\). Because the remaining subgraph has low degree, one can dynamically recolor each of its vertices after each update using logarithmically many colors. The overall coloring is then the Cartesian product of the static and dynamic colorings.
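
    To make the color-combination step concrete, here is a small Python sketch of just that step (my own toy illustration, not the paper’s data structure for maintaining the static subgraphs or the leftover low-degree subgraph):

    ```python
    # Toy illustration of combining a static coloring and a dynamic coloring
    # into one proper coloring by taking pairs of colors (the "Cartesian
    # product" idea).  This only shows the combination step, not the paper's
    # machinery for maintaining the static subgraphs or the leftover graph.

    def greedy_color(graph):
        """Greedily color a graph given as {vertex: set of neighbors}."""
        color = {}
        for v in graph:
            used = {color[u] for u in graph[v] if u in color}
            c = 0
            while c in used:
                c += 1
            color[v] = c
        return color

    def combined_coloring(static_color, leftover_graph):
        """Pair each vertex's static color with its greedy color in the
        leftover (low-degree) subgraph.  If every edge is properly colored
        by at least one of the two colorings, the pairs form a proper
        coloring of the whole graph."""
        dynamic_color = greedy_color(leftover_graph)
        return {v: (static_color[v], dynamic_color.get(v, 0))
                for v in static_color}

    # Example: 4-cycle a-b-c-d-a, where the static coloring handles edges
    # a-b and c-d and the leftover subgraph contains the other two edges.
    static = {'a': 0, 'b': 1, 'c': 0, 'd': 1}
    leftover = {'a': {'d'}, 'b': {'c'}, 'c': {'b'}, 'd': {'a'}}
    print(combined_coloring(static, leftover))
    ```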

  • Each day of the conference had an invited plenary lecture, the first of which was by Claire Mathieu. Claire spoke about her experience with the French national system for assigning high school graduates to university degree programs; there are roughly 800k students, each of whom (by law) is to be matched with a position. They’re using stable matching, of course, but in a somewhat surprising way. Theoretically, one might suppose that the right thing to do is for students and schools to each submit a ranking of who they would like to be matched to, run the Gale–Shapley algorithm, and then return the results to everybody. There are two choices (students propose and schools propose) for how to run this algorithm, but the students-propose variant has the advantage that it incentivizes the students to be honest in their rankings: there is no advantage to be gained by submitting a ranking that differs from your actual ranking. But in practice, students-propose and schools-propose produce results that are 99.9% the same, and the bigger problem is that the students don’t really know their rankings up front (beyond one or two top choices).

    So instead they do the schools-propose version, interactively in a sequence of rounds where the schools actually send out proposals to the students and the students are required to respond within a period of three days to each new proposal (deciding whether to reject it, or accept it and reject whatever earlier offer they had previously accepted). Most of the difficulty in designing the system comes in adjusting the school rankings to account for requirements that some student offers come with dorm rooms and others don’t, that schools are required to take a certain fraction of low-income students, and that schools are required to take a certain fraction of local students. These fractions are baked into the adjusted rankings, so that no matter which prefix of a school’s ranking ends up getting proposals, the required fractions will be included. I found the quick-turnaround rule for student responses problematic, though, and Claire agreed. For instance my son has been working summers at a remote wilderness camp, with no internet access for weeks at a time, and would not have been able to participate in such a system. One could imagine allowing students to preselect their rankings if they wanted to, and Claire says her committee proposed to do that, but unfortunately it was rejected. Another complication is that even though everyone must be matched, the graph of applicant and school rankings does not have a perfect matching; Claire didn’t detail the consolation round needed to deal with students who could not be matched in the main algorithm.
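
    For readers who haven’t seen it, here is a compact Python sketch of the one-to-one, students-propose version of Gale–Shapley (deferred acceptance); the real system is many-to-one with capacities, quotas, and the ranking adjustments described above, all of which this toy ignores:

    ```python
    # Students-propose deferred acceptance (Gale-Shapley), one-to-one version.
    # The real admissions system is many-to-one with capacities, quotas, and
    # adjusted rankings; this sketch omits all of that.

    def gale_shapley(student_prefs, school_prefs):
        """student_prefs[s] and school_prefs[c] are ranked lists of the other
        side (most preferred first).  Returns a dict from school to student."""
        rank = {c: {s: i for i, s in enumerate(prefs)}
                for c, prefs in school_prefs.items()}
        next_choice = {s: 0 for s in student_prefs}   # next school to try
        match = {}                                    # school -> student
        free = list(student_prefs)
        while free:
            s = free.pop()
            c = student_prefs[s][next_choice[s]]
            next_choice[s] += 1
            if c not in match:
                match[c] = s                          # school was unmatched
            elif rank[c][s] < rank[c][match[c]]:      # school prefers s
                free.append(match[c])
                match[c] = s
            else:
                free.append(s)                        # proposal rejected
        return match

    students = {'ann': ['x', 'y', 'z'], 'bob': ['x', 'z', 'y'], 'cat': ['y', 'x', 'z']}
    schools = {'x': ['bob', 'ann', 'cat'], 'y': ['ann', 'cat', 'bob'], 'z': ['cat', 'bob', 'ann']}
    print(gale_shapley(students, schools))   # {'y': 'ann', 'x': 'bob', 'z': 'cat'}
    ```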

  • Another of the contributed talks, on Monday afternoon, was by Michal Opler (with Vít Jelínek and Pavel Valtr) on “Generalized coloring of permutations”. The question here is whether you can partition a given permutation (represented as a sequence of the numbers from \(1\) to \(n\)) into two or more subsequences that belong to specified permutation classes. Their main technique is to describe each permutation class by a nondeterministic logspace recognition algorithm. When this is possible, one can combine these recognizers to produce another \(\mathsf{NL}\) recognition algorithm for the permutations that can be partitioned in this way. Then the fact that \(\mathsf{NL}\subseteq\mathsf{P}\) leads to a polynomial-time algorithm.
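
    To make the problem concrete (this is just brute force on a tiny instance, not their \(\mathsf{NL}\)-based technique), here is a check of whether a permutation can be partitioned into two increasing subsequences, which by Dilworth’s theorem happens exactly when it has no decreasing subsequence of length three:

    ```python
    # Brute-force check of a tiny instance of "generalized coloring of
    # permutations": can the permutation be split into two increasing
    # subsequences?  Just to make the problem concrete; it is not the
    # paper's nondeterministic-logspace machinery.

    from itertools import product

    def increasing(seq):
        return all(a < b for a, b in zip(seq, seq[1:]))

    def two_increasing(perm):
        n = len(perm)
        for colors in product([0, 1], repeat=n):
            parts = [[perm[i] for i in range(n) if colors[i] == c] for c in (0, 1)]
            if all(increasing(p) for p in parts):
                return True
        return False

    print(two_increasing([2, 4, 1, 3, 6, 5]))   # True:  2 4 6 and 1 3 5
    print(two_increasing([3, 2, 1, 4]))         # False: 3 2 1 needs three parts
    ```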

  • Three of the talks involved interesting graph parameters with which I was unfamiliar. On Tuesday Florian Nelles spoke on “Efficient and adaptive parameterized algorithms on modular decompositions” with Stefan Kratsch. Their paper showed that problems like counting triangles could be solved efficiently for graphs of low modular width, the size of the largest prime graph in a modular decomposition of the graph. On Wednesday O-Joung Kwon spoke on “Generalized distance domination problems and their complexity on graphs of bounded mim-width”; the mim-width is the maximum induced matching size among the cuts of a branch decomposition chosen to minimize this number. It has the advantage that it stays small for graph powers, so for instance \(k\)-leaf powers have mim-width one even though other width parameters can grow with \(k\). And Dušan Knop spoke on Thursday on “Integer programming in parameterized complexity: Three miniatures” with Tomáš Gavenčiak and Martin Koutecký. Their paper shows that sum coloring can be solved efficiently in the neighborhood diversity of the graph, the number of distinct neighborhoods that different vertices can have. A graph with \(n\) vertices and bounded neighborhood diversity can be represented using only \(O(\log n)\) bits of information (the number of vertices of each type), so “efficiently” means that the part of the time bound depending on \(n\) should be only logarithmic.
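
    Neighborhood diversity is simple enough to compute directly from its definition; here is a naive quadratic Python sketch (just to make the parameter concrete, not anything from the papers above), grouping vertices that have the same neighbors once they ignore each other:

    ```python
    # Naive computation of neighborhood diversity: two vertices have the same
    # type if they have the same neighbors once they ignore each other
    # (equivalently, equal open neighborhoods or equal closed neighborhoods).
    # Quadratic and unoptimized; only meant to make the parameter concrete.

    def neighborhood_diversity(graph):
        """graph: dict mapping each vertex to its set of neighbors."""
        open_nbr = {v: frozenset(graph[v]) for v in graph}
        closed_nbr = {v: frozenset(graph[v]) | {v} for v in graph}
        types = []                      # one list of vertices per type
        for v in graph:
            for t in types:
                u = t[0]
                if open_nbr[u] == open_nbr[v] or closed_nbr[u] == closed_nbr[v]:
                    t.append(v)
                    break
            else:
                types.append([v])
        return len(types), types

    # Complete bipartite graph K_{2,3}: two types, so neighborhood diversity 2.
    g = {'a': {'x', 'y', 'z'}, 'b': {'x', 'y', 'z'},
         'x': {'a', 'b'}, 'y': {'a', 'b'}, 'z': {'a', 'b'}}
    print(neighborhood_diversity(g))   # (2, [['a', 'b'], ['x', 'y', 'z']])
    ```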

  • Tim Roughgarden gave a very nice invited talk on Tuesday, “How computer science informs modern auction design”. The running example was an auction used by the US FCC in 2016–2017 to reallocate television channels for other uses, in three phases: a reverse auction to buy back old television channels from their owners, a consolidation phase in which the remaining television stations are moved to different frequencies so that the same set of channels is freed nationwide, and a conventional forward auction phase to re-sell the newly-freed channels. A reverse auction involves starting with unrealistically-high prices for whatever it is you want to buy, and then lowering the prices as long as the remaining sellers provide enough supply. There’s a theorem here that this works well when there’s a good reverse greedy approximation algorithm, which could motivate a lot of potential research; reverse greedy algorithms have been somewhat neglected, as forward greedy methods are usually easier to understand and have better approximations. In the coloring case a good approximation guarantee is unlikely, but one can still give up the guarantees and use the same approach. The bigger difficulty is that, to ensure that there is enough remaining supply to free up enough channels after the consolidation stage, the auctioneers need to solve a graph coloring problem at each offer of a price drop to each seller. These problems are of moderate size (around 2,000 vertices and 20,000 edges), so a lot of algorithm engineering was needed to solve them quickly enough to make the auction work.
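
    Here is a heavily simplified sketch of the descending-price loop in such a reverse auction (my own toy abstraction with made-up names and a stand-in feasibility oracle, not the FCC’s actual mechanism): keep lowering offers, let a seller exit when the offer drops below their value, and freeze a seller at the current price when their exit would make the remaining repacking infeasible.

    ```python
    # Toy descending-clock reverse auction.  Each seller has a private value;
    # we lower the highest outstanding offer step by step.  A seller whose
    # offer would fall below their value either exits (if feasibility allows)
    # or is "frozen" and bought at the current price.  The feasibility oracle
    # stands in for the repacking / graph-coloring check; everything here is
    # invented for illustration and is not the FCC's actual mechanism.

    def reverse_auction(values, start_price, step, feasible_without):
        """values: seller -> private value.  feasible_without(exited) says
        whether the market still works if that set of sellers exits and
        keeps their channels."""
        price = dict.fromkeys(values, start_price)
        active, frozen, exited = set(values), set(), set()
        while active:
            s = max(active, key=lambda x: price[x])   # lower the highest offer
            if price[s] - step < values[s]:           # seller would refuse
                active.remove(s)
                if feasible_without(exited | {s}):
                    exited.add(s)                     # let the seller exit
                else:
                    frozen.add(s)                     # buy at the current price
            else:
                price[s] -= step
        return {s: price[s] for s in frozen}          # payments to bought-out sellers

    # Example: we must buy back at least two of the three stations.
    values = {'A': 30, 'B': 55, 'C': 80}
    print(reverse_auction(values, 100, 5, lambda exited: len(exited) <= 1))
    ```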

  • Unfortunately the best student papers from the two tracks of ESA were scheduled opposite each other, so I could only go to one. I chose the track B talk, Max Bannach speaking on his paper with Sebastian Berndt, “Practical access to dynamic programming on tree decompositions”. They implemented two open-source software systems. Jdrasil builds tree decompositions of graphs. Using it one can write, in a hundred or so lines of Java, a program to do simple dynamic programs such as the one for 3-coloring; Bannach presented data showing that this is very competitive with state-of-the-art solvers. The second (and more experimental) system, Jatatosk, can automatically generate these dynamic programming algorithms from a description of the problem in a restricted fragment of the monadic second-order logic of graphs.
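
    To give a flavor of the kind of dynamic program these tools automate, here is a bare-bones Python version of 3-coloring over a path decomposition (a path decomposition is a special case of a tree decomposition; the graph and bags below are made up, and this has none of Jdrasil’s engineering):

    ```python
    # Bare-bones 3-coloring by dynamic programming over a path decomposition:
    # maintain the set of proper colorings of the current bag, forgetting
    # vertices that leave and trying all colors for vertices that enter.
    # The decomposition is assumed to be valid; the example data is made up.

    def three_colorable(edges, bags):
        """edges: set of frozensets {u, v}; bags: list of bags (sets of
        vertices) forming a valid path decomposition of the graph."""
        def proper(coloring, bag):
            return all(coloring[u] != coloring[v]
                       for u in bag for v in bag
                       if u < v and frozenset((u, v)) in edges)

        states = [dict()]          # partial colorings of the current bag
        current = set()
        for bag in bags:
            new_states = []
            for col in states:
                kept = {v: c for v, c in col.items() if v in bag}   # forget
                partial = [kept]
                for v in bag - current:                             # introduce
                    partial = [{**p, v: c} for p in partial for c in (0, 1, 2)]
                new_states.extend(p for p in partial if proper(p, bag))
            # deduplicate bag colorings to keep the table small
            states = [dict(t) for t in {tuple(sorted(s.items())) for s in new_states}]
            current = set(bag)
            if not states:
                return False
        return True

    # K4 minus an edge (3-colorable), bags {1,2,3} and {2,3,4}.
    edges = {frozenset(e) for e in [(1, 2), (1, 3), (2, 3), (2, 4), (3, 4)]}
    print(three_colorable(edges, [{1, 2, 3}, {2, 3, 4}]))               # True
    # K4 (not 3-colorable), a single bag containing everything.
    print(three_colorable(edges | {frozenset((1, 4))}, [{1, 2, 3, 4}])) # False
    ```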

  • The other award-winning ESA talks were scheduled sequentially rather than in parallel, occupying the rest of Tuesday afternoon until the boat-tour excursion and conference dinner. They were Eva Rotenberg’s talk on “Decremental SPQR-trees for planar graphs” (the track A winner, with Jacob Holm, Pino Italiano, Adam Karczmarz, and Jakub Łącki); “An exact algorithm for the Steiner forest problem” (the track B winner, by Daniel R. Schmidt, Bernd Zey, and François Margot); and James Abello presenting his work with former theorists Adam Buchsbaum and Jeff Westbrook from ESA 1998 on external-memory graph algorithms (winner of the test-of-time award) and some of his own later work on visualization of massive graph data based on this paper.

  • Wednesday morning Bart Jansen spoke on “Lower bounds for dynamic programming on planar graphs of bounded cutwidth” (with Bas van Geffen, Arnoud de Kroon, and Rolf Morel). This was the paper that (indirectly, through a cstheory.stackexchange question) inspired one of my own papers from GD 2017, “The effect of planarization on width”. The main idea of Bart’s paper is to develop crossover gadgets for hardness reductions, showing that lower bounds for the parameterized complexity of graph problems can be extended without much change from general graphs to planar graphs.

  • Another talk from Wednesday morning that I enjoyed was Thore Husfeldt’s “Multivariate analysis of orthogonal range searching and graph distances parameterized by treewidth” (with Karl Bringmann and Måns Magnusson), even though really there wasn’t much new in it. The paper looks at computing the diameter of a graph, which Cabello and Knauer had shown to be near-linear-time with fixed-parameter tractable dependence on the treewidth by reducing it to a computational geometry problem, orthogonal range searching, and then applying standard range tree data structures. The usual (Bentley 1980) analysis of range trees solves the recurrence \(T(n,d)=2T(n/2,d)+T(n,d-1)\) to show that their time per operation is \(O(\log^d n)=O(n^{\epsilon}d^{O(d)})\), giving a somewhat high dependence on the width parameter (which becomes the dimension \(d\)). But a different analysis based on more carefully counting the number of paths to each copy of each point in the range tree (Monier 1980) shows that their time per operation is \(O\left(\tbinom{\lceil\log n\rceil+d}{d}\right)=O(n^{\epsilon}2^{O(d)})\), better dependence on the width without really sacrificing anything in the dependence on \(n\).
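
    Just to see how different the two bounds are, one can evaluate the formulas above for a few values of \(n\) and \(d\):

    ```python
    # Evaluate the two range-tree bounds discussed above: the Bentley-style
    # log^d n bound versus the Monier-style binomial(ceil(log n) + d, d).

    from math import comb, ceil, log2

    for n, d in [(10**6, 5), (10**6, 10), (10**9, 10), (10**9, 20)]:
        L = ceil(log2(n))
        print(f"n={n:.0e}, d={d}: log^d n ~ {L**d:.2e}, "
              f"C(ceil(log n)+d, d) = {comb(L + d, d):.2e}")
    ```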

  • Wednesday’s plenary talk was by Mihai Pop, titled “From clustering to variant discovery: Algorithmics opportunities in microbiome research”. A part that particularly caught my attention involved the visualization of assembly graphs, graphs whose vertices represent sequenced pieces of DNA and whose edges represent the possibility of gluing two sequences together into a longer sequence. Apparently the standard tool for this visualization problem is Bandage, which produces 3d visualizations in which the clearly assembled parts form long curved ropes connecting smaller and more-intricate parts where the assembly is still ambiguous. They look pretty but Pop didn’t like them, because they put the visual emphasis on the wrong parts. Instead he’s been developing a tool called Metagenomescope with a very different visualization style that he thinks is more useful.

  • This year’s Nerode Prize winners were Stefan Kratsch and Magnus Wahlström; Wahlström gave the invited talk for the prize on Thursday. It was given for their TALG 2014 paper “Compression via matroids: A randomized polynomial kernel for odd cycle transversal” but the award citation on the FPT wiki and the talk itself also covered material from their follow-up FOCS 2012 paper, “Representative sets and irrelevant vertices: New tools for kernelization”.

  • Friday morning at IPEC was an invited (but not plenary) tutorial on parameterized counting problems by Radu Curticapean, for which he had filmed some beautiful animations involving playdough and M&Ms. He mostly focused on problems of counting copies of small graphs in larger ones, where “copies” can mean subgraphs, induced subgraphs, or homomorphisms. It turns out that these three types of copies each form a basis for the same space of “motifs”, functions that you get as linear combinations of counts of different small graphs, but they behave differently from each other. For the number of induced subgraphs, the only thing you can do is a brute-force search: the exponent of \(n\) in the time bounds for counting the copies of any \(k\)-vertex graph is proportional to \(k\). For the number of subgraphs, the exponent is controlled in the same way by the vertex cover number of the small subgraph you’re looking for, and for the number of homomorphisms, the exponent is controlled by treewidth. But the homomorphism basis is the most fundamental one in the sense that the exponent for computing any motif is the same as the maximum exponent for any of the elements in its expansion in the homomorphism basis.

    The next talk by Marc Roth with Johannes Schmitt (“Counting induced subgraphs: A topological approach to \(\#\mathsf{W}[1]\)-hardness”) continued the same theme, showing that for many natural graph counting problems (like finding the number of \(k\)-vertex induced subgraphs that are connected or bipartite) the exponent-controlling term in the homomorphism expansion is the one for the \(k\)-clique, and that therefore the exponent is necessarily near-linear in \(k\). Their method uses an inclusion-exclusion formula for the coefficient of the \(k\)-clique in the homomorphism expansion and a reduction from this to a simpler inclusion-exclusion formula on the potential edge classes of a circulant graph, previously used in connection with evasiveness by Kahn, Saks, and Sturtevant (FOCS 1983 and Combinatorica 1984). Roth and Schmitt’s paper won both the best paper and best student paper awards at IPEC.
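
    To make the three notions of “copy” from the tutorial concrete, here is a brute-force counter for homomorphisms, subgraph copies, and induced copies of a small pattern in a host graph (exhaustive search over vertex maps; the whole point of the results above is to do better than this):

    ```python
    # Brute-force counts of the three kinds of "copies" of a small pattern H
    # in a host graph G: homomorphisms, (not necessarily induced) subgraph
    # copies, and induced copies.  All maps are counted as labeled, so copies
    # are not divided out by the automorphisms of H.

    from itertools import combinations, product

    def count_copies(h_verts, h_edges, g_verts, g_edges):
        pairs = list(combinations(h_verts, 2))
        homs = subs = inds = 0
        for image in product(g_verts, repeat=len(h_verts)):
            f = dict(zip(h_verts, image))
            if all(frozenset((f[u], f[v])) in g_edges
                   for u, v in pairs if frozenset((u, v)) in h_edges):
                homs += 1                                    # edge-preserving map
                if len(set(image)) == len(h_verts):
                    subs += 1                                # also injective
                    if all((frozenset((u, v)) in h_edges) ==
                           (frozenset((f[u], f[v])) in g_edges) for u, v in pairs):
                        inds += 1                            # also edge-exact
        return homs, subs, inds

    # Pattern: path on 3 vertices.  Host: 4-cycle.
    h_edges = {frozenset((0, 1)), frozenset((1, 2))}
    g_edges = {frozenset(e) for e in [('a', 'b'), ('b', 'c'), ('c', 'd'), ('d', 'a')]}
    print(count_copies([0, 1, 2], h_edges, ['a', 'b', 'c', 'd'], g_edges))
    # (16, 8, 8): 16 homomorphisms, 8 subgraph copies, 8 induced copies.
    ```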

  • The final plenary talk of the conference was by Gerhard Woeginger, speaking on “Some easy and some not so easy geometric optimization problems”. The talk was about a technique for approximating problems that ask for a graph on a set of geometric points minimizing or maximizing the sum of Euclidean edge lengths. When the metric is not Euclidean, but instead a polyhedral distance function, one can choose a normal vector for each facet of the distance function. The length of an edge is obtained by taking the dot products of the endpoints with each normal vector, and choosing the normal vector for which the absolute value of the difference of dot products is maximum. This allows certain geometric optimization problems like MaxTSP to be transformed into a graph problem on the complete bipartite directed graph that has points on one side, normal vectors on the other side, and dot products or (depending on direction) negated dot products as edge weights. A traveling salesperson tour on the points can be expanded into an Euler tour of a subgraph of this bipartite graph that visits the points in the same order, and this expansion is both weight-preserving and reversible. So to solve MaxTSP we need only find the maximum-weight Eulerian subgraph that visits each point-vertex exactly twice. This can be done in \(\mathsf{XP}\) time (parameterized by the number of normal vectors) by guessing the shape of the subgraph (how many of the point-vertices connect each pair of normal-vector-vertices) and then solving a matching problem to pair the actual points with the abstract point-vertices of the guessed shape. By choosing polyhedral metrics that approximate the Euclidean metric with few normal vectors, one obtains a polynomial-time approximation scheme for Euclidean MaxTSP in any bounded dimension, and for many similar maximization problems. But this approach doesn’t work for the usual (minimization) TSP, nor for some other maximization problems that Gerhard discussed. He finished by asking for a more general theory to explain more clearly when we can expect this technique to work and when it fails. Another interesting question for the same line of research is whether the \(\mathsf{XP}\) dependence on the number of normal vectors can be replaced by \(\mathsf{FPT}\) dependence, giving an efficient PTAS.
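
    The distance trick at the core of this is easy to state in code; here is a tiny sketch (the set of normal vectors below is just an arbitrary sample of directions, and this is only the distance computation, not the MaxTSP algorithm itself):

    ```python
    # Polyhedral distances via normal vectors: measure the distance between
    # two points as the maximum absolute difference of their dot products
    # with a fixed set of unit vectors.  With more vectors this approaches
    # the Euclidean distance; the sample of directions below is arbitrary.

    from math import cos, sin, pi, dist

    def polyhedral_distance(p, q, normals):
        return max(abs(sum(ni * (pi_ - qi) for ni, pi_, qi in zip(n, p, q)))
                   for n in normals)

    def sample_normals(k):
        """k evenly spaced unit vectors covering half of the circle."""
        return [(cos(i * pi / k), sin(i * pi / k)) for i in range(k)]

    p, q = (0.0, 0.0), (3.0, 4.0)
    for k in (2, 4, 16):
        print(k, round(polyhedral_distance(p, q, sample_normals(k)), 4))
    print("euclidean", dist(p, q))   # 5.0; the k=16 estimate is within 0.2%
    ```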