My latest preprint, “A stronger lower bound on parametric minimum spanning trees” (arXiv:2105.05371, to appear at WADS) gives examples of graphs, with edge weights that are linear functions of a parameter \(\lambda\), such that different choices of \(\lambda\) lead to \(\Omega(m\log n)\) different minimum spanning trees, improving a bound of \(\Omega\bigl(m\alpha(n)\bigr)\) from one of my earlier papers. But it was almost about a different problem in discrete geometry rather than graph theory, and it almost didn’t happen at all. I thought I had a bound for another related problem until the proof fell apart, irreparably. I was in the process of throwing away my mostly-written draft when I found a different proof, allowing me to rescue the paper.

Here’s the problem I thought I was solving when I started writing the paper: Suppose you want to construct a complicated shape using unions and intersections of simpler shapes. For the version of the problem I was considering, the shapes belong to the Euclidean plane, and the simple shapes that you start with are upper halfplanes, the sets of points above a line. When you take unions or intersections of these shapes, the more complicated shapes that you get are sets of points above piecewise-linear \(x\)-monotone curves. Another way to understand the same setup is that you’re starting with linear functions and building more-complicated piecewise-linear functions by taking pointwise maxima or minima. And what I wanted to know was: If you have a formula expressing a shape using unions and intersections of \(n\) upper halfplanes, or equivalently expressing a piecewise-linear function using maxima and minima of \(n\) linear functions, how complicated can the result be? I thought I had a proof that one could construct shapes with \(\Omega(n\log n)\) vertices, or piecewise-linear functions with \(\Omega(n\log n)\) breakpoints, and when it broke I thought I didn’t have a paper any more.
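
To make the max-min formulation concrete, here is a small Python sketch (the formula-tree representation, the helper names, and the four sample lines are my own illustration, not anything from the preprint). It evaluates a max-min formula over linear functions and estimates how complicated the result is by tracking which input line is active at each sample point.

```python
# Illustrative sketch only: evaluate a max/min formula of linear functions and
# count breakpoints by watching which line attains the value at sample points.
# The formula and line coefficients below are arbitrary examples.

def evaluate(expr, x):
    """Return (value, index of the attaining line) for a nested max/min formula."""
    kind = expr[0]
    if kind == "line":                 # leaf: ("line", a, b, index) means a*x + b
        _, a, b, i = expr
        return (a * x + b, i)
    _, left, right = expr
    lv, rv = evaluate(left, x), evaluate(right, x)
    return max(lv, rv) if kind == "max" else min(lv, rv)

# A formula over four lines: min(max(l0, l1), max(l2, l3)).
l0 = ("line", 1.0, 0.0, 0)
l1 = ("line", -1.0, 1.0, 1)
l2 = ("line", 0.5, 0.2, 2)
l3 = ("line", -0.5, 0.8, 3)
f = ("min", ("max", l0, l1), ("max", l2, l3))

xs = [i / 1000 for i in range(-3000, 3001)]
active = [evaluate(f, x)[1] for x in xs]
breakpoints = sum(1 for p, q in zip(active, active[1:]) if p != q)
print("breakpoints:", breakpoints)   # prints 4: five pieces from four lines
```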

Recursive construction of a piecewise linear function by maxima and minima of simpler functions

The figure above illustrates what I thought was the recursive construction. The base case, in the upper left, is a linear function (a piecewise-linear function with one piece, generated by a max-min formula with one term). In the middle left we have a second-level function, the pointwise maximum of two of these linear functions, with two pieces. On the bottom left we have a third-level function, the pointwise minimum of two second-level functions, with six pieces. And the large image on the right shows a fourth-level function, the pointwise maximum of two third-level functions, with 16 pieces. At each level of recursion, you replace each line by two perturbed copies, getting a breakpoint where they cross. When you take a maximum, each breakpoint that looked like a local maximum expands to three breakpoints, while each breakpoint that looked like a local minimum stays as just a single breakpoint; the case of taking a minimum is symmetric. Setting up and solving a recurrence for the numbers of breakpoints of each type gives \(\Omega(n\log n)\).
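
For what it’s worth, the intended count is easy to tabulate. The recurrence below is my own reconstruction from the description above (an assumption on my part, chosen so that it reproduces the 1, 2, 6, and 16 piece counts of the figure); iterating it shows growth proportional to \(n\log n\) pieces on \(n\) lines, but, as the next paragraph explains, the construction it counts cannot actually be carried out at larger sizes.

```python
# My reconstruction (an assumption, not from the preprint) of the intended
# breakpoint recurrence: track breakpoints that look like local maxima ("peaks")
# and local minima ("valleys"). Taking a pointwise maximum of two perturbed
# copies turns each peak into three breakpoints (peak, valley, peak), keeps each
# valley as one breakpoint, and adds a new valley where the two copies of each
# line cross; taking a minimum is symmetric.

def intended_counts(levels):
    lines, peaks, valleys = 1, 0, 0           # level 1: a single line, no breakpoints
    for level in range(2, levels + 1):
        if level % 2 == 0:                    # even levels take a maximum, as in the figure
            peaks, valleys = 2 * peaks, peaks + valleys + lines
        else:                                 # odd levels take a minimum
            peaks, valleys = valleys + peaks + lines, 2 * valleys
        lines *= 2
        yield level, lines, peaks + valleys + 1   # pieces = breakpoints + 1

for level, n, pieces in intended_counts(12):
    print(f"level {level}: {n} lines, {pieces} pieces")   # 2, 6, 16, 42, ...
```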

The problem was that I couldn’t control the resulting piecewise-linear functions well enough to ensure that I could expand all of the local maxima into triple breakpoints and also produce a new breakpoint for each pair of crossing lines. These two issues are related, because you get a tripled breakpoint only for pairs of pairs of lines that are in a certain above-below relation, and the above-below relation of a pair of lines flips as you pass the breakpoint where they cross. It works for each step in the figure, but only because these cases are still too small for the problems to show up. So the analysis above breaks down.

As well as unions and intersections of shapes, or minima and maxima of functions, there’s a third, graph-theoretic interpretation of the same problem, and that’s where the rewritten paper comes in. The piecewise linear functions that you get from recursive unions and intersections correspond to parametric solutions to the bottleneck shortest path problem: among the paths connecting two fixed vertices of a graph, find one whose heaviest edge is as light as possible, and let \(\beta\) (the bottleneck) be the weight of this heaviest edge. When the edge weights are linear functions of a parameter \(\lambda\), how does \(\beta\) vary as a function of \(\lambda\)? For series-parallel graphs, the two fixed vertices should be the two terminals, series composition of graphs gives you the maximum of their bottleneck functions, and parallel composition of graphs gives you the minimum of their bottleneck functions. So for these graphs, the parametric bottleneck shortest path problem is the same problem that I didn’t solve.
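
As a concrete illustration of this correspondence (the representation and the tiny two-path example are my own, not the construction from the paper), a series-parallel graph can be treated as an expression tree whose leaves are edges with linear weights; the terminal-to-terminal bottleneck is then exactly the same max-min evaluation as in the halfplane version above, just relabeled.

```python
# Sketch of the correspondence: a series-parallel graph as an expression tree
# whose leaves are edges with weight a*lam + b. The terminal-to-terminal
# bottleneck is a max over series compositions and a min over parallel ones.

def bottleneck(expr, lam):
    kind = expr[0]
    if kind == "edge":                       # leaf: ("edge", a, b)
        _, a, b = expr
        return a * lam + b
    _, left, right = expr
    if kind == "series":                     # a path must pass through both parts
        return max(bottleneck(left, lam), bottleneck(right, lam))
    else:                                    # "parallel": a path uses one part or the other
        return min(bottleneck(left, lam), bottleneck(right, lam))

# Toy example: two two-edge paths in parallel between the terminals.
g = ("parallel",
     ("series", ("edge", 1.0, 0.0), ("edge", -1.0, 1.0)),
     ("series", ("edge", 0.5, 0.2), ("edge", -0.5, 0.8)))

for lam in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(lam, bottleneck(g, lam))           # a piecewise-linear function of lam
```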

However, the bottleneck shortest path problem is solved by the minimum spanning tree, in the sense that the path between two vertices in a minimum spanning tree is always a bottleneck shortest path (although there may be other equally good paths). Some of the breakpoints of the bottleneck function, the ones that look locally like a minimum of two linear functions, come from combinatorial changes in the parametric minimum spanning tree, and (by negating everything and swapping min for max if necessary) we can ensure that at least half of the changes in the worst-case bottleneck function come from spanning tree changes in this way. Therefore, lower bounds on the bottleneck problem extend to minimum spanning trees, and upper bounds on minimum spanning trees extend to the bottleneck problem. In fact, my previous \(\Omega\bigl(m\alpha(n)\bigr)\) bound on the spanning tree problem came from an \(\Omega\bigl(n\alpha(n)\bigr)\) bound on two-level piecewise linear functions (minima of maxima of linear functions), and a previous \(O(mn^{1/3})\) upper bound of Tamal Dey on the spanning tree problem implies an \(O(n^{4/3})\) upper bound on the bottleneck problem.
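
Here is a minimal union-find sketch of that fact (the toy graph, names, and weights are illustrative, not from the paper): scanning edges in increasing weight order, the weight at which the two fixed vertices first become connected is the bottleneck value, and the edge achieving it is a minimum spanning tree edge, the heaviest one on the tree path between them.

```python
# Illustrative sketch (toy graph, not from the paper): the s-t bottleneck value
# equals the weight of the edge that first connects s and t when edges are
# scanned in increasing weight order; that edge is also the heaviest edge on
# the minimum spanning tree path from s to t.

def bottleneck_via_kruskal(n, edges, s, t):
    """edges: list of (weight, u, v) on vertices 0..n-1."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]    # path halving
            x = parent[x]
        return x

    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv                  # Kruskal would add this edge to the MST
            if find(s) == find(t):
                return w                     # s and t first connect: bottleneck value
    return None                              # s and t are in different components

edges = [(3, 0, 1), (1, 1, 2), (5, 0, 2), (2, 2, 3), (4, 1, 3)]
print(bottleneck_via_kruskal(4, edges, 0, 3))    # -> 3, via the MST path 0-1-2-3
```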

So when my lower bound for the bottleneck problem fell apart, I started thinking about finding a similar recursive lower bound for spanning trees instead of bottleneck paths, and was more successful. It works more easily because I don’t have to control the piecewise linear functions so carefully in order to keep their crossings and breakpoints intact; instead, I can just take three copies of the lower level of the construction, flatten them by linear transformations so they are each close to a line, with their breakpoints in disjoint intervals of the \(\lambda\)-axis, and combine them as if they were linear. It wouldn’t work for the bottleneck problem, because you would only get a constant number of new breakpoints where one of the recursive copies crosses over to another, but for the spanning tree problem you’re combining trees rather than functions, so you get more breakpoints in these regions.

The figure below gives an example of the construction, a series-parallel graph with six vertices (upper right) and linear edge weight functions (upper left) that produces 12 parametric minimum spanning trees (bottom). The red, blue, and green parts show the three copies of the recursive construction that are combined to form this example. For a more detailed explanation, see the preprint. The preprint also includes a packing argument that transforms the resulting \(\Omega(n\log n)\) bound for \(n\)-vertex series-parallel graphs into an \(\Omega(m\log n)\) bound for graphs whose number \(m\) of edges can be significantly larger than \(n\), but I think that’s mostly a technicality.

Six-vertex series-parallel graph with 12 parametric minimum spanning trees
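
One crude way to see this kind of count for yourself (the tiny graph below is my own toy example, not the six-vertex construction in the figure) is to sweep \(\lambda\) over a fine grid, compute the minimum spanning tree at each sample with Kruskal’s algorithm, and count how many distinct trees show up.

```python
# Illustrative sweep over lambda for a small parametric graph (my own toy
# example). Each edge has weight a*lam + b; we count distinct MST edge sets.

def mst_edges(n, edges, lam):
    """edges: list of (a, b, u, v); returns the MST as a frozenset of edge indices."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    chosen = set()
    for i in sorted(range(len(edges)), key=lambda i: edges[i][0] * lam + edges[i][1]):
        _, _, u, v = edges[i]
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            chosen.add(i)
    return frozenset(chosen)

# A 4-cycle with one chord; each weight is a linear function of lambda.
edges = [(1.0, 0.0, 0, 1), (-1.0, 2.0, 1, 2), (0.5, 0.3, 2, 3),
         (-0.5, 1.2, 3, 0), (0.0, 1.0, 0, 2)]
trees = {mst_edges(4, edges, k / 100) for k in range(-500, 501)}
print("distinct minimum spanning trees seen:", len(trees))
```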

It would be interesting either to find a different construction proving that the halfplane / piecewise linear function / bottleneck path problem can have complexity \(\Omega(n\log n)\), matching this new result, or to prove an upper bound for it that is smaller than the new lower bound for the parametric minimum spanning tree problem, separating the two problems, but that will have to wait for another day.

(Discuss on Mastodon)