Via the Process Algebra Diary, I learned of a recent paper by Andreas Björklund and Thore Husfeldt, Shortest Two Disjoint Paths in Polynomial Time, which just won the Track A best paper award at ICALP. It looks like a very interesting paper; I can see why it won the award.

Here's the problem it solves: you're given a graph with two pairs of terminals $$s_1$$–$$t_1$$ and $$s_2$$–$$t_2$$. The goal is to find two disjoint paths, one connecting each pair of terminals, with minimum total cost. It almost sounds like the problem solved by Suurballe's algorithm, but that one allows either $$s$$ to be connected to either $$t$$, while here we have to connect the correct pairs. The specific version of this problem they solve is for undirected and unweighted graphs, where disjointness means either that the paths share no vertices or that they may share vertices but not edges. The unweighted restriction could easily be relaxed to paths weighted by small integers. Directed or real-weighted versions would also be natural, but I don't know what's known about them, and I don't think this paper's ideas are likely to work for them.
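To make the problem statement concrete, here's a tiny brute-force baseline for the vertex-disjoint version (exponential time, function names mine; the point of the paper is of course a polynomial-time algorithm):

```python
from itertools import product

def simple_paths(adj, s, t):
    """Enumerate all simple paths from s to t; fine only for tiny graphs."""
    stack = [(s, [s])]
    while stack:
        v, path = stack.pop()
        if v == t:
            yield path
            continue
        for u in adj[v]:
            if u not in path:
                stack.append((u, path + [u]))

def shortest_two_disjoint_paths(adj, s1, t1, s2, t2):
    """Cheapest vertex-disjoint pair of paths (by total edge count),
    or None if no disjoint pair exists."""
    best = None
    for p, q in product(simple_paths(adj, s1, t1),
                        simple_paths(adj, s2, t2)):
        if set(p).isdisjoint(q):
            cost = (len(p) - 1) + (len(q) - 1)
            if best is None or cost < best[0]:
                best = (cost, p, q)
    return best

# six-cycle: the only disjoint pair is 0-1-2 together with 3-4-5
cycle6 = {0: [1, 5], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4, 0]}
shortest_two_disjoint_paths(cycle6, 0, 2, 3, 5)  # -> (4, [0, 1, 2], [3, 4, 5])
```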

My understanding of this all is still a bit vague, but here's a summary.

The techniques are algebraic rather than combinatorial, meaning that rather than working with paths directly, the algorithm works with matrices and polynomials and things like that. This is a bit unusual for graph algorithms but not unprecedented; for instance, there are standard maximum matching algorithms that work this way. The number of perfect matchings of a bipartite graph is a matrix permanent, and permanents are difficult ($$\#\mathsf{P}$$-complete) to compute. But if there's only one matching, the permanent is the same as the determinant up to sign: both are just $$\pm 1$$. By using random edge weights from a suitable range of integers (large enough to make the isolation work, small enough not to blow up the computation time) one can "isolate" a single matching with smaller weight than the others, compute a determinant of a matrix constructed from the weighted graph, and use it to find the matching you isolated. The entries of the matrix are powers of two, and the lowest nonzero power of two in the binary representation of the determinant can be used to read off the matching.
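As a rough illustration of this determinant trick in the bipartite case (my sketch, with hand-picked weights standing in for the random ones, and `min_matching_via_determinant` my name rather than anything from the literature): put $$2^{w_{ij}}$$ in the matrix for each edge of weight $$w_{ij}$$; if the minimum-weight perfect matching is unique, its weight is the exponent of the lowest set bit of the determinant, and deleting an edge and retesting reveals whether it belonged to that matching.

```python
def det(M):
    """Exact integer determinant by cofactor expansion (fine for tiny demos)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def lowest_power_of_two(d):
    """Exponent of the lowest set bit of |d| (d must be nonzero)."""
    return (abs(d) & -abs(d)).bit_length() - 1

def min_matching_via_determinant(weights):
    """weights[i][j]: edge weight between left vertex i and right vertex j,
    or None for a non-edge.  Assumes the minimum-weight perfect matching
    is unique (as random re-weighting would arrange with good probability).
    Returns (matching weight, list of matched edges)."""
    n = len(weights)
    A = [[(1 << weights[i][j]) if weights[i][j] is not None else 0
          for j in range(n)] for i in range(n)]
    W = lowest_power_of_two(det(A))  # weight of the isolated matching
    edges = []
    for i in range(n):
        for j in range(n):
            if A[i][j] == 0:
                continue
            B = [row[:] for row in A]
            B[i][j] = 0              # delete the edge and retest
            d = det(B)
            if d == 0 or lowest_power_of_two(d) > W:
                edges.append((i, j))  # deleting it killed the 2^W term
    return W, edges
```

For example, with weights `[[1, 2], [3, 1]]` the two matchings have weights 2 and 5, the determinant is $$2^2 - 2^5 = -28$$, whose lowest set bit is $$2^2$$, recovering the matching weight 2 on edges (0,0) and (1,1).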

The new algorithm is similar in flavor, but trickier, and is inspired by Valiant's work on holographic algorithms. Again, it uses permanents. Modulo 2 the permanent is the same as the determinant (the $$\pm 1$$ signs don't matter), so it can be computed efficiently; what the authors show is that this can be extended in two ways simultaneously: to arithmetic modulo 4 instead of modulo 2, and to matrices whose entries are polynomials mod 4 instead of just numbers. The short summary of this part of the algorithm is that it performs a sequence of pivots to make one column of the matrix even, computes the permanent of a minor of the matrix formed by eliminating the even column, and then does some stuff to get from that result to the permanent of the whole matrix. "Some stuff" involves using permanents of matrices of polynomials mod 2 to compute correction terms to add to the subpermanent.
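The mod-2 starting point is easy to check directly: over GF(2) the signs in the determinant's permutation expansion vanish, so permanent and determinant agree. A quick brute-force sanity check (my code, not the paper's):

```python
from itertools import permutations, product
from math import prod

def permanent(M):
    """Brute-force permanent: sum over permutations, no signs."""
    n = len(M)
    return sum(prod(M[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

def determinant(M):
    """Brute-force determinant via the Leibniz formula, with signs."""
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        sign = 1
        for i in range(n):          # count inversions to get the sign
            for j in range(i + 1, n):
                if p[i] > p[j]:
                    sign = -sign
        total += sign * prod(M[i][p[i]] for i in range(n))
    return total

# permanent and determinant agree mod 2 on every 3x3 0/1 matrix
for bits in product([0, 1], repeat=9):
    M = [list(bits[0:3]), list(bits[3:6]), list(bits[6:9])]
    assert permanent(M) % 2 == determinant(M) % 2
```

Modulo 4 (or higher) the two genuinely diverge, which is why extending the computation past mod 2 needs the pivoting and correction machinery in the paper.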

With these fancy modular permanents in hand, a disjoint-paths instance that has a single optimal solution can be solved in much the same way as the matching problem: cook up a matrix of mod-4 polynomials such that the lowest-degree nonzero coefficient of its permanent can be used to read off the answer. In the general case, when there might be more than one optimal solution, a random weighting (again, from a range of integers that's not too big and not too small) isolates one of them and allows the rest of the algorithm to work.
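The "not too big and not too small" range is the isolation lemma at work: with an $$m$$-element ground set and weights drawn uniformly from $$\{1,\dots,N\}$$, the minimum-weight set in any family is unique with probability at least $$1 - m/N$$, so $$N = 2m$$ already gives probability $$1/2$$. A tiny exhaustive check of the bound (the family here is my toy example, not anything from the paper):

```python
from itertools import product

def unique_minimum_fraction(family, m, N):
    """Fraction of weight assignments w: {0..m-1} -> {1..N} under which
    the minimum-weight set in `family` is unique.  The isolation lemma
    guarantees this is at least 1 - m/N."""
    unique = total = 0
    for w in product(range(1, N + 1), repeat=m):
        costs = [sum(w[i] for i in S) for S in family]
        total += 1
        if costs.count(min(costs)) == 1:
            unique += 1
    return unique / total

# three sets over a 3-element ground set; N = 2m gives the classic 1/2 bound
family = [{0, 1}, {1, 2}, {0, 2}]
assert unique_minimum_fraction(family, m=3, N=6) >= 0.5
```

Picking $$N$$ much larger would also isolate, but the numbers $$2^{w}$$ fed into the determinant or permanent computation grow with $$N$$, which is the "not too big" half of the trade-off.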