Random independent sets in bounded-treewidth graphs
This week my student Daniel Frishberg posted our new preprint “Rapid mixing of the hardcore Glauber dynamics and other Markov chains in bounded-treewidth graphs”, arXiv:2111.03898. Despite the somewhat technical title and content, it’s about an easy-to-explain problem: generating random combinatorial objects (like independent sets in graphs) using random walks. Start with an undirected graph and an empty subset of its vertices, and then repeatedly choose a random vertex, flipping whether it is in or out of the subset whenever the resulting subset remains independent. How many of these steps does it take until the resulting random distribution on independent sets is nearly uniform?
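To make the walk concrete, here is a minimal sketch of it in Python (assuming a networkx graph; the function names `glauber_step` and `approximate_sample` are just for this illustration, not from the preprint):

```python
import random
import networkx as nx

def glauber_step(G, current):
    """One step of the walk: pick a random vertex and flip its membership
    whenever the result is still an independent set."""
    v = random.choice(list(G.nodes))
    if v in current:
        current.remove(v)  # removing a vertex always preserves independence
    elif all(u not in current for u in G.neighbors(v)):
        current.add(v)     # add v only if none of its neighbors are present

def approximate_sample(G, steps):
    """Run the walk from the empty set for a fixed number of steps.
    How large `steps` must be before the result is nearly uniform is
    exactly the mixing-time question the preprint addresses."""
    current = set()
    for _ in range(steps):
        glauber_step(G, current)
    return current
```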
In general, this can take exponentially many steps, but our paper proves that in graphs of bounded treewidth it is only polynomial. Our proof’s exponent depends on the treewidth, but maybe a better proof can get the dependence on treewidth out of the exponent and into the constant factor of the polynomial instead. However, if all you want to do is generate random independent sets, this is not the right way to do it. The point of the paper is less about fast algorithms and more about the connectivity of the space of independent sets. So if you do want a fast algorithm for random generation, what should you do?
This is, unfortunately, an area where some care is required, and where much of the literature does not take that care. Problematic issues include:
- What is the model of computation for arithmetic operations? Much of the literature on the design and analysis of graph algorithms assumes, without much attention, that all arithmetic operations take unit time. This is a reasonable assumption for small numbers (of at most polynomial magnitude), as occur in many algorithms including the random walk above; in those cases it is a close match to the kind of operation that can be done in a single instruction on a modern CPU. It is not a reasonable assumption for counting algorithms dealing with numbers of exponential magnitude. Counting is an important subproblem for exact random generation, so we need large numbers. Typically, the number of independent sets is exponential in the input size, the number of bits required to store this number is linear, and a single operation on a number this large should be assumed to take more than constant time, as in a bignum-package subroutine. The slow steps are multiplications, which, by recent results of Harvey and van der Hoeven, can be assumed to take \(O(n\log n)\) time.
- What is the model for generating random numbers? If random numbers are generated as bits, this only allows the direct generation of random choices with probabilities that are dyadic rational. For independent sets, we need other probabilities. Rejection sampling leads to an algorithm whose running time is itself a random variable, so we need to use expected-time or high-probability bounds instead of worst-case time analysis (see the sketch after this list).
- How is the input presented? For the random-walk method, it is just a graph, but for other algorithms taking advantage of low treewidth we need a good tree decomposition, and finding one is nontrivial. We should either state explicitly that a tree decomposition is given and that its width, rather than the graph’s treewidth, controls the algorithm’s runtime, or factor in the time to find a decomposition and the width of the decomposition that we find.
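To illustrate the first two issues, here is one way to make an exact random choice with probabilities proportional to arbitrary-precision integer weights, in Python (whose built-in integers already act as bignums). In CPython, `random.randrange` is itself implemented by rejection sampling on random bits, so the running time is a random variable with a constant expected number of retries; the helper name below is made up for this sketch.

```python
import random

def pick_proportional(weights):
    """Choose index i with probability weights[i] / sum(weights), exactly.
    The weights may be huge integers; no floating point is involved."""
    total = sum(weights)           # a bignum sum of bignum weights
    r = random.randrange(total)    # uniform on {0, ..., total - 1}
    for i, w in enumerate(weights):
        if r < w:
            return i
        r -= w

# e.g. pick_proportional([10**100, 3 * 10**100]) returns 1 about 3/4 of the time
```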
With that all said, how might we go about computing a random independent set for a given \(n\)-vertex graph of treewidth \(w\), achieving the best theoretical performance?
First, we should find an approximate tree decomposition. There are many known algorithms for this problem, but I think the right choice is the one from the recent paper “An improvement of Reed’s treewidth approximation”, Belbasi and Fürer, WALCOM 2021, arXiv:2010.03105. It finds a tree decomposition of width at most \(5w\), in time \(O(2^{8.8w}n\log n)\). There are other algorithms with a linear dependence on \(n\), but much worse dependence on \(w\), and the \(n\log n\) part of the bound will be dominated by later steps. We can also bootstrap this decomposition, using dynamic programming on it to find a better decomposition, but I think this is too expensive for the savings it produces.
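I won’t attempt to sketch the Belbasi–Fürer algorithm here. As a stand-in for experimentation, the following uses networkx’s min-fill-in heuristic, which produces some tree decomposition (with no approximation guarantee on its width) in the format the later sketches assume: the decomposition is itself a networkx tree whose nodes are bags, represented as frozensets of vertices. The wrapper name is mine, not theirs.

```python
import networkx as nx
from networkx.algorithms.approximation import treewidth_min_fill_in

def rooted_decomposition(G):
    """A heuristic stand-in for an approximate tree decomposition:
    returns (width, decomposition, root), where the decomposition is a
    tree whose nodes are bags (frozensets of vertices of G)."""
    width, decomp = treewidth_min_fill_in(G)
    root = next(iter(decomp.nodes))  # any bag can serve as the root
    return width, decomp, root
```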
Second, we should count independent sets. Root the tree decomposition, so that we can talk about the subtree descending from any bag and the subgraph of the given graph that it corresponds to. Then for each bag of the decomposition (in bottom-up order), and each independent subset of the bag, we want to count the number of independent sets in the corresponding subgraph that intersect the bag in the given subset. These independent sets are formed by combining choices of independent sets in child nodes. Therefore, we need to find the numbers of independent sets in each child node that are consistent with the choice of subset at the parent, and multiply them together. Each subset of a bag contributes to exactly one such computation at its parent, so the total number of arithmetic operations for this computation is the product of the number of bags and the number of subsets per bag. With bignum arithmetic, the total time for computing all of these counts is \(O(2^{5w}n^2\log n)\). Our new preprint cites a paper for this subproblem that isn’t sufficiently careful about bignum arithmetic, but still somehow ends up with a slower cubic total time bound; I think they’re just being sloppy and overcounting somehow.
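Here is a hedged sketch of that counting stage in Python, over an arbitrary rooted tree decomposition in the format of the previous sketch (Python’s built-in integers supply the bignum arithmetic). It groups each child’s table by the restriction of the child’s subset to the parent bag, so that each subset of a bag is handled only once at its parent, as described above.

```python
from itertools import combinations
import networkx as nx

def independent_subsets(bag, G):
    """Yield every independent subset of a bag, as a frozenset."""
    verts = list(bag)
    for r in range(len(verts) + 1):
        for combo in combinations(verts, r):
            if all(not G.has_edge(u, v) for u, v in combinations(combo, 2)):
                yield frozenset(combo)

def count_independent_sets(G, decomp, root):
    """Bottom-up dynamic program over a rooted tree decomposition.
    counts[t][S] = number of independent sets of the subgraph covered by
    the subtree rooted at bag t whose intersection with t is exactly S."""
    parent = {t: p for p, t in nx.bfs_edges(decomp, root)}
    children = {t: [] for t in decomp}
    for t, p in parent.items():
        children[p].append(t)
    counts = {}
    for t in nx.dfs_postorder_nodes(decomp, root):
        # Group each child's table by the restriction of the child subset
        # to the current bag, so each child subset is touched only once.
        child_sums = []
        for c in children[t]:
            by_restriction = {}
            for S_c, cnt in counts[c].items():
                key = S_c & t
                by_restriction[key] = by_restriction.get(key, 0) + cnt
            child_sums.append((c, by_restriction))
        counts[t] = {}
        for S in independent_subsets(t, G):
            total = 1
            for c, by_restriction in child_sums:
                total *= by_restriction.get(S & c, 0)
            counts[t][S] = total
    return counts, children, sum(counts[root].values())
```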
Third, we can then reverse the counting process and step downward through the tree decomposition choosing how the random independent set intersects each bag. When we do this for a bag, we already know the intersection of the independent set with the parent bag. Therefore, all we have to do is choose among the subsets of the child bag that are consistent with that already-made choice, using probabilities that are proportional to the numbers of independent sets that can be formed for each consistent subset. We know these numbers because we already computed them in the second stage. So it’s just a single random choice per bag, which (because it involves bignum probabilities that are not dyadic rational) can reasonably be assumed to take a random amount of time whose expectation is linear in \(n\). Over all bags, the total expected time for this stage is \(O(n^2)\). Also, this stage of the algorithm can be repeated over and over, generating new random independent sets, without having to generate new tree decompositions or new counts.
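A sketch of this downward pass, reusing the tables from the counting sketch; the random choice uses the same exact integer-weighted `randrange` idea sketched after the list of issues above.

```python
import random

def sample_independent_set(root, counts, children):
    """Walk down the rooted decomposition, fixing the intersection of the
    random independent set with each bag, with probabilities proportional
    to the counts from the bottom-up pass."""
    def pick(options):
        # Exact choice proportional to big-integer weights.
        total = sum(w for _, w in options)
        r = random.randrange(total)
        for subset, w in options:
            if r < w:
                return subset
            r -= w

    chosen = {root: pick(list(counts[root].items()))}
    result = set(chosen[root])
    stack = [root]
    while stack:
        t = stack.pop()
        for c in children[t]:
            # Only child subsets consistent with the already-made choice
            # at the parent bag are allowed.
            options = [(S_c, w) for S_c, w in counts[c].items()
                       if S_c & t == chosen[t] & c]
            chosen[c] = pick(options)
            result |= chosen[c]
            stack.append(c)
    return result
```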
Putting it all together, it appears that the total time for randomly generating independent sets, using dynamic programming on the tree decomposition to count these sets, and assuming fast bignum arithmetic, is \(2^{O(w)}n^2\log n\) for the preprocessing stages of the algorithm, and then \(O(n^2)\) for each successive generated set.
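For what it’s worth, the sketches above fit together like this on a small example (all helper names are from my sketches, not from the cited papers):

```python
import networkx as nx

# A minimal end-to-end run of the sketches above.
G = nx.cycle_graph(8)
width, decomp, root = rooted_decomposition(G)
counts, children, total = count_independent_sets(G, decomp, root)
print("independent sets:", total)   # 47 for the 8-cycle (a Lucas number)
for _ in range(3):
    print(sorted(sample_independent_set(root, counts, children)))
```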