I recently posted questions on mathoverflow about chessboard complexes and on TCSexchange about Markov chains on 3-colorings of cycles. There's actually a (tenuous) connection between the two problems, which I wanted to explain here.
The chessboard complex is an abstract simplicial complex formed from an \( m\times n \) grid of points by making an edge between every two points that lie in different rows and different columns, a triangle for every three points in three different rows and columns, a tetrahedron for every four points in four different rows and columns, and more generally a simplex for every set of points no two of which share a row or a column. Another way of saying the same thing, in language familiar to computer scientists, is that the simplices of the chessboard complex correspond to the matchings of a complete bipartite graph: the vertices of the graph correspond to the rows and columns of the grid, the edges of the graph correspond to the grid points, and a set of edges forms a matching if and only if the corresponding set of grid points has no two points in the same row or column.
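To make the matching correspondence concrete, here's a short Python sketch (my own illustration, not taken from any of the papers mentioned below) that enumerates the simplices of the \( m\times n \) chessboard complex as the nonempty matchings of \( K_{m,n} \):

```python
from itertools import combinations

def chessboard_simplices(m, n):
    """All simplices of the m-by-n chessboard complex: nonempty sets of
    grid points with no repeated row and no repeated column, i.e. the
    nonempty matchings of the complete bipartite graph K_{m,n}."""
    points = [(r, c) for r in range(m) for c in range(n)]
    simplices = []
    for k in range(1, min(m, n) + 1):
        for s in combinations(points, k):
            rows = {r for r, c in s}
            cols = {c for r, c in s}
            if len(rows) == k and len(cols) == k:  # no shared row or column
                simplices.append(s)
    return simplices

# The 2x3 complex has 6 vertices and 6 edges (and nothing bigger).
faces = chessboard_simplices(2, 3)
print(len([s for s in faces if len(s) == 1]))  # 6
print(len([s for s in faces if len(s) == 2]))  # 6
```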
The obvious chessboard complexes to look at are the square ones (the ones formed from an \( n\times n \) grid), because those are the ones with the most symmetries: as well as permuting the rows and the columns independently of each other, you can also reflect the whole grid across its diagonal. But recent work by Blagojević, Matschke, and Ziegler on Tverberg theorems brought my attention to the fact that the nearly square chessboard complexes formed from an \( n\times (n+1) \) grid may be a little nicer. The \( 2 \times 3 \) chessboard complex is just a six-vertex cycle. (Try drawing it!) In the \( 3 \times 4 \) complex, the neighborhood of every vertex looks like a copy of the \( 2 \times 3 \) chessboard complex: that is, every vertex is surrounded by a six-cycle of triangles, meeting each other edge to edge. So, locally, every point looks like it belongs to a nice triangulated surface, although this surface is all tangled up when you try to draw it in the \( 3 \times 4 \) grid that it comes from:
However, it's possible to untangle it to get a nice triangulation of the torus. I've drawn it as a \( 4 \times 5 \) array of vertices rather than \( 3 \times 4 \) because the vertices repeat around the boundary, showing how the sides of the rectangle should be glued together to form a torus. The numbers in each vertex are its coordinates in the original \( 3 \times 4 \) grid.
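If you'd rather not draw the \( 2 \times 3 \) complex by hand, here's a brute-force Python check that it really is a single six-cycle (every vertex has degree two, and walking the cycle visits all six vertices):

```python
from itertools import product

# Vertices of the 2x3 chessboard complex, with edges between pairs of
# points in different rows and different columns.
points = [(r, c) for r in range(2) for c in range(3)]
adj = {p: set() for p in points}
for p, q in product(points, points):
    if p < q and p[0] != q[0] and p[1] != q[1]:
        adj[p].add(q)
        adj[q].add(p)

# Every vertex has degree 2...
print(all(len(nbrs) == 2 for nbrs in adj.values()))  # True
# ...and walking forward from any starting edge visits all six vertices.
cycle = [points[0], next(iter(adj[points[0]]))]
while len(cycle) < 6:
    cycle.append(next(v for v in adj[cycle[-1]] if v != cycle[-2]))
print(len(set(cycle)))  # 6
```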
My mathoverflow question is about trying to similarly untangle the \( 4 \times 5 \) chessboard complex. Although the \( 3 \times 4 \) complex is a manifold (every point looks locally like a point on the plane), the \( 4 \times 5 \) complex is not: most points look locally like a point in space, but the \( 20 \) grid points have neighborhoods that are \( 3 \times 4 \) tori rather than the spheres that a manifold would have. Technically, it's a pseudomanifold, and so are all the bigger \( n \times (n+1) \) chessboard complexes. But if you poke 20 little holes in the \( 4 \times 5 \) complex, deleting its vertices, the remaining parts do form a manifold, and (as it turns out) a nice one: it has a lot of symmetries, and can also be represented as the portion of Euclidean space that surrounds a link formed by tying 20 loops of thread together into some sort of tangle. The boundary of a tubular neighborhood of each loop in this tangle is topologically a torus, corresponding to the torus neighborhood of one of the deleted points.
Ok, so that's the chessboard complex part. What about colorings?
There's been a lot of work in theoretical computer science on randomly sampling from the colorings of a graph, generally by setting up a symmetric Markov chain that has the colorings as its states and the same probability of going from coloring \( X \) to coloring \( Y \) as of going from \( Y \) back to \( X \). This property (together with a few more technical properties) guarantees that, if the Markov chain runs long enough, it will be close to equally likely to be in any of its states; the question is, how long is long enough? The result one wants to prove is that the Markov chain is rapidly mixing: that is, that it converges to equiprobable in only a polynomial number of steps, so that one can use it as part of a polynomial-time sampling algorithm.
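As a toy illustration of why symmetry gives the uniform distribution (this is a generic four-state chain, not any particular coloring chain): a symmetric transition matrix is doubly stochastic, so the uniform distribution is stationary, and for a connected aperiodic chain repeated steps drive any starting distribution toward it.

```python
# A symmetric transition matrix for a 4-state chain: a random walk on a
# path with holding probability at the two endpoints.
P = [[0.5, 0.5, 0.0, 0.0],
     [0.5, 0.0, 0.5, 0.0],
     [0.0, 0.5, 0.0, 0.5],
     [0.0, 0.0, 0.5, 0.5]]

dist = [1.0, 0.0, 0.0, 0.0]           # start concentrated in state 0
for _ in range(200):                  # run the chain for many steps
    dist = [sum(dist[i] * P[i][j] for i in range(4)) for j in range(4)]
print([round(x, 3) for x in dist])    # [0.25, 0.25, 0.25, 0.25]
```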
We can't hope to efficiently sample colorings that use an optimal number of colors, because if we could then we could color the same graphs efficiently (by sampling a coloring and then using that coloring), and coloring is NP-complete. But maybe we can sample colorings with a number of colors that, though not necessarily optimal, is close to some provable bound? For instance, Brooks' theorem guarantees that, if a graph has maximum degree \( \Delta \), then with only two exceptions (cliques and odd cycles) it has a coloring with \( \Delta \) colors. And much of the work on random sampling of colorings has shown that, with a similar guarantee on the number of colors, one can randomly sample the colorings. For instance, it is known that one can sample the 5-colorings of graphs with \( \Delta = 3 \) (Bubley, Dyer, and Greenhill, SODA 1998), and it is conjectured more generally that this method works whenever the number of colors is at least \( \Delta + 2 \) (see, e.g., this nice survey by Frieze and Vigoda, which includes many more related results).
But what if we're more ambitious, and we want to sample from exactly the optimal colorings of a graph? Of course, to avoid the NP-completeness issue we should restrict our attention to graphs that we already know how to optimally color, and that have a nontrivial space of optimal colorings. The first examples that come to mind (for me, at least) are the odd cycles: they are 3-colorable, but they have a number of 3-colorings that grows asymptotically as \( 2^n \) (more precisely, it's the nearest multiple of six to \( 2^n \)). For instance, the 5-cycle has 30 different colorings; if we connect them into a Markov chain using the Wang–Swendsen–Kotecký dynamics (in which one finds a connected component in the subgraph induced by two of the colors and swaps the colors within that component) the state space looks like this (in Lombardi Spirograph style):
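The counting claim is easy to verify by brute force; here's a quick Python sketch:

```python
from itertools import product

def count_3colorings_cycle(n):
    """Count proper 3-colorings of the n-cycle by exhaustive search."""
    return sum(
        all(c[i] != c[(i + 1) % n] for i in range(n))
        for c in product(range(3), repeat=n)
    )

# The 5-cycle has 30 colorings; in general the count is 2^n + 2*(-1)^n,
# which for odd n equals 2^n - 2, the nearest multiple of six to 2^n.
print(count_3colorings_cycle(5))  # 30
print(all(count_3colorings_cycle(n) == 2**n + 2 * (-1)**n
          for n in range(3, 9)))  # True
```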
Incidentally, the Glauber dynamics for the same 5-cycle, in which we try to change the color of one vertex at a time, doesn't work: it includes only the edges that connect the 15 outer states into a 15-cycle and the edges that connect the 15 inner states into another 15-cycle. There's no way to get from an outer state to an inner state or vice versa. Of course, generating random 3-colorings of an odd cycle is actually pretty easy to do directly, but it still seems interesting to test whether the Markov chain approach can be made to work for this case, and that was the subject of my TCSexchange question.
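Both claims — that the Glauber dynamics splits the 30 colorings of the 5-cycle into two components of 15, while the full Wang–Swendsen–Kotecký dynamics connects them all — can be checked by brute force. A Python sketch:

```python
from itertools import combinations, product

n = 5
edges = [(i, (i + 1) % n) for i in range(n)]
colorings = [c for c in product(range(3), repeat=n)
             if all(c[i] != c[j] for i, j in edges)]

def components(graph):
    """Connected components of a graph given as a dict of adjacency lists."""
    seen, comps = set(), []
    for v in graph:
        if v not in seen:
            comp, stack = set(), [v]
            while stack:
                u = stack.pop()
                if u not in comp:
                    comp.add(u)
                    stack.extend(graph[u])
            seen |= comp
            comps.append(comp)
    return comps

# Glauber dynamics: recolor a single vertex at a time.
glauber = {c: [] for c in colorings}
for c, d in combinations(colorings, 2):
    if sum(x != y for x, y in zip(c, d)) == 1:
        glauber[c].append(d)
        glauber[d].append(c)
print(sorted(len(k) for k in components(glauber)))  # [15, 15]

# Wang-Swendsen-Kotecky dynamics: swap two colors on a Kempe chain.
def kempe_moves(c):
    for a, b in combinations(range(3), 2):
        sub = [v for v in range(n) if c[v] in (a, b)]
        subg = {v: [] for v in sub}
        for i, j in edges:
            if i in subg and j in subg:
                subg[i].append(j)
                subg[j].append(i)
        for comp in components(subg):
            d = list(c)
            for v in comp:
                d[v] = a + b - d[v]  # exchange colors a and b
            yield tuple(d)

wsk = {c: list(kempe_moves(c)) for c in colorings}
print(len(components(wsk)))  # 1
```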
So what's the connection between chessboard complexes and Markov chains of graph colorings? Let's simplify the set of moves of the Markov chain by only allowing Wang–Swendsen–Kotecký moves that change the colors of one or two vertices (in the diagram above, that means deleting all the straight edges that pass through the center of the diagram), and in exchange let's look at a more complicated graph than an odd cycle: the Petersen graph. The Petersen graph can be viewed as an odd graph, the graph whose vertices are the two-element subsets of a five-element set and whose edges connect disjoint two-element subsets. As with odd graphs more generally, the maximum independent sets all have the same form as each other: they are formed by choosing an element \( x \) of the five-element set and including all two-element subsets that contain \( x \). This means that there are exactly five maximum independent sets, that each of them contains exactly four vertices, and that no two of them are disjoint. Every 3-coloring of the Petersen graph must have at least one of these five maximum independent sets as a color class (otherwise the color classes would be too small to add up to the ten vertices of the graph), and exactly one (no two of them are disjoint, so no two can both be color classes). The remaining six vertices induce a perfect matching, and each matched pair can be colored in one of two ways. Therefore, the Petersen graph has \( 5 \times 3 \times 2^3 = 120 \) optimal colorings.
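The count of 120 can be confirmed by brute force, using the two-element-subsets description of the Petersen graph:

```python
from itertools import combinations, product

# Petersen graph as the Kneser graph K(5,2): vertices are the ten
# 2-element subsets of a 5-element set, edges join disjoint subsets.
verts = list(combinations(range(5), 2))
edges = [(i, j) for i, j in combinations(range(10), 2)
         if not set(verts[i]) & set(verts[j])]

# Exhaustively count proper 3-colorings over all 3^10 assignments.
count = sum(all(c[i] != c[j] for i, j in edges)
            for c in product(range(3), repeat=10))
print(count)  # 120
```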
The simplified Markov chain on these colorings allows two kinds of moves: we can swap the colors of the endpoints of one of the edges in the matching, or we can change the color of one of the vertices in the maximum independent set. For a fixed choice of the maximum independent set and its color, there are eight ways to color the six matched vertices, and eight single-vertex color change operations available on the maximum independent set vertices; each of these color change operations is consistent with exactly one of the colorings of the matching. Therefore, each 3-coloring of the Petersen graph has exactly four neighboring colorings: three in which we change the colors of a matching edge and one in which we change the color of a maximum independent set vertex.
What does this state space look like, as a big graph on 120 vertices? It's a little hard to draw the whole thing, and I haven't tried. But if we only use the steps that recolor the two endpoints of a matching edge, what we get is a graph formed by 15 little cubes. Within each cube, we can't change the colors of the vertices in the maximum independent set, but we can flip each of the three matched edges independently. The 15 cubes can be thought of as forming a \( 3 \times 5 \) grid: there are three colors that can be used to color the maximum independent set, and five choices of which maximum independent set to use. Each little cube has eight edges connecting it to other little cubes, one for each of its vertices. And the eight neighboring cubes are exactly the ones that are not in the same row or column of the \( 3 \times 5 \) grid: whenever we recolor a single vertex, the new coloring uses a different maximum independent set and uses a different color for that set. So the state space of 3-colorings of the Petersen graph looks like a grid of 15 cubes, connected in the pattern of a \( 3 \times 5 \) chessboard complex! This also shows that both the one-vertex and the two-vertex recoloring moves are necessary: with only the one-vertex recolorings, we'd only be able to reach one other coloring from each state, and with only the two-vertex recolorings, we'd only be able to stay within one of the cubes; together, they allow navigation both within the cubes and across the \( 3 \times 5 \) chessboard complex that provides the backbone structure of the state space.
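For the skeptical, here's a brute-force Python sketch checking two consequences of this structure: the simplified chain is 4-regular, and the combined one- and two-vertex moves connect all 120 colorings.

```python
from itertools import combinations, product

# Petersen graph as the Kneser graph K(5,2), and its 120 3-colorings.
verts = list(combinations(range(5), 2))
edges = [(i, j) for i, j in combinations(range(10), 2)
         if not set(verts[i]) & set(verts[j])]
edge_set = set(edges)
colorings = [c for c in product(range(3), repeat=10)
             if all(c[i] != c[j] for i, j in edges)]

def neighbors(c):
    """Proper colorings reachable by recoloring one vertex, or by
    swapping the colors of the two endpoints of a graph edge."""
    out = []
    for d in colorings:
        diff = [v for v in range(10) if c[v] != d[v]]
        if len(diff) == 1:
            out.append(d)
        elif (len(diff) == 2 and (diff[0], diff[1]) in edge_set
              and c[diff[0]] == d[diff[1]] and c[diff[1]] == d[diff[0]]):
            out.append(d)
    return out

# Every coloring has exactly four neighbors...
degs = {len(neighbors(c)) for c in colorings}
print(degs)  # {4}

# ...and the combined moves connect the whole space.
seen, stack = {colorings[0]}, [colorings[0]]
while stack:
    for d in neighbors(stack.pop()):
        if d not in seen:
            seen.add(d)
            stack.append(d)
print(len(seen))  # 120
```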
As one final tangent, the \( 3 \times 5 \) chessboard complex itself contains many cube subgraphs. A \( 2 \times 4 \) chessboard complex is itself a cube graph (draw it and untangle it!), and the neighborhood of a vertex in the \( 3 \times 5 \) complex is a copy of the \( 2 \times 4 \) complex. So each vertex has a cube as its neighborhood. Unfortunately this doesn't seem to mean that the \( 3 \times 5 \) complex itself, or the Petersen coloring space, has any kind of nice polyhedral structure.
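If you'd rather not untangle the \( 2 \times 4 \) complex by hand either, a (shamelessly brute-force) isomorphism search over all \( 8! \) vertex bijections confirms that its graph is the graph of the 3-dimensional cube:

```python
from itertools import combinations, permutations, product

# 1-skeleton of the 2x4 chessboard complex: edges between points
# in different rows and different columns.
pts = [(r, c) for r in range(2) for c in range(4)]
chess = {frozenset((p, q)) for p, q in combinations(pts, 2)
         if p[0] != q[0] and p[1] != q[1]}

# The graph of the 3-dimensional cube: 0/1 vectors differing in one bit.
cube_pts = list(product((0, 1), repeat=3))
cube = {frozenset((p, q)) for p, q in combinations(cube_pts, 2)
        if sum(a != b for a, b in zip(p, q)) == 1}

def is_isomorphic():
    """Try all 8! bijections from grid points to cube vertices."""
    for perm in permutations(cube_pts):
        f = dict(zip(pts, perm))
        if {frozenset((f[p], f[q])) for p, q in map(tuple, chess)} == cube:
            return True
    return False

found = is_isomorphic()
print(found)  # True
```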