I spent much of the past week in Snowbird, Utah, for the annual Symposium on Computational Geometry and its satellite Workshop on Massive Data Algorithmics. A random selection of SoCG highlights (with apologies to anyone whose research I'm misrepresenting or omitting):

  • Helmut Pottmann gave an excellent invited talk Monday morning on the design of buildings as free-form shapes using meshes of triangular or quadrilateral glass panes separated by girders and joints. Although these sorts of meshes have been widely used in finite element simulation and computer graphics, their architectural applications involve different constraints, both aesthetic (the panes should be nicely shaped, and the lines of the girders should flow well across the surface of the building; some architects prefer uniform sizes of panes, while others prefer them to vary) and structural (it is much easier to make flat panes of glass than curved ones, the joints are easier to make when the axes of adjacent joints are coplanar, and the girders meet up more neatly at the joints when the mesh has a combinatorially equivalent parallel mesh offset from it by a constant distance). And of course the constraints lead to interesting algorithmic problems of constructing these meshes (there's a toy version of the pane-flatness test after this list). It made me want to dig out all my old papers on quad-meshing by circle packing and semi-structured quad mesh decomposition and think about how some of the ideas from those papers could fit into this new application.

  • Ken-ichi Kawarabayashi spoke about the unlikely success of his co-authors' home countries of Japan, Switzerland, and Slovenia in the World Cup, and, more relevantly, about flat embeddings: embeddings of a graph into space in such a way that every cycle is the boundary of a disk that is not crossed by any other edge of the embedding. It was known that an embedding of this type exists if and only if the graph has no minor in the Petersen family (seven small graphs including the Petersen graph and K6), but that isn't helpful for actually finding the embedding. Ken-ichi's algorithm solves the embedding problem efficiently by repeatedly simplifying large planar parts of the graph (it turns out that their 3d embedding has to respect the planar embedding) until the remaining graph has low treewidth, at which point it is possible to finish off the problem using dynamic programming. There's a connection to the still-open problem of the computational complexity of testing unknottedness (known to be in NP, but not known either to be NP-complete or to be solvable in polynomial time): it turns out to be equivalent to testing whether a given embedding of a graph is flat. So there's some hope that progress on flat embeddings might lead to progress in computational knot theory.

  • Sergio Cabello's co-authors Francis Lazarus and Éric Colin de Verdière presented back-to-back talks in the Tuesday morning parallel sessions about quickly finding short topologically nontrivial cycles in graphs embedded on two-dimensional surfaces. The first of the two papers concerned unweighted undirected graphs; the desired cycle can be found by adding one edge to a breadth-first search tree rooted at one of the vertices of the cycle (the basic tree-plus-edge step is sketched in code after this list), but doing this for every possible root gives only a quadratic-time algorithm (linear time to find the best cycle formed from each of a linear number of trees). The key new idea is to combine many small partial breadth-first search trees with different roots into one big tree, reducing the total number of trees that need to be processed in this way. The second paper used a similar idea of adding edges to trees to form candidate cycles; one needs two trees (single-source and single-destination shortest path trees) to determine the cycle itself and its length, but surprisingly, only one of these trees is needed for testing the nontriviality of the cycles. The overall algorithm uses separator-based divide and conquer to control the combination of the number and sizes of the trees.

  • Afra Zomorodian presented a paper about computing topological information, such as persistent homology, for shapes represented as point clouds in a Euclidean space. The traditional approach has been to spend a lot of effort getting a nice low-dimensional (geometric) simplicial complex representing the shape, such as an alpha complex (the Delaunay triangulation with the big simplices removed), and to justify all this effort on the basis that the topological computation that follows is going to be more expensive anyway. But as Afra demonstrated with implementation timings, the topological part is actually quick: the slow part is building the complex. Instead, he showed that these computations can be greatly sped up by starting with a more simply defined cell complex that has much higher dimension than the ambient space (the Vietoris–Rips complex), representing it implicitly using only its maximal simplices (see the small maximal-clique sketch following this list), and then performing various topology-preserving simplifications to find an equivalent complex that is small enough to be represented explicitly.

  • Igor Pak spoke very entertainingly about acute triangulations (partitions of a space or polyhedron into simplices, meeting face to face, in which all dihedral angles are strictly less than 90 degrees); a quick dihedral-angle check appears below the list. I was one of the co-authors of an earlier paper on the subject that constructed acute triangulations of three-dimensional space; Pak and his co-authors (and another team, in a different paper) improved on this by finding acute triangulations of the cube and the other Platonic solids, and they conjectured that every convex polyhedron has such a triangulation. In five dimensions there are no nontrivial acute triangulations, because it's not even possible to surround a single vertex by acute simplices. But in four dimensions there's still some hope: Pak and his co-authors showed that there is no acute triangulation with translational symmetry (and in particular no acute triangulation of the hypercube), but maybe it's possible to find an acute triangulation of 4-space in which the dihedral angles approach 90 degrees as one moves away from the origin.
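
Returning to Pottmann's talk: to make the structural constraints concrete, here is a minimal sketch (my own, nothing from the talk; the function names and tolerance are arbitrary) of the most basic one, testing whether a quadrilateral pane is flat. A quad is planar exactly when its four corners lie in a common plane, which a scalar triple product detects.

```python
# Hypothetical sketch: is a quadrilateral glass pane with corners p0..p3 flat?
# The pane is planar exactly when the tetrahedron p0 p1 p2 p3 has zero volume.

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def is_planar_quad(p0, p1, p2, p3, tol=1e-9):
    # Six times the signed volume of the tetrahedron on the four corners.
    volume6 = dot(cross(sub(p1, p0), sub(p2, p0)), sub(p3, p0))
    return abs(volume6) <= tol

print(is_planar_quad((0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)))    # True: flat pane
print(is_planar_quad((0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0.3)))  # False: warped pane
```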
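
For the Cabello–Lazarus–Colin de Verdière result on unweighted graphs, the primitive described above is easy to state in code: a breadth-first search tree plus one non-tree edge determines a candidate cycle through the root. The sketch below is my own reconstruction, for a single root and a connected graph, ignoring the surface topology and the clever merging of trees that makes the real algorithm fast; it just returns the shortest such candidate.

```python
from collections import deque

def bfs_tree(graph, root):
    # Breadth-first search distances and parents from the root.
    dist, parent = {root: 0}, {root: None}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v], parent[v] = dist[u] + 1, u
                queue.append(v)
    return dist, parent

def shortest_candidate_cycle(graph, root):
    # Each non-tree edge (u, v) closes a candidate cycle of length
    # dist[u] + dist[v] + 1 through the root; keep the best one.
    dist, parent = bfs_tree(graph, root)
    best = None
    for u in graph:
        for v in graph[u]:
            if parent.get(u) == v or parent.get(v) == u:
                continue  # tree edge, not a candidate
            length = dist[u] + dist[v] + 1
            if best is None or length < best[0]:
                best = (length, u, v)
    return best  # (length, u, v) for the best non-tree edge, or None

# A 5-cycle: the only candidate is the cycle itself, of length 5.
graph = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
print(shortest_candidate_cycle(graph, 0))
```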
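
The implicit representation in Afra's talk also fits in a few lines: at scale r, the Vietoris–Rips complex is the clique complex of the graph connecting points within distance r, so it is completely determined by the maximal cliques of that graph, which can be far fewer than its simplices. This is only a toy sketch (a plain Bron–Kerbosch enumeration, nothing like the engineered implementation in the paper):

```python
from math import dist  # Euclidean distance, Python 3.8+

def rips_graph(points, r):
    # Neighborhood graph: which points are within distance r of each other.
    n = len(points)
    return {i: {j for j in range(n) if j != i and dist(points[i], points[j]) <= r}
            for i in range(n)}

def maximal_cliques(adj):
    # Basic Bron-Kerbosch; each maximal clique is a maximal Rips simplex.
    def expand(R, P, X):
        if not P and not X:
            yield frozenset(R)
            return
        for v in list(P):
            yield from expand(R | {v}, P & adj[v], X & adj[v])
            P = P - {v}
            X = X | {v}
    yield from expand(set(), set(adj), set())

points = [(0, 0), (1, 0), (0.5, 0.9), (3, 0)]
for simplex in maximal_cliques(rips_graph(points, 1.2)):
    print(sorted(simplex))  # one triangle {0, 1, 2} and one isolated vertex {3}
```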
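
Finally, for Pak's talk, the acuteness condition on a single simplex is easy to check numerically. The sketch below (again mine, with arbitrary function names) computes the six dihedral angles of a tetrahedron by projecting the two opposite vertices onto the plane perpendicular to each edge; for a regular tetrahedron all six angles are arccos(1/3), about 70.5 degrees, so it is acute.

```python
from itertools import combinations
from math import acos, degrees, sqrt

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def scale(a, s): return tuple(x * s for x in a)

def dihedral_angles(tet):
    # Dihedral angle along edge ab = angle between the projections of the two
    # remaining vertices onto the plane perpendicular to ab.
    angles = []
    for i, j in combinations(range(4), 2):
        a, b = tet[i], tet[j]
        c, d = (tet[k] for k in range(4) if k not in (i, j))
        u = sub(b, a)
        uu = dot(u, u)
        def perp(p):
            v = sub(p, a)
            return sub(v, scale(u, dot(v, u) / uu))
        w1, w2 = perp(c), perp(d)
        cosang = dot(w1, w2) / sqrt(dot(w1, w1) * dot(w2, w2))
        angles.append(degrees(acos(max(-1.0, min(1.0, cosang)))))
    return angles

regular = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]
angles = dihedral_angles(regular)
print([round(a, 2) for a in angles])  # six angles of about 70.53 degrees
print(all(a < 90 for a in angles))    # True: this simplex is acute
```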

It seems unfair to report my own paper, on graph-theoretic characterizations of orthogonal polyhedra, as a highlight, but I thought my co-author Elena Mumford presented it well as the very last talk of the conference. I hope to put her talk slides online soon. And of course there were many good talks and papers that I've completely omitted; see the SoCG proceedings in the ACM digital library for all the rest.

Although the talks at Massive covered algorithms for a wide range of specific problems, the overall focus seemed to me to be less about solving problems and more about exploring model space. Along with the by-now traditional work on external-memory models and streaming algorithms, two newer models struck me as particularly interesting:

  • In his talk, John Iacono considered a model of computation in which a single CPU is connected to thousands of separate disks, and in which a single round of computation involves accessing one block of storage from each disk (he made a back-of-the-envelope calculation to show that this is reasonable for current technology). The performance of this kind of system is dominated by the latency of disk accesses, but all of the disks can be accessed in parallel, so the goal is to minimize the number of rounds of disk access needed to answer a query. John described how to use this model to handle predecessor queries, of the kind solved on classical machines by binary search. One possible solution to the predecessor problem is a B-tree, but that would typically use two or three rounds. John's data structure is instead based on a Patricia tree, stored in a cuckoo hash table so that the node representing any prefix of a query binary number can be accessed directly rather than via a path of pointers, and augmented so that each node stores the maximal element in its left subtree. A query can be answered in a single round by examining, in parallel, all the nodes on the path from the query value to the root; a simplified sketch follows this list.

  • My colleague Mike Goodrich and Sergei Vassilvitskii both talked about algorithms in the MapReduce framework. Mike's paper develops an implicit B-tree representation that can be traversed upwards or downwards in a number of rounds equal to the B-tree height, and uses this data structure to perform efficient simulations of CRCW parallel random access machine and bulk synchronous parallel algorithms. Sergei described algorithms designed more directly for the MapReduce model, including algorithms for finding triangles in graphs (using a partition into high- and low-degree vertices similar to the one in my WADS paper on related problems; a toy two-round version follows the list) and for minimum spanning trees in not-too-sparse graphs (using multiple rounds of sparsification). One reason for all the excitement about MapReduce is that (unlike past models of parallelism such as the PRAM) it actually works: the hard parts have been engineered away, so that installing it, running it, and programming for it are all easy. But, as Sergei said, there hasn't yet been a lot of nontrivial algorithm design for this model, so there's still a lot of low-hanging fruit.
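
Here is a much-simplified sketch of the one-round predecessor idea from John's talk. It is my own reconstruction: it leaves out the Patricia-tree path compression and the cuckoo hashing, uses an ordinary Python dictionary in place of the hash table of trie nodes, and fixes a toy word length. The point it illustrates is that all of the lookups made by a query are independent of one another, so in the multi-disk model they could be issued together in a single round.

```python
W = 8  # toy word length, in bits

def build(keys):
    # For every binary prefix p of a key whose next bit is 0, remember the
    # largest key extending p + '0' (the "maximal element in the left subtree").
    present, left_max = set(keys), {}
    for key in keys:
        bits = format(key, f'0{W}b')
        for i in range(W):
            if bits[i] == '0':
                p = bits[:i]
                left_max[p] = max(left_max.get(p, key), key)
    return present, left_max

def predecessor(q, present, left_max):
    # Largest stored key <= q.  Every dictionary access below is independent,
    # so all of them could be issued as one parallel batch of disk reads.
    bits = format(q, f'0{W}b')
    candidates = [q] if q in present else []
    for i in range(W):
        if bits[i] == '1' and bits[:i] in left_max:
            candidates.append(left_max[bits[:i]])
    return max(candidates, default=None)

present, left_max = build([3, 9, 12, 200])
print(predecessor(100, present, left_max))  # 12
print(predecessor(9, present, left_max))    # 9
print(predecessor(2, present, left_max))    # None (no key is <= 2)
```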
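
And here is a toy, single-machine imitation of the MapReduce style of Sergei's triangle algorithm. It is my own sketch and omits the part that makes the real algorithm efficient (the special handling of high-degree vertices, whose pairs of neighbors are too numerous to emit): round one groups the edges into adjacency lists, and round two joins "wedges" (two-edge paths) with the edges that close them into triangles.

```python
from collections import defaultdict
from itertools import combinations

def map_reduce(records, mapper, reducer):
    # Minimal local stand-in for one MapReduce round: map, shuffle by key, reduce.
    groups = defaultdict(list)
    for record in records:
        for key, value in mapper(record):
            groups[key].append(value)
    out = []
    for key, values in groups.items():
        out.extend(reducer(key, values))
    return out

edges = [(0, 1), (0, 2), (1, 2), (2, 3), (1, 3)]

# Round 1: group edges into adjacency lists.
adjacency = map_reduce(
    edges,
    mapper=lambda e: [(e[0], e[1]), (e[1], e[0])],
    reducer=lambda v, nbrs: [(v, sorted(nbrs))],
)

# Round 2: each vertex emits its wedges keyed by their endpoints; each edge
# emits a marker under the same kind of key; a triangle is a wedge whose
# endpoints also carry a marker.
def triangle_mapper(record):
    tag, payload = record
    if tag == "adj":
        v, nbrs = payload
        return [((min(a, b), max(a, b)), v) for a, b in combinations(nbrs, 2)]
    u, w = payload
    return [((min(u, w), max(u, w)), "edge")]

def triangle_reducer(key, values):
    if "edge" not in values:
        return []
    u, w = key
    return [tuple(sorted((u, w, v))) for v in values if v != "edge"]

round2 = [("adj", a) for a in adjacency] + [("edge", e) for e in edges]
print(set(map_reduce(round2, triangle_mapper, triangle_reducer)))
# {(0, 1, 2), (1, 2, 3)}
```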

Snowbird itself was a pleasant place to visit. Although the timing fell awkwardly between the resort's winter and summer seasons (the ski runs had just shut down and the luge run was still being set up), the weather cooperated for the hiking break on Tuesday, and the next night I had the unexpected pleasure of being snowed on in a hot tub in June. The restaurant selection was a little slim, but some of the local choices were pretty good (my favorites: the Fork Lift for breakfast and the Steak Pit for dinner), and with a rental car the whole of Salt Lake was within easy reach. And (except for a brief resort-wide power outage on the last day) the internet connectivity was as good as I can remember at any conference.

Some other SoCG/Massive reports: Sariel, Sorelle, Sorelle, Sorelle, and Suresh. It's enough to make me want to change my name to something starting with S[vowel]R, or to encourage Sergio and Sergei to start blogging.




