Report from Geometry Week
I just returned from visiting Eindhoven, the Netherlands, for Computational Geometry Week, including the 31st International Symposium on Computational Geometry, the 4th Annual Minisymposium on Computational Topology, the Workshop on Geometric Networks, the Workshop on Stochastic Geometry and Random Generation, the Workshop on Geometric Intersection Graphs, the Young Researchers Forum, and the CG Week Multimedia Exposition. I attended pieces of almost all of these; it was not possible to attend everything, because SoCG had two parallel sessions and the workshops ran in parallel with each other.
After a welcoming reception the evening before at campus café De Zwarte Doos ("the black box", but I was warned not to search Google for that phrase because some other company is using it for other purposes), the conference itself began Monday morning (June 22) with the best-paper talk from SoCG, "Combinatorial discrepancy for boxes via the \( \gamma_2 \) norm", by Jirka Matoušek (posthumously) and Aleksandar Nikolov. The question they study is, given a set of \( n \) points in \( d \)-dimensional Euclidean space, how to label the points with \( +1 \) and \( -1 \) so that the sum of the labels in each axis-aligned box is as small as possible. They provide a lower bound for how big these sums might be, of the form \( \log^{d+O(1)} n \), nearly tight and roughly squaring the previous bound. The method is simple and elegant and was presented very clearly by Nikolov. It consists of showing that the discrepancy (the quantity in question) is nearly the same as the \( \gamma_2 \) norm of an associated matrix (the minimum, over factorizations of that matrix into a product of two matrices, of the maximum row norm of the first factor times the maximum column norm of the second), that the points can be assumed to form a grid, that the matrix for points in a grid is a Kronecker product of lower-dimensional matrices of the same type, that the one-dimensional matrix (an all-ones lower triangular matrix) has logarithmic \( \gamma_2 \) norm, and that the \( \gamma_2 \) norm is multiplicative with respect to the Kronecker product.
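To spell out how those ingredients combine (my own paraphrase, with constant factors and the slightly lossy comparison between discrepancy and \( \gamma_2 \) elided): if \( T \) is the all-ones lower triangular matrix encoding prefix sums along one axis of the grid, then the matrix for the \( d \)-dimensional grid is \( B_d = T \otimes T \otimes \cdots \otimes T \) (\( d \) factors), and multiplicativity gives
\[ \gamma_2(B_d) \;=\; \gamma_2(T)^d . \]
Since \( \gamma_2(T) \) is logarithmic in the side length of the grid, \( \gamma_2(B_d) \) is polylogarithmic in \( n \) with exponent \( d \) (up to factors depending only on \( d \)), and the discrepancy of boxes is within lower-order factors of this.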
Next, we saw the three videos and one demo of the Multimedia Exposition. My favorite was "Tilt: the video", on the hardness of puzzles in which you tilt a panel with multiple moving balls and fixed obstacles in order to try to move the balls into given configurations.
Another highlight from Monday was Jeff Erickson's talk at the computational topology workshop. Listed in the program as "????", it turned out to be about the pre-history of computational topology and computational geometry, in which Jeff informed us that most of the basic concepts and algorithms in Chapter 1 of O'Rourke's computational geometry book have been known for centuries. In particular winding numbers and turning numbers can be traced to Thomas Bradwardine in 1320. Algorithms for computing the signed areas of curves and polygons (by summing over trapezoids or triangles determined by pieces of their boundaries) come from Albrecht Meister in 1770, as do some pretty strong hints about the Whitney–Graustein theorem that turning number is a complete topological invariant for regular homotopy. The algorithm for testing whether a point is inside or outside of a polygon by shooting a ray from the point and counting how many times it crosses the boundary comes from Gauss, who also asked which chord diagrams represent the crossing sequences of immersed circles. Even later, Max Dehn gave the first full proof of the Jordan curve theorem for polygons by proving the existence of triangulations, proving the existence of ears in the triangulations (often called Meisters' ear theorem, but this is a different Meisters, later than both Dehn and Meister), and doing the obvious induction.
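Both of those ancient algorithms fit in a few lines of code. Here is a minimal Python sketch (my own illustration, not anything from the talk) of the signed-area computation by summed trapezoids and the ray-crossing point-in-polygon test:

    def signed_area(polygon):
        """Sum of signed trapezoid/triangle areas determined by the edges
        (the 'shoelace' formula); positive for counterclockwise polygons."""
        return 0.5 * sum(x1 * y2 - x2 * y1
                         for (x1, y1), (x2, y2) in zip(polygon, polygon[1:] + polygon[:1]))

    def point_in_polygon(q, polygon):
        """Crossing-number test: shoot a horizontal ray from q to the right and
        count how many polygon edges it crosses; an odd count means inside."""
        x, y = q
        inside = False
        n = len(polygon)
        for i in range(n):
            (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
            if (y1 > y) != (y2 > y):                      # edge spans the ray's height
                x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                if x_cross > x:                           # crossing is to the right of q
                    inside = not inside
        return inside

Both take the polygon as a list of \( (x, y) \) vertex pairs in boundary order.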
Tuesday started with a talk by Mikkel Abrahamsen on finding bitangents of polygons. The method is very simple, and uses only two pointers into each polygon. Two of these pointers, one per polygon, indicate the vertices that define a candidate bitangent. The other two pointers walk around the two polygons in tandem, checking whether everything else is on the correct side of the candidate. If not, we get a new bitangent to try and we reset the two walks. The technical advantage of this over previous methods (which involved computing convex hulls of the polygons) is that it uses only constant space instead of linear space, but it's the sort of thing I was very happy to see at SoCG and find very difficult to imagine getting into a more general theoretical conference like FOCS or STOC. Next, in the same session, Luis Barba presented what turned out to be the winner of the best student presentation award, a talk on finding geodesic centers of polygons: points minimizing the maximum length of the shortest paths within the polygon to all other points of the polygon.
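Here is a rough Python sketch of the two-pointer idea as I understood it from the talk. This is my own simplification, not the algorithm from the paper: it restarts full walks instead of interleaving them carefully, gives up after a fixed number of restarts rather than proving termination, and ignores degenerate cases. But when it does return a pair of indices, the line through those two vertices has all of \( P \) (weakly) on one side and all of \( Q \) on the other.

    def orient(a, b, c):
        """Twice the signed area of triangle abc; positive if c lies to the left of ray a->b."""
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

    def separating_tangent(P, Q, max_restarts=None):
        """Sketch: keep candidate tangent endpoints P[s], Q[t]; walk both polygon
        boundaries checking that each lies entirely on its own side of the line
        through the candidates, and move a candidate whenever a violation appears."""
        if max_restarts is None:
            max_restarts = 4 * (len(P) + len(Q))   # crude cap; the real algorithm needs no such cap
        s, t = 0, 0
        for _ in range(max_restarts):
            ok = True
            for i in range(len(P)):                # P should be on the non-positive side
                if orient(P[s], Q[t], P[i]) > 0:
                    s, ok = i, False               # violating vertex: move candidate, restart walks
                    break
            if ok:
                for j in range(len(Q)):            # Q should be on the non-negative side
                    if orient(P[s], Q[t], Q[j]) < 0:
                        t, ok = j, False
                        break
            if ok:
                return s, t                        # both walks completed with no violation
        return None                                # the sketch gives up; see the paper for the real thing

The polygons are given as lists of \( (x, y) \) vertices.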
Also on Tuesday was the first of two invited talks, by Ben Green on his work with Terry Tao on the number of ordinary lines. If you have a finite set of points in the plane, not necessarily in general position but not all on a line, then there will be at least one "ordinary line" containing exactly two points; this is the Sylvester–Gallai theorem. But there will generally be more than one ordinary line; for instance if all but one point is on a line then the number of ordinary lines is still large. What Green and Tao showed is that (for very large point sets) the number of ordinary lines is at least \( n/2 \) when \( n \) (the number of points) is even, and \( 3n/4 \) when it is odd, matching known upper bounds. The method involves taking the projective dual line arrangement, using Euler's formula to show that when the number of ordinary lines (simple crossings in the dual) is small then most crossings involve three lines and most faces in the line arrangement are triangles, and using these properties to show that this implies that the points can be covered by a small number of cubic curves. Then there's a big case analysis involving the various different possible cubic curves; the one that gives the fewest ordinary lines turns out to be a conic plus a line. Towards the end of the talk he reviewed several related conjectures including one that he credited to Kára, Pór, and Wood on whether every large set of points contains either a large collinear subset or a large pairwise-visible subset; there are some hints that algebraic methods can also be used for this but it looks trickier than for counting ordinary lines.
Unfortunately the Young Researchers' Forum papers don't seem to be individually linked from the conference web site, but a book of all two-page abstracts is available as a single large pdf file. Two of the Tuesday afternoon talks caught my attention. Andrew Winslow spoke about tiling the plane with translates of a polyomino. It was known that this is possible if and only if the boundary of the polyomino can be partitioned into six pieces such that opposite pieces of boundary fit precisely together. Winslow translates this into a string problem in which one describes the polyomino by a string over four characters, one for each of the four directions a unit segment of the boundary can point. The problem then becomes one of finding a cyclic rotation of this string, and a partition of it into six substrings, such that opposite substrings are reverse complements (made concrete in the small code sketch after this paragraph). Some combinatorics on words involving "admissible factors" reduces the problem to something that (like many string problems) can be solved using suffix trees. And Anika Rounds spoke in the same session, showing some hardness results on realizability of linkages (systems of rigid bodies connected by pins). Unlike some of the other work on linkages I've reported on here, the bodies are not allowed to overlap, but this constraint makes the realizability problem strongly \( \mathsf{NP} \)-hard.
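To make the string formulation concrete, here is a small Python sketch, entirely my own: the letter encoding and function names are assumptions, the search is brute force, and it omits the admissible-factor machinery and suffix-tree data structures that make Winslow's algorithm fast.

    # Encode the boundary as a word over U, D, L, R (up, down, left, right),
    # reading off one unit step at a time around the polyomino.
    COMPLEMENT = {'U': 'D', 'D': 'U', 'L': 'R', 'R': 'L'}

    def backtrack(word):
        """Reverse complement: the same boundary piece traversed in the opposite direction."""
        return ''.join(COMPLEMENT[c] for c in reversed(word))

    def tiles_by_translation(word):
        """Brute-force check for a factorization of some cyclic rotation of the
        boundary word into A B C followed by the reverse complements of A, B, C
        (opposite pieces of boundary fitting together), allowing empty pieces."""
        n = len(word)
        if n % 2:
            return False
        half = n // 2
        for r in range(n):                      # try every cyclic rotation
            w = word[r:] + word[:r]
            for i in range(half + 1):           # split the first half into A, B, C
                for j in range(i, half + 1):
                    A, B, C = w[:i], w[i:j], w[j:half]
                    if w[half:] == backtrack(A) + backtrack(B) + backtrack(C):
                        return True
        return False

    # Example: the 1-by-2 domino has boundary word "RRULLD" and does tile by translation.
    assert tiles_by_translation("RRULLD")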
Also of note Tuesday was the afternoon snack of Bossche bollen, a Dutch specialty in the form of chocolate-covered whipped cream bombs.
One of the Wednesday morning talks had an odd title: "Low-quality dimension reduction". What this turns out to mean is that one has a high-dimensional nearest neighbor query problem and translates it into a lower-dimensional \( k \)-nearest-neighbor problem. The result is an approximate nearest neighbor data structure with truly linear space and sublinear query time, with the query time exponent depending only on the approximation quality and with no exponential dependence on the dimension. Another of the Wednesday morning talks, by Hsien-Chih Chang, concerned a generalization of Voronoi diagrams to points with vector weights, where one wants to find all points whose vector of weights, augmented with one more coordinate for the distance to a query point, is not dominated by the corresponding vector of any other point. When the weight coordinates are independent random variables this turns out to have near-linear complexity and near-constant expected output size per query.
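In symbols (my own paraphrase and notation, not necessarily the paper's): a site \( p \) with weight vector \( w(p) \) is reported for a query point \( q \) unless some other site \( p' \) dominates it there, that is, unless there exists \( p' \ne p \) with
\[ d(p', q) \le d(p, q) \quad \text{and} \quad w_i(p') \le w_i(p) \text{ for every coordinate } i, \]
with at least one of these inequalities strict.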
After an enthusiastic presentation by Benjamin Burton at the Wednesday afternoon business meeting, we decided to go to Brisbane, Australia, in 2017. The 2016 location has already been set for Boston, co-located with STOC. Other business meeting topics included reports from the various program chairs, a discussion of the possibility of setting up a non-profit organization to run the conference (about which we'll probably see more online later this year), a change to the steering committee elections to institute staggered terms, and a report from Jack Snoeyink of the NSF on some new initiatives for international cooperation. The rest of the day was occupied by the excursion (a boat ride and walking tour in nearby Den Bosch) and dinner (in the Orangerie of Den Bosch, a decommissioned Gothic church). At the dinner, Pankaj Agarwal and Vera Sacristán presented moving remembrances of Jirka Matoušek and Ferran Hurtado, respectively, both of whom were important figures in the computational geometry community and both of whom died this past year.
In the first session of Thursday morning, the well-synchronized timing of the parallel sessions came in handy. Daniel Dadush described deterministic algorithms for estimating the volume of a convex body by using a symmetric subset of the body to construct a lattice whose set of points within a slightly expanded version of the body can be easily listed and approximates its volume well. The problem has a randomized approximation scheme and deterministic algorithms must take at least exponential time, but unlike previous ones this one is only single-exponential. The other parallel session included a nice \( \mathsf{APX} \)-hardness reduction for \( k \)-means clustering, from vertex cover in triangle-free graphs. The dimension is high (each vertex becomes a coordinate and each edge becomes a point, the sum of the basis vectors of its two endpoints) but it can be reduced using the Johnson–Lindenstrauss lemma. And back in the first session, Timothy Chan greatly simplified an old algorithm of Bernard Chazelle for constructing the intersection of two convex polyhedra in linear time.
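The point set of that reduction is easy to write down. Here is a tiny Python illustration (my own code, covering only the embedding step and not the hardness argument):

    import numpy as np

    def edges_to_points(num_vertices, edges):
        """Embed a graph for the k-means reduction: one coordinate per vertex,
        one point per edge, each point being the sum of the two basis vectors
        of the edge's endpoints."""
        points = np.zeros((len(edges), num_vertices))
        for row, (u, v) in enumerate(edges):
            points[row, u] = 1.0
            points[row, v] = 1.0
        return points

    # Example: a 5-cycle (triangle-free), giving 5 points in 5-dimensional space.
    pts = edges_to_points(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)])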
In the next session we saw what I thought was one of the better student presentations, but one that was ineligible for the best-presentation award because it was scheduled after the voting closed. Its presenter, Arie Bos, is a retiree who has gone back to school, and he told me he thought it would be unfair to compete for the prize because he has so much experience making presentations in his past career. The subject was generalized Hilbert curves, fractal curves that can be formed by taking a Hamiltonian tour of a hypercube and then repeatedly replacing each vertex of the path by a shrunken copy of the same tour. The result of the paper is a new method for generating these tours in such a way that subtours are particularly compact: the bounding box of every subtour is at least \( 1/4 \) full, regardless of dimension.
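For readers who haven't seen these curves, here is the classical two-dimensional Hilbert curve in Python, using the standard index-to-coordinate conversion. This only illustrates the general flavor of the construction; the paper's contribution is a different, higher-dimensional generation scheme whose subtours have the compactness guarantee above.

    def hilbert_points(order):
        """Return the points of the 2**order x 2**order grid in the order
        visited by the classical Hilbert curve."""
        def d2xy(n, d):
            # Standard conversion from position d along the curve to (x, y).
            x = y = 0
            t = d
            s = 1
            while s < n:
                rx = 1 & (t // 2)
                ry = 1 & (t ^ rx)
                if ry == 0:                 # rotate/reflect the quadrant as needed
                    if rx == 1:
                        x, y = s - 1 - x, s - 1 - y
                    x, y = y, x
                x += s * rx
                y += s * ry
                t //= 4
                s *= 2
            return x, y
        n = 2 ** order
        return [d2xy(n, d) for d in range(n * n)]

    # Example: hilbert_points(1) == [(0, 0), (0, 1), (1, 1), (1, 0)]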
Susanne Albers presented the second of two invited talks, on data-dependent algorithm analysis. Her thesis was that, although worst-case analysis has been effective at encouraging the development of efficient algorithms and at distinguishing efficient from inefficient ones, it can be too pessimistic and in some cases unable to distinguish which of two competing algorithms is the better one in practice. The first half of the talk concerned randomized data models. The fully random model describes the input to e.g. quicksort but is not realistic enough for most problems. Better randomized models include smoothed analysis, the planted subgraph model (in which one is supposed to find a non-random solution hidden within a larger random part of the input), and the randomized incremental model seen frequently in computational geometry, in which a worst-case set of geometric objects is randomly ordered. The second half of the talk concerned some of Albers' own recent work, on deterministic online algorithms, focusing on modeling the locality of reference in online input sequences. For online caching, rather than previous work using access graphs, she instead looks at the vector of distances between requests of the same type, where the distance is measured by the number of distinct other types requested between two consecutive requests of one type. By restricting the definition of the competitive ratio to input sequences with the same vector, she shows that the classical LRU strategy is much more tightly competitive than could be shown by the classical worst-case analysis, and that it is significantly better than other choices. She also performs a similar analysis for list updates, but using a different data model in which one looks at runs and long runs in the subsequence of inputs having two particular data values.
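As a concrete illustration of the kind of locality measure involved (my own simplified version, not Albers' exact definition): for each pair of consecutive requests to the same page, count the number of distinct other pages requested in between.

    from collections import defaultdict

    def locality_profile(requests):
        """For each page, list the number of distinct other pages requested strictly
        between each pair of its consecutive requests (a simplified stand-in for
        the characteristic vector described above)."""
        last_seen = {}                 # page -> index of its most recent request
        gaps = defaultdict(list)
        for i, page in enumerate(requests):
            if page in last_seen:
                gaps[page].append(len(set(requests[last_seen[page] + 1:i])))
            last_seen[page] = i
        return dict(gaps)

    # Example: locality_profile("abacaba") == {'a': [1, 1, 1], 'b': [2]}

Sequences in which these counts are small have high locality of reference, and the competitive ratio is then measured only against other sequences with the same profile.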
After a lunch featuring Hollandse Nieuwe herring (eaten raw in small pieces on toothpicks rather than the traditional method of lowering a whole herring down one's throat like a bird) I spent the remainder of the conference at the Workshop on Geometric Intersection Graphs. There's an open problem list from the workshop online; eventually it is supposed to include problems from the open problem session but currently it seems to be only a pre-provided list of problems. I contributed one on the complexity of coloring circle graphs. A 1992 paper of Walter Unger claimed a proof that this is \( \mathsf{NP} \)-complete for 4-coloring and polynomial for 3-coloring, but I don't believe the 3-coloring part: it was presented too sketchily and there is no full journal version. So I think it should still be considered open whether 3-coloring of circle graphs is easy or hard. Both the 3-coloring and 4-coloring questions are also interesting for triangle-free circle graphs, which are always 5-colorable.
Overall, I found it to be a well-run, entertaining, interesting, and informative conference. I would have liked to see more application papers, both in the main symposium and in the satellite events, but that's been a perennial issue since the start of SoCG. The new open-access publisher and formatting seem to be working well, and I'm looking forward to next year in Boston.
Comments:
2015-07-01T00:02:56Z
It is trivial that every large set of points contains a large collinear subset or a large general-position subset (e.g. run a greedy algorithm). The big-line-big-clique conjecture of Kara-Por-Wood says that every large set of points contains a large collinear subset or a large subset of pairwise visible points. Being pairwise visible is much stronger than being in general position.
Toa -> Tao
- David Wood
2015-07-01T00:16:29Z
Thanks, fixed. In case anyone else is confused about the difference between general position and pairwise visibility: the difference concerns which points can block visibility. A subset of points is in general position if nothing within that subset blocks the visibilities between two other points in the subset, but a subset is pairwise visible if nothing in the original set blocks the visibilities. So pairwise visibility is a much stronger requirement.