Today I finally started skipping some sessions, so apologies to anyone whose talk I missed because of that. The ones I didn't skip included the second invited talk, by Ken Goldberg, and a nice session on geometric combinatorics.

The main subject of Ken's talk was the attempt to put automated manufacturing on the sort of firm mathematical foundations that Turing's work provided for computing: as he put it, putting the "Turing" into "manufacturing". Much of his talk concerned parts orienters, devices that, through sequences of (often sensorless) mechanical actions, move a part into a known position, ready to be included in an assembly. This was something of a flashback for me, as it was the subject of one of my first papers, although I think my own work in this area gets cited less for its manufacturing applications and more for its connection to the Černý conjecture in automata theory. Ken's talk also included some material on part fixturing: designing devices to hold a part in place while something is done to it.

János Pach gave a nice talk on geometric Ramsey theory. He spoke of passionately hating the theory of semialgebraic sets in the 1990s, when he was hearing a lot about it from Saugata Basu and others, but it somehow soaked in anyway, until now it has become the obviously right tool for defining geometric graphs and uniform hypergraphs and for studying Ramsey theory on them. In this formulation, a geometric graph has as its vertices a set of points in a vector space of bounded dimension $$d$$ (possibly interpreted as parameters describing more complicated geometric objects) and as its edges the pairs of points belonging to a $$2d$$-dimensional semialgebraic set of bounded complexity. This framework can represent, for instance, any kind of intersection graph of line segments or other sets of bounded description complexity, by making the semialgebraic edge set describe the condition that two line segments intersect. It also generalizes immediately to uniform hypergraphs. Ramsey theory tells us that every graph (or uniform hypergraph) has a large subset of its vertices that is either complete (all possible edges are present) or independent (none are present); here "large" means logarithmic for graphs and an iterated logarithm for $$k$$-uniform hypergraphs, but we still don't know whether the log should be iterated $$k$$ or $$k-1$$ times. For geometric graphs and hypergraphs, as János showed, it's always $$k-1$$.
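To make the segment-intersection example concrete, here is a little Python sketch of my own (not from the talk): each segment is encoded by the four coordinates of its two endpoints, so a segment is a point in $$\mathbb{R}^4$$, and the edge predicate "these two segments intersect" unwinds into a Boolean combination of sign tests on polynomials in those coordinates — exactly a semialgebraic condition of bounded complexity.

```python
# Each segment is a point in R^4: a pair of endpoints ((x1, y1), (x2, y2)).
# "Segments intersect" is a Boolean combination of polynomial sign tests,
# i.e. a semialgebraic condition on the two points in R^4.

def orient(p, q, r):
    # Sign of a 2x2 determinant: a quadratic polynomial in the coordinates.
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def on_segment(p, q, r):
    # Assuming r is collinear with pq: does r lie in pq's bounding box?
    return (min(p[0], q[0]) <= r[0] <= max(p[0], q[0])
            and min(p[1], q[1]) <= r[1] <= max(p[1], q[1]))

def segments_intersect(s1, s2):
    (p1, p2), (p3, p4) = s1, s2
    d1, d2 = orient(p3, p4, p1), orient(p3, p4, p2)
    d3, d4 = orient(p1, p2, p3), orient(p1, p2, p4)
    # Proper crossing: the endpoints of each segment lie strictly on
    # opposite sides of the line through the other segment.
    if ((d1 > 0 and d2 < 0) or (d1 < 0 and d2 > 0)) and \
       ((d3 > 0 and d4 < 0) or (d3 < 0 and d4 > 0)):
        return True
    # Degenerate cases: an endpoint lying on the other segment.
    return ((d1 == 0 and on_segment(p3, p4, p1))
            or (d2 == 0 and on_segment(p3, p4, p2))
            or (d3 == 0 and on_segment(p1, p2, p3))
            or (d4 == 0 and on_segment(p1, p2, p4)))
```

The intersection graph of a set of segments is then just the graph whose edges are the pairs satisfying this predicate.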

In early 2009 I wrote here about Gabriel Nivasch getting tight bounds on the length of Davenport–Schinzel sequences of even order. Now Seth Pettie has found similarly tight bounds on the sequences of odd order, effectively closing out the problem. There are some surprises: the order-five case has a nice simple new form, smaller than the previous bound, and for order seven and higher, the length is the same as for the next smaller even order. Here's a vaguely related puzzle: suppose that a string $$s$$ is formed from an alphabet of $$n$$ letters, with no contiguous substring $$t$$ of length two or more in which the maximum number of occurrences of any single letter in $$t$$ is greater than or equal to the number of distinct letters in $$t$$. For instance, "abcabc" is such a string, because each of its substrings containing two copies of one letter also contains all three letters. But "abbac" and "abcbca" are not allowed, because their substrings "bb" and "bcbc" violate the condition (much as they would be forbidden in a Davenport–Schinzel sequence). How long can $$s$$ be? The answer is a nice formula, but it's much larger than the length of a Davenport–Schinzel sequence. For a spoiler, see Lemma 9 of this paper.
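The forbidden-substring condition is easy to test by brute force, which is handy for experimenting with the puzzle by hand. Here's a short Python sketch of my own (the function name `is_valid` is just my label, not anything from Pettie's paper) that checks every contiguous substring of length two or more:

```python
from collections import Counter

def is_valid(s):
    """Return True if no contiguous substring t of length >= 2 has a
    letter whose number of occurrences in t is at least the number of
    distinct letters in t."""
    n = len(s)
    for i in range(n):
        counts = Counter()          # letter multiplicities in s[i:j+1]
        for j in range(i, n):
            counts[s[j]] += 1
            # Only substrings of length >= 2 are constrained.
            if j > i and max(counts.values()) >= len(counts):
                return False
    return True
```

Running it on the examples above, `is_valid("abcabc")` holds while `is_valid("abbac")` and `is_valid("abcbca")` fail, matching the discussion.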

Although I don't think I've highlighted any of them in my posts here, there have also been a lot of good talks in the satellite workshops. I ended the day with a late session including three of them: a survey of range searching by Pankaj Agarwal, a rant about how we should be developing online animations of geometric algorithms for teaching purposes by Suresh Venkatasubramanian (with some helpful hints about how we might go about doing it), and a talk by Jonathan Shewchuk about how new powerful mesh improvement techniques make it possible to simulate fluids using Lagrangian methods (where the mesh moves with the fluid) instead of Eulerian methods (where the fluid flows through a static mesh) and why doing it that way gives much more accurate simulations.

We were supposed to hear about the best student presentation tonight at the banquet, but we ended up in a room that wasn't really conducive to making announcements, so it didn't happen. I guess we'll find out the results tomorrow.