If I don't write out my impressions from ALENEX today, it's just going to pile up, so...

Among the contributed talks, I particularly liked Christophe Weibel's on reverse search. For those who don't know, this is a technique (codified by Avis and Fukuda) for listing all elements in some set of combinatorial objects. The idea is to define a "parent" function that connects each element to a simpler element, with one exception: a simplest "root" object that has no parent. This determines a tree whose vertices form the set one is trying to describe, and one simply performs a depth-first traversal of the tree. In Weibel's case, the objects he was trying to list in this way were the vertices of a Minkowski sum of convex polytopes. The usual advantage cited for the reverse search method (compared to a graph traversal on more easily defined graph structures on the objects to be listed) is its low memory usage: one can often do the depth-first traversal using only enough memory to store the current point in the tree, without any data structures to keep track of which objects have already been listed. However, as Weibel argued, this isn't really an issue in his case, as the sets he was trying to list had at most tens of millions of elements, few enough to fit easily into main memory. Rather, the advantage is that reverse search can easily be distributed to multiple computers or to multiple threads in a multicore system, simply by spawning a new thread every time one wishes to explore a new subtree of the depth-first search tree. As Weibel showed experimentally, this method gets linear or close-to-linear speedup on real distributed systems of up to eight processors, and there's no reason to expect it to be limited to that few.
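To make the parent-function idea concrete, here is a minimal sketch of reverse search in Python. This is my own illustration, not Weibel's code; the `root`, `neighbors`, and `parent` arguments are hypothetical placeholders that a concrete application (such as polytope vertex enumeration) would have to supply.

```python
def reverse_search(x, neighbors, parent, root=None):
    """Depth-first enumeration of the reverse-search tree rooted at x.

    neighbors(x) lists candidate objects adjacent to x;
    parent(y) maps every non-root object to a simpler object.
    A candidate y is a tree child of x exactly when parent(y) == x,
    so no "already seen" set is needed: memory stays proportional
    to the depth of the current search path.
    """
    if root is None:
        root = x
    yield x
    for y in neighbors(x):
        if y != root and parent(y) == x:
            yield from reverse_search(y, neighbors, parent, root)


# Toy usage: enumerate all subsets of {0, 1, 2}.  The parent of a nonempty
# set drops its largest element, so the root is the empty set.
universe = range(3)
neighbors = lambda s: [s ^ {i} for i in universe]   # add or remove one element
parent = lambda s: s - {max(s)}

for subset in reverse_search(frozenset(), neighbors, parent):
    print(sorted(subset))        # prints each of the 8 subsets exactly once
```

Because each child is recognized purely by the parent test, every subtree can be explored independently of the others, which is what makes it so easy to hand different subtrees to different threads or machines, as in Weibel's experiments.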

I also quite enjoyed the ANALCO plenary talk, by Bruno Salvy. Salvy's talk combined ideas from numerical algorithms (particularly Newton's method for very quickly converging to roots of polynomials) with the method of analytic combinatorics (as represented by the Flajolet–Sedgewick book creatively titled "Analytic Combinatorics") of describing combinatorial structures as being built up by recursive formulas that can be syntactically translated into generating functions for the numbers of those objects. As an example, Salvy discussed the Newton iteration \( y_{i+1}=y_i-\frac{y_i-1-zy_i^2}{1-2zy_i} \) for finding the roots of the quadratic polynomial \( y - 1 - zy^2 \). One may compute each iterate as a real number, starting from \( 0 \), for a fixed constant \( z \), to get a root. Or one may treat \( z \) as a variable and compute the iterates as formal power series in \( z \); the resulting power series describes the function that maps \( z \) to the corresponding root of the polynomial, and turns out to have the Catalan numbers as its coefficients. The quadratic convergence of the numerical iterates translates, at this power series level of thinking, into the fact that the number of correct coefficients in the power series doubles at each iteration. Going one level higher still, using the Flajolet–Sedgewick correspondence between formulas and combinatorial objects, one may treat the polynomial one is trying to find the root of as a recursive formula \( Y = 1 + ZY^2 \) that describes the family of all binary trees (\( Y \)) as being formed as a disjoint union (\( + \)) of the single-node tree (\( 1 \)) and of the trees formed by a non-leaf node (\( Z \)) with two binary-tree children (\( Y^2 \)). Newton iteration then describes a method of generating these combinatorial structures by a sequence of iterations in which smaller collections of binary trees eventually converge to the whole family. Going back down through the levels of abstraction, the Catalan power series is the generating function for the binary trees; its rapid convergence explains why the numerical Newton iteration starting at zero converges quickly, and the series can be used to compute statistical information about random binary trees. The point of all this was not just pretty abstraction and de-abstraction (although it was that, and I suspect the categorists would have a field day describing it all as categorification and decategorification) but to lead up to the development of efficient algorithms for generating random instances of combinatorial structures: Salvy concluded with a table showing his method to be much faster than others at generating random instances of XML-based treelike structures such as SVG files.
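As an illustration of the power-series level of this story (my own sketch, not Salvy's implementation), one can run the Newton iteration on truncated power series and watch the Catalan numbers appear, with the number of correct coefficients roughly doubling per step:

```python
def series_mul(a, b, n):
    """Product of two power series given as coefficient lists, truncated to n terms."""
    c = [0] * n
    for i in range(min(n, len(a))):
        if a[i]:
            for j in range(min(n - i, len(b))):
                c[i + j] += a[i] * b[j]
    return c

def series_inv(a, n):
    """Reciprocal of a power series with constant term 1, truncated to n terms."""
    inv = [0] * n
    inv[0] = 1
    for k in range(1, n):
        inv[k] = -sum(a[j] * inv[k - j] for j in range(1, k + 1) if j < len(a))
    return inv

def newton_catalan(n):
    """First n coefficients of the root of y - 1 - z*y^2, computed by Newton
    iteration on truncated power series; the limit is the Catalan generating function."""
    y, correct = [0], 0                        # start from y_0 = 0
    while correct < n:
        correct = min(2 * correct + 1, n)      # correct coefficients (at least) double
        prec = correct
        y = (y + [0] * prec)[:prec]            # pad to the current precision
        zy2 = ([0] + series_mul(y, y, prec))[:prec]                    # z*y^2
        f = [y[k] - zy2[k] - (1 if k == 0 else 0) for k in range(prec)]          # y - 1 - z*y^2
        fp = [(1 if k == 0 else 0) - ([0] + [2 * c for c in y])[k] for k in range(prec)]  # 1 - 2*z*y
        step = series_mul(f, series_inv(fp, prec), prec)               # f / f'
        y = [y[k] - step[k] for k in range(prec)]                      # Newton update
    return y

print(newton_catalan(8))    # [1, 1, 2, 5, 14, 42, 132, 429]
```

The printed coefficients are the Catalan numbers, counting binary trees by their number of internal nodes, exactly as the specification \( Y = 1 + ZY^2 \) predicts.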

At the business meeting Bob Sedgewick was pushing David Johnson to "mainstream" ALENEX/ANALCO by making them run concurrently with SODA as a fourth parallel session rather than a day earlier. It wasn't clear whether they would retain their independent identity as satellite workshops of SODA or whether the idea was to pressure the SODA program committee into accepting these papers alongside and indistinguishable from the regular SODA papers, but either way I tend to think this is a bad idea. Three parallel sessions is already much worse than two: the roughly one third of the papers that I'm interested in tend to get scheduled on top of each other, leaving me with 2/3 of the time in which I don't want to see anything and am tempted to skip out on the talks altogether. With four parallel sessions it would be multiple parallel talks I want to see for 1/4 of the time, free time for the other 3/4, and even lower overall attendance at all the talks. Ok, the math doesn't really work out quite like that, but sometimes that's how it feels.

Finally, as Suresh recently posted, this year's SoCG PC is experimenting with an author feedback period: emails went out last night to the authors of all submissions, containing either requests for clarification from the program committee members or (in some cases) blanks, with a five-day response deadline. I've previously participated in such things for some other conferences and mostly thought they were a waste of time, something to make the authors feel better about their rejections without changing any decisions. But, while some of the questions I got didn't help me sleep last night, I'm cautiously optimistic that this will actually be a very helpful process that will reduce the frequency of certain kinds of misunderstandings that inevitably happen in these reviews. More later, maybe?



Comments:

None:
2010-01-17T23:35:46Z

"something to make the authors feel better about their rejections without changing any decisions."

I tend to agree with that, but in my opinion this alone would make them useful.

We have all heard many anecdotes about the occasional silly comment from a sub-reviewer (see "Reviewer 2 Must Be Stopped" for some samples).

Presumably in most cases the PC members discussed and overruled such a comment. The rebuttal phase provides some reassurance to the authors that the silliness of the comment was noted and properly discounted.