I've spent most of the past week in Korea, for the 21st International Symposium on Algorithms and Computation (ISAAC). Somehow I had the impression that Korea in December would be a very cold place, but although I needed to bundle up a little for a day of sightseeing in Seoul (and although we were treated one morning at the conference to the pretty sight of snow falling over the ocean), it's been quite mild here in Jeju, where the conference has been held. And the food and hospitality here have been outstanding.

As usual there were too many good talks to cover them all in any detail, so let me just summarize one per day.

Iwama, Seto, Takai, and Tamaki had a nice result on exact algorithms for 3-SAT, presented the first morning, although I'd already seen Iwama speak about the same result in Dagstuhl. Roughly speaking, there are two main approaches to exact 3-SAT algorithms: backtracking (order the variables and recursively try all settings of the variables) and local search (repeatedly flip a variable of an unsatisfied clause so as to satisfy it). The backtracking approach works better when there are few satisfying assignments, while the local search approach works better when there are many; based on this tradeoff, a previous paper of Iwama and Tamaki had shown how to combine the two approaches by using the same random starting assignment for both of them, achieving a worst-case running time better than either method alone. Separately, Hofmeister et al. had shown that a non-uniform random distribution on the starting assignment (one that avoids certain obviously wrong combinations of variables) works better for the local search approach than a uniform distribution; but this method didn't combine with the backtracking approach. The new paper works by choosing a uniformly random assignment, applying the backtracking algorithm to it, modifying the assignment in a careful way so that its distribution conforms to the nonuniform distribution of Hofmeister et al., and applying the local search method to the modified assignment. It achieves a record time of \( O(1.32113^n) \) to solve 3-SAT instances with \( n \) variables.
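To make the local-search half of this tradeoff concrete, here is a minimal Python sketch of a Schöning-style random walk, the kind of local search involved here. It is only an illustration, not the algorithm from the paper: the clause representation, function name, and parameters are my own, and it uses a uniform starting distribution rather than the biased one discussed above.

```python
import random

def schoening_walk(clauses, n, tries=1000):
    """Schoening-style local search for 3-SAT (illustrative sketch only).

    clauses: list of 3-tuples of nonzero ints; literal v means variable v is
    true, -v means it is false. Variables are numbered 1..n.
    Returns a satisfying assignment (dict) or None if none was found.
    """
    for _ in range(tries):
        # uniformly random restart; the new result hinges on replacing this
        # with a carefully biased distribution shared with the backtracking run
        assignment = {v: random.choice([True, False]) for v in range(1, n + 1)}
        for _ in range(3 * n):  # a walk of O(n) flips per restart
            unsat = [c for c in clauses
                     if not any((lit > 0) == assignment[abs(lit)] for lit in c)]
            if not unsat:
                return assignment
            # flip a random variable of a random unsatisfied clause
            lit = random.choice(random.choice(unsat))
            assignment[abs(lit)] = not assignment[abs(lit)]
    return None

# tiny usage example: (x1 or x2 or not x3) and (not x1 or x3 or x2)
print(schoening_walk([(1, 2, -3), (-1, 3, 2)], n=3))
```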

On the second day, Gu and Tamaki presented a paper on the branchwidth of planar graphs. Branchwidth is like treewidth in many respects, but with the important difference that, for planar graphs, optimal branch-decompositions can be constructed in polynomial time; this makes branchwidth advantageous for FPT and approximation algorithms, compared to treewidth, because one doesn't lose an additional factor from having to approximate the width. It was also known that (again, in planar graphs) the branchwidth is between \( g \) and \( cg \), where \( g\times g \) is the size of the largest grid-graph minor and \( c \) is some constant, previously known to be at most \( 4 \). The value of \( c \) is important because, through this dependence between branchwidth and grid minor size, it shows up in the exponents of the time bounds for FPT algorithms on planar graphs. The new paper shows that \( c\le 3 \); it also shows that \( c \) cannot be reduced below two, by presenting a simple example of a graph whose branchwidth is twice as large as its grid minor size: a cylindrical grid formed by \( g \) concentric cycles, each of length \( 2g \).
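For concreteness, here is a small sketch (using networkx; the function name and vertex labelling are my own) that builds the lower-bound example described above: \( g \) concentric cycles of length \( 2g \), with corresponding vertices on consecutive cycles joined by an edge.

```python
import networkx as nx

def cylindrical_grid(g):
    """Build the lower-bound example: g concentric cycles, each of length 2g,
    with matching vertices on consecutive cycles joined.
    (Illustrative construction only; representation is my own choice.)
    """
    G = nx.Graph()
    for ring in range(g):
        for i in range(2 * g):
            G.add_edge((ring, i), (ring, (i + 1) % (2 * g)))  # edge along the cycle
            if ring + 1 < g:
                G.add_edge((ring, i), (ring + 1, i))  # spoke to the next cycle
    return G

G = cylindrical_grid(4)
print(G.number_of_nodes(), G.number_of_edges())  # 32 nodes, 56 edges for g = 4
```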

I didn't think I would be able to see John Augustine's talk on the CNN problem, Friday morning, because I was scheduled to be the session chair for the other parallel session. But the authors of one of the papers in my session had transportation difficulties that prevented them from getting to the conference, freeing a time slot at the end of the session and allowing me to move over to the other one. In the version of the problem considered by Augustine, a point moves continuously in the (L1-metric) plane under the control of an adversary, and the algorithm must move a second point so that, at all times, the algorithm's point and the adversary's point are on a common horizontal or vertical line. The cost to the algorithm is the L1 distance its point moves, and the quality of an algorithm is its competitive ratio: the ratio between the cost it actually incurs and the optimal cost it could have achieved had it known the adversary's future moves. The difference between Augustine's version of the problem and previously considered versions is that here the adversary's point must move continuously, where previously it was allowed to jump around arbitrarily; on the other hand, his version generalizes another previously considered special case in which the adversary's point was only allowed to move along horizontal or vertical lines. Augustine's algorithm alternates between two strategies: one in which it moves like a chess rook, keeping the adversary's point to its left on a horizontal line, and a second in which it moves like a chess bishop, trying to bring its point closer to the adversary's. He shows that this method achieves a competitive ratio of \( 3+2\sqrt{3} \), improving on a ratio of \( 9 \) for the more restrictive problem in which the adversary's motion is confined to horizontal and vertical lines.
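As an illustration of the cost model only (this is emphatically not Augustine's rook-and-bishop strategy), here is a naive greedy server in Python: after each adversary move it travels the minimum L1 distance needed to share a horizontal or vertical line with the adversary's current position. The function name and the discretization of the adversary's motion into a sequence of positions are my own.

```python
def naive_cnn_server(adversary_path, start=(0.0, 0.0)):
    """Naive online strategy for the CNN problem (illustration only):
    after each adversary move, shift our point the minimum L1 distance
    needed to share an x- or y-coordinate with the adversary's point.

    adversary_path: sequence of (x, y) adversary positions.
    Returns the total L1 distance our point travels.
    """
    x, y = start
    cost = 0.0
    for ax, ay in adversary_path:
        dx, dy = abs(ax - x), abs(ay - y)
        if min(dx, dy) == 0:
            continue  # already on a common horizontal or vertical line
        if dx <= dy:
            cost += dx
            x = ax   # align on a common vertical line
        else:
            cost += dy
            y = ay   # align on a common horizontal line
    return cost

# usage: the adversary drifts along a diagonal; the greedy server zig-zags after it
print(naive_cnn_server([(1, 1), (2, 2), (3, 3)]))
```

A greedy like this makes no attempt to control the competitive ratio; the interesting part of the result is precisely how alternating between the rook-like and bishop-like phases keeps the ratio bounded.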

My own talks were on regular labelings and on flows in one-crossing-minor-free graphs; I also had a paper on finding cliques, presented by my co-author Darren Strash (a student of Mike Goodrich at UCI).

Comments:

None (2010-12-20T09:28:28Z): Thanks...

David... appreciate the mention. I'd like to also give a quick shoutout to my coauthor Nick Gravin.

John.