Report from Karlsruhe
This week, I'm in Karlsruhe (again), where I've been attending the European Symposium on Algorithms and the associated workshops of ALGO 2008.
My favorite talks of the conference so far were both on Tuesday. Mark Overmars gave the daily plenary talk, on path planning in computer games, punctuated with some sleight-of-hand involving linked ropes. The issue is that local navigation rules aren't robust (he gave some examples of characters getting looped in GTA4, and of an animal companion in a fantasy adventure game having to teleport to keep up with the player, so that running away became the best way of rescuing the companion from enemies), while A* or other optimal path planning methods are slow, inflexible in the presence of other moving characters, and lead to artificial-looking motions. Mark's proposed solution instead quickly finds both a (non-optimal, but short) "control path" and an obstacle-free region surrounding it within which the character should stay; the character uses local rules to navigate towards a moving control point on the control path.
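To give a flavor of the control-path idea (this is only a toy sketch, not Overmars' actual algorithm), here's a little Python routine in which a character chases a control point that slides forward along a precomputed control path. The path, speeds, and lookahead distance are all invented for the example, and the obstacle-free corridor and obstacle avoidance aren't modeled at all.

```python
import math

def follow_control_path(start, control_path, speed=0.5, lookahead=2.0, steps=500):
    """Toy corridor navigation: chase a control point advancing along
    control_path.  The obstacle-free corridor around the path (and any
    obstacle avoidance) is not modeled here; this only illustrates the
    "steer locally toward a moving control point" part of the idea."""
    pos = list(start)
    target_index = 0
    trajectory = [tuple(pos)]
    for _ in range(steps):
        # Slide the control point forward: skip any waypoint already
        # within the lookahead distance of the character.
        while (target_index + 1 < len(control_path) and
               math.dist(pos, control_path[target_index]) < lookahead):
            target_index += 1
        target = control_path[target_index]
        dx, dy = target[0] - pos[0], target[1] - pos[1]
        dist = math.hypot(dx, dy)
        if dist < 1e-9:
            break  # reached the end of the control path
        # Local rule: constant-speed step toward the control point.
        step = min(speed, dist)
        pos[0] += step * dx / dist
        pos[1] += step * dy / dist
        trajectory.append(tuple(pos))
    return trajectory

if __name__ == "__main__":
    # An invented control path; in the real method this would come from a
    # fast (non-optimal) path query plus a clearance computation.
    path = [(0, 0), (2, 1), (4, 1), (6, 3), (8, 3)]
    for x, y in follow_control_path((0, 0), path)[::5]:
        print("%.2f %.2f" % (x, y))
```

Because the character only ever aims at the moving control point, the resulting motion cuts corners and adapts step by step, which is the kind of natural-looking local behavior the talk was after.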
I almost missed the other talk when I took a break from the conference to search for a German SIM card for my phone. But the nearest phone store turned out to be so close that I returned in time, and am glad I did. It was by Kirsch, Mitzenmacher, and Wieder (I think Udi Wieder gave the talk) and was entitled More Robust Hashing: Cuckoo Hashing with a Stash.

Cuckoo hashing involves having two hash functions used as indices into two different hash tables: every item is placed at one of its two locations, and if both are already full then other items are moved to their alternative locations in order to make room. It offers guaranteed constant-time searches, although insertions may be costlier, and variants of it are extremely space-efficient. But it can fail, when some group of \( k \) items is mapped to fewer than \( k \) combined slots in the two hash tables. This happens with such a small probability that the conventional solution is simply to rebuild the whole hash table when it does, but that may not be acceptable in some applications. Instead, the authors propose augmenting the table with a "stash", a small list of items that could not be hashed; they show that, for any polynomial failure probability, the stash needs only constant size (essentially, the size is just the exponent of the failure probability).

One of the things I liked about it is that they reduce the problem very cleanly to a natural mathematical question about random graphs: the process of adding items to the hash table, by choosing pairs of locations to place the items, can be translated into a process of forming a random (bipartite) graph by adding edges one at a time, and with this translation the items that must be stashed correspond to edges that connect two vertices of an already-cyclic component. The problem becomes one of proving that these edges are extremely rare.
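To make the mechanism concrete, here's a toy Python sketch of cuckoo hashing with a stash: two tables, two hash functions, a bounded chain of evictions on insertion, and a small stash for whatever item is left homeless. This is my illustration of the idea, not the paper's construction; the class name, table size, eviction bound, and salted built-in hashes are all arbitrary choices for the example.

```python
import random

class CuckooHashWithStash:
    """Toy cuckoo hash table with a stash: two tables, two hash functions,
    and a small overflow list for keys that can't be placed after a bounded
    number of evictions.  Parameters are illustrative, not from the paper."""

    def __init__(self, size=101, max_evictions=32):
        self.size = size
        self.max_evictions = max_evictions
        self.tables = [[None] * size, [None] * size]
        # Two "random" hash functions, simulated here by salting hash().
        self.salts = (random.getrandbits(32), random.getrandbits(32))
        self.stash = []

    def _slot(self, key, which):
        return hash((self.salts[which], key)) % self.size

    def insert(self, key):
        if self.lookup(key):
            return
        which = 0
        for _ in range(self.max_evictions):
            slot = self._slot(key, which)
            # Place the key, picking up whatever previously occupied the slot.
            key, self.tables[which][slot] = self.tables[which][slot], key
            if key is None:
                return               # placed without further evictions
            which = 1 - which        # the evicted key tries its other table
        self.stash.append(key)       # give up and stash the displaced key

    def lookup(self, key):
        return (self.tables[0][self._slot(key, 0)] == key or
                self.tables[1][self._slot(key, 1)] == key or
                key in self.stash)

if __name__ == "__main__":
    t = CuckooHashWithStash(size=11)     # deliberately tiny to force stashing
    for k in range(30):
        t.insert(k)
    print(all(t.lookup(k) for k in range(30)), "stash size:", len(t.stash))
```

In the regime the paper analyzes, the tables would be large enough (relative to the number of items) that stashing is a rare event; the tiny table in the example is only there to make the stash visibly non-empty while keeping lookups correct.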
Somehow I've gotten myself elected to the steering committee of this thing, so it seems likely I'll be back. Next year it will be in Copenhagen (there's a story here involving academic politics at Lund that I can't find any details of online) and the year after that in Liverpool (home of Europe's oldest Chinatown, they say).
ETA: The Lund story.