# Report from the Hamilton Workshop

I've been visiting Dublin, Ireland (my first time here) for the Fifth William Rowan Hamilton Geometry and Topology Workshop at the Hamilton Mathematics Institute of Trinity College, where I spoke about hyperconvex metric spaces, finding optimal stars, orthogonal convex hulls, and embedding into the Manhattan plane.

It was a good conference, but a bit like stepping into a parallel world where everyone's evil twin does low-dimensional topology and geometric group theory: the workshop theme included the phrase "Computational Geometry," but it's an Other Computational Geometry that mostly means, again, low-dimensional topology and geometric group theory; there's an Other David Epstein (spelled wrong) who does etc and was here last year; and there's an Other Kevin Wortman (not spelled wrong) who does etc and wasn't here, but whose name caused some confusion when I referred to the work of my former student Kevin Wortman. Still, my interests and those of the Other Computational Geometers have enough overlap that I think we all got something useful out of the experience.

I'm not going to recount all of the talks (especially because I didn't understand them all) but here are a few that caught my attention.

Nathan Dunfield has posted his slides on “Practical Solutions to Hard Problems in 3-Dimensional Topology.” Hard in this context means, as 3-dimensional topology goes, actually pretty easy: problems such as finding incompressible surfaces in manifolds that can be solved in merely exponential time by triangulating the manifold (that is, representing it as a complex of glued-together tetrahedra), observing that the thing you want can be represented as a normal surface, describing all normal surfaces as the integer points in an unbounded high-dimensional convex polytope, and looking at the boundary rays of the polytope. Despite all this complication, these methods work pretty well (he said) when the number of tetrahedra is moderate (say 30 or 40), but they hit a wall somewhere around 50 tetrahedra, and he needed more like 120 of them to define some examples of manifolds coming from a number-theoretic construction. So he found some heuristics and tricks that allowed him to guess the surfaces he was looking for without the exhaustive searching; there seems to be little hope of guaranteeing that these tricks work well in general, but they worked well enough for his examples.

János Pach was there as well and gave a talk I'd mostly seen before (but I think the rest of the audience hadn't) on bounding the numbers of edges in a graph when you know that it has a nice enough drawing (say, there is a constant bound on the size of the largest mutually crossing set of edges).

Robert Meyerhoff spoke on “Computer-Aided Analysis of Hyperbolic 3-Manifolds.” The use of the computer in this work is very quantitative and numerical. For instance, as part of a different calculation he needed to show that (with, it turns out, six or seven exceptions) if one expands a toroidal tube around the shortest non-contractible curve in a compact hyperbolic 3-manifold, one can keep expanding until the tube radius is at least \( \log(3)/2 \) before it runs into itself and can't expand any more. The solution technique involves finding several numerical parameters of the manifold (such as the length of the non-contractible curve, the shortest distance between two copies of the curve in the universal cover, etc) and using a computer to map out blocks in the parameter space where a radius lower than \( \log(3)/2 \) would lead to a contradiction (e.g. some other curve that is shorter than the supposedly shortest one). The output of this search (a partition of parameter space into blocks and data describing what goes wrong within each block) forms what is in principle a rigorous proof; a program that verifies the proof can be much simpler than one that searches for it, and the level of rigor resulting from this sort of search seems to be quite acceptable to the low-dimensional topologists.

Tim Riley had a fun talk that involved asymptotic analysis of quickly growing functions of a type that was familiar to me from theoretical CS. Specifically, he looked at the following game one could play on a finite sequence of non-negative integers: remove the first number from the sequence, and replace each remaining positive number \( i \) in the sequence by the pair of numbers \( i,i-1 \) (zeros other than the removed one are left in place). For instance, the sequence 3,3,3 would become 3,2,3,2, then 2,1,3,2,2,1, then 1,0,3,2,2,1,2,1,1,0, then 0,3,2,2,1,2,1,1,0,2,1,1,0,1,0,0, etc. As he showed, any sequence eventually becomes empty after a finite number of steps like this, but a sequence of \( n \) \( i \)'s takes roughly \( A(i,n) \) steps where \( A \) is the Ackermann function. He used this rapid growth to define groups with unexpected properties.

There were also several talks presented entirely on the chalkboard, something I still see occasionally in mathematics department seminars but not at the other conferences I go to. I found it quite refreshing, but I think it would only work for hour-length talks, not for the 20-minute ones we more frequently get.