I just got back from the International Meshing Roundtable, an annual conference on unstructured mesh generation and related topics, where I presented my paper on diamond-kite meshes; here are my talk slides. After the talk Scott Mitchell asked me what happens if I try to smooth a diamond-kite mesh, and the answer is a little surprising: nothing! Each vertex is already at the centroid of its neighbors, so Laplacian smoothing won't change its position. (Stated another way, the mesh is a Tutte embedding of its graph.)
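To see the fixed-point property concretely, here is a minimal sketch (my own toy example, not an actual diamond-kite mesh) of one Laplacian smoothing pass, in which a vertex that already sits at the centroid of its neighbors doesn't move:

```python
import numpy as np

def laplacian_smooth(points, neighbors, fixed):
    """One pass of Laplacian smoothing: move each free vertex
    to the centroid of its graph neighbors."""
    new_pts = points.copy()
    for v, nbrs in neighbors.items():
        if v not in fixed:
            new_pts[v] = points[list(nbrs)].mean(axis=0)
    return new_pts

# Toy input: a single interior vertex (index 4) already at the
# centroid of its four boundary neighbors.
points = np.array([[0.0, 1.0], [1.0, 0.0], [0.0, -1.0], [-1.0, 0.0],
                   [0.0, 0.0]])
neighbors = {4: [0, 1, 2, 3]}
fixed = {0, 1, 2, 3}

smoothed = laplacian_smooth(points, neighbors, fixed)
assert np.allclose(smoothed, points)  # smoothing is a no-op here
```

In a diamond-kite mesh every interior vertex has this centroid property simultaneously, which is exactly what makes the whole mesh a fixed point of the smoothing.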
Of course, other smoothing methods might do something different. Some time in the last ten years, centroidal Voronoi tessellation seems to have taken over from Laplacian smoothing as the first choice for mesh smoothing or remeshing, at least for triangle meshes, because it leads to many nearly-equilateral triangles. When used in its most basic form, it produces meshes that also have uniform size, but that can be changed by modifying the underlying metric. For instance, Bruno Lévy and Nicolas Bonneel had an interesting paper in which they lifted three-dimensional surfaces to six dimensions (with some k-nearest-neighbor tricks for making the six-dimensional Voronoi cell computations tractable) by appending the surface normal vector to the three-dimensional coordinates. The resulting metric combines curvature with distance, leading to triangles aligned to the curvature of the surface. John Edwards, Wenping Wang and Chandrajit Bajaj had another paper on centroidal Voronoi remeshing, using weighted 3D Euclidean distances to produce triangles on a surface, scaled to its curvature. For nearly-flat and parallel pieces of surface, weighting by curvature produces topologically incorrect meshes, and weighting by local feature size fixes that but makes the triangles too dense; their solution is to partition the surface into regions within which curvature and feature size match, mesh them separately, and then stitch the results together.
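The basic centroidal Voronoi iteration (Lloyd's algorithm) that underlies these methods can be sketched as follows; this is my own illustrative sketch using a Monte-Carlo approximation of the Voronoi cells, not the method of either paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def lloyd_step(sites, samples):
    """One Lloyd iteration, approximated on a dense point sample:
    assign each sample to its nearest site, then move each site to
    the centroid of the samples assigned to it."""
    d2 = ((samples[:, None, :] - sites[None, :, :]) ** 2).sum(axis=2)
    owner = d2.argmin(axis=1)
    new_sites = sites.copy()
    for i in range(len(sites)):
        cell = samples[owner == i]
        if len(cell):
            new_sites[i] = cell.mean(axis=0)
    return new_sites

samples = rng.random((20000, 2))   # uniform density on the unit square
sites = rng.random((16, 2))
for _ in range(50):
    sites = lloyd_step(sites, samples)
# At convergence each site sits at the centroid of its Voronoi cell,
# and with a uniform density the cells come out roughly uniform in size.
```

Changing the density of the sample points (the "underlying metric") is what lets the same iteration produce graded or curvature-adapted meshes instead of uniform ones.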
The best paper award went to "A PDE based approach to multidomain partitioning and quadrilateral meshing" by Nicolas Kowalski, Franck Ledoux and Pascal Frey. They start by fitting a continuous field of "plus signs" (orientations for square mesh elements) to the input domain: the plusses should align with the domain boundaries, and smoothly interpolate to the interior of the domain. There's a cute trick here: if you choose one of the four unit-vector directions of the plus sign, and take it to the fourth power, the resulting unit vector doesn't depend on which of the four directions you choose, and the plus sign can be uniquely recovered from this unit vector. So this part of the problem becomes one of fitting a smooth vector field to boundary conditions, which can be done using solutions to the heat equation (the PDE part of the title). Once the orientations have all been chosen, one can trace flow lines through them, starting from the singularities of the field, to decompose the domain into blocks within which structured quad meshes can be used; this part is a bit like a continuous version of the algorithm in my paper "Motorcycle graphs: canonical quad mesh partitioning", but the traced lines need to continue across other lines rather than stopping at them, in order to ensure that the numbers of quads along the boundaries of each block match up. The algorithm can run into problems where a streamline gets into an infinite spiraling pattern instead of terminating; they say this can be handled by snapping to a nearby singularity but I'm not entirely convinced, because there might not be a singularity to snap to.
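The fourth-power trick is easiest to see with complex numbers, where taking a unit vector to the fourth power multiplies its angle by four; here is a minimal sketch (my own, with hypothetical function names) of encoding a plus-sign orientation as a single vector and recovering it:

```python
import cmath
import math

def cross_to_vector(theta):
    """Encode a plus-sign orientation as one unit vector via the
    fourth power: all four arms theta + k*pi/2 give the same result,
    since exp(4i(theta + k*pi/2)) = exp(4i*theta) * exp(2*pi*i*k)."""
    return cmath.exp(4j * theta)

def vector_to_cross(z):
    """Recover a representative arm direction in [0, pi/2);
    the other three arms follow by adding multiples of pi/2."""
    return (cmath.phase(z) / 4.0) % (math.pi / 2)

theta = 0.3
for k in range(4):
    z = cross_to_vector(theta + k * math.pi / 2)
    assert abs(z - cross_to_vector(theta)) < 1e-9  # arm choice doesn't matter
assert abs(vector_to_cross(cross_to_vector(theta)) - theta) < 1e-9
```

This is why the interpolation can be done on ordinary vectors: averaging or diffusing the encoded vectors never has to decide which of the four arms is "the" direction.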
The Meshing Roundtable community is much more heavily weighted to industry than the other conferences I usually go to, and perhaps because of that is a bit more formal. The only person I saw in a T-shirt was invited speaker Mark Meyer from Pixar, who gave a very entertaining and informative talk about Pixar's use of subdivision surfaces. Pixar has open-sourced their subdivision surface library, and two other talks also touched on issues of open sourcing of meshing code. These talks led to much discussion, especially concerning the choice by one of the other groups to use the GPL as their open source license.
The conference banquet was at the Computer History Museum in Mountain View. Despite being in that part of the world frequently, I'd never been before, and it was worth the visit. If you've been working with computers for any length of time it can be quite nostalgic, as well as informative, although also quite effective at making one feel old. (Yes, I have actually used punch cards, long ago.)
There's much more I could write about, but in the interest of finishing this post, let's just say that it was an enjoyable and successful conference, and I hope the interval until the next time I go is much shorter than the previous one.