Robust graph isomorphism and its applications
I think part of the reason graph isomorphism has been such a tricky problem, theoretically, is that in practice it's too easy. Almost all graphs have some small irregularities that can be used as landmarks for identifying all the features of the graph, even when they've been scrambled arbitrarily. Only a small class of highly symmetric graphs poses any actual difficulty, and that's why a deep knowledge of group theory (the study of symmetries) has been so useful in theoretical work on graph isomorphism. It's also why finding counterexamples to crank graph isomorphism algorithms is hard enough that the cranks don't do it themselves, and so don't avoid the embarrassment of someone else doing it for them.
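To illustrate what I mean by irregularities serving as landmarks, here's a quick Python sketch (my own toy example, using networkx; the parameters are arbitrary) that matches a dense random graph against an arbitrarily scrambled copy of itself using nothing but vertex degrees, refined by the degrees of each vertex's neighbors. On such graphs these signatures are almost always all distinct, so the hidden permutation can be read right off:

```python
import networkx as nx
import random

def signature(G, v):
    # Degree plus the sorted multiset of neighbor degrees: a cheap,
    # permutation-invariant label that breaks ties on most graphs.
    return (G.degree(v), tuple(sorted(G.degree(u) for u in G[v])))

n = 200
G = nx.gnp_random_graph(n, 0.5, seed=1)

# Scramble: relabel the vertices by a random permutation.
random.seed(1)
perm = list(range(n))
random.shuffle(perm)
H = nx.relabel_nodes(G, dict(enumerate(perm)))

# If every signature is unique, matching signatures recovers the permutation.
sig_to_vertex = {signature(G, v): v for v in G}
matched = sum(sig_to_vertex.get(signature(H, perm[v])) == v for v in G)
print(matched, "of", n, "vertices matched correctly")
```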
In many cases, even if you add some noise to a graph, its underlying irregularities will show through, allowing you to recognize it and its individual features. That's the main idea behind my most recent arXiv preprint, "Models and Algorithms for Graph Watermarking" (arXiv:1605.09425, with Goodrich, Lam, Mamano, Mitzenmacher, and Torres, to appear at ISC 2016).
The problem we study is one of watermarking copies of a graph (for instance, a large social network) so that if you see one of your copies later you can tell which one it was, using the graph structure alone rather than extra information such as vertex labels. To do so, we identify a small number of "landmark" high-degree vertices (generally, the ones with the highest degrees), use the pattern of adjacencies to the landmarks to give unique identities to a larger set of medium-degree vertices, and then flip (add or remove) a small random set of adjacencies among these identified vertices; this pattern of flips is the watermark. With high probability (when the graph to be watermarked is drawn from a suitable random family), even if an adversary tries to mask the watermark by scrambling the vertices or flipping more edges, the vertex identities and the pattern of flipped edges remain recoverable.
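Here's a rough sketch of the flavor of the scheme (the parameter choices, the cutoff for "medium-degree", and the recovery step are placeholders of mine, not the paper's actual algorithm):

```python
import networkx as nx
import random

def watermark(G, num_landmarks=40, num_medium=200, num_flips=25, seed=0):
    rng = random.Random(seed)
    by_degree = sorted(G.nodes, key=G.degree, reverse=True)
    landmarks = by_degree[:num_landmarks]
    medium = by_degree[num_landmarks:num_landmarks + num_medium]

    # Identity code for each medium vertex: its adjacency bit-vector
    # to the landmarks.  On a dense random graph these are unique whp.
    code = {v: tuple(int(G.has_edge(v, u)) for u in landmarks) for v in medium}

    # The watermark: toggle adjacency on a few random medium pairs,
    # recorded by identity code rather than by vertex label.
    flips = []
    for _ in range(num_flips):
        u, v = rng.sample(medium, 2)
        if G.has_edge(u, v):
            G.remove_edge(u, v)
        else:
            G.add_edge(u, v)
        flips.append(frozenset((code[u], code[v])))
    return landmarks, code, flips

G = nx.gnp_random_graph(1000, 0.5, seed=1)
landmarks, code, flips = watermark(G)
print("codes all distinct:", len(set(code.values())) == len(code))
```

Recording the flipped pairs by identity code rather than by vertex label is the point of the construction: an adversary can permute the labels arbitrarily, but the codes can be recomputed from the suspect copy and the flips looked up again.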
Because we're choosing our landmarks in a fairly naive way (by vertex degrees, with ties broken in our implementation by neighboring degrees), our algorithms wouldn't work for random regular graphs. But even in such cases there are other features, such as the number of triangles a vertex belongs to or its vector of distances to the other vertices, that often allow some vertices to be distinguished from others. Finding out which features of this type remain robust when noise is added to the graph seems like a promising line of research.
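For instance, here's a toy computation (again mine, not from the paper) of two such features, per-vertex triangle counts and distance profiles, on a random 3-regular graph, where degrees alone tell you nothing:

```python
import networkx as nx
from collections import Counter

G = nx.random_regular_graph(3, 100, seed=2)

tri = nx.triangles(G)  # number of triangles through each vertex

def distance_profile(G, v):
    # Histogram of shortest-path distances from v, as a sorted tuple:
    # another permutation-invariant feature of a vertex.
    dist = nx.single_source_shortest_path_length(G, v)
    return tuple(sorted(Counter(dist.values()).items()))

features = {v: (tri[v], distance_profile(G, v)) for v in G}
print("distinct feature vectors:", len(set(features.values())), "of", len(G))
```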