xkcd mentions the faster-than-factorial dynamic programming solution to the traveling salesman problem!

The quick explanation: if you try to solve the traveling salesman problem by trying all cyclic permutations of the input, it will take time \( O(n!) \) (there are \( (n-1)! \) cyclic permutations, and it takes time \( O(n) \) to test each one). Or maybe \( O((n-1)!) \) if you use some sort of Gray code on permutations to reduce the time to test each one to a constant. In either case, you won't be able to solve problems with more than a dozen or so vertices.
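To make the brute-force approach concrete, here's a minimal Python sketch (my own illustration, with names of my own choosing; it assumes the input is a square matrix of pairwise distances):

```python
from itertools import permutations

def tsp_brute_force(d):
    """Return the length of the shortest tour, given an n-by-n
    distance matrix d.  Time: O(n) per permutation, O(n!) overall."""
    n = len(d)
    best = float("inf")
    # Fix vertex 0 as the start, so each cyclic permutation is tried once.
    for rest in permutations(range(1, n)):   # (n-1)! orderings of the rest
        tour = (0,) + rest
        length = sum(d[tour[i]][tour[(i + 1) % n]] for i in range(n))
        best = min(best, length)
    return best
```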

In 1962, Richard Bellman, and independently Held and Karp, observed that the problem can be solved much more efficiently by dynamic programming. Pick one of the \( n \) vertices (arbitrarily) as a start, and call it \( s \). Then compute, for each subset \( A \) of the remaining vertices, and each vertex \( t \) in \( A \), the shortest path that starts at \( s \), runs through each vertex of \( A \), and ends at \( t \): if we let \( P(A,t) \) denote the length of this path, then \[ P(A,t) = \min_{x\in A\setminus\{t\}} \bigl( P(A\setminus\{t\},x) + d(x,t) \bigr), \] with the base case \( P(\{t\},t) = d(s,t) \). There are \( O(n 2^n) \) choices of \( A \) and \( t \), and each one takes time \( O(n) \) to run through the choices of \( x \), so the total time is \( O(n^2 2^n) \), the bound stated in the xkcd comic.
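Here's a sketch of that dynamic program in Python, using bitmasks over the vertices other than \( s \) to represent the subsets \( A \) (again my own illustrative code, not taken from the original papers):

```python
def tsp_held_karp(d):
    """Return the shortest tour length for an n-by-n distance matrix d,
    in O(n^2 2^n) time.  Vertex 0 plays the role of s; a subset A of
    {1, ..., n-1} is encoded as a bitmask whose bit t-1 stands for t."""
    n = len(d)
    # Base cases: P[(A, t)] for the singleton sets A = {t}.
    P = {(1 << (t - 1), t): d[0][t] for t in range(1, n)}
    for A in range(1, 1 << (n - 1)):          # all nonempty subsets
        for t in range(1, n):
            if not (A >> (t - 1)) & 1 or (A, t) in P:
                continue                      # t not in A, or a base case
            rest = A & ~(1 << (t - 1))        # the subset A \ {t}
            P[(A, t)] = min(P[(rest, x)] + d[x][t]
                            for x in range(1, n) if (rest >> (x - 1)) & 1)
    full = (1 << (n - 1)) - 1                 # all of {1, ..., n-1}
    return min(P[(full, t)] + d[t][0] for t in range(1, n))
```

Since the mask for \( A\setminus\{t\} \) is numerically smaller than the mask for \( A \), running through the masks in increasing order guarantees each subproblem is ready when it's needed.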

The drawback of this algorithm, though, is that it also uses a lot of space. Say you're a little bit careful about this, and run through all the choices of \( A \) in order by size (as sketched below), so that while you're computing solutions for sets of size \( k+1 \) you only need to remember the solutions you've already computed for sets of size \( k \). (This will only let you find the length of the shortest tour, but the tour itself can likely be constructed in similar space using tricks like Hirschberg's 1975 space-saving technique for longest common subsequences.) Then the algorithm requires the most space when \( k \) is around \( n/2 \), at which point it will need to store \( O(n^{1/2} 2^n) \) solutions: there are \( \binom{n-1}{k} = O(2^n/\sqrt{n}) \) sets of that size, each with \( O(n) \) choices of the final vertex \( t \). Say each solution is a \( 32 \)-bit floating point number; then, when \( n \) is \( 20 \), you'd need roughly \( 20 \) megabytes of memory to run the algorithm. If you have a computer with a gigabyte of memory, the largest \( n \) you'd be able to solve is \( n=25 \). Some of my own research concerns special cases of the traveling salesman problem where these space limitations can be removed, and the problem solved by a simpler backtracking algorithm; the time for my algorithm is still exponential, though, so it's still limited to relatively small inputs (up to maybe \( 100 \) vertices).
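Concretely, the layer-by-layer bookkeeping might look like the following sketch (mine, and purely illustrative; it uses explicit sets rather than bitmasks to keep the layering visible, and like the version described above it finds only the tour length, not the tour itself):

```python
from itertools import combinations

def tsp_held_karp_layered(d):
    """Same recurrence as before, but keeps only the table entries for
    sets of size k and k+1, peaking at O(sqrt(n) 2^n) stored entries."""
    n = len(d)
    # Layer k = 1: the singleton base cases.
    prev = {(frozenset([t]), t): d[0][t] for t in range(1, n)}
    for size in range(2, n):                  # build layer k+1 from layer k
        curr = {}
        for subset in combinations(range(1, n), size):
            A = frozenset(subset)
            for t in A:
                rest = A - {t}
                curr[(A, t)] = min(prev[(rest, x)] + d[x][t] for x in rest)
        prev = curr                           # layer k can now be discarded
    full = frozenset(range(1, n))
    return min(prev[(full, t)] + d[t][0] for t in range(1, n))
```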

Fortunately, these limitations are primarily theoretical. If you really need fast exact solutions to large traveling salesman problems, branch-and-bound algorithms based on linear programming can solve much larger problems. The Concorde code by Chvátal et al. seems to be a good example of this technique: it claims to have solved inputs with up to 15,112 vertices to optimality. That's a lot more than you can get from the Held–Karp dynamic program, but maybe still not as good as the \( O(1) \) "sell it on eBay" solution suggested in xkcd.

ETA: See this later post for algorithms that trade off between the \( 2^n \) time and space of the dynamic program above and the \( 4^n \) time and polynomial space of the divide-and-conquer algorithm mentioned in the comments.