Hamming's problem
Today I merged two Wikipedia articles on the same subject, regular numbers: numbers of the form \( 2^i 3^j 5^k \). The ancient Babylonians liked these numbers because their reciprocals have terminating sexagesimal representations; more recently, "Hamming's problem" is the task of generating these numbers efficiently, in increasing order.

Dijkstra described a very simple but inefficient functional solution: if \( H \) is the sequence of numbers to be generated, then \( H \) consists of the number \( 1 \) concatenated with the merger of \( 2H \), \( 3H \), and \( 5H \). This has two flaws. First, it generates the same numbers many times: e.g., \( 6 \) is generated both as \( 2\times 3 \) (in \( 2H \)) and as \( 3\times 2 \) (in \( 3H \)). This can be alleviated by letting \( A \) be the powers of \( 2 \), \( B \) be \( A \) merged with \( 3B \), and \( H \) be \( B \) merged with \( 5H \). Second, even with that improvement, if one does the merges in the most obvious and natural lazy-functional way, the sequence is generated inefficiently: each generated number is involved in \( O(j+k) \) merges, and all these merges add up, so that generating \( n \) numbers this way takes \( \Theta(n^{4/3}) \) arithmetic operations.
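For concreteness, here is roughly what the improved version looks like as Python generators (a sketch with names of my own choosing, not code from any of the discussions cited below). The recursive calls in B() and H() each regenerate their sequence from scratch, which is the natural-but-expensive part:

```python
def merge(a, b):
    # Merge two increasing sequences, emitting values present in both only once.
    x, y = next(a), next(b)
    while True:
        if x <= y:
            if x == y:
                y = next(b)
            yield x
            x = next(a)
        else:
            yield y
            y = next(b)

def scaled(s, factor):
    for x in s:
        yield factor * x

def A():      # powers of two: 1, 2, 4, 8, ...
    p = 1
    while True:
        yield p
        p *= 2

def B():      # B = merge(A, 3B): the numbers 2^i 3^j
    a = A()
    yield next(a)          # emit the leading 1 by hand so the recursion can start
    yield from merge(a, scaled(B(), 3))

def H():      # H = merge(B, 5H): the numbers 2^i 3^j 5^k
    b = B()
    yield next(b)          # likewise
    yield from merge(b, scaled(H(), 5))
```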
Below is an approach that instead generates \( n \) numbers in a linear number of operations. The basic principle: when solving an equation like \( H = \mathrm{merge}(B, 5\times H) \), instead of generating \( 5\times H \) by a recursive instance of the same algorithm, cache the whole sequence \( H \) in a single list and reuse that stored list when computing \( 5\times H \).
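In Python, the idea might look like the following sketch. (The name powtimes matches the function referred to in the updates below, but the body is my reconstruction of the approach just described, not the post's original listing; it reuses A() from the sketch above.)

```python
def powtimes(kernel, multiplier):
    # Solve X = merge(kernel, multiplier * X) without recursion: store the
    # output in a list, and produce the multiplier * X stream by walking a
    # back pointer along that stored list.
    cache = []    # the sequence X generated so far
    i = 0         # back pointer: next cached element to be multiplied
    k = next(kernel)
    while True:
        # No equality test is needed: in the uses below, kernel values are
        # never divisible by multiplier, so the two streams are disjoint.
        if not cache or k < multiplier * cache[i]:
            x = k                       # next output comes from the kernel
            k = next(kernel)
        else:
            x = multiplier * cache[i]   # next output comes from the cache
            i += 1
        cache.append(x)
        yield x

def hamming():
    return powtimes(powtimes(A(), 3), 5)
```

Each output is appended to its cache exactly once and each back pointer only moves forward, so generating \( n \) numbers takes \( O(n) \) arithmetic operations.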
However, the pure functional approach uses less space to generate the same sequence: \( O(n^{2/3}) \), versus \( O(n) \) for the caching approach. I can find algorithms that generate the sequence in \( O(n^{2/3}) \) space and \( O(n \log n) \) time (essentially, by turning it into a shortest path problem in an infinite grid graph, applying Dijkstra's algorithm for shortest paths, and keeping track only of the nodes on the search frontier), but is it possible to be optimal in both time and space at once?
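A sketch of that kind of algorithm (the details here are mine): treat the regular numbers as nodes of an implicit infinite graph whose edges multiply by 2, 3, or 5, and explore it in increasing order with a priority queue that holds only the frontier.

```python
import heapq

def hamming_frontier():
    heap = [1]
    in_heap = {1}    # guard against inserting the same number twice
    while True:
        x = heapq.heappop(heap)
        in_heap.discard(x)
        yield x
        # A number's parents (its quotients by 2, 3, 5) are all smaller than
        # it, so they all pop while it is still in the heap; the guard set
        # therefore blocks every duplicate insertion.
        for p in (2, 3, 5):
            if p * x not in in_heap:
                in_heap.add(p * x)
                heapq.heappush(heap, p * x)
```

After \( n \) outputs the heap holds only the regular numbers greater than the last output \( x \) but at most \( 5x \), which is where the \( O(n^{2/3}) \) space bound comes from; each heap operation costs \( O(\log n) \).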
ETA: Yes, it is, by a very simple change to the above code. Simply throw away the parts of the cached sequence that the back pointer has moved past. If amortized complexity rather than constant time per output is acceptable, add the following lines to the else case in powtimes:
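(Relative to the powtimes sketch above; these exact lines are a reconstruction.)

```python
        else:
            x = multiplier * cache[i]
            i += 1
            # New: once the dead prefix dominates, discard the part of the
            # cache that the back pointer has moved past.  The O(len(cache))
            # deletion happens at most once per len(cache)/2 outputs, so the
            # cost is amortized constant per output.
            if 2 * i > len(cache):
                del cache[:i]
                i = 0
```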
ETA2, March 14: Writing in a more imperative style actually gives tighter code, though perhaps it is not as readable:
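Something along these lines, with the output list serving as its own cache and a lagging index for each multiplier (again a sketch in the spirit described, since the original listing is not reproduced here):

```python
def hamming_list(n):
    # Return the first n regular numbers.
    H = [1]
    i2 = i3 = i5 = 0    # back pointers into H, one per multiplier
    while len(H) < n:
        x = min(2 * H[i2], 3 * H[i3], 5 * H[i5])
        H.append(x)
        # Advance every pointer whose candidate was used, so duplicates
        # such as 6 = 2*3 = 3*2 are emitted only once.
        if x == 2 * H[i2]: i2 += 1
        if x == 3 * H[i3]: i3 += 1
        if x == 5 * H[i5]: i5 += 1
    return H
```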
Comments:
2007-03-13T09:41:56Z
I'm leonardo_m (not logged in). There is a long and *very* interesting discussion about this: http://lambda-the-ultimate.org/node/608
Another discussion about it: http://mail.python.org/pipermail/python-list/2005-January/thread.html#303475
2007-03-13T17:31:46Z
Thanks. That clarifies that Dijkstra's original version is linear time if one is working in a language in which defining a lazy sequence this way creates a data structure whose elements are each calculated only once, with the same calculated sequence reused wherever it is referenced. It's only when one tries to do something like
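```python
# (an illustrative guess at the kind of definition meant, in terms of the
# merge and scaled helpers sketched in the post above)
def H():
    yield 1
    yield from merge(scaled(H(), 2), merge(scaled(H(), 3), scaled(H(), 5)))
```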
(recalculating the sequence recursively rather than reusing it) that one runs into trouble. The lambda-the-ultimate discussion includes a better Python implementation that uses tee() to reuse the sequence, and also the \( O(n^{2/3}) \) space analysis of algorithms of this type, which keep only the part of the sequence still required for generating future elements.
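The usual shape of such a tee()-based solution (a sketch along those lines, not necessarily the exact code from either thread) feeds the output back into itself, so that tee()'s internal buffering plays the role of the cache:

```python
from itertools import chain, tee

def hamming_tee():
    def deferred():
        for x in output:    # 'output' is assigned below, before this body runs
            yield x
    result, t2, t3, t5 = tee(deferred(), 4)
    merged = merge(scaled(t2, 2), merge(scaled(t3, 3), scaled(t5, 5)))
    output = chain([1], merged)
    return result
```

tee() discards items once all four copies have passed them, so only the window between the output and the slowest multiplied copy is stored.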