Sorting when the sorted order keeps changing
Even though we know it doesn’t mean much, there are plenty of systems that maintain opinion-based rankings of things: the best movies, songs, or albums of all time, the best names to give babies, safest or most interesting places to live, best local restaurants, or even the top computer science departments. The problem is, because they’re opinion-based, the rankings keep changing. Different styles of movie, food, or computer science research fall into or out of fashion, and this causes the ordering of who is the best to change. And it’s obviously really important to have an accurate ordering. So what can you do?
One approach would be to stop the world and re-rank everybody periodically, as Newsweek does every year for academics. But then your ranking will regularly become a whole year out of date. Wouldn’t it be better to change your reported ranking continuously, as the actual rankings change? That way everyone will have to visit your ranking site more often to keep up with how their favorites are doing. But it wouldn’t do for each fine-grained update to be as expensive as recomputing the whole ranking from scratch. It would be better if you could ask the people who produce the ranking for only a little bit of information at a time, just enough to keep your ranking accurate as opinions change under it.
That is the problem modeled mathematically by Anagnostopoulos, Kumar, Mahdian, and Upfal in a paper from ICALP 2009, later published in the journal Theoretical Computer Science, “Sorting and selection on dynamic data”. The specific ranking website they chose to describe (bix.com) has since gone defunct, but the principle remains: maintain a ranking of a set of objects by asking users binary questions about which of two objects is better. The objects are assumed to have an underlying linear ordering, which changes randomly by a process that selects a random consecutive pair of objects in the underlying ordering and swaps them. At the same time, the ranking algorithm maintains its own ordering (not the same as the underlying ordering), changing it by a process in which it selects any pair of objects, asks what the underlying order of those two objects is, and uses the result to update its ordering somehow. Since human opinions are noisy, I suspect that asking about the underlying order of two objects isn’t just a matter of presenting the comparison to a single user, but rather of showing the same binary comparison to many users and letting them vote. Anyway, assuming the two processes operate at roughly the same rate (one swap in the underlying order per comparison by the ranking algorithm), can a smart enough ranking algorithm keep up?
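To make the model concrete, here is a minimal Python sketch of the two kinds of steps, which alternate one-for-one in the model. The function names and the representation of an ordering as a list are my own choices for illustration, not anything from the paper.

```python
import random

def evolve_one_step(true_order):
    """One step of the underlying process: swap a random consecutive pair
    in the (hidden) true ordering."""
    i = random.randrange(len(true_order) - 1)
    true_order[i], true_order[i + 1] = true_order[i + 1], true_order[i]

def query(true_order, x, y):
    """One step on the algorithm's side of the model: ask whether x
    currently comes before y in the underlying ordering."""
    return true_order.index(x) < true_order.index(y)
```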
Here “keep up” means maintaining a ranking that is a small number of inversions away from the correct underlying ranking; the number of inversions is called the Kendall tau distance. Anagnostopoulos et al. showed that it is possible to keep this distance small: if there are \(n\) items to rank, an algorithm that repeatedly runs quicksort can maintain a ranking within Kendall tau distance \(O(n\log\log n)\) of the underlying ranking. And that’s close to optimal, because the distance could be as big as quadratic, and because they also proved a lower bound showing that, no matter how you choose pairs to compare and how you update your ranking, the expected distance between your ranking and the underlying ranking will be \(\Omega(n)\). This is because the random positioning of changes in the underlying ranking means that, no matter how you look for them, you have too small a chance of seeing most of the very recent changes.
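For concreteness, the Kendall tau distance between two rankings of the same items is just a count of inversions, which can be computed by the brute-force sketch below (quadratic time; there are \(O(n\log n)\) methods based on merge sort, but they aren’t needed for illustration).

```python
def kendall_tau_distance(order_a, order_b):
    """Count the pairs of items whose relative order differs between the
    two rankings (each ranking is a list of the same items)."""
    position_in_b = {item: i for i, item in enumerate(order_b)}
    n = len(order_a)
    return sum(1
               for i in range(n)
               for j in range(i + 1, n)
               if position_in_b[order_a[i]] > position_in_b[order_a[j]])
```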
So now I have a new paper on this problem: “Optimally Sorting Evolving Data” (arXiv:1805.03350, with Besa, Devanny, Goodrich, and Johnson, to appear in ICALP). It gets rid of that last log-log factor and shows how to stay within distance \(O(n)\) of the underlying ranking, by being less clever. Instead of using a clever and fast sorting algorithm like quicksort, we use a simpler and slower one, a variation of bubble sort: scan through consecutive pairs of elements, comparing each pair, until you find two that are out of order; swap the out-of-order pair, and then scan backwards, comparing the smaller element with earlier elements, to find where to put it. Continue the outer scan until everything has been put into place, and then, once we think we’ve sorted everything, start over again from the beginning.
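In code, one pass of this procedure might look like the sketch below; this is my own paraphrase of the description above, not the pseudocode from the paper. Here `compare(x, y)` is an assumed interface that answers whether x currently belongs before y in the underlying ordering, with each call counting as one comparison step.

```python
def one_sorting_pass(ranking, compare):
    """One pass of the scan-and-insert sort sketched above.

    ranking: the algorithm's current ordering (a list), updated in place.
    compare(x, y): True if x currently belongs before y in the underlying
    ordering; each call counts as one comparison step.
    """
    for i in range(1, len(ranking)):
        j = i
        # The first comparison below is the outer scan's check of one
        # consecutive pair; if that pair is out of order, the remaining
        # iterations scan backwards, swapping the smaller element toward
        # the front until it reaches its place.
        while j > 0 and compare(ranking[j], ranking[j - 1]):
            ranking[j - 1], ranking[j] = ranking[j], ranking[j - 1]
            j -= 1
```

Repeating this pass over and over is the whole algorithm; a small simulation of that repetition appears below.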
How can this slow sorting algorithm be better than quicksort? The trick is that it’s only slow when it has a lot of work to do. Each comparison that it makes, beyond the initial scan, reduces the number of inversions by one. So unless the number of inversions is already very small, the algorithm will on average decrease the distance to the underlying ranking by close to one inversion per step. On the other hand, the random swapping process on the underlying ranking is generally acting to increase the distance, but it will do so by less than one unit per step (in expectation), because it has some probability of reversing its own swaps before the ranking algorithm finds them. So (unless the distance is already small) the ranking algorithm makes the distance smaller more quickly than the random swapping makes it larger, and it eventually becomes small. Making this reasoning formal and rigorous (and accounting for the fact that the two processes are not independent: random swaps can make the ranking algorithm do something other than correctly sorting its input) takes up the main part of the paper.
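The balance of these two drifts is easy to watch empirically. Under my own (hypothetical) choice of interleaving, one random adjacent swap immediately before each comparison, the sketches above can be wired together into a small simulation that reports the Kendall tau distance after each pass; the distance should drop quickly and then stay small rather than growing.

```python
def simulate(n=200, passes=30, seed=1):
    """Interleave the evolving underlying order with repeated sorting
    passes, one random adjacent swap per comparison, and report the
    Kendall tau distance after each pass."""
    random.seed(seed)
    true_order = list(range(n))
    ranking = list(range(n))
    random.shuffle(ranking)

    def compare(x, y):
        evolve_one_step(true_order)   # one random swap per comparison
        return query(true_order, x, y)

    for p in range(passes):
        one_sorting_pass(ranking, compare)
        print("pass", p, "distance", kendall_tau_distance(ranking, true_order))
```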
This is actually the second paper we’ve written on the subject, but the first one isn’t on arXiv yet, so I haven’t mentioned it here before.[1] It is “Quadratic time algorithms appear to be optimal for sorting evolving data” from ALENEX 2018. The proof technique of our new paper relies heavily on the number of steps in the random swap process being equal to the number of comparison steps in the ranking algorithm, and on the specifics of the sorting algorithm we chose, but the ALENEX paper shows experimentally that these assumptions shouldn’t be necessary. The same algorithm, or any of several other bubble-sort-like algorithms, works well even if the rate of random change is higher (plausibly, any constant factor times the rate of comparisons). Quicksort is not as good at low rates of change, but becomes better in comparison to the bubble sorts as the rate of change becomes higher. And (unsurprisingly) quicksort takes many fewer steps than bubble sort to converge to a stable distance from the underlying ranking. So for low rates of change, what seems likely to be best would be to start with a pass of quicksort to get close to the right ranking, and then switch to repeated bubble sort after that. But we don’t know how to prove most of this, so there’s plenty more still to do.
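For what it’s worth, that hybrid strategy is easy to sketch on top of the earlier pieces; this is just an illustration of the idea, not something we have analyzed. A single (randomized) quicksort run with the same evolving comparisons could replace the initial random shuffle in the simulation above, via `ranking = quicksort_once(ranking, compare)`, before handing off to the repeated scan-and-insert passes.

```python
def quicksort_once(items, compare):
    """One full randomized quicksort of the list, asking one comparison per
    item examined against each pivot; the underlying order may keep
    changing between those comparisons."""
    if len(items) <= 1:
        return list(items)
    pivot_index = random.randrange(len(items))
    pivot = items[pivot_index]
    rest = items[:pivot_index] + items[pivot_index + 1:]
    before, after = [], []
    for x in rest:
        (before if compare(x, pivot) else after).append(x)
    return quicksort_once(before, compare) + [pivot] + quicksort_once(after, compare)
```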
[1] Update 2018-05-15: The ALENEX paper is now up at arXiv:1805.05443.