# The nearest neighbor in an antimatroid

Franz Brandenburg, Andreas Gleißner, and Andreas Hofmeier have a 2013 paper that considers the following problem: given a finite partial order \( P \) and a permutation \( \pi \) of the same set, find the nearest neighbor to \( \pi \) among the linear extensions of \( P \). Here "nearest" means minimizing the Kendall tau distance (number of inversions) between \( \pi \) and the chosen linear extension. Or, to put it another way: you are given a directed acyclic graph whose vertices are tagged with distinct numbers, and you want to choose a topological ordering of the graph that minimizes the number of pairs that are out of numerical order. Among other results they showed that this is NP-hard, 2-approximable, and fixed-parameter tractable.

An idea I've been pushing (most explicitly in my recent *Order* paper) is that, when you have a question involving linear extensions of a partial order, you should try to generalize it to the basic words of an antimatroid. So now, let \( A \) be an antimatroid and \( \pi \) be a permutation on its elements. What is the nearest neighbor of \( \pi \) among the basic words of \( A \)? Can the fixed-parameter algorithm for partial orders be generalized to this problem?

Answer: Yes, no, and I don't know. Yes, the problem is still fixed-parameter tractable with a nice dependence on the parameter. No, not all FPT algorithms generalize directly. And I don't know, because I don't seem to have subscription access to the journal version of the BGH paper, the preprint version doesn't include the FPT algorithm, and I don't remember clearly enough what Franz told me about this a month or so ago, so I can't tell which one they're using.

But anyway, here's an easy FPT algorithm for the partial order version of the problem (that might or might not be the BGH algorithm). For any element \( x \), we can define a set \( L \) of the elements coming before \( x \) in the given permutation \( \pi \), and another set \( R \) of the elements coming after \( x \) in the permutation; \( L \), \( x \), and \( R \) form a three-way partition of the elements. We say that \( x \) is "safe" if there exists a linear extension of \( P \) that gives the same partition for \( x \). Otherwise, we call \( x \) "unsafe". Then in the linear extension nearest to \( \pi \), every safe element has the same position that it has in \( \pi \). For, if we had a linear extension \( \sigma \) for which this wasn't true, then the sequence \( (\sigma\cap L),x,(\sigma\cap R) \) (the restriction of \( \sigma \) to \( L \), then \( x \), then the restriction of \( \sigma \) to \( R \)) would also be a linear extension and would have fewer inversions. On the other hand, every unsafe element participates in at least one inversion, and each inversion involves only two elements, so if the optimal solution value is \( k \) then there can be at most \( 2k \) unsafe elements. Therefore, we can restrict both \( \pi \) and \( P \) to the subset of unsafe elements, solve the problem on the resulting linear-sized kernel, and then put back the safe elements in their places, giving an FPT algorithm.
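
To make the safety test concrete, here's a small Python sketch (my own illustration, with my own function names, not code from the BGH paper). It uses the observation that \( x \) is unsafe exactly when some order relation \( a <_P b \) is violated by \( \pi \) and \( x \)'s position lies in the interval that the violated relation spans:

```python
def safe_elements(elements, relations, pi):
    """Return the safe elements: x is safe iff some linear extension of P
    puts exactly the elements preceding x in pi before x.  Equivalently,
    no relation a < b of P has pos(b) <= pos(x) <= pos(a) in pi."""
    pos = {x: i for i, x in enumerate(pi)}
    # transitive closure of the "must come before" relation (Warshall)
    less = set(relations)
    for m in elements:
        for a in elements:
            for b in elements:
                if (a, m) in less and (m, b) in less:
                    less.add((a, b))
    unsafe = set()
    for a, b in less:
        if pos[a] > pos[b]:  # a relation that pi itself violates
            # everything positioned between b and a (inclusive) is unsafe
            unsafe.update(pi[pos[b]:pos[a] + 1])
    return [x for x in elements if x not in unsafe]

# toy example: order a < d, permutation b, d, a, c
print(safe_elements("abcd", [("a", "d")], "bdac"))  # ['b', 'c']
```

Here the one relation \( a < d \) is violated by the permutation, so \( d \) and \( a \) (and anything between them) form the kernel, while \( b \) and \( c \) keep their positions.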

You can define safe elements in the same way for antimatroids but unfortunately they don't necessarily go where they should. As an extreme example, consider the antimatroid on the symbols abcdefghijklmnopqrstuvwxyz\( * \) whose basic words are strings of distinct symbols that are alphabetical up to the star and then arbitrary after it, and the permutation \( \pi ={} \)zyxwvutsrqponmlkjihgfedcba\( * \) that wants the symbols in backwards order but keeps the star at the end. The star is safe, but if we put it in its safe place then the only possible basic word is abcdefghijklmnopqrstuvwxyz\( * \) with 325 inversions. Instead, putting it first gives us the basic word \( * \)zyxwvutsrqponmlkjihgfedcba with only 26 inversions. So the same kernelization doesn't work. It does work to restrict \( \pi \) and \( P \) to the elements whose positions in \( \pi \) are within \( k \) steps of an unsafe element, but that gives a bigger kernel (quadratic rather than linear).
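
The two inversion counts in that example are easy to double-check with a direct quadratic count of the Kendall tau distance (a quick sketch, not optimized):

```python
def inversions(pi, word):
    # Kendall tau distance: pairs that word and pi order differently
    pos = {x: i for i, x in enumerate(pi)}
    return sum(pos[word[i]] > pos[word[j]]
               for i in range(len(word))
               for j in range(i + 1, len(word)))

letters = "abcdefghijklmnopqrstuvwxyz"
pi = letters[::-1] + "*"                    # zyx...a*
print(inversions(pi, letters + "*"))        # star in its safe place: 325
print(inversions(pi, "*" + letters[::-1]))  # star first: 26
```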

Instead, let's try choosing the elements of the basic word one at a time. At each step, if the element we choose comes later in \( \pi \) than \( i \) other elements that we haven't chosen yet, it will necessarily cause \( i \) inversions with those other elements, and the total number of inversions of the word we're finding is just the sum of these numbers \( i \). So when the number of inversions is small, in most steps we should choose \( i = 0 \), and in all steps we should choose small values of \( i \). In fact, whenever it's possible to choose \( i = 0 \), it's always necessary to do so, because any basic word consistent with the choices we've already made that doesn't make this choice could be made better by moving the \( i = 0 \) element up to the next position.

So this leads to the following algorithm for finding a basic word with distance \( k \): at each step where we can choose \( i = 0 \), do so. And at each step where the antimatroid doesn't allow the \( i = 0 \) choice, instead recursively try all possible choices of \( i \) from \( 1 \) to \( k \) that are allowed by the antimatroid, but then subtract the value of \( i \) we chose from \( k \) because it counts against the number of inversions we have left to find.
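
Here is a Python sketch of that branching search. The oracle interface `can_add(prefix, x)` — may element \( x \) extend the feasible word `prefix`? — is my own abstraction of the antimatroid, and the star example above serves as a test case (with "alphabetical up to the star" read strictly, so that the letters before the star must be \( a, b, c, \dots \) in order):

```python
def nearest_basic_word(pi, can_add, k):
    """Search for a basic word within k inversions of pi.
    Returns (inversions, word), or None if no word is within budget k."""
    n = len(pi)

    def search(prefix, budget):
        while len(prefix) < n:
            remaining = [x for x in pi if x not in prefix]
            if can_add(prefix, remaining[0]):
                # the i = 0 choice is forced whenever it is allowed
                prefix += (remaining[0],)
                continue
            # branch: taking remaining[i] costs i inversions, one for
            # each not-yet-chosen element it jumps over in pi
            best = None
            for i in range(1, min(budget, len(remaining) - 1) + 1):
                if can_add(prefix, remaining[i]):
                    result = search(prefix + (remaining[i],), budget - i)
                    if result and (best is None or result[0] + i < best[0]):
                        best = (result[0] + i, result[1])
            return best
        return (0, prefix)

    return search((), k)

# the star antimatroid from the example: before the star the letters
# must appear in alphabetical order a, b, c, ...; after it, anything goes
def star_can_add(prefix, x):
    if "*" in prefix or x == "*":
        return True
    return x == chr(ord("a") + len(prefix))

letters = "abcdefghijklmnopqrstuvwxyz"
cost, word = nearest_basic_word(tuple(letters[::-1] + "*"), star_can_add, 26)
print(cost, "".join(word))  # 26 *zyxwvutsrqponmlkjihgfedcba
```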

Each leaf of the recursion takes linear time for all its \( i = 0 \) choices, so the main factor in the analysis is how many recursive branches there are. This number is one for \( k = 0 \) (because we can never branch), and it's also one for \( k = 1 \) (because at a branch point we can only choose \( i = 1 \) after which we are in the \( k = 0 \) case). For each larger value of \( k \), the first time we branch we will be given a choice of all possible smaller values of \( k \), and the total number of branches in the recursion will be the sum of the numbers of branches for these smaller values. That is, if \( R(k) \) denotes the number of recursive branches for parameter \( k \), it obeys the recurrence \[ \begin{aligned} R(0) &= R(1) = 1,\\ R(k) &= \sum_{i\lt k} R(i), \end{aligned} \] which solves to \( R(k)=2^{k-1} \) for \( k\ge 1 \). So this algorithm is still fixed-parameter tractable, with only single-exponential dependence on \( k \). If we don't know \( k \) ahead of time, we can run the whole algorithm for \( k = 1,2,3,\dots \) until it succeeds; because the bounds for successive values of \( k \) grow geometrically, the total time is dominated by the final run and the overall time bound stays the same.
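
The closed form is easy to check directly against the recurrence:

```python
def R(k):
    # number of recursive branches as a function of the parameter k
    return 1 if k <= 1 else sum(R(i) for i in range(k))

print([R(k) for k in range(8)])  # [1, 1, 2, 4, 8, 16, 32, 64]
```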

Given the existence of this simple \( O(2^k nI) \) algorithm (where \( I \) is the time for testing whether the antimatroid allows an element to be added in the current position), does it make sense to worry about a kernelization, which after all doesn't completely solve the problem, but only reduces it to a smaller one? Yes. The reason is that if you kernelize (using the \( O(k^2) \)-size kernel that restricts to elements that are within \( k \) steps of an unsafe element) before recursing, you separate out the exponential and linear parts, and get something more like \( O(nI + 2^k k^2 I) \). But the difference between quadratic and linear kernels is swamped by the exponential part of the time bound, so rather than looking for smaller kernels it would be better to look for a more clever recursion with less branching.

The same authors have another paper on the Spearman footrule distance (how far each element is out of its correct position, summed over all the elements), but the kernelization in that paper looks a little trickier and I haven't thought carefully about whether the same approach might work for the antimatroid version of that problem as well.