Some progress on the problem I mentioned earlier, of density near the origin in central limits. I can now handle another fairly broad case, that of centrally symmetric distributions. In fact, for these distributions a stronger statement about density near the origin can be made, applying to balls of every fixed radius rather than only to sufficiently large ones:

Theorem: Let $$F$$ be a centrally symmetric distribution with bounded support on $$\mathbb{R}^d$$, and let $$X_i$$ be independent draws from $$F.$$ Then for any $$K\gt 0$$ and even $$n,$$

$P\left[\left|\sum_{0\le i\lt n} X_i \right| \le K\right] = \Omega(n^{-d/2}).$
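As a quick sanity check (not part of the argument), here is a Monte Carlo sketch of the bound, using the uniform distribution on $$[-1,1]^2$$ as an illustrative choice of centrally symmetric bounded $$F$$; the function name and parameters are mine, not from the original problem:

```python
import random

def hit_probability(n, K, d=2, trials=10000):
    """Estimate P[|sum of n draws| <= K] for X_i uniform on [-1,1]^d,
    an illustrative centrally symmetric bounded distribution."""
    hits = 0
    for _ in range(trials):
        s = [0.0] * d
        for _ in range(n):
            for k in range(d):
                s[k] += random.uniform(-1.0, 1.0)
        if sum(c * c for c in s) <= K * K:  # Euclidean norm at most K
            hits += 1
    return hits / trials

# The theorem predicts hit_probability(n, K) * n**(d/2) stays bounded
# away from zero as n grows (here d = 2, so the scaling factor is n):
for n in (10, 40, 160):
    p = hit_probability(n, K=1.0)
    print(n, p, p * n)
```

Empirically the scaled probability settles near a positive constant, consistent with the $$\Omega(n^{-d/2})$$ bound.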

Proof: Consider $Y = \sum_{0\le i\lt n/2} X_i$ and $Z = \sum_{n/2\le i\lt n} -X_i.$ $$Y$$ and $$Z$$ are independent, (by central symmetry) identically distributed, and satisfy $$Y - Z = \sum_{0\le i\lt n} X_i.$$ By an appropriate multidimensional central limit theorem (applied to the affine hull of the support of $$F$$), with constant probability both $$Y$$ and $$Z$$ lie in a ball of radius $$O(\sqrt{n})$$ centered at the origin. Partition this ball into $$O(n^{d/2})$$ subsets $$S_j$$, each of diameter at most $$K$$, and let $$p_j$$ be the probability that $$Y$$ (equivalently $$Z$$) belongs to $$S_j$$. If $$Y$$ and $$Z$$ belong to the same $$S_j$$, then they are within distance $$K$$ of each other, so $P\left[\left|\sum_{0\le i\lt n} X_i \right| \le K\right] = P[|Y-Z|\le K] \ge \sum_j p_j^2.$ But a sum of squares of nonnegative numbers with a fixed total is minimized when all of them are equal; since $$\sum_j p_j$$ is bounded below by a constant while the number of terms is $$O(n^{d/2})$$,

$\sum_j p_j^2 = \Omega(n^{-d/2}).$
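Spelling out that last step: it is an instance of the Cauchy–Schwarz inequality (equivalently, convexity of $$x\mapsto x^2$$). With $$m = O(n^{d/2})$$ parts and $$\sum_j p_j \ge c$$ for some constant $$c \gt 0$$ (the constant probability supplied by the central limit theorem),

```latex
\sum_j p_j^2 \;\ge\; \frac{1}{m}\left(\sum_j p_j\right)^2
             \;\ge\; \frac{c^2}{m}
             \;=\; \Omega\!\left(n^{-d/2}\right).
```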

As a corollary, the original statement of the problem (that there exists $$K$$ such that the probability of being within distance $$K$$ of the origin is $$\Omega(n^{-d/2})$$ for all $$n$$, not just all even $$n$$) holds for these distributions as well. If $$n$$ is even, this follows directly from the theorem above. If $$n$$ is odd, the triangle inequality gives $\left|\sum_{0\le i\lt n} X_i \right| \le \left|\sum_{0\le i\lt n-1} X_i \right| + |X_{n-1}|,$ so applying the theorem to the first $$n-1$$ draws with radius $$K - \max|X_i|$$ shows that we can choose $$K$$ to be any constant greater than the maximum possible value of $$|X_i|$$. The example of a simple random walk on an integer lattice shows that this choice of $$K$$ is the smallest possible: after an odd number of unit steps the walk is always at distance at least $$1 = \max|X_i|$$ from the origin, by parity.
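The parity obstruction for the one-dimensional simple random walk can be checked by brute force; the helper below is my own illustrative code, exhaustively enumerating all sign patterns for small $$n$$:

```python
from itertools import product

# Simple random walk on the integers: each X_i is +1 or -1, so
# max |X_i| = 1. For odd n the endpoint sum is odd, hence |sum| >= 1:
# the walk is never within distance K of the origin for any K < 1.
def min_distance(n):
    """Smallest |sum| over all 2^n sign choices (exhaustive; small n only)."""
    return min(abs(sum(signs)) for signs in product((-1, 1), repeat=n))

for n in (3, 5, 7):
    print(n, min_distance(n))  # always 1 for odd n
```

For even $$n$$ the minimum is $$0$$, which is why the even case needs no lower bound on $$K$$.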

ETA: The same argument seems to apply more generally to nonconstant radii, showing that $P\left[\left|\sum_{0\le i\lt n} X_i \right| \le r\right] = \Omega(r^d n^{-d/2})$ whenever $$r = O(\sqrt{n})$$ and either $$n$$ is even or $$r\gt (1+\epsilon)E[|X_i|]$$ for some constant $$\epsilon\gt 0$$.
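The claimed $$r$$-dependence can also be spot-checked numerically. The sketch below (my own, again using uniform draws as an illustrative $$F$$, here in $$d=1$$) estimates the probability at several $$(n, r)$$ pairs; the conjectured bound says the scaled ratio $$p\sqrt{n}/r$$ should stay bounded below:

```python
import math
import random

def tail_probability(n, r, trials=20000):
    """Estimate P[|sum of n uniform(-1,1) draws| <= r] (d = 1)."""
    hits = 0
    for _ in range(trials):
        s = sum(random.uniform(-1.0, 1.0) for _ in range(n))
        if abs(s) <= r:
            hits += 1
    return hits / trials

# The ETA predicts p = Omega(r * n**(-1/2)) whenever r = O(sqrt(n)),
# so p * sqrt(n) / r should stay bounded away from zero as r and n vary:
for n, r in ((100, 0.5), (100, 2.0), (400, 1.0)):
    p = tail_probability(n, r)
    print(n, r, round(p * math.sqrt(n) / r, 2))
```

In these runs the ratio is roughly constant across the different radii, as the generalized bound suggests.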