BARC Talk by Samuel McCauley

Online List Labeling with Predictions

Abstract:
A growing line of work shows how learned predictions can be used to break through worst-case barriers to improve the running time of an algorithm.

However, incorporating predictions into data structures with strong theoretical guarantees remains underdeveloped. In this talk, we take a step in this direction by showing that predictions can be leveraged in the fundamental online list labeling problem.

In this problem, n items arrive over time and must be stored in sorted order in an array of size Theta(n). The array slot of an element is its label, and the goal is to maintain sorted order while minimizing the total number of elements moved (i.e., relabeled). We design a new list labeling data structure and bound its performance in two models. In the worst-case learning-augmented model, we give guarantees in terms of the error in the predictions.
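
To make the cost model concrete, here is a minimal, deliberately naive sketch in Python (the class name and the whole-array rebalance strategy are illustrative assumptions, not the data structure from the talk): elements occupy slots of an array of size Theta(n), an element's slot is its label, and each time an already-placed element is given a new slot we pay one move.

```python
# Minimal sketch of the online list labeling cost model (assumed names; not the
# data structure from the talk). An element's array slot is its label, and every
# relabel of an already-placed element costs one "move".

class NaiveListLabeling:
    def __init__(self, capacity: int):
        self.m = 2 * capacity   # array of size Theta(n)
        self.labels = {}        # element -> current label (array slot)
        self.moves = 0          # total number of relabels paid so far

    def insert(self, x) -> None:
        # Linear scans keep the sketch short; a real structure searches faster.
        smaller = [lab for e, lab in self.labels.items() if e < x]
        larger = [lab for e, lab in self.labels.items() if e > x]
        lo = max(smaller) if smaller else -1
        hi = min(larger) if larger else self.m
        if hi - lo > 1:
            # A free slot exists between predecessor and successor: no relabels.
            self.labels[x] = (lo + hi) // 2
        else:
            # No room: this naive strategy relabels everything, spreading all
            # elements evenly and paying one move per relabeled element.
            items = sorted(list(self.labels) + [x])
            gap = self.m // (len(items) + 1)
            for i, e in enumerate(items, start=1):
                new_label = i * gap
                if e in self.labels and self.labels[e] != new_label:
                    self.moves += 1
                self.labels[e] = new_label


if __name__ == "__main__":
    ds = NaiveListLabeling(capacity=500)
    for v in range(500):        # sorted arrivals: a hard case for labeling
        ds.insert(v)
    print("total relabels:", ds.moves)
```

The total number of moves printed at the end is exactly the quantity the problem asks to minimize.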

Our data structure provides strong guarantees: it is optimal for any prediction error and matches the best-known worst-case bound even when the predictions are entirely erroneous. We also consider a stochastic error model and bound the performance in terms of the expectation and variance of the error.
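
Purely to illustrate how predictions might enter the picture, the hypothetical variant below builds on the sketch above: it tries the slot suggested by a predicted rank and falls back to the unaided strategy otherwise. The fallback rule and the error measure are assumptions for illustration only, not the construction or analysis from the talk.

```python
# Hypothetical prediction-guided placement, extending NaiveListLabeling above.
# Names, fallback rule, and error measure are illustrative assumptions.

class PredictionGuidedLabeling(NaiveListLabeling):
    def insert_with_prediction(self, x, predicted_rank: int, n_total: int) -> None:
        # Tentative slot: spread predicted ranks evenly over the array.
        slot = (predicted_rank * self.m) // max(n_total, 1)
        # Accept the predicted slot only if it is free and preserves sorted order.
        free = slot not in self.labels.values()
        below = all(lab < slot for e, lab in self.labels.items() if e < x)
        above = all(lab > slot for e, lab in self.labels.items() if e > x)
        if free and below and above:
            self.labels[x] = slot   # accurate prediction: no relabels needed
        else:
            self.insert(x)          # inaccurate prediction: fall back, possibly relabeling


def rank_error(predicted_rank: int, true_rank: int) -> int:
    # One natural way to measure error: distance between predicted and true rank.
    return abs(predicted_rank - true_rank)
```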

Finally, the theoretical results are demonstrated empirically. In particular, we show that our data structure has strong performance on real temporal data sets where predictions are constructed from elements that arrived in the past. This is joint work with Benjamin Moseley, Aidin Niaparast, and Shikha Singh.