10 October 2019

Seven BARC papers accepted for SODA’20


Basic Algorithms Research Copenhagen (BARC) is represented with seven papers at the 2020 SODA conference, to be held in Salt Lake City, Utah, on 5–8 January next year.

The ACM-SIAM Symposium on Discrete Algorithms (SODA'20) focuses on research topics related to efficient algorithms and data structures for discrete problems, and it has accepted seven papers authored or co-authored by members of the BARC team:

Fully-Dynamic All-Pairs Shortest Paths: Improved Worst-Case Time and Space Bounds
Authors: Maximilian Probst Gutenberg, Christian Wulff-Nilsen

The paper gives a polynomial improvement over the best deterministic worst-case time bounds for maintaining all-pairs shortest paths in a fully-dynamic directed graph in both the unweighted and the edge-weighted setting. In addition, it provides Las Vegas algorithms with worst-case bounds matching state-of-the-art for Monte Carlo algorithms.


Decremental SSSP in Weighted Digraphs: Faster and Against an Adaptive Adversary
Authors: Maximilian Probst Gutenberg, Christian Wulff-Nilsen

The paper gives improved bounds for maintaining approximate single-source shortest paths in a directed graph undergoing edge deletions. It provides the first improvement over the classical algorithm of Even and Shiloach against an adaptive adversary and improves on the best algorithms against an oblivious adversary.


Deterministic Algorithms for Decremental Approximate Shortest Paths: Faster and Simpler
Authors: Maximilian Probst Gutenberg, Christian Wulff-Nilsen

This paper studies the decremental approximate single-source shortest path problem in an undirected graph. A new algorithm is presented which improves on state-of-the-art in the deterministic setting. Applications are given for dynamic maintenance of sparse emulators and for decremental approximate all-pairs shortest paths.


Faster Algorithms for Edge Connectivity via Random 2-Out Contractions
Authors: Mohsen Ghaffari, Krzysztof Nowicki, Mikkel Thorup

We provide a simple new randomized contraction approach to the global minimum cut problem for simple undirected graphs. The contractions exploit 2-out edge sampling from each vertex rather than the standard uniform edge sampling. We demonstrate the power of our new approach by obtaining better algorithms for sequential, distributed, and parallel models of computation. 
https://arxiv.org/abs/1909.00844
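The sampling step can be illustrated in a few lines. The sketch below is a simplified, illustrative rendering of one round of random 2-out contraction (not the paper's actual implementation): every vertex samples two of its incident edges at random, and the connected components spanned by the sampled edges are contracted into super-vertices using a union-find structure. All names here are hypothetical.

```python
import random
from collections import defaultdict

def two_out_contraction(n, edges, seed=None):
    """One round of random 2-out contraction on a simple undirected graph
    with vertices 0..n-1. Returns the number of super-vertices and the
    edges that survive contraction (those crossing super-vertices)."""
    rng = random.Random(seed)

    # Build incidence lists.
    incident = defaultdict(list)
    for u, v in edges:
        incident[u].append((u, v))
        incident[v].append((u, v))

    # Union-find with path halving.
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    # Each vertex samples 2 incident edges (with replacement) and the
    # sampled edges are contracted.
    for v in range(n):
        if incident[v]:
            for u, w in rng.choices(incident[v], k=2):
                parent[find(u)] = find(w)

    # Keep only edges whose endpoints lie in different super-vertices.
    contracted = [(find(u), find(v)) for u, v in edges if find(u) != find(v)]
    num_components = len({find(v) for v in range(n)})
    return num_components, contracted
```

The paper's insight is that this vertex-centric sampling shrinks the graph much more aggressively than uniform edge sampling while still preserving any non-trivial minimum cut with good probability, which is what yields the faster sequential, distributed, and parallel algorithms.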


Oblivious Sketching of High-Degree Polynomial Kernels
Authors: Thomas Dybdahl Ahle, Michael Kapralov, Jakob Bæk Tejs Knudsen, Rasmus Pagh, Ameya Velingker, David P. Woodruff, Amir Zandieh

The paper provides a general method for applying sketching solutions developed in numerical linear algebra over the past decade to a tensoring of data points without forming the tensoring explicitly. This leads to the first oblivious sketch for the polynomial kernel with a target dimension that is only polynomially dependent on the degree of the kernel function, as well as the first oblivious sketch for the Gaussian kernel on bounded datasets that does not suffer from an exponential dependence on the dimensionality of input data points.


Worst-case Polylog Incremental SPQR-trees: Embeddings, Planarity, and Triconnectivity
Authors: Jacob Holm, Eva Rotenberg

We show that every labelled planar graph G can be assigned a canonical embedding φ(G), such that for any planar G' that differs from G by the insertion or deletion of one edge, the number of local changes to the combinatorial embedding needed to get from φ(G) to φ(G') is O(log n), and we give a matching lower bound. As a secondary result, we obtain deterministic data structures for incremental planarity testing, incremental planar embedding, and incremental triconnectivity, each with worst-case O(log³ n) update and query time, answering an open question by La Poutré and Westbrook from 1998.


Approximately counting and sampling small witnesses using a colourful decision oracle
Authors: Holger Dell, John Lapinskas, Kitty Meeks

The paper gives improved "black box" results for turning algorithms which decide whether or not a witness exists into algorithms to approximately count the number of witnesses, or to sample from the set of witnesses approximately uniformly, with essentially the same running time. This has implications in parameterized and fine-grained complexity.