BARC/EADS Talk by Michael Kapralov

Title

An Adaptive Sublinear-Time Block Sparse Fourier Transform

Abstract

The problem of approximately computing a small number $k$ of dominant Fourier coefficients of a vector of length $n$ quickly, using few samples in the time domain, is known as the Sparse Fourier Transform (sparse FFT) problem. A long line of work on the sparse FFT has resulted in algorithms with $O(k\log n\log(n/k))$ runtime and $O(k\log n)$ sample complexity. These bounds are achieved by non-adaptive algorithms, and the latter sample complexity is essentially the best possible under the sparsity assumption alone: it is known that even adaptive algorithms must use $\Omega((k\log(n/k))/\log\log n)$ samples. By {\em adaptive} we mean an algorithm that can exploit previous samples to guide the selection of further samples.
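
As a point of reference, here is a minimal Python/NumPy sketch (not from the talk; the function name and parameter values are illustrative) of the naive baseline the sparse FFT improves on: read all $n$ samples, take a full FFT, and keep the $k$ largest coefficients.

    import numpy as np

    def top_k_fourier(x, k):
        """Naive baseline: uses all n samples and O(n log n) time.
        Sparse FFT algorithms aim for roughly k * polylog(n) samples
        and sublinear runtime instead."""
        xhat = np.fft.fft(x)
        top = np.argsort(np.abs(xhat))[-k:]  # indices of the k dominant frequencies
        approx = np.zeros_like(xhat)
        approx[top] = xhat[top]              # best k-sparse approximation of the spectrum
        return approx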

In this work we revisit the sparse FFT problem with the added twist that the sparse coefficients approximately obey a $(k_0, k_1)$-block sparse model. In this model, signal frequencies are clustered in $k_0$ intervals of width $k_1$ in Fourier space, where $k = k_0 k_1$ is the total sparsity. Signals arising in applications are often well approximated by this model with $k_0 \ll k$.
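
To make the model concrete, the following Python/NumPy sketch (an illustration, not code from the paper; the parameter values are arbitrary) constructs a signal whose spectrum is exactly $(k_0, k_1)$-block sparse.

    import numpy as np

    # Illustrative parameters: k0 blocks of width k1, total sparsity k = k0 * k1.
    n, k0, k1 = 1024, 4, 8
    rng = np.random.default_rng(0)

    # Place k0 non-overlapping blocks of k1 consecutive frequencies each.
    starts = rng.choice(n // k1, size=k0, replace=False) * k1
    spectrum = np.zeros(n, dtype=complex)
    for s in starts:
        spectrum[s:s + k1] = rng.standard_normal(k1) + 1j * rng.standard_normal(k1)

    x = np.fft.ifft(spectrum)  # time-domain signal a sparse FFT would sample from
    assert np.count_nonzero(spectrum) == k0 * k1  # total sparsity k = k0 * k1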

We give the first sparse FFT algorithm for $(k_0, k_1)$-block sparse signals, with sample complexity $O^*(k_0 k_1 + k_0\log(1+k_0)\log n)$ at constant signal-to-noise ratio and sublinear runtime. A similar sample complexity was previously achieved in work on {\em model-based compressive sensing} using random Gaussian measurements, but with $\Omega(n)$ runtime. To the best of our knowledge, ours is the first sublinear-time algorithm for model-based compressive sensing, and the first sparse FFT result that goes below the $O(k\log n)$ sample complexity bound.

Joint work with Volkan Cevher, Jonathan Scarlett and Amir Zandieh.

Bio

Michael Kapralov is an Assistant Professor in the School of Computer and Communication Sciences at EPFL, and part of the Theory Group at EPFL. He completed his PhD at Stanford iCME, where he was advised by Ashish Goel. After graduating from Stanford he spent two years as a postdoc at the Theory of Computation Group at MIT CSAIL, working with Piotr Indyk, and then a year at the IBM T. J. Watson Research Center as a Herman Goldstine Postdoctoral Fellow.

He is broadly interested in theoretical computer science, with an emphasis on theoretical foundations of big data analysis. Most of his algorithmic work is in sublinear algorithms, where specific directions include streaming, sketching, sparse recovery and Fourier sampling.