2015-2016

2016/05/25: Eric Blais, "A Polynomial Lower Bound for Testing Monotonicity"

Eric Blais (Waterloo)

Abstract:

One of the earliest and most fundamental problems in property testing is monotonicity testing: how many queries to a Boolean function $f:\{0,1\}^n \to \{0,1\}$ do we need to distinguish between the case where $f$ is monotone and the one where $f$ is far from monotone? Extensive study of this problem has revealed surprising connections to other topics in the analysis of Boolean functions, including directed isoperimetric inequalities and invariance principles for linear threshold functions.
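As a quick illustration of the query model (and not of the lower-bound construction from the talk), here is a minimal Python sketch of the classical "edge tester", which queries the function on random edges of the hypercube and rejects if it finds a violated edge; the function names and the number of queries are illustrative.

    import random

    def edge_tester(f, n, num_queries=1000):
        """Toy monotonicity tester: sample random hypercube edges (x with bit i set
        to 0, and x with bit i set to 1) and reject if f decreases along any edge."""
        for _ in range(num_queries):
            x = [random.randint(0, 1) for _ in range(n)]
            i = random.randrange(n)
            lo, hi = x[:], x[:]
            lo[i], hi[i] = 0, 1
            if f(tuple(lo)) > f(tuple(hi)):
                return "reject"   # a violated edge certifies non-monotonicity
        return "accept"           # no violation found among the sampled edges

    # Majority is monotone, so the tester always accepts it.
    majority = lambda x: int(sum(x) > len(x) / 2)
    print(edge_tester(majority, n=15))

A tester of this form never rejects a monotone function; the interesting question, addressed by the line of work above, is how many queries are needed to reject functions that are far from monotone.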

Yet, despite all this effort, until recently there was still an exponential gap between the best upper and lower bounds for the number of queries that general (adaptive) testers need to test monotonicity.

In this talk, we will see how noise sensitivity and Talagrand’s random DNF functions can be used to significantly improve the lower bound for this problem. As a result, we will see that a polynomial number of queries is both sufficient and necessary to test monotonicity. We will also discuss some related results.

Joint work with Aleksandrs Belovs.


2016/05/11: Ankit Garg, "Operator scaling and applications to non-commutative rational identity testing"

Ankit Garg (Princeton)

Abstract:

Sinkhorn in 1964 studied the problem of matrix scaling: given a non-negative matrix A, find diagonal matrices D_1, D_2 with positive entries (if they exist) such that D_1 A D_2 is doubly stochastic. He proved that a natural iterative algorithm finds this scaling whenever the entries of A are strictly positive. Matrix scaling has since found applications in various fields such as statistics, numerical analysis, operations research, combinatorial geometry and approximating the permanent.

Gurvits in 2004 found a striking generalization of matrix scaling, called operator scaling, where one wants to “scale” completely positive operators to make them “doubly stochastic.” Gurvits proved that a natural iterative algorithm converges and, in fact, converges in a polynomial number of iterations in some special cases. We analyze Gurvits’ algorithm completely and prove that it always converges in a polynomial number of iterations. Our analysis is simple, providing explicit bounds on the “capacity” measure of completely positive operators introduced by Gurvits.

Using this we obtain the first deterministic polynomial-time algorithm for computing the rank of a symbolic matrix in non-commuting variables. Previous algorithms required exponential time (with or without randomness). This problem (of computing the rank of a symbolic matrix) has a rich history and has appeared in various forms in a remarkable number of mathematical and computational areas. Besides non-commutative algebra, it also has connections to computational complexity and de-randomization, commutative invariant theory and quantum information theory.
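For intuition on the classical matrix-scaling subroutine mentioned at the start of the abstract (a toy illustration, not the operator-scaling algorithm analyzed in the talk), here is a minimal Python sketch of Sinkhorn's alternating row/column normalization; the matrix and the iteration count are illustrative.

    import numpy as np

    def sinkhorn_scale(A, num_iters=500):
        """Alternately rescale rows and columns of a positive matrix A.
        Returns diagonal D1, D2 with D1 @ A @ D2 approximately doubly stochastic."""
        A = np.asarray(A, dtype=float)
        r = np.ones(A.shape[0])    # diagonal entries of D1
        c = np.ones(A.shape[1])    # diagonal entries of D2
        for _ in range(num_iters):
            r = 1.0 / (A @ c)      # make every row sum of diag(r) A diag(c) equal 1
            c = 1.0 / (A.T @ r)    # then make every column sum equal 1
        return np.diag(r), np.diag(c)

    A = np.array([[2.0, 1.0], [1.0, 3.0]])
    D1, D2 = sinkhorn_scale(A)
    B = D1 @ A @ D2
    print(B.sum(axis=0), B.sum(axis=1))   # both close to [1, 1]

Operator scaling replaces the non-negative matrix by a completely positive operator and the two diagonal scalings by invertible linear maps; the talk's contribution is a polynomial bound on the number of iterations of the analogous iterative algorithm.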

This is based on joint work with Leonid Gurvits, Rafael Oliveira and Avi Wigderson.

2016/04/27: Omer Reingold, "Constant-round Interactive-proofs for Delegating Computation"

Omer Reingold (Samsung Research)

Abstract:

Interactive proofs, introduced by Goldwasser, Micali, and Rackoff, have had a dramatic impact on Complexity Theory and Cryptography. In particular, the celebrated IP=PSPACE Theorem [LFKN92,Shamir92] allows an all-powerful but untrusted prover to convince a polynomial-time verifier of the validity of extremely complicated statements (as long as they can be evaluated using polynomial space). The interactive proof system designed for this purpose requires a large number of communication rounds and heavy computation for generating the proof.
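As background for the comparison being drawn here, the following toy Python sketch shows the sum-check protocol at the heart of [LFKN92, Shamir92]: in each round the (honest) prover sends a low-degree univariate restriction of a polynomial and the verifier spot-checks it at a random point. The field, the example polynomial and the interface are all illustrative, and the constant-round protocols of this talk are not reproduced here.

    import random

    P = 2**31 - 1   # prime modulus for the toy protocol

    def lagrange_eval(ys, x):
        """Evaluate at x the unique polynomial of degree < len(ys)
        that takes the values ys at the points 0, 1, ..., len(ys)-1 (mod P)."""
        total = 0
        for i in range(len(ys)):
            num, den = 1, 1
            for j in range(len(ys)):
                if j != i:
                    num = num * (x - j) % P
                    den = den * (i - j) % P
            total = (total + ys[i] * num * pow(den, P - 2, P)) % P
        return total

    def sumcheck(g, n, deg):
        """Honest-prover sum-check: verify the sum of g over {0,1}^n (mod P).
        g takes n field elements; deg bounds g's degree in each variable."""
        claim = sum(g(*[int(b) for b in format(m, f'0{n}b')]) for m in range(2**n)) % P
        fixed = []                              # random challenges chosen so far
        for i in range(n):
            # Prover: send g_i(t) = sum of g over the remaining Boolean variables, t = 0..deg.
            msg = []
            rest = n - i - 1
            for t in range(deg + 1):
                s = 0
                for m in range(2**rest):
                    suffix = [int(b) for b in format(m, f'0{rest}b')] if rest else []
                    s = (s + g(*fixed, t, *suffix)) % P
                msg.append(s)
            # Verifier: check consistency with the running claim, then send a random challenge.
            assert (msg[0] + msg[1]) % P == claim
            r = random.randrange(P)
            claim = lagrange_eval(msg, r)
            fixed.append(r)
        assert g(*fixed) % P == claim           # one final evaluation of g itself
        return True

    # Example: g(x1, x2, x3) = x1*x2 + 3*x3, with individual degrees at most 2.
    print(sumcheck(lambda x1, x2, x3: x1 * x2 + 3 * x3, n=3, deg=2))

The number of rounds here grows with the number of variables, which is one concrete sense in which such protocols use many rounds; the goal of the talk is to bring the round complexity down to a constant for bounded-space computations.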

We introduce new interactive proofs that are very efficient in both the number of rounds and computation time, and that are particularly well suited for delegating bounded-space computations (e.g., in the context of cloud computing). Our main result is that for every statement that can be evaluated in polynomial time and bounded-polynomial space there exists an interactive proof that meets strict efficiency requirements; in particular, the protocol uses only a constant number of rounds.

Joint work with Guy Rothblum and Ron Rothblum.


2016/03/30: Ran Raz, "Fast Learning Requires Good Memory: A Time-Space Lower Bound for Parity Learning"

Ran Raz (Weizmann Institute)

Abstract:

We prove that any algorithm for learning parities requires either a memory of quadratic size or an exponential number of samples. This proves a recent conjecture of Steinhardt, Valiant and Wager and shows that for some learning problems a large storage space is crucial.

More formally, in the problem of parity learning, an unknown string $x \in \{0,1\}^n$ is chosen uniformly at random. A learner tries to learn $x$ from a stream of samples $(a_1,b_1), (a_2,b_2), \ldots$, where each $a_t$ is uniformly distributed over $\{0,1\}^n$ and $b_t$ is the inner product of $a_t$ and $x$, modulo 2. We show that any algorithm for parity learning that uses less than $n^2/25$ bits of memory requires an exponential number of samples.
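To make the setup concrete (this illustrates only the problem, not the lower-bound technique), here is a toy Python sketch of the sample stream together with a memory-rich learner that simply stores a batch of samples, about $n^2$ bits, and recovers $x$ by Gaussian elimination over GF(2); all names are illustrative.

    import random

    def parity_stream(x):
        """Yield samples (a_t, b_t): a_t uniform over {0,1}^n, b_t = <a_t, x> mod 2."""
        n = len(x)
        while True:
            a = [random.randint(0, 1) for _ in range(n)]
            yield a, sum(ai * xi for ai, xi in zip(a, x)) % 2

    def solve_gf2(rows):
        """Gauss-Jordan elimination over GF(2) on augmented rows [a_1 ... a_n | b].
        Returns x if the sampled system determines it uniquely, else None."""
        n = len(rows[0]) - 1
        rows = [row[:] for row in rows]
        for col in range(n):
            pivot = next((i for i in range(col, len(rows)) if rows[i][col]), None)
            if pivot is None:
                return None                     # batch not of full rank; try again
            rows[col], rows[pivot] = rows[pivot], rows[col]
            for i in range(len(rows)):
                if i != col and rows[i][col]:
                    rows[i] = [u ^ v for u, v in zip(rows[i], rows[col])]
        return [rows[i][-1] for i in range(n)]

    def memory_rich_learner(stream, n):
        """Store 2n samples (Theta(n^2) bits of memory) and solve for x directly.
        The talk shows that with less than n^2/25 bits, exponentially many samples are needed."""
        while True:
            batch = [a + [b] for a, b in (next(stream) for _ in range(2 * n))]
            x = solve_gf2(batch)
            if x is not None:
                return x

    secret = [random.randint(0, 1) for _ in range(8)]
    print(memory_rich_learner(parity_stream(secret), 8) == secret)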

Previously, there was no non-trivial lower bound on the number of samples needed, for any learning problem, even if the allowed memory size is $O(n)$ (where $n$ is the space needed to store one sample).

We also give an application of our result in the field of bounded-storage cryptography. We show an encryption scheme that requires a private key of length $n$ and time complexity of $n$ per encryption/decryption of each bit, and that is provably and unconditionally secure as long as the attacker uses less than $n^2/25$ memory bits and the scheme is used at most an exponential number of times. Previous works on bounded-storage cryptography assumed that the memory size used by the attacker is at most linear in the time needed for encryption/decryption.

2016/03/11: Tali Kaufman, "Bounded degree high dimensional expanders"

Tali Kaufman (Bar Ilan University)

Abstract:

In recent years a high dimensional theory of expanders has emerged. The notion of combinatorial expansion of graphs (i.e. the Cheeger constant of a graph) has seen two generalizations to high dimensional simplicial complexes (or hypergraphs). One generalization is called coboundary expansion, and the other is termed cosystolic expansion. Gromov has shown that cosystolic expanders have the topological overlapping property. He then asked whether there exist bounded degree complexes with the topological overlapping property in every dimension.
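For reference, the one-dimensional quantity being generalized is the Cheeger (edge-expansion) constant of a graph $G=(V,E)$,

$$ h(G) \;=\; \min_{S \subseteq V,\; 0 < |S| \le |V|/2} \frac{|E(S, V \setminus S)|}{|S|}, $$

and a family of graphs is an expander family when $h(G)$ stays bounded away from zero; coboundary and cosystolic expansion are two ways of extending this quantity from graphs to higher-dimensional simplicial complexes.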

In the well studied one dimensional case, the existence of a bounded degree combinatorial expander is easy to prove. However, bounded degree high dimensional expanders (random or explicit), according to either of the definitions above, were not known until our work.

In this talk we present a local-to-global criterion on a complex that implies cosystolic expansion. We then use our criterion to present explicit bounded degree cosystolic expanders of every dimension. This answers in the affirmative the open question raised by Gromov.

We expect that the emerging theory of high dimensional expansion is likely to have various applications in the theory of computation. Thus, one of the goals of this talk is to introduce this concept to the theory community.

No prior background is assumed.

Based on joint works with Alex Lubotzky, David Kazhdan, and Shai Evra.

2016/03/02: Ola Svensson, "Lift-and-Round to Improve Scheduling on Unrelated Machines"

Ola Svensson (EPFL)

Abstract:

Weighted completion time and makespan are some of the most relevant and well-studied measures of quality of service in scheduling and resource allocation problems. While these objectives have been studied since the 50’s, a systematic study of their approximability was started in the 90’s. Since then, impressive progress has led to a complete understanding of these objectives in simple machine models, such as identical machines.
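For reference (using standard scheduling notation not spelled out in the abstract), in these problems each job $j$ has a weight $w_j$ and a machine-dependent processing time $p_{ij}$, each machine processes its assigned jobs one after another, and the two objectives are the makespan $\max_i \sum_{j \text{ assigned to } i} p_{ij}$ and the weighted sum of completion times

$$ \sum_j w_j C_j, $$

where $C_j$ is the time at which job $j$ finishes. On identical machines $p_{ij}$ does not depend on the machine $i$, which is part of what makes that special case so much better understood.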

In contrast, it remains a notorious problem to understand the approximability in the more general unrelated machine setting: the classic algorithms developed in the 90’s resisted any improvements while having guarantees that are far from the strongest known lower bounds.

In this talk, we overview recent developments with a focus on our recent approximation algorithm that improves upon the barrier of 3/2 for the weighted completion time objective. The improvement is based on a new lift-and-project based SDP relaxation and a general bipartite-rounding procedure that produces an assignment with certain strong negative correlation properties.

This is based on joint work with Nikhil Bansal and Aravind Srinivasan (arXiv:1511.07826).

2016/02/17: Ronitt Rubinfeld, "Local Computation Algorithms"

Ronitt Rubinfeld (Tel Aviv University and MIT)

Abstract:

Consider a setting in which the inputs to and outputs from a computational problem are so large that there is no time to read them in their entirety. However, if one is only interested in small parts of the output at any given time, is it really necessary to solve the entire computational problem? Is it even necessary to view the whole input?

We survey recent work in the model of local computation algorithms which, for a given input, supports queries by a user for the values of specified bits of a legal output. The goal is to design local computation algorithms in such a way that very little of the input needs to be seen in order to determine the value of any single bit of the output. In this talk, we describe a number of problems for which sublinear time and space local computation algorithms have been developed.
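As a toy illustration of the flavor of such algorithms (and not one of the specific results surveyed in the talk), the following Python sketch answers the single query "is this edge in the matching?" for the greedy maximal matching induced by random edge priorities, exploring only edges near the queried one rather than the whole graph; the graph and the names are illustrative.

    import random
    from functools import lru_cache

    # A small toy graph: vertex -> set of incident edges.
    edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a"), ("b", "d")]
    incident = {}
    for e in edges:
        for v in e:
            incident.setdefault(v, set()).add(e)

    # Random priorities simulate a random order in which greedy would add edges.
    rank = {e: random.random() for e in edges}

    @lru_cache(maxsize=None)
    def in_matching(e):
        """Is edge e in the greedy maximal matching defined by 'rank'?
        Greedy keeps e iff no adjacent edge of smaller rank was already kept,
        so only a neighborhood of e is ever examined."""
        for v in e:
            for f in incident[v]:
                if f != e and rank[f] < rank[e] and in_matching(f):
                    return False
        return True

    # Query individual output bits without ever materializing the whole matching.
    print(in_matching(("b", "c")), in_matching(("d", "a")))

The point of local computation algorithms is that the answers to such queries remain consistent with a single legal global solution, while the work per query can be kept sublinear in the size of the input.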


2016/02/03: Ilya Razenshteyn, "Sketching and Embedding are Equivalent for Norms"

Ilya Razenshteyn (MIT)

Abstract:

Imagine the following communication task. Alice and Bob each have a point from a metric space. They want to transmit a few bits and decide whether their points are close to each other or far apart. Of particular interest are sketching protocols: Alice and Bob both compute short summaries of their inputs and then a referee, given these summaries, makes the decision; sketches are very useful for nearest neighbor search, streaming, randomized linear algebra, etc. Indyk (FOCS 2000) showed that for the $\ell_p$ spaces with $0 \le p \le 2$ the above problem allows a very efficient sketching protocol. Consequently, any metric that can be mapped into the space $\ell_p$ with all the distances being approximately preserved has a good protocol as well.
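To make the sketching model concrete (this is a standard $\ell_2$ example, not the result of the talk), here is a minimal Python sketch in which Alice and Bob apply a shared random Gaussian linear map to their points and the referee compares only the short sketches; the dimensions, threshold and gap below are illustrative.

    import numpy as np

    def make_sketcher(d, k=32, seed=0):
        """Shared randomness: one random Gaussian map used by both Alice and Bob."""
        G = np.random.default_rng(seed).normal(size=(k, d)) / np.sqrt(k)
        return lambda point: G @ point        # k numbers instead of d

    def referee(sketch_a, sketch_b, r):
        """Decide 'close' (distance <= r) versus 'far' (distance >= 2r) from sketches only."""
        return "close" if np.linalg.norm(sketch_a - sketch_b) <= 1.5 * r else "far"

    d, r = 1000, 1.0
    sketch = make_sketcher(d)
    rng = np.random.default_rng(1)
    a = rng.normal(size=d)
    b_close = a + 0.3 * r * rng.normal(size=d) / np.sqrt(d)   # ||a - b_close|| ~ 0.3 r
    b_far = a + 3.0 * r * rng.normal(size=d) / np.sqrt(d)     # ||a - b_far||   ~ 3 r
    print(referee(sketch(a), sketch(b_close), r), referee(sketch(a), sketch(b_far), r))

Because a random Gaussian projection approximately preserves $\ell_2$ distances, the k-dimensional sketches suffice to separate the two cases with good probability; Indyk's protocols achieve the analogous guarantee for $\ell_p$ using $p$-stable distributions.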

I will show that for normed spaces (a very important class of metric spaces) embedding into $\ell_p$ is the only possible technique for solving the communication problem. Slightly more formally, we show that any normed space that admits a good communication (in particular, sketching) protocol for distinguishing close and far pairs of points embeds well into $\ell_p$ with $p$ being close to 1. The proof uses tools from communication complexity and functional analysis.

As a corollary, we will show communication lower bounds for the planar Earth Mover’s Distance (minimum-cost matching metric) and for the trace norm (the sum of the singular values of a matrix) by deriving them from the (known) non-embeddability theorems and (the contrapositive of) our result.

The talk is based on a joint paper with Alexandr Andoni and Robert Krauthgamer (arXiv:1411.2577).



2016/01/20: Scott Aaronson, "The Largest Possible Quantum Speedups"

Scott Aaronson (MIT)

Abstract:

I’ll describe recent progress on understanding the largest possible quantum vs. classical speedups in the setting of query complexity. First, I’ll discuss “Forrelation,” a problem that Andris Ambainis and I proved yields essentially the largest quantum speedup of any promise problem, as well as Fourier Sampling, which yields essentially the largest speedup of any sampling problem. I’ll then explain how Fourier Sampling relates to BosonSampling, the Bremner-Jozsa-Shepherd commuting Hamiltonians model, and other experimental systems that fall short of universal quantum computing, but that might be used in the near future to try to demonstrate quantum supremacy on some task. Finally, I’ll explain how my student Shalev Ben-David “broke the Grover barrier” this summer, achieving a 2.5th-power separation between randomized and quantum query complexities for a total Boolean function—and how the proof of a key conjecture about the Forrelation problem would improve the separation to cubic.
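For reference, and roughly following the standard definition, given $f, g : \{0,1\}^n \to \{-1, 1\}$ the Forrelation quantity is

$$ \Phi_{f,g} \;=\; \frac{1}{2^{3n/2}} \sum_{x, y \in \{0,1\}^n} f(x)\, (-1)^{x \cdot y}\, g(y), $$

i.e., the correlation between $g$ and the Boolean Fourier transform of $f$; the promise problem is to decide, with query access to $f$ and $g$, whether $|\Phi_{f,g}|$ is negligible or bounded away from zero. A quantum algorithm can estimate this quantity with a single query, whereas classically a near-maximal number of queries is needed.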


2015/12/09: Virginia Vassilevska Williams, "Fine-Grained Complexity of Problems in P"

Virginia Vassilevska Williams (Stanford)

Abstract:

A central goal of algorithmic research is to determine how fast computational problems can be solved in the worst case. Theorems from complexity theory state that there are problems that, on inputs of size $n$, can be solved in $t(n)$ time but not in $t(n)^{1-\epsilon}$ time for $\epsilon>0$. The main challenge is to determine where in this hierarchy various natural and important problems lie. Throughout the years, many ingenious algorithmic techniques have been developed and applied to obtain blazingly fast algorithms for many problems. Nevertheless, for many other central problems, the best known running times are essentially those of the classical algorithms devised for them in the 1950s and 1960s.

Unconditional lower bounds seem very difficult to obtain, and so practically all known time lower bounds are conditional. For years, the main tool for proving hardness of computational problems has been NP-hardness reductions, basing hardness on $P \neq NP$. However, when we care about the exact running time (as opposed to merely polynomial vs non-polynomial), NP-hardness is not applicable, especially if the running time is already polynomial. In recent years, a new theory has been developed, based on “fine-grained reductions” that focus on exact running times. Mimicking NP-hardness, the approach is to (1) select a key problem X that is conjectured to require essentially $t(n)$ time for some $t$, and (2) reduce X in a fine-grained way to many important problems. This approach has led to the discovery of many meaningful relationships between problems, and even sometimes to equivalence classes.

The main key problems used to base hardness on have been the 3SUM problem, the CNF-SAT problem (via the Strong Exponential Time Hypothesis (SETH)) and the All Pairs Shortest Paths problem (APSP). Research on SETH-based lower bounds has flourished in particular in recent years, showing that the classical algorithms are optimal for problems such as Approximate Diameter, Edit Distance, Fréchet Distance, Longest Common Subsequence (LCS), etc.
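For concreteness, the Strong Exponential Time Hypothesis can be stated as follows: for every $\epsilon > 0$ there is a clause width $k$ such that $k$-SAT on $n$ variables cannot be solved in time

$$ O\!\left(2^{(1-\epsilon)\, n}\right); $$

a fine-grained reduction then transfers this conjectured hardness to a polynomial-time problem, for example showing that a strongly subquadratic algorithm for Edit Distance would violate SETH.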

In this talk I will give an overview of the current progress in this area of study, and will highlight some new exciting developments leading to new, even more believable conjectures and to surprising new consequences of fast algorithms for Edit Distance or LCS.

2015/11/25: Piotr Indyk, "Fast Algorithms for Structured Sparsity"

Piotr Indyk (MIT)

Abstract:

Sparse representations of signals (i.e., representations that have only few non-zero or large coefficients) have emerged as powerful tools in signal processing theory, algorithms, machine learning and other applications. However, real-world signals often exhibit rich structure beyond mere sparsity. For example, a natural image, once represented in the wavelet domain, often has the property that its large coefficients occupy a subtree of the wavelet hierarchy, as opposed to arbitrary positions. A general approach to capturing this type of additional structure is to model the support of the signal of interest (i.e., the set of indices of large coefficients) as belonging to a particular family of sets. Computing a sparse representation of the signal then corresponds to the problem of finding the support from the family that maximizes the sum of the squares of the selected coefficients. Such a modeling approach has proved to be beneficial in a number of applications including compression, de-noising, compressive sensing and machine learning. However, the resulting optimization problem is often computationally difficult or intractable, which is undesirable in many applications where large signals and datasets are commonplace.
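In symbols (with illustrative notation), if $\mathbb{M}$ denotes the allowed family of supports and $x \in \mathbb{R}^n$ the coefficient vector, the problem described above is

$$ \max_{S \in \mathbb{M}} \; \sum_{i \in S} x_i^2, $$

i.e., find an admissible support capturing as much of the signal's energy as possible. For the unconstrained family of all supports of size $k$ this is solved by keeping the $k$ largest-magnitude coefficients, but for structured families such as subtrees of the wavelet hierarchy the optimization can become much harder.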

In this talk, I will outline some of the past and more recent algorithms for finding structured sparse representations of signals, including piecewise constant approximations, tree-sparse approximations and graph-sparse approximations. The algorithms borrow several techniques from combinatorial optimization (e.g., dynamic programming), graph theory, and approximation algorithms. For many problems the algorithms often run in (nearly) linear time, which makes them applicable to very large datasets.

Joint work with Chinmay Hegde and Ludwig Schmidt.

2015/11/11: Tim Roughgarden, "Complexity Theory and Algorithmic Game Theory: Some New Connections"

Tim Roughgarden (Stanford)

Abstract:

We survey three recent applications of complexity theory to algorithmic game theory.

First, we explain “why prices need algorithms,” in the sense that the guaranteed existence of market equilibria is inextricably connected to the computational complexity of related optimization problems: demand oracles, revenue-maximization, and welfare-maximization. This connection implies several impossibility results, and suggests a complexity-theoretic explanation for the lack of useful extensions of the Walrasian equilibrium concept.

Second, we explain when and how communication and computational lower bounds for algorithms for an optimization problem translate to lower bounds on the price of anarchy (POA) in games derived from the problem. This is a new approach to POA lower bounds, based on reductions in lieu of explicit constructions. These lower bounds are particularly significant for problems of designing games that have only near-optimal equilibria, ranging from the design of simple combinatorial auctions to the existence of effective tolls for routing networks.

Finally, we study “the borders of Border’s theorem,” a fundamental result in auction design that gives an intuitive linear characterization of the feasible interim allocation rules of a Bayesian single-item environment. We explain a complexity-theoretic barrier that indicates, assuming standard complexity class separations, that Border’s theorem cannot be extended significantly beyond the state-of-the-art.

Includes joint work with Parikshit Gopalan, Noam Nisan, and Inbal Talgam-Cohen.


2015/10/28: Li-Yang Tan, "An average-case depth hierarchy theorem for Boolean circuits"

Li-Yang Tan (TTIC)

2015/10/14: Ryan O'Donnell, "How to refute a random CSP"

Ryan O'Donnell (CMU)

2015/10/07: David Zuckerman, "Explicit Two-Source Extractors and Resilient Functions"

David Zuckerman (UT Austin)

2015/09/30: Mika Göös, "Communication Complexity vs. Partition Numbers"

Mika Göös (University of Toronto)