# 2021-2022

## 2022/06/01: Inbal Talgam-Cohen, "Algorithmic Contract Theory"

Inbal Talgam-Cohen (Technion)

## 2022/05/18: Thatchaphol Saranurak, "All Pairs Minimum Cuts in Nearly Quadratic Time: A Tutorial"

Thatchaphol Saranurak (University of Michigan)

## 2022/05/04: Vera Traub, "Approximation Algorithms for Connectivity Augmentation Problems"

Vera Traub (ETH Zurich)

## 2022/04/20: Rasmus Kyng, "Almost-Linear Time Algorithms for Maximum Flows and More"

Rasmus Kyng (ETH Zurich)

Abstract

Reproducibility is vital to ensuring scientific conclusions are reliable, but failures of reproducibility have been a major issue in nearly all scientific areas of study in recent decades. A key issue underlying the reproducibility crisis is the explosion of methods for data generation, screening, testing, and analysis, where, crucially, only the combinations producing the most significant results are reported. Such practices (also known as p-hacking, data dredging, and researcher degrees of freedom) can lead to erroneous findings that appear to be significant, but that don’t hold up when other researchers attempt to replicate them.

In this talk, we introduce a new notion of reproducibility for randomized algorithms. This notion ensures that with high probability, an algorithm returns exactly the same output when run with two samples from the same distribution. Despite this exceedingly strong demand, there are efficient reproducible algorithms for several fundamental problems in statistics and learning. We show that any statistical query algorithm can be made reproducible with a modest increase in sample complexity, and we use this to construct reproducible algorithms for finding approximate heavy-hitters and medians. Using these ideas, we give the first reproducible algorithm for learning halfspaces via a reproducible weak learner and a reproducible boosting algorithm. We initiate the study of lower bounds and inherent tradeoffs for reproducible algorithms, giving nearly tight sample complexity upper and lower bounds for reproducible versus non-reproducible SQ algorithms. Finally, we discuss connections to other well-studied notions of algorithmic stability, such as differential privacy.
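The randomized-rounding idea behind such algorithms can be sketched in a few lines. The toy estimator below is my own illustration, not the paper's construction: it rounds an empirical mean to a grid whose offset is drawn from the algorithm's internal randomness, so that two runs sharing that randomness (the seed) return the identical value with high probability even on fresh samples.

```python
import random

def reproducible_mean(sample, grid_width, seed):
    """Toy sketch of reproducibility via randomized rounding: round the
    empirical mean to a grid whose offset comes from the algorithm's
    internal randomness (the seed shared across runs)."""
    offset = random.Random(seed).uniform(0, grid_width)
    mean = sum(sample) / len(sample)
    # Two empirical means that differ by much less than grid_width land on
    # the same grid point unless they straddle a randomly placed boundary.
    return offset + grid_width * round((mean - offset) / grid_width)
```

With a shared seed, two nearby empirical means round to the same grid point; the probability of disagreement is roughly their distance divided by `grid_width`.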

Joint work with Russell Impagliazzo (UCSD), Rex Lei (UCSD), and Toniann Pitassi (Columbia University), to appear in STOC 2022.

## 2022/03/23: Shuichi Hirahara, "Excluding PH Pessiland"

Shuichi Hirahara (National Institute of Informatics, Japan)

Abstract

Pessiland is the worst of Impagliazzo's five possible worlds: it is a world where NP is hard on average and pseudorandom generators do not exist. Excluding Pessiland (i.e., showing the equivalence between average-case hardness of NP and the existence of pseudorandom generators) is one of the most important open questions in theoretical computer science.

In this talk, we propose to consider PH (Polynomial Hierarchy) variants of Impagliazzo's five possible worlds. Our main result is to unconditionally rule out PH variants of Pessiland. I will also mention recent progress on excluding PH Heuristica: average-case hardness of PH follows from exponential worst-case hardness of PH.

Based on joint works with Rahul Santhanam.

## 2022/03/09: Roei Tell, "Hardness vs Randomness, Revised: Uniform, Non-Black-Box, and Instance-Wise"

Roei Tell (IAS)

Abstract

In this talk I'll show how to revise the classical hardness-vs-randomness framework, so that it can work in a non-black-box fashion. Specifically, we will construct derandomization algorithms that don't rely on classical PRGs, and instead "extract pseudorandomness" from the given input on which we want to simulate the probabilistic machine.

Using a non-black-box approach allows us to deduce stronger conclusions (or alternatively rely on weaker hypotheses), compared to classical approaches. In one instantiation of the new framework, we reveal a close connection between the promiseBPP = promiseP conjecture and a new type of uniform lower bounds. In another instantiation, we simulate probabilistic algorithms with essentially no observable runtime overhead, under plausible hypotheses.

Based on a joint work with Lijie Chen.

## 2022/02/23: Merav Parter, "New Diameter Reducing Shortcuts: Breaking the $O(\sqrt{n})$ Barrier"

Merav Parter (Weizmann Institute of Science)

Abstract

For an $n$-vertex digraph $G=(V,E)$, a \emph{shortcut set} is a (small) subset of edges $H$ taken from the transitive closure of $G$ that, when added to $G$, guarantees that the diameter of $G \cup H$ is small. Shortcut sets, introduced by Thorup in 1993, have a wide range of applications in algorithm design, especially in the context of parallel, distributed, and dynamic computation on directed graphs. A folklore result in this context shows that every $n$-vertex digraph admits a shortcut set of linear size (i.e., of $O(n)$ edges) that reduces the diameter to $\widetilde{O}(\sqrt{n})$. Despite extensive research over the years, the question of whether one can reduce the diameter to $o(\sqrt{n})$ with $\widetilde{O}(n)$ shortcut edges has been left open.

In this talk, I will present the first improved diameter-sparsity tradeoff for this problem, breaking the $\sqrt{n}$ diameter barrier. Specifically, we show an $O(n^{\omega})$-time randomized algorithm for computing a linear shortcut set that reduces the diameter of the digraph to $\widetilde{O}(n^{1/3})$. We also extend our algorithms to provide improved $(\beta,\epsilon)$ hopsets for $n$-vertex weighted directed graphs.
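As a toy illustration of the folklore tradeoff (not the new algorithm), consider a directed path: even $O(\sqrt{n})$ edges from the transitive closure already bring its diameter down to $O(\sqrt{n})$. The sketch below is an assumption-laden construction for this special case only.

```python
import math
from collections import deque

def shortcut_path(n):
    """Directed path 0 -> 1 -> ... -> n-1, plus shortcut edges (taken from
    the transitive closure) between consecutive multiples of k = isqrt(n).
    This drops the diameter from n-1 to roughly 2*sqrt(n)."""
    k = math.isqrt(n)
    adj = {v: [] for v in range(n)}
    for v in range(n - 1):
        adj[v].append(v + 1)       # original path edges
    for v in range(0, n - k, k):
        adj[v].append(v + k)       # shortcut edges
    return adj

def dist(adj, s, t):
    """Unweighted BFS distance from s to t."""
    d = {s: 0}
    q = deque([s])
    while q:
        v = q.popleft()
        for u in adj[v]:
            if u not in d:
                d[u] = d[v] + 1
                q.append(u)
    return d.get(t, math.inf)
```

For example, with $n = 10000$ the farthest pair is reachable in at most $2\sqrt{n} = 200$ hops instead of $9999$.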

Joint work with Shimon Kogan.

## 2021/12/08: Guy Blanc, "Properly learning decision trees in almost polynomial time"

Guy Blanc (Stanford)

Abstract

We give an n^{O(log log n)}-time membership query algorithm for properly and agnostically learning decision trees under the uniform distribution over {-1,1}^n. Even in the realizable setting, the previous fastest runtime was n^{O(log n)}, a consequence of a classic algorithm of Ehrenfeucht and Haussler.

Our algorithm shares similarities with practical heuristics for learning decision trees, which we augment with additional ideas to circumvent known lower bounds against these heuristics. To analyze our algorithm, we prove a new structural result for decision trees that strengthens a theorem of O'Donnell, Saks, Schramm, and Servedio. While the OSSS theorem says that every decision tree has an influential variable, we show how every decision tree can be "pruned" so that every variable in the resulting tree is influential.

Joint work with Jane Lange, Mingda Qiao, and Li-Yang Tan. To appear in FOCS 2021.

## 2021/12/01: William Kuszmaul, "Linear Probing Revisited: Tombstones Mark the Demise of Primary Clustering"

William Kuszmaul (MIT)

Abstract

The linear-probing hash table is one of the oldest and most widely used data structures in computer science. However, linear probing also famously comes with a major drawback: as soon as the hash table reaches a high memory utilization, elements within the hash table begin to cluster together, causing insertions to become slow. This phenomenon, now known as "primary clustering", was first captured by Donald Knuth in 1963; at a load factor of $1 - 1/x$, the expected time per insertion becomes $\Theta(x^2)$, rather than the more desirable $\Theta(x)$.

We show that there is more to the story than the classic analysis would seem to suggest. It turns out that small design decisions in how deletions are implemented have dramatic effects on the asymptotic performance of insertions. If these design decisions are made correctly, then even a hash table that is continuously at a load factor $1 - \Theta(1/x)$ can achieve average insertion time $\tilde{O}(x)$. A key insight is that the tombstones left behind by deletions cause a surprisingly strong "anti-clustering" effect, and that when insertions and deletions are one-for-one, the anti-clustering effects of deletions actually overpower the clustering effects of insertions.

We also present a new variant of linear probing, which we call "graveyard hashing", that completely eliminates primary clustering on any sequence of operations. If, when an operation is performed, the current load factor is $1 - 1/x$ for some $x$, then the expected cost of the operation is $O(x)$. One corollary is that, in the external-memory model with a data block size of $B$, graveyard hashing offers the following remarkable guarantee: at any load factor $1-1/x$ satisfying $x = o(B)$, graveyard hashing achieves $1 + o(1)$ expected block transfers per operation. Past external-memory hash tables have only been able to offer a $1+o(1)$ guarantee when the block size $B$ is at least $\Omega(x^2)$.
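To make the role of tombstones concrete, here is a minimal linear-probing table in which deletions leave tombstones that later insertions may recycle. The sentinel values and recycling policy are illustrative assumptions of mine, not the paper's graveyard-hashing scheme.

```python
EMPTY, TOMBSTONE = object(), object()  # slot sentinels

class LinearProbingTable:
    def __init__(self, capacity):
        self.slots = [EMPTY] * capacity

    def _probe(self, key):
        """Slot indices starting at the key's hash position."""
        start = hash(key) % len(self.slots)
        return ((start + j) % len(self.slots) for j in range(len(self.slots)))

    def insert(self, key):
        first_tombstone = None
        for i in self._probe(key):
            slot = self.slots[i]
            if slot is TOMBSTONE:
                if first_tombstone is None:
                    first_tombstone = i  # remember the earliest tombstone
            elif slot is EMPTY or slot == key:
                # Recycle a tombstone seen earlier in the probe sequence:
                # the insertion then does not extend the occupied run,
                # which is the "anti-clustering" effect described above.
                if slot is EMPTY and first_tombstone is not None:
                    i = first_tombstone
                self.slots[i] = key
                return
        if first_tombstone is not None:
            self.slots[first_tombstone] = key

    def delete(self, key):
        for i in self._probe(key):
            if self.slots[i] is EMPTY:
                return
            if self.slots[i] == key:
                self.slots[i] = TOMBSTONE  # leave a marker, shift nothing
                return

    def __contains__(self, key):
        for i in self._probe(key):
            if self.slots[i] is EMPTY:
                return False  # a tombstone, by contrast, keeps the probe going
            if self.slots[i] == key:
                return True
        return False
```

Note that lookups must probe past tombstones (they only stop at truly empty slots), which is exactly why tombstone placement interacts with clustering.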

Based on joint work with Michael A. Bender and Bradley C. Kuszmaul. To appear in FOCS 2021.

## 2021/11/10: Kuikui Liu, "Spectral Independence: A New Tool to Analyze Markov Chains"

Kuikui Liu (University of Washington)

Abstract

Markov chain Monte Carlo is a widely used class of algorithms for sampling from high-dimensional probability distributions, both in theory and in practice. While simple to implement, analyzing the rate of convergence to stationarity, i.e. the "mixing time", remains a challenging problem in many settings. We introduce a new technique to bound mixing times called "spectral independence", which says that certain pairwise correlation matrices all have bounded spectral norm. This surprisingly powerful technique originates in the emerging study of high-dimensional expanders, and has allowed us to "unify" nearly all existing approaches to approximate counting and sampling by building new connections with other areas, including statistical physics, geometry of polynomials, functional analysis, and more. Through these connections, several long-standing open problems have recently been answered, including counting bases of matroids and optimal mixing of the Glauber dynamics/Gibbs sampler up to the algorithmic phase transition threshold.
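As a concrete example of the dynamics in question, here is a bare-bones Glauber sampler for the hardcore model (weighted independent sets), one of the settings where spectral independence yields optimal mixing. The graph encoding and update rule are a standard textbook form, not code from the works cited.

```python
import random

def glauber_hardcore(adj, lam, steps, seed=0):
    """Glauber dynamics for the hardcore model with fugacity lam.
    adj maps each vertex to its set of neighbours. Each step resamples a
    uniformly random vertex conditioned on the rest of the configuration."""
    rng = random.Random(seed)
    vertices = list(adj)
    occupied = set()  # start from the empty independent set
    for _ in range(steps):
        v = rng.choice(vertices)
        occupied.discard(v)
        if all(u not in occupied for u in adj[v]):
            # v is unblocked: occupy it with the conditional probability
            if rng.random() < lam / (1 + lam):
                occupied.add(v)
    return occupied
```

Each update touches one vertex, so the chain stays on independent sets by construction; the mixing-time question is how many such steps are needed before the output is close to the stationary hardcore distribution.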

Based on several joint works with Dorna Abdolazimi, Nima Anari, Zongchen Chen, Shayan Oveis Gharan, Eric Vigoda, Cynthia Vinzant, and June Vuong.

## 2021/10/27: Shravas Rao, "Degree vs. Approximate Degree and Quantum Implications of Huang's Sensitivity Theorem"

Shravas Rao (Northwestern University)

Abstract

Based on the recent breakthrough of Huang (2019), we show that for any total Boolean function f:

- The degree of f is at most quadratic in the approximate degree of f. This is optimal, as witnessed by the OR function.
- The deterministic query complexity of f is at most quartic in the quantum query complexity of f. This matches the known separation (up to log factors) due to Ambainis, Balodis, Belovs, Lee, Santha, and Smotrovs (2017).

We apply these results to resolve the quantum analogue of the Aanderaa--Karp--Rosenberg conjecture. We show that if f is a nontrivial monotone graph property of an n-vertex graph specified by its adjacency matrix, then Q(f)=Ω(n), which is also optimal. We also show that the approximate degree of any read-once formula on n variables is Θ(√n).
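For intuition, the exact degree of a total Boolean function is computable by Möbius inversion of its unique multilinear polynomial over {0,1}^n. The brute-force sketch below is my own illustration; it confirms, for instance, that OR on n bits has full degree n, even though its approximate degree is only Θ(√n) (Nisan–Szegedy).

```python
from itertools import combinations

def degree(f, n):
    """Degree of the unique multilinear polynomial representing a total
    Boolean function f: {0,1}^n -> {0,1}, by Moebius inversion over
    subsets (exponential in n -- for intuition only). f takes the set
    of coordinates that equal 1."""
    deg = 0
    for size in range(n + 1):
        for S in combinations(range(n), size):
            # Coefficient of the monomial prod_{i in S} x_i.
            coeff = sum((-1) ** (size - len(T)) * f(set(T))
                        for tsize in range(size + 1)
                        for T in combinations(S, tsize))
            if coeff != 0:
                deg = size
    return deg

def OR(support):
    """OR evaluated at the input whose 1-coordinates are `support`."""
    return int(len(support) > 0)
```

For example, `degree(OR, 4)` returns 4, while a dictator function (output equals one fixed input bit) has degree 1.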

Based on joint work with Scott Aaronson, Shalev Ben-David, Robin Kothari, and Avishay Tal.

## 2021/10/13: Nutan Limaye, "Superpolynomial Lower Bounds Against Low-Depth Algebraic Circuits"

Nutan Limaye (IT University of Copenhagen)

Abstract

Every multivariate polynomial P(X) can be written as a sum of monomials, i.e. a sum of products of variables and field constants. In general, the size of such an expression is the number of monomials that have a non-zero coefficient in P.

What happens if we add another layer of complexity and consider expressions that are sums of products of sums (of variables and field constants)? Now it becomes unclear how to prove that a given polynomial P(X) has no small expression of this form. In this work, we solve exactly this problem.

More precisely, we prove that certain explicit polynomials have no polynomial-sized "Sigma-Pi-Sigma" (sums of products of sums) representations. We can also show similar results for Sigma-Pi-Sigma-Pi, Sigma-Pi-Sigma-Pi-Sigma and so on for all "constant-depth" expressions.
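A Sigma-Pi-Sigma expression is just a sum of products of affine forms. The tiny evaluator below uses a hypothetical encoding of my own (not the authors'), together with the identity 4xy = (x+y)(x+y) + (x-y)(-x+y) as a depth-3 representation of the monomial xy, to make the model concrete.

```python
def eval_sps(circuit, point):
    """Evaluate a depth-3 Sigma-Pi-Sigma circuit: a sum of products of
    affine forms, where each affine form is (constant, coefficient vector)."""
    total = 0
    for product in circuit:              # top Sigma gate
        term = 1
        for const, coeffs in product:    # middle Pi gate
            # bottom Sigma gates: the affine forms themselves
            term *= const + sum(c * x for c, x in zip(coeffs, point))
        total += term
    return total

# 4xy as a Sigma-Pi-Sigma circuit: (x+y)(x+y) + (x-y)(-x+y)
FOUR_XY = [
    [(0, (1, 1)), (0, (1, 1))],
    [(0, (1, -1)), (0, (-1, 1))],
]
```

The lower bounds in the talk say that some explicit polynomials admit no such circuit of polynomial size, at depth 3 or any other constant depth.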

The talk is based on joint work of Nutan Limaye, Srikanth Srinivasan, and Sébastien Tavenas.

## 2021/09/29: Audra McMillan, "Hiding among the clones: a simple and nearly optimal analysis of privacy amplification by shuffling"

Audra McMillan (Apple)

Abstract

Differential privacy (DP) is a model of privacy-preserving machine learning that has garnered significant interest in recent years due to its rigorous privacy guarantees. An algorithm is differentially private if its output is stable under small changes in the input database. While DP has been adopted in a variety of applications, most applications of DP in industry actually satisfy a stronger notion called local differential privacy, in which data subjects perturb their data before it reaches the data analyst. While this requires less trust, it comes at a substantial cost to accuracy.

Recent work of Erlingsson, Feldman, Mironov, Raghunathan, Talwar, and Thakurta [EFMRTT19] demonstrated that random shuffling amplifies the differential privacy guarantees of locally randomized data. Such amplification implies substantially stronger privacy guarantees for systems in which data is contributed anonymously [BEMMRLRKTS17] and has led to significant interest in the shuffle model of privacy [CSUZZ19, EFMRTT19]. In this talk, we will discuss a new result on privacy amplification by shuffling, which achieves the asymptotically optimal dependence on the local privacy parameter. Our result is based on a new proof strategy that is simpler than previous approaches and extends to the slightly weaker notion of approximate differential privacy with nearly the same guarantees.
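The shuffle model is easy to picture with randomized response as the local randomizer. The sketch below is standard textbook randomized response, not the paper's analysis: each user perturbs a bit locally, the shuffler permutes the reports to strip identities, and the analyst debiases the anonymous sum.

```python
import math
import random

def randomized_response(bit, eps, rng):
    """eps-locally-DP randomized response: report the true bit with
    probability e^eps / (e^eps + 1), the flipped bit otherwise."""
    keep = math.exp(eps) / (math.exp(eps) + 1)
    return bit if rng.random() < keep else 1 - bit

def shuffle_and_estimate(bits, eps, seed=0):
    """Local randomization -> anonymizing shuffle -> debiased estimate of
    the number of ones. The shuffle destroys the link between users and
    reports, which is what amplifies the local privacy guarantee."""
    rng = random.Random(seed)
    reports = [randomized_response(b, eps, rng) for b in bits]
    rng.shuffle(reports)  # the shuffler
    p = math.exp(eps) / (math.exp(eps) + 1)
    n = len(bits)
    # E[sum(reports)] = p * ones + (1 - p) * (n - ones); solve for ones.
    return (sum(reports) - (1 - p) * n) / (2 * p - 1)
```

Accuracy degrades as eps shrinks (the debiasing factor 2p - 1 approaches 0), which is the local-model cost that shuffling helps offset.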

Based on joint work with Vitaly Feldman and Kunal Talwar.