2022-2023

2022/05/18: Thatchaphol Saranurak, "All Pairs Minimum Cuts in Nearly Quadratic Time: A Tutorial"

Thatchaphol Saranurak (University of Michigan)


Slides (PDF)

2022/05/04: Vera Traub, "Approximation Algorithms for Connectivity Augmentation Problems"

Vera Traub (ETH Zurich)


2022/04/20: Rasmus Kyng, "Almost-Linear Time Algorithms for Maximum Flows and More"

Rasmus Kyng (ETH Zurich)


2022/04/06: Jessica Sorrell, "Reproducibility in Learning"

Jessica Sorrell (UCSD)


Abstract

Reproducibility is vital to ensuring scientific conclusions are reliable, but failures of reproducibility have been a major issue in nearly all scientific areas of study in recent decades. A key issue underlying the reproducibility crisis is the explosion of methods for data generation, screening, testing, and analysis, where, crucially, only the combinations producing the most significant results are reported. Such practices (also known as p-hacking, data dredging, and researcher degrees of freedom) can lead to erroneous findings that appear to be significant, but that don’t hold up when other researchers attempt to replicate them.

In this talk, we introduce a new notion of reproducibility for randomized algorithms. This notion ensures that with high probability, an algorithm returns exactly the same output when run on two samples from the same distribution. Despite this exceedingly strong demand, there are efficient reproducible algorithms for several fundamental problems in statistics and learning. We show that any statistical query algorithm can be made reproducible with a modest increase in sample complexity, and we use this to construct reproducible algorithms for finding approximate heavy-hitters and medians. Using these ideas, we give the first reproducible algorithm for learning halfspaces, via a reproducible weak learner and a reproducible boosting algorithm. We initiate the study of lower bounds and inherent tradeoffs for reproducible algorithms, giving nearly tight sample complexity upper and lower bounds for reproducible versus non-reproducible SQ algorithms. Finally, we discuss connections to other well-studied notions of algorithmic stability, such as differential privacy.
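
As a concrete illustration of the shared-randomness idea behind these algorithms, here is a minimal Python sketch of a reproducible approximate median: the empirical median is rounded to a grid whose offset is drawn from shared randomness, so two runs on fresh samples from the same distribution snap to the same grid point with high probability. The grid width and rounding scheme below are illustrative assumptions, not the construction from the paper.

```python
import random

def reproducible_median(sample, tau, shared_seed):
    """Approximate median made reproducible via randomized rounding.

    Two runs on independent samples from the same distribution reuse
    the SAME shared_seed; their empirical medians are close with high
    probability, so they usually snap to the same randomly offset grid
    point. (Illustrative sketch only; tau and the rounding scheme are
    assumptions, not the paper's construction.)
    """
    rng = random.Random(shared_seed)       # shared randomness
    offset = rng.uniform(0, tau)           # random offset of the grid
    m = sorted(sample)[len(sample) // 2]   # empirical median
    # Snap to the nearest point of the grid {offset + k * tau}.
    return offset + round((m - offset) / tau) * tau

# Two runs on independent samples, same shared randomness:
draw = lambda n: [random.gauss(0, 1) for _ in range(n)]
out1 = reproducible_median(draw(10_000), tau=0.05, shared_seed=42)
out2 = reproducible_median(draw(10_000), tau=0.05, shared_seed=42)
print(out1 == out2)  # True with high probability
```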

Joint work with Russell Impagliazzo (UCSD), Rex Lei (UCSD), and Toniann Pitassi (Columbia University), to appear in STOC 2022.

2022/03/23: Shuichi Hirahara, "Excluding PH Pessiland"

Shuichi Hirahara (National Institute of Informatics, Japan)


Abstract

Pessiland is the worst of Impagliazzo's five possible worlds: it is a world where NP is hard on average and pseudorandom generators do not exist. Excluding Pessiland (i.e., showing the equivalence between average-case hardness of NP and the existence of pseudorandom generators) is one of the most important open questions in theoretical computer science.

In this talk, we propose to consider PH (Polynomial Hierarchy) variants of Impagliazzo's five possible worlds. Our main result is to unconditionally rule out PH variants of Pessiland. I will also mention recent progress on excluding PH Heuristica: average-case hardness of PH follows from exponential worst-case hardness of PH.

Based on joint works with Rahul Santhanam.

2022/03/09: Roei Tell, "Hardness vs Randomness, Revised: Uniform, Non-Black-Box, and Instance-Wise"

Roei Tell (IAS)


Abstract

In this talk I'll show how to revise the classical hardness-vs-randomness framework, so that it can work in a non-black-box fashion. Specifically, we will construct derandomization algorithms that don't rely on classical PRGs, and instead "extract pseudorandomness" from the given input on which we want to simulate the probabilistic machine.

Using a non-black-box approach allows us to deduce stronger conclusions (or alternatively rely on weaker hypotheses), compared to classical approaches. In one instantiation of the new framework, we reveal a close connection between the promiseBPP = promiseP conjecture and a new type of uniform lower bounds. In another instantiation, we simulate probabilistic algorithms with essentially no observable runtime overhead, under plausible hypotheses.

Based on a joint work with Lijie Chen.

2022/02/23: Merav Parter, "New Diameter Reducing Shortcuts: Breaking the $O(\sqrt{n})$ Barrier"

Merav Parter (Weizmann Institute of Science)


Abstract

For an $n$-vertex digraph $G=(V,E)$, a \emph{shortcut set} is a (small) subset of edges $H$ taken from the transitive closure of $G$ that, when added to $G$, guarantees that the diameter of $G \cup H$ is small. Shortcut sets, introduced by Thorup in 1993, have a wide range of applications in algorithm design, especially in the context of parallel, distributed, and dynamic computation on directed graphs. A folklore result in this context shows that every $n$-vertex digraph admits a shortcut set of linear size (i.e., of $O(n)$ edges) that reduces the diameter to $\widetilde{O}(\sqrt{n})$. Despite extensive research over the years, the question of whether one can reduce the diameter to $o(\sqrt{n})$ with $\widetilde{O}(n)$ shortcut edges has remained open.
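
For concreteness, here is a hedged Python sketch of the folklore construction: sample roughly $\sqrt{n}\log n$ vertices and add every transitive-closure edge between sampled pairs, for $\widetilde{O}(n)$ shortcut edges in total; with high probability a long path contains a sampled vertex in every window of $\sqrt{n}$ consecutive vertices, so the added edges compress it to length $\widetilde{O}(\sqrt{n})$. The sample size and the BFS-based reachability test are illustrative choices.

```python
import math
import random
from collections import deque

def folklore_shortcut_set(adj, seed=0):
    """Folklore O~(n)-size shortcut set for a digraph (adjacency dict).

    Samples ~sqrt(n)*log(n) vertices and adds an edge between every
    ordered pair of sampled vertices connected in G, i.e. |S|^2 = O~(n)
    edges taken from the transitive closure. (Sketch of the folklore
    result the abstract cites; constants are illustrative.)
    """
    n = len(adj)
    rng = random.Random(seed)
    k = min(n, int(math.sqrt(n) * max(1.0, math.log(n + 1))))
    sampled = rng.sample(list(adj), k)

    def reachable_from(s):                 # plain BFS over adj
        seen, queue = {s}, deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        return seen

    shortcuts = set()
    for u in sampled:
        reach = reachable_from(u)
        shortcuts.update((u, v) for v in sampled if v != u and v in reach)
    return shortcuts

# Example: a directed path 0 -> 1 -> ... -> 99.
path = {i: [i + 1] for i in range(99)}
path[99] = []
print(len(folklore_shortcut_set(path)))   # number of added shortcut edges
```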

In this talk, I will present the first improved diameter-sparsity tradeoff for this problem, breaking the $\sqrt{n}$ diameter barrier. Specifically, we show an $O(n^{\omega})$-time randomized algorithm for computing a linear shortcut set that reduces the diameter of the digraph to $\widetilde{O}(n^{1/3})$. We also extend our algorithms to provide improved $(\beta,\epsilon)$-hopsets for $n$-vertex weighted directed graphs.

Joint work with Shimon Kogan.