List of local speakers:
List of external speakers:
Schedule
The following timeline is preliminary and subject to change. Talk titles and abstracts will be posted when available.
A light breakfast will be available before the first talk, coffee will be available throughout the day, and lunch
will be provided.
Abstracts:
Some lesser-known applications of Kolmogorov complexity in communication complexity
Marius Zimand, Towson University
Abstract: Our goal is to show that applications of Kolmogorov complexity go beyond the
well-known incompressibility method. We present applications in communication complexity that
rely on deeper concepts and results from Kolmogorov complexity such as common information,
stochastic strings, and others. Using these tools we obtain matching lower bounds for the
communication complexity of secret key agreement protocols in the model with public randomness.
Biography: Marius Zimand is a Professor at Towson University, working in computational complexity and algorithmic information theory.
Characterizing Demand Graphs for (Fixed-Parameter) Shallow-Light Steiner Network
Michael Dinitz, Johns Hopkins University
Abstract: We consider the Shallow-Light Steiner Network problem from a fixed-parameter perspective.
Given a graph $G$, a distance bound $L$, and $p$ pairs of vertices $(s_1,t_1), \dots,(s_p,t_p)$, the objective is
to find a minimum-cost subgraph $G'$ such that $s_i$ and $t_i$ have distance at most $L$ in $G'$ (for every $i$ in $[p]$).
Our main result is on the fixed-parameter tractability of this problem with parameter $p$.
We exactly characterize the demand structures that make the problem "easy", and give FPT algorithms
for those cases. In all other cases, we show that the problem is $W[1]$-hard. We also extend our
results to handle general edge lengths and costs, precisely characterizing which demands allow for
good FPT approximation algorithms and which demands remain $W[1]$-hard even to approximate.
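The distance constraints in the problem statement are easy to check mechanically. As a hypothetical illustration (not code from the talk), here is a minimal Python sketch that verifies whether a candidate subgraph $G'$, given as an adjacency list, satisfies all $p$ demands within the bound $L$, assuming unit edge lengths and using breadth-first search:

```python
from collections import deque

def bfs_dist(adj, s, t):
    """Unweighted shortest-path distance from s to t (None if unreachable)."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        if u == t:
            return dist[u]
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return None

def satisfies_demands(adj, demands, L):
    """Check that every (s_i, t_i) pair has distance at most L in the subgraph."""
    return all((d := bfs_dist(adj, s, t)) is not None and d <= L
               for s, t in demands)

# Toy instance: a path a-b-c-d plus a shortcut edge a-d.
adj = {"a": ["b", "d"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c", "a"]}
print(satisfies_demands(adj, [("a", "d"), ("b", "d")], L=2))  # True
```

The hard part of the problem is of course *finding* a minimum-cost such $G'$; this sketch only checks feasibility of a given candidate.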
Biography:
Michael is an assistant professor in the Department of
Computer Science at Johns Hopkins University.
Previously he was a postdoctoral fellow at the Weizmann Institute of Science,
after receiving his PhD from Carnegie Mellon University.
He works broadly in algorithms, with an emphasis on approximation algorithms
and algorithms related to computer networking and/or distributed computing.
Out-of-Equilibrium Dynamics for Natural Algorithms
Bernard Chazelle, Princeton University
Abstract: One of the key features of any living system
is to stay away from equilibrium. The same is
true of common processes in opinion formation and social learning.
The scarcity of techniques for the analysis of
out-of-equilibrium systems calls for
a serious rethink of the current approach to dynamics.
We discuss some of these issues in the context of time-varying
Markov chains, iterated learning, and opinion dynamics.
Biography:
Bernard Chazelle is Eugene Higgins Professor of Computer Science
at Princeton University, where he has been on the faculty since 1986.
His current research focuses on the ``algorithmic nature'' of living
systems. In recent years he has been a professor at the Collège de France in Paris
as well as a member of the Institute for Advanced Study in Princeton. He
received his PhD in computer science from Yale University in 1980. The
author of the book "The Discrepancy Method," he is a fellow of the
American Academy of Arts and Sciences, the European Academy of Sciences,
and the Association for Computing Machinery, and the recipient of three
Best Paper awards from SIAM.
Oblivious resampling oracles and parallel algorithms for the Lopsided Lov\'{a}sz Local Lemma
David Harris, University of Maryland at College Park
Abstract: The Lov\'{a}sz Local Lemma (LLL) is a probabilistic tool which shows that,
if a collection of ``bad'' events $\mathcal B$ in a probability space are not too
likely and not too interdependent, then there is a positive probability that no
bad-events in $\mathcal B$ occur. Moser \& Tardos (2010) gave sequential
and parallel algorithms which transformed most applications of the
variable-assignment LLL into efficient algorithms.
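For the variable-assignment LLL, the Moser \& Tardos algorithm is strikingly simple: while some bad event occurs, resample the variables it depends on. A minimal sketch for CNF satisfiability, where the bad events are violated clauses (an illustration of the general scheme, not code from the talk):

```python
import random

def moser_tardos_sat(clauses, n, rng=random.Random(0)):
    """Moser-Tardos resampling for CNF. Clauses are lists of signed literals
    (positive v means variable v is true, negative means false); variables 1..n."""
    assign = {v: rng.random() < 0.5 for v in range(1, n + 1)}

    def violated(clause):
        return not any(assign[abs(l)] == (l > 0) for l in clause)

    while True:
        bad = [c for c in clauses if violated(c)]
        if not bad:
            return assign
        # Resample the variables of one violated clause, chosen arbitrarily.
        for l in bad[0]:
            assign[abs(l)] = rng.random() < 0.5

# Each clause shares variables with few others, so the LLL condition holds
# and the expected number of resamplings is small.
clauses = [[1, 2, 3], [-1, 4, 5], [-2, -4, 6]]
sol = moser_tardos_sat(clauses, 6)
print(all(any(sol[abs(l)] == (l > 0) for l in c) for c in clauses))  # True
```

Moser \& Tardos proved that, under the LLL condition, this loop terminates after a linear expected number of resamplings; resampling oracles generalize the resampling step to other probability spaces.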
A framework of Harvey \& Vondr\'{a}k (2015) based on
``resampling oracles'' extended this to give very general
sequential algorithms for other probability spaces satisfying the Lopsided Lov\'{a}sz Local Lemma (LLLL).
We describe a new structural property, which we call
``obliviousness,'' that holds for all known resampling oracles.
Essentially,
it means that the interaction between two bad-events $B, B'$ depends only on the randomness used to resample $B$, and not on the precise state within $B$ itself.
This property has two major consequences. First, it is the key to achieving a unified parallel LLLL algorithm, which is faster than the previous problem-specific algorithms of Harris (2016) for the variable-assignment LLLL and of Harris \& Srinivasan (2014) for permutations. This new algorithm extends a framework of Kolmogorov (2016), and gives the first RNC algorithms for perfect matchings and Hamiltonian cycles of $K_n$.
Second, this property allows us to build LLLL probability spaces out of a relatively simple ``atomic'' set of events. It was intuitively clear that existing LLLL spaces were built in this way; but the obliviousness property formalizes this and gives a way of automatically turning a resampling oracle for atomic events into a resampling oracle for conjunctions of them. Using this framework, we get the first sequential resampling oracle for perfect matchings on the complete $s$-uniform hypergraph $K_n^{(s)}$, and the first commutative resampling oracle for Hamiltonian cycles of $K_n$.
Biography:
David Harris received his BA in mathematics from Harvard University and his PhD in applied
mathematics from University of Maryland, where he was advised by Dr. Aravind Srinivasan.
He currently lives in Silver Spring, MD.
Efficient Fine-Grained Algorithms
Barna Saha, University of Massachusetts, Amherst
Abstract:
One of the greatest successes of computational complexity theory is the
classification of countless fundamental computational problems into
polynomial-time and NP-hard ones, two classes that are often referred
to as tractable and intractable, respectively. However, this crude
distinction of algorithmic efficiency is clearly insufficient when
handling today's large-scale data. We need a finer-grained design
and analysis of algorithms that pinpoints the exact exponent of
polynomial running time, and a better understanding of when a speed-up
is not possible. Over the years, many polynomial-time approximation
algorithms were devised as an approach to bypass the NP-hardness
obstacle of many discrete optimization problems. This area has
developed into a rich field producing many algorithmic ideas and
has led to major advances in computational complexity. So far,
however, a similar theory for problems with high polynomial running times,
one that captures the trade-off between quality and running time, is largely lacking.
In this presentation, I will give an overview of the
growing field of fine-grained algorithms and complexity,
and my contributions therein. This will include fundamental
problems such as edit distance computation, all-pairs shortest paths, parsing, and matrix multiplication.
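Edit distance is the canonical fine-grained example: the textbook dynamic program below runs in quadratic time, and SETH-based hardness results (Backurs \& Indyk) suggest that no strongly subquadratic algorithm exists. A standard implementation, shown here only to fix the problem being discussed:

```python
def edit_distance(a: str, b: str) -> int:
    """Classic O(len(a) * len(b)) dynamic program (insert/delete/substitute)."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))              # distances from a[:0] to each b[:j]
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(prev[j] + 1,                             # delete a[i-1]
                         cur[j - 1] + 1,                          # insert b[j-1]
                         prev[j - 1] + (a[i - 1] != b[j - 1]))    # substitute
        prev = cur
    return prev[n]

print(edit_distance("kitten", "sitting"))  # 3
```

The fine-grained question is precisely whether this $O(n^2)$ exponent can be beaten, exactly or approximately, and at what cost in solution quality.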
Biography: Barna Saha is currently an Assistant Professor in the School of
Computer Science at the University of Massachusetts, Amherst.
She received her PhD in Computer Science from the University
of Maryland College Park in 2011. After spending three years
as a research scientist at AT&T Shannon Laboratories, she joined
UMass Amherst in September 2014. Her research interests span
theoretical computer science, specifically algorithm design
and analysis, discrete optimization, applied probability, and
foundational aspects of large-scale data management. She is the
recipient of an NSF CAREER Award (2017), a Google Faculty Award (2016),
a Yahoo Academic Career Enhancement Award (2015), and an NSF CRII Award (2015).
Non-Malleable Codes for Small-Depth Circuits
Dana Dachman-Soled, University of Maryland at College Park
Abstract:
We construct efficient, unconditional non-malleable codes that are secure against
tampering functions computed by small-depth circuits. For constant-depth circuits of
polynomial size (i.e. AC$^0$ tampering functions), our codes have codeword length
$n=k^{1+o(1)}$ for a $k$-bit message. This is an exponential improvement of the
previous best construction due to Chattopadhyay and Li (STOC 2017), which
had codeword length $2^{O(\sqrt{k})}$.
Our construction remains efficient for circuit depths as large as
$\Theta(\log(n)/\log\log(n))$ (indeed, our codeword length remains $n \leq k^{1+\epsilon}$),
and extending our result beyond this would require separating P from NC$^1$.
We obtain our codes via a new efficient non-malleable reduction from small-depth tampering to
split-state tampering. A novel aspect of our work is the incorporation of techniques
from unconditional derandomization into the framework of non-malleable reductions.
In particular, a key ingredient in our analysis is a recent pseudorandom switching
lemma of Trevisan and Xue (CCC 2013), a derandomization of the influential switching lemma
from circuit complexity; the randomness-efficiency of this switching lemma translates into
the rate-efficiency of our codes via our non-malleable reduction.
This is joint work with Marshall Ball, Siyao Guo, Tal Malkin, and Li-Yang Tan.
A Computer Scientist and a Legal Scholar Walk into a Bar
Kobbi Nissim, Georgetown University
Abstract:
We will explore some of the gaps between technical and legal conceptions of privacy and
argue for the development of rigorous paradigms for bridging between or even unifying
these concepts. We will present strategies and first results towards doing so considering
an example from the GDPR and differential privacy.
Universal Sketches, Software-Defined Networks and K9
Vladimir Braverman, Johns Hopkins University
Abstract:
Streaming and sketching algorithms have found many applications in computer science and other areas.
A typical sketching algorithm approximates one function. Given a class of functions $F$,
it is natural to ask if it is possible to compute a single sketch $S$
that will approximate every function $f$ from $F$. We call $S$ a "universal" sketch for $F$.
In this talk we will survey some results on universal sketches for several classes of functions.
For example, we will discuss sketches for functions of the form $\sum_i g(f_i)$ where $f_i$ is the frequency of element $i$ in the streaming model.
In addition, we will describe a sketch that approximates a sub-class of symmetric norms
(a norm is symmetric if it is invariant under sign-flips and coordinate-permutations) and
outline a surprising connection between universal sketches and concentration of
measure and Milman's theorem. Also, we will describe a recent result for subset (i.e. 0-1 weighted)
$\ell_0$ and $\ell_1$ norms. For these problems we obtain nearly optimal upper and lower bounds on streaming space complexity.
Finally, we will demonstrate the applicability of universal sketches for Software Defined Networks (SDN).
For SDN, we will present the UnivMon (short for Universal Monitoring) framework that can
simultaneously achieve both generality and high fidelity across a broad spectrum of monitoring tasks.
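As a toy illustration of sketching a single function of the form $\sum_i g(f_i)$, take $g(x) = x^2$, i.e. the second frequency moment $F_2$: the classic AMS sign-sketch estimates it in small space. This is a standard textbook sketch, shown only for intuition; it is not the universal sketch from the talk:

```python
import random
from statistics import median

def ams_f2(stream, reps=33, rng=random.Random(42)):
    """Estimate F2 = sum_i f_i^2 with independent AMS sign-sketches.
    Each repetition maintains z = sum_i s(i) * f_i for random signs s(i),
    so E[z^2] = F2; a median of small means reduces the variance."""
    signs = [{} for _ in range(reps)]   # per-repetition sign hash (as a dict)
    z = [0] * reps
    for item in stream:
        for r in range(reps):
            if item not in signs[r]:
                signs[r][item] = rng.choice((-1, 1))
            z[r] += signs[r][item]
    ests = [zi * zi for zi in z]
    groups = [ests[k:k + 3] for k in range(0, reps, 3)]
    return median(sum(g) / len(g) for g in groups)

stream = ["a"] * 4 + ["b"] * 2 + ["c"]   # true F2 = 16 + 4 + 1 = 21
print(ams_f2(stream))                    # unbiased estimate of F2 = 21
```

A universal sketch must approximate *every* $g$ in a class from one data structure, rather than being tailored to a single $g$ as this one is.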
Biography:
Vladimir Braverman is an Associate Professor of Computer Science at Johns Hopkins University.
His research interests include sublinear algorithms and their applications in computer science.
Quantum Algorithms for Semidefinite Programs and Convex Optimization
Xiaodi Wu, University of Maryland at College Park
Abstract: Convex optimization has been a central topic in the study of theoretical computer science, machine learning,
and operations research in recent decades. It is also a natural target for investigating potential quantum speedups.
In this talk, we will show the following two quantum speedups in this context.
(1) The first one optimizes a general (dimension $n$) convex function over a convex body
using $O(n)$ quantum queries to oracles that evaluate the objective function and determine
membership in the convex body; this gives a quadratic improvement over the best-known classical algorithm.
(2) The second one solves a special class of convex optimization problems,
semidefinite programs (SDPs): we design a quantum algorithm that
solves $n$-dimensional SDPs with $m$ constraints in time $O(\sqrt{n}+\sqrt{m})$, whereas the state-of-the-art
classical algorithms run in time at least linear in $n$ and $m$.
Based on results in arXiv:1710.02581v2 and arXiv:1809.01731v1.