Edge Disjoint Paths Revisited
Chandra Chekuri, Bell Labs
Given a graph G (possibly directed) and a set of pairs of vertices, the maximum edge-disjoint paths problem (EDP) is the optimization problem of maximizing the number of pairs that can be connected by edge-disjoint paths. EDP is NP-hard and, in directed graphs, also hard to approximate, as shown by the \Omega(m^{1/2-\eps})-hardness result of Guruswami et al. This seemed to settle the approximability of the problem, since an O(m^{1/2})-approximation is achievable; here m is the number of edges in the graph. The O(m^{1/2}) bound is also the best known upper bound on the approximability of the problem in undirected graphs.

In this talk we show that the approximability of the problem in directed graphs is still open, by suggesting that n, the number of vertices, is a more appropriate measure in which to express the approximability. In particular, we conjecture that the approximability threshold for directed EDP is \Omega(n^{1/2}). We justify this by obtaining the first "sub-linear" (in terms of n) approximation ratios for EDP, and by noting that the hardness result of Guruswami et al. applies only to sparse graphs. In particular, we show that the greedy algorithm achieves an O(min(n^{2/3}, m^{1/2}))-approximation in undirected graphs and an O(min(n^{4/5}, m^{1/2}))-approximation in directed graphs. Our analysis is based on an interesting question about maximum multicommodity flow in simple unit-capacity graphs. We also give an O(n^{1/2} log n)-approximation for EDP in directed acyclic graphs via LP rounding, matching the integrality gap to within a log factor.
This is joint work with Sanjeev Khanna (U. Penn).
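For illustration, a minimal Python sketch of a shortest-path-first greedy for EDP on an undirected unit-capacity graph; the data structures and tie-breaking here are illustrative assumptions, not the exact algorithm analyzed in the talk:

    from collections import deque

    def greedy_edp(n, edges, pairs):
        """Greedy sketch: repeatedly route the terminal pair whose shortest
        path in the remaining graph is shortest, then delete that path's edges."""
        adj = {v: set() for v in range(n)}
        for u, v in edges:                       # undirected, unit-capacity edges
            adj[u].add(v)
            adj[v].add(u)

        def bfs_path(s, t):
            parent = {s: None}
            q = deque([s])
            while q:
                u = q.popleft()
                if u == t:
                    path = [t]
                    while parent[path[-1]] is not None:
                        path.append(parent[path[-1]])
                    return path[::-1]
                for w in adj[u]:
                    if w not in parent:
                        parent[w] = u
                        q.append(w)
            return None

        routed, remaining = [], list(pairs)
        while remaining:
            candidates = [(bfs_path(s, t), (s, t)) for s, t in remaining]
            candidates = [c for c in candidates if c[0] is not None]
            if not candidates:
                break                            # no remaining pair can be connected
            path, pair = min(candidates, key=lambda c: len(c[0]))
            routed.append((pair, path))
            remaining.remove(pair)
            for a, b in zip(path, path[1:]):     # consume the path's edges
                adj[a].discard(b)
                adj[b].discard(a)
        return routed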
|
Reconstruction from Subsequences
Miroslav Dudik, Princeton University
In a "reconstruction problem" for a
class of combinatorial objects,
certain "projection" operations on the objects are
specified; each
projection discards some information about its argument. The
general
problem is whether the collection of all projections of an object,
carries
sufficient information to reconstruct it. We investigate the case
in which
the objects are sequences, and their projections are their
subsequences of
a fixed length k. We show that reconstruction of sequences of
length n
requires k to be at least \exp{\Omega(\log^{1/2} n)}. This
improves the
previous bound of \Omega(\log n).
Joint work with Leonard Schulman.
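To make the projection concrete, here is a brute-force Python sketch of the multiset of all length-k subsequences of a string (its "k-deck"; the function name is illustrative). The distinct strings 0110 and 1001 already share the same 2-deck, which is why k must grow with n:

    from collections import Counter
    from itertools import combinations

    def k_deck(s, k):
        """Multiset of all length-k subsequences of s (brute force,
        only feasible for small n)."""
        return Counter(''.join(s[i] for i in idx)
                       for idx in combinations(range(len(s)), k))

    # Two different strings of length 4 with identical 2-decks:
    print(k_deck("0110", 2) == k_deck("1001", 2))   # True
    print("0110" == "1001")                         # False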
|
A Polynomial Time Recognition Algorithm for Perfect Graphs
Maria Chudnovsky, Princeton University
A graph is Berge if no induced subgraph is an odd cycle of length >3 or
the complement of one. It has long been an open question whether there is a
polynomial time algorithm to test if a graph is Berge. Recently in joint
work with Neil Robertson, Paul Seymour and Robin Thomas, we found a decomposition
theorem for Berge graphs, proving that any such graph either admits a
useful decomposition or falls into one of five basic classes. This led to a
proof of Berge's strong perfect graph conjecture, and it was naturally
expected to also lead to a polynomial time recognition algorithm for Berge
graphs, but that does not seem to be the case.
In November 2002 in joint work with Paul Seymour we finally found such an
algorithm. It does not use the decomposition theorem: we do not test
whether a graph has a certain structure, but rather test directly for odd
holes and antiholes. The proof of correctness does not use the decomposition
theorem either.
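By way of contrast with the polynomial-time test of the talk, the Berge property itself can be checked by exhaustive search on tiny graphs directly from the definition; a Python sketch of that brute force (emphatically not the talk's algorithm):

    from itertools import combinations

    def is_berge_bruteforce(n, edges):
        """Exponential-time check for tiny graphs: neither the graph nor its
        complement may contain an induced odd cycle of length >= 5."""
        adj = [[False] * n for _ in range(n)]
        for u, v in edges:
            adj[u][v] = adj[v][u] = True
        comp = [[(i != j) and not adj[i][j] for j in range(n)] for i in range(n)]

        def induces_hole(a, S):
            # S induces a chordless cycle iff every vertex of S has induced
            # degree exactly 2 and the induced subgraph is connected
            S = list(S)
            if any(sum(a[u][v] for v in S if v != u) != 2 for u in S):
                return False
            seen, stack = {S[0]}, [S[0]]
            while stack:
                u = stack.pop()
                for v in S:
                    if v not in seen and a[u][v]:
                        seen.add(v)
                        stack.append(v)
            return len(seen) == len(S)

        def has_odd_hole(a):
            return any(induces_hole(a, S)
                       for k in range(5, n + 1, 2)
                       for S in combinations(range(n), k))

        return not has_odd_hole(adj) and not has_odd_hole(comp)

    # A 5-cycle is its own odd hole, hence not Berge:
    print(is_berge_bruteforce(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]))  # False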
|
New degree bounds for polynomials with prescribed signs
Ryan O'Donnell, MIT
We consider the polynomial threshold function (PTF) problem: Given a boolean function f : {0,1}^n -> {0,1}, find an n-variate polynomial with as small a degree as possible which is positive on the points in {0,1}^n where f is 1 and negative on the points where f is 0. Our primary motivation comes from computational learning theory, where it is known that if every function in a class of functions has a PTF of degree d, then the class can be learned in time roughly n^d. Upper and lower degree bounds for PTFs are also useful for other problems in complexity theory.

We prove both new upper and lower bounds. Our new upper bounds are for the class of boolean formulas of small size. One consequence of our new upper bounds is the first known subexponential time learning algorithm for boolean formulas of superlinear size and superconstant depth.

Our lower bounds are the first new lower bounds for PTF degree since the work of Minsky and Papert in 1968. In particular, we show that the function given by the AND of two majority functions has PTF degree at least \Omega(log n / log log n). This is known to be nearly sharp. Our lower bounds are proved via a new constructive technique.
This is joint work with Rocco Servedio (Harvard).
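As a concrete instance of the definition, a brute-force Python check of the PTF condition together with the classical degree-1 representation of OR (the names and the threshold 1/2 are illustrative):

    from itertools import product

    def sign_represents(poly, f, n):
        """True iff poly is positive on every x in {0,1}^n with f(x)=1 and
        negative on every x with f(x)=0 (brute force over the cube)."""
        for x in product((0, 1), repeat=n):
            p = poly(x)
            if (f(x) == 1 and p <= 0) or (f(x) == 0 and p >= 0):
                return False
        return True

    # OR on n bits has a degree-1 PTF: sum(x) - 1/2.
    n = 4
    OR = lambda x: int(any(x))
    print(sign_represents(lambda x: sum(x) - 0.5, OR, n))   # True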
|
New Lattice Based Cryptographic Constructions
Oded Regev, Institute for Advanced Study
We introduce the use of Fourier analysis on lattices as an integral part of a lattice based construction. The tools we develop provide an elegant description of certain Gaussian distributions around lattice points. Our results include two cryptographic constructions which are based on the worst-case hardness of the unique shortest vector problem. The main result is a new public key cryptosystem whose security guarantee is considerably stronger than previous results (O(n^{1.5}) instead of O(n^7)). This provides the first alternative to Ajtai and Dwork's original 1996 cryptosystem. Our second result is a collision resistant hash function which, apart from improving the security in terms of the unique shortest vector problem, is also the first example of an analysis which is not based on Ajtai's iterative step. Surprisingly, the two results are derived from the same tool which presents two indistinguishable distributions on the segment [0,1). It seems that this tool can have further applications and as an example we mention how it can be used to solve an open problem related to quantum computation.
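As a loose illustration of what two such distributions on the segment [0,1) can look like, one can contrast the uniform distribution with Gaussian mass concentrated near the multiples of a hidden step; the Python sketch below uses toy parameters of my choosing and is not the construction from the paper:

    import numpy as np

    rng = np.random.default_rng(0)

    def uniform_samples(m):
        return rng.random(m)

    def periodic_gaussian_samples(m, num_steps=17, sigma=0.01):
        """Gaussian bumps around the multiples of 1/num_steps, folded into
        [0,1).  Toy parameters; the real construction ties the hidden step
        and the width to lattice parameters."""
        centers = rng.integers(0, num_steps, size=m) / num_steps
        return (centers + sigma * rng.normal(size=m)) % 1.0

    print(uniform_samples(5))
    print(periodic_gaussian_samples(5))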
|
A Constant-Factor Approximation Algorithm for the Multicommodity Rent-or-Buy Problem
Amit Kumar, Bell Labs
In this talk, I shall present the first constant-factor approximation
algorithm for network design with multiple commodities and economies of scale.
We consider the rent-or-buy problem, a type of multicommodity buy-at-bulk
network design in which there are two ways to install capacity on any
given edge. Capacity can be rented, with cost incurred on a
per-unit of capacity basis, or bought, which allows unlimited use
after payment of a large fixed cost. Given a graph and a set of
source-sink pairs, we seek a minimum-cost way of installing sufficient
capacity on edges so that a prescribed amount of flow can be sent
simultaneously from each source to the corresponding sink.
Recent work on buy-at-bulk network design has concentrated on the
special case where all sinks are identical; existing constant-factor
approximation algorithms for this special case make
crucial use of the assumption that all commodities ship flow to the
same sink vertex and do not obviously extend to the multicommodity
rent-or-buy problem. Prior to our work, the best heuristics for the
multicommodity rent-or-buy problem achieved only logarithmic
performance guarantees and relied on the machinery of relaxed
metrical task systems or metric embeddings.
By contrast, we solve the network design problem directly via a novel
primal-dual algorithm.
This is joint work with Anupam Gupta and Tim Roughgarden.
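The two installation options give each edge a simple cost function; a short Python sketch of that per-edge cost and of pricing a fixed routing (parameter names are illustrative):

    def edge_cost(flow, rent_per_unit, buy_cost):
        """Rent-or-buy cost of a single edge carrying `flow` units: rent
        capacity per unit, or pay a one-time fixed cost for unlimited
        capacity, whichever is cheaper."""
        return min(rent_per_unit * flow, buy_cost) if flow > 0 else 0.0

    def routing_cost(edge_flows, rent_per_unit, buy_cost):
        """Total cost of a given routing, summing the per-edge choice."""
        return sum(edge_cost(f, rent_per_unit, buy_cost) for f in edge_flows.values())

    # Three edges carrying 2, 10 and 0 units; rent 1 per unit, buy for 5.
    print(routing_cost({"e1": 2, "e2": 10, "e3": 0}, rent_per_unit=1.0, buy_cost=5.0))  # 7.0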
|
Expanders: A Powerful Weapon for Code Constructions
Venkatesan Guruswami, University of Washington
Expanders, which are sparse yet highly connected graphs with
additional pseudorandom properties, have found applications in
varied settings. In particular, they have been the main
combinatorial tool in the explicit constructions of
error-correcting codes with linear time encoding and decoding
algorithms. Recently, these constructions were improved to the
point where the rate (a measure of efficiency of the code) vs.
error-correction trade-off is near-optimal.
In this talk, I'll begin by presenting a brief overview of how
expanders are used in these constructions. I'll then talk about
some recent work which uses expanders and related combinatorial
tools to construct codes with *linear time* ``list decoding''
algorithms. Under list decoding, the decoder is permitted to
output a small *list* of candidate messages which must include the
transmitted message, and this enables meaningful error recovery
from much larger amounts of noise. Unlike the prior constructions
of list decodable codes which were algebraic (either explicitly or
in disguise), our construction is combinatorial, and in addition
to expanders, relies on spectral partitioning tools to find dense
clusters in graphs.
I'll also highlight some related future challenges in the area
of list decoding.
Based on joint works with Piotr Indyk.
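To pin down the list-decoding contract itself, here is a brute-force Python sketch (nothing like the linear-time algorithms of the talk): output every codeword within the error radius of the received word.

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    def list_decode(codewords, received, radius):
        """Brute-force list decoder: return all codewords within Hamming
        distance `radius`; the decoder may output several candidates, as
        long as the transmitted codeword is among them."""
        return [c for c in codewords if hamming(c, received) <= radius]

    code = ["0000", "0111", "1011", "1100"]
    print(list_decode(code, "0011", radius=1))   # ['0111', '1011']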
|
A new shortest path algorithm for real-weighted graphs
Seth Pettie, UT Austin
We present a new algorithm for the all-pairs shortest path problem on real-weighted graphs. Our algorithm improves on Dijkstra's classical shortest path algorithm -- the previous best -- and runs in time O(mn + n^2 log log n), where m and n are the number of edges and vertices, respectively.

The new ingredients in our algorithm are (1) a method for representing the dependencies between shortest paths emanating from nearby vertices and (2) a method for computing exact distances from approximate distances.
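For reference, the classical baseline is Dijkstra's algorithm run from every source, O(mn + n^2 log n) with Fibonacci heaps; the Python sketch below is that baseline with a binary heap (hence an extra log factor), not the new algorithm:

    import heapq

    def dijkstra(adj, source):
        """Single-source shortest paths with non-negative real weights.
        adj maps each vertex to a list of (neighbor, weight) pairs."""
        dist = {source: 0.0}
        heap = [(0.0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue                     # stale heap entry
            for v, w in adj[u]:
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist

    def all_pairs_shortest_paths(adj):
        """The classical baseline: one Dijkstra run per source."""
        return {s: dijkstra(adj, s) for s in adj}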
|
A PAC-MDL bound
John Langford, IBM
It turns out that every bound on the true error rate of a learned classifier is related to the communication complexity of the labels given the unlabeled data. I'll show this, and show a bound which uses the communication complexity directly to bound the error rate of a learned classifier. This "PAC-MDL" bound unifies VC, Occam's Razor, and PAC-Bayes bounds.
|
Basing Cryptography on Imperfect Randomness
Yevgeniy Dodis, New York University
Randomization is vital in cryptography: secret keys should be randomly generated and most cryptographic primitives (e.g., encryption) *must* be probabilistic. As a common abstraction, it is assumed that there is a source of truly random bits available to all the participants of the system. While convenient, this assumption is often highly unrealistic, and cryptographic systems have to be built based on *imperfect* sources of randomness. Remarkably, this fundamental problem has received little attention so far, despite the fact that the related question of simulating probabilistic (BPP) algorithms with imperfect random sources has a long and rich history.

In this work we initiate a quantitative study of the feasibility of building secure *cryptographic* primitives using imperfect random sources. Specifically, we concentrate on symmetric-key encryption and message authentication, where the shared secret key comes from an imperfect random source instead of being assumed truly random. In each case, we compare the class of "cryptographic" sources for the task at hand with the classes of "extractable" and "simulatable" sources, where: (1) "cryptographic" refers to sources for which the corresponding symmetric-key primitive can be built; (2) "extractable" refers to a very narrow class of sources from which one can extract nearly perfect randomness; and (3) "simulatable" refers to a very general class of weak random sources which are known to suffice for BPP simulation. For both encryption and authentication, we show that the corresponding cryptographic sources lie strictly in between extractable and simulatable sources, which implies that "cryptographic usage" of randomness is more demanding than the corresponding "algorithmic usage", but still does not require perfect randomness. Interestingly, cryptographic sources for encryption and authentication are also quite different from each other, which suggests that there might not be an elegant way to describe imperfect sources sufficient for "general cryptographic use".

If time permits, some follow-up work will be mentioned as well.
Joint work with Joel Spencer.
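The narrow "extractable" class is exemplified by an i.i.d. biased coin, from which perfect bits can be obtained by the classical von Neumann trick; a short Python sketch of that extractor (an illustration of extractability only, not a construction from the paper):

    def von_neumann_extract(bits):
        """Turn i.i.d. biased coin flips into unbiased bits: read the flips
        in pairs, output 0 for (0,1), 1 for (1,0), and discard (0,0) and (1,1).
        Works only for this very special (extractable) kind of source."""
        out = []
        for a, b in zip(bits[::2], bits[1::2]):
            if a != b:
                out.append(0 if (a, b) == (0, 1) else 1)
        return out

    print(von_neumann_extract([1, 1, 0, 1, 1, 0, 0, 0, 1, 0]))  # [0, 1, 1]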
|
Every-Case Complexity of the Distributed Minimum Spanning Tree Problem
Michael Elkin, IAS
We introduce a new complexity measure, **every-case complexity**.
This complexity measure is far more delicate than its
traditional worst- and average-case counterparts, but it is
applicable only to problems that can be solved in linear or
sublinear time. We motivate restricting our attention to these
problems by the great body of research that was conducted throughout
the last decade on algorithms with sublinear running time.
The focus of our research is on the every-case complexity of the
**minimum-weight spanning tree (MST)** problem in the
**distributed** model of computation.
In this model each vertex of the input n-vertex graph G
hosts a processor, and each processor has infinite computational power,
but its initial knowledge of the graph is very limited.
Specifically, each vertex v knows the identities of its neighbors,
and the weights of the edges that are incident to v.
The vertices are allowed to communicate through the edges of the graph.
The communication is synchronous, and occurs in **discrete rounds**.
The goal is to minimize the number of rounds of distributed computation,
which is henceforth referred to as the **running time**.
The MST problem is one of the most fundamental and well-studied
problems in the area of distributed computing. We prove tight upper and
lower bounds on the every-case complexity
of this problem, and identify explicit
graph parameters that reflect this complexity.
In particular, we devise a protocol for the MST problem
with significantly smaller running time than the previously best-known one.
The presentation is based on a very recent manuscript called
"Inapproximability and Every-Case Complexity of the
Distributed Minimum Spanning Tree Problem".
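For concreteness, a minimal Python skeleton of the synchronous message-passing model described above (simulation scaffolding of my own, not the MST protocol of the manuscript): in each round every vertex sends one message per incident edge and then updates its state from what it received.

    def run_synchronous(adj, states, send, receive, rounds):
        """Simulate the synchronous distributed model: adj[v] is the set of
        neighbors of v, states[v] is v's local state.  Each round, every
        vertex computes one message per incident edge (send) and then
        updates its state from the messages delivered to it (receive)."""
        for _ in range(rounds):
            inboxes = {v: [] for v in adj}
            for v in adj:
                for u in adj[v]:
                    inboxes[u].append((v, send(states[v], u)))
            for v in adj:
                states[v] = receive(states[v], inboxes[v])
        return states

    # Toy use: flood the minimum vertex id (not MST, just the model in action).
    adj = {0: {1}, 1: {0, 2}, 2: {1}}
    states = {v: v for v in adj}
    send = lambda state, neighbor: state
    receive = lambda state, inbox: min([state] + [m for _, m in inbox])
    print(run_synchronous(adj, states, send, receive, rounds=2))  # {0: 0, 1: 0, 2: 0}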
|
On the Impossibility of Dimension Reduction in l_1
Bo Brinkman, Princeton University
The Johnson-Lindenstrauss Lemma shows that any n points in Euclidean space (with distances measured by the l_2 norm) may be mapped down to O((log n)/eps^2) dimensions such that no pairwise distance is distorted by more than a (1+eps) factor. Determining whether such dimension reduction is possible in l_1 has been an intriguing open question. For a distortion of (1+eps), the best known lower bound on the dimensions was Omega(log n), and it was not known whether a mapping to O(log n) dimensions could be achieved while maintaining constant distortion.
Charikar and Sahai recently showed lower bounds for dimension
reduction in l_1 that can be achieved by linear projections.
We show strong lower bounds for general dimension reduction in
l_1. We give an explicit family of n points in l_1 such
that any embedding with distortion delta requires
n^{Omega(1/delta^2)} dimensions. This proves that there is no
analog of the Johnson-Lindenstrauss Lemma for l_1; in fact
embedding with any constant distortion requires n^{Omega(1)}
dimensions. Further, embedding the points into l_1 with
1+eps distortion requires n^{1/2-O(eps log(1/eps))} dimensions.
Our proof establishes this lower bound for shortest path metrics of
series-parallel graphs. We make extensive use of linear programming
and duality in devising our bounds. We expect that the tools and
techniques we develop will be useful for future investigations of
embeddings into l_1.
This is joint work with Moses Charikar.
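For contrast, the l_2 guarantee that the lower bound rules out for l_1 is realized by a random Gaussian projection; a small numpy sketch of a Johnson-Lindenstrauss style map (the constant 8 in the target dimension is a common choice, not taken from the paper):

    import numpy as np

    rng = np.random.default_rng(0)

    def jl_project(points, eps):
        """Map n points in R^d to roughly O(log n / eps^2) dimensions with a
        scaled Gaussian matrix; pairwise l_2 distances are preserved within
        a factor of 1 +/- eps with high probability."""
        n, d = points.shape
        k = int(np.ceil(8 * np.log(n) / eps ** 2))      # target dimension
        projection = rng.normal(size=(d, k)) / np.sqrt(k)
        return points @ projection

    points = rng.normal(size=(100, 1000))
    projected = jl_project(points, eps=0.25)
    ratio = (np.linalg.norm(projected[0] - projected[1]) /
             np.linalg.norm(points[0] - points[1]))
    print(ratio)   # close to 1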
|
Sublinear Geometric Algorithms
Ding Liu, Princeton University
We initiate an investigation of sublinear algorithms for geometric problems in two and three dimensions.
We give optimal algorithms for intersection detection
of convex polygons and polyhedra, point location
in two-dimensional Delaunay triangulations and Voronoi diagrams,
and ray shooting in convex polyhedra, all of which run in time
O(sqrt{n}), where n is the size of the input.
We also provide sublinear solutions for
the approximate evaluation of the volume of
a convex polytope and the length of the shortest path
between two points on the boundary.
Joint work with Bernard Chazelle and Avner Magen.
|