Princeton Seminars in Theoretical Computer Science, 
Fall 2002

(All events held at the Computer Science Department, 35 Olden St., unless stated otherwise)
                Schedule 
Sept 20 Manoj Prabhakaran, Princeton University
Concurrent Zero Knowledge with Logarithmic Round Complexity
Sept 27 Jiri Matousek, Charles University, Prague 
Unique-sink orientations of cubes
Oct 4 Tim Roughgarden, Cornell University
Selfish Routing and the Price of Anarchy
Oct 11 Subhash Khot, Princeton University
Hardness of Coloring 3-Colorable 3-Uniform Hypergraphs
Oct 18 Adam Klivans, Harvard University
Learning Intersections of Halfspaces
Oct 25 Avi Wigderson, IAS and Hebrew University
SL=L (well, almost...) and other rarely-wrong derandomizations
Nov 8 Santosh Vempala, MIT
Random Walks and Geometric Algorithms
Nov 22 Shai Halevi, IBM
The Disk-Sector Encryption Problem: Constructing Tweakable
Enciphering Modes that are Secure in the Sense of a Strong PRP

Nov 26 Eran Halperin, ICSI Berkeley    
Polylogarithmic Inapproximability
3 pm in CS 102. Note the unusual time and place!

Dec 6 Jon Feldman, MIT
Decoding Turbo Codes via Linear Programming

 

             Abstracts
Concurrent Zero Knowledge with Logarithmic Round Complexity

Manoj Prabhakaran, Princeton University

Joint work with Alon Rosen and Amit Sahai

Zero Knowledge (ZK) protocols are fascinating tools in cryptography,
which allow a Prover to convince a Verifier of the truth of a
statement, without letting her know anything beyond the fact that the
statement is true. Concurrent Zero Knowledge extends this
concept to a setting where many sessions of the protocol are running
concurrently, and the verifiers of the different sessions can collude
to extract knowledge from the provers.
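
For readers unfamiliar with the basic (stand-alone) notion, the toy sketch below runs the classic three-move zero-knowledge protocol for Graph Isomorphism due to Goldreich, Micali and Wigderson. It is only meant to make the Prover/Verifier interaction concrete; it is not the protocol of this work, which handles all of NP and remains secure under concurrent composition.

import random

def apply_perm(perm, graph):
    # perm: list with perm[v] = image of vertex v; graph: set of frozenset edges.
    return {frozenset({perm[u], perm[v]}) for u, v in map(tuple, graph)}

n = 5
G0 = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]}
pi = random.sample(range(n), n)          # the Prover's secret isomorphism
G1 = apply_perm(pi, G0)                  # public statement: G0 is isomorphic to G1

def one_round():
    # Prover commits to a fresh random relabeling of G0.
    sigma = random.sample(range(n), n)
    H = apply_perm(sigma, G0)
    # Verifier sends a random challenge bit.
    b = random.randrange(2)
    if b == 0:
        response = sigma                                  # opens H = sigma(G0)
    else:
        pi_inv = [0] * n
        for v in range(n):
            pi_inv[pi[v]] = v
        response = [sigma[pi_inv[v]] for v in range(n)]   # maps G1 onto H
    # Verifier's check; a cheating prover passes each round with probability
    # at most 1/2, and the verifier learns nothing beyond the statement's truth.
    return apply_perm(response, G0 if b == 0 else G1) == H

print(all(one_round() for _ in range(20)))   # the honest prover is always accepted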

Improving on previous constructions, we provide a concurrent ZK
protocol for proving membership in NP languages using \omega(\log n)
rounds (n being a security parameter). The properties of the protocol
are guaranteed so long as the number of sessions is polynomial in n.
This improves on previous work by Kilian et al., which uses
\omega(\log^2 n) rounds.

Like most of the previous constructions, ours is a "black-box" ZK
protocol. This essentially completes the characterization of the round
complexity of black-box concurrent ZK, as Canetti et al. have shown
that black-box concurrent ZK requires \tilde\Omega(\log n) rounds.

 

Unique-sink orientations of cubes

Jiri Matousek, Charles University, Prague

The $n$-cube is considered as a graph (with vertex set $\{0,1\}^n$). A
unique-sink orientation (USO) is an orientation of the edges of the
$n$-cube such that every face of the cube has exactly one sink (directed
cycles are allowed). Such orientations arise from several sources, such as
linear programs (considering a generic linear function on a polytope
isomorphic to the cube), certain linear complementarity problems, and
certain convex programs. Algorithms have been studied for finding the
global sink in a USO; the USO is specified by an oracle that, given a
vertex, returns the orientation of the edges incident to that vertex.
Upper and lower bounds for the complexity of such algorithms have recently
been given by Welzl, Szabo, and Schurr; improving them significantly is
the main challenge. The speaker has proved that the number of USOs is
$2^{\Theta(2^n \log n)}$. The number of acyclic USOs is easily seen to be
between $2^{\Omega(2^n)}$ and $2^{O(2^n\log n)}$; it would be nice to find
better bounds.
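
As a concrete illustration of the oracle model, the Python sketch below (with hypothetical names, not taken from the talk) builds the acyclic USO induced by a generic linear function on the $n$-cube and finds its global sink by naively following outgoing edges; this naive walk only illustrates the vertex-oracle interface and is not one of the algorithms whose query complexity is discussed above.

import random

n = 6
weights = [random.random() for _ in range(n)]   # a generic linear function

def value(v):
    # v is a vertex of the cube, given as a tuple in {0,1}^n.
    return sum(w for w, bit in zip(weights, v) if bit)

def oracle(v):
    # For each coordinate i, report whether the edge in direction i is
    # outgoing at v (i.e. flipping bit i decreases the objective).
    out = []
    for i in range(n):
        u = list(v); u[i] ^= 1
        out.append(value(tuple(u)) < value(v))
    return out

def find_sink(start):
    v, queries = tuple(start), 0
    while True:
        out = oracle(v); queries += 1
        if not any(out):
            return v, queries            # no outgoing edge: the global sink
        i = out.index(True)              # follow some outgoing edge
        u = list(v); u[i] ^= 1
        v = tuple(u)

sink, q = find_sink([1] * n)
print(sink, "found with", q, "vertex queries")   # sink is the all-zeros vertex here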

 

Selfish Routing and the Price of Anarchy

Tim Roughgarden, Cornell University

A central and well-studied problem arising in the management of a
large network is that of routing traffic to achieve the best possible
network performance. In many networks, it is difficult or even
impossible to impose optimal routing strategies on network traffic,
leaving network users free to act according to their own interests.
The result of local optimization by many selfish network users with
conflicting interests need not possess any type of global optimality;
hence, this lack of regulation carries the cost of decreased network
performance. In this talk, we will discuss methods for quantifying
the worst-possible loss in network performance arising from such
noncooperative behavior.

Includes joint work with Eva Tardos.
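
A worked instance of this loss is Pigou's classic two-link example, which already exhibits a price of anarchy of 4/3 for linear latencies; the short computation below (an illustration, not material from the talk) makes the numbers explicit.

def total_cost(x):
    # x = fraction of one unit of traffic routed on the link with latency l(x) = x;
    # the remaining 1 - x uses the parallel link with constant latency 1.
    return (1 - x) * 1 + x * x

nash_cost = total_cost(1.0)    # at equilibrium everyone takes the x-link (x <= 1 always)
opt_cost = min(total_cost(i / 10000) for i in range(10001))   # optimum splits near x = 1/2
print(nash_cost / opt_cost)    # the price of anarchy: 4/3 for linear latencies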

 

Hardness of Coloring 3-Colorable 3-Uniform Hypergraphs

Subhash Khot, Princeton University

                 We consider the problem of coloring a 3-colorable
3-uniform hypergraph. In the minimization version of this problem,
given a 3-colorable 3-uniform hypergraph, one seeks an algorithm
to color the hypergraph with as few colors as possible. We show
that it is NP-hard to color a 3-colorable 3-uniform hypergraph with
constantly many colors. In fact, we show the stronger result that it
is NP-hard to distinguish whether a 3-uniform hypergraph with n
vertices is 3-colorable or contains no independent set of size
\delta n, for an arbitrarily small constant \delta > 0. In the
maximization version of the problem, given a 3-colorable 3-uniform
hypergraph, the goal is to color the vertices with 3 colors so as
to maximize the number of non-monochromatic edges. We show that it
is NP-hard to distinguish whether a 3-uniform hypergraph is
3-colorable or every coloring of the vertices with 3 colors leaves
at most an 8/9+\epsilon fraction of the edges non-monochromatic,
where \epsilon > 0 is an arbitrarily small constant. This result is
tight, since assigning a random color independently to every vertex
makes an 8/9 fraction of the edges non-monochromatic in expectation.
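
The 8/9 figure can be checked directly: under a uniformly random 3-coloring, a fixed 3-element edge is monochromatic with probability 3/27 = 1/9, as the short enumeration below confirms.

from itertools import product

colorings = list(product(range(3), repeat=3))            # colors of the edge's 3 vertices
non_mono = sum(1 for c in colorings if len(set(c)) > 1)  # not all the same color
print(non_mono, "/", len(colorings))                      # prints 24 / 27, i.e. 8/9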

                These results are obtained via a new construction of a
probabilistically checkable proof system (PCP) for NP. We develop
a new construction of the "PCP Outer Verifier". An important feature
of this construction is "smoothening of the projection maps". Our
techniques also lead to simpler proofs of Hastad's PCP constructions
and several other constructions based on it.

                Dinur, Regev and Smyth independently obtained a better
result for hardness of 3-uniform hypergraph coloring. They showed
that it is NP-hard to color a 2-colorable 3-uniform hypergraph with
constantly many colors. In the "good case", the hypergraph they
construct is 2-colorable and hence their result is stronger. In the
"bad case" however, the hypergraph we construct has a stronger property,
namely, it does not even contain an independent set of size \delta n.


Learning Intersections of Halfspaces

Adam Klivans, Harvard University

The problem of learning an intersection of halfspaces is one of the
central challenges of computational learning theory. Intersections of
halfspaces form a powerful concept class: arbitrary convex bodies and DNF
formulae can be viewed as special cases.

We will give two new results for learning arbitrary functions of
halfspaces from random examples only:

1. An arbitrary function of $k$ halfspaces can be learned with respect to
the uniform distribution to accuracy $\epsilon$ in time
$n^{O(k^{2}/\epsilon^{2})}$. This gives the first polynomial-time
algorithm for learning the intersection of a constant number of
halfspaces to within any constant error parameter.

2. An arbitrary function of $k$ halfspaces where the sum of the
coefficients of each halfspace is bounded by $w$ can be learned
with respect to any distribution in time $n^{O(k^{2} \log w)}$
(and time polynomial in the accuracy parameter $\epsilon$). This
gives the first quasipolynomial time algorithm for learning an
intersection of a constant number of halfspaces with polynomially
bounded weights under any distribution.

The results rely on new, complexity theoretic representations of
halfspaces involving Fourier analysis and numerical approximation.
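
To convey the flavor of result 1, the Python sketch below (assuming numpy is available; all names and parameter choices are illustrative, and this is not the paper's algorithm) labels uniform examples from $\{-1,1\}^n$ by an intersection of two halfspaces and learns it by least-squares regression over low-degree monomials followed by a sign, i.e. the generic "low-degree" approach that such Fourier-approximation results enable.

import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n, degree, m_train, m_test = 10, 3, 4000, 1000

# Target concept: the intersection of two random halfspaces through the origin.
w1, w2 = rng.normal(size=n), rng.normal(size=n)
def target(x):
    return np.where((x @ w1 >= 0) & (x @ w2 >= 0), 1.0, -1.0)

def monomial_features(x):
    # All monomials (products of distinct coordinates) of degree <= `degree`.
    cols = [np.ones(len(x))]
    for d in range(1, degree + 1):
        for idx in combinations(range(n), d):
            cols.append(np.prod(x[:, list(idx)], axis=1))
    return np.column_stack(cols)

x_train = rng.choice([-1.0, 1.0], size=(m_train, n))
x_test = rng.choice([-1.0, 1.0], size=(m_test, n))
coef, *_ = np.linalg.lstsq(monomial_features(x_train), target(x_train), rcond=None)
pred = np.sign(monomial_features(x_test) @ coef)
print("test accuracy:", np.mean(pred == target(x_test)))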

Joint work with Ryan O'Donnell (MIT) and Rocco Servedio (Harvard)

 

SL=L (well, almost...) and other rarely-wrong derandomizations

Avi Wigderson, IAS and Hebrew University

For every e>0, we present a log-space deterministic algorithm
that correctly decides undirected graph connectivity
on all but at most exp(n^e) of all n-vertex graphs.
The same holds for every problem in Symmetric Log-space (SL).

Using a plausible complexity assumption 
(which is not known to imply BPP=P)
we show that, for every e>0, each problem in BPP has
a polynomial-time deterministic algorithm that errs
on at most exp(n^e) of all n-bit long inputs.
The same holds if we replace BPP by MA and deterministic by
nondeterministic algorithms.

The results are obtained by exploring which probabilistic
algorithms can be derandomized (in the sense above) by generating
their coin tosses deterministically from the input itself.
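
For concreteness, the SL-complete problem here is undirected s-t connectivity, whose classical randomized log-space algorithm is simply a polynomially long random walk; the toy sketch below shows that textbook algorithm (it is not the paper's construction, which instead generates the walk's coin tosses from the input graph).

import random

def undirected_st_connected(adj, s, t, trials=3):
    # adj: adjacency lists of an undirected graph; decides s-t connectivity
    # correctly with high probability (one-sided error).
    n = len(adj)
    walk_length = 8 * n ** 3            # a polynomially long walk suffices
    for _ in range(trials):
        v = s
        for _ in range(walk_length):
            if v == t:
                return True
            if adj[v]:
                v = random.choice(adj[v])
    return False                        # "probably not connected"

# A 5-cycle on vertices 0..4, with vertex 5 isolated.
adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0], 5: []}
print(undirected_st_connected(adj, 0, 3), undirected_st_connected(adj, 0, 5))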

Joint work with Oded Goldreich

 

Random Walks and Geometric Algorithms

Santosh Vempala, MIT     

Random walks have intrigued us for a long time. In recent years, they have also played a crucial role in the discovery of polynomial-time algorithms. The talk will illustrate this using random walks in Euclidean space, showing how they lead to a simple algorithm for the fundamental problem of convex optimization. Along the way, we will describe some engaging ways to walk randomly and highlight their geometry.
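
One of the walks alluded to above is the ball walk for sampling from a convex body given only by a membership oracle; the sketch below is a simplified lazy version with a box proposal and purely illustrative parameters, not the theoretically tuned sampler used inside the optimization algorithms from the talk.

import random

def lazy_ball_walk(inside, x0, steps=20000, delta=0.05):
    # inside: membership oracle for the convex body; x0: an interior start point.
    # Propose a nearby point and move only if it stays inside the body.
    x = list(x0)
    for _ in range(steps):
        proposal = [xi + random.uniform(-delta, delta) for xi in x]
        if inside(proposal):
            x = proposal
    return x

# Example: an approximately uniform point from the unit Euclidean ball in R^5.
inside_ball = lambda p: sum(c * c for c in p) <= 1.0
print(lazy_ball_walk(inside_ball, [0.0] * 5))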

 

The Disk-Sector Encryption Problem: Constructing Tweakable
Enciphering Modes that are Secure in the Sense of a Strong PRP

Shai Halevi, IBM

We describe block-cipher modes of operation that turn an $n$-bit block
cipher into a tweakable enciphering scheme that acts on sectors of $mn$
bits, where $m \ge 2$. When the underlying block cipher is secure in the
sense of a strong pseudorandom permutation (PRP) our schemes are secure
in the sense of a variable-input-length, tweakable, strong PRP. Such an
object can be used to encipher the sectors of a disk, in-place, offering
security as good as can be obtained in this setting.
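
To make the object concrete, the sketch below shows the interface of such a scheme: a length-preserving permutation on whole sectors, indexed by a key and a per-sector tweak, together with its inverse. The toy instantiation (a 4-round Feistel network keyed via SHA-256) is purely illustrative, carries no security claim, and is not the CMC or EME construction described next.

import hashlib

def _round_fn(key, tweak, rnd, data, out_len):
    # A hash-based keyed round function: out_len bytes derived from
    # (key, tweak, round index, data).
    stream, counter = b"", 0
    while len(stream) < out_len:
        stream += hashlib.sha256(
            key + tweak + bytes([rnd]) + counter.to_bytes(4, "big") + data
        ).digest()
        counter += 1
    return stream[:out_len]

def _xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encipher(key, tweak, sector):
    # Length-preserving: 4 balanced Feistel rounds over the whole sector.
    assert len(sector) % 2 == 0
    half = len(sector) // 2
    left, right = sector[:half], sector[half:]
    for rnd in range(4):
        left, right = right, _xor(left, _round_fn(key, tweak, rnd, right, half))
    return left + right

def decipher(key, tweak, sector):
    # The inverse permutation: undo the rounds in reverse order.
    half = len(sector) // 2
    left, right = sector[:half], sector[half:]
    for rnd in reversed(range(4)):
        left, right = _xor(right, _round_fn(key, tweak, rnd, left, half)), left
    return left + right

key, tweak = b"k" * 16, (42).to_bytes(8, "big")    # e.g. tweak = sector index
sector = bytes(range(256)) * 2                     # a 512-byte disk sector
ct = encipher(key, tweak, sector)
assert len(ct) == len(sector) and decipher(key, tweak, ct) == sector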

Our first scheme, CMC, makes a pass of CBC encryption, a masking step,
and then a pass of CBC decryption. Our second scheme, EME, does a pass
of modified-ECB encryption, a masking step, and then a pass of
modified-ECB decryption. This second mode is parallelizable. Modes CMC
and EME are faster in software and simpler than any known instantiation
of Naor and Reingold's hash-encrypt-hash approach.

Besides proving the security of our schemes we initiate a more general
investigation of tweakable enciphering schemes, considering issues such
as non-malleability, the addition of tweaks, and extending the domain
of inputs to arbitrary bit strings.

 

Polylogarithmic Inapproximability

Eran Halperin, ICSI Berkeley

We settle the following gap in the area of approximation 
algorithms for NP-hard optimization problems.
Several natural problems are known to have algorithms achieving
approximation ratios that are polylogarithmic in the input size.
However, none of them was previously known to have a hardness of
approximation result that excludes the possibility of a logarithmic
approximation.

The class of such problems includes Bandwidth, Pathwidth,
Group-Steiner-Tree, Job-Shop-Scheduling, and Min-Bisection.
We bridge this gap by providing the first polylogarithmic
intractability results; specifically, we focus on the Group-Steiner-Tree problem,
but our proofs immediately extend to a few related problems.

We will start by describing a polylogarithmic lower bound
on the integrality ratio of a linear programming relaxation
for Group-Steiner-Tree.

This lower bound is almost tight with the known algorithmic upper bound
of Garg, Konjevod and Ravi [SODA '98],
yielding the first polylogarithmic integrality ratio for a natural
relaxation.
(This part is joint work with Guy Kortsarz, Robert Krauthgamer, Aravind
Srinivasan and Nan Wang [SODA '03].)
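
For reference, a standard cut-based LP relaxation of Group-Steiner-Tree of the kind analyzed by Garg, Konjevod and Ravi is the following (the relaxation studied above may be stated equivalently, e.g. via flows). For a graph $G=(V,E)$ with edge costs $c_e$, root $r$, and groups $g_1,\dots,g_k \subseteq V$:

minimize    $\sum_{e \in E} c_e x_e$
subject to  $\sum_{e \in \delta(S)} x_e \ge 1$   for every $S \subseteq V$ with $g_i \subseteq S$ for some $i$ and $r \notin S$,
            $0 \le x_e \le 1$                    for every edge $e$.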

We will then describe the first polylogarithmic hardness of approximation
result, which is nearly tight with the aforementioned algorithm of Garg et al.
Interestingly, the reduction proving this result involves a recursive
invocation of the reduction to Set-Cover due to Lund and Yannakakis [STOC '93],
but the analysis draws from the aforementioned lower bound on the
integrality ratio. (This part is joint work with Robert Krauthgamer [Manuscript].)

 

Decoding Turbo Codes via Linear Programming 

Jon Feldman, MIT Laboratory for Computer Science 

Turbo codes, discovered in 1993 by Berrou, Glavieux and Thitimajshima, revolutionized the field of coding theory by performing extremely close to Shannon capacity. These codes feature a very simple encoder, and a decoder based on Pearl's "belief propagation" algorithm, a tool commonly used in artificial intelligence. This family of codes has been studied intensely for the last ten years, and has also inspired a "rediscovery" of low-density parity-check (LDPC) codes, first studied in 1962 by Gallager. 

However, the question of why these codes perform so well is still far from being answered. The principal difficulty lies with the decoder, whose behavior is quite unpredictable. Belief propagation is a general iterative heuristic that computes an estimate of the marginal probabilities in a graph-based model. This heuristic may not even converge, and even when it does converge, the quality of its output has no provable guarantees.

In this talk, we introduce a new approach to decoding Turbo codes that uses linear programming (LP) relaxation. We discuss the general approach of using LP relaxation for decoding, and how this problem differs from more conventional optimization problems. A decoder is given in detail for the case of Repeat-Accumulate (RA) codes, a simple class of turbo codes. For the binary symmetric channel, where each bit of the codeword is flipped with probability $p$, we give an exact characterization of the noise patterns that cause decoding error. Using this characterization, we give an explicit construction of a rate-1/2 RA code (parameterized by its length $n$), and prove an inverse-polynomial (in $n$) bound on the probability of decoding failure. We will also discuss recent extensions to the case of LDPC codes, and connections between the LP decoder and various forms of the belief propagation algorithm.
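
To convey the flavor of the LP relaxation approach, the sketch below (assuming numpy and scipy are available) decodes a tiny parity-check code rather than an RA code: maximum-likelihood decoding on the BSC is relaxed to a linear program over the local "parity polytope" constraints used in the LDPC extension mentioned above, solved with an off-the-shelf LP solver, and rounded. The RA-code decoder from the talk uses a flow-based polytope instead; this is only an illustration.

from itertools import combinations
import numpy as np
from scipy.optimize import linprog

# Parity-check matrix of the [7,4] Hamming code (rows = parity checks).
H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0, 1, 0],
              [1, 0, 1, 1, 0, 0, 1]])
m, n = H.shape

def lp_decode(y):
    # ML decoding on the BSC amounts to minimizing sum_i (1 - 2*y_i) f_i over
    # codewords f; relax f to the polytope cut out by the local parity constraints.
    c = 1.0 - 2.0 * np.asarray(y, dtype=float)
    A_ub, b_ub = [], []
    for j in range(m):
        nbrs = [i for i in range(n) if H[j, i]]
        # For every odd-sized subset S of check j's neighborhood:
        #   sum_{i in S} f_i - sum_{i in N(j) \ S} f_i <= |S| - 1.
        for size in range(1, len(nbrs) + 1, 2):
            for S in combinations(nbrs, size):
                row = np.zeros(n)
                row[list(S)] = 1.0
                row[[i for i in nbrs if i not in S]] = -1.0
                A_ub.append(row)
                b_ub.append(size - 1)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, 1)] * n, method="highs")
    return np.round(res.x).astype(int)

# Transmit the all-zeros codeword; the channel flips one bit.
print(lp_decode([0, 0, 0, 0, 1, 0, 0]))   # recovers the all-zeros codeword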

This is joint work with David R. Karger and Martin J. Wainwright.

 
