Hartmut Klauck, Centre for Quantum Technologies, Nanyang Technological University, Singapore
An introduction to lower bound methods in
communication complexity
Two players, Alice and Bob, want to cooperatively compute a function f(x,y).
Alice knows x, Bob knows y. How much communication is needed?
The communication model
Equality: EQ(x,y)=1 iff x=y
Disjointness: DISJ(x,y)=1 iff x and y are disjoint sets
Inner Product mod 2: IP(x,y)=1 iff Σᵢ xᵢ∧yᵢ is odd
Representation as matrices: M(x,y) contains the entries f(x,y)
Examples, the communication matrix
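The three example functions and their communication matrices can be sketched in a few lines of Python (a minimal illustration assuming numpy; the helper name `comm_matrix` is my own):

```python
import numpy as np
from itertools import product

def comm_matrix(f, n):
    """Communication matrix M with M[x, y] = f(x, y), rows/columns
    indexed by the 2^n possible inputs of each player."""
    xs = list(product([0, 1], repeat=n))
    return np.array([[f(x, y) for y in xs] for x in xs], dtype=int)

n = 3
EQ   = comm_matrix(lambda x, y: int(x == y), n)                                 # identity matrix
DISJ = comm_matrix(lambda x, y: int(all(a & b == 0 for a, b in zip(x, y))), n)  # disjoint sets
IP   = comm_matrix(lambda x, y: sum(a & b for a, b in zip(x, y)) % 2, n)        # inner product mod 2
```

Inputs are bit strings viewed as characteristic vectors of sets, so EQ gives the identity matrix and DISJ's 1-entries are the pairs avoiding a common 1-coordinate.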
The set of inputs that share the same message sequence forms a combinatorial rectangle in the communication matrix.
Proof sketch: Alice's first message depends on her input only and partitions the rows. Then Bob's message partitions the columns, etc.
The messages partition the communication matrix into combinatorial rectangles
A key insight
For any integer matrix M the partition number is the minimum number of monochromatic rectangles needed to partition M
Clearly: D(M) ≥ log part(M).
The covering number is the minimum number of rectangles needed to cover the entries of M with monochromatic rectangles.
This corresponds to nondeterministic protocols.
The partition number/coverings
D(EQ)=n+1. Consider the 1-inputs (x,x). No two inputs (x,x) and (y,y) with x≠y can be in the same combinatorial rectangle; otherwise (x,y) is also in that rectangle!
Hence we need at least 2^n 1-rectangles to cover the 1-inputs of EQ.
On the other hand, 2n rectangles are enough to cover the 0-inputs of EQ.
Example
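For tiny n this fooling-set argument can be checked by brute force: every 1-monochromatic rectangle of EQ is a single diagonal cell (a sketch in plain Python; full enumeration of rectangles is only feasible for very small n):

```python
from itertools import product, chain, combinations

n = 2
N = 2 ** n
inputs = list(product([0, 1], repeat=n))
M = [[int(x == y) for y in inputs] for x in inputs]  # EQ matrix

def nonempty_subsets(k):
    return chain.from_iterable(combinations(range(k), r) for r in range(1, k + 1))

# A rectangle A x B is 1-monochromatic iff all its entries are 1.
# For EQ this forces A = B = {x}: if (x,x) and (y,y) were both inside,
# the 0-entry (x,y) would be too.
ones_rects = [(A, B) for A in nonempty_subsets(N) for B in nonempty_subsets(N)
              if all(M[i][j] == 1 for i in A for j in B)]
assert all(A == B and len(A) == 1 for A, B in ones_rects)
assert len(ones_rects) == N  # hence 2^n rectangles are needed for the 1-inputs
```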
It is known that log part(f) cannot be too far from D(f) for Boolean f
But part(f) is not always easy to determine. We will consider different relaxations of part(f) that are easier to calculate.
Lower bound methods
We can relax the partition requirement for Boolean functions
If M can be partitioned into k 1-rectangles, then M can be written as the sum of k rank-1 matrices, i.e., rank(M) ≤ k.
Examples: rank(EQ)=2^n, rank(DISJ)=2^n, rank(IP)=2^n−1
Relaxation 1: the rank bound
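These ranks can be confirmed numerically for small n (a sketch assuming numpy; rank over the reals):

```python
import numpy as np
from itertools import product

n = 3
xs = list(product([0, 1], repeat=n))

EQ   = np.array([[int(x == y) for y in xs] for x in xs])
DISJ = np.array([[int(all(a & b == 0 for a, b in zip(x, y))) for y in xs] for x in xs])
IP   = np.array([[sum(a & b for a, b in zip(x, y)) % 2 for y in xs] for x in xs])

# DISJ is the n-fold Kronecker power of [[1,1],[1,0]], hence invertible;
# IP has an all-zero row (x = 0), hence rank 2^n - 1.
ranks = [int(np.linalg.matrix_rank(A)) for A in (EQ, DISJ, IP)]
print(ranks)  # [8, 8, 7] for n = 3, i.e. 2^n, 2^n, 2^n - 1
```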
Let M be a Boolean matrix. We know that D(M) ≥ log rank(M). Conjecture: D(M) ≤ poly(log rank(M)). The best known upper bound is rank(M). There is a polynomial gap:
D(f)=Ω(n), log rank(f)=n^0.61
Conjecture: quadratic gap
The log rank conjecture
Observation: 1-rectangles are rank one matrices with nonnegative entries only
prank(M) is the minimum k such that M can be written as the sum of k rank-1 matrices with nonnegative entries.
Clearly D(M) ≥ log prank(M) for all Boolean M. Now we get a bound that is polynomially tight:
D(M) ≤ O(log prank(M) · log prank(J−M)) for all Boolean M,
where J is the all-ones matrix.
The nonnegative rank
log prank(M) ≤ poly(log rank(M)) for all Boolean M
Every Boolean m×n rank-r matrix M has a monochromatic submatrix of size mn/2^polylog(r)
Every Boolean m×n rank-r matrix M has a submatrix of size mn/2^polylog(r) that has rank < .99r
The rank conjecture has also been related to some open problems in additive combinatorics
Different formulations of the conjecture
Computing prank is NP-hard. It always gives polynomially tight bounds, but these are hard to show.
Problems with prank
In general it is hard to estimate the partition number (the number of rectangles in a partition of the communication matrix).
Idea: write it as an integer program, relax it to a linear program, and estimate via the dual. The dual is a maximization problem!
Relaxation 2: the rectangle bound
Consider the set R of all 1-monochromatic rectangles in M.
Every R ∈ R gets a weight w_R ∈ {0,1}.
Minimize Σ_R w_R
such that for all x,y with f(x,y)=1: Σ_{R: (x,y)∈R} w_R = 1
(implicit: for all x,y with f(x,y)=0: Σ_{R: (x,y)∈R} w_R = 0)
The optimum is the partition number.
The 0-1-program
R: the set of all 1-monochromatic rectangles in M. Every R ∈ R gets a nonnegative real weight w_R.
Minimize Σ_R w_R
such that for all x,y with f(x,y)=1: Σ_{R: (x,y)∈R} w_R = 1
(implicit: for all x,y with f(x,y)=0: Σ_{R: (x,y)∈R} w_R = 0)
The optimum of this LP lower bounds the partition number.
The linear program
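For a tiny instance the LP can be solved explicitly. The sketch below (assuming scipy is available) sets up the covering LP for EQ with n=2 by enumerating all 1-monochromatic rectangles:

```python
import numpy as np
from itertools import product, chain, combinations
from scipy.optimize import linprog

n = 2
inputs = list(product([0, 1], repeat=n))
N = len(inputs)
M = np.array([[int(x == y) for y in inputs] for x in inputs])  # EQ matrix

def nonempty_subsets(k):
    return list(chain.from_iterable(combinations(range(k), r) for r in range(1, k + 1)))

# All 1-monochromatic rectangles A x B of M.
rects = [(A, B) for A in nonempty_subsets(N) for B in nonempty_subsets(N)
         if all(M[i, j] == 1 for i in A for j in B)]

# LP: minimize sum_R w_R subject to sum_{R containing (x,y)} w_R = 1
# for every 1-input (x,y), with w_R >= 0.
ones = [(i, j) for i in range(N) for j in range(N) if M[i, j] == 1]
A_eq = np.array([[1.0 if (i in A and j in B) else 0.0 for (A, B) in rects]
                 for (i, j) in ones])
res = linprog(c=np.ones(len(rects)), A_eq=A_eq, b_eq=np.ones(len(ones)),
              bounds=(0, None))
print(round(res.fun))  # 4: for EQ each diagonal 1 needs its own rectangle
```

For EQ the LP optimum coincides with the integral optimum 2^n, since every 1-monochromatic rectangle is a single diagonal cell.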
A variant of the LP lower bounds the (one-sided) nondeterministic CC:
Minimize Σ_R w_R
such that for all x,y with f(x,y)=1: Σ_{R: (x,y)∈R} w_R ≥ 1
Denote the optimal value B(M). Use the max of B(M), B(J−M) to show lower bounds on deterministic CC. Then: D(M) ≤ O(log B(M) · log B(J−M)) + log² n
Bounds for D(M) obtained this way are at most quadratically smaller than D(M).
Comments
In the dual there is one real variable φ_{x,y} for every input x,y:
Maximize Σ_{x,y} φ_{x,y}
such that for all 1-chromatic R: Σ_{(x,y)∈R} φ_{x,y} ≤ 1
In other words, put weights on inputs to "balance" the weights on each 1-chromatic rectangle.
For the dual of the nondeterministic LP bound the variables must have nonnegative values.
The dual
Maximize Σ_{x,y} φ_{x,y}
such that for all 1-chromatic R: Σ_{(x,y)∈R} φ_{x,y} ≤ 1
Suppose that all φ_{x,y} ≥ 0. After rescaling, the φ_{x,y} can be regarded as a probability distribution. The scaling factor is the maximum size of the matrices that are 1-monochromatic under the distribution.
The dual
Equality: weights φ_{x,x} = 1. The only 1-monochromatic rectangles contain exactly one input (x,x). Thus B(EQ) ≥ 2^n.
Inner Product modulo 2: consider f(x,y)=1−IP(x,y). 1-chromatic rectangles A×B satisfy A ⊥ B. Hence dim(A)+dim(B) ≤ n ⇒ |A|·|B| ≤ 2^n. All 1-inputs get weight 1/2^n.
There are ≥ 2^{2n−1} 1-inputs. Hence B(f) ≥ 2^n/2.
Some examples
Disjointness: 1-inputs satisfy Σᵢ xᵢ∧yᵢ = 0. There are 3^n 1-inputs. In 1-chromatic rectangles A×B every x ∈ A is disjoint from every y ∈ B, hence still |A|·|B| ≤ 2^n.
Give each 1-input weight 1/2^n.
We get the bound B(DISJ) ≥ 3^n/2^n.
Some examples
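The two facts this weighting rests on (3^n 1-inputs, and |A|·|B| ≤ 2^n for 1-monochromatic rectangles) can be checked by brute force for small n, a minimal sketch in plain Python:

```python
from itertools import product

n = 3
inputs = list(product([0, 1], repeat=n))

# The 1-inputs of DISJ: each coordinate pair (x_i, y_i) must avoid (1,1),
# leaving 3 choices per coordinate, hence 3^n 1-inputs.
ones = [(x, y) for x in inputs for y in inputs
        if all(a & b == 0 for a, b in zip(x, y))]
assert len(ones) == 3 ** n

# Every 1-monochromatic rectangle is contained in a maximal one of the
# form P(S) x P(complement of S); such a rectangle has 2^|S| * 2^(n-|S|)
# = 2^n cells, so weight 1/2^n per 1-input gives each 1-monochromatic
# rectangle total weight at most 1, while the total weight is 3^n / 2^n.
for S in product([0, 1], repeat=n):
    A = [x for x in inputs if all(a <= s for a, s in zip(x, S))]
    B = [y for y in inputs if all(b <= 1 - s for b, s in zip(y, S))]
    assert len(A) * len(B) == 2 ** n
```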
Recall the primal program:
Minimize Σ_R w_R
such that for all x,y with f(x,y)=1: Σ_{R: (x,y)∈R} w_R = 1
We don't allow (x,y) to be "covered too much". In the dual this means we can use negative weights φ_{x,y}.
This makes it easier to satisfy the constraints. We have not used that in our examples.
The partition constraint
Promise-Nondisjointness: f(x,y)=0 if Σᵢ xᵢ∧yᵢ = 0,
f(x,y)=1 if Σᵢ xᵢ∧yᵢ = 1,
otherwise f is undefined. The LP for f with the constraint Σ_{R: (x,y)∈R} w_R ≥ 1 has optimum ≤ n. Instead use the following LP:
Minimize Σ_R w_R
such that for all x,y with f(x,y)=1: Σ_{R: (x,y)∈R} w_R = 1; for all x,y with f(x,y)=0: Σ_{R: (x,y)∈R} w_R = 0; for all other x,y: Σ_{R: (x,y)∈R} w_R ≤ 1
Another example
We have to exhibit a solution to the dual. We should put positive weights on inputs with intersection size 1 and negative weights on inputs with intersection size 2.
Weight 0 elsewhere.
Example cont.
The following fact is useful [Razborov 92]. Let μ_k denote the uniform distribution on (x,y) with |x∩y|=k and |x|=|y|=n/4. Then for all large enough rectangles R=A×B: μ_1(R) ≥ (1−ε)μ_0(R).
This means all large rectangles are corrupted. Similarly μ_2(R) ≥ (1−ε)·μ_1(R) for all 1-chromatic R. Hence putting (1−ε) times the weight on inputs (x,y) with intersection size 2 as on inputs with intersection size 1 is enough to satisfy the constraints for all large 1-chromatic rectangles.
The constraints also hold for small rectangles if the weights are not too large.
The total weight is 2^Ω(n).
Example cont.
Readily generalizes to bounded-error protocols.
Can be generalized to deal with quantum protocols.
The proof consists of exhibiting a dual solution.
For randomized protocols: use constraints ≥ 1−ε and ≤ 1 in the primal program.
Advantages of the rectangle bound
Primal: minimize Σ_R w_R
such that for all x,y with f(x,y)=1: Σ_{R: (x,y)∈R} w_R ≥ 1−ε; for all x,y with f(x,y)=0: Σ_{R: (x,y)∈R} w_R ≤ ε
This is the rectangle/corruption bound
Example
Razborov showed that there is a distribution on inputs such that for all large R the fraction of 0-inputs is at least ε times the fraction of 1-inputs.
This corresponds to a solution of the dual
Example DISJ
Previously we bounded the size of monochromatic rectangles
In the bounded error scenario we want to bound the size of almost monochromatic rectangles
Often this is much harder! What about the bias? That is often easier to bound,
but usually not good enough.
Relaxation 3: Discrepancy
Fix a distribution μ on inputs. Then the discrepancy of a rectangle R is
|μ(R ∩ f⁻¹(1)) − μ(R ∩ f⁻¹(0))|
Disc(f) is the log of the inverse of the minimum (over μ) of the maximum (over R) of the above. It is quite easy to show that Disc(f) is a lower bound on D(f), even for randomized, quantum, and (weakly) unbounded-error protocols.
Discrepancy
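For very small n the discrepancy of IP under the uniform distribution can be computed by brute force over all rectangles; a sketch assuming numpy (the bound 2^{−n/2} is Lindsey's lemma):

```python
import numpy as np
from itertools import product, chain, combinations

n = 2
xs = list(product([0, 1], repeat=n))
N = len(xs)

# Sign matrix of IP: entry (-1)^{<x,y>}; under the uniform distribution
# the discrepancy of a rectangle A x B is |sum of its entries| / 2^{2n}.
S = np.array([[(-1) ** (sum(a & b for a, b in zip(x, y)) % 2) for y in xs]
              for x in xs])

def nonempty_subsets(k):
    return chain.from_iterable(combinations(range(k), r) for r in range(1, k + 1))

disc = max(abs(S[np.ix_(list(A), list(B))].sum())
           for A in nonempty_subsets(N) for B in nonempty_subsets(N)) / N ** 2
print(disc)
assert disc <= 2 ** (-n / 2) + 1e-9  # Lindsey's lemma: disc(IP) <= 2^{-n/2}
```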
Disc(IP)=Ω(n): all rectangles are almost balanced.
Disc(DISJ)=O(log n): the nondeterministic complexity is small, hence large monochromatic rectangles exist.
The method fails to capture either randomized or quantum communication complexity
Examples
Revisit the result about inner product: indeed it is hard to compute IP even with very small bias, i.e., error 1/2 − 2^{−εn}.
BUT many functions are close to IP even if their own discrepancy is small.
“Generalized” or “Smooth” discrepancy
What to do?
Take the function Maj(x,y)=1 iff Σᵢ xᵢ∧yᵢ ≥ n/2. It is easy to see that Disc(Maj)=O(log n). Nevertheless Maj is close enough to IP to inherit the lower bound Ω(n) for quantum protocols.
An Idea
Every rectangle R gets a real weight w_R.
Minimize Σ_R |w_R|
such that for all x,y with f(x,y)=1: Σ_{R: (x,y)∈R} w_R ∈ [1−ε,1]; for all x,y with f(x,y)=0: Σ_{R: (x,y)∈R} w_R ∈ [0,ε]
Dual: put weights on inputs such that all large rectangles are almost balanced.
Difference to the rectangle bound: a two-sided balance condition.
Another LP bound
Relaxing the rank bound further. Why? Motivated by quantum communication: norm-based methods allow one to deal with entanglement in quantum protocols. We will arrive at (almost) the same quantity as above (the LP bound) via Grothendieck's inequality.
Relaxation 4
D(f) ≥ log rank(f) ≥ log(||A||_tr²/(mn)). There is another relaxation of the rank: γ₂(M) = max over unit vectors u,v of ||M ∘ uvᵀ||_tr, where ∘ is the entrywise product, and
rank(M) ≥ γ₂(M)²
Then define γ₂^α as the minimum γ₂ of any matrix that is α-close to M.
This method subsumes all previous methods for lower bounding quantum cc
Norm based ideas
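The trace-norm lower bound on the rank can be illustrated on the sign matrix of IP, which is a 2^n × 2^n Hadamard matrix (a sketch assuming numpy; ||·||_tr is the nuclear norm):

```python
import numpy as np
from itertools import product

n = 4
xs = list(product([0, 1], repeat=n))
m = len(xs)  # 2^n

# Sign matrix of IP: a 2^n x 2^n Hadamard matrix, all singular values 2^{n/2}.
A = np.array([[(-1) ** (sum(a & b for a, b in zip(x, y)) % 2) for y in xs]
              for x in xs], dtype=float)

trace_norm = np.linalg.norm(A, 'nuc')   # sum of singular values = 2^n * 2^{n/2}
bound = trace_norm ** 2 / (m * m)       # rank(A) >= ||A||_tr^2 / (mn)
print(bound, np.linalg.matrix_rank(A))  # both equal 2^n = 16 here
```

For a Hadamard matrix the bound is tight, matching the full rank 2^n used in the IP lower bounds above.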