Two Applications of the Gaussian Poincaré Inequality in the Shannon Theory
Vincent Y. F. Tan (Joint work with Silas L. Fong)
National University of Singapore (NUS)
2016 International Zurich Seminar on Communications
Vincent Tan (NUS) Applications of the Poincaré Inequality IZS 2016 1 / 17
Gaussian Poincaré Inequality
Theorem
For Z^n with Z_i i.i.d. ∼ N(0, 1) and any differentiable mapping f such that

    E[(f(Z^n))^2] < ∞ and E[‖∇f(Z^n)‖^2] < ∞,

we have

    var[f(Z^n)] ≤ E[‖∇f(Z^n)‖^2].

Controls the variance of f(Z^n), a function of i.i.d. random variables, in terms of the gradient of f

Using the Gaussian Poincaré inequality for appropriate f,

    var[f(Z^n)] = O(n).
Vincent Tan (NUS) Applications of the Poincaré Inequality IZS 2016 2 / 17
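As a quick numerical illustration (a sketch, not from the talk; the choice f(z) = Σ_i sin(z_i) is an assumption for demonstration), one can compare a Monte Carlo estimate of var[f(Z^n)] with the bound E[‖∇f(Z^n)‖^2] and see that both grow linearly in n:

```python
# Monte Carlo check of the Gaussian Poincaré inequality for the (assumed)
# test function f(z) = sum_i sin(z_i), whose gradient has entries cos(z_i).
import numpy as np

rng = np.random.default_rng(0)
n, trials = 50, 20000

Z = rng.standard_normal((trials, n))
f_vals = np.sin(Z).sum(axis=1)           # f(Z^n) for each trial
grad_sq = (np.cos(Z) ** 2).sum(axis=1)   # ||grad f(Z^n)||^2 for each trial

var_f = f_vals.var()    # Monte Carlo estimate of var[f(Z^n)]
bound = grad_sq.mean()  # Monte Carlo estimate of E[||grad f(Z^n)||^2]
```

For this f, both sides are available in closed form: var[f(Z^n)] = n(1 − e^{−2})/2 and E[‖∇f(Z^n)‖^2] = n(1 + e^{−2})/2, so both are Θ(n) and the inequality holds with a constant-factor gap.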
Gaussian Poincaré Inequality in Shannon Theory
Polyanskiy and Verdú (2014) bounded the KL divergence between the empirical output distribution P_{Y^n} of AWGN channel codes and the n-fold product of the CAOD P_Y^*, i.e.,

    D(P_{Y^n} ‖ (P_Y^*)^n)

Often we need to bound the variance of certain log-likelihood ratios (dispersion)
Demonstrate its utility by establishing
Strong converse for the Gaussian broadcast channel
Properties of the empirical output distribution of delay-limited codes for quasi-static fading channels
Vincent Tan (NUS) Applications of the Poincaré Inequality IZS 2016 3 / 17
Gaussian Broadcast Channel
Assume g_1 = g_2 = 1 and σ_2^2 > σ_1^2

Input X^n must satisfy

    ‖X^n‖_2^2 = Σ_{i=1}^n X_i^2 ≤ nP
Vincent Tan (NUS) Applications of the Poincaré Inequality IZS 2016 4 / 17
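A minimal sketch (illustrative, not the talk's encoder) of enforcing this peak power constraint by scaling a candidate codeword back onto the power sphere when it is violated:

```python
# Scale x so that ||x||_2^2 <= n*P; codewords already inside the ball are
# left untouched. (Hypothetical helper for illustration.)
import numpy as np

def enforce_power(x, P):
    n = len(x)
    energy = float(np.dot(x, x))
    if energy <= n * P:
        return x
    return x * np.sqrt(n * P / energy)

rng = np.random.default_rng(1)
P = 2.0
x = rng.standard_normal(100) * 10.0  # almost surely violates ||x||^2 <= 100*P
x_ok = enforce_power(x, P)
```

Scaling preserves the direction of x and meets the constraint with equality whenever it was violated.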
Capacity Region
An (n, M_{1n}, M_{2n}, ε_n)-code consists of
an encoder f : {1, ..., M_{1n}} × {1, ..., M_{2n}} → R^n such that the power constraint is satisfied;
two decoders ϕ_j : R^n → {1, ..., M_{jn}} for j = 1, 2;
such that the average error probability

    P_e^(n) := Pr(Ŵ_1 ≠ W_1 or Ŵ_2 ≠ W_2) ≤ ε_n.
(R_1, R_2) is achievable ⇔ ∃ a sequence of (n, M_{1n}, M_{2n}, ε_n)-codes s.t.

    liminf_{n→∞} (1/n) log M_{jn} ≥ R_j, j = 1, 2, and lim_{n→∞} ε_n = 0.
Capacity region C is the set of all achievable rate pairs
Vincent Tan (NUS) Applications of the Poincaré Inequality IZS 2016 5 / 17
Capacity Region
Cover (1972) and Bergmans (1974) showed that

    C = R_BC = ⋃_{α∈[0,1]} R(α)

where

    R(α) = { (R_1, R_2) : R_1 ≤ C(αP/σ_1^2), R_2 ≤ C((1−α)P/(αP + σ_2^2)) }

and

    C(x) = (1/2) log(1 + x).
Direct part: Random coding + Superposition coding
Converse part: Fano’s inequality + Entropy power inequality
Vincent Tan (NUS) Applications of the Poincaré Inequality IZS 2016 6 / 17
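The boundary of R_BC is easy to trace numerically. A sketch with assumed example parameters (P = 10, σ_1^2 = 1, σ_2^2 = 4; not values from the talk):

```python
# Sweep the power split α and collect the corner rate pairs of R(α),
# with C(x) = 0.5 * log2(1 + x).
import numpy as np

def C(x):
    return 0.5 * np.log2(1.0 + x)

def bc_boundary(P, s1, s2, num=5):
    pairs = []
    for a in np.linspace(0.0, 1.0, num):
        R1 = C(a * P / s1)                    # rate to the stronger user
        R2 = C((1.0 - a) * P / (a * P + s2))  # rate to the weaker user
        pairs.append((float(R1), float(R2)))
    return pairs

pairs = bc_boundary(P=10.0, s1=1.0, s2=4.0)
```

At α = 0 all power serves the weaker user (R_1 = 0); at α = 1, R_1 = C(P/σ_1^2) and R_2 = 0.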
Capacity Region
[Figure: the capacity region; for rate pairs inside C, P_e^(n) → 0; for rate pairs outside, P_e^(n) ↛ 0, but does P_e^(n) → 1?]
Vincent Tan (NUS) Applications of the Poincaré Inequality IZS 2016 7 / 17
Strong converse vs weak converse
Can we claim that if (R_1, R_2) ∉ C, then P_e^(n) → 1? Indeed!

Sharp phase transition between what's possible and what's not
The strong converse had previously been established only for the degraded discrete memoryless BC
Ahlswede, Gács and Körner (1976) used the blowing-up lemma
The BUL doesn't work for continuous alphabets [but see Wu and Özgür (2015)]
Oohama (2015) uses properties of the Rényi divergence
Good bounds between the Rényi divergence D_α(P‖Q) and the relative entropy D(P‖Q) exist for finite alphabets
Vincent Tan (NUS) Applications of the Poincaré Inequality IZS 2016 8 / 17
ε-Capacity Region
(R_1, R_2) is ε-achievable ⇔ ∃ a sequence of (n, M_{1n}, M_{2n}, ε_n)-codes s.t.

    liminf_{n→∞} (1/n) log M_{jn} ≥ R_j, j = 1, 2, and limsup_{n→∞} ε_n ≤ ε.
The ε-capacity region C_ε is the set of all ε-achievable rate pairs
Strong converse holds iff Cε does not depend on ε.
We already know that

    R_BC = C_0 ⊂ C_ε
Vincent Tan (NUS) Applications of the Poincaré Inequality IZS 2016 9 / 17
Strong converse
Theorem
The Gaussian BC satisfies the strong converse property:
    C_ε = R_BC, ∀ ε ∈ [0, 1)
Key ideas in proof:
Derive an appropriate information spectrum converse bound
Use the Gaussian Poincaré inequality to bound relevant variances
Vincent Tan (NUS) Applications of the Poincaré Inequality IZS 2016 10 / 17
Weak Converse for GBC [Bergmans (1974)]
Step 1: Invoke Fano's inequality to assert that for any sequence of codes with vanishing error probability ε_n → 0,

    R_j ≤ (1/n) I(W_j; Y_j^n) + o(1), ∀ j ∈ {1, 2}.
Step 2: Single-letterize and apply the entropy power inequality:

    I(W_1; Y_1^n) ≤ n I(X; Y_1 | U) ≤ n C(αP/σ_1^2)                         (EPI)
    I(W_1; Y_1^n) + I(W_2; Y_2^n) ≤ n I(U; Y_2) ≤ n C((1−α)P/(αP + σ_2^2))  (EPI)
Vincent Tan (NUS) Applications of the Poincaré Inequality IZS 2016 11 / 17
Strong Converse for DM-BC [Ahlswede et al. (1976)]
Step 1: Invoke the blowing-up lemma to assert that for any sequence of codes with non-vanishing error probability ε ∈ [0, 1),

    R_j ≤ (1/n) I(W_j; Y_j^n) + o(1), ∀ j ∈ {1, 2}.
Step 2: Single-letterize

    I(W_1; Y_1^n) ≤ n I(X; Y_1 | U),
    I(W_1; Y_1^n) + I(W_2; Y_2^n) ≤ n I(U; Y_2)

where U_i := (W_2, Y_1^{i−1}). One also uses the degradedness condition here:

    I(W_2, Y_2^{i−1}, Y_1^{i−1}; Y_{2i}) = I(U_i; Y_{2i}).
Vincent Tan (NUS) Applications of the Poincaré Inequality IZS 2016 12 / 17
Our Strong Converse Proof for Gaussian BC
Convert a code defined with average error prob ≤ ε to one with max error prob ≤ √ε =: ε′ w/o loss in rate [Telatar]
Establish an information spectrum bound: for every (w_1, w_2), every code with max error prob ≤ ε′ satisfies

    ε′ ≥ Pr( log [P(Y_1^n | w_1) / P(Y_1^n)] ≤ nR_1 − γ_1(w_1, w_2) ) − n^2 e^{−γ_1(w_1, w_2)}
         − 1{ 2^{n(R_1 + R_2)} ∫_{D_1(w_1)} P(y_1^n) P(w_2 | w_1, y_1^n) dy_1^n > n^2 }

The indicator term is often negligible (by Markov's inequality)
Establish a bound on the coding rate:

    nR_1 ≤ E[ log P(Y_1^n | w_1) / P(Y_1^n) ] + √( (2/(1 − ε′)) var[ log P(Y_1^n | w_1) / P(Y_1^n) ] ) + 3 log n
Vincent Tan (NUS) Applications of the Poincaré Inequality IZS 2016 13 / 17
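Numerically, the structure of the rate bound above is what drives the strong converse: with var[·] = O(n), the square-root correction is O(√n) and the last term is O(log n). A sketch with hypothetical numbers:

```python
# nR1 <= mean + sqrt(2 * var / (1 - eps')) + 3 log n; the per-letter
# backoff (bound - mean)/n vanishes as n grows. (Illustrative numbers.)
import math

def rate_upper_bound(mean_llr, var_llr, eps_prime, n):
    return mean_llr + math.sqrt(2.0 * var_llr / (1.0 - eps_prime)) + 3.0 * math.log(n)

n = 10_000
mean_llr = 0.5 * n   # E[log-likelihood ratio], assumed Theta(n)
var_llr = 2.0 * n    # var[...], O(n) by the Poincaré inequality
bound = rate_upper_bound(mean_llr, var_llr, eps_prime=0.25, n=n)
```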
Our Strong Converse Proof for Gaussian BC
Use the Gaussian Poincaré inequality, with careful identification of f and the peak power constraint ‖X^n‖^2 ≤ nP, to assert that

    var[ log P(Y_1^n | w_1) / P(Y_1^n) ] = O(n).
Thus we conclude that

    nR_1 ≤ I(W_1; Y_1^n) + O(√n), ∀ ε ∈ [0, 1).
Backoff term is of the order O(1/√n), i.e.,

    λ log M_{1n}^* + (1 − λ) log M_{2n}^* = n C_λ + O(√n)

where

    C_λ := max_{α∈[0,1]} { λ C(αP/σ_1^2) + (1 − λ) C((1−α)P/(αP + σ_2^2)) }

but nailing down the constant (dispersion) seems challenging.
Vincent Tan (NUS) Applications of the Poincaré Inequality IZS 2016 14 / 17
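C_λ can be evaluated by a simple grid search over the power split α (a sketch with assumed parameters, not code from the talk):

```python
# C_lambda := max_alpha { lam*C(a*P/s1) + (1-lam)*C((1-a)*P/(a*P+s2)) },
# approximated on a uniform grid over alpha in [0, 1].
import math

def C(x):
    return 0.5 * math.log2(1.0 + x)

def C_lambda(lam, P, s1, s2, grid=10001):
    best = -float("inf")
    for k in range(grid):
        a = k / (grid - 1)
        val = lam * C(a * P / s1) + (1.0 - lam) * C((1.0 - a) * P / (a * P + s2))
        best = max(best, val)
    return best

val = C_lambda(lam=1.0, P=10.0, s1=1.0, s2=4.0)
```

For λ = 1 the maximum sits at α = 1 and C_λ reduces to C(P/σ_1^2); for λ = 0 it sits at α = 0 and reduces to C(P/σ_2^2).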
Another Application: Quasi-Static Fading Channels
Consider the channel model

    Y_i = √H · X_i + Z_i,   i = 1, ..., n

where the Z_i are independent standard normal random variables and E[1/H] ∈ (0, ∞)
If the fading state is h and the message is w, the codeword is f_h(w) ∈ R^n.
Long-term power constraint

    (1/M_n) Σ_{w=1}^{M_n} ∫_{R_+} P_H(h) ‖f_h(w)‖^2 dh ≤ nP
Delay-limited capacity [Hanly and Tse (1998)], i.e., the maximum transmission rate under the constraint that the maximal error probability over all fading states h > 0 vanishes
Vincent Tan (NUS) Applications of the Poincaré Inequality IZS 2016 15 / 17
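To see why the quantity P/E[1/H] arises as the delay-limited power level (a Monte Carlo sketch; the fading law H = 1 + Exp(1), which makes E[1/H] finite, is an assumption for illustration): channel-inversion power control P(h) = P_DL/h equalizes the received power across fading states while meeting the long-term constraint on average.

```python
# Channel inversion under a long-term power budget P: transmitting with
# power P_DL/h in state h gives constant received power P_DL, and
# E[P_DL/H] = P_DL * E[1/H] = P by construction.
import numpy as np

rng = np.random.default_rng(2)
P = 1.0
H = 1.0 + rng.exponential(1.0, size=200_000)  # assumed fading law, E[1/H] < 1

P_DL = P / np.mean(1.0 / H)          # delay-limited power level
long_term_power = np.mean(P_DL / H)  # average transmit power under inversion
```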
Vanishing Normalized Relative Entropy
The delay-limited capacity [Hanly and Tse (1998)] is

    C(P_DL), where P_DL := P / E[1/H]
For any sequence of capacity-achieving codes with vanishing maximum error probability,

    (1/n) D(P_{Y^n} ‖ (P_Y^*)^n) → 0 as n → ∞, where P_Y^* = N(0, P_DL).
Every good code is s.t. the induced output distribution “looks like”the n-fold CAOD.
Extend to the case where the error probability is non-vanishing
Control a variance term

    var[ log P_{Y^n|X^n,H}(Y^n | X^n, h) / P_{Y^n|H}(Y^n | h) ] = O(n).
Vincent Tan (NUS) Applications of the Poincaré Inequality IZS 2016 16 / 17
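Since (P_Y^*)^n is a product distribution, divergences like the one above decompose into single-letter Gaussian terms. A small helper sketch (the standard closed form for zero-mean Gaussians, not code from the talk):

```python
# D(N(0, s1) || N(0, s2)) = 0.5 * (s1/s2 - 1 - ln(s1/s2)) in nats; it is
# zero iff s1 == s2, which is the sense in which a good code's output
# "looks like" the CAOD.
import math

def kl_gauss(s1, s2):
    r = s1 / s2
    return 0.5 * (r - 1.0 - math.log(r))

d = kl_gauss(2.0, 1.0)
```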
Concluding Remarks
Gaussian Poincaré inequality is useful for Shannon-theoretic problems with uncountable alphabets

    var[f(Z^n)] ≤ E[‖∇f(Z^n)‖^2].

Allows us to establish strong converses and properties of good codes by controlling the variance of log-likelihood ratios

For more details, please see

    arXiv:1509.01380 (Strong converse for Gaussian broadcast)
    arXiv:1510.08544 (Empirical output distribution of good codes for fading channels)
Vincent Tan (NUS) Applications of the Poincaré Inequality IZS 2016 17 / 17