TRANSCRIPT
Jun Chen 1 / 23
Source-Channel Communication in Networks:
Separation Theorems and Beyond
Jun Chen
Electrical and Computer Engineering
McMaster University
Joint work with Kia Khezeli, Lin Song, Chao Tian, Suhas Diggavi, and Shlomo Shamai
Outline
Theorem
Proof I
Proof II
Optimality I
Optimality II
Source Broadcast
Gaussian Case
Variant
Separation
Side Information
Example
Reduction
Gaussian Case
Broadcast
Tradeoff
System Diagram
Comparison
Gaussian Case
Binary Case
Another Method
Conclusion
◆ Optimality of the source-channel separation architecture for lossy source coding in general networks
◆ The source broadcast problem: application of the source-channel separation theorem as a converse method
◆ Other converse methods for the source broadcast problem
Source-Channel Separation Theorem
◆ Source-channel communication
[Diagram: S^n → Encoder → X^n → Channel → Y^n → Decoder → Ŝ^n]
◆ Separation theorem (Shannon 48)
[Diagram: S^n → Source Encoder → Channel Encoder → X^n → Channel → Y^n → Channel Decoder → Source Decoder → Ŝ^n]
For any achievable end-to-end distortion D,
R(D) ≤ C.
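As a numeric sanity check (my own illustration, not from the slides): for a uniform binary source under Hamming distortion sent over a binary symmetric channel with crossover probability p, C = 1 − h(p) and R(D) = 1 − h(D) for D ≤ 1/2, so the separation condition R(D) ≤ C amounts to D ≥ p.

```python
import math

def h(p):
    """Binary entropy in bits; h(0) = h(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    """Capacity of a binary symmetric channel with crossover p."""
    return 1.0 - h(p)

def binary_rate_distortion(D):
    """R(D) of a uniform binary source under Hamming distortion."""
    return 1.0 - h(D) if D < 0.5 else 0.0

p = 0.1  # channel crossover probability (chosen for illustration)
for D in (0.05, 0.1, 0.2):
    feasible = binary_rate_distortion(D) <= bsc_capacity(p)
    print(f"D={D}: R(D)={binary_rate_distortion(D):.3f}, "
          f"C={bsc_capacity(p):.3f}, achievable={feasible}")
```

This assumes matched bandwidth (one channel use per source symbol); with mismatch factor κ the condition becomes R(D) ≤ κC.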
Two Proofs
[Diagram: S^n → Encoder → X^n → Channel → Y^n → Decoder → Ŝ^n]
◆ Standard proof
■ Information-theoretic definitions of channel capacity and the rate-distortion function:
C = max_{p_X} I(X; Y)
R(D) = min_{p_{Ŝ|S}: E[d(S,Ŝ)] ≤ D} I(S; Ŝ)
■ Converse theorem of channel coding: I(X^n; Y^n) ≤ nC
■ Converse theorem of lossy source coding: I(S^n; Ŝ^n) ≥ nR(D)
■ Data processing inequality: I(X^n; Y^n) ≥ I(S^n; Ŝ^n)
◆ Alternative proof
■ Operational definition of channel capacity and rate-distortion function
■ Achievability theorem of channel coding: I(X^n; Y^n) ≤ nC
■ Achievability theorem of lossy source coding: I(S^n; Ŝ^n) ≥ nR(D)
■ Data processing inequality: I(X^n; Y^n) ≥ I(S^n; Ŝ^n)
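Both proof chains hinge on the data processing inequality. A quick numeric check (my own toy example, with arbitrarily chosen distributions) builds a Markov chain S → X → Y and verifies that processing cannot increase mutual information:

```python
import itertools
import math

def mutual_information(joint):
    """I(A;B) in bits from a dict {(a, b): prob}."""
    pa, pb = {}, {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, 0.0) + p
        pb[b] = pb.get(b, 0.0) + p
    return sum(p * math.log2(p / (pa[a] * pb[b]))
               for (a, b), p in joint.items() if p > 0)

def marginal(joint, i, j):
    """Pairwise marginal of components i and j of a joint over tuples."""
    out = {}
    for key, p in joint.items():
        out[(key[i], key[j])] = out.get((key[i], key[j]), 0.0) + p
    return out

# Toy Markov chain S -> X -> Y with binary alphabets.
p_S = {0: 0.5, 1: 0.5}
p_X_given_S = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}   # channel S -> X
p_Y_given_X = {0: {0: 0.85, 1: 0.15}, 1: {0: 0.1, 1: 0.9}}  # channel X -> Y

joint_SXY = {(s, x, y): p_S[s] * p_X_given_S[s][x] * p_Y_given_X[x][y]
             for s, x, y in itertools.product((0, 1), repeat=3)}

I_XY = mutual_information(marginal(joint_SXY, 1, 2))
I_SY = mutual_information(marginal(joint_SXY, 0, 2))
print(f"I(X;Y)={I_XY:.4f} >= I(S;Y)={I_SY:.4f}: {I_XY >= I_SY}")
```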
More Proofs
[Diagram: S^n → Encoder → X^n → Channel → Y^n → Decoder → Ŝ^n]
◆ Channel-centered proof
■ View p_{Y^n|X^n} as a communication channel: I(X^n; Y^n) ≤ nC
■ View p_{Y^n|X^n} as a test channel: I(X^n; Y^n) ≥ R(p_{X^n,Y^n})
■ R(p_{X^n,Y^n}) ≥ nR(D)
◆ Source-centered proof
■ View p_{Ŝ^n|S^n} as a test channel: I(S^n; Ŝ^n) ≥ nR(D)
■ View p_{Ŝ^n|S^n} as a communication channel: I(S^n; Ŝ^n) ≤ C(p_{Ŝ^n|S^n})
■ C(p_{Ŝ^n|S^n}) ≤ nC
Source-Channel Separation in Networks: General Source
◆ Optimality: The memoryless sources at the source nodes are arbitrarily correlated, and each is to be reconstructed at possibly multiple destinations within certain distortions; the channels in the network, however, are synchronized, orthogonal, and memoryless point-to-point channels.
[Diagram: Encoder 1 observes (S_0^n, S_1^n) and Encoder 2 observes S_2^n; each encoder i sends X_{i,j}^n over an orthogonal point-to-point channel P(Y_{i,j}|X_{i,j}) to decoder j; Decoder 1 reconstructs (Ŝ_0^n, Ŝ_{1,1}^n) and Decoder 2 reconstructs (Ŝ_2^n, Ŝ_{1,2}^n).]
Source-Channel Separation in Networks: General Channel
◆ Optimality: The memoryless sources are mutually independent, and each is to be reconstructed at only one destination within a certain distortion; the channels, however, are general, including multi-user channels such as multiple access, broadcast, interference, and relay channels, possibly with feedback.
[Diagram: Encoder 1 observes (S_0^n, S_1^n) and Encoder 2 observes (S_0^n, S_2^n); they send X_1^n and X_2^n over a general channel P(Y_1, Y_2 | X_1, X_2); Decoder 1 receives Y_1^n and reconstructs (Ŝ_0^n, Ŝ_1^n), and Decoder 2 receives Y_2^n and reconstructs Ŝ_2^n.]
The Source Broadcast Problem
[Diagram: S^m → transmitter f^{(m,n)} → X^n → broadcast channel p_{Y1,Y2|X}; receiver i g_i^{(n,m)} maps Y_i^n to Ŝ_i^m, i = 1, 2.]
◆ We say (κ, D_1, D_2) is achievable if
n/m ≤ κ,  (1/m) Σ_{t=1}^m p_{S(t),Ŝ_i(t)} ∈ D_i, i = 1, 2.  (∗)
Remark: (∗) is more general than conventional distortion constraints since
(1/m) Σ_{t=1}^m E[d_i(S(t), Ŝ_i(t))] = E[d_i(S, Ŝ_i)],
where p_{S,Ŝ_i} = (1/m) Σ_{t=1}^m p_{S(t),Ŝ_i(t)}, i = 1, 2.
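The identity in the remark can be verified numerically; the sketch below (my own illustration with arbitrary per-letter joint distributions and Hamming distortion) confirms that the two sides coincide:

```python
# Per-letter joint distributions p_{S(t), Shat(t)} for m = 2 (binary
# alphabets), chosen arbitrarily for illustration.
joints = [
    {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3},
    {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.4},
]
hamming = lambda s, shat: float(s != shat)

# Average of the per-letter expected distortions.
per_letter = sum(sum(p * hamming(s, sh) for (s, sh), p in j.items())
                 for j in joints) / len(joints)

# Expected distortion under the averaged joint distribution p_{S, Shat}.
avg_joint = {k: sum(j.get(k, 0.0) for j in joints) / len(joints)
             for k in joints[0]}
averaged = sum(p * hamming(s, sh) for (s, sh), p in avg_joint.items())

print(per_letter, averaged)  # the two values coincide
```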
Gaussian Source over Gaussian Broadcast Channel
[Diagram: S^m → Encoder → X^n; Y_i^n = X^n + Z_i^n → Decoder i → Ŝ_i^m, i = 1, 2.]
◆ Tradeoff between the transmit power P, the bandwidth mismatch factor κ, and the achievable reconstruction distortion pair (d_1, d_2)
◆ A mysterious auxiliary random variable (Reznic, Feder, and Zamir 06): S + U, where U is independent of everything else.
P ≥ sup_{σ_U² > 0} N_1 ( σ_S²(d_1 + σ_U²) / (d_1(d_2 + σ_U²)) )^{1/κ} + (N_2 − N_1) ( (σ_S² + σ_U²) / (d_2 + σ_U²) )^{1/κ} − N_2
Source Broadcast with Receiver Side Information
[Diagram: (S_1^m, S_2^m) → transmitter f^{(m,n)} → X^n → p_{Y1,Y2|X}; receiver 1 g_1^{(n,m)} observes Y_1^n and side information S_2^m and outputs Ŝ_1^m; receiver 2 g_2^{(n,m)} observes Y_2^n and outputs Ŝ_2^m.]
◆ We say (κ, D_1, D_2) is achievable if
n/m ≤ κ,  (1/m) Σ_{t=1}^m p_{S_1(t),S_2(t),Ŝ_1(t)} ∈ D_1,  (1/m) Σ_{t=1}^m p_{S_2(t),Ŝ_2(t)} ∈ D_2.
A Source-Channel Separation Theorem
[Diagram: (S_1^m, S_2^m) → transmitter f^{(m,n)} → X^n → p_{Y1,Y2|X}; receiver 1 g_1^{(n,m)} observes Y_1^n and side information S_2^m and outputs Ŝ_1^m; receiver 2 g_2^{(n,m)} observes Y_2^n and outputs Ŝ_2^m.]
◆ (κ, D_1, D_2) is achievable ⇐⇒ (R_{S1|S2}(D_1), R_{S2}(D_2)) ∈ κ C_{1|2}(p_{Y1,Y2|X}), where
R_{S1|S2}(D_1) = min_{p_{S1,S2,Ŝ1} ∈ D_1} I(S_1; Ŝ_1 | S_2),
R_{S2}(D_2) = min_{p_{S2,Ŝ2} ∈ D_2} I(S_2; Ŝ_2),
and C_{1|2}(p_{Y1,Y2|X}) is the capacity region of the broadcast channel p_{Y1,Y2|X} when the message intended for receiver 2 is also available at receiver 1.
Broadcast Channel with Receiver Side Information
[Diagram: (M_1, M_2) → transmitter f^{(n)} → X^n → p_{Y1,Y2|X}; receiver 1 g_1^{(n)} observes Y_1^n and M_2 and outputs M̂_1; receiver 2 g_2^{(n)} observes Y_2^n and outputs M̂_2.]
◆ Capacity region C_{1|2}(p_{Y1,Y2|X}) (Kramer and Shamai 07):
R_1 ≤ I(X; Y_1),
R_2 ≤ I(V; Y_2),
R_1 + R_2 ≤ I(X; Y_1 | V) + I(V; Y_2)
for some p_{V,X}.
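For intuition, these three bounds can be evaluated in the scalar Gaussian broadcast channel with the usual superposition choice of (V, X). This instantiation is my own illustration: jointly Gaussian inputs with a power split α (V carrying receiver 2's message with power (1 − α)P) are an assumption, not something stated on the slide.

```python
import math

def c(snr):
    """Gaussian capacity 0.5*log2(1 + snr) in bits per channel use."""
    return 0.5 * math.log2(1.0 + snr)

def rate_bounds(P, N1, N2, alpha):
    """Kramer-Shamai bounds evaluated for a Gaussian superposition (V, X)."""
    R1_max = c(P / N1)                               # I(X; Y1)
    R2_max = c((1 - alpha) * P / (alpha * P + N2))   # I(V; Y2)
    Rsum_max = c(alpha * P / N1) + R2_max            # I(X; Y1|V) + I(V; Y2)
    return R1_max, R2_max, Rsum_max

P, N1, N2 = 10.0, 1.0, 4.0  # hypothetical power and noise levels
for alpha in (0.2, 0.5, 0.8):
    R1m, R2m, Rsum = rate_bounds(P, N1, N2, alpha)
    print(f"alpha={alpha}: R1<={R1m:.3f}, R2<={R2m:.3f}, R1+R2<={Rsum:.3f}")
```

Sweeping α between 0 and 1 trades receiver 2's rate against the sum-rate bound, tracing out one parametric family of rate pairs.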
Broadcast Channel with Receiver Side Information
◆ C_{1|2}(p_{Y1,Y2|X}) = C(p_{Y1,Y2|X}) if Y_1 is less noisy than Y_2, but not necessarily so if Y_1 is more capable than Y_2.
[Figure: BEC-BSC example in the (R_1, R_2) plane, where C_{1|2}(p_{Y1,Y2|X}) strictly contains C(p_{Y1,Y2|X}).]
A Reduction Argument
◆ Introduction of a remote source {(S_1(t), S_2(t))}_{t=1}^∞:
Given (1/m) Σ_{t=1}^m p_{S(t),Ŝ_i(t)} ∈ D_i, i = 1, 2, one can compute the induced (1/m) Σ_{t=1}^m p_{S_1(t),S_2(t),Ŝ_1(t)} and (1/m) Σ_{t=1}^m p_{S_2(t),Ŝ_2(t)}, so the separation theorem can be applied.
[Diagram: (S_1^m, S_2^m) → transmitter f^{(m,n)} → X^n → p_{Y1,Y2|X}; receiver 1 g_1^{(n,m)} observes Y_1^n and side information S_2^m and outputs Ŝ_1^m; receiver 2 g_2^{(n,m)} observes Y_2^n and outputs Ŝ_2^m.]
◆ There exists some p_{S,Ŝ1,Ŝ2} = p_S p_{Ŝ1,Ŝ2|S} with p_{S,Ŝ_i} ∈ D_i, i = 1, 2, such that (I(S_1; Ŝ_1 | S_2), I(S_2; Ŝ_2)) ∈ κ C_{1|2}(p_{Y1,Y2|X}) for all p_{S1,S2|S}. Moreover, there is no loss of generality in choosing S_1 = S.
Gaussian Source with Squared Error Distortion Measure
◆ Let S_1 = S and S_2 = S + U, where U ∼ N(0, σ_U²).
[Figure: the source coding rate pair versus the scaled capacity region κ C_{1|2}(p_{Y1,Y2|X}) in the (R_1, R_2) plane for κ < κ∗, κ = κ∗, and κ > κ∗.]
Bivariate Gaussian Source over Gaussian Broadcast Channel
◆ Tradeoff between the transmit power P, the bandwidth mismatch factor κ, and the achievable reconstruction distortion pair (D_1, D_2)
[Diagram: (S_1^m, S_2^m) → Encoder → X^n; Y_i^n = X^n + Z_i^n → Decoder i → Ŝ_i^m, i = 1, 2.]
◆ Scalar case without bandwidth mismatch
■ Source-channel separation is suboptimal (Gao and Tuncel 11).
■ Uncoded scheme is optimal at low SNR (Bross, Lapidoth, and Tinguely 10).
■ Hybrid scheme is optimal (Tian, Diggavi, and Shamai 11).
Characterization of the Power-Bandwidth-Distortion Tradeoff
◆ Source: (S_1, S_2) ∼ N(0, Σ_S) with S_i ∼ N(0, Σ_{S_i}), i = 1, 2
◆ Channel: Z_i ∼ N(0, N_i), i = 1, 2, with N_1 < N_2
◆ A necessary condition:
P ≥ min_{Θ_1,Θ_2} sup_{Σ_U ≻ 0} N_1 ( |Σ_S| |Θ_1 + Σ_U| / (|Θ_1| |Θ_2 + Σ_U|) )^{1/κ} + (N_2 − N_1) ( |Σ_S + Σ_U| / |Θ_2 + Σ_U| )^{1/κ} − N_2,
where the minimization is over Θ_i = [ Θ_{i,i}  ∗ ; ∗  ∗ ], i = 1, 2, subject to Σ_S ⪰ Θ_2 ⪰ Θ_1 ≻ 0 and Θ_{i,i} ⪯ D_i, i = 1, 2.
◆ This bound is tight when S_2 is a scalar and κ = 1:
P ≥ sup_{Σ_U ≻ 0} N_1 |Σ_S + Σ_U| / ( |D_1 + Σ_{U_1}| (d_2 + σ_{U_2}²) ) + (N_2 − N_1) (σ_{S_2}² + σ_{U_2}²) / (d_2 + σ_{U_2}²) − N_2,
where Σ_U = [ Σ_{U_1}  ∗ ; ∗  σ_{U_2}² ].
System Diagram
[Diagram: hybrid scheme. S_1^n and S_2^n are Wyner-Ziv (WZ) encoded; an analog component X_a^n = a_1 S_1^n + a_2 S_2^n is superimposed with a dirty-paper (DP) encoded digital component X_d^n to form the channel input X^n; Y_i^n = X^n + Z_i^n; each receiver performs channel decoding, subtracts the decoded digital component, applies LMMSE estimation, and finishes with WZ decoding to obtain Ŝ_1^n and Ŝ_2^n.]
Source-channel separation theorems can be used to prove the optimality of non-separation-based schemes (e.g., hybrid coding schemes) and to determine performance limits even in scenarios where the separation architecture is suboptimal!
A Converse Method Based on Channel Comparison
[Diagram: S^m → transmitter f^{(m,n)} → X^n → p_{Y1,Y2|X}; receiver i g_i^{(n,m)} maps Y_i^n to Ŝ_i^m, i = 1, 2; the induced conditional distribution p_{Ŝ_1^m,Ŝ_2^m|S^m} is viewed as a broadcast channel.]
C(p_{Ŝ_1^m,Ŝ_2^m|S^m}) ⊆ n C(p_{Y1,Y2|X})
◆ A single-letter version: If (κ, D_1, D_2) is achievable, then
C(p_S, p_{Ŝ1,Ŝ2|S}) ⊆ κ C(p_{Y1,Y2|X})
for some p_{S,Ŝ1,Ŝ2} = p_S p_{Ŝ1,Ŝ2|S} with p_{S,Ŝ_i} ∈ D_i, i = 1, 2. A proper definition of C(p_S, p_{Ŝ1,Ŝ2|S}) is needed. For simplicity, we can replace C(p_S, p_{Ŝ1,Ŝ2|S}) with Marton's inner bound C_in(p_S, p_{Ŝ1,Ŝ2|S}).
Gaussian Source with Squared Error Distortion Measure
◆ For any p_{S,Ŝ1,Ŝ2} = p_S p_{Ŝ1,Ŝ2|S} with E[(S − Ŝ_i)²] ≤ d_i, i = 1, 2,
C(G-BC(d_1, d_2)) ⊆ C(p_S, p_{Ŝ1,Ŝ2|S}).
◆ If (κ, d_1, d_2) is achievable, then C(G-BC(d_1, d_2)) ⊆ κ C(p_{Y1,Y2|X}).
[Figure: C(G-BC(d_1, d_2)) versus κ C(p_{Y1,Y2|X}) in the (R_1, R_2) plane for κ < κ∗, κ = κ∗, and κ > κ∗.]
Binary Uniform Source with Hamming Distortion Measure
◆ For any p_{S,Ŝ1,Ŝ2} = p_S p_{Ŝ1,Ŝ2|S} with E[S ⊕ Ŝ_i] ≤ d_i, i = 1, 2,
C(BS-BC(d_1, d_2)) ⊆ C(p_S, p_{Ŝ1,Ŝ2|S}).
◆ If (κ, d_1, d_2) is achievable, then C(BS-BC(d_1, d_2)) ⊆ κ C(p_{Y1,Y2|X}).
[Figure: C(BS-BC(d_1, d_2)) versus κ C(p_{Y1,Y2|X}) in the (R_1, R_2) plane for κ < κ∗, κ = κ∗, and κ > κ∗.]
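Assuming BS-BC(d_1, d_2) denotes the degraded binary symmetric broadcast channel with crossover probabilities d_1 < d_2 (my reading of the notation, not confirmed by the slide), its capacity region boundary can be traced with the textbook parametrization R_1 ≤ h(β ⋆ d_1) − h(d_1), R_2 ≤ 1 − h(β ⋆ d_2), β ∈ [0, 1/2]:

```python
import math

def h(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def conv(a, b):
    """Binary convolution a * b = a(1-b) + b(1-a)."""
    return a * (1 - b) + b * (1 - a)

def bsbc_boundary(d1, d2, steps=10):
    """Boundary points of the degraded BSC broadcast region, d1 < d2."""
    pts = []
    for k in range(steps + 1):
        beta = 0.5 * k / steps
        pts.append((h(conv(beta, d1)) - h(d1), 1 - h(conv(beta, d2))))
    return pts

for R1, R2 in bsbc_boundary(0.1, 0.2, steps=4):
    print(f"R1={R1:.3f}, R2={R2:.3f}")
```

At β = 0 all rate goes to the weak receiver (R_1 = 0); at β = 1/2 receiver 1 gets its full point-to-point capacity 1 − h(d_1).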
Another Converse Method
◆ A general ordering: Given any U_1, ..., U_L, there exist V_1, ..., V_L such that
I(U_{A_1}; Ŝ_1) + I(U_{A_2}; Ŝ_2 | U_{A_1}) + ··· + I(U_{A_k}; Ŝ_2 | U_{∪_{j=1}^{k−1} A_j})
≤ κ[ I(V_{A_1}; Y_1) + I(V_{A_2}; Y_2 | V_{A_1}) + ··· + I(V_{A_k}; Y_2 | V_{∪_{j=1}^{k−1} A_j}) ]
for any A_1, ..., A_k ⊆ {1, ..., L}.
◆ A subset of inequalities:
I(U_0; Ŝ_i) ≤ κ I(V_0; Y_i), i = 1, 2
I(U_0, U_i; Ŝ_i) ≤ κ I(V_0, V_i; Y_i), i = 1, 2
I(U_0; Ŝ_1) + I(U_2; Ŝ_2 | U_0) ≤ κ[ I(V_0; Y_1) + I(V_2; Y_2 | V_0) ]
I(U_0; Ŝ_2) + I(U_1; Ŝ_1 | U_0) ≤ κ[ I(V_0; Y_2) + I(V_1; Y_1 | V_0) ]
I(U_0, U_1; Ŝ_1) + I(S; Ŝ_2 | U_0, U_1) ≤ κ[ I(V_0, V_1; Y_1) + I(X; Y_2 | V_0, V_1) ]
I(U_0, U_2; Ŝ_2) + I(S; Ŝ_1 | U_0, U_2) ≤ κ[ I(V_0, V_2; Y_2) + I(X; Y_1 | V_0, V_2) ]
I(U_0; Ŝ_1) + I(U_2; Ŝ_2 | U_0) + I(S; Ŝ_1 | U_0, U_2) ≤ κ[ I(V_0; Y_1) + I(V_2; Y_2 | V_0) + I(X; Y_1 | V_0, V_2) ]
I(U_0; Ŝ_2) + I(U_1; Ŝ_1 | U_0) + I(S; Ŝ_2 | U_0, U_1) ≤ κ[ I(V_0; Y_2) + I(V_1; Y_1 | V_0) + I(X; Y_2 | V_0, V_1) ]
Therefore, if (κ, D_1, D_2) is achievable, then
C_out(p_S, p_{Ŝ1,Ŝ2|S}) ⊆ κ C_out(p_{Y1,Y2|X})
for some p_{S,Ŝ1,Ŝ2} = p_S p_{Ŝ1,Ŝ2|S} with p_{S,Ŝ_i} ∈ D_i, i = 1, 2.
Conclusion
◆ The source-channel separation theorem can be useful even in scenarios where the source-channel separation architecture is strictly suboptimal!
◆ From source-channel separation to source-channel correspondence
Thank you!