
Seminar-Praktikum “Nachrichtentechnik”

Seminarversuch 5 “Untersuchung stochastischer Signale”

Fachgebiet: Nachrichtentechnische Systeme

Name: Matriculation no.:

Supervisor: Date:

N T S

The preparation problems must be solved before the seminar session.


Contents

1 Introduction
2 First order Markoff sequences
2.1 Generation of first order Markoff sequences
2.2 Determination of the variance
2.3 Determination of the correlation coefficient
2.4 Determination of the correlation function
2.5 MATLAB simulations of the first order Markoff processes
3 Analysis of recorded signals
4 Experiments
5 Problems to be solved at home


1 Introduction

The aim of this seminar is to give the participants a better insight into probability theory and statistical signal theory by experiments. Means are provided to visualize sections of the signals and their corresponding probability density functions (pdf), autocorrelation functions (acf) and power spectral densities (psd). These quantities are most important and thus most frequently used in communication engineering. However, from the "dry" definition of these quantities in communication theory it is usually difficult for students to comprehend their real meaning. In the experiments, all of these quantities are visualized simultaneously together with a section of the signal itself. Thus, it is possible to perceive the consequences in these quantities if some parameters of the process are changed, if signals are filtered, or if signals are passed through nonlinear systems, etc.

There are always differences between theoretical results and results obtained by the evaluation of practical measurements. This is due to the fact that in theoretical considerations some assumptions are usually made which cannot be met in practical measurements. In probability theory in particular it is assumed that expected values are averages based on infinitely many samples, which is impossible to meet in practical evaluations.
With the aim to show the differences between theoretically correct results and the results obtained from practical measurements, some features of "first order Markoff sequences" are theoretically discussed in the next chapter. The corresponding MATLAB program generates these sequences artificially and displays the theoretically correct results together with the corresponding results obtained by the evaluation of only a limited number of samples from only one of the sample functions of the random Markoff process.

There are definitely great differences between continuous time random processes and discrete time random sequences. However, if a computer is used to evaluate measurements, it is only possible to sample continuous time signals first and then to evaluate a limited number of samples with mathematical routines. The experiments shall also show these differences if the participants first work through the "problems to be solved at home" in the last chapter, which refer to the corresponding theoretically precise results for continuous time processes.

2 First order Markoff sequences

2.1 Generation of first order Markoff sequences.

First, the procedure for creating first order Markoff sequences is explained. Consider the recursive structure in fig. 1:

If x(k) is a random input sequence with statistically independent random variables x(k) and x(j) ∀ j ≠ k and f_x(k)(x) = f_x(j)(x) = f_x(x) ∀ k, j, i.e. all random variables x(k), x(j) ∀ k, j show the same Gaussian pdf with zero mean x̄ = E{x(k)} = 0 ∀ k, then the random sequence y(k) is a stationary first order Markoff sequence. The proof that y(k) is first order Markoff is omitted here. However, some statistical features of this sequence shall be determined.


[Figure: block diagram with input x(k), gain c, adder, delay element z⁻¹ and feedback gain d, output y(k).]

Fig. 1: Recursive discrete time filter.

Since the input sequence is Gaussian and stationary (all x(k) show the same pdf) and the given structure is a linear discrete time filter, y(k) is also stationary and Gaussian distributed with zero mean ȳ = E{y(k)} = 0 ∀ k.
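A minimal MATLAB sketch of this generation step, using example values for d and c (the particular choice of c is discussed in section 2.2):

% Sketch: generate a first order Markoff sequence according to fig. 1,
% i.e. y(k) = d*y(k-1) + c*x(k), from Gaussian input samples.
N = 5000;                  % number of samples (example value)
d = 0.8;                   % feedback coefficient, |d| < 1 (example value)
c = 0.6;                   % input coefficient (example value)
x = randn(N, 1);           % zero mean, unit variance Gaussian input sequence x(k)
y = filter(c, [1 -d], x);  % recursive filter of fig. 1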

2.2 Determination of the variance.

Clearly, the difference equation from the structure above is:

y(k) = d · y(k − 1) + c · x(k) (1)

with y(k − j) statistically independent of x(k) ∀ j ≥ 1.

From eqn. (1) it follows:

E{y²(k)} = σy² + ȳ²     (with ȳ = 0)
         = E{(d · y(k − 1) + c · x(k))²}
         = E{d² · y²(k − 1) + 2cd · y(k − 1) · x(k) + c² · x²(k)}
         = d² · E{y²(k − 1)} + 2cd · E{y(k − 1) · x(k)} + c² · E{x²(k)}
         = d² · E{y²(k − 1)} + c² · E{x²(k)},

because y(k − 1) and x(k) are statistically independent and thus E{y(k − 1) · x(k)} = ȳ · x̄ = 0.

From the stationarity of the sequence y(k) it follows that E{y²(k)} = E{y²(k − j)} = σy² + ȳ² with ȳ = 0. Hence:

σy² = d² · σy² + c² · σx²   ⇔   σy² = c²/(1 − d²) · σx²     (2)

With c² = 1 − d², i.e. c = √(1 − d²), the variance of the input and the output sequence is the same, i.e. σy² = σx². This condition shall be assumed in the following.
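A short numerical check of eqn. (2) for the choice c = √(1 − d²), continuing the naming of the sketch in section 2.1 (all values are examples):

% Sketch: check that var(y) is close to c^2/(1-d^2)*var(x), which equals var(x)
% for c = sqrt(1-d^2), by generating a long Markoff sequence.
N = 1e5;
d = 0.8;
c = sqrt(1 - d^2);
x = randn(N, 1);
y = filter(c, [1 -d], x);
var_theory  = c^2 / (1 - d^2) * var(x);   % right hand side of eqn. (2)
var_measure = var(y);                     % estimate from the generated samples
fprintf('theory: %.3f   measured: %.3f\n', var_theory, var_measure);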

2.3 Determination of the correlation coefficient.

The correlation coefficient ρ of two random variables u and v is defined as:

ρ = E{(u − ū) · (v − v̄)} / (σu · σv)     with −1 ≤ ρ ≤ 1.     (3)


If ρ = ±1 the random variables u, v are strictly correlated, i.e. proportional to each other; if ρ = 0 the random variables u, v are uncorrelated.

For subsequent random variables y(k), y(k − 1) of the Markoff sequence as defined above the calculation of the correlation coefficient simplifies because ȳ = 0:

ρ = E{y(k) · y(k − 1)} / σy²
  = E{[d · y(k − 1) + c · x(k)] · y(k − 1)} / σy²     (from eqn. (1))
  = E{d · y²(k − 1) + c · x(k) · y(k − 1)} / σy²
  = d · E{y²(k − 1)} / σy²
  = d     (4)

Hence, with the condition c = √(1 − d²) for the coefficients of the recursive filter, the variance σy² of the stationary random Markoff output sequence y(k) is the same as the variance σx² of the independent input sequence x(k), and the correlation coefficient ρ between any two subsequent output random variables y(k), y(k − 1) ∀ k equals d.

Thus, with the adjustment of d in the range −1 < d < 1 it is possible to generate random sequences with an adjustable correlation coefficient: strong or weak, positive or negative correlation, or no correlation at all.
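The result ρ = d of eqn. (4) can be verified from generated samples as well; a sketch, assuming y has been generated as above with coefficient d:

% Sketch: estimate the correlation coefficient of subsequent samples y(k), y(k-1)
% and compare it with the filter coefficient d.
R   = corrcoef(y(1:end-1), y(2:end));   % 2x2 correlation coefficient matrix
rho = R(1, 2);                          % estimated correlation coefficient
fprintf('d = %.2f   estimated rho = %.3f\n', d, rho);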

2.4 Determination of the correlation function.

The autocorrelation function of the discrete time process y(k) is defined as

Ryy(m) = E{y(k) · y(k + m)}

Ryy(1) = E{y(k) · [d · y(k) + c · x(k + 1)]} = d · σy²
Ryy(2) = E{y(k) · [d · y(k + 1) + c · x(k + 2)]}
       = E{y(k) · [d · (d · y(k) + c · x(k + 1)) + c · x(k + 2)]} = d² · σy²

and by further recursion:

Ryy(m) = E{y(k) · y(k + m)} = d^m · σy² = ρ^m · σy²     ∀ m ≥ 0

With the general property Ryy(−m) = Ryy(m) of all autocorrelation functions, which is easy to prove here in the same way as above, for example:

Ryy(−1) = E{y(k) · y(k − 1)} = E{[d · y(k − 1) + c · x(k)] · y(k − 1)} = d · σy²,

it finally follows:

Ryy(m) = E{y(k) · y(k + m)} = ρ^|m| · σy²     ∀ m     (5)

The correlation coefficient is in the range −1 < ρ < 1, i.e. |ρ| < 1.


[Figure: two stem plots of the acf over the lag index j = −10 … 10, left for ρ = 0.5, right for ρ = −0.5.]

Fig. 2: Autocorrelation functions of discrete time first order Markoff processes

Thus, for positive ρ the autocorrelation function Ryy(m) = ρ^|m| · σy² of the first order Markoff process is positive and exponentially decreasing. For negative ρ the autocorrelation function alternates in sign while decreasing exponentially in magnitude, i.e. Ryy(m) ≥ 0 for ρ < 0 and even m, and Ryy(m) ≤ 0 for ρ < 0 and odd m, as shown in fig. 2.
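The theoretical acf of eqn. (5) is easily plotted; a sketch that reproduces the two panels of fig. 2 (σy² = 1 assumed as an example):

% Sketch: theoretical acf Ryy(m) = rho^|m| * sigma_y^2 for rho = +0.5 and rho = -0.5.
m       = -10:10;                     % lag index as in fig. 2
sigma2y = 1;                          % example variance
for rho = [0.5, -0.5]
    Ryy = sigma2y * rho.^abs(m);      % eqn. (5)
    figure;
    stem(m, Ryy);                     % bar-type plot as in fig. 2
    xlabel('m');  ylabel('R_{yy}(m)');
    title(sprintf('rho = %.1f', rho));
end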

The corresponding power spectral density Syy(Ω) related to the discrete time autocorrelation function Ryy(m) is a continuous function of Ω and defined as:

Syy(Ω) = ∑_{m=−∞}^{∞} Ryy(m) · e^{−jmΩ}.     (6)

[Figure: two discrete psd plots over the frequency index f = −10 … 10, left for ρ = 0.5, right for ρ = −0.5.]

Fig. 3: Power spectral densities of discrete time first order Markoff processes calculated using the FFT of the discrete time correlation functions from fig. 2

However, for practical evaluations Ryy(m) is only available for a limited number of m, and Syy(Ω) cannot be calculated with a computer program for continuous normalized frequencies


Ω, not even for a frequency limited interval. For this reason the corresponding power spectral densities are calculated using the FFT of Ryy(m), as shown in fig. 3. The FFT always calculates a limited number of values of Syy at discrete frequencies!

A comparison of the formula for the psd in eqn. (6) with the formula of the discrete Fourier transform (DFT) of Ryy(mTa), where the values of Ryy are assumed to be the values of Ryy(τ) taken at the time instances τ = mTa, shows (see lecture NT1, eqn. 7.20):

Syy(Ω) = ∑_{m=−∞}^{∞} Ryy(m) · e^{−jmΩ}

Syy(nω0) = ∑_{m=0}^{N−1} Ryy(mTa) · e^{−j2π·n·m/N}
         = ∑_{m=0}^{N−1} Ryy(mTa) · e^{−j(n·ω0·Ta)·m}     with ω0 = 2π/(N·Ta)
         = ∑_{m=0}^{N−1} Ryy(mTa) · e^{−jmΩn}     with Ωn = n · ω0 · Ta

Apart from the fact that the summation in the last formula is limited to only N values of the acf, because in practical cases only a limited number of Ryy values are available, the first and the last formula above are almost the same, except that the DFT calculates only frequency discrete values at the angular frequencies n·ω0 = Ωn/Ta = n · 2π/(N·Ta). The FFT is a fast computer implementation of the DFT.
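A sketch of this correspondence: if N values of the acf are arranged with the periodic extension that the DFT implicitly assumes, MATLAB's fft returns Syy at the discrete frequencies n·ω0 (all values below are example choices):

% Sketch: psd values of the first order Markoff process at the discrete
% frequencies n*w0 = n*2*pi/(N*Ta) via the DFT (fft) of N acf values.
N       = 64;                        % number of acf values (example)
Ta      = 1;                         % normalized sampling interval
rho     = 0.5;  sigma2y = 1;
m       = 0:N-1;
lag     = min(m, N - m);             % periodic arrangement: negative lags wrap around
Ryy     = sigma2y * rho.^lag;        % acf samples Ryy(m*Ta) of eqn. (5)
Syy     = real(fft(Ryy));            % DFT; real because the arranged acf is even
w       = (0:N-1) * 2*pi/(N*Ta);     % corresponding angular frequencies n*w0
plot(w, Syy);  xlabel('n \omega_0');  ylabel('S_{yy}');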

So far, some theoretical considerations concerning first order Markoff sequences have been presented, where the time discrete correlation function Ryy(m) was calculated as an ensemble average. In practical applications there is usually only one time limited sample function of the process available. In this case the only possibility to obtain estimates of the autocorrelation function is the assumption that the recorded sample function y^(ℓ)(k), k = 1, (1), N, is a time limited section of one of the infinitely many sample functions of an ergodic process. With this ergodicity hypothesis, time averages can be calculated from only a limited number of sampled values of the sample function.

For ergodic processes, ensemble averages equal the corresponding time averages for any of the sample functions. The expected value E{y(k)}, for example, equals:

E{y(k)} = lim_{N→∞} 1/(2N + 1) · ∑_{j=−N}^{N} y^(ℓ)(j)     with y^(ℓ)(j) the ℓth sample function.     (7)

But in practice it is impossible to calculate the limit N → ∞, and thus only estimates can be obtained.

Correspondingly, as an estimate of the autocorrelation function Ryy(j), the following average over a section of one discrete time sample function can be used:

R̂yy(j) = 1/(N − j) · ∑_{i=1}^{N−j} y^(ℓ)(i) · y^(ℓ)(i + j)     ∀ j ≥ 0 and any sample function ℓ     (8)

Such acf estimates become increasingly unreliable the greater j is, because for a "time shift" j close to N only N − j products y^(ℓ)(i) · y^(ℓ)(i + j) are averaged.

Clearly, the power spectral density is then also a more or less rough estimate, because it is obtained using the FFT of the time average acf estimate. Moreover, the power spectral density shows aliasing errors due to the implicit assumption of a periodic extension of the acf by the FFT.
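A direct transcription of the estimator in eqn. (8), followed by the FFT step just described, might look like this (a sketch; y is assumed to be one recorded or generated sample function, and the variable names are chosen here, not taken from the seminar programs):

% Sketch: time average acf estimate of eqn. (8) from one sample function y(1..N)
% and a psd estimate obtained from the FFT of this acf estimate.
N    = length(y);
Jmax = 50;                            % largest lag to be estimated, Jmax << N
Ryy_hat = zeros(1, Jmax + 1);
for j = 0:Jmax
    Ryy_hat(j+1) = sum(y(1:N-j) .* y(1+j:N)) / (N - j);   % eqn. (8)
end
% even, periodic arrangement of the estimate before the FFT:
Ryy_sym = [Ryy_hat, fliplr(Ryy_hat(2:end-1))];
Syy_hat = real(fft(Ryy_sym));         % psd estimate at discrete frequencies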


2.5 MATLAB simulations of the first order Markoff processes.

The following diagrams, figs. 4 and 5, show MATLAB simulations of first order Markoff processes.

All 4 subplots in each of the figures 4 and 5 show in principle a limited number of calculated discrete values. However, the subplots display seemingly continuous functions, although the plots should display bar plots like, for example, in fig. 2 or fig. 3. The continuous representations have been used here because bar plots for so many values look more confusing.

In the upper left subdiagram a section of the Markoff sequence is shown.

The upper right subdiagram shows an estimate of the probability density function obtained from the generated samples and the theoretical Gaussian pdf with the same parameters ȳ and σy² = var(y). Obviously the deviation between the estimate and the theoretically correct pdf (which is smoother) is considerable; it is due to the fact that only 5000 samples have been evaluated.

The lower left subdiagram shows the estimated autocorrelation function R̂yy(j) calculated according to eqn. (8) and the theoretically calculated autocorrelation function Ryy(j) according to eqn. (5).

The lower right subdiagram shows the theoretical and the estimated power spectral density, calculated by the FFT from Ryy(j) and R̂yy(j), respectively.

Clearly, there is a deviation of the estimates from the theoretically calculated pdf, acf and psd, but the similarities are obvious. The plots of the theoretically correct functions are always smoother.
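The pdf estimate in the upper right subdiagram can be approximated by a normalized histogram; a sketch (bin width and range are example choices, not those of the seminar program):

% Sketch: pdf estimate from the relative occurrences of the amplitudes of y,
% compared with the theoretical zero mean Gaussian pdf of the same variance.
dw       = 0.2;                                       % bin width (example)
edges    = -4:dw:4;
counts   = histcounts(y, edges);                      % occurrences per bin
centers  = edges(1:end-1) + dw/2;
pdf_est  = counts / (numel(y) * dw);                  % normalized to unit area
s2       = var(y);                                    % estimated variance
pdf_theo = exp(-centers.^2 / (2*s2)) / sqrt(2*pi*s2); % Gaussian pdf, zero mean
plot(centers, pdf_est, centers, pdf_theo);
legend('estimate', 'theoretical Gaussian');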

A comparison of the corresponding subfigures of fig. 4 and fig. 5 visualizes the principal differences between positive and negative correlation. In fig. 4 the correlation coefficient is ρ = 0.8, which indicates that two subsequent values y(k), y(k + 1) of the signal are more likely to be close to each other than for a negative correlation coefficient ρ = −0.8 as shown in fig. 5, where the signal values y(k + 1) are more likely to be close to −y(k).

The 3 figures fig. 6, fig. 7 and fig. 8 show, from top to bottom, the joint density, the contour lines of the joint density and the scatter diagram of two subsequent samples for different correlation coefficients. In the scatter diagram the joint outcomes y(k + 1) versus y(k) for k = 1, (1), N are shown.
In fig. 6 the correlation coefficient is ρ = 0, i.e. the random variables y(k), y(k + 1) are uncorrelated. For the Gaussian random variables considered here this means they are also statistically independent. The contour lines would be circles for the same vertical and horizontal scaling. However, in the plotted diagram they look like ellipses because the vertical and horizontal scaling is different. The main axes of the ellipses are parallel to the main axes of the diagram. This indicates that the random variables y(k), y(k + 1) are statistically independent.

Fig. 7 shows in principle the same diagrams for a positive correlation coefficient ρ = 0.8. Here the contour lines of the joint pdf are ellipses for any horizontal and vertical scaling. The main axes of these ellipses are no longer parallel to the axes of the coordinate system but skewed. Comparing fig. 7 and fig. 8 illustrates the difference between these ellipses for positive and negative correlation, since fig. 8 shows the same type of diagrams for ρ = −0.8.
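The scatter diagram in the lowest subfigure of figs. 6 to 8 simply plots subsequent sample pairs against each other; a sketch:

% Sketch: scatter diagram of the joint outcomes (y(k), y(k+1)) as in figs. 6-8.
plot(y(1:end-1), y(2:end), '.');   % one dot per pair of subsequent samples
xlabel('y(k)');  ylabel('y(k+1)');
axis equal;                        % equal scaling, so circular contours stay circular
title('Joint outcomes y(k), y(k+1)');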


[Figure with 4 subplots: a section of the signal y(k) (500 samples); the estimated probability density (mean = −0.00, var = 1.01); the estimated and theoretical autocorrelation function over t/Ta; the estimated and theoretical power spectral density; all for ρ = 0.80.]

Fig. 4: Example of simulated first order Markoff sequences y(k) and their corresponding pdf, acf and psd with positive correlation coefficient ρ = 0.8


[Figure with 4 subplots: a section of the signal y(k) (500 samples); the estimated probability density (mean = 0.00, var = 0.99); the estimated and theoretical autocorrelation function over t/Ta; the estimated and theoretical power spectral density; all for ρ = −0.80.]

Fig. 5: Example of simulated first order Markoff sequences y(k) and their corresponding pdf, acf and psd with negative correlation coefficient ρ = −0.8


[Figure with 3 subplots; parameters of the generated samples: m_y(k) = −0.01, m_y(k+1) = −0.01, σ_y(k) = 1.0072, σ_y(k+1) = 1.01, ρ = −0.00.]

Fig. 6: Upper subfig.: Joint probability density of two subsequent samples;
Middle subfig.: Contour diagram of the joint probability density;
Lower subfig.: Scatter diagram of the generated samples.
All for correlation coefficient ρ = 0.0


[Figure with 3 subplots; parameters of the generated samples: m_y(k) = −0.02, m_y(k+1) = −0.02, σ_y(k) = 0.9769, σ_y(k+1) = 0.98, ρ = 0.80.]

Fig. 7: Upper subfig.: Joint probability density of two subsequent samples;
Middle subfig.: Contour diagram of the joint probability density;
Lower subfig.: Scatter diagram of the generated samples.
All for positive correlation coefficient ρ = 0.8


[Figure with 3 subplots; parameters of the generated samples: m_y(k) = 0.01, m_y(k+1) = 0.01, σ_y(k) = 0.9807, σ_y(k+1) = 0.98, ρ = −0.80.]

Fig. 8: Upper subfig.: Joint probability density of two subsequent samples;
Middle subfig.: Contour diagram of the joint probability density;
Lower subfig.: Scatter diagram of the generated samples.
All for negative correlation coefficient ρ = −0.8


With the underlying MATLAB program, students can visualize online the changes in the signal y(k), the pdf, the acf and the psd for different correlation coefficients in the range −1 < ρ < 1 and different adjustable mean values ȳ. A double-slider GUI (graphical user interface) is provided for the adjustment of ρ and ȳ. Each new adjustment immediately produces the 4 subfigures anew.

3 Analysis of recorded signals

MATLAB provides prefabricated routines to record and to output signals via the AUDIO channels. Unfortunately, the dynamic input range of the AUDIO input channel (line in) is only −150 mV ≤ Uin ≤ 150 mV. If the input signals exceed this range, the corresponding recorded signals are clipped by the A/D converter. Thus, it is possible to analyze arbitrary signals generated in the laboratory with the same MATLAB program which has been used in the previous chapter, as long as the signal amplitudes do not exceed the dynamic range.

The second limitation of the AUDIO channels is the sampling rate, which is fixed to 44100 samples/second = 44.1 kHz. Consequently, the highest frequency components of the signals to be analyzed should not exceed ≈ 20 kHz.

Moreover, the AUDIO channels suppress DC signal components, like oscilloscopes with AC-coupled input amplifiers. This has to be taken into consideration if the input signals to be analysed contain DC offsets. Usually the output signals of nonlinear systems contain DC components even if the input signals are applied without DC offsets.

As long as statistical properties of the recorded signals are to be determined, the ergodicity hypothesis is required, i.e. the assumption that the recorded signal sequence is a time limited section of one of the infinitely many sample functions which form a random process. With this ergodicity hypothesis it is then possible to estimate statistical features from any of the sample functions, i.e. also from the one which has been recorded: for example the pdf from the relative occurrences of the different signal amplitudes, the mean ȳ or the acf (see eqn. (8)) as time averages, and the power spectral density as the FFT of the acf.
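The seminar's recording program datarec.m is not reproduced here; a minimal sketch of such a recording with MATLAB's audiorecorder, using the fixed sampling rate discussed above, could look like this:

% Sketch: record a few seconds from the line input (44.1 kHz, 16 bit, mono)
% and obtain the samples as a vector for the statistical evaluation.
Fs  = 44100;                      % fixed sampling rate of the AUDIO channel
rec = audiorecorder(Fs, 16, 1);   % 16 bit resolution, one channel
recordblocking(rec, 5);           % record for 5 seconds
y   = getaudiodata(rec);          % recorded samples as a column vector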


4 Experiments

At disposal are a signal generator, which produces sinusoidal signals, periodic rectangular signals, triangular signals or pseudo-random noise, and a PC with an AUDIO card and the MATLAB programs used to evaluate the measurements.

First use the MATLAB m-file named "noise.m", which artificially produces stationary first order Markoff sequences y(k) as described in chapter 2. It provides a two-slider graphical user interface which enables the adjustment of the correlation coefficient ρ with slider y1 and of the mean value ȳ with slider y2.

1. Adjust different correlation coefficients ρ and observe the consequences in the generated diagrams, which show the signal sequence, the pdf, the acf, the psd, the joint pdf of subsequent samples and the scatter plot.

With an adequate interpretation of the joint pdf of subsequent samples it is possible to explain what positive and negative correlation means with respect to the signal sequence (see for example figs. 7, 8).

2. Adjust different mean values with slider y2 and observe the consequences in the generated diagrams.

Check whether the diagrams show what has been theoretically established in the "problems to be solved at home" and try to interpret the differences between the psd diagram for Markoff sequences y(k) and the psd of a continuous time process y(t).

3. Connect the signal generator output with the line input of the AUDIO card of the PC, use the MATLAB program named datarec.m, which records signals from the signal generator, and carry out the following experiments. Make sure that the signals do not exceed the input range of ±150 mV.

4. Generate and record a sinusoidal signal and check whether the evaluated pdf, acf and psd correspond to the theoretical results from the "problems to be solved at home".

5. Generate and record a triangular signal and check whether the evaluated pdf makes sense.

6. Generate and record a "white noise signal" and check whether the evaluated pdf, acf and psd are reasonable.

7. Feed the "white noise" into an RC low-pass filter, record and evaluate the filter output signal, and check whether the evaluated pdf, acf and psd are reasonable. Adjust different bandwidths of the low-pass filter by choosing adequate values of R and C.

8. Feed the "white noise" into a CR high-pass filter, record and evaluate the filter output signal, and check whether the evaluated pdf, acf and psd are reasonable. Adjust different bandwidths of the high-pass filter by choosing adequate values of C and R.

9. Feed noise into a nonlinear circuit consisting of a diode-resistor voltage divider and analyse the output signal at the resistor. Configure the resistor such that the output voltage is in the required range < ±150 mV while the input voltage is > ±1 V. Observe the consequences with respect to the pdf and psd.


5 Problems to be solved at home

1. Find the relation between the autocorrelation function Rxx(τ) = E{x(t) · x(t + τ)} and the auto-covariance function Cxx(τ) = E{(x(t) − x̄) · (x(t + τ) − x̄)} for a time continuous stationary random process x(t) with x̄ = m_x^(1).

2. Determine the power spectral density Sxx(ω)

• in terms of Rxx(τ)

• and in terms of Cxx(τ) and x̄ = m_x^(1).

3. Determine the total power Px of the time continuous stationary random process x(t) from the power spectral density Sxx(ω) and find Px

• in terms of Rxx(·)

• and in terms of Cxx(·) and x̄ = m_x^(1), or σx² and x̄ = m_x^(1), respectively.

4. x̄ = m_x^(1) can be considered as the DC component of the random process x(t). Hence, x̄² = (m_x^(1))² is the DC power of the process!

Which quantity is represented by σx² = Cxx(0)?

5. Consider the power spectral density Sxx(ω) of a stationary random process with nonzero mean x̄ ≠ 0. Which feature of the power spectral density Sxx(ω) indicates the fact that x̄ ≠ 0?

6. Give the definition of the correlation coefficient ρ of two random variables x and y.

7. Given is a stationary random process x(t) = A · sin(ω0t + φ), where the random variable φ is uniformly distributed in the range 0 ≤ φ ≤ 2π.

• Calculate the autocorrelation function Rxx(τ) = E{x(t) · x(t + τ)} of this process.

• Calculate the pdf fx(t)(x, t) of this process.

8. Now consider a white noise process x(t) which is passed over a low-pass filter with the transfer function:

H(ω) = (1/(RC)) / (1/(RC) + jω)

• Find and sketch the autocorrelation function Rxx(τ) and the power spectral density Sxx(ω) of this process.

• Find and sketch the autocorrelation function Ryy(τ) and the power spectral density Syy(ω) of the process y(t) at the output of this low-pass filter.


Solution of the problems.

1. Rxx(τ) = Cxx(τ) + x̄²

2. Sxx(ω) = F{Rxx(τ)} = F{Cxx(τ) + x̄²}, hence

Sxx(ω) = ∫_{−∞}^{∞} (Cxx(τ) + x̄²) · e^{−jωτ} dτ = ∫_{−∞}^{∞} Cxx(τ) · e^{−jωτ} dτ + 2π · x̄² · δ(ω).

3. Px = 1/(2π) · ∫_{−∞}^{∞} Sxx(ω) dω = 1/(2π) · ∫_{−∞}^{∞} Sxx(ω) · e^{+jωτ}|_{τ=0} dω = Rxx(0),

since the factor e^{+jωτ} equals 1 for τ = 0. Hence

Px = Cxx(0) + x̄² = σx² + x̄².

4. Hence, x̄² = (m_x^(1))² is the DC power of the process, σx² = Cxx(0) is the total power of all AC components of the process, and Px = σx² + x̄² is the total power of the process.

5. The power spectral density Sxx(ω) of a stationary random process with nonzero mean always shows a Dirac impulse at ω = 0.

6. ρ = E{(x − x̄) · (y − ȳ)} / (σx · σy).

7.

Rxx(τ) = E{x(t) · x(t + τ)} = E{A · sin(ω0t + φ) · A · sin(ω0(t + τ) + φ)}
       = A²/2 · E{cos(ω0τ) − cos(2ω0t + ω0τ + 2φ)}
       = A²/2 · [cos(ω0τ) − ∫_{−∞}^{∞} cos(2ω0t + ω0τ + 2φ) · fφ(φ) dφ]
       = A²/2 · [cos(ω0τ) − 1/(2π) · ∫_0^{2π} cos(2ω0t + ω0τ + 2φ) dφ]     (the integral equals 0)
       = A²/2 · cos(ω0τ).

Since x(t) is stationary, the pdf does not depend on time; for every time instant the pdf is identical. For t = 0 the random variable x(0) = A · sin(φ) is a nonlinear function of the random variable φ. Thus, with the following formula the pdf of the random variable x(0) = x is obtained (the derivation below assumes A = 1):


fx(x) = ∑_i fφ(φi(x)) / |g′(φi(x))|     with φi(x) the roots of the function x = sin(φ), i.e. all values φ which lead to the same x.

x = g(φ) = sin(φ)  ⇒  φi = arcsin(x), with 2 roots for 0 ≤ φ ≤ 2π.

g′(φ) = cos(φ) = √(1 − sin²(φ))  ⇒  |g′(φi(x))| = √(1 − x²)

fφ(φ) = 1/(2π) · rect(φ/(2π) − 1/2) = const = 1/(2π) for 0 ≤ φ ≤ 2π and 0 elsewhere,

and −1 ≤ x = g(φ) = sin(φ) ≤ 1, hence

fx(x) = 2 / (2π · √(1 − x²)) · rect(x/2) = 1 / (π · √(1 − x²)) · rect(x/2)

The following sketch in fig. 9 shows this pdf with poles at −1 and 1.

[Figure: sketch of fx(x) over x ∈ [−1, 1], diverging at x = ±1.]

Fig. 9: Probability density fx(x) = fx(t)(x, t)
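The density derived in problem 7 can also be checked numerically by drawing many uniformly distributed phases; a sketch (A = 1, as in the derivation above):

% Sketch: histogram check of f_x(x) = 1/(pi*sqrt(1-x^2)) for x = sin(phi)
% with phi uniformly distributed in [0, 2*pi).
phi      = 2*pi*rand(1e6, 1);               % uniformly distributed phase
x        = sin(phi);                        % A = 1 assumed
dw       = 0.02;
edges    = -1:dw:1;
centers  = edges(1:end-1) + dw/2;
pdf_est  = histcounts(x, edges) / (numel(x) * dw);
pdf_theo = 1 ./ (pi * sqrt(1 - centers.^2));
plot(centers, pdf_est, centers, pdf_theo);
legend('histogram estimate', 'theoretical pdf');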