
Signals and Systems

Collection Editor: Marco F. Duarte

Authors:

Thanos Antoulas, Richard Baraniuk, Dan Calderon, Marco F. Duarte, Catherine Elder, Natesh Ganesh, Michael Haag, Don Johnson, Stephen Kruzick, Matthew Moravec, Justin Romberg, Louis Scharf, Melissa Selik, JP Slavinsky, Dante Soares

Online: <http://legacy.cnx.org/content/col11557/1.6/>

OpenStax-CNX


This selection and arrangement of content as a collection is copyrighted by Marco F. Duarte. It is licensed under the Creative Commons Attribution License 3.0 (http://creativecommons.org/licenses/by/3.0/).

Collection structure revised: November 22, 2013

PDF generated: August 18, 2014

For copyright and attribution information for the modules contained in this collection, see p. 206.


Table of Contents

1 Review of Prerequisites: Complex Numbers
1.1 Geometry of Complex Numbers ...... 1
1.2 Algebra of Complex Numbers ...... 5
1.3 Representing Complex Numbers in a Vector Space ...... 11

2 Continuous-Time Signals
2.1 Signal Classifications and Properties ...... 17
2.2 Common Continuous Time Signals ...... 23
2.3 Signal Operations ...... 26
2.4 Energy and Power of Continuous-Time Signals ...... 29
2.5 Continuous Time Impulse Function ...... 32
2.6 Continuous Time Complex Exponential ...... 34

3 Introduction to Systems
3.1 Introduction to Systems ...... 39
3.2 System Classifications and Properties ...... 41
3.3 Linear Time Invariant Systems ...... 45

4 Time Domain Analysis of Continuous Time Systems
4.1 Continuous Time Systems ...... 53
4.2 Continuous Time Impulse Response ...... 54
4.3 BIBO Stability of Continuous Time Systems ...... 57
4.4 Continuous-Time Convolution ...... 58
4.5 Properties of Continuous Time Convolution ...... 62

5 Introduction to Fourier Analysis
5.1 Introduction to Fourier Analysis ...... 67
5.2 Continuous Time Periodic Signals ...... 68
5.3 Eigenfunctions of Continuous-Time LTI Systems ...... 70
5.4 Continuous Time Fourier Series (CTFS) ...... 71

6 Continuous Time Fourier Transform (CTFT)
6.1 Continuous Time Aperiodic Signals ...... 77
6.2 Continuous Time Fourier Transform (CTFT) ...... 78
6.3 Properties of the CTFT ...... 81
6.4 Common Fourier Transforms ...... 85
6.5 Continuous Time Convolution and the CTFT ...... 87
Solutions ...... 90

7 Discrete-Time Signals
7.1 Common Discrete Time Signals ...... 91
7.2 Energy and Power of Discrete-Time Signals ...... 93
7.3 Discrete-Time Signal Operations ...... 95
7.4 Discrete Time Impulse Function ...... 98
7.5 Discrete Time Complex Exponential ...... 100

8 Time Domain Analysis of Discrete Time Systems
8.1 Discrete Time Systems ...... 105
8.2 Discrete Time Impulse Response ...... 106
8.3 Discrete-Time Convolution ...... 109
8.4 Properties of Discrete Time Convolution ...... 115
8.5 BIBO Stability of Discrete Time Systems ...... 118

9 Discrete Time Fourier Transform (DTFT)
9.1 Discrete Time Aperiodic Signals ...... 121
9.2 Eigenfunctions of Discrete Time LTI Systems ...... 124
9.3 Discrete Time Fourier Transform (DTFT) ...... 125
9.4 Properties of the DTFT ...... 128
9.5 Common Discrete Time Fourier Transforms ...... 131
9.6 Discrete Time Convolution and the DTFT ...... 133

10 Sampling and Reconstruction
10.1 Signal Sampling ...... 137
10.2 Sampling Theorem ...... 139
10.3 Signal Reconstruction ...... 142
10.4 Perfect Reconstruction ...... 147
10.5 Aliasing Phenomena ...... 150
10.6 Anti-Aliasing Filters ...... 153
10.7 Changing Sampling Rates in Discrete Time ...... 155
10.8 Discrete Time Processing of Continuous Time Signals ...... 161

11 Capstone Signal Processing Topics
11.1 Discrete Fourier Transform (DFT) ...... 167
11.2 DFT: Fast Fourier Transform ...... 169
11.3 The Fast Fourier Transform (FFT) ...... 170
11.4 Matched Filter Detector ...... 174
Solutions ...... 184

12 Appendix: Mathematical Pot-Pourri
12.1 Basic Linear Algebra ...... 185
12.2 Linear Constant Coefficient Difference Equations ...... 190
12.3 Solving Linear Constant Coefficient Difference Equations ...... 191
Solutions ...... 195

13 Appendix: Viewing Interactive Content
13.1 Viewing Embedded LabVIEW Content in Connexions ...... 198
13.2 Getting Started With Mathematica ...... 198

Glossary ...... 201
Index ...... 203
Attributions ...... 206


Chapter 1

Review of Prerequisites: Complex Numbers

1.1 Geometry of Complex Numbers1

note: This module is part of the collection, A First Course in Electrical and Computer Engineering. The LaTeX source files for this collection were created using an optical character recognition technology, and because of this process there may be more errors than usual. Please contact us if you discover any errors.

The most fundamental new idea in the study of complex numbers is the imaginary number j. This imaginary number is defined to be the square root of −1:

j =√−1 (1.1)

j2 = −1. (1.2)

The imaginary number j is used to build a complex number z from two real numbers x and y in the following way:

z = x+ jy. (1.3)

We say that the complex number z has real part x and imaginary part y:

z = Re [z] + jIm [z] (1.4)

Re [z] = x; Im [z] = y. (1.5)

In MATLAB, the variable x is denoted by real(z), and the variable y is denoted by imag(z). In communication theory, x is called the in-phase component of z, and y is called the quadrature component. We call z = x + jy the Cartesian representation of z, with real component x and imaginary component y. We say that the Cartesian pair (x, y) codes the complex number z.

We may plot the complex number z on the plane as in Figure 1.1. We call the horizontal axis the real axis and the vertical axis the imaginary axis. The plane is called the complex plane. The radius and angle of the line to the point z = x + jy are

r = √(x^2 + y^2)   (1.6)

1This content is available online at <http://legacy.cnx.org/content/m21411/1.6/>.


θ = tan^{−1}(y/x).   (1.7)

See Figure 1.1. In MATLAB, r is denoted by abs(z), and θ is denoted by angle(z).

Figure 1.1: Cartesian and Polar Representations of the Complex Number z

The original Cartesian representation is obtained from the radius r and angle θ as follows:

x = rcosθ (1.8)

y = r sin θ. (1.9)

The complex number z may therefore be written as

z = x+ jy

= rcosθ + jrsinθ

= r (cosθ + j sin θ) .

(1.10)

The complex number cos θ + j sin θ is, itself, a number that may be represented on the complex plane and coded with the Cartesian pair (cos θ, sin θ). This is illustrated in Figure 1.2. The radius and angle to the point z = cos θ + j sin θ are 1 and θ. Can you see why?

Figure 1.2: The Complex Number cosθ + jsinθ


The complex number cos θ + j sin θ is of such fundamental importance to our study of complex numbers that we give it the special symbol e^{jθ}:

e^{jθ} = cos θ + j sin θ.   (1.11)

As illustrated in Figure 1.2, the complex number e^{jθ} has radius 1 and angle θ. With the symbol e^{jθ}, we may write the complex number z as

z = r e^{jθ}.   (1.12)

We call z = r e^{jθ} a polar representation for the complex number z. We say that the polar pair r∠θ codes the complex number z. In this polar representation, we define |z| = r to be the magnitude of z and arg(z) = θ to be the angle, or phase, of z:

|z| = r (1.13)

arg (z) = θ. (1.14)

With these definitions of magnitude and phase, we can write the complex number z as

z = |z| e^{j arg(z)}.   (1.15)

Let's summarize our ways of writing the complex number z and record the corresponding geometric codes:

z = x + jy = r e^{jθ} = |z| e^{j arg(z)}
      ↓              ↓
    (x, y)          r∠θ              (1.16)

In "Roots of Quadratic Equations"2 we show that the denition ejθ = cosθ + jsinθ is more than symbolic.We show, in fact, that ejθ is just the familiar function ex evaluated at the imaginary argument x = jθ. Wecall ejθ a complex exponential, meaning that it is an exponential with an imaginary argument.

Exercise 1.1.1
Prove (j)^{2n} = (−1)^n and (j)^{2n+1} = (−1)^n j. Evaluate j^3, j^4, j^5.

Exercise 1.1.2
Prove e^{j[(π/2)+m2π]} = j, e^{j[(3π/2)+m2π]} = −j, e^{j(0+m2π)} = 1, and e^{j(π+m2π)} = −1. Plot these identities on the complex plane. (Assume m is an integer.)

Exercise 1.1.3
Find the polar representation z = r e^{jθ} for each of the following complex numbers:

a. z = 1 + j0;
b. z = 0 + j1;
c. z = 1 + j1;
d. z = −1 − j1.

Plot the points on the complex plane.

Exercise 1.1.4
Find the Cartesian representation z = x + jy for each of the following complex numbers:

a. z = √2 e^{jπ/2};
b. z = √2 e^{jπ/4};
c. z = e^{j3π/4};
d. z = √2 e^{j3π/2}.

Plot the points on the complex plane.

Exercise 1.1.5
The following geometric codes represent complex numbers. Decode each by writing down the corresponding complex number z:

a. (0.7, −0.1)   z = ?
b. (−1.0, 0.5)   z = ?
c. 1.6∠π/8   z = ?
d. 0.4∠7π/8   z = ?

Exercise 1.1.6
Show that Im[jz] = Re[z] and Re[−jz] = Im[z].

Demo 1.1 (MATLAB). Run the following MATLAB program in order to compute and plot the complex number e^{jθ} for θ = i2π/360, i = 1, 2, ..., 360:

j=sqrt(-1)

n=360

for i=1:n,circle(i)=exp(j*2*pi*i/n);end;

axis('square')

plot(circle)

Replace the explicit for loop of line 3 by the implicit loop

circle=exp(j*2*pi*[1:n]/n);

to speed up the calculation. You can see from Figure 1.3 that the complex number e^{jθ}, evaluated at angles θ = 2π/360, 2(2π/360), ..., turns out complex numbers that lie at angle θ and radius 1. We say that e^{jθ} is a complex number that lies on the unit circle. We will have much more to say about the unit circle in Chapter 2.


Figure 1.3: The Complex Numbers ejθ for 0 ≤ θ ≤ 2π (Demo 1.1)

1.2 Algebra of Complex Numbers3

note: This module is part of the collection, A First Course in Electrical and Computer Engineering. The LaTeX source files for this collection were created using an optical character recognition technology, and because of this process there may be more errors than usual. Please contact us if you discover any errors.

The complex numbers form a mathematical field on which the usual operations of addition and multiplication are defined. Each of these operations has a simple geometric interpretation.

1.2.1 Addition and Multiplication.

The complex numbers z1 and z2 are added according to the rule

z1 + z2 = (x1 + jy1) + (x2 + jy2) = (x1 + x2) + j(y1 + y2).   (1.17)

3This content is available online at <http://legacy.cnx.org/content/m21408/1.6/>.


We say that the real parts add and the imaginary parts add. As illustrated in Figure 1.4, the complex number z1 + z2 is computed from a parallelogram rule, wherein z1 + z2 lies on the node of a parallelogram formed from z1 and z2.

Exercise 1.2.1
Let z1 = r1 e^{jθ1} and z2 = r2 e^{jθ2}. Find a polar formula z3 = r3 e^{jθ3} for z3 = z1 + z2 that involves only the variables r1, r2, θ1, and θ2. The formula for r3 is the law of cosines.

The product of z1 and z2 is

z1 z2 = (x1 + jy1)(x2 + jy2) = (x1 x2 − y1 y2) + j(y1 x2 + x1 y2).   (1.18)

Figure 1.4: Adding Complex Numbers

If the polar representations for z1 and z2 are used, then the product may be written as 4

z1 z2 = r1 e^{jθ1} r2 e^{jθ2}
      = (r1 cos θ1 + j r1 sin θ1)(r2 cos θ2 + j r2 sin θ2)
      = (r1 cos θ1 r2 cos θ2 − r1 sin θ1 r2 sin θ2) + j(r1 sin θ1 r2 cos θ2 + r1 cos θ1 r2 sin θ2)
      = r1 r2 cos(θ1 + θ2) + j r1 r2 sin(θ1 + θ2)
      = r1 r2 e^{j(θ1+θ2)}.   (1.19)

We say that the magnitudes multiply and the angles add. As illustrated in Figure 1.5, the product z1 z2 lies at the angle (θ1 + θ2).

4We have used the trigonometric identities cos(θ1 + θ2) = cos θ1 cos θ2 − sin θ1 sin θ2 and sin(θ1 + θ2) = sin θ1 cos θ2 + cos θ1 sin θ2 to derive this result.


Figure 1.5: Multiplying Complex Numbers
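A quick MATLAB check of (1.19), in the spirit of the demos in this chapter and using arbitrary test values for z1 and z2, confirms that the magnitudes multiply and the angles add:

j=sqrt(-1)
z1=2*exp(j*pi/6), z2=0.5*exp(j*pi/3)      % arbitrary test values
p=z1*z2
abs(p), abs(z1)*abs(z2)                   % magnitudes multiply
angle(p), angle(z1)+angle(z2)             % angles add (modulo 2*pi)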

Rotation. There is a special case of complex multiplication that will become very important in our study of phasors in the chapter on Phasors5. When z1 is the complex number z1 = r1 e^{jθ1} and z2 is the complex number z2 = e^{jθ2}, then the product of z1 and z2 is

z1 z2 = z1 e^{jθ2} = r1 e^{j(θ1+θ2)}.   (1.20)

As illustrated in Figure 1.6, z1z2 is just a rotation of z1 through the angle θ2.

Figure 1.6: Rotation of Complex Numbers

Exercise 1.2.2
Begin with the complex number z1 = x + jy = r e^{jθ}. Compute the complex number z2 = jz1 in its Cartesian and polar forms. The complex number z2 is sometimes called perp(z1). Explain why by writing perp(z1) as z1 e^{jθ2}. What is θ2? Repeat this problem for z3 = −jz1.

5"Phasors: Introduction" <http://legacy.cnx.org/content/m21469/latest/>


Powers. If the complex number z1 multiplies itself N times, then the result is

(z1)^N = r1^N e^{jNθ1}.   (1.21)

This result may be proved with a simple induction argument. Assume z1^k = r1^k e^{jkθ1}. (The assumption is true for k = 1.) Then use the recursion z1^{k+1} = z1^k z1 = r1^{k+1} e^{j(k+1)θ1}. Iterate this recursion (or induction) until k + 1 = N. Can you see that, as n ranges from n = 1, ..., N, the angle of z1^n ranges from θ1 to 2θ1, ..., to Nθ1 and the radius ranges from r1 to r1^2, ..., to r1^N? This result is explored more fully in Problem 1.19.

Complex Conjugate. Corresponding to every complex number z = x + jy = r e^{jθ} is the complex conjugate

z* = x − jy = r e^{−jθ}.   (1.22)

The complex number z and its complex conjugate are illustrated in Figure 1.7. The recipe for finding complex conjugates is to change j to −j. This changes the sign of the imaginary part of the complex number.

Figure 1.7: A Complex Variable and Its Complex Conjugate

Magnitude Squared. The product of z and its complex conjugate is called the magnitude squared of z and is denoted by |z|^2:

|z|^2 = z* z = (x − jy)(x + jy) = x^2 + y^2 = r e^{−jθ} r e^{jθ} = r^2.   (1.23)

Note that |z| = r is the radius, or magnitude, that we defined in "Geometry of Complex Numbers" (Section 1.1).
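In MATLAB the complex conjugate is computed with conj(z). A short sketch, using an arbitrary value of z, checks (1.23):

j=sqrt(-1)
z=3+j*4                  % an arbitrary test value
conj(z)                  % 3 - j4
z*conj(z)                % magnitude squared: 25
abs(z)^2                 % agrees with r^2 = 25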

Exercise 1.2.3
Write z* as z* = zw. Find w in its Cartesian and polar forms.

Exercise 1.2.4
Prove that angle(z2 z1*) = θ2 − θ1.

Exercise 1.2.5
Show that the real and imaginary parts of z = x + jy may be written as

Re[z] = (1/2)(z + z*)   (1.24)

Im[z] = (1/(2j))(z − z*).   (1.25)


Commutativity, Associativity, and Distributivity. The complex numbers commute, associate, and distribute under addition and multiplication as follows:

z1 + z2 = z2 + z1
z1 z2 = z2 z1   (1.26)

(z1 + z2) + z3 = z1 + (z2 + z3)
z1 (z2 z3) = (z1 z2) z3
z1 (z2 + z3) = z1 z2 + z1 z3.   (1.27)

Identities and Inverses. In the field of complex numbers, the complex number 0 + j0 (denoted by 0) plays the role of an additive identity, and the complex number 1 + j0 (denoted by 1) plays the role of a multiplicative identity:

z + 0 = z = 0 + z
z1 = z = 1z.   (1.28)

In this field, the complex number −z = −x + j(−y) is the additive inverse of z, and the complex number z^{-1} = x/(x^2 + y^2) + j(−y/(x^2 + y^2)) is the multiplicative inverse:

z + (−z) = 0
z z^{-1} = 1.   (1.29)
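A brief MATLAB sketch, with an arbitrary test value of z, verifies the inverse formulas above:

j=sqrt(-1)
z=1+j*2                                % arbitrary test value
zinv=1/z                               % MATLAB's complex division
x=real(z); y=imag(z);
x/(x^2+y^2) + j*(-y/(x^2+y^2))         % matches zinv
z*zinv                                 % multiplicative identity: 1
z+(-z)                                 % additive identity: 0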

Exercise 1.2.6
Show that the additive inverse of z = r e^{jθ} may be written as r e^{j(θ+π)}.

Exercise 1.2.7
Show that the multiplicative inverse of z may be written as

z^{-1} = (1/(z* z)) z* = (1/(x^2 + y^2)) (x − jy).   (1.30)

Show that z* z is real. Show that z^{-1} may also be written as

z^{-1} = r^{-1} e^{−jθ}.   (1.31)

Plot z and z^{-1} for a representative z.

Exercise 1.2.8
Prove (j)^{−1} = −j.

Exercise 1.2.9
Find z^{−1} when z = 1 + j1.

Exercise 1.2.10
Prove (z^{-1})* = (z*)^{-1} = r^{-1} e^{jθ} = (1/(z* z)) z. Plot z and (z^{-1})* for a representative z.

Exercise 1.2.11
Find all of the complex numbers z with the property that jz = −z*. Illustrate these complex numbers on the complex plane.

Demo 1.2 (MATLAB). Create and run the following script file (name it Complex Numbers)6

6If you are using PC-MATLAB, you will need to name your file cmplxnos.m.


clear, clf                             % clear variables and the figure window (clg in older MATLAB)
j=sqrt(-1)
z1=1+j*.5,z2=2+j*1.5
z3=z1+z2,z4=z1*z2                      % sum and product
z5=conj(z1),z6=j*z2                    % conjugate and perp
axis([-4 4 -4 4]),axis('square'),plot(z1,'o')
hold on
plot(z2,'o'),plot(z3,'+'),plot(z4,'*'),
plot(z5,'x'),plot(z6,'x')

Figure 1.8: Complex Numbers (Demo 1.2)

With the help of Appendix 1, you should be able to annotate each line of this program. View your graphics display to verify the rules for add, multiply, conjugate, and perp. See Figure 1.8.

Exercise 1.2.12
Prove that z^0 = 1.
Exercise 1.2.13
(MATLAB) Choose z1 = 1.05 e^{j2π/16} and z2 = 0.95 e^{j2π/16}. Write a MATLAB program to compute and plot z1^n and z2^n for n = 1, 2, ..., 32. You should observe a figure like Figure 1.9.


Figure 1.9: Powers of z

1.3 Representing Complex Numbers in a Vector Space7

note: This module is part of the collection, A First Course in Electrical and Computer Engineering. The LaTeX source files for this collection were created using an optical character recognition technology, and because of this process there may be more errors than usual. Please contact us if you discover any errors.

So far we have coded the complex number z = x + jy with the Cartesian pair (x, y) and with the polar pair (r∠θ). We now show how the complex number z may be coded with a two-dimensional vector z and show how this new code may be used to gain insight about complex numbers.

Coding a Complex Number as a Vector. We code the complex number z = x + jy with the two-dimensional vector z = [x; y]:

x + jy = z ⇔ z = [x; y].   (1.32)

We plot this vector as in Figure 1.10. We say that the vector z belongs to a vector space. This means that vectors may be added and scaled according to the rules

z1 + z2 = [x1 + x2; y1 + y2]   (1.33)

az = [ax; ay].   (1.34)

7This content is available online at <http://legacy.cnx.org/content/m21414/1.6/>.


Figure 1.10: The Vector z Coding the Complex Number z

Furthermore, it means that an additive inverse −z, an additive identity 0, and a multiplicative identity 1 all exist:

z + (−z) = 0 (1.35)

1z = z. (1.36)

The vector 0 is 0 = [0; 0].

Prove that vector addition and scalar multiplication satisfy these properties of commutation, association, and distribution:

z1 + z2 = z2 + z1 (1.37)

(z1 + z2) + z3 = z1 + (z2 + z3) (1.38)

a (bz) = (ab) z (1.39)

a (z1 + z2) = az1 + az2. (1.40)

Inner Product and Norm. The inner product between two vectors z1 and z2 is defined to be the real number

(z1, z2) = x1x2 + y1y2. (1.41)

We sometimes write this inner product as the vector product (more on this in Linear Algebra8)

(z1, z2) = z1^T z2 = [x1 y1] [x2; y2] = (x1 x2 + y1 y2).   (1.42)

Exercise 1.3.1
Prove (z1, z2) = (z2, z1).

8"Linear Algebra: Introduction" <http://legacy.cnx.org/content/m21454/latest/>


When z1 = z2 = z, then the inner product between z and itself is the norm squared of z:

||z||2 = (z, z) = x2 + y2. (1.43)

These properties of vectors seem abstract. However, as we now show, they may be used to develop a vector calculus for doing complex arithmetic.

A Vector Calculus for Complex Arithmetic. The addition of two complex numbers z1 and z2 corresponds to the addition of the vectors z1 and z2:

z1 + z2 ⇔ z1 + z2 = [x1 + x2; y1 + y2]   (1.44)

The scalar multiplication of the complex number z2 by the real number x1 corresponds to scalar multiplication of the vector z2 by x1:

x1 z2 ⇔ x1 [x2; y2] = [x1 x2; x1 y2].   (1.45)

Similarly, the multiplication of the complex number z2 by the real number y1 is

y1 z2 ↔ y1 [x2; y2] = [y1 x2; y1 y2].   (1.46)

The complex product z1 z2 = (x1 + jy1) z2 is therefore represented as

z1 z2 ↔ [x1 x2 − y1 y2; x1 y2 + y1 x2].   (1.47)

This representation may be written as the inner product

z1 z2 = z2 z1 ↔ [(v, z1); (w, z1)]   (1.48)

where v and w are the vectors v = [x2; −y2] and w = [y2; x2]. By defining the matrix

[x2 −y2; y2 x2],   (1.49)

we can represent the complex product z1 z2 as a matrix-vector multiply (more on this in Linear Algebra9):

z1 z2 = z2 z1 ↔ [x2 −y2; y2 x2] [x1; y1].   (1.50)

With this representation, we can represent rotation as

z e^{jθ} = e^{jθ} z ↔ [cos θ −sin θ; sin θ cos θ] [x; y].   (1.51)

9"Linear Algebra: Introduction" <http://legacy.cnx.org/content/m21454/latest/>


We call the matrix [cos θ −sin θ; sin θ cos θ] a rotation matrix.
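A short MATLAB sketch, with arbitrary angles, checks that multiplying the rotation matrix into the vector code of a complex number gives the same point as the complex product e^{jθ} z:

j=sqrt(-1)
theta=pi/3; phi=pi/8;                          % arbitrary angles
z=[cos(phi); sin(phi)];                        % vector coding the complex number e^{j*phi}
R=[cos(theta) -sin(theta); sin(theta) cos(theta)];
R*z                                            % rotated vector
w=exp(j*theta)*exp(j*phi); [real(w); imag(w)]  % same point, computed as a complex product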

Exercise 1.3.2
Call R(θ) the rotation matrix:

R(θ) = [cos θ −sin θ; sin θ cos θ].   (1.52)

Show that R(−θ) rotates by (−θ). What can you say about R(−θ) w when w = R(θ) z?

Exercise 1.3.3
Represent the complex conjugate of z as

z* ↔ [a b; c d] [x; y]   (1.53)

and find the elements a, b, c, and d of the matrix.

Inner Product and Polar Representation. From the norm of a vector, we derive a formula for the magnitude of z in the polar representation z = r e^{jθ}:

r = (x^2 + y^2)^{1/2} = ||z|| = (z, z)^{1/2}.   (1.54)

If we define the coordinate vectors e1 = [1; 0] and e2 = [0; 1], then we can represent the vector z as

z = (z, e1) e1 + (z, e2) e2.   (1.55)

See Figure 1.11. From the figure it is clear that the cosine and sine of the angle θ are

cos θ = (z, e1)/||z||;  sin θ = (z, e2)/||z||   (1.56)

Figure 1.11: Representation of z in its Natural Basis


This gives us another representation for any vector z:

z = ||z|| cos θ e1 + ||z|| sin θ e2.   (1.57)

The inner product between two vectors z1 and z2 is now

(z1, z2) = [(z1, e1) e1^T  (z1, e2) e2^T] [(z2, e1) e1; (z2, e2) e2]
         = (z1, e1)(z2, e1) + (z1, e2)(z2, e2)
         = ||z1|| cos θ1 ||z2|| cos θ2 + ||z1|| sin θ1 ||z2|| sin θ2.   (1.58)

It follows that cos(θ2 − θ1) = cos θ2 cos θ1 + sin θ1 sin θ2 may be written as

cos(θ2 − θ1) = (z1, z2) / (||z1|| ||z2||)   (1.59)

This formula shows that the cosine of the angle between two vectors z1 and z2, which is, of course, the cosine of the angle of z2 z1*, is the ratio of the inner product to the product of the norms.

Exercise 1.3.4
Prove the Schwarz and triangle inequalities and interpret them:

(Schwarz) (z1, z2)^2 ≤ ||z1||^2 ||z2||^2   (1.60)

(triangle) ||z1 − z2|| ≤ ||z1 − z3|| + ||z2 − z3||.   (1.61)


Chapter 2

Continuous-Time Signals

2.1 Signal Classifications and Properties1

2.1.1 Introduction

This module will begin our study of signals and systems by laying out some of the fundamentals of signal classification. It is essentially an introduction to the important definitions and properties that are fundamental to the discussion of signals and systems, with a brief discussion of each.

2.1.2 Classifications of Signals

2.1.2.1 Continuous-Time vs. Discrete-Time

As the names suggest, this classification is determined by whether or not the time axis is discrete (countable) or continuous (Figure 2.1). A continuous-time signal will contain a value for all real numbers along the time axis. In contrast to this, a discrete-time signal2, often created by sampling a continuous signal, will only have values at equally spaced intervals along the time axis.

Figure 2.1

1This content is available online at <http://legacy.cnx.org/content/m47271/1.2/>.
2"Discrete-Time Signals" <http://legacy.cnx.org/content/m0009/latest/>


2.1.2.2 Analog vs. Digital

The difference between analog and digital is similar to the difference between continuous-time and discrete-time. However, in this case the difference involves the values of the function. Analog corresponds to a continuous set of possible function values, while digital corresponds to a discrete set of possible function values. A common example of a digital signal is a binary sequence, where the values of the function can only be one or zero.

Figure 2.2

2.1.2.3 Periodic vs. Aperiodic

Periodic signals3 repeat with some period T, while aperiodic, or nonperiodic, signals do not (Figure 2.3). We can define a periodic function through the following mathematical expression, where t can be any number and T is a positive constant:

f (t) = f (T + t) (2.1)

The fundamental period of our function, f(t), is the smallest value of T that still allows (2.1) to be true.

3"Continuous Time Periodic Signals" <http://legacy.cnx.org/content/m10744/latest/>


Figure 2.3: (a) A periodic signal with period T0 (b) An aperiodic signal

2.1.2.4 Finite vs. Infinite Length

As the name implies, signals can be characterized as to whether they have a finite or infinite length set of values. Most finite length signals are used when dealing with discrete-time signals or a given sequence of values. Mathematically speaking, f(t) is a finite-length signal if it is nonzero only over a finite interval

t1 < t < t2

where t1 > −∞ and t2 < ∞. An example can be seen in Figure 2.4. Similarly, an infinite-length signal, f(t), is defined as nonzero over all real numbers:

−∞ < t < ∞

Figure 2.4: Finite-Length Signal. Note that it only has nonzero values on a set, finite interval.


2.1.2.5 Causal vs. Anticausal vs. Noncausal

Causal signals are signals that are zero for all negative time, while anticausal are signals that are zero for all positive time. Noncausal signals are signals that have nonzero values in both positive and negative time (Figure 2.5).

Figure 2.5: (a) A causal signal (b) An anticausal signal (c) A noncausal signal


2.1.2.6 Even vs. Odd

An even signal is any signal f such that f(t) = f(−t). Even signals can be easily spotted as they are symmetric around the vertical axis. An odd signal, on the other hand, is a signal f such that f(t) = −f(−t) (Figure 2.6).

Figure 2.6: (a) An even signal (b) An odd signal

Using the definitions of even and odd signals, we can show that any signal can be written as a combination of an even and odd signal. That is, every signal has an odd-even decomposition. To demonstrate this, we have to look no further than a single equation.

f(t) = (1/2)(f(t) + f(−t)) + (1/2)(f(t) − f(−t))   (2.2)

By multiplying and adding this expression out, it can be shown to be true. Also, it can be shown that f(t) + f(−t) fulfills the requirement of an even function, while f(t) − f(−t) fulfills the requirement of an odd function (Figure 2.7).

Example 2.1


Figure 2.7: (a) The signal we will decompose using odd-even decomposition (b) Even part: e(t) = (1/2)(f(t) + f(−t)) (c) Odd part: o(t) = (1/2)(f(t) − f(−t)) (d) Check: e(t) + o(t) = f(t)
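The decomposition is easy to check numerically. The following MATLAB sketch uses an arbitrary test signal sampled on a symmetric time grid, so that reversing the sample order gives samples of f(−t):

t=-5:0.01:5;
f=exp(-t).*(t>=0);          % an arbitrary test signal (one-sided exponential)
fr=fliplr(f);               % samples of f(-t) on the symmetric grid
e=(f+fr)/2; o=(f-fr)/2;     % even and odd parts from (2.2)
max(abs(e+o-f))             % check: e(t)+o(t) reconstructs f(t), so this is 0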


Example 2.2
Consider the signal defined for all real t described by

f(t) =
    sin(2πt)/t   t ≥ 1
    0            t < 1      (2.3)

This signal is continuous time, analog, aperiodic, infinite length, causal, and neither even nor odd.

2.1.3 Signal Classifications Summary

This module describes just some of the many ways in which signals can be classified. They can be continuous time or discrete time, analog or digital, periodic or aperiodic, finite or infinite, and deterministic or random. We can also divide them based on their causality and symmetry properties. There are other ways to classify signals, such as boundedness, handedness, and continuity, that are not discussed here but will be described in subsequent modules.

2.2 Common Continuous Time Signals4

2.2.1 Introduction

Before looking at this module, hopefully you have an idea of what a signal is and what basic classifications and properties a signal can have. In review, a signal is a function defined with respect to an independent variable. This variable is often time but could represent any number of things. Mathematically, continuous time analog signals have continuous independent and dependent variables. This module will describe some useful continuous time analog signals.

2.2.2 Important Continuous Time Signals

2.2.2.1 Sinusoids

One of the most important elemental signals that you will deal with is the real-valued sinusoid. In its continuous-time form, we write the general expression as

A cos(ωt + φ)   (2.4)

where A is the amplitude, ω is the frequency, and φ is the phase. Thus, the period of the sinusoid is

T = 2π/ω   (2.5)

4This content is available online at <http://legacy.cnx.org/content/m47606/1.3/>.


Figure 2.8: Sinusoid with A = 2, ω = 2, and φ = 0.
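A plot like Figure 2.8 can be generated with a few lines of MATLAB (the time axis chosen here is arbitrary):

A=2; w=2; phi=0;                 % the parameters used in Figure 2.8
t=-pi:0.01:2*pi;                 % an arbitrary time axis
x=A*cos(w*t+phi);
plot(t,x), xlabel('t')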

2.2.2.2 Unit Step

Another very basic signal is the unit-step function, defined as

u(t) =
    0 if t < 0
    1 if t ≥ 0      (2.6)

Figure 2.9: Continuous-Time Unit-Step Function

The step function is a useful tool for testing and for defining other signals. For example, when different shifted versions of the step function are multiplied by other signals, one can select a certain portion of the signal and zero out the rest.


2.2.2.3 Unit Pulse

Many engineers interpret the unit step function as the representation of turning on a switch and leaving it on. The unit-pulse function can be thought of as turning a switch on and off after a unit of time. It is defined as

p(t) =
    1 if 0 ≤ t ≤ 1
    0 if t < 0 or t > 1      (2.7)

In this course the pulse is typically centered at t = 0 so that it is an even function.

Figure 2.10: Continuous-Time Unit-Pulse Function

Note that the pulse can be easily written in terms of unit step functions as p (t) = u (t+ 0.5)−u (t− 0.5)

2.2.2.4 Triangle Function

The last function we will introduce is the triangle function, which represents an input that increases and then decreases linearly with time. It is defined as

Λ(t) =
    t + 1  if −1 ≤ t ≤ 0
    1 − t  if 0 ≤ t ≤ 1
    0      if t < −1 or t > 1      (2.8)


Figure 2.11: Continuous-Time Triangle Function
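The step, pulse, and triangle are easy to build in MATLAB as anonymous functions acting on a sampled time axis; the following sketch uses the centered pulse p(t) = u(t + 0.5) − u(t − 0.5) mentioned above and an arbitrary sampling grid:

u=@(t) double(t>=0);                                  % unit step (2.6)
p=@(t) u(t+0.5)-u(t-0.5);                             % unit pulse centered at t=0
tri=@(t) (t+1).*(t>=-1 & t<=0)+(1-t).*(t>0 & t<=1);   % triangle function (2.8)
t=-2:0.01:2;
plot(t,u(t),t,p(t),t,tri(t))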

2.2.3 Common Continuous Time Signals Summary

Some of the most important and most frequently encountered signals have been discussed in this module. There are, of course, many other signals of significant consequence not discussed here. As you will see later, many of the other more complicated signals will be studied in terms of those listed here. Especially take note of the complex exponentials and unit impulse functions, which will be the key focus of several topics included in this course.

2.3 Signal Operations5

2.3.1 Introduction

This module will look at two signal operations affecting the time parameter of the signal, time shifting and time scaling. These operations are very common components to real-world systems and, as such, should be understood thoroughly when learning about signals and systems.

2.3.2 Manipulating the Time Parameter

2.3.2.1 Time Shifting

Time shifting is, as the name suggests, the shifting of a signal in time. This is done by adding or subtracting a quantity of the shift to the time variable in the function. Subtracting a fixed positive quantity from the time variable will shift the signal to the right (delay) by the subtracted quantity, while adding a fixed positive amount to the time variable will shift the signal to the left (advance) by the added quantity.

5This content is available online at <http://legacy.cnx.org/content/m10125/2.18/>.


Figure 2.12: f (t− T ) moves (delays) f to the right by T .

2.3.2.2 Time Scaling

Time scaling compresses or dilates a signal by multiplying the time variable by some quantity. If that quantity is greater than one, the signal becomes narrower and the operation is called compression, while if the quantity is less than one, the signal becomes wider and is called dilation.

Figure 2.13: f (at) compresses f by a.

Example 2.3
Given f(t) we would like to plot f(at − b). The figure below describes a method to accomplish this.


Figure 2.14: (a) Begin with f(t) (b) Then replace t with at to get f(at) (c) Finally, replace t with t − b/a to get f(a(t − b/a)) = f(at − b)
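The same procedure can be carried out numerically. The sketch below uses the triangle function from Section 2.2 as a stand-in for f and the arbitrary values a = 2, b = 1:

f=@(t) max(1-abs(t),0);              % triangle function standing in for f(t)
a=2; b=1;                            % arbitrary scale and shift
t=-2:0.01:2;
plot(t,f(t),t,f(a*t),t,f(a*t-b))     % original, scaled, then scaled and shifted
legend('f(t)','f(at)','f(at-b)')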

2.3.2.3 Time Reversal

A natural question to consider when learning about time scaling is: What happens when the time variable is multiplied by a negative number? The answer to this is time reversal. This operation is the reversal of the time axis, or flipping the signal over the y-axis.

Figure 2.15: Reverse the time axis


2.3.3 Time Scaling and Shifting Demonstration

Figure 2.16: Download6 or Interact (when online) with a Mathematica CDF demonstrating Discrete Harmonic Sinusoids.

2.3.4 Signal Operations Summary

Some common operations on signals affect the time parameter of the signal. One of these is time shifting in which a quantity is added to the time parameter in order to advance or delay the signal. Another is the time scaling in which the time parameter is multiplied by a quantity in order to dilate or compress the signal in time. In the event that the quantity involved in the latter operation is negative, time reversal occurs.

2.4 Energy and Power of Continuous-Time Signals7

From physics we've learned that energy is work and power is work per time unit. Energy was measured in Joules (J) and power in Watts (W). In signal processing, energy and power are defined more loosely without any necessary physical units, because the signals may represent very different physical entities. We can say that energy and power are a measure of the signal's "size".

6See the file at <http://legacy.cnx.org/content/m10125/latest/TimeshifterDrill_display.cdf>
7This content is available online at <http://legacy.cnx.org/content/m47273/1.4/>.


2.4.1 Signal Energy

2.4.1.1 Analog signals

Since we often think of a signal as a function of varying amplitude through time, it seems reasonable that a good measurement of the strength of a signal would be the area under the curve. However, this area may have a negative part. This negative part does not have less strength than a positive signal of the same size. This suggests either squaring the signal or taking its absolute value, then finding the area under that curve. It turns out that what we call the energy of a signal is the area under the squared signal; see Figure 2.17.

Energy - Analog signal: E_a = ∫_{−∞}^{∞} |x(t)|^2 dt

Note that we have used the squared magnitude (absolute value) in case the signal is complex valued. If the signal is real, we can leave out the magnitude operation.

Figure 2.17: Sketch of energy calculation (a) Signal x(t) (b) The energy of x(t) is the shaded region
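As a concrete, hypothetical example, the one-sided exponential x(t) = e^{−t} u(t) has energy ∫_0^∞ e^{−2t} dt = 1/2, which can be checked numerically in MATLAB:

x=@(t) exp(-t).*(t>=0);                % a signal with finite energy, zero for t<0
Ea=integral(@(t) abs(x(t)).^2,0,Inf)   % signal is zero for t<0, so integrate from 0; returns about 0.5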

2.4.2 Signal Power

Our definition of energy seems reasonable, and it is. However, what if the signal does not decay fast enough? In this case we have infinite energy for any such signal. Does this mean that a fifty hertz sine wave feeding into your headphones is as strong as the fifty hertz sine wave coming out of your outlet? Obviously not. This is what leads us to the idea of signal power, which in such cases is a more adequate description.


Figure 2.18: Signal with infinite energy

2.4.2.1 Analog signals

For analog signals we define power as energy per time interval.

Power - analog signal: P_a = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} |x(t)|^2 dt

For periodic analog signals, the power needs to only be measured across a single period.

Power - periodic analog signal with period T0: P_a = (1/T0) ∫_{−T0/2}^{T0/2} |x(t)|^2 dt

Example 2.4
Given the signal x(t) = sin(2πt), shown in Figure 2.19, calculate the power for one period.

For the analog sine we have P_a = (1/1) ∫_0^1 sin^2(2πt) dt = 1/2.

Figure 2.19: Analog sine.
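The result of Example 2.4 is easy to confirm numerically, for instance with MATLAB's integral function:

Pa=integral(@(t) sin(2*pi*t).^2,0,1)   % returns 0.5000, matching 1/2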


2.5 Continuous Time Impulse Function8

2.5.1 Introduction

In engineering, we often deal with the idea of an action occurring at a point. Whether it be a force at a point in space or some other signal at a point in time, it becomes worthwhile to develop some way of quantitatively defining this. This leads us to the idea of a unit impulse, probably the second most important function, next to the complex exponential, in this systems and signals course.

2.5.2 Dirac Delta Function

The Dirac delta function, often referred to as the unit impulse or delta function, is the function that defines the idea of a unit impulse in continuous time. Informally, this function is one that is infinitesimally narrow, infinitely tall, yet integrates to one. Perhaps the simplest way to visualize this is as a rectangular pulse from a − ε/2 to a + ε/2 with a height of 1/ε. As we take the limit of this setup as ε approaches 0, we see that the width tends to zero and the height tends to infinity as the total area remains constant at one. The impulse function is often written as δ(t).

∫_{−∞}^{∞} δ(t) dt = 1   (2.9)

Figure 2.20: This is one way to visualize the Dirac Delta Function.

8This content is available online at <http://legacy.cnx.org/content/m10059/2.27/>.


Figure 2.21: Since it is quite difficult to draw something that is infinitely tall, we represent the Dirac with an arrow centered at the point it is applied. If we wish to scale it, we may write the value it is scaled by next to the point of the arrow. This is a unit impulse (no scaling).

Below is a brief list of a few important properties of the unit impulse without going into detail of their proofs.

Unit Impulse Properties

• δ(αt) = (1/|α|) δ(t)
• δ(t) = δ(−t)
• δ(t) = (d/dt) u(t), where u(t) is the unit step.
• f(t) δ(t) = f(0) δ(t)

The last of these is especially important as it gives rise to the sifting property of the Dirac delta function, which selects the value of a function at a specific time and is especially important in studying the relationship of an operation called convolution to time domain analysis of linear time invariant systems. The sifting property is shown and derived below.

∫_{−∞}^{∞} f(t) δ(t) dt = ∫_{−∞}^{∞} f(0) δ(t) dt = f(0) ∫_{−∞}^{∞} δ(t) dt = f(0)   (2.10)
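Since δ(t) is the limit of narrow unit-area pulses, the sifting property can be illustrated numerically by replacing δ(t) with a rectangular pulse of small width w and height 1/w. The test function and width below are arbitrary:

f=@(t) cos(t)+t.^2;                % an arbitrary smooth test function
w=0.01;                            % pulse width
d=@(t) (abs(t)<=w/2)/w;            % unit-area pulse approximating delta(t)
integral(@(t) f(t).*d(t),-w,w)     % approximately f(0) = 1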


2.5.3 Unit Impulse Limiting Demonstration

Figure 2.22: Click on the above thumbnail image (when online) to download an interactive Mathematica Player demonstrating the Continuous Time Impulse Function.

2.5.4 Continuous Time Unit Impulse Summary

The continuous time unit impulse function, also known as the Dirac delta function, is of great importance to the study of signals and systems. Informally, it is a function with infinite height and infinitesimal width that integrates to one, which can be viewed as the limiting behavior of a unit area rectangle as it narrows while preserving area. It has several important properties that will appear again when studying systems.

2.6 Continuous Time Complex Exponential9

2.6.1 Introduction

Complex exponentials are some of the most important functions in our study of signals and systems. Their importance stems from their status as eigenfunctions of linear time invariant systems. Before proceeding, you should be familiar with complex numbers.

9This content is available online at <http://legacy.cnx.org/content/m10060/2.25/>.


2.6.2 The Continuous Time Complex Exponential

2.6.2.1 Complex Exponentials

The complex exponential function will become a critical part of your study of signals and systems. Its general continuous form is written as

A e^{st}   (2.11)

where s = σ + jω is a complex number in terms of σ, the attenuation constant, and ω the angular frequency.

2.6.2.2 Euler's Formula

The mathematician Euler proved an important identity relating complex exponentials to trigonometric functions. Specifically, he discovered the eponymously named identity, Euler's formula, which states that

e^{jx} = cos(x) + j sin(x)   (2.12)

which can be proven as follows. In order to prove Euler's formula, we start by evaluating the Taylor series for e^z about z = 0, which converges for all complex z, at z = jx. The result is

e^{jx} = Σ_{k=0}^{∞} (jx)^k / k!
       = Σ_{k=0}^{∞} (−1)^k x^{2k} / (2k)! + j Σ_{k=0}^{∞} (−1)^k x^{2k+1} / (2k+1)!
       = cos(x) + j sin(x)   (2.13)

because the second expression contains the Taylor series for cos(x) and sin(x) about x = 0, which converge for all real x. Thus, the desired result is proven.

Choosing x = ωt this gives the result

ejωt = cos (ωt) + jsin (ωt) (2.14)

which breaks a continuous time complex exponential into its real part and imaginary part. Using thisformula, we can also derive the following relationships.

cos (ωt) =12ejωt +

12e−jωt (2.15)

sin (ωt) =12jejωt − 1

2je−jωt (2.16)
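As a brief aside not in the original text, Euler's formula and the identities (2.15) and (2.16) are easy to confirm numerically; the sketch below assumes NumPy and an arbitrarily chosen ω.

```python
import numpy as np

# Numerically verify Euler's formula and the cosine/sine identities
# (2.14)-(2.16) on a grid of time samples.
omega = 2 * np.pi * 3.0
t = np.linspace(0, 1, 1000)

lhs = np.exp(1j * omega * t)
rhs = np.cos(omega * t) + 1j * np.sin(omega * t)
print(np.allclose(lhs, rhs))  # Euler's formula, eq. (2.14)

cos_from_exp = 0.5 * np.exp(1j * omega * t) + 0.5 * np.exp(-1j * omega * t)
sin_from_exp = (np.exp(1j * omega * t) - np.exp(-1j * omega * t)) / (2j)
print(np.allclose(cos_from_exp, np.cos(omega * t)))  # eq. (2.15)
print(np.allclose(sin_from_exp, np.sin(omega * t)))  # eq. (2.16)
```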

2.6.2.3 Continuous Time Phasors

It has been shown how the complex exponential with purely imaginary frequency can be broken up into its real part and its imaginary part. Now consider a general complex frequency s = σ + jω where σ is the attenuation factor and ω is the frequency. Also consider a phase difference θ. It follows that

e^{(σ+jω)t+jθ} = e^{σt} (cos(ωt + θ) + j sin(ωt + θ)).   (2.17)

Thus, the real and imaginary parts of e^{st} appear below.

Re{e^{(σ+jω)t+jθ}} = e^{σt} cos(ωt + θ)   (2.18)

Im{e^{(σ+jω)t+jθ}} = e^{σt} sin(ωt + θ)   (2.19)


Using the real or imaginary parts of a complex exponential to represent sinusoids with a phase delay multiplied by a real exponential is often useful and is called attenuated phasor notation.

We can see that both the real part and the imaginary part have a sinusoid times a real exponential. We also know that sinusoids oscillate between one and negative one. From this it becomes apparent that the real and imaginary parts of the complex exponential will each oscillate within an envelope defined by the real exponential part.

(a) (b)

(c)

Figure 2.23: The shapes possible for the real part of a complex exponential. Notice that the oscillations are the result of a cosine, as there is a local maximum at t = 0. (a) If σ is negative, we have the case of a decaying exponential window. (b) If σ is positive, we have the case of a growing exponential window. (c) If σ is zero, we have the case of a constant window.
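A short sketch (an addition, assuming NumPy; the values of σ, ω and θ are arbitrary) makes the envelope behavior of Figure 2.23 concrete by evaluating the real part of e^{(σ+jω)t+jθ} for the three cases.

```python
import numpy as np

# The real part of e^{(sigma + j*omega)t + j*theta} equals
# e^{sigma*t} * cos(omega*t + theta), so it oscillates inside the
# envelope +/- e^{sigma*t}, matching the three cases of Figure 2.23.
def attenuated_phasor_real(t, sigma, omega, theta=0.0):
    s = sigma + 1j * omega
    return np.real(np.exp(s * t + 1j * theta))

t = np.linspace(0, 2, 500)
omega = 2 * np.pi * 4
decaying = attenuated_phasor_real(t, sigma=-1.5, omega=omega)  # case (a)
growing  = attenuated_phasor_real(t, sigma=+1.5, omega=omega)  # case (b)
constant = attenuated_phasor_real(t, sigma=0.0,  omega=omega)  # case (c)

# The decaying case stays within its exponential envelope at every sample:
print(np.all(np.abs(decaying) <= np.exp(-1.5 * t) + 1e-12))
```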


2.6.3 Complex Exponential Demonstration

Figure 2.24: Interact (when online) with a Mathematica CDF demonstrating the Continuous Time Complex Exponential. To download, right-click and save target as .cdf.

2.6.4 Continuous Time Complex Exponential Summary

Continuous time complex exponentials are signals of great importance to the study of signals and systems. They can be related to sinusoids through Euler's formula, which identifies the real and imaginary parts of purely imaginary complex exponentials. Euler's formula reveals that, in general, the real and imaginary parts of complex exponentials are sinusoids multiplied by real exponentials. Thus, attenuated phasor notation is often useful in studying these signals.


Chapter 3

Introduction to Systems

3.1 Introduction to Systems1

Signals are manipulated by systems. Mathematically, we represent what a system does by the notation y(t) = S(x(t)), with x representing the input signal and y the output signal.

Definition of a system

Figure 3.1: The system depicted has input x(t) and output y(t). Mathematically, systems operate on function(s) to produce other function(s). In many ways, systems are like functions, rules that yield a value for the dependent variable (our output signal) for each value of its independent variable (its input signal). The notation y(t) = S(x(t)) corresponds to this block diagram. We term S(·) the input-output relation for the system.

This notation mimics the mathematical symbology of a function: A system's input is analogous to an independent variable and its output the dependent variable. For the mathematically inclined, a system is a functional: a function of a function (signals are functions).

Simple systems can be connected together, with one system's output becoming another's input, to accomplish some overall design. Interconnection topologies can be quite complicated, but usually consist of weaves of three basic interconnection forms.

1This content is available online at <http://legacy.cnx.org/content/m0005/2.19/>.


3.1.1 Cascade Interconnection

cascade

Figure 3.2: The most rudimentary ways of interconnecting systems are shown in the figures in this section. This is the cascade configuration.

The simplest form is when one system's output is connected only to another's input. Mathematically, w(t) = S1(x(t)) and y(t) = S2(w(t)), with the information contained in x(t) processed by the first, then the second system. In some cases the ordering of the systems matters; in others it does not. For example, in the fundamental model of communication2 the ordering most certainly matters.
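A minimal sketch of the cascade idea, using hypothetical example systems S1 and S2 acting on sampled signals (assumes NumPy); it also illustrates that the ordering can matter.

```python
import numpy as np

# A cascade connection is composition: y(t) = S2(S1(x(t))).
# S1 and S2 below are hypothetical example systems acting on samples.
def S1(x):             # gain of 2
    return 2 * x

def S2(x):             # squarer (nonlinear), chosen so ordering matters
    return x ** 2

t = np.linspace(0, 1, 5)
x = np.sin(2 * np.pi * t)

w = S1(x)              # intermediate signal w(t)
y = S2(w)              # cascade output

y_swapped = S1(S2(x))  # reversed order gives a different output here
print(np.allclose(y, y_swapped))   # False: ordering matters for these systems
```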

3.1.2 Parallel Interconnection

parallel

Figure 3.3: The parallel configuration.

A signal x(t) is routed to two (or more) systems, with this signal appearing as the input to all systems simultaneously and with equal strength. Block diagrams have the convention that signals going to more than one system are not split into pieces along the way. Two or more systems operate on x(t) and their outputs are added together to create the output y(t). Thus, y(t) = S1(x(t)) + S2(x(t)), and the information in x(t) is processed separately by both systems.

2"Structure of Communication Systems", Figure 1: Fundamental model of communication<http://legacy.cnx.org/content/m0002/latest/#commsys>


3.1.3 Feedback Interconnection

feedback

Figure 3.4: The feedback configuration.

The subtlest interconnection configuration has a system's output also contributing to its input. Engineers would say the output is "fed back" to the input through system 2, hence the terminology. The mathematical statement of the feedback interconnection (Figure 3.4: feedback) is that the feed-forward system produces the output: y(t) = S1(e(t)). The input e(t) equals the input signal minus the result of passing the output y(t) through the second system: e(t) = x(t) − S2(y(t)). Feedback systems are omnipresent in control problems, with the error signal used to adjust the output to achieve some condition defined by the input (controlling) signal. For example, in a car's cruise control system, x(t) is a constant representing what speed you want, and y(t) is the car's speed as measured by a speedometer. In this application, system 2 is the identity system (output equals input).
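The feedback configuration can be imitated in discrete time steps. The sketch below is an added illustration only, with a hypothetical integrating controller standing in for S1 and the identity system for S2, echoing the cruise-control example; the gain k and the step size are arbitrary choices.

```python
import numpy as np

# Rough discrete-time sketch of the feedback loop in Figure 3.4:
# S1 is taken to be a simple integrating controller, S2 the identity system.
dt = 0.01                          # time step
t = np.arange(0, 10, dt)
x = np.full_like(t, 60.0)          # desired speed (constant input)
k = 0.8                            # hypothetical controller gain

y = np.zeros_like(t)               # system output (the "speed")
for n in range(1, len(t)):
    e = x[n - 1] - y[n - 1]        # e(t) = x(t) - S2(y(t)), with S2 = identity
    y[n] = y[n - 1] + k * e * dt   # S1: accumulate (integrate) the error

print(round(y[-1], 2))             # output approaches the commanded value 60.0
```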

3.2 System Classifications and Properties3

3.2.1 Introduction

In this module some of the basic classifications of systems will be briefly introduced and the most important properties of these systems are explained. As can be seen, the properties of a system provide an easy way to distinguish one system from another. Understanding these basic differences between systems, and their properties, will be a fundamental concept used in all signal and system courses. Once a set of systems can be identified as sharing particular properties, one no longer has to reprove a certain characteristic of a system each time; it can simply be known due to the system classification.

3.2.2 Classification of Systems

3.2.2.1 Continuous vs. Discrete

One of the most important distinctions to understand is the difference between discrete time and continuous time systems. A system in which the input signal and output signal both have continuous domains is said to be a continuous system. One in which the input signal and output signal both have discrete domains is said to be a discrete system. Of course, it is possible to conceive of systems that belong to neither category, such as systems in which sampling of a continuous time signal or reconstruction from a discrete time signal take place.

3This content is available online at <http://legacy.cnx.org/content/m10084/2.24/>.


3.2.2.2 Linear vs. Nonlinear

A linear system is any system that obeys the properties of scaling (first order homogeneity) and superposition (additivity), further described below. A nonlinear system is any system that lacks at least one of these properties.

To show that a system H obeys the scaling property is to show that

H (kf (t)) = kH (f (t)) (3.1)

Figure 3.5: A block diagram demonstrating the scaling property of linearity

To demonstrate that a system H obeys the superposition property of linearity is to show that

H (f1 (t) + f2 (t)) = H (f1 (t)) +H (f2 (t)) (3.2)

Figure 3.6: A block diagram demonstrating the superposition property of linearity

It is possible to check a system for linearity in a single (though larger) step. To do this, simply combine the first two steps to get

H (k1f1 (t) + k2f2 (t)) = k1H (f1 (t)) + k2H (f2 (t)) (3.3)
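The combined condition (3.3) suggests a simple numerical test on sampled signals. The sketch below is an added illustration (assumes NumPy; the test signals, scalars, and example systems are hypothetical): passing the test for particular signals is only suggestive of linearity, while failing it is conclusive evidence of nonlinearity.

```python
import numpy as np

# Numerical linearity check based on eq. (3.3): compare
# H(k1*f1 + k2*f2) with k1*H(f1) + k2*H(f2) on sampled signals.
def is_linear(H, t, f1, f2, k1=2.0, k2=-3.0):
    lhs = H(k1 * f1(t) + k2 * f2(t), t)
    rhs = k1 * H(f1(t), t) + k2 * H(f2(t), t)
    return np.allclose(lhs, rhs)

t = np.linspace(-1, 1, 201)
f1 = lambda t: np.sin(2 * np.pi * t)
f2 = lambda t: t ** 2

H_scale_by_t = lambda f, t: t * f     # H(f(t)) = t f(t)
H_square     = lambda f, t: f ** 2    # H(f(t)) = (f(t))^2

print(is_linear(H_scale_by_t, t, f1, f2))  # True  (linear)
print(is_linear(H_square, t, f1, f2))      # False (nonlinear)
```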


3.2.2.3 Time Invariant vs. Time Varying

A system is said to be time invariant if it commutes with the parameter shift operator defined by S_T(f(t)) = f(t − T) for all T, which is to say

H S_T = S_T H   (3.4)

for all real T. Intuitively, that means that for any input function that produces some output function, any time shift of that input function will produce an output function identical in every way except that it is shifted by the same amount. Any system that does not have this property is said to be time varying.
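In the same spirit, the commutation condition (3.4) can be probed numerically on sampled signals. The sketch below is an added illustration (assumes NumPy) that uses a circular shift as a stand-in for the time shift operator and hypothetical example systems; a single test signal cannot prove time invariance, but it can expose time variation.

```python
import numpy as np

# Rough check of time invariance on samples: shift the input by m samples,
# apply the system, and compare with shifting the output instead.
def commutes_with_shift(H, x, m=25):
    shifted_input_first  = H(np.roll(x, m))
    output_shifted_after = np.roll(H(x), m)
    return np.allclose(shifted_input_first, output_shifted_after)

n = np.arange(400)
x = np.sin(2 * np.pi * n / 50)

H_square = lambda x: x ** 2     # memoryless squarer: time invariant
H_ramp   = lambda x: n * x      # multiply by the time index: time varying

print(commutes_with_shift(H_square, x))  # True
print(commutes_with_shift(H_ramp, x))    # False
```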

Figure 3.7: This block diagram shows the condition for time invariance. The output is the same whether the delay is put on the input or the output.

3.2.2.4 Causal vs. Noncausal

A causal system is one in which the output depends only on current or past inputs, but not future inputs. Similarly, an anticausal system is one in which the output depends only on current or future inputs, but not past inputs. Finally, a noncausal system is one in which the output depends on both past and future inputs. All "realtime" systems must be causal, since they cannot have future inputs available to them.

One may think the idea of future inputs does not seem to make much physical sense; however, we have only been dealing with time as our independent variable so far, which is not always the case. Imagine rather that we wanted to do image processing. Then the independent variable might represent pixel positions to the left and right (the "future") of the current position on the image, and we would not necessarily have a causal system.


(a)

(b)

Figure 3.8: (a) For a typical system to be causal... (b) ...the output at time t0, y(t0), can only depend on the portion of the input signal before t0.

3.2.2.5 Stable vs. Unstable

There are several definitions of stability, but the one that will be used most frequently in this course will be bounded input, bounded output (BIBO) stability. In this context, a stable system is one in which the output is bounded if the input is also bounded. Similarly, an unstable system is one in which at least one bounded input produces an unbounded output.

Representing this mathematically, a stable system must have the following property, where x(t) is the input and y(t) is the output. The output must satisfy the condition

|y(t)| ≤ M_y < ∞   (3.5)

whenever we have an input to the system that satisfies

|x(t)| ≤ M_x < ∞   (3.6)

M_x and M_y both represent finite positive numbers and these relationships hold for all t. Otherwise, the system is unstable.

3.2.3 System Classifications Summary

This module describes just some of the many ways in which systems can be classified. Systems can be continuous time, discrete time, or neither. They can be linear or nonlinear, time invariant or time varying,


and stable or unstable. We can also divide them based on their causality properties. There are other ways to classify systems, such as use of memory, that are not discussed here but will be described in subsequent modules.

3.3 Linear Time Invariant Systems4

3.3.1 Introduction

Linearity and time invariance are two system properties that greatly simplify the study of systems that exhibit them. In our study of signals and systems, we will be especially interested in systems that demonstrate both of these properties, which together allow the use of some of the most powerful tools of signal processing.

3.3.2 Linear Time Invariant Systems

3.3.2.1 Linear Systems

If a system is linear, this means that when an input to a given system is scaled by a value, the output of the system is scaled by the same amount.

Linear Scaling

(a) (b)

Figure 3.9

In Figure 3.9(a) above, an input x to the linear system L gives the output y. If x is scaled by a value α and passed through this same system, as in Figure 3.9(b), the output will also be scaled by α.

A linear system also obeys the principle of superposition. This means that if two inputs are added together and passed through a linear system, the output will be the sum of the individual inputs' outputs.

(a) (b)

Figure 3.10

4This content is available online at <http://legacy.cnx.org/content/m2102/2.26/>.


Superposition Principle

Figure 3.11: If Figure 3.10 is true, then the principle of superposition says that Figure 3.11 (Superposition Principle) is true as well. This holds for linear systems.

That is, if Figure 3.10 is true, then Figure 3.11 (Superposition Principle) is also true for a linear system. The scaling property mentioned above still holds in conjunction with the superposition principle. Therefore, if the inputs x and y are scaled by factors α and β, respectively, then the sum of these scaled inputs will give the sum of the individual scaled outputs:

(a) (b)

Figure 3.12

Superposition Principle with Linear Scaling

Figure 3.13: Given Figure 3.12 for a linear system, Figure 3.13 (Superposition Principle with Linear Scaling) holds as well.


Example 3.1
Consider the system H1 in which

H1(f(t)) = t f(t)   (3.7)

for all signals f. Given any two signals f, g and scalars a, b

H1(af(t) + bg(t)) = t(af(t) + bg(t)) = a t f(t) + b t g(t) = a H1(f(t)) + b H1(g(t))   (3.8)

for all real t. Thus, H1 is a linear system.

Example 3.2
Consider the system H2 in which

H2(f(t)) = (f(t))^2   (3.9)

for all signals f. Because

H2(2t) = 4t^2 ≠ 2t^2 = 2 H2(t)   (3.10)

for nonzero t, H2 is not a linear system.

3.3.2.2 Time Invariant Systems

A time-invariant system has the property that a certain input will always give the same output (up to timing), without regard to when the input was applied to the system.

Time-Invariant Systems

(a) (b)

Figure 3.14: Figure 3.14(a) shows an input at time t while Figure 3.14(b) shows the same input t0 seconds later. In a time-invariant system both outputs would be identical except that the one in Figure 3.14(b) would be delayed by t0.

In this figure, x(t) and x(t − t0) are passed through the system TI. Because the system TI is time-invariant, the inputs x(t) and x(t − t0) produce the same output. The only difference is that the output due to x(t − t0) is shifted by a time t0.

Whether a system is time-invariant or time-varying can be seen in the differential equation (or difference equation) describing it. Time-invariant systems are modeled with constant coefficient equations. A constant coefficient differential (or difference) equation means that the parameters of the system are not changing over time and an input now will give the same result as the same input later.


Example 3.3
Consider the system H1 in which

H1(f(t)) = t f(t)   (3.11)

for all signals f. Because

S_T(H1(f(t))) = S_T(t f(t)) = (t − T) f(t − T) ≠ t f(t − T) = H1(f(t − T)) = H1(S_T(f(t)))   (3.12)

for nonzero T, H1 is not a time invariant system.

Example 3.4
Consider the system H2 in which

H2(f(t)) = (f(t))^2   (3.13)

for all signals f. For all real T and signals f,

S_T(H2(f(t))) = S_T((f(t))^2) = (f(t − T))^2 = H2(f(t − T)) = H2(S_T(f(t)))   (3.14)

for all real t. Thus, H2 is a time invariant system.

3.3.2.3 Linear Time Invariant Systems

Certain systems are both linear and time-invariant, and are thus referred to as LTI systems.

Linear Time-Invariant Systems

(a) (b)

Figure 3.15: This is a combination of the two cases above. Since the input to Figure 3.15(b) is a scaled, time-shifted version of the input in Figure 3.15(a), so is the output.

As LTI systems are a subset of linear systems, they obey the principle of superposition. In the figure below, we see the effect of applying time-invariance to the superposition definition in the linear systems section above.


(a) (b)

Figure 3.16

Superposition in Linear Time-Invariant Systems

Figure 3.17: The principle of superposition applied to LTI systems

3.3.2.3.1 LTI Systems in Series

If two or more LTI systems are in series with each other, their order can be interchanged without affecting the overall output of the system. Systems in series are also called cascaded systems.


Cascaded LTI Systems

(a)

(b)

Figure 3.18: The order of cascaded LTI systems can be interchanged without changing the overall effect.

3.3.2.3.2 LTI Systems in Parallel

If two or more LTI systems are in parallel with one another, an equivalent system is one that is defined as the sum of these individual systems.

Parallel LTI Systems

(a) (b)

Figure 3.19: Parallel systems can be condensed into the sum of systems.


Example 3.5
Consider the system H3 in which

H3(f(t)) = 2 f(t)   (3.15)

for all signals f. Given any two signals f, g and scalars a, b

H3(af(t) + bg(t)) = 2(af(t) + bg(t)) = 2a f(t) + 2b g(t) = a H3(f(t)) + b H3(g(t))   (3.16)

for all real t. Thus, H3 is a linear system. For all real T and signals f,

S_T(H3(f(t))) = S_T(2 f(t)) = 2 f(t − T) = H3(f(t − T)) = H3(S_T(f(t)))   (3.17)

for all real t. Thus, H3 is a time invariant system. Therefore, H3 is a linear time invariant system.

Example 3.6
As has been previously shown, each of the following systems is either not linear or not time invariant.

H1(f(t)) = t f(t)   (3.18)

H2(f(t)) = (f(t))^2   (3.19)

Thus, they are not linear time invariant systems.

3.3.3 Linear Time Invariant Demonstration

Figure 3.20: Interact (when online) with the Mathematica CDF above demonstrating Linear Time Invariant systems. To download, right click and save file as .cdf.

3.3.4 LTI Systems Summary

Two very important and useful properties of systems have just been described in detail. The first of these, linearity, allows us the knowledge that a sum of input signals produces an output signal that is the sum of the original output signals and that a scaled input signal produces an output signal scaled from the original output signal. The second of these, time invariance, ensures that time shifts commute with application of the system. In other words, the output signal for a time shifted input is the same as the output signal for the original input signal, except for an identical shift in time. Systems that demonstrate both linearity and time invariance, which are given the acronym LTI systems, are particularly simple to study as these properties allow us to leverage some of the most powerful tools in signal processing.


Chapter 4

Time Domain Analysis of Continuous Time Systems

4.1 Continuous Time Systems1

4.1.1 Introduction

As you already know, a continuous time system operates on a continuous time signal input and produces a continuous time signal output. There are numerous examples of useful continuous time systems in signal processing as they essentially describe the world around us. The class of continuous time systems that are both linear and time invariant, known as continuous time LTI systems, is of particular interest as the properties of linearity and time invariance together allow the use of some of the most important and powerful tools in signal processing.

4.1.2 Continuous Time Systems

4.1.2.1 Linearity and Time Invariance

A system H is said to be linear if it satisfies two important conditions. The first, additivity, states for every pair of signals x, y that H(x + y) = H(x) + H(y). The second, homogeneity of degree one, states for every signal x and scalar a we have H(ax) = a H(x). It is clear that these conditions can be combined together into a single condition for linearity. Thus, a system is said to be linear if for all signals x, y and scalars a, b we have that

H(ax + by) = a H(x) + b H(y).   (4.1)

Linearity is a particularly important property of systems as it allows us to leverage the powerful tools of linear algebra, such as bases, eigenvectors, and eigenvalues, in their study.

A system H is said to be time invariant if a time shift of an input produces the corresponding shifted output. In other, more precise words, the system H commutes with the time shift operator S_T for every T ∈ R. That is,

S_T H = H S_T.   (4.2)

Time invariance is desirable because it eases computation while mirroring our intuition that, all else equal, physical systems should react the same to identical inputs at different times.

When a system exhibits both of these important properties it allows for a more straightforward analysis than would otherwise be possible. As will be explained and proven in subsequent modules, computation

1This content is available online at <http://legacy.cnx.org/content/m47437/1.1/>.


of the system output for a given input becomes a simple matter of convolving the input with the system's impulse response signal. Also proven later, the fact that complex exponentials are eigenvectors of linear time invariant systems will enable the use of frequency domain tools, such as the various Fourier transforms and associated transfer functions, to describe the behavior of linear time invariant systems.

Example 4.1
Consider the system H in which

H(f(t)) = 2 f(t)   (4.3)

for all signals f. Given any two signals f, g and scalars a, b

H(af(t) + bg(t)) = 2(af(t) + bg(t)) = 2a f(t) + 2b g(t) = a H(f(t)) + b H(g(t))   (4.4)

for all real t. Thus, H is a linear system. For all real T and signals f,

S_T(H(f(t))) = S_T(2 f(t)) = 2 f(t − T) = H(f(t − T)) = H(S_T(f(t)))   (4.5)

for all real t. Thus, H is a time invariant system. Therefore, H is a linear time invariant system.

4.1.3 Continuous Time Systems Summary

Many useful continuous time systems will be encountered in a study of signals and systems. This course is most interested in those that demonstrate both the linearity property and the time invariance property, which together enable the use of some of the most powerful tools of signal processing. It is often useful to describe them in terms of rates of change through linear constant coefficient ordinary differential equations.

4.2 Continuous Time Impulse Response2

4.2.1 Introduction

The output of an LTI system is completely determined by the input and the system's response to a unit impulse.

System Output

Figure 4.1: We can determine the system's output, y(t), if we know the system's impulse response, h(t), and the input, f(t).

The output for a unit impulse input is called the impulse response.

2This content is available online at <http://legacy.cnx.org/content/m34629/1.2/>.


Figure 4.2

4.2.1.1 Example Approximate Impulses

1. Hammer blow to a structure
2. Hand clap or gun blast in a room
3. Air gun blast underwater

4.2.2 LTI Systems and Impulse Responses

4.2.2.1 Finding System Outputs

By the sifting property of impulses, any signal can be decomposed in terms of an integral of shifted, scaled impulses.

f(t) = ∫_{−∞}^{∞} f(τ) δ(t − τ) dτ   (4.6)

δ(t − τ) peaks up where t = τ.


Figure 4.3

Since we know the response of the system to an impulse and any signal can be decomposed into impulses, all we need to do to find the response of the system to any signal is to decompose the signal into impulses, calculate the system's output for every impulse and add the outputs back together. This is the process known as Convolution. Since we are in Continuous Time, this is the Continuous Time Convolution Integral.

4.2.2.2 Finding Impulse Responses

Theory:

a. Solve the system's differential equation for y(t) with f(t) = δ(t)
b. Use the Laplace transform

Practice:

a. Apply an impulse-like input signal to the system and measure the output
b. Use Fourier methods

We will assume that h(t) is given for now.

The goal now is to compute the output y(t) given the impulse response h(t) and the input f(t).


Figure 4.4

4.2.3 Impulse Response Summary

When a system is "shocked" by a delta function, it produces an output known as its impulse response. For an LTI system, the impulse response completely determines the output of the system given any arbitrary input. The output can be found using continuous time convolution.

4.3 BIBO Stability of Continuous Time Systems3

4.3.1 Introduction

BIBO stability stands for bounded input, bounded output stability. BIBO stability is the system property that any bounded input yields a bounded output. This is to say that as long as we input a signal with absolute value less than some constant, we are guaranteed to have an output with absolute value less than some other constant.

4.3.2 Continuous Time BIBO Stability

In order to understand this concept, we must first look more closely into exactly what we mean by bounded. A bounded signal is any signal such that there exists a value such that the absolute value of the signal is never greater than that value. Since this value is arbitrary, what we mean is that at no point can the signal tend to infinity, including the end behavior.

3This content is available online at <http://legacy.cnx.org/content/m47280/1.1/>.


Figure 4.5: A bounded signal is a signal for which there exists a constant A such that |f (t) | < A

4.3.2.1 Time Domain Conditions

Now that we have identified what it means for a signal to be bounded, we must turn our attention to the condition a system must possess in order to guarantee that if any bounded signal is passed through the system, a bounded signal will arise on the output. It turns out that a continuous time LTI (Section 3.2) system with impulse response h(t) is BIBO stable if and only if

Continuous-Time Condition for BIBO Stability

∫_{−∞}^{∞} |h(t)| dt < ∞   (4.7)

This is to say that the impulse response is absolutely integrable.
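A rough numerical illustration of condition (4.7), assuming NumPy: integrate |h(t)| over a long but finite window for an absolutely integrable impulse response and for one that is not. The specific impulse responses and the window length are arbitrary choices, and a true check requires the full infinite integral.

```python
import numpy as np

# Check the BIBO condition (4.7) numerically over a finite window.
t = np.linspace(0, 50, 200_001)
RC = 0.5

h_stable   = (1 / RC) * np.exp(-t / RC)   # decaying exponential impulse response
h_unstable = np.exp(0.2 * t)              # growing exponential

print(np.trapz(np.abs(h_stable), t))      # close to 1: absolutely integrable
print(np.trapz(np.abs(h_unstable), t))    # huge, and grows without bound as the window widens
```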

4.3.3 BIBO Stability Summary

Bounded input bounded output stability, also known as BIBO stability, is an important and generally desirable system characteristic. A system is BIBO stable if every bounded input signal results in a bounded output signal, where boundedness is the property that the absolute value of a signal does not exceed some finite constant. In terms of time domain features, a continuous time system is BIBO stable if and only if its impulse response is absolutely integrable.

4.4 Continuous-Time Convolution4

4.4.1 Introduction

Convolution, one of the most important concepts in electrical engineering, can be used to determine the output a system produces for a given input signal. It can be shown that a linear time invariant system

4This content is available online at <http://legacy.cnx.org/content/m47482/1.2/>.


is completely characterized by its impulse response. The sifting property of the continuous time impulse function tells us that the input signal to a system can be represented as an integral of scaled and shifted impulses and, therefore, as the limit of a sum of scaled and shifted approximate unit impulses. Thus, by linearity, it would seem reasonable to compute the output signal as the limit of a sum of scaled and shifted unit impulse responses and, therefore, as the integral of a scaled and shifted impulse response. That is exactly what the operation of convolution accomplishes. Hence, convolution can be used to determine a linear time invariant system's output from knowledge of the input and the impulse response.

4.4.2 Convolution and Circular Convolution

4.4.2.1 Convolution

4.4.2.1.1 Operation Definition

Continuous time convolution is an operation on two continuous time signals defined by the integral

(f ∗ g)(t) = ∫_{−∞}^{∞} f(τ) g(t − τ) dτ   (4.8)

for all signals f, g defined on R. It is important to note that the operation of convolution is commutative, meaning that

f ∗ g = g ∗ f   (4.9)

for all signals f, g defined on R. Thus, the convolution operation could have been just as easily stated using the equivalent definition

(f ∗ g)(t) = ∫_{−∞}^{∞} f(t − τ) g(τ) dτ   (4.10)

for all signals f, g defined on R. Convolution has several other important properties not listed here but explained and derived in a later module.

4.4.2.1.2 Definition Motivation

The above operation definition has been chosen to be particularly useful in the study of linear time invariant systems. In order to see this, consider a linear time invariant system H with unit impulse response h. Given a system input signal x we would like to compute the system output signal H(x). First, we note that the input can be expressed as the convolution

x(t) = ∫_{−∞}^{∞} x(τ) δ(t − τ) dτ   (4.11)

by the sifting property of the unit impulse function. Writing this integral as the limit of a summation,

x(t) = lim_{Δ→0} ∑_n x(nΔ) δ_Δ(t − nΔ) Δ   (4.12)

where

δ_Δ(t) = { 1/Δ,   0 ≤ t < Δ
           0,     otherwise }   (4.13)

approximates the properties of δ(t). By linearity

H x(t) = lim_{Δ→0} ∑_n x(nΔ) H δ_Δ(t − nΔ) Δ   (4.14)


which evaluated as an integral gives

H x(t) = ∫_{−∞}^{∞} x(τ) H δ(t − τ) dτ.   (4.15)

Since H δ(t − τ) is the shifted unit impulse response h(t − τ), this gives the result

H x(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ = (x ∗ h)(t).   (4.16)

Hence, convolution has been defined such that the output of a linear time invariant system is given by the convolution of the system input with the system unit impulse response.

4.4.2.1.3 Graphical Intuition

It is often helpful to be able to visualize the computation of a convolution in terms of graphical processes. Consider the convolution of two functions f, g given by

(f ∗ g)(t) = ∫_{−∞}^{∞} f(τ) g(t − τ) dτ = ∫_{−∞}^{∞} f(t − τ) g(τ) dτ.   (4.17)

The first step in graphically understanding the operation of convolution is to plot each of the functions. Next, one of the functions must be selected, and its plot reflected across the τ = 0 axis. For each real t, that same function must be shifted right by t. The product of the two resulting plots is then constructed. Finally, the area under the resulting curve is computed.
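The graphical procedure above corresponds closely to a simple numerical approximation: sample both signals, form a discrete convolution, and scale by the sample spacing. The sketch below is an added illustration with arbitrarily chosen signals (assumes NumPy); the closed-form value used for the spot check follows from integrating the exponential over [0, t].

```python
import numpy as np

# Approximate the continuous time convolution integral by sampling both
# signals with spacing dt and using a discrete convolution scaled by dt.
dt = 0.001
t = np.arange(0, 5, dt)

f = np.where(t < 1.0, 1.0, 0.0)   # unit pulse on [0, 1)
g = np.exp(-2 * t)                # decaying exponential

y = np.convolve(f, g) * dt        # approximates (f * g)(t) at times n*dt

# Spot check at t = 0.5, where the exact value is (1 - e^{-1}) / 2:
idx = int(0.5 / dt)
print(y[idx], (1 - np.exp(-2 * 0.5)) / 2)   # both near 0.316
```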

Example 4.2
Recall that the impulse response for the capacitor voltage in a series RC circuit is given by

h(t) = (1/RC) e^{−t/RC} u(t),   (4.18)

and consider the response to the input voltage

x(t) = u(t).   (4.19)

We know that the output for this input voltage is given by the convolution of the impulse response with the input signal

y(t) = x(t) ∗ h(t).   (4.20)

We would like to compute this operation by beginning in a way that minimizes the algebraic complexity of the expression. Thus, since x(t) = u(t) is the simpler of the two signals, it is desirable to select it for time reversal and shifting. Thus, we would like to compute

y(t) = ∫_{−∞}^{∞} (1/RC) e^{−τ/RC} u(τ) u(t − τ) dτ.   (4.21)

The step functions can be used to further simplify this integral by narrowing the region of integration to the nonzero region of the integrand. Therefore,

y(t) = ∫_0^{max(0,t)} (1/RC) e^{−τ/RC} dτ.   (4.22)

Hence, the output is

y(t) = { 0,               t ≤ 0
         1 − e^{−t/RC},   t > 0 }   (4.23)


which can also be written as

y(t) = (1 − e^{−t/RC}) u(t).   (4.24)
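The result of Example 4.2 can be spot-checked numerically with the same discretized-convolution idea (an added sketch assuming NumPy; RC, the sample spacing, and the window are arbitrary).

```python
import numpy as np

# Convolve the sampled RC impulse response with a unit step and compare
# against the closed-form result of eq. (4.24).
dt = 0.001
t = np.arange(0, 5, dt)
RC = 0.8

h = (1 / RC) * np.exp(-t / RC)            # impulse response, eq. (4.18)
x = np.ones_like(t)                       # unit step input for t >= 0

y_numeric = np.convolve(x, h)[:len(t)] * dt
y_exact = 1 - np.exp(-t / RC)             # eq. (4.24) for t > 0

print(np.max(np.abs(y_numeric - y_exact)))  # small discretization error
```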

4.4.3 Online Resources

The following pages have interactive Java applets that demonstrate several aspects of continuous-time convolution.

Joy of Convolution (Johns Hopkins University)5

Step-by-Step Convolution (Rice University)6

4.4.4 Convolution Demonstration

Figure 4.6: Interact (when online) with a Mathematica CDF demonstrating Convolution. To Download,right-click and save target as .cdf.

4.4.5 Convolution Summary

Convolution, one of the most important concepts in electrical engineering, can be used to determine the output signal of a linear time invariant system for a given input signal with knowledge of the system's unit

5http://www.jhu.edu/signals/convolve/index.html6http://www.ece.rice.edu/dsp/courses/elec301/demos/applets/Convo1/


impulse response. The operation of continuous time convolution is defined such that it performs this function for infinite length continuous time signals and systems. The operation of continuous time circular convolution is defined such that it performs this function for finite length and periodic continuous time signals. In each case, the output of the system is the convolution or circular convolution of the input signal with the unit impulse response.

4.5 Properties of Continuous Time Convolution7

4.5.1 Introduction

We have already shown the important role that continuous time convolution plays in signal processing. This section provides discussion and proof of some of the important properties of continuous time convolution. Analogous properties can be shown for continuous time circular convolution with trivial modification of the proofs provided except where explicitly noted otherwise.

4.5.2 Continuous Time Convolution Properties

4.5.2.1 Associativity

The operation of convolution is associative. That is, for all continuous time signals f1, f2, f3 the following relationship holds.

f1 ∗ (f2 ∗ f3) = (f1 ∗ f2) ∗ f3 (4.25)

In order to show this, note that

(f1 ∗ (f2 ∗ f3))(t) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} f1(τ1) f2(τ2) f3((t − τ1) − τ2) dτ2 dτ1
                    = ∫_{−∞}^{∞} ∫_{−∞}^{∞} f1(τ1) f2((τ1 + τ2) − τ1) f3(t − (τ1 + τ2)) dτ2 dτ1
                    = ∫_{−∞}^{∞} ∫_{−∞}^{∞} f1(τ1) f2(τ3 − τ1) f3(t − τ3) dτ1 dτ3
                    = ((f1 ∗ f2) ∗ f3)(t)   (4.26)

proving the relationship as desired through the substitution τ3 = τ1 + τ2.
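For a quick numerical sanity check of associativity (and of the commutativity property proved next), discrete convolution of sampled finite-length signals can stand in for the continuous integral. The sketch below is an added illustration; the random test signals and the scaling by dt are arbitrary choices.

```python
import numpy as np

# Sanity-check associativity and commutativity of convolution on sampled,
# finite-length signals, using np.convolve as a stand-in for the integral.
rng = np.random.default_rng(0)
dt = 0.01
f1, f2, f3 = (rng.standard_normal(200) for _ in range(3))

left  = np.convolve(f1, np.convolve(f2, f3) * dt) * dt   # f1 * (f2 * f3)
right = np.convolve(np.convolve(f1, f2) * dt, f3) * dt   # (f1 * f2) * f3

print(np.allclose(left, right))                               # associativity
print(np.allclose(np.convolve(f1, f2), np.convolve(f2, f1)))  # commutativity
```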

4.5.2.2 Commutativity

The operation of convolution is commutative. That is, for all continuous time signals f1, f2 the following relationship holds.

f1 ∗ f2 = f2 ∗ f1 (4.27)

In order to show this, note that

(f1 ∗ f2)(t) = ∫_{−∞}^{∞} f1(τ1) f2(t − τ1) dτ1
             = ∫_{−∞}^{∞} f1(t − τ2) f2(τ2) dτ2
             = (f2 ∗ f1)(t)   (4.28)

proving the relationship as desired through the substitution τ2 = t − τ1.

7This content is available online at <http://legacy.cnx.org/content/m10088/2.18/>.


4.5.2.3 Distributivity

The operation of convolution is distributive over the operation of addition. That is, for all continuous time signals f1, f2, f3 the following relationship holds.

f1 ∗ (f2 + f3) = f1 ∗ f2 + f1 ∗ f3 (4.29)

In order to show this, note that

(f1 ∗ (f2 + f3))(t) = ∫_{−∞}^{∞} f1(τ) (f2(t − τ) + f3(t − τ)) dτ
                    = ∫_{−∞}^{∞} f1(τ) f2(t − τ) dτ + ∫_{−∞}^{∞} f1(τ) f3(t − τ) dτ
                    = (f1 ∗ f2 + f1 ∗ f3)(t)   (4.30)

proving the relationship as desired.

4.5.2.4 Multilinearity

The operation of convolution is linear in each of the two function variables. Additivity in each variable results from distributivity of convolution over addition. Homogeneity of order one in each variable results from the fact that for all continuous time signals f1, f2 and scalars a the following relationship holds.

a (f1 ∗ f2) = (af1) ∗ f2 = f1 ∗ (af2) (4.31)

In order to show this, note that

(a (f1 ∗ f2))(t) = a ∫_{−∞}^{∞} f1(τ) f2(t − τ) dτ
                 = ∫_{−∞}^{∞} (a f1(τ)) f2(t − τ) dτ
                 = ((a f1) ∗ f2)(t)
                 = ∫_{−∞}^{∞} f1(τ) (a f2(t − τ)) dτ
                 = (f1 ∗ (a f2))(t)   (4.32)

proving the relationship as desired.

4.5.2.5 Conjugation

The operation of convolution has the following property for all continuous time signals f1, f2, where an overbar denotes complex conjugation.

\overline{f_1 ∗ f_2} = \overline{f_1} ∗ \overline{f_2}   (4.33)

In order to show this, note that

\overline{(f_1 ∗ f_2)}(t) = \overline{∫_{−∞}^{∞} f_1(τ) f_2(t − τ) dτ}
                          = ∫_{−∞}^{∞} \overline{f_1(τ) f_2(t − τ)} dτ
                          = ∫_{−∞}^{∞} \overline{f_1}(τ) \overline{f_2}(t − τ) dτ
                          = (\overline{f_1} ∗ \overline{f_2})(t)   (4.34)

proving the relationship as desired.


4.5.2.6 Time Shift

The operation of convolution has the following property for all continuous time signals f1, f2, where S_T is the time shift operator.

ST (f1 ∗ f2) = (ST f1) ∗ f2 = f1 ∗ (ST f2) (4.35)

In order to show this, note that

S_T(f1 ∗ f2)(t) = ∫_{−∞}^{∞} f2(τ) f1((t − T) − τ) dτ
                = ∫_{−∞}^{∞} f2(τ) S_T f1(t − τ) dτ
                = ((S_T f1) ∗ f2)(t)
                = ∫_{−∞}^{∞} f1(τ) f2((t − T) − τ) dτ
                = ∫_{−∞}^{∞} f1(τ) S_T f2(t − τ) dτ
                = (f1 ∗ (S_T f2))(t)   (4.36)

proving the relationship as desired.

4.5.2.7 Differentiation

The operation of convolution has the following property for all continuous time signals f1, f2.

(d/dt)(f1 ∗ f2)(t) = ((df1/dt) ∗ f2)(t) = (f1 ∗ (df2/dt))(t)   (4.37)

In order to show this, note that

(d/dt)(f1 ∗ f2)(t) = ∫_{−∞}^{∞} f2(τ) (d/dt) f1(t − τ) dτ
                   = ((df1/dt) ∗ f2)(t)
                   = ∫_{−∞}^{∞} f1(τ) (d/dt) f2(t − τ) dτ
                   = (f1 ∗ (df2/dt))(t)   (4.38)

proving the relationship as desired.

4.5.2.8 Impulse Convolution

The operation of convolution has the following property for all continuous time signals f, where δ is the Dirac delta function.

f ∗ δ = f (4.39)

In order to show this, note that

(f ∗ δ)(t) = ∫_{−∞}^{∞} f(τ) δ(t − τ) dτ
           = f(t) ∫_{−∞}^{∞} δ(t − τ) dτ
           = f(t)   (4.40)

proving the relationship as desired.


4.5.2.9 Width

The operation of convolution has the following property for all continuous time signals f1, f2, where Duration(f) gives the duration of a signal f.

Duration (f1 ∗ f2) = Duration (f1) +Duration (f2) (4.41)

In order to show this informally, note that (f1 ∗ f2)(t) is nonzero for all t for which there is a τ such that f1(τ) f2(t − τ) is nonzero. When viewing one function as reversed and sliding past the other, it is easy to see that such a τ exists for all t on an interval of length Duration(f1) + Duration(f2). Note that this is not always true of circular convolution of finite length and periodic signals as there is then a maximum possible duration within a period.

4.5.3 Convolution Properties Summary

As can be seen, the operation of continuous time convolution has several important properties that have been listed and proven in this module. With slight modifications to proofs, most of these also extend to continuous time circular convolution as well, and the cases in which exceptions occur have been noted above. These identities will be useful to keep in mind as the reader continues to study signals and systems.


Chapter 5

Introduction to Fourier Analysis

5.1 Introduction to Fourier Analysis1

5.1.1 Fourier's Daring Leap

Fourier postulated around 1807 that any periodic signal (equivalently finite length signal) can be built up as an infinite linear combination of harmonic sinusoidal waves.

Given the collection

B = {e^{j(2π/T)nt}}_{n=−∞}^{∞}   (5.1)

any finite-energy function x(t) can be approximated arbitrarily closely by

x(t) = ∑_{n=−∞}^{∞} C_n e^{j(2π/T)nt}.   (5.2)

Now, the issue of exact convergence did bring Fourier2 much criticism from the French Academy of Science (Laplace, Lagrange, Monge and LaCroix comprised the review committee) for several years after its presentation in 1807. It was not resolved for almost a century, and its resolution is interesting and important to understand from a practical viewpoint.

Fourier analysis is fundamental to understanding the behavior of signals and systems. This is a result of the fact that sinusoids are Eigenfunctions (Section 5.3) of linear, time-invariant (LTI)3 systems. This is to say that if we pass any particular sinusoid through an LTI system, we get a scaled version of that same sinusoid on the output. Then, since Fourier analysis allows us to redefine the signals in terms of a combination of sinusoids, all we need to do is determine how any given system acts on each possible sinusoid (its transfer function) and we have a complete understanding of the system. Furthermore, since we are able to define the passage of sinusoids through a system as the multiplication of that sinusoid by its scaling factor, we can convert the passage of any signal through a system from convolution (Section 4.4) (in time) to multiplication (in frequency). These ideas are what give Fourier analysis its power.

Now, after hopefully having sold you on the value of this method of analysis, we must examine exactly what we mean by Fourier analysis. The four Fourier transforms that comprise this analysis are the Fourier Series (Section 5.4), Continuous-Time Fourier Transform (Section 6.2), Discrete-Time Fourier Transform (Section 9.3) and Discrete Fourier Transform (Section 11.1). All of these transforms act essentially the same way, by converting a signal in time to an equivalent signal in frequency (sinusoids). However, depending on the nature of a specific signal (i.e. whether it is finite- or infinite-length and whether it is discrete- or continuous-time) there is an appropriate transform to convert the signal into the frequency domain.

1This content is available online at <http://legacy.cnx.org/content/m47439/1.2/>.2http://www-groups.dcs.st-and.ac.uk/∼history/Mathematicians/Fourier.html3"Continuous Time Systems" <http://legacy.cnx.org/content/m10855/latest/>


5.2 Continuous Time Periodic Signals4

5.2.1 Introduction

This module describes the type of signals acted on by the Continuous Time Fourier Series.

5.2.2 Periodic Signals

When a function repeats itself exactly after some given period, or cycle, we say it's periodic. A periodic function can be mathematically defined as:

f(t) = f(t + mT),   m ∈ Z   (5.3)

where T > 0 represents the fundamental period of the signal, which is the smallest positive value of T for the signal to repeat. Because of this, you may also see a signal referred to as a T-periodic signal. Any function that satisfies this equation is said to be periodic with period T.
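A tiny sketch (an addition, assuming NumPy) checks the defining equation (5.3) for an assumed T-periodic example signal and a few integers m.

```python
import numpy as np

# Numerically check the periodicity definition f(t) = f(t + mT)
# for an example sinusoid with fundamental period T.
T = 0.25                                   # fundamental period
f = lambda t: np.cos(2 * np.pi * t / T)    # T-periodic signal

t = np.linspace(0, 1, 1000)
for m in (1, 2, -3):
    print(np.allclose(f(t), f(t + m * T)))   # True for every integer m
```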

We can think of periodic functions (with period T) two different ways:

1. as functions on all of R

Figure 5.1: Continuous time periodic function over all of R where f (t0) = f (t0 + T )

2. or, we can cut out all of the redundancy, and think of them as functions on an interval [0, T] (or, more generally, [a, a + T]). If we know the signal is T-periodic then all the information of the signal is captured by the above interval.

Figure 5.2: Remove the redundancy of the periodic function so that f(t) is undefined outside [0, T].

An aperiodic CT function f(t), on the other hand, does not repeat for any T ∈ R; i.e. there exists no T such that this equation (5.3) holds.

4This content is available online at <http://legacy.cnx.org/content/m47350/1.1/>.


5.2.3 Demonstration

Here's an example demonstrating a periodic sinusoidal signal with various frequencies, amplitudes and phase delays:

Figure 5.3: Interact (when online) with a Mathematica CDF demonstrating a Periodic Sinusoidal Signal with various frequencies, amplitudes, and phase delays. To download, right click and save file as .cdf.

To learn the full concept behind periodicity, see the video below.

Khan Lecture on Periodic Signals

This media object is a Flash object. Please view or download it at<http://www.youtube.com/v/tJW_a6JeXD8&rel=0&color1=0xb1b1b1&color2=0xd0d0d0&hl=en_US&feature=player_embedded&fs=1>

Figure 5.4: video from Khan Academy

5.2.4 Conclusion

A periodic signal is completely defined by its values in one period, such as the interval [0, T].


5.3 Eigenfunctions of Continuous-Time LTI Systems5

5.3.1 Introduction

Prior to reading this module, the reader should already have some experience with linear algebra and should specifically be familiar with the eigenvectors and eigenvalues of square matrices. A linear time invariant system is a linear operator defined on a function space that commutes with every time shift operator on that function space. Thus, we can also consider the eigenvector functions, or eigenfunctions, of a system. The concept of an eigenfunction is closely tied to the concept of an eigenvector in linear algebra. Eigen is German for "self": the eigenfunction of a system is a function that, when fed to the system, produces in the output a copy of the function, perhaps rescaled. More concretely, f is an eigenfunction for a system H if H(f) = λf for some scalar λ. It is particularly easy to calculate the output of a system when an eigenfunction is the input as the output is simply the eigenfunction scaled by the associated eigenvalue. As will be shown, continuous time complex exponentials serve as eigenfunctions of linear time invariant systems operating on continuous time signals.

5.3.2 Eigenfunctions of LTI Systems

Consider a linear time invariant system H with impulse response h operating on some space of infinite length continuous time signals. Recall that the output H(x(t)) of the system for a given input x(t) is given by the continuous time convolution of the impulse response with the input

H(x(t)) = ∫_{−∞}^{∞} h(τ) x(t − τ) dτ.   (5.4)

Now consider the input x(t) = e^{st} where s ∈ C. Computing the output for this input,

H(e^{st}) = ∫_{−∞}^{∞} h(τ) e^{s(t−τ)} dτ
          = ∫_{−∞}^{∞} h(τ) e^{st} e^{−sτ} dτ
          = e^{st} ∫_{−∞}^{∞} h(τ) e^{−sτ} dτ.   (5.5)

Thus,

H(e^{st}) = λ_s e^{st}   (5.6)

where

λ_s = ∫_{−∞}^{∞} h(τ) e^{−sτ} dτ   (5.7)

is the eigenvalue corresponding to the eigenfunction e^{st}.

There are some additional points that should be mentioned. Note that there still may be additional eigenfunctions of a linear time invariant system not described by e^{st} for some s ∈ C. Furthermore, the above discussion has been somewhat formally loose as e^{st} may or may not belong to the space on which the system operates. However, for our purposes, complex exponentials will be accepted as eigenfunctions of linear time invariant systems. A similar argument using continuous time circular convolution would also hold for spaces of finite length signals.
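The eigenvalue integral (5.7) can be evaluated numerically for a concrete impulse response. The sketch below is an added illustration using the RC-style impulse response from Example 4.2 and a purely imaginary s; for that particular h, the integral has the closed form 1/(1 + sRC), which the numerical value should match.

```python
import numpy as np

# Evaluate the eigenvalue integral (5.7) for h(t) = (1/RC) e^{-t/RC} u(t)
# at s = j*omega, and compare with the closed form 1/(1 + s*RC).
RC = 0.5
s = 1j * 2 * np.pi * 2.0                  # s = j*omega

tau = np.linspace(0, 30, 300_001)
h = (1 / RC) * np.exp(-tau / RC)

lambda_s = np.trapz(h * np.exp(-s * tau), tau)
print(lambda_s, 1 / (1 + s * RC))         # the two values agree closely
```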

5.3.3 Eigenfunction of LTI Systems Summary

As has been shown, continuous time complex exponentials are eigenfunctions of linear time invariant systems operating on continuous time signals. Thus, it is particularly simple to calculate the output of a linear time

5This content is available online at <http://legacy.cnx.org/content/m47308/1.2/>.


invariant system for a complex exponential input as the result is a complex exponential output scaled by the associated eigenvalue. Consequently, representations of continuous time signals in terms of continuous time complex exponentials provide an advantage when studying signals. As will be explained later, this is what is accomplished by the continuous time Fourier transform and continuous time Fourier series, which apply to aperiodic and periodic signals respectively.

5.4 Continuous Time Fourier Series (CTFS)6

5.4.1 Introduction

In this module, we will derive an expansion for continuous-time, periodic functions, and in doing so, derive the Continuous Time Fourier Series (CTFS).

Since complex exponentials (Section 2.6) are eigenfunctions of linear time-invariant (LTI) systems (Section 5.3), calculating the output of an LTI system H given e^{st} as an input amounts to simple multiplication, where H(s) ∈ C is the eigenvalue corresponding to s. As shown in the figure, a simple exponential input would yield the output

y(t) = H(s) e^{st}   (5.8)

Figure 5.5: Simple LTI system.

Using this and the fact that H is linear, calculating y(t) for combinations of complex exponentials is also straightforward.

c_1 e^{s_1 t} + c_2 e^{s_2 t} → c_1 H(s_1) e^{s_1 t} + c_2 H(s_2) e^{s_2 t}

∑_n c_n e^{s_n t} → ∑_n c_n H(s_n) e^{s_n t}

The action of H on an input such as those in the two equations above is easy to explain. H independently scales each exponential component e^{s_n t} by a different complex number H(s_n) ∈ C. As such, if we can write a function f(t) as a combination of complex exponentials it allows us to easily calculate the output of a system.

6This content is available online at <http://legacy.cnx.org/content/m47348/1.2/>.


5.4.2 Fourier Series Synthesis

Joseph Fourier7 demonstrated that an arbitrary periodic function x(t) can be written as a linear combination of harmonic complex sinusoids

x(t) = ∑_{n=−∞}^{∞} c_n e^{j2πf_0 n t}   (5.9)

where f_0 = 1/T is the fundamental frequency. For almost all x(t) of practical interest, there exist c_n that make (5.9) true. If x(t) has finite energy (x(t) ∈ L²[0, T]), then the equality in (5.9) holds in the sense of energy convergence; if x(t) is continuous, then (5.9) holds pointwise. Also, if x(t) meets some mild conditions (the Dirichlet conditions), then (5.9) holds pointwise everywhere except at points of discontinuity.

The c_n, called the Fourier coefficients, tell us "how much" of the sinusoid e^{j2πf_0 n t} is in x(t). The formula shows x(t) as a sum of complex exponentials, each of which is easily processed by an LTI system (since it is an eigenfunction of every LTI system). Mathematically, it tells us that the set of complex exponentials {e^{j2πf_0 n t}, n ∈ Z} forms a basis for the space of T-periodic continuous time functions.

Example 5.1
We know from Euler's formula that cos(2πft) + sin(2πft) = ((1 − j)/2) e^{j2πft} + ((1 + j)/2) e^{−j2πft}.

7http://www-groups.dcs.st-and.ac.uk/∼history/Mathematicians/Fourier.html


5.4.3 Synthesis with Sinusoids Demonstration

Figure 5.6: Interact (when online) with a Mathematica CDF demonstrating sinusoid synthesis. To download, right click and save as .cdf.

Guitar Oscillations on an iPhone

This media object is a Flash object. Please view or download it at<http://www.youtube.com/v/TKF6nFzpHBU?version=3&hl=en_US>

Figure 5.7


5.4.4 Fourier Series Analysis

Finding the coefficients of the Fourier series expansion involves some algebraic manipulation of the synthesis formula. First of all, we will multiply both sides of the equation by e^{−j2πf_0 k t}, where k ∈ Z.

f(t) e^{−j2πf_0 k t} = ∑_{n=−∞}^{∞} c_n e^{j2πf_0 n t} e^{−j2πf_0 k t}   (5.10)

Now integrate both sides over a given period, T:

∫_0^T f(t) e^{−j2πf_0 k t} dt = ∫_0^T ∑_{n=−∞}^{∞} c_n e^{j2πf_0 n t} e^{−j2πf_0 k t} dt   (5.11)

On the right-hand side we can switch the summation and integral and factor the constant out of the integral.

∫_0^T f(t) e^{−j2πf_0 k t} dt = ∑_{n=−∞}^{∞} c_n ∫_0^T e^{j2πf_0 (n−k) t} dt   (5.12)

Now that we have made this seemingly more complicated, let us focus on just the integral, ∫_0^T e^{j2πf_0 (n−k) t} dt, on the right-hand side of the above equation. For this integral we will need to consider two cases: n = k and n ≠ k. For n = k we will have:

∫_0^T e^{j2πf_0 (n−k) t} dt = T,   n = k   (5.13)

For n ≠ k, we will have:

∫_0^T e^{j2πf_0 (n−k) t} dt = ∫_0^T cos(2πf_0 (n − k) t) dt + j ∫_0^T sin(2πf_0 (n − k) t) dt,   n ≠ k   (5.14)

But cos(2πf_0 (n − k) t) has an integer number of periods, n − k, between 0 and T. Imagine a graph of the cosine; because it has an integer number of periods, there are equal areas above and below the x-axis of the graph. This statement holds true for sin(2πf_0 (n − k) t) as well. What this means is

∫_0^T cos(2πf_0 (n − k) t) dt = 0   (5.15)

which also holds for the integral involving the sine function. Therefore, we conclude the following about our integral of interest:

∫_0^T e^{j2πf_0 (n−k) t} dt = T if n = k, and 0 otherwise.   (5.16)

We plug this result into (5.12) to see if we can finish finding an equation for our Fourier coefficients. Using the facts that we have just proven above, we can see that the only time (5.12) will have a nonzero result is when k and n are equal:

∫_0^T f(t) e^{−j2πf_0 n t} dt = T c_n,   n = k   (5.17)

Finally, we have our general equation for the Fourier coefficients:

c_n = (1/T) ∫_0^T f(t) e^{−j2πf_0 n t} dt   (5.18)


Example 5.2
Consider the square wave function given by

x(t) = 1/2 for t ≤ 1/2, −1/2 for t > 1/2   (5.19)

on the unit interval t ∈ [0, 1).

c_k = ∫_0^1 x(t) e^{−j2πkt} dt
    = ∫_0^{1/2} (1/2) e^{−j2πkt} dt − ∫_{1/2}^1 (1/2) e^{−j2πkt} dt
    = j(−1 + e^{jπk}) / (2πk)   (5.20)

Thus, the Fourier coefficients of this function found using the Fourier series analysis formula are

c_k = −j/(πk) for k odd, and 0 for k even.   (5.21)
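The coefficients of Example 5.2 can be checked numerically. The sketch below is an illustration only: it approximates the analysis integral (5.18) by a Riemann sum on a fine grid (the step size and tolerance are arbitrary choices) and compares the result against (5.21).

    import numpy as np

    # Numerically approximate c_k for the square wave of Example 5.2 (T = 1, f0 = 1)
    dt = 1e-5
    t = np.arange(0, 1, dt)
    x = np.where(t <= 0.5, 0.5, -0.5)

    for k in (1, 2, 3, 4, 5):
        ck = np.sum(x * np.exp(-2j * np.pi * k * t)) * dt      # analysis formula (5.18)
        expected = -1j / (np.pi * k) if k % 2 else 0.0         # result (5.21)
        print(k, np.round(ck, 4), np.isclose(ck, expected, atol=1e-3))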

5.4.5 Fourier Series Summary

Because complex exponentials are eigenfunctions of LTI systems, it is often useful to represent signals using a set of complex exponentials as a basis. The continuous time Fourier series synthesis formula expresses a continuous time, periodic function as the sum of continuous time, discrete frequency complex exponentials.

f(t) = ∑_{n=−∞}^{∞} c_n e^{j2πf_0 n t}   (5.22)

The continuous time Fourier series analysis formula gives the coefficients of the Fourier series expansion.

c_n = (1/T) ∫_0^T f(t) e^{−j2πf_0 n t} dt   (5.23)

In both of these equations f_0 = 1/T is the fundamental frequency.
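The synthesis and analysis formulas can be exercised together in code. The following sketch (an illustration, not part of the original module) rebuilds the Example 5.2 square wave from a truncated version of (5.22) using the coefficients (5.21); the number of harmonics N kept is an arbitrary choice.

    import numpy as np

    # Partial Fourier series synthesis of the Example 5.2 square wave (T = 1, f0 = 1)
    N = 200                                        # number of harmonics kept (arbitrary)
    t = np.arange(0.0, 1.0, 1e-3)
    x = np.where(t <= 0.5, 0.5, -0.5)              # target square wave

    xN = np.zeros_like(t, dtype=complex)
    for n in range(-N, N + 1):
        cn = -1j / (np.pi * n) if n % 2 else 0.0   # coefficients from (5.21)
        xN += cn * np.exp(2j * np.pi * n * t)      # synthesis formula (5.22)

    # Away from the jumps at t = 0 and t = 1/2 the partial sum is close to x(t)
    mask = (np.abs(t - 0.5) > 0.05) & (t > 0.05) & (t < 0.95)
    print(np.max(np.abs(xN.real[mask] - x[mask])))  # small; shrinks as N grows

Near the discontinuities the partial sum overshoots (the Gibbs phenomenon), which is consistent with (5.9) holding pointwise only away from points of discontinuity.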


Chapter 6

Continuous Time Fourier Transform (CTFT)

6.1 Continuous Time Aperiodic Signals1

6.1.1 Introduction

This module describes the type of signals acted on by the Continuous Time Fourier Transform.

6.1.2 Periodic and Aperiodic Signals

When a function repeats itself exactly after some given period, or cycle, we say it's periodic. A periodic function can be mathematically defined as:

f(t) = f(t + mT),   m ∈ Z   (6.1)

where T > 0 represents the fundamental period of the signal, which is the smallest positive value of T for which the signal repeats. Because of this, you may also see a signal referred to as a T-periodic signal. Any function that satisfies this equation is said to be periodic with period T.

An aperiodic CT function f(t) does not repeat for any T ∈ R; i.e., there exists no T such that this equation (6.1) holds.

Suppose we have such an aperiodic function f(t). We can construct a periodic extension of f(t) called f_{T_0}(t), where f(t) is repeated every T_0 seconds. If we take the limit as T_0 → ∞, we obtain a precise model of an aperiodic signal for which all rules that govern periodic signals can be applied, including Fourier Analysis (with an important modification). For more detail on this distinction, see the module on the Continuous Time Fourier Transform.

1This content is available online at <http://legacy.cnx.org/content/m47356/1.1/>.


6.1.3 Aperiodic Signal Demonstration

Figure 6.1: Interact (when online) with a Mathematica CDF demonstrating Periodic versus Aperiodic Signals. To download, right-click and save as .cdf.

6.1.4 Conclusion

Any aperiodic signal can be defined by an infinite sum of periodic functions, a useful definition that makes it possible to use Fourier Analysis on it by assuming all frequencies are present in the signal.

6.2 Continuous Time Fourier Transform (CTFT)2

6.2.1 Introduction

In this module, we will derive an expansion for any arbitrary continuous-time function, and in doing so, derive the Continuous Time Fourier Transform (CTFT).

Since complex exponentials (Section 2.6) are eigenfunctions of linear time-invariant (LTI) systems (Section 5.3), calculating the output of an LTI system H given e^{st} as an input amounts to simple multiplication, where H(s) ∈ C is the eigenvalue corresponding to s. As shown in the figure, a simple exponential input would yield the output

y(t) = H(s) e^{st}   (6.2)

Using this and the fact that H is linear, calculating y(t) for combinations of complex exponentials is also straightforward.

c_1 e^{s_1 t} + c_2 e^{s_2 t} → c_1 H(s_1) e^{s_1 t} + c_2 H(s_2) e^{s_2 t}

2This content is available online at <http://legacy.cnx.org/content/m47319/1.5/>.


∑_n c_n e^{s_n t} → ∑_n c_n H(s_n) e^{s_n t}

The action of H on an input such as those in the two equations above is easy to explain. H independently scales each exponential component e^{s_n t} by a different complex number H(s_n) ∈ C. As such, if we can write a function f(t) as a combination of complex exponentials, it allows us to easily calculate the output of a system.

Now, we will look to use the power of complex exponentials to see how we may represent arbitrary signals in terms of a set of simpler functions by superposition of a number of complex exponentials. Below we will present the Continuous-Time Fourier Transform (CTFT), commonly referred to as just the Fourier Transform (FT).

6.2.2 Fourier Synthesis

Joseph Fourier3 demonstrated that an arbitrary periodic signal s(t) can be written as a linear combination of harmonic complex sinusoids

s(t) = ∑_{n=−∞}^{∞} c_n e^{j(2π/T) n t}   (6.3)

where T is the fundamental period. For almost all s(t) of practical interest, there exist c_n that make (6.3) true. If s(t) has finite energy, then the equality in (6.3) holds in the sense of energy convergence; if s(t) is continuous, then (6.3) holds pointwise. Also, if s(t) meets some mild conditions (the Dirichlet conditions), then (6.3) holds pointwise everywhere except at points of discontinuity.

The c_n, called the Fourier coefficients, tell us "how much" of the sinusoid e^{j(2π/T) n t} is in s(t). The formula shows s(t) as a sum of complex exponentials, each of which is easily processed by an LTI system (since it is an eigenfunction of every LTI system). Mathematically, it tells us that the set of complex exponentials {e^{j(2π/T) n t}, n ∈ Z} forms a basis for the space of T-periodic continuous time functions. Because the CTFT deals with nonperiodic signals, we must find a way to include all real frequencies in the general equations. For the CTFT we simply let T go to infinity. This will also change the summation over integers to an integration over real numbers.

Example 6.1
We know from Euler's formula that cos(2πft) + sin(2πft) = ((1 − j)/2) e^{j2πft} + ((1 + j)/2) e^{−j2πft}.

6.2.3 The Fourier Transform

We have seen how all signals s(t) can be decomposed in terms of the sinusoids e^{j2πft}. We also know that these sinusoids are complex exponentials, and so they are eigenfunctions of LTI systems; recall as well that such eigenfunctions easily pass through LTI systems. Thus, we can use these two principles to easily obtain the output to any signal for any system: First, we obtain the Fourier coefficients of the signal to decompose it as the sum of scaled sinusoids; next, we run each scaled sinusoid through the system, which in essence scales (multiplies) it by the sinusoid's eigenvalue; and finally we sum together all the outputs (thanks to linearity and superposition) to obtain the output. What remains to be shown is the way to easily compute the coefficients and eigenvalues of a signal and a system. Both of these problems are solved using the Fourier Transform.

Continuous-Time Fourier Transform

S(f) = ∫_{−∞}^{∞} s(t) e^{−j2πft} dt   (6.4)

3http://www-groups.dcs.st-and.ac.uk/∼history/Mathematicians/Fourier.html


Inverse CTFT

s(t) = ∫_{−∞}^{∞} S(f) e^{j2πft} df   (6.5)

For signals, the CTFT provides the Fourier coefficients that are attached to sinusoids to represent the signal as in (6.3). The inverse CTFT then provides the signal as the linear combination of all sinusoids with the corresponding weights, extending (6.2). For systems, the CTFT can provide the eigenvalues for all sinusoids when applied to the impulse response function h(t). Applying the CTFT to the input signal and the impulse response allows for very easy computation of the system output, as we will soon observe.

warning: It is not uncommon to see the above formula written slightly differently. One of the most common differences is the way that the exponential is written. The above equations use the frequency variable f in the exponential. However, in some cases the radial frequency ω is used instead, where ω = 2πf. Therefore, you may often see the expressions above with ω in place of 2πf in the exponentials. Click here4 for an overview of the notation used in Connexions' DSP modules.

6.2.4 CTFT Denition Demonstration

Figure 6.2: Interact (when online) with a Mathematica CDF demonstrating the Continuous Time Fourier Transform. To download, right-click and save as .cdf.

4"DSP notation" <http://legacy.cnx.org/content/m10161/latest/>


6.2.5 Example Problems

Exercise 6.2.1 (Solution on p. 90.)

Find the Fourier Transform (CTFT) of the function

x(t) = e^{−αt} if t ≥ 0, and 0 otherwise   (6.6)

Exercise 6.2.2 (Solution on p. 90.)

Find the inverse Fourier transform of the ideal lowpass filter defined by

X(f) = 1 if |f| ≤ M, and 0 otherwise   (6.7)

6.3 Properties of the CTFT5

6.3.1 Introduction

This module will look at some of the basic properties of the Continuous-Time Fourier Transform (CTFT).

6.3.2 Summary Table of Fourier Transform Properties

Table of Fourier Transform Properties

Operation Name | Signal x(t) | Transform X(f)
Linearity (Section 6.3.3.1) | a x_1(t) + b x_2(t) | a X_1(f) + b X_2(f)
Scalar Multiplication (Section 6.3.3.1) | α x(t) | α X(f)
Duality (Section 6.3.3.2) | X(t) | x(−f)
Time Scaling (Section 6.3.3.3) | x(αt) | (1/|α|) X(f/α)
Time Shift (Section 6.3.3.4) | x(t − τ) | X(f) e^{−j2πfτ}
Convolution in Time (Section 6.3.3.5) | x_1(t) ∗ x_2(t) | X_1(f) X_2(f)
Convolution in Frequency (Section 6.3.3.5) | x_1(t) x_2(t) | X_1(f) ∗ X_2(f)
Differentiation (Section 6.3.3.6) | (d^n/dt^n) x(t) | (j2πf)^n X(f)
Parseval's Theorem (Section 6.3.3.7) | ∫_{−∞}^{∞} |x(t)|² dt | ∫_{−∞}^{∞} |X(f)|² df
Modulation (Frequency Shift) (Section 6.3.3.8) | x(t) e^{j2πφt} | X(f − φ)
Symmetry for Real Signals (Section 6.3.3.9) | x(t) is real | X(−f) = X*(f)

Table 6.1

5 This content is available online at <http://legacy.cnx.org/content/m47347/1.4/>.

6.3.3 Discussion of Fourier Transform Properties

6.3.3.1 Linearity

The combined addition and scalar multiplication properties in the table above demonstrate the basic property of linearity. What you should see is that if one takes the Fourier transform of a linear combination of signals then it will be the same as the linear combination of the Fourier transforms of each of the individual signals. This is crucial when using a table (Section 6.4) of transforms to find the transform of a more complicated signal.

Example 6.2
We will begin with the following signal:

z(t) = a x_1(t) + b x_2(t)   (6.8)

Z(f) = a X_1(f) + b X_2(f)   (6.9)

6.3.3.2 Duality

Duality is a property that can make life quite easy when solving problems involving Fourier transforms. Basically what this property says is that since a rectangular function in time is a sinc function in frequency, then a sinc function in time will be a rectangular function in frequency. This is a direct result of the similarity between the forward CTFT and the inverse CTFT. The only difference is a frequency reversal.

6.3.3.3 Time Scaling

This property deals with the effect on the frequency-domain representation of a signal if the time variable is altered. The most important concept to understand for the time scaling property is that signals that are narrow in time will be broad in frequency and vice versa. The simplest example of this is a delta function, a unit pulse6 with a very small duration in time that becomes an infinite-length constant function in frequency.

The table above shows this idea for the general transformation from the time-domain to the frequency-domain of a signal. You should be able to easily notice that these equations show the relationship mentioned previously: if the time variable is increased then the frequency range will be decreased.

6.3.3.4 Time Shifting

Time shifting shows that a shift in time is equivalent to a linear phase shift in frequency. Since the frequency content depends only on the shape of a signal, which is unchanged in a time shift, only the phase spectrum will be altered. This property is proven below:

Example 6.3
We will begin by letting z(t) = x(t − τ). Now let us take the Fourier transform with the previous expression substituted in for z(t).

Z(f) = ∫_{−∞}^{∞} x(t − τ) e^{−j2πft} dt   (6.10)

Define σ = t − τ. Through the calculations below, you can see that only the variable in the exponential is altered, thus only changing the phase in the frequency domain.

Z(f) = ∫_{−∞}^{∞} x(σ) e^{−j2πf(σ+τ)} dσ
     = e^{−j2πfτ} ∫_{−∞}^{∞} x(σ) e^{−j2πfσ} dσ
     = e^{−j2πfτ} X(f)   (6.11)

6.3.3.5 Convolution

Convolution is one of the big reasons for converting signals to the frequency domain, since convolution in time becomes multiplication in frequency. This property is also another excellent example of symmetry between time and frequency. It also shows that there may be little to gain by changing to the frequency domain when multiplication in time is involved.

We will introduce the convolution integral here, but if you have not seen this before or need to refresh your memory, then look at the continuous-time convolution (Section 6.5) module for a more in-depth explanation and derivation.

y(t) = x_1(t) ∗ x_2(t) = ∫_{−∞}^{∞} x_1(τ) x_2(t − τ) dτ   (6.12)

6.3.3.6 Time Dierentiation

Since LTI (Section 3.2) systems can be represented in terms of differential equations, it is apparent with this property that converting to the frequency domain may allow us to convert these complicated differential equations into simpler equations involving multiplication and addition. This is often looked at in more detail during the study of the Laplace Transform7.

6 "Elemental Signals": Section Pulse <http://legacy.cnx.org/content/m0004/latest/#pulsedef>
7 "The Laplace Transform" <http://legacy.cnx.org/content/m10110/latest/>


6.3.3.7 Parseval's Relation

∫_{−∞}^{∞} |x(t)|² dt = ∫_{−∞}^{∞} |X(f)|² df   (6.13)

6.3.3.8 Modulation (Frequency Shift)

Modulation is absolutely imperative to communications applications. Being able to shift a signal to a different frequency allows us to take advantage of different parts of the electromagnetic spectrum, and is what allows us to transmit television, radio, and other signals through the same space without significant interference.

The proof of the frequency shift property is very similar to that of the time shift (Section 6.3.3.4: Time Shifting); however, here we would use the inverse Fourier transform in place of the Fourier transform. Since we went through the steps in the previous, time-shift proof, below we will just show the initial and final step of this proof:

z(t) = ∫_{−∞}^{∞} X(f − φ) e^{j2πft} df   (6.14)

z(t) = x(t) e^{j2πφt}   (6.15)

6.3.3.9 Symmetry for Real Signals

For signals that are real-valued, we have that x (t) = x∗ (t). Thus, we can evaluate

X*(f) = ∫_{−∞}^{∞} x*(t) e^{j2πft} dt
      = ∫_{−∞}^{∞} x(t) e^{−j2π(−f)t} dt
      = X(−f)   (6.16)

6.3.4 Online Resources

The following online resources provide interactive explanations of the properties of the CTFT:
Continuous-Time Fourier Transform Properties Applet (Internet Explorer)8

Continuous-Time Fourier Transform Properties Applet (Other Browsers)9

6.3.5 Properties Demonstration

An interactive example demonstration of the properties is included below:

This media object is a LabVIEW VI. Please view or download it at<CTFTSPlab.llb>

Figure 6.3: Interactive Signal Processing Laboratory Virtual Instrument created using NI's Labview.

8 http://www.jhu.edu/signals/ctftprops-mathml/index.htm
9 http://www.jhu.edu/signals/ctftprops/indexCTFTprops.htm


6.4 Common Fourier Transforms10

6.4.1 Common CTFT Pairs

10This content is available online at <http://legacy.cnx.org/content/m47344/1.5/>.


Time Domain Signal | Frequency Domain Signal | Condition
e^{−at} u(t) | 1/(a + j2πf) | a > 0
e^{at} u(−t) | 1/(a − j2πf) | a > 0
e^{−a|t|} | 2a/(a² + (2πf)²) | a > 0
t e^{−at} u(t) | 1/(a + j2πf)² | a > 0
t^n e^{−at} u(t) | n!/(a + j2πf)^{n+1} | a > 0
δ(t) | 1 |
1 | δ(f) |
e^{j2πf_0 t} | δ(f − f_0) |
cos(2πf_0 t) | (1/2)(δ(f − f_0) + δ(f + f_0)) |
sin(2πf_0 t) | (j/2)(δ(f + f_0) − δ(f − f_0)) |
u(t) | (1/2)δ(f) + 1/(j2πf) |
sgn(t) | 1/(jπf) |
cos(2πf_0 t) u(t) | (1/4)(δ(f − f_0) + δ(f + f_0)) + jf/(2π(f_0² − f²)) |
sin(2πf_0 t) u(t) | (1/(4j))(δ(f − f_0) − δ(f + f_0)) + f_0/(2π(f_0² − f²)) |
e^{−at} sin(2πf_0 t) u(t) | 2πf_0/((a + j2πf)² + (2πf_0)²) | a > 0
e^{−at} cos(2πf_0 t) u(t) | (a + j2πf)/((a + j2πf)² + (2πf_0)²) | a > 0
p(t/(2τ)) = u(t + τ) − u(t − τ) | 2τ sin(2πfτ)/(2πfτ) = 2τ sinc(2fτ) |
2f_0 sinc(2f_0 t) = (2f_0) sin(2πf_0 t)/(2πf_0 t) | u(f + f_0) − u(f − f_0) = p(f/(2f_0)) |
Λ(t/τ) = (t/τ + 1)(u(t/τ + 1) − u(t/τ)) + (−t/τ + 1)(u(t/τ) − u(t/τ − 1)) | τ sinc²(fτ) |
f_0 sinc²(f_0 t) | Λ(f/f_0) = (f/f_0 + 1)(u(f/f_0 + 1) − u(f/f_0)) + (−f/f_0 + 1)(u(f/f_0) − u(f/f_0 − 1)) |
∑_{n=−∞}^{∞} δ(t − nT) | f_0 ∑_{n=−∞}^{∞} δ(f − nf_0), with f_0 = 1/T |
e^{−t²/(2σ²)} | σ√(2π) e^{−2(πfσ)²} |

Table 6.2

Notes
p(t) is the pulse function for arbitrary real-valued t:

p(t) = 1 if |t| ≤ 1/2, and 0 if |t| > 1/2   (6.17)

Λ(t) is the triangle function for arbitrary real-valued t:

Λ(t) = 1 + t if −1 ≤ t ≤ 0; 1 − t if 0 < t ≤ 1; 0 otherwise   (6.18)

6.5 Continuous Time Convolution and the CTFT11

6.5.1 Introduction

This module discusses convolution of continuous signals in the time and frequency domains.

6.5.2 Continuous Time Fourier Transform

The CTFT (Section 6.2) transforms an infinite-length continuous signal in the time domain into an infinite-length continuous signal in the frequency domain.

CTFT

X(f) = ∫_{−∞}^{∞} x(t) e^{−j2πft} dt   (6.19)

Inverse CTFT

x(t) = ∫_{−∞}^{∞} X(f) e^{j2πft} df   (6.20)

6.5.3 Convolution Integral

The convolution integral expresses the output of an LTI system based on an input signal, x(t), and the system's impulse response, h(t). The convolution integral is expressed as

y(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ   (6.21)

Convolution is such an important tool that it is represented by the symbol ∗, and can be written as

y (t) = x (t) ∗ h (t) (6.22)

Convolution is commutative. For more information on the characteristics of the convolution integral, read about the Properties of Convolution (Section 4.5).

11This content is available online at <http://legacy.cnx.org/content/m47346/1.1/>.


6.5.4 Demonstration

Figure 6.4: Interact (when online) with a Mathematica CDF demonstrating use of the CTFT in signal denoising. To download, right-click and save target as .cdf.


6.5.5 Convolution Theorem

Let f and g be two functions with convolution f ∗ g. Let F be the Fourier transform operator. Then

F (f ∗ g) = F (f) · F (g) (6.23)

F (f · g) = F (f) ∗ F (g) (6.24)

By applying the inverse Fourier transform F−1, we can write:

f ∗ g = F−1 (F (f) · F (g)) (6.25)

6.5.6 Conclusion

The Fourier transform of a convolution is the pointwise product of Fourier transforms. In other words, convolution in one domain (e.g., the time domain) corresponds to point-wise multiplication in the other domain (e.g., the frequency domain).
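This statement can be illustrated numerically with sampled, finite-length signals, where the FFT plays the role of the Fourier transform. The sketch below is an illustration only; the particular signals and sample spacing are arbitrary choices, and zero-padding is used so that the FFT's circular convolution coincides with the linear convolution.

    import numpy as np

    # Sampled illustration of F(f * g) = F(f) . F(g)
    dt = 1e-3
    t = np.arange(0, 1, dt)
    f = np.exp(-5 * t)                                   # two arbitrary finite-energy signals
    g = np.sin(2 * np.pi * 10 * t) * np.exp(-3 * t)

    N = 2 * len(t)                                       # zero-pad to avoid wrap-around
    conv_time = np.convolve(f, g) * dt                   # convolution computed in time
    conv_freq = np.fft.ifft(np.fft.fft(f, N) * np.fft.fft(g, N)) * dt   # product of transforms

    print(np.allclose(conv_time, conv_freq.real[:len(conv_time)]))      # True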


Solutions to Exercises in Chapter 6

Solution to Exercise 6.2.1 (p. 81)
In order to calculate the Fourier transform, all we need to use is (6.4) (Continuous-Time Fourier Transform), complex exponentials (Section 2.6), and basic calculus.

X(f) = ∫_{−∞}^{∞} x(t) e^{−j2πft} dt
     = ∫_0^{∞} e^{−αt} e^{−j2πft} dt
     = ∫_0^{∞} e^{−t(α + j2πf)} dt
     = 0 − (−1/(α + j2πf))   (6.26)

X(f) = 1/(α + j2πf)   (6.27)
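As a sanity check, the closed form (6.27) can be compared with a direct numerical approximation of the CTFT integral (6.4). The sketch below is an illustration only; α = 2, the truncation point, and the tolerance are arbitrary choices.

    import numpy as np

    # Check X(f) = 1/(alpha + j*2*pi*f) for x(t) = e^{-alpha t} u(t)
    alpha = 2.0
    dt = 1e-4
    t = np.arange(0, 50, dt)              # x(t) is negligible beyond t = 50
    x = np.exp(-alpha * t)

    for f in (0.0, 0.5, 2.0):
        Xf = np.sum(x * np.exp(-2j * np.pi * f * t)) * dt    # Riemann sum for (6.4)
        print(np.isclose(Xf, 1 / (alpha + 2j * np.pi * f), atol=1e-3))   # True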

Solution to Exercise 6.2.2 (p. 81)
Here we will use (6.5) (Inverse CTFT) to find the inverse FT, given that t ≠ 0.

x(t) = ∫_{−M}^{M} e^{j2πft} df
     = (1/(j2πt)) e^{j2πft} |_{f=−M}^{M}
     = (1/(πt)) sin(2πMt)   (6.28)

x (t) = (2M) (sinc (2Mt)) (6.29)


Chapter 7

Discrete-Time Signals

7.1 Common Discrete Time Signals1

7.1.1 Introduction

Before looking at this module, hopefully you have an idea of what a signal is and what basic classifications and properties a signal can have. In review, a signal is a function defined with respect to an independent variable. This variable is often time but could represent any number of things. Mathematically, discrete time analog signals have discrete independent variables and continuous dependent variables. This module will describe some useful discrete time analog signals.

7.1.2 Important Discrete Time Signals

7.1.2.1 Sinusoids

One of the most important elemental signals that you will deal with is the real-valued sinusoid. In its discrete-time form, we write the general expression as

A cos(Ωn + φ)   (7.1)

where A is the amplitude, Ω is the frequency, and φ is the phase. Because n only takes integer values, the resulting function is only periodic if 2π/Ω is a rational number.

Discrete-Time Cosine Signal


Figure 7.1: A discrete-time cosine signal is plotted as a stem plot.

Note that the equation representation for a discrete time sinusoid waveform is not unique.

1This content is available online at <http://legacy.cnx.org/content/m47447/1.2/>.


7.1.2.2 Complex Exponentials

As important as the general sinusoid, the complex exponential function will become a critical part of your study of signals and systems. Its general discrete form is written as

A e^{sn}   (7.2)

where s = σ + jΩ is a complex number in terms of σ, the attenuation constant, and Ω, the angular frequency. The discrete time complex exponentials have the following property.

e^{jΩn} = e^{j(Ω+2π)n}   (7.3)

Given this property, if we have a complex exponential with frequency Ω + 2π, then this signal "aliases" to a complex exponential with frequency Ω, implying that the equation representations of discrete complex exponentials are not unique.

7.1.2.3 Unit Impulses

The second-most important discrete-time signal is the unit sample, which is defined as

δ[n] = 1 if n = 0, and 0 otherwise   (7.4)

Unit Sample


Figure 7.2: The unit sample.

More detail is provided in the section on the discrete time impulse function. For now, it suffices to say that this signal is crucially important in the study of discrete signals, as it allows the sifting property to be used in signal representation and signal decomposition.

7.1.2.4 Unit Step

Another very basic signal is the unit-step function, defined as

u[n] = 0 if n < 0, and 1 if n ≥ 0   (7.5)


Figure 7.3: Discrete-Time Unit-Step Function

The step function is a useful tool for testing and for defining other signals. For example, when different shifted versions of the step function are multiplied by other signals, one can select a certain portion of the signal and zero out the rest.

7.1.3 Common Discrete Time Signals Summary

Some of the most important and most frequently encountered signals have been discussed in this module. There are, of course, many other signals of significant consequence not discussed here. As you will see later, many of the other more complicated signals will be studied in terms of those listed here. Especially take note of the complex exponentials and unit impulse functions, which will be the key focus of several topics included in this course.

7.2 Energy and Power of Discrete-Time Signals2

7.2.1 Signal Energy

7.2.1.1 Discrete signals

For discrete time signals the "area under the squared signal" makes no sense, so we will have to use another energy definition. We define energy as the sum of the squared magnitudes of the samples. Mathematically,

Energy - Discrete time signal: E_d = ∑_{n=−∞}^{∞} |x[n]|²

Example 7.1
Given the sequence y[l] = b^l u[l], where u[l] is the unit step function, find the energy of the sequence.

We recognize y[l] as a geometric series. Thus we can use the formula for the sum of a geometric series and we obtain the energy, E_d = ∑_{l=0}^{∞} y²[l] = 1/(1 − b²). This expression is only valid for |b| < 1. If we have a larger |b|, the series will diverge. The signal y[l] then has infinite energy. So let's have a look at power...
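A quick numeric check of Example 7.1, for one arbitrary choice of |b| < 1 and with the infinite sum truncated where the remaining terms are negligible (an illustration only):

    import numpy as np

    # Energy of y[l] = b^l u[l] for b = 0.9, compared with 1/(1 - b^2)
    b = 0.9
    l = np.arange(0, 500)                  # b^(2*500) is negligibly small
    y = b ** l
    Ed = np.sum(np.abs(y) ** 2)
    print(Ed, 1 / (1 - b ** 2), np.isclose(Ed, 1 / (1 - b ** 2)))   # both about 5.263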

7.2.2 Signal Power

Our definition of energy seems reasonable, and it is. However, what if the signal does not decay fast enough? In this case we have infinite energy for any such signal. Does this mean that a fifty hertz sine wave feeding into your headphones is as strong as the fifty hertz sine wave coming out of your outlet? Obviously not. This is what leads us to the idea of signal power, which in such cases is a more adequate description.

2This content is available online at <http://legacy.cnx.org/content/m47357/1.3/>.


Figure 7.4: Signal with infinite energy

7.2.2.1 Discrete signals

For discrete time signals we define power as energy per sample.

Power - Discrete time: P_d = lim_{N→∞} (1/(2N + 1)) ∑_{n=−N}^{N} |x[n]|²

For periodic discrete-time signals, the sum need only be taken over one period:

Power - Discrete time periodic signal with period N_0: P_d = (1/N_0) ∑_{n=0}^{N_0−1} |x[n]|²

Example 7.2
Given the signal x[n] = sin(πn/10), shown in Figure 7.5, calculate the power for one period.

For the discrete sine we get P_d = (1/20) ∑_{n=1}^{20} sin²(πn/10) = 0.500. Download power_sine.m3 for plots and calculation.

Figure 7.5: Discrete time sine.
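The same calculation can be done directly in Python (a sketch equivalent to the power_sine.m computation referenced above; power_sine.m itself is a MATLAB file):

    import numpy as np

    # Power of x[n] = sin(pi*n/10) over one period (N0 = 20), as in Example 7.2
    n = np.arange(1, 21)
    x = np.sin(np.pi * n / 10)
    Pd = np.mean(np.abs(x) ** 2)
    print(Pd)   # 0.5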

3See the le at <http://legacy.cnx.org/content/m11526/latest/power_sine.m>


7.3 Discrete-Time Signal Operations4

7.3.1 Introduction

This module will look at two signal operations affecting the time parameter of the signal: time shifting and time scaling. While they appear at first to be straightforward extensions of the continuous-time signal operations, there are some intricacies that are particular to discrete-time signals.

7.3.2 Manipulating the Time Parameter

7.3.2.1 Time Shifting

Time shifting is, as the name suggests, the shifting of a signal in time. This is done by adding or subtracting an integer quantity of the shift to the time variable in the function. Subtracting a fixed positive quantity from the time variable will shift the signal to the right (delay) by the subtracted quantity, while adding a fixed positive amount to the time variable will shift the signal to the left (advance) by the added quantity.

Figure 7.6: f [n− 3] moves (delays) f to the right by 3.

7.3.2.2 Time Scaling

Time scaling compresses or dilates a signal by multiplying the time variable by some quantity. If that quantity is greater than one, the signal becomes narrower and the operation is called decimation. In contrast, if the quantity is less than one, the signal becomes wider and the operation is called expansion or interpolation, depending on how the gaps between values are filled.

7.3.2.2.1 Decimation

In decimation, the input of the signal is changed to be f[cn]. The quantity used for decimation, c, must be an integer so that the input takes values for which a discrete function is properly defined. The decimated signal f[cn] corresponds to the original signal f[n] where only every cth sample is preserved (including f[0]), and so we are throwing away samples of the signal (or decimating it).

4This content is available online at <http://legacy.cnx.org/content/m47809/1.1/>.


Figure 7.7: f [2n] decimates f by 2.

7.3.2.2.2 Expansion

In expansion, the input of the signal is changed to be f[n/c]. We know that the signal f[n] is defined only for integer values of the input n. Thus, in the expanded signal we can only place the entries of the original signal f at values of n that are multiples of c. In other words, we are spacing the values of the discrete-time signal c − 1 entries away from each other. Since the signal is undefined elsewhere, the standard convention in expansion is to fill in the undetermined values with zeros.

Figure 7.8: f[n/2] expands f by 2.

7.3.2.2.3 Interpolation

In practice, we may know specific information about the signal of interest that allows us to provide good estimates of the entries of f[n/c] that are missing after expansion. For example, we may know that the signal is supposed to be piecewise linear, and so knowing the values of f[n/c] at n = mc and at n = (m + 1)c allows us to infer the values for n between mc and (m + 1)c − 1. This process of inferring the undefined values is known as interpolation. The rule described above is linear interpolation; although more sophisticated rules exist for interpolating values, linear interpolation will suffice for our explanation in this module.
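These operations are easy to express with array indexing. The following sketch is an illustration only; the zero-fill convention and the use of np.interp for the linear fill are one possible realization of what is described above, applied to a short made-up signal.

    import numpy as np

    f = np.array([0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0])   # an arbitrary short signal f[0..6]
    c = 2

    # Decimation: keep every c-th sample, f_dec[n] = f[c*n]
    f_dec = f[::c]                                       # [0, 2, 2, 0]

    # Expansion: place f[n] at multiples of c and fill the gaps with zeros
    f_exp = np.zeros(c * (len(f) - 1) + 1)
    f_exp[::c] = f

    # Linear interpolation: fill the zeroed gaps by connecting known samples
    n_all = np.arange(len(f_exp))
    f_lin = np.interp(n_all, n_all[::c], f)

    print(f_dec, f_exp, f_lin, sep="\n")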


Figure 7.9: f[n/2] with interpolation fills in the missing values of the expansion using linear extensions.

Example 7.3
Given f[n], we would like to plot f[an − b]. The figure below describes a method to accomplish this.

Figure 7.10: (a) Begin with f[n]. (b) Then replace n with an to get f[an]. (c) Finally, replace n with n − b/a to get f[a(n − b/a)] = f[an − b].

7.3.2.3 Time Reversal

A natural question to consider when learning about time scaling is: What happens when the time variable is multiplied by a negative number? The answer to this is time reversal. This operation is the reversal of the time axis, or flipping the signal over the y-axis.

Figure 7.11: Reverse the time axis


7.3.3 Signal Operations Summary

Some common operations on signals affect the time parameter of the signal. One of these is time shifting, in which a quantity is added to the time parameter in order to advance or delay the signal. Another is time scaling, in which the time parameter is multiplied by a quantity in order to expand or decimate the signal in time. In the event that the quantity involved in the latter operation is negative, time reversal occurs.

7.4 Discrete Time Impulse Function5

7.4.1 Introduction

In engineering, we often deal with the idea of an action occurring at a point. Whether it be a force at a point in space or some other signal at a point in time, it becomes worthwhile to develop some way of quantitatively defining this. This leads us to the idea of a unit impulse, probably the second most important function, next to the complex exponential, in this signals and systems course.

7.4.2 Unit Sample Function

The unit sample function, often referred to as the unit impulse or delta function, is the function that defines the idea of a unit impulse in discrete time. There are not nearly as many intricacies involved in its definition as there are in the definition of the Dirac delta function, the continuous time impulse function. The unit sample function simply takes a value of one at n = 0 and a value of zero elsewhere. The impulse function is often written as δ[n].

δ[n] = 1 if n = 0, and 0 otherwise   (7.6)

Unit Sample


Figure 7.12: The unit sample.

Below we will briefly list a few important properties of the unit impulse without going into detail of their proofs.

Unit Impulse Properties

• δ[αn] = (1/|α|) δ[n]
• δ[n] = δ[−n]
• δ[n] = u[n] − u[n − 1]
• f[n] δ[n] = f[0] δ[n]

5This content is available online at <http://legacy.cnx.org/content/m47448/1.1/>.


∑_{n=−∞}^{∞} f[n] δ[n] = ∑_{n=−∞}^{∞} f[0] δ[n] = f[0] ∑_{n=−∞}^{∞} δ[n] = f[0]   (7.7)

7.4.3 Discrete Time Impulse Response Demonstration

Figure 7.13: Interact(when online) with a Mathematica CDF demonstrating the Discrete Time ImpulseFunction.

7.4.4 Discrete Time Unit Impulse Summary

The discrete time unit impulse function, also known as the unit sample function, is of great importance to the study of signals and systems. The function takes a value of one at time n = 0 and a value of zero elsewhere. It has several important properties that will appear again when studying systems.

7.5 Discrete Time Complex Exponential6

7.5.1 Introduction

Complex exponentials are some of the most important functions in our study of signals and systems. Their importance stems from their status as eigenfunctions of linear time invariant systems; as such, it can be both convenient and insightful to represent signals in terms of complex exponentials. Before proceeding, you should be familiar with complex numbers.

7.5.2 The Discrete Time Complex Exponential

7.5.2.1 Complex Exponentials

The complex exponential function will become a critical part of your study of signals and systems. Its general discrete form is written as

z^n   (7.8)

where z is a complex number. Recalling the polar expression of complex numbers, z can be expressed in terms of its magnitude |z| and its angle (or argument) ω in the complex plane: z = |z| e^{jω}. Thus z^n = |z|^n e^{jωn}. In the context of complex exponentials, ω is referred to as frequency. For the time being, let's consider complex exponentials for which |z| = 1.

These discrete time complex exponentials have the following property, which will become evident through discussion of Euler's formula.

e^{jωn} = e^{j(ω+2π)n}   (7.9)

Given this property, if we have a complex exponential with frequency ω + 2π, then this signal "aliases" to a complex exponential with frequency ω, implying that the equation representations of discrete complex exponentials are not unique.

7.5.2.2 Euler's Formula

The mathematician Euler proved an important identity relating complex exponentials to trigonometric functions. Specifically, he discovered the eponymously named identity, Euler's formula, which states that for any real number x,

e^{jx} = cos(x) + j sin(x)   (7.10)

which can be proven as follows.

In order to prove Euler's formula, we start by evaluating the Taylor series for e^z about z = 0, which converges for all complex z, at z = jx. The result is

e^{jx} = ∑_{k=0}^{∞} (jx)^k / k!
       = ∑_{k=0}^{∞} (−1)^k x^{2k} / (2k)! + j ∑_{k=0}^{∞} (−1)^k x^{2k+1} / (2k+1)!
       = cos(x) + j sin(x)   (7.11)

because the second expression contains the Taylor series for cos(x) and sin(x) about x = 0, which converge for all real x. Thus, the desired result is proven.

Choosing x = ωn, we have:

e^{jωn} = cos(ωn) + j sin(ωn)   (7.12)

6This content is available online at <http://legacy.cnx.org/content/m34573/1.6/>.


which breaks a discrete time complex exponential into its real part and imaginary part. Using this formula, we can also derive the following relationships.

cos(ωn) = (1/2) e^{jωn} + (1/2) e^{−jωn}   (7.13)

sin(ωn) = (1/(2j)) e^{jωn} − (1/(2j)) e^{−jωn}   (7.14)

7.5.2.3 Real and Imaginary Parts of Complex Exponentials

Now let's return to the more general case of complex exponentials, z^n. Recall that z^n = |z|^n e^{jωn}. We can express this in terms of its real and imaginary parts:

Re{z^n} = |z|^n cos(ωn)   (7.15)

Im{z^n} = |z|^n sin(ωn)   (7.16)

We see now that the magnitude of z establishes an exponential envelope to the signal, with ω controlling the rate of the sinusoidal oscillation within the envelope.


Figure 7.14: (a) If |z| < 1, we have the case of a decaying exponential envelope. (b) If |z| > 1, we have the case of a growing exponential envelope. (c) If |z| = 1, we have the case of a constant envelope.


7.5.3 Discrete Complex Exponential Demonstration

Figure 7.15: Interact (when online) with a Mathematica CDF demonstrating the Discrete Time Complex Exponential. To download, right-click and save target as .cdf.


7.5.4 Discrete Time Complex Exponential Summary

Discrete time complex exponentials are signals of great importance to the study of signals and systems. They can be related to sinusoids through Euler's formula, which identifies the real and imaginary parts of complex exponentials. Euler's formula reveals that, in general, the real and imaginary parts of complex exponentials are sinusoids multiplied by real exponentials.


Chapter 8

Time Domain Analysis of Discrete Time Systems

8.1 Discrete Time Systems1

8.1.1 Introduction

As you already know, a discrete time system operates on a discrete time signal input and produces a discrete time signal output. There are numerous examples of useful discrete time systems in digital signal processing, such as digital filters for images or sound. The class of discrete time systems that are both linear and time invariant, known as discrete time LTI systems, is of particular interest as the properties of linearity and time invariance together allow the use of some of the most important and powerful tools in signal processing.

8.1.2 Discrete Time Systems

8.1.2.1 Linearity and Time Invariance

A system H is said to be linear if it satisfies two important conditions. The first, additivity, states for every pair of signals x, y that H(x + y) = H(x) + H(y). The second, homogeneity of degree one, states for every signal x and scalar a that H(ax) = aH(x). It is clear that these conditions can be combined together into a single condition for linearity. Thus, a system is said to be linear if for all signals x, y and scalars a, b we have that

H(ax + by) = aH(x) + bH(y).   (8.1)

Linearity is a particularly important property of systems as it allows us to leverage the powerful tools of linear algebra, such as bases, eigenvectors, and eigenvalues, in their study.

A system H is said to be time invariant if a time shift of an input produces the corresponding shifted output. In other, more precise words, the system H commutes with the time shift operator S_T for every T ∈ Z. That is,

S_T H = H S_T.   (8.2)

Time invariance is desirable because it eases computation while mirroring our intuition that, all else being equal, physical systems should react the same way to identical inputs at different times.

When a system exhibits both of these important properties, powerful analysis tools become available. As will be explained and proven in subsequent modules, computation of the system output for a given input becomes a simple matter of convolving the input with the system's impulse response signal. Also proven later, the fact that complex exponentials are eigenfunctions of linear time invariant systems will encourage the use of frequency domain tools, such as the various Fourier transforms and associated transfer functions, to describe the behavior of linear time invariant systems.

1 This content is available online at <http://legacy.cnx.org/content/m47454/1.3/>.

Example 8.1
Consider the system H in which

H(f[n]) = 2f[n]   (8.3)

for all signals f. Given any two signals f, g and scalars a, b,

H(af[n] + bg[n]) = 2(af[n] + bg[n]) = a(2f[n]) + b(2g[n]) = aH(f[n]) + bH(g[n])   (8.4)

for all integers n. Thus, H is a linear system. For all integers T and signals f,

S_T(H(f[n])) = S_T(2f[n]) = 2f[n − T] = H(f[n − T]) = H(S_T(f[n]))   (8.5)

for all integers n. Thus, H is a time invariant system. Therefore, H is a linear time invariant system.
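A numerical spot-check of Example 8.1 on finite-length arrays (an illustration only; np.roll is used here as a stand-in for the time shift operator, acting on a periodic extension of the finite array):

    import numpy as np

    H = lambda f: 2 * f                        # the system of Example 8.1
    shift = lambda f, T: np.roll(f, T)         # stand-in time shift for finite arrays

    rng = np.random.default_rng(0)
    f, g = rng.standard_normal(8), rng.standard_normal(8)
    a, b, T = 3.0, -1.5, 2

    print(np.allclose(H(a * f + b * g), a * H(f) + b * H(g)))   # linearity, as in (8.1)
    print(np.allclose(shift(H(f), T), H(shift(f, T))))          # time invariance, as in (8.2)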

8.1.2.2 Causality

The causality property requires that a system's output depend only on past and present values of the input. For a discrete-time system, this means that the value of the output y[n_0] at a specific time n_0 can only depend on values of the input x[n] for n ≤ n_0.

8.1.3 Discrete Time Systems Summary

Many useful discrete time systems will be encountered in a study of signals and systems. This course is most interested in those that demonstrate both the linearity property and the time invariance property, which together enable the use of some of the most powerful tools of signal processing. It is often useful to describe them in terms of rates of change through linear constant coefficient difference equations, a type of recurrence relation.

8.2 Discrete Time Impulse Response2

8.2.1 Introduction

The output of a discrete time LTI system is completely determined by the input and the system's response to a unit impulse.

2This content is available online at <http://legacy.cnx.org/content/m47363/1.2/>.


System Output

Figure 8.1: We can determine the system's output, y[n], if we know the system's impulse response, h[n], and the input, x[n].

The output for a unit impulse input is called the impulse response.

Figure 8.2


Figure 8.3


8.2.1.1 Example Impulses

Since we are considering discrete time signals and systems, an ideal impulse is easy to simulate on a computer or some other digital device. It is simply a signal that is 1 at the point n = 0, and 0 everywhere else.

8.2.2 LTI Systems and Impulse Responses

8.2.2.1 Finding System Outputs

By the sifting property of impulses, any signal can be decomposed in terms of an infinite sum of shifted, scaled impulses.

x[n] = ∑_{k=−∞}^{∞} x[k] δ_k[n] = ∑_{k=−∞}^{∞} x[k] δ[n − k]   (8.6)

The function δk [n] = δ [n− k] peaks up where n = k.


Figure 8.4

Since we know the response of the system to an impulse, and any signal can be decomposed into impulses, all we need to do to find the response of the system to any signal is to decompose the signal into impulses, calculate the system's output for every impulse, and add the outputs back together. This is the process known as Convolution. Since we are in Discrete Time, this is the Discrete Time Convolution Sum.

8.2.2.2 Finding Impulse Responses

a. Apply an impulse input signal to the system and record the output
b. Use Fourier methods

We will assume that h [n] is given for now.

The goal is now to compute the output y [n] given the impulse response h [n] and the input x [n].


Figure 8.5

8.2.3 Impulse Response Summary

When a system is "shocked" by a delta function, it produces an output known as its impulse response. For an LTI system, the impulse response completely determines the output of the system given any arbitrary input. The output can be found using discrete time convolution.

8.3 Discrete-Time Convolution3

8.3.1 Introduction

Convolution, one of the most important concepts in electrical engineering, can be used to determine the output a system produces for a given input signal. It can be shown that a linear time invariant system is completely characterized by its impulse response. The sifting property of the discrete time impulse function tells us that the input signal to a system can be represented as a sum of scaled and shifted unit impulses. Thus, by linearity, it would seem reasonable to compute the output signal as the sum of scaled and shifted unit impulse responses. That is exactly what the operation of convolution accomplishes. Hence, convolution can be used to determine a linear time invariant system's output from knowledge of the input and the impulse response.

8.3.2 Convolution and Circular Convolution

8.3.2.1 Convolution

8.3.2.1.1 Operation Denition

Discrete time convolution is an operation on two discrete time signals defined by the sum

(f ∗ g)[n] = ∑_{k=−∞}^{∞} f[k] g[n − k]   (8.7)

for all signals f, g defined on Z. It is important to note that the operation of convolution is commutative, meaning that

f ∗ g = g ∗ f (8.8)

3This content is available online at <http://legacy.cnx.org/content/m47455/1.2/>.


for all signals f, g defined on Z. Thus, the convolution operation could have been just as easily stated using the equivalent definition

(f ∗ g)[n] = ∑_{k=−∞}^{∞} f[n − k] g[k]   (8.9)

for all signals f, g defined on Z. Convolution has several other important properties not listed here but explained and derived in a later module.

8.3.2.1.2 Denition Motivation

The above operation denition has been chosen to be particularly useful in the study of linear time invariantsystems. In order to see this, consider a linear time invariant system H with unit impulse response h. Givena system input signal x we would like to compute the system output signal H (x). First, we note that theinput can be expressed as the convolution

x[n] = ∑_{k=−∞}^{∞} x[k] δ[n − k]   (8.10)

by the sifting property of the unit impulse function. By linearity

Hx[n] = ∑_{k=−∞}^{∞} x[k] Hδ[n − k].   (8.11)

Since Hδ [n− k] is the shifted unit impulse response h [n− k], this gives the result

Hx[n] = ∑_{k=−∞}^{∞} x[k] h[n − k] = (x ∗ h)[n].   (8.12)

Hence, convolution has been defined such that the output of a linear time invariant system is given by the convolution of the system input with the system unit impulse response.

8.3.2.1.3 Graphical Intuition

It is often helpful to be able to visualize the computation of a convolution in terms of graphical processes. Consider the convolution of two functions f, g given by

(f ∗ g)[n] = ∑_{k=−∞}^{∞} f[k] g[n − k] = ∑_{k=−∞}^{∞} f[n − k] g[k].   (8.13)

The first step in graphically understanding the operation of convolution is to plot each of the functions. Next, one of the functions must be selected, and its plot reflected across the k = 0 axis. For each n ∈ Z, that same function must be shifted by n. The product of the two resulting plots is then constructed. Finally, the values of the resulting product are summed over k.
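A small Python/NumPy sketch of this flip-shift-multiply-sum procedure for a single output index n, using assumed example signals:

    # Compute (f*g)[n] at one index by reflecting g, shifting it, multiplying, and summing.
    import numpy as np

    f = np.array([1.0, 2.0, 3.0])
    g = np.array([0.5, 1.0])
    n = 2                                  # output index to compute

    total = 0.0
    for k in range(len(f)):                # (f*g)[n] = sum_k f[k] g[n-k]
        if 0 <= n - k < len(g):            # g[n-k] is zero outside its support
            total += f[k] * g[n - k]

    print(total, np.convolve(f, g)[n])     # both give the same value at index n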

Example 8.2
Recall that the impulse response for a discrete time echoing feedback system with gain a is

h[n] = a^n u[n],   (8.14)

and consider the response to an input signal that is another exponential

x[n] = b^n u[n].   (8.15)


We know that the output for this input is given by the convolution of the impulse response with the input signal

y [n] = x [n] ∗ h [n] . (8.16)

We would like to compute this operation by beginning in a way that minimizes the algebraic complexity of the expression. However, in this case, each possible choice is equally simple. Thus, we would like to compute

y[n] = ∑_{k=−∞}^{∞} a^k u[k] b^{n−k} u[n − k].   (8.17)

The step functions can be used to further simplify this sum. Therefore,

y [n] = 0 (8.18)

for n < 0 and

y[n] = ∑_{k=0}^{n} a^k b^{n−k} = b^n ∑_{k=0}^{n} (a/b)^k   (8.19)

for n ≥ 0. Hence, provided a ≠ b, we have that

y[n] = 0 for n < 0,
y[n] = b^n (1 − (a/b)^{n+1}) / (1 − a/b) = (b^{n+1} − a^{n+1}) / (b − a) for n ≥ 0.   (8.20)
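As a quick numerical sanity check of this closed form, the following sketch compares it against a direct convolution of truncated exponentials, with assumed example values a = 0.5 and b = 0.8:

    import numpy as np

    a, b, L = 0.5, 0.8, 20
    n = np.arange(L)
    h = a ** n                             # a^n u[n], truncated to L samples
    x = b ** n                             # b^n u[n], truncated to L samples

    y_conv = np.convolve(x, h)[:L]         # first L output samples are exact despite truncation
    y_formula = (b ** (n + 1) - a ** (n + 1)) / (b - a)

    print(np.max(np.abs(y_conv - y_formula)))   # ~1e-16, i.e. the two agree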

8.3.2.2 Circular Convolution

Discrete time circular convolution is an operation on two finite length or periodic discrete time signals defined by the sum

(f ∗ g)[n] = ∑_{k=0}^{N−1} f̂[k] ĝ[n − k]   (8.21)

for all signals f, g defined on Z[0, N − 1], where f̂, ĝ are periodic extensions of f and g. It is important to note that the operation of circular convolution is commutative, meaning that

f ∗ g = g ∗ f   (8.22)

for all signals f, g defined on Z[0, N − 1]. Thus, the circular convolution operation could have been just as easily stated using the equivalent definition

(f ∗ g)[n] = ∑_{k=0}^{N−1} f̂[n − k] ĝ[k]   (8.23)

for all signals f, g defined on Z[0, N − 1], where f̂, ĝ are periodic extensions of f and g. Circular convolution has several other important properties not listed here but explained and derived in a later module.


Alternatively, discrete time circular convolution can be expressed as the sum of two summations given by

(f ∗ g)[n] = ∑_{k=0}^{n} f[k] g[n − k] + ∑_{k=n+1}^{N−1} f[k] g[n − k + N]   (8.24)

for all signals f, g defined on Z[0, N − 1].

Meaningful examples of computing discrete time circular convolutions in the time domain would involve complicated algebraic manipulations dealing with the wrap-around behavior, which would ultimately be more confusing than helpful. Thus, none will be provided in this section. Of course, example computations in the time domain are easy to program and demonstrate. However, discrete time circular convolutions are more easily computed using frequency domain tools, as will be shown in the discrete time Fourier series section.
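For example, a minimal Python/NumPy sketch of the two-summation form (8.24) above, checked against the DFT-based computation, might look as follows (signal values are assumed example data):

    import numpy as np

    def circular_convolve(f, g):
        N = len(f)
        y = np.zeros(N)
        for n in range(N):
            for k in range(n + 1):                 # terms with no wrap-around
                y[n] += f[k] * g[n - k]
            for k in range(n + 1, N):              # wrapped terms use g[n - k + N]
                y[n] += f[k] * g[n - k + N]
        return y

    f = np.array([1.0, 2.0, 0.0, -1.0])
    g = np.array([0.5, 0.25, 0.0, 0.0])

    print(circular_convolve(f, g))
    print(np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g))))   # same result via the DFT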

8.3.2.2.1 Definition Motivation

The above operation definition has been chosen to be particularly useful in the study of linear time invariant systems. In order to see this, consider a linear time invariant system H with unit impulse response h. Given a finite or periodic system input signal x we would like to compute the system output signal H(x). First, we note that the input can be expressed as the circular convolution

x[n] = ∑_{k=0}^{N−1} x̂[k] δ̂[n − k]   (8.25)

by the sifting property of the unit impulse function. By linearity,

Hx[n] = ∑_{k=0}^{N−1} x̂[k] Hδ̂[n − k].   (8.26)

Since Hδ[n − k] is the shifted unit impulse response h[n − k], this gives the result

Hx[n] = ∑_{k=0}^{N−1} x̂[k] ĥ[n − k] = (x ∗ h)[n].   (8.27)

Hence, circular convolution has been defined such that the output of a linear time invariant system is given by the circular convolution of the system input with the system unit impulse response.

8.3.2.2.2 Graphical Intuition

It is often helpful to be able to visualize the computation of a circular convolution in terms of graphical processes. Consider the circular convolution of two finite length functions f, g given by

(f ∗ g)[n] = ∑_{k=0}^{N−1} f̂[k] ĝ[n − k] = ∑_{k=0}^{N−1} f̂[n − k] ĝ[k].   (8.28)

The first step in graphically understanding the operation of convolution is to plot each of the periodic extensions of the functions. Next, one of the functions must be selected, and its plot reflected across the k = 0 axis. For each n ∈ Z[0, N − 1], that same function must be shifted by n. The product of the two resulting plots is then constructed. Finally, the values of the resulting product on Z[0, N − 1] are summed.


8.3.3 Online Resources

The following website provides an interactive Java applet illustrating discrete-time convolution: Discrete Joy of Convolution (Johns Hopkins University)4

4http://www.jhu.edu/signals/discreteconv/index.html


8.3.4 Interactive Element

Figure 8.6: Interact (when online) with the Mathematica CDF demonstrating Discrete Linear Convolution. To download, right click and save file as .cdf


8.3.5 Convolution Summary

Convolution, one of the most important concepts in electrical engineering, can be used to determine the output signal of a linear time invariant system for a given input signal with knowledge of the system's unit impulse response. The operation of discrete time convolution is defined such that it performs this function for infinite length discrete time signals and systems. The operation of discrete time circular convolution is defined such that it performs this function for finite length and periodic discrete time signals. In each case, the output of the system is the convolution or circular convolution of the input signal with the unit impulse response.

8.4 Properties of Discrete Time Convolution5

8.4.1 Introduction

We have already shown the important role that discrete time convolution plays in signal processing. This section provides discussion and proof of some of the important properties of discrete time convolution. Analogous properties can be shown for discrete time circular convolution with trivial modification of the proofs provided except where explicitly noted otherwise.

8.4.2 Discrete Time Convolution Properties

8.4.2.1 Associativity

The operation of convolution is associative. That is, for all discrete time signals f1, f2, f3 the following relationship holds.

f1 ∗ (f2 ∗ f3) = (f1 ∗ f2) ∗ f3 (8.29)

In order to show this, note that

(f1 ∗ (f2 ∗ f3))[n] = ∑_{k1=−∞}^{∞} ∑_{k2=−∞}^{∞} f1[k1] f2[k2] f3[(n − k1) − k2]
                    = ∑_{k1=−∞}^{∞} ∑_{k2=−∞}^{∞} f1[k1] f2[(k1 + k2) − k1] f3[n − (k1 + k2)]
                    = ∑_{k3=−∞}^{∞} ∑_{k1=−∞}^{∞} f1[k1] f2[k3 − k1] f3[n − k3]
                    = ((f1 ∗ f2) ∗ f3)[n]   (8.30)

proving the relationship as desired through the substitution k3 = k1 + k2.

8.4.2.2 Commutativity

The operation of convolution is commutative. That is, for all discrete time signals f1, f2 the following relationship holds.

f1 ∗ f2 = f2 ∗ f1 (8.31)

In order to show this, note that

(f1 ∗ f2)[n] = ∑_{k1=−∞}^{∞} f1[k1] f2[n − k1]
             = ∑_{k2=−∞}^{∞} f1[n − k2] f2[k2]
             = (f2 ∗ f1)[n]   (8.32)

proving the relationship as desired through the substitution k2 = n− k1.

5This content is available online at <http://legacy.cnx.org/content/m47456/1.1/>.


8.4.2.3 Distributivity

The operation of convolution is distributive over the operation of addition. That is, for all discrete time signals f1, f2, f3 the following relationship holds.

f1 ∗ (f2 + f3) = f1 ∗ f2 + f1 ∗ f3 (8.33)

In order to show this, note that

(f1 ∗ (f2 + f3))[n] = ∑_{k=−∞}^{∞} f1[k] (f2[n − k] + f3[n − k])
                    = ∑_{k=−∞}^{∞} f1[k] f2[n − k] + ∑_{k=−∞}^{∞} f1[k] f3[n − k]
                    = (f1 ∗ f2 + f1 ∗ f3)[n]   (8.34)

proving the relationship as desired.

8.4.2.4 Multilinearity

The operation of convolution is linear in each of the two function variables. Additivity in each variable results from distributivity of convolution over addition. Homogeneity of order one in each variable results from the fact that for all discrete time signals f1, f2 and scalars a the following relationship holds.

a (f1 ∗ f2) = (af1) ∗ f2 = f1 ∗ (af2) (8.35)

In order to show this, note that

(a(f1 ∗ f2))[n] = a ∑_{k=−∞}^{∞} f1[k] f2[n − k]
                = ∑_{k=−∞}^{∞} (a f1[k]) f2[n − k]
                = ((a f1) ∗ f2)[n]
                = ∑_{k=−∞}^{∞} f1[k] (a f2[n − k])
                = (f1 ∗ (a f2))[n]   (8.36)

proving the relationship as desired.

8.4.2.5 Conjugation

The operation of convolution has the following property for all discrete time signals f1, f2.

\overline{f_1 ∗ f_2} = \overline{f_1} ∗ \overline{f_2}   (8.37)

In order to show this, note that

\overline{(f_1 ∗ f_2)}[n] = \overline{∑_{k=−∞}^{∞} f_1[k] f_2[n − k]}
                          = ∑_{k=−∞}^{∞} \overline{f_1[k]} \overline{f_2[n − k]}
                          = (\overline{f_1} ∗ \overline{f_2})[n]   (8.38)

proving the relationship as desired.


8.4.2.6 Time Shift

The operation of convolution has the following property for all discrete time signals f1, f2 where ST is the time shift operator with T ∈ Z.

ST (f1 ∗ f2) = (ST f1) ∗ f2 = f1 ∗ (ST f2) (8.39)

In order to show this, note that

ST(f1 ∗ f2)[n] = ∑_{k=−∞}^{∞} f2[k] f1[(n − T) − k]
               = ∑_{k=−∞}^{∞} f2[k] (ST f1)[n − k]
               = ((ST f1) ∗ f2)[n]
               = ∑_{k=−∞}^{∞} f1[k] f2[(n − T) − k]
               = ∑_{k=−∞}^{∞} f1[k] (ST f2)[n − k]
               = (f1 ∗ (ST f2))[n]   (8.40)

proving the relationship as desired.

8.4.2.7 Impulse Convolution

The operation of convolution has the following property for all discrete time signals f where δ is the unit sample function.

f ∗ δ = f (8.41)

In order to show this, note that

(f ∗ δ)[n] = ∑_{k=−∞}^{∞} f[k] δ[n − k]
           = f[n] ∑_{k=−∞}^{∞} δ[n − k]
           = f[n]   (8.42)

proving the relationship as desired.

8.4.2.8 Width

The operation of convolution has the following property for all discrete time signals f1, f2 where Duration(f) gives the duration of a signal f.

Duration (f1 ∗ f2) = Duration (f1) +Duration (f2)− 1 (8.43)

In order to show this informally, note that (f1 ∗ f2)[n] is nonzero for all n for which there is a k such that f1[k] f2[n − k] is nonzero. When viewing one function as reversed and sliding past the other, it is easy to see that such a k exists for all n on an interval of length Duration(f1) + Duration(f2) − 1. Note that this is not always true of circular convolution of finite length and periodic signals, as there is then a maximum possible duration within a period.
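A quick numerical illustration of the width property, using two assumed finite-duration example signals:

    import numpy as np

    f1 = np.array([1.0, -2.0, 3.0])            # duration 3
    f2 = np.array([4.0, 5.0])                  # duration 2

    y = np.convolve(f1, f2)
    print(len(y))                              # 4 = 3 + 2 - 1, as the property predicts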

8.4.3 Convolution Properties Summary

As can be seen, the operation of discrete time convolution has several important properties that have been listed and proven in this module. With slight modifications to the proofs, most of these also extend to discrete time circular convolution, and the cases in which exceptions occur have been noted above. These identities will be useful to keep in mind as the reader continues to study signals and systems.


8.5 BIBO Stability of Discrete Time Systems6

8.5.1 Introduction

BIBO stability stands for bounded input, bounded output stability. BIBO stability is the system property that any bounded input yields a bounded output. This is to say that as long as we input a signal with absolute value less than some constant, we are guaranteed to have an output with absolute value less than some other constant.

8.5.2 Discrete Time BIBO Stability

In order to understand this concept, we must first look more closely into exactly what we mean by bounded. A bounded signal is any signal whose absolute value never exceeds some finite value. Since this value is arbitrary, what we mean is that at no point can the signal tend to infinity, including in its end behavior.

Figure 8.7: A bounded signal is a signal for which there exists a constant A such that |f (t) | < A

8.5.2.1 Time Domain Conditions

Now that we have identified what it means for a signal to be bounded, we must turn our attention to the condition a system must possess in order to guarantee that if any bounded signal is passed through the system, a bounded signal will arise on the output. It turns out that a discrete-time LTI (Section 3.2) system with impulse response h[n] is BIBO stable if and only if its impulse response is absolutely summable. That is

Discrete-Time Condition for BIBO Stability

∑_{n=−∞}^{∞} |h[n]| < ∞   (8.44)
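As a rough numerical illustration of this condition, the partial sums of |h[n]| for h[n] = a^n u[n] converge when |a| < 1 and grow without bound when |a| > 1 (assumed example values below):

    import numpy as np

    n = np.arange(200)
    for a in (0.9, 1.1):
        partial_sum = np.cumsum(np.abs(a ** n))
        print(a, partial_sum[-1])   # ~10 for a = 0.9; already ~2e9 and still growing for a = 1.1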

6This content is available online at <http://legacy.cnx.org/content/m47362/1.3/>.


8.5.3 BIBO Stability Summary

Bounded input bounded output stability, also known as BIBO stability, is an important and generally desirable system characteristic. A system is BIBO stable if every bounded input signal results in a bounded output signal, where boundedness is the property that the absolute value of a signal does not exceed some finite constant. In terms of time domain features, a discrete time system is BIBO stable if and only if its impulse response is absolutely summable.


Chapter 9

Discrete Time Fourier Transform (DTFT)

9.1 Discrete Time Aperiodic Signals1

9.1.1 Introduction

This module describes the type of signals acted on by the Discrete Time Fourier Transform.

9.1.2 Periodic and Aperiodic Signals

When a function repeats itself exactly after some given period, or cycle, we say it's periodic. A periodic function can be mathematically defined as:

f[n] = f[n + mN],  m ∈ Z   (9.1)

where N > 0 represents the fundamental period of the signal, which is the smallest positive value of N for which the signal repeats. Because of this, you may also see a signal referred to as an N-periodic signal. Any function that satisfies this equation is said to be periodic with period N. Periodic signals in discrete time repeat themselves in each cycle. However, only integers are allowed as the time variable in discrete time. We denote signals in this case as f[n], where n takes values over the integers. Here's an example of a discrete-time periodic signal with period N:

1This content is available online at <http://legacy.cnx.org/content/m47369/1.3/>.


discrete-time periodic signal

Figure 9.1: Notice the function is the same after a time shift of N

We can think of periodic functions (with period N) two different ways:

1. as functions on all of Z

Figure 9.2: discrete time periodic function over all of Z where f[n0] = f[n0 + N]

2. or, we can cut out all of the redundancy, and think of them as functions on an interval [0, N] (or, more generally, [a, a + N]). If we know the signal is N-periodic then all the information of the signal is captured by the above interval.

Figure 9.3: Remove the redundancy of the periodic function so that f[n] is undefined outside [0, N].

An aperiodic DT function f[n], however, does not repeat for any N ∈ Z; i.e., there exists no N such that equation (9.1) holds. This broader class of signals can only be analyzed with the DTFT.

Suppose we have such an aperiodic function f[n]. We can construct a periodic extension of f[n] called fN0[n], where f[n] is repeated every N0 samples. If we take the limit as N0 → ∞, we obtain a precise model of an aperiodic signal for which all rules that govern periodic signals can be applied, including Fourier analysis (with an important modification). For more detail on this distinction, see the module on the Discrete Time Fourier Transform.


9.1.3 Aperiodic Signal Demonstration

Figure 9.4: Click on the above thumbnail image (when online) to download an interactive MathematicaPlayer testing Periodic versus Aperiodic Signals. To download, right-click and save as .cdf.

9.1.4 Conclusion

A discrete periodic signal is completely defined by its values in one period, such as the interval [0, N]. Any aperiodic signal can be defined as an infinite sum of periodic functions, a useful definition that makes it possible to use Fourier analysis on it by assuming all frequencies are present in the signal.

9.2 Eigenfunctions of Discrete Time LTI Systems2

9.2.1 Introduction

Prior to reading this module, the reader should already have some experience with linear algebra and should specifically be familiar with the eigenvectors and eigenvalues of linear operators. A linear time invariant system is a linear operator defined on a function space that commutes with every time shift operator on that function space. Thus, we can also consider the eigenvector functions, or eigenfunctions, of a system. It is particularly easy to calculate the output of a system when an eigenfunction is the input as the output is simply the eigenfunction scaled by the associated eigenvalue. As will be shown, discrete time complex exponentials serve as eigenfunctions of linear time invariant systems operating on discrete time signals.

2This content is available online at <http://legacy.cnx.org/content/m47459/1.1/>.


9.2.2 Eigenfunctions of LTI Systems

Consider a linear time invariant system H with impulse response h operating on some space of infinite length discrete time signals. Recall that the output H(x[n]) of the system for a given input x[n] is given by the discrete time convolution of the impulse response with the input

H(x[n]) = ∑_{k=−∞}^{∞} h[k] x[n − k].   (9.2)

Now consider the input x[n] = e^{sn} where s ∈ C. Computing the output for this input,

H(e^{sn}) = ∑_{k=−∞}^{∞} h[k] e^{s(n−k)}
          = ∑_{k=−∞}^{∞} h[k] e^{sn} e^{−sk}
          = e^{sn} ∑_{k=−∞}^{∞} h[k] e^{−sk}.   (9.3)

Thus,

H(e^{sn}) = λ_s e^{sn}   (9.4)

where

λ_s = ∑_{k=−∞}^{∞} h[k] e^{−sk}   (9.5)

is the eigenvalue corresponding to the eigenvector e^{sn}.

There are some additional points that should be mentioned. Note that there still may be additional eigenvalues of a linear time invariant system not described by e^{sn} for any s ∈ C. Furthermore, the above discussion has been somewhat formally loose, as e^{sn} may or may not belong to the space on which the system operates. However, for our purposes, complex exponentials will be accepted as eigenvectors of linear time invariant systems. A similar argument using discrete time circular convolution would also hold for spaces of finite length signals.
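The following minimal Python/NumPy sketch illustrates this numerically for s = jΩ, using an assumed finite impulse response: away from the start-up transient, the output is simply the input exponential scaled by λ_s.

    import numpy as np

    h = np.array([1.0, -0.5, 0.25])                # assumed example impulse response
    Omega = 0.4 * np.pi
    n = np.arange(50)
    x = np.exp(1j * Omega * n)                     # complex exponential input e^{jΩn}

    # lambda_s = sum_k h[k] e^{-jΩk}
    lam = np.sum(h * np.exp(-1j * Omega * np.arange(len(h))))
    y = np.convolve(x, h)[len(h) - 1:len(x)]       # steady-state part, where the sum is complete

    print(np.max(np.abs(y - lam * x[len(h) - 1:len(x)])))   # ~1e-15: output = lambda * input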

9.2.3 Eigenfunction of LTI Systems Summary

As has been shown, discrete time complex exponentials are eigenfunctions of linear time invariant systems operating on discrete time signals. Thus, it is particularly simple to calculate the output of a linear time invariant system for a complex exponential input as the result is a complex exponential output scaled by the associated eigenvalue. Consequently, representations of discrete time signals in terms of discrete time complex exponentials provide an advantage when studying signals. As will be explained later, this is what is accomplished by the discrete time Fourier transform and discrete time Fourier series, which apply to aperiodic and periodic signals respectively.

9.3 Discrete Time Fourier Transform (DTFT)3

9.3.1 Introduction

In this module, we will derive an expansion for arbitrary discrete-time functions, and in doing so, derive the Discrete Time Fourier Transform (DTFT).

Since complex exponentials (Section 2.6) are eigenfunctions of linear time-invariant (LTI) systems4, calculating the output of an LTI system H given e^{jΩn} as an input amounts to simple multiplication, where

3This content is available online at <http://legacy.cnx.org/content/m47370/1.2/>.
4"Eigenfunctions of LTI Systems" <http://legacy.cnx.org/content/m10500/latest/>


H(Ω) ∈ C is the eigenvalue corresponding to Ω. As shown in the figure, a simple exponential input would yield the output

y[n] = H(Ω) e^{jΩn}   (9.6)

Figure 9.5: Simple LTI system.

Using this and the fact that H is linear, calculating y[n] for combinations of complex exponentials is also straightforward.

c_1 e^{jΩ_1 n} + c_2 e^{jΩ_2 n} → c_1 H(Ω_1) e^{jΩ_1 n} + c_2 H(Ω_2) e^{jΩ_2 n}

∑_l c_l e^{jΩ_l n} → ∑_l c_l H(Ω_l) e^{jΩ_l n}

The action of H on an input such as those in the two equations above is easy to explain. H independently scales each exponential component e^{jΩ_l n} by a different complex number H(Ω_l) ∈ C. As such, if we can write a function y[n] as a combination of complex exponentials it allows us to easily calculate the output of a system.

Now, we will look to use the power of complex exponentials to see how we may represent arbitrary signals in terms of a set of simpler functions by superposition of a number of complex exponentials. Below we will present the Discrete-Time Fourier Transform (DTFT). Because the DTFT deals with nonperiodic signals, we must find a way to include all real frequencies in the general equations. For the DTFT we simply let N go to infinity. This also changes the summation over a finite set of harmonics into an integration over a continuum of frequencies.

9.3.2 DTFT synthesis

It can be demonstrated that an arbitrary N-periodic discrete time function f[n] can be written as a linear combination of harmonic complex sinusoids

f[n] = ∑_{k=0}^{N−1} c_k e^{jΩ_0 k n}   (9.7)

where Ω_0 = 2π/N is the fundamental frequency. For almost all f[n] of practical interest, there exist c_k to make (9.7) true. If f[n] is finite energy, then the equality in (9.7) holds in the sense of energy convergence; with discrete-time signals, there are no concerns for divergence as there are with continuous-time signals.

The c_k, called the Fourier coefficients, tell us "how much" of the sinusoid e^{jΩ_0 k n} is in f[n]. The formula shows f[n] as a sum of complex exponentials, each of which is easily processed by an LTI system (since it is an eigenfunction of every LTI system). Mathematically, it tells us that the set of complex exponentials {e^{jΩ_0 k n}, k ∈ Z} form a basis for the space of N-periodic discrete time functions.


9.3.2.1 Equations

Discrete-Time Fourier Transform

X(Ω) = ∑_{n=−∞}^{∞} x[n] e^{−jΩn}   (9.8)

Inverse DTFT

x[n] = (1/2π) ∫_{−π}^{π} X(Ω) e^{jΩn} dΩ   (9.9)

warning: It is not uncommon to see the above formula written slightly differently. One of the most common differences is the way that the exponential is written. The above equations use the radial frequency variable Ω in the exponential, where Ω = 2πF, but it is also common to include the more explicit expression, j2πFn, in the exponential. Sometimes DTFT notation is expressed as X(e^{jΩ}), to make it clear that it is not a CTFT (which is denoted as X(ω)). Click here5 for an overview of the notation used in Connexion's DSP modules.
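For a finite-length signal, the analysis sum is easy to evaluate directly on a grid of frequencies; a minimal Python/NumPy sketch (with an assumed example signal) is shown below.

    import numpy as np

    x = np.array([1.0, 2.0, 1.0, -1.0])            # nonzero only for n = 0, 1, 2, 3
    n = np.arange(len(x))
    Omega = np.linspace(-np.pi, np.pi, 512, endpoint=False)

    # X(Omega) = sum_n x[n] e^{-j Omega n}, evaluated for every grid frequency at once
    X = np.exp(-1j * np.outer(Omega, n)) @ x

    print(X.shape, X[len(Omega) // 2])             # X at Omega = 0 equals sum(x) = 3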

9.3.3 DTFT Denition demonstration

Figure 9.6: Click on the above thumbnail image (when online) to download an interactive MathematicaPlayer demonstrating Discrete Time Fourier Transform. To Download, right-click and save target as .cdf.

5"DSP notation" <http://legacy.cnx.org/content/m10161/latest/>


9.3.4 DTFT Summary

Because complex exponentials are eigenfunctions of LTI systems, it is often useful to represent signals using a set of complex exponentials as a basis. The discrete time Fourier transform synthesis formula expresses a discrete time, aperiodic function as a superposition (an integral over a continuum of frequencies) of complex exponentials.

X(Ω) = ∑_{n=−∞}^{∞} x[n] e^{−jΩn}   (9.10)

x[n] = (1/2π) ∫_{−π}^{π} X(Ω) e^{jΩn} dΩ   (9.11)

9.4 Properties of the DTFT6

9.4.1 Introduction

This module will look at some of the basic properties of the Discrete-Time Fourier Transform7 (DTFT).

note: We will be discussing these properties for aperiodic, discrete-time signals but understand that very similar properties hold for continuous-time signals and periodic signals as well.

9.4.2 Table of DTFT Properties

Discrete-Time Fourier Transform Properties

Linearity: a_1 s_1[n] + a_2 s_2[n] ↔ a_1 S_1(Ω) + a_2 S_2(Ω)
Conjugate Symmetry: s[n]* ↔ S(−Ω)*
Time Scaling (Expansion): s_c[n] = s[n/c] if n/c is an integer, 0 otherwise ↔ S(cΩ)
Time Reversal: s[−n] ↔ S(−Ω)
Time Delay: s[n − n_0] ↔ e^{−jΩn_0} S(Ω)
Multiplication by n: n s[n] ↔ j dS(Ω)/dΩ
Sum: ∑_{n=−∞}^{∞} s[n] ↔ S(0)
Value at Origin: s[0] ↔ (1/2π) ∫_{−π}^{π} S(Ω) dΩ
Time Convolution: s_1[n] ∗ s_2[n] ↔ S_1(Ω) S_2(Ω)
Frequency Convolution: s_1[n] s_2[n] ↔ (1/2π) ∫_{2π} S_1(u) S_2(Ω − u) du
Parseval's Theorem: ∑_{n=−∞}^{∞} |s[n]|² = (1/2π) ∫_{−π}^{π} |S(Ω)|² dΩ
Complex Modulation: e^{jΩ_0 n} s[n] ↔ S(Ω − Ω_0)
Amplitude Modulation: s[n] cos(Ω_0 n) ↔ [S(Ω − Ω_0) + S(Ω + Ω_0)] / 2
Amplitude Modulation: s[n] sin(Ω_0 n) ↔ [S(Ω − Ω_0) − S(Ω + Ω_0)] / (2j)

Table 9.1: Discrete-time Fourier transform properties and relations.

6This content is available online at <http://legacy.cnx.org/content/m47374/1.9/>.
7"Discrete Time Fourier Transform (DTFT)" <http://legacy.cnx.org/content/m10108/latest/>


9.4.3 Discussion of Fourier Transform Properties

9.4.3.1 Linearity

The combined addition and scalar multiplication properties in the table above demonstrate the basic property of linearity. What you should see is that if one takes the Fourier transform of a linear combination of signals then it will be the same as the linear combination of the Fourier transforms of each of the individual signals. This is crucial when using a table8 of transforms to find the transform of a more complicated signal.

Example 9.1
We will begin with the following signal:

z [n] = af1 [n] + bf2 [n] (9.12)

Now, after we take the Fourier transform, shown in the equation below, notice that the linear combination of the terms is unaffected by the transform.

Z (Ω) = aF1 (Ω) + bF2 (Ω) (9.13)

9.4.3.2 Symmetry

Symmetry is a property that can make life quite easy when solving problems involving Fourier transforms. Basically what this property says is that since a rectangular function in time is a sinc function in frequency, then a sinc function in time will be a rectangular function in frequency. This is a direct result of the similarity between the forward DTFT and the inverse DTFT. The only difference is the scaling by 2π and a frequency reversal.

9.4.3.3 Time Scaling

This property deals with the effect on the frequency-domain representation of a signal if the time variable is altered. The most important concept to understand for the time scaling property is that signals that are narrow in time will be broad in frequency and vice versa. The simplest example of this is a delta function, a unit pulse9 with a very small duration, in time that becomes an infinite-length constant function in frequency.

In contrast to the CTFT property, the DTFT time scaling property is available only for expansion in the time domain. This is because decimation discards samples of the original signal and therefore there is no unique relationship between the original signal and a decimated signal (that is, a decimated signal could correspond to many original signals) that would provide a single transformation between the original DTFT and the "decimated" DTFT. The intuition from the CTFT still holds for expansion: expanding the signal in time compacts the DTFT in frequency.

9.4.3.4 Time Shifting

Time shifting shows that a shift in time is equivalent to a linear phase shift in frequency. Since the frequency content depends only on the shape of a signal, which is unchanged in a time shift, then only the phase spectrum will be altered. This property is proven below:

8"Common Fourier Transforms" <http://legacy.cnx.org/content/m10099/latest/>9"Elemental Signals": Section Pulse <http://legacy.cnx.org/content/m0004/latest/#pulsedef>


Example 9.2
We will begin by letting z[n] = f[n − η]. Now let us take the Fourier transform with the previous expression substituted in for z[n].

Z(Ω) = ∑_{n=−∞}^{∞} f[n − η] e^{−jΩn}   (9.14)

Now let us make a simple change of variables, where σ = n − η. Through the calculations below, you can see that only the variable in the exponential is altered, thus only changing the phase in the frequency domain.

Z(Ω) = ∑_{σ=−∞}^{∞} f[σ] e^{−jΩ(σ+η)}
     = e^{−jΩη} ∑_{σ=−∞}^{∞} f[σ] e^{−jΩσ}
     = e^{−jΩη} F(Ω)   (9.15)
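A quick numerical check of this time-shift property for a finite-length signal (assumed example values), comparing the DTFT of f[n − η] with e^{−jΩη} F(Ω) on a frequency grid:

    import numpy as np

    def dtft(x, n, Omega):
        # Evaluate sum_n x[n] e^{-j Omega n} on the grid Omega, for samples located at indices n.
        return np.exp(-1j * np.outer(Omega, n)) @ x

    f = np.array([1.0, -2.0, 3.0, 0.5])
    n = np.arange(len(f))
    eta = 3
    Omega = np.linspace(-np.pi, np.pi, 256, endpoint=False)

    F = dtft(f, n, Omega)
    Z = dtft(f, n + eta, Omega)                    # the same samples, shifted to start at n = eta

    print(np.max(np.abs(Z - np.exp(-1j * Omega * eta) * F)))   # ~1e-15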

9.4.3.5 Convolution

Convolution is one of the big reasons for converting signals to the frequency domain, since convolution in time becomes multiplication in frequency. This property is also another excellent example of symmetry between time and frequency. It also shows that there may be little to gain by changing to the frequency domain when multiplication in time is involved.

We will introduce the convolution sum here, but if you have not seen it before or need to refresh your memory, then look at the discrete-time convolution10 module for a more in-depth explanation and derivation.

y[n] = f_1[n] ∗ f_2[n] = ∑_{η=−∞}^{∞} f_1[η] f_2[n − η]   (9.16)

9.4.3.6 Time Differentiation

Since LTI (Section 3.2) systems can be represented in terms of difference equations, it is apparent with this property that converting to the frequency domain may allow us to convert these complicated difference equations into simpler equations involving multiplication and addition. This is often looked at in more detail during the study of the Z Transform11.

9.4.3.7 Parseval's Relation

∑_{n=−∞}^{∞} |f[n]|² = (1/2π) ∫_{−π}^{π} |F(Ω)|² dΩ   (9.17)

Parseval's relation tells us that the energy of a signal is equal to the energy of its Fourier transform.
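A numerical check of Parseval's relation for a short assumed example signal, approximating the integral by a Riemann sum over a dense frequency grid:

    import numpy as np

    x = np.array([1.0, 2.0, -1.0, 0.5, 3.0])
    n = np.arange(len(x))
    Omega = np.linspace(-np.pi, np.pi, 4096, endpoint=False)
    dOmega = Omega[1] - Omega[0]

    X = np.exp(-1j * np.outer(Omega, n)) @ x
    energy_time = np.sum(np.abs(x) ** 2)
    energy_freq = np.sum(np.abs(X) ** 2) * dOmega / (2 * np.pi)

    print(energy_time, energy_freq)                # both equal 15.25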

10"Discrete Time Convolution" <http://legacy.cnx.org/content/m10087/latest/>11"The Laplace Transform" <http://legacy.cnx.org/content/m10110/latest/>


Figure 9.7

9.4.3.8 Modulation (Frequency Shift)

Modulation is absolutely imperative to communications applications. Being able to shift a signal to a different frequency allows us to take advantage of different parts of the electromagnetic spectrum, and is what allows us to transmit television, radio, and other applications through the same space without significant interference.

The proof of the frequency shift property is very similar to that of the time shift (Section 9.4.3.4: Time Shifting); however, here we would use the inverse Fourier transform in place of the Fourier transform. Since we went through the steps in the previous, time-shift proof, below we will just show the initial and final step of this proof:

z[n] = (1/2π) ∫_{−π}^{π} F(Ω − φ) e^{jΩn} dΩ   (9.18)

z[n] = f[n] e^{jφn}   (9.19)

9.4.4 Online Resources

The following online resource provides interactive explanations of the properties of the DTFT: Discrete-Time Fourier Transform Properties (Johns Hopkins University)12

9.4.5 Properties Demonstration

An interactive example demonstration of the properties is included below:

This media object is a LabVIEW VI. Please view or download it at<CTFTSPlab.llb>

Figure 9.8: Interactive Signal Processing Laboratory Virtual Instrument created using NI's Labview.

9.5 Common Discrete Time Fourier Transforms13

9.5.1 Common DTFTs

12http://www.jhu.edu/signals/dtftprops/indexDTFTprops.htm
13This content is available online at <http://legacy.cnx.org/content/m47373/1.3/>.


Time Domain x[n] ↔ Frequency Domain X(Ω)   (Notes)

1 ↔ 2π ∑_{m=−∞}^{∞} δ(Ω − 2πm)
e^{jΩ_0 n} ↔ 2π ∑_{m=−∞}^{∞} δ(Ω − Ω_0 − 2πm)   (real number Ω_0)
δ[n] ↔ 1
δ[n − M] ↔ e^{−jΩM}   (integer M)
∑_{m=−∞}^{∞} δ[n − Mm] ↔ ∑_{m=−∞}^{∞} e^{−jΩMm} = (1/M) ∑_{k=−∞}^{∞} δ(Ω/2π − k/M)   (integer M)
u[n] ↔ 1/(1 − e^{−jΩ}) + ∑_{k=−∞}^{∞} π δ(Ω + 2πk)
a^n u[n] ↔ 1/(1 − a e^{−jΩ})   (if |a| < 1)
−a^n u[−(n + 1)] ↔ 1/(1 − a e^{−jΩ})   (if |a| > 1)
a^{|n|} ↔ (1 − a²)/(1 − 2a cos(Ω) + a²)   (if |a| < 1)
n a^n u[n] ↔ a e^{jΩ}/(e^{jΩ} − a)²   (if |a| < 1)
sin(an) ↔ jπ ∑_{m=−∞}^{∞} [δ(Ω + a − 2πm) − δ(Ω − a − 2πm)]   (real number a)
cos(an) ↔ π ∑_{m=−∞}^{∞} [δ(Ω − a − 2πm) + δ(Ω + a − 2πm)]   (real number a)
(Ω_c/π) sinc²(Ω_c n/π) ↔ ∑_{m=−∞}^{∞} Λ((Ω − 2πm)/(2Ω_c))   (real number Ω_c, 0 < Ω_c ≤ π)
(Ω_c/π) sinc(Ω_c n/π) ↔ ∑_{m=−∞}^{∞} p((Ω − 2πm)/(2Ω_c))   (real number Ω_c, 0 < Ω_c ≤ π)
u[n] − u[n − M] ↔ [sin(ΩM/2)/sin(Ω/2)] e^{−jΩ(M−1)/2}   (integer M)
[Ω_c/(n + a)] (cos[πΩ_c(n + a)] − sinc[Ω_c(n + a)]) ↔ jΩ · p(Ω/(2πΩ_c)) e^{jaΩ}   (real number Ω_c, 0 < Ω_c ≤ 1)
π/2 for n = 0, [(−1)^n − 1]/(πn²) otherwise ↔ |Ω|
0 for n = 0, (−1)^n/n otherwise ↔ jΩ   (differentiator filter)
0 for n even, 2/(πn) for n odd ↔ −j sgn(Ω), |Ω| < π   (Hilbert transform)

Table 9.2

Notes: p(t) is the pulse function for arbitrary real-valued t,

p(t) = 1 if |t| ≤ 1/2, 0 if |t| > 1/2   (9.20)

and Λ(t) is the triangle function for arbitrary real-valued t,

Λ(t) = 1 + t if −1 ≤ t ≤ 0, 1 − t if 0 < t ≤ 1, 0 otherwise.   (9.21)


9.6 Discrete Time Convolution and the DTFT14

9.6.1 Introduction

This module discusses convolution of discrete signals in the time and frequency domains.

9.6.2 The Discrete-Time Convolution

9.6.2.1 Discrete Time Fourier Transform

The DTFT transforms an infinite-length discrete signal in the time domain into a finite-length (or, equivalently, 2π-periodic) continuous signal in the frequency domain.

DTFT

X(Ω) = ∑_{n=−∞}^{∞} x[n] e^{−jΩn}   (9.22)

Inverse DTFT

x[n] = (1/2π) ∫_{0}^{2π} X(Ω) e^{jΩn} dΩ   (9.23)

14This content is available online at <http://legacy.cnx.org/content/m47375/1.2/>.


9.6.2.2 Demonstration

Figure 9.9: Interact (when online) with a Mathematica CDF demonstrating the Discrete Convolution.To Download, right-click and save as .cdf.


9.6.2.3 Convolution Sum

As mentioned above, the convolution sum provides a concise, mathematical way to express the output of an LTI system based on an arbitrary discrete-time input signal and the system's impulse response. The convolution sum is expressed as

y[n] = ∑_{k=−∞}^{∞} x[k] h[n − k]   (9.24)

As with continuous-time, convolution is represented by the symbol *, and can be written as

y [n] = x [n] ∗ h [n] (9.25)

Convolution is commutative. For more information on the characteristics of convolution, read about the Properties of Convolution (Section 4.5).

9.6.2.4 Convolution Theorem

Let f and g be two functions with convolution f ∗ g. Let F be the Fourier transform operator. Then

F (f ∗ g) = F (f) · F (g) (9.26)

F (f · g) = F (f) ∗ F (g) (9.27)

By applying the inverse Fourier transform F−1, we can write:

f ∗ g = F−1 (F (f) · F (g)) (9.28)
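A minimal numerical illustration of the convolution theorem using the DFT/FFT: if both signals are zero-padded to the full output length, the pointwise product of their transforms inverts to the ordinary convolution (assumed example signals below).

    import numpy as np

    f = np.array([1.0, 2.0, 3.0])
    g = np.array([0.5, -1.0, 0.25, 2.0])
    N = len(f) + len(g) - 1                        # length of f * g

    lin = np.convolve(f, g)
    via_fft = np.real(np.fft.ifft(np.fft.fft(f, N) * np.fft.fft(g, N)))

    print(np.max(np.abs(lin - via_fft)))           # ~1e-15: multiplication in frequency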

9.6.3 Conclusion

The Fourier transform of a convolution is the pointwise product of Fourier transforms. In other words, convolution in one domain (e.g., the time domain) corresponds to point-wise multiplication in the other domain (e.g., the frequency domain).


Chapter 10

Sampling and Reconstruction

10.1 Signal Sampling1

10.1.1 Introduction

Digital computers can process discrete time signals using extremely flexible and powerful algorithms. However, most signals of interest are continuous time signals, which is how data almost always appears in nature. This module introduces the concepts behind converting continuous time signals into discrete time signals through a process called sampling.

10.1.2 Sampling

Sampling a continuous time signal produces a discrete time signal by selecting the values of the continuous time signal at evenly spaced points in time. Thus, sampling a continuous time signal x with sampling period Ts gives the discrete time signal xs defined by xs[n] = x(nTs). The sampling frequency is then given by fs = 1/Ts.

It should be intuitively clear that multiple continuous time signals sampled at the same rate can produce the same discrete time signal, since uncountably many continuous time functions could be constructed that connect the points on the graph of any discrete time function. Thus, sampling at a given rate does not result in an injective relationship. Hence, sampling is, in general, not invertible.

Example 10.1
For instance, consider the signals x, y defined by

x(t) = sin(t)/t   (10.1)

y(t) = sin(5t)/t   (10.2)

and their sampled versions x, y with sampling period Ts = π/2

x[n] = sin(nπ/2)/(nπ/2)   (10.3)

y[n] = sin(5nπ/2)/(nπ/2).   (10.4)

1This content is available online at <http://legacy.cnx.org/content/m47377/1.2/>.


Notice that since

sin(5nπ/2) = sin(2πn + nπ/2) = sin(nπ/2)   (10.5)

it follows that

y[n] = sin(nπ/2)/(nπ/2) = x[n].   (10.6)

Hence, x and y provide an example of distinct functions with the same sampled versions at a specific sampling rate.
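This can be checked directly; the following short sketch samples both continuous-time signals at t = nTs with Ts = π/2 and confirms that the samples coincide (n = 0 is skipped to avoid the indeterminate 0/0 form).

    import numpy as np

    Ts = np.pi / 2
    n = np.arange(1, 50)                     # skip n = 0 to avoid 0/0
    t = n * Ts

    x_samples = np.sin(t) / t
    y_samples = np.sin(5 * t) / t

    print(np.max(np.abs(x_samples - y_samples)))   # ~1e-14: the samples coincide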

It is also useful to consider the relationship between the frequency domain representations of the continuous time function and its sampled versions. Consider a signal x sampled with sampling period Ts to produce the discrete time signal x[n] = x(nTs). Let us define the continuous-time "impulse train"

p(t) = ∑_{n=−∞}^{∞} δ(t − nTs)   (10.7)

Using Fourier series, it can be shown that

p(t) = (1/Ts) ∑_{k=−∞}^{∞} e^{j(2π/Ts)kt}   (10.8)

Multiplying x(t) by the impulse train yields the "continuous-time sampled signal" xs(t):

xs(t) = xc(t) ∑_{n=−∞}^{∞} δ(t − nTs)
      = xc(t) (1/Ts) ∑_{k=−∞}^{∞} e^{j(2π/Ts)kt}   (10.9)

Taking the CTFT of xs(t),

Xs(f) = ∫_{−∞}^{∞} (xc(t) (1/Ts) ∑_{k=−∞}^{∞} e^{j(2π/Ts)kt}) e^{−j2πft} dt
      = (1/Ts) ∑_{k=−∞}^{∞} ∫_{−∞}^{∞} xc(t) e^{−j2π(f − k/Ts)t} dt
      = (1/Ts) ∑_{k=−∞}^{∞} Xc(f − k/Ts)   (10.10)

Notice that Xs(f) is a summation of scaled and shifted copies of Xc(f). We now investigate the relationship between the CTFT and the DTFT:

X(Ω) = ∑_{n=−∞}^{∞} x[n] e^{−jΩn}
     = ∑_{n=−∞}^{∞} xc(nTs) e^{−jΩn}
     = ∑_{n=−∞}^{∞} ∫_{−∞}^{∞} xc(t) δ(t − nTs) dt e^{−jΩn}
     = ∑_{n=−∞}^{∞} ∫_{−∞}^{∞} xc(t) δ(t − nTs) e^{−jΩn} dt   (10.11)

where xc(t) δ(t − nTs) e^{−jΩn} is nonzero if and only if n = t/Ts.

X(Ω) = ∑_{n=−∞}^{∞} ∫_{−∞}^{∞} xc(t) δ(t − nTs) e^{−jΩ t/Ts} dt
     = ∫_{−∞}^{∞} (xc(t) ∑_{n=−∞}^{∞} δ(t − nTs)) e^{−jΩ t/Ts} dt
     = ∫_{−∞}^{∞} xs(t) e^{−jΩ t/Ts} dt   (10.12)


where

xc(t) ∑_{n=−∞}^{∞} δ(t − nTs) = xs(t)

so that

X(Ω) = Xs(Ω/(2πTs))   (10.13)

Plugging (10.10) into (10.13) yields:

X(Ω) = (1/Ts) ∑_{k=−∞}^{∞} Xc((Ω − 2πk)/(2πTs))   (10.14)

Notice that the DTFT of the sampled signal x[n] is a summation of scaled, stretched, and shifted copies of the CTFT of the continuous signal xc(t).

10.1.3 Sampling Summary

Sampling a continuous time signal produces a discrete time signal by selecting the values of the continuous time signal at equally spaced points in time. However, we have shown that this relationship is not injective, as multiple continuous time signals can be sampled at the same rate to produce the same discrete time signal. This is related to a phenomenon called aliasing which will be discussed in later modules. Consequently, the sampling process is not, in general, invertible. Nevertheless, as will be shown in the module concerning reconstruction, the continuous time signal can be recovered from its sampled version if some additional assumptions hold.

10.2 Sampling Theorem2

10.2.1 Introduction

With the introduction of the concept of signal sampling, which produces a discrete time signal by selecting the values of the continuous time signal at evenly spaced points in time, it is now possible to discuss one of the most important results in signal processing, the Nyquist-Shannon sampling theorem. Often simply called the sampling theorem, this theorem concerns signals, known as bandlimited signals, with spectra that are zero for all frequencies with absolute value greater than or equal to a certain level. The theorem implies that there is a sufficiently high sampling rate at which a bandlimited signal can be recovered exactly from its samples, which is an important step in the processing of continuous time signals using the tools of discrete time signal processing.

10.2.2 Nyquist-Shannon Sampling Theorem

10.2.2.1 Statement of the Sampling Theorem

The Nyquist-Shannon sampling theorem concerns signals with continuous time Fourier transforms that are only nonzero on the interval (−B, B) for some constant B. Such a function is said to be bandlimited to (−B, B). Essentially, the sampling theorem has already been implicitly introduced in the previous module concerning sampling. Given a continuous time signal x with continuous time Fourier transform X, recall that the spectrum Xs of the sampled signal xs with sampling period Ts is given by

Xs(Ω) = (1/Ts) ∑_{k=−∞}^{∞} X((Ω − 2πk)/(2πTs)).   (10.15)

2This content is available online at <http://legacy.cnx.org/content/m47378/1.2/>.


It had previously been noted that if x is bandlimited to (−1/2Ts, 1/2Ts), the period of Xs centered about the origin has the same form as X scaled in frequency, since no aliasing occurs. This is illustrated in Figure 10.1. Hence, if any two (−1/2Ts, 1/2Ts) bandlimited continuous time signals sampled to the same signal, they would have the same continuous time Fourier transform and thus be identical. Thus, for each discrete time signal there is a unique (−1/2Ts, 1/2Ts) bandlimited continuous time signal that samples to the discrete time signal with sampling period Ts. Therefore, this (−1/2Ts, 1/2Ts) bandlimited signal can be found from the samples by inverting this bijection.

This is the essence of the sampling theorem. More formally, the sampling theorem states the following. If a signal x is bandlimited to (−B, B), it is completely determined by its samples with sampling rate fs = 2B, known as the Nyquist rate. That is to say, x can be reconstructed exactly from its samples xs with sampling rate fs = 2B. The angular frequency 4πB is often called the angular Nyquist rate. Equivalently, this can be stated in terms of the sampling period Ts = 1/fs. If a signal x is bandlimited to (−B, B), it is completely determined by its samples with sampling period Ts = 1/2B. That is to say, x can be reconstructed exactly from its samples xs with sampling period Ts.

Figure 10.1: The spectrum of a bandlimited signal is shown, as well as the spectra of its samples at rates above and below the Nyquist frequency. As is shown, no aliasing occurs above the Nyquist frequency, and the period of the samples' spectrum centered about the origin has the same form as the spectrum of the original signal scaled in frequency. Below the Nyquist frequency, aliasing can occur and causes the spectrum to take a different form than the original spectrum.

10.2.2.2 Proof of the Sampling Theorem

The above discussion has already shown the sampling theorem in an informal and intuitive way that could easily be refined into a formal proof. However, the original proof of the sampling theorem, which will be given here, provides the interesting observation that the samples of a signal with period Ts provide Fourier series coefficients for the original signal spectrum on (−1/2Ts, 1/2Ts).

Let x be a (−1/2Ts, 1/2Ts) bandlimited signal and xs be its samples with sampling period Ts. We can represent x in terms of its spectrum X using the inverse continuous time Fourier transform and the fact that x is bandlimited. The result is

x(t) = ∫_{−1/2Ts}^{1/2Ts} X(f) e^{j2πft} df   (10.16)

This representation of x may then be sampled with sampling period Ts to produce

xs[n] = x(nTs) = ∫_{−1/2Ts}^{1/2Ts} X(f) e^{j2πfnTs} df   (10.17)

Noticing that this indicates that xs[n] is the nth continuous time Fourier series coefficient for X(f) on the interval (−1/2Ts, 1/2Ts), it is shown that the samples determine the original spectrum X(f) and, by extension, the original signal itself.

10.2.2.3 Perfect Reconstruction

Another way to show the sampling theorem is to derive the reconstruction formula that gives the original signal x from its samples xs with sampling period Ts, provided x is bandlimited to (−1/2Ts, 1/2Ts). This is done in the module on perfect reconstruction. However, the result, known as the Whittaker-Shannon reconstruction formula, will be stated here. If the requisite conditions hold, then the perfect reconstruction is given by

x(t) = ∑_{n=−∞}^{∞} xs[n] sinc(t/Ts − n)   (10.18)

where the sinc function is defined as

sinc(t) = sin(πt)/(πt).   (10.19)

From this, it is clear that the set

{sinc(t/Ts − n) | n ∈ Z}   (10.20)

forms an orthogonal basis for the set of (−1/2Ts, 1/2Ts) bandlimited signals, where the coefficients of a (−1/2Ts, 1/2Ts) bandlimited signal in this basis are its samples with sampling period Ts.
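A minimal Python/NumPy sketch of the Whittaker-Shannon formula, reconstructing an assumed bandlimited example signal (two sinusoids well below 1/2Ts) from a finite block of its samples; the small residual error comes only from truncating the infinite sum.

    import numpy as np

    Ts = 0.1                                       # sampling period; band is (-5, 5) in ordinary frequency
    n = np.arange(-500, 501)                       # many samples, so truncation error is small
    x_true = lambda t: np.sin(2 * np.pi * 1.3 * t) + 0.5 * np.cos(2 * np.pi * 3.1 * t)
    xs = x_true(n * Ts)

    t = np.linspace(-1.0, 1.0, 200)                # reconstruction instants, away from the block edges
    # x(t) = sum_n xs[n] sinc(t/Ts - n); np.sinc already includes the factor of pi
    x_rec = np.array([np.sum(xs * np.sinc(ti / Ts - n)) for ti in t])

    print(np.max(np.abs(x_rec - x_true(t))))       # close to zero; residual is truncation error only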

10.2.3 Practical Implications

10.2.3.1 Discrete Time Processing of Continuous Time Signals

The Nyquist-Shannon sampling theorem and the Whittaker-Shannon reconstruction formula enable discrete time processing of continuous time signals. Because any linear time invariant filter performs a multiplication in the frequency domain, the result of applying a linear time invariant filter to a bandlimited signal is an output signal with the same bandlimit. Since sampling a bandlimited continuous time signal above the Nyquist rate produces a discrete time signal with a spectrum of the same form as the original spectrum, a discrete time filter can modify the samples' spectrum, and perfect reconstruction of the output then produces the same result as a continuous time filter. This allows the power and flexibility of digital computing to be leveraged in continuous time signal processing as well. This is more thoroughly described in the final module of this chapter.


10.2.3.2 Psychoacoustics

The properties of human physiology and psychology often inform design choices in technologies meant for interacting with people. For instance, digital devices dealing with sound use sampling rates related to the frequency range of human vocalizations and the frequency range of human auditory sensitivity. Because most of the sounds in human speech concentrate most of their signal energy between 5 Hz and 4 kHz, most telephone systems discard frequencies above 4 kHz and sample at a rate of 8 kHz. Discarding the frequencies greater than or equal to 4 kHz through use of an anti-aliasing filter is important to avoid aliasing, which would negatively impact the quality of the output sound as is described in a later module. Similarly, human hearing is sensitive to frequencies between 20 Hz and 20 kHz. Therefore, sampling rates for general audio waveforms placed on CDs were chosen to be greater than 40 kHz, and all frequency content greater than or equal to some level is discarded. The particular value that was chosen, 44.1 kHz, was selected for other reasons, but the sampling theorem and the range of human hearing provided a lower bound for the range of choices.

10.2.4 Sampling Theorem Summary

The Nyquist-Shannon sampling theorem states that a signal bandlimited to (−1/2Ts, 1/2Ts) can be reconstructed exactly from its samples with sampling period Ts. The Whittaker-Shannon interpolation formula, which will be further described in the section on perfect reconstruction, provides the reconstruction of the unique (−1/2Ts, 1/2Ts) bandlimited continuous time signal that samples to a given discrete time signal with sampling period Ts. This enables discrete time processing of continuous time signals, which has many powerful applications.

10.3 Signal Reconstruction3

10.3.1 Introduction

The sampling process produces a discrete time signal from a continuous time signal by examining the value of the continuous time signal at equally spaced points in time. Reconstruction, also known as interpolation, attempts to perform an opposite process that produces a continuous time signal coinciding with the points of the discrete time signal. Because the sampling process for general sets of signals is not invertible, there are numerous possible reconstructions from a given discrete time signal, each of which would sample to that signal at the appropriate sampling rate. This module will introduce some of these reconstruction schemes.

10.3.2 Reconstruction

10.3.2.1 Reconstruction Process

The process of reconstruction, also commonly known as interpolation, produces a continuous time signal that would sample to a given discrete time signal at a specific sampling rate. Reconstruction can be mathematically understood by first generating a continuous time impulse train

ximp(t) = ∑_{n=−∞}^{∞} xs[n] δ(t − nTs)   (10.21)

from the sampled signal xs with sampling period Ts and then applying a lowpass filter G that satisfies certain conditions to produce an output signal x. If G has impulse response g, then the result of the reconstruction

3This content is available online at <http://legacy.cnx.org/content/m47463/1.1/>.


process, illustrated in Figure 10.2, is given by the following computation, the final equation of which is used to perform reconstruction in practice.

x(t) = (ximp ∗ g)(t)
     = ∫_{−∞}^{∞} ximp(τ) g(t − τ) dτ
     = ∫_{−∞}^{∞} ∑_{n=−∞}^{∞} xs[n] δ(τ − nTs) g(t − τ) dτ
     = ∑_{n=−∞}^{∞} xs[n] ∫_{−∞}^{∞} δ(τ − nTs) g(t − τ) dτ
     = ∑_{n=−∞}^{∞} xs[n] g(t − nTs)   (10.22)

Figure 10.2: Block diagram of the reconstruction process for a given lowpass filter G.
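The final line of (10.22) translates directly into a short numerical routine. The sketch below, which assumes NumPy and uses illustrative names, evaluates x(t) = Σ_n xs[n] g(t − nTs) on a grid of time instants for any reconstruction kernel g supplied as a function; it is a minimal illustration of the computation rather than a production implementation.

import numpy as np

def reconstruct(xs, Ts, g, t):
    """Evaluate x(t) = sum_n xs[n] * g(t - n*Ts) on an array of times t."""
    n = np.arange(len(xs))
    # t[:, None] - n*Ts has shape (len(t), len(xs)); sum the weighted kernels over n.
    return np.sum(xs * g(t[:, None] - n * Ts), axis=1)

# Illustrative usage: samples of a 1 Hz sine reconstructed with a zero-order-hold kernel.
Ts = 0.1
xs = np.sin(2 * np.pi * 1.0 * Ts * np.arange(20))
t = np.linspace(0, 19 * Ts, 1000)
zoh = lambda tau: ((tau >= -Ts / 2) & (tau < Ts / 2)).astype(float)   # rectangular kernel of width Ts
x_hat = reconstruct(xs, Ts, zoh, t)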

10.3.2.2 Reconstruction Filters

In order to guarantee that the reconstructed signal x samples to the discrete time signal xs from which it was reconstructed using the sampling period Ts, the lowpass filter G must satisfy certain conditions. These can be expressed well in the time domain in terms of a condition on the impulse response g of the lowpass filter G. The sufficient condition to be a reconstruction filter that we will require is that, for all n ∈ Z,

g(nTs) = { 1,  n = 0;  0,  n ≠ 0 } = δ[n].    (10.23)

This means that g sampled with period Ts produces a discrete time unit impulse signal. Therefore, it follows that sampling x with sampling period Ts results in

x(nTs) = Σ_{m=−∞}^{∞} xs[m] g(nTs − mTs)
       = Σ_{m=−∞}^{∞} xs[m] g((n − m)Ts)
       = Σ_{m=−∞}^{∞} xs[m] δ[n − m]
       = xs[n],    (10.24)

which is the desired result for reconstruction filters.

10.3.2.3 Cardinal Basis Splines

Since there are many continuous time signals that sample to a given discrete time signal, additional constraints are required in order to identify a particular one of these. For instance, we might require our reconstruction to yield a spline of a certain degree, which is a signal described in piecewise parts by polynomials not exceeding that degree. Additionally, we might want to guarantee that the function and a certain number of its derivatives are continuous.


This may be accomplished by restricting the result to the span of sets of certain splines, called basis splines or B-splines. Specifically, if an nth degree spline with continuous derivatives up to at least order n − 1 is required, then the desired function for a given Ts belongs to the span of {Bn(t/Ts − k) | k ∈ Z} where

Bn = B0 ∗Bn−1 (10.25)

for n ≥ 1 and

B0(t) = { 1,  −1/2 < t < 1/2;  0,  otherwise }.    (10.26)

Figure 10.3: The basis splines Bn are shown in the above plots. Note that, except for the order 0 and order 1 functions, these functions do not satisfy the conditions to be reconstruction filters. Also notice that as the order increases, the functions approach the Gaussian function, which is exactly B∞.

However, the basis splines Bn do not satisfy the conditions to be a reconstruction filter for n ≥ 2, as is shown in Figure 10.3. Still, the Bn are useful in defining the cardinal basis splines, which do satisfy the conditions to be reconstruction filters. If we let bn be the samples of Bn on the integers, it turns out that bn has an inverse bn^{−1} with respect to the operation of convolution for each n. This is to say that bn^{−1} ∗ bn = δ.

The cardinal basis spline of order n for reconstruction with sampling period Ts is defined as

ηn(t) = Σ_{k=−∞}^{∞} bn^{−1}[k] Bn(t/Ts − k).    (10.27)


In order to confirm that this satisfies the condition to be a reconstruction filter, note that

ηn(mTs) = Σ_{k=−∞}^{∞} bn^{−1}[k] Bn(m − k) = (bn^{−1} ∗ bn)[m] = δ[m].    (10.28)

Thus, ηn is a valid reconstruction filter. Since ηn is an nth degree spline with continuous derivatives up to order n − 1, the result of the reconstruction will be an nth degree spline with continuous derivatives up to order n − 1.

Figure 10.4: The above plots show the cardinal basis spline functions η0, η1, η2, and η∞. Note that the functions satisfy the conditions to be reconstruction filters. Also, notice that as the order increases, the cardinal basis splines approximate the sinc function, which is exactly η∞. Additionally, these filters are acausal.

The lowpass filter with impulse response equal to the cardinal basis spline η0 of order 0 is one of the simplest examples of a reconstruction filter. It simply extends the value of the discrete time signal for half the sampling period to each side of every sample, producing a piecewise constant reconstruction. Thus, the result is discontinuous for all nonconstant discrete time signals.

Likewise, the lowpass filter with impulse response equal to the cardinal basis spline η1 of order 1 is another of the simplest examples of a reconstruction filter. It simply joins the adjacent samples with a straight line, producing a piecewise linear reconstruction. In this way, the reconstruction is continuous for all possible discrete time signals. However, unless the samples are collinear, the result has discontinuous first derivatives.

In general, similar statements can be made for lowpass filters with impulse responses equal to cardinal basis splines of any order. Using the nth order cardinal basis spline ηn, the result is a piecewise degree n polynomial. Furthermore, it has continuous derivatives up to at least order n − 1. However, unless all samples are points on a polynomial of degree at most n, the derivative of order n will be discontinuous.
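As a rough illustration of the piecewise constant and piecewise linear cases, the sketch below (assuming NumPy; the signal values are illustrative) evaluates the order-0 and order-1 cardinal spline reconstructions of a short discrete time signal. For n = 0 and n = 1 the cardinal basis splines coincide with B0 and B1 themselves, so no inverse sequence is needed.

import numpy as np

Ts = 1.0
xs = np.array([0.0, 1.0, 0.5, -0.7, 0.2])        # an arbitrary discrete time signal
t = np.linspace(-1, 5, 601)

def eta0(tau):   # order-0 cardinal spline: rectangular pulse of width Ts
    return ((tau >= -Ts / 2) & (tau < Ts / 2)).astype(float)

def eta1(tau):   # order-1 cardinal spline: triangle (hat) of support 2*Ts
    return np.maximum(1 - np.abs(tau) / Ts, 0.0)

n = np.arange(len(xs))
x_zoh = np.sum(xs * eta0(t[:, None] - n * Ts), axis=1)   # piecewise constant reconstruction
x_lin = np.sum(xs * eta1(t[:, None] - n * Ts), axis=1)   # piecewise linear reconstruction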


Reconstructions of the discrete time signal given in Figure 10.5 using several of these filters are shown in Figure 10.6. As the order of the cardinal basis spline increases, notice that the reconstruction approaches that of the infinite order cardinal spline η∞, the sinc function. As will be shown in the subsequent section on perfect reconstruction, the filters with impulse response equal to the sinc function play an especially important role in signal processing.

Figure 10.5: The above plot shows an example discrete time function. This discrete time function will be reconstructed with sampling period Ts using several cardinal basis splines in Figure 10.6.


Figure 10.6: The above plots show interpolations of the discrete time signal given in Figure 10.5 using lowpass filters with impulse responses given by the cardinal basis splines shown in Figure 10.4. Notice that the interpolations become increasingly smooth and approach the sinc interpolation as the order increases.

10.3.3 Reconstruction Summary

Reconstruction of a continuous time signal from a discrete time signal can be accomplished through several schemes. However, it is important to note that reconstruction is not the inverse of sampling and only produces one possible continuous time signal that samples to a given discrete time signal. As is covered in the subsequent module, perfect reconstruction of a bandlimited continuous time signal from its sampled version is possible using the Whittaker-Shannon reconstruction formula, which makes use of the ideal lowpass filter and its sinc function impulse response, if the sampling rate is sufficiently high.

10.4 Perfect Reconstruction4

10.4.1 Introduction

If certain additional assumptions about the original signal and sampling rate hold, then the original signal can be recovered exactly from its samples using a particularly important type of filter. More specifically, it will be shown that if a bandlimited signal is sampled at a rate greater than twice its bandlimit, the Whittaker-Shannon reconstruction formula perfectly reconstructs the original signal. This formula makes use of the ideal lowpass filter, which is related to the sinc function. This is extremely useful, as sampled versions of

4This content is available online at <http://legacy.cnx.org/content/m47379/1.3/>.


continuous time signals can be filtered using discrete time signal processing, often in a computer. The results may then be reconstructed to produce the same continuous time output as some desired continuous time system.

10.4.2 Perfect Reconstruction

In order to understand the conditions for perfect reconstruction and the filter it employs, consider the following. As a beginning, a sufficient condition under which perfect reconstruction is possible will be discussed. Subsequently, the filter and process used for perfect reconstruction will be detailed.

Recall that the sampled version xs of a continuous time signal x with sampling period Ts has a spectrum given by

Xs(Ω) = (1/Ts) Σ_{k=−∞}^{∞} X((Ω − 2πk)/(2πTs)).    (10.29)

As before, note that if x is bandlimited to (−1/2Ts, 1/2Ts), meaning that X is only nonzero on (−1/2Ts, 1/2Ts), then each period of Xs has the same form as X. Thus, we can identify the original spectrum X from the spectrum of the samples Xs and, by extension, the original signal x from its samples xs at rate Ts if x is bandlimited to (−1/2Ts, 1/2Ts).

If a signal x is bandlimited to (−B,B), then it is also bandlimited to (−fs/2, fs/2) provided that Ts < 1/2B. Thus, if we ensure that x is sampled to xs with sufficiently high sampling frequency fs = 1/Ts > 2B and have a way of identifying the unique (−fs/2, fs/2) bandlimited signal corresponding to a discrete time signal at sampling period Ts, then xs can be used to reconstruct x exactly. The frequency 2B is known as the Nyquist rate. Therefore, the condition that the sampling rate fs = 1/Ts > 2B be greater than the Nyquist rate is a sufficient condition for perfect reconstruction to be possible.

The correct filter must also be known in order to perform perfect reconstruction. The ideal lowpass filter defined by G(f) = Ts (u(f + fs/2) − u(f − fs/2)), which is shown in Figure 10.7, removes all signal content not in the frequency range (−fs/2, fs/2). Therefore, application of this filter to the impulse train Σ_{n=−∞}^{∞} xs[n] δ(t − nTs) results in an output bandlimited to (−fs/2, fs/2). We now only need to confirm that the impulse response g of the filter G satisfies our sufficient condition to be a reconstruction filter. The inverse Fourier transform of G(f) is

g(t) = sinc(t/Ts) = { 1,  t = 0;  sin(πt/Ts)/(πt/Ts),  t ≠ 0 },    (10.30)

which is shown in Figure 10.7. Hence,

g(nTs) = sinc(n) = { 1,  n = 0;  sin(πn)/(πn),  n ≠ 0 } = { 1,  n = 0;  0,  n ≠ 0 } = δ[n].    (10.31)

Therefore, the ideal lowpass filter G is a valid reconstruction filter. Since it is a valid reconstruction filter and always produces an output that is bandlimited to (−fs/2, fs/2), this filter always produces the unique (−fs/2, fs/2) bandlimited signal that samples to a given discrete time sequence at sampling period Ts when the impulse train Σ_{n=−∞}^{∞} xs[n] δ(t − nTs) is input.

Therefore, we can always reconstruct any (−fs/2, fs/2) bandlimited signal from its samples at sampling period Ts by the formula

x(t) = Σ_{n=−∞}^{∞} xs[n] sinc(t/Ts − n).    (10.32)

This perfect reconstruction formula is known as the Whittaker-Shannon interpolation formula and is sometimes also called the cardinal series. In fact, the sinc function is the infinite order cardinal basis spline η∞.
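A direct numerical transcription of (10.32) is sketched below, assuming NumPy (whose np.sinc matches the normalized sinc in (10.30)); the sample values and time grid are illustrative.

import numpy as np

def sinc_reconstruct(xs, Ts, t):
    """Whittaker-Shannon interpolation: x(t) = sum_n xs[n] * sinc(t/Ts - n)."""
    n = np.arange(len(xs))
    return np.sum(xs * np.sinc(t[:, None] / Ts - n), axis=1)

# Illustrative usage: a 3 Hz tone sampled at fs = 10 Hz, above its Nyquist rate of 6 Hz.
fs, Ts = 10.0, 0.1
n = np.arange(40)
xs = np.cos(2 * np.pi * 3.0 * n * Ts)
t = np.linspace(0, 39 * Ts, 2000)
x_hat = sinc_reconstruct(xs, Ts, t)   # closely matches cos(2*pi*3*t) away from the edges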


Consequently, the set {sinc(t/Ts − n) | n ∈ Z} forms a basis for the vector space of (−fs/2, fs/2) bandlimited signals, where the signal samples provide the corresponding coefficients. It is a simple exercise to show that this basis is, in fact, an orthogonal basis.

Figure 10.7: The above plots show the ideal lowpass filter and its inverse Fourier transform, the sinc function.

Figure 10.8: The plots show an example discrete time signal and its Whittaker-Shannon sinc reconstruction.

10.4.3 Perfect Reconstruction Summary

This module has shown that bandlimited continuous time signals can be reconstructed exactly from their samples provided that the sampling rate exceeds the Nyquist rate, which is twice the bandlimit. The Whittaker-Shannon reconstruction formula computes this perfect reconstruction using an ideal lowpass filter, with the resulting signal being a sum of shifted sinc functions that are scaled by the sample values. Sampling below the Nyquist rate can lead to aliasing, which makes the original signal irrecoverable, as is described in the subsequent module. The ability to perfectly reconstruct bandlimited signals has important practical implications for the processing of continuous time signals using the tools of discrete time signal processing.


10.5 Aliasing Phenomena5

10.5.1 Introduction

Through discussion of the Nyquist-Shannon sampling theorem and the Whittaker-Shannon reconstruction formula, it has already been shown that a (−B,B) bandlimited continuous time signal can be reconstructed from its samples at rate fs = 1/Ts via the sinc interpolation filter if fs > 2B. Now, this module will investigate a problematic phenomenon, called aliasing, that can occur if this sufficient condition for perfect reconstruction does not hold. When aliasing occurs, the spectrum of the samples has a different form than the original signal spectrum, so the samples cannot be used to reconstruct the original signal through Whittaker-Shannon interpolation.

10.5.2 Aliasing

Aliasing occurs when each period of the spectrum of the samples does not have the same form as the spectrum of the original signal. Given a continuous time signal x with continuous time Fourier transform X, recall that the spectrum Xs of the sampled signal xs with sampling period Ts is given by

Xs(Ω) = (1/Ts) Σ_{k=−∞}^{∞} X((Ω − 2πk)/(2πTs)).    (10.33)

As has already been mentioned several times, if x is bandlimited to (−fs/2, fs/2) then each period of Xs has the same form as X. However, if x is not bandlimited to (−fs/2, fs/2), then the shifted copies X((Ω − 2πk)/(2πTs)) can overlap and sum together. This is illustrated in Figure 10.9, in which sampling above the Nyquist frequency produces a samples spectrum of the same shape as the original signal, but sampling below the Nyquist frequency produces a samples spectrum with a very different shape. Whittaker-Shannon interpolation of each of these sequences produces different results. The low frequencies not affected by the overlap are the same, but there is noise content in the higher frequencies caused by aliasing. Higher frequency energy masquerades as lower frequency content, a highly undesirable effect.

5This content is available online at <http://legacy.cnx.org/content/m47380/1.3/>.


Figure 10.9: The spectrum of a bandlimited signal is shown, as well as the spectra of its samples at rates above and below the Nyquist frequency. As is shown, no aliasing occurs above the Nyquist frequency, and the period of the samples spectrum centered about the origin has the same form as the spectrum of the original signal scaled in frequency. Below the Nyquist frequency, aliasing can occur and causes the spectrum to take a different form than the original spectrum.

Unlike when sampling above the Nyquist frequency, sampling below the Nyquist frequency does not yield an injective (one-to-one) function from the (−B,B) bandlimited continuous time signals to the discrete time signals. Any signal x with spectrum X which overlaps and sums to Xs samples to xs. It should be intuitively clear that there are very many (−B,B) bandlimited signals that sample to a given discrete time signal below the Nyquist frequency, as is demonstrated in Figure 10.10. It is quite easy to construct uncountably infinite families of such signals.

Aliasing gets its name from the fact that multiple, in fact infinitely many, (−B,B) bandlimited signals sample to the same discrete sequence if fs < 2B. Thus, information about the original signal is lost in this noninvertible process, and these different signals effectively assume the same identity, an alias. Hence, under these conditions the Whittaker-Shannon interpolation formula will not produce a perfect reconstruction of the original signal but will instead give the unique (−fs/2, fs/2) bandlimited signal that samples to the discrete sequence.
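This identity sharing can be checked numerically. In the sketch below (assuming NumPy; the particular frequencies are illustrative), a 1 Hz tone and a 9 Hz tone sampled at fs = 8 Hz produce exactly the same samples, so the 9 Hz tone is an alias of the 1 Hz tone.

import numpy as np

fs = 8.0                       # sampling rate in Hz
n = np.arange(16)
f1, f2 = 1.0, 9.0              # f2 = f1 + fs, so the two tones alias to each other
x1 = np.cos(2 * np.pi * f1 * n / fs)
x2 = np.cos(2 * np.pi * f2 * n / fs)
print(np.allclose(x1, x2))     # True: the 9 Hz tone masquerades as a 1 Hz tone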


Figure 10.10: The spectrum of a discrete time signal xs, taken from Figure 10.9, is shown along with the spectra of three (−B,B) signals that sample to it at rate ωs < 2B. From the sampled signal alone, it is impossible to tell which, if any, of these was sampled at rate ωs to produce xs. In fact, there are infinitely many (−B,B) bandlimited signals that sample to xs at a sampling rate below the Nyquist rate.

10.5.3 Online Resources

The following online resources provide interactive explanations of sampling, reconstruction, and aliasing:
Sampling and Reconstruction with Sound Output6

Sampling and Reconstruction (Rice University)7

An Introduction to Sampling (University of Houston)8

6http://cwx.prenhall.com/bookbind/pubbooks/cyganski/chapter0/medialib/SAMPLING_RECONS_SOUND/sampling.html
7http://www.ece.rice.edu/dsp/courses/elec301/demos/applets/Reconst/index.html
8http://www2.egr.uh.edu/∼glover/applets/Sampling/Sampling.html


10.5.4 Aliasing Demonstration

Figure 10.11: Interact (when online) with a Mathematica CDF demonstrating sampling and aliasing for a sinusoid. To download, right-click and save target as .cdf.

10.5.5 Aliasing Summary

Aliasing, essentially the signal processing version of identity theft, occurs when each period of the spectrum of the samples does not have the same form as the spectrum of the original signal. As has been shown, there can be infinitely many (−B,B) bandlimited signals that sample to a given discrete time signal xs at a rate fs = 1/Ts < 2B below the Nyquist frequency. However, there is a unique (−B,B) bandlimited signal that samples to xs, which is given by the Whittaker-Shannon interpolation of xs, at rate fs ≥ 2B, as no aliasing occurs above the Nyquist frequency. Unfortunately, sufficiently high sampling rates cannot always be produced. Aliasing is detrimental to many signal processing applications, so in order to process continuous time signals using discrete time tools, it is often necessary to find ways to avoid it other than increasing the sampling rate. Thus, anti-aliasing filters are of practical importance.

10.6 Anti-Aliasing Filters9

10.6.1 Introduction

It has been shown that a (−B,B) bandlimited signal can be perfectly reconstructed from its samples at a rate fs = 1/Ts ≥ 2B. However, it is not always practically possible to produce sufficiently high sampling rates or to ensure that the input is bandlimited in real situations. Aliasing, which manifests itself as a difference in shape between the periods of the sampled signal's spectrum and the original spectrum, would occur without any further measures to correct this. Thus, it often becomes necessary to filter out signal energy at frequencies above fs/2 in order to avoid the detrimental effects of aliasing. This is the role of the anti-aliasing filter, a lowpass filter applied before sampling to ensure that the signal is (−fs/2, fs/2) bandlimited or at least nearly so.

10.6.2 Anti-Aliasing Filters

Aliasing can occur when a signal with energy at frequencies other than (−B,B) is sampled at rate fs < 2B. Thus, when sampling below the Nyquist frequency, it is desirable to remove as much signal energy outside the frequency range (−B,B) as possible while keeping as much signal energy in the frequency range (−B,B) as possible. This suggests that the ideal lowpass filter with cutoff frequency fs/2 would be the optimal anti-aliasing filter to apply before sampling. While this is true, the ideal lowpass filter can only be approximated in real situations.

9This content is available online at <http://legacy.cnx.org/content/m47392/1.1/>.


In order to demonstrate the importance of anti-aliasing filters, consider the calculation of the error energy between the original signal and its Whittaker-Shannon reconstruction from its samples, taken with and without the use of an anti-aliasing filter. Let x be the original signal and y = Gx be the anti-alias filtered signal, where G is the ideal lowpass filter with cutoff frequency fs/2. It is easy to show that the reconstructed spectrum using no anti-aliasing filter is given by

X̂(f) = { Ts Xs(2πTs f),  |f| < fs/2;  0,  otherwise } = { Σ_{k=−∞}^{∞} X(f − kfs),  |f| < fs/2;  0,  otherwise }.    (10.34)

Thus, the reconstruction error spectrum for this case is

(X − X̂)(f) = { −Σ_{k=1}^{∞} (X(f + kfs) + X(f − kfs)),  |f| < fs/2;  X(f),  otherwise }.    (10.35)

Similarly, the reconstructed spectrum using the ideal lowpass anti-aliasing filter is given by

Ŷ(f) = Y(f) = { X(f),  |f| < fs/2;  0,  otherwise }.    (10.36)

Thus, the reconstruction error spectrum for this case is

(X − Ŷ)(f) = { 0,  |f| < fs/2;  X(f),  otherwise }.    (10.37)

Hence, by Parseval's theorem, it follows that ||x − ŷ|| ≤ ||x − x̂||. Also note that the spectrum of Ŷ is identical to that of the original signal X in the frequency range (−fs/2, fs/2). This is graphically shown in Figure 10.12.
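A rough numerical check of this comparison is sketched below, assuming NumPy and SciPy and using a densely sampled grid as a stand-in for continuous time; the test signal, filter order, and rates are illustrative, and the ideal lowpass anti-aliasing filter is replaced by a Butterworth approximation.

import numpy as np
from scipy import signal

fc, fs = 1000.0, 50.0                         # dense "continuous time" rate and sampling rate
t = np.arange(0, 2.0, 1 / fc)
x = np.cos(2 * np.pi * 10 * t) + 0.8 * np.cos(2 * np.pi * 40 * t)   # 40 Hz exceeds fs/2 = 25 Hz

# Anti-aliasing filter applied on the dense grid before sampling (cutoff fs/2).
sos = signal.butter(8, (fs / 2) / (fc / 2), output='sos')
y = signal.sosfiltfilt(sos, x)

step = int(fc / fs)
xs, ys = x[::step], y[::step]                 # samples without and with anti-aliasing

def sinc_rec(samples, Ts, t):
    n = np.arange(len(samples))
    return np.sum(samples * np.sinc(t[:, None] / Ts - n), axis=1)

x_hat = sinc_rec(xs, 1 / fs, t)
y_hat = sinc_rec(ys, 1 / fs, t)
print(np.sum((x - x_hat) ** 2), np.sum((x - y_hat) ** 2))   # the anti-aliased version has lower error energy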


Figure 10.12: The figure above illustrates the use of an anti-aliasing filter to improve the process of sampling and reconstruction when using a sampling frequency below the Nyquist frequency. Notice that when using an ideal lowpass anti-aliasing filter, the reconstructed signal spectrum has the same shape as the original signal spectrum for all frequencies below half the sampling rate. This results in a lower error energy when using the anti-aliasing filter, as can be seen by comparing the error spectra shown.

10.6.3 Anti-Aliasing Filters Summary

As can be seen, anti-aliasing filters ensure that the signal is (−fs/2, fs/2) bandlimited, or at least nearly so. The optimal anti-aliasing filter would be the ideal lowpass filter with cutoff frequency at fs/2, which would ensure that the original signal spectrum and the reconstructed signal spectrum are equal on the interval (−fs/2, fs/2). However, the ideal lowpass filter is not possible to implement in practice, and approximations must be accepted instead. Anti-aliasing filters are an important component of systems that implement discrete time processing of continuous time signals, as will be shown in the subsequent module.

10.7 Changing Sampling Rates in Discrete Time10

Up to this point, we have discussed the connection between continuous-time and discrete-time signals that is captured by the concepts of sampling and reconstruction. In particular cases, we might be interested in observing a signal under a variety of sampling rates. For example, the amount of memory or communication bandwidth available for transmission or storage of a discrete-time signal might fluctuate in time, which may require increasing or decreasing the sampling rate (or sampling period) accordingly.

10This content is available online at <http://legacy.cnx.org/content/m48038/1.1/>.


Figure 10.13: Changing the sampling frequency by reconstructing and sampling always works, but sometimes it may be possible to do so working only in the discrete-time domain.

Naively, if we have sampled the signal sufficiently often to be able to recover it (according to the Nyquist criterion), then we can always return from the discrete-time signal to a continuous-time signal using reconstruction and then sample the signal at the new desired sampling rate. However, there are specific cases where it is possible to modify the sampling rate of the signal without having to switch back to a continuous-time representation. In other words, certain changes of sampling rate can be performed directly on the discrete-time signal. We discuss three specific cases: downsampling, upsampling, and rational scaling.

10.7.1 Downsampling

Consider the case where we start with a sampling frequency fs and are asked to reduce the sampling frequency by an integer factor to a new value f's = fs/k. When this change is translated to the sampling period T (as fs = 1/T), it is easy to see that the new sampling period T' = kT will be k times larger than its original value. Therefore, the change in sampling frequency can be accounted for by taking the existing discrete signal x[n] (sampled at frequency fs) and decimating it by a factor of k to obtain the new signal x'[n] = x[kn], re-sampled with sampling frequency f's.

We know that for both the old and new sampling frequencies the discrete-time Fourier transform of the sampled signal will correspond to a periodized, frequency-scaled version of the continuous-time signal's Fourier transform XCT(f), where the respective sampling frequencies fs and f's each get mapped to Ω = 2π. We now compare how these two discrete-time transforms match to one another:

X(Ω) = Σ_{n=−∞}^{∞} x[n] e^{−jΩn} = XCT(Ωfs/(2π)),
X'(Ω) = Σ_{n=−∞}^{∞} x'[n] e^{−jΩn} = Σ_{n=−∞}^{∞} x[kn] e^{−jΩn} = XCT(Ωfs/(2πk)).    (10.38)

By connecting the two equations through the right-most terms, it is easy to see that X'(Ω) = X(Ω/k), i.e., that the downsampling performed expands the DTFT of the original signal x[n]. Note, however, that since we are still working with a discrete-time signal x'[n], the new transform must remain 2π-periodic, and so this expansion occurs for each copy of the spectrum around its center, but the copies stay stationary at multiples of 2π.


Figure 10.14: The downsampling operation in time, continuous frequency, and discrete frequency.

Note also that this result effectively provides us with a new property for the DTFT: decimation in the time domain corresponds to a qualified expansion in the frequency domain, where the expansion is around the center of each copy of the CT spectrum (Ω = 0, ±2π, ±4π, ...).

Finally, notice that in downsampling there is the risk that aliasing may occur when the new frequency f's does not satisfy the Nyquist criterion (f's ≥ 2f0, where f0 is the bandwidth of the CT signal). Notably, this is the first time that we observe the possibility of aliasing directly in the discrete-time domain. Since aliasing may occur, it is good engineering practice to apply a (discrete-time) anti-aliasing filter to the signal before decimation so that aliasing is prevented. Such a filter will ideally be a perfect low-pass filter with cutoff frequency Ωc = π/k, so that its passband (−π/k, π/k) covers exactly the portion of the spectrum that survives decimation. The combination of an anti-aliasing filter and a decimator is known in the community as a downsampler, as shown below.

Figure 10.15: A downsampler consists of an anti-aliasing filter and a decimator.
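A minimal discrete-time downsampler along these lines is sketched below, assuming NumPy and SciPy; the FIR design (a windowed-sinc via firwin with cutoff π/k) and the zero-phase filtering are illustrative choices rather than the only possibilities.

import numpy as np
from scipy import signal

def downsample(x, k):
    """Anti-aliasing lowpass (cutoff pi/k) followed by decimation x'[n] = y[k*n]."""
    h = signal.firwin(numtaps=101, cutoff=1.0 / k)   # cutoff normalized so 1.0 = Nyquist
    y = signal.filtfilt(h, [1.0], x)                 # zero-phase filtering for simplicity
    return y[::k]

x = np.cos(2 * np.pi * 0.05 * np.arange(400))        # a slow discrete-time cosine
x_down = downsample(x, 3)                            # new rate fs' = fs / 3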

10.7.2 Upsampling

Now, consider the case where we start with a sampling frequency fs and are asked to increase the sampling frequency by an integer factor to a new value f's = k · fs. When this change is translated to the sampling period T (as fs = 1/T), it is easy to see that the new sampling period T' = T/k will be a fraction (1/k) of the original sampling period. Therefore, the change in sampling frequency requires the acquisition of new samples in addition to those already available under the old sampling frequency. For this reason, this process is commonly known as upsampling.

While at first sight this may imply a demand to go back to the continuous-time signal, we must recall that samples that are obtained with a sampling frequency greater than the Nyquist frequency contain all information needed to recover the continuous-time signal, and so it should be possible to infer the new samples directly from existing ones (as long as no aliasing has occurred). For this purpose, we will retrieve the concept of an expanded discrete-time signal:

xk[n] = { x[n/k],  if n/k ∈ Z;  0,  otherwise }.    (10.39)

Notice that this signal xk[n] should match the upsampled signal x'[n] for indices that are multiples of k, and our goal is to fill in the missing samples in xk[n] currently having value zero. To see how this can be done, we appeal to the DTFT of the expanded signal: recall that the time expansion property of the DTFT gives us that Xk(Ω) = X(kΩ), which in practice compresses the DTFT in frequency by a factor of k and makes it 2π/k-periodic. In contrast, the DTFT of the upsampled signal would remain 2π-periodic, while simultaneously compressing each copy of the signal's spectrum by a factor of k.

Figure 10.16: The upsampling operation in time, continuous frequency, and discrete frequency.

This comparison illuminates a method to retrieve the upsampled signal x'[n] from the expanded signal xk[n]: apply a low-pass filter to the expanded signal xk[n] so that only one of the k copies that appear over each 2π-length region of the DTFT is preserved. Such a filter will need a cutoff frequency of Ωc = π/k, which keeps the single copy centered at Ω = 0 (its passband (−π/k, π/k) spans one of the k regions of width 2π/k). The combination of an expander and a low-pass filter is known as an upsampler, as shown below.

Figure 10.17: An upsampler consists of an expander and a low-pass filter.
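A matching upsampler sketch is shown below, again assuming NumPy and SciPy; the expander is implemented by zero-stuffing, and a passband gain of k compensates for the factor of k lost in the expansion. The filter design choices are illustrative.

import numpy as np
from scipy import signal

def upsample(x, k):
    """Zero-stuffing expander followed by a lowpass filter with cutoff pi/k and gain k."""
    xk = np.zeros(len(x) * k)
    xk[::k] = x                                      # expanded signal x_k[n]
    h = signal.firwin(numtaps=101, cutoff=1.0 / k)   # cutoff normalized so 1.0 = Nyquist
    return k * signal.filtfilt(h, [1.0], xk)

x = np.cos(2 * np.pi * 0.05 * np.arange(200))
x_up = upsample(x, 4)                                # new rate fs' = 4 * fs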


10.7.3 Rational Sampling Frequency Scaling

A third case where changes in the sampling frequency can be resolved in the discrete-time domain is when the ratio between the new and old sampling frequencies is a rational number, that is, f's = (a/b) fs. Intuitively, one can see that such a change can be obtained by combining an upsampling by a factor of a with a downsampling by a factor of b. However, the order of these two operations is crucial.

Essentially, if downsampling is applied before upsampling, there is a chance that the downsampling anti-aliasing filter will remove a portion of the signal's spectrum that would alias but would have been shrunk into the allowable region during the upsampling, and so the potential for unnecessary distortion is introduced. In contrast, if upsampling is performed before downsampling, the cascade of the two systems will yield a sequence of two low-pass filters, and implementing only the narrower filter would provide the same output as the original cascade. This is illustrated in an example below.

Consider the case where the original sampling frequency is fs = 15 kHz, the signal's bandwidth is f0 = 5 kHz (so that the DTFT bandwidth is Ω = 2π (5 kHz/15 kHz) = 2π/3), and we are interested in resampling to f's = 10 kHz (i.e., the minimum allowed sampling frequency under the Nyquist criterion). In this case, applying downsampling before upsampling results in the following:


Figure 10.18: Changing the sampling frequency by downsampling followed by upsampling. The downsampling anti-aliasing filter cuts off part of the original spectrum.

and so only the portion of the signal's spectrum below 2.5 kHz (π/3 in the original spectrum) survives the process. In contrast, applying upsampling before downsampling allows the entire spectrum to go through, effectively meeting the bound given by the Nyquist criterion.


Figure 10.19: Changing the sampling frequency by upsampling followed by downsampling. The entire spectrum is preserved through the process, and one of the lowpass filters is redundant.
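For the rational case, the upsample-then-downsample cascade is what scipy.signal.resample_poly implements with a single polyphase lowpass filter. The sketch below, assuming NumPy and SciPy and using an illustrative tone together with the 15 kHz to 10 kHz rates from the example above, shows the usage.

import numpy as np
from scipy import signal

fs_old, a, b = 15_000, 2, 3                          # fs' = (a/b) * fs = 10 kHz
n = np.arange(1500)
x = np.cos(2 * np.pi * 4000 * n / fs_old)            # 4 kHz tone, within the new 5 kHz limit

x_resampled = signal.resample_poly(x, up=a, down=b)  # upsample by a, filter once, downsample by b
fs_new = fs_old * a // b                              # 10 kHz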

10.8 Discrete Time Processing of Continuous Time Signals11

10.8.1 Introduction

Digital computers can process discrete time signals using extremely flexible and powerful algorithms. However, most signals of interest are continuous time signals, which is how data almost always appears in nature. Now that the theory supporting methods for generating a discrete time signal from a continuous time signal through sampling, and then perfectly reconstructing the original signal from its samples without error, has been discussed, it will be shown how this can be applied to implement continuous time, linear time invariant systems using discrete time, linear time invariant systems. This is of key importance to many modern technologies as it allows the power of digital computing to be leveraged for processing of analog signals.

11This content is available online at <http://legacy.cnx.org/content/m47398/1.3/>.


10.8.2 Discrete Time Processing of Continuous Time Signals

10.8.2.1 Process Structure

With the aim of processing continuous time signals using a discrete time system, we will now examine one of the most common structures of digital signal processing technologies. As an overview of the approach taken, the original continuous time signal x is sampled to a discrete time signal xs in such a way that each period of the samples spectrum Xs is as close as possible in shape to the spectrum of X. Then a discrete time, linear time invariant filter H2 is applied, which modifies the shape of the samples spectrum Xs but cannot increase the bandlimit of Xs, to produce another signal ys. This is reconstructed with a suitable reconstruction filter to produce a continuous time output signal y, thus effectively implementing some continuous time system H1. This process is illustrated in Figure 10.20, and the spectra are shown for a specific case in Figure 10.21.

Figure 10.20: A block diagram for processing of continuous time signals using discrete time systems is shown.

Further discussion about each of these steps is necessary, and we will begin by discussing the analog to digital converter, often denoted by ADC or A/D. It is clear that in order to process a continuous time signal using discrete time techniques, we must sample the signal as an initial step. This is essentially the purpose of the ADC, although there are practical issues that will be discussed later. An ADC takes a continuous time analog signal as input and produces a discrete time digital signal as output, with the ideal infinite precision case corresponding to sampling. As stated by the Nyquist-Shannon sampling theorem, in order to retain all information about the original signal, we usually wish to sample above the Nyquist frequency, fs ≥ 2B, where the original signal is bandlimited to (−B,B). When it is not possible to guarantee this condition, an anti-aliasing filter should be used.

The discrete time filter is where the intentional modifications to the signal information occur. This is commonly done in digital computer software after the signal has been sampled by a hardware ADC and before it is used by a hardware DAC to construct the output. This allows the above setup to be quite flexible in the filter that it implements. If sampling occurs above the Nyquist frequency, each period of the samples spectrum has the same shape as the original signal's spectrum. Any modifications that the discrete filter makes to this shape can be passed on to a continuous time signal assuming perfect reconstruction. Consequently, the process described will implement a continuous time, linear time invariant filter. This will be explained in more mathematical detail in the subsequent section. As usual, there are, of course, practical limitations that will be discussed later.

Finally, we will discuss the digital to analog converter, often denoted by DAC or D/A. Since continuous time filters have continuous time inputs and continuous time outputs, we must construct a continuous time signal from our filtered discrete time signal. Assuming that we have sampled a bandlimited signal at a sufficiently high rate, in the ideal case this would be done using perfect reconstruction through the Whittaker-Shannon interpolation formula. However, there are, once again, practical issues that prevent this from happening that will be discussed later.


Figure 10.21: Spectra are shown in black for each step in implementing a continuous time filter using a discrete time filter for a specific signal. The filter frequency responses are shown in blue, and both are meant to have maximum value 1 in spite of the vertical scale that is meant only for the signal spectra. Ideal ADCs and DACs are assumed.

10.8.2.2 Discrete Time Filter

With some initial discussion of the process illustrated in Figure 10.20 complete, the relationship between the continuous time, linear time invariant filter H1 and the discrete time, linear time invariant filter H2 can be explored. We will assume the use of ideal, infinite precision ADCs and DACs that perform sampling and perfect reconstruction, respectively, using a sampling rate fs = 1/Ts ≥ 2B where the input signal x is bandlimited to (−B,B). Note that these arguments fail if this condition is not met and aliasing occurs. In that case, preapplication of an anti-aliasing filter is necessary for these arguments to hold.

Recall that we have already calculated the spectrum Xs of the samples xs given an input x with spectrum X as

Xs(Ω) = (1/Ts) Σ_{k=−∞}^{∞} X((Ω − 2πk)/(2πTs)).    (10.40)

Likewise, the spectrum Ys of the samples ys given an output y with spectrum Y is

Ys(Ω) = (1/Ts) Σ_{k=−∞}^{∞} Y((Ω − 2πk)/(2πTs)).    (10.41)

From the knowledge that ys = (H1x)s = H2 (xs), it follows that

Σ_{k=−∞}^{∞} H1((Ω − 2πk)/(2πTs)) X((Ω − 2πk)/(2πTs)) = H2(Ω) Σ_{k=−∞}^{∞} X((Ω − 2πk)/(2πTs)).    (10.42)

Because X is bandlimited to (−1/2Ts, 1/2Ts), we may conclude that

H2(Ω) = Σ_{k=−∞}^{∞} H1((Ω − 2πk)/(2πTs)) (u(Ω − (2k − 1)π) − u(Ω − (2k + 1)π)).    (10.43)

More simply stated, H2 is 2π-periodic and H2(Ω) = H1(Ω/(2πTs)) for Ω ∈ [−π, π). Given a specific continuous time, linear time invariant filter H1, the above equation solves the system design problem provided we know how to implement H2. The filter H2 must be chosen such that it has a frequency response where each period has the same shape as the frequency response of H1 on (−1/2Ts, 1/2Ts). This is illustrated in the frequency responses shown in Figure 10.21.

We might also want to consider the system analysis problem, in which a specific discrete time, linear time invariant filter H2 is given, and we wish to describe the filter H1. There are many such filters, but we can describe their frequency responses on (−fs/2, fs/2) using the above equation. Isolating one period of H2(Ω) yields the conclusion that H1(f) = H2(2πf/fs) for f ∈ (−fs/2, fs/2). Because x was assumed to be bandlimited to (−fs/2, fs/2), the value of the frequency response elsewhere is irrelevant.
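The relation H2(Ω) = H1(Ω/(2πTs)) can be used directly for design. The sketch below, assuming NumPy and SciPy, evaluates the matching discrete-time response for an illustrative first-order continuous-time lowpass H1 (the 1 kHz corner and 8 kHz sampling rate are arbitrary choices, not from the text) and then fits a simple FIR approximation to its magnitude; this is only one of many ways to realize H2.

import numpy as np
from scipy import signal

fs = 8000.0
Ts = 1 / fs

# Illustrative continuous-time target: first-order lowpass with a 1 kHz corner.
H1 = lambda f: 1.0 / (1.0 + 1j * f / 1000.0)

# Matching discrete-time response on one period: H2(Omega) = H1(Omega / (2*pi*Ts)).
Omega = np.linspace(-np.pi, np.pi, 512, endpoint=False)
H2 = H1(Omega / (2 * np.pi * Ts))

# One simple (approximate) realization: an FIR fit to the magnitude of H1 on (0, fs/2).
freqs = np.linspace(0, 1.0, 129)                 # normalized so 1.0 = fs/2
gains = np.abs(H1(freqs * fs / 2))
h2 = signal.firwin2(numtaps=101, freq=freqs, gain=gains)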

10.8.3 Practical Considerations

As mentioned before, there are several practical considerations that need to be addressed at each stage of the process shown in Figure 10.20. Some of these will be briefly addressed here, and a more complete model of how discrete time processing of continuous time signals is implemented appears in Figure 10.22.

Figure 10.22: A more complete model of how discrete time processing of continuous time signals is implemented in practice. Notice the addition of anti-aliasing and anti-imaging filters to promote input and output bandlimitedness. The ADC is shown to perform sampling with quantization. The digital filter is further specified to be causal. The DAC is shown to perform imperfect reconstruction, a zero order hold in this case.

10.8.3.1 Anti-Aliasing Filter

In reality, we cannot typically guarantee that the input signal will have a specific bandlimit, and sufficiently high sampling rates cannot necessarily be produced. Since it is imperative that the higher frequency components not be allowed to masquerade as lower frequency components through aliasing, anti-aliasing filters with cutoff frequency less than or equal to fs/2 must be used before the signal is fed into the ADC. The block diagram in Figure 10.22 reflects this addition.

As described in the previous section, an ideal lowpass filter removing all energy at frequencies above fs/2 would be optimal. Of course, this is not achievable, so approximations of the ideal lowpass filter with low gain above fs/2 must be accepted. This means that some aliasing is inevitable, but it can be reduced to a mostly insignificant level.

10.8.3.2 Signal Quantization

In our preceding discussion of discrete time processing of continuous time signals, we had assumed an ideal case in which the ADC performs sampling exactly. However, while an ADC does convert a continuous time signal to a discrete time signal, it also must convert analog values to digital values for use in a digital logic device, a phenomenon called quantization. The ADC subsystem of the block diagram in Figure 10.22 reflects this addition.


The data obtained by the ADC must be stored in finitely many bits inside a digital logic device. Thus, there are only finitely many values that a digital sample can take, specifically 2^N where N is the number of bits, while there are uncountably many values an analog sample can take. Hence, something must be lost in the quantization process. The result is that quantization limits both the range and precision of the output of the ADC. Both are finite, and improving one at a constant number of bits requires sacrificing quality in the other.
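A minimal uniform quantizer, assuming NumPy and using an illustrative mid-rise level layout, makes the finite range and finite precision concrete:

import numpy as np

def quantize(x, n_bits, x_max=1.0):
    """Uniform mid-rise quantizer: clips to [-x_max, x_max) and maps each value
    to one of 2**n_bits levels, illustrating the finite range and precision of an ADC."""
    levels = 2 ** n_bits
    step = 2 * x_max / levels
    xq = np.clip(x, -x_max, x_max - step)           # finite range
    return step * np.floor(xq / step) + step / 2    # finite precision

x = np.linspace(-1.5, 1.5, 7)
print(quantize(x, n_bits=3))    # only 8 distinct output values are possible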

10.8.3.3 Filter Implementability

In real world circumstances, if the input signal is a function of time, the future values of the signal cannot be used to calculate the output. Thus, the digital filter H2 and the overall system H1 must be causal. The filter annotation in Figure 10.22 reflects this addition. If the desired system is not causal but has impulse response equal to zero before some time t0, a delay can be introduced to make it causal. However, if this delay is excessive or the impulse response has infinite length, a windowing scheme becomes necessary in order to practically solve the problem. Multiplying by a window to decrease the length of the impulse response can reduce the necessary delay and decrease computational requirements.

Take, for instance, the case of the ideal lowpass filter. It is acausal and infinite in length in both directions. Thus, we must satisfy ourselves with an approximation. One might suggest that these approximations could be achieved by truncating the sinc impulse response of the lowpass filter at one of its zeros, effectively windowing it with a rectangular pulse. However, doing so would produce poor results in the frequency domain as the resulting convolution would significantly spread the signal energy. Other windowing functions, of which there are many, spread the signal less in the frequency domain and are thus much more useful for producing these approximations.
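The windowed-sinc construction described here can be sketched in a few lines, assuming NumPy; the cutoff, length, and choice of a Hamming window are illustrative.

import numpy as np

def windowed_sinc_lowpass(cutoff, n_taps, window='hamming'):
    """Causal FIR approximation of the ideal lowpass filter: truncate the sinc
    impulse response, delay it to make it causal, and optionally taper it with a window.

    cutoff : normalized cutoff (1.0 corresponds to the Nyquist frequency fs/2)
    """
    n = np.arange(n_taps) - (n_taps - 1) / 2        # delay for causality
    h = cutoff * np.sinc(cutoff * n)                # truncated, shifted sinc
    if window == 'hamming':
        h *= np.hamming(n_taps)                     # tapering spreads less energy in frequency
    return h                                        # window=None keeps the plain rectangular truncation

h_rect = windowed_sinc_lowpass(0.25, 101, window=None)
h_hamm = windowed_sinc_lowpass(0.25, 101, window='hamming')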

10.8.3.4 Anti-Imaging Filter

In our preceding discussion of discrete time processing of continuous time signals, we had assumed an ideal case in which the DAC performs perfect reconstruction. However, when considering practical matters, it is important to remember that the sinc function, which is used for Whittaker-Shannon interpolation, is infinite in length and acausal. Hence, it would be impossible for a DAC to implement perfect reconstruction.

Instead, the DAC implements a causal zero order hold or other simple reconstruction scheme with respect to the sampling rate fs used by the ADC. However, doing so will result in a function that is not bandlimited to (−fs/2, fs/2). Therefore, an additional lowpass filter, called an anti-imaging filter, must be applied to the output. The process illustrated in Figure 10.22 reflects these additions. The anti-imaging filter attempts to bandlimit the signal to (−fs/2, fs/2), so an ideal lowpass filter would be optimal. However, as has already been stated, this is not possible. Therefore, approximations of the ideal lowpass filter with low gain above fs/2 must be accepted. The anti-imaging filter typically has the same characteristics as the anti-aliasing filter.

10.8.4 Discrete Time Processing of Continuous Time Signals Summary

As has been shown, sampling and reconstruction can be used to implement continuous time systems using discrete time systems, which is very powerful due to the versatility, flexibility, and speed of digital computers. However, there are a large number of practical considerations that must be taken into account when attempting to accomplish this, including quantization noise and anti-aliasing in the analog to digital converter, filter implementability in the discrete time filter, and reconstruction windowing and associated issues in the digital to analog converter. Many modern technologies address these issues and make use of this process.


Chapter 11

Capstone Signal Processing Topics

11.1 Discrete Fourier Transform (DFT)1

The discrete-time Fourier transform (and the continuous-time transform as well) can be evaluated when we have an analytic expression for the signal. Suppose we just have a signal, such as the speech signal used in the previous chapter, for which there is no formula. How then would you compute the spectrum? The Discrete Fourier Transform (DFT) allows the computation of spectra from discrete-time data. While in discrete-time we can exactly calculate spectra, for analog signals no similar exact spectrum computation exists. For analog-signal spectra, one must build special devices, which turn out in most cases to consist of A/D converters and discrete-time computations. Certainly discrete-time spectral analysis is more flexible than continuous-time spectral analysis.

The formula for the DTFT is a sum, which conceptually can be easily computed save for two issues.

• Signal duration. The sum extends over the signal's duration, which must be finite to compute the signal's spectrum. It is exceedingly difficult to store an infinite-length signal in any case, so we'll assume that the signal extends over [0, N − 1].

• Continuous frequency. Subtler than the signal duration issue is the fact that the frequency variable is continuous: it may only need to span one period, like [−1/2, 1/2] or [0, 1], but the DTFT formula as it stands requires evaluating the spectrum at all frequencies within a period. Let's compute the spectrum at a few frequencies; the most obvious ones are the equally spaced ones Ω = 2πk/K, k ∈ {0, . . . , K − 1}.

We thus define the discrete Fourier transform (DFT) to be

S[k] = Σ_{n=0}^{N−1} s[n] e^{−j2πnk/K},  k ∈ {0, . . . , K − 1}    (11.1)

Here, S[k] is shorthand for S(e^{j2πk/K}).

We can compute the spectrum at as many equally spaced frequencies as we like. Note that you can think about this computationally motivated choice as sampling the spectrum; more about this interpretation later. The issue now is how many frequencies are enough to capture how the spectrum changes with frequency. One way of answering this question is determining an inverse discrete Fourier transform formula: given S[k], k = 0, . . . , K − 1, how do we find s[n], n = 0, . . . , N − 1? Presumably, the formula will be of the form s[n] = Σ_{k=0}^{K−1} S[k] e^{j2πnk/K}. Substituting the DFT formula in this prototype inverse transform yields

s[n] = Σ_{k=0}^{K−1} Σ_{m=0}^{N−1} s[m] e^{−j2πmk/K} e^{j2πnk/K}    (11.2)

1This content is available online at <http://legacy.cnx.org/content/m47468/1.1/>.

Available for free at Connexions <http://legacy.cnx.org/content/col11557/1.6>

167

Page 174: Signals and SystemsSignals and Systems Collection Editor: Marco F. Duarte Authors: Thanos Antoulas Richard Baraniuk Dan Calderon Marco F. Duarte Catherine Elder Natesh Ganesh Michael

168 CHAPTER 11. CAPSTONE SIGNAL PROCESSING TOPICS

Note that the orthogonality relation we use so often has a different character now.

Σ_{k=0}^{K−1} e^{−j2πkm/K} e^{j2πkn/K} = { K  if m = n, n ± K, n ± 2K, . . . ;  0  otherwise }    (11.3)

We obtain a nonzero value whenever the two indices differ by a multiple of K. We can express this result as K Σ_l δ[m − n − lK]. Thus, our formula becomes

s[n] = Σ_{m=0}^{N−1} s[m] K Σ_{l=−∞}^{∞} δ[m − n − lK]    (11.4)

The integers n and m both range over 0, . . . , N − 1. To have an inverse transform, we need the sum to be a single unit sample for m, n in this range. If it is not, then s[n] would equal a sum of values, and we would not have a valid transform: once going into the frequency domain, we could not get back unambiguously! Clearly, the term l = 0 always provides a unit sample (we'll take care of the factor of K soon). If we evaluate the spectrum at fewer frequencies than the signal's duration, the term corresponding to m = n + K will also appear for some values of m, n = 0, . . . , N − 1. This situation means that our prototype transform equals s[n] + s[n + K] for some values of n. The only way to eliminate this problem is to require K ≥ N: we must have at least as many frequency samples as the signal's duration. In this way, we can return from the frequency domain we entered via the DFT.

Exercise 11.1.1 (Solution on p. 184.)

When we have fewer frequency samples than the signal's duration, some discrete-time signal values equal the sum of the original signal values. Given the sampling interpretation of the spectrum, characterize this effect a different way.

Another way to understand this requirement is to use the theory of linear equations. If we write out the expression for the DFT as a set of linear equations,

s[0] + s[1] + · · · + s[N − 1] = S[0]    (11.5)
s[0] + s[1] e^{−j2π/K} + · · · + s[N − 1] e^{−j2π(N−1)/K} = S[1]
...
s[0] + s[1] e^{−j2π(K−1)/K} + · · · + s[N − 1] e^{−j2π(N−1)(K−1)/K} = S[K − 1]

we have K equations in N unknowns if we want to find the signal from its sampled spectrum. This requirement is impossible to fulfill if K < N; we must have K ≥ N. Our orthogonality relation essentially says that if we have a sufficient number of equations (frequency samples), the resulting set of equations can indeed be solved.

By convention, the number of DFT frequency values K is chosen to equal the signal's duration N. The discrete Fourier transform pair consists of

Discrete Fourier Transform Pair

S[k] = Σ_{n=0}^{N−1} s[n] e^{−j2πnk/N}
s[n] = (1/N) Σ_{k=0}^{N−1} S[k] e^{j2πnk/N}    (11.6)
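The pair in (11.6) can be evaluated directly as written. The sketch below, assuming NumPy, builds the K = N matrix of complex exponentials and checks the result against the library FFT; it is meant to mirror the formulas, not to be efficient.

import numpy as np

def dft(s):
    """Direct DFT: S[k] = sum_n s[n] e^{-j 2 pi n k / N}."""
    N = len(s)
    n = np.arange(N)
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)    # N x N matrix of complex exponentials
    return W @ s

def idft(S):
    """Inverse DFT: s[n] = (1/N) sum_k S[k] e^{j 2 pi n k / N}."""
    N = len(S)
    n = np.arange(N)
    W = np.exp(2j * np.pi * np.outer(n, n) / N)
    return (W @ S) / N

s = np.random.randn(8)
assert np.allclose(idft(dft(s)), s)         # the pair inverts exactly when K = N
assert np.allclose(dft(s), np.fft.fft(s))   # agrees with the library FFT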

Example 11.1
Use this demonstration to perform DFT analysis of a signal.

Available for free at Connexions <http://legacy.cnx.org/content/col11557/1.6>

Page 175: Signals and SystemsSignals and Systems Collection Editor: Marco F. Duarte Authors: Thanos Antoulas Richard Baraniuk Dan Calderon Marco F. Duarte Catherine Elder Natesh Ganesh Michael

169

This media object is a LabVIEW VI. Please view or download it at <DFTanalysis.llb>

Example 11.2
Use this demonstration to synthesize a signal from a DFT sequence.

This media object is a LabVIEW VI. Please view or download it at <DFT_Component_Manipulation.llb>

11.2 DFT: Fast Fourier Transform2

We now have a way of computing the spectrum for an arbitrary signal: The Discrete Fourier Transform (DFT)3 computes the spectrum at N equally spaced frequencies from a length-N sequence. An issue that never arises in analog "computation," like that performed by a circuit, is how much work it takes to perform the signal processing operation such as filtering. In computation, this consideration translates to the number of basic computational steps required to perform the needed processing. The number of steps, known as the complexity, becomes equivalent to how long the computation takes (how long must we wait for an answer). Complexity is not so much tied to specific computers or programming languages but to how many steps are required on any computer. Thus, a procedure's stated complexity says that the time taken will be proportional to some function of the amount of data used in the computation and the amount demanded.

For example, consider the formula for the discrete Fourier transform. For each frequency we chose, we must multiply each signal value by a complex number and add together the results. For a real-valued signal, each real-times-complex multiplication requires two real multiplications, meaning we have 2N multiplications to perform. To add the results together, we must keep the real and imaginary parts separate. Adding N numbers requires N − 1 additions. Consequently, each frequency requires 2N + 2(N − 1) = 4N − 2 basic computational steps. As we have N frequencies, the total number of computations is N(4N − 2).

In complexity calculations, we only worry about what happens as the data lengths increase, and take the dominant term, here the 4N² term, as reflecting how much work is involved in making the computation. As multiplicative constants don't matter since we are making a "proportional to" evaluation, we find the DFT is an O(N²) computational procedure. This notation is read "order N-squared". Thus, if we double the length of the data, we would expect the computation time to approximately quadruple.

Exercise 11.2.1 (Solution on p. 184.)

In making the complexity evaluation for the DFT, we assumed the data to be real. Three questions emerge. First of all, the spectra of such signals have conjugate symmetry, meaning that negative frequency components (k = N/2 + 1, ..., N − 1 in the DFT4) can be computed from the corresponding positive frequency components. Does this symmetry change the DFT's complexity? Secondly, suppose the data are complex-valued; what is the DFT's complexity now? Finally, a less important but interesting question: suppose we want K frequency values instead of N; now what is the complexity?

2This content is available online at <http://legacy.cnx.org/content/m0504/2.9/>.3"Discrete Fourier Transform", (1) : Discrete Fourier transform <http://legacy.cnx.org/content/m0502/latest/#eqn1>4"Discrete Fourier Transform", (1) : Discrete Fourier transform <http://legacy.cnx.org/content/m0502/latest/#eqn1>


11.3 The Fast Fourier Transform (FFT)5

11.3.1 Introduction

The Fast Fourier Transform (FFT) is an efficient O(N log N) algorithm for calculating DFTs. The FFT6 exploits symmetries in the W matrix to take a "divide and conquer" approach. We will first discuss deriving the actual FFT algorithm, some of its implications for the DFT, and a speed comparison to drive home the importance of this powerful algorithm.

11.3.2 Deriving the FFT

To derive the FFT, we assume that the signal's duration is a power of two: N = 2^L. Consider what happens to the even-numbered and odd-numbered elements of the sequence in the DFT calculation.

S[k] = s[0] + s[2]e^{−j 2π(2k)/N} + · · · + s[N−2]e^{−j 2π(N−2)k/N}
        + s[1]e^{−j 2πk/N} + s[3]e^{−j 2π(2+1)k/N} + · · · + s[N−1]e^{−j 2π(N−2+1)k/N}

     = [ s[0] + s[2]e^{−j 2πk/(N/2)} + · · · + s[N−2]e^{−j 2π((N/2)−1)k/(N/2)} ]
        + [ s[1] + s[3]e^{−j 2πk/(N/2)} + · · · + s[N−1]e^{−j 2π((N/2)−1)k/(N/2)} ] e^{−j 2πk/N}    (11.7)

Each term in square brackets has the form of an N/2-length DFT. The first one is a DFT of the even-numbered elements, and the second of the odd-numbered elements. The first DFT is combined with the second multiplied by the complex exponential e^{−j 2πk/N}. The half-length transforms are each evaluated at frequency indices k ∈ {0, ..., N − 1}. Normally, the number of frequency indices in a DFT calculation ranges between zero and the transform length minus one. The computational advantage of the FFT comes from recognizing the periodic nature of the discrete Fourier transform. The FFT simply reuses the computations made in the half-length transforms and combines them through additions and the multiplication by e^{−j 2πk/N}, which is not periodic over N/2, to rewrite the length-N DFT. Figure 11.1 (Length-8 DFT decomposition) illustrates this decomposition. As it stands, we now compute two length-N/2 transforms (complexity 2O(N²/4)), multiply one of them by the complex exponential (complexity O(N)), and add the results (complexity O(N)). At this point, the total complexity is still dominated by the half-length DFT calculations, but the proportionality coefficient has been reduced.

Now for the fun. Because N = 2^L, each of the half-length transforms can be reduced to two quarter-length transforms, each of these to two eighth-length ones, etc. This decomposition continues until we are left with length-2 transforms. This transform is quite simple, involving only additions. Thus, the first stage of the FFT has N/2 length-2 transforms (see the bottom part of Figure 11.1 (Length-8 DFT decomposition)). Pairs of these transforms are combined by adding one to the other multiplied by a complex exponential. Each pair requires 4 additions and 4 multiplications, giving a total number of computations equaling 8·N/4 = 2N. This number of computations does not change from stage to stage. Because the number of stages, the number of times the length can be divided by two, equals log₂N, the complexity of the FFT is O(N log N).
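The decomposition above translates almost line for line into a recursive program. The following NumPy sketch (our own illustration, assuming the length N is a power of two) splits the input into even- and odd-indexed halves, transforms each, and recombines them with the complex exponential e^{−j 2πk/N}:

import numpy as np

def fft_radix2(s):
    # Recursive radix-2 FFT; assumes len(s) is a power of two.
    N = len(s)
    if N == 1:
        return np.asarray(s, dtype=complex)
    even = fft_radix2(s[0::2])              # length-N/2 DFT of the even-indexed samples
    odd = fft_radix2(s[1::2])               # length-N/2 DFT of the odd-indexed samples
    k = np.arange(N // 2)
    twiddle = np.exp(-2j * np.pi * k / N)   # the combining exponential
    # periodicity of the half-length DFTs supplies both halves of S[k]
    return np.concatenate([even + twiddle * odd, even - twiddle * odd])

s = np.random.randn(8)
assert np.allclose(fft_radix2(s), np.fft.fft(s))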

5This content is available online at <http://legacy.cnx.org/content/m47467/1.1/>.6"Fast Fourier Transform (FFT)" <http://legacy.cnx.org/content/m10250/latest/>


Length-8 DFT decomposition

(a)

(b)

Figure 11.1: The initial decomposition of a length-8 DFT into the terms using even- and odd-indexed inputs marks the first phase of developing the FFT algorithm. When these half-length transforms are successively decomposed, we are left with the diagram shown in the bottom panel that depicts the length-8 FFT computation.

Doing an example will make the computational savings more obvious. Let's look at the details of a length-8 DFT. As shown in Figure 11.1 (Length-8 DFT decomposition), we first decompose the DFT into two length-4 DFTs, with the outputs added and subtracted together in pairs. Considering Figure 11.1 (Length-8 DFT decomposition) as the frequency index goes from 0 through 7, we recycle values from the length-4 DFTs into the final calculation because of the periodicity of the DFT output. Examining how pairs of outputs are collected together, we create the basic computational element known as a butterfly (Figure 11.2 (Butterfly)).


Butterfly

Figure 11.2: The basic computational element of the fast Fourier transform is the butterfly. It takes two complex numbers, represented by a and b, and forms the quantities shown. Each butterfly requires one complex multiplication and two complex additions.

By considering together the computations involving common output frequencies from the two half-length DFTs, we see that the two complex multiplies are related to each other, and we can reduce our computational work even further. By further decomposing the length-4 DFTs into two length-2 DFTs and combining their outputs, we arrive at the diagram summarizing the length-8 fast Fourier transform (Figure 11.1 (Length-8 DFT decomposition)). Although most of the complex multiplies are quite simple (multiplying by e^{−jπ} means negating real and imaginary parts), let's count those for purposes of evaluating the complexity as full complex multiplies. We have N/2 = 4 complex multiplies and 2N = 16 additions for each stage and log₂N = 3 stages, making the number of basic computations (3N/2)·log₂N as predicted.

Exercise 11.3.1 (Solution on p. 184.)

Note that the orderings of the input sequence in the two parts of Figure 11.1 (Length-8 DFT decomposition) aren't quite the same. Why not? How is the ordering determined?

11.3.2.1 FFT and the DFT

The Discrete Fourier Transform computes the spectrum at N equally spaced frequencies from a length-N sequence. As shown in Section 11.2 (DFT: Fast Fourier Transform), evaluating the DFT directly from its definition requires 4N − 2 basic computational steps for each of the N frequencies, so the direct DFT is an O(N²) computational procedure: doubling the length of the data approximately quadruples the computation time.

11.3.3 Speed Comparison

How much better is O(N log N) than O(N²)?

Figure 11.3: This figure shows how much more slowly the computation time of an O(N log N) process grows compared with an O(N²) process.

N         10     100     1000     10^6      10^9
N²        100    10^4    10^6     10^12     10^18
N log N   10     200     3000     6×10^6    9×10^9

Table 11.1

Say you have a 1 MFLOP machine (a million "floating point" operations per second). Let N = 1 million = 10^6.

An O(N²) algorithm takes 10^12 flops → 10^6 seconds ≈ 11.5 days.

An O(N log N) algorithm takes 6×10^6 flops → 6 seconds.

note: N = 1 million is not unreasonable.

Example 11.3
A 3-megapixel digital camera spits out 3×10^6 numbers for each picture. So consider two N-point sequences f[n] and h[n]. Computing f[n] ⊛ h[n] directly requires O(N²) operations. Taking FFTs requires O(N log N),

8"Discrete Fourier Transform", (1) : Discrete Fourier transform <http://legacy.cnx.org/content/m0502/latest/#eqn1>

Available for free at Connexions <http://legacy.cnx.org/content/col11557/1.6>

Page 180: Signals and SystemsSignals and Systems Collection Editor: Marco F. Duarte Authors: Thanos Antoulas Richard Baraniuk Dan Calderon Marco F. Duarte Catherine Elder Natesh Ganesh Michael

174 CHAPTER 11. CAPSTONE SIGNAL PROCESSING TOPICS

multiplying the FFTs requires O(N), and the inverse FFT requires O(N log N), so the total complexity is O(N log N).
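As a rough sketch of this bookkeeping (our own, not part of the original example), the code below convolves two sequences by multiplying zero-padded FFTs and checks the result against direct convolution; the answers agree, but the FFT route scales as O(N log N) rather than O(N²):

import numpy as np

def fft_convolve(f, h):
    # Linear convolution via FFTs, zero-padded to avoid wrap-around effects.
    L = len(f) + len(h) - 1          # length of the linear convolution
    F = np.fft.fft(f, L)             # O(N log N)
    H = np.fft.fft(h, L)             # O(N log N)
    return np.fft.ifft(F * H).real   # O(N) multiply plus an O(N log N) inverse FFT

f = np.random.randn(1000)
h = np.random.randn(1000)
assert np.allclose(fft_convolve(f, h), np.convolve(f, h))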

11.3.4 Conclusion

Other "fast" algorithms have been discovered, most of which make use of how many common factors thetransform length N has. In number theory, the number of prime factors a given integer has measures howcomposite it is. The numbers 16 and 81 are highly composite (equaling 24 and 34 respectively), thenumber 18 is less so ( 2132 ), and 17 not at all (it's prime). In over thirty years of Fourier transformalgorithm development, the original Cooley-Tukey algorithm is far and away the most frequently used. Itis so computationally ecient that power-of-two transform lengths are frequently used regardless of whatthe actual length of the data. It is even well established that the FFT, alongside the digital computer, werealmost completely responsible for the "explosion" of DSP in the 60's.

11.4 Matched Filter Detector9

11.4.1 Introduction

A great many applications in signal processing, image processing, and beyond involve determining the presence and location of a target signal within some other signal. A radar system, for example, searches for copies of a transmitted radar pulse in order to determine the presence of and distance to reflective objects such as buildings or aircraft. A communication system searches for copies of waveforms representing digital 0s and 1s in order to receive a message.

Two key mathematical tools that contribute to these applications are inner products10 and the Cauchy-Schwarz inequality11. As is shown in the module on the Cauchy-Schwarz inequality, the expression |⟨ x/‖x‖, y/‖y‖ ⟩| attains its upper bound, which is 1, when y = ax for some scalar a in a real or complex field. The lower bound, which is 0, is attained when x and y are orthogonal. In informal intuition, this means that the expression is maximized when the vectors x and y have the same shape or pattern and minimized when x and y are very different. A pair of vectors with similar but unequal shapes or patterns will produce a relatively large value of the expression less than 1, and a pair of vectors with very different but not orthogonal shapes or patterns will produce relatively small values of the expression greater than 0. Thus, the above expression carries with it a notion of the degree to which two signals are alike, the magnitude of the normalized correlation between the signals in the case of the standard inner products.

This concept can be extremely useful. For instance, consider a situation in which we wish to determine which signal, if any, from a set X of signals most resembles a particular signal y. In order to accomplish this, we might evaluate the above expression for every signal x ∈ X, choosing the one that results in maxima provided that those maxima are above some threshold of likeness. This is the idea behind the matched filter detector, which compares a set of signals against a target signal using the above expression in order to determine which is most like the target signal.

9This content is available online at <http://legacy.cnx.org/content/m34670/1.10/>.10http://cnx.org/content/m12101/latest/11http://cnx.org/content/m10757/latest/


11.4.2 Matched Filter Detector Theory

11.4.2.1 Signal Comparison

The simplest variant of the matched filter detector scheme would be to find the member signal in a set X of signals that most closely matches a target signal y. Thus, for every x ∈ X we wish to evaluate

m(x, y) = |⟨ x/‖x‖, y/‖y‖ ⟩|    (11.8)

in order to compare every member of X to the target signal y. Since the member of X which most closely matches the target signal y is desired, ultimately we wish to evaluate

x_m = argmax_{x∈X} |⟨ x/‖x‖, y/‖y‖ ⟩|.    (11.9)

Note that the target signal does not technically need to be normalized to produce a maximum, but normalizing gives the desirable property that m(x, y) is bounded to [0, 1].

The element x_m ∈ X that produces the maximum value of m(x, y) is not necessarily unique, so there may be more than one matching signal in X. Additionally, the signal x_m ∈ X producing the maximum value of m(x, y) may not produce a very large value of m(x, y) and thus not be very much like the target signal y. Hence, another matched filter scheme might identify the argument producing the maximum but only above a certain threshold, returning no matching signals in X if the maximum is below the threshold. There also may be a signal x ∈ X that produces a large value of m(x, y) and thus has a high degree of likeness to y but does not produce the maximum value of m(x, y). Thus, yet another matched filter scheme might identify all signals in X producing local maxima that are above a certain threshold.
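A minimal sketch of the simplest scheme above (our own illustration, for finite-length vectors with the standard ℓ² inner product; the threshold value is an arbitrary choice, not from the text):

import numpy as np

def likeness(x, y):
    # m(x, y) = |<x/||x||, y/||y||>|
    return abs(np.vdot(x, y)) / (np.linalg.norm(x) * np.linalg.norm(y))

def matched_filter_detect(candidates, y, threshold=0.8):
    # Return the candidate most like the target y, or None if nothing is close enough.
    scores = [likeness(x, y) for x in candidates]
    best = int(np.argmax(scores))
    return candidates[best] if scores[best] >= threshold else None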

Example 11.4
For example, consider the target signal given in Figure 11.4 (Template Signal) and the set of two signals given in Figure 11.5 (Candidate Signals). By inspection, it is clear that the signal g₂ is most like the target signal f. However, to make that conclusion mathematically, we use the matched filter detector with the L² inner product. If we were to actually make the necessary computations, we would first normalize each signal and then compute the necessary inner products in order to compare the signals in X with the target signal f. We would notice that the absolute value of the inner product for g₂ with f when normalized is greater than the absolute value of the inner product of g₁ with f when normalized, mathematically stated as

g₂ = argmax_{x∈{g₁,g₂}} |⟨ x/‖x‖, f/‖f‖ ⟩|.    (11.10)

Template Signal

Figure 11.4: We wish to find a match for this target signal in the set of signals below.


Candidate Signals

(a) (b)

Figure 11.5: We wish to find a match for the above target signal in this set of signals.

11.4.2.2 Pattern Detection

A somewhat more involved matched filter detector scheme would involve attempting to match a target time-limited signal y = f to a set of time shifted and windowed versions of a single signal X = {wS_t g | t ∈ R} indexed by R. The windowing function is given by w(t) = u(t − t₁) − u(t − t₂) where [t₁, t₂] is the interval to which f is time limited. This scheme could be used to find portions of g that have the same shape as f. If the absolute value of the inner product of the normalized versions of f and wS_t g is large, which is the absolute value of the normalized correlation for standard inner products, then g has a high degree of likeness to f on the interval to which f is time limited but left shifted by t. Of course, if f is not time limited, it means that the entire signal has a high degree of likeness to f left shifted by t.

Thus, in order to determine the most likely locations of a signal with the same shape as the target signal f in a signal g we wish to compute

t_m = argmax_{t∈R} |⟨ f/‖f‖, wS_t g/‖wS_t g‖ ⟩|    (11.11)

to provide the desired shift. Assuming the inner product space examined is L²(R) (similar results hold for L²(R[a, b)), ℓ²(Z), and ℓ²(Z[a, b))), this produces

t_m = argmax_{t∈R} | (1/(‖f‖ ‖wS_t g‖)) ∫_{−∞}^{∞} f(τ) w(τ) \overline{g(τ − t)} dτ |.    (11.12)

Since f and w are time limited to the same interval,

t_m = argmax_{t∈R} | (1/(‖f‖ ‖wS_t g‖)) ∫_{t₁}^{t₂} f(τ) \overline{g(τ − t)} dτ |.    (11.13)

Making the substitution h(t) = \overline{g(−t)},

t_m = argmax_{t∈R} | (1/(‖f‖ ‖wS_t g‖)) ∫_{t₁}^{t₂} f(τ) h(t − τ) dτ |.    (11.14)

Noting that this expression contains a convolution operation,

t_m = argmax_{t∈R} | (f ∗ h)(t) / (‖f‖ ‖wS_t g‖) |,    (11.15)

where h is the conjugate of the time reversed version of g defined by h(t) = \overline{g(−t)}.


In the special case in which the target signal f is not time limited, w has unit value on the entire real line. Thus, the norm can be evaluated as ‖wS_t g‖ = ‖S_t g‖ = ‖g‖ = ‖h‖. Therefore, the function reduces to t_m = argmax_{t∈R} |(f ∗ h)(t)| / (‖f‖ ‖h‖) where h(t) = \overline{g(−t)}. The function (f ∗ h)(t) / (‖f‖ ‖h‖) is known as the normalized cross-correlation of f and g.

Hence, this matched filter scheme can be implemented as a convolution. Therefore, it may be expedient to implement it in the frequency domain. Similar results hold for the L²(R[a, b)), ℓ²(Z), and ℓ²(Z[a, b]) spaces. It is especially useful to implement the ℓ²(Z[a, b]) cases in the frequency domain, as the power of the Fast Fourier Transform algorithm can be leveraged to quickly perform the computations in a computer program. In the L²(R[a, b)) and ℓ²(Z[a, b]) cases, care must be taken to zero pad the signal if wrap-around effects are not desired. Similar results also hold for spaces on higher dimensional intervals with the same inner products.

Of course, there is not necessarily exactly one instance of a target signal in a given signal. There could be one instance, more than one instance, or no instance of a target signal. Therefore, it is often more practical to identify all shifts corresponding to local maxima that are above a certain threshold.
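In discrete time, the shift search above amounts to a normalized cross-correlation, which the sketch below (our own, for real-valued finite-length signals) evaluates shift by shift; for long signals the correlations could equally be computed with FFTs, as noted above:

import numpy as np

def normalized_cross_correlation(f, g):
    # m[t] = |<f, windowed shift of g>| / (||f|| ||windowed shift of g||) for each shift t
    Nf = len(f)
    norm_f = np.linalg.norm(f)
    m = np.zeros(len(g) - Nf + 1)
    for t in range(len(m)):
        segment = g[t:t + Nf]                      # the windowed, shifted piece of g
        denom = norm_f * np.linalg.norm(segment)
        if denom > 0:
            m[t] = abs(np.dot(f, segment)) / denom
    return m

# shifts where the template most likely occurs (threshold is arbitrary):
# peaks = np.flatnonzero(m >= 0.9 * m.max())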

Example 11.5
The signal in Figure 11.7 (Longer Signal) contains an instance of the template signal seen in Figure 11.6 (Pattern Signal) beginning at time t = s₁, as shown by the plot in Figure 11.8 (Absolute Value of Output). Therefore,

s₁ = argmax_{t∈R} |⟨ f/‖f‖, wS_t g/‖wS_t g‖ ⟩|.    (11.16)

Pattern Signal

Figure 11.6: This function shows the pattern we are looking for in the signal below, which occurs at time t = s₁.

Longer Signal

Figure 11.7: This signal contains an instance of the above signal starting at time t = s1.


Absolute Value of Output

Figure 11.8: This signal shows a sketch of the absolute value of the matched filter output for the interval shown. Note that this was just an "eyeball approximation" sketch. Observe the pronounced peak at time t = s₁.

11.4.3 Practical Applications

11.4.3.1 Image Detection

Matched filtering is used in image processing to detect a template image within a reference image. This has real-world applications in verifying fingerprints for security or in verifying someone's photo. As a simple example, we can turn to the ever-popular "Where's Waldo?" books (known as Wally in the UK!), where the reader is tasked with finding the specific face of Waldo/Wally in a confusing background rife with look-alikes! If we are given the template head and a reference image, we can run a two dimensional convolution of the template image across the reference image to obtain a three dimensional convolution map, Figure 11.9(a), where the height of the convolution map is determined by the degree of correlation, higher being more correlated. Finding our target then becomes a matter of determining the spot where the local surface area is highest. The process is demonstrated in Figure 11.9(b). In the field of image processing, this matched filter-based process is known as template matching.


(a)

(b)

Figure 11.9: Example of "Where's Waldo?" picture. Our Matched Filter Detector can be implemented to find a possible match for Waldo.

We could then easily develop a program to find the closest resemblance to the image of Waldo's head in the larger picture. We would simply implement our same matched filter algorithm: take the inner products at each shift and see how large our resulting answers are. This idea was implemented on this same picture for a Signals and Systems Project12 at Rice University (click the link to learn more).
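The same inner-product search extends to two dimensions. Below is a rough sketch of template matching with plain NumPy loops (our own illustration only; a practical implementation would use an FFT-based or library routine, and the array names are hypothetical):

import numpy as np

def template_match(image, template):
    # Normalized correlation of the template with every position in the image.
    H, W = image.shape
    h, w = template.shape
    t = template / np.linalg.norm(template)
    scores = np.zeros((H - h + 1, W - w + 1))
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            patch = image[r:r + h, c:c + w]
            denom = np.linalg.norm(patch)
            if denom > 0:
                scores[r, c] = abs(np.sum(t * patch)) / denom
    return scores            # the largest entry marks the best match location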

Exercise 11.4.1: Pros and Cons (Solution on p. 184.)

What are the advantages of the matched filter algorithm for image detection? What are the drawbacks of this method?

11.4.3.2 Communications Systems

Matched filter detectors are also commonly used in Communications Systems13. In fact, they are the optimal detectors in Gaussian noise. Signals in the real world are often distorted by the environment around them, so there is a constant struggle to develop ways to be able to receive a distorted signal and then be able to filter it in some way to determine what the original signal was. Matched filters provide one way to compare a received signal with two possible original ("template") signals and determine which one is the closest match to the received signal.

12http://www.owlnet.rice.edu/∼elec301/Projects99/waldo/process.html13"Structure of Communication Systems" <http://legacy.cnx.org/content/m0002/latest/>


For example, below we have a simplified example of Frequency Shift Keying14 (FSK) where we have the following coding for '1' and '0':

Figure 11.10: Frequency Shift Keying for '1' and '0'.

Based on the above coding, we can create digital signals based on 0's and 1's by putting together the above two "codes" in an infinite number of ways. For this example we will transmit a basic 3-bit number, 101, which is displayed in Figure 11.11:

Figure 11.11: The bit stream "101" coded with the above FSK.

Now, the signal picture above represents our original signal that will be transmitted over some communication system, which will inevitably pass through the "communications channel," the part of the system that will distort and alter our signal. As long as the noise is not too great, our matched filter should keep us from having to worry about these changes to our transmitted signal. Once this signal has been received, we will pass the noisy signal through a simple system, similar to the simplified version shown in Figure 11.12:

14"Frequency Shift Keying" <http://legacy.cnx.org/content/m0545/latest/>


Figure 11.12: Block diagram of matched lter detector.

Figure 11.12 basically shows that our noisy signal will be passed in (we will assume that it passes in one "bit" at a time) and this signal will be split and passed to two different matched filter detectors. Each one will compare the noisy, received signal to one of the two codes we defined for '1' and '0.' Then this value will be passed on and whichever value is higher (i.e. whichever FSK code signal the noisy signal most resembles) will be the value that the receiver takes. For example, the first bit that will be sent through will be a '1' so the upper level of the block diagram will have a higher value, thus denoting that a '1' was sent by the signal, even though the signal may appear very noisy and distorted.

The interactive example below supposes that our transmitter sends 1000 bits, plotting how many of those bits are received and interpreted correctly as either 1s or 0s, and also keeps a tally of how many are accidentally misinterpreted. You can play around with the distance between the energy of the "1" and the "0" (discriminability), the degree of noise present in the channel, and the location of the criterion (threshold) to get a feel for the basics of signal detection theory.

Example 11.6
Let's use a matched filter to find the "0" bits in a simple signal. Let's use the signal s₁(t) from example 1 to represent the bits: s₁(t) represents 0, while −s₁(t) represents 1.

0 ⇒ (b = 1) ⇒ (s₁(t) = s(t)) for 0 ≤ t ≤ T
1 ⇒ (b = −1) ⇒ (s₂(t) = −s(t)) for 0 ≤ t ≤ T

X_t = Σ_{i=−P}^{P} b_i s(t − iT)    (11.17)


Figure 11.13

The matched filter output clearly shows the location of the "0" bits.
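A small simulation in the same spirit (our own sketch, with an arbitrary pulse shape and noise level rather than the exact signals of the example) correlates the received waveform against the two templates s(t) and −s(t) one bit at a time and keeps whichever output is larger:

import numpy as np

rng = np.random.default_rng(0)
T = 50                                            # samples per bit (assumed)
s = np.sin(2 * np.pi * 5 * np.arange(T) / T)      # stand-in pulse for s(t)
bits = [1, 0, 1]

# transmit: '0' -> +s(t), '1' -> -s(t), following the example's convention
tx = np.concatenate([-s if b else s for b in bits])
rx = tx + 0.5 * rng.standard_normal(tx.size)      # additive Gaussian noise

decoded = []
for i in range(len(bits)):
    segment = rx[i * T:(i + 1) * T]
    # matched filter outputs for the two candidate symbols
    decoded.append(0 if np.dot(segment, s) > np.dot(segment, -s) else 1)

print(decoded)        # with modest noise this recovers [1, 0, 1]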

11.4.3.3 Radar

One of the first, and more intriguing, forms of communication that used the matched filter concept was radar. A known electromagnetic signal is sent out by a transmitter at a target and reflected off of the target back to the sender with a time delay proportional to the distance between target and sender. This scaled, time-shifted signal is then convolved with the original template signal, and the time at which the output of this convolution is highest is noted.

This technology proved vital in the 1940s for the powers that possessed it. A short set of videos below shows the basics of how the technology works, its applications, and its impact in World War 2.

History of Radar

This media object is a Flash object. Please view or download it at<http://www.youtube.com/v/Zq0uE7nUlEQ&hl=en_US&fs=1&>

Figure 11.14

See the video in Figure 11.15 for an analysis of the same basic principle being applied to adaptive cruise control systems for the modern car.


This media object is a Flash object. Please view or download it at<http://www.youtube.com/v/VabT6UMjLNY&hl=en_US&fs=1>

Figure 11.15: Video on radar-based adaptive cruise control from The Science Channel.

11.4.4 Matched Filter Demonstration

Figure 11.16: Interact (when online) with a Mathematica CDF demonstrating the Matched Filter. To Download, right-click and save target as .cdf.

11.4.5 Matched Filter Summary

As can be seen, the matched filter detector is an important signal processing application, rich both in theoretical concepts and in practical applications. The matched filter supports a wide array of uses related to pattern recognition, including image detection, frequency shift keying demodulation, and radar signal interpretation. Despite this diversity of purpose, all matched filter applications operate in essentially the same way. Every member of some set of signals is compared to a target signal by evaluating the absolute value of the inner product of the two signals after normalization. However, the signal sets and result interpretations are application specific.


Solutions to Exercises in Chapter 11

Solution to Exercise 11.1.1 (p. 168)
This situation amounts to aliasing in the time-domain.

Solution to Exercise 11.2.1 (p. 169)
When the signal is real-valued, we may only need half the spectral values, but the complexity remains unchanged. If the data are complex-valued, which demands retaining all frequency values, the complexity is again the same. When only K frequencies are needed, the complexity is O(KN).

Solution to Exercise 11.3.1 (p. 172)
The upper panel has not used the FFT algorithm to compute the length-4 DFTs while the lower one has. The ordering is determined by the algorithm.

Solution to Exercise 11.4.1 (p. 179)
This algorithm is very simple and thus easy to code. However, it is susceptible to certain types of noise - for example, it would be difficult to find Waldo if his face was rotated, flipped, larger or smaller than expected, or distorted in some other way.


Appendix: Mathematical Pot-Pourri

12.1 Basic Linear Algebra1

This brief tutorial on some key terms in linear algebra is not meant to replace or be very helpful to those of you trying to gain a deep insight into linear algebra. Rather, this brief introduction to some of the terms and ideas of linear algebra is meant to provide a little background to those trying to get a better understanding or learn about eigenvectors and eigenfunctions, which play a big role in deriving a few important ideas on Signals and Systems. The goal of these concepts will be to provide a background for signal decomposition and to lead up to the derivation of the Fourier Series2.

12.1.1 Linear Independence

A set of vectors {x₁, x₂, . . . , xₖ}, xᵢ ∈ Cⁿ, is linearly independent if none of them can be written as a linear combination of the others.

Definition 12.1: Linearly Independent
For a given set of vectors, x₁, x₂, . . . , xₙ, they are linearly independent if

c₁x₁ + c₂x₂ + · · · + cₙxₙ = 0

only when c₁ = c₂ = · · · = cₙ = 0

Example
We are given the following two vectors:

x₁ = [3, 2]^T,  x₂ = [−6, −4]^T

These are not linearly independent as proven by the following statement, which, by inspection, can be seen to not adhere to the definition of linear independence stated above.

(x₂ = −2x₁) ⇒ (2x₁ + x₂ = 0)

Another approach to reveal a vector's independence is by graphing the vectors. Looking at these two vectors geometrically (as in Figure 12.1), one can again prove that these vectors are not linearly independent.

1This content is available online at <http://legacy.cnx.org/content/m10734/2.7/>.2"Fourier Series: Eigenfunction Approach" <http://legacy.cnx.org/content/m10496/latest/>


Figure 12.1: Graphical representation of two vectors that are not linearly independent.

Example 12.1
We are given the following two vectors:

x₁ = [3, 2]^T,  x₂ = [1, 2]^T

These are linearly independent since

c₁x₁ = −(c₂x₂)

only if c₁ = c₂ = 0. Based on the definition, this proof shows that these vectors are indeed linearly independent. Again, we could also graph these two vectors (see Figure 12.2) to check for linear independence.

Figure 12.2: Graphical representation of two vectors that are linearly independent.

Exercise 12.1.1 (Solution on p. 195.)
Are x₁, x₂, x₃ linearly independent?

x₁ = [3, 2]^T,  x₂ = [1, 2]^T,  x₃ = [−1, 0]^T

As we have seen in the two above examples, oftentimes the independence of vectors can be easily seen through a graph. However, this may not be as easy when we are given three or more vectors. Can you easily tell whether or not these vectors are independent from Figure 12.3? Probably not, which is why the method used in the above solution becomes important.

Figure 12.3: Plot of the three vectors. It can be shown that a linear combination exists among the three, and therefore they are not linearly independent.

Hint: A set of m vectors in Cn cannot be linearly independent if m > n.
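For three or more vectors, a quick numerical check is to stack them as columns and compare the matrix rank with the number of vectors; a short NumPy sketch (our own, using the vectors from the exercise above):

import numpy as np

x1 = np.array([3, 2])
x2 = np.array([1, 2])
x3 = np.array([-1, 0])

A = np.column_stack([x1, x2, x3])     # 2 x 3 matrix whose columns are the vectors
rank = np.linalg.matrix_rank(A)
print(rank < A.shape[1])              # True: the three vectors are linearly dependent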

12.1.2 Span

Definition 12.2: Span
The span3 of a set of vectors {x₁, x₂, . . . , xₖ} is the set of vectors that can be written as a linear combination of {x₁, x₂, . . . , xₖ}:

span(x₁, . . . , xₖ) = {α₁x₁ + α₂x₂ + · · · + αₖxₖ , αᵢ ∈ C}

Example
Given the vector

x₁ = [3, 2]^T

the span of x₁ is a line.

Example
Given the vectors

x₁ = [3, 2]^T

3"Subspaces", Denition 2: "Span" <http://legacy.cnx.org/content/m10297/latest/#defn2>


x₂ = [1, 2]^T

the span of these vectors is C².

12.1.3 Basis

Definition 12.3: Basis
A basis for Cⁿ is a set of vectors that: (1) spans Cⁿ and (2) is linearly independent.

Clearly, any set of n linearly independent vectors is a basis for Cn.

Example 12.2
We are given the following vector

eᵢ = [0, . . . , 0, 1, 0, . . . , 0]^T

where the 1 is always in the ith place and the remaining values are zero. Then the basis for Cⁿ is

{eᵢ , i = [1, 2, . . . , n]}

note: {eᵢ , i = [1, 2, . . . , n]} is called the standard basis.

Example 12.3

h₁ = [1, 1]^T,  h₂ = [1, −1]^T

{h₁, h₂} is a basis for C².


Figure 12.4: Plot of basis for C2

If {b₁, . . . , bₙ} is a basis for Cⁿ, then we can express any x ∈ Cⁿ as a linear combination of the bᵢ's:

x = α₁b₁ + α₂b₂ + · · · + αₙbₙ , αᵢ ∈ C

Example 12.4
Given the following vector,

x = [1, 2]^T

writing x in terms of {e₁, e₂} gives us

x = e₁ + 2e₂

Exercise 12.1.2 (Solution on p. 195.)

Try to write x in terms of h₁, h₂ (defined in the previous example).

In the two basis examples above, x is the same vector in both cases, but we can express it in many different ways (we give only two out of many, many possibilities). You can take this even further by extending this idea of a basis to function spaces.
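Finding the coefficients αᵢ amounts to solving a small linear system whose columns are the basis vectors. A sketch with NumPy (our own illustration, using the basis h₁, h₂ from Example 12.3):

import numpy as np

h1 = np.array([1.0, 1.0])
h2 = np.array([1.0, -1.0])
x = np.array([1.0, 2.0])

B = np.column_stack([h1, h2])      # basis vectors as columns
alpha = np.linalg.solve(B, x)      # coefficients with B @ alpha = x
print(alpha)                       # [ 1.5 -0.5], i.e. x = (3/2) h1 - (1/2) h2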

note: As mentioned in the introduction, these concepts of linear algebra will help prepare you to understand the Fourier Series4, which tells us that we can express periodic functions, f(t), in terms of their basis functions, e^{jω₀nt}.

[Media Object]5

4"Fourier Series: Eigenfunction Approach" <http://legacy.cnx.org/content/m10496/latest/>5This media object is a LabVIEW VI. Please view or download it at

<LinearAlgebraCalc3.llb>


Khan Lecture on Basis of a Subspace

This media object is a Flash object. Please view or download it at<http://www.youtube.com/v/zntNi3-

ybfQ&rel=0&color1=0xb1b1b1&color2=0xd0d0d0&hl=en_US&feature=player_embedded&fs=1>

Figure 12.5: video from Khan Academy, Basis of a Subspace - 20 min.

12.2 Linear Constant Coefficient Difference Equations6

12.2.1 Introduction: Difference Equations

In our study of signals and systems, it will often be useful to describe systems using equations involving the rate of change in some quantity. In discrete time, this is modeled through difference equations, which are a specific type of recurrence relation. For instance, recall that the funds in an account with discretely compounded interest rate r will increase by r times the previous balance. Thus, a discretely compounded interest system is described by the first order difference equation shown in (12.1).

y (n) = (1 + r) y (n− 1) (12.1)

Given a sufficiently descriptive set of initial conditions or boundary conditions, if there is a solution to the difference equation, that solution is unique and describes the behavior of the system. Of course, the results are only accurate to the degree that the model mirrors reality.

12.2.2 Linear Constant Coefficient Difference Equations

An important subclass of difference equations is the set of linear constant coefficient difference equations. These equations are of the form

Cy (n) = f (n) (12.2)

where C is a difference operator of the form

C = c_N D^N + c_{N−1} D^{N−1} + ... + c₁D + c₀    (12.3)

in which D is the first difference operator

D (y (n)) = y (n)− y (n− 1) . (12.4)

Note that operators of this type satisfy the linearity conditions, and c₀, ..., c_N are real constants.

However, (12.2) can easily be written as a linear constant coefficient recurrence equation without difference operators. Conversely, linear constant coefficient recurrence equations can also be written in the form of a difference equation, so the two types of equations are different representations of the same relationship. Although we will still call them linear constant coefficient difference equations in this course, we typically will not write them using difference operators. Instead, we will write them in the simpler recurrence relation form

Σ_{k=0}^{N} a_k y(n − k) = Σ_{k=0}^{M} b_k x(n − k)    (12.5)

6This content is available online at <http://legacy.cnx.org/content/m12325/1.5/>.


where x is the input to the system and y is the output. This can be rearranged to find y(n) as

y(n) = (1/a₀) ( −Σ_{k=1}^{N} a_k y(n − k) + Σ_{k=0}^{M} b_k x(n − k) )    (12.6)

The forms provided by (12.5) and (12.6) will be used in the remainder of this course.

A similar concept for the continuous time setting, differential equations, is discussed in the chapter on time domain analysis of continuous time systems. There are many parallels between the discussion of linear constant coefficient ordinary differential equations and linear constant coefficient difference equations.
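For simulation, the rearranged form (12.6) can be evaluated sample by sample. A minimal sketch (our own, assuming zero initial conditions and a causal input sequence):

def lccde(a, b, x):
    # y[n] = (1/a[0]) * ( -sum_{k=1..N} a[k] y[n-k] + sum_{k=0..M} b[k] x[n-k] )
    y = [0.0] * len(x)
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        y[n] = acc / a[0]
    return y

# Compound interest, equation (12.1), driven here by a single hypothetical deposit of 100:
# lccde(a=[1, -(1 + 0.05)], b=[1], x=[100] + [0] * 11)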

12.2.3 Applications of Difference Equations

Difference equations can be used to describe many useful digital filters as described in the chapter discussing the z-transform. An additional illustrative example is provided here.

Example 12.5
Recall that the Fibonacci sequence describes a (very unrealistic) model of what happens when a pair of rabbits get left alone in a black box... The assumptions are that a pair of rabbits never die and produce a pair of offspring every month starting on their second month of life. This system is defined by the recursion relation for the number of rabbit pairs y(n) at month n

y (n) = y (n− 1) + y (n− 2) (12.7)

with the initial conditions y(0) = 0 and y(1) = 1. The result is a very fast growth in the sequence. This is why we do not open black boxes.

12.2.4 Linear Constant Coefficient Difference Equations Summary

Difference equations are an important mathematical tool for modeling discrete time systems. An important subclass of these is the class of linear constant coefficient difference equations. Linear constant coefficient difference equations are often particularly easy to solve, as will be described in the module on solutions to linear constant coefficient difference equations, and are useful in describing a wide range of situations that arise in electrical engineering and in other fields.

12.3 Solving Linear Constant Coefficient Difference Equations7

12.3.1 Introduction

The approach to solving linear constant coefficient difference equations is to find the general form of all possible solutions to the equation and then apply a number of conditions to find the appropriate solution. The two main types of problems are initial value problems, which involve constraints on the solution at several consecutive points, and boundary value problems, which involve constraints on the solution at nonconsecutive points.

The number of initial conditions needed for an Nth order difference equation, which is the order of the highest order difference or the largest delay parameter of the output in the equation, is N, and a unique solution is always guaranteed if these are supplied. Boundary value problems can be slightly more complicated and will not necessarily have a unique solution or even a solution at all for a given set of conditions. Thus, this section will focus exclusively on initial value problems.

7This content is available online at <http://legacy.cnx.org/content/m12326/1.6/>.


12.3.2 Solving Linear Constant Coefficient Difference Equations

Consider some linear constant coefficient difference equation given by Ay(n) = f(n), in which A is a difference operator of the form

A = a_N D^N + a_{N−1} D^{N−1} + ... + a₁D + a₀    (12.8)

where D is the first difference operator

D (y (n)) = y (n)− y (n− 1) . (12.9)

Let y_h(n) and y_p(n) be two functions such that Ay_h(n) = 0 and Ay_p(n) = f(n). By the linearity of A, note that A(y_h(n) + y_p(n)) = 0 + f(n) = f(n). Thus, the form of the general solution y_g(n) to any linear constant coefficient difference equation is the sum of a homogeneous solution y_h(n) to the equation Ay(n) = 0 and a particular solution y_p(n) that is specific to the forcing function f(n).

We wish to determine the forms of the homogeneous and nonhomogeneous solutions in full generality in order to avoid incorrectly restricting the form of the solution before applying any conditions. Otherwise, a valid set of initial or boundary conditions might appear to have no corresponding solution trajectory. The following sections discuss how to accomplish this for linear constant coefficient difference equations.

12.3.2.1 Finding the Homogeneous Solution

In order to find the homogeneous solution to a difference equation described by the recurrence relation Σ_{k=0}^{N} a_k y(n − k) = f(n), consider the difference equation Σ_{k=0}^{N} a_k y(n − k) = 0. We know that the solutions have the form cλ^n for some complex constants c, λ. Since Σ_{k=0}^{N} a_k cλ^{n−k} = 0 for a solution, it follows that

cλ^{n−N} Σ_{k=0}^{N} a_k λ^{N−k} = 0    (12.10)

so it also follows that

a₀λ^N + ... + a_N = 0.    (12.11)

Therefore, the solution exponentials are the roots of the above polynomial, called the characteristic polynomial.

For equations of order two or more, there will be several roots. If all of the roots are distinct, then the general form of the homogeneous solution is simply

y_h(n) = c₁λ₁^n + c₂λ₂^n + ... + c_Nλ_N^n.    (12.12)

If a root has multiplicity that is greater than one, the repeated solutions must be multiplied by each power of n from 0 to one less than the root multiplicity (in order to ensure linearly independent solutions). For instance, if λ₁ had a multiplicity of 2 and λ₂ had multiplicity 3, the homogeneous solution would be

y_h(n) = c₁λ₁^n + c₂nλ₁^n + c₃λ₂^n + c₄nλ₂^n + c₅n²λ₂^n.    (12.13)

Example 12.6
Recall that the Fibonacci sequence describes a (very unrealistic) model of what happens when a pair of rabbits get left alone in a black box... The assumptions are that a pair of rabbits never die and produce a pair of offspring every month starting on their second month of life. This system is defined by the recursion relation for the number of rabbit pairs y(n) at month n

y (n)− y (n− 1)− y (n− 2) = 0 (12.14)


with the initial conditions y(0) = 0 and y(1) = 1.

Note that the forcing function is zero, so only the homogeneous solution is needed. It is easy to see that the characteristic polynomial is λ² − λ − 1 = 0, so there are two roots with multiplicity one. These are λ₁ = (1 + √5)/2 and λ₂ = (1 − √5)/2. Thus, the solution is of the form

y(n) = c₁((1 + √5)/2)^n + c₂((1 − √5)/2)^n.    (12.15)

Using the initial conditions, we determine that

c₁ = √5/5    (12.16)

and

c₂ = −√5/5.    (12.17)

Hence, the Fibonacci sequence is given by

y(n) = (√5/5)((1 + √5)/2)^n − (√5/5)((1 − √5)/2)^n.    (12.18)
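The closed form (12.18) can be checked numerically against the recursion itself; a short sketch (our own):

from math import sqrt

def fib_recurrence(n):
    y = [0, 1]                                  # y(0) = 0, y(1) = 1
    for k in range(2, n + 1):
        y.append(y[k - 1] + y[k - 2])           # y(n) = y(n-1) + y(n-2)
    return y[n]

def fib_closed_form(n):
    lam1 = (1 + sqrt(5)) / 2
    lam2 = (1 - sqrt(5)) / 2
    return (sqrt(5) / 5) * (lam1 ** n - lam2 ** n)

print(all(abs(fib_closed_form(n) - fib_recurrence(n)) < 1e-6 for n in range(20)))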

12.3.2.2 Finding the Particular Solution

Finding the particular solution is a slightly more complicated task than finding the homogeneous solution. It can be found through convolution of the input with the unit impulse response once the unit impulse response is known. Finding the particular solution of a difference equation is discussed further in the chapter concerning the z-transform, which greatly simplifies the procedure for solving linear constant coefficient difference equations using frequency domain tools.

Example 12.7
Consider the following difference equation describing a system with feedback

y (n)− ay (n− 1) = x (n) . (12.19)

In order to find the homogeneous solution, consider the difference equation

y (n)− ay (n− 1) = 0. (12.20)

It is easy to see that the characteristic polynomial is λ − a = 0, so λ = a is the only root. Thus the homogeneous solution is of the form

y_h(n) = c₁a^n.    (12.21)

In order to find the particular solution, consider the output for the x(n) = δ(n) unit impulse case

y (n)− ay (n− 1) = δ (n) . (12.22)

By inspection, it is clear that the impulse response is a^n u(n). Hence, the particular solution for a given x(n) is

y_p(n) = x(n) ∗ (a^n u(n)).    (12.23)


Therefore, the general solution is

y_g(n) = y_h(n) + y_p(n) = c₁a^n + x(n) ∗ (a^n u(n)).    (12.24)

Initial conditions and a specific input can further tailor this solution to a specific situation.
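This general solution can be sanity-checked by simulating the difference equation directly and comparing the output with the convolution of the input with a^n u(n); a sketch (our own, taking zero initial conditions so that c₁ = 0):

import numpy as np

a = 0.5
x = np.random.randn(32)                       # an arbitrary causal input

# simulate y[n] - a y[n-1] = x[n] with y[-1] = 0
y_sim = np.zeros_like(x)
for n in range(len(x)):
    y_sim[n] = x[n] + (a * y_sim[n - 1] if n > 0 else 0.0)

# particular solution: x[n] convolved with the impulse response a^n u(n)
h = a ** np.arange(len(x))
y_conv = np.convolve(x, h)[:len(x)]

print(np.allclose(y_sim, y_conv))             # True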

12.3.3 Solving Difference Equations Summary

Linear constant coefficient difference equations are useful for modeling a wide variety of discrete time systems. The approach to solving them is to find the general form of all possible solutions to the equation and then apply a number of conditions to find the appropriate solution. This is done by finding the homogeneous solution to the difference equation that does not depend on the forcing function input and a particular solution to the difference equation that does depend on the forcing function input.


Solutions to Exercises in Chapter 12

Solution to Exercise 12.1.1 (p. 186)
By playing around with the vectors and doing a little trial and error, we will discover the following relationship:

x₁ − x₂ + 2x₃ = 0

Thus we have found a linear combination of these three vectors that equals zero without setting the coefficients equal to zero. Therefore, these vectors are not linearly independent!

Solution to Exercise 12.1.2 (p. 189)

x = (3/2) h₁ − (1/2) h₂


Appendix: Viewing Interactive Content


13.1 Viewing Embedded LabVIEW Content in Connexions1

13.1.1 Introduction

In order to view LabVIEW content embedded in Connexions modules, you must install and enable the LabVIEW 8.0 and 8.5 Local VI Execution Browser Plug-in for Windows. Step-by-step installation instructions are given below. Once installation is complete, the placeholder box at the bottom of this module should display a demo LabVIEW virtual instrument (VI).

13.1.2 Installing the LabVIEW Run-time Engine on Microsoft Windows

1. Download and install the LabVIEW 8.0 Runtime Engine found at: http://zone.ni.com/devzone/cda/tut/p/id/43462.
2. Download and install the LabVIEW 8.5 Runtime Engine found at: http://zone.ni.com/devzone/cda/tut/p/id/66333.
3. Download the LVBrowserPlugin.ini file from http://zone.ni.com/devzone/cda/tut/p/id/82884, and place it in the My Documents\LabVIEW Data folder. (You may have to create this folder if it doesn't already exist.)
4. Restart your computer to complete the installation.
5. The placeholder box at the bottom of this module should now display a demo LabVIEW virtual instrument (VI).

13.1.3 Example Virtual Instrument

This media object is a LabVIEW VI. Please view or download it at<DFD_Utility.llb>

Figure 13.1: Digital filter design LabVIEW virtual instrument from http://cnx.org/content/m13115/latest/5.

13.2 Getting Started With Mathematica6

13.2.1 What is Mathematica?

Mathematica is a computational software program used in technical fields. It is developed by Wolfram Research. Mathematica makes it easy to visualize data and create GUIs in only a few lines of code.

13.2.2 How can I run, create, and find Mathematica files?

Run
The free CDF Player7 is available for running non-commercial Mathematica programs. You have the option of downloading source files and running them on your computer, but the CDF Player comes with a plug-in for viewing dynamic content online in your web browser!

1This content is available online at <http://legacy.cnx.org/content/m34460/1.5/>.2http://zone.ni.com/devzone/cda/tut/p/id/43463http://zone.ni.com/devzone/cda/tut/p/id/66334http://zone.ni.com/devzone/cda/tut/p/id/82885http://cnx.org/content/m13115/latest/6This content is available online at <http://legacy.cnx.org/content/m36728/1.13/>.7http://www.wolfram.com/cdf-player/


Create
Mathematica 8 is available for purchase8 from Wolfram. Many universities (including Rice) and companies already have a Mathematica license. Wolfram has a free, save-disabled 15-day trial version9 of Mathematica.
Find
Wolfram has thousands of Mathematica programs (including source code) available at the Wolfram Demonstrations Project10. Anyone can create and submit a Demonstration. Also, many other websites (including Connexions) have a lot of Mathematica content.

13.2.3 What do I need to run interactive content?

Mathematica 8 is supported on Linux, Microsoft Windows, Mac OS X, and Solaris. Mathematica's free CDF Player is available for Windows and Mac OS X, and is in development for Linux; the CDF Player plugin is available for IE, Firefox, Chrome, Safari, and Opera.

13.2.4 How can I upload a Mathematica file to a Connexions module?

Go to the Files tab at the top of the module and upload your .cdf file, along with an (optional) screenshot of the file in use. In order to generate a clean bracket-less screenshot, you should do the following:

• Open your .cdf in Mathematica and left click on the bracket surrounding the manipulate command.
• Click on Cell->Convert To->Bitmap.
• Then click on File->Save Selection As, and save the image file in your desired image format.

Embed the files into the module in any way you like. Some tags you may find helpful include image, figure, download, and link (if linking to an .cdf file on another website). The best method is to create an interactive figure, and include a fallback png image of the cdf file should the CDF image not render properly. See the interactive demo/image below.

Convolution Demo

<figure id="demoonline">
  <media id="CNXdemoonline" alt="timeshiftDemo">
    <image mime-type="image/png" src="Convolutiondisplay-4.cdf" thumbnail="Convolution4.0Display.png" width="600"/>
    <object width="500" height="500" src="Convolutiondisplay-4.cdf" mime-type="application/vnd.wolfram.cdf" for="webview2.0"/>
    <image mime-type="application/postscript" for="pdf" src="Convolution4.0Display.png" width="400"/>
  </media>
  <caption>Interact (when online) with a Mathematica CDF demonstrating Convolution. To Download, right-click and save target as .cdf.</caption>
</figure>

[8] http://www.wolfram.com/products/mathematica/purchase.html
[9] http://www.wolfram.com/products/mathematica/experience/request.cgi
[10] http://demonstrations.wolfram.com/index.html


Figure 13.2: Interact (when online) with a Mathematica CDF demonstrating Convolution. To Download, right-click and save target as .cdf.

Alternatively, this is how it looks when you use a thumbnail link to a live online demo.

Figure 13.3: Click on the above thumbnail image (when online) to view an interactive Mathematica Player demonstrating Convolution.

13.2.5 How can I learn Mathematica?

Open Mathematica and go to the Getting Started section of the "Welcome to Mathematica" screen, or check out Help: Documentation Center.

The Mathematica Learning Center [11] has lots of screencasts, how-tos, and tutorials.

When troubleshooting, the error messages are often unhelpful, so it's best to evaluate often so that problems can be easily located. Search engines like Google are useful when you're looking for an explanation of specific error messages.

[11] http://www.wolfram.com/learningcenter/


Glossary

B Basis

A basis for $\mathbb{C}^n$ is a set of vectors that: (1) spans $\mathbb{C}^n$ and (2) is linearly independent.
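As a concrete illustration (not part of the original glossary entry), the standard basis vectors of $\mathbb{C}^2$ satisfy both conditions: every vector in $\mathbb{C}^2$ is a linear combination of them, and neither is a multiple of the other.

$$e_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \qquad e_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \qquad \begin{bmatrix} a \\ b \end{bmatrix} = a\, e_1 + b\, e_2 \quad \text{for any } a, b \in \mathbb{C}.$$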

L Linearly Independent

For a given set of vectors $x_1, x_2, \ldots, x_n$, they are linearly independent if

$$c_1 x_1 + c_2 x_2 + \cdots + c_n x_n = 0$$

only when $c_1 = c_2 = \cdots = c_n = 0$.

Example: We are given the following two vectors:

$$x_1 = \begin{bmatrix} 3 \\ 2 \end{bmatrix}, \qquad x_2 = \begin{bmatrix} -6 \\ -4 \end{bmatrix}$$

These are not linearly independent, as shown by the following statement, which, by inspection, can be seen not to satisfy the definition of linear independence stated above:

$$x_2 = -2 x_1 \;\Rightarrow\; 2 x_1 + x_2 = 0$$

Another way to check a set of vectors for independence is to graph them. Looking at these two vectors geometrically (as in [12]), one can again see that they are not linearly independent.
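A quick numerical check (added here as an illustrative sketch, not part of the original module) is to stack the vectors as columns of a matrix and compute its rank with NumPy: the columns are linearly independent exactly when the rank equals the number of columns.

import numpy as np

# Columns are the example vectors x1 = [3, 2]^T and x2 = [-6, -4]^T.
X = np.array([[3.0, -6.0],
              [2.0, -4.0]])

rank = np.linalg.matrix_rank(X)
print("rank =", rank)                                # 1, since x2 = -2*x1
print("linearly independent?", rank == X.shape[1])   # False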

S Span

The span [13] of a set of vectors $x_1, x_2, \ldots, x_k$ is the set of vectors that can be written as a linear combination of $x_1, x_2, \ldots, x_k$:

$$\mathrm{span}\,(x_1, \ldots, x_k) = \left\{ \alpha_1 x_1 + \alpha_2 x_2 + \cdots + \alpha_k x_k \;\middle|\; \alpha_i \in \mathbb{C} \right\}$$

Example: Given the vector

$$x_1 = \begin{bmatrix} 3 \\ 2 \end{bmatrix}$$

the span of $x_1$ is a line.

[12] http://legacy.cnx.org/content/m10734/latest/
[13] http://legacy.cnx.org/content/m10734/latest/


Example: Given the vectors

$$x_1 = \begin{bmatrix} 3 \\ 2 \end{bmatrix}, \qquad x_2 = \begin{bmatrix} 1 \\ 2 \end{bmatrix}$$

the span of these vectors is $\mathbb{C}^2$.
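The same rank test (again an illustrative sketch using NumPy, not from the original text) confirms this example: the 2x2 matrix with these vectors as columns has rank 2, so their linear combinations reach every vector in $\mathbb{C}^2$.

import numpy as np

# Columns are the example vectors x1 = [3, 2]^T and x2 = [1, 2]^T.
X = np.array([[3.0, 1.0],
              [2.0, 2.0]])

print("rank =", np.linalg.matrix_rank(X))   # 2, so the span is all of C^2
print("det  =", np.linalg.det(X))           # 4.0 (nonzero), an equivalent check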


Index of Keywords and Terms

Keywords are listed by the section with that keyword (page numbers are in parentheses). Keywords do not necessarily appear in the text of the page. They are merely associated with that section. Ex. apples, 1.1 (1). Terms are referenced by the page they appear on. Ex. apples, 1.

A  acausal, 10.8(161); ADC, 10.8(161); algebra, 1.2(5); alias, 10.5(150), 10.6(153); aliasing, 10.5(150), 10.6(153), 10.8(161); analog, 2.1(17), 18, 10.8(161); analysis, 5.4(71); Anti-Aliasing, 10.6(153), 10.8(161); anti-imaging, 10.8(161); anticausal, 2.1(17), 20, 10.8(161); aperiodic, 2.1(17), 68, 6.1(77), 77, 9.1(121), 123

B  bandlimited, 10.8(161); bases, 12.1(185); basis, 12.1(185), 188, 188, 188; BIBO, 4.3(57), 8.5(118); bits, 10.8(161); block diagram, 3.1(39); bounded, 4.3(57); bounded input, 4.3(57), 8.5(118); bounded output, 4.3(57), 8.5(118); butterfly, 171

C  cardinal, 10.3(142); cascade, 3.1(39), 3.3(45); Cauchy-Schwarz inequality, 11.4(174); causal, 2.1(17), 20, 3.2(41), 10.8(161); common, 9.5(131); communication, 11.4(174); complex, 2.2(23), 7.1(91); complex exponential, 2.6(34), 5.3(70), 92, 7.5(100), 9.2(124); complex exponentials, 126; complex numbers, 1.1(1), 1.2(5), 1.3(11); complex plane, 2.6(34), 7.5(100); complex-valued, 2.2(23), 7.1(91); complexity, 169, 172; composite, 174; computational advantage, 170; considerations, 10.8(161); Constant Coefficient, 12.2(190), 12.3(191); continuous, 17, 6.5(87), 10.8(161); continuous frequency, 6.2(78), 9.3(125); continuous time, 2.1(17), 2.2(23), 4.1(53), 4.2(54), 56, 4.3(57), 4.4(58), 5.1(67), 5.3(70), 5.4(71), 6.1(77), 6.2(78), 6.3(81), 6.4(85), 10.1(137); Continuous Time Convolution Integral, 56; Continuous Time Fourier Series, 71; Continuous Time Fourier Transform, 77, 78; continuous-time, 4.1(53); Continuous-Time Fourier Transform, 79; converge, 4.3(57); converter, 10.8(161); Convolution, 56, 4.4(58), 4.5(62), 6.3(81), 6.5(87), 108, 8.3(109), 8.4(115), 11.4(174); convolution integral, 87; Cooley-Tukey, 11.2(169); CTFS, 5.4(71); CTFT, 6.2(78), 6.5(87)

D  DAC, 10.8(161); de, 4.1(53); decimation, 7.3(95); decompose, 7.1(91); delta function, 8.2(106); detection, 11.4(174); DFT, 11.1(167), 11.3(170); difference equation, 8.1(105); Difference Equations, 12.2(190), 12.3(191); differential, 4.1(53); differential equation, 56; differential equations, 4.1(53); digital, 2.1(17), 18, 10.8(161); digital signal processing, 8.1(105); dirac delta function, 2.2(23), 2.5(32), 32; Discrete Time Fourier Transform, 123; discrete, 17, 10.8(161); discrete convolution, 9.6(133); Discrete Fourier Transform, 11.1(167), 167; discrete time, 2.1(17), 4.3(57), 5.1(67), 8.2(106), 108, 8.3(109), 8.4(115), 8.5(118), 9.1(121), 9.2(124), 9.3(125), 9.5(131), 9.6(133), 10.1(137); Discrete Time Convolution Sum, 108; Discrete Time Fourier Transform, 125; discrete-time, 7.1(91), 9.4(128); Discrete-Time Fourier Transform, 126; Discrete-Time Fourier Transform properties, 9.4(128); discrete-time periodic signal, 121; discrete-time signals, 7.3(95), 10.7(155); discrete-time systems, 8.1(105); downsampling, 10.7(155); DSP, 8.1(105), 10.2(139); DT, 8.3(109); DTFT, 9.3(125), 9.5(131), 9.6(133); duality, 6.3(81); dynamic content, 13.1(198)

E  eigenfunction, 5.3(70), 5.4(71), 9.2(124); eigenvalue, 5.3(70), 9.2(124); eigenvector, 5.3(70), 9.2(124); ELEC 301, 12.2(190); electrical engineering, 1.1(1), 1.2(5), 1.3(11); embedded, 13.1(198); Energy, 2.4(29), 7.2(93); engineering, 1.1(1), 1.2(5), 1.3(11); even signal, 2.1(17), 21; examples, 9.5(131); expansion, 7.3(95); exponential, 2.2(23), 7.1(91)

F  fast Fourier transform, 11.2(169), 11.3(170); feedback, 3.1(39); FFT, 11.2(169), 11.3(170); filter, 10.3(142), 10.6(153), 10.8(161); finite-length signal, 19; form, 170; fourier, 11.3(170); Fourier methods, 56, 108; fourier series, 5.1(67), 5.4(71); fourier transform, 5.1(67), 6.2(78), 6.3(81), 6.4(85), 6.5(87), 9.3(125), 9.4(128), 9.6(133), 11.1(167), 11.3(170); fourier transforms, 9.5(131); frequency, 10.5(150), 10.6(153), 10.8(161); Frequency Domain, 9.4(128); FT, 6.5(87); function spaces, 189; functional, 39; fundamental period, 18, 68, 77, 121

G  geometry, 1.1(1)

H  hold, 10.8(161); homogeneous, 12.3(191)

I  ideal, 10.3(142), 10.4(147); imaging, 10.8(161); imperfect, 10.8(161); implementability, 10.8(161); impulse, 2.2(23), 2.5(32), 4.2(54), 7.4(98), 8.2(106), 108; impulse response, 4.2(54), 8.2(106), 8.3(109), 12.3(191); impulse-like input signal, 56; independence, 12.1(185); infinite-length signal, 19; initial value, 12.3(191); input, 3.1(39); integrable, 4.3(57); interpolation, 7.3(95), 10.3(142); invariant, 10.8(161)

L  LabVIEW, 13.1(198); Laplace transform, 56, 5.1(67); linear, 3.2(41), 3.3(45), 10.8(161), 12.2(190), 12.3(191); linear algebra, 12.1(185); linear independence, 12.1(185); linear system, 4.1(53); linear time invariant, 4.4(58), 5.3(70), 9.2(124); linearity, 6.3(81); linearly independent, 185, 185; lowpass, 10.3(142), 10.4(147), 10.6(153); LTI, 4.4(58), 5.3(70), 9.2(124)

M  Mathematica, 13.2(198); modulation, 6.3(81)

N  noncausal, 2.1(17), 20, 3.2(41); nonhomogeneous, 12.3(191); nonlinear, 3.2(41); not, 173; Nyquist, 10.1(137), 10.2(139), 10.6(153), 10.8(161); Nyquist frequency, 10.2(139); Nyquist theorem, 10.2(139)

O  odd signal, 2.1(17), 21; output, 3.1(39)

P  parallel, 3.1(39), 3.3(45); pattern recognition, 11.4(174); perfect, 10.4(147); period, 18, 5.2(68); periodic, 2.1(17), 5.2(68), 68, 69, 77, 121; periodic function, 5.2(68), 68, 77, 121; periodic functions, 68, 122; periodicity, 5.2(68); Player, 13.2(198); plug-in, 13.1(198); pole, 8.5(118); Power, 2.4(29), 7.2(93); practical, 10.8(161); precision, 10.8(161); processing, 10.8(161); properties, 8.4(115); property, 4.5(62); proportional, 169, 172; pulse, 2.2(23)

Q  quantization, 10.8(161)

R  radar, 11.4(174); range, 10.8(161); real-valued, 7.1(91); real-valued signal conjugate symmetry, 6.3(81); reconstruct, 10.4(147); reconstruction, 10.1(137), 10.2(139), 10.3(142), 10.4(147), 10.5(150), 10.8(161)

S  sample, 10.2(139), 10.5(150); sampling, 10.1(137), 10.2(139), 10.4(147), 10.5(150), 10.6(153), 10.7(155), 10.8(161), 167; sampling rate, 10.7(155); sampling theorem, 10.8(161); Sequence-Domain, 9.4(128); sequences, 7.1(91); Shannon, 10.1(137), 10.2(139), 10.3(142), 10.4(147), 10.8(161); shift-invariant systems, 8.1(105); sifting property, 2.2(23); signal, 3.1(39), 5.2(68); signals, 2.2(23), 2.3(26), 2.5(32), 2.6(34), 3.2(41), 4.3(57), 4.4(58), 4.5(62), 5.1(67), 6.1(77), 6.4(85), 7.1(91), 7.4(98), 7.5(100), 8.3(109), 8.5(118), 9.1(121), 10.8(161); signals and systems, 2.1(17), 8.3(109); sinc, 10.3(142), 10.4(147), 10.8(161); sine, 7.1(91); sinusoid, 7.1(91); solution, 12.3(191); span, 12.1(185), 187; spectrum, 10.1(137); spline, 10.3(142); stability, 4.3(57), 8.5(118); standard basis, 188; summable, 8.5(118); superposition, 3.3(45), 8.1(105); symbolic-valued signals, 7.1(91); synthesis, 5.4(71); system theory, 3.1(39); systems, 4.3(57), 4.4(58), 4.5(62), 5.1(67), 6.4(85), 7.1(91), 8.5(118)

T  t-periodic, 5.2(68); template matching, 178; theorem, 10.2(139); time, 10.8(161); time differentiation, 6.3(81); time domain, 8.1(105), 8.4(115); time invariant, 3.2(41); time reversal, 2.3(26), 7.3(95); time scaling, 2.3(26), 6.3(81), 7.3(95); time shifting, 2.3(26), 6.3(81), 7.3(95); time varying, 3.2(41); time-invariant, 3.3(45); transfer function, 67, 5.4(71); triangle, 2.2(23); triangle function, 25

U  unit sample, 7.1(91), 92, 7.4(98); unit sample function, 98; unit step, 2.2(23); unit-pulse function, 25; unit-step function, 24, 92; upsampling, 10.7(155)

V  vector space, 1.3(11); VI, 13.1(198); virtual instrument, 13.1(198)

W  Whittaker, 10.3(142), 10.4(147), 10.8(161); window, 10.8(161); Wolfram, 13.2(198)

Z  z transform, 5.1(67), 8.5(118); zero, 8.5(118)


Attributions

Collection: Signals and SystemsEdited by: Marco F. DuarteURL: http://legacy.cnx.org/content/col11557/1.6/License: http://creativecommons.org/licenses/by/3.0/

Module: "Complex Numbers: Geometry of Complex Numbers"Used here as: "Geometry of Complex Numbers"By: Louis ScharfURL: http://legacy.cnx.org/content/m21411/1.6/Pages: 1-5Copyright: Louis ScharfLicense: http://creativecommons.org/licenses/by/3.0/

Module: "Complex Numbers: Algebra of Complex Numbers"Used here as: "Algebra of Complex Numbers"By: Louis ScharfURL: http://legacy.cnx.org/content/m21408/1.6/Pages: 5-11Copyright: Louis ScharfLicense: http://creativecommons.org/licenses/by/3.0/

Module: "Complex Numbers: Representing Complex Numbers in a Vector Space"Used here as: "Representing Complex Numbers in a Vector Space"By: Louis ScharfURL: http://legacy.cnx.org/content/m21414/1.6/Pages: 11-15Copyright: Louis ScharfLicense: http://creativecommons.org/licenses/by/3.0/

Module: "Signal Classications and Properties"By: Marco F. DuarteURL: http://legacy.cnx.org/content/m47271/1.2/Pages: 17-23Copyright: Marco F. DuarteLicense: http://creativecommons.org/licenses/by/3.0/Based on: Signal Classications and PropertiesBy: Melissa Selik, Richard Baraniuk, Michael HaagURL: http://legacy.cnx.org/content/m10057/2.21/

Module: "Common Continuous Time Signals"By: Marco F. DuarteURL: http://legacy.cnx.org/content/m47606/1.3/Pages: 23-26Copyright: Marco F. DuarteLicense: http://creativecommons.org/licenses/by/4.0/Based on: Common Continuous Time SignalsBy: Melissa Selik, Richard BaraniukURL: http://legacy.cnx.org/content/m10058/2.16/


Module: "Signal Operations"By: Richard BaraniukURL: http://legacy.cnx.org/content/m10125/2.18/Pages: 26-29Copyright: Richard Baraniuk, Adam BlairLicense: http://creativecommons.org/licenses/by/3.0/

Module: "Energy and Power of Continuous-Time Signals"By: Marco F. DuarteURL: http://legacy.cnx.org/content/m47273/1.4/Pages: 29-31Copyright: Marco F. DuarteLicense: http://creativecommons.org/licenses/by/3.0/Based on: Energy and PowerBy: Anders Gjendemsjø, Melissa Selik, Richard BaraniukURL: http://legacy.cnx.org/content/m11526/1.20/

Module: "Continuous Time Impulse Function"By: Melissa Selik, Richard BaraniukURL: http://legacy.cnx.org/content/m10059/2.27/Pages: 32-34Copyright: Melissa Selik, Richard Baraniuk, Adam BlairLicense: http://creativecommons.org/licenses/by/3.0/

Module: "Continuous Time Complex Exponential"By: Richard Baraniuk, Stephen KruzickURL: http://legacy.cnx.org/content/m10060/2.25/Pages: 34-37Copyright: Richard Baraniuk, Stephen Kruzick, Adam BlairLicense: http://creativecommons.org/licenses/by/3.0/

Module: "Introduction to Systems"By: Don JohnsonURL: http://legacy.cnx.org/content/m0005/2.19/Pages: 39-41Copyright: Don JohnsonLicense: http://creativecommons.org/licenses/by/1.0

Module: "System Classications and Properties"By: Melissa Selik, Richard Baraniuk, Stephen KruzickURL: http://legacy.cnx.org/content/m10084/2.24/Pages: 41-45Copyright: Melissa Selik, Richard Baraniuk, Stephen KruzickLicense: http://creativecommons.org/licenses/by/3.0/

Module: "Linear Time Invariant Systems"By: Thanos Antoulas, JP SlavinskyURL: http://legacy.cnx.org/content/m2102/2.26/Pages: 45-51Copyright: Thanos Antoulas, JP SlavinskyLicense: http://creativecommons.org/licenses/by/3.0/


Module: "Continuous Time Systems"By: Marco F. DuarteURL: http://legacy.cnx.org/content/m47437/1.1/Pages: 53-54Copyright: Marco F. DuarteLicense: http://creativecommons.org/licenses/by/3.0/Based on: Continuous Time SystemsBy: Michael Haag, Richard Baraniuk, Stephen KruzickURL: http://legacy.cnx.org/content/m10855/2.8/

Module: "Continuous Time Impulse Response"By: Dante SoaresURL: http://legacy.cnx.org/content/m34629/1.2/Pages: 54-57Copyright: Dante SoaresLicense: http://creativecommons.org/licenses/by/3.0/

Module: "BIBO Stability of Continuous Time Systems"By: Marco F. DuarteURL: http://legacy.cnx.org/content/m47280/1.1/Pages: 57-58Copyright: Marco F. DuarteLicense: http://creativecommons.org/licenses/by/3.0/Based on: BIBO Stability of Continuous Time SystemsBy: Richard Baraniuk, Stephen KruzickURL: http://legacy.cnx.org/content/m10113/2.11/

Module: "Continuous-Time Convolution"By: Marco F. DuarteURL: http://legacy.cnx.org/content/m47482/1.2/Pages: 58-62Copyright: Marco F. DuarteLicense: http://creativecommons.org/licenses/by/4.0/Based on: Continuous Time ConvolutionBy: Melissa Selik, Richard Baraniuk, Stephen Kruzick, Dan CalderonURL: http://legacy.cnx.org/content/m10085/2.34/

Module: "Properties of Continuous Time Convolution"By: Melissa Selik, Richard Baraniuk, Stephen KruzickURL: http://legacy.cnx.org/content/m10088/2.18/Pages: 62-65Copyright: Melissa Selik, Richard Baraniuk, Stephen KruzickLicense: http://creativecommons.org/licenses/by/3.0/

Module: "Introduction to Fourier Analysis"By: Marco F. DuarteURL: http://legacy.cnx.org/content/m47439/1.2/Page: 67Copyright: Marco F. DuarteLicense: http://creativecommons.org/licenses/by/3.0/Based on: Introduction to Fourier AnalysisBy: Richard BaraniukURL: http://legacy.cnx.org/content/m10096/2.12/


Module: "Continuous Time Periodic Signals"By: Marco F. DuarteURL: http://legacy.cnx.org/content/m47350/1.1/Pages: 68-69Copyright: Marco F. DuarteLicense: http://creativecommons.org/licenses/by/3.0/Based on: Continuous Time Periodic SignalsBy: Michael Haag, Justin RombergURL: http://legacy.cnx.org/content/m10744/2.13/

Module: "Eigenfunctions of Continuous-Time LTI Systems"By: Marco F. DuarteURL: http://legacy.cnx.org/content/m47308/1.2/Pages: 70-71Copyright: Marco F. DuarteLicense: http://creativecommons.org/licenses/by/3.0/Based on: Eigenfunctions of Continuous Time LTI SystemsBy: Justin Romberg, Stephen KruzickURL: http://legacy.cnx.org/content/m34639/1.1/

Module: "Continuous Time Fourier Series (CTFS)"By: Marco F. DuarteURL: http://legacy.cnx.org/content/m47348/1.2/Pages: 71-75Copyright: Marco F. DuarteLicense: http://creativecommons.org/licenses/by/3.0/Based on: Continuous Time Fourier Series (CTFS)By: Stephen Kruzick, Dan CalderonURL: http://legacy.cnx.org/content/m34531/1.9/

Module: "Continuous Time Aperiodic Signals"By: Marco F. DuarteURL: http://legacy.cnx.org/content/m47356/1.1/Pages: 77-78Copyright: Marco F. DuarteLicense: http://creativecommons.org/licenses/by/3.0/Based on: Continuous Time Aperiodic SignalsBy: Stephen KruzickURL: http://legacy.cnx.org/content/m34848/1.5/

Module: "Continuous Time Fourier Transform (CTFT)"By: Marco F. DuarteURL: http://legacy.cnx.org/content/m47319/1.5/Pages: 78-81Copyright: Marco F. DuarteLicense: http://creativecommons.org/licenses/by/3.0/Based on: Continuous Time Fourier Transform (CTFT)By: Richard Baraniuk, Melissa SelikURL: http://legacy.cnx.org/content/m10098/2.16/


Module: "Properties of the CTFT"By: Marco F. DuarteURL: http://legacy.cnx.org/content/m47347/1.4/Pages: 81-84Copyright: Marco F. DuarteLicense: http://creativecommons.org/licenses/by/4.0/Based on: Properties of the CTFTBy: Melissa Selik, Richard BaraniukURL: http://legacy.cnx.org/content/m10100/2.15/

Module: "Common Fourier Transforms"By: Marco F. DuarteURL: http://legacy.cnx.org/content/m47344/1.5/Pages: 85-87Copyright: Marco F. DuarteLicense: http://creativecommons.org/licenses/by/3.0/Based on: Common Fourier TransformsBy: Melissa Selik, Richard BaraniukURL: http://legacy.cnx.org/content/m10099/2.11/

Module: "Continuous Time Convolution and the CTFT"By: Marco F. DuarteURL: http://legacy.cnx.org/content/m47346/1.1/Pages: 87-89Copyright: Marco F. DuarteLicense: http://creativecommons.org/licenses/by/3.0/Based on: Continuous Time Convolution and the CTFTBy: Stephen KruzickURL: http://legacy.cnx.org/content/m34849/1.5/

Module: "Common Discrete Time Signals"By: Marco F. DuarteURL: http://legacy.cnx.org/content/m47447/1.2/Pages: 91-93Copyright: Marco F. DuarteLicense: http://creativecommons.org/licenses/by/3.0/Based on: Common Discrete Time SignalsBy: Don Johnson, Stephen KruzickURL: http://legacy.cnx.org/content/m34575/1.2/

Module: "Energy and Power of Discrete-Time Signals"By: Marco F. DuarteURL: http://legacy.cnx.org/content/m47357/1.3/Pages: 93-94Copyright: Marco F. DuarteLicense: http://creativecommons.org/licenses/by/3.0/Based on: Energy and PowerBy: Anders Gjendemsjø, Melissa Selik, Richard BaraniukURL: http://legacy.cnx.org/content/m11526/1.20/


Module: "Discrete-Time Signal Operations"By: Marco F. DuarteURL: http://legacy.cnx.org/content/m47809/1.1/Pages: 95-98Copyright: Marco F. DuarteLicense: http://creativecommons.org/licenses/by/3.0/Based on: Signal OperationsBy: Richard BaraniukURL: http://legacy.cnx.org/content/m10125/2.17/

Module: "Discrete Time Impulse Function"By: Marco F. DuarteURL: http://legacy.cnx.org/content/m47448/1.1/Pages: 98-100Copyright: Marco F. DuarteLicense: http://creativecommons.org/licenses/by/3.0/Based on: Discrete Time Impulse FunctionBy: Dan CalderonURL: http://legacy.cnx.org/content/m34566/1.6/

Module: "Discrete Time Complex Exponential"By: Dan Calderon, Richard Baraniuk, Stephen Kruzick, Matthew MoravecURL: http://legacy.cnx.org/content/m34573/1.6/Pages: 100-104Copyright: Dan Calderon, Stephen KruzickLicense: http://creativecommons.org/licenses/by/3.0/Based on: The Complex ExponentialBy: Richard BaraniukURL: http://legacy.cnx.org/content/m10060/2.21/

Module: "Discrete Time Systems"By: Marco F. DuarteURL: http://legacy.cnx.org/content/m47454/1.3/Pages: 105-106Copyright: Marco F. DuarteLicense: http://creativecommons.org/licenses/by/4.0/Based on: Discrete Time SystemsBy: Don Johnson, Stephen KruzickURL: http://legacy.cnx.org/content/m34614/1.2/

Module: "Discrete Time Impulse Response"By: Marco F. DuarteURL: http://legacy.cnx.org/content/m47363/1.2/Pages: 106-109Copyright: Marco F. DuarteLicense: http://creativecommons.org/licenses/by/3.0/Based on: Discrete Time Impulse ResponseBy: Dante SoaresURL: http://legacy.cnx.org/content/m34626/1.1/


Module: "Discrete-Time Convolution"By: Marco F. DuarteURL: http://legacy.cnx.org/content/m47455/1.2/Pages: 109-115Copyright: Marco F. DuarteLicense: http://creativecommons.org/licenses/by/4.0/Based on: Discrete Time ConvolutionBy: Ricardo Radaelli-Sanchez, Richard Baraniuk, Stephen Kruzick, Catherine ElderURL: http://legacy.cnx.org/content/m10087/2.27/

Module: "Properties of Discrete Time Convolution"By: Marco F. DuarteURL: http://legacy.cnx.org/content/m47456/1.1/Pages: 115-117Copyright: Marco F. DuarteLicense: http://creativecommons.org/licenses/by/3.0/Based on: Properties of Discrete Time ConvolutionBy: Stephen KruzickURL: http://legacy.cnx.org/content/m34625/1.2/

Module: "BIBO Stability of Discrete Time Systems"By: Marco F. DuarteURL: http://legacy.cnx.org/content/m47362/1.3/Pages: 118-119Copyright: Marco F. DuarteLicense: http://creativecommons.org/licenses/by/3.0/Based on: BIBO Stability of Discrete Time SystemsBy: Richard Baraniuk, Stephen KruzickURL: http://legacy.cnx.org/content/m34515/1.2/

Module: "Discrete Time Aperiodic Signals"By: Marco F. Duarte, Natesh GaneshURL: http://legacy.cnx.org/content/m47369/1.3/Pages: 121-124Copyright: Marco F. DuarteLicense: http://creativecommons.org/licenses/by/3.0/Based on: Discrete Time Aperiodic SignalsBy: Stephen Kruzick, Dan CalderonURL: http://legacy.cnx.org/content/m34850/1.4/

Module: "Eigenfunctions of Discrete Time LTI Systems"By: Marco F. DuarteURL: http://legacy.cnx.org/content/m47459/1.1/Pages: 124-125Copyright: Marco F. DuarteLicense: http://creativecommons.org/licenses/by/3.0/Based on: Eigenfunctions of Discrete Time LTI SystemsBy: Justin Romberg, Stephen KruzickURL: http://legacy.cnx.org/content/m34640/1.1/


Module: "Discrete Time Fourier Transform (DTFT)"By: Marco F. Duarte, Natesh GaneshURL: http://legacy.cnx.org/content/m47370/1.2/Pages: 125-128Copyright: Marco F. DuarteLicense: http://creativecommons.org/licenses/by/3.0/Based on: Discrete Time Fourier Transform (DTFT)By: Richard BaraniukURL: http://legacy.cnx.org/content/m10108/2.18/

Module: "Properties of the DTFT"By: Marco F. Duarte, Natesh GaneshURL: http://legacy.cnx.org/content/m47374/1.9/Pages: 128-131Copyright: Marco F. DuarteLicense: http://creativecommons.org/licenses/by/4.0/Based on: Properties of the DTFTBy: Don JohnsonURL: http://legacy.cnx.org/content/m0506/2.7/

Module: "Common Discrete Time Fourier Transforms"By: Marco F. DuarteURL: http://legacy.cnx.org/content/m47373/1.3/Pages: 131-132Copyright: Marco F. DuarteLicense: http://creativecommons.org/licenses/by/3.0/Based on: Common Discrete Time Fourier TransformsBy: Stephen KruzickURL: http://legacy.cnx.org/content/m34771/1.3/

Module: "Discrete Time Convolution and the DTFT"By: Marco F. DuarteURL: http://legacy.cnx.org/content/m47375/1.2/Pages: 133-135Copyright: Marco F. DuarteLicense: http://creativecommons.org/licenses/by/3.0/Based on: Discrete Time Convolution and the DTFTBy: Stephen Kruzick, Dan CalderonURL: http://legacy.cnx.org/content/m34851/1.6/

Module: "Signal Sampling"By: Marco F. DuarteURL: http://legacy.cnx.org/content/m47377/1.2/Pages: 137-139Copyright: Marco F. DuarteLicense: http://creativecommons.org/licenses/by/3.0/Based on: Signal SamplingBy: Stephen Kruzick, Justin RombergURL: http://legacy.cnx.org/content/m10798/2.8/


Module: "Sampling Theorem"By: Marco F. DuarteURL: http://legacy.cnx.org/content/m47378/1.2/Pages: 139-142Copyright: Marco F. DuarteLicense: http://creativecommons.org/licenses/by/3.0/Based on: Sampling TheoremBy: Justin Romberg, Stephen KruzickURL: http://legacy.cnx.org/content/m10791/2.7/

Module: "Signal Reconstruction"By: Marco F. DuarteURL: http://legacy.cnx.org/content/m47463/1.1/Pages: 142-147Copyright: Marco F. DuarteLicense: http://creativecommons.org/licenses/by/3.0/Based on: Signal ReconstructionBy: Stephen Kruzick, Justin RombergURL: http://legacy.cnx.org/content/m10788/2.8/

Module: "Perfect Reconstruction"By: Marco F. DuarteURL: http://legacy.cnx.org/content/m47379/1.3/Pages: 147-149Copyright: Marco F. DuarteLicense: http://creativecommons.org/licenses/by/3.0/Based on: Perfect ReconstructionBy: Stephen Kruzick, Roy Ha, Justin RombergURL: http://legacy.cnx.org/content/m10790/2.6/

Module: "Aliasing Phenomena"By: Marco F. DuarteURL: http://legacy.cnx.org/content/m47380/1.3/Pages: 150-153Copyright: Marco F. DuarteLicense: http://creativecommons.org/licenses/by/4.0/Based on: Aliasing PhenomenaBy: Justin Romberg, Don Johnson, Stephen KruzickURL: http://legacy.cnx.org/content/m34847/1.5/

Module: "Anti-Aliasing Filters"By: Marco F. DuarteURL: http://legacy.cnx.org/content/m47392/1.1/Pages: 153-155Copyright: Marco F. DuarteLicense: http://creativecommons.org/licenses/by/3.0/Based on: Anti-Aliasing FiltersBy: Justin Romberg, Stephen KruzickURL: http://legacy.cnx.org/content/m10794/2.6/


Module: "Changing Sampling Rates in Discrete Time"By: Marco F. DuarteURL: http://legacy.cnx.org/content/m48038/1.1/Pages: 155-161Copyright: Marco F. DuarteLicense: http://creativecommons.org/licenses/by/3.0/

Module: "Discrete Time Processing of Continuous Time Signals"By: Marco F. Duarte, Natesh GaneshURL: http://legacy.cnx.org/content/m47398/1.3/Pages: 161-165Copyright: Marco F. DuarteLicense: http://creativecommons.org/licenses/by/3.0/Based on: Discrete Time Processing of Continuous Time SignalsBy: Justin Romberg, Stephen KruzickURL: http://legacy.cnx.org/content/m10797/2.11/

Module: "Discrete Fourier Transform (DFT)"By: Marco F. DuarteURL: http://legacy.cnx.org/content/m47468/1.1/Pages: 167-169Copyright: Marco F. DuarteLicense: http://creativecommons.org/licenses/by/3.0/Based on: Discrete Fourier Transform (DFT)By: Don JohnsonURL: http://legacy.cnx.org/content/m10249/2.28/

Module: "DFT: Fast Fourier Transform"By: Don JohnsonURL: http://legacy.cnx.org/content/m0504/2.9/Page: 169Copyright: Don JohnsonLicense: http://creativecommons.org/licenses/by/3.0/

Module: "The Fast Fourier Transform (FFT)"By: Marco F. DuarteURL: http://legacy.cnx.org/content/m47467/1.1/Pages: 170-174Copyright: Marco F. DuarteLicense: http://creativecommons.org/licenses/by/3.0/Based on: The Fast Fourier Transform (FFT)By: Justin RombergURL: http://legacy.cnx.org/content/m10783/2.7/

Module: "Matched Filter Detector"By: Stephen Kruzick, Dan Calderon, Catherine Elder, Richard BaraniukURL: http://legacy.cnx.org/content/m34670/1.10/Pages: 174-183Copyright: Stephen Kruzick, Dan Calderon, Catherine Elder, Richard BaraniukLicense: http://creativecommons.org/licenses/by/3.0/Based on: Cauchy-Schwarz InequalityBy: Michael Haag, Justin RombergURL: http://legacy.cnx.org/content/m10757/2.6/


Module: "Linear Algebra: The Basics"Used here as: "Basic Linear Algebra"By: Michael Haag, Justin RombergURL: http://legacy.cnx.org/content/m10734/2.7/Pages: 185-190Copyright: Michael Haag, Justin RombergLicense: http://creativecommons.org/licenses/by/3.0/

Module: "Linear Constant Coecient Dierence Equations"By: Richard Baraniuk, Stephen KruzickURL: http://legacy.cnx.org/content/m12325/1.5/Pages: 190-191Copyright: Richard Baraniuk, Stephen KruzickLicense: http://creativecommons.org/licenses/by/3.0/

Module: "Solving Linear Constant Coecient Dierence Equations"By: Richard Baraniuk, Stephen KruzickURL: http://legacy.cnx.org/content/m12326/1.6/Pages: 191-194Copyright: Richard Baraniuk, Stephen KruzickLicense: http://creativecommons.org/licenses/by/3.0/

Module: "Viewing Embedded LabVIEW Content in Connexions"By: Stephen KruzickURL: http://legacy.cnx.org/content/m34460/1.5/Page: 198Copyright: Stephen KruzickLicense: http://creativecommons.org/licenses/by/3.0/Based on: Viewing Embedded LabVIEW ContentBy: Matthew HutchinsonURL: http://legacy.cnx.org/content/m13753/1.3/

Module: "Getting Started With Mathematica"By: Catherine Elder, Dan CalderonURL: http://legacy.cnx.org/content/m36728/1.13/Pages: 198-200Copyright: Catherine Elder, Dan CalderonLicense: http://creativecommons.org/licenses/by/3.0/


Signals and Systems
This course deals with signals, systems, and transforms, from their theoretical mathematical foundations to practical implementation in circuits and computer algorithms. At the conclusion of ECE 313, you should have a deep understanding of the mathematics and practical issues of signals in continuous and discrete time, linear time invariant systems, convolution, and Fourier transforms.

About OpenStax-CNX
Rhaptos is a web-based collaborative publishing system for educational material.