1
Chapter 7
FIR Filter Design Techniques
2
Design of FIR Filters by Windowing (1)
We have discussed techniques for the design of discrete-time IIR filters based on the transformations of continuous-time IIR systems.
In contrast, FIR filters are almost entirely restricted to discrete-time implementations. The design techniques for FIR filters are based on directly approximating the desired frequency response of the discrete-time system.
Most techniques for approximating the magnitude response of an FIR system assume a linear phase constraint, thereby avoiding the problem of spectrum factorization that complicates the direct design of IIR filters.
The simplest method for FIR design is the window method.
3
Design of FIR Filters by Windowing (2)
We consider an ideal desired frequency response that can be represented as
H_d(e^{jω}) = Σ_{n=−∞}^{∞} h_d[n] e^{−jωn}.
In turn, the impulse response sequence can be expressed as
h_d[n] = (1/2π) ∫_{−π}^{π} H_d(e^{jω}) e^{jωn} dω.
Many idealized systems are defined by piecewise-constant or piecewise-functional frequency responses with discontinuities at the boundaries between bands. As a result, these systems have impulse responses that are noncausal and infinitely long.
The most straightforward approach to obtaining a causal FIR approximation to such systems is to truncate the ideal response.
4
Design of FIR Filters by Windowing (3)
The simplest way to obtain a causal FIR filter from h_d[n] is to define a new system with impulse response h[n] given by
h[n] = h_d[n], 0 ≤ n ≤ M; 0, otherwise.
As can be seen in the LPF example (we discussed it in Ch. 2), there is the Gibbs phenomenon.
More generally, we can represent h[n] as the product of the desired impulse response and a finite-duration "window" w[n],
h[n] = h_d[n] w[n],
where the window for the above example is the rectangular window
w[n] = 1, 0 ≤ n ≤ M; 0, otherwise.
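As a quick numerical illustration (not part of the slides), the truncation above can be reproduced in a few lines of plain Python; the helper names `truncated_lpf` and `mag_response`, and the choices M = 100 and ω_c = 0.5π, are ours. Scanning the magnitude response just below the cutoff exposes the roughly 9% Gibbs overshoot that no increase of M removes:

```python
import math

def truncated_lpf(M, wc):
    # Rectangular-window truncation of the delayed ideal lowpass
    # impulse response h_d[n] = sin(wc (n - M/2)) / (pi (n - M/2)).
    a = M / 2.0
    h = []
    for n in range(M + 1):
        t = n - a
        h.append(wc / math.pi if t == 0 else math.sin(wc * t) / (math.pi * t))
    return h

def mag_response(h, w):
    # |H(e^{jw})| evaluated directly from the DTFT sum.
    re = sum(x * math.cos(w * n) for n, x in enumerate(h))
    im = -sum(x * math.sin(w * n) for n, x in enumerate(h))
    return math.hypot(re, im)

M, wc = 100, 0.5 * math.pi
h = truncated_lpf(M, wc)
# Scan the passband up to just below the cutoff; the largest value is the
# Gibbs overshoot peak (about 1.09 regardless of how large M is made).
peak = max(mag_response(h, 2 * math.pi * k / 4096) for k in range(1024))
```

The windowed impulse response stays symmetric about M/2, so the linear-phase property is preserved by the truncation.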
5
Design of FIR Filters by Windowing (4)
The truncated-lowpass example from Ch. 2 has frequency response
H_M(e^{jω}) = Σ_{n=−M}^{M} [sin(ω_c n)/(πn)] e^{−jωn}.
[Figure: plots of H_M(e^{jω}) versus ω/π for M = 1, 5, 7, and 19, showing the Gibbs oscillations near the discontinuity.]
6
Design of FIR Filters by Windowing (5)
From the modulation theorem (or windowing theorem),
H(e^{jω}) = (1/2π) ∫_{−π}^{π} H_d(e^{jθ}) W(e^{j(ω−θ)}) dθ.
That is, H(e^{jω}) is the periodic convolution of H_d(e^{jω}) and W(e^{jω}). Thus, H(e^{jω}) will be a smeared version of H_d(e^{jω}).
7
Design of FIR Filters by Windowing (6)
If W(e^{jω}) is concentrated in a narrow band of frequencies around ω = 0, then H(e^{jω}) will look like H_d(e^{jω}), except where H_d(e^{jω}) changes very abruptly.
The window w[n] should be as short as possible in duration, so as to minimize computation in the implementation of the filter, while W(e^{jω}) should approximate an impulse. Clearly, these are conflicting requirements.
For the rectangular window, the side lobes are large; as M increases, the peak amplitudes of the main lobe and the side lobes grow in such a way that the area under each lobe is constant while the width of each lobe decreases with M.
8
Design of FIR Filters by Windowing (7)
Frequency response of the rectangular window:
W(e^{jω}) = Σ_{n=0}^{M} e^{−jωn} = (1 − e^{−jω(M+1)})/(1 − e^{−jω}) = e^{−jωM/2} · sin[ω(M+1)/2] / sin(ω/2).
Next, we will see that, by tapering the window smoothly to zero at each end, the height of the side lobes can be reduced, at the expense of a wider main lobe.
9
Properties of Commonly Used Windows (1)
Commonly used windows:
Rectangular: w[n] = 1, 0 ≤ n ≤ M; 0, otherwise.
Bartlett (triangular): w[n] = 2n/M, 0 ≤ n ≤ M/2; 2 − 2n/M, M/2 < n ≤ M; 0, otherwise.
Hanning: w[n] = 0.5 − 0.5 cos(2πn/M), 0 ≤ n ≤ M; 0, otherwise.
Hamming: w[n] = 0.54 − 0.46 cos(2πn/M), 0 ≤ n ≤ M; 0, otherwise.
Blackman: w[n] = 0.42 − 0.5 cos(2πn/M) + 0.08 cos(4πn/M), 0 ≤ n ≤ M; 0, otherwise.
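The definitions above translate directly into plain Python (stdlib only; the function names are ours). Note the convention: each window has M + 1 samples, n = 0, …, M, and is symmetric about M/2:

```python
import math

def hann(M):      # 0.5 - 0.5 cos(2 pi n / M)
    return [0.5 - 0.5 * math.cos(2 * math.pi * n / M) for n in range(M + 1)]

def hamming(M):   # 0.54 - 0.46 cos(2 pi n / M)
    return [0.54 - 0.46 * math.cos(2 * math.pi * n / M) for n in range(M + 1)]

def blackman(M):  # 0.42 - 0.5 cos(2 pi n / M) + 0.08 cos(4 pi n / M)
    return [0.42 - 0.5 * math.cos(2 * math.pi * n / M)
            + 0.08 * math.cos(4 * math.pi * n / M) for n in range(M + 1)]

def bartlett(M):  # 2n/M up to the midpoint, then 2 - 2n/M
    return [2 * n / M if n <= M / 2 else 2 - 2 * n / M for n in range(M + 1)]
```

The tapered windows reach (or nearly reach) zero at the endpoints, which is exactly what reduces the side lobes relative to the rectangular window.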
10
Properties of Commonly Used Windows (2)
Commonly used windows. (Note that the windows are plotted as functions of a continuous variable. The actual window sequence is defined only at integer values of n.)
11
Properties of Commonly Used Windows (3)
Commonly used windows – magnitude response (M = 50)
Rectangular: peak side-lobe level −13 dB; approximate main-lobe width 4π/(M+1).
Bartlett: peak side-lobe level −25 dB; approximate main-lobe width 8π/M.
12
Properties of Commonly Used Windows (4)
Hanning: peak side-lobe level −31 dB; approximate main-lobe width 8π/M.
Hamming: peak side-lobe level −41 dB; approximate main-lobe width 8π/M.
Blackman: peak side-lobe level −57 dB; approximate main-lobe width 12π/M.
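The −13 dB figure for the rectangular window can be verified numerically from its closed-form magnitude |W(e^{jω})| = |sin(ω(M+1)/2)/sin(ω/2)|. The sketch below (our own, not from the slides) scans all frequencies beyond the first null at 2π/(M+1) and compares the largest side-lobe peak with the main-lobe peak of M + 1:

```python
import math

def rect_mag(M, w):
    # |W(e^{jw})| for the rectangular window of length M + 1.
    if abs(math.sin(w / 2)) < 1e-12:
        return M + 1.0
    return abs(math.sin(w * (M + 1) / 2) / math.sin(w / 2))

M = 50
first_null = 2 * math.pi / (M + 1)          # edge of the main lobe
grid = [first_null + (math.pi - first_null) * k / 20000 for k in range(20001)]
peak_side = max(rect_mag(M, w) for w in grid)
# Peak side-lobe level relative to the main-lobe peak W(e^{j0}) = M + 1.
psl_db = 20 * math.log10(peak_side / (M + 1))   # close to -13 dB
```

The same scan applied to the tapered windows of the table reproduces their (lower) side-lobe levels and (wider) main lobes.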
13
Incorporation of Generalized Linear Phase
All the windows have the property that
w[n] = w[M − n], 0 ≤ n ≤ M; 0, otherwise,
i.e., they are symmetric about M/2.
If the desired impulse response is also symmetric about M/2, i.e., if h_d[n] = h_d[M − n], then the windowed impulse response will also have that symmetry, and the resulting frequency response will have a generalized linear phase; that is,
H(e^{jω}) = A_e(e^{jω}) e^{−jωM/2},
where A_e(e^{jω}) is real and is an even function of ω.
Properties of Commonly Used Windows (5)
14
Incorporation of Generalized Linear Phase (cont.)
If the desired impulse response is anti-symmetric about M/2, i.e., if h_d[n] = −h_d[M − n], then the windowed impulse response will also be anti-symmetric about M/2, and the resulting frequency response will have a generalized linear phase; that is,
H(e^{jω}) = j A_o(e^{jω}) e^{−jωM/2},
where A_o(e^{jω}) is real and is an odd function of ω.
Properties of Commonly Used Windows (6)
15
The trade-off between main-lobe width and side-lobe area can be quantified by seeking the window function that is maximally concentrated around ω = 0 in the frequency domain. This problem has been solved, but the solution involves prolate spheroidal wave functions, which are difficult to compute.
However, Kaiser found that a near-optimal window could be formed using the zeroth-order modified Bessel function of the first kind, I_0(·), a function that is much easier to compute.
The Kaiser Window Filter Design Method (1)
The Kaiser window is defined as
w[n] = I_0[β(1 − [(n − α)/α]²)^{1/2}] / I_0(β), 0 ≤ n ≤ M; 0, otherwise,
where α = M/2.
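A minimal pure-Python sketch of this definition (our function names; I_0 is evaluated by its power series I_0(x) = Σ_k ((x/2)^k/k!)², truncated at 25 terms, which is ample for the β values used in this chapter):

```python
import math

def i0(x, terms=25):
    # Zeroth-order modified Bessel function of the first kind, via the
    # power series I0(x) = sum_{k>=0} ((x/2)^k / k!)^2.
    s, t = 1.0, 1.0
    for k in range(1, terms):
        t *= (x / 2.0) ** 2 / k ** 2
        s += t
    return s

def kaiser(M, beta):
    # w[n] = I0(beta * sqrt(1 - ((n - a)/a)^2)) / I0(beta),  a = M/2.
    a = M / 2.0
    return [i0(beta * math.sqrt(max(0.0, 1 - ((n - a) / a) ** 2))) / i0(beta)
            for n in range(M + 1)]
```

With β = 0 every sample equals I_0(0)/I_0(0) = 1, recovering the rectangular window, and the center sample w[M/2] is always 1.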
16
The Kaiser window has two parameters: the window length (M + 1) and a shape parameter β. By adjusting (M + 1) and β, the side-lobe amplitude and main-lobe width can be traded off as desired.
β = 0 gives the rectangular window. Increasing β reduces the side-lobe level, but the main lobe becomes wider.
Increasing M while holding β constant causes the main lobe to narrow, but does not affect the amplitude of the side lobes.
Through extensive numerical experiments, Kaiser obtained a pair of formulas to predict the values of M and β needed to meet a given filter specification.
The Kaiser Window Filter Design Method (2)
17
The Kaiser Window Filter Design Method (3)
18
Relationship of the Kaiser Window to Other Windows
19
Window-Related Matlab Functions (Signal Processing Toolbox)
WINDOW - Window function gateway
WINDOW(@WNAME,N) returns an N-point window of type specified by the function handle @WNAME in a column vector. @WNAME can be any valid window function name, for example:
@bartlett - Bartlett window.
@blackman - Blackman window.
@hamming - Hamming window.
@hann - Hann window.
@kaiser - Kaiser window.
@rectwin - Rectangular window.
@triang - Triangular window.
20
W = BARTLETT(N) returns the N-point Bartlett window.
W=BLACKMAN(N) returns the N-point symmetric Blackman window in a column vector.
W=HAMMING(N) returns the N-point symmetric Hamming window in a column vector.
W=HANN(N) returns the N-point symmetric Hann window in a column vector.
W = RECTWIN(N) returns the N-point rectangular window.
W = TRIANG(N) returns the N-point triangular window.
W = KAISER(N,beta) returns the BETA-valued N-point Kaiser window.
Window-Related Matlab Functions (Signal Processing Toolbox)
21
fir1 - Window based FIR filter design - low, high, band, stop, multi.
kaiserord - Kaiser window design based filter order estimation.
Also, the following function is available among the standard "Specialized math functions".
BESSELI Modified Bessel function of the first kind. I = BESSELI(NU,Z) is the modified Bessel function of the first kind, I_nu(Z).
For more functions related to windows and filter design, use "help signal".
For detailed information on each function, use "help func_name".
FIR Design-Related Matlab Functions (Signal Processing Toolbox)
22
The Kaiser Window Filter Design Method (4)
Note that, in the Kaiser window design, the passband and the stopband have the same error bound δ.
23
Defining
A = −20 log10 δ,
Kaiser determined empirically that the value of β is given by
β = 0.1102(A − 8.7), A > 50;
β = 0.5842(A − 21)^{0.4} + 0.07886(A − 21), 21 ≤ A ≤ 50;
β = 0.0, A < 21.
When the transition bandwidth is Δω = ω_s − ω_p, M must satisfy
M = (A − 8) / (2.285 Δω).
This equation predicts M to within ±2 over a wide range of values of Δω and A.
The Kaiser Window Filter Design Method (5)
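Kaiser's formulas translate directly into code. This sketch (our naming) reproduces the numbers of the design examples that follow: δ = 0.001, Δω = 0.2π gives β ≈ 5.653 and M = 37, and δ = 0.021, Δω = 0.15π gives β ≈ 2.6 and M = 24:

```python
import math

def kaiser_params(delta, delta_w):
    # Kaiser's empirical design formulas:
    #   A    = -20 log10(delta)
    #   beta = piecewise fit in A
    #   M    = (A - 8) / (2.285 * delta_w), rounded up
    A = -20 * math.log10(delta)
    if A > 50:
        beta = 0.1102 * (A - 8.7)
    elif A >= 21:
        beta = 0.5842 * (A - 21) ** 0.4 + 0.07886 * (A - 21)
    else:
        beta = 0.0
    M = math.ceil((A - 8) / (2.285 * delta_w))
    return beta, M
```

Because the formulas are empirical fits, the resulting M may need to be adjusted by one or two after checking the actual response, exactly as the examples below do.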
24
LPF Design Example [δ1 = 0.01, δ2 = 0.001, ω_p = 0.4π, ω_s = 0.6π]
Step 1: Because the Kaiser window assumes the same error bound for the passband and stopband, we let δ = min[δ1, δ2] = 0.001.
Step 2: The cutoff frequency of the underlying ideal LPF is ω_c = (ω_p + ω_s)/2 = 0.5π.
Step 3: Determine A and β: A = −20 log10 δ = 60 dB, Δω = ω_s − ω_p = 0.2π.
Then we obtain β = 5.653, M = 37.
Step 4: The impulse response of the filter is obtained as (α = M/2)
The Kaiser Window Filter Design Method (6)
h[n] = [sin ω_c(n − α) / (π(n − α))] · I_0[β(1 − [(n − α)/α]²)^{1/2}] / I_0(β), 0 ≤ n ≤ M; 0, otherwise,
where the first factor is the (delayed) ideal LPF impulse response and the second factor is the Kaiser window.
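Putting the pieces together for the example's parameters (M = 37, ω_c = 0.5π, β = 5.653), a self-contained sketch with our helper names (I_0 again via its power series):

```python
import math

def i0(x, terms=25):
    # Zeroth-order modified Bessel function of the first kind (power series).
    s, t = 1.0, 1.0
    for k in range(1, terms):
        t *= (x / 2.0) ** 2 / k ** 2
        s += t
    return s

def kaiser_lpf(M, wc, beta):
    # h[n] = [sin(wc (n - a)) / (pi (n - a))] * Kaiser window,  a = M/2.
    a = M / 2.0
    h = []
    for n in range(M + 1):
        t = n - a
        ideal = wc / math.pi if t == 0 else math.sin(wc * t) / (math.pi * t)
        win = i0(beta * math.sqrt(max(0.0, 1 - (t / a) ** 2))) / i0(beta)
        h.append(ideal * win)
    return h

h = kaiser_lpf(37, 0.5 * math.pi, 5.653)
```

The result is symmetric about M/2 (hence linear phase), and its DC gain Σ h[n] sits within the passband tolerance of 1.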
25
LPF Design Example [δ1 = 0.01, δ2 = 0.001, ω_p = 0.4π, ω_s = 0.6π] (cont.)
Since M is an odd integer, the resulting linear-phase system is of type II (H(e^{jω}) is zero at ω = π).
The approximation error function is defined as
E(ω) = 1 − A_e(e^{jω}), 0 ≤ ω ≤ ω_p; −A_e(e^{jω}), ω_s ≤ ω ≤ π
(note that the error is not defined in the transition region). In this design example, the peak approximation error is slightly greater than δ = 0.001. Increasing M to 38 results in a type I filter for which δ = 0.0008.
It is not necessary to plot the phase and group delay, because the phase is precisely linear and the delay is M/2 samples.
The Kaiser Window Filter Design Method (7)
26
LPF Design Example [δ1 = 0.01, δ2 = 0.001, ω_p = 0.4π, ω_s = 0.6π] (cont.)
The Kaiser Window Filter Design Method (8)
27
HPF Design Example [δ1 = δ2 = 0.021, ω_s = 0.35π, ω_p = 0.5π]
The frequency response of an ideal HPF (with linear phase) is
H_hp(e^{jω}) = 0, |ω| < ω_c; e^{−jωM/2}, ω_c < |ω| ≤ π,
i.e., H_hp(e^{jω}) = e^{−jωM/2} − H_lp(e^{jω}),
and the corresponding impulse response is
h_hp[n] = sin π(n − M/2) / [π(n − M/2)] − sin ω_c(n − M/2) / [π(n − M/2)].
The HPF impulse response after applying the Kaiser window is
h[n] = h_hp[n] w[n], 0 ≤ n ≤ M; 0, otherwise.
The Kaiser Window Filter Design Method (9)
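For the example's numbers (M = 24, ω_c = 0.425π, β = 2.6) the windowed highpass can be checked end to end; the sketch below uses our helper names, with the sample at n = M/2 evaluated from the limit h_hp[M/2] = 1 − ω_c/π:

```python
import math

def i0(x, terms=25):
    # Zeroth-order modified Bessel function of the first kind (power series).
    s, t = 1.0, 1.0
    for k in range(1, terms):
        t *= (x / 2.0) ** 2 / k ** 2
        s += t
    return s

def kaiser_hpf(M, wc, beta):
    # h_hp[n] = sin(pi t)/(pi t) - sin(wc t)/(pi t), t = n - M/2,
    # multiplied by the Kaiser window (M even -> type I system).
    a = M / 2.0
    h = []
    for n in range(M + 1):
        t = n - a
        ideal = (1.0 - wc / math.pi if t == 0
                 else (math.sin(math.pi * t) - math.sin(wc * t)) / (math.pi * t))
        win = i0(beta * math.sqrt(max(0.0, 1 - (t / a) ** 2))) / i0(beta)
        h.append(ideal * win)
    return h

def mag(h, w):
    # |H(e^{jw})| from the DTFT sum.
    re = sum(x * math.cos(w * n) for n, x in enumerate(h))
    im = sum(x * math.sin(w * n) for n, x in enumerate(h))
    return math.hypot(re, im)

h = kaiser_hpf(24, 0.425 * math.pi, 2.6)
```

The magnitude is near 0 at ω = 0 and near 1 at ω = π, within the δ ≈ 0.021 tolerance of the example.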
28
HPF Design Example [δ1 = δ2 = 0.021, ω_s = 0.35π, ω_p = 0.5π]
Using Kaiser's formulas, we obtain β = 2.6, M = 24. The cutoff frequency of the ideal HPF is ω_c = (ω_s + ω_p)/2 = 0.425π.
The actual approximation error of this type I filter is δ = 0.0213.
If we increase M to 25 and keep β unchanged (as we did in the LPF design), the frequency response becomes highly unsatisfactory, because the type II filter has a zero at ω = π. Type II FIR linear-phase systems are generally not appropriate approximations for either highpass or bandstop filters.
Increasing M to 26 results in a type I filter with a narrower transition region, which satisfies the design requirement.
The Kaiser Window Filter Design Method (10)
29
HPF Design Example [δ1 = δ2 = 0.021, ω_s = 0.35π, ω_p = 0.5π] (cont.)
M=24
The Kaiser Window Filter Design Method (11)
30
HPF Design Example [δ1 = δ2 = 0.021, ω_s = 0.35π, ω_p = 0.5π] (cont.)
M=25
The Kaiser Window Filter Design Method (12)
31
The Kaiser Window Filter Design Method (13)
Multiband filter (cont.)
If such a magnitude function is multiplied by a linear-phase factor e^{−jωM/2}, the corresponding ideal impulse response is
h_mb[n] = Σ_{k=1}^{N_mb} (G_k − G_{k+1}) · sin ω_k(n − M/2) / [π(n − M/2)],
where N_mb is the number of bands, G_k is the gain of the kth band, and G_{N_mb+1} = 0.
If h_mb[n] is multiplied by a Kaiser window, the type of approximations we have observed at the single discontinuity of the lowpass and highpass systems will occur at each of the discontinuities, and the approximation error will be scaled by the size of the jump.
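A sanity check on the multiband formula: with a single band (gain G_1, edge ω_1) it must reduce to the ideal lowpass sinc. The sketch below is ours (the `multiband` signature, taking band edges ω_1 … ω_{N_mb} and per-band gains G_1 … G_{N_mb} with G_{N_mb+1} = 0, is an assumption of this illustration):

```python
import math

def multiband(M, edges, gains):
    # h_mb[n] = sum_k (G_k - G_{k+1}) sin(w_k (n - M/2)) / (pi (n - M/2)),
    # i.e. a superposition of lowpass sincs, one per band edge.
    a = M / 2.0
    g = list(gains) + [0.0]          # append G_{Nmb+1} = 0
    h = []
    for n in range(M + 1):
        t = n - a
        v = 0.0
        for k, wk in enumerate(edges):
            step = g[k] - g[k + 1]   # size of the jump at edge w_k
            v += step * (wk / math.pi if t == 0
                         else math.sin(wk * t) / (math.pi * t))
        h.append(v)
    return h
```

Each term contributes a Gibbs-type ripple at its own discontinuity, scaled by the jump G_k − G_{k+1}, which is exactly the behavior described above.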
32
Discrete-Time Differentiators
As we discussed before, the ideal discrete-time differentiator system has frequency response jω/T for −π < ω < π.
For an ideal discrete-time differentiator with a linear phase, the appropriate frequency response is (the factor of 1/T is omitted)
The corresponding ideal impulse response is
If hdiff[n] is multiplied by a symmetric window of length (M+1), then it is easy to show that h[n]= –h[M–n]. The resulting system is either type III or type IV general linear-phase system.
The Kaiser Window Filter Design Method (14)
H_diff(e^{jω}) = (jω) e^{−jωM/2}
h_diff[n] = cos π(n − M/2) / (n − M/2) − sin π(n − M/2) / [π(n − M/2)²]
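The antisymmetry h[n] = −h[M − n] claimed above is easy to confirm numerically, both for even M (type III) and odd M (type IV); the function name below is ours:

```python
import math

def diff_ideal(M):
    # h_diff[n] = cos(pi t)/t - sin(pi t)/(pi t^2),  t = n - M/2,
    # with h_diff = 0 at t = 0 (the odd function's value at the center).
    a = M / 2.0
    h = []
    for n in range(M + 1):
        t = n - a
        if t == 0:
            h.append(0.0)
        else:
            h.append(math.cos(math.pi * t) / t
                     - math.sin(math.pi * t) / (math.pi * t * t))
    return h
```

Since both terms are odd functions of t = n − M/2, multiplying by any symmetric window preserves the antisymmetry, so the windowed system is always type III or type IV.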
33
Discrete-Time Differentiators – Design Example using type III
Suppose M = 10, β = 2.4. Type III does not provide good magnitude linearity because of the zero at ω = π.
The Kaiser Window Filter Design Method (15)
34
The Kaiser Window Filter Design Method (16)
Discrete-Time Differentiators – Design Example using type IV
Suppose M = 5, β = 2.4. Type IV achieves a much better approximation to the amplitude function.
This type of system generates a non-integer time delay.
35
Optimum Approximation Criterion (1)
We have discussed design of FIR filters by windowing, which is straightforward and is quite general.
However, we often wish to design a filter that is the BEST that can be achieved for a given value of M.
What is the criterion? It is meaningless to discuss the best solution without an approximation criterion.
For example, if the criterion is to minimize the mean-square error
ε² = (1/2π) ∫_{−π}^{π} |H_d(e^{jω}) − H(e^{jω})|² dω,
then the rectangular window (simple truncation) is the best approximation:
h[n] = h_d[n], 0 ≤ n ≤ M; 0, otherwise.
36
Optimum Approximation Criterion (2)
However, the window methods generally have two problems:
Adverse behavior at discontinuities of H(e^{jω}); the error usually becomes smaller for frequencies away from the discontinuity (over-specified at those frequencies).
It does not permit individual control over approximation error in different bands (over-specified over some bands).
For many applications, better filters result from a minimax strategy (minimization of the maximum errors) or a frequency-weighted criterion.
Approximation error is spread out uniformly in frequency
Individual control of approximation error over different bands
Such a criterion avoids the above-mentioned over-specifications.
37
Optimum Approximation Criterion (3)
We consider a particularly effective and widely used algorithmic procedure for the design of FIR filters with a generalized linear phase.
We consider only type I filters in detail, and show examples for type II filters; from these, one can understand how the method extends to other types of filters.
We first consider a zero-phase filter (this filter can be made causal by inserting an appropriate delay),
h_e[n] = h_e[−n],
and the corresponding frequency response is (L = M/2 is an integer)
A_e(e^{jω}) = Σ_{n=−L}^{L} h_e[n] e^{−jωn}.
38
Optimum Approximation Criterion (4)
The frequency response can be rewritten as
A_e(e^{jω}) = h_e[0] + Σ_{n=1}^{L} 2h_e[n] cos(ωn).
Therefore, it is a real, even, and periodic function of ω.
A causal system can be obtained from h_e[n] by delaying it by L = M/2 samples; i.e.,
h[n] = h_e[n − M/2] = h[M − n],
and the corresponding frequency response is
H(e^{jω}) = A_e(e^{jω}) e^{−jωM/2}.
39
Optimum Approximation Criterion (5)
A tolerance scheme for an approximation to an LPF with a real function A_e(e^{jω}).
Some of the parameters (L, δ1, δ2, ω_p, ω_s) are fixed, and an iterative procedure is used to obtain an optimum adjustment of the remaining parameters.
40
Optimum Approximation Criterion (6)
There are several methods for FIR filter design. The Parks-McClellan algorithm (Remez algorithm) has become the dominant method for optimum design of FIR filters. This is because it is the most flexible and the most computationally efficient. We will discuss only that algorithm.
The Parks-McClellan algorithm is based on reformulating the filter design problem as a problem in polynomial approximation.
We note that the term cos(ωn) in A_e(e^{jω}) can be expressed as a sum of powers of cos ω in the form
cos(ωn) = T_n(cos ω),
where T_n(x) = cos(n cos^{−1} x) is the nth-order Chebyshev polynomial.
41
Chebyshev Polynomial
Chebyshev polynomial:
V_N(x) = cos(N cos^{−1} x), |x| ≤ 1; cosh(N cosh^{−1} x), |x| > 1.
V_N(x) is an Nth-order polynomial in x:
V_0(x) = 1,
V_1(x) = x,
V_2(x) = 2x² − 1,
…,
V_{N+1}(x) = 2x V_N(x) − V_{N−1}(x).
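The identity cos(nω) = T_n(cos ω) and the recurrence can be cross-checked in a few lines (the function name is ours):

```python
import math

def cheb(n, x):
    # T_n(x) via the Chebyshev recurrence T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x),
    # starting from T_0(x) = 1, T_1(x) = x.
    t0, t1 = 1.0, x
    if n == 0:
        return t0
    for _ in range(n - 1):
        t0, t1 = t1, 2.0 * x * t1 - t0
    return t1

# cos(5w) expressed as a polynomial in cos(w); the difference is ~0.
w = 0.7
residual = cheb(5, math.cos(w)) - math.cos(5 * w)
```

This is exactly the substitution that turns the cosine sum A_e(e^{jω}) into an ordinary polynomial in x = cos ω on the next slide.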
42
Optimum Approximation Criterion (7)
Therefore, A_e(e^{jω}) can be expressed as an Lth-order polynomial in cos ω, namely
A_e(e^{jω}) = Σ_{k=0}^{L} a_k (cos ω)^k = P(x)|_{x=cos ω},
where
P(x) = Σ_{k=0}^{L} a_k x^k.
43
Optimum Approximation Criterion (8)
Define an approximation error function
E(ω) = W(ω)[H_d(e^{jω}) − A_e(e^{jω})],
where
W(ω) : weighting function,
H_d(e^{jω}) : desired frequency response,
A_e(e^{jω}) : approximation function.
These functions are defined only over closed subintervals (e.g., passband and stopband) of 0 ≤ ω ≤ π.
For example, the weighting function and desired frequency response of an LPF are (K = δ1/δ2)
H_d(e^{jω}) = 1, 0 ≤ ω ≤ ω_p; 0, ω_s ≤ ω ≤ π,
W(ω) = 1/K, 0 ≤ ω ≤ ω_p; 1, ω_s ≤ ω ≤ π.
44
Optimum Approximation Criterion (9)
Note that, with this weighting, the maximum weighted absolute approximation error is δ = δ2 in both bands.
Typical frequency response.
Weighted error.
45
Optimum Approximation Criterion (10)
The Minimax Criterion
The particular criterion used in this design procedure is the so-called minimax or Chebyshev criterion: over the frequency intervals of interest (the passband and stopband for an LPF), we seek a frequency response A_e(e^{jω}) that minimizes the maximum weighted approximation error; i.e.,
min_{h_e[n]: 0 ≤ n ≤ L} ( max_{ω∈F} |E(ω)| ),
where F is the closed subset of 0 ≤ ω ≤ π of interest (e.g., passband and stopband for an LPF).
46
Optimum Approximation Criterion (11)
Alternation Theorem
Let F_P denote the closed subset consisting of the disjoint union of closed subsets of the real axis x. Then
P(x) = Σ_{k=0}^{r} a_k x^k
is an rth-order polynomial. D_P(x) denotes a given desired function of x that is continuous on F_P; W_P(x) is a positive function, continuous on F_P; and E_P(x) = W_P(x)[D_P(x) − P(x)] is the weighted error. The maximum error is defined as
||E|| = max_{x∈F_P} |E_P(x)|.
A necessary and sufficient condition that P(x) be the unique rth-order polynomial that minimizes ||E|| is that E_P(x) exhibit at least (r + 2) alternations.
47
Optimum Approximation Criterion (12)
(r + 2) alternations means that there must exist at least (r + 2) values x_i in F_P such that x_1 < x_2 < … < x_{r+2} and such that E_P(x_i) = −E_P(x_{i+1}) = ±||E|| for i = 1, 2, …, r + 1.
Example … which one satisfies the alternation theorem (r =5)?
48
Optimal Type I Lowpass Filters (1)
For type I filters, the polynomial P(x) is the cosine polynomial A_e(e^{jω}) with the change of variable x = cos ω and r = L:
P(cos ω) = Σ_{k=0}^{L} a_k (cos ω)^k.
D_P(x) and W_P(x) become
D_P(cos ω) = 1, cos ω_p ≤ cos ω ≤ 1; 0, −1 ≤ cos ω ≤ cos ω_s,
W_P(cos ω) = 1/K, cos ω_p ≤ cos ω ≤ 1; 1, −1 ≤ cos ω ≤ cos ω_s,
respectively, and the weighted approximation error is
E_P(cos ω) = W_P(cos ω)[D_P(cos ω) − P(cos ω)].
49
Optimal Type I Lowpass Filters (2)
Equivalent polynomial approximation function as a function of x = cos ω.
50
Optimal Type I Lowpass Filters (3)
Properties
The maximum possible number of alternations of the error is L + 3.
An Lth-degree polynomial can have at most (L − 1) points with zero slope in an open interval; the possible alternation locations are those points plus the 4 band edges (ω = 0, ω_p, ω_s, and π).
Even if P(x) does not have zero slope at x = 1 and x = −1, P(cos ω) always has zero slope at ω = 0 and ω = π.
Alternations will always occur at ω_p and ω_s.
All points with zero slope inside the passband and all points with zero slope inside the stopband correspond to alternations. That is, the filter is equiripple, except possibly at ω = 0 and ω = π.
51
Optimal Type I Lowpass Filters (4)
Possible optimum LPF approximations for L = 7:
L + 3 alternations (extraripple case); L + 2 alternations (extremum at ω = π);
L + 2 alternations (extremum at ω = 0); L + 2 alternations (extrema at both ω = 0 and ω = π).
52
Optimal Type I Lowpass Filters (5)
Illustrations supporting the second and third properties
53
Optimal Type II Lowpass Filters (1)
For type II filters, the filter length (M+1) is even, with the symmetric property
h[n]= h[M – n]
Therefore, the frequency response H(e^{jω}) can be expressed in the form
H(e^{jω}) = e^{−jωM/2} Σ_{n=0}^{(M−1)/2} 2h[n] cos[ω(M/2 − n)].
Let b[n] = 2h[(M+1)/2 − n], n = 1, 2, …, (M+1)/2; then
H(e^{jω}) = e^{−jωM/2} Σ_{n=1}^{(M+1)/2} b[n] cos[ω(n − 1/2)]
= e^{−jωM/2} cos(ω/2) Σ_{n=0}^{(M−1)/2} b̃[n] cos(ωn).
54
Optimal Type II Lowpass Filters (2)
Derivation memo (see Problem 7.52)
Using the trigonometric identity cos α cos β = ½ cos(α + β) + ½ cos(α − β), we get
cos(ω/2) Σ_{n=0}^{(M−1)/2} b̃[n] cos(ωn)
= Σ_{n=0}^{(M−1)/2} ½ b̃[n] cos[ω(n + ½)] + Σ_{n=0}^{(M−1)/2} ½ b̃[n] cos[ω(n − ½)]
= (b̃[0] + ½ b̃[1]) cos(ω/2) + Σ_{n=2}^{(M−1)/2} ½ (b̃[n] + b̃[n − 1]) cos[ω(n − ½)] + ½ b̃[(M−1)/2] cos(ωM/2),
where the first term collects the n = 0 and n = 1 contributions of both sums (using cos[ω(0 − ½)] = cos(ω/2)).
55
Optimal Type II Lowpass Filters (3)
Derivation memo (cont.)
This equals
Σ_{n=1}^{(M+1)/2} b[n] cos[ω(n − ½)]
if we let
b[1] = b̃[0] + ½ b̃[1],
b[n] = ½ (b̃[n] + b̃[n − 1]), 2 ≤ n ≤ (M−1)/2,
b[(M+1)/2] = ½ b̃[(M−1)/2].
56
Optimal Type II Lowpass Filters (4)
Therefore,
H(e^{jω}) = e^{−jωM/2} cos(ω/2) P(cos ω), where P(cos ω) = Σ_{k=0}^{L} a_k (cos ω)^k, L = (M − 1)/2,
with desired function and weighting
D_P(cos ω) = 1/cos(ω/2), 0 ≤ ω ≤ ω_p; 0, ω_s ≤ ω ≤ π,
W_P(cos ω) = cos(ω/2)/K, 0 ≤ ω ≤ ω_p; cos(ω/2), ω_s ≤ ω ≤ π.
Consequently, type II filter design is a different polynomial approximation problem than type I filter design. Type III and type IV filters can be treated similarly.
57
The Parks-McClellan Algorithm (1)
The alternation theorem gives necessary and sufficient conditions on the error for optimality in the Chebyshev or minimax sense. Although the theorem does not state explicitly how to find the optimum filter, the conditions that are presented serve as the basis for an efficient algorithm for finding it. We consider type I LPF design herein.
From the alternation theorem, the optimum filter A_e(e^{jω}) will satisfy the set of equations
W(ω_i)[H_d(e^{jω_i}) − A_e(e^{jω_i})] = (−1)^{i+1} δ, i = 1, 2, …, (L + 2).
We can write these equations as
A_e(e^{jω_i}) = H_d(e^{jω_i}) − (−1)^{i+1} δ / W(ω_i), i = 1, 2, …, (L + 2).
iie
58
The Parks-McClellan Algorithm (2)
In matrix form, this becomes (x_i = cos ω_i) a set of (L + 2) linear equations in the (L + 2) unknowns a_0, …, a_L, δ:
[1, x_i, x_i², …, x_i^L, (−1)^{i+1}/W(ω_i)] · [a_0, a_1, …, a_L, δ]^T = H_d(e^{jω_i}), i = 1, 2, …, (L + 2).
This set of equations serves as the basis for an iterative algorithm for finding the optimum A_e(e^{jω}). The procedure begins by guessing a set of alternation frequencies ω_i, i = 1, 2, …, (L + 2).
Note that ω_p and ω_s are fixed and are necessary members of the set of alternation frequencies. Specifically, if ω_l = ω_p, then ω_{l+1} = ω_s.
59
The Parks-McClellan Algorithm (3)
The above set of equations could be solved for the set of coefficients a_k and δ.
A more efficient alternative is to use polynomial interpolation. In particular, Parks and McClellan found that, for the given set of extremal frequencies (x_i = cos ω_i),
δ = Σ_{k=1}^{L+2} b_k H_d(e^{jω_k}) / Σ_{k=1}^{L+2} (−1)^{k+1} b_k / W(ω_k),
where
b_k = Π_{i=1, i≠k}^{L+2} 1/(x_k − x_i).
That is, with this δ, A_e(e^{jω_i}) takes the values 1 ∓ Kδ for 0 ≤ ω_i ≤ ω_p and ±δ for ω_s ≤ ω_i ≤ π.
60
The Parks-McClellan Algorithm (4)
Now, since A_e(e^{jω}) is known to be an Lth-order trigonometric polynomial, we can interpolate a trigonometric polynomial through (L + 1) of the (L + 2) known values E(ω_i) [or, equivalently, A_e(e^{jω_i})].
61
The Parks-McClellan Algorithm (5)
Parks and McClellan used the Lagrange interpolation formula to obtain (x = cos ω, x_i = cos ω_i)
A_e(e^{jω}) = P(cos ω) = Σ_{k=1}^{L+1} [d_k/(x − x_k)] C_k / Σ_{k=1}^{L+1} [d_k/(x − x_k)],
with
d_k = Π_{i=1, i≠k}^{L+1} 1/(x_k − x_i) = b_k (x_k − x_{L+2})
and
C_k = H_d(e^{jω_k}) − (−1)^{k+1} δ / W(ω_k).
If |E(ω)| ≤ δ for all ω in the passband and stopband, then the optimum approximation has been found. Otherwise, we must find a new set of extremal frequencies.
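The second (barycentric) form of Lagrange interpolation used here can be sketched generically; `lagrange_interp` is our name, and the test polynomial below is arbitrary. Through L + 1 points the formula reproduces any polynomial of degree ≤ L exactly, which is why the algorithm can evaluate A_e(e^{jω}) on a dense grid without ever solving for the coefficients a_k:

```python
def lagrange_interp(xs, ys, x):
    # Barycentric (second-form) Lagrange interpolation through (xs, ys):
    #   P(x) = sum_k [d_k/(x - x_k)] y_k / sum_k [d_k/(x - x_k)],
    # with d_k = prod_{i != k} 1/(x_k - x_i).
    d = []
    for k, xk in enumerate(xs):
        p = 1.0
        for i, xi in enumerate(xs):
            if i != k:
                p *= 1.0 / (xk - xi)
        d.append(p)
    num = den = 0.0
    for xk, yk, dk in zip(xs, ys, d):
        if x == xk:          # exactly at a node: return the node value
            return yk
        w = dk / (x - xk)
        num += w * yk
        den += w
    return num / den
```

In the Parks-McClellan iteration, the nodes are the current extremal points x_k = cos ω_k and the values are the C_k defined above.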
62
The Parks-McClellan Algorithm (6)
For the LPF shown in the previous figure, δ was too small. The extremal frequencies are then exchanged for a completely new set, defined by the (L + 2) largest peaks of the error curve (marked with × in the figure).
As before, ω_p and ω_s must be selected as extremal frequencies.
Also recall that there are at most (L − 1) local minima and maxima in the open intervals 0 < ω < ω_p and ω_s < ω < π. The remaining extrema can be at ω = 0 and ω = π. If there is a maximum of the error function at both 0 and π, then the frequency at which the greater error occurs is taken as the new estimate of the frequency of the remaining extremum.
The cycle – computing the value of δ, fitting a polynomial to the assumed error peaks, and then locating the actual error peaks – is repeated until δ does not change from its previous value by more than a prescribed small amount.
63
The Parks-McClellan Algorithm (7)
64
Characteristics of Optimum FIR filters (1)
For different types of filters (e.g., M = 9 and M = 10), it is possible that the shorter filter is better. For the same type of filters (e.g., M = 8 and M = 10), the longer filter always provides better or identical performance (in this case, the two filters are identical).
Illustration of the dependence of passband and stopband error on cutoff frequency for optimal approximations of an LPF (K = 1, ω_s − ω_p = 0.2π).
65
Characteristics of Optimum FIR filters (2)
The estimate of M for an equiripple lowpass approximation is
M = [−10 log10(δ1 δ2) − 13] / (2.324 Δω),
where Δω = ω_s − ω_p. Compared with the design formula for the Kaiser window method,
M = (−20 log10 δ − 8) / (2.285 Δω),
for the comparable case (δ1 = δ2 = δ), the optimal approximations provide about 5 dB better approximation error for a given value of M. Another important advantage of equiripple filters is that δ1 and δ2 need not be equal, as must be the case for the window method.
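The two order estimates compare directly in code (our function names). For the specification used in the examples of this chapter (δ1 = 0.01, δ2 = 0.001, Δω = 0.2π) they give M ≈ 25.3 for the equiripple design versus M ≈ 36.2 for the Kaiser window:

```python
import math

def remez_order_estimate(d1, d2, dw):
    # Equiripple (Parks-McClellan) estimate:
    # M ~ (-10 log10(d1 d2) - 13) / (2.324 dw)
    return (-10 * math.log10(d1 * d2) - 13) / (2.324 * dw)

def kaiser_order_estimate(d, dw):
    # Kaiser window estimate (d1 = d2 = d):
    # M ~ (-20 log10(d) - 8) / (2.285 dw)
    return (-20 * math.log10(d) - 8) / (2.285 * dw)

dw = 0.2 * math.pi
m_remez = remez_order_estimate(0.01, 0.001, dw)
m_kaiser = kaiser_order_estimate(0.001, dw)
```

Note the equiripple formula exploits both tolerances (δ1 · δ2 appears inside the log), while the Kaiser formula is forced to use the tighter of the two.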
66
kaiser – Kaiser window
kaiserord – FIR order estimator for the Kaiser window method
remez – Parks-McClellan optimal equiripple FIR filter design
remezord – FIR order estimator for the Parks-McClellan optimal approximation method
MatLab Functions
67
Design Examples - LPF (1)
LPF Design Example
[δ1 = 0.01, δ2 = 0.001, (K = 10), ω_p = 0.4π, ω_s = 0.6π]
Estimate of M = 25.34 → M = 26; result: maximum error in the stopband = 0.00116: unsatisfactory.
68
Design Examples - LPF (2)
LPF Design Example (cont.)
[δ1 = 0.01, δ2 = 0.001, (K = 10), ω_p = 0.4π, ω_s = 0.6π]
Increase M to 27 [compare: M = 38 was required for the Kaiser window]; result: maximum error in the stopband = 0.00092: satisfactory.
69
Design Examples - BPF (1)
For an LPF, there are only two approximation bands. However, bandpass and bandstop filters require three approximation bands.
The alternation theorem does not impose any limit on the number of disjoint approximation intervals. Therefore, the minimum number of alternations is still (L + 2). However, multiband filters can have more than (L + 3) alternations, because there are more band edges.
Some of the statements so far are not true in the multiband case. For example, it is not necessary for all the local minima or maxima of Ae(ej) to lie inside the approximation intervals. Local extrema can occur in the transition regions, and the approximation need not be equiripple in the approximation regions.
70
Design Examples - BPF (2)
BPF Design Example
M = 74
L = M/2 = 37. The alternation theorem requires at least L + 2 = 39 alternations.
H_d(e^{jω}) = 0, 0 ≤ ω ≤ 0.3π; 1, 0.35π ≤ ω ≤ 0.6π; 0, 0.7π ≤ ω ≤ π.
W(ω) = 1, 0 ≤ ω ≤ 0.3π; 1, 0.35π ≤ ω ≤ 0.6π; 0.2, 0.7π ≤ ω ≤ π.
71
Design Examples - BPF (3)
72
Design Examples - BPF (4)
The approximations we obtained are optimal in the sense of the alternation theorem, but they would probably be unacceptable in a filtering application. In general, there is no guarantee that the transition region of a multiband filter will be monotonic, because the Parks-McClellan algorithm leaves these regions completely unconstrained.
When this kind of response results for a particular choice of the filter parameters, acceptable transition regions can usually be obtained by systematically changing one or more of the band-edge frequencies, the impulse-response length, or the error-weighting function, and redesigning the filter.
73
Comments on IIR and FIR Filters
We have discussed design methods for linear time-invariant discrete-time systems. What type of system is the best, IIR or FIR? Why give so many different design methods? Which method yields the best results? It is generally not possible to give a precise answer.
IIR: + closed-form design formulas (noniterative design); + efficient (usually a lower order is required); − only the magnitude response can be specified.
FIR: + precise generalized linear phase; − no closed-form design formula (some iterations may be necessary to meet a given specification).
74
Homework
7.6, 7.7
7.16, 7.34 (a)