Tutorials

Demystifying Digital Filters, Part 3

Part three of this series will lay the conceptual groundwork required for designing and analyzing IIR filters, the counterpart of FIR filters. This article will cover:

  • Filter kernels vs. filter coefficients

  • Difference equations

  • Transfer functions

  • Filter order

  • Poles, zeros, and stability analysis

demystifying_digital_filters_part_3_patches.zip
application/zip 2.88 KB
Download the patches used in this tutorial

IIR versus FIR

Recall from part one that FIR and IIR filters are quite similar in implementation — they are simply weighted average machines. The difference is that FIR filters require only feedforward loops (relying only on input samples) whereas IIR filters use both feedforward and feedback loops (relying both on input and output samples).

Why might you use an IIR implementation over an FIR one?

Advantages:

  • Lower latency and fewer coefficients needed to meet the same specifications as a comparable FIR filter

  • Easier to model analog filters

  • Easier to tweak parameters on the fly

Disadvantages:

  • Less numerically stable (feedback loops can cause unbounded growth)

  • Nonlinear phase response

Despite their disadvantages, IIR filters are a common choice because they are less memory-intensive and many applications do not require linear phase responses.

Kernels, coefficients, and the difference equation

Remember that filters are just weighted average machines, and the work of filter design is to define the weights used in that machine. In FIR filters, we referred to these weights as the filter kernel, but they can also be called filter coefficients. Since the term kernel is closely tied to convolution, and an IIR filter is not implemented as a direct convolution with a finite kernel, we stick to calling its weights coefficients.

The word “coefficient” makes sense when you think about the relationship these weights have with the generalized equation for the output y of an FIR filter:
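y[n] = \sum_{k=0}^{M-1} h[k] x[n-k]

(Here M is the length of the kernel h.)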

Notice that once you expand out the summation, the kernel values (h[n]) act as coefficients for each of the terms from input signal x. For example, a kernel with three values expands to:
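y[n] = h[0] x[n] + h[1] x[n-1] + h[2] x[n-2]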

Recall that another word for the generalized formula for computing outputs of a filter is a difference equation. You know what difference equations look like for FIR filters, but what about IIR filters?

Let’s try an example:

The summing and multiplying to find y[t_n] is:

With IIR filters, we distinguish between feedback and feedforward coefficients. By convention, feedback coefficients, given the letter a, are written with negative signs in front. Feedforward coefficients are given the letter b. The subscript of the coefficient is associated with the delay (in units of samples).

Replacing values with variables gives:

Gathering up terms into sums:

Finally, for M feedforward coefficients and N feedback coefficients, you can write more generally:
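y[n] = \sum_{m=0}^{M-1} b_m x[n-m] - \sum_{k=1}^{N} a_k y[n-k]

(The feedback sum starts at k = 1 because, by convention, a_0 is normalized to 1 and belongs to the output term itself.)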

Seeing the prominent subtraction of the two summations should now explain why the equation is called a “difference” equation!
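To make this concrete in code, here is a minimal, deliberately naive Python/NumPy sketch of the general difference equation. The coefficient values at the end are made up purely for illustration and don't come from any patch in this series.

    import numpy as np

    def iir_filter(x, b, a):
        """Evaluate y[n] = sum(b[m] * x[n-m]) - sum(a[k] * y[n-k]) sample by sample.
        b holds the feedforward coefficients; a holds the feedback coefficients,
        with a[0] assumed to be 1 (it is never used below)."""
        y = np.zeros(len(x))
        for n in range(len(x)):
            # feedforward part: weighted current and past inputs
            for m in range(len(b)):
                if n - m >= 0:
                    y[n] += b[m] * x[n - m]
            # feedback part: weighted past outputs, subtracted per the a-convention
            for k in range(1, len(a)):
                if n - k >= 0:
                    y[n] -= a[k] * y[n - k]
        return y

    # Made-up first-order example: y[n] = 0.5 x[n] + 0.5 x[n-1] + 0.9 y[n-1]
    impulse = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
    print(iir_filter(impulse, b=[0.5, 0.5], a=[1.0, -0.9]))

(In practice you would reach for scipy.signal.lfilter(b, a, x), which computes the same result far more efficiently; the loop above is only meant to mirror the difference equation line by line. Notice that the impulse response above decays but never exactly reaches zero, which is the "infinite" part of IIR.)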

Another new representation

So far we’ve discussed a number of interrelated ways to represent a given filter. If you know any one of the following, you can compute any of the others, assuming the filter is LTI:

  • Impulse response

  • Frequency response

  • Step response

  • Filter kernels/filter coefficients

  • Difference equation

Besides the representations you’ve learned so far, there is yet another one called the transfer function. The transfer function is the key to analyzing filter properties you may have heard of before, including:

  • Filter order

  • Poles and zeros of a filter

  • Filter stability

Why wait until now to look at transfer functions? Well, since FIR filters don’t run into the same stability problems as IIR filters, transfer functions are much more interesting to analyze for IIR filters. Furthermore, poles and zeros are used in IIR design and not so much in FIR design.

Before discussing these filter properties, let’s first look at how you can find the transfer function from the difference equation using a very close relative of the Fourier transform — the Z-transform.

The Z-transform

Remember that the Fourier transform is a way to “translate” between a frequency and time representation of a signal. To do this, the Fourier transform uses sinusoids as a building block by mixing together sine waves of different frequencies and amplitudes to represent any other signal. In the world of math, this “mixing” is called a linear combination and sine functions are considered the basis. What if, instead of sine waves, you could use another kind of function as a basis?

The idea of the Z-transform is to use a complex exponential function, represented by the letter “z”.
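z = A e^{i * ϕ}

Following Euler's formula, this is the same as z = A (cos ϕ + i sin ϕ).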

In this equation, A represents some amplitude, i is the imaginary unit, and ϕ (phi) is an angle.

Complex exponentials may be intimidating at first, but they’re incredibly practical for two reasons:

  • It’s often easier to do operations on exponential functions compared to trigonometric ones.

  • Complex numbers wrap up two pieces of information into one. (You could think of a complex number as two-dimensional, versus a one-dimensional real number.)

The Z-transform is defined as follows:
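Z{x[n]} = \sum_{n=-\infty}^{\infty} x[n] z^{-n}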

(Note for completeness’ sake: There are two similar forms of the Z-transform, the bilateral transform and the unilateral transform. The form above is the bilateral one, but the distinction between them won’t matter for the scope of our discussion.)

The left hand side of the equation is saying that the Z-transform operates on a signal x that is indexed by integers n.

On the right hand side, the transform is defined as a sum over values of n from negative to positive infinity. This just means that every value of x is considered; if you had 12 samples in x, for example, you’d have a sum of 12 terms. Each of the terms in the sum is a value of input x (at index n) multiplied by the complex exponential z raised to the negative n power.

From this you can see what makes the Z-transform so similar to the Fourier transform: it represents an input signal as a sum of many sinusoids (only this time, those sinusoids are wrapped up in a complex exponential function). In fact, if you set A = 1 so that z = e^{i * ϕ} lies on the unit circle, the Z-transform reduces to the definition of the complex discrete Fourier transform. The only difference between the two is that pesky amplitude A in the Z-transform!

What does the Z-transform of a simple example signal look like? Let’s use a signal x that is two samples long. (Note the new shorthand for the Z-transform, too. A capital letter with an argument of z represents the Z-transform.)
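Plugging those two samples into the definition gives:

X(z) = x[0] z^{0} + x[1] z^{-1}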

You can drop away z^0 since that just equals 1, and then you’ll find:
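X(z) = x[0] + x[1] z^{-1}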

Another note: Now you see why block diagrams show a single sample delay with z^(-1) notation! This is related to what is known as the shift (or time-delay) property of Z-transforms, which you can learn more about (along with other properties of the Z-transform) here.

If you ever want to avoid doing the work of performing a Z-transform by hand, you can try using Wolfram Alpha’s Z-transform calculator.

Finding the transfer function

The basic form of a transfer function H(z) is the ratio of the Z-transform of the output signal y to the Z-transform of the input signal x.
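In symbols:

H(z) = Y(z) / X(z)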

To see how you might find a real-world transfer function given a difference equation, let’s walk through an example of an IIR filter’s difference equation.

First, gather the output signal terms on the left and the input signal terms on the right.

Next, write each term as its Z-transform.

Do a little rearranging, taking advantage of the linearity property of the Z-transform so that X(z) and Y(z) can be factored out.

One more step...

Congratulations! Meet your very first transfer function.
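To see those steps all in one place, here they are again for a generic first-order difference equation; the coefficients b_0, b_1, and a_1 are placeholders rather than values from a specific patch.

    y[n] = b_0 x[n] + b_1 x[n-1] - a_1 y[n-1]              (difference equation)
    y[n] + a_1 y[n-1] = b_0 x[n] + b_1 x[n-1]              (outputs on the left, inputs on the right)
    Y(z) + a_1 z^{-1} Y(z) = b_0 X(z) + b_1 z^{-1} X(z)    (Z-transform of each term)
    Y(z) (1 + a_1 z^{-1}) = X(z) (b_0 + b_1 z^{-1})        (factor out Y(z) and X(z))
    H(z) = Y(z) / X(z) = (b_0 + b_1 z^{-1}) / (1 + a_1 z^{-1})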

Filter order

Transfer functions are often represented in a factored polynomial form. In other words, the “shape” of a factored transfer function will look something like:
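H(z) = g * [(1 - q_1 z^{-1})(1 - q_2 z^{-1}) ... (1 - q_M z^{-1})] / [(1 - p_1 z^{-1})(1 - p_2 z^{-1}) ... (1 - p_N z^{-1})]

(The q's, p's, and g here are just placeholder constants.)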

There will be some constant gain factor in front and the numerator and denominator will be a bunch of terms multiplied together where each term is 1 minus some factor times z^(-1).

Your first transfer function, for instance, slightly rearranged to be in factored form is:
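Using the placeholder first-order coefficients from the sketch above, that would look like:

H(z) = b_0 * (1 + (b_1 / b_0) z^{-1}) / (1 + a_1 z^{-1})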

In this case, the numerator and denominator each have only one factor, but there is no limit to how many there can be. This brings us to the concept of order: the filter's order is the order of its transfer function. To find the transfer function order, find the degree of the numerator and the degree of the denominator (by counting how many z^(-1) factors appear in each part of the factored form). The maximum of those degrees gives the transfer function order. If, for instance, there were three factors in the numerator and four factors in the denominator, the transfer function would be “fourth-order” since max(3, 4) is 4.

In general, a higher order means a filter that can more closely match a desired response (for example, with a steeper cutoff slope). At the same time, a higher order means more computational effort is needed. Can you think of why longer delay lines are needed for higher order? (Hint: Think about what the difference equation would look like as more z^(-1) terms are added. How many previous samples will the calculation need?)

With the filterdesign, filterdetail, and plot~ objects in Max, you can play around to see the effect of filter order on the corresponding frequency response.

Poles, zeros, and stability

Let’s make up a new transfer function that is higher-order and a bit more exciting to analyze.
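For instance, a made-up fourth-order transfer function whose roots match the values discussed below might look like:

H(z) = [(1 - 0.4 z^{-1})(1 - 0.3 z^{-1})(1 + 0.15 z^{-1})(1 - 0.35 z^{-1})] / [(1 - 0.75 z^{-1})(1 - 0.25 z^{-1})]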

First question: what if one of the factors on the top was zero? For instance, what if z was 0.4, 0.3, -0.15, or 0.35? In other words, what if z was a root of the numerator polynomial? The factor in parentheses would become (1 - 1), or zero, making the entire transfer function equal zero at that point. These values are the so-called zeros of the transfer function, and they are associated with the feedforward elements of the filter. Can you convince yourself of this by revisiting the first transfer function derivation?

Similarly, what if one of the factors on the bottom was zero? If z was one of the roots, 0.75 or 0.25, the bottom would equal zero, making H(z) undefined. These values are the poles of the transfer function, and they are associated with the feedback part of the filter.

These poles and zeros are often illustrated on a pole-zero plot. Remember that z is a complex number, so plotting it happens on the z-plane (the complex plane), where the horizontal axis is the real part and the vertical axis is the imaginary part. Note that in our example, all of the poles and zeros have imaginary parts equal to zero, but this doesn't have to be the case (and it often won't be).

To explore the pole-zero plot, the zplane~ object in Max is a useful tool. You can manipulate the poles (shown as x’s) and zeros (shown as o’s), and it will output filter coefficients that filtergraph will understand and plot on a frequency response graph.

As you play with zplane~, you will notice that the poles and zeros mirror across the horizontal axis. This is explained by the complex conjugate root theorem, which says that if a polynomial with real coefficients has a complex root a + bi, then its conjugate a - bi is also a root. Since the filter coefficients are real, poles and zeros always come in these conjugate pairs.

Finally, you'll notice that pole-zero plots usually show the unit circle. Why? In short, it's a way to indicate where a filter will be stable. If any poles lie outside of the unit circle, the filter is unstable, and its output can grow without bound for essentially any input. (Poles exactly on the unit circle are the borderline case, like an oscillator that rings forever without decaying. For a mathematical proof, check out this resource from Julius O. Smith III.) This explains why FIR filters are always stable: they can only have poles at the origin.
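If you'd rather find the poles and zeros numerically than by hand, here is a minimal Python/NumPy sketch using the made-up example transfer function from above:

    import numpy as np

    # Multiply out the factored numerator and denominator of the example above.
    # Each [1, -r] is one (1 - r z^-1) factor, written in descending powers.
    b = np.polymul(np.polymul([1, -0.4], [1, -0.3]),
                   np.polymul([1, 0.15], [1, -0.35]))   # feedforward side
    a = np.polymul([1, -0.75], [1, -0.25])              # feedback side

    zeros = np.roots(b)   # expect 0.4, 0.3, -0.15, 0.35
    poles = np.roots(a)   # expect 0.75, 0.25

    print("zeros:", zeros)
    print("poles:", poles)

    # Stable only if every pole lies strictly inside the unit circle,
    # i.e. every pole has magnitude less than 1.
    print("stable:", bool(np.all(np.abs(poles) < 1)))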

Conceptual map

As you reflect on the concepts introduced so far in this series, the following illustration may be a helpful visualization of the relationships between them.

Up next

In Part four, you will build on the concepts introduced in this article by learning about design and implementation of IIR filters.

References

Theory and Application of Digital Signal Processing by Lawrence R. Rabiner and Bernard Gold

Introduction to Digital Filters with Audio Applications by Julius O. Smith III

Learn More: See all the articles in this series

by Isabel Kaspriskie on March 16, 2021

dfwaudio's icon

Thanks a lot. That's absolutely amazing. I hope I will understand everything. :)

Isabel Kaspriskie's icon

Nice catch! Should be fixed now.

👽'tW∆s ∆lienz👽's icon

okokokok ok 💆‍♂️ ...ok... i have a question 🙋‍♂️
i think i'm stuck at the Z-transform(the rest of the article seems like it will make perfect sense if i just get this one thing). apologies in advance if this is obvious or basic or perhaps(i feel this is most likely) i might be looking at it from a completely weird angle, but in the article where you do an example with just two samples and it boils down to this equation:

👆 i didn't really understand where to go from there... can i literally fill the definition of 'z' from the top of the article into that last equation where 'z' appears? but then in that case it might look something like this:
X(z) = x[0] + ( x[1] * (A *(cos(phi) + i *sin(phi)))^-1)
or alternately
X(z) = x[0] + (x[1] * (A * e^(i*phi))^-1)
...at which point i wonder where do i get the values for Amplitude(A) and phase/angle(phi) from? it probably isn't from the sample x[1] itself? (i suppose it could come from an FFT over the entire vector, but then we'd be using a FastFourier-transform to work a Z-transform, that can't be right 😂 ..i sound ridiculous, help me)

Isabel Kaspriskie's icon

@Raja

You could fill in the definition of z and it wouldn't be technically incorrect, but it wouldn't really be all that useful since the equation is in terms of z. In other words, remember that X(z) = function(z).

But why do we care about z and write that whole X(z) = function(z) in the first place? Mostly here it's a tool to help us get to the transfer function which then gets us to poles/zeros/etc. or from the poles/zeros/etc. back to a difference equation.

One thing that the article doesn't explain in depth is complex numbers (i.e. z) and where amplitude and phase come into play. The part that we care about here is the z-plane (a.k.a. the complex plane), because it's where we plot special values of z, like the poles and zeros. The amplitude and phase give the polar coordinates of values in the z-plane: the amplitude is the radius, and the phase is the angle.

To give an example: What if I told you that my filter has a zero at cos(pi/3) + i sin(pi/3)? That means it's a distance of 1 from the origin, and the angle it makes with the positive horizontal axis is pi/3 radians. Then I could plot that on a z-plane. (If you want to think in Cartesian x-y coordinates, you find that the horizontal/real axis is A*cos(phi) and the vertical/imaginary axis is A*sin(phi).)

Knowing what to Google about all this complex number manipulation is half the battle, and the thing to Google here is Euler's formula. It is where the relationship between the complex number z, the complex exponential Ae^(i * phi), and the related cosine/sine representations comes from. https://en.wikipedia.org/wiki/Euler%27s_formula :)

👽'tW∆s ∆lienz👽's icon

oh 🤯 the 'z' in z-transform is referring to 'z-plane' (🤦‍♂️ same thing as the complex plane)
🤣 i didn't make this simple connection... (and i've been this way for years having read about z-transform elsewhere)..i probably should've known, given mention of the zplane~ object but... ok i get it now 😊 👉"the equation is in terms of z"

Thanks so much for another superb article! (and thanks for the extra help and link: i watched a video on yt of Julius Orion Smith explaining how Euler's formula gets to the heart of it all, felt like i sort of grasped the relationships in a general way, but need to comprehend the specifics a bit more) 🍻

Ernest's icon

Hi Isabel, I enjoyed reading that, but I'm confused about your depiction of feedforward and feedback. I had thought, conventionally, if you are factoring a previous input sample with the current one, then it is feedback not feedforward. Just to be sure I have actually gone insane, I looked on the web and found this, which is how I remember it normally being talked about, and is the other way around than you appear to be saying, with feedback going right to left, not left to right. This means of course Raja the Resident Alien must be right to say I am insane, as everyone in the past must be too.

Terms do change in meaning over time, and C74 swaps a0/a1 with b0/b1 in its filter equations too, so it could be some special terminology for this software that's just decided to say things opposite to everyone else for some good reason I don't know, and obviously I am not the expert, Raja the resident Alien is now your expert, I hope you enjoy him, because it's too much for me, I'm gone lol. I guess there must be some reason to swap the terminology but I'm too insane to understand the reasons any more, Raja has told me I have some mental disorder, so I must be wrong, and just don't belong here any more.

Isabel Kaspriskie's icon

Hi Ernest,

Feedback is when you are using output samples and "feeding them back" into the input of the system. Feedforward is when you take an input sample, "bypass" the system, and mix it in to the output.

In your pasted image, you are correct that that is feedback because the arrow begins after the D(s) and G(s) systems and brings the samples back.

Also, in regards to using a vs b to refer to the feedback and feedforward coefficients — unfortunately there's no good convention for which variable should refer to which kind of coefficient. This is something that has to be taken from the context of whatever article/book/paper/etc. you're referring to. Some textbooks use a to mean feedforward and some use it to mean feedback. In this article series, I'm following the conventions of Julius O. Smith's textbooks because I provide them as reference links. There is no "wrong" choice, really.

Ernest's icon

well maybe I am just not used to this representation, or misunderstand what it's trying to show, but it doesn't look like feedback to me. Sorry to be so stupid.



Luigi Castelli's icon

Ok, wait a second here...

Nowadays there is a well established convention as to how to refer to feedback and feedforward coefficients in IIR filters indeed. The vast majority of DSP literature uses 'b' for feedforward and 'a' for feedback.
Julius O. Smith follows the same convention as many other widespread references do (Matlab, Oppenheim-Schafer, etc...)

So, Ernest has not gone insane...

For example, here is an article by Christopher Dobrian where the author has to specifically change this convention in order to be consistent with Max/MSP.
(He explicitly acknowledges it in the article)
https://dobrian.github.io/cmp/topics/filters/filterequation.html

So there is a well established convention and there is no good reason for Max/MSP to reverse it.
Having said this, it is just a convention after all so it's no big deal. However I feel it would be less confusing for everyone if Max/MSP would also follow suit.

Isabel Kaspriskie's icon

@Ernest

In the diagram there, the feedback coefficients are multiplying samples from the y[n] signal (which are the output of the system) and the feedforward coefficients are multiplying samples from the x[n] signal (which are the input of the system). Perhaps the confusion is in the fact that the diagram has the input and output signals lying in parallel to each other. If you rotate the diagram counter-clockwise and look at the place where values are summed, it may be more visually obvious that the feedback is going "backwards" and the feedforward is going "forwards."

@Luigi

The reason I'm being careful to mention that there's no accepted convention is that people can get a little heated about which is the right way. While not a rigorous statistical survey of convention, Nigel Redmon's post on EarLevel Engineering seems to suggest it's not quite fair to say there's a well established choice. As you mention, it's no big deal, but I thought it was worth clarifying why I'm being careful about it. :) https://www.earlevel.com/main/2017/01/04/about-coefficient-conventions/

Ernest's icon

Are the output samples denoted as moving in the opposite direction as the input samples? Because otherwise it still looks to me like the feedback is actually feedforward with a two cycle delay. Maybe it's the direction of the arrows or something. It's true this is not the depiction I am used to seeing in DSP, which is that a plus circle or minus circle normally has only multiple inputs and one output by the conventions I know, but there's this floating multiplication sign with no arrows attached to it, so maybe the style of representation has changed.

Ernest's icon

Well I am not heated by it lol. It appears I am past it though. I do remember debates on how to draw DSP diagrams a decade ago when computer flow simulation first started, but I thought they had all been decided now. I'm just too old lol, and one reaches a certain point where things change too much to make sense any more.

There was a time when breaking into the capitol building with threats to kill the vice president would have been regarded something the president should try to stop while its happening. Apparently not any more. Its all beyond me lol. All I can think any more is, as protecting the Vice President from active attacks on his life is not part of the President's responsibility to protect the nation, then any explanation at all can now be considered reasonable, up to and including excuses for ending the world. Im just going to stick to hospital beds until either the world or me is over. Best wishes.

Roman Thilenius's icon


as usual, the (undocumented) unconventional design of some max objects - here biquad - does not really help to understand things.

but there is no solution for that: you can't change an object's inlet which is in use by 100,000+ people for some 25 years.

it reminds me of the situation with my analog mixing desk, where L and R to the computer's main output have been switched, so that i had to switch the L and R speaker cables.
and when i now try to test a 4-channel crossfader with a second pair of speakers, my L and R from max/msp to the second speaker set is always wrong, and it remains wrong even if you switch the speaker cables, though that can't be.

and who does not know them, these MSP objects which keep playing audio to the dac~ even after you disconnected them and even after you closed and reopened the patch.