up next…

13/12/2012 § Leave a comment

Hit list for next update.


  • Add/remove Effect Chains
  • Save/Load presets


  • Scale the freq domain analysis to proper units 😛
  • Use keyboard arrow keys to change the position of (conjugate) poles/zeros
  • The amount by which the location shifts must be customizable
  • There must be a way to enter a locus (equation) along which (conjugate) poles/zeros will move by a (customizable) distance on keyboard key press

All this by the eve of 25th Dec.

And if I could spare some more time maybe I’ll play with realliZation and have a sneak peek into the world of DSP.

Till then, keep converting coffee into code. 😛

“Programmers are machines that convert coffee into code”


the Convolution – Part 2 of 2

01/11/2012 § 1 Comment

Part 1 was about ‘feeling’ convolution.

In this post I will write about implementing convolution.

So, let’s take a look at convolution again: it’s the process of finding the output at any given instant, provided we know the channel’s impulse response and the resultant of past excitations.

From now on we’ll constrain ourselves to the Z domain only. I’ll get back with detailed explanations later, but here is the code.

If the Z domain representation of a 2nd order filter is of the form:

H(z) = {a0 + a1*z^(-1) + a2*z^(-2)} / {1 + b1*z^(-1) + b2*z^(-2)}

a0, a1, a2, b1 and b2 are called filter coefficients. The roots of the numerator give the locations of the ‘zeros’ and the roots of the denominator give the locations of the ‘poles’.

Octave/Matlab script:

% 'out' is array that stores output
% 'in' is array that stores input
% size of 'out' and 'in' must be same

hz1=0.0; % stores last input excitation
hz2=0.0; % stores second last input excitation
hp1=0.0; % stores output of last excitation
hp2=0.0; % stores output of second last excitation

for i=1:length(in)

   % the convolution step
   out(i) = a0*in(i) + a1*hz1 + a2*hz2 - b1*hp1 - b2*hp2;

   % setup history
   hz2 = hz1;
   hz1 = in(i);
   hp2 = hp1;
   hp1 = out(i);

   % you might want to scale your o/p here...
end

C/C++ code

// all data types are float (or double if you like, can be even an int)
// 'pOut' and 'pIn' are pointers to the output and input buffers resp.
// 'iBufferSize' is the size of the buffer, which is an int, of course
// a0, a1, a2, b1, b2 are filter coefficients as described in the Z domain
// equation above

float hz1=0.0; // stores last input excitation
float hz2=0.0; // stores second last input excitation
float hp1=0.0; // stores output of last excitation
float hp2=0.0; // stores output of second last excitation

float out = 0.0f; // current output sample

for (int i = 0; i < iBufferSize; i++)
{
   out = a0*pIn[i] + a1*hz1 + a2*hz2 - b1*hp1 - b2*hp2;
   hz2 = hz1;
   hz1 = pIn[i];
   hp2 = hp1;
   hp1 = out;

   // scaling the o/p
   if (out > 17000)
      out = 17000;
   if (out < -15000)
      out = -15000;

   pOut[i] = out;
}

I will upload complete test codes soon, which will run ‘right off the shelf’.

the Z domain

01/11/2012 § 1 Comment

Alrighty, we are getting closer to our first filter. Sit tight!

time domain

So what is this domain business by the way, you ask. Well, that’s just a ‘name’ for a method of ‘representation’ of a signal. I’ll stick to discrete signals. When we say ‘time domain representation’ we are talking about the variation of a signal with respect to time. Simple, isn’t it?

frequency domain

You might have guessed it by now – it’s the variation of the energy of a signal with respect to frequency. Why do we need it? Well, you might use this information to make your music ‘sound’ better – say, when you boost the Bass you are literally boosting signal energy in the lower frequencies. Or let’s take another example: in communication systems there is often addition of narrow band noise (noise only in a given frequency range); engineers remove it by passing the signal through a notch filter which attenuates the unwanted frequency. This may be accompanied by some loss of information. Besides, every filter you can think of does its work in the frequency domain.

conversion from time domain to frequency domain

A long time ago, not so long, but quite long… there lived a mathematician in France, Joseph Fourier. He gave us a way to see a given signal in the ‘frequency domain’. The way I see it, a generalized version maybe, any signal can be represented as the sum of a series of any other signal (I am yet to claim my patent). The Fourier transform uses an infinite series of sines/cosines to represent a given time domain signal, and that’s how you can get a signal’s frequency response. I’ll not go deep into it, because we will not quite use it. Btw, if you ever want to do a Fourier transform, don’t write the code yourself – there are some excellent open source libraries available for it.

the Z domain

Like frequency and time domain, z domain is again a way of representing digital signal. You may find really good explanations if you search for it. But here are the key points:

  • The horizontal line in the z domain is the real axis whereas the vertical line is the imaginary axis.
  • Frequencies of a digital signal are mapped on the upper half of the unit circle.
  • From the previous point: a frequency f on the unit circle will be represented as:
                    ω = 2 * π * f / fs, where fs is the sampling frequency
  • ω (omega) is called the ‘angular frequency’.
  • Poles must be inside the unit circle for the system to be stable; that’s quite straightforward to notice. If a pole is outside the unit circle, the function in the time domain will never converge.
  • Complex poles/zeros must occur in complex conjugate pairs, otherwise the signal won’t be a real signal.
  • Zeros may occur outside the unit circle.
  • Poles boost a given frequency, i.e., the closer the pole to a given frequency on the unit circle, the more boost that frequency will have.
  • Zeros attenuate a given frequency, i.e., the closer the zero to a given frequency on the unit circle, the more attenuation it will have.
  • Engineers try to place poles and zeros in a manner that the system remains stable, boosts a given set/range of frequencies and attenuates others.

Effect on phase response is not yet very clear to me. Once I understand it fully I’ll update this post.


  • realliZation is an open source, lunch time project, which I am solely (so far at least) developing.
  • It’s made with Qt/C++. The initial idea was to see Z domain to time domain conversion when the user gives locations of poles and zeros on a GUI representation of the Z plane, but later I decided to use this tool as a test bed for the DSP theories that I discover as I am on an escapade with Sound Processing.
  • Octave/Matlab are not enough when you want to see how to ‘actually’ implement a DSP algo and not just see if it works superficially.
  • I can’t guarantee its behavior because it’s a tool that I developed for playing around with numbers – nothing serious.
  • You are welcome to contribute/use/test. And if you need any help regarding code, documentation or anything, just comment/email/ping me on Skype… whatever works for you.

I have covered most of the points. I might still be missing some. If so, please reply in a comment.

the Convolution – part 1 of 2

31/10/2012 § 2 Comments

This part is about the definition and the feeling of Convolution. Part 2 of 2 is the implementation (in a programming language).

There is a mathematical definition and then there is the visual definition.

To be honest – I can’t visualize what that integration or summation is really trying to do. So, let’s play with the basics.

what is convolution

You can get a ‘word-to-word’ definition out there somewhere. But let’s try to see it rather than trying to be verbally correct.

What happens to an ‘impulse’ when it passes through a ‘channel’ is called the ‘impulse response’ of that ‘channel’. The impulse response is just the way a channel behaves when it’s excited with a unit impulse. In a sense, the ‘impulse response’ models the ‘channel’. The process of knowing the channel’s behavior given the impulse response and the input signal is convolution. Pretty straightforward, right? Then how did scientists manage to get that ‘complicated’ formula for such a simple thing? You’ll know that once you manage to visualize it. 😉


To visualize convolution there are pretty good tools out there, but we want to learn the basics, so let’s not complicate the situation and use this tool. Forget the theory, get to the interactive Java applet, and let’s experiment. x[n] is the input signal and y[n] is the channel’s impulse response.

1. Choose x[n] as a unit impulse (second last from the right) and y[n] as a unit impulse (second last from the right). Convolve them; we observe that at n = 0 the output is 1, and 0 elsewhere.

2. Choose x[n] as a unit impulse and y[n] as the first signal from the left. Convolve them. This time we have non-zero output even at times beyond n = 0, even though the input excitation was given only at time n = 0. Why so? This is because the channel, y[n], tends to remember the input till 3 seconds after the input excitation is given. This is a kind of ‘memory’ that this channel has. Another way to see it is that the effect of a single impulse of excitation lasts for 4 seconds. How does it matter? Let’s see.

3. Keep y[n] the same as in the last experiment. Choose x[n] as the first signal from the left. This time the input excitation will not last for just 1 second but for 3 seconds. Don’t convolve yet! Try and think what should happen. The channel keeps a memory of each input till 3 seconds… that means the total excitation at a given instant will be the sum of the excitation produced by the current impulse and the memory of the excitation of previous impulses. Think about it for a while and then go ahead with the convolution. Can you see it? Can you feel it? If not, write a comment down there in the comment box and I’ll get back to you. And do not proceed till you can see it from the eye of your mind.

4. So you can feel it, cool! Try convolving all the different combinations and, before you see the result on screen, try to draw it on paper using your gut feeling. It’ll take some time but you’ll get it. You’ll also realize that if you interchange x[n] and y[n] the output remains the same. That means convolution doesn’t care which is the input and which is the channel response; all it knows is the basic maths which even you, now, understand. 🙂

5. Now that you can see it and feel it, let’s try to write this ‘gut feeling’ in the form of a mathematical expression. Not a big deal, really. Let’s see what we want to do:

  • the channel keeps memory
  • the total excitation at any given moment is the sum of the memory that the channel has and the excitation of the incoming signal
  • hmm… that means for the excitation at a given instant I need the value of the current excitation and the sum of the values of past excitations, which depends on how the channel ‘remembers’ them.
  • so, for a given instant n, I should know what has happened from the (n-1)th instant back to the earliest instant the channel can remember.
  • take a paper and a pen, and try to write what you can feel. I am running out of words to express my feeling; probably I’ll get back later when I can find words to express it. Once you are done you’ll realize that you are thinking what that ‘complicated’ formula wants you to think.

So, now you see how a simple gut feeling can take the form of a complicated mathematical equation.

let’s begin… the Basics

30/10/2012 § 1 Comment

Before I start, we need to know some basic stuff.

types of signals

A signal is a physical quantity which varies with time.

Here, the first thing that comes to mind is a wave. There are two kinds of waves, electromagnetic waves and mechanical waves. This is a good place to learn more about them. Sound is a mechanical wave, and when we are talking about making music with a computer we are – in a general sense – talking about mechanical waves. But we never actually manipulate the ‘mechanical wave’; we modify its ‘computer (digital) representation’. We’ll get to that in a minute.

So we know that sound is a mechanical wave, and we want to manipulate this wave using a computer. To represent a mechanical wave in a way that a computer understands, we have to sample it in time and amplitude. This leads us to our next topic.


sampling

Sampling is fairly simple to understand. There are various resources out there that explain it way better than I can. The take-away is ‘the sampling frequency’. It’s the frequency at which samples should be taken so as to be able to re-build the original analog wave. The sampling frequency must be more than twice the maximum frequency present in the analog signal, otherwise aliasing will occur. Generally we over-sample the analog signal so as to re-build the analog wave more precisely. In this process there is always some loss of information, but by sampling at the right sampling frequency engineers manage to re-build the initial signal well enough that it still makes sense.

Sampling in time gives us what we call a ‘discrete signal’. Sampling of the amplitude is also done so that the computer may store the incoming information; this is generally called quantization. The resulting representation is called a ‘digital signal’.

This complete process is called analog to digital conversion.

Now, as for how we get the analog signal back from this representation – here it is.

take-aways

  • sampling frequency
  • aliasing
  • digital and discrete signal

I have just brushed the basics. I’ll write more about them in future.

my target

30/10/2012 § Leave a comment

Apart from understanding DSP and developing that gut feeling, I am targeting:

1. Adding parametric EQs and other effects to Mixxx

2. Bringing up GUI for Effects Framework.

3. Making Mixxx LADSPA compatible

And, if possible, adding the feature of loading VST and LV2 plugins.
