1. The physical basis of frequency
The most basic physical interpretation of frequency is the number of cycles of a pure sinusoid per unit time. It's the same whether you count peak-to-peak, trough-to-trough, or between any two points with the same spacing. The units which make the most sense for this number are cycles/second, which we call Hertz or Hz. That is,
\(f\) is the physical frequency of the oscillation and always has units of Hz = cycles/second.
2. \(\omega\): physical frequency with different units
Using the physical units of Hz is great until we need to make calculations. For instance, if we started a sinusoid of frequency \(f\) at \(t = 0\) and wanted to know how many cycles had passed at time \(t = T\), including the nearest fraction of a cycle, we would use the simple formula
$$N_{cycles} = f[Hz] \times T[sec]. $$
As an example, at a frequency of 60 Hz (60 cycles/second) and time of 1.3 sec, we would have \(60 \times 1.3 = 78\) cycles. Partial cycles are allowed too; at 1.36 sec of a 60 Hz sinusoid we have 81.6 cycles which have passed--81 full cycles and 60% of a full cycle.
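The arithmetic above is trivial to check numerically; here's a quick sketch (the helper name `cycles_elapsed` is just for illustration):

```python
def cycles_elapsed(f_hz: float, t_sec: float) -> float:
    """Cycles (including the fractional part) elapsed after t_sec at f_hz."""
    return f_hz * t_sec

n = cycles_elapsed(60.0, 1.36)
full_cycles = int(n)            # 81 complete cycles
fraction = n - full_cycles      # 0.6 of a cycle, i.e. 60%
print(n, full_cycles, fraction)
```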
Now suppose we want to look at the actual value of the sinusoid (up to scaling by the magnitude) instead of just the fraction of the cycle. That is, we want an argument \(x(t)\) to put in so that at time \(t = T\), \(\sin(x(T))\) gives the value that a sine wave of frequency \(f\) has at that time.
We can't use \(x(t) = ft\) because after a full cycle (starting at zero), when \(ft = 1\), the sine wave is again zero, but \(\sin(1) \neq 0\). We need to modify the argument function to reflect this. You can convince yourself that
$$x(t) = 2\pi ft$$
is what we want. Adding the \(2\pi\) can get messy however. If I need the fourth derivative of \(\sin(2\pi ft)\) for some reason, I get an explosion of \(2\pi\)'s. So we choose to define
$$\omega = 2\pi f$$
as a way of simplifying the algebra. We know the fourth derivative of \(\sin(\omega t)\) is \(\omega^4\sin(\omega t)\).
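A quick numerical sanity check of why the \(2\pi\) has to be there: with the argument \(2\pi ft\), the sine returns to zero after exactly one cycle, whereas the naive argument \(ft\) does not (the 60 Hz value is just an example):

```python
import math

f = 60.0                  # physical frequency in Hz (example value)
omega = 2 * math.pi * f   # angular frequency in rad/sec
T_cycle = 1.0 / f         # duration of one full cycle

# With the 2*pi factor, the sine returns to zero after exactly one cycle:
print(math.sin(omega * T_cycle))   # ~0, up to floating-point error
# The naive argument f*t does not: after one cycle f*t = 1, and sin(1) != 0:
print(math.sin(f * T_cycle))
```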
Another nice thing about defining \(\omega\) like this is that it allows us to measure the frequency in terms of angles (in fact it's called the angular frequency). I actually didn't understand this in my basic courses and just followed the math, but it's the crux of the issue: sinusoids are circular functions in the sense that the \(x,y\)-coordinates of a point on the unit circle at angle \(\theta\) are exactly \(\cos(\theta),\sin(\theta)\) (measuring positive angles counter-clockwise). We can express these angles in radians and find that there are exactly \(2\pi\) radians in a single rotation about the unit circle. Or, equivalently, a single cycle of a pure sinusoid. That is,
1 cycle \(= 2\pi\) radians.
Recall that \(f\) has units of cycles per second, so it follows that \(\omega = 2\pi f\) must have units of radians per second. That is, \(\omega\) measures the rate at which the angle of the \(x,y\) coordinate changes in time. To summarize:
\(\omega\) measures the angular rate of the oscillation and always has units of rad/sec. \(\omega\) is always larger than \(f\) by a factor of \(2\pi \approx 6\).
So if you're designing a filter with a 60 Hz rolloff point, don't put a pole at \(\omega = 60\)! Put it at \(f = 60\), which is \(\omega = 2\pi \cdot 60 \approx 377\) rad/sec. If you're making or using a Bode plot, make sure you use \(f\) or \(\omega\) as appropriate for the design requirements!
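To make that concrete, here's a minimal sketch of a first-order low-pass with its rolloff at 60 Hz (the filter choice is hypothetical; the unit conversion is the point):

```python
import math

f_c = 60.0                   # design spec: rolloff at 60 Hz
omega_c = 2 * math.pi * f_c  # pole goes at ~377 rad/sec, NOT at 60

def gain(omega: float) -> float:
    """Magnitude of H(s) = omega_c / (s + omega_c) at s = j*omega."""
    return omega_c / math.hypot(omega, omega_c)

# At the rolloff frequency the gain is 1/sqrt(2), i.e. -3 dB:
print(gain(omega_c))
# A pole mistakenly placed at omega = 60 rad/sec rolls off near 9.5 Hz instead:
print(60.0 / (2 * math.pi))
```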
3. \(s\): The Laplace Frequency
The first thing we typically do when we run into a linear ODE is take the Laplace Transform. The LT is essentially a one-sided (negative times aren't included, since negative time makes no sense here) Fourier transform that allows the frequency to be any complex number. I've blogged about how the FT is just the continuous set of coefficients for expressing a function in a basis of sines and cosines (that is, the FT is a continuous analog of Fourier series), and the LT is the same thing, but now using \(s = \sigma + j\omega\) instead of just \(j\omega\). This means we now include damped sines and cosines (or sines and cosines with an envelope, if you want to think about it that way) in the basis set.
Then a funny thing happens: in control, we always set \(\sigma = 0\).
Which makes the LT the same as the FT! (assuming the function is zero for \(t < 0\)). This means that the ubiquitous \(s\) factors found in control are no different than \(j\omega\). I once saw this referred to as the "Joukowski Substitution", but I've lost the source of that name and can't find it again. To controls engineers it's no more than a basic rule:
$$s = j \omega.$$
Since \(j\) is a unitless constant (square root of minus one), \(s\) also has units of rad/sec, just like \(\omega\). It's worth remembering that \(s\) doesn't have units of Hz however; I've seen more than one textbook get that wrong!
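As a sketch (the pole value is hypothetical), evaluating a transfer function at a physical frequency means forming \(s = j\omega = j \cdot 2\pi f\), never \(s = jf\):

```python
import math

a = 10.0                  # hypothetical pole: H(s) = 1/(s + a), a in rad/sec
f = 60.0                  # frequency of interest in Hz
s = 1j * 2 * math.pi * f  # s = j*omega, with omega in rad/sec

H = 1.0 / (s + a)
print(abs(H))             # gain of H at 60 Hz
```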
4. \(z\): The Discrete Frequency
Of course, modern controllers are not often implemented in continuous time (a tragedy, but alas we'll save that rant for another post). The invention of the microprocessor basically killed analog controllers for all but a few special instances. Computers think in digital time and chop continuous time up into discrete instants to do so. To see how this works, consider the continuous time differential equation
$$\dot{x} = ax.$$
We want to sample it at a rate of \(N\) samples/sec. This is the same as \(1/N\) sec/sample, which we call the sampling time \(T_s\) (the rate \(N\) itself is called the sampling frequency, \(f_s\)). If we suppose we know \(x\) at time \(t = nT_s\) (\(n\) samples from the time we turned the computer on), then we can integrate the equation from \(t_0 = nT_s\) to \(t = (n+1)T_s\). For any \(t\) and \(t_0\) the solution is
$$x(t) = e^{a(t-t_0)}x(t_0),$$
thus, putting the discrete time in the result, we get
$$x[n+1] = e^{aT_s}x[n]. $$
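Note that this recursion is exact at the sample times, not an approximation. A minimal check (with a hypothetical stable pole \(a = -2\) and \(T_s = 0.01\)):

```python
import math

a = -2.0    # continuous pole, hypothetical stable example
Ts = 0.01   # sampling time: f_s = 100 samples/sec
x0 = 1.0

# Step the discrete recursion x[n+1] = exp(a*Ts) * x[n] for 1 second
x = x0
for _ in range(100):
    x = math.exp(a * Ts) * x

# Compare to the continuous solution x(t) = exp(a*t) * x0 at t = 1 sec
print(x, x0 * math.exp(a * 1.0))   # identical up to rounding
```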
We know we can represent the first equation using the LT variable \(s\) via \((s-a)X(s) = x_0\). It also turns out there's a discrete \(Z\)-transform which turns the shift operator \(qx[n] = x[n+1]\) into a complex variable \(z\): \(x[n+1] = zx[n]\). The full substitution turns out to be \((z-e^{aT_s})X(z) = zx_0\). The pole is in the same ultimate place regardless of which transformation we use (\(z\) and \(s\) are both complex variables, thus \(X\) has to be the same, up to a change of coordinates), so we conclude that the continuous pole \(s = a\) corresponds to the discrete pole \(z = e^{aT_s}\), or
$$ z = e^{sT_s}. $$
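One useful consequence of this map, sketched below with a hypothetical \(T_s\): the stable left half of the \(s\)-plane lands inside the unit circle in the \(z\)-plane, and the \(j\omega\) axis lands exactly on it.

```python
import cmath

Ts = 0.01   # hypothetical sampling time

def to_z(s: complex) -> complex:
    """Map a continuous pole s to its discrete counterpart via z = exp(s*Ts)."""
    return cmath.exp(s * Ts)

print(abs(to_z(-5 + 10j)))  # stable pole (Re s < 0): lands inside, |z| < 1
print(abs(to_z(+5 + 10j)))  # unstable pole (Re s > 0): lands outside, |z| > 1
print(abs(to_z(10j)))       # pole on the j*omega axis: lands on |z| = 1
```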
We should be able to recover the continuous time version as \(T_s \rightarrow 0\), so let's do that. We have
$$X(e^{sT_s}) = \frac{e^{sT_s}}{e^{sT_s}-e^{aT_s}}x_0 = \frac{x_0}{1- e^{aT_s-sT_s}} \approx \frac{x_0/T_s}{s-a},$$
where the scaling by \(T_s\) washes away in the continuum limit of the inverse transform. The important part is that the pole is in the right place.