This is quite an interesting discussion. It motivated me to take a little refresher on calculus and signal theory which I studied more than fifteen years ago, and I have to admit that what I expressed above is not correct.
I used the chapters on the Fourier Transform and Inverse Fourier Transform from the book "Digital Signal Processing" by Jonathan M. Blackledge, as well as Wikipedia and some other resources from ScienceDirect, in case you’re interested in the real math of it.
Let’s do some very simplified math (omitting phase, integrals, complex variables, etc.) for evaluating a signal in the time and frequency domains.
Statement
Our starting point would be the following definitions:
- t - time variable in time domain
- f - frequency variable in frequency domain
- A(t) - our original signal as an amplitude function of time. It is important to note that for a simple signal like a sine it is a periodic function, while for a music fragment it is usually a non-periodic function.
- F(f) - the original signal in frequency domain, as a continuous Fourier transform of the time-domain signal.
- FR(f) - the measured frequency response at the measurement rig.
- HpTF(t), HpTF(f) - the headphone transfer function, represented as operations or filter functions applied in the time domain and the frequency domain respectively.
- HRTF(t), HRTF(f) - the head-related transfer function, represented in the same way.
- FT(x), IFT(x) - continuous Fourier Transform and Inverse Fourier Transform operators.
- DFT(x), DIFT(x) - the discrete Fourier Transform and discrete Inverse Fourier Transform operators.
For simplicity, let’s ignore HRTF for now and imagine that we have a perfect measurement rig that is capable of measuring the signal at the eardrum.
We measure the frequency response as a way to evaluate the HpTF of a headphone because working with frequency-domain information is much easier, and the properties of the Fourier Transform and Inverse Fourier Transform guarantee that an operation applied in the time domain is reflected in the frequency domain accordingly, and vice versa; in particular, filtering (convolution) in the time domain becomes multiplication in the frequency domain. So, for instance:
HpTF(f) = FT(HpTF(t))
FR(f) = FT(HpTF(A(t))) = HpTF(f)*FT(A(t)) = HpTF(f)*F(f)
which we can use to calculate how the headphone affects the signal:
HpTF(f) = FR(f)/F(f)
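To make this concrete, here is a minimal numpy sketch (all numbers and the filter are made up for illustration, not taken from any real rig or headphone): broadband noise stands in for A(t), a short decaying FIR stands in for HpTF(t), and spectral division recovers HpTF(f) exactly as in the formula above.
```python
import numpy as np

rng = np.random.default_rng(0)
n_sig, n_ir = 4096, 64

# A(t): a stand-in "music" signal - just broadband noise for this sketch
a_t = rng.standard_normal(n_sig)

# HpTF(t): a made-up headphone impulse response (a short decaying FIR filter)
hptf_t = rng.standard_normal(n_ir) * np.exp(-np.arange(n_ir) / 8.0)

# The headphone applies its transfer function by convolution in the time domain
fr_t = np.convolve(a_t, hptf_t)             # what the "eardrum" receives

# FFT length long enough that linear convolution matches circular convolution
n_fft = len(fr_t)                           # n_sig + n_ir - 1
F_f    = np.fft.rfft(a_t, n_fft)            # F(f)
HpTF_f = np.fft.rfft(hptf_t, n_fft)         # HpTF(f), the "ground truth"
FR_f   = np.fft.rfft(fr_t, n_fft)           # FR(f), the measured spectrum

# Convolution theorem: FR(f) = HpTF(f) * F(f)
print(np.allclose(FR_f, HpTF_f * F_f))      # True (up to numerical precision)

# ... so the headphone's effect is isolated by division: HpTF(f) = FR(f) / F(f)
print(np.allclose(FR_f / F_f, HpTF_f))      # True
```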
In ideal circumstances (more on that below) we could use the Inverse Fourier Transform to go back from the frequency domain to the time domain without losing any information:
HpTF(t) = IFT(HpTF(f)) = IFT(FR(f)/F(f))
This means that indeed, if we could take a perfect measurement of the frequency response at the eardrum (as a complex quantity, i.e. magnitude and phase), it would represent all the characteristics of the time domain as well.
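Continuing the same made-up sketch, the inverse transform of FR(f)/F(f) gives back the time-domain impulse response we started from, which is the sense in which a complete (complex) FR carries the time-domain information:
```python
import numpy as np

rng = np.random.default_rng(0)
n_sig, n_ir = 4096, 64
a_t    = rng.standard_normal(n_sig)                                  # A(t)
hptf_t = rng.standard_normal(n_ir) * np.exp(-np.arange(n_ir) / 8.0)  # HpTF(t)
fr_t   = np.convolve(a_t, hptf_t)                                    # signal at the "eardrum"

n_fft    = len(fr_t)
HpTF_f   = np.fft.rfft(fr_t, n_fft) / np.fft.rfft(a_t, n_fft)  # FR(f) / F(f)
hptf_rec = np.fft.irfft(HpTF_f, n_fft)[:n_ir]                  # IFT back to the time domain

# The recovered impulse response matches the one we started with
print(np.allclose(hptf_rec, hptf_t))        # True (up to numerical precision)
```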
However, in reality there are several important constraints introduced both by mathematical properties of the Fourier transforms and physical properties of the measurement system.
Mathematical constraints
First of all, the original Fourier Transform (FT) and Inverse Fourier Transform (IFT) are defined for continuous analog signals over an infinite integration range (i.e. an infinite range of time and frequency values). They also have a simplified version for periodic signals defined on a finite range.
So, if we wanted to know the HpTF for continuous music signals, we would need to evaluate infinite integrals and would get a continuous, infinite analog spectrum as a result.
In practice we use the discrete versions, known as the DFT (or FFT) and DIFT (or IFFT) respectively. Going discrete means accepting a certain level of approximation: we work with a finite number of samples in time and a finite number of points in frequency.
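As a tiny illustration of that approximation, here is a sketch with a signal whose continuous FT is known in closed form (a decaying exponential; the decay rate and sample rate are arbitrary choices): the scaled DFT lands close to the analytic spectrum, but not exactly on it, because of sampling and truncation.
```python
import numpy as np

# Continuous-time signal with a known analytic Fourier transform:
#   x(t) = exp(-a*t) for t >= 0,  X(f) = 1 / (a + i*2*pi*f)
a  = 200.0                     # decay rate, 1/s (arbitrary choice for the sketch)
fs = 48_000.0                  # sample rate, Hz
N  = 8192                      # number of samples (~0.17 s window, signal has decayed)

t   = np.arange(N) / fs
x_n = np.exp(-a * t)           # sampled version of x(t)

# DFT scaled by the sample period approximates the continuous FT at f_k = k*fs/N
X_dft      = np.fft.rfft(x_n) / fs
f_k        = np.fft.rfftfreq(N, d=1/fs)
X_analytic = 1.0 / (a + 1j * 2 * np.pi * f_k)

err = np.max(np.abs(X_dft - X_analytic))
print(err)   # small, but not zero: sampling + truncation = approximation
```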
Quoting some of the important properties of continuous and discrete FT:
The signal with different time domain characteristics has different frequency domain characteristics; these are as follows:
• Continuous-time periodic signal is transformed to discrete non-periodic spectrum.
• Continuous-time non-periodic signal is transformed to continuous non-periodic spectrum.
• Discrete non-periodic signal is transformed to continuous periodic spectrum.
• Discrete periodic signal is transformed to discrete periodic spectrum.
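A small numpy check of the last two bullets (the sample rate and tone frequencies are arbitrary): a sine that fits a whole number of cycles into the analysis window acts as a discrete periodic signal and yields a clean discrete spectrum, while one that doesn’t behaves like a truncated non-periodic signal and its energy leaks across many bins.
```python
import numpy as np

fs, n = 48_000, 480                 # 480 samples at 48 kHz = exactly 10 cycles of 1 kHz
t = np.arange(n) / fs

# Discrete periodic signal (10 whole cycles fit the window) -> discrete spectrum:
# all the energy sits in bins k = 10 and k = n - 10 (and the DFT spectrum is n-periodic)
x = np.sin(2 * np.pi * 1000 * t)
mag_x = np.abs(np.fft.fft(x))
print(np.flatnonzero(mag_x > 1e-6 * mag_x.max()))    # [ 10 470]

# A 1010 Hz sine does not complete a whole number of cycles in the window,
# so it behaves like a truncated non-periodic signal and its energy leaks widely
y = np.sin(2 * np.pi * 1010 * t)
mag_y = np.abs(np.fft.fft(y))
print(np.count_nonzero(mag_y > 1e-3 * mag_y.max()))  # many bins, not just two
```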
When we measure the FR with sine sweeps, we are effectively measuring the discrete non-periodic spectrum of a continuous periodic signal.
Music, on the other hand, is a continuous non-periodic signal, so its correct frequency-domain representation is a continuous non-periodic spectrum. But our goal is not to measure the music, our goal is to get to the HpTF.
What does this say about our FR measurements? Two things:
- If we use a periodic signal for the measurement, we get a discrete non-periodic spectrum, and we can measure the HpTF as a discrete non-periodic function. Blaine WINS!
- As we move from a continuous signal to a discrete one, the discretization resolution becomes important if we want to lose as little information as possible (see the sketch below).
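A back-of-the-envelope sketch of that resolution point (48 kHz is just an assumed sample rate): the spacing between FFT bins is fs/N, so resolving narrow features requires capturing and analysing a long enough block.
```python
fs = 48_000                        # assumed sample rate, Hz
for n_samples in (1024, 65_536):   # length of the analysed block
    window_ms = 1000 * n_samples / fs   # how long we have to "listen"
    df_hz = fs / n_samples              # spacing between FFT bins = frequency resolution
    print(f"{window_ms:.1f} ms window -> {df_hz:.2f} Hz between bins")
# ~21 ms gives ~47 Hz between bins: a feature a few Hz wide can fall between them;
# ~1.4 s gives ~0.7 Hz between bins: the same feature is clearly resolved.
```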
This leads us to obvious physical constraints.
Physical constraints
These have been discussed many times and include, to name a few:
- HRTF variation
- Positional variation
- Ability to measure the FR at the eardrum
- Accuracy of the measurement rig and the errors introduced by the rig itself
- Precision of the digital processing used
Conclusion
Here are the takeaways from this exercise:
- FR at the eardrum measured with a sine sweep can indeed represent full information about how a headphone reproduces sound.
- Resolution of the FR matters. For example, the “trailing ends of tones” might show up in several areas of the plot as tiny spikes or dips, but if the plot is smoothed too much, or the measurement rig was not able to capture them, we won’t see this feature in the visualization (see the sketch after this list).
- The FR measurements that we make on a rig are still approximations. The more HRTF details we add and the more accurate the measurement, the better the approximation.
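Here is a small sketch of the smoothing point from the list above (the dip width, depth and the 1/3-octave window are arbitrary choices, not a claim about any particular headphone): a narrow 12 dB dip almost disappears once the curve is smoothed.
```python
import numpy as np

# A made-up magnitude response: flat, except for one narrow 12 dB dip near 3 kHz
f = np.arange(20.0, 20_000.0, 2.0)                    # frequency grid, Hz
raw_db = -12 * np.exp(-0.5 * ((f - 3000) / 30) ** 2)  # ~70 Hz wide Gaussian dip

# Crude 1/3-octave smoothing: average over +/- 1/6 octave around each point
smoothed_db = np.array([
    raw_db[(f >= fc * 2 ** (-1 / 6)) & (f <= fc * 2 ** (1 / 6))].mean()
    for fc in f
])

# The raw curve shows the full -12 dB dip; the smoothed one shows only a shallow dent
print(round(raw_db.min(), 1), round(smoothed_db.min(), 1))
```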
This brings us to a couple of other interesting topics, such as using smoothed FR graphs to describe the fine sonic features of a headphone, or using EQ filters (which have limited accuracy) to perfectly match one headphone to another, but I’ll leave it here for now.