Hi! I’m sorry if I’m posting this in the wrong section of the forums, but it seemed like the most appropriate one to me. And excuse me for this probably quite foolish question.
There’s a headphone I’ve been looking at for quite some time now that is attractive to me due to certain FR characteristics (as well as price and purported build quality), even though the overall tuning is… questionable.
I’m used to EQing my head/earphones, which is something I do with every pair I get anyway, so getting a set that basically requires EQ to sound good doesn’t seem like an issue to me in and of itself.
However, to my understanding, where drastic adjustments may be required, distortion can become a concern.
And, alas, this particular headphone doesn’t seem to perform well in that regard. What’s more, it seems like I might have to make significant boosts at the exact frequencies that show elevated distortion. But… might this impression be wrong?
Here I have to admit that I’m still pretty much a noob in these matters and have a pretty feeble grasp of many basic yet crucial related concepts, one of them being normalization.
When the frequency response graph is normalized to the target or compensation curve at different frequencies, the level difference between the two changes across the spectrum, and hence the EQ adjustments you would make to match the former to the latter also differ in gain values. For example, with the default normalization setting, to match the midrange to the DF HRTF with a -1 dB per octave tilt I have to boost ~250 Hz by ~7 dB and ~1500 Hz by ~6 dB while cutting 3000-8000 Hz by ~4 dB. But if I normalize to 500 Hz, I only have to cut the 3000-8000 Hz region by ~8-9 dB.
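To illustrate what I mean (all numbers below are made up, only loosely mimicking the figures above, and not taken from any actual measurement), my understanding is that changing the normalization frequency just adds a constant offset to the headphone-minus-target difference curve, so the two resulting EQ curves have the same shape and differ only by a flat gain:

```python
import numpy as np

# Made-up numbers for illustration only (not from any real measurement):
# headphone-minus-target difference in dB at a few frequencies,
# as shown with the grapher's default normalization.
freqs = np.array([250, 500, 1500, 3000, 8000])          # Hz
diff_default = np.array([-7.0, -5.0, -6.0, 4.0, 4.0])   # dB; negative = headphone below target

# Re-normalizing at 500 Hz just subtracts the 500 Hz value from the whole curve.
diff_at_500 = diff_default - diff_default[freqs == 500]

# The EQ needed to reach the target is the negative of each difference curve.
eq_default = -diff_default   # boost ~250 Hz and ~1500 Hz, cut 3000-8000 Hz
eq_at_500 = -diff_at_500     # mostly just cut 3000-8000 Hz

print(eq_default - eq_at_500)  # a flat +5 dB at every frequency, i.e. only an overall gain difference
```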
Intuitively, to my confused mind it seems like the first approach would lead to an increase in THD in the 150-2000 Hz range, while the second would only lead to a decrease in THD in the 3000-8000 Hz range without affecting the distortion in the low mids. But that doesn’t actually make much sense, does it?
The THD levels should end up the same in both cases, when normalized to the same SPL at the same frequency (say 500 Hz), right?
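To spell out why I think that should be the case (again with made-up numbers continuing the hypothetical sketch above; the level_match helper is just something I invented for illustration): once either EQ curve is applied and the volume is adjusted so both versions play at, say, 95 dB SPL at 500 Hz, the SPL at every other frequency should come out identical too, so the driver would be doing exactly the same work either way.

```python
import numpy as np

freqs = np.array([250, 500, 1500, 3000, 8000])        # Hz
raw_spl = np.array([80.0, 82.0, 79.0, 92.0, 90.0])    # hypothetical un-EQ'd SPL at some fixed volume, dB

eq_boost_mids = np.array([7.0, 5.0, 6.0, -4.0, -4.0])  # "boost the low mids" version
eq_cut_treble = np.array([2.0, 0.0, 1.0, -9.0, -9.0])  # "cut the treble" version (same shape, 5 dB lower)

def level_match(spl, eq, ref_freq=500, ref_spl=95.0):
    """Apply the EQ, then add whatever flat gain is needed to hit ref_spl at ref_freq."""
    eqd = spl + eq
    return eqd + (ref_spl - eqd[freqs == ref_freq])

print(level_match(raw_spl, eq_boost_mids))  # [95. 95. 93. 96. 94.]
print(level_match(raw_spl, eq_cut_treble))  # [95. 95. 93. 96. 94.] -> same SPL everywhere, so same THD?
```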
When THD measurements are performed at various SPLs, like 95 dB and 105 dB for example, do those values typically refer to the SPL at a single reference frequency like 1 kHz or 500 Hz, or do they represent a weighted average across the entire frequency range?
Notice how the headphone’s FR graph is normalized differently between the measurements in the blog article and on the squig.link page, which seems to be one of the main sources of my confusion.
Is the THD measurement from the article more representative of the second picture rather than the first? As far as I understand, that would be the case if it was performed at 95 dB SPL at 1000 Hz or 500 Hz (as the frequency response graphs from the same article would suggest?).
In short… Am I going to see a considerable increase in THD in the 200-2000 Hz range after equalizing the Austrian Audio Hi-X20 to conform to the DF HRTF with a -1 dB/octave tilt (or Harman 2018 filters - the difference should be relatively small in this range)? And whether I boost the 200-2000 Hz range or instead cut the 3000-8000 Hz range to get there - that should not matter in the end, right?
Thank you very much for your time, patience and attention!