Headphone THD levels and EQ

Hi! I’m sorry if I’m posting this in the wrong section of the forums, but it seemed like the most appropriate one to me. And excuse me for this probably quite foolish question.

There’s a headphone I’ve been looking at for quite some time now that is attractive to me due to certain FR characteristics (as well as price and purported build quality), even though the overall tuning is… questionable.

I’m used to EQing my head/earphones, which is something I do with every pair I get anyway, so getting a set that basically requires EQ to sound good doesn’t seem like an issue to me in and of itself.

However, to my understanding, where drastic adjustments may be required, distortion can become a concern.

And, alas, this particular headphone doesn’t seem to perform well in that regard. What’s more, it seems like I might have to make significant boosts to the exact frequencies that show elevated distortion. But… might this impression be wrong?

Here I have to admit that I’m still pretty much a noob in these matters and have a pretty feeble grasp of many related basic, yet crucial, concepts – one of them being normalization.

When the frequency response graph is normalized to the target or compensation curve at different frequencies, the relative level difference between the two changes across the frequency spectrum, and hence the EQ adjustments you would make to match the former to the latter also differ in gain values. For example, if I go by the default normalization setting and try to match the midrange to the DF HRTF with a -1 dB per octave tilt, I have to boost ~250 Hz by ~7 dB and ~1500 Hz by ~6 dB, while cutting 3000-8000 Hz by ~4 dB. But if I normalize at 500 Hz, I just have to cut the 3000-8000 Hz region by ~8-9 dB.

Intuitively, to my confused mind it seems like the first approach would lead to an increase in THD in the 150-2000 Hz range, while the second would only lead to a decrease in THD in the 3000-8000 Hz range without affecting the distortion in the low mids. But that doesn’t actually make much sense, does it?

The THD levels should end up the same in both cases, when normalized to the same SPL at the same frequency (say 500 Hz), right?

When THD measurements are performed at various SPLs, like 95 dB and 105 dB… do those values typically refer to the SPL at a single reference frequency like 1 kHz or 500 Hz, or do they represent a weighted average across the entire frequency range?

Notice how the headphone’s FR graph is normalized differently between the measurements in the blog article and on the squig.link page, which seems to be one of the main sources of my confusion.

Is the THD measurement from the article more representative of the second picture rather than the first? As far as I understand, that would be the case if it was performed at 95 dB SPL at 1000 Hz or 500 Hz (as the frequency response graphs from the same article would suggest?).

In short… am I going to see a considerable increase in THD in the 200-2000 Hz range after equalizing the Austrian Audio Hi-X20 to conform to the DF HRTF with a -1 dB/octave tilt (or Harman 2018 filters – the difference should be relatively small in this range)? And whether I boost the 200-2000 Hz range or rather cut the 3000-8000 Hz range to achieve that goal should not matter in the end, right?

Thank you very much for your time, patience and attention!

Not following you here – this doesn’t look like the same set of changes. Assuming you’re going to end up listening at the same volume irrespective of your approach, it’s the relative levels of the frequencies that matter. So if in the second case you’re only touching 3-8k, you’re not applying the same EQ curve.


Thank you very much for your response!

Yep, I wrote that part wrong. I’d still have to cut the upper mids and treble by ~4 dB in the first case, as well as boost ~1700 Hz in the second case.

Here I’ve got two EQ drafts. They’re preliminary and crude, and the end result is not quite the same. But it should be close enough.


The required negative preamp gain is quite different between the two as well.
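
For reference, here’s the rough rule of thumb I’ve been using to estimate that negative preamp gain – a sketch only: the band gains below are hypothetical, and overlapping filters can sum to more than the single largest boost, which is why real tools compute the composite curve instead.

```python
# Rough headroom estimate for a parametric EQ preset. Approximate on purpose:
# overlapping bands can sum to more than the single largest boost.
def required_preamp_db(band_gains_db, margin_db=1.0):
    """Negative preamp gain (dB) to avoid digital clipping from EQ boosts."""
    max_boost = max((g for g in band_gains_db if g > 0), default=0.0)
    return -(max_boost + margin_db) if max_boost > 0 else 0.0

# Hypothetical band gains loosely following the two approaches above:
boost_mids = [7.0, 6.0, -4.0]   # boost 250 Hz and 1.5 kHz, cut 3-8 kHz
cut_treble = [-8.5]             # only cut 3-8 kHz
print(required_preamp_db(boost_mids))  # -8.0
print(required_preamp_db(cut_treble))  # 0.0: cuts alone need no extra headroom
```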


I might be misunderstanding your question, but I’ll say that EQ in and of itself is not a source of distortion problems.

Think of simply turning up the volume. You are adding ‘energy’, but the headphone doesn’t start distorting just because you turned the volume up a little.

EQ is just a selective volume control for certain frequencies.
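
If it helps to see that concretely, here’s a sketch of what a single EQ band does under the hood – a standard peaking biquad per the RBJ Audio EQ Cookbook formulas (the frequency, gain and Q values are just examples I picked):

```python
# One peaking-EQ biquad (RBJ Audio EQ Cookbook): frequency-selective gain.
# A tone at f0 comes out ~gain_db louder; frequencies far from f0 pass through.
import numpy as np
from scipy.signal import lfilter

def peaking_biquad(f0, gain_db, q, fs):
    """Return (b, a) coefficients for a peaking EQ band at f0 Hz."""
    A = 10 ** (gain_db / 40)            # amplitude (note /40 for peaking EQ)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = [1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A]
    a = [1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A]
    return np.array(b) / a[0], np.array(a) / a[0]

fs = 48000
b, a = peaking_biquad(f0=300, gain_db=7.0, q=1.4, fs=fs)
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 300 * t)         # a 300 Hz test tone
y = lfilter(b, a, x)
# Level change at f0, skipping the initial transient: ~ +7 dB
print(20 * np.log10(np.abs(y[fs // 2:]).max() / np.abs(x).max()))
```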

Of course, any processing of a signal can have side effects, so I’m not saying EQ is immune to causing subtle changes.

I’m just saying that making the signal louder at certain frequencies does not directly create distortion unless you exceed the physical capabilities of the headphone.

Also, EQ is widely used in the music production process, so the signal you get, which you might think of as ‘pure’, has already been processed and EQ’d.


Thank you very much for your response!

I more or less understand what you said about EQ, I think. I don’t think of EQ as something that necessarily produces distortion in and of itself (though I think it may produce phase distortion in some cases, or contribute to increased distortion by boosting ultra-low or ultra-high frequencies, or by pushing the driver closer towards its excursion limits, exacerbating dynamic compression issues, if present, for example?).

But from what I’ve seen with regard to measurements, distortion tends to increase with volume. The rate of increase is nonlinear and varies across the range, which makes it difficult, if possible at all, to predict what effect a certain increase in volume would have on distortion.
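
For context, this is how I understand a THD figure to be computed – the levels of the harmonics relative to the fundamental of a test tone. A sketch with a synthetic signal, not a real measurement:

```python
# THD from a sine test tone: play a tone at f0, capture the output, compare
# harmonic levels to the fundamental. (Illustrative only; real measurement
# rigs typically use swept sines and average many frames.)
import numpy as np

def thd_percent(signal, fs, f0, n_harmonics=5):
    """THD as a percentage: sqrt(sum of harmonic powers) / fundamental."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    bin_of = lambda f: int(round(f * len(signal) / fs))
    fund = spectrum[bin_of(f0)]
    harms = [spectrum[bin_of(k * f0)] for k in range(2, n_harmonics + 2)]
    return 100 * np.sqrt(sum(h**2 for h in harms)) / fund

fs = 48000
t = np.arange(fs) / fs
x = np.sin(2*np.pi*500*t) + 0.01*np.sin(2*np.pi*1000*t)  # 1% 2nd harmonic
print(thd_percent(x, fs, 500))  # ~1.0
```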

Wouldn’t it be reasonable to expect higher distortion when listening at higher SPL? Especially in the frequency ranges that show spikes in distortion, which might be indicative of said limits in the physical capabilities of the system. I suspect that increasing the SPL across the entire frequency range may put more strain on the driver and have a larger effect on distortion than increasing the SPL in a few areas of the frequency range.

For example, suppose both headphones show similar amounts and patterns of distortion in the midrange @ 95 dB (though I believe it’s very important what this value represents – the SPL at a single frequency like 1000 Hz, or a weighted average across the entire audible range), but one has considerably lower midrange levels relative to the rest of the frequency range (let’s say around 10 dB, though that’s maybe somewhat exaggerated), which may merit raising that midrange with EQ by those 10 dB… Shouldn’t the distortion in the mids then be expected to rise, similar to how it would rise if the overall volume was increased by 10 dB to 105 dB SPL?
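
If my dB arithmetic is right, the drive increase within the boosted band is the same either way (a quick sanity check, nothing headphone-specific):

```python
# A +10 dB EQ boost raises the drive voltage in that band exactly as much as
# a +10 dB volume increase does; the difference is that the rest of the
# spectrum stays at the lower level, so the total strain on the driver is less.
db = 10.0
print(10 ** (db / 20))  # ~3.16x drive voltage in the boosted band
print(10 ** (db / 10))  # ~10x power in the boosted band
```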

And depending on the EQ adjustments being made, one might end up wanting to turn the overall volume up higher.

I think one of my main issues might be not understanding the dB normalization that the squig.link tool defaults to, as opposed to frequency normalization.

To my understanding, the frequency response graphs in the articles seem to be normalized to frequency. Looking at them, the 200 to 2000 Hz range on the HD600 and the Hi-X20 is reasonably close to the compensation target and to each other.


But looking at them on the same person’s squig.link page, where they’re normalized differently, the gap is way more drastic.

Let’s say I wanted to match the Hi-X20’s midrange to that of the HD600. Looking at these graphs gives the impression that I would need to make a considerably more significant boost to the former’s lower mids to achieve that: ~7 dB around 300 Hz and ~5 dB around 2000 Hz, exactly where the distortion spikes are located, as well as ~3.5-4 dB from 450 Hz to 1500 Hz. That would lead to an increase in distortion produced in that range. Without EQ, the Hi-X20 seems to still have less distortion in the mids than many hybrid IEMs utilizing BA drivers, which I’m pretty accustomed to, so it wouldn’t be all that concerning if not for the impression that I would need to make significant boosts in that range via EQ.

But that impression may be false, as I understand it.

Looking at the graphs in the articles (first two images), it would only need a ~4.5 dB boost at 300 Hz and a ~3.5 dB boost at 2000 Hz, though with more significant cuts to the bass, upper mids and treble if I wanted to match across the entire frequency range. That might still lead to an increase in distortion for the Hi-X20, but not as drastic as it might have seemed when looking at the graphs on Squig.

But if one normalization method and reference is more representative of an average human’s wide-band loudness perception than the other, maybe there is a meaningful difference in practice between those approaches.

At the very least, there will be one with regard to the negative digital gain that should be applied to compensate for positive EQ gain and retain headroom. Here are two preliminary EQ profiles; they’re somewhat crude and don’t match perfectly in their results. But the difference in required negative digital gain seems to be more drastic than the differences in frequency levels between the resulting curves would suggest (as with the two EQ presets that I posted above).

This should have real implications for source gear performance: analog amp stage power/gain requirements, as well as dynamic range, noise level, etc. (to my understanding).

Are there any for headphone performance, though?

I think what might look like a more drastic difference in the last image is just due to the Y-axis as well as the relative position of the X20 curve. Turn the “volume” up on it +3 dB (in your mind) and 250 Hz-2 kHz will look a lot closer – and everything below and above that range will be a lot further apart.

It’s all the same information, just shown a little differently. The different presentations shouldn’t yield different EQ curves.
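
A quick numeric sketch to drive the point home (the numbers are made up, not real measurements of either headphone): normalize the same pair of curves at two different frequencies, and the resulting “required EQ” differs only by a constant offset across all frequencies – i.e., by the volume knob.

```python
import numpy as np

freqs  = np.array([250, 500, 1500, 3000, 8000])      # Hz
fr     = np.array([88.0, 90.0, 89.0, 99.0, 97.0])    # headphone SPL, dB (made up)
target = np.array([95.0, 95.0, 95.0, 95.0, 95.0])    # flat target, for brevity

for ref in (500, 3000):
    i = int(np.where(freqs == ref)[0][0])
    # Normalize both curves at `ref`, then take the gap as the required EQ:
    eq = (target - target[i]) - (fr - fr[i])
    print(ref, eq)
# ref=500:  [ 2.  0.  1. -9. -7.]
# ref=3000: [11.  9. 10.  0.  2.]  -> same shape, just shifted by a constant 9 dB
```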