Driver story, the acoustic system and the limits of EQ

I read this thread from start to finish; great civil discussion, everyone. I’m very interested in this topic, having spent many hours EQing headphones using different methods, and also having listened to over 150 models of headphones, including most of the top-of-the-line offerings currently for sale.

I’ve recently come across this as an argument NOT to EQ headphones. A fairly big-name audiophile YouTuber claimed that he could hear destruction of soundstage and other technicalities (citing Rob Watts of Chord) due to phase changes and rounding errors in the EQ process. I would love to hear all of y’all’s thoughts on this, as EQing seems to be a completely accepted standard practice in forums such as this one and in the books I’ve read, such as the ones by Toole, and I’ve never come across these negatives of EQ being treated as actually audible factors in discussions by Olive, Griesinger, or Linkwitz.

I would like to pick your brains a bit on this @Resolve . While what you said is true and I absolutely agree, I’d like to ask you something about your own specific case, your own head, your own ears, since you have extensive experience with EQing. Have you been able to EQ in or add “punch” to DCA headphones? In your reviews (and others’ as well), DCA headphones are always said to have less “punch” than others. Given that what might work for one person won’t necessarily work for another, just in your case, have you been able to find out what worked or didn’t work for you? DCA headphones have quite low distortion and quite good headroom for EQ, so I’m curious what exactly this mysterious sound feature they lack is, since no reviewer ever seems to mention whether they could or couldn’t EQ it “back in”.

Dan Clark himself attributes this lack of “punch” to very low distortion in the 30-200Hz region, and he added a bass hump in the 125-250Hz region to the tuning of his newest headphones. And yet reviewers continue to say they lack punch. I would really appreciate your perspective on this: whether you’ve had any success, or, if there is none to be had, why EQing doesn’t seem to be enough to add the punch back in.

This is actually the topic I’ve been most interested in for the last few weeks. It turns out that engineers like Griesinger (https://www.youtube.com/watch?v=D_dwY-0pToU) and Linkwitz (Reference earphones) do advocate personalizing headphones by running frequency sweeps and finding your own EQ that matches your HRTF. I’ve spent many hours EQing headphones this way, adding a bit of personal “flavor” to the EQ after evening out all the irregularities of the FR, and it is rather amazing how big an improvement it is. I even dug out some old crappy headphones with wonky FR and spent some time EQing them with this method. The improvements are massive. As long as the headphone’s drivers can reproduce the same range of frequencies without much distortion or any clipping, all kinds of problems, from simple ones such as “lacking rumble” to more complicated ones like “strained vocals”, can be solved.

3 Likes

Great post

I’m kinda new to headphones, or rather way more of a speaker guy. But as with anything audio, headphones are fun too.

In my limited experience you can EQ some stuff, but other stuff you can’t.
For instance, everyone knows the ‘mixing trick’ of applying a bit of a BBC dip in the approximate 1-2kHz region to make the soundstage seem a bit deeper.
But with some headphones I just couldn’t make the soundstage deeper the way I could with others when implementing this kind of EQ.

Or take my recent IEM purchase, the Kato: I just couldn’t dial out the grain and harshness in the treble without making the IEM way too dull.
Same with other headphones.

It’s all fascinating stuff, and mostly I don’t know what’s going on. I do think pads that sit further away from our heads create a vaster soundstage. But again, I don’t have that much experience.

No matter what (EQ) I did to my 490 Pro or HD600, I could never get them to sound nearly as punchy as my EQed Beyer 1990, or replicate the soundstage.
And conversely, the Beyers never got quite the texture of the Sennheisers.

So go figure.

With speakers it’s a whole lot easier to understand and measure. Headphones, not as much.
IEMs? Forget about it. Someone says an IEM sounds smooth, then I try it and it kills my ears.

So lots of interesting points.

Cheers

4 Likes

Exact same experience here. A few aspects such as soundstage (spaciousness) seem mostly unchanged by digital EQ; maybe some tweaks around 1500Hz, up or down, make a difference, but it’s tiny for me. Some, like slam/punch, are partially solved for me, but not completely. You could have a look at the links in my previous post: the Griesinger EQ method is mostly how I flatten out the FR to adapt to my own ears (I use many frequency sweeps, then add small preference adjustments on top), and it works wonders in removing almost all traces of “wonkiness” or off-putting traits in the 4 different sets of headphones I’ve tried it on so far. I have a set of HD600s and 1990s like you, and you’re right: I could only get maybe 70% of the way there for punch or texture, never quite all the way.

1 Like

The bad person in me would say “skill issue”, but only because that’s what Oratory has said to me when we’ve had exactly this conversation, and he was right. The problem is always that you’re not able to measure at your eardrum, so you don’t know what the actual FR differences are. And more importantly, those qualities you prefer subjectively aren’t just down to how the low frequencies behave.

With regard to the Griesinger method, I would caution people against doing this, but that’s perhaps a topic for a different time, since we’d need to get into the ways in which headphones differ from speakers, along with Thiele’s work.

With headphones, if you do EQ correctly, there shouldn’t be any degradation beyond potentially doing it badly, as in making changes a person doesn’t actually prefer. GoldenSound has actually spoken at length about this: essentially there are no downsides to EQing headphones, apart from their inherent limitations and what the user is realistically able to achieve. As far as the destruction of technicalities is concerned… I believe they’d need to demonstrate that in a controlled listening test.

The bigger issue is that EQing a headphone changes its relative loudness. Even if you reduce digital volume to avoid clipping rather than reducing volume in the EQ software (I’ve done a video on this), your relative SPL between EQ on and EQ off isn’t actually the same, because you’ve made a series of adjustments across the spectrum. And because there is a change in loudness… this can feel like a change in ‘technicalities’. But I’d say even level matching with pink noise, while still imperfect, makes that illusion go away.
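For anyone who wants to try the pink noise matching at home, here’s a minimal sketch of the idea, assuming Python with numpy/scipy. The pinking filter is a commonly circulated 1/f approximation, and the -6dB cut at 3kHz is just a stand-in EQ, not anyone’s actual correction:

```python
import numpy as np
from scipy import signal

fs = 48000
rng = np.random.default_rng(0)

# Rough pink noise: shape white noise with a widely used 1/f "pinking" IIR.
white = rng.standard_normal(fs * 10)
b_pink = [0.049922035, -0.095993537, 0.050612699, -0.004408786]
a_pink = [1.0, -2.494956002, 2.017265875, -0.522189400]
pink = signal.lfilter(b_pink, a_pink, white)

# Placeholder EQ: a -6 dB cut at 3 kHz (RBJ-cookbook peaking biquad).
def peaking(f0, gain_db, q, fs):
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

b_eq, a_eq = peaking(3000, -6.0, 1.0, fs)
pink_eq = signal.lfilter(b_eq, a_eq, pink)

# Gain to apply to the EQ'd chain so both conditions play pink noise at
# the same RMS level (a rough proxy for equal loudness).
rms = lambda x: np.sqrt(np.mean(x ** 2))
offset_db = 20 * np.log10(rms(pink) / rms(pink_eq))
print(f"apply {offset_db:+.2f} dB with EQ engaged to level match")
```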

1 Like

We went over some of this in that live stream. The short answer is that there are still a lot of unknowns when it comes to the analysis of FR and how that redounds to the subjective qualifiers people associate with such terms. And… there’s always the potential that acoustic impedance is a factor outside of FR that remains unaccounted for, which would be interesting since the DCA headphones have particularly high acoustic impedance.

But my general sense of it is that yes, you could EQ them to have a better sense of ‘dynamics’ - certainly I was able to do this with the Focal Azurys, which I also initially felt lacked a bit in that department. It turns out that because I wear glasses, the in-situ response on my head was such that the low frequency response wasn’t nearly as elevated as it was on the graph, all the way down from around 700Hz, and I was able to confirm this using in-ear mics.

The thing is, headphones whose bass trends resemble the typical Harman bass shelf are generally more poorly received for this quality, at least among the people who are thinking of it the same way. And with respect to Dan… while I think he’s right about FR being responsible, he’s wrong about what actually contributes to this quality that many of us are looking for. I mean, evidently that’s the case given the way many of them are received - though that could be for coupling reasons as well.

The DCA headphones have particularly high resonance frequencies and high driver damping, meaning that if there are any coupling imperfections, the low frequency information is going to drop out to some degree - the driver resonance frequency is around 500Hz on those. That’s not… inherently a bad thing, but it would explain why some people report a “lack of dynamics” if, say, the pad folds in on itself and there’s a small leak or gap in the coupling, depending of course on how they think of the term ‘dynamics’.

But as far as what may actually contribute to it, both Listener and I had identified the same thing: the Harman shelf is typically counterproductive to achieving that quality, since it can often mask it. Rather, it seems to be more associated with a more gradual, wider-Q sloping response from the lower mids up into the bass, and likely also related to certain elements of the FR much higher up in frequency. When you look at where many of those tones fall, for example - the ones associated with punchiness - they span a much wider range than just the bass.

Now… the big issue with all of this is still that all these qualities we experience, like dynamics, slam, punchiness and so forth… they are all just descriptions of our subjective experiences, and how each of us associates meaning to those terms could be different. This is why we all keep talking in circles on this subject.

2 Likes

So I take it those concerns raised by Rob Watts about phase changes and rounding errors are of little or no consequence? Not that I don’t believe you - in fact, I absolutely do - but it would be cool to do some extra reading on why those aren’t real issues. I know that in speaker testing Toole demonstrated that phase changes aren’t discernible, but for a minimum phase system such as headphones, I’m not so sure why or why not.

Agreed, difficult to do in a fair way, as you alluded to, due to level matching and the rather obvious change in sound signature.

Edit: sorry, looks like you replied in another message right as I sent the following, so it might’ve been answered:

So I’m guessing you’re saying you haven’t been able to personally EQ better “punchiness” into DCA headphones, due to those reasons? Fair enough if that’s the case; I was just hoping I could learn a thing or two, or at least get some ideas on where else to look outside of just the lower frequencies. I guess it’s clear the issue isn’t just those frequencies, or Dan Clark himself would’ve figured it out and changed the tuning in response to this very popular complaint, I suppose.

I’m not familiar with what Rob has said on the subject, so I don’t want to wave that away as a nothingburger. But in headphones this isn’t something canonically worth worrying about. Really this is more of a question for Cameron, since digital audio and source gear is his arena.

Yeah, I think the core issue is that he often goes for a Harman-like shelf, and that kind of bass profile doesn’t seem to be the thing the dynamics crowd is into. But at the same time, we just don’t know what they’re into, because there could be all kinds of reasons these headphones aren’t received well for this quality. There are too many confounding variables.

Really though, to get back to the original bit of this topic, I don’t think this is about “injecting ‘technicalities’ into bad headphones with EQ”, even though that’s totally possible to do, because in practice you can’t know what the FR at your eardrum actually is with these products, and no two headphones will have a matching FR at the eardrum even with sophisticated and meticulous EQ. Rather, it’s about recognizing that FR describes more than just tonality - it describes the effects often thought of as ‘technicalities’ too. And because FR varies at the eardrums of individual people, those qualities are bound to be received differently from person to person too.

2 Likes

I do recall that in one of your countless videos you guys talked about how perhaps some elevated frequencies in the treble range might also contribute to “slam”.

It would be nice if we as the community, or you guys as content creators with access to rigs and measurement equipment, could isolate certain sections of music, or drum hits, or whatever it is that is most “impactful”, play that, find out which headphones are on the extreme ends of having “slam”, then test the eardrum FR to see where the differences lie. Right now it seems like there are a lot of theories and uncertainties, and maybe more science could be done to unravel some of the mysteries.

You’ve raised a lot of great points for consideration so I’m very appreciative of that, thanks @Resolve !

You’re right about that. However, Griesinger (and others I’ve never read about, I’m sure) did create measurement probes for this exact purpose. So it is doable and has been done before. See page 19 of his presentation on binaural hearing and headphones: https://www.audiosciencereview.com/forum/index.php?attachments/binaural_hearing_and_headphones-pdf.14172/

Yeah, in-ear probe mics are a good idea, though I think Blaine would be the best person to explain why the Griesinger method is probably not the right way to go.

I still can’t help but think of a headphone driver in terms of how fast it starts and stops.
But I’m pretty sure that would show up in the FR or distortion measurements.

In speakers this is quite easy to measure as ringing. But I don’t know if that holds true for headphones.

:man_shrugging:

2 Likes

Well, if nothing else, I think Olive’s research has shown people prefer to have more bass in headphones than in flat-sounding speakers. But if one considers Griesinger’s method (and Linkwitz’s as well) a practical way to personalize a headphone’s sound to one’s specific HRTF without buying extra equipment or shoving tubes against your eardrums, I think there’s merit.

1 Like

If memory serves, the issue is more to do with the fact that front-biased, or any specific direction-based, sound fields make no sense for headphones given their use condition of being worn on the head, where the sound comes from effectively no direction. But I’ll see if Blaine has a more thorough rebuttal to that method specifically.

Thanks! Despite having a science background, math has never been my strength, so I’m pretty sure my simplifications won’t stand up to a thorough peer review. The general principle still stands, I believe.

Here is a summary of that math in normal human language:

Problem statement: given that we know the source signal (in both the time and frequency domains) and can measure the final signal at the eardrum, we want to figure out how the headphone, as a sound system, affects the sound.

Hypothesis: measuring the frequency response at the eardrum with sine sweeps can be enough to represent all the properties of the output signal, including the time domain information.

Why this hypothesis holds true:

  • The properties of the forward Fourier Transform and the Inverse Fourier Transform allow us to go from time domain to frequency domain and back.
  • For periodic signals like sine waves in particular, we can capture all the information as a discrete and finite spectrum, which means that our FR chart, if measured with high enough resolution, can present all of the necessary information (see the sketch after these lists). If we wanted to prove the same hypothesis with music, we would need to measure for infinite time and get an infinite spectrum as a result.

What limitations this method has:

  • The Head-Related Transfer Function (HRTF) is another variable function that affects the sound at the eardrum in combination with the Headphone Transfer Function (HpTF). If we know the HRTF of the measurement rig exactly, we can compensate for it. Otherwise it adds an uncontrolled variable to the result.
  • We don’t know how a complex non-repeating signal like music affects the HpTF itself. And measuring the complete FR spectrum for non-periodic signals is tricky in theory and a lossy process in practice.
  • As with any measurement, there are accuracy limitations and errors introduced by measurement systems themselves, both their analog and digital parts.
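To make the hypothesis concrete, here’s a toy numeric sketch, assuming Python with numpy/scipy and assuming the headphone behaves as a linear, time-invariant system. The “headphone” is just a placeholder filter; the point is that if you know the complex FR on a dense grid (which is what a fine-grained sine sweep approximates - in practice you measure magnitude and, for a minimum phase system, derive phase from it), an inverse FFT hands you back the impulse response, i.e. the time domain behavior:

```python
import numpy as np
from scipy import signal

fs, n = 48000, 4096

# Stand-in "headphone": an arbitrary band-limited LTI system.
b, a = signal.butter(2, [200, 8000], btype="band", fs=fs)

# "Sweep" measurement: the steady-state complex response on a dense
# frequency grid (what a fine stepped sine sweep approximates).
freqs = np.fft.rfftfreq(n, 1 / fs)
_, H = signal.freqz(b, a, worN=freqs, fs=fs)

# Inverse FFT of that FR returns the impulse response: the time domain
# behavior was encoded in the frequency domain all along.
h_from_fr = np.fft.irfft(H, n=n)
h_direct = signal.lfilter(b, a, signal.unit_impulse(n))
print("max difference:", np.max(np.abs(h_from_fr - h_direct)))
# tiny - limited only by the IIR tail being truncated at n samples
```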

I’m pretty sure @listen_r will be able to unpack it in a much more readable and complete way.

We could, but we don’t need to if we know the raw FR at a very high resolution and with low error, and we know how to interpret it in terms of perceived audio effects. If.

I’m aware of the DFHRTF debate you folks had and I agree that it makes sense for headphones in particular. I didn’t know that you actually measure the HRTF of the rig and compensate for it. Respect!

I think what you guys are doing in general is great, and I appreciate that you’re trying to use a scientific approach to unpack what has been obscure and purely subjective. However, allow me a bit of criticism, though.

What originally got me triggered and pushed me to participate in this thread is the confidence with which Resolve and Listener keep saying “it’s all in the FR at the eardrum”. As if FR at the eardrum is a golden hammer that you guys have got and are now running around with, treating everything else as a nail :laughing:

The only thing I am confident to say is: we don’t know.

Take, for example, the physical properties of the driver, which have been mentioned several times in the thread.

Yes, the timbral impact of the driver is perfectly measurable in the FR, because that mostly has to do with its resonance frequencies. Like @generic said, if you hit wood it sounds like wood, if you hit metal it sounds like metal, and if you hit plastic it sounds like plastic. We’ve known this since school, and we can measure it in the FR.

The “speed” of the driver is not so straightforward to me, though. Yes, how fast the driver can move without losing efficiency is perfectly visible in the treble area of the FR. But we measure that with a sine wave. Busy mixes are not sine waves, and I wouldn’t bet my money on the claim that a driver having non-zero mass and non-zero inertia does not affect its ability to reproduce a complex waveform, e.g. at the speed of 192k precise movements per second. Can our hearing perceive such fine differences? That’s another question I don’t have an answer to. When we study acoustics at school we are told that a human ear can hear up to 22kHz, thus a sample rate of 44.1kHz should be more than enough to represent all the audible information. As audiophiles we know from experience that, for a trained listener, that’s not correct.

A huge +1 to this. If you guys follow the empirical approach in your research overall, that’s the way to go about it.

Identifying and quantifying the perceived sound quality features (“detail”, “slam”, “spaciousness”, “depth”, “imaging”, “trailing ends of tones”, etc.) and their representation in the FR by running a series of repeated experiments with maximum isolation and representative sample sizes would be much more convincing and reproducible than anecdotal experience. I recognize that this could be a million-dollar research project to conduct, though, but maybe you can give Sean Olive a hint so he can convince Samsung to put some money into it :laughing:

Quoting from a cool article on Linear Phase EQ:

Any type of filtering we apply with EQ, whether analog or digital, will introduce a short time-delay/phase shift to the signal. Most equalizers used today are called minimum phase filters and will only introduce slight phase shifts for specific frequencies depending on the filter settings. Luckily, the ear is not sensitive to absolute phase changes (whether the frequency is at its positive or negative part of the cycle) so if you’re applying moderate filtering the introduced phase shift won’t be very noticeable. As such, you can safely apply EQ changes without the fear of introducing artefacts that will dramatically change the sound. However, for steeper (i.e. higher order) filters, the effect can become quite apparent as the phase shift introduced by the filter starts to affect the transients of your signal. This effect is called smearing.

This aligns with what I was told about signal filters when I was studying radioelectronics.

One clarification about headphones being minimum-phase devices: I believe this is said in the context of the phase of the acoustic wave traveling through the air, not in the context of being immune to phase shifts in the electrical signal in general.
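To put rough numbers on the “slight phase shifts… depending on the filter settings” bit, here’s a sketch assuming Python/scipy and the standard RBJ-cookbook peaking biquad (what most minimum phase EQ software implements). The same +6dB boost produces a similar peak phase shift at either Q, but the high-Q version concentrates several times the group delay around its center frequency, which is the “smearing” the article refers to:

```python
import numpy as np
from scipy import signal

fs = 48000

# RBJ-cookbook peaking EQ biquad - minimum phase, like most EQ software.
def peaking(f0, gain_db, q, fs):
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

# Same +6 dB boost at 1 kHz, gentle vs. surgical.
for q in (0.7, 4.0):
    b, a = peaking(1000, 6.0, q, fs)
    w, h = signal.freqz(b, a, worN=8192, fs=fs)
    _, gd = signal.group_delay((b, a), w=8192, fs=fs)  # delay in samples
    print(f"Q={q}: max |phase| = {np.degrees(np.max(np.abs(np.angle(h)))):.1f} deg, "
          f"peak group delay = {np.max(gd) / fs * 1e3:.2f} ms")
```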

1 Like

I’m going to stay out of this discussion, but Rob Watts’ recent CanJam talks are well worth watching; he discusses how he listens for changes, and the impact of “noise” introduced by digital processing at levels so low no one would consider them audible.
I’m generally an “everything matters” kind of guy, but the sceptic in me was very much of the view that there must be some other factor impacting the tests. But if his causation is correct, it really does raise the question of whether we can even measure what’s significant.

3 Likes

There is a lot of growing evidence to the contrary: if you play a sawtooth wave, or a sine wave with the top cut off, you can clearly delineate a 180 degree phase shift.
Having said that, it’s harder to hear that same phase shift in more complex signals; those people sensitive to it usually hear it in the bass frequencies.

The little vibrating bits in the inner ear have 2 nerves, so they can differentiate phase at least up to ~4kHz, from the papers I’ve seen.

Having said that, every device with an amplifier or filter impacts phase, though for good devices it might only be a few degrees in that <4kHz range.

I’ve also seen recent papers on measured brain activity in the presence of high frequency signals well outside what we “hear”. All drivers, and for that matter all electronic devices, have some ultrasonic ringing with high slew rate signals.

This has nothing to do specifically with EQ, but I have to wonder how much these things we can’t easily differentiate impact perception over time.

4 Likes

Yeah, this is 100% true. And I don’t want to make it seem like this is ironclad; we really don’t know. My position, though, is that we should exhaust FR at the eardrum first, because it’s a known unknown, so to speak. And also, most of us suggesting it’s FR at the eardrum would’ve started with the driver story, thinking “maybe there is something to this”, and we’ve all landed at FR after doing EQ to various extents.

1 Like

This depends on the magnitude of the effects you care about. It is technically true that any EQ is “only so accurate” - in the digital domain because of the limitations of our precision, in the analog domain because real devices have tolerances and level dependency - and thus an equalization is an approximation of an ideal. With a digital EQ, that approximation is accurate to within much less than 0.01dB, so unless your headphones have been surgically mounted to your head so they move less than 0.5mm at any time, I would not worry about this.
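To get a feel for the magnitudes involved, here’s a crude sketch (assuming Python/scipy; the filter is an arbitrary stand-in): even running a whole biquad in single precision typically leaves errors on the order of -100dB relative to the signal, and the double precision typical of EQ software sits far below that - nowhere near a 0.01dB magnitude deviation:

```python
import numpy as np
from scipy import signal

fs = 48000
rng = np.random.default_rng(2)
x = rng.standard_normal(fs * 10)  # 10 s of full-scale-ish noise

# An arbitrary EQ-ish biquad; the exact shape doesn't matter here.
b, a = signal.butter(2, 1000, fs=fs)  # 2nd-order lowpass at 1 kHz

y64 = signal.lfilter(b, a, x)
y32 = signal.lfilter(b.astype(np.float32), a.astype(np.float32),
                     x.astype(np.float32)).astype(np.float64)

# Worst-case arithmetic error of the single-precision run, relative
# to the signal peak.
err_db = 20 * np.log10(np.max(np.abs(y64 - y32)) / np.max(np.abs(y64)))
print(f"float32 vs float64 worst-case error: {err_db:.0f} dB re peak")
# a 0.01 dB magnitude deviation would correspond to roughly -58 dB
```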

Phase changes are inherent to any magnitude change in a minimum phase system, and are a Good Thing here - you want your headphones’ phase response to be changed to match their magnitude response. I’m also personally of the view that if you’re a purist, minimum phase is non-negotiable anyway - acausal systems are fundamentally unnatural - but that’s a very niche and completely non-audibility related point.
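One way to see the “phase changes are inherent to any magnitude change” point directly: for a minimum phase system, the phase is computable from the magnitude alone, via a Hilbert transform of the log-magnitude. A toy check, assuming Python/scipy, with a made-up first-order system:

```python
import numpy as np
from scipy import signal

n = 1 << 14

# A simple minimum-phase system: one zero (0.5) and one pole (0.8),
# both safely inside the unit circle.
b, a = [1.0, -0.5], [1.0, -0.8]

# Complex response around the whole unit circle.
w, H = signal.freqz(b, a, worN=n, whole=True)

# Minimum-phase relation: phase = -(Hilbert transform of ln|H|).
# scipy.signal.hilbert returns the analytic signal; its imaginary part
# is the (circular) Hilbert transform of the input.
phase_pred = -np.imag(signal.hilbert(np.log(np.abs(H))))

err = np.degrees(np.max(np.abs(np.angle(H) - phase_pred)))
print(f"max error between actual and magnitude-derived phase: {err:.4f} deg")
```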

This is not true - in speakers, how fast it starts and stops is measurable in frequency response, just as in headphones. Observe the response of a subwoofer compared to a generally similar, lower-mass midwoofer. Obviously the frequencies where we care about breakup are also radically different, so there are differences in the cone design, but a higher moving mass system - one with more inertia - is inherently lowpassed.

One note: two-channel FFT measurements have existed for many decades now, and allow us to very accurately measure FR with an arbitrary stimulus - this can be impulsive, random noise, or music.
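As a sketch of how that works (assuming Python/scipy; the “system under test” is a placeholder filter, not a real headphone measurement): divide the averaged cross-spectrum of stimulus and response by the stimulus auto-spectrum, and the stimulus cancels out, leaving the system’s FR:

```python
import numpy as np
from scipy import signal

fs = 48000
rng = np.random.default_rng(1)

# Arbitrary stimulus - random noise here, but music works the same way.
x = rng.standard_normal(fs * 30)

# Unknown system under test (placeholder for headphone + coupler/mic).
b, a = signal.butter(2, [100, 10000], btype="band", fs=fs)
y = signal.lfilter(b, a, x)

# Two-channel FFT estimate: H1(f) = Pxy(f) / Pxx(f). Averaging the
# cross-spectrum cancels the stimulus and keeps the system.
f, Pxy = signal.csd(x, y, fs=fs, nperseg=4096)
_, Pxx = signal.welch(x, fs=fs, nperseg=4096)
H1 = Pxy / Pxx

# Compare against the true response of the hidden system in-band.
_, H_true = signal.freqz(b, a, worN=f, fs=fs)
band = (f > 50) & (f < 20000)
err_db = 20 * np.log10(np.abs(H1[band]) / np.abs(H_true[band]))
print(f"magnitude agrees within ±{np.max(np.abs(err_db)):.2f} dB")
```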

This is, functionally, the reason we care about motors in speakers - while the motor has control of the motion of the diaphragm, you do not see free resonance. Imagine hitting a drum and holding your hand against the skin: you are controlling the motion of the diaphragm.

The extent to which this is practically possible is, of course, variable - this is why we have resonances of the driver system, including modal breakup. When these are very significant, they may produce significant nonlinear distortion and excess phase - these are very much measurable.

There is a misconception in this - a sinusoid with a frequency of 96kHz requires vastly greater speed than a complex waveform whose highest frequency component is 40kHz. If there is meaningful inertia, the sinusoidal response will degrade, just as it would with a complex wave containing those frequency elements - symmetrically, what this means is that the complex wave would be distorted.
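The “speed” point can be made with one line of calculus: the peak slew rate of a sine of amplitude A and frequency f is 2πfA, so a full-scale 96kHz sine demands far more velocity than a busy mix whose highest component is 40kHz at realistic levels. A numeric sketch, assuming Python/numpy, with made-up partial amplitudes:

```python
import numpy as np

fs = 768000  # heavily oversampled grid so derivatives are well resolved
t = np.arange(fs) / fs

def peak_slew(x, fs):
    """Maximum |dx/dt| of a sampled waveform, in units/second."""
    return np.max(np.abs(np.diff(x))) * fs

# Full-scale 96 kHz sinusoid.
sine_96k = np.sin(2 * np.pi * 96000 * t)

# A "busy" complex wave: many components, highest at 40 kHz, with
# invented amplitudes that fall off toward the top, as in real music.
freqs = np.array([100, 500, 1000, 3000, 10000, 20000, 40000])
amps = 1.0 / np.arange(1, len(freqs) + 1)  # decaying partials
complex_wave = sum(a * np.sin(2 * np.pi * f * t) for a, f in zip(amps, freqs))
complex_wave /= np.max(np.abs(complex_wave))  # normalize to full scale

print(f"96 kHz sine peak slew:  {peak_slew(sine_96k, fs):.3e} /s")
print(f"complex wave peak slew: {peak_slew(complex_wave, fs):.3e} /s")
# Analytically, a unit sine at 96 kHz has peak slew 2*pi*96000 ~ 6.0e5 /s;
# the complex wave lands an order of magnitude lower.
```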

In amplifiers, the parameter you’re talking about is “slewing”, and it’s very much measurable. Amplifiers with flat responses and low distortion at high frequencies are not slewing; the same holds for headphones.

Believe me, if he could, he would, but neither he nor we can compel that. We are actively interested in this, but the null hypothesis should be what it is: frequency response describes almost all of human perception of headphones. And, frankly, given how headphones work, the factors most likely to be involved in the “last x%” are not time domain stuff, they’re acoustic Z and nonlinearity.

This isn’t per se true - absolute phase is audible with the right stimulus. In music, however…

Yes, but the “goal” here is to achieve the magnitude and phase response of a headphone with the desired frequency response, and to do that, you need to use minimum phase EQ, because headphones are minimum phase. If you use linear phase EQ, you will end up with excess phase (equivalent to having added allpass filtering to your headphone) relative to your magnitude FR.
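To illustrate that last point: correct a minimum phase “headphone” peak with a linear phase FIR that matches the inverse magnitude, and the magnitude flattens but the phase does not - what remains is exactly the allpass (excess phase) component. A sketch, assuming Python/scipy; the 2kHz resonance and the 511-tap corrector are invented for illustration:

```python
import numpy as np
from scipy import signal

fs = 48000

# "Headphone" defect: a minimum-phase +6 dB resonance at 2 kHz (RBJ peak).
A, w0, q = 10 ** (6 / 40), 2 * np.pi * 2000 / fs, 2.0
alpha = np.sin(w0) / (2 * q)
b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
b, a = b / a[0], a / a[0]

freqs = np.linspace(0, fs / 2, 513)
_, H_hp = signal.freqz(b, a, worN=freqs, fs=fs)

# Linear-phase corrector: an FIR matching the *inverse magnitude* only.
fir = signal.firwin2(511, freqs, 1 / np.abs(H_hp), fs=fs)
_, H_fir = signal.freqz(fir, 1, worN=freqs, fs=fs)

# Net system = headphone * corrector.
H_net = H_hp * H_fir
ripple = 20 * np.log10(np.abs(H_net[1:-1]))

# Remove the FIR's constant bulk delay, then look at what phase is left.
delay = (len(fir) - 1) / 2  # samples
residual = np.angle(H_net[1:-1] * np.exp(2j * np.pi * freqs[1:-1] * delay / fs))
print(f"net magnitude ripple: ±{np.max(np.abs(ripple)):.2f} dB")
print(f"residual (excess) phase up to {np.degrees(np.max(np.abs(residual))):.1f} deg")
# A minimum-phase corrector (the inverse biquad) would cancel both exactly.
```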

4 Likes

Thanks for the clarifications, it helps to make the picture complete!

I was being a purist there, because the FFT is a discrete operation and I was primarily talking about the continuous integral FT (which we don’t use in practice). Of the FFT one may say that, like any discrete operation, it is bound by its resolution, so we might be losing some information. I doubt resolution is a problem at all with modern tech.

1 Like

Thanks for the detailed reply, much appreciated. I’m a bit dense and wouldn’t mind further clarification on the quoted passage. Could you perhaps put this in the context of a set of headphones that is -6dB from Harman at 3000Hz, or whatever example you see fit for an explanation?

By EQing, this becomes a good thing? How do we know when, or by how much, this EQ should be applied in order to match the phase and magnitude response?

1 Like