Driver story, the acoustic system and the limits of EQ

So we do actually measure at different SPLs, especially when looking into distortion data. But any kind of limiting there comes down to maximum driver excursion, and hitting that limit causes the headphone to become immediately nonlinear.

There are exceptions though, particularly with active, wireless or ANC headphones that use dynamic equalization based on the loudness and crest factor of the music. For those, we use an M-Noise stimulus, i.e. a more music-like signal, because this gives a better indication of how the dynamic equalization affects performance when listening to music.
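For anyone unfamiliar with the term, crest factor is just the ratio of a signal's peak level to its RMS level, usually expressed in dB. A quick illustrative Python sketch (numpy only; the 15-20 dB figure for dynamic material is a rough rule of thumb, not a measurement standard):

```python
import numpy as np

def crest_factor_db(x: np.ndarray) -> float:
    """Crest factor: ratio of peak to RMS level, in dB. A pure sine is
    ~3 dB; dynamic material like classical can run well above 15 dB."""
    peak = np.max(np.abs(x))
    rms = np.sqrt(np.mean(x ** 2))
    return 20 * np.log10(peak / rms)

# Sanity check on a 1 kHz sine (exactly 1000 cycles in 48000 samples):
t = np.linspace(0, 1, 48000, endpoint=False)
sine = np.sin(2 * np.pi * 1000 * t)
print(round(crest_factor_db(sine), 2))  # → 3.01
```

As I understand it, the point of a music-like stimulus is precisely that its crest factor resembles real program material, which is the property a plain sine sweep lacks.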

We don’t need to do that with passive headphones, because they don’t change until they hit the excursion limit. But at that point we’re talking about extreme SPLs, and it’s not something to worry about 99% of the time.
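For readers wondering what such a compression test looks like in principle: run the same stimulus at two drive levels, normalize each measurement by its input level, and any frequency-dependent gap between the normalized curves is compression. A toy Python sketch - the tanh() "driver" here is a made-up stand-in for excursion limiting, not a model of any real headphone:

```python
import numpy as np

def normalized_response_db(drive: float, sens: np.ndarray) -> np.ndarray:
    """Toy 'measurement': per-band output for a given drive level,
    normalized by the drive level so a perfectly linear driver would
    overlay at any SPL. tanh() stands in for excursion limiting."""
    return 20 * np.log10(np.tanh(drive * sens) / drive)

sens = np.array([1.5, 1.0, 0.7])             # made-up per-band sensitivities
quiet = normalized_response_db(0.01, sens)   # low-level sweep: ~linear
loud = normalized_response_db(1.0, sens)     # high-level sweep: limited
compression = quiet - loud                   # dB of compression per band
# The bands that move the most (highest sensitivity) compress the most.
```

In a real rig the two curves would come from actual sweeps at, say, 84 and 94 dB SPL; the logic of the comparison is the same.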

Re square wave, once again that’s not something anyone has a reason to care about when FR is available because it’s just a worse view of the same thing.

2 Likes

Pardon, that makes sense.
What I wanted to ask is whether there are measurements for dynamic compression in IEMs. Not headphones. I’ve never heard compression from a headphone.
In headphones it’s super rare to see a driver compress unless there’s a serious design error.

But I have zero clue about the quality of the drivers used in these IEMs. The driver configuration is discussed of course, but not the specific drivers and their characteristics.

As consumers we are sorta in the dark about that. Unless it’s just me who doesn’t know.
Companies like Sennheiser talk about their 7mm dynamic drivers and how they work. I think they might have some specs as well.
But for most other IEMs I have no clue as to specs or quality of their drivers.

I mean, I think we can agree that not all drivers, whether speaker drivers or headphone drivers, are created equal. There are differences.

Thanks

I actually found this guy from ASR who did some compression tests for IEMs. There was some compression found, usually not too much.

I don’t know his reputation but his measurements seem ok.

https://www.audiosciencereview.com/forum/index.php?threads/tanchjim-kara-iem-review.53358/

1 Like

Yeah there are some examples with IEMs where their behavior changes subject to different input voltage. The Moondrop Variations being the most recent (and worrying) example. So that is definitely worth testing, though I don’t know if what you’re hearing is that kind of issue. It’s most likely just FR.

I have to test it again.
It wouldn’t seem that bizarre to me and I’ll tell you why.
Listen to that piece I mentioned. Listen on your favorite headphone and you will notice that the trumpet is mixed very loud. It’s normal for it to kinda sting your ears on a good system even at moderate levels.
As in the trumpet sticks out in the mix like crazy.

So it could well be hitting higher peaks in SPL compared to the rest of the orchestra. Classical is recorded with a much wider dynamic range than almost any genre.

I actually haven’t noticed that phenomenon with any other song on the Zero Red, so it could well be that.

I wouldn’t know, however, unless I measured it, and I don’t have a rig. Don’t you guys just do frequency sweeps at different SPLs to check for compression?

That’s interesting about the Moondrop Variations. Do you guys have the measurements of that? The compression that is.

And BTW if you enjoy classical you will like that album. It’s great.

Thanks man.
Great convo going on

But this very much could just be an FR thing. You can create this effect, or precisely the absence of it, with EQ on any headphone. I feel people don’t fully understand the significance of FR and just how small changes in it can meaningfully alter the experience of your music.

1 Like

Yeah maybe.

No, I don’t understand it as much as you do, because of course I haven’t had the same access and experience. So yeah, I get it man.
Pardon, I don’t mean to insist. Darn texting. Misses all the inflection. :blush:

Robust Binaural Measurements in the Ear Canal Using a Two-Microphone Array, August 2023


In the audiophile community there are often mentions of “everything you hear is just frequency response at the eardrum”. However, with no accurate eardrum-level measurements, and with blocked-ear-canal measurements not painting the whole picture of a person’s HRTF, the debate circles on without resolution.

Researchers in Sweden tried a new method using two MEMS mics to get an accurate eardrum-level measurement that works for non-free-air-equivalent-coupling headphones (which means basically all “normal” headphones we wear).

Could this spark a new step forward in understanding what makes good sound? It certainly looks simple enough that a crafty DIY person could attempt it. Hopefully the more technically minded of the community can have a look at the paper linked above and voice their opinions on this method.

3 Likes

Moved this here, just because it’s mostly a continuation of this topic.

I hope you guys could look into this, maybe contact the authors of the paper. I’d be very curious to see if you could unravel the mysteries of “All sound is just FR at the ear drum”, now that there’s a way to do measurements. Might make for an interesting video talking to the authors and see what they think about this topic, maybe they’ll send you some little mics to stick into your giant head!

1 Like

I think another point which has been mentioned before but not often talked about in length is channel matching.

Doing some sweeps can reveal bad or not-so-good channel matching. Or taking one ear cup off can result in hearing a less ‘smeared’ sound.

Beyer sells their spare drivers in matched pairs. Now I don’t know the tolerances they hold, but I know from speaker building that this is quite important, and very few manufacturers build both speakers to tight tolerances. The very best match their drivers out of the stock they have; most just take drivers from the same production batch.

1 Like

Fwiw, driver symmetry has been one of my pet peeves for a long time, on the lower end products I buy. And it is essential to good stereo imaging imo.

For sweeps to be effective for this, the headphone should really be reversible. Otherwise, the selective hearing loss in each of your ears can obfuscate the results. It’s good to do these kinds of listening tests though imo, because it can help to flush out potential issues in the headphones, and sometimes also in your other gear. And also in your own ears.

Measurements aren’t a perfect solution for this either, because there are many different ways imbalances/asymmetries can potentially creep into the system without necessarily originating in the headphones. Asymmetries in the rig, and small variations in the position or seal of the earcups, can easily affect the results, for example.

I was a little surprised by some of the differences in the left and right measurements of the AT M50X on Rtings recently. But I found it extremely difficult to get a reliable, consistent seal with those on my own head, because of the small, shallow cups. I also had some symmetry issues with my old AKG K553’s. Some of that could have been related to the seal, but was also related to QC imo, and probably some other issues as well.

Both the M50X and K553 have single cables that plug into one earcup, as does my DT770. And I’ve never liked this one-sided cable approach. It has become the norm on a lot of studio monitors (beaters) and other lower cost headphones though, unfortunately. The DT770 has better symmetry than my K553 did, based on my listening tests and the recent HBK 5128 measurements on Rtings. I believe Sennheiser is also pretty good at matching, generally speaking. Though I haven’t owned any of their products in a while.

You can also use pink noise, or something similar, to do quick overall balance checks, and make adjustments in your audio settings to compensate if necessary.
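A rough sketch of that kind of quick check, for anyone who wants to script it: generate pink noise, send it through each channel, and compare RMS levels. Here the "capture" step is simulated by simply attenuating one channel, so the numbers are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

def pink_noise(n: int) -> np.ndarray:
    """Approximate pink (1/f) noise: shape white noise so power falls
    ~3 dB/octave, i.e. scale each FFT bin amplitude by 1/sqrt(f)."""
    spectrum = np.fft.rfft(rng.standard_normal(n))
    f = np.fft.rfftfreq(n)
    f[0] = f[1]                          # dodge divide-by-zero at DC
    x = np.fft.irfft(spectrum / np.sqrt(f), n)
    return x / np.max(np.abs(x))

def balance_db(left: np.ndarray, right: np.ndarray) -> float:
    """Interchannel level difference in dB (positive = left louder)."""
    rms = lambda c: np.sqrt(np.mean(c ** 2))
    return 20 * np.log10(rms(left) / rms(right))

noise = pink_noise(1 << 16)
right = noise * 10 ** (-1.0 / 20)        # simulate a 1 dB weaker channel
print(round(balance_db(noise, right), 2))  # → 1.0
```

In practice you'd record each channel through a mic or couple the check with your own ears; the script only shows what "overall balance" means numerically.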

The DT770 and K553 are both reversible btw. Meaning they fit the same whether you have the left cup on the left or right side of your head, or vice versa. And this is one of the reasons I chose them over some other options. The M50X is not reversible though, because it has angled drivers and also a slight forward tilt to the headband, if memory serves.

You can see how generally good the channel matching is on some of the Sennheisers in the compensated measurements below. These were all measured on Rtings’ new 5128 rig.

Some of the other older 6 series measurements on Rtings site do not appear quite as well-matched as the ones below though. And this is something I’d pay attention to when purchasing a new headphone.

https://www.rtings.com/headphones/1-8/graph/25486/sound-profile/sennheiser-hd-490-pro/50713

https://www.rtings.com/headphones/1-8/graph/25486/sound-profile/sennheiser-hd-560s/18492

https://www.rtings.com/headphones/1-8/graph/25486/sound-profile/sennheiser-hd-600/325

https://www.rtings.com/headphones/1-8/graph/25486/sound-profile/sennheiser-hd-620s/60710

https://www.rtings.com/headphones/1-8/graph/25486/sound-profile/sennheiser-hd-800-s/290

Rtings is still in the process of redoing their measurements on the new HBK 5128 rig btw. So many of their measurements are still from an older setup, and are not really comparable with the above 5128 plots.

New Head-Fi topic that ties in a bit with this discussion.

1 Like

I’ve owned and tried multi thousand dollar headphones, and for my personal use my favorites right now are GSP600 and OAE1. Bought them for $15 and $200 respectively.

After EQing extensively (which I know you don’t agree with), my finding is that I can indeed make cheap headphones sound as good as, and in some cases better than, expensive or well-rated headphones.

There’s no solid correlation between sound quality and price to begin with. Add in good EQ methods and careful tuning, and for me the best headphones are the ones with the best low- and high-end extension in the FR, low distortion, and good comfort.

1 Like

Been wanting to write this for about a week and finally got the time. Can we have a chat about this video from Lachlan at Passion for Sound please:

The secret to wonderful sound!

I had to watch this video a few times to (I think) understand what the presenter, Lachlan, was saying. But it seems to me there’s a fair bit of problematic content in this video that touches on a lot of the misconceptions that have been discussed in this thread. It, and the works of Milind N. Kunchur cited by the video, were talked about in relatively oblique terms in the latest Noise Floor podcast, as well as Cameron’s video on myths about measurements. But that content didn’t get into the details of the statements made in the video.

The first 8 minutes talk about the biology of the human ear and the limits and abilities of human hearing. I won’t repeat any of that here. It gets interesting after the 8-minute mark. (For ease of reference I’ll number the statements quoted below - but this is not to suggest they are the only statements of interest, just the ones I’ve noted.) He says:

  1. 8:10: “Almost all modern audio gear and testing devices has sufficient sensitivity and capability to measure everything we need to know about frequency response, distortion characteristics and dynamic range.”

So far so good. He goes on:

  2. “Where things get more complex is when we start to think about the timing, and also the complex or complicated processing that our brains are doing with that sound. Our auditory systems unsurprisingly are also incredibly accurate when it comes to timing, down to an accuracy of about 1 microsecond or less… about 1 millionth of a second.”

I don’t know enough about biology to know if this has been substantiated. The next part goes on to discuss more biology apparently associated with processing what we hear and in particular our perception of timing. Lachlan then proceeds to make a number of statements, including the following (paraphrased, and interspersed with others in the video):

  3. From 10:03: Our brains use the complex information at the beginning and end of a sound to determine its timbre (onset and offset transients). The sustained notes, including balance between fundamental and harmonic, are not enough to show this. This has been shown in research (by Berger, apparently) which had the first microsecond / onset transients of sounds cut off, with trained musicians unable to determine which instruments were playing as a result.

Again, I don’t know enough about this and would be interested in whether the cited study is credible.

  4. 13:46: If you jump between 2 differently tuned headphones, sometimes you’ll hear the sounds higher in the soundstage. This is because our brains are being tricked by having a bit more of a certain frequency in the tuning of the headphone, and thinking that it’s actually a height cue caused by higher delivery of the sound.

Interesting, I haven’t experienced this myself but wonder if others might have.

  5. 15:35: Onset and offset transients are relevant to our localisation of sound in space. This has apparently been demonstrated via something called the Franssen effect (I won’t repeat the details here - this can be easily looked up).

Apparently the Franssen effect is a legit thing. I assume there is something to the idea that onset and offset transients are important for timbre.

  6. 16:23: Lachlan says “To my knowledge there is no measurement available, or at least no measurement regularly published, that represents a system or a component’s timing accuracy on transients.”

Here is where it starts to get really problematic, given the things we’ve discussed in this thread about the relationship between frequency response and phase / timing. Regarding the measurements, this should be readily measurable at least in headphones, right? Is it correct that any timing issues would show up as excess group delay? Are there any issues with measurement accuracy, i.e. can they perhaps not go down to delays of 1 microsecond?
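On the measurement question: group delay falls straight out of the phase of an ordinary complex frequency response measurement, so "timing" is not some separate, unmeasured quantity. A small numpy sketch with synthetic data - a flat-magnitude system that is nothing but a 1 ms pure delay:

```python
import numpy as np

def group_delay(freqs_hz: np.ndarray, complex_fr: np.ndarray) -> np.ndarray:
    """Group delay in seconds: tau(f) = -d(phase)/d(omega)."""
    phase = np.unwrap(np.angle(complex_fr))
    return -np.gradient(phase, 2 * np.pi * freqs_hz)

# Synthetic 'measurement': flat magnitude, pure 1 ms delay.
f = np.linspace(20, 20000, 2000)
H = np.exp(-2j * np.pi * f * 1e-3)
tau = group_delay(f, H)
# tau is ~0.001 s at every frequency: the timing lives in the phase data.
```

Excess group delay is then just this measured group delay minus the group delay implied by the magnitude response under a minimum-phase assumption. Whether a rig can resolve 1 microsecond is a question of frequency resolution and noise, not of any missing measurement type.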

  7. 16:32: “If you have 2 systems with excellent frequency response characteristics and excellent distortion characteristics… they can still perform and sound very different due to their timing accuracy.”

I don’t understand how 2 systems with the same frequency response and distortion characteristics can have different timing accuracy. Can this be a thing? Could the statement be correct if one system suffers from jitter issues (but wouldn’t that show up in the measurements anyway)?

  8. 17:10-18:13: In summary, our brains track and process the loudness of distinct sounds, ratio of the difference of direct sound coming from instruments vs reflected sounds, differences between higher and lower frequencies, interaural timing, reflections delay, and reflections frequency structure.

Okay sure, presumably all of that information would be baked into the recording.

  9. At 18:46, Lachlan discusses that reflected sound was something he noticed in his review of the Heddphone 2. He says “What I now understand, or I theorise, is that the Heddphone 2 with its incredible speed is able to deliver the resolution and the responsiveness and the timing to pick up all the signals in the recording, but to deliver them accurately enough for my brain to understand it”.

I think the better hypothesis would be that the differences were in FR, e.g. perhaps more content in the air frequencies or other elements which result in more of the room details being perceptible.

  10. 19:34: “Each device in our system will be influencing the timing and the resolution of the sounds that we’re hearing. By the time our brains process and cross-process all that information, the permutations of one tiny difference between one system and another can be striking.”

Assuming minimum phase transducers, would there ever be circumstances where source gear has timing issues sufficient to affect the sound quality, outside the obvious (e.g. jitter causing signal loss)?

  11. From 19:53 Lachlan poses a hypothetical comparison of 2 systems “that measure identically”. Systems A and B are posed as identical, except that we are asked to imagine that System B has “an issue with capacitance”, i.e. an impulse that should last 2 microseconds is stretched out over 4-5 microseconds. System B “will still have a perfect frequency response, it will still have incredible distortion characteristics, because it’s not changing the frequency characteristics of the sound, it’s just holding on to them for a bit longer.” System B will apparently “produce a sound with noticeably poorer timbral accuracy and spatial accuracy, it will be less lifelike, and likely less enjoyable, but it will measure identically to System A.”

This seems essentially meaningless, since it relies on conjecture based on the Milind N. Kunchur studies. But I’m not sure those studies even went as far as making claims like these (though maybe - I haven’t read them).

  12. From 21:12 Lachlan talks about the idea that louder signals received at the ears at the same time as quieter signals will mask the quieter signals. The consequence is that in System A we will hear the details and enjoy a remarkable sense of space and image accuracy, but not on System B because of the masking caused by the capacitance issue.

Sure auditory masking can be a thing, but that would be much more of an issue with meaningful differences in SPL between different frequencies, which would very much be visible in FR.

  13. 22:32: “Firstly the speed of components like transistors in the analogue stage of your DAC, in your preamp, your amp, or your headphone amp all matter. They’re also very rarely talked about or measured.”

As per above.

  14. 22:50: “Next, the responsiveness of the drivers whether it’s in headphones or speakers does actually matter, and it won’t show up in a frequency response graph, even in headphones. How rapidly a driver can start, stop, and change direction will have an impact on the accuracy of the timing of those tiny signals, and whether they’re reproduced at all.”

But this shouldn’t matter in minimum phase devices like headphones capable of playing the audible spectrum. And if there are phase issues (e.g. in speakers), surely that should show up in FR as modes, right?
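On the minimum-phase point: for a minimum-phase system, the phase (and therefore the group delay/"timing") is fully determined by the magnitude response, which is exactly why FR carries the timing information. A sketch of the standard real-cepstrum (Hilbert-relation) reconstruction, using a simple one-pole filter as a hypothetical driver model:

```python
import numpy as np
from scipy.signal import freqz

def minimum_phase_from_magnitude(mag: np.ndarray) -> np.ndarray:
    """Given only |H| on a full 0..2*pi frequency grid, reconstruct the
    complex response a minimum-phase system with that magnitude must
    have (homomorphic / real-cepstrum method)."""
    n = len(mag)
    cep = np.fft.ifft(np.log(mag)).real      # real cepstrum of |H|
    fold = np.zeros(n)                       # fold: keep causal part,
    fold[0] = 1.0                            # double positive quefrencies
    fold[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        fold[n // 2] = 1.0
    return np.exp(np.fft.fft(cep * fold))

# A simple minimum-phase system (one-pole low-pass, hypothetical driver):
w, H = freqz([1.0], [1.0, -0.5], worN=4096, whole=True)

H_min = minimum_phase_from_magnitude(np.abs(H))
# Phase (hence timing) is fully recovered from the magnitude alone:
phase_error = np.max(np.abs(np.angle(H_min) - np.angle(H)))
```

Because the example filter is already minimum phase, the reconstructed phase matches the measured one to numerical precision; any excess over such a reconstruction in a real measurement would be genuine non-minimum-phase behavior.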

  15. From 23:12 Lachlan discusses how cables have a combination of resistance, inductance and capacitance, and the idea that cables can hold on to signal and mask sounds, although this won’t show up in frequency response. He cites Milind N. Kunchur’s cable paper for evidence of this, and says any delays “will matter for every single aspect of the sound, and our perception of the sound, except for the frequency response, harmonic distortion, and noise… That means it will affect timbre, imaging accuracy, fine detail resolution, overall soundstage size and the sense of separation of each individual instrument.”

I’ve watched Amir’s video that debunks the Kunchur study. The study seems to be a bit of a joke, really.

  16. 24:18: “If noise is entering the digital end of your system and influencing the timing circuit of your DAC or your streamer, or your transport… that could be causing additional jitter, and jitter is a word used to describe timing errors… even timing errors in timing will make a difference to what we hear. And this is why accessories such as cables, power supplies or filters can make a difference to the sound of your system. Basically anything that’s able to reduce the noise entering the system can improve its sound. That’s assuming the system doesn’t have a bigger problem that’s larger than the impact of removing some noise.”

I have experienced jitter issues myself, but the result was audio that effectively stopped and started and was noticeably awful. The issue was between a Samsung Frame TV and my Audiolab 6000A integrated amp. I was thankfully able to solve the issue by changing the PLL (phase locked loop) setting. Cables, power supplies and filters might also fix specific issues such as ground loops or EMI, but I’ve not known cables to make a difference to sound quality other than in specific known circumstances (i.e. impedance effects). I imagine most impacts of these kinds of accessories would either be obviously audible or totally inaudible - it seems unlikely they would result in subtle improvements to sound quality, e.g. greater perception of details or the like.

  17. 24:59: “The other thing that can smear timing is feedback circuits… What we do know is that feedback circuits work by looping the signal back around onto itself to cancel out noise. And that simple process of looping the signal back on itself by its nature has to introduce some smearing of time… I believe that’s why I tend to not always enjoy amps from brands that are measurement focused… that use these systems to get better measurements.”

Putting aside the timing thing, which seems likely to be bogus (is it?), is there any truth to the idea that amps that use feedback circuits don’t sound as good as others?

Lachlan concludes with “a few important truths”:

  1. 26:00: “Firstly, what we hear depends on the level, the structure or content, and the history of the audio signal. The tone based or sinus wave signals used for most signals can’t capture that.”

See above comment.

  2. 26:17: “Measurements such as frequency response, harmonic distortion and intermodulation distortion can’t consistently predict the perceived sound quality of a system, and they may negatively predict the sound quality of the system. That’s according to a study by F E Toole.”

Can someone who has read the Toole study explain what it says about this?

  3. 26:36: “Anything in your system that is restricting the agility of your system and its timing accuracy will be degrading the perceived sound quality of the sound you’re hearing, even if it has no impact on the frequency response or the distortion characteristics of the system.”

As per above comments.

I actually had a back and forth in the YT comments with Lachlan where I asked about some perceived problems, but his message was essentially that I’d not properly understood the video. In a subsequent response he said he couldn’t explain things any better (or apparently engage with any of the basic points in my initial comment) and he implied that I wished to ‘ignore the science’. I responded to that and left it there.

I very much welcome all of your thoughts about the above.

1 Like

This is a case where if people are going to quote a book, they need to cite the page and chapter. I have given this book a read but don’t quite recall that conclusion.

Indeed I’m also thinking that you’re correct.

To me this is the main issue with all of this talk about timing. None of this seems like it should be a main focus for stereo content playback on headphones. For binaural recordings played back using EQ customized to a person’s HRTF, yes, I’m sure all the recorded sounds would then provide accurate cues for localization.

For stereo content this makes no logical sense. The way sound is captured, processed, mixed, corrected with EQ, and mastered (usually on speakers) - none of it is done in a way that accurately mimics how a human hears sounds with two ears on headphones. Headphones can sound spacious, but accurate timing doesn’t create soundstage effects when the recording and playback aren’t done with binaural in mind.

I’m quite conflicted about this person’s channel. A lot of his impressions of headphones jibe with mine, but at the same time he seems to have a lot of very strange and unscientific thoughts and recommendations.

I’ve been thinking about this as part of my own quest to understand what are the factors that affect sound quality, real or perceived. And I started this other thread, to discuss an aspect of this, the import of drivers, dynamic or otherwise, linked below.

Apologies if I do not post all the evidence here for the things I’m fairly sure of, because these are things that can be easily verified by anyone. And hopefully this is not as “academic” and evidence-centric a place as some other forums - where only “expert” opinions are given any room to breathe, e.g. ASR.

Out of the mouth of babes …

I have not yet come to a firm conclusion on whether driver type matters or not, because I would have to assemble a fairly decent number of listening devices of each type and conduct a scientific test with various participants, like the kind Harman did to come up with their preference curve (you know what I mean - not just the curve, but the range of preferred sound signatures in a head-worn listening device, or in a speaker, e.g. the room curve for speakers). That is clearly beyond the resources of mere mortals, and probably beyond the resources of the one and only headphones.com, to invest in such an effort.

So we have to tease out some truth from the available information in a roundabout manner: hypothesis, and suspected evidence.

  1. In the other thread I propose that there may be a difference based on driver type, borrowing some truths that are not questioned in the professional music/audio world, where dynamic microphones are NOT the preferred choice for any delicate recording where the utmost accuracy is needed. For example, it is nigh on impossible that anyone would record a real orchestra with a dynamic microphone, and almost certain that there would not be a single dynamic microphone in sight at the recording of a classical orchestra, or one of the opera greats - e.g. Pavarotti, Placido Domingo, etc. No one records such outstanding performances with dynamic microphones. Never. It is typically done with ribbons or condensers (capacitor microphones). Why? Dynamics cannot capture the intended level of detail with the same accuracy as a condenser - this is not conjecture but FACT, non-contestable FACT.

So there is every reason to extrapolate that using dynamic drivers will impose a similar limitation on listening devices, making them what should be the last choice when choosing drivers for a speaker/headphone/IEM. In theory.

  2. In my limited experience, having listened to only one planar magnetic device, the ARTTI T10, I was shocked to discover a level of clarity that was different from all the other budget listening devices I own, and also different from my AKG K702 headphone, which is not a “budget” device. I was not expecting such a difference. So just something I noticed.

When I compared the frequency response of the ARTTI to IEMs such as the 7Hz Zero 2 on squig.link, and some other IEMs I own, they were similar, yet I could not explain why they sounded so different. All the dynamic driver IEMs sounded similar - they had the same “sound” - even though their frequency responses were dissimilar and not identical in the measurements on squig.link.

I’m an audio engineer, so one of the first important things one learns is the sound of limiting and compression. The sound of limiting and compression is kind of like what I expect perfect pitch to be like for those who have it. I do not have perfect pitch, but I easily “detect” the sound of a recording that has been compressed or limited. It just has a “sound”, and I can tell how much compression or limiting has been added just by listening to it. Sorry, I have no way of explaining this further. How I learnt to detect this is a story for another day. Suffice to say, I know the signature of compressed/limited processed audio.

On an exhaustive listening marathon with my small set of IEMs, I discovered that it was so much easier for me to hear these artefacts of compression and limiting via my dynamic IEMs, and here is my hypothesis to explain this: dynamic drivers are altering the transient response of the audio at, most likely, two extremes. Dynamic drivers are adding compression/limiting/expansion of their own, making it even easier for me to hear the dynamic processing that is already contained in the audio/music.

2.1 Limiting and compression at the high point of excursion, as well as sluggishness in moving from zero. Mass establishes a certain inertia, or over-excursion at the top of the excursion curve - i.e. due to the mass of the driver, on excursion, it is NOT able to stop and turn on a dime.

2.2 Similar to an expander in audio engineering: below a certain level, it is difficult for a dynamic driver to accurately represent quiet signals. Instead of finely tracking small changes, it is almost as if a dynamic driver has a kind of bit reduction, very similar to using a smaller number of bits to represent an audio recording. So a dynamic driver does not follow small changes well, instead jumping between broad levels, and for very low signals it just goes back to zero. In my listening, I hear these artefacts as some kind of change in the dynamic response/transfer function, where compared to the planar magnetic ARTTI T10, the few dynamic IEMs such as the 7Hz Zero 2 and KZ SAGA Balanced are “distorting” the signal. Especially of note is their poor performance at representing reverb trails, something the ARTTI T10 does a much better job of compared to every dynamic driver, head-worn listening device I have heard.

So, in conclusion: at this time, in spite of my limited sample set of listening devices, I am about as certain as I will ever be that YES, driver type does matter. From what I have heard so far, the planar magnetic device I have heard, in spite of having a similar frequency response, does not sound the same as any of the dynamic driver playback devices I own. And my layman’s hypotheses are set out above.

This view may run contrary to the view of others, and all I ask is: please think through the reasons I have logically and painstakingly presented above; there must be some truth in them. I think of this as like theoretical physics, as Einstein did it, and now we wait for experimental confirmation, in the same way that we not too long ago had confirmation of gravitational waves, a prediction from theoretical physics.

I’d be happy to collaborate, in any way I can, with anyone who has the experimental resources to examine these thoughts further and test them practically, as I am unable to. But this is how the world should work; I should not have to be good at everything. If I can think of plausible causes, and others can help examine them - debunking them through practical experiments, or otherwise confirming there is truth in my assertions - either way, that would be great, because we can move forward as people of knowledge.

The difference in clarity - especially on acoustic music, or any music with natural sounds such as reverb (real or artificial) - when heard on the ARTTI T10, compared to the other dynamic IEMs in my possession, is significant. And the only explanation I have, since the measured frequency response is similar, has to be a difference in transient response.

If I can think of these things, I suspect that many people out there know the truth: that, in my hypothesis, dynamic drivers in audio reproduction devices are a cost-based compromise (i.e. garbage that makes money for some), a fact well known by those who know more than me. But those who know - especially audio gear manufacturers and researchers - are keeping hush-hush about this info, which would shake the entire audio world if the cat were irrefutably out of the bag.

Sadly, all I have is a suspicion, based on what my ears are clearly telling me every time I compare planar vs dynamic from my limited pool of IEM devices.

Haven’t read the study. Most of Toole’s work has been on loudspeakers and room acoustics though. And he may have been referring to either the in-room frequency response or the direct frequency response, neither of which predicts a speaker’s sound quality on its own, because independently neither tells you how the speaker will actually perform in a room.

Toole and the other researchers at Harman have found a very high correlation to perceived sound quality though when considering both the direct or on-axis, and also the off-axis frequency response performance of speakers in a room.

I can also tell you that the current distortion measurements we mostly use are not perceptually based. And work is on-going in this area. I believe Cam alluded to some new studies on this in his recent measurement video. One of the principal advocates for this has been Dr. Earl Geddes. Erin of EAC talked with him not too long ago in some depth on this. Might be an interesting listen if you haven’t heard it.

This does not mean we should ignore specs like THD though. It just means that we have more to learn on the subject. I find that there is some correlation between this metric and some aspects of the sound qualities I hear, as stated in other contexts here (and elsewhere). So it is something I try to look at when buying headphones.

Some audiophiles also seem to find the distortion in some tube amps more pleasing to their ears as well. So it may not necessarily always be a bad thing. And there could be certain types of distortion that are actually more pleasing to the human ear. This is something Cam also alluded to in his final remarks in the measurement video, cued up below.

Not sure I agree with Jason, but there you go. :slight_smile:

2 Likes

Not sure about the mode thing. But yes, I think phase changes in EQ could have some potential to degrade audio quality. And advise mitigating it where possible.

Re speed and responsiveness, EQ doesn’t happen in a vacuum. If the headphone cannot comfortably or effectively produce higher (speedier) frequencies at decently high volume levels without distorting or breaking up modally, then it will fail from the standpoint of speed or responsiveness. Even if you’re able to EQ it to (apparently) produce those higher frequencies on a graph.

The same kind of failure can also happen in the lower frequency range, if boosting the sub-bass with EQ produces audible distortion, for example (as it does with many open dynamic headphones).

So no, I’m afraid FR alone won’t necessarily tell you the whole story on these things. It always needs to be considered in the context of the headphone’s other characteristics, such as phase, distortion, and modal performance. This is my opinion anyway at this time, based on some other conversations here.
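To make the phase point concrete: a standard parametric EQ band is itself minimum phase, so any boost drags a predictable phase shift along with its magnitude change. A sketch using the common RBJ audio-EQ-cookbook peaking-filter formulas (the 3 kHz / +6 dB / Q=1 values are arbitrary examples):

```python
import numpy as np
from scipy.signal import freqz

def peaking_eq(f0: float, gain_db: float, q: float, fs: float):
    """RBJ audio-EQ-cookbook peaking filter; returns (b, a) biquad
    coefficients. This is the usual minimum-phase parametric EQ band."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

fs = 48000
b, a = peaking_eq(3000, 6.0, 1.0, fs)        # +6 dB boost at 3 kHz
f = np.linspace(20, 20000, 2000)
_, H = freqz(b, a, worN=f, fs=fs)
mag_db = 20 * np.log10(np.abs(H))
phase_deg = np.degrees(np.unwrap(np.angle(H)))
idx = np.argmin(np.abs(f - 3000))            # ~6 dB at the center,
# with a non-zero phase shift around the boosted region.
```

Whether that accompanying phase shift is audible in a minimum-phase system is a separate question, but it is at least fully visible and predictable from the filter itself.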