"Myths About Measurements" Discussion Thread

Indeed. While I’m younger than you, I actually did grow up in an anachronistic subculture that used many old-timey expressions. A few years back I read a story by a professor about getting old. She referenced “Moses crossing the Red Sea” in class and none of the students understood the reference.

1 Like

No, not the expression itself, just what it was being applied to in this context: whether it was targeting @AudioTool’s commentary on how people should do EQ in practice, or the earlier statements about sound degradation.

I apologize for not responding to this earlier, AudioTool. I’m pretty sure I’ve already mentioned this in other contexts on the forum. The main “distortion” effects that PEQ filters in minimum phase EQs will have are frequency-dependent phase shifts or delays, which I believe can potentially affect the precision of stereo imaging. (Google AI also seems to agree with this, fwiw. See the example below.)

This is an essential part of how most EQs work, btw. So there isn’t really any way of avoiding it (unless you use linear phase, which has other downsides). Using lower Q filters will also help to mitigate it, though.

Yes, phase delays introduced by digital EQs can affect stereo imaging, especially when working with stereo tracks, parallel processing, or multi-miked sources. The impact on imaging is a key distinction between standard “minimum phase” and “linear phase” equalizers.

Minimum phase EQ

Most digital EQs emulate analog EQs and are “minimum phase”.

  • How it works: This type of EQ introduces a small, frequency-dependent phase shift whenever a frequency band is boosted or cut.
  • Effect on imaging: While often subtle and sometimes perceived as a desirable “coloration,” the phase shift can cause issues with stereo imaging when:
    • Processing stereo tracks: If you apply EQ to a stereo track, the phase shift can cause slight, frequency-dependent timing differences between the left and right channels. This can lead to a slight smearing of the transients and make certain frequencies feel off-center or cause the image to “lean” to one side.
    • Mixing correlated signals: If you have two correlated signals (e.g., a multi-miked drum kit or layered synths) and EQ only one of them, the change in phase relationship can cause phase cancellation. This will alter the sound and can shift the perceived position of the instrument in the stereo field.

Linear phase EQ

In contrast, linear phase EQs are designed to prevent frequency-dependent phase shifts, but at the cost of other trade-offs.

  • How it works: A linear phase EQ delays all frequencies equally, preserving the original phase relationships. It achieves this by using a Finite Impulse Response (FIR) filter and a complex algorithm that adds latency to the signal.
  • Effect on imaging: Because there is no phase shift, linear phase EQs are “phase-coherent” and will not smear transients or shift the stereo image in the way minimum phase EQs can. This makes them ideal for tasks where phase integrity is critical, such as mastering or processing multi-miked sources.
  • Disadvantages: This “surgical” precision comes with some drawbacks:
    • Increased latency: To achieve perfect phase alignment, linear phase EQs introduce a significant amount of latency, which can be problematic during tracking or live performance.
    • “Pre-ringing” artifacts: Linear phase processing can introduce audible “pre-ringing,” a short, ethereal sound that occurs before a transient. This is most noticeable with aggressive, narrow boosts or cuts and can make transient-heavy sounds, like drums, sound unnatural.
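If you want to see that distinction in numbers rather than prose, here is a rough Python sketch (my own illustration, not part of the AI answer above, and assuming numpy/scipy are available). It builds a minimum phase peaking EQ from the RBJ cookbook formulas, fits a linear phase FIR to roughly the same magnitude, and prints the group delay of each. The settings (+6 dB at 5 kHz, Q = 2, 48 kHz sample rate) are just illustrative values.

```python
import numpy as np
from scipy.signal import freqz, firwin2, group_delay

fs = 48000

# Minimum phase peaking EQ (RBJ "Audio EQ Cookbook"): +6 dB at 5 kHz, Q = 2
f0, gain_db, Q = 5000.0, 6.0, 2.0
A = 10 ** (gain_db / 40)
w0 = 2 * np.pi * f0 / fs
alpha = np.sin(w0) / (2 * Q)
b_min = [1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A]
a_min = [1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A]

# Linear phase FIR designed to roughly the same magnitude response
freqs = np.linspace(0, fs / 2, 512)
_, h = freqz(b_min, a_min, worN=freqs, fs=fs)
b_lin = firwin2(1023, freqs, np.abs(h), fs=fs)

# Group delay (returned in samples) of both filters, converted to milliseconds
w, gd_min = group_delay((b_min, a_min), w=freqs[1:], fs=fs)
_, gd_lin = group_delay((b_lin, [1.0]), w=freqs[1:], fs=fs)

for f in (1000, 5000, 10000):
    i = np.argmin(np.abs(w - f))
    print(f"{f:>5} Hz: minimum phase {gd_min[i] / fs * 1e3:.3f} ms, "
          f"linear phase {gd_lin[i] / fs * 1e3:.3f} ms")
```

The minimum phase filter shows a small, frequency-dependent delay concentrated around the boost, while the linear phase FIR shows the same (much larger) delay at every frequency, i.e. its latency. That is the trade-off described above.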
1 Like

This is true, but it is not distortion in the way you are thinking. I’ve posted about this before, but I think oratory1990 has a better explanation than I could write:

We are not using EQ to shape the sound in a new way, we are using EQ as a remedy, to “repair” or “fix” inherent problems/idiosyncrasies of the headphone.
E.g. we’re using a filter at 5 kHz to combat a resonance of the headphone at 5 kHz.
The resonance of the headphone will not only affect the frequency response of the headphone but also the phase angle (as they are invariably linked in a minimum phase system)
By now calculating a minimum-phase filter to combat this specific resonance, we are combating the problem both in terms of magnitude frequency response as well as phase angle frequency response.

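To put some numbers behind the magnitude/phase link oratory1990 describes, here is a small Python sketch (my own illustration with numpy/scipy, not his code). It builds a minimum phase “resonance” at 5 kHz, throws the phase away, and recovers it from the magnitude alone via the Hilbert-transform relationship:

```python
import numpy as np
from scipy.signal import freqz, hilbert

# A minimum phase "resonance": RBJ peaking biquad, +8 dB at 5 kHz, Q = 4
fs = 48000
f0, gain_db, Q = 5000.0, 8.0, 4.0
A = 10 ** (gain_db / 40)
w0 = 2 * np.pi * f0 / fs
alpha = np.sin(w0) / (2 * Q)
b = [1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A]
a = [1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A]

w, h = freqz(b, a, worN=8192, whole=True, fs=fs)   # sample the full circle
true_phase = np.unwrap(np.angle(h))

# For a minimum phase system, the phase response is fully determined by the
# log-magnitude response (they form a Hilbert-transform pair).
recovered_phase = -np.imag(hilbert(np.log(np.abs(h))))

for f in (2000, 4000, 6000, 10000):
    i = np.argmin(np.abs(w - f))
    print(f"{f:>5} Hz: true {np.degrees(true_phase[i]):+6.1f} deg, "
          f"from magnitude alone {np.degrees(recovered_phase[i]):+6.1f} deg")
```

The two columns should match closely, which is the sense in which a minimum phase EQ filter that corrects the magnitude of a minimum phase resonance also corrects its phase.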
2 Likes

Yes. I’m afraid my understanding of the minimum phase concept is still a bit lacking. And I think you, Oratory1990 (and also Blaine) may be mostly correct on this.

If EQ can correct both the phase and magnitude of a resonance, I wonder if there’s some way to see that in a measurement, other than frequency response.

I think this only works if the headphones are truly or mostly minimum phase btw.

I have a few more thoughts on this as well. And will try to get back to you on them in fairly short order.

You could see it in the impulse response graph, which is mathematically equivalent to frequency response in a minimum phase system like headphones.
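(To be precise, the impulse response and the complex frequency response are a Fourier transform pair for any linear system; the minimum phase part is what lets the magnitude response alone determine the rest.) Here is a quick Python sketch of that equivalence, assuming numpy/scipy; the 1 kHz low-pass is just a stand-in for a real headphone measurement:

```python
import numpy as np
from scipy.signal import butter, lfilter, freqz

fs = 48000
b, a = butter(2, 1000, fs=fs)          # any linear filter will do as an example

# "Measure" the impulse response
n = 4096
impulse = np.zeros(n)
impulse[0] = 1.0
ir = lfilter(b, a, impulse)

# The FFT of the impulse response is the complex frequency response
freqs = np.fft.rfftfreq(n, 1 / fs)
fr_from_ir = np.fft.rfft(ir)
_, fr_direct = freqz(b, a, worN=freqs, fs=fs)

print(np.allclose(fr_from_ir, fr_direct, atol=1e-6))   # True (up to IR truncation)
```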

Yes. And yes, they are mostly minimum phase. Any exceptions to this are not relevant to deciding whether to use minimum phase EQ or linear phase EQ. I.e., you need to choose one or the other, and minimum phase EQ is clearly the better fit.

1 Like

If there was some way, it would be interesting to see both frequency response and also a plot of the phase shifts on the same graph for comparison. I’m not sure how helpful an impulse plot would be for that though.

Agreed.

From what I’ve been reading, the easiest way to see the minimum phase performance of a headphone (or speaker?) is on an Excess Group Delay plot.

Amir at ASR regularly posts group delay graphs with his headphone reviews. These may not be quite the same as an excess group delay graph though. The difference is explained in more detail here…

Resolve shows EGD graphs for the headphones he measures. Take note of the scale of the delay on the Y-axis though, because sometimes it’s in 20 millisecond steps, sometimes in 5 ms steps, and sometimes in 1 ms steps, which shows the most detail.

Studies like this one by Genelec and Aalto University suggest that delays on the order of a millisecond may be detectable at certain frequencies.

Delays in this range will also exceed the period (the time for one complete cycle, i.e., 1/frequency) of higher frequency signals. The period of a 4 kHz signal is 0.25 ms, for example, while a 10 kHz signal has a period of only 0.1 ms.

I think it might also be helpful to see EGD plots in conjunction with frequency response plots, to better see how features in the two measurements align.
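For anyone who wants to try this on their own measurements, here is a rough Python sketch of how excess group delay can be computed (assuming numpy/scipy): derive the minimum phase implied by the measured magnitude via the Hilbert-transform relationship, then subtract its group delay from the measured group delay. The “measurement” below is synthetic, a 5 kHz resonance plus 0.5 ms of pure delay, not a real headphone.

```python
import numpy as np
from scipy.signal import freqz, hilbert

fs, n = 48000, 8192

# Synthetic "measurement": a minimum phase resonance plus 0.5 ms of pure delay
# (the delay stands in for any excess, non-minimum-phase behaviour).
f0, gain_db, Q = 5000.0, 8.0, 4.0
A = 10 ** (gain_db / 40)
w0 = 2 * np.pi * f0 / fs
alpha = np.sin(w0) / (2 * Q)
b = [1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A]
a = [1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A]
w, h = freqz(b, a, worN=n, whole=True, fs=fs)
h_meas = h * np.exp(-2j * np.pi * w * 0.0005)        # add 0.5 ms of delay

# Minimum phase implied by the measured magnitude (Hilbert-transform relation)
min_phase = -np.imag(hilbert(np.log(np.abs(h_meas))))

# Group delay = -d(phase)/d(omega); excess group delay = measured minus minimum
dw = 2 * np.pi * (w[1] - w[0])                       # rad/s per frequency bin
gd_meas = -np.gradient(np.unwrap(np.angle(h_meas)), dw)
gd_min = -np.gradient(np.unwrap(min_phase), dw)
egd_ms = (gd_meas - gd_min) * 1e3

i = np.argmin(np.abs(w - 5000))
print(f"excess group delay near 5 kHz: {egd_ms[i]:.3f} ms")   # ~0.5 ms by construction
```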

Back to bits…

Nuance made a similar comment I believe.

Internal processing at higher bit depths has obvious advantages. However, to get the full benefit of that, I also recommend using a higher bit depth for your audio device, something higher than the native depth of 16-bit content. (24 bits is likely sufficient for most consumer applications, but some audiophiles will use even higher depths than this.)

Smaller reductions are better. But any reduction, no matter how small, will necessitate requantization. So I still advise using a bit depth higher than 16 bits to improve the quality of the signal sent to your audio device.
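To show roughly what I mean, here is a small Python sketch (illustrative only, using simple rounding with no dither): it applies a −6 dB digital volume reduction and then requantizes the result to 16 bits versus 24 bits, printing the error relative to the signal.

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 997 * t)      # test tone (float, half of full scale)
gain = 10 ** (-6 / 20)                     # a -6 dB digital volume reduction

def requantize(signal, bits):
    """Round to the grid of the given bit depth (no dither, to keep it simple)."""
    step = 2.0 ** -(bits - 1)
    return np.round(signal / step) * step

def error_db(y, reference):
    e = y - reference
    return 10 * np.log10(np.mean(e ** 2) / np.mean(reference ** 2))

target = x * gain
print(f"error at 16-bit output: {error_db(requantize(target, 16), target):.1f} dB")
print(f"error at 24-bit output: {error_db(requantize(target, 24), target):.1f} dB")
```

The reduced signal fits the 24-bit grid far more closely, which is the sense in which running the device above 16 bits preserves quality after digital level changes.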

Most music is compressed in dynamic range. This is often done to make it sound better on inferior (non-audiophile) audio gear with more limited volume and dynamic range, or in less-than-ideal listening conditions where there’s significant background noise.

There is a lot of audio content that is not so compressed, though, and has a higher dynamic range. And to take full advantage of it and get the best results with all types of content, you want a system that is optimized for the widest dynamic range supported by your gear, your ears, your listening preferences/environment, and the audio format, imho.

Most commercial music of interest to typical listeners is quite compressed, and may be recorded or evaluated with sub-optimal gear (e.g., the classic HD 600). If you want to listen to Artist XYZ, then you must listen to Artist XYZ and accept their mainstream production decisions. Compression is an arbitrary studio decision based on how the artist and team want the music to sound. Electric guitars are even paired with “compression” pedals to create a louder-seeming sound without actually increasing the volume.

One can find various releases with high dynamic range, even if they end up being used to test a system rather than for listening. They tend to be most common among orchestral releases, audiophile-specific labels, and a handful of artists who are also audiophiles (e.g., Donald Fagen). Still, dynamic range is a moot point for those who don’t listen to these sources.

Even under ideal conditions, the maximum functional dynamic range of music follows from the performance envelope of human hearing. This means perhaps 50 dB to 90 dB, but more likely 65 dB to 90 dB. Too soft and you can’t hear it; too loud and it’ll be uncomfortable and cause hearing damage.

3 Likes

I’ve come to the realization that EQ can do more harm than good. I prefer to use headphones that are “close enough” to neutral for my taste with no EQ. Instead, tools such as PGGB-RT can enhance playback by applying an improved, more accurate Nyquist algorithm.

1 Like

Only if you’re careless or don’t know what you’re doing. Otherwise, it’ll do precisely the opposite of what you stated.

4 Likes

I’m with @Nuance here. A wee bit of EQ can go a long way, and it’s often easy to do by ear. Sometimes the needs of a given setup are extremely obvious. If you take it too far, or stack on too much digital processing, EQ can introduce some weird artifacts.

3 Likes

One can “know what they are doing” and still have issues with EQ. I used to be a big supporter; no more. I find more improvement from headphones that do not need EQ, combined with a better Nyquist filter.

Don’t knock it until you have tried it.

1 Like

I didn’t knock Nyquist filters, but rather offered an alternative point of view on EQ. :wink:

I think we’d both agree that finding a headphone that doesn’t require EQ would be ideal, though. Or at least one that requires very little tinkering.

2 Likes

Agreed. Well stated.

I spent a lot of time researching EQ, and found it can help with some recordings on some headphones. I’ve also experienced the issue where the same EQ settings can make a different recording sound worse. For me, it got to be too much of a hassle. I went through a lot of headphones, and when I came across one that I felt didn’t need EQ to sound good enough across different genres, I kept it. I’ve got four headphones that sound good to me without EQ. They are all different, but are similar enough to not need EQ for my preferences. I find more improvement using PGGB-RT, especially with low-level detail clarity, vocals, and the rendering of instruments.

1 Like

I have three headphones that I don’t think need EQ. However, I haven’t found a headphone yet that didn’t sound significantly better with EQ.

Ok, there’s a caveat to that last part. I own a couple of ZMF headphones and they would both be challenging to improve with EQ since they are so colored. I honestly haven’t even tried, since the coloration is the reason I own them.

I don’t understand how to compare sound improvement between a better Nyquist filter and using EQ. They affect the sound in very different ways. You can’t add a bass shelf with a Nyquist filter for example. Since they can both be used, why would you want to choose?
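Just to illustrate what that entails: a bass shelf is a standard EQ building block. Here is a minimal Python sketch of a low-shelf biquad using the RBJ Audio EQ Cookbook formulas (the 105 Hz / +4 dB values are only example settings):

```python
import numpy as np
from scipy.signal import sosfilt

def low_shelf_sos(fs, f0, gain_db, S=1.0):
    """RBJ Audio EQ Cookbook low-shelf biquad, returned as one second-order section."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / 2 * np.sqrt((A + 1 / A) * (1 / S - 1) + 2)
    cosw = np.cos(w0)
    b0 = A * ((A + 1) - (A - 1) * cosw + 2 * np.sqrt(A) * alpha)
    b1 = 2 * A * ((A - 1) - (A + 1) * cosw)
    b2 = A * ((A + 1) - (A - 1) * cosw - 2 * np.sqrt(A) * alpha)
    a0 = (A + 1) + (A - 1) * cosw + 2 * np.sqrt(A) * alpha
    a1 = -2 * ((A - 1) + (A + 1) * cosw)
    a2 = (A + 1) + (A - 1) * cosw - 2 * np.sqrt(A) * alpha
    return np.array([[b0, b1, b2, a0, a1, a2]]) / a0   # normalize so a0 = 1

fs = 48000
sos = low_shelf_sos(fs, f0=105.0, gain_db=4.0)   # +4 dB below ~105 Hz (example only)

x = np.random.randn(fs) * 0.01                   # placeholder mono audio signal
y = sosfilt(sos, x)                              # shelved output
```

A reconstruction/resampling filter can’t do this kind of tonal reshaping, which is why the two aren’t really interchangeable.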

2 Likes

Well, with the Topping D900, the EQ options are restricted to 192 kHz. Since I often use either 352.8/384 or 705.6/768 kHz, I would rather eschew EQ and go with the higher resolution. Like I’ve said, I’ve used EQ for quite some time, and eventually gave up. I just use headphones that sound close enough to speakers that I don’t feel the need to mess with it anymore.

2 Likes

I hear ya. Your experience points to the larger issue, which is the quality of the recording, mixing, and mastering job. PEQ on the headphone will simply change the headphone’s frequency response; it won’t fix poor recordings or lousy mixing and mastering. An alternative could be different PEQ presets for your headphones to compensate for bad recordings, but that could be an exhausting exercise since there’s seemingly no recording standard in the music industry.

1 Like