"Myths About Measurements" Discussion Thread

Ah. No worries. One of the limitations of this type of format is that you don’t get to know the other members as well as you would if you could talk in person. :slight_smile:

My perspective is simply that parametric EQ is great in theory, but in reality too many variables are in play. Getting the Q factors right for a given frequency adjustment can be an issue. Another factor not talked about much, but something I’ve run across, is unit-to-unit variation within a given headphone model. I’ve seen lots of complaints about variance between Focal headphones of the same model. I discounted this until I experienced it firsthand. The first Dan Clark E3 headphones I had were missing the presence region. Sold them. A few months later, I got another pair of E3s, and this pair was fine. It sounded more like what the reviewers raved about.

I have used Sonarworks for a couple of supported headphones, and yes, it does help some. It can’t make a silk purse out of a sow’s ear, though. I have occasionally noticed bigger differences from driving a given headphone with different amps than from using EQ.

1 Like

All of your points are quite valid; there are so many variables in the equation that can ultimately lead to failure, or even prevent someone from beginning. Properly EQing a headphone is quite difficult and takes a lot of patience and practice, and even then it doesn’t always turn out how you’d hope. I honestly don’t recommend it to normal people and only do it myself because I have “the brain worms.” :grinning_face_with_smiling_eyes: Fortunately, it’s turned out pretty well with the headphones I currently use…at least I think it has. Heh.

2 Likes

Good on ya, mate! I don’t doubt the potential to improve the playback quality via EQ. I more or less gave up on it, outside of using Sonarworks for supported headphones. To me, EQ is like what Woody Hayes used to say about throwing the football:

“Three things can happen, and two of them are bad.” :laughing:

2 Likes

It depends a bit on how you use the EQ, of course. It is a common thing, though, to use a negative gain or preamp with both analog and digital equalization to ensure that your other EQ adjustments/filters don’t exceed unity, or 0 dBFS, and potentially clip your audio content.

If you use a parametric peak or shelf filter with a +4 dB gain, for example, then you’d generally also want to use a -4 dB preamp (which lowers the entire frequency spectrum by 4 dB) to keep your audio content from exceeding unity/0 dBFS at the filter’s affected frequencies.

If you are only applying EQ filters with a negative gain though, then you probably don’t need to use a negative preamp. And in some cases, you might even want to use a positive preamp, if the other EQ filters result in a net overall reduction in your audio signal’s levels, below unity or 0 dBFS.
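For anyone who wants to sanity-check the headroom math, here’s a rough Python sketch of the idea; the filter gains are made-up examples, and the “sum of boosts” fallback is just a conservative option, not a rule:

```python
# Rough headroom check: pick a preamp value that offsets the positive EQ
# gains so boosted frequencies can't push the signal past 0 dBFS.
# The filter gains below are made-up examples, not a recommendation.

filters_db = [+4.0, -2.5, +1.5, -3.0]    # gains of your PEQ bands, in dB

boosts = [g for g in filters_db if g > 0]

# Common quick rule: offset the largest single boost.
preamp_db = -max(boosts) if boosts else 0.0

# More conservative option (usually overkill, since boosts at different
# frequencies rarely stack fully): offset the sum of all boosts.
preamp_conservative_db = -sum(boosts)

print(f"Preamp (largest-boost rule): {preamp_db:+.1f} dB")
print(f"Preamp (sum-of-boosts rule): {preamp_conservative_db:+.1f} dB")
```

In Equalizer APO, whichever value you settle on just goes in the `Preamp:` line of your config.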

Any reduction in volume in the digital domain, including the use of a negative preamp filter, will degrade the quality of your audio content to some degree, btw. So it’s best to keep any such reductions to a minimum where possible, and to let the headphones and headphone amplifier do most of the attenuation needed for comfortable listening.

Because my 250-ohm DT770s are fairly low in sensitivity (i.e., quiet) to begin with, for example, I try to run the hottest digital signal I can into my amp without any clipping, to maintain the best audio quality possible.

EQ and volume changes in the digital domain also mean your audio samples have to be recalculated (requantized). So if you’re using a digital EQ with 16-bit, 44.1 kHz content, then you should also bump up the bit depth in the audio device settings where the EQ is occurring to at least 24 bits, to get better results from that recalculation. 16 bits just isn’t enough to do it smoothly, imho.

1 Like

Do you have any evidence of that besides anecdotal?

2 Likes

I’m honestly surprised to see you say this since I know how much experience you have with EQ. It sounds like you think there is some kind of “correct” EQ when there really is only preference, and that is far easier to achieve. Anything less preferred is just a data point, not a failure. Anyone new to EQ can start with an Oratory or Headphones.com preset and go from there. Do some How To Listen training and tweak to your taste.

If you think it has… it has. I know there are all kinds of biases involved but really there are fewer biases in comparing EQ presets than in comparing headphones.

1 Like

Yes, though I will say, it’s important not to discount the bias of “I put effort into making this EQ, therefore it should sound better,” and the issue of non-level-matched EQ toggling. You’d have to ensure you get the same SPL with pink noise or something for both EQ on and off, and then adjust your pre-gain accordingly. Most folks getting into EQ don’t do this, and they also don’t do blind comparisons once the profile is created.
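If anyone wants to take some of the guesswork out of that, here’s a minimal Python sketch of the idea, assuming you’ve rendered or recorded the same pink-noise clip twice, once with the EQ off and once with it on; the filenames are placeholders:

```python
# Minimal level-matching sketch: compare the RMS level of the same pink-noise
# clip rendered with EQ off vs. EQ on, and report the pre-gain offset needed
# to bring the EQ'd version back to the same level.
# "pink_eq_off.wav" and "pink_eq_on.wav" are placeholder filenames.
import numpy as np
import soundfile as sf

def rms_dbfs(path):
    x, _ = sf.read(path)
    if x.ndim > 1:                 # fold stereo down to mono for a single figure
        x = x.mean(axis=1)
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

off_db = rms_dbfs("pink_eq_off.wav")
on_db = rms_dbfs("pink_eq_on.wav")

print(f"EQ off: {off_db:.2f} dBFS RMS   EQ on: {on_db:.2f} dBFS RMS")
print(f"Adjust pre-gain by roughly {off_db - on_db:+.2f} dB to level-match")
```

RMS over pink noise is only a rough stand-in for perceived loudness, but it gets the pre-gain into the right ballpark before you fine-tune by ear.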

2 Likes

Nah, I’m not saying there is one true correct way for the masses. Apologies if I gave that impression. I’m simply saying it’s possible to do it well on an individual basis, through lots of learning, trial and error and patience. And even then, the results could probably be better.

That’s really what it comes down to - EQing “successfully enough” (for yourself with your own systems) so that you’re happy with the results.

100% yes! I create my EQ preset in Peace and then another called “level match” for whichever headphone it is, then toggle between them. I’ll always be biased to some degree - it’s unavoidable - so sometimes I have my wife or brother do the listening and then provide feedback. Sometimes their input is sobering, while other times it confirms what I think I heard. In the end, it’s all about having fun and enjoying the music.

See habituation research. All this talk is dancing around decades of perception science methods. Brutally complex and costly methods. Don’t go there unless you’ve got years of time to spare.

1 Like

There are of course times when it’s so obviously better that you don’t really need that kind of scrutiny too. Like where something was intolerable before but is now tolerable. I’m merely thinking of differences between something like good and something like great.

1 Like

Absolutely! Although I’ve had good luck comparing similar EQ presets by just matching SPL at one frequency somewhere in the mids. This is actually the trickiest part since perceived loudness is not the same as measured.

Word. The only reason I have EQMac installed is that I can quickly toggle EQ presets. I love how Roon applies the EQ, but I hate the interface and the (unfortunately necessary) delay when switching EQ presets.

My main issue when comparing EQ presets seems to be along the lines of “I can tell it is different, but is it BETTER?”

2 Likes

I hear ya. That’s where extended listening sessions come into play, sometimes days worth of them. Short toggles back and forth allow me to hear the change, but only extended sessions tell me if the change is good.

Bits are bits. If you reduce or resample them, there will be some loss. In the case of digital audio, the loss shows up as reduced dynamic range, a lower signal-to-quantization-noise ratio, and rounding errors, all of which can potentially decrease the quality of your audio.

A better question is whether this loss is audible. As you probably already know, dithering, higher bit depths, and keeping digital volume reductions to a minimum will all help to mitigate any audible loss. But they don’t eliminate the loss completely.
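To make the dithering point a bit more concrete, here’s a toy Python sketch; the tone level and the plain TPDF dither are arbitrary choices for illustration:

```python
# Toy demo of the dither point: requantize a tone that sits below 1 LSB of
# 16-bit, with and without TPDF dither. Without dither the tone is simply
# erased; with dither it survives, buried in benign noise.
import numpy as np

rng = np.random.default_rng(1)
fs = 48000
t = np.arange(fs) / fs
scale = 32767                                    # 16-bit full scale
x = 0.4 / scale * np.sin(2 * np.pi * 440 * t)    # 440 Hz tone, ~0.4 LSB peak

plain = np.round(x * scale) / scale              # straight requantization
tpdf = rng.random(fs) - rng.random(fs)           # triangular dither, +/- 1 LSB
dithered = np.round(x * scale + tpdf) / scale

for name, y in [("undithered", plain), ("dithered", dithered)]:
    surviving = np.dot(y, x) / np.dot(x, x)      # fraction of the tone still present
    err_rms = np.sqrt(np.mean((y - x) ** 2))
    print(f"{name:10s}: {surviving:5.2f} of the tone remains, error RMS {err_rms:.1e}")
```

Both versions have lost something relative to the original; the dithered one just trades signal-correlated error for plain noise.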

That’s probably not the answer you were hoping for, but it’s the best I can give at this time. If you want to know more, there are plenty of discussions online about the pros and cons of digital vs. analog volume controls. Some are relevant to this discussion, and some not so much.

1 Like

Nuance was questioning whether a negative preamp filter such as is necessary with PEQ will degrade the quality of your audio content, but you are including more than that in your response. Just pointing that out to reduce confusion.

I am skeptical that using a negative preamp filter will cause any information loss with modern digital recordings if the algorithm works at 24-bit depth, as Roon does. Let alone any audible loss. There is more than enough headroom to accommodate the entire recorded dynamic range as long as you don’t go crazy with the dB of preamp. And if you do, the headphones are likely going to distort. Also, transcoding from 16-bit to 24-bit depth before applying PEQ is lossless.
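That last point is easy to verify, for what it’s worth; here’s a trivial Python check (the random samples are just stand-ins for real 16-bit audio):

```python
# Tiny check that padding 16-bit samples into a 24-bit container is lossless:
# shift the 16 bits up by 8, shift them back down, and nothing changes.
import numpy as np

rng = np.random.default_rng(0)
x16 = rng.integers(-32768, 32768, size=100_000, dtype=np.int16)  # stand-in for 16-bit audio

x24 = x16.astype(np.int32) << 8          # 16-bit values placed in a 24-bit container
back = (x24 >> 8).astype(np.int16)       # undo the padding

print("bit-identical after the round trip:", np.array_equal(x16, back))
```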

I do sometimes wonder about the PEQ filters themselves, though. My understanding is that the algorithm to apply PEQ is fairly straightforward, but that doesn’t mean there isn’t information loss. I mean, the whole point of PEQ is to change (aka distort) the original signal, so how would you know if PEQ is lossless? :thinking:
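For what it’s worth, the core of most PEQ implementations is typically a biquad filter per band, built from the well-known Audio EQ Cookbook formulas. Here’s a rough Python sketch of a single peaking band; the sample rate, center frequency, gain, and Q are arbitrary example values:

```python
# One parametric "peaking" EQ band implemented as a biquad, using the
# Audio EQ Cookbook coefficient formulas. fs / fc / gain / Q are arbitrary.
import numpy as np
from scipy.signal import lfilter

def peaking_biquad(fs, fc, gain_db, q):
    A = 10 ** (gain_db / 40)              # sqrt of the linear gain
    w0 = 2 * np.pi * fc / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]             # normalize so a[0] == 1

fs = 48000
t = np.arange(fs) / fs
x = 0.1 * np.sin(2 * np.pi * 3000 * t)    # a 3 kHz test tone at modest level

b, a = peaking_biquad(fs, fc=3000, gain_db=4.0, q=1.4)
y = lfilter(b, a, x)

# The tone sits at the band's center frequency, so it should come out ~4 dB hotter.
steady = y[fs // 10:]                     # skip the filter's start-up transient
print(f"Level change at 3 kHz: {20 * np.log10(np.max(np.abs(steady)) / 0.1):.2f} dB")
```

The filter math itself runs in floating point with negligible rounding error; the loss people worry about comes later, when the result is quantized back down to a fixed output bit depth. Whether the EQ’d signal is “better” is a different question entirely, of course.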

3 Likes

Precisely this.

Obligatory 20 characters.

1 Like

Man, you guys hurt my brain. I decided to read everything after my last comment.

My headphone and music tendencies are mostly to just zone out while doing something else. I get an initial impression of hardware and then just spend time with it. I think that’s why I have so many different pieces to the puzzle. I can switch up my hardware and get a new experience all over again. And I find my current habits work better for me than PEQ (just realized today what that acronym is, lol). I’m kind of fine with just about any hardware…. I just want to spend time with it. Caveat: a minimum threshold of quality.

What I want to know from you guys is: where do I start? I understand what is said to a decent degree, but I haven’t experienced it.

Ex: I should be oversampling before the signal gets to my DAC… OK, how? I have a vague understanding that this can only be done with content in hand, not streaming. OK, well, there goes that most of the time. I’m in the process of changing entirely how I listen to music: no clicky keyboard and mouse, no (or minimal) flashing screens. And I’m willing to expand my own library to make this a new/updated hobby.

The problem I’ve had in years past is that I never had a baseline to know where I started with old-school EQ. I’d end up with highly irregular sound. But I’m sure things have changed since then.

Currently I am taking breaks from my computer to listen to my Dragon Inspire IHA-1, because of all the previously mentioned reasons and because I enjoy learning my hardware. I find some solace in the act of letting the tubes warm up with something plugged in, and letting the hardware burn into my life. Perhaps my knee-jerk reaction is to dislike a product, like with both my Centaurus and Inspire, but I find some merit in letting a sound simmer and cook. Today I woke up wanting to just sit with my tubes but had to work. That desire sat with me all day, and when I got home I grabbed my 6xx (best tube headphones I own) and laid down with it. But…. it wasn’t tickling my pickle like normal. Chasing that dragon was sobering, because I had spent the night or two before listening to my Koss KPH30ci on this amp. Again, at first I thought the characteristics were crap, but I just sat back and let the sound take hold. It became beautiful, enough to make me want that again for a couple of days.

I guess my point is that we need to not always chase the dragon, but rather let it come to us. There are always different tendencies in this hobby, and I look forward to the day that I know a headphone and supply chain so well that I want to EQ it. I just have too much audio ADHD and lack the habits that are conducive to being this analytical. But I want to change that at times.

I don’t think PGGB reduces the bits. If anything, being the best Nyquist filter available, it will squeeze more information out of the bits, not less.

Yes. I apologize for any confusion on that.

My comment re: the reduction in bits was mostly related to changes in volume. When you lower volume in the digital domain using either a software-based volume control or a preamp/pre-gain filter in a digital EQ (like EAPO), you are effectively reducing both the dynamic range and the number of bits that are available for rendering/playing back your audio content. For every 6 dB of decrease in volume, you effectively lose or waste about 1 bit’s worth of resolution in your audio rendering. That doesn’t sound like much, but I’ll give you a couple of real-world examples where it could potentially be a problem.

There is a prevailing myth among audiophiles that 16 bits, 44.1 kHz is the most you really need for good-sounding audio, and that if you are using content encoded at that resolution, it’s best to keep it at that native depth and rate until it’s converted to an analog audio signal (in a DAC). As long as you’re not changing or playing around with the volume or EQ before it’s converted to an analog signal, you’re probably on fairly safe ground with those assumptions.

If you are following the “native depth/rate” assumption, though, and reducing the volume or gain of your 16/44.1 audio content on a digital device (such as a PC) that is also set to render/play back the audio at 16/44.1, then you’ll lose about half of the potential resolution of your audio content if you reduce the volume by 6 dB (or 1 bit). Here’s why… 16-bit audio has 65,536 possible levels of amplitude for each sample. If you reduce that by only 1 bit, to 15 bits, the number of possible levels for each sample is effectively cut in half, to 32,768. Drop the volume by another 6 dB and you’ve cut the number of possible amplitude levels for each sample to 1/4 of the original content, or 16,384 levels.
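To put that arithmetic in one place, here’s a throwaway Python sketch using the same rough 6-dB-per-bit rule of thumb:

```python
# Back-of-the-envelope math for "how many levels are left after digital
# attenuation", using the rough 6-dB-per-bit rule (more precisely ~6.02 dB).

def levels_after_attenuation(bit_depth, attenuation_db):
    effective_bits = bit_depth - attenuation_db / 6.0   # each 6 dB costs ~1 bit
    return 2 ** effective_bits

print(f"16-bit, no attenuation: {2**16:,} levels")
print(f"16-bit, -6 dB:  {levels_after_attenuation(16, 6):,.0f} levels")
print(f"16-bit, -12 dB: {levels_after_attenuation(16, 12):,.0f} levels")

# And the 24-bit-source scenario: content with ~16.7 million levels squeezed
# through a 16-bit output path and then attenuated another 6 dB.
source_levels = 2 ** 24
remaining = levels_after_attenuation(16, 6)
print(f"24-bit source: {source_levels:,} levels")
print(f"...via a 16-bit output at -6 dB: {remaining:,.0f} levels "
      f"(about 1/{source_levels / remaining:.0f} of the source)")
```

The last couple of lines preview the 24-bit-source scenario described below.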

Let’s look at a potentially even more problematic scenario than this. One that is also from my own personal experience.

When I first started using digital EQs, I was largely unaware of the audio settings for bit depth and sample rate on my PC, and the potential impact they might have on the sound quality of the digital content I was streaming from the web. This was the primary source for most of my music listening, btw. And by default, the audio device settings on my PC were set to 16/44.1, which was normally just fine for CDs, MP3s, and some other audio streaming services back then… provided you were not using them with a digital EQ or software-based volume control, as described above.

That’s not what I was listening to, though. I got most of my audio content from streamed YouTube video clips, whose audio layer is encoded at the standard bit depth and sample rate for digital video: 24 bits, 48 kHz. So my PC was essentially downconverting everything I listened to down to 16/44.1. And then I was further exacerbating that downconversion by reducing the volume in a software EQ.

24-bit audio has about 16.7 million possible amplitude levels for each sample. Many of those levels are probably beyond my normal hearing range, because the dynamic range of 24-bit audio is around 144 dB (24 bits x ~6 dB per bit), and I never listen at dB SPL levels that loud. In this case, though, I wasn’t just cutting the potential resolution in half. With my PC’s output already truncating that content to 16 bits, reducing the volume by another 6 dB cut the number of possible levels for each 24-bit sample to roughly 1/500th of the original (about 32,768 out of 16.7 million). And the audio settings on my PC were also reducing the sample rate from 48 to 44.1 kHz.

I didn’t learn how to rectify that from the folks on Head-Fi, because the prevailing wisdom among most folks there was that 16/44.1 was enough, and that it was best to keep things at that rate until D/A conversion. And I’m afraid there are probably still many audiophiles who continue to cling to this belief.

The highest the audio device settings on my PC can currently go is 24/48. And switching to that rate made a subtle but noticeable (imho) improvement in the sound quality. And it was such a simple thing to do. I’ve noticed, though, that whenever my software is updated or reinstalled, the audio settings on my PC will sometimes revert back to 16/44.1 automatically. So I continue to check that setting whenever there are new updates on my system.

Not all YouTube content is encoded and streamed at the standard video rate of 24/48, btw. But that is the recommended encoding format for best sound quality. YouTube audio usually streams as a lossy codec at a variable bitrate (VBR), though, so the quality you actually receive varies and can fall short of that 24/48 source.

1 Like

See my post above. If you’re using a software-based digital EQ like Equalizer APO, set the bit depth on your PC’s audio device to something higher than 16 bits. Depending on the type of content you listen to, you may also want to use a higher sample rate: at least 48 kHz if you listen to a lot of YouTube content, and possibly higher if you want better compatibility with both 44.1 and 48 kHz content and don’t want to keep switching between the two.

There are some potential downsides to using higher bit depths and sample rates, btw. The most obvious is latency, so that might be something to consider if you do a lot of gaming. You want the settings that are optimal for your content, gear, and applications/use cases. Don’t blindly follow suggestions for keeping things “native” unless they fit your specific needs, applications, and circumstances.

1 Like

This reads like a ChatGPT response. So, let’s see what ChatGPT says about it. That’s not to say I put stock in what ChatGPT says, but it’ll be a fun experiment.

Do you really “lose” bits?

This is where nuance matters:

  • If you attenuate in 16-bit fixed-point, then yes, you are effectively wasting resolution because the quiet signal now sits in fewer significant bits.

  • But in 32-bit floating-point processing (common in modern audio engines, DAWs, DSP like Equalizer APO), attenuation does not cause bit loss in the same way. Floating point has a huge dynamic range, so reducing gain mostly just adjusts the exponent; the mantissa’s resolution is preserved until final conversion.

  • At the DAC output stage, if you down-convert to 16-bit, the final available resolution depends on the bit depth of the signal going into the DAC—not the internal DSP math. With dither, even attenuated 16-bit material can preserve detail below the LSB threshold.
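A quick toy sketch of that fixed-point vs. floating-point difference, for anyone who wants to see numbers; it’s purely illustrative and not a claim about what any particular player does internally:

```python
# Toy demo: attenuate by 12 dB and bring the level back up, once with the
# signal stored as 16-bit integers and once in 32-bit float, then compare
# how much damage each path did to the original.
import numpy as np

fs = 48000
x = 0.5 * np.sin(2 * np.pi * 1000 * np.arange(fs) / fs)   # 1 kHz tone at -6 dBFS

gain = 10 ** (-12 / 20)                                    # -12 dB of attenuation

# Path 1: attenuate while stored as int16 (rounding error lands at the quiet level).
x16 = np.round(x * 32767).astype(np.int16)
att16 = np.round(x16 * gain).astype(np.int16)
back16 = att16.astype(np.float64) / gain / 32767

# Path 2: attenuate in 32-bit float, then undo it.
x32 = x.astype(np.float32)
back32 = ((x32 * np.float32(gain)) / np.float32(gain)).astype(np.float64)

def err_db(y):
    # RMS error relative to the original signal, in dB
    return 20 * np.log10(np.sqrt(np.mean((y - x) ** 2)) / np.sqrt(np.mean(x ** 2)))

print(f"int16 path error:   {err_db(back16):6.1f} dB relative to the signal")
print(f"float32 path error: {err_db(back32):6.1f} dB relative to the signal")
```

The int16 path bakes its rounding error in at the attenuated level and then amplifies it back up; the float path keeps essentially all of the original precision until the final conversion.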

2 Likes