Tuning EQ to your personal HRTF

Thanks for your answer. :slight_smile:

The Griesinger procedure is this video?

Yep, that would be the one, though he had a newer Windows app made around 2019-2020 that’s presented in the latest video on that channel.

I created a website that tries to let you do this exact thing! - cabinaudio.com

The point of it is to stretch out the soundstage, changing your spatial perception of what you’re listening to.

1 Like

Can you explain how this works theoretically? I’m not sure what your methodology is based on and why the “higher pitched” sounds are supposed to sound “higher elevation”, and what EQ settings are supposed to be used to achieve those results. And even if those results could be managed with the sample sounds, how that affects tonality of normal stereo music recordings and voices.

All really great questions. I’ll give a shot at answering them, and please let me know any follow up questions you have.

How does this work theoretically?

The main idea is that we want to create separation of instruments, a large soundstage, and consistent imaging. We do this by changing the frequency response of the sound with EQ.

This works because the primary cues in localization are frequency response, ITD (interaural time difference), and ILD (interaural level difference). When we change the frequency response, we change the localization.

Because the frequency response of the sample sounds resemble the frequency response of individual sounds in music, the localization of the sample sounds is similar to the localization of instruments in the music. The sample sounds make it easy to hear how EQ changes this localization, which lets us effectively change the soundstage/imaging.
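For context on the size of the ITD cue mentioned above (which a shared EQ leaves untouched, since EQ only alters the frequency response), here is a minimal Python sketch of the classic Woodworth spherical-head approximation. The head radius and speed of sound are nominal textbook values, not anything from our tool:

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Approximate interaural time difference (seconds) for a
    spherical head: ITD = (r / c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

# A source 90 degrees to the side gives roughly 0.66 ms of delay,
# near the commonly cited maximum human ITD.
print(woodworth_itd(90))
```

ITD and ILD dominate left-right localization; the frequency-response (spectral) cues that EQ can manipulate are what drive elevation and front-back perception.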

It’s definitely not perfect and we’re looking for ways to improve it.

What is your methodology based on?

Our methodology is based on the idea of using your own perception of sound to tune headphones. Originally, we tried to use perception of loudness to do this, but it never worked. Loudness is a very unreliable perception because our brain is constantly trying to subtract our HRTF from what we’re hearing. Eventually we came up with the idea to use your spatial perception of sound instead, with the hopes of improving soundstage, imaging, and separation of instruments directly.

Why do “higher pitched” sounds sound “higher elevation”?

This is a psychological/evolutionary bias our hearing system has - since we wanted to create spatial separation, we rolled with it.

Why auditory pitch and spatial elevation get high together: Shape of human ear may have evolved to mirror acoustics in natural world | ScienceDaily

What EQ changes achieve these results?

Generally, intense V-shaped curves make lower pitched instruments sound lower and higher pitched instruments sound higher. This might be because they literally make bass-heavy sounds more bass-heavy, and treble-heavy sounds more treble-heavy, exaggerating the psychological bias for low pitches to sound lower and high pitches to sound higher. Additionally, dips and peaks that match our HRTF can create these results.

Of course, you’ll notice a problem with this: if we replicate the HRTF at one elevation, then all of the instruments will sound like they are at that elevation! Well, that’s exactly the problem with small speakers and headphones: your HRTF causes all of the instruments to coalesce at the driver. Our idea is that if you undo the HRTF, you can spread out the image again, which might require strange-looking peaks and dips.
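As a rough illustration of what an "intense V-shaped curve" means in EQ terms, here is a sketch that evaluates the magnitude response of standard RBJ-cookbook peaking filters. The centre frequencies, gains, and Qs are made-up examples, not recommended settings:

```python
import cmath
import math

def peaking_eq_mag_db(f, f0, gain_db, q, fs=48000):
    """Magnitude response (dB) at frequency f of an RBJ-cookbook
    peaking biquad centred on f0 with the given gain and Q."""
    a = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * a, -2 * math.cos(w0), 1 - alpha * a]
    d = [1 + alpha / a, -2 * math.cos(w0), 1 - alpha / a]
    z = cmath.exp(2j * math.pi * f / fs)
    h = (b[0] + b[1] / z + b[2] / z**2) / (d[0] + d[1] / z + d[2] / z**2)
    return 20 * math.log10(abs(h))

# A crude "V": a bass peak plus a treble peak (hypothetical settings).
def v_curve(f):
    return peaking_eq_mag_db(f, 60, 6, 0.7) + peaking_eq_mag_db(f, 8000, 6, 0.7)
```

Summing a low peak and a high peak produces the characteristic V: boosted at the frequency extremes, recessed through the mids.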

How does this affect tonality?

We’re honestly not that sure. But it’s important to remember that our brain is constantly subtracting our HRTF from what we hear. This means tonality is what we hear minus our HRTF. In other words, it’s spatially conditioned: our brain is used to adapting what we hear based on where it thinks it’s coming from. However, in headphones there is no ITD to match panning, the sound is spread over a much wider set of angles than with speakers (which matters because of the HRTF), and there is no sense of distance. I’m not saying that we fix tonality, but tonality in headphones is a confusing concept in the first place because direction in headphones is unrealistic.

2 Likes

I happened to come across Genelec’s Aural ID, which seems to be a system for calculating your HRTF based on a 3D scan of your head: https://www.genelec.com/aural-id

What do we know about this? When will this be available?

I found an online tool that seems to build an EQ after finding your threshold of hearing at various frequencies, so I guess a kind of equal loudness test?

It plays a series of tones for you at different volumes and all you have to do is tell it whether you could hear the tone or not. At the end, you tell it which target you’re after and it spits out EQ values. It takes less than 5 minutes.

The tool is called EQPass and it’s by HosnLS. The interface is Chinese, but the project is open-source and MIT-licensed, so I forked it and did an automatic English translation, available here.

I tried it on my Etymotic ER4SR with custom vinyl tips and the Harman target. It sounds better to me than 5128-based AutoEQ to Harman target, no EQ and various EQ presets that were present in PowerAmp (e.g. bass boost or bass + treble). I haven’t yet tried the Griesinger procedure nor tried to train my ears to detect peaks and troughs in frequency sweeps.

I’m really curious what more experienced folks think of this.

In particular:

  1. If you had to manually make a 10-band graphic EQ to correct a pair of headphones to one of the supported targets, does this tool produce a similar result?
  2. If you identify the peaks to target in advance then enter them into the “Custom EQ” setting when starting the tool, does it produce a similar magnitude in its corrections to what you’d establish manually yourself?
2 Likes

Does it though? I did the 31 and 15-band versions, and the dB values I’m getting in the bottom table are far from being the same deltas as on the graph between my HpTF and the chosen target. I also notice the top and bottom frequency scales are misaligned in both cases across most of the bass and low mids.
Later edit:
Ah, in the 10-band variant if I account for the graph misalignment and just read the values from the separate cursor labels at the top and bottom as I’m moving my mouse over the graph, and take care to compare dB values for the right frequencies, yeah, it looks like they’re all simple delta values between the two curves. But that’s not the case with the 15 and 31-band scenarios, those I think are just calculated wrong.

Anyway, if the point is to simply give you the delta dB at each frequency between your HpTF (which does include information about your ears, cochlea etc.) and some population-average target (which doesn’t), applying those values with EQ should de-personalize the response for you, no? In fact I think it moves it away from personalization even more than simply applying the Oratory curve that corresponds to those headphones, since that one was at least measured on some generic head and thus should match the population-average-ness of the target curves better.
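If the tool really is just taking per-band deltas, the arithmetic is trivial. A sketch with made-up numbers (the function and the values are illustrative, not the tool's actual code):

```python
def band_corrections(hptf_db, target_db):
    """Per-band graphic-EQ gain = target level minus measured level
    at that band's centre frequency (both in dB)."""
    return {f: round(target_db[f] - hptf_db[f], 1) for f in target_db}

# Hypothetical measured HpTF and target levels (dB):
hptf = {100: -3.0, 1000: 0.0, 3000: 4.0}
target = {100: 2.0, 1000: 0.0, 3000: 0.0}
print(band_corrections(hptf, target))  # {100: 5.0, 1000: 0.0, 3000: -4.0}
```

Boost where you measure below target, cut where you measure above it; any per-band error in either curve lands directly in the correction.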

So in my view, over multiple users and headphones this should perform statistically worse than applying the headphone-specific Oratory curve, and far worse than the Griesinger method.

Ah, and it’s also unclear whether it applies any correction to account for the different shape of the equal-loudness curve at normal listening level vs. at the threshold of hearing (where the actual measurement is performed). Those don’t look the same down low, so with this method the result should come out overly bassy (though this error might not jump out at you if you’re not doing this with very sub-bass-extended headphones).

1 Like

I gave up some time ago trying to apply my own EQ. It’s like the Ohio State football coach said about passing the football. Three things can happen, and two of them are bad. :slight_smile:

To me, the Sonarworks software was a godsend. The curves are quite accurate, generally more so than what a typical end user would come up with. Every headphone I’ve used the curves with has only improved in overall performance. If you have a headphone you absolutely love, you can send them your favorite cans and they will generate an individual curve for your specific headphone. This accounts for the unit-to-unit variance of the headphone, as some manufacturers are better at tolerance control than others.

Now, combine the professional EQ with PGGB upscaling, and headphone listening can truly get to another level.

Or the Penn State football coach. Oh wait. What football coach? Something happened and half or all of it was bad, depending on whether you’re on the give or take side of the buyout.

Yes, I know this has zero to do with EQ, but as a diehard Penn State fan…

This season is as disappointing as ordering a STAX 9000 and receiving Hello Kitty knockoff headphones from Alibaba…

It’s been awhile since I’ve tried Sonarworks SoundID, but my main complaint was, and remains, that it doesn’t support enough of my headphones. :slightly_frowning_face:

1 Like

A valid complaint. The list of supported headphones has expanded a lot since I first got my license. I’ve run into a similar issue with my Devialet Expert Pro Speaker Active Matching function: I had to get a pair of ATC speakers that were on the supported list.

Once I realized most headphones actually NEED EQ in order to sound their best, and that accurate EQ is anything but easy to get right, that revised my thinking about what headphones I should have on hand.

I sold off headphones that were not supported and likely had no chance of getting supported. Went and obtained headphones that were supported by the software (and were highly recommended as well), and have been happier with the performance as a result. Getting the midrange right with headphones has proven to be a much bigger challenge than I would have thought when I got into this hobby. One of the headphone enthusiasts over on Head-Fi said tongue in cheek that audiophiles don’t want neutral sounding headphones. I have to say there may be some truth in that.

I’ve kept one set of closed back headphones that are not supported by Sonarworks, mostly because they sound a lot like listening to my speakers tonal wise.

1 Like

After reading this, I think I get what you’re saying: the Harman target, while often discussed as measuring preference, also implicitly incorporates the population average HRTF.

This tool measures my HpTF and my HRTF simultaneously, so if I use its measurements to correct to Harman, I end up with HRTF corrections twice: once from the tool’s measurements and once baked into the Harman target. It doesn’t really make sense to correct to the Harman target this way. I’d be better off using the HpTF + population-average HRTF correction given by AutoEQ or something similarly generic, since there’d at least be only one HRTF correction.

For this tool to be truly useful, you need a personal preference target so that you don’t end up correcting for HRTF twice.

I’ll take the time to try Griesinger’s method. For anyone who comes to this page and thinks it’s a long manual process, there’s software to help. You can find it on his page.

Sonarworks can only compensate for the headphones themselves, it can’t compensate for your personal physiology, the interaction between the headphones and your personal physiology, or your personal preference. This article shows that there are substantial differences when you put the same headphones on different heads.

I’m not doubting that you like the sound but I think it indicates that you’re a lucky match for Sonarworks’ measurements rather than Sonarworks being better than any of the alternatives, for the average user.

I hear you. The AutoEQ app with Foobar2000 has a lot of EQ profiles available, and it allows the user to tweak the EQ to their preferences.

Yep, my first thought was I would need to run it twice and get 2 curves based on my own ears: the HpTF for headphone X, and the calibrated-stereo-speakers-in-a-good-room based HRTF as a reference. Then I’d subtract one from the other and basically get a Griesinger (stereo-mod) EQ. :grin:

Except potentially worse precision because it’s using tones instead of noise bands, also potential tilt error due to being based on hearing threshold values instead of music-listening level values (though maybe those would just subtract out since the same errors would be present in both curves this time).
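The hoped-for cancellation is easy to show. In this sketch (all names and numbers hypothetical), any error term common to both threshold measurements drops out of the difference:

```python
def stereo_mod_eq(speaker_ref_db, headphone_db):
    """Per-band EQ = speaker-reference threshold curve minus the
    headphone threshold curve, both measured on the same ears (dB)."""
    return {f: speaker_ref_db[f] - headphone_db[f] for f in headphone_db}

# Hypothetical threshold curves and a shared measurement error (dB):
hp = {125: 10.0, 1000: 5.0, 8000: 12.0}
sp = {125: 8.0, 1000: 5.0, 8000: 9.0}
tilt = {125: 6.0, 1000: 0.0, 8000: -2.0}  # same error in both runs

biased_hp = {f: hp[f] + tilt[f] for f in hp}
biased_sp = {f: sp[f] + tilt[f] for f in sp}
# The common tilt cancels in the difference:
assert stereo_mod_eq(sp, hp) == stereo_mod_eq(biased_sp, biased_hp)
```

This only holds for errors that really are identical in both runs; anything that differs between the speaker and headphone measurements stays in the result.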

Unfortunately in this case I would only need the measured values, not the calculated deltas, and the tool isn’t showing the measured values anywhere I can easily copy-paste them, I’d have to go through the graph frequency by frequency and write them down manually. :roll_eyes: Then I’d have to calculate the Q values for the EQ peak filters on my own as well, since it’s not providing those either. Could end up doing just as much work as with the DG Sonic Focus thingy, just to get a worse approximation of the same EQ profile. :person_shrugging:

I found an HRTF measurement project called the XR-based HRTF Measurement System that I think represents probably the pinnacle of what can be achieved at home.

The project uses the Meta Quest 2 VR headset as the basis for a much more precise and accessible version of the “chair method” that @Resolve mentioned above.

FYI, in case anyone here doesn’t know, the chair method, or any measurement-based HRTF method, is accomplished by using blocked canal mics to measure speakers that have been evenly distributed in a sphere around the listener’s head. Those speaker measurements are then all averaged together to obtain the diffuse field head-related transfer function.

A common way to attempt it at home is to sit in a swivel chair and rotate one’s head to specific angles relative to a speaker. I guess the obvious issue with this is that it’s labor intensive and imprecise. Basically, it’s super janky. This project addresses all that jankiness by using the Quest 2’s super precise 6DOF tracking system and automatically guiding the user through all the necessary measurements. The exact angle and distance to the speaker in each instance is then recorded along with the speaker’s impulse response.

After that, the measurements are read into a MATLAB post-processing script. The script makes geometric adjustments to the measurements, accounts for the headset’s acoustic shadow, windows all the IRs, and adds an empirically based low-frequency extension before averaging all the measurements. The system then writes out SOFA files for spatial audio, but the MATLAB script can be interrogated to get the DFHRTF.
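For anyone wondering what "averaging all the measurements" means here: diffuse-field averaging is normally done on a power (energy) basis per frequency, not on raw dB values. A minimal Python sketch of that step (my own illustration, not the project's MATLAB code):

```python
import math

def diffuse_field_db(direction_mags_db):
    """Energy-average per-frequency magnitudes (dB) over all measured
    directions: dB -> power, arithmetic mean, back to dB."""
    n = len(direction_mags_db)
    freqs = direction_mags_db[0].keys()
    return {f: 10 * math.log10(sum(10 ** (m[f] / 10)
                                   for m in direction_mags_db) / n)
            for f in freqs}

# Three hypothetical directions measured at two frequencies (dB):
front = {1000: 0.0, 4000: 8.0}
side = {1000: 3.0, 4000: 2.0}
rear = {1000: -2.0, 4000: -4.0}
dfhrtf = diffuse_field_db([front, side, rear])
```

Note that averaging 0 dB and 6 dB this way gives about 3.96 dB rather than 3 dB, because the louder direction dominates the energy sum.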

The theoretical approach used in this project is outlined in this paper from AES 2022:

Rudzki, et al.; 2022; XR-based HRTF Measurements [PDF]; University of York, York, UK; Paper 27; Available from: https://aes2.org/publications/elibrary-page/?id=21857

The paper isn’t locked behind an AES member paywall, so you can just download it at the link.

3 Likes

Holy smokes… this is very interesting. Again, in my view folks are erroneously trying to match front-biased or direction-based HRTFs, but really this is what we should be chasing for headphones. It’s a challenge to do yourself because of exactly what you noted, the imprecision and jank. It’s why I’m reaching out to certain audio labs to try to get this done for my ears. So if there are ways to eliminate, or even just reduce, the imprecision, it could meaningfully lower the barrier to entry for getting actually useful human data in the appropriate sound-field conditions we want for headphones.

2 Likes

An idea I had for going further than this, to address the blocked-canal vs. eardrum FR issue: go to an audiologist who has a real-ear measurement system and have them measure your eardrum’s FR at a few angles, then measure from the same positions and angles with the blocked-canal mic to get your ear-canal transfer function. That transfer function can then be multiplied by the measured DFHRTF to get a much more realistic eardrum-based target.
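In dB terms that last step is just addition, since cascading transfer functions multiplies their linear magnitudes. A tiny sketch with hypothetical values:

```python
import math

def eardrum_target_db(dfhrtf_db, canal_tf_db):
    """Cascade two transfer functions: multiply linear magnitudes,
    which is the same as summing their dB values per frequency."""
    return {f: 20 * math.log10(10 ** (dfhrtf_db[f] / 20)
                               * 10 ** (canal_tf_db[f] / 20))
            for f in dfhrtf_db}

# Hypothetical blocked-canal DFHRTF and measured canal gain (dB):
dfhrtf = {2700: 10.0, 8000: 4.0}
canal = {2700: 5.0, 8000: -1.0}
target = eardrum_target_db(dfhrtf, canal)  # ≈ {2700: 15.0, 8000: 3.0}
```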

1 Like

This looks like fun! Let’s see if I have all of the necessary equpiment [sic]:

Getting Started: Hardware

Equpiment list:

  • XR headset :white_check_mark:
  • Loudspeaker :white_check_mark:
  • In-ear Microphones :no_entry:
  • PC with Audio Interface :white_check_mark:
  • Wifi Router :white_check_mark:

An additional requirement not listed is a copy of MATLAB software. :no_entry:

The license cost of MATLAB Home is $149. What about the cost of appropriate in-ear mics? I don’t see any recommended on the site.

1 Like

You can get a 30-day trial version of MATLAB that has full functionality. Also, I second the question about in-ear microphones. I found the Knowles microphone that Meta used for their HRTF library, but it takes a bunch of electronics know-how to set it up. Are there any layman-type recommendations for in-ear microphones?