Soundstage Is (Much) More Complicated Than You Think

I continue to press for a firm and clear distinction in discussions between (1) the “subjective” and (2) the psychoacoustic, because these are very different things.

The audio “subjective” camp is associated with casual observations, weird analyses, poetic writing, arbitrary personal preferences, disguised biases (as you say, paying more for gear can equate to liking it more), and whatnot. If one describes sound quality as “romantic” or “cloying” or “musical,” one likely falls into this camp. If one focuses on build quality and style during non-blind listening, one may fall into this camp too.

Psychoacoustics is a branch of the biological and cognitive sciences. While it is certainly less precise than the hard sciences (e.g., physics, chemistry, astronomy), these fields are still highly ordered and predictable. Most bio-psych measurements fall onto a bell curve (normal distribution) across people, whereby most people are similar and a few are distinct outliers. To compare analytical precision to flowers: if chemistry is a field of cookie-cutter daisies or poppies, humans are a field of roses. There are red ones, yellow ones, white ones, pink ones, ones with many petals and others with just five, but they generally smell like roses and have thorns. They all die off in the winter and grow back in the spring. Some human subpopulations with similar biology/genetics may have largely similar audio preferences too, and that’s a fully testable idea.
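A minimal sketch of that bell-curve point, with invented numbers purely for illustration (the “preferred bass boost” variable and its mean/spread are hypothetical, not from any study):

```python
import numpy as np

# Simulate a hypothetical psychoacoustic measurement (preferred bass boost in dB)
# across 10,000 listeners; the mean and spread here are invented for illustration.
rng = np.random.default_rng(0)
preferred_bass_db = rng.normal(loc=6.0, scale=2.0, size=10_000)

mean = preferred_bass_db.mean()
sd = preferred_bass_db.std()
share_within_2sd = np.mean(np.abs(preferred_bass_db - mean) < 2 * sd)

print(f"mean = {mean:.1f} dB, sd = {sd:.1f} dB")
print(f"{share_within_2sd:.1%} of simulated listeners fall within 2 SD of the mean")  # roughly 95%
```

Most simulated listeners cluster near the middle, with a few genuine outliers in the tails, which is the “field of roses” picture in numbers.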

Beyond the bell curves of hearing and perception, humans are highly affected by their lifecycles. We learn a lot as we grow, and development in each perceptual and cognitive domain differs. Small children may have perfect 20/20 biological vision but not understand the metaphorical meaning of a political cartoon until they become experienced adults. Writers can improve their skills into their 50s or even 60s, even though some teenagers have very large vocabularies. As such, the ability to hear musical qualities could continue to improve to some point between ages 18 and 60, and this is testable too.

Aging also harms perception: general cognitive problem solving (e.g., executive function) starts to decline around age 50, and the decline steepens by age 60 to 65 (i.e., the age bracket of many big spenders on audio equipment). Dementia becomes more common, and older adults are known to enter a “second childhood” of impulses and pleasure. At the same time, high-frequency hearing falls off, so people tend to lose the ability to discern what makes any given piece of gear different from the next. Audio shows thereby have many bright and piercing demo systems.

I fear that most audio hobbyists gravitate to easy but largely irrelevant black-and-white answers (i.e., electrical test gear, commonly known as “objective” measurements), when the primary and most important distinguishing factors are psychoacoustic and fuzzier. But these are still highly predictable, and they constitute a third way that involves neither the standard usage of “objective” nor “subjective.”

2 Likes

I agree with this, actually. It’s not an “anything goes” kind of thing, and the same is largely true of human perception generally. However, priming is certainly a relevant part of psychoacoustics. This goes back to the very reasons why DF is the relevant sound field for headphones, given that in that use condition the sound comes from the “sound helmet” and not from any particular direction or location. So with headphones we don’t have priming for direction or locatedness the way we do with speakers, or the way we do when naturally hearing the world. My contention is that the way your attention interacts with a given perception is similar in kind, as it can meaningfully impact the experience. That part is far more difficult to pin down or establish as universal.

1 Like

This post is what got the essay to make sense for me.

Thank you – very interesting and informative.

Ok, I just read this article (thanks for a great article and the inspiration) and thoroughly enjoyed it but I also feel like it is perhaps missing some elements about what could possibly also affect this subject called ‘soundstage’.

From what I got out of it and agree with is that:

  1. Different frequencies, and how loudly the specific headphone produces them, can affect ‘soundstage’ via the perception of depth, as can the mixing/mastering engineer’s choices about which tracks in the mix are louder and how they vary in frequency content.
  2. Differences in width can also be happening at the same time via L/R stereo panning, in conjunction with frequency response ‘variation’ (see the pan-law sketch just below).
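A minimal sketch of the width mechanism in item 2, assuming a standard constant-power pan law (the mapping and the test positions are just illustrative):

```python
import math

def constant_power_pan(position: float) -> tuple[float, float]:
    """Map a pan position in [-1, 1] (hard left to hard right) to (L, R) gains."""
    theta = (position + 1) * math.pi / 4  # [-1, 1] -> [0, pi/2]
    return math.cos(theta), math.sin(theta)

for pos in (-1.0, -0.5, 0.0, 0.5, 1.0):
    left, right = constant_power_pan(pos)
    print(f"pan {pos:+.1f}: L={left:.3f} R={right:.3f} (L^2 + R^2 = {left**2 + right**2:.2f})")
```

The resulting level difference between the left and right channels is one of the main cues the brain reads as lateral position, which is why panning decisions in the mix contribute so much to perceived width.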

Just putting both of those elements together while you listen, your brain is probably spinning, either from the pleasure of all that happening at once or because it’s perplexing and complex, depending on who you are and whether you enjoy that or not.

But I got to the end of that part of the article and all of a sudden I was at the ‘conclusion’. And I was like, but wait, isn’t there more than that to all of this??
Because part of the reason I was reading the article was trying to figure out what new headphone/s to purchase. And there was nothing mentioned about any of the elements below and how they may or may not affect or be affected by ‘soundstage’.

Technical ability: I only have two headphones thus far, Sundara 2020, and 6XX (I like to think that the XX makes that headphone female for some reason), and I have tried my best to match their frequency responses as spot on as I could to ‘make them sound the same’, and then compare the differences/similarities.
So that brings me to the question of technical ability. To me, the Sundara just absolutely blows away the 6XX when it comes to things like the precision of ‘imaging’ and stereo separation, but also the actual ‘separation of tracks’ within the song’s recorded mix/master. So my question is: how does the ‘technical ability’ of a headphone affect ‘soundstage’? I suppose I am referring to things like speed, the ability for a sound to finish at the proper moment and not keep ‘ringing’ in the ear of the listener, and technical precision. In my perception, the Sundara was by far better at all of that. But it also became a bit distracting, and sometimes takes the ‘music’ away from the ‘music’.
With the Sundara on the Apogee DAC/Asgard 2 amp I can almost pinpoint every single instrument/voice in the mix; it almost becomes TOO separated, if that is possible. And this is all ‘within the soundstage’, because of all the elements of ‘depth’, ‘width’, and tonal differences of frequency response. Whereas with the 6XX on the same amp, there is absolutely for sure (for me) less PRECISION of all of those elements, and therefore possibly ‘less soundstage’? To note, though, on that configuration the 6XX also has quite a bit of ‘soundstage’ information, it just seems way less PRECISE. It’s almost as if the 6XX is the surfer/musician sitting in the back of the class daydreaming, while the Sundara is the brown-nosed nerd sitting right in the front of class. I’m sure there are other analogies; your mileage may vary along with the quality of your mother’s meatloaf.
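A minimal sketch of the frequency-response matching exercise described above: interpolate two measured responses onto a common log-spaced grid and take the dB difference, which is the correction you would EQ into one headphone to approach the other. The numbers below are made-up placeholders, not real Sundara or 6XX data:

```python
import numpy as np

# Hypothetical coarse frequency responses (Hz, dB SPL) for two headphones.
freqs = np.array([20, 100, 1000, 3000, 8000, 16000], dtype=float)
level_a = np.array([92.0, 95.0, 94.0, 99.0, 90.0, 85.0])   # "headphone A" (placeholder)
level_b = np.array([90.0, 94.0, 94.0, 97.0, 93.0, 88.0])   # "headphone B" (placeholder)

# Interpolate both onto a common log-spaced grid, then take the difference:
# the per-frequency dB correction to apply to A to approach B.
grid = np.geomspace(20, 16000, 9)
a_grid = np.interp(np.log10(grid), np.log10(freqs), level_a)
b_grid = np.interp(np.log10(grid), np.log10(freqs), level_b)
correction_db = b_grid - a_grid

for f, c in zip(grid, correction_db):
    print(f"{f:8.0f} Hz: {c:+.1f} dB")
```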

Driver technology/superiority: How would different drivers/age of the drivers/age of the technology/superiority of the drivers all affect ‘soundstage’? Do dynamic drivers provide ‘less soundstage’, while planar magnetic drivers provide ‘more soundstage’, while electrostatic drivers provide ‘even more soundstage’, while ribbon drivers provide who knows what the frick?

Clarity/resolution: If clarity and resolution can be compared to how much zoom and precision your glasses have for you to be able to see something, how would those elements affect ‘soundstage’? Is ‘soundstage’ at its best when it is totally blurry at 5 am after you just got up to go pee and are still technically half asleep in a semi dream state, or is it at its best while you are deadlifting at 2 pm after preworkout and 2 shots of espresso? (I like to think of my 6XX as the sleek gazelle 22-year-old woman who can cook an absolute 5-star meal with a perfect wine pairing, with just the right amount of your mom’s meatloaf in the bass/midbass, and the Sundara kind of like a little skinny guy with glasses who has calculated the exact math on exactly how many decibels the splash-cymbal reverb-only track has been panned to at ~-22L). Which of those two = ‘better soundstage’?

Size of drivers/headphone resonating area/space: Ok, this is the one that I thought for sure I would see in this article, because I see or hear so many people talking about their giant egg shaped drivers and how TALL their soundstage is!!! Does the size of the actual driver and how big your ear is and how big it fits on your ear, and how big all the resonating areas of the headphone are, affect how big or tall or wide the soundstage is? I am looking right at you 800S and HE1000 or Arya or Edition XS, etc. Does bigger/taller/wider/thicker equal more soundstage, or does that only matter to HER, in bed?
What makes ‘soundstage’ wider than a different other headphone, or taller or smaller than yet another different headphone?
Otherwise known as: why does the 6XX seem so closed in (small sound booth, ‘3 blob’, or whatever), and the 800S a large round circle? (I don’t have the 800S yet, but I am considering it.)

This also brings up the question: do the higher frequencies usually create a perception of more or larger ‘soundstage’, or the lower ones, or both?
It was already discussed how dipping the lower mids can create more ‘soundstage’. But it wasn’t discussed whether a headphone with less high-frequency information, like the 6XX, would seem smaller in general with regard to ‘soundstage’, or whether a headphone like the 800S, with much more treble response, is going to seem ‘bigger’.
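A minimal sketch of the “dip the lower mids” idea as a peaking EQ filter, using the standard RBJ cookbook formulas; the centre frequency, Q, and gain below are illustrative choices, not a recommended target:

```python
import numpy as np
from scipy.signal import lfilter

def peaking_biquad(fs: float, f0: float, gain_db: float, q: float):
    """RBJ-cookbook peaking EQ: returns normalized (b, a) biquad coefficients."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

fs = 48_000
b, a = peaking_biquad(fs, f0=300, gain_db=-3.0, q=1.0)   # hypothetical 3 dB lower-mid dip

noise = np.random.default_rng(0).standard_normal(fs)      # 1 s of test noise
dipped = lfilter(b, a, noise)                              # the signal with the dip applied
```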

Which also leads me to this question: if you take the 6XX and 800S and EQ them to the exact same neutral KEMAR diffuse-field target, would they both then have the same/similar ‘soundstage’, or would the 800S still seem ‘bigger/wider’?

Source differences like amplifiers, DACs, sound cards, computer software: Of course, I figured that this topic would come up, since when I got my Asgard 2 amp I was amazed at how much new ‘space’ I was hearing/perceiving. So then the question(s): how does the amplifier and pairing affect ‘soundstage’, and how is it achieving this? (My guess is probably through technical sonic elements similar to those discussed in the article.) In other words, is an amplifier affecting frequency response in any way, or is its power fluctuating or oscillating back and forth in the same way that a chorus effect might be put on a guitar, etc.? Or how much can POWER affect ‘soundstage’, because if it’s louder and has more power it’s always better/bigger, right? (Insert the Marty McFly opening scene of you all know what.)
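On the power question specifically, a minimal back-of-the-envelope sketch (the 103 dB/mW sensitivity figure is hypothetical, just to show the arithmetic):

```python
import math

def db_change(p1_mw: float, p2_mw: float) -> float:
    """Level change in dB when going from power p1 to power p2."""
    return 10 * math.log10(p2_mw / p1_mw)

def approx_spl(sensitivity_db_1mw: float, power_mw: float) -> float:
    """Rough SPL estimate from a dB-SPL-at-1-mW sensitivity spec and drive power."""
    return sensitivity_db_1mw + 10 * math.log10(power_mw)

print(f"double the power:    {db_change(10, 20):+.1f} dB")   # about +3 dB
print(f"ten times the power: {db_change(10, 100):+.1f} dB")  # +10 dB
print(f"hypothetical 103 dB/mW headphone at 50 mW: {approx_spl(103, 50):.0f} dB SPL")
```

So raw power headroom mostly buys level; any change in perceived ‘soundstage’ from an amp would likely have to come through something else, such as frequency response changes or simply listening louder.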

Dolby Atmos, or for example 5.1 surround sound, also wasn’t mentioned, nor how that type of information in the mix/master/soundtrack would affect ‘soundstage’. Adding binaural elements to the already insane combination of depth and width plus like 100 tracks of music in one 3-minute song. Is binaural already a ‘combined concept’ of the ‘soundstage’ that we are already hearing, and how much more or less does it affect it?

Also, why would a certain headphone seem to keep the same soundstage (like a tall rectangle, or a circle all around) that never changes, kind of like walking into a familiar concert hall, or a different one down the street?

As another aside, since I am fairly new to all of this:
To me the headphone is you and your ‘manhood’, the DAC is what translates from male to female to get her to say yes, and amplifiers are different colors of condoms, or other ‘accessories’. (pads are ‘toys’, for example)
(I realize that this kind of writing is perhaps sexist/misogynist, and that everyone including females might love to listen to high quality headphones also. I write it because I care first and foremost that it is fun/funny and as an aside to the ‘serious technical jargon’ and perhaps that it is nice to possess the freedom to write whatever I want/feel.)

Size of our ears/ear parts/ear drum/brain etc.: In the same way that a human being can have a different size and shape or other elements to their larynx, which produces a distinct voice, wouldn’t that be analogous to our ears, and how would that affect ‘soundstage’?
A voice is an element of difference that is pretty much obvious to the listener (get it, ‘listener’) in timbre, sonority, fullness, nasality, and other elements. But what one person hears, and how their ears/structure/body hear those differences, is only perceived by them.

I also equate this to HOW humans hear things differently. For example, how a 7-year-old boy might hear a very complex big band recording, with all sorts of complex harmonies/layering/multitude of timbre etc., versus how a 37-year-old female music professor might hear that same recording. Are some people just hearing a simple melody on top at a choral concert, for example, or are some people hearing every single note from every single voice AND how they all blend and harmonize together?

My guess is that how and what we hear and the parts of our body/ears/brain are just as different as something as obvious as the different quality of each vocal larynx. We probably just don’t think about that as much, since we DON’T KNOW HOW/WHAT OTHERS ARE HEARING. And since we don’t know, we might just hypothesize that others are hearing things pretty much the same as we are, since visually their ears appear to be kind of similar to ours, and since they are human and we are human. (and let’s not even get into what the folks at ‘Rtings’ might be hearing, ha)
(And speaking of ‘dogs’, let’s not even get into dogs, who can absolutely hear WAY more shit than we can. And yeah, that’s my new billion-dollar California business idea, do not steal it from me, ha: ‘Headphones for Dogs’. Because if you think you enjoy music through headphones, just imagine how much your dog will enjoy it. And I am only half joking here. Dogs go absolutely crazy for any and all sounds. If you do make a headphone for dogs, though, they for sure won’t need any amplifiers. They can hear a Modhouse Tungsten through your mama’s fat meatloaf (their mileage may vary, if they know they know). But that’s a whole other essay/article/research paper for next time, boys and girls, damas y caballeros, ladies and gentlemen, ye lads and lasses.)

Angle of the drivers within the headphone (looking at you, 800S): I have heard that this can also affect things like ‘soundstage’. Changing the position of the sound source, I’m sure, can be perceived as ‘different’, ‘spacious’, ‘farther away’, ‘muted’, etc.

So, yeah, I thought there would be more to the article, because I want to buy some new headphones and I just want/need all the information I can get.

Thanks for letting me ramble stream of consciousness based on inspiration from this article.

Another thing came to my mind when I was reading all this and some of the responses:

When I was little, the speakers that we had all had a little tweeter, a medium-sized midrange driver, and a slightly bigger ‘woofer’. So when you had two stereo ‘speakers’ you were actually listening to six drivers, where the three on each side worked as a ‘team’ to create a blended full-range sound.

And then when I put on my Sony Walkman with the orange padded headphones that came with it, I was like ‘hey why don’t these have three speakers on each side?’ Having only one little mini speaker per side I kind of felt a bit forlorn.

So, is it not possible to make a headphone that has three drivers built into each side, or perhaps a hidden subwoofer hanging around the back/inside of the cup of, say, an 800S? (And hey, maybe then that headphone would actually have some bass, because as it stands you have to simultaneously play a real subwoofer in the room to get some sub-bass.) I have done that with my Hifiman headphones that are ‘extremely’ open. It did sound kind of cool, but I think the sync was just slightly off. Just slightly off enough to sound like a flanger. Man, ‘flanger’, why does that just sound kind of dirty?
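For what it’s worth, a minimal sketch of what the ‘three speakers per side’ idea implies: a crossover that splits the signal into bass/mid/treble bands so each driver only reproduces its range. The 300 Hz and 3 kHz crossover points are illustrative, not taken from any real product. (The flanger-like effect described above is essentially what you get when two copies of the same signal play with a small, slightly varying delay between them.)

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 48_000
# Band-splitting filters feeding the woofer, midrange, and tweeter of a 3-way design.
low_sos = butter(4, 300, btype="lowpass", fs=fs, output="sos")
mid_sos = butter(4, [300, 3000], btype="bandpass", fs=fs, output="sos")
high_sos = butter(4, 3000, btype="highpass", fs=fs, output="sos")

signal = np.random.default_rng(0).standard_normal(fs)  # 1 s of test noise
woofer = sosfilt(low_sos, signal)
midrange = sosfilt(mid_sos, signal)
tweeter = sosfilt(high_sos, signal)
```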

In any case, I am very impressed by everything that is going on in here. You guys are experienced madmen of the best kind. Honestly, it kind of reminds me of Star Wars or something. I really want a Star Wars 800S. Built with the exact colors of a stormtrooper.
Toodles

@VJJ you raised a lot of questions, lots of good questions. I think it’s unfortunate that there isn’t a more authoritative textbook purely about headphones which addresses a lot of the common audiophile questions and misconceptions. Forums are better than nothing, but information is often hearsay, unverified, and obviously scattered all over the place.

I would recommend reading some textbooks, such as ones by Toole, to get a better understanding of how transducers work: Sound Reproduction: The Acoustics and Psychoacoustics of Loudspeakers and Rooms by Floyd E. Toole (Audio Engineering Society).

And this:

That’ll shed some light on questions about driver “speed”, technology, and measurements. I’ll leave the detailed and correct responses to others more qualified and eloquent than me. Let’s just say that you’ll find a lot of your answers by finding out what the following mean: headphone acoustic impedance, minimum-phase systems, distortion measurements and audibility, and the upper limit of a transducer’s frequency response.

If you don’t want to read textbooks, I have some papers you might find interesting:
The Measurement and Calibration of Sound Reproducing Systems:

Listener Preferences for High-Frequency Response of Insert Headphones

Personalized and Self-Adapting Headphone Equalization Using Near Field Response

Modelling Perceptual Characteristics of Prototype Headphones

A Comparison of Sensory Profiles of Headphones Using Real Devices and HATS Recordings

Perceptually Robust Headphone Equalization for Binaural Reproduction:
https://www.researchgate.net/publication/226629941_Perceptually_Robust_Headphone_Equalization_for_Binaural_Reproduction

A Study of Listener Bass and Loudness Preferences over Loudspeakers and Headphones

One main point which you’ll certainly come across, and could get confused about, is “everything is just sound waves (frequency response) at the ear drums”. While it might seem obvious that we hear with our ears, we haven’t seen much research using direct measurements at the ear drums, especially not in terms of audiophiles’ concerns of “sound stage”, “attack”, and “punch”. Other factors, such as activation of the skin outside of the ears by headphones, could potentially play a role in how we perceive sound. All of the above requires more research, and there are currently no clear answers.

Which brings us to one interesting point about sound quality research. You’ll basically NEVER find any papers or researchers talking about “sound stage”, “attack”, or “dynamics”. These are all audiophile terms which are ill-defined at best. Some headphone designers will talk about the feeling of spaciousness, and that is one term which is better defined and understood. It’s got to do with low acoustic impedance and large driver size.

One last point on my wall of text in response to your wall of text: sound stage on headphones while playing back stereo content is basically just an illusion. Binaural recordings, and playback with customized HRTF EQ, are what would give realistic localization effects.
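A minimal sketch of that binaural signal flow, just to make the idea concrete: take a mono source and convolve it with a left-ear and a right-ear head-related impulse response (HRIR) for the desired direction. Real HRIRs come from measured datasets (per-listener or dummy-head); the two short impulse responses below are made-up placeholders, not real measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
mono_source = rng.standard_normal(48_000)               # 1 s of test signal at 48 kHz

# Placeholder impulse responses standing in for measured L/R HRIRs of one direction.
hrir_left = np.array([0.0, 0.9, 0.3, 0.1])
hrir_right = np.array([0.0, 0.0, 0.0, 0.5, 0.2, 0.1])   # later and quieter: source off to the left

left_ear = np.convolve(mono_source, hrir_left)[: len(mono_source)]
right_ear = np.convolve(mono_source, hrir_right)[: len(mono_source)]
binaural = np.stack([left_ear, right_ear], axis=-1)     # 2-channel signal for headphone playback
```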

2 Likes

This does raise the question about what the heck ATMOS is messing with, and the wild claims it makes for precise localization, including perceived up and down in addition to front, back, and left-right. I can sort of see it in a room with speakers, but I hear something going on in the demo tracks using hardware like Apple AirPods Pro 2 on test tracks played through my phone. My brain seems to process this as “hmmm. some sort of gimmick processing going on” and not as precise 3D imaging. But who knows, maybe they have convinced people that they are hearing precise 3D imaging.

Thank you for the updated reading list. I may look at it when and if Benadryl 50mg doesn’t work. I actually used to read that kind of stuff. There’s more on psychoacoustics now.

lol funny

IIRC Apple uses a few generic HRTF profiles and the user’s supposed to scan their ears with the phone so the app will match one of the profiles and apply EQ accordingly?

Probably why ATMOS kind of works but not in a completely convincing way.

1 Like

After lots of experimenting, I found that by far the two biggest factors that affect my psychoacoustic perception of “soundstage” and instrument separation are ear pad space and clamp force. If something has hugely spacious cups, especially when they are deep and tall, and the clamp is light, other factors end up mattering much less.

2 Likes

Thought it was relevant enough to add here: I just found a nice collection of citations on a component of the spaciousness effect that occurs specifically in the sub-bass and is detectable down to 40 Hz despite our commonly quoted inability to localize sounds below 90-ish Hz: Bass and subwoofers | Audio Science Review (ASR) Forum

One wonders if current headphone targets are even informed by this, as I notice a few of these articles claim that bass EQ’d for flatness is detrimental to this spatial effect.

1 Like