Soundstage Is (Much) More Complicated Than You Think

Concur. Concur. Concur.

The differences between speaker and headphone soundstages include (1) the ability to place speakers in a natural forward and distant position so that most sound hits the ear at an oblique angle (matching natural live sound), and (2) the ability to blend an array of speaker drivers with different properties (most headphones use a single driver, since multi-driver headphone designs tend to fare poorly).

A good playback soundstage reproduces the fine, ephemeral echoes and positional delays of the recording environment – BUT BUT BUT – true live sound always arrives from multiple angles, with multiple reflections and subtle delays. Reproduction quality is forever limited by the resolution of the recording mic (essentially a speaker working in reverse) and the recording medium. If the recording medium and playback drivers cannot reproduce a subtle 2nd-, 3rd-, or 4th-order reflection, then the nuance of a live soundstage is lost.
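To make the "subtle delays" concrete, here's a minimal sketch (not from the post) of how small the timing differences between the direct sound and early reflections typically are. The path lengths are hypothetical, purely for illustration:

```python
# Rough illustration: arrival-time spread of early reflections in a listening
# room. Uses simple time-of-flight math; the path lengths below are made up.
SPEED_OF_SOUND = 343.0  # m/s at roughly 20 C

def delay_ms(path_length_m: float) -> float:
    """Time of flight in milliseconds for a given path length."""
    return 1000.0 * path_length_m / SPEED_OF_SOUND

paths = [
    ("direct sound",          3.0),  # m, speaker to ear (hypothetical)
    ("1st-order reflection",  4.2),  # m, one bounce off a side wall (hypothetical)
    ("2nd-order reflection",  6.8),  # m, two bounces (hypothetical)
]

direct_delay = delay_ms(paths[0][1])
for label, path in paths:
    d = delay_ms(path)
    print(f"{label}: {path:.1f} m -> {d:.2f} ms (+{d - direct_delay:.2f} ms after direct)")
```

The gaps come out to only a few milliseconds, which is exactly the kind of fine detail that gets smeared or dropped along the recording-and-playback chain.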

Headphones also sidestep the mud a room introduces with playback speakers – much of the perceived tone and timbre comes from the room's hardwood, carpet, thin walls, concrete, and other building materials. That mud may come across as a better or more natural (realistic) soundstage to some. The old Bose 901 tried to simulate a live experience with a bank of rearward-facing drivers. That was a simulation rather than a reproduction of the (often simple stereo or artificially mixed) recorded source.

Many implicit assumptions here. I generally disagree. Many things about audio become crystal clear when one produces music and hears which perceptual factors carry over from live to a recording. There's a FIERCE debate about the role of tonewood in guitars. While effectively 100% of people agree that tonewood matters a great deal for acoustic guitars, there's sharp and enduring disagreement about its relevance for electric guitars. A player can feel different vibrations, but a listener (let alone a recording) may miss all of those differences. The metal strings vibrate differently depending on thickness, construction, and age; the magnetic pickups transform the string vibrations in their own way; the amplifier transforms the signal again and may heavily distort it; the recording mic transforms it yet again or cuts the nuances; and so on. It's not unlike how one's own voice sounds different when played back versus spoken. Multiplied by ten.

I had no clue until I started messing with guitars myself. So much gets lost versus live, even with electric guitars. Headphone quality assessments must follow from the quality and nuances preserved in a recording, and they are very much embedded in the specific setup (@Polygonhell). Many people habituate to (i.e., hear as "normal") a given sound profile based on their physical sensory potential and personal hardware. Then they will quickly habituate to a second source, because humans generally perform poorly at audio comparisons over time (hence the many failed blind ABX tests). This is NOT subjective experience; rather, it's a product of ineffective perceptual testing methods that do not standardize to individual hearing potential. This can be done. It has been done. Perceptual testing is what underlies all the spatial-audio and noise-canceling tricks in Bose, Apple, THX, and similar software.
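For anyone unfamiliar with how a blind ABX run is judged, here's a minimal sketch (not from the post) of the usual scoring logic: under the null hypothesis that the listener can't tell A from B, every trial is a coin flip, and we ask how surprising the hit count is. The trial counts below are hypothetical:

```python
# Rough illustration: one-sided binomial scoring of a blind ABX listening test.
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """P(at least `correct` answers right by pure guessing, p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Hypothetical session: 12 of 16 trials identified correctly.
correct, trials = 12, 16
print(f"{correct}/{trials} correct -> p = {abx_p_value(correct, trials):.3f}")
# ~0.038: unlikely to be guessing. A 9/16 run (p ~ 0.40) would prove nothing.
```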
