Please pardon my stream-of-consciousness word splatter, but this is something that I thought would be worth discussing …
It’s generally accepted that humans can’t hear infrasonic frequencies (below 20 Hz). Given this, evaluation of headphones tends to focus on frequency response and distortion characteristics above 20 Hz. However, it occurs to me that distortion products from infrasonic fundamentals do enter the audible range. For example, for a fundamental at 15 Hz, the 2nd harmonic would be 30 Hz, the 3rd 45 Hz, the 4th 60 Hz, and so on. We usually don’t worry much about higher-order harmonic distortion, since the 2nd and 3rd harmonics are typically the strongest and things tail off quickly after that. However, we also know that headphone drivers tend to show increasing distortion at lower frequencies, so perhaps that extends into the higher-order harmonics?
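For anyone who wants to see the arithmetic laid out, here’s a quick Python sketch of the harmonic series for a 15 Hz fundamental, marking which products land in the nominal 20 Hz – 20 kHz audible band (the band edges are just the textbook figures, not anything measured):

```python
# Harmonics of an infrasonic fundamental, and whether each lands in the audible band.
fundamental = 15.0  # Hz, below the nominal 20 Hz hearing threshold
audible_low, audible_high = 20.0, 20000.0  # textbook audible band

# First 10 harmonics: (harmonic number, frequency in Hz)
harmonics = [(n, n * fundamental) for n in range(1, 11)]

for n, freq in harmonics:
    status = "audible" if audible_low <= freq <= audible_high else "infrasonic"
    print(f"harmonic {n}: {freq:5.0f} Hz  ({status})")
```

Only the fundamental itself stays infrasonic; every distortion product from the 2nd harmonic up (30 Hz, 45 Hz, 60 Hz, …) falls squarely in the audible bass region.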
Here’s the core question - can distortion from a headphone that’s trying to reproduce infrasonic frequencies audibly harm reproduction of audible frequencies?
This of course raises a whole range of other questions, like:
- Do recordings commonly contain sonic content below 20 Hz?
- Do certain lossy encodings discard information below 20 Hz and would that potentially make them sound better in some cases than full-spectrum lossless encodings?
- Can said distortion be effectively addressed by applying a steep high-pass (subsonic) filter in the playback chain, removing the infrasonic content before it ever reaches the driver?
It’s entirely possible that I’m barking up the wrong tree and that there’s nothing to see here; it was just a thought that sprang to mind.