I’m not exactly sure where to put this, but I’ll start in this forum.
I’ve got a new tube in my BH Crack so I’m going down my playlist of test songs, and at the moment I’m in the middle of Adele’s “Hello”. I don’t necessarily trust my own ears so I wanted to see if anyone else was hearing what I was hearing.
Overall the recording sounds quite good, but a little before the 4:00 mark it almost sounds distorted, though that could just be the recording itself. One specific place I hear this is when the lyrics “thousand times” are echoed as Adele is singing the same line. This might just be an odd (to my ears) mix of brushed snares and multiple voices, but I suppose it could also be a DAC or amp artifact. I just don’t have the experience to say.
The recording is from Spotify, and I’m using a Bifrost2 and a BH Crack + SB + T-S 5998.
The only thing I could “hear” in a quick ‘n’ dirty attempt to reproduce your issue was an increasing volume after 3 minutes of the song – which I think is a common technique used in music production. But no anomalies at my end.
To your issue: check whether you may be clipping the input stage of your amp. In other words, the signal from your BF2 may be coming into the Crack too hot.
Assuming the above is an issue, it should be easily reproducible with other recently mastered songs (a.k.a. loudness wars). The work-around is as simple as reducing the signal into the Crack – e.g.: lowering the OS volume.
If it’s not reproducible, then other details may be needed from your end.
Thanks for giving it a listen. With my BF2 I don’t have the option to diminish the signal going into the Crack, but I will try lowering the output volume at my computer. I also have an Asgard with a DAC and some other amps, so I’ll try those when I get the time. I was just curious if it was “just me” or if my hardware had something to do with it.
If I learn anything new I’ll post here. Thanks again!
First, Adele’s releases tend to be dynamically compressed and hot/harsh. She’s often not a good example of recording quality.
Second, one recording method is to place multiple microphones in a room so the loud notes have a stronger, fuller character than the quiet ones. David Bowie’s Heroes is known for using close, middle, and far mics. The more distant mics pick up room noise and echoes, so they are inherently less clean and precise than a close mic.
Third, gating (muting sounds that fall below a set threshold) is a common practice to increase the punch and definition of some sounds. Phil Collins’ “In the Air Tonight” used gating to define the famous drum break at 3:15.
The only Adele CD I have is “25”, and I was disappointed in the sound. I chalked it up to too much fiddling in the back room for the sake of some production value. On the very first song, the backing group sounded like it was recorded in a different studio and added in after the fact, and it sounded distorted to my ears. There’s an occasional fuzzy sound throughout, too. But I listen to so little of the popular genres that I don’t know what else, good or bad, is out there.
I knew she had a good voice and when I heard her sing the title song from the Bond film “Skyfall” I decided to explore. But I was so disenchanted by “25” I stopped.
But like I said, I’m probably the worst person to listen to when it comes to popular genres of music, LOL!
I can’t remember the 4:00 mark (I can check later), but the quality of the recording is one of the reasons I have “Hello” on my test track list; on a good setup it just doesn’t sound great.
@generic and @FLTWS , thanks for that information and your insights. I can see where having more knowledge of the recording process helps you better understand what you’re hearing in a recording.
I have a couple of musician friends that have mentioned “compression” as a negative aspect of modern recordings (though acknowledging that it does have a lot of value if used judiciously), so that makes sense as well.
I’m actually thinking about taking a basic music production/engineering class just to get some basic theory to help me pick this stuff out. I find it fascinating!
@SenyorC Thank you! This helps me tremendously. I chose this track because it was on a Spotify playlist of tracks @MRHifiReviews used for his gear reviews. Unfortunately there aren’t notes to go along with each track, so I’m assuming this one is on there for the reason you mentioned.
This also makes me feel better about my equipment, since I thought - in general - everything else I listened to sounded fantastic.
I guess I need to dig into these “test” playlists and make sure I understand why the selections were included. One of my other playlists does have that information since I used a thread (on this forum, I’m pretty sure) to make the list and I copied the notes to a text file.
As others have said, it’s just a horribly mastered track, which is such a shame, since she has such a great voice. I hope it’s just the mastering and not the recording, because one day, when the “loudness wars” fad is over, they’ll be able to fish out the original tapes and produce a better-sounding version of the album.
I didn’t hear any distortion on the CD, but I can hear the sound quality get a little worse from about 3:52 onwards as her voice gets louder, along with the backup vocals and instruments. If you look at the waveform, which isn’t good looking to begin with, it gets more “saturated” around 3:50, and the sound suffers.
On the other hand, when she sings the chorus earlier in the song, around 3:02, you can see there’s a little more dynamic range, and it doesn’t sound as congested.
Interesting to look at the waveforms and compare them; I didn’t think about trying that. Prior to posting I played that section several times to try to nail down a time, and interestingly the first timestamp I wrote down was 3:50, which corresponds to the first significant “chunk” of saturation.
And I agree that the rest of the song is not that bad. I really only noticed the issue starting at the 3:50/4:00 mark and this graph makes it clear why.
Not like I need another rabbit hole to explore, but I did some research and learned a bit more about the “loudness wars”. The wiki article has some interesting background. I think I grasp why it started, but I’m still not clear on why it continues. Clearly most competent recording engineers would want to provide the highest DR possible within the scope of the music, so I’m guessing it’s a management decision to mix them with more compression. With most people listening to music on low-cost headphones from streaming digital sources, I suspect that sound is further processed and volume-limited, so what is motivating the recording industry to do this? There ultimately has to be a consumer demand, since the bottom line drives everything else, but I’m not clear on what that demand is.
I did come across the DR database on SBAF, so I’ll be using that tool in the future to both screen for higher-DR albums to listen to and better understand tracks that don’t seem to sound right.
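For anyone who wants to poke at this numerically, here’s a rough Python sketch of the basic idea behind those DR numbers. To be clear, this is my own simplification (a crest factor: peak level vs. RMS level, in dB), not the actual DR-database algorithm, which works on windowed sections of real tracks:

```python
import numpy as np

def crest_factor_db(samples: np.ndarray) -> float:
    """Rough dynamics estimate: peak level over RMS level, in dB.
    Heavily limited ("loudness wars") masters score low; dynamic
    masters score high. NOT the official DR-database algorithm."""
    peak = np.max(np.abs(samples))
    rms = np.sqrt(np.mean(samples ** 2))
    return 20 * np.log10(peak / rms)

# One second of a 440 Hz sine at 48 kHz: a clean sine has a
# crest factor of about 3 dB.
t = np.linspace(0, 1, 48000, endpoint=False)
sine = np.sin(2 * np.pi * 440 * t)
print(round(crest_factor_db(sine), 1))  # ~3.0

# Drive it 3x past full scale and hard-clip: the peaks get sliced
# flat, the average level rises, and the number collapses toward 0.
clipped = np.clip(3 * sine, -1.0, 1.0)
print(round(crest_factor_db(clipped), 1))  # well under 1 dB
```

That collapse is exactly what the “saturated” sections of a waveform view are showing: the peaks are pinned at full scale while the average level keeps climbing.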
Compression methods have changed over the years, and can be better than they once were. Some of the late 1990s stuff was horrid (especially Red Hot Chili Peppers’ Californication, my definitive test track for awful vocals). That track and album were just maxed out, with all peaks brutally sliced off. It’s super bright and in-your-face.
In my experience post-2010 compression involves squishing each waveform to bring the instruments and voice into harmony. This isn’t a bad thing, and not unlike careful mic placement for quality recording. The classic 1976 audiophile test album Jazz at the Pawnshop has a close mic on the chimes/bells. This results in defined music and sounds ‘great,’ but it’s NOT how one experiences a live show.
Compression continues because the average music customer listens on junky equipment, in the car with background noise, and plays music when they are multi-tasking. IMO they don’t want the kind of dynamics possible with Classical – too hard to hear and the changes are too disruptive. Properly compressed pop (vocals plus acoustic or electric piano/keyboard/guitar, drums, and misc.) doesn’t need a ton of dynamic range to sound decent, just not brutal treatment and clipping.
After that episode, I reviewed my cable setup and never looked back. The symptom was that I could hear some “fart” sounds. And they were not coming from me.
Never used my worst sounding recordings as part of my demo selection, but it does make sense.
Yep, I’ve got all the acknowledged recommended demo stuff, compiled mostly from “The Absolute Sound” and “Stereophile” back in the day. Had them on LP and now on CD.
But you’re right about a lot of SOTA recordings: you’d never hear it that way live, but it is ear-catching, and maybe it offers some sort of compensation for the fact your brain knows “it is Memorex”!
My number 1 complaint with most orchestral recordings is that they are always mic’d too close and with too many mics, so some instruments’ sounds, especially in their upper registers, wobble left and right of center. It never sounds that way from out in the hall, or even in the orchestra pit. Live imaging is never as specific as recordings can deliver it, and the farther out in the hall I sit, the more things blend (sit and listen live with eyes closed). And recorded works for orchestra and soloist always make, say, the piano keyboard sound like it stretches all the way from stage left to stage right.
I’ve been wondering about DR in any number of digital recordings over the last decade. Although I don’t perform anymore, I spent almost 44 years in recording studios as a session drummer and performing musician. One thing I learned early in the game was how to listen and understand what I was hearing in my “cans,” regardless of who the manufacturer is. Some dynamics will leap out at you, some will not. A lot of that judgment will be steered by the type and genre of the music you’re hearing. Currently, I’m using a set of SIVGA P-Ⅱ planar-magnetic over-ear headphones, which use a 97mm × 76mm ultra-nano double-sided planar-magnetic diaphragm driver, combined with the sound characteristics of the black walnut chamber. They are fed from an iFi ZEN DAC using the 4.4mm balanced connector and the standard SIVGA cable. The iFi ZEN gets a USB 3.0 signal that is filtered and re-clocked by an iFi iPurifier3 USB Audio and Data Signal Filter/Purifier, which re-clocks/regenerates/repeats the USB signal, eliminates USB jitter, frame, and packet noise, and restores USB signal integrity. The little thing also rebalances and corrects USB signal balance and impedance mismatch, and it’s bus-powered. So, with all that in mind, when I listen to a recorded piece that is what I’d consider “clean” from the mic upstream, it removes any consideration about the issues that might be suspect.
Since you mentioned Adele’s “Hello”, I thought I’d post a link to an awesome cover of the song performed by the Swedish trio FLR project. The video features German vocalist Anna Schmitz, who delivers a stunning rendition of Adele’s vocal range, accompanied by the outstanding playing of the trio. Pay close attention to the outstanding Sebi Friebe on bass guitar. He’s amazing! The audio is exceptional, with a very balanced mix favoring the instruments until the chorus, where Anna’s powerful vocals come to the front of the soundstage. Even the overdubbed vocal harmonies are well mixed and blended. Enjoy the clip and the playing.
I’m not picking up any distortion, but it does become VERY compressed (dare I say overcompressed) in that section as she’s coming back into a chorus. “Loudness War” mastering 101 imho…
Right you are, sir. A prime example of a product of my least favorite side of the mastering industry…the “loudness wars.”
I think what a lot of us savvy listeners are hearing is called “peak limiting.”
So at the very fabric of Adele’s recordings, they are mixed quite loudly to begin with. When your mix is already sitting high in the dB chain, the mastering engineer will have his hands full trying to smooth out such a loud recording in some areas. A healthy sine wave has a smooth, rounded shape, with the tops of the waves not jagged or cut off at all. This is not the case in an Adele recording at times, especially during some loud sections of the music. Mastering engineers try smoothing out those waves that are on the verge of clipping, or already have distorted tops, with heaps of processing, sound colouration, noise gating, etc.
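To make the “sliced tops vs. smoothed tops” distinction concrete, here’s a toy Python sketch. It’s only my own illustration (using tanh as a stand-in soft limiter), not how any particular mastering plugin actually works:

```python
import numpy as np

# One second of a 220 Hz sine at 48 kHz, mixed 2x past full scale.
t = np.linspace(0, 1, 48000, endpoint=False)
hot_mix = 2.0 * np.sin(2 * np.pi * 220 * t)

# Hard clipping: anything over full scale is sliced off flat,
# leaving the jagged tops being discussed -- audible distortion.
hard = np.clip(hot_mix, -1.0, 1.0)

# Soft limiting (tanh as a stand-in): the tops are bent back under
# full scale instead of sliced, so the wave stays smooth -- at the
# cost of squashed dynamics.
soft = np.tanh(hot_mix)

print(np.max(np.abs(hard)), np.max(np.abs(soft)))  # both stay <= 1.0
```

Both approaches keep the signal inside full scale; the difference is that clipping creates flat, distorted tops, while limiting trades some dynamic range for smooth ones, which is essentially the trade-off the mastering engineer is juggling.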
Good systems, I know, will reveal the flaws no matter what; however, I will say that the production value on an Adele album is quite high and very polished, which for some may be its saving grace.
Hit me up with any more thoughts you guys may have, because I think I’m hearing the same things you are.
I always say to people that the loud cannot exist without the quiet. I wish more people in the music industry could grasp this concept because what you have nowadays is a lot of recordings that just don’t sound good on great systems.
Not necessarily, because the recording itself is mainly the problem, not the bit rate, I feel. For instance, Supertramp’s Crime of the Century sounds better at 320 kbps than Adele’s 21 at 1411 kbps. Yes, they’re different types of music, but generally speaking the point I’m making still holds.