Adele's "Hello" - what am I hearing?

The only Adele CD I have is “25” and I was disappointed in the sound. I chalked it up to too much fiddling in the back room for the sake of some production value. On the very first song, the backing vocals sounded like they were recorded in a different studio and added in after the fact, and they sounded distorted to my ears. There was also an occasional fuzzy sound throughout. But I listen to so little of the popular genres that I don’t know what else, good or bad, is out there.

I knew she had a good voice and when I heard her sing the title song from the Bond film “Skyfall” I decided to explore. But I was so disenchanted by “25” I stopped.

But like I said, I’m probably the worst person to listen to when it comes to popular genres of music, LOL!

2 Likes

I can’t remember the 4:00 mark (I can check later), but the quality of the recording is one of the reasons I have “Hello” on my test track list; on a good setup it just doesn’t sound great.

2 Likes

@generic and @FLTWS , thanks for that information and your insights. I can see where having more knowledge of the recording process helps you better understand what you’re hearing in a recording.

I have a couple of musician friends that have mentioned “compression” as a negative aspect of modern recordings (though acknowledging that it does have a lot of value if used judiciously), so that makes sense as well.

I’m actually thinking about taking a basic music production/engineering class just to get some basic theory to help me pick this stuff out. I find it fascinating!

2 Likes

@SenyorC Thank you! This helps me tremendously. I chose this track because it was on a Spotify playlist of tracks @MRHifiReviews used for his gear reviews. Unfortunately there aren’t notes to go along with each track, so I’m assuming this one is on there for the reason you mentioned.

This also makes me feel better about my equipment, since I thought - in general - everything else I listened to sounded fantastic.

I guess I need to dig into these “test” playlists and make sure I understand why the selections were included. One of my other playlists does have that information since I used a thread (on this forum, I’m pretty sure) to make the list and I copied the notes to a text file.

Thanks again for helping me figure this out! :grin:

3 Likes

Tidal: 44.1 kHz / 16-bit

RoonRpi/Modius/Magnius/Ananda

Nothing unusual audible to me from 3:45 to 4:05.

(Other than a recording studio hash of crap that is intended to sound the same on $2 earbuds, clock radios and the speaker in your microwave)

5 Likes

As others have said, it’s just a horribly mastered track, which is such a shame, since she has such a great voice. I hope it’s just the mastering and not the recording, because one day, when the “loudness wars” fad is over, they’ll be able to fish out the original tapes and produce a better-sounding version of the album.

I didn’t hear any distortion on the CD, but I can hear the sound quality get a little worse from about 3:52 onwards as her voice gets louder, along with the backup vocals and instruments. If you look at the waveform, which isn’t pretty to begin with, it gets more “saturated” around 3:50, and the sound suffers.

On the other hand, when she sings the chorus earlier in the song, around 3:02, you can see there’s a little more dynamic range, and it doesn’t sound as congested.
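For anyone curious what that “saturation” looks like numerically rather than visually, here’s a minimal sketch (assuming numpy; the signal is synthetic, not taken from the actual track) of how hard clipping pins waveform tops at full scale:

```python
import numpy as np

# Hypothetical illustration: a sine mixed too hot, then hard-clipped,
# ends up with flat tops -- the "saturated" look in a waveform editor.
t = np.linspace(0, 1, 48000, endpoint=False)  # 1 s at 48 kHz
sine = np.sin(2 * np.pi * 440 * t)            # clean 440 Hz tone

hot_mix = 1.8 * sine                          # pushed past full scale
clipped = np.clip(hot_mix, -1.0, 1.0)         # what comes out the other side

# Fraction of samples pinned at full scale (the flattened peaks)
flat_fraction = np.mean(np.abs(clipped) >= 1.0)
print(f"{flat_fraction:.0%} of samples sit at full scale")
```

Pushing the gain higher increases that fraction, which is exactly what the widening dark band in a brickwalled waveform represents.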

7 Likes

Interesting to look at the waveforms and compare them, I didn’t think about trying that. Prior to posting I played that section several times to try to nail down a time and interestingly the first timestamp I wrote down was 3:50, which corresponds to the first significant “chunk” of saturation.

And I agree that the rest of the song is not that bad. I really only noticed the issue starting at the 3:50/4:00 mark and this graph makes it clear why.

Not like I need another rabbit hole to explore, but I did some research and learned a bit more about the “loudness wars”. The wiki article has some interesting background. I think I grasp why it started, but I’m still not clear on why it continues. Clearly most competent recording engineers would want to provide the highest DR possible within the scope of the music, so I’m guessing it’s a management decision to mix with more compression. With most people listening to music on low-cost headphones from streaming digital sources, I suspect that sound is further processed and volume-limited, so what is motivating the recording industry to do this? There ultimately has to be consumer demand, since the bottom line drives everything else, but I’m not clear on what that demand is.

I did come across the DR database on SBAF, so I’ll be using that tool in the future to both screen for higher-DR albums to listen to and better understand tracks that don’t seem to sound right.
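As a rough sketch of the kind of thing those DR numbers capture, here’s a peak-to-RMS (“crest factor”) calculation, assuming numpy. This is only a stand-in: the actual DR-database meter uses a more involved windowed measurement, so treat the function and figures below as illustrative.

```python
import numpy as np

def crest_factor_db(x: np.ndarray) -> float:
    """Peak-to-RMS ratio in dB: a crude proxy for dynamic range."""
    peak = np.max(np.abs(x))
    rms = np.sqrt(np.mean(np.square(x)))
    return 20.0 * np.log10(peak / rms)

t = np.linspace(0, 1, 44100, endpoint=False)
sine = np.sin(2 * np.pi * 440 * t)            # unprocessed tone
squashed = np.clip(2.0 * sine, -1.0, 1.0)     # loudness-war treatment

print(round(crest_factor_db(sine), 2))        # ~3.01 dB for a pure sine
print(round(crest_factor_db(squashed), 2))    # noticeably lower
```

A smaller crest factor means the peaks sit barely above the average level, which is what a brickwalled master looks like on a meter.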

1 Like

Compression methods have changed over the years, and can be better than they once were. Some of the late 1990s stuff was horrid (especially Red Hot Chili Peppers’ Californication, my definitive test track for awful vocals). That track and album were just maxed out, with all peaks brutally sliced off. It’s super bright and in-your-face.

In my experience post-2010 compression involves squishing each waveform to bring the instruments and voice into harmony. This isn’t a bad thing, and not unlike careful mic placement for quality recording. The classic 1976 audiophile test album Jazz at the Pawnshop has a close mic on the chimes/bells. This results in defined music and sounds ‘great,’ but it’s NOT how one experiences a live show.

Compression continues because the average music customer listens on junky equipment, in the car with background noise, and plays music when they are multi-tasking. IMO they don’t want the kind of dynamics possible with Classical – too hard to hear and the changes are too disruptive. Properly compressed pop (vocals plus acoustic or electric piano/keyboard/guitar, drums, and misc.) doesn’t need a ton of dynamic range to sound decent, just not brutal treatment and clipping.
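To make “properly compressed” concrete, here’s a minimal sketch of downward compression, assuming numpy. It’s a static gain curve with no attack/release smoothing, so it’s far simpler than any real studio compressor, and the threshold and ratio values are arbitrary examples.

```python
import numpy as np

def compress(samples: np.ndarray, threshold_db: float = -20.0,
             ratio: float = 4.0) -> np.ndarray:
    """Static downward compressor: level above the threshold is
    reduced by `ratio` (e.g. 4:1); quiet material passes untouched."""
    eps = 1e-12                                  # avoid log10(0)
    level_db = 20.0 * np.log10(np.abs(samples) + eps)
    over_db = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over_db * (1.0 - 1.0 / ratio)     # gain reduction in dB
    return samples * 10.0 ** (gain_db / 20.0)

# A full-scale peak (0 dBFS) sits 20 dB over the threshold; at 4:1 it is
# pulled down by 15 dB, while a quiet -30 dBFS sample is left alone.
print(compress(np.array([1.0, 0.03])))
```

The point of the post above is that this kind of gain riding, used gently, is fine; it’s the brutal limiting and clipping on top of it that does the damage.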

6 Likes

Here’s the song where I last caught distortion in my system. The song is 9 minutes long and the issues should start after the 5:00 mark.

https://music.youtube.com/watch?v=Izc3W3yOW0M

After that episode, I reviewed my cable setup and never looked back. The symptom was that I could hear some “fart” sounds. And they were not coming from me.

Cheers. :beers:

I’ve never used my worst-sounding recordings as part of my demo selection, but it does make sense.

Yep, I’ve got all the acknowledged recommended demo stuff compiled by mostly “The Absolute Sound” and “Stereophile”, back in the day. Had them on LP and now on CD.

But you’re right about a lot of SOTA recordings: you’d never hear it that way live, but it is ear-catching, and maybe offers some sort of compensation for the fact your brain knows “it is Memorex”!

My number 1 complaint with most orchestral recordings is that they are always mic’d too close and with too many mics, so some instrument sounds, especially in their upper registers, wobble left and right of center. It never sounds that way from out in the hall or even in the orchestra pit. Imaging is never as specific live as recordings can deliver it, and the farther out in the hall I sit the more things blend (sit and listen live with eyes closed). And recorded works for orchestra and soloist always make, say, the piano keyboard sound like it stretches all the way from stage left to stage right.

3 Likes

Spotify is one thing, but mastering even slightly louder than it should be will cause abnormally strange transient misalignment.

I’ve been wondering about DR in any number of digital recordings over the last decade. Although I don’t perform anymore, I spent almost 44 years in recording studios as a session drummer and performing musician. One thing I learned early in the game was how to listen and understand what I was hearing in my “cans,” regardless of the manufacturer. Some dynamics will leap out at you; some will not. A lot of your listening focus will be steered by the type and genre of the music you’re hearing.

Currently, I’m using a set of SIVGA P-Ⅱ planar magnetic over-the-ear headphones, which use a 97 mm × 76 mm ultra-nano double-sided planar magnetic diaphragm as the driver, combined with the sound characteristics of the black walnut chamber. They are fed from an iFi ZEN DAC using the 4.4 mm balanced connector and the standard SIVGA cable. The iFi ZEN gets a USB 3.0 signal that is filtered and re-clocked by an iFi iPurifier3 USB Audio and Data Signal Filter/Purifier, which re-clocks/regenerates/repeats the USB signal, eliminates USB jitter and frame/packet noise, and restores USB signal integrity. The little thing also corrects USB signal balance and impedance mismatch, and it’s bus-powered. So, with all that in mind, when I listen to a recorded piece that is what I’d consider “clean” from the mic upstream, it removes most doubt about where any issues might lie.
Since you mentioned Adele’s “Hello”, I thought I’d post a link to an awesome cover of the song performed by the Swedish trio FLR project. The video features German vocalist Anna Schmitz, who delivers a stunning rendition of Adele’s vocal range, accompanied by the outstanding playing of the trio. Pay close attention to Sebi Friebe on bass guitar; he’s amazing! The audio is exceptional, with a very balanced mix favoring the instruments until the chorus, where Anna’s powerful vocals come to the front of the soundstage. Even the overdubbed vocal harmonies are well mixed and blended. Enjoy the clip and the playing.

4 Likes

I’m not picking up any distortion, but it does become VERY compressed (dare I say overcompressed) in that section as she’s coming back into a chorus. “Loudness War” mastering 101 imho…

Right you are, sir. A prime example of a product of my least favorite side of the mastering industry…the “loudness wars.”

5 Likes

I think what a lot of us savvy listeners are hearing is called “peak limiting.”
At their very fabric, Adele’s recordings are mixed quite loudly to begin with. When a mix is already sitting that high on the meters, the mastering engineer has his hands full trying to smooth out some areas of a very loud recording. A healthy sine wave has smooth, rounded peaks, with tops that are not jagged or cut off at all. This is not the case in an Adele recording at times, especially during some loud sections of the music. Mastering engineers try to smooth out those waveforms, which are on the verge of clipping or already have distorted tops, with a lot of processing, sound colouration, noise gating, etc.

Good systems, I know, will reveal the flaws no matter what. However, I will say that the production value on an Adele album is quite high and very polished, which for some may be its saving grace.

Hit me up with any more thoughts you guys may have, ’cause I think I’m hearing the same things you are. :wink:

I always say to people that the loud cannot exist without the quiet. I wish more people in the music industry could grasp this concept because what you have nowadays is a lot of recordings that just don’t sound good on great systems. :frowning:
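The “smoothing out” described above can be pictured as the difference between a hard clip and a soft saturation curve. A hedged sketch, assuming numpy, with tanh standing in for the waveshaper (real mastering limiters are far more sophisticated than this):

```python
import numpy as np

def hard_clip(x: np.ndarray) -> np.ndarray:
    """Slices peaks flat at exactly +/-1 -- the jagged, distorted tops."""
    return np.clip(x, -1.0, 1.0)

def soft_clip(x: np.ndarray) -> np.ndarray:
    """tanh rounds peaks off gradually and never quite reaches +/-1,
    trading harsh clipping artifacts for gentler saturation."""
    return np.tanh(x)

x = np.linspace(-2.0, 2.0, 9)   # a ramp driven ~6 dB past full scale
print(hard_clip(x))             # tops pinned at exactly +/-1
print(soft_clip(x))             # tops rounded, |output| < 1 everywhere
```

Either way, once the peaks have been driven this hard, the quiet-versus-loud contrast the post is talking about is already gone.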

5 Likes

Maybe Apple Music going to lossless options will encourage better recordings?

Not necessarily, because the recording itself is mainly the problem, not the bit rate, I feel. For instance, Supertramp’s Crime of the Century sounds better at 320 kbps than Adele’s 21 at 1,411 kbps. Yes, they’re different types of music, but generally speaking the point I am making is relevant. :wink:

3 Likes

Adele’s producers were fully conscious and calculating about her tone. This has two main causes:

  1. Music must be playable in the car, on cheap equipment, heard in noisy places and while working out, and heard by people who’ve done terrible damage to their hearing. Widespread premature hearing loss was likely one cause of the Loudness War.

  2. Producers calculate “attitude” or toughness with female performers. Women work with a much larger communication and fashion palette than men (i.e., sex roles), and all aspects of the delivery communicate different things. Some are innocent and girlish, others focus on come-hither, and others “love rock & roll.” Listen to the tone, sizzle, and tube distortion of each.

Adele is a variant of the powerful, strong, commanding female theme.

2 Likes

This, 1000%! The limiting on this track (and unfortunately many tracks like it) is really aggressive in the chorus sections. Mastering engineers always prefer not to receive a track that’s already as loud as it can be; there’s no room to improve anything when it’s like that.

Agreed. The raw recording quality of this track is incredible, no doubt. The problem is the post-production processing the mixing engineer did, and then, on top of that, the mastering engineer’s processing (plus all the opinions he has to deal with from label execs). Great thoughts, @Nephilim_81, you’re right on point!

4 Likes

Yeah, I understand the recording is the main problem, but if more people get into quality music due to Apple making it more mainstream, then maybe producers will slowly improve the quality. This is also why I said “encourage” better recordings; I understand a shit recording at a higher bit rate is still shit.
It’s not going to happen overnight, but maybe if the mainstream market craves better quality we’ll slowly get there. Regardless, I think Apple getting into the lossless game will help music quality long-term. And this is coming from someone who doesn’t use Apple Music.

2 Likes

Thank you so much. :slight_smile:
And as much as I love listening to music, I care just as much about the audio sciences and recording processes, which is part of why I love listening to music so much!
I look at producers, mastering engineers, and mixing engineers as important members of the band. The first thing I do when I buy a CD is look at the liner notes and credits. I’m always interested in who helped make the album I bought sound really amazing, or really poor.

I feel I owe a lot of my decent insight into the audio sciences to Alan Parsons.
He has taught me quite a bit, though not personally of course. :slight_smile:

4 Likes