Take Me to School (Understanding Audio Better)

OK, here’s a good undergrad-level article related to cryogenic treatment of vacuum tubes.

I will say I share Mr. Taylor’s perspective here. In my previous career as an engineer I spent some time in the aerospace field (space shuttle and theater missile defense targets). Cold is the enemy of electronics – there’s a reason NASA’s lunar rovers, Mars rovers, and deep space probes all had/have heaters on them.

Oh, and Mr. Taylor also explains what’s going on with quenching steel, and yes, that’s (almost) entirely different.

Happy studies!

4 Likes

Thank you. What about cables? The idea of a very brittle dielectric material distresses me. (Pun intended.)

Next, cryogenic treatment of toothpaste tubes . . . Does it improve the Gleam?

Well, let’s ask: what parameters should cable design encompass? The only expensive audio cables I know of that were developed by an actual engineer in the field of professional broadcast-quality cabling are Iconoclast Cable’s. Fortunately, they also provide some rationale for their design decisions; see the links to four white papers near the bottom of the page.
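For a flavor of the kinds of parameters those white papers deal with, here’s a minimal sketch (my own illustration, with generic coax-like ballpark numbers, not anything taken from Iconoclast’s papers) of how characteristic impedance and propagation delay fall out of the per-unit-length inductance and capacitance:

```python
import math

# Illustrative per-unit-length values for a generic coaxial cable.
# These are made-up ballpark numbers, not measurements of a real product.
L = 250e-9    # series inductance, henries per meter
C = 100e-12   # shunt capacitance, farads per meter

# Lossless characteristic impedance: Z0 = sqrt(L/C)
Z0 = math.sqrt(L / C)

# Propagation velocity v = 1/sqrt(LC), expressed as a fraction of c
v = 1.0 / math.sqrt(L * C)
velocity_factor = v / 3.0e8

# One-way delay over an 18-foot speaker run
length_m = 18 * 0.3048
delay_ns = length_m / v * 1e9

print(f"Z0 ~ {Z0:.0f} ohms, velocity factor ~ {velocity_factor:.2f}")
print(f"Delay over {length_m:.1f} m ~ {delay_ns:.1f} ns")
```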

Blue Jeans Cable also offers up some articles.

3 Likes

Coincidental and timely: this guy talks about a new peer-reviewed paper on what affects perceptible cable differences. He reports that time/speed matters. I haven’t tried to track down the original source yet.

1 Like

Original source: http://boson.physics.sc.edu/~kunchur/papers/Interconnect-cable-measurements--Kunchur.pdf

1 Like

Thank you, the Iconoclast papers are very interesting - and the math rises to a level where my mental plasticity for math - which never went beyond freshman calculus - starts to degrade my aging wetware dielectric. I certainly get the gist of most of it, but I would not want to sit down and figure things out with paper and a slide rule.

Equally interesting are the prices. I presently have 18-foot runs of litz cable for my speakers due to room and interior-design constraints. I think I’ll stay with that - the argument for litz cables was covered in the TIME paper. At the office, if I get a few more good clients, I might consider a pair of 7-footers.

It’ll be a while before I replace my rat’s nest loom of Pangea, AudioQuest, and miscellaneous other cables with Iconoclast in my main stereo setup.

That still leaves the question of cryogenic treatment. Why? And what does it do to the dielectric material?

And out of curiosity - have you listened to any of the Iconoclast cable designs? Any personal opinion?

I haven’t seen an engineer / materials scientist / physicist tackle this question for cable insulation.

So, just my thoughts for now:
(1) Cryogenic treatment of plastics has the same general effect as treatment of metal alloys, namely (a) increased hardness, (b) increased wear resistance, and (c) decreased internal stresses. I can’t imagine why any of that would cause the insulator/dielectric to improve signal transmission in any way, but that’s not the same thing as saying it does nothing. People do treat plastic parts in applications where those things are of benefit.

(2) Cryogenic treatment of a cable as a whole seems to have some obvious difficulties, similar to what the link above on cryo treatment of tubes discusses: the metal conductors and the polymer insulators have very different coefficients of thermal expansion (the plastics typically contract several times more than the copper on cooling), so during treatment I would expect serious tensile forces on the terminations and stresses at the conductor/dielectric interface. That will get more complex if you have multiple, individually insulated conductors in some sort of twisted geometry (like Ethernet cables). Based on that, I’d much rather have the cryo treatment applied to the terminations, conductors, and jacket separately, prior to the actual cable build-up. I don’t know if anyone actually does it that way.
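A rough back-of-the-envelope of that expansion mismatch (my own numbers, using room-temperature CTE values; real CTEs fall as the material cools, so treat the absolute figures as an upper bound):

```python
# Differential thermal contraction, dL = alpha * L * dT, for a cable
# cooled from room temperature (293 K) to liquid nitrogen (77 K).
# Room-temperature CTEs used throughout, so absolute numbers are
# overstated - but the roughly 10x metal-vs-polymer mismatch is the point.

ALPHA_COPPER = 17e-6    # 1/K, typical for copper
ALPHA_PE     = 200e-6   # 1/K, polyethylene; polymers run ~100-200e-6

delta_T = 293.0 - 77.0  # kelvin
length_m = 1.0          # per meter of cable

shrink_cu_mm = ALPHA_COPPER * length_m * delta_T * 1000
shrink_pe_mm = ALPHA_PE * length_m * delta_T * 1000

print(f"Copper conductor: ~{shrink_cu_mm:.1f} mm contraction per meter")
print(f"PE dielectric:    ~{shrink_pe_mm:.1f} mm contraction per meter")
print(f"Mismatch working the terminations: ~{shrink_pe_mm - shrink_cu_mm:.1f} mm/m")
```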

Haven’t heard them – they’re outside my price range. My next step with cables is replacing my rat’s nest with BJC LC-1. I’m also staying low-budget for now because I would like to move to balanced operation and that will mean new cables anyway at some point. Maybe someday if I get a real nice DAC and amp I’ll spring for Iconoclast between them. Iconoclast speaker cables tend to cost about what I can afford for speakers!

Maybe someone who is better with Google can find a real (non-marketing-speak) paper on cryogenic treatment of cables. I’d be interested in it too.

2 Likes

I know what you mean.

I am curious about the definition of the “Brick Wall” filter setting I see on some DAC/Amps. Is it somehow related to the “non-oversampling” choice some DACs offer? TIA

Question about modern vs traditional vinyl record manufacture

I do know something about the history of recording and about making copies of recordings. I learned the CD process and about glass masters, too.

I’m thinking that until the digital age, the record-producing process was entirely analog. I don’t know how long the lacquer master has been in use - and I do know that they still make them today. Today, I understand that a lathe is used to cut the lacquer. I’m not sure what the older process was, but I do know that there was no digital in it.

Today, with digital recording, I’m assuming there must be both an ADC and a DAC in the steps to get to that analog lacquer. And with those pre-manufacture steps, there must be some concern about the accurate translation of digital information into movement - if not of a speaker cone, then of some tool that cuts the grooves in the lacquer.

How does this digital process affect my end product vinyl? (Sure it’s all analog after the lacquer is created. The lacquer is used to create a metal stamper, etc.) Is there any physical difference between the groove/track produced in a digital process vs an older analog one? Is there a benefit of tighter control of a modern lathe compared to vintage cutting tools? Has anyone compared or analyzed the final grooves scientifically?

Why do questions like this occur to me for no reason at 1:37 AM?

The lacquer master goes back to the 1930s. Prior to digital, the master recording was some variety of multi-track magnetic tape (think reel-to-reel). There were some “direct to disc” recordings made, where the lathe was driven directly from a feed of a live performance, mixed down in real time. Still all analog of course.

Yes, almost by definition. You could, in theory, create a digital master straight from numbers. Simple stuff like test tones (single frequency sine wave, white noise, pink noise) could be done this way, but I doubt anything anyone would want to listen to for pleasure could be made straight from numbers. So, there’s an A-D conversion.
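The test-tone case really is trivial, for what it’s worth. Here’s a minimal sketch (standard-library Python, my own illustration) of a master made straight from numbers, with no A-D conversion anywhere in the chain:

```python
import math
import struct
import wave

# Synthesize a 1 kHz sine test tone directly as samples: a "digital
# master straight from numbers", never touching a microphone or ADC.
SAMPLE_RATE = 44100   # Hz, CD-standard
SECONDS = 2.0
FREQ = 1000.0         # Hz
AMPLITUDE = 0.5       # fraction of full scale

with wave.open("test_tone.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)           # 16-bit PCM
    w.setframerate(SAMPLE_RATE)
    frames = bytearray()
    for n in range(int(SAMPLE_RATE * SECONDS)):
        x = AMPLITUDE * math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE)
        frames += struct.pack("<h", int(x * 32767))
    w.writeframes(bytes(frames))
```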

Similarly, there has to be a D-A conversion, since that groove in an LP is analog. Even if the lathe were driven digitally, the result would be an analog groove, so there is an inherent conversion. But I doubt the lathe is driven digitally. To my knowledge, record-cutting lathes and people with the skill to operate them are rare.

1:45 am would be like the middle of the night or something. Gotta be earlier than that. :slight_smile:

Big question. Well, in a sense, you’re hearing the ADC and DAC used, in addition to all the usual parts of the LP production chain. I remember listening to an early digital LP of Stravinsky’s “The Firebird” and thinking it sounded very bright and harsh. Looking back on it, I think that was just because early digital sounded that way, at least to me. Today, the quality is so much better that A-D and D-A conversions can be done nearly transparently. It could certainly vary from one LP to another, of course. I don’t think there are any across-the-board attributes (good or bad) that could be attributed to having a digital step in the chain.

1 Like

Exception / counterexample: the 1970s-80s synth era saw the rise of MIDI music. The genre also includes the bleeps and bloops of classic Nintendo games. It had to be created on digital keyboards and computers, and some people do love its simple digital purity. In the pre-Napster era, MIDI songs trod close to copyright violation and drew legal warnings and takedowns, too. Early MP3s greatly reduced interest, but it remains a niche.

Doo doo … doo doo doo … dooot

2 Likes

Excellent read. It quantifies much of what I was thinking.

I see the term ear gain tossed around often. I cannot find a definition for it in any of the glossaries. Can someone explain what this means exactly?

“Ear gain” - sometimes incorrectly referred to as “pinna gain” - is shorthand that has developed for the rise in the roughly 3 kHz band produced by the resonances of the external ear and ear canal.

The exact shape of this rise is specific to the acoustic source’s incidence (as you can see by comparing the frontal free-field and diffuse-field responses of the 5128C above), but generally, if the raw measurement of a headphone does not have a substantial rise in this band, it will be quite timbrally poor.
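For a sense of why the rise lands near 3 kHz, here’s a toy estimate (my own sketch, treating the ear canal as a simple tube closed at the eardrum; the real response also involves concha and pinna resonances and the source incidence, as noted above):

```python
# First resonance of a quarter-wave tube closed at one end: f = c / (4L).
# The canal length is a textbook ballpark, not a measurement standard.
SPEED_OF_SOUND = 343.0   # m/s at room temperature
CANAL_LENGTH = 0.027     # m, roughly 2.7 cm for a typical adult ear canal

f_first_resonance = SPEED_OF_SOUND / (4 * CANAL_LENGTH)
print(f"First ear-canal resonance ~ {f_first_resonance:.0f} Hz")  # ~3200 Hz
```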

3 Likes

Thanks for explaining. So when @Resolve mentions that a headphone has ear gain, is that a bad thing? What should we be looking for in a headphone in regards to ear gain? Is it fixed?

Ear gain is a requirement for a headphone to sound good. Too much ear gain… is bad (or rather… too much emphasis in a particular region of the ear gain).

Ear gain is just the effect of the ear on incoming sound. It amplifies certain regions of the frequency response (hence, ‘gain’), but you can think of it generally as the ‘effect’ the ear has on sound.
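If it helps to see “gain” concretely, here’s a toy model (my own sketch: a single RBJ-cookbook peaking filter standing in for the far more complex real ear transfer function) that boosts the region around 3 kHz and leaves the rest alone:

```python
import cmath
import math

# A single peaking biquad (RBJ Audio EQ Cookbook) as a crude stand-in
# for ear gain: +13 dB centered at 3 kHz, flat elsewhere.

def peaking_biquad(f0, gain_db, Q, fs):
    """Return (b, a) coefficients for a peaking EQ biquad."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * Q)
    b = [1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A]
    a = [1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A]
    return b, a

def magnitude_db(b, a, f, fs):
    """Evaluate the filter's magnitude response at frequency f."""
    z = cmath.exp(2j * math.pi * f / fs)
    num = b[0] + b[1] / z + b[2] / z ** 2
    den = a[0] + a[1] / z + a[2] / z ** 2
    return 20 * math.log10(abs(num / den))

b, a = peaking_biquad(f0=3000, gain_db=13, Q=1.0, fs=48000)
for f in (200, 1000, 3000, 10000):
    print(f"{f:>6} Hz: {magnitude_db(b, a, f, 48000):+.1f} dB")
```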

2 Likes

It’s a good thing for a headphone to have ear gain - a headphone that doesn’t have ear gain will have a very wonky sounding frequency response!

That depends a bit - both on your reference frequency and your personal preferences. In general, you are unlikely to enjoy a headphone whose raw response rises by less than 8 dB or so from 200 Hz to 3 kHz, but there’s a pretty meaningful spread in terms of preferred treble level.

The average, of course, would be in line with the Harman target,


but Dr. Olive’s research found a fairly decent spread based both on listener demographics and uncontrolled variables,
so while the “Harman approved” level at the peak of the ear gain would be about 13 dB relative to 200 Hz, you might well find 10 dB or 15 dB more pleasant (but you probably wouldn’t like 0 or 30 very much!).
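Put as a toy check (the measurement points below are invented for illustration; the 8 dB floor, 10-15 dB band, and ~13 dB Harman figure are just the rough numbers from this post):

```python
# Hypothetical raw-response points for some headphone, in dB.
raw_response_db = {200: -2.0, 1000: 1.5, 3000: 9.5}

rise = raw_response_db[3000] - raw_response_db[200]
print(f"Ear-gain rise, 200 Hz -> 3 kHz: {rise:.1f} dB")

if rise < 8:
    print("Below ~8 dB: likely to sound timbrally off")
elif 10 <= rise <= 15:
    print("Within the broad 10-15 dB preference band (Harman ~13 dB)")
else:
    print("Borderline: may still suit some listeners' treble preferences")
```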

1 Like

How far upstream does the quality start?

No, I’m not talking about performance, miking or any of the production side. Or even things I can’t control like a streaming service.

My question is digital source component to DAC.
If I’m using the same streaming source - let’s say Qobuz - and the same DAC (for the sake of consistency, a desktop DAC that doesn’t depend on a USB port for power), I don’t notice much difference - if any - whether I’m streaming from my MacBook Pro, any iPad or iPhone, an Android tablet, or a Sonos Port digital output.

I’m aware of various USB issues and am not trying to bring them into this equation. Does the streaming source make much difference? Or can it, barring gross malfunction?

1 Like

SPDIF and AES sources generate the audio clock, and there is inherent jitter in conversion at the DAC, so any noise transmitted on any clocked source can impact that.
With USB sources, the only practical problem is noise introduced on the connection leaking into other parts of the system. That’s why you see galvanic isolation on DACs - and galvanic isolation isn’t perfect.

Comparing USB to AES, the biggest factor is probably where the clock is generated. I know, for example, that Chord DACs discard the external clock and regenerate it, so AES shouldn’t be inherently any better or worse there. Most (though not all) DACs will attempt to clean the incoming clock on an AES source using a PLL, but cleaner input to a PLL means cleaner output.
I know Rob Watts recommends TOSLink cables for his DACs: they have perfect galvanic isolation from the source but notoriously terrible jitter, and since the DAC is discarding the incoming clock anyway, that shouldn’t matter.
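For scale on why jitter gets this much attention, a standard back-of-the-envelope: a full-scale sine at frequency f sampled with RMS clock jitter t_j can’t do better than SNR = -20 * log10(2 * pi * f * t_j). The jitter figures below are illustrative, not measurements of any particular DAC or interface:

```python
import math

def jitter_limited_snr_db(f_hz, t_j_sec):
    """SNR ceiling imposed by RMS sampling jitter on a full-scale sine."""
    return -20 * math.log10(2 * math.pi * f_hz * t_j_sec)

# Evaluate at a 10 kHz tone (near worst case for audio) across jitter
# levels from TOSLink-ish nanoseconds down to good-local-clock picoseconds.
for t_j in (1e-9, 100e-12, 10e-12, 1e-12):
    snr = jitter_limited_snr_db(10_000, t_j)
    print(f"t_j = {t_j * 1e12:>6.0f} ps  ->  SNR ceiling ~ {snr:5.1f} dB")
```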

Where the rubber meets the road is whether it actually matters, and most of that is anecdotal; most people report hearing differences between AES and USB on at least some DACs.
My personal experience is that good streamers do sound better. I thought the EMM Labs NS1 turned the Chord Dave into a MUCH better DAC vs. a Pi2AES.
On my Lampizator DAC, I felt the difference between the Optical Rendu via USB and the NS1 via AES was marginal enough that I still use the Rendu (despite it being much less than 1/3 of the price), because it lets me play the few high-bitrate DSD files I have without downsampling.

Where I start to struggle is with the idea of the server itself making a difference, or local file playback vs. network playback, or differing network protocols making a difference.
I get it with devices like the Pink Faun or Antipodes, where the server is the streamer.
But my server is electrically isolated from my streamer.
I’ve compared network vs. local playback with the same files and have not heard a big difference in my system, but people with the same streamers do report one. I don’t hear much difference, if any, between RAAT and UPnP or LMS, yet I know a number of people who report RAAT as sounding inferior. The only thing the protocol could really be affecting is hardware usage (CPU/Ethernet) in the streamer, and hence the noise it generates. From network monitoring, most 44/16 content finishes streaming on my system within the first few seconds of the song when using UPnP/LMS, so after that the entire thing plays out of memory; this is not the case with RAAT, and that could explain it.
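A toy model of that buffering difference (my own illustrative numbers; real protocols and implementations vary):

```python
# UPnP/LMS-style prefetch: pull the whole file at link speed, then play
# from RAM, versus RAAT-style paced delivery that keeps the network
# (and whatever noise it generates) active for the entire song.

TRACK_BYTES = 40 * 1024 * 1024          # ~40 MB, a 44/16 FLAC-sized track
TRACK_SECONDS = 240                      # four-minute song
LINK_BYTES_PER_SEC = 12 * 1024 * 1024    # ~100 Mbit effective throughput

prefetch_active_sec = TRACK_BYTES / LINK_BYTES_PER_SEC  # network then idle
paced_active_sec = TRACK_SECONDS                        # network never idle

print(f"Prefetch: network busy ~{prefetch_active_sec:.1f} s of a {TRACK_SECONDS} s song")
print(f"Paced:    network busy ~{paced_active_sec} s (the whole song)")
```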

The right question for most people is: do you care? I think getting off a PC as a source is a win, and while I did think stepping up to a quality streamer was worth it, my system is dramatically more expensive than most people’s, and the money I put into the streamer wouldn’t have moved the dial much if put elsewhere.

3 Likes