Yeah, so at the moment it’s just a device that will allow us to do blind ABX comparisons with headphone amps. Think of a pair of headphones plugged into a box that has two sources hooked up to it, with several buttons on the front that allow seamless switching between the two, plus a function that obscures which is which; the goal is to correctly match the source in question to the reference. So it’ll make the whole process much easier and remove the placebo and bias effects that come with sighted listening tests. It’s still about a week or two out though.
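For anyone unfamiliar with the mechanics, here’s a minimal sketch of the trial logic an ABX comparator implements, assuming a simple software front end; the prompt-and-guess loop and the names used are placeholders, not the actual design of the box described above.

```python
# Minimal ABX trial loop (illustrative sketch only; the real box does this with
# hardware switching). X is secretly assigned to A or B each trial, the listener
# switches and compares freely, then commits a guess; correct answers are tallied.
import random

def run_abx(n_trials: int = 10) -> int:
    """Run n_trials blind trials and return the number of correct identifications."""
    correct = 0
    for trial in range(1, n_trials + 1):
        x = random.choice(["A", "B"])  # hidden: which source X actually is
        guess = ""
        while guess not in ("A", "B"):
            guess = input(f"Trial {trial}: is X source A or B? ").strip().upper()
        correct += int(guess == x)
    return correct

if __name__ == "__main__":
    n = 10
    print(f"{run_abx(n)}/{n} correct")
```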
I follow the concept of an ABX, but what tests do you plan to run with it? It’s really exciting to hear that a major reviewer is planning to do real ABX - that’ll be a major asset for the community
If I were anywhere nearby I’d offer to come over and see whether I could hear the differences between amplifiers with higher harmonic distortion measurements - but alas, I am not.
It’s interesting. I approach this from a standpoint of something like: “sources don’t make much difference, but I can still hear a difference in some cases”. With the ABX box, the primary test will be to determine whether that’s correct, or just due to framing/placebo effects. I think if the latter is the case, then that on its own is interesting. But if my original position turns out to be true, then there’s a lot more that can be explored. I’m also going to loop it into every source evaluation I do, so for example I’d be able to run the test 10 times, see how many times I identified X correctly, and then add that to the results. Something along those lines, although I haven’t worked that out 100%.
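As a rough illustration of what “run the test 10 times” buys statistically, here’s a sketch of the chance of scoring k or more correct purely by guessing (a simple binomial tail; the trial count of 10 is just the example from the post above, not a fixed protocol):

```python
# Probability of getting at least k of n ABX trials right by pure chance
# (binomial with p = 0.5). For example, 9/10 or better happens by luck only
# about 1% of the time, so such a score is decent evidence of a real difference.
from math import comb

def p_at_least(k: int, n: int, p: float = 0.5) -> float:
    """Chance of k or more correct answers out of n when just guessing."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

if __name__ == "__main__":
    n = 10
    for k in range(n + 1):
        print(f"{k:2d}/{n} or better by chance: {p_at_least(k, n):.3f}")
```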
A word of warning with A/B boxes: over short listening sessions it’s often hard to distinguish anything, but over extended listening the differences tend to become more apparent.
At one point I had two comparable DACs connected through a switch. Both were fed AES/SPDIF from a Pi2AES, and both were so closely level-matched that I couldn’t hear the moment of a change. Anecdotally, while there was no obvious tonal difference on the switch, I would often notice just from listening to music that I was not listening to the one I thought I was, but it would take a minute or two of listening. And these were components I was intimately familiar with.
I got rid of one of those DACs because I generally preferred the other.
It will be interesting. I think there are some good philosophical concerns about what this type of test actually reveals, so I’ll have to be careful there. But I think at the very least it should reveal something.
Sounds great, can’t wait to see what you come up with
You might find Tyll Hertsens’ Big Sound experiments from 2015 informative. They’re on the Wayback Machine: Big Sound 2015 | InnerFidelity (and here, too, if the links don’t work: Wayback Machine)
I have noticed this as well. I’ve experimented with A/B comparing solid state amps both sighted and blind and it does seem that I can only really distinguish between two amps through longer listening sessions. Instaswitching™ doesn’t reveal any apparent differences to me. But if you don’t instaswitch™ then why bother building an A/B box?
However I haven’t done enough to come to any conclusions yet. And even then the conclusions might only apply to me.
I have mainly identified amps and DACs through fatigue and treble artifacts. Rapid swaps reveal nothing with close competitors, as one’s ears must habituate for perception to stabilize. Rapid swaps are biologically akin (at reduced levels) to walking into a dark room from sunlight, or walking from indoors out to a sunny day with snow on the ground.
Some of my devices have or had distinctive whistling or whining patterns. The worst were obvious after a few seconds, while the better ones took an hour to cause issues (i.e., tinnitus). Only my Bifrost 2 is immune, but it has some other quirks. The ZenDAC is the best of my delta-sigma DACs at avoiding treble issues.
It doesn’t matter as long as it’s good or better - because signal-to-noise ratio and distortion clearly do matter.
For those interested in how the pros do it: if you have an interface, record the device’s signal through a good ADC, then line up the unprocessed file or a control signal against it in a DAW with the phase flipped on one, so you can hear the ‘delta’, or difference, between them (a rough sketch is included after this post).
It’s how almost every mastering engineer A/Bs the sound of gear in their system.
Every engineer will also admit to sitting there thinking ‘man, that 800 Hz boost sounds AWESOME’, only to look down and see the whole EQ is in bypass. You win some and lose some - if you can’t tell a difference but you’re still fascinated, come back on a different day with fresh ears and a fresh attitude.
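For anyone who wants to try this null test outside a DAW, here’s a minimal sketch in Python, assuming two mono recordings of the same material that are already time- and level-aligned; the file names are placeholders.

```python
# Phase-inversion ("null") test sketch: subtracting one recording from the other
# is equivalent to flipping the polarity on one track and summing the two.
# Assumes both WAV files share a sample rate and are already sample-aligned.
import numpy as np
from scipy.io import wavfile

def null_depth_db(ref_path: str, dut_path: str) -> float:
    """Return the residual (delta) level in dB relative to the reference;
    more negative means a deeper null, i.e. the signals are more alike."""
    rate_ref, ref = wavfile.read(ref_path)
    rate_dut, dut = wavfile.read(dut_path)
    assert rate_ref == rate_dut, "sample rates must match"
    n = min(len(ref), len(dut))
    ref = ref[:n].astype(np.float64)
    dut = dut[:n].astype(np.float64)
    delta = ref - dut

    def rms(x: np.ndarray) -> float:
        return float(np.sqrt(np.mean(x ** 2)))

    return 20 * np.log10(rms(delta) / rms(ref))

if __name__ == "__main__":
    print(f"null depth: {null_depth_db('reference.wav', 'device.wav'):.1f} dB")
```

Exporting the delta back to a WAV and listening to it is the ‘hear the difference’ part; the dB figure is just a quick way to put a number on it.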
This can be difficult to do correctly. Passion for Sound used a similar method to test for differences in USB cables. Even if there can be a difference at these cable lengths, I’m skeptical that the difference is as significant as it appears. Note GoldenSound’s comment about the methodology used:
Why do you say it’s difficult to do correctly? It’s perhaps a bit tricky to figure out the routing for a cable test, but this is routinely done in professional studios and is easy to set up in REAPER. He used Audacity, which I find difficult and unreliable for this kind of test, but the choice of DAW doesn’t matter much.
I am a bit surprised he got nulls in the -30 dB range between cables, but I’ve found this to be a reliable methodology for a wide variety of non-transducer gear tests in the past.
It also makes it easy to compare both uninverted recordings and quickly A/B between them without relying on a switching box, relay noise/delay, etc.
I can’t link directly to GoldenSound’s comment on the video, but it appears at the top of the comments section for me. In short, there can be timing and amplitude variation, as well as sample-alignment issues, without using the right software. He recommends DeltaWave.
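For context on the alignment issue, here’s a rough sketch of the sample-offset estimation that tools like DeltaWave handle automatically (DeltaWave also corrects clock drift and level, which this doesn’t attempt); it assumes two mono float arrays at the same sample rate, and the names are placeholders.

```python
# Estimate the constant sample offset between two recordings by cross-correlation,
# then trim both so they line up before nulling. This handles a fixed delay only,
# not clock drift or gain differences.
import numpy as np
from scipy.signal import correlate

def estimate_offset(ref: np.ndarray, dut: np.ndarray) -> int:
    """Return the lag (in samples) of dut relative to ref."""
    corr = correlate(dut, ref, mode="full", method="fft")
    return int(np.argmax(corr)) - (len(ref) - 1)

def align(ref: np.ndarray, dut: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Shift dut by the estimated lag and trim both to a common length."""
    lag = estimate_offset(ref, dut)
    if lag > 0:
        dut = dut[lag:]
    elif lag < 0:
        ref = ref[-lag:]
    n = min(len(ref), len(dut))
    return ref[:n], dut[:n]
```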
Gotcha. Yeah, like I mentioned, I find Audacity difficult and not ideal - but any mastering-capable DAW with latency compensation and a reasonably clean ADC/DAC can do serviceable tests. The DeltaWave program looks interesting for sure.
I suspect the things he picked up on were much exaggerated by the Focusrite and Audacity - I’ve never seen a cable have level differences as audible as -30 dB.
I was coming at it having been used to, and assuming, reference-grade converters in a DAW capable of compensating latency and showing true sample alignment. It’s a bit more of a challenge to do in a home setup for sure.