The marketing claims for a good number of audio-player applications are often quite divorced from reality. Or they’re technically true, but don’t actually make any difference.
I say that as a happy user of both Roon and Audirvana.
Audirvana takes control of the computer’s audio flow, minimizes the signal path, and ensures internal bit-perfect processing.
That could mean almost anything shy of calling a higher-level “Play File” API: even something as simple as the player loading the sample data into its own buffer and then calling an OS-level function to play it to a specific output would qualify.
It bypasses the internal audio mixer, avoiding sound events from other applications and unwanted changes to the audio format of your music.
That’s what “Exclusive Mode” does; there’s no magic to it. It’s a standard feature of Core Audio (“hog mode”); it’s just that not all players expose the option to use it.
This mixer modifies the resolution of audio samples under a “lowest common denominator” rule and uses a low-power algorithm to avoid extra latency, which adds quantization artifacts on top of the quality loss.
The mixer does do this, but only if the sample rate or bit depth of the audio data differs from the output settings for the DAC (set in Audio MIDI Setup). All a player needs to do to avoid this entirely is make an API call that switches the output bit depth and sample rate to match the source file; the output will then be bit-perfect, won’t be resampled, and will have no additional latency.
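As an illustration of the quantization point, here’s a minimal Python sketch. It models the crude conversion itself, not any real mixer’s code: truncating 24-bit samples for a 16-bit output path without dither leaves errors of up to 255 counts per sample, so the stream is no longer bit-perfect.

```python
# Sketch only: models a mixer requantizing 24-bit audio for a 16-bit
# output path without dither. Not Core Audio code; the numbers are the point.
import random

random.seed(0)

# Fake 24-bit signed PCM samples, range -2^23 .. 2^23 - 1.
samples_24 = [random.randint(-(1 << 23), (1 << 23) - 1) for _ in range(1000)]

def truncate_to_16(s):
    """Drop the low 8 bits -- the cheap, 'low-power' conversion."""
    return s >> 8

# Round-trip back to the 24-bit scale to measure what was lost.
round_trip = [truncate_to_16(s) << 8 for s in samples_24]
errors = [abs(a - b) for a, b in zip(samples_24, round_trip)]

print(max(errors))                  # up to 255 counts of error per sample
print(any(e != 0 for e in errors))  # True: the stream is no longer bit-perfect
```

If the output device is instead switched to 24-bit before playback, the truncation step never happens and the payload passes through untouched.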
Hence, most decent audio players automatically switch output sample rates to match the source file. Audirvana, Roon, JRiver, BitPerfect, Qobuz, TIDAL and even the Amazon HD clients all do this properly.
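The switching logic itself is trivial. Here is a hypothetical sketch; the function name, the rate list, and the fallback rule are all illustrative assumptions, not taken from Audirvana, Roon, or any real player:

```python
# Illustrative only: the supported-rate list and fallback policy are
# assumptions for the sketch, not any player's actual behaviour.
SUPPORTED_RATES = [44100, 48000, 88200, 96000, 176400, 192000]

def pick_output_rate(source_rate, supported=SUPPORTED_RATES):
    """Prefer the source rate exactly (bit-perfect, no resampling).
    Otherwise fall back to the highest supported integer multiple,
    which at least keeps any resampling ratio simple."""
    if source_rate in supported:
        return source_rate
    multiples = [r for r in supported if r % source_rate == 0]
    return max(multiples) if multiples else max(supported)

print(pick_output_rate(44100))   # 44100 -- exact match, nothing resampled
print(pick_output_rate(22050))   # 176400 -- clean integer multiple
```

On macOS the actual switch is a single Core Audio property write (kAudioDevicePropertyNominalSampleRate); there is no heavy lifting involved.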
And that’s all without getting into the reality that modern CPUs don’t execute the actual instructions they are fed; they decode, reorder, and speculatively execute them (in the unlikely event the player was coded in assembly language in the first place). So heroic efforts by the programmer don’t necessarily yield any benefit, and certainly nothing predictable enough to “reduce USB or PSU noise” in anything other than a random fashion.
The net result is that, absent active processing in the audio chain (which is invariably avoidable using normal OS configuration settings), or a faulty driver/USB/S/PDIF output implementation, bit-perfect audio output over USB is the DEFAULT condition in exclusive mode with volume at max.
And it’s easily proven … just capture the sample data coming off the USB output and compare it with that in the source file.
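For instance, with Python’s standard wave module (the file names here are hypothetical, and how you capture the USB stream to a file depends on your interface):

```python
# Sketch of the verification step: compare the raw PCM payload of the
# source file with a capture of the USB output. File names are examples.
import wave

def pcm_payload(path):
    """Return the raw PCM frames of a WAV file as bytes (header excluded)."""
    with wave.open(path, "rb") as w:
        return w.readframes(w.getnframes())

def is_bit_perfect(source_path, capture_path):
    """True iff the two files carry byte-identical sample data."""
    return pcm_payload(source_path) == pcm_payload(capture_path)

# Usage, assuming the USB stream was captured to capture.wav:
# print(is_bit_perfect("source.wav", "capture.wav"))
```

If the comparison fails, the shape of the difference is usually diagnostic: every sample off slightly suggests volume scaling, while mismatched lengths suggest a resample.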