The test is interesting, but he’s too focused on bits and error correction.
There is no error correction on USB audio: it uses isochronous transfers, which carry a CRC, but the DAC has no way to re-request corrupted data. This is documented in the USB specification, so it really isn't open for debate.
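To make that concrete, here is a minimal sketch of how a receiver handles an isochronous packet: the CRC lets it detect a bad packet, but since there is no retry mechanism, all it can do is drop the data (or conceal the gap, e.g. by repeating the last good frame). This is an illustrative toy, not real driver code, and it uses CRC-32 as a stand-in for the CRC16 that actual USB data packets carry.

```python
import zlib
from dataclasses import dataclass

@dataclass
class IsoPacket:
    payload: bytes
    crc: int  # checksum computed by the sender

def make_packet(payload: bytes, corrupt: bool = False) -> IsoPacket:
    crc = zlib.crc32(payload)  # stand-in for USB's CRC16
    if corrupt:
        # flip bits in transit, after the CRC was computed
        payload = bytes([payload[0] ^ 0xFF]) + payload[1:]
    return IsoPacket(payload, crc)

def receive(packet: IsoPacket, last_good: bytes) -> bytes:
    if zlib.crc32(packet.payload) == packet.crc:
        return packet.payload
    # CRC mismatch: isochronous mode has no NAK/retry, so the
    # receiver cannot re-request. Hypothetical concealment here:
    # just repeat the last good frame.
    return last_good

good = make_packet(b"\x01\x02\x03\x04")
bad = make_packet(b"\x05\x06\x07\x08", corrupt=True)
out1 = receive(good, b"\x00\x00\x00\x00")
out2 = receive(bad, out1)
print(out1.hex(), out2.hex())
```

The point of the sketch is the `receive` path: detection is possible, recovery is not, which is exactly why bit errors on the wire (however rare) simply become dropped or concealed samples rather than retransmissions.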
I’d also be surprised if a significant number of bits were flipped on the wire. I’ve never put an analyser on a USB cable, but I have measured corrected errors on a network cable, and they are stunningly infrequent.
So then the question becomes: what is the difference? If I had to guess, I suspect it’s largely noise introduced to the DAC’s power rails through the USB cable’s ground or 5V lines. DACs go to a lot of effort to galvanically isolate the USB subsystem from the rest of the DAC, and I suspect they do that for a reason; that isolation likely isn’t perfect.
If I can hear the difference between power cables or wall warts, all bets are off on how little noise on a power line it takes to influence the output stage of a DAC or an amplifier.