Transport Bit Accuracy: Two Theories Explaining the Sonic Differences Between CD Transports

Robert Harley

One of the theories explaining the analog-like sonic differences between CD transports suggests that better-sounding transports produce fewer bit errors. That is, the inferior transports don’t recover all the data from the disc, resulting in the wrong ones and zeros being converted to an analog signal.

The second theory holds that virtually all CD transports can recover all the information from a CD with perfect bit accuracy, and that the sonic variations between transports are due solely to jitter (timing errors).

I believe the second theory, and here’s why.

Let’s conduct a thought experiment. Say we deliberately introduced bit errors randomly into a PCM-coded digital audio signal and listened to the error-riddled bitstream. What would those errors sound like? Well, if we changed the least significant bit of a 16-bit “word” from a one to a zero, or from a zero to a one, the result would be an amplitude error with a magnitude of one part in 65,536. We know that number precisely because 16 bits can represent any number between 0 and 65,535. Can we hear such a low-level amplitude error? Not a chance.

Now let’s consider what happens if that random bit error happens to affect the most significant bit of the 16-bit word—a scenario just as likely (or unlikely; remember, the bit errors are introduced randomly). Changing the 16-bit word’s most significant bit from a zero to a one, or from a one to a zero, would result in a change in the audio signal’s amplitude of 32,768 parts in 65,536, or fully half the signal amplitude. That’s right; an error in the most significant bit results in an instantaneous amplitude error of half-scale. Such an error would be manifested as a click loud enough to make you jump out of your chair.
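
To make the arithmetic concrete, here is a brief sketch (my own illustration, not anything from the CD specification) of both single-bit errors:

```python
# Toy model: flip one bit of an unsigned 16-bit PCM sample and measure
# the amplitude error as a fraction of full scale.

FULL_SCALE = 65536  # 2**16 possible values in a 16-bit word

def flip_bit(sample: int, bit: int) -> int:
    """Return the sample with one bit inverted."""
    return sample ^ (1 << bit)

sample = 12345  # an arbitrary 16-bit sample value

# Least significant bit: an error of 1 part in 65,536 -- inaudible
lsb_error = abs(flip_bit(sample, 0) - sample) / FULL_SCALE

# Most significant bit: 32,768 parts in 65,536 -- a half-scale click
msb_error = abs(flip_bit(sample, 15) - sample) / FULL_SCALE

print(lsb_error)  # 1.52587890625e-05
print(msb_error)  # 0.5
```

Whichever bit the random error lands on, the result is an amplitude error; only its magnitude changes.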

Do you hear random loud clicks when playing CDs?

The reason you don’t hear such bit errors is that the CD format has a sophisticated error detection and correction system that prevents such errors from ever reaching the CD player’s DACs (or a transport’s digital output jack). In fact, CD’s error correction system can completely correct an error of up to 4000 missing consecutive bits. We’re not talking about interpolation (filling in missing information with a best-guess approximation) or error concealment, but full, 100% correction of those 4000 bits.

Errant or missing data spans longer than 4000 consecutive bits are interpolated so that the error is less audible. If the error is too great to be interpolated, the CD player simply mutes the output. The proponents of the “bit error” theory explaining sonic differences between transports suggest that this interpolation is nearly continuous and affects such sonic parameters as soundstage depth and reproduction of timbre.
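
The ability to fully correct such long bursts comes from cross-interleaving combined with Reed-Solomon coding. The toy sketch below (my simplification; the real CIRC scheme on CD is far more elaborate) shows the core idea: interleaving scatters one long burst of damage across many frames, leaving each frame with only a small error that a per-frame code can fully correct.

```python
# Toy model of interleaving: adjacent bytes on the "disc" come from
# different frames, so a long burst becomes many short, isolated errors.

def interleave(frames, depth):
    """Write frames column by column onto the 'disc'."""
    return [frames[f][i] for i in range(depth) for f in range(len(frames))]

def deinterleave(stream, n_frames, depth):
    """Reassemble the original frames from the interleaved stream."""
    return [[stream[i * n_frames + f] for i in range(depth)]
            for f in range(n_frames)]

frames = [[f * 10 + i for i in range(4)] for f in range(8)]  # 8 frames, 4 bytes each
stream = interleave(frames, 4)

# Simulate a burst error: wipe out 8 consecutive bytes on the "disc"
corrupted = stream[:]
for i in range(8, 16):
    corrupted[i] = None

recovered = deinterleave(corrupted, 8, 4)
# Each frame now has at most one missing byte -- well within the power
# of a per-frame Reed-Solomon code to correct exactly.
max_errors_per_frame = max(row.count(None) for row in recovered)
print(max_errors_per_frame)  # 1
```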

My experience suggests otherwise. When I worked in a CD mastering lab, we converted Philips CD players into CD error analyzers for the factory’s QC department. We tapped into the flags in the error correction chips for presentation to a PC running custom software to plot the errors. This allowed us to see the exact frequency and severity of data errors on CDs. CD data errors are categorized by a letter and two numbers that tell you the error’s severity with great precision. I’ll spare you the details here, but suffice it to say that we could determine with tremendous accuracy exactly what was going on in the error correction circuits, and whether any uncorrected errors occurred.

Uncorrected errors (interpolation) not only never occurred, but never even came close to occurring unless the disc was damaged. In fact, the CD’s error correction system is far more robust than it needs to be.

Moreover, I sometimes performed a bit-for-bit comparison between a CD master tape and the CD replicated from that tape. I did this on about ten different CD/master tape pairs, representing perhaps 56 billion bits of data. Not once did the system detect even a single bit error.
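
A comparison of that kind can be sketched in a few lines (a minimal stand-in for the lab’s actual system; the file names are hypothetical):

```python
# Minimal sketch: count the number of differing bits between two
# equal-length data files by XORing them byte by byte.

def count_bit_errors(path_a: str, path_b: str, chunk: int = 65536) -> int:
    """Tally the set bits in the XOR of the two streams."""
    errors = 0
    with open(path_a, "rb") as a, open(path_b, "rb") as b:
        while True:
            ca, cb = a.read(chunk), b.read(chunk)
            if not ca and not cb:
                return errors
            for x, y in zip(ca, cb):
                errors += bin(x ^ y).count("1")

# Usage (hypothetical file names):
# count_bit_errors("master_tape.pcm", "replicated_cd.pcm")  # 0 if identical
```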

Proponents of the “bit error” theory of sonic differences between CD transports suggest that sonic qualities such as soundstage size, timbral liquidity, and dynamics are affected by random bit errors as recovered by the transport. But would random amplitude errors, even if they existed, change the soundstage size?

There’s no question that a systemic change in the datastream representing music can affect the qualities I’ve mentioned—soundstaging, timbral liquidity, and dynamics. Those sonic qualities are not affected by random bit errors, but rather by a wholesale re-arrangement of the data, as can occur in PC-based music servers. The Windows operating system (before Windows 7) will completely resample audio data unless you manually turn off this processing. Many audiophiles have built PC-based music servers only to be disappointed by the sound, and such processing is the reason why. The server must be “bit transparent,” meaning that the data on the source are the same data that are output to a DAC. I know that my own server is bit transparent because I have it connected to a Berkeley Audio Alpha DAC which features a front-panel HDCD light that illuminates when playing an HDCD-encoded source. Any data corruption will destroy the HDCD code hidden in the least significant bit and prevent the HDCD light from illuminating.
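
The principle behind that check can be sketched as follows (my own illustration, not Berkeley’s actual HDCD detector): data hidden in the least significant bits survives only a path that leaves every sample untouched, and even a tiny gain change scrambles it.

```python
# Toy model: hide a code in the LSBs of some samples, then compare a
# bit-transparent path with one that applies a 0.1% volume change.

samples = [1000, 2001, 3000, 4001, 5000]
hidden = [s & 1 for s in samples]  # the "HDCD code" in the LSBs

transparent = list(samples)                         # bit-transparent path
processed = [round(s * 0.999) for s in samples]     # tiny gain change

print([s & 1 for s in transparent] == hidden)  # True  -> light stays on
print([s & 1 for s in processed] == hidden)    # False -> code destroyed
```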

Another source of a wholesale change in the data that introduces an analog-like variability into digital audio is an asynchronous sample-rate converter. A sample-rate converter, even if operating at 44.1kHz input and 44.1kHz output, creates entirely new output samples that are related to the incoming data, but the output samples are not identical to the input samples. Although such a device can remove timing errors (jitter) present in the input signal, it does so at the expense of slightly changing the amplitude of every single sample. In effect, it converts a timing error at the input to an amplitude error at the output. At the last CES I heard an interesting demonstration in the PS Audio room of the audible effect of asynchronous sample-rate conversion. The PS Audio Perfect Wave includes an asynchronous sample-rate converter for those times when it might be needed, but it can be switched out of the circuit. This feature allows one to listen with and without the sample-rate converter in the signal path. Although I was leery of sample rate conversion on purely theoretical grounds, this was the first time I was able to listen to and isolate its sonic effects. Switching in the sample-rate converter caused the presentation to sound thicker and less resolved, smeared the imaging, and made the entire presentation sound synthetic.
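
A toy model (my own simplification of asynchronous sample-rate conversion, not PS Audio’s implementation) makes the amplitude-error point concrete: even at 44.1kHz in and 44.1kHz out, interpolating at the output clock’s phase alters every sample’s value.

```python
# Toy model: compute output samples by linear interpolation at a small
# fractional-sample phase offset between input and output clocks.

import math

fs = 44100
inp = [math.sin(2 * math.pi * 1000 * n / fs) for n in range(64)]  # 1 kHz tone

phase = 0.3  # fractional-sample offset of the output clock
out = [(1 - phase) * inp[n] + phase * inp[n + 1] for n in range(63)]

# Every output sample's amplitude differs slightly from the input sample
# it replaces, even though the signal is nominally "the same."
max_diff = max(abs(o - i) for o, i in zip(out, inp))
print(max_diff > 0)  # True
```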

Neither of these systemic changes to the audio samples representing the music is the result of a CD transport failing to recover all the data from a disc. Virtually all CD transports recover 100% of the data with zero uncorrected errors; the sonic differences we hear between transports are solely the result of jitter.