Benchmark Media Systems LA4 Line Amplifier and DAC3 B Digital-to-Analog Converter

Paul Seydor Talks with John Siau of Benchmark


(The following interview accompanies Paul Seydor’s review of the Benchmark Media Systems LA4 Line Amplifier and DAC3 B Digital-to-Analog Converter in the December 2020 issue.)

John Siau, Director of Engineering and chief designer at Benchmark Media, knew from an early age that he wanted to design electronic equipment. He enrolled at Syracuse University in 1976 and graduated four years later with a bachelor’s degree in Computer Engineering. Along the way he’d cobbled together an audio system and worked as the sound engineer and mixer for a local band. The fifteen years following his graduation included long stints at CBS and General Electric, plus independent consulting, through all of which he acquired extensive experience in HDTV (receiving two patents for video-image stabilization systems) and developed deep expertise in high-speed A/D and D/A converters, ultra-low-jitter phase-locked loops, high-speed digital logic, digital filters, and FPGA cores. In 1995 Benchmark hired him to design its first digital product, the AD2004, a 20-bit A/D converter that set new standards for low distortion and won some awards. Soon afterward he joined the company full time, eventually becoming part owner. Away from Benchmark, his musical training includes trumpet and tuba; he is an avid skier; and he and his wife have a large family who on vacations enjoy exploring remote trails and locations. They own a farm, which they lease, though they do enjoy working it from time to time.

Siau’s many white papers and other pieces about audio are well worth investigating on the Benchmark website. In the interview that follows I’ve interpolated links in [] to articles that elaborate on certain subjects he covers. His essay “Rules of Thumb for Music and Audio,” which combines useful information, solid practical advice, and wise counsel, will enlighten both tyros and seasoned audiophiles and reviewers (https://benchmarkmedia.com/blogs/application_notes/audio-rules-of-thumb).

Benchmark publishes by far the most exhaustive technical information about its products, yet listening also plays an important part in your product design and development.
At Benchmark, listening is the final exam that determines if a design passes from engineering to production. But since listening tests are never perfect, it’s essential that we develop measurements for each artifact we identify in a listening test. An APx555 test set has far more resolution than human hearing, but it has no intelligence. We have to tell it exactly what to measure and how to measure it. When we hear something we cannot measure, we are not doing the right measurements. If we just listen, redesign, then repeat, we may arrive at a solution that just masks the artifact with another, less-objectionable artifact. But if we focus on eliminating every artifact that we can measure, we can quickly converge on a solution that approaches sonic transparency. If we can measure an artifact, we don't try to determine whether it’s low enough to be inaudible; we simply try to eliminate it.

Can you provide an example from your own work as to how listening revealed something the tests did not and how you went about discovering (with tests) what it was and how you fixed it?
One of the most elusive artifacts is the inter-sample peaks that exceed 0dBFS. These peaks can reach +3dBFS and can cause DSP overloads in sigma-delta converters and in sample-rate converters. The result is best described as an artificial snare or hi-hat sound that is added to the music. It also tends to add an artificial brightness to the apparent frequency response. In our listening tests, sample-rate converters with THD plus noise better than –135dB (0.000018%) had an audible impact on the sound and we could not explain this with any of the conventional audio measurements. We can't hear distortion at –135dB, but we were hearing something! Eventually we discovered that inter-sample peaks could overload fixed-point DSP processing when interpolating in a reconstruction filter, and the resulting THD was very high—several percent. Once we identified the root cause, the test was easy to perform. We now have an 11.025kHz test signal that contains a clean tone at +3dBFS. Our DAC2 and DAC3 converters will pass this tone without distortion. Virtually all other D/A converters will distort. This artifact is probably the primary audible difference between PCM and DSD and also probably one of the most significant differences between oversampled and non-oversampled converters. By the way, MP3 is particularly plagued with inter-sample peaks that exceed 0dBFS. MP3 sounds terrible on our DAC1 and other DACs, but somewhat tolerable on our DAC2 and DAC3. [https://benchmarkmedia.com/blogs/application_notes/intersample-overs-in-cd-recordings]
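Siau’s fs/4 test case is easy to reproduce numerically. The sketch below (my illustration, not Benchmark’s actual test procedure) builds an fs/4 tone with a 45-degree phase offset, so every sample lands at ±0.707 of the true peak; normalized so the samples just touch 0dBFS, bandlimited reconstruction then reveals inter-sample peaks of about +3dBFS. The 8x FFT-zero-padding upsampler stands in for the reconstruction filter of an oversampled DAC:

```python
import numpy as np

fs = 44100
n = 1024
t = np.arange(n)

# fs/4 tone (11.025kHz at 44.1kHz) with a 45-degree phase offset:
# every sample lands at +/-0.7071 of the continuous waveform's peak
x = np.sin(2 * np.pi * 0.25 * t + np.pi / 4)
x /= np.max(np.abs(x))          # normalize sample peaks to exactly 0 dBFS

# 8x bandlimited upsampling via FFT zero-padding approximates the
# reconstruction (interpolation) filter inside an oversampled DAC
L = 8
X = np.fft.rfft(x)
Xup = np.zeros(n * L // 2 + 1, dtype=complex)
Xup[:len(X)] = X
xup = np.fft.irfft(Xup, n * L) * L   # rescale for the longer transform

peak_db = 20 * np.log10(np.max(np.abs(xup)))
print(f"inter-sample peak: {peak_db:+.2f} dBFS")   # ~ +3.01 dBFS
```

No sample in `x` ever exceeds 0dBFS, yet the interpolated waveform peaks at +3dBFS; fixed-point interpolation headroom of less than 3dB would clip here, which is the overload Siau describes.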

Your DACs have only recently offered DSD conversion, and that only at 64, and you haven’t embraced MQA at all.
The direct DSD64 function is provided for those who are convinced this path is mathematically cleaner. But there is no loss converting DSD to PCM if the intersample overload problem is eliminated, which we have accomplished. Given that, DSD is a step backward. As for MQA compression, it has a tendency to time-shift transients to the nearest sample instead of rendering these transients with the proper timing. The effect is audible, and I do not like it. From a streaming bandwidth standpoint, MQA offers no advantages. There are lossless schemes that can achieve the same bit rate.

What is your reply to those who say that, good as your products are, they represent overkill? Specifically, if noise is already below, let alone substantially below, the threshold of audibility, what’s the point of pushing it lower?
If two devices have identical signal-to-noise ratio, their combined noise will degrade the SNR by 3dB. If one device is 6dB quieter than the other, the resulting SNR will be 1dB worse than the noisier device, and so on and so forth. Our goal is to build products that produce no noise or distortion exceeding 0dB sound-pressure level in a typical playback system. By this I mean that the electronically produced noise and distortion would be absolutely inaudible even if the noise and distortion could be played while the music was muted. This may seem like overkill, but it is an achievable goal with today's technology. Over the years, the audio industry has constantly been surprised as we have identified audible defects that were produced by systems that should have been "good enough".
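The arithmetic behind those figures is just noise powers adding. A minimal sketch (the function name is mine):

```python
import math

def combined_snr_db(snr1_db, snr2_db):
    """Uncorrelated noise powers add: convert each SNR to a relative
    noise power, sum, and convert back to dB."""
    noise_power = 10 ** (-snr1_db / 10) + 10 ** (-snr2_db / 10)
    return -10 * math.log10(noise_power)

print(combined_snr_db(120, 120))  # two identical devices: ~117 dB, a 3 dB hit
print(combined_snr_db(120, 126))  # one 6 dB quieter: ~119 dB, ~1 dB worse
```

This is why a component that is merely “as quiet as” its neighbors still degrades the system, and why Benchmark pushes each stage well below the threshold.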

You also seem obsessed with preserving correct phase response, inasmuch as the bandwidth of the new preamplifier goes from –3dB at 0.01Hz to flat out beyond half a million Hertz!
Phase accuracy requires a greatly extended frequency response. The phase error at the –3dB point is 45 degrees, which is huge. For example, if a system is –3dB at 20Hz, 20Hz tones will begin and end 6 milliseconds too late. This shifting in time is equivalent to a 6-foot change in the position of the low-frequency driver. This blurs the depth of the soundstage, and the late bass tends to mask the high-frequency reverb tails. At high frequencies, the left/right matching of time delays is vitally important for creating a 3D image with two speakers. Phase errors will push the high frequencies toward the speakers and away from the actual location of the instrument within the soundstage. Wider bandwidths make L/R phase matching much easier. If phase accuracy is not maintained, it is impossible to create a 3D holographic image of the performance.

Do you worry about ultrasonic electronic noise and RFI getting into the signal path?
This will not happen unless an upstream component is producing the noise, though it could happen if a D/A converter is lacking the required 50kHz low-pass filter. It could also happen if a PCM D/A converter does not have a properly designed reconstruction filter. A good DAC is essential in any high-end system.

Many audiophiles of my generation are suspicious of ultra-low noise and distortion figures such as yours because they associate them with those solid-state amplifiers from the seventies that got great specs from troweling on piles and piles of negative feedback. Yet they still had poor sound. Why are yours different? 
That’s a great question. These early transistor designs were thermally unstable and had too little phase margin in the feedback loop. The thermal instability was a direct consequence of inadequate thermal coupling between the output stage and the biasing circuit. The inadequate phase margin resulted directly from too much feedback given the limited gain-bandwidth product of early power transistors. The excessive use of feedback was effective in partially reducing the crossover distortion that was caused by the poor biasing, but only for low-frequency test tones. In these early designs, total harmonic distortion rose quickly with increasing frequency, but published THD tests were conducted only at 1kHz. This excessive use of feedback created amplifiers that measured well with high-amplitude 1kHz tones, creating great-looking spec sheets. At one watt or lower, and at higher frequencies, they would measure poorly. All of these defects would have been revealed with an intermodulation distortion test, but these tests were not done in the 1970s. [https://benchmarkmedia.com/blogs/application_notes/interpreting-thd-measurements-think-db-not-percent]

But despite the copious test results in your specifications, I don’t see IMD figures.
Until recently we were using AP2700 and AP2500 test sets, and these boxes had more IMD than our products, so the graphs and numbers were somewhat meaningless (other than showing that we were at the residual of the test equipment). We now have an APx555b test set, and we will be publishing IMD tests soon. Independent IMD tests have already been conducted and published by Audio Science Review and Stereophole. The results are available for viewing, but obviously they cannot be republished because we do not own the copyrights. We will conduct our own tests and publish the results. By the way, low IMD is important in any wide-band system. IMD will fold ultrasonic noise down into the audible band. The LA4 and AHB2 have very low IMD and are stable when reproducing ultrasonic frequencies.
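The fold-down mechanism Siau mentions can be shown with a toy model (my illustration, not a model of any actual device): pass two inaudible ultrasonic tones through a mild second-order nonlinearity and a difference tone appears squarely in the audible band.

```python
import numpy as np

fs, n = 192000, 19200                 # 0.1 s of audio -> exact 10 Hz FFT bins
t = np.arange(n) / fs
f1, f2 = 30000.0, 31000.0             # two ultrasonic tones, both inaudible
x = 0.5 * np.sin(2 * np.pi * f1 * t) + 0.5 * np.sin(2 * np.pi * f2 * t)

y = x + 0.01 * x**2                   # mild 2nd-order nonlinearity (IMD source)

spec = np.abs(np.fft.rfft(y)) / (n / 2)        # amplitude spectrum
freqs = np.fft.rfftfreq(n, 1 / fs)
tone = spec[np.argmin(np.abs(freqs - 1000))]   # difference tone f2 - f1 = 1 kHz
print(f"1 kHz IMD product: {20 * np.log10(tone / 0.5):.1f} dB re each tone")
```

Neither input tone is in the audio band, yet the nonlinearity generates a 1kHz product about 46dB below the tones; this is why low IMD and stability at ultrasonic frequencies matter in a wide-band amplifier.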

As I’m sure you know, many audio consumers evaluate as much with their eyes as their ears. Benchmark products are so diminutive and they’re all self-contained, without the massive outboard power supplies of some of your competitors. You’re also rather skeptical of the ability of filters, conditioners, and power regenerators to improve the performance of your products. 
It is disturbing that expensive audio equipment needs any of these devices to prevent interference. If line-level audio devices are properly designed, they should be immune to fluctuations in the AC line and to conducted interference. While an active or passive line filter may correct problems that are caused by poor power-supply designs, good equipment will run perfectly on noisy AC power. We test for susceptibility before we release any new product, and we design our circuits to have a high power-supply rejection ratio, so they’re immune to conducted interference. We also design our power supplies so they don’t emit strong line-frequency magnetic fields. And we design our circuit boards so that sensitive signals are distributed with a star-quad geometry, which provides magnetic immunity. A spectrum analysis of the LA4, HPA4, DAC3, or AHB2 proves that line interference can be completely eliminated while still using internal power supplies. The AHB2 is a remarkable example of the effectiveness of these techniques given the power density of this linear amplifier. We use a switch-mode AC-to-DC power supply because of the huge reduction in magnetic emissions that can be achieved when large line-frequency transformers are replaced with small high-frequency transformers. The much weaker magnetic fields are also well above the audio band, and any interference can be filtered out without harming the audio. The AHB2 delivers a 500kHz bandwidth that has no traces of power-supply interference. There is also no significant power-supply-related noise above 500kHz in the audio output. The 135dB SNR of the AHB2 could not have been achieved with a linear power supply unless the power supply was housed in a separate box that was separated from the amplifier by about two feet. Though I realize that most of the industry is still fixated on linear power supplies, in my opinion they have no place in true high-end audio equipment.
By comparison, every Benchmark product can be stacked on or placed right beside another Benchmark without worries about magnetic interference.

The words “neutral” and “transparent” appear so often in Benchmark’s literature they seem almost like mantras.
I’m a bit of a purist when it comes to adding sonic effects in the playback system. There is no question that tone controls, non-linear filters, reverb, polarity, and other effects can sometimes improve the sound of a recording, but the resulting sound differs from what the artist intended. We wouldn't dream of color-enhancing a photograph or print of a famous painting. Why do we look to colorize the sound of well-known recordings? We need to relax and enjoy the music as it was recorded. Colorizing switches and controls detract from that experience.