The Future of Audio: Integrated Systems?


This article is intended to be a bit different from most of what we publish in TAS. I want to make the argument that high-end audio may be in the midst of a slow but powerfully important transition to a new system architecture. This new architecture is necessary to take significant further steps toward musical enjoyment. Those steps may be big enough that the new architecture becomes dominant. Or the new architecture may simply create an alternative that works for some listeners and not for others. Either way, I think this new architecture could be hugely significant. 

When I say “architecture,” I’m referring to the conceptual layout of the things we buy to reproduce music. One way to think about this is to observe that high-end audio has, since its inception in the 1950s, had one basic physical architecture: recording:source>preamp>amp>speakers/room:listener. Cables were used at each point marked by “>.” There was no feedback around the entire chain or between the elements. Sources evolved from turntables to compact disc players to computers. With that evolution, the recording medium went from vinyl to CD to hard disc and streaming. That is, while technologies changed (e.g. tubes to transistors or CDs to hard discs and streaming), the architecture really didn’t.

The physical architecture, though, is secondary to the logic behind it. The essential, almost completely dominant, logical architecture for high-end audio, from about 1955 to 2015, was the idea of modular, component-based systems. You could swap preamps or amps or speakers from any of dozens of manufacturers in a system, because the building blocks and interfaces were standardized. At the same time, modularity meant that the budget consumed for each improvement was limited to the cost of a new component, not the cost of an entire system. This approach worked well, to a point. The hoped-for benefit was that evolving technological refinement at the component level would lead the user up a stepwise path toward higher and higher musical performance. 

The standard high-end audio architecture was chosen to a large degree on practical grounds, which helps in understanding why there might be substantial performance left on the table. Especially since a crucial rule is this: structure begets performance. It takes little more than a glance at a Dachshund, a Basset Hound, and a Corgi on the one hand, and a Greyhound, a Saluki, and a Dalmatian on the other, to see that the latter three are much faster than the former three. The structure of long legs and a relatively high power:weight ratio is faster than the structure of short legs and a relatively low one. 

Because structure begets performance, audiophiles have mainly derived their performance goals from the architecture. Not directly or knowingly, perhaps, but the performance goals of audiophiles, and of the industry serving them, were naturally set and constrained by what was achievable within the architecture. Another way of saying this is that audiophiles and the audio industry, for the most part, didn't lay out goals first (e.g., musical realism) and then pursue the architectures that would deliver them. So we really aren't pursuing a process designed to achieve the end of musical realism. We want musical realism, but we have an architecture that is capable of only partial success.

To understand the impact of the prevailing architecture, observe that the traditional modular-component architecture has adopted the main performance goal of what we might call “intra-component high definition.” High definition can be seen in other concepts with which audiophiles will be familiar: resolution and depth. Related to this is the idea of “low distortion.” Importantly, these ideas are generally applied at the component level (hence the “intra-component”). 

These concepts—intra-component high definition and low distortion—are good and valuable parameters of audio performance. And the industry has made real progress on these fronts. It can be hard to remember that audio amplifiers were invented well after the automobile (1912 vs. 1885). And important basic concepts like negative feedback (1934) and bipolar transistors (1950) are also comparatively recent—the internal combustion engine dates from 1879 or earlier. As a mark of continued progress on the high-definition and low-distortion fronts, almost no one would trade a good 2019 system for a good 1989 system.

At the same time, intra-component high definition and low distortion are not the only audio parameters that matter. The standard architecture assumes that errors introduced between, say, the performer and the master tape are minimal, consistent, and irrelevant. But are they? The standard architecture assumes that errors introduced at each stage in the reproduction chain are minimal and do not compound. But are they? The standard architecture assumes that errors introduced by equipment placement and room characteristics are minor and irrelevant. But are they? 

Those who have been around audio a long time might observe that the goals actually addressed by the standard architecture are, to a degree, byproducts of history, driven by the analog-signal era, when signal degradation between stages of amplification was a difficult and important issue. That issue doesn't really disappear in the so-called digital era, but over the course of decades the concern behind it has come to dominate our thinking about where the audio frontier lies. Intra-component distortion was a problem; we internalized that it was a problem; and as we reached the asymptotic part of the progress curve, we kept going back to low intra-component distortion as the goal, either because we didn't know what else to do or because the architecture worked against doing anything else. This is now a problem. The problem, I would say.