Master Quality Authenticated (MQA): The View From 30,000 Feet
[Editor’s Note: The May/June issue of The Absolute Sound, which mails to subscribers on March 30, includes a comprehensive evaluation of Master Quality Authenticated (MQA), a new digital technology that delivers better-than-high-resolution sound quality at a bit rate comparable to that of CD. My evaluation was informed by listening to hundreds of MQA-encoded tracks, decoded by the Meridian 808v6 CD player/DAC, over several weeks in my own reference system.
As part of that package of MQA coverage, which includes an explanation of the technology, an FAQ about MQA, an examination of the claim that MQA can sound better than the original master, and reviews of the first MQA-capable decoders (the Meridian 808v6 CD player/DAC and Meridian Explorer2 DAC), I included the following editorial, entitled “MQA: The View From 30,000 Feet.”]
In this issue’s cover story I explain some of the technology behind Master Quality Authenticated (MQA) and describe how it sounds. But I’d like to use this space to step back and take a larger view of the history of digitally recorded music, audio technology, and how MQA fits into that historical context.
To recap, MQA is a technology that simultaneously improves digital sound quality while dramatically lowering the bit-rate. It’s an encode-decode system, meaning that for maximum fidelity the music must be encoded with MQA, and played back through a device with MQA decoding. MQA is, however, backward compatible with all existing distribution channels and playback hardware. If you don’t have an MQA decoder, you get slightly-better-than-CD sound. If you have an MQA DAC, the file “unfolds” into the high-resolution signal.
It’s not quite accurate to call MQA a “technology” because it’s more than just a set of hardware and software techniques. Rather, MQA is a nearly-ground-up rethinking of how to best deliver to the listener as close a facsimile as possible of the original musical event. MQA starts with the analog signal in the studio and ends with the analog signal on playback. It ties together every element in that chain into essentially a single analog-to-analog system.
Let’s look at a brief history of digital audio and how that development path led us to the current state of affairs. In the late 1970s, the first digital recorders were commercially introduced, and became ubiquitous a decade later. These machines, based on pulse-code modulation and operating at 44.1kHz or 48kHz sampling and 16-bit quantization, were quite crude by today’s standards. Nonetheless, digital recorders quickly replaced analog tape machines in the studio. The compact disc, with its 44.1kHz sample rate and 16-bit word length, became the standard for distributing digital audio to consumers. As we all know, the CD took over the world beginning in the mid-1980s.
This switch from purely analog technology to digital had its advantages, but also some significant drawbacks. Once the signal was in the digital domain it could be copied, transmitted, and manipulated with no loss of sound quality. But the penalty for that convenience and power was paid at the interfaces between the analog and digital worlds, specifically the analog-to-digital converter used to make the recording and the digital-to-analog converter that transformed a series of numbers back into music. These two ends of the chain exacted a significant sonic penalty, in part because of the steep low-pass filters required to make digital audio work. The design of these filters came out of the sampling theory first developed in 1928 by Harry Nyquist and advanced in the late 1940s by Claude Shannon.
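Why are those steep low-pass filters required at all? Any signal component above half the sample rate "folds back" into the audio band as a spurious lower frequency, a distortion called aliasing. The sketch below (plain Python, illustrative only) shows that a 30kHz tone sampled at the CD rate of 44.1kHz produces exactly the same samples as a 14.1kHz tone (with inverted phase) — which is why the converter must filter out everything above the Nyquist frequency before sampling.

```python
import math

FS = 44_100   # CD sample rate (Hz); Nyquist limit is FS/2 = 22,050 Hz
N = 64        # number of samples to compare

def sample_tone(freq_hz, n_samples, fs=FS):
    """Return n_samples of a unit-amplitude sine at freq_hz, sampled at fs."""
    return [math.sin(2 * math.pi * freq_hz * n / fs) for n in range(n_samples)]

# A 30 kHz tone lies above the 22.05 kHz Nyquist limit of CD sampling...
above_nyquist = sample_tone(30_000, N)

# ...so it aliases down to |30,000 - 44,100| = 14,100 Hz, with inverted phase.
alias = sample_tone(14_100, N)

# Sample for sample, the two tones are indistinguishable (up to sign):
max_error = max(abs(a + b) for a, b in zip(above_nyquist, alias))
print(f"max difference between 30 kHz tone and its 14.1 kHz alias: {max_error:.2e}")
```

Once sampled, nothing downstream can tell the two tones apart — hence the "brickwall" anti-aliasing filter, and the sonic price it exacts.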
Working within the constraints of the so-called “Nyquist-Shannon” sampling criterion, digital audio improved over the past 30 years with higher sampling rates, greater bit depth, lower jitter, and myriad other techniques that realized significantly better sound. In the mid-1990s, recording professionals began using 96kHz/24-bit recorders, which allowed them to make better-sounding 44.1kHz/16-bit compact discs. Nonetheless, the consumer experience was limited to 44.1kHz/16-bit quality. In the late 1990s, two attempts to move beyond the CD, Super Audio CD (SACD) and DVD-Audio, essentially failed in the wider marketplace (DVD-Audio spectacularly so). More recently, the music-only version of Blu-ray Disc has been met with a tepid response.
And then came the Internet, and with it the ability to distribute digitally encoded music without the need for physical formats. It’s impossible to overstate the significance of this development. Physical formats are massively difficult to develop and launch, technically, politically, and commercially. But the Internet allowed music labels (the “content providers” in industry parlance) to distribute high-bit-rate music to consumers in the form of downloads without the constraints imposed by a new physical format.
That development was both a blessing and a curse. The blessing was that here was a cheap and easy way to deliver to consumers the best-available representation of a recording. The curse was that the record companies were delivering to consumers the best-available representation of a recording—a recording that could easily be copied, shared, and even pirated for profit. The record labels’ opening of their vaults by selling high-bit-rate downloads would be tantamount to throwing open the doors to an unguarded shopping mall. Once their catalogs were out in the world, the record companies would have nothing left to sell.
The other problem with “high-resolution” digital audio is that it didn’t really solve the fundamental problem of why digital sounds the way it does—flat, congested, hard, and glassy. Digital audio requires low-pass “brickwall” filters to prevent a type of distortion called “aliasing.” But these filters introduce ringing, or a smearing of musical signals over time. Despite attempts to minimize this distortion through faster and faster sample rates (the filters for which are less sonically detrimental), digital audio was constrained by the very fundamentals of the sampling theorem codified more than fifty years ago.
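The ringing described above can be seen directly in the mathematics: the impulse response of an ideal brickwall low-pass filter is a sinc function, which oscillates both after and, crucially, *before* the impulse itself. A minimal stdlib-Python sketch of a half-band example:

```python
import math

def sinc(x):
    """Normalized sinc, sin(pi*x)/(pi*x): the impulse-response shape of an
    ideal 'brickwall' low-pass filter."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

# Impulse response of an ideal half-band (cutoff = fs/4) brickwall filter,
# sampled at integer instants n and centered on the impulse at n = 0.
ns = list(range(-20, 21))
taps = [sinc(n / 2) for n in ns]

# The response "rings" on BOTH sides of the impulse: energy appears before
# the event itself (pre-ringing), smearing transients in time.
pre_ringing = [t for n, t in zip(ns, taps) if n < 0 and abs(t) > 0.01]
print(f"{len(pre_ringing)} taps before the impulse exceed 1% of peak")
```

That pre-ringing — filter output arriving before the musical event that caused it — is the "temporal blur" MQA's designers set out to reduce.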
So-called “high-resolution” downloads also exact a price in massive file sizes. Increasing the sampling rate reduces, but doesn’t eliminate, the flaws built into the very foundations of digital audio as it has been implemented. Moreover, very fast sampling is preposterously wasteful; most of those additional bits carry no real information whatsoever. Consider that a 192kHz/24-bit system allocates enough bits to encode a 90kHz sinewave at full-scale amplitude, a signal that wouldn’t even come close to existing in the real world. High sample rates create a massive container for the music (a 96/24 or 192/24 file) that is largely wasted bits. It’s like shipping a paperback book in a box the size of a filing cabinet. Moreover, obtaining these files, and playing them back correctly, requires specialized computer expertise, making them accessible only to the committed.
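The arithmetic behind the "filing cabinet" comparison is simple to check. For uncompressed stereo PCM, the bit rate is just sample rate × word length × channels:

```python
# Uncompressed stereo-PCM bit-rate and file-size arithmetic.
def pcm_bits_per_second(sample_rate_hz, bits_per_sample, channels=2):
    return sample_rate_hz * bits_per_sample * channels

cd    = pcm_bits_per_second(44_100, 16)    # Red Book CD
hires = pcm_bits_per_second(192_000, 24)   # 192 kHz / 24-bit download

print(f"CD:     {cd / 1e6:.2f} Mbps")      # 1.41 Mbps
print(f"192/24: {hires / 1e6:.2f} Mbps")   # 9.22 Mbps
print(f"ratio:  {hires / cd:.1f}x")        # 6.5x

# A four-minute track, uncompressed:
seconds = 4 * 60
print(f"192/24 track: {hires * seconds / 8 / 1e6:.0f} MB")  # 276 MB
print(f"CD track:     {cd * seconds / 8 / 1e6:.0f} MB")     # 42 MB
```

Lossless compression such as FLAC roughly halves these figures, but the six-and-a-half-fold gap between CD and 192/24 remains.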
To summarize, the audible degradation in digital audio is largely caused by filters. The industry then tries to minimize that degradation with very fast sampling, which permits gentler filters but creates massive files that are inaccessible to the vast majority of listeners. The record industry is reluctant to release their fast-sampling files for fear that they will eventually have nothing to sell. Consumers who want better-than-CD quality must master computer technology, limiting the widespread accessibility of better sound. Even then, the library of available music is limited, and still doesn’t represent the sound in the recording studio. To top it off, the consumer never really knows if the file he’s playing back is the same as that created by the artist and engineer. And those enormous files can’t be streamed, and won’t play in portable applications.
In short, the technology is broken. The business model is broken. The artist is unable to deliver to fans the best possible representation of his or her work. The consumer is denied the best possible listening experience.
We ended up in this predicament because each improvement in digital audio was merely an incremental evolution of conventional ideas and models. No one had gone back to first principles and rethought how best to record and distribute music.
Against this backdrop, Master Quality Authenticated emerged. In a single stroke, MQA solves all these problems, from the technical, to the business model, to the sound quality, to the easy accessibility of that sound quality, and to the communication between artist and listener.
How does it do this? For starters, it turns out that the “laws” of the Nyquist-Shannon sampling theorem—which have dictated the design of brickwall filters since digital audio’s inception—are not quite ironclad. Since Shannon, sampling theory has advanced considerably, driven by research in other fields, such as medical imaging and astronomy, that face challenges parallel to those of audio. Also, sounds that are important to humans have very specific statistics, including a 1/f tendency: the power spectral density is inversely proportional to frequency, so the higher frequencies carry less energy. This arises in part from how sound behaves in air, and it is a factor not considered by the classic sampling criteria.
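The 1/f tendency has a neat consequence worth making concrete. If power spectral density falls as 1/f, then the energy in any band is proportional to the integral of df/f = ln(f2/f1), so every octave carries the same energy, and the whole ultrasonic region is far emptier than its share of the sample-rate "container" suggests. A short illustrative calculation (an idealized pure-1/f model, not a claim about any particular recording):

```python
import math

# For a 1/f power spectral density, the energy between f1 and f2 is
# proportional to the integral of df/f, which is ln(f2/f1).
def band_energy(f1, f2):
    return math.log(f2 / f1)

# Every octave carries the same energy under this model:
assert abs(band_energy(100, 200) - band_energy(10_000, 20_000)) < 1e-12

# Compare the audible band against the ultrasonic band a 192 kHz
# system reserves (20 kHz up to the 96 kHz Nyquist limit):
audible    = band_energy(20, 20_000)       # ln(1000) ≈ 6.91
ultrasonic = band_energy(20_000, 96_000)   # ln(4.8)  ≈ 1.57
print(f"ultrasonic share: {ultrasonic / (audible + ultrasonic):.0%}")
```

And real musical spectra fall off faster than 1/f at the top of the band, so the ultrasonic octaves are sparser still — statistics an encoder can exploit, as the classic sampling criteria cannot.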
Consequently, Nyquist-Shannon isn’t the limiting factor it once was, but it took some minds from the audio world (MQA inventors Bob Stuart and Peter Craven) to recognize that fact and apply these advanced new techniques to music reproduction. MQA incorporates this latest sophisticated thinking into a different sampling design, reducing the filters’ “temporal blur” and with it the degradation that has plagued digital audio since its inception.
In addition to delivering unprecedented sound quality, MQA offers record companies a compelling solution to delivering to consumers the best possible sound while still protecting their archives. When you play an MQA file through an MQA decoder, you hear the high-resolution studio master, yet you never actually possess the high-resolution studio master. That high-resolution signal exists only at the decoder output, in analog form, where it very closely matches the analog signal in the studio. Of course, you can store an MQA-encoded file (it’s formatted as a 44.1 or 48kHz/24-bit FLAC file) with all the high-resolution information embedded in it, but to access that hi-res information you must play it back. It must be noted here that MQA has no form of copy protection or digital-rights management (DRM) whatsoever. Contrary to what some Internet posters think, MQA is not an evil scheme to institute DRM.
A much more efficient coding technique captures all the musical information while not trying to encode signals that don’t exist in the real world. This approach results in much smaller file sizes with no loss in sound quality. In addition, a clever technique encapsulates the high-resolution portion of the signal and hides it under the noise floor. This information “unfolds” on playback, with awareness of the playback platform, into the signal’s original resolution, all the way up to 352kHz/24-bit.
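The general idea of burying extra information beneath the noise floor of an ordinary file can be illustrated with a toy sketch. To be clear, this is *not* MQA's actual algorithm — MQA's encoding is far more sophisticated — merely a demonstration of the principle that the least-significant bits of a 24-bit sample sit at or below the noise floor and can carry a hidden payload that a legacy DAC simply plays as inaudible noise:

```python
# Toy illustration only, NOT MQA's actual scheme: pack hidden payload bits
# into the least-significant bits of 24-bit PCM samples, where they sit
# at or below the noise floor of the recording.

HIDDEN_BITS = 4  # payload bits per 24-bit sample (an assumption for this toy)

def embed(sample_24bit, payload):
    """Replace the lowest HIDDEN_BITS of a sample with payload bits."""
    assert 0 <= payload < (1 << HIDDEN_BITS)
    return (sample_24bit & ~((1 << HIDDEN_BITS) - 1)) | payload

def extract(sample_24bit):
    """A decoder that knows the scheme recovers the buried payload."""
    return sample_24bit & ((1 << HIDDEN_BITS) - 1)

sample  = 0x123456            # one 24-bit PCM sample
carrier = embed(sample, 0b1011)

# A legacy DAC just plays `carrier`: the top 20 bits are untouched, so the
# difference from the original sample is down at the noise floor.
assert carrier >> HIDDEN_BITS == sample >> HIDDEN_BITS

# An aware decoder recovers the hidden bits and can "unfold" them:
print(f"recovered payload: {extract(carrier):04b}")   # 1011
```

In the real system the buried information encodes the ultrasonic portion of the signal, and the decoder reconstructs the full-bandwidth master from it; a non-decoding player simply sees a legitimate 44.1 or 48kHz/24-bit FLAC file.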
Another benefit to record companies and consumers is that one MQA file serves all listeners, and will play anywhere. Today, record companies must create and offer MP3, AAC, Red Book, 96/24, and many other versions of the same music. The same MQA file will go to everyone.
Finally, MQA provides a direct link between artist and listener in the form of the authentication feature—the light on the decoder that confirms that the file being decoded is the file created in the studio. The mastering engineer can monitor the signal through the entire encode-decode chain, and hear exactly what the listener will hear. Conversely, the listener hears exactly what the engineer created.
The surprising advances and innovative thinking that MQA has introduced will forever change the way we and future generations consider digital audio, even if MQA never becomes a large-scale commercial reality. But I’m betting that it will.
By Robert Harley
My older brother Stephen introduced me to music when I was about 12 years old. Stephen was a prodigious musical talent (he went on to get a degree in Composition) who generously shared his records and passion for music with his little brother.