Although we’re accustomed to thinking of the science of electricity as having a long history, the fact is that what we now know as electronics didn’t really exist until 1877, when Alexander Graham Bell and his partners established the world’s first telephone company.
Yes, certainly, everybody knows that more than a century before that, in 1752, Ben Franklin flew a kite in a lightning storm to prove (alas, some weeks after the French experimenter Thomas-François Dalibard had beaten him to the punch) that lightning and electricity were the same thing. Some even believe there is evidence, from two millennia before that, that the Mesopotamians and ancient Egyptians may have had galvanic cells (batteries). And there’s even one source (http://www.bibliotecapleyades.net/ciencia/ciencia_hitech02a.htm) that asserts that “The Coso Artifact,” found in the Coso mountains of California in 1961 (and dated by some geologists, based on the surrounding strata, to sometime between 250,000 and 500,000 years ago), may have been a prehistoric (or even pre-human) attempt at the construction of a superconductor.
Whether or not any of these more extreme conjectures have merit, there can be no doubt that electrical experimentation goes back at least as far as 600 BCE, when Thales of Miletus wrote about how amber can be given (what he didn’t then know was) a static charge by rubbing it. It’s similarly certain that in 1600, the English scientist William Gilbert coined the term at the root of “electricity” from the Greek word for amber; that in 1729, Stephen Gray discovered electrical conduction; and that, just a few years later, the Frenchman Charles du Fay discovered what Benjamin Franklin (among others) later renamed “positive” and “negative” electrical charges. All of these great things and more were done, and even greater discoveries were made, by Volta, Ampère, Ohm, Maxwell, Faraday, and a whole host of others over the centuries.

So how can I possibly say that electronics didn’t have its real beginning until that first time, so many years later, when a telephone rang?
It’s simple: until then, with only very limited exceptions, everybody who had worked with or theorized about electricity had been an experimenter, a scientist, or an accidental discoverer, but not a technician, and not trying to actually do anything with it!
Until the telegraph (first commercial use in 1837), even such important discoveries as the Leyden jar (1745) and the electric generator and electric motor (by various inventors in the 1830s and ’40s) were motivated not by commerce, but by curiosity. Nobody actually used electricity for anything. In spite of valiant but failed attempts (such as an electric locomotive in 1851), electricity other than the telegraph was all theory and experimentation, not utilization or commercial development. Even the telegraph itself was so simple in its application, essentially just sending a DC pulse down the line to operate a solenoid, that to call it “electronic” would be to stretch that word beyond its limits.
The Edison incandescent lamp, when it came along just a little later, in 1879, was another application that, like the telegraph, was certainly electrical but could hardly be described as electronic. Initially it involved nothing more than a simple low-voltage direct current (DC). For citywide distribution, though, DC presented a problem of resistive line losses and was soon replaced by the Tesla-developed system of alternating current (AC), which, by using very high transmission voltages at low current (converted by transformers near the destination to lower voltage and higher current), allowed for very long lines with much lower resistive losses.
Really, the only problems that power lines and the telegraph had stemmed from the distances that electricity had to travel for them to operate. That distance could range up to thousands of miles for a transcontinental telegraph system, and might mean, regardless of the gauge of the wire used, quite significant resistance and quite significant resistive losses. For either the power system or the telegraph to work, though, at least as long as the power system remained DC, resistance (plus the occasional “short” or “open” circuit or other physical damage) was the only electrical problem that had to be considered; neither capacitive nor inductive reactance was a factor.
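To get a feel for how quickly resistance piles up over telegraph-scale distances, here is a minimal sketch of the standard R = ρL/A calculation. The wire gauge (12 AWG) and the copper resistivity figure are illustrative assumptions, not values from the article:

```python
# Sketch: why resistance dominated long telegraph and DC power lines.
# The gauge (12 AWG) and resistivity are illustrative assumptions.

COPPER_RESISTIVITY = 1.68e-8  # ohm-metres, near room temperature
AWG12_AREA = 3.31e-6          # cross-section of 12 AWG wire, square metres
METRES_PER_MILE = 1609.344

def wire_resistance_ohms(length_m, area_m2=AWG12_AREA, rho=COPPER_RESISTIVITY):
    """R = rho * L / A for a uniform conductor."""
    return rho * length_m / area_m2

per_mile = wire_resistance_ohms(METRES_PER_MILE)
print(f"12 AWG copper: {per_mile:.1f} ohms per mile")
print(f"2,000-mile run: {wire_resistance_ohms(2000 * METRES_PER_MILE):,.0f} ohms")
```

At roughly 8 ohms per mile, a transcontinental run accumulates on the order of tens of thousands of ohms, which is why gauge alone could never solve the problem.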
This remained substantially the case even after the conversion of the power system from DC to AC because, at the system’s very low fixed operating frequency (50 or 60Hz, depending on specific locale), reactive elements still remained somewhat inconsequential, even for relatively long distances.
For the telegraph, all that was being transmitted was a series of pulses and breaks (a distant cousin of the ones and zeroes of digital); for the power grid (after conversion), it was an essentially constant low-frequency sinewave; but with the telephone, all of that changed. For the first time ever, a length of wire was called upon to carry not just an intermittent current, or even a relatively constant level of single-frequency AC, but a voice or music signal constantly varying in amplitude and covering more than three full octaves of bandwidth!
The frequency range of those earliest telephones (roughly 300Hz to 3.5kHz) was, for its time, a huge accomplishment. (Certainly, the phonograph, which had originated at essentially the same time, had a similar frequency range, but remember that until the 1920s it wasn’t electrical at all, and relied on purely mechanical energies for both recording and playback.)
To put this into perspective: to increase the range of a modern electronic product by the same number of octaves as the telephone company’s new bandwidth did back in the nineteenth century would require extending its upper frequency limit from 20kHz to roughly 233kHz; more than a full order of magnitude, and almost a quarter of a million cycles per second.
Increased frequency range and other performance increases brought with them new problems. Characteristic impedance (Z0), for example, had been both unknown and utterly irrelevant before the telephone company took over the neighborhood’s communications. With the new higher frequencies, and with even a “short” telephone line now likely to be miles long, it suddenly became a real problem: impedance mismatches and reflections, governed by the wavelengths of the signal and the finite length of the wire, could create “drop-outs,” standing waves, and other significant transmission problems.
The result was that the telephone company, and specifically Bell Laboratories and its manufacturing facility, Western Electric, were forced, for their own survival and to keep the nation’s telephone network growing, to invent what we now call “Electrical Engineering.”
Drawing heavily on all of the experimental learning of the past, coupled with new knowledge gained from current problems and the solutions that telephone scientists had been able to find for them, Electrical Engineering was quickly adopted by the country’s universities and offered to an eager student body as the career of the future, especially in that booming time of newly electrified and telephone-equipped homes and a public hugely impressed by the fruits of science and clamoring to buy more of them.
What the universities didn’t tell anybody, though, was that what they were teaching was not some new engineering knowledge acquired by divine revelation or developed in a vacuum, but, specifically, telephone company electrical engineering, developed to solve telephone company problems.
And that meant that when it referred to wires, it tended to mean long wires.
When the telephone company thinks of wire, the odds are overwhelming that it will be thinking at least in hundreds of meters, if not hundreds of miles. High-end audio designers, on the other hand—and I want to emphasize the word “audio” here, and remind readers that this refers to frequencies well below the RF range—are more likely to think in lengths of a few inches to a few feet. To them, a whole hundred feet is as huge a number as it was a tiny one to the men of the telephone company back in the days when electrical engineering was first being formulated. And that may be a source of problems.
Regardless of what you may have heard, in at least some things, size does matter. That’s because, in those things, more or less is not just more or less of the same thing, but actually becomes something different!
Diamonds, for example, are lovely for jewelry, for cutting glass, or, in industrial applications, for cutting almost anything else, but when the diamond is not a stone but a thin film, it actually reverses its nature in at least two important respects. While diamond is normally extremely rigid, as evidenced by, among other things, Dynavector’s use of it as the cantilever material for some of its phono cartridges, thin-film diamond is exactly the opposite, and demonstrates an admirable degree of flexibility. Also, while diamonds as we normally think of them are a good (but obviously expensive) electrical insulator, thin films of diamond can be made to be excellent electrical conductors, and thin-film-diamond-coated electrodes are becoming increasingly important in a number of industrial and even medical applications (http://www.qmed.com/mpmn/article/bright-future-ahead-flexible-diamond-coated-electrode).
Another material that changes its characteristics substantially with size is copper oxide. If you’ve ever seen a penny that has gotten wet and turned green, what has happened is that the copper in that penny, normally a conductor second only to silver, has grown itself a quite effective insulating jacket: thick layers of corrosion are non-conductive, and a green penny makes a fine insulator. Very thin films of copper oxide (chiefly cuprous oxide, Cu2O, the material used in early copper-oxide rectifiers), on the other hand, are not insulators at all, but semiconductors. These are the films covering virtually every piece of unprotected copper that has ever, even briefly, been exposed to the atmosphere, and they may be found in the junctures between the crystals of most copper wire: whole gangs of tiny diodes that do affect the passage of signal through the wire. This is especially true of stranded wires, where electrons, always seeking the straightest path, will jump from strand to strand, passing through two such tiny diode junctions with every jump. (Do you remember what two diodes make? A rectifier, which does exactly the same kind of rectification in a wire as it would do in the power supply section of any piece of electronic equipment.) For the wires in a hi-fi system, the result can be a quite noticeable loss of detail.
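Whatever one makes of the inter-strand diode claim, the nonlinearity of a diode junction itself is easy to sketch with the textbook Shockley equation. The saturation current and ideality factor below are illustrative assumptions, not measured values for any oxide film:

```python
import math

# Shockley diode equation: I = Is * (exp(V / (n * Vt)) - 1).
# Is and n are illustrative assumptions; the point is only that current
# is a strongly nonlinear function of voltage, so a junction in the
# signal path distorts rather than merely attenuates.
I_S = 1e-12    # saturation current, amperes (assumed)
N = 1.0        # ideality factor (assumed)
V_T = 0.02585  # thermal voltage near 300 K, volts

def diode_current(v):
    return I_S * (math.exp(v / (N * V_T)) - 1.0)

# Doubling the applied voltage far more than doubles the current:
print(diode_current(0.3) / diode_current(0.15))
```

A resistor, by contrast, would give a ratio of exactly 2; the enormous ratio here is what “nonlinear” means in practice.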
Like those other things, wires (or cables, if we’re talking not about single conductors but about integrated systems for carrying signal) are significantly different in long and short lengths. Certainly, as their length increases, so, proportionally, do their resistance (R), capacitance (C), and inductance (L), but so, also, does (here it comes again!) the problem of characteristic impedance. That’s not because different lengths of the same wire have a different Z0; all lengths of the same wire have exactly the same characteristic impedance (that’s why it’s said to be “characteristic”). Instead, it’s because the problematic effects of characteristic impedance are frequency-related, becoming (after a quite considerable “lag” range encompassing virtually all audio frequencies) more and more important as the signal’s wavelength approaches, equals, and eventually becomes smaller than the length of the cable.
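The point that R, L, and C scale with length while Z0 does not can be shown with the lossless approximation Z0 = sqrt(L/C). The per-metre inductance and capacitance below are illustrative assumptions, roughly typical of a coaxial cable, not measurements of any real product:

```python
import math

# R, L, and C of a cable all scale with its length, but in the ratio
# L/C the length cancels, so every length of the same cable shares one
# characteristic impedance. Per-metre values are illustrative assumptions.
L_PER_M = 250e-9   # henries per metre (assumed)
C_PER_M = 100e-12  # farads per metre (assumed)

for length_m in (1, 10, 1000):
    total_L = L_PER_M * length_m
    total_C = C_PER_M * length_m
    z0 = math.sqrt(total_L / total_C)  # identical for every length
    print(f"{length_m:>5} m: L={total_L:.3e} H, C={total_C:.3e} F, Z0={z0:.0f} ohms")
```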
Just as an example, the nominal wavelength of a 20kHz electrical signal, assuming, for convenience, that it propagates in the cable at 100% of the speed of light (300,000,000 meters/second), is 15,000 meters (300,000,000 ÷ 20,000 = 15,000). Obviously no audio cable in a hi-fi system is ever going to get anywhere near that length, but 15,000 meters equals 49,212 feet, or 9.32 miles, which is no big deal at all for a telephone line. And even at the frequency limit of the earliest telephones (around 3.5kHz), the wavelength is still only 53.26 miles, so it’s easy to see why, as the system grew, Z0 became a killer problem for the telephone company and thus a crucial consideration for electrical engineers.
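The same wavelength arithmetic, in a few lines. As in the text, this assumes propagation at the full speed of light; real cables are slower by a velocity factor (roughly 0.6 to 0.9), which shortens these figures proportionally:

```python
# Wavelength = propagation speed / frequency, assuming full light speed.
C_LIGHT = 300_000_000.0  # metres per second (rounded, as in the text)
METRES_PER_MILE = 1609.344

def wavelength_m(freq_hz):
    return C_LIGHT / freq_hz

print(f"20 kHz:  {wavelength_m(20_000):,.0f} m")                       # 15,000 m
print(f"3.5 kHz: {wavelength_m(3_500) / METRES_PER_MILE:.2f} miles")   # ~53.26 miles
```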
For (by consumer electronics’ standards) very high frequencies, like digital or video, it can be important, too. Even a short cable (one or two meters) can be long enough relative to the wavelength of the signal it’s carrying for “nodes” and “anti-nodes” to be created and to make characteristic impedance a necessary consideration. At audio frequencies, though, not only is it not a factor; matching it may even be impossible. Speakers, for example, with very few exceptions (including certain planar magnetics, like those from Magnepan), present a very complex load to an amplifier: despite whatever nominal impedance they may be rated at, their impedance changes with frequency, typically with a strong peak at the fundamental or system resonance of every driver. Although some companies have claimed to offer “impedance matching” speaker cables, such a thing is simply not possible. The speaker system doesn’t have a matchable impedance, and even if it did, the output impedance of the amplifier (which, for a no-loss transmission line, would also have to be matched) would still be different from, and typically much lower than, that of the speaker, and no match would be possible.
For interconnects, too, if they are to be used with unbalanced (“single-ended,” usually with RCA connectors) lines, audio-frequency impedance matching is not possible. Unbalanced audio circuits are normally “loaded,” meaning that a low source impedance (typically in the range of 50 to 250 ohms) is terminated with a high load impedance (typically 7k to 10k for line level or 47k to 100k ohms for phono), and so, again, there’s nothing there to match.
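The loading arrangement described above can be sketched as a simple voltage divider: with a low source impedance driving a much higher load impedance, nearly all of the signal voltage appears across the load, which is why no matching is needed. The impedance values below are the typical ranges given in the text:

```python
# Unbalanced audio lines are "loaded," not matched: the source/load
# voltage divider passes nearly all of the signal voltage.
def transfer_ratio(z_source_ohms, z_load_ohms):
    """Fraction of the source voltage appearing across the load."""
    return z_load_ohms / (z_source_ohms + z_load_ohms)

print(f"line level (250 into 10k):  {transfer_ratio(250, 10_000):.3f}")  # ~0.976
print(f"phono      (250 into 47k):  {transfer_ratio(250, 47_000):.3f}")  # ~0.995
```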
Where audio-frequency impedance matching is used is in balanced lines (usually, in the U.S., with an XLR connector), for microphones and professional or high-end audio equipment. The original 600-ohm standard came (surprise!) from the telephone company, but when radio came along and started with telephone company equipment (because that was all that was initially available), it became the radio standard, too. The same thing occurred when, with the advent of electrical recording, recording studios adopted radio station equipment (again, the only thing initially available) and 600-ohm balanced lines as their standard. Radio certainly had RF frequencies to justify impedance matching, but for the recording industry and high-end audio, the choice of balanced (differential) lines was made not out of any concern for characteristic impedance, but for improved signal-to-noise ratio (S/N). Because both sides of a balanced or differential circuit are driven out of phase with each other (there is no customary “hot” or “ground” in a differential circuit; both sides change polarity with the signal, each acting as “ground” for the other, with the whole thing “floating” above chassis ground), twice the signal voltage is conveyed. This results in a 6dB S/N improvement, which may be augmented in high (electrical) noise environments by balanced circuitry’s natural and inherent ability to cancel external noise. Both make balanced lines an ideal choice for high-quality audio applications.
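The 6dB figure follows directly from the voltage doubling, since level gain in decibels is 20 times the base-10 logarithm of the voltage ratio:

```python
import math

# Driving both legs of a balanced line out of phase doubles the
# differential signal voltage; in decibels that is 20*log10(2).
voltage_ratio = 2.0
gain_db = 20.0 * math.log10(voltage_ratio)
print(f"{gain_db:.2f} dB")  # ~6.02 dB
```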
With very long lines out of the picture and characteristic impedance problems essentially irrelevant, other aspects of wire and cable design assume an importance that the telephone company may never have considered. The short (at least by comparison) cables used in high-end audio go far beyond just R, C, and L in their design. One must look closely at metal purity (with special concern for copper-oxide mini-diodes); the actual length of the conductor’s individual metallic crystals (longer is better); the mass and plating of the connectors used (less mass is better, and gold is better than silver for audio, but silver is better than gold for RF and above); all of the aspects of the cable’s dielectric that can contribute to degradation of the transmitted signal through capacitive discharge and other effects; and any number of other things that the telephone company never considered in the past but that, perhaps paradoxically, modern electrical engineers are being forced to learn about as signal frequencies, now up in the gigahertz range for some military, communications, and computer applications (and spiraling ever higher), force them to examine things that hi-fi cable designers have known about for years.
Think how much faster things might have developed, and where we might have been today, in all aspects of electronics, if the universities had just told the engineers that what they were teaching them was telephone company electrical engineering.
About Roger Skoff
Roger Skoff claims to be retired, but nobody believes him. And even if anybody did, they wouldn’t know what to say he had retired from: As an economist and entrepreneur, Roger has started and operated nearly two dozen successful companies of his own and has been involved in more industries than anyone would care to throw rocks at—including everything from aerospace to franchising, to healthcare, to motorcycles, to Holstein dairy cattle breeding, to oil and gas drilling, to publishing, and, finally, to high-end audio, where he was founder and designer for cable manufacturer XLO Electric, which he sold in 2002.
Roger has consulted for companies as diverse as Procter & Gamble, Motown Records, and Onkyo; holds U.S. and world patents in several disciplines; and, having made basic discoveries in interactive field theory and capacitive-discharge effects in cables, is considered to be one of the world’s leading authorities on the performance and electrical characteristics of short lengths of wire.