
Meter Madness

What your level meters tell you—and what they don’t.

By Mike Rivers

In May 2014, the VU meter will celebrate its 75th birthday. It has served the industry well, and when properly interpreted, it’s still useful. However, today’s digital recording processes have caused us to take a hard look at the usefulness and inadequacy of both the traditional VU meter and its modern replacement, the LED level meter, as tools for signal-level management.

The classic VU meter, though relatively rare today (primarily because of cost), has long been the most common audio-level indicator. VU stands for Volume Unit, and the VU meter indicates how loud something sounds. The traditional VU meter is mechanical, analog, and has a standardized (even the color scheme is standard), logarithmic (decibel) scale that runs from –20 to +3, with usable resolution over about a 15 dB range. The zero mark is about two-thirds of the way up the scale. This 0 VU mark is what we mean when we say the meter “reads zero,” not where the pointer rests when the equipment is turned off.

Resolution is very good near the high end of the scale: generally 1 division per dB between 0 and +3. The scale resolution becomes progressively lower (more dBs per scale division) below zero VU.

These days, the mechanical meter’s replacement is usually a column of LEDs, with individual lights serving as the meter scale ticks. On a computer-based digital audio workstation, a meter is often simply drawn on the monitor screen. The Cool Look Factor is lower, but it’s a lot cheaper than using mechanical meters. However, some analog audio devices—notably “boutique-style” preamps and channel strips such as the PreSonus® ADL 600, ADL 700, and RC 500—still sport traditional VU meters.

In this article, we’ll review the characteristics of the traditional analog VU meter and also address audio-level measurement and management in the digital domain, the concept of headroom, and how loudness and audio level are related—as well as how they’re not related.

What’s a VU?

The Volume Unit meter was originally designed to help broadcast engineers keep the overall program level consistent between speech and music. There’s a well-defined standard for the mechanical rise-and-fall response characteristics of the pointer (to which few of today’s VU-like meters actually comply).

A standard VU meter responds a little too fast to accurately represent musical loudness but fast enough to show movement between spoken syllables. Unscientific as it may seem, the dynamic response of the VU meter was tailored so that the pointer motion looks good when indicating speech level. It was easy to tell at a glance whether speech or music was going out over the air, whether it was about at the right level, and when something wasn’t working. Engineers learned that brief excursions up to the +3 dB top end of the scale rarely caused distortion in the analog equipment of the day, nor did they sound too loud.

How is Audio Level Measured?

Once sound is converted to electricity, we can represent the audio level by measuring the alternating electrical voltage. A symmetrical signal, such as a sine wave—a single-pitched note with no overtones or distortion—spends equal time on the positive and negative sides of 0 volts. The numerical average of the positive and negative voltages over many cycles is zero—not a very useful measurement when we want to know how loud a sound that voltage represents.

A measurement that corresponds closely to how we perceive loudness is called the RMS (Root Mean Square) voltage. RMS is calculated by squaring the voltage at each point on the waveform (remember, the square of a negative number is a positive number), summing the squares, dividing by the number of points measured (the more points, the more accurate the calculation), then taking the square root of that quotient.

For a sine wave, the RMS value is 0.707 times the peak value (see Fig. 1a). You’ve probably heard of this relationship between peak and RMS: the peak is the square root of 2 (about 1.414) times the RMS value, and 1 divided by 1.414 is 0.707. Understand, though, that this numerical relationship holds true only for a pure sine wave.

Look at the string of narrow pulses in Fig. 1b. The voltage spends much more time at zero than at some positive or negative value. Even though the peak value is the same as for the sine wave above it, the RMS value of this waveform is lower. It will read lower on a VU meter, and it will sound quieter than a sine wave of the same peak level. Since a square wave (see Fig. 1c) spends all its time either at the peak positive or peak negative voltage, its RMS and peak amplitudes are equal.
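To make that arithmetic concrete, here’s a minimal Python sketch (my own illustration, not from the standard or the figures) that computes peak and RMS values for the three waveforms just described; the frequency, duty cycle, and sample rate are arbitrary choices.

```python
import math

def rms(samples):
    """Square each point, average the squares, then take the square root."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

N = 48000                                                            # one second of points at 48 kHz
sine   = [math.sin(2 * math.pi * 100 * n / N) for n in range(N)]     # 100 Hz sine, peak = 1.0
pulses = [1.0 if (n % 480) < 24 else 0.0 for n in range(N)]          # narrow pulses, 5% duty cycle
square = [1.0 if sine[n] >= 0 else -1.0 for n in range(N)]           # 100 Hz square wave

for name, wave in (("sine", sine), ("pulses", pulses), ("square", square)):
    peak = max(abs(s) for s in wave)
    print(f"{name:7s} peak = {peak:.3f}  RMS = {rms(wave):.3f}")

# sine    peak = 1.000  RMS = 0.707   (RMS is 0.707 x peak)
# pulses  peak = 1.000  RMS = 0.224   (same peak, much lower RMS: reads and sounds quieter)
# square  peak = 1.000  RMS = 1.000   (RMS equals peak)
```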

The Classic VU Meter

Today, a VU (or pseudo-VU) meter is most often used for little more than to indicate an impending overload. We don’t watch a recorder’s VU meter to tell how loud our recording is, but rather, to ensure that we don’t exceed the available headroom—the 10 dB or so between 0 VU and the point where THD reaches 1%.

An analog tape recorder is typically calibrated so that 0 VU on its meter corresponds to the signal level required to produce the chosen reference fluxivity level (the measure of the flux density of a magnetic recording tape per unit of track width) on analog tape. At a level 10 dB above reference, a professional tape recorder will have a measurable but tolerable amount of total harmonic distortion (THD)—around 1%. This is what’s usually referred to as “analog tape compression” or “analog warmth.” (Read this online article for more on fluxivity, operating levels, and headroom.)

However, the VU meter is only a good headroom indicator if you’re working with program material that’s fairly consistent in level and doesn’t require a generous allowance for surprise peaks.

Take a close look at the VU meter scale in Fig. 2. While the meter scale has a total range of 23 dB, fully half (the top half) of the scale represents only 5 dB. This is good resolution for measuring steady tones when calibrating the recorder or setting levels within a system but pretty wasteful when working with a recorder that’s capable of handling a dynamic range between 65 dB (analog tape) and better than 90 dB (garden-variety 16-bit digital). There’s no usable resolution below -10 VU. If we assume that we have at least 10 dB of headroom above 0 VU, a VU meter is really only informative within about a 13 dB range.

Why such a compressed scale? Practicality. Perceived loudness is a logarithmic function: it takes roughly ten times the signal power (more than three times the voltage) for something to sound about twice as loud. A linear scale that represents loudness just wouldn’t look right. Remember that one of the design criteria for the VU meter was that the pointer response looked good for speech.

It’s no surprise that modern, highly compressed music will shoot the meter pointer well up scale, and it’ll stay right there until the fadeout. On uncompressed material with a wide dynamic range, there’s plenty of audible material down below the -20 mark on the VU meter, but an inexperienced engineer who trusts the meter rather than his ears can be misled into thinking that anything that barely moves the meter is too soft.

There are standard specifications for how a VU meter should behave, and all the good ones (like on the front of your grandfather’s Ampex) conform pretty closely to those specifications. The problem is that you can’t always tell by looking at a meter how closely it complies with the standard. The specification defines several important characteristics:

  • Sensitivity: How many volts are required to deflect the pointer to the zero mark.
  • Impedance: How it loads the circuit to which it’s connected.
  • Rectifier characteristics: How it responds to various waveforms.
  • Ballistics: The rise and fall time of the meter pointer.
  • Scale: Sufficient resolution to be useful.

All of these things contribute to indicating how loud we perceive a sound to be. Unfortunately, none of those characteristics help to determine peak levels, and that makes it tricky to use a VU meter to indicate level going to a digital device. Let’s look at those characteristics and see how meters differ.

Sensitivity and Impedance

When Western Electric designed the VU meter, they were concerned with driving miles and miles of telephone lines, which, like a loudspeaker, required a certain amount of power (voltage x current). Modern audio devices, however, draw negligible current (and hence require negligible power), so we’re interested in measuring voltage rather than power.

The classic telephone system is built around 600 ohms for both the source output impedance and the destination input impedance. The voltage necessary to pump 1 milliwatt of power into a 600-ohm load is 0.775 volts, and that voltage, taken without regard to impedance, was defined as 0 dBu. It would have been convenient to design the VU meter to indicate zero with an input of 0.775 volts, but that presented too many design conflicts. In order to achieve that sensitivity with a purely mechanical meter (no electronics), it needed to have an input impedance of around 400 ohms, which was low enough to drag down the output of the circuit it was measuring.

To solve that problem, the designers put a 3.6 kilohm resistor in series with the meter movement to avoid significantly loading a 600 ohm source—but then the voltage drop across the resistor caused the meter to read about 4 dB too low. The solution (remember, this was in the days of gobs of headroom) was simply to crank up the voltage required to get the pointer up to 0. And so, the professional standard for audio equipment today is that a standard VU meter will read zero for a level 4 dB above the 0.775V (0 dBu) reference point, or about 1.23 volts.
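If you want to see where those numbers come from, here’s the dBu-to-volts arithmetic in a few lines of Python (a quick sketch of my own, not part of any standard):

```python
import math

DBU_REF = math.sqrt(0.001 * 600)   # 0.7746 V: the voltage that puts 1 mW into 600 ohms

def dbu_to_volts(dbu):
    return DBU_REF * 10 ** (dbu / 20)

def volts_to_dbu(volts):
    return 20 * math.log10(volts / DBU_REF)

print(round(dbu_to_volts(0), 3))     # 0.775 V  (0 dBu)
print(round(dbu_to_volts(4), 3))     # 1.228 V  (+4 dBu, where a standard VU meter reads zero)
print(round(volts_to_dbu(1.23), 1))  # 4.0      (about +4 dBu)
```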

AC/DC

The type of movement used in the classic VU meter responds to direct current. Put AC into it, and the pointer will just vibrate around the bottom of the scale. Since audio is by nature alternating current, the VU meter employs a rectifier to change the AC into DC. In the heyday of VU meters, there were only two types of solid-state rectifiers, and the type that was the most linear, copper oxide, was chosen for use. It didn’t give a true RMS output for any waveform, but at least all the meters were the same.

Nowadays, rectification is usually done with an operational amplifier, which gives more linear rectification than a copper oxide rectifier. Consequently, a modern VU meter that’s on the front panel of a preamp or mixer is likely to give a different reading than a traditional one when the waveform diverges from a simple sine wave.

Going Ballistic

An ordinary voltmeter (if you can still find one with a pointer), though accurate for continuous simple waveforms, doesn’t do a very good job of responding to the rapid level changes and asymmetrical waveforms present in audio. The VU meter’s movement is designed so that its reading will correspond to a bench meter on steady-state tones (so it can be calibrated), but on complex audio waveforms, it will read somewhere between the average and peak value, making it work about like the human ear. If peaks are frequent and closely spaced (for instance, on a drum solo), the VU meter will tend to read a little higher than the average level but won’t make it all the way up to the peak level.

A proper VU meter takes about 1/3 of a second after a 0 VU level signal is applied for the pointer to reach the zero mark (by which time the signal level may have dropped a bit below its peak), while many electronic meters have near instant response. There are designs for electronic meters that approximate the mechanical response of a real VU meter, but it’s not generally an advertised feature.
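For the curious, here’s a rough sketch of how an electronic meter might imitate that ballistic behavior: full-wave rectify the signal, then smooth it with a time constant chosen so a steady tone settles in roughly 300 ms. This one-pole approximation is my own simplification; the actual standard describes the mechanical pointer’s behavior, not this filter.

```python
import math

def vu_style_reading(samples, sample_rate=48000, settle_seconds=0.3):
    """Smooth the rectified signal so a suddenly applied steady tone reaches ~99% in ~300 ms."""
    coeff = math.exp(-math.log(100) / (sample_rate * settle_seconds))
    reading = 0.0
    readings = []
    for s in samples:
        rectified = abs(s)                                 # the rectifier stage
        reading = coeff * reading + (1 - coeff) * rectified
        readings.append(reading)                           # slow rise, slow fall
    return readings
```

Because the smoothed value rises and falls slowly, brief peaks barely move it, which is exactly why a meter like this tracks perceived loudness better than it tracks peaks.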

The VU meter’s ballistics make it a useful tool because they approximate how we actually hear, but the engineer must understand that when it’s necessary to know the actual peak level, a VU meter doesn’t tell the whole story. The VU meter may read -6 during a drum solo, but the peaks when the stick hits the head may be 10 dB higher than that.

Today, many meters, both mechanical and LED or LCD ladder-style, have scales that look like a VU meter but don’t meet the VU standards. These are useful for establishing steady-state calibration levels when setting up a system, but they don’t accurately represent loudness or headroom.

LED ladder meters are often found on digital equipment, but until you dig into the inner workings, you usually don’t know whether the meter indicates an analog voltage or the amplitude of a digital sample. In either case, the garden-variety meter doesn’t provide the same dynamic response as a real VU meter. It can show you average and sometimes peak level, but it won’t tell you much about apparent loudness.

Each LED on a ladder-style meter represents a single calibrated point, but unlike an analog meter, you can’t estimate the level between any two LEDs. While an analog VU meter scale runs from –20 to +3, the range between the highest and lowest level indicators on an LED meter tends to be much wider than with a mechanical meter.

The bottom LED of the scale usually represents 30 to 60 dB below 0 VU, while the top of the scale typically extends to +20 or even as high as +28 dB above 0 VU. The steps between the lower LEDs are fairly large, similar to the low resolution of the bottom end of the mechanical meter scale. Unlike the mechanical VU meter, however, because of the extended range above 0 VU, the LEDs at the top end of the scale tend to have fairly large steps between them as well.

For the meter shown here, the seven LEDs between -4 and +10 have roughly the same resolution as the working range of a mechanical meter. There’s only one step above +10, and at 18 dB, that single step covers more range than the usable range of an entire VU meter! When the +10 LED is on and the +28 LED is off, you really can’t tell what the level is.

You can get into trouble if you try to use this meter as a guide to setting levels with a digital system. A typical A/D converter or computer audio interface is calibrated so that +4 dBu at the input corresponds to a digital level of -20 dBFS. The meter in this example is calibrated so that 0 VU corresponds to an analog level of 0 dBu. The second LED from the top (+10 dBu) corresponds to a digital level of -14 dBFS. The top LED (+28) doesn’t flash until you’re 4 dB above digital full scale. So in that critical range between -20 and -6 dBFS, you really have only one useful LED indicator.
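The bookkeeping behind those numbers is a single offset; here’s a tiny sketch using the calibrations in this example (+4 dBu into the converter equals -20 dBFS, and 0 VU on the meter equals 0 dBu). The function name is mine.

```python
def dbu_to_dbfs(dbu, cal_dbu=4.0, cal_dbfs=-20.0):
    """Map an analog level in dBu to the resulting digital level in dBFS."""
    return dbu + (cal_dbfs - cal_dbu)

print(dbu_to_dbfs(0))    # -24 dBFS : the meter's 0 VU point
print(dbu_to_dbfs(10))   # -14 dBFS : the second LED from the top
print(dbu_to_dbfs(28))   #  +4 dBFS : the top LED, 4 dB past digital full scale
```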

One day, manufacturers will catch on. Until then, pay attention to your DAW’s meters.

Zero VU is an arbitrary voltage level. It’s whatever is “normal” at the point in the circuit where the measurement is made. A meter that indicates input or output level is generally calibrated according to one of several industry “sort-of” standards. The most common is that 0 VU represents a voltage level of +4 dBu, about 1.23 volts RMS.

Most modern mixers are calibrated to this convention: When the output meter reads 0 VU, the device is putting out +4 dBu RMS. But this isn’t always the case. For many years, Mackie believed this was too confusing, so they calibrated their meters so that 0 VU = 0 dBu. The “semi-pro” recording gear popular throughout the 1980s was generally calibrated for 0 VU = –10 dBV (10 dB below 1 volt, about 0.32 volts). Professional recorders are usually calibrated so that their meters read 0 for an input level of +4 dBu. In the broadcast world, “line level” is often +8 dBu, so that’s where their VU meters are calibrated.
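To put those conventions side by side, here’s a quick sketch converting each nominal level to volts (dBu is referenced to 0.775 V, dBV to 1 V):

```python
def dbu_to_volts(dbu):
    return 0.775 * 10 ** (dbu / 20)     # dBu: referenced to 0.775 V

def dbv_to_volts(dbv):
    return 1.0 * 10 ** (dbv / 20)       # dBV: referenced to 1 V

print(round(dbu_to_volts(4), 2))     # 1.23 V  : pro gear, 0 VU = +4 dBu
print(round(dbu_to_volts(0), 3))     # 0.775 V : Mackie-style 0 VU = 0 dBu
print(round(dbv_to_volts(-10), 2))   # 0.32 V  : "semi-pro" 0 VU = -10 dBV
print(round(dbu_to_volts(8), 2))     # 1.95 V  : broadcast line level, +8 dBu
# The spread between +4 dBu and -10 dBV works out to about 11.8 dB.
```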

Are you beginning to see what the “madness” is in the title of this article? Wait! There’s more!

Digital Metering

A digital meter—which is not a VU meter—has a scale with 0 dB all the way at the top. (Shown here is the Selected Channel level meter from a PreSonus StudioLive™ 32.4.2AI digital mixer.) This doesn’t represent a specific analog voltage level but rather represents maximum digital level – a sample represented by a binary number with all the significant bits turned on. Digital meters (regardless of what the scale says) measure dB relative to the maximum value (“full scale”) rather than a nominal value, as with an analog or standard VU meter. We call this dBFS, where the all-bits-on value is represented by 0 dBFS.
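In code terms, a digital meter’s reading is just the sample’s magnitude relative to the largest value the word length can hold. A minimal sketch for signed integer samples (the bit depth and names here are illustrative):

```python
import math

def sample_to_dbfs(sample, bit_depth=24):
    """dB relative to full scale for a signed integer sample."""
    full_scale = 2 ** (bit_depth - 1) - 1    # largest positive 24-bit value: 8,388,607
    if sample == 0:
        return float("-inf")                 # digital silence is infinitely far below full scale
    return 20 * math.log10(abs(sample) / full_scale)

print(sample_to_dbfs(8388607))   #  0.0 dBFS (all the significant bits turned on)
print(sample_to_dbfs(838861))    # about -20 dBFS
```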

Unlike the VU meter, with a digital meter there is no headroom above 0. You can’t turn on more bits than the system’s word length. It’s up to you, the engineer, to decide where to set the analog-to-digital converter’s input gain to allow as much headroom as you’d like. The object is to leave enough analog room so that only the loudest peaks approach a digital level of 0 dBFS. While today’s digital systems typically accommodate internal processing that yields a word length greater than what goes in or comes out, a 24-bit analog-to-digital converter is flat out when that 24th bit is turned on.

Perceived loudness is the same whether the source is analog or digital, but we view headroom and operating range differently in the two worlds.

In the analog world, there’s a range over which distortion increases gradually, and we include at least a portion of that range in our useful working range. In the digital world, distortion is nearly negligible and independent of level until all the available bits are used up, and at that point, it’s (literally) all over.

The meters on early DAT recorders measured the signal level on the analog side of the converter. The “Overload” indicator turned on when the input voltage reached the level that produced full-scale output from the analog-to-digital (A/D) converter. This told you that there was likely an overload while recording, but since the recording can never exceed full scale, “overs” metered on the analog side never showed up on playback.

The conventional way of determining a potential digital overload condition is to count the number of consecutive 0 dBFS samples, turning on the “Over” indicator when that number exceeds a preset threshold. The Sony PCM-1630, the original “official” digital standard recorder from which all CD glass masters used to be cut, indicates overload any time it sees three consecutive full-scale samples.

This is reasonable, since we can assume that the first of a consecutive string of 0 dBFS samples occurred while the waveform was on the way up, and the last occurred on the way down; therefore, the stream must have tried to go over the 0 dBFS level some time in between.

Modern digital recorders and all software DAW meters monitor the actual digitized value. Any input voltage that exceeds the 0 dBFS level will be digitized as 0 dBFS, leaving a nice flat-topped waveform that looks quite unlike the original (see Fig. 3). While the digital level never exceeds 0 dBFS, clearly there’s something wrong, about which we would like to be warned.

Three consecutive 0 dBFS samples for an Over might be considered pretty fussy, since it might represent only one sample that tried to exceed 0 dBFS, with its two adjacent samples being exactly full scale. For most program material, one sample is too brief to be detected by most listeners, so some digital level meters offer a choice of the number of contiguous full-scale samples required for an overload indication. Pick your poison, depending on the music you’re recording. With modern 24-bit systems, a single full-scale sample is usually shown as an overload.
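Here’s what that counting logic might look like, with the number of consecutive full-scale samples as the adjustable “poison” (three for the PCM-1630 convention, one for most modern 24-bit meters). The names and the floating-point full-scale convention are mine.

```python
def count_overs(samples, consecutive_needed=3, full_scale=1.0):
    """Count an 'Over' each time `consecutive_needed` samples in a row sit at full scale."""
    overs = 0
    run = 0
    for s in samples:
        if abs(s) >= full_scale:
            run += 1
            if run == consecutive_needed:    # flag once per run of full-scale samples
                overs += 1
        else:
            run = 0
    return overs

clipped = [0.2, 0.9, 1.0, 1.0, 1.0, 0.7]             # three flat-topped samples in a row
print(count_overs(clipped, consecutive_needed=3))    # 1
print(count_overs(clipped, consecutive_needed=1))    # 1 (the modern one-sample rule flags it too)
```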

Digital Over indicators are all about statistics and perception, however. While it might not be musically interesting, you can record a square wave at exactly full scale with complete accuracy, yet the Over indicator will stay lit the whole time, even though nothing has actually clipped.

Working at a 96 kHz sample rate raises an interesting question: Should we double the number of FS samples that we consider an overload, since that would represent a flat-topped waveform of the same duration as three samples at 48 kHz? The modern convention is to indicate an “over” when one sample reaches 0 dBFS.

In the early days of digital recording, refreshing an onscreen meter was usually a low-priority task for the program. Meters were useful for setting initial levels, but they could run a bit behind once the “tape” started rolling, and there could be enough lag to get into trouble if you worked too close to full-scale level. This problem was compounded by the 16-bit converters of the day that had a noise level well above the lowest order bit, so the practice—a carryover from tape—was to run as close to maximum level as possible for the best resolution and lowest noise.

Modern computers have sufficient processing power that meter lag is no longer a problem, and converters have become better, too. With modern 24-bit converters, leaving several dB of safety margin below full scale for real emergencies won’t compromise the signal-to-noise ratio enough to worry about, and if your recording will be tweaked by a mastering engineer, it’ll leave a little breathing room for digital level adjustment or equalization.

Many video production houses have adopted a more stringent convention, primarily to assure that they don’t run out of working space when mixing various elements of a project. The SMPTE standard for nominal digital level is -20 dBFS, but rather than allowing peaks up to 0 dBFS, they don’t want to see anything higher than -10 dBFS. This tosses away about a bit and a half of resolution and raises the noise floor slightly, which isn’t terribly important when playback is from a television-set speaker, and it absolutely assures no “overs.”
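The “bit and a half” figure follows from the rule of thumb that each bit of word length is worth roughly 6 dB of dynamic range; a quick check:

```python
import math

db_per_bit = 20 * math.log10(2)     # about 6.02 dB per bit
print(round(10 / db_per_bit, 2))    # about 1.66 bits given up by capping peaks at -10 dBFS
```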

In the early days of CD manufacturing, if your digital master contained “overs,” the mastering house simply rejected it, telling you to fix your problems before cutting the glass master. They didn’t want to be blamed for your distortion. Now that controlled distortion is an accepted production tool in popular music, judicious application of digital limiting and compression can keep the Over light from coming on while effectively holding the level within a fraction of a dB below full scale throughout the entire song.

What’s the difference between that and digital clipping? Control. If you don’t like how the limiter sounds, you can change it. But while someone might intentionally (and hopefully, sparingly) use digital clipping as an effect, it’s totally inappropriate as a leveler.

But How Loud is It?

Loudness is a function of both the recorded level and how far the listener has the volume turned up. In the past few years, commercial (and following in their footsteps, independently produced) recordings have been in a race to make each recording sound louder, when played at the same volume setting, than the previous one. A whole segment of our industry has sprung up as a result.

Loudness used to be whatever it turned out to be when you kept the level around the recording device’s nominal operating level, as indicated by 0 on its VU meter. But what’s the nominal operating level of a digital device? The simple answer is that it’s whatever you say it is. Many digital recorders and A/D converters with good metering resolution (a lot of lights) have a mark on the meter scale somewhere between -20 and -12 dBFS. This is the recommended operating level. Everything from that point on up to 0 is headroom. However, you don’t need to stick with that mark on the scale; you can leave as much or as little headroom as you want (or dare). So why are there different “recommended” nominal operating levels? Because there’s no agreement on this.

Operating Levels: Analog vs. Digital

For theater sound, Dolby Labs has established the standard of 85 dB SPL (acoustic sound pressure level, C weighted, slow response) at a digital level of -20 dBFS. In a theater with a properly calibrated playback system—one which actually plays back -20 dBFS at 85 dB—most of the audience can hear quiet dialog, and they don’t get blown out of their seats when the car chase ends in an explosion.

With the coming of home-theater systems, Dolby recognized that sound mixed for 85 dB plus headroom in a loud theater doesn’t translate well to small speakers and small rooms. After all, 20 dB is a lot of dynamic range, and 105 dB SPL is more than most listeners can tolerate for very long. So Dolby recommends lowering the playback calibration to 79 dB SPL at -20 dBFS for home reproduction.

Of course nobody wants to admit to turning anything down, so let’s play with the numbers a little and see if we can make up that level.

If the home system is calibrated to Dolby’s recommended 79 dB SPL at -20 dBFS, we’ll reach 85 dB SPL at a digital level of -14 dBFS. Since we’ve compressed the bejeezus out of all the tracks, we don’t have a lot of dynamic range, so we can run a little hotter. Raising the nominal operating level from -20 to -14 dBFS will jack everything up by 6 dB, leaving enough headroom to still allow peaks to approach 0 dBFS. Now, our nominal level still corresponds to 85 dB SPL, but instead of being able to go 20 dB over the nominal level before running out of bits (the headroom that we’ve allowed), we now have only 14 dB of headroom. We keep our fingers crossed that it’s enough.
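The level juggling above is just a linear offset between dBFS and SPL, anchored at one calibration point; here’s the arithmetic as a quick sketch (the function name is mine):

```python
def dbfs_to_spl(dbfs, cal_dbfs=-20.0, cal_spl=85.0):
    """Playback SPL for a given digital level, given one calibration point."""
    return cal_spl + (dbfs - cal_dbfs)

print(dbfs_to_spl(-20, cal_spl=85.0))   #  85 dB SPL: theatrical reference level
print(dbfs_to_spl(0,   cal_spl=85.0))   # 105 dB SPL: full-scale peaks in the theater
print(dbfs_to_spl(-20, cal_spl=79.0))   #  79 dB SPL: Dolby's home recommendation
print(dbfs_to_spl(-14, cal_spl=79.0))   #  85 dB SPL: the nominal level raised by 6 dB
```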

How do we deal with the reduced headroom? By reducing the crest factor: the ratio of peak to average level. By allowing the average level to be closer to the peak, the average level can be raised without peaks that exceed the maximum allowable level. A 20 dB peak-to-average ratio is typical for uncompressed music of almost any genre, which is why we start out with a nominal operating level of -20 dBFS.
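Crest factor is simple to measure; here’s a minimal sketch:

```python
import math

def crest_factor_db(samples):
    """Peak-to-RMS ratio in dB: around 20 dB for uncompressed music, far less for limited mixes."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(peak / rms)

sine = [math.sin(2 * math.pi * n / 100) for n in range(1000)]
print(round(crest_factor_db(sine), 2))   # 3.01 dB: even a sine wave has some crest factor
```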

We perceive loudness based on average rather than peak level. In a typical pop song, the snare drum will be the loudest individual sound, and its peaks tend to ride a few dB above the rest of the mix. This is what sounds right to our ears, and it’s those peaks that determine the maximum digital level. We’ll mix so we can allow the drummer, at some points in the song, to be particularly enthusiastic, and we’ll let the peak level hit 0 dBFS occasionally.

Let’s mix our hypothetical song again, but this time we’ll sit on that snare with a limiter or compressor so that despite the drummer’s enthusiasm, his peaks still remain pretty level. You’ll find that the meter no longer hits 0 dBFS but the mix doesn’t sound any quieter.

Now that we’ve taken care of those pesky occasional peaks that determine the maximum signal level, we can boost the master level by a few dB so that we get back those 0 dBFS peaks. Since we’ve just raised the overall level (including the average level), the mix sounds louder, and we’re still within the digital limit. Voila! Instant mastering!

K-System Metering

In a presentation at the October 1999 Audio Engineering Society convention, mastering engineer Bob Katz of Digital Domain described the concept of monitoring at a calibrated sound-pressure level. He proposed a new type of meter scale that essentially displays headroom relative to the calibrated monitoring SPL rather than the digital level. His meter scale looks like a VU meter in that it’s calibrated both above and below 0 VU, but unlike a VU meter, its decibel markings are spaced evenly (a linear-in-decibels scale).

The K-System, then, is an integrated metering system tied to monitoring gain, and it is intended to standardize the levels at which sound is mixed and mastered.

K-System metering features three different meter scales, called K-20, K-14, and K-12. These three scales are meant to be used with different types of audio production and are described by Katz in his Audio Engineering Society technical paper “An Integrated Approach to Metering, Monitoring, and Levelling Practices.”

Katz wrote: “The K-20 meter is for use with wide dynamic-range material, e.g., large theater mixes, ‘daring home theater’ mixes, audiophile music, classical (symphonic) music, hopefully future ‘audiophile’ pop music mixed in 5.1, and so on. The K-14 meter is for the vast majority of high-fidelity productions for the home, e.g., home theater and pop music (which includes the wide variety of moderately compressed music, from folk music to hard rock). And the K-12 meter is for productions to be dedicated for broadcast.”

With the K-System, when mixing for maximum dynamic range, the scale goes up to +20 VU, with 0 VU representing a monitor level of 83 dB SPL for a single channel or 85 dB SPL for stereo. For mixing typical pop music, the meter’s full scale is +14 VU, while for heavily compressed “radio ready” music, full scale is +12 VU. The K-System recognizes that different genres and different listening environments require different approaches to level management without destroying dynamic range.
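In dBFS terms, the three K-System scales simply place their 0 VU mark at different distances below digital full scale. A small sketch of that mapping (my own illustration, not Katz’s code):

```python
K_SCALES = {"K-20": -20, "K-14": -14, "K-12": -12}   # where 0 VU sits, in dBFS

def k_reading(dbfs, scale="K-14"):
    """Convert a digital level to a K-System meter reading (0 = calibrated monitor level)."""
    return dbfs - K_SCALES[scale]

print(k_reading(-20, "K-20"))   #  0 : nominal level for wide dynamic-range work
print(k_reading(-14, "K-14"))   #  0 : nominal level for typical pop mixes
print(k_reading(0,   "K-12"))   # +12: digital full scale on the broadcast-oriented meter
```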

K-System meters in Studio One (L to R): K-12, K-14, K-20.

While it may seem that the broadcast mix—the one that we want to appear the loudest—has the lowest full-scale level, it’s really the loudest. With 0 VU corresponding to 85 dB SPL and the smallest range between 85 dB and maximum, a mix running around 0 VU will keep the average volume higher than that of the widest dynamic-range mix, which allows 20 dB of headroom above 0 VU.

If this sounds a bit too much like smoke and mirrors, you can read the paper and fill in some of the mental blanks.

Note that the Level Meter plug-in in PreSonus Studio One® features K-System metering options. To switch to any K-System meter, [Right]/[Control]-click on any Peak/RMS meter and choose an option from the menu.

Those Pesky Reference Levels

The real meter madness associated with digital level measurement is when you have devices with different reference levels (volts or dBu vs. 0 dBFS) in the same system. To make this even more befuddling, you don’t always know a device’s reference level unless you measure it. A lot of gear is specified only in terms of a nominal analog reference level, without revealing the corresponding digital level.

Further, many digital I/O devices have no input or output level controls, so you can’t easily calibrate the reference level to match other gear in your system – you have to accept whatever calibration the manufacturer gives you. It’s bad enough when there’s a standard and not everyone follows it, but in this case there’s no standard for the analog level equivalent to 0 dBFS.

This leads to the common complaint of “my mixes aren’t hot enough” or conversely, “it plays much too loud.” A dirty little secret is that budget-priced A/D converters (whether a stand-alone converter or integrated into another device) tend to be a bit on the less-sensitive side. It’s more likely that +4 dBu going in will give a digital level in the -20 dBFS ballpark than -12 dBFS. The reason is that with less gain on the front end, they’re digitizing a lower quiescent noise level, so the manufacturer can advertise a lower noise floor on the digital side. You need to hit the input pretty hard in order to get to 0 dBFS.

Sometimes you can crank up the output level of the source—for example a mixer—and sometimes, as with a microphone, you can’t. If there’s something you can adjust, you must take care that you don’t push the source feeding the A/D converter into clipping before the converter reaches maximum digital level. This can occur if you have a mixer with little headroom (a greater risk with older semi-pro mixers than with modern ones) or if you have a mismatch of nominal analog operating levels between the mixer and recorder. If you’re using a recorder with +4 dBu input sensitivity together with a mixer with a nominal operating level of -10 dBV, the only way you’ll be able to get the recorder’s meters to approach zero on peaks is to run the mixer’s output level meter well above zero, which will seriously compromise, and may exceed, the headroom in your mixer.

Loudness in the 21st Century

In the past few years, the audio industry, primarily in response to complaints that television commercials are too loud, has developed a new set of standards for loudness. This is primarily a broadcast thing but there’s now a variety of loudness meters that measure compliance with such loudness standards as ITU-R BS.1770. This is pretty complicated stuff and beyond the scope of this article but don’t be surprised if your DAW gets a plug-in for it eventually.

Keep Your Eye on the Ball (Not on the Meter)

Once you’ve properly calibrated the gear in your studio, meters can tell you a lot about what’s happening, but don’t become a slave to them. Use your meters as tools, and you’ll spend more time recording and less time worrying about whether your levels are set properly. Don’t forget that you have a playback volume control. Leave getting the hot levels to the end of the process.