For some people, the first attempt at mixing goes something like this: You put everything up part way and listen. You need to hear more guitar, so you push that up a bit. Now the vocal is too low, so you raise it some. Then you need to bring up the bass guitar. There’s always one more thing that isn’t loud enough... until finally you have everything pushed all the way up.
Needless to say, this doesn’t work very well. Unfortunately, it takes a while to learn how to mix, and the better you want your mixes to be, the longer it takes to develop that level of skill. I’ve had at least one person state flatly that it takes 5 years of work to become a good mixer. I do not know whether that is necessarily true, but certainly it takes a long time if you have to discover everything by yourself. It is possible to find lots of hints about mixing technique, but I have yet to see any place that pulls it all together and tells you all the juicy secrets. I would make this site that place if I could, but I don’t know all the “juicy secrets” yet. What I have learned so far, I will try to share.
For this little lesson, I want to explain what may well be THE core principle of mixing: Masking, what it is, and how to deal with it. We all know what a mask is: it’s what you wear to hide your face. In a broader sense, a mask is anything that hides or covers something else. Just as there are masks affecting what we see, there can also be masks affecting what we hear. In the vast Dilbert-land of office cubicles, one thing that is difficult to manage is privacy. In particular, it can be difficult to keep conversations private. You may not see through the partitions, but it is not hard to hear over them. In some places they use what are called masking generators to play filtered noise through speakers as a way of “covering” conversations that others should not hear.
If that is difficult to picture, here is another example... You may not believe it, but I snore at night. I have this on the best authority: my wife. During the summer, she runs a fan at high speed in our bedroom, not just to move air, but to make noise that covers the sound of my snoring. For use in the winter months, I built her a masking generator by installing a pink noise generator in a small powered speaker. The rushing noise it makes masks the sound of my snoring so that my wife can sleep.
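If you want to experiment with this yourself, pink noise is easy enough to approximate in software. Here is a rough sketch in Python (numpy assumed), using the well-known Voss-McCartney algorithm; this is just an illustration, not the hardware circuit I built:

```python
import numpy as np

def pink_noise(n_samples, n_rows=16, seed=0):
    """Approximate pink (1/f) noise with the Voss-McCartney algorithm:
    sum several white-noise rows, where row k is re-randomized only
    every 2**(k+1) samples, so low rows change often and high rows rarely."""
    rng = np.random.default_rng(seed)
    rows = rng.standard_normal(n_rows)
    out = np.empty(n_samples)
    for i in range(n_samples):
        if i > 0:
            k = ((i & -i).bit_length() - 1) % n_rows  # index of lowest set bit
            rows[k] = rng.standard_normal()
        out[i] = rows.sum()
    return out / np.sqrt(n_rows)  # rough level normalization

noise = pink_noise(48000)  # one second's worth at a 48 kHz sample rate
```

Play that through a speaker and you get the same kind of gentle "rushing" sound, with more energy in the lows than plain white noise has.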
When I am listening to the radio in my car, I have to turn it up louder when I am on the freeway so that I can hear it over the louder road noise. I could probably come up with lots of other examples, but I think you get the idea.
The next bit to understand is that, not only can the music we want to hear be masked by outside noise, but also some parts of the music can actually be masked, or covered up, by other parts of the music. The basic idea behind this is that, if too many sounds try to occupy the same “space”, we simply won’t hear them all. In this case, there are various kinds of “space”. The most obvious kind of space is that of physical location, or where the sound seems to come from. There is also spectrum space, meaning the frequency or pitch of a sound. Finally there is the “space” of time, when different sounds happen. So, the closer together two or more sounds are in location, pitch, and/or time, the harder it is for our ears to separate these sounds from each other.
The first factor to consider in masking, though, is relative level. If two sounds are quite “close together” and one is a lot louder than the other, only the louder sound will be audible. We will be most likely to hear both sounds if they are equally loud.
In mixing we may play games with all four of these factors to get the sound we want. Sometimes we will trade one factor off against another. Generally, the better we can separate the different sounds in as many ways as possible, the less those sounds will mask each other and the better we can hear every part in the music. Now let’s examine each factor and what we can do with it.
The first thing that many top engineers do with a mix is to listen to everything together and go for a “rough balance” among the different parts. They know that the balance among parts affects the sound of each part, and thus influences everything else that is done with the mix. This is also the first step in dealing with any potential masking problems.
Level may not be a form of “space”, but it definitely affects how much space a sound occupies. If there are two sounds, and one is louder than the other, the softer sound may or may not be heard. The closer together in location, spectrum, and/or time the two sounds are, the easier it is for the louder sound to drown out, or “mask”, the softer one, and the louder the softer sound has to be in order to be heard. The further “apart” the two sounds are, the lower the level of the softer sound can be before it is masked.
If you have trouble hearing something that is important in a mix, you have to find a way to make it stronger, either by separating it from other sounds, or making it louder, or making certain other sounds softer. In fact, often the first approach to dealing with masking problems should be adjusting the level balance among parts. Although we can sometimes manipulate time, spectrum, or location to make a sound clearer, these tricks are often a band-aid for the larger problem of too many voices trying to be “out front”.
Often the best way to solve a problem like this is to find “less important” parts that you can turn down in the mix, instead of turning up the sound you want. In fact, that is often the best way to work out overall balances in a mix. Remember the problem I described at the beginning of pushing everything up all the way? This usually happens because the mixer is asking himself the wrong question over and over again: “What should be louder?” Let’s take a closer look at that question. First of all, why isn’t that sound loud enough? Because it is partially hidden by something else. Now, when my wife is standing on the other side of a closed door and I want to see her, I don’t say “Sweetie, could you please make yourself bigger than the door?”: instead, I open the door. So it is with a mix. Instead of making the more important sound louder, you make less important sounds softer. The first question to ask yourself, then, isn’t “What should be louder?”, but rather “What should be softer?” or, more precisely, “What could be quieter and still be heard?”
As with many human impulses, the gut desire to make EVERYTHING LOUD usually needs to be restrained.
The important key concept here is: Not everything has to be loud to be heard. In fact, the brighter a sound is, the less level it may need to be heard. That means that it is often possible for everything to be heard without everything being loud. Every time that you can lower the volume of a sound and still hear it, you reduce the masking of other sounds, and thus make your mix clearer. Instead of thinking “loud enough to be heard”, perhaps you should think “just enough to be heard, and no more”. So, if there are masking problems in a mix, sometimes the best answer is to find things to “turn down” until everything can be heard. Yes, the idea sounds backwards at first, but if you work with it a while you will find that it helps more often than you might expect.
One more thing about working with levels: Sometimes the “weak” part is only weak some of the time, that is, its highest levels sound fine but its quieter sections disappear. You can sometimes fix this with either fader moves or compression, using one or the other to bring up the quieter parts of that track.
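To make that idea concrete, here is a deliberately crude sketch in Python (numpy assumed) of what a compressor is doing in that situation: samples above a threshold are pulled down by the ratio, then makeup gain lifts everything, so the quiet passages end up closer in level to the loud ones. A real compressor also has attack and release time constants, which this toy version ignores; the threshold, ratio, and makeup values are just illustrations:

```python
import numpy as np

def smooth_and_lift(track, threshold=0.5, ratio=3.0, makeup_db=4.0):
    """Crude static compression: pull samples above the threshold
    down by the ratio, then apply makeup gain, which raises the
    quiet passages relative to the loud ones."""
    mag = np.abs(track)
    compressed = np.where(
        mag > threshold,
        np.sign(track) * (threshold + (mag - threshold) / ratio),
        track,
    )
    return compressed * 10 ** (makeup_db / 20)  # makeup gain in dB
```

After this, a passage that peaked at full scale and a passage that barely moved the meter are much closer together in level, which is exactly what keeps the quiet bits from disappearing under the rest of the mix.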
Once you have done everything you can with level balances, you can look at the other methods if you still have problems. Adjusting part balances is the easiest of the methods for dealing with masking problems, and the one least likely to do “damage” to the natural sounds of the instruments.
The more locations (or channels) you have to put different sounds in, the easier it becomes to get a workable mix, and to keep different parts from masking each other. The fewer channels you have to mix to, the more skill is required to keep a mix “clear” enough to hear everything. Many experienced mix engineers will tell you that if a mix will stand up in mono, you’ve pretty well got it nailed for clarity. In mono there is no “place” to hide. For that reason, it is always a good idea to check your mixes in mono.
Most of our mixes now, of course, are in two-channel stereo (the term “stereo” can actually refer to any number of channels, although these days it is usually assumed to mean two). There are various things that our ears and brains “measure” to tell us where a sound comes from, but most mixes use only one factor to fix the location of a sound: intensity. By feeding different amounts of the same signal to different channels, it is possible to create a fair illusion of almost any location between or among the speakers, but the full effect is usually only present for a listener sitting near the middle of the “stage”. For the driver of a car, for example, sounds in a mix that are panned to the center often seem to come more from the left. The “phantom center” image simply is not stable for all listening positions.
In fact, in a regular stereo mix, there are only two positions that are guaranteed to be stable: the far left and the far right. “Hard panning” a sound to one side or the other is one quick and easy way to help it stand out in a mix without having to be the loudest sound. You have to be careful with that trick, though, because of a little thing called “center channel buildup”. When a stereo mix is heard in mono, things panned to the center tend to become a bit louder than things that were panned hard to one side or the other. For something like the lead vocal in a song, this may be a good thing, since that part is usually panned to the center. If you have a part that you would like to “goose up” a bit in the stereo mix without it being too loud in mono, hard panning that part to one side may help.
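Both ideas are easy to see in a few lines of Python (numpy assumed). The sketch below uses a constant-power pan law, which is one common choice among several, and then folds the stereo result to mono: the center-panned signal sums about 3 dB hotter than the hard-panned one, which is exactly the “center channel buildup” effect:

```python
import numpy as np

def constant_power_pan(mono, pan):
    """Place a mono signal in the stereo field by intensity alone.
    pan: -1.0 = hard left, 0.0 = center, +1.0 = hard right."""
    theta = (pan + 1.0) * np.pi / 4.0        # map pan to 0 .. pi/2
    return np.stack([np.cos(theta) * mono,   # left channel
                     np.sin(theta) * mono])  # right channel

sig = np.ones(4)
center = constant_power_pan(sig, 0.0)      # ~0.707 in each channel
hard_left = constant_power_pan(sig, -1.0)  # all signal in the left channel

# Mono fold-down is just left + right. The centered part sums to ~1.414
# (+3 dB) while the hard-panned part sums to 1.0: center channel buildup.
mono_center = center.sum(axis=0)
mono_left = hard_left.sum(axis=0)
```

Other pan laws (constant-voltage, -4.5 dB compromise laws, and so on) trade off this buildup differently, which is one more reason to check your mixes in mono.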
Sometimes an individual part or instrument is best used as a mono source panned to one location instead of as a stereo source “all the way across”. This can be especially true for electronic keyboard instruments. Arrangements that are heavy in keyboard or sampled sounds that are supposedly stereo often get mixed with all of these “stereo pairs” panned wide, which can result in a sound that some have called “Big Mono”: a sound that is full and crowded without necessarily giving a real sense of space, and where different parts are not clearly separated out into different locations. In some instances it may be better to pan both channels of a given instrument or patch to the same location, or at least pan them fairly close together, thus leaving other locations to be “filled” by other parts.
In a surround mix, of course, you actually get FIVE “nailed down” locations to play with, which makes it easier to spread out different sounds and keep them from stepping on each other. At least one mix engineer has commented that he found it easier to get a decent mix in surround. Of course, surround mixing methods for music are still under development, at least as far as any generally agreed “standard” methods go (other than how the monitor systems are set up, for which very specific standards are in place).
For this factor, I could have used the word “pitch”, except that we tend to think of a pitch as a single frequency, and most musical sounds actually have components at many different frequencies, so that a note played by an instrument occupies an area (and sometimes more than one area) of the audible spectrum. In mixing it can be useful to think of an instrumental note not as a single sound, but as a collection of different sounds that work together.
When two or more instruments occupy the same spectrum space, they can interfere with each other, with each making it more difficult to distinguish the individual sound of the other. Now, if the artist/producer’s intent is to have two or more instruments combine to form one unified sound, this may not be a problem. Most of the time, though, we want each instrument to have its own space to live in so that it retains its own clear identity. Sometimes it becomes necessary to alter the spectrum of an instrument. We may do this either to help that instrument stand out, or to keep that instrument out of the way of another, or both.
When we need to do spectrum carving, the tool we generally use is an equalizer. When some people get into serious detail, they talk about doing “surgical” EQ adjustments. The equalizers that most of us now have available in our Digital Audio Workstations give us enough power and control to do some real damage if we are not careful. This means that accurate monitoring (or the ability to “listen around” the faults of your monitors) is really important. If you cannot be sure of what you are hearing, don’t be surprised if you make unpleasant discoveries later on. Fortunately, the DAW also gives us the ability to go back and fix our mistakes, so we shouldn’t be afraid to try new, even radical, things to see where we can go.
The first thing that most novice engineers think to do with equalizers is to use them to turn up sounds that they like. For example, how many car stereos have you seen with graphic equalizers whose sliders are adjusted to form a smile? I have heard the derisive term “smiley face” from more than one professional. A certain amount of “boost what you like” is normal, but this can easily be overdone.
Most experienced engineers have learned the power of turning down the parts of a sound that they either don’t like or don’t need. If two or more instruments are fighting for a certain part of the spectrum, you could ask yourself which of these instruments most needs that part of the spectrum and which would be ok without it. You could then pull a slight dip in the EQ for that part of the spectrum on the instruments that don’t need it, and maybe put a slight peak there for the instrument that does need it. This is a common trick for helping make a mix clearer.
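As a sketch of that complementary dip-and-peak trick: the peaking-filter recipe below follows the widely used RBJ “Audio EQ Cookbook” biquad formulas, where a negative gain carves a dip and a positive gain adds a peak. The 300 Hz frequency and the ±3 dB amounts here are made-up illustration values, not a recommendation:

```python
import math

def peaking_eq(fs, f0, gain_db, q=1.0):
    """Biquad peaking-EQ coefficients (RBJ Audio EQ Cookbook).
    Negative gain_db carves a dip at f0; positive gain_db adds a peak.
    Returns (b, a) with a normalized so that a[0] == 1."""
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b0, b1, b2 = 1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A
    a0, a1, a2 = 1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A
    return [c / a0 for c in (b0, b1, b2)], [1.0, a1 / a0, a2 / a0]

# e.g. dip 3 dB at 300 Hz on the guitar, peak 3 dB there on the vocal
guitar_dip = peaking_eq(48000, 300, -3.0)
vocal_peak = peaking_eq(48000, 300, +3.0)
```

Run each track through its own filter and the two instruments stop fighting over that slice of the spectrum, even though neither EQ move is drastic enough to be obvious on its own.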
Sometimes it may be necessary to do things with EQ that seem unnatural at first. When setting up an instrument in a mix, it seems natural to solo that track and adjust it until it sounds right. Sometimes, though, an instrument EQ’ed to sound best by itself will not sound right in the overall mix. Never “settle” on an EQ adjustment on a track until you have heard how it sounds in the overall mix. You will find that sometimes, in fact, an instrument that is adjusted to perfectly fit its place in the mix will sound unnatural or even ugly on its own. Don’t be afraid of that possibility, because that may well turn out to be what is best for the song.
The time factor can be tricky, because although a DAW lets us move any sound to any point in time we want, that doesn’t mean we should. Different things happening at the same time can and do step on each other to some degree, but we can’t always avoid it. I mention time as a factor partly because it is something that the writer or arranger of the song should be aware of before recording begins. It is often a good idea for different parts to take turns “speaking”, and most music benefits from having a certain amount of space left open. Sometimes, though, the engineer is called upon to create a different arrangement by editing. The simplest form this takes is actually cutting out parts of a track to get an instrument out of the way. Of course, this is a decision that needs the artist’s or producer’s approval, but again, one of the benefits of a DAW is the ability to make alternate edits that can be compared to each other without altering or destroying the original work (I LOVE non-destructive editing!).
Failing such actual alterations to the arrangement of a song, about all we have to play with is the use of echo or reverb. Sometimes adding a bit of slap echo or a bit of a reverb tail can help call attention to a particular part. You have to be very careful with such tricks, though, so that you don’t muddy up the mix with them.
In fact, sometimes you may feel the need to make the reverb “take turns” with the part that drives it. For example, maybe you want a BIG reverb on the snare drum in a ballad, but you don’t want that reverb to “wash out” the actual hits of the snare. One way around that is to put a compressor on the reverb return and key that compressor from the snare track. That way, the snare can “punch a hole” in the reverb so that you hear it clearly, but still have that big reverb “tail” on it.
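On a console or in a DAW you would do this with the compressor’s sidechain (key) input. Here is a rough software sketch of the same idea in Python (numpy assumed), with a simple peak envelope follower standing in for the compressor’s detector; all the parameter values are illustrative only:

```python
import numpy as np

def ducked_reverb(reverb, key, fs=48000, threshold=0.1, ratio=4.0,
                  attack=0.001, release=0.1):
    """Duck a reverb return whenever the key signal (the dry snare)
    is hot, so each hit 'punches a hole' in its own reverb tail."""
    a_att = np.exp(-1.0 / (attack * fs))   # fast smoothing coefficient
    a_rel = np.exp(-1.0 / (release * fs))  # slow smoothing coefficient
    env = 0.0
    out = np.empty_like(reverb, dtype=float)
    for i, k in enumerate(np.abs(key)):
        coef = a_att if k > env else a_rel  # attack when rising, else release
        env = coef * env + (1.0 - coef) * k
        if env > threshold:
            # compress the overshoot above threshold by the ratio (in dB)
            over_db = 20.0 * np.log10(env / threshold)
            gain = 10.0 ** (-(over_db * (1.0 - 1.0 / ratio)) / 20.0)
        else:
            gain = 1.0
        out[i] = reverb[i] * gain
    return out
```

While the snare is sounding, the reverb return is pulled well down; as soon as the hit dies away, the release lets the big tail bloom back up behind it.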
Masking is often your enemy, but occasionally masking is also your friend. Sometimes, when you solo a track, you will hear little noises that are not part of the music. You may or may not be able to remove or fix these noises. In the full mix, though, these little noises may be covered by the other instruments. There have been a lot of hit records made with all sorts of little defects in the tracks that no one ever hears because they are masked by the rest of the music. I once had an opportunity to hear the source tracks of Norman Greenbaum’s “Spirit In The Sky”, and some of what I heard was, well, not very impressive... yet the full mix still stands up well. The “defects” were being masked by other things.
Masking then, is something you definitely need to understand in order to create top-notch mixes of dense material (a LOT of current pop/rock music would definitely be described as “dense”). You need to learn when and how to fight it, and when to use it. Much of the skill of mixing involves dealing with masking. Some of the tricks you need to use will seem “unnatural” at first, but as you develop your skill with them most folk will never know you used any “tricks” at all: they’ll just hear that great music.