Tuesday, January 24, 2012
Get your de-essing wrong and either your vocal will still be sibilant, or its high-frequency content will sound mangled and tortured. But how can you get de-essing EXACTLY right?
When did you ever hear someone speak or sing in everyday life and feel that they sounded overly sibilant? OK, some people do have a little whistle in their speech (David Attenborough anyone?), but rarely would you become aware of someone over-pronouncing their 's' sounds.
Put a singer in front of a microphone and it's a different story. Quite often you will find that the signal from the mic is far too 'essy'. It's down to the singer, or the mic, or just the way they react to each other. So you have to do something about it...
...And reach for the de-esser.
It has to be said that de-essing using a plug-in, or a compressor with a de-ess function (or an ordinary compressor cleverly set up), can work. It can work very well, with a bit of care. But sometimes it is incredibly difficult to find the precise setting where the esses come out natural-sounding, and neither over-pronounced nor over-processed.
There is a solution however. Except in really unusual circumstances it is a full and complete solution. But you have to plan ahead.
Let's start from the point where you have a singer standing in front of you and you're listening in the old-fashioned voice-to-ear way, purely acoustically. Old-fashioned, I know ;-)
In a situation such as this, it would be unlikely that you would experience any unpleasant degree of sibilance.
So take out your microphone and have a listen through that. Suddenly things are different, and there may well be a sibilance problem to take care of.
The problem is that microphones that make the vocal sound nice are also the ones that are more likely to accentuate any sibilance. So there is a little bit of nastiness mixed in with the nice.
You might have chosen a more accurate microphone - small-diaphragm capacitor microphones are normally the most accurate - and not had any sibilance at all. But they don't flatter the vocal. And you can't have it both ways.
Well actually, you can...
There is absolutely no reason why you shouldn't record the vocal with two mics simultaneously. One will be your favorite, warm, luscious large-diaphragm tube microphone, maybe even through a tube preamp if you like the tube-on-tube sound. The other will be an accurate small-diaphragm mic, recorded through a transistor preamp.
You will end up with two vocal tracks: one warm and lush, but with a sibilance problem; the other clinically clean, with clean esses too.
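To illustrate the kind of editing involved, here is a minimal Python sketch. Everything in it is my own illustrative invention, not an established tool: it uses a deliberately crude FFT high-pass as a sibilance detector, then hard-switches from the warm take to the clean take wherever the warm take's high-frequency energy spikes. It assumes the two recordings are already sample-aligned.

```python
import numpy as np

def deess_by_mic_switching(warm, clean, sr, hf_cutoff=5000.0,
                           frame=512, threshold_db=-30.0):
    """Blend two time-aligned vocal takes: use the 'clean' small-diaphragm
    take wherever the 'warm' take's high-frequency energy suggests an 's'.

    warm, clean: mono float arrays of equal length, sample-aligned.
    Returns the blended signal.
    """
    # Crude high-pass via FFT to isolate sibilant energy (illustrative only)
    n = len(warm)
    spectrum = np.fft.rfft(warm)
    freqs = np.fft.rfftfreq(n, 1.0 / sr)
    spectrum[freqs < hf_cutoff] = 0.0
    hf = np.fft.irfft(spectrum, n)

    out = warm.copy()
    for start in range(0, n, frame):
        seg = hf[start:start + frame]
        rms = np.sqrt(np.mean(seg ** 2)) + 1e-12
        if 20 * np.log10(rms) > threshold_db:
            # This frame is sibilant: take it from the clean mic instead
            out[start:start + frame] = clean[start:start + frame]
    return out
```

In practice you would crossfade at each splice point rather than hard-switch, exactly as you would when editing by hand, but the principle is the same: the warm mic everywhere except the esses.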
All it takes is a little editing and you will have the perfect warm vocal track, with no sibilance to be heard!
Publication date: Saturday January 21, 2012
Author: David Mellor
Monday, January 16, 2012
In what order should you process your tracks? What will happen if you get the order wrong?
A common question received here at Audio Masterclass and RecordProducer.com is, "In what order should I connect my processors and effects?". Of course, this question applies equally to hardware processors, analog or digital, and to DAW plug-ins.
Firstly, let's consider why you need to use plug-ins or hardware processors and effects at all. Yes, why?
One reason is to correct, ameliorate or compensate for some defect in the original signal. Suppose for instance you had recorded an acoustic guitar and you found there was an annoying resonance in the lower midrange, then you would use an EQ plug-in to correct this problem.
Another reason is to make a signal sound better, either individually or in the context of the whole mix. So you might find a recording of a violin or other stringed instrument too dry. You would add reverb to make the sound more rich and lush.
In my opinion, it is better to correct faults first, then think about improvements. Indeed, how can you improve something while there are faults clouding your judgment? Would you not first tidy your room, then clean it, then arrange the pot plants and ornaments?
If a vocal is excessively sibilant, then processing is easier on the cleanest, purest version of the signal you can obtain, which of course will be the signal before any other processing. Because de-essing is quite tricky to get right, in my view it should be tackled immediately, and especially before compression, which would make the task very much more difficult.
If a signal has a fault in terms of frequency balance, then it makes sense to correct this fault before further processing. Why would you process a signal that had a defect? If the result turned out well, it could only possibly be by chance.
When the signal is as clean, clear and crisp as you would ideally have liked the recording to have been in the first place, then you can compress. You can do this either because you want to reduce the dynamic range quickly and easily, or because you simply like the sound of compression.
If there was some noise or other unwanted sounds in your original recording, then with the power of your DAW you can simply edit out sections where the instrument isn't playing. When it is playing, it will almost certainly mask the noise, unless there was a serious problem that you really should have attended to during the session.
I have placed the expander/gate at this point in my list because this is where it usually comes in my own personal thought process. However, you might want to place it before the compressor, on the grounds that compression always increases the noise level; if you have expanded or gated the signal first, there is less noise to compress. If you edit out the non-playing but noisy parts of the track, then you are effectively doing this.
Another point of view is to place the expander/gate after the compressor if the noise level wasn't objectionable before compression, but it is after compression.
As you can see, the positioning of the expander/gate is quite flexible. The differences are subtle but worth experimenting with.
There is often a case for EQing a compressed (and perhaps expanded or gated) signal. This might be because you find you can get a more pleasing sound with EQ, or to fit the instrument or vocal into the context of the mix as a whole. Whereas previously you used EQ to correct a fault, now you can use it to make improvements. In other words, if your original recording was perfect in every way, you would now be making it even better. The same logic applies if you have had to de-ess, EQ, compress and expand or gate to make it as perfect as possible.
In normal circumstances, reverb is used to emulate the ambience and reverberation of real-life acoustic spaces, so it is applied to signals that have been processed to perfection.
You can place your reverb plug-in at a different point in the plug-in sequence, but this would be for a special effect. This would not be the way to obtain a natural sound quality, but if you are looking for something unusual, then experimenting with the placement of the reverb plug-in is one way to do it.
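The ordering described above can be summarised as a simple chain. Here is a hedged Python sketch of that signal flow; every processor function is a one-line placeholder of my own invention (not real DSP), purely to show the sequence from de-esser to reverb:

```python
import numpy as np

# Placeholder stand-ins for each processor, just to show the ordering.
def de_ess(x):     return x                     # tame sibilance first, pre-compression
def eq_fix(x):     return x                     # corrective EQ: remove defects
def compress(x):   return np.tanh(3 * x) / 3    # crude soft squeeze of the dynamics
def gate(x):       return np.where(np.abs(x) > 0.01, x, 0.0)  # expander/gate
def eq_sweeten(x): return 1.1 * x               # creative EQ: enhance, don't repair
def reverb(x):     return x                     # ambience last, on the finished signal

CHAIN = [de_ess, eq_fix, compress, gate, eq_sweeten, reverb]

def process(signal, chain=CHAIN):
    """Run the signal through each stage in order."""
    for stage in chain:
        signal = stage(signal)
    return signal
```

Reordering the `CHAIN` list is exactly the experiment discussed above: moving `gate` before `compress`, or `reverb` earlier for a special effect, is a one-line change.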
One last point
Remember that there are no rules in recording. Whatever sounds good to you, and your client or the market you sell into, is most definitely the right way to do things. But the above sequence of processes and effects is in most cases the best way to work.
Wednesday, January 11, 2012
Aux channels are commonly found in the analog world; aux tracks in the world of the DAW. What do they have in common, and how do they differ? What can you do with them?
Let's start in the analog world, specifically in the analog mixing console, which is where many of our modern concepts of DAW mixing originate.
Analog auxiliary channels
An analog mixing console possesses a number of channels. You could, for instance, connect a number of microphones to the same number of channels, and mix a band playing live directly into stereo. Or you could connect the tracks of a multitrack recorder and mix to stereo.
Each channel will have a preamplifier, EQ, auxiliary sends (an explanation for which I will save for another article), pan, fader, solo and mute controls.
A small mixing console might have eight channels; a large console might have thirty-two or more (often in multiples of eight).
Auxiliary channels are much like standard channels, but with fewer features. Typically an aux channel will have a line input (no mic), pan, rotary fader, solo and mute. There may be a couple of aux sends but there will be no (or minimal) EQ.
Although aux channels don't have as many facilities as standard channels, they take up less space on the surface of the console. So they are easily provided as useful extra channels that you can use when you don't need the full features of a standard channel - reverb returns for instance.
Digital mixing consoles can have auxiliary channels too.
Auxiliary channels are sometimes called 'auxiliary returns', but this implies that there has to be a send-return relationship of some kind. Often this is so, but it doesn't have to be.
Auxiliary tracks in the DAW
We tend not to use the word 'channel' so much with reference to the computer digital audio workstation. We talk of 'tracks' instead. In the analog world, a track is something possessed by a recorder, never by a mixing console.
If you look at the edit screen of your DAW, then you will see the tracks clearly. If you look at the mixing screen then you will see a close resemblance to the channels of an analog mixing console. But in the DAW we usually call them tracks. To old-timers it is a little uncomfortable to talk of 'aux tracks' and 'master tracks', but since they are normally represented in time on the edit screen, along with the recorded tracks, it does make a certain amount of sense.
In the DAW, an auxiliary track is a track that doesn't have any recorded audio associated with it. All the other tracks will have audio on them. You cannot record onto an aux track.
What you can do with an aux track is insert a reverb plug-in. You can insert any plug-in you like, but reverb is the most commonly used. You can use the aux sends of the audio tracks to send signals to the reverb, then add that reverb to the stereo mix through the aux track.
In a DAW, an aux track is as fully-featured as any other track. The only difference is that it can't record.
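The send-return arrangement described above can be sketched in a few lines of Python. This is a hypothetical illustration of the routing, not any DAW's actual API: each audio track contributes a little of its signal to one shared reverb sitting on the aux track, and the aux track's output is added back into the mix.

```python
import numpy as np

def mix_with_reverb_return(tracks, send_levels, reverb, return_level=1.0):
    """Send some of each audio track to one shared reverb on an aux track,
    then add the reverb's output back into the mix.

    tracks: list of equal-length mono arrays (the audio tracks).
    send_levels: per-track aux-send amounts (0.0 = no send).
    reverb: any function mapping an array to a processed array (placeholder).
    """
    dry = sum(tracks)                                            # the audio tracks
    aux_input = sum(t * s for t, s in zip(tracks, send_levels))  # summed aux sends
    wet = reverb(aux_input)                                      # the aux track's insert
    return dry + return_level * wet                              # aux fader into the mix
```

Note the economy this buys you: one reverb serves every track, with each track's send level deciding how wet it sounds.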
One good use (among many) for aux channels and aux tracks is subgrouping.
For instance, you might have recorded a band and used eight tracks for the drums. It makes sense to mix those eight drum tracks into stereo, then send them to a stereo aux track so that you can control the level of the entire drum set on just one fader.
(On an analog console you may not have a stereo aux channel, but you could just as easily use two mono aux channels.)
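As a rough sketch of subgrouping, here is a minimal Python illustration (hypothetical function, simple linear panning rather than any console's actual pan law): several mono drum tracks are panned and summed to stereo, and a single `group_fader` value then scales the whole kit, just like the one fader on the stereo aux.

```python
import numpy as np

def subgroup_mix(tracks, pans, group_fader=0.8):
    """Mix several mono tracks to one stereo subgroup under a single fader.

    tracks: list of equal-length mono arrays (e.g. eight drum tracks).
    pans: per-track pan positions, 0.0 = hard left, 1.0 = hard right.
    group_fader: one gain applied to the whole subgroup.
    Returns a (2, n) stereo array.
    """
    left = sum(t * (1.0 - p) for t, p in zip(tracks, pans))
    right = sum(t * p for t, p in zip(tracks, pans))
    return group_fader * np.stack([left, right])
```

Riding `group_fader` moves the entire drum set up and down in the mix without disturbing the internal balance you set with the individual track levels and pans.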
If you have any questions about aux tracks that still need answering, click on the 'Ask a Question' link at the top of the page.
Publication date: Tuesday January 10, 2012
Author: David Mellor
Tuesday, January 3, 2012
Mixing Live Sound for Tony Little in 2011
This was to be aired as late-night infomercials and to be released on DVD for workout training.