Welcome to No Limit Sound Productions

Company Founded

Our services include Sound Engineering, Audio Post-Production, System Upgrades and Equipment Consulting.
Our mission is to provide excellent quality and service to our customers. We offer customized service.

Thursday, May 31, 2012

Q: Are analog mixing consoles compatible with digital audio workstations?

An RP reader asks whether he can use his Allen & Heath GS-R24 analog mixing console with a Pro Tools or Pyramix DAW system.

By David Mellor, Course Director of Audio Masterclass

This is an interesting question because the answer is so obviously, "Yes". But the response could just as easily be, "Why would you want to?"

Let's look at some scenarios...

Live recording direct to stereo

By 'live' I mean a situation where the instruments and/or singers are all performing at the same time, with no overdubs expected or involved.

This is a classic situation where the connection of an analog mixing console to a DAW is a marriage made in heaven.

Suppose you have an acoustic ensemble like an orchestra, choir or jazz band (amplified electric instruments are fine too, as long as no PA is involved).

Simply arrange your microphones the way you like, mix the sound the way you like and record the stereo output of the mixer into your DAW. You will come away from the session with stereo takes, ready to edit and sweeten.

What you can't do here, obviously, is remix the recording. The signals are mixed before they are sent to the DAW. There's no problem with this if you know what you are doing. Countless recordings have been made this way.

If there is a PA system involved, like at a live gig, then recording becomes a little more complicated.

What you often cannot do is record the stereo output of the PA mixing console. The reason for this is that the FOH (Front of House) engineer will take into account the sound coming from the amplifiers on stage. So in the FOH mix, the guitars will tend to be a little quieter than would be ideal for a recording. This problem diminishes as the venue gets bigger, so you could probably record a stadium gig just fine; a cozy bar could be a problem.

The best option for live recording where there is a PA is to split the microphone signals using purpose-designed transformer splitters. This gives you the raw feeds to work with. Another option is to use the insert point sends of the FOH console. The drawback with this is that if the FOH engineer changes any of the mic gains during the show, it will affect your recording.

Live recording to multitrack

What you will need for this is for your DAW's audio interface to have multiple inputs. If you want to record sixteen independent tracks simultaneously, you will need sixteen inputs.

What if you only have an eight-input interface?

In this case you will have to pre-mix some of the channels, and accept that you won't be able to change these premixes later. You'll keep important tracks like the lead vocal separate, of course.

Mixing a multitrack recording from your DAW

OK, why don't you just mix it in the DAW?

Well perhaps you want to get your hands on some real faders. That's a good enough reason.

Perhaps you feel that the analog sound of the console will give your mix some kind of benefit. If you really think that it will, then that's a good enough reason too.

Perhaps you are fed up with never being able to finish a mix in your DAW, because you can always go back to it and tweak it at any time later. Well if you mix on an analog console, when you decide that the mix is finished and print it to stereo, you can zero the console again. Now you can't go back other than by starting again. That's a good enough reason too.

To take multitrack audio from the DAW to the console, you will need an audio interface with multiple outputs. More is better in this case. Eight outputs won't give you much flexibility. Sixteen will be a lot better. If you have more than sixteen tracks in your DAW, you will have to premix some or invest in more audio interfaces for your studio.


What we can see here is that there are a number of valid reasons for using an analog mixing console with a DAW. In all cases you have to consider the number of inputs and/or outputs that your audio interface possesses. The precise make and model of the console don't really matter, as long as it is of professional quality and in good working order.

Publication date: Thursday December 23, 2010
Author: David Mellor, Course Director of Audio Masterclass

Korg M50- Drum Kits (part 1 of 2) - In The Studio With Korg

Wednesday, May 30, 2012

Should you need a manual to operate a mixing console?

 If a mixing console needs a manual to operate, then there must be something seriously wrong, surely?

By David Mellor, Course Director of Audio Masterclass

I had the pleasure of working on an amateur ballet show the other day. I don't do much live sound these days so it's nice to keep my hand in.

Of course I took the precaution of reading the theater's specification before I turned up at the crack of dawn on the day of the dress rehearsal and show.

And when I walked into the control room, a cheery sight greeted me - a conventional analog mixing console.

You know - I didn't even notice what make it was, but it had the channels in the right place, groups in the right place, all the usual facilities. Easy peasy.

Now supposing the theater management had some time earlier decided to re-equip the sound booth (fat chance, usually a theater has to burn down before that happens!) and someone 'in the know' had recommended a digital console?

Suddenly from analog simplicity we move over to a potential nightmare.

There is one vital accessory any digital console needs - the manual! You can't expect to be able to operate a digital console without the manual.

If it's a console that you have worked with before, then fine. Or if it's a console by the same manufacturer as one you have used previously, you might be able to pick up how it works quite quickly.

But where analog consoles have matured to the point where all their basic facilities are almost identical, digital consoles are all very different to each other.

You'll need the manual just to understand the menu system of some digital consoles!

What is the answer to this, I wonder?

It could be for sound engineers to equip themselves with their own library of manuals, so they can be prepared for any situation. Or perhaps they should develop the 'digital' centers of the brain, possibly through dietary supplements.

No, the solution will come in time. Gradually manufacturers will pinch each other's best ideas and eventually the digital mixing console will evolve into a unified form.

Digital consoles will have similar facilities and similar methods of operation - facilities and methods that have been found to work well by actual pro users, whom manufacturers obviously have to please.

But I sense we're in for a long haul before that happens. Does anyone have any ideas how we can speed up this process?
Publication date: Tuesday November 30, 1999
Author: David Mellor, Course Director of Audio Masterclass

Hubay: Violin Concerto No. 3 / Stabrawa · Fischer · Berliner Philharmoniker

Tuesday, May 29, 2012

Why do some people use equipment that was designed when dinosaurs ruled the Earth?

Do you still use DAT? CD recorders? Outboard effects units? Come on and admit it - you're a dinosaur! 

By David Mellor, Course Director of Audio Masterclass

One thing that people forget about dinosaurs is that they ruled the Earth for a lot longer than we humans have, so far.

So we might think we are the most successful species ever, but we haven't quite proved that we can live up to the achievements of the great lizards.

But there are audio dinosaurs too - equipment that really ought to be extinct by now.

One such is the DAT recorder. DAT stands for 'Digital Audio Tape' and in its heyday everyone had a DAT recorder. And anyone who didn't have one desperately wanted one.

DAT was used as a stereo mastering format. Before DAT, which means before around 1987, the only affordable option was to master to analog tape (ultra-expensive digital mastering formats had existed since around 1980).

Analog tape may have an interesting sound quality that we might sometimes use as an effect these days. But back in the 1980s people hated its murky noise and distortion. DAT was like pure spring water, distilled three times, in comparison.

But then people started mastering directly to computer files, and storing their backups on writeable CD or DVD. Then they started using 24-bit resolution and 96 kHz sampling rate, where standard DAT was only capable of 16-bit / 44.1 or 48 kHz.
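
For anyone who likes to see the numbers, here is a rough comparison of the raw, uncompressed PCM data rates involved (a simplified sketch — it ignores error correction and file overheads, so treat the figures as illustrative only)...

```python
def stereo_data_rate_kbps(bits, sample_rate_hz, channels=2):
    """Uncompressed PCM data rate in kilobits per second."""
    return bits * sample_rate_hz * channels / 1000

# Standard DAT: 16-bit at 48 kHz, stereo
dat = stereo_data_rate_kbps(16, 48000)     # 1536.0 kbps

# The newer file-based standard: 24-bit at 96 kHz, stereo
modern = stereo_data_rate_kbps(24, 96000)  # 4608.0 kbps

print(modern / dat)  # 3.0 — three times the data DAT could carry
```

So the 24-bit / 96 kHz recordings people moved to simply carry more information than a DAT tape could hold.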

So gradually the point of DAT became less and less. And now the only real use DAT machines have is to play back old tapes from the archive.


Do you know different? Do you have a DAT recorder that you still actively use?

I'll ask the same question about CD recorders. Do you have a standalone CD recorder that you still use?
Outboard effects units? Why oh why when so many excellent plug-ins are available?

If you are still using any of these types of equipment, please tell us about your motives and experiences, and why you refuse to change with the times.

It could easily be that the ancient dinosaurs of audio are right, and the computerized mammals are wrong. Discussion below...

(By the way, the CD burner illustrated is a recently released product. Someone must be buying it.)

From The Vault (Korg): Getting To Know the Korg PA80

Monday, May 28, 2012

An RP reader has an interesting phase problem. Is this something he should worry about?

By David Mellor, Course Director of Audio Masterclass

A question from an RP reader...

"It seems that everything I record is out of phase, or in inverted polarity. Could it be the power coming into the house? Or not being grounded properly? Do I lose quality when this happens? And when I invert to the correct phase, do I lose anything there?"

Let me start by saying that this has nothing to do with mains power or grounding. Having got that out of the way I can begin to address this issue. Firstly, what exactly is the problem?

When people use the terminology 'out of phase', they generally mean that the signal is inverted. There is a lot else that could be said about phase, but the issue here is signal inversion, so that is the terminology I will stick to.

Imagine a microphone in front of a bass drum. The drummer whacks the beater and the head moves outwards towards the microphone. Conventionally, this will cause the microphone to generate an initial positive electrical pulse. Inward pressure on the diaphragm of a microphone creates a positive voltage.

The signal from the capsule of the microphone goes to pins 2 and 3 of its XLR connector. If this is wired correctly, pin 2 will carry a positive voltage compared to pin 3.

This pulse will then travel through preamplifier, audio interface, digital audio workstation, bounced .wav file to mp3, all the way around the world via the Internet, be downloaded onto someone's iPod and ultimately be reproduced by a loudspeaker in their iPod dock...

The acoustic pressure pulse created by the bass drum will cause the diaphragm of that loudspeaker to move outwards. And the listener's eardrum will move inwards just as if he or she had been standing right in front of the drum.

This is how things should be. But at many points in the chain, the signal can become inverted. The most likely cause is a cable where one of the connectors is wired with pins 2 and 3 the wrong way round.

It can also happen that a piece of equipment is incorrectly designed and inverts the signal. You can find some interesting comments on poorly designed equipment here...

Does signal inversion matter?

While it may seem like a horrendous distortion to invert an audio signal, the human ear doesn't seem to mind. Indeed, it is very difficult to tell whether or not a signal has been inverted.

Take the example of a drum kit with close mics. Whereas the bass drum pushes towards its microphone when struck, thus producing an initial positive pulse, every other drum initially moves away from the microphone diaphragm. So one drum in the kit creates positive-going initial pulses, while every other drum creates negative-going pulses. Who ever worries about that? Nobody. It makes no practical difference.

If I were to write a list of one hundred things to worry about in audio, signal inversion, in itself, wouldn't even be on that list.

By the way, when considering whether a signal is correct or inverted, the phrase 'absolute polarity' (or 'absolute phase') is often used. If you look this up, you will find that a lot of people are fascinated by the topic. It's interesting, but not so relevant in day-to-day recording.

When signal inversion absolutely does matter

Although the absolute polarity of a single signal is not that much of an issue, when you start to combine signals it becomes very important.

For instance, if you have a stereo signal and, through some kind of error, the left channel becomes inverted, the stereo sound stage completely breaks down. If, for instance, an instrument is panned center, the left loudspeaker will be pushing out while the right is sucking in and vice versa. There is nothing in the natural world that creates this effect and the brain has no sensible way of interpreting it. It sounds bad indeed, and if you have not experienced it you should try some experiments in your DAW.

Alternatively, you might be processing a signal in some way that requires that you mix in the processed version with the original. If the processed signal is inverted, then it will partially cancel the original when it is mixed in.
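
You can demonstrate the worst case of this with a few lines of Python (the test tone and values here are purely for illustration — a real processed signal would only partly resemble the original, so it would only partly cancel)...

```python
import math

# A 1 kHz test tone, one second sampled at 44.1 kHz
sr = 44100
original = [math.sin(2 * math.pi * 1000 * n / sr) for n in range(sr)]

# Suppose the "processed" copy has been inverted somewhere in the chain
processed = [-s for s in original]

# Mixing the two at equal level: a fully inverted, otherwise identical
# copy cancels the original completely
mix = [a + b for a, b in zip(original, processed)]
print(max(abs(s) for s in mix))  # 0.0 — total cancellation in this worst case
```

In practice the processing changes the signal, so the cancellation is partial rather than total — which often sounds like a strange, hollow EQ.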

Signal inversion has the potential to create havoc in any sound engineering scenario, which is why it is so important that absolute polarity is maintained at every point in the signal chain.

Back to the question...

If everything that the questioner records is inverted, then it would seem to be the case that there is a single point in the system where the inversion is happening. If so, this should be sought out and the problem resolved. Inverting back to the correct polarity will not cause any further problems, but the issue should not be allowed to arise in the first place.

In summary, although absolute polarity is of less relevance than many other factors in recording, any item of hardware or software, or any process, that inverts the signal has the potential to cause significant problems. There is a correct way to handle signals, and that is not to invert them. Not unless you have a very good reason to do so.
Publication date: Monday May 28, 2012
Author: David Mellor, Course Director of Audio Masterclass

From The Vault (Korg): Introducing the PA1X and PA1X Pro!

Friday, May 25, 2012

Q: Compression, EQ or reverb - where should I start?

An Audio Masterclass visitor is trying to decide the order of his plug-ins. Three plug-ins, six combinations. How to decide...    

By David Mellor, Course Director of Audio Masterclass

Well we think there are six combinations. Let's see...
  • Compression - EQ - reverb
  • EQ - reverb - compression
  • Reverb - compression - EQ
  • Reverb - EQ - compression
  • EQ - compression - reverb
  • Compression - reverb - EQ
Each of these will produce a distinctly different sound quality, and any might be useful depending on what it is you want to achieve. It all rather depends on whether you want to follow standard practice, or be a little more unconventional.
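
(For the programmers among us, the six orderings above are just the 3! permutations of three processors. A few lines of Python, offered purely as an aside, will list them all...)

```python
from itertools import permutations

plugins = ["EQ", "compression", "reverb"]

# 3! = 6 possible orderings of three plug-ins in a serial chain
chains = [" - ".join(order) for order in permutations(plugins)]

for chain in chains:
    print(chain)
```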

In general, if there are any frequency distribution problems in the signal, you will want to EQ it first to resolve the problem. Then you might consider that the dynamic range is too wide or you want the 'magical sound' of compression. At that point you might consider the signal a little dry, so you add reverb (normally as a bus effect).

It may be however that the frequency distribution of the signal is fine as it is, so you can go straight into compression. Then you might consider that you could improve the signal beyond its existing state with EQ. Reverb once again would be at the end of the chain.

(It may be that you have EQ problems to resolve before compression, then you want to improve the signal with EQ after compression, so there are two EQ stages. That is certainly possible.)

What would be quite unusual however would be to apply reverb anywhere other than the end of the signal chain. And if you had reverb as a bus effect, then any EQ or compression you applied to it would affect the reverb only, not the original signal.

Let us therefore consider using reverb as an insert effect. It's unusual, but you can't go to prison for it.

If you add reverb first, then compress, then you should get some interesting dynamic effects. So the reason for doing this is that you want to go beyond normal boundaries and find new sounds. Where you put the EQ is up to you, because the results will be unpredictable and will depend very much on the actual signal you are using.

So in summary you can play safe...
  • EQ - compression - reverb
  • Compression - EQ - reverb
  • Compression - reverb - EQ (which is fairly safe)
Or you can be more adventurous...
  • Reverb - EQ - compression
  • Reverb - compression - EQ
  • EQ - reverb - compression

Korg Triton Studio (Extreme) "Master Series" DVD Tutorial

Thursday, May 24, 2012

Can a spectrograph give you insight into EQ, or should you just listen?

A spectrograph can give you a lovely display of the frequency content of a piece of audio. But what good is LOOKING when the end product is for LISTENING?

By David Mellor, Course Director of Audio Masterclass

When I was first learning about pro audio, I was fascinated with the idea of being able to see frequencies as well as hear them. I felt it would give me a tremendous insight into what I was listening to, with the aim of creating better recordings and mixes.

But an audio spectrograph such as the model made by Bruel & Kjaer was so expensive I could hardly afford even to get a look at one. Some years later however, a hand-held spectrograph made by Ivie fell into my hands and I grabbed the opportunity to get some useful experience.

What I learned however was that although the spectrograph is a useful instrument for offering insights into audio, it is vitally important to interpret what it says, putting your ears first and eyes second.

These days, any tuppeny-ha'penny EQ plug-in can have a spectrogram display. (By the way, according to the Oxford English Dictionary, the output of a spectrograph machine or software is a spectrogram, but let's not worry too much about that because hardly anyone else does - the two terms are often used interchangeably. You can call the device an audio spectrum analyzer too.)

So what does it really show you...?

To put it simply, a spectrograph will analyze the energy in various frequency bands and take averages over an interval of time. It will show you the strong frequency components in a signal, and also the weak.
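
If you're curious what that analysis looks like in code, here is a much-simplified sketch in Python. A real analyzer windows the signal and averages over successive frames; this just measures the energy at single DFT bins, which is enough to show the principle...

```python
import math

def bin_energy(signal, sr, freq_hz):
    """Magnitude-squared of one DFT bin -- a single 'band' of the analysis.
    A real spectrograph sums many such bins per band and averages over time."""
    n = len(signal)
    k = round(freq_hz * n / sr)
    re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(signal))
    return re * re + im * im

# One second of a 440 Hz tone: the energy should concentrate at 440 Hz
sr = 2000
tone = [math.sin(2 * math.pi * 440 * i / sr) for i in range(sr)]

e_low, e_mid, e_high = (bin_energy(tone, sr, f) for f in (220, 440, 880))
print(e_mid > 100 * max(e_low, e_high))  # True — the tone dominates its own band
```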

Suppose for instance that there is something worrying you frequency-wise about the recording of a single instrument. You could look at its spectrogram and see exactly where the problem lies, then use EQ to fix it.

Well, this might work - if the problem is the strongest band of frequencies that the recording of the instrument possesses. But it could be some other band of frequencies - perhaps not particularly strong but irritating nonetheless. The spectrogram won't tell you that. In my experience, a spectrograph can point out maybe one or two bands where stuff is going on, but other features of a signal that you may be able to hear clearly are either not shown distinctly, or they are hidden in a mush of spurious detail.

This is not to say however that the spectrograph is without its uses. In my view, it isn't a question of ear versus eye, but what it might be possible to achieve if both senses are used together.

With experience, I find that I don't need a spectrograph to point out things that I can hear clearly. If an instrument has an ugly resonance, I can sweep the frequency control of the EQ to find it. People with keener hearing than mine would go to the right region of frequencies directly.

But when I have a tougher problem to solve - trying to filter out the sound of the vuvuzela comes to mind - then the combination of ears and eyes can be more insightful than ears alone. Clearly though the end result should be judged solely from listening.

One more thing... if you're looking to give your masters the 'commercial' sound, then it doesn't hurt to emulate the frequency balance of material that is already out there selling. Once again, the spectrograph should not be telling you what to do, but it can be an insightful guide.

Dvořák: Cello Concerto / Isserlis · Gilbert · Berliner Philharmoniker

Wednesday, May 23, 2012

Can I send the output of my mixer to a compressor then bring the signal back to the mixer?

An Audio Masterclass visitor has worked out an interesting way of connecting his compressor. Will it work? Or is something going to blow?  

By David Mellor, Course Director of Audio Masterclass

A question from an Audio Masterclass visitor...

"What's wrong in connecting the main outputs of a mixer to a compressor and return the signal to mixer inputs for equalizing, instead of connecting through inserts? It sounds unorthodox, but is there any serious difference in sound?"

It looks like we're in the analog universe here. But that's OK, the analog universe still exists; it hasn't imploded into a blob of dark matter just yet.

It is often interesting to compress the entire mix, rather than only individual channels. You have to know what you're doing so it is advisable to conduct a lot of experiments before committing the best song you ever wrote to this treatment.

Some mixing consoles make compressing the mix easy - they have insert points in the stereo master outputs.

"What's an insert point" comes a small voice from the back of the room.

Mixing consoles often have an insert point in each individual channel. The signal can be tapped off at this point and processed, often through a compressor. The signal is then reinserted into the channel's signal path. Normally only the processed signal is used; there is no mixing of processed and unprocessed signal as there would be when using auxiliary sends and returns.

Although many mixing consoles have channel inserts, fewer have inserts in the master outputs (or the group outputs either).

But that doesn't mean you can't compress the mix. All you have to do is connect the compressor between the master outputs of the console and the inputs of your stereo master recording device. That will work just fine. Remember to click the 'stereo link' button on the compressor.

But what if you want to compress the mix, and then EQ it? There's absolutely no reason why you shouldn't do this, if you want to. But how?

With a DAW this is dead easy, but we are not talking about DAWs. We're in the analog universe, remember?

So what would happen if you connected the outputs of the console to the compressor, and then brought it back to a pair of channels for EQ? (You would need splitter cables so that you could also connect to your stereo master recording device.)


You have just blown your speakers.

The reason for this is that you have just created a perfect circle from mixer output back to mixer input back to mixer output again. This creates a positive feedback loop in which signal will build and build all the way up to maximum level. If you don't at least blow your tweeters then you have been lucky. But it wouldn't have been pleasant to listen to.
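
Here's a toy model of that loop in Python (the loop gain figure is purely hypothetical — the point is only that any net gain above 1.0 around the loop grows without bound until something clips)...

```python
# Each trip around the loop (mixer out -> compressor -> channel -> mixer out)
# multiplies the signal by the net loop gain
loop_gain = 1.2   # hypothetical: anything above 1.0 means runaway feedback
level = 0.001     # a tiny noise floor is all it takes to start

for _ in range(100):                     # 100 trips around the loop
    level = min(level * loop_gain, 1.0)  # clip at full scale

print(level)  # 1.0 — the signal has built all the way up to maximum level
```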

So don't do this.


Be very careful in what you do.

The one thing you must not do is route the compressor return channels to the master outputs. That's what caused the positive feedback loop.

But if you don't do that, how can you get the EQ'd signal out of the mixing console and into your recorder?
Well there are a number of ways.

If you have individual channel outputs then you're laughing. It's easy, but not so many consoles have this feature.

Another way would be to take signals from the insert points of the compressor return channels. This would only be useful if the inserts are post-EQ. If they are pre-EQ, then you won't get the equalized signal.

If you can't do this, then you would need to find a couple of spare auxes that you are not using in the mix. Assuming the auxes are post-EQ, then this will work. It's just a bit fiddly.

There are other alternatives that involve recording the compressed signal, then EQing it and re-recording it.

Of course the simple answer would be to buy a DAW, where this process is easy. But that's not the point.

One of the skills of a good engineer is to find alternative ways of doing things. Finding a way that is different from other people's is an excellent route towards creativity.

Mozart: Symphony No. 40 / Pinnock · Berliner Philharmoniker

Tuesday, May 22, 2012

Do you fade out at the end of your songs? Why?

There's nothing like bringing a song to a rousing conclusion. And a fade is nothing like bringing a song to a rousing conclusion. So why do it?

By David Mellor, Course Director of Audio Masterclass

Fading out at the end of a song has become something of a recording cliche, but no-one seems to know for sure why it started.

When it started is possible to track down: the Farewell Symphony of 1772 by Joseph Haydn. At the end of the work, the musicians gradually leave the stage, leaving only two violins to play the final notes. That was before the era of recording of course. Oh well, let the pedants revolt.

146 years later, Gustav Holst faded out the last movement of his suite The Planets. It closes with a women's chorus performing in a room separate from the main auditorium, the door to which is slowly closed.

In recording, America (1918) by the chorus of evangelist Billy Sunday and Barkin' Dog (1919) by the Ted Lewis Jazz Band are reported to end in fades.

But fades became more commonplace in the modern era of recording which started in the 1960s.

Some say that the idea of a fade is that the music never really ends, it just keeps on going. There's a certain amount of arty-tartiness in that.

Others will say that it is just the lazy person's ending because they can't be bothered to think up a proper finale for their song.

I would say that it is indeed a bit lazy. But on the other hand it could be a sign of a song that has been conceived in the studio rather than for live performance.

Other than the classical music examples above, it is impossible to fade out live. Try it and see if it works. No don't bother, your audience will look at you as though you are a pack of idiots. (Yes I tried it, around the age of 15 or so, only once.)

But there can be more to fades than just fading.

Some songs have really long fades - Hey Jude by The Beatles for example.

Also by The Beatles there is Strawberry Fields Forever, which fades out and fades back in again. George Martin described how this happened: there was a bit of a wild jam going on at the end of the song. At one point it fell apart, then came back together again. So they just faded out the part that didn't work!

Creativity in fading indeed!

P.S. When Holst's The Planets was written in 1918, only eight planets had been discovered, so the work ends with Neptune. Pluto was discovered in 1930. In 2000, Colin Matthews was commissioned to write an additional movement to depict Pluto, and thus bring The Planets into line with the known solar system. Unfortunately Pluto's planetary status was canceled in 2006. Whether Matthews still gets his royalties isn't known...

Beethoven: Piano Concerto No. 4 / Perahia · Mehta · Berliner Philharmoniker

Monday, May 21, 2012

How do you back up your data? Do you back up your data?

If you don't back up your data, then you are headed for sure and certain disaster. It's a question of 'when?', not 'if?' 

By David Mellor, Course Director of Audio Masterclass

Hard disks always fail. Sooner or later the disk upon which your most treasured data resides will become as dead as the proverbial ex-parrot.

But of course you have a backup plan, don't you? You can simply replace the disk, restore your data and carry on as though nothing has happened.

But the sad fact is that most people don't have a backup, and most of those who do have one or more of the following problems...

  • Their backup is no more secure than their primary data storage

  • They have never tested the restore procedure

  • The backup is way out of date

  • Data is prone to loss during the backup procedure
Let's look at each of these in turn...

Suppose you have a backup. Where do you keep it? Attached to your computer at all times? Well when the burglars come round to your house, they will take the lot. You have lost your data.

If your house burns down, you've lost the lot. Even if you hide your backup under the floorboards, the fire (or flood) will surely find and destroy it.

Your backup needs to be in a remote location so that if your computer is stolen or damaged, the backup remains safe.

Suppose you have a backup. When was the last time you tested your backup procedure? I once had a whole album's worth of tracks in progress. When my disk failed, I was so pleased I had a backup... until I found that it wouldn't restore. Fortunately I had some rough mixes stored elsewhere and I was able to use these (and actually the limitations of working with mixed backing tracks spurred my creativity, but that's another story).

Every so often therefore you must test your backup, to make sure it will work when you need it to.

When was the last time you backed up your data? At the end of yesterday's session? Really? My computer just flashed up a message saying that I haven't backed up in 20 days. Oh dear, it's so easy to forget.

Suppose you have a backup hard disk. Every so often you'll have to attach it to your computer to update your backup. What if the computer does something funny and wipes all of your data? It has been known.

My own plan for my personal data, family photographs and such, is this...

I keep my important data on one of my Apple Macintosh computers. I have Windows computers as well but the Mac suits my purposes for everyday use.

The standard software installed with the OS X operating system includes something called Time Machine. Basically, if you plug in an external hard disk, the computer will ask you if you want to use it for backup. Time Machine is very easy to use for backup, and I have restored files on a couple of occasions when I had deleted them in error.

But what about storing the disk in a remote location? Well, I keep my backup disk in the trunk of my car. I figure that either the house will get burgled or the car will. They are not likely to get done at the same time. If I go on vacation and leave the car behind, I leave the disk at a friend's house.

One thing that does still worry me however is data loss during the backup procedure. This can take a long time and I could easily want to go out of the house while it is in progress. What if the burglars strike then?

The answer to this, or so I thought, would be to use two disks for Time Machine backup and always have one in the trunk of the car. Er... you can only use one, unless you want to start the backup from scratch every time you swap the disk.

I store my most important files on Amazon S3. 'S3' stands for 'Simple Storage Service'. It costs me 15¢ per gigabyte per month, plus some transfer charges. This works very well indeed, although when you start to think about storing a terabyte of data, that comes to $1800 a year, just to keep it there. Ouch!
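
The arithmetic behind that "ouch", for anyone checking my sums (transfer charges left out, and using the decimal terabyte)...

```python
# Storage cost at the quoted rate of 15 cents per gigabyte per month,
# worked in integer cents to keep the arithmetic exact
rate_cents_per_gb_month = 15
terabyte_gb = 1000  # decimal terabyte

monthly_cents = rate_cents_per_gb_month * terabyte_gb   # 15,000 cents = $150/month
yearly_dollars = monthly_cents * 12 // 100              # $1800/year

print(yearly_dollars)  # 1800
```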

OK, over to you... tell us about your backup methods. If anyone comes up with a method of real genius and stunning simplicity, we'll feature it in an article all of its own.

Friday, May 18, 2012

What is the difference between audio and MIDI?

A seemingly simple question that perplexes many newcomers to audio and recording...

By David Mellor, Course Director of Audio Masterclass
MIDI was quite something when it first arrived in the early 1980s. It is still very much around today, although we don't tend to work quite as directly with it as before. MIDI (Musical Instrument Digital Interface) is now mostly hidden 'under the hood'. But still it is very useful to know about what it is and what it can do...

When you record an audio signal, the acoustic or electronic waveform that the instrument produces is captured directly. The recording is a representation of the sound the instrument actually made, and will differ according to whether the instrument was, say, a violin or a trumpet. An audio signal is recorded on an audio track of digital audio workstation (DAW) software.

A MIDI signal is normally generated by a keyboard, and it contains information about which keys are being pressed. The MIDI signal can be recorded on a MIDI track of a digital audio workstation. Only the data about which keys were pressed, plus other associated data, is recorded. So the MIDI signal doesn't sound like a violin or a trumpet, it is merely a list of which keys were pressed and when.
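For readers who like to see the data rather than read about it, here is a minimal sketch of what a MIDI track actually stores: not a waveform, just which key, how hard, and on which channel. The status-byte values (0x90 for Note On, 0x80 for Note Off) come from the MIDI 1.0 specification; the helper functions themselves are purely illustrative:

```python
# A MIDI Note On message is just three bytes: status, key number, velocity.
def note_on(channel, key, velocity):
    """Build a 3-byte MIDI Note On message (channel 0-15, key/velocity 0-127)."""
    return bytes([0x90 | channel, key, velocity])

def note_off(channel, key):
    """Build the matching Note Off message (velocity 0)."""
    return bytes([0x80 | channel, key, 0])

# Middle C (key 60) played fairly hard on channel 1:
msg = note_on(0, 60, 100)
print(msg.hex())  # '903c64' -- three bytes, no audio waveform at all
```

Compare that with audio: one second of CD-quality stereo audio is 176,400 bytes, while this entire musical event fits in three.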

To play back an audio signal, an audio output of the interface to your recording system must be connected to an amplifier or loudspeakers, or you can listen on headphones.

To play back a MIDI signal, a MIDI OUT connector on the interface to your recording system must be connected to the MIDI IN connector of a MIDI instrument or sound generator. The audio output of this instrument or sound generator must be connected to an amplifier or loudspeakers to make the sound audible.

[Alternatively, the MIDI signal can be routed within the workstation to a software instrument that will create an audio signal from it.]

MIDI seems more complicated, then. Since it doesn't record the original sound, the MIDI signal must be connected to a sound generator to make it audible.

But this makes MIDI more versatile.

Your recording of a violin will always sound like a violin. You can EQ it, but it will still sound like a violin.
However, you might have had your keyboard set to a violin sound when you recorded your MIDI signal, but when you play it back you can set your keyboard to any sound you like. So what was once a violin can now very easily become a trumpet.

MIDI has further advantages...

You can edit the MIDI data more flexibly than audio. For instance you can correct the timing of notes, or how forcefully they were played. Or correct wrong notes even!

Also, you can record a whole composition in MIDI and then change the tempo. Audio is catching up in this respect, but time-stretching audio always involves some loss of sound quality.

MIDI, therefore, offers additional flexibility and conveniences. It has to be said that only synthesizers and sample-playback systems can respond to a MIDI signal. There is no such thing as a violin that will play from a MIDI signal. Yet.

But any newcomer to recording who can appreciate that an audio signal is a representation of the original sound, while MIDI contains only key press and associated data, will have come a long way in understanding.

Publication date: Tuesday November 30, 1999
Author: David Mellor, Course Director of Audio Masterclass

From The Vault: [Korg] Triton Studio Video Manual

Thursday, May 17, 2012

In 5 to 10 years' time, computers might catch up with traditional technologies. Might.

Computers are cutting edge, right? Er, not quite so cutting edge as you might have thought...

By David Mellor, Course Director of Audio Masterclass
I've sold my iPad. It was an interesting novelty, but it can't do as much as my computer, and I don't find it at all pleasurable to read books, magazines or newspapers on it. Call me a Luddite if you like, but I gave it a damn good try.

But that set me thinking. Perhaps the iPad isn't actually all that cutting edge. Perhaps even fully-fledged computers are not as cutting edge as we seem to think they are.

Time for a demonstration...

Take out a recent issue of Sound on Sound magazine. (It's one of my favourite ways of not creating music or recording). Now measure the pages. Yes, measure them with your ruler. OK, I've done it for you and, in deference to the mighty USA, I've done it in inches.

The mag is approximately 11.7 inches high by 8.3 inches wide. I don't actually know what resolution it is printed at, but 300 dpi (dots per inch) is a good resolution for quality print. So each page consists of (11.7 x 300) x (8.3 x 300) pixels. Multiply that up and we get 8,739,900 pixels, or 8.7 megapixels if you prefer.

And since you can open up the mag into a double-page spread, the effective number of megapixels is nearly 17.5.
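The page arithmetic works out like this, using the measured 11.7 x 8.3 inch page and the assumed 300 dpi:

```python
# Megapixel count of a printed magazine page at an assumed 300 dpi.
DPI = 300
HEIGHT_IN, WIDTH_IN = 11.7, 8.3  # measured page size in inches

page_pixels = (HEIGHT_IN * DPI) * (WIDTH_IN * DPI)
spread_pixels = 2 * page_pixels   # an open double-page spread

print(f"{page_pixels / 1e6:.1f} megapixels per page")
print(f"{spread_pixels / 1e6:.1f} megapixels per spread")
```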

Now compare this to the original iPad - a little under 0.8 megapixels. Or the new iPad - 3.1 megapixels.
It's pretty clear now why I wasn't enjoying my first-generation iPad too much, and the new iPad isn't all that much better.

But what about my computer, with its mighty dual monitors? 4.6 megapixels.

Even then, the good old-fashioned print version of Sound on Sound beats my highly-specified computer in terms of resolution by a ratio of nearly 4:1.

And... the mag has been around for years at this resolution. Computers still have not caught up (unless I buy another six monitors!).

So what's the audio relevance of all of this?

Well it's difficult to figure out what the audio equivalent of a pixel is, but I'd say that you could sensibly multiply the sampling rate by the dynamic resolution to get a reasonable figure. So 24 bits = 16,777,216 different levels x 96,000 samples per second = 1,610,612,736,000 'auxels' in each second of audio. That's per channel so double it up and you get 3.2 tera-auxels in each second of stereo audio.
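The 'auxel' figure (a playful unit, remember) is computed exactly as described:

```python
# 'Auxels' per second: quantization levels times sampling rate,
# for 24-bit / 96 kHz audio as in the article.
BITS = 24
SAMPLE_RATE = 96_000

levels = 2 ** BITS                    # 16,777,216 distinct sample values
auxels_per_sec = levels * SAMPLE_RATE # per channel
stereo_auxels = 2 * auxels_per_sec    # both channels

print(auxels_per_sec)         # 1610612736000
print(stereo_auxels / 1e12)   # ~3.2 tera-auxels per second of stereo
```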

That's a big number. But imagine you are in a concert hall with a sixty-piece orchestra on stage, each instrument radiating sound from a different direction. Look around you at all the reflecting surfaces, all the different ways sound could arrive at your ears. Look at the other 999 people in the audience. How many different ways does sound reach them?

In terms of complexity, this really is mind-boggling when you stop to think about it.

Yes, computers still have a long way to go!
Publication date: Thursday May 17, 2012
Author: David Mellor, Course Director of Audio Masterclass

Beethoven: Violin Concerto / Kavakos · Mehta · Berliner Philharmoniker

Wednesday, May 16, 2012

Is the time right to buy Waves plug-ins at bargain-basement prices?

Waves currently has an offer on plug-ins - up to 50% off and more. Wow that's a lot - but is there something else you need to consider?

By David Mellor, Course Director of Audio Masterclass
I've mentioned offers from Waves before. Waves is well-known as a company that creates high-quality, and high-price, plug-ins. But every now and then they send out an e-mail with some very special offers. 50% off and more in this case (as of May 15, 2012). That's less than half-price!

The quality of these products is not in question. I've never been dissatisfied with a Waves plug-in and they offer free trials, which is commendable in every way. However, the cut-price offers seem to come so frequently now that one would have to be a little crazy to buy at full price. No doubt sometimes one might have to - if a certain plug-in is needed on an urgent project for example.

But most of us can afford to wait. And by affording to wait, we can afford more plug-ins! See what I did there? :-)

Clearly there is a recession on and many companies are struggling. Companies with debt on their hands are at risk of going under unless they can keep the cash flowing. Which raises a spectre...

What if Waves is teetering on the edge? Note that I'm not saying that it is. I'm saying 'what if' it is? OK, not wanting to seem to be a Waves-basher because I like their products, let me generalize this to any plug-in developer. What if the developer of your favorite plug-in goes bust? Well your plug-in will still work, but you won't get any support or upgrades.

But it will only work for so long. Will it still work after your DAW's next upgrade? And after your next OS upgrade? History has shown that this often isn't the case.

So while your audio hardware will continue to function until it wears out and spare parts become unobtainable, your plug-ins will only work until the next DAW or OS upgrade, or while they are actively supported by the developer. However much you like any of your plug-ins, it's perhaps best not to get too attached. Sooner or later they will cease to work and you will have to find a new plug-in to love.

But hey... life is change and change is life. So let's get on and enjoy what we have now.
Publication date: Wednesday May 16, 2012
Author: David Mellor, Course Director of Audio Masterclass

Grieg Piano Concerto in A minor, Op.16 - III.Allegro moderato Jean-Yves Thibaudet Gustavo Dudamel

Tuesday, May 15, 2012

Shotgun Microphone Roundup

Shotgun Microphone Roundup

By Sam Mallery
Published Friday, March 30, 2012 - 1:24pm
Shotgun microphones are used to capture sounds such as dialog in film and video productions, for “spot” miking specific areas on sets, stages and installations, and for creating Foley and sound effects. These microphones feature a distinctive long and vented “interference tube,” which helps reject sound from the sides and rear and focus on the sounds directly in front of them. They are very sensitive and detailed sounding, and because of their sensitivity, suspended shock mounts are almost always used to attach them to boompoles, video cameras, stands, etc. Their increased sensitivity also makes them susceptible to wind noise, so additional wind protection is mandatory for outdoor use.
There’s a wide variety of shotgun microphones available at B&H. In this roundup, we’ll take a close look at some popular models, explain what makes them desirable in what situation and include links to high-wind protection for each one. All of the microphones in this roundup are considered “short shotguns.” None are more than 12 inches (30.5 cm) in length and they all have XLR connectors. All require phantom power unless otherwise stated. If you need a comprehensive and easy-to-understand explanation of what shotgun microphones are and how they’re used, be sure to check out the B&H InDepth Shotgun Microphone Buying Guide.


Rode NTG3 B&H Signature Series

The Rode NTG3 B&H Signature Series is an RF condenser shotgun microphone with a special design that enables it to operate flawlessly in damp environments. The ability to survive the hard-knock world of field production and to function in challenging weather conditions is essential for anyone needing to work outdoors. The NTG3 excels in these areas, but where it really delivers is in sound quality. Simply said, it sounds and performs like a microphone that costs several hundred dollars more. The NTG3 comes with a compact, pipe-shaped aluminum case, and an equally formidable 10-year warranty. It was designed and built at Rode’s headquarters in Australia, and the limited edition B&H Signature Series features a matte-black finish. This gives the microphone a subdued visual presence, and is far less reflective when working around lights. The NTG3 is also available with a nickel plated finish. Compatible accessories include the Rode Blimp, a complete high wind protection system that’s available in B&H Signature Series matte black, or in the normal gray color. For less intensive wind, you can use the separately available Rode WS7 windscreen.


Rode NTG2

If you don’t have the budget for a higher-end shotgun mic, there are a few options that do an impressive job at an attractive price. The Rode NTG2 is popular in this regard, and it’s one of the few shotguns that can be powered by either a single AA battery or phantom power (a clear explanation of phantom power is provided in the Shotgun Microphone Buying Guide). The AA battery power option is useful if you want to plug this microphone into the 1/8” input on an HDSLR camera (which requires an impedance transformer like the Pearstone LMT100), or use it with a wireless transmitter that lacks a phantom powering capability. The sound quality of this microphone is very good, but not as outstanding as the NTG3. It also lacks the RF aspect of the NTG3.


Rode NTG1

If the NTG2 sounds appealing, but you don’t need the AA battery powering ability, you should check out the Rode NTG1. It’s essentially the same microphone without the AA battery compartment. Because there’s no battery slot, the NTG1 is nearly two and a half inches (about 61mm) shorter and weighs two ounces (56.7 g) less. Like the NTG2, it features a low-cut switch to filter out unwanted low-frequency sounds (like rumble from footsteps and vehicles). Its short size makes the NTG1 a great choice for mounting on video cameras. Both the NTG2 and the NTG1 are compatible with the Rode Blimp (available in both B&H Signature Series black or gray). For less-intensive wind, both mics are compatible with the Rode WS6. Those on a tight budget can affix the Pearstone Fuzzy Windbuster around the included foam windscreen for additional wind protection.


Audio-Technica AT875

Another favorite microphone for people with more limited resources is the Audio-Technica AT875. A typical reaction to a microphone that’s priced this low is to assume that it sounds terrible, but the performance of the entry-level AT875 is actually quite good. Just under seven inches in length (175mm), it’s the shortest shotgun microphone in this roundup. Shorter microphones like this are an excellent choice for mounting on video cameras, because the mic won’t protrude too far in front of the camera. For use in high wind conditions, the Rycote S-Series Windshield Kit is recommended. For lighter wind, the Rycote 033032 Softie is the way to go.


Sanken CS-3e

The Sanken CS-3e is popular among location sound professionals, but tends to be a bit cost prohibitive for hobbyists. This microphone employs a set of three directional capsules that form a unique “mic line array,” which ultimately gives it superior off-axis rejection. Most shotguns aren’t effective at rejecting low-frequency sounds to the sides and rear, but the CS-3e is, and it also features an incredibly small rear lobe (the area behind the mic that picks up sound). This makes the CS-3e less likely to capture unwanted reverberant sounds when used indoors, and it’s a better candidate when you need to boom close to a ceiling, HVAC vents, noisy camera rigs and lights. For use in high wind conditions, the Rycote Windshield Kit 4 is compatible, and for lower wind situations, the Rycote 033052 Softie is the one to get.


Sennheiser MKH 416

No roundup of shotgun microphones would be complete without the Sennheiser MKH 416. This microphone has remained the tool of choice in professional productions for decades, with its nearly indestructible build quality and infallible all-weather RF condenser design. Its sound quality is rich and alive and helps the human voice to cut through to the front of a mix. The directional sweet spot on the 416 is rather tight, meaning that this microphone will have a narrow focus on the sound source directly in front of it, while doing a good job of rejecting nearby sounds. The compatible high-wind protector for the 416 is the Sennheiser Blimp System, and the Rycote 033052 Softie is the one to use for lower wind conditions.


Sennheiser MKH 8060

The Sennheiser MKH 8060 is a short shotgun microphone that offers a big sound and a great deal of versatility. Like the MKH 416, the MKH 8060 features an RF condenser design. It has a very rich and natural sound and shares the ability to bring the human voice to the front of a mix. Unlike the MKH 416, the MKH 8060 is more forgiving of off-axis sounds—that is, when not pointing directly at a speaking person, the voice will merely sound lower in volume, and not thin and artificial as delivered by other microphones. There is no low-cut filter or pad built into this microphone, but if you need them, they can easily be added with the separately available MZF 8000 module. The MKH 8060 is a part of Sennheiser’s modular 8000 series, which offers many options for microphone capsules and rigging accessories. You can learn all about this system in this B&H InDepth review. The compatible windscreen systems (if not using the MZF filter module) are the Rycote 3-Lite Kit blimp, and for light wind the Rycote 033032 Softie.


Schoeps CMIT5U

One of the most respected microphones for capturing natural-sounding interior dialog on a boom pole is the Schoeps CMC6 MK41. However, when you cannot place that microphone close enough to the sound source, or if you’re booming outdoors, one of the best tools to use in its place is the Schoeps CMIT5U. This is an extremely lightweight microphone with a very open and natural sound and is among the best sounding short shotguns on the market. The CMIT5U features three built-in filters: one adds a 5 dB boost at 10 kHz (to compensate for reduced high frequencies when the mic is used with wind protection), another cuts lows below 80 Hz (to reduce wind noise and rumble) and the third gently rolls off frequencies below 300 Hz (to compensate for any proximity effect when the mic is positioned close to a sound source). The microphone is also available in a low-profile gray color, the Schoeps CMIT5UAG. The compatible blimp system is the Rycote Windshield Kit 4, and for light wind, the Rycote 033032 Softie.


Neumann KMR81I

The Neumann KMR81I continues the solid tradition of the company’s offerings with a great sounding shotgun microphone that also exhibits extremely low self-noise. It has a wide frequency range (20 Hz to 20 kHz), which makes it a viable option for recording voice-overs and for capturing more lifelike sound effects and Foley. When you don’t need the full frequency range, a built-in low cut filter can be engaged to remove unwanted rumble, which is handy for when it’s used on a boompole. A switchable 10 dB pad is provided for miking a loud sound source. The KMR81I comes with a nickel finish, but it’s also available in black (the Neumann KMR81IMT). The Rycote Windshield Kit 4 is compatible for use in high wind conditions, and the Rycote 033042 Softie can be used in lighter wind.

Thanks for checking out this B&H InDepth article. If you’re interested in microphones that are used on boompoles for capturing interior dialog, check out this B&H InDepth roundup. If you have any questions about shotgun microphones, we encourage you to submit a Comment below.
| Microphone | Frequency Response | Low Cut | Pad | RF | Max. SPL | Power | Length & Diameter | Weight |
|---|---|---|---|---|---|---|---|---|
| Rode NTG3 | 40 Hz - 20 kHz | No | No | Yes | 130 dB | 44 to 52V phantom | 10 x 0.74" (255 x 19mm) | 5.8 oz (163 g) |
| Rode NTG2 | 20 Hz - 20 kHz | Yes | No | No | 131 dB | AA battery or 24 to 48V phantom | 10.94 x 0.87" (278 x 22mm) | 5.7 oz (161 g) |
| Rode NTG1 | 20 Hz - 20 kHz | Yes | No | No | 139 dB | 24 to 48V phantom | 8.5 x 0.9" (217 x 22mm) | 3.7 oz (105 g) |
| Audio-Technica AT875 | 90 Hz - 20 kHz | No | No | No | 127 dB | 11 to 52V phantom | 6.9 x 0.8" (175 x 21mm) | 2.8 oz (80 g) |
| Sanken CS-3e | 50 Hz - 20 kHz | Yes | No | No | 120 dB | 44 to 52V phantom | 10.6 x 0.75" (270 x 19mm) | 4.2 oz (120 g) |
| Sennheiser MKH 416 | 40 Hz - 20 kHz | No | No | Yes | 130 dB | 44 to 52V phantom | 9.8 x 0.75" (250 x 19mm) | 5.82 oz (165 g) |
| Sennheiser MKH 8060 | 50 Hz - 25 kHz | No | No | Yes | 129 dB | 44 to 52V phantom | 7 x 0.75" (177 x 19mm) | 3.9 oz (111 g) |
| Schoeps CMIT5U | 40 Hz - 20 kHz | Yes | No | Yes | 132 dB | 48V phantom | 9.9 x 0.8" (251 x 21mm) | 3.2 oz (89 g) |
| Neumann KMR81I | 20 Hz - 20 kHz | Yes | Yes | No | 128 dB | 44 to 52V phantom | 8.9 x 0.8" (226 x 21mm) | 5.1 oz (145 g) |

Grieg Piano Concerto in A minor, Op.16 - II.Adagio Jean-Yves Thibaudet Gustavo Dudamel

Birthday Celebration

Monday, May 14, 2012

Should vocals be recorded in mono or stereo?

When stereo sounds so much better and more lifelike than mono, why would you ever want to record a vocal with a single mic?

By David Mellor, Course Director of Audio Masterclass
A question from an RP reader...

"Mr David sir, please I need an urgent answer to this question; which channel is the best for voice recording-mono or stereo?"

Firstly, don't call me 'sir'. When I receive a knighthood from the Queen of England, then things will be different. But I sense that it's going to be a long time in coming. :-)

Now the question - should vocals be recorded in mono or stereo?

Well let's consider a solo lead vocal for pop, rock, hip hop or other modern style of music. Let's also consider the original purpose of stereo, as it was invented...

The original purpose of stereo was to create a convincing sound image between the loudspeakers. Just as a camera captures an image with light, a stereo pair of microphones connected to a stereo recording device captures an image in sound.

So if you record an orchestra with a coincident crossed pair of microphones, you will hear the individual instruments coming from the same locations in space, whether you stand in front of the orchestra at the recording session, or listen later on loudspeakers.

The whole purpose of stereo, as originally intended, is to give a believable sense of width to the audio signal.
So... How wide is your singer's mouth?

There may be a case for recording Louis Armstrong in stereo, but otherwise there is little point in recording a normal pop/rock/hip hop vocal in stereo. The mouth is effectively a point source and can be recorded as such. A single microphone, resulting in a mono recording, will do just fine.

But what if your singer is an opera singer?

This applies to any solo singer who would normally be heard in an acoustic environment. It may also apply to a jazz singer.

In this kind of music, the acoustics of the room or auditorium are important. The voice needs to interact with the acoustics. If you sit or stand in front of the singer in real life, you will hear the voice surrounded by an enveloping cloud of ambience and reverberation. And although the voice itself might still be a point source, the ambience and reverberation will most definitely be stereo.

So for this kind of singing, you should record in stereo.
It is worth noting however that when the singer is accompanied, recording techniques suddenly become a little more complex. Let's say the singer is accompanied by a piano.

You can record this using a coincident crossed pair of microphones. But you need to find exactly the right point in the room for the mics. Even for an experienced engineer, this takes time and experimentation. Every combination of singer, piano, pianist and room is different.

So the recording process can be simplified by miking the singer and piano separately with fairly close mics, then setting ambience mics further away to capture the reverberation.

In this case the stereo information can come from the ambience mics, so it is perfectly OK to use just a single mic for the singer.
 Publication date: Monday May 14, 2012
Author: David Mellor, Course Director of Audio Masterclass

Can you really *produce* using only virtual instruments?

So you record using virtual instruments. Can you really call yourself a producer?

By David Mellor, Course Director of Audio Masterclass

A question from an RP reader...

"I am making a production only with virtual instruments. How can I begin? What instrument? What techniques?" - Demilson Alves de Souza

The answer to the question in my headline is an easy "yes". I wouldn't go so far as to say that virtual instruments can do everything that real instruments are capable of. But I have no doubt that virtual instruments can provide a sound palette broad enough to express almost any musical thought.

But I have this irritating habit of always wanting to look into things a little deeper...

One of the interesting features of real instruments is how difficult they are to record well. Unless you have a good player, good instrument and good studio acoustics, you'll work hard to get a decent sound. If you have all the elements of 'goodness', then getting a professional sound is straightforward. But microphone positioning can offer an incredible range of alternatives. How many different ways could you mic a piano for instance? When a change in positioning of one foot (30 cm) can make an audible difference, then a studio that is 20' x 20' x 12' offers 4800 possibilities for just a single mic.
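The one-mic position count works out like this, assuming, as above, one audibly distinct position per cubic foot of the room:

```python
# Number of audibly distinct single-mic positions in a 20' x 20' x 12'
# studio, on a one-foot grid as suggested in the article.
room_length, room_width, room_height = 20, 20, 12  # feet

positions = room_length * room_width * room_height
print(positions)  # 4800 candidate positions for just one microphone
```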

From this we can see that the same piano in the same studio can be recorded in literally thousands of different ways. And then there are different pianos, and different studios. And even one piano doesn't sound the same on different days.

Contrast this with a digital piano however. I have a Roland V-Piano, which in my opinion is the best digital piano there is. But it's the same as any other Roland V-Piano. And since it would normally be recorded directly from the line outputs or digital output, it will always sound the same. There is no influence from microphones or studio acoustics. The same applies to instruments that exist only within the computer.

So if you produce using only virtual instruments, the amazing variety of tonal qualities available, almost automatically, from real instruments is lost. The result can easily turn out rather textureless, and much the same as anyone else's virtual instrument production.

How to really produce using virtual instruments

I could say at this point that I feel that I have made an incontrovertible argument that real instruments are better than virtual instruments.

But that isn't so.

With real instruments, you have to engineer and produce a recording. A good recording won't happen by accident. Indeed, it takes considerable skill and experience.

But with virtual instruments, a blandly professional recording happens automatically, given decent musicianship. You can't go wrong, or at least you would have to work hard to. But the result will be dull, and the overall sound will be pretty much the same as anyone else's virtual instrument production.

The solution to this is to be found in a realization that with real instruments, you are forced to produce. With virtual instruments you have to force yourself to produce. See the difference?

So with my Roland V-Piano for example, I can choose from the 30 factory presets. Or I can delve into the editing pages and make a piano sound that is all my own. (The V-Piano is so flexible you can tune every individual virtual string of the instrument. This makes an amazing difference in itself, and there are so many other ways of editing.)

When I have achieved a unique sound from the V-Piano, I can consider how I record it. I might consider putting it through a loudspeaker and miking it. Or at least creating some real acoustic reverb in my stairwell and mixing that in. No-one else has my stairwell (except my neighbor, whose house is a mirror image of mine; but he doesn't record, as far as I know), so no-one else's recording can ever sound the same.

Extend this line of thinking to every virtual instrument and you can end up with a recording that is fully as rich in texture as any recording of real instruments.

P.S. Part of the original question that prompted this article was what instrument to start with. I'd say a metronome. After that, the most important instrument of the arrangement.
Publication date: Monday May 14, 2012
Author: David Mellor, Course Director of Audio Masterclass

Saturday, May 12, 2012

The 10 rules of pan

An RP reader wants to pan, but doesn't know the rules. Do you have to know the rules before you break them?

By David Mellor, Course Director of Audio Masterclass

A question from an RP reader...

"Hi David, today I have a question to ask. If a person needs to pan instruments what rule do you use?"

I'm inclined to say that there should be no rules. I find panning in modern music totally, absolutely, mind-numbingly boring. I much prefer the interesting ways pan was used before the so-called 'rules' were invented.

But if you want to earn a living from your music, you have to work within the bounds of what is commercially acceptable. In effect, you have to follow the rules, or starve.
So here are the rules of pan (so you know what to break, if you want)...

1. Always pan the lead vocal center

This is something of a no-brainer. When was the last time you heard a lead vocal that was not panned center? Not in this decade, nor the last one, nor the one before that, and not even the one before that. Just pan it center and don't bother thinking about it.

2. Always pan the bass center

There is a reason behind this. If the bass comes from both stereo speakers, it can be louder than if it only comes from one. Also, in the days of vinyl records, there could be problems with mistracking on playback if the bass was panned all the way to one side. Loud bass frequencies panned hard left or right can cause the groove to become shallow, and the stylus may jump out.

3. Balance the channels equally

A mix can sound odd if one channel seems to have a greater weight of sound than the other. In general, the channels will sound balanced if both left and right meters are mostly at the same level. This doesn't always apply however, so balance should be judged subjectively.

4. If you have two similar-sounding instruments, pan one left and the other right

If you have two guitars strumming away throughout the course of a song, then if they are panned to the same location in the stereo image they will sound confused, almost as though just one instrument is playing. And any lack of synchronicity might make it sound as though the instrument is being played badly. Panning half-left and half-right is often a good solution. Bear in mind however that you should always check that a mix sounds good in mono.

5. A stereo pair of microphones should be panned hard left and hard right

If you have used any of the common stereo microphone configurations where the microphones are balanced across the stereo sound stage, then they should normally be panned hard left and hard right.

6. Check your pans on headphones

What sounds good on loudspeakers doesn't always sound good on headphones or earbuds, particularly hard pans. There is no reason why you shouldn't ever hard pan, but do check it on headphones.

7. Pan individual drum mics so that each drum matches its location in the overheads

You don't have to do this, but if you don't consider it then you are not mixing drums properly. The low rack tom for instance might be half-left in the stereo image of the overhead. It normally wouldn't make sense to pan the individual mic on that drum to any other position.

8. Don't make a single instrument three meters wide

This can easily happen with asymmetric miking. For example, if you aim two microphones from different positions at an acoustic guitar, one towards the fingerboard and the other towards the belly, then pan hard left and right, the result can often span the entire width between the speakers. In some cases, it can sound like two guitars are playing in exact synchronicity. It is completely different to anything you could possibly hear in real life.

9. After you pan an instrument, reconsider its level

This is connected with the 'law' of your pan control. A pan control can be designed so that when a mix is summed to mono, the positions of the pans are irrelevant: you can turn any pan control and the mono level of that instrument will not change (this law attenuates the center position by 6 dB). OR... it can be designed so that wherever you position the pan, the level of the instrument in the stereo mix stays the same (a constant-power law, with 3 dB of attenuation at center). OR... the pan law can be a compromise between these two options. So unless your pan law is correct for stereo mixing, the level will change when you pan.

But it's a subjective thing too. If, say, you have several instruments panned center and you then pan one mid-left, it will suddenly stand out more in the mix. You might therefore want to lower its level.
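The two pan laws described above can be sketched in a few lines. The function names and the -1 to +1 pan range are illustrative conventions here, not any particular console's:

```python
import math

def pan_linear(pan):
    """Mono-compatible law: L + R is constant, so a mono sum never
    changes as you pan. At center each side is 0.5, i.e. -6 dB."""
    return (1 - pan) / 2, (1 + pan) / 2

def pan_constant_power(pan):
    """Constant-power law: L^2 + R^2 is constant, so perceived stereo
    loudness stays steady. At center each side is ~0.707, i.e. -3 dB."""
    theta = (pan + 1) * math.pi / 4
    return math.cos(theta), math.sin(theta)

for law in (pan_linear, pan_constant_power):
    left, _ = law(0.0)  # pan dead center
    print(law.__name__, round(20 * math.log10(left), 1), "dB at center")
```

Running this shows -6.0 dB and -3.0 dB at center respectively, which is exactly why an instrument's level seems to shift as you pan it under the "wrong" law for how you are monitoring.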

10. If you want to fit in with the audio community, follow these rules

But if you want to stand out, then BREAK THEM, BREAK THEM, BREAK THEM!
P.S. If you have made a recording with interesting pans, we would love to feature it in an article in Record-Producer.com. Send us your track with a description of what you wanted to achieve.
Publication date: Saturday May 12, 2012
Author: David Mellor, Course Director of Audio Masterclass