Welcome to No Limit Sound Productions

Company Founded

Our services include Sound Engineering, Audio Post-Production, System Upgrades and Equipment Consulting.
Our mission is to provide excellent quality and service to our customers. We provide customized service.

Saturday, June 30, 2012

Jessie J steals Will Loomis's song. Or does she?

Will Loomis thinks that Jessie J's song 'Domino' is a ripoff of his song 'Bright Red Chords'. Allegedly. But how many notes does it take to be a scoundrel in the music business?

By David Mellor, Course Director of Audio Masterclass

I am almost reluctant to give you this link, because it goes to a site where you can waste more time unproductively than you ever thought possible. However, in fairness, this is where I heard about the story. Who knows where they heard it from?

In short, Will Loomis of the band Loomis & The Lust accuses (allegedly - every verb in this article will have the same adverb implied from now on) Jessie J of stealing his song Bright Red Chords and fashioning it into her Domino. The evidence...

I bet you thought there was going to be some exaggeration going on here but, no, the tune of the verse is the same in both songs. Take a note, go up a major third, then down a fourth from the starting point, with a minor third as a passing note. Basically it is five notes, and the rhythm is quite similar too.

I don't hear any other similarities other than those imposed by Western music only having twelve different notes to work with. So it boils down to those five notes. Loomis (insert adverb) says that he never gave Jessie J permission to use his song. And he is suing her for who knows how many countless millions.

Now, suppose it had been the other way round. Typically if Major Publisher A feels that Small Guy Musician B has stolen one of their songs, they will use their financial power to get all of Small Guy B's money. Yes, all of it. This has happened. So if the copying had been the other way round, I would expect Loomis to lose, legally, financially and completely.

But in this case, if it goes to court, Loomis will probably only be able to afford dimwit lawyers, while Jessie J's publisher will have the brightest legal brains on the planet. And they can afford to appeal if the decision goes the wrong way. I would expect that Loomis will have to settle out of court. We will never get to know what the settlement is, but I would expect that he gets a modest payoff in return for shutting up and never speaking about the matter again.


Do five notes constitute a ripoff? This depends on how distinctive the combination of notes is. Think of Close Encounters Of The Third Kind - don't even consider using those same five notes in your song.

But in my opinion, both Loomis and J (if I might call her J) are using a generic combination of notes that has almost certainly been used many times before. All that is necessary is to find the same combination in someone else's earlier song and Loomis has no claim to J's money.

And if the same combination can be found in the work of someone who is long dead and out of copyright, then anyone can use these notes freely. I'm sure Mr. J.S. Bach must have featured them at some stage in his massively complicated works of counterpoint.

My view is that these are both entertaining songs. No-one will remember them in five years' time, but for today they keep people happy. Let Jessie sing her version of these notes to millions, and Loomis bask in the glow of publicity for a while. No harm done.

P.S. If the name 'Loomis' seems familiar, here's why...
Publication date: Saturday June 30, 2012
Author: David Mellor, Course Director of Audio Masterclass

Rudess (Korg) OASYS Video 2

Friday, June 29, 2012

What is the difference between EQ and filters? *With Audio*

 Both EQ and filters alter the frequency response characteristics of a signal. But how are they different, and how should they be applied?

By David Mellor, Course Director of Audio Masterclass 
Let's start with baby steps. An audio signal consists of a range of frequencies from low (20 Hz) to high (20,000 Hz). Low frequencies represent low notes, high frequencies represent high notes and overtones. 'Hertz' (Hz) means 'vibrations per second'.

Often it is desirable to boost some frequencies in level and perhaps cut others. This might be because there is a perceived problem with the signal, because we want to enhance it, or to help it blend in better with other signals.

And the tools we use to do this are filters and equalizers. Let's look at filters first...

There are five main types of filter: low-pass, high-pass, band-pass, band-stop and notch.

A low-pass filter allows low frequencies to pass and attenuates (reduces) high frequencies.

A high-pass filter allows high frequencies to pass and attenuates low frequencies.

A band-pass filter allows mid-range frequencies to pass and attenuates low and high frequencies.

A band-stop filter allows low and high frequencies to pass and attenuates mid-range frequencies.

A notch filter is a band-stop filter that covers a very narrow range of frequencies.

Most commonly found are the low-pass and high-pass filters. Let's look at the low-pass filter because this is used in sound engineering and widely in subtractive synthesis too...

The low-pass filter has a 'cut-off frequency'. Below this frequency, everything is allowed through unaltered. This is called the 'pass band'. Above this frequency, the signal is progressively attenuated at higher and higher frequencies. This is called the 'stop band'.

Actually at the cut-off frequency, the signal is 3 decibels lower in level than frequencies in the pass band. Clearly there is a gradual transition between the pass band and the stop band.

In the stop band, a filter is said to have a 'slope'. A low-pass filter doesn't cut off high frequencies completely. Above the cut-off frequency it attenuates them more and more as the frequency gets higher.

At a certain point above the cut-off frequency, the degree of attenuation will be 6 dB (for instance). An octave higher in frequency, the attenuation may be 12 dB.

We would say that this filter has a slope of 6 dB/octave, an octave being a doubling of frequency.

It is common to find filters with slopes of 6, 12, 18 and 24 dB/octave. Clearly, a steeper slope means a more pronounced filtering effect.

A typical low-pass filter will have a control for cut-off frequency, and may have a switch for slope.
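If you like to tinker in code, the behavior described above can be sketched as a simple first-order (6 dB/octave) digital low-pass filter. This is an illustrative sketch, not the design of any particular unit; the coefficient formula is one common approximation of the analogue RC filter.

```python
import math

def one_pole_lowpass(x, cutoff_hz, sample_rate):
    """First-order (6 dB/octave) low-pass filter - a sketch of the
    simplest low-pass described above, not any specific product."""
    # Smoothing coefficient derived from the analogue RC equivalent.
    alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    y, out = 0.0, []
    for sample in x:
        y += alpha * (sample - y)
        out.append(y)
    return out

def peak(signal):
    return max(abs(s) for s in signal)

fs = 48000

def sine(freq, seconds=0.5):
    n = int(fs * seconds)
    return [math.sin(2 * math.pi * freq * i / fs) for i in range(n)]

# A 100 Hz tone (in the pass band) passes almost unaltered;
# an 8 kHz tone (well into the stop band) is strongly attenuated.
low  = one_pole_lowpass(sine(100),  1000, fs)
high = one_pole_lowpass(sine(8000), 1000, fs)
print(round(peak(low), 2), round(peak(high), 2))
```

Run it and you will see the 100 Hz tone come through at close to its original level while the 8 kHz tone is reduced to a fraction of it - the slope in action.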

Now for equalizers...

Equalizers come in three main types: low frequency, midrange, high frequency. There are more options and subdivisions, but I don't want to get too complicated here.

Let's look at a low-frequency (LF) EQ and see how it differs from a low-pass filter...

A well-specified LF EQ will have the following controls:

The frequency control sets the frequency at which the EQ will start to take effect. It will operate on a range of frequencies lower than this.

Gain sets the amount of cut or boost to be applied. Usually up to plus or minus 12 to 18 dB is available.

If the LF EQ is set to 'shelf', then as the frequency of the signal drops below that set by the frequency control, the amount of cut or boost will increase until it reaches a maximum value. Beyond that, it will stay the same.

If the LF EQ is set to 'bell' then it's the same as above, except that the amount of cut or boost will return to zero at lower frequencies.

LF and HF EQ sections do have a feature very similar to slope, but you don't get to control it. Some say that this is the factor that makes one EQ more 'musical' than another.
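For the curious, here's a hypothetical sketch of what a first-order low-shelf response looks like in numbers. Real EQ sections differ in their exact curves (that's part of what makes one more 'musical' than another), so treat the corner frequency and first-order slope here as assumptions for illustration.

```python
import math

def low_shelf_gain_db(freq_hz, corner_hz, shelf_gain_db):
    """Magnitude response of a first-order low-shelf EQ.
    The cut or boost reaches its full value well below the corner
    frequency and returns to 0 dB well above it - the 'shelf' shape."""
    g = 10 ** (shelf_gain_db / 20.0)  # linear shelf gain
    w = freq_hz / corner_hz           # frequency normalised to the corner
    # H(s) = (s + w0*g) / (s + w0): gain g at low frequencies, unity above.
    mag = math.sqrt(w * w + g * g) / math.sqrt(w * w + 1.0)
    return 20.0 * math.log10(mag)

# Response of a -12 dB low shelf with a 1 kHz corner, at a few frequencies.
for f in (50, 200, 1000, 5000, 20000):
    print(f, "Hz:", round(low_shelf_gain_db(f, 1000, -12), 1), "dB")
```

Printing the table shows the cut approaching the full -12 dB at low frequencies and fading back to 0 dB above the corner, just as described for the shelf setting above.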

I'm sure that by now you want to hear the differences between all of these.

Well, I have stuck to the low frequency end, but I have a nice little demonstration of filters and EQ for you.

In the filters demo you can first hear the untreated track. Then a low-pass filter with a cutoff frequency of 1 kHz and a slope of 6 dB/octave is switched in. After eight bars the slope is increased to 12 dB/octave, then 18, then 24, and then the clean track to end.

A word of explanation. This track is low in level, around 12 dB below full scale. This is deliberate so that there is sufficient headroom for the LF boost in the EQ demo, so that you can compare all at the same original level. Here is the filters track...
 [Demo Track Not Available]
As you can hear, filters can make a drastic difference to a track. This is just a demonstration by the way. I'm not saying that filters should be used in this way, just that this is what they sound like. Here are the settings, in order...

Now for the EQ demo. This starts with the untreated track, then a low-frequency EQ section is switched in with a frequency setting of 1 kHz and a gain of -6 dB.

After eight bars the gain is set to -12 dB, then +6, then +12. Afterwards comes the clean track again.

As you can hear, the cut settings are reminiscent of the filter, but subtly different. And of course EQ offers the possibility of boost, where filters do not.

Here are the settings, in order...

 Publication date: Sunday May 10, 2009
Author: David Mellor, Course Director of Audio Masterclass

Rudess (Korg) OASYS Video 1

Thursday, June 28, 2012

For beginners - Why do your loudspeakers have holes in them?

Take off the grille cloth of your loudspeakers and you will see that they have holes, with tubes extending inside the cabinet. Why? What would happen if you blocked them up?

By David Mellor, Course Director of Audio Masterclass

It is more likely than not that your loudspeaker has a 'port', which clearly is there for a purpose and not for decoration. But first, why are the drive units mounted in a cabinet?

The answer to this question is that the sound radiation from the rear of the drive unit would otherwise cancel out the sound from the front, at low frequencies. So although the drive unit is producing a lot of bass energy, much of it is canceled out and wasted. The function of the cabinet therefore is to contain the rear radiation from the drive unit, and preferably dispose of it (easier said than done). A fully closed cabinet is known as a closed box, acoustic suspension or sometimes infinite baffle, although some people use the term infinite baffle in a different sense.

The advantage of the closed box is its relative lack of resonance - when the signal stops, the sound stops. But as we know, most loudspeakers are not of the closed box type because they have a port and are not completely sealed. Adding a port to the cabinet turns it into a Helmholtz resonator, named for German scientist Hermann von Helmholtz.

A Helmholtz resonator consists of an enclosed volume of air connected to the outside world by a narrow tube. A beer bottle has this shape. And, as you know, you can blow across the neck of the bottle, when it is empty, and produce a musical note. This demonstrates the property of resonance, that the air inside the bottle will vibrate easily at a certain frequency, given an energy input.

It is the same with the bass reflex loudspeaker cabinet, which has a port. The air inside the cabinet will vibrate readily at a certain frequency. The frequency at which the air inside the cabinet vibrates is determined by the volume of the cabinet and the dimensions of the port. So what is the benefit of this?

The answer is that small and medium sized loudspeakers have an inadequate bass response. But if the cabinet is tuned so that the air inside vibrates at a frequency where the natural response of the low frequency drive unit is just starting to diminish, then the cabinet can 'help out' and extend the bass response down to a lower frequency than the closed box. So a bass reflex speaker has a better bass response than a closed box of the same size.
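For anyone who wants to put numbers to this, the tuning frequency of a box-plus-port Helmholtz resonator can be estimated from the port area, the effective port length and the cabinet volume. The example figures and the 1.7 × radius end correction below are textbook approximations, not the design of any particular loudspeaker.

```python
import math

def helmholtz_frequency(port_radius_m, port_length_m, box_volume_m3,
                        speed_of_sound=343.0):
    """Estimated resonant frequency of a Helmholtz resonator (box + port).
    The 1.7 * radius end correction is a common approximation; a real
    cabinet would need measurement."""
    area = math.pi * port_radius_m ** 2
    effective_length = port_length_m + 1.7 * port_radius_m
    return (speed_of_sound / (2 * math.pi)) * math.sqrt(
        area / (box_volume_m3 * effective_length))

# Example: a 25-litre box with a 5 cm diameter, 10 cm long port
# (figures chosen purely for illustration).
f = helmholtz_frequency(0.025, 0.10, 0.025)
print(round(f, 1), "Hz")
```

Notice that a larger cabinet volume or a longer port lowers the tuning frequency, which is exactly the behavior a designer exploits to place the resonance where the drive unit's response starts to fall away.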

The disadvantage of the bass reflex is that it is resonant. It has to be to work. And not only does that make frequencies around the resonant frequency louder, it prolongs such sounds in time. So for example, a kick drum heard through a closed box loudspeaker will be nice and tight. But heard through a bass reflex it will trigger the resonance and there will be significant output at the resonant frequency, which will continue for a fraction of a second even after the kick drum beat has gone. The sound of the bass reflex loudspeaker is often described as 'boomy', and poorly designed examples can display a significant 'one-note bass' effect.

It is an interesting experiment to block the port and compare the sound. This doesn't turn the bass reflex into an ideal closed box because the parameters of the cabinet will not be optimized for the low frequency drive unit. Even so, the reduction in bass output is interesting to hear. It may seem therefore that bass reflex loudspeakers are not suitable as monitors because they color the sound in the bass and do not accurately reproduce the signal. However, most of the world listens to music on bass reflex loudspeakers, so it is useful to monitor on a similar type so that the engineer gets a flavor for what the listener will eventually hear.

In short, the closed box loudspeaker is more accurate, but the bass reflex has a better low frequency response, at the expense of some boominess.
Publication date: Thursday June 28, 2012
Author: David Mellor, Course Director of Audio Masterclass

Rachmaninov: Piano Concerto No. 3 / Matsuev · Gergiev · Berliner Philharmoniker

Wednesday, June 27, 2012

Extraordinary stereo from your effects pedal

Do you always connect your effects pedals between your guitar and amplifier? Wow, that's just so retro!

By David Mellor, Course Director of Audio Masterclass

Everyone knows how to connect a guitar effects pedal - plug the guitar into the pedal and the pedal into the amplifier. That's it, job done.

But that's boring. For one thing, you don't have to plug a guitar in, and you don't have to connect the output to an amplifier. You can use a guitar effects pedal just like you would use any piece of outboard gear.

Take an insert send from your console or mic preamp (or a spare output of your DAW) and feed it to the pedal; bring the output of the pedal back to the insert or auxiliary return.

The only compromise with this is that the input of the pedal is optimized for the nature of the signal that a guitar provides. So expect to have to be careful how much level you send to the pedal, otherwise unpleasant distortion might be the result.

Also, traditionally, effects pedals have not been among the quietest pieces of equipment. But what's wrong with a little bit of texture now and then? And you can always gate it or cut out the noisy 'silences'.

Once you start using your effects pedals like this you will realize that you can use them for all kinds of sound sources. But also you will be able to connect them in more creative ways.

For example, you could route the dry, uneffected signal to the left output and route the effected signal to the right. Hard pan both to maximize the impact of the effect.

This will sound massively different to the pedal on its own. Somehow the ear pays more attention when it has both versions of the sound to consider.

One word of caution however. Some effects pedals invert the phase of the signal, so what comes out is an upside-down, effected version of what went in. In stereo, this might sound fantastic, but if you collapse the mix into mono, you will find that the similarities in the two signals cancel each other out. Some effected sound will be left behind, but it won't sound as you expect it to, to the point where a mono version of your mix will be worthless. And of course mono is important for radio and TV plays.
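You can demonstrate the worst case for yourself in a few lines of code. Here the 'effected' right channel is simply a polarity-inverted copy of the left - an extreme assumption, since a real pedal's output would only partially match the dry signal - but it shows how a mono sum can cancel to nothing.

```python
import math

# A dry signal panned hard left and a polarity-inverted copy panned
# hard right (standing in for a pedal that flips the signal's phase).
fs = 48000
dry = [math.sin(2 * math.pi * 440 * i / fs) for i in range(fs // 10)]
inverted = [-s for s in dry]

# Mono sum: the average of left and right channels.
mono = [(l + r) / 2 for l, r in zip(dry, inverted)]

peak_stereo = max(abs(s) for s in dry)
peak_mono = max(abs(s) for s in mono)
print(peak_stereo, peak_mono)
```

In stereo each channel is at full level, but the mono sum collapses to silence. With a real pedal only the components common to both channels would cancel, leaving the strange-sounding residue described above.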

One last bonus is that there is simply such a great choice of effects pedals available. And since you will now be using them in a way that hardly anyone else is, you are almost guaranteed to get a unique sound.
Publication date: Wednesday June 27, 2012
Author: David Mellor, Course Director of Audio Masterclass

Korg M3 - Basic Sequencing (Part 2) - In The Studio With Korg

Tuesday, June 26, 2012

Why delay is good for you (and how to set delay times)

Delay is one of the simplest yet (currently) under-appreciated effects available. But how do you work out the correct settings?

By David Mellor, Course Director of Audio Masterclass
We have so many exotic plug-ins these days that the simpler ones tend to get ignored. And you can't get much simpler than delay. Take a signal, delay it in time, lower it in level, and add it back to the original. Suddenly everything is just that little bit more sparkly.

You can even feed the delay back into itself and get a repeating, decaying echo. In the 'good old days' of audio this was sometimes called spin echo.

In those 'good old days', delay was created using a spare stereo tape recorder, often a Revox A77 or B77. You could record and play back at the same time, but there was a delay created by the distance between the record and playback heads. You could get as many different delay times as the tape recorder had speeds, often just two.

Or if the tape recorder had a variable speed facility, you could get a wider range, which was simply set by ear.

Delay times these days can be set by ear, but you won't be able to resist looking at the milliseconds display. Or you can often just tap the tempo. Both methods are good.

Or you can, if you wish, calculate the delay time. Here's how to do it...
First start with the magic number, 60,000.

Next, divide the tempo of the song in beats per minute into 60,000. So if the song runs at 120 BPM, the result will be 500.

So you can set a delay time of 500 milliseconds, and the delays will correspond exactly to whole beats.

But often this doesn't sound too good, as the delay gets confused with the original signal. If you divide 500 by 2, 3, 4, or maybe even 5, you will get a range of delay times that give interesting effects related to the tempo. Dividing by three, for example, gives 167 milliseconds, producing a delay in triplets.

Or multiply 167 (actually 166.666 recurring) by two to give 333 ms and you will have yet another tempo-related delay.

But where does the magic number of 60,000 come from?

The answer is simply that it is the number of milliseconds in a second (1000) multiplied by the number of seconds in a minute (60).
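The whole calculation fits into a few lines, if you'd rather let a script do it. The note-value names below are just labels chosen for illustration.

```python
def delay_times_ms(bpm):
    """Tempo-related delay times, using the 'magic number' 60,000
    (milliseconds per minute) described above."""
    beat = 60000.0 / bpm  # one whole beat in milliseconds
    return {
        "1 beat":   round(beat),
        "1/2 beat": round(beat / 2),
        "triplet":  round(beat / 3),
        "1/4 beat": round(beat / 4),
    }

print(delay_times_ms(120))
# {'1 beat': 500, '1/2 beat': 250, 'triplet': 167, '1/4 beat': 125}
```

At 120 BPM this gives exactly the 500 ms whole-beat delay and the 167 ms triplet delay worked out above.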

Of course, you don't have to set delay times like this, but it is an experiment that every musician/engineer should try out at least once in their career.
Publication date: Monday May 18, 2009
Author: David Mellor, Course Director of Audio Masterclass

Korg M3 - Basic Sequencing (Part 1) - In The Studio With Korg

Monday, June 25, 2012

An unusual pair of loudspeakers that fire UPWARDS!

Why have loudspeakers that fire sound at you when you can have loudspeakers that spray sound all around the room? Those crazy Swedes... :-)

By David Mellor, Course Director of Audio Masterclass
There's an interesting pair of loudspeakers up for auction on eBay at the time of writing (June 21, 2012). Unlike normal loudspeakers, the Sonab OA14, of Swedish design, doesn't direct sound at the listener but fires it up in the air. How crazy is that?

Well, actually it's not crazy at all. I've heard these speakers. OK, it was back in 1973 in a hi-fi shop. But I remember the experience of hearing Frankenstein by The Edgar Winter Group played very loud, and it was very enjoyable indeed.

If you don your x-ray goggles and look closely at the grilles on the tops of the loudspeakers, then you will see what's happening. Actually, you can just look at this photo...

As you can see, each loudspeaker has four tweeters, mounted to spread the sound out, rather than focus it in any particular direction. And it really does work. The room is filled with sound and, rather than having one optimum sweet spot, you can walk around anywhere in the room and the sound is still good.

Why this system didn't catch on more, I really don't know. It isn't even necessary to have the speakers in view - you can hide one behind a TV and the other behind a sofa (not crammed in too tight). For casual listening it works really well, and can be very spouse-friendly.

Clearly you wouldn't be using speakers like these for studio monitoring purposes, but in the living room they have a lot to offer. Now, let's see if I have a spare three hundred and fifty quid in my wallet...
Publication date: Monday June 25, 2012
Author: David Mellor, Course Director of Audio Masterclass

Korg M3- RPPR Patterns- In The Studio With Korg

Sunday, June 24, 2012

Create real acoustic reverberation, even if your interface doesn't have multiple outputs

Real acoustic reverb has texture and character, and it's much more fun than using a plug-in. But how?

By David Mellor, Course Director of Audio Masterclass

The essence of real acoustic reverberation is to send a signal to a loudspeaker in a reverberant space, pick up the reverb with a microphone and record it back into your DAW. Let's assume you have a vocal already recorded, and your audio interface only has stereo inputs and outputs. To keep things simple we'll record the reverb in mono, but you can easily do it in stereo with two mics.

Connect one output of your interface to an amplifier and loudspeaker. A guitar combo will give an interesting result, but a hi-fi or studio monitoring amp and speaker will be cleaner. Place the loudspeaker in a reverberant room. (Don't take mains-powered equipment into a bathroom.)

Play the track and make sure the audio is coming through OK.

Now set up a microphone and connect this to one of the inputs of your audio interface. Point the mic away from the loudspeaker so that it picks up mostly reflected sound. Create a new track to record the signal from the microphone. Now comes the important part...

You must mute this new track so that the audio doesn't feed through to the outputs of the interface. You can do this by clicking the mute button or by pulling down the fader. If you don't do this, you will get howlround (feedback). This will be unpleasant and you will spoil your recording.

You're all set to go now. Have a run through and set the gain for the microphone, then go ahead and record.

Since you were not able to monitor the reverb, you'll need to play back your recording and see how it sounds. You may find that an adjustment in the mic position will be required. Adjust as necessary and go again.

Hey presto! Real acoustic reverb!
Publication date: Sunday June 24, 2012
Author: David Mellor, Course Director of Audio Masterclass

Saturday, June 23, 2012

Korg M3- Chord Assign- In The Studio With Korg

Orem City Library, Utah USA

Orem City Library in July of 2008.

Using some antiquated but effective equipment, along with a little help from some outside equipment, the youth performance and puppet show were a great success.  Our services have also extended to the local city's annual parade, doing sound engineering for local live stage theatrical performers.

Rachmaninov: Symphony No. 2 / Sokhiev · Berliner Philharmoniker

Friday, June 22, 2012

How photography can tell you something about the professional standard of your audio

Are you concerned about the professionalism of your work? Some photographers are, and some not so much. The comparisons are interesting...

By David Mellor, Course Director of Audio Masterclass
Photography and audio are as different as chalk and cheese. But professionally, both photographers and sound engineers (or music producers) need to satisfy their clients. If they do, they eat. If they don't, they starve.

"Anyone can take a photograph", one might say. Well yes, anyone can. But can you take stunning photographs day after day, week after week? Can you take photographs that people will pay for? That's a different thing entirely.

We use photographs at Record-Producer.com and Audio Masterclass. Many of the photographs are highly professional in quality. Equipment manufacturers commission highly-skilled photographers to make their gear look sexy, and they hope that these photographs will be seen by as many prospective purchasers as possible. We're happy to help with that when the photos suit our purposes.

But often in Record-Producer.com we just need a photo that illustrates an article in some way, and using an equipment photo wouldn't be relevant. We source these photos from photo libraries.

The first-call photo library for many media organizations is, you may be surprised, Flickr.com. This is because many of the contributors to Flickr allow their work to be used free of charge, with just a credit to the photographer, which is most easily provided by linking to their photo stream. Click the photo at the top of this page for an example.

However, finding a really good, striking, photograph on any particular topic on Flickr is tough. Most of the photographs on Flickr are nothing more than casual amateur snaps. Hardly the 'painting with light' that the word 'photography' derives from.

But there is a source of higher-quality photographs - iStockPhoto.

Although the photos on iStockPhoto are not free to use, most of them are quite cheap. And it is possible to pay a one-off licence fee to use the photo as much as you like. That is very convenient, in comparison with licences where you have to account for how an image is used and perhaps re-license every year.

Now, I say that the photos at iStockPhoto are higher in quality than Flickr, but many of them look more like they were shot by photography students than seasoned professionals. Particularly those that use a model - the combination of inexperienced model and inexperienced photographer makes for a less than fully pro result.

To find the best in photography, one has to go elsewhere - to Getty Images. You can look through the Getty catalog and see page after page of truly stunning photographs. This is where the best photographers place their work. As you might expect however, it costs a significant amount of money to license a Getty image. More than Record-Producer.com can afford. Much more.

How this relates to audio is that these three photo libraries illustrate the difference between amateur, semi-pro and fully pro very clearly. The top professionals in photography have the knowledge, skills and experience to turn out work that is simply better than the others can achieve. And there is a big difference between semi and fully pro. So next time you finish a recording project, ask yourself how it stands in the rankings of professionalism.

Is it Flickr, iStockPhoto or Getty?

P.S. The image at the top of the page is nice and certainly better than I could have taken, but a Getty photographer would have gotten rid of the mic stand on the right, and would have had a selection of a hundred similar images from the same shoot to choose from.
Publication date: Friday June 22, 2012
Author: David Mellor, Course Director of Audio Masterclass

Lehi City Library Utah, USA

Some of the systems at the Lehi City Library include: Shure DFR Receiver, AT ESW-R210, ATW1451 Wireless system, AT 892 Headset, Point Source CO3-Headset, Marantz Tape Deck, Sony DVD, Elmo Projection system, TOA 900 Series.

Some of their system is a bit antiquated but effective. We have incorporated new equipment, as well as a new wireless microphone system with the TOA amplifier, to allow more interaction with their story time presentations.  We have enjoyed serving them.

Should you optimize tracks individually or in the context of the whole mix?

 You can make each individual track of your recording sound as great as you like. But will they all mix together successfully?

By David Mellor, Course Director of Audio Masterclass
If you read the comments of pro mix engineers whenever you can (and you should) you will often find that they like to optimize individual tracks in the context of the whole mix. So, for example, you would EQ a guitar track while the whole mix is playing. You wouldn't solo the track because whatever you did to it, you wouldn't know whether it would work for the benefit of the whole mix.

But you sometimes have to take what mix engineers say a little less than literally. They might be expert at mixing, but can you also expect them to be expert in explaining what they do? Try explaining to an octo-tentacled space alien how you walk on just two legs. Explaining what you do by instinct is often difficult, sometimes impossible.

Let's suppose however that today you have a song to mix and you decide not to use the solo buttons at all. You throw up the faders and, with levels, EQ, compression and reverb, you refine and hone, hone and refine, refine and hone, until the mix is perfect.

Well done! You have created a mix where each individual track was processed in the correct context.


I would have to ask the question, "In the correct context of what?"

If you adjust everything in the context of the whole mix, then you have started with the context of an amorphous blob of audio. And then you EQed the guitar (for instance) in the context of that amorphous blob. Then you EQed something else, then something else. Basically you're going round in circles chasing a moving target. Gradually you are hoping that things will pull together and you will have a passably correct context in which to judge and adjust individual tracks. Don't worry - you'll get there in the end.

There are many ways to mix, and if the process I have described gets you to a good result, then that's fine.
At the end of the day, if your mix sounds good, then it is good.

I would contend however that there is a better approach to mixing, and I suspect that the really great mix engineers do this by instinct, even if they don't realize it.

In any song, there will be certain components that are the most important. It may be the vocal, or in a more dance-orientated track it might be the combination of kick drum and bass instrument.

Whatever you choose as the most important element of your song, you should solo it and do whatever you need to to make it the most fantastic-est, exciting-est, wow-wow-wow-est it can possibly be. Make it so great that anyone hearing it for the first time will buy your record in an instant, without even hearing the rest of the instruments. OK, that's an impossible thing to happen, but it should be your aim.

When you have done that, you have your context and you can blend in the other instruments to suit the one you have selected as most important. From this point in the mix, no soloing should be necessary.

In summary, I will definitely say that there is no one correct way to mix. However it is always correct to have a plan of action. Different plans for different tracks, perhaps different plans for different days. The plan I have just outlined is a good one.
Publication date: Thursday June 21, 2012
Author: David Mellor, Course Director of Audio Masterclass

Mozart: "Ah conte, partite" / Prohaska · Abbado · Berliner Philharmoniker

Thursday, June 21, 2012

Why are you recording at home when a pro studio would do a better job?

Still struggling to get a good sound in your home recording studio? Perhaps a visit to a pro studio would fix all your problems.

By David Mellor, Course Director of Audio Masterclass

We received an interesting question recently here at Record-Producer.com Towers...

An RP visitor asked quite simply: why do people record at home when a pro studio would do a better job?

Hm, that really is a good question. If you are already a good musician then surely booking a pro studio would be far easier than spending months and years learning how to operate a home studio properly. Why waste all of this time?

And consider the money... you could spend a whole day in a world-class studio for less than it would cost you to set up even a halfway decent home studio. You could come out with a demo that you could use to get a deal, or license it to a smaller label as a master. Your career would be off to a head start.

It all sounds very tempting. But, there is a 'but'...

I can tell this story from my own experience of many, many years ago when I had a little bit of skill with music (which somehow seems to have receded over the years!) and at that point zero experience of recording, other than with my simple tape recorder at home.

I booked a studio that, for its period in time, had good equipment that was certainly capable of producing professional recordings.

The studio's owners worked as engineers and since they worked practically every day they had massive experience of recording all kinds of music.

So I was pretty sure that the equation, my music + good equipment + good engineer would equal a good recording.

I was wrong...

It was a fantastic day, I have no doubt about that. It really does stick in my memory. And there was no problem with the music, none with the equipment and none with the engineer.

So why did the recording turn out rather lackluster and why did I put it in a cupboard, never to see the light of day again?

The answer is in the interface between musician and engineer.

It's difficult to explain. I knew what I wanted musically, but I didn't then have the knowledge of recording techniques to communicate effectively with the engineer. The engineer on the other hand was trying to interpret my music through the experience he had of countless other musicians and bands.

And somehow we didn't make a connection. With the glorious 20/20 vision that hindsight provides I can see what a massive gap in understanding this is, and why we have people called 'record producers' whose job it is to bridge the gap between music and recording.

And I have heard the same tale so many times from other musicians that I know that it is true - you can go into a pro studio with great music and great musicianship, the engineer can do a professional piece of work, but you come out with a recording that just doesn't cut it.

So what can you do?

There are two answers... One is to view your first day in a pro studio as the first day of a learning experience that is going to take many visits to the studio to perfect. And until you have mastered the difficult art of self-production in a pro studio, and communicating exactly what you want to the engineer, you will not be satisfied with the results.

The other answer is to set up your own home recording studio and learn the recording process for yourself!
So that, in a nutshell, is why we have home studios. It is actually very difficult to get a good result in a pro studio, unless you really know what you are doing. It is possibly as difficult as learning how to make decent recordings yourself at home.

We would love to hear of your pro studio stories from the early days of your career, successful and otherwise. And if you run a studio for hire yourself, how would you advise an aspiring musician or band to prepare themselves for that special first day in a pro recording studio?

Publication date: Tuesday November 30, 1999
Author: David Mellor, Course Director of Audio Masterclass

Schumann: Symphony No. 4 / Rattle · Berliner Philharmoniker

Wednesday, June 20, 2012

Tips for Improving Your Tape Ministry Recordings

Contributor:  John Mills

We tend to think of him as “friend of Shure Notes John” but to the larger audio world, he is FOH engineer extraordinaire, audio tech advisor in his popular TechTraining101 site, frequent contributor to Worship Musician plus pro on the bus and at the board with this summer’s Brothers of the Sun tour featuring Kenny Chesney and Tim McGraw.  

We’ve been turning to John for practical advice on everything from critical listening to mixing tips for many years now, so when we decided to tackle the topic of tape ministry recording, we didn’t have to look very far.  We tracked him down in cyberspace ‑ somewhere between Paradise Island and Tampa ‑ at crunch time: the start of a 25-city tour.

Before we get to actual recording, let’s touch on a subject that’s rarely mentioned. Recording rights.

“Any time you press record on an audio or video device, you need to make sure you have the rights to record the music. Recording the Pastor’s sermon is perfectly fine because technically he is the copyright owner of his sermon. Music is another story entirely.

If your worship team is singing another worship leader’s song, or a classic hymn for that matter, you can pretty much count on the fact that there are restrictions to pressing that big red record button.  Even if your church isn’t producing CDs to make a profit, the rights aren’t as hard to understand as you might think. For a thorough understanding, you can find detailed information at websites like these.”

Christian Copyright License International
Music Services Organization

Now that we’ve covered the fact that you really DO need to have the rights to record, let’s talk about actually pressing the RECORD button. 

“Many churches simply hook up the tape recorder to the left/right output of the soundboard. That, as I’m sure your readers are aware, is going to sound pretty bad from a mix perspective.

What is coming straight off the board is often very unbalanced for the recording. It sounds great in the room because you’re hearing the trumpets fine without having a mic on them, but the recording is suffering because it doesn’t ‘hear’ the horn section, or whatever instrument(s) you choose not to mic. “

Many of our Shure Notes readers volunteer or work part-time in churches that don’t have mega-church budgets.  What advice do you have for them?

“Here are some No Budget tips:
  • Put a mic on anything you’re not satisfied with on the tape. I can hear the folks in the front row now … “We don’t need microphones on the drums, they’re already too loud.” Tip two will answer that complaint.
  • Set up your recording device to take a feed from a pre fader auxiliary send. This will allow the FOH engineer to mix what is needed in the house, while having a completely separate mix for the tape. Yes, this does mean a little more work, but it will give you the ability to mix things differently for the tape.  Make sure your aux sends mute when you engage the mute on the main channel.  If not, you’ll have stuff going to tape that you really don’t want there.
For those folks in the front row concerned with the volume, tell them not to worry; those mics are just for recording. If you put those extra “recording” mics on pre fader aux sends, you don’t even have to push up the fader for that channel. So if they’re really concerned with the volume, take them to the board and show them that the drum mics aren’t even on in the house.”

OK, what if you have some money to spend?

“A really neat trick on the last install I did was to use a separate Aviom system for the recording.  If you’re having trouble with monitors and recording services, this may be the way to go. We installed an Aviom system (www.aviom.com) for the band to run their own monitors. Then we took an extra control surface to a room just behind the stage. We hooked up the output of the Aviom to the input of the tape deck and monitored it through a set of computer speakers. This gave them the ability to have a pseudo-recording room for a pretty reasonable budget.”

Let’s take a flight of fancy and assume that money is no object.

“If you are really serious about recording music the best way possible, you’ll need a separate engineer in an isolated room with a separate console. It’s really the best way to get amazing mixes.”

We know that not every house of worship has a professional staff.  What’s your advice for the church with a semi-pro crew?

 “Any mix is only going to be as good as the sound person behind the board. If you have only one good audio tech, I wouldn’t spend $100K on a separate recording room. I’d spend my money on educating some of the other audio volunteers. I’ve heard mixes from the simplest of setups that blew away the recordings done by multi-thousand dollar remote recording rooms, because they had a better sound person.”

What separates good recordings from great ones?

“The biggest key to a good recording is making it sound like you were there.
Start with a good, clean, balanced mix of all the instruments. It’s not uncommon in a smaller building to have six or more additional mics on instruments that aren’t even going to the house speakers. They’re just for the recording setup that I described before.

Now that you’ve built a good mix with whatever system you’re using, here are some additional suggestions:
  1. A mix straight off the board will never sound completely live because it is getting a tight sound directly from the instruments. You need to add back in some ambience with audience and/or ambience mics. Remember, audience mics are a spice: add too much and it sounds unnatural. Get a good mix of the instruments first and then add in just enough audience so that listeners know that they’re there. I usually start my recording with the audience mics considerably back in the mix. I wait until the middle or end of the first song to decide how much I need. That gives me a few minutes to make the mix as clean as possible before adding the spice.
  2. Also of note, it’s best to EQ out as much of the low frequencies as possible in these mics.  If I have a variable High Pass filter on my soundboard, I may set it as high as 200Hz.  If you only have a button, engage that, and then take your low frequency shelf filter all the way down.  This lets the warmth of your dry mix come through without muddying it up with a bunch of low mush that the ambient/audience mics are picking up.
  3. If you have a stereo aux send then definitely do the mix in stereo and feel free to pan stuff around. Your brain loves to hear things with space in between and around it and stereo audience mics are always going to sound more real. They really add a sense of dimension to the mix.  If you have a little more budget available, the Shure VP88 is one of my favorite stereo mics. Either way, when you add more than one audience mic to the mix be sure to pan them hard left and right so that the listener gets that natural sense of space.”
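The high-pass filtering in tip 2 can be sketched in code. Below is a minimal first-order (one-pole) high-pass in Python - nothing like a console's actual EQ circuitry, and the 200 Hz cutoff and 48 kHz sample rate are just illustrative figures - but it shows the idea of rolling off the low-frequency mush from the ambience mics while leaving the rest of the signal alone.

```python
import math

# A minimal sketch of the suggested ambience-mic high-pass filtering:
# a first-order (one-pole) high-pass. Purely illustrative - a real
# console HPF is steeper - and the 48 kHz sample rate is an assumption.
def highpass(x, fc, fs):
    """First-order high-pass with cutoff fc (Hz) at sample rate fs (Hz)."""
    rc = 1.0 / (2.0 * math.pi * fc)  # equivalent RC time constant
    dt = 1.0 / fs
    a = rc / (rc + dt)
    y = [x[0]]
    for i in range(1, len(x)):
        # Differences in the input pass through; steady content decays away.
        y.append(a * (y[-1] + x[i] - x[i - 1]))
    return y

# A constant (0 Hz) signal should be almost entirely removed.
dc = [1.0] * 1000
filtered = highpass(dc, fc=200.0, fs=48000.0)
print(filtered[-1])
```

Feed it a steady low-frequency rumble and the output decays toward zero, which is exactly the "low mush" John wants out of the audience mics.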
Final thoughts?

“Live worship recording is an art and a science. It begins with a celebration of faith – the goal is capturing that experience in a form that can be shared with the world.”

About John Mills:  John is a 20-year veteran of live sound. He’s toured with some of the biggest names in Christian music – Chris Tomlin, Shane and Shane, Lincoln Brewster and Paul Baloche and is currently on a summer tour with Kenny Chesney and Tim McGraw.  John writes regularly for Worship Musician and is a great resource for church tech teams with helpful advice on his TechTraining101.com website.  We’re also crazy about John because he says things like this: “I don’t want to turn around one day and look to see what I’ve accomplished in my life and realize that it was only running good sound at this or that concert. I remember promising God when I first started that if he allowed me to use my talents at this, I would be faithful to share that knowledge.”

The Berliner Philharmoniker perform Schumann's Symphony No. 3 / Trombone tutorial

Tuesday, June 19, 2012

ULX-D Dual & Quad Receivers First Look from InfoComm

Can your wireless system fit 100 transmitters into just two TV channels (12 MHz)? Ours can. Meet the new ULX-D Digital Dual and Quad Receivers.

As people continue to demand more and more wireless spectrum for their myriad devices, audio pros must find ways to maximize the number of wireless microphones that can operate reliably in the remaining spectrum. Debuting at InfoComm 2012 are two new additions to the ULX-D Digital Wireless System: the ULXD4D Dual Channel Receiver and the ULXD4Q Quad Channel Receiver. They pack either two or four channels of wireless into one rack unit, saving space and reducing installation time.

In Standard mode, up to 17 ULX-D systems can operate in just 6 MHz of spectrum, which is equal to one U.S. TV channel. Need even more systems? No problem. Just activate High Density mode, which allows up to 47 systems to operate in just 6 MHz of spectrum, with a working range of 100 feet.


This video demonstrates High Density mode with time-lapse footage shot at Shure Corporate Headquarters, where 100 transmitters were placed side by side, and then turned on one by one. If you’re wondering how it works…well, according to Chris:
“High Density mode optimizes the system’s output power and digital RF filtering to reduce its spectral footprint from 350 kHz to 125 kHz, with no loss of sound quality. This allows ULX-D systems to be tuned to frequencies that are much closer together without interfering with each other.”
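As a back-of-envelope check of those figures (and nothing like Shure's actual frequency coordination), dividing the 6 MHz TV channel by the quoted spectral footprints gives roughly the channel counts claimed:

```python
# Back-of-envelope check: how many equally spaced carriers fit in one
# 6 MHz U.S. TV channel at each quoted spectral footprint. This is a
# simplification for illustration, not Shure's coordination algorithm.
TV_CHANNEL_KHZ = 6000  # one U.S. TV channel = 6 MHz

def max_systems(footprint_khz):
    """Carriers that fit in one TV channel at a given footprint (kHz)."""
    return int(TV_CHANNEL_KHZ // footprint_khz)

print(max_systems(350))  # Standard mode footprint
print(max_systems(125))  # High Density mode footprint
```

This works out to 17 and 48 respectively, which lines up with the quoted 17 systems in Standard mode; the quoted 47 for High Density mode presumably reflects a guard margin in practice.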
Go on: see for yourself.

Learn more about ULX-D Digital Wireless on shure.com.

Schumann: Symphony No. 2 / Rattle · Berliner Philharmoniker

Monday, June 18, 2012

Why have a pair of speakers when you can have a quad (literally)?


If your living room is dominated by a gigantic pair of speakers, then why not dominate it some more?

By David Mellor, Course Director of Audio Masterclass
I have written about electrostatic loudspeakers before. Not to bang on too much, but if you haven't heard electrostatics, then you should. You will realize immediately how colored moving coil loudspeakers are in comparison. But hey, that's the way the world is and we live with it.

Probably the best-known electrostatic loudspeaker is the Quad ESL-57. The '57' stands for 1957, the year when it was designed. Despite their design being more than half a century old, examples in good condition still sound great.


The problem with the ESL-57 is that it doesn't go very loud. So what do you do if you want more volume? Why, simply double up. And that's what you see in these photos - stacked electrostatics. In very nice cabinets as far as I can see.

These particular examples are up for auction on Ebay at the time of writing (June 15, 2012).

This actually wasn't all that uncommon a thing to do. I remember other references to stacked electrostatics from years gone by. The boss of SME (of pickup arm fame) had a set.

Of course, desirable as such a set-up might be to have in your living room, you won't get far without the approval of your significant other. So do consult before you bid. Or just bid and, if you win, tell us what happens...

Publication date: Monday June 18, 2012
Author: David Mellor, Course Director of Audio Masterclass

Sunday, June 17, 2012

An 8-channel preamp of SSL/Neve quality? Really?

An 8-channel preamp of apparently SSL/Neve quality is up for sale on Ebay. But can you really trust the seller's description?

By David Mellor, Course Director of Audio Masterclass

I often find it interesting to browse the pro audio section of Ebay. I found an interesting auction for an 8-channel preamp that claimed to be of SSL/Neve quality. But...

I used to own the exact same model myself and I know that it isn't. It isn't the same quality at all.

Regarding sound quality, I had no issues. For a solid-state preamp, the sound was as faithful to the output of my microphones as I could possibly desire it to be. However, I went through a period when I only ever used one channel, leaving the other seven channels unloved. And untested.

But then I had the opportunity to make a location recording where all eight channels would be used. To cut a long story short, the other seven channels didn't work. Only the one I had been using regularly still functioned. I completed the recording by other means.

Back home in the studio, I took the lid off this preamp for the first time. I was astonished at what I saw...

Nothing. Well, almost nothing. Most of the interior space was filled with nothing more than a mix of 78% nitrogen, 21% oxygen, 1% argon and other trace gases. The rest was mostly power supply and, oh yes, a little audio circuitry too. Indeed, the unit's preamplification duties were fulfilled by eight integrated circuits and seven of them had at some point in time given off their tiny puffs of magic smoke. I rather imagined I could still smell it.

Now I don't want to complain unduly. The integrated circuits were the revered SSM 2017 and they were easily replaced. A short while after that the power supply failed and I decided to junk the unit rather than foist it off on an unsuspecting Ebay punter.

Now, regarding SSL and Neve quality. I honestly don't think that either would have given me better sound, but they certainly would have been more satisfying to own, and I doubt that their circuitry is quite so prone to spontaneous combustion. My view is that it isn't generally a good idea to rely too much on Ebay sellers' descriptions of their items.
Publication date: Sunday June 17, 2012
Author: David Mellor, Course Director of Audio Masterclass

Mendelssohn: Symphony No. 3 "Scottish" / Heras-Casado · Berliner Philharmoniker

Saturday, June 16, 2012

Should we clean up old recordings, or keep their noise and distortion in all their glory?

We think we know everything these days. But are we getting a little too clever? Perhaps people in an earlier age of recording knew something that we don't.

By Glen Stockton

Back in the 1930's and 40's my great-uncle, Robert MacGimsey, recorded hundreds of Negro spirituals on his Louisiana plantation, using a portable disc recorder and lacquered aluminum discs. I had the recordings dubbed off on 1/4" reel-to-reel tape in 1968 to preserve them, and of course today none of the old discs are playable.

About twelve years ago I began digitizing these recordings and cleaning them up with PC software: All of the clicks, over 90% of the turntable rumble and other background noise, etc., were removed successfully, and I went further to do "micro-surgery" on all other aspects of the songs, including diction and articulation, EQ, time-stretching, even adding a stereo effect, until the recordings were just sparkling clean and clear.

It was my intention to release a series of CD's which would be suitable for radio broadcast and would play well in home stereo units and car CD players. Well, they weren't very well received. Many folks didn't believe that they were actually old recordings, because they sounded so clean and sparkling, so modern. Others were incensed that I had in any way changed the musical content, or the medium by which it was created. These were VINTAGE recordings, and people wanted to hear them they way they would have sounded on a Victrola!

The project still resides on my old hard drive and on numerous DVD's. And in all of this, I re-learned a valuable life lesson: People love antiques, whether they be visual, physical, or audio; and I seemed to have destroyed that illusion for them, in my quest for "perfection."

There are stories of well-intentioned individuals who have taken a genuine old Stradivarius apart, scooped out the back and top much thinner than they were originally, or maybe even made a new top for it and thrown the original away, and actually re-varnished the whole, thinking they had done a great service of some sort. What they did was to destroy almost the entire value of that priceless artwork, when leaving it in its now imperfect, but original, condition would have today brought perhaps two or three million dollars at the auction block at Sotheby's.

Can we improve upon Leonardo da Vinci's Mona Lisa by adding mascara, eye shadow, highlighted cheeks and perhaps some nice earrings and a gold necklace to her? Or could we 'correct' Michelangelo's David or Madonna and Child with a little chipping here, and a little grinding there? Maybe rewriting Shakespeare's Romeo and Juliet to read, "Romeo, dude, where the **ll are ya?!" Reworking an original, vintage work rarely results in any kind of an improvement, just like using modern mastering on an old (or maybe even bad) song to produce a 'Masteringpiece'.

Consider what modern electronics have done to 'improve' the great studio recordings of the past, when played on an MP3 or an iPod. If you can't hear the sonic difference, then my point is well stated: We have lost the 'golden ears' of our parents and grandparents, who listened to all genres of music on their big, stereo music systems, whether they were reel-to-reel tape or LP's.

I well remember my old uncle's Fisher machine, which consisted of separate, heavy cabinets for each of the channels, housing high-quality 15" woofers and various other sizes of drivers, and warming the house with too many tubes to remember. And the sound was out of this world: Operatic singers were present in your living room, violinists and string quartets were alive and breathing, symphony orchestras surrounded you and blasted out their might on those forte passages, and Elvis Presley and John Gary and Bing Crosby sounded better then than I have ever heard them sound on digital equipment.

So, perhaps in summation, my thoughts would be: if it's vintage, leave it vintage. If it's poor, old or new, modern mastering isn't going to make it wonderful. Great music isn't created in the mastering room, it's created in the recording studio. I'm 65, by the way, and my hearing is still worlds ahead of any young person I know today.

Perhaps we're even doing a great disservice to our younger generation by giving them super-loud music that grievously lacks in quality? When I dine, I have to admit that, sometimes, quantity is quality. But not so with music. Forget 'in with the new and out with the old'. That works great for New Year's celebrations, but not always for music.

We should leave vintage recordings the way they were originally, and let the historians of future generations have something to enjoy in its purity.
Publication date: Saturday June 16, 2012
Author: Glen Stockton

Friday, June 15, 2012

A very unusual tape recorder used for mastering

 If you look at the photo carefully, you will see that this Studer A80 analog tape recorder has several more tape guides than the norm. It's used for mastering. But why?

By David Mellor, Course Director of Audio Masterclass

At the time of writing (June 13, 2012), this tape recorder is up for auction on Ebay. It is a Studer A80 and, as analog tape recorders go, this is one of the very best. But this isn't a normal A80, it is the mastering version. So the question is, why is there a special mastering version, and what makes it different from a normal A80?

If you look closely at this pic, you will see that the heads are arranged differently to a normal tape recorder...

 Usually, you would expect to see three heads - erase, record and play - set very close together underneath a head cover that makes everything look neat and tidy. But here there are two playback heads, separated quite widely.

So this machine can't record at all, which means it isn't really a tape recorder - it is a tape playback machine. So how does that make it suitable for mastering?

The answer is that this machine was used for mastering to vinyl. It is only ever used to play back signal to a vinyl cutting lathe. It is not capable of recording and that never was the intention of the machine.

So now the question arises: why was a special version of the A80 desirable for mastering? Why wouldn't a standard A80 do?

The answer to this is that to maximize the duration of playback of each side of a vinyl record, the turns of the groove should be spaced so that they never take up any more width than necessary. Loud signals make the groove wiggle a lot. For quiet signals the groove is much more nearly a smooth curve. Lathes were designed so that they could automatically modify the pitch of the groove according to the level of the signal. However, since a quiet section in one turn of the groove might be followed by a loud section in the next, the lathe had to be able to 'look into the future' to see what is coming next.

This is the purpose of the second 'preview' playback head on the left, which sends signal to the lathe's control mechanism a little ahead of the signal sent to the lathe's cutter head. The extra tape guides are there to extend the time interval between the preview head and the main playback head. The tape is looped around these heads according to the diagram attached to the top plate of the machine.
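The preview interval is simply the length of tape path between the two heads divided by the tape speed, which is why looping the tape around extra guides extends it. Here is a tiny illustration with assumed figures: 15 in/s is a common mastering tape speed, but the 20-inch path length is purely hypothetical.

```python
# Preview delay = tape path between preview and playback heads / tape speed.
# Both figures below are assumptions chosen for illustration only.
TAPE_SPEED_IPS = 15.0  # assumed tape speed, inches per second
PATH_INCHES = 20.0     # assumed path length created by the extra guides

preview_delay_s = PATH_INCHES / TAPE_SPEED_IPS
print(f"preview head leads the playback head by {preview_delay_s:.2f} s")
```

Adding guides to the loop lengthens the path, and hence the time the lathe's control mechanism has to 'look into the future' before the cutter head receives the same signal.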

Of course you could say, "Why not use a normal tape recorder and delay the signal to the lathe's cutter head digitally?"

Tell this to a vinyl junkie and see what happens...
Publication date: Friday June 15, 2012
Author: David Mellor, Course Director of Audio Masterclass

Brahms: Violin Concerto / Braunstein · Nelsons · Berliner Philharmoniker

Thursday, June 14, 2012

Do acoustic instruments need compression?

A recording of a group of acoustic instruments will have a dynamic range that is too wide for home listening. What should be done?

By David Mellor, Course Director of Audio Masterclass

Yesterday's Record Producer Daily on mastering brought up a couple of interesting comments.

One from Remy Ann David remarked, "I've been told numerous times that my mixes already sound mastered".

This is of course exactly how things should be. When the mix leaves the studio it should already be the very best it can be. If there is the merest inkling in the engineer's mind that mastering will be able to improve it, then the mix simply isn't finished.

Another from Alcohol of Massachusetts commented, in relation to recordings of acoustic instruments, that, "To make the recording more suitable to listening, some compression is necessary to tame the dynamic range."

Yes, I agree with this. If you record a group of acoustic instruments with just one stereo pair of mics, then the dynamic range that is captured will be uncomfortably wide for domestic listening. And iPod/in-car listening too for that matter.

Without disagreeing with this point in the slightest, it does raise for me a couple of issues that I would like to talk about.

The first is what happens when you add more mics on top of the simple stereo pair.

As you know, if a group of acoustic instruments or voices sounds good in the natural acoustic of a room, then it is possible to make an excellent recording using just two mics in the coincident crossed pair, near-coincident crossed pair, ORTF or spaced omni configurations, according to your preference.

All it takes is to find the right position for the mics. ("All it takes"! - it can take ages of trial, error and adjustment to get this right.)

But there are reasons why you might want to add more mics, even in a good acoustic.
One is if you are in a hurry.

Is there such a thing as a hurried recording engineer?

Well there shouldn't be. Enough time should be booked to do the job properly. But in broadcasting there often isn't so much time. And in television recording there are the requirements of the cameras to consider too.

To get a good balance more quickly, you can set the stereo pair closer than you normally would. Then add two spaced ambience mics towards the rear of the auditorium.

The outputs of the pairs of mics can be balanced on the faders to give the optimum blend of direct sound and reverberation.

Many would say that this isn't the best way to work, and indeed it can result in the reverberation not sounding entirely 'connected' with the instruments.

However, I for one rather like it and I would sometimes choose it for preference over the simple stereo pair.

Another reason for adding more mics is if the ensemble is large. What can happen here is that the instruments that are remote from the stereo pair sound dull and distant in comparison with those at the front.

One solution that is often employed is to add more mics for the rear instruments. You can go further than this and mic every section of instruments individually, plus the main stereo pair.

This will solve the perspective problem, at the expense of possibly a more confused sound.

But something else happens too...

The dynamic range is reduced.

I first became aware of this when it was demonstrated to me some time ago by Mike Beville of Audio & Design.

In my own recordings since then I found that it was most definitely true. And the more mics you add, the more the dynamic range is reduced.

We are not talking about major differences here - around 6 dB at the most I would say. Still, this is enough to make an audible difference and make a recording more listenable at home.

There is another point I would like to make...

I believe with a passion that compression is not the best way of controlling dynamic range.

This may seem odd since that is the whole reasoning behind compressors in the first place.

The fact though is that compressors were invented for the broadcast industry. And as I have said already, the broadcast industry works at a fast pace - they need compressors. But recording engineers outside of broadcast should look at another option first.

And that option is to control the dynamic range manually. It is as simple as this - raise the level when it gets too quiet. Lower it when the level comes back up again.

This used to be done completely manually on the faders, but now it can be done more conveniently with fader automation.

When a musically-aware engineer adjusts the dynamic range of a recording manually, the result is far better and more natural than can be achieved with a compressor.
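That kind of gain riding can be expressed as simple automation: measure the level of each short section and nudge its gain toward a target, much as an engineer would ride a fader. The Python sketch below is a toy illustration only - the target level, window length and gain limits are all assumptions, and a real implementation would smooth the gain changes between windows to avoid audible steps.

```python
import math

# A toy sketch of level riding as automation: scale each window of samples
# toward a target RMS level, within a limited gain range. All parameters
# are illustrative assumptions, not anyone's actual workflow.
def ride_gain(x, fs, target=0.25, window_s=0.5, max_ratio=4.0):
    """Scale each window of samples toward a target RMS level."""
    n = max(1, int(window_s * fs))
    out = []
    for start in range(0, len(x), n):
        block = x[start:start + n]
        rms = math.sqrt(sum(s * s for s in block) / len(block))
        # Gain that would hit the target exactly, clamped to +/- max_ratio.
        gain = 1.0 if rms == 0 else min(max_ratio, max(1.0 / max_ratio, target / rms))
        out.extend(s * gain for s in block)
    return out

# A loud passage followed by a quiet one: after riding, the two levels
# end up much closer together than they started.
loud_then_quiet = [0.8] * 100 + [0.05] * 100
ridden = ride_gain(loud_then_quiet, fs=200)  # 0.5 s windows = 100 samples
```

Unlike a compressor, which reacts sample by sample with fixed attack and release times, this approach looks at whole musical sections, which is closer in spirit to what a musically-aware engineer does on the faders.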

Compressors do of course have their place. Reducing dynamic range on a very short-term, tens of milliseconds, basis is one. This is useful for enhancing a popular music vocal.

Going back to the original comment. Yes I do agree that an acoustic recording will need some reduction in its dynamic range. The engineer should choose whichever method gives the most natural sound.
Publication date: Tuesday November 30, 1999
Author: David Mellor, Course Director of Audio Masterclass

Tchaikovsky: Symphony No. 5 / Abbado · Berliner Philharmoniker

Wednesday, June 13, 2012

How technology is killing music

Don't we have great music-making technology these days? But what happened to the great music we were going to make with it?

By David Mellor, Course Director of Audio Masterclass

This article is inspired by a recent conversation with someone who knows, I mean really knows, about the business of songwriting.

A lot of people will say that songs today are nowhere near as good as the best songs from the 1970s and earlier, thinking back as far even as the 1940s and 1930s too.

Well, that would be a matter of opinion that you might not share. Even so, as an opinion it is a very popular one, if not universal.

But what about your songs? You have studio technology right there in your home that is technically way in advance of anything in even the best studios of the 1970s or earlier.

So how come you are not writing great songs? I mean really great songs that would challenge the best, of any decade.

You have the technology, so why isn't it helping you turn your dreams into reality?

Great gear equals great recording. Right?

Unless you have made some stunningly bad equipment and software choices, you have a home studio that is perfectly capable of turning out work that is every bit as good as you hear in commercial releases.

So technically, there is nothing holding you back from having a hit record.

So if you wrote a good enough song, you would be able to make a recording that would propel your name, as writer and producer, right to the top of the charts.

The problem is, of course, writing that 'good enough' song. If you haven't done that yet, you need to figure out what it is that is holding you back.

Let's look at the process of writing and recording a song...

First you need to write the song. Perhaps you'll do it with a guitar, a pencil and a notebook. Perhaps you'll do it with a master keyboard, software instrument and word processor. Perhaps you'll do it entirely in your head! Any way is fine.

Next you need to make a recording. So call your singer, create an arrangement, record it, mix it and master it. Job done!

Ah, but here is a problem. Although you have the equipment and software to make a recording of professional quality, you don't quite have the necessary skills in production to pull it off. As brilliant as your song is, your finished recording sounds just a little bit... well, amateur.

Now I would say there's nothing wrong with this. Firstly I don't subscribe to the idea that the professionals 'own' music in any way. Music is the property of the people, in the words of Magic Michael that have resonated down the years. Secondly, it isn't reasonable to expect that anyone working at home in their spare time can compete as an absolute equal with seasoned studio professionals who have done nothing but make records twelve or more hours a day since they were seventeen years old.

Oddly enough, a part-time songwriter can compete with a professional, because writing songs comes more from inspiration than from experience and time-honed slickness.

So although you might have the ability to write a great song, in all probability you don't have the ability to produce a finished recording of truly professional quality.

(OK, maybe you do, but stick with me...)

Can't make a master? Then make a demo!

Let's suppose that you have written a great song, but realistically you don't believe that you can take it all the way to becoming a commercially-successful recording. But there's nothing stopping you making a demo!

All you have to do is make a recording that puts the song across well, so that a publisher, producer or A&R manager might take a liking to it and take it on to the next stage. This is how the industry has worked for decades, although in an earlier era the demo would have been a live performance at the piano in a publisher's office rather than a recording.

Here comes my point...

The problem now is that so many people have home recording studios, and many of them are able to make recordings that are close to professional standard. So to make the grade with a demo, you have to arrange, record, mix and master your song to a similar near-professional standard. Otherwise no-one, literally no-one, in the industry will be prepared to spend any of their valuable time listening to it. They will consider that you haven't put enough effort in, so why should they?

So rather than putting all of your energy into your writing and knocking out a quick demo on a Revox or Portastudio, you're burning the midnight oil trying to make a great-sounding production. In all likelihood, whatever magic there was in your song will become diluted or hidden. If it is a truly great song, then it should sound great just with voice and guitar, and that should be all that is necessary for a demo.

In conclusion

My conclusion is that many of us are spending far too much time and effort trying to make professional-sounding recordings, sapping time and energy from what really matters: the songwriting process.

Rather than spending hour after hour building up layer after layer of virtual instrument tracks, checking out plug-in after plug-in to get the compression exactly right, it would be better to work with just a singer and a guitarist to get a really, really good performance of your song. A really good performance. And that means spending time on the music, not the equipment and software.

You know, in an era characterized by over-production, that might just be the way to make your song stand out!
Publication date: Monday December 26, 2011
Author: David Mellor, Course Director of Audio Masterclass

Mozart: Symphony No. 35 "Haffner" / Abbado · Berliner Philharmoniker

Tuesday, June 12, 2012

Are you hard on your hard disk? Is the way you record damaging your system?

An RP reader asks whether freezing tracks that use plug-ins is bad for your hard disk, and whether it would be better to bounce them instead.

By David Mellor, Course Director of Audio Masterclass
Here's a question from an RP reader...

"Actually, this is a response to your article about using as many plug-ins as you like. I'd heard about using this function in Logic. Experienced producers have said to me that this technique is extremely bad for the internal hard drive. The drive has to work so much harder that it won't last very long. I think that Ableton also has a freeze function. I have been advised to bounce the tracks instead of freezing them. Does anyone agree?"
Thanks, John, Amsterdam, Holland.

First let's recap on the freezing technique...

Imagine you have a song that uses loads of software instruments and plug-ins. You try to add another plug-in, but your computer is already working to its maximum so everything grinds to a halt. Some DAWs have a function where you can 'freeze' a track, which records the audio on that track to disk, complete with all effects, so that the system doesn't have to run the instrument or plug-ins on that track. You can unfreeze if you need to later on.
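The idea behind freezing can be sketched in a few lines of code. This is a toy simulation, not a real DAW API: plug-ins are stood in for by plain Python functions, and "recording to disk" is stood in for by caching the rendered audio in memory. All of the names here (`Track`, `freeze`, `apply_chain` and so on) are illustrative assumptions.

```python
# Toy sketch of the 'freeze' idea: instead of running a CPU-heavy
# effects chain on every playback, render the processed audio once
# and play back the rendered copy. Plug-ins are just functions here.

def apply_chain(samples, plugins):
    """Run each plug-in (a plain function) over the audio in turn."""
    for plugin in plugins:
        samples = [plugin(s) for s in samples]
    return samples

class Track:
    def __init__(self, samples, plugins):
        self.samples = samples
        self.plugins = plugins
        self.frozen = None  # rendered audio, once frozen

    def freeze(self):
        # Render the track complete with all its effects, much as a
        # DAW records the processed track to disk.
        self.frozen = apply_chain(self.samples, self.plugins)

    def unfreeze(self):
        self.frozen = None  # go back to running the plug-ins live

    def play(self):
        # Playback uses the cheap rendered copy when one exists.
        if self.frozen is not None:
            return self.frozen
        return apply_chain(self.samples, self.plugins)

gain = lambda s: s * 0.5                  # stand-in for a gain plug-in
clip = lambda s: max(-1.0, min(1.0, s))   # stand-in for a limiter

track = Track([0.5, 2.0, -3.0], [gain, clip])
print(track.play())   # plug-ins run live
track.freeze()
print(track.play())   # same audio, but from the rendered copy
# → [0.25, 1.0, -1.0] both times
```

The audio is identical either way; the only difference is whether the CPU does the plug-in work on every playback or just once.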

Freezing can allow you to get more from your system, but apparently it's bad for the hard disk. Can this be true?

Well, we would have to say that only a well-conducted test would give a reliable answer to this. But we can certainly speculate about what the outcome would be.

Let's think about what the hard disk is doing when you play your song...

Your song consists of a number of audio files playing simultaneously. Each file could be a continuous string of binary digits on the disk. However, since all the files have to play at the same time, the heads of the disk drive have to skip about to pick up a bit of one track, a bit of another, and so on.
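The interleaved reading described above can be illustrated with a toy scheduler. This is not a real disk driver, just a sketch under the assumption that the DAW pulls one buffer from each track file per cycle, so the read sequence jumps between files rather than streaming one file end to end.

```python
# Toy illustration of why multitrack playback keeps the drive heads
# busy: the DAW must fetch a buffer from every track file in turn,
# so reads alternate between files instead of streaming one file.

def playback_read_order(n_tracks, n_buffers):
    """Return the sequence of (track, buffer) reads for one pass."""
    order = []
    for buf in range(n_buffers):
        for track in range(n_tracks):  # one buffer per track per cycle
            order.append((track, buf))
    return order

# Three tracks, two buffer cycles: the head visits tracks 0, 1, 2,
# then comes back to track 0 for its next buffer, and so on.
print(playback_read_order(3, 2))
# → [(0, 0), (1, 0), (2, 0), (0, 1), (1, 1), (2, 1)]
```

Add more tracks, or chop the tracks into shorter edited segments, and the number of head movements per cycle grows accordingly.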

This is absolutely normal behavior for a hard disk. It's what you bought it for.

If you are playing by the rules and reserve a separate disk for audio, then software instruments and plug-ins don't affect that disk at all. They run from the system disk, and in fact they hardly make any demands on that disk either, since they are loaded into the computer's memory.

But what if you freeze a track? What difference does that make to your audio disk?

Well, since freezing a track means recording it to disk complete with instrument and/or plug-ins, there's another track on the disk that needs to be played.

In theory, this should mean that the original track does not now need to be played, but you will find that in some DAWs even muted tracks consume system resources. Still, it's only one more track and this will hardly make any difference unless you really have a lot of tracks in your song.

So freezing shouldn't give the hard disk much more work to do, if any. What about bouncing?

Well, bouncing is what you would have to do if your DAW didn't have a freeze function. And once you had bounced a track complete with instrument and/or plug-ins, it would be no different from a frozen track as far as the disk is concerned.

So our opinion is that neither freezing nor bouncing will affect the disk in any significant way.

What will affect your disk however is editing...

When you edit a track, you create a discontinuity in the data and the heads of the disk will have to skip about more than they did previously.

If you edit many tracks into a lot of short segments and shuffle them about, then the disk will have much more work to do.

Even so, that's what a hard disk is designed for. It is difficult to overload a hard disk with work because the manufacturer would have to be pretty stupid to allow that to happen. The disk will simply reach its maximum work rate then go no faster.

All that skipping about of the heads could shorten the life of the disk, but disks are cheap these days, and as long as you have a backup of your data it doesn't really matter if you have the occasional failure, say once every few months or so.


Different people have different experiences. It is quite possible that we have overlooked something that is affecting the disk in an unexpected way.

We would love to hear your thoughts on whether the way you use your DAW can stress or damage the hard disk. Discussion below...

Publication date: Friday January 29, 2010
Author: David Mellor, Course Director of Audio Masterclass