Welcome to No Limit Sound Productions. Where there are no limits! Enjoy your visit!

| Company Founded | 2005 |
|---|---|
| Overview | Our services include Sound Engineering, Audio Post-Production, System Upgrades and Equipment Consulting. |
| Mission | Our mission is to provide excellent quality and service to our customers. We offer customized service. |
Tuesday, October 30, 2012
Q. Should I use a boundary mic when playing piano live?
Sound Advice : Miking
I play an acoustic baby grand piano and sing in a new bar. The environment fluctuates from being quiet to being ‘moderately’ noisy later on. There is permanent kit on top of the piano during performance (meaning, at present, that there is not the option of opening the lid). I’m ultimately not convinced that the piano is acoustically loud enough for when the room gets more noisy, but I’m more concerned about my ability to hear the piano. Due to the logistic difficulties of lack of space and inability to leave the lid open during performance, I’m thinking of getting a boundary mic to lay inside the instrument. I appreciate that this is not necessarily going to be the final word in creating a great piano sound, but at the very least I’m looking to feed this signal to my monitoring, to help resolve that issue. Do you have any advice? I’ve never actually used a boundary mic in anger before. If it could work, it would be great if you could recommend suitable models.
Via SOS web site
SOS Editor In Chief Paul White replies: I’d definitely be inclined to try a couple of boundary mics fixed under the lid so that you can get a balance between the treble and bass strings. Pretty much any model will work adequately in what you have already recognised as a less than ideal situation, so there’s no point in spending too much money. As long as you can open the lid long enough to fix the mics to the underside of the lid, using double‑sided tape or sticky fixers, you should be able to bring about an improvement. Suitable models cost from around $40 to $300 each, with the inexpensive Audio-Technica ATR9 looking like a good bet. Though this is no longer in production, it may still be available from certain retailers for under $40. Further up the price range, the Beyerdynamic Opus 51 and Beyerdynamic MPC65 would also be suitable. Also, the now discontinued AKG 542 could be a good bet, if you can find it. You will, of course, need a mixer with two spare mic inputs and phantom power to run the microphones, and you may have to experiment with the mic positioning to achieve a reasonable balance between the various strings.
SOS Technical Editor Hugh Robjohns adds: I agree that using one or two boundary mics would be a practical solution, and with careful placement should be capable of a reasonable (if inherently very close) sound quality. Boundary mics are available across a wide price range — www.microphone‑data.com lists 34 current models to choose from — and, in my experience, even the low‑price models can deliver quite usable sound quality.
The biggest problem is likely to be mic overload; a grand piano is a powerful instrument when played enthusiastically, as you’re likely to be doing in a noisy environment, so look for a model with a moderate sensitivity and a high maximum SPL. I’d also recommend choosing one with a flat frequency response: many have a heavy presence boost which will tend to make a piano sound very ‘shouty’.
I second PW’s suggestion of the Beyerdynamic MPC65, and I’ve also had good results with the MBHO 621E with a closed‑lid piano. It really does pay to devote plenty of time to finding the optimum location for the mic (or mics), though, as small changes of position will result in big changes to the sound.
Once you’ve found the best place(s) for the mics, make sure that they’re fixed really securely — but flexibly — to the underside of the lid. The last thing you need is a loud clunk half way through your performance, followed by a ‘honky‑tonk on a firing range’ effect as the mic falls into the strings and bounces around for the rest of the number! On the other hand, you don’t want mechanical vibrations from the piano mechanisms being passed directly into the mic through the mount, either. A soft rubber base or a layer of foam helps a lot here. And be careful with the cables; I’ve known of mic cables being badly pinched, and even completely severed, when the lid was closed again!
An alternative to the boundary mic is the contact mic, which gives stunning separation and may be easier to fit in your case. Again, some experimentation will be necessary to find the best location(s), but I’ve been really impressed with the quality obtained using Schertler DYN‑P (and DYN‑GP) pickups. Not cheap, but worth it in situations where isolation is the priority. The pickups are fitted to the underside of the soundboard using a ‘blu‑tak’‑like putty, which is usually very reliable and might be a lot easier to install than boundary mics inside the lid, in your situation.
Q. What are auxes, sends and returns?
Excuse the simplicity of the question, but I’m always coming across these terms in the magazine, and I don’t know what they are: auxes, buses, sends and returns. Can you explain to me what they are? Are they all part of the same thing or completely unrelated?
Tony Robbins via email
SOS contributor Mike Senior replies: All of these terms are related, in that they are all ways of talking about the routing and processing of audio signals. The word ‘bus’ is probably the best one to start with, because it’s the most general: a bus is the term that describes any kind of audio conduit that allows a selection of different signals to be routed/processed together. You feed the desired signals to the bus, apply processing to the resulting mixed signal (if you want), and then feed the signal on to your choice of destination. If that description seems a bit vague, that’s because buses are very general‑purpose.
For example, it’s common in mixing situations to hear the term ‘mix bus’, which is usually applied to the DAW’s output channel. In this case, all the sounds in your mix are feeding the bus, and it might then have some compression applied to it before the sound is routed to a master recorder or recorded directly to disk within the software. A ‘drums bus’, on the other hand, would tend to refer to a mixer channel that collects together all the drum‑mic signals for overall processing, routing them back to the mix bus alongside all the other instruments in the arrangement. Other buses are much simpler, such as those that can be found on a large‑scale recording mixer, feeding the inputs of the multitrack recorder, or those which carry audio to/from external processing equipment. Some don’t even provide a level control.
An ‘aux’ is just a type of bus that you use to create ‘auxiliary’ mixes alongside that of the main mix bus: each mixer channel will have a level control that sets how much signal is fed to the aux bus in question. What you do with your aux buses is up to you: the most common uses are feeding a cue signal to speakers or headphones, so that performers can hear what they’re doing on stage or during recording; and sending signals to effects processors during mixing. In the latter case, the aux bus that feeds the effects processor is usually referred to as a ‘send’, while the mixer channel that receives the effect processor’s output will usually be called the ‘return’. For more information, check out Paul White’s ‘Plug‑in Plumbing’ feature back in SOS April 2002; you can find it at www.soundonsound.com/sos/feb02/articles/plugins.asp.
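To make the bus, send and return idea concrete, here's a minimal Python sketch (NumPy arrays standing in for audio; the channel names, levels and the fake 'reverb' are purely illustrative, not any particular mixer's routing):

```python
# Minimal sketch (not any particular mixer's API): modelling a mix bus and
# an aux send as weighted sums of channel signals, using NumPy arrays as audio.
import numpy as np

channels = {                      # three mono "channel" signals, illustrative only
    "vocal": np.random.randn(48000) * 0.1,
    "kick":  np.random.randn(48000) * 0.2,
    "snare": np.random.randn(48000) * 0.15,
}

channel_faders = {"vocal": 1.0, "kick": 0.8, "snare": 0.7}   # feed the main mix bus
aux_sends      = {"vocal": 0.5, "kick": 0.0, "snare": 0.3}   # feed a reverb send bus

# A bus is just the sum of whatever is routed to it, at the chosen levels.
mix_bus  = sum(channel_faders[name] * sig for name, sig in channels.items())
send_bus = sum(aux_sends[name]      * sig for name, sig in channels.items())

effect_return = send_bus * 0.4        # stand-in for an effects processor's output
mix_bus = mix_bus + effect_return     # the 'return' is mixed back into the mix bus
```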
Monday, October 29, 2012
Q. What’s the best program for time‑stretching?
Sound Advice : Mixing
I was wondering what the latest and best program is for time-stretching. I purchased Apple Logic 9, but I don’t find that it quite suits my skill level. I use Audacity to change tempo, as I find it very intuitive to use, and it time‑stretches by a decent amount before degradation is noticeable. I am sure, though, that in this day and age there are better programs for this function. Are you aware of any?
Via SOS web site
SOS contributor Mike Senior replies: There are loads of bits of software that will do time‑stretching and tempo‑matching for you and, although I’ve no experience of the facilities in Audacity myself, I’d suspect that the current state‑of‑the‑art technology, commercially, is probably ahead of what is available as open‑source technology. You don’t say what kinds of things you’re trying to stretch, however, and in my experience the performance of any given tool depends a great deal on the type of audio material you feed it with.
Propellerhead Recycle, for instance, is much better than most time‑stretching‑based tempo‑matching software when working on beats, drum loops, and other rhythmic material. Programs like Celemony’s Melodyne or Serato’s Pitch ‘N’ Time, on the other hand, tend to be much better at dealing with melodic phrases or full‑stereo mixes. However, all of these options may well be more complicated to get the best out of than something that’s specifically set up for easy working with beat‑based music: Ableton Live, Apple GarageBand, or Propellerhead Reason, for example.
Saturday, October 27, 2012
Q. How do I read a VU meter correctly?
I have recently invested in the range of UAD2 plug-ins, but I’m afraid I’m not sure how to read the VU meters correctly. I am fine with the VU meter showing gain reduction on a compressor, but when it comes to the output reading — for example, +4dB or +10dB on the same compressor — I am not sure what I should be aiming for, output-wise. Am I right in thinking that the nominal operating level should be averaging at around 0VU? Also, I think I need to catch up on my dBu and my dBFS. If my aim is to have the average level around or slightly above 0VU, I take it that going into the red is OK, as long as the average level is around 0VU. I think I remember reading that VU meters didn’t respond to high transients very well, hence going into the red, so that would make sense.
Via SOS web site
SOS Technical Editor Hugh Robjohns replies: The VU meter (and the PPM) are analogue tools designed for the analogue world. They indicate signal levels around the nominal operating level and they don’t show the headroom margin at all.
0VU is the nominal operating level and, in the analogue world, that is usually (but not always) +4dBu. Most decent analogue equipment clips at about +24dBu. This means that when signals are averaging around the 0VU point there is about 20dB of headroom to capture the fast transient peaks that the meter can’t show.
Digital peak meters, in contrast, do show (most) transient peaks and do show the headroom margin. The clipping point is always at 0dBFS, and so, if you build in the same kind of headroom margin in a digital system as we’ve always enjoyed in the analogue world, you need to average the signal level at around -20dBFS, at least while recording (tracking) and mixing, with transient peaks kicking up to about -6dBFS occasionally.
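As a quick sketch of the arithmetic, assuming the +24dBu analogue clipping point mentioned above is aligned with 0dBFS (real-world alignments vary, with +18, +20 and +24dBu all in common use):

```python
# Sketch of the headroom arithmetic, assuming +24 dBu aligns with 0 dBFS.
# (This alignment is an assumption; different broadcast/studio standards differ.)
ANALOGUE_CLIP_DBU = 24.0      # typical analogue clipping point quoted above
NOMINAL_DBU = 4.0             # 0 VU = +4 dBu nominal operating level

def dbu_to_dbfs(level_dbu: float, clip_dbu: float = ANALOGUE_CLIP_DBU) -> float:
    """Map an analogue level in dBu to dBFS for a given alignment."""
    return level_dbu - clip_dbu

print(dbu_to_dbfs(NOMINAL_DBU))        # -20.0 -> average around -20 dBFS while tracking
print(dbu_to_dbfs(ANALOGUE_CLIP_DBU))  #   0.0 -> the analogue clip point maps to 0 dBFS
```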
It has become standard practice to remove the headroom margin when it is no longer required, after final post-production and mastering of the final mix, which is why commercial music averages about -12dBFS or so and peaks to 0dBFS.
Friday, October 26, 2012
So Mr. Bond... Who really did write your theme music?
As the new James Bond movie Skyfall opens, it seems an appropriate time to revisit the question of whether the 'James Bond Theme' was actually written by Monty Norman or John Barry.
By David Mellor, Course Director of Audio Masterclass
The answer to this question is very simple - Monty Norman wrote the James Bond Theme, and John Barry was merely the arranger. I say this with certainty because this is what was decided in a court of law in London in 2001. Not wishing to be sued for libel I absolutely and categorically state that Monty Norman wrote the James Bond Theme and what follows in this article is all from my warped and twisted, and not-to-be-relied-upon, imagination.
This is a topic I have followed on and off over the years. What it boils down to is this...
Monty Norman was hired to write the music for the first James Bond film, Dr. No. Allegedly, the music he wrote for the theme was found unsatisfactory by the producers and John Barry was brought in to improve it.
Over the years a dispute arose whether the James Bond Theme was written in its entirety by Monty Norman, or almost entirely written by John Barry (who claimed he used nothing of Norman's work but the first two bars).
Composer vs. arranger
Where this has relevance to modern-day music is the division of roles between composer and arranger. Normally the composer of a piece of music would receive a royalty on performances and recordings (as well as perhaps an upfront fee); the arranger would only receive a fee.
There is potentially a lot of money at stake here. A composer could, for instance, write a few bars of melody, which an arranger turns into a fantastic piece of music that then goes on to earn hundreds of thousands of pounds or dollars in royalties. Does the composer's original few bars entitle him or her to ALL of the royalties? Legally speaking, yes it does. It can however be decided, either by agreement or later litigation, that the arranger did actually contribute to a degree that is worthy of a royalty payment. The recent Procol Harum case is an example.
For the James Bond Theme, it seems that John Barry was brought in to tweak up Monty Norman's sketches for a flat fee of £250. If the film was successful, then Barry would be engaged as composer for the next James Bond film, From Russia With Love, which he was.
The evidence
Monty Norman's contribution to the James Bond Theme can be heard in a song called Bad Sign, Good Sign (from an earlier musical by Norman that didn't take off) and Dr. No's Fantasy, which was not used in the film but appears on the soundtrack album. (There is also a track called The James Bond Theme on the soundtrack album - notice 'The' as part of the title. This is clearly related to the James Bond Theme under discussion, but the musical essence is already there in Bad Sign, Good Sign, which was in existence before John Barry's involvement.)
You can hear Bad Sign, Good Sign at Monty Norman's website. It is 5th from the bottom of the track listing. You may notice that Bad Sign, Good Sign is a modern recording. You can hear a clip of Dr. No's Fantasy on YouTube, and elsewhere on the Internet.
What we can hear in these two tracks is what most people would recognize as some of the music that characterizes the James Bond movies. But clearly it is not the entire James Bond Theme.
John Barry's arrangement
Before discussing John Barry's arrangement of the James Bond Theme, it is worth establishing a frame of reference. Here is a track called Bee's Knees by The John Barry Seven, released in 1958...
In the court hearing, the prosecution called musicologist Stanley Sadie who analyzed the work thus (numbers correspond to bars)...
| Bars | Section |
|---|---|
| 1-4 | Vamp |
| 5-10 | Guitar riff |
| 11-12 | Semitone descent |
| 13-20 | Repeat of riff |
| 21-24 | Repeat of vamp |
| 25-28 | Bebop 1 |
| 29-32 | Bebop 1 repeat |
| 33-40 | Repeat of 25-32 |
| 41-42 | Bebop 2, melody related to riff |
| 43-44 | Repeat of bebop 2 |
| 45-46 | Climax to bebop 2 |
| 47-48 | Vamp |
| 49-56 | Riff |
| 57-60 | Coda related to 25-28 |
Bebop 2, I would say, is an arranger's masterstroke. OK, Norman wrote the riff, but to transform it like this, picking out the most significant notes and making something new yet already familiar, is a stroke of genius, and as good an argument as any for why arrangers ever deserve to receive a royalty.
Stanley Sadie held that bars 11-12 relate to two guitar chords in the middle section of Dr. No's Fantasy. I'll let you be the judge of that.
As for the characteristic vamp at the beginning that sets the 'Bondy' tone of the whole thing... Well, Norman's Dr. No's Fantasy incorporates a vamp. But it isn't the James Bond vamp. Listen to this...
It's Nightmare, by Artie Shaw. He probably wasn't the first to use this vamp either. It's been around for so many years I doubt if anyone, living or dead, could realistically claim copyright to it.
So who did write the James Bond Theme?
Simple - Monty Norman wrote it, as I've said all along. But in my warped and twisted imagination he didn't write the vamp, he didn't write the first bebop section, and John Barry's masterstroke surely trumps anything else one could say about the second section of bebop. And the ending, and that wonderful chord - surely all John Barry's work. And the orchestration!
But in truth, Monty Norman wrote the James Bond Theme all the way from beginning to end and John Barry merely arranged it. That was decided in a court of law, therefore it is true.
P.S. I can't end without a special mention for guitarist Vic Flick without whom James Bond just wouldn't be the same. He was paid £7.50 for his work.
P.P.S. Comments on this article could easily be a legal minefield. I will state clearly and categorically that any comment below that disagrees with the court's verdict is wrong.
Publication date: Wednesday October 24, 2012
Author: David Mellor, Course Director of Audio Masterclass
Thursday, October 25, 2012
Why your voice-over recordings need to be FULLY professional
Voice over recording can be very lucrative. But only if your voice talent AND your recording techniques are of the highest standard. So what are the potential problems?
By David Mellor, Course Director of Audio Masterclass
'Voice over', 'voice-over' or 'voiceover' – whichever you prefer to call it, and all forms are in common use – refers to a recording of speech where a voice that is not part of the narrative is used in a radio or television production, filmmaking, theatre, or other presentation.
There is a lot of money to be made in voice work, both for voice artists (also called voice over artists) and studios that specialize in the field. Consider TV advertising for instance, which is enormously expensive due to the intense competition for slots from top brands. Everything about TV commercial production is expensive. It would make no sense not to use the very best voice talent, or the very best voice over studio. And getting the best people on your team costs money – a lot of it.
Suppose however that you have a good speaking voice and you think that you can match the voice talent you hear on TV. Why not just buy a decent microphone and do it at home? Let's suppose that you really do have the talent, and the quality of your recording is the only potential issue standing between you and huge fees for your work.
Well firstly, you shouldn't be recording at home. If you have the talent then you should be looking for an agent to take you on and promote you to people who are hiring. You'll need to move to a location where voice work is plentiful, which would normally be close to where the big-time advertising agencies are sited. But maybe you live in Kansas (or Oxfordshire) and don't want to move. Recording yourself at home will be the only option…
Real life test
Let me skip forward to my recent experience with voice work. In my time as a writer on all things audio I was able to visit some of the top voice studios in London, so I know well how seriously they take things, how much they cost, and the quality of work they produce.
However for my recent voice project I couldn't justify the kind of budget that would entail. I needed good quality work, but at a fairly low cost. The natural place to look of course is on the Internet, and there are several sites where voice talents can demo their abilities and offer themselves for work.
I chose a site and posted my project. Just 200 words, which I wanted cleanly recorded by a female with a North American accent. I supplied a few sentences that would serve as an audition piece, and awaited responses.
The responses came in very quickly, twenty-three of them to be precise. I thought it would be a tough job to plough through them all. In actual fact it proved quite easy, because there were only two candidates with the kind of delivery I liked. This kind of thing is very subjective and it doesn't mean the other twenty-one were bad. Just that two of the auditionees had the delivery that I felt was suitable for my project. None of them could have been described as unprofessional in any way concerning their voice.
But the audio quality… Now that was another issue entirely! I have to say that the audio quality varied from just about acceptable to totally dreadful.
Noise
The main problem was noise. Noise in a voice recording should be inaudible when the recording is played at a normal level. Pro voice studios work to that standard, so that is the standard. The noise in most of the examples sounded like computer fan noise. This can be dealt with by using a quiet computer, or by placing the computer outside of the recording room, controlled by someone else, or via a KVM extender.
Excessive ambience
Another common problem was excessive ambience. In voice work, ambience should be barely perceptible, aiming at a completely dry quality.
Popping and blasting
Less common than I might have expected were popping and blasting, but even some of the best recordings had spots that verged on pops or blasts. Clearly, the closer you get to the microphone, the less noise and ambience there will be. But if this is at the expense of popping and blasting then the result will not be satisfactory.
Level
Now… level. This is one of my regular bugbears. Let me explain it like this… Suppose a client receives twenty audition recordings. Nineteen are at a good healthy level and one is at a low level. Will he pay special attention to the low-level recording? Will he heck. He'll just move on to the next. Submitting low-level work to a client is a surefire way of getting rejected. No, not rejected – not even considered. This screen shot of all of the auditions end-to-end says it all…
I'm all in favor of allowing plenty of headroom when recording. But the level of a finished piece of work should not be low. As a guideline, there should be a peak above -2 dBFS somewhere in the piece.
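If you want to check your own files against that guideline before sending them off, a few lines of Python will do it. This is only a sketch: it assumes the third-party soundfile library is installed, and the filename is purely illustrative.

```python
# Quick check of a finished file against the -2 dBFS peak guideline.
# Assumes the 'soundfile' library is available; the filename is an example.
import numpy as np
import soundfile as sf

audio, sample_rate = sf.read("audition_take.wav")   # floats in the range -1.0..1.0
peak = np.max(np.abs(audio))
peak_dbfs = 20 * np.log10(peak) if peak > 0 else float("-inf")

print(f"Peak level: {peak_dbfs:.1f} dBFS")
if peak_dbfs < -2.0:
    print("Below the -2 dBFS guideline: consider normalising before submitting.")
```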
Sibilance
One of the recordings was very sibilant, but otherwise I didn't feel that this was too much of a problem. You can quite often hear sibilance even in very high-level professional work, so I think it's something that we have become accustomed to, in the same way as we accept the bass boost from a directional microphone used close-to, even if it isn't really natural.
Naturalness
The last in my list of problems is a little subjective, but it's what I think of as 'a good sound for radio'. If you compare the sound of a well-recorded audio book with that of a prime-time radio presenter, you will see what I mean. An audio book needs a natural sound for comfortable, attentive listening over a long period of time. A radio station needs a sound that gets the audience excited. I'm happy with a natural sound or a good sound for radio, but there is to my mind an 'excessively good sound for radio', if you see what I mean, and that's not what I want.
In conclusion, my feeling is that all twenty-three of my auditionees potentially have the vocal ability to work at a very high level. However, many of them are letting themselves down through audio problems. At the end of the day, it's the person who can deliver the best work in all respects who will please the client most. Clients don't want problems, they want easy solutions.
Publication date: Saturday October 06, 2012
Author: David Mellor, Course Director of Audio Masterclass
Q. Why do I need to use a DI box?
Sound Advice : Recording
I’ve been reading a fair bit about the best way to directly connect instruments to a PA recently, and I must admit I’m still a bit confused. My first question — hopefully the simple one — is: why is it recommended that an instrument (say, a keyboard) is connected to a DI box, which changes the signal to low‑impedance/mic‑level, and then sent to the mixer, where it goes through a preamp to end up as a line‑level signal again? It seems that it would be simpler to send the line‑level signal and plug it into an insert on a channel, rather than a mic input. My second question is a bit more vague and has to do with connecting other instruments, such as a harp with a transducer and an electric violin. These obviously aren’t microphone or line‑level signals and I’m not sure how to treat them. I have been advised to use an LR Baggs Para DI for the harp, which appears to be a preamp that then cuts the signal back down to mic level.
Via SOS web site
SOS Technical Editor Hugh Robjohns replies: Both arrangements will work but, unless the cables’ lengths are very short, the DI route will usually provide better quality, despite the apparent illogicality!
Firstly, the fact is that all cables are capacitive, and that capacitance reacts with the source and destination impedances to form a low‑pass filter. The higher the impedances and the longer the cable, the worse that gets, curtailing the high end. So working with a low source impedance and relatively low microphone input impedance means you can pass signal over extremely long cables without problems.
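As a rough sketch of that first point, you can treat the source impedance and the total cable capacitance as a simple RC low-pass filter and work out the corner frequency. The 100pF-per-metre figure below is a typical assumption, not a value from the article.

```python
# Rough sketch of why source impedance and cable length matter: treat the
# source impedance and total cable capacitance as a simple RC low-pass filter.
# The ~100 pF/m figure is an assumed typical value for screened cable.
import math

CABLE_CAPACITANCE_PER_M = 100e-12   # farads per metre (assumption)

def cutoff_hz(source_impedance_ohms: float, cable_length_m: float) -> float:
    c_total = CABLE_CAPACITANCE_PER_M * cable_length_m
    return 1.0 / (2 * math.pi * source_impedance_ohms * c_total)

print(cutoff_hz(10_000, 20))   # ~8 kHz: a high-impedance source over 20 m sounds dull
print(cutoff_hz(150, 20))      # ~530 kHz: a DI'd low-impedance source is unaffected
```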
Secondly, sending a balanced signal to a differential input means that RF and EM interference breaking into the cable can be largely rejected, which is very handy in a hostile and unpredictable environment in which there will be lighting interference and who knows what else. Mic signals are generally balanced, whereas instrument line signals are not.
Thirdly, most PA systems are set up with a mic‑level snake from stage to mixer, and it’s just a lot more convenient, and faster, to rig to work entirely with mic‑level signals rather than a mix of mic and line.
Finally, the balancing transformer in the DI box also provides galvanic isolation between stage equipment and PA equipment, helping to avoid ground‑loop problems and potential electrical safety issues under fault conditions.
As for your second question, regarding connecting more unusual instruments, these are generally fitted with piezo pickups or contact mics, similar to many acoustic guitars with fitted pickups. The output from the control or interface box will usually be ‘instrument level’, much the same as a guitar and will require a DI box again.
A decent active DI will set you back about £100 in the UK, but many people baulk at that when they see generic ‘active DI’ boxes going for £20 or so. However, the difference in sound quality is often very significant, and in my experience the better boxes are built to last. If you amortise the cost of a decent box over 10 years or more, it only costs £10 a year, and that’s peanuts compared to your mics and other gear.
As for recommendations, I’m a fan of the Radial J48 and the BSS AR113, but the Canford Audio Active DI box (originally designed and marketed by Technical Projects) is also excellent and remarkably versatile. The Klark Teknik DN100 is another strong contender.
Wednesday, October 24, 2012
Q. Can you explain what a submix is?
Sound Advice : Mixing
I’ve come across the term ‘submix’ a few times recently. I can guess at what it means, but would like to know for sure. Can you explain?
Tony Quayle via email
SOS Reviews Editor Matt Houghton replies: A submix is simply mixing tracks down to ‘stems’, or sending them to group buses. For example, you can route all your separate drum mics to a group bus so that you can process them together. You’d call that your drum bus, and if you bounced that down to a stereo file, that would be a drum submix. Using buses in this way is very common indeed, whether for drums, backing vocals, guitars or whatever, because it means that you can easily gain control over a large, unwieldy mix with only a few faders.
These days, there’s rather less call for submixes, particularly now that you have the full recall of a DAW project. However, they can still be useful in a few situations, such as providing material to remixers, or allowing you to perform ‘vocal up’ and ‘vocal down’ mixes if you’re asked to. Bear in mind, though, that if you’re using any processing on your master bus (for example, mix compression), you can’t simply bounce each group down on its own and expect to add them all back together to create your mix; the bus compressor will react according to the input signal. You’d have to bypass bus processing when bouncing the submix, and re-do any such processing when summing the submixes back together.
Tuesday, October 23, 2012
Achieving the 'mastered sound' while keeping a wide dynamic range
The mastered sound is very popular these days. But does it always have to come at the expense of dynamic range?
By David Mellor, Course Director of Audio Masterclass
I have been reading some interesting material on dynamic range recently. Well, it was 'Dynamic Range Day' on March 16, so it seems appropriate.
My point of view is that I hate the over-mastered sound as much as anyone else... Except for the people in record label A&R departments who decide what we are allowed to hear. They all seem to think that louder equals better. It does up to a certain point. But many people in the industry feel that current releases go far beyond the limits of acceptability.
But when I say that I hate the over-mastered sound, it doesn't mean that I hate the mastered sound. No, in fact I love to hear mastering tastefully done. It can turn a good mix into a powerful one, improving both frequency balance and the overall impact of the sound.
Does mastering always have to reduce dynamic range though? We are often led to believe that it does, but in fact it doesn't have to.
Dynamic range defined
One common definition of dynamic range is the difference between the peak and the RMS level of a signal. Since the peak level in a commercial release is always at full scale, the mastering process will level out the peaks and then bring up the RMS level, thus reducing the dynamic range.
If the difference is less than 12 decibels, then the music will start to suffer. Less than 8 dB and the sound will be aggressive and harsh. 14 dB (or DR14) is thought to be a reasonable difference to aim for, to preserve dynamic range.
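Here is that peak-minus-RMS measurement as a short Python sketch. It is a simplified illustration, not the official 'DR meter' algorithm:

```python
# Simplified peak-vs-RMS measurement (not the full official "DR meter" algorithm):
# dynamic range here is just peak level minus RMS level, both in dB.
import numpy as np

def dynamic_range_db(samples: np.ndarray) -> float:
    peak = np.max(np.abs(samples))
    rms = np.sqrt(np.mean(samples ** 2))
    return 20 * np.log10(peak / rms)

# Example: a plain sine wave has a peak-to-RMS ratio of sqrt(2), i.e. about 3 dB,
# so heavily limited material tends toward small values like this.
t = np.linspace(0, 1, 44100, endpoint=False)
print(dynamic_range_db(np.sin(2 * np.pi * 440 * t)))   # ~3.0
```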
But...
You could look at dynamic range from a more musical point of view. Suppose for example that the RMS level of a song was -8 dB during a loud section. The peaks would be at 0 dBFS so this would represent DR8 and probably sound rather harsh.
But maybe it's meant to sound harsh - it's a loud section of the song. Maybe the song has a quieter section where the RMS level is around -20 dB (the peaks in this section would probably be lower than 0 dBFS).
Musically speaking, it would be reasonable to say that this song has a dynamic range of 12 dB when comparing the loud section and the quiet section.
Split mastering
So here's a thought...
What about mastering the loud sections and quiet sections of this song separately?
The loud sections would be mastered in the conventional way. The quiet sections could be mastered in a similar way, but the peak levels held down to -12 dBFS and the RMS levels possibly 10 or 12 dB below that.
Both the loud and quiet sections can now have a similar mastered sound, but in musical terms there is indeed dynamic range... 12 or more decibels of it in fact, comparing either the peaks of the loud sections to the peaks of the quiets, or the RMS levels of the loud sections to the RMS levels of the quiets.
Food for thought
There is a little bit of food for thought here. Normally an entire mix is mastered with the same parameters. But if a song varies in level as performed, then there is a case for varying the mastering parameters as the song progresses. It could combine the best features of mastering done well, with the louds and quiets that modern music often so desperately lacks.
Publication date: Sunday March 18, 2012
Author: David Mellor, Course Director of Audio Masterclass
Q: What is the difference between a balanced and an unbalanced signal?
Could you please tell me when I should use a balanced cable and when I should use an unbalanced cable? Does it really make all that much difference?
By David Mellor, Course Director of Audio Masterclass
To send an electrical signal, you need to have a complete circuit so that the electricity makes a full 'round trip'.
So imagine the output signal from an electric guitar. This connects inside the guitar to the tip of the jack plug. The signal goes all the way down the center conductor of the cable to the input of the amplifier. The amplifier takes what it needs from the signal and passes it to the screen of the cable going all the way back to the guitar. The circuit is completed in the guitar's pickup.
This is an unbalanced signal. In general, the screen of the cable is connected to earth. If there is no connection to earth, as in battery-operated equipment, then the metal case of the equipment takes on that role. If there is no metal case or chassis, then one point inside the equipment will take the role of earth and everything that needs earthing will be connected to that.
In an unbalanced signal, the earthed screen of the cable is there to protect the signal from interference. Any interference that gets into the screen is shorted out to earth, keeping the signal clean. However some interference may still get through and you will hear it, as it is inextricably bound into the signal.
Now suppose you have a cable with an extra conductor, exactly parallel to the signal conductor. Better still, have them twisted around each other so they are cuddling as close together as possible. This conductor will pick up interference in exactly the same way as the signal conductor. Now all you have to do is invert the interference picked up by this new conductor, add it to the original signal+interference, and the interference will magically cancel. That's what happens in a balanced connection.
It's worth noting that this only works properly when the output impedance of the sending equipment and the input impedance of the receiving equipment are equal for both conductors. It's the job of the equipment manufacturer to get that right.
It is also worth noting that the second conductor doesn't need to carry a signal for this to work. It is handy, however, to put an inverted version of the signal on the second conductor: when the receiver inverts it again, it adds to the original signal, reinforcing it. This is indeed what is normally done.
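A small numerical sketch of that cancellation: the same interference appears on both conductors, the wanted signal appears inverted on the second, and the receiving equipment takes the difference between the two.

```python
# Numerical sketch of balanced-line rejection: identical interference on both
# conductors, signal inverted on the 'cold' leg, receiver takes the difference.
import numpy as np

t = np.linspace(0, 0.01, 480, endpoint=False)
signal = np.sin(2 * np.pi * 1000 * t)            # wanted 1 kHz signal
interference = 0.3 * np.sin(2 * np.pi * 50 * t)  # mains hum picked up by the cable

hot = signal + interference        # conductor 1: signal plus interference
cold = -signal + interference      # conductor 2: inverted signal plus interference

received = hot - cold              # differential input: interference cancels
print(np.allclose(received, 2 * signal))   # True -> signal doubled, hum gone
```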
Balancing is a brilliantly simple way of guarding against interference.
To gain the advantages of balancing, you must use equipment that has balanced outputs and inputs. You must use cables that have two conductors and a screen. You can balance an unbalanced signal by connecting it to a DI (direct injection) box.
In general you can get away with unbalanced connections in the studio where conditions are controlled and signal paths are short. Balanced connections however are a distinct advantage in live sound and outside broadcast where cable runs can be very much longer, and interference-producing lighting equipment is used.
In short, if you are not experiencing problems with interference, you don't need to worry about balanced connections. If you do have problems with interference, then balancing will be a sensible step towards a solution.
P.S. This description refers to electronically balanced connections. Balanced connections can also be made using transformers.
Publication date: Thursday June 17, 2010
Author: David Mellor, Course Director of Audio Masterclass
Monday, October 22, 2012
What is the Low-Z button for on the Golden Age Pre-73 microphone preamplifier?
Ask a number of preamp users what each of the controls on the Golden Age Pre-73 does, and the 'Low-Z' control will probably cause the most scratching of heads. So what is it for?
By David Mellor, Course Director of Audio Masterclass
In electronics, 'Z' stands for impedance, which is measured in ohms. When the Low-Z switch is out, the input impedance of the Golden Age Pre-73 is 1200 ohms. When the switch is in, the input impedance is a significantly lower 300 ohms.
So what difference is this going to make?
Let's take as an example the classic Shure SM57 microphone. This has a specified output impedance of 310 ohms, which for convenience I'll round down to 300 ohms.
You can think of the signal chain like this...
The signal from the capsule of the microphone goes through one 300 ohm resistor to the output. The signal then sees another 300 ohm resistor in the preamp connected to ground. So the signal flows through 600 ohms to ground, and the preamp is connected at the halfway point.
What we have here is a simple voltage divider. Since the resistances are equal, the voltage of the signal is halved at the mid-point. So the preamp only receives half of the output voltage of the microphone. This is a drop of 6 decibels.
Now this is not entirely bad because other factors are in play that optimize the current the preamp receives, and in fact by having equal output and input impedance, the maximum amount of power is transferred, which should result in the best signal-to-noise performance.
Frequency response problem
However, there is a problem...
It is likely that the output impedance of the microphone will vary with frequency. Suppose for example that you had a microphone with a specified output impedance of 300 ohms, but in actual fact it was 300 ohms at 1 kHz and 900 ohms at 10 kHz. Where the drop in level at 1 kHz is 6 dB, the drop in level at 10 kHz is now 12 dB.
What we have therefore is a frequency response that drops by 6 dB from 1 kHz to 10 kHz. This mic will sound dull.
If you do the math with a preamp that has an input impedance of 1200 ohms you will find that the drop in level at all frequencies is less, and the difference in the drop between 1 kHz and 10 kHz is less too. In other words, the frequency response is flatter.
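If you'd rather not do the math by hand, here is the voltage-divider arithmetic from the example as a short sketch, using the figures quoted above:

```python
# The voltage-divider arithmetic from the example: level loss (in dB) when a
# given source impedance drives a given preamp input impedance.
import math

def loading_loss_db(source_ohms: float, input_ohms: float) -> float:
    ratio = input_ohms / (source_ohms + input_ohms)
    return 20 * math.log10(ratio)

# SM57-style source into the Pre-73's two input settings:
print(loading_loss_db(300, 300))    #  -6.0 dB at 1 kHz with the Low-Z (300 ohm) input
print(loading_loss_db(900, 300))    # -12.0 dB at 10 kHz if the mic rises to 900 ohms
print(loading_loss_db(300, 1200))   #  -1.9 dB at 1 kHz with the 1200 ohm input
print(loading_loss_db(900, 1200))   #  -4.9 dB at 10 kHz: a flatter overall response
```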
What you can expect from the Low-Z switch therefore is that when switched out, the frequency response is flatter than when switched in. Most likely you will hear less high frequency content when switched in, but the change will depend on the impedance characteristics of the individual model of microphone. If the preamplifier has been designed to take full advantage of the Low-Z mode, then there should also be a little less noise, but you'll have to listen quite hard to notice that.
In summary, the High-Z position will generally be the most accurate setting, but if you prefer the sound of the Low-Z button switched in, then use it. In the end, it all comes down to subjective taste.
By the way, some mics have a higher-than normal output impedance. For example the CAD Trion 7000 is rated at 940 ohms. You would expect the Low-Z switch to have more of an effect here. It is often thought that the best compromise for preamp input impedance is that it should be around five times the output impedance of the mic, so according to this principle you would need a preamp with an input impedance of around 5000 ohms for optimum results. However, pairing the Trion 7000 with the Pre-73 set to 1200 ohms might give you more of the 'darker' ribbon character. While this would probably not be the best choice as your only mic/preamp combination, if you already have more conventional mics available then it is always worth having another option in your palette of sounds.
Conversely, if you had a mic with an output impedance of just 60 ohms, then the Pre-73's Low-Z setting would be exactly right.
Publication date: Friday October 19, 2012
Author: David Mellor, Course Director of Audio Masterclass
The difference between DAW filters and synth filters
Filters are useful tools in a DAW, and they are essential in a synthesizer. But how are they different?
By David Mellor, Course Director of Audio Masterclass
I'll consider the low-pass filter as it is the most useful type in subtractive synthesis. A low-pass filter cuts high frequencies (HF) and allows low frequencies (LF) to pass unhindered. The user can set a cut-off frequency where the transition from LF to HF occurs.
In a standard DAW plug-in filter the slope of the filter is normally controllable and can be set to 6 dB/octave, 12 dB/octave, 18 dB/octave or 24 dB/octave. The greater the slope, the stronger the effect of the filter.
The cut-off frequency of a synth filter is similarly controllable. Since higher slopes are more useful in synthesis, sometimes the slope is set at a fixed high value.
So far so good. The two filters seem quite similar. So where are the differences?
Well firstly, a synth filter usually has a control for resonance. With this, a narrow band of frequencies just below the cut-off frequency can be boosted, often by a considerable amount. There isn't much need for this in a DAW, but in sound creation it can make a big difference. Oddly enough, the standard filter in Logic has a Q control that does the same thing. A high Q setting is illustrated above. In DAW operation this is most useful for a sound that doesn't change in pitch, such as a kick drum.
Perhaps the biggest difference between DAW filters and synth filters is that in a synth, the cut-off frequency changes as you play different notes. This is so that every note has subjectively the same sonic quality. Without this tracking, some notes are heavily filtered while others are hardly touched and remain 'buzzy'. There isn't a great deal of relevance for this in normal audio operation.
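As a purely illustrative sketch of what that tracking means (not any particular synth's implementation), with 100% key tracking the cutoff rises one octave for every octave you play up the keyboard:

```python
# Illustrative key tracking of a filter cutoff (an assumed simple model):
# with 100% tracking, the cutoff moves up one octave for every octave played.
def tracked_cutoff_hz(base_cutoff_hz: float, midi_note: int,
                      reference_note: int = 60, tracking: float = 1.0) -> float:
    octaves_from_reference = (midi_note - reference_note) / 12.0
    return base_cutoff_hz * 2.0 ** (octaves_from_reference * tracking)

print(tracked_cutoff_hz(1000, 60))                  # 1000.0 Hz at middle C
print(tracked_cutoff_hz(1000, 72))                  # 2000.0 Hz an octave up: same brightness
print(tracked_cutoff_hz(1000, 72, tracking=0.0))    # 1000.0 Hz with tracking off: note stays 'buzzy'
```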
Although DAW filters and synth filters have similar jobs to do, they are adapted to their individual functions.
One thing that is worth trying however is to put an audio signal through a synth filter. There is considerable potential for creativity here and I strongly recommend trying it if you get the chance.
Publication date: Friday October 28, 2011
Author: David Mellor, Course Director of Audio Masterclass
Saturday, October 20, 2012
Q. Which MIDI velocity curve should I use with my controller keyboard?
I’ve just bought a new MIDI controller keyboard that has a selection of velocity curves. How should I go about choosing which one to use, and why is this necessary?
Philip McKay via email
SOS contributor Martin Walker replies: Some keyboardists play harder than others, while keyboard controllers themselves can vary a great deal in their mechanical resistance, action and feel. If you come from a synth background, a weighted, hammer-action keyboard may feel very heavy and ponderous to play while, conversely, if you’re used to playing acoustic pianos, a lightweight, synth-action keyboard may feel lifeless. However, the ultimate goal is always the same.
MIDI supports 128 different velocity values (from zero to 127) and, whichever velocity-sensitive keyboard you choose, it should let each player generate this complete range of values smoothly as they dig into the keys, from soft to hard. This is the reason why most keyboards offer a selection of velocity curves.
Many modern sample libraries feature eight, 16 or even 32 velocity layers per note, and if your keyboard doesn’t let you generate the full range of MIDI velocity values you may never hear some of these layers. This, in turn, means that your sounds may lack expression or sound dull or harsh, or it might mean that you never hear special effects programmed for high velocity values only, such as piano hammer noise, guitar harmonics or bass slaps.
It’s generally best to start by trying the linear velocity curve that generates smoothly increasing velocity values as you play harder (see graph above). Some makes and models of controller keyboard do manage to do this over the full range but, in my experience, many don’t generate any velocity over about 110 unless you hammer the keys really hard. The different curves stretch one or more velocity areas across the mechanical range. Don’t get too hung up on the shapes themselves; it’s more important to just play and see what velocity values you can generate.
You can choose the most expressive velocity curve by simply playing a favourite sampled instrument, such as a piano, but this can prove a tedious process. You may achieve the perfect response with ‘loud’ notes only to find that the soft notes now play too loud, or vice versa, or you may find that you only have the perfect response for that one instrument. It’s better to be a little more systematic and monitor the MIDI velocity values themselves as you play, to check that you can move smoothly across the entire range. There are plenty of visual options for this purpose, including various sequencers that display incoming MIDI velocity as a level meter, or software utilities such as MIDIOX (see www.midiox.com for details).
Once you’ve chosen the most suitable preset curve for your playing style, a one-off bit of final tweaking may make your keyboard playing even more expressive. For instance, my main controller keyboard smoothly generates MIDI velocities from 0 to 110, but struggles above this, so I just convert this input range to an output range of 0 to 127 using the MIDIOX Data Mapping function or a MIDI velocity-curve changer (see the one at www.trombettworks.com/velocity.php).
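That kind of remapping is just a linear rescale, clamped to the legal MIDI range. Here is a minimal sketch of the idea (the 0-110 input range is the example from my own keyboard; yours will differ):

```python
# Minimal velocity remap: linearly rescale an input range (here 0-110) to the
# full 0-127 MIDI range, clamping so values never leave the legal span.
def remap_velocity(velocity: int, in_max: int = 110, out_max: int = 127) -> int:
    scaled = round(velocity * out_max / in_max)
    return max(0, min(127, scaled))

print(remap_velocity(110))   # 127: the hardest note this keyboard produces
print(remap_velocity(55))    # 64: the middle of the range stays near the middle
print(remap_velocity(0))     # 0
```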
Most sequencers, and even some hardware/software synths, let you tweak incoming velocity values in this way, either using MIDI plug-ins, such as VelocityCurveSM (see www.platinumears.com/freeplugins.html for more information), or specialised built-in functions, such as the Cubase MIDI Input Transformer. For a ‘plug in and forget’ hardware solution, you can buy a small box, such as MIDI Solutions’ Velocity Converter (found at www.midisolutions.com/prodvel.htm), which is MIDI-powered and offers 40 preset curves, plus a user-defined one.
Some keyboards also include one or more ‘fixed’ velocity options that always generate the same MIDI velocity however soft or hard you play. These can be useful for playing sampled instruments with no velocity sensitivity, such as organs, and for step-recording drum parts or simple synth tracks. A setting that always generates MIDI velocity 127 can also be invaluable for sound designers who need to ensure that their presets will never distort.
Friday, October 19, 2012
What is a pad? What is it used for?
In which situations would you use the -10 dB or the -20 dB pad? Recording live music?
By David Mellor, Course Director of Audio Masterclass
The word 'pad' in audio is derived from Passive Attenuation Device. 'Passive' refers to an electronic circuit that requires no power to operate. 'Attenuation' means making the level of the signal smaller.
'Device' means that it was invented by someone who was extremely clever!
You will commonly find a switchable pad in a capacitor microphone, and also in a microphone preamplifier. There isn't much use for pads anywhere else in the audio signal chain.
The value of a pad is in its passive nature. This means that it can accept any signal level without distortion, right up to the point where the circuit components burn out (which would be a very high level indeed).
A capacitor microphone contains an internal amplifier, which is an active (not passive) device. All active circuits have an upper limit on the level of the signal they can handle correctly. If the signal attempts to go above this level, it will be clipped at peak level until it drops back down again. This causes very serious distortion.
So if a capacitor microphone is exposed to sound of very high level, the internal amplifier might clip. To prevent this, a pad can be switched in that comes before the amplifier, lowering the signal level before it can cause clipping.
The same can happen in a microphone preamplifier. If the input signal is very high in level, the very first input stage can clip. Once again, if a pad is switched in before the first active stage, clipping can be prevented.
So if the sound level is very high, the pad in the microphone should be switched in. If this is switched in, then the pad in the preamp will probably not be necessary, but it's there just in case.
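The attenuation itself is just a fixed voltage ratio applied ahead of the first active stage; as a quick sketch, a dB figure converts to a voltage scaling like this:

```python
# What a -10 dB or -20 dB pad does to the signal voltage before the first
# active stage: a fixed scaling of 10^(dB/20).
def pad_gain(pad_db: float) -> float:
    return 10 ** (pad_db / 20.0)

print(pad_gain(-10))   # ~0.316: the signal voltage is cut to about a third
print(pad_gain(-20))   # 0.1: the signal voltage is cut to a tenth
```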
Pads can be used in both studio and live recording.
Publication date: Wednesday July 14, 2010
Author: David Mellor, Course Director of Audio Masterclass
Thursday, October 18, 2012
Q. Why is 88.2kHz the best sample rate for recording?
I have read that the optimum sample rate to record at is 88.2kHz. The reasons include simple integer-ratio sample-rate conversion, avoiding the phase shifts and ringing of anti-alias filtering at 20kHz, and less data to move about compared to 176.4kHz. Is there any truth in these assertions?
Via SOS web site
SOS Technical Editor Hugh Robjohns replies: These claims are partially true! Let’s start with the simple integer-ratio sample-rate conversion issue. Simple ratios were important in the days of ‘synchronous’ sample-rate conversion, but that technology went the way of the dodo a long time ago. It was a relatively simplistic approach that did work best with simple ratios between the source and destination sample rates. However, it had limited resolution in terms of the practical word-lengths achievable, with the noise floor rarely being better than the equivalent of 18 or 19 bits. Moreover, the approach is hugely wasteful of computational effort, calculating millions of intermediate sample values no one has any interest in.
Modern ‘asynchronous’ sample-rate conversion is far more sophisticated and works by analysing the source and destination sample rates and working out only the required sample values with huge precision. This achieves a technical performance that is significantly in excess of any real-world converter and very close to the 24-bit theoretical level — and that’s achieved with any ratio of input-to-output sample rate. There is no measurable difference in performance between using simple integer ratios or complex ones.
In fact, it’s interesting to note that some of the best performing D-A and A-D converters from the likes of Benchmark, Crookwood, Cranesong, Drawmer, and others, all use non-integer sample-rate conversion as an inherent part of their jitter-isolation process. For example, D-A converters using this approach typically up-sample the incoming digital audio to something like 210kHz, or the rate at which the physical D-A converter chip achieves its best performance figures: no simple-ratio conversions going on there, yet class-leading performance specifications!
Moving on to the second point, there is a (weak) argument for sampling original material at a rate higher than 44.1kHz in some cases. The reason is that a lot of A-D systems are designed with relatively imprecise anti-alias filtering, which typically only manages 6dB attenuation at half the sampling rate, instead of the 100dB or more that is theoretically required. It’s done in that way because it makes the A-D converter’s digital filtering a lot easier to design, and in most cases it makes little difference. However, aliasing could result if the input audio contains a lot of strong extreme HF harmonics. Cymbals, orchestral strings and brass can all generate enough HF energy, if close-miked, to cause this problem with some converters. In such cases, switching to a higher sample rate might sound noticeably sweeter.
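As a quick sketch of where such an unfiltered high-frequency component ends up, the alias 'folds' back around half the sample rate. The 26kHz harmonic below is an illustrative figure, not one from the article.

```python
# Where a component above half the sample rate 'folds' to if it isn't filtered
# out: the alias lands at the distance to the nearest multiple of the sample rate.
def alias_frequency(f_hz: float, sample_rate_hz: float) -> float:
    f = f_hz % sample_rate_hz
    return min(f, sample_rate_hz - f)

print(alias_frequency(26_000, 44_100))   # 18100.0 Hz: audible alias of a 26 kHz harmonic
print(alias_frequency(26_000, 88_200))   # 26000.0 Hz: at 88.2 kHz it isn't an alias at all
```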
The issue then, of course, is how to down-convert to 44.1kHz for release without suffering the same problem in the sample-rate converter (SRC) anti-alias filtering. Clearly, a properly designed digital filter is required in the SRC, and while some software SRCs do this properly, some don’t. The Infinite Wave SRC comparison web site reveals the scary truth! (See http://src.infinitewave.ca).
The
176.4kHz (or 192kHz) quad-sample-rate idea is really just about being
able to say ‘mine’s bigger than yours’. There’s a very good white paper
about sampling theory on Lavry’s web site (www.lavryengineering.com)
where Dan Lavry points out that the higher the sample rate, the greater
the proportion of error in sampling time and the lower the actual audio
resolution. Lavry argues (very sensibly, in my opinion) that the
optimal sample rate would actually be 60kHz. In the real world, 96kHz
can be useful, for the reasons mentioned above, but the quad rates are
a folly and Lavry refuses to support them!
Q: How thick should acoustic treatment be?
Q: "I want to line the walls of my studio with absorbent materials. How thick should the absorbers be?"
By David Mellor, Course Director of Audio Masterclass
If your recording room is too reverberant, then this reverberation will get onto your recordings. Reverberation can be added to a dry recording, but it can never be removed.
If your control room is too reverberant, then the sound that is bouncing around will color your judgment and you won't be able to monitor and mix accurately.
The solution in both cases is to apply acoustic treatment.
There are two common types of acoustic treatment. One is the porous absorber, the other is the panel absorber, also known as the membrane absorber.
Any material that is soft and full of air-pockets will function as a porous absorber. Examples include curtains (drapes), carpet, glass fiber loft insulation and mineral wool. Of these, mineral wool is normally preferred for its effectiveness and low cost.
Mineral wool is often supplied in slabs that are around 50 mm deep.
You will find that a surface covered with such slabs will absorb high frequencies almost completely. However it will not absorb low frequencies at all.
This has to do with the wavelength of audible sounds.
The wavelength of the highest frequency that is normally considered to be in the audible range, 20 kHz or 20,000 Hz, is around 17 mm.
However the wavelength of the lowest frequency that is normally considered to be in the audible range, 20 Hz, is 17 meters!
Clearly, absorption that is a mere 5 centimeters deep will have hardly any effect on a sound wave that is measured in meters.
As a rule of thumb, a porous absorber is effective for a wavelength that is four times its thickness, and for all shorter wavelengths.
So mineral wool that is 5 cm thick will absorb frequencies of 1700 Hz and above. Below 1700 Hz, it will be less and less effective.
In practical terms, porous absorption is worth using up to around 10 cm thick. For the lower frequencies that such a depth cannot handle, panel absorbers are easier to find space for because they don't have to be so thick. We will discuss panel absorbers on another occasion.
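That rule of thumb is easy to turn into a quick calculation. The short Python sketch below (with the speed of sound taken as roughly 343 metres per second) gives the lowest frequency a porous absorber of a given thickness can usefully absorb.

SPEED_OF_SOUND = 343.0   # m/s at room temperature

def lowest_effective_frequency(thickness_m):
    # Effective down to the frequency whose wavelength is four times the thickness.
    return SPEED_OF_SOUND / (4 * thickness_m)

for cm in (5, 10, 20):
    print(f"{cm} cm: effective down to about {lowest_effective_frequency(cm / 100):.0f} Hz")

# 5 cm  -> about 1715 Hz (the 'around 1700 Hz' quoted above)
# 10 cm -> about 858 Hz
# 20 cm -> about 429 Hz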
Bear in mind that if you use only porous absorption, your room will become bass-heavy because of the low frequencies that are still bouncing around.
By the way, the photo shows an anechoic chamber where virtually all reverberation is absorbed.
Publication date: Tuesday June 15, 2010
Author: David Mellor, Course Director of Audio Masterclass
Wednesday, October 17, 2012
Q. Are expensive multicore cables really worth the money?
I’ve been looking at
multicores, and wondering how much quality matters. People say that only
the best cables should be used, and I know a cheaper one would probably
break sooner, but is there a massive difference between a $100 and
a $300 one? Am I likely to suffer more crosstalk and interference with
a cheaper multicore?
Via SOS web site
SOS Technical Editor Hugh Robjohns replies: A lot depends on how you plan to use the multicore. For example, if it’s being gigged in different venues every night, and pulled and kicked about by roadies desperate to get a curry before the restaurant closes, a cheap one will probably lose channels almost daily and be completely dead within the month. On the other hand, if it’s part of a permanent install in a private studio and used carefully only by you, a cheap one will probably last a lifetime.
The multicore cable itself isn’t really the issue, although it is certainly true that better cables (by which I mean lighter, more flexible, with better screening, and designed to be easier to terminate) do cost more. However, most of the cost of a multicore system actually goes on the construction, and paying more generally means you get better quality connectors and a much better standard of internal wiring in the end boxes.
Systems with detachable end or breakout boxes are more expensive than those with permanently wired breakout boxes, because of the additional connectors and wiring involved. They are often easier to rig and de-rig, and can be more reliable in heavy-use situations, but the quality of the cable and box connectors is critical — and good ones are frighteningly expensive!
Tuesday, October 16, 2012
Clip-based gain versus fader automation: which is best?
Pro Tools 10 adds clip-based gain to its feature set. But what does it offer that can't be done with fader automation? (This applies to all other DAWs too.)
By David Mellor, Course Director of Audio Masterclass
Firstly, what was once called a 'region' in Pro Tools is now called a 'clip'. It's a block of audio displayed on the screen that can be edited or processed in various ways without affecting any other audio in the session.
In any mix, it is likely that at least some faders will have to move during the mix. A mix that is static all the way through is either very rare, or not yet perfected.
Traditionally this has been done with fader automation, just as on physical automated mixing consoles.
This is good, but the problem is that automation was developed in the days when the fundamental unit of audio was the track.
With hard disk recording, the fundamental unit of audio is the clip. There may only be one clip on a track that lasts the full duration of the song. But there will more likely be several clips, perhaps many.
So previously in Pro Tools, although detailed editing and processing of audio was done in clips, automation applied to tracks.
Now, automation can be performed in a much more useful way.
Clip-based automation in Pro Tools allows the level of an entire clip to be adjusted, with a corresponding increase or decrease in the height of the displayed waveform. Or you can set an automation curve so that the clip varies in level to whatever amount of detail you want.
Anything that you could have done with traditional fader automation can be done with clip-based gain, and it is very much easier and more flexible. Any clip-based gain you apply will remain attached to the clip as you move it, copy it or edit it, or even move it to a different track. Fader automation is still there if you want it.
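As a simplified model of the difference (an illustration only, not a description of Pro Tools' actual engine), think of clip gain as scaling the clip's own samples, so it travels with the clip wherever it goes, while fader automation is applied afterwards at the track level; in decibels the two simply add.

import numpy as np

def db_to_lin(db):
    return 10 ** (np.asarray(db, dtype=float) / 20)

class Clip:
    def __init__(self, samples, gain_db=0.0):
        self.samples = np.asarray(samples, dtype=float)
        self.gain_db = gain_db            # stays attached when the clip is moved or copied

    def render(self):
        return self.samples * db_to_lin(self.gain_db)

def render_track(clips, fader_automation_db):
    # Sum the rendered clips, then apply per-sample fader automation (in dB).
    audio = np.concatenate([clip.render() for clip in clips])
    return audio * db_to_lin(fader_automation_db)

clip = Clip(np.ones(4), gain_db=-6.0)     # whole clip pulled down 6dB
automation = [-3.0, -3.0, 0.0, 0.0]       # a fader ride across the clip
print(render_track([clip], automation))   # net gain: -9dB, -9dB, -6dB, -6dB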
Of course, other DAWs already have this facility. Indeed, some DAWs that have since come and gone and turned into fossils had it.
Even so, it is well worth appreciating the difference between track-based automation and clip-based automation. And using it too!
Publication date: Thursday October 27, 2011
Author: David Mellor, Course Director of Audio Masterclass
Don't suffocate in your soundproof studio!
So you have soundproofed your studio. Why is it that everyone around you is dropping like flies?
By David Mellor, Course Director of Audio Masterclass
Do you intend to soundproof your studio?
It is a pleasure to work in a soundproof studio, free from irritating outside noise, and relaxing in the knowledge that nothing you do will annoy your neighbors.
So to soundproof your room, you add mass to the walls, ceiling, floor, windows and door, exactly as you should. And you make sure that there are no leaks. Sound will find its way through even the tiniest of gaps.
But with effort and attention to detail, you can indeed have a soundproof studio. Perhaps not perfectly so, but effectively so for your purpose.
But there is now a problem. Since you have sealed all the gaps against sound penetration, you have also made your room air-tight.
Indeed, it can often be difficult to open and close the door to a soundproof room because of the air pressure.
Now, as you work in your soundproof studio, you will find that you get hot and stuffy. If there are other people in the room, it will happen more quickly. If there is a singer, he or she will feel uncomfortable first.
Some people who find that their studio becomes hot and stuffy in use go out and buy an office-type fan. But this merely redistributes the hot and stuffy air around the room. It provides a slight subjective benefit when it is wafting in your direction, but not all that much.
For a cool, fresh working environment you need two things: ventilation and air conditioning. These are two separate processes. Ventilation does not cool the air (unless it's Alaska outside) and air conditioning does not provide fresh oxygen.
Ventilation is often perfectly adequate in itself. Although the temperature may rise, at least there is oxygen in the air. If you don't mind a hothouse environment, then ventilation from the outside world, using a large fan for quietness, may be enough.
The problem with both ventilation and aircon systems is that they create noise. A professional studio will spend a fortune on its system, installed by a specialist contractor who understands recording studios.
In smaller studios, it is usual to expect noise from the ventilation and aircon systems and simply tolerate it. You can switch them off when you are recording using microphones, and for critical listening during the mix.
Publication date: Tuesday November 30, 1999
Author: David Mellor, Course Director of Audio Masterclass
Monday, October 15, 2012
How to get a 'vintage sound' in your recordings
Vintage and retro equipment is very popular these days. But is that all you need to achieve a vintage sound?
By David Mellor, Course Director of Audio Masterclass
If you run a few recordings of varying vintages through an audio spectrum analyzer (also known as an audio spectrograph) you will notice that modern recordings tend to have more bass and more top end.
The reason for this is that in the era that we consider vintage - the 1950s and 60s - few people had a listening system that could reproduce much in the way of highs and lows. The photo above shows an example.
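If you would like to try that comparison yourself, a rough Python sketch along the following lines will do it; the filenames are placeholders and the band limits are just one reasonable choice.

import numpy as np
from scipy.io import wavfile
from scipy.signal import welch

def average_spectrum(path):
    fs, x = wavfile.read(path)
    if x.ndim > 1:
        x = x.mean(axis=1)                # fold stereo to mono
    f, pxx = welch(x.astype(float), fs=fs, nperseg=8192)
    return f, 10 * np.log10(pxx + 1e-12)  # averaged power spectrum in dB

for name in ("vintage_track.wav", "modern_track.wav"):   # placeholder filenames
    f, spec_db = average_spectrum(name)
    lows = spec_db[f < 100].mean()
    mids = spec_db[(f > 300) & (f < 3000)].mean()
    highs = spec_db[f > 10000].mean()
    print(name, f"lows {lows - mids:+.1f} dB, highs {highs - mids:+.1f} dB relative to the midrange")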
Engineers and producers therefore concentrated on what people would actually hear when they bought the record, and of course how the record sounded on radio.
It is worth considering therefore when putting your sounds together that if you monitor on loudspeakers that have the vintage sound, then your attention will be concentrated on the 'vintage frequencies'. (By which I mean the frequencies that were important to listeners in the vintage era.)
Note that I am not only talking about mixing and mastering, I am talking about the whole process of recording from beginning to end.
In fact I would go so far as to say that it doesn't matter how much vintage and retro equipment you have, you are never going to achieve a convincing vintage sound unless you go through the entire recording process this way.
Having said all of that, it is of course well worth bearing in mind that today's listeners very often do have the benefit of playback equipment that is capable of a full frequency range, and your eventual mix and master should please them too.
But if you have put all of your powers of mental concentration into the all-important midrange, you should be able to achieve a mix that would satisfy any lover of vintage sounds.
Publication date: Friday October 28, 2011
Author: David Mellor, Course Director of Audio Masterclass
Saturday, October 13, 2012
Q. How and when should I normalise my mix?
I have a question
about normalising. I mix from Cockos Reaper through an M-Audio ProFire
2626 interface into an Allen & Heath ZED12 FX mixer, and then back
into Reaper. I often find that the end mix level is lower than expected,
and I have to push the master fader on the desk up over zero. I do set
the gain correctly for each hardware channel, using the PFL button, and
I have the outputs of the 2626 up at maximum. My worry is that if
I start pushing up the levels of the individual track faders on the
mixer, I’ll start introducing unwanted hiss from the hardware, so I tend
to mix to a maximum of unity gain on the individual channels — but
maybe I should start going beyond that? Also, if I do normalise my mix,
should I do it before I apply my master-bus processing (a bit of
compression and limiting), or should I apply the master-bus processing
first and then normalise that processed file to take full advantage of
the available digital headroom?
Timo Carlier via email
SOS contributor Mike Senior replies: The first main issue here is avoiding unwanted noise and distortion from your analogue components, and a good basic principle to bear in mind is to maximise your signal-to-noise ratio as early as possible for each piece of equipment, and then to leave any subsequent gain controls at their unity-gain position if possible. So in your situation, make sure the signals you’re routing out of Reaper are making full use of the digital output headroom, so that you’re sending the maximum level out of the ProFire 2626’s sockets. (Check also that the faders in the ProFire’s own mixer utility are set to unity gain, so that they don’t undermine your efforts in Reaper.) Boosting the gain in your DAW at this point shouldn’t incur any significant side-effects or additional background noise as long as you don’t clip the output buses in Reaper.
With a good level coming out of the ProFire 2626, you may well find you need very little, if any, boost from the ZED12’s channel Gain trim to give decent PFL readings, so from that point until you record the mixdown signal back into the computer, you should only need to turn things down. Clearly, this is what the faders are for, but you may also wish to use the channel Gain trims too, in some cases, in order to keep the channel faders for quieter sources closer to the unity-gain mark, where there’s better control resolution. From that point, the trick is to build up your analogue mix so that it naturally fills the console’s available output headroom. If you start with your first instruments too quiet, you’ll end up with a low output level and therefore more background noise than necessary. If you start things too hot, you’ll start clipping the mix bus before all the instruments have been added. It’s a bit of a knack, so don’t sweat it if you don’t nail it exactly first time. If you under/overshoot, the best thing to do is adjust the channel faders en masse to redress the situation. Any channel insert processing will be left unchanged in that way, as well as any post-fader effects levels, so the amount of rebalancing you’ll need to do is likely to be fairly small.
My main tip for getting it right first time is to fade up early any channels with strong transient peaks or powerful low-frequency energy: typically bass and drums in modern commercial productions. These elements take up the lion’s share of the headroom in many mixes, so they provide a good early indication of the headroom the full mix is likely to demand. In fact, with a little practice, you should be able to discover ‘rule of thumb’ starting levels for your drums and bass on the ZED12’s meters, which will usually lead to a good final mix level.
If you’re doing your master-bus processing in the computer, just get the hottest clean output signal from the mixer into the audio interface. If the signal’s too hot for the interface, turn down the mixer’s master fader; if it’s not hot enough, push up the console’s master fader (if that doesn’t clip the ZED12’s output circuitry), or apply additional analogue input gain on the interface. Once the signal is digitised at a good level, it shouldn’t need any normalisation before further bus processing; even 24dB of headroom should be perfectly fine at 24-bit resolution. If you’re bus processing through analogue gear before digitising, just keep the same principles of gain-structuring at the front of your mind, and you shouldn’t come too far unstuck: feed the processor as hot as possible without clipping it, and avoid adjusting the gain unnecessarily until you need to set an appropriate output level for the next piece of gear in the chain.
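For anyone curious, peak normalisation itself is nothing more exotic than measuring the highest sample and applying one fixed gain, as the little Python sketch below illustrates (the target level and test signal are just examples). Since the same static gain could be applied at any later stage, doing it before bus processing offers no real advantage at 24-bit.

import numpy as np

def peak_dbfs(x):
    return 20 * np.log10(np.max(np.abs(x)) + 1e-12)

def normalise(x, target_dbfs=-0.3):
    gain_db = target_dbfs - peak_dbfs(x)
    return x * 10 ** (gain_db / 20), gain_db

# A hypothetical mix captured with 24dB of headroom at 24-bit resolution.
mix = 10 ** (-24 / 20) * np.sin(2 * np.pi * 100 * np.arange(48000) / 48000)
normalised, applied_gain = normalise(mix)
print(f"peak was {peak_dbfs(mix):.1f} dBFS, normalisation applied {applied_gain:+.1f} dB")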
In addition to technical considerations, the level at which you drive any piece of analogue circuitry also affects the subjective sound in less tangible ways, and this concern may justify creatively modifying some of the above generalised tactics. If you hit the Allen & Heath’s input circuitry a little harder than strictly recommended, for instance, you may find you like the resulting saturation harmonics. Perhaps your analogue bus compressor provides slightly different release or ‘knee’ characteristics (for the same gain reduction) if you process a high-level signal with a high threshold than if you work on a lower-level signal with a lower threshold. Or maybe the mixer’s output bus might sound more transparent on acoustic music if fed more conservatively. Finding out that kind of stuff is one of the really fun bits about analogue mixing, so if you’ve already made the effort to get stuck into the hybrid analogue/digital approach, I imagine you don’t need too much convincing to get your hands dirty there!
Finally, it’s tempting to think that such signal-level concerns are pretty much redundant in the digital domain, but that’s not really true, given how many emulated analogue plug-ins most people now seem to use. If this kind of processor is faithfully modelled, it will usually track the non-linear characteristics of its analogue forbear, so you still need to be aware of what level is hitting it. This is one of the reasons why my Mix Rescue projects are full of lots of instances of GVST’s freeware GGain plug-in, a simple +/-12dB VST gain utility that gives me more control over the internal gain-structure of my channel plug-in chains. (There are also various built-in gain plug-ins within Reaper’s Jesusonic set if you’re not on a PC as I am.)