Thursday, January 31, 2013
Q. Why aren’t my vocals audible in the car?
I’ve used a ‘stereo-izer’/widener on a track, along with two more separately recorded tracks panned hard left and right, so there are three tracks in total for the chorus. When the chorus plays, the vocals are barely audible, but only when I play the track in the car. There may be a slight difference on my iPod, my monitors or my cinema system, but the difference in the car is very dramatic.
I can only guess that the separate audio tracks are cancelling each other out. (I’ve read something about phasing, I think). But why is the effect so pronounced in the car?
Via SOS web site
SOS Technical Editor Hugh Robjohns replies:
Without knowing exactly what the source tracks are and what you’ve done, it’s hard to give a precise answer, but it does sound as though there is some serious cancellation going on between channels, and that also implies that the two channels are being summed to mono at some point in the car.
Firstly, that stereo-widening effect you mentioned will inherently reduce the mono compatibility and may well lead to significant sound-quality changes between listening in mono and listening in stereo. Secondly, if you have two near-identical signals, and one has an inverted polarity with respect to the other, adding them together will result in them trying to cancel each other out, leaving nothing behind at all. But for that to happen you (a) need to have opposite polarity signals, and (b) need to add them together (either electrically or acoustically).
From your descriptions, you can hear a hint of the effect on your other systems, but it is very pronounced in the car. What that suggests to me is that there is some out-of-phase aspect to the signals — probably caused by the stereo widener effect — but that your other systems aren’t summing the left and right channels together, so the full cancellation isn’t happening. However, in your car it would appear that the channels are being summed together somehow, and hence near total cancellation. Exactly how that is happening, I can’t say. It could be that your speakers are wired in a peculiar way, or that there is some odd signal processing (surround effects?) happening in your car hi-fi.
The only way to get to the bottom of this is to analyse the source material carefully for phase anomalies, and use some known reference material to evaluate exactly what is going on in your car.
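One objective way to do that analysis is to compare the level of the stereo mix with the level of its mono sum. Here is a minimal sketch of the idea in Python (the file name and the NumPy/soundfile tooling are illustrative assumptions, not part of the original advice):

import numpy as np
import soundfile as sf  # assumed available; any WAV reader would do

audio, rate = sf.read("mix.wav")        # shape: (samples, 2)
left, right = audio[:, 0], audio[:, 1]

def rms_db(x):
    # RMS level in dBFS, with a tiny offset to avoid log(0)
    return 20 * np.log10(np.sqrt(np.mean(np.square(x))) + 1e-12)

stereo_db = (rms_db(left) + rms_db(right)) / 2   # average channel level
mono_db = rms_db((left + right) / 2)             # level after mono summing

print(f"stereo channels: {stereo_db:.1f} dBFS")
print(f"mono sum:        {mono_db:.1f} dBFS")
# In-phase or uncorrelated material loses little when summed; a drop of
# more than a few dB (especially across the vocal sections) points to
# out-of-phase content, typically introduced by stereo-widener effects.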
Wednesday, January 30, 2013
Q. How can I record my band with limited space and equipment?
I’m in a band and
we’re hoping to record ourselves live in our tiny rehearsal space using
the limited equipment we have. My plan so far is to have drum overheads,
snare, and kick mics going into channels one to four of my Alesis
Multimix 8. Everything else (vocals, guitars and bass) will be miked
into the PA system, then I’ll go out of that into channels five and six
on my Alesis and record a stereo mix into Reaper (on my laptop) via USB
from there. I figure that if I get the EQ and panning right first, that
should give a reasonable representation of how we sound, but am
I missing something? I’m worried that the PA speakers will feed too much
into the drum mics; for space reasons, the PA speakers are just behind
and to the sides of the drums.
But if I unplug the PA speakers, we won’t
all hear the vocal. If you have any words of wisdom then I’d be most
grateful. We have two regular dynamic mics, two small-diaphragm
condensers, and five dynamic drum mics.
Via SOS web site
SOS contributor Mike Senior replies:
Given the cramped conditions, I’d
recommend turning off the PA speakers first of all, and living with the
compromise that the other players won’t hear the vocals. In practice it
shouldn’t really matter as long as everyone can see each other well, and
everyone’s fairly clear on the structure of the song. Beyond that,
though, your general miking/routing plan seems feasible, and here are
a few tips you might find handy.
Firstly, I’d
try to catch as full a drum sound as possible through the overheads.
That usually means not sticking them right above the cymbals; either
side of the drummer’s head is often a better starting point
balance-wise. Given the inevitable resonance-mode problems in small
rehearsal rooms, you’ll almost certainly want to roll out quite a bit of
low end using the Alesis Lo-EQ controls on the overhead channels, but
otherwise try to get the best sound you can from the drums by
repositioning the mics. Remember that cardioid mics tend to give their
brightest sound for whatever they’re pointing most directly at.
As
far as the snare is concerned, try not to get the mic so close to the
drum that all you get is ‘donk’. Whatever you end up with, though,
hopefully your overheads should supply enough snare sound that you don’t
have to use the close mic much. Try to baffle the kick mic in some way
(or put it inside the kick drum) so that you don’t get masses of bass
spill and low guitar woolliness on it, especially since you’ve got no
facility to gate it. The one thing you’re really missing on the Alesis
mixer is any phase/polarity control, so if you have any phase-inversion
XLR leads (leads that swap the hot and cold XLR pins), then have those
handy in case combining the snare or kick with the overheads sucks the
heart out of your drum sound.
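If you would like a sanity check beyond your ears, the polarity relationship between two mics can be estimated offline by correlating aligned excerpts of the recorded tracks. A rough sketch only (the file names and the NumPy/soundfile tooling are assumptions; trim both files to the same snare-heavy passage first):

import numpy as np
import soundfile as sf

snare, rate = sf.read("snare_close.wav")    # mono close-mic excerpt
overhead, _ = sf.read("overhead.wav")       # mono overhead excerpt

n = min(len(snare), len(overhead))
# Correlation coefficient over the excerpt: values approaching -1 mean
# the two mics are substantially opposite in polarity at low and mid
# frequencies, where cancellation does the most damage.
r = np.corrcoef(snare[:n], overhead[:n])[0, 1]
print(f"correlation: {r:+.2f}")
if r < 0:
    print("Try a polarity-inverting lead on the close-mic channel.")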
Given that spill
is going to be a fact of life here, I’d be tempted to grasp the bull by
the horns and make the best of the situation in that respect. In other
words, I’d actually not try to separate the guitars and bass from the
drums especially, but rather put them as close as they’d be on stage so
that you get more of the benefit of a live-style performance situation
(albeit without vocals). If you mic the guitars close, spill from the
drums should still be fairly low in level if the instruments are
well-balanced in the room, and it may actually improve the overall drum
sound. If not, then try moving/rotating the whole ‘guitar plus close
mic’ setup a little to get a better result, or try another polarity-flip
XLR cable. Again, you’ll probably want to roll quite a bit of low end
out of the guitar close mics, given the spill situation and the likely
strength of the proximity-effect bass boost.
With
the bass, I have to say that, again because of the inevitable
room-resonance problems, I’d record his DI rather than his amp if at all
possible (through something like a Bass Pod if an amped sound is really
important), even if he still has the amp live in the room for
performance purposes. I’d have the vocalist out the front of the drums
facing the drummer, and then put duvets or something behind him/her to
soak up some of the spill. If you have something like an SE Electronics
Reflexion filter you can put up around the mic, then that’d help the
spill issue too, but be careful not to interfere with sight lines
between the players. Once more, low cut on the vocals will probably help
stop the overall mix sounding muddy.
Setting
all this up without the luxury of a separate monitoring room will be
a challenge, but the best way (if a little time-consuming) is to do
quick test recordings as you go, so you can judge the sounds without the
spill from the room putting you off. You can make life easier for
yourself in this respect if you do your best to get the sound in the
room as close to the sound you’re after on record as you can. In
practice, I’d expect it to take two or three hours of experimentation to
get a reasonable sound going in this way, not including the time taken
to set up the instruments and plug up and test the mic lines, so my
final advice would be just to allow yourselves enough time, and warn the
other band members that they might need a bit of patience!
If
you were able to lay hands on an eight-channel interface of some kind
(or even a small eight-track multitrack recorder: the Zoom R16 is
ridiculously affordable, for instance) then that would afford you a lot
of scope for improvements in a separate mixing stage. It’d also take
some of the pressure off you in terms of judging the best phase/polarity
relationships between the different mics right there on the session, so
I’d seriously consider making that investment.
Q. How do I get the best from guitar-amp simulator software?
Due to space and
other limitations, I am not able to mic up my amp, so I’m going to have
to rely on plug-ins for my guitar recordings, but I’m worried about how
close I can really get to the sound of my amped-up guitar using this
method. What do you think is the best amp-sim plug-in for guitar? And
what can I do to make the sound better or more realistic? Is this
a fairly common practice with DAWs, or should I be finding a way to mic
up a real amplifier?
Via SOS web site
SOS Reviews Editor Matt Houghton replies:
The first thing to say is that
yes, use of software amp-modelling in home and professional DAWs is
commonplace, and the technology has come a long way in recent years: the
sounds you can achieve with one of many commercially available software
amp and speaker simulators are really pretty good now. What tops your
personal list will be pretty much down to taste. For example, IK
Multimedia’s Amplitube, Native Instruments’ Guitar Rig, Line 6’s Pod
Farm and Peavey’s Revalver III are all capable of great results, but Softube’s Vintage Amp Room is my favourite software for re-amping work. That’s partly due to the amps that are modelled, partly due to the quality of those models, and partly due to the simplicity of the
interface, which isn’t cluttered with a gazillion effects and preset
menus, and offers only three amp models. This means that it’s easy to
get to know it inside out and back to front, just like you would a real
amp.
Whatever your personal preference in terms of
sound, though, the key with any of this software is to keep latency as
low as possible while playing, and to play it pretty loud over your
speakers if you can, just so that you can get some interaction between
the speakers and your pickups, as you would with a real amp. It’s the
‘playability’ side of things that concerns me most with much current
software, and hardware modellers too: while the sound itself can be
great on playback, and is usually perfectly good for re-amping purposes,
I find the performance can suffer when you’re not playing and
monitoring through your amp. To get around this, in part, you can use
a basic modelled speaker emulation while tracking — rather than an
impulse response, so that the convolution process isn’t adding
unnecessarily to the latency — and then maybe experiment with speaker
impulses later on, to produce a more realistic recorded sound. On the
question of latency and playability, remember that you’re often standing
away from a loud guitar amp, and if you’re playing closer to your
monitors, that should compensate a little for the latency, simply
because the sound reaches your ears that bit sooner than usual. When it
comes to impulse responses, when they’re played back through a suitable
convolution engine (there’s one bundled with most DAWs in the guise of
a reverb), they can sound very convincing, but do bear in mind that they
capture a static response and, therefore, don’t offer you control over
mic selection and position, and don’t respond dynamically to variations
in level as a real speaker would.
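To put rough numbers on that point about distance (a back-of-envelope sketch; the buffer sizes and distances here are purely illustrative):

# Total monitoring latency is roughly one input buffer plus one output
# buffer through the audio interface, plus the acoustic travel time
# from the speakers to your ears (sound travels at about 343 m/s).
def monitoring_latency_ms(buffer_samples, sample_rate, distance_m):
    interface_ms = 2 * buffer_samples / sample_rate * 1000
    acoustic_ms = distance_m / 343 * 1000
    return interface_ms + acoustic_ms

# 128-sample buffers at 44.1kHz, listening 1m from the monitors:
print(f"{monitoring_latency_ms(128, 44100, 1.0):.1f} ms")  # about 8.7ms
# For comparison, a guitarist standing 3m from a real amp already
# hears the sound about 3 / 343 * 1000 = 8.7ms late.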
Notwithstanding
this advice, though, I typically only use the software modellers if I
have to, as I know I can get better sounds from a real amp. To keep amp
recordings quiet in a compact space, there are a few options. The most direct is to stick the speaker inside a box to keep the sound down, using an isolation cab such as the Hermit Cab. Alternatively, you could put
a power soak in between the amp and speaker, so that you can attenuate
the signal after it has passed through the amp but before it hits the
speaker. If space is at a premium, this might not be the best approach,
and while power soaks all allow you to use your amp, most impart plenty
of their own coloration to the signal, which may not be to your taste.
I much prefer to play through a nice tube-amp head into a power soak
(such as the THD Hot Plate or the Sequis Motherload), and while
I usually just use this to attenuate the output (so that I can drive the
amp harder at sensible recording levels), you can use a power soak to
attenuate the signal completely, and run a line output into your DAW.
You’ll still need something to model the speaker, of course, and some
power soaks include a reasonable speaker emulation, or you could, once
again, look to impulses such as those from Redwirez. A slightly less
complex variation on this theme is to use a high-quality guitar preamp
(or take a preamp output feed from your amp if you have one) and run the
resulting signal through a dedicated power amp and speaker emulator
such as the Two Notes Torpedo VB101. Again, it’s difficult to
say how good you’ll find the results from these different approaches.
My
sense is that if you’re comfortable playing through software, and happy
with the sound, it’s a problem that doesn’t need solving. But if you
really do want to go in search of the ultimate compact and quiet
recording solution for guitar, then I hope I’ve given some useful
pointers!
Tuesday, January 29, 2013
Q. When was the click invented?
I’ve been wondering:
when was recording to a click first used? And when did it start to
become widely used? Did people record to metronomes before the advent of MIDI? I’ve tried searching the Internet but haven’t found any answers.
Via SOS web site
SOS Technical Editor Hugh Robjohns replies: I’m sure
people must have recorded using metronomes in the early days but, as far
as I’m aware, the first documented use of a ‘click track’ in the modern
understanding of the term was by Walt Disney’s team for the Fantasia
film soundtrack back in 1940. The requirement was to be able to pan
different orchestral sections around the auditorium via six speaker
arrays: three across the front and three across the back. To do that,
they needed to record the sections to separate tracks (remember this was
in the days of mono optical film audio machines locked together with
chains and sprockets!) and so they used a click track to keep the
sections in time on each take.
The Fantasia
project introduced a lot of things we take for granted today such as
click tracks, pan-pots, VCA level automation, multitrack recording,
overdubbing, surround sound and more besides!
However,
the Second World War took the focus away from sophisticated
surround-sound cinema productions, and the click-track idea didn’t
really surface again until MIDI sequencing and quantising became
commonplace in the 1980s.
Monday, January 28, 2013
Q. Why shouldn’t I use mastering limiting during mixing?
I often read
recommendations to mix with compressors/limiters in the main bus so we
can adjust to the effects of mastering during our mixing, and then to
bypass those dynamics plug-ins when we export for mastering. Why not put
in the full mastering chain and mix and master your track in one pass?
I mixed my latest track with the following chain in the master bus:
Cubase’s full-band compressor with a 1.2:1 ratio for 4-5dB of reduction;
Powercore EQsat plug-in with a broad four-octave dip of 1dB at 850Hz;
Powercore Master 3X multi-band compressor plug-in operating at a 3.2:1
ratio with 2dB of gain reduction; and ToneBoosters’ Barricade limiter
set to a -1dBFS ceiling and showing 3-4dB gain reduction.
Damien McEwan, via email
SOS contributor Mike Senior replies:
Using
a compressor on the main mix bus during mixdown is indeed very common
(although by no means universal) in order to ‘glue’ the mix together or
create extra excitement via gain-pumping effects. Given that this
bus-processing can impact quite heavily on the way you balance the
track, it makes sense to have it working while you mix, particularly so
that you can judge your effects levels and fader automation sensibly
within context.
However, limiting the main
output bus during mixdown is a whole different kettle of fish, because
the main purpose of full-mix limiting is simply to boost the subjective
loudness within the digital headroom. As such it’s usually much
faster-acting, and the goal is usually to make as little difference to
the mix balance as possible. Furthermore, setting up a limiter for the
best results is usually a delicate process, where small shifts of the
input level and plug-in controls can make big differences to the sound.
So on the basis that mastering limiting shouldn’t normally affect mix
balance, and that it adds to the already considerable complication of
creating a decent mix, I usually recommend that this process be left
until after mixdown.
Clearly there are some
chart-oriented producers for whom the loudness of the master is an
important primary concern. In that context, previewing the side-effects that heavy-handed loudness processing (including limiting) will have on the mix tone and balance can allow some pre-emptive compensatory steps to be taken by the mix engineer. However, even in
that case, I’d favour bouncing your mix out to a separate project to
experiment with this processing, even if that means that you then have
to hop between the mix and a pseudo-mastering project. One reason
I prefer working this way is that it puts fewer limitations on the
mastering-style plug-ins I can use within my PC’s available CPU
resources, and usually makes it a lot simpler to switch between my own
pseudo-mastered mix and a selection of commercial reference tracks — an
essential process when judging the results of your own mastering. Also,
from a psychological perspective, being unable to immediately enact
changes on the mix during the comparison process encourages me to
clarify my own thoughts on the deficiencies of my own production across
my available monitoring systems, and I find that this means I go round
the houses less often while finalising my mix settings.
Equalising
your main output bus at mixdown is pretty common. It’s very easy while
you’re working on a mix for your ears to get used to a skewed tonality
(they’re very good at adapting), and if this shows up during comparisons
with commercial tracks then it’s much easier to deal with using
a decent-quality master-bus plug-in than by tweaking the individual EQ
settings across dozens of individual tracks.
Multi-band
compression, on the other hand, is another thing that I suggest leaving
to a separate mastering stage. Again, this is because it’s so fiddly to
set up properly, and you’re not normally looking for it to impact
hugely on the mix tone or balance; the heavy multi-band compression of
the late ‘90s hasn’t aged well, and isn’t very fashionable these days.
Also, in my experience, it’s very easy to take your eye off the ball as
regards getting the mix balance right when there’s a multi-band
compressor in the master bus, because the compression can often
counteract your mix settings and disguise subtler balance problems that
need addressing. Or, to put it another way, it tempts you to think that the mix is in better shape than it really is, so you work less hard.
Q. Is there a better way of controlling sibilance than a de-esser?
A recording of mine
is suffering from excessive sibilance, so I have tried processing it
using a de-esser. The problem is that the processing seems to be having
a detrimental effect on the sound of the vocal. Is there another way
I can get rid of the sibilance, or something I can do differently?
Dan Simpkin via email
SOS contributor Tom Flint replies:
Ideally,
a de-esser should automatically attenuate troublesome sibilance without
its actions adversely affecting the perceived quality of the audio, but
the problem is that a static set of de-esser parameters often doesn’t
work well for an entire performance. Not all of the sibilants within a
single performance will be equally objectionable, and the energy in
different sounds such as ‘S’ and ‘T’ might occupy different frequency
ranges. One solution is to automate the threshold and frequency
parameters of your de-esser so that it only works as needed, but my
feeling is that if you’re going to get into using automation to control
sibilance, it makes more sense to directly automate the level of the
vocal track within your DAW. It can be time-consuming, but offers
precise control over each individual instance of sibilance, and enables
you to visualise the problem within your DAW’s arrange page.
The
first thing to do is to enable level automation on the vocal track, and
display the volume parameter for editing with the mouse. The next thing
to do is find the first problem instance of sibilance. I use the
horizontal and vertical zoom controls to expand the waveform until the
contours of the word containing the problem are clearly defined. It’s
often possible to identify the syllables in a word just by looking at it
in close-up.
Once you’ve identified a problem
sibilant, create a pair of automation nodes at either end, in effect
bracketing that syllable. You can then select the two innermost nodes
and drag them down to reduce that sibilant in level without affecting
the rest of the track. I very occasionally find it necessary to add
a few more nodes while getting the shape right, and delete those that
turn out to be superfluous at the end. Having a small number of nodes is
good practice as it means that level adjustments can be done quickly by
moving a single line. The screen below shows a short phrase in
Cakewalk’s Sonar with level automation used to control sibilance in this
way.
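If you want a head start on locating those regions before you draw any nodes, the same idea can be roughed out offline by flagging sections where high-frequency energy dominates. This is only a sketch (the band limits, window size and threshold are assumptions to be tuned by ear, and the file handling is illustrative):

import numpy as np
import soundfile as sf
from scipy.signal import butter, sosfilt

vocal, rate = sf.read("vocal.wav")  # mono vocal track

# Isolate the typical sibilance band (roughly 5-9kHz).
sos = butter(4, [5000, 9000], btype="bandpass", fs=rate, output="sos")
sibilant = sosfilt(sos, vocal)

win = int(0.010 * rate)  # 10ms analysis windows

def window_rms(x):
    usable = len(x) - len(x) % win
    return np.sqrt(np.mean(x[:usable].reshape(-1, win) ** 2, axis=1))

# Flag windows where the sibilance band carries most of the energy;
# each hit is a candidate for a 3-4dB fader dip in the automation lane.
ratio = window_rms(sibilant) / (window_rms(vocal) + 1e-9)
for i in np.flatnonzero(ratio > 0.5):
    print(f"possible sibilant around {i * win / rate:.2f}s")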
After attenuating the worst instances of
sibilance, I listen through to see how natural the overall performance
is sounding. It may be the case that after reducing the level of the
main offenders, the ear becomes less bothered by the others and a more
lenient approach can be taken thereafter. Sometimes as little as a 3 or 4
dB drop in level results in a significant improvement, but it is also
very surprising how much reduction can be applied without it sounding at
all odd. The most important thing to note is that although you can
often identify syllables visually, only your ears can tell you how much
attenuation is needed, and how much will sound natural. In other words,
what sounds right is right. Manually automating the level provides an
opportunity to make alterations that better suit how the brain
interprets what it is hearing.
One of the great
advantages of using automation to control sibilance is that you’re not
limited to applying it just to the level of the track. For instance,
it’s often the case that sibilance is exaggerated by auxiliary effects
such as reverb or delay. If this is the case, try copying your
automation curve to the relevant send parameters on your vocal track,
then scaling it drastically so that the level of those sends is heavily
attenuated for sibilants. Likewise, any high-frequency EQ boost on your
vocal track can make sibilance worse, so try copying the same automation
curve to the gain parameter on the relevant band of your EQ plug-in.
When
the level automation is complete, overall level adjustments can be made
by sending the channel to an intermediate bus and using its fader to
boost or cut by the required amount, thereby leaving the sibilance
control automation undisturbed.
Saturday, January 26, 2013
Q. How can I warm up my recording without using EQ?
Sound Advice : Recording
I’ve put a lot of
effort into creating and editing a recording of solo mandolin — played
quite slowly — and although I like the final result a lot, on
consideration the tone is too trebly and cold, almost like a photograph
with too sharp a resolution. A friend mentioned he thought I could
perhaps ‘warm it up’ using compression, perhaps of a type designed for
vocals. Can you give me some guidance on how best I might do this? Of
course, I realise I can use EQ, but would specifically be interested in
any thoughts on how compression/limiting could be used on an existing
take to get a warmer result. I’ve used Logic and the recording is clear,
undistorted, and free from ambient sound.
Simon Evans via email
SOS contributor Mike Senior replies:
There are ways to warm up a mandolin
sound subjectively using compression, although none of them are likely
to make as big an impact as EQ. Fast compression may be able to take
some of the edge off a mandolin’s apparent tone, for instance, assuming
the processing can duck the picking transients independently of the
note-sustain elements. There are two main challenges in setting that up.
Firstly you need to have a compressor that will react sufficiently
quickly to the front edges of the pick transients, so something with
a fast attack time makes sense. Not all of Logic’s built-in compressor
models are well-suited to this application, so be sure to compare them
when configuring this effect; instinctively I’d head for the Class A or
FET models, but it’s always going to be a bit ‘suck it and see’. The
second difficulty will be getting the compressor not to interfere with
the rest of the sound. The release-time setting will be crucial here: it
needs to be fast enough to avoid pumping artifacts, but not so fast
that it starts distorting anything in conjunction with the attack
setting. Automating this compressor’s threshold level may be necessary
if there are lots of dynamic changes in the track, for similar reasons.
Applying some high-pass filtering to the compressor’s side-chain (open
the Logic Compressor plug-in’s advanced settings to access side-chain
EQ, and select the ‘HP’ mode) may help too, because the picking
transients will be richer in HF energy than the mandolin’s basic tone.
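For the curious, here is what that fast attack/release behaviour boils down to in code: a bare-bones peak compressor sketch (illustrative only, and no substitute for Logic’s own models; all the parameter values are assumptions):

import numpy as np

def compress(x, rate, threshold_db=-20.0, ratio=4.0,
             attack_ms=1.0, release_ms=50.0):
    # One-pole envelope follower: the short attack constant lets the
    # detector catch the front edge of each pick transient, while the
    # release constant sets how quickly the gain recovers afterwards.
    att = np.exp(-1.0 / (attack_ms * 0.001 * rate))
    rel = np.exp(-1.0 / (release_ms * 0.001 * rate))
    env = 0.0
    out = np.empty_like(x)
    for i, s in enumerate(x):
        level = abs(s)
        coeff = att if level > env else rel
        env = coeff * env + (1.0 - coeff) * level
        level_db = 20 * np.log10(env + 1e-9)
        over_db = max(0.0, level_db - threshold_db)
        gain_db = -over_db * (1.0 - 1.0 / ratio)  # reduce only above threshold
        out[i] = s * 10 ** (gain_db / 20)
    return out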
Another way to apparently warm up a mandolin is
to take the opposite approach: emphasise its sustain character directly
while leaving the pick spikes alone. In a normal insert-processing
scheme, I’d use a fast-release, low-threshold, low-ratio (1.2:1 to
1.5:1) setting to squish the overall dynamic range. Beyond deciding on
the amount of gain reduction, my biggest concern here would be choosing
an attack time that avoided any unwanted loss of picking definition. In
this case, shelving a bit of the high end out of the compression
side-chain might make a certain amount of sense if you can’t get the
extra sustain you want without an unacceptable impact on the picking
transients.
Alternatively, you might consider
switching over to a parallel processing setup, whereby you feed
a compressor as a send effect, and then set it to more aggressively
smooth out all the transients. The resulting ‘sustain-only’ signal can
then be added to the unprocessed signal to taste, as long as you’ve got
your plug-in delay compensation active to prevent processing delays from
causing destructive phase-cancellation. Using an analogue-modelled
compressor in this role might also play further into your hands here, as
analogue compressors do sometimes dull the high end of the signal
significantly if they’re driven reasonably hard, giving you, in effect,
a kind of free EQ.
Friday, January 25, 2013
Q. Are some analogue signal graphs misleading?
Sound Advice : Mixing
I read your feature about ‘Digital Problems, Practical Solutions’ (www.soundonsound.com/sos/feb08/articles/digitalaudio.htm),
which said that digital audio can capture and recreate analogue signals
accurately, and that the ‘steps’ on most teaching diagrams are
misleading. Does that mean that the graph should really show lines, or
plot ‘x’s, instead of looking like a standard bar-graph?
Remi Johnson via email
SOS Technical Editor Hugh Robjohns replies:
Good question! The graphs in
that article are accurate as far as they go, but offer a very simplified
view of only one part of the whole, much more complex, process.
When
an analogue signal (the red line on Graph 1: Sample & Hold) is
sampled, an electronic circuit detects the signal voltage at a specific
moment in time (the sampling instant) and then holds that voltage as
constant as it can until the next sampling instant. During that holding
period the quantising circuitry works out which binary number represents
the measured sample voltage. This, not surprisingly, is called
a ‘sample and hold’ process, and that’s what that diagram is trying to
illustrate.
So the sampling moment is, theoretically, an
instant in time, best represented on the graph as a thin vertical line
at the sample intervals (the blue lines in the picture Graph 1: Sample
& Hold), but the actual output of the sample and hold process is the
grey bar extending to the right of the blue line.
However,
the key to understanding sampling is understanding the maths behind
that theoretical sampling ‘instant’, and that means delving into the
maths of ‘sinc’ (sin(x)/x) functions, which is the time-domain response
of a band-limited signal sample. At this point most musicians’ eyes
glaze over…
As we know, the measured amplitude
of each sample from an analogue waveform is represented by a binary
number in the digital audio system. When reconstructing the analogue
waveform that number determines the height of the sinc function.
The
important point is that we are not just creating a simple ‘pulse’ of
audio at the sample point, because the sinc signal actually comprises
a main sinusoidal peak at the sampling instant (and of the required
amplitude), plus decaying sine wave ‘ripples’ that extend (theoretically
for ever) both before and after that central pulse. The reconstructed
analogue waveform is the sum of all the sinc functions for all
the samples.
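In symbols, that sum is the standard Whittaker-Shannon interpolation formula. Writing T for the sampling interval and using the normalised sinc convention, it reads:

x(t) = \sum_{n=-\infty}^{\infty} x[n]\,\mathrm{sinc}\!\left(\frac{t - nT}{T}\right), \quad \text{where } \mathrm{sinc}(u) = \frac{\sin(\pi u)}{\pi u}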
The clever bit is that the points
where those decaying sinc ripples cross the zero line always occur at
the adjacent sampling instants. This is shown in the next diagram (Graph
2: Two Sinc Functions) where, for simplicity, just two sample sinc
functions are shown for samples 23 (red) and 27 (blue). You can see that
at the intermediate sample points (26, 25, 24 and so on) the sinc
functions are always zero.
That means that the ripples don’t contribute to
the amplitude of any other sample, but they do contribute to the
amplitude of the reconstructed signal in between the samples, with the
adjacent sample sinc functions having the greatest influence, and lesser
contributions from the more distant samples. This is shown in the next
diagram (Graph 3: 3kHz Sinc Addition), in which the sinc functions of
a number of adjacent samples are shown, and when summed together produce
the dotted line that is a sampled 3kHz sine waveform.
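If you would like to see this for yourself, the summation is easy to reproduce numerically. A minimal Python sketch (the 48kHz sample rate and 64-sample window are arbitrary assumptions; NumPy’s np.sinc implements the normalised sin(pi x)/(pi x)):

import numpy as np

rate = 48000                      # assumed sample rate (Hz)
T = 1.0 / rate
f = 3000.0                        # the 3kHz sine of Graph 3
n = np.arange(64)                 # 64 sample instants
samples = np.sin(2 * np.pi * f * n * T)

# Evaluate the reconstruction between samples 16 and 48, away from the
# window edges (truncating the theoretically infinite sinc tails makes
# the edges inaccurate). Each sample contributes one scaled sinc
# centred on its own sampling instant; their sum is the waveform.
t = np.linspace(16 * T, 48 * T, 2000)
recon = np.zeros_like(t)
for k, s in zip(n, samples):
    recon += s * np.sinc((t - k * T) / T)

error = np.max(np.abs(recon - np.sin(2 * np.pi * f * t)))
print(f"worst-case error between samples: {error:.4f}")  # should be small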
These last two diagrams have been borrowed from
a superb paper by Dan Lavry (of Lavry Engineering), which explains
sampling theory extremely well, and can be found here: www.lavryengineering.com/documents/Sampling_Theory.pdf.
Thursday, January 24, 2013
Q. What’s the best system for backing up my work?
Having recently
started making and recording my own music, I need to start thinking
about backing it up. At the moment, I’m just keeping everything on my
hard drive, which I’m somewhat nervous about (I’ve often heard people
say that digital data doesn’t exist at all unless it exists in at least
three places!), so I need to sort out a system quickly. What
procedure/system would you recommend?
Julia Webber via email
SOS contributor Martin Walker replies:
It’s very refreshing to find
a musician who even thinks about backing up data at such an early stage:
often people only consider the options having dried their eyes after
losing a lot of irreplaceable songs. Hard drives can and do go wrong,
and catastrophic failures can happen in a microsecond, leaving you
unable to retrieve any of your previous files (companies do exist that
specialise in bringing data back from the dead, but they tend to be
expensive).
So it pays all of us to make regular backups; then we can laugh when disaster strikes and simply restore our most recent backup rather than lose any data: even if the very worst happens
and the entire hard drive goes belly-up, it’s entirely possible to plug
in a replacement drive and be up and running again within a couple of
hours.
First, you need to decide how often you
need to back up. To answer this question, just decide how much work you
are prepared to lose. Many hobbyists and some professionals are happy to
back up once a week, but always back up immediately you’ve finished an
important session as well, just in case. Second, decide how best to
organise your data to make each backup as easy as possible: after all,
the easier it is, the more likely you are to do it, and consequently the
less data you are likely to lose if anything does go wrong.
I prefer to organise my hard drives by dividing
them into various partitions, each devoted to a specific subject such
as Operating System + Applications, Audio Projects, Samples, Updates, My
Data and so on. Most modern operating systems let you partition your
drives in any way you wish. Although this takes a little more effort at
the start of your backup regime, for me the huge advantage of separating
your data from the operating system and applications is that you can
take global backups of entire partitions using a drive-imaging utility
such as Acronis True Image or Norton Ghost. This way, you’ll know that
absolutely everything on that partition will be contained within each
backup file (even those plug-in presets you create that get tucked away
somewhere safe and then forgotten!).
The
alternative is to leave all your data spread across the one huge default
partition for each drive, and use backup utilities that let you specify
which files to back up and which to ignore, such as Mac OS X’s Time Machine and Windows 7’s Backup And Restore. Some audio applications, such as Wavelab,
also offer dedicated backup functions. Once again, this takes time to
set up initially, and this approach also relies on you specifying
a comprehensive list of files to save, so if you forget something vital,
you may come a cropper later on.
Whether you choose drive imaging or a dedicated backup utility, you can create a global backup file; to save time and storage space later on, both types of tool may also offer the option of much smaller incremental backups that contain only the files that have been added or changed since your most recent backup.
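To illustrate what ‘incremental’ means in practice, here is a toy sketch of the idea in Python (real backup utilities add verification, versioning and cataloguing; the paths are placeholders):

import shutil
from pathlib import Path

def incremental_backup(source_dir, backup_dir):
    # Copy only files that are missing from the backup, or that have
    # been modified since they were last backed up.
    source, dest = Path(source_dir), Path(backup_dir)
    for f in source.rglob("*"):
        if not f.is_file():
            continue
        target = dest / f.relative_to(source)
        if (not target.exists()
                or f.stat().st_mtime > target.stat().st_mtime):
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)  # copy2 preserves timestamps

incremental_backup("D:/AudioProjects", "E:/Backups/AudioProjects")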
The
final choice is where to store your backups. The most important thing is
to store them separately from the original data, so that they are
unlikely to be damaged with the originals. If your computer has multiple
hard drives, a very quick and easy regime is to store backups of one
drive onto the other: this protects you if one drive becomes faulty, but
not if your entire computer goes up in a puff of smoke.
For
greater security, another set of backup data should be stored away from
your computer, either on removable media such as USB sticks, CD-Rs and DVD-Rs, or on external Firewire/USB hard drives. It also makes more
sense to store these backups in a completely different location, so that
even if your house burns down your data remains intact. Cloud-based
online backups, such as Dropbox or Amazon S3 (Simple Storage Service),
are very handy if you have a fast connection, although uploading speeds
can be cripplingly slow compared to downloads. A much quicker and easier
alternative may be to swap backups with local friends or family: you
keep a regular copy of their backups and they keep a copy of yours.
Q. Can I output my final mix one channel at a time?
Sound Advice : Mixing
I have recently
purchased a Golden Age Project Pre 73 MkII and Comp 54 on the
recommendation of someone from the SOS forums, and I am so pleased.
I use an RME Babyface and wondered, with my limited hardware, would it
be possible to output my final mix one channel at a time through the
Comp 54? The reason I ask is that the hardware adds something that
no VST seems to be able to do. If someone knows how I could do this it
would be great. If it matters, the DAW I am using is Reaper.
Via SOS web site
SOS Technical Editor Hugh Robjohns replies:
The answer is yes, but it’s not
as straightforward as it might appear and you need to be careful.
The
basic problem is that when you’re working with a stereo mix the stereo
imaging is determined by the subtle level differences of individual
instruments in the two channels. A compressor exists to alter the level
of whatever you pass through it dynamically, depending on that signal’s level.
Imagine an extreme situation where you have
some gentle acoustic guitar in the centre of your mix image, and some
occasional heavy percussion panned hard left. If you process those two
channels with separate unlinked compressors, the right channel
compressor only sees a gentle guitar and does nothing, while the left
channel compressor will feel obliged to wind the level back every time
the mad drummer breaks out.
Listen to the two compressed channels
afterwards in stereo and the result will be a very unsettled guitarist
who shuffles rapidly over to the right every time the percussionist
breaks out (probably a wise thing to do in the real world, of course,
but not very helpful for our stereo mix).
If you
process your stereo mix one channel at a time through your single
outboard compressor, that’s exactly what will happen. The compressor
will only react to whatever it sees in its own channel during each
pass, and when you marry the two compressed recordings together again
you will find you have an unstable stereo image. The audibility of this,
and how objectionable you find it, will depend on the specific material
(the imaging and dynamics of your mix), but the problem will definitely
be there.
Stereo compressors avoid this problem
by linking the side chains of the two channels, so that whenever one
channel decides it has to reduce the gain, the other does too, and by
the same amount. In that way it maintains the correct level balance
between the two channels and so avoids any stereo image shifts.
You
can achieve the same end result if your single outboard compressor has
an external side-chain input, but sadly I don’t think the Golden Age
Project model does. If it did, what you’d need to do is create a mono
version of the stereo mix in your DAW and feed that mono track out to
the compressor’s external side-chain input, along with one of the
individual stereo mix channels (followed by the other). That way, the
compressor will be controlled only by the complete mono mix when
processing the separate left and right mix channels, so it will always
react in the same way, regardless of what is happening on an individual
channel, and there won’t be any image shifting.
That’s
no help to you with this setup, of course, but don’t give up yet, as
there is another possibility. You could take an entirely different
approach, and that’s to compress the mix in a Mid/Side format instead of
left-right. It involves a bit more work, obviously, as you’ll need to
convert your stereo track from left-right to Mid/Side, then pass each of
the new Mid and Side channels separately through the compressor, and
then convert the resulting compressed Mid/Side channels back into
left-right stereo. Using an M/S plug-in makes the task a lot easier than
fiddling around with mixer routing and grouping, and there are several
good free ones around.
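If it helps to demystify the conversion, it is nothing more than sums and differences, which is also why the plug-ins can do it losslessly. A minimal sketch of the round trip in Python (using the common M = (L+R)/2, S = (L-R)/2 convention; the NumPy/soundfile tooling and file names are assumptions):

import numpy as np
import soundfile as sf

mix, rate = sf.read("mix.wav")            # shape: (samples, 2)
left, right = mix[:, 0], mix[:, 1]

# Encode: Mid is everything common to both channels, Side carries
# only the left/right differences.
sf.write("mid.wav", (left + right) / 2, rate)
sf.write("side.wav", (left - right) / 2, rate)

# ...pass mid.wav and then side.wav through the compressor, record
# the returns, and decode the processed pair back to left/right:
mid, _ = sf.read("mid_comp.wav")
side, _ = sf.read("side_comp.wav")
decoded = np.stack([mid + side, mid - side], axis=1)
sf.write("mix_ms_compressed.wav", decoded, rate)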
The advantage of this
Mid/Side technique is that, although the Mid and Side signals are being
processed separately and independently, the resulting image shifts will
be much less obvious. The reason for this is that instead of blatant left-right shifts, they will now be variations in overall image width, and that is very much less noticeable to the average listener.
Sorry for the long-winded answer, but I hope that has pointed you in the right direction.
SOS Reviews Editor Matt Houghton adds: I agree with Hugh’s suggestion of
M/S compression. I regularly use that when I want to deploy two
otherwise unlinkable mono compressors, and there’s no reason why you
can’t process the Mid and Side components one at a time. The only issue
here will be your inability to preview what you’re doing to a stereo
source, so be careful not to overwrite your original audio files! However, I sense that it’s the effect of running through the
compressor’s transformers that you’re hoping to achieve. In that case,
just set it to unity gain and set the threshold so that the unit isn’t
compressing, and then run the signal through it. If it is standard L/R
compression you want, you could always get another Comp 54, as although
they’re mono processors they’re stereo-linkable with a single jack
cable.
In Cubase, I find that the best approach
to incorporating such outboard devices into my setup is to create an
External FX plug-in for each device, and then insert that on each
channel and print the result. In Reaper, the equivalent tool is the
excellent ReaInsert plug-in. This approach not only makes the process
less labour intensive in the long run, but means that you can drag and
drop the processor to different points in the channel’s signal chain,
should you want to.
Wednesday, January 23, 2013
Q. Does it make sense for me to mix with a mono 'grot box'?
Sound Advice : Mixing
I realise that there are advantages to monitoring on
a single ‘middly’-sounding small speaker (such as an Avantone MixCube)
from time to time while mixing, to get an idea of what the music might
sound like on typical cheap consumer playback systems. However, I mix
mainly deep house and lounge, which is quite rich in high and low
frequencies, and these are easily conveyed by the full-range
playback systems in trendy restaurants, cafes and clubs, but not by
these small monitors. Does using one while mixing, therefore, actually
make any sense for me? Also, if I did decide to get a single MixCube,
I guess the best place to put it would simply be in the middle of the
desk, but the problem is that that’s where my computer screen is! Should
I buy a higher speaker stand to go above my screen and angle the
Avantone down towards me?
Nicolas Issid via email
SOS contributor Mike Senior replies:
If your music were only ever played on larger full-range systems like those you mention, the usefulness of limited-bandwidth referencing would indeed be reduced. However, I’d personally think twice about targeting the sound too narrowly for one type of playback system, and would be inclined to prepare my music for lower-resolution playback in case it, for any reason, gets transmitted for wider consumption — on the Internet, say, or as part of a TV programme, radio advert or computer game.
That apart, though, I think you’re slightly underestimating the value of something like the Avantones, because they’re not just about the ‘middly’ frequency response. Their small-scale, single-driver, portless design makes them much more revealing as far as simple mix balance issues are concerned (ie. for deciding what level each instrument should be at) than almost any even marginally affordable full-range nearfield/midfield monitoring system. This is even more the case if you use only one such speaker, rather than a stereo pair, as you also avoid inter-speaker phasing issues. Overall, I think you’d still benefit a great deal from this kind of speaker even if you mix primarily for larger speaker systems.
As far as speaker placement is concerned, in my opinion it doesn’t really matter much where you put it, as long as it’s pointing roughly in your direction and you’re not getting acoustic reflection problems from a nearby room boundary or other hard surface. The only disadvantage of mounting a single speaker off centre is that it may temporarily skew your stereo perception to one side after you’ve been listening for a while. Not that this is actually a significant mixing problem in practice, though, because it’s very easy to work around.
Tuesday, January 22, 2013
Q. How important are microphone self-noise and SPL figures?
Sound Advice : Miking
I am interested in
the Shure SM7b mic and have been looking at its specifications, but the
Shure web site seems to be missing information for self-noise and
maximum SPL levels. I’ve heard people saying that the SM7 can handle up
to 180dB SPL! I’m curious as to whether or not that is true (probably
not) and if it is anywhere near that, I’m assuming it’s because it’s got
some kind of -30dB switch on it or something crazy like that. Can you
shed any light on this?
Via SOS web site
SOS Technical Editor Hugh Robjohns replies: The reason you can’t find those specific specifications is because the SM7 is a dynamic (moving-coil) microphone. In fact, you probably won’t find those specs for any dynamic mic from any manufacturer (other than dynamic mics with built-in buffers or gain stages), because they are largely meaningless and pointless figures.
The self-noise generated by a moving-coil microphone is only the thermal noise generated by the mic’s own output impedance, which is essentially just the DC resistance of the moving coil itself, plus that of a humbucking coil (if employed) and the output transformer (if present). This noise contribution is negligible, and will be utterly swamped by the receiving preamp’s own electronic noise.
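To put a number on ‘negligible’: the thermal (Johnson) noise of a resistance R over a bandwidth B is v = sqrt(4kTRB). Taking an illustrative 300Ω source impedance over a 20kHz audio bandwidth at room temperature gives sqrt(4 x 1.38x10^-23 x 293 x 300 x 20000), or about 0.31 microvolts, which is roughly -128dBu. That sits at or below the equivalent input noise of even very quiet preamps, which is why the preamp’s own contribution dominates in practice.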
The maximum SPL for a dynamic mic is determined mainly by the range of mechanical movement afforded to the coil, and that will be more than high enough for any conventional application. So it’s not unusual to find professional dynamic mics that are capable of over 150dB SPL (for one percent THD), albeit with rapidly increasing distortion towards the limits, and with mechanical clipping occurring when the diaphragm and/or coil hits the end stops at 170 or 180 dB SPL.
In contrast, the self-noise and maximum SPL figures are quoted for all electrostatic mics (capacitor and electret) because the impedance converter electronics built into the microphone determine the mic’s dynamic range capability, the lower limit being set by the amplifier’s self-noise, and the upper limit by the amplifier’s distortion or clipping.
Q. Can you recommend a low-cost heavy-duty mic stand?
Sound Advice : Miking
I have the usual
selection of Stagg and anonymous mic stands, which are fine most of the
time, but I now have some mics that are really pretty heavy (SE
Electronics’ Gemini III, for instance) and none of my present stands
really cut it. Of course, all mic stands are described as ‘heavy duty’,
but I’m looking for something that can hold really heavy microphones
reliably and with the minimum of hard twisting of small knobs and so
on. Of course, SE make a suitable stand, but I’m not sure I could justify
$500 on one mic stand. Can you suggest anything usable below, say, $150?
Via SOS web site
SOS Technical Editor Hugh Robjohns replies:
If you can use a mic stand
without a boom arm — so, just the vertical pole — there shouldn’t be any
problem, because even budget mic stands should be able to support the
heaviest microphone without too much trouble. The real problem comes
when trying to hang a heavy mic on a boom arm, because most ordinary mic
stands don’t have anything like a sufficient counterweight mass to
properly balance even moderate mics, let alone big, heavy ones. As
a result, the boom arm clutch has to resist almost all of the rotational
force created by the leverage of the heavy mic at the end of the boom
and, frankly, most just aren’t up to the job. The inevitable consequence
is the annoying ‘droopage’, and the more you try to tighten the clutch
to prevent it, the quicker the whole thing wears out (or breaks), and
quickly becomes droopy even when supporting light microphones!
The
correct engineering solution is to properly counterbalance the weight
of the microphone so that there is no net rotational force at the boom
clutch. That then allows the clutch to do what it was intended to do —
stop the boom arm from moving — rather than have to accommodate the
entire rotational leverage. The cheap and cheerful solution is to tape
or affix some additional weight to the end of the boom arm; you need
enough to balance your heaviest mic at the maximum boom extension you
plan to use. However, this will be ugly and may not be as safe as it
should be, and you certainly don’t want the weight to fall off onto
someone’s foot... or the mic to crash onto the floor shortly afterwards!
I know the idea of spending $500 on a mic stand seems silly, but, to be honest, I think it’s worth it for the peace of mind when you’re working with mics that cost $1500, not to mention the risk of personal-injury insurance claims! Moreover, mic stands in
this cost bracket generally live forever, because they are so well
designed and rugged, which means that the amortised investment is
actually very low.
The SE mic stand is
surprisingly stable, but it is a kind of hybrid of a reverse-engineered
Keith Monks boom arm and clutch from the 1970s and a drummer’s cymbal
stand. It does have a heavier counter-weight than most budget stands,
but it’s still not an ideal solution, to my mind.
The
most cost-effective and properly engineered stand I’ve come across to
date is the Sontronics Matrix 10.
It’s not the prettiest or most compact
stand on the planet — it’s basically a modified photography lighting
stand — but it has cogged clutches that definitely won’t slip, a very
sensible counterweight, removable wheels, and a handy drop-arm. It’s
very secure, totally reliable, and there’s nothing to break, so it will
live forever. I reviewed it in full in the August 2010 edition of Sound On Sound.
If
you want something in matt black and with a much smaller footprint,
I’ve just been reviewing the Latch Lake MicKing stands, which I have to
say are utterly brilliant. However, they are also pretty expensive,
because they are very well engineered, and imported from the US. The
review is soon to appear in Sound On Sound, but these stands have
a sensibly massive counterweight on the boom arm, a very heavy, but
compact, base (with transport wheels to make it easy to move the stand
to a storage area), a nice drop-arm system, and really ingenious lever
locks and clutches that are adjustable for both tension and ease of use.
These are very solid and impressive stands and well worth the
investment, in my view.
Monday, January 21, 2013
Q. How can I make using headphones less fatiguing?
Sound Advice : Mixing
I have been making music for years now, and although
I have a set of Genelec 8040s that I use during the day (when I’m home),
I have been using a set of Audio-Technica M50 headphones for writing at
night, when I usually have the ideas and desire to write, but am unable
to, due to neighbours and a sleeping wife. However, lately I have been
unable to use the cans, as I’ve been experiencing discomfort and what I believe is the onset or warning signs of tinnitus. It’s been
a nightmare trying to adapt to not using cans at night, and I find it
almost impossible to get anything other than sequencing done at this low
volume! I’m wondering whether there are any miracle headphones or bits
of kit that would minimise hearing damage or discomfort while still
being (relatively) accurate and enjoyable to use.
Via SOS web site
SOS Technical Editor Hugh Robjohns replies:
Firstly, regarding the
tinnitus: it’s very common, often temporary and may be nothing to worry
about. It can be brought on by something as simple as drinking too much
coffee or suffering a mild ear infection, but don’t ignore or neglect
it. Go and see a medical professional and get checked out! If there is
a problem, early intervention could make all the difference.
I
don’t think there are any ‘miracle’ solutions in headphones. Basically,
it comes down to self-control in establishing the most appropriate
maximum level for those particular headphones and sticking to it. The
simplest solution is to put a mark on the headphone volume control and
exercise enough self-discipline to never turn it up past that. If you
reach a stage in your mixing when you’re finding that maximum level is
too quiet, take a break. Give the ears a little time to relax and reset,
and then start again.
More volume is not the
answer, though. It might seem more exciting and involving, but it
doesn’t really help to make better mixes — in fact, it usually makes
them worse! The reason is that greater volume allows you to hear
through a bad mix more easily, and poor balances aren’t perceived as
such. Working at more moderate levels — the kind of volume that most end
listeners will use — encourages a far more critical approach to the
mix, as poor balances sound obviously awful! Mixing becomes much harder,
certainly, but also much more accurate and with far better end results.
This is true of both speakers and headphones.
By
all means turn the volume up if you need to check low-level background
noises and so on, but do so only briefly. Try to mix at a modest level,
and keep that level fixed. If you continually change your monitoring
level, your mix will change continually too!
However,
the fatigue you’re experiencing may involve more than just sheer
volume. The M50s are pretty good for the money, but I think you might
find it easier to work with a pair of good open-backed headphones that
are more revealing. You might find it helpful to read the comments and
suggestions for different models in a headphone comparison article we
ran in the January 2010 issue (www.soundonsound.com/sos/jan10/articles/studioheadphones.htm).
If possible, try different models before buying, to make sure the
weight, headband pressure and size of the ear cups suit your head and
are comfortable. Open-back headphones do ‘leak’ more sound than closed
headphones, though, and that may be an issue for your wife!
The
M50, being a closed-back design, tends to be less revealing of
mid-range detail than a good open-backed headphone. A consequence of
this is a natural tendency to keep cranking the level to try to hear
further into the mix, yet more volume still doesn't quite reveal what
you want to hear! Headphones that exert a strong pressure on the sides
of the head can also add to the sense of physical fatigue, and the
sealed nature of the earpieces quickly makes your ears hot and
uncomfortable, which also doesn’t help.
I’d
recommend trying some good open-back headphones, like the AKG K702s,
Sennheiser HD650s or the Beyerdynamic DT880 Pros. They are expensive,
but I think you’ll find it far easier to mix with them and you’ll be
much less tempted to wind the level up, although it is still very
important to take frequent breaks to allow your perception of volume to
reset! Headphones of this calibre provide a top-notch monitoring system
that will last for decades if well looked after, and you’ll probably
hear all sorts of details that your Genelecs don’t reveal, too.
Obviously, though, there is no physical
sensation from the low frequencies when using headphones, as there is
when using speakers, and that can also be a factor in the continual
desire to turn the level up, especially if you’re producing music that
demands strong bass content. The only way around that is self-discipline
and learning to trust your headphones.
As
a last resort, if you don’t think you have the self-discipline to leave
the volume control alone, it might be wise to consider investing in
a suitably calibrated headphone limiter. Again, it’s an expensive
option, but I’d suggest that it’s well worth it to protect your
priceless ears! There’s some useful background information here: www.tonywoolf.co.uk/hp-limiters.htm.
Also, Canford Audio offer various types of headphone level limiter that
can be installed inside headphones or wired into the cable. These are
based on a clever BBC design, which is now mandatory within the
corporation to ensure that BBC staff don’t expose themselves to
excessive SPLs through their phones, and it works extremely well. You
can read more about it here: www.canford.co.uk/technical/PDFs/EarphoneLimiters.pdf.
Q. How can I use a figure‑of‑eight mic with the mid/side miking setup?
Sound Advice : Miking
I have two
figure‑of‑eight Golden Age ribbon mics that I want to use as overheads
for drum recording. I’ve read about Blumlein pairs and will try that,
but I also wondered if I could try Mid/Side recording techniques. Can
you use a figure‑of‑eight mic for the centre mic in that setup, and, if
so, how do I get rid of the ‘rear’ sound from the centre mic? While I’m
at it, can I ask if you know of any other neat recording tricks for
using two figure‑of‑eight mics together?
Connie Buck via email
SOS
Technical Editor Hugh Robjohns replies:
Yes, you can certainly use the
M/S approach if you want to, and that does provide the potentially
useful advantage of being able to adjust the stereo recording angle
remotely to set the required image width.
However, the left‑right decoded signal from an M/S
array comprising two figure-of-eights is essentially two
figure-of-eights in an X-Y format: basically the same Blumlein array you
are already familiar with. Altering the ratio of Mid and Side changes
the equivalent mutual angle of the decoded X-Y mics, and distorts their
polar patterns slightly. However, for matched Mid and Side levels, M/S
with a pair of figure-of-eights decodes as a perfect Blumlein array.
Indeed, this is precisely what Blumlein discovered and experimented with
80 years ago!
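If it helps to see the maths, here's a minimal Python sketch of the standard sum-and-difference decode (the function name, scaling convention and test signals are my own, purely for illustration). Scaling the Side signal before decoding is what lets you adjust the image width remotely:

import numpy as np

def ms_decode(mid, side, width=1.0):
    # Standard M/S sum-and-difference decode.
    # width is the Side gain: 1.0 decodes matched M and S signals to
    # the equivalent crossed figure-of-eight (Blumlein) pair;
    # less than 1.0 narrows the image, more than 1.0 widens it.
    s = side * width
    left = (mid + s) / np.sqrt(2)   # the 1/sqrt(2) scaling is just one
    right = (mid - s) / np.sqrt(2)  # common convention to keep levels sane
    return left, right

# Quick demonstration with two synthetic one-second signals at 44.1kHz
sr = 44100
t = np.arange(sr) / sr
mid = np.sin(2 * np.pi * 440 * t)    # forward-facing figure-of-eight
side = np.sin(2 * np.pi * 220 * t)   # sideways-facing figure-of-eight
left, right = ms_decode(mid, side)   # equivalent Blumlein X-Y pair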
If you need to ‘get rid’ of the
rear pickup of the Mid mic, you will have to place an acoustic absorber
behind the mic; for example, the infamous SOS duvet, foam absorbers, or
even some kind of reflection filter: anything to capture sounds that
would otherwise head back into that rear pickup zone.
As
for other neat tricks with dual figure-of-eights, there is a technique
called the Faulkner Array that uses two figure‑of‑eight mics spaced
about eight inches apart and facing forward. The idea is to capture
a normal stereo sound‑stage in much the same way as an ORTF arrangement,
but with significantly reduced sensitivity to reverberant sounds from
the sides and above. It was a technique devised to deal with the
acoustics of a church that had nasty side‑wall slapback issues.
Another
situation in which I often use two figure-of-eights is capturing
a singing guitarist. By careful placement and angling of the mics, it’s
possible to arrange their deep side nulls to provide a significant
amount of rejection of the unwanted source: the guitar mic rejects much
of the voice, and the voice mic rejects much of the guitar. If you do
this carefully (and assuming the guitarist can sit still and not sway
about!), you can achieve 20dB of separation or more, which is a major
improvement on the usual dual-cardioid approach!
You can read more about this at www.soundonsound.com/sos/1996_articles/dec96/singingguitars.htm.
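To get a feel for how much rejection those side nulls can offer, an ideal figure-of-eight's sensitivity follows a cosine law, so the off-axis attenuation is easy to estimate. A quick idealised Python sketch (real mics, room reflections and spill will be considerably less tidy than this suggests):

import math

def fig8_rejection_db(angle_degrees):
    # Attenuation of an ideal figure-of-eight mic, relative to on-axis,
    # for a source arriving at the given angle. The pattern follows
    # cos(theta), with a perfect null at 90 degrees.
    gain = abs(math.cos(math.radians(angle_degrees)))
    if gain < 1e-9:
        return float('-inf')  # the theoretical null
    return 20 * math.log10(gain)

# A voice arriving 85 degrees off the guitar mic's axis:
print(f"{fig8_rejection_db(85):.1f} dB")   # about -21.2 dB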
Saturday, January 19, 2013
Q. How do I record a double bass alongside other instruments?
Having been a bass
player for years, I’ve recently come into possession of an acoustic
double bass. I seem to be getting a decent enough sound out of it that
I think I’m ready to use it with my band. We’re going to be recording
soon, but will all be playing together in the studio. How can I record
the bass alongside other musicians, reducing as much spill as possible?
Bradley Culshaw via email
SOS
Technical Editor Hugh Robjohns replies:
The obvious ‘modern’ solution
is to fit a ‘bug’ — a bridge pickup or an internal mic — to the bass,
which will provide a pretty high degree of separation. The sound
character might not be entirely ‘natural’, but a little EQ should deal
with that. The ‘vintage’ alternative is to use acoustic screens or gobos
in the studio and thoughtful instrument and mic layout, with the aim of
minimising spill and helping to provide some sound shadowing for mics,
especially the double-bass mic, thus reducing the spill and providing
a workable degree of separation from the other instruments playing in
the studio. This is a well‑proven historic technique, and the remaining
spill generally helps to gel the mix together and provide a great ‘live’
character to the mix. Of course, such spill makes it almost impossible
to overdub replacement parts, but that’s what practice and an unlimited
number of takes are for!
Friday, January 18, 2013
Q. How much power does my stage system need?
Sound Advice : Mixing
I’m trying to work
out how much power a PA system I work with draws, and I also need to
come up with a sensible ‘plug‑it‑all‑in’ type of procedure. (I’ve read
the Sound On Sound December ‘05 article ‘PA Basics’.) It’s mainly small
venues we play in, such as function rooms and town halls. Looking at the
manual for my Mackie SA1530z, I’m kind of baffled. It says:
Line Input Power Europe: 230V, 50Hz
Recommended Amperage Service: 16 amps
Is
this saying that a 16‑amp circuit is recommended? The spec sheet
doesn’t seem to list how much current the box will draw. Also, it’s
often stated that FOH, mixer and racks, lights and backline should be
powered from their own separate sockets (three in total). Is it
acceptable, therefore, to power from both sides of a double socket plus
another adjacent socket, all fed from the same ring main?
Via SOS web site
SOS
Technical Editor Hugh Robjohns replies:
The 16‑amp thing looks like
a generic suggestion to me. In the UK, standard domestic outlets are
nominally 13A anyway!
Essentially, what they are
saying is that it needs to be plugged into a sensible supply. The
typical average current will be a few amps at most, but the initial
inrush current on switch‑on will be considerably higher, so don’t try to
turn everything on in one go!
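If you want to sanity-check the arithmetic yourself, average current is simply power divided by supply voltage (I = P / V). A tiny Python sketch with made-up wattage figures, purely to show the sums:

# Steady-state mains current: I = P / V
# (switch-on inrush will be considerably higher than this)
MAINS_VOLTAGE = 230  # UK/Europe nominal

loads_watts = {                      # hypothetical example figures
    'powered speakers (pair)': 1000,
    'mixer and FOH rack': 150,
    'backline amps': 800,
}

total_watts = sum(loads_watts.values())
amps = total_watts / MAINS_VOLTAGE
print(f"{total_watts} W -> {amps:.1f} A at {MAINS_VOLTAGE} V")
# 1950 W -> 8.5 A: comfortably inside a UK 13A socket rating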
If you need to
know the real current and power‑consumption figures, invest in something
like an energy monitor, such as the one I’ve found here: www.maplin.co.uk/plug-in-mains-power-and-energy-monitor-38343.
This one is marketed by Maplin in the UK, but I’m sure you’ll find
similar devices from all the usual suppliers. You simply plug in the
device you want to know about, and the display will give you the current
and power being consumed, as well as the supply voltage and frequency.
It’s a really handy device and I use mine a lot when testing and
checking equipment.
Regarding the use of wall
sockets, assuming that you’re working with a PA and backline system that
is consuming less than about 4kW in total (which would be most systems
for a modest‑sized venue), use a double socket to run all the audio
equipment. That minimises any problems with ground loops.
Run all the backline from one side of the
double outlet, and all the PA (FOH, racks, PA and monitors, for example)
from the other side. Supplying the two systems from their own RCDs
(Residual Current Devices) is essential too, particularly from the point
of view of preventing a backline fault from taking out the PA. If the
musicians want to use their own RCDs for their gear, that’s fine too!
Running
the FOH on a long mains extension from the PA power‑supply socket (or
distribution board) continues the theme of ‘star grounding’ and will
minimise the potential for ground loops in the PA system. Run lighting
from a different socket (or sockets) and try to keep the dimmer racks
and cabling well away from the audio cables.
Q. Are there any panning rules for maintaining mono compatibility?
Sound Advice : Mixing
With regard to stereo
image width, is there typically a ‘cap’ you would place on tracks to
maintain a good mono sound? Perhaps there’s some kind of relatively
hard‑and‑fast ground rule (assuming a typical sort of track layout),
such as ‘never go beyond 50 percent either way’?
Via SOS web site
SOS
contributor Mike Senior replies:
There are two basic issues regarding
mono compatibility. The first is that panning any mono track off‑centre
reduces its level in the mono balance by a maximum of around 3dB when
panning hard left or right. From this perspective, the only ground rule
I’d apply there is to make sure that the balance continues to function
correctly in mono. If your main guitar power‑riff is panned hard left,
it may struggle to fulfil its musical function in mono, simply by
virtue of losing a lot of ground against things like the bass, kick,
snare and lead vocal (which all typically reside close to the centre).
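For reference, that 3dB figure falls out of the constant-power pan law most DAWs use: channel gains follow a sine/cosine curve, so a centred source sits 3dB down in each channel but sums back up in mono, while a hard-panned source occupies only one channel. A minimal Python sketch, assuming a -3dB-centre sin/cos law (other pan laws will give slightly different numbers):

import math

def constant_power_pan(pan):
    # Constant-power (-3dB centre) pan law.
    # pan: -1.0 = hard left, 0.0 = centre, +1.0 = hard right.
    theta = (pan + 1) * math.pi / 4           # map [-1, 1] onto [0, pi/2]
    return math.cos(theta), math.sin(theta)   # (left gain, right gain)

for pan in (0.0, 0.5, 1.0):
    left, right = constant_power_pan(pan)
    mono = left + right   # a mono fold-down just sums the channels
    print(f"pan {pan:+.1f}: mono sum {20 * math.log10(mono):+.1f} dB")
# pan +0.0: mono sum +3.0 dB
# pan +0.5: mono sum +2.3 dB
# pan +1.0: mono sum +0.0 dB   i.e. ~3dB down on a centred source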
The
second issue to be aware of is that any stereo recording or stereo
effect return in your mix may contain elements in one channel that are
out of phase or polarity-inverted compared to the other channel. These
can phase‑cancel when summed to mono, and although this might simply
result in a subjective level drop (as in the case of some M/S‑based
widening effects), typically the cancellation is frequency‑selective in
some way, so the tone of affected parts suffers as well. Stereo
drum-overhead mics and stereo piano recordings commonly fall foul of
this to some extent, on account of the widespread use of spaced‑pair
recording techniques on these instruments, but almost any multi‑miked
part can potentially come a cropper if you pan the individual mics
independently in the stereo field.
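To see why the cancellation is frequency-selective, consider a spaced pair: an off-centre source reaches the far mic slightly late, and summing the two channels to mono then comb-filters the sound. A small Python sketch, using a made-up half-millisecond delay (roughly 17cm of extra path) purely to show the mechanism:

import math

delay_s = 0.0005   # hypothetical extra arrival time at the far mic

for freq in (250, 1000, 2000, 3000):
    phase = 2 * math.pi * freq * delay_s
    # Mono sum of two equal-level sinusoids offset by this phase:
    # relative gain is |cos(phase/2)|, from 1.0 (no loss) to 0.0 (total null)
    gain = abs(math.cos(phase / 2))
    db = 20 * math.log10(gain) if gain > 1e-9 else float('-inf')
    print(f"{freq:5d} Hz: {db:+.1f} dB")
#   250 Hz: -0.7 dB
#  1000 Hz: -inf dB   (first null: the delay equals half a wavelength)
#  2000 Hz: +0.0 dB   (back in phase)
#  3000 Hz: -inf dB   (next null)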
The cast‑iron remedy to uncertainty here is to
make a point of comparing your mix against commercial productions in
mono. Conventions on stereo imaging vary a lot between styles, and even
between engineers, so it’s tricky to generalise with any validity.
However, what may help you is to get hold of a stereo vectorscope
display for your DAW, such as Flux Audio’s fantastic freeware Stereo
Tool plug‑in. Once you get used to how things look on there, it can tip
you off to impending mono phase‑cancellation problems, especially if
you’re working on headphones, which don’t give the same funny ‘outside
the speakers’ stereo effect that’s usually a clear warning sign
on nearfields.
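If you'd rather check numerically than visually, the figure a phase-correlation meter shows is easy to compute offline: +1 means the channels are essentially identical (safe in mono), and anything heading towards -1 warns of heavy cancellation. A minimal Python sketch (the function is my own, not taken from any particular plug-in):

import numpy as np

def stereo_correlation(left, right):
    # Phase-correlation figure as shown on studio meters:
    # +1 = identical channels (fully mono-compatible),
    #  0 = unrelated, -1 = identical but polarity-inverted
    # (near-total cancellation when summed to mono).
    denom = np.sqrt(np.sum(left**2) * np.sum(right**2))
    if denom == 0:
        return 0.0
    return float(np.sum(left * right) / denom)

# Self-test with synthetic signals
t = np.arange(44100) / 44100
sig = np.sin(2 * np.pi * 440 * t)
print(stereo_correlation(sig, sig))    # ~ +1.0: safe in mono
print(stereo_correlation(sig, -sig))   # ~ -1.0: will cancel in mono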
All that said, there is one
little panning‑width rule of thumb that I do tend to follow personally,
but this isn’t as much related to mono compatibility as it is headphone
listening. When you pan something hard to either side in headphones, it
gives the impression that it’s right by that ear, because there’s no
crosstalk between that earcup and the opposite ear. I've always found
this a bit distracting, and in my experience it can make it tricky to
blend the sounds in your mix convincingly. For this reason
I rarely pan mono sources beyond about 85 percent either way, because
this makes them a little less dislocated in headphones and actually
affects the stereo presentation very little, especially if you’re
feeding a selection of stereo effect returns into your mix anyway, which
will still guarantee that the stereo picture is painted right out to
the edges. Bear in mind, though, that this is very much an issue of
personal preference, and there are lots of very famous engineers who
actively prefer the extreme‑panned presentation. The only way to make up
your own mind is, again, to compare your mix to your favourite records
on headphones and decide which sounds best to you.