Avoid all the low-frequency pitfalls and learn to
achieve the perfect foundation for any mix, with our bass-mixing
masterclass...
How do I mix bass?
It’s a simple question, but compare a dozen records picked at random and
you’ll see that there’s no simple answer. When it comes to instruments,
‘bass’ can mean (at the very least) guitar, upright, drum or synth.
Each can perform many musical roles, and every genre has different
conventions for low-end sonics. In this article, I’ll help you make
sense of all that, whatever instruments or genre you’re working with.
Cancellation Insurance
A
bass ‘sound’ is often a combination of several similar signals: for
example, electric bass can be multi-miked; a DI signal may be captured;
and you might introduce MIDI-triggered layers to fill things out
further. Such shenanigans give you tremendous power to refine your
sound, but also enough rope to hang yourself, because the layers don’t
always reinforce each other when mixed. In fact, they can cancel
gruesomely at certain frequencies if there are polarity or phase
mismatches — so you need a clear understanding of phase and polarity!
There’s an in-depth article on the SOS web site (www.soundonsound.com/sos/apr08/articles/phasedemystified.htm) but I’ll run through the basics.
Phase
differences are caused by one signal being delayed relative to another;
and polarity differences are caused by one waveform being inverted
relative to another. If you’re unlucky, the phase/polarity relationship
between a pair of similar signals can result in tonal carnage when
they’re combined, and you must tackle such issues as early as possible.
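If you want to see the difference concretely, here's a minimal Python/numpy sketch (mine, not from any plug-in) contrasting the two: a polarity flip cancels every frequency identically, while a fixed delay cancels some frequencies and reinforces others.

```python
import numpy as np

sr = 44100                          # sample rate (Hz)
t = np.arange(sr) / sr              # one second of samples
bass = np.sin(2 * np.pi * 55 * t)   # a 55Hz sine standing in for a bass note

# Polarity difference: the whole waveform is flipped, so the pair
# cancels completely at every frequency when summed.
print(np.max(np.abs(bass + -bass)))       # 0.0: total cancellation

# Phase difference: a fixed delay shifts each frequency by a different
# amount. Half a cycle at 55Hz nearly nulls this note...
delayed = np.roll(bass, int(sr / (2 * 55)))
print(np.max(np.abs(bass + delayed)))     # close to zero
# ...but the same delay would reinforce a note an octave up (110Hz),
# which is why phase problems hit different pitches differently.
```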
With
multi-mic/DI recordings, a good way to start is to zoom in on their
waveforms and try to match them up as closely as possible, so that phase
and polarity differences are minimised and you get the strongest
reinforcement. Sort out any obviously polarity-inverted waveform first —
by either processing the audio region or hitting that channel’s
polarity-inversion switch — and drag the audio regions to line up
better. If judging things visually is tricky, hunt for transients, which
tend to be more easily identifiable.
Now to
start refining things by ear. Put the first two tracks out of polarity
with each other, fade them up to equal levels, and adjust the timing
offset between them to achieve the strongest cancellation. Returning to
a matched polarity will then give you the fullest composite sound.
Repeat this process, adjusting the timing of each new layer in relation
to those you’ve phase-matched.
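If you'd rather let software suggest a starting offset, cross-correlation does numerically what the polarity-flip trick does by ear: the shift giving the strongest reinforcement with matched polarity is the same shift giving the deepest cancellation with one track flipped. A rough sketch, assuming `di` and `mic` are equal-length numpy arrays of the same performance (names hypothetical):

```python
import numpy as np

def best_offset(reference, layer, max_shift=200):
    """Return the sample shift of `layer` that best reinforces
    `reference`, searched over +/- max_shift samples."""
    shifts = range(-max_shift, max_shift + 1)
    # np.roll wraps around the ends, which is fine for a small search
    # window on a long file, but trim leading silence for best results.
    scores = [np.dot(reference, np.roll(layer, s)) for s in shifts]
    return shifts[int(np.argmax(scores))]

# shift = best_offset(di, mic)   # drag the mic region by this many samples
```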
It’s by no means
‘wrong’ to deliberately mismatch polarity and phase settings to
radically transform what was captured (this is art, after all) but
creative phase-cancellation is something of a lottery, and there’s
a tendency for it to mess with the relative balance of different note
pitches, thus introducing musical irregularities.
Phase Me Baby, Right Round...
A specialist ‘phase rotation’ device allows you to
delay different frequencies by different amounts (for links to
affordable phase-rotation plug-ins, go to www.cambridge-mt.com/ms-ch8.htm#links-phase).

Phase rotation won’t change a channel’s frequency response in
isolation, but it will change the way one layer of a multi-channel sound
interacts with others.
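Under the hood, a basic phase rotator shifts every frequency by the same angle. Here's a minimal offline approximation (my sketch, not any particular plug-in's algorithm) using scipy's analytic-signal function; real-time devices usually chain all-pass filters instead:

```python
import numpy as np
from scipy.signal import hilbert

def phase_rotate(x, degrees):
    """Rotate every frequency component of x by the same angle.
    The magnitude spectrum is untouched, so the track sounds the same
    soloed, but sums differently against its layers."""
    analytic = hilbert(x)           # x + j * (Hilbert transform of x)
    return np.real(analytic * np.exp(1j * np.deg2rad(degrees)))

# rotated = phase_rotate(amp_mic, 90.0)   # 'amp_mic' is hypothetical
```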
I find it more
time-efficient to grapple with polarity and timing adjustments before
faffing with phase-rotation, and there’s no point in trying to finesse
exact phase relationships if they don’t stay consistent (as in the case
of most multi-miked acoustic bass parts, where instrument movements will
alter the relative path-lengths to the mics, and hence the
time-offset). But I do use phase rotation a lot when mixing processed
and unprocessed versions of the same bass sound — something called
‘parallel processing’.
Most DAW systems
auto-compensate for a plug-in’s processing latency, but some plug-ins
(equalizers and amp emulators in particular) generate additional
time/phase shifts, and a phase rotator or simple delay line can help to
compensate for this.
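For the simple delay-line case, compensation can be as crude as padding the unprocessed parallel path by the latency the plug-in reports. A throwaway sketch, assuming you know that latency in samples:

```python
import numpy as np

def match_latency(dry, latency_samples):
    """Delay the dry parallel path so it lines up with a processed
    path that arrives `latency_samples` late."""
    return np.concatenate([np.zeros(latency_samples), dry])[:len(dry)]
```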
There may also be hidden
phase gremlins between the left and right channels of stereo bass-synth
patches, which you’ll only hear when the channels are mixed to mono. The
worst-case scenario is that the low frequencies will cancel badly, and
won’t make it out of club and PA systems, or single-subwoofer home/car
systems. If the phase mismatch is static, adjusting the polarity,
timing, or phase response of one channel may help, but if the bass is
seriously flaky in mono, you might as well filter it out and layer in
a mono sub-bass synth.
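You can spot this kind of trouble before a club system does: fold the stereo patch to mono and compare low-frequency levels. A rough metering sketch, with the 100Hz boundary and the filter slope as arbitrary choices of mine:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def mono_lf_loss_db(left, right, sr, cutoff=100.0):
    """How much sub-`cutoff` energy disappears when a stereo pair is
    summed to mono. Near 0dB is safe; a big figure means LF phase trouble."""
    sos = butter(4, cutoff, btype='lowpass', fs=sr, output='sos')
    lf_l, lf_r = sosfilt(sos, left), sosfilt(sos, right)
    stereo_rms = np.sqrt(0.5 * (np.mean(lf_l**2) + np.mean(lf_r**2)))
    mono_rms = np.sqrt(np.mean((0.5 * (lf_l + lf_r))**2))
    return 20 * np.log10(stereo_rms / (mono_rms + 1e-12))
```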
EQ: The First Two Octaves
The
20-100Hz frequency region presents probably the most difficult
challenge, as it includes the fundamental frequency of most
acoustic/electric bass notes, and maybe a harmonic or two besides for
the most seismic of synths. Studio monitoring has a lot to answer for
here (see the ‘Bass Under Pressure’ box), but it’s also a question of EQ
technique.
Be cautious with low-shelving boosts
if your monitoring system (including your room as well as your
speakers) struggles to convey information below 40-50Hz. Lots of rubbish
like traffic rumble and mechanical thuds can be lurking at the
spectrum’s low extremes, and you don’t want to boost this. If you must
apply a shelving boost, also use a 20-30Hz high-pass filter for safety.
LF shelving filters also continue acting, to some degree, well beyond
their specified frequency, so if you find you’ve collected excess low
mid-range baggage while trying to boost the true low end, a compensatory
peaking cut at 200-400Hz may be in order.
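That safety filter is trivial to prototype. Here's a sketch of the 20-30Hz high-pass using a stock Butterworth design (the order, and hence the slope, is my assumption; use whatever your EQ offers):

```python
from scipy.signal import butter, sosfilt

def safety_highpass(x, sr, cutoff=25.0, order=2):
    """Roll off traffic rumble and mechanical thuds below the bass's
    useful range, so a low-shelf boost can't drag them up with it."""
    sos = butter(order, cutoff, btype='highpass', fs=sr, output='sos')
    return sosfilt(sos, x)
```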
Beyond
broad-brush decisions, the most common job is compensating for
unhelpful resonances. Acoustic bass tracks always seem to feature one or
two fundamentals that boom out awkwardly, but room resonances can also
afflict miked amp recordings, aided and abetted by the cab’s resonant
structure. Even the recording mic can play a role, especially if it’s
one with a frequency response heavily tailored to rock kick-drum sounds.
The
simplest remedy is to deploy well-targeted narrow-band peaking cuts.
Find a pitch that consistently booms undesirably, and loop
a representative note. Then sweep around with a narrow peaking filter in
the sub-100Hz region to see if you can bring the errant frequency back
into a better balance. Boosting with the filter first can assist with
finding the right frequency, as can a high-resolution spectrum analyser.
A Q value of eight is a reasonable starting point, but be prepared to
adjust that by ear: some resonances may affect several adjacent pitches,
requiring a wider bandwidth, but otherwise, try to increase the Q value
as much as you can (without making the cut ineffective!) to avoid
messing with the spectral balance of other notes.
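For the curious, the narrow peaking cut itself is a standard biquad. Here's a sketch built from the widely used RBJ 'Audio EQ Cookbook' formulas, notching a hypothetically boomy open low E (41.2Hz) at the suggested starting Q of eight:

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq(x, sr, freq, gain_db, q):
    """RBJ-cookbook peaking filter: a negative gain_db gives a cut."""
    a_lin = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * freq / sr
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return lfilter(b / a[0], a / a[0], x)

# tamed = peaking_eq(bass_track, 44100, freq=41.2, gain_db=-6.0, q=8)
```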
Low-end Interactions
No matter how solid your subs are in isolation, they
won’t do you much good if the rest of your arrangement clouds them over,
or if they interfere with the low end of other important tracks. For
a start, if there’s more than one bass part (perhaps a bass guitar
layered with a synth bass), I’d usually choose only one as the main
low-end source, and high-pass filter the others around 100Hz, to avoid
insidious phase-cancellation nasties between their long-waveform LF
components, which would be pretty much unfixable with mix processing.
The
low-end level modulation inherent in some detuned multi-oscillator
synth patches is similarly undesirable if you want an absolutely solid
low end, so if you can’t switch off the patch’s detune directly, I’d
suggest filtering off the synth’s lower octaves and replacing them with
a more reliable static sub-bass synth.
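The replacement sub needn't be fancy: a plain sine per note, with short fades to avoid clicks, is often all that's required. A sketch (the note choice and level are placeholders):

```python
import numpy as np

def sub_sine(midi_note, duration, sr, level_db=-12.0):
    """A static sine-wave sub-bass note: no detune, no level wobble."""
    freq = 440.0 * 2 ** ((midi_note - 69) / 12.0)   # MIDI note to Hz
    t = np.arange(int(duration * sr)) / sr
    fade = np.minimum(1.0, np.minimum(t, duration - t) / 0.01)  # 10ms ramps
    return 10 ** (level_db / 20.0) * np.sin(2 * np.pi * freq * t) * fade

# sub = sub_sine(28, 0.5, 44100)   # MIDI note 28 = E1, about 41.2Hz
```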
With
multi-mic or ‘mic + DI’ recordings, you’ll often find that one signal
provides a clearer low end than the other(s), and high-pass filtering
can again help add focus and definition to the final product. The
subjective timbre of the combined sound is heavily dependent on the
mid-range, so as long as you don’t move your filtering too far above
100Hz, you shouldn’t need to worry.
High-pass
filtering is also handy for removing low-end junk from other instruments
in your arrangement, to help the low end of your bass part pop through
more cleanly. Full-range keyboard instruments such as synths, pianos and
organs warrant special attention, as may orchestral overdubs,
found-sound snippets or sampled mix loops, any of which could conceal
a lot of unwanted rumble. Doing this has an extra benefit if you’re
working under less-than-ideal monitoring conditions: if you dramatically
undercook your mix’s overall LF levels, it’s then easier to correct
using mastering processes without dredging up a bunch of underlying
sludge at the same time.
Sub Warfare
The
most critical sub-100Hz conflict in modern mixes is that between bass
and kick drum: their low frequencies are normally responsible for the
lion’s share of the mix bus’s output level, and therefore present the
primary headroom bottleneck at mixdown and mastering. The engineer’s
task is to divide the available headroom appropriately between these two
main LF sources.
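A simple way to see who's currently winning that fight is to meter each stem's sub-100Hz content in isolation. A rough sketch (the stem names and the 100Hz split point are my assumptions):

```python
import numpy as np
from scipy.signal import butter, sosfilt

def lf_rms_db(x, sr, cutoff=100.0):
    """RMS level of a signal's content below `cutoff`, in dB."""
    sos = butter(4, cutoff, btype='lowpass', fs=sr, output='sos')
    lf = sosfilt(sos, x)
    return 20 * np.log10(np.sqrt(np.mean(lf ** 2)) + 1e-12)

# print(lf_rms_db(kick, 44100), lf_rms_db(bass, 44100))
```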
If your bass line needs to
relieve people of their fillings (think Nero’s ‘Guilt’ or Pendulum’s
‘Watercolour’), you’re unlikely to have the headroom to put much real
low-end on the kick-drum channel: you’ll have to move up into the
100-200Hz zone to salvage any beef. Alternatively, if your kick’s
threatening to wake Godzilla (as on Rihanna’s ‘Umbrella’ or The Pussycat
Dolls’ ‘When I Grow Up’), you’ll have to be sparing with your bass
channel’s super-low frequencies.