
Wednesday, January 22, 2014

EQ: A New Perspective

Exploration

Technique: Effects / Processing

You might think that EQ is simply a tone control, but as PAUL WHITE explains, it's really a powerful mind-manipulating tool inextricably linked to the survival of the species!


When the term 'EQ' (equalisation) is mentioned, most people think of tone controls that make things sound brighter, more punchy, warmer, bassier and so forth. In this short article, however, I'm going to try a fresh approach to what EQ does, and how it can be used more effectively.

To start at the beginning, we really should ask ourselves why we need EQ at all. Why is it sometimes a good idea to change the tone of a sound? In the early days of EQ, tone control circuits were developed to help compensate for technical inaccuracies elsewhere in the recording or broadcast chain, obvious examples being microphone coloration and room acoustics. EQ could also be used to change the subjective level of a single instrument in a complete mix -- for in those days, EQ was thought of as being a volume control that worked over just one part of the audio spectrum. Nowadays, even home recording equipment comes with impressive technical specifications. There may be less need to use EQ to paper over the cracks in the technology, and yet ironically, EQ seems to be more in demand than ever. Why?

Over the past couple of decades, EQ has moved further away from the corrective domain, and fallen in with the effects boxes as a creative effect in its own right. Instead of simple bass and treble 'tone' controls, even the most basic recording mixers now offer mid-band control, often with a variable frequency or sweep function, while in more sophisticated systems, there's parametric EQ, where frequency, bandwidth and degree of cut/boost are all adjustable. But aside from explaining the origins of EQ, this still doesn't address the real reasons why we want to make things sound brighter, bassier, or whatever.


AN EAR TO THE GROUND


My contention is that EQ isn't so much about tone as it is about psychoacoustics -- the way various nuances of sound affect our perception of the world around us. And, like so many areas of psychology, the roots of psychoacoustics undoubtedly date back to the days when survival was more important than setting up a really hot mix. Nature has its own, built-in EQ system in the form of distance. Low frequencies propagate slightly faster than high frequencies, so the further away the sound source is, the greater the time lag between the fundamental pitch of a sound and its higher harmonics. This doesn't in itself change the spectral content of the sound, but the change in phase relationships of the various harmonics does cause us to perceive the sound as being more distant -- and in terms of survival, less demanding of immediate attention.

The other thing that happens when a sound has to travel a long way is that the high frequencies are absorbed by frictional losses within the air itself, and the higher the frequency, the greater the absorption. This does affect the spectral content of the sound, and the further away it is, the less bright it appears to be. Again, 'less bright' equates to 'more distant,' especially when closer sounds are being registered at the same time. In real life, this would make us take less notice of the howling wolves in the distant forest than the sabre-toothed tiger dribbling at our feet! Putting it concisely, naturally created EQ is a way of making us pay attention.
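Putting rough numbers on this: the sketch below models air absorption as an attenuation that grows linearly with distance and roughly with the square of frequency. The coefficient is an illustrative round number (real absorption also depends strongly on temperature and humidity), but it shows why distant sounds arrive dull while close ones keep their top end.

```python
import math

def air_absorption_db(freq_hz, distance_m):
    """Toy model of atmospheric absorption: attenuation in dB grows
    linearly with distance and roughly with frequency squared.
    The 1e-9 coefficient is an assumed, illustrative value."""
    alpha_db_per_m = 1e-9 * freq_hz ** 2
    return alpha_db_per_m * distance_m

# A 10 kHz harmonic loses far more energy over 200 m than a 100 Hz fundamental:
for freq in (100, 1_000, 10_000):
    print(f"{freq:>6} Hz: {air_absorption_db(freq, 200):6.2f} dB lost over 200 m")
```

On these assumed numbers, 200 metres of air costs a 10kHz component about 20dB but a 100Hz component almost nothing -- exactly the natural low-pass filtering described above.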

Back in the modern world of the studio (where sabre-toothed tigers are rather less of a problem), we still exhibit essentially the same reaction to sounds. To get somebody's attention, you have to place a sound very close to them, and as recorded music is simply an illusion (and stereo doubly so), you have to use the tools at your disposal to create the illusion of closeness. That's what we're doing, often subconsciously, when we use EQ.

Equaliser circuits are designed to lift or cut parts of the audio spectrum relative to other parts, but one of the side-effects of EQ is that you also introduce phase differences between the high and low frequency components of a signal. Sound familiar? A touch of high frequency EQ and an increase in gain can make a distant sound appear closer, by compensating for the high frequency loss due to air absorption. At the same time, the phase changes introduced by the equaliser can help offset the fact that the higher harmonics have been delayed by their passage through air. If this can make distant sounds appear to be closer, then it stands to reason that close-miked sounds can be made to appear closer still, by adding top end.
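To see both effects at once, here is a minimal sketch that evaluates the magnitude and phase response of a standard digital high-shelf filter, using the widely published RBJ 'Audio EQ Cookbook' formulas. The sample rate, shelf frequency and 6dB boost are arbitrary example values. Notice that well below the shelf frequency, where the magnitude is still nearly flat, the filter is already shifting phase.

```python
import cmath, math

def high_shelf_coeffs(fs, f0, gain_db):
    """RBJ audio-EQ-cookbook high shelf (shelf slope S = 1)."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / 2 * math.sqrt(2)  # S = 1
    cos_w0 = math.cos(w0)
    sq_a = math.sqrt(A)
    b = [A * ((A + 1) + (A - 1) * cos_w0 + 2 * sq_a * alpha),
         -2 * A * ((A - 1) + (A + 1) * cos_w0),
         A * ((A + 1) + (A - 1) * cos_w0 - 2 * sq_a * alpha)]
    a = [(A + 1) - (A - 1) * cos_w0 + 2 * sq_a * alpha,
         2 * ((A - 1) - (A + 1) * cos_w0),
         (A + 1) - (A - 1) * cos_w0 - 2 * sq_a * alpha]
    return b, a

def response(b, a, fs, freq):
    """Magnitude (dB) and phase (degrees) at one frequency."""
    z = cmath.exp(-1j * 2 * math.pi * freq / fs)  # z^-1 on the unit circle
    h = (b[0] + b[1] * z + b[2] * z * z) / (a[0] + a[1] * z + a[2] * z * z)
    return 20 * math.log10(abs(h)), math.degrees(cmath.phase(h))

b, a = high_shelf_coeffs(48_000, 6_000, 6.0)  # assumed example settings
for f in (100, 3_000, 20_000):
    mag, ph = response(b, a, 48_000, f)
    print(f"{f:>6} Hz: {mag:+.2f} dB, phase {ph:+.1f} deg")
```

At 3kHz the boost is still a fraction of a decibel, yet the phase shift is already substantial -- the 'hidden' side of what an equaliser does.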

This brings me on to the subject of why EQs can sound different, even though they claim to be working at the same frequency. While the effect of an equaliser's performance on the spectrum of the signal being processed is well documented, there's usually far less said about the way the equaliser affects the phase of the signal. There's now a growing belief that what an equaliser does to phase is just as important as what it does to frequency response. Indeed, some engineers believe that you can build an EQ that affects only phase, and not the frequency spectrum.

In some respects, these opinions are given credence by the success of various enhancer circuits, which have a minimal effect on the frequency spectrum of the sound being processed, yet create a significant impression of closeness and clarity. It's also true that a very small adjustment on a good equaliser will bring a sound out of a mix, whereas a less sophisticated equaliser requires you to crank up the treble to a point where the signal sounds harsh and nasty before it achieves the required degree of 'up-frontness'.

Before leaving the subject of 'up-frontness', it's also worth adding that there are non-EQ-related audio cues relating to distance, and these also need to be simulated if you are to create a convincing sense of distance or proximity. For example, if someone suddenly whispers in your ear, you hear far more direct sound than reverberant sound. A distant sound will have a wider stereo spread and, depending on the environment, may contain a high proportion of reverberant information, especially in rocky or wooded areas.
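The direct-to-reverberant cue can be sketched with a toy model: the direct sound follows the inverse-square law, while the reverberant field level stays roughly constant in a given room. The 3m 'critical distance' below (the point where the two are equal) is an assumed figure; it varies widely with room size and absorption.

```python
import math

def direct_to_reverb_db(distance_m, critical_distance_m=3.0):
    """Ratio of direct to reverberant level, in dB.
    Direct level falls 6 dB per doubling of distance; the reverberant
    level is modelled as constant, set by the assumed critical distance."""
    direct_db = -20 * math.log10(distance_m)
    reverb_db = -20 * math.log10(critical_distance_m)  # constant room level
    return direct_db - reverb_db

for d in (0.1, 1, 3, 10, 30):
    print(f"{d:>5} m: direct/reverb {direct_to_reverb_db(d):+6.1f} dB")
```

On this model, a whisper at 10cm is almost entirely direct sound (around +30dB direct-to-reverb), while a source at 30m is dominated by the reverberant field -- the balance a mix has to reproduce to place a sound near or far.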


THE DEEP END


If top end EQ affects the apparent proximity of a sound by grabbing our attention, low frequency sounds seem to have a more subliminal effect. This is why repetitive, rhythmic sounds usually include a lot of low frequency information. Much has been said about heartbeats and sounds heard in the womb, and I'm not really qualified to comment -- all I know is that you can have a really deep bass sound going on, which can add to the excitement of a track, without ever drawing your attention away from whichever sounds have been placed 'up front'. However, even the deepest sounds usually contain some high frequency harmonics, so you can still use high frequency EQ to move these sounds forward in the mix if you want to make them demand attention.

A lot of engineers use EQ to separate sounds within a mix, to try to keep it from becoming cluttered, but does this work, and if so, why? Our ears are incredibly powerful analytical tools, capable of picking out single instruments within orchestras (or overhearing interesting gossip over the top of dozens of other conversations at a party), so why do we need to enlist EQ to help us make sense of a mix?

Maybe there's no clear-cut answer, and I know that if sounds start to get too similar, then they become harder to differentiate, but I think a lot of it comes down to this 'attention' thing again. If everything in a mix is close-miked and given roughly the same EQ, then it's all going to try to push to the front, where it will compete for our attention -- and the human brain is noted for its intolerance of being asked to concentrate on too many different things at the same time. In a musical context, this leads to fatigue, and a general sense of not wanting to listen any more. To check out what I mean, listen to a CD that's been recorded with everything too bright, and you'll soon want to get it 'out of your face'.


THROWING A CURVE


So far, most of the techniques discussed can be tried with just a simple bass/treble EQ, but if that's the case, why do we need mid-range controls or parametric equalisers? One obvious application of a band-pass filter (which is what mid-range and parametric equalisers are) is that you can tune the equaliser to the fundamental pitch of an instrument, and then add boost, to increase the instrument's apparent level. If the level of the instrument is then reduced by turning down the gain so as to restore the original subjective balance, frequencies produced by that instrument that are well away from the fundamental frequency will also be reduced in volume. This helps reduce spectral overlap between sounds that might otherwise be too similar. To my mind, this is a corrective process rather than a creative one, and if you can choose more appropriate sounds at source, you'll probably find that the end result is better than using EQ to 'bend' the sounds to fit. But spectral mixing does work, and it's worth exploring to learn its benefits and limitations.
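As a numerical sketch of that boost-then-trim process, the code below uses the standard RBJ peaking-EQ formulas to boost an assumed 200Hz fundamental by 6dB, then pulls the whole channel down 3dB to restore the subjective balance. The net response lifts the region around the fundamental while cutting everything well away from it, which is the spectral-separation effect described above.

```python
import cmath, math

def peaking_coeffs(fs, f0, gain_db, q):
    """RBJ audio-EQ-cookbook peaking (bell) equaliser."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    cos_w0 = math.cos(w0)
    b = [1 + alpha * A, -2 * cos_w0, 1 - alpha * A]
    a = [1 + alpha / A, -2 * cos_w0, 1 - alpha / A]
    return b, a

def gain_db_at(b, a, fs, freq):
    """Magnitude response in dB at one frequency."""
    z = cmath.exp(-1j * 2 * math.pi * freq / fs)
    h = (b[0] + b[1] * z + b[2] * z * z) / (a[0] + a[1] * z + a[2] * z * z)
    return 20 * math.log10(abs(h))

FS = 48_000
b, a = peaking_coeffs(FS, 200, 6.0, 1.0)  # +6 dB at an assumed 200 Hz fundamental
trim_db = -3.0                            # assumed fader pull-down after the boost
for f in (200, 1_000, 5_000):
    net = gain_db_at(b, a, FS, f) + trim_db
    print(f"{f:>5} Hz: net {net:+.1f} dB")
```

All frequencies and gains here are illustrative; in practice you would tune the boost to the instrument's actual fundamental and set the trim by ear.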

One aspect of natural sound relates not to the sound itself but to the way the human hearing system works. While a good hi-fi amp has a perfectly flat frequency response, the human hearing system comes nowhere close to being flat. What's more, the frequency response changes depending on the level of the sound being heard. As a sound gets louder, we perceive more low end and more top end, but the mid range becomes progressively more recessed. Plotted as a frequency response, this appears as a curve with a dip in the centre, often known as a 'smile curve' because of its shape. The louder the sound, the deeper the smile. The loudness button on a stereo system emulates this smile curve, so that you can play back material at a low volume level, yet still get some impression of loudness.

In the studio, you can create a similar loudness effect by pulling down the mid EQ. Even physically quite quiet and distant sounds will appear louder -- but they'll still seem far away. Pulling down the mid range can also make a mix appear to be less cluttered, because a lot of the information that's clamouring for our attention resides in the upper mid-range. Using this knowledge, you could, for example, EQ an entire rhythm section to make it sound louder, then overlay it with conventionally EQ'd vocals and solo instruments.
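Here is a minimal sketch of that mid cut, again using the RBJ peaking-EQ formulas (repeated so the snippet stands alone). A broad, assumed -4dB dip around 1kHz leaves the spectral extremes essentially untouched while recessing the attention-grabbing mids -- a crude 'smile' curve.

```python
import cmath, math

def peaking_coeffs(fs, f0, gain_db, q):
    """RBJ audio-EQ-cookbook peaking (bell) equaliser."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    cos_w0 = math.cos(w0)
    b = [1 + alpha * A, -2 * cos_w0, 1 - alpha * A]
    a = [1 + alpha / A, -2 * cos_w0, 1 - alpha / A]
    return b, a

def gain_db_at(b, a, fs, freq):
    """Magnitude response in dB at one frequency."""
    z = cmath.exp(-1j * 2 * math.pi * freq / fs)
    h = (b[0] + b[1] * z + b[2] * z * z) / (a[0] + a[1] * z + a[2] * z * z)
    return 20 * math.log10(abs(h))

FS = 48_000
# Assumed example settings: broad (Q = 0.5) cut of 4 dB centred on 1 kHz.
b, a = peaking_coeffs(FS, 1_000, -4.0, 0.5)
for f in (60, 1_000, 12_000):
    print(f"{f:>6} Hz: {gain_db_at(b, a, FS, f):+.1f} dB")
```

The bass and extreme top pass almost unchanged, so the result reads as 'louder' in the smile-curve sense rather than simply 'duller'.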

It's easy to make a mix sound loud, by cranking up the monitors so that it is loud -- but there's more skill in making a mix sound loud and powerful, regardless of the playback level. Using the smile curve theory can help, though you should also check out the use of compression to maintain high average sound energy levels.


USE YOUR ILLUSION


While I'm not suggesting that you should throw away everything you've learned about EQ in the past, I believe that there is value in listening to the various elements in a mix, and then deciding on a 'pecking order' in terms of which sounds deserve the most attention, and which ones play more of a supportive role. Then you can set up your EQ to help reinforce the sense of perspective. Most people try to achieve perspective using level control -- everyone knows that the further away a sound is, the quieter it is -- but now you know you can also roll off a little top, just to consolidate the illusion.

When it comes to using enhancers or exciters, try to avoid processing the entire mix if you can, because although these devices do make things sound clearer and more forward, they tend to bring everything forward -- when what you really need to do is increase the sense of space between what's at the front of the mix and what's at the back. Better to take your 'front-line' sounds, such as vocals and solo instruments, and give these the enhancement treatment, leaving the more distant stuff unprocessed, or even EQ'd down a little.

Finally, I have always been of the opinion that the less EQ you use, the more natural the final sound will be. So rather than adding lots of top to vulnerable sounds such as vocals in order to get them to sit at the front of the mix, try being more restrained in your use of EQ, and use high-end cut on things like low-level pad sounds, backing vocals and whatever else is playing a subordinate role. This is particularly relevant to those who don't have access to really sweet-sounding, upmarket EQs: most console equalisers are a little unsubtle when used in anything but moderation.

I'm not guaranteeing that these principles will solve all your EQ problems, because some difficult mixes are simply down to an unfortunate combination of instruments or sounds, or even plain bad arrangement. However, if you can get closer to the results you're after by using less EQ, then you just might break the unfortunate trend towards fatiguingly over-bright records.


