Welcome to No Limit Sound Productions

Company Founded

Our services include Sound Engineering, Audio Post-Production, System Upgrades and Equipment Consulting.
Our mission is to provide excellent quality and service to our customers. We offer customised services.

Friday, January 31, 2014

Q What sort of double glazing do I need?

Paul White

I have double glazing but it's quite old, and I notice that other people's double glazing seems to cut out more sound. Would it be better to replace my existing double glazing, or to fit additional secondary glazing? Is it worth me installing triple glazing, from a sound-reduction point of view? I have emailed double-glazing companies asking for information about noise reduction, but they are not forthcoming.

Via SOS web site

SOS Editor In Chief Paul White replies:

Modern UPVC double glazing can be very effective in reducing sound leakage, though older systems may not work so well, for a number of reasons. Double-glazed units work because they combine an airtight seal with a window assembly that includes an air gap between the two panes of glass. This reduces the amount of sound energy transferred from the inner sheet to the outer one — but it's still not perfect, because the trapped air between the panes still transmits some sound energy. Nevertheless, this double-layer-with-gap arrangement provides better sound and heat isolation than one thicker sheet of glass. The heavier the glass and the wider the air gap, the more effective the sound isolation.
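To put rough numbers on 'the heavier the glass, the better': building acousticians often use the mass-law approximation, TL = 20 x log10(f x m) - 47 dB, for the transmission loss of a single panel of surface mass m (in kg/m2) at frequency f (in Hz). This sketch applies that textbook formula (the formula and the glass densities are standard reference values, not figures from the article):

```python
import math

def mass_law_tl(freq_hz, surface_mass_kg_m2):
    """Textbook field-incidence mass-law estimate of a single panel's
    transmission loss in dB: TL = 20*log10(f * m) - 47."""
    return 20.0 * math.log10(freq_hz * surface_mass_kg_m2) - 47.0

# Glass is roughly 2500 kg/m^3, so 4mm glass is ~10 kg/m^2, 6mm ~15 kg/m^2.
tl_4mm = mass_law_tl(500, 10.0)
tl_6mm = mass_law_tl(500, 15.0)   # heavier pane: ~3.5dB more loss at 500Hz
```

Doubling the surface mass buys about 6dB of extra loss per the formula, which is why studio windows use much heavier glass than domestic units.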

Early double-glazed units invariably had a smaller air gap than the newer type, which might explain why your window isn't isolating as effectively, and it's also quite possible that the seals on your window have deteriorated with age. This requirement for an airtight seal is often overlooked (doors with gaps underneath and so on), and in my own studio I initially had a noticeable amount of sound leakage due to sound passing through the studio toilet's cistern overflow pipe! I have since changed the plumbing to the more modern type with an internal overflow arrangement, and blocked the old pipe, but this serves to illustrate that what might seem to be an insignificant opening can actually leak more sound than you might imagine. Even an open keyhole can compromise an otherwise well-designed door.

It follows, then, that where a window is not absolutely airtight when closed, sound will simply leak around the edges of the opening section. It can also be the case that large one-piece windows can resonate, allowing sound to pass through more easily at certain frequencies, so having a window comprising two or three separate sections may be more effective than a large one-piece window.

If you can get your original window serviced to restore the seals to their former glory (and also check that there are no gaps between the window frame and the wall into which it is fitted), then adding another layer of thick glass (or heavy perspex) at some distance from the first can work extremely well, as the much larger air gap will make the isolation considerably better and you should notice rather less low-frequency leakage. You could even seal the original window with silicone sealant, to prevent it opening and to ensure that it is airtight, if you don't need whatever you do to be easily reversible.

A simple DIY glazing panel with a large air gap can be very effective in reducing noise.

I did something very similar in my own studio, in which a standard double-glazed unit was already fitted flush with the outer face of the wall in the usual way. I added a sheet of 6mm glass, fixed into a frame which I fitted to the inner face of the wall, leaving an air gap between this and the existing window of almost the full thickness of the wall. I used a simple wooden frame with self-adhesive neoprene draught excluder between the glass and the wood on either side to produce the required seal. This is something that's well within the capabilities of anyone who can handle basic DIY.

The downside to this approach is that you will no longer be able to open the window — unless you arrange for the inner glass and its frame to be removable. However, commercial secondary glazing products, many of which are designed to open, tend to be much less effective because they rarely produce a perfect seal, and they also use thinner domestic glass, rather than the 6mm thickness recommended for this application.

In a commercial studio, the windows normally comprise much heavier glass than you would find in domestic double glazing, and it's also common practice to combine different thicknesses of glass so that the resonant frequencies of the two sheets don't coincide. That's especially important in control-room windows, which are usually large and may only comprise one piece of glass per side. The two pieces of glass may also be angled to control internal sound reflections. There's really no advantage in triple glazing when building a window from scratch; adding another sheet of glass in between two widely spaced pieces simply trades one large air gap for two smaller ones, which can actually reduce the isolation at low frequencies. In your case, however, the existing double-glazed unit is relatively thin compared with the wall thickness, so adding that extra-heavy glass layer on the inside of the wall will make a very significant difference. One final tip is that to prevent the windows steaming up, you can place a few bags of silica gel (dry them out on top of a hot radiator for a few hours first) in the gap to mop up any trapped moisture.

If you still need to be able to open the window, then forget the extra inner glazing and just fit a more modern double-glazed unit with the widest possible air gap. I haven't noticed much difference in isolation between the various brands, as the sealed-unit glass assemblies tend to be pretty similar (assuming the same-width air gap), and that's where most sound leakage still occurs.


Vocoders (SOS)


Vocoders were very popular in the '70s, but their 'talking keyboard' sound soon became cliched, and from then on, their popularity steadily declined. By the time MIDI started to take off, vocoders were all but extinct, with only a couple of manufacturers continuing to make them -- which was a pity, because a vocoder really comes into its own when used as part of a MIDI system. Fortunately, a few multi-effects units now include a vocoder as part of their repertoire, and with a little ingenuity, they can be used to modify sounds in a number of creative ways -- other than producing the classic 'asthmatic who's swallowed a harmonica' vocal effect. But before exploring some of the processing tricks made possible by this unique device, it's useful to take a look inside to see how it works.

A vocoder enables the tonal character of one sound to be imposed on another, quite different sound; the classic talking keyboard effect is produced by using the changing characteristics of the human voice to shape a sustained synth sound. What really happens is this: the vocal signal, which we shall call the modulator, is analysed by a bank of filters that continually measure the signal envelope in each part of the spectrum in exactly the same way as a spectrum analyser does. The more filters in the bank, the more accurate the analysis.

The signal to be modified, known as the carrier, is also fed to a bank of filters, but this time the level of signal passing through each filter band is modulated according to the output from the spectrum analyser section. In other words, the spectral characteristics of the modulating signal are duplicated in the filter bank processing the carrier. Figure 1 shows a simplified block diagram of what's actually going on. If the modulating signal is continually changing in character, as is true of the human voice, these dynamic changes will be passed on to the carrier, giving the synth sound a recognisable vocal quality. So effective is this process that it is possible to pick out intelligible words, even when none of the original vocal signal is present. And because we're analysing the spectral content and not the absolute pitch of the modulating signal, it doesn't even matter if the words are sung out of tune, or even spoken.
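The analysis-and-resynthesis process just described can be sketched in a few lines of code. This is a deliberately crude, illustrative channel vocoder — an FFT-band approximation rather than the analogue filter-bank circuit of any real unit: for each short frame, the modulator's energy in each band is measured (the 'spectrum analyser' half) and imposed on the corresponding band of the carrier.

```python
import numpy as np

def channel_vocoder(modulator, carrier, n_bands=16, frame=512):
    """Crude channel vocoder: per frame, impose the modulator's band
    energies on the carrier's spectrum. Inputs are 1-D float arrays."""
    out = np.zeros(min(len(modulator), len(carrier)))
    edges = np.linspace(0, frame // 2 + 1, n_bands + 1, dtype=int)
    win = np.hanning(frame)
    hop = frame // 2
    for start in range(0, len(out) - frame, hop):
        m = np.fft.rfft(modulator[start:start + frame] * win)
        c = np.fft.rfft(carrier[start:start + frame] * win)
        for b in range(n_bands):
            lo, hi = edges[b], edges[b + 1]
            m_env = np.sqrt(np.mean(np.abs(m[lo:hi]) ** 2))  # analyser output
            c_env = np.sqrt(np.mean(np.abs(c[lo:hi]) ** 2)) + 1e-12
            c[lo:hi] *= m_env / c_env          # impose modulator's envelope
        out[start:start + frame] += np.fft.irfft(c) * win    # overlap-add
    return out
```

Feeding it speech as the modulator and a sawtooth pad as the carrier gives the classic effect in miniature; where the modulator falls silent, the output falls silent too, since each carrier band is scaled by the modulator's level in that band.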

Apart from the obvious spectral variations generated by the vocal cords, human speech also includes 'fricatives' -- short, high-frequency sounds, such as the 'S' and 'T' consonants, which are formed in the mouth. If these are separated out from the main vocal signal by means of a high-pass filter, they can be added to the output to increase the intelligibility of the sound, and because they don't relate to the musical pitch of the vocal, they can be added to any musical output without compromising the tuning. A simple system for adding fricatives is also shown in Figure 1.
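The fricative path can be approximated by high-pass filtering the vocal and mixing the result back in after the vocoder. This one-pole design is a generic textbook filter, not the circuit from Figure 1, and the cutoff and mix level are illustrative guesses:

```python
import numpy as np

def one_pole_highpass(x, cutoff_hz, sr):
    """Simple one-pole high-pass: y[n] = a * (y[n-1] + x[n] - x[n-1])."""
    rc = 1.0 / (2.0 * np.pi * cutoff_hz)
    a = rc / (rc + 1.0 / sr)
    y = np.zeros_like(x, dtype=float)
    for n in range(1, len(x)):
        y[n] = a * (y[n - 1] + x[n] - x[n - 1])
    return y

# Hypothetical fricative path: mix the high-passed vocal back in with
# the vocoder output (the 4kHz cutoff and 0.3 level are illustrative).
# output = vocoded + 0.3 * one_pole_highpass(vocal, 4000.0, sr)
```

Because the filtered band carries no musical pitch, this mix works regardless of what the carrier is playing, which is exactly the point made above.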

The original vocoders were built using analogue technology and the filter design was very similar to that used in graphic equalisers. The more bands the signal could be split into, the more convincing the vocal articulation. Some of these machines used patch cords to link the analyser outputs to the modulating filter bank, which opened up numerous creative avenues. By crossing over some of the patch cords so that one frequency band in the analyser section controlled a different band in the modulator filter bank, the vocal character imposed on the sound could be completely changed. Regrettably, most currently available vocoders don't offer this facility.

The digital vocoder implementation used in the Boss SE50 is built around a seven-band filter bank, and though this doesn't sound like a lot, the results are surprisingly good. I suspect that the filter frequencies have been specifically chosen to cover the human vocal range, which would mean that they are grouped together to provide coverage of the vital mid-range of the audio spectrum. The techniques discussed here were tried using an SE50.




Before trying out any advanced processing tricks, it helps to get a feel for the vocoder by recreating the cliches. The SE50 works at line level, though there is enough gain available to plug a high-Z mic directly into it. However, it's best to take a mic feed from a mixer direct out or insert send and feed this to the vocoder's modulator input (the right input in the case of the SE50).

Vocoding is a type of subtractive synthesis, so the carrier should ideally be a harmonically rich, sustained sound. At any rate, it is essential that the carrier produces sound in the vocal part of the spectrum, otherwise the vocoding effect won't work properly. When you speak into the mic at the same time as playing a sustained musical note or chord, you should hear the typical vocoder effect, where the carrier is modulated in both frequency content and level. If you stop speaking, the output will fall silent, even though you are still playing the chord. Similarly, if you speak when there is no carrier present, you'll get no output; both signals have to be present before you hear anything. Essentially, the vocoder is multiplying the modulator signal level by the carrier level in each of the frequency bands, and if the input levels are constantly changing, the result can sound quite lumpy. Using a compressor to hold the vocal level as steady as possible may help produce a smoother result, and in some circumstances it may help to compress the carrier signal too.




The problem with early vocoders was not so much a limitation of the devices themselves, but of what you could feed into them. Both the carrier and the modulator, usually keyboard and voice, had to be generated together in real time, and that often led to inconsistent or unrepeatable results. Now, of course, we have MIDI-controlled keyboards and multitrack tape machines, both of which will produce exactly the same sounds over and over again. The most flexible setup for use with a vocoder is a MIDI sequencer synced to tape, but many of the ideas outlined here may be developed using just a MIDI synth and sampler.

More realistic choir pads: Synth choir pads or samples are fine, but they don't actually say anything, do they? They might go ahh, or ooh, but that's hardly the basis for a great lyric. A more human result can be achieved by modulating the sampled vocal using a real voice, and if you don't feel confident about doing this in real time, the vocal part can be put onto tape or even recorded into a sampler. Because of the way in which a vocoder modulates the level of the sound being processed, it is important not to leave any unintentional gaps as you catch your breath; in some cases two or more people singing together can help improve the ensemble effect. And it doesn't matter if you're tone deaf, as long as your timing is OK.

If the result is still too lumpy, try adding the vocoded sound to the unprocessed choir sound, and if you have a mixer with phase invert switches, experiment by reversing the phase of just the vocoded signal. This should change the way it interacts with the unprocessed sound and may produce an interesting alternative. Figure 2 shows this patch.

Cheap Morphing: For this trick, you'll need a sampler capable of performing crossfades to provide the modulating signal and either a synth or sampler to use as the carrier. The first step is to create a sample that slowly crossfades between two sounds with radically different characters, for example, an oboe and a human voice. This is used to feed the modulator input, and whatever is fed into the carrier input will then take on these changing characteristics as the samples crossfade. If the sampler is triggered via MIDI at the same time as the instrument feeding the carrier input, the result will be quite repeatable. Figure 3 shows how this patch is created.
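The crossfading modulator sample can be built offline. This is a minimal sketch of an equal-power crossfade, a standard technique assumed here rather than taken from the article; the sample names in the comment are hypothetical:

```python
import numpy as np

def equal_power_crossfade(a, b):
    """Fade from sample a to sample b with an equal-power (cos/sin) curve,
    so the overall level stays roughly constant through the morph."""
    n = min(len(a), len(b))
    t = np.linspace(0.0, np.pi / 2.0, n)
    return a[:n] * np.cos(t) + b[:n] * np.sin(t)

# e.g. morph = equal_power_crossfade(oboe_sample, voice_sample)
# then feed `morph` to the vocoder's modulator input.
```

An equal-power curve avoids the level dip you get with a plain linear crossfade between uncorrelated sounds, which matters here because the vocoder tracks the modulator's level directly.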

Using 'Natural' Modulators: When creating textural backgrounds for instrumental ambient music, the vocoder can be used to modify a synth pad sound using naturally found sounds as the modulator input. For example, a looped sample of cocktail party chatter will impart an almost subliminal murmuring quality to a piece of music. Sound effect CDs or tapes are useful sources of inspiration. Try the obvious sounds such as Wind, Rain, Thunder, Surf and so on, but don't neglect the more obscure sounds, as these often produce the most rewarding results. The patching is the same as for Figure 3.

Merging Synth Sounds: This trick works well if you have access to an analogue synth with a MIDI interface. If this is used as the modulator source, any filter sweeps will be transferred to whatever instrument is being used as the carrier. Don't expect the filter sound to be an accurate representation of the original analogue patch -- what you'll get is a merged sound that has characteristics of both instruments. This technique can be used to create a new sound from any two existing ones, but for the best results, the modulator must change in timbre during the evolution of the sound. Again, the patch in Figure 3 is used, with the modulator input being either a synth or a sampler.

Vocoder Delay: This trick is a variation on the straight vocoder theme. Instead of using the original vocal line to modulate a synth, you use the vocal as normal (Yes, I'm afraid that does mean you have to sing in tune!), and at the same time, use a delayed version of the vocal to drive the vocoder. This means you'll get a conventional vocal followed by a vocoded echo 'sung' by the synth pad of your choice. Figure 4 shows how a vocoder may be used with a delayed vocal.

Vocoding Echoes: Vocoders don't have to be used to modify just synth sounds -- any harmonically rich sound can be used as the carrier. Interesting results can be achieved using long reverbs or multiple delays modulated by vocal sounds. For example, taking another angle on the previous idea, you could use the delayed lead vocal to 'imprint' a vocal onto the reverb tail of itself. This would necessitate setting up a very long reverb time, but as it would only be audible when 'speaking', this wouldn't clutter up the mix. Figure 5 shows a suggested setup for achieving this effect.




These few ideas should be enough to demonstrate that there's more to the vocoder than 'Mister Blue Sky'. With a couple of multi-effects units now including vocoders, the price of experimentation has never been lower -- indeed, you can pick up a second-hand SE50 for around £200, around half the price I paid for my first vocoder ten years ago. Now that we have MIDI to ensure a reasonable degree of repeatability and control, the vocoder has become a valid studio processor rather than the temperamental live performance gimmick it started out as. With so few effects coming onto the market that are actually new, doesn't it make sense to explore something that's been here all along, yet has rarely been exploited to anywhere near its full potential?




Never intended as a musical device, the vocoder (VOice enCODER) was originally developed in the 1930s by one Homer Dudley at Bell Labs in the US, during research into reduced-bandwidth speech transmission over long-distance phone lines. But, as with much early 20th century electronic gadgetry capable of processing or producing sound, the vocoder moved out of the lab and into the electro-acoustic studio. In the days when electronic music meant manipulating raw sound on tape, the vocoder was one of the few real-time processors available. Early examples of vocoder-based effects can be heard in Disney cartoons and feature films; Radiophonic Workshop-composed BBC theme tunes have often featured vocoding, and that's not to mention tracks from the likes of Pink Floyd, ELO, Laurie Anderson and Devo, amongst others.

The vocoder has enjoyed a long-standing, if fickle, flirtation with the music industry; many have been made and forgotten, though some notable units are still available on the second-hand market and rather sought-after. Here are some of them. (Note that 'new' prices come from 10 or 15 year old price lists!)
• BARTH Musicoder (16 filters)

Barth's Musicoder was a comprehensive device developed in the late '70s/early '80s from a Bell Labs-like research tool; Mike Oldfield's studio was reported to contain one of these at one time. Barth have since moved into the niche market of broadcast station controllers.
• ELECTRO HARMONIX Vocoder (14 filters)

This famed effects pedal company -- who also made a compact synth and a couple of small samplers, one of which evolved into Akai's S612 -- produced a 19-inch rack vocoder that included a compressor on the mic input. It's a simple device, and cost around £400 when new.
• EMS Vocoder 2000 (16 filters)
• EMS Vocoder 3000 (16 filters)
• EMS Vocoder 5000 (22 filters)

EMS, manufacturers of the legendary VCS3 and Synthi A synths, also made vocoders: the 5000 is the most visually striking (matching the synths perfectly -- it was even called the Synthi Vocoder in one version), and is the most fully-specified of the range. It also features a comprehensive patching system, enabling the analysing and synthesizing filters to be connected in any order. Incidentally, the 3000 and 2000 are still available new from EMS, the 3000 at £3000, and the 2000 at £995. Contact EMS on 0726 883265.
• KORG DVP1 Digital Voice Processor

This unit couples vocoder facilities with harmonisation, pitch shifting and other features aimed at vocalists, all under MIDI control.
• KORG VC10

This is likely to be the easiest vocoder to find and has the style and feel of Korg's MS10/MS20 semi-patchable mono-synths, with a sloped front panel and a 32-note keyboard. It's simple to use, and can process either mic or electronic instrument inputs.
• 16 Channel Vocoder

This was an expensive (£4400-ish), lab-spec, music-specific device, and is très desirable, featuring patching and many facilities that make it easy to use live, in spite of its complexity. Rare.
• ROLAND SVC350 Vocoder (10 filters)

Along with the Korg VC10, the SVC350 is likely to be one of the most visible vintage vocoders on the market. This simple to use, rugged rack unit did retail for an affordable £500 at one time, though it might cost close to that to obtain a good example now.
• ROLAND VP330 Vocoder Plus

In spite of its preset-like simplicity, this has become something of a classic: it easily produces typical airy vocal vocoder sounds and features a built-in keyboard to boot.
• ROLAND VP70 Voice Processor

Similar to Korg's DVP1; offers vocoder-like facilities, but really scores as a pitch to MIDI and harmonising system.
• SENNHEISER VSM201 (20 channels)

This £6000-plus device was a truly pro machine, and is unlikely to have made its way into domestic settings. What you can't do with the VSM201 probably isn't worth doing.
• SYNTON Syntovox 202 (2 filters)
• SYNTON Syntovox 221 (20 filters)
• SYNTON Syntovox 222 (10 filters)

The 202 was designed for stage use, and had a sub-£300 price tag and an unfussy front panel making it very easy to use; the 221 was a £2800 powerhouse (complete with filter patching matrix).

Derek Johnson

Thursday, January 30, 2014


Using Effects With Keyboards

Tips & Techniques

Technique : Effects / Processing


Most of us can set up a suitable vocal reverb treatment, but what's the best way to deal with all those synthesized and sampled instruments? PAUL WHITE offers a few suggestions.

With such a bewildering array of effects offered by the current crop of stand-alone processors and workstations, choosing the best effects processing isn't always easy. In the case of a keyboard workstation, it's tempting to use whatever effects are already programmed in, but that neglects the true creative potential of the machine. The same is true of stand-alone effects units, where the factory presets often seem to provide more than enough choice. However, even more important than the range of effects on offer is the need to match sounds with these effects, so that the end result has both purpose and musical relevance.

If you've done any home recording, the chances are that you'll be able to come up with sympathetic reverb treatments for drums and vocals, but what, if anything, should you do to synthesized and sampled sounds? Because of the creative nature of music, there are no inviolable rules, but in many cases you might find it helpful to consider the acoustic surroundings in which the original instrument could have been played. Even in the case of a completely synthetic sound, you can often imagine the environment in which you'd like it to be heard, and go some way towards creating that. The aim of this article isn't to lay down hard and fast rules, but to try to establish a few practical guidelines which you can then build upon, based on natural acoustic principles.




Virtually every synth built over the last five years includes some form of choir sound or vocal pad, but in their raw state, they often don't sound much like the real thing. I've found that the best results are achieved by layering two or more choir patches from different instruments, and then adding effects.

Real choirs are generally associated with churches, cathedrals or other large buildings, which implies longish reverb times. But you can do more than simply slap on a five-second reverb. Going back to natural acoustics, choirs are made up of many people, all singing slightly out of tune with each other, and at slightly different times. If the choir is a good one, these human variations will be small, but they can never be eliminated. A good first step to simulating this is to use a stereo pitch-shifter/delay algorithm, to create two detuned and slightly delayed copies of the original sound. A detune setting of between five and ten cents is adequate to produce a natural chorus effect, and if one output is panned left and tuned down slightly, while the other is panned right and tuned up slightly, the nominal pitch will remain the same, and the stereo spread will be enhanced. To simulate the timing delays of the different singers, the two detuned signals can be delayed by different amounts between 20 and 50ms. If more than one synth or sampler is being used to create the basic choir sound, these too may be detuned slightly. If there are only two layers of sound, one should be tuned slightly flat and the other slightly sharp, so as to maintain an average pitch that is still in tune.
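The detune and delay figures above translate directly into numbers: a detune in cents corresponds to a playback-rate ratio of 2^(cents/1200), and a delay in milliseconds to a whole number of samples. A quick illustrative calculation (the specific settings below are just examples within the ranges suggested):

```python
def cents_to_ratio(cents):
    """Playback-rate ratio for a detune in cents (100 cents = 1 semitone)."""
    return 2.0 ** (cents / 1200.0)

def ms_to_samples(ms, sample_rate=44100):
    """Delay time in milliseconds, rounded to whole samples."""
    return round(ms * sample_rate / 1000.0)

# Left copy: ~8 cents flat, 20ms delay; right copy: ~8 cents sharp, 50ms.
left = (cents_to_ratio(-8), ms_to_samples(20))
right = (cents_to_ratio(+8), ms_to_samples(50))
```

Note that equal and opposite detunes multiply out to exactly 1, which is why the nominal pitch of the pair stays in tune.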

Now we can start adding reverb, and because cathedrals tend to include a lot of hard surfaces, a bright hall setting (with the early reflections level, if you have one, turned right up) should bring you somewhere close. To reinforce the illusion of distance as well as space, try a pre-delay time of between 50 and 100ms. An alternative to this is to delay just one of the reverb outputs by 50ms or so, which will create a sense of left/right movement. This is less natural than pre-delay, but is a very pleasing effect in its own right.




Like choirs, string sections are made up of many performers, all playing slightly differently. You can use the same treatments as you would for a choir, but because the usual venue is a concert hall, the appropriate reverb treatment should be around three seconds, rather than five. Gentle stereo chorus may be used, either instead of or as well as pitch detuning, and this should be applied before adding the reverb. To prevent the final sound becoming too muddy, use the reverb sparingly, especially if the string patch has a slowish release time. If you have separate control over the early reflections level, you can increase this, to reinforce the illusion of many people playing together.

When layering string patches, try to pick sounds with slightly different attack rates, to create the effect of the string sound building up. One trick I've used with both string and choir sounds is to use a second layer an octave higher than the first, and with a noticeably slower attack. Analogue string pads can also be combined with digital string pads or samples to good effect.




Brass instruments tend to have an obvious attack when played hard, and in an ensemble, small timing differences become quite obvious. A stereo multitapped delay, with randomly set timings between 20 and 70ms, works well to simulate this effect, and, to avoid blurring the sound too much, a short, bright reverb treatment often gives the best result. A two-second plate tends to work well in a pop or rock context, though for a classical sound, a good concert hall patch set to decay for between two and three seconds is ideal.
Brass ensemble sounds may be further thickened by the use of pitch-shifter, detune, or chorus, as described in the Choir and String sections.
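A multitap delay of the kind suggested for brass can be sketched as follows; the tap times and gains are illustrative, not taken from any preset:

```python
import numpy as np

def multitap_delay(x, tap_ms, tap_gains, sr=44100, dry=1.0):
    """Mix the dry signal with several delayed, attenuated copies (taps)."""
    taps = [round(t * sr / 1000) for t in tap_ms]
    out = np.zeros(len(x) + max(taps))
    out[:len(x)] += dry * x
    for d, g in zip(taps, tap_gains):
        out[d:d + len(x)] += g * x   # each tap: the input, delayed and scaled
    return out

# Four randomly spaced taps between 20 and 70ms, as suggested for brass:
rng = np.random.default_rng(1)
tap_times = sorted(rng.uniform(20, 70, size=4))
impulse = np.zeros(44100)
impulse[0] = 1.0
response = multitap_delay(impulse, tap_times, [0.5, 0.4, 0.3, 0.2])
```

Feeding an impulse through, as above, shows the tap pattern directly; with a brass stab as input, each tap reads as one slightly late 'player' in the section.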




Though part of the standard orchestral repertoire, flutes are often used in contemporary work, where they are frequently treated using long reverbs, echoes, or combinations of both. The same is true of pan pipes, shakuhachis, or indeed any wind instruments which operate along the same lines (air being blown over an opening). As a general principle, long reverb or echoes work best on sparsely orchestrated pieces, as can be confirmed by listening to a selection of New Age compositions, but in an orchestral or ensemble context, it may be safer to err on the side of more natural acoustics. Concert hall patches are quite satisfactory, though the more intimate sound of a 'tiled room' or 'medium room' patch helps draw the listener into the music.
Because of the tonal purity of flutes and their ethnic cousins, detuning and chorus effects tend to detract from the character of these instruments, and are best omitted, unless used very sparingly.




The low end of the audio spectrum can easily become confused and cluttered if treated using long echo or reverb, which is why most bass sounds tend to be left fairly dry. Short delays may be used to create automatic double-tracking (ADT) or doubling effects, or you can try gated reverbs and early reflection patterns to create stereo spread and space without clogging up the mix. Effects such as flanging can also be effective on electronic bass sounds, because they add interest and movement without 'smearing' the sound. You can treat fretless bass slightly more adventurously, and in slow, sparsely orchestrated music, try combining both chorus and reverb to create a warm, sensuous feel.

Where there is a need to create a greater sense of bass energy, compression or limiting can be used to increase the average sound level without increasing the peak level. Setting the compressor attack time to between 10 and 50ms can help emphasise the attack of percussive bass sounds. On a practical note, bass sounds are normally panned to the centre of the mix, so that the low-frequency load is shared by both loudspeakers rather than only one. This helps create a louder-sounding and more stable mix.
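A minimal feed-forward compressor with separate attack and release smoothing, as a sketch; the envelope follower and gain law here are generic textbook choices, not any particular unit's design:

```python
import numpy as np

def compress(x, threshold=0.5, ratio=4.0,
             attack_ms=20.0, release_ms=200.0, sr=44100):
    """Feed-forward compressor: follow the signal envelope with separate
    attack/release smoothing, then turn down anything above the threshold."""
    a_att = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = 0.0
    y = np.empty_like(x, dtype=float)
    for n, s in enumerate(x):
        level = abs(s)
        coeff = a_att if level > env else a_rel
        env = coeff * env + (1.0 - coeff) * level   # envelope follower
        if env > threshold:
            # gain law: output level = threshold + excess / ratio
            gain = (threshold + (env - threshold) / ratio) / env
        else:
            gain = 1.0
        y[n] = s * gain
    return y
```

With a 10-50ms attack, the envelope follower lags the note onset, so the initial transient of a percussive bass note passes at full level before the gain reduction clamps the sustain, which is how the attack gets emphasised.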




Because pad sounds tend to be ongoing, there's little point in adding echo or reverb; all the gaps in the mix are already full. In most instances, if you need to add interest, try using gentle chorus or flanging, ideally in stereo, to create a sense of width and movement. The human hearing system soon learns to ignore repetitive or constant events, such as the tick of a clock or the whirr of a fan heater, and the best way to keep the brain interested is to introduce change. Detuning patches are also effective in widening pad sounds.

Another way to create movement is to actually move the sound! If your effects unit includes a panner, try moving the sound from left to right and back, at a speed related to a multiple of the tempo of the song.




Ultimately, effects are tools to create an illusion of some kind, and every illusion starts with a good picture -- in this case, your mix. If a mix can't stand on its own, without effects, the chances are that it won't improve all that much when the effects are added. On the other hand, get the basics right, and the right effects will almost suggest themselves.




If you listen to a selection of mixes from respected producers, you may be surprised at the apparently limited use of effects. This is because a good producer knows when to leave an effect out as well as when to put one in. Vocals will be treated with reverb, but not to the extent that they are rendered unintelligible or pushed back in the mix, and the rhythm section will usually be tight and crisp, with plenty of space. Pads are mixed well back, so as not to conflict with the main melody or vocal line, and such effects as are used are applied only after consideration of what instrument is playing, what else is playing at the same time, and how much space there is left in the mix to work with. A useful tip here is that stereo reverb doesn't always have to be used in stereo. If you want to pinpoint a sound in a mix, pan the reverb to the same point as the original sound, or to create more movement, put the dry sound over at one side of the mix and all the reverb over at the other!


Wednesday, January 29, 2014

Using Amps As Effects

Tips & Techniques

Technique : Effects / Processing


We've got so used to DI'ing keyboards that miking up an amp is something that never occurs to some people. PAUL WHITE explores the benefits of getting out the mics and plugging in the amp.

All naturally occurring sounds are coloured by their environment, and we associate certain types of sound with specific acoustic spaces. For example, a church choir only sounds right within the acoustic confines of a church or cathedral, an underground busker desperately needs the acoustic of a Bakerloo Line tube station -- and the speaking clock would sound totally out of place on a full-bandwidth hi-fi. In the studio, we usually use effects to simulate a plausible environment for DI'd sounds, the most obvious choice of effect being reverberation. Even so, the result often turns out to be more impressive as a spectacle in its own right than an accurate emulation of nature. Perhaps it's this lack of a breathing, organic environment that has sent people chasing old technology, such as tape echo units or valve processors, to try to put some of the character back. The philosophical implications of electronic instruments provide enough material to fuel many a closing-time conversation, but the purpose of this article is not to bury Caesar -- it's simply to borrow a few ears.

The key to the 'organicness' (or lack thereof) of electronic instruments is largely down to the amplifier and loudspeaker system used to reproduce them, and to the acoustic environment in which that amplifier is placed. If the instrument is DI'd, then the performance loudspeaker is the studio monitor or end-user's hi-fi system (which are both designed to deliver a nominally uncoloured sound), and the acoustic environment can be anything from a studio control room to a bedsit in Putney. In other words, by DI'ing the instrument, you completely bypass the organic quality that comes from live performance in a specific acoustic environment. Trying to put back those missing components using effects can only be partially successful, because the fractal nature of real life is infinitely more complex than the algorithmic nature of digital effects. It's hard enough trying to make a recorded acoustic sound appear convincing when the holophonic soundfield of reality has to be replaced by the dual point-source compromise of stereo loudspeakers, but it's even harder when the sound source you're reproducing never existed in the real world at all.

To be fair, there are occasions when a DI'd keyboard works on an artistic level; we're used to hearing music made that way, so our frame of reference is already based on artificial values, and because we've been listening to digital reverberation for the past 15 years or so, that also forms a part of the listening experience against which we tend to judge new work. Having made that point, all but the most ardent electronic music protagonists seem to agree that a piece of music sounds far more human or organic if at least some of the instruments are real rather than being all synthesized. It doesn't take much -- just one electric or acoustic guitar added to a totally synthesized, sequenced composition can make a huge change to the way that the music is perceived.




It could be argued that a DI'd keyboard isn't a real instrument at all, because the sound does not exist until it reaches the listener's loudspeaker system; it might be more appropriate to call it a virtual instrument. The line isn't quite so clear-cut with a sampler, because it is possible to sample an acoustic instrument along with the ambience of its environment, but unless each note is separately sampled, the perceived environment will change as the pitch of the sample is changed. For example, if you take one sample with reverb already present, and use it over one octave, then the lowest note will have twice the reverb time of the highest note, and will appear to have been recorded in a room of twice the size. This being the case, you can see the logic in sampling sounds fairly dry and then adding sound processing afterwards; the effects may be artificial, but at least they'll be consistent, regardless of the note being played.
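The arithmetic behind this pitfall is simple: resampling a note up or down changes the length of everything in the recording, reverb tail included. As a minimal sketch (the helper name is illustrative, not from any particular sampler):

```python
def perceived_reverb_time(original_rt_seconds, semitones_up):
    """Reverb time heard when a sample is transposed by resampling.
    Playing back faster shortens the embedded tail; slower lengthens it."""
    rate_ratio = 2 ** (semitones_up / 12)  # playback-rate change
    return original_rt_seconds / rate_ratio

# A 1.5-second tail captured in the original sample:
print(perceived_reverb_time(1.5, 12))   # one octave up: tail halves to 0.75s
print(perceived_reverb_time(1.5, -12))  # one octave down: tail doubles to 3.0s
```

Over the octave range described in the text, the lowest note's tail really does come out twice as long as the highest note's.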

But there is a simple way to change a virtual instrument into a real one: plug it into an instrument amp and mic it up in a sympathetic acoustic environment. To the purists who would claim that an electronic instrument is basically a machine and so can never be classified as a 'natural' instrument, no matter what you plug it into, my response would have to be that all musical instruments are the product of the technology prevailing at the time they were invented. If the digital synth doesn't qualify, then neither does the violin or the piano. In fact, the only 'natural' musical instrument, by definition, must be the human voice. My own view is that a synth plugged into an amp qualifies as a performance instrument, so I'll continue on that basis.




The wonderful thing about instrument amplifiers is that they do so much more than simply make a sound louder. All the best-loved instrument amps, whether for guitar or keyboard, introduce their own subtle (or not so subtle) distortions and colourations. They all have their own distinctive tone circuits, may use valves which exhibit interesting non-linearities, and may include transformers, which can do wonderfully constructive things to a sound. All these 'attributes' have the hi-fi purists in tears, but the whole point is that amps don't just reproduce a sound, they help to create one!

Perhaps even more significant is the loudspeaker system used, and while the better keyboard amps are now a cross between a big hi-fi speaker and a small PA system, it is possible to use different speaker systems to create specific tonalities. Before modern full-range keyboard amps were developed, keyboard amps were more like guitar amps, often fitted with 12-inch speakers and no tweeters. This limited the high frequency response of the systems quite severely, and had the effect of rounding off harmonically rich tones. At the same time, the open-backed speaker cabinets so popular at the time behaved most irrationally at low frequencies, and created a deep, pleasing -- and totally inaccurate -- bass end.


"Ambience always sounds more satisfying in stereo than in mono, so try to mic up your amp in stereo wherever possible."

Such guitar-style keyboard amps tend not to be made any more, because there are occasions when you want to hear the fine edge of a bowed string, the rasp of a sax, or the breathy chiff of a piece of hollow tree with an oriental gentleman blowing meaningfully into the end of it -- but other sounds cry out for the 'old amp' treatment.

While keyboard amps have changed dramatically over the years, with a host of technological improvements, guitar amp manufacturers panic every time technology accidentally improves the sound, and then spend fortunes trying to get the new technology to sound the way the old valve circuits sounded in the '50s. Ironically, the valve amps of the '50s and '60s only sounded the way they did because the technology wasn't then available to make them sound any more accurate, especially when it came to loudspeakers. Designers probably stayed awake at night wondering how to reduce the horrendous level of distortion their circuits generated and improve the bandwidth and power handling of their loudspeakers! I wonder if future generations will modify their high-definition TVs to give that slightly fuzzy, 'painting by numbers' look that you get from an early video machine with worn heads?

Fortunately, because there are so many technically awful, sonically wonderful guitar amps around, you can have a lot of fun by plugging a keyboard into one. For example, if you have a digital keyboard pad sound that seems a bit too thin and is a little too gritty around the edges, simply plugging it into a guitar combo will filter all the edge out of the sound without actually making it seem dull, and the uncontrolled speaker response at the lower-mid and bass end will fatten the sound up quite nicely. You could of course do a similar thing using a guitar preamp and a speaker simulator, but then you'd lose the opportunity to mic it up and add a little real-world ambience.




So far, I haven't mentioned the mics or where you might want to put them. Ambience always sounds more satisfying in stereo than in mono, so try to mic up your amp in stereo wherever possible. An ordinary domestic room with the carpet rolled up and the major soft furnishings removed will produce plenty of ambience, while a completely empty room, concrete stairwell or glass conservatory can sustain several seconds of reverb.

Ideally, set up the room and the mics so that you get slightly less reverb than you need, because you can always lengthen the reverb time by adding a little extra artificial reverb. Whilst you might think that this contradicts everything I've said in this article, artificial reverb added to natural ambience tends to sound much more natural than artificial reverb on its own. Additionally, if you can only mic up the amp in mono, then a touch of artificial reverb will help restore the illusion of space, though it's seldom as convincing as starting with a true stereo source.

On the subject of the mics themselves, if you're miking a small combo, a modest dynamic microphone should work fine, because it will have a significantly greater bandwidth than the loudspeaker it's 'listening' to. Even so, every mic sounds different, so the characteristics of the microphone become an integral part of the instrument. If you have several mics to choose from, try as many as you can to see which gives the best subjective sound, and though 'serious' stereo miking demands that you use an identical pair of microphones, in practice, you can use quite different mics and still get an artistically valid result. You have to remember that we're not so much interested in accuracy -- just in getting a sound we like.

Placing the mic(s) close to the speaker grill will exclude most of the room ambience, but it will produce a focused, punchy sound which will cut through a mix without sounding edgy. Moving the mic back four or five feet will yield a softer, less upfront sound with the room ambience making a greater contribution. In a room with lots of hard, reflective surfaces, it's even worth pointing the mic at these surfaces, and not at the amplifier at all. Experimentation is the key to success here, and the best way to get results quickly is to put on a pair of high-quality sealed headphones to monitor the output from the mic(s), and then wander around the room with the mics until you find the magic spots that deliver the sound you want. You don't even have to obey the rules of symmetry in stereo miking; you can use one close mic and one distant mic, or two mics pointing in quite different directions. All that matters is that the result works, though it is a good idea to press the mono button occasionally, just to make sure that phase cancellation doesn't screw up your sound.




One of the great things about recording sound is that absolute sound levels have very little meaning. A tiny amplifier can be made to sound huge simply by winding up the level in the mix, while a steaming Marshall stack can be pushed right to the back simply by pulling down a fader. Small practice amps often sound wonderful when miked up, and because they're not as loud as a performance amplifier, you don't have as many problems with isolation if the amp is running in the next room while you're trying to mix. Stories abound of famous musicians using the Tandy Microamp (which is little more than half an intercom in a plastic box) to record everything from guitar to harmonica. And while we're on the subject of guitars, if you need to create an over-driven guitar effect from a synth, how better to do it than plug it through a guitar amp and turn up the overdrive? This invariably sounds better than the digital distortion effects built into synths.

If you have a small combo or practice amp, you can also experiment by connecting it to different speakers -- even the most unlikely combinations can work extremely well. For example, an old TV or car radio speaker might distort in a particularly vigorous and interesting way when driven hard. Similarly, if you're after a boxy sound, don't resort to EQ straight away -- stick a small speaker inside a tea-chest or large cardboard box with a mic, and go for the real thing!

I'll finish off where I came in, by saying that the amp isn't just something to make the sound louder, it's a significant part of the instrument, and if you DI everything as a matter of course, you could be throwing away the best part of your sound without even knowing it.




Contrary to what you might think, you don't need lashings of cavernous acoustic reverb to give your sound that stamp of authenticity. Indeed, the room ambience may be so subtle that you don't realise there's any reverb there at all, but your brain will recognise it and respond to it. To illustrate this point, think about how the human voice sounds in a typical living room. It isn't obviously reverberant, but it sounds right. If the same voice were to be heard in an anechoic chamber, where all sound is absorbed, it would sound completely different and disturbingly unnatural. And DI'ing an instrument is, of course, exactly the same as listening to its acoustic counterpart in an anechoic room.




It's all very well trying to create the perfect sound in isolation, but there's always a danger that when it's heard in the context of a mix, it won't quite work and it will be beyond the capabilities of mere EQ to remedy matters. One way round this is to DI the basic sound (or sequence if synchronised to tape), but when you come to mix, feed the keyboard sound directly to your guitar amp (set up in another room), set up the mics and feed what they pick up back into the mix. Now, if the sound doesn't work, you have the opportunity to move the mics around, try different ones altogether, or make the room more or less ambient by changing the amount of soft furnishings, or by introducing reflective surfaces such as hardboard. Another benefit of working this way is that you don't have to record the sound to tape at all (if you're using a sequencer), or you only need to record it in mono if you want to play the part live to tape. If you have a limited number of tape tracks to play with, this can be a major consideration.

Soundfield UPM-1 Plug-In - IBC 2010

Getting The Best From Your Leslie Simulator

Tips & Techniques

Technique : Effects / Processing


Using a Leslie no longer means dragging around a large wooden cabinet crammed with rotating speakers. More and more modern units and simulations are now available, and NICK MAGNUS dons his prog rock mantle to explain how to get the most from them.

Leslie simulations are now to be found within many multi-effect units, such as the Alesis Quadraverb and Boss SE70, or built into the effect sections of instruments like the Korg Wavestation and the Roland JV1080. They tend to differ greatly in effectiveness and authenticity from device to device, but the most authentic incarnations seem to be found in dedicated units. The best of these (subjectively, of course) are Dynacord's analogue CLS222 and the digital DLS223, with excellent offerings to be found in the Korg G4 and Roland's SDX330. Among the best examples built into instruments is that of the Roland VK1000 organ, which sports a highly editable rotary effect, and the aforementioned Korg Wavestation also scores high marks for its version. These last two examples, however, are confined to use only on the host instrument, unless you are using a Wavestation A/D, which has analogue inputs enabling the treatment of sounds from the outside world.




One of the principal characteristics of a Leslie cabinet is its tonal quality. Much more than just a pair of revolving speakers, the Leslie is a solid cabinet with gears, clutches and relays, louvred ports, and often a built-in pre-amplifier. It has a distinct tonal fingerprint which is unlike a regular speaker enclosure. Most importantly, the frequency responses of the upper 'horn' speaker and the lower 'drum' are completely separated by a crossover unit. They have no frequencies in common, which means that the rotary movements of each speaker are clearly distinguishable, with no 'mushing' in the middle audio spectrum. This implies that there is a substantial notch in the mid-range response. Another consideration is high-frequency content; the Leslie cabinet does not reproduce the same full range as a hi-fi speaker, and there is some attenuation at the very high end -- hardly surprising, as the rotating horn is shut in a box with only louvred slats as an outlet for the sound.

Referring back to our electronic simulations, the Dynacord CLS222 is supremely successful in the above areas. Listening to its horn and bass signals individually reveals no common frequencies, and whether by accident or design, the upper end is suitably attenuated. Stopping the rotor movement and listening to the sound with the effect switched in reveals a notched-out quality similar to that produced by the real thing with its rotors stationary. The newer digital units could be said to suffer, if anything, from their 20Hz-20KHz specification. The signal comes out as bright as it went in, and some tend to exhibit a wide band of frequencies common to both upper and lower speakers; neither of these properties are necessarily desirable for an authentic rotary simulation. Laudably, the Korg G4 is bestowed with a speaker simulator setting to correct the high end.

The sound of some units can therefore be improved by judicious use of a stereo graphic equaliser, preferably one with 1/3 octave increments. The mid-range notch we need to create will be somewhere between 200Hz and 1KHz. To narrow down the search:

• First isolate the bass signal, if the rotary unit allows this.

• Whilst holding an organ chord somewhere around middle C, start lowering the sliders on the graphic, starting at the top end, until you detect that the upper harmonics are being attenuated.

• Make a note of this frequency and reset the graphic to a flat response.

• Now do the same listening to the horn signal only; this time the graphic's sliders will be lowered from the bottom upwards, until attenuation of the lower harmonics becomes obvious.

• Note this frequency. You should find that the first figure is higher than the second, and it is the frequency range between those two noted values that we wish to eliminate.

• Do this by lowering all the relevant graphic sliders. If this sounds too severe, try arranging the sliders in a truncated 'V' shape.

• To top it all off, try a very gentle roll-off starting around 6KHz, although the exact frequencies and the amount of reduction are ultimately down to experimentation and your own judgement.
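The final slider settings from the steps above can be sketched as a per-band gain map. This is only an illustration of the procedure, not a preset: the band list, notch depth and roll-off step are all starting points for your own ears to refine.

```python
# Third-octave centre frequencies covering the range discussed in the text.
BANDS = [200, 250, 315, 400, 500, 630, 800, 1000, 1250, 1600, 2000,
         2500, 3150, 4000, 5000, 6300, 8000, 10000, 12500, 16000]

def graphic_eq_gains(bass_top_hz, horn_bottom_hz, notch_db=-9.0,
                     rolloff_above_hz=6000, rolloff_step_db=-1.5):
    """Gain (dB) per band: a notch between the two frequencies noted in the
    steps above, plus a gentle roll-off above ~6kHz."""
    gains = {}
    for f in BANDS:
        gain = 0.0
        if horn_bottom_hz <= f <= bass_top_hz:
            gain = notch_db  # the mid-range notch between bass and horn
        if f > rolloff_above_hz:
            steps = sum(1 for b in BANDS if rolloff_above_hz < b <= f)
            gain += rolloff_step_db * steps  # deepening HF roll-off
        gains[f] = gain
    return gains
```

For the truncated 'V' variant, you would taper the notch depth towards its edges rather than cutting every band in the range by the same amount.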




The obvious use of the fast/slow speed control of a rotary speaker is to add excitement to a performance. Not that I would dream of lowering the tone of a serious article by using words like 'organ' and 'climax' in the same sentence, but dynamics are, essentially, what we're talking about. Rather than using the speed control indiscriminately, it makes good musical sense to follow the phrasing and structure of the music -- you could compare the approach to going through the gears in a sports car. You can induce a sense of urgency as you progress through a verse by inserting a few brief accelerations at the end of each phrase, becoming slightly longer each time, until the end of the last verse line, where you crank the Leslie up to full tilt just as the chorus begins, holding it briefly and allowing the speed to slow down on a long sustained chord. The final chorus, when everything is probably playing at its loudest, may be an appropriate time to let the Leslie stay at its fastest speed.




Some Leslie simulators offer a choice of on- or off-mic positions, notably the Dynacord DLS223, Boss SE70, Korg G4 and Roland SDX330. What exactly does this mean? When miking a real Leslie, the microphone position can drastically influence the sound, as you might expect. As well as the rotary effect, the cabinet also radiates a straight, unmodulated sound which is always present to some degree. This is because the speakers themselves do not rotate; it is only the horn and bass drum projecting the sound that actually move. Thus the closer the mic is to the rotating source, the more exaggerated are the volume and tonal sweeps. If the mic is moved away, it picks up more of the straight sound being reflected off the walls of the room, which is also mixed with delayed reflections from the rotors. This has the effect of smoothing out the overall intensity of the Leslie sound. You would use the off-mic position to simulate a 'clubby' sort of sound, while the on-mic position is often favoured for a heavy, demonstrative rock approach.

For stereo units which have no on/off mic facility, such as the CLS222, an effective way of exaggerating the rotary effect is as follows:

• First, try muting one side of the stereo picture. You will notice that the resulting modulations are now very intense. This is because the panning of the sound to the muted channel causes it to disappear briefly.

• Now unmute the channel with its fader set at zero and gradually reintroduce that signal into the mix until the modulations have smoothed just a little. The stereo picture will be lopsided (assuming that the desk pans were hard left and right), so you'll probably want to bring it a bit closer in. You will naturally be sacrificing a little stereo width in doing this, but it will help to blend the overall sound.




• If your unit has a treble/bass balance control, use it to help the sound sit with the other instruments in the track. Although a very full-sounding organ or guitar may be great on its own, it can bulldoze an ungraceful path through the mix. Setting the treble horn to be quite dominant means that the instrument takes up less sonic space, but still retains a sense of power, and the rotary movements are pleasingly prominent.

• Percussion sounds, particularly high-frequency ones like hi-hats, cabasa and shakers, can also benefit from this type of effect. Try these with the bass balance turned right down so that only the panning movements of the treble speaker have any influence. Or, with the treble/bass balance set at 50/50, combine them with lower frequency percussives, such as surdo, Indian drums or similar. These opposing types of sound will pan independently in accordance with the speaker movements.

• The rotor stop setting, too, can be used to dramatic effect. Try slowly fading in an organ chord with the rotors stationary, then at the crucial moment, kick the speed up to fast, and as soon as full speed is reached, slam on the brakes to the slow position, settling down into a glorious swirl.

Tuesday, January 28, 2014

SSL MADI-X8 - IBC 2010

Compressing Your Mix

Tips & Techniques

Technique : Effects / Processing


PAUL WHITE proves once again that nothing succeeds like excess. This time he patches in two compressors and a limiter to deal with a minor dynamic range problem...

I presided over a session recently where the client brought back an album production DAT of pop songs I'd compiled for them, because they were worried that the music didn't sound loud enough when compared with commercial recordings. It wasn't simply a matter of increasing the level on tape, because the peaks were coming within 3dB of digital clipping -- it was, of course, a matter of dynamic range. In other words, their loudest bits were as loud as anyone else's loudest bits, but the average level of the material was well below what you'd expect from a commercial album of similar material. You might imagine that dynamic range would be no problem, as everyone has a compressor in their studio -- but in practice, a great many recordings are compressed on an individual track basis, with more compression being added to the overall mix.

If you know that you are going to compress the overall mix, it makes sense to patch in the compressor at the outset, so that you can hear the effects of the compressor as you mix. The reason behind this is that compression can change the subjective balance of a mix, so if you finish your mix before thinking about compression, you could end up with problems. However, this is exactly the situation I was confronted with -- a completed DAT tape that could obviously only be saved by being compressed. I turned to my Drawmer rack, although the methods outlined here could be put into practice using virtually any reputable compressor -- so don't stop reading if you're not a Drawmer fan.

I decided that the 1960 valve compressor would be a good starting point, because it is a soft-knee compressor, which makes it less obtrusive in operation than a fixed-ratio type. As a bonus, the valve circuitry also adds a touch of magic to the sound, but you could use any decent soft-knee compressor. Using a fast attack and a relatively slow release time (around two seconds, at a guess), I set the threshold to produce around 6dB of gain reduction on the signal peaks. This isn't a great deal of reduction, but then I didn't want the signal to sound too squashed.

Soft-knee compressors may be unobtrusive, but what they gain in subtlety, they lose in assertiveness, and occasionally a peak comes crashing by that they don't stamp on nearly firmly enough. To solve this problem, I patched in a second compressor, with a ratio of 10:1, set to Auto attack and release mode. You could use either another soft-knee compressor (so long as it has a variable ratio), or even a standard hard-knee device here, and if you only have a manually controlled model, go for a fast attack and a release of half a second or so. The threshold was adjusted to produce about 4dB of additional gain reduction on the signal peaks.

A final line of defence came in the form of the DL241's limiter, which is definitely of the 'Thou shalt not pass' variety, and by deliberately driving the limiter so that it was permanently on, I could easily set the recording level on the target DAT machine to fractionally under full scale. Once this was set, the compressor gain was reset so that the peak LED only flashed briefly on the very loudest signal peaks I could find. In theory, the limiter shouldn't have to do anything, but I wanted it there just to make sure that the DAT could never go into overload. Pausing to check that both compressors were set for stereo link operation, I set about copying the original DAT to a new tape. Because this was a compiled album with pauses, I could have used the 241's expander gate to keep the pauses noise-free, but as it turned out this wasn't necessary, as the compiled master tape had absolute silence in the gaps anyway, so compressor noise wasn't a problem.
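The static gain behaviour of this three-stage chain can be sketched in a few lines. The thresholds below are illustrative (the article quotes gain-reduction amounts, not thresholds), and the gentle first stage is approximated here with a low hard-knee ratio rather than a true soft knee:

```python
def compress_db(level_db, threshold_db, ratio):
    """Static hard-knee compression curve, working entirely in dB."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

def mastering_chain(level_db):
    stage1 = compress_db(level_db, threshold_db=-12.0, ratio=2.0)  # gentle squeeze
    stage2 = compress_db(stage1, threshold_db=-6.0, ratio=10.0)    # firm peak taming
    return min(stage2, -0.3)  # brickwall limiter just under full scale

print(mastering_chain(-20.0))  # quiet material passes unchanged: -20.0
print(mastering_chain(0.0))    # a full-scale peak ends up at -6.0
```

Because the second stage only bites on what slips past the first, and the limiter only on what slips past both, no single stage has to work hard, which is why the combination stays transparent.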

The result was better than I could have hoped for; there was none of the dulling of transients that can occur when a single compressor is used to heavily compress a mix, and there was no pumping, yet the mix felt more solid, more cohesive and more confident than the original. Apart from applying less than half a dB of EQ cut at 4kHz, to tame a little vocal sibilance, no further processing was needed, and the clients were relieved that their tape could go on to the CD pressing plant without further delay.

This is just a one-page article, so I've got to sign off before I collide with the page logo, but do give this a go -- you might be surprised at how much more punchy and together your mixes sound.




I put the success of this operation down, not only to the quality of the compressors used, but also to the use of two separate stages of compression backed up by a 'watchdog' limiter. The first compressor provided the bulk of the gain reduction, and so had to be unobtrusive, which is why a soft-knee model was chosen. The second compressor comes in only to deal with peaks that the first fails to bring under control, so you can afford to be a bit more heavy-handed. Again, either a soft- or hard-knee compressor can be employed, though the type with a variable ratio works best.

While this combination of compressors works fairly transparently, the same can't usually be said of limiters, so it is imperative that the limiter is set to cut in rarely, if ever -- it's simply there as a last resort.

Vinten Vision Blue - IBC 2010

Monday, January 27, 2014

How Enhancers Work



 Tips & Techniques

Technique : Effects / Processing


Exciters and enhancers are often mentioned in SOS, but what exactly do they do to your sound, and how do the various types differ? PAUL WHITE explains.

There are often recording or mixing situations in which simple equalisation can't produce the tonal changes we're after, and it is in such situations that we might turn to enhancers or exciters to find a solution. But how can they help when traditional EQ fails?

Ordinary equalisers work by cutting or boosting a part of the audio spectrum to alter the overall spectral balance, which is why EQ can help us brighten sounds, bring up the bass or bring down the mid-range. Most of the time, this is exactly what we want to do, but there are limitations, the main one being that an equaliser can only boost frequencies that are already there. There's often a temptation to turn up the treble control in an attempt to brighten a sound that contains absolutely no high frequencies at all, which just results in more hiss! This is often the case with miked-up bass guitars, dull old electric pianos, and people with very smooth voices.

Another limitation is that when you add boost, it's there until you turn it off again. This may seem obvious, but if you're working on something with a lot of dynamics, such as a drum track, you might like to be able to apply some tonal boost only to the individual drum beats. One process that can achieve this effect is dynamic equalisation, where the amount of tonal boost varies according to the dynamics of the signal being processed. This allows extra bass (for example) to be added to bass guitar and bass drum sounds in a mix without making the sounds in between the beats too bottom-heavy. Conversely, additional brightness can be achieved by adding a dynamic, high-frequency boost to sounds such as snare drums or cymbals. Such dynamic effects are quite dramatic, because they increase the tonal contrast within the music, rather than treating the whole mix in the same way. Most exciters or enhancers combine elements of dynamic equalisation with other processes, including harmonic synthesis and phase manipulation (see the 'Psychoacoustics' and 'Just A Phase' side panels elsewhere in this article for some of the theory behind sound enhancement). Not all manufacturers use the same combination of principles, which means that each type of enhancer has its own characteristic sound. The purpose of this article is to look at some of the more popular models and to see what they actually do to the sound.
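The dynamic-equalisation idea can be sketched as an envelope follower scaling a high-band boost, so that brightness is only added when the signal is loud. All the coefficients here are illustrative, not taken from any real unit:

```python
import numpy as np

def dynamic_hf_boost(signal, boost=2.0, release=0.995):
    """Toy dynamic EQ: boost the high band in proportion to signal level,
    so transients get brighter while quiet passages stay untouched."""
    signal = np.asarray(signal, dtype=float)
    out = np.empty_like(signal)
    lowpass = 0.0
    envelope = 0.0
    for n, x in enumerate(signal):
        lowpass = 0.8 * lowpass + 0.2 * x           # crude one-pole low-pass
        high_band = x - lowpass                      # what's left is the top end
        envelope = max(abs(x), release * envelope)   # peak follower, slow release
        out[n] = x + boost * envelope * high_band    # boost scales with loudness
    return out
```

A drum hit drives the envelope up, so its own high frequencies get emphasised; the quieter material between beats leaves the envelope to decay and passes through almost untreated.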




Aphex was the first company to market enhancers, and they claim that their Aural Exciter was discovered quite by accident, when a stereo valve amplifier kit was wrongly assembled. One channel worked properly, but the other produced only a thin, distorted sound. To their surprise, adding the two channels together produced a result that sounded cleaner and brighter than the original. After they had spent considerable time figuring out why this was, they formed a company to exploit the discovery. The first commercial Aphex processor was shrouded in secrecy, and anyone wanting to use it on record had to hire the unit from Aphex and pay a royalty based on the length of the recording. Today, Aphex exciters may be bought and used just like any other processor.

Most of what comes out of the output of an Aphex exciter is exactly the same as what goes in at the input, but some of the input signal is diverted, via a side-chain and a high-pass filter, into a harmonics-generating circuit. The high-pass filter is necessary to remove unwanted low frequencies which, after processing, might result in a muddy or discordant sound. The filtered signal is then processed dynamically to add phase shift and to create synthesised harmonics which are musically related to the original signal. A small amount of this signal is then added into the output, which has the effect of reinforcing and emphasising transient detail without significantly increasing the signal level.
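The signal flow just described can be roughed out as follows. This is only the general idea, not Aphex's patented circuit; the one-pole filter, tanh harmonic generator and all parameter values are stand-ins:

```python
import numpy as np

def exciter(signal, sample_rate, cutoff_hz=3000.0, drive=4.0, mix=0.1):
    """Side-chain exciter sketch: high-pass filter, soft nonlinearity to
    synthesise related harmonics, small amount blended back with the dry
    signal."""
    signal = np.asarray(signal, dtype=float)
    dt = 1.0 / sample_rate
    rc = 1.0 / (2.0 * np.pi * cutoff_hz)
    alpha = rc / (rc + dt)
    highpassed = np.empty_like(signal)
    highpassed[0] = signal[0]
    for n in range(1, len(signal)):
        # one-pole high-pass: removes the lows that would turn muddy
        highpassed[n] = alpha * (highpassed[n - 1] + signal[n] - signal[n - 1])
    harmonics = np.tanh(drive * highpassed)  # harmonics of the top end only
    return signal + mix * harmonics          # dry signal barely changes level
```

Note how small the mix figure is: the effect works by reinforcing transient detail, not by making the whole signal louder.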

Though the Aphex principle is patented, a number of companies have produced enhancers that work by similar means and produce similar subjective results. Aside from routine track or mix processing, this type of processor is useful for restoring high-frequency detail that has been lost after processing with a single-ended noise reduction system, or for producing master tapes for cassette duplication, where some high end is invariably lost in the duplication process itself. The Aphex process is also effective for creating an intimate vocal sound, because the enhancement process simulates the way a sound is perceived when the source is in close proximity to the listener.




BBE is one company that took a different route from Aphex in developing an enhancer; the BBE Sonic Maximizer works not by adding harmonics, but by introducing phase changes and dynamic equalisation, which simply redistribute those harmonics already present. The process works by first splitting the audio signal into three frequency bands and applying different time delays to each band by means of passive and active filters. Frequencies below 150Hz are delayed by around 2.5ms, while those between 150Hz and 1200Hz are delayed by around 0.5ms. Frequencies above 1200Hz are not delayed, but are subjected to dynamic level control, which can take the form of compression or expansion, depending on the control settings and the nature of the input signal. The BBE process is also able to influence the low-frequency end of the spectrum by means of a Lo-Contour control, allowing the sub-200Hz band to be cut or boosted over a range of -12dB to +10dB.
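The band-split-and-delay arrangement can be sketched as follows. The band edges (150Hz and 1200Hz) and delays (2.5ms and 0.5ms) come from the description above; the brick-wall FFT filters, and the omission of the dynamic high-band level control, are simplifications of mine rather than BBE's analogue design.

```python
import numpy as np

def bbe_style(x, fs, lo_delay_ms=2.5, mid_delay_ms=0.5):
    """Split into three bands, delay the lower two, and recombine."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    lo = np.fft.irfft(np.where(f < 150, X, 0), len(x))
    mid = np.fft.irfft(np.where((f >= 150) & (f < 1200), X, 0), len(x))
    hi = np.fft.irfft(np.where(f >= 1200, X, 0), len(x))
    d_lo = int(fs * lo_delay_ms / 1000)    # ~2.5ms in samples
    d_mid = int(fs * mid_delay_ms / 1000)  # ~0.5ms in samples
    lo = np.concatenate([np.zeros(d_lo), lo])[:len(x)]
    mid = np.concatenate([np.zeros(d_mid), mid])[:len(x)]
    return lo + mid + hi  # high band passes through undelayed
```

With both delays set to zero the three bands sum back to the original signal exactly, which is a useful sanity check on any band-splitting scheme.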

In a typical BBE unit, the Lo-Contour control is used to bring up the bass, while the Definition control brings in the high-end enhancement. The subjective result is quite different to that produced by the Aphex unit, as new harmonics are not being added; the level of the existing ones is being modified. The result is very smooth-sounding, but on material that is seriously lacking in top end in the first place, the process seems incapable of restoring it to the same extent that the Aphex process can. However, where the original material is of good quality, the BBE process can enhance it considerably without making it sound harsh or aggressive. On most material, the overall sense of brightness is definitely increased, and some improvement in subjective transparency is achieved. The dynamic nature of the process is also an advantage when dealing with noisy material, as little or no boost seems to be applied to low-level signals; this helps maintain a good signal-to-noise ratio.




Yet another approach to enhancement comes from SPL of Germany, and although their process (as used in their famous Vitalizer) involves mainly equalisation, the results obtained are quite unlike those obtained using conventional equalisers. The Vitalizer works by first generating a side-chain signal from the main signal; the frequency response of this side-chain signal is then modified both additively and subtractively. Because of the way filters interact, the impression of an increase in both bass and brightness is created when the side-chain signal is added back to the original, while the mid-range is brought into sharper focus, increasing the sense of transparency. Though the SPL enhancement principle is quite complex, part of the process involves adding low-frequency equalisation in such a way that phase cancellation occurs in the lower mid range. This has the effect of simultaneously lifting the bass and pulling back that area of the spectrum that would normally conflict with it, resulting in a very powerful but tightly-controlled bass lift.
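One way to see how adding a phase-shifted low-frequency side-chain can lift the bass while cancelling in the lower mids is to compute the response of the dry-plus-side-chain sum analytically. This is a guess at the kind of mechanism described, not SPL's actual circuit: the 100Hz second-order low-pass and unity depth are invented purely for illustration.

```python
import numpy as np

def vitalizer_bass_response(f, fc=100.0, depth=1.0):
    """Magnitude response of x + depth * LP2(x). The low-pass's phase
    lag makes the side-chain reinforce deep bass (small lag) but
    partially cancel the lower mids (lag beyond 90 degrees)."""
    h = 1.0 / (1 + 1j * f / fc) ** 2  # two cascaded one-pole low-passes
    return np.abs(1 + depth * h)      # dry plus side-chain, summed
```

Evaluating this at 50Hz and 250Hz gives roughly +4dB of lift in the bass against a dip of around a decibel in the lower mids, which matches the 'lift the bass, pull back what conflicts with it' behaviour described above.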

At the high end of the spectrum, a Harmonics control is used to pull out transient detail through a combination of EQ and (possibly accidental) harmonic synthesis. The effect is apparently created by a filter circuit employing fourth-order filters, but this appears to generate harmonics almost as a by-product, due to the nature of the components used.

It is possible to isolate the processed signal using a Solo button, which not only allows the user to check how much processing is actually taking place, but also provides a means of using the effect via the aux sends and returns on a mixing console. A Process Depth control determines how much of the output from the sub-bass and mid-high filters is added back to the original sound. This has no effect on the Harmonics control, which operates independently. A tuning control defines the area of the mid range that will be processed, and also affects the operation of the Harmonics processor, which derives its input partly from the untreated signal, and partly from the output of the mid-high filter.

The Bass Process is interesting, in that the control has a centre-off position and produces two distinct sound characters depending on whether it is turned right or left from centre. Advanced clockwise, the sound takes on a very tight, punchy feel, while the anti-clockwise direction produces a much more 'rounded', full-sounding bass, but with no apparent spill into the mid range.

SPL include a so-called Surround Processor in their units, but this is independent of the enhancement circuitry, and simply uses the tried and tested principle of feeding phase-inverted signal from the left channel into the right, and vice-versa. It's the same system employed in ghetto blasters, and, if used in moderation, it can help increase the sense of stereo width. It is also fully mono-compatible.
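The cross-feed trick is simple enough to state exactly; the 20 percent amount below is an arbitrary starting point, not SPL's figure.

```python
import numpy as np

def widen(left, right, amount=0.2):
    """Feed a phase-inverted portion of each channel into the other."""
    return left - amount * right, right - amount * left
```

Mono compatibility falls straight out of the arithmetic: summing the two outputs gives (1 - amount) times the original mono sum, so the mono signal is merely scaled, never cancelled, and a source that is identical in both channels passes through still identical in both channels.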

Like the Aphex Exciter, the SPL enhancement process lends a sense of definition and transparency to a mix, the main difference in this area being that the mid range seems better defined too. The effect is like being able to 'hear through the mix' to all its constituent parts. The Vitalizer bass enhancement is impressive by any standards, and could well be used to compensate for the 'thinner' sound of narrow-format or budget digital tape machines. The subjective effect is like that of an exciter that works not just at the top end, but across the whole audio spectrum, increasing the sense of loudness, detail and space. Though the effect may be based on psychoacoustic trickery, the SPL process provides an easy way to add the punch and sizzle to a recording that most people associate with good pop production. Note, however, that like any equalisation process, SPL's system can bring up the level of noise that exists as a part of the programme material, and, as with the other enhancement units available, over-processing the top end can aggravate sibilance problems and highlight any distortion already present. As ever, the key is to use the process in moderation.




One of the latest companies to enter the exciter race is Dolby, who have combined their expertise in filter design with compression techniques to produce a Spectral Enhancer. Rather than adding harmonics or using simple dynamic filters, the Dolby approach relies upon treating a side-chain signal via a bank of complex filters, which modify their characteristics according to the nature of the input signal. The filtered signal then appears to be heavily compressed before being added back into the main signal path. The system has made a strong impression on those who have used it, even though Dolby's unit is the most expensive of the current commercial enhancers. German company Behringer also offer a popular range of enhancers, including the Ultrafex, Dualfex and Bassfex.

It's worth noting that enhancers are now being included in multi-effects units, and although all the original enhancers were analogue, attempts are being made to emulate these digitally. So far, I feel these have only been partly successful, but it's only a matter of time before these digital emulations are the equal of the original analogue processors, so if you need budget enhancement, you might be advised to wait a while. However, at the time of writing, if you need high-quality enhancement, there's no real alternative to buying a dedicated unit.



The easiest way to set up a standard Aphex unit is to first turn the Mix control to full, so that any effect created is over-emphasised. The Drive control can be advanced until the LED meter confirms a suitably high drive level, and then the Tune control can be adjusted by ear. This last control sets the frequency above which new harmonics will be generated; if it is set towards its clockwise extreme, the Exciter's action is confined to the upper reaches of the audio spectrum, meaning that only very bright sounds, such as cymbals, will be affected. Moving the control further down progressively involves more mid-range sounds in the process. Once the filter and Drive controls are set, it is necessary to reduce the Mix setting, so that the enhancement effect is appropriately subtle; comparing the processed signal with the bypassed sound is the best way to verify this.

Note: The Drive control comes before the Tune control in the signal path so, after setting the Tune by ear, it is often necessary to readjust the Drive control for optimum results. More recent Aphex units have dispensed with the Drive control, making them even easier to use.

Of all the types of enhancer currently available, the Aphex process is probably still the most effective for producing the illusion of brightness from a source that is badly lacking in high-frequency content. As the process only emphasises the high-frequency end, some low-frequency EQ may be required to maintain a proper bass/treble balance, and because of this, many modern units include some form of integral bass enhancement system to help maintain a balanced sound.



Various aspects of dynamic equalisation (see above) have been incorporated into enhancers to produce an effect that appears to make everything more detailed, more transparent and louder than before. The reason this works is all bound up with the psychological perception of hearing, or 'psychoacoustics', and although nobody fully understands the subject, there are tried and tested processing tricks that produce a definite and consistent result. One of the simpler psychoacoustic principles is based on the fact that our perception of the audio spectrum changes as sounds become louder. If, for example, we play a record at a very high volume, we tend to hear the high and low frequencies in a more pronounced way, whereas at lower levels, the mid-range is more evident. Simply by using an equaliser to cut the mid range or to boost the high and low extremes, music can be made to sound louder than it really is -- which is exactly how the loudness button works on a hi-fi stereo amplifier.
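The loudness-button idea reduces to a 'smile-curve' EQ: dip the mid band so that the extremes stand out by comparison. The band edges and the 4dB cut below are illustrative values of mine, not taken from any particular amplifier.

```python
import numpy as np

def loudness_eq(x, fs, mid_cut_db=-4.0, f_lo=200.0, f_hi=3000.0):
    """Cut the mid band so the bass and treble extremes seem boosted,
    making the signal sound louder than it really is."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    gain = np.ones_like(f)
    mid = (f > f_lo) & (f < f_hi)          # the band to pull back
    gain[mid] = 10 ** (mid_cut_db / 20)    # dB to linear gain
    return np.fft.irfft(X * gain, len(x))
```

A 50Hz tone passes through untouched while a 1kHz tone drops by the 4dB mid cut, which is the whole effect in miniature.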

The American company Aphex discovered an interesting principle, which was further developed into their Aural Exciter concept. By adding very subtle distortion to the original signal, the signal could actually be made to sound clearer and louder -- but why? The answer is that whenever an audio signal is subjected to distortion, intentional or otherwise, high-frequency harmonics are produced. Normally, these sound pretty unpleasant, as they are not always musically related to the original sound, but by using filters to confine the distortion to a specific part of the audio spectrum, it is possible to create the illusion of additional high-frequency detail without musical dissonance. The most significant feature of exciters or harmonic enhancers is that they can be used most effectively on sounds originally lacking in high-frequency content, because they effectively synthesise a new and musically convincing top end. Further circuitry refinements add a dynamic element to the process, with the result that more harmonic enhancement is added to percussive or transient sounds than to quieter ones. The subjective result is remarkable, producing an audible increase in detail, presence and loudness, even though the level of added distortion is minuscule. However, the user needs to beware of over-processing a signal in this way, as excessive levels of enhancement can result in a harsh, fatiguing sound, making it imperative to use the treatment sparingly.

Dynamic equalisers, on the other hand, produce no deliberate distortion, but on well-recorded material, the result is subjectively similar to that produced by an enhancer, as transient sounds are increased in level by the process. Because no distortion is added, the result tends to be a little less harsh when high levels of processing are needed. However, dynamic equalisers are also less effective on dull material, as they do not have the ability to synthesise missing harmonics.




Both enhancers and dynamic equalisers manipulate the relative phase of various parts of the audio spectrum, and this contributes to their perceived effect. As with most audio processes, this relates to a real-life effect; when sound travels through air, the low frequencies travel slightly slower than the high frequencies, with the outcome that distant sounds are heard with a significant phase difference between the high-frequency and low-frequency sounds. Nearby sounds, on the other hand, are less affected, so the sounds arrive with their phase relationships intact. If an electronic processor is able to delay the low frequencies slightly by means of deliberately introduced phase shifts, it is possible to restore the original phase relationship, making the sound source appear to be closer. This is why processed sounds seem to be very 'up front'.


How To Patch Effects & Processors

Tips & Techniques

Technique : Effects / Processing


The roles of the various signal processors and effects used in audio production are pretty well understood by most musicians, but it's not always obvious where they should be patched into the signal chain to give the best results.
PAUL WHITE explains.

When recording was in its infancy, we were so impressed at being able to record four separate tracks at different times, and then being able to balance the results afterwards, that nothing much else seemed to matter other than, perhaps, a touch of spring reverb. The fact that all this was possible was wonder enough -- then along came digital effects, and the goal posts moved. From then on, we had to contend with aux sends, insert points, and mixing consoles with routing systems so advanced that they rivalled professional desks. Small wonder, then, that confusion sometimes exists over where best to patch a particular piece of outboard gear, and even if you know the ground rules, there may sometimes still be a better way of doing things if you stop to think about it.




For the benefit of those new to the concept of patching outboard gear, I like to define the various pieces of equipment as either Effects or Processors, according to what they do and how they do it. This may seem pedantic, especially when most boxes that don't actually produce noise in their own right tend to get referred to as processors, but if you'll humour me for a while, you'll see that it helps make things a lot clearer. The main reason for splitting boxes up into these two groups is that there are certain restrictions on how processors can be connected, while effects (or FX) enjoy a little more flexibility.

Processors, in this instance, are defined as boxes that take in a signal, do something to it, and produce a modified version of the signal at the output socket. None of the original signal is added to the processed signal, though in the case of compressors and gates, the output is actually a version of the original signal which has had dynamic gain changes applied to it. The only grey area is the exciter which, for practical reasons, is best defined as a processor, even though it mixes both original and processed sound internally.
Processors include:

• EQ

• Compressor/limiters

• Expander/gates

• Panners

• Single-ended Noise Reduction (SNR)
If the device modifies either the gain of a sound or filters it (as in the case of EQ and SNR), it's pretty safe to assume it's a processor.

Effects, on the other hand, are designed to be added to an existing signal, such that the result is a mixture of the original signal and the effect. One rule-of-thumb guide to deciding which is which is to define Effects as those boxes which rely on delay circuitry to work; in other words:

• Echo units

• Delay lines

• Reverbs

• Chorus/flangers

• Pitch shifters

• Phasers (and so on).
You can also be certain you're dealing with an Effect if a knob (or parameter) allows you to set the effect/dry (un-effected) balance.

A dilemma arises when it comes to multi-effects boxes, because most can function as either effects or processors, and a multi-effects patch might comprise both! However, if you visualise the different effects and processors in a patch as physically separate boxes, then apply the following guidelines, you shouldn't go too far wrong.




"Know ye now the greatest secret of all mixers. A mixer shall have knobs upon the uppermost surface, but on the underside, no knobs shall there be. A mixer shall not have knobs upon both the top and the underside, nor shall it be devoid of knobs on all surfaces, for then it will not function as a mixer..." though the next NAMM show could change all that, of course!

On a modern mixing console, the post-fade aux sends, commonly known as effects sends, allow some of the signal passing through a mixer channel to be sent to an effects unit such as a reverb or delay device. The output from the FX device is then brought back into the mixer through a spare channel or an effects return (which is really just another channel but without the frills) to be blended in with the original sound. Because each channel has its own aux send level controls, the same FX unit can be used by several channels simultaneously; the aux send controls simply work as 'more or less effect' knobs, enabling the user to set up different levels of effect for each channel -- and when the channel fader is turned down, the effects level changes accordingly.

The one rule that should be written in stone is that under all normal circumstances, only effects can be patched in via the aux send/return system. If you were to do the same with a processor such as an EQ unit, you'd just be adding EQ'd sound to the original sound, which would, in effect, 'dilute' the effect of the EQ.
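A toy numeric example makes the point. Take a 'processor' that applies a broadband 6dB cut: on an insert you get the cut; on an aux send, the untouched dry signal dilutes it, and the summed result actually ends up louder than before.

```python
import numpy as np

def eq_cut(x):
    """Stand-in 'processor': a broadband 6dB (half-amplitude) cut."""
    return 0.5 * x

dry = np.ones(4)                    # a steady test signal
insert_path = eq_cut(dry)           # processor on an insert: full effect
aux_path = dry + 0.5 * eq_cut(dry)  # processor on an aux send at half level
```

The insert path sits at half the original level, as intended, while the aux path sits at 1.25 times the original: the intended cut has turned into a boost, which is exactly why processors belong on inserts.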

The other main method of patching external boxes into a mixer is to use the channel, Group and Master insert points. An insert point simply breaks the original signal path and routes all the signal through the external device plugged into it, rather like putting a fuzz pedal between your guitar and amp. Processors should always be connected via insert points (unless you achieve the same thing externally, by patching a compressor between the output of a tape machine and the line input of a mixer, for example), though effects may also be patched into insert points as long as you don't want to use the effect on more than one channel or Group of channels. So far, it's all pretty basic, and many of you will know these things already, but for the benefit of the viewers at home, I'd like to 'firm up' a few points before moving on.

• Insert points are invariably presented as stereo jacks wired to carry both the send and return signal, so if you don't have a patchbay, you'll need a Y-lead with a stereo jack on one end and two monos on the other. Again, this might seem obvious, but from the number of calls we get about this, it seems that some confusion still exists.

• Processors must always be used 'in-line' with a signal and not in the effects send/return loop, unless you really know what you are doing and why you're doing it -- in which case you probably don't need to be reading this.

• Most processors work at line level, so you can't plug a mic directly into them. The correct way to compress a mic signal, for example, is to patch the compressor into the insert point of the mic channel. This way, the mixer's mic amp brings the mic signal up to line level before feeding it to the compressor.

• If an effect is used via the aux send/return system, it is normal to set the FX unit's dry/effect balance to 'effect only', in order to allow the console's aux send controls to set the effect balance.

• Some effects, such as phasing and flanging, rely on a precise effect/dry balance which may be better accomplished in the FX unit itself. In this case, either patch the FX unit into an insert point or, if you must use the aux send system, you can either de-route the channel to kill the dry signal, or feed the effects unit from a pre-fade (foldback) send and turn the channel fader right down.

• To use a mono-in, stereo-out FX unit (such as reverb or stereo delay) via insert points, simply route one output of the unit to the insert return of the channel feeding it and the other to the insert return of an adjacent channel. Match the levels, pan one track hard left and the other hard right for maximum stereo effect.

• To use a stereo-in, stereo-out FX unit via insert points, use two adjacent mixer channels panned hard left and right.

• To treat a whole mix, say with EQ or compression, patch your processor into the master insert points. This places your unit in the signal path just before the master stereo faders, which means that if you're using a compressor, it won't try to fight you if you do a fade-out. Similarly, any noise generated by the processor will also be faded as you pull the faders down.

• If you don't have master insert points, you can patch a Processor between the mixer's stereo out and the input to your stereo mastering recorder, but if you want to do fades with a compressor patched in, you'll need to do them using the input level control on the tape machine, not on the desk.




There are two ways to add effects to channels routed via a subgroup; you can either connect the effects unit normally, via the aux sends, or patch the effects unit in via the Group insert points. Working via the aux sends affords independent control over the amount of level added to each channel, but you must route the output of the effects unit to the same Subgroup (or pair, if you're working in stereo), otherwise the effects level won't change whenever you move the Group fader. Of course, this presents a problem when you want to use your one and only reverb unit both on the drum Subgroup and on the vocals, which may be routed to a different Group or direct to the stereo mix. In this case, the only option is to route the reverb directly into the stereo mix and do any drum track level changes by moving all the drum channel faders at once, leaving the Group faders alone. This isn't particularly convenient, but as the eskimo found out when he nearly drowned after trying to keep warm by burning his canoe -- you can't have your kayak and heat it!

Patching the FX unit into the Group insert points offers little advantage because it means everything in the Group gets the same treatment, but if that's what you want to achieve, then doing it this way is likely to give the best noise performance. It's a little-known fact that, because aux sends feed a mix buss, noise is generated which is proportional to the number of mixer channels, even if the aux knobs are turned right down. Using the insert points instead neatly avoids this.

If you want to process a Group, the Group insert points are the only way in. A typical application might be to compress a stereo backing vocal mix, or even a drum mix, which means using a stereo compressor patched into a Group pair. Don't forget to set the compressor to stereo link mode in this instance.




As intimated earlier, when it comes to patching in a composite effect created by a multi-effects unit, it has to be considered as a collection of separate blocks for the purpose of routing. For example, let's assume that you have a compressor patch followed by a reverb; patching this in via the send/return system will leave the dry sound unchanged (because that's coming directly through the mixer channel), but the reverb will be fed by a compressed version of that signal, resulting in a more consistent level of reverb. It's all a matter of thinking about what you actually want to achieve, and then arranging the patching to make it happen.

If a single instrument or tape track needs to be treated using a multi-effects processor, the best noise performance will be achieved either by patching it via the channel insert points or by placing the unit in the signal path between the instrument/tape machine and the mixer input. Many are the occasions on which an effects unit has been blamed for being noisy, when the real culprit is mix buss noise from the aux send system.