Welcome to No Limit Sound Productions

Company Founded
2005
Overview

Our services include Sound Engineering, Audio Post-Production, System Upgrades and Equipment Consulting.
Mission
Our mission is to provide excellent quality and service to our customers, and we offer customized service.

Wednesday, October 18, 2017

Q. What are the characteristics of vintage mics?

By Hugh Robjohns

I've been browsing a vintage microphone site and it got me thinking: what kind of characteristics are actually offered by vintage mics? Can the same sound be achieved with modern mics and EQ? Isn't most of the 'vintage sound' due to tape and valves rather than mics?
The sought-after sound of the classic vintage mics is partly down to the fact that microphones used in professional studios many years ago would have been of particularly high quality to start with — and quality tends to age well.

Via SOS web site

SOS Technical Editor Hugh Robjohns replies: A good vintage capacitor mic sounds much the same as a good modern equivalent, and the same goes for ribbons and moving coils. Having said that, there has been a tendency over the last decade or two to make modern mics sound brighter, partly because the technology has improved to allow that, and partly because of aural fashion.

Also, professional mics that are now considered vintage were usually pretty expensive in their day — studios and broadcasters bought very high‑quality products — and that high‑end quality generally persists despite the age of the microphones.

Most of the vintage mics you'll find on those kinds of sites, though, are either valve capacitor mics or ribbons, and they both have inherent characteristics of their own that a lot of people revere. Ribbons have a delightfully smooth and natural top end, while high‑quality valve capacitor mics often have mid‑range clarity and low‑end warmth. These qualities can still be found in some modern equivalents if you choose carefully.

Some of the vintage character is certainly attributable to recording on tape, replaying from vinyl, and the use of valves and transformers. But some is also down to the construction of the microphone capsules and the materials used, not all of which are still available in commercial products today.


Published January 2011

Monday, October 16, 2017

Q. If speakers have to be 'anchored', why don't mics?

By Hugh Robjohns & Mike Senior

As I understand it, loudspeakers create sound and momentum, which needs to be absorbed in order for the sound quality to be accurate, so we ensure they are braced or fixed to their stands and not wobbling about too much. So surely a mic diaphragm, which is moved by incoming sound, will less accurately represent the sound if the mic casing is not sufficiently anchored. Given that we hang these things from cables, or put them in elastic shockmounts, can you explain to me why this principle doesn't apply?
Is it just to do with acceptable tolerances or is it a trade‑off between picking up vibrations from the stand and capturing the intended sound?

Paul Hammond, via email

SOS Technical Editor Hugh Robjohns replies: In a perfect world, both the loudspeaker and the microphone would be held rigidly in space to deliver optimal performance. However, we don't live in a perfect world. Sometimes a shelf is the most appropriate position for a speaker, but the inevitable down side, then, is that the vibrations inherently generated by the speaker's drive units wobbling back and forth will set up sympathetic resonances and rattles in the shelf, adding unwanted acoustic contributions to the direct sound from the speaker, and thus messing up the sound.
 
We 'decouple' speakers with foam to prevent annoying low‑end frequencies leaving the speakers from reaching the surface they sit on. In the case of mics, we want to stop problem frequencies from reaching them, so we support them in shockmounts.

 
The obvious solution is, therefore, to 'decouple' the speaker from the shelf with some kind of damped mass‑spring arrangement optimised to prevent the most troubling and annoying frequencies (generally the bottom end) from reaching the shelf. This is often achieved, in practice, using a foam pad or similar.

With microphones, we are trying to control energy going the other way. We want to stop mechanical vibrations from reaching the mic, whereas we were trying to stop mechanical vibrations leaving the speaker.

Again, in a perfect world the mic would be held rigidly in space, using some kind of tripod, much like the ones photographers use for their cameras. However, in practice, we tend to place mics at the ends of long, undamped boom arms on relatively floppy mic stands which are, themselves, placed on objects that pick up mechanical vibrations (foot tapping, perhaps) and then pass them along the metalwork straight to the mic.

The obvious result is that the mic body moves in space, and in so doing forces the diaphragm back and forth through the air. This results in a varying air pressure impinging on the diaphragm that the mic can't differentiate from the wanted sound waves coming through the air, and so the mic indirectly captures the 'sound' of its physical movement as well as the wanted music.

The solution is to support the mic in a well‑designed shockmount so that the troublesome (low end, again) vibrations that travel up through the mic stand are trapped by another damped mass‑spring arrangement and thus are prevented from reaching the mic. If the shockmount works well, the mic stays still while the stand wobbles about around it, much like the interior of a car moving smoothly while the wheels below are crashing in and out of potholes!
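If you want to put rough numbers on that 'damped mass-spring arrangement', the isolator's natural frequency and transmissibility tell you how well it will work. The Python sketch below is a minimal illustration only; the mass, stiffness and damping figures are invented for the example rather than taken from any real shockmount.

```python
import numpy as np

# Illustrative values only (not measurements of any real shockmount):
m = 0.5          # suspended mass in kg (mic plus cradle)
k = 200.0        # effective spring stiffness in N/m (elastic suspension)
zeta = 0.1       # damping ratio of the suspension

f0 = np.sqrt(k / m) / (2 * np.pi)   # natural frequency of the mass-spring system

def transmissibility(f, f0, zeta):
    """Fraction of stand vibration at frequency f that reaches the mic."""
    r = f / f0
    return np.sqrt((1 + (2 * zeta * r) ** 2) /
                   ((1 - r ** 2) ** 2 + (2 * zeta * r) ** 2))

print(f"isolator resonance: {f0:.1f} Hz")
for f in (5, 20, 50, 100):          # typical footfall / rumble frequencies
    print(f"{f:5.0f} Hz -> {20 * np.log10(transmissibility(f, f0, zeta)):6.1f} dB")
```

Vibration well above the isolator's resonance is strongly attenuated, while vibration near or below it gets through (or is even amplified), which is why a suspension tuned too stiff can make stand-borne rumble worse rather than better.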

The only potential problem with the microphone shockmount is that it can easily be bypassed by the microphone cable. If the cable is relatively stiff and is wrapped around the mic stand, the vibrations can travel along the mic cable and reach the mic that way, neatly circumventing the shockmount. The solution is to use a very lightweight cable from the mic to the stand, properly secured at the stand to trap unwanted vibrations.


Published February 2011

Friday, October 13, 2017

Q. Where should I put my overhead mics?

By Hugh Robjohns
When recording drums, I really want to get the kick, snare and hi‑hat in the middle of the image, but with a wide spread of cymbals. The snare is placed off to the left of the kick (from the drummer's point of view). I know I need to set my drum overhead mics so that there are no phasing issues with the kick and snare mics, but how do I know where to point the OH mics? For example, if I have two cardioid-pattern mics, should they be pointing straight down, at the snare, or somewhere between the kick and snare — or somewhere else entirely?

Adrian Cairns via email

SOS Technical Editor Hugh Robjohns replies: This is an interesting one because what you are trying to do is distort the stereo imaging of the recording, compared with the reality of the kit setup. And the only way you can do that is by maximising the separation of what each mic hears. That's easy enough with the kick, snare and hi‑hat mics because of their proximity to the sources and the effectiveness of bracketing EQ. The overheads, however, remain more of an issue, because they are naturally going to pick up significant spill from the snare and hi‑hat (you can use bracketing EQ to minimise the kick drum spill, of course).

To achieve your desire of keeping the snare and hi‑hat central in the image you will have to ensure that the overhead mics are equally spaced from those two sources, so that the level and time of arrival of snare and hi‑hat sounds are equal in both mics. With that as a primary requirement, you can then experiment with moving the mics (and/or cymbals) around to achieve the required spread of cymbal sound. Angling the mics, to assist with the rejection of as much snare and hat spill as possible while capturing the wanted cymbals, is also a useful tool, providing you maintain the equal distance so that whatever spill is captured remains central in the stereo image.
To get a particular section of your drum kit central in the stereo image, it is important to set up your overhead mics such that they are equidistant from the relevant sources.
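If you want to check a proposed overhead placement on paper first, the arithmetic is simple: work out the path length from the snare (or hi-hat) to each mic, and any difference translates directly into level and time-of-arrival differences that drag the source off centre. The Python sketch below is purely illustrative, and the kit and mic coordinates in it are made up for the example.

```python
import numpy as np

C = 343.0  # speed of sound, m/s

def arrival(source, mic):
    """Return distance (m) and arrival time (ms) from source to mic."""
    d = np.linalg.norm(np.asarray(source, float) - np.asarray(mic, float))
    return d, 1000.0 * d / C

# Hypothetical positions in metres (x across the kit, y front-back, z height)
snare = (-0.25, 0.30, 0.60)
overheads = {"L": (-0.60, 0.30, 1.60), "R": (0.60, 0.30, 1.60)}

times = {}
for name, mic in overheads.items():
    d, t = arrival(snare, mic)
    times[name] = t
    print(f"snare -> OH {name}: {d:.3f} m, {t:.2f} ms")

# Any inter-mic time difference pulls the snare towards the earlier mic.
print(f"time difference: {abs(times['L'] - times['R']):.2f} ms")
```

Even a few tenths of a millisecond of difference will shift the image noticeably, so it is worth measuring the two distances with a tape or a length of string rather than judging by eye.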

There are also some less conventional alternative techniques you might also like to consider, using fig‑8 mics where you can aim the deep null to minimise snare and hat pickup in a useful way.


Published March 2011

Saturday, October 7, 2017

Q. Where should I place my monitors in a small room?

By Paul White

I recently built my own home studio by converting an old garage into a well‑isolated music room of 410 x 215 x 275cm. The isolation is great, but I'm now moving on to phase two — acoustics — and bass is a problem, especially on the notes of A, B‑flat and B, which are kind of booming.
So I am wondering how best to position my Dynaudio BM6As. At first I put them along the short wall, but a lot of bass built up, probably because of their proximity to the corners. I've already tried putting the speakers backwards, but noticed no change.

I've now got them along the long wall, which I think sounds more balanced, even though there's still some resonance on certain notes. Also, this tends to differ a lot depending on whether I sit in the exact 'sweet spot' or not. The further forward I go with my head, the more bass I get; the further back I go, the less bass I get.
In your books and in Sound On Sound, I've seen you advocate placing speakers on both the shortest wall, and the longest wall, depending on the room. So, what would you recommend for a room of my size and dimensions? Also, are the BM6As too much for my room?

Paul Stanhope via email

SOS Editor In Chief Paul White replies: In large studio rooms, including those of many commercial studios, putting the speakers along the longest wall is quite common and has the benefit of getting those reflective side walls further away. However, in the smaller rooms many of us have to deal with, it is invariably best to have the speakers facing down the longest axis of the room. If you work across the room, the reflective wall behind you is too close, and the physical size of the desk means you're almost certainly sitting mid‑way between the wall in front and the wall behind, which causes a big bass cancellation in the exact centre and, as you've noticed, causes the bass end to change if you move your position even slightly. In a room the size of yours, working lengthways will give the most consistent results. Your room is a slightly unfortunate size for bass response, as the length is almost twice the width, so many of the resonant modes will tend to congregate at the same frequencies.
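You can see the problem with a 410 x 215 x 275cm room by listing its low-order axial mode frequencies. The short Python sketch below assumes rigid walls and a 343m/s speed of sound, so treat the figures as approximate; it shows how the second-order length mode lands almost on top of the first-order width mode, piling energy up in the same part of the bass range.

```python
# Axial room modes: f = (c / 2) * n / L for each dimension
C = 343.0                                                # speed of sound, m/s
dims = {"length": 4.10, "width": 2.15, "height": 2.75}   # room size in metres

for name, L in dims.items():
    modes = [round(C * n / (2 * L), 1) for n in range(1, 4)]
    print(f"{name:>6}: {modes} Hz")

# length: [41.8, 83.7, 125.5] Hz
# width : [79.8, 159.5, 239.3] Hz
# height: [62.4, 124.7, 187.1] Hz
# 83.7 Hz (second length mode) sits right next to 79.8 Hz (first width mode),
# and 62.4 Hz (first height mode) lands near the low A / B-flat / B region
# (55-62 Hz fundamentals and their octaves).
```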
In a small room such as this, which is about twice as long as it is wide, it's usually best to position monitors of this size along the shortest wall. Working the other way — across the room — would create a bass cancellation in the centre of the room, where you'll most likely be sitting. Moving around even slightly would create variable results, as the space is so small. Positioning them as shown in the bottom image will give more consistent results, though you will still need to treat the room accordingly.

You can often change the bass behaviour by moving the speakers forward or backwards slightly, but try to keep them out of the corners, as that just adds more unevenness to the bass end. Corner bass traps of the type you're making may help, but if they don't do enough, you could try one of the automatic EQ systems designed for improving monitoring. I don't normally like to EQ monitors but, in difficult situations, using EQ to cut only the boomy frequencies can really help.

As for your monitors, the BM6As should be fine in that room. Just make sure they're perched on something solid, as standing them directly on a desk or shelf can also cause bass resonances. Either solid metal stands or foam speaker pads with something solid on top work best and can really tighten up the bass end. You can buy the Primacoustic or Silent Peaks pads, which have steel plate on top, use Auralex MoPads or similar with a heavy floor tile stuck on top, or make your own from furniture foam with ceramic floor tiles or granite table mats stuck on top. A layer of non‑slip matting under the speakers will keep them in place.

For the mid‑range, foam or mineral wool absorbers placed at the mirror points in the usual way should be adequate, but try to put something on the rear wall that will help to scatter the sound, such as shelving or unused gear.


Published March 2011

Thursday, October 5, 2017

Q. How should I record an upright piano?

I have a pretty basic recording setup and, up until now, have just been making vocal and guitar recordings using an Audio‑Technica AT2035 and an Edirol FA66 audio interface with Reaper. However, I've been playing the piano a lot lately and would like to incorporate that. I have access to an old upright that's in the corner of my mum's living room. How can I achieve the best recording of the piano? Will I need different equipment?

Fiona McKay, via email

SOS Editor In Chief Paul White replies: There are many different ways to mic the upright piano, but in a domestic room a pair of cardioid capacitor mics would probably be the best option, as they would exclude much of the room reflection that might otherwise adversely colour the sound. Aim each mic at an imaginary point about a quarter of the piano's width in from each end, as that helps keep the string balance even. If the piano sounds good to the player, you can use a spaced pair of mics either side of the player's head, but it is also common practice to open the lid and, often, to remove the upper front cover above the keyboard as well. With the strings exposed in this way, you have more options to position the spaced pair either in front of or above the instrument, and I'd go for a 600 to 800 mm spacing between the mics, adjusting the mic distances as necessary to get an even level balance between the bass and treble strings.

If a piano sounds good to the player, it's worth trying the recording from just either side of their position, placing the microphones 600 to 800 mm apart. However, it's also common practice to open the lid of the piano and place the mics above the exposed strings at that same distance apart.

If you're lucky enough to have a great‑sounding room, you can increase the mic distance to let in more room sound or switch to omnis. But in a typical domestic room I'd be inclined to start with the mics around that 600 to 800 mm distance apart. Also listen out for excessive pedal noise on your recording and, if necessary, wrap some cloth around the pedals to damp the sound.

SOS contributor Mike Senior explored this subject in some detail back in April of 2009. It's probably worth going to /sos/apr09/articles/uprightpianos.htm and giving it a read.



Published October 2010

Wednesday, October 4, 2017

How We Hear Pitch

By Emmanuel Deruty

When two sounds happen very close together, we hear them as one. This surprising phenomenon is the basis of musical pitch — and there are lots of ways to exploit it in sound design.
Films and television programmes consist of a series of individual still images, but we don't see them as such. Instead, we experience a continuous flow of visual information: a moving picture. Images that appear in rapid succession are merged in our perception because of what's called 'persistence of vision'. Any image we see persists on the retina for a short period of time — generally stated as approximately 40ms, or 1/25th of a second.

A comparable phenomenon is fundamental to human hearing, and has huge consequences for how we perceive sound and music. In this article, we'll explain how it works, and how we can exploit it through practical production tricks. The article is accompanied by a number of audio examples, which can be downloaded as a Zip archive at /sos/apr11/articles/perceptionaudio.htm. The audio examples are all numbered, so I'll refer to them simply by their number in the text.

Perceptual Integration

The ear requires time to process information, and has trouble distinguishing audio events that are very close to one another. Borrowing a term from signal processing, we'll call this phenomenon 'perceptual integration', and start by pointing out that there are two ways in which it manifests itself. In some cases, the merging of close occurrences is not complete. They're perceived as a whole, but each occurrence can still be heard if one pays close attention. In others, it becomes completely impossible to distinguish between the original occurrences, which merge to form a new and different audio object. Which of the two happens depends on how far apart the two events are.

Take two short audio samples and play them one second apart. You will hear two samples. Play them 20 or 30ms apart, and you will hear a single, compound sound: the two original samples can still be distinguished, but they appear as one entity. Play the two sounds less than 10ms apart, and you won't hear two samples any more, just a single event. We are dealing with two distinct thresholds, each one of a specific nature. The first kind of merging seems to be a psychological phenomenon: the two samples can still be discerned, but the brain spontaneously makes a single object out of them. In the second case, the problem seems to be ear‑based: there is absolutely no way we can hear two samples. The information just doesn't get through.
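If you would like to recreate this experiment yourself, a few lines of code will generate click pairs at a range of spacings, in the spirit of audio examples 1 to 15. The Python sketch below uses NumPy and SciPy; the file names and the exact click shape are my own choices rather than anything specified in the article.

```python
import numpy as np
from scipy.io import wavfile

SR = 44100                         # sample rate in Hz
click = np.zeros(int(0.005 * SR))
click[0] = 1.0                     # a single-sample impulse followed by 5 ms of silence

for gap_ms in (1000, 100, 50, 30, 20, 10, 5, 2):
    gap = np.zeros(int(gap_ms / 1000.0 * SR))
    pair = np.concatenate([click, gap, click, np.zeros(SR // 2)])
    wavfile.write(f"impulse_pair_{gap_ms}ms.wav", SR,
                  (0.5 * pair * 32767).astype(np.int16))

# Around 30-50 ms the two clicks fuse into one compound event;
# below roughly 10 ms they become a single, undividable click.
```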

These two thresholds are no mere curiosities. Without them, EQs would be heard as reverberation or echoes, compression would never be transparent, and AM and FM synthesis would not exist. Worse still, we would not be able to hear pitch! In fact, it's no exaggeration to state that without perceptual integration, music would sound completely different — if, indeed, music could exist at all. In audio production, awareness of these perceptual thresholds will enable you to optimise the use of a variety of production techniques you would otherwise never have thought about in this way. The table to the right lists some situations in which perceptual integration plays an important role.
Changing a single parameter can radically change the nature of sound(s).

Two Become One

Let's think in more detail about the way in which two short samples, such as impulses, are merged first into a compound sample and then into a single impulse. You can refer to audio examples 1 through 15 to hear the two transitions for yourself, and real‑world illustrations of the phenomenon are plentiful. Think about the syllable 'ta', for instance. It's really a compound object ('t' and 'a'), as can easily be confirmed if you record it and look at the waveform. But the amount of time that separates both sounds lies below the upper threshold, and we hear 't' and 'a' as a single object. Indeed, without perceptual integration, we wouldn't understand compound syllables the way we do. Nor would we be able to understand percussive sounds. Take an acoustic kick‑drum sample, for instance. The attack of such a sample is very different from its resonance: it's a high, noisy sound, whereas the resonance is a low, harmonic sound. Yet because the two sounds happen so close to each other, we hear a single compound object we identify as a 'kick drum'.

In audio production, there are lots of situations where you can take advantage of this merging. A straightforward example would be attack replacement: cut the attack from a snare drum and put it at the beginning of a cymbal sample. The two sounds will be perceptually merged, and you will get a nice hybrid. Refer to audio examples 16 to 18 to listen to the original snare, the original cymbal, and then the hybrid sample. This is but an example, and many other applications of this simple phenomenon can be imagined. A very well-made compound sound-object of this kind can be found in the Britney Spears song 'Piece Of Me': the snare sound used is a complex aggregate of many samples spread through time, and though it's easy to tell it's a compound sound, we really perceive it as a single object.
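As a rough illustration of the attack-replacement idea, the sketch below takes the first few tens of milliseconds of a snare sample, crossfades it onto the start of a cymbal sample and lets perceptual integration do the gluing. 'snare.wav' and 'cymbal.wav' are placeholder file names, and the 30ms splice point and 5ms crossfade are simply starting values to experiment with.

```python
import numpy as np
from scipy.io import wavfile

def load_mono(path):
    """Read a WAV file, fold to mono and normalise to +/-1."""
    sr, x = wavfile.read(path)
    x = x.astype(np.float64)
    if x.ndim == 2:
        x = x.mean(axis=1)
    return sr, x / (np.abs(x).max() + 1e-12)

sr, snare = load_mono("snare.wav")        # placeholder file names
_, cymbal = load_mono("cymbal.wav")

attack_len = int(0.030 * sr)              # keep ~30 ms of snare attack
fade_len = int(0.005 * sr)                # 5 ms crossfade to hide the join

fade_out = np.linspace(1.0, 0.0, fade_len)
fade_in = np.linspace(0.0, 1.0, fade_len)

hybrid = cymbal.copy()
hybrid[:attack_len - fade_len] = snare[:attack_len - fade_len]
hybrid[attack_len - fade_len:attack_len] = (
    snare[attack_len - fade_len:attack_len] * fade_out +
    cymbal[attack_len - fade_len:attack_len] * fade_in)

wavfile.write("hybrid.wav", sr, (0.7 * hybrid * 32767).astype(np.int16))
```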

Creating Pitch

Let's try to repeat our original experiment, but this time with several impulses instead of only two. With the impulses one second apart, we hear a series of impulses — no surprise there. However, reducing the time between the impulses brings a truly spectacular change of perception: at around a 50ms spacing, we pass the upper threshold and begin to hear a granular, pitched sound. As we near 10ms and cross the lower threshold, we begin to hear a smooth, pitched waveform, and it's quite hard to remember that what you are actually hearing is a sequence of impulses. Refer to audio examples 19 to 33 in this order to witness for yourself this impressive phenomenon. Hints of pitch can also be progressively heard in examples 1 through 15, for the same reasons.

This points to a fundamental property of hearing: without perceptual time integration, we would have no sense of pitch. Notice how, in this series of examples, we begin to hear pitch when the spacing between impulses falls to around 50ms. It's no coincidence that the lowest pitch frequency humans can hear — 20Hz — corresponds to a period of 50ms.
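The impulse-train experiment is just as easy to reproduce. The sketch below generates trains of clicks at several repetition periods (values chosen to straddle the thresholds); by the time the spacing reaches 10ms the result is plainly a 100Hz buzz rather than a stream of separate clicks.

```python
import numpy as np
from scipy.io import wavfile

SR = 44100
DURATION = 2.0                                 # seconds per example

for period_ms in (1000, 200, 100, 50, 25, 10, 5):
    period = int(period_ms / 1000.0 * SR)
    train = np.zeros(int(DURATION * SR))
    train[::period] = 1.0                      # one impulse every 'period' samples
    wavfile.write(f"impulse_train_{period_ms}ms.wav", SR,
                  (0.5 * train * 32767).astype(np.int16))

# 50 ms spacing = 20 Hz repetition rate, roughly where pitch first appears;
# 10 ms spacing = 100 Hz, heard as a smooth, clearly pitched buzz.
```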

In fact, we're often told that humans are not able to hear anything below 20Hz, but referring to our little experiment, you can see that this is misleading. Below 20Hz, we can indeed hear everything that's going on — just not as pitch. Think about it: we hear clocks ticking perfectly well, though they tick at 1Hz: we're just not able to derive pitch information from the ticking. Again, compare hearing with vision: obviously, we can see pictures below 10 frames per second, we just see them as… pictures, not as a continual stream of information in the manner of a film.

You don't need this article to be aware of the existence of pitch, so let's get a bit more practical. In audio production, reducing the interval between consecutive samples to below perceptual time thresholds can be of real interest. A good example can be found in a piece called 'Gantz Graf' by British band Autechre. In this piece, between 0'56” and 1'05”, you can witness a spectacular example of a snare‑drum loop being turned into pitch, then back into another loop. More generally, most musical sequences in this track are made from repetitions of short samples, with a repetition period always close to the time thresholds. Apparently, Autechre enjoy playing with the integration zone.

This track being admittedly a bit extreme, it's worth mentioning that the same phenomenon can also be used in more mainstream music. In modern R&B, for instance, you can easily imagine a transition between two parts of a song based on the usual removal of the kick drum and the harmonic layer, with parts of the lead vocal track being locally looped near the integration zone. This would create a hybrid vocal‑cum‑synthesizer‑like sound that could work perfectly in this kind of music.

AM Synthesis: Tremolo Becomes Timbre

The idea that simply changing the level of a sound could alter its timbre might sound odd, but this is actually a quite well‑known technique, dating back at least to the '60s. Amplitude modulation, or AM for short, was made famous by Bob Moog as a way to create sounds. It's an audio synthesis method that relies on the ear's integration time. When levels change at a rate that approaches the ear's time thresholds, they are no longer perceived as tremolo, but as additional harmonics that enrich the original waveform.
AM synthesis converts level changes into changes in timbre.

AM synthesis uses two waveforms. The first one is called the carrier, and level changes are applied to this in a way that is governed by a second waveform. To put it another way, this second waveform modulates the carrier's level or amplitude, hence the name Amplitude Modulation. The diagram on the previous page illustrates this principle with two sine waves. When we modulate the carrier with a sine wave that has a period of one second, the timbre of the carrier appears unchanged, but we hear it fading in and out. Now let's reduce the modulation period. When it gets close to 50ms — the upper threshold of perceptual integration — the level changes are not perceived as such any more. Instead, the original waveform now exhibits a complex, granular aspect. As the lower threshold is approached, from 15ms downwards, the granular aspect disappears, and the carrier is apparently replaced by a completely different sound. Refer to audio examples 34 through 48, in this order, to hear the transition from level modulation through granular effects to timbre change.
In audio production, you can apply these principles to create interesting‑sounding samples with a real economy of means. For instance, you can use a previously recorded sample instead of a continuous carrier wave. Modulating its amplitude using an LFO that has a cycle length around the integration threshold often brings interesting results. This can be done with any number of tools: modular programming environments such as Pure Data, Max MSP and Reaktor, software synths such as Arturia's ARP 2600, and hardware analogue synths such as the Moog Voyager. If you like even simpler solutions, any DAW is capable of modulating levels using volume automation. The screen above shows basic amplitude modulation of pre‑recorded samples in Pro Tools using volume automation (and there's even a pen tool mode that draws the triangles for us).
You can use Pro Tools volume automation to process samples with amplitude modulation techniques.
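For the curious, here is a bare-bones AM experiment in the spirit of audio examples 34 to 48: a sine-wave carrier whose level is modulated by a second sine whose period sweeps from slow tremolo territory down past the integration thresholds. The 440Hz carrier and the list of modulation periods are my own illustrative choices.

```python
import numpy as np
from scipy.io import wavfile

SR = 44100
t = np.arange(int(2.0 * SR)) / SR
carrier = np.sin(2 * np.pi * 440.0 * t)                        # 440 Hz sine carrier

for period_ms in (1000, 200, 100, 50, 25, 15, 5):
    mod_freq = 1000.0 / period_ms                              # modulator frequency in Hz
    modulator = 0.5 * (1.0 + np.sin(2 * np.pi * mod_freq * t)) # level curve, 0..1
    am = carrier * modulator
    wavfile.write(f"am_{period_ms}ms.wav", SR,
                  (0.5 * am * 32767).astype(np.int16))

# Long periods are heard as tremolo; around 50 ms the sound turns granular;
# below ~15 ms the modulation is heard instead as sidebands at 440 +/- mod_freq Hz.
```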

FM Synthesis: Vibrato Becomes Timbre

FM synthesis is, to some extent, similar to AM synthesis. It also uses a base waveform called a carrier, but this is modulated in frequency rather than in amplitude. The diagram to the right illustrates this principle with two sine waves. The FM technique was invented by John Chowning at Stanford University near the end of the '60s, then sold to Yamaha during the '70s, the outcome being the world‑famous DX7 synth.
FM synthesis converts frequency changes into changes in timbre.

Suppose a carrier is modulated in frequency by a waveform whose period is one second: we hear regular changes of pitch, or vibrato. Now let's reduce the modulation period. Near 50ms we begin to have trouble hearing the pitch changes and experience a strange, granular sound. Near 10ms the result loses its granularity, and a new timbre is created. Audio examples 49 to 63 in this order show the transition from frequency modulation to timbre change.

In practice, dedicated FM synths, such as the Native Instruments FM8 plug‑in, are generally not designed to function with modulation frequencies as low as this, which makes it difficult to play with the integration zone. It's often easier to use a conventional subtractive synth in which you can control pitch with an LFO — which, in practice, is most of them!
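The equivalent FM sweep, in the spirit of audio examples 49 to 63, is sketched below. The carrier frequency, modulation depth and list of periods are illustrative choices; the instantaneous frequency is integrated into phase so the vibrato-to-timbre transition is easy to hear.

```python
import numpy as np
from scipy.io import wavfile

SR = 44100
t = np.arange(int(2.0 * SR)) / SR
F_CARRIER = 440.0      # carrier frequency in Hz
DEPTH = 50.0           # peak frequency deviation in Hz

for period_ms in (1000, 200, 100, 50, 25, 10, 5):
    mod_freq = 1000.0 / period_ms
    inst_freq = F_CARRIER + DEPTH * np.sin(2 * np.pi * mod_freq * t)
    phase = 2 * np.pi * np.cumsum(inst_freq) / SR     # integrate frequency into phase
    fm = np.sin(phase)
    wavfile.write(f"fm_{period_ms}ms.wav", SR,
                  (0.5 * fm * 32767).astype(np.int16))

# Slow modulation is heard as vibrato; near 50 ms it turns granular;
# near 10 ms the vibrato disappears and a new, sideband-rich timbre takes its place.
```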

Panning, Delay & Stereo Width

A well‑known trick that takes advantage of integration time is the use of delay to create the impression of stereo width. As illustrated in the top screen overleaf, we take a mono file and put it on two distinct tracks. Pan the first track hard left and the second hard right. Then delay one of the tracks. With a one‑second delay, we can clearly hear two distinct occurrences of the same sample. If we reduce the delay to 50ms, the two occurrences are merged, and we hear only one sample spread between the left and the right speakers: the sound appears to come from both speakers simultaneously, but has a sense of 'width'. Reducing the delay further, this impression of width remains until around 20ms, after which the stereo image gets narrower and narrower. Refer to audio examples 64 to 78 to hear the transition in action.
An easy way to create a stereo impression.
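The L/R delay trick itself takes only a few lines of code, along the same lines as audio examples 64 to 78. In the sketch below, 'guitar.wav' stands in for any mono source file, and the delay values are chosen to straddle the thresholds.

```python
import numpy as np
from scipy.io import wavfile

sr, x = wavfile.read("guitar.wav")             # placeholder mono source file
x = x.astype(np.float64)
x = x / (np.abs(x).max() + 1e-12)

for delay_ms in (1000, 100, 50, 20, 10, 5):
    d = int(delay_ms / 1000.0 * sr)
    left = np.concatenate([x, np.zeros(d)])    # original, padded to equal length
    right = np.concatenate([np.zeros(d), x])   # same signal, delayed by d samples
    stereo = np.stack([left, right], axis=1)
    wavfile.write(f"widened_{delay_ms}ms.wav", sr,
                  (0.5 * stereo * 32767).astype(np.int16))

# Around 50 ms the two copies fuse into one wide image;
# below ~20 ms the image narrows and starts to colour (comb filtering on a mono sum).
```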

This is a simple way to create stereo width, but as many producers and engineers have found, it is often better to 'double track' instrumental or vocal parts. Panning one take hard left, and the other hard right, makes the left and right channels very similar to each other, but the natural variation between the performances will mean that small details are slightly different, and, in particular, that notes won't be played at exactly the same times. Double‑tracking thus produces an effect akin to a short, random delay between the parts, making the technique a variant of the simple L/R delay, though it's more sophisticated and yields better results. Judge for yourself by listening to audio example 79. It's based on the same sample as audio examples 64 to 78, but uses two distinct guitar parts panned L/R. Compare this with audio example 69, the one that features a 50ms L/R delay.

Double‑tracking is an extremely well‑known production trick that has been used and abused in metal music. Urban and R&B music also makes extensive use of it on vocal parts, sometimes to great effect. Put your headphones on and listen to the vocal part from the song 'Bad Girl' by Danity Kane. This song is practically a showcase of vocal re‑recording and panning techniques (see http://1-1-1-1.net/IDS/?p=349 for more analysis). To return to 'Piece Of Me' by Britney Spears: not only does the snare sound take advantage of the merging effect described earlier, but it also generates a stereo width impression by using short delays between left and right channels.

Delay Becomes Comb Filtering

Take a mono file, put it on two tracks and delay one of the tracks, but this time don't pan anything to the left or right. If we set the delay at one second, we hear the same sample played twice. As we reduce the delay time to 15ms or so (the lower threshold of perceptual integration, in this case), the delay disappears and is replaced by a comb filter. This works even better using a multi‑tap delay. With a delay value set at 1s, we hear the original sample superimposed with itself over and over again, which is to be expected. At a delay value approaching 40‑50ms (the upper threshold in this case), we can still distinguish the different delay occurrences, but the overall effect is of some kind of reverb that recalls an untreated corridor. Getting nearer 10ms (the lower threshold in this case), we only hear a comb filter. Refer to audio examples 80 through 94 to listen to the transition between multi‑tap delay, weird reverb and finally comb filter.
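The delay-into-comb-filter effect follows directly from summing a signal with delayed copies of itself; a multi-tap version simply adds more copies. The sketch below mirrors audio examples 80 to 94, with a placeholder source file and illustrative tap count and delay values.

```python
import numpy as np
from scipy.io import wavfile

sr, x = wavfile.read("source.wav")              # placeholder mono source file
x = x.astype(np.float64)
x = x / (np.abs(x).max() + 1e-12)

TAPS = 8                                        # number of delayed copies to add

for delay_ms in (1000, 100, 50, 25, 10, 5):
    d = int(delay_ms / 1000.0 * sr)
    out = np.zeros(len(x) + TAPS * d)
    for n in range(TAPS + 1):                   # original plus TAPS echoes
        out[n * d : n * d + len(x)] += x * (0.8 ** n)
    out /= np.abs(out).max() + 1e-12
    wavfile.write(f"multitap_{delay_ms}ms.wav", sr,
                  (0.7 * out * 32767).astype(np.int16))

# 1 s delays are heard as discrete repeats; around 40-50 ms they blur into a
# corridor-like reverb; by ~10 ms only a comb filter remains, with notches
# spaced 1/delay apart in frequency.
```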

The ear's ability to convert a multi‑tap delay into a comb filter is exploited in the GRM Comb Filter plug‑in from GRM Tools. GRM Tools seldom make harmless plug‑ins, and this one is no exception, containing a bank of five flexible comb filters that can be used as filters, delays or anything in between. If you happen to get your hands on it, try setting the 'filter time' near the integration zone: fancy results guaranteed.
Likewise, very short reverbs are not heard as reverb but as filters. Conversely, filters can be thought of in some ways as extremely short reverbs — the screen below shows the impulse response not of a reverb, but of a filter. This particular subject was discussed in detail in SOS September 2010 (/sos/sep10/articles/convolution.htm): see especially the section headed 'The Continuum Between Reverb And Filtering', and refer to the article's corresponding audio examples (/sos/sep10/articles/convolutionaudio.htm) 16 to 27 to hear a transition between a reverb and a filter. In the same article, I also explained how discrete echoes gradually turn into a continuous 'diffuse field' of sound when the spacing between them becomes short enough to cross the upper threshold of perceptual integration — see the section called 'Discrete Or Diffuse' and audio examples 3 to 15.
An impulse response from a filter.

Dynamics & Distortion

Dynamic compression involves levelling signal amplitude: any part of the signal that goes over a given level threshold will be attenuated. Consider a signal that's fed into a compressor. Suddenly, a peak appears that's above the level threshold: compression kicks in, and negative gain is applied. However, the gain can't be applied instantaneously. If it was, the signal would simply be clipped, generating harmonic distortion. This can be interesting in certain cases, but the basic purpose of compression remains compression, not distortion. As a consequence, gain‑reduction has to be applied gradually. The amount of reduction should be 0dB at the moment the signal goes over the threshold, and then reach its full value after a small amount of time; the Attack time setting on a compressor determines exactly how much time.

The screen to the right shows the results of feeding a square wave through a compressor using a variety of attack times. In this screenshot, the attenuation applied by the compressor (the Digidesign Dynamics III plug‑in) is clearly visible. When the attack time is set at 300ms, the action of the gain reduction can clearly be heard as a gradual change in level. When we reduce the attack time to 10ms (the lower time threshold in this case), it's no longer possible to hear this as a level change. Instead, we perceive the envelope change almost as a transient — an 'attack' that now introduces the sound. Refer to audio examples 95 to 103 to hear this effect. For comparison purposes, audio example 104 contains the original square wave without progressive attenuation.
Attack time is an important parameter of dynamic compression.
Of course, there is much more to dynamic compression than the attack time: other factors, such as the shape of the attack envelope, the release time, and the release envelope shape, all have the potential to affect our perception of the source sound. In music production, compressors are often set up with time constants that fall below the threshold of perceptual integration, and this is one reason why we think of compressors as having a 'sound' of their own, rather than simply turning the level of the source material up or down.
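The effect of attack time on its own can be demonstrated without any plug-in at all, by applying a simple gain ramp to a square wave, loosely in the spirit of audio examples 95 to 104. In the sketch below, the 6dB of gain reduction, the 100Hz square wave and the attack values are my own illustrative choices, not the Dynamics III settings used for the article's examples.

```python
import numpy as np
from scipy.io import wavfile

SR = 44100
t = np.arange(int(1.0 * SR)) / SR
square = np.sign(np.sin(2 * np.pi * 100.0 * t))            # 100 Hz square wave
REDUCTION = 10 ** (-6.0 / 20.0)                            # final gain: -6 dB

for attack_ms in (300, 100, 30, 10, 3):
    n_attack = int(attack_ms / 1000.0 * SR)
    gain = np.ones_like(square) * REDUCTION                # fully attenuated...
    gain[:n_attack] = np.linspace(1.0, REDUCTION, n_attack)  # ...after a linear ramp
    wavfile.write(f"attack_{attack_ms}ms.wav", SR,
                  (0.5 * square * gain * 32767).astype(np.int16))

# With a 300 ms attack the gain change is heard as a level fade;
# at 10 ms and below it is heard instead as a transient 'attack' starting the note.
```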

A Starting Point

This article covers many situations in which perceptual time integration can bring unexpected and spectacular results from basic modifications of the audio signal. Think about it: since when are simple level changes supposed to add harmonics to a sample? Yet it's the principle at the base of AM synthesis. And how on earth can a simple delay actually be a comb filter? Yet it takes only a few seconds in any DAW to build a comb filter without any plug‑ins. There are many other examples that this article doesn't cover. For instance, what happens if you automate the centre frequency of a shelf EQ to make it move very quickly? Or automate panning of a mono track so it switches rapidly from left to right? Try these and more experiments for yourself, and you might discover effects you never thought could exist.

Why Perceptual Integration Exists

Perceptual integration is an interesting phenomenon and one that's very important for music production. But why does it exist? As I explained in the main text, the upper threshold of perceptual integration lies between 30 and 60 milliseconds, depending on the situation. This seems to be a cognitive phenomenon that is based in the brain, and is not fully understood. On the other hand, the lower threshold, which lies between 10 and 20 ms, depending on the circumstances, originates in the physics of the ear, and is easier to understand.
The key idea here is inertia. Put a heavy book on a table, and try to move it very quickly from one point to another: no matter what you do, the book will resist the acceleration you apply to it. With regard to the movement you want it to make, the book acts like a transducer — like a reverb or an EQ, in fact, except that it's mechanical instead of being electrical or digital. The input of the 'book transducer' is the movement you try to apply to it, and its output is the movement it actually makes. Now, as we saw in March's SOS (/sos/mar11/articles/how-the-ear-works.htm) our ears are also mechanical transducers, which means that they also display inertia. There is a difference between the signal that goes into the ear, and the signal that reaches the cilia cells.
The illustration to the right schematically shows the difference between those two signals, when the input is a short impulse. Because it is made from mechanical parts, the ear resists the movement the impulse is trying to force it to make: this explains the slowly increasing aspect of the first part of the response. Then, without stimulus, the ear's moving parts fall back to their original position. Naturally, if the two input impulses are very close to each other, the 'ear transducer' doesn't have time to complete its response to the first impulse before the second one arrives. As a consequence, the two impulses are merged. This corresponds exactly to what was described at the beginning of this article, when we were merging two sound objects into a single one: as far as we can tell, the two impulses are joined into a single one.
Mechanical inertia of the ear prevents us from distinguishing samples that are too close to each other.
(Readers with a scientific background who specialise in psychoacoustics may be wondering what my proof is for the claim that the upper and lower time thresholds I keep referring to originate respectively from the brain and the ear. To the best of my abilities, I think this is a safe and reasonable assertion, but I can't prove it. Still, in a comparable field, it has been proven that persistence of vision is eye‑centred, whereas perception of movement is brain‑centred. I'm eager to see similar research concerning hearing.)
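One way to get a feel for this is to model the 'ear transducer' as nothing more than a decaying response and ask how much of the first impulse's ringing is still present when the second impulse arrives. The sketch below does exactly that; the 4ms decay constant is an arbitrary illustrative value, not a measured property of the ear.

```python
import numpy as np

TAU = 0.004   # toy decay time constant of the 'ear transducer', in seconds (illustrative)

def residual(gap_ms):
    """Fraction of the first impulse's response still ringing when the second arrives."""
    return np.exp(-(gap_ms / 1000.0) / TAU)

for gap_ms in (50, 20, 10, 5, 2):
    print(f"{gap_ms:3d} ms gap: first response has only decayed to "
          f"{100 * residual(gap_ms):5.1f} % of its peak")

# At 50 ms the first response has died away completely before the second impulse
# arrives; at 5 ms and below the two responses overlap heavily and merge.
```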


Published April 2011


Monday, October 2, 2017

Q. How could I get the most from a Korg Monotron?

I have a fairly basic setup that I've so far been using for some simple audio work. However, I'd like to introduce some more interesting sounds and thought that a Korg Monotron might be an inexpensive way to start experimenting. However, being a beginner, I'm not entirely sure of the extent of the Monotron's capabilities. How could I get the most from it? Do you have any interesting tips or tricks?

Craig Varney via email

SOS contributor Paul Nagle replies: OK, without knowing about your setup I'll opt for a generic sort of reply. As you know, the Monotron is a tiny synthesizer with just five knobs and a short ribbon. Its strength is in having genuine analogue sound generation rather than massive versatility or playability. But, in my opinion, it possesses that 'certain something' that stands out in a recording.

Being old and hairy, I use mine primarily for the kind of weebly sound effects heard on Hawkwind or Klaus Schulze albums. Add a dash of spring reverb for atmosphere and its electronic tones get closer to my EMS Synthi than a pile of posh digital synths! Through studio monitors (or a large PA), the bass end is quite impressive and the filter screams like a possessed kettle, its resonance breaking up in that distinctive 'Korg MS' way. On stage or in the studio, I'd always recommend extra distortion, courtesy of as many guitar pedals as you can get your hands on.
Though the Monotron looks like a simple piece of kit, it has surprising potential when used in inventive ways.

An easy way to experiment with the sounds available from your Korg Monotron is to pile on the effects with different guitar pedals.

But let's not get too carried away. We're still talking about a basic monophonic synthesizer with an on/off envelope and just one waveform (a sawtooth). If it's tunes you're hoping for, that's going to take some work, and preferably external help, such as a sampler. Personally, I ignore the keyboard markings on the ribbon, finding the correct pitch entirely by ear. The ribbon's range is only slightly above one octave, so to squeeze out a fraction more, turn the tiny screw at the rear as far as it will go. On my Monotron, this gives a range of about an octave and a half: roughly comparable to your typical X‑Factor contestant.

As with X‑Factor contestants, there's no universally adopted gripping technique, but I mostly sweep the pitch with my right thumb whilst adjusting the knobs with my left hand. I also find a Nintendo DS stylus works fairly well for melodies, à la the Stylophone.

When your thumb gets tired, you should try the Monotron's second trick: being an audio processor. In a typical loop‑harvesting session, I'll run a few drum loops through it while playing with the filter cutoff and resonance. Once I've recorded a chunk of that, I go back through the results, slicing out shorter loops that contain something appealing, discarding the rest. Often when the filter is on the edge of oscillation, or is modulated by the LFO cranked to near maximum speed, loops acquire that broken, lo‑fi quality that magically enhances plush modern mixes (I expect that this effect is due to our ears becoming acclimatised to sanitised filter sweeps and in‑the‑box perfection). This is a fun (and cheap) way to compile an array of unique loops to grace any song, and you can process other signals too, of course. The results can get a little noisy, though, so you will need to address that, perhaps with additional filtering, EQ or gating. Alternatively, you can make a feature of the hiss, using some tasteful reverb or more distortion.

I have a pal who takes his Monotron into the park with a pocket solid‑state multitracker and acoustic guitar – the joys of battery power! When multitracking in the studio, you might be skilled enough to eventually achieve tracks like those seen on YouTube. Or, if you have a sampler (hardware or computer‑based), and take the time to sample many individual notes, the Monotron can spawn a polyphonic beast that sends expensive modelled analogues scurrying into the undergrowth. Some of the dirty filter noises, when transposed down a few octaves, can be unsettlingly strange and powerful.

I don't know if your setup includes digital audio workstation software, but if so, its built‑in effects and editing can do marvellous tricks with even the simplest analogue synthesizer. Later down the line, you will discover more sophisticated programs — such as Ableton Live and its Lite versions — offering mind‑boggling ways to warp audio, shunting pitch and timing around with a freedom I'd have killed for when I started out.
Anyone handy with a soldering iron should check out the raft of mods kicking around: Google 'Monotron mods' to see what I mean. Lastly, if the Monotron is your first real analogue synth, beware: it might be the inexpensive start to a long and hopeless addiction. Oh, and my final tip is very predictable to any who know me: delay, delay and more delay.

For a full review of the Korg Monotron go to /sos/aug10/articles/korg‑monotron.htm.


Published May 2011

Friday, September 29, 2017

Q. Can I use an SM58 as a kick-drum mic?

By Mike Senior

I'll be doing a session with lots of mics and I'm going to be running out of gear choices without hiring, begging or stealing! For the kit, I don't really have all the right mics, so will need to compromise. Is it wise to use a Shure SM58 on kick drum? What can I expect?
The SM58 is better known as a vocal, guitar and snare mic than anything else — but can it be pressed into service as a kick-drum mic?

If you have to use a kick‑drum close‑mic that lacks low end, the neatest mix fix is usually to employ some kind of sample‑triggering plug‑in to supplement the sound, such as Wavemachine Labs' Drumagog, SPL's DrumXchanger or Slate Digital's Trigger.

Via SOS web site

SOS contributor Mike Senior replies: The first thing to say is that, although this mic (and, indeed, its SM57 cousin) is much better known for vocal, guitar and snare miking, there is also a good deal to recommend it for kick‑drum applications: its physical ruggedness; its ability to deal with high SPLs; and its presence-frequency emphasis, which can, in many situations, help the drum 'click' to cut through the mix, even when it's played back on small speakers. The biggest potential problem will be the low‑frequency response, which has been tailored to compensate for proximity effect in close‑miking situations and so falls off pretty steeply below 100Hz. However, there are several reasons why this needn't actually be a disaster in practice.

The first reason is that your microphone placement may well compensate for this, somewhat, especially if you're planning to use the mic inside the casing of the drum, where small changes in positioning can make an enormous difference to the amount of captured low end. It's also worth bearing in mind that lots of low‑end may not actually be very desirable at all, especially if the song you happen to be recording features detailed kick‑drum patterns that could lose definition in the presence of bloated lows. I often find myself filtering out sub‑bass frequencies at mixdown, in fact, as this can make the drum feel a lot tighter, as well as leaving more mix headroom for the bass part.

However, even if you do get an undesirably lightweight kick‑drum close‑mic sound, it's comparatively easy to supplement that at the mix: this is usually one of the simpler mix salvage tasks you're likely to encounter, in fact. One approach is to create some kind of low‑frequency synth tone (typically a sine wave, but it might be something more complex if you need more low‑end support) and then gate that in time with the kick‑drum hits. You can do this in most DAW systems now, using the built‑in dynamics side‑chaining system. I've done this in the past, but I tend to prefer the other common tactic: triggering a sample alongside the live kick‑drum using a sample‑triggering program (see our feature in last month's issue). There are now loads of these on the market, including the examples shown in the screens above.
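As a rough sketch of that first approach, the code below detects hits in a kick close-mic recording with a crude level threshold and layers a short, decaying 50Hz sine burst under each one. The file name, threshold, tone frequency and decay time are all placeholder values; in a real session you would normally use your DAW's side-chain gate or one of the trigger plug-ins mentioned above instead.

```python
import numpy as np
from scipy.io import wavfile

sr, kick = wavfile.read("kick_close_mic.wav")        # placeholder file name
kick = kick.astype(np.float64)
if kick.ndim == 2:
    kick = kick.mean(axis=1)                         # fold to mono if necessary
kick = kick / (np.abs(kick).max() + 1e-12)

THRESHOLD = 0.5                                      # crude hit-detector threshold
HOLD = int(0.15 * sr)                                # ignore re-triggers for 150 ms
tone_len = int(0.25 * sr)
t = np.arange(tone_len) / sr
burst = np.sin(2 * np.pi * 50.0 * t) * np.exp(-t / 0.08)   # decaying 50 Hz tone

out = kick.copy()
i = 0
while i < len(kick):
    if abs(kick[i]) > THRESHOLD:                     # found a kick hit
        end = min(i + tone_len, len(out))
        out[i:end] += 0.6 * burst[:end - i]          # layer the sub tone under it
        i += HOLD
    else:
        i += 1

out /= np.abs(out).max() + 1e-12
wavfile.write("kick_with_sub.wav", sr, (0.8 * out * 32767).astype(np.int16))
```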



Published April 2011

Tuesday, September 26, 2017

Q. How can I learn to create drum parts?

By Mike Senior

I'm just starting out in learning to record audio but am beginning to expand on what I want to do. Though I'm now fairly competent at using my DAW of choice (Reaper), I'm finding it really difficult to create drum parts. What would be the most straightforward way for a complete beginner to get into and learn about this?
Sara Willis, via email

SOS contributor Mike Senior replies: In a word: loops. There are two basic things you have to contend with when putting together great drum parts. Firstly, you have to obtain good performances: whether you're wanting the sound of live drums or electronic drum‑machine timbres, the nuances of the performance or programming of the part play a vital role in creating a commercial sound in almost any style. Secondly, you need to be able to control the sonics well enough to build up a decent mix once all the other parts of your arrangement are in place. The reason I recommend loops as a starting point is that it simplifies the process of dealing with these issues. All you have to do is find a suitable loop and then learn how to adjust its performance or sonics where the unique circumstances of your music require it.

Just type 'sample' into the 'quick search' box at the top right‑hand side of the SOS home page to access an enormous archive of sample‑library reviews.

Finding a good library really shouldn't be hard. I've been reviewing loop collections for the magazine for ages now and I know that there are loads of really good ones available, catering for just about every musical genre imaginable. My first suggestion would be to go back through the magazine's sample‑library reviews: typing 'sample' into the 'quick search' field at the top right‑hand side of the SOS web site should pull them up out of the magazine's online archives for you. Anything with a four‑ or five‑star review is definitely worth investigating, but don't part with any cash before you've had a careful listen to the manufacturer's audio demos, and you should be as picky as possible in looking for exactly the right sonics for your needs. Don't just listen on your laptop's speaker or earbuds — drag the demo files over to your studio system, and if example loops are provided, try those out within a test project. This is what I regularly do as part of the review process, and it can be very revealing. Lining the demos up against some of your favourite commercial records may also help you narrow down the choices.

As far as the library format is concerned, I suggest you look for something based on REX2 loops, because these beat‑sliced files typically offer better tempo‑matching and rearrangement opportunities than the time‑stretching formats (such as Acidised WAV or Apple Loops). I don't think there's much sense in getting involved with any of the virtual instrument‑based libraries at this stage: while they can increase your flexibility in terms of sonics and programmability, they can also add a great deal of complexity to the production process, and I imagine you've got enough on your plate already with learning about all of this stuff! Often, loop‑library developers structure their libraries into 'suites', with several similar loops grouped together, and this can make it easier to build some musical variation into your song structure. There are also libraries that include supplementary 'one‑shot' samples of some of the drums used, and these can also be very handy for customising the basic loops, as well as for programming fills, drops and endings manually.

If you drag a REX2 file into Reaper's main arrange window, it'll automatically match itself to the project's tempo and present you with a series of beat slices. These slices make it easy to rearrange the performance, and also provide you with a lot of extra sonic options at mixdown.

Faced with a shortlist of good‑sounding REX2 libraries, the last consideration is whether the performances really sound musical. This is the most elusive character of a loop library and it's an area where the SOS review can provide some guidance. My usual barometer in this respect while reviewing is whether the loops make me want to stop auditioning and immediately rush off to make some music, so thinking in those terms may help clarify your thinking. It's also a good sign if the drum hits in the loop seem somehow to lead into each other, rather than just sounding like isolated events, because this can really make a difference to how a track drives along.

Once you've laid hands on some decent loops, you can just drag files directly onto a track in your Reaper project and they should, by default, match themselves to your song's tempo. Because each drum hit will have its own loop slice, it's quite easy to shuffle them around to fit existing parts. Just be aware that sounds with long sustain tails may carry over several adjacent slices. Map out a rough drum part by copying your chosen loops, making sure that Snap is 'on' so that the loops always lock to bar‑lines, but then be sure to also put in some work introducing fills and variations, so that the listener doesn't get bored. There are lots of ways of varying the loop patterns: edit or rearrange the slices; substitute a different loop from the same 'suite'; or layer additional one‑shots over the top. A lot of people think that using loops inevitably makes repetitive‑sounding music, but with most REX2 libraries there's no excuse whatsoever for letting this happen. (If you want to listen to an example of a drum part built with REX2 loops, check out my Mix Rescue remix from SOS October 2008 at /sos/oct08/articles/mixrescue_1008.htm, where I completely replaced the band's original drum parts in this way.)

The REX2 slices can also assist when it comes to adjusting sonics at the mix, because it's easy to slide, say, all the kick‑drum slices onto a separate track for processing. This is such a useful technique that I often end up doing it manually with loops at mixdown, even when they're not REX2 files! The Mix Rescue I did in SOS November 2010 (/sos/nov10/articles/mixrescue‑1110.htm) is a good example of this, and with that one you can even download the full Reaper remix project from the SOS web site if you want to look at how I implemented this in more detail.
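
As a purely illustrative aside, the decision involved — 'is this slice a kick hit or not?' — can even be roughed out in code. The little Python/NumPy sketch below flags slices whose energy sits mostly below about 150Hz, so they could be routed to a separate track for processing; the threshold and the whole classification approach are my own assumptions for the example, not a feature of any particular DAW or library.

import numpy as np

def is_kick_slice(slice_audio, sample_rate, low_cut=150.0, ratio=0.5):
    # Compare the energy below 'low_cut' with the total energy of the slice.
    spectrum = np.abs(np.fft.rfft(slice_audio)) ** 2
    freqs = np.fft.rfftfreq(len(slice_audio), d=1.0 / sample_rate)
    low_energy = spectrum[freqs < low_cut].sum()
    return low_energy / (spectrum.sum() + 1e-12) > ratio

# Example with synthetic slices: a 60Hz 'kick' burst and a noisy 'snare' burst.
sr = 44100
t = np.arange(int(0.2 * sr)) / sr
kick = np.sin(2 * np.pi * 60.0 * t) * np.exp(-t * 20)
snare = np.random.randn(len(t)) * np.exp(-t * 20)
for name, s in [("kick", kick), ("snare", snare)]:
    print(name, "-> kick track" if is_kick_slice(s, sr) else "-> other track")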


Published January 2011

Saturday, September 23, 2017

Q. Can I get rid of string buzz?

By Hugh Robjohns & Mike Senior

I've got a recording of an acoustic guitar that I'm loath to re‑record, but there are several sections in which string buzz is clearly audible. Can I remove this with a bit of clever processing?

Mike Fenton, via email

SOS contributor Mike Senior replies: As far as after‑the‑fact mix processing is concerned, I'm not sure I can think of any decent way to remove string buzz, I'm afraid. The problem is that, unlike a lot of other mechanical noises the guitar makes, there's not really any way to get independent control over it with normal plug‑in processing. (I suspect that even high‑end off‑line salvage tools such as CEDAR's Retouch might struggle to make much of an impact with this, in fact.) In the case of pick noise, for example, the transient nature of the noise means that it can be effectively targeted with transient‑selective processors such as SPL's Transient Designer or Waves' TransX Wide. For fret squeaks you can use high‑frequency limiting, or simply an automated high‑frequency shelving EQ to duck the high end of the spectrum briefly whenever a squeak occurs, because such noises are usually brief and occur as the previously played notes are decaying (therefore having less high‑frequency content to damage). String buzz, on the other hand, isn't transient by nature and usually happens most obviously at the beginnings of notes, where the noise spectrum is thoroughly interspersed with the wanted note spectrum.
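
To make that high‑frequency ducking idea a bit more concrete, here's a minimal Python/SciPy sketch of it: split the signal at around 5kHz, turn the high band down wherever its level jumps above a threshold, and add the bands back together. The crossover, threshold and amount of gain reduction are arbitrary example values, a real processor would smooth the gain changes to avoid clicks, and, as explained above, this approach helps with squeaks rather than string buzz.

import numpy as np
from scipy.signal import butter, sosfilt

def duck_high_band(audio, sr, crossover=5000.0, threshold=0.05, duck_db=-9.0):
    # Split into low and high bands around the crossover frequency.
    lo = sosfilt(butter(4, crossover, btype="low", fs=sr, output="sos"), audio)
    hi = sosfilt(butter(4, crossover, btype="high", fs=sr, output="sos"), audio)

    # Crude envelope follower: rectify the high band and smooth it.
    env = sosfilt(butter(2, 20.0, btype="low", fs=sr, output="sos"), np.abs(hi))

    # Turn the high band down wherever the envelope exceeds the threshold.
    gain = np.where(env > threshold, 10.0 ** (duck_db / 20.0), 1.0)
    return lo + hi * gain

# Example call, with noise standing in for a guitar recording.
sr = 44100
guitar = np.random.randn(sr) * 0.1
processed = duck_high_band(guitar, sr)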

 
It's relatively difficult to fix fret noises with processing, due to the very specific nature of the transients produced. For this reason, it's always advisable to record several takes of an important guitar part.

All is not lost, however, because you may still be able to conjure up a fix using audio editing, provided your recording includes some repeated sections and the string buzz isn't common to all of them: you may be able simply to paste clean chords or notes over the buzzy ones. The main thing to remember is to place your edits just before picking transients where possible, to disguise them, but also check that all the notes sustain properly across each edit point, because you may not have played exactly the same thing every time. If you know that string buzz is a recurring problem for you, I'd recommend recording several takes of important guitar parts, as this will increase your editing options. Indeed, if the part is important enough that a bit of string buzz really matters, you should probably be comping it anyway if you're after commercial‑sounding results.
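
If you ever end up doing that kind of repair outside the DAW, the splice itself is just a short crossfade. Here's a hedged NumPy sketch of the idea — the fade length and positions are made‑up examples, and in practice you'd place the edit just before a picking transient and judge the result entirely by ear.

import numpy as np

def crossfade_paste(dest, src, edit_point, fade_len=256):
    # Paste 'src' into 'dest' starting at 'edit_point', with a short
    # equal-power crossfade so the join is hidden. Assumes src is at least
    # fade_len samples long and fits inside dest from edit_point onwards.
    out = dest.copy()
    ramp = np.linspace(0.0, 1.0, fade_len)
    fade_in, fade_out = np.sqrt(ramp), np.sqrt(1.0 - ramp)

    out[edit_point:edit_point + len(src)] = src
    out[edit_point:edit_point + fade_len] = (
        dest[edit_point:edit_point + fade_len] * fade_out + src[:fade_len] * fade_in
    )
    return out

# Example: patch a clean one-second chord over a buzzy one (noise stands in).
sr = 44100
buzzy_take = np.random.randn(4 * sr) * 0.1
clean_chord = np.random.randn(sr) * 0.1
fixed = crossfade_paste(buzzy_take, clean_chord, edit_point=2 * sr)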


Published February 2011

Thursday, September 21, 2017

Q. How can I prevent feedback?

When setting up for a gig we always suffer really bad feedback from the singer's mic. We've tried positioning things differently, but it doesn't seem to help. We're pretty new to this; how can we counteract feedback?

Jo Ellison, via e‑mail

SOS Editor In Chief Paul White replies: Acoustic feedback occurs when sound from the speakers gets back into the microphones at a high enough level that the signal keeps building as it cycles round and round the system. Positioning the main speakers well in front of the vocal mics, and aiming them so as to minimise the amount of sound bouncing back into the microphones, will help, but there are other issues to consider. For example, if the wall behind the band is hard, it will reflect more sound back into the live side of the microphones. Imagine the room is made of mirrors and it'll be easier to establish where the problematic reflections are likely to come from. If you can hang up a thick fabric backdrop, it will help, as will positioning the main speakers so that most of the sound goes into the audience and as little as possible points toward the walls and ceiling.
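
To put that in back‑of‑an‑envelope terms, the system rings when the round‑trip gain from mic to mixer to amp to speaker and back into the mic reaches unity (0dB) at a frequency where the phase reinforces. The short Python snippet below just adds up some invented example figures to show how much margin is in hand before that happens; none of the numbers refer to real equipment.

# Invented example figures, in dB, for the round trip described above.
mic_to_speaker_gain_db = 70.0    # preamp, mixer, power amp and speaker, lumped together
speaker_back_to_mic_db = -75.0   # acoustic loss of the path from speaker back into the mic

loop_gain_db = mic_to_speaker_gain_db + speaker_back_to_mic_db
margin_db = 0.0 - loop_gain_db   # extra gain available before the system starts to ring

print(f"loop gain {loop_gain_db:+.1f} dB -> {margin_db:.1f} dB in hand before feedback")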

Feedback always starts at the point where the gain is highest and where the phase of the audio picked up by the mic reinforces what is coming from the speakers. If you apply EQ boost, there's more likelihood that feedback will occur at the boosted frequency, as that's where the gain is highest, but the same applies to microphones and PA speakers that have significant peaks in their frequency response curves. Choosing good-quality mics and speakers might help to minimise the risk of feedback. A mic with a gentle presence peak should be OK, but some cheaper mics have very pronounced peaks that can cause problems. You also need less gain if the singer has a naturally loud voice, so those with quieter voices need to work close to the mic. Quiet singers who stand back from the mic have no chance in smaller venues, where mics are invariably closer to the speakers than is ideal.

Stage monitors can be particularly problematic when it comes to feedback, so it pays to spend a little more on monitors that have a reasonably flat response. You also need to ensure monitors are aimed toward the least sensitive part of the vocal microphone, which, for a cardioid‑pattern mic, is directly to the rear. You may need to angle the back of the mic downwards to achieve this, but it will help. Hypercardioid mics, on the other hand, tend to be least sensitive around 45 degrees off the rear axis, so aim the monitor there.
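
If you like to see where that least‑sensitive direction comes from, first‑order polar patterns can be written as a + (1 − a) × cos(θ), and the tiny NumPy sketch below simply searches for the quietest angle. The cardioid coefficient (a = 0.5) is standard; coefficients for tighter patterns vary from design to design, so treat the polar plot in your mic's data sheet as the final word rather than any single rule of thumb.

import numpy as np

def least_sensitive_angle_deg(a):
    # Evaluate |a + (1 - a)*cos(theta)| from 0 to 180 degrees and return
    # the angle where the response is smallest.
    theta = np.linspace(0.0, 180.0, 1801)
    response = np.abs(a + (1.0 - a) * np.cos(np.radians(theta)))
    return theta[np.argmin(response)]

print("cardioid (a = 0.5): quietest at", least_sensitive_angle_deg(0.5), "degrees")  # 180.0
# Smaller values of 'a' (tighter patterns) pull the quietest angle forward of
# 'directly behind', which is why hypercardioids are aimed at differently.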

 
The area directly behind a cardioid mic is the least sensitive, so positioning stage monitors there will reduce the risk of feedback. However, if you're using a hypercardioid mic, this is true of the area at a 45‑degree angle to the rear axis.

 

A third‑octave graphic EQ can help pull down troublesome peaks, but the type of EQ you find built into mixers, with only five or six bands, isn't very useful for dealing with feedback, as it affects too much of the wanted sound. It can help balance the overall room sound, but that's about it. A better solution may be to connect an automatic 'feedback eliminator' hardware device to the mixer output. These are set up during the soundcheck by turning up the mic gain until feedback occurs, at which point the device measures the frequency and sets up a narrow filter to pull down the gain at that frequency. Most have several filters that can lock onto the main feedback frequencies, and they can help you gain a few more decibels of level before feedback becomes a problem. As the filter bands are so narrow, they have little effect on the overall sound. Most also include roaming filters that can lock onto feedback that occurs during the performance, as it might if the singer moves the mic around.
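
As an illustration of what those narrow filters are doing, here's a hedged Python/SciPy sketch: take a short capture while the system is ringing, find the dominant frequency, and drop a high‑Q notch on it. The Q value and capture length are example figures of my own, not any manufacturer's settings, and commercial units do a great deal more than this.

import numpy as np
from scipy.signal import iirnotch, lfilter

def detect_ring_frequency(audio, sr):
    # The feedback tone dominates the spectrum, so the biggest FFT bin will do.
    spectrum = np.abs(np.fft.rfft(audio * np.hanning(len(audio))))
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / sr)
    return freqs[np.argmax(spectrum)]

def apply_notch(audio, sr, freq, q=30.0):
    # A high Q gives a very narrow cut, so the overall sound is barely affected.
    b, a = iirnotch(freq, q, fs=sr)
    return lfilter(b, a, audio)

# Example: a synthetic 1.2kHz 'ring' sitting on top of some noise.
sr = 48000
t = np.arange(sr) / sr
mix = 0.3 * np.sin(2 * np.pi * 1200.0 * t) + 0.05 * np.random.randn(sr)
notched = apply_notch(mix, sr, detect_ring_frequency(mix, sr))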

 
Small venues of the type that so many up-and-coming bands play definitely make the fight against feedback harder, as they provide fewer opportunities for optimum positioning of PA speakers. 

Finally, when setting up levels, establish a maximum safe vocal level, leaving a few decibels of fader travel in hand, rather than working right on the edge of feedback where the sound is ringing all the time. Then set up the level of the backline to match the vocals. It's no good setting up the backline first and then expecting the vocals to match it, because in most small‑venue situations the vocal level is the limiting factor. You'll also find that some venues are inherently worse than others for feedback, and you just have to live with it.


Published August 2010