Welcome to No Limit Sound Productions

Company Founded
2005
Overview

Our services include Sound Engineering, Audio Post-Production, System Upgrades and Equipment Consulting.
Mission
Our mission is to provide excellent quality and service to our customers. We offer a customized service.

Monday, December 31, 2018

Q. Can you explain audio interface input sensitivity?

By Hugh Robjohns & Mike Senior
My interface allows me to switch line‑input sensitivity between +4dBu and ‑10dBV. My calculations suggest that should be a difference of 14dB, but it looks more like 12dB on my DAW's meters. What's going on?
Via SOS web site
SOS Technical Editor Hugh Robjohns replies: As anyone with a GCSE Maths qualification knows, the difference between +4 and ‑10 is, indeed, 14. However, as any audio engineer knows, the difference between +4dBu and ‑10dBV is actually 12dB. So your meters are telling the truth!
This is a very common, but very basic, misunderstanding for a lot of people (and even some manufacturers), so it is well worth getting the facts straight.
The reason for the apparent discrepancy is that the two standard operating levels your interface allows you to select are quoted in respect of different reference signal voltages. Those little letters after the dB values are there for a very important reason!

  
It may seem obvious that the difference between these two input sensitivity settings is 14dB, so you may be surprised when your DAW measures it as 12.

The professional operating level of +4dBu is measured with respect to a reference signal level (denoted by the little 'u') of 0.775Vrms, and works out at a signal voltage of 1.228Vrms. (The 'rms' appendage basically means that we are assessing the average signal level — and we're talking about sine-wave test‑tones here.)

The semi‑pro operating level of ‑10dBV is with reference to 1.000Vrms (denoted by the big 'V') and works out to 0.316Vrms. The difference between the two is 11.790dB, although, unless you wear anoraks or have a PhD, it's probably much easier and more convenient to think of it as a 12dB difference. If you want the maths (which is still only GCSE level, thankfully), here it is:

Decibels = 20 x log (signal voltage/reference voltage)

So:
20 x log (1.228/0.775) = 4dBu (Note the term dBu to denote the 0.775Vrms reference.)

And:
20 x log (0.316/1.000) = ‑10dBV (Again, the use of dBV denotes the 1.000Vrms reference.)
To calculate the difference between the two standard operating‑level voltages:
20 x log (1.228/0.316) = 11.79dB ~ 12dB

 Note that in this case, where we are simply calculating the ratio of two signal voltages, no reference is involved, so the letters 'dB' are used on their own.
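
If you want to check the arithmetic for yourself, here's a minimal Python sketch of the same three calculations (the helper function name is ours, purely for illustration):

import math

def level_db(voltage, reference):
    # 20 x log10 of the voltage ratio, exactly as in the formula above
    return 20 * math.log10(voltage / reference)

print(round(level_db(1.228, 0.775), 2))   # 4.0    -> +4dBu (re 0.775Vrms)
print(round(level_db(0.316, 1.000), 2))   # -10.01 -> -10dBV (re 1.000Vrms)
print(round(level_db(1.228, 0.316), 2))   # 11.79  -> the ~12dB difference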


Published February 2011

Friday, December 28, 2018

Q. Can I get rid of string buzz?

By Hugh Robjohns & Mike Senior
I've got a recording of an acoustic guitar that I'm loath to re‑record, but there are several sections in which string buzz is clearly audible. Can I remove this with a bit of clever processing?

Mike Fenton, via email

SOS contributor Mike Senior replies: As far as after‑the‑fact mix processing is concerned, I'm not sure I can think of any decent way to remove string buzz, I'm afraid. The problem is that, unlike a lot of other mechanical noises the guitar makes, there's not really any way to get independent control over it with normal plug‑in processing. (I suspect that even high‑end off‑line salvage tools such as CEDAR's Retouch might struggle to make much of an impact with this, in fact.) In the case of pick noise, for example, the transient nature of the noise means that it can be effectively targeted with transient‑selective processors such as SPL's Transient Designer or Waves' TransX Wide. For fret squeaks you can use high‑frequency limiting, or simply an automated high‑frequency shelving EQ to duck the high end of the spectrum briefly whenever a squeak occurs, because such noises are usually brief and occur as the previously played notes are decaying (therefore having less high‑frequency content to damage). String buzz, on the other hand, isn't transient by nature and usually happens most obviously at the beginnings of notes, where the noise spectrum is thoroughly interspersed with the wanted note spectrum.

 
It's relatively difficult to fix fret noises with processing, due to the very specific nature of the transients produced. For this reason, it's always advisable to record several takes of an important guitar part.

All is not lost, however, because you may still be able to conjure up a fix using audio editing if your recording includes any repeated sections and the string buzz isn't common to all of them; you may be able to just paste clean chords or notes over the buzzy ones. The main thing to remember is to place your edits just before picking transients where possible, to disguise them, but you should also check that all notes sustain properly across each edit point, because you may not have played exactly the same thing every time. If you know that string buzz is a problem for you, I'd recommend doing several takes of guitar parts, as this will increase your editing options. And to be honest, if the guitar part is important enough that a bit of string buzz really matters, you should probably be comping it anyway if you're after commercial-sounding results.


 
Published February 2011

Wednesday, December 26, 2018

Q. Should I be mixing in mono?

By Hugh Robjohns & Mike Senior
I've read a lot of articles about the benefits of mixing in mono. So is pressing the mono button on your DAW's stereo output and turning off one of your stereo monitors the way to go? I've had a quick go with one speaker and the mono switch, but it was a bit of a mess, to be honest! I've also read a lot on panning in mono, but I didn't think this would work.

Is it better to record in mono, pan, add effects, and mix to a single stereo master? The reason I ask is that when we recorded everything in mono it seemed to sit better than many stereo files fighting in the mix.

Via SOS web site

SOS Technical Editor Hugh Robjohns replies: There certainly are some advantages to mixing in mono. The main reason for checking the derived mono from a stereo mix is to make sure it still works for mono listeners, and there can be a lot of them. For example, a lot of portable radios are mono, and all car FM radios automatically switch to mono whenever the signal is weak or suffers from multi‑path problems, which happens surprisingly often in most places. Many clubs also play music in mono, and sometimes Internet files are converted to mono to reduce data bandwidth, or become near‑mono just because of the chosen codec.

So checking that the mix works in mono is a very sensible thing to do, and if you're going to do that, it is infinitely better to check mono on a single speaker, rather than as a phantom image across a stereo pair of speakers, because the latter over‑emphasises the bass end and stimulates more room reflections, which can be distracting and affect the perceived mix.

But what about mixing in mono? Well, it's generally much harder than mixing in stereo, but you'll get much better results for your effort. The fact is that when you mix in mono you can really only separate different instruments by using differences in their relative levels and spectral content. So achieving the right balance and applying the right EQ to separate sources becomes a lot more critical: that's why it feels harder to do. But when it's right, it is very obviously right.
  
Even now that we have hi‑tech digital radio, a lot of listeners are still hearing a mono mix, as is the case for users of the Pure Evoke radio, currently one of the best‑selling DAB radios.

Conversely, when mixing in stereo you have the same level and tonal differences to help make the mix work, but you also have spatial position (panning). By panning sounds across the stereo image, you can make the mix sound great very easily, even if you have several near‑identical‑sounding sources. Yet, when that great‑sounding stereo mix is collapsed to mono, you will often find it no longer works, because those sources occupy the same spectrum and end up trampling all over one another.

However, if you can get the mix to sound good in mono first, it will definitely sound great in stereo too. I find it a lot easier and more satisfying to work in that way, although that's possibly partly to do with my formative BBC days working in mono. If you create the stereo mix first, it can be very frustrating afterwards to have to make it work in mono too.

Of course, the only problem with mixing in mono is what happens when you come to pan sources to create the stereo image. Panning a source will inevitably change its relative level in the two output channels. Depending on the panning law in use, this may also, therefore, affect the mono mix balance slightly. Since the mono balance is inherently more critical than the stereo balance, the result is that you end up having to work around the loop a few times. For example, you set up the initial mix in mono by adjusting the fader levels and, possibly, also using EQ to ensure each source occupies its own spectrum and doesn't trample over anything else. You then switch to stereo and pan the instruments to create a pleasing stereo image. This will usually modify the balance slightly, although you are unlikely to notice anything significant while listening to the stereo mix; it will still sound great. You then switch back to mono and, if you notice the mix has gone 'off' slightly, you can fine‑tune the fader positions to get the mix balance perfect once more.
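
To see the size of that effect, here's a small Python sketch of a generic constant-power (-3dB-centre) pan law; this is one common convention, and your DAW's law may well differ. Note how a centre-panned source comes out about 3dB louder in the simple L+R mono sum than a hard-panned one, which is exactly the kind of balance shift described above.

import math

def constant_power_pan(pan):
    # pan runs from -1.0 (hard left) to +1.0 (hard right)
    theta = (pan + 1) * math.pi / 4
    return math.cos(theta), math.sin(theta)   # (left gain, right gain)

for pan in (-1.0, -0.5, 0.0, 0.5, 1.0):
    left, right = constant_power_pan(pan)
    mono_db = 20 * math.log10(left + right)   # level in the L+R mono sum
    print(f"pan {pan:+.1f}: mono sum {mono_db:+.2f}dB")
# hard pan: 0dB; half pan: +2.32dB; centre: +3.01dB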

Finally, check once again in stereo and print to master tape (or whatever!). In some descriptions of mono mixing, you'll come across the idea of finding the spatial 'sweet spot' for a source by adjusting the pan pot, while listening in mono. However, this is, quite obviously, completely bonkers! What you're doing in this case is fine‑tuning the mono mix balance by using the pan pot as an ultra‑fine fader, trimming the signal level by very small amounts. Sure, it may well make it easier to fine‑tune the mono mix, but there probably won't be much sense in the stereo image positioning when you finally do come to check the stereo mix. It's obviously far better to pan the sources while listening in stereo, so you can position them precisely where you want them, then revert to mono and fine‑tune the fader positions, if necessary, to make the mono mix work as well as it can.

Reverbs and some stereo effects can be tricky when you're trying to find a perfect balance in both mono and stereo. Almost all reverbs will sound much drier in mono compared to stereo, and so, usually, some compromise will be needed. If you adjust the reverb for a good sense of space or perspective in mono, it will often end up sounding a little bit too wet in stereo (although some people like it that way), and if you get the reverb sounding right in stereo, it will often end up a little too dry in mono. There's nothing you can really do about this; it's a fundamental issue with the way most reverbs are created and the way stereo works.
Narrowing the reverb width can make the differences less obvious — and some reverbs have a parameter to enable you to do this — but it also makes the reverb less spacious‑sounding in stereo. Some mix engineers like to pan mono reverbs with each individual instrument to try to maintain a better stereo‑mono balance, but it's a lot of extra work and I'm not convinced it sounds that much better anyway.



Published February 2011

Monday, December 24, 2018

Q. How can I link outboard to prevent degradation in quality?

By Matt Houghton
I have a few bits of outboard gear that I want to set up as external plug‑ins in Cubase. Should I be linking each bit of gear to different inputs and outputs of my soundcard (a Focusrite Saffire Pro 40), or should I just use a patchbay so that I can link multiple processors together in series? Presumably, doing it the latter way, I get less degradation of the audio signal as it's not passing through the Saffire's D‑A/A‑D each time?

Adding external effects with hardware can really open up your options in terms of adding character to your music. With good‑quality gear, you'd have to go through several stages of conversion to notice any degradation in sound quality.


John Corrigan via email

SOS Reviews Editor Matt Houghton replies: You are perfectly right in theory: yes, there is some distortion each time audio passes through your interface's A‑D or D‑A converter stages. So, if you're chaining multiple processors in series (say, an EQ and a compressor), then it's better to pass through only one stage of D‑A and A‑D conversion. But that's the theory, and (as in all matters audio) in practice it comes down to what you can hear.

With a good modern interface, like those in Focusrite's Saffire series, you have to go through many stages of conversion before you'll notice any audible degradation. This is especially true if you're using outboard to impart a bit of 'character' or 'flavour'; it's extremely unlikely that a couple of extra stages of conversion will be at all noticeable. If you're a mastering engineer then maybe you have good reason for worrying about this, but then you'd already know enough from listening to the difference that you wouldn't be asking this question! In my opinion, the benefits, in terms of saving time and being able to go with the creative flow of patching in your external effects as if they are DAW plug‑ins, far outweigh any theoretical disadvantage. Just remember to use and trust your ears!



Published March 2011

Friday, December 21, 2018

Q. What settings should I use when backing up vinyl?

By Hugh Robjohns

I've just started putting my vinyl collection onto my hard drive for the purposes of backing up and preserving it. I'm currently using Audacity to record the WAVs but I don't know what settings I should be using. Someone mentioned that I should record at 32-bit — is this correct?

Via SOS web site

SOS Technical Editor Hugh Robjohns replies: To answer the last question first: not really! The longest word length you can get from any converter or interface is 24 bits, so that's the format you should record and archive your files in, and if you plan to make 'safety copies' in the CD audio format you'll need 16/44.1kHz files.

However, the 32‑bit format does exist. Most DAWs process signals internally using a '32‑bit floating-point' format, and some allow you to choose whether to save ongoing projects in this native form to avoid multiple format changes as a project proceeds. In general, the 32‑bit floating‑point format still works with 24‑bit audio samples, but adds a scaling factor using the other eight bits to allow it to accommodate very loud or very quiet signals following processing. The problem is that not all DAWs share the same 32‑bit float format, so, for maximum compatibility, it's not a good idea to archive audio files long‑term in this format.


As the longest word length any converter can record in is 24 bits, that's the setting you should use when backing up your vinyl. If you're likely to want to run de-clicking software, it's worth making your original recordings at 96kHz.

If your records are in bad shape, it might be worth using de‑clicking software on your recordings before doing anything else.

As for the other settings, it depends on the condition of the records you are transferring and how much processing you are planning to do to them. For starters, though, if your records suffer from clicks, these can have a huge dynamic range that can easily overload the A‑D converter (which doesn't sound nice!). The sensible way around this is to leave masses of headroom when digitising, and that means using a 24‑bit analogue-to-digital converter and leaving at least 20‑30dB of headroom — more if the noise floor of the disc and converter allow it.

If you are planning to run de‑click software, then I would also recommend using a higher sample rate during the digitisation. That makes things much easier for the software, so digitising at 24/96 would be a good starting point.

If you are going to use de‑clicking software, run that first. There are various packages that do this, from the superb (but expensive) CEDAR tools down to various low‑cost plug‑ins. I often use iZotope RX, which is a very cost‑effective solution. Alternatively, you can manually edit out the clicks or, in some DAWs, redraw the waveform to erase them.

With the clicks taken out, you can then remove the (now empty) headroom margin by bringing up the level of the music signal to peak close to 0dBFS (I generally aim to normalise to ‑1dBFS).
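
As a rough sketch of that normalisation step, assuming the transfer has been loaded as a floating-point array scaled so that ±1.0 corresponds to 0dBFS (the function name and the 1kHz test tone are just for illustration):

import numpy as np

def normalise(audio, target_dbfs=-1.0):
    peak_dbfs = 20 * np.log10(np.max(np.abs(audio)))
    return audio * 10 ** ((target_dbfs - peak_dbfs) / 20)

# A tone captured with 25dB of headroom comes out peaking at -1dBFS:
t = np.arange(96000) / 96000.0
audio = 10 ** (-25 / 20) * np.sin(2 * np.pi * 1000 * t)
print(20 * np.log10(np.max(np.abs(normalise(audio)))))   # ~ -1.0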

You may, at this stage, want to deal with the surface noise — again, there are various tools for that — or adjust the overall tonal balance, but my advice would be to tread lightly if you do go down these routes.
Finally, sample‑rate convert the files down to 44.1kHz, reduce the word length (with dither) to 16 bits, and burn to CD.


Published March 2011

Wednesday, December 19, 2018

Q. How do you make a Reflexion Filter more stable?

By Various
I'm writing to you in an attempt to discover the genius of reversing the fittings of an SE Reflexion Filter, as espoused many times in the superb Studio SOS features by Paul White and Hugh Robjohns. I have struggled with this very impressive tool since it was first marketed and have had to contend with a drooping or toppling stand on many occasions.

Brian Langtry

SOS Technical Editor Hugh Robjohns replies: The SE Reflexion Filter is a very useful and effective tool for home and project studio recording applications, but in our opinion the mechanical design of its mounting hardware is not very clever, and makes the assembly potentially unstable when used with the most common variety of mic stands.

The basic problem is that, as designed, the entire system is intended to be supported by attaching one end of the mounting hardware to the mic‑stand pole. The Reflexion Filter ends up being mounted on its spigot about 100mm away from the stand pole, and the microphone on its post about 300mm away. The result is that all the weight — and, more importantly, the centre of gravity — ends up being a considerable distance away from the mic stand pole, which is an unstable configuration. Unless great care is taken to position one leg of the stand directly under the axis of the Reflexion Filter mounting hardware, the whole assembly is likely to topple over, and the microphone will probably be damaged as a result.

As part of the Studio SOS make-overs, Paul White and I have installed lots of Reflexion Filters and have evolved an alternative mounting technique that we think is safer and more practical. Picture 1 shows the Reflexion Filter constructed as the manufacturer intends. I have rigged it on the main vertical pole of a conventional boom‑arm stand and attached a Microtech Gefell M930 large‑diaphragm microphone with a Microfonen pop shield.

1. A Reflexion Filter assembled according to the manufacturer's instructions. The centre of gravity is quite some distance from the supporting mic stand pole.

1: As you can see, the centre of gravity of the assembly (which is roughly midway between the Filter and microphone posts) is a considerable distance from the supporting mic stand. The reason I've used the M930 is that it's the lightest large‑diaphragm mic I own. I didn't want to risk a heavier, more conventional vocal mic with the Reflexion Filter configured in this way!

Ideally, the mounting hardware should be completely redesigned to position the entire assembly's centre of gravity over the centre of the mic stand itself, but in the meantime, a very simple and practical work‑around is to modify the mounting arrangements to achieve a similar end result. It's not a perfect solution, but it is a great improvement and one that provides considerably better stability.

2. Start by part‑assembling the hardware.

2: As picture 2 shows, the first step is to assemble the hardware as the manufacturer's instructions indicate. The clamp is placed around the mic‑stand pole with the knurled spigot of the slide bar inserted into the appropriate hole in the clamp. The slide bar is (initially) orientated horizontally, with the microphone pole upright. Normally, the spigot extending from the bottom of the Reflexion Filter would then be inserted in the appropriate hole at the mic‑stand end of the slide bar.

3. Invert the microphone slide bar... 4. ...so that the support pole points downwards.

3 & 4: Instead of mounting the Filter at this stage, invert the microphone slide bar by loosening the locking lever on the stand clamp. With the slide bar upside down (and with the mic support pole now pointing straight down), re‑tighten the locking lever.
5. Loosen the pivot lever on the mic slide and rotate the slide and pole up and over 180 degrees.

5: Next, loosen the pivot lever on the microphone slide and rotate the whole slide and microphone pole up and over a full 180 degrees, so that it folds over the top of the microphone stand clamp. To achieve this, you may need to adjust the knurled spigot in the microphone stand clamp again to provide sufficient clearance for the microphone slide base. Re-tighten the pivot lever.
6. Now you can begin to attach the Reflexion Filter by inserting its spigot into the slide base.

 6: The Reflexion Filter can now be attached by inserting its spigot into the (now inverted) slide base, so that it is positioned just behind the microphone stand pole. The microphone can then be fitted to the mic support pole so that it ends up in front of the microphone stand, roughly in line with the front edges of the Reflexion Filter.
7. The reconfigured assembly is better balanced, with its centre of gravity much closer to the microphone stand pole.

7: That's it: job done. The whole assembly should now be far better balanced, with its centre of gravity much closer to the microphone stand pole itself. This solution works best on a mic stand without boom‑pole attachment, but it is still perfectly possible with the boom pole in place if you're careful.  


Published May 2009


Monday, December 17, 2018

Q. Does mono compatibility still matter?

By Various
I've recently started working at a classical radio station in my area, and I was fully expecting to have to deal with mono issues and think about miking live performance with those in mind. But everything is done in stereo and broadcast in stereo. Spaced omnis are common, and they're not a very mono-compatible technique. So when is mono compatibility a necessity, and is mono really ever used any more as a final 'product'?

Even popular modern DAB radios such as this one from Pure are mono by default, and a large part of the potential audience for radio and TV in the UK still listens in mono — so mono compatibility is still a consideration for music producers.

Via SOS web site
SOS Technical Editor Hugh Robjohns replies: In a technical sense, mono compatibility is still important. Whether a particular radio station chooses to bother about it is a decision for them, but I would suggest that it's unwise to ignore it completely.

FM radio is transmitted essentially in a Mid/Side format, where the derived mono sum (Mid) signal is transmitted on the main carrier and the 'Side' information is transmitted on a weaker secondary carrier. A mono radio ignores the Side signal completely, whereas a stereo radio applies M/S matrix processing to extract normal left and right signals.
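
In signal terms, that matrixing is just sum and difference. Here's a minimal numpy sketch; the 0.5 scaling is one common convention, and real broadcast chains differ in the details:

import numpy as np

def ms_encode(left, right):
    mid = 0.5 * (left + right)     # the mono sum a mono radio reproduces
    side = 0.5 * (left - right)    # the stereo difference information
    return mid, side

def ms_decode(mid, side):
    return mid + side, mid - side  # recovers left and right exactly

left = np.array([0.8, 0.1, -0.5])
right = np.array([0.2, 0.1, 0.5])
mid, side = ms_encode(left, right)
assert np.allclose(ms_decode(mid, side), (left, right))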

However, there is potentially a noise penalty in this process, so in poor reception areas, and often when on the move in a car, FM receivers are designed to revert to mono, to avoid reproducing a very hissy stereo signal. As a result, a large amount of in‑car listening will be in mono (at least, here in the UK) because of signal fading and multi‑path issues. In addition, a very large proportion of radio listeners do their listening in the kitchen, bathroom or garden, using portable radios that are usually mono. So mono compatibility is still important to a very large proportion of the potential FM radio audience.

Amusingly, mono doesn't become any less relevant in the digital radio market. The most popular DAB digital radio receiver in the UK is currently the Pure Evoke, and although you can attach an optional second speaker to enjoy stereo from it, by default the stereo output from the DAB receiver is combined to mono to feed the single internal speaker. So mono compatibility remains important in the digital radio market too!
Considering TV for a moment, the primary sound on analogue (terrestrial) TV in the UK is in mono, transmitted by an FM carrier associated with the vision carrier. Although a secondary stereo sound carrier was added in 1991, using a digital system called NICAM, there are still a lot of small mono TVs on the market. Analogue TV will be switched off in the UK within the next three years, and digital TV (both terrestrial and satellite) is broadcast entirely in stereo (or surround in some cases) — but even so, it is still possible to buy mono receivers.

So given that a significant proportion of the potential audience (for analogue and digital radio and TV) could well be listening in mono, I'd suggest that checking and ensuring mono compatibility is still important. I know that some classical radio stations, in particular, argue that only serious music enthusiasts listen to their output, and they would only do so on decent stereo hi‑fi equipment. Perhaps that is the case, but to my way of thinking, ensuring reasonable mono compatibility is still the safest approach, and needn't restrict the way broadcast material is produced in any way at all.

Using spaced omnis is a technique often favoured by classical engineers, largely because of the more natural sound and smoother bass extension provided by pressure‑operated mics. In some situations, particularly when using a single spaced pair, there can be mono compatibility issues — but only rarely, and it is usually easily fixed. For example, if any additional accent or spot mics are used and panned into the appropriate spatial positions, any phasing or comb filtering from the spaced omnis, when auditioned in mono, will be diluted and usually ceases to be an issue. Even in cases where a single spaced pair is used, listening to the derived mono may sound different, but it is rarely unacceptable.

To sum up, I would definitely recommend checking mono compatibility and trying to ensure that it is acceptable (even if not entirely perfect). If the sound quality of spaced omnis is preferred, there's no reason not to use them — even if the final output is mono — provided suitable skill and care is used in their placement and balance. The BBC certainly use spaced pairs for Radio 3 transmissions in appropriate situations.




Published June 2009

Friday, December 14, 2018

Q. What’s the best order for mixing?

By Various

I've been wondering what order people use when mixing. Mixing the instruments in order of priority? Mixing the rhythm section first?

Deciding on the right order for mixing your tracks might well depend on the genre in which you're working. The approach could be very different on a Rihanna mix than one of Dido's, for example.



Via SOS web site
SOS contributor Mike Senior replies: I've spent the last couple of years researching and comparing the techniques of many of the world's top engineers, and you might be surprised to discover that they disagree considerably on the issue of the order in which to deal with the different aspects of a mix. On this basis, it would be tempting to think that your mixing order isn't actually that important, but I think that this is a mistake, as in my experience it can have a tremendous impact on how a mix turns out.

One reason for this is that each different track in your mix has the potential to obscure (or 'mask') certain frequency regions of any other track. The primary way to combat frequency masking is to reduce the level of the specific problem frequency range in the less important instrument, letting the other one shine through better. So it makes a good deal of sense to start your mix with the most important track and then add in successively less important tracks, simply so that you can take a methodical approach to dealing with the masking problem. If any track you introduce is obscuring important elements of a more important track that is already in the mix, you set about EQ'ing the problem frequencies out of the newly added track. If you don't introduce important tracks until later, you'll tend to find it difficult to get them to sound clear enough in the mix, because there will by then be umpteen less important tracks muddying the water. This is a common problem for those who only introduce their lead vocal track right at the end of the mix, and it can often lead to an over‑processed and unmusical end result.
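
If you'd like to experiment with this kind of masking cut outside of a plug-in, here's a sketch using the widely published 'RBJ cookbook' peaking-EQ coefficients; the 3kHz centre, 4dB cut and Q of 1 are purely illustrative values, not a recipe:

import numpy as np
from scipy.signal import lfilter

def peaking_eq(fs, f0, gain_db, q):
    # Biquad coefficients from the RBJ Audio EQ Cookbook
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

fs = 44100
backing = np.random.randn(fs)           # stand-in for a backing-track signal
b, a = peaking_eq(fs, f0=3000, gain_db=-4.0, q=1.0)
backing_cut = lfilter(b, a, backing)    # duck the vocal's presence region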

Another persuasive reason for addressing the most important tracks first is that in practice almost everyone has mixing resources that are limited to some extent. If you're mixing in the analogue domain, you'll already be well acquainted with the frustration of only having a few of your favourite processors, but even in the digital domain there are only a certain number of CPU cycles available in any given hardware configuration, so some compromise is usually necessary, by which I mean using CPU‑intensive processing only for a smaller number of tracks. In this context, if you start your mix with the most important instruments, you're not only less likely to over-process them, but you'll also be able to use your best processors on them — an improved sonic outcome on two counts!

Taking another look at different engineers' mixing‑order preferences in the light of these issues, the disparity in their opinions begins to make more sense if seen in the context of the music genre they're working in. In rock and dance music styles, for example, people often express a preference for starting a mix with the rhythm section, while those working in poppier styles will frequently favour starting with the vocals. As a couple of examples to typify how this tends to affect the mix, try comparing Rihanna's recent smash 'Umbrella' with something like Dido's 'White Flag'. The first is built up around the drums, while the second has been constructed around the lead vocal, and you can clearly hear how various subsidiary sounds have been heavily processed, where necessary, to keep them out of the way of the main feature in each instance. In the case of 'Umbrella', check out the wafer‑thin upper synths, whereas in 'White Flag' listen for the seriously fragile acoustic guitars.



Published June 2009

Wednesday, December 12, 2018

Q. Which room should I record in?

By Various
I am about to do a recording at a farm in the North York Moors. It will be done 'live', using two acoustic guitars and two voices, and we will add bits of percussion, mandolin and accordion afterwards. The site has a choice of buildings to set up in (using my own equipment). Should I choose the huge, high‑ceilinged barn or the small, cosy stable?

With a choice of large recording spaces to work in, acoustic screens can be very useful for tailoring the room characteristics to suit your players.

Via SOS web site

SOS contributor Martin Walker replies: It's really difficult to generalise about the acoustics of a room without seeing it personally. The small, cosy stable might provide a more intimate acoustic that works well with your small acoustic ensemble, especially if it includes lots of wooden stalls that produce plenty of pleasing diffuse reflections. On the other hand, it might sound really nasty, depending on its dimensions and whether it's built of rough stone or breeze blocks. In general, smaller rooms tend to sound more 'boxy'.

A larger space with good dimensions can exhibit a flatter response down to a lower frequency, and hence better acoustics, and it may give you a richer and grander reverberation whose amount you could alter by how close you place the mics to the performers. A larger space also provides you with more opportunities to place several mics at different distances from the performers; close ones capturing the intimacy of the performance, and more distant ones capturing the ambience of the space, recorded on to additional audio tracks that can be later mixed in to taste. However, this doesn't necessarily mean that your particular barn will sound good, especially if you end up with several discrete reflections coming back off plain, unadorned walls and a concrete floor.

Ultimately, you really do have to use your ears. Don't worry if you don't have sufficient experience to judge room acoustics immediately on entering the space. Just set up one of the performers (with an acoustic guitar, for instance), put on some high‑quality, closed‑back headphones and move both performer and mic about while you monitor their performance. While this will be vital in helping you find the optimum mic position to record each instrument and voice, it will also tell you a lot about the room acoustics (probably a lot more than clapping a couple of times, as so many people do).

Acoustic guitars often benefit from a 'live' sound, so you may find it beneficial to place the performers near some reflective surfaces such as doors, a hard floor, or those stable stalls. You may also find that using a couple of mics on each instrument works well, such as one below the bridge and another near the neck of the guitars. You can find lots more useful advice on mic positions and distances for recording acoustic guitar in SOS August 2001 (/sos/aug01/articles/recacgtr0801.asp).

If the room sound proves to be poor, it's handy to have a few movable acoustic screens (or even improvise some with clothes horses and duvets) to add some extra absorption close to the performers. Such screens can also be used to increase acoustic separation between the players. However, I suspect that your safest option is not to restrict yourself to the stable or barn. Since you're recording at a farm, see if other rooms in the farmhouse itself could be used. Many excellent acoustic ensemble recordings have emerged from such cosy environments, which normally contain enough furniture to provide plenty of absorption and diffusion. Starting with a room that sounds good is always easier than attempting to knock a poor one into shape.




Published June 2009

Monday, December 10, 2018

Q. What should I buy for Mono monitoring?

By Mike Senior
I'm looking for a new monitor speaker, a single one for mono. Ideally I was hoping for something self-powered, with its own summing circuit. I had an Auratone 5C for ages, wired up to a crappy amp, but I'd like a more elegant solution. Any ideas?


SOS Forum post

If you want to check your mixes on a mono 'grot box' such as an Auratone, some monitor controllers, such as the Mackie Big Knob pictured here, have a mono switch that will allow you to connect up just one side of a speaker output to the mono speaker.

SOS contributor Mike Senior replies: If you're after a replacement for your 'Horrortone', you want more than just any old mono speaker: you specifically want a small, one-way, unported mono speaker. This narrows your field of enquiry somewhat.

There are two passive models that come to mind — Triple P's Pyramid, which we reviewed back in SOS 2004, and the Avantone Mixcube, reviewed in SOS April 2007 — but it sounds to me like you're after something active, so that you can avoid using a separate amp. The Mixcube Actives have recently been launched (I have one of these on order myself!) and may provide the most 5C-like solution, but there are also some other models worth considering, such as the Fostex 6301 and Canford Audio's Diecast Speaker (which I've used a great deal for mixing purposes).

As regards summing to mono, there are lots of things you could do. The pro studio approach would probably be to invest in a dedicated mono summing unit, but that's by no means the only option. Some monitor controllers have a mono switch, so you can connect up just the left side of one of the speaker outputs to your speaker — this is what I do on my SPL Model 2381, but units such as the PreSonus Central Station, Mackie Big Knob and Mindprint Trio all have a Mono switch too.

However, there are workarounds even if you have no dedicated monitor controller. For example, if you're monitoring through a hardware mixer and can spare a stereo channel with an aux send, then any stereo signal through the channel will usually be automatically summed to mono at the respective aux output. Plumbing your mix through this channel and connecting the mono speaker to the relevant aux send output should then do the trick.

If you're tempted to just wire up a cable connecting your left and right control-room outputs to the single input of the speaker, tread very carefully. I'm no electronics buff, but as I understand it there are very few pieces of equipment that will tolerate your connecting together their outputs in this way without causing some audible side-effects and possibly damage. However, if you're handy with a soldering iron and have a spare headphone monitoring output available, you can create an adaptor lead incorporating a couple of built-in resistors that should do the job fine. Recent Mix Rescue candidate David Greaves kindly wired up one of these for me so that I could easily connect my Canford speaker into his hardware mix system, and has helpfully provided full details of the lead's construction at http://koo.corpus.cam.ac.uk/mixerton/articles/mono..., if you fancy giving that a go.
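
For the curious, the electrical idea behind such a lead is simple superposition: two equal build-out resistors feeding one input give a passive mono sum, with each channel arriving roughly 6dB down. A hedged sketch of the node voltage (the 1kOhm resistors and 10kOhm input impedance are assumed values for illustration, not the ones from David's lead):

def summed_output(v_left, v_right, r=1000.0, z_in=10000.0):
    # Each channel feeds the input node via a resistor r; the input
    # presents impedance z_in (source impedances are neglected)
    return (v_left + v_right) / (2 + r / z_in)

print(summed_output(1.0, 0.0))   # ~0.476: one source arrives ~6.4dB down
print(summed_output(1.0, 1.0))   # ~0.952: both channels sum at the node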


 
Published October 2008

Friday, December 7, 2018

Q. How can I use multiple audio interfaces together?

By Various
Mackie Onyx audio interface.
I’m currently upgrading my project studio, which is based around a Focusrite Saffire LE and is fine when using Cubase or NI’s Traktor. However, I am looking to bring Pro Tools into the equation, and despite some hunting I’m still stumped. Would using an M Box affect the sound drivers for my Saffire, forcing me to disable one piece of hardware and restart my system? Despite all my reading in forums, I am still unsure about the feasibility of adding a second Mackie Onyx 400F to my Windows XP DAW system. I understand Mac OS has the ability to aggregate, but it is not clear to me if Windows XP can handle the two interfaces at once or not. Would the drivers do this for me, and will I effectively end up with a 20-in/20-out interface to use with the bundled Tracktion 2?

SOS contributor Martin Walker replies: When plug-and-play soundcards started appearing in 1998, the need to manually choose such arcana as IRQ numbers and DMA channels disappeared, and since then it’s been comparatively easy to physically install and run more than one audio interface in a computer. Occasionally a particular model of PCI soundcard might refuse to share the interrupt it had been given with that of another expansion card, which might result in you having to shuffle it to a different slot, but over the years I’ve regularly managed to install up to four soundcards in a single computer without things ending in tears. With modern Firewire and USB audio interfaces it’s even easier, since even if you run out of suitable ports on your computer, many Firewire audio interfaces can be daisy–chained, and you can add more USB ports using a (preferably powered) hub.

So, to answer the first question, adding an M Box wouldn’t cause any conflicts with an existing Saffire LE interface, and the two should both run happily when plugged into the same computer. To be compatible with Pro Tools software, an audio interface will either need to be from Digidesign or from M-Audio’s ‘Pro Tools M-Powered’ range, but all such interfaces additionally have both ASIO and Core Audio drivers, so that you can also use them with any other Mac or PC audio application. Therefore you could either replace the Saffire LE with a Pro Tools–compatible interface and use the latter either with Pro Tools or Traktor (but not both at once), or you could simultaneously run Pro Tools with an M Box and Traktor with the Saffire LE.

The second question covers slightly different ground. Combining two or more audio interfaces from the same manufacturer into a single ‘super interface’ with more inputs and outputs requires an ASIO ‘multi-device’ driver. Many audio interface manufacturers offer such drivers (typically supporting up to four devices), so that you can increase your I/O complement easily as your recording and playback requirements become more sophisticated. There’s no increase in latency, and as long as there’s a way to lock the clocks of all the devices together, they should stay locked permanently in sample-accurate sync (follow the manufacturer’s instructions on the best way to do this for your particular interface).

Without multi-device drivers, there’s no way to install and run two or more identical audio interfaces in a computer, since the operating system would have no way to differentiate between the various units. However, in this particular case there’s a happy ending, since from version 3.2.8 onwards Mackie’s Onyx drivers for Windows XP do support several devices, so you can create a single interface with 20 inputs and outputs.

Those with two or more different audio interfaces can try a different approach. On the PC you can try combining their functions using the freeware ASIO4ALL driver (www.asio4all.com), although this can result in increased latency, and on the Mac you can try creating an ‘Aggregate Device’. Once again, this can significantly increase latency.
By the way, MIDI and audio drivers are quite separate, and you can nearly always combine the MIDI ports from several different audio interfaces and use them within a single sequencer application, whichever interface is providing the audio I/O.



Published November 2008

Wednesday, December 5, 2018

Q. Which mixer should I choose for drum recordings?

By Various

I'm looking for something to record my latest project with. I currently own an Apple iMac running OS X Tiger, and the main program I use for recording is Apple's own Soundtrack. I'm thinking of upgrading at some point to Logic Pro 8, but for the moment I'm with Soundtrack. I have eight Audio Technica mics for the drum kit and I'm really looking to record through a mixer into Soundtrack (Logic in the future). However, I still want to be in control of individual mic levels after I've recorded within Soundtrack or Logic, even having those individual mics on different tracks.

I'm wondering if you could help me in choosing some kind of mixer with eight XLR inputs that will let me control individual tracks within the drum mix afterwards?

Mackie's 1642 VLZ3 mixer, with eight good-quality mic preamps, looks like a good bet if multitrack drum recording is part of your studio workload.

George Barnett

SOS Reviews Editor Matt Houghton replies: Dilemmas like this are rather difficult to advise upon without knowing anything about your budget, but I'll try...

Let's start with the cheapest and simplest option. If you plan to do all the mixing and processing 'in the box', you don't really need a mixer at all: you just need an audio interface with sufficient mic preamps. Something like the recently announced Focusrite Saffire Pro 40 would give you all of this in a single unit, and although I've not used it myself I hear good reports, and it looks to be very decent value. Alternatively, you could choose an interface with eight line inputs and use external mic preamps — or, of course, a mixer. There are so many suitable interfaces, at so many different prices, that I can't really list them here, but most recent 24-bit interfaces should be up to the job.

If you want to use an analogue mixer to feed the line inputs of your interface, I'd suggest something like the Mackie 1642 VLZ3, taking a feed from its direct out sockets. It will probably have more functions than you need, but I doubt you'll find eight preamps of this quality in anything cheaper: the VLZ3-series preamps are neutral-sounding and offer plenty of headroom.

Mixers, of course, can be useful if you want to add some processing while recording: if you like to tweak EQ and compression on the way in, so that you only need to make smaller tweaks in the computer, a mixer would still be a good option. Personally, unless you're used to working in that way, I'd opt to do that in the computer. Mixers can also double up as monitor control systems, without running the risk of latency in your computer, which is useful if you're doing a lot of recording. Alternatively, choosing a mixer with Firewire or USB2 connectivity would mean there's no need for a separate audio interface; something like the Alesis Multimix Firewire 16 would do the job, offering you all the preamps and I/O that you need to record your eight tracks simultaneously.

Whichever option you choose, you just need to select the appropriate audio interface driver in your DAW and assign the physical input channels of your interface to separate software input channels. In this way you can record several tracks simultaneously, and then do whatever you want with them in your DAW software.

Speaking of your DAW, I must admit that I'm not that familiar with Soundtrack Pro. I believe it offers sufficient input tracks and I know that you can do plenty of processing in there, but I suspect it will lack some of the useful features commonly found in music-oriented DAWs such as Logic, Cubase and Pro Tools. You could certainly do worse than purchasing Logic, as it comes with a great bundle of plug-ins as standard and the current price is very competitive. If that's too pricey, there's always Logic Express, from which you can upgrade to Logic Pro when you can afford it.



Published December 2008

Monday, December 3, 2018

Q. Why does my USB mixer make a whining sound?

By Various
I have a Core 2 Duo PC, Yamaha MW10 USB mixer, Alesis RA150 power amp and JBL speakers. I use high-quality cable from amp to speakers, a TRS-wired jack lead between the mixer output and amp, and a good-quality USB cable between my mixer and computer.

My problem is a slight, audibly noticeable (and recordable!), annoying high-pitched 'whine' from both speakers when Reaper is booted up and set ready for recording. To date I have isolated or switched off all other equipment and had only the computer, amp, mixer and LCD monitor powered from one surge-protected domestic plug board going to its own mains power socket in my small bedroom studio, but the noise persists. Apart from this noise, all the audio works fine via the loaded MW10 drivers. Other stuff in my studio includes some outboard hardware powered by wall-wart PSUs, but I've had these switched off and disconnected and the whine still persists. I'm stumped.


Malcolm Furneaux

SOS contributor Martin Walker replies: I suspect that you have run into a common problem faced by many musicians who use USB and Firewire peripherals. The whining noises you hear are due to a ground loop, and you may also notice them changing when you move your mouse, access your hard drive or update your graphic display.

A ground loop occurs when there's more than one ground path between two pieces of gear, and nowadays is comparatively common when connecting an earthed computer via an audio interface to an earthed mixing desk or power amp. As I described in SOS July 2005 (www.soundonsound.com/sos/jul05/articles/qa0705_1.htm), there are three common ways to break such loops safely (depending on whether or not the gear you're using provides balanced inputs and outputs), and they are: using balanced audio cables; using pseudo-balanced audio cables; or using a ground-loop eliminating box containing an isolating transformer, such as the ART Cleanbox II, reviewed in SOS August 2005 and typically costing under £35 in the UK or around $50 in the US.


In your case, the Yamaha MW10 mixer and the Alesis RA150 power amp both provide balanced I/O on TRS-wired jacks, so if you've already got a pair of balanced, TRS-wired cables between the two for the left and right channels, there's no possibility of a ground loop there. You've already been systematic in unplugging all the gear except the computer, monitor, mixer and amp, to rule out other ground-loop possibilities.
So where's the problem? Well, since your PC and USB mixer are each earthed via their respective mains plugs, the ground loop is, on this occasion, completed via the USB cable that connects the two. Try unplugging the amp and USB cable, and then listening to the headphone output on the Yamaha MW10. If the whine only reappears when you plug the USB cable back in (or even touch the metal of its plug to the chassis of the other gear), you've confirmed the culprit.

I'm sure that many manufacturers of mains-powered USB and Firewire audio peripherals incorporate safeguards that circumvent such computer ground-loop problems. Nevertheless, I know of many owners of USB or Firewire mixers, audio interfaces, MIDI interfaces, and even the occasional music synth/keyboard controller with USB MIDI output for connection to a computer, who have been plagued by them.

Synth users can often cure such problems by abandoning the USB output in favour of the alternative opto-isolated MIDI connection, while laptop owners can safely bypass their computer's mains earth by running on batteries or replacing an earthed mains PSU with a Universal double-insulated model (see the PC Notes column in this very issue for more details). However, desktop and tower PCs require a mains earth, and you can't remove this for safety reasons. US readers might like to try Ebtech's Hum X adaptor (www.ebtechaudio.com/humxdes.html), but I haven't yet found a European equivalent.

I have heard of a few musicians modifying the USB or Firewire cable by cutting through its outer plastic sheath near the plug at one end and carefully severing the outer screen connection, but although this may cure the audio interference, it often prevents reliable data transmission. Specialist USB isolators are available that are, in effect, DI boxes for USB connections (one example is the USB-GT from Meilhaus Electronic, www.meilhaus.com), but they generally don't come cheap.

If you experience this problem, you may simply have to minimise its effects. Plugging all your mains appliances into the same mains distribution board, to create a 'star' system with everything powered from the same mains wall socket, can often reduce the interference, as can trying different lengths of USB/Firewire cable (generally, shorter ones will be better). If I track down any more definitive solutions, I'll publicise them in PC Notes.




Published December 2008