Welcome to No Limit Sound Productions

Company Founded
2005
Overview

Our services include Sound Engineering, Audio Post-Production, System Upgrades and Equipment Consulting.
Mission
Our mission is to provide excellent quality and service to our customers. We offer customised service.

Tuesday, July 18, 2017

Q. What’s the best way to connect gear digitally?



By Various

I currently have an Emu 1820M audio interface and a Line 6 Pod XT Pro modelling guitar preamp, and am about to get a PreSonus Digimax FS eight‑channel mic preamp. The thing is, I haven't got a clue as to how you would connect all these devices together, or even if it is possible.

Via SOS web site

SOS contributor Martin Walker replies: It's often possible to connect gear in several ways, but there will nearly always be a 'best' way that should ensure the highest audio quality. First of all, avoid unnecessary A‑D and D‑A conversions.

With your gear, for instance, if you simply connected your Pod analogue out to one of your 1820M analogue inputs, this would pass the signal through the D‑A converter in the Pod and then an A‑D converter in the Emu, which would compromise the signal slightly. It's far better to connect the S/PDIF digital output of your Pod to an S/PDIF input on the Emu, bypassing these two conversion stages. The PreSonus Digimax FS also provides separate analogue outputs for each of its eight mic preamps, but once again you're better off connecting the PreSonus ADAT output to the Emu ADAT input, rather than tying up every Emu analogue input!

Next, with any combination of digitally connected devices you have to decide which should be set to its 'internal' clock setting, and thus provide the master clock signal, via its S/PDIF, ADAT or AES/EBU outputs, to the other devices. These, in turn, should all be set to 'external' clock and become 'slaves', locking everything together in perfect digital sync.

Choosing the device to provide the master clock is the key to achieving the best audio quality. Theory states that you should always choose as master the device whose clock offers the lowest jitter levels, which in this case would be the PreSonus, with its JetPLL jitter‑reduction technology. However, in practice this choice is often more complicated.

The two most critical points, as far as digital clocking is concerned, are the two conversion stages. The first is when analogue signals are converted to digital by the A‑D conversion process during recording, where any digital 'shaking' (jitter) will result in a permanently 'blurred' recording that can't be corrected or improved later on. The second is when digital audio is converted back to analogue, so we can hear it through loudspeakers or headphones, where any further digital shakiness will blur existing recordings on playback.
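To make the 'blurring' effect concrete, here is a toy simulation (not part of the original answer) that samples a sine wave twice: once on an ideal clock and once with Gaussian timing jitter added to each sample instant. The RMS difference between the two captures grows with the jitter, and that error is baked into the recording. The function name and jitter figures are illustrative assumptions, not measurements of any of the devices discussed.

```python
import math
import random

def sample_sine(freq_hz, sample_rate, n_samples, jitter_std_s, seed=0):
    """Sample a unit sine wave on a jittery clock.

    jitter_std_s is the standard deviation of the clock timing error
    in seconds. Returns the RMS error versus an ideally clocked capture.
    """
    rng = random.Random(seed)
    err_sq = 0.0
    for n in range(n_samples):
        t_ideal = n / sample_rate                      # perfect clock tick
        t_jittered = t_ideal + rng.gauss(0.0, jitter_std_s)
        ideal = math.sin(2 * math.pi * freq_hz * t_ideal)
        captured = math.sin(2 * math.pi * freq_hz * t_jittered)
        err_sq += (captured - ideal) ** 2
    return math.sqrt(err_sq / n_samples)

# One second of a 10kHz tone at 44.1kHz: a noisier clock produces a
# larger sampling error, and high frequencies suffer most because the
# waveform changes fastest there.
low_jitter = sample_sine(10_000, 44_100, 44_100, 100e-12)  # ~100ps jitter
high_jitter = sample_sine(10_000, 44_100, 44_100, 10e-9)   # ~10ns jitter
```

The same mechanism applies in reverse at the D‑A stage, which is why both conversion points matter.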

Like many budget interfaces, Emu's 1820M works rather well on its Internal clock, but its jitter levels increase if you switch to external clock, however good that external clock is. So for best results when using a budget interface, you should generally allow it to be the master device during playback and when recording through its analogue inputs.

Connecting the elements of a digital audio system together is not always a straightforward process, with clocking and jitter just two of the issues that should be taken into account. Some devices, such as this PreSonus Digimax FS preamp and A‑D converter, even have jitter‑reduction technology that can improve the jitter performance of other devices in the system. 

The beauty of the jitter‑reduction technology featured in the PreSonus Digimax FS (and various other devices) is that even when slaved to the Emu's more jittery clock, the PreSonus will nevertheless significantly reduce jitter levels on the way in, so your mic recordings will still sound pristine. To do this, just connect a cable between the Emu S/PDIF or ADAT output and the corresponding digital input on the PreSonus (it doesn't matter which, since only the embedded clock signal is being utilised). The Pod XT manual doesn't mention jitter reduction, so when recording guitar you should probably switch that to be the master device and the Emu to slave.

Whenever you're faced with several clocking choices, try each one in turn and listen carefully. The one offering lowest jitter should provide a stereo image that's both wider and deeper; you should notice more 'air' and space in recordings, and you should also be able to hear further into the mix, with distant sounds revealed better.



Published August 2009
 




Saturday, July 15, 2017

Q. What is the best way to reduce bleed on a drum recording?



By Various

When mixing drums, is it standard practice to try and tighten things up by getting rid of bleed on all but the overheads? I'm guessing it's genre specific. At the moment I'm recording mainly rock and indie‑style music and just wondered what the pros and cons of doing this are? Also, besides manually going through and silencing or reducing the level of bleed on these tracks, are there any better ways of doing it? I've tried noise gates but to get them at the level of noise reduction I need, they stifle the actual drum hits. I'm using Apple Logic 8.

Logic's Noise Gate plug‑in's side‑chain filtering and range (Reduction) control make it useful for processing drum recordings to reduce spill. 

Via SOS web site

SOS contributor Mike Senior replies: While there are a lot of ways to reduce bleed levels on close mics, and you'll often see some kind of spill‑reduction processing in mixes of multitrack drum recordings, I'd advise against trying to remove all the spill from them. Even if you could actually pull it off effectively, you'd almost certainly throw out the baby with the bathwater in the process, as the spill contributions can actually improve your complete kit sound by picking up more aspects of each instrument and by generally gluing everything together. It's much better to build a kit balance from the mics without processing (although you should pay adequate attention to the polarity settings on each of the tracks, as with any multi‑mic recording) and then only bring in spill‑reduction processing where it's needed. For example, it's not uncommon for there to be too much hi‑hat in the balance if the snare mic has picked up lots of hi‑hat spill, so that would be an argument for trying to reduce this bleed — not killing it completely, necessarily, just pulling it down enough to sort out the balance problem. Similarly, if your tom‑tom close mics are over‑emphasising the sympathetic ringing of these drums (a common problem), some reduction in the spill on those mics would probably be in order.

In terms of techniques, there are a lot of ways to deal with spill, but the primary way is via dynamics processing such as gating/expansion. It's useful that Logic's built‑in Noise Gate plug‑in has side‑chain filtering, so that you can achieve reliable triggering, and that it offers range control, which lets you reduce spill without completely muting it. Both of these facilities make life a lot easier. Some people also just use manual audio editing to deal with spill sections, but that can quickly get very laborious on anything but rarely used tom‑tom tracks. If you've got a complicated part and are really having trouble getting a gate to trigger properly, try automating the gate's threshold control for any problem sections.
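The range (Reduction) behaviour described above can be sketched in a few lines. This is a deliberately simplified per‑sample gate, not Logic's actual algorithm: a real gate uses an envelope follower with attack/hold/release smoothing and side‑chain filtering before the level detector. The function name and parameters are illustrative assumptions.

```python
def gate_with_range(samples, threshold, range_db):
    """Simplified noise gate with a range (floor) control.

    Samples whose magnitude falls below 'threshold' are not muted
    outright; they are attenuated by 'range_db' decibels, mimicking
    the Reduction control that lets spill be turned down rather than
    silenced. Illustrative only: no envelope smoothing is applied.
    """
    floor_gain = 10 ** (-range_db / 20.0)  # dB reduction -> linear gain
    return [s if abs(s) >= threshold else s * floor_gain for s in samples]

# A loud drum hit (0.5) passes untouched; quiet spill (0.01) is pulled
# down by 20dB (x0.1) instead of being chopped to silence.
out = gate_with_range([0.5, 0.01], threshold=0.1, range_db=20.0)
```

Setting a finite range rather than full attenuation is exactly what keeps the kit sound glued together while still taming the spill.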

Reading between the lines of your question, it sounds to me as if you might have a really tough spill problem to deal with, by nature of some problem with the recording. If you've got overwhelming levels of hat spill on your snare close mic, probably the best salvage technique I can recommend is to trigger a snare sample of some type from the close‑mic track. Much of the trouble people get with hi‑hat spill is on account of heavily EQ'ing the snare mic to brighten it, so layering in a bright (and even high‑pass filtered) sample instead may solve the problem, by obviating the need for the EQ. You could also just completely replace the close mic with the sample, of course, but it can sometimes then be difficult to blend the sample convincingly with the kit, so try just blending first.



Published September 2009

Thursday, July 13, 2017

Q. How many guitar layers should I use?



By Various

I'm doing a demo for a local act and we've tracked layer upon layer of overdubbed guitars: there are 10 rhythm parts with various chord voicings, and 10 lead parts playing variations on the solo and riff hook. A few of the layers are duplicates, but we had four different guitars, playing bar chords, open chords and power chords for the rhythm parts.

If you're mixing many layered guitar parts, consider identifying sub‑sections of each and giving them their own characters with different amp‑sim treatments and re‑amping.  

My question is: how many should I use? The lead tracks are mostly duplicates and there isn't much distinction between them, so I'll comp those later; it's the rhythm that's bugging me. The parts are tight and played on nice instruments, so the issue isn't so much of musicality, it's of fitting all the variations into the mix without it sounding like mush. Do I try and fit them all in? Or comp them down to make one or two awesome tracks? It's essentially a bog‑standard rock sound, so double‑tracking the rhythm makes sense, with each part hard panned, but how would you incorporate the other rhythm tracks? We DI'd the guitars, as we can then use IK Multimedia Amplitube to change the tones. I assume I would try different Amplitube settings for each pass?

Via SOS web site

SOS contributor Mike Senior replies: In my experience, unless someone's put in a fair few hours of punching‑in and/or editing, most tracked‑up walls of guitar aren't tight enough to sound punchy. Even if the timing of the individual picking transients is on the money, the lengths of the notes or the points at which the strings are damped don't match up nearly as well, and this also affects the rhythm. So my first suggestion would be to focus on the tightness of any layers of the same basic guitar part. This is especially relevant if you're stereo panning them, because human hearing is extremely sensitive to inter‑ear timing differences. If you're having trouble with mushiness, you may find that there are some tuning problems too, so be critical and ditch anything sour. The closer the tuning of your parts, the more the pitched elements of the part will reinforce, and this will help keep the harmonies of the part clear despite any layering you may do.

For a middle‑of‑the‑road rock sound, it's typical for the stereo field to be balanced by putting double‑tracked rhythm parts on opposing sides of the image, and if you're going to have more than two parts going at the same time it'll probably give you a more satisfying spread if you don't pan them all to the same two places. You'd also expect the guitar arrangement to fill out for the choruses, so you're sensible to think in terms of adding more overdubs for those sections, plus you could add more extreme‑panned layers here while leaving the verses slightly narrower. That said, while added overdubs can help increase the illusion of size, they will tend to make the composite sound more bland and homogeneous, as well as pushing the guitars away from the listener: using more guitar parts means that each has to be lower in level to avoid making the rest of the mix sound small. Every producer tends to make their own compromise in this regard, so you can only make an informed judgement by comparing your mix to a few commercial tracks in the style.

Beyond that, if you've got that many parts available to you, I'd use them to bring some light and shade to the arrangement. Most riffs are made up of smaller musical figurations and fills, and you can really bring them to life if you give each of the different sub‑sections of a riff its own character, by altering the balance of the parts from moment to moment. The easiest way to do this is to line all the tracks up together with different modelled amps and then edit different bits away from each track. If you use clearly contrasted settings from Amplitube, this could make life easier. Because your guitars are DI'd, you could also use them to drive virtual stomp‑boxes and amps at different moments, by splitting the DI audio between sequencer tracks with different plug‑in settings.

You have access to the amp settings, so you shouldn't need much processing during the mix, beyond perhaps a little low cut to tame any general woofing around that might interfere with your bass part. However, in my experience, layering up parts that all use the same amp‑modelling engine seems to make it trickier to get a really solid sound, so I'd experiment with other modelling options as well (even comparatively lo‑fi ones) — or, even better, try re‑amping a few of the parts through a real amp and speaker to catch the sound of some real air moving. 


Published September 2009
