Welcome to No Limit Sound Productions

Company Founded

Our services include Sound Engineering, Audio Post-Production, System Upgrades and Equipment Consulting.
Our mission is to provide excellent quality and service to our customers. We offer customized services.

Friday, August 29, 2014

Q. Should I apply an effect to the whole mix, or use effects on each track?

Sound Advice : Mixing

Does sending multiple instruments to the same effects track (reverb/delay) have a different effect on the overall mix from applying effects to each individual channel? What is the best way to apply lots of effects to tracks whilst saving CPU? Also, how does this work in terms of mixing multiple tracks through one compression effects track?

Via SOS web site

SOS Reviews Editor Matt Houghton replies: If I had a pound for every time I've been asked this question I'd be a rich man!

An efficient way to route effects is to send multiple channels to the same effects processor.

Let's start with reverb and delay: I'll discuss reverb, but the same principle applies to delays. It's worth pointing out that only recently, with plug-in-based DAWs and powerful computers, has it really become possible — at least without being a millionaire — to run separate reverbs on every channel. In the days of yore, almost all reverbs would have been set up as send effects. Partly, that was down to the cost, but there were some practical and sonic benefits to this approach too. Firstly, using reverb as a send effect meant it was quick and easy to adjust the wet/dry balance of things, either using the aux send knob on each source channel, or using the fader on the effects return channel, depending on whether you wanted to adjust the balance of one source or the whole mix. Secondly, you could leave reverbs patched into aux sends, so that your favourite reverb unit was always easily accessible to each channel, without needing to re-patch. Thirdly, if the aim is to gel different sources together, you probably want them to sound like they're in the same space, so it makes sense to share the same reverb. And, finally, separating the reverb and the source track allows you to further process the reverb signal, with filters or other processors, before mixing it back with the source.
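There's also a tidy bit of maths behind sharing one reverb: a reverb is, to a first approximation, a linear process, so feeding a summed send bus into one instance gives the same result as running a separate instance on every channel. A minimal NumPy sketch (the signals and the short impulse response are invented purely for illustration):

```python
import numpy as np

# Two hypothetical dry sources and a short impulse response standing in
# for a reverb (any linear effect behaves the same way).
rng = np.random.default_rng(0)
guitar = rng.standard_normal(64)
vocal = rng.standard_normal(64)
reverb_ir = np.array([1.0, 0.6, 0.35, 0.2, 0.1])

# Per-channel approach: a separate reverb instance on each track.
per_channel = np.convolve(guitar, reverb_ir) + np.convolve(vocal, reverb_ir)

# Send approach: both tracks feed one shared reverb bus,
# and a single instance processes the summed signal.
send_bus = guitar + vocal
shared = np.convolve(send_bus, reverb_ir)

# Because convolution is linear, the two routings are identical.
print(np.allclose(per_channel, shared))  # True
```

So the send approach saves CPU without changing the wet signal itself; the audible differences come from the routing flexibility described above, not from the reverb processing.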

However, it's also quite common for some sources to have their own dedicated reverb. A snare drum, for example, might be sent both to an overall ambience or drum-room reverb, as well as to its own dedicated reverb. In this situation, you could use a reverb as an insert or as a send effect, and if you wanted the snare reverb to be processed along with the snare (or the rest of a drum group), you'd route the snare reverb's return channel to a group/bus track along with the snare, or all of the drums. Another possibility is that you simply want to thicken up a weedy sound using a short ambience patch and, again, you can use either approach here. So, in short, it's perfectly acceptable to use a reverb either as an insert or as a send, but the send approach is much more common, and for good reason.

When it comes to EQ and dynamics processors, such as compressors, gates and limiters, this situation is reversed. In other words, they'd normally be used as inserts, to sculpt sounds and manage their dynamic range so that they better fit the mix (which is why some high-end consoles feature EQ and compression on every channel). Again, though, there are occasions when you might want to use them as sends, the most common of which is when you want to perform parallel compression. Some compressors feature a wet/dry, or 'blend', control for exactly this purpose, but you have more flexibility by using a compressor as a send effect, because you're able to EQ the return signal again before blending back with the source. This technique is fairly commonly deployed on a drum bus, sometimes with a slight 'smile' EQ in series with the compressor. If you want another, more extreme example, producer Michael Brauer reportedly mults out vocals to several different compressors in parallel to get the effect he wants.
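As a rough illustration of the parallel approach, here is a sketch in Python with NumPy. The crude static compressor and the signal values are invented for illustration and don't model any particular plug-in:

```python
import numpy as np

# Hypothetical drum-bus signal (peak amplitudes only, for illustration).
drums = np.array([1.0, 0.2, 0.9, 0.15, 1.2, 0.1])

def compress(x, threshold=0.5, ratio=4.0):
    """Crude static compressor: above the threshold, the excess is divided by the ratio."""
    mag = np.abs(x)
    over = mag > threshold
    out = x.copy()
    out[over] = np.sign(x[over]) * (threshold + (mag[over] - threshold) / ratio)
    return out

# Parallel compression: squash a copy aggressively on the send/return path...
wet = compress(drums)
# ...then blend it back under the untouched dry signal. The return fader
# (here a 0.5 gain) sets how much 'thickening' is added; an EQ could be
# inserted on the wet path before this sum.
blend = drums + 0.5 * wet

print(compress(np.array([1.0]))[0])  # 0.625: a 4:1 ratio above a 0.5 threshold
```

The point of doing this on a return rather than with a wet/dry knob is exactly the one made above: the wet path is a separate channel, so it can be EQ'd or otherwise processed before the blend.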

Of course, it's also perfectly acceptable to use processors such as compressors on bus channels without doing any parallel compression. That's known as 'bus compression' and is typically done to 'glue' or 'gel' things together. You'll often see that done on a drum group bus or on a master bus, and if that's a technique that you're interested in I'd recommend reading the feature in SOS May 2008 (/sos/may08/articles/mixcompression.htm).

If you're working in a digital environment, like a modern computer-based DAW, it's worth a quick word about latency compensation. Most DAWs now include automatic plug-in delay compensation, and this is essential when doing parallel compression, as the delays will otherwise cause unwanted phase-cancellation. It's not really an issue for reverbs, though, where any delay can be compensated for by reducing the reverb's pre-delay.    
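The phase-cancellation problem is easy to demonstrate numerically: if the parallel path is delayed by half the period of some frequency, that frequency cancels when the paths are summed. A NumPy sketch, where the 24-sample plug-in latency is an invented example:

```python
import numpy as np

sr = 48000
t = np.arange(1024) / sr
sig = np.sin(2 * np.pi * 1000.0 * t)  # a 1 kHz component of the mix

# An uncompensated plug-in latency of 24 samples is exactly half the
# period of 1 kHz at 48 kHz (48 samples per cycle).
latency = 24
delayed = np.concatenate([np.zeros(latency), sig[:-latency]])

# Summing dry and latent parallel paths: the 1 kHz content cancels almost
# completely (other frequencies would be comb-filtered to varying degrees).
uncompensated = sig + delayed

# Automatic delay compensation time-aligns the paths, so they sum constructively.
compensated = sig + sig
```

Reverbs escape this because their output is already a diffuse, delayed signal: nudging the pre-delay down by the plug-in's latency restores the intended timing.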

Thursday, August 28, 2014

Frankfurt Musikmesse 2008: SSL Duende PCIe

Q. How can I warm up my recording without using EQ?

Sound Advice : Recording

I've put a lot of effort into creating and editing a recording of solo mandolin — played quite slowly — and although I like the final result a lot, on consideration the tone is too trebly and cold, almost like a photograph with too sharp a resolution. A friend mentioned he thought I could perhaps 'warm it up' using compression, perhaps of a type designed for vocals. Can you give me some guidance on how best I might do this? Of course, I realise I can use EQ, but would specifically be interested in any thoughts on how compression/limiting could be used on an existing take to get a warmer result. I've used Logic and the recording is clear, undistorted, and free from ambient sound.

Simon Evans via email

The Advanced Settings panel in Logic's built-in Compressor plug-in contains side-chain equalisation facilities that can be very useful if you're trying to sensitise (or desensitise!) the compressor to a mandolin's picking transients.

SOS contributor Mike Senior replies: There are ways to warm up a mandolin sound subjectively using compression, although none of them are likely to make as big an impact as EQ. Fast compression may be able to take some of the edge off a mandolin's apparent tone, for instance, assuming the processing can duck the picking transients independently of the note-sustain elements. There are two main challenges in setting that up. Firstly you need to have a compressor that will react sufficiently quickly to the front edges of the pick transients, so something with a fast attack time makes sense. Not all of Logic's built-in compressor models are well-suited to this application, so be sure to compare them when configuring this effect; instinctively I'd head for the Class A or FET models, but it's always going to be a bit 'suck it and see'. The second difficulty will be getting the compressor not to interfere with the rest of the sound. The release-time setting will be crucial here: it needs to be fast enough to avoid pumping artifacts, but not so fast that it starts distorting anything in conjunction with the attack setting. Automating this compressor's threshold level may be necessary if there are lots of dynamic changes in the track, for similar reasons. Applying some high-pass filtering to the compressor's side-chain (open the Logic Compressor plug-in's advanced settings to access side-chain EQ, and select the 'HP' mode) may help too, because the picking transients will be richer in HF energy than the mandolin's basic tone.
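To see why high-pass filtering the side-chain helps, consider what the detector path 'hears'. The sketch below uses an invented one-pole high-pass filter in Python with illustrative cutoff and test tones; it is not Logic's actual implementation, just the principle:

```python
import numpy as np

def onepole_highpass(x, sr=48000, cutoff=4000.0):
    """Simple one-pole high-pass, used only in the detector (side-chain) path."""
    rc = 1.0 / (2 * np.pi * cutoff)
    alpha = rc / (rc + 1.0 / sr)
    y = np.zeros_like(x)
    for n in range(1, len(x)):
        y[n] = alpha * (y[n - 1] + x[n] - x[n - 1])
    return y

sr = 48000
t = np.arange(480) / sr
body_tone = np.sin(2 * np.pi * 200 * t)    # sustained low-mid body resonance
pick_energy = np.sin(2 * np.pi * 8000 * t)  # stand-in for HF-rich pick transients

# The filtered detector barely registers the 200 Hz tone but passes most of
# the 8 kHz content, so gain reduction is triggered mainly by the transients.
detector_low = onepole_highpass(body_tone)
detector_high = onepole_highpass(pick_energy)
```

In other words, the audio path is untouched; only the signal that drives the gain-reduction decisions is filtered, which is what makes the compressor more (or less) sensitive to the pick spikes.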

Another way to apparently warm up a mandolin is to take the opposite approach: emphasise its sustain character directly while leaving the pick spikes alone. In a normal insert-processing scheme, I'd use a fast-release, low-threshold, low-ratio (1.2:1 to 1.5:1) setting to squish the overall dynamic range. Beyond deciding on the amount of gain reduction, my biggest concern here would be choosing an attack time that avoided any unwanted loss of picking definition. In this case, shelving a bit of the high end out of the compression side-chain might make a certain amount of sense if you can't get the extra sustain you want without an unacceptable impact on the picking transients.
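The arithmetic behind a low-ratio setting is simple: above the threshold, the excess level in decibels is divided by the ratio. A small illustrative Python function, with the threshold and ratio values chosen as examples in the range suggested above:

```python
def output_level_db(input_db, threshold_db=-30.0, ratio=1.3):
    """Static compressor curve: above the threshold, the excess is divided by the ratio."""
    if input_db <= threshold_db:
        return input_db
    return threshold_db + (input_db - threshold_db) / ratio

# With a low threshold and a gentle 1.3:1 ratio, a peak 20 dB over the
# threshold only loses about 4.6 dB, squeezing the whole range softly.
gain_reduction = 20 - 20 / 1.3
print(round(gain_reduction, 1))  # 4.6
```

That gentleness is the point: the sustain between pick spikes comes up relative to the peaks without the processing announcing itself.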

Alternatively, you might consider switching over to a parallel processing setup, whereby you feed a compressor as a send effect, and then set it to more aggressively smooth out all the transients. The resulting 'sustain-only' signal can then be added to the unprocessed signal to taste, as long as you've got your plug-in delay compensation active to prevent processing delays from causing destructive phase-cancellation. Using an analogue-modelled compressor in this role might also play further into your hands here, as analogue compressors do sometimes dull the high end of the signal significantly if they're driven reasonably hard, giving you, in effect, a kind of free EQ.    

Frankfurt Musikmesse 2008: SM Pro Audio V-Box

Wednesday, August 27, 2014

Frankfurt Musikmesse 2008: SM Pro Audio V-Pedal

Q. Are some analogue signal graphs misleading?

Sound Advice : Mixing

I read your feature about 'Digital Problems, Practical Solutions' (/sos/feb08/articles/digitalaudio.htm), which said that digital audio can capture and recreate analogue signals accurately, and that the 'steps' on most teaching diagrams are misleading. Does that mean that the graph should really show lines, or plot 'x's, instead of looking like a standard bar-graph?

Remi Johnson via email

SOS Technical Editor Hugh Robjohns replies: Good question! The graphs in that article are accurate as far as they go, but offer a very simplified view of only one part of the whole, much more complex, process.

When an analogue signal (the red line on Graph 1: Sample & Hold) is sampled, an electronic circuit detects the signal voltage at a specific moment in time (the sampling instant) and then holds that voltage as constant as it can until the next sampling instant. During that holding period the quantising circuitry works out which binary number represents the measured sample voltage. This, not surprisingly, is called a 'sample and hold' process, and that's what that diagram is trying to illustrate.

Graph 1: Sample & Hold

So the sampling moment is, theoretically, an instant in time, best represented on the graph as a thin vertical line at the sample intervals (the blue lines in the picture Graph 1: Sample & Hold), but the actual output of the sample and hold process is the grey bar extending to the right of the blue line.
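The sample-and-hold staircase is easy to sketch numerically: each measured value is simply held until the next sampling instant. A small NumPy illustration, with a deliberately coarse invented sample rate so the steps are visible:

```python
import numpy as np

sr = 8                                    # samples per second, coarse on purpose
t_fine = np.linspace(0, 1, 801)           # the 'analogue' time axis
analogue = np.sin(2 * np.pi * t_fine)     # the red line in Graph 1

# Sampling instants (the thin blue lines) and the measured voltages:
sample_times = np.arange(0, 1, 1 / sr)
samples = np.sin(2 * np.pi * sample_times)

# Zero-order hold: each sample's voltage is held flat until the next
# instant, producing the grey staircase bars the diagram shows.
indices = np.minimum((t_fine * sr).astype(int), sr - 1)
held = samples[indices]
```

Plotting `held` against `analogue` reproduces the familiar staircase picture; the rest of the answer explains why that staircase is not what comes out of a real converter.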

However, the key to understanding sampling is understanding the maths behind that theoretical sampling 'instant', and that means delving into the maths of 'sinc' (sin(x)/x) functions, which is the time-domain response of a band-limited signal sample. At this point most musicians' eyes glaze over…

Graph 2: Two Sinc Functions

As we know, the measured amplitude of each sample from an analogue waveform is represented by a binary number in the digital audio system. When reconstructing the analogue waveform that number determines the height of the sinc function.

The important point is that we are not just creating a simple 'pulse' of audio at the sample point, because the sinc signal actually comprises a main sinusoidal peak at the sampling instant (and of the required amplitude), plus decaying sine wave 'ripples' that extend (theoretically for ever) both before and after that central pulse. The reconstructed analogue waveform is the sum of all the sinc functions for all the samples.

Graph 3: 3kHz Sinc Addition

The clever bit is that the points where those decaying sinc ripples cross the zero line always occur at the adjacent sampling instants. This is shown in the next diagram (Graph 2: Two Sinc Functions) where, for simplicity, just two sample sinc functions are shown for samples 23 (red) and 27 (blue). You can see that at the intermediate sample points (26, 25, 24 and so on) the sinc functions are always zero.

That means that the ripples don't contribute to the amplitude of any other sample, but they do contribute to the amplitude of the reconstructed signal in between the samples, with the adjacent samples' sinc functions having the greatest influence and the more distant samples contributing progressively less. This is shown in the next diagram (Graph 3: 3kHz Sinc Addition), in which the sinc functions of a number of adjacent samples are shown; summed together, they produce the dotted line, which is a sampled 3kHz sine waveform.
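The summation those diagrams show can be sketched directly: place one sinc function at each sample position, scale it by the sample's value, and add them all up. A NumPy illustration (note that `np.sinc(x)` computes sin(πx)/(πx), which is exactly the function described above; the 64-sample window is an invented example, so the finite sum is only an approximation of the ideal infinite one):

```python
import numpy as np

sr = 48000
freq = 3000.0
n = np.arange(64)
samples = np.sin(2 * np.pi * freq * n / sr)   # a sampled 3 kHz sine

def reconstruct(x, samples):
    """Evaluate the reconstructed waveform at position x (in sample units)
    by summing one amplitude-scaled sinc function per sample."""
    k = np.arange(len(samples))
    return np.sum(samples * np.sinc(x - k))

# At a sampling instant, every other sinc is at a zero crossing, so the
# sum returns that sample's value exactly.
at_sample = reconstruct(20.0, samples)

# Halfway between samples, all the ripples contribute, and the sum lands
# very close to the original continuous sine wave.
midpoint = reconstruct(20.5, samples)
```

This is why the converter's output is the smooth dotted curve of Graph 3 rather than a staircase: the reconstruction filter is, in effect, performing this summation.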

These last two diagrams have been borrowed from a superb paper by Dan Lavry (of Lavry Engineering), which explains sampling theory extremely well, and can be found here: www.lavryengineering.com/documents/Sampling_Theory.pdf.    

Frankfurt Musikmesse 2008: Celemony Direct Note Access