Welcome to No Limit Sound Productions

Company Founded
2005
Overview

Our services include Sound Engineering, Audio Post-Production, System Upgrades and Equipment Consulting.
Mission
Our mission is to provide excellent quality and service to our customers. We offer customized service.

Wednesday, September 30, 2015

Q Can I feed 16-bit digital audio into a 24-bit digital device via S/PDIF?

Sound Advice : Mixing



Hugh Robjohns

If a device, such as this Boss DR-880, can only output 16-bit digital audio over S/PDIF, you can still hook it up to a modern 24-bit audio interface — but the interface must slave to the other device’s clock signal.

Can I link the S/PDIF out of my Boss DR-880 to the S/PDIF input of my Focusrite Scarlett 8i6 audio interface and get good results? The DR-880 output is at 16-bit resolution but the Focusrite’s S/PDIF input operates at 24-bit. Do I have to match the bit rate [or ‘word length’] as well as the sample rate? Both units are set to 44.1kHz. When I make this connection, I have nothing at all showing in my Focusrite Scarlett’s Mix Control software: nothing shows in the metering at all, no matter what the output level is.






SOS Forum post



SOS Technical Editor Hugh Robjohns replies: Yes, you can. The S/PDIF interface format always carries 24 bits of audio data and the 16-bit signal from the DR-880 is automatically padded out with zeros (effectively) in the bottom eight bits. That’s definitely not the problem.
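To picture what ‘padded out with zeros’ means numerically, here is a tiny Python sketch. It is purely illustrative: it shows the numeric relationship between a 16-bit sample and the 24-bit audio word, not the full S/PDIF subframe (which also carries channel-status, user and parity bits).

```python
def pad_16_to_24(sample_16bit: int) -> int:
    """Left-justify a signed 16-bit sample in a 24-bit audio word.
    The 16 significant bits sit at the top; the bottom 8 bits are zero."""
    assert -32768 <= sample_16bit <= 32767
    return sample_16bit << 8

# A full-scale positive 16-bit sample becomes 0x7FFF00 in the 24-bit word:
print(hex(pad_16_to_24(0x7FFF) & 0xFFFFFF))   # -> 0x7fff00

# Going the other way (24-bit source into a 16-bit destination) is not just a
# shift: the bottom 8 bits must be truncated or, preferably, dithered away.
```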



Having the same sample rate is important, but digital equipment can only have one clock source. Everything else must slave to that, so a common cause for an apparent lack of input signal is when the interface is running from its internal clock rather than slaved to the incoming signal. In this situation, the external input is non-synchronous with the interface’s internal clock and thus unusable. Make sure that the 8i6 is configured to slave to the external input from the DR-880, which becomes the wordclock master for your interface/computer. Other possibilities include a ground loop between the two devices, or insufficient signal level from the source. Digital ground loops don’t cause audible hums but often result in glitchy or non-functioning connections, while a poor-quality or overly long S/PDIF cable can degrade the signal level substantially, rendering it unusable. If available, an optical S/PDIF connection would avoid ground loops completely, as would an in-line digital transformer in a coaxial connection. Another reason could be incompatible channel-status bits, but although that is much harder to resolve it is also extremely rare!  


Monday, September 28, 2015

Q Can you coach a vocalist to sound more ‘breathy’?

Sound Advice : Recording



Mike Senior



I’m working on an acoustic track and would like my vocalist to sing backing vocals in a more ‘breathy’ way — in other words, as per Messrs Gibb, although not in quite such a high register! Are you able to offer some vocal coaching tips that I can ask him to try?

What a singer hears in his headphones can affect how he delivers his vocal part — and that’s something you can use to your advantage.




SOS Forum post



SOS contributor Mike Senior replies: If he’s having trouble shifting into a breathy delivery style, it might help for him to think ‘whispering’ while singing. This doesn’t necessarily mean singing as quietly as possible, because it’s possible to sing breathily at a reasonable level, in much the same way a speech actor can ‘stage whisper’. To state the blindingly obvious, though, it does require more breath to sing like this, so the singer may well need to rethink his breathing patterns to allow for more frequent lung refills. In addition, you can help from an engineering perspective by giving the singer a bit more vocal level than normal in his cans. This serves a dual purpose: he can hear what he’s singing without having to move out of ‘breathiness mode’; and he’ll be discouraged from opening up too much in terms of level (which is likely to reduce the breathiness) because it’ll otherwise feel over-obtrusive in his foldback mix.


Friday, September 25, 2015

Q Why do my power supplies spark when first plugged in?

Sound Advice : Recording


Hugh Robjohns



I have three laptops in the house, all of which produce various sparks and flashes at the mains end when plugged into the wall. It’s always bothered me — in terms both of health and of the safety of my computers and audio gear. Can this be ‘right’ or is there a problem with the power supplies, or even my mains supply?



Switched-mode power supplies can cause large inrushes of current when power is first switched on. By plugging them into the wall socket before you switch on the mains supply, you’ll at least prevent the scarily visible sparks, even if they’re probably happening behind the scenes!

SOS Forum post






SOS Technical Editor Hugh Robjohns replies: Switched-mode power supplies, like those ‘line lumps’ used on laptops (and almost everything else, including plenty of audio processors) usually have a pretty heavy in-rush current when first exposed to the mains supply. The magnitude of the in-rush depends on the design and power rating of the SMPS, but a 24V / 10A supply could easily pull 30 Amps or more for a fraction of a second — hence the sparks. It’s quite common to see and, sadly, is even considered normal. The sparks act as RF interference generators, inducing transient clicks on poorly shielded audio equipment and often imposing a brief voltage pulse in the local mains earth wiring, which can cause glitches in susceptible digital equipment of all kinds!



If you plug in with the socket switched off and then switch on at the wall, at least you won’t see the sparks any more (although they are probably still there across the switch terminals), but I’ve known some el-cheapo knock-off wall-socket switches become welded closed because of excessive in-rush currents!  


Tuesday, September 22, 2015

Q Why would I want to bounce out mixes for referencing?

Sound Advice : Mixing




Mike Senior

While dedicated plug-ins such as Melda's MCompare, Sample Magic's Magic AB and Meterplugs' Perception all, in slightly different ways, provide useful means of making comparisons on the fly, they're probably best used as a complement to -- rather than a substitute for -- bouncing out and referencing your mixes in the traditional way.




In May’s Mix Rescue, Mike Senior talks about how he bounced out a mix any number of times to compare it to his references. I understand the purpose of referencing, but what I don’t understand is why he bounced out the mix. Why not use something like Magic AB? Wouldn’t that be easier and faster? It would be possible to put Magic AB on the master bus and have everything in the mix feeding into a pre-master bus (which then went to the master bus) so that you could adjust EQ and stuff on the pre-master to tweak it into line with the references. No? I can see just the one advantage of doing it his way — you’d have a series of bounces that you could compare to see if you were tweaking in the right direction.



SOS Forum post



SOS contributor Mike Senior replies: This is an interesting question that I’ve been asked on a number of occasions, but I’m not sure I’ve ever written down my answer to it before! I realise that it’s perfectly possible to compare a mix in progress with commercial releases using something like Magic AB, Melda MCompare, or Meterplugs Perception — or indeed just using a multi-channel switcher plug-in within Reaper, which is my own normal method. However, I do still prefer to bounce out my mix as a WAV for referencing purposes most of the time, for several reasons — although not, funnily enough, for the reason you suggested!

On a practical level, I like the flexibility the DAW offers in terms of editing out and looping the most relevant pieces of each reference track, and the way it lets me easily adjust the time offset between my mix file and each reference track, something that I’ve not found as straightforward in the referencing plug-ins. I also often experiment during referencing to see what impact loudness processing might have on my mix, but mastering-style processors can cause CPU or latency-compensation problems when applied to an already heavily loaded mix project, and I can do without glitches or crashes while mixing. Besides, anything that encourages people to apply mastering processing to their mix project is a bit hazardous in my view, because I’ve seen a lot of people come unstuck that way, effectively trying to use mastering as a quick fix for complex mix problems.



However, the main reason I like to bounce out the mix is purely psychological. You see, when I reference using a bounce-down in a separate project, I can’t change the mix while I’m listening, so it encourages me to take decisions much more rigorously before acting on them. In other words, I’m reminded to cross-check each decision across several different references and several different listening systems before actually tweaking any mix settings. It’s enormously tempting when referencing within your mix project to hear, say, that the hi-hat’s too loud in comparison with one of your references over your main monitors and then to immediately charge off and change it, without checking whether that hi-hat’s also too loud compared with another of your references, or on a different listening system. Referencing within the mix project is therefore all too often a recipe for tail-chasing, in my experience, and I prefer to remove that temptation from my workflow.



The other psychological advantage of the ‘separate reference project’ approach for me is that it makes me more confident of when the mix is finished. At each referencing iteration, I’ll build up a properly cross-checked list of tweaks I want to do, and then check the effectiveness of those tweaks at the next iteration. Once everything’s crossed off the list, I can feel pretty confident of signing off the mix. If you reference in a less structured ‘hunt and peck’ kind of way, I find it’s a lot trickier to know when you’re actually done.



The last thing to say is that while referencing I prefer to step back mentally from the technical details of a mix and listen more like a typical punter, which is far easier to do when I’m listening to a bounce-out. Because I can’t change anything, my whole mindset changes. Thanks to pure paranoia, I actually do most of my bounce-outs in real time, and I’m constantly amazed at how often I’ll spot some glaring oversight even during the bounce-down itself that I haven’t noticed for the last five hours of mixing, simply because of the change in mental perspective that occurs once I think “now I’m bouncing down the mix”. Also I’m more likely to transport the bounce-down to the car, the office PC (with my trusty old Goodmans multimedia grotboxes!), the iPod or wherever.



Sure, you could work around all of these issues when using a referencing plug-in on the mix project, but you’ll need a whole lot more self-discipline than I have, frankly! And besides, I think the little breaks you’re forced to have while bouncing things out and switching projects are good for perspective in their own right, but that might be the Luddite in me speaking — ah, the high jinks we used to get up to while the tape was rewinding...  

Saturday, September 19, 2015

Q What’s the best way to ‘warm up’ a mandolin recording?

Sound Advice : Mixing


Mike Senior



I’ve put a lot of effort into creating and editing a recording of a solo mandolin. Although I like the final result a lot, on reflection the tone seems too trebly and cold, almost like a photograph with too sharp a resolution. A friend suggested I might ‘warm it up’ using compression. Can you give me some guidance on how best I might do this, using Logic Pro’s compressors? Of course, I realise I can use EQ, but would specifically be interested in any thoughts on how compression/limiting could be used on an existing take to get a warmer result. My recording is clear, undistorted and free from ambient sound.



SOS Forum post

The advanced settings panel in Logic’s built-in Compressor plug-in contains side-chain equalisation facilities which can be very useful if you’re trying to sensitise (or desensitise!) the compressor to a mandolin’s picking transients.




SOS contributor Mike Senior replies: There are ways to warm up a mandolin sound subjectively using compression, although none of them are likely to make as big an impact as EQ. Fast compression may be able to take some of the edge off a mandolin’s apparent tone, for instance, assuming the processing can duck the picking transients independently of the note-sustain elements. There are two main challenges in setting that up. Firstly you need to have a compressor which will react sufficiently to the front edges of the pick transients, so something with a fast attack time makes sense. Not all of Logic’s built-in compressor models are well-suited to this application, so be sure to compare them when configuring this effect — instinctively I’d head for the Class-A or FET models, but it’s always going to be a bit ‘suck it and see’. The second difficulty will be getting the compressor not to interfere with the rest of the sound. The release-time setting will be crucial here: it needs to be fast enough to avoid pumping artifacts, but not so fast that it starts distorting anything in conjunction with the attack setting. Automating the compressor’s threshold level may be necessary if there are lots of dynamic changes in the track, for similar reasons. Applying some high-pass filtering to the compressor’s side-chain (open the Logic Compressor plug-in’s advanced settings to access side-chain EQ, and select the ‘HP’ mode) may help too, because the picking transients will be richer in HF energy than the mandolin’s basic tone.
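For anyone who fancies experimenting outside Logic, here is a rough Python/numpy sketch of the general idea rather than Logic’s actual Compressor algorithm: a fast-attack compressor whose level detector listens to a high-passed copy of the signal, so the gain reduction is driven mainly by the HF-rich picking transients. All the parameter values are merely illustrative starting points.

```python
import numpy as np
from scipy.signal import butter, lfilter

def sidechain_hpf_compressor(x, sr, hpf_hz=2000.0, threshold_db=-30.0,
                             ratio=4.0, attack_ms=0.5, release_ms=60.0):
    """Simple feed-forward compressor with a high-pass-filtered side-chain,
    so picking transients drive the gain reduction more than note sustain."""
    # Detector path: high-passed copy of the input
    b, a = butter(2, hpf_hz, btype='highpass', fs=sr)
    det = np.abs(lfilter(b, a, x))

    # One-pole envelope follower with separate attack and release times
    atk = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = np.zeros_like(x)
    e = 0.0
    for n, s in enumerate(det):
        coeff = atk if s > e else rel
        e = coeff * e + (1.0 - coeff) * s
        env[n] = e

    # Gain computer: downward compression above the threshold
    level_db = 20.0 * np.log10(env + 1e-12)
    over_db = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over_db * (1.0 - 1.0 / ratio)
    return x * 10.0 ** (gain_db / 20.0)

# Toy signal: a short 'pick' click on top of a decaying 440Hz 'sustain'
sr = 44100
t = np.arange(sr) / sr
note = 0.3 * np.sin(2 * np.pi * 440 * t) * np.exp(-3 * t)
note[:40] += 0.9 * np.hanning(40)
processed = sidechain_hpf_compressor(note, sr)
```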



Another way to apparently warm up a mandolin is to take the opposite approach: emphasise its sustain character directly while leaving the pick spikes alone. In a normal insert-processing scheme, I’d use a fast-release, low-threshold, low-ratio (1.2:1-1.5:1) setting to squish the overall dynamic range. Beyond deciding on the amount of gain reduction, my biggest concern here would be choosing an attack time that avoided any unwanted loss of picking definition. In this case, shelving a bit of the high end out of the compression side-chain might make a certain amount of sense if you can’t get the extra sustain you want without an unacceptable impact on the picking transients.



Alternatively, you might consider switching over to a parallel processing setup, whereby you feed a compressor as a send effect, and then set it to more aggressively smooth out all the transients. The resulting ‘sustain-only’ signal can then be added to the unprocessed signal to taste (as long as you’ve got your plug-in delay compensation active to prevent processing delays from causing destructive phase cancellation). Using an analogue-modelled compressor in this role might also play further into your hands here, as analogue compressors do sometimes dull the high end of the signal significantly if they’re driven reasonably hard, giving you in effect a kind of free EQ.

  

Thursday, September 17, 2015

Q What will give my electronic tracks real low-end impact?

Sound Advice : Mixing



Mike Senior


I’m an aspiring electronic musician, and am hoping to create some stuff with real shaking low-end impact, but the interaction between kick and bass still puzzles me. In particular, a lot of the music I listen to has this kind of low boomy rumble that occurs every time the kick hits, which creates the impression that the headphones or speakers are shaking almost like they are made of jelly, even when the volume is turned down — you can hear this in tracks such as Clark’s ‘Outside Plume’ at 1:22 or Animal Collective’s ‘My Girls’ after 1:40, for example. Is that some sort of low sine wave with volume modulation or something? Also, I’ve heard about the idea of carving a space for the bass out of the kick, and adding some high to the bass to make sure it is heard, but I’m not sure whether that applies here. Which would get the low end here: the kick or the bass rumble? I am almost imagining that the kick may be getting a bit of the higher low end at about 160-240 Hz, whereas that low boom sound would get the real low 50-60 Hz area, but I really have no idea. Can someone help me figure out how I might replicate this sort of thing, since I’ve noticed that other amateur attempts tend to just sound like a muddy thump with a low, ‘blank’ sine wave on top?

SOS Forum post

While these two tracks both offer an impressive-sounding bottom end, a little sleuth work in your DAW will reveal that the low frequencies are actually managed slightly differently.




SOS contributor Mike Senior replies: I had a close listen to both the examples you mentioned, and the difficulty with trying to pick things apart after the fact is that the kick and bass always appear to be playing at the same time, so it’s a bit tricky to reverse-engineer the sound. Nevertheless, my best guess is that your general conclusion is correct, and that you need to concentrate your bass part primarily into the sub-70Hz region, so it stands to reason that focusing the kick drum further up the spectrum would be sensible from an arrangement/mixing perspective. Keeping the kick’s decay quite short (at least in the bottom couple of octaves) will also usually help avoid the low end of the mix getting muddy where there’s heavy-duty low end coming from the bass. However, there’s quite a bit more about these two commercial productions that might also be responsible for what you’re hearing, so let’s get down to some specifics.



Turning to the Clark track first, for a start the bass line itself seems to be a bit unusual on two counts. The first is that it appears to have quite a well-defined attack phase, so you get a real low-frequency pulse whenever the kick drum hits. As a result of this, I’m guessing that the kick drum would feel quite a lot less heavy if you heard it on its own, simply because you’re perceiving that bass pulse as an element of the kick-drum sound. That said, it seems to me that there are also some less prominent ‘static’ frequency peaks at around 50Hz and 100Hz which may well be on account of the kick drum itself. This makes a good deal of sense, given that in general I’d be loth to entirely filter out the kick’s sub-70Hz region in any club style, however much I might otherwise tailor its low-end frequency contour, simply because that helps keep the punchiness of the low end a bit more consistent if the bass line drops out or changes pitch.



The second unusual aspect of this line is the way the bass synth’s pitch can be heard sweeping around between about 35 and 90 Hz. This certainly helps to avoid it sounding like a simple sine wave, and I imagine it may also be the source of the subjectively ‘rubbery’ sound that you mention. The upper registers of the line should also be easily audible on a lot of smaller systems, but you’ll notice that if you stick a high-pass filter across the mix at 100Hz, precious little of the bass remains — so I don’t think its audibility is coming from any serious helping of mid-range or high-frequency energy. If the synth waveform isn’t a sine wave, then I’d suggest it has at least been very tightly filtered above its fundamental frequency.
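If you want to run the same 100Hz high-pass ‘sleuthing’ test on a reference track yourself, something along the following lines would do it. The file name is only a placeholder, and the soundfile/scipy combination is just one convenient choice:

```python
import soundfile as sf                      # pip install soundfile
from scipy.signal import butter, sosfilt

audio, sr = sf.read('reference_track.wav')  # placeholder file name

# 4th-order Butterworth high-pass at 100Hz, applied to every channel
sos = butter(4, 100.0, btype='highpass', fs=sr, output='sos')
filtered = sosfilt(sos, audio, axis=0)

sf.write('reference_track_hpf100.wav', filtered, sr)
# Whatever bass you can still hear in the result is coming from energy above
# 100Hz (mid-range harmonics), not from the sub region itself.
```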



Where the Animal Collective tune differs most clearly from Clark’s, to my ears at least, is that ‘My Girls’ has considerable harmonic information from the bass higher up the frequency spectrum, so the 100Hz high-pass filter test doesn’t kill it in the same kind of way. Otherwise, though, the two tracks have a lot in common: a bass line that, if not quite as wild, still provides a good deal of movement and the odd portamento; and a kick with tightly controlled, fast decaying low-end.



Finally, what’s also worth mentioning is that the kick/bass waveforms of these mixes appear to have been clipped (presumably at some point in the bus-processing or mastering chains), and this will also tend to make them a little more audible on smaller systems by virtue of the higher-frequency distortion products, albeit at the potential expense of a reduction in relative low-end power.  


Wednesday, September 16, 2015

Q Why put preamps in the live room?

Sound Advice : Miking



Hugh Robjohns

Eric Valentine, whose long list of production/engineering credits includes work with Queens Of The Stone Age, Slash, Maroon 5, Joe Satriani... and many more names you’ve probably heard of!








I recently came across an Eric Valentine video, in which he mentions how he keeps his preamps in the live room, close to the mics, and then runs line level to the converters which are situated elsewhere in his studio. My question is this: is there really a valid reason for doing this on runs which I imagine to be less than 100 feet? Will there even be a measurable difference in the converted signal, let alone an audible one?



SOS Forum post



SOS Technical Editor Hugh Robjohns replies: I’m afraid the answer (as is so often the case) is “it depends”! From a strict engineering standpoint, it makes sense to keep the low-level signal paths as short as possible, to minimise any degradation from interference and noise. In most recent audio installation jobs (predominantly in the broadcast radio and TV area, but also for stage and theatre work), which almost exclusively employ digital consoles, the mic preamps and converters have been placed in racks in the studios or on the stages, as close as practical to the places where the mics are used. Of course, practicality and convenience are not affected by the use of remote preamps because the technology includes comprehensive remote control.



However, in traditional analogue music-studio applications, the standard for over 50 years has been to install the mic preamps in the control room, usually inside the mixing console; and with properly engineered microphones, cables and preamps, history has shown us that it really doesn’t make any significant difference to the sound if the cable is 10 or 50 feet long. Subtle losses might start to become noticeable in critical A/B comparisons when the cable length is significantly greater than 100 feet, but I’ve often recorded at the end of a 50-metre (about 165 feet) multicore cable, and no one ever complained about poor sound quality from the mics!



As is so often the case, the reason for the folklore about recording with preamps in the studio really comes from the use of ‘quirky’ microphones and preamps which don’t stick to the established microphone electrical interface requirements, and even more so when using ‘esoteric’ cable types. In such cases, longer cable lengths could well produce quite audible sound differences — but only because of improper interface configurations which simply don’t work correctly (usually because of interference or inappropriate impedances).



Of course, there are plenty of respected recording engineers who swear by the quality improvement achieved by keeping preamps near the mics, Eric amongst them, and if working this way is practical, why not? It certainly can’t harm the signal quality to work this way and, potentially, it might lead to a worthwhile (albeit very small) improvement in retained detail. The argument against is usually one of practicality and inconvenience: it’s a royal PITA to have to run back and forth between control room and studio setting or adjusting mic gains!  


Monday, September 14, 2015

Q Can Haas delays be mono-compatible?

Sound Advice : Mixing



Matt Houghton

Mid/Sides-based stereo-width enhancement, such as that offered by Brainworx’s bx_Control plug-in, doesn’t suffer when heard in mono and is often a better option than using a Haas delay — but not always!








I’ve been using the Haas delay effect to add some nice width to my guitar parts. It sounds great, but I’ve noticed that when I listen in mono my guitars pretty much disappear from the mix. I’d read that this approach can cause problems, but is there any way at all to make this sort of thing more mono-friendly?



Ben Laming, via email.



SOS Reviews Editor Matt Houghton replies: The short answer is ‘no’: although Haas delays often sound very impressive when heard in stereo, the parts on which they’re used will disappear or, at best, change in tone and level when mixed to mono, due to phase cancellation. And while increasing the delay time outside the 5-35 ms Haas region can remedy the cancellation problems, it will negate the effect that you found pleasing in the first place: you’ll hear a discrete echo, not a single sound. The longer answer is more encouraging because there are alternative, mono-compatible ways to add width to mono or stereo material and, despite their inherent limitations, Haas delays can still be useful on occasion.
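A quick back-of-envelope calculation shows why the mono fold-down suffers so badly: summing a part with a Haas-range delayed copy of itself creates a comb filter whose notches march right through the instrument’s range. The 15ms figure below is just an example value.

```python
delay_ms = 15.0                          # a typical Haas-range delay

# The mono sum of a signal and its delayed copy is a comb filter:
# first notch at 1/(2*delay), then a notch every 1/delay thereafter.
first_notch_hz = 1.0 / (2 * delay_ms / 1000.0)
notch_spacing_hz = 1.0 / (delay_ms / 1000.0)
print(f"first notch ~{first_notch_hz:.0f} Hz, then every ~{notch_spacing_hz:.0f} Hz")
# -> roughly 33 Hz, then every 67 Hz: dozens of deep cancellations across a
#    guitar's spectrum, which is why the part thins out or vanishes in mono.
```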



A common rookie mistake is to use the Haas effect on critical elements of the mix. While this can yield an instant ‘wow factor’, you must remember that many modern DAB radios are mono, and FM car radios revert to mono when they receive a weak signal. If the sound in question is critical to your mix balance, you’ll end up with a mix that sounds very strange to a lot of people!



If you want to enhance the sense of width for elements that are important to the success of your mix, you’re much better off experimenting with one or more of the following techniques: an LCR approach to panning in your mix; reverb; stereo modulation effects, such as chorus; real or artificial double-tracking; and Mid/Sides-based stereo-width enhancement (M/S). M/S works very well for stereo material and it’s inherently mono-compatible. It doesn’t work on mono parts, but can be used in conjunction with another technique to widen the mono part. There are ‘stereoising’ plug-ins that use comb filtering to create two parts, panning these left and right, and these would be an option. Personally, I’ve enjoyed better results with double tracking or early-reflections reverb patches.
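For what it’s worth, here is a minimal sketch of the Mid/Sides width idea applied to a stereo source (this is the generic technique, not Brainworx’s actual algorithm). Because only the Sides component is rescaled, the mono sum is left completely untouched.

```python
import numpy as np

def ms_widen(left, right, width=1.5):
    """Rescale the Sides signal to change stereo width.
    width=1.0 leaves the image alone; >1 widens, <1 narrows, 0 collapses to mono.
    The Mid (mono sum) is unchanged, so the result stays mono-compatible."""
    mid = 0.5 * (left + right)
    sides = 0.5 * (left - right) * width
    return mid + sides, mid - sides        # decode back to left/right

# Quick sanity check with arbitrary 'stereo' material:
rng = np.random.default_rng(0)
L, R = rng.standard_normal(1000), rng.standard_normal(1000)
L2, R2 = ms_widen(L, R, width=1.8)
assert np.allclose(L2 + R2, L + R)         # mono sum identical before and after
```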



So where can you use the Haas effect without fear of calamity? Mono compatibility may be important but the whole point of making stereo mixes is that you can make them sound better! If you limit your Haas-delay experiments to elements that aren’t so critical to the overall balance and success of your mix, then your mix shouldn’t suffer unduly when heard in mono. Think not in terms of what you risk losing in mono, but what you stand to gain in stereo: desirable but not essential ‘fairy dust’ that helps you make the most of the extra mix real-estate offered by stereo.



You might apply it to, for example, some extra backing-vocal layers, or electric guitar doubles, so that they sound wider in stereo. You won’t miss these in the more crowded mono mix, in which you can still hear the original parts. Or you might try using the trick on a dedicated reverb or delay return that you’re similarly happy to junk for the mono version. In any individual case, you’ll find that the effect might or might not work well — just use your ears and avoid critical parts, and your mix shouldn’t suffer.


Friday, September 11, 2015

Q How consistent are different examples of the same mic?

Sound Advice : Miking



Hugh Robjohns

Mid-priced mics from modern manufacturers, such as these SE Electronics SE5s, tend to be pretty well matched when sold as stereo pairs.












I get the impression that manufacturing tolerances for mid-range and cheaper microphones are not always as tight as one would desire, something that’s reflected in the fact that such microphones need to be individually tested and matched if a ‘matched pair’ is desired. This made me wonder: if I were to try out a mic made by, say, Rode, MXL or SE Electronics, found that I liked it, and then went out and bought that model of mic, might the one I tested and the one I bought sound significantly different?



SOS Forum post



SOS Technical Editor Hugh Robjohns replies: The three companies you cite here are far from being in the lowest echelons of manufacturing tolerances these days, and my limited experience of them is that they are all generally quite good as far as tolerances go, if not quite in the same league as the likes of Neumann, Schoeps or Sennheiser. Different mics of the same model often do sound slightly different when compared in critical A/B listening tests, although the variations tend to be far less with the higher-end manufacturers for the reasons you’ve mentioned. Nevertheless, there are always tolerance limits in any manufacturing process, and two different mics of the same model could easily be at opposite extremes of those tolerances. Whether such inevitable variations (however large or small) are significant for any specific purpose depends on the application’s requirements and user expectations.



In my experience, close matching is only really critically important when using the mics for coincident (or near-coincident) stereo pairs and, to a slightly lesser extent, in spaced stereo pairs. Fortunately, there’s a very simple way to check for the accuracy of matching between any two mics:



1. Mount the two microphones directly above one another in a studio or large-ish, well-damped room, with their capsules as close together as possible and facing forwards. You’ll need an assistant to talk and walk around the mics, so set the mics up so that they are level with the assistant’s mouth. The assistant should remain at least two feet away from the capsules — ideally more if the acoustics allow — to ensure that both mics receive the same voice level at the same time.



2. Connect the mics to a mixer or other recording system, and pan one mic hard left and the other hard right. With someone talking directly in front of the mics (at a sensible distance to ensure both receive the same level), adjust the preamp gains to give precisely the same output from both mics. My preferred way to do this is to set the preamp gains roughly for a sensible output level, then switch the monitoring to mono, flip the polarity of one channel, and fine-tune the gain of one channel only for the deepest cancellation null. (A small numerical sketch of this null-finding idea appears after step 6.)



3. Reset the monitoring to stereo and remove the polarity reversal.



4. With your assistant still talking, their voice should appear to come from a narrow point in the centre of the stereo image. If different frequencies (such as sibilants) appear to flick the image towards one side or the other, the mics’ on-axis frequency responses aren’t well matched. For example, an out-of-tolerance peak in the response of one mic will pull the image towards the side to which it is panned, with the size of the peak determining how much the image shifts. A difference of 4dB is enough to move the image about a third of the way towards the edge.



5. If you have an acceptably stable centre image when talking directly on axis to the mics, ask the assistant to walk and talk slowly in a circle around the mics (facing them at all times). The voice should still remain stable in the centre of the stereo image throughout (because the two mics are facing in the same direction), although the level will fall towards the rear if you’re testing directional mics due to the reduced rearward sensitivity of the microphone polar pattern.



6. Any substantial image shifts towards one side or the other indicate unmatched polar-pattern sensitivity and/or different off-axis frequency responses. Imaging shifts will tend to get worse towards the rear, but such errors are not particularly relevant once you get beyond 90 degrees off-axis (which is the maximum limit for stereo acceptance angles in coincident pairs). However, any movement within the front ±60 degrees region indicates differences that will significantly compromise the stereo imaging if the mics are used in a coincident pair.    
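As mentioned in step 2, here is a small numerical sketch of the ‘deepest null’ gain-matching idea, assuming you have captured the two mic signals as sample-aligned arrays. The least-squares gain below is the mathematical equivalent of trimming one channel until the polarity-flipped mono sum cancels most deeply.

```python
import numpy as np

def best_null_gain(ref, other):
    """Gain to apply to 'other' so that (ref - gain*other) has minimum energy,
    i.e. the trim that gives the deepest cancellation null against 'ref'."""
    gain = np.dot(ref, other) / np.dot(other, other)
    residual = ref - gain * other
    null_depth_db = 10 * np.log10(np.sum(residual**2) / np.sum(ref**2))
    return gain, null_depth_db

# Toy example: 'mic B' is 1dB hotter than 'mic A', plus a little noise.
rng = np.random.default_rng(1)
voice = rng.standard_normal(44100)
mic_a = voice
mic_b = 10 ** (1 / 20) * voice + 0.01 * rng.standard_normal(44100)
gain, depth = best_null_gain(mic_a, mic_b)
print(f"trim mic B by {20 * np.log10(gain):+.2f} dB; null depth {depth:.1f} dB")
```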


Wednesday, September 9, 2015

Q Which modular mic parts are matched?

Sound Advice : Miking



Hugh Robjohns



I often wonder exactly what gets matched when microphones with interchangeable capsules are bought as a stereo pair. Is it the bodies, the capsules, or both? My Rode NT55s have serial numbers on the bodies but not the capsules, and I sometimes lie awake, fretting that I may inadvertently have swapped them over at some point!



SOS Forum post

Many small-diaphragm mics are sold with interchangeable capsules. When selecting pairs of such mics, it’s the capsules that need to be matched — the preamp ‘bodies’ of such mics are usually quite closely matched in any case, and not only on high-end models such as those pictured.




SOS Technical Editor Hugh Robjohns replies: The capsules are the critical elements insofar as matching is concerned. The mechanical construction tolerances affect the overall sensitivity, the precision of the polar response, and the on- and off-axis frequency responses (as described in my answer to the previous question). The bodies, though, only provide an impedance converter and output driver, and the electronic bandwidth should be massively wider than that of the capsule — and thus not a limiting factor in the combined mics’ performance. The only possible tolerance variation in a mic body is of the gain, which is very easy to set quite precisely by design, and any remaining tolerance variations are easily corrected by the preamp/mixer it’s plugged into.



So swapping capsules between bodies shouldn’t affect the stereo matching at all... You can sleep soundly!

  

Monday, September 7, 2015

Q Can you explain Reaper’s Playback Resample mode?

Sound Advice : Recording




Hugh Robjohns

Setting Cockos Reaper’s playback resample mode.





I can’t find good explanations about the ‘Playback Resample Mode’ options available in Cockos Reaper, specifically the difference between the Good (64pt Sinc) and Better (192pt Sinc — Slow) modes. Also, the default track mixing bit depth is ‘64–bit float’. I guess that’s because most CPUs are now 64–bit engines, but how does this setting relate to my outboard converters, which all work at 24-bit?

Michael, via email



SOS Technical Editor Hugh Robjohns replies: The ‘Playback Resample Mode’ options relate to the application of automatic sample–rate conversion when importing an audio file with a different sample rate from that of the current project, for example when importing a 24–bit, 96kHz source file into a 44.1kHz project. When sample–rate conversion is applied to a digital signal, the original audio waveform effectively has to be reconstructed from the existing samples, so that the correct amplitudes at each of the (new) required sample points can be calculated.



‘Sinc’ refers to a mathematical function which is intrinsically involved in reconstructing the original audio waveform from individual samples. In very simple terms, it describes how each sample contributes to the amplitude of the audio waveform between the sample points, both before and after each individual sample. There isn’t really space to get into the mathematics of the sampling theorem here, but if you want to know more I recommend Dan Lavry’s Sampling Theory white paper (http://lavryengineering.com/pdfs/lavry-sampling-theory.pdf).



The important point to note is that the Sinc function looks like an impulse with decaying ripples, which extend, in theory, forever, both before and after each sample, but always with zero amplitude at each sample point. Consequently, these ripples influence the amplitude of the entire reconstructed waveform and need to be taken into account when performing sample–rate conversion.



Calculating the Sinc contributions of every sample for every other sample is not practical in most cases, and so Reaper’s sample–rate conversion process can be optimised for varying levels of accuracy and speed. Performing the calculations for 64 sample points either side of the current sample gives good results, but extending that out to 192 sample points either side is more accurate (it achieves lower noise and distortion). However, it takes much longer because it involves significantly more computation.
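Reaper’s own resampler isn’t published, but a naive windowed-sinc interpolator looks something like the sketch below; the taps argument plays the same role as the 64pt/192pt quality settings (more taps means better rejection of resampling artifacts, at the cost of more computation). It is deliberately slow, loop-based code written for clarity rather than speed.

```python
import numpy as np

def sinc_resample(x, sr_in, sr_out, taps=64):
    """Naive windowed-sinc sample-rate conversion (illustrative only).
    'taps' is the number of input samples considered either side of each
    output point, cf. Reaper's 64pt and 192pt quality options."""
    ratio = sr_out / sr_in
    cutoff = min(1.0, ratio)              # lower the cutoff when downsampling
    n_out = int(len(x) * ratio)
    y = np.zeros(n_out)
    for m in range(n_out):
        t = m / ratio                     # position of this output sample, in input samples
        n0 = int(np.floor(t))
        n = np.arange(n0 - taps + 1, n0 + taps + 1)
        n = n[(n >= 0) & (n < len(x))]
        k = t - n                         # distance of each contributing input sample
        window = 0.5 + 0.5 * np.cos(np.pi * k / taps)     # Hann window over +/-taps
        y[m] = np.sum(x[n] * cutoff * np.sinc(cutoff * k) * window)
    return y

# e.g. a short burst of 96kHz audio dropped into a 44.1kHz project:
sr_in, sr_out = 96000, 44100
time = np.arange(sr_in // 10) / sr_in
tone = np.sin(2 * np.pi * 1000 * time)
converted = sinc_resample(tone, sr_in, sr_out, taps=64)   # or taps=192 for 'Better'
```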



Moving on to the internal ‘64–bit float’ mix engine, this is about maximising the internal dynamic-range capability to accommodate very loud or very quiet signals without degrading them. When a lot of signals are combined, the result is usually much louder than any individual source, so the DAW engine needs additional headroom to cope. In a similar way, changing the level of digital signals often results in ‘remainders’ in the calculation. Additional bits are needed to keep track of these remainders, to avoid degrading the signal while processing.



This additional dynamic-range requirement is achieved in different ways in different systems, and depends on the type of processing being applied. You’ll often see references to double– or triple–precision, for example (where the calculations are done with 48– or 72–bit resolution), and most early DAWs used 32–bit floating–point maths, which gives a notional internal dynamic range of something like 1500dB. Modern computer hardware is designed to work in a 64–bit operating system environment, and a lot of DAW software has followed suit for practical convenience. It just means an even greater internal dynamic-range capability, which makes internal clipping all but impossible and the noise floor of processing distortion impossibly small.



However, audio signals always have to be auditioned in the human world, and our ears and replay equipment can’t accommodate a dynamic range of more than about 120dB. A 24–bit system can (in theory) accommodate a dynamic range of about 140dB, so the industry has standardised on 24–bit interfaces, which are more than sufficient. The implication is that we have to ‘manage’ the (potentially) huge dynamic range signals created inside a DAW to make sure that they fit into the 24–bit dynamic range for real–world auditioning. That’s why DAWs have output meters that show clipping if the internal level is too high; it’s not the 64–bit floating-point signal in the computer that’s clipping, but the external 24–bit converter.
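A toy numerical illustration of that last point, for anyone who finds it counter-intuitive: the floating-point mix bus carries a hugely over-range sum without complaint, and trouble only arises when that sum has to be squeezed into the fixed range of a 24-bit output converter.

```python
import numpy as np

# Sum 32 near-full-scale 'tracks' in floating point: the internal mix engine
# carries the result happily, even though it is way above 0dBFS.
rng = np.random.default_rng(2)
tracks = rng.uniform(-1.0, 1.0, size=(32, 48000))
mix = tracks.sum(axis=0)
print("internal peak: %+.1f dBFS" % (20 * np.log10(np.max(np.abs(mix)))))

# Converting that to 24-bit integers without trimming the level is where the
# clipping actually happens -- exactly what the DAW's output meter warns about.
FULL_SCALE = 2**23 - 1
clipped = np.clip(np.round(mix * FULL_SCALE), -2**23, FULL_SCALE).astype(np.int32)

# Pull the 'master fader' down first and the same mix fits comfortably:
safe = np.clip(np.round(mix / np.max(np.abs(mix)) * FULL_SCALE), -2**23, FULL_SCALE)
```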

Published in SOS July 2015

Friday, September 4, 2015

Q Why does my stereo imaging suffer when I move my head?

Sound Advice : Recording




Hugh Robjohns

The closer you are to your stereo monitors, the smaller the listening sweet spot — and the greater the change in stereo imaging when you move your head.





When I sit at my listening position, at the apex of an equilateral triangle with my monitors, everything sounds OK. I have a good stereo image — some sounds even seem to come from a point wider than the speakers — depth is also perceivable, and things panned centrally really do sound like they are floating in front of me. So I’m happy with how it sounds. When I shift my head just a little bit to the right or to the left, though, the things that were panned centrally move immediately to the side, and seem to come just out of that one speaker.



Is this the way it’s supposed to be, or should it not be that extreme? I can still hear everything else coming from the other speaker as well, so I have a good sense of what’s going on, but if it’s not supposed to be like this, what should I do to solve this issue?



SOS Forum post.



SOS Technical Editor Hugh Robjohns replies: The perception of sound sources floating in space between the speakers is a handy illusion discovered and explained by Alan Blumlein in the 1930s. In essence, each ear hears both speakers, but with slightly different times of arrival, because they’re at different distances. Naturally, the sounds from both speakers combine at each ear, creating a new composite waveform, and the crucial part of that process is that the apparent time of arrival of this composite waveform is dependent on the relative signal levels reproduced by each loudspeaker (as well as any intentional timing differences between them).



Since adjusting signal levels is much easier than adjusting their relative timing, we generally use pan-pots to control the apparent spatial position of sources within the stereo field. With the pan pot central, identical levels are reproduced from both speakers, and the apparent time-of-arrival at each ear is exactly the same. Consequently we perceive that as implying a central sound source.



In contrast, if the signal is panned away from the centre, the level from one loudspeaker is louder than the other, and these signals combine at the ear in such a way that the perceived time-of-arrival is earlier for the ear on the louder side. We then perceive that as a shift in the stereo image towards the louder speaker. To quantify this effect, a level difference between channels of about 6dB is sufficient to cause the image to move roughly halfway towards the louder of the speakers.



As I mentioned earlier, the relative timing (and phase) of signals from the two speakers is critical too, and a time delay between the channels of just 0.6ms (equivalent to moving six inches away from the source) will also move the image roughly halfway towards the earlier side.



So what happens when you move your head slightly away from the centre line is that you are imposing a physical time-of-arrival offset, because the sound from the closer speaker will arrive much sooner than the sound from the distant speaker. The result, as you’ve heard, is that the sound source appears to move towards the closer speaker too — and, in extremis, if you sit well off to the side it will sound like only the nearer speaker is working. This ‘bunching’ effect is made all the stronger because the closer speaker will also become slightly louder than the distant one, enhancing further any intended level-difference panning effects.
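If you like numbers, a back-of-envelope calculation shows how quickly a small head movement eats into that 0.6ms figure. The speaker spacing and head offset below are just example values.

```python
import math

SPEED_OF_SOUND = 343.0                  # metres per second, at room temperature

def arrival_time_offset(spacing_m=1.5, head_offset_m=0.10):
    """Difference in arrival time between the two speakers for a listener who
    slides head_offset_m sideways from the apex of an equilateral triangle
    with spacing_m between the speakers."""
    half = spacing_m / 2.0
    depth = spacing_m * math.sqrt(3) / 2.0      # listener's distance from the speaker line
    d_near = math.hypot(half - head_offset_m, depth)
    d_far = math.hypot(half + head_offset_m, depth)
    return (d_far - d_near) / SPEED_OF_SOUND

dt = arrival_time_offset(1.5, 0.10)     # a 10cm sideways shift on 1.5m-spaced monitors
print(f"nearer speaker arrives {dt * 1000:.2f} ms earlier")
# -> roughly 0.3 ms: already a large slice of the ~0.6 ms that pulls the image
#    halfway towards the earlier side, which is why the image lurches so readily.
```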



This bunching effect, then, is inherent in the way stereo monitoring works. It’s entirely normal, and there’s nothing you can really do about it. And that’s why we have the notion of a ‘sweet spot’ where the imaging is accurate and stable, which is the listening position at the apex of the equilateral triangle when the monitor speakers are at the other corners. The further your listening position is from the speakers, the less impact a small physical movement to the left or right will have on the imaging, and the larger the sweet spot will appear to be. In your case, I suspect you are probably sitting pretty close to the speakers, which is why a relatively small movement on your part results in a significant shift in the stereo image.


Wednesday, September 2, 2015

Q What’s the best way to clip my drums?

Sound Advice : Recording




Matt Houghton

The Clipper section of Vladislav Goncharov’s excellent freeware Limiter No6 can be a really useful tool for certain drum sounds, even if that’s not what it was created for. Just be sure to turn off any other sections you don’t need — for example, the limiter can ‘soften the edge’ of drum transients before the sound hits the clipping stage.




I’ve read that I can get a good hip–hop kick or snare sound by ‘clipping’ it. But when I try this, it sounds horrible. What am I doing wrong?



Jake Johnson, via email



SOS Reviews Editor Matt Houghton replies: Let’s be clear what we mean by ‘clipping’. Digital clipping, whereby the part of the waveform that exceeds the digital headroom is flattened, is difficult to achieve in your DAW by accident because there’s bags of headroom (as Hugh makes clear in his previous reply) but it’s possible if, for example, gain is applied to a signal before it hits an old, poorly designed plug-in. The reason this form of clipping sounds so bad is that the clipped section of the waveform is essentially a series of square waves, with strong, odd-order harmonics extending up beyond half the sample rate. These high harmonics aren’t supposed to exist in the digital domain, so they cause aliasing distortions at the D-A converter.



The ‘nice’ form of clipping you’re referring to was originally done by abusing the analogue stages of an A-D converter. When you clip in the analogue domain, the artifacts of clipping are all harmonics at frequencies higher than the fundamental: clipping a 100Hz sine wave would generate odd harmonics at 300Hz, 500Hz and so on. Any harmonics above half the sample rate were filtered out by the converter’s anti-alias filters, so there would be no aliasing. To work in this way, you must run out of analogue headroom before you run out of quantisation levels (‘digital headroom’). Some converters are designed like this and others aren’t, but quite a few (including on many audio interfaces) offer a ‘soft-clip’ facility, to ensure the device behaves in the desired way when clipping.
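A quick numerical confirmation of the odd-harmonics point, hard-clipping a 100Hz sine in Python and inspecting its spectrum:

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr                          # one second of audio, 1Hz FFT bins
sine = np.sin(2 * np.pi * 100 * t)
clipped = np.clip(1.5 * sine, -1.0, 1.0)        # drive the sine into hard clipping

spectrum = np.abs(np.fft.rfft(clipped * np.hanning(sr)))
for f in (200, 300, 400, 500, 600, 700):
    print(f"{f} Hz: {20 * np.log10(spectrum[f] / spectrum[100]):+.1f} dB re fundamental")
# Odd harmonics (300, 500, 700Hz) show up strongly; even harmonics stay down at
# the leakage floor. Symmetrical clipping generates odd-order harmonics only.
```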



Today, you can achieve the same effect by recording your sound without clipping and then using a dedicated clipper plug–in such as the freeware Limiter No6 by VladG. (There are lots of other processors in this plug–in, and the key to success is disabling the sections you don’t need.) Such plug-ins usually include anti-alias filtering, to give an analogue-like effect.



Of course, not all sounds will benefit from being clipped, particularly those with a strong pitched element. Snare drums, hi-hats and cymbals are usually better candidates, as they all feature a strong noise component and little pitch information, particularly during the attack phase of each hit — the brief transient peak, which is really the only bit you’re looking to clip. It’s important to understand that the effect won’t work for every track, either, as clipping tends to give a sound more ‘bite’, which is not always what a track’s going to require. As with any creative processing that changes the tonality of a sound, you must use your ears.