Welcome to No Limit Sound Productions

Company Founded
2005
Overview

Our services include Sound Engineering, Audio Post-Production, System Upgrades and Equipment Consulting.
Mission
Our mission is to provide excellent quality and service to our customers. We offer customized service.

Monday, July 31, 2017

Q. What exactly is ‘headroom’ and why is it important?

By Various
I'm a synth guy getting more and more into recording and mixing my own tunes. One thing that stumps me is the issue of 'headroom': for example, in the case of my Focusrite Saffire Pro 26 I/O, the manual says that using the PSU rather than Firewire bus power yields 6dB of additional headroom in the preamps. I assume that this is a good thing, but how so? What is headroom and why do I want more of it? How do I know it's there (or not there), and how can I take advantage of it?

Via SOS web site

SOS Technical Editor Hugh Robjohns replies: These are all good questions. Every audio‑passing system (analogue or digital) has two limits: at the quiet end there is the noise floor, normally a constant background hiss into which signals can be faded until they become inaudible; and at the loud end there is clipping, the point where the system can no longer accommodate an increase in signal level and gross distortion results. The latter is generally due to the signal level approaching the power supply voltage levels in analogue systems, or the coding format running out of numbers to count more quantising levels in digital systems.
Obviously, we need to keep the signal level somewhere between these two extremes to maximise quality: somewhere well above the noise floor but comfortably below the clipping point. In analogue systems, this is made practical and simple by defining a nominal working level and encouraging people to stick to that by scaling the meters in a suitable way. For example, VU meters are scaled so that 0VU usually equates to +4dBu. The clipping point in professional analogue gear is typically around +24dBu, so around 20dB higher than the nominal level indicated on the VU meter.

That 20dB of available (but ideally unused) dynamic‑range space is called the headroom, or headroom margin. It provides a buffer zone to accommodate unexpected transients or loud sounds without risking clipping. It's worth noting that no analogue metering system displays much of the headroom margin: rather, it's an 'unseen' safety region that is easy to overlook and take for granted. In most digital systems, the metering tends to show the entire headroom margin, because the meter is scaled downwards from the clipping point at 0dBFS. The top 20dB or so of a digital scale is showing the headroom margin that is typically invisible on the meters of analogue systems. As a result, many people feel they are 'under‑recording' on digital systems if they don't peak their signals well up the scale, when in fact they are actually over‑recording and at far greater risk of transient distortion.

The reason your interface offers greater headroom when operating from its external power supply is that the PSU provides a higher‑voltage power rail than is possible when the unit is running from Firewire bus power. A higher supply voltage means that a larger signal voltage can be accommodated; in this case, twice as large, hence the 6dB greater headroom margin. More headroom means you have to worry less about transient peaks causing clipping distortion, and generally translates to a more open and natural sound, so it's a good thing.
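If it helps to see the arithmetic behind those figures, here's a minimal Python sketch (the numbers are simply the examples quoted above, not anything measured from the Saffire itself). Doubling the maximum signal voltage corresponds to 20 x log10(2), or roughly 6dB, and the headroom margin is just the gap between the clipping point and the nominal level.

```python
import math

def voltage_ratio_to_db(ratio):
    """Convert a voltage (amplitude) ratio to decibels: dB = 20 * log10(ratio)."""
    return 20 * math.log10(ratio)

def headroom_db(clip_level_dbu, nominal_level_dbu):
    """The headroom margin is simply the clipping point minus the nominal level."""
    return clip_level_dbu - nominal_level_dbu

print(voltage_ratio_to_db(2.0))   # doubling the maximum signal voltage: ~6.02 dB extra headroom
print(headroom_db(24, 4))         # +24dBu clip point, +4dBu nominal (0VU): 20 dB of headroom
```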


Published February 2010


Saturday, July 29, 2017

Q. What’s the best way to add a subtle vinyl effect?

By Various
I'm trying to figure out how I would create a really old‑style, warm‑sounding distortion/crackle on a string motif for an intro to a song I'm writing. I'll be using East West Quantum Leap Symphonic Orchestra for the actual string loop, and I want to create a sort of 'AM radio' feel for it. That's easy enough to achieve using various EQ techniques, but I also want to give it a really subtle '60s record‑player crackle — something that's there if you know what you're listening for, but not so 'in your face' as to sound cheesy or clichéd. I was wondering if there are plug‑ins that can do this. I fear I may have to break the bank again...
 
Here are three plug‑ins you could use to add simulated vinyl noise to your audio tracks without breaking the bank: Izotope's Vinyl (left), Retro Sampling's Vinyl Dreams (far left), and Steinberg Cubase's bundled Grungelizer (top).

Via SOS web site

SOS contributor Mike Senior replies: There's no need to break the bank for this, because there are actually a few different freeware plug‑ins that provide the kind of thing you're after. One of the best known is Izotope's freeware Vinyl plug‑in, which is available for both Mac and PC. The advantage of this one is that you get a lot of control over the exact character of the vinyl noise you're creating: not only can you balance various different mechanical and electrical noises, but you can also choose the decade you want your virtual vinyl to hail from and how your processed audio is affected by disc wear.

The downside of this plug‑in for me, though, is that it doesn't seem to output some of its added noises in stereo, irrespective of how I set up the controls, and a lot of the character of vinyl noise, to me, lies in its stereo width. To be fair, though, the 'dust' and 'crackle' components seem to be stereo, and stereo was, of course, only really in its infancy in the '60s, so this might not matter to you. Indeed, collapsing the whole signal to mono might be a useful way to 'date' the string sound itself. If you're running Steinberg's Cubase, the built‑in Grungelizer plug‑in provides a similar paradigm to the Izotope plug‑in, albeit with a simpler control set. However, all the added noises from this plug‑in appear to be in mono too.

For stereo vinyl noise, check out the freeware plug‑ins from Retro Sampling (www.retrosampling.se). Both Audio Impurities and Vinyl Dreams can overlay vinyl noise, although you only get wet/dry knobs, so you're stuck with the preset effect. That said, if you set up the plug‑ins on a separate channel in your sequencer, you can dramatically adjust their character with EQ to make them seem less obtrusive — a combination of high‑cut and low‑cut filtering usually works well for me. If you want a smoother vinyl noise (less of the Rice Crispies!), you can also slot in a fast limiter or dedicated transient processor to steamroller spikes in the waveform.
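Purely as an illustration of that separate noise‑channel idea, here's a rough Python/SciPy sketch (it isn't any of the plug‑ins mentioned, and the filter frequencies, click density and levels are arbitrary assumptions you'd tune by ear): generate some sparse clicks, band‑limit them with low‑cut and high‑cut filters, then clip the worst spikes before mixing the layer in quietly.

```python
import numpy as np
from scipy.signal import butter, lfilter

fs = 44100                      # sample rate (Hz)
n = int(fs * 4.0)               # four seconds of noise

# Sparse random impulses as a crude stand-in for vinyl crackle.
rng = np.random.default_rng(0)
crackle = np.zeros(n)
ticks = rng.choice(n, size=200, replace=False)
crackle[ticks] = rng.uniform(-1.0, 1.0, size=ticks.size)

# Low-cut plus high-cut (a band-pass) to push the noise into a less obtrusive range.
b, a = butter(2, [300.0 / (fs / 2), 8000.0 / (fs / 2)], btype='band')
filtered = lfilter(b, a, crackle)

# Crude 'limiter': hard-clip the biggest spikes so the crackle sits more smoothly.
smoothed = np.clip(filtered, -0.1, 0.1)

# Then mix the noise layer under the dry string part at a low level, e.g.:
# mix = strings + 0.5 * smoothed   # 'strings' being your bounced audio at the same length/rate
```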

These processing techniques also allow you to get good mileage from the vinyl noise samples that periodically crop up on sample libraries. I've been collecting vinyl noise samples for a while, so I can tell you that there are good selections on the Tekniks Ghetto Grooves and Mixtape Toolkit titles, as well as on Spectrasonics' original Retrofunk collection. I've also turned up a good few examples in general‑purpose media sound‑effects libraries, if you have anything like that to hand.

Published January 2010

Thursday, July 27, 2017

Q. How can I achieve a ‘dry’ sound?



By Various

I record and mix in my 'studio', which isn't too great acoustically. I can manage somehow when mixing, by working on headphones and doing lots of cross‑referencing, but the problem is that when it comes to recording I really hate the room sound on my vocals, and most of all on acoustic guitars, which I use a lot. The reverb tail is pretty short, but I'm still having a hard time getting a nice dry sound on my guitars, because I can't record dry! I know that the obvious solution is to treat the room, but the truth of the matter is that I can't do much better than this for now. So is there any way to treat a 'roomy' sound (on vocals and guitar) to make it sound drier? I know it is very difficult, or maybe impossible, especially for acoustic guitars, but any kind of suggestion, even for small improvements, would be very welcome.
 
A high‑resolution spectrum analyser such as Schwa's Schope lets you quickly and precisely home in on specific resonant frequencies that may be responsible for a coloured or uneven sound.

Via SOS web site

SOS contributor Mike Senior replies: Given that the reverb doesn't have a 'tail' as such, I reckon it's the reverb tone that's the biggest problem, so trying to use some kind of gating or expansion to remove it is unlikely to yield a useful improvement. You could help minimise the ambient sound pickup by using a directional mic for both vocals and guitar and keeping a fairly close placement. For vocals, very close miking is pretty commonplace, but for acoustic guitar you might want to experiment with using an XY pair of mics instead of a single cardioid, to avoid 'spotlighting' one small area of the guitar too much. That setup will usually give you a more balanced sound because its horizontal pickup is wider than a single cardioid on its own. In all but the smallest rooms, it's usually possible to get a respectable dry vocal sound just by hanging a couple of duvets behind the singer, and because I suspect that you've already tried this fairly common trick, I'm suspicious that room resonances are actually the biggest problem, rather than simple early reflections per se. Duvets are quite effective for mid‑range and high frequencies, but aren't too good at dealing with the lower‑frequency reflections that give rise to room resonances.

So given that room resonance is likely to be the problem, what can you do about it? Well, if you've no budget for acoustic treatment, I'd seriously consider doing your overdubs in a different room, if there's one available. If you're recording on a laptop, or have a portable recorder, maybe you can use that to record on location somewhere if you're confined to just the one room at home. I used to do this kind of thing a lot when I first started doing home recordings, carting around a mic, some headphones and a portable multitrack machine to wherever was available.

Part of what the room resonances will be doing is putting scary peaks and troughs into the lower mid‑range of your recorded frequency response, but the exact frequency balance you get will depend on exactly where your player and microphone are located in relation to the dimensions of the room, so a bit of determined experimentation in this respect might yield a more suitable sound, if not quite an uncoloured one. You might find that actually encouraging a few more high‑frequency early reflections using a couple of judiciously placed plywood boards might also improve the recorded room sound a little. A lot of domestic environments can have a bit too much high‑frequency absorption, on account of carpets, curtains, and soft furnishings.

After recording, you could also get busy with some narrow EQ peaks in the 100‑500Hz range, to try to flatten any obvious frequency anomalies. One thing to listen for in particular is any notes that seem to boom out more than others: a very narrow notch EQ aimed precisely at that note's fundamental frequency will probably help even things out. You can find these frequencies by ear in time‑honoured fashion by sweeping an EQ boost around, but in my experience a good spectrum analyser like Schwa's Schope plug‑in will let you achieve a better result in a fraction of the time. However, while EQ may address some of the frequency‑domain issues of the room sound, it won't stop resonant frequencies from sustaining longer, which is just as much part of the problem, and there's no processing I know of that will deal with that.
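To give a rough idea of what that kind of narrow corrective notch looks like in code, here's a small Python/SciPy sketch. The 110Hz centre frequency and the Q value are made‑up examples somewhere in that 100‑500Hz range; in practice you'd tune them by ear or with an analyser such as Schope.

```python
import numpy as np
from scipy.signal import iirnotch, lfilter

fs = 44100
f0 = 110.0   # hypothetical 'booming' note fundamental found by ear or with an analyser
q = 30.0     # high Q gives a very narrow notch, leaving neighbouring frequencies alone

# Design a narrow notch at the offending frequency (normalised to Nyquist).
b, a = iirnotch(f0 / (fs / 2), q)

audio = np.random.randn(fs)     # placeholder signal; substitute your recorded take here
notched = lfilter(b, a, audio)
```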

For my money, this is the kind of situation where you can spend ages fannying around with complicated processing to achieve only a moderate improvement, whereas nine times out of 10 you'll get better results much more quickly by just re‑recording the part.


Published January 2010

Wednesday, July 26, 2017

Q. What are filters and what do they do?


By Various
I always hear people talking about low‑pass filters and high‑pass filters and cutting at this and that frequency, but where do you get these filters from? I don't think I have one in my Cakewalk Project 5 software. Are they part of equalizers?
Via SOS web site

This diagram illustrates both low‑pass (high‑cut) and high‑pass (low‑cut) filtering. The shaded areas in the diagram will be attenuated.

SOS Technical Editor Hugh Robjohns replies: People sometimes use the terms 'EQ' and 'filter' interchangeably, so it's understandable that you might be confused. We've published several introductory guides to EQ, most recently in SOS December 2008 (/sos/dec08/articles/eq.htm), so if you're a bit baffled about the broader subject of EQ, it would be well worth reading this.

Essentially, EQ is used to boost or attenuate (turn down) a range of frequencies in order to shape a sound. High‑pass and low‑pass filters are common in professional equalisers, but less common in budget designs. They are used to define the highest and lowest frequencies of interest in the signal and they pretty much do what their names suggest: let audio above a certain frequency pass (high‑pass filter) or audio below a certain frequency pass (low‑pass filter). Anything outside those limits is attenuated. They are also called low‑cut or high‑cut filters, but the function is the same.

Filters are defined by their slope, which determines the attenuation of signals outside the 'pass' band. Most audio filters on mixing desks (and DAWs) will have a slope of 12dB or 18dB per octave, and in synthesizer filters the slope may be as steep as 24dB per octave. If an 18dB/octave high‑pass filter is set to 80Hz, any audio an octave below that (at 40Hz) will be attenuated by 18dB, and an octave lower still, at 20Hz, it will be attenuated by 36dB... and so on.
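To put that slope arithmetic into a couple of lines of Python (a simplified model that ignores what happens close to the corner frequency itself):

```python
import math

def hpf_attenuation_db(freq_hz, corner_hz, slope_db_per_octave):
    """Approximate attenuation of a high-pass filter well below its corner frequency."""
    octaves_below = math.log2(corner_hz / freq_hz)
    return max(0.0, octaves_below * slope_db_per_octave)

print(hpf_attenuation_db(40, 80, 18))   # one octave below the 80Hz corner: 18.0 dB
print(hpf_attenuation_db(20, 80, 18))   # two octaves below: 36.0 dB
```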

High‑ and low‑pass filters generally have much steeper slopes than the more normal equaliser bands (which are typically only 6dB/octave) and are intended for a different purpose. You can't effectively remove rumble with a bass EQ control, but you can with a high‑pass filter. But equally, you can't shape the tone of a bass guitar with a high‑pass filter as easily as you can with a bass EQ control.

Filters are used for 'corrective' equalisation, as opposed to creative equalisation. They are used to clean up a signal, rather than to shape the sound creatively. They only provide attenuation of unwanted frequencies, and there's no scope to boost any part of the frequency range. Of the two, the high‑pass filter is probably the most useful, as it helps to remove unwanted rumbles and other unwanted sub‑sonic rubbish that microphones tend to capture. Most DAW software includes a software EQ that you'll be able to use to perform any of these tasks, and although I'm not personally familiar with Cakewalk Project 5, I notice that it can host third‑party VST plug‑ins, so there are many freeware plug‑ins that you could use if your DAW doesn't have them built in.


Published July 2009

Monday, July 24, 2017

Q. Does mono compatibility still matter?



By Various 
I've recently started working at a classical radio station in my area, and I was fully expecting to have to deal with mono issues and think about miking live performance with those in mind. But everything is done in stereo and broadcast in stereo. Spaced omnis are common, and that's not a very mono‑compatible technique. So when is mono compatibility a necessity, and is mono really ever used any more as a final 'product'?
Even popular modern DAB radios such as this one from Pure are mono by default, and a large part of the potential audience for radio and TV in the UK still listens in mono — so mono compatibility is still a consideration for music producers. 

Via SOS web site
SOS Technical Editor Hugh Robjohns replies: In a technical sense, mono compatibility is still important. Whether a particular radio station chooses to bother about it is a decision for them, but I would suggest it's unwise to ignore it completely.

FM radio is transmitted essentially in a Mid/Side format, where the derived mono sum (Mid) signal is transmitted on the main carrier and the 'Side' information is transmitted on a weaker secondary carrier. A mono radio ignores the Side signal completely, whereas a stereo radio applies M/S matrix processing to extract normal left and right signals.
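For anyone who likes to see the matrixing written out, here's a minimal Python sketch of the sum‑and‑difference idea (the 0.5 scaling is one common convention; broadcasters' actual gain structures vary). A mono receiver effectively just plays the Mid signal, which is why mono compatibility is really a question of how the L+R sum behaves.

```python
import numpy as np

def encode_ms(left, right):
    """Derive Mid (mono sum) and Side (difference) signals from left/right."""
    mid = 0.5 * (left + right)
    side = 0.5 * (left - right)
    return mid, side

def decode_ms(mid, side):
    """Reconstruct left/right: L = M + S, R = M - S."""
    return mid + side, mid - side

left = np.array([0.5, 0.2, -0.3])
right = np.array([0.4, -0.1, -0.3])
mid, side = encode_ms(left, right)
assert np.allclose(decode_ms(mid, side), (left, right))   # a stereo receiver recovers L/R exactly
# A mono receiver simply plays 'mid' and ignores 'side' altogether.
```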

However, there is potentially a noise penalty in this process, so in poor reception areas, and often when on the move in a car, FM receivers are designed to revert to mono, to avoid reproducing a very hissy stereo signal. As a result, a large amount of in‑car listening will be in mono (at least, here in the UK) because of signal fading and multi‑path issues. In addition, a very large proportion of radio listeners do their listening in the kitchen, bathroom or garden, using portable radios that are usually mono. So mono compatibility is still important to a very large proportion of the potential FM radio audience.

Amusingly, mono doesn't become any less relevant in the digital radio market. The most popular DAB digital radio receiver in the UK is currently the Pure Evoke, and although you can attach an optional second speaker to enjoy stereo from it, by default the stereo output from the DAB receiver is combined to mono to feed the single internal speaker. So mono compatibility remains important in the digital radio market too!

Considering TV for a moment, the primary sound on analogue (terrestrial) TV in the UK is in mono, transmitted by an FM carrier associated with the vision carrier. Although a secondary stereo sound carrier was added in 1991, using a digital system called NICAM, there are still a lot of small mono TVs on the market. Analogue TV will be switched off in the UK within the next three years, and digital TV (both terrestrial and satellite) is broadcast entirely in stereo (or surround in some cases) — but even so, it is still possible to buy mono receivers.

So given that a significant proportion of the potential audience (for analogue and digital radio and TV) could well be listening in mono, I'd suggest that checking and ensuring mono compatibility is still important. I know that some classical radio stations, in particular, argue that only serious music enthusiasts listen to their output, and they would only do so on decent stereo hi‑fi equipment. Perhaps that is the case, but to my way of thinking, ensuring reasonable mono compatibility is still the safest approach, and needn't restrict the way broadcast material is produced in any way at all.

Using spaced omnis is a technique often favoured by classical engineers, largely because of the more natural sound and smoother bass extension provided by pressure‑operated mics. In some situations, particularly when using a single spaced pair, there can be mono compatibility issues — but only rarely, and it is usually easily fixed. For example, if any additional accent or spot mics are used and panned into the appropriate spatial positions, any phasing or comb filtering from the spaced omnis, when auditioned in mono, will be diluted and usually ceases to be an issue. Even in cases where a single spaced pair is used, listening to the derived mono may sound different, but it is rarely unacceptable.

To sum up, I would definitely recommend checking mono compatibility and trying to ensure that it is acceptable (even if not entirely perfect). If the sound quality of spaced omnis is preferred, there's no reason not to use them — even if the final output is mono — provided suitable skill and care is used in their placement and balance. The BBC certainly use spaced pairs for Radio 3 transmissions in appropriate situations.


Published June 2009


Friday, July 21, 2017

Q. What’s the best order for mixing?



By Various
I've been wondering what order people use when mixing. Mixing the instruments in order of priority? Mixing the rhythm section first?


Deciding on the right order for mixing your tracks might well depend on the genre in which you're working. The approach could be very different on a Rihanna mix than one of Dido's, for example. 
Via SOS web site


SOS contributor Mike Senior replies: I've spent the last couple of years researching and comparing the techniques of many of the world's top engineers, and you might be surprised to discover that they disagree considerably on the issue of the order in which to deal with the different aspects of a mix. On this basis, it would be tempting to think that your mixing order isn't actually that important, but I think that this is a mistake, as in my experience it can have a tremendous impact on how a mix turns out.

One reason for this is that each different track in your mix has the potential to obscure (or 'mask') certain frequency regions of any other track. The primary way to combat frequency masking is to reduce the level of the specific problem frequency range in the less important instrument, letting the other one shine through better. So it makes a good deal of sense to start your mix with the most important track and then add in successively less important tracks, simply so that you can take a methodical approach to dealing with the masking problem. If any track you introduce is obscuring important elements of a more important track that is already in the mix, you set about EQ'ing the problem frequencies out of the newly added track. If you don't introduce important tracks until later, you'll tend to find it difficult to get them to sound clear enough in the mix, because there will now be umpteen less important tracks muddying the water. This is a common problem for those who only introduce their lead vocal track right at the end of the mix, and it can often lead to an over‑processed and unmusical end result.

Another persuasive reason for addressing the most important tracks first is that in practice almost everyone has mixing resources that are limited to some extent. If you're mixing in the analogue domain, you'll already be well acquainted with the frustration of only having a few of your favourite processors, but even in the digital domain there are only a certain number of CPU cycles available in any given hardware configuration, so some compromise is usually necessary, by which I mean using CPU‑intensive processing only for a smaller number of tracks. In this context, if you start your mix with the most important instruments, you're not only less likely to over-process them, but you'll also be able to use your best processors on them — an improved sonic outcome on two counts!

Taking another look at different engineers' mixing‑order preferences in the light of these issues, the disparity in their opinions begins to make more sense if seen in the context of the music genre they're working in. In rock and dance music styles, for example, people often express a preference for starting a mix with the rhythm section, while those working in poppier styles will frequently favour starting with the vocals. As a couple of examples to typify how this tends to affect the mix, try comparing Rihanna's recent smash 'Umbrella' with something like Dido's 'White Flag'. The first is built up around the drums, while the second has been constructed around the lead vocal, and you can clearly hear how various subsidiary sounds have been heavily processed, where necessary, to keep them out of the way of the main feature in each instance. In the case of 'Umbrella', check out the wafer‑thin upper synths, whereas in 'White Flag' listen for the seriously fragile acoustic guitars.



Published June 2009