Welcome to No Limit Sound Productions


Our services include Sound Engineering, Audio Post-Production, System Upgrades and Equipment Consulting.
Our mission is to provide excellent quality and service to our customers. We offer customised service.

Tuesday, April 30, 2013

Q. Can I use the front and rear sides of a Blumlein array simultaneously?

Most of the recording I do involves tracking several musicians playing together in a room. I’d like to use a stereo pair to capture the overall picture, as well as close miking, but often the musicians arrange themselves in such a way that X-Y or A-B rigs won’t work. I’ve been wondering about using a Blumlein-crossed figure-of-eight pair placed between the drummer and the rest of the group, in such a way that the front of the array captures the drum kit and the rear captures the other musicians. In other words, is Blumlein strictly restricted to the 90-degree acceptance angle in front, or is it OK to use the 90-degree space behind the array too? And if so, should I reverse the polarity of any other mics on that side?

You actually have little choice over whether to use the rear of your mics in a Blumlein array, as the mics will always capture ambient noise to the rear of the setup. This can be quite useful in certain circumstances, such as radio drama, for example, in which the setup allows the actors to be positioned less rigidly but still be picked up by the mics.
Simon Earle, via email
SOS Technical Editor Hugh Robjohns replies: 
The short answer is yes, it’s perfectly OK to use the rear pick-up region, and yes, you might need to reverse the polarity of spot mics covering sources on the rear of the Blumlein array.
The slightly longer answer is that you actually have no choice in the matter; the rear side of a Blumlein array is captured anyway, so you might as well make use of it. In an orchestral recording, for example, it will be capturing the room ambience and audience (which will make it sound rather more open than might be expected). In radio drama, both sides of a Blumlein array are often used to great effect, as the technique allows the actors to face each other across the mic for good eye-contact, while still being able to move freely within their own ‘stereo space’.
In your situation, it’s perfectly acceptable to arrange the musicians to use both front and rear 90-degree stereo-recording angles, using relative distances from the mics to help achieve the appropriate balance. In radio drama, the studio floor is often marked up with tape to identify the edges of the 90-degree pickup areas, with additional marks to show the desired positions for each performer, so they don’t wander away and upset the optimum balance.
There are a few things to beware of. Firstly, don’t let any real sound sources move around to the sides of the Blumlein pair, because they will then be out of phase in the stereo image. Secondly, choose your figure-of-eight mics carefully, as many are designed with strong tonal differences between front and back. That may be quite useful in your situation, but can cause significant issues in others. Finally, if you’re planning to close-mic sources to supplement their contributions to the main pair balance, sources on the rear of the mic will be captured with an inverted polarity relative to those on the front, as you say.
Consequently, you will probably need to flip the polarity of those close mics in the mix to avoid phase cancellation issues, depending, to a degree, on the distance between the close mics and Blumlein pair, the nature of the source, and the level of the spot-mic contribution. I’d start with the rear-side close mics flipped in polarity, and check each one as you build the mix, to see what works best.  
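The polarity issue is easy to demonstrate numerically. Here's a minimal Python sketch (the tone, levels and sample rate are arbitrary illustrative figures, not anything from a real session) showing how a rear-side source, captured in inverted polarity by the main pair, largely cancels against an un-flipped spot mic but sums constructively once the spot mic's polarity is reversed:

```python
import math

SR = 48000     # sample rate (Hz)
FREQ = 220.0   # test tone (Hz)
N = 4800       # 0.1 seconds of audio

# Spot mic signal: a simple sine tone standing in for the source.
spot = [math.sin(2 * math.pi * FREQ * n / SR) for n in range(N)]

# A rear-side source arrives at the Blumlein pair with inverted polarity.
main_pair = [-s for s in spot]

def rms(x):
    return math.sqrt(sum(v * v for v in x) / len(x))

# Summing without flipping the spot mic: near-total cancellation.
unflipped = [a + b for a, b in zip(main_pair, spot)]

# Flipping the spot mic's polarity restores constructive summing.
flipped = [a - b for a, b in zip(main_pair, spot)]

print(round(rms(unflipped), 6))  # ~0.0
print(round(rms(flipped), 3))    # ~1.414 (two coherent signals)
```

In practice the cancellation is never this total, because the spot mic is closer to the source and hears a different balance of direct and reflected sound, which is why checking each close mic by ear as the mix builds is still the right approach.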


Q. What can I do to make my mixes sound more like commercial records?

I’m producing my own music, but I want it to sound as professional as possible. I’m sure that there must be certain tools that home studio owners can use to help them match their mixes and recordings with commercial ones. Do you have any advice for me on the best way to go?

A reference CD compilation of commercial tracks whose production qualities you admire can be a useful tool for helping to ensure high standards in your own mixes.
Greg Dillon, via email
SOS contributor Tom Flint replies: 
One of the best things you can do is create your own reference compilation so that you have something with which to compare your own work and production decisions. All of us, of course, can think of songs or pieces of music that we love because they sound a certain way. Making a reference compilation is really just a matter of collecting some of those tracks together and putting them onto a format that can be played on a variety of music systems. At this point, I still think the CD-R is the best media choice.
In general, the bigger the variety of tracks, the better, although if you were concentrating on producing a very particular genre of music it might be worth creating another dedicated compilation comprising tracks just from within that genre. There may also be music that is not particularly your cup of tea but still has admirable production qualities, and this is worth including too, as long as you can bear to listen to it! The most important thing is to select tracks that have something about them that seems to work particularly well, and make sure that each one reveals something that others on the collection do not. There would be no point, for example, in including endless variations of a particularly pleasing type of bass sound; one or two examples should suffice.
The first thing a well-considered compilation will reveal is that there really is no such thing as the perfect sound. Some productions seem to pack every frequency with noise, while others are relatively sparse. There are countless other contrasts too and I am continually amazed at how much productions can vary, and yet still sound professional, polished and satisfying.
Ideally, tracks should be taken from CDs, tapes and vinyl rather than MP3s, for quality reasons, but be sure to respect the music owners’ copyrights by only creating the reference CD-R from your own purchases and not distributing the end result to others.
Ethics, good practice and legalities aside, it is then a matter of using the reference material properly. Get to know your chosen tracks intimately by playing them everywhere you can. In the car, for instance, the body of some productions is lost under the drone of the engine, while others seem to fare quite well. It soon becomes apparent which kind of sounds are important, and which are merely ‘fairy dust’, only appreciable to those with superior hi-fi systems and ideal listening environments. Not every production sounds great in every situation, although there are usually one or two gems that seem to sound fantastic whatever the limitations of the listening environment or playback system.
Of course, the compilation can be a constantly evolving thing. Some favourite tracks might turn out to be of little use as reference material and should be replaced with others that have very specific characteristics. It might even be worth creating a separate ‘bad production’ compilation, just as a reminder of what you want to avoid doing to your own music.
Take the time to run the tracks through a narrow-band graphic EQ with spectrum analyser and then alter the level of the bands to see which ones have the most effect. This will help explain why certain mixes work, and where the important energy is centred.
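If you'd like to experiment with the band-energy idea before reaching for a hardware analyser, it can be sketched in a few lines of Python. This is only an illustration on a synthetic two-tone 'mix'; the band edges, sample rate and naive DFT are simplifications chosen for clarity, not a substitute for a proper spectrum analyser:

```python
import math

SR = 8000   # sample rate for the synthetic example (Hz)
N = 1024    # analysis length in samples

# Synthetic 'mix': a strong low-frequency component plus a weaker high one.
sig = [math.sin(2 * math.pi * 125 * n / SR)
       + 0.25 * math.sin(2 * math.pi * 2000 * n / SR)
       for n in range(N)]

def dft_mag(x):
    # Naive DFT magnitude spectrum; slow, but fine for a short illustration.
    n_len = len(x)
    return [abs(sum(x[n] * complex(math.cos(2 * math.pi * k * n / n_len),
                                   -math.sin(2 * math.pi * k * n / n_len))
                    for n in range(n_len)))
            for k in range(n_len // 2)]

mags = dft_mag(sig)
bin_hz = SR / N  # width of one frequency bin (Hz)

# Sum energy into three coarse bands, a crude stand-in for a band analyser.
bands = {'low (<500Hz)': 0.0, 'mid (500Hz-2kHz)': 0.0, 'high (2kHz+)': 0.0}
for k, m in enumerate(mags):
    f = k * bin_hz
    if f < 500:
        bands['low (<500Hz)'] += m * m
    elif f < 2000:
        bands['mid (500Hz-2kHz)'] += m * m
    else:
        bands['high (2kHz+)'] += m * m

for name, energy in bands.items():
    print(name, round(energy, 1))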
One of the situations in which the reference CD is of great use is in the mastering studio. Mastering engineers are often keen to hear examples of what you want and can bear those examples in mind while processing a mix.
It’s also a good idea to take your CD of reference material to other studios when you’ll be making important decisions based on the output of unfamiliar gear. If you know how your tracks usually sound, something that is too prominent or lacking will be immediately obvious.
Most of all, though, the reference CD will keep you on the straight and narrow, particularly if you’ve been working on something for a long time. In such circumstances, the reference tracks should act like a user reset button for your ears.
For more on compiling a reference CD, see the SOS articles at www.soundonsound.com/sos/sep03/articles/testcd.htm and www.soundonsound.com/sos/sep08/articles/referencecd.htm.  

Monday, April 29, 2013


Q. Can you recommend a 73-key stage piano?

I seem to be the only person in the world who wants an unfussy, weighted stage piano, with — at most — 73 keys. I have little money, so can’t afford to have a piano for home use and a piano for stage use, and I have no space to store them even if I could afford it. I don’t play the ‘dusty’ ends much, so saving space by not having 88 notes suits me fine. I also have feeble arms and a small car, so it’d be great to keep the weight down too. My ideal would basically be the Casio Privia P3 with two octaves missing, as it has a great sound and lovely action. Is there really nothing out there — current or discontinued — that could do all I want? I can probably stretch to around £1500 if I had to. What might you suggest?
Lucy Weston via email
SOS contributor Robin Bigwood replies: 
There are actually quite a number of 73- or 76-note keyboards out there that could fit the bill. As always, you have to decide what your priorities are. For example, hammer-action keyboards are usually very heavy, so the keyboard with the action that suits you most might also be the least portable. There’s also a choice to be made between a high-quality but limited piano-oriented sound set, or the ‘jack of all trades’ nature of a synth workstation.
I think the keyboard most worthy of your consideration is the Nord Electro. Version 4 of this well-respected and undeniably vibey keyboard was launched fairly recently, but its v3 predecessor seems to live on in Nord’s range. The 73-note version comes in at around £1400, has semi-weighted keys and weighs less than 10kg. The version with a hammer action, surprisingly, weighs only 1kg more, but it’ll set you back a cool £1800. Still, these are brilliant gigging instruments that are well worth the money. They can be loaded with all sounds from the Nord piano and wave libraries, and sport top-class rock organ emulations too.
Challenging Nord in this same market sector are a couple of serious players’ instruments by Japanese manufacturers. The Korg SV-1-73 is £1299, offers 36 electric and acoustic piano presets, and has a decent Korg RH3 hammer action. The alternative offered by Roland is the 76-note VR700 V-Combo at about £1200. You get great organs and pianos, along with strings, synths and pads. And, with a lighter ‘waterfall’ keyboard, it’s not too heavy. It is rather long, though, because of those extra keys and a ‘bender’ section to the left of the keyboard.
Next up, a couple of 76-note stage keyboard all-rounders. Cheapest of all (£599) is the Kurzweil SP4-7. There’s no doubting the pedigree, but this workmanlike piano could prove a bit basic for really serious use. More flexible, though unashamedly oriented towards the synth world (the clue’s in the name) is the Roland Juno Stage for £950. I spent some time with one a little while back and enjoyed playing it. Like the V-Combo it’s quite long, but it has some nice live-leaning features such as audio file playback (for backing tracks and so on) from USB sticks, a click output for drummers, and a phantom-powered mic input that’s routed through the internal effects.

Buying a stage piano will always require some compromise, whether that means having less than you want or, in some cases, more. The excellent Nord Electro 4, for example, is easily portable, but only has semi-weighted keys, and is not cheap. However, the Korg M50, though technically a synth workstation, rather than a stage piano, is a snip at £850 but, again, only has semi-weighted keys. It’s no surprise that fully weighted keys and portability do not go together!
Finally we get to those synth workstations. The Korg M50-73, around £850, is a svelte 9kg and could get you safely in and out of many gigging jobs. But there’s also the new Korg Krome 73 for £1000 or so, and that boasts a flagship Steinway piano sound, plus good e-pianos too: definitely one to audition. I reviewed the Kurzweil PC3LE7 for SOS a while back, and, while I thought it was a real workhorse, its pianos (in particular) are a little way off state-of-the-art. I’m sure the Yamaha S70XS at around £1600 would be nice, too, but it’s a hammer-action whopper and a solid 20kg.
In essence, though, these are all rewarding, useful instruments, so choosing between them is a nice problem to have. Best of luck! 


Saturday, April 27, 2013

Q. Can you explain digital clocking?

Phrases like ‘digital clocking’, ‘word clock’ and ‘interface jitter’ are bandied around a lot in the pages of Sound On Sound. I’m not that much of a newbie, but I have to admit to being completely in the dark about this! Could you put me out of my misery and explain it to me?

Interface ‘jitter’, which results from clock-data degradation, can cause your waveform to be constructed with amplitude errors, seen in the diagram. These could produce noise and distortion. It’s for this reason that people sometimes use a dedicated master clock, which all other devices are ‘slaved’ to.
James Coxon, via email
SOS Technical Editor Hugh Robjohns replies: 
Digital audio is represented by a series of samples, each one denoting the amplitude of the audio waveform at a specific point in time. The digital clocking signal — known as a ‘sample clock’ or, more usually, a ‘word clock’ — defines those points in time.
When digital audio is being transferred between equipment, the receiving device needs to know when each new sample is due to arrive, and it needs to receive a word clock to do that. Most interface formats, such as AES3, S/PDIF and ADAT, carry an embedded word-clock signal within the digital data, and usually that’s sufficient to allow the receiving device to ‘slave’ to the source device and interpret the data correctly.
Unfortunately, that embedded clock data can be degraded by the physical properties of the connecting cable, resulting in ‘interface jitter’, which leads to instability in the retrieved clocking information. If this jittery clock is used to construct the waveform — as it often is in simple D-A and A-D converters — it will result in amplitude errors that could potentially produce unwanted noise and distortion.
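The effect of jitter on conversion is straightforward to simulate. The Python sketch below (with a deliberately exaggerated jitter figure, purely for illustration) samples a 1kHz tone both at ideal word-clock ticks and at randomly displaced ones, and reports the worst-case amplitude error that the timing displacement introduces:

```python
import math
import random

random.seed(0)

SR = 48000       # sample rate (Hz)
FREQ = 1000.0    # test tone (Hz)
N = 480          # 10ms of audio
JITTER = 5e-6    # +/-5 microseconds of clock error (exaggerated for clarity)

def tone(t):
    return math.sin(2 * math.pi * FREQ * t)

# Ideal sampling: each sample taken exactly on the word-clock tick.
ideal = [tone(n / SR) for n in range(N)]

# Jittered sampling: the same ticks, each displaced by a random timing error.
jittered = [tone(n / SR + random.uniform(-JITTER, JITTER)) for n in range(N)]

# The amplitude error this introduces, relative to full scale:
err = max(abs(a - b) for a, b in zip(ideal, jittered))
print(f"worst-case amplitude error: {err:.5f}")
```

Note that the error scales with the signal's slew rate, which is why jitter is most audible as noise and distortion on high-frequency, high-level content.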
For this reason, the better converters go to great lengths to avoid the effects of interface jitter, using a variety of bespoke re-clocking and jitter-reduction systems. However, when digital audio is passed between two digital devices — from a CD player to a DAW, say — the audio isn’t actually reconstructed at all. The devices are just passing and receiving one sample value after another and, provided the numbers themselves are transferred accurately, the timing isn’t critical at all. In that all-digital context, interface jitter is totally irrelevant: jitter only matters when audio is being converted to or from the digital and analogue domains.
Where an embedded clock isn’t available, or you want to synchronise the sample clocks of several devices together (as you must if you want to be able to mix digital signals from multiple sources), the master device’s word clock must be distributed to all the slave devices, and those devices specifically configured to synchronise themselves to that incoming master clock.
An orchestra can only have one conductor if you want everyone to play in time together and, in the same way, a digital system can only have one master clock device. Everything else must slave to that clock. The master device is typically the main A-D converter in most systems, which often means the computer’s audio interface, but in large and complex systems it might be a dedicated master clock device instead.
The word clock can be distributed to equipment in a variety of forms, depending on the available connectivity, but the basic format is a simple word-clock signal, which is a square wave running at the sample rate. It is traditionally carried on a 75Ω video cable equipped with BNC connectors. It can also be passed as an embedded clock on an AES3 or S/PDIF cable (often known as ‘Digital Black’ or the AES11 format), and in audio-video installations a video ‘black and burst’ signal might be used in some cases.  


Friday, April 26, 2013

Q. Are wow and flutter key to that analogue tape sound?

I have come to the conclusion that wow and flutter are a lot more important in the sound of tape and analogue recordings than they are usually given credit for. Most of the discussion about tape seems to concentrate on tape compression and the effects of transformers in the signal path, for example, and the majority of plug-in treatments designed to make recordings warmer focus on this. I don’t hear of many people applying wow and flutter plug-ins, or waffling on about the right type of capstan emulator. 
Recently I was re-reading one of those pieces Roger Nichols wrote for SOS a few years back, where he mentions that someone had invented a de-wow-and-flutter system that tracked variations in the pitch of the bias signal to correct for wow and flutter, and he said the result sounded ‘just like digital’.
I recently did a couple of projects where I more or less did the same thing, albeit hugely more labour-intensively: I transferred some old four-track cassette recordings to my PC. The recordings used a drum machine, which I still own, so I also made a clean new digital recording of the drum machine part. But, of course, due to wow and flutter, the old four-track recordings were out of sync with the drum machine on a couple of bars, so I ended up chopping up the four-track capture bar-by-bar, and time-stretching each bar so that the waveform of the drum machine recording on tape lined up exactly with the new, clean digital version. By the time I’d finished, the four-track did indeed sound quite different in character to what it had before. I think Nichols was right. I wonder what opinions the SOS team might have about the importance of wow and flutter in getting ‘that sound’?
Via SOS web site
SOS Technical Editor Hugh Robjohns replies: 
I agree that the subtle (and sometimes not so subtle) speed instability of tape is an important subconscious factor in the tape sound. Any time-modulation process, including wow and flutter, creates additional frequency components, and I think the subliminal presence of these on all analogue recordings is sometimes missed from digital recordings. However, I suspect it is actually the presence of the far more complex harmonics produced by ‘scrape flutter’ that is the most significant element, rather than the very low and cyclical frequency modulations caused by wow and flutter. Added to which, I find wow and flutter generally quite objectionable, especially in music with sustained tones, like piano and organ recordings.
However, what you are describing here is not actually wow and flutter. You’re describing speed ‘drift’, which is an absolute difference between the record and replay speeds. It’s not unusual for two devices to run at slightly different speeds, even in digital circles. Two separate CD players might run with sample rates of 44101Hz and 44099Hz, for example, or two analogue tape machines at 19.1cm/s and 18.9cm/s. If you start the two machines at the same time with identical recordings, they will drift in time relative to one another, just as you found with your four-track cassette — although in that case I suspect the problem was caused either by poor speed control or physical tape stretch.
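The arithmetic of that drift is easy to check. Taking the two hypothetical CD players above, running at 44101Hz and 44099Hz, a short Python sketch gives the cumulative timing error after an hour of playback:

```python
# Two players nominally at 44.1kHz but actually at slightly different rates.
rate_a = 44101.0  # Hz
rate_b = 44099.0  # Hz
nominal = 44100.0 # Hz

minutes = 60.0
seconds = minutes * 60

# After an hour, the faster player has consumed more samples' worth of
# audio, so identical recordings drift apart by this many seconds:
drift = seconds * (rate_a - rate_b) / nominal
print(f"drift after {minutes:.0f} minutes: {drift * 1000:.0f} ms")
```

A 2Hz discrepancy sounds tiny, but it accumulates to roughly a sixth of a second per hour, which is easily enough to throw two supposedly identical recordings audibly out of sync within a few bars of tightly programmed material.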
Wow is a low-frequency cyclical speed variation, which is very common on vinyl records if the centre hole is punched slightly off-centre, or if the disc is badly warped. Flutter is a much faster version of the same thing, typically caused by a worn tape-machine capstan or a lumpy pinch-roller. Scrape flutter is a higher-frequency effect again, typically caused by the inherent ‘stiction’ or vibration of tape against the heads as it is dragged past.
Wow and flutter, being cyclical phenomena, don’t usually result in a change in the average replay (or record) speed because any short-term speeding up is balanced completely by the same amount of slowing down as the cycle completes.
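The point about cyclical variations averaging out can be demonstrated numerically. In this Python sketch the wow rate and depth are invented figures for illustration only; because the speeding up and slowing down balance over each modulation cycle, the average speed ratio comes out at the nominal 1.0:

```python
import math

WOW_RATE = 0.5     # Hz: slow cyclical variation, e.g. an off-centre record
WOW_DEPTH = 0.003  # +/-0.3% peak speed deviation
STEPS = 1000       # simulation steps per second
DURATION = 10.0    # seconds (a whole number of wow cycles)

# Instantaneous speed ratio: nominal 1.0, modulated sinusoidally.
speeds = [1.0 + WOW_DEPTH * math.sin(2 * math.pi * WOW_RATE * n / STEPS)
          for n in range(int(DURATION * STEPS))]

avg = sum(speeds) / len(speeds)
print(f"average speed ratio over {DURATION:.0f}s: {avg:.6f}")
```

The pitch wobbles continuously, but the total playing time is unchanged, which is exactly why wow and flutter do not cause the kind of progressive sync drift described in the question.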
I’m not at all surprised that your heavily edited and time-stretched ‘fixed’ version of the electronic drum track sounds different from the straight digital recording, specifically because you performed so much processing on the individual sections. However, that ‘fixed’ version will also sound very different from the drum machine’s direct analogue outputs. You’re not ‘fixing wow and flutter’ but actually correcting for speed drift or tape stretch by time-adjusting the original material in short sections, which is naturally messing with the sonic character of the drum beats in short, unrelated sections.

Though wow and flutter may once have been phenomena that we were used to and could therefore ignore, their absence in modern recording means that this is no longer the case. Celemony’s Capstan is an incredibly effective tool for removing these unwanted effects, and it leaves few artifacts.
Returning to conventional wow and flutter, though, after nearly 30 years of ‘digital stability’ most of us have been completely weaned off the sound of wow and flutter, and our ears have become very good once again at spotting these grossly unnatural phenomena that we were once so happy to ignore. Last year I reviewed Celemony’s Capstan software, which is designed to fix both wow and flutter and speed-drift issues, and it does so extremely well and without artifacts!    


Q. How can I easily match levels on MP3s?

When I listen to MP3s on a PC or Apple Mac, they are all at different levels, with the louder ones seeming twice as loud as the quietest ones.
Back in the days of tape and vinyl, you set your own recording level on the tape deck. So if you recorded three different tracks to tape you could get the levels similar; listening to a ‘mix tape’ was way more consistent than ripping CDs to MP3, where different tracks have different volumes.
Has anyone made a piece of software that adjusts the volume level of tracks so they all match? Surely this can’t be that hard? And surely I am not the only guy on earth that finds this annoying? (Please note, however, I do not mean I want a compressor or limiter that zaps the dynamic range!)
Via SOS web site
SOS Technical Editor Hugh Robjohns replies: 
The ‘mix tape’ solution is still available of course: you can transfer tracks manually, perhaps using a DAW, adjusting levels as you go. But few of us are probably prepared to invest that kind of time and care these days!
The issue of varying loudness is a serious one, and something that afflicts many different media outlets. The film industry tried to address this issue a while ago with some success, for example, and the broadcast TV industry has recently introduced a new standard that is currently being adopted worldwide and which involves a reliable and standardised way of assessing and quantifying loudness. The broadcast radio industry will follow in a year or two’s time, and these initiatives will finally end the ‘loudness wars’. The standard in question is ITU-R BS.1770, along with various local adaptations such as EBU R128 and ATSC A/85, and legislation such as the CALM Act in the USA.
The metering system that underpins these standards is already widely available and I would urge anyone involved in mixing music or audio production to familiarise themselves with this system as soon as they can. There is no doubt that this is the future.

While the ‘loudness wars’ still rage on (for the time being, at least), you can easily level out your MP3s with iTunes’ surprisingly effective Sound Check feature. That’s assuming you don’t have the patience to adjust the levels of your favourite tracks manually using a DAW, of course!
Returning to your question of software solutions, the easiest option, if you are using iTunes and/or iOS-based players, is to switch on the ‘Sound Check’ function. This doesn’t quite conform to the BS.1770 standard, but it is very close and works extremely well. Essentially, it analyses the loudness of each track in the library and writes metadata into the file header which documents the playback level needed to achieve a perceived loudness equivalent to -16dBFS; this level being chosen to accommodate wide dynamic range material without clipping. Since the Sound Check function is only storing a ‘level offset’ instruction, the actual stored audio data isn’t altered in any way, and the replay process is directly equivalent to you manually adjusting the playback level, so it is entirely non-destructive. The result is that every track ends up with a similar perceived loudness, which is exactly what you want. An alternative that I’ve not tried, but which I’ve seen recommended, is a program called Mp3Gain Pro (www.mp3gain-pro.com). This is capable of batch-processing MP3 files to establish a common loudness across all files, but I believe it does so via a destructive manipulation of the original file data.  
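The principle behind Sound Check can be sketched in a few lines of Python. Note that this uses plain RMS measurement as a crude stand-in for proper loudness metering (real implementations are much closer to BS.1770), and the -16dBFS target is simply the figure mentioned above. The key point is that only a gain offset is computed and stored, so the audio data itself is never altered:

```python
import math

TARGET_DBFS = -16.0  # the approximate level Sound Check normalises towards

def rms_dbfs(samples):
    # Simple RMS level in dBFS: a crude stand-in for BS.1770 metering.
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

def level_offset_db(samples):
    # The playback gain needed for this track to hit the target level.
    # Like Sound Check, this would be stored as metadata only:
    # the stored audio is untouched, so the process is non-destructive.
    return TARGET_DBFS - rms_dbfs(samples)

# A full-scale sine has an RMS level of about -3dBFS,
# so it needs roughly -13dB of playback gain to reach -16dBFS.
tone = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(44100)]
print(round(level_offset_db(tone), 2))
```

A real music track would typically measure well below full scale and receive a much smaller offset, but the mechanism is identical: every file ends up replayed at a similar perceived loudness without any change to its dynamic range.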

Thursday, April 25, 2013


Q. Do balanced connections prevent ground loops?

I’ve carefully wired up my gear using all balanced inputs and outputs, and proper balanced cables, but I’m still getting occasional digital hash in the background. What have I missed?

Even with balanced cables you can sometimes experience ground loops, so here’s the best place to break one without risking RF interference.
Jamie, via email
SOS columnist Martin Walker replies: 
Ground-loop problems can be absolutely infuriating, and I wrote a step-by-step guide to tracking them down back in SOS July 2005 (www.soundonsound.com/sos/jul05/articles/qa0705_1.htm). In essence, you have to temporarily unplug all the cables between your power amp and mixer. If the noises go away, you’ve found the location of your problem. If not, plug them back in and try unplugging whatever gear is plugged into the mixer — and so on down the chain.
The majority of ground-loop problems occur with unbalanced connections, so my next advice would have been to replace the offending unbalanced cable with a balanced or pseudo-balanced version. However, as you’ve found, sometimes such problems occur even in fully balanced setups where you carefully connect balanced outputs of one device to balanced inputs of another via ‘two-core plus screen’ balanced cables.
I recently had just such a problem in my own studio and, to make it even worse, it was an intermittent one, so whenever I got close to discovering its cause, it mysteriously vanished again. Here’s what I did to track it down, so others can try some similar detective work in their own setups.
First of all, you’ve got to be systematic, and note down everything you try, particularly with an intermittent problem, so you don’t have to start from scratch every time it occurs. In my case, I could hear the digital low-level hash through my loudspeakers even with my power-amp level controls turned fully down, and it also persisted when I turned off the D-A converter box feeding my power amp. However, it completely disappeared as soon as I disconnected both cables between the D-A output and power amp input.
These quick tests confirmed that the noise wasn’t coming from the output of the converter, or from the power amp itself, but instead from a ground loop completed when the two were connected. However, just like you, I was already using balanced cables. I double-checked the wiring of both of my XLR balanced cables and there were no errors: the screen of the cable was connected to pin 1 at each end, the red core connected to pin 2 at each end, and the blue (or black) core to pin 3 at each end. So far, so good.
Next, I double-checked with a multimeter that there was no electrical connection between the metalwork of the two devices via my equipment rack (a common source of ground-loop problems, and curable by bolting one of the devices to the rack using insulated washers or ‘Humfrees’). Again, there was no problem.
The best wiring for balanced audio equipment is to tie the cable screen to the metal chassis (right where it enters the chassis) at both ends of the cable, which guarantees the best possible protection from RFI (Radio Frequency Interference). However, this assumes that the interconnected equipment is internally grounded properly, and this is where things can go awry. The cure is to disconnect one end of the cable screen, and the best choice to minimise the possibility of RFI is the input end (as shown in the diagram).
By this time, my intermittent problem had disappeared again, so here’s another tip. I carefully cut the screen wire of one of my two cables just before it arrived at pin 1 of the XLR plug, but left the other cable unmodified. Then, the next time the ground loop problem occurred a few days later I quickly unplugged the unmodified cable, whereupon the noise disappeared immediately. This proved that I’d correctly tracked down the problem, and modifying the other cable in the same way ensured that it never happened again.  


Wednesday, April 24, 2013

Q. What’s the best way to add a subtle vinyl effect?

I’m trying to figure out how I would create a really old-style, warm-sounding distortion/crackle on a string motif for an intro to a song I’m writing. I’ll be using East West Quantum Leap Symphonic Orchestra for the actual string loop, and I want to create a sort of ‘AM radio’ feel for it. That’s easy enough to achieve using various EQ techniques, but I also want to give it a really subtle ’60s record-player crackle — something that’s there if you know what you’re listening for, but not so ‘in your face’ as to sound cheesy or clichéd. I was wondering if there are plug-ins that can do this. I fear I may have to break the bank again...

Here are three plug-ins you could use to add simulated vinyl noise to your audio tracks without breaking the bank: iZotope’s Vinyl (left), Retro Sampling’s Vinyl Dreams (far left), and Steinberg Cubase’s bundled Grungelizer (top).
Via SOS web site
SOS contributor Mike Senior replies: 
There’s no need to break the bank for this, because there are actually a few different freeware plug-ins that provide the kind of thing you’re after. One of the best known is iZotope’s freeware Vinyl plug-in, which is available for both Mac and PC. The advantage of this one is that you get a lot of control over the exact character of the vinyl noise you’re creating: not only can you balance various different mechanical and electrical noises, but you can also choose the decade you want your virtual vinyl to hail from and how your processed audio is affected by disc wear. The downside of this plug-in for me, though, is that it doesn’t seem to output some of its added noises in stereo, irrespective of how I set up the controls, and a lot of the character of vinyl noise, to me, lies in its stereo width. To be fair, though, the ‘dust’ and ‘crackle’ components seem to be stereo, and stereo was, of course, only really in its infancy in the ’60s, so this might not matter to you. Indeed, collapsing the whole signal to mono might be a useful way to ‘date’ the string sound itself. If you’re running Steinberg’s Cubase, the built-in Grungelizer plug-in provides a similar paradigm to the iZotope plug-in, albeit with a simpler control set. However, all the added noises from this plug-in appear to be in mono too.
For stereo vinyl noise, check out the freeware plug-ins from Retro Sampling (www.retrosampling.se). Both Audio Impurities and Vinyl Dreams can overlay vinyl noise, although you only get wet/dry knobs, so you’re stuck with the preset effect. That said, if you set up the plug-ins on a separate channel in your sequencer, you can dramatically adjust their character with EQ to make them seem less obtrusive — a combination of high-cut and low-cut filtering usually works well for me. If you want a smoother vinyl noise (less of the Rice Krispies!), you can also slot in a fast limiter or dedicated transient processor to steamroller spikes in the waveform.
These processing techniques also allow you to get good mileage from the vinyl noise samples that periodically crop up on sample libraries. I’ve been collecting vinyl noise samples for a while, so I can tell you that there are good selections on the Tekniks Ghetto Grooves and Mixtape Toolkit titles, as well as on Spectrasonics’ original Retrofunk collection. I’ve also turned up a good few examples in general-purpose media sound-effects libraries, if you have anything like that to hand.  
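If you’d rather roll your own crackle than rely on a plug-in or sample library, the basic character of vinyl noise (sparse clicks of random amplitude and polarity) is simple to sketch in code. Below is a hypothetical Python/NumPy example; the density, level and smoothing values are assumptions to be tuned by ear, not a recipe from any particular plug-in:

```python
import numpy as np

def vinyl_crackle(duration_s, sr=44100, density=30, seed=0):
    """Generate a sparse 'crackle' bed: random clicks of random amplitude
    and polarity, softened with a one-pole low-pass filter. `density` is
    the average number of clicks per second (an assumption; tune to taste)."""
    rng = np.random.default_rng(seed)
    n = int(duration_s * sr)
    noise = np.zeros(n)
    n_clicks = rng.poisson(density * duration_s)
    positions = rng.integers(0, n, size=n_clicks)
    noise[positions] = rng.uniform(-0.3, 0.3, size=n_clicks)
    # one-pole low-pass to round off the hard edges of each click
    out = np.empty_like(noise)
    acc, a = 0.0, 0.6
    for i, x in enumerate(noise):
        acc = a * acc + (1 - a) * x
        out[i] = acc
    return out
```

Rendered to audio and tucked well below the strings, perhaps with the high-cut/low-cut EQ treatment described earlier, this gives a crackle layer whose character you control completely.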

Joe Meek and Sunset Sound 500-series - Musikmesse 2013

Q. Should I use parallel compression in mastering?

I’m looking for some in-depth education on the subject of parallel compression, with respect to its application in the mastering process. Can you help?
Via SOS web site
SOS Technical Editor Hugh Robjohns replies: 
Parallel compression is a ‘bottom up’ arrangement that lifts the quieter elements in the dynamic range in a relatively gentle and benign way, without crushing top-end dynamics or introducing a dulling effect, which is a side-effect of many compressors.
In essence, the signal to be processed is split, one path feeding the output directly while the other feeds a compressor. The output of the compressor is mixed into the main output along with the direct signal, and this is why it’s called parallel compression. If analogue gear is used for parallel compression, there are usually no timing or phasing problems, but in DAW-based setups there can be, if the plug-in delay compensation isn’t spot-on.
The compressor is normally set up with a relatively modest ratio of 2:1 and the threshold adjusted so that the compressor is providing perhaps 20dB of gain reduction on the loudest peaks. You can then fine-tune the threshold, ratio and output level of the compressor against the direct signal to get the desired effect.
The way it works is that when the signal is quiet, the output comprises both the direct and compressor-path signals. The compressor won’t be doing anything for a quiet signal, so the direct and compressor outputs are going to be roughly the same level. Mixed together, the actual output will therefore be about 6dB louder than the source. For high-level signals, the direct path will be loud, but the compressor will be applying 20dB or so of gain reduction, such that the contribution from its output is relatively small. As a result, the output will be only slightly louder than the original signal.
So quiet signals are made louder, while loud signals aren’t: bottom-up compression. The big advantage is that the louder signals don’t sound congested and squashed as they would with a conventional compressor setup. Of course, it’s vital that the compressor can handle 20dB of gain reduction (or more) without sounding nasty. This shouldn’t be a problem with software, but can be with analogue hardware.
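To make the level arithmetic above concrete, here is a hypothetical sketch of the static gain maths, assuming the two paths are perfectly time-aligned so that their voltages sum coherently:

```python
import math

def parallel_comp_gain(level_db, threshold_db=-20.0, ratio=2.0):
    """Output level (dB) of a parallel-compression bus: the direct signal
    summed with a compressed copy. Coherent voltage summing is assumed
    (i.e. the two paths are time-aligned)."""
    # compressed path: unity below the threshold, gain-reduced above it
    if level_db <= threshold_db:
        comp_db = level_db
    else:
        comp_db = threshold_db + (level_db - threshold_db) / ratio
    direct = 10 ** (level_db / 20)
    comp = 10 ** (comp_db / 20)
    return 20 * math.log10(direct + comp)
```

With a -20dB threshold and a 2:1 ratio, a -40dB input emerges about 6dB louder (the two equal paths sum), whereas a 0dB input is lifted by only around 2dB: bottom-up compression in numbers.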
One other risk, because of the summing of the two parallel paths, is phasing. The solution is to place a short delay in the direct signal path and adjust to remove the phasing. If the parallel compression is being done in a DAW, the delay will need to be a handful of samples. If you’re using external hardware, it could be a couple of milliseconds.
It’s often easiest to calibrate matching delays using sine tones and a (temporary) polarity inversion in the direct channel. Inject a tone (of any frequency) with opposite polarities in the two paths, and the combination should cancel out completely if the delays in each path are identical. If they aren’t identical, only a partial cancellation will result. However, when you’re using a pure tone to match the direct-path delay to the processing delay in the compression path, there’s a danger that you could delay the signal too far and still get a perfect cancellation, because the delay could be introducing 360 degrees of phase shift instead of zero. The way to avoid this is to start with a very low frequency (whose long wavelength would need a huge delay to produce a 360-degree shift), adjust the compensating delay for maximum cancellation, and then increase the frequency in stages, fine-tuning the cancellation as you go. That way you can’t accidentally end up 360 degrees out.
Start with a sine wave at a low frequency and adjust the delay to obtain the maximum null (silence). Then increase the frequency as you focus in on the correct delay time. The higher the frequency, the more accurate the matching delay needs to be to maintain the quietest or deepest null. Once you get up to about 15kHz with a very deep null, switch off the tone, restore the polarity inversion and enjoy working with your time-aligned parallel paths. This is an old technique that was used to align tape-head azimuths, where the same potential error problems existed.  
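The false-null risk is easy to demonstrate numerically. In this hypothetical sketch, a tone is summed with a polarity-inverted copy arriving `delay_samples` late; the residual vanishes both at a true time match and at a whole-cycle offset, which is precisely the false match that starting at a low frequency rules out:

```python
import numpy as np

def null_depth(freq, delay_samples, sr=48000, n=48000):
    """RMS residual of a sine tone summed with a polarity-inverted copy
    that is `delay_samples` late. A perfect time match nulls completely,
    but so does a whole-cycle (360-degree) offset at this one frequency."""
    t = np.arange(n) / sr
    a = np.sin(2 * np.pi * freq * t)
    b = -np.sin(2 * np.pi * freq * (t - delay_samples / sr))
    return np.sqrt(np.mean((a + b) ** 2))
```

At 100Hz and 48kHz, a zero-sample offset nulls, but so does a 480-sample offset (one full cycle), while a half-cycle offset leaves the loudest possible residual; only by then raising the frequency do the true and false matches separate.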

Tuesday, April 23, 2013

Softube Console 1 - Musikmesse 2013

Q. How can I achieve a ‘dry’ sound?

I record and mix in my ‘studio’, which isn’t too great acoustically. I can manage somehow when mixing, by working on headphones and doing lots of cross-referencing, but the problem is that when it comes to recording I really hate the room sound on my vocals, and most of all on acoustic guitars, which I use a lot. The reverb tail is pretty short, but I’m still having a hard time getting a nice dry sound on my guitars, because I can’t record dry! I know that the obvious solution is to treat the room, but the truth of the matter is that I can’t do much better than this for now. So is there any way to treat a ‘roomy’ sound (on vocals and guitar) to make it sound drier? I know it is very difficult, or maybe impossible, especially for acoustic guitars, but any kind of suggestion, even for small improvements, would be very welcome.

A high-resolution spectrum analyser such as Schwa’s Schope lets you quickly and precisely home in on specific resonant frequencies that may be responsible for a coloured or uneven sound.
Via SOS web site
SOS contributor Mike Senior replies: 
Given that the reverb doesn’t have a ‘tail’ as such, I reckon it’s the reverb tone that’s the biggest problem, so trying to use some kind of gating or expansion to remove it is unlikely to yield a useful improvement. You could help minimise the ambient sound pickup by using a directional mic for both vocals and guitar and keeping a fairly close placement. For vocals, very close miking is pretty commonplace, but for acoustic guitar you might want to experiment with using an XY pair of mics instead of a single cardioid, to avoid ‘spotlighting’ one small area of the guitar too much. That setup will usually give you a more balanced sound because its horizontal pickup is wider than a single cardioid on its own. In all but the smallest rooms, it’s usually possible to get a respectable dry vocal sound just by hanging a couple of duvets behind the singer, and as I suspect you’ve already tried this fairly common trick, my guess is that room resonances are actually the biggest problem, rather than simple early reflections per se. Duvets are quite effective for mid-range and high frequencies, but aren’t too good at dealing with the lower-frequency reflections that give rise to room resonances.
So given that room resonance is likely to be the problem, what can you do about it? Well, if you’ve no budget for acoustic treatment, I’d seriously consider doing your overdubs in a different room, if there’s one available. If you’re recording on a laptop, or have a portable recorder, maybe you can use that to record on location somewhere if you’re confined to just the one room at home. I used to do this kind of thing a lot when I first started doing home recordings, carting around a mic, some headphones and a portable multitrack machine to wherever was available.
Part of what the room resonances will be doing is putting scary peaks and troughs into the lower mid-range of your recorded frequency response, but the exact frequency balance you get will depend on exactly where your player and microphone are located in relation to the dimensions of the room, so a bit of determined experimentation in this respect might yield a more suitable sound, if not quite an uncoloured one. You might find that encouraging a few more high-frequency early reflections with a couple of judiciously placed plywood boards also improves the recorded room sound a little. A lot of domestic environments can have a bit too much high-frequency absorption, on account of carpets, curtains, and soft furnishings.
After recording, you could also get busy with some narrow EQ peaks in the 100-500Hz range, to try to flatten any obvious frequency anomalies. One thing to listen for in particular is any notes that seem to boom out more than others: a very narrow notch EQ aimed precisely at that note’s fundamental frequency will probably help even things out. You can find these frequencies by ear in time-honoured fashion by sweeping an EQ boost around, but in my experience a good spectrum analyser like Schwa’s Schope plug-in will let you achieve a better result in a fraction of the time. However, while EQ may address some of the frequency-domain issues of the room sound, it won’t stop resonant frequencies from sustaining longer, which is just as much part of the problem, and there’s no processing I know of that will deal with that.
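To aim such a notch, it helps that each note’s fundamental is fixed by equal temperament. The following is a hypothetical sketch using the widely published RBJ ‘Audio EQ Cookbook’ biquad notch; the Q of 30 is just an assumed starting point for a resonance-taming filter:

```python
import cmath
import math

def note_to_hz(midi_note):
    # equal temperament, A4 (MIDI note 69) = 440 Hz
    return 440.0 * 2 ** ((midi_note - 69) / 12)

def notch_coeffs(f0, fs=44100.0, q=30.0):
    """Biquad notch coefficients (RBJ cookbook form), normalised to a0 = 1."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    a0 = 1 + alpha
    b = [1 / a0, -2 * math.cos(w0) / a0, 1 / a0]
    a = [1.0, -2 * math.cos(w0) / a0, (1 - alpha) / a0]
    return b, a

def response(b, a, f, fs=44100.0):
    """Magnitude of the biquad's response at frequency f."""
    z = cmath.exp(-2j * math.pi * f / fs)
    num = b[0] + b[1] * z + b[2] * z * z
    den = a[0] + a[1] * z + a[2] * z * z
    return abs(num / den)

# e.g. if the open A string (A2, MIDI note 45, 110Hz) booms out:
b, a = notch_coeffs(note_to_hz(45))
```

The cut is essentially total at the target frequency but leaves notes even a semitone or two away almost untouched, which is exactly the behaviour you want when taming a single booming note.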
For my money, this is the kind of situation where you can spend ages fannying around with complicated processing to achieve only a moderate improvement, whereas nine times out of 10 you’ll get better results much more quickly by just re-recording the part.  

Korg Volca series - Musikmesse 2013

Monday, April 22, 2013

Q. What exactly is ‘headroom’ and why is it important?

I’m a synth guy getting more and more into recording and mixing my own tunes. One thing that stumps me is the issue of ‘headroom’: for example, in the case of my Focusrite Saffire Pro 26 I/O, the manual says that using the PSU rather than Firewire bus power yields 6dB of additional headroom in the preamps. I assume that this is a good thing, but how so? What is headroom and why do I want more of it? How do I know it’s there (or not there), and how can I take advantage of it?
Via SOS web site
SOS Technical Editor Hugh Robjohns replies: 
These are all good questions. Every audio-passing system (analogue or digital) has two limits: at the quiet end there is the noise floor, normally a constant background hiss into which signals can be faded until they become inaudible; and at the loud end there is clipping, the point where the system can no longer accommodate an increase in signal level and gross distortion results. The latter is generally due to the signal level approaching the power supply voltage levels in analogue systems, or the coding format running out of numbers to count more quantising levels in digital systems.
Obviously, we need to keep the signal level somewhere between these two extremes to maximise quality: somewhere well above the noise floor but comfortably below the clipping point. In analogue systems, this is made practical and simple by defining a nominal working level and encouraging people to stick to that by scaling the meters in a suitable way. For example, VU meters are scaled so that 0VU usually equates to +4dBu. The clipping point in professional analogue gear is typically around +24dBu, so around 20dB higher than the nominal level indicated on the VU meter.
That 20dB of available (but ideally unused) dynamic-range space is called the headroom, or is referred to as the headroom margin. It provides a buffer zone to accommodate unexpected transients or loud sounds without risking clipping. It’s worth noting that no analogue metering system displays much of the headroom margin. Rather, it’s an ‘unseen’ safety region that is easy to overlook and take for granted. In most digital systems, the metering tends to show the entire headroom margin, because the meter is scaled downwards from the clipping point at 0dBFS. The top 20dB or so of a digital scale is showing the headroom margin that is typically invisible on the meters of analogue systems. As a result, many people feel they are ‘under-recording’ on digital systems if they don’t peak their signals well up the scale, when in fact they are actually over-recording and at far greater risk of transient distortion.
Your interface offers greater headroom when operating from its external power supply because the PSU provides a higher-voltage power rail than is possible when the unit is running from Firewire bus power. A higher supply voltage means that a larger signal voltage can be accommodated; in this case, twice as large, hence the 6dB greater headroom margin. More headroom means you have to worry less about transient peaks causing clipping distortion, and generally translates to a more open and natural sound, so it’s a good thing.
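The figures above are easy to verify with a little dB arithmetic. A hypothetical sketch (0dBu is referenced to 0.775V RMS; the +4dBu nominal and +24dBu clip levels are the typical values quoted above):

```python
import math

def dbu_to_volts(dbu):
    # 0 dBu is referenced to 0.775 V RMS
    return 0.775 * 10 ** (dbu / 20)

def headroom_db(clip_dbu, nominal_dbu=4.0):
    # headroom margin: clipping point minus nominal working level
    return clip_dbu - nominal_dbu

# typical professional analogue gear: +24 dBu clip over a +4 dBu nominal
margin = headroom_db(24.0)
# doubling the maximum signal voltage adds 20*log10(2), i.e. about 6 dB
extra = 20 * math.log10(2)
```

The 20dB margin and the roughly 6dB gained from a doubled rail voltage both drop straight out of the same 20·log10 relationship.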

Korg Kross Music Workstation -- Video Manual part 5 of 5 -- Global & Media Mode

Q. What’s the best way to organise samples and effects?

If I buy a sample library, I usually drop its contents into the ‘Sample Library’ folder on my hard drive, but that’s ended up as rather a mess, and I don’t know how best to organise it. Where should I start?

Even when the timing of a vocal double-track matches that of the lead vocal quite closely, the two parts still won’t actually have identical waveforms (as you can see here), so phase-cancellation between them is rarely a problem in practice, unless you artificially tighten things up too far with audio editing or pitch-correction tools.
Chris, via email
PC Notes columnist Martin Walker replies: 
There are three main aspects of this subject to consider: location, performance and organisation. Let’s discuss each one in turn.
First, given the large size of many of today’s sample libraries, it makes sense to keep them all grouped together. However, don’t dump them all in the same hard-drive partition as your operating system and applications, as this partition will end up many tens of gigabytes in size, and then you’re less likely to back it up regularly, which is asking for trouble. It’s far safer to store sample libraries on a different partition or drive.
This approach can also help with the second aspect, performance. Even if your samples are loaded into RAM in their entirety, keeping them together on a well-defragmented partition will minimise loading times compared with having them scattered all over the place among the OS and applications on a single huge drive. Moreover, many samplers now stream audio data in ‘real time’ from the hard drive, so storing them in one place avoids the drive read/write heads having to work harder darting about all over the place, potentially limiting the maximum polyphony you can achieve.
So musicians should ideally store all their sample libraries on one separate drive or partition, but if you need polyphony greater than a couple of hundred simultaneous voices, it’s probably worth splitting them across two or more dedicated sample drives. This is particularly true if you’re using huge orchestral sample libraries, since you can dedicate each drive to a different section of the orchestra, and they will share the streaming load, allowing greater polyphony overall.
When it comes to the organisation of your own personal sample collection, ultimately the most important aspect (as with any filing system) is that you can find what you’re looking for as quickly and efficiently as possible, so you can continue the creative process rather than getting frustrated trying to track down a particular sound. How you do this is very much a personal thing, and also depends on how big your sample collection is. If, for instance, your music uses lots of individual drum hits, it makes sense to start with a folder named Drums, and within that create subfolders for Kicks, Snares, Hi-hats, Toms, Cymbals, and so on, since this is the thought process you’re likely to be having when you’re searching for drum sounds. If this still leaves you with many dozens of samples within each subfolder, divide each existing folder into further sub-categories, such as Acoustic/Electronic, Hard/Soft or Dry/WithFX, and keep refining your scheme until you feel that each folder contains a manageable number of files. Similarly, instruments can be sorted by genre (rock, jazz, metal and so on), acoustic/electronic characteristic, or according to their timbre, while Drum Loops are probably best grouped in folders sorted by tempo, and then subdivided by genre.
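A scheme like this maps directly onto a folder tree, and you could even script its creation. A hypothetical Python sketch, using the example category names from the text (the tempo subfolders are illustrative assumptions):

```python
from pathlib import Path

# one possible category tree, following the examples in the text
SCHEME = {
    "Drums": ["Kicks", "Snares", "Hi-hats", "Toms", "Cymbals"],
    "Instruments": ["Rock", "Jazz", "Metal"],
    "Drum Loops": ["90bpm", "100bpm", "120bpm"],
}

def build_library(root):
    """Create the nested sample-library folders under `root` and return
    the resulting directory paths relative to it."""
    root = Path(root)
    for category, subs in SCHEME.items():
        for sub in subs:
            (root / category / sub).mkdir(parents=True, exist_ok=True)
    return sorted(p.relative_to(root).as_posix()
                  for p in root.rglob("*") if p.is_dir())
```

Because `exist_ok=True` makes the script idempotent, you can safely re-run it as your scheme evolves, and it doubles as a written record of your filing conventions.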
Such a scheme of organisation should work well for standard sample libraries, but many of the modern ones intended for specific software samplers, such as Logic’s EXS24, Gigastudio and NI’s Kontakt, are already highly organised by the developer into subfolders. I’ve reviewed such libraries, which contain hundreds or even thousands of individual files sorted into stereo/surround and high/low CPU versions, as well as sound categories. Here you’re entering dangerous territory, since each preset may use several dozen associated samples, plus impulse responses for added reverb. If you start shuffling files, you risk getting ‘missing sample’ error messages. With this type of library, I tend to leave well alone.
See the latest PC Notes column on page 150 of this issue for another idea to help you to navigate the sample and sound files on your hard drive.  

Saturday, April 20, 2013

Korg Kross Music Workstation -- Video Manual part 4 of 5 -- Audio In & Audio Recording

Q. Is phasing affecting the sound of my double-tracked vocals?

I’ve been reading about how you have to be quite precise in matching the distance from source to mic when multi-miking guitar cabinets, and something occurred to me. If this kind of phase alignment is so important in this instance, how can we avoid such issues when double-tracking a vocal, given that the singer inevitably moves their head around? The singer in question here is me, and I tend to move around a fair bit when singing! I’ve noticed when lining up and trimming my doubled vocals in the past (and on my current song) that some words sound ‘different’ when combined than others, and by different I mean ‘worse’. Could phasing be the underlying cause, and if so, is there anything I can do to rectify this?

Sorting your sample library into nested folders is an excellent way to help you find what you’re looking for more quickly, but some software samplers (like NI’s Kontakt 4, shown here) already provide extensive database ‘tagging’ systems for just that purpose.
Via SOS web site
SOS contributor Mike Senior replies: 
Yes, if you double-track very closely, you’ll inevitably get some phase-cancellation between the two layers, but that’s not a problem; it’s an inherent part of what makes double-tracking sound the way it does. However, the potential for phase cancellation between the parts won’t be nearly on the same scale as with the two signals of a multi-miked guitar amp, because, firstly, the waveforms of two different vocal performances will never match anywhere near as closely; and, secondly, the phase relationship between the performances will change from moment to moment, especially if you’re moving around while singing. Furthermore, in practice a vocal double-track often works best when it’s lower in level than the lead, in which case any phase-cancellation artifacts will be much less pronounced.
For these reasons, nasty tonal changes from double-tracking haven’t ever really presented a major problem for me, and if they’re regularly causing you problems, I suspect you might be trying to match the layers too closely at the editing stage. Try leaving a little more leeway for the timing and see if that helps for a start — just make sure that the double-track doesn’t anticipate the lead if you don’t want it to draw undue attention to itself. Similarly, try to keep pitch-correction as minimal as you can (especially anything that flattens out the shorter-term pitch variations), because that will also tend to match the exact frequency of the two different waveforms. In fact, if there are any notes that sound really phasey to you, you might even consider shifting one of the voices a few cents out of tune to see if that helps. Anything you can do to make the double-track sound less similar to the lead can also help, whether that means using a different singer (think Lennon and McCartney), a different mic, or a different EQ setting. You may only need the high frequencies to provide the double-tracking effect, and these are unlikely to phase as badly as the low frequencies.
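For the curious, the worst-case tonal effect of summing two takes can be quantified by treating the double as an exact copy of the lead at a fixed delay, a worst case that real performances never actually reach. A hypothetical sketch:

```python
import math

def comb_gain_db(freq, delay_ms, mix=1.0):
    """Level change (dB) at `freq` when a copy delayed by `delay_ms` is
    summed with the original at relative gain `mix`. Identical waveforms
    are assumed: the worst case, which real double-tracks never reach."""
    phase = 2 * math.pi * freq * delay_ms / 1000.0
    # magnitude of 1 + mix * e^(-j*phase)
    mag = math.sqrt((1 + mix * math.cos(phase)) ** 2
                    + (mix * math.sin(phase)) ** 2)
    return 20 * math.log10(mag)
```

With a 1ms offset and the double at half gain (6dB down), the deepest possible dip at any frequency is only about 6dB, whereas an equal-level exact copy would null completely: that is the arithmetic behind keeping the double lower in level than the lead.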

Friday, April 19, 2013

Korg Kross Music Workstation -- Video Manual part 3 of 5 -- Sequencer Mode & Effects

Q. How can I create the sound of a crowd?

I’m working on a song and have all the parts nailed, but I think the outro chorus is lacking something, so I’ve decided to try giving it the feel of a large crowd singing the outro, with me singing some lead over the top. I can most compare the feel I’m trying to achieve to the tracks ‘Dungeness’ and ‘You Know’ by Athlete. I’ve tried several overdubs of my own voice, and a few of my mates have given it a go too, but it’s still not sounding right. Is it a case of literally squeezing a crowd into my living room and recording them all at once, or should I use a multitude of different tones/pitches/styles from fewer voices?

If you’re trying to get that convincing crowd sound, you should never sing alone...
Photo: Vincent Teeuwen
Via SOS web site
SOS contributor Mike Senior replies: 
If you want this kind of crowd sound, you’ll get the best results if you use as many different people as possible. Overdubbing just a couple of people multiple times is very time-consuming and is unlikely to sound that convincing. Much better to get a half-dozen people in a room and record them all at once. You’ll get more voices in less time, and the result will sound more convincingly crowd-like because of the variations between the performers’ voices.
Even with a larger handful of people, you’ll still probably want to layer up a few takes to fill things out a bit, spreading them out to some extent across the stereo spectrum when you mix. If you can slightly rearrange the positioning of the performers between takes, that will also introduce a bit more variety, and you might consider changing mics, too. In case you’ve not already spotted it, I noticed that those Athlete songs include lower harmonies as well, which thicken the texture, so if you don’t have anything like that in your song, you might want to think something up.
One practical problem you’ll have to deal with, though, is delivering a cue mix to the performers, as I’m guessing that you may not have enough headphones and headphone amplifiers to give each performer their own foldback. One solution would involve first routining the parts in the control room until the performers are comfortable with what they’re doing. In any group of singers, you’ll find that there are one or two who lead, while the others follow, so when the time comes to record, give your available headphones to the leaders and instruct the rest of the group to follow them. As likely as not, everyone will be able to hear a little headphone spill as well, which will help timing, but if it’s still a problem, get some cans on yourself and beat time in the live room.
This setup can work if your singers are fairly confident (or amply refreshed!), but the most common drawback with too few headphones is that the singers without them will feel a bit exposed without a cue mix and hence perform a bit tentatively. If this proves to be a problem, the alternative would be to use speaker-based monitoring in the live room while recording. The difficulty there, however, is monitor spill, and although you can put the speaker in the null of a directional mic to reduce its pickup (a figure-of-eight mic will work best here), you’ll inevitably find some of the cue mix leaking into the background of your takes. This has two ramifications: first, you need to make sure that the arrangement of your backing track doesn’t change significantly after the crowd overdubbing sessions, otherwise the spill may produce an unwanted ‘ghost’ of any parts that have later been removed; and second, you’ll need to work with the miking distance and the monitoring level to keep the spill level within reasonable limits. Given that there’s no avoiding the spill, I’d also recommend recording for long enough on either side of the vocal parts that you have some freedom to decide exactly where to fade the spill in and out at the mixdown stage. It may sound odd if the spill cuts out abruptly at the end of the last phrase, for example, rather than waiting until a song-section boundary.
Whether monitor spill is an issue or not, I reckon you’re probably better off trying to catch the sound as dry as possible, as most small-room sounds are unlikely to aid the effect you’re after. This leaves you more flexibility to simulate a larger, more crowd-pleasing acoustic artificially. As to what effects to use, a lot of people would instinctively reach for reverb, but I think you’ll probably get much closer to the sound you’re after if you rely more on slapback delay. Try delay times in the region of 100ms. If you’re after a slightly more aggressive tone, you might consider sending the delay’s output through a guitar amp modeller as well. Usually I find that a decent slapback does enough that you can then use reverb just for some subtler blending or to sketch in an impression of a large room size, both of which roles can actually be filled by an effect with a fairly quick decay, to avoid cluttering the mix.  
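As a sketch of what a slapback patch actually does, here is a hypothetical single-echo delay: one repeat, no feedback, which is what distinguishes slapback from a regular echo. The 100ms default follows the suggestion above; the 50 percent wet level is an assumption to tune by ear:

```python
def slapback(samples, sr=44100, delay_ms=100.0, level=0.5):
    """Mix one delayed copy (no feedback) back in with the dry signal.
    `level` is the wet gain, an assumption to be tuned by ear."""
    d = int(sr * delay_ms / 1000.0)
    # extend the buffer so the echo's tail isn't truncated
    out = list(samples) + [0.0] * d
    for i, x in enumerate(samples):
        out[i + d] += level * x
    return out
```

In a real session you’d set this up as a send effect, possibly feeding the delay return through an amp modeller as described above, but the signal flow is exactly this: dry plus one late copy.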

Korg Kross Music Workstation -- Video Manual part 2 of 5 -- Programs & Combinations

Thursday, April 18, 2013

Q. How do I know a mic is worth the money?

What differences can you hear when comparing inexpensive and expensive equipment? As I do a lot of vocal recording, I’d like to splash out on a really good microphone. But how can I be sure that an expensive microphone is worth the money? What am I listening for?
Sarah Betts, via email

Fidelity and accuracy are expensive qualities to build into a microphone, so those are the areas that will generally improve as you increase your budget. However, this doesn’t necessarily mean that your voice will sound better through a more expensive mic; it’s more important that you find the right mic to suit you. Bono, for example, famously favours the inexpensive Shure SM58 over high-end alternatives.
SOS Technical Editor Hugh Robjohns replies: 
The benefits extend far wider than just the sound, but basically you’re listening for an improvement over your current mic, and you then need to decide if the price justifies that improvement, bearing in mind the law of diminishing returns. Going from a very low-budget mic to a mid-range mic will usually bring about very obvious sound improvements. Going from there to a high-end model will bring smaller improvements, which may not always be obvious. And going from there to a mic worth several thousand dollars will bring smaller benefits still. Some people will believe the improvements are worth the expense, others won’t!
However, you’ll know immediately and quite instinctively when you find a mic that is well suited to your voice, and that doesn’t always mean the mic needs to be expensive. If you’re looking for a general-purpose mic, expensive usually equates to increased flexibility in use. But if it’s a mic that will always be used on your voice and nothing else, finding a mic that suits your voice is the prime directive.
Sonic fidelity or accuracy is generally an expensive thing to engineer into a microphone, and the most expensive mics are generally pretty accurate. But recording vocals is rarely about accuracy. It’s more to do with flattery, and different voices need to be flattered in different ways. When working with a new vocalist, I’ll usually try a range of mics to see which one works best with their voice. Sometimes the most expensive mic gives the best results, but it’s equally likely that it will be a less expensive model. U2’s Bono famously records his vocals using a Shure SM58, and he seems happy with the results!
But, as I said, there’s more to an expensive mic than just the sound. More expensive mics tend to be built to higher standards. They tend to include internal shock-mounting for the capsule, to reduce handling noise. They are thoroughly tested to comply with the design specifications and provide consistent results. Being better constructed, they tend to have longer working lives and can be maintained by the manufacturer relatively easily. They also generally deliver a very usable (although that might not necessarily equate to ‘the best’) sound whatever the source, without needing much EQ to cut through in the mix.
Less expensive mics often sound great on some things but terrible on others, and frequently need a lot of EQ to extract a reasonable sound within a mix. They also tend to be less well manufactured, which reduces their working life expectancy, and once broken they can rarely be repaired.

Korg Kross Music Workstation -- Video Manual Part 1 of 5- Introduction & Navigation

Q. Will my gear be affected by freezing-cold conditions?

My studio gear is currently set up inside my garage, and lately it has been freezing cold inside. Though there is no surface moisture on the gear (so I am assuming that nothing is condensing), I am worried that my gear could be affected by sub-zero temperatures overnight.

Walking into a very cold studio is never very inspiring, especially if the change in temperature once the room starts to warm up could damage your equipment. Investing in a heater to keep the studio at a reasonable temperature during the winter could be a very wise move.
Via SOS web site
SOS Technical Editor Hugh Robjohns replies: 
If you look at the specifications for any piece of electronic equipment (normally printed in the handbook or available on-line), you will usually see a specification for the acceptable storage and operating temperature ranges (and sometimes a figure for acceptable humidity too). The range of temperatures in which a product can be stored is usually significantly wider than that in which it can be operated, and both are generally wider than the range of temperatures typically experienced in the UK.
So the short answer is that for most people it is unlikely that their equipment will suffer damage overnight just because of cold temperatures. Be aware, however, that a lot of plastics do become significantly stiffer or more brittle in the cold, so cables will be less flexible and plastic components are more likely to break. This is more likely to be an issue with tape and video recorders, or other machines with moving parts, than with computers and mixers, but worth bearing in mind all the same.
Humidity is usually a more serious problem, though, and you are right to be more concerned about that. Condensation forms when warm, humid air comes into contact with something much colder, taking that air below the dew point. I wouldn’t expect to see condensation on the equipment in the morning, because both air and equipment will be at the same temperature. The problem will come when the air in the room starts to heat up (because of the heat from your body, the room lighting and any equipment you switch on) but the equipment remains cold, initially at least. The condensation that forms can cause all manner of electrical problems, ranging from potentially very serious short-circuits at one extreme to annoying intermittent computer glitches at the other, as well as mechanical problems such as rust and corrosion.
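If you want to put numbers on the condensation risk, the dew point can be estimated from temperature and relative humidity. This hypothetical sketch uses the Magnus approximation; the coefficients are one common parameterisation, accurate to a fraction of a degree over ordinary room conditions:

```python
import math

def dew_point_c(temp_c, rel_humidity_pct):
    """Approximate dew point (deg C) via the Magnus formula. The
    coefficients b=17.62, c=243.12 are one common parameterisation."""
    b, c = 17.62, 243.12
    gamma = math.log(rel_humidity_pct / 100.0) + b * temp_c / (c + temp_c)
    return c * gamma / (b - gamma)
```

At 20 degrees Celsius and 50 percent relative humidity, for example, the dew point comes out at around 9 degrees, so any equipment colder than that will attract condensation as the room warms up.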
The best approach is to keep the room comfortably above the dew point by having some form of safe low-level background heating overnight. A night storage heater or an oil-filled electric radiator is probably the best solution — and it’s always more pleasant and inspiring to walk into a studio that has some residual warmth than trying to become motivated in a freezing-cold room!  

Wednesday, April 17, 2013

The Korg MS-20 Mini - Patch Examples Part 2

Q. What do I need to start recording my own music?

As an acoustic musician, I’d like to start learning more about recording and mixing my own material. So far I have no equipment of my own and a budget limited to a few hundred pounds. What are the absolute basics that I’d need to do some vocal or guitar recording at home? As I currently own a PC, should I be thinking of extending my budget and moving to Mac instead?
Chris Simpson, via email

For novice recordists, the cost of a setup can be kept low by home-made solutions like a ‘coat-hanger and nylon stocking’ pop-shield, and careful gear choices such as the Focusrite Saffire 6 (below), a good first interface for beginners.
SOS contributor Mike Senior replies: 
The good news is that a starter setup that will deliver respectable vocal and acoustic guitar recordings needn’t set you back a tremendous amount of cash, especially if you already have a fairly modern PC. However, there are a lot of options available to you and it makes sense to find equipment that will remain useful to you if and when you expand the setup later on.
First off, you’ll need a mic — the sound of a DI’d acoustic guitar doesn’t usually cut the mustard in the studio, and most singers don’t have a DI socket at all. (Plain selfish of them, if you ask me, but there you go.) A good first choice would be a large-diaphragm condenser mic, and fortunately market forces have squished the prices of these in recent years, so there are some good deals to be had. Out of choice I’d tend to gravitate towards established manufacturers with a history of R&D, and I’d look for something with three polar patterns: omni and figure-of-eight patterns tend to sound clearer on budget mics and will also make the mic more future-proof. A couple of recent mics that fit these criteria would be the Audio-Technica AT2050 and AKG Perception 420 (retail prices are between £219 and £279 in the UK, but both are currently well under £200 on the street), and each has a decent shockmount included, which is helpful for keeping your recordings clean.
If the mic is primarily going to be for your own voice, see if you can try out a couple of contenders before you buy. Budget mics can be quite coloured-sounding, and this can either work for you or against you, depending on whether that colour suits your unique voice. When auditioning, pay particular attention to ‘S’ sounds, as these quickly highlight high-frequency harshness, something cheap condensers can be prone to and which causes problems with both vocals and acoustic guitars.
Along with the mic, you’ll need a stand and an XLR signal cable. The UK’s Studiospares do a good basic studio stand, and they also stock spare bits for it, which should help extend its working life. Their leads are good value too, and I’d recommend their five-metre mic lead, as it has solid Neutrik connectors that can be re-soldered if the lead needs repairing. (For my money, cheaper leads with moulded connectors are a false economy because they can be difficult to repair.) You’ll probably need a pop shield for vocal recording too, and although you could also buy one of those from Studiospares, a bit of nylon stocking stretched over an old wire coat hanger should be perfectly up to that task at this stage.
As far as your budget goes, then, you’re looking at maybe a couple of hundred pounds for that lot in the UK, if you shop around, which does seem like a big chunk of your change gone already. However, that reflects the fact that the mic is the most important thing in the setup — it’s what actually captures the sound, after all! Your next most important piece of gear will be what you listen back to your recordings on. Given the budget and your likely monitoring environment, I think there’s little point in investing in studio speakers at the moment, so try to get hold of a decent pair of headphones instead — probably a closed-back pair that can also be used for overdubbing without spill becoming problematic. We did a big round-up of the main headphone contenders back in January 2010 if you want to read a range of views, but my tip would be the AKG K240 MkII, which is an excellent monitoring option and, although it’s semi-open-backed, it still seems to deliver low enough spill levels for most overdubbing purposes.
If you’ve already got a PC, there’s little advantage to be had in changing to a Mac just for recording purposes at this stage. Neither platform should hold you back at all. What you will need, though, is an audio interface to get sound in and out of the computer, and some software with which to record. The interface will need to have at least one phantom-powered preamp for your mic and an output for your headphones, but there’s a lot of choice here and I’d look for something that has both a second mic input and a dedicated instrument input socket. The Focusrite Saffire 6 USB, M-Audio Fast Track Pro and Presonus Audiobox all offer these features. They all also include free software bundles, including a ‘Lite’ version of either Steinberg’s Cubase or Ableton’s Live recording application. The new Alesis Multimix 4 USB is even cheaper, but doesn’t appear to offer any kind of software bundle. For my money, a Cockos Reaper license (which you may have seen me using in the Mix Rescue column) is a steal at $60 and knocks any ‘Lite’ software version into a cocked hat as far as recording and mixing are concerned.
According to the back of my envelope, that lot should set you back a few hundred pounds. Not a lot when you consider that a good engineer could probably produce a commercial record with nothing else!  

MESSE13: Korg Kross First Look

Tuesday, April 16, 2013

Q. Do I really need to use dithering?

I have been working on an album and am in the final stage of exporting the tracks to WAV, which will then be burned to CD. All the tracks are in 24-bit audio, but I know that for them to meet the audio CD standard they should be 16-bit.
To increase the volume of a track, I put the Oxford Inflator plug-in on the master out, which tends to work well. The problem is that when I put a dithering plug-in (from Cubase SX3) at the end of the chain (after the Oxford Inflator), the output starts clipping. However, when I just use the Inflator on its own, everything works fine. My question is: do I need to use dithering, or can I just export the master to 16-bit without it?
Via SOS web site
SOS Technical Editor Hugh Robjohns replies: 
You must use dithering because you are reducing the word length from 24 to 16 bits. If you don’t dither, you will end up with unwanted truncation distortions, and although they may not be obvious to everyone during a track, they may well become very obvious during any fade-outs or fade-ins.
I suspect the clipping you're getting is happening because you have set Inflator to raise the peak level of your tracks to the maximum of 0dBFS.
Simple triangular dither (which is what I suspect you are using in that Cubase plug-in) adds a low-level broadband noise signal, equivalent to the 16th bit level, to the output of Inflator (it adds noise at about -93dBFS). The result is that where the signal is already hitting 0dBFS from the Inflator process, the added dither noise will be just enough to push it over the top into clipping.
There are two solutions. The first is to adjust Inflator so that it raises the peaks to something a little less than 0dBFS. Something like -0.5dBFS should work better, leaving just enough room for the dither noise without clipping. You may need to experiment a little, but something in the range -0.3 to -1dBFS will cure the problem.
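The interaction between TPDF dither and headroom described above can be demonstrated numerically. This is a minimal sketch, not the behaviour of the actual plug-ins mentioned: the function name, sample rate and 1kHz test tone are all illustrative, and float samples are assumed normalised to ±1.0.

```python
import numpy as np

def to_16bit(x, rng):
    """Quantise float samples (full scale = +/-1.0) to 16 bits with TPDF dither."""
    lsb = 1.0 / 32768.0
    # TPDF (triangular) dither: sum of two uniform sources, peak +/-1 LSB
    d = (rng.uniform(-0.5, 0.5, x.shape) + rng.uniform(-0.5, 0.5, x.shape)) * lsb
    q = np.round((x + d) / lsb)
    clipped = bool(np.any(q > 32767))          # did dither push any peak over the top?
    return np.clip(q, -32768, 32767).astype(np.int16), clipped

rng = np.random.default_rng(0)
t = np.arange(48000) / 48000.0
tone = np.sin(2 * np.pi * 1000.0 * t) * (32767 / 32768)  # peaks at the top 16-bit code

_, clip_hot = to_16bit(tone, rng)                        # peaks at full scale: dither clips
_, clip_safe = to_16bit(tone * 10 ** (-0.5 / 20), rng)   # ~0.5dB headroom absorbs the dither
```

With the signal already at the highest 16-bit code, the added dither noise tips some peaks into clipping; pulling the level back by half a decibel leaves room for it, just as described above.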
The other solution would be to use a more sophisticated form of dither noise that has been ‘noise shaped’. These dither variations reduce the level of dither noise across the lower half of the audio spectrum (where most of the musical signal energy is) and instead put more dither noise energy up at the higher frequencies, where there tends to be little musical energy and therefore more headroom available to accommodate the dither noise (and where our ears are less sensitive to the noise anyway).
Noise-shaped dither subjectively sounds quieter than simple triangular dither because it takes advantage of the ear’s non-linear frequency response to low-level sounds, although the total dither noise energy remains exactly the same for both forms, and that’s the critical aspect as far as proper dithering is concerned.
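The shaping idea can be illustrated with a generic first-order error-feedback quantiser, the textbook technique underlying such systems. To be clear, this is my own sketch and not the actual POW-R, SNS, UV22 or Super Bit-mapping algorithms, which are proprietary and use much higher-order shaping filters.

```python
import numpy as np

def quantise_shaped(x, seed=1):
    """16-bit quantiser with TPDF dither and first-order error feedback.

    Feeding each sample's quantisation error back into the next sample
    highpasses the noise, shifting its energy towards high frequencies.
    """
    lsb = 1.0 / 32768.0
    rng = np.random.default_rng(seed)
    out = np.empty_like(x)
    err = 0.0
    for i, s in enumerate(x):
        d = (rng.uniform(-0.5, 0.5) + rng.uniform(-0.5, 0.5)) * lsb  # TPDF dither
        v = s - err                           # subtract the previous sample's error
        q = np.round((v + d) / lsb) * lsb     # dithered 16-bit quantisation
        err = q - v                           # new error, fed back next sample
        out[i] = q
    return out
```

Quantising silence with this function and inspecting the spectrum of the result shows the noise floor rising with frequency, exactly the redistribution described above; the total noise energy, however, is unchanged.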
There are lots of different noise-shaped dither systems around, some generic and some bespoke commercial forms such as POW-R, Prism’s SNS, Apogee’s UV22 and Sony’s Super Bit-mapping.  

Korg Volca Product Overview -- Analog Synthesis for Lead, Bass and Beats

Q. Is it safe to over-clock my PC?

I always understood that over-clocking your computer’s CPU could make it unstable and meant you needed noisy fans to cool it down, but I notice that several specialist audio PC manufacturers now say they routinely over-clock processors on their machines. So is it now a good idea to do this and, if it is, what sort of increase is going to be ‘safe’?
Jerry Philips, via email
SOS columnist Martin Walker replies: 
The simplest definition of over-clocking is probably ‘making your computer go faster for free’, but a more accurate description might be ‘forcing one or more of your computer components to run faster than the manufacturer intended’.
Anyone who lives in a house where the mains voltage is higher than normal will find that their light bulbs burn brighter but need replacing more often. So, in PC terms, pushing your computer components beyond their nominal speeds can shorten their life. The increased power requirements of over-clocked components mean a greater strain on your computer power supply, while your over-clocked CPU, RAM or graphics card will also generate more heat when forced to run faster, and may therefore have a shorter life.
As you mention, the most obvious way to counteract this increased heat dissipation is to beef up component cooling, which often means more noise from CPU and case fans. However, more importantly, as you gradually increase any component’s speed beyond the manufacturer’s specification, you may find your computer starts to crash randomly, occasionally shuts down due to over-heating, or that a component completely burns out and needs replacing.
There are two approaches to over-clocking. Some enthusiasts go for the ‘extreme’ variety, which generally means gradually pushing motherboard component speeds and internal voltages ever higher by tweaking various parameters in its BIOS (Basic Input/Output System) until the computer crashes or refuses to boot up at all, and then backing them off slightly. During this process you have to carefully monitor various component temperatures to make sure you don’t burn anything out. You also have to stress-test the computer at your extreme settings for at least several hours to ensure that it’s totally stable.
In my opinion, extreme over-clocking is akin to playing Russian roulette, which is perhaps acceptable for a gaming machine, but not wise for a music computer that you rely on to capture many hours of creativity. A more sensible approach is to opt for a conservative increase in clock speed rather than pushing a particular machine to its limits while, again, monitoring temperatures and stability. Such ‘sweet-spots’ can be determined by experience for each make and model of CPU, but can vary considerably, although some recent processors seem relatively happy being over-clocked by up to 50 percent.
Some mainstream PC manufacturers don’t test their computers at all, which lets them pare prices to the bone, but it's also why a few machines inevitably end up dead on arrival, or are even found to have missing components that prevent them from booting up at all.
However, specialist audio PC builders already perform extensive soak testing and temperature monitoring on each and every machine before it gets shipped to the customer, which places them in an ideal position to offer their customers a ‘sensible’ over-clocking option that provides increased performance without compromising stability or long-term reliability. Components are still being operated outside the manufacturer’s specification, but you should, nevertheless, get a guarantee to cover you in the event of any system problems.
Is over-clocking a good idea on your own DIY or mainstream PC? Well, you’re on your own if anything goes wrong, and any damage won’t be covered under the normal guarantee. It’s not worth the risk (however small) if you’re already content with your computer’s performance, or if you’re a software developer or reviewer who has to be certain that any bugs discovered are due to the products being tested, and not due to your computer operating beyond its recommended speed. However, for those prepared to dabble, it’s certainly possible to achieve modest performance boosts on many computers fairly easily, if you take care, and more significant ones if you’re prepared for greater heat and fan noise.