Welcome to No Limit Sound Productions

Company Founded
2005
Overview

Our services include Sound Engineering, Audio Post-Production, System Upgrades and Equipment Consulting.
Mission
Our mission is to provide excellent quality and service to our customers. We offer customized service.

Saturday, December 9, 2017

Q. Do mixes benefit from low-pass filtering at mixdown?

I've heard a lot about high-pass filtering tracks to reduce clutter at mixdown, but not as much about low-pass filtering in this context. Would mixes suffer or benefit from doing the same at the opposite end? For example, would it be easier to bring out 'air' in a vocal if other parts were low-passed?

 Via SOS web site

SOS contributor Mike Senior replies: Particularly in small-studio environments where the low-frequency monitoring fidelity is questionable, there's a lot to be said for high-pass filtering in a fairly systematic way to head off problems at mixdown. However, widespread low-pass filtering offers fewer benefits, simply because so many instruments in a mix will have harmonics and noise components that extend right up the spectrum. In practice, I find peaking/shelving cuts are, therefore, more appropriate for dealing with typical mixdown tasks, such as frequency-masking problems. Yes, in theory you could make your lead vocal sound airier by low-pass filtering the other parts, but you'd still have to consider how the mix as a whole will sound during moments when the vocal isn't active, so achieving an airy vocal in practice isn't usually as simple as this.
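To see why high-pass filtering is so routinely useful, here's a minimal sketch of the idea in Python: a first-order high-pass filter removing low-frequency 'clutter' while leaving the upper spectrum untouched. This is an illustrative stand-in for a DAW channel's HPF, not any particular plug-in's algorithm; the 120Hz cutoff is just an example value of the kind you might use on a non-bass part.

```python
import math

def high_pass(signal, cutoff_hz, sample_rate):
    """First-order (6dB/octave) high-pass filter -- a crude
    stand-in for the HPF on a DAW channel strip."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    out = [signal[0]]
    for n in range(1, len(signal)):
        out.append(alpha * (out[-1] + signal[n] - signal[n - 1]))
    return out

sr = 44100
# One second of 50Hz 'rumble' plus a quieter 5kHz 'air' component
rumble = [math.sin(2 * math.pi * 50 * n / sr) for n in range(sr)]
air = [0.2 * math.sin(2 * math.pi * 5000 * n / sr) for n in range(sr)]
mixed = [a + b for a, b in zip(rumble, air)]

# A 120Hz high-pass, as you might apply to a non-bass part: the
# rumble is heavily attenuated while the 5kHz content passes through
filtered = high_pass(mixed, 120, sr)
```

Running the same filter over the whole mix's parts (bar the genuine bass instruments) is the 'systematic' approach Mike describes; low-pass filtering has no such across-the-board payoff, because almost every part carries wanted energy near the top of the spectrum.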

Although fairly systematic high-pass filtering is very sensible in home-studio mixing, as you can see in this screenshot from a recent Mix Rescue project, it's rarely beneficial to apply low-pass filtering in a similar way. 

Having said that, there's nothing wrong with low-pass filtering if you really want to kill the high frequencies of an instrument for balancing reasons. I would most commonly do this with amped instruments, such as electric guitars, which are capable of contributing a lot of undesirable amplifier noise in the top two octaves of the audible spectrum. However, this has to be evaluated on a case-by-case basis, because it's very easy to dull the overall mix if you're not careful.


Published January 2012

Thursday, December 7, 2017

Q. Is flutter echo a problem in a well-treated room?

My daughter managed to play a tough piece she's been practising on the keyboard this weekend. She played it so well that we clapped our hands... then we noticed how strange the clapping sounded. It rang on but died very quickly, and for the time it rang on, it sounded very metallic and almost robotic. That was close to the middle of the room. The room is partially treated at the moment, with panels at the side-wall reflection points, one on the ceiling, and three corner superchunks. I tried clapping again with some further panels on the side walls directly to the left and right of where I was sitting, and the noise disappeared. I understand enough to realise the sound is the clap bouncing back and forth between the two walls, and I'm guessing that this is what folk refer to as flutter echo. What I'm a little less sure about is whether it is a problem, and what — generally — a hand clap should sound like in a well-treated room.

Via SOS web site

SOS Technical Editor Hugh Robjohns replies: If we're talking about the sound in a control room, the point is what the room sounds like when listening to sound from the monitor speakers. It is conceivable that, by design (or coincidence), the acoustics could well sound spot on for sounds from the speakers, but less accurate or flattering for sources elsewhere. And, unless you're planning on recording sources in the control room at the position you were clapping your hands, those flutter echoes might not represent a problem or require 'fixing'.

However, in general, strong flutter echoes are rarely a good thing to have in a control room and I'd certainly be thinking about putting up some absorption or diffusion on those bare walls to prevent such blatant flutter echoes.

Flutter echoes in a studio can be distracting and fatiguing, so it's often worth putting up some absorbent foam on bare walls to reduce them.  Don't overdo it, though: you need to maintain a balanced acoustic. 

You shouldn't go overboard with the room treatment, though: working in a control room that has 'ringy' flutter echoes or an ultra-live acoustic can be very distracting and fatiguing, but so is trying to work in a room that sounds nearly as dead as an anechoic chamber!

Of course, traditional control rooms are pretty dead, acoustically speaking, and that is necessary so that you can hear what you are doing in a mix without the room effects dominating things. But the key is to maintain a balanced acoustic character across the entire frequency spectrum. The temptation in your situation might simply be to stick a load of acoustic absorbers on the walls, and that would almost certainly kill the flutter echoes, but in doing so there is also a risk that you'd end up with too much HF and mid-range absorption in the room (relative to the bass-end absorption).

That situation would tend to make the room sound boxy, coloured and unbalanced, and that's why a better alternative, sometimes, is to use diffusion rather than absorption; to scatter the reflections rather than absorb them. The end result is the same, in that the flutter echoes are removed, but the diffusion approach keeps more mid-range and HF sound energy in the room.

The question of which approach to use — diffusion or absorption (or even a bit of both) — depends on how the rest of the room sounds, but from your description I'd say you still had quite a way to go with absorption before you've gone too far.

To sum up, I'd suggest that you're not worrying unnecessarily, and that it would help to put up some treatment to reduce those flutter echoes.


Published February 2012

Tuesday, December 5, 2017

Q. How do I record a double bass alongside other instruments?

Having been a bass player for years, I've recently come into possession of an acoustic double bass. I seem to be getting a decent enough sound out of it that I think I'm ready to use it with my band. We're going to be recording soon, but will all be playing together in the studio. How can I record the bass alongside other musicians, reducing as much spill as possible?

The 'modern' method of recording a double bass in the studio is to 'bug' it, often with a pickup fitted on the instrument's bridge. Any 'character' lost in the sound is then usually EQ'd back in. However, the 'vintage' way would have been to use careful mic and instrument placement, in conjunction with carefully placed acoustic treatment, to provide a degree of separation. 

Bradley Culshaw via email

SOS Technical Editor Hugh Robjohns replies: The obvious 'modern' solution is to fit a 'bug' — a bridge pickup or an internal mic — to the bass, which will provide a pretty high degree of separation. The sound character might not be entirely 'natural', but a little EQ should deal with that. The 'vintage' alternative is to use acoustic screens or gobos in the studio and thoughtful instrument and mic layout, with the aim of minimising spill and helping to provide some sound shadowing for mics, especially the double-bass mic, thus reducing the spill and providing a workable degree of separation from the other instruments playing in the studio. This is a well‑proven historic technique, and the remaining spill generally helps to gel the mix together and provide a great 'live' character to the mix. Of course, such spill makes it almost impossible to overdub replacement parts, but that's what practice and an unlimited number of takes are for!

Published September 2011

Saturday, December 2, 2017

Q. What is side‑chaining, and what do you use it for?

This might be a very big topic, but I'm hoping that you can help to clear up some confusion. Side‑chaining seems to be something that is used a lot, but I don't really understand what it is. Can you explain?

Kim Nguyen via email


Normally, compressors and gates use the signal that's being processed to control the amount of gain reduction taking place, as in the top arrangement in the diagram to the right. Some devices, however, allow you to use a secondary input to control the gain of the first input (below). This allows you to, for example, compress a bass guitar using a kick drum as the trigger, or 'side‑chain' input. 


SOS Reviews Editor Matt Houghton replies: This is a huge topic, and it would be well worth reading some of the past SOS features about it (see the archive on our web site). Essentially, though, any dynamics processor (for example, a gate, expander, compressor or limiter) uses two input signals: the incoming audio itself and a side‑chain, which feeds the detection circuitry that determines whether or not the processor acts on the material. Simple processors take their side‑chain signal directly from the audio input. A more sophisticated approach is to split that signal, and allow you to process the side‑chain with high‑ or low‑pass filters.

Many professional devices also have a second physical input called the external side chain, so that you can feed the processor's detection circuit with any audio signal, which can be totally unrelated to the main audio input. A common example is ducking, where you might feed the kick‑drum signal into a compressor on the electric bass and set it up with a fast attack and release time so that the bass is attenuated by 1‑2 dB every time the kick exceeds the threshold. Another example would be to use a signal to 'key' a gate: you could place a gate on a synth pad, for example, and use a percussive loop to make the gate open and close rhythmically with the groove of the loop, without ever needing to hear the loop itself!
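The kick-ducking-the-bass example can be sketched in a few lines of Python. This is a deliberately simplified model (instant attack, one-pole release, a hard gain step rather than a smooth ratio, and made-up parameter values), not any real compressor's algorithm; the point is simply that the detector listens to the key signal while the gain change is applied to the main signal.

```python
def sidechain_duck(main, key, threshold=0.5, depth_db=6.0, release=0.999):
    """Attenuate `main` while the envelope of `key` is above
    `threshold`. The detector never listens to `main` at all --
    that's the essence of an external side-chain."""
    duck_gain = 10 ** (-depth_db / 20)  # linear gain for -6dB etc.
    env = 0.0
    out = []
    for m, k in zip(main, key):
        env = max(abs(k), env * release)  # instant attack, slow release
        out.append(m * duck_gain if env > threshold else m)
    return out

# A steady 'bass' note ducked by a single 'kick' hit
bass = [1.0] * 8
kick = [0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0]
ducked = sidechain_duck(bass, kick)
# Before the kick the bass passes at full level; once the kick
# exceeds the threshold, the bass sits roughly 6dB lower until
# the release lets the detector envelope decay
```

Keying a gate works the same way, except that the gain decision is open/closed rather than a fixed attenuation.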


Published September 2011

Thursday, November 30, 2017

Q. What are auxes, sends and returns?

Excuse the simplicity of the question, but I'm always coming across these terms in the magazine, and I don't know what they are: auxes, buses, sends and returns. Can you explain what they are? Are they all part of the same thing or completely unrelated?

Tony Robbins via email

The aux sends on a mixer (whether hardware or software) allow you to send independent mixes to performers on stage or in the studio. You can also use them to feed effects processors at mixdown.  

SOS contributor Mike Senior replies: All of these terms are related, in that they are all ways of talking about the routing and processing of audio signals. The word 'bus' is probably the best one to start with, because it's the most general: a bus is the term that describes any kind of audio conduit that allows a selection of different signals to be routed/processed together. You feed the desired signals to the bus, apply processing to the resulting mixed signal (if you want), and then feed the signal on to your choice of destination. If that description seems a bit vague, that's because buses are very general‑purpose.

For example, it's common in mixing situations to hear the term 'mix bus', which is usually applied to the DAW's output channel. In this case, all the sounds in your mix are feeding the bus, and it might then have some compression applied to it before the sound is routed to a master recorder or recorded directly to disk within the software. A 'drums bus', on the other hand, would tend to refer to a mixer channel that collects together all the drum‑mic signals for overall processing, routing them back to the mix bus alongside all the other instruments in the arrangement. Other buses are much simpler, such as those that can be found on a large‑scale recording mixer, feeding the inputs of the multitrack recorder, or those which carry audio to/from external processing equipment. Some don't even provide a level control.

An 'aux' is just a type of bus that you use to create 'auxiliary' mixes alongside that of the main mix bus: each mixer channel will have a level control that sets how much signal is fed to the aux bus in question. What you do with your aux buses is up to you: the most common uses are feeding a cue signal to speakers or headphones, so that performers can hear what they're doing on stage or during recording; and sending signals to effects processors during mixing. In the latter case, the aux bus that feeds the effects processor is usually referred to as a 'send', while the mixer channel that receives the effect processor's output will usually be called the 'return'. For more information, check out Paul White's 'Plug‑in Plumbing' feature back in SOS April 2002; you can find it at /sos/feb02/articles/plugins.asp.
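As a loose illustration (a toy model, not any particular console's architecture), the bus/send/return relationships Mike describes might look like this in code: a bus is just a summing point, a send taps off a copy of a channel at its own level, and the effect's output comes back as a return that is summed onto the mix bus alongside the dry channels. The `toy_reverb` function is a hypothetical stand-in for a real effects processor.

```python
def bus(*feeds):
    """A bus is just a summing point: any number of equal-length
    signals mixed down to one."""
    return [sum(samples) for samples in zip(*feeds)]

def send(channel, level):
    """An aux send taps off a copy of a channel at its own level,
    leaving the channel itself untouched."""
    return [s * level for s in channel]

def toy_reverb(signal):
    """Stand-in effect: just halves the signal. (A real reverb
    obviously does far more -- this marks where it would sit.)"""
    return [s * 0.5 for s in signal]

vocal = [1.0, 0.0]
guitar = [0.0, 1.0]

# Feed both channels' sends to an effects bus, process it, then sum
# the effect 'return' with the dry channels on the mix bus
fx_bus = bus(send(vocal, 0.4), send(guitar, 0.2))
fx_return = toy_reverb(fx_bus)
mix = bus(vocal, guitar, fx_return)
```

A cue mix is the same `send`/`bus` arrangement, except that the summed aux bus feeds headphones or monitors instead of an effects processor.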


Published September 2011

Wednesday, November 29, 2017

Q. How much power does my stage system need?

I'm trying to work out how much power a PA system I work with draws, and I also need to come up with a sensible 'plug‑it‑all‑in' type of procedure. (I've read the Sound On Sound December '05 article 'PA Basics'.) It's mainly small venues we play in, such as function rooms and town halls. Looking at the manual for my Mackie SA1530z, I'm kind of baffled. It says:

Line Input Power Europe: 230V, 50Hz

Recommended Amperage Service: 16 amps

Is this saying that a 16‑amp circuit is recommended? The spec sheet doesn't seem to list how much current the box will draw. Also, it's often stated that FOH, mixer and racks, lights and backline should be powered from their own separate sockets (three in total). Is it acceptable to power from both sides of a double socket and another adjacent socket, therefore, all being powered from the same ring main?

Via SOS web site

SOS Technical Editor Hugh Robjohns replies: The 16‑amp thing looks like a generic suggestion to me. In the UK, standard domestic outlets are nominally 13A anyway!

Essentially, what they are saying is that it needs to be plugged into a sensible supply. The typical average current will be a few amps at most, but the initial inrush current on switch‑on will be considerably higher, so don't try to turn everything on in one go!

If you need to know the real current and power‑consumption figures, invest in something like an energy monitor, such as the one I've found here: www.maplin.co.uk/plug-in-mains-power-and-energy-monitor-38343. This one is marketed by Maplin in the UK, but I'm sure you'll find similar devices from all the usual suppliers. You simply plug in the device you want to know about, and the display will give you the current and power being consumed, as well as the supply voltage and frequency. It's a really handy device and I use mine a lot when testing and checking equipment.

Regarding the use of wall sockets, assuming that you're working with a PA and backline system that is consuming less than about 4kW in total (which would be most systems for a modest‑sized venue), use a double socket to run all the audio equipment. That minimises any problems with ground loops.
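As a rough sanity check on those numbers, the steady-state current for a (resistive) load is just I = P / V; this simple sketch ignores power factor and the switch-on inrush Hugh mentions, so treat the results as ballpark figures only.

```python
def current_draw(power_watts, mains_volts=230.0):
    """Approximate RMS current for a resistive load: I = P / V."""
    return power_watts / mains_volts

# A 2kW rig on a 230V supply draws about 8.7A, comfortably inside
# a UK 13A outlet; a 4kW total needs about 17.4A, which is why that
# figure is near the sensible limit for a single double socket on
# one ring main.
pa_current = current_draw(2000)
full_rig_current = current_draw(4000)
```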

 If you need to know how much current your setup is using, a simple energy monitor like this should do the trick: plug in whatever you'd like to measure and its power consumption will be displayed. 

Run all the backline from one side of the double outlet, and all the PA (FOH, racks, PA and monitors, for example) from the other side. Supplying the two systems from their own RCDs (Residual Current Devices) is essential too, particularly from the point of view of preventing a backline fault from taking out the PA. If the musicians want to use their own RCDs for their gear, that's fine too!

Running the FOH on a long mains extension from the PA power‑supply socket (or distribution board) continues the theme of 'star grounding' and will minimise the potential for ground loops in the PA system. Run lighting from a different socket (or sockets) and try to keep the dimmer racks and cabling well away from the audio cables.



Published October 2011

Monday, November 27, 2017

Q. How can I connect hardware synths to my setup?

Currently, I have a MIDI keyboard, a Mackie Spike audio interface, an Apogee Duet interface, a UA Solo 610 preamp and a Neumann TLM103 mic. I use the Spike as a soundcard and run my MIDI through it, and the Duet for recording vocals.

I'm looking to get some hardware synths in the near future and need some advice. In preparation for the synths, I've bought a MOTU Express 128 so that I can have up to eight synths at once hooked up for MIDI. As both the Spike and Duet only have two audio inputs each, I am also looking to do away with those and get a better audio interface. However, if I get rid of them, I do not have a soundcard to produce sound via my monitors.

This is where I'm getting confused. How do I set up, say, three hardware synths via audio and MIDI (I believe you need both connected to get sound in your DAW?) and also get sound from my monitors out of my DAW? Can I get an audio interface that I can record vocals through and plug hardware synths into?

Via SOS web site

SOS Editor In Chief Paul White replies: You have a couple of options, one of which is to use an external analogue mixer to combine the output of your DAW (stereo) with your hardware synths. When the mix is sounding right, you record the output of the mixer back via your audio interface onto a new stereo track, but with the playback fader turned down during recording so the signal doesn't feed back on itself. Speaker and headphone monitoring would be done from the output of the mixer. I used to work in this way and got really good‑sounding results.

The other option is to buy an interface with plenty of spare inputs, ideally one that can be further expanded using an ADAT‑compatible preamp. MOTU's interfaces are generally reliable and straightforward (most include volume controls for your monitors) and I've also used M‑Audio with no problems. Expanders are available from under £200, such as Behringer's ADA8000, which will give you eight more inputs if you need them. You'd then connect your synths up to pairs of inputs (for stereo) and record their outputs just as you'd record any other audio. Most DAWs now have the ability to set up live inputs in permanent monitor mode, so you can always hear them even when they're not set to Record Ready. Working in this way, each synth would have both a MIDI track to control it and a stereo audio track to record it.

Expanding the number of inputs in your setup can be done at a relatively low cost. This Behringer ADA8000 can be found for well under £200 (around $250) and will give you an extra eight inputs to play with.

The advantage of working like this, rather than using an external mixer, is that you can apply plug‑ins to the synth channels if you need more effects. You can also come back to your mixes years later when the synths have been disconnected or sold.

The MOTU multi‑port MIDI interface will enable you to handle up to eight multitimbral synths at once without running out of MIDI channels, so that seems a practical choice.


Published October 2011

Friday, November 24, 2017

Q. Should I mix an album as I’m writing it, or all at once?

I'm in the long process of trying to write enough material to put a cohesive, album-length bunch of stuff together. I have a few ideas in 'semi-baked' state, and have got to the point where I have one track written, structured and recorded, and am ready to make a proper mix (I've already made a rough mix).

My decision now is whether to go to town on mixing that one track, and then get on with the rest of the writing and recording at a later date, or to keep it at the rough-mix stage, finish the rest of the material, then mix the whole lot afterwards.

I'm guessing the second approach would lead to greater overall consistency, but this is my first real stab at 'doing an album', if you want to call it that. My output up to now has been rather discontinuous, so it hasn't mattered before. What approach would you take, and how do you think it could help your progress?

Via SOS web site

SOS Reviews Editor Matt Houghton replies: Consistency is great if it's consistently good. Otherwise it's not such a laudable aim! There's no harm in still writing and recording stuff while you're mixing other stuff, but I would rather mix one track at a time, so that any lessons I learn can be applied to the next mix, and so on.

Also, bear in mind that, while mixing the first or second tracks, you might have one of those dawning "Oh, that would have been so much easier if only I'd recorded it like that!" moments, and that would be a bugger if you'd already tracked everything else.
There's no particular reason not to continue writing while you're mixing other tracks, but it makes sense to complete a couple of mixes before getting stuck into the rest of a project if you're, say, recording an album. This means that you can apply what you've learnt from your first mix(es) to the rest of the material. It also means that any recording issues you pick up during the mixing stage won't appear in all tracks. 

SOS contributor Mike Senior adds: I'd second Matt on that one. It may mean that you end up redoing the first couple of mixes with the benefit of hindsight, but I think, overall, it's probably the best option if you're still feeling your way through a little bit with the mixing side of things.

It's no different from when you're mixing anything: you have to reference your work against any other material you want consistency with. Often that will be commercial releases with which you want your work to compete, but it can just as easily be other mixes you've done, which are destined for the same record. If you make sure to do that, then everything else should sort itself out in the long run.

I do tend to keep the main send effects I used for the first mix available for the second if I'm working on several things for one artist, as long as those effects met with their approval first time round! That does help to give some conformity to the sound. However, there are perfectly valid aesthetic reasons for not wanting to make all the tracks sound the same, so you should still try to make each track shine on its own terms. If that means using completely different mixing strategies, then so be it.


Published November 2011

Tuesday, November 21, 2017

Q. Which speakers will be best for digital piano playback?

I currently have a Yamaha P300 piano, which has built-in speakers and amplification and is ideal for me. However, I'm thinking about getting a Roland RD700-series piano that would be mainly for home use, and I guess I would need to get some kind of amplification/speaker system for it. Are you able to recommend a reasonably priced setup that would do the job? Would decent-quality home hi-fi speakers cope with the wide range of frequencies that the piano can generate?

Geoffrey Clarke via email

SOS contributor Robin Bigwood replies: Built-in speakers are certainly a handy feature for many applications, including home use. However, you'll probably find that some kind of external amplification for your RD700 will give more flexibility, and quite possibly an improvement in sound quality too. At the very least, you're going to want to match what your P300 offers, which is a stereo amp rated at a modest 20W per channel, driving a pair of one-way 13cm (5-inch) drivers.

A pair of KRK RP6s, mounted on stands behind a digital piano, would be an excellent choice for reproducing its sound in an accurate and pleasing way. However, if you only need the speakers to hear piano playback and they don't need to double for mixing or other monitoring, a set of quality hi-fi speakers, such as the Wharfedale Diamond 8.2s shown here, will do the job well. They can also be picked up cheaply on the secondhand market. 

One option would be to look at a dedicated keyboard 'combo' amplifier. Most are essentially mono (even if they have stereo inputs) and won't give you much of an immersive piano-sound experience. However, stereo models are available, and these also sport two-way speaker systems (ie. a main driver plus a tweeter) that should provide clarity as well as 'oomph'. Roland's compact KC110 may well match your P300 for scale, if not stereo separation, and, with a handful of stereo and mono inputs and a battery‑power option, could be a useful thing in and out of the home. The KC880, which costs considerably more, is the bigger, gym-obsessed, steroid-taking brother, and knocks out 320W via 12-inch woofers. Probably overkill for your particular needs.

Better still, though, I think, would be to go with a pair of active monitors, mounted behind and at either end of your RD700, perhaps on a shelf or tall stands. A pair of KRK RP6s, for example, would turn in a performance markedly superior to what the P300 offers. Similarly priced products by the likes of Mackie, Yamaha and M-Audio are also a safe bet, and, if you can stretch to greater expense, offerings by Dynaudio, Adam, Focal and Genelec will be that much better again. Many active monitors also have an upgrade path, so to speak, in the form of a matching subwoofer: that could help generate a feeling of scale and bass extension on a par with a real piano. It's useful, too, that all the recent RD700 models — certainly from the SX onwards — have balanced XLR audio outputs, so getting a good interference-free sound from directly connected active monitors won't be a problem.

You mention the possibility of using home hi-fi, and since you're primarily going for a good piano sound, rather than ruthless accuracy for mixing, this makes perfect sense. Some really high‑quality 1990s vintage amplifiers and speakers can be had for peanuts on the secondhand market, yet they offer excellent performance and should have no trouble managing the RD700's full-range signal. One thing to watch for, though: most domestic amplifiers will have unbalanced inputs on RCA phono sockets, so make sure you keep the line-level cable runs from the RD700's quarter-inch outputs as short as is practical, to avoid picking up unwanted RF interference.


Published July 2011

Saturday, November 18, 2017

Q. Can I improve my monitors’ frequency response with EQ?

I use a set of Roland DS50 monitors. According to the graph in the manual, the frequency response isn't what one would call completely flat. Would it be reasonable to plot the discrepancies and compensate for them by adjusting the respective frequencies with EQ? Or would I then be fooling myself?

Via SOS web site

SOS Technical Editor Hugh Robjohns replies: This frequency response is perfectly normal, and I'd be extremely suspicious if I saw a completely flat line. There are lots of reasons for these response irregularities, including cabinet effects, such as internal reflections, external diffraction and port resonances, driver response inconsistencies, crossover effects and matching, and so on. However, providing the response variations are modest and gentle (and the quoted +/-3dB spec is, again, normal and reasonable) there won't be any problems. The effect of the room and its contents on the speakers' responses at low frequencies and through the mid-range will be orders of magnitude larger than any built-in variations anyway.

Plotting the discrepancies and using EQ to adjust the respective frequencies is a nice idea, and many people have tried it over the decades, but the situation is more complex than simple EQ can resolve, and the inherent phase shifts involved in conventional analogue EQ often make the problem worse rather than better.

As conventional analogue EQ isn't really precise enough, several speaker manufacturers have used DSP techniques to try to correct for speaker response anomalies, but I'm not convinced there is any real benefit. Well-designed speakers using high-quality components rarely need this kind of correction, and applying it to budget speakers is a bit like putting a sticking plaster on a major knife wound; it doesn't fix the underlying problems.

There are also many digital room‑acoustics correction devices on the market that attempt to address the in-room performance of a speaker system by levelling the response (amongst other things), but while they often make a difference, I feel the traditional acoustic treatment approach provides generally better and more consistent results.

 The idea of trying to use EQ to compensate for perceived shortcomings in the frequency response of your monitors might seem tempting, but it can actually make things worse. Addressing your mixing room instead, using correctly placed acoustic treatment, will usually yield much more sensible and consistent improvements.  

In conclusion, I would not recommend trying to apply inverse EQ to your monitor chain, as it's likely to make things worse rather than better. The frequency response of the speakers you have is adequately flat for a design of this type. You will gain a far greater improvement in sound quality by addressing the acoustics of your listening environment, treating the mirror points with broadband absorbers and installing bass traps to control the room modes.



Published July 2011

Korg Kaoss DJ - Control Your Mix (Free Serato DJ Intro Download)

Friday, November 17, 2017

Cory Henry Invites You To The 2015 Korg Pre-NAMM Show

Q. Does my shotgun mic have any uses in the studio?

I've recently inherited a shotgun mic that seems to be in pretty good condition. However, I never do any kind of video or broadcast work, so I can't see myself using it for its intended purpose. I'm loath to get rid of something if I can make use of it, so are there any uses for a shotgun mic in the studio?

James Gately, via email

SOS Technical Editor Hugh Robjohns replies: You can always find a use for a decent mic in a studio, but shotgun — or rifle — mics aren't the easiest to use because their particular blend of properties doesn't really work well in enclosed spaces.

The shotgun mic gets its name from the long slotted tube — the 'interference tube' — affixed in front of a (usually) hypercardioid capsule. The idea of the tube is to enhance the rejection of off‑axis sound sources, and thus make the polar pattern more directional. However, the technique relies on sound from the wanted source arriving only on‑axis (and unwanted sound only off‑axis), and that's rarely the case in an enclosed, reverberant space, where reflections arrive from all directions.
Though a shotgun mic may appear to have obvious uses in the studio — rejecting, as it does, off‑axis sound effectively — it actually captures highly coloured spill and is, therefore, very difficult to use in the studio context.

In normal use, the sound wavefront from an on‑axis source travels down the length of the tube unimpeded, to strike the capsule diaphragm in the usual way, and so generates the expected output. However, sound wavefronts from an off‑axis sound source enter the tube through the side slots. The numerous different path lengths from each slot to the capsule mean that copies of the same off‑axis sound arrive at the diaphragm simultaneously, but with a multitude of different relative phase shifts. Consequently, this multiplicity of sound waves partially cancels out, and so sound sources to the sides of the microphone are attenuated relative to those directly in front. The polar pattern essentially becomes elongated and narrower in the forward axis, and the microphone is said to have more 'reach' or 'suck'.

Sadly, though, there's no such thing as a free lunch, and in this case the down side is that the interference‑tube phase cancellation varies dramatically with frequency (because the phase‑cancellation effects relate to signal wavelength as a proportion of the interference‑tube slot distances). If you examine the polar plot at different frequencies of a real interference‑tube microphone, you'll see that it resembles a squashed spider: deep nulls and sharp peaks in the polar pattern appear all around the sides and rear of the mic. What this means, in practice, is that off‑axis sounds are captured with a great deal of frequency coloration, and if they move relative to the mic, they will be heard with a distinctly phasey quality.
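This phase‑cancellation behaviour is easy to demonstrate numerically. The Python sketch below (the slot delays are invented for illustration, not measured from any real microphone) simply sums delayed copies of a sound, as the tube's slots do for an off‑axis source, and shows how the attenuation depends strongly on frequency:

```python
import numpy as np

def interference_tube_response(freqs, slot_delays_ms):
    """Sum unit-amplitude copies of a sound that reach the capsule via
    different tube slots, each with its own path delay. At each frequency
    the copies add with different relative phases, so the combined level
    varies wildly across the spectrum."""
    response = np.zeros(len(freqs), dtype=complex)
    for d in slot_delays_ms:
        response += np.exp(-2j * np.pi * freqs * d / 1000.0)  # d in ms
    return np.abs(response) / len(slot_delays_ms)  # normalised 0..1

# hypothetical slot-to-capsule delays for a source off to one side
delays_ms = [0.0, 0.1, 0.2, 0.3, 0.4]
freqs = np.array([100.0, 1000.0, 5000.0])
print(interference_tube_response(freqs, delays_ms))
# low frequencies pass almost untouched; 5kHz is heavily attenuated
```

The deep, frequency‑dependent dips this produces are exactly the 'squashed spider' polar plots described below.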

So while it might seem that a shotgun mic could afford greater separation in a studio context, in reality the severe off‑axis colouration undermines much of that benefit: the strongly coloured spill does more harm than good, making it almost impossible to achieve a sweet‑sounding mix.

Shotgun mics really only provide useful advantage out of doors (or in very large and well‑damped enclosed spaces), and where no other, better‑sounding alternative is viable. My advice would be to sell the mic to someone who is involved with film, video or external sound effects work, and use the funds to buy something more useful for your studio applications!



Published August 2011

Thursday, November 16, 2017

Intel Processing; Seagate Storage

By Martin Walker
Computing's big names continue to offer more power in less space, as Intel pioneer new microprocessor technology and Seagate put even more data storage on your platter...
Coming soon to a PC near you — Intel's Ivy Bridge processors will offer faster performance and greater efficiency, thanks to a radically new 3D transistor design. On the left is the 32nm planar transistor in which the current (represented by the yellow dots) flows in a plane underneath the gate. On the right is the 22nm 3D Tri‑Gate transistor with current flowing on three sides of a vertical fin.

The next generation of microprocessors from Intel, the successor to the Sandy Bridge series, is code‑named Ivy Bridge, and will be introduced sometime in 2012, when Intel move to an even smaller 22nm manufacturing process. However, the microprocessors are notable for another reason: they will use the world's first 3D transistors. In place of the two‑dimensional planar (flat) transistors of the past with a single 'gate' on top, 'Tri‑Gate' transistors feature incredibly thin three‑dimensional silicon fins that rise up vertically, with a gate on each side of the fin and a third on top.

The main advantage is that since the fins are vertical, transistors can be packed even closer together. This, in turn, should help extend Moore's Law, the 1965 observation by Intel co‑founder Gordon Moore that the number of transistors in a given area would double every two years, bringing increased functionality and reduced cost. Another advantage of the Tri‑Gate technology is that it allows more powerful processing with greater efficiency: the new transistors are said to consume half the power of current planar transistors for the same performance, and to offer up to a 37 percent performance improvement at low voltages.
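To put that doubling rule of thumb in code, here is a trivial projection function; the starting transistor count below is purely hypothetical:

```python
def projected_transistor_count(count_now, years_ahead, doubling_period_years=2):
    """Moore's Law as a rule of thumb: transistor counts
    double every doubling_period_years."""
    return count_now * 2 ** (years_ahead / doubling_period_years)

# a hypothetical 1-billion-transistor die, projected six years ahead
print(projected_transistor_count(1e9, 6))  # three doublings: 8 billion
```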

Other new features for Ivy Bridge include integral USB 3 and Thunderbolt support, which should reduce compatibility problems for musicians (compared with the current situation of motherboard manufacturers having to add their own support chips), as well as upgrades to the graphics core, which should help those involved in video work.

Nebula Kernels Promoted

Back in SOS February 2008 (/sos/feb08/articles/nebula3.htm), I reviewed Acustica Audio's Nebula 3, an impressive 'dynamic convolution' plug‑in with great potential, although at the time the bundled effect‑library patches varied greatly in audio quality. The secret of Nebula's engine was 'Volterra Kernels', each of which is essentially a stream of treated audio chunks that acts rather like a single convolution impulse response, but which can exist in various tiers.

Nebula 3 now offers a new 'Aqua' interface to third‑party developers, providing the same 'dynamic convolution' engine but giving them free rein with GUI design. As this 'vintage British console channel strip' plug‑in from CDSoundMaster shows, you may even be using the Nebula 3 engine without realising it.

The output stream morphs between these tiers, depending on the desired effect, so, for example, Nebula can model compression by moving between the tiers depending on input level, at a speed determined by the attack/release controls, and at a depth determined by its threshold control. A Nebula preamp does the same at maximum speed to vary the level of harmonic distortion/saturation with input level, and swept filters and phasing or flanging can be reproduced by smoothly moving between the tiers using one of Nebula's LFOs.
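For the curious, here's a heavily simplified Python sketch of that tier‑morphing idea. It's nothing like Nebula's actual engine: the three‑sample 'kernels' and the linear level law are invented purely to show the principle of crossfading between impulse responses according to input level:

```python
import numpy as np

def dynamic_convolution(signal, quiet_kernel, loud_kernel, threshold):
    """Convolve each input sample with a kernel interpolated between a
    'quiet' tier and a 'loud' tier, according to the sample's level --
    so quiet and loud material receive different processing."""
    out = np.zeros(len(signal) + len(quiet_kernel) - 1)
    for n, x in enumerate(signal):
        # 0 = fully 'quiet' tier, 1 = fully 'loud' tier
        mix = min(abs(x) / threshold, 1.0)
        kernel = (1 - mix) * quiet_kernel + mix * loud_kernel
        out[n:n + len(kernel)] += x * kernel
    return out

quiet = np.array([1.0, 0.0, 0.0])    # clean: a pass-through impulse
loud = np.array([0.8, 0.15, 0.05])   # 'saturated': a smeared impulse
print(dynamic_convolution(np.array([0.1, 1.0]), quiet, loud, threshold=1.0))
```

In the real plug‑in the morphing speed and depth are governed by the attack/release and threshold controls described above.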

Over the last three years, Acustica's tiny part‑time development team have concentrated on what they do best — enhancing the Nebula engine so it can capture and replay the sounds of existing hardware with greater realism and efficiency. However, Nebula's interface can still be confusing, and while it comes with a utility for capturing kernels, that's not a job for the faint‑hearted, either.

Raising The Bar

Fortunately, third‑party developers have stepped up to the plate, releasing a host of Nebula libraries that are streets ahead of the bundled offerings in both realism and versatility, in the process capturing vintage hardware EQs, compressors, tape saturation, tube preamps and consoles, as well as plenty of other exotica. Many are at pocket‑money prices, while others audibly rival or arguably surpass much more expensive plug‑ins!

I'm hoping to explore the best of these shortly, but in the meantime Nebula 3 Pro users can point their browsers at the new and improved Acustica Audio web site (www.acustica‑audio.com) to catch up with all the improvements, and at www.alessandroboschi.eu, www.analoginthebox.com, http://cdsoundmaster.com, http://cupwise.com and http://rhythminmind.net to see what some of these third‑party developers have been up to.

PC News

Windows 7 On The Up: Microsoft have sold 350 million licenses for their Windows 7 operating system in the 18 months since its release, and also estimate that 90 percent of corporations are currently in the process of migrating to Windows 7. This is a huge improvement compared with the take‑up of Vista, but hardly surprising, given the latter's failings. Ironically, though, Windows XP (which celebrates its 10th birthday in October 2011) still remains in pole position worldwide, holding 54 percent of the global market, although Windows 7 is expected to have caught up in a year or so, by which time Windows 8 could be upon us. Interestingly, Windows 8 will finally see the end of the dreaded 'blue screen of death' crash message — it's going to be black instead!

Seagate Break 1TB Barrier: As I write this, Seagate have just launched the first commercially available 3.5‑inch external hard drive to offer one terabyte per platter — the highest storage capacity on the market to date. Seagate's GoFlex Desk range offers models with capacities up to a massive 3TB of storage spread across three platters, and by the time you read this, its flagship Barracuda desktop hard drives may also be shipping with this technology on board.


Published July 2011

Wednesday, November 15, 2017

Korg Step Master: Function

Q. How should I use my new multi‑pattern microphone?

Having been using a cardioid mic for some time, I've just bought an Audio‑Technica AT2050. Although my decision was partly based on the flexibility of its switchable polar patterns, I've not ventured beyond the cardioid pattern that I'm used to since I bought it. How can I use the different patterns? Are there any creative techniques I can use?

Ben Allen via email

SOS Reviews Editor Matt Houghton replies: This is probably a rather broader topic than you realise, but it's great that you're showing curiosity and a willingness to learn! Generally speaking, the best thing to do is to learn through trial and error: try out the different patterns and compare the results. Even with all the theory in the world, you need to make errors in order to learn! That said, we've published several features over the years that discuss this topic in more detail (for example, there's one in SOS March 2007: /sos/mar07/articles/micpatterns.htm), and I'd suggest that you have a read of some of those.

To get you started, though, I'd recommend investigating the figure‑of‑eight pattern, which is really useful where you want to reject sounds off to the side: you point the null at the bit you want to reject, and the front (or rear!) at the bit you want to capture. Bear in mind that the trade‑off in achieving this excellent off‑axis rejection is that you pick up as much sound from the rear as you do from the front, so you either need to be working in a nice‑sounding room, and be happy to capture ambience, or to have some sort of acoustic shield placed behind it. I find that figure‑of‑eight mics often make very useful room mics: you'd set them up to pick up room ambience only, with the null pointing toward the sound source.

A multi‑pattern mic, like the Audio‑Technica AT2050 shown here, provides a relatively inexpensive way to try out different polar patterns. If you already have a cardioid mic, you could use the two in conjunction to start experimenting with stereo miking techniques.

If you have another cardioid mic handy, you could try Mid/Side stereo recording, with the cardioid mic (it could actually be an omni, figure of eight or anything in between, but cardioid is more typically used) pointing toward the sound source and the figure-of-eight rejecting the sound source but picking up from left and right. In this instance you record three tracks: the cardioid, and two copies of the figure-of-eight signal. Polarity‑invert (ie. flip the 'phase') one of those figure‑of‑eight copies, pan the two hard left and right, and route both to a group channel in your DAW, and you have a mono‑compatible stereo recording whose width you can alter by balancing the cardioid's fader against the figure‑of‑eight group fader. If this whistle‑stop explanation seems a bit brief, you can learn more about the technique at /sos/feb02/articles/cheshire0202.asp.
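The maths behind the decode is simply sum and difference. This little Python sketch (with invented sample values) mirrors the routing described above, with the 'width' factor playing the role of the figure‑of‑eight group fader:

```python
import numpy as np

def ms_decode(mid, side, width=1.0):
    """Mid/Side to left/right decode: the un-inverted Side copy adds to
    the Mid for the left channel, and the polarity-inverted copy gives
    the right channel. 'width' scales the Side contribution."""
    left = mid + width * side
    right = mid - width * side   # the polarity-inverted Side copy
    return left, right

mid = np.array([0.5, 0.5])     # cardioid, facing the source
side = np.array([0.2, -0.2])   # figure-of-eight, facing sideways
l, r = ms_decode(mid, side)
print(l, r)
# with width=0 the mix collapses to mono: both channels equal the Mid
```

Summing left and right back to mono cancels the Side signal entirely, which is why the technique is inherently mono‑compatible.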

The polar patterns available from most multi‑pattern microphones include the three shown here: (left to right) cardioid, omnidirectional and figure of eight. The diagram shows where the polar pattern picks up sound and where it rejects it.

The omnidirectional pattern is also potentially very useful. As this is a large‑diaphragm mic, it's probably not as true an omni pattern as you'd find in a small‑diaphragm capsule, but it should give you a much more 'honest' sound than you'd get from the cardioid pattern, so if you're looking to capture the sound you hear in the room, an omni is a good bet. Beware again, though, that this pattern picks up sound from all directions. That makes it great for one‑track‑at‑a‑time recordings (it's a good bet for acoustic guitar, for example), but you need to be in a nice‑sounding space — not too near to reflective surfaces — and it makes it a poor choice if you need to achieve separation between different sound sources.
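Incidentally, all three of these patterns belong to the same first‑order family, with sensitivity a + (1 − a)cos(θ) at angle θ. The sketch below just evaluates that standard formula for the patterns in the diagram:

```python
import numpy as np

def polar_sensitivity(theta_deg, pattern_weight):
    """First-order polar pattern: sensitivity = a + (1 - a)cos(theta),
    where a = 1 gives omni, a = 0.5 cardioid and a = 0 figure-of-eight.
    Negative values indicate the rear lobe's inverted polarity."""
    theta = np.radians(theta_deg)
    return pattern_weight + (1 - pattern_weight) * np.cos(theta)

for name, a in [("omni", 1.0), ("cardioid", 0.5), ("figure-of-eight", 0.0)]:
    side = polar_sensitivity(90, a)
    rear = polar_sensitivity(180, a)
    print(f"{name}: side {side:+.2f}, rear {rear:+.2f}")
```

Running it shows the behaviour discussed above: the omni picks up equally all round, the cardioid nulls at the rear, and the figure‑of‑eight nulls at the sides while picking up (with inverted polarity) at the rear.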



Published August 2011

Monday, November 13, 2017

Q. Is a 'reflection filter' worth the money?

I've been thinking about trying out an SE Reflexion Filter or similar device. So far, however, I've been hearing mixed reviews and seeing a lot of DIY stuff when looking them up. Some folks say that they're not worth the money, but the DIY options — hanging duvets and hooking up foam contraptions — look so complicated that I figure it must be, to a certain extent. My studio is treated, but I'd still like to tighten up on the vocal side of things. I've been going back and forth looking at different options, but I just don't know whether it's worth it and don't know anyone who has one I can try. Can you give me some advice?

Via SOS web site

SOS Technical Editor Hugh Robjohns replies: All of these products are useful to some extent, but they aren't a magical cure‑all and can't instantly turn a bad‑sounding room into a good one. Most people use cardioid‑pattern mics for recording vocals and, if you think about the physics of the situation, the mic is therefore most sensitive in the direction facing the performer, and only slightly less sensitive to the sides. So it's going to pick up any sound reflected from rear and side walls that bounces back over the shoulders and around the performer.

It should be obvious, then, that the single most important area to treat with sound-absorption material is directly behind and to the sides of the performer. This is why we champion the SOS mantra of hanging a duvet (or similar) behind the vocalist: it really does make a massive difference to any recording session (unless you are lucky enough to have a properly treated studio).
As a vocal mic is so sensitive in the direction of the singer, it will pick up reflections from any walls or surfaces behind and to the sides of the singer, so whether you decide to use a reflection filter or not, acoustic treatment in these areas is a priority. A Reflexion Filter or similar device is designed to absorb some of the sound that would hit the rear of the mic, so if you're thinking of buying one, it makes sense to use it in conjunction with acoustic treatment to the rear of the performer, rather than simply using one or the other.

The idea of the reflection filter type of product is to provide some helpful absorption of sounds that would otherwise reach the rear-facing sides of the mic, and also to catch and absorb some of the direct sound from the vocalist. The latter helps to minimise the amount of energy that gets out into the room in the first place, thus reducing the amount that subsequently bounces around to get back into the mic.
The differences between the various alternative filter designs really come down to the usual compromises of size, weight, cost and the efficiency of their low‑frequency absorption. Bigger is generally better, as is thicker: both extend the absorption down to lower frequencies.

Whereas most products use a simple acoustic foam panel, the SE design uses a clever multi‑layer panel construction, which is designed to extend the LF performance without making the unit excessively heavy or thick. However, simple DIY filter constructions can be virtually as effective as commercial versions, and if you have an experimental nature I'd certainly recommend having some fun with a foam panel to see whether or not the idea is useful in your specific recording situation!

However, start with the absorbers behind the performer — the ubiquitous duvet — because that will make much more of an improvement.

Published August 2011

Tuesday, November 7, 2017

Q. Can I improve my monitors’ frequency response with EQ?

I use a set of Roland DS50 monitors. According to the graph in the manual, the frequency response isn't what one would call completely flat. Would it be reasonable to plot the discrepancies and compensate for them by adjusting the respective frequencies with EQ? Or would I then be fooling myself?

Via SOS web site

SOS Technical Editor Hugh Robjohns replies: This frequency response is perfectly normal, and I'd be extremely suspicious if I saw a completely flat line. There are lots of reasons for these response irregularities, including cabinet effects, such as internal reflections, external diffraction and port resonances, driver response inconsistencies, crossover effects and matching, and so on. However, providing the response variations are modest and gentle (and the quoted +/-3dB spec is, again, normal and reasonable) there won't be any problems. The effect of the room and its contents on the speakers' responses at low frequencies and through the mid-range will be orders of magnitude larger than any built-in variations anyway.

Plotting the discrepancies and using EQ to adjust the respective frequencies is a nice idea, and many people have tried it over the decades, but the situation is more complex than simple EQ can resolve, and the inherent phase shifts involved in conventional analogue EQ often make the problem worse rather than better.
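To see why magnitude correction and phase are tied together in a conventional (minimum‑phase) EQ stage, consider even the simplest first‑order filter. This Python sketch evaluates its complex response at the cutoff frequency, where the familiar 3dB magnitude change is inescapably accompanied by a 45‑degree phase shift:

```python
import numpy as np

def one_pole_lowpass_response(f, fc):
    """Complex frequency response H(f) = 1 / (1 + j*f/fc) of a simple
    first-order (RC-style) low-pass filter -- a stand-in for any
    conventional analogue EQ stage. Any change in magnitude comes
    with a corresponding change in phase."""
    return 1.0 / (1.0 + 1j * f / fc)

h = one_pole_lowpass_response(1000.0, 1000.0)
print(abs(h), np.degrees(np.angle(h)))  # at cutoff: ~0.707 (-3dB), -45 degrees
```

Stack several such stages to flatten a speaker's measured magnitude response and the accumulated phase shifts can easily do more audible damage than the original ripples.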
As conventional analogue EQ isn't really precise enough, several speaker manufacturers have used DSP techniques to try to correct for speaker response anomalies, but I'm not convinced there is any real benefit. Well-designed speakers using high-quality components rarely need this kind of correction, and applying it to budget speakers is a bit like putting a sticking plaster on a major knife wound; it doesn't fix the underlying problems.

There are also many digital room‑acoustics correction devices on the market that attempt to address the in-room performance of a speaker system by levelling the response (amongst other things), but while they often make a difference, I feel the traditional acoustic treatment approach provides generally better and more consistent results.

The idea of trying to use EQ to compensate for perceived shortcomings in the frequency response of your monitors might seem tempting, but it can actually make things worse. Addressing your mixing room instead, using correctly placed acoustic treatment, will usually yield much more sensible and consistent improvements.

In conclusion, I would not recommend trying to apply inverse EQ to your monitor chain, as it's likely to make things worse rather than better. The frequency response of the speakers you have is adequately flat for a design of this type. You will gain a far greater improvement in sound quality by addressing the acoustics of your listening environment, treating the mirror points with broadband absorbers and installing bass traps to control the room modes.


Published July 2011

Saturday, November 4, 2017

Q. What’s the noise coming from my Slate Pro Dragon?

I value SOS's opinion very highly and, when I wanted to add a versatile, quality compressor to my arsenal, I thought about the review in SOS July 2010 of the Slate Pro Dragon [see /sos/jul10/articles/slateprodragon.htm for the full review]. On the first unit I got I identified something that I thought was weird, so I got it replaced — but the second unit displays the same behaviour, which apparently hasn't been noted by anyone, so I'm kind of puzzled.

The noise that reader Eric heard coming from his Slate Pro Dragon is most likely from the transformer. It's very common in modern devices with these kinds of transformers, and is nothing to worry about.

The behaviour is this: if I send a track to the unit (it's more obvious with something like a guitar or vocals) with a normal level, have the input at around six, the output at around three to four, and the saturate knob on three, you can hear the track feeding the unit acoustically from within the Dragon itself. What I mean is, if you don't even connect the output of the Dragon to anything, and there's not a single monitor turned on, you clearly hear the sound feeding the unit, produced by something acting as a transducer inside the Dragon. When the saturate knob is on a lower setting, you really have to put your ear on the unit to hear something, but it's actually there.

It's so strange that, after having seen this on the first unit, I contacted Slate Audio, but, apparently, they were not aware of this either. That's why I thought that this first unit had a problem. So I was hoping you might have noticed something during your review.

Eric Robert via email

SOS Technical Editor Hugh Robjohns replies: I would suspect that the output transformer is rattling: the audio signal passing through it creates a varying magnetic field, which causes the laminations to vibrate slightly in sympathy, generating an acoustic output. Depending on the way the transformer is mounted, those vibrations can be amplified acoustically by the circuit board or case metalwork and become surprisingly audible. It's the same thing that makes small mains power transformers buzz annoyingly in so much modern equipment.

It's not unusual, and it's nothing to be concerned about. I hadn't noticed it in the review model, but I'm not surprised at that: I expect there was always enough other noise going on while it was on test to mask the effect. I have come across it in many other products, though; it's really not that unusual in devices with output transformers.


Published June 2011