Welcome to No Limit Sound Productions

Company Founded
2005
Overview

Our services include Sound Engineering, Audio Post-Production, System Upgrades and Equipment Consulting.
Mission
Our mission is to provide excellent quality and service to our customers. We offer customised service.

Saturday, March 31, 2018

Q. Are there different types of MP3?

By Sam Inglis

I have downloaded various MP3s from the Internet. I also buy vinyl, which I record into Steinberg Wavelab and convert to MP3. The Wavelab MP3s sound different to the downloaded ones and are generally quieter. Are there different MP3 types and how can they differ sonically?

Eddie Howell

 

Features Editor Sam Inglis replies: The answer to your question is 'yes'.

Firstly, the MPEG Layer 3 format (MP3 for short) supports different levels of data compression. Perhaps the most common is 128kbps (kilobits per second), but 48, 56, 64, 96, 192 and more are all in use. The lower the bit-rate, the more extreme the compression, and the more obvious its audible consequences.

Secondly, there are two types of MP3 coding — constant and variable bit-rate. In the former, the data compression is applied 'evenly' to an entire audio file, so the compressed version will use the same amount of data to encode each 'frame' of the file. In the latter, the limited amount of data available is used more efficiently. Some parts of an audio file will be more complex than others and will require more data to encode without audible side-effects, so variable bit-rate encoding 'saves up' data from less demanding passages to code these more accurately. As a result, a variable bit-rate MP3 usually sounds better than a constant bit-rate one for a given amount of data reduction.

Thirdly, and most fundamentally, different encoders can produce different results. The basic function of an MP3 encoder is to take an audio file and output a data file that conforms to certain requirements. A decoder does the reverse — it takes a data file and 'reconstitutes' it as audio. However, the MP3 format doesn't specify exactly how the encoding should take place, and programmers have developed a number of different encoders, which make different decisions about what parts of the audio to discard when creating an MP3 file. The original 'Fraunhofer codec' is one of the most widely used, but there are numerous others, and you will certainly notice the difference between them even on MP3s coded at the same bit-rate.

For more detail, take a look at www.mp3-converter.com/mp3codec/implementation.htm.


Published May 2004

Thursday, March 29, 2018

Q. What can I do to improve my stereo recording setup?

By Hugh Robjohns
I record brass bands regularly using a stereo mic setup (X/Y or spaced pair). The people I record for are always happy with the results, but I feel I can do better. The sound still doesn't come close enough to a commercial brass band recording. My current setup consists of two AKG C1000s mics, a Behringer 1804 mixer and compressor, and a Sony Minidisc. I know that some parts of my setup are not state-of-the-art, but I'm sure the results can still be better. Which mic placement should give the best results? How can I find out if I'm suffering from phase problems? I only use very subtle compression to cut off some peaks. At normal levels the compressor doesn't have to work at all. What could I do to make the sound brighter? Would better preamps contribute to a better-sounding record?

The Microtech Gefell M930's ORTF mounting bar allows precise mic placement. 

Ief Sels

Technical Editor Hugh Robjohns replies: The first thing I'd say is that the C1000s doesn't have a particularly good high-frequency response. Brass instruments have a very strong harmonic content which is critical to their sound, so I fear that your mics are not ideal for the task. The other thing to mention straightaway is that the Minidisc format uses a data reduction system which discards a lot of subtle information, and this will affect the perceived quality of your recordings as well.

In terms of mic placement, coincident and spaced mic arrays produce different kinds of stereo imaging. Choose the configuration that you feel sounds best in each location. Personally, given a pair of cardioid mics I'd probably start with an ORTF arrangement (named after the French broadcasting network, Office de Radiodiffusion Télévision Française), which largely combines the best of both coincident and spaced configurations. Angle the mics outwards at 110 degrees to each other (in other words, 55 degrees each side of the centre axis), and space the capsules about 17cm apart (see diagram). It depends on the room to a degree, but I find this arrangement usually gives good imaging with a nice sense of spaciousness.
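As a rough illustration of the timing geometry behind spaced (and ORTF) pairs, the Python sketch below estimates the arrival-time difference between two capsules for an off-axis source. The function name and the plane-wave approximation are mine, not from the article:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C

# For a distant (plane-wave) source at an angle theta from the centre
# line, the extra path to the far capsule of a pair spaced d metres
# apart is approximately d * sin(theta).
def interchannel_delay_ms(spacing_m, source_angle_deg):
    extra_path = spacing_m * math.sin(math.radians(source_angle_deg))
    return 1000.0 * extra_path / SPEED_OF_SOUND

# ORTF spacing of 17cm, source 30 degrees off centre:
print(round(interchannel_delay_ms(0.17, 30.0), 3))  # ~0.248 ms
```

A fraction of a millisecond sounds tiny, but it is enough to shift the relative phase of mid and high frequencies substantially between the two channels.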

You'll always risk encountering phase problems using spaced microphones, since the two mics capture sound from a given source at different times, and hence with different phases. However, I've never had a problem of this kind with the ORTF configuration, although whether or not you'll suffer phase problems in a specific venue is hard to predict. To find out, simply listen in mono. If the mono sound is noticeably coloured compared to the stereo sound then you have phase problems and the only practical solution is to revert to a coincident (X/Y) mic arrangement. Coincident mics don't suffer phase problems at all since the two mics capture sounds from any direction at exactly the same time (and hence phase).
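Those arrival-time differences are exactly what colours the mono sum: when two delayed copies of a signal are added, cancellation notches appear at regular frequency intervals. This small sketch (illustrative only, not a measurement of any real array) computes where the nulls fall for a given delay:

```python
# When two otherwise identical channels with a relative delay of t
# seconds are summed to mono, cancellation nulls appear at odd
# multiples of 1 / (2 * t).
def comb_null_freqs(delay_s, count=3):
    f0 = 1.0 / (2.0 * delay_s)
    return [f0 * (2 * k + 1) for k in range(count)]

# A 1ms inter-channel delay puts the first null at 500Hz:
print(comb_null_freqs(0.001))  # [500.0, 1500.0, 2500.0]
```

The shorter the delay, the higher the first null, which is one reason the modest 17cm spacing of ORTF tends to stay usable in mono.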

Finally, I wouldn't record with a compressor at all. Most compressors reduce brightness of the material when driven and that's the last thing you need here. It sounds like you are only using the compressor as a security blanket anyway, so instead, why not simply leave a few more decibels of headroom in your recording. That way you won't have to worry about peak overloads while recording, and you can adjust the overall dynamic range and levels as a post-production treatment, when you know exactly what you have to play with, and can make such critical decisions without committing them forever to the original recording.

To answer your last question, yes, better preamps will contribute to a better-sounding recording, but it's a case of degree. Using more appropriate mics and recording with an uncompressed format will probably have a more significant effect on the quality of your recordings and I would suggest that that should be your first approach. Almost any of the small-diaphragm condenser mics on the market would probably give you better results. I'd suggest the Rode NT5s or Sennheiser e664 (superb value for money but soon to be discontinued), or maybe the SE Electronics SE1s if your budget is tight. After that, maybe think about changing the recorder for something with better converters, decent preamps and a linear recording format. The new Fostex FR2 would be an excellent (if still relatively expensive) choice for a hardware recorder, or you could think about using a USB or Firewire interface box and recording directly to a laptop computer.



Published May 2004

Tuesday, March 27, 2018

Q. Do I need a mic with a HPF and pad?

By Hugh Robjohns
I've just been looking at buying a cheap condenser mic and I was wondering how useful it is to have the low-frequency cut and 10dB pad options that are on some mics. I assume the low-frequency cut is there to cut out bumps and rumble from knocking the mic stand. Is this still a problem when using a shockmount? How essential is it to have these facilities on a cheap mic?

Nick Bramwell

Technical Editor Hugh Robjohns replies: The 10dB (or sometimes 20dB) pad on capacitor mics is intended to prevent the head amplifier (the part of the mic which amplifies the signal picked up by the diaphragm) from overloading when the mic is placed in front of very loud sound sources. Obviously, if the head amp overloads there is nothing you can subsequently do at the desk or preamp to rectify the problem. If recording loud sources like close-miked drums or brass instruments is something you intend to do, then a pad switch will be very useful.

Low-cut and 10dB pad switches, as found on the Groove Tubes GT67 and GT55, are a useful feature, though not essential. 

The low-frequency cut is a more complex matter. Different manufacturers intend this 'low-cut' filter to serve different functions. Some do indeed provide a high-pass filter (HPF) intended specifically to reduce low-frequency vibration and rumbles. Again, in severe situations, low-frequency energy picked up mechanically by the mic can cause overloads in the head amplifier, and hence building the filter into the mic amp circuit can be very beneficial. In more typical situations, though, the HPF on the mixing desk, preamp or recording channel is usually as effective.

However, it is worth being aware that some manufacturers provide a 'low-cut' filter which is intended instead to help compensate for the rise in bass energy when a source is close miked — the proximity effect. In this situation, the 'low-cut' filter will not be so effective in removing mechanical rumbles and the like. So, it is worth finding out what kind of 'bass-cut' filtering is provided — it should be listed in the manufacturer's specs.

If you're using a decent elastic suspension shockmount, you may well find that you don't need to use the HPF. There's a good case for not using it unless you have to — unlike head amp overloads, which you're stuck with, the decision to filter out low frequencies can be made later on, should it be necessary.


Published November 2003

Saturday, March 24, 2018

Q. Can I flatten out my finished tracks using a hardware compressor?

By Mike Senior
TC Triple*C multi-band compressor.

In my hardware-based setup, with my TC Electronics Triple*C compressor, is it possible to do the kind of limiting on a full mix where you end up with a waveform that is levelled off at the top and bottom, 'brick wall'-style? Also, when recording the co-axial digital output from the Triple*C onto my hi-fi CD recorder, what should the Triple*C's dither setting be if my source is a 24-bit Tascam 788?

SOS Forum Post

Reviews Editor Mike Senior replies: If you're after a waveform which is levelled off at the top and the bottom, then simply clip the output of the processor by cranking up the make-up gain control. To make this slightly less unpleasant on the ear, make sure that the Soft Clip option is on. However, you've got to ask yourself why you're wanting to do this. Although short-term clipping usually doesn't degrade pop music too much, it's really easy to go overboard and do serious damage to your audio if you're not careful. I'd advise doing an un-clipped version as well as the clipped version for safety's sake. You've got to ask yourself just how well your monitoring system compares to the one in a dedicated mastering studio — you should always let your ears be the judge, but remember that your monitors, combined with the room they are in, may not be giving you sufficient information to make an informed decision.

If you're after maximum loudness, then clipping isn't going to get you all the way there in any case. Use the Triple*C's multi-band compressor as well — set an infinity ratio, switch on lookahead, and make the attack time as fast as possible. Adjust the threshold and release time to taste. Make sure that you're aware of what the thresholds of the individual compression bands are doing as well (they're set in the Edit menu), as you might want to limit the different bands with different thresholds. Switch on Soft Clip and set the low level, high level, and make-up gain controls for the desired amount of clipping. Once again, make sure to record an unprocessed version for posterity as well, because you may well overdo things first time, or in case you get access to a dedicated loudness maximiser such as the Waves L2 in the future.

The Triple*C's dithering should be set to 16-bit, because you should set it according to the destination bit-depth, not the source bit-depth. The CD recorder will be 16-bit, so set the dithering to the 16-bit level.
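For the curious, requantising with dither can be sketched in a few lines. This is a generic TPDF (triangular probability density function) dither illustration, not the Triple*C's actual algorithm:

```python
import random

# Generic TPDF dither sketch: before discarding the bottom 8 bits of a
# 24-bit sample, add two independent rectangular noise values of +/-
# half a 16-bit step, then round to the nearest 16-bit value. The sum
# of two rectangular distributions gives the triangular PDF.
STEP = 256  # one 16-bit step expressed in 24-bit counts (2**8)

def requantise_16bit(sample_24, dither=True):
    noise = 0.0
    if dither:
        noise = (random.random() - 0.5) * STEP + (random.random() - 0.5) * STEP
    return int(round((sample_24 + noise) / STEP))

# Undithered, this is plain rounding to the coarser grid:
print(requantise_16bit(1_000_000, dither=False))  # 3906
```

The dither randomises the rounding decision so that the quantisation error becomes benign noise rather than programme-correlated distortion.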

Published December 2003

Thursday, March 22, 2018

Q. Is it safe to apply phantom power to dynamic mics?

By Hugh Robjohns

I did a recording session recently using a mixture of dynamic and condenser mics, and realised my desk does not have switchable phantom power for each individual channel — they're either all on or all off. Luckily, I had a second mixer and some external channel strips which I ran the condensers through, but is it safe to apply phantom power to dynamic mics?

SOS Forum Post
Technical Editor Hugh Robjohns replies: People get very hung up about phantom power. As long as your mic cables are all wired properly (balanced, with the correct pin connections) and well made, and you are using decent XLRs everywhere — and all your microphones are modern — there is no problem at all.

In BBC radio and TV studios, for example, phantom power is provided permanently on all wall box connections. It cannot be turned off. And engineers are plugging dynamic, condenser and even ribbon mics in and out all day without any problems whatsoever.

Clearly, it is vital that dynamic and ribbon mics are properly balanced internally and well maintained, but this should be a given with any modern mic. The female connectors on good-quality XLR cables should have the contact of the earth pin socket (pin 1) slightly forward of the other two so that the earth contact mates first, and are designed so that the other two pins mate simultaneously. There is therefore little chance of subjecting the mic to significantly unbalanced phantom voltages.

There will be a loud 'splat' over the monitors when connecting a condenser mic as the circuitry powers up, but it is good practice to always keep the channel fader down when plugging in mics anyway. I don't disagree that plugging mics in with phantom off is a safe way of working, but I have never really bothered about it, and have never destroyed a mic yet — not even a ribbon, and I've used a lot of those over the years.

It's perfectly safe to apply phantom power to modern ribbon mics, like the Oktava ML52, and dynamic mics, like the Sennheiser e903, provided you use good quality XLR-XLR cables. 

The important caveat with ribbon mics is that plugging them into circuits carrying phantom power is only safe if the mics in question are compatible with it. Some vintage ribbon mics employ an output transformer which is centre-tapped, and that centre tap is earthed. This arrangement essentially short-circuits the phantom power supply and can cause damaging currents to flow through the transformer, potentially magnetising it or even burning it out (although that is extremely unlikely). So it is sheer lunacy to be using vintage ribbon mics with centre-tap grounded transformers in an environment where phantom power is also used. Sooner or later, a ribbon will get plugged into a phantom supply by accident and will be permanently damaged. If you want to use vintage ribbons with centre-tap transformers in the same room as phantom-powered condensers, get the ribbons modified before it's too late.

The bottom line is that all modern mics with balanced outputs terminated with XLRs, whether they be dynamics (moving-coils and ribbons) or electrostatics (condensers and electrets), are designed to accommodate phantom power, and can be plugged in quite happily with phantom power switched on, provided you are connecting XLRs, not jack plugs/sockets. Some vintage ribbon mics, and any mic wired for unbalanced (sometimes also referred to as high-impedance) operation, will be damaged by phantom power unless suitably modified.


Published January 2004

Tuesday, March 20, 2018

Q. Are all faders created equal?

By Hugh Robjohns

I have noticed that different mixing consoles and multitrackers have different kinds of faders — long- and short-throw, motorised, touch-sensitive, conductive plastic, and so on. Clearly, not all faders are created equal, but what are the essential differences?

Trevor Cox

Technical Editor Hugh Robjohns replies: On the first sound mixing consoles, up until around the 1950s, faders were actually large rotary knobs, because that was all that the engineering of the day could manage. Rotary controls are very ergonomic to use — a simple twist of the wrist provides very precise and repeatable gain settings — but you can only operate two at once, because most people have only two hands. The level of the audio signal was changed by altering the electrical resistance through which it passed according to the fader position, and this changing resistance was usually achieved with a chain of carefully selected resistors mounted between studs, which were contacted via a moving wiper terminal connected to the rotary control. This arrangement typically provided 0.75dB between stud positions, so that as the control was rotated the gain jumped in 0.75dB steps — just below the amount of abrupt level change that most people can detect.

EMI REDD 17. 

The next stage was the quadrant fader popular through the 1960s and early 1970s. Superficially, this arrangement was much closer to the concept of a fader which we have today, except that the knob on top of the fader arm travels along a curved surface rather than a flat one. You can see two pairs of four quadrant faders in the central section of the EMI REDD 17 desk pictured here. The advantage of this new approach was that the mechanism was quite slim, so that these quadrant faders could be mounted side by side with a fader knob more or less under each finger of each hand. This allowed the operator to maintain instant control of a lot more sources at once. Again, a travelling wiper traversed separate stud contacts with resistors wired between them to create the required changing resistance.

The more familiar slider-type fader we all take for granted today was developed in the 1970s, with the control knob running on parallel rails to provide a true, flat fader. By this time the stud terminal had been replaced in professional circles by a conductive plastic track, which provided far better consistency and a longer life than the cheaper and simpler carbon-deposit track used in cheaper rotary controls and faders. However, both of these mechanisms provided a gradual and continuous change of resistance, rather than the step increments of the stud-type faders.

The mechanism of a slider fader is relatively complex, and economies can be made by using shorter track lengths, hence a lot of budget equipment tends to employ 'short-throw' faders of 60mm or so, rather than the professional standard length of 104mm. Obviously, the longer the fader travel, the greater the precision with which it can be adjusted.

Fader technology has come a long way since the early '60s, when the EMI REDD 17 valve desk shown above (in Toe Rag Studios) was the latest thing. Compare and contrast with this month's cover product, the Korg D32XD... 

With the introduction of multitrack recording, mixing became increasingly complex and mix automation systems started to emerge in the late 1970s and 80s. Initially, these employed voltage-controlled amplifiers to govern the signal levels of each channel, rather than passing the audio through a fader's track — the fader simply generated the initial control voltage. However, the performance of early VCAs wasn't very good, and motors were eventually added to the faders so that the channel levels could once again be controlled directly by the fader track. Besides the benefits in audio quality, this approach also enabled the engineer to see what the mix automation was doing on the desk itself, rather than just on a computer screen. Conductive knobs were also introduced so that the fader motor control system would know when a fader was being manipulated by hand, and so drop the appropriate channels into automation-write mode while simultaneously disabling the motor drive control so that the fader motors wouldn't 'fight' the manual operation.

When digital mixing consoles were developed, the audio manipulation was performed in a DSP somewhere, so audio no longer passed through the faders. Some systems use essentially analogue faders to generate control voltages — much like the early VCA automation systems — but the control voltages are then translated into a digital number corresponding to the fader position with a simple A-D converter. This fader position number is used as the multiplying factor to control the gain multiplications going on inside the DSP. Some more sophisticated systems employ 'digital faders', many of them using contact-less opto-electronics. A special 'barcode' is etched into the wall of the fader, and an optical reader is fixed below the fader knob so that as the fader is moved, the reader scans the barcode to generate a digital number corresponding to its position, which, in turn, controls the DSP.

Being digital, the faders output a data word, and the length of this word (the number of bits it comprises) determines the resolution with which the fader's physical position can be stated. Essentially, the longer the data word, the greater the number of steps into which the length of the fader's travel can be divided. More subdivisions, in turn, mean more precision in the digital interpretation of the movement of the fader knob. Audio faders are typically engineered with eight-bit resolution, providing 256 levels, but some offer 10-bit resolution, which translates as 1024 different levels. In crude terms, as an audio fader needs to cover a practical range of, say, 100dB, then an eight-bit fader will provide an audio resolution of roughly 0.4dB per increment. In other words, the smallest change of level that can be obtained by moving the fader a tiny amount would be about 0.4dB. A 10-bit fader would give 0.1dB resolution per increment, but both figures are well below the typical level change that people can hear. In practice, there is also a degree of interpolation and smoothing performed by the DSP, so the actual level adjustment tends to be even smoother, and 'stepping' is rarely, if ever, audible in modern, well-designed systems.
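The arithmetic here is easy to check. A quick sketch, assuming the 100dB practical range mentioned above:

```python
# Resolution of a digital fader covering a given dB range: the range
# is divided into 2**bits equal steps.
def db_per_step(bits, range_db=100.0):
    return range_db / (2 ** bits)

print(round(db_per_step(8), 2))   # 0.39 dB per step for 8-bit (256 levels)
print(round(db_per_step(10), 2))  # 0.1 dB per step for 10-bit (1024 levels)
```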

One other thing worth mentioning at this point is that the fader's resolution — whether it's a digital or analogue fader — changes with fader position. The fader law is logarithmic so that a small physical change of position while around the unity gain mark on the fader (about 75 percent of the way to the top, usually) changes the signal level by a fraction of a dB, whereas the same physical movement towards the bottom of the fader might change the signal level by several dBs. This is why it is important to mix with the faders close to the unity gain mark, since that is where the best resolution and control are to be found.
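The effect of a logarithmic law can be illustrated with a toy fader curve. The taper below is hypothetical (real console laws differ in detail), but it shows how the same physical movement produces very different level changes at different positions:

```python
import math

# Illustrative fader law (roughly an audio taper, gain proportional to
# position**4), with unity gain placed at 75% of travel. Real console
# laws vary; this is only a sketch of the general shape.
def fader_gain_db(position, unity_pos=0.75):
    return 80.0 * math.log10(position / unity_pos)

# The same 5% physical movement changes the level very differently:
near_unity  = fader_gain_db(0.80) - fader_gain_db(0.75)
near_bottom = fader_gain_db(0.10) - fader_gain_db(0.05)
print(round(near_unity, 1), round(near_bottom, 1))  # ~2.2 dB vs ~24.1 dB
```

Hence the advice to keep faders near their unity marks: that is where a given hand movement maps to the finest level control.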

Going back to the idea of the touch-sensitive fader, which was first developed for fader automation systems, this has also become popular in digital consoles which use assignable controls. By touching a fader, the assignable controls can be allocated to the corresponding channel, obviating the need to press a channel select button and, in theory at least, making the desk more intuitive and quicker to operate. However, if you are in the habit of keeping a hand on one fader while trying to adjust another, this touch-sensitive approach can be a lot more trouble than it is worth. Fortunately, most consoles allow the touch-sensitive fader function to be disabled in the console configuration parameters.


Published December 2003

Saturday, March 17, 2018

Q. What's the difference between floating- and fixed-point systems?

By Hugh Robjohns

Could you clarify the difference between floating- and fixed-point 32-bit operation in the digital domain? I know that floating-point systems allow for data to be handled at word lengths above 24-bit, which are then dithered back down. Does it also result in a greater dynamic range?

SOS Forum Post

Technical Editor Hugh Robjohns replies: Accurate digital audio capture and reproduction requires, at the very most, 24-bit resolution. The reasoning behind this is that a 24-bit signal has a theoretical dynamic range of 144dB, which is greater than the dynamic range of the human ear, so, in theory, a 24-bit system can record sounds slightly quieter than those we can hear and reproduce sounds louder than we can stand. There is therefore no need for A-D/D-A converters to work at resolutions higher than 24-bit.
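The 144dB figure follows from the usual rule of thumb of roughly 6.02dB of theoretical dynamic range per bit, which is easy to verify:

```python
import math

# Theoretical dynamic range of an N-bit linear PCM system:
# 20 * log10(2**N), i.e. about 6.02 dB per bit.
def dynamic_range_db(bits):
    return 20.0 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))  # ~96.3 dB (CD)
print(round(dynamic_range_db(24), 1))  # ~144.5 dB
print(round(dynamic_range_db(32), 1))  # ~192.7 dB (32-bit fixed point)
```

The same formula at 32 bits gives the 192dB internal range of fixed-point processing discussed below.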

High-end digital consoles like the Sony DMX R100 use 32-bit floating-point processing, giving them almost limitless headroom. 

However, when it comes to processing sound within a digital system, there needs to be some headroom to accommodate the fact that adding two 24-bit numbers together can produce a result which can only be described using 25 bits, and adding 30 or 40 such numbers together can produce something even bigger. At the other end of the scale, the mathematical calculations involved in complex signal processing like EQ generates very small 'remainders', and these have to be looked after properly, otherwise the EQ process effectively becomes noisy and distorted. The natural solution is to allocate more bits for the internal maths — hence 32-bit systems.

Fixed-point systems use the 32 bits in the conventional way to provide an internal dynamic range of about 192dB. Systems that use fixed-point 32-bit processing (like the 0-series Yamaha desks) usually arrange for the original 24-bit audio signal to sit close to the top of that 32-bit processing number to provide a lower noise floor and slightly greater headroom for the signal processing. (Incidentally, a 192dB SPL is roughly equivalent to two atmospheres' pressure on the compression of the wave and a complete vacuum on the rarefaction.)

Floating-point systems also use 32-bit numbers, but organise them differently. Essentially, they keep the audio signal in 24-bit resolution, but use the remaining bits to denote a scaling factor. In other words the 24-bit resolution can be cranked up or down within a colossal internal dynamic range so that, in effect, you can never run out of headroom or fall into the noise floor — there is something like 1500dB of dynamic range within the processing, if the maths is done properly.
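The 'something like 1500dB' figure can be checked against the range of IEEE 754 single-precision normal numbers (the constants below are the standard values, quoted to a few digits):

```python
import math

# IEEE 754 single-precision normal numbers run from about 1.18e-38 to
# 3.40e+38; expressed as a level ratio in decibels:
FLT_MAX = 3.4028235e38   # largest finite single-precision value
FLT_MIN = 1.1754944e-38  # smallest normal single-precision value

span_db = 20.0 * math.log10(FLT_MAX / FLT_MIN)
print(round(span_db))  # ~1529 dB
```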

Most high-end consoles and workstations employ floating-point maths because (if properly implemented) you can get better performance and quality in the computations. Most budget/low-end consoles and DAWs use fixed-point processing because it's easier and faster, and can be implemented in hardware more easily.


Published January 2004

Thursday, March 15, 2018

Q. Do I need to address the gain structure differently in digital and analogue consoles?

By Hugh Robjohns & Mike Senior

I was told by a sound engineer that, when mixing, it is not a good practice to have all the channel volume faders way up and to have the master fader down, and that this applies both to analogue and digital consoles. Is this true? I thought that if none of the channels is clipping, then there should be no problem with putting them up and keeping the master fader at a lower level.

SOS Forum Post
Q&A Gain staging optimisation. 

Technical Editor Hugh Robjohns replies: This is all a question of optimising the gain structure — or, in other words, setting the correct level at each stage of the signal path from start to finish in order to avoid overloads or deterioration in sound quality — something which is rather more critical when using an analogue console than when using a digital one.

In an analogue desk, the most likely point of overload when mixing is at the mix buss amps, so you have to set the gain structure of the front-end (voice channels, mic preamps and so on) and mixer channels correctly to optimise headroom and noise floor when these signals meet at the mix buss. Lowering the master output fader won't affect the mix buss in any way, since the master fader comes after this part of the circuitry.

Q&A fader levels.
So, optimise the channel input gains with the channel faders at their unity positions, and pull down the channel faders (or reduce the input gains) if the mix buss gets too hot. In a recording/mix environment the master fader should ideally be on its unity gain position at all times unless you are fading out. In live sound situations you may want to have the master fader rather lower than unity to enable better level control of the PA system, leaving room to gradually crank the level up through the set, for example, or after the support act!

In the case of digital consoles, the situation is rather different, because the signal processing is done in a different way. In systems which employ floating-point maths, you cannot overload the notional mix busses however hard you try. With fixed-point systems the headroom is considerably less and it is potentially possible to overload mix busses, which is one reason why fixed-point digital console makers like Yamaha provide a digital attenuator before the EQ section of each channel.

Reviews Editor Mike Senior adds: I'd definitely agree with all that Hugh has said, although there's another thing which needs to be taken into consideration too: fader resolution. Most faders — analogue and digital — have their maximum resolution around the unity gain mark. This means that, around unity gain, moving the fader by a small amount effects a small change in level, while moving the fader by the same distance at the very bottom of its travel, where resolution is much lower, equates to a much larger change in level. When you're mixing (particularly with automation) very small changes in level can become very important, so it's useful to have the channel faders near the unity gain position for that reason.

The only way to do this and also keep the master fader near unity, however, is by using the channel input gains (or the digital channel attenuators on a fixed-point digital console) to get your mix into the right ballpark initially, only tweaking the faders once a rough balance has been set. On some digital consoles you can assign the attenuator levels to the faders temporarily to make this initial process simpler. If no channel input trimmer is available, you can sometimes adjust the output levels of the individual tracks from the multitrack recorder (or whatever your source for the mix might be) instead.

Unfortunately, many manufacturers don't provide the pre-EQ digital attenuators that Hugh mentions in their fixed-point systems, especially at the lower end of the market — all-in-one multitrackers in particular often don't include them — in which case you have no choice but to keep the channel faders quite low if you're going to avoid overloads, at the expense of mixing resolution. In the case of Roland's VS880, for example, there's no more headroom on the mix buss than on the individual channels, so you can easily encounter clipping at the mix buss. Furthermore, as on an analogue console, turning the master fader down doesn't avoid the clipping, it just reduces the level of the clipped signal!
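Mike's point about the master fader can be demonstrated with a toy fixed-point mix buss. This is deliberately simplified to 16-bit for brevity and is not the VS880's actual architecture:

```python
# Toy fixed-point mix buss: summing several hot channels clips at the
# buss word width, and scaling the result *afterwards* (the master
# fader) cannot undo the damage, only make the clipped signal quieter.
def mix(channels):
    s = sum(channels)
    return max(-32768, min(32767, s))  # hard clip at 16-bit limits

hot = [20000, 20000, 20000]   # three loud channels
clipped = mix(hot)            # true sum is 60000, clipped to 32767
master_down = clipped * 0.5   # lowering the 'master' after the buss
print(clipped, master_down)   # still a clipped waveform, just quieter
```

Turning the channel levels down before the sum, by contrast, avoids the clip entirely, which is exactly the gain-staging discipline described above.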


Published January 2004

Wednesday, March 14, 2018

Q. Do I need to address the gain structure differently in digital and analogue consoles?

By Hugh Robjohns & Mike Senior

I was told by a sound engineer that, when mixing, it is not a good practice to have all the channel volume faders way up and to have the master fader down, and that this applies both to analogue and digital consoles. Is this true? I thought that if none of the channels is clipping, then there should be no problem with putting them up and keeping the master fader at a lower level.

SOS Forum Post

Technical Editor Hugh Robjohns replies: This is all a question of optimising the gain structure — or, in other words, setting the correct level at each stage of the signal path from start to finish in order to avoid overloads or deterioration in sound quality — something which is rather more critical when using an analogue console than when using a digital one.

In an analogue desk, the most likely point of overload when mixing is at the mix buss amps, so you have to set the gain structure of the front-end (voice channels, mic preamps and so on) and mixer channels correctly to optimise headroom and noise floor when these signals meet at the mix buss. Lowering the master output fader won't affect the mix buss in any way, since the master fader comes after this part of the circuitry.

So, optimise the channel input gains with the channel faders at their unity positions, and pull down the channel faders (or reduce the input gains) if the mix buss gets too hot. In a recording/mix environment the master fader should ideally be on its unity gain position at all times unless you are fading out. In live sound situations you may want to have the master fader rather lower than unity to enable better level control of the PA system, leaving room to gradually crank the level up through the set, for example, or after the support act!

In the case of digital consoles, the situation is rather different, because the signal processing is done in a different way. In systems which employ floating-point maths, you cannot overload the notional mix busses however hard you try. With fixed-point systems the headroom is considerably less and it is potentially possible to overload mix busses, which is one reason why fixed-point digital console makers like Yamaha provide a digital attenuator before the EQ section of each channel.
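For readers who like to see the arithmetic, here's a rough Python sketch (purely illustrative, not any real console's DSP) of why a floating-point mix buss can't be made to clip internally while a fixed-point one can:

```python
# Toy model: summing four hot channels on a headroom-limited
# fixed-point buss clips, while a floating-point buss keeps the
# true value, which can be trimmed afterwards without damage.

FIXED_MAX = 1.0  # notional 0dBFS ceiling of a fixed-point buss

def fixed_point_sum(samples):
    """Sum then hard-clip, as a headroom-limited fixed-point buss would."""
    total = sum(samples)
    return max(-FIXED_MAX, min(FIXED_MAX, total))

def floating_point_sum(samples):
    """Floating-point maths: the intermediate sum may exceed 0dBFS freely."""
    return sum(samples)

channels = [0.9, 0.9, 0.9, 0.9]          # four hot channels
clipped = fixed_point_sum(channels)       # 1.0 -- information lost
intact = floating_point_sum(channels)     # ~3.6 -- fully recoverable
print(clipped, intact * 0.25)             # pulling the master down rescues it
```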

Reviews Editor Mike Senior adds: I'd definitely agree with all that Hugh has said, although there's another thing which needs to be taken into consideration too: fader resolution. Most faders — analogue and digital — have their maximum resolution around the unity gain mark. This means that, around unity gain, moving the fader by a small amount effects a small change in level, while moving the fader by the same distance at the very bottom of its travel, where resolution is much lower, equates to a much larger change in level. When you're mixing (particularly with automation) very small changes in level can become very important, so it's useful to have the channel faders near the unity gain position for that reason.

The only way to do this and also keep the master fader near unity, however, is by using the channel input gains (or the digital channel attenuators on a fixed-point digital console) to get your mix into the right ballpark initially, only tweaking the faders once a rough balance has been set. On some digital consoles you can assign the attenuator levels to the faders temporarily to make this initial process simpler. If no channel input trimmer is available, you can sometimes adjust the output levels of the individual tracks from the multitrack recorder (or whatever your source for the mix might be) instead.

Unfortunately, many manufacturers don't provide the pre-EQ digital attenuators that Hugh mentions in their fixed-point systems, especially at the lower end of the market — all-in-one multitrackers in particular often don't include them — in which case you have no choice but to keep the channel faders quite low if you're going to avoid overloads, at the expense of mixing resolution. In the case of Roland's VS880, for example, there's no more headroom on the mix buss than on the individual channels, so you can easily encounter clipping at the mix buss. Furthermore, as on an analogue console, turning the master fader down doesn't avoid the clipping, it just reduces the level of the clipped signal!
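That last point is easy to demonstrate numerically. A toy Python sketch (hypothetical sample values, nothing to do with the VS880's actual processing):

```python
# Once a buss has clipped, pulling the master fader down merely
# scales the already-flattened waveform -- the overs are not undone.

def clip(x, limit=1.0):
    return max(-limit, min(limit, x))

mix = [0.5, 1.4, -1.7, 0.8]                # buss signal with two overs
clipped = [clip(s) for s in mix]           # [0.5, 1.0, -1.0, 0.8]
master_down = [0.5 * s for s in clipped]   # quieter, but still flat-topped
print(clipped, master_down)
```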


Published January 2004

Monday, March 12, 2018

Q. With EQ, is it better to cut than to boost?

By Hugh Robjohns

I've been told that, if possible, you should always cut rather than boost when EQ'ing. So, for example, if you need more bass, you should cut the high- and mid-frequencies and raise the overall level, rather than simply boosting the low end. Is this true, and, if so, why?

Tom Brown

Technical Editor Hugh Robjohns replies: Firstly, let me say that the governing factor when applying EQ should be how it sounds, so if it sounds right — regardless of whether you have used boosts or cuts, or both — it is right! Secondly, if you weren't meant to boost signals the designers of console and stand-alone equalisers wouldn't have provided their EQ controls with a boost side! So rest assured that boosting is allowed, and you won't be dragged off to the Sound Processing jail to be shot at dawn if you get caught using boosts.

Having said all that, I generally only use EQ boosts when I need to apply a relatively small amount of gentle, wide-bandwidth tonal shaping. So if the bass end needs a little lifting, then I would probably boost the bass a little using a wide-bandwidth shelf equaliser. On the other hand, if I needed to do something more dramatic in the way of tonal shaping or corrective EQ'ing, I'd almost certainly try to do it with cuts. The usual technique is to wind in a fair bit of boost and then dial through the frequency range of a parametric equaliser to help find the problem frequencies. Once located, reverse the gain control to cut the offending frequency area.

There are several good reasons why cutting is often a better idea than boosting, particularly when applying large amounts of EQ, such as is necessary when trying to correct the sound of something. The first is the issue of headroom in the EQ circuitry. Boosting quickly eats into the system headroom, and you risk transient distortion when fast peaks run out of headroom.
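The headroom argument is just arithmetic in decibels. A minimal sketch (illustrative figures only):

```python
# A band peaking at -4dBFS clips if boosted by 6dB, whereas cutting
# the other bands by 6dB leaves that peak, and the headroom, untouched.

def apply_gain_db(peak_dbfs, gain_db):
    """Peak level after a gain change, both in dB."""
    return peak_dbfs + gain_db

peak = -4.0                               # band peak level, dBFS
boosted = apply_gain_db(peak, +6.0)       # +2dBFS -> clipped transients
cut_elsewhere = peak                      # cutting other bands leaves this alone
print(boosted > 0.0, cut_elsewhere)
```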

Next, if you need to use high-Q (narrow-bandwidth) filters, the ear seems to be very sensitive to their effect when boosting, but surprisingly oblivious when cutting. The result is that the effect of the EQ is often far more subtle and less audible if cuts are used rather than boosts.

In the case of live sound, using EQ boosts tends to increase the risk of acoustic feedback. It's true that if you cut and then bring the level back up you are equally at risk, but in practice people tend to subconsciously know that raising the channel gain risks causing feedback and they take care. On the other hand, most people don't seem to associate turning an EQ gain knob up as adding gain — only as changing the tonality — with the result that as they try to make the vocalist sound a little sharper, the PA squeals embarrassingly!


Published January 2004

Friday, March 9, 2018

Q. How can I tell if a pair of mics is well matched?

By Hugh Robjohns

I bought a Rode NT1A a few months ago and I'm now considering buying a second one. How can I tell whether the two mics are closely matched enough for use as a stereo pair? In any case, how important is it that the two mics behave identically to achieve decent results in stereo recording?

SOS Forum Post

A few simple tests will establish whether a pair of mics, like these SE Electronics SE1s, are accurately matched. 

Technical Editor Hugh Robjohns replies: To answer the last part of the question first, if you plan to use the mics as a coincident pair (mics placed very close together, usually at 90 degrees to one another), stereo information is conveyed by small level differences between the left and right channels. Those level differences are generated by the combination of the angle of sound incidence to each mic and the mics' polar patterns, and any discrepancies caused by poorly matched frequency or polar responses will destroy the positional accuracy of the recording and cause a blurred, unstable stereo image.

When assessing the performance of a mic, there are three main characteristics to consider: sensitivity, frequency response and polar pattern. Here is a quick and easy way to compare the behaviour of a pair of mics, and hence gauge their suitability for use as a stereo pair. You will need an assistant to talk at the microphones, and a large enough room to be able to walk in a circle around the mics — ideally the room should be fairly dry-sounding too.

Rig the two mics on separate stands and arrange the first so that its front axis (ie. the most sensitive part of its polar response) is pointing horizontally forward. Position the second mic directly above the first, with its capsule as close above the first mic as possible, and arrange it to point in exactly the same direction. The gap between the two mics should be exactly at the level of the mouth of your helpful assistant, who will provide a test signal by talking directly on-axis to the mics from a couple of feet away. It is important to stay outside the region where the proximity effect starts to occur for this test, and two feet is usually a safe working distance. Check both mics are set to the same polar pattern (if switchable) and remove any high-pass filtering or pad settings.

Plug each mic into a separate channel on your mixer and pan both centrally. Make sure there is no EQ or dynamics processing being applied to either channel. If your mixer has a phase-reverse facility, switch it into the second channel; if not, use a phase-reversed balanced cable to achieve the same result. As both mics are effectively in the same physical place and facing the same way, they should be receiving the same acoustic signal. However, the use of phase reverse inverts one of the mics' outputs, so when (and if) the two signals are identical they should cancel each other out. We will be listening for how well that is achieved. A perfect match gives zero output!

Let's begin by assessing sensitivity. Have the assistant talk directly on-axis to the mics from two feet, with a constant voice level. It is often useful to give the assistant something to read to avoid the problem of running out of things to say — a handy copy of SOS is a good source, and guaranteed to keep the reader's interest!
Fade up the first mic channel to its nominal unity gain mark and set the input gain so that the voice peaks well up the scale. Listen to the sound quality and check that all is as you would expect. Then close the first mic fader, open up the second mic to the unity gain mark and adjust its channel gain to get roughly the same output level. Again, listen to the sound quality of this second mic and make sure it sounds as expected — exactly like the first mic, in fact. If it doesn't, you can save yourself the bother of going through the rest of this process!

Now, fade up both mics together. The phase reversal in the second channel means that the two signals should cancel each other out when their levels are identical, so fine-tune the gain of the second mic channel to get the deepest-level null (or silence) you can. If the mics are well matched for sensitivity, the gain controls for the two channels should end up in the same places.
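If you want to put a number on the null, the sums are straightforward. Here's an idealised Python sketch (it ignores frequency response and treats each mic as a simple gain, so the figures are illustrative only):

```python
import math

# Mic B is polarity-inverted and summed with mic A. Identical signals
# cancel completely; a sensitivity mismatch leaves a residual whose
# depth below the single-mic level is easy to compute.

def null_depth_db(gain_a, gain_b):
    """Depth of the null, in dB, for two mics with linear gains a and b."""
    residual = abs(gain_a - gain_b)
    if residual == 0:
        return float('inf')              # perfect cancellation
    return 20.0 * math.log10(gain_a / residual)

print(null_depth_db(1.0, 1.0))           # inf  -- perfectly matched
print(null_depth_db(1.0, 0.95))          # ~26dB null: a usable match
print(null_depth_db(1.0, 0.80))          # ~14dB null: poorly matched
```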

If the null isn't very deep, or if you have an odd frequency response (maybe lots of high-frequency sibilance is coming through?) then either the two mics aren't tonally matched on-axis — and are therefore not suitable for use as a stereo pair — or there is some EQ left in one of the mixer channels (or the mixer channels aren't tonally matched — it can happen!).

Assuming the null is deep (the voice level should drop by well over 20dB compared to the level with a single channel faded up) and the resulting sound is tonally flat, we can go on to check the matching of the polar pattern.

Take the phase reverse out of the second channel and pan the two mic channels hard left and hard right. With your assistant still reading aloud from the pages of SOS directly on-axis to the two mics, you should have a well-defined central image from your monitoring loudspeakers (don't try this with headphones — you won't be able to judge imaging errors sufficiently well). If you are lucky enough to have a vectorscope meter or a twin-needle PPM you will be able to confirm visually that the two channels are identical in level.
Next, ask your assistant to walk in a perfect circle around the two mics, maintaining the monologue as he or she walks and keeping the same distance between his or her mouth and the two mics.

If the mics have directional polar patterns the overall level will obviously fall as the assistant moves around towards the rear null (or nulls, in the case of hypercardioid mics). However, the thing to listen out for is that both mics should behave identically — especially over an angle of about ±60 degrees relative to the frontal on-axis position.

If the polar patterns aren't matched perfectly, you will hear the sound image of your assistant pull to the left or right of centre — the direction will depend on which mic is the more sensitive at that particular angle of incidence. On twin-needle PPMs you will also see the two needles separate, and on a vectorscope you will see the narrow vertical line start to lean over to one side or the other. If you become aware of any image shifts or instability while performing this test, the two mics are not matched closely enough for accurate coincident stereo work.

The accuracy of matching between two examples of a single make and model of mic is essentially determined by the manufacturing tolerance of that particular model, as well as any ageing effects if comparing old and new models. The term 'manufacturing tolerance' refers to the degree of deviation from a defined norm for each model that the manufacturer will accept when testing and inspecting the mic prior to shipping.

One of the things that you're paying for when you buy a high-end microphone is a very tight manufacturing tolerance. I've performed the above tests on various Sennheiser MKH-series mics that I acquired from different sources, as well as on pairs of Neumann KM184s bought at different times, and all have demonstrated superb matching. Likewise, Schoeps mics seem to be built to amazingly tight tolerances.
At the budget end of the market, manufacturing tolerances tend to be far wider so that fewer mics fail the test and therefore production costs are lower, resulting in a lower sale price. Therefore, there tends to be a degree of luck involved in finding two mics which are closely matched if you buy them at different times from different batches. At this end of the market, the manufacturers often supply dedicated matched pairs for stereo applications, where they have taken the trouble to select reasonably closely matched examples at the factory, and if you want a cheap stereo pair, this is probably the best way to acquire one.

For the sake of completeness, when recording in stereo with a spaced pair arrangement (usually involving omnidirectional mics), timing differences between the channels are captured, as well as the (smaller) amplitude differences which coincident recording relies on. Consequently the accuracy of polar pattern and frequency response matching is, arguably, less critical.

Realistically, how seriously you need to consider these issues will depend on the demands of the situation and your personal standards — you could say that you need to discover your own tolerance level! An experienced ear will be able to hear imaging problems stemming from poorly matched mics in coincident arrays in a recording of, say, an orchestra or unaccompanied choir. But if you are recording a group of backing vocalists to form part of a heavy rock song, you probably won't require the same level of precision!


Published February 2004

Wednesday, March 7, 2018

Q. How can I set up my Korg Kaoss Pad to act as a pitch-bender?

By Mike Senior
Korg KP2 Kaoss Pad with fingers.
Can you please tell me how to set up the Korg Kaoss Pad KP2 as an ordinary pitch-bender?

SOS Forum Post

Reviews Editor Mike Senior replies: Hold down the Tap/BPM and Rec/Stop buttons at the same time, and after a second or so you'll enter the MIDI editing mode — various buttons will light up and the MIDI channel will be shown in the display. You can at this point change the MIDI channel as necessary using the Program/BPM dial. If you're only wanting to transmit MIDI pitch-bend messages from the Y axis of the pad, then make sure that only the Program Memory 5 button is lit. If you want something transmitted from the X axis as well, then the Program Memory 4 button should also be lit. Pressing any button briefly will toggle its lit/unlit status.

Now to get Y axis movements sending MIDI Pitch-bend messages. Still in MIDI Edit mode, hold down Program Memory 5 until the currently assigned Y-axis controller (by default MIDI Continuous Controller number 13) is displayed in place of the MIDI channel number. Use the Program/BPM dial to bring up 'Pb' on the display. If you're also wanting to set the X axis, press and hold Program Memory 4 until its controller number (by default MIDI Continuous Controller number 12) is shown, and adjust as necessary. Finally, to exit MIDI Edit mode, hold Rec/Stop until you're back in the normal operating state.

A quick bit of general advice too — the unit will automatically leave MIDI Edit mode if you leave it alone for more than 10 seconds, so don't hang around too long when making settings, or you'll be dumped back into the normal operational mode. I find that it's worth toggling a random one of the Program Memory keys on and off occasionally, as the activity keeps the unit in MIDI Edit mode and gives me time to think and consult the manual!

There is another thing to think about when setting the KP2 to transmit pitch-bend information: a normal pitch-bend wheel is sprung so that it resets the pitch whenever you let go of it. Unfortunately, the Hold button doesn't affect MIDI transmission in the same way as it does the response of the internal effects, so the degree of pitch-bend will always stay where it is when you remove your finger from the pad. (Apparently the KP1 doesn't suffer from this problem.) This isn't necessarily a problem, however, because you can effectively do a rough-and-ready pitch-bend 'spring-back' manually, especially if you're able to use both hands: one to pitch-bend and the other to tap the centre of the pad, resetting pitch-bend to zero. If you only have one hand free, you could keep one finger in the centre of the pad while pitch-bending with other fingers. However, the finger that you leave in the centre of the pad will decrease the range over which the rest of the pad operates, so you won't get the same maximum bend range.

If you really need to be able to zero the pitch-bend exactly without sacrificing pitch-bend range, I'd suggest putting a controller button in-line to do this (I'd use one of the ones on my Peavey PC1600X for this) and setting it to generate a 'centred' pitch-bend message. But, to be honest, if you're using the KP2 for subtle pitch changes, it should be adequately accurate to zero the pitch-bend manually. If you're doing mad sweeps the whole time, then it may not even matter if you're not able to zero it perfectly.
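For reference, a 'centred' pitch-bend is simply the 14-bit value 8192 split across two 7-bit data bytes; that much is standard MIDI, regardless of which controller box sends it. A small Python sketch of how such a message is built:

```python
# A MIDI Pitch-bend message is three bytes: an 0xE0-plus-channel
# status byte, then the LSB and MSB of a 14-bit value (8192 = centre).

def pitch_bend_message(value, channel=0):
    """Build the 3-byte MIDI Pitch-bend message for a 14-bit value."""
    assert 0 <= value <= 16383 and 0 <= channel <= 15
    status = 0xE0 | channel
    lsb = value & 0x7F               # low 7 bits
    msb = (value >> 7) & 0x7F        # high 7 bits
    return bytes([status, lsb, msb])

centre = pitch_bend_message(8192)    # 0xE0 0x00 0x40 -- zero bend
print(centre.hex())
```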

However, if you simply have to have mad pitch sweeps along with perfect pitch-bend zeroing, then consider restricting yourself to pitch-bends in only one direction, with the zero point at the top or bottom edge of the pad, so that you can accurately reset the controller manually (finding the middle of the pad accurately is tricky, but finding the edge is easy). To do this for upwards-only bending, set your synth to play an octave higher than you want it (assuming that the bend range will be an octave). This will give you two octaves' shift above whatever note you're playing, with the low edge of the pad representing the former zero-bend position. Reverse the idea for downwards-only shift. If you really want to shift both ways, then you could assign a normal MIDI Continuous Controller (CC) message to the other axis and then use that to control the other pitch-bend direction, assuming that the synth you're triggering allows ordinary controllers also to modulate the pitch — my Korg Prophecy does. You won't get the same controller resolution out of a MIDI CC, so large shifts may sound stepped, but this will at least give you both directions of bend from the pad, and with exact pitch-bend reset.

Having said all of this, there is one other workaround to this problem, which provides all the functionality of a 'sprung' pitch-bend wheel, but it requires that you use a synth with fairly flexible modulation routing. Two of the KP2's transmission types do actually exhibit a 'sprung' action: Modulation Depth One (Y=5-1) and Modulation Depth Two (Y=5-9), activated in MIDI edit mode by Program Memory buttons one and two respectively. Both of these will automatically send their minimum values when you let go of the pad, as if you had moved your finger to the centre of the pad. If you switch both of these types of transmission on in the MIDI edit mode, then the top half of the Y axis will transmit MIDI Continuous Controller number one, and the bottom half will transmit MIDI Continuous Controller number two. The problem is that you can't change the controller assignments for this transmission type, so you'll need to assign the two controllers to upwards and downwards pitch modulation respectively to make it all work. The same caveat concerning controller resolution applies as before, but you do get a true pitch-bend wheel-style action. If your synth won't allow this modulation routing, you may be able to use your sequencer or MIDI controller to convert the MIDI CC messages to MIDI Pitch-bend or Aftertouch messages to achieve the same result.
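The resolution caveat is easy to quantify: a 7-bit CC offers 128 steps against pitch-bend's 16,384. A quick Python sketch of the scaling (one plausible mapping; your sequencer or synth may use another):

```python
# Scaling a 7-bit CC (0-127) into the 14-bit pitch-bend range
# (0-16383): adjacent CC values land 129 pitch-bend steps apart,
# which is why large sweeps driven from a CC can sound stepped.

def cc_to_pitch_bend(cc_value):
    assert 0 <= cc_value <= 127
    return cc_value * 129            # maps 0 -> 0 and 127 -> 16383 exactly

print(cc_to_pitch_bend(0), cc_to_pitch_bend(64), cc_to_pitch_bend(127))
print(cc_to_pitch_bend(65) - cc_to_pitch_bend(64))  # step between CC values
```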


Published December 2003

Monday, March 5, 2018

Q. Can I flatten out my finished tracks using a hardware compressor?

By Mike Senior
TC Triple*C multi-band compressor.
In my hardware-based setup, with my TC Electronics Triple*C compressor, is it possible to do the kind of limiting on a full mix where you end up with a waveform that is levelled off at the top and bottom, 'brick wall'-style? Also, when recording the co-axial digital output from the Triple*C onto my hi-fi CD recorder, what should the Triple*C's dither setting be if my source is a 24-bit Tascam 788?

SOS Forum Post

Reviews Editor Mike Senior replies: If you're after a waveform which is levelled off at the top and the bottom, then simply clip the output of the processor by cranking up the make-up gain control. To make this slightly less unpleasant on the ear, make sure that the Soft Clip option is on. However, you've got to ask yourself why you're wanting to do this. Although short-term clipping usually doesn't degrade pop music too much, it's really easy to go overboard and do serious damage to your audio if you're not careful. I'd advise doing an un-clipped version as well as the clipped version for safety's sake. You've got to ask yourself just how well your monitoring system compares to the one in a dedicated mastering studio — you should always let your ears be the judge, but remember that your monitors, combined with the room they are in, may not be giving you sufficient information to make an informed decision.

If you're after maximum loudness, then clipping isn't going to get you all the way there in any case. Use the Triple*C's multi-band compressor as well — set an infinity ratio, switch on lookahead, and make the attack time as fast as possible. Adjust the threshold and release time to taste. Make sure that you're aware of what the thresholds of the individual compression bands are doing as well (they're set in the Edit menu), as you might want to limit the different bands with different thresholds. Switch on Soft Clip and set the low level, high level, and make-up gain controls for the desired amount of clipping. Once again, make sure to record an unprocessed version for posterity as well, because you may well overdo things first time, or in case you get access to a dedicated loudness maximiser such as the Waves L2 in the future.

The Triple*C's dither should be set according to the destination bit-depth, not the source's. Since the CD recorder works at 16-bit, set the dither to 16-bit.
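As an illustration of why the dither follows the destination, here's a bare-bones Python sketch of 24-to-16-bit requantisation with TPDF dither (purely illustrative, not the Triple*C's actual algorithm):

```python
import random

# The dither noise is scaled to the *destination* step size (one
# 16-bit LSB spans 256 24-bit steps), which is why the setting
# follows the target bit-depth rather than the source's.

def requantise_16bit(sample_24bit, rng=random.Random(0)):
    step = 1 << 8                    # 24-bit steps per 16-bit step
    # TPDF dither: sum of two uniforms, triangular over +/- one LSB
    tpdf = rng.uniform(-step, step) / 2 + rng.uniform(-step, step) / 2
    return int(round((sample_24bit + tpdf) / step)) * step

quiet_24bit = [200, -150, 90, -40]   # details below one 16-bit step
# low-level detail survives, on average, as benign noise
print([requantise_16bit(s) for s in quiet_24bit])
```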


Published December 2003

Friday, March 2, 2018

Q. How do I create a stereo mix from mono material?

By Hugh Robjohns
Finger on Mono button of console.
I want to remix some old mono tracks in stereo. Can you offer any advice or suggest any tricks to achieve this?

Jon Bennet

Technical Editor Hugh Robjohns replies: The first thing to accept is that you cannot create a true stereo (or surround) mix from mono material; you can only give an impression of greater width. In other words, there is nothing you can do to separate instruments and pan them to specific points in the stereo image, as you could have done had the material originally been mixed in stereo.

One of the best ways to create fake stereo from mono is to make an M&S (Middle and Sides) stereo mix from the mono source. You'll need to treat the mono source as the 'M' element of an M&S stereo matrix, and decode accordingly, having created a fake 'S' component.

This fake 'S' signal is simply the original mono signal, high-pass filtered (to avoid the bass frequencies being offset to one side of the stereo image) and delayed by any amount between about 7ms and 100ms, according to taste. The longer the delay, the greater the perceived room size — but I would only recommend delays over about 20ms for orchestral or choral music.

Here's how to do it practically: take the mono signal and route it to both outputs on the mixer equally, or, in other words, pan it to the centre. Take an aux output of the mono signal and route it to a digital delay. Ideally, high-pass filter the signal before the delay. A 12dB-per-octave high-pass filter set at about 150Hz should do the job, but this figure isn't critical and will affect the subjective stereo effect, so experiment. Alternatively, high-pass filter the output from the delay.

You now need to derive two outputs from this delayed and filtered signal, which may be possible directly from the delay processor, if it's of the mono in, stereo out variety, for example, with the same delay dialled into both channels. If not, use a splitter cable or parallel strip in a patch bay to produce two outputs.
Route this pair of filtered and delayed signals back to the mixer, ideally into a stereo channel, or, if not, into two mono channels panned hard left and right. Invert the phase of one of the channels. If using adjacent mono channels, fix the faders together and match the input gains so that the gain is the same on both channels.

Now, with the original mono signal faded up, you should hear the central mono output, and if you gradually fade up the fake 'S' channels, you will perceive an increase in stereo width. The length of delay, the turnover frequency of the high-pass filter and the relative level of mono 'M' and fake 'S' channels will determine the perceived stereo width.
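The whole matrix can be sketched in a few lines of Python (a bare-bones, delay-only fake 'S' with a toy two-sample delay; a real patch would also high-pass filter the 'S' signal and use a musically useful delay time, as described above):

```python
# Fake stereo from mono: S = delayed, attenuated copy of M;
# then decode as L = M + S, R = M - S. Folding L and R back to
# mono cancels the fake S and returns the original signal.

def fake_stereo(mono, delay_samples, s_level=0.5):
    """Return (left, right) lists: L = M + S, R = M - S."""
    s = [0.0] * delay_samples + [x * s_level for x in mono]
    m = mono + [0.0] * delay_samples          # pad M to the same length
    left = [a + b for a, b in zip(m, s)]
    right = [a - b for a, b in zip(m, s)]
    return left, right

mono = [0.0, 1.0, 0.0, -1.0, 0.0, 0.0]
left, right = fake_stereo(mono, delay_samples=2)
mono_sum = [(l + r) / 2 for l, r in zip(left, right)]   # hit the mono button
print(mono_sum)   # the fake 'S' cancels: the original mono returns
```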

If you overdo the amount of 'S' relative to 'M', then you will generate an ultra-wide stereo effect, and if monitored through a Dolby Pro Logic decoder, this will cause a lot of the signal to appear in the rear speakers.

The advantage of this fake stereo technique is that if you subsequently hit the mono button, the fake 'S' signal cancels itself out and disappears completely, to leave the original mono signal unaffected.


Published December 2003