Welcome to No Limit Sound Productions

Company Founded
2005
Overview

Our services include Sound Engineering, Audio Post-Production, System Upgrades and Equipment Consulting.
Mission
Our mission is to provide excellent quality and service to our customers. We do customized service.

Tuesday, January 30, 2018

Q. How do I create a stereo mix from mono material?

By Hugh Robjohns
Finger on Mono button of console.
I want to remix some old mono tracks in stereo. Can you offer any advice or suggest any tricks to achieve this?

Jon Bennet

Technical Editor Hugh Robjohns replies: The first thing to accept is that you cannot create a true stereo (or surround) mix from mono material; you can only give an impression of greater width. In other words, there is nothing you can do to separate instruments and pan them to specific points in the stereo image, as you could if the material had originally been recorded for a stereo mix.

One of the best ways to create fake stereo from mono is to make an M&S (Middle and Sides) stereo mix from the mono source. You'll need to treat the mono source as the 'M' element of an M&S stereo matrix, and decode accordingly, having created a fake 'S' component.

This fake 'S' signal is simply the original mono signal, high-pass filtered (to avoid the bass frequencies being offset to one side of the stereo image) and delayed by any amount between about 7ms and 100ms, according to taste. The longer the delay, the greater the perceived room size — but I would only recommend delays over about 20ms for orchestral or choral music.

Here's how to do it practically: take the mono signal and route it to both outputs on the mixer equally, or, in other words, pan it to the centre. Take an aux output of the mono signal and route it to a digital delay. Ideally, high-pass filter the signal before the delay. A 12dB-per-octave high-pass filter set at about 150Hz should do the job, but this figure isn't critical and will affect the subjective stereo effect, so experiment. Alternatively, high-pass filter the output from the delay.

You now need to derive two outputs from this delayed and filtered signal, which may be possible directly from the delay processor, if it's of the mono in, stereo out variety, for example, with the same delay dialled into both channels. If not, use a splitter cable or parallel strip in a patch bay to produce two outputs.
Route this pair of filtered and delayed signals back to the mixer, ideally into a stereo channel, or, if not, into two mono channels panned hard left and right. Invert the phase of one of the channels. If using adjacent mono channels, fix the faders together and match the input gains so that the gain is the same on both channels.

Now, with the original mono signal faded up, you should hear the central mono output, and if you gradually fade up the fake 'S' channels, you will perceive an increase in stereo width. The length of delay, the turnover frequency of the high-pass filter and the relative level of mono 'M' and fake 'S' channels will determine the perceived stereo width.

If you overdo the amount of 'S' relative to 'M', then you will generate an ultra-wide stereo effect, and if monitored through a Dolby Pro Logic decoder, this will cause a lot of the signal to appear in the rear speakers.

The advantage of this fake stereo technique is that if you subsequently hit the mono button, the fake 'S' signal cancels itself out and disappears completely, to leave the original mono signal unaffected.
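For DAW users, the signal maths of this trick can be sketched in a few lines: a centre-panned 'M', plus a filtered and delayed fake 'S' added in opposite phase to the left and right channels. This is a minimal numpy sketch under illustrative assumptions: the crude one-pole high-pass (roughly 140Hz here, standing in for the 12dB-per-octave filter suggested above), the 20ms delay and the width value are all things you would tune by ear.

```python
import numpy as np

fs = 44100
rng = np.random.default_rng(0)
m = rng.standard_normal(fs)  # one second of stand-in mono 'material'

# Fake 'S': high-pass filtered and delayed copy of M. A one-pole filter
# (6dB/octave) is used here for brevity; alpha = 0.98 puts the turnover
# near 140Hz at this sample rate.
delay = int(0.020 * fs)  # 20ms, per the guidance above
alpha = 0.98
hp = np.empty_like(m)
prev_x = prev_y = 0.0
for i, x in enumerate(m):
    prev_y = alpha * (prev_y + x - prev_x)
    prev_x = x
    hp[i] = prev_y
s = np.concatenate([np.zeros(delay), hp[:-delay]])

width = 0.5              # relative 'S' level, to taste
left = m + width * s     # 'M' plus 'S'
right = m - width * s    # 'M' minus 'S' (the phase-inverted channel)

# Summing back to mono cancels the fake 'S' exactly:
mono = 0.5 * (left + right)
print(np.allclose(mono, m))  # True
```

The final line demonstrates the mono-compatibility property: sum the two outputs and the delayed, filtered component cancels itself out, leaving the original mono signal.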



Published December 2003

Saturday, January 27, 2018

Q. Is it safe to apply phantom power to dynamic mics?

By Hugh Robjohns

I did a recording session recently using a mixture of dynamic and condenser mics, and realised my desk does not have switchable phantom power for each individual channel — they're either all on or all off. Luckily, I had a second mixer and some external channel strips which I ran the condensers through, but is it safe to apply phantom power to dynamic mics?

SOS Forum Post


Technical Editor Hugh Robjohns replies: People get very hung up about phantom power. As long as your mic cables are all wired properly (balanced, with the correct pin connections) and well made, and you are using decent XLRs everywhere — and all your microphones are modern — there is no problem at all.
In BBC radio and TV studios, for example, phantom power is provided permanently on all wall box connections. It cannot be turned off. And engineers are plugging dynamic, condenser and even ribbon mics in and out all day without any problems whatsoever.

Clearly, it is vital that dynamic and ribbon mics are properly balanced internally and well maintained, but this should be a given with any modern mic. The female connectors on good-quality XLR cables should have the contact of the earth pin socket (pin 1) slightly forward of the other two so that the earth contact mates first, and are designed so that the other two pins mate simultaneously. There is therefore little chance of subjecting the mic to significantly unbalanced phantom voltages.

There will be a loud 'splat' over the monitors when connecting a condenser mic as the circuitry powers up, but it is good practice to always keep the channel fader down when plugging in mics anyway. I don't disagree that plugging mics in with phantom off is a safe way of working, but I have never really bothered about it, and have never destroyed a mic yet — not even a ribbon, and I've used a lot of those over the years.

It's perfectly safe to apply phantom power to modern ribbon mics, like the Oktava ML52, and dynamic mics, like the Sennheiser e903, provided you use good quality XLR-XLR cables.

The important caveat concerns ribbon mics: it is only safe to plug them into circuits carrying phantom power if the mics in question are compatible with it. Some vintage ribbon mics employ an output transformer which is centre-tapped, and that centre tap is earthed. This arrangement essentially short-circuits the phantom power supply and can cause damaging currents to flow through the transformer, potentially magnetising it or even burning it out (although that is extremely unlikely). So it is sheer lunacy to be using vintage ribbon mics with centre-tap grounded transformers in an environment where phantom power is also used. Sooner or later, a ribbon will get plugged into a phantom supply by accident and will be permanently damaged. If you want to use vintage ribbons with centre-tap transformers in the same room as phantom-powered condensers, get the ribbons modified before it's too late.

The bottom line is that all modern mics with balanced outputs terminated with XLRs, whether they be dynamics (moving-coils and ribbons) or electrostatics (condensers and electrets), are designed to accommodate phantom power, and can be plugged in quite happily with phantom power switched on, provided you are connecting XLRs, not jack plugs/sockets. Some vintage ribbon mics, and any mic wired for unbalanced (sometimes also referred to as high-impedance) operation will be damaged by phantom power unless suitably modified.


Published January 2004

Thursday, January 25, 2018

Q. What does a compressor's sidechain do?

By Hugh Robjohns
Having access to a compressor's side-chain allows more precise control over how the compressor behaves and when it operates.

I have a compressor with inserts to access the sidechain. What is the sidechain and what would you use these inserts for?

Dan Lister

Technical Editor Hugh Robjohns replies: All dynamics processors — compressors, gates and de-essers — use a gain-controlling device to alter the instantaneous level, and thus the dynamics of the input signal. The gain controller could be a solid-state VCA, a valve, or an opto-resistor, depending on the design of the unit, but whatever the type of device, it has to be controlled by a circuit which looks at the signal and decides how much to reduce its gain. This control circuit is known generically as the 'sidechain'.

The reason a lot of dynamics processors provide access to the sidechain signal is to allow additional external signal processing to modify the signal that the sidechain is working with, and the usual process is an equaliser. It is worth emphasising at this point that we are talking here about full-band compressors — whatever we do to the sidechain signal, the level of the entire input signal is affected when the compressor operates. In multi-band compressors, the input signal is split into several frequency bands and each is processed separately, which is a very different thing and should not be confused with the approach I'm describing here.

Consider the situation where a very bass-heavy mix needs to be compressed. The amount of compression is determined by the amount of energy in the sidechain, so in a bass-heavy mix, the amount of compression is going to be dictated mainly by the bass signals, because that is where most of the energy is. In extreme cases, the result might be heavy gain reduction on each kick-drum hit or bass-guitar note, which might not produce the required overall control at all.

One solution is to insert an equaliser in the sidechain to filter out or reduce the sidechain signal's low-frequency energy. The result is that the compressor sidechain now determines the amount of compression on the energy in the mid- and high-frequency ranges, which will probably produce a more natural and consistent sound.

So, because the sidechain tells the compressor what to do, reducing the level of a frequency band using an equaliser in the sidechain also reduces the compressor's sensitivity to energy in that frequency band.
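As a rough numerical illustration of that point, the sketch below uses a static gain computer driven by the sidechain's RMS level, with an assumed one-pole high-pass as the sidechain filter; none of this models any particular compressor's topology, and the threshold and ratio values are arbitrary.

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
# Bass-heavy test signal: a loud 60Hz tone plus a much quieter 2kHz tone
sig = 1.0 * np.sin(2 * np.pi * 60 * t) + 0.2 * np.sin(2 * np.pi * 2000 * t)

def one_pole_hp(x, fc, fs):
    # Crude first-order high-pass (an assumption; any HPF would do)
    a = 1.0 / (1.0 + 2 * np.pi * fc / fs)
    y = np.empty_like(x)
    prev_x = prev_y = 0.0
    for i, xi in enumerate(x):
        prev_y = a * (prev_y + xi - prev_x)
        prev_x = xi
        y[i] = prev_y
    return y

def compressor_gain(sidechain, threshold=0.3, ratio=4.0):
    # Static gain computer: how much the full-band signal would be turned
    # down, based only on the sidechain's overall level
    level = np.sqrt(np.mean(sidechain ** 2))
    if level <= threshold:
        return 1.0
    return (threshold + (level - threshold) / ratio) / level

g_full = compressor_gain(sig)                        # full-band sidechain
g_hpf = compressor_gain(one_pole_hp(sig, 150, fs))   # bass filtered out

print(g_full < g_hpf)  # True: less gain reduction once the bass is removed
```

With the bass left in, the sidechain sees a high level and pulls the gain well down; filter the bass out of the sidechain and the same signal triggers little or no compression.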

Sidechain diagram.

Another technique using equalisation of the sidechain is to deliberately increase the compressor's sensitivity to certain frequency bands, and the classic example here is the de-esser. Sibilant singers, for example, tend to produce peaks of excessive energy in a fairly narrow frequency band somewhere between 2kHz and 8kHz. Ideally, such problems should be avoided through careful mic selection and positioning, but if you are faced with a recording already containing sibilance, you can often deal with it quite effectively using a de-esser, or a compressor with an equaliser inserted in its sidechain.

By using the sidechain equaliser to boost the frequency region containing the sibilant energy, the compressor is made more sensitive to that frequency region. As a result, even quite small increases of energy here will result in quite large amounts of compression, thus reducing the audible impact of the vocalist at the moment of sibilance. It can take a little juggling of equaliser boost, threshold, ratio and attack/release settings, but when properly optimised this system can prove extremely effective.

Sidechain equalisation can also be used with gates and expanders in the same way. Consider a snare-drum mic which has a lot of spill from both the hi-hat and the kick drum. If the energy level of hat and kick is sufficiently high, it might prove impossible to find a threshold setting which allows the gate to let the snare through but hold back the kick and hats. One solution would be to reach for the sidechain equaliser again — and adjust it to remove both the low kick-drum frequencies and the higher hi-hat frequencies, leaving just the mid-range snare drum frequencies. A pair of high- and low-pass filters are ideal for this job, but high and low shelf filters will usually work as well. With only the mid-range snare drum signal left in the sidechain, the gate is left in no doubt when to open and close, and setting the threshold is now remarkably easy!
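The snare-gate example can also be sketched numerically. The FFT band-masking below is just a stand-in for the high- and low-pass filters in a real sidechain, and the steady test tones are stand-ins for kick spill and the snare itself.

```python
import numpy as np

fs = 48000
t = np.arange(fs // 10) / fs                 # 100ms of signal
kick = 1.0 * np.sin(2 * np.pi * 60 * t)      # loud low-frequency spill
snare = 0.6 * np.sin(2 * np.pi * 400 * t)    # the drum we want to gate on

def band_rms(x, lo, hi):
    # Sidechain band-pass via FFT bin masking (a stand-in for a pair of
    # high- and low-pass filters)
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1 / fs)
    X[(f < lo) | (f > hi)] = 0
    return np.sqrt(np.mean(np.fft.irfft(X, len(x)) ** 2))

full_kick = np.sqrt(np.mean(kick ** 2))
full_snare = np.sqrt(np.mean(snare ** 2))
bp_kick = band_rms(kick, 200, 1000)    # mid-band sidechain levels
bp_snare = band_rms(snare, 200, 1000)

# Full-band: the kick is louder than the snare, so no threshold works.
# Band-passed: the kick all but vanishes, so setting a threshold is easy.
print(full_kick > full_snare)   # True
print(bp_snare > 10 * bp_kick)  # True
```

With the full-band sidechain, any threshold low enough to pass the snare also passes the louder kick; band-pass the sidechain and the two levels separate by a wide margin.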

These are just a few obvious examples of sidechain equalisation, but it is a powerful technique which can be used in a wide range of applications to solve problems or to make dynamics processing more effective and easier to control.

In practice, it is useful to be able to listen to the output from the sidechain equaliser to help fine-tune the EQ settings — doing it 'deaf' is not ideal. A lot of hardware and software compressors incorporate some form of sidechain equalisation as standard, and these units usually have a 'Listen' mode which routes the sidechain signal to the output to allow auditioning of the EQ settings.



Published January 2004

Wednesday, January 24, 2018

Q. What does diatonic mean?

By Len Sasso

I know that the white keys on a keyboard form a diatonic scale, but what does diatonic really mean?

Rob Fowler

Finger on piano keyboard. 
SOS Contributor Len Sasso replies: To understand the meaning of diatonic, it helps to think of a scale not as a collection of notes, but rather as a series of intervals. The definition of a diatonic scale is that there are five whole-tone and two semitone intervals in the series and that the semitones must always be separated by at least two whole-tones. Using '2' to symbolize the whole-tone steps and '1' for the semitone steps, the major diatonic scale corresponds to the interval series 2212221. No matter what note you start on, following this prescription yields a major diatonic scale — the white keys starting on C is one example. It turns out that all possible diatonic scales are constructed by starting somewhere in the major diatonic scale and continuing until you reach the same note you started on. Those are generally referred to as the church modes: Dorian for 2122212, Phrygian for 1222122, Lydian for 2221221, and so on.
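The interval bookkeeping above is easy to check in code by rotating the major scale's series; the mode names and digit strings below follow the convention just described.

```python
# Rotating the major scale's interval series yields the seven church modes.
MAJOR = (2, 2, 1, 2, 2, 2, 1)

def mode(n):
    """Interval series of the mode starting on scale degree n (0 = Ionian/major)."""
    return MAJOR[n:] + MAJOR[:n]

names = ["Ionian", "Dorian", "Phrygian", "Lydian",
         "Mixolydian", "Aeolian", "Locrian"]
for i, name in enumerate(names):
    print(name, "".join(map(str, mode(i))))
# Dorian -> 2122212, Phrygian -> 1222122, Lydian -> 2221221, as above
```

Every rotation still contains five whole-tones and two semitones summing to an octave (12 semitones), with the semitones separated as the definition requires.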

While the preceding definition is correct and functionally useful, it might leave you a little cold, as it does nothing to explain why those intervals are used or why the seven notes in a diatonic scale are chosen over the other notes in the 12-tone equal-tempered scale.

For reasons deriving from the physics and maths of sound, the strongest harmonic relationship aside from the octave is the perfect fifth, which makes G the closest relative of C, for example. Since C stands in the same relationship to F as G does to C, it makes sense that a scale centered around C should contain both G (called the dominant) and F (called the subdominant). The next closest harmonic interval is the major third. Together, the root, major third, and perfect fifth constitute a major triad, and it's not too big a stretch to imagine that you might want to construct a major triad on the three notes C, F, and G. Do that and you have the seven notes in the C diatonic scale.

There's still the question of why there are five other notes in the 12-tone equal-tempered scale, and the answer contains a hidden but important compromise. You can make music, which is naturally called diatonic music, with just the seven notes of the diatonic scale. And if you did that, they would in fact be slightly different notes from the ones you find in the equal-tempered scale. If you want to expand the system to accommodate diatonic scales in other keys, one natural way is to iterate the process of adding perfect fifths. This produces what is commonly called the 'cycle of fifths', but is actually a spiral of fifths that never really comes full circle. But if you make the perfect fifths just slightly flat, they do come full circle after 12 steps. Miraculously, you also wind up with notes that are close to the major thirds — they're a little sharp and a little more out of tune than the fifths, but still usable.
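The numbers behind that compromise are easy to verify. The sketch below computes the gap left by twelve pure fifths (the Pythagorean comma) and the small detunings equal temperament accepts; the figures are standard acoustics results rather than anything taken from the text above.

```python
import math

# Twelve pure fifths overshoot seven octaves by the Pythagorean comma:
spiral = (3 / 2) ** 12
octaves = 2 ** 7
comma_cents = 1200 * math.log2(spiral / octaves)
print(round(comma_cents, 1))  # 23.5 cents

# Flattening each fifth by comma/12 (about 2 cents) closes the circle:
et_fifth = 2 ** (7 / 12)
print(round(1200 * math.log2((3 / 2) / et_fifth), 2))

# The equal-tempered major third lands close to, but sharp of, the pure
# 5/4 third; it is more out of tune than the fifths but still usable:
print(round(1200 * math.log2(2 ** (4 / 12) / (5 / 4)), 1))  # 13.7 cents
```

So each equal-tempered fifth is slightly flat of pure, the twelve steps come exactly full circle, and the thirds arrive a little sharp, just as described.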

This compromise gives us the 12-tone equal-tempered scale (equal-tempered meaning all 12 semitone steps are the same size). Relative to C, the extra five notes turn out to be where you find the black keys on the piano keyboard, and that's why the intervallic definition we started with works.


Published September 2003

Monday, January 22, 2018

Q. Is it wise to buy a second-hand microphone?

By Hugh Robjohns
Higher-end mics like the Neumann TLM103 may cost more than budget models, but they'll last longer and can be fully repaired in the future.

With so many brand-new budget-priced microphones out there these days, I'm wondering if it might be better to get a high-quality mic second-hand instead. I don't think I'd trust second-hand monitors, but does the same apply to mics? For the £150 or £200 I'd spend on a new mic, I could get a far better model second-hand.

SOS Forum Post

Technical Editor Hugh Robjohns replies: I would certainly support the idea of buying second-hand pro audio gear, but I would be very wary of buying gear in the price range you're talking about second-hand. My reasoning is as follows. Firstly, users of high-end professional equipment generally know what they are doing and so their equipment tends (with the inevitable exceptions) to have been reasonably well maintained. It's not always the case, but you can usually tell in an instant by looking at a piece of equipment whether or not it has been well looked-after, and if it looks OK it usually is.

Secondly, bona fide pro gear can be serviced and repaired. All the reputable speaker manufacturers will happily supply replacement drivers, and all the reputable mic manufacturers will be able to repair and recondition their microphones. So even if the gear has had a hard life, it will remain perfectly serviceable. You can still get spares for 30-year-old Studer tape machines, for example.

Conversely, budget audio equipment is made as cheaply as possible, and while you can get remarkable quality for your money, most of it is not cost-effective to service — in other words, it is disposable. The current glut of Chinese-made mics offer exceptional value, but you certainly won't be able to get them repaired in the factory after 20 years like you can a Neumann, AKG or Sennheiser. Likewise, getting spare parts for a Fostex multitrack tape recorder is a lot harder than for an old Studer or Otari.

So, if the second-hand budget gear in question is in good condition and very cheap, then it may be worth the risk, but go into it with your eyes open — it may well prove impossible or prohibitively expensive to have this kind of gear repaired should it fail a week after you bought it. On the other hand, a second-hand truly professional product should remain serviceable for decades. I bought four Sennheiser MKH20 mics second-hand a few years ago, and one turned out to be faulty, but it was serviced by Sennheiser and came back like new, and, even adding in the cost of the service, it was still a very good deal compared to the cost of the mics brand-new.



Published September 2003

Friday, January 19, 2018

Q. Should I believe my meters?

By Hugh Robjohns

I read recently that the level meters in most DAWs don't give a true indication of when audio is peaking, and that audio which looks like it's below 0dBFS may actually be peaking above it, causing distortion when it's played back. Is this true? I want my mixes to be as loud as possible and use compression to push them as 'hot' as I can. How do I tell when and if these invisible overloads are occurring, and how do I avoid them?

SOS Forum Post

Technical Editor Hugh Robjohns replies: The problem you are referring to is a very real and widely recognised one. Simple digital meters register the amplitude of the individual samples within the digital domain and not the waveform which is reconstructed from them by the D-A converter. Even if the samples stay just beneath 0dBFS, the reconstructed waveform, which is, after all, a smooth curve, is likely to exceed full-scale at certain points, potentially causing overmodulation and digital distortion.

Top: A perfectly legal digital signal with no samples higher than 0dBFS. However, this signal will overmodulate a typical oversampling digital filter in a D-A converter.
Bottom: An oversampling meter will reveal the overload. 

This overloading generally happens in the integrated digital filters employed in most consumer and budget D-A converter designs. The state-of-the-art converters used in professional environments tend to be far less prone to this kind of problem, and consequently, a mastering engineer may not be aware of a problem which is glaringly obvious when the track is replayed over cheaper D-As. The problem is likely to be worst in heavily compressed material.

It's important to understand what the different types of meter are actually measuring. VU meters read averaged signal levels and don't give any indication of peak values whatsoever. The VU meter was designed to provide a crude indication of perceived volume (hence 'volume units'), originally in telecommunications circuits, and so served its original purpose perfectly well. It was only when the VU meter was adopted by the recording industry that its limitations became significant.

Subsequently, the PPM or peak programme meter was developed. This has complex analogue circuitry designed to register peak signal levels, so that the sound operator can better control the peak modulation of recordings and radio transmissions. However, the international standards defining the various versions of PPM all include a short integration period of between 5 and 10ms. This means that, in fact, the meter deliberately ignores short transients. True peak levels are typically 4 to 6dB higher than a standard PPM would indicate. This deliberate 'fiddling' with the meter's accuracy was done to optimise the modulation of analogue transmitters and recorders, safe in the knowledge that the short-term harmonic distortion caused by a small amount of overmodulation of analogue systems was inaudible to the majority of listeners.

Now we come to digital meters. These have to show true peak levels because any overloads in the digital domain cause aliasing distortions — distortions which are anything but harmonically pleasing and extremely audible. However, the inherent difficulty in achieving true peak readings from raw sample amplitudes, as described above, is one reason why it is advisable to engineer in a degree of headroom when working in the digital domain. Oversampling digital meters, which are far more accurate in terms of displaying the true peak levels, have been available in professional systems for a long time, and Trillium Lane Labs have recently produced an oversampling meter plug-in for Pro Tools TDM systems running on Mac OS X, called Master Meter.
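You can demonstrate the effect with a classic test signal: a sine at a quarter of the sample rate, phase-shifted so that every sample lands below the true waveform peak. The sketch below uses FFT zero-padding as a stand-in for the oversampling reconstruction filter in a D-A converter, and shows a perfectly 'legal' 0dBFS signal whose reconstructed peak is about +3dB.

```python
import numpy as np

fs = 44100
n = np.arange(1024)
# Sine at fs/4, phase-shifted 45 degrees: every sample lands at roughly
# 0.707 of the true waveform peak
x = np.sin(2 * np.pi * (fs / 4) * n / fs + np.pi / 4)
x /= np.max(np.abs(x))  # 'normalise' so the loudest SAMPLE reads 0dBFS

# 4x oversampling via FFT zero-padding (assumption: ideal band-limited
# interpolation, approximating a converter's reconstruction filter)
X = np.fft.rfft(x)
Xz = np.concatenate([X, np.zeros(3 * len(X) - 3)])
y = np.fft.irfft(Xz) * 4  # compensate for the length change

sample_peak = 20 * np.log10(np.max(np.abs(x)))  # what a sample meter shows
true_peak = 20 * np.log10(np.max(np.abs(y)))    # what an oversampling meter shows
print(round(sample_peak, 1), round(true_peak, 1))  # 0.0 3.0
```

A simple sample meter reads 0dBFS, while the oversampled reconstruction peaks around +3dBFS: exactly the kind of invisible overload the figure above illustrates.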

You can circumvent the problems of inaccurate metering and the resulting potential for overmodulation by working at 96kHz. Even simple sample-based metering at this sample rate is essentially oversampled as far as the bulk of energy in the audible frequency range is concerned.

But the simplest solution, as ever, is to turn away from the notion that a track has to be louder than loud, and to leave a small but credible headroom margin. If you really want to overcompress particular genres of music, that's fine, but remember to leave a decent amount of headroom. There's really no need for recordings to hit 0dBFS, nor for recording musicians to misuse the digital format, and CD in particular. If the end user wants the music louder, there is always the volume control on the hi-fi!


Published October 2003

Tuesday, January 16, 2018

Q. What's the difference between a talk box and a vocoder?

By Craig Anderton
In addition to its built-in microphone, the Korg MS2000B's vocoder accepts external line inputs for both the carrier and modulator signals.

I've heard various 'talking instrument' effects which some people attribute to a processor called a vocoder, while others describe it as a 'talk box'. Are these the same devices? I've also seen references in some of Craig Anderton's articles about using vocoders to do 'drumcoding'. How is this different from vocoding, and does it produce talking instrument sounds?

James Hoskins

SOS Contributor Craig Anderton replies: A 'talk box' is an electromechanical device that produces talking instrument sounds. It was a popular effect in the '70s and was used by Peter Frampton, Joe Walsh and Stevie Wonder, amongst others. It works by amplifying the instrument you want to make 'talk' (often a guitar), and then sending the amplified signal to a horn-type driver, whose output goes to a short, flexible piece of tubing. This terminates in the performer's mouth, which is positioned close to a mic feeding a PA or other sound system. As the performer says words, the mouth acts like a mechanical filter for the acoustic signal coming in from the tube, and the mic picks up the resulting, filtered sound. Thanks to the recent upsurge of interest in vintage effects, several companies have begun producing talk boxes again, including Dunlop (the reissued Heil Talk Box) and Danelectro, whose Free Speech talk box doesn't require an external mic, processing the signal directly.

The vocoder, however, is an entirely different animal. The forerunner to today's vocoder was invented in the 1930s for telecommunications applications by an engineer named Homer Dudley; modern versions create 'talking instrument' effects through purely electronic means. A vocoder has two inputs: one for an instrument (the carrier input), and one for a microphone or other signal source (the modulator input, sometimes called the analysed input). Talking into the microphone superimposes vocal effects on whatever is plugged into the instrument input.

The principle of operation is that the microphone feeds several paralleled filters, each of which covers a narrow frequency band. This is electronically similar to a graphic equaliser. We need to separate the mic input into these different filter sections because in human speech, different sounds are associated with different parts of the frequency spectrum.

For example, an 'S' sound contains lots of high frequencies. So, when you speak an 'S' into the mic, the higher-frequency filters fed by the mic will have an output, while there will be no output from the lower-frequency filters. On the other hand, plosive sounds (such as 'P' and 'B') contain lots of low-frequency energy. Speaking one of these sounds into the microphone will give an output from the low-frequency filters. Vowel sounds produce outputs at the various mid-range filters.

But this is only half the picture. The instrument channel, like the mic channel, also splits into several different filters and these are tuned to the same frequencies as the filters used with the mic input. However, these filters include DCAs or VCAs (digitally controlled or voltage-controlled amplifiers) at their outputs. These amplifiers respond to the signals generated by the mic channel filters; more signal going through a particular mic channel filter raises the amp's gain.

Now consider what happens when you play a note into the instrument input while speaking into the mic input. If an output occurs from the mic's lowest-frequency filter, then that output controls the amplifier of the instrument's lowest filter, and allows the corresponding frequencies from the instrument input to pass. If an output occurs from the mic's highest-frequency filter, then that output controls the instrument input's highest-frequency filter, and passes any instrument signals present at that frequency.

As you speak, the various mic filters produce output signals that correspond to the energies present at different frequencies in your voice. By controlling a set of equivalent filters connected to the instrument, you superimpose a replica of the voice's energy patterns on to the sound of the instrument plugged into the instrument input. This produces accurate, intelligible vocal effects.
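The structure just described can be sketched digitally. The frame-by-frame FFT band grouping below is an assumption standing in for the parallel analogue filter banks and VCAs; the gated tone 'voice' and square-wave carrier are purely synthetic test signals.

```python
import numpy as np

def vocode(modulator, carrier, frame=512, bands=16):
    """Minimal channel-vocoder sketch: per frame, the carrier's spectrum is
    split into bands and each band's gain is set by the energy in the
    matching band of the modulator (the 'VCA' stage)."""
    out = np.zeros(min(len(modulator), len(carrier)))
    for start in range(0, len(out) - frame, frame):
        M = np.fft.rfft(modulator[start:start + frame])
        C = np.fft.rfft(carrier[start:start + frame])
        Y = np.zeros_like(C)
        for idx in np.array_split(np.arange(len(M)), bands):
            m_level = np.sqrt(np.mean(np.abs(M[idx]) ** 2))
            c_level = np.sqrt(np.mean(np.abs(C[idx]) ** 2)) + 1e-12
            Y[idx] = C[idx] * (m_level / c_level)  # per-band gain control
        out[start:start + frame] = np.fft.irfft(Y, frame)
    return out

fs = 16000
t = np.arange(fs) / fs
voice = np.sin(2 * np.pi * 300 * t) * (np.sin(2 * np.pi * 3 * t) > 0)  # gated tone
synth = np.sign(np.sin(2 * np.pi * 110 * t))  # harmonically rich carrier
y = vocode(voice, synth)
```

Where the 'voice' is silent, every modulator band reads zero, so the output goes silent too: the carrier only sounds when, and where in the spectrum, the modulator has energy.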

Vocoders can be used for much more than talking instrument effects. For example, you can play drums into the microphone input instead of voice, and use this to control a keyboard (I've called this 'drumcoding' in previous articles). When you hit the snare drum, that will activate some of the mid-range vocoder filters. Hitting the bass drum will activate the lower vocoder filters, and hitting the cymbals will cause responses in the upper frequency vocoder filters. So, the keyboard will be accented by the drums in a highly rhythmic way. This also works well for accenting bass and guitar parts with drums.

Note that for best results, the instrument signal should have plenty of harmonics, or the filters won't have much to work on.


Published October 2003

Saturday, January 13, 2018

Q. How can I improve the quality of samples taken from a record deck?

I just got hold of an old record deck and am having problems trying to record samples off some of my Dad's old vinyl. When I plug the deck into my mixer (Mackie 1402 VLZ Pro) I can hear the sounds, but they're really, really quiet and if I turn it up on the desk it gets really noisy. Is this a fault, or am I doing something wrong?
To record samples from a record deck you need a phono preamp stage between your deck and the mixer, otherwise you won't be able to capture anything usable, given that the audio will be so quiet. And anything you do capture will be seriously noisy once the volume is turned up.


Jack Holland, via email

SOS Reviews Editor Matt Houghton replies: There are a couple of issues here, but the answer's pretty simple: you need a phono preamp stage between the deck and the mixer. You've not mentioned problems with the frequency balance, but when mixes are mastered for vinyl, a 'pre-emphasis' curve is applied, boosting the high frequencies and cutting bass. This reduces surface noise and keeps low-frequency groove excursions manageable, but a corrective EQ curve needs to be applied to restore the correct frequency balance on playback.
That side of things can be done in software if you want, but you'll still need to boost the signal to a sensible level, either using the mixer preamps or a separate phono preamp — which will both apply the corrective EQ curve and boost the output. The ART DJ Pre II Phono Preamp provides a tailor-made solution for getting the sound directly into your computer, but to feed your mixer the right signal, any old hi-fi amp with a phono input and tape in/out facility should do the job. The tape out would be used to feed a signal to your mixer channels. Decent-quality amps can be had cheaply off eBay and similar sites. The Mackie accepts both balanced and unbalanced inputs, and you'll be fine feeding it signals from consumer gear like this.
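For reference, the playback ('de-emphasis') side of the curve applied to vinyl is defined by three standard RIAA time constants (3180, 318 and 75 microseconds), and its shape is easy to compute. This sketch only covers the EQ-correction side of the job; it does not replace the gain a phono preamp provides.

```python
import math

# RIAA playback (de-emphasis) response from its three standard time
# constants; the result is a gain in dB, quoted relative to 1kHz.
T1, T2, T3 = 3180e-6, 318e-6, 75e-6  # pole, zero, pole

def riaa_db(f):
    w = 2 * math.pi * f
    num = 1 + (w * T2) ** 2
    den = (1 + (w * T1) ** 2) * (1 + (w * T3) ** 2)
    return 10 * math.log10(num / den)

ref = riaa_db(1000)
for f in (20, 100, 1000, 10000, 20000):
    print(f, round(riaa_db(f) - ref, 1))
# roughly +19.3dB at 20Hz and -13.7dB at 10kHz, relative to 1kHz
```

In other words, playback boosts the bass and cuts the treble by up to about 20dB at the band edges, which is why skipping the correction leaves transfers sounding thin and hissy.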


Published November 2012

Thursday, January 11, 2018

Q. Why do speakers only seem to have round diaphragms?

I've noticed a lot of microphones on the market lately that have odd-shaped diaphragms: for example, there's a Pearl model with a rectangular diaphragm and an Ehrlund mic with a triangular diaphragm. Given that mics and speakers are both transducers, why don't we see different shapes like this in speakers? I've only ever seen round and elliptical shapes.

Darren Ellis, via email

SOS Technical Editor Hugh Robjohns replies: In a capacitor microphone, the diaphragm barely moves, because it's not trying to absorb sound energy, just sense the changing air pressure. As a result, there's virtually no significant movement necessary at the edges of the diaphragm, so the 'surround' isn't too difficult to deal with, even in square and triangular arrangements. The idea of non-round diaphragms, by the way, is to minimise and control the natural membrane resonances. Whereas a round diaphragm has a strong single primary resonance, a rectangular diaphragm has two, related to its different length and width dimensions. And, if arranged carefully, these resonances will be weaker and spread over a greater frequency range, which gives a smoother overall performance. A triangular diaphragm has no parallel surfaces, and so no strong resonances at all.
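The mode-spreading argument can be illustrated with the textbook result for an ideal rectangular membrane, whose resonant mode frequencies scale with sqrt((m/L)^2 + (n/W)^2). This is an idealisation for illustration only, not a model of any real diaphragm.

```python
import math

def mode_ratios(L, W, count=4):
    """Lowest few membrane mode frequencies, as ratios to the fundamental.
    An ideal rectangular membrane's modes scale with hypot(m/L, n/W)."""
    modes = sorted(math.hypot(m / L, n / W)
                   for m in range(1, 4) for n in range(1, 4))
    f0 = modes[0]
    return [round(f / f0, 2) for f in modes[:count]]

print(mode_ratios(1.0, 1.0))  # square: degenerate (coincident) modes
print(mode_ratios(1.0, 0.6))  # rectangle: modes spread apart
```

For the square, pairs of modes coincide and pile up energy at single frequencies; make the sides unequal and those pairs split, spreading weaker resonances over a wider range, which is the smoothing effect described above.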

Although oddly-shaped diaphragms are used on microphones, speaker cones are required to move far more air and require very flexible surrounds. Non-circular speaker diaphragms are consequently difficult to do well.

Loudspeaker cones have similar resonant modes, but non-round diaphragms are much harder to implement. The main reason is that a loudspeaker has physically to move a lot of air and that means the diaphragm has to move a relatively long way. This 'long throw' diaphragm movement requires a very flexible surround, and achieving that in a non-circular shape is a serious design headache. A suitable 'cornered' surround would be likely to introduce all sorts of unwelcome 'non-linearities'. It can be done: Sony manufactured flat square drive units for some of its consumer speakers many years ago (for example, the Sony APM X270). However, the idea was much more about quirky aesthetics than audio quality and wasn't a great success, as the higher manufacturing costs far outweighed the dubious sonic benefits.


Published September 2012

Wednesday, January 10, 2018

Q. How important are microphone self-noise and SPL figures?

I am interested in the Shure SM7b mic and have been looking at its specifications, but the Shure web site seems to be missing information for self-noise and maximum SPL levels. I've heard people saying that the SM7 can handle up to 180dB SPL! I'm curious as to whether or not that is true (probably not) and if it is anywhere near that, I'm assuming it's because it's got some kind of -30dB switch on it or something crazy like that. Can you shed any light on this?


You won't find self-noise specifications for the Shure SM7b, as it is a dynamic (moving-coil) microphone. The only self-noise generated is the thermal noise from its own output impedance.

Via SOS web site

SOS Technical Editor Hugh Robjohns replies: The reason you can't find those specific specifications is because the SM7 is a dynamic (moving-coil) microphone. In fact, you probably won't find those specs for any dynamic mic from any manufacturer (other than dynamic mics with built-in buffers or gain stages), because they are largely meaningless and pointless figures.

The self-noise generated by a moving-coil microphone is only the thermal noise generated by the mic's own output impedance, which is essentially just the DC resistance of the moving coil itself, plus that of a humbucking coil (if employed) and the output transformer (if present). This noise contribution is negligible, and will be utterly swamped by the receiving preamp's own electronic noise.
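To put a number on 'negligible', the Johnson-Nyquist formula V = √(4kTRB) gives that thermal noise directly. A minimal Python sketch, assuming a nominal 150Ω source impedance (a typical dynamic-mic figure, not an official SM7 specification) and a 20kHz audio bandwidth:

```python
import math

def thermal_noise_dbu(resistance_ohms, bandwidth_hz=20000.0, temp_k=290.0):
    """Johnson-Nyquist noise voltage of a resistance, expressed in dBu."""
    k = 1.380649e-23                       # Boltzmann constant, J/K
    v_rms = math.sqrt(4 * k * temp_k * resistance_ohms * bandwidth_hz)
    return 20 * math.log10(v_rms / 0.775)  # 0dBu = 0.775V RMS

print(round(thermal_noise_dbu(150), 1))    # about -131dBu
```

At roughly -131dBu, that noise floor sits below the equivalent input noise of even a very good preamp (typically quoted around -128dBu with a 150Ω source), which is exactly why the preamp's own noise dominates and the mic figure is never quoted.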

The maximum SPL for a dynamic mic is determined mainly by the range of mechanical movement afforded to the coil, and that will be more than high enough for any conventional application. So it's not unusual to find professional dynamic mics that are capable of over 150dB SPL (for one percent THD), albeit with rapidly increasing distortion towards the limits, and with mechanical clipping occurring when the diaphragm and/or coil hits the end stops at 170 or 180dB SPL.

In contrast, the self-noise and maximum SPL figures are quoted for all electrostatic mics (capacitor and electret) because the impedance converter electronics built into the microphone determine the mic's dynamic range capability, the lower limit being set by the amplifier's self-noise, and the upper limit by the amplifier's distortion or clipping.

Published April 2012

Monday, January 8, 2018

Q. Are some analogue signal graphs misleading?

I read your feature about 'Digital Problems, Practical Solutions' (/sos/feb08/articles/digitalaudio.htm), which said that digital audio can capture and recreate analogue signals accurately, and that the 'steps' on most teaching diagrams are misleading. Does that mean that the graph should really show lines, or plot 'x's, instead of looking like a standard bar-graph?

Remi Johnson via email

SOS Technical Editor Hugh Robjohns replies: Good question! The graphs in that article are accurate as far as they go, but offer a very simplified view of only one part of the whole, much more complex, process.
When an analogue signal (the red line on Graph 1: Sample & Hold) is sampled, an electronic circuit detects the signal voltage at a specific moment in time (the sampling instant) and then holds that voltage as constant as it can until the next sampling instant. During that holding period the quantising circuitry works out which binary number represents the measured sample voltage. This, not surprisingly, is called a 'sample and hold' process, and that's what that diagram is trying to illustrate.

Graph 1: Sample & Hold

So the sampling moment is, theoretically, an instant in time, best represented on the graph as a thin vertical line at the sample intervals (the blue lines in the picture Graph 1: Sample & Hold), but the actual output of the sample and hold process is the grey bar extending to the right of the blue line.
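The sample-and-hold stage described above can be sketched numerically. This is a toy illustration, using a hypothetical 1kHz tone at a 48kHz sample rate, with a 64-times-oversampled grid standing in for the continuous analogue signal:

```python
import numpy as np

fs = 48000                # sample rate, Hz
f = 1000                  # test tone, Hz
oversample = 64           # fine grid standing in for continuous time

n_fine = np.arange(int(0.002 * fs * oversample))               # 2ms of signal
analogue = np.sin(2 * np.pi * f * n_fine / (fs * oversample))  # the 'red line'

# Sample & hold: read the voltage at each sampling instant, then hold it
# constant until the next instant (the grey bars on Graph 1).
samples = analogue[::oversample]
held = np.repeat(samples, oversample)
```

While each value is being held, the quantiser has time to settle on the binary number that best matches it.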

However, the key to understanding sampling is understanding the maths behind that theoretical sampling 'instant', and that means delving into the maths of 'sinc' (sin(x)/x) functions, which is the time-domain response of a band-limited signal sample. At this point most musicians' eyes glaze over…
As we know, the measured amplitude of each sample from an analogue waveform is represented by a binary number in the digital audio system. When reconstructing the analogue waveform that number determines the height of the sinc function.

The important point is that we are not just creating a simple 'pulse' of audio at the sample point, because the sinc signal actually comprises a main sinusoidal peak at the sampling instant (and of the required amplitude), plus decaying sine wave 'ripples' that extend (theoretically for ever) both before and after that central pulse. The reconstructed analogue waveform is the sum of all the sinc functions for all the samples.
The clever bit is that the points where those decaying sinc ripples cross the zero line always occur at the adjacent sampling instants. This is shown in the next diagram (Graph 2: Two Sinc Functions) where, for simplicity, just two sample sinc functions are shown for samples 23 (red) and 27 (blue). You can see that at the intermediate sample points (26, 25, 24 and so on) the sinc functions are always zero.

Graph 2: Two Sinc Functions

That means that the ripples don't contribute to the amplitude of any other sample, but they do contribute to the amplitude of the reconstructed signal in between the samples, with the adjacent sample sinc functions having the greatest influence, and lesser contributions from the more distant samples. This is shown in the next diagram (Graph 3: 3kHz Sinc Addition), in which the sinc functions of a number of adjacent samples are shown, and when summed together produce the dotted line that is a sampled 3kHz sine waveform.
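That summation is easy to demonstrate numerically. Here's a sketch under assumed values (an 8kHz sample rate and a 1kHz tone, chosen purely for illustration; numpy's `sinc` is the normalised sin(πx)/(πx)):

```python
import numpy as np

fs = 8000                                  # assumed sample rate, Hz
n = np.arange(-64, 64)                     # sample indices
x = np.sin(2 * np.pi * 1000 * n / fs)      # the sampled 1kHz sine

# Reconstruct the waveform between the samples by summing one sinc function
# per sample, each scaled by that sample's measured amplitude.
t = np.linspace(-2, 2, 161)                # time axis in sample periods
recon = sum(x[i] * np.sinc(t - n[i]) for i in range(len(n)))
```

Because sinc is zero at every non-zero integer, the sum passes exactly through the original sample values; in between them, the overlapping ripples fill in the band-limited curve.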

Graph 3: 3kHz Sinc Addition

These last two diagrams have been borrowed from a superb paper by Dan Lavry (of Lavry Engineering), which explains sampling theory extremely well, and can be found here: www.lavryengineering.com/documents/Sampling_Theory.pdf.


Published May 2012


Friday, January 5, 2018

Q. Can I use my aux send/return loop, or do I need insert points?

I'm trying to hook a Behringer Denoiser and an SPL Vitalizer MK1 into an older-model Alesis Multimix 16USB mixing console, working with the sends and returns. I purchased two sets of send/Y-leads, which are obviously TRS single-to-dual monos, since the desk is a single jack send, but double on the returns. Now, when I'm fully set up on both units, and I plug in fully, I am missing one channel on each. I'm using the less reliable method of inserting the Y-lead plug halfway into the insert until there is a springy 'click' feeling and all is fine. Do I need to purchase a different-style lead, a TS, and not a TRS? I've not used many outboard effects before in send modes, but am digging out old gear that may still be of use.

If your console doesn't have channel-insert sockets and you want to be able to process individual source channels, the easiest solution would be to use a patchbay.

Via SOS web site

SOS Technical Editor Hugh Robjohns replies: Hooking up the Behringer Denoiser and the SPL Vitalizer MK1 into the Alesis Multimix should be easy enough. The first port of call is the manual, to check how the mixer is wired. In this case, there are no channel or mix insert points, but there are two mono aux sends (called Aux A and Aux B), and both are wired as impedance-balanced outputs. That means that there is a signal on the tip connection, but no signal on the ring connection.

The mixer also has two stereo balanced effects returns (called FX Return A and B), although the B return socket is normalled from the output of the internal FX processor. Stereo FX Return A is wired such that if you only plug something into the left channel, it is normalled across to the right as well. FX Return B does not have that facility. As I recall, the original stereo Vitalizer had both balanced XLRs and unbalanced quarter-inch TS sockets for the inputs and outputs, and the Behringer SNR2000 two-channel Denoiser has both XLR and TRS sockets on the inputs and outputs, wired balanced, but usable unbalanced.
Given these interconnection formats, it makes sense that you'd be missing one channel with your types of cables. The kind of cable you have connects the TRS tip to the tip of one of the TS plugs, and the TRS ring to the tip of the other. With an impedance-balanced output, there will, therefore, only be a signal on one of the TS plugs.

That's physically what is available, so how should things be connected? Firstly, in terms of the send/return Y-leads you've purchased, I think you've misunderstood what's going on here. Each aux output is a mono send. The effects returns are stereo returns. This is quite normal because, typically, you'd be patching a stereo reverb across them: taking a mono input to the reverb and creating a stereo return signal, for example.

Both the Vitalizer and the Denoiser are stereo or dual-channel devices — so what are you trying to achieve? If you want to process the main mix bus, the aux send/effects return loop can't access the mix bus at all. If you want to process a stereo input channel, the aux sends are both derived mono sums, so that won't work in stereo either. And, if you want to process the input channels individually in mono, plugging up both sides of both processors is pointless.

But, fundamentally, both the Vitalizer and the Denoiser are really insert processors, not send-return processors. They are both designed to work directly on the source signal, and the processed signal is then mixed with all the other console inputs. Neither the Vitalizer nor the Denoiser generates an independent return signal — like, say, a reverb does — that you would want to mix alongside everything else. Basically, these tools are simply not designed to be used in an aux-send/effects-return configuration.

If you want to be able to process individual source channels through the Vitalizer or Denoiser, and the console doesn't have channel-insert sockets (and yours doesn't), the easiest solution would be to invest in a TRS patchbay. You could then manually patch the source signals either directly to the mixer inputs, or to a processor input, and then patch the processor output back to the appropriate mixer input. You could even patch the stereo mix out via the Vitalizer or Denoiser before sending it on to your recording and monitoring chain. That would be a far more practical and sensible solution.



Published April 2012

Tuesday, January 2, 2018

Q. Can I output my final mix one channel at a time?

I have recently purchased a Golden Age Project Pre 73 MkII and Comp 54 on the recommendation of someone from the SOS forums, and I am so pleased. I use an RME Babyface and wondered, with my limited hardware, would it be possible to output my final mix one channel at a time through the Comp 54? The reason I ask is that the hardware adds something that no VST seems to be able to do. If someone knows how I could do this it would be great. If it matters, the DAW I am using is Reaper.

Via SOS web site

SOS Technical Editor Hugh Robjohns replies: The answer is yes, but it's not as straightforward as it might appear and you need to be careful.

The basic problem is that when you're working with a stereo mix the stereo imaging is determined by the subtle level differences of individual instruments in the two channels. A compressor, by design, dynamically alters the level of whatever passes through it, according to that signal's own level.

Imagine an extreme situation where you have some gentle acoustic guitar in the centre of your mix image, and some occasional heavy percussion panned hard left. If you process those two channels with separate unlinked compressors, the right channel compressor only sees a gentle guitar and does nothing, while the left channel compressor will feel obliged to wind the level back every time the mad drummer breaks out.

While you may like the effect a certain piece of gear (like this Golden Age Project Comp 54) has on your recordings, passing your left and right channels through it separately is not a good idea. The reason for this is that the compressor can only react to what it is fed at any given time. So when the left and right channels are heard together — after being run through the Comp 54 — the sound will be very uneven. You can get around this by setting up an external side-chain input, which will cause the compressor to react to what it gets from the other channel, but with the Comp 54 this is not possible, so another approach altogether might be in order.

Listen to the two compressed channels afterwards in stereo and the result will be a very unsettled guitarist who shuffles rapidly over to the right every time the percussionist breaks out (probably a wise thing to do in the real world, of course, but not very helpful for our stereo mix).

If you process your stereo mix one channel at a time through your single outboard compressor, that's exactly what will happen. The compressor will only react to whatever it sees in its own channel during each pass, and when you marry the two compressed recordings together again you will find you have an unstable stereo image. The audibility of this, and how objectionable you find it, will depend on the specific material (the imaging and dynamics of your mix), but the problem will definitely be there.

Stereo compressors avoid this problem by linking the side chains of the two channels, so that whenever one channel decides it has to reduce the gain, the other does too, and by the same amount. In that way it maintains the correct level balance between the two channels and so avoids any stereo image shifts.
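A toy numerical sketch of that linking, working sample by sample on peak level (no attack or release smoothing, so illustrative only, not a model of any real unit):

```python
import numpy as np

def compress_linked(left, right, threshold=0.5, ratio=4.0):
    """Linked-side-chain compressor: gain reduction is computed from
    whichever channel is louder, then applied equally to BOTH channels,
    so the left/right balance (and the stereo image) is preserved."""
    level = np.maximum(np.abs(left), np.abs(right))   # shared side-chain
    over = np.maximum(level - threshold, 0.0)
    safe = np.maximum(level, 1e-12)                   # avoid divide-by-zero
    gain = np.where(level > threshold, (threshold + over / ratio) / safe, 1.0)
    return left * gain, right * gain
```

Feed it a quiet centre guitar (equal in both channels) plus loud hard-left percussion, and both channels are ducked by the same amount, so the guitar stays put in the image.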

You can achieve the same end result if your single outboard compressor has an external side-chain input, but sadly I don't think the Golden Age Project model does. If it did, what you'd need to do is create a mono version of the stereo mix in your DAW and feed that mono track out to the compressor's external side-chain input, along with one of the individual stereo mix channels (followed by the other). That way, the compressor will be controlled only by the complete mono mix when processing the separate left and right mix channels, so it will always react in the same way, regardless of what is happening on an individual channel, and there won't be any image shifting.

That's no help to you with this setup, of course, but don't give up yet, as there is another possibility. You could take an entirely different approach, and that's to compress the mix in a Mid/Side format instead of left-right. It involves a bit more work, obviously, as you'll need to convert your stereo track from left-right to Mid/Side, then pass each of the new Mid and Side channels separately through the compressor, and then convert the resulting compressed Mid/Side channels back into left-right stereo. Using an M/S plug-in makes the task a lot easier than fiddling around with mixer routing and grouping, and there are several good free ones around.

The advantage of this Mid/Side technique is that, although the Mid and Side signals are still processed separately and independently, the resulting image shifts will be much less obvious: instead of blatant left-right shifts, they appear as variations in overall image width, which are far less noticeable to the average listener.
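The conversions themselves are just a sum-and-difference matrix, so they're trivial to apply to raw channel data if you'd rather not use a plug-in. A minimal sketch (the helper names are mine, not from any particular plug-in):

```python
import numpy as np

def lr_to_ms(left, right):
    """Mid carries what the two channels have in common; Side carries
    the difference (the stereo width information)."""
    return (left + right) / 2, (left - right) / 2

def ms_to_lr(mid, side):
    """The inverse sum/difference recovers left and right exactly."""
    return mid + side, mid - side
```

Compress the Mid and Side signals separately between the two conversions; any mismatch in gain reduction then modulates the Mid/Side balance — the width — rather than pulling sounds left or right.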

Sorry for the long-winded answer, but I hope that has pointed you in the right direction.
SOS Reviews Editor Matt Houghton adds: I agree with Hugh's suggestion of M/S compression. I regularly use that when I want to deploy two otherwise unlinkable mono compressors, and there's no reason why you can't process the Mid and Side components one at a time. The only issue here will be your inability to preview what you're doing to a stereo source, so be careful not to overwrite your original audio files!

However, I sense that it's the effect of running through the compressor's transformers that you're hoping to achieve. In that case, just set to unity gain and set the threshold so that the unit isn't compressing, and then run the signal through it. If it is standard L/R compression you want, you could always get another Comp 54, as although they're mono processors they're stereo-linkable with a single jack cable.

In Cubase, I find that the best approach to incorporating such outboard devices into my setup is to create an External FX plug-in for each device, and then insert that on each channel and print the result. In Reaper, the equivalent tool is the excellent ReaInsert plug-in. This approach not only makes the process less labour intensive in the long run, but means that you can drag and drop the processor to different points in the channel's signal chain, should you want to.


Published February 2012