Hugh Robjohns
I know it changes settings according to the source material, but how does the 'automatic recovery' function on a compressor actually work?
Bob Rogers via email
SOS Technical Editor Hugh Robjohns replies: The first thing to consider is why an auto-recovery function is useful at all. A slow recovery time produces very gentle changes in loudness, which are perceived as controlled but 'un-dynamic' — often a good reason for choosing a slow setting. There's a risk, though, that a sudden, large transient peak will cause substantial gain reduction that takes ages to recover, effectively 'punching a hole' in the audio. This can be an interesting effect, but it's often highly undesirable!
A very fast recovery time, on the other hand, produces an aggressively 'bouncy' or 'pumpy' signal, and those rapid gain changes are perceived as being very loud. A heavily compressed signal with a fast recovery time will always sound louder than the same signal with a slow recovery time — even though the peak levels are exactly the same. The downside of a fast recovery time is that the resulting rapid changes in any ambient noise level can become very obvious and may be perceived as unpleasant — the effect we know as 'pumping'.
So, if we want to make the signal sound loud (the most common reason for employing a compressor!) we need a fast recovery time, but if we also want to avoid ambience pumping we need a slow recovery time. The usual compromise is to find a single setting that achieves a reasonable balance between these two requirements.
The dual-stage or auto-recovery system is an attempt to provide both fast and slow recovery times simultaneously, handling the loudest signal transients with a fast recovery to maintain the perceived loudness, while cushioning the quieter signals with a slow recovery time to avoid pumping. Hopefully, this will give the best of both worlds with none of the drawbacks, and avoid the inherent compromises of a fixed recovery time.
You've probably heard it in action, but have you ever wondered what the 'auto' setting on a compressor actually does to your signal?
To understand how this auto-recovery mode works, you must examine how the standard fixed attack and recovery (or release) times are typically established. The side-chain circuitry starts by rectifying the audio signal to convert it into a varying DC voltage, which represents either the signal's peak or average (RMS) amplitude, depending on the design. When that DC voltage exceeds the compressor's threshold level it's allowed to charge a 'time-constant' capacitor through a resistor, so that the voltage across the capacitor builds relatively slowly (taking a few milliseconds, typically). This slowly building voltage is used to control the compressor's gain-reducing element, causing it to increase the audio signal attenuation at a controlled rate called the 'attack time'. Altering the value of the resistor changes the charging time, allowing the attack time to be adjusted manually.
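As a rough illustration of that RC charging behaviour, the sketch below computes the voltage across a capacitor charging through a resistor. The component values are hypothetical, chosen simply to give a time constant of about 1ms; after one time constant the capacitor has reached roughly 63 percent of its target voltage, which is why changing the resistor value changes the effective attack time.

```python
import math

def rc_charge(v_target, t, r_ohms, c_farads):
    """Voltage across a capacitor charging towards v_target
    through resistor R, after t seconds (ideal RC behaviour)."""
    tau = r_ohms * c_farads  # time constant in seconds
    return v_target * (1.0 - math.exp(-t / tau))

# Hypothetical values: 10 kohm and 0.1 uF give a 1 ms time constant.
tau = 10e3 * 0.1e-6  # 0.001 s
v = rc_charge(1.0, tau, 10e3, 0.1e-6)
print(round(v, 3))  # after one time constant: ~63% of the target, i.e. 0.632
```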
When the audio signal level falls below the threshold value, the capacitor is allowed to discharge through a second resistor, controlling the recovery or release time. As the capacitor discharges, the voltage across it falls, and the gain-reducing element restores the program signal level back to normal. Usefully, the capacitor charges and discharges with an exponential curve, and that means the voltage change can be easily engineered to have a logarithmic response, and that's ideally suited to the way we perceive loudness. In other words, it ensures that the program loudness through the attack and recovery phases changes smoothly and progressively, just like riding a mixer's fader up and down.
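The same charge/discharge idea translates directly into the one-pole envelope follower used in digital compressors. This sketch (not any particular product's algorithm) picks a 'charging' coefficient while the rectified level is rising and a 'discharging' coefficient while it's falling, mirroring the two resistors in the analogue side-chain; the attack and release figures are illustrative.

```python
import math

def envelope_follower(levels, fs, attack_s, release_s):
    """One-pole attack/release smoothing of a rectified level signal:
    the digital equivalent of the RC time-constant circuit."""
    a = math.exp(-1.0 / (fs * attack_s))   # 'charging' coefficient
    r = math.exp(-1.0 / (fs * release_s))  # 'discharging' coefficient
    env, out = 0.0, []
    for x in levels:
        coeff = a if x > env else r  # rising: attack path; falling: release path
        env = coeff * env + (1.0 - coeff) * x
        out.append(env)
    return out

# A 100 ms burst at a 1 kHz control rate, 5 ms attack, 100 ms release:
burst = [1.0] * 100 + [0.0] * 100
env = envelope_follower(burst, fs=1000, attack_s=0.005, release_s=0.1)
```

Because each coefficient comes from `exp()`, the envelope rises and falls exponentially, just like the capacitor voltage described above.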
The auto-recovery mode works in exactly the same way, but with two (or sometimes three) time-constant circuits connected together, each with a different discharge resistor value (see diagram). The idea is that the loudest two-thirds (or thereabouts) of the signal voltage is used to charge the top capacitor, which is arranged to have a very fast discharge time. In this way the level changes in the loudest parts of the signal are handled with a very fast recovery time (typically 30ms), which maintains that perception of loudness.
Quieter elements of the signal charge the lower capacitor, which is arranged to have a much slower discharge time. So once the signal falls below the compressor's threshold level, the recovery rate is initially very fast, but then it slows so that the last 4dB or so of gain reduction is released very slowly (maybe with a 700ms recovery time). This ensures that the level of ambient noise is restored subtly, without audible pumping. Additional time-constant networks are sometimes used to introduce a more gradual change in the recovery rates, although the simple two-stage arrangement is extremely effective and widely employed.
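The two-stage behaviour described above can be sketched numerically: gain reduction recovers with the fast time constant until only the last few dB remain, then switches to the slow one. The 30ms, 700ms and 4dB figures are taken from the text; the function itself is an illustrative model, not a schematic of any real unit.

```python
import math

def auto_release(gr_db_start, fs, fast_s=0.030, slow_s=0.700,
                 knee_db=4.0, seconds=1.0):
    """Two-stage 'auto' release sketch: gain reduction above knee_db
    decays with the fast time constant, the final knee_db with the slow one."""
    fast = math.exp(-1.0 / (fs * fast_s))
    slow = math.exp(-1.0 / (fs * slow_s))
    gr, out = gr_db_start, []
    for _ in range(int(fs * seconds)):
        gr *= fast if gr > knee_db else slow  # pick the active discharge path
        out.append(gr)
    return out

# 12 dB of gain reduction at a 1 kHz control rate: the first 8 dB
# recover within a few tens of ms, the last 4 dB take most of a second.
curve = auto_release(12.0, fs=1000)
```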
So, really, an automatic release is often a good option if you're experiencing unwanted pumping of the signal from a long release, but it's less likely to be suitable with high ratios when you're approaching limiting — when you'll pretty much always want a relatively fast release.