Gordon Reid's Guide To Synthesis
Published August 2005
By Gordon Reid
We launched a brand new website on 15 June 2016, and at present it holds articles from August 2005 to the latest issue.
This means that the entire Synth Secrets series of articles is not yet on this site.
Temporary Solution
However, there is a temporary solution. You can find all parts on the wonderfully useful web.archive.org site at this URL.
We are working as fast as possible to add more past issues and will eventually reach right back to the January 1994 issue, but this will take time as you can appreciate.
We thank you for your patience and understanding.
Welcome to No Limit Sound Productions. Where there are no limits! Enjoy your visit!
Company Founded: 2005
Overview: Our services include Sound Engineering, Audio Post-Production, System Upgrades and Equipment Consulting.
Mission: Our mission is to provide excellent quality and service to our customers. We offer a customised service.
Thursday, June 30, 2016
Wednesday, June 29, 2016
Q. What’s wrong with playing while wearing headphones?
Sound Advice : Recording
Mike Senior
It’s fine to use cans while recording if that suits the artist, but there are very good reasons to avoid them, particularly when working with singers.
On Wikipedia, the musician Chris Thile is quoted as saying the following about the recording of How To Grow A Woman From The Ground (easily one of my favourite albums of all time): “Everything was tracked live, and I’ve decided never to record wearing headphones again unless I absolutely have to... because you’re in your own little world playing to a mix that no-one will ever hear but you. What’s the point?” I don’t get this. What’s wrong with headphones? I can see someone not wanting to overdub, but why not play with headphones? Surely he’s confusing overdubbing with recording with headphones?
SOS Forum post
SOS contributor Mike Senior replies: Actually, I don’t think he is confusing anything, and I also broadly speaking agree with him myself, for many reasons. What Thile is specifically saying here, as I see it, is that if he turns himself up in his own cans so he can better hear what he’s playing, for example, then he’ll likely undermine his normal musical impulse to project more strongly when his solo comes around. And that alteration in the normal balance of the performance may then have knock-on effects for the other players, and so on. The interactions that happen between expert acoustic performers are tremendously subtle, and based on their previous experiences of playing together. If you bring an ensemble into the studio and suddenly give them completely different balances from those they normally hear in rehearsal, then you’re asking for trouble — not just a less secure performance, but also a mix where the balances need masses more automation work to clarify.
But that’s not the only problem with headphones, by any means. For a start, for most musicians, the sound coming through the mic-plus-headphone chain will usually be a pale imitation of the acoustic reality they’re intimately familiar with, through hours and hours of practising, so when working on headphones they may find it difficult to control the tonal variables of the instrument with the same assurance they normally do. Then there’s the issue of monitoring latency to deal with in digital systems, which is often a problem in project studios — it’s not just a timing issue, either, because even the smallest monitoring latencies can trigger comb-filtering between the monitored signal and bone-conducted sound coming through the performer’s body, especially where brass/wind instruments and vocalists are concerned. And finally, separate headphone mixes can make natural communication between players rather tricky between takes, which makes refining arrangements and performances harder than it needs to be — and even if it doesn’t, for most people there’s something unnatural about communicating with people whose answers sound like they’re being beamed directly into your head!
To put it another way: what are the advantages of working without headphones? In my experience, it allows the technology to retreat into the background as far as the performers are concerned. They can play and interact the way they normally would during rehearsal, so their natural dynamics can be captured by the mics, and those same dynamics do half the mixing work for you. Now, clearly, project-studio recording sessions often have to diverge from that ideal for many practical reasons, so there’s no escape from using headphones in many situations. If you have a choice, though, I’d thoroughly recommend giving them a miss. If nothing else, think of the time you’ll save setting it all up!
Tuesday, June 28, 2016
Monday, June 27, 2016
Q. How much impact can a CD transport have on sound quality?
Sound Advice : Maintenance
Hugh Robjohns
I’ve been looking for a compact CD transport [an audio CD Player with a digital output, usually with no on-board D-A converter] to hook up to my home-studio DAC and I’m surprised by how expensive they are — €1000 to €2000 isn’t rare and those aimed at rich-and-crazy audiophiles cost 10 times that. I can live with the idea of spending €1000 on a really nice DAC, or €2000 on really good speakers that will last the rest of my life, but shouldn’t it be entirely possible to make a decent CD transport for €250 or less? How much impact can a CD transport really have on sound quality (assuming there’s nothing wrong with the CD it’s playing)?
SOS Forum post
SOS Technical Editor Hugh Robjohns replies: Equipment prices in the audiophile market have always confounded and amazed me, but so too does your acceptance that a nice DAC should cost four times as much as a reliable disc transport! Have you ever really thought about what a CD transport has to do? A DAC is basically a little bit of etched silicon (albeit a cleverly designed one), glued onto a circuit board with lots of other little bits of silicon. Once these chips have been designed, the manufacturers can churn them out in the thousands for micro-cents, and the rest of the electronics are pretty cheap to build too. A lot of expertise is required to design a system in which all the elements come together in a way that achieves the desired level of performance, but we’re really only talking about good PSU design, effective grounding strategies, attention to clocking arrangements, and so on. It’s challenging, but it’s not rocket science!
Contrast that with a CD transport, which is, in my view, a miracle of intricate moving parts which need to be positioned precisely and continuously within fractions of a micrometre, with sophisticated servo-control circuitry, and elaborate digital processing systems! The data bumps pressed into a spiral within the CD have to be positioned under the laser beam and detector assembly at a very steady speed of around 1.4m/s (about 3mph, or walking speed!) and, since the bump spiral runs from inside to outside, the disc rotation motor has to vary its speed as the disc plays, from about 500rpm down to 200rpm, and that needs a sophisticated servo-control system.
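Incidentally, the relationship between that constant linear velocity and the changing rotation speed is easy to sanity-check with a little arithmetic. The following is just a back-of-envelope sketch in Python; the inner and outer programme radii (roughly 25mm and 58mm) are assumptions based on the disc's standard dimensions rather than figures quoted above:

```python
import math

def rpm_for_radius(linear_speed_m_per_s, radius_m):
    """Revolutions per minute needed to keep the track at a given radius
    passing the laser at a constant linear speed."""
    circumference = 2 * math.pi * radius_m
    return (linear_speed_m_per_s / circumference) * 60

LINEAR_SPEED = 1.3    # m/s, roughly the middle of the CD's specified range
INNER_RADIUS = 0.025  # ~25mm, where the programme spiral starts (assumed)
OUTER_RADIUS = 0.058  # ~58mm, where it ends (assumed)

print(round(rpm_for_radius(LINEAR_SPEED, INNER_RADIUS)))  # ~497rpm at the start of the disc
print(round(rpm_for_radius(LINEAR_SPEED, OUTER_RADIUS)))  # ~214rpm at the end
```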
The presence or absence of bumps within the disc is detected by a laser beam which is focused dynamically to a spot about 1.2µm across, and that laser beam has to be kept precisely aligned on the passing stream of bumps with the separation between adjacent spirals of just 1.6µm. Just to make things slightly harder, the spiral groove is probably also wobbling from side to side as the disc spins because the centre hole is probably slightly off-centre, and the horizontal plane of the bumps is probably also moving up and down because the disc balance won’t be perfect either. So the laser focusing mechanism is continuously having to readjust the beam focus onto the moving surface. On top of all that, the entire laser-beam generator and optical sensor assembly has to be motored across the surface of the disc while keeping the static laser beam precisely focused onto the moving spiral of microscopic bumps within the disc’s polycarbonate substrate.
If you scale these dimensions up a million times so that the bumps are about 1.2 metres wide — about the same as the width of the crash barrier down the centre of a motorway — the tracking servos in the laser assembly are performing the equivalent of flying a jumbo jet at three million miles an hour, while keeping the nose wheel aligned to within about 20cm either way directly above the central crash barrier as the motorway winds its way across the countryside.
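That 'million times bigger' analogy is simple to verify. Here's a quick sketch of the arithmetic; the 0.2µm tracking tolerance is an assumption inferred from the 20cm figure in the analogy rather than a number stated above:

```python
SCALE = 1_000_000              # the 'scale everything up a million times' thought experiment
M_S_TO_MPH = 2.23694           # metres per second to miles per hour

bump_width_m = 1.2e-6          # ~1.2 micrometre bump width
linear_speed_m_s = 1.4         # ~1.4m/s track speed past the laser
tracking_tolerance_m = 0.2e-6  # assumed tracking accuracy implied by the '20cm' figure

print(round(bump_width_m * SCALE, 3))                         # 1.2 (metres) -- crash-barrier width
print(round(linear_speed_m_s * SCALE * M_S_TO_MPH / 1e6, 1))  # ~3.1 (million mph)
print(round(tracking_tolerance_m * SCALE, 3))                 # 0.2 (metres) -- about 20cm either way
```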
So we are already talking about mind-blowing precision with unbelievably fast-moving targets — but it doesn’t stop here. The inevitable cyclical variations in the amount of power demanded by the tracking and focus servos can result in varying reference voltages in the digital, clocking, and analogue circuitry, if the transport’s power-supply system is not carefully designed. That can cause significant problems too, especially if the CD player has an on-board DAC. Prism Sound ran some fascinating tests in 1996 to find out why numerically identical CDs were perceived to sound different when played on various CD systems, and found that power-supply modulation problems played a significant role (www.prismsound.com/m_r_downloads/cdinvest.pdf).
If the tracking and focusing systems all work as they should, the photo-detector in the laser assembly will output an analogue signal with a varying brightness (light/dark) which represents the data pressed into the disc. That analogue signal is digitised (so the player needs an accurate word-clock generator) and passed to a decoder chip, which has to figure out how to unscramble the eight-to-fourteen modulation and CIRC encoding structures and pass the resulting binary data to an error-correction system.
When the CD format was designed, it was assumed that errors would be inevitable, so a powerful error-detection and correction system was incorporated. Originally, the designers expected short absences of data caused by pin-holes in the metal reflective layer (because the ‘sputtering’ technology wasn’t 100 percent reliable in the early 1980s), but it turned out that the real problems were caused by deep scratches and oily residues (fingerprints, marmalade...) on the disc, since these deflect the laser beam to a completely different part of the data stream.
In those cases, instead of a short gap in the recovered data stream the receiver gets continuous data, but sections of it are junk and don’t relate sensibly to the rest. Moreover, the beam deflections upset the tracking and focus servos in a major way, ultimately resulting in the familiar ‘stuck-groove’ effect we’ve all heard from scratched CDs.
When faced with such severe problems, the error-correction system can identify that there’s a problem, but the system isn’t powerful enough to correct it — the only option is to throw the confusing data away and take a guess instead: the error-concealment mode. Thankfully, audio tends largely to be cyclical, so a predictive approach to error concealment is surprisingly effective, and artifacts are usually fairly inaudible. Its failings become most obvious and dramatic on transient-rich and high-frequency components.
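Manufacturers don't publish their concealment algorithms, but the basic idea of guessing missing samples from their neighbours can be illustrated with a deliberately naive linear-interpolation sketch (this shows the principle only, not what any real decoder chip does):

```python
def conceal(samples, bad):
    """Replace samples flagged as uncorrectable with values interpolated in a
    straight line between the nearest good samples on either side."""
    out = list(samples)
    i = 0
    while i < len(out):
        if bad[i]:
            start = i
            while i < len(out) and bad[i]:
                i += 1
            left = out[start - 1] if start > 0 else (out[i] if i < len(out) else 0)
            right = out[i] if i < len(out) else left
            span = i - start + 1
            for k in range(start, i):
                out[k] = left + (right - left) * (k - start + 1) / span
        else:
            i += 1
    return out

# A smoothly varying waveform survives concealment almost unscathed;
# a sharp transient in the damaged region would simply be smoothed away.
print(conceal([0, 10, 20, 30, 40, 50], [False, False, True, True, False, False]))
# -> [0, 10, 20.0, 30.0, 40, 50]
```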
After all that, the (hopefully) accurate audio data is re-encoded into an S/PDIF signal and made available on an output socket for an external DAC, which has the relatively easy-peasy job of just turning it back into an analogue signal again.
DVD players have to work four times as hard as a CD transport because all the dimensions are smaller, and Blu-ray even more so! So it’s really quite astonishing that all that can be done for so little money, especially when you compare it with the cost of a high-end record turntable/pickup arm/cartridge or an open-reel tape machine, all of which employ, in comparison, very crude ‘tractor’ engineering!
As to the question, “How much impact can a CD transport have on sound quality?” the reality is that a poor transport could easily be making up large parts of the data if its tracking systems are not working well, forcing the error-correction system to give up and resort to concealing errors instead. The frustrating thing is that you’ll probably never know if it’s doing that because although the error-correction chips provide status flags to indicate how well they’re working, these are not usually brought out to the front panel for the user to see (other than in a few high-end and specialist machines).
A common problem is a susceptibility to mechanical and acoustical vibrations, which cause tracking and focus errors leading to uncorrectable data errors. There’s also the issue of power-rail fluctuation that I mentioned previously, and it may also have less than perfect clocking and so introduce interface jitter in the S/PDIF output. Of course, a good external DAC should be able to remove any interface jitter, but it can’t do anything if the data itself has already been corrupted through the use of error concealment.
The better transports are designed to minimise these risks by using large, competent power supplies to avoid any cross-interference between the different servo systems, the digital electronics and so on. They also have well-isolated and shock-mounted mechanisms so that mechanical vibration doesn’t affect the disc-reading process. The tracking and focus servos, and all the associated mechanics and bearings, are also of the highest standard and able to work quickly and accurately.
Different manufacturers use different mechanism designs, too. The original Philips idea uses a ‘swinging arm’ arrangement in which the laser assembly is mounted on an arm which pivots around a fixed post, tracing an arc across the CD much like a vinyl pickup arm. In contrast, many Japanese transports employ a ‘sled’ laser assembly in which the laser optics are moved along a couple of parallel rails. I’ve found the former to be far more reliable, because dust sticks to the rail lubrication in the latter, resulting in sticky ‘gummed-up’ regions at both ends of the sled’s excursion range over time, making longer discs unplayable towards the end. There are also different techniques used to keep the laser beam on track, with Philips again using a split-beam system, which seems particularly reliable.
So the cost of CD/DVD transports isn’t all that shocking, when you consider all that goes into them and all they have to do! When everything’s working well, the S/PDIF data from a cheap and cheerful transport will be identical to that from a properly engineered unit — that’s the beauty of digital audio. However, the problem comes when the going gets tough and the cheap transport fails to extract the disc information correctly. Its only option is to resort to making the data up through the concealment process, without telling you what it’s doing!
There are a lot of quite reasonable CD transports around in the $350-700 range, while a decent DVD or Blu-ray player should be able to generate good CD data too, because they work to even finer tolerances, and they all seem to have audio S/PDIF outputs capable of feeding an external DAC.
Saturday, June 25, 2016
Friday, June 24, 2016
Q. Why should reverbs be send effects?
Sound Advice : Miking
Mike Senior
Why do people put delays and reverbs on a separate track instead of putting them on the track that has the EQs and compressors? Do you have to do this?
SOS Forum post
SOS contributor Mike Senior replies: There are several reasons why it often makes sense to keep delay and reverb effects on separate tracks. The main thing is that it allows you to share the same plug-in effect instance between several recorded sounds, by sending from their different tracks to a separate effects track via the DAW software’s auxiliary sends system — the so-called ‘send-return’ configuration. This not only makes more efficient use of your computer’s processing power, especially where reverb is concerned (decent reverb effects can be extraordinarily CPU-hungry), but also allows you to implement global changes to an effect across the board with ease. Want a drier sound overall? Just turn down the reverb channel, leaving all your instrument balances otherwise as they were — assuming that you’re using the effect in its ‘wet only’ mode, as is the norm when using separate effects channels in this way.
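To make the routing idea concrete, here's a toy model of the send-return configuration (the instrument names, send levels and linear gain values are invented purely for illustration): several sources feed one shared, wet-only reverb return, so scaling that single return level changes the overall wetness while the dry balance stays put.

```python
# Each source has a dry fader level and a post-fader send into one shared,
# wet-only reverb return. All numbers are made-up linear gains.
sources = {
    "vocal":  {"dry": 1.0, "send": 0.30},
    "guitar": {"dry": 0.8, "send": 0.15},
    "snare":  {"dry": 0.7, "send": 0.50},
}

def mix(reverb_return_level):
    dry = {name: s["dry"] for name, s in sources.items()}
    wet = {name: round(s["dry"] * s["send"] * reverb_return_level, 3)
           for name, s in sources.items()}
    return dry, wet

print(mix(1.0))  # the original balance
print(mix(0.5))  # every reverb contribution halves; the dry balances are untouched
```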
In addition, though, having your delay or reverb effect on a separate track gives you much more control to customise the effect without changing the dry sound. Most reverb plug-ins do now have some EQ at least on-board, but if you want more surgical spectral control — or indeed some distortion, chorusing, or triggered ducking applied just to the effect tail — then it’s much easier to do that when the effect is on a separate track. In more creative electronic styles, this is a big plus point of the send-return approach.
Having said that, there are some occasions where I still insert delay or reverb effects directly into a track, rather than using them in a send-return configuration. Usually this is where I’m using super-short delays or reverbs to adjust the tone of a sound via comb-filtering, rather than trying to set up an effect tail. In that kind of situation, the effect usually has to be heavily tailored to the specific recorded source I’m processing, so it wouldn’t be much use for anything else in the arrangement, and I would want any further processing (or indeed sends to more traditional send-return effects) to be fed with that altered timbre, not the initial dry sound. So in that scenario the send-return configuration makes less sense.
Thursday, June 23, 2016
Wednesday, June 22, 2016
Tuesday, June 21, 2016
Q. Which omni mics are best for small ensemble recordings?
Sound Advice : Miking
Hugh Robjohns
The Sennheiser MKH20 — one of our Technical Editor’s favourite ‘ruler-flat’ omni mics.
I’m looking for extremely detailed and natural omnidirectional microphones for two-channel small ensemble recordings on location. What would be your preference, from my shortlist of: Josephson C617SET, Gefell M221 or Earthworks QTC40? I’m not a fan of DPA 4006As, but how would a DPA 4060 set work? I’m not interested in tube or ribbon mics, but what other suggestions do you have? I usually rig the mics six to eight feet away from the sources, and sometimes closer in church. The problem I face in the US is that demo pairs are not available, and the dealers charge 20 percent restocking if I buy something and subsequently decide to return it! I’m using a Millennia HV32P preamp and a Tascam DA3000 master recorder, and also have a Nagra 7.
Carl Beitler, via email
SOS Technical Editor Hugh Robjohns replies: My personal preference would be for the Microtech Gefell M221s. I reviewed this mic back in SOS June 2013 (http://sosm.ag/jun13gefell), and was greatly impressed. The Josephson C617SET uses the same capsule, of course, and their electronics are fractionally quieter, but the Acoustic Pressure Equalising spheres which are supplied with the Gefell mics give them a significant edge in versatility to my mind.
Earthworks make some very nice, neutral-sounding mics, but they tend to be noisy in comparison with the Gefell (22dBA versus 15dBA) because of the very small capsule size. That’s something that’s necessary to achieve the extended high-frequency bandwidth which Earthworks prioritise, but I didn’t feel that the Gefell lacked anything in the upper regions.
The DPA 4060 microphones are astonishingly good for their size and price, but are inherently slightly compromised on the self-noise front, again, and have a tendency towards brightness that I don’t think you would appreciate. DPA’s d:dicate range, reviewed in last month’s issue, now includes the MMC2006 omni capsule, which essentially contains a back-to-back pair of 4060s internally (with a self-noise advantage). This ‘twin-diaphragm’ technology is presented as a lower cost alternative to the classic MMC4006 capsule, but the MMC2006 is not compatible with the company’s range of APE spheres.
As for other alternatives, I remain a big fan of Sennheiser’s MKH20s, which I think still sound slightly better than the newer MKH8020. I like the ability to switch them from nearfield to diffuse-field equalisation, to suit different applications, and I relish their amazingly low harmonic distortion, ruler-flat frequency response, and very low self-noise.
Monday, June 20, 2016
Saturday, June 18, 2016
Q. How do I ensure the show goes on when a live backing track fails?
Hugh Robjohns
I’m about to take on the task of running live backing tracks for Hacienda Classical live shows, and although I already use Pro Tools for backing tracks (for Peter Hook and the Light) this system doesn’t have a backup facility, or at least not one that runs alongside in sync. I want to run backing tracks on two systems, synchronised to each other. However, if one system fails, I want the second system to keep running regardless of lost sync — is this possible? I would also need it to automatically switch the DI outputs to the second system.
Andrew Poole, via email
SOS Technical Editor Hugh Robjohns replies: Yes, this is certainly possible! The synchronised backup is easily achieved by using a reference master timecode signal. That can either be provided by the master system, or from an external master timecode generator, depending on what kind of system security and control you require. The master and slave systems can be hardware machines or DAWs — or even a combination of both — and you simply need to configure the slave system(s) to continue running should the master timecode disappear, which is a pretty standard timecode-chase option. You’ll also need to configure how the slave machine(s) should behave if the master timecode comes back (ignoring it is probably the safest option). Remember that if you’re using digital systems you’ll also need to make sure they are synchronised to the same independent word-clock master (timecode is a positional reference, not a clocking reference).
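As a rough illustration of that 'carry on if the master timecode disappears' behaviour (often called freewheeling), here's a much-simplified sketch of the chase logic; it isn't the implementation of any particular machine, and real systems re-lock and flywheel far more gracefully:

```python
class ChasingSlave:
    """Minimal sketch: chase incoming master timecode, freewheel on the internal
    clock if it disappears, and ignore the master if it later reappears."""
    def __init__(self):
        self.position = None
        self.freewheeling = False

    def tick(self, master_tc=None):
        """Call once per frame with the incoming timecode, or None if it's lost."""
        if master_tc is None:
            self.freewheeling = True
        if self.position is None:
            self.position = master_tc if master_tc is not None else 0  # initial lock
        elif self.freewheeling:
            self.position += 1            # run on the internal clock, ignore the master
        else:
            self.position = master_tc     # normal chase while the master is healthy
        return self.position

slave = ChasingSlave()
print([slave.tick(tc) for tc in [100, 101, 102, None, None, 200, 201]])
# -> [100, 101, 102, 103, 104, 105, 106]  -- playback never stutters when the master drops
```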
The ‘DI box with automatic switchover’ function is quite a specialised requirement, but there are at least two companies I know of that make suitable units. In most cases, the switchover is triggered by the loss of a continuous tone signal replayed from the master machine and routed to the switch box via a spare channel. Check out the offerings of Radial Engineering (www.radialeng.com/sw8.php), who have both four- and eight-channel switching DI units, or Orchid Electronics (www.orchid-electronics.co.uk/Switching_DI.htm), who make an eight-channel version. Multiple units can be linked together for higher channel counts in both cases.
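The tone-loss switchover principle is simple enough to sketch too. This is only a conceptual model of how such a box might decide to switch, not the actual logic of the Radial or Orchid units; the detection threshold and hold time are invented values:

```python
class ToneLossSwitch:
    """Pass the main rig's outputs while the pilot tone is present; latch over
    to the backup rig once the tone has been missing for a hold period."""
    def __init__(self, threshold=0.1, hold_blocks=3):
        self.threshold = threshold      # invented tone-detection threshold
        self.hold_blocks = hold_blocks  # invented 'debounce' period, in audio blocks
        self.silent_blocks = 0
        self.on_backup = False

    def select(self, tone_level, main_outputs, backup_outputs):
        self.silent_blocks = self.silent_blocks + 1 if tone_level < self.threshold else 0
        if self.silent_blocks >= self.hold_blocks:
            self.on_backup = True       # latch: don't flap back if the tone reappears
        return backup_outputs if self.on_backup else main_outputs

switch = ToneLossSwitch()
for level in [0.8, 0.8, 0.0, 0.0, 0.0, 0.8]:
    print(switch.select(level, "main rig", "backup rig"))
# -> main rig x4, then backup rig, and it stays on backup even when the tone returns
```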
Friday, June 17, 2016
Thursday, June 16, 2016
Q. Does pan placement change if I place my speakers further apart?
Sound Advice : Recording
Hugh Robjohns
I have two sets of monitors. The first pair are 40 inches apart and the second, outside of these, around 55 inches apart. The vocals are usually dead centre in both. Let’s say I have different distinct-sounding hand percussion on extreme left and extreme right, and a saxophone panned 30 percent to the left of my nose.
Don’t get your percentages confused with absolute measurements! When setting your speakers further apart, the placement of a panned source will inevitably change in degrees/distance, but not in terms of the relative distance from the centre to the extreme of the stereo panorama.
When I switch to the second pair of monitors the percussion remains extreme left and right — the image width is wider, as you’d expect, but the vocal is still dead centre. Should the saxophone move left by several degrees or still remain at 30 percent? I want a theoretical explanation so I can better understand what’s happening.
Michael, via email
SOS Technical Editor Hugh Robjohns replies: The maximum image width is obviously determined by the physical separation of the speakers, so switching to the 55-inch set moves the outer edges further out, as you’ve noticed. The whole stereo image has been stretched from the centre outwards in both directions. Imagine an elastic band, with the centre pinned in the middle of your sound stage, and the outer edges fixed to the monitors. If you mark the positions of different sound sources on the band and move the monitors outwards, the elastic band stretches and so too does the spacing between your marked sound sources. So, if the saxophone is panned 30 percent left in the image, then that’s where it will always be. When you switch to the wider speakers ‘30 percent left’ is actually going to be physically further left than it was with the closer speakers.
I’ll assume your listening position is at the apex of an equilateral triangle, with the other two points being at your 40-inch spaced speakers. Rough trigonometry calculations suggest that with the closer speakers the sax will appear roughly 10 degrees left of centre. Switch to the second set and this perceived angle increases to about 14 degrees. But it is still panned 30 percent left within this wider overall image!
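Hugh's rough trigonometry is easy to reproduce. This little sketch assumes the phantom image sits 30 percent of the way from the centre towards one speaker along the line joining them, and that the listening distance stays the same for both pairs (the apex of the equilateral triangle formed with the 40-inch pair); the exact numbers obviously shift with different assumptions:

```python
import math

LISTEN_DISTANCE = 40 * math.sqrt(3) / 2   # ~34.6in: apex of the 40-inch equilateral triangle

def perceived_angle(speaker_spacing_in, pan_fraction):
    """Angle off-centre of a phantom image panned a given fraction of the way
    from the centre towards one speaker (simple straight-line model)."""
    offset = (speaker_spacing_in / 2) * pan_fraction
    return math.degrees(math.atan2(offset, LISTEN_DISTANCE))

print(round(perceived_angle(40, 0.30), 1))  # ~9.8 degrees, 'roughly 10' with the 40-inch pair
print(round(perceived_angle(55, 0.30), 1))  # ~13.4 degrees, in the region of 14 with the 55-inch pair
```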
Tuesday, June 14, 2016
Monday, June 13, 2016
Q. Should monitors be near the rear wall?
Sound Advice : Recording
Hugh Robjohns
There are many good reasons why soffit-mounted speakers, such as this beautiful set of Kinoshitas in South Africa’s BOP Studios, are used. But not all speakers are designed to be placed so close to the wall!
Most of the advice I’ve seen about speaker placement suggests keeping them well away from the walls, but I was recently advised to place my monitor loudspeakers directly against the back wall to give a better sound in the room. Is this valid and, if so, what is the thinking behind this approach?
Jerry Jones, via email
SOS Technical Editor Hugh Robjohns replies: The ideal speaker placement depends on the design of the speaker, the dimensions of the listening room, the efficacy of any acoustic treatment, and the location of the listening position within the room. In most cases, though, the positioning of loudspeakers inherently involves some level of compromise in the overall performance. Sometimes, placing speakers directly against the back wall does indeed provide the best balance but, often, positioning the speakers away from the wall by a small distance might give better results. The only way to know is to try different positions and listen critically!
Let’s consider the loudspeaker issues first. Most loudspeakers are designed to give a tonally balanced sound when placed away from the room boundaries (the so-called ‘free space’ condition), but different manufacturers adopt different ‘ideal’ placement distances. Many monitor speakers incorporate EQ facilities to reduce the bass output if the speaker is placed near room boundaries, because a compact loudspeaker radiates more or less omnidirectionally at low frequencies. The consequence of this is that if a speaker is placed against a back wall, the portion of sound that would have gone away from the listener is bounced instead straight towards them. This is often referred to as a ‘half space’ condition and it produces a 6dB increase in the SPL of low frequencies measured in the room. Place the speaker in the corner of two walls and the area it radiates into is a ‘quarter space’, with a 12dB rise in LF output compared to the ‘free space’ condition.
So placing a speaker near or against the room boundaries changes the spectral balance very significantly, and different manufacturers therefore optimise their designs for use in different placement conditions. Some also provide EQ facilities to adjust the balance to allow use in different conditions. It is therefore vitally important to place or configure the speaker according to the manufacturer’s specifications.
Thinking more about the room now, solid room boundaries effectively act like mirrors to low-frequency sound. If a loudspeaker is placed away from the wall, the direct and reflected LF sound waves travel to the listener over different path lengths, and thus arrive with different phases. The inevitable outcome is a degree of cancellation resulting in one or more deep notches in the LF frequency response, the precise tuning being dependent on the spacing between speaker and boundary wall(s). As the speaker is moved further from the wall, the notch moves lower in frequency, and vice versa.
As a rough rule of thumb, it’s usually best to keep the speaker within 1 metre of the back wall, to push any notches up above about 80Hz and into a region where back-wall reflections can be controlled reasonably well by conventional bass-trapping treatments. The real danger zone is when the speaker is between 1 and 2.5 metres from the back wall, as this can result in substantial notches between about 35 and 80 Hz, which require more complex bass-trapping techniques to resolve. A back-wall spacing of more than 2.5 metres puts the notches below the lowest usable audio frequencies, so it’s not really a problem any more.
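Those rule-of-thumb distances fall straight out of quarter-wavelength arithmetic: the back-wall reflection travels an extra two wall-distances, and the cancellation is deepest where that extra path equals half a wavelength, i.e. at roughly c/(4d). Here's a quick sketch (taking the speed of sound as 343m/s), which also confirms the 6dB-per-boundary figure mentioned earlier:

```python
import math

SPEED_OF_SOUND = 343.0   # m/s at roomish temperatures

def first_notch_hz(distance_to_wall_m):
    """First cancellation notch for a speaker a given distance from a solid back
    wall: extra path = 2d, deepest cancellation where 2d is half a wavelength."""
    return SPEED_OF_SOUND / (4 * distance_to_wall_m)

print(round(first_notch_hz(1.0)))   # ~86Hz: staying within 1m keeps the notch above ~80Hz
print(round(first_notch_hz(2.5)))   # ~34Hz: beyond 2.5m it drops below the usable band

# Boundary loading: each nearby boundary roughly doubles the low-frequency
# pressure at the listener, i.e. 20*log10(2) per boundary.
print(round(20 * math.log10(2), 1))       # ~6dB for 'half space' (one wall)
print(round(2 * 20 * math.log10(2), 1))   # ~12dB for 'quarter space' (a wall-wall corner)
```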
Another consideration is the way in which the speaker’s position stimulates the room’s standing waves. Often, moving the speaker a few centimetres forwards or backwards, side to side, or even up and down, can have a dramatic effect on how it interacts with the room and the resulting consistency of the bass response. Ideally, the distance between the speaker and back wall should be different from that between the speaker and side wall, and also between the speaker and ceiling/floor.
This might suggest that the ideal would be to mount the speaker in the back wall. Providing it can be equalised appropriately, that approach can work well, hence the popularity of ‘soffit-mounted’ speakers in commercial studios. However, soffit-mounting is not practical for most project studios, and it involves a raft of technical challenges that we don’t need to get into here.
One down side of placing the speaker very close to the back wall is the effect it has on stereo imaging and the perceived depth of the soundstage. In general, the further the speaker is from the back wall, the greater the impression of soundstage depth becomes. So there are lots of interacting aspects of the overall in-room speaker performance that have to be balanced — compromises are inevitable.
When we try to optimise speaker positioning in our Studio SOS visits, our normal practice is to listen to familiar music with a varied bass line to assess the balance in the room. We then move the speakers closer or further from the wall (and adjust their angle and height) as necessary to optimise the performance. No two rooms are ever the same, and the only reliable technique is the time-consuming one of moving the speakers in roughly 10cm increments and reassessing the balance, the consistency of the bass, and the soundstage depth and imaging.
Saturday, June 11, 2016
Friday, June 10, 2016
Q. Do I need a better headphone amp?
Sound Advice : Miking
Hugh Robjohns
A high-end headphone amp, such as in this Grace product, is demonstrably superior to those in consumer and ‘prosumer’ devices — and in this case a good many professional ones too — but that doesn’t mean the humble headphone output of your CD player is useless!
When does one need a headphone amp? I have some Shure 1840s, which are great, but I’ve read about headphone amps and want to be sure I am getting the most from the cans. I currently have them fed from the phones output of my Mackie desk — should this be sufficient?
SOS Forum post
SOS Technical Editor Hugh Robjohns replies: Really, this is the same kind of question as “When does one need an expensive mic?” or “When does one need a high-end mic preamp?” In the majority of cases you probably don’t ‘need’ a better headphone amp from any practical standpoint. On the other hand, once you can perceive any beneficial improvements or changed characteristics from using that kind of equipment, you’ll find that it quickly becomes essential!
Like any electro-mechanical device, headphones can be tricky to drive accurately under all conditions. A ‘proper’ headphone amp has been carefully optimised for that role, and is usually a stand-alone device with more elaborate circuitry than the simpler and far more cost-effective headphone circuitry found in most compact mixers and other hardware. These cheaper designs can struggle to supply adequate voltage/current to meet the most demanding excursions of high-quality headphones. This usually results in slightly compromised dynamic range, distortion and sometimes noise-floor performance, as well as possibly curtailed LF response (again, usually because of limited current capability).
I have, and use regularly, headphone amps by Grace, Crookwood and Benchmark, and they’re all superb. For convenience, though, I often also use the headphone outputs of CD players, compact mixers and so on. Can I tell the difference? Yes, I can. Is the difference so significant that I can’t work effectively? No!
Like most things in this business, you can spend a shed-load of money on a headphone amp and you will probably perceive slightly better detail and clarity from your headphones. But whether that ‘investment’ can be justified in an improved standard of mixing or recording, only you can decide. If you spend a lot of your time mixing while monitoring via headphones, then a headphone amp might be a worthwhile investment, but do try before you buy. If you just use headphones as a cross-check after working on speakers, then it might be wiser to invest your cash in better room treatment or monitors first!
Thursday, June 9, 2016
Wednesday, June 8, 2016
Q. What should I use to record my gigs?
Sound Advice : Recording
Sam Inglis
Trying to capture a gig in stereo is risky, but a multi-mic setup based around a computer or phone and audio interface can be clumsy if you’re capturing a number of sources. In this scenario, a portable recorder such as the Zoom R16 (pictured), with its eight mic/line inputs, might be a better option.
I’m an amateur double bass player and eager reader. While playing gigs, I usually record my band with an Apogee Quartet attached to an iPhone, recording four tracks on Harmonicdog’s Multitrack DAW: one track for drums, with an AKG C2000B mic placed between snare and bass drums (and pointing at the space between them), one for the guitar amp, close-miked with an Oktava MK-012-01, a channel for the bass DI (blending a DPA d:vote for bass with a piezo pickup), and another Oktava plugged into the fourth channel, miking the voice monitor (the guitar player and drummer also sing).
Recording from the Quartet straight to my phone is convenient but I struggle with the small number of channels. I was intrigued by your Cowboy Junkies article (http://sosm.ag/classic-tracks-cowboyjunkies) because of the idea of recording gigs binaurally, and hence with the whole band on two channels. That would leave me two spare channels for the main vocals and maybe bass DI, for safety. Would you recommend exploring this further?
I’ve also been looking into the idea of raising the track count by using the Quartet’s ADAT input, yet all ADAT preamps/converters are bulky, rackmount units, which make the whole setup less convenient. I wouldn’t need eight additional channels — four, or maybe just two, would do, but there’s no such thing on the market. Given that I wouldn’t carry an Apogee Ensemble and laptop to my gigs (due to the risk of them being stolen), are there options that would give me a few more channels?
Matteo Rogero, via email
SOS Features Editor Sam Inglis replies: Recording a complete ensemble performance using any stereo technique is difficult — it is, if you like, the highest art of sound recording! As you’ll have learned from the Cowboy Junkies article, it took Peter J Moore many hours to arrange the microphone and the musicians so as to capture an acceptable balance, and in that situation the sound engineer was not part of the band, there was no audience, and the venue had been chosen specifically for the purpose. If you try to achieve the same at a typical gig, it would require a lot of luck to get good results, regardless of whether you’re using a Soundfield mic, a binaural array, or any other technique.
A better option might be to use a stand-alone digital multitrack recorder. I have a Zoom R16, which I sometimes use for gig recording: the quality of the preamps and analogue circuitry is not amazing, but it can record eight tracks at once, it’s simple to use and portable, and it runs off battery power. Oh, and it also has a pair of built-in mics that will give you a stereo capture (though I’ve not always found this very useful in practice). As it records onto SD cards, you can boot it up in card-reader mode when you get home and dump the files onto a computer for mixing. Zoom offer a number of other suitable devices, such as their R24 and their F8, which we reviewed last month (http://sosm.ag/zoom-f8), and other options are available from the likes of Tascam and Boss.
Tuesday, June 7, 2016
Q. Can you solve my foldback problems?
Sound Advice : Mixing
Hugh Robjohns
I’m using a Yamaha EMX 5000-20 for live sound and I seem to be unable to get the foldback working for some sources, even though it works for others. I’m reasonably sure that I am using a pre-fader bus to send to the monitor, using the aux 1 output. With the aux 1 control turned fully up (clockwise), I still seem to get nothing from a microphone or guitar, whereas the keyboard is fine in the same channel with no changes. I can replicate the same problem in several channels too — I’ve tried four so far, all with the same result. Am I missing something obvious? Maybe a different signal level from the mic/guitar? But if that’s the case, why is the FOH end result fine?
If, when using aux controls to create foldback mixes, you find that some sources fold back and others don’t, it’s probably a level issue — and solving your problem requires some methodical sleuth work!
SOS Forum post
SOS Technical Editor Hugh Robjohns replies: This is a very simple desk and there’s not much to get wrong, so working through the signal path logically should resolve the issue. I suspect this is simply a levels issue, probably complicated or confused by the fact that the keyboard has no acoustic presence on stage, whereas the vocals and guitar do. I’d suggest turning off or disconnecting the FOH and foldback speakers and then, using headphones connected to the mixer, analysing the signal levels at various points.
The first thing to do is route your mic, guitar and keyboard to the main stereo outputs. You might need the pad switch selected on channels handling line-level sources like the keyboard and (possibly) guitar (depending on how you have connected them to the desk). Deselect the high-pass filters and zero all of the EQ controls (set the knobs to face 12 o’clock). Start with the aux and effects controls turned down to zero, the Post buttons deselected (up), Pan and Balance controls in the middle (12 o’clock), Channel On buttons selected (with the associated LEDs illuminated) and put the main stereo output fader at its unity-gain mark (the black line at 0dB).
Now, bring up the keyboard fader(s) (with someone playing, obviously!) to the same unity-gain (0dB) mark and adjust the corresponding channel input-gain control(s) to get the signal peaks registering around the 0dB mark on the meters. The channel ‘signal’ LEDs should flash with signal, but the ‘peak’ LEDs shouldn’t, other than on the very loudest transients (and ideally, not at all). Repeat individually for the guitar and mic inputs.
You should now have roughly equal (peak) signal levels at the main stereo outputs for all three sources. Press the Stereo Output master channel’s AFL button above the fader and have a listen on the headphones to confirm that’s the case, fading up each of the sources one at a time to compare levels.
Depending on the nature of the playing/singing and sound-source characteristics, you may need to adjust the channel gain levels slightly to achieve more consistent perceived level balance, but this isn’t about creative mixing, it’s about identifying signal paths, so as long as you can hear each source element clearly and strongly, that’s fine.
Now for each input channel in use (mic, guitar, keyboard), set the aux 1 send level to the black triangle mark at 3 o’clock (the nominal unity-gain position) and — for this test only! — press the Post buttons on all of the input channels you’re using.
Deselect the Stereo Out AFL button, and instead press the AFL button above the aux 1 master output fader, and raise that fader to its unity gain (0dB) position.
Listen on the headphones, and fade each of your input channels up in turn. You should hear each of the corresponding sources at the same levels as they were when you monitored the Stereo Out AFL previously.
Assuming the aux sends now all work individually post-fader, deselect the Post buttons on each of the input channels. You should now have a full mix of all three sources that remains consistent, regardless of input-channel fader position. You can now adjust the relative balance of sources on aux 1 (if necessary) by tweaking the individual channel send controls.
Pull all the faders down, and reconnect the main FOH and foldback speaker outputs. You can now check that the signals are reaching the appropriate destinations at the appropriate levels by carefully raising the channel, aux and master output faders. Finally, it’s important to be aware that you may well have dialled in a lot more gain than previously on the guitar and mic channels, so feedback might start much sooner than before — adjust output levels as appropriate!
Monday, June 6, 2016
Saturday, June 4, 2016
Q. Is it safe to leave a dummy jack plugged into my headphone socket?
Sound Advice : Miking
Hugh Robjohns
If your keyboard has speakers you wish to disable while monitoring to the line outputs, then putting an unconnected jack plug in the headphone socket is a harmless means of muting them.
I have a Casio PX310 digital piano. When jacks are inserted into the (quarter-inch) line out sockets, the onboard speakers are not muted. They are muted when headphones (or a jack) are inserted into either of two 1/8-inch headphone sockets. The line out volume is also controlled by the onboard speaker/headphone volume control, so it’s not an option to just turn it down. So, to mute the onboard speakers, is it safe to just insert an open-circuit stereo jack into the headphone socket, or would it be better (for the onboard amps) to present it with a dummy load? Elsewhere I read that a 100Ω resistor from tip to sleeve and ring to sleeve would do it, but I don’t believe anything unless I hear it from the SOS experts!
SOS Forum post
SOS Technical Editor Hugh Robjohns replies: Yes, a dummy jack plug or adapter will do just fine — and it won’t cause any problems!
Friday, June 3, 2016
Thursday, June 2, 2016
Q. Should insert connections be balanced or unbalanced?
Sound Advice : Maintenance
Hugh Robjohns, Paul White
I’ve read that pro gear should use balanced connections, but I’ve been looking for a mixer recently and find that most have unbalanced insert points. My interface uses balanced inputs and outputs, as does most of my (modest collection of) outboard gear. What gives?
If you’re handy with a soldering iron, a typical Y-cord insert cable can be adapted to connect balanced outboard gear to your mixer’s unbalanced TRS insert point, as indicated on the diagram.
Bruce Milner, via email
SOS Editor In Chief Paul White replies: Many mixers offer unbalanced insert points as it’s convenient to use a TRS jack to handle both the send and return signal. It also saves on balancing and unbalancing circuitry inside the mixer. In most cases, the unbalanced cable runs connecting the external gear are short enough that interference won’t be a problem, especially as the signals are at line level, although there’s the possibility of increased hum due to ground loops. These occur when there are multiple ground paths between pieces of equipment: the mains power grounds provide one path, and the screens of the unbalanced connecting cables provide another.
In reality, this is not usually a significant problem, though if you were to actually measure the hum level using sensitive test equipment, it is likely that it would be higher in an unbalanced system even if still inaudible. In the event that ground-loop hum is evident, some improvement can be achieved by making up special cables to connect the insert points to your patchbay or external equipment.
SOS Technical Editor Hugh Robjohns adds: Paul mentions making some ‘special cables’. It’s worth mentioning that this sort of DIY job needn’t be daunting! With that in mind, here’s a diagram (see below) which should help anyone wanting to adapt a TRS to 2xTS or 2xXLR insert cable to allow balanced gear to be used with an unbalanced single-socket insert point.
Wednesday, June 1, 2016
The Write Stuff
Article Preview :: Cubase Tips & Techniques
Technique : Cubase Notes
With a comprehensive automation toolset, Cubase Pro lets you fix some things in your mix you probably should have got right beforehand!
Cubase now offers powerful control over parameter automation, including a number of useful editing options.
John Walden
While creating a rough mix with a first pass at automation of levels/pan is generally quite a straightforward affair in Cubase, refining that rough ‘n’ ready automation data to take you towards what can be an all-too-elusive ‘finished mix’ typically requires much more skill and effort. Fortunately, Cubase’s automation system boasts a number of tools that can help.
Before considering the various automation editing options, there’s one thing that’s well worth noting: for some parameters (for example, volume/level), automation data can actually be recorded at track level (in Automation Lanes) or at event level (within a specific MIDI or audio event). You can tie yourself in automation knots if you are not careful, so, however you choose to manage your automation data, a consistent approach is a good idea. Personally, I like to stick with just the Automation Lane system, but your mileage may vary.
Published in SOS March 2016