Welcome to No Limit Sound Productions

Company Founded
2005
Overview

Our services include Sound Engineering, Audio Post-Production, System Upgrades and Equipment Consulting.
Mission
Our mission is to provide excellent quality and service to our customers. We offer customized service.

Tuesday, December 30, 2014

Jack Hotop Korg Krome Demonstration at the 2013 Winter NAMM Show

Implanting Awareness

Rachel Van Besouw & The Interactive Music Awareness Programme


Cochlear implants help the profoundly deaf experience sound. But can users overcome the limitations of the technology to understand music?


Cochlear implants can sometimes help people overcome profound hearing loss, but experiences such as listening to music can become very confusing.

Pete Thomas






Cochlear implants have been a considerable success story in helping profoundly deaf people to hear speech. However, it has always been assumed that they would be of little use for the appreciation, performance, composition or production of music. At the world-renowned Institute of Sound and Vibration Research at the University of Southampton, scientists are working to overcome this perception. Dr Rachel van Besouw has teamed up with Professor David Nicholls and Dr Ben Oliver from the Department of Music to create the Interactive Music Awareness Programme (IMAP). With the help of AHRC funding, this is part of a research project called Compositions for Cochlear Implantees, which is proving that, with the right training, breakthroughs can be made in the area of music appreciation for cochlear implant users.

The experience of hearing music through a cochlear implant is different from the limitations imposed by, for example, age-related hearing loss. To make a visual analogy, the latter might be considered similar to fuzzy or blurred vision, while cochlear implants present a 'pixelated' or low-resolution image.

What Cochlear Implants Can Do



"The main limitation of a cochlear implant,” says Rachel van Besouw, "is that with up to 22 electrodes we have to try to replicate the job of upwards of 15,000 hair cells connected to about 30,000 auditory nerve fibres in the cochlea, which help us get very fine frequency resolution.”



This makes it sound as though a cochlear implant is attempting to replace a 15,000-band graphic equaliser with a 22-band equaliser, but it's a lot more complicated than that. A visual analogy gives a very rough idea of what a cochlear implant sounds like to the user: typical age-related hearing loss is akin to a blurred, out-of-focus image, whereas a cochlear implant presents a clearer but pixelated image.

Rachel van Besouw of the University of Southampton's Interactive Music Awareness Programme.

"The cochlea is 'tonotopically organised'; that is, the auditory nerve fibres towards the tip (the 'apex') of the cochlea spiral respond best to low frequencies, and those near the start (the 'base') of the spiral respond best to the high frequencies. In a healthy cochlea, pitch discrimination is best in the low-frequency region, around 500Hz, and in that region we can discriminate differences in frequency as small as 1Hz. The cochlear implant makes use of this tonotopic organisation by sending low-frequency information to the most deeply inserted electrodes and high-frequency information to the more shallowly inserted electrodes. However, unlike a graphic equaliser, a cochlear implant does not convey what we call the 'temporal fine structure' in each band, which we rely on for perceiving pitch accurately; it only provides the amplitude envelope information.



"The input frequency range of an implant is also typically between 100Hz or so to 8kHz, depending on the make, and so an implant user won't hear anything above or below. Part of the problem is because the bass area is deep inside the cochlea, and we can only physically insert the electrodes about two thirds of the way in. When an implant patient has what we call their first 'tuning session', about a month after their operation, they often complain that everything sounds shifted, high, electronic, tinny or like a Dalek.”



Cochlear implants also present a vastly smaller dynamic range than a typical human ear. "The dynamic range between sounds that the normal ear can only just perceive and sounds that are too painful is about 120dB. The input dynamic range of a cochlear implant is between about 40 and 80 dB, and levels below a certain threshold get cut off. But the electrical dynamic range of a cochlear implant is even smaller, and so sounds are further compressed to a range of about 10 or 20 dB.
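The two-stage squeeze described here — a roughly 120dB acoustic range reduced to a 40-80dB input window, then compressed further into 10-20dB of electrical range — can be sketched as a simple level mapping. The floor and range values below are illustrative, not taken from any real device.

```python
def acoustic_to_electric_db(level_db, in_floor=25.0, in_range=60.0, out_range=15.0):
    """Compress an acoustic level (dB SPL) into the implant's much
    narrower electrical dynamic range. Levels below the input floor are
    cut off; levels above the top of the input window are clipped.
    All parameter values here are illustrative."""
    top = in_floor + in_range
    if level_db < in_floor:
        return None  # below threshold: no stimulation at all
    clipped = min(level_db, top)
    # linear-in-dB mapping of the input window onto the electrical range
    return (clipped - in_floor) / in_range * out_range
```

With these made-up numbers, a 60dB swing in the acoustic world collapses into just 15dB of electrical range, which is the sense in which "sounds are further compressed".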


"With a cochlear implant, things get complicated, because if you change, for example, the loudness of a sound, this can also change the number of electrodes responding and the pattern of stimulation, which can sound like a pitch change. So loudness can affect pitch, pitch changes can also affect the perceived timbre of a sound, and changes in timbre can affect pitch and loudness... these three things affect each other.



"Implant users rely a lot on what we call place pitch cues. As a musical note changes pitch, the stimulation pattern across the electrodes changes, but not always in a way that you think would be consistent with such a change in pitch. Sometimes implant users experience 'pitch reversals' where say, an increase in pitch is perceived as a decrease.



"The amplitude modulation of the electrical pulses, particularly for the apical (low-frequency) electrodes can also give what we call rate pitch cues. It's a weak cue, but it appears that some implant users can make use of it to discriminate differences in pitch that are much smaller than one might expect, given the few electrodes and their spacing in the cochlea.”



"Some people have described listening to music through an implant as like listening to a piano being played with a boxing glove on: the rhythmic information is good, but the frequency resolution is poor. Sound separation is also very difficult. In a normal hearing situation, we use harmonics to group sounds, but an implant user does not get fine enough frequency resolution with the electrodes, so picking individual instruments out of a mix is a real challenge.



"So our big limitations are frequency resolution and dynamic range. In addition to the limited number of electrodes to do the job of stimulating those thousands of auditory nerve fibres (the 'electro-neural' bottleneck), there may well be 'dead regions' in the cochlea which won't respond to the stimulation, so some of those electrodes may not be doing anything useful. We would love to have more electrodes if we could control the pattern of current spread in the cochlea, but at the moment we can't control that, so having more would not necessarily help, unless we can come up with another way of stimulating the auditory nerve fibres.”

Learning To Listen



This is where the Interactive Music Awareness Programme (IMAP) comes in. Its underlying principle is that with the right guidance, cochlear implant users can develop an understanding and appreciation of music. "Many implant users, especially those with a memory of what music sounded like prior to losing their hearing, are often disappointed when they first hear music, and this can result in them actively avoiding music,” explains Rachel van Besouw. "Compared to speech, they also have fewer opportunities to develop their music perception abilities. It might sound strange, but many implant users find that they have to re-learn what a piano or a violin sounds like.

Once implanted, each of the electrodes within a cochlear implant needs to be 'tuned'. The result of this process is a 'map'; it is often helpful for users to have different maps for different listening situations.

"We are developing a rehabilitation programme which is about training. We have found that with training it is possible to improve a cochlear implant user's ability to recognise instruments. Because cochlear implant users receive little, if any, harmonic information, they rely heavily on an instrument's attack/decay/sustain/release (ADSR) envelope for recognition. Training may help them to use these more subtle cues. However, cochlear implant users still find it especially difficult to distinguish instruments of the same family.



"The Interactive Music Awareness Programme is a training programme that we have developed together with cochlear implant users. We have taken this approach because we believe that in order to meet the needs and desires of users, it is essential to actively involve them in the development process. From consultations and workshops, we found that cochlear implant users wanted a resource that would not only enable them to develop their music-perception abilities, but also help them re-engage with music. They wanted help to discover new kinds of music that they might be able to appreciate, giving them the motivation to continue to practise listening to music.”



As part of the process of engaging implant users in music, the team have developed a way to give listeners more control over what they're hearing. "One of the tools we've come up with in the IMAP is called the N-Machine, named after Professor David Nicholls in the Department of Music, which enables the user to come up with a mix of a piece of music that suits them. This involves getting hold of individual stems from artists. The importance of this tool is that an implant user can work out what they can hear clearly and what they can't. They can start to bring other things into the mix when they feel more confident and start to discriminate between sounds. We've also done workshops in which they do a mix of live musicians, and typically we find they start with a mix of just one or two instruments. It helps them to make sense of a song that they may have been familiar with before losing their hearing. They are mixing to suit their implant. One of the pieces that we have stems for is Ravel's Bolero, and that's gone down very well. It's very useful as an orchestral track because it's highly rhythmic and repeats a lot, with different instruments playing the same refrain. The repetition helps them to 'put the puzzle together'.



"So far, what we have used comes from generous upcoming artists such as Robin Grey, Blue Swerver and Madelaine Hart, as well as music from the Research Assistant composer on the project, Dr Ben Oliver. When we extend this, of course it would be wonderful to get stems from some more established artists. Philip Selway from Radiohead is one artist who has kindly donated some stems for the research phase of the project, and we're hoping to get more established artists on board. We'd be over the moon to get more stems from artists that we can use, not only for implant users but hearing-impaired listeners in general. One of the important aspects of the training programme is about using 'real world' material to help implant users explore and engage with music again, and helping them know what's out there. One of the frequent comments we get is, 'I want to get into music again, I just don't know where to look and what's best for my situation.' They might have been listening to quite complex orchestral music which may no longer be suitable, and they might not have had the courage or confidence to try another genre.



"Another tool that we have in the IMAP is called the 'Environmental Rhythm Machine', which uses everyday sounds around the house: teaspoons, microwave beeps, car door slams and so on. Users can create a simple drum-machine type track with samples of everyday sounds, and then build these up and layer them. It's a gradual way of helping cochlear implant users re-engage with music through the creative use of sound effects. What's nice about that is that they don't need any prior knowledge of music and don't have to worry about not knowing the difference between a cello and a violin, which can be intimidating. We also have another piece of software in the IMAP that allows the user to listen to a melody and change the instrument. They can then find out which instrument is best for hearing pitch changes. Some of the software tools in the IMAP allow the user to pitch-shift stems, so if there is a bass guitar playing a riff that they can't hear, they could try shifting it into a different range to see if this helps.”



SOS readers will, of course, be very familiar with what we've just discussed, i.e. basic mixing, but for many people, this is a brand new way to experience music. It reflects the fact that the experiences of cochlear implant users seem to vary a lot, to the extent that there is no 'one size fits all' mix that will suit everyone: each person may need something different. "Yes, it's highly individual,” agrees Rachel van Besouw. "And the situation where users can mix and manipulate the instruments encourages active listening. Just as you don't get fit by watching sport, it is less likely that you will improve your music perception abilities with incidental, passive exposure to music.”

The Rate Of Progress



Results so far have been encouraging, but it's early days. I asked Rachel to speculate about what might be possible in the future. Will we ever see the day when a profoundly deaf person with cochlear implants becomes a sound engineer or producer?



"At the moment, the limitations of current devices makes it seem unlikely, but then 30 years ago many believed that cochlear implants would merely aid lip reading — and now, many implant users are able to use the telephone and perceive speech without any visual cues at all. Currently, we have implant users at the South of England Cochlear Implant Centre who play musical instruments and are doing very well. It certainly seems easier for young implant users who haven't experienced normal hearing before to accept and engage in music. Many have music lessons at school and learn to play instruments along with their peers. Cochlear implant users are still are exceeding our expectations, and I am optimistic about future developments in the technology. I think when the breakthrough occurs it will be at that electro-neural interface. It might be a different kind of electrode, different placement of the array or a form of stimulation other than current. Or it could be in the form of drugs that encourage auditory hair cell and nerve fibre regeneration, but I think we are a long way off being able to generate hair cells where we want them and get them connected to the nerve fibres that we want them to be connected to in humans.”



How about the many musicians and engineers who suffer gradual hearing loss as a result of exposure to loud music? As Rachel explains, current cochlear implant technology is not really designed to address this sort of problem. "The first step would be to look for opportunities to reduce the level and duration of their exposure to noise and to protect their hearing to prevent further hearing loss and tinnitus. If they are concerned about their hearing, they should see their GP for a referral to an audiologist who can assess their hearing to determine the type and degree of hearing loss, recommend interventions that may help and provide advice. One option might be a hearing aid. Cochlear implants are only recommended for people with severe-to-profound bilateral hearing loss, who get little benefit from hearing aids.



"Unfortunately, there is still a stigma attached to hearing aids and many hearing-aid users still experience prejudice. Whilst a hearing aid makes a person's hearing impairment visible, it does not tell you about the magnitude of that person's impairment and whether or not their impairment would affect their role as a producer or engineer. Would you rather have a sound engineer who has addressed their hearing loss with the latest hearing aid technology, or a sound engineer who has not addressed their hearing loss?”



The Interactive Music Awareness Programme has a web site at www.southampton.ac.uk/mfg/current_projects/trial.html .

What Is A Cochlear Implant?




In short, a cochlear implant is a surgically implanted electronic 'hearing' device, which uses an electrode array to stimulate nerve fibres in the cochlea to provide the auditory signals which would normally be transmitted by the hair cells.



1. Sound is captured by a microphone.



2. A sound processor converts the signal from analogue to digital information.



3. Digital signals are sent to the implant from the headpiece.



4. The implant converts the digitally coded sound into electrical impulses, and sends them along the electrode array, which is positioned in the cochlea (inner ear).



5. The implant's electrodes stimulate the cochlea's hearing nerve, which then sends the impulses to the brain where they are interpreted as sound.
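The five stages above can be caricatured as a tiny pipeline. Every operation here is a stand-in (16-bit quantisation, rectification); the sketch shows the flow of the chain, not any real implant's coding strategy.

```python
def implant_chain(acoustic_samples):
    """Toy walk-through of the five stages of a cochlear implant,
    with placeholder arithmetic at each stage."""
    # 1. Microphone capture: we start from acoustic samples in [-1, 1].
    # 2. Sound processor: analogue -> digital (16-bit quantisation here).
    digital = [round(s * 32767) for s in acoustic_samples]
    # 3. Digitally coded sound is sent from the headpiece to the implant.
    transmitted = digital
    # 4. The implant converts the codes into electrical impulse strengths.
    impulses = [abs(d) / 32767 for d in transmitted]
    # 5. Electrodes stimulate the hearing nerve with those impulses.
    return impulses
```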



Cochlear implants are fine-tuned for the individual recipient, as Rachel van Besouw explains: "The clinician will tune each of the electrodes, setting the lower threshold and maximum or 'comfort' level for each one. This process is called creating a 'map'. You can also change the frequency-to-electrode allocation. For example, in this case (image below), the filter centre frequency of the most deeply inserted electrode is 333Hz and that of the highest-frequency electrode is 6665Hz.



"A cochlear implant user will undergo many tuning sessions, more frequently to begin with, and it's possible for them to have different maps that they can switch between, like presets, using a remote control. The maps might have a different dynamic range, microphone directional characteristic or noise-reduction algorithm for listening in certain environments.”



One limitation on current technology is that low frequencies cannot easily be conveyed, partly because the electrode array cannot be inserted right up to the apical end of the cochlea, where the nerve fibres respond best to bass frequencies. "Bass frequencies will be limited by the input bandwidth of the implant, by the insertion depth of the electrode array in the cochlea and by the ability of the implant user to make use of the weak rate pitch cues from the amplitude envelope of the stimulation on the apical electrodes. If an implant user has some residual hearing in their other ear, they may be able to perceive (and benefit from) some additional low-frequency information.”    

Monday, December 29, 2014

Q. Are wow and flutter key to that analogue tape sound?

I have come to the conclusion that wow and flutter are a lot more important in the sound of tape and analogue recordings than they are usually given credit for. Most of the discussion about tape seems to concentrate on tape compression and the effects of transformers in the signal path, for example, and the majority of plug-in treatments designed to make recordings warmer focus on this. I don't hear of many people applying wow and flutter plug-ins, or waffling on about the right type of capstan emulator. Recently I was re-reading one of those pieces Roger Nichols wrote for SOS a few years back, where he mentions that someone had invented a de-wow-and-flutter system that tracked variations in the pitch of the bias signal to correct for wow and flutter, and he said the result sounded 'just like digital'.

I recently did a couple of projects where I more or less did the same thing, albeit hugely more labour-intensively: I transferred some old four-track cassette recordings to my PC. The recordings used a drum machine, which I still own, so I also made a clean new digital recording of the drum machine part. But, of course, due to wow and flutter, the old four-track recordings were out of sync with the drum machine on a couple of bars, so I ended up chopping up the four-track capture bar-by-bar, and time-stretching each bar so that the waveform of the drum machine recording on tape lined up exactly with the new, clean digital version. By the time I'd finished, the four-track did indeed sound quite different in character to what it had before. I think Nichols was right. I wonder what opinions the SOS team might have about the importance of wow and flutter on getting 'that sound'?


Though wow and flutter may once have been phenomena that we were used to and could therefore ignore, their absence in modern recording means that this is no longer the case. Celemony's Capstan is an incredibly effective tool for removing these unwanted effects, and it leaves few artifacts.

Via SOS web site



SOS Technical Editor Hugh Robjohns replies: I agree that the subtle (and sometimes not so subtle) speed instability of tape is an important subconscious factor in the tape sound. Any time-modulation process, including wow and flutter, creates additional frequency components, and I think the subliminal presence of these on all analogue recordings is sometimes missed from digital recordings. However, I suspect it is actually the presence of the far more complex harmonics produced by 'scrape flutter' that is the most significant element, rather than the very low and cyclical frequency modulations caused by wow and flutter. Added to which, I find wow and flutter generally quite objectionable, especially in music with sustained tones, like piano and organ recordings.



However, what you are describing here is not actually wow and flutter. You're describing speed 'drift', which is an absolute difference between the record and replay speeds. It's not unusual for two devices to run at slightly different speeds, even in digital circles. Two separate CD players might run with sample rates of 44101Hz and 44099Hz, for example, or two analogue tape machines at 19.1cm/s and 18.9cm/s. If you start the two machines at the same time with identical recordings, they will drift in time relative to one another, just as you found with your four-track cassette — although in that case I suspect the problem was caused either by poor speed control or physical tape stretch.
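The cumulative effect of that kind of speed drift is easy to put numbers on. Assuming the two players really do run at 44101Hz and 44099Hz, the sketch below estimates how far apart they end up.

```python
def drift_ms(rate_a_hz, rate_b_hz, minutes):
    """Accumulated timing drift, in milliseconds, between two players
    that should be running at the same nominal sample rate."""
    seconds = minutes * 60
    # Both players hold the same number of samples of audio; the time
    # taken to play N samples differs by N * |1/rate_a - 1/rate_b|.
    nominal = (rate_a_hz + rate_b_hz) / 2.0
    samples = nominal * seconds
    return abs(samples / rate_a_hz - samples / rate_b_hz) * 1000.0
```

Two players differing by 2Hz in 44100Hz drift apart by a little under 3ms per minute: imperceptible as a pitch change, but more than enough to pull a doubled track audibly out of sync over the length of a song.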



Wow is a low-frequency cyclical speed variation, which is very common on vinyl records if the centre hole is punched slightly off-centre, or if the disc is badly warped. Flutter is a much faster-frequency version of the same thing, typically caused by a worn tape-machine capstan or a lumpy pinch-roller. Scrape flutter is a higher-frequency effect again, typically caused by the inherent 'stiction' or vibration of tape against the heads as it is dragged past.



Wow and flutter, being cyclical phenomena, don't usually result in a change in the average replay (or record) speed because any short-term speeding up is balanced completely by the same amount of slowing down as the cycle completes.
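That zero-average property is easy to verify numerically. Modelling wow as a small sinusoidal modulation of the nominal speed (the 0.5Hz rate and 0.2 percent depth below are illustrative figures, not measurements of any machine):

```python
import math

def instantaneous_speed(t, wow_rate_hz=0.5, depth=0.002):
    """Tape speed as a fraction of nominal under sinusoidal wow."""
    return 1.0 + depth * math.sin(2.0 * math.pi * wow_rate_hz * t)

def mean_speed_over_cycle(wow_rate_hz=0.5, steps=10000):
    """Average speed across one complete wow cycle: the speeding-up
    half of the cycle exactly cancels the slowing-down half."""
    period = 1.0 / wow_rate_hz
    return sum(instantaneous_speed(i * period / steps, wow_rate_hz)
               for i in range(steps)) / steps
```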



I'm not at all surprised that your heavily edited and time-stretched 'fixed' version of the electronic drum track sounds different from the straight digital recording, specifically because you performed so much processing on the individual sections. However, that 'fixed' version will also sound very different from the drum machine's direct analogue outputs. You're not 'fixing wow and flutter' but actually correcting for speed drift or tape stretch by time-adjusting the original material in short sections, which is naturally messing with the sonic character of the drum beats in short, unrelated sections.
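What the letter-writer did, in effect, was compute a per-bar time-stretch ratio between the drifted capture and the clean reference. A minimal sketch of that bookkeeping (bar boundaries in seconds; the values are invented):

```python
def stretch_ratios(drifted_bars, reference_bars):
    """Per-bar time-stretch ratios needed to line a drifted capture
    up with a clean reference recording. Each ratio is the reference
    bar length divided by the corresponding drifted bar length."""
    ratios = []
    for i in range(len(reference_bars) - 1):
        ref_len = reference_bars[i + 1] - reference_bars[i]
        drift_len = drifted_bars[i + 1] - drifted_bars[i]
        ratios.append(ref_len / drift_len)
    return ratios
```

For pure speed drift the ratios come out nearly constant; for tape stretch they wander from bar to bar, which is one way to tell the two problems apart.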



Returning to conventional wow and flutter, though, after nearly 30 years of 'digital stability' most of us have been completely weaned off the sound of wow and flutter, and our ears have become very good once again at spotting these grossly unnatural phenomena that we were once so happy to ignore. Last year I reviewed Celemony's Capstan software, which is designed to fix both wow and flutter and speed-drift issues, and it does so extremely well and without artifacts!  


Korg at WNAMM 2013 - Tom Coster, Steve Smith, Victor Wooten, Frank Gambale Live Performance Teaser

Korg at Winter NAMM 2013 - Tom Coster, Steve Smith, Victor Wooten Signing

Q. Can you recommend a 73-key stage piano?

I seem to be the only person in the world who wants an unfussy, weighted stage piano, with — at most — 73 keys. I have little money, so can't afford to have a piano for home use and a piano for stage use, and I have no space to store them even if I could afford it. I don't play the 'dusty' ends much, so saving space by not having 88 notes suits me fine. I also have feeble arms and a small car, so it'd be great to keep the weight down too. My ideal would basically be the Casio Privia P3 with two octaves missing, as it has a great sound and lovely action. Is there really nothing out there — current or discontinued — that could do all I want? I can probably stretch to around £1500 if I had to. What might you suggest?




Lucy Weston via email



SOS contributor Robin Bigwood replies: There are actually quite a number of 73- or 76-note keyboards out there that could fit the bill. As always, you have to decide what your priorities are. For example, hammer-action keyboards are usually very heavy, so the keyboard with the action that suits you most might also be the least portable. There's also a choice to be made between a high-quality but limited piano-oriented sound set, or the 'jack of all trades' nature of a synth workstation.

Buying a stage piano will always require some compromise, whether that means having less than you want or, in some cases, more. The excellent Nord Electro 4, for example, is easily portable, but only has semi-weighted keys, and is not cheap. However, the Korg M50, though technically a synth workstation, rather than a stage piano, is a snip at £850 but, again, only has semi-weighted keys. It's no surprise that fully weighted keys and portability do not go together!

I think the keyboard most worthy of your consideration is the Nord Electro. Version 4 of this well-respected and undeniably vibey keyboard was launched fairly recently, but its v3 predecessor seems to live on in Nord's range. The 73-note version comes in at around £1400, has semi-weighted keys and weighs less than 10kg. The version with a hammer action, surprisingly, weighs only 1kg more, but it'll set you back a cool £1800. Still, these are brilliant gigging instruments that are well worth the money. They can be loaded with all sounds from the Nord piano and wave libraries, and sport top-class rock organ emulations too.


Challenging Nord in this same market sector are a couple of serious players' instruments by Japanese manufacturers. The Korg SV-1-73 is £1299, offers 36 electric and acoustic piano presets, and has a decent Korg RH3 hammer action. The alternative offered by Roland is the 76-note VR700 V-Combo at about £1200. You get great organs and pianos, along with strings, synths and pads. And, with a lighter 'waterfall' keyboard, it's not too heavy. It is rather long, though, because of those extra keys and a 'bender' section to the left of the keyboard.



Next up, a couple of 76-note stage keyboard all-rounders. Cheapest of all (£599) is the Kurzweil SP4-7. There's no doubting the pedigree, but this workmanlike piano could prove a bit basic for really serious use. More flexible, though unashamedly oriented towards the synth world (the clue's in the name) is the Roland Juno Stage for £950. I spent some time with one a little while back and enjoyed playing it. Like the V-Combo it's quite long, but it has some nice live-leaning features such as audio file playback (for backing tracks and so on) from USB sticks, a click output for drummers, and a phantom-powered mic input that's routed through the internal effects.



Finally we get to those synth workstations. The Korg M50-73, around £850, is a svelte 9kg and could get you safely in and out of many gigging jobs. But there's also the new Korg Krome 73 for £1000 or so, and that boasts a flagship Steinway piano sound, plus good e-pianos too: definitely one to audition. I reviewed the Kurzweil PC3LE7 for SOS a while back, and, while I thought it was a real workhorse, its pianos (in particular) are a little way off state-of-the-art. I'm sure the Yamaha S70XS at around £1600 would be nice, too, but it's a hammer-action whopper and a solid 20kg.



In essence, though, these are all rewarding, useful instruments, so choosing between them is a nice problem to have. Best of luck!  

Saturday, December 27, 2014

Korg at Winter NAMM 2013 - Tom Coster, Steve Smith, Victor Wooten, Frank Gambale Live Performance

Q. Can you explain digital clocking?

Phrases like 'digital clocking', 'word clock' and 'interface jitter' are bandied around a lot in the pages of Sound On Sound. I'm not that much of a newbie, but I have to admit to being completely in the dark about this! Could you put me out of my misery and explain it to me?

Interface 'jitter', which results from clock-data degradation, can cause your waveform to be constructed with amplitude errors, seen in the diagram. These could produce noise and distortion. It's for this reason that people sometimes use a dedicated master clock, which all other devices are 'slaved' to.



James Coxon, via email



SOS Technical Editor Hugh Robjohns replies: Digital audio is represented by a series of samples, each one denoting the amplitude of the audio waveform at a specific point in time. The digital clocking signal — known as a 'sample clock' or, more usually, a 'word clock' — defines those points in time.



When digital audio is being transferred between equipment, the receiving device needs to know when each new sample is due to arrive, and it needs to receive a word clock to do that. Most interface formats, such as AES3, S/PDIF and ADAT, carry an embedded word-clock signal within the digital data, and usually that's sufficient to allow the receiving device to 'slave' to the source device and interpret the data correctly.



Unfortunately, that embedded clock data can be degraded by the physical properties of the connecting cable, resulting in 'interface jitter', which leads to instability in the retrieved clocking information. If this jittery clock is used to construct the waveform — as it often is in simple D-A and A-D converters — it will result in amplitude errors that could potentially produce unwanted noise and distortion.
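As a rough illustration of why jitter matters most on fast-moving, high-frequency material, the sketch below samples a sine wave with an ideal clock and again with a jittered one, then measures the amplitude error. All of the figures (the 10kHz tone and the 2ns RMS jitter) are illustrative assumptions, not measurements from any particular converter:

```python
import math
import random

random.seed(42)

FS = 48_000          # sample rate in Hz
F_TONE = 10_000      # a high-frequency tone: fast slew rate, so jitter hurts most
JITTER_RMS = 2e-9    # 2 ns RMS clock jitter (illustrative figure)

def sample(times):
    """Sample a full-scale sine at the given instants."""
    return [math.sin(2 * math.pi * F_TONE * t) for t in times]

ideal_times = [n / FS for n in range(FS // 100)]
jittered_times = [t + random.gauss(0.0, JITTER_RMS) for t in ideal_times]

ideal = sample(ideal_times)
jittered = sample(jittered_times)

# The error signal is the amplitude difference caused purely by timing error.
errors = [a - b for a, b in zip(ideal, jittered)]
rms_error = math.sqrt(sum(e * e for e in errors) / len(errors))
print(f"RMS amplitude error from {JITTER_RMS * 1e9:.0f} ns jitter: {rms_error:.2e}")
```

The error scales with both the jitter and the signal frequency, which is why jitter shows up as noise and distortion that worsens towards the top of the spectrum.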



For this reason, the better converters go to great lengths to avoid the effects of interface jitter, using a variety of bespoke re-clocking and jitter-reduction systems. However, when digital audio is passed between two digital devices — from a CD player to a DAW, say — the audio isn't actually reconstructed at all. The devices are just passing and receiving one sample value after another and, provided the numbers themselves are transferred accurately, the timing isn't critical at all. In that all-digital context, interface jitter is totally irrelevant: jitter only matters when audio is being converted to or from the digital and analogue domains.



Where an embedded clock isn't available, or you want to synchronise the sample clocks of several devices together (as you must if you want to be able to mix digital signals from multiple sources), the master device's word clock must be distributed to all the slave devices, and those devices specifically configured to synchronise themselves to that incoming master clock.



An orchestra can only have one conductor if you want everyone to play in time together and, in the same way, a digital system can only have one master clock device. Everything else must slave to that clock. The master device is typically the main A-D converter in most systems, which often means the computer's audio interface, but in large and complex systems it might be a dedicated master clock device instead.



The word clock can be distributed to equipment in a variety of forms, depending on the available connectivity, but the basic format is a simple word-clock signal, which is a square wave running at the sample rate. It is traditionally carried on a 75Ω video cable equipped with BNC connectors. It can also be passed as an embedded clock on an AES3 or S/PDIF cable (often known as 'Digital Black' or the AES11 format), and in audio-video installations a video 'black and burst' signal is sometimes used.


Friday, December 26, 2014

George Duke at the Korg USA NAMM 2013 Booth

Q. What can I do to make my mixes sound more like commercial records?

I'm producing my own music, but I want it to sound as professional as possible. I'm sure that there must be certain tools that home studio owners can use to help them match their mixes and recordings with commercial ones. Do you have any advice for me on the best way to go?


A reference CD compilation of commercial tracks whose production qualities you admire can be a useful tool for helping to ensure high standards in your own mixes.




Greg Dillon, via email



SOS contributor Tom Flint replies: One of the best things you can do is create your own reference compilation so that you have something with which to compare your own work and production decisions. All of us, of course, can think of songs or pieces of music that we love because they sound a certain way. Making a reference compilation is really just a matter of collecting some of those tracks together and putting them onto a format that can be played on a variety of music systems. At this point, I still think the CD-R is the best choice of medium.



In general, the bigger the variety of tracks, the better, although if you were concentrating on producing a very particular genre of music it might be worth creating another dedicated compilation comprising tracks just from within that genre. There may also be music that is not particularly your cup of tea but still has admirable production qualities, and this is worth including too, as long as you can bear to listen to it! The most important thing is to select tracks that have something about them that seems to work particularly well, and to make sure that each one brings something to the compilation that the others do not. There would be no point, for example, in including endless variations of a particularly pleasing type of bass sound; one or two examples should suffice.



The first thing a well-considered compilation will reveal is that there really is no such thing as the perfect sound. Some productions seem to pack every frequency with noise, while others are relatively sparse. There are countless other contrasts too and I am continually amazed at how much productions can vary, and yet still sound professional, polished and satisfying.



Ideally, tracks should be taken from CDs, tapes and vinyl rather than MP3s, for quality reasons, but be sure to respect the music owners' copyrights by only creating the reference CD-R from your own purchases and not distributing the end result to others.



Ethics, good practice and legalities aside, it is then a matter of using the reference material properly. Get to know your chosen tracks intimately by playing them everywhere you can. In the car, for instance, the body of some productions is lost under the drone of the engine, while others seem to fare quite well. It soon becomes apparent which kind of sounds are important, and which are merely 'fairy dust', only appreciable to those with superior hi-fi systems and ideal listening environments. Not every production sounds great in every situation, although there are usually one or two gems that seem to sound fantastic whatever the limitations of the listening environment or playback system.



Of course, the compilation can be a constantly evolving thing. Some favourite tracks might turn out to be of little use as reference material and should be replaced with others that have very specific characteristics. It might even be worth creating a separate 'bad production' compilation, just as a reminder of what you want to avoid doing to your own music.



Take the time to run the tracks through a narrow-band graphic EQ with spectrum analyser and then alter the level of the bands to see which ones have the most effect. This will help explain why certain mixes work, and where the important energy is centred.
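If you don't have a hardware analyser to hand, a few lines of numpy can do the same job, summing FFT energy into octave bands to show where a mix's energy is centred. This is a minimal sketch: the synthesized bass-heavy test signal stands in for samples decoded from a reference track, and the band edges are an assumption loosely based on common graphic-EQ centre frequencies:

```python
import numpy as np

FS = 44_100
# Stand-in for a reference track: a bass-heavy test signal (an assumption --
# in practice you would load samples decoded from your reference CD).
t = np.arange(FS) / FS
signal = np.sin(2 * np.pi * 60 * t) + 0.3 * np.sin(2 * np.pi * 1000 * t)

spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(len(signal), 1 / FS)

# Octave bands spanning the audio range.
edges = [22, 44, 88, 177, 355, 710, 1420, 2840, 5680, 11360, 22050]
for lo, hi in zip(edges[:-1], edges[1:]):
    band = spectrum[(freqs >= lo) & (freqs < hi)]
    energy_db = 10 * np.log10(band.sum() + 1e-12)
    print(f"{lo:>6}-{hi:<6} Hz: {energy_db:6.1f} dB")
```

Running several reference tracks through the same analysis makes it easy to compare where each one concentrates its energy, band by band.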



One of the situations in which the reference CD is of great use is in the mastering studio. Mastering engineers are often keen to hear examples of what you want and can bear those examples in mind while processing a mix.



It's also a good idea to take your CD of reference material to other studios when you'll be making important decisions based on the output of unfamiliar gear. If you know how your tracks usually sound, something that is too prominent or lacking will be immediately obvious.



Most of all, though, the reference CD will keep you on the straight and narrow, particularly if you've been working on something for a long time. In such circumstances, the reference tracks should act like a user reset button for your ears.



For more on compiling a reference CD, see the SOS articles at /sos/sep03/articles/testcd.htm and /sos/sep08/articles/referencecd.htm.  

Get In Tune with Korg's Pitchblack Portable Polyphonic Tuner

Thursday, December 25, 2014

Q. Can I use the front and rear sides of a Blumlein array simultaneously?

Most of the recording I do involves tracking several musicians playing together in a room. I'd like to use a stereo pair to capture the overall picture, as well as close miking, but often the musicians arrange themselves in such a way that X-Y or A-B rigs won't work. I've been wondering about using a Blumlein-crossed figure-of-eight pair placed between the drummer and the rest of the group, in such a way that the front of the array captures the drum kit and the rear captures the other musicians. In other words, is Blumlein strictly restricted to the 90-degree acceptance angle in front, or is it OK to use the 90-degree space behind the array too? And if so, should I reverse the polarity of any other mics on that side?

You actually have little choice over whether to use the rear of your mics in a Blumlein array, as the mics will always capture ambient noise to the rear of the setup. This can be quite useful in certain circumstances, such as radio drama, for example, in which the setup allows the actors to be positioned less rigidly but still be picked up by the mics.






Simon Earle, via email



SOS Technical Editor Hugh Robjohns replies: The short answer is yes, it's perfectly OK to use the rear pick-up region, and yes, you might need to reverse the polarity of spot mics covering sources on the rear of the Blumlein array.



The slightly longer answer is that you actually have no choice in the matter; the rear side of a Blumlein array is captured anyway, so you might as well make use of it. In an orchestral recording, for example, it will be capturing the room ambience and audience (which will make it sound rather more open than might be expected). In radio drama, both sides of a Blumlein array are often used to great effect, as the technique allows the actors to face each other across the mic for good eye-contact, while still being able to move freely within their own 'stereo space'.



In your situation, it's perfectly acceptable to arrange the musicians to use both front and rear 90-degree stereo-recording angles, using relative distances from the mics to help achieve the appropriate balance. In radio drama, the studio floor is often marked up with tape to identify the edges of the 90-degree pickup areas, with additional marks to show the desired positions for each performer, so they don't wander away and upset the optimum balance.



There are a few things to beware of. Firstly, don't let any real sound sources move around to the sides of the Blumlein pair, because they will then be out of phase in the stereo image. Secondly, choose your figure-of-eight mics carefully, as many are designed with strong tonal differences between front and back. That may be quite useful in your situation, but can cause significant issues in others. Finally, if you're planning to close-mic sources to supplement their contributions to the main pair balance, sources on the rear of the mic will be captured with an inverted polarity relative to those on the front, as you say.



Consequently, you will probably need to flip the polarity of those close mics in the mix to avoid phase cancellation issues, depending, to a degree, on the distance between the close mics and Blumlein pair, the nature of the source, and the level of the spot-mic contribution. I'd start with the rear-side close mics flipped in polarity, and check each one as you build the mix, to see what works best.  
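The polarity arithmetic is easy to sketch. In this deliberately simplified model (equal levels at both mics and no time-of-flight delay between them, assumptions made purely for illustration), summing the rear-side Blumlein pickup with an unflipped close mic cancels completely, while flipping the close mic's polarity restores the signal:

```python
import math

N = 1000
tone = [math.sin(2 * math.pi * 220 * n / 48_000) for n in range(N)]

# A source behind a figure-of-eight capsule arrives polarity-inverted
# relative to the same source on a close mic.
blumlein_rear = [-s for s in tone]
close_mic = tone

def rms(x):
    return math.sqrt(sum(v * v for v in x) / len(x))

naive_mix = [a + b for a, b in zip(blumlein_rear, close_mic)]
flipped_mix = [a - b for a, b in zip(blumlein_rear, close_mic)]  # close mic polarity reversed

print(f"without flip: {rms(naive_mix):.3f}  with flip: {rms(flipped_mix):.3f}")
```

In a real room the close mic is nearer the source and arrives earlier and louder, so the cancellation is partial comb filtering rather than total silence, but the same polarity logic applies.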

Get In Tune with Korg's Pitchblack Portable Polyphonic Tuner

Q. Which should I check first: monitors or FOH?

I'm responsible for live sound at a lot of small shows where there isn't the budget for a separate monitor desk or engineer. In this situation, I've seen engineers handle things in different ways. Some concentrate on getting the sound right on stage first before bringing up the front-of-house speakers. Some make sure the sound out front is right and only then turn up the aux sends on any instruments that the band are struggling to hear. Others go through, instrument by instrument and set levels for both FOH and monitors at the same time. What are the pros and cons of each approach, and which would you recommend?

Sound engineers can differ over whether to set up stage monitoring or the front-of-house sound first. Our contributor likes to get a rough FOH mix done and then move onto the wedges, leaving fine-tuning until the band are on stage.






Lee Entwistle, via email



SOS contributor Jon Burton replies: This is a very common situation and one I've come across many times. When I'm doing monitors from the same desk as the house sound, I always try to use a Y-split cable on the lead vocals. If there are enough channels, this means you can split the signal across two channels, one dedicated to the monitors and one to the FOH. This has the advantage that you can set and leave the monitor channel optimised for the stage sound, whilst having an FOH channel that you can equalise and compress during the show, knowing that it is not adversely affecting the sound on stage.

Even if I can't do this, I always create a rough front-of-house sound first. I set the gain for both channels, then set the EQ flat on the desk, but with the high-pass filter in, if there is one. I will then concentrate on checking all the wedges on stage. If there are any equalisers on the monitor sends, I usually flatten these. I then check each monitor in turn, speaking normally through the mic, using the same desk channel and microphone for each monitor. By doing this I can check that each speaker is working correctly. If they are not, which is not unusual, I'll try to fix them, checking connections and drivers, for example, and, failing that, move the best-sounding ones into the most crucial positions!



If there are graphic EQs, I try not to do too much, as I prefer them to look like smiley faces rather than cross-sections of the Himalayas. If you hack away with a graphic, you'll usually start causing more problems than you're solving. If there are no outboard equalisers, I'll EQ the channel, but only as a last resort.



Having got all the wedges working and sounding OK, I'll then get all the vocal microphones up in their respective wedges. Once I'm happy that vocals sound good on stage, I'll start soundchecking the other channels.



I always leave the vocal microphones open but dipped a bit during the soundcheck, as they will be on during the show and will contribute a lot to the overall sound coming from the stage, adding high-end spill to the drums and other instruments.



After I have checked the vocals, I like to continue with drums, getting the drummer to play a simple beat on kick, snare and hi-hat. I prefer to do all three at the same time, as this way the drummer tends to play more naturally, like he or she would in a show, rather than repetitively hitting a drum, which is monotonous for all — and unrepresentative. After checking all the instruments one by one, I usually leave the FOH master faders at half volume while the band play a song. During this time, I'll work on the monitors for them, maybe adding keys or kick drum. I always dip the FOH, otherwise the sound of the loud PA in an empty room will drown the stage. If you do leave the PA system at a higher level, you enter an upward spiral of volume where everybody is competing to hear. Once all the channels sound good and I have a rough balance, I usually stop there; the time for fine-tuning will be when the room is full and the first chord is struck!



Before the performance, time is always against you, and I prefer to get the stage sound right as fast as possible, usually before the band arrive. Soundchecking monitors is always easier on a quiet stage without musicians tuning and checking their instruments. Checking FOH is a lot easier, as you can just don a pair of headphones and check your channels, returns and inserts. So I would always prioritise and make sure the monitors are sorted before checking the band.



For more on this, see the article 'Effective Soundcheck' in SOS July 2012 (/sos/jul12/articles/soundchecking.htm), which includes advice from some top live engineers!

Wednesday, December 24, 2014

Q. Why do waveforms sometimes look lop-sided?

When I look at the waveform of a vocal recording, it looks lop-sided, with the waveform going further above the zero-crossing point than it goes below it. It sounds fine, though, so what's happening here?




This is a naturally occurring asymmetrical waveform, built from the linear sum of a cosine fundamental and its first four harmonics (created in Adobe Audition).

In this image we can see that as the waveform amplitude decays, it settles on the zero line (red). There is no DC offset.

This simple sine wave has a DC offset, which raises the centre of the sine wave well above the zero line. As the waveform amplitude decays, it remains well above the zero line.

This is the typical phase response of a 'phase rotator', which can be used to impose symmetry on asymmetrical waveforms by adjusting the phase relationships between fundamentals and harmonics. An increasing negative phase shift is applied progressively as the signal frequency increases, with the highest frequencies being shifted a full 360 degrees relative to the lowest frequencies.



Bruno D'Cunha, via email



SOS Technical Editor Hugh Robjohns replies: This kind of asymmetrical waveform is quite natural and normal, and is particularly common on recordings of speech and vocals, brass instruments, and sometimes also closely miked strings. A lot of percussive sounds are also strikingly asymmetrical, of course.



In the 'BC' era (Before Computers) we didn't look at waveforms, we just listened to them, and this kind of asymmetry was generally inaudible and didn't bother us, although some people are sensitive to absolute polarity, and can actually tell if an asymmetrical waveform is inverted. Waveform asymmetry has been known about for a very long time, and in the few areas where it can be an issue (such as in broadcast processing), technology has long been in place to deal with it. However, since the prevalence of the DAW and its ubiquitous waveform display, a lot of people have become aware of it and asked the same question.



This asymmetry is due mainly to two things, the first being the relative phase relationships between the fundamental and different harmonic components in a harmonically complex signal. In combining different frequency signals with differing phase relationships, the result is often a distinctly asymmetrical waveform, and that waveform asymmetry often changes and evolves over time, too. That's just what happens when complex related signals are superimposed.
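That superposition effect is easy to reproduce. The sketch below sums a cosine fundamental and its first four harmonics (as in the caption above) and prints the positive and negative peaks for two phase sets; the 'shifted' phases are arbitrary values chosen purely for illustration:

```python
import math

N = 4096
F0 = 4  # cycles of the fundamental across the buffer

def waveform(phases):
    """Sum a fundamental and its first four harmonics (amplitudes 1/k) with given phases."""
    out = []
    for n in range(N):
        t = n / N
        s = sum((1.0 / k) * math.cos(2 * math.pi * k * F0 * t + phases[k - 1])
                for k in range(1, 6))
        out.append(s)
    return out

cosine_phases = [0.0] * 5  # all-cosine sum: strongly asymmetrical waveform
shifted_phases = [0.0, math.pi / 2, math.pi, 3 * math.pi / 2, 0.0]  # arbitrary phase set

for name, ph in (("cosine ", cosine_phases), ("shifted", shifted_phases)):
    w = waveform(ph)
    print(f"{name}: peak +{max(w):.2f} / {min(w):.2f}")
```

Note that both versions contain exactly the same frequencies at exactly the same amplitudes; only the phase relationships differ, yet the peak shapes are quite different.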



The other element involved in this is that many acoustic sources inherently have a 'positive air pressure bias' because of the way the sound is generated. To talk or sing, we have to breathe out, and to play a trumpet, we have to blow air through the tubing. So, in these examples, there is inherently more energy available for the compression side of the sound wave than there is for the rarefaction side, and that can also contribute to an asymmetrical waveform.



Confusingly — and erroneously — this natural waveform asymmetry is often attributed to a 'DC offset', but that's not the case at all. A DC offset is a specific fault condition where the varying AC audio signal voltage is offset by a constant DC voltage, and the 'tell-tale' is that, although the waveform might look asymmetrical, a decaying signal waveform settles away (offset) from the centre zero-line.



DC offsets are virtually unheard-of these days, but can occur in hardware analogue electronics under fault conditions. In the digital world, very early multi-bit A-D converters sometimes suffered a problem in the quantiser that essentially resulted in encoding a fixed-level shift or offset onto the audio sample values — the digital equivalent of an analogue DC offset.



However, a DC offset can be very easily corrected by passing the audio through a high-pass filter tuned to a low frequency (typically 10Hz or lower). It is important to correct DC offsets when they do occur, because editing between an audio clip with a DC offset and one without results in a loud thump or plop at the edit point, which is not good!
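A DC blocker of this kind can be sketched in a few lines. The one-pole coefficient below (a = 0.999, giving a corner frequency of roughly 7.6Hz at a 48kHz sample rate) is an illustrative choice, not a prescription:

```python
import math

FS = 48_000

def dc_block(x, a=0.999):
    """One-pole DC blocker: a high-pass at roughly FS*(1-a)/(2*pi) Hz (~7.6 Hz here)."""
    y = []
    prev_x = prev_y = 0.0
    for s in x:
        out = s - prev_x + a * prev_y
        prev_x, prev_y = s, out
        y.append(out)
    return y

# A 100 Hz sine riding on a constant +0.5 DC offset (the fault condition).
x = [0.5 + math.sin(2 * math.pi * 100 * n / FS) for n in range(FS)]
y = dc_block(x)

mean_in = sum(x) / len(x)
mean_out = sum(y[FS // 2:]) / (len(y) - FS // 2)  # skip the filter's settling time
print(f"mean in: {mean_in:.3f}, mean out after settling: {mean_out:.4f}")
```

The 100Hz audio passes essentially untouched while the constant offset is stripped away, which is exactly what you want before editing between clips.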



In contrast, natural waveform asymmetry cannot be 'corrected' with a high-pass filter, and a rather more complicated solution is required called a 'phase rotator'. Generally, there is no need to 'correct' a naturally asymmetrical signal, but occasionally the asymmetry can restrict how much the signal can be amplified because the stronger half of the waveform will reach the clip level before the weaker side. By using a phase rotator process to alter the harmonic phase relationships, a more balanced symmetry can be established, allowing slightly more gain to be applied before both sides reach the clipping level at the same amplitude. Asymmetrical waveforms can also sometimes confuse the side-chain level-detection circuitry (or algorithms) of some compressors, resulting in less effective compression than might be expected.
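The building block of a phase rotator is the first-order allpass filter, which leaves every frequency's magnitude untouched while shifting its phase by a frequency-dependent amount. The sketch below (the coefficient and the number of stages are arbitrary illustrative choices, not taken from any real broadcast processor) runs an asymmetrical cosine-plus-harmonics waveform through a small allpass cascade and prints the peaks and RMS level before and after:

```python
import math

def allpass(x, a):
    """First-order allpass y[n] = a*x[n] + x[n-1] - a*y[n-1]: unity gain at all frequencies."""
    y, px, py = [], 0.0, 0.0
    for s in x:
        out = a * s + px - a * py
        px, py = s, out
        y.append(out)
    return y

N, F0 = 48_000, 100
x = [sum(math.cos(2 * math.pi * k * F0 * n / N) / k for k in range(1, 6))
     for n in range(N)]

y = x
for _ in range(4):  # a small cascade of rotator stages (a = 0.6 is arbitrary)
    y = allpass(y, 0.6)

def rms(v):
    return math.sqrt(sum(s * s for s in v) / len(v))

print(f"in : peak +{max(x):.2f}/{min(x):.2f}, RMS {rms(x):.3f}")
print(f"out: peak +{max(y):.2f}/{min(y):.2f}, RMS {rms(y):.3f}")
```

The RMS level (and hence the perceived loudness) is unchanged, while the peak shape is redistributed, which is precisely the property a phase rotator exploits to claw back headroom.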


Tuesday, December 23, 2014

Q. Is there a better balanced-to-unbalanced cabling solution?

The line inputs on my computer interface are all balanced on TRS sockets, but the outputs of my hardware synths are all unbalanced on mono TS sockets. I've been connecting them with ordinary unbalanced guitar leads up until now, but some of them suffer from what I think is ground-loop hum, and I wondered if there was a better way. I thought about using DI boxes, but I don't have enough mic-level inputs.




Unless you're handy with a soldering iron — and have a lot of patience! — SOS's custom-made pseudo-balanced cables could be the best solution for connecting unbalanced equipment to balanced inputs.
Peter Bradley, via email



SOS Technical Editor Hugh Robjohns replies: DI boxes would certainly cure the ground-loop hum problem, because they generally employ a transformer to balance the signal, with a ground-lift switch to isolate the grounds between the synth and interface. However, it seems a bit silly taking a line-level signal from a synth and knocking it down to mic level, only to re-amplify it again inside the interface, and as you don't have enough mic inputs we can discount that option anyway! A better option would be to use line-level transformer isolation boxes. These use a transformer again to balance the signal and isolate the source and destination grounds but, like DI boxes, the good ones are quite expensive.



Thankfully, there is a much cheaper and more convenient alternative, which takes advantage of the fact that a balanced input is also a 'differential' input. A differential input looks for a signal applied between the 'hot' and 'cold' sides of the connection, and unlike an unbalanced connection, the cable screen (which is grounded) plays no part in transferring the wanted audio — it's only there to trap unwanted external interference. We can use this differential input idea to our advantage by wiring a cable in a slightly non-standard way, to trick the balanced input into accepting an unbalanced signal, while also avoiding a ground loop.



The non-standard wiring is actually very simple. The signal side of the unbalanced output (the TS plug's tip) is connected to the 'hot' side of the balanced input (the TRS plug's tip), while the 'trick' part is to connect the ground side of the unbalanced output (the TS plug's sleeve) to the 'cold' side of the balanced input (the TRS plug's ring). In that way the balanced input 'sees' the wanted signal between its hot and cold inputs, as it should, but the unbalanced output's ground isn't connected directly to the balanced input's ground any more, so there can't be a ground loop!



Technically, the balanced input loses its ability to reject electrostatic (RF) interference, because the impedances to ground from each input terminal are now different (unbalanced). However, with line-level signals connected with relatively short cables of under five metres or so, RF interference is unlikely to be a problem anyway, and in practice I've employed variations of this kind of 'bodge' interfacing for decades without any problems. Incidentally, this kind of 'bodge' interface is typically called a 'pseudo-balanced' connection.



The quickest, easiest and 'dirtiest' DIY approach would be to cut off the TS plugs from the interface end of your existing unbalanced cables, and solder on new TRS plugs for the balanced input. The signal core wire should be reconnected to the TRS tip terminal, and the screen to the ring terminal, leaving the TRS sleeve unconnected and isolated.



Although that solution will work, it has virtually no protection against RF interference, and a better solution can be obtained if you start with balanced cables and modify the unbalanced end instead. In this case, the hot signal wire is connected to the unbalanced tip, the cold signal wire to the unbalanced sleeve, and the balanced cable screen is left isolated and disconnected. In this way the cable screen will provide a useful degree of RF protection, although it isn't as good as it could be...



The optimum solution is to connect the screen at the unbalanced end too, because that maximises the RF screening. However, a direct connection will reinstate the ground loop, so that's not a good idea. Instead, the cable screen should be connected at the unbalanced end via a simple circuit that maintains a relatively high impedance to mains and audio frequencies to prevent the ground-loop hum, but a much lower impedance to RF to maintain effective RF screening. This simple circuit is nothing more than a 100Ω resistor in parallel with a 10nF capacitor, although these values are not particularly critical.
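The arithmetic behind that network is easy to check. With the values quoted (100Ω in parallel with 10nF), the impedance stays close to 100Ω right across the audio band, but collapses to under 2Ω at radio frequencies:

```python
import math

R = 100.0   # ohms
C = 10e-9   # farads (10 nF)

def z_parallel(f):
    """Magnitude of a resistor in parallel with a capacitor at frequency f (Hz)."""
    zc = complex(0, -1 / (2 * math.pi * f * C))  # capacitor impedance
    return abs((R * zc) / (R + zc))

for f in (50, 1_000, 20_000, 1e6, 10e6, 100e6):
    print(f"{f / 1e3:>10.1f} kHz: |Z| = {z_parallel(f):7.2f} ohms")
```

At 50Hz (mains hum) the network looks like 100Ω, enough to break the ground loop, while at 10MHz it looks like about 1.6Ω, effectively grounding the screen for RF.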



The practical problem with this approach lies in trying to squeeze a resistor and capacitor into the very limited amount of space inside a standard TS plug without everything shorting out. It's not impossible if you have decent soldering skills, small components and some patience, but for those who don't, help is at hand in the form of our very own custom-made SOS pseudo-balanced cables. These are available from the SOS shop to UK customers at £19.99 (or £16.99 to SOS subscribers). They come with either an XLR or a TRS connector at the balanced end. They are made to our own specifications and to very high standards by Pirahna Cables using Neutrik connectors and Pirahna Ultraflex cable. I've been using some recently to avoid the frustration of making up some new pseudo-balanced cables myself, and they work superbly well.


Korg SP-280 Digital Piano - Acoustic and Electric Piano Performance

Q. Parallel Universe

I thought Hugh Robjohns' Parallel Compression feature in the February 2013 issue (read it at /sos/feb13/articles/latest-squeeze.htm) was excellent and informative. However, I'd like to question the benefit of adding multiple identical parallel compressors. Let's assume, to avoid getting bogged down in details, that we're talking about a plug-in compressor, such that two instances of it produce bit-for-bit identical output if given bit-for-bit identical inputs.



For example, if I wanted to parallel compress a signal such that the quiet parts are amplified by 12dB, I'd set up three sends (according to picture 5 in the article), and feed them identical inputs from the dry channel. In this case, each of the compressor channels produces an identical output.



The final signal, then, assuming all channels are at unity gain, is simply dry + parallel 1 + parallel 2 + parallel 3, right? But since the compressors are identical and fed with identical input signals, parallel 1 = parallel 2 = parallel 3. Therefore, the final signal is just dry + 3 * parallel 1. But this is the same as if I'd just set up one channel of parallel compression and raised its gain by 9.5dB.



A similar logic leads to the same conclusion (using a different gain) with any number of identical parallel compressors and their relative gains.
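Timo's equivalence is simple to verify numerically: summing n identical compressor channels multiplies the compressed signal's voltage by n, which is the same as applying 20·log10(n) dB of make-up gain to a single channel:

```python
import math

def equivalent_gain_db(n):
    """Make-up gain on ONE parallel compressor that matches summing n identical ones."""
    return 20 * math.log10(n)

for n in (3, 5, 7):
    print(f"{n} identical compressors  ==  one at +{equivalent_gain_db(n):.1f} dB")
```

For three channels this gives the 9.5dB figure quoted in the letter, and for five and seven channels roughly 14dB and 17dB respectively.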



The point here is that I don't see the advantage of using multiple identical compressors over just raising the gain of the single compressed channel above unity. Am I missing something? In case the compressors are different, there can, of course, be a change in the tone, so that is a different matter.



Timo, via SOS web site



SOS Technical Editor Hugh Robjohns replies: It's taken a while to revisit this, but I have now reconstructed the setup I used for the original feature article and then augmented it by using a single parallel compressor with varying levels of make-up gain.



Having thought about this over the intervening period, I came to the conclusion that Timo was entirely correct, and in fact it is obvious that, when using identical parallel compressors with identical settings, the result of summing their outputs is exactly the same as boosting the output of one. It was a 'Doh!' moment, really. Sorry about that!



I have now recreated the original test setup using a SADiE DAW and its default compressor plug-ins, and re-measured the responses on an AP test system via AES3 connections with none, one, three, five and seven compressors, all running with thresholds of -40dBFS, ratios of 50:1 and 0dB of make-up gain.



Those are the five green lines on the plot shown, and are exactly the same as the plot in the feature article.



The dotted red lines are obtained from a single parallel compressor running with varying levels of make-up gain. The two lowest dotted red lines are with 3 and 6 dB of make-up gain. A setting of 9.5dB of make-up gain produced a trace that sits directly on top of that from the three-compressor setup. The next dotted line is 12dB of make-up gain, with the 14dB trace sitting on top of that from the five-compressor setup. The remaining three traces are for 15, 17 and 18 dB of make-up gain, with the 17dB line sitting on top of the seven-compressor trace.



I hope you find the plot and information useful, and I'm very grateful to Timo for bringing this to my attention.


Monday, December 22, 2014

Happy Holidays!! From us at No Limit Sound Productions

Merry Christmas to all of you and the Happiest of New Years!!
 
Music by  
Jordan

http://www.cdbaby.com/jordan13


Watch or listen to "Solace" on YouTube



Need a last-minute gift? Give the gift of "Solace" by Jordan to your loved one. Download the album today!
(Offers good thru 12-31-2014)


http://www.cdbaby.com/jordan132

Still need motivation? 
Jordan charted #1 in instrumental music on internet radio. 
Grab his latest full album, "In Motion", to add excitement to your listening experience.
(Offers good thru 12-31-2014)


www.cdbaby.com/jordan133

Are you a true Jordan fan?
Download his 20th-anniversary special-edition single at a great low price!
(Offers good thru 12-31-2014)



Take a sneak peek at the upcoming album by Jordan

Korg KP3+: Grain Shift/Looper Examples

Korg All Access: Jem Godfrey

Q. Is my stage laptop causing PA system noise?

I use a laptop live for sequencing and soft synths, but I can hear an annoying whining sound through the PA, which definitely isn't our vocalist. The USB interface I have works fine on my desktop studio computer, so what's going on here and how can I fix it?



If PA whine caused by a laptop power supply isn't eliminated by running the laptop from battery power, connecting a two-channel transformer DI box between your audio interface's stereo output and the PA system could do the trick.
Matt Calder, via email






SOS Editor In Chief Paul White replies: There are a couple of possible answers to this problem, one of which is that your computer's switch-mode power supply is breaking through onto the USB ground, with the result that a digital background noise is being added to your audio. This sound is quite different from analogue hum or hiss: there's a definite whining quality to the noise, sometimes with a pulsing modulation added to it. An easy way to check whether the power supply is causing the problem is to run the computer from battery power alone, with the PSU unplugged, and see if the problem goes away. If it does, your best solution is to run the computer from its PSU right up to the point when the gig starts, then switch to battery power. You can always plug in again at half time to recharge if you're worried about battery life.



If the problem is still there when you're running on battery power, a two-channel transformer DI box between your audio interface's stereo output and your sound system may help, as that will provide complete ground isolation. Even so, this isn't a guaranteed fix, as some audio interfaces seem to be more prone to USB whine than others, usually because of the way their internal grounding is handled, and with some models we've found that nothing really helps! If this turns out to be the case, you'll need to look for a new USB interface. When you do, take your computer with you to the shop and insist on trying it before you buy. Fortunately, there are many affordable two-channel USB audio interfaces out there that will do the job perfectly well.  


Korg All Access: Brett Tuggle and Kronos

Saturday, December 20, 2014

Q. What exactly is comb filtering?

I've read many articles in which you mention phase and comb filtering but I'm not really sure what these terms mean or how they affect my mixes. Can you explain them without being too technical, please?



Comb filtering creates peaks and troughs in frequency response, and is caused when signals that are identical but have phase differences — such as may result from multi-miking a drum kit — are summed. An undesirably coloured sound can result. The same effect can be harnessed deliberately to create flanging effects.
Len Fairfield, via email



SOS Editor In Chief Paul White replies: When discussing the phase of electrical signals, such as the outputs from microphones, we tend to start out by describing the relationship between two sine-wave signals of the same frequency and the consequences of any timing offset between them. Where both signals have the same timing, such that their peaks and troughs coincide exactly, they are said to be in phase, and the voltages of the two waveforms will add together. Where both signals are of the same amplitude, the signal voltage will double or increase by 6dB. If, however, they arrive at different times, we say that there is a difference in phase, the most extreme case being when the peak of one signal coincides exactly with the trough of the other (assuming, again, the same amplitude), causing them to cancel each other out completely. Where the amplitudes are different, the degree of cancellation will be smaller, but it will still be present to some extent.
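The addition and cancellation described above are easy to demonstrate numerically. Here's a sketch in plain Python; the 440Hz frequency and 48kHz sample rate are arbitrary choices for illustration:

```python
import math

# Two equal-amplitude sine waves summed sample by sample:
# in phase the peaks add (+6dB); at 180 degrees they cancel completely.
fs, f = 48000, 440.0
peak_in, peak_out = 0.0, 0.0
for n in range(fs):  # one second of samples
    x = 2 * math.pi * f * n / fs
    peak_in = max(peak_in, abs(math.sin(x) + math.sin(x)))            # 0-degree offset
    peak_out = max(peak_out, abs(math.sin(x) + math.sin(x + math.pi)))  # 180-degree offset

print(round(20 * math.log10(peak_in), 1))  # +6.0 dB relative to one wave alone
print(peak_out < 1e-9)                     # True: complete cancellation
```

The doubled amplitude corresponds to the 6dB increase mentioned above, and the 180-degree case cancels to (floating-point) silence.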



At points between these two extremes, the combination of the two waves will exhibit different degrees of addition or cancellation. Phase is measured in degrees; a whole waveform cycle is expressed as 360 degrees, and a 180-degree phase shift marks the point of maximum cancellation if that waveform is added to one with zero phase shift. When the waveforms are 1.5 cycles apart, they will also cancel, as this again brings the peaks of one waveform into coincidence with the troughs of the other. This happens again at time differences equivalent to 2.5 cycles, 3.5 cycles, and so on. Similarly, spacings of 1.0, 2.0, 3.0 cycles, and so on, cause addition as the peaks become coincident.



Note that although the 'phase' button on a mic preamp or mixing console inverts the signal — and so causes cancellation if the signal is summed with a non-inverted version of the same signal — polarity is not the same thing as phase, and the button really should have a different name! However, the 'phase' button can be used to help resolve some phase-related problems.



The simplistic explanation of phase given so far describes what happens with sine waves, but typical music waveforms comprise a complex blend of frequencies. If we examine the same scenario, in which two versions of a musical signal are summed with a slight delay, some frequencies will add, while others will cancel. A frequency-response plot would show a sequence of peaks and dips extending up the audio spectrum, their position depending on the time difference between the two waveforms. That's how a flanger works: a delayed version of a signal is added to a non-delayed version of itself, deliberately to provoke this radical filtering effect, which, because of the appearance of its response curve, is affectionately known as comb filtering. Varying the time delay makes the comb filter sweep through its frequency range, picking out different harmonics as it moves.
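For the mathematically inclined, the comb response can be written down directly: summing a signal with a copy delayed by tau seconds gives a gain of 2·|cos(pi·f·tau)| at frequency f, so notches fall at odd multiples of 1/(2·tau). A quick sketch (the 1ms delay is an arbitrary example):

```python
import math

# Comb filter: adding a copy of a signal delayed by tau seconds gives a
# magnitude response of |1 + e^(-j*2*pi*f*tau)| = 2*|cos(pi*f*tau)|.
# Notches (complete cancellation) fall at odd multiples of 1/(2*tau).
def comb_gain(f_hz, tau_s):
    return abs(2 * math.cos(math.pi * f_hz * tau_s))

tau = 0.001  # 1 ms delay
print(round(comb_gain(500, tau), 3))   # 0.0 -> first notch at 500 Hz
print(round(comb_gain(1000, tau), 3))  # 2.0 -> first peak (+6dB) at 1 kHz
print(round(comb_gain(1500, tau), 3))  # 0.0 -> next notch at 1.5 kHz
```

Sweeping tau moves all the notches together, which is exactly what a flanger does.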



A less severe form of comb filtering occurs when the outputs from two microphones set up at different distances from a sound source are combined — a situation familiar to anyone who has miked up a drum kit, for example. Because the more distant mic receives less level than the close mic, the depth of the filtering isn't as pronounced as in our flanger example, but it can still compromise the overall sound. That's why some engineers take great care to adjust the track delays in their DAWs to ensure that the waveforms from all the mics line up precisely. When layering drum or bass sounds, it's particularly important to ensure that the first waveform peak of each is aligned and that both peaks are positive or both negative. If they go in different directions, the low frequencies will be very obviously affected, resulting in a less punchy sound.  
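As a rough guide to the track-delay adjustment mentioned above, a distance difference between mics converts to a sample offset via the speed of sound. A sketch, assuming roughly 343 m/s at room temperature and an illustrative 48kHz session:

```python
# Convert a mic-distance difference into the sample delay you would nudge
# a DAW track by. Speed of sound taken as ~343 m/s at room temperature
# (an assumption; it varies with temperature).
def alignment_delay_samples(distance_m, sample_rate=48000, speed_of_sound=343.0):
    return round(distance_m / speed_of_sound * sample_rate)

# An overhead mic 1m further from the snare than the close mic:
print(alignment_delay_samples(1.0))  # 140 samples (about 2.9 ms) at 48kHz
```

Delaying the close mic's track by that amount lines its waveform up with the overhead's.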


Korg All Access: Omar Edwards (Musical Director for Jay-Z, Rihanna, The Weeknd, and More)

Friday, December 19, 2014

Q. USB, Firewire or Thunderbolt?

I'm about to buy a new audio interface and was wondering whether to opt for USB 2, USB 3 or Firewire, or whether I should splash out on something Thunderbolt compatible? I don't often need to record more than eight channels at a time. Do you know which is the better option and which is most likely to have a long life before the formats change, as they invariably do?




Darren Ashby, via email



SOS Editor In Chief Paul White replies: After speaking with various interface designers, it seems that both USB 2 and Firewire 400/800 are equally capable of handling in excess of 16 channels of simultaneous audio (which, of course, would be well over the top for your current needs), while USB 3 is considerably faster than Firewire and can handle a huge channel count. However, USB 3 audio interfaces are not yet widely available, and the only model I know of to date comes from RME, who use field-programmable gate arrays (FPGAs) to create, in effect, their own USB 3 equivalent.
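As a back-of-envelope illustration of why even USB 2 copes with those channel counts, raw audio data rates can be compared with nominal bus speeds. The 50-percent efficiency figure here is my own assumption to account crudely for protocol overhead; real-world limits depend on drivers and isochronous scheduling:

```python
# Back-of-envelope channel counts: raw audio bandwidth per channel versus
# nominal bus speed. The 0.5 efficiency factor is an assumed allowance for
# protocol overhead; actual figures depend on drivers and scheduling.
def max_channels(bus_mbit_s, bit_depth=24, sample_rate_hz=96000, efficiency=0.5):
    per_channel = bit_depth * sample_rate_hz  # bits per second per channel
    return int(bus_mbit_s * 1e6 * efficiency // per_channel)

for name, speed in [("USB 2", 480), ("FireWire 400", 400),
                    ("FireWire 800", 800), ("USB 3", 5000)]:
    print(name, max_channels(speed), "channels at 24-bit/96kHz")
```

Even with this pessimistic overhead allowance, USB 2 comfortably exceeds the 16-channel figure quoted above.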



You may have noticed that many current computers come without Firewire, and it is generally accepted that the format is being phased out, while USB seems set to continue for a good while yet. So, in terms of future-proofing, USB 2 seems a safer bet than Firewire. Having said that, reports suggest that Firewire interfaces work fine via an adaptor cable connected to the Thunderbolt port on a modern Mac, so it doesn't look like those Firewire interfaces will have to be thrown in a skip any time soon.



In your situation, with not a huge budget and relatively small track counts, I'd be inclined to go for the USB option. But make sure you plug the interface into its own USB port and not via a hub, to ensure you have enough bandwidth for it to work properly.



As for Thunderbolt, these interfaces are still relatively expensive, but will no doubt become less so as more products enter the market. However, it doesn't sound as though you need to take this step right now. It might seem logical to assume that Thunderbolt interfaces are the least likely to become defunct, as they're newer technology, but I'm afraid the only thing you can be really certain of in the world of computers is 'change'.


Korg All Access: Jeff Babko (Keyboardist for Jimmy Kimmel Live)

Q. How do I set the gain on my preamp and interface?

I could really use some advice! I've got a Shure SM7b mic, a Golden Age Project Pre 73 MkII preamp and an M-Audio Fast Track Pro interface that I use when recording vocals. The preamp has two different knobs: one is gain (labelled 'mic/line'), the other is output. Then this signal goes to the interface, which also has a signal level knob. I know that different settings will change the sound on the preamp, but I was wondering how I should set the interface to get as good and balanced a sound as possible. Can you give me any advice?




Via SOS Facebook page

Just where should you set the gain knob on your audio interface if you're also using an external mic preamp? First, tweak your external preamp settings to achieve the desired sound and a healthy level, and then use the interface's gain control to set the right level running into your DAW.




SOS Reviews Editor Matt Houghton replies: The Shure mic and GAP Pre 73 should be a good match, given the Neve 1073-style high-impedance input on the preamp, which should get the best out of a dynamic mic such as this, so I'd stick with that combo.

The harder you drive the gain knob on the preamp, the more 'colour' you'll get from the transformers. So, while aiming for the same overall level coming out of the preamp, a low gain setting combined with a high output-level setting will sound more neutral, whereas a high gain with a lower output will sound a bit more rich/distorted (and even more so if the input signal is very 'hot').

Then feed the line-level output of the preamp to one of the Fast Track Pro's inputs, making sure that the input is set to 'line'. You should set the gain control on the interface as low as possible, while still making sure that you're seeing the right sort of level on the meters in your DAW software or your audio interface (and without the M-Audio's clip light showing!).

If you're recording at 24-bit, the noise floor will be low enough that you don't need your meters going anywhere near the red; you can safely raise the level later on without noise being an issue. If you're recording at 16-bit (try not to, but you may have good reason!), you're looking for as high a level as you can get without clipping, which is trickier to set up, but should give perfectly good results too.
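The point about 24-bit headroom follows from the theoretical dynamic range of linear PCM, roughly 6.02 × bits + 1.76 dB. A quick calculation:

```python
import math

# Theoretical dynamic range of linear PCM quantisation: approximately
# 6.02*bits + 1.76 dB, which is why 24-bit recordings leave plenty of
# room to record well below 0dBFS without noise becoming audible.
def dynamic_range_db(bits):
    return 20 * math.log10(2 ** bits) + 1.76

print(round(dynamic_range_db(16), 1))  # ~98.1 dB
print(round(dynamic_range_db(24), 1))  # ~146.3 dB
```

The extra ~48dB at 24-bit is what makes conservative gain settings safe.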

Published in SOS June 2013