Monday, June 24, 2019

Q. What's the difference between a talk box and a vocoder?

By Craig Anderton
In addition to its built-in microphone, the Korg MS2000B's vocoder accepts external line inputs for both the carrier and modulator signals.
I've heard various 'talking instrument' effects which some people attribute to a processor called a vocoder, while others describe it as a 'talk box'. Are these the same devices? I've also seen references in some of Craig Anderton's articles about using vocoders to do 'drumcoding'. How is this different from vocoding, and does it produce talking instrument sounds?

James Hoskins

SOS Contributor Craig Anderton replies: A 'talk box' is an electromechanical device that produces talking instrument sounds. It was a popular effect in the '70s and was used by Peter Frampton, Joe Walsh and Stevie Wonder, amongst others. It works by amplifying the instrument you want to make 'talk' (often a guitar), and then sending the amplified signal to a horn-type driver, whose output goes to a short, flexible piece of tubing. This terminates in the performer's mouth, which is positioned close to a mic feeding a PA or other sound system. As the performer mouths words, the mouth acts like a mechanical filter for the acoustic signal coming in from the tube, and the mic picks up the resulting, filtered sound. Thanks to the recent upsurge of interest in vintage effects, several companies have begun producing talk boxes again, including Dunlop (the reissued Heil Talk Box) and Danelectro, whose Free Speech talk box doesn't require an external mic, as it processes the signal directly.

The vocoder, however, is an entirely different animal. The forerunner to today's vocoder was invented in the 1930s for telecommunications applications by an engineer named Homer Dudley; modern versions create 'talking instrument' effects through purely electronic means. A vocoder has two inputs: one for an instrument (the carrier input), and one for a microphone or other signal source (the modulator input, sometimes called the analysed input). Talking into the microphone superimposes vocal effects on whatever is plugged into the instrument input.

The principle of operation is that the microphone feeds several band-pass filters in parallel, each of which covers a narrow frequency band; electronically, this is similar to a graphic equaliser. The mic input needs to be separated into these different filter sections because, in human speech, different sounds are associated with different parts of the frequency spectrum.

For example, an 'S' sound contains lots of high frequencies. So, when you speak an 'S' into the mic, the higher-frequency filters fed by the mic will have an output, while there will be no output from the lower-frequency filters. On the other hand, plosive sounds (such as 'P' and 'B') contain lots of low-frequency energy. Speaking one of these sounds into the microphone will give an output from the low-frequency filters. Vowel sounds produce outputs at the various mid-range filters.

But this is only half the picture. The instrument channel, like the mic channel, also splits into several different filters, and these are tuned to the same frequencies as the filters used with the mic input. However, these filters include DCAs or VCAs (digitally controlled or voltage-controlled amplifiers) at their outputs. These amplifiers respond to the signals generated by the mic-channel filters: more signal passing through a particular mic-channel filter raises the gain of the corresponding instrument-channel amplifier.

Now consider what happens when you play a note into the instrument input while speaking into the mic input. If an output occurs from the mic's lowest-frequency filter, then that output controls the amplifier of the instrument's lowest filter, and allows the corresponding frequencies from the instrument input to pass. If an output occurs from the mic's highest-frequency filter, then that output controls the instrument input's highest-frequency filter, and passes any instrument signals present at that frequency.

As you speak, the various mic filters produce output signals that correspond to the energies present at different frequencies in your voice. By controlling a set of equivalent filters connected to the instrument, you superimpose a replica of the voice's energy patterns on to the sound of the instrument plugged into the instrument input. This produces accurate, intelligible vocal effects.
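To make that signal flow concrete, here is a minimal channel-vocoder sketch in Python, assuming NumPy and SciPy are available; the function name, band count and filter orders are illustrative choices, not a description of any particular hardware unit:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def vocode(modulator, carrier, sr=44100, n_bands=16,
           fmin=100.0, fmax=8000.0):
    """Channel vocoder sketch: analyse the modulator (mic) signal with a
    bank of band-pass filters, then use each band's envelope to control
    the gain of the matching band of the carrier (the 'VCA' stage)."""
    edges = np.geomspace(fmin, fmax, n_bands + 1)  # log-spaced band edges
    out = np.zeros_like(carrier)
    smooth = butter(1, 50.0, fs=sr, output='sos')  # envelope smoothing filter
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype='bandpass', fs=sr, output='sos')
        mod_band = sosfilt(sos, modulator)       # mic-channel filter
        car_band = sosfilt(sos, carrier)         # instrument-channel filter
        env = sosfilt(smooth, np.abs(mod_band))  # rectify, then low-pass
        out += car_band * env                    # envelope sets the band's gain
    return out / np.max(np.abs(out))             # normalise the result
```

The same sketch also illustrates the 'drumcoding' idea discussed below: feed a drum mix in as the modulator and a sustained keyboard part as the carrier, and the drums' band-by-band energy will gate the keyboard rhythmically.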

Vocoders can be used for much more than talking instrument effects. For example, you can play drums into the microphone input instead of voice, and use this to control a keyboard (I've called this 'drumcoding' in previous articles). When you hit the snare drum, that will activate some of the mid-range vocoder filters. Hitting the bass drum will activate the lower vocoder filters, and hitting the cymbals will cause responses in the upper frequency vocoder filters. So, the keyboard will be accented by the drums in a highly rhythmic way. This also works well for accenting bass and guitar parts with drums.
Note that for best results, the instrument signal should have plenty of harmonics, or the filters won't have much to work on.


Published October 2003

Friday, June 21, 2019

Q. Do I need a dedicated FM soft synth?

By Craig Anderton
Native Instruments' FM7 soft synth is equipped with six operators, each with a 31-stage envelope which can be edited from this screen.
I'm too young to have experienced the first wave of FM synthesis, but I find those sounds very interesting. I was considering buying Native Instruments' FM7 soft synth, but I've noticed that many virtual analogue soft synths claim to offer FM capabilities, and provide virtual analogue besides. Is there any advantage to the extra expense of something like the FM7?

Richard Ledbetter

SOS Contributor Craig Anderton replies: If your interest in FM synthesis is at all serious, then yes, there are advantages to dedicated FM soft synths like Native Instruments' FM7. But to understand why, it might first be helpful to explore some FM synthesis basics.

The basic 'building block' of FM synthesis is called an operator. This consists of an oscillator (traditionally a sine wave, but other waveforms are sometimes used), a DCA to control its output, and an envelope to control the DCA. But the most important aspect is that the oscillator has a control input to vary its frequency.

Feeding one operator's output into another operator's control input creates complex modulations that generate 'sideband' frequencies. The operator being controlled is called the carrier; the one doing the modulation is called the modulator. The sideband frequencies are mathematically related to the modulator and carrier frequencies. (Note that the modulator signal needs to be in the audio range. If it's sub-audio, then varying the carrier's frequency simply creates vibrato.)
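Here's a minimal two-operator sketch in Python (NumPy assumed; the frequencies and modulation index are arbitrary). Strictly speaking it modulates the carrier's phase rather than its frequency, which is also how Yamaha's chips worked:

```python
import numpy as np

def two_op_fm(fc, fm, index, dur=1.0, sr=44100):
    """Two-operator FM: an audio-rate sine modulator drives the
    carrier's phase, creating sidebands at fc +/- n*fm."""
    t = np.arange(int(dur * sr)) / sr
    modulator = np.sin(2 * np.pi * fm * t)   # audio-rate modulator
    return np.sin(2 * np.pi * fc * t + index * modulator)

# 440Hz carrier, 220Hz modulator: sidebands at 440 +/- n*220.
y = two_op_fm(fc=440.0, fm=220.0, index=3.0)
```

This single carrier/modulator pair is essentially all that a virtual analogue synth's 'FM' mode gives you, a point we'll return to below.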

Operators can be arranged in many ways, each of which is called an algorithm. Basic Yamaha FM synths of the mid-'80s had four operators. A simple algorithm suitable for creating classic organ sounds is to feed each operator's output to a summed audio output, in which case each operator is a carrier. A more complex algorithm might feed the output from two operators to the audio output, with each being modulated by one of the other operators. Or, Operator 1 might modulate Operator 2, which modulates Operator 3, which modulates Operator 4, which then feeds the audio output. The latter can produce extremely complex modulations. Sometimes feedback among operators is also possible, which makes things even more interesting.
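As a hypothetical sketch of that last, serial algorithm in Python (NumPy assumed; frequencies and indices are arbitrary), each operator's output feeds the next one's control input, and only the final operator reaches the audio output:

```python
import numpy as np

def op(freq, phase_mod, index, t):
    """One operator: a sine oscillator whose phase is driven
    by the previous operator's output (its control input)."""
    return np.sin(2 * np.pi * freq * t + index * phase_mod)

sr, dur = 44100, 1.0
t = np.arange(int(dur * sr)) / sr

# Serial algorithm: Op1 -> Op2 -> Op3 -> Op4 -> audio output.
op1 = op(110.0, np.zeros_like(t), 0.0, t)  # unmodulated sine
op2 = op(220.0, op1, 2.0, t)
op3 = op(440.0, op2, 1.5, t)
audio = op(880.0, op3, 1.0, t)             # Op4 is the only carrier
```

Changing which outputs are summed to the audio output and which feed other operators' control inputs is all it takes to define a different algorithm.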

So more operators means more algorithm options. The Yamaha DX7 had six operators and 32 fixed algorithms; FM7 actually offers considerably more power because it lets you create your own algorithms.

A virtual analogue synth that offers FM typically allows one oscillator to modulate another one, essentially serving as a primitive two-operator FM synth with one algorithm. While this is enough to let you experiment a bit with simple FM effects, the range of sounds is extremely limited compared to the real thing. You'll be able to get some nice clangy effects and maybe even some cool bass and brass sounds, but don't expect much more than that unless there are multiple operators and algorithms.


Published November 2003

Wednesday, June 19, 2019

Q. Should I use my PC's ACPI mode?

By Martin Walker
MOTU 828 MkI Firewire audio interface.
I've just bought a PC laptop and a MOTU 828. I installed the software and have been using Emagic Logic 5.5. However, when I turn off the computer's Advanced Configuration and Power Interface (ACPI) support, it no longer recognises that the 828 is there. This is a common PC/Windows tweak for making audio apps run more efficiently, so I was hoping that the 828, or any other Firewire hardware for that matter, would be able to run with ACPI turned off. Can anyone enlighten me on this subject?

SOS Forum Post

SOS PC Notes columnist Martin Walker replies: It's strange that your MOTU 828 is no longer recognised, since Plug and Play will still detect exactly the same set of devices when you boot your PC, including the host controllers for both your USB and Firewire ports. It's then up to Windows to detect the devices plugged into their ports, and this ought not to be any different when running under ACPI or Standard mode.

MOTU have reported some problems running their 828 interface with Dell Inspiron laptops, due to IRQ sharing between the Firewire and graphics controllers, and this could possibly be cured by changing to Standard mode, but there's a much wider issue here. While switching from ACPI to Standard mode has solved quite a few problems for some musicians in the past (those with M-Audio soundcards have certainly benefited), it shouldn't be used as a general-purpose cure-all, particularly not with Windows XP, which, I suspect, is installed on your recently bought PC laptop.

Since Windows XP needs very little tweaking compared with the Windows 9x platform, I would recommend that all PC musicians leave their machines running in ACPI mode on XP. I'd only suggest turning it off if there is some unresolved problem that no other OS tweak will cure, such as occasional audio clicks and pops that persist no matter how you set the soundcard's buffer size, or an inability to run at an audio latency lower than about 12ms without glitching.

Some modern motherboards also provide an Advanced Programmable Interrupt Controller (APIC), which offers 24 interrupts under Windows XP in ACPI mode rather than the 16 available in Standard mode, so if yours provides this feature you should certainly stick with ACPI. Apparently it's also faster at task-switching, leaving a tiny amount more CPU for running applications. The latest Hyper-Threading processors also require ACPI to be enabled to use this technology, since each logical processor has its own local APIC. Moreover, laptops benefit from ACPI far more than desktop PCs, since it's integrated with various power management features that extend battery life. If I were you, I'd revert to ACPI.


Published October 2003

Monday, June 17, 2019

Q. Are all decibels equal?

By Hugh Robjohns
Equipment specifications often give input and output levels with units like dBu, dBV and dBFS. What's the difference?

Andrew Dunn

Technical Editor Hugh Robjohns replies: The decibel (dB) is a handy mathematical way of expressing a ratio between two quantities. It suits audio equipment well because it is a logarithmic system and our ears also behave in a logarithmic fashion, so the two match each other, and the results become more meaningful and consistent.
This chart compares the scales of various types of meter in relation to dBu values (left) and dBV values (right).

Decibels are often used to compare two signal levels — say, the input and output level from an amplifier. If the output level (Vout) was twice as big as the input level (Vin), the amplifier would have a gain of 6dB. The calculation is:
Gain (dB) = 20 × log10 (Vout / Vin)
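As a quick worked example, here's the same calculation as a trivial Python sketch (the function name is mine):

```python
import math

def gain_db(v_out, v_in):
    """Express a voltage ratio in decibels."""
    return 20 * math.log10(v_out / v_in)

print(gain_db(2.0, 1.0))  # doubling the voltage gives ~6.02dB of gain
```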

Sometimes, rather than compare two arbitrary signals in this way, we want to compare one signal with a defined reference level. That reference level is indicated by adding a suffix to the dB term. The three common ones in use in audio equipment are those you've asked about.

The standard reference level for audio signals in professional environments was defined a long time ago as being the level achieved when 1mW of power was dissipated in a 600Ω load. The origins are tied up in telecommunications and needn't concern us, but the salient fact is that 1mW in 600Ω produces an average (rms) voltage of 0.775V.

If you compare the reference level with itself, the dB figure is 0dB (the log of 1 is 0) — so the reference level described above was known as 0dBm — the 'm' referring to milliwatt.

Fortunately, the practice of terminating audio lines in 600Ω has long since been abandoned in (most) audio environments, but the reference voltage has remained in use. To differentiate between the old dBm (requiring 600Ω terminations) and the new system which doesn't require terminating, we use the descriptor: 0dBu — the 'u' meaning 'unterminated.' Thus, 0dBu means a voltage reference of 0.775V (rms), irrespective of load impedance. A lot of professional systems adopt a higher standard reference level of +4dBu, which is a voltage of 1.228V (rms).

Domestic and semi-professional equipment generally doesn't operate with such high signal levels as professional equipment, so a lower reference voltage was defined. This is -10dBV, where the 'V' suffix refers to 1 Volt — hence the reference is 10dB below 1V, which is 0.316V (rms).

It is often useful to know the difference between 0dBu and 0dBV, and those equipped with a scientific calculator will be able to figure it out easily enough. For everyone else, the difference is 2.2dB — and the difference between +4dBu and -10dBV is 11.8dB. So in round numbers, converting from semi-pro to pro levels requires 12dB of extra gain, and going the other way needs a 12dB pad.
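Those figures are easy to verify. Here's a short Python sketch that derives the reference voltages from the definitions above (the variable names are mine):

```python
import math

# 0dBm: 1mW dissipated in a 600 ohm load, so V = sqrt(P * R).
v_dbu = math.sqrt(0.001 * 600)       # ~0.775V rms, also the 0dBu reference
v_plus4 = v_dbu * 10 ** (4 / 20)     # +4dBu ~= 1.228V rms
v_minus10 = 1.0 * 10 ** (-10 / 20)   # -10dBV ~= 0.316V rms

print(20 * math.log10(1.0 / v_dbu))          # 0dBV sits ~2.2dB above 0dBu
print(20 * math.log10(v_plus4 / v_minus10))  # +4dBu is ~11.8dB above -10dBV
```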

The third common decibel term — dBFS — is used in digital systems. The only valid reference point in a digital system is the maximum quantising level or 'full scale' and this is what the 'FS' suffix relates to. So we have to count down from the top and signals usually have negative values.

To make life easier, we build in a headroom allowance, and define it in terms of a nominal working level. This is generally taken to be -18dBFS in Europe, which equates to 0dBu when interfacing with analogue equipment, or -20dBFS in America, which equates to +4dBu in analogue equipment.
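A small Python helper makes that arithmetic explicit (the function and the 'EBU'/'SMPTE' labels are my own shorthand for the European and American conventions just mentioned):

```python
def dbfs_to_dbu(level_dbfs, alignment='EBU'):
    """Convert a digital level to its nominal analogue equivalent.
    'EBU': -18dBFS == 0dBu; 'SMPTE': -20dBFS == +4dBu."""
    return level_dbfs + (18 if alignment == 'EBU' else 24)

print(dbfs_to_dbu(-18, 'EBU'))    # 0 (dBu)
print(dbfs_to_dbu(-20, 'SMPTE'))  # 4 (dBu)
```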



Published October 2003