Welcome to No Limit Sound Productions

Company Founded
2005
Overview

Our services include Sound Engineering, Audio Post-Production, System Upgrades and Equipment Consulting.
Mission
Our mission is to provide excellent quality and service to our customers. We offer customised service.

Friday, September 30, 2016

Q. Can you recommend a digital multitracker?

By Mike Senior
I am not sure if you can help me but I thought it would be worth a go! I am a guitarist and I want to buy a four-track digital recorder for less than £300. Do you have any recommendations? I do not want to buy a piece of computer software for recording, just a stand-alone recorder.

Mark Taylor

Reviews Editor Mike Senior replies: For that kind of money, you can get eight tracks if you want, assuming that you're after something new. The Fostex VF80EX (retailing at £298.45 when we went to press, but now on sale in some shops for as little as £229) gives you eight tracks of audio recording without data compression, S/PDIF digital input and output, and an onboard CD burner. It would probably be quite a good choice in your circumstances. The Tascam DP01FX might also be an option (at a retail price of £345, but often discounted to as low as £299), although this has no CD drive built in, so you'll have to back it up to a computer over USB. It also has no digital input, so you're stuck with the internal preamp and A-D electronics for recording. If you went for the Fostex, you could, at a later date, connect a decent mic/instrument preamp with built-in A-D conversion and hence bypass the internal preamp electronics of the multitracker.
Digital recorders like the Fostex VF80EX and Tascam DP01FX offer affordable eight-track recording.

There are lots of other models of eight-tracker available in this price range, but I wouldn't really recommend them over the ones I've already mentioned for any serious recording. For a start, most other multitrackers in this price band use data compression for recording, which I wouldn't recommend if there's any chance that you might want to use anything you record on your multitracker for a proper commercial record production later on. Some models record to solid-state memory (such as Smart Media or Compact Flash cards), and usually don't include a particularly large card at the outset, so you'll have to budget for additional cards as well. Many cheaper multitrackers also don't offer phantom-powered mic inputs, which means you won't be able to use the majority of decent condenser mics unless you already own an external preamp or mixer.

If you're willing to look into the second-hand market, there's a lot more choice, but I'd steer clear of Minidisc multitrackers, again for data-compression reasons — technology has moved on quite a way from these now. You might even be able to pick up a 16-track machine within your price range in the SOS Readers Ads. In particular, keep your eyes peeled for a Korg D16 — it's small and has great effects, a built-in CD-RW drive and a touchscreen, but no phantom power — or a Fostex VF160, which has phantom power, a built-in CD-RW and individual track faders, but slightly underwhelming effects and mixing capabilities. Both machines will also record eight tracks at once and include S/PDIF digital input and output.


Published February 2006

Tuesday, September 27, 2016

Q. How can I clear my head?

By Hugh Robjohns


I have an odd, but I'm sure not uncommon, problem that I hope your experienced staff can help with. This time of year the outside world is an especially ghastly, germ-ridden place. During a rare occasion out of the studio last week, I managed to catch myself a cold. This would not normally be a problem, only I had some very important work to complete and mix by the end of the week. So, replacing the biscuit tin with a box of tissues and a mug of Lemsip, I soldiered on. However, all my studious investment in fine hardware couldn't make up for the fact that with blocked sinuses I felt like I was mixing with a motorcycle helmet on! So, what I need to know is, are there any recommended products or remedies (apart from hiring another mix engineer!) to use in this situation? I have tried sinus sprays but they only work for an hour or so and I'm slightly worried that over-use will affect my hearing in the long term. My doctor doesn't really understand the issues either, which doesn't help. We're only as good as our ears, right? Your help on this issue would be more useful to me right now than any advice on speaker placement, room treatment or the latest and greatest convolution reverb — I can't hear it anyhow!

Simon West

Technical Editor Hugh Robjohns replies: This is not an unusual problem and I completely sympathise — I tend to suffer quite badly from it myself. All I can suggest is to find a good decongestant that works for you. I find Olbas Oil safe and useful — pour a few drops into a bowl of hot water and breathe the vapours for a while. However, the congestion will inevitably come back.
There are lots of pharmaceutical decongestants available, but many are combined with other drugs (paracetamol, for example) which limits how often they can be taken, and some have side-effects that may not agree with you. Try talking to your local chemist for specific product advice — I generally find that approach more helpful than talking to the doctor in situations like this.

But I'm afraid the bottom line is that your ears will not work properly until the cold has passed and the sinuses have cleared.


Published February 2006

Saturday, September 24, 2016

Q. Can you explain the origins of wavetable, S&S and vector synthesis?

By Steve Howell
The PPG Wave wavetable synthesizer. This one belongs to synth programmer, engineer and producer Nigel Bates.

I keep reading about different types of synthesis like 'wavetable', 'S&S' and 'vector' but I don't know what they are. I've looked around the net for information but either the descriptions are very simplistic or they're too technical. Could someone at SOS please explain the origins of these techniques?

Michael Cullen

SOS contributor Steve Howell replies: 'Wavetable synthesis' is actually quite easy to understand. In the early days of synthesis, (analogue) oscillators provided a limited range of waveforms, such as sine, triangle, sawtooth and square/pulse, normally selected from a rotary switch. This gave the user a surprisingly wide range of basic sounds to play with, especially when different waveforms were combined in various ways.
However, in the late '70s, Wolfgang Palm [of PPG] used 'wavetable' digital oscillators in his innovative PPG Wave synths. Instead of having just three or four waveforms, a wavetable oscillator can have many more — say, 64 — because they are digitally created and stored in a 'look-up table' that is accessed by a front-panel control. As you move the control, so you hear the different waveforms as they are read out of the table — the control is effectively a 64-way switch. If nothing else, this gives a wide palette of waveforms to use as the basis of your sounds. However, the waveform-selection control is not a physical switch as such, but a continuously variable control implemented in software. The advantage this has (apart from the 60 extra waveforms!) is that it is also possible to use LFOs or envelopes or MIDI controllers to step through these waveforms.
Now, if the waveforms are sensibly arranged, we can begin to create harmonic movement in the sound. For example, if Wave 1 is a sine wave and Wave 64 is a bright square wave with Waves 2 to 63 gradually getting brighter as extra harmonics are added in each step of the wavetable, as you move through the wavetable, you approach something not unlike a traditional filter sweep. However, one disadvantage to this (but something that characterised the PPG) is that the sweep will not be smooth — the waveforms will step in audible increments.
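The 'brightening wavetable' idea is easy to sketch in code. The following is a toy illustration in Python (not PPG's actual implementation): each successive wave adds one more odd harmonic of the square-wave series, so stepping through the table takes you from a pure sine towards a bright square wave, in audible increments just as described above.

```python
import math

def make_wavetable(num_waves=64, table_len=256):
    """Build a toy 64-entry wavetable: Wave 1 is a pure sine, and each
    later wave adds one more odd harmonic (the square-wave series), so
    each step through the table is brighter than the last."""
    table = []
    for w in range(num_waves):
        harmonics = 1 + w  # number of odd harmonics grows with the index
        wave = []
        for i in range(table_len):
            phase = 2 * math.pi * i / table_len
            # sum odd harmonics at 1/n amplitude, as in a square wave
            s = sum(math.sin((2 * k + 1) * phase) / (2 * k + 1)
                    for k in range(harmonics))
            wave.append(s)
        table.append(wave)
    return table

def oscillator(table, wave_index, phase):
    """Read one sample; wave_index acts as the 64-way 'switch', and an
    LFO or envelope could just as easily drive it."""
    wave = table[int(wave_index)]
    return wave[int(phase * len(wave)) % len(wave)]
```

Sweeping `wave_index` from 0 to 63 over time reproduces the stepped, filter-sweep-like brightening; because the index jumps in whole steps, the timbre changes in audible increments, exactly the character described above.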
Each oscillator in the PPG, however, didn't just have one wavetable — there were 32 wavetables, each with 64 waveforms! Many were simple harmonic progressions as described above; others were rudimentary attempts at multisampling, whilst others attempted to emulate oscillator sync sweeps and PWM (pulse-width modulation) effects. Because the wavetable sweeping was so audibly stepped, the latter two weren't entirely convincing emulations, though they had a character all their own nonetheless.
Where things begin to get interesting, however, is when the waveforms in the wavetable are disparate and harmonically unrelated, as the tonal changes become random and unpredictable. For many, this feature of wavetable synthesis was unusable, but some creative individuals like Thomas Dolby exploited it to create unique and distinctive sounds, as can be heard on his 1982 album The Golden Age Of Wireless.
The PPG had something of a trump up its sleeve, however — totally analogue filters! Using these, it was possible to smooth out the wavetable sweeps. Another endearing quality of the PPG was its low-resolution digital circuitry, which exhibited aliasing at extreme frequencies that added a certain 'gritty' quality to the sound. Later manifestations of the PPG (in Waldorf products) were of a higher quality and offered smooth wavetable sweeping. But while they sounded better, they lacked that (arguably) essential 'lo-fi' character.
Other synths have employed wavetable synthesis in one guise or another since then and there are several software synths available today which incorporate wavetable synthesis capabilities.
The massively influential Korg M1 really put S&S synthesis on the map.
'S&S' is an abbreviation for 'samples and synthesis' and refers to the new breed of synth that appeared with the introduction of the seminal Roland D50 in 1987. Whereas synths prior to this used analogue or digital oscillators to create sound, samplers were now in the ascendant, with the introduction of affordable sampling products such as the Ensoniq Mirage, the Emu Emax and the Akai S900. These allowed almost any sound to be sampled and mangled, but they had one inconvenience — the samples took time to load and were stored on floppy disks. Roland could see that by using short samples as the basic sound sources, and storing them in ROM for instant recall, they could make the same type of sound as a sampler but with no tedious load times. However, they also retained many of their previous synthesizers' functions such as multi-mode filters, envelopes, LFOs and so on. To all intents and purposes, the D50 'felt' like a synth but sounded like a sampler. Furthermore, to smooth out any inadequacies in the very short samples such as clicky and/or obvious loops, the D50 also had chorus and reverb which 'smudged' these artifacts quite effectively.
And so a legend — and a new synthesis method — was born! Roland called it 'LA (linear arithmetic) synthesis'. In truth, it was a simple layering method where up to four samples could be stacked to create more complex sounds. Because of memory constraints (ROM/RAM was very expensive at the time), Roland had to use very short samples, and there were two categories of sample on the D50 — short, unlooped samples (such as flute 'chiff' or guitar 'pluck') and short sustaining loops. By combining and layering, for example, a flute 'chiff' with a sustained flute loop sample, you could (in theory) create a realistic flute sound. In practice, it didn't quite work out like that, but this layering technique also gave the instrument a new palette of sounds to work with and it was possible to layer, say, the attack of a piano with the sustain of a violin. With the wealth of synthesis functions available to process the samples, this allowed the user to create interesting hybrid sounds.
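The attack-plus-loop trick is easy to picture in code. The sketch below is a deliberately crude, hypothetical model (plain lists of sample values rather than real audio, and no crossfade at the loop point, which a real instrument would need to avoid clicks):

```python
def la_layer(attack, sustain_loop, note_length):
    """Toy LA-style voice: play a short unlooped attack sample once,
    then repeat a short sustain loop for the remainder of the note."""
    out = list(attack)               # the one-shot 'chiff' or 'pluck'
    while len(out) < note_length:
        out.extend(sustain_loop)     # cycle the short sustain segment
    return out[:note_length]

# e.g. a flute 'chiff' followed by a looped sustain segment
note = la_layer(attack=[0.9, 0.5], sustain_loop=[0.3, 0.4, 0.3],
                note_length=8)
```

Only a few samples of ROM are needed per voice, which is precisely why the approach suited the expensive memory of the late '80s.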
Korg took this concept to a new level a year or so later when they released their M1, another legend in modern music technology. Although similar concepts were involved, the M1 used longer, more complete samples which, in conjunction with typical synth facilities, blurred the distinction between synth and sampler and arguably heralded the beginning of the slow, gradual demise of the hardware sampler! However, as well as advancing S&S, they also added a very functional multitrack sequencer and good quality multi-effects so that (maybe for the first time) it was possible to create complete works on a single, relatively affordable keyboard. And so the 'S&S workstation' was born. I think it's fair to say that most modern synths owe something to the Korg M1 in one or another aspect of their design.
The ill-fated Sequential Circuits Prophet VS introduced vector synthesis to the world.

These days, many synths and keyboards routinely use these same basic principles, but memory is now far more affordable and so it is possible to have many more (and considerably more detailed) multisamples in the onboard ROM. Whereas early S&S synths boasted around 4MB of onboard ROM, figures of 60MB or more are bandied about today. That said, many of the same techniques used for optimising samples and squeezing as many into ROM as possible are still used today.
'Vector synthesis' is a slightly different (but related) technique. First pioneered by Dave Smith in his Prophet VS, vector synthesis typically uses four oscillators which the user can 'morph' smoothly between, using real-time controllers such as a joystick or automated controllers such as LFOs and/or envelope generators. As the joystick is moved, so the balance of the four oscillators changes and, depending on the nature of the source waveforms, many interesting, evolving sounds can be created. But the Prophet VS was ill-fated — Sequential Circuits were in financial trouble and the company soon went to the wall. However, the concept lived on in the Korg Wavestation, which was a joint venture between a post-Sequential Smith and Korg. The Wavestation had a significant advantage over the VS in that it used multisampled waveforms, allowing more complex building blocks to be used — in many ways, it was a hybrid S&S and vector synth. As well as extensive synth facilities (filters, multi-stage envelopes and so on), it also had comprehensive multi-effects and other facilities (not least of which was 'Wave Sequencing') that made the Wavestation a programmer's dream, and a casual user's nightmare! Indeed, they are still a staple component in many players' keyboard rigs today. The Wavestation was discontinued many years ago (though it's been resurrected in Korg's Legacy Collection software), but vector synthesis lives on in Dave Smith's Evolver range of keyboards.
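The joystick morph can be modelled as a simple crossfade. Here is a minimal sketch which assumes the four oscillators sit at the corners of a square joystick field (the real Prophet VS presents them on a diamond, so treat the geometry as illustrative):

```python
def vector_weights(x, y):
    """Bilinear mix weights for four oscillators A..D at the corners of
    a joystick field, with x and y each in the range -1..+1.  The
    weights always sum to 1, so the overall level stays constant as the
    stick (or an LFO/envelope driving it) moves."""
    wx = (x + 1) / 2   # 0 at full left, 1 at full right
    wy = (y + 1) / 2   # 0 at full down, 1 at full up
    return {
        'A': (1 - wx) * wy,         # top-left
        'B': wx * wy,               # top-right
        'C': (1 - wx) * (1 - wy),   # bottom-left
        'D': wx * (1 - wy),         # bottom-right
    }

def vector_mix(samples, x, y):
    """Blend one sample from each of four oscillators by stick position."""
    w = vector_weights(x, y)
    return sum(w[k] * samples[k] for k in 'ABCD')
```

With the stick centred, each oscillator contributes a quarter of the output; pushed fully into a corner, you hear that oscillator alone. Automating `x` and `y` from LFOs or envelopes gives the evolving timbres described above.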
If you're looking for further information on synthesis out there on the web, I can suggest two sections of the Sound On Sound web site worth investigating. Paul Wiffen's 12-part Synth School series, which appeared in the magazine between June 1997 and October 1998, is a good introduction to the basics of synthesis in its various forms. If you enter "synth school" into the search engine at www.soundonsound.com, you'll find it. Judging by your comments, you may find some of Gordon Reid's long-running Synth Secrets series too technical, but it's nevertheless worth a mention as it covered so much ground in its five-year tenure. To make this vast amount of material a little easier to navigate, we have created a special page with links to all of the Synth Secrets articles: www.soundonsound.com/sos/allsynthsecrets.htm.


Published February 2006




Thursday, September 22, 2016

Q. Which digital multitracker is the right one for me?

By Tom Flint
The Zoom MRS1608's dedicated drum pads set it apart from other similarly priced multitrackers.
I read Tom Flint's piece on the Zoom MRS1608 multitracker and think it may be the right machine for me. I still use a Roland TR707 drum machine which allows you to step write and tap write. The sounds, of course, are ancient. I write simple country songs, mostly backed by drums and guitars. I think the Zoom's drum machine would be great for what I do. I would think the guitar effects would also be pretty good on this machine. I currently own the Tascam 2488. I think my recordings sound really good on this machine, but I don't like the guitar effects much and find them a little difficult to use. I don't even use the drum machine and I don't use MIDI or edit much at all. Based on what I have told you, do you think I would be pleased if I unloaded the Tascam and bought the Zoom? I would appreciate your opinion on this subject and thank you in advance.

Robert Tambuscio

SOS contributor Tom Flint replies: Before they entered the multitracker market, Zoom were busy gaining a name for themselves producing drum machines and guitar effects (amongst other things), so you can expect a reasonable level of quality and competence in both these areas. If I remember correctly, the MRS1608's internal drum sounds are good and varied — if country music is your thing then the chances are that the sounds in the MRS will serve you better than the TR707! The MRS has 50 drum kits which should certainly include a few that are suitable, and it is possible to take the best sounds from various kits and create a custom kit yourself. If you're not satisfied with the onboard sounds, the Pad Sampler facility allows AIFF and WAV samples to be loaded from CD and used as alternative drum sounds. Alternatively, you could use the Phrase Loop sampler to put together drum and percussion loops taken directly from sample libraries, or choose from among the MRS's 475 preset drum and bass patterns.

The sequencer itself offers both real-time and step-based recording, so it should allow you to program drums in a similar way to the TR707, although I believe the Zoom's grid has a finer resolution than the 707 and there are more time-signature options. It's also worth noting that some of the Zoom's programming facilities will be familiar to TR users. For example, just as the 707 has a set of faders for setting sound levels for each kit component, the MRS allows the channel faders to be used for adjusting its own drum samples. The Zoom multitracker also benefits from having 12 touch-sensitive pads for triggering drums.

Tascam have a long history of producing multitrack recorders, but they're not known as makers of effects or drum machines, so it's not surprising to hear that the 2488 hasn't quite lived up to your expectations in these areas. It does have an internal GM sound module with many useful drum and instrument sounds, and, like the MRS, it can import and play Standard MIDI files, but it doesn't have anything approaching a pad bank, and there are no sampling facilities. So as far as the drum machines are concerned, the MRS is much better equipped.

That said, a decent drum section shouldn't be your only consideration. Before you offload the 2488, think carefully about whether there are any recording, editing or mixing facilities that you regularly use, and check they are also available on the Zoom. The Zoom has a rather more basic display which may hinder its usability a little. That has to be something to consider, given that you say you find some of the 2488's features difficult to use. Without doing some objective side-by-side testing it's impossible to say whether the Zoom sounds as good as the Tascam or not, but I can say that I didn't think the Zoom was particularly weak in that department, and I suspect there's little to choose between them.

Nevertheless, I'd advise anyone using a budget multitracker to use a good-quality external preamp for any important lead work if at all possible, simply because the onboard preamps are not going to be of the highest quality. What's more, if your preamp has a decent A-D converter with an S/PDIF output built in, it would be a good idea to bypass the multitracker's converters by using its S/PDIF input, and clocking the multitracker to the preamp's digital clock.

Normally I'd probably suggest upgrading to a better machine when trading in your old multitracker for a new one, but there aren't really any high-end products which go in for drum machines and sequencers in quite the same way as the MRS1608, so I'm not sure you have much choice if you really want these kinds of features. The other option would be to hold onto the Tascam 2488 and buy a more modern drum machine — Alesis, Boss and Zoom all make self-contained drum machines which cost less than £300 — and slave it to the 2488 via MIDI.



Published February 2006


Wednesday, September 21, 2016

Q. What determines the CPU reading in Cubase SX?

By Martin Walker

I remain baffled by the CPU load in Cubase SX 2 (as shown in the VST Performance indicator). I'm particularly curious to know why in my larger projects the indicator shows a constant load (typically 80 percent or more) even when I'm not playing anything back! What exactly is the CPU doing when nothing is happening in the project? My projects typically have 15 to 25 audio tracks, five to 10 virtual-instrument tracks and a couple of MIDI tracks, with five or so group channels and maybe a couple of FX Channels. Some of the channels have an insert effect or two, typically a compressor or gate, and there's a couple of aux channels for send effects.

SOS Forum Post

PC music specialist Martin Walker replies: When Cubase isn't playing back, the CPU overhead is largely down to the plug-ins, all of which remain 'active' at all times. This is largely to ensure that reverb tails and the like continue smoothly to their end even once you stop the song, and it lets you treat incoming 'live' instruments and vocals with plug-ins before you actually start the recording process. However, this isn't the only design approach — for instance, Magix's Samplitude allows plug-ins to be allocated to individual parts in each track, which is not only liberating for the composer, but also means that they consume processing power only while that part is playing.

Freezing tracks, adjusting the buffer size and using single send effects instead of multiple inserts can all help reduce CPU overhead.

Of all the plug-ins you'll be using frequently, reverbs are often the most CPU-intensive, so make sure you set these up in dedicated FX Channels and use the channel sends to add varying amounts of the same reverb to different tracks, rather than using them as individual insert effects on each track. You can do the same with delays and any other effects that you 'add' to the original sound — only those effects like EQ and distortion where the whole sound is treated need to be individually inserted into channels.

The other main CPU drain for any sequencer when a song isn't playing back comes from software synths that impose a fixed overhead depending on the chosen number of voices. These include synth designer packages such as NI's Reaktor and AAS's Tassman, where their free-form modular approach makes it very difficult to determine when each voice has finished sounding. However, fixed-architecture software synths are more likely to use what is called dynamic voice allocation. This only imposes a tiny fixed overhead for the synth's engine, plus some extra processing for each note, but only for as long as it's being played.

If you use a synth design package like Reaktor or Tassman, try reducing the maximum polyphony until you start to hear 'note-robbing' — notes dropping out because of insufficient polyphony — and then increase it to the next highest setting. This can sometimes drop the CPU demands considerably. Many software synths with dynamic voice allocation can also benefit from this tweak if they offer a similar voice 'capping' preference.
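Dynamic voice allocation with a polyphony cap works roughly like the sketch below. This is a simplified, hypothetical allocator: real synths use smarter stealing rules, such as preferring already-released or quietest voices, but the principle (steal a voice rather than exceed the cap) is the same.

```python
class VoicePool:
    """Toy dynamic voice allocator: at most max_voices notes sound at
    once, and the oldest note is 'robbed' when the cap is exceeded.
    CPU cost then scales with active notes, not a fixed maximum."""

    def __init__(self, max_voices=8):
        self.max_voices = max_voices
        self.active = []   # sounding notes, in order of onset

    def note_on(self, note):
        if len(self.active) >= self.max_voices:
            self.active.pop(0)       # steal the oldest voice
        self.active.append(note)

    def note_off(self, note):
        if note in self.active:
            self.active.remove(note)  # voice freed: no further CPU cost
```

Lowering `max_voices` until you start to hear note-robbing, then raising it one notch, is exactly the capping tweak described above.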

Anyone who has selected a buffer size for their audio interface that results in very low latency will also notice a hike in the CPU meter even before the song starts, simply due to the number of interrupts occurring — at 12ms latency the soundcard buffers need to be filled just 83 times a second, but at 1.5ms this happens 667 times a second, so it's hardly surprising that the CPU ends up working harder. For proof, just lower your buffer size and watch the CPU meter rise — depending on your interface, the reading may more than double between 12 and 1.5ms. You'll also notice a lot more 'flickering' of the meter at lower latencies. If you've finished the recording process and no longer need low latency for playing parts into Cubase, increase the buffer size so that latency is at least 12ms.
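Those figures follow directly from the buffer arithmetic. A quick check, assuming one buffer refill per latency period (actual driver behaviour varies between interfaces):

```python
def refills_per_second(latency_ms):
    """How often the driver must refill the audio buffer for a given
    one-way buffer latency."""
    return 1000.0 / latency_ms

def buffer_frames(latency_ms, sample_rate=44100):
    """Equivalent buffer size in sample frames (sample rate is an
    assumed example value)."""
    return round(sample_rate * latency_ms / 1000.0)

print(round(refills_per_second(12)))    # -> 83 refills per second
print(round(refills_per_second(1.5)))   # -> 667 refills per second
```

So dropping from 12ms to 1.5ms means roughly eight times as many driver interrupts per second, which is where the extra CPU load comes from.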

Finally, if some of those audio or software synth tracks are finished, freeze them so that their plug-ins and voices no longer need to be calculated. Playing back frozen tracks will place some additional strain on your hard drive, but most musicians run out of processing power long before their hard drives start to struggle.


Published February 2006




Monday, September 19, 2016

Q. How do I lower the latency on my laptop?

By Sam Inglis
I have been experiencing some big problems with latency whilst trying to use Cubase SX. I would be grateful for any help or advice you can offer me. I'm using a Sony Vaio laptop with a 1.4GHz Intel Celeron M processor, 512MB of RAM, a 60GB hard drive, and a Realtek High Definition Audio sound chip. I've tried reducing the buffer size on this driver and upping the sample rate to 96kHz, with no effect on latency. Could the cause be my hardware?


Carol Robinson

Features Editor Sam Inglis replies: The latency is almost certainly caused by the hardware — most built-in laptop sound chips only have DirectX and MME drivers, and these can suffer latencies of half a second or more. Ideally, you'd be better off with a specialist audio device for music with proper ASIO drivers: upgrading your sound hardware will improve both audio quality and driver performance. Either a PCMCIA or USB device should be OK, or a Firewire one if your computer has a Firewire port. However, you could also investigate third-party ASIO drivers such as ASIO4ALL (www.tippach.net/asio4all), which are designed to work with any hardware.


Published February 2006



Friday, September 16, 2016

Q. What exactly is an 'alias'?

By Hugh Robjohns
Figure 1: The D-A converter's low-pass filter, set at half the sample rate, removes the upper and lower images while keeping the wanted audio.
With reference to A-D/D-A converters, what exactly is an 'alias'? How and when do aliases occur, and what causes them?

SOS Forum Post

Technical Editor Hugh Robjohns replies: An alias occurs when a signal above half the sample rate is allowed into, or created within, a digital system. It's the anti-aliasing filter's job to limit the frequency range of the analogue signal prior to A-D conversion, so that the maximum frequency does not exceed half the sampling rate — the so-called Nyquist limit.

Figure 2: When the 10kHz signal overloads the A-D converter, the resulting third harmonic at 30kHz creates an alias at 18kHz which will be allowed through by the low-pass filter.

Aliasing can occur either because the anti-alias filter in the A-D converter (or in a sample-rate converter) isn't very good, or because the system has been overloaded. The latter case is the most common source of aliasing, because overloads result in the generation of high-frequency harmonics within the digital system itself (and after the anti-aliasing filter).

The sampling process is a form of amplitude modulation in which the input signal frequencies are added to and subtracted from the sample-rate frequency. In radio terms, the sum products are called the upper sideband and the subtracted products are called the lower sideband. In digital circles they are just referred to as the 'images'.

These images play no part in the digital audio process — they are essentially just a side-effect of sampling — but they must be kept well above the wanted audio frequencies so that they can be removed easily without affecting the wanted audio signal. This is where all the trouble starts. The upper image isn't really a problem, but if the lower one is allowed too low, it will overlap the wanted audio band and create 'aliases' that cannot be removed.

Let's consider what occurs if we put a 10kHz sine-wave tone into a 48kHz sampled digital system. The sampling process will generate additional signal frequencies at 58kHz (48 + 10) and 38kHz (48 - 10). Both of these images are clearly far above half the sample rate (24kHz), so can be easily removed with a low-pass filter, which is the reconstruction filter on the output of the D-A converter, leaving the wanted audio (the 10kHz tone) perfectly intact. See Figure 1, above.
However, consider what happens if our 10kHz tone is cranked up too loud and overloads the A-D converter's quantising stage. If you clip a sine wave, you end up with something approximating a square wave, and the resulting distortion means that a chain of odd harmonics will be generated above the fundamental. So our original 10kHz sine wave has now acquired an unwanted series of strong harmonics at 30kHz, 50kHz and so on.

Note that these harmonics were generated in the overloaded quantiser, after the input anti-aliasing filter that was put there to stop anything above half the sample rate getting into the system. By overloading the converter, we have generated 'illegal' high-frequency signals inside the system itself, and, clearly, overloading the quantiser breaks the Nyquist rule of not allowing anything over half the sample rate into the system.

Considering just the third harmonic at 30kHz for the moment, the sampling modulation process means that this will 'mirror' around the sample rate just as before, generating additional signal frequencies at 78kHz (48 + 30) and 18kHz (48 - 30). The 18kHz product is clearly below half the sample rate, and so will be allowed through by the reconstruction filter. This is the 'alias'. We started with a 10kHz signal, and have ended up with both 10kHz and 18kHz (see Figure 2, above). Similarly, the 50kHz harmonic will produce a 2kHz frequency, resulting in another alias.
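The 'mirroring' arithmetic is simple enough to express as a function. Here is a sketch of the folding rule behind the examples above:

```python
def alias_frequency(f_hz, sample_rate=48000):
    """Fold a frequency back into the audible 0..Nyquist band.
    Images repeat at multiples of the sample rate, and anything in the
    upper half of each interval mirrors back down below Nyquist."""
    nyquist = sample_rate / 2
    f = f_hz % sample_rate       # reduce to one sample-rate interval
    if f > nyquist:
        f = sample_rate - f      # lower image folds back into the band
    return f

print(alias_frequency(10000))    # -> 10000 : in band, unchanged
print(alias_frequency(30000))    # -> 18000 : the third-harmonic alias
print(alias_frequency(50000))    # -> 2000  : the fifth-harmonic alias
```

Feeding in the clipped tone's harmonic series (10, 30, 50kHz...) produces 10, 18 and 2kHz, matching the fold-back described in the text.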

Note that, unlike an analogue system, in which the distortion products caused by overloads always follow a normal harmonic series, in a digital system aliasing results in the harmonic series being 'folded back' on itself to produce audible signals that are no longer harmonically related to the source.
In the simplistic example I've explained, we have ended up with aliases at 2kHz and 18kHz that have no obvious musical relationship to the 10kHz source. This is why overloading a digital system sounds so nasty in comparison to overloading an analogue system.
I hope this brief explanation helps to clear up the topic of aliasing for you.


Published January 2006



Tuesday, September 13, 2016

Q. Should I use my mixer's group outputs or its direct outs for recording?


By Mike Senior
Like other mixers, this Allen & Heath GL2400-424 offers both direct outs on channels and group outs. But which should be used and when?

I recently started teaching music technology in a college and was asked to rebuild one of the studios. It uses a 32-channel mixing desk, patchbay and Alesis HD24 hard disk recorder to record to, as well as outboard gear. The desk has eight group busses arranged in four stereo pairs. There are 24 mono group output sockets, three per group buss, so that group 1 goes to outputs 1, 9 and 17, group 2 goes to 2, 10 and 18, and so on. The way it was set up previously was that these 24 group outputs were normalled through the patchbay to the 24 inputs on the HD24. The students were being taught that the signal should come into the desk and then be routed through the relevant group to get to the HD24. For instance, if your mic is plugged into channel 3 and you want to go to track 5, you have to route it to group 5-6, pan it hard left and bring up the channel fader and group fader. However, I changed it so that the direct outs of the first 24 channels are normalled through to the 24 inputs of the HD24, which seems to make more sense. One of the lecturers is kicking up a fuss, so my question is: which practice is most common in professional studios?

Thom Corah

Reviews Editor Mike Senior replies: You're both right after a fashion, but I'm afraid that I think the lecturer is probably more right in this case, as you appear to be using a group desk, rather than an in-line one. Your approach has two main limitations. Firstly, you can only route channel 1 on the desk to channel 1 on the recorder. This is admittedly less of a limitation with a digital recorder, where you can swap tracks digitally, but it's still quicker to do this from the desk than from the recorder.
The second (and more serious) limitation is that you can't record a mix of several channels to the same track on the recorder. Although 24 tracks is quite a lot to work with, you might need to submix a number of microphones to, say, a stereo pair of tracks — for example, when layering up a string quartet a few times to make a composite string sound for a pop production. Another problem is that you can't use the mixer's EQ on the way to the recorder, as direct outputs are often taken from before the EQ circuitry. Also, you couldn't bounce down a group of tracks through the desk in this way without sending them all to a group first, and then patching from the group output to a further channel. So you'll have more flexibility if you do things the lecturer's way.
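The submix idea itself is simple to model: a group bus just sums whatever channels are routed to it, with each channel's pan control splitting its signal between the odd/even pair. A minimal sketch (the numbers and the function are hypothetical, for illustration only):

```python
def group_bus_mix(channels, routing, pans):
    """Sum channel samples onto a stereo group pair.

    channels: dict of channel number -> signal level
    routing:  set of channel numbers routed to this group pair
    pans:     dict of channel number -> pan position, 0.0 (hard L) to 1.0 (hard R)
    Returns (left_group, right_group) bus levels.
    """
    left = sum(channels[ch] * (1.0 - pans[ch]) for ch in routing)
    right = sum(channels[ch] * pans[ch] for ch in routing)
    return left, right

# Four mics submixed down to one stereo group pair (e.g. recorder tracks 5-6)
channels = {1: 0.5, 2: 0.4, 3: 0.5, 4: 0.3}
left, right = group_bus_mix(channels, {1, 2, 3, 4},
                            {1: 0.0, 2: 1.0, 3: 0.25, 4: 0.75})
print(left, right)
```

With direct outs normalled straight to the recorder, no such summing point exists: each channel can only reach its own track.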

One reason that you're not completely wrong is that you're implementing a kind of in-line methodology, treating the input stage up to the direct output as the input path and the rest of the channel as the monitor path. However, a group desk isn't really sufficiently well equipped to do this properly, most notably because there is no routing matrix between the input channels and the recorder inputs, as there would be on an SSL desk or similar. There's only one routing matrix per channel on a group desk, and that is situated after the channel fader. There's no real alternative, given the facilities, but to have separate channels for the input and monitor paths. In your case, as you have only 32 mixer channels, this means repatching for mixdown and monitoring purposes, I imagine, but I don't know all the details of your setup.

One situation where you can get away with using an in-line configuration on a group desk, exactly as you have, is where the recorder is actually a computer system. In this case, given the powerful processing facilities a computer offers, there's little advantage these days in pre-processing audio before it reaches the computer, so the lack of input EQ would not really be a problem. Also, there are comprehensive input routing and mixing facilities built into most modern audio-recording packages, so a hardware routing matrix would also be unnecessary. Perhaps you could justify your routing scheme as just being a little ahead of its time? You are simply anticipating the happy day when the college moves to a more flexible computerised system!
At the end of the day, which is the more appropriate arrangement depends on how many tracks you plan to record at one time. The group routing approach is more flexible when it comes to being able to do track bounces and partial submixes, and it is an important way of working to teach students. However, the down side is that you can record no more than eight (different) tracks at a time because there are only eight groups on your mixer.

Taking the direct outs approach allows up to 24 different tracks to be recorded at the same time and is ideal in areas designed purely for tracking, but you are then in for lots of replugging when it's time to mix. In any case, students should definitely be made aware of both techniques and configurations.

One possible solution that you could consider is using the patchbay to normal the group outputs to the recorder inputs, as before, but also send all of the desk's direct outs to patchbays on the row above, so that when you need to patch direct outs straight into recorder tracks it's just a case of plugging in some patch cords.



Published January 2006

Saturday, September 10, 2016

Q. Should I opt for active or passive monitors?


By Hugh Robjohns
One advantage of passive monitors is that the two components of your monitoring system — the speakers and the amp — can be upgraded separately, allowing a more gradual and less expensive progression to better-quality gear.

I'm interested in buying a pair of Alesis Monitor 1 MkIIs. Should I buy the passive versions and a good amp or just go for the active versions, which cost £100 more? I've always thought that active monitors are a bit of a gimmick and don't give a good sound, but I have now been told that they will give the best sound, as there is no crossover. Can you help me?

SOS Forum Post

Technical Editor Hugh Robjohns replies: In the middle and upper parts of the monitor market there is no doubt that active models offer significant advantages over passive designs, such as optimised power amps for each driver, optimised driver-protection circuitry, short and direct connections between amps and drivers, more complex and precise line-level crossovers, and so on.

However, at the budget end of the market these advantages are somewhat clouded by the inherent problems of achieving a low sale price. Most notably, many models are saddled with poor-quality power amps and power supplies that have been built down to a price rather than built up to a standard. Obviously, I'm painting pictures with a very broad brush here — there are some good and some less good designs out there — but the generalisations are true.

Active speakers come in two forms: true 'active' monitors, which have a separate amplifier for each driver, and 'powered' monitors, which have a single amplifier built into the cabinet, feeding both drivers via a normal passive crossover. In examples of the latter, you often get a better amplifier because you are only paying for one amp and not two (or three, in the case of a true active three-way monitor), while retaining the advantages of having an integrated package with very short internal speaker cables and so on. In the case of a well designed two-way speaker, a passive crossover can deliver superb results, and there is often little, if any, quality advantage from employing a complex line-level active crossover instead.
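To give a sense of what a passive crossover actually involves, here is a sketch of the component values for the simplest possible design — a first-order (6dB/octave) two-way crossover. The 8Ω driver impedance and 2.5kHz crossover frequency are assumed purely for illustration:

```python
import math

def first_order_crossover(f_c, impedance):
    """Component values for a first-order two-way passive crossover.

    A series capacitor high-passes the tweeter: C = 1 / (2*pi*f*R).
    A series inductor low-passes the woofer:    L = R / (2*pi*f).
    """
    c_farads = 1.0 / (2 * math.pi * f_c * impedance)
    l_henries = impedance / (2 * math.pi * f_c)
    return c_farads, l_henries

C, L = first_order_crossover(2500, 8)  # 2.5kHz into 8-ohm drivers (assumed)
print(f"tweeter cap: {C * 1e6:.2f} uF, woofer coil: {L * 1e3:.3f} mH")
```

Commercial designs are usually steeper (second-order or higher) and include impedance-compensation networks, but the principle is the same: a handful of well-chosen passive components splitting the band between the drivers.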

However, one facility that's easy to implement in active designs with line-level crossovers is user-adjustable EQ tweaks. These can be helpful sometimes in matching the speaker to the room, but in inexperienced hands such facilities can often be more trouble than they are worth because they can be mis-set... and usually are!
Perhaps a more relevant argument against budget active speakers — for me, at least — is the difficulty of upgrading. When the time comes to move up to a higher standard of monitoring, you will have to change both the speaker and its integrated amps. This inherently means that upgrading has to jump in large financial steps. On the other hand, if you go down the passive route you can upgrade the speaker separately from the amp, and vice versa. That approach allows you to improve the quality of the complete system in several easier and more cost-effective stages.
For example, you could start off with the best passive monitors you can afford and a reasonable amp (possibly second-hand — there are plenty on the market as people switch to the more 'fashionable' active monitors), then maybe upgrade the amp to something that will warrant a better speaker after a year or two, then upgrade the speaker, and so on.

For what it's worth, all my 'little speakers' are passive designs coupled to good quality amps, in some cases with the amps fixed to the back of the speaker to make a 'powered' unit. I have found this approach to provide the best-quality result whilst still being very cost-effective and flexible.


Published January 2006

Thursday, September 8, 2016

Q. Why does my Mackie Control make strange noises in Cubase?


By Sam Inglis
The Mackie Control works via MIDI, so keep an eye on the input assignments of your MIDI tracks.
I'm using a Mackie Control control surface with Cubase SX, and it works fine on audio tracks. However, whenever I select a MIDI track within Cubase, pressing buttons on the Mackie Control seems to trigger random MIDI notes, and using the other controls sometimes seems to make my synths go out of tune. What's going on?

Jeremy Carter

Features Editor Sam Inglis replies: Mackie Control and similar control surfaces communicate with Cubase via MIDI, and they use ordinary Note On and Continuous Controller messages to tell the computer that a button has been pressed or a fader moved — but not ones that will have any musical relevance to your song! Meanwhile, the default preference in Cubase SX is that whichever track is selected is automatically record-enabled, and all MIDI tracks default to accepting MIDI input from all connected sources. This means that if you have, say, a controller keyboard and a Mackie Control connected, Note On and Controller messages from both will be recorded on the selected track. Even when you're not recording, all MIDI messages from all sources will be routed to whatever synth is attached to the selected track.
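To see why a record-enabled track treats control-surface data as music, consider the raw bytes: a button press sent as a Note On is byte-for-byte the same kind of message as a key press on a keyboard. A sketch of decoding such messages (the specific note and controller numbers are hypothetical):

```python
def describe_midi(message):
    """Classify a 3-byte MIDI channel message by its status byte."""
    status, data1, data2 = message
    kind = status & 0xF0           # upper nibble: message type
    channel = (status & 0x0F) + 1  # lower nibble: MIDI channel (1-16)
    if kind == 0x90 and data2 > 0:
        return f"Note On  ch{channel}: note {data1}, velocity {data2}"
    if kind == 0xB0:
        return f"Control Change ch{channel}: controller {data1}, value {data2}"
    return f"Other (status 0x{status:02X})"

# A keyboard key and a control-surface button can emit byte-identical
# messages -- the sequencer has no way to tell the two sources apart.
print(describe_midi((0x90, 60, 100)))  # middle C from a keyboard
print(describe_midi((0x90, 16, 127)))  # a button press sent as Note On
```

This is why restricting each track's MIDI input, as described below, is the reliable fix: the filtering has to happen by source, not by message content.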

The solution to this is to change the input selection for each of your MIDI tracks. In Cubase's track Inspector, change the MIDI input from 'All' to a specific device that's not the Mackie Control, or to 'None' if you don't want the track to accept any MIDI input. If you're not planning on recording any MIDI, you could also achieve the same result by visiting Cubase's Preferences and deselecting the 'Record enable selected track' box.


Published January 2006

Wednesday, September 7, 2016

Q. Is there something wrong with my vintage spring reverb?

By Steve Howell



I have just bought an old spring reverb unit called the Great British Spring off eBay. It sounds great but if I feed any drums through it, or a percussive synth sound, it makes a weird 'ping' sound. I've had a look around the Internet and can't find much if any info on the thing. Can you help?



Rob Pope



SOS contributor Steve Howell replies: The Great British Spring was very popular in the '80s — I had one myself. One of the first affordable, decent-quality spring reverbs, it arrived at a time when Fostex were bringing fairly serious eight-track reel-to-reels to the market — it was a marriage made in heaven for the emerging home studio market. That said, the GBS was of serious enough quality to have been adopted in 'proper' studios as a cost-effective way to add extra reverb channels to supplement the main plate reverb.



Spring reverbs work by feeding the input signal, typically from an effects/aux send, to a transducer that 'excites' one or more of the springs. The signal travels down the spring and is picked up by another transducer at the other end, then sent to the output and on to the effects return. But it's not quite as simple as that, as the signal also 'bounces' back along the spring, colliding with other signals on their way down and causing complex pseudo-reflections. We perceive this as a reverb effect, and the more springs a unit has, the more diffuse the reverb effect is.
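The bouncing behaviour is much like a delay line with feedback: every round trip along the spring returns a quieter copy of the signal, and the overlapping copies blur into what we hear as reverb. A crude sketch of the decaying reflections (the feedback amount is an arbitrary illustrative value):

```python
def spring_echoes(impulse_level, round_trips, feedback=0.6):
    """Levels of successive reflections as a signal bounces along a spring.

    Each traversal loses energy, so every reflected copy is quieter
    than the last; heard together they form the reverb tail.
    """
    levels = []
    level = impulse_level
    for _ in range(round_trips):
        level *= feedback
        levels.append(round(level, 4))
    return levels

print(spring_echoes(1.0, 5))  # -> [0.6, 0.36, 0.216, 0.1296, 0.0778]
```

A real spring is dispersive (different frequencies travel at different speeds), which is what gives spring reverb its characteristic 'boing', but the decaying-echo skeleton is the same.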



The length of the spring dictates the reverb length and density — the GBS's springs are quite long and give a nice hall reverb effect. However, as with all spring reverbs, percussive attack transients can cause the springs to become temporarily unstable, generating all sorts of unpleasant audio artifacts, as you've found out.



The simplest solution is just to reduce the level of the signal going to the GBS. This will prevent the springs from getting over-stimulated and thus will eliminate (or at least reduce) the 'ping' effect. The down side to this is that to have the same level of reverb on the sound, you will have to increase the reverb return level which will, of course, increase the amount of noise — these electro-mechanical devices are not known for their noise-free operation! However, even that can be overcome. You see, the frequency range of the springs is limited so, by bringing the reverb returns back through channels that have EQ, you can roll off the top end to reduce the hiss coming from the unit without adversely affecting the reverb sound too drastically, if at all. In fact, given the simplicity of the GBS (and spring returns in general), using EQ can add a lot of creative as well as correctional possibilities.
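Rolling off the top end of the return is simply low-pass filtering, and a one-pole filter is enough to show the idea. A sketch, with the sample rate and cutoff chosen arbitrarily for illustration:

```python
import math

def one_pole_lowpass(samples, cutoff_hz, sample_rate):
    """Apply a one-pole low-pass filter: y[n] = y[n-1] + a * (x[n] - y[n-1])."""
    a = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    y, out = 0.0, []
    for x in samples:
        y += a * (x - y)  # output chases the input; fast wiggles get smoothed
        out.append(y)
    return out

# High-frequency 'hiss' (a fast alternating component) is attenuated,
# while the slow-moving body of the reverb passes through.
noisy = [0.5 + ((-1) ** n) * 0.1 for n in range(8)]
print([round(v, 3) for v in one_pole_lowpass(noisy, 5000, 44100)])
```

On a desk you'd do the same job with the return channel's HF shelf, but the principle — attenuate only the band where the spring contributes nothing but hiss — is identical.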



A more elaborate solution is to run the effects/aux send that is feeding the GBS via a limiter set pretty hard, so that the signal never reaches the level that will cause the springs to become unstable. Many more expensive spring reverbs had just such a facility built in.
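At its simplest, the peak-catching behaviour can be sketched in a couple of lines; strictly this is a clipper (a real limiter applies smoothed gain reduction with attack and release times), and the threshold value here is arbitrary:

```python
def hard_limit(samples, threshold):
    """Clamp each sample so its magnitude never exceeds the threshold."""
    return [max(-threshold, min(threshold, s)) for s in samples]

# The percussive transients (0.9, -0.95) are caught at the threshold;
# the body of the sound passes through untouched.
print(hard_limit([0.1, 0.9, 0.3, -0.95, 0.2], 0.5))
# -> [0.1, 0.5, 0.3, -0.5, 0.2]
```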


Published December 2005

Monday, September 5, 2016

Q. What factors affect the quality of a microphone capsule?

By Hugh Robjohns



I am curious to know more about the design and construction of capacitor mic capsules. For example, what is it about the capsule or the way it is mounted that dictates the polar pattern of the mic? If the capsule is sturdy and made from good-quality parts, what other factors come into play which affect its sound quality? Do capsule designs really differ that much from mic to mic?



Paul Curtis



Besides the capsule itself, the design and construction of the mic body and internal electronics also shape the sound of the mic.




Technical Editor Hugh Robjohns replies: A capacitor mic capsule is an extremely complex thing, and the very best are expensive and time-consuming to make.



There are two basic types of capsule, working according to two different principles — pressure-operated and velocity-operated (also known as pressure-gradient). The former is constructed a bit like a snare drum — the capsule is, in essence, a sealed box with a diaphragm stretched across one side. The diaphragm acts like a pressure sensor, comparing the pressure changes caused by passing sound waves with the static internal pressure inside the box. The result is an omni-directional polar response — the direction of the sound waves doesn't matter; the diaphragm is sensitive only to the fact that they pass by.



The other way of doing things is to suspend the diaphragm in free space so that sound waves can get to both sides. In this case, the diaphragm moves (hence 'velocity') as a result of the pressure difference (pressure gradient) between the two sides. This arrangement gives a figure-of-eight response — the capsule is sensitive to sounds from front and back, but insensitive to sounds from the sides.



Often, it is more useful to have a mic that is sensitive to frontal sounds but rejects rearward ones — the familiar cardioid polar pattern. A cardioid pickup pattern is produced by combining equal proportions of pressure operation and pressure-gradient operation, and the earliest cardioid mics actually did have both an omni and figure-of-eight capsule side by side in the same box, with their outputs summed together before reaching the output terminals.
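This combination can be written as a simple polar equation: an omni responds equally in all directions (r = 1), a figure-of-eight as cos θ, and summing them in equal parts gives the cardioid. A quick numerical check of the idea (the function is illustrative, not from any mic-design reference):

```python
import math

def polar_response(theta_deg, omni=0.5, figure8=0.5):
    """Sensitivity at angle theta for a mix of omni and figure-8 components.

    omni=1,   figure8=0   -> omnidirectional (equal in all directions)
    omni=0,   figure8=1   -> figure-of-eight (nulls at the sides)
    omni=0.5, figure8=0.5 -> cardioid (full at front, null at rear)
    """
    return omni + figure8 * math.cos(math.radians(theta_deg))

for angle in (0, 90, 180):
    print(f"cardioid at {angle:3d} deg: {polar_response(angle):.2f}")
```

The front (0°) gives full sensitivity, the sides (90°) half, and the rear (180°) a complete null — exactly the familiar heart-shaped cardioid plot. Varying the mix of the two components yields the intermediate patterns (sub-cardioid, hypercardioid and so on) offered by multi-pattern mics.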



As Paul White discovered when he visited the Rode Microphones factory (see SOS August 2005), the utmost precision is required for drilling holes in a cardioid mic's backplate.


These days, most cardioids are 'phase shift' or 'labyrinth' designs which are constructed with a single diaphragm, like a pressure-operated mic (the snare drum), but with special convoluted passageways in the rear plate which allow sound to find its way through to the inside of the diaphragm after a time delay. The way this works is rather less obvious than the two prime capsule designs, and would take more space to explain than I have available here, but you can learn more about the subject by reading this article from SOS September 2000: www.soundonsound.com/sos/sep00/articles/direction.htm.



In terms of construction, there are literally dozens of different parameters to consider. There's the material the diaphragm is made from and its shape, thickness and tension, there's the spacing between the diaphragm and the back plate, the damping arrangement, the isolation dielectrics, the polarising voltage and so on and so on.



In the case of a cardioid capsule, there is also the complex arrangement of the rear chamber labyrinth to consider, and how that affects the polar pattern and the linearity of the capsule's off-axis frequency response. Entire books have been written on this subject alone!



Then, once the capsule has been designed and built, it has to be mounted in a mic body, the size and shape of which (along with the grille) affects the response of the capsule. And then there is the impedance converter circuitry, the powering circuitry and the output circuitry to consider, all of which affect the sound of the mic further.



This is why it is relatively easy for manufacturers in the Far East to reverse-engineer established mics and build copies very cheaply. But it is extremely hard for them to design new models from the ground up because the real science involved is known by a relatively small group of people.    


 
Published December 2005

Friday, September 2, 2016

Q. What makes some interfaces more expensive than others?

By Martin Walker




When it comes to computer audio interfaces, what is it that we are really paying for and how does the price relate to the quality of the A-D/D-A converters? Devices like the MOTU Traveler and the RME Fireface 800 cost more than, for example, the Focusrite Saffire or Digidesign M Box 2, so what does the extra money get you? When I look at the A-D/D-A specifications (sample rate, dynamic range and so on) of interfaces which differ quite a lot in price, they often seem very similar. So do more expensive units sound better?



Focusrite Saffire audio interface.



SOS Forum Post



PC music specialist Martin Walker replies: When it comes to audio quality, there's a lot more to computer audio interfaces than the choice of A-D/D-A converters — having a low-jitter clock is vital if the sound is to remain 'focused', and the design of the analogue support circuitry (the input preamps and output stages) also modifies the final sound to a lesser extent, including the choice of op-amps, some of the capacitors, the power-regulator design... the list goes on!



Many manufacturers start the design of a new audio interface by establishing a rough feature list along with a likely price point, and then the engineers have a complex juggling act to perform to meet this brief. Entering the equation are the quality and price of the converters, the quality of the analogue circuitry (particularly the mic preamps, if there are any), the quality of digital circuitry, plus the controls, connectors, casework and so on. However, when it comes to the converters, many companies tend to choose exactly the same components from one of a handful of manufacturers like AKM Semiconductor, Cirrus Logic and Burr-Brown.



The converters may only end up contributing a tiny part of the overall build cost, but their specifications often become an important part of the marketing process, particularly when new features like 192kHz support are available (though in the real world I still regard this as a red herring for most recording musicians). Some audio interface manufacturers also quote specifications for the converter chip alone, which can be misleading, since once all the support circuitry is added this inevitably compromises overall performance to some extent. Others quote real-world performance for the entire interface, which is far more helpful.

The Focusrite Saffire and the MOTU Traveler are both 24-bit/192kHz Firewire interfaces, so why does one cost twice as much as the other?




With many audio interfaces you are predominantly paying for the array of features on offer, so an eight-in/eight-out interface will cost a lot more than a stereo one simply because there's nearly four times as much circuitry, socketry and controls. You will also pay more for additional features such as mic preamps, built-in limiting, word clock I/O and so on, which is why I always stress the importance of choosing the interface that best suits your needs. A £1000 interface with loads of features may not benefit you if you really only need one with basic stereo in/out capability that could give you similar audio quality for half the price or less. On the other hand, if two interfaces with similar features and I/O are at wildly different prices, the more expensive one is almost bound to offer better audio quality, although whether or not you'll really benefit from it depends to some extent on the rest of your gear.    


Published December 2005