Welcome to No Limit Sound Productions

Our services include Sound Engineering, Audio Post-Production, System Upgrades and Equipment Consulting.
Our mission is to provide excellent quality and service to our customers. We offer customised service.

Tuesday, September 27, 2016

Q. How can I clear my head?

By Hugh Robjohns

I have an odd, but I'm sure not uncommon, problem that I hope your experienced staff can help with. This time of year the outside world is an especially ghastly, germ-ridden place. During a rare occasion out of the studio last week, I managed to catch myself a cold. This would not normally be a problem, only I had some very important work to complete and mix by the end of the week. So, replacing the biscuit tin with a box of tissues and a mug of Lemsip, I soldiered on. However, all my studious investment in fine hardware couldn't make up for the fact that with blocked sinuses I felt like I was mixing with a motorcycle helmet on! So, what I need to know is, are there any recommended products or remedies (apart from hiring another mix engineer!) to use in this situation? I have tried sinus sprays but they only work for an hour or so and I'm slightly worried that over-use will affect my hearing in the long term. My doctor doesn't really understand the issues either, which doesn't help. We're only as good as our ears, right? Your help on this issue would be more useful to me right now than any advice on speaker placement, room treatment or the latest and greatest convolution reverb — I can't hear it anyhow!

Simon West

Technical Editor Hugh Robjohns replies: This is not an unusual problem and I completely sympathise. I tend to suffer from this problem quite badly myself. All I can suggest is to find a good decongestant that works for you. I find Olbas Oil safe and useful — pour a few drops into a bowl of hot water and breathe the vapours for a while. However, the congestion will inevitably come back.
There are lots of pharmaceutical decongestants available, but many are combined with other drugs (paracetamol, for example) which limits how often they can be taken, and some have side-effects that may not agree with you. Try talking to your local chemist for specific product advice — I generally find that approach more helpful than talking to the doctor in situations like this.

But I'm afraid the bottom line is that your ears will not work properly until the cold has passed and the sinuses have cleared.

Published February 2006

Saturday, September 24, 2016

Q. Can you explain the origins of wavetable, S&S and vector synthesis?

By Steve Howell
The PPG Wave wavetable synthesizer. This one belongs to synth programmer, engineer and producer Nigel Bates. 


I keep reading about different types of synthesis like 'wavetable', 'S&S' and 'vector' but I don't know what they are. I've looked around the net for information but either the descriptions are very simplistic or they're too technical. Could someone at SOS please explain the origins of these techniques?

Michael Cullen

SOS contributor Steve Howell replies: 'Wavetable synthesis' is actually quite easy to understand. In the early days of synthesis, (analogue) oscillators provided a limited range of waveforms, such as sine, triangle, sawtooth and square/pulse, normally selected from a rotary switch. This gave the user a surprisingly wide range of basic sounds to play with, especially when different waveforms were combined in various ways.
However, in the late '70s, Wolfgang Palm [of PPG] used 'wavetable' digital oscillators in his innovative PPG Wave synths. Instead of having just three or four waveforms, a wavetable oscillator can have many more — say, 64 — because they are digitally created and stored in a 'look-up table' that is accessed by a front-panel control. As you move the control, so you hear the different waveforms as they are read out of the table — the control is effectively a 64-way switch. If nothing else, this gives a wide palette of waveforms to use as the basis of your sounds. However, the waveform-selection control is not a physical switch as such, but a continuously variable control implemented in software. The advantage this has (apart from the 60 extra waveforms!) is that it is also possible to use LFOs or envelopes or MIDI controllers to step through these waveforms.
Now, if the waveforms are sensibly arranged, we can begin to create harmonic movement in the sound. For example, if Wave 1 is a sine wave and Wave 64 is a bright square wave with Waves 2 to 63 gradually getting brighter as extra harmonics are added in each step of the wavetable, as you move through the wavetable, you approach something not unlike a traditional filter sweep. However, one disadvantage to this (but something that characterised the PPG) is that the sweep will not be smooth — the waveforms will step in audible increments.
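To make the idea concrete, the stepped table described above can be sketched in a few lines of Python. This is purely an illustration under assumed parameters (64 waves of 256 samples, an odd-harmonic brightness progression), not the PPG's actual implementation:

```python
import math

TABLE_SIZE = 64     # waves per table
WAVE_LEN = 256      # samples per single-cycle wave

def build_wavetable():
    """64 single-cycle waves: wave n contains odd harmonics up to 2n+1,
    so the table sweeps from a pure sine towards a bright square wave."""
    table = []
    for n in range(TABLE_SIZE):
        wave = []
        for s in range(WAVE_LEN):
            phase = 2 * math.pi * s / WAVE_LEN
            # Sum odd harmonics at 1/k amplitude (the square-wave series)
            value = sum(math.sin(k * phase) / k
                        for k in range(1, 2 * n + 2, 2))
            wave.append(value)
        table.append(wave)
    return table

def read_wave(table, position):
    """position runs 0.0-1.0; it snaps to a whole table entry, which is
    why sweeping the control steps audibly between waves."""
    index = min(int(position * TABLE_SIZE), TABLE_SIZE - 1)
    return table[index]

table = build_wavetable()
print(len(table), "waves of", len(table[0]), "samples each")
```

Sweeping `position` from 0 to 1 under an LFO or envelope reproduces the brightening 'pseudo filter sweep', complete with the audible steps that characterised the PPG.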
Each oscillator in the PPG, however, didn't just have one wavetable — there were 32 wavetables, each with 64 waveforms! Many were simple harmonic progressions as described above; others were rudimentary attempts at multisampling, whilst others attempted to emulate oscillator sync sweeps and PWM (pulse-width modulation) effects. Because the wavetable sweeping was so audibly stepped, the latter two weren't entirely convincing emulations, though they had a character all their own nonetheless.
Where things begin to get interesting, however, is when the waveforms in the wavetable are disparate and harmonically unrelated, as the tonal changes become random and unpredictable. For many, this feature of wavetable synthesis was unusable, but some creative individuals like Thomas Dolby exploited it to create unique and distinctive sounds, as can be heard on his 1982 album The Golden Age Of Wireless.
The PPG had something of a trump up its sleeve, however — totally analogue filters! Using these, it was possible to smooth out the wavetable sweeps. Another endearing quality of the PPG was its low-resolution digital circuitry, which exhibited aliasing at extreme frequencies that added a certain 'gritty' quality to the sound. Later manifestations of the PPG (in Waldorf products) were of a higher quality and offered smooth wavetable sweeping. But while they sounded better, they lacked that (arguably) essential 'lo-fi' character.
Other synths have employed wavetable synthesis in one guise or another since then and there are several software synths available today which incorporate wavetable synthesis capabilities.
The massively influential Korg M1 really put S&S synthesis on the map.
'S&S' is an abbreviation for 'samples and synthesis' and refers to the new breed of synth that appeared with the introduction of the seminal Roland D50 in 1987. Whereas synths prior to this used analogue or digital oscillators to create sound, samplers were now in the ascendant, with the introduction of affordable sampling products such as the Ensoniq Mirage, the E-mu Emax and the Akai S900. These allowed almost any sound to be sampled and mangled, but they had one major inconvenience: the samples were stored on floppy disk and took time to load. Roland could see that by using short samples as the basic sound sources, and storing them in ROM for instant recall, they could make the same type of sound as a sampler but with no tedious load times. However, they also retained many of their previous synthesizers' functions, such as multi-mode filters, envelopes, LFOs and so on. To all intents and purposes, the D50 'felt' like a synth but sounded like a sampler. Furthermore, to smooth out any inadequacies in the very short samples, such as clicky and/or obvious loops, the D50 also had chorus and reverb which 'smudged' these artifacts quite effectively.
And so a legend — and a new synthesis method — was born! Roland called it 'LA (linear arithmetic) synthesis'. In truth, it was a simple layering method where up to four samples could be stacked to create more complex sounds. Because of memory constraints (ROM/RAM was very expensive at the time), Roland had to use very short samples, and there were two categories of sample on the D50 — short, unlooped samples (such as flute 'chiff' or guitar 'pluck') and short sustaining loops. By combining and layering, for example, a flute 'chiff' with a sustained flute loop sample, you could (in theory) create a realistic flute sound. In practice, it didn't quite work out like that, but this layering technique also gave the instrument a new palette of sounds to work with and it was possible to layer, say, the attack of a piano with the sustain of a violin. With the wealth of synthesis functions available to process the samples, this allowed the user to create interesting hybrid sounds.
Korg took this concept to a new level a year or so later when they released their M1, another legend in modern music technology. Although similar concepts were involved, the M1 used longer, more complete samples which, in conjunction with typical synth facilities, blurred the distinction between synth and sampler and arguably heralded the beginning of the slow, gradual demise of the hardware sampler! However, as well as advancing S&S, they also added a very functional multitrack sequencer and good quality multi-effects so that (maybe for the first time) it was possible to create complete works on a single, relatively affordable keyboard. And so the 'S&S workstation' was born. I think it's fair to say that most modern synths owe something to the Korg M1 in one or another aspect of their design.
The ill-fated Sequential Circuits Prophet VS introduced vector synthesis to the world.

These days, many synths and keyboards routinely use these same basic principles, but memory is now far more affordable and so it is possible to have many more (and considerably more detailed) multisamples in the onboard ROM. Whereas early S&S synths boasted around 4MB of onboard ROM, figures of 60MB or more are bandied about today. That said, many of the same techniques used for optimising samples and squeezing as many into ROM as possible are still used today.
'Vector synthesis' is a slightly different (but related) technique. First pioneered by Dave Smith in his Prophet VS, vector synthesis typically uses four oscillators which the user can 'morph' smoothly between, using real-time controllers such as a joystick or automated controllers such as LFOs and/or envelope generators. As the joystick is moved, so the balance of the four oscillators changes and, depending on the nature of the source waveforms, many interesting, evolving sounds can be created. But the Prophet VS was ill-fated — Sequential Circuits were in financial trouble and the company soon went to the wall. However, the concept lived on in the Korg Wavestation, which was a joint venture between a post-Sequential Smith and Korg. The Wavestation had a significant advantage over the VS in that it used multisampled waveforms, allowing more complex building blocks to be used — in many ways, it was a hybrid S&S and vector synth. As well as extensive synth facilities (filters, multi-stage envelopes and so on), it also had comprehensive multi-effects and other facilities (not least of which was 'Wave Sequencing') that made the Wavestation a programmer's dream, and a casual user's nightmare! Indeed, they are still a staple component in many players' keyboard rigs today. The Wavestation was discontinued many years ago (though it's been resurrected in Korg's Legacy Collection software), but vector synthesis lives on in Dave Smith's Evolver range of keyboards.
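The four-corner morph itself can be sketched as a simple bilinear mix. This is an illustrative approximation only (the Prophet VS's joystick plane was actually a diamond, and its mixing law differed in detail), but the principle of weighting four sources from one two-axis controller is the same:

```python
def vector_mix(a, b, c, d, x, y):
    """Bilinear 'joystick' morph between four oscillator samples.
    x and y run 0.0-1.0; (0,0) gives all A, (1,0) all B,
    (0,1) all C and (1,1) all D."""
    return (a * (1 - x) * (1 - y) +
            b * x * (1 - y) +
            c * (1 - x) * y +
            d * x * y)

# The centre position blends all four sources equally:
print(vector_mix(1.0, 2.0, 3.0, 4.0, 0.5, 0.5))
```

Driving `x` and `y` from LFOs or envelope generators instead of a physical joystick gives exactly the kind of slowly evolving timbres the VS and Wavestation were famous for.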
If you're looking for further information on synthesis out there on the web, I can suggest two sections of the Sound On Sound web site worth investigating. Paul Wiffen's 12-part Synth School series, which appeared in the magazine between June 1997 and October 1998, is a good introduction to the basics of synthesis in its various forms. If you enter "synth school" into the search engine at www.soundonsound.com, you'll find it. Judging by your comments, you may find some of Gordon Reid's long-running Synth Secrets series too technical, but it's nevertheless worth a mention as it covered so much ground in its five-year tenure. To make this vast amount of material a little easier to navigate, we have created a special page with links to all of the Synth Secrets articles: www.soundonsound.com/sos/allsynthsecrets.htm.

Published February 2006

Thursday, September 22, 2016

Q. Which digital multitracker is the right one for me?

By Tom Flint
The Zoom MRS1608's dedicated drum pads set it apart from other similarly priced multitrackers. 

I read Tom Flint's piece on the Zoom MRS1608 multitracker and think it may be the right machine for me. I still use a Roland TR707 drum machine which allows you to step write and tap write. The sounds, of course, are ancient. I write simple country songs, mostly backed by drums and guitars. I think the Zoom's drum machine would be great for what I do. I would think the guitar effects would also be pretty good on this machine. I currently own the Tascam 2488. I think my recordings sound really good on this machine, but I don't like the guitar effects much and find them a little difficult to use. I don't even use the drum machine and I don't use MIDI or edit much at all. Based on what I have told you, do you think I would be pleased if I unloaded the Tascam and bought the Zoom? I would appreciate your opinion on this subject and thank you in advance.

Robert Tambuscio

SOS contributor Tom Flint replies: Before they entered the multitracker market, Zoom were busy gaining a name for themselves producing drum machines and guitar effects (amongst other things), so you can expect a reasonable level of quality and competence in both these areas. If I remember correctly, the MRS1608's internal drum sounds are good and varied — if country music is your thing then the chances are that the sounds in the MRS will serve you better than the TR707! The MRS has 50 drum kits which should certainly include a few that are suitable, and it is possible to take the best sounds from various kits and create a custom kit yourself. If you're not satisfied with the onboard sounds, the Pad Sampler facility allows AIFF and WAV samples to be loaded from CD and used as alternative drum sounds. Alternatively, you could use the Phrase Loop sampler to put together drum and percussion loops taken directly from sample libraries, or choose from among the MRS's 475 preset drum and bass patterns.

The sequencer itself offers both real-time and step-based recording, so it should allow you to program drums in a similar way to the TR707, although I believe the Zoom's grid has a finer resolution than the 707 and there are more time-signature options. It's also worth noting that some of the Zoom's programming facilities will be familiar to TR users. For example, just as the 707 has a set of faders for setting sound levels for each kit component, the MRS allows the channel faders to be used for adjusting its own drum samples. The Zoom multitracker also benefits from having 12 touch-sensitive pads for triggering drums.

Tascam have a long history of producing multitrack recorders, but they're not known as makers of effects or drum machines so it's not surprising to hear that the 2488 hasn't quite lived up to your expectations in these areas. It does have an internal GM sound module with many useful drum and instrument sounds, and, like the MRS, it can import and play Standard MIDI files, but it doesn't have anything approaching a pad bank, and there are no sampling facilities. So as far as the drum machines are concerned, the MRS is much better equipped.

That said, a decent drum section shouldn't be your only consideration. Before you offload the 2488, think carefully about whether there are any recording, editing or mixing facilities that you regularly use, and check they are also available on the Zoom. The Zoom has a rather more basic display which may hinder its usability a little. That has to be something to consider, given that you say you find some of the 2488's features difficult to use. Without doing some objective side-by-side testing it's impossible to say whether the Zoom sounds as good as the Tascam or not, but I can say that I didn't think the Zoom was particularly weak in that department, and I suspect there's little to choose between them.

Nevertheless, I'd advise anyone using a budget multitracker to use a good-quality external preamp for any important lead work if at all possible, simply because the onboard preamps are not going to be of the highest quality. What's more, if your preamp has a decent A-D converter with an S/PDIF output built in, it would be a good idea to bypass the multitracker's converters by using its S/PDIF input, and clocking the multitracker to the preamp's digital clock.

Normally I'd probably suggest upgrading to a better machine when trading in your old multitracker for a new one, but there aren't really any high-end products which go in for drum machines and sequencers in quite the same way as the MRS1608, so I'm not sure you have much choice if you really want these kinds of features. The other option would be to hold onto the Tascam 2488 and buy a more modern drum machine — Alesis, Boss and Zoom all make self-contained drum machines which cost less than £300 — and slave it to the 2488 via MIDI.

Published February 2006

Wednesday, September 21, 2016

Q. What determines the CPU reading in Cubase SX?

By Martin Walker

I remain baffled by the CPU load in Cubase SX 2 (as shown in the VST Performance indicator). I'm particularly curious to know why in my larger projects the indicator shows a constant load (typically 80 percent or more) even when I'm not playing anything back! What exactly is the CPU doing when nothing is happening in the project? My projects typically have 15 to 25 audio tracks, five to 10 virtual-instrument tracks and a couple of MIDI tracks, with five or so group channels and maybe a couple of FX Channels. Some of the channels have an insert effect or two, typically a compressor or gate, and there's a couple of aux channels for send effects.

SOS Forum Post

PC music specialist Martin Walker replies: When Cubase isn't playing back, the CPU overhead is largely down to the plug-ins, all of which remain 'active' at all times. This is largely to ensure that reverb tails and the like continue smoothly to their end even once you stop the song, and it lets you treat incoming 'live' instruments and vocals with plug-ins before you actually start the recording process. However, this isn't the only design approach — for instance, Magix's Samplitude allows plug-ins to be allocated to individual parts in each track, which is not only liberating for the composer, but also means that they consume processing power only while that part is playing.

Freezing tracks, adjusting the buffer size and using single send effects instead of multiple inserts can all help reduce CPU overhead. 


Of all the plug-ins you'll be using frequently, reverbs are often the most CPU-intensive, so make sure you set these up in dedicated FX Channels and use the channel sends to add varying amounts of the same reverb to different tracks, rather than using them as individual insert effects on each track. You can do the same with delays and any other effects that you 'add' to the original sound — only those effects like EQ and distortion where the whole sound is treated need to be individually inserted into channels.
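In code terms, the saving works like this: each track's send is summed onto a single bus, one reverb instance processes that bus, and the wet result is added to the dry mix, so you pay for one reverb however many tracks use it. Here is a toy Python sketch of that routing (the 'reverb' is just a stand-in that scales the signal; a real one would be far heavier, which is the whole point):

```python
def mix_with_send_reverb(tracks, send_levels, reverb):
    """Mix N tracks with ONE shared reverb on an FX channel.
    tracks: list of equal-length sample lists; send_levels: per-track
    send amount. The reverb function runs once, on the summed bus."""
    n = len(tracks[0])
    dry = [sum(t[i] for t in tracks) for i in range(n)]
    bus = [sum(lvl * t[i] for lvl, t in zip(send_levels, tracks))
           for i in range(n)]
    wet = reverb(bus)                      # the only reverb call
    return [d + w for d, w in zip(dry, wet)]

# Stand-in 'reverb' for illustration: returns the bus at half level
toy_reverb = lambda buf: [0.5 * s for s in buf]
out = mix_with_send_reverb([[1.0, 0.0], [0.0, 1.0]], [1.0, 0.5], toy_reverb)
print(out)
```

Using the same reverb as an insert on each track would mean one `reverb()` call per track; the send/bus arrangement keeps it at one call regardless of track count, which is exactly why it's so much kinder to the CPU.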

The other main CPU drain for any sequencer when a song isn't playing back comes from software synths that impose a fixed overhead depending on the chosen number of voices. These include synth designer packages such as NI's Reaktor and AAS's Tassman, where the free-form modular approach makes it very difficult to determine when each voice has finished sounding. Fixed-architecture software synths, on the other hand, are more likely to use what is called dynamic voice allocation. This imposes only a tiny fixed overhead for the synth's engine, plus some extra processing for each note, but only while that note is actually sounding.

If you use a synth design package like Reaktor or Tassman, try reducing the maximum polyphony until you start to hear 'note-robbing' — notes dropping out because of insufficient polyphony — and then increase it to the next highest setting. This can sometimes drop the CPU demands considerably. Many software synths with dynamic voice allocation can also benefit from this tweak if they offer a similar voice 'capping' preference.
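The voice-capping idea can be illustrated with a toy allocator in Python. Real synths are more sophisticated about which voice they steal (notes in their release phase usually go first), but the principle is the same: the pool never grows beyond the cap, so neither does the worst-case CPU load.

```python
class VoicePool:
    """Toy dynamic voice allocator with a fixed polyphony cap.
    When every voice is busy, the oldest note is 'robbed' (stolen)."""
    def __init__(self, max_voices):
        self.max_voices = max_voices
        self.active = []          # notes in note-on order, oldest first

    def note_on(self, note):
        if len(self.active) >= self.max_voices:
            self.active.pop(0)    # steal the oldest voice
        self.active.append(note)

    def note_off(self, note):
        if note in self.active:
            self.active.remove(note)

pool = VoicePool(max_voices=4)
for n in [60, 64, 67, 72, 76]:    # five notes into four voices
    pool.note_on(n)
print(pool.active)                # note 60 has been robbed
```

Tuning `max_voices` is the software equivalent of the 'reduce polyphony until you hear note-robbing, then go one step higher' approach described above.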

Anyone who has selected a buffer size for their audio interface that results in very low latency will also notice a hike in the CPU meter even before the song starts, simply due to the number of interrupts occurring — at 12ms latency the soundcard buffers need to be filled just 83 times a second, but at 1.5ms this happens 667 times a second, so it's hardly surprising that the CPU ends up working harder. For proof, just lower your buffer size and watch the CPU meter rise — depending on your interface, the reading may more than double between 12ms and 1.5ms. You'll also notice a lot more 'flickering' of the meter at lower latencies. If you've finished the recording process and no longer need low latency for playing parts into Cubase, increase the latency to at least 12ms.
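The arithmetic behind those figures is simply one buffer refill per latency period:

```python
def buffer_fills_per_second(latency_ms):
    """How often the driver must refill the soundcard buffer:
    one fill per buffer period, i.e. 1 / latency."""
    return 1000.0 / latency_ms

print(round(buffer_fills_per_second(12)))    # 12ms latency
print(round(buffer_fills_per_second(1.5)))   # 1.5ms latency
```

1000 / 12 is roughly 83 fills a second, while 1000 / 1.5 is roughly 667, which is where the extra interrupt overhead comes from.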

Finally, if some of those audio or software synth tracks are finished, freeze them so that their plug-ins and voices no longer need to be calculated. Playing back frozen tracks will place some additional strain on your hard drive, but most musicians run out of processing power long before their hard drives start to struggle.

Published February 2006