Friday, May 31, 2019

Q. Can I use my drum pads to trigger drum machine sounds?

MIDI pads such as the Roland SPD6 are ideal for capturing a live, stick-led drum performance.

I currently own some Roland SPD6 MIDI drum pads, and I want to be able to record from them into some kind of drum machine, so that I can use the rhythms in a live situation later (I play guitar and sing in a trio with a guitarist and synth player). My local music shops tell me that things like the Zoom and Boss drum machines on the market today will not do this, as they say you cannot connect via MIDI and record the external MIDI information into them. Is there any other product that might be worth looking at?

Mark May

Reviews Editor Mike Senior replies: If you're happy with the kinds of sounds you can get from the average drum machine, you could consider getting one of Yamaha's little QY-series hardware sequencers — the most recent we reviewed was the QY100. These have MIDI recording (called MIDI sequencing) facilities built into them, along with a MIDI sound module to provide the sounds. You could plug your SPD6 pads into the MIDI input on one of those machines and record your performance in real time. Then you could re-record sections or edit note-by-note, if you wished. The QY100 is inexpensive, and very portable, too, so it would be good for gigs. You can check out the specs in our review (SOS February 2002) and by downloading the manual from the Yamaha web site's library. Check out the rest of the QY range, too, because each one offers a different balance of facilities.

If you're after more realistic sounds, I'd suggest something like an Akai MPC2000, which also contains a MIDI sequencer. However, instead of a generic MIDI sound module, you get a drum sampler, which allows you to use the most suitable sounds for your music. You could also look at the Yamaha RS7000 for this, and perhaps the Korg Triton or Trinity keyboard workstations with the sampling option (though these would be rather more expensive). Each provides these kinds of sampling and sequencing facilities in a slightly different format.

Q. Are red indicators in audio software significant?

What is the best level, or what should be the highest indicator point in mixing, using computer software such as Nuendo or Samplitude? At times the mix will sound low when the LEDs are hitting red. Am I using my compressors wrongly? Also, how can you tell that your song is really going to have the desired punch in a club setting? This has given me headaches with clients complaining that their songs are not 'punching' hard enough. My overall compressor is the Timeworks mastering compressor.

Chris Musyoka

Features Editor Sam Inglis replies: Red lights in any digital system, including computer software, indicate digital overloads and should be avoided. When a signal exceeds the maximum level available in a digital system, the audible result is digital distortion, which is not at all like the 'warm' sound of an overloaded analogue tape recorder or mixer channel, and is usually very unpleasant.
Having said that, there are sounds (such as snare drums) where brief overloads are not very noticeable, and the mixing engines in many software packages seem designed to alleviate some of the audible consequences of overloading. As usual, if it sounds good, don't worry about how it looks! It should, however, be possible to get a punchy mix without continually overloading channel busses — if you find all your channels in Nuendo are showing red lights, pull the faders down on all the channels and boost the Master channel instead.
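
To hear what this kind of overload actually sounds like, here's a minimal Python sketch (NumPy only; the 1kHz test tone and 6dB of extra gain are purely illustrative assumptions) that hard-clips a signal at digital full scale:

import numpy as np

sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 1000 * t)      # 1kHz test tone, peaking at 0dBFS

hot = tone * 2.0                         # pushed roughly 6dB 'into the red'
clipped = np.clip(hot, -1.0, 1.0)        # a digital system simply truncates at full scale

# The flattened waveform tops add strong odd harmonics (3kHz, 5kHz, 7kHz...),
# which is the hard-edged distortion described above, quite unlike tape saturation.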

The usual way of achieving 'punch' is to process the mix buss with a stereo compressor and/or limiter: multi-band models generally achieve more transparent results than full-band ones. However, if your mix isn't punchy to start with, no amount of mix buss processing is really going to help that much, so you probably need to address some more fundamental issues. Is your monitoring system properly set up? Do your sounds sit well together in the mix? For instance, do your bass and kick drum sounds get muddled because they occupy the same frequency range, or do they complement each other?

Reviews Editor Mike Senior adds: The best advice I can give is to ask your clients to bring in tracks which they consider to have the required amount of 'punch' and then to adjust your compressor and EQ settings to match these reference tracks. Set up your monitoring system so that you can quickly switch between your mix and the output of your mix buss — it's vital that you A/B very quickly, as this stops your ear compensating for any deficiencies in your mix.

If you need advice on compression and EQ, I'd suggest looking up the Advanced Compression workshops (in the December 2000 and January 2001 issues of SOS) and the two EQ workshops in July and August 2001. You might also want to look up the Advanced Gating workshops (April 2000 and May 2001), and the multi-band compression workshop (August 2002). This is only the tip of the iceberg, though, given that there are eight years or so of back issues available free to read on-line, many of them workshops. Mix processing is an extremely difficult thing to do effectively, so don't be afraid to mix things several times in order to experiment.

Q. What is 'zero level'?

Could you give me a definition of 'zero level', or explain what it is? I am a University student in my first year, studying sound recording, and I can't seem to find a definition for this anywhere! Any reply will be greatly appreciated.

SOS Forum post

Technical Editor Hugh Robjohns replies: Virtually every book published in the last 20 years that discusses recording techniques explains zero level in some depth (I have a bookcase full of them here), so the best thing to do would be to investigate your college library and do the homework for yourself. However, because there may be some confusion out there, I'll try and provide a simple explanation.
Audio signals are measured in decibels (dB), as you no doubt know, and zero level is slang shorthand for the 0dB point — ie. where the reference and measured signal have the same value. However, that reference changes depending on what you are talking about.
  • 0dBA is the threshold of hearing — the quietest sound an average person can hear.
  • 0dBu is a common reference level for line signals, and is usually what the term 'zero level' refers to. It equates to a signal voltage of 0.775V rms. Another very common reference level for line-level signals is +4dBu (a signal voltage of 1.228V rms) and this is also usually (but certainly not always) the reference point for 0VU, the zero mark on a traditional VU meter.
  • 0dBm is irrelevant to audio these days, but was the correct term when everything used matched 600Ω impedance terminations. It defines the voltage when one milliwatt of power is dissipated in 600 Ohms... which just happens to be 0.775V rms.
  • 0dBV is rarely used in audio circles. The reference level for semi-pro gear is actually -10dBV, a signal voltage of 0.316V rms.
  • 0dBFS is the highest possible value of a digital signal. Unlike analogue systems, which all encompass a certain amount of usable headroom above the 'zero level', digital systems stop at precisely 0dBFS, so a working headroom has to be built into the system alignment. Depending on the standard used, this is somewhere between 12 and 24dB, with 18dB being a common European standard. Consequently, 0dBu is often said to equate to -18dBFS.
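
As a quick worked example of these references, here's a short Python sketch (the reference voltages are those quoted above, and the 18dB headroom figure is the common European alignment mentioned):

import math

def dbu_to_volts(dbu):
    return 0.775 * 10 ** (dbu / 20)   # 0dBu = 0.775V rms

def dbv_to_volts(dbv):
    return 1.0 * 10 ** (dbv / 20)     # 0dBV = 1V rms

print(dbu_to_volts(0))      # 0.775V  -> 'zero level' for line signals
print(dbu_to_volts(4))      # ~1.228V -> the +4dBu professional reference
print(dbv_to_volts(-10))    # ~0.316V -> the -10dBV semi-pro reference

# With an 18dB alignment headroom, a steady 0dBu tone would be recorded
# at -18dBFS in the digital domain, as noted above.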

Q. Can you advise me on vocal recording and microphones?

I currently have a small home studio and am making a few different styles of music but mainly house and trance. I've used samples for remixing tracks but never done any actual vocal recording myself. Not so long ago I was asked by the lead singer of a group if I would help them do some recording. I really am interested in doing it, but the problem I'm hitting is acoustic treatment for the recording booth. It's so expensive, and reading the Studio SOS article in the October issue this year I see Paul White using a duvet for dampening down room reflections. Would it be good enough to hang duvets on the walls of the vocal booth? Also, I'm not sure what mic I will be buying, as I don't know a great deal about them. I'll be on a budget of about £4-500. The mic will be running straight into the mixer itself, which hopefully will be the Mackie 32-channel. I have been told that it would be fine to run the mic straight into the mixer, as the Mackie's quality is very good.

Craig Young
These days, even quality valve mics like this Rode NTK are very affordable.

Reviews Editor Mike Senior replies: Regarding your acoustics, a couple of duvets should do the trick here. If not, you can always upgrade your treatment at a later date with some acoustic foam. To address your second question, at that price for a mic you've got an awful lot of choice. You should only need a cardioid (unidirectional) polar pattern, and you ought to be able to do without pad or filter switches — vocalists aren't likely to get that loud and you can filter low end on the mixer channel.

Your main choice will be whether to go for a solid-state or a valve model. The solid-state ones will give good results on a variety of different sounds, but the enhancement provided by a valve model may be more suitable for your vocal-only applications, if you want that kind of sound. If you're after a solid-state mic, our Editor Paul White very much likes the Rode NT mics — the NT1 is currently excellent value. There's also a very nice Rode valve mic, the NTK, which is within your budget. 

Another one to have a look at is the AKG SolidTube, which I've heard models its sound on my personal favourite vocal mic, the AKG C12. Having a quick look through the Turnkey ad in this month's issue, there's also a good deal on the AKG C414, a classic solid-state mic, which brings it within your price range. There's also the Neumann TLM103, which is another lovely mic from a pedigree manufacturer, for around £470. In short, you're spoilt for choice, even without considering the flood of super-cheap Chinese-built clones from Joemeek, Red5 Audio, Studio Projects, Canford, Samson, MXL, and a growing number of others. The good news is that you shouldn't go wrong with any of the mics I've named above. Furthermore, any money you spend on a good mic will certainly not be wasted, so don't necessarily head for the cheapest option.

As for plugging the mic directly into your mixer, if you're getting one of the Mackie 'VLZPro' models, the mic preamps should easily do your choice of mic justice.

Q. Why have my faders stopped working?

I'm using a Roland VS880 digital multitrack, and one of my songs has developed a weird bug. The faders aren't working — they're just dead. Looking into the track parameters, I see that the channel mix levels are stuck on a fixed value with an asterisk after it (eg. 'CH MIX LEVEL=102*'), and using the wheel to vary the value is now the only way I can think of to alter it. Any ideas?

Alan Pittaway

Reviews Editor Mike Senior replies: I have a couple of ideas as to the root of the problem. That asterisk after the fader value means that the position of the physical fader is not representative of the actual internal level parameter. This situation can easily arise in normal use, because of using the same set of faders to adjust several sets of level parameters, and also because the automation can move the level parameter but not the physical fader.

If a mismatch between the level parameter and the physical fader occurs, you can normally set the level parameter to the physical fader position simply by moving the fader. However, if the Fader Match mode (set in the System menu's System Prm sub-menu, I think) is set to 'Null', you have to move the physical fader through the current level parameter value before the level parameter will follow the physical fader again. If you want it to work as it did before, simply switch Fader Match to 'Jump'.

If this doesn't sort things out, it's probably because you've got the machine's local control switched off. Head to the System menu's MIDI Prm sub-menu and switch the Cntrl Local switch to 'On'. What is the MIDI Local Control switch there for, I hear you wonder? It's for using the VS with a sequencer, so that you don't get a situation where the VS faders and the sequencer are both sending control signals to the VS's internal digital mixer.

Q. Does normalising have any adverse effects on audio?

I have a question to which I haven't really been able to find an answer. Most of the time, I tend to record audio to my Pro Tools LE system at somewhat under digital full scale, so that I'm 100 percent certain of not getting any distortion. Recordings are then normalised, maximising the peaks and making the audio as 'loud' as possible while maintaining dynamics, before I start mixing. What I'd like to know is whether the normalisation process actually changes or degrades the audio in any way, beyond adding bits to increase level.

Alex Elliott

Technical Editor Hugh Robjohns replies: The normalising process searches the audio file for the highest recorded peak, and then applies gain to the entire file to raise that highest peak to 0dBFS, thus raising the level of the entire file. A lot of people work this way, and it is a useful technique, since you can allow sufficient headroom during recording to avoid transient overloads, yet peak levels can subsequently be maintained at a similar level to commercial products.
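
As a rough illustration of what the normalising process does internally, here's a minimal Python sketch (the file names, the soundfile library and the -0.3dBFS target are all assumptions for the example):

import numpy as np
import soundfile as sf                      # assumed: any WAV reader returning float samples

audio, sr = sf.read("take1.wav")            # hypothetical recording made with headroom

peak = np.max(np.abs(audio))                # find the highest recorded peak
target = 10 ** (-0.3 / 20)                  # aim just below full scale (-0.3dBFS here)
gain = target / peak                        # one fixed gain for the entire file

normalised = audio * gain                   # every sample, noise floor included,
                                            # is raised by exactly the same amount
sf.write("take1_normalised.wav", normalised, sr)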

Obviously, the amount of gain added to normalise the signal will also raise the noise floor by the same amount, but with the prevalence of 24-bit converters these days, the noise floor is almost certainly going to be dominated by your own recording environment rather than the resolution of the recording system, so you are not losing anything in this process. The original dynamic range will be maintained.

The only other possible cause for concern is the issue of 0dBFS peaks. While working on the signal within the DAW environment (assuming a floating-point DSP system) this causes no problems whatever. However, some thought and care needs to be given to using normalised signals for making CDs. Although it is common practice to peak signals to 0dBFS during CD mastering, some mastering engineers are now making -0.5dBFS (or even lower) their maximum level. The reason is to avoid the danger of overloading the D-A converter in the replay system, since reconstructing a signal which has several 0dBFS peaks in close succession can create an analogue waveform which has a greater amplitude than its source samples. This can overload the D-A converter or the analogue electronics, which is clearly not a good thing.
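
To get a feel for the inter-sample peak issue, here's a hedged Python sketch (the test signal and the eight-times oversampling factor are illustrative assumptions) that compares the stored sample peak with an estimate of the reconstructed waveform's true peak:

import numpy as np
from scipy.signal import resample

# A tone at a quarter of the sample rate whose samples all miss the waveform crests:
x = np.sin(2 * np.pi * np.arange(1024) / 4 + np.pi / 4)
x /= np.max(np.abs(x))                     # stored sample peaks now read exactly 0dBFS

oversampled = resample(x, len(x) * 8)      # approximate the reconstructed analogue waveform
true_peak_db = 20 * np.log10(np.max(np.abs(oversampled)))

print(true_peak_db)                        # roughly +3dBFS: the reconstructed waveform
                                           # exceeds full scale and can overload a D-A stage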

Q. Can I obtain better orchestral sounds on a budget?

I have a very humble home studio comprising a PC running Windows 98, a Creative Labs Soundblaster Live soundcard with MIDI interface, a basic 76-key controller and Steinberg Cubasis VST software. I write classical compositions, which means all of the above gear has been an investment, enabling me to score at 10 times the speed of transcription by hand.

The problem I have is that the SoundFonts supplied with the Creative package are OK but not accurate enough and produce a bad simulation of an orchestra. This defeats the purpose for me. Can I get hold of better banks of SoundFonts, say on CD somewhere? If this is not possible, is my only option to purchase a sound module such as the Emu Proteus?

Allan Wiseman
If you really want orchestral realism, there's nothing to beat a sampler (perhaps a soft sampler like Steinberg's HALion) and some decent orchestral sample libraries.

Features Editor Sam Inglis replies: You can buy sample CDs in the SoundFont format, although it's fair to say that many of the more professional libraries are only available in other formats, such as Akai and Gigastudio. With your setup, you'll also be limited by the fact that the Creative card only has a certain amount of memory onboard to load SoundFonts. If you want to look for better SoundFonts, your first port of call should probably be Time + Space, who distribute most of the sample CDs sold in this country (their web site is at www.timespace.com).
If your PC is a recent model with a reasonably fast CPU and plenty of memory, however, my suggestion would be that you look for a software sampler or synthesizer instead. (It would be worth checking that your version of Cubasis supports VST Instruments, as this would be the most convenient format for you.) If you want the ultimate in realism, you would choose a software sampler such as HALion and a selection of orchestral libraries on CD. However, this will be expensive and time-consuming to set up, and if you want an all-in-one, easy-to-use solution, the most obvious choice would be Edirol's Orchestral Instrument. It won't give you the same realism that you'd get from a nine-CD £1000 library of string samples, but I think you would notice a substantial improvement over the Creative SoundFonts.


Published January 2003

Wednesday, May 29, 2019

Q. Why is phase important?

By Various
Phase shown on waveforms.

I'm an experienced musician who's just beginning to understand recording techniques and acoustic treatment. On page 160 of your December 2001 issue there was a box called "Absolute Phase Is Important", and this has prompted a few questions.

How does one reverse the phase of a microphone as suggested in the article? Is this done by resoldering connections, or are there any quality mics you can suggest that come with a phase-reverse switch? Phase testers were also mentioned. What are they? How do they work? Where can they be purchased?

Many of us use a phaser effect, and I assume that phasers basically alter the phase of a sound source over a given time period. Perhaps you could elaborate on how they work?
Finally, I use a PC to store all my recorded audio information as WAV files. Is it possible to uniformly alter the phase of a piece of digitally recorded audio so it stays static at the same place? I have been trying in Sound Forge, but haven't found a way to successfully do it yet.

Jonathan Sammeroff

Paul White and Sam Inglis respond: As you probably know, sound consists of pressure waves in the atmosphere. The function of a microphone is to translate these pressure waves into changes in the voltage of an electrical signal. Absolute phase usually refers to instrument miking where a positive increase in air pressure translates to a positive increase in voltage at the microphone output. If the mic is wired out of phase, or some other phase inversion is introduced, the output voltage will go negative as the air pressure becomes positive, and in the context of some percussive sounds, such as kick drums, there can be an audible difference. Also, if you have two very similar signals (such as are obtained by close-miking the same source with two different microphones) which happen to be out of phase, a lot of cancellation will occur, and this is usually undesirable. The classic case is when you close-mic both the top and bottom of a snare drum: here, you will get two very similar signals, but one will effectively be phase-reversed with respect to the other, so it's standard practice to reverse the phase on the bottom microphone.

Any balanced mic can be reversed in phase by making up a cable with the hot and cold conductors (the two inner wires) swapped over at one end of the lead. Most mixers, and some mic preamps, also have mic phase invert buttons that will switch the phase without requiring any special cables. Most cable testers will check that your leads are wired correctly (without crossed over hot and cold wires that would cause a phase reversal), but devices that can check acoustic phase from microphone to loudspeaker tend to be more complex and rather more expensive. Check with Canford Audio (www.canford.co.uk), as they carry this type of test equipment.

A phaser effect combines a signal with a delayed version of itself, using a low-frequency oscillator to modulate the delay time. As the length of the delay time is varied, cancellation occurs at different frequencies, and the result is a type of notch filter where the notch frequency is constantly moving, introducing a sense of movement into the sound.

You can't uniformly alter the phase of a whole piece of music, as phase relates to frequency, so unless your music comprises a single tone, adding delay (which is how some phaser effects work) will cause some frequencies to add and others to cancel. However, I suspect this is the effect you want, in which case you can get it by copying the audio to be treated, then moving it slightly ahead or behind the original audio, usually by just a few milliseconds. The two parts summed together will exhibit the static phase effect you describe.
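
As a simple sketch of that copy-and-offset approach in code (Python/NumPy; the 3ms delay is just an example value), summing a signal with a slightly delayed copy of itself produces the fixed comb-filtered 'static phase' sound described above:

import numpy as np

def static_phase_effect(audio, sr, delay_ms=3.0, mix=1.0):
    """Sum a signal with a delayed copy of itself (a fixed comb filter)."""
    delay = int(sr * delay_ms / 1000)
    delayed = np.concatenate([np.zeros(delay), audio])
    original = np.concatenate([audio, np.zeros(delay)])
    # Frequencies whose period divides evenly into the delay reinforce each other;
    # those that end up half a cycle out cancel, creating the fixed notches.
    return original + mix * delayed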

Q. What orchestral sample libraries are available for HALion?

I recently bought Steinberg's software sampler HALion to replace Cubase's good (but limited) Universal Sound Module for orchestral work. However, I'm having difficulty sourcing suitable samples for this, and wondered if you could recommend some alternatives in the price region of £100 to £300, and how I might be able to hear them before I buy?

Bill Taylor
Orchestra Section Strings sample library.

Assistant Editor Mark Wherry replies: HALion has the ability to import a wide variety of sample formats, including Akai and Emu CD-ROMs, and SoundFonts — so any of the orchestral libraries in these formats should work without a problem. GigaStudio is a highly regarded platform for sample-based orchestral work and has many fine libraries available, though most are priced at the higher end of the market. Giga libraries can be imported into HALion (from version 1.1), although, as Giga import isn't 100 percent accurate right now, you might be best sticking to more conventional libraries that place less demand on your computer.
Q & A Advanced Orchestra CD artwork.

As for library recommendations, for orchestral work on a budget you could do worse than Emu's two downloadable volumes of Orchestral SoundFont banks, available for $39.95 each at www.soundfont.com. These provide a good starting point and are rumoured to be based on the same sound library as Emu's Virtuoso 2000 module. There are also many good Orchestral Implants SoundFont libraries available from Sonic Implants (www.sonicimplants.com), and these are also reasonably priced.

If you want something more professional, both Peter Siedlaczek and Miroslav Vitous offer junior versions of their larger (and more expensive) orchestral libraries as Akai CD-ROMs. At £99, Peter Siedlaczek's Advanced Orchestra Compact might be a bargain, though many people regard it as being a little too compact. But at the upper limit of your budget, Miroslav Vitous' Mini Library offers a good selection of high-quality bread-and-butter sounds for £299, which is around a tenth of the price of the full library. Both are available from Time & Space (www.timespace.com).
Hearing libraries before you buy can be tricky, but all of those mentioned here have MP3 demo songs on their respective web sites.

Q. Where can I get Windows-based ASIO drivers for an Audiomedia III card?

Digidesign Audiomedia III card.

I recently bought a Digidesign Audiomedia III card second-hand from your magazine's Readers' Ads, and it didn't come with any ASIO drivers for my Windows-based PC. The card was highly recommended by a friend of mine (combined with Pro Tools), and the sound quality is very good. However, I've got a problem with latency and can't find any ASIO drivers to bring this down to acceptable levels. With Cubase 3.7 I can only use the default settings of 750ms (ASIO DirectX/ASIO Multimedia), which is far too high for any serious recording. How can I reduce the latency (apparently the AMIII is capable of latencies less than 5ms), and what ASIO driver should I install to use the card's full potential?

Another problem I'm experiencing is that when recording electric guitar through the soundcard into Pro Tools, I get nasty digital clicks at the beginning of every recording. Is there any way I can eliminate these? I'm not very experienced in setting up studios, and everything I read regarding Pro Tools and the Audiomedia III card seems to be written for the Mac platform, and not for PCs.

Bernd Krueper

PC Music specialist Martin Walker replies: The Audiomedia III is now quite elderly as soundcards go, having been introduced by Digidesign in 1996, and features 18-bit converters, although internally it has a 24-bit data path. I've only mentioned it once in the pages of SOS, in my first ever (May 1997) PC Notes column, where I published details of a way to cure inexplicable clicks by disabling PCI Burst Mode in your motherboard BIOS, should this setting be available. At the time, Digidesign were finalising a chip upgrade addressing the problem, so hopefully you have one of the later cards with this modification. Digidesign mention various other known incompatibilities on their web site, including AMD processors, VIA chipsets, and various Hewlett Packard PCs, which isn't encouraging.

I eventually found the latest Wave (MME) drivers on Digidesign's web site including version 1.7 for Windows 98/ME, dated January 2001, which supports 16 and 24-bit recording and playback at sample rates of up to 48kHz, in addition to other drivers for Windows NT, 2000, and even the announcement of an XP beta test program to support the AMIII cards. Various cures for crackling during playback were implemented in driver development, so make sure you have the latest versions. These will still give high latency, although you may be able to tweak the ASIO Multimedia settings inside Cubase 3.7 to bring the default 750ms down a little.

However, there was absolutely no mention of ASIO drivers, and Digidesign UK subsequently confirmed that none were ever written by them, or are now likely to be. Because the AMIII was released pre-ASIO, Digidesign developed the DAE (Digidesign Audio Engine) and relied on the sequencer developers to add support for it. Apparently, Steinberg did originally write an ASIO driver that supported this, and Emagic supported the DAE in Logic Audio up to version 3.5 on the PC, but since the DAE apparently wasn't updated by Digidesign to support Windows 98, support was dropped in Logic version 4.

So, sadly, although the card might be capable of latencies down to 5ms, you won't find any modern audio application that can use anything other than the high-latency MME drivers. This is a cautionary tale for any musician buying a soundcard, and particularly a second-hand one, so make your decision based on what drivers you can confirm are available to save yourself regrets later on.

Q. Are there really reverb and synth plug-ins supplied with Mac OS X?

I'm running Mac OS 10.1.2 and use SparkME, but there's no sign of the reverb and synth plug-ins anywhere. What's going on?

Arum Devereux

Assistant Editor Mark Wherry replies: The short answer is yes, there's a reverb and a synthesizer supplied with Mac OS X. The slightly longer answer is that developers have to provide support in their applications to take advantage of these features. And, since the MIDI and audio APIs (Application Programming Interfaces), collectively known as the Core Audio services, are some of the newest elements of Mac OS X, it's going to take a while for developers to fully support them.

The Core Audio services provide a plug-in architecture known as Audio Units, which isn't a million miles away from DirectX plug-ins on Windows. Audio Units can be used for a variety of applications, including software effects and instruments, and indeed, the reverb and DLS/SoundFont player instrument Apple supply with Mac OS X are both Audio Units.

The advantage of Audio Units, like DirectX plug-ins, is that any musical application running on Mac OS X can use the same pool of global plug-ins if it was developed to support Audio Units. This saves developers having to develop their plug-ins to support multiple architectures like VST, MAS, RTAS, and so on.

Q. How can I isolate the vocals from a stereo mix?

Do you know of any software or hardware that can remove a vocal from a track but allow you to save the vocal? There are numerous software packages that remove vocals from a track, but those are the parts I want.

Simon Astbury

Senior Assistant Editor Matt Bell replies: This question and variants on it come up time and time again here at SOS, and also on music technology discussion forums all over the Internet, presumably because budding remixers are forever coming to the conclusion that it would be great if there were a way of treating the finished stereo mixes of songs on CD and coming up with the isolated constituents of the original multitrack, thus making remixing a doddle. The situation is further complicated by the ready availability of various hardware and software 'vocal removers' or 'vocal cancellers', which leads people to assume that if you can remove the vocal from a track, there must be some easy way of doing the opposite, ie. removing the backing track and keeping the vocals.

Sadly, the truth is that there's no easy way to do this. To understand why not, it's helpful to learn how vocal cancellation — itself a very hit-and-miss technology — works. Believe it or not (given that so much of this month's Q&A is already given over to the topic) it's all to do with signal phase!

A stereo signal consists of two channels, left and right, and most finished stereo mixes contain various signals, mixed so they are present in different proportions in both channels. A percussion part panned hard left in the final mix, for example, will be present 100 percent in the left channel and not at all in the right. A guitar overdub panned right (but not hard right) will be present in both channels, but at a higher level in the right channel than it is in the left. And a lead vocal, which most producers these days pan dead centre, will be equally present in both channels. When we listen to the left and right signals together from CD, the spread of signal proportions in both channels produces a result which sounds to us as though the different instruments are playing from different places in the stereo sound stage.

If you place one of the channels in a stereo mix out of phase (ie. reverse the polarity of the signal) and add it to the other channel, anything present equally in both channels (ie. panned centrally) will cancel out — a technique sometimes known as phase cancellation. You can try this for yourself if you have a mixer anywhere which offers a phase-reversal function on each channel (many large analogue mixers have this facility, as do some modern software sequencers and most recent stand-alone digital multitrackers such as Roland's popular VS-series, although the software phase switch on the Roland VS1680 and 1880 doesn't exactly advertise its presence — see pic, right). Simply pan both the left and right signals to dead centre (thus adding them on top of one another), and reverse the phase of one of them — it doesn't matter which. The resulting mono signal will lack all the items that were panned centrally in the original mix. Sometimes, the results can be dramatic. Old recordings from the early days of stereo sometimes featured the rhythm section panned dead centre and overdubs (vocals, say, or guitar or keyboard) panned off-centre. In these cases the vocals or guitar will remain following phase cancellation, and the drums and bass will disappear completely, allowing you to appreciate details you never knew were there in the parts that remain. In recent recordings, the tendency has usually been for lead vocals to be panned centrally, so with these recordings, it's the lead that will cancel from the mix, leaving (in theory) the backing. This is how most vocal-cancellation techniques work.
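
Here's a minimal Python/NumPy sketch of that trick (the file names and the soundfile library are assumptions): summing both channels with one channel polarity-inverted cancels anything panned dead centre:

import numpy as np
import soundfile as sf                     # assumed stereo WAV reader

stereo, sr = sf.read("mix.wav")            # hypothetical stereo mix, shape (samples, 2)
left, right = stereo[:, 0], stereo[:, 1]

# 'Pan both to centre' and invert the polarity of one channel:
cancelled = left - right                   # mono result; centre-panned material cancels

sf.write("centre_cancelled.wav", cancelled * 0.5, sr)   # *0.5 simply guards against clipping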

So, doesn't this mean that the success (or failure) of vocal cancelling depends on whether or not the original vocal was panned centrally? Well, yes — which is why vocal cancelling is such a hit-and-miss technique! What's more, although most vocals are panned centrally in today's stereo productions, backing vocals are often panned off-centre, and will therefore not cancel with the lead vocal. Furthermore, nearly all lead vocals in modern productions have some effects applied to them. If these are stereo effects and therefore present unequally in both channels (as is the case in a stereo reverb), the dry signal may cancel, but the processed signal will not, leaving a 'reverb shadow' of the lead vocal in the phase-cancelled signal. No matter how much you pay for vocal-cancelling software or hardware, there's nothing that can be done if the original vocal was not mixed in such a way as to allow complete cancellation.

In addition, although you can cancel anything panned centrally in this way, you can't isolate what you've cancelled to the exclusion of everything else. Many people, when learning of phase-cancelling techniques, assume that if you can cancel, say, a vocal from a mix, then if you take the resulting vocal-less signal and reverse the phase of that and add it back to the original stereo mix, the backing will cancel and leave you with the vocal. This is hardly ever workable in practice, however, because a phase-cancelled signal is always mono, and if the original backing mix is in stereo (as it nearly always is), you can never get the phase-cancelled mono backing on top of the stereo mix in the right proportions to completely cancel it out.

Another suggestion that is often made when encountering phase-cancelling techniques is that of dividing a stereo mix into its component sum and difference signals, which you can do with a Mid and Side matrix. However, isolating the 'Mid' component of any given stereo mix won't merely give you anything that was panned centrally in the original mix to the exclusion of everything else — it's simply the mono signal obtained by panning Left and Right signals to centre and reducing the overall level by 3dB. So, if an original mix consists of a centre-panned lead vocal and an off-centre guitar overdub, the Mid signal constituent of the mix is not the isolated lead vocal, but a mono signal with the vocal at one level, and the guitar at a slightly lower level. You may be able to emphasise the vocal at the expense of the guitar with EQ, but you'll never remove the guitar altogether. In a busy mix with several instruments playing at once, deriving the Mid component of a stereo mix won't get you very much nearer to an isolated vocal than you are with the source stereo mix!
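
For completeness, here's a tiny sketch of the Mid/Sides matrix mentioned above (Python/NumPy; scaling conventions vary, and 0.5 is used here simply so that decoding returns the original channels). It makes clear that the Mid component is just a mono sum, not an isolated centre channel:

import numpy as np

def encode_ms(left, right):
    mid = (left + right) * 0.5     # everything ends up here, centre-panned or not
    side = (left - right) * 0.5    # only the left/right differences
    return mid, side

def decode_ms(mid, side):
    return mid + side, mid - side  # reconstructs the original left and right channels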

Despite this, it's worth pointing out that phase-cancellation techniques can be fascinating for listening to the component parts of mixes, and useful for analysing tracks you admire or are trying to learn to play. If you pan left and right channels to centre, reverse the phase of one of the channels and play around with the level of the phase-reversed channel, different parts of the mix will drop out as differently panned instruments cancel at different settings. Sometimes the relative volume of one component can shift very slightly, but enough to lend a whole new sound to a mix, enabling you to hear parts that have never seemed distinct before. An example might be if a song contains a blistering, overdriven mono guitar sound panned off-centre, which normally swamps much of the rest of the track when you play it back in ordinary stereo. With the faders set unequally, and one channel phase-reversed such that the guitar cancels out, you will hear most of the other constituents of the mix, but minus the guitar, which could make the track sound very different!

However, as a technique for isolating parts from a stereo mix, phase cancellation remains very imprecise, its success or failure dependent entirely on how the original track was mixed. This doesn't mean that it's not worth a try, but it also means that the only sure-fire way to obtain the isolated vocals from a track is to obtain a copy of the original multitrack from the artist or record company — which is, of course, what professional remixers do. Sadly, this is not an option for most of us!



Published April 2002

Monday, May 27, 2019

Q. Why can't I run Cubase 5.1 on Windows 95?

By Various
Q&A May 2002: Steinberg Cubasis.
Your technical questions and queries answered.

I've recently upgraded my copy of Cubase VST 3.5 to version 5.1 and, after several attempts to install it, I keep getting a list of errors. Having tried several times to contact Steinberg's 'non-existent' tech support, nearly a month later I found a forum in Club Cubase that dealt with the exact problem. Apparently, despite what it says in the manuals and on all the advertising, Cubase VST 5.1 will not run under Windows 95. So despite paying for the software, I'll now have to wait until I can afford to upgrade my OS before I can use it.

From listening to others, it seems that nobody ever gets any reply from Steinberg and, for a company with their reputation, this isn't very good. I can't reinstall version 3.7, as I had to send its dongle back, and the new one doesn't work with the old software — so I'm stuck.

Andy Sayner

PC Specialist Martin Walker replies: I had a look at the installation manual of my Cubase 5.0 update, and it did mention Windows 95, along with Windows 98 and 2000, as minimum requirements. However, when I followed this up with Steinberg Germany, I found that the latest 5.1 packaging only mentions Windows 98, ME, and 2000. This is the first version that doesn't officially support Windows 95, apparently not because there is any Cubase code that stops it working on this platform, but because there have been user problems with audio/MIDI drivers that are out of Steinberg's hands.

I've often contacted Steinberg's Helpline (020 8970 1924 in the UK), and found the tech support staff to be extremely helpful. They told me that they will happily send you a version 5.0 CD-ROM if you wish, which should scrape by running on the final Win 95C version, or return your 3.7 dongle and give you a refund if you prefer to return your 5.1 dongle to them.

However, there's a larger issue here. Windows 95 is now seven years old, making it extremely long in the tooth for an operating system, and it's hardly surprising that many music developers are withdrawing support for it. It's always sensible to wait for at least a few months after release before upgrading, so that the lemmings expose any remaining bugs and they can be dealt with by an update or Service Pack, and of course there's no point in upgrading for the sake of it if you're happy with the performance of your existing operating system.

However, there comes a point when ignoring progress leaves you in a tricky situation. The majority of musicians still seem to be running Windows 98SE, and while this was still available as an upgrade it would have been an ideal way for you to bring your PC rather more up to date three or four years after installing Windows 95. The subsequent Windows ME would have been a possible upgrade for you as well.

Sadly, retailers are now only stocking the latest Windows XP, and as I mentioned in my review, having Windows 95 doesn't qualify you to install XP as an upgrade. You can probably buy a full OEM version for a similar price if you order a small hardware item at the same time, but if your PC is more than five years old then it will probably require the minimum of a BIOS update, as well as some hardware upgrades, since XP realistically requires a minimum of a 400MHz Pentium processor or equivalent, plus 128MB of RAM.

For any musician to stay abreast of new software developments, periodic computer upgrades become almost inevitable. Although Cubase 5.0 did introduce various new MIDI features, the majority were audio ones, and if you intend to make use of these then you almost certainly face the prospect of some major hardware upgrades, or even a new PC.

Despite the stated minimum requirements, Steinberg's recommended minimum system for Cubase 5.0 is a Pentium III 266MHz and 128MB of RAM running Win 98 or 2000. However, it's a sad fact of life that the latest software is designed to run best with computers that are not more than a couple of years old, and although you may scrape by with an older one, performance may still be sluggish.

Q. Is there a software package that acts purely as a virtual tape recorder?

I'm writing to ask for some advice on appropriate software that can act purely as a virtual tape recorder. I want transport controls, locators, track arming and so on, and I'd like to play back MIDI and record audio signals. I don't want to use it for automation purposes or effects processing (I use a mixer for those tasks), but I want to master down to hard disk for CD burning.

SOS Editor Paul White replies: To the best of my knowledge, there's currently no software that functions exactly like a virtual multitrack tape recorder in all operational respects. Specifically, punching in and out isn't usually handled in the same way as for tape, because with tape punch-ins are always destructive, whereas computers tend to create undo files. This is fine from a safety point of view, but it does mean that a song ends up containing far more audio files than tracks.

Of course you can get multitrack functionality with a sequencer, along with all the MIDI tools, but then you get automation and effects processing whether you need it or not. I really do think there's room for something that works like a simple tape machine when you're recording multiple players at the same time, but which allows you to open the finished files in a standard sequencer for editing. Sadly, if such a thing exists it's never been brought to my attention! In the meantime, Cakewalk's Guitar Tracks Pro is perhaps closest to what you're after in terms of its interface, which is intended to feel like using a cassette multitracker, but it has no MIDI functionality at all. If that's crucial to you, you'll need to check out the same company's Home Studio XL, or entry-level versions of sequencers like Logic and Cubase.

Q. What is panning law?

Can someone please explain about panning law? Apparently Cubase defaults to -6dB and Logic to -0dB — does this mean better playback, higher recording levels, or something else?

SOS Forum post
Most audio applications let you select the 'pan law' (shown on bottom-left side).

Technical Editor Hugh Robjohns replies: When a signal is panned centrally, exactly the same signal will be output on both the left and right audio channels. If, on the other hand, the signal is panned fully to one side or the other, it's only output on one channel. If you listen to the results over loudspeakers, the doubled acoustic energy in the room when replaying a centrally-panned sound will produce a louder signal. And as we're talking about acoustic power, the signal level will appear to increase by about 3dB, compared to the level when only one channel is outputting the signal.

So if you pan from hard left, through centre, to hard right, it will sound as if the level rises slightly as it passes through the centre. To overcome this, mixer designers engineer the panning law to introduce a 3dB level drop at the centre, relative to the edges. However, if mono compatibility is important, a 6dB centre attenuation is necessary because the left and right channels are mixed together to make a derived mono. With the same signal in both channels (from a centrally panned signal), the voltage addition results in a 6dB level increase compared to the level of a signal in a single channel, so many broadcast desks employ a 6dB centre attenuation in their pan pots.

Manufacturers of general-purpose mixers often hedge their bets, and arrange a centre attenuation of 4.5dB, halfway between the two camps! This is an excellent compromise in my opinion and seems to work well in all circumstances. However, bear in mind that because of the panning law, altering an instrument's stereo panning will also change its perceived level in the mix by a small but often significant amount. For this reason, I advocate panning sources to the required positions before crafting the final balance. Some people recommend getting the balance in mono first, but the balance will change by anything up to 6dB if you then pan sources around, which isn't very helpful. And if you think all this is complicated, spare a thought for surround sound panning laws!
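
As a small worked example, here's a hedged Python sketch of a pan pot whose centre attenuation can be set to the 3dB, 4.5dB or 6dB laws discussed above (the cosine/sine curve shape is one common choice, not the only possible law):

import numpy as np

def pan_gains(pan, centre_drop_db=4.5):
    """pan: -1 = hard left, 0 = centre, +1 = hard right. Returns (left, right) gains."""
    # Pick an exponent so the centre position sits centre_drop_db below the edges
    # (an exponent of 1 gives the constant-power -3dB law, 2 gives the -6dB law).
    n = centre_drop_db / (20 * np.log10(np.sqrt(2)))
    theta = (pan + 1) * np.pi / 4            # maps pan position onto 0..pi/2
    return np.cos(theta) ** n, np.sin(theta) ** n

for law in (3.0, 4.5, 6.0):
    left, right = pan_gains(0.0, centre_drop_db=law)
    print(law, round(20 * np.log10(left), 2))   # ~-3, ~-4.5 and ~-6dB at the centre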

Q. How can I get equal monitoring levels?

I have a Yamaha Promix 01 mixer and have set the internal test tone at 0dB. The output goes to the analogue input of a Pulsar soundcard and, although I get 0dB on the computer's mixer software, when the signal comes back into the Promix it's +12dB, even though I have 'ST IN' set to zero on that channel. I'm confused about how to get an equal monitoring level. Any ideas?

SOS Forum post

Technical Editor Hugh Robjohns replies: I suspect what you've found is the discrepancy between so-called 'professional' levels, referred to as a nominal +4dBu, and 'semi-pro' levels, referred to as -10dBV. I think you're sending tone from the main outputs of your mixer, which are designed to produce +4dBu with 0VU indicated on the meters, and the input to your soundcard is obviously calibrated the same way. So I'd therefore assume that the output from the card is also the same, ie. 0VU on the meter produces +4dBu signal at the output.

You have then routed this back to the Yamaha, I suspect through an unbalanced input (via the phono connectors?), which is designed to accept a -10dBV nominal input level for 0VU. A signal level of -10dBV has an RMS voltage of 316mV, equating to -8dBu in round figures. So this unbalanced input is normally expecting -8dBu for a 0VU display and you're giving it +4, which is 12dB higher (in round figures again), hence the high reading. Possible solutions could be to:
1. Set the output of the soundcard to provide a -10dBV signal instead of a +4 level (if possible).
1. Set the output of the soundcard to provide a -10dBV signal instead of a +4 level (if possible).
2. Use a different return input to the Promix 01 — one which is expecting a higher level.
3. Use a 12dB attenuator between soundcard and mixer.
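
If you want to check the arithmetic above, here's a quick Python verification (using the standard reference voltages of 1V rms for dBV and 0.775V rms for dBu):

import math

semi_pro_volts = 1.0 * 10 ** (-10 / 20)                  # -10dBV ~= 0.316V rms
as_dbu = 20 * math.log10(semi_pro_volts / 0.775)         # ~= -7.8dBu, i.e. -8dBu in round figures

print(round(as_dbu, 1))
print(round(4 - as_dbu, 1))                              # ~11.8dB: the roughly 12dB discrepancy observed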

Q. Do multi-client soundcards still exist?

I'm looking for a multi-client capable soundcard so that I can use more than one audio application simultaneously. The specifications for most soundcards mention full-duplex capabilities, but not multi-client. Are they still available and practical? And if so, which model does SOS recommend?

Tim Brind

PC Music specialist Martin Walker replies: Full-duplex simply means that a soundcard can record and play back signals simultaneously. Nearly all modern soundcards are capable of doing this, although some elderly consumer models may have limitations on the sample rates or bit depths available. Similar restrictions may affect certain USB sound devices, since USB bandwidth isn't sufficient to run many 24-bit/96kHz audio channels simultaneously, for example.

I'm not surprised that there's still so much confusion about multi-client capability since, as you point out, soundcard specifications are often vague or nonexistent in this area. This is largely because it's not the soundcard that has the multi-client capability, but its driver software, so the capabilities of each soundcard model may change as new driver versions are released.

Being 'multi-client' can also mean various things: a soundcard may have multi-client drivers for its audio ports, its MIDI ports, or both, but each type is generally implemented in a very different way. For instance, if the soundcard's MIDI drivers are multi-client capable, you can send data to the same MIDI output from your MIDI+Audio sequencer and a synth editor simultaneously, which makes real-time tweaking of sounds far easier.

When it comes to audio, multi-client capability does allow you to run more than one audio application at once but, in this case, each application nearly always requires a dedicated stereo output. This may either be an additional physical output in the case of a multi-port card, or a virtual one in the case of models like the Echo Mia, where four stereo outputs are mixed internally and their combined signal sent to a single physical output. So yes, multi-client soundcards are certainly available and extremely practical, but each model may implement this feature in a different way.

The applications you use may also restrict the way you mix and match your audio I/O. For instance, if you use Cubase, it will always grab soundcard outputs 1/2, whatever driver type you choose. If other outputs exist, multi-client drivers will probably let you allocate these to other applications simultaneously, but only if you're using a different driver type. For instance, with my Echo Mia card I can allocate ASIO channels 1/2 to Cubase, MME channels 3/4 to Wavelab, and GSIF channels 5/6 to GigaStudio.
Whenever I review a soundcard, I test out its multi-client capabilities with a variety of software, since this is often the only practical way to find out what its drivers are truly capable of, and I always report on my findings. However, new models are often launched with the promise of multi-client capability, even though this sometimes takes six months or more to appear.

I can't recommend a specific soundcard model, largely because I don't know how many inputs and outputs you need, whether or not you require digital I/O to connect up external gear and so on. However, I've written plenty of FAQs on this subject for the PC Music section of the SOS Forum at www.soundonsound.com/forum, and if you want to use a particular combination of audio applications, the easiest way to find out how practical this is with a particular soundcard is to post a query on the SOS Forum and see how other musicians have fared with a similar setup.

Q. Which hard disk format is best for PC audio?

Martin Walker said in March 2002's PC Musician that FAT32 is more audio-friendly than NTFS, and I'd like to know more about why this is.

Harri Era

PC Music specialist Martin Walker replies: I mentioned a bit more about this subject in the Optimising PC Hard Drives For Audio article in the April 2002 issue, but formatting hard drives using NTFS (New Technology File System) is only a valid option for musicians running Windows NT, 2000, or XP. However, since so many of us are moving over to XP, here's a more detailed comparison.
The prime feature behind NTFS is security: to prevent unauthorised users seeing sensitive data, and to provide additional protection against corruption and data loss. It allows access rights to be assigned to files and folders, permitting each user full, partial, or no access at all to specific data. It also features integral file encryption facilities, and keeps multiple copies of its Master File Table. There's also a disk quota system that allows space to be allocated to different users in a transparent way, and it supports transparent compression of individual files, folders, and volumes.

NTFS does have the distinct advantage over FAT32 that its performance doesn't slow down once partitions or folders contain thousands of files, and if you do have such large numbers of files, its indexing feature greatly speeds up searches by maintaining an overall index. This certainly means that for most purposes, NTFS is the better file system.

However, the whole point of an audio partition is that it holds large files, often many megabytes in size, and normally fewer than you'd find on any drive containing Windows or text-based data, such as those found on most Internet servers or company databases. In addition, most musicians are primarily interested in squeezing the last drop of performance from their PCs, so the added overhead of the NTFS protection features won't meet with much enthusiasm either.

However, reports of noticeable differences in performance between the two tend to be greatly exaggerated, especially if you format them both using one of the larger cluster sizes like 32K that are routinely recommended for audio purposes, which means there are fewer overall clusters to manage.
All hard disk read and write operations are handled by the operating system, and not the file system, so your format choice for an audio partition or drive is also dictated by which version of Windows you are running. FAT32 partitions can be read by Windows 95B and C, 98, 98SE, ME, 2000, and XP, which makes it an almost universal format. However, choosing to format your audio drive with NTFS will mean that it's invisible to any Windows 95, 98, or ME partition. So unless you're leaving these behind and moving totally to XP, it's not a wise choice.

Overall, I can't see that choosing NTFS format for your audio drives makes any sense unless you get a performance advantage when running audio applications, and judging by all the information I have, the differences between FAT32 and NTFS with modern PCs are minimal. I've also come across people whose NTFS partitions won't convert back to FAT32 using PowerQuest's Partition Magic, which is another consideration if you later change your mind.



Published May 2002