Welcome to No Limit Sound Productions

Company Founded
2005
Overview

Our services include Sound Engineering, Audio Post-Production, System Upgrades and Equipment Consulting.
Mission
Our mission is to provide excellent quality and service to our customers. We offer customized service.

Monday, May 31, 2021

Vocal Processing

Having done your best to capture a high–quality recording of your vocals, how best do you go about processing them so they work in your mix?

While there are plenty of alternative approaches to treating vocals, the processing chain suggested here can form a good starting point.

The basics of recording a good vocal performance are pretty much the same whatever your recording system: put together a suitable mic, a clean preamp and audio interface, a recording room that doesn't colour the sound in any detrimental way — and some suitable singing talent — and you're most of the way there. However, there remain plenty of options, both creative and corrective, for maximising the impact of vocals in your mix. In this article I'm going to look at how a number of Cubase 4's audio plug–ins (and, in passing, some freeware alternatives) can be used to form a basic signal processing chain for vocals.

The Chain Gang

If a gate is required, set a fast attack time and then adjust the release time to suit the material.

To keep things short and sweet, I'll assume that the basic vocal performance has already been captured and edited to create a composite track. I'll also ignore the usual send effects like reverb and delay, which we've covered in this column and our Mix Rescue features many times before, and instead focus on how you can most usefully use insert effects on a mono vocal track.

What sequence of insert effects forms a 'typical' signal chain for a vocal will vary depending upon who you ask and the style you're after, but I do find it useful to keep a basic chain of plug–ins as a starting point. The first screenshot shows a chain of plug–ins that I keep as a Cubase track preset: from top to bottom, this chain uses the Gate, DeEsser, two instances of the Compressor, Studio EQ and DaTube. Before I go step-by-step through this chain, let's quickly consider the order in which you place your plug-ins. It's worth paying particular attention to which effects go before and after any compressors: compression tends to emphasise frequencies that already dominate a sound, so it can make good sense to perform any corrective processing (notch EQs, de–essing and so on, but not pitch correction — more on that later) before any more general compression. That way, the compressor is working on the sounds you want to hear, not exaggerating problems like leakage, pops, clicks or sibilance.

Shut That Gate!

De-essers are handy but try not to overdo it!

Gates are used to strip away unwanted audio when it falls below a user–determined level. An alternative is to use the Detect Silence function from the Audio/Advanced menu: this is in effect an off–line gate plug–in that simply edits out the silent parts of a track, meaning your computer isn't needlessly streaming 'silent' audio from your hard drive. Automating the level can have a similar effect on the sound, but remember that any pre-fade sends on the vocal channel won't be muted by bringing the fader level down.

If you want to use a gate to tidy up a few short sections between vocal phrases, make sure that you use a fast attack setting. Both the threshold and release settings will require experimentation, because the best settings will vary with the part being processed. The threshold control must be low enough to allow the quietest sections of the performance to open the gate (including any breaths — see the 'Don't Forget To Breathe' box), but high enough to keep the gate shut during sections when the vocalist is quiet. It's worth starting with a fairly lengthy release time, which will ensure that the tail ends of words don't get cut off. Gradually, shorter times can then be tried to find what suits the performance.
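If you're curious how those three controls interact, here's a minimal sketch of the kind of gain envelope a gate applies. It's purely illustrative Python, not Cubase's actual Gate algorithm, and the threshold and timing figures are just example starting points.

import numpy as np

def simple_gate(signal, sr, threshold_db=-45.0, attack_ms=1.0, release_ms=250.0):
    """Very simplified noise gate: opens quickly when the level crosses the
    threshold and closes slowly, so word endings aren't clipped off."""
    signal = np.asarray(signal, dtype=float)
    threshold = 10 ** (threshold_db / 20.0)
    # Per-sample smoothing coefficients derived from the attack/release times.
    attack_coeff = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    release_coeff = np.exp(-1.0 / (sr * release_ms / 1000.0))

    gain = 0.0
    out = np.zeros_like(signal)
    for i, x in enumerate(signal):
        target = 1.0 if abs(x) > threshold else 0.0
        coeff = attack_coeff if target > gain else release_coeff
        gain = coeff * gain + (1.0 - coeff) * target
        out[i] = x * gain
    return out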

S–Express

It can be useful to combine two compressors for vocal processing.

After the gate, I often use the DeEsser plug–in to address those silly, stubborn sibilance issues. Some folks hate de–essers, while others seem happy to let them do a job. I'm in the latter camp: I'm happy to use them where needed, although if I've done a good job of recording the vocal in the first place, I'd expect sibilance to have been resolved at source. When a de–esser is required, it's best to place it before any standard compressor in the signal chain, as I explained at the beginning of this article.

Cubase's DeEsser is a fairly basic affair, and pretty easy to get your head around. You choose between male or female — which simply switches the centre of the frequency range for processing between 6kHz and 7kHz respectively — and then set the S–Reduction control, which dictates how much compression is focused on that frequency range in order to reduce any unwanted 'ssssss'. I tend to start with low values and gradually increase them, but unless there's a real problem (which is generally going to be better solved by re–recording anyway), I always opt for the 'less is more' approach. From my own experience, I also tend to leave the Auto Threshold on, because this automated threshold adjustment seems to produce fairly good results.
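For the curious, here's a rough sketch of the idea behind a de-esser of this kind: monitor the level in a band around the sibilance frequency and turn the signal down only while that band is hot. It's illustrative Python using SciPy, not the DeEsser's actual algorithm, and the threshold and reduction figures are invented.

import numpy as np
from scipy.signal import butter, sosfilt

def simple_deesser(vocal, sr, centre_hz=6000.0, threshold_db=-30.0, max_reduction_db=6.0):
    """Crude de-esser sketch: watch a band around the sibilance frequency
    (e.g. 6kHz for the 'male' setting) and duck the signal while it's loud."""
    vocal = np.asarray(vocal, dtype=float)
    # Band-pass roughly an octave around the centre frequency.
    sos = butter(2, [centre_hz / np.sqrt(2), centre_hz * np.sqrt(2)],
                 btype='bandpass', fs=sr, output='sos')
    sibilant = sosfilt(sos, vocal)

    # Smooth envelope of the sibilance band (~5 ms decay).
    env = np.abs(sibilant)
    coeff = np.exp(-1.0 / (sr * 0.005))
    for i in range(1, len(env)):
        env[i] = max(env[i], coeff * env[i - 1])

    threshold = 10 ** (threshold_db / 20.0)
    # Reduce gain in proportion to how far the band exceeds the threshold,
    # capped at max_reduction_db ('S-Reduction' in spirit).
    over_db = 20 * np.log10(np.maximum(env / threshold, 1.0))
    reduction_db = np.minimum(over_db, max_reduction_db)
    return vocal * 10 ** (-reduction_db / 20.0)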

Squeeze Me, Please Me

Having de–essed as required, I'll either move on to an EQ or a compressor. As I mentioned earlier, corrective EQ would usually be done at this stage, but more general tonal sculpting can be done before or after the compressor — if you do it before, it's worth having the compressor in line already, in the way you would mix 'into' a bus compressor.

I like to use two stages of compression: the first to do a basic compression job, levelling out the volume of the vocal, to help it sit more comfortably within the mix and give it greater impact; and the second to catch only the real peaks of volume, allowing the overall level of the vocal to be raised without the possibility of the signal getting out of hand.

I've included two screenshots to show what might be typical starting points for these two stages. In the first, a combination of a medium ratio (4:1 in this case with 'soft knee' selected) and low threshold (–20dB) have been used. You'll need to adjust the threshold to suit the particular track, and aiming for a target of about 6dB maximum gain–reduction should produce a reasonable balance between a punchy sound and a natural result. The second compressor has a higher ratio (8:1, which is almost into limiting territory) and a high threshold, so that only the peak signals are subjected to this further compression. The make–up gain control of the first compressor can be used to drive the level of the signal reaching the second compressor, and — alongside the threshold control — it will determine how often the second compressor is active. It's important that these controls are adjusted so that gain reduction takes place only on the peaks of the signal, because otherwise things will start to sound ugly!
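To make those ratio and threshold figures concrete, here's the static gain arithmetic for the two stages. The helper function, the example input level, the stage-two threshold and the make-up gain are all invented for illustration, and the soft knee is ignored for simplicity.

def gain_reduction_db(input_db, threshold_db, ratio):
    """Static ('hard-knee') gain reduction: above the threshold, every ratio dB
    of input produces only 1 dB more output."""
    over = max(input_db - threshold_db, 0.0)
    return over - over / ratio          # amount the level is turned down, in dB

# Stage 1: 4:1 ratio, -20 dB threshold.
peak_in = -8.0                                                   # a loud syllable
stage1_out = peak_in - gain_reduction_db(peak_in, -20.0, 4.0)    # -8 - 9 = -17 dB
stage1_out += 6.0                                                # make-up gain driving stage 2

# Stage 2: 8:1 ratio, high threshold, so only real peaks are touched.
stage2_out = stage1_out - gain_reduction_db(stage1_out, -12.0, 8.0)

print(round(stage1_out, 1), round(stage2_out, 1))                # -11.0 -11.9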

Note that in both cases I've used a fast attack and a modest release (250 ms): the latter may need some adjustment according to the style of the material. The end result should be a much more solid–sounding vocal — so you should find it easier to place in a mix. A word of warning, though: don't let compression become a substitute for old-fashioned fader movement. Especially in situations where you're after a natural vocal sound, there's often no substitute for controlling its level through detailed automation, rather than heavy compression.

Mind Your Es & Qs

Bass roll–off aside, it's best to keep EQ fairly subtle if you want to maintain a natural sound.

The compression should tame your track and make it more controllable, so let's look more closely at how you might use EQ for a bit of tonal shaping. When we talk about the tonal qualities of a sound, they're often described using terms such as 'nasal', 'boomy' or 'boxy'... but, of course, you won't find those terms on your EQ! Helpfully, for us studio–using mortals, some golden–eared folk have attempted to translate those words into more specific frequency ranges, and there are also some useful pointers in the EQ article in this edition of SOS. For example, a high–pass filter turning over at 100Hz (as in the screenshot) can help get rid of any unnecessary bottom end (making the vocal less 'boomy' and getting it out of the way of the bass and kick), and for less prominent backing vocal parts you might be able to set it even higher. If the sound is a little 'boxy' or 'nasal', then a cut of a few decibels somewhere in the 200Hz–1.5kHz range can help. A similar amount of boost centred somewhere in the 2–7kHz range can be used to add a little extra presence, while shelving EQ anywhere from 10kHz upwards can be used to add 'air'. Exact frequencies and amounts of gain will vary according to the tonal character of the voice, so even these very general ideas will serve only as a starting point, and it's important that you experiment and use your ears. One way to find what works best is to use a very narrow boost and adjust the frequency while listening out for the most resonant, boxy sound. When you find that point, turn the boost into a cut to filter out the offending frequencies. However, with all EQ — unless you are attempting to correct an obvious flaw in the original recording — subtle use of the gain controls and low Q settings will produce more natural results.
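As a rough aide-memoire, those ranges could be jotted down as a starting-point recipe like the one below. The gain figures are only ballpark values implied by the text, not prescriptions; adjust everything by ear.

# Rough vocal EQ starting points drawn from the ranges above; adjust by ear.
vocal_eq_starting_points = [
    {"type": "high-pass",   "freq_hz": 100,          "note": "remove boom/rumble below the voice"},
    {"type": "peak cut",    "freq_hz": (200, 1500),  "gain_db": -3, "note": "tame 'boxy'/'nasal' resonances (sweep to find them)"},
    {"type": "peak boost",  "freq_hz": (2000, 7000), "gain_db": +3, "note": "add a little extra presence"},
    {"type": "shelf boost", "freq_hz": 10000,        "gain_db": +3, "note": "add 'air'"},
]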

Mind The Gap

For a little extra attitude, a gentle touch of distortion can help.

By this stage, the vocal should be clean of unwanted artifacts, be well–controlled in terms of dynamics, and have a suitable tonal balance, with no unnecessary bottom end — all of which means it will be perfect for feeding in to any pitch–correction processor that you might need to employ. Although Cubase 4 is capable of pitch manipulation, there's no insert plug–in for it, so you have to perform the processing off–line... which itself means you need to bounce the track down if you want to do any other processing beforehand. I'd usually opt to use a third–party tool such as Melodyne or Auto-Tune here — and if those excellent plug–ins are out of your reach you could try a less sophisticated alternative like the freeware Gsnap (www.gvst.co.uk/gsnap.htm). Whichever plug–in you choose, this is probably the best stage at which to apply it — but do remember that it's easy to overdo things!

Exciting Times

Having worked so hard to get your recording as clean as possible, it might seem odd to deliberately distort it, but that's essentially what harmonic enhancers and tube emulations do — and both can be very effective on vocals. Cubase 4 includes two useful plug–ins for this: DaTube (as used in the example) and SoftClipper. DaTube attempts to add a little tube–like warmth and distortion to a signal. While it's probably not the best tube–emulation plug–in in the world, it can (when used subtly) add an extra presence to a voice.

Treat With Caution!

By this point you should have a vocal that sounds pretty good. The nasty noises should have been dealt with, it should be more even in terms of dynamics (making it easier to fit in the mix), and it should be tonally pleasing. The only final word of caution would be a reminder to repeatedly bypass the various elements in this signal chain as you tweak the controls just to make sure you're moving the sound in the right direction and are not in danger of over–processing.

Don't Forget To Breathe

Whether using a gate plug–in or manually editing to tidy up the gaps in your vocal recordings, the treatment of performance noises such as breaths or lip noise can often create a dilemma. If these are completely removed some of the character of the performance can be lost (much like trying to remove all finger noise from an acoustic guitar recording). It's here that waveform editing offers a real edge over a traditional gate. Sounds such as breaths can often be isolated, which allows both their amplitude (for example, by using detailed envelopes for volume automation) and position (moving the breath to make it sync with the groove and therefore enhance the rhythmic role of the vocal) to be adjusted. While such editing can involve a certain amount of painstaking work, if you're trying to 'fake' the perfect vocal performance, it is probably worth the effort.

Freebie Alternatives

Of course, the signal chain I'm suggesting in this article is only one of many possibilities. For example, you could use Cubase 4's VST Dynamics plug–in to replace the gate and two compressors. While most of the plug–ins included in Cubase 4 are pretty good workhorses, there are plenty of third–party alternatives that may do the job better. This isn't the place for an extensive discussion of third-party plug-ins, but if you're on a limited budget I'd recommend trawling the KVR database (www.kvraudio.com) to find good freeware alternatives, and it's worth me drawing your attention to a few of them here. Regular readers of Mix Rescue will probably be aware of the wonderfully named Fish Fillets bundle (www.digitalfishphones.com), which remains one of my personal favourites. It includes a good compressor, de–esser and gate — and the compressor, which has bags of character, is particularly worth trying. If you're on a PC (no Mac version yet, I'm afraid), then the Antress Modern plug–ins (http://antress.webng.com) are also worth a look — there are various compressors, EQs and exciters in there (and plenty more besides) which sound great, although they do tend to hit your CPU quite hard. Another exciter, as a possible alternative to DaTube, is X–cita, by Elogoxa (www.uv.es/~ruizcan/p_vst.htm), which attempts to mimic the BBE Sonic Maximiser.

Duck For Cover

Vocals are often the most important element in a song, and their place in the mix should reflect that. Compression and limiting of the vocal track can help make that easier to achieve, but another trick is to slightly drop the levels of other mix elements when the vocal is present, and raise them again when the vocal drops out. Good candidates here are rhythm guitar and keyboard parts.

The volume changes can of course be achieved via volume automation, but the recent addition of side–chain facilities to some Cubase 4 plug–ins means you can also do this via ducking, without having to draw all that automation data in. Inserting a compressor in the track to be ducked, activating its side–chain input and specifying the lead vocal track as the source for the side–chain input will allow the compressor to gently squeeze the level of the instrumental track whenever the vocal is present. Even a drop of 1 or 2dB in some instrumental backing elements in this way can just help give the vocal a little more space to work in the overall mix.
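If it helps to picture what that ducking compressor is doing, here's a bare-bones sketch of side-chain ducking. The function is hypothetical Python, not a Cubase plug-in, and the threshold, depth and release figures are only examples.

import numpy as np

def duck(instrument, vocal, sr, threshold_db=-35.0, max_duck_db=2.0, release_ms=200.0):
    """Sketch of side-chain ducking: dip the instrument bus by up to a couple
    of dB whenever the vocal is present, and recover smoothly when it stops."""
    instrument = np.asarray(instrument, dtype=float)
    vocal = np.asarray(vocal, dtype=float)
    threshold = 10 ** (threshold_db / 20.0)
    release_coeff = np.exp(-1.0 / (sr * release_ms / 1000.0))

    duck_gain_db = 0.0
    out = np.zeros_like(instrument)
    for i in range(len(instrument)):
        if abs(vocal[i]) > threshold:
            duck_gain_db = -max_duck_db          # vocal present: duck immediately
        else:
            duck_gain_db *= release_coeff        # vocal gone: ease back towards 0 dB
        out[i] = instrument[i] * 10 ** (duck_gain_db / 20.0)
    return out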



Published December 2008

Friday, May 28, 2021

Timing Correction

 By Matt Houghton

With Cubase 4.1, Steinberg overhauled the Sample Editor, creating a very powerful new tool for real-time pitch and time manipulation.

Back in this column in SOS November 2007, John Walden took us through how to use the time‑stretching and Audio Warp functions in Cubase 4. But no sooner had that issue hit the shelves than Steinberg decided to release version 4.1, which brought a significant overhaul of these functions in an attempt to make them easier to use. Of course, there are still several articles and tutorial videos floating about on the Web that instruct you how to do things in the earlier incarnation of Cubase 4, and if you managed to download the update without also separately downloading the updated manuals (as I initially did) you'll probably be very confused. So before I go any further, I can't stress enough how important it is to keep the Cubase manuals in step with your software updates.

We're now on v4.5 and it's high time we investigated the new features, so in this article I'll take you through what's changed, and consider a few potential applications for Hitpoints and Audio Warp along the way.

What's Changed?

There isn't really much new 'under the bonnet', because most changes have been to how you access the existing audio processing functionality, and how things are presented in the GUI. There have also been a few changes to the terms used to describe some of the functions, which do make sense but can be a little confusing at first.

To perform real-time time-stretching in previous versions, you had to go into the audio Pool, define the tempo of your clip (if it wasn't already defined) and tick a box that put the clip into 'musical mode'. You can still work this way if you prefer, and it makes sense to do so if you wish to enter this information for several clips that you know to be of the same tempo. You could also access this via the rather fiddly toolbar of the Sample Editor — the logic, presumably, being that you'll want to be able to define the tempo when working with the audio clip in question. But from v4.1 access via the Sample Editor has changed considerably — and for the better.

New Sample Editor

It's now much easier to define the tempo of audio for real-time Warping using Cubase's Sample Editor.

When you double-click on an audio event to launch the Sample Editor, it will open just as it used to, but some of the buttons from previous versions are 'missing' from the top toolbar: you no longer have any controls there for activating musical mode, or Audio Warping, for example. This is because everything has been rationalised and moved to the left‑hand side of the Sample Editor window. In fact, you have access to far more processing options there than you had previously from the toolbar, including some of the processes that are accessible via the Audio menu, such as off‑line application of plug‑ins. Taking the Sample Editor menus from the top down, there are sections called Definition, Playback, Hitpoints, Range and Processes, which we'll look at in turn.

Sadly, what Steinberg haven't yet done is integrate any of these features into the Arrange page, as they did when they introduced edit-in-place MIDI, for example — and this is something that I'll explore more towards the end of this article.

Definition

The Definition section is, as the name suggests, where you define the tempo of the audio event that you've opened in the Sample Editor. If you're working with a fixed‑tempo loop that starts with a downbeat, this is incredibly simple: you make sure the time signature and bar length are correct (you may need to audition the loop to count the latter), then click on the crotchet Preview symbol so that it lights up orange and hit Auto Adjust. This should force the grid to match the tempo of the audio file. We'll come on to the time‑stretching functions themselves later: suffice to say that what we're doing here is telling Cubase where the bars and beats are in the audio file, in order that it can later know how to automatically stretch and align them. Think of it as metadata for the audio file.

Where you don't have a steady tempo, automatic time‑stretching and tempo definition can be something of a minefield, and this is where the Manual Adjust function comes in handy. Using this, you're able to drag the first beat of the grid (denoted by a green flag) to align it with the first downbeat of the audio clip. You then click on the waveform at the first beat of the second bar (of the ruler) and a red flag appears, which you can drag to align with the first beat of the second bar of audio. You can perform the same task for any bar, but dragging any of these red flags will adjust the whole grid: it won't warp that bar alone.

Manual adjustments to an audio clip's grid are made by clicking and dragging the bars to the down beats of the audio, and Control or Alt‑clicking allows you to warp the bar and beat lengths to fit an uneven tempo.

To warp the grid so that the bars match an audio file's uneven tempo, you need to Alt‑click (Option‑click on Macs), which turns the flag pink. In this state, the flag can be dragged to the relevant position, stretching or shrinking the previous bar, and slipping all the following ones forward or backward accordingly. This enables you to define a variable tempo for the audio that the time‑stretching algorithms can reference. You can drill down further too: Control‑clicking will give you a blue flag, which allows you to stretch or shrink an individual beat on the grid, without affecting the position of the beats or bars to either side.

Playback

Once you've defined the tempo of the clip, you can snap the audio to the project tempo by clicking in the Playback tab. All you need do is ensure that the crotchet symbol is lit, indicating that the clip is in 'Straighten Up' mode — what was previously called 'Musical Mode'. Not only will the audio snap to the project tempo, but it will stretch and shrink as you change the tempo. As well as the obvious potential for remix fun, this is great for subtler tweaks, such as bumping up the tempo of a chorus by a couple of bpm using the global tempo track.

It's also in this section that you can determine which algorithm is used for real‑time time‑stretching. There are different ones for plucked instruments, percussion, vocals and so on — all of which are described in detail in the manual. Suffice it to say that which one you choose makes a significant difference to the results, so be sure to select the best one for each clip.

Beneath this section, you're able to alter the degree of quantisation and adjust the feel, using the Swing slider. This may seem like a straightforward function, but it's incredibly useful if you're trying to work with loops from disparate sources, or perhaps tweaking the groove in a remix. You also get controls for fixed pitch‑shifting up to an octave either way, and locking to the new global Transpose track, so you can alter the pitch of the audio clip in real time without affecting the length. I've found this very useful for tasks like dropping or raising the pitch of a kick drum so it fits better with a bass part.

Warp Factor

The sharper‑eyed amongst you may have noticed that I skipped a function in the Playback section: the Free Warp facility, which is arguably the most practical of the real-time processes found here. What it does is allow you to stretch or shrink sections of the audio file manually, without having to force it to the Project tempo. In other words, you can Warp the audio to fit a grid, rather than Warping the grid to fit the audio. And you don't need to define the audio file's tempo to do this.

Using the Free Warp function allows you to stretch individual bars and beats, to bring them into alignment with other parts in your project.

The new operation manual goes into plenty of detail about using this feature to lock audio to tempo, but I find that its most useful function is the simplest. To create a Warp Tab, click and hold at any point in the file; drag the tab in the timeline to position it where you want on the audio file; then drag the tab on the waveform itself to warp the audio. It's easy this way to perform basic timing corrections — say, tweaking the timing of a double-tracked guitar. You just create three Warp Tabs: the first defines the beginning of the section that you want to be Warped (probably the end of the previous note); the second the time-critical point that you wish to move (the start of the note you want to move); and the third the end of the audio that you want to be processed (probably the start of the next note). Dragging the second of these Warp Tabs will bring the note into line, stretching and shrinking the audio in between the two other tabs. As you do this, you can also see the audio change in the Project window at the same time, although it helps to have plenty of screen space available to line things up by sight.
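The underlying idea is a piecewise-linear map between original and warped time, which a few lines of illustrative Python can show. The tab positions here are invented, and this isn't how Cubase implements Free Warp internally.

import numpy as np

# Three hypothetical Warp Tabs, expressed as (original_time_s, warped_time_s) pairs:
# the first and last pin the surrounding notes, the middle one is the note being moved.
warp_tabs = [(1.00, 1.00),   # end of previous note (anchored)
             (1.50, 1.46),   # start of the late note, pulled 40 ms earlier
             (2.00, 2.00)]   # start of the next note (anchored)

def warped_time(t, tabs):
    """Piecewise-linear map from original time to warped time: audio between
    two tabs is uniformly stretched or shrunk, nothing outside them moves."""
    orig, warped = zip(*tabs)
    return float(np.interp(t, orig, warped))

print(warped_time(1.25, warp_tabs))   # halfway to the middle tab -> pulled half the distance (1.23)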

Point Break

If you need to make several timing corrections to an audio part, you can create warp tabs from any hitpoints that you've defined.

Where you need to perform many such Warps, it is often easier to generate Warp tabs for every note, in order that you can go through, tweaking the ones that require it. You could of course use the grid-warp approach described earlier, but you won't always want to quantise the whole file; sometimes it's nice to choose which timing imperfections to correct and which to leave for their, erm, endearing human quality. It's also helpful for dialogue editing, where you're not working to a musical tempo.

The Hitpoint editor is handy for a number of applications that we've covered many times before (such as creating REX‑style audio slices, or extracting groove templates) and it is equally useful when it comes to the Audio Warp process. Generating Hitpoints is pretty intuitive: you click on the Hitpoint section in the Sample Editor and adjust the Sensitivity slider to make the Hitpoints appear. Once you're broadly happy with their position, you can tweak or delete them, or add new ones by clicking in the timeline.

Personally, I've always found Hitpoint detection frustrating in Cubase, as the Sensitivity slider is rather fiddly. If you have a track with lots of leakage it can be difficult to set the threshold accurately. Where the leakage is dispensable, an alternative is to use the Detect Silence function to gate the part and split it into separate events. Using the glue tool, you can reassemble the part to make it the correct length (remember to draw in empty audio parts at the beginning and end of the loop using the pencil tool, so that the part is the same length as the original). Now, simply bounce the part to a new audio file, using the Audio/Bounce Selection command, which you can access by selecting the new part and right‑clicking. I offer this as an alternative approach because Detect Silence seems to be a more powerful, user-friendly tool for detecting transients (it would be great to see a similar user interface in the Hitpoint editor). You should find that Hitpoint detection works much better on the new part.
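If you're wondering what a function like Detect Silence is doing under the hood, here's a loose sketch of threshold-based region detection. It's hypothetical Python, with invented open/close thresholds and timing, not Steinberg's actual implementation.

import numpy as np

def detect_regions(audio, sr, open_db=-40.0, close_db=-45.0, min_gap_ms=50.0):
    """Loose 'detect silence' sketch: return (start, end) sample ranges where a
    smoothed level stays above a threshold, merging regions split by tiny gaps."""
    audio = np.asarray(audio, dtype=float)
    # Peak-hold envelope with ~10 ms decay, so the level doesn't dip to zero
    # at every zero-crossing within a note.
    env = np.abs(audio)
    decay = np.exp(-1.0 / (sr * 0.01))
    for i in range(1, len(env)):
        env[i] = max(env[i], decay * env[i - 1])

    open_t, close_t = 10 ** (open_db / 20.0), 10 ** (close_db / 20.0)
    regions, start, in_region = [], 0, False
    for i, level in enumerate(env):
        if not in_region and level > open_t:
            in_region, start = True, i
        elif in_region and level < close_t:
            in_region = False
            regions.append((start, i))
    if in_region:
        regions.append((start, len(env)))

    # Merge regions separated by less 'silence' than min_gap_ms.
    merged, gap = [], int(sr * min_gap_ms / 1000.0)
    for region in regions:
        if merged and region[0] - merged[-1][1] < gap:
            merged[-1] = (merged[-1][0], region[1])
        else:
            merged.append(region)
    return merged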

Whichever way you go about generating the Hitpoints, once you have them, you're able to use them to generate Warp Tabs. You can access this function via the Audio/Realtime Processing menu, as shown in the screenshot (below left), but for some reason there's no control for this in the Sample Editor: it would make sense to have a dedicated button for this alongside the Free Warp button.

Although it's not intended for this purpose, and not ideal for all material, the Detect Silence function can tidy up your audio to make it easier to use the slightly fiddly Hitpoint Sensitivity slider.

This is all pretty straightforward where you're working with material with distinct transients, but not all sounds have them. If you have something that has a slower build, with a time‑critical event part way through (such as a reversed cymbal hit, for example, where the end of the note is the crucial quantise point), you can define a Q‑point for an individual hitpoint, which will be referenced by any slicing or audio‑warp processing to ensure that the note plays in time. Q‑points aren't enabled by default (I've no idea why), but you can activate them by navigating to the Editing/Audio page and ticking the 'Hitpoints Have Q‑Points' option.

Range & Processes

The two remaining sections in the Sample Editor are straightforward but useful additions. The Range section allows you to quickly and easily select a range within the audio clip — for example, selecting the area between the loop markers; or turning a selected range into a loop, which can be particularly useful when you're defining the part's tempo. The Processes section simply provides a convenient alternative means of accessing the range of off‑line processes from the main menu.

All the off‑line processing options are now, usefully, available from the Sample Editor.

Sounding Better

All of the processes that I've described so far combine to make the Sample Editor a powerful tool for audio manipulation of individual audio files. Treating audio in this 'elastic' way — both in terms of tempo and pitch — will be a boon to anyone who wants to manipulate loops. Combined with tools such as the new Global Tempo track, and the Arrange track (formerly called the Play Order track), you also have pretty much unlimited flexibility for remixing.

Bear in mind, though, that it's easy to get carried away. Whenever you process audio there will be at least some undesirable artifacts, and it's often a case of "less is more", unless you're stretching or pitch‑shifting as an effect. It isn't really feasible to transpose a bass line by an octave, for example, or to double the length of a loop.

It's also worth considering that the real‑time algorithms that we've been using here to Warp audio are of lower quality than Cubase's off‑line algorithms, but thankfully the Sample Editor includes a button that allows you to 'flatten' Warped audio. This uses the higher‑quality off‑line algorithm to perform the same processes that you've already 'sketched out', so if there are only faintly detectable artifacts, you may find that you're able to get away with it through this flattening process. In any case, it would be worth doing this for each Warped track before bouncing your final mix.

Warped Wishlist

Not all audio events commence with their tempo‑critical point. Reverse drum hits, for example, start with a tail, and it is the end that needs to fit the rhythm. In such situations, it's worth using Q‑points: the Hitpoint defines the note, while the Q‑point defines which part of it is time‑critical when you manipulate it.

I said above that the Sample Editor is a very powerful tool for manipulating individual audio files. I've also lamented the fact that you're not able to perform these operations directly in the Project window. Not only would that make simple processes easier to access, but it would allow you to line up different parts by sight, and to decide which to bring in to line with the other. Technically, this may not be as straightforward to implement as it sounds: on any given track you might be working with many different audio files, for example. But it can't be an insurmountable problem.

I first started examining the Audio Warp functionality when reader Sam Grant asked what Cubase offered that could compete with Pro Tools' 'Elastic Time' function. For those who aren't familiar with Elastic Time, it's a very powerful and intuitive means of manipulating the tempo and length of recorded audio which has the advantage that you can use it on multiple tracks simultaneously. So, for example, you're able to detect the transients in a multitrack drum session; then, by tweaking the timing of a transient on one track, you can move the other transients — while preserving the timing differences between them (so that you don't pull a carefully placed distant room mic to the same place as the close-miked snare, for example). Cubase is great when you want to quantise audio and to force it to tempo, but I couldn't find any way to manipulate multiple parts, nor to replicate the Elastic Time function; at least not without tedious manual editing. I'm told that it is something that Steinberg R&D are actively researching. Meanwhile, if you've figured a clever way around this, drop us a line and we'll spread the word! 


Published January 2009

Monday, May 24, 2021

Steinberg Cubase 5

 By Mark Wherry


The first paid‑for update to Cubase for two years introduces some major innovations for sequencing and composition, including integrated Melodyne‑style pitch correction and editing.

It's been nearly nine years since Sound On Sound last reviewed Cubase 5. However, that was version 5 of the original Cubase application, the last version released before the introduction of Cubase SX. Since then, Steinberg have been consistently improving Cubase alongside their other, more post‑production‑oriented audio application, Nuendo. Cubase 4, released a little over two years ago, dropped the 'SX' suffix, returning the product to its original name once again.

Unlike the earlier versions of Cubase SX, which added interesting tools for musicians to embrace, Cubase 4, if I'm being honest, just didn't seem that exciting to me. Most of the new functionality centred around the new Media Bay, which only really helped you navigate the content that was provided by Steinberg, and VST3, an update to Steinberg's plug‑in technology that was initially unavailable to third‑party developers.

However, in the two years following Cubase 4's release, Steinberg released two important updates: 4.1, bringing significantly better mixer routing (and parity with Nuendo 4.1), and, more recently, 4.5, which introduced VST Sound as a new way to integrate content into Media Bay. In addition, the VST3 SDK was finally made available to third‑party developers. Although the uptake has been slow, the first third‑party VST3 plug‑ins have now started to appear, and you get the feeling that Cubase 4 was, in retrospect, setting the scene for greater things to come. Enter Cubase 5.

The Tracks Of My Tempo

Cubase 5 now makes it possible to include the Tempo and Time Signature tracks in the Track List of the Project window. Note how the selected Time Signature event shows up in the Event Info Line.

Bizarrely, one of the features I was happiest to see in Cubase 5 is also, by comparison, one of the smallest. I'm sure that I haven't been the only Cubase user who, over the years, has dreamed of being able to see and edit Tempo and Time Signature events in the Project window without having to open the Tempo Track editor. Well, it's finally possible: you can now create a Tempo track and a Time Signature track in the Track List on the Project window, and edit Tempo and Time Signature events directly on these tracks.

The best place for the Tempo and Time Signature tracks is obviously at the top of the Track List, and so Cubase's Divide Track List feature is essential here to ensure that these tracks stay next to the ruler, even if you scroll the Track List. Part of me wishes Steinberg would have incorporated these new tracks into a larger ruler, but this implementation is at least consistent with adding Video, Marker, Arranger, Transpose, and other ruler tracks. And an added bonus is that you have easy access to Process Tempo and Process Bars via Track buttons on the Tempo and Time Signature tracks respectively.

Perhaps the biggest down side to having Tempo and Time Signature variation implemented within tracks rather than as part of the main ruler is that it's currently not possible to add these types of tracks to other editor windows, such as the Key editor. Given that you can now edit tempo and time signature events in the Project window, it would be handy to be able to do this while editing MIDI notes as well (and no, using the in‑place MIDI editing just isn't the same). Even though Logic Pro's piano‑roll editor and Pro Tools 8's MIDI editor have fewer features than Cubase's Key editor, both afford you the ability to manipulate tempo and signatures without having to open another window.

When enabled, the Virtual Keyboard shows up at the end of the Transport panel and transforms your computer's keyboard into a MIDI input device.

Another small yet potentially handy feature is that Steinberg have brought the Virtual Keyboard from their Sequel 2 into Cubase, allowing you to use your computer's keyboard as a MIDI input device. When Virtual Keyboard is enabled (from the Devices menu or by toggling Alt/Option‑K), a new section of the transport panel shows a visual representation of what keys on your keyboard trigger the Virtual Keyboard.

A series of indicators underneath the Virtual Keyboard display shows you which octave you're playing, and you can adjust this range with the left and right cursor keys. A slider to the right of the Virtual Keyboard shows you the velocity level that will be triggered (although you can't actually see the precise number), which can be adjusted using the up and down arrow keys.

An alternate piano‑roll display mode is also provided for the Virtual Keyboard, offering enhanced mouse control over a three‑octave range and adding a few extra notes playable by the computer keyboard as well, although you unfortunately lose the on‑screen labels to let you know what keys trigger what notes. In this mode, clicking a note and dragging the mouse horizontally adds pitch‑bend, while dragging vertically controls modulation, and additional sliders are displayed to the left of the keyboard for these controls as well.

While you're probably not going to use your computer's keyboard for performing a virtuosic solo, the Virtual Keyboard is invaluable for those situations when you might only have your laptop and no access to a MIDI keyboard. My only complaint is that when Virtual Keyboard is enabled, only a couple of other key commands are supported (such as the space bar), which becomes a real pain. Obviously the Virtual Keyboard isn't going to be compatible with every key command, but I really think all commands that use modifiers or the numeric keypad should be allowed. As it is, you have to keep toggling Virtual Keyboard on and off, usually when you realise it's still enabled and that's why the key command you just pressed isn't working.

Vary Impressive

VariAudio is a new way to edit the pitch (and, to some extent, the timing) of monophonic notes in an audio event directly in the Sample editor.

The first big feature that really caught my attention in Cubase 5 is VariAudio, which makes it possible to detect and manipulate the individual notes of a monophonic audio recording directly in Cubase's Sample Editor window. If you've ever seen or used Celemony's Melodyne, you'll instantly get what VariAudio is about, and while it was obvious that this type of technology belonged in a digital audio workstation, as opposed to having to exchange audio files with another application or plug‑in, it's a pleasant surprise to see Steinberg implementing such functionality in Cubase 5. Note, however, that VariAudio is only available in the full Cubase 5: Cubase Studio 5 users will have to upgrade to Cubase to get this feature.

To edit an audio recording with VariAudio, you simply open it in the Sample editor (usually by double‑clicking an audio event on the Project window) and open the VariAudio section of the Sample editor's Inspector to switch to this mode. You'll notice that the vertical amplitude scale is replaced by a piano keyboard, and a single waveform will be displayed, even if the audio event itself is stereo. In order to edit the individual notes, Cubase needs to analyse the audio and create so‑called Segments, where each Segment will represent a single note in the audio. You can do this by enabling the 'Pitch & Warp' mode in the VariAudio Inspector section.

A Segment is basically akin to a note in the Key editor, in that its pitch is plotted on the vertical axis and time position on the horizontal. Should VariAudio not quite detect the Segments as it should, a Segments mode is also provided to let you adjust where the Segments are defined within a piece of audio. For example, if VariAudio fails to distinguish two notes of the same pitch that are very close to each other, you can simply divide that one Segment into two at the appropriate place on the waveform.

Unlike notes in the Key editor, Segments don't necessarily fall onto discrete pitch steps, since, especially with singers, it's quite unlikely that a precise frequency for that pitch will have been recorded, for both creative and technical reasons. If you hover the mouse over a Segment, the editor will display the numerical deviation from the detected pitch in cents (if you have sufficient horizontal resolution), a value that can easily be adjusted either in the Info Line or by using other mouse and keyboard commands.

Another visual cue that's displayed when you hover the mouse over a Segment is a translucent piano keyboard that appears in the background behind that particular note. This is useful because the main background for the editor is a single‑colour gradient that, unlike the background in the Key editor, doesn't distinguish black and white notes. Having a translucent keyboard appear behind a note is fancy, but in some ways, just having the whole background show the translucent keyboard might actually be more useful, at least as an option. Knowing the pitches of notes you're not hovering over can actually get quite tricky if you have a large display.

Segments can be dragged up and down to different pitches, just like notes in the Key editor, which is great for rewriting parts. And though you can't duplicate Segments, you can always duplicate the audio event being edited onto another track in the Project window and edit the Segments in the duplicate to create a harmony line, for example. One particularly nice touch when rewriting the pitches of notes in this way is a MIDI step input mode, which works similarly to MIDI step input in the Key editor, letting you adjust each successive Segment's pitch simply by playing it on your MIDI keyboard.

The Time Warp, Again

These features are great for creative editing, but VariAudio can be used for corrective editing as well, enabling you to easily fix the tuning of notes, or even iron out pitch deviations within a note, such as a singer's vibrato. You don't necessarily have to adjust each Segment manually, thanks to two useful functions that proportionally adjust the currently selected Segments closer to an absolute state. Pitch Quantize gradually pulls a note closer to its identified pitch, while Straighten Pitch evens out the micro‑tuning within the note that VariAudio detected. Very handy indeed.
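As a sketch of what 'proportional' correction means here, the following illustrative Python pulls pitches part of the way towards their targets. The function names simply mirror the two Cubase controls; the code is not Steinberg's algorithm, and the amounts and pitch values are invented.

import numpy as np

def pitch_quantize(segment_pitches_midi, amount=0.5):
    """'Pitch Quantize'-style correction: pull each Segment's average pitch
    part of the way towards the nearest semitone (amount=1.0 snaps fully)."""
    pitches = np.asarray(segment_pitches_midi, dtype=float)
    targets = np.round(pitches)
    return pitches + amount * (targets - pitches)

def straighten_pitch(pitch_curve_midi, amount=0.5):
    """'Straighten Pitch'-style correction: flatten the micro-tuning within one
    Segment towards its own average, reducing vibrato or drift by `amount`."""
    curve = np.asarray(pitch_curve_midi, dtype=float)
    centre = curve.mean()
    return centre + (1.0 - amount) * (curve - centre)

print(pitch_quantize([59.7, 62.2], amount=1.0))   # -> [60. 62.]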

What's nice about VariAudio's micro‑tuning detection is the way the editor plots this information as a graph for each Segment, allowing you to easily see the pitch deviation within a Segment. VariAudio even allows you to go beyond simply straightening out this micro‑tuning, so you can tilt the micro‑tuning graph by dragging the top left or top right of a Segment, making it possible to easily apply a portamento from or to the next Segment.

You can freely drag Segments up and down to adjust the pitch, but not from side to side. However, it is possible to adjust the timing by dragging either the bottom‑left or bottom‑right corner of a Segment. This causes Warp Tabs to be created, meaning that adjusting the start and end points of a Segment can affect any Segments that might be either side of the one in question. This limits the type of timing edits that can be performed using VariAudio, but it's not dissimilar to the way this type of editing is handled in the current version of Melodyne.

As well as manipulating audio, VariAudio has another neat trick up its sleeve: converting the detected Segments in an audio event to a MIDI part. And what's particularly useful is that this process can convert micro‑tuning information into either static or continuous pitch‑bend data. If you choose static pitch‑bend conversion, the pitch‑bend will be used to provide the tuning adjustment in cents, as opposed to the continuous conversion, which will attempt to mimic the complete micro‑tuning graph with pitch‑bend data.
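The cents-to-pitch-bend arithmetic behind such a conversion is simple enough to sketch. This assumes a receiving instrument with a ±2-semitone bend range, and the helper and example values are purely illustrative rather than Cubase's own conversion.

def cents_to_pitch_bend(cents, bend_range_semitones=2.0):
    """Map a tuning offset in cents to a 14-bit MIDI pitch-bend value
    (8192 = centre), assuming a +/-2 semitone bend range on the synth."""
    semitones = cents / 100.0
    value = int(round(8192 + (semitones / bend_range_semitones) * 8192))
    return max(0, min(16383, value))

# 'Static' conversion: one bend value for the whole note's average offset.
print(cents_to_pitch_bend(-35))          # note detected 35 cents flat -> 6758

# 'Continuous' conversion: one bend event per point on the micro-tuning graph.
micro_tuning = [-35, -20, -5, +10, +4]   # hypothetical curve, in cents
print([cents_to_pitch_bend(c) for c in micro_tuning])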

Overall, VariAudio is a pretty impressive feature, both in terms of what it lets you do and the quality of the results, although I found that pitch adjustments sounded more natural than time adjustments, especially when performing a more drastic edit. And while Cubase doesn't necessarily replace a product like Celemony's Melodyne, especially with the forthcoming version of Melodyne supporting polyphonic material, it's so much more convenient to have this type of editing in your digital audio workstation, particularly since the edits are non‑destructive, are saved with your Project, and can easily be adjusted at any time.

The new Pitch Correct plug‑in provides automatic, real‑time pitch correction for situations when you don't need the editing control afforded by VariAudio.

Accompanying the new VariAudio feature in the pitch‑correction department is the new Pitch Correct plug‑in, which is included in both Cubase and Cubase Studio, and is based on Yamaha's Pitch Fix technology. Pitch Correct is basically Steinberg's answer to Auto‑Tune, and corrects, in real time, the pitch of notes detected in monophonic material.

Express Yourself

In recent years, sample‑based instruments have continued to grow and become more complex, offering an incredible amount of control for the composer looking to create natural‑sounding performances of acoustic instruments. However, for the most part, the method for how we use sequencers to program these sampled instruments has remained largely the same, and the sequencer is mostly dumb about the context of the music being programmed.

One example of this is the way many sampled instruments use key switches (notes on a MIDI keyboard that trigger an action in a sampled instrument) to select different articulations. For instance, you might have a violin instrument where the key switches allow you to select different playing styles, such as legato, staccato, pizzicato, and so on. This is fine, but it can make editing the resulting MIDI data quite difficult, since you have to remember what the various key switches trigger — and, rather annoyingly, since they are merely notes, key switches don't chase, so if you jump around to different parts of the Project, most sequencers won't know to go back and find the last key switch to ensure your pizzicato section really does play pizzicato.

Steinberg have solved this problem in a rather neat way in Cubase 5 by introducing a feature called VST Expression that's conceptually similar to Drum Maps. In the same way a Drum Map can be assigned to a MIDI or Instrument track to tell Cubase what drums are assigned to what notes, VST Expression enables an Expression Map to be assigned so that Cubase knows about the various articulations that can be played by the instrument to which the track is routed.

An Expression Map defines Articulations, and there are two different types of Articulation provided: Directions and Attributes. A Direction is a general change in playing style for a duration in a given part, such as when a violinist switches from bowed notes (arco) to pizzicato, whereas an Attribute is an articulation that's just applied to a single note. For example, if our imaginary violinist is playing an arco passage but the occasional note should be played staccato, the Direction will be arco, and the staccato notes will be assigned the staccato Attribute.

The VST Expression Setup window is where you manage, create and edit Expression Maps. Here you can see the included Violins Combi Expression Map for Steinberg's Halion Symphonic Orchestra VST Instrument.

This is obviously pretty powerful, but what's even more useful is that Articulations aren't just there to provide access to key switches. Once an Articulation is defined in an Expression Map, you actually have quite a bit of control over what it will do when active, and having it trigger a key‑switch note is just one possibility. Articulations can send MIDI Program Change messages, change the channel on which MIDI data is sent, and also manipulate the pitch, length and velocity of notes, much like the old MIDI Meaning feature found in the score editors in Cubase and many other sequencers. This means that even if you're working with an older instrument that doesn't support key switches, it's possible to instead create an Expression Map that sends Program Change messages to reproduce the same type of behaviour that we've been discussing for newer, key‑switchable instruments.
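As a way to picture what an Expression Map holds, here's a hypothetical data-structure sketch in Python. The field names and the example violin entries are invented for illustration; real Maps are of course built in the VST Expression Setup window rather than written by hand.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Articulation:
    """One entry in a hypothetical expression map: 'direction' articulations
    apply until the next one, 'attribute' articulations apply to single notes."""
    name: str
    kind: str                                 # "direction" or "attribute"
    remote_key: Optional[int]                 # MIDI note that selects it from your keyboard
    output_keyswitch: Optional[int] = None    # key-switch note sent to the instrument...
    output_program: Optional[int] = None      # ...or a Program Change, for older instruments
    length_scale: float = 1.0                 # optional note-length tweak (e.g. staccato)

violin_map = [
    Articulation("arco",      "direction", remote_key=24, output_keyswitch=24),
    Articulation("pizzicato", "direction", remote_key=25, output_keyswitch=25),
    Articulation("staccato",  "attribute", remote_key=26, output_keyswitch=26, length_scale=0.5),
]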

In addition to these various output mapping options, each Articulation also allows the input mapping of a MIDI note, enabling you to switch Articulations in Cubase from a MIDI keyboard in exactly the same way you would a conventional key‑switchable instrument. This is important because it means that when you perform key switches on your MIDI keyboard, Cubase now knows about these key switches and will record them as Articulation events instead of MIDI notes. Unless, that is, you're using the Retrospective Record feature, since this mode still seems to capture the key switches as notes — an issue which, according to Steinberg, should be fixed very soon.

While the input and output mapping options are quite comprehensive, there are a couple of extra options that would make Expression Maps even more useful. Firstly, it would be great to have the option of sending MIDI Controllers (in addition to notes and program changes) in the output mapping section, and Steinberg will apparently include this in a 5.0.1 update that may even be available by the time you read this. It would be equally useful to have more input mapping options; for instance, you might want to trigger Articulations from a different type of controller, and there are situations where it's useful not to have notes triggering Articulations, such as when you want to select the same Articulation on multiple instruments that have different pitch ranges simultaneously from a single MIDI message.

Map Reading

Working with Expression Maps is pretty simple. You assign a Map to a MIDI or Instrument track via the Expression Map pop‑up in the new VST Expression Inspector Section, from where you can also open the VST Expression Setup window to edit and create Maps. Fortunately, Cubase 5 comes with a selection of ready‑made Expression Maps for the Halion One and Halion Symphonic Orchestra instruments to get you started (HSO is sold separately, but you get a 90‑day demo with Cubase 5). Creating your own Expression Maps is not particularly hard, but does require reading the appropriate chapter in the manual.

A new Articulations controller lane in the Key editor makes it easy to edit Articulations. Note how the Event Info Line now has an Articulations option, which in this example shows that a Half‑Tone Trill Attribute has been applied to the selected note.

While the Articulations controller lane enables you to see all Articulations, from an editing perspective it's most convenient for working with Directions. For assigning Attributes to individual notes, there's a new Articulations option in the Event Info Line. When one or more notes are selected, this lets you assign an Attribute from the pop‑up menu listing all available Attributes in the currently assigned Expression Map.

Although assigned Attributes do show up in the Articulations controller lane, this isn't always the best way to see what Attributes are assigned to what notes, because it's obviously possible to have a chord where the top note might have a different Attribute from the other notes in the chord, an accent, maybe. The ability to colour notes by Attribute would be very helpful. Meanwhile, the List editor is actually one of the clearest places to see all of the Articulation data, since Directions show up as Text events and Attributes are listed in the Comment field of a given MIDI note event.

The Score editor enables you to both see Articulation events in a musically relevant way and add new Articulation symbols via the VST Expression Inspector Section. While the notes aren't interesting, this example shows a direct interpretation of the same data displayed in the Key editor illustration.

One of the most useful editors for interpreting Articulation events is the Score editor, and this is where things could get really interesting for composers who work with notation and would like to have a more precise score to pass onto an orchestrator (via MusicXML), or even directly to a musician. Because Articulations that are defined in an Expression Map can either have a musical symbol or an item of text associated with them, Articulation events automatically appear on the score in the Score editor in a mostly musically correct fashion, which is just fantastic. And although they appear light blue by default, you can easily change the colour of their appearance, if you like, in the Preferences window.

There are a couple of areas where the layout of Articulations could be improved. Firstly, Direction Articulations often end up on the staff, and especially for text events, it would be great to set a default staff offset so they would mostly stay clear, either above or below. Secondly, it would be helpful if there was a default Articulations Text Attribute Set, so that you could globally change the font used for Articulation text.

In addition to showing existing Articulation events, you can also add Articulations to the score, as all of the available Articulations show up as symbols in a new VST Expression Section of the Score editor's Inspector.

VST Expression is a powerful and generally well thought‑out feature, and Steinberg have clearly thought about what a composer will need. As with VariAudio, it was perhaps inevitable that sequencers would incorporate such a feature, now that composers are working with increasingly large and complex soft‑synth setups. But I think Steinberg deserve credit for being the first to address this need, and for incorporating it meaningfully into so many areas of the program, and especially into the Score editor.

Designer Beats

Staying with the theme of new MIDI‑related composing features, an oft‑neglected and sadly under-used feature of Cubase is the ability to use MIDI plug‑ins to process MIDI events on MIDI tracks, just as you would process audio events on audio tracks with VST effects plug‑ins. In Cubase 5, Steinberg have revitalised their collection of bundled MIDI plug‑ins with updated interfaces and some completely new plug‑ins, such as MIDI Monitor.

MIDI Monitor is a rather useful tool that lists the MIDI events being output by the track on which it's inserted. You can even save the listed events in a text file for further study. To help prevent the list from becoming cluttered with events that you don't want to see, various filters are provided for different types of MIDI events, and you can even filter out events played back from MIDI parts or live MIDI input.
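
To make the idea concrete, here's a minimal sketch (in Python, using the third‑party mido library) of the kind of event filtering and text‑file logging MIDI Monitor performs; the port name, filter set and function names are my own assumptions rather than anything taken from the plug‑in itself.

```python
# Illustrative sketch only: log incoming MIDI events to a text file,
# skipping event types we don't want to see.
import mido

FILTERED_TYPES = {"clock", "aftertouch"}      # event types we choose not to log

def monitor(port_name: str, logfile: str, max_events: int = 100) -> None:
    logged = 0
    with mido.open_input(port_name) as port, open(logfile, "w") as log:
        for msg in port:                       # blocks, yielding incoming messages
            if msg.type in FILTERED_TYPES:
                continue                       # filter by event type, as MIDI Monitor can
            print(msg)
            log.write(str(msg) + "\n")         # keep a text-file record of the events
            logged += 1
            if logged >= max_events:
                break

# monitor("My MIDI Keyboard", "midi_log.txt")  # port name is hypothetical
```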

Beat Designer is a new MIDI plug‑in step sequencer that makes it easy to create drum patterns.

Besides analysis, if you spend time creating drum loops from individual samples, you'll almost certainly like the new Beat Designer MIDI plug‑in, a pattern‑based step sequencer ideal for programming drums. Each pattern consists of a number of lanes, and each lane can be set to trigger a specific MIDI pitch. The name of a lane is based on the Drum Map that's set for the track on which Beat Designer is being used, and a General MIDI map is used if no Drum Map is set. This is all right, except that I'm guessing most people will use Beat Designer with VST Instruments, and wouldn't it be nice if somehow the drum names used in the VST Instrument could find their way into Beat Designer? One would hope this would at least be possible with those made by Steinberg.

Adding steps is a simple matter of clicking in the step display for a given lane, and you can remove a step by clicking it again. You can also set the velocity for a step by clicking and dragging, and the colour of the step changes to reflect the velocity, which is a nice touch; Beat Designer also makes it easy to add flams to individual steps. You can set between one and three flams for a step by clicking in the bottom part of that step, and the number of flams is indicated by one, two or three dots. At the bottom of the Beat Designer interface are global controls for how the flams are performed, letting you adjust their timing and velocity. Being able to vary the timing is actually very neat, because this makes it possible to have the flams play before or after the beat. Staying with timing, you can also set each lane to one of two swing settings, in addition to setting a slide value to independently move a lane forwards or backwards in time.

A pattern can be between one and 64 steps long, and you can set the resolution of these steps anywhere between a half note and a 128th note, with various triplet options along the way. The resolution of the pattern also dictates the note length of a step, so if the resolution is eighth‑note, each step will last for a quaver on an eighth‑note grid. Because the triggered length is global across all lanes, I can't help but think it would be useful to have a gate control on each lane, so you could add to or subtract from the global note length.
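
For readers who think in code, here's a rough sketch of the data model those lanes, steps, flams and resolution settings imply. The class and field names are invented for illustration and aren't Steinberg's.

```python
# A minimal sketch, assuming a pattern is simply lanes of steps with per-step
# velocity and flams, per-lane swing and slide, and a global grid resolution.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Step:
    on: bool = False
    velocity: int = 100        # set by click-dragging; the step's colour reflects it
    flams: int = 0             # 0-3 extra hits, shown as one to three dots

@dataclass
class Lane:
    midi_pitch: int            # the drum sound this lane triggers
    swing_setting: int = 0     # 0 = none, 1 or 2 = one of the two swing settings
    slide_ticks: int = 0       # nudge the whole lane earlier (negative) or later
    steps: List[Step] = field(default_factory=list)   # 1-64 steps per pattern

@dataclass
class Pattern:
    resolution_beats: float    # step length in quarter notes, e.g. 0.5 = eighth note
    lanes: List[Lane] = field(default_factory=list)

    def note_length_beats(self) -> float:
        # The grid resolution also dictates each step's note length,
        # so an eighth-note grid triggers eighth-note (quaver) hits.
        return self.resolution_beats
```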

The hierarchy of patterns is a little confusing at first but, in essence, a single pattern bank (which can be stored as a preset) consists of four sub‑banks, and each sub‑bank contains 12 patterns. The pattern selector is represented by an on‑screen one‑octave keyboard, and you can either select patterns using a combination of Beat Designer's on‑screen keyboard and sub‑bank buttons, or remotely via MIDI over a four‑octave range when Jump mode is enabled. With Jump mode disabled, incoming notes still trigger sounds on the MIDI track rather than changing patterns, which is also quite useful.
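
As a worked example of that hierarchy, the following sketch maps an incoming MIDI note in Jump mode to a sub‑bank and pattern; the base note of the four‑octave range is an assumption made purely for illustration.

```python
# Hypothetical sketch: a bank holds four sub-banks of 12 patterns, and in Jump
# mode a four-octave span of notes (48 semitones) addresses those 48 patterns.
BASE_NOTE = 36          # assumed lowest note of the four-octave Jump range
PATTERNS_PER_SUBBANK = 12
SUBBANKS = 4

def pattern_from_note(note: int):
    """Return (sub_bank, pattern) for a note in the Jump range, else None."""
    index = note - BASE_NOTE
    if not 0 <= index < SUBBANKS * PATTERNS_PER_SUBBANK:
        return None                                  # outside the four-octave range
    return divmod(index, PATTERNS_PER_SUBBANK)       # (sub_bank 0-3, pattern 0-11)

# pattern_from_note(36) -> (0, 0); pattern_from_note(49) -> (1, 1)
```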

Agent Of Your Tunes?

Groove Agent ONE is an MPC‑inspired drum machine VST Instrument that can also be used to play the individual slices of a drum loop.

A new instrument plug‑in that works particularly well with Beat Designer is Groove Agent ONE. This is not a version of Steinberg's separately available Groove Agent VST Instrument, but Steinberg's take on an MPC‑like drum machine, even offering the ability to import mappings from MPC PGM‑format files. It provides a really simple way of playing back one‑shot samples or loops.

Although Groove Agent ONE is supplied with a number of preset kits, what's really nice about this plug‑in is how easy it is to create your own kits. As you would expect, there are 16 virtual pads onto which you can drag audio events from your Project or audio files from the Media Bay, and then trigger the assigned sound by clicking the pad or by setting a MIDI note that will trigger it remotely. It's possible to drag multiple sounds to a single pad, thereby creating different layers that are triggered via velocity; and if 16 pads aren't quite enough, Groove Agent ONE actually offers eight Groups of 16 pads (similar to an MPC's pad banks), accessible via the Group buttons. These conveniently show a red outline if they contain assigned pads.

An LCD‑style view in Groove Agent ONE's interface enables you to adjust various settings for the currently selected pad, such as tuning, adding a filter, adjusting the amplifier envelope, setting the play direction (making it easy to reverse a sample non‑destructively), defining whether the pad triggers the sample to play one‑shot or for as long as the pad (or corresponding MIDI note) is held down, and so on. There are 16 stereo outputs available from Groove Agent ONE, and it's possible to assign individual pads to any of these outputs. One slightly curious omission in terms of editable parameters is that it doesn't seem to be possible to adjust the sample start and end points, which is a shame, since a waveform overview is available from the LCD.
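
Pulling those details together, here's a hypothetical sketch of the structure a Groove Agent ONE kit implies: eight groups of 16 pads, velocity‑switched sample layers per pad and the per‑pad settings exposed in the LCD view. All names and note assignments are made up for illustration.

```python
# A sketch under stated assumptions, not Steinberg's actual data structures.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SampleLayer:
    sample_path: str
    velocity_min: int = 1        # layers are chosen by incoming velocity
    velocity_max: int = 127

@dataclass
class Pad:
    trigger_note: int            # MIDI note that fires this pad
    layers: List[SampleLayer] = field(default_factory=list)
    coarse_tune: int = 0
    reverse: bool = False        # play direction, for non-destructive reversal
    one_shot: bool = True        # False = play only while the pad/note is held
    output: int = 1              # one of the 16 stereo outputs

    def layer_for_velocity(self, velocity: int) -> Optional[SampleLayer]:
        for layer in self.layers:
            if layer.velocity_min <= velocity <= layer.velocity_max:
                return layer
        return None

# A kit: 8 groups x 16 pads, like the MPC-style pad banks described above.
# The trigger notes here are placeholders.
kit: List[List[Pad]] = [[Pad(trigger_note=36 + p) for p in range(16)]
                        for _ in range(8)]
```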

A particularly neat trick Groove Agent ONE has up its virtual sleeve is that it's also possible to use it for playing back sliced loops. Simply slice up a loop in the Sample editor (which you can easily do by creating hitpoints and using the Slice & Close button), open the sliced loop in the audio Part editor, select all the audio events and drag them onto a pad. The slices will be added to consecutive pads so that you can now trigger elements of the loop from a MIDI keyboard.

Better still is that, by dragging the MIDI Export pad in the Exchange section into your Cubase Project, you can make Groove Agent ONE export a MIDI file that plays the slices for you as a loop again, just as you can do with some third‑party instruments, such as Spectrasonics' Stylus RMX. This is a pretty nice feature, and my only complaint is that the MIDI file you import is always added to a new track — you can't drag the MIDI file onto a track that's already created and has its output set. However, this isn't strictly a Groove Agent ONE issue and is something of a minor point.
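
To illustrate what that export amounts to, here's a hedged sketch that assigns slice start times to consecutive notes and writes a MIDI file playing them back in order, using the third‑party mido library. The note numbers and resolution are assumptions, and this isn't Groove Agent ONE's actual implementation.

```python
# Illustrative only: turn slice start times (in beats) into a MIDI file that
# retriggers consecutive pads so the slices play back as the original loop.
import mido

PPQ = 480                 # ticks per quarter note (assumed)
FIRST_NOTE = 36           # assumed note of the first pad

def export_slice_midi(slice_starts_beats, loop_length_beats, path="slices.mid"):
    mid = mido.MidiFile(ticks_per_beat=PPQ)
    track = mido.MidiTrack()
    mid.tracks.append(track)
    boundaries = list(slice_starts_beats) + [loop_length_beats]
    cursor = 0
    for i in range(len(slice_starts_beats)):
        start = int(boundaries[i] * PPQ)
        end = int(boundaries[i + 1] * PPQ)
        note = FIRST_NOTE + i                      # consecutive pads, in order
        track.append(mido.Message("note_on", note=note, velocity=100,
                                  time=start - cursor))    # delta time in ticks
        track.append(mido.Message("note_off", note=note, velocity=0,
                                  time=end - start))
        cursor = end
    mid.save(path)

# Eight eighth-note slices over a one-bar (4-beat) loop:
# export_slice_midi([0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5], 4.0)
```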

Five Stars

Overall, Cubase 5 is a really fantastic update and, compared to version 4, offers genuinely musically useful new features that cater to many different groups of musicians. From major new facilities like VST Expression and VariAudio to more workflow‑oriented improvements, such as the batch export feature in the Export Audio window (which I didn't have space to discuss in more detail, but which lets you export multiple tracks or mixer channels with a single command), there really is something in this update to make every Cubase user smile.

Like any piece of software, it would seem, Cubase 5 isn't perfect. There are a few quirks here and there, such as an instability that can manifest itself when you have multiple controller lanes open in the Key editor, or the fact that certain 'on top' windows like the transport panel don't always minimise correctly with the parent window on Windows Vista. But at least Steinberg will be addressing many of these issues in the forthcoming 5.0.1 update mentioned earlier. One particular area of the application I wish Steinberg would look at in a future version, though, is the Preferences window.

With so many preferences, it's becoming really hard to remember where to find certain options, especially when they get moved around from version to version or disappear altogether. Mac OS X Tiger's System Preferences window showed one possible approach, allowing you to search for a setting and having the window highlight where suitable matches might be found. And even Reaper, the "reasonably priced" digital audio workstation, offers a Find field in its Preferences window. We really need something like this in Cubase.

Ultimately, though, I actually do like the direction of Cubase 5. Rather than simply focus on redoing the user interface, or adding only 'me too' features from competing products, Steinberg seem to have really thought about functionality that will help users get more from the application. If Sound On Sound ever has to review Cubase 5 for a third time, there will be a high standard to beat.  

The Cubase Studio 5 Disclaimer

Unless otherwise stated, references to Cubase 5 in this review relate to both the product that's named Cubase 5 and its feature‑reduced sibling, Cubase Studio 5. If someone could just explain to me why the version with fewer features is the one that's called Cubase Studio, I'd love to understand this particularly odd convention.

Cubase 64

With more and more computers being sold with 64‑bit operating systems, and musicians' demands increasing due to the desire to run more plug‑ins and ever‑larger sample libraries, having 64‑bit music applications is becoming quite important. Although Steinberg have made 64‑bit versions of Cubase 4 (and Nuendo 4) available for some time, Cubase 5 is the first fully supported 64‑bit release of Cubase for the 64‑bit (x64) version of Windows Vista. This is a great thing, especially for those working with large Projects, although there are a couple of points to bear in mind if you're considering running the 64‑bit version of Cubase 5.

The most important consideration is that there need to be 64‑bit drivers for all of the hardware you require on your system. Most computer components should be covered, including most audio hardware, but some musician‑specific hardware lacks 64‑bit support at present, including DSP cards and instruments like Access's Virus TI. While plug‑ins also technically need to be compiled specifically to work inside a 64‑bit application, Steinberg supply Cubase with a technology called VST Bridge, which allows your existing 32‑bit plug‑ins to run with the 64‑bit version of Cubase. For more information about this, check out our Nuendo 4 review back in the December 2007 issue of SOS (/sos/dec07/articles/nuendo4.htm).

Besides hardware and plug‑ins, Propellerhead's ReWire and REX file technologies are currently unsupported in 64‑bit operating systems, and users who work with video inside Cubase should note that QuickTime support is also unavailable at the moment. If you use one of Cubase's other Video Players, you need to make sure you have 64‑bit video codecs that support the video files you'll be using.

A 64‑bit release of Cubase for Mac users is also being prepared but, in order for this to be possible, Steinberg needed to port all of Cubase's application framework code from the old Carbon API (which originally eased developers' transition from Mac OS 9 to OS X) to Apple's Cocoa framework. Cubase 5 now makes full use of Cocoa, making a Mac 64‑bit release possible, although it seems likely that Steinberg will introduce this after the release of Apple's next major version of Mac OS X, Snow Leopard (10.6).

Mash It Up!

LoopMash is a great plug‑in for creating new loops based on existing loops, and the results often sound as quirky as its interface suggests.

New in Cubase 5 (but not Cubase Studio 5) is a unique plug-in called LoopMash. When you drag in a loop from the Project window or Media Bay, its rhythmic and spectral properties are analysed, it's chopped into quaver slices and it's placed onto one of eight tracks. When you drag in more loops, LoopMash will then start to look for slices in those loops that are similar to slices in the 'master' loop. For example, if you drag in an acoustic guitar loop, LoopMash might substitute snare hits from a master drum loop with what it thinks are similar slices from the guitar loop, which can be pretty neat.

Most of the magic happens algorithmically, but there is some user control. A slider on each track tells LoopMash how vigorous it should be in detecting similar slices. The lower the value, the fewer slices will be used from that track. You can also set how many slices should play at the same time, so if you have two loops playing and 'Number of Voices' is set to 1, only one slice will play at a time. You can choose how many slices can be triggered from the same loop at once, and set a randomisation factor to vary the results.
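
If you're curious how such slice substitution might work under the hood, here's a loose sketch: slice each loop into eighth notes, describe each slice with a crude spectral fingerprint, and swap a master slice for the closest match when it falls within a similarity threshold. The feature and distance measure here are stand‑ins of my own, not Steinberg's analysis.

```python
# A rough sketch of similarity-based slice substitution, assuming mono audio
# arrays at a known sample rate and tempo.
import numpy as np

def slice_loop(audio: np.ndarray, sr: int, bpm: float):
    samples_per_eighth = int(sr * 60 / bpm / 2)          # eighth-note slice length
    return [audio[i:i + samples_per_eighth]
            for i in range(0, len(audio) - samples_per_eighth + 1,
                           samples_per_eighth)]

def fingerprint(slice_: np.ndarray, bands: int = 16) -> np.ndarray:
    spectrum = np.abs(np.fft.rfft(slice_))               # crude spectral profile
    return np.array([band.sum() for band in np.array_split(spectrum, bands)])

def mash(master: list, other: list, similarity: float = 0.5) -> list:
    if not other:
        return list(master)
    out = []
    for s in master:
        f = fingerprint(s)
        distances = [np.linalg.norm(f - fingerprint(o)) for o in other]
        best = int(np.argmin(distances))
        # The per-track slider decides how eagerly we substitute slices:
        # a lower value means fewer slices are taken from the other track.
        threshold = similarity * max(distances)
        out.append(other[best] if distances[best] <= threshold else s)
    return out
```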

The plug‑ins bundled with DAWs are often fairly uninspiring, but LoopMash is a quirky exception, generating ideas from existing audio loops that you would never otherwise have thought of.

Cubase & Convolution: The Reverence Reverb

Reverence is a new VST3 convolution reverb that sounds remarkably decent for a Steinberg reverb! Here you can see it being used on a 5.1 audio track.

I think it's fair to say that the reverb plug‑ins bundled with Cubase have never been particularly stunning. However, this is another area that Steinberg have addressed in Cubase 5 (although, sadly, not in Cubase Studio 5), finally taking the lead of many other developers and including a good‑quality convolution reverb plug‑in.

Reverence (and I can't make up my mind if I like this name or not) has most of the settings you would expect to find in a reverb, such as the ability to change the reverb time, adjust the EQ of the reverb and so on, along with a couple of settings you might not expect. For example, a single Reverence preset can actually comprise up to 36 different programs, where a program defines all of the settings for a reverb, including the impulse response. A program is, therefore, basically the same as a preset; but the advantage of having 36 presets within a preset, as it were, is that all of the impulse responses required for the programs in a preset are pre‑loaded along with the preset. This makes it possible to switch between programs without the usual lag associated with recalling a convolution reverb preset, which is great if you need to automate changes in impulse responses.
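
The following sketch shows why that preset/program split avoids switching lag: all of a preset's impulse responses are loaded up front, so changing program is just a lookup rather than a disk read. The class and field names are invented for illustration.

```python
# A minimal sketch, assuming programs simply bundle an IR with a few settings.
from dataclasses import dataclass
from typing import List

@dataclass
class Program:
    name: str
    impulse_response: List[float]   # IR samples, already held in memory
    reverb_time_scale: float = 1.0
    reverse: bool = False           # reversed IRs are handy for sound design

class ReverencePresetSketch:
    MAX_PROGRAMS = 36

    def __init__(self, programs: List[Program]):
        # Recalling the preset loads every program's impulse response up front...
        self.programs = programs[: self.MAX_PROGRAMS]
        self.current = 0

    def switch_program(self, index: int) -> Program:
        # ...so switching programs, even under automation, is just an index
        # change, with no disk access and therefore no audible lag.
        self.current = index
        return self.programs[self.current]
```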

A couple of other options I liked, which you'll find in some other reverbs but not all, are the ability to reverse the impulse response (great for sound‑design work) and a play button that triggers a click, so you can audition a setting without having to run audio through the plug‑in.

Reverence sounds pretty good, with some decent impulse responses supplied, such as a venerable chapel in Cambridge and, slightly more esoteric, a tunnel. But, perhaps best of all, Reverence is capable of surround operation, with many surround impulses provided to take advantage of this capability. Unfortunately, though, I did encounter a slight performance issue when running the surround version of Reverence on my test computer, an older but still pretty powerful dual quad‑core Xeon machine running at 2.66GHz with 16GB memory. Cubase's red CPU overload indicator started flashing and the audio output became garbled. This would happen in a Project with only one 5.1 audio track and only Reverence loaded, so I'm not sure if I was doing something wrong, but unfortunately I didn't have time to test this on another configuration. The stereo version worked just fine, however, and seemed quite efficient.

Automation For The People

You can now control how Volume automation events on a MIDI track interact with Volume MIDI Controller events in the controller lane. The grey line indicates the actual values that will be sent to your MIDI device, having been modulated by the automation events on the same track.

Cubase 5 offers a number of improvements when it comes to automation, some of which have been facilitated by the inclusion of Nuendo 4's Automation Panel. While you don't get all the commands from Nuendo 4, a particularly useful one is the ability to suspend the reading and writing of certain types of automation, such as Volume, Pan, EQ and other mixer controls. This means that it's now possible to record just pan automation, for example, so that any volume moves are ignored during a pass, which is a big improvement over the all‑or‑nothing approach in previous versions.

Another automation improvement, completely new in Cubase 5, resolves a long‑standing conflict between automation events and MIDI Controller data, whereby having Volume automation events on a MIDI track would result in MIDI Controller 7 (Volume) messages being sent to your MIDI device. This wasn't necessarily a problem in itself, but the same MIDI track could simultaneously contain MIDI Controller 7 data that conflicted with the automation, and Cubase had no way of dealing with the fact that two completely different sets of MIDI Controller 7 messages were being sent to the same device.

Steinberg have given this issue some serious thought, and it's now possible to define how automation and MIDI Controller events should interact with each other via a new Part Merge Mode option on automation tracks. When you work with MIDI Controller data in the controller lanes or the Key, Drum or In‑place editor, the familiar coloured blocks still represent controller data, but a new automation‑style blue outline appears along the top. The blue line is accompanied by a second, grey outline that shows you the actual value of the MIDI Controller that will be sent to your MIDI device: the result of consolidating MIDI Controller and automation data into a single stream of messages for any given type of MIDI Controller.

There are a number of different Part Merge Mode options to set how automation and Controller events interact, and the default is Average, meaning that a value midway between the controller data in a part and the automation events on the track is used. Personally, I found the Modulation option more sensible, since it provides a more obvious trim‑like functionality, allowing you to keep the basic shape of your controller data but adjust it proportionally with automation events. The good thing is that, whichever mode works best for you, it's possible to set your own defaults for each type of MIDI Controller individually, using the MIDI Controller Automation Setup window.
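
As a purely illustrative guess at the arithmetic, here's how Average and Modulation might combine a part's controller value with an automation value; Steinberg don't document the exact maths here, so treat this as a sketch of the idea rather than the real algorithm.

```python
# A hedged sketch: merge a MIDI Controller value from a part with an
# automation value on the track, in either Average or Modulation style.
def merge_controller(part_value: int, automation_value: int, mode: str) -> int:
    if mode == "average":
        # A value midway between the part's controller data and the automation.
        merged = (part_value + automation_value) / 2
    elif mode == "modulation":
        # Keep the shape of the controller data, scaled by the automation,
        # which behaves like a proportional trim.
        merged = part_value * (automation_value / 127)
    else:
        merged = part_value
    return max(0, min(127, round(merged)))

# merge_controller(100, 64, "average") -> 82
# merge_controller(100, 64, "modulation") -> 50
```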

The new MIDI Controller automation handling is something certain Cubase users have been waiting for since Cubase SX 1, and although it can initially seem a little complicated, it's a very thorough and well thought‑out solution.

Pros

  • VariAudio provides Melodyne‑like editing of monophonic audio right inside Cubase's Sample editor.
  • VST Expression makes using sampled instruments that feature key‑switchable articulations far more manageable.
  • Many new plug‑ins that you'll actually want to have!

Cons

  • Third‑party developers have been slow to embrace VST3‑related technologies, leaving some of Cubase's potential untapped for the majority of non‑Steinberg plug‑ins and content.

Summary

Cubase 5 is arguably the most impressive release of Cubase since its reboot with SX seven years ago. It offers useful, fun, and innovative new features and improvements that should benefit both existing Cubase users and the new users this release will surely attract.

Information

Cubase 5 $499; Cubase Studio 5 $299. Upgrades $199 (Cubase 4 or SX3 to Cubase 5), $249 (Cubase Studio 4 or SL3 to Cubase 5), $99 (Cubase Studio 4 or SL3 to Cubase Studio 5).

Yamaha Corporation +1 714 522 9011.

infostation@yamaha.com

www.yamaha.com

www.yamaha.co.jp/english

www.steinbergsites.com


Published March 2009