Friday, September 5, 2025
Cubase 13: Using The Vocoder Plug-in
A classic vocoder setup, with the vocal audio track (red) acting as the modulator and Vocoder’s internal synth providing the carrier. In the small inset image (highlighted in the blue box) you can see in the MIDI track’s Inspector panel that the MIDI out from this track has to be routed to the specific instance of Vocoder that’s inserted on the audio track.
Cubase 13 brought with it the welcome return of Steinberg’s Vocoder plug‑in...
For users of Cubase Pro and Artist, version 13 brought with it the return of an old favourite: Steinberg’s Vocoder plug‑in has finally made it into the 64‑bit world and, while the basic concept remains the same, it has also undergone a smart visual makeover. Vocoders are perhaps most popular in electronic music styles, in which the classic ‘robot voice’ is often heard, but if you’re prepared to experiment a little you’ll also find that the revamped Vocoder can conjure up a much wider range of effects. In this month’s column, I’ll run through how you might go about this, and you’ll also find some audio examples on the SOS website (https://sosm.ag/cubase-0324) to accompany each of the main stages I describe.
Vocoder 101
Put simply, a vocoder allows you to take some of the sonic characteristics from one sound (called the ‘modulator’) and apply them to another sound (known as the ‘carrier’) — by far the most common example is when a vocal modulator is applied to a synth‑sound carrier. The pitch of the resulting sound is always determined by the MIDI note(s) used to trigger the synth, but the sound of the voice modulates the synth sound, so its character changes: the effect is like making the synth ‘talk’. Depending on the MIDI note data received by the synth, you can get the classic monotonic robot voice effect or something with more melodic and/or harmonic content.
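If it helps to see the modulator/carrier idea expressed as code, here is a minimal sketch of a generic channel vocoder in Python (an illustration of the principle only, not Steinberg's implementation). It assumes the modulator and carrier are equal-length mono NumPy arrays at the same sample rate: the modulator is analysed in a set of frequency bands, and each band's level envelope is imposed on the matching band of the carrier.

```python
# Minimal channel-vocoder sketch (conceptual illustration, not Vocoder's actual DSP).
import numpy as np
from scipy.signal import butter, sosfilt

def vocode(modulator, carrier, sr, bands=16):
    # Log-spaced band edges across a typical vocal/synth range (assumed values).
    edges = np.geomspace(80.0, 8000.0, bands + 1)
    smoother = butter(2, 50.0, btype='low', fs=sr, output='sos')   # envelope smoothing
    out = np.zeros_like(carrier)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = butter(4, [lo, hi], btype='band', fs=sr, output='sos')
        mod_band = sosfilt(band, modulator)             # analyse the modulator in this band
        envelope = sosfilt(smoother, np.abs(mod_band))  # follow its level over time
        car_band = sosfilt(band, carrier)               # same band of the carrier (the synth)
        out += car_band * envelope                      # the voice 'shapes' the synth
    return out / (np.max(np.abs(out)) + 1e-12)          # crude normalisation
```

Because the carrier supplies all of the pitched material, the output follows whatever MIDI notes drive the synth, while the per-band envelopes keep the vocal's character.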
You can use sound sources other than a voice as your modulator input to Vocoder, though. And while, like most vocoder plug‑ins, Vocoder includes a synth engine to serve as the carrier, with a little side‑chain tomfoolery its magic can also be applied to another synth, such as Retrologue or Padshop.
Insert This Way Up
Vocoder is an audio effect plug‑in, so the most obvious option is to place it in an Insert slot on an audio track, and I’ll focus on that route here. But note that you could also use it as a send effect inserted on an FX track. A scenario where that might be useful is where you know you’ll want to blend an unprocessed (or differently processed) version of the modulator sound with the ‘vocoded’ version.
The opening screenshot (above) summarises the basic configuration for the Insert effect route. An instance of Vocoder has been inserted in the top‑most audio track (coloured red). This contains a sung vocal melody that will act as our modulator. Vocoder’s UI is shown in the middle of the screen: the Carrier section provides the controls for the internal synth engine, while the Modulator section controls how the incoming audio signal is used to modulate the carrier sound. The specific settings I’ve used here are based on the Smooth 16 preset but with a few of my own tweaks, and are easy to recreate. Note that MIDI is set to External (allowing you to control the pitch from a recorded MIDI track or external keyboard), the Bands parameter is set to 16, and both the Talk Thru and Gap Thru settings are at zero percent (I’ll come back to these last two options).
The bottom‑most MIDI track (in green) provides the MIDI input to Vocoder to control the pitch of the carrier (synth) sound. As shown in the small inset image from the MIDI track’s Inspector panel, the MIDI out from this track has to be routed to the specific instance of the Vocoder plug‑in. If you’re playing in MIDI note data ‘live’ during playback rather than using pre‑recorded MIDI data, you’ll need to select the MIDI track and record‑enable it (or engage the monitor button) for the note data to be forwarded to the Vocoder.
Do Adjust What’s Set
Page 158 of the PDF Plugin Reference Manual takes you through all of Vocoder’s controls in detail, but a few are worth highlighting here. For example, in the Carrier section you can use the Noise Mix (and Noise Mod) and/or Bright controls to blend in a bit of an ‘edge’ to the processed sound if you need it to cut through a mix a little more. In the central panel, adjusting the number of frequency bands in the processing will influence the audio quality of the result, with more bands tending to allow the nature of the modulator signal to come through more clearly in the final output.
In the Modulator section, the Min Freq and Max Freq act almost as high‑ and low‑pass filters, while the Bandwidth knob, which sets the frequency bandwidth used by each band, can dramatically change the tonality of the eventual sound (higher values produce a fuller sound).
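As a rough mental model of how those controls relate to the analysis bands, you could picture a layout something like the one below. These are hypothetical numbers for illustration only; Vocoder's actual internals aren't documented in this form.

```python
# Hypothetical mapping of Bands, Min Freq, Max Freq and Bandwidth to a band layout.
import numpy as np

def band_layout(bands=16, min_freq=80.0, max_freq=8000.0, bandwidth=1.0):
    centres = np.geomspace(min_freq, max_freq, bands)        # log-spaced band centres
    ratio = centres[1] / centres[0]                           # spacing between neighbours
    half_width = (ratio - 1.0) / 2.0 * bandwidth              # scaled by the Bandwidth knob
    return [(c * (1.0 - half_width), c * (1.0 + half_width)) for c in centres]

# Wider bands (higher Bandwidth values) overlap more, giving a fuller sound.
for lo, hi in band_layout(bands=8, bandwidth=1.5):
    print(f"{lo:7.1f} Hz to {hi:7.1f} Hz")
```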
The Talk Thru and Gap Thru controls are also worth noting. Both let you blend the unprocessed modulator sound into the final output. Talk Thru sets the level of this unprocessed sound while Vocoder is receiving MIDI notes, and Gap Thru sets it when no MIDI notes are being received. You can, of course, set a balance of the two that allows the unprocessed modulator source to be heard at all times, but they give you more flexibility over when, and how much, the unprocessed modulator sound is heard than a simple wet/dry control would.
Let’s imagine that you want to hear the unprocessed vocal most of the time, but trigger the Vocoder as a spot effect so that the processed sound totally replaces the unprocessed one only on a few words/phrases. In this case, you’d set the Talk Thru control to zero and the Gap Thru control to a suitable non‑zero value. On playback, in the absence of any MIDI note input, you’d hear just the unprocessed modulator signal (a vocal in this example). Then, as soon as the Vocoder received a MIDI note (or notes), the unprocessed sound would be replaced by the vocoded sound. Different combinations of these controls allow you to achieve different outcomes to suit your needs.
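Expressed as a sketch (purely to illustrate the behaviour described above, not how the plug-in is implemented), the Talk Thru/Gap Thru logic amounts to something like this:

```python
# Conceptual sketch of the Talk Thru / Gap Thru behaviour (not the plug-in's code).
def blend(vocoded, dry, midi_notes_held, talk_thru=0.0, gap_thru=1.0):
    # talk_thru: level of the unprocessed modulator while MIDI notes are being received
    # gap_thru:  level of the unprocessed modulator while no MIDI notes are held
    thru = talk_thru if midi_notes_held else gap_thru
    return [v + thru * d for v, d in zip(vocoded, dry)]

# The spot-effect example: the dry vocal is heard only in the gaps between MIDI notes.
in_gap   = blend(vocoded=[0.0, 0.0], dry=[0.3, 0.2], midi_notes_held=False)   # dry only
on_notes = blend(vocoded=[0.4, 0.5], dry=[0.3, 0.2], midi_notes_held=True)    # vocoded only
```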
Take Note
While the sonic character is controlled by the nature of the carrier and modulator, the MIDI notes play a significant role in the musical usefulness of Vocoder’s output. Single‑note lines let your synth ‘sing’ the phrase in a melodic fashion. If you use MIDI notes whose length spans several syllables or words of the original sung phrase, then you can easily achieve the classic (clichéd?) robot voice effect. But if you match the timing of your MIDI note onsets to those of the sung phrase, you can create all sorts of alternative melodic variations not present in the original.
You can use MIDI chords as your note input too, and in this case Vocoder will generate vocoded harmonies based on those chords. This can generate some quirky backing vocals if you just follow your project’s chord progression and, depending on how you play the chords (simple block chords or with more variety by adding inversions or extensions), you can create either a static robotic style or something more akin to a real backing vocal group, though of course with a more synthetic quality to the actual sound.
Incidentally, you can feed a live audio source into your Vocoder track and play in live MIDI note data at the same time, so that whatever you sing will be ‘vocoded’ on the fly. Because you’re performing both the modulator input and the MIDI note input at the same time, it’s very easy to get them in sync; whether you play a simple melody or chords, there’s a lot of fun to be had here.
Don’t Do Normal
The examples described above use a combination of a voice‑based modulator and Vocoder’s synth engine as the carrier. That will let you create the classic vocoder effects, but if your creative streak runs deeper there are two further options to explore. First, you can experiment with different input sounds as your modulator. For example, vocalised vowels (rather than sung words) can be interesting, especially if you change the tonality of the sound as you sing — in effect, you’re using your vocal sound as a type of sweepable band filter — and many solo instruments, notably guitars, can be used in a similar fashion. Or, if you want things to get really weird, try something rhythmic like drum or percussion loops, as I’ve done for some of the audio examples.
You can experiment with using different synths (or other sources) as your carrier sound simply by routing them to Vocoder’s side‑chain input.
By using Vocoder’s side‑chain capability, you can use an external synth (in this case Retrologue) to supply the carrier sound.
Second, you can experiment with using different synths (or other sources) as your carrier sound simply by routing them to Vocoder’s side‑chain input — the internal synth engine will be bypassed and the side‑chain source used as the carrier. Given the basic nature of Vocoder’s synth engine, you might imagine that using a more sophisticated synth such as Retrologue or Padshop would instantly create a more interesting result. It might, or it might not: picking the modulator and carrier that might play nicely together can be something of an unpredictable process that requires experimentation and a little patience — but it can be rewarding too, and well worth the time investment!
Wednesday, September 3, 2025
Cubase 13: Convincing String Arrangements
Screen 1: The Iconica Sketch patch and MIDI track layout required for the workflow described here is very straightforward. Based on a simple chord sequence, some ‘before and after’ example MIDI clips are shown on the right.
Get better results from Iconica Sketch’s strings with the Cubase Logical Editor.
Iconica Sketch is an excellent, compact orchestral library (it’s under 5GB) included in the Pro, Artist and Elements versions of Cubase 13. As the name suggests, it’s pretty good for sketching out ideas, but the sounds themselves are capable of much more than this — they come from Steinberg’s full (190GB) and very impressive Iconica. Sketch includes multi‑articulation patches for each major instrument section, but there are no ‘full’ ensemble patches. This is perhaps not such a bad thing, because although ensemble patches can provide instant gratification from a few simple chords, they’re of little use if you’re trying to write something a real orchestral string section might play — to do that, you really need to create separate lines for each string sub‑section.
To turn your simple chords into those individual MIDI lines for Iconica Sketch, Pro and Artist users need to do just a little patch configuration, and then take a quick dip in Cubase’s Logical Editor. Although we’re looking at strings here, the process can easily be adapted for the brass and woodwind sections.
Patch Work
The workflow I propose comprises two main steps, but there’s an optional third one too. For the first step, the basic configuration is shown in Screen 1. This includes arranging the instrument patches in a single instance of HALion Sonic: these run from Violin I on MIDI channel 1 through to Basses on MIDI channel 5. Note that I’ve also edited the keyswitch assignments for the individual patches to span C‑1 to F#‑1. As I’ll explain shortly, this will allow you to include real‑time articulation switching for all the sub‑sections. It’s worth mentioning that all these patches offer the same seven performance articulations. For the shorter articulations, velocity controls dynamics, while for the sustained articulations you use the mod wheel to add crescendo/decrescendo to the performance.
Finally, alongside the Instrument track that’s hosting HALion Sonic, I’ve added five suitably named MIDI tracks to the project. In their individual Inspector panels, I have set all of these to send MIDI data to HALion Sonic, but note that each uses the MIDI channel number required to target the desired instrument patch (channel 1 for Violins I, channel 2 for Violins II, and so on).
Now, select these five MIDI tracks (to record‑arm them all) and play some four‑note chords on a MIDI keyboard, plus any keyswitches needed to switch articulations in your performance. These full chords will get transmitted to all five tracks, and if you engage record when doing this, you’ll end up with five identical MIDI clips. This might initially sound fairly epic — there are a lot of strings, so it makes a big sound — but it’s not really what we want here, because every note in the chord is being played by all five sections. In a real string section, when it comes to basic chordal parts, each sub‑section usually plays a single note of the chord (unless a sub‑section is playing ‘divisi’ and being divided into further sub‑groups).
Think Logically
So, how do you thin out the five identical MIDI clips such that each one contains one particular note from the full chord and, when combined, the section as a whole plays the full chord, as might be the case with a real string section? You could edit each clip manually, of course, muting or deleting notes from each one to leave each sub‑section playing a single note at a time. But that task gets tedious very quickly, and can interrupt your creative flow. Thankfully, with just a little work in the Logical Editor, you can automate this process pretty easily.
Screen 2: Two examples of Logical Editor presets that allow each string sub‑section to play a single‑note line from a four‑note chord.
Screen 2 shows two examples of the Logical Editor presets required for this. The main part of the screen shows a preset I’ve named Isolate String Basses Celli. There are two filters specified in the Event Target Filters panel, and both act to select/deselect specific notes. Thanks to the Context Variable entry in the second line, the first two lines only select notes within the clip when Note Number in Chord is not zero. Zero represents the lowest note in a chord, so this selects every note within a MIDI chord except the lowest one. Lines 3 and 4 apply a second note‑based filter that selects notes only if their pitch is greater than or equal to C0. This criterion means that any keyswitches used during a performance (all of which lie below C0) will not be selected. There are no entries in the Event Transform Actions panel, but note that the action is set to Delete. When these criteria are combined and the Logical Editor preset is executed on one of our MIDI clips (either the Basses clip or the Celli clip), Cubase selects and deletes every note above C0 that is not the lowest note in the chord. In other words, we end up with a clip containing just the lowest note in the chord plus any keyswitches, meaning our bass (or cello) patch will now play a single line from the full chord.
You’ll need five similar Logical Editor presets in total, one for each sub‑section/part. They all follow a similar pattern, but obviously some tweaks are required. As an example, the second part of Screen 2 shows just the Event Target Filters for my Isolate Strings Violins I preset (the Event Transform Actions panel remains the same). Note that the only difference is that, in this case, the filtering leaves in the fourth note from the bottom of the chord (Parameter 2 is set to 3, with zero again representing the lowest note) as well as the keyswitches.
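If it’s easier to see as code, here’s a hedged sketch of what each of these presets effectively does to a clip’s note list. It uses plain Python over made-up note dictionaries rather than Cubase’s API, and assumes Cubase’s default note naming, in which C0 is MIDI note 24 and the keyswitches therefore sit below it.

```python
# Conceptual re-creation of the 'Isolate Strings...' presets (not Cubase's actual API).
C0 = 24   # assuming Cubase's default naming, where C-2 = MIDI 0 and C0 = MIDI 24

def thin_clip(notes, chord_index):
    """Keep the note at chord_index (0 = lowest) from each chord, plus any keyswitches."""
    kept = [n for n in notes if n['pitch'] < C0]                  # keyswitches always survive
    for start in sorted({n['start'] for n in notes if n['pitch'] >= C0}):
        chord = sorted((n for n in notes
                        if n['start'] == start and n['pitch'] >= C0),
                       key=lambda n: n['pitch'])
        if chord_index < len(chord):
            kept.append(chord[chord_index])
    return kept

# A four-note chord plus a keyswitch at C-1 (MIDI 12).
clip = [{'start': 0, 'pitch': 12}] + [{'start': 0, 'pitch': p} for p in (48, 55, 60, 64)]
basses   = thin_clip(clip, chord_index=0)   # lowest chord note plus the keyswitch
violins1 = thin_clip(clip, chord_index=3)   # fourth note from the bottom plus the keyswitch
```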
Make Mine A Macro
Once the five Logical Editor presets have been created, thinning out each MIDI clip simply involves selecting the clip and executing the appropriate preset. If you wish, this can be achieved from a menu: MIDI/Logical Editor/Apply Preset. This pops open a panel with all the Logical Editor presets listed, and it’s very easy to step through the five MIDI clips to do what’s necessary. You can also assign any Logical Editor preset to a key command. But for a faster approach, check out the Macro shown in Screen 3, which can be executed with a single keystroke. Providing you have the MIDI tracks laid out in the way suggested here, selecting the topmost MIDI clip (for the Violins I MIDI track) and executing the Macro will execute the required Logical Editor preset for Violins I, then select the next MIDI clip (thanks to the Navigate – Down command) and repeat the process. In all, it takes less than a second to transform your chord into five MIDI clips, each with the desired single‑note line, and with your keyswitches remaining intact.
Screen 3: If you want to work fast, try chaining the various Logical Editor presets into a Macro that you can execute with a single keystroke.
Make It Real (Time)
To close, let’s consider one question that the above approach raises: why can’t this ‘chord separation’ task be carried out in real time, as you play the MIDI chords? Well, unfortunately, Cubase’s MIDI Input Transformer (the real‑time version of the Logical Editor) doesn’t support the Context Variable option, I suspect because the unpredictable timing of incoming notes makes this sort of processing trickier to perform reliably. I do wonder, though, whether Steinberg might be able to add such capability to the (already excellent) Chord Pads system. Those trigger the whole chord at once, so in theory at least, it would be possible — in fact, it would be a very cool solution to this problem, and could also have applications beyond just orchestral strings. Pretty please, Steinberg?
In the meantime, if you do a lot of orchestral composing, would like to do this in real time and have some money to throw at the problem, it’s worth checking out a Mac/Windows app called Divisimate (https://divisimate.com). This software sits between your MIDI keyboard and Cubase and, amongst other things, allows you to split out the notes of an incoming live chord onto different MIDI channels.
The Logical Editor approach I’ve described here already gives you a great starting point for creating more detailed string (or brass/woodwind) arrangements in double quick time.
Of course, whether a part is sustained or played rhythmically, there can be (much!) more to creating a string performance than relying exclusively on block chords. Indeed, for a convincing result, further refinement of the parts may well be required. You might try lowering the basses by an octave, or revoicing the other notes within the chord to create different inversions. Equally, you might edit the Violins I and II lines to let them ‘cross’ in pitch, and create more harmonic interest. Or perhaps you could let the basses, celli, violas and second violins carry the chord and add a melodic top line in the first violins. While it would be great to see a real‑time option for this sort of thing in the Cubase feature set in the future, the Logical Editor approach I’ve described here already gives you a great starting point for creating more detailed string (or brass/woodwind) arrangements in double quick time. And if you’d like to hear what all this can sound like, check out the two audio examples we’ve put on the SOS website (https://sosm.ag/cubase-0424).
Monday, September 1, 2025
Cubase 13: Using The VocalChain Plug-in
VocalChain: all you need to add polish to your vocals in a single plug‑in.
Cubase’s VocalChain can polish and add character to your vocals in an instant.
For Artist and Pro users, the return of the Vocoder plug‑in (which we explored in the March 2024 column) was not the only significant addition in Cubase 13: Steinberg also added the new VocalChain plug‑in. While this essentially combines the facilities offered by a number of Cubase’s existing plug‑ins, it’s impressive just how quickly it lets you go from raw vocal to a polished mix‑ready sound. So, with a few vocal examples at hand (you can listen on the SOS website: https://sosm.ag/cubase-0524), let’s explore the possibilities.
Go With The Flow
The main screen above shows VocalChain in action. Arranged down the left edge is the full set of processing modules on offer, grouped into three sections: Clean, Character and Send. Your audio is processed through these in order, to apply ‘corrective’ processing, add character/sonic flavour and then ambience/stereo imaging. Individual modules can be engaged or bypassed as required and, within a section, you can change the order of the modules (drag a module up or down to reposition it). With a total of 16 modules (you can use them all if you need to), this is quite a toolkit. But because it loads as a single plug‑in and everything’s available in a single window, it’s very easy to navigate.
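Conceptually, the signal flow is just an ordered chain, something like the sketch below. The module names and section groupings here are representative examples for illustration only, not the plug-in’s full or exact list.

```python
# Conceptual sketch of VocalChain's fixed Clean -> Character -> Send flow.
chain = {
    'Clean':     ['Pitch', 'De-Esser', 'EQ', 'Compressor'],   # representative names only
    'Character': ['Exciter', 'Saturator'],
    'Send':      ['Delay', 'Reverb'],
}
bypassed = {'Exciter'}                                         # modules toggle individually

def process(audio, chain, bypassed):
    for section in ('Clean', 'Character', 'Send'):             # sections always run in order
        for module in chain[section]:                          # order within a section is editable
            if module not in bypassed:
                audio = f'{audio} -> {module}'                 # stand-in for the real DSP
    return audio

print(process('vocal', chain, bypassed))
```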
The GUI provides three different levels of control. In the screenshot, the Overview tab is selected (top‑left, highlighted in yellow), and beneath the spectrum display you then get access to the most significant parameter from each module. However, select the Clean, Character or Send tabs (when selected, these are highlighted in blue, cyan and green, respectively), and the choice of controls changes to focus on the modules in this section, with more control over specific modules. Finally, select an individual module, and the display changes again to provide access to the full control set for that module. It’s a clever bit of design that means you can quickly switch between different levels of editing.
There’s also a set of style/genre‑based presets to get you started, and these should not be underestimated. OK, so there’s no AI involved here (VocalChain doesn’t listen to your audio and then make setting suggestions in the way that, say, iZotope’s Nectar might), but they’re well worth exploring and can get you off to a flying start. You just find a preset that provides a suitable starting point and then tweak to taste, using any of the three control levels described above.
Time To Tweak
In terms of that tweaking, a sensible initial task is to use the input and output metering on the right to set your levels. Setting the input level control to get your signal into the green coloured range of the meter is a good start, as it will most likely ensure your signal hits the first active dynamics stage in the preset’s design at an appropriate level. You can then adjust the output level to find the happy place where the vocal sits most comfortably in the mix. It’s also interesting to watch the two meters during playback: with two compression stages, two dynamic filters and two de‑essers available, there can be a serious amount of dynamics management going on, should you need it.
While tweaking, one further feature makes it much easier to evaluate the impact of the changes you are making: the ability to solo each module. In the list of modules, this solo mode can be activated via the small ‘s’ button located to the left of each module’s name. Once activated, all other modules are bypassed (so the overall signal level might also change), but it allows you to more easily focus on what the current module is doing to your vocal’s sound. And, by also using the selected module’s bypass button, you can easily assess the impact the module is having on the unprocessed signal. These auditioning options are particularly useful for the various EQ, dynamics, filter and exciter/saturation modules.
Make It Pop
So, what about the processing itself? As I said, there’s a lot packed in here, but a few highlights can serve as examples — remember to check out the audio examples on the SOS website if you want to hear some of these options in context.
A common pop production technique is to add some ‘weight’ to a lead vocal by blending in a vocal double an octave below the main sung line...
Let’s start with a ‘pop’ vocal example. VocalChain includes a number of suitable presets, such as Perfect Pop Dry Vocal or Shiny Pop Vocal, that can deliver a very crisp and compressed, if (deliberately) not particularly natural starting point. Another common pop production technique is to add some ‘weight’ to a lead vocal by blending in a vocal double an octave below the main sung line, and the Lead Vocal Reinforce preset does just that. While it also provides dynamics and EQ settings that are suitable for modern pop, the ‘weight’ is added using the Pitch module.
As shown in the screenshot, this can be used to apply some automatic pitch correction (either subtle or not so subtle — try the Trap Icon preset), but that’s not being used here. Instead, this preset uses the Detune and Formant controls to pitch‑shift the vocal down by an octave, along with a suitable downward shift of the formants that makes this down‑pitched voice sound a little more natural. Finally, the Mix control has been used to set the blend of the original voice and the ‘octave down’ version. So it’s the same vocal, but with more ‘weight’.
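Outside of VocalChain, you can approximate the basic octave-down blend in a few lines of code. This is a hedged sketch using librosa’s pitch shifter: it makes no attempt at the formant correction the Pitch module applies, and the file names are just placeholders.

```python
# Rough approximation of the octave-down 'weight' trick (no formant shifting here).
import librosa
import soundfile as sf

vocal, sr = librosa.load('lead_vocal.wav', sr=None, mono=True)        # placeholder file name
octave_down = librosa.effects.pitch_shift(vocal, sr=sr, n_steps=-12)  # one octave down

mix = 0.35                                       # rough stand-in for the module's Mix control
reinforced = (1.0 - mix) * vocal + mix * octave_down
sf.write('lead_vocal_reinforced.wav', reinforced, sr)
```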
The Pitch module does automatic pitch correction but can also be used to fake a vocal double.
Rock On
Lots of rock or metal singers can achieve aggressive vocal distortion through their singing technique, but this is also something you can enhance or create through processing. Here, the Hot Rock Hot Valve Mic Chain preset does just that. While the Character section’s Exciter module contributes, it’s the Saturator module that does the heavy lifting.
As shown in the screenshot (and as can be heard in the audio examples), with Distortion mode selected and the Drive control maxed out, this preset doesn’t hold back, but it illustrates what’s possible. It’s also worth noting that the Filter Bank is engaged — this focuses the distortion in the 500Hz‑3.5kHz region. It’s a very useful option and, in this case, it enhances the gritty, lo‑fi nature of the sound. If you want to dial it back a bit, then the Tape and Tube modes, and different Drive and Mix settings, make that easy. And, of course, all these controls can be automated in Cubase if you want to add that saturated edge just to specific words or phrases in the performance.
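The idea of focusing drive into a band is easy to prototype outside the plug-in too. The following is only a rough sketch of the general technique (a crude band split plus soft clipping), not VocalChain’s Saturator, with the 500Hz-3.5kHz range borrowed from the preset described above.

```python
# Band-focused saturation sketch: distort only a chosen frequency range, then mix back.
import numpy as np
from scipy.signal import butter, sosfilt

def band_saturate(audio, sr, lo=500.0, hi=3500.0, drive=10.0, mix=1.0):
    band = butter(4, [lo, hi], btype='band', fs=sr, output='sos')
    focus = sosfilt(band, audio)                    # the range we want to distort
    rest = audio - focus                            # crude split: everything else
    saturated = np.tanh(drive * focus) / np.tanh(drive)   # soft-clip the focused band
    wet = rest + saturated
    return (1.0 - mix) * audio + mix * wet          # rough stand-in for the Mix control
```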
The Delay module also features a Filter Bank, but this feature really shines in the Ducker, where it can help prevent your ambience effects from adding clutter to a busy mix.
Duck Duck Go
The benefit of the Send section is that, rather than sending your lead vocal to reverb or delay effects used for more general duties in your project, you can configure ambience settings specifically for the vocal part. In busy mixes (for example, an uptempo EDM project), too much delay or reverb can easily muddy a mix. However, as the Platinum Female Vocal Chain preset illustrates, VocalChain’s toolset allows you to manage this while still getting epic with your vocal ambience.
The Filter Bank feature in the Saturator module lets you target the specific frequency range for any distortion.
As shown for the Delay module in the final screenshot, two particular features are useful. First, as with the Saturator module, both the delay and reverb modules offer a Filter Bank, allowing you to trim out frequencies in the delay repeats (or reverb) so you don’t get excessive low mids (which can clog up the mix) or, at the top end, repeats fighting with your hi‑hats. However, it’s the ducker’s Amount and Release controls that are the stars of the show. They allow you to suppress the level of the delay (or reverb) while the source vocal is present, and then control how quickly that ducking is released (so you hear the delay in all its glory) between the vocal phrases. It’s a classic trick, and VocalChain makes it very easy to pull off.
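If you’re curious what the ducker is doing under the hood, the behaviour can be sketched as an envelope follower on the vocal driving gain reduction on the effect return. The following is a simplified illustration only: Amount and Release here loosely mirror the controls described above, and both signals are assumed to be equal-length mono arrays.

```python
# Simplified ducking sketch: pull the delay/reverb return down while the vocal is present,
# then let it recover at the release rate (conceptual only, not VocalChain's implementation).
import numpy as np

def duck(effect_return, vocal, sr, amount=0.8, release_s=0.25, threshold=0.05):
    envelope = np.abs(vocal)                         # very crude vocal level detector
    rel = np.exp(-1.0 / (release_s * sr))            # one-pole release coefficient
    gain = np.ones_like(effect_return)
    g = 1.0
    for i, level in enumerate(envelope):
        target = 1.0 - amount if level > threshold else 1.0
        # Drop quickly while the vocal is present, recover smoothly when it stops.
        g = target if target < g else g * rel + target * (1.0 - rel)
        gain[i] = g
    return effect_return * gain
```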
Join The Chain Gang
There are plenty of very capable third‑party ‘vocal signal chain’ plug‑ins designed to tackle the same task, including some powerful ones that feature AI assistance. But until AI can read our minds, it can’t know exactly what kind of sound we’re trying to create, so there is always going to be project‑specific tweaking to be done. Arguably, VocalChain’s presets can provide just as valuable a starting point as many AI plug‑ins, and because the GUI makes it really easy to adjust every component in a single window, it’s super easy to tweak your vocal sound to suit the mix. Of course, the potential of getting quick results is only one aspect of using VocalChain.
There’s a lot more to explore in the plug‑in, so it’s a topic I’ll probably return to in a future column.
Published May 2024