
Wednesday, June 30, 2021

Audio Warp

Cubase's audio warp facilities provide a powerful toolkit for manipulating the timing of recorded audio to fit grids and grooves.

Back in September, I looked at applications of the off-line Time Stretch function. Under the broad heading of 'audio warp', Cubase also features some non-destructive tempo and pitch-shifting tools. These provide a number of creative and corrective possibilities, but the emphasis of this sort of processing is mainly on loops, so I'll concentrate upon loop-based applications in this column.

The audio warp capabilities of Cubase that we're about to explore are undoubtedly very powerful, but I'd offer a word of caution: much as I like Cubase's audio warp functionality, if I'm working with projects that are dominated by commercial sample library loops and require simple tempo-matching, I'll usually go instead for a tool designed specifically for that job (in my case, Sony's Acid Pro). However, when it's just a few loops that require processing, or I want to create loops from performances I've recorded myself, or if something more than basic tempo-matching is needed, audio warp is definitely up to the task.

Coming To Terms With The Terms

Before getting to grips with practicalities, three terms need to be understood: 'hitpoints', 'warp tabs' and 'Q-points'. In essence, these can all be thought of as 'markers' that are positioned within an audio clip, and control how Cubase then manipulates the timing of the audio in the clip.

Audio warp is useful for correcting timing drifts in live performances, as shown here with an acoustic guitar part. Warp tabs (in orange) have been used to drag the chords to the starts of bars 18-21 so that the whole performance will play in time with the project tempo. The chord at the start of bar 22, which is a little late compared with the tempo-based grid, has yet to be processed.

Warp tabs are usually placed at particularly important musical time positions. For example, they might be placed at the first beat of every bar in an audio event. Once the warp tabs are in place, the playback of the audio event can then, for example, be stretched to force any tempo variation in the performance to match the exact tempo of the project (I'm not suggesting metronomic music is always a good thing, just that this can be done if required).

In contrast, hitpoints are usually placed at attack transients in an audio file — the most obvious example would be hitpoints at the start of each drum hit in a drum loop. If both warp tabs and hitpoints were placed in the same audio performance, some of them may well coincide (for example, at the attack of a well-timed kick drum on the first beat of the bar) but others might not, particularly if the musical performance plays off the beat. It is in these circumstances that the distinction between warp tabs and hitpoints becomes clear: the former are used to mark the positions of musically important intervals such as bars and beats within a performance, while the latter pick out every rhythmic event within the audio, and thus may be positioned off the beat as well as on it.

Less commonly used are Q-points. These can be thought of as extra 'markers' between two hitpoints and they're useful if a particular hit has either a slow attack or some sort of secondary peak. In the case of the slow attack, the hitpoint might be located at the start of the sound, but the rhythmically important point is where the sound peaks, as this is the section of the sound that needs to be locked into any quantising or time-stretching that is being done. Placing a Q-point at the peak gives the audio warp process more information to go on, so that the time-shifting is achieved in a more musically appropriate fashion.

To The Point

The main application of warp tabs is to get an audio file with a varying tempo to play in time with a fixed project tempo. Useful though that can be in some circumstances (for example, when you have a live take that requires a little more consistency in terms of tempo, so that it can be used as a basis for overdubbing), perhaps the most fun is to be had with hitpoints, so let's concentrate on what these can do for us. The Cubase Operations Manual describes the basic steps involved in creating and editing hitpoints, so only a brief recap is required here. I'll then move on to focus on some uses for these tools.

Hitpoints (in blue) added to a drum loop. Light blue indicates a hitpoint added automatically, based upon the Hitpoint Sensitivity slider setting, while dark blue hitpoints are those added manually.

The toolbar for the Sample Editor contains the key buttons. In the strip at the left-hand end is the Hitpoint Edit button (selected and displayed in blue). Immediately to its right are the Audio Tempo Definition tool and the Warp Samples buttons. Further to the right are the Hitpoint Mode button (also in blue) and the Hitpoint Sensitivity slider. The button with the musical note icon (also in blue) is the Musical Mode button, while the Warp Setting drop-down menu allows the type of material to be specified.

Creating hitpoints is a simple, four-step process. First, the loop is opened in the Sample Editor by double-clicking on the event in the Project window. Second, the Audio Tempo Definition tool is used to specify the length and time signature of the loop — and from this the original tempo is calculated automatically. Third, the Hitpoint Mode button is selected and a series of hitpoints is then calculated automatically. Finally, the number of hitpoints can be adjusted via the Sensitivity slider, and hitpoints can be manually adjusted via the mouse — what you are aiming for here is to tidy up the hitpoint positions, so that each prominent note or drum hit attack has its own hitpoint positioned at its start.

Depending on what you want to use the hitpoints for (see below), some further manual editing might be required. With Hitpoint Mode selected in the Sample Editor, individual hitpoints can be dragged by placing the mouse on their handles (the small triangles that appear at the top of each hitpoint). If required, hitpoints can be deleted by holding the Alt key while hovering over the handle. Holding down the Alt key anywhere else turns the cursor into a pencil tool and allows new hitpoints to be added.

While this sounds like quite a bit of work, in practice (and with practice!) it usually takes just a few seconds, and the loop is then ready for some corrective or creative manipulation. If all you want the loop to do is follow the project tempo, then clicking the Musical Mode button in the Sample Editor will do the trick: the loop will then play in sync with the project and any tempo changes within it.

Here's one further brief trick if you're creating loops from your own recordings. Prior to creating the hitpoints, first make the usual adjustments to the event in the Project window or Sample Editor to isolate just the section you want to loop. Then, with the event selected in the Project window, choose 'Audio / Bounce Selection', and when prompted replace the original Event with the bounced copy. In my experience, this makes it easier to use the Audio Tempo Definition tool to define the details needed to apply Musical Mode for basic tempo-matching.

Real-time Pitch-shifting

Audio warp can apply pitch-shifting on playback as well as time-shifting. Usefully, this can be done for an individual slice within a beat-sliced loop. Whether working with individual audio events in the Project window or within the Audio Part Editor, the Info Line includes both Transpose (semitone adjustment) and Finetune (±100 cent adjustment) fields. Changing these settings might, for example, be used to give a snare drum a tighter sound or a kick drum a little more bottom end.
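As an aside on the numbers, the relationship between semitones, cents and the resulting pitch is just the standard equal-temperament formula: the frequency ratio is 2 raised to the power of (cents / 1200). Here's a tiny Python sketch of that arithmetic, purely to illustrate the maths rather than to claim anything about how Cubase implements its shifting internally:

```python
# Equal-tempered pitch ratios for semitone/cent shifts. This is just the
# underlying maths, not Cubase's internal implementation.
def pitch_ratio(semitones=0, cents=0):
    """Frequency ratio for a shift of the given semitones plus cents."""
    return 2 ** ((semitones * 100 + cents) / 1200)

print(round(pitch_ratio(-2), 3))      # kick two semitones down -> 0.891
print(round(pitch_ratio(0, 30), 3))   # 30 cents sharp -> 1.017
```

So a Transpose of -2, for example, plays the slice back at roughly 89 percent of its original frequency.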

Groovy Baby!

Tempo-matching aside, one of the most useful creative options provided by audio warp is the ability to groove-quantise audio — that is, to take the groove from one audio performance (for example, a drum loop) and apply it to another (for example, a bass or guitar loop). This can be musically very satisfying, as it is often the slight variations in the groove (playing slightly ahead or slightly behind the strict beat defined by the tempo) that can give a performance its distinct feel. The technique can be particularly useful if you are combining commercial library loops from different sources; a little judicious use of groove quantise can help match the rhythmic timing of the playing more tightly.

Once a groove has been extracted, it appears as a preset in the Quantize Type drop-down menu.

The bass loop in track 2 has been groove-quantised, based upon a groove that was extracted from the drum loop in track 1. Comparing the quantised bass with the original (track 3) immediately shows how some of the bass notes have been repositioned to provide a tighter match to the drum hits.

Three versions of the same drum loop: the original; beat-sliced as an audio part; and beat-sliced and dissolved into individual audio events.

Unsurprisingly, the most common starting point for groove quantising is a drum loop, although there is no reason why another instrument cannot be used as the basis for defining the groove. It is worth spending a little extra time on hitpoint placement, by auditioning each section of the loop to ensure the hitpoint is positioned correctly on the attack of each drum hit. Once this is done, the 'Audio / Hitpoints / Create Groove Quantize From Hitpoints' option can be selected. This extracts the groove and adds it to the list of groove presets available from the drop-down Quantize Type menu in the Project window.

The groove can then be applied to any other audio performance in the project. Selecting a suitable audio event and then choosing 'Audio / Realtime Processing / Quantize Audio' will apply whatever quantise groove is currently selected in the Quantize Type drop-down menu. When you first start experimenting with this process, it can be handy to make a duplicate copy of the audio track prior to applying the groove. If you then apply Bounce Selection (as described above) to the audio event on the duplicate track, pan it hard right, and then apply the groove quantise to the original event and pan its track hard left, it becomes easy to hear exactly what the groove quantise is doing and identify how well the process has worked.

The Slice Is Right

The other obvious use of audio warp is for ReCycle-style beat-slicing: in essence, each hitpoint is used to define a slice in the loop, and individual slices can then be manipulated. Once the loop has been cut into its individual slices, the options include being able to create variations of the loop's performance, making entirely new loops based on sounds within individual slices, replacing sounds in a loop and processing individual slices. Again, drum loops are prime candidates for the beat-slicing process.

Having created and edited the hitpoints in your loop, the only other detail to check before beat-slicing is that Musical Mode is not engaged in the Sample Editor before you return to the Project window. With the audio event selected, the 'Audio / Hitpoints / Create Slices From Hitpoints' menu option will create the slices and tempo-match the loop to the project. The original audio event is replaced in the Project window by a new audio part (an audio part contains a series of linked audio events) in which the slices are held as individual audio events.

While the sliced loop provides tempo-matching, the key reason for slicing is the further editing options it opens up. Double-clicking on the audio part in the Project window will open it in the Audio Part Editor and this provides plenty of editing options. Alternatively, you can select the 'Audio / Dissolve Part' menu option, which replaces the audio part with individual audio events, each of which represents one slice, and editing can then be done in the Project window. Before doing any editing, it is a good idea to make a copy of the audio part. If you then make any changes to this copy, Cubase will prompt you to create a 'New Version', and if you do so the original will be left intact.

In this example, a beat-sliced drum loop has been dissolved and edited on three tracks to create a variation on the original. The kick-drum slices have been moved to track 2 for separate EQ and compression, while on the third track a kick extracted from a different loop has been used to emphasise beat 1, the snare slice has been reversed (and gain reduced) ahead of beat 2, and an additional kick-drum hit has been added after beat 4.

A few brief examples illustrate the possibilities of working with the beat-sliced loop. For example, as shown in the screenshot, a drum loop is being edited in the Project window, having been dissolved into its individual audio events. The kick-drum slices have been moved to a fresh track (making sure that their position is maintained by selecting the Grid Relative setting for snapping). Separate processing can then be applied to the kick — perhaps changing its level, adjusting its EQ or applying some compression. Similar things could obviously be done with other elements from the loop.

A second possibility is the replacement of individual hits within a loop. You sometimes find that a loop sounds right musically, but has a weak snare or kick. Being able to isolate the individual sounds through beat-slicing allows them to be replaced. This might involve taking a snare or kick sound from one loop and using it to replace (or perhaps layer with) the same drum sound in another loop. Again, arranging the loops to be used in a series of tracks in the Project window makes the whole process easy to perform.

Finally, variations on the original loop can be created by processing, moving, muting or adding individual hits. For example, extra kick-drum hits might be added to make the loop a little busier. These can easily be added on another track, and once you are happy with the loop variation the two tracks can be bounced down (via the Export / Audio Mixdown option) to create a single audio event (the latter will be easier to handle if you need to copy and paste it to build a complete drum track).

And Finally...

These brief examples really only scratch the surface of what the Cubase audio warp features are capable of doing. However, if you do work with loops as part of your music-making process, Cubase's audio warp functionality is most certainly worth taking the time to explore. While simple tempo-matching is, in itself, extremely useful, the creative possibilities go way beyond that, and with groove quantise and beat-slicing Cubase can help you get considerable extra mileage from any loop-library material that you might own.

Join The Q

As mentioned in the main text, occasionally you might need to add Q-points to hitpoints in order to achieve the most appropriate tempo-matching. Q-points do not, however, appear by default. Instead, you need to enable them via the Audio page of the Preferences window. Once done, any hitpoints created will appear with a Q-point handle that can be positioned as required via the mouse.

Published November 2007

Monday, June 28, 2021

Solving MIDI Timing Problems

By Martin Walker


Early notes, late notes, stacked notes... There's a range of problems that could stem from MIDI timing issues - but fortunately we have solutions to offer.

Here, at maximum zoom, with the Ruler set to Seconds (one horizontal division on the grid measures 10ms), you can see the MIDI timing results for my PC. The top (yellow) notes are the original hand-drawn 16th notes; the six sets of purple notes show three passes using the Maple virtual MIDI device, first without and then with system timestamp enabled; these were followed by six sets of green notes for the same options, using my Emu 1820M MIDI ports; and a final set of six blue notes for the Emu using emulated DirectMusic ports. As you can see, on my PC ticking the 'Use system timestamp' box always results in much tighter timing, and the tightest timing is given by Windows MIDI drivers with this option (light green).

Most people simply install Cubase and get on with recording and playing back MIDI and audio tracks with no problems. However, a few — particularly PC users — experience annoying MIDI issues, such as erratic timing, events that are consistently recorded too early or too late, and doubling (or even tripling) of note data. In extreme cases no data may be recorded at all, or every single MIDI event recorded during a lengthy take may end up appearing at the start of the Cubase part. People who are affected by such issues obviously search out possible cures, but even Cubase users who don't have specific problems should really check the various options to ensure they achieve the tightest possible MIDI timing. Let's find out how.

Background

A significant proportion of Cubase MIDI problems arrived when Steinberg launched their Midex range of interfaces. These used DirectMusic drivers instead of the more typical Windows MIDI variety, largely because the DirectMusic format offered a more precise timestamp, and therefore the likelihood of tighter MIDI timing. Unfortunately, not many other MIDI interfaces had DirectMusic drivers, and in their absence Windows creates 'emulated' DirectMusic drivers with much higher latency and generally lower performance.

Cubase might therefore find Windows MIDI drivers, true DirectMusic drivers, and emulated DirectMusic drivers, and unless all but the most appropriate one is hidden you can end up with two or even three sets of MIDI inputs and outputs, leading to doubled or tripled sets of data during recording or playback. If you end up using emulated drivers your data could be recorded early or late, piled up at the start of a Part, or not recorded at all.

Steinberg's answer was a filter that guessed at the most appropriate MIDI drivers and hid all the others, but the first filter version didn't always choose correctly, and many musicians had to 'unhide' the filtered versions (by dragging the file named 'ignoreportfilter' from the MIDI Port Enabler folder into the main Cubase folder) and then manually configure their MIDI ports.

However, from Cubase SX/SL and Nuendo versions 3.0.1 onwards, Steinberg introduced a more refined filtering regime: if true DirectMusic drivers are detected, they are used; if not, Windows MIDI drivers are used instead, while emulated DirectMusic ports are never used by default. The filter hides all the unused MIDI ports, and in the majority of systems it works really well.

Windows Timers

A second set of MIDI timing problems arose due to PCs providing two different timers. Older versions of Windows, older MIDI interfaces, the VST and ASIO specifications, plus Cubase/Nuendo (and many other sequencers) normally rely on Windows' TGT (timeGetTime) timer, which apparently has a resolution of one millisecond. Meanwhile, some newer audio and MIDI interfaces, such as those with true DirectMusic drivers, plus some other sequencers, use the QPC (QueryPerformanceCounter) timer, which can theoretically be more precise (with a timestamp based on units of 0.1ms).

While a few motherboards keep both clocks in perfect sync, on many systems these two clocks slowly drift apart, so if Cubase is following the TGT timer and your MIDI interface is timestamping data according to the QPC timer, your MIDI can get seriously out of kilter. The answer, which Steinberg first implemented in Nuendo 2 and Cubase SX 2.3, is a software switch labelled 'Use system timestamp' which, when ticked, instructs these sequencers to follow the QPC timer instead of the TGT one.
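If you're curious whether your own machine's two clocks drift, you can watch them directly. Below is a minimal, Windows-only Python sketch that calls the same two Win32 functions via ctypes and prints the relative drift over time; the one-minute sampling interval is an arbitrary choice of mine:

```python
# Compare the TGT (timeGetTime) and QPC (QueryPerformanceCounter) clocks.
# Windows-only; on a drift-prone motherboard the printed figure will creep
# steadily away from zero the longer this runs.
import ctypes
import time

winmm = ctypes.windll.winmm
kernel32 = ctypes.windll.kernel32
winmm.timeGetTime.restype = ctypes.c_uint32   # raw DWORD milliseconds

freq = ctypes.c_longlong()
kernel32.QueryPerformanceFrequency(ctypes.byref(freq))

def qpc_ms():
    """QPC reading converted to milliseconds (sub-millisecond resolution)."""
    counts = ctypes.c_longlong()
    kernel32.QueryPerformanceCounter(ctypes.byref(counts))
    return counts.value * 1000.0 / freq.value

def tgt_ms():
    """TGT reading, already in milliseconds, with roughly 1ms resolution."""
    return float(winmm.timeGetTime())

qpc0, tgt0 = qpc_ms(), tgt_ms()
for _ in range(30):
    time.sleep(60)                            # sample once a minute
    drift = (qpc_ms() - qpc0) - (tgt_ms() - tgt0)
    print(f"QPC minus TGT drift so far: {drift:+.3f} ms")
```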

By the way, this is a Windows rather than a Cubase-specific issue. Cakewalk's Sonar, to give another sequencer example, has a parameter named 'IgnoreMidiInTimeStamps' in its TTSEQ.ini initialisation file, with a default value of zero that you can change to '1' if your MIDI data is being recorded at the wrong time or it is drifting.

Up until Cubase SE/SL/SX 3.0.1 and Nuendo 3.0.1, Steinberg's 'Use system timestamp' option only affected DirectMusic drivers, so if you suffered from strange timing anomalies when using Windows MIDI drivers, and your interface didn't provide bona fide DirectMusic drivers (few do even now), the only solution was to enable the emulated DirectMusic ports, by bypassing Steinberg's 'ignoreportfilter', and try those instead, with the timestamp box ticked.

However, from Cubase SE3, SL and SX 3.1 and Nuendo 3.1 onwards, a separate 'Use system timestamp' option has been available in the Windows MIDI page, so even in multi-interface systems using both DirectMusic and Windows MIDI drivers you can cure timing problems individually. You can find these timestamp tick-boxes by opening the Device Setup window from the Devices menu — there's one in the DirectMusic page and a second in the Windows MIDI page of the MIDI section.

You can also find out which clock is used by a MIDI interface that has non-DirectMusic drivers (the majority), by running Jay Levitt's handy MIDITime utility (www.jay.fm/miditime/). You connect your chosen interface In and Out via a MIDI loopback cable and then run the utility, which simply sends MIDI data round the loop, compares its timestamping against both system clocks, and tells you whether your system needs to have the 'Use system timestamp' box ticked.

Live MIDI Buffering Jitter

Many musicians find it difficult to get their heads around the concept of live MIDI buffering jitter, so here are the results of some practical tests. The top two yellow tracks show the original hand-drawn MIDI notes, and the very tight timing after looping them back via the Maple Virtual MIDI Cable. The output of this second track is routed to the LM7 drum synth, triggering a short clave sound, and what gets recorded is shown in the uppermost audio track. Beneath this are audio loopback captures of what you actually hear from the soft synth in real time, with increasing audio-interface buffer sizes. Even at 10ms latency you should be able to spot slight differences in spacing between the notes, while at higher buffer sizes the effects are greatly magnified, and notes will begin to emerge in 'clumps'. I've also shown two passes for the 50ms and 100ms buffer sizes, to show how this clumping varies depending on when the notes are played, compared with the creation of each new interface buffer.

If you're struggling with jittery MIDI timing problems when playing soft synths 'live', it may be due to an entirely separate issue that affects quite a few sequencers on both Mac and PC platforms, including Cubase, Logic, Reaper and Sonar, amongst others. Your performance will be captured exactly as you played it, and will also play back exactly the same, but since you hear jittery timing when actually playing, it makes it more difficult to play consistently in the first place.

This (as I explained in detail in SOS September and October 2002, in 'The Truth About Latency') is because most sequencers effectively quantise incoming MIDI data before sending it 'live' to a soft synth or sampler. The synth's output is calculated for each audio buffer and then sent to your audio interface to be heard, and the most common way of doing this is to process all relevant MIDI data (both already recorded in the track, and any 'live' data that you're currently playing) before starting to process the next audio buffer.

Unfortunately, most sequencers choose not to calculate any offsets within the next buffer relating to 'live' MIDI data — they just quantise them all to the nearest buffer boundary, and rely on the buffers being short enough to mask unwanted rhythmic artifacts. The main reason they do this is to keep every note's MIDI latency as low as possible, but at the expense of extra jitter.

For instance, if you play regular 16th notes at 120bpm, each note will occur at an interval of 125ms, but when a soft synth is played 'live' through an audio interface with a buffer size of 5ms you'll perhaps hear them with spacings such as 125ms, 125ms, 125ms, 130ms, 120ms, 130ms, 125ms and so on, where occasional notes get shoved into adjacent buffers. For most people this is still scarcely audible, but if you raise the buffer size to 20ms then you might hear a string of 'live' notes emerging with spacings of 120ms, 120ms, 120ms, 140ms, 120ms, 120ms, 140ms and so on: the 'granularity' has increased.
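You don't need any hardware to see how this granularity arises: the toy Python model below simply pushes perfectly regular 125ms note times to the next audio buffer boundary. The buffer sizes, and the assumption that the buffers aren't phase-locked to the beat, are mine, purely for illustration:

```python
# Toy model of 'live' MIDI buffer quantisation: each note is delayed to
# the start of the next audio buffer, so the spacings you hear depend on
# the buffer size. Illustrative only; not a measurement of any interface.
import math

def heard_spacings(buffer_samples, sr=44100, interval_ms=125.0, count=9):
    """Quantise ideal note times to buffer boundaries; return heard gaps (ms)."""
    buf_ms = buffer_samples * 1000.0 / sr
    heard = [math.ceil(i * interval_ms / buf_ms) * buf_ms for i in range(count)]
    return [round(b - a) for a, b in zip(heard, heard[1:])]

for buf in (256, 1024, 4096):                 # ~6ms, ~23ms and ~93ms buffers
    print(f"{buf:>4} samples: {heard_spacings(buf)}")
```

On these toy figures, 256-sample buffers give gaps alternating between roughly 122ms and 128ms, while 4096-sample buffers produce obvious 93ms/186ms 'clumping': exactly the effect described above.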

There's a very easy way to find out whether your sequencer suffers from live MIDI buffering jitter. Simply increase the audio interface buffer size to the maximum setting available and play in some fast, regular, repeated notes. Most people should be able to hear the difference once the buffer size has increased to 1024 samples (around 23ms at 44.1kHz) and if you increase it to 4096 or beyond you'll hear your live notes start to emerge in irregular clumps, despite being recorded perfectly (see the screenshot below for more extreme examples).

Conversely, there's also a very easy way to minimise these 'live' timing jitter problems: always drop to a lower buffer size, of 6ms or less, when recording a 'live' soft-synth performance. Although the recorded data will be identical, you'll hear a more accurate live rendition of what you're playing.

The Real World

While I suspect that the majority of Cubase users don't have MIDI timing problems, those that do soon speak up, so you can find quite a few complaints on the Cubase forums about MIDI recordings where all the data ends up way ahead of the audio (sometimes by several beats) or way behind. The answer is nearly always to tick the relevant 'Use system timestamp' option, but you can't assume that this is always the best setting — it depends both on your PC's motherboard and the make and model of your MIDI interface (or your audio interface, if that provides the MIDI ports). It's quite possible for Cubase to work really well until you change your interface, whereupon your MIDI timing suddenly goes completely screwy.

Because the two PC clocks may gradually drift apart, if you have unsuitable settings you may find that Cubase starts off with very good MIDI timing, but gets gradually worse the longer it's running. During really long sessions, notes being recorded may initially appear in the correct position in a part and then suddenly jump forwards or backwards in time. The longer your PC is powered up, the bigger these jumps become, but if you close down Cubase and then re-launch it immediately, the problem will disappear.

Such long-term drifts mean that many users rarely notice any problem until they are deep into a session, and then they panic, try the system timestamp option, and find it cures their immediate problem. However, since they don't know why this works and everything seems to be hunky-dory the next time they launch Cubase, they are reluctant to leave the timestamp option ticked permanently. The problem is compounded by the fact that different MIDI interfaces may need a different setting, so if, for example, you use a couple of keyboards that have their own USB MIDI connections or several different MIDI interfaces to connect all your MIDI gear, you may enjoy excellent timing from one but poor timing from another. With the correct combination of MIDI driver and timestamp setting your MIDI timing should always remain consistent.

I suffered recently from the exact problem I've just explained. For several months I'd been happily using a two-octave Evolution MK225C keyboard for MIDI editing, connected via its own USB port. However, when I wanted to record some new performances using my 88-key CME UF8, connected via the MIDI port of my Emu 1820M interface, all the recorded notes ended up piled up at the beginning of the part. As most people would, I just switched back to the MK225C to finish my work, but the next day I took some time to investigate further, only to find that the UF8 recorded notes perfectly. Then, when I attempted to use it to record some MIDI controller data, it once again all ended up at the beginning of the part. So I ticked the 'Use system timestamp' option and my timing problems immediately disappeared.

Testing The Options

If you want to find out once and for all which combination of settings provides the tightest timing for your particular motherboard timer and MIDI interface, you should ideally test all four possible combinations of tick-box and MIDI driver (Windows MIDI driver with and without system timestamp, and real or emulated DirectMusic driver with and without system timestamp).

Because even emulated DirectMusic drivers may benefit from the increased timing resolution of the QPC clock, you may, in some cases, end up with better timing using these than using real Windows MIDI drivers. If you're really determined to explore every possibility, you could also try combinations of Ins and Outs; a few musicians have even found that their best timing comes from using Windows MIDI for their MIDI Outs, but emulated DirectMusic for their MIDI Ins, for example.

To test the timing of any sequencer you just need to create a few bars filled with hand-drawn 16th notes, using the pencil tool (to provide a regular signal), and then route this to a MIDI output that you can connect back to the MIDI input of another track, so you can capture the combined latency and jitter of these MIDI ports.

The test itself is easy, and is exactly the same for any sequencer. Start by creating a song running at 120bpm, and make sure Auto Quantise is disabled; create a MIDI part lasting several bars; and then use the pencil tool to fill it with continuous 16th notes. Route the output of this track to the MIDI Out to be tested. Next, create a second MIDI track and leave its output unconnected but route its input to the MIDI In to be tested. Finally, connect a MIDI cable between this MIDI output and input, then select the second track and record for several bars, so that the hand-drawn notes from the first track are passed through the MIDI output, then back through the MIDI input and recorded onto the second track.

Now you can zoom in on both tracks simultaneously, to see how far apart individual notes are on each. On the recorded track, the note starting positions may be consistently earlier or later than those of the hand-drawn notes (latency), but start-position timing is also likely to vary a little between notes (jitter). If you switch your Ruler readout to 'seconds' you'll be able to measure the timing difference for each note in milliseconds. Do this for a dozen or so notes and you'll have a good idea of both latency and jitter.

Try the same test several times, creating a new recorded track each time, and vary system settings, such as ticking the 'Use system timestamp' box. You'll soon see which setting gives the tightest results. You can also perform the same tests with first Windows MIDI and then DirectMusic drivers (real or emulated) to find out the tightest combination of settings for your system. As you can see from the screenshots earlier on in this article, ticking this option made a vast difference to absolute timing on my system, but little difference to jitter.

Fine Tuning

Another neat trick is to do the same tests after installing a Virtual MIDI Cable (VMC). This software utility adds virtual MIDI Inputs and Outputs to your system, so you can route MIDI data between different applications, but for our purposes it's perfect for sending MIDI data between the output and input of your sequencer without the added latency and jitter of your MIDI interface, so you can directly measure the MIDI timing of the sequencer. If you have timing issues, this technique will prove whether the problem is due to the interface or the sequencer. I installed Jeff Hurchalla's Maple Virtual MIDI Cable (www.hurchalla.com/Maple_driver.html). As I discovered, by testing it using Evert van der Poll's MIDItest utility (http://earthvegaconnection.com), the Maple cable imposes a negligible delay, of 0.01ms, and jitter too low to measure.
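If you fancy rolling your own MIDItest-style measurement, the same loopback idea takes only a few lines of Python. This is a hedged sketch assuming the mido library with the python-rtmidi backend; the port names are hypothetical placeholders, so substitute whatever mido.get_output_names() and mido.get_input_names() report for your virtual cable or interface:

```python
# Send 16th notes out of one MIDI port, time their arrival at another, and
# report latency and jitter. Port names below are placeholders; list the
# real ones with mido.get_output_names() / mido.get_input_names().
import time
import mido   # pip install mido python-rtmidi

OUT_NAME = "Maple Midi Out: Port 1"   # hypothetical
IN_NAME = "Maple Midi In: Port 1"     # hypothetical

delays = []
with mido.open_output(OUT_NAME) as out, mido.open_input(IN_NAME) as inp:
    for _ in range(16):
        out.send(mido.Message('note_on', note=60, velocity=100))
        t0 = time.perf_counter()
        while inp.receive().type != 'note_on':
            pass                      # skip note_offs echoed round the loop
        delays.append((time.perf_counter() - t0) * 1000.0)
        out.send(mido.Message('note_off', note=60))
        time.sleep(0.125)             # 16th notes at 120bpm

print(f"mean latency {sum(delays) / len(delays):.2f} ms, "
      f"jitter {max(delays) - min(delays):.2f} ms")
```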

Using MIDItest you can soon get a good idea of the combined latency and jitter of an In and Out port on your MIDI interface alone. Here I've tested the Maple Virtual MIDI Cable device, proving it to have negligible latency of 0.01ms and jitter of 0.00ms — so you can effectively consider it a direct connection between MIDI In and Out, which makes it an ideal way to measure Cubase's own latency and jitter without the added effects of a hardware MIDI interface.

My best setting was Windows MIDI with timestamp enabled, when I measured random note timing offsets between 0ms and 3ms with the Maple VMC loopback, and delays of between 1ms and 4ms with my Emu MIDI port loopback. In other words, the combination of my motherboard timers, Windows and Cubase alone produced a maximum jitter of 3ms, while the jitter was identical after adding my MIDI port, but with 1ms additional latency (which ties in with my MIDItest latency and jitter results). Bear in mind also that these results are the total of both MIDI input and output jitter: when using VST Instruments you'd only experience the input side, giving a likely MIDI jitter of around 1.5ms. Since a 16th note at 120bpm happens every 125ms, this is equivalent to a tiny 1.2 percent variation in 16th-note position — which sounds good to me!

With USB and Firewire audio interfaces that also provide MIDI ports, you are sending both audio and MIDI data down a single cable, so it's possible that MIDI timing may suffer when you're playing back or recording lots of simultaneous audio tracks. Even using a separate MIDI interface, or one that's part of a PCI or PCIe audio interface, your MIDI timing may be compromised when your sequencer is being stressed in other ways. To test this out, add the 16th-note MIDI track and corresponding MIDI loopback recording track to an existing song that contains plenty of other audio and MIDI tracks. This way you can check your results under 'real-world' conditions. It may also be worth trying a change to the Audio Priority setting in the Advanced Options of Cubase Device Setup.

Sadly, these tweaks don't seem to work for everyone. There's a small minority of musicians who have tried all the options and still suffer from MIDI notes being placed too early in the part, sometimes by a large but consistent amount of several tens of milliseconds. Since such problems tend to happen with a particular combination of components, some people may have had no timing problems for years but then start to suffer when they move to a new PC. Sometimes changing the MIDI interface, or even the motherboard, will cure the problem, but if sequencer developers would offer us a MIDI offset parameter, a single tweak of this could remove recording and playback latency, leaving just the jitter component.



Published December 2007

Friday, June 25, 2021

Using Arpache

By John Walden


It's so easy to reach for the same audio plug-ins time after time - but MIDI plug-ins such as Arpache can bring something different and shouldn't be overlooked.

Given the extensive and sophisticated audio-processing functionality provided by all modern DAWs, it can be easy to overlook the powerful creative options offered by their less glamorous MIDI features, particularly MIDI plug-ins. So this month we'll concentrate on two such plug-ins — Arpache 5 and its more advanced sibling Arpache SX — and explore how you can use them to breathe life into your project.

Arpache 5

I don't know which wag at Steinberg came up with the name Arpache for the arpeggiator plug-in, but I'll set Shadows jokes aside for the moment (the editor has threatened me with something unpleasant if I stray too far into that territory...).

Although they may look simple, Arpache 5 and Arpache SX are both capable of producing excellent results. I'll start with Arpache 5, and as the PDF Plug-in Reference manual does a reasonable job of describing the basic controls, only the briefest of recaps is required here, before we move on to consider what it can bring to your music.

Arpache 5 with a setup suitable for a basic up-and-down arpeggio.

The Quantize, Length and Semi-Range settings define the basic properties of the arpeggio. Quantize controls the bar divisions at which the arpeggio notes will appear, with 32nd notes producing something quite manic at all but the slowest tempos, and a value of 1 producing one note from the arpeggio each bar. Both dotted and triplet versions are available for all time intervals between these extremes. The Length setting controls the length of the notes in the arpeggio. If note lengths are kept short (16th or 32nd settings), an almost staccato feel is created regardless of the synth sound source being used. At longer note lengths, the nature of the MIDI sound source is more significant, as the sustain and decay properties of the patch might come into play. Experimenting with the relationships between the synth sound and the Length setting can produce some interesting variations, although things can get a little OTT if you combine a short Quantize setting, a long Length setting, and a sound source with lots of sustain. Finally, the Semi-Range setting simply defines the range of semitones from which notes for the arpeggio will be taken, relative to the position of the lowest note being played.
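If it helps to pin down how these three settings interact, here's how I'd model them in a few lines of Python. To be clear, this is my interpretation for illustration, not Steinberg's actual algorithm:

```python
# Sketch of my mental model of Arpache 5's core settings: Quantize sets the
# step grid (bar divisions), Length the note duration, Semi-Range the pitch
# window above the lowest held note. An 'up' pattern only, for brevity.
def arpeggio(held_notes, quantize=16, length=16, semi_range=12,
             bars=1, tempo=120.0, beats_per_bar=4):
    """Return (start_sec, duration_sec, midi_note) tuples for an 'up' arp."""
    bar_sec = beats_per_bar * 60.0 / tempo
    step, dur = bar_sec / quantize, bar_sec / length
    lowest = min(held_notes)
    pool = sorted(n for n in held_notes if n <= lowest + semi_range)
    return [(i * step, dur, pool[i % len(pool)])
            for i in range(bars * quantize)]

# C minor plus the octave, 8th-note steps, 16th-note lengths, at 120bpm:
for event in arpeggio([48, 51, 55, 60], quantize=8, length=16):
    print(event)
```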

The top three buttons in the Playmode section are straightforward, setting the sequence of notes in the arpeggio to play either up, down or up and down. The lower buttons are rather more interesting. The '?' button simply randomises the arpeggio note order. Depending upon the sound being used, this can create a nice variation on the straight-up or straight-down patterns. Perhaps more useful is the Order On button: with this engaged, the note order of the arpeggio is defined using the Play Order facility, which allows a sequence of up to eight notes to be specified. Each number refers to one of the MIDI notes being played into Arpache 5 via the MIDI track, counted upwards from the lowest note. In Play Order mode you can create some almost riff-like progressions (which work well for the usual mid-range keyboard parts) but it's also easy to create interesting bass lines.
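Continuing the sketch above, Play Order can be thought of as a list of indices into the held notes. I'm assuming 1-based numbering from the lowest note upwards, which is my reading of the panel rather than documented behaviour:

```python
# Play Order as index lookups: 1 = lowest held note, 2 = next up, and so on.
def play_order_pool(held_notes, order):
    """Turn a Play Order pattern into the pitch sequence it would produce."""
    notes = sorted(held_notes)
    return [notes[i - 1] for i in order]

# With C-E-G-C held, 1-1-3-2 leans on the low note for a riff-like feel:
print(play_order_pool([48, 52, 55, 60], [1, 1, 3, 2]))   # [48, 48, 55, 52]
```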

Experimentation is the name of the game here, as it can take some time to work out just how the various controls interact with each other. Fortunately, for the Play Order mode there is a small number of presets for users to explore — and you can also save your own patterns as presets.

Wot, No Electro?

Of course, nobody would want to use Arpache 5 to create something suitable for a synth-based dance track... would they? Oh, alright then, if you want to, you can create the classic (clichéd?) synth chord patterns that will return you to a land of '80s pop, or place you very firmly into certain styles of dance music. Basic Arpache 5 settings aside, all this requires is a suitable sound source, and the Halion One patch 'Polymood' makes a decent starting point — though there are plenty of other preset sounds in the various Cubase 4 VST instruments that you could put to good use.

Play Order mode offers more control over the form of the arpeggio created by Arpache 5.

There are also some less obvious applications for Arpache 5. For example, used with a suitable bass pad-like sound, a combination of slow Quantize (such as a setting of 4 to produce quarter notes) and a Length setting of 1 (so that each note in the arpeggio sustains for a whole bar) can be used to generate a drone-like bass part, which will have some nice timbral movement as the sustained notes are brought in and out of the arpeggio. Used with the right sound source (for example, Halion One's 'Close To The Edge' patch) and at a suitably slow tempo, this can be made to sit anywhere between a melodic bass pad and sound design. You could also add in some atmospheric percussion (such as the 'Storm' style from Groove Agent 3) — and things can start to get quite scary!

In fact, Arpache 5 can be very effective with percussive and drum sounds, so let's look at two examples that provide a useful way of exploring this.

First, try a percussive synth sound using your VSTi of choice (the 'Djembe+Marimba Layer' patch in Halion One would be a suitable starting point) and, with a medium-to-slow tempo (70-90bpm), set a Quantize value of 16 and Semi-Range of 12 (the Length setting doesn't make any difference with this kind of sound). It is then a case of experimenting with the Playmode settings. Although the most interesting effects can be created by using the Play Order options, even a simple up and down configuration can produce some interesting rhythmic effects. By gradually adding and subtracting notes from the MIDI keys being held, you can change the rhythmic feel, adding movement and dynamics to the performance.

Exactly the same process can work equally well with straight drum-kit samples, and any GM-based drum kit could serve as a starting point. You're unlikely to come up with a traditional rock or pop drum-pattern using this method, but for a more abstract or experimental piece the approach can generate plenty of interesting material.

SXing Things Up

Arpache 5 has been around for a considerable time but (fortunately) Steinberg retained it when they introduced Arpache SX, as Cubase moved into its SX era, and both these plug-ins have been preserved in Cubase 4.

It doesn't have to be just dance music: here, Halion One is providing a bass synth pad via a slow arpeggio from Arpache 5. I've also added atmospheric drums from Groove Agent, to create a rather unsettling feel!

In terms of basic operation, the two are similar, but Arpache SX replaced the Play Order options of its predecessor with the more sophisticated SEQ mode. Again, the basic use of each control for Arpache SX is described in the Plug-in Reference PDF.

What this PDF doesn't do such a good job of is illustrating the potential of SEQ mode. The key creative element of this mode is the ability to define aspects of the arpeggio from an existing MIDI phrase. This is best done by recording a short phrase (one, two or four bars usually works best) into a MIDI track. This phrase is then simply dragged and dropped onto the box in the lower-left corner of the Arpache SX window.

Depending on how the various settings are then configured, selecting SEQ mode allows the relative pitches, velocities and timing of the notes in the MIDI phrase to control how the arpeggio is created by the plug-in. It is also important to note that the number of different MIDI note pitches within the dropped phrase can have an influence upon how Arpache SX works its magic — and a phrase with a larger number of MIDI pitches will produce more complex (and more unpredictable) arpeggios. If you want to explore this further, simply drag and drop a suitable MIDI phrase, select SEQ mode as the Arp Style, and then work through the steps described below.

If Quantize is set to Source, then the timing of the notes in the arpeggio is taken directly from the MIDI phrase, and if the MIDI phrase contains 16th notes, it may produce a fairly standard-sounding arpeggio pattern. However, if the sequence has a few 'missing' notes in an otherwise 16th note pattern, a more interesting rhythmic element is created in the arpeggio. Incidentally, the Quantize value can also be set to something other than Source while in SEQ mode — this simply forces the pitch pattern contained in the dropped phrase onto a regular timing grid defined by the Quantize setting.

The Arpache SX plug-in offers plenty of flexibility via its SEQ mode. A MIDI phrase can be dropped into the bottom-left box to provide the arpeggio source.

The pitches of the arpeggio that's created depend on the Trigger Mode setting, and it is here that the number of different pitches in the dropped MIDI phrase interacts with the number of different MIDI notes being held as a chord and fed to the Arpache SX input. Let's consider an example where the dropped phrase contains five different pitches, but only a three-note chord is being played into Arpache SX to drive the arpeggio. Arpache tries to match the pitches of the MIDI input to the relative pitches of the dropped phrase. If the number of pitches is not the same, then the plug-in needs to know how to deal with that and the Trigger Mode setting provides it with that information. If, for example, we wanted to create a more traditional arpeggiator-style result, the Sort First setting would provide a good starting point: as there are fewer MIDI notes being input than there are different pitches in the dropped phrase, the first input note is used repeatedly to fill in the gaps in the matching process — and this note therefore appears more frequently in the arpeggio. In contrast, if the Sort Normal setting is used, Arpache SX only matches pitches up to the number of input notes: no notes are substituted to fill the 'missing' pitches and, as a result, there are some gaps in the arpeggio. This isn't necessarily a bad thing, as it can create some unexpected and often interesting rhythmic effects.
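To make the difference between these two modes concrete, here's a small Python sketch of the matching logic as I understand it; the slot numbering and the gap handling are my reading of the behaviour described above, not Steinberg's code:

```python
# Sort First vs Sort Normal, as I read them: the dropped phrase supplies
# relative pitch 'slots' (0 = its lowest pitch); held input notes are then
# matched to those slots, and the modes differ over unmatched slots.
def match_pitches(phrase_slots, input_notes, mode="sort_first"):
    """Map phrase slots to input notes; None marks a rest (gap)."""
    notes = sorted(input_notes)
    out = []
    for slot in phrase_slots:
        if slot < len(notes):
            out.append(notes[slot])
        elif mode == "sort_first":
            out.append(notes[0])      # fill the gap with the first note
        else:                         # "sort_normal": leave a rest
            out.append(None)
    return out

slots = [0, 2, 1, 4, 3]               # a five-pitch dropped phrase
chord = [48, 52, 55]                  # but only three notes held
print(match_pitches(slots, chord, "sort_first"))    # [48, 55, 52, 48, 48]
print(match_pitches(slots, chord, "sort_normal"))   # [48, 55, 52, None, None]
```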

As a quick aside here, if Trigger is chosen as the Trigger Mode, then Arpache SX simply triggers the original phrase contained in the dropped MIDI part. If just a single MIDI note is played into Arpache SX to create the arpeggio, this will be used as the root note for the dropped phrase, and when you play a different single note it will simply transpose that phrase. This provides a very simple way of triggering and transposing a riff, and it is also an obvious candidate for bass-line construction.

The upper track, containing chords, was used to create an arpeggio with Arpache SX. The lower track shows the result of applying the Merge MIDI In Loop process. This new MIDI part can then be edited and brought back into Arpache as required.

A final touch of dynamics can be added via the Velocity Source buttons. The three available options — SEQ, Input and Fixed — are fairly self-explanatory. In Fixed mode, the notes of the arpeggio all have the same velocity, and this obviously tends to produce a fairly static (in terms of volume dynamics) output. In SEQ mode, the velocity of the notes in the dropped MIDI phrase controls the velocity of the same steps in the arpeggio, and by editing the note velocities before dropping the phrase into Arpache you're able to add some very controlled volume dynamics — and considerable rhythmic interest — to the resulting arpeggio. With Input as the Velocity Source, the volume dynamics are controlled by the velocities of the individual notes being played into Arpache SX and, again, this allows the player to add some real dynamics to their performance.

Although there's plenty of fun to be had by dropping any old MIDI phrase into Arpache SX, perhaps a more obvious (though no less interesting) way of using the SEQ mode is to import a MIDI element from elsewhere in your project to use as the arpeggiator source — and a short MIDI bass line or drum phrase can work very well here. The result is a synth arpeggio that is, in some way, rhythmically linked to the source phrase. This can be really effective in helping to generate a tight musical groove, while achieving an arpeggio with a much less 'robotic' feel.

Look What I Played!

Between them, the Arpache 5 and Arpache SX arpeggiators offer a bewildering array of possibilities that could be used in your projects, but occasionally there are times when you can't quite get the result that you want from them.

Trigger Mode influences how Arpache SX deals with matching the number of different pitches contained in the dropped phrase to the number of pitches in the chord arriving at the MIDI input. This setting can produce some interesting variations in the resulting arpeggio.

For example, perhaps you've got a great result, except for a few problem notes that simply don't work in the context of the musical arrangement in which the arpeggio is being played. Fortunately, it is possible to transform the output from either plug-in into a conventional MIDI part, containing all the notes from the arpeggio, and this part can then be edited using Cubase's standard MIDI editing tools. Whether you simply need to remove the odd note that is surplus to requirements, or change note velocities to control the volume dynamics, this means that you have complete control over the final performance.

To do this, you need to use the Merge MIDI In Loop function. First, you solo the MIDI track that contains the MIDI part you wish to process. The Left and Right locators must then be set around this part (if you select the part and then press 'P', which is the 'Locators To Selection' key command, the locators will automatically be placed around it). The MIDI / Merge MIDI In Loop menu option will then bring up a small dialogue box, and if you tick the 'Include Inserts' and 'Erase Destination' options Cubase will obligingly replace the existing MIDI part (which will usually be the chords you are using to drive Arpache) with the actual notes that have been created by the arpeggio process — very neat! Once you're done with processing, though, make sure you deactivate the Arpache plug-in on the track, or you'll find yourself facing arpeggiated mayhem! 



Published January 2008