I remain baffled by the CPU load in Cubase SX 2 (as shown in the VST Performance indicator). I'm particularly curious to know why in my larger projects the indicator shows a constant load (typically 80 percent or more) even when I'm not playing anything back! What exactly is the CPU doing when nothing is happening in the project? My projects typically have 15 to 25 audio tracks, five to 10 virtual-instrument tracks and a couple of MIDI tracks, with five or so group channels and maybe a couple of FX Channels. Some of the channels have an insert effect or two, typically a compressor or gate, and there are a couple of aux channels for send effects.
SOS Forum Post
PC music specialist Martin Walker replies: When Cubase isn't playing back, the CPU overhead is largely down to the plug-ins, all of which remain 'active' at all times. This is chiefly to ensure that reverb tails and the like continue smoothly to their end even after you stop the song, and it also lets you treat incoming 'live' instruments and vocals with plug-ins before you actually start recording. However, this isn't the only possible design approach: Magix's Samplitude, for instance, lets plug-ins be allocated to individual parts in each track, which is not only liberating for the composer but also means they consume processing power only while that part is playing.
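To picture why this costs CPU even at idle, here's a minimal sketch of a host audio callback. It's not Cubase's actual code, and the PlugIn interface and every name in it are invented for illustration, but it shows the key design decision: every loaded plug-in gets processed on every buffer, whether or not the transport is running.

```cpp
#include <cstdio>
#include <vector>

// Hypothetical plug-in interface -- a stand-in for a real SDK such as
// VST; all names here are invented for illustration.
struct PlugIn {
    virtual void process(float* buffer, int numSamples) = 0;
    virtual ~PlugIn() = default;
};

struct ToyReverb : PlugIn {
    void process(float* buffer, int numSamples) override {
        // Real DSP would go here; it runs whether or not audio is playing,
        // so the reverb can keep emitting its tail after the song stops.
        for (int i = 0; i < numSamples; ++i) buffer[i] *= 0.99f;
    }
};

// Host audio callback: every loaded plug-in is processed on every buffer,
// transport running or not -- which is where the idle CPU load comes from.
void audioCallback(std::vector<PlugIn*>& plugIns, float* buffer,
                   int numSamples, bool transportRunning)
{
    (void)transportRunning; // deliberately ignored: plug-ins stay active
    for (PlugIn* p : plugIns)
        p->process(buffer, numSamples);
}

int main()
{
    ToyReverb reverb;
    std::vector<PlugIn*> chain{ &reverb };
    float buffer[256] = { 1.0f };
    audioCallback(chain, buffer, 256, /*transportRunning=*/false);
    std::printf("processed one buffer while the song was stopped\n");
    return 0;
}
```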
Of the plug-ins you're likely to use regularly, reverbs are often the most CPU-intensive, so make sure you set these up in dedicated FX Channels and use the channel sends to add varying amounts of the same reverb to different tracks, rather than using them as individual insert effects on each track. You can do the same with delays and any other effects that you 'add' to the original sound — only effects such as EQ and distortion, where the whole sound is treated, need to be inserted into individual channels.
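A quick back-of-envelope comparison makes the point. The five-percent figure below is an invented, illustrative cost for one reverb instance, not a measurement, but the ratio is what matters:

```cpp
#include <cstdio>

// Illustrative numbers only: suppose one reverb instance costs 5% CPU
// and a channel send is essentially free by comparison.
int main()
{
    const int    numTracks     = 20;
    const double reverbCostPct = 5.0; // assumed cost of one reverb instance

    double asInserts   = numTracks * reverbCostPct; // one reverb per track
    double asFxChannel = 1 * reverbCostPct;         // one shared reverb, fed by sends

    std::printf("Insert on every track: %.0f%% CPU\n", asInserts);   // 100%
    std::printf("Single FX Channel:     %.0f%% CPU\n", asFxChannel); //   5%
    return 0;
}
```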
The other main CPU drain for any sequencer when a song isn't playing back comes from software synths that impose a fixed overhead depending on the chosen number of voices. These include synth designer packages such as NI's Reaktor and AAS's Tassman, whose free-form modular approach makes it very difficult to determine when each voice has finished sounding. Fixed-architecture software synths, on the other hand, are more likely to use what is called dynamic voice allocation. This imposes only a tiny fixed overhead for the synth's engine, plus some extra processing for each note, incurred only while that note is actually sounding.
If you use a synth design package like Reaktor or Tassman, try reducing the maximum polyphony until you start to hear 'note-robbing' — notes dropping out because of insufficient polyphony — and then increase it again to the next setting up. This can sometimes drop the CPU demands considerably. Many software synths with dynamic voice allocation can also benefit from this tweak if they offer a similar voice 'capping' preference, as the sketch below illustrates.
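Here's a deliberately simplified sketch (every name invented for illustration) of how dynamic voice allocation and a voice cap interact: the CPU cost follows the number of currently active voices, and hitting the cap is exactly the 'note-robbing' described above.

```cpp
#include <list>

// A toy fixed-architecture synth with dynamic voice allocation.
struct Voice {
    int note;
    int samplesLeft;                        // stand-in for a decaying envelope
    bool finished() const { return samplesLeft <= 0; }
    void render(float* out, int n) {
        // Per-voice DSP cost would go here.
        (void)out;
        samplesLeft -= n;
    }
};

class Synth {
    std::list<Voice> active;  // CPU cost scales with the size of this list
    int maxVoices = 16;       // the voice 'capping' preference
public:
    void noteOn(int note) {
        if ((int)active.size() >= maxVoices)
            active.pop_front();             // steal the oldest voice: 'note-robbing'
        active.push_back(Voice{note, 44100});
    }
    void render(float* out, int n) {
        // Tiny fixed engine overhead here, then work per active voice only:
        for (Voice& v : active) v.render(out, n);
        active.remove_if([](const Voice& v){ return v.finished(); });
    }
};

int main()
{
    Synth synth;
    float buffer[512] = {};
    for (int note = 60; note < 80; ++note) synth.noteOn(note); // 20 notes, cap of 16
    synth.render(buffer, 512);
    return 0;
}
```

A modular design like Reaktor's can't easily tell when finished() would be true for an arbitrary patch, which is why such synths tend to render every allotted voice all the time instead.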
Anyone who has selected a buffer size for their audio interface that results in very low latency will also notice a hike in the CPU meter even before the song starts, simply due to the number of interrupts occurring — at 12ms latency the soundcard buffers need to be filled just 83 times a second, but at 1.5ms this happens 667 times a second, so it's hardly surprising that the CPU ends up working harder. For proof, just lower your buffer size and watch the CPU meter rise — depending on your interface, the reading may more than double between 12 and 1.5ms. You'll also notice a lot more 'flickering' of the meter at lower latencies. If you've finished the recording process and no longer need low latency for playing parts into Cubase, increase it to at least 12ms.
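The arithmetic behind those figures is simple enough to work through, assuming a 44.1kHz sample rate:

```cpp
#include <cstdio>

// How often must the driver refill the soundcard buffer at a given latency?
int main()
{
    const double latenciesMs[] = { 12.0, 6.0, 3.0, 1.5 };
    const double sampleRate    = 44100.0;

    for (double ms : latenciesMs) {
        double fillsPerSecond = 1000.0 / ms;
        double bufferSamples  = sampleRate * ms / 1000.0;
        std::printf("%5.1f ms latency: buffer of ~%4.0f samples, "
                    "filled %3.0f times a second\n",
                    ms, bufferSamples, fillsPerSecond);
    }
    return 0;
}
// 12ms -> ~529 samples, ~83 fills/s;  1.5ms -> ~66 samples, ~667 fills/s
```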
Finally, if some of those audio or software synth tracks are finished, freeze them so that their plug-ins and voices no longer need to be calculated. Playing back frozen tracks will place some additional strain on your hard drive, but most musicians run out of processing power long before their hard drives start to struggle.
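To put a rough number on that disk load, assuming frozen files are rendered as 24-bit stereo at 44.1kHz (adjust the figures to match your own project format):

```cpp
#include <cstdio>

// Rough streaming-bandwidth estimate for frozen tracks.
int main()
{
    const double sampleRate   = 44100.0;
    const int    bytesPerSamp = 3;   // 24-bit
    const int    channels     = 2;   // stereo
    const int    frozenTracks = 10;

    double perTrack = sampleRate * bytesPerSamp * channels; // bytes per second
    double total    = perTrack * frozenTracks;

    std::printf("Per frozen track: %.0f KB/s\n", perTrack / 1024.0);
    std::printf("%d frozen tracks: %.1f MB/s\n",
                frozenTracks, total / (1024.0 * 1024.0));
    return 0;
}
// ~258 KB/s per track, so ten frozen tracks need only ~2.5 MB/s -- trivial
// next to the sustained throughput of any modern drive.
```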
Published February 2006