Welcome to No Limit Sound Productions

Company Founded
2005
Overview

Our services include Sound Engineering, Audio Post-Production, System Upgrades and Equipment Consulting.
Mission
Our mission is to provide excellent quality and service to our customers. We offer customized service.

Wednesday, April 1, 2015

Q What’s the best way to downsample?

Sound Advice : Mixing




Hugh Robjohns



Hello SOS team, and thank you for the best education I could hope for! I want to ask what method you consider to be the best for downsampling. I have started working at 24-bit and 96kHz and am noticing the benefits in quality, but I’m confused about the best method for getting back to CD quality (16-bit, 44.1kHz). I have no problem understanding dithering 24-bit audio to 16-bit, but am less clear about downsampling.

Although it’s not the only measure of a sample-rate converter’s (SRC) quality, these plots show a clear difference in the aliasing artifacts caused by the SRC in Cubase versions 4-8 (top) and the linear-phase SRC in Voxengo’s R8Brain Pro. The latter clearly performs better.

Before attempting to master (once the mix is done and in stereo format) I take the file and downsample from 96kHz to 44.1kHz, but there is a definitely noticeable degrading of the high end when I’ve done this. I have used various tools for this, from Cubase’s built-in options to Voxengo R8Brain (apparently one of the best), but nothing seems to have worked. Ultimately, I’ve resorted to mastering my track and dithering to 16-bit, then simply recording my converter output (outputting at 24-bit, 96kHz) back into the computer at 16-bit, 44.1kHz. I seem to have achieved the best results this way — if it sounds good, then it is good, as they say.



As the converters in my audio interface are not of the highest quality, though, I can’t tell if this is doing more harm than good. Could you please shed some light on this and let me know how to get this part of the process right?



Justin Shardlow, via email



SOS Technical Editor Hugh Robjohns replies: When you get into the back-room mechanics, asynchronous sample-rate conversion (whether it be software- or hardware-based) is inherently a fairly complex subject. In essence, though, it all comes down to a virtual reconstruction of the full audio waveform from the source digital samples, and then calculating the amplitude values of that waveform at the specific moments in time when the output samples are required (at whatever new rate). This same core process applies regardless of whether it is upsampling or downsampling, although the latter requires an additional stage of low-pass anti-alias filtering to comply with the new Nyquist limit (allowing nothing through above half the sample rate).
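By way of illustration, here is a minimal sketch of that software route in Python, assuming the widely used scipy and soundfile packages and an illustrative file name. It isn't the algorithm any particular SRC product uses, just the textbook polyphase approach, in which the anti-alias low-pass is built into the resampling filter itself.

```python
import soundfile as sf
from scipy.signal import resample_poly

# Illustrative file name; sf.read returns float data, shape (frames, channels).
audio, sr = sf.read("mix_96k.wav")
assert sr == 96000

# 96000 -> 44100 reduces to the ratio 147/320 once common factors are cancelled.
# resample_poly builds the low-pass (anti-alias) FIR into its polyphase filter,
# which is the extra filtering stage a downward conversion needs.
converted = resample_poly(audio, up=147, down=320, axis=0)

# Keep 24-bit here; word-length reduction and dithering are a separate, later step.
sf.write("mix_44k1_24bit.wav", converted, 44100, subtype="PCM_24")
```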



If you think about it, this is exactly the same process as when you send audio out through your interface’s D-A converter and back through its A-D converter. The D-A converter reconstructs the analogue waveform, and the A-D re-digitises it, via the appropriate anti-aliasing filter, at the new sample rate. If your A-D has a 16-bit mode you could perform both the word-length reduction and dithering as part of the same process, too.



Hardware and software asynchronous sample-rate converters (SRCs) employ some fairly complex mathematical processes, obviously, but when performed correctly this is a very mature and highly accurate science. It is a fact, for example, that good SRC processes easily outperform the best converters in terms of dynamic range and distortion.



So, is the analogue process doing more harm than good? Theoretically, yes it is, because a good SRC should maintain a greater dynamic range and lower distortion. But that’s the theory, and modern converters are superb these days — as you say, if it sounds good, it is good!



Regarding software SRCs, though, as you’ve discovered not all are designed equally well. In fact, some are positively atrocious. You can compare and contrast a wide range of different software SRCs at http://src.infinitewave.ca. As it happens, the SRC algorithms in Cubase (versions 4-8) are fairly poor, as the Sweep and Tone plots on that web site reveal very clearly in the obvious aliasing patterns and spectral detritus. I am a little perplexed, though, at your comments about the R8 algorithm because that is, indeed, one of the better SRC algorithms on the market (as the plots also indicate very clearly).
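If you would rather run a similar sweep test on your own conversion chain than rely on the web site's plots, a rough DIY version might look like the sketch below (Python with scipy and matplotlib; the sweep length and FFT size are arbitrary choices). Aliasing from a poor SRC shows up as energy folding back down through the spectrogram, rather than a single rising line.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import chirp, resample_poly, spectrogram

fs_in, fs_out, duration = 96000, 44100, 10.0
t = np.arange(int(fs_in * duration)) / fs_in
sweep = 0.5 * chirp(t, f0=20, f1=48000, t1=duration, method="logarithmic")

# Swap in whichever SRC you want to test at this point.
converted = resample_poly(sweep, 147, 320)

f, seg_t, Sxx = spectrogram(converted, fs=fs_out, nperseg=4096)
plt.pcolormesh(seg_t, f, 10 * np.log10(Sxx + 1e-12))
plt.xlabel("Time (s)"); plt.ylabel("Frequency (Hz)")
plt.show()
```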



I don’t know which version of R8 you’re using, but neither the free nor Pro versions suffer from aliasing or other artifacts. So, assuming the software is working correctly, there are a number of possible reasons for the HF degradation you’ve experienced. The most obvious first place to look is the newly imposed Nyquist roll-off, although this should be well above the hearing range of most people, and the A-D converter in your analogue conversion process should be applying a similar roll-off, too.



So if it’s not the presence of the roll-off, could it be something to do with the filter type? The filters in most modern A-D converter chips are not quite as steep as those in decent software SRCs, and most actually allow some aliasing because they are only about -6dB at the Nyquist frequency. In contrast, most decent SRC algorithms have much better filters that really do stick to the rules (Cubase doesn’t, but R8 does!).
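To put some numbers on that, here is a quick sketch comparing two illustrative FIR low-pass designs for a 96kHz input: a long, steep one of the kind a good SRC might use, and a short, gentle one more like a typical converter chip's filter. The tap counts and cutoffs are made up for the illustration, not taken from any real product.

```python
import numpy as np
from scipy.signal import firwin, freqz

fs = 96000
nyq_new = 22050   # Nyquist frequency of the 44.1kHz target rate

steep = firwin(2047, cutoff=21500, fs=fs)   # long filter: sharp transition band
gentle = firwin(63, cutoff=22050, fs=fs)    # short filter: roughly -6dB at its cutoff

# Read off each filter's attenuation at the new Nyquist frequency.
for name, taps in [("steep", steep), ("gentle", gentle)]:
    w, h = freqz(taps, worN=[nyq_new], fs=fs)
    print(f"{name}: {20 * np.log10(abs(h[0])):.1f} dB at {nyq_new} Hz")
```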



Another possibility could be the time-domain aspect of the filter. The R8Brain Pro version allows the selection of linear-phase or minimum-phase filters, which you may perceive as having slightly different characters. Linear-phase filters introduce a small amount of pre-ringing (which can’t happen in the analogue world), while minimum-phase filters do not. But again, D-A and A-D filters are usually linear-phase, with the attendant pre-ringing.
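The pre-ringing point is easy to see numerically. The sketch below (again Python/scipy, with arbitrary filter parameters) takes a linear-phase FIR low-pass and derives a minimum-phase counterpart; plotting the two impulse responses shows the symmetric ringing before the main peak in the linear-phase case and its absence in the minimum-phase one.

```python
import matplotlib.pyplot as plt
from scipy.signal import firwin, minimum_phase

linear = firwin(255, cutoff=21500, fs=96000)   # symmetric, linear-phase FIR
# minimum_phase() returns a shorter minimum-phase counterpart; its magnitude
# response is not identical to the original, but here we only care about the
# time-domain shape of the two impulse responses.
min_ph = minimum_phase(linear)

fig, (ax1, ax2) = plt.subplots(2, 1)
ax1.plot(linear); ax1.set_title("Linear phase: symmetric ringing, including pre-ringing")
ax2.plot(min_ph); ax2.set_title("Minimum phase: energy concentrated at the start")
plt.tight_layout(); plt.show()
```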



Finally, I wonder if the issue you’re experiencing is actually related to signal level, rather than high-frequency content, since you’re working with ‘mastered’ material. This could be our old nemesis, the dreaded inter-sample peak! If you have normalised the signal so that peak samples hit 0dBFS, it is possible for peaks in the waveform to rise above the height of those existing samples.



As I’ve explained, SRC processes calculate the precise amplitude of the waveform in between any existing samples, and any inter-sample peaks could be higher than 0dBFS, resulting in clipping distortion and aliasing. In contrast, your analogue conversion process probably manages to avoid this clipping issue because of a slightly lower analogue signal level into the A-D. So it might be worth leaving a few decibels of headroom on your 96kHz file before the SRC process, using R8, and seeing if that helps.    
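If you want to check this before converting, here is a hedged sketch of that headroom test: oversample the 96kHz master to estimate the inter-sample (true) peak, and pull the level down if it exceeds a chosen ceiling. The file names and the -1dBFS ceiling are illustrative choices, not a standard.

```python
import numpy as np
import soundfile as sf
from scipy.signal import resample_poly

audio, sr = sf.read("master_96k.wav")   # illustrative file name

# 4x oversampling gives a reasonable estimate of peaks that fall between samples.
oversampled = resample_poly(audio, up=4, down=1, axis=0)
true_peak_db = 20 * np.log10(np.max(np.abs(oversampled)))
print(f"estimated true peak: {true_peak_db:.2f} dBFS")

ceiling_db = -1.0
if true_peak_db > ceiling_db:
    # Trim the whole file so the estimated true peak sits at the ceiling.
    audio *= 10 ** ((ceiling_db - true_peak_db) / 20)
    sf.write("master_96k_headroom.wav", audio, sr, subtype="PCM_24")
```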


Monday, March 30, 2015

Q How does mastering differ for vinyl and digital releases?

Sound Advice : Recording



Obviously a vinyl record is a different thing from a CD or a WAV file, but does it require a separate, dedicated master, or are the two formats basically made from the same mastered file?



Eric James


Jasper King, via email


For the most part, the mastering process for vinyl and digital formats can be the same — and any guesswork around things like the stereo spread of bass frequencies is probably best left for the cutting engineer. (Photo: JacoTen / Wikimedia Commons)

SOS contributor Eric James replies: This is a timely question — mastering for vinyl has recently gone from being a specialised rarity to a common extra that clients ask for in addition to the digital master. And it usually is that way around: a digital main release (CD or download) with a vinyl version, perhaps for a shorter run, for sale at gigs, and sometimes as part of a marketing plan. The proportion of projects that are primarily for vinyl release, with digital secondary, is much smaller. I mention this because the format of the primary offering can sometimes make a difference.



The short answer to the question, though, is yes, sort of: separate masters are required for CD replication or digital distribution and for vinyl records. However, in the majority of cases the mastering processing can be the same for both, as the crucial differences between them are practical (i.e. the level and extent of limiting, the word length, and the sequencing of the files). A digital master for CD has to have a 16-bit word length, and it can be as loud and as limited as the client’s taste or insecurity dictates; with the vinyl master there is a physical limit to what can be fed to the cutting head of the lathe, and so heavily clipped masters are not welcome and can only be accommodated, if at all, by serious level reduction. For vinyl, the optimum source is 24-bit, dynamic, and limited either extremely lightly or not at all. The sequencing difference is that delivery from mastering for digital is either individual WAV files for download or a single DDPi file for CD replication, whereas for vinyl the delivery is generally two WAV files, one for each side of the record.
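For the digital side, the word-length reduction itself is simple enough to sketch. The following is the plain textbook version (TPDF dither of plus/minus one LSB added before rounding to 16-bit), without the noise shaping that a mastering-grade ditherer may offer; the file names are illustrative.

```python
import numpy as np
import soundfile as sf

audio, sr = sf.read("master_44k1.wav")   # float data in the range -1.0..1.0

lsb = 1.0 / 32768.0                      # one 16-bit LSB in floating-point terms
# TPDF dither: the sum of two uniform random values spans +/-1 LSB.
tpdf = (np.random.uniform(-0.5, 0.5, audio.shape) +
        np.random.uniform(-0.5, 0.5, audio.shape)) * lsb
dithered = np.clip(audio + tpdf, -1.0, 1.0)

quantised = np.round(dithered * 32767.0).astype(np.int16)
sf.write("master_44k1_16bit.wav", quantised, sr, subtype="PCM_16")
```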



This is how it generally works at my own, pretty typical, facility. We run the mastering processing through the analogue chain, gain-staging so that the final capture is a louder but, as yet, unlimited version of the master. To this we can subsequently add level and any required limiting. In the simplest scenario, then, this as-yet unlimited version can serve as the vinyl master, and a different version, which has had gain added, becomes the digital master. This works best when the primary focus is the vinyl, as the louder digital version benefits from the dynamics preserved in the vinyl master.



Things can get more tricky if the primary focus is the digital master, and especially when that is required to be fairly loud. You can’t simply take an unlimited file and add 4 or 6 dB of limiting without sonic consequences, and so for loud CD masters, we normally add another step of gain-staging and include some light limiting during the initial processing run, the result being a louder master to begin with for the second stage of adding gain.
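A quick numeric check makes the point: take an unlimited capture that already peaks close to full scale, add 6dB of gain, and count how many samples would now land above 0dBFS, which is roughly the amount of work a limiter would be forced to do. The file name is illustrative.

```python
import numpy as np
import soundfile as sf

audio, sr = sf.read("unlimited_capture.wav")   # illustrative file name
gained = audio * 10 ** (6.0 / 20.0)            # add 6dB of gain

over = np.abs(gained) > 1.0
print(f"{over.sum()} samples ({100 * over.mean():.3f}%) would exceed 0 dBFS")
```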



This is not always the way the issue is presented: there is sometimes talk of different EQ settings and the use of elliptical filters and whatnot. The fact is, though, that the EQ considerations offered as being necessary for vinyl are pretty much desiderata for most decent digital masters too. For example, extreme sibilance, often mentioned, certainly is a problem for vinyl, but then it’s hardly desirable for CD playback either. Another myth concerns ‘bass width’. I’ve been told by alleged label experts that (and here I quote) “vinyl masters need to be mono in low frequencies, and the low end, like 80-200 [Hz], almost mono.” If this were true, it would make you wonder how classic orchestral recordings (which have the double basses well off-centre to the far right) ever managed to get cut to vinyl! The fact is that wildly out-of-phase and excessive bass can be problematic, and if it’s present in a mix, a certain amount of taming will be needed at the mastering stage. But even then, if the master is going to a reputable cutter it is best to leave that decision to them.
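Rather than guessing, it is easy to measure how wide or out-of-phase the low end actually is. The sketch below low-passes both channels below 200Hz (simply because that is the figure quoted above) and compares the energy in the Side signal with the Mid signal; a ratio approaching or exceeding 1 in that band is the sort of thing worth discussing with the cutting engineer. The file name and filter order are illustrative.

```python
import numpy as np
import soundfile as sf
from scipy.signal import butter, sosfiltfilt

audio, sr = sf.read("vinyl_premaster.wav")   # stereo float file (illustrative name)

# Isolate the low end with a 4th-order low-pass at 200Hz.
sos = butter(4, 200, btype="lowpass", fs=sr, output="sos")
low = sosfiltfilt(sos, audio, axis=0)

mid = (low[:, 0] + low[:, 1]) / 2
side = (low[:, 0] - low[:, 1]) / 2
ratio = np.sqrt(np.mean(side ** 2)) / np.sqrt(np.mean(mid ** 2))
print(f"low-frequency Side/Mid energy ratio: {ratio:.2f}")
```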


Friday, March 27, 2015

Q Can I feed my monitors from my audio interface’s headphone output?

Sound Advice : Recording



I want to add another set of monitors for A/B referencing, but my interface has only one set of monitor outputs. I don’t want to degrade the signal in any way. Is it possible to simply use a splitter adaptor to separate the stereo channels from the headphone output into a left and right output to the monitors (without any buzz or hum)? I don’t really need the switching facility as I’d use software for that.



Hugh Robjohns



Via SOS forum

Your interface’s headphone output can be used to feed audio to your monitor speakers, but it’s unlikely to be the best option.

SOS Technical Editor Hugh Robjohns replies: You can do that, but it is an unbalanced signal, so the likelihood of ground-loop hums and buzzes is quite high. Headphone outputs also tend to be noisier and often suffer higher distortion than dedicated line outputs. (At the end of the day, it’s a small power-amp stage rather than a proper line driver.)



There are plenty of ready-made balanced line-level switch boxes on the market, though (try Coleman, ART or Radial, for example). However, most such products seem surprisingly expensive for such a simple device — so it might be worth investing in an affordable passive monitor controller, which can cost very little more. I know you said that you’d manage the selection via your software, but in the event of an error on your system resulting in full-scale digital noise being sent to the monitors, it’s handy to have a hardware volume control or mute button to hand!