Welcome to No Limit Sound Productions

Company Founded
2005
Overview

Our services include Sound Engineering, Audio Post-Production, System Upgrades and Equipment Consulting.
Mission
Our mission is to provide excellent quality and service to our customers. We offer customised service.

Wednesday, September 26, 2018

Q. How can I boost bass on tracks for the Internet?

I have a few tracks that I'd like to put on-line, but I'm having trouble getting the bass to sit forward enough in the mix to be audible on standard speakers. The mix sounds fine on my studio monitors, but when it's played back elsewhere, the bottom end is missing. These are bass-led tracks, so the low end really needs to sit up-front.

I don't want to just crank the bass up and swamp everything, and equally I don't really want to change the source patch, as it's the sound I want. Can you suggest anything?

SOS Forum Post

Boosting lower-mid frequencies can help to give the impression of deep bass on domestic loudspeaker systems. You can do this with EQ, or by using specially designed software, such as Waves' Maxx Bass. As that software's GUI shows, the original bass (the blue curve) is augmented by a secondary hump of low-mid boost: the yellow curve.

Editor In Chief Paul White replies: The problem here is that most home systems, and those connected to most computers, won't be able to reproduce the deep bass that you can hear on your studio monitors. However, a lot of what sounds like bass is actually in the 150-250Hz range, so it's here that you should be applying boost, possibly in conjunction with some shelving low-cut filtering below 70Hz or so. Alternatively, you could try using software such as Waves' Maxx Bass to process your bass sounds (www.waves.com), as this moves more of the energy into the lower-mid range to create the illusion of deep bass on smaller speakers.
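As an illustration of the EQ approach Paul describes, here is a minimal Python sketch of a low-mid peaking boost using the widely used 'Audio EQ Cookbook' biquad formulas. The 200Hz centre, +4dB gain and Q of 1 are illustrative values, not a prescription; in practice you would tune them by ear:

```python
import numpy as np
from scipy.signal import freqz

def peaking_eq(f0, gain_db, q, fs):
    """Biquad peaking-EQ coefficients (RBJ 'Audio EQ Cookbook' formulas)."""
    a_lin = 10 ** (gain_db / 40)            # sqrt of the linear gain
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return b / a[0], a / a[0]

fs = 44100
b, a = peaking_eq(200.0, 4.0, 1.0, fs)            # +4 dB hump around 200 Hz
w, h = freqz(b, a, worN=[2 * np.pi * 200 / fs])   # response at the centre frequency
print(round(20 * np.log10(abs(h[0])), 2))         # 4.0 dB of boost, as designed
```

The same coefficient recipe provides high- and low-shelf variants, which could supply the low-cut below 70Hz mentioned above.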

Before you begin processing, it's important to play some commercial mixes that you're familiar with over your monitors, so you can establish a reference point. Bear in mind that your room could be emphasising bass frequencies, making you mix with less bass than you actually need. Once you've completed a mix, double-check it on other systems (large and small) to see how the bass sounds.

When it's time to encode your mixes as MP3s, there are some points that you should consider. First, it's probably wise to mix a separate version for MP3 encoding, so you can apply slightly different techniques. As your material is bass-driven, try not to over-compress, as this can lead to a muddled low end, partly because the HF content is attenuated and shifted around by the encoding process. Second, reduce your peak levels by one or two decibels, to give the encoder a bit of space to breathe. This can be done by either lowering the output level of your mix-buss limiter, or normalising the mixed file appropriately.



Published September 2007

Monday, September 24, 2018

Q. What PC recording setup should I start with?

I'm wanting to get a recording setup on my PC, but I don't want to spend a great deal of money. Ideally, I'd like a freeware PC software package with which I can lay down some basic tracks. I need to be able to input MIDI data using my controller keyboard, so something with various sound banks would be ideal.

Jeff Scott

PC music specialist Martin Walker replies: Many musicians find that the easiest (and most cost-effective) way to get started in PC-based sequencing is to buy a decent audio interface, since many of these are now bundled with cut-down versions of flagship sequencer applications which, despite their 'entry-level' status, are extremely capable. Such bundles change from time to time, but a visit to the interface manufacturer's web site should soon tell you what's currently included. For instance, many M-Audio products bundle Ableton Live Lite 5, while others may feature a cut-down version of Steinberg's Cubase.

If you've already got a suitable audio interface for your purposes, then you could consider a freeware Windows XP sequencing package, such as those I reviewed in last month's PC Musician feature (www.soundonsound.com/sos/sept07/articles/pcmusician_0907.htm). For laying down basic tracks with a MIDI keyboard you might like to try Luna Free (www.mutools.com), which supports both audio and MIDI recording and playback, and provides both piano-roll and event-list editors, as well as supporting VST plug-in effects and VST instruments. Another one to try might be the freeware version of Anvil Studio (www.anvilstudio.com), a more traditional MIDI-based sequencer offering comprehensive Staff, Lyrics, Piano Roll, Drum, Loops, Audio, and Event editors, along with support for a single mono/stereo audio track, which may appeal more if you 'read the dots'. There are also plenty of entry-level applications for sale from developers including Cakewalk and Steinberg (I discussed many of these in SOS April 2005 as part of my 'Easier Alternatives To Flagship Music Apps' feature).

Luna Free is a simple yet effective application which, as its name suggests, is free! It works on Macs and PCs, and can be upgraded (at a small cost), should the limitation of only being able to run four VST instruments simultaneously put you off.  

Some sequencers do include various software synths that will provide you with a basic set of sounds to help lay down your tracks, but if you want a more comprehensive collection of instruments for a minimal outlay, there are several approaches.

Some people get by with Microsoft's GS Wavetable SW Synth (bundled with Windows), which features a set of General MIDI sounds derived from Roland's well-respected Sound Canvas technology. You simply point to this synth in your sequencer as the output device for MIDI playback, but I suspect this synth will prove unequal to your particular task, since it has a high latency (the time between pressing a key and hearing the sound).
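Latency of the kind described is just buffering delay, and the arithmetic is simple; the 100ms figure below is an assumed ballpark for the GS Wavetable synth, not a published specification:

```python
def latency_ms(buffer_samples, sample_rate=44100):
    """Playback latency contributed by an audio buffer, in milliseconds."""
    return 1000.0 * buffer_samples / sample_rate

print(round(latency_ms(128), 2))    # 2.9 ms: a small ASIO buffer, fine for playing live
print(round(latency_ms(4410), 1))   # 100.0 ms: the sort of delay that makes a synth unplayable
```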

A better solution for you might be a freeware Soundfont player, which is effectively a VST Instrument that you can use inside any VST-compatible sequencer application, in conjunction with your audio interface's low-latency ASIO driver (Soundfonts were originally sample banks in a format that was compatible with Creative Sound Blaster soundcards, but Soundfont files can now be loaded into some software synths). One Soundfont player that I can recommend is RGC Audio's freeware SFZ (www.rgcaudio.com/sfz.htm). Windows already includes a 2MB General MIDI soundbank in Soundfont format (CT2MGM.SF2, which you'll find in your C:\Windows\System32 folder), and you can load this into the SFZ player. A good source of other freeware Soundfont sample banks is Hammer Sound (www.hammersound.net).

If you want a versatile selection of more up-to-date sounds and you're prepared to spend some cash, then you may instead prefer the all-in-one approach of one of the virtual studio packages: a single software application that contains a virtual version of everything you might find in an electronic music studio, including synthesizers, sample players, drum machines, effects, a sequencer to record and play back the notes, and an audio mixer to mix them all together. Examples include Arturia's Storm (www.arturia.com), Cakewalk's Project 5 (www.cakewalk.com), FL Studio (www.fruityloops.com), and the one that started it all, Propellerhead Reason (www.propellerheads.se). Demo versions of all of these are available, and are generally fully functional except that you can't save your work.



Published October 2007

Friday, September 21, 2018

Q. Should I EQ first or compress first?

I was told recently that EQ and filtering should be done after compression, because the compression colours the EQ if done before. Is this so? And if so, why? Isn't it a good idea at least to filter out the low noise you don't want before compression?

SOS Forum Post
 
Ableton's Live Lite comes bundled with many affordable audio interfaces and is fairly well-equipped, as this screenshot shows.  

SOS contributor Mike Senior replies: There are no hard and fast rules here. A lot of it has to do with the way you work, and for subtle EQ settings I don't think it's particularly important which way around you plumb the two processors. However, in principle there's one straightforward reason why it makes sense to compress before you EQ, especially when you're first learning about processing. Let's say, for the moment, that you've already set up a compression sound you like for a particular track in your mix, and then decide to use a pre-compression equaliser to adjust the track's tonality. Any boost or cut you apply with the EQ controls will change the overall level of the signal relative to the compressor threshold setting you've already chosen, and will therefore mess with your carefully tweaked compression sound, unless you keep revisiting the threshold and/or ratio controls to compensate.

Pre-compression EQ also usually appears less responsive than post-compression EQ, as the compressor's gain changes fight the EQ gain adjustments. This can be disconcerting when you're still getting to grips with this kind of processing, and it encourages you to go for heavier processing than is actually necessary.
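That interaction can be seen numerically with a toy static gain computer (the threshold and ratio below are arbitrary illustrative values, and real compressors add attack and release behaviour on top):

```python
def compressed_level_db(in_db, threshold_db=-20.0, ratio=4.0):
    """Static compressor curve: above threshold, output rises at 1/ratio."""
    if in_db <= threshold_db:
        return in_db
    return threshold_db + (in_db - threshold_db) / ratio

# A track sitting at -10 dBFS, then the same track after a +6 dB pre-compression EQ boost:
print(compressed_level_db(-10.0))        # -17.5
print(compressed_level_db(-10.0 + 6.0))  # -16.0: the 6 dB boost emerges as only 1.5 dB
```

With a 4:1 ratio, only a quarter of the EQ boost survives the compressor, which is exactly why pre-compression EQ feels unresponsive and tempts you into heavier settings.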

As there are no hard and fast rules for the order in which you arrange EQs and compressors, many products (both software and hardware) feature dynamics-ordering switches. The controls of Metric Halo's Channel Strip plug-in (left) and Focusrite's ISA430 unit (right) are shown here.

In practice, I find that I tend to EQ after I compress for most common tonal-shaping tasks, so that I don't have to worry about the two processes interacting. If ever I find myself EQ'ing before the compressor, it's usually when I'm having problems getting the compressor to respond suitably. A common example of this is when an acoustic guitar has been recorded with a mic too close to the sound hole. A mic in this position often captures unappealing low-frequency resonances, and these can really hit the compressor hard, causing it to respond erratically to certain notes and strums and not others. Cutting those resonances before they reach the compressor makes for more transparent and natural processing; no amount of low-frequency EQ after the compressor can achieve the same thing. Another situation like this is where a singer occasionally taps their foot on their mic stand: the low-frequency thump will trigger a brief and unmusical gain dip from the compressor unless low-frequency EQ has been used to remove it first.




Published October 2007

Wednesday, September 19, 2018

Q. Should I record my vocals with more than one microphone?

By Mike Senior

There seem to be instances where several mics are used to record an instrument, such as a guitar amp or drum kit, but I've never heard of this approach with vocals. Is there a reason for this? Does anyone have any experience of recording vocals with multiple mics at different distances from the singer?

SOS Forum Post

Some interesting effects can be achieved using multiple microphones on vocalists. As ever, experimentation is the key... 
 
SOS contributor Mike Senior replies: There is one good reason why vocals aren't often recorded with multiple close mics, when guitar amps and drum kits regularly are: most singers move around a little (or a lot!) while they sing. These movements can change the relative distances between the singer and each of the microphones. If the microphones are both fairly close to the singer, it effectively means that you get two very similar recorded waveforms which keep shifting very slightly out of alignment with each other, resulting in a kind of subtle phasing effect.

If you get the mics close enough to each other you can restrict the effects of the phasing to the extreme high frequencies, but this can still pose a problem, given that high frequencies are usually so important to expensive-sounding recorded vocal sounds. It's not something you can easily correct by shifting the waveforms around in your sequencer, either, as the alignment of the recorded waveforms will constantly change as the performer moves.
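The scale of the problem can be estimated from the path-length difference: when two copies of a signal arrive with an extra half-wavelength of travel, they cancel. A sketch of that arithmetic, taking the speed of sound as 343m/s:

```python
SPEED_OF_SOUND = 343.0  # m/s, at roughly room temperature

def first_comb_notch_hz(path_difference_m):
    """First cancellation frequency when two copies of a source arrive
    with the given path-length difference (half a wavelength out of step)."""
    delay_s = path_difference_m / SPEED_OF_SOUND
    return 1.0 / (2.0 * delay_s)

print(round(first_comb_notch_hz(0.10)))   # 1715 Hz for a 10 cm sway between mics
print(round(first_comb_notch_hz(0.02)))   # 8575 Hz: closer-spaced mics push it up high
```

This is why tightly grouping the mics confines the phasing to the extreme highs, as described above.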

That said, there are some famous recorded examples of vocals recorded with two fairly close mics combined. For example, John Hudson talked about recording Gary Glitter and Tina Turner using two close mics back in SOS May 2004 (www.soundonsound.com/sos/may04/articles/classictracks.htm), although he made it clear that he used the technique primarily to gain extra control over their extremely wide performance dynamics.

Supplementing a single close mic with an ambient mic or two, however, offers a bit more potential. If the ambient mic is at least a couple of metres away, its recorded waveform will be different enough from that of the close mic that the phasing effects will tend to be much less noticeable; it will pick up a much more complex signal combining the direct vocal sound with reflected sound from the room.

Probably the best-documented example of a recording where ambient vocal mics were used is David Bowie's 'Heroes', which Tony Visconti discussed in the SOS October 2004 Classic Tracks feature. A heavily compressed Neumann U47 close mic was joined by Neumann U87s further away, the latter being gated in such a way that they only opened to give a more reverberant sound when Bowie sang loud. Michael Stavrou also talks about using this kind of technique in his book 'Mixing With Your Mind', although he suggests controlling the levels of the ambient mics with mixer automation, which allows for a bit more precision at mixdown.

Jack Douglas, engineer for many of Aerosmith's most successful albums, has also mentioned that he often combined close and ambient mics for Steven Tyler's vocals. Unusually, though, he used a Shure SM57 dynamic up close and a heavily compressed Sennheiser shotgun mic about five feet away. The SM57 not only captured part of the final sound, but was also useful for giving Tyler something to focus on, keeping his position fairly consistent relative to the shotgun mic. For similar reasons, some engineers who only record a single vocal mic will still use a second mic as a prop to keep the vocalist rooted in the main mic's sweet spot, particularly where no pop shield is being used.



Published November 2007

Monday, September 17, 2018

Q. Why doesn't live music sound good through my earplugs?

By Hugh Robjohns

I wear custom-made earplugs when I go to live gigs, to protect my hearing. These have physical (and therefore passive) filters, but I find that their frequency response is less than ideal, and it takes some of the enjoyment away from many concerts that I've attended.

Recently, I saw a guitarist perform wearing a pair of closed headphones. He used these for both hearing protection and personal foldback purposes. The combination of a good closed headphone, a set of decent microphones and an amplifier, all built into a headset, would be an ideal solution for me (although it would look ridiculous). However, I don't know if this type of 'active hearing protection' is commercially available.

Daniel Andriessen

Technical Editor Hugh Robjohns replies: The idea of hearing protection is to reduce the level (and/or duration) of noise reaching the ears, and the easiest and best way is a simple broadband attenuator fitted in the ear canals. The fit is critical, of course, to ensure that sound can't find a way in around the earplug, and although some of the generic earplugs work well, the best and most comfortable solution is to have a set of customised personal earplugs made, as you already have.

These passive earplugs, which are made by Dutch hearing-protection equipment manufacturers Alpine, can be fitted with different filters and ear-moulds.

The big problem, though, as you point out, is that in musical applications we don't want to reject as much sound as possible; we want to reduce it by a reasonable amount, while retaining a flat frequency response. This last point is critical, as most earplugs are intended for industrial applications, where the aim is to reduce ambient noise levels as much as possible, while still allowing speech to be intelligible so that wearers can communicate.

Speech requires only a relatively small bandwidth, hence the non-uniform frequency response of the common filters that you mention. Most offer much greater attenuation at high frequencies, so music sounds dull and unbalanced.

However, there are specialist companies that manufacture earplugs and acoustic filters that are intended specifically for musicians and DJs, and which maintain a broadly flat frequency response while providing a useful degree of attenuation. For example, in the UK I have found that Sensorcom (their web site is at www.sensorcom.com) are one of the best and most helpful companies when it comes to understanding the key issues and coming up with the solutions from a musician's perspective, but there are others.

Solutions like the Alpine Musicsafe product (which I use myself) employ passive designs, either with generic or custom-moulded earplugs, and a range of interchangeable filters to provide varying levels of attenuation.

I don't know the specifics of the monitoring system that was being used by the performing guitarist you saw, but I would hazard a guess that he was simply wearing closed-backed headphones to help reduce the ambient noise a little, while also having his monitor mix relayed to enable him to hear what he was doing clearly. In a really noisy environment, you could use proper industrial noise-protecting headphones for greater reduction of ambient noise. But I would be very surprised if there was any 'active noise reduction' going on here. Also, the attenuation provided by ordinary headphones won't have anything like a flat frequency response anyway, so you're not much better off!

In-ear monitors are generally intended simply to provide a clearly audible monitor mix, and to avoid the problems with floor wedges and high volumes of foldback on stage. They do attenuate background noise, but only as a side-effect of the fact that there's a great lump of plastic filling up your ear canal; and they don't attenuate noise linearly.

Westone Gennum's SD1 is a system that employs sub-miniature microphones mounted on custom-mouldable earpieces. The signals from the mics are fed through a DSP in the belt-pack (which can be configured using Mac and PC software), then fed to the user's ear. The wearer can set the amount of ambient signal they hear, while benefiting from the tight seal of the ear-mould.  

A relatively new development (check out page eight of SOS February 2007) is about as close as you'll get to your ideal solution. The Westone Gennum SD1 (www.in-earmonitor.com) is a very clever system that uses earpieces fitted with custom moulds, fed with the output from a binaural pair of microphones, which are mounted on the outside of the ear-moulds. The mics' signals are routed through a digital signal processor in the SD1 belt-pack, then sent back to the wearer's ear. This allows the wearer to control the listening level and frequency response of the signal they hear. It's expensive, though, and I've never used one, so I'm not sure what the quality would be like. While this system could be described as 'active', it isn't what is normally meant by active hearing protection.

Active hearing protection is used professionally in applications such as aircraft pilots' headsets, as well as in various consumer systems intended for listening to music while in noisy environments. But I don't think there are systems available for the kinds of application you are talking about (listening to music at live gigs). Active noise cancellation tends to be most effective for relatively constant lower-frequency noise, such as the roar of aircraft engines and train wheels. This ability to reduce LF noise through cancellation is handy because most passive solutions aren't as effective at low frequencies; but by combining the two approaches you can create a very effective hearing-protection system (albeit a bulky one). However, there is a down side, which is that active noise cancellation introduces an unpleasant phasey character to mid-range frequencies. That's not a major concern if you're flying a plane across the Atlantic, but it doesn't make listening to music a very enjoyable experience.
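Why cancellation favours low frequencies can be sketched numerically: subtracting a copy of a sine wave delayed by the system's processing latency leaves a residual of 2·sin(πfτ), which is tiny at LF and much larger in the mid-range. The 50µs latency below is an assumed figure for illustration only:

```python
import math

def cancellation_residual_db(freq_hz, latency_s=50e-6):
    """Residual level after subtracting a copy of a sine delayed by latency_s:
    |1 - exp(-j*2*pi*f*t)| = 2*sin(pi*f*t), expressed here in dB."""
    residual = 2.0 * math.sin(math.pi * freq_hz * latency_s)
    return 20.0 * math.log10(residual)

print(round(cancellation_residual_db(100.0)))    # about -30 dB: deep LF cancellation
print(round(cancellation_residual_db(2000.0)))   # about -4 dB: little help in the mids
```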

Personally, I think the most practical and effective solution is to use passive attenuators in the form of custom-moulded earplugs, but make sure they are professional designs intended specifically for musicians. They aren't perfect, and in an ideal world they wouldn't be necessary, but I've found them to be perfectly acceptable and very practical.



Published August 2007

Friday, September 14, 2018

Q. What is the difference between Passive and Active EQs?

I've never known what the difference is between a passive EQ and an active one. Could you explain?
Passive EQ: A simple cut-only EQ can be made with passive components, but will reduce the level and potentially degrade the quality of the audio. 

SOS Forum Post

Technical Editor Hugh Robjohns replies: In terms of the raw circuit elements involved, the answer is 'not much'. But the way in which those circuit components are used is radically different between passive and active equalisers.
 Most forms of audio equalisation are inherently 'lossy' processes in terms of signal level, and simple filter circuits are actually frequency-selective attenuators; they reduce the signal level above or below a frequency determined by the component values. So if you want to make a simple cut-only equaliser (as shown in the top diagram, right), it can be done quite easily with purely passive components (capacitors, inductors and resistors), all carefully chosen to provide the desired turnover frequencies and slopes. With this kind of design, power is not needed at all, but the type of equalisation that can be achieved is limited to simple high- and low-pass filters, and basic band-pass filters with gentle slopes.
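As an illustration of how those component values set the turnover frequency, a first-order RC filter's corner sits at f = 1/(2πRC). The component values in this sketch are arbitrary examples:

```python
import math

def rc_corner_hz(resistance_ohms, capacitance_farads):
    """-3 dB turnover frequency of a first-order (6 dB/octave) RC filter."""
    return 1.0 / (2.0 * math.pi * resistance_ohms * capacitance_farads)

print(round(rc_corner_hz(22_000, 100e-9), 1))  # ~72.3 Hz from a 22k resistor and 100 nF cap
```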

The loss of signal level through a purely passive filter stage is often undesirable, and the turnover frequencies and slopes may be affected by the impedances of the source and destination equipment. For these reasons it is common practice to incorporate transformers and buffering amplifiers to help guarantee consistent performance, and to compensate for losses through the filters. In this case, although the equalisation itself is still passive, power will be required for the amplifiers that are present in the circuit.

Buffered EQ: Introducing a buffer amplifier post-EQ can make up for the lost level.  
With an amplifier in the box, it obviously becomes easy to introduce gain, and that allows the design to be configured to boost frequencies as well as to cut them (as shown in the bottom diagram, below). This is achieved in a 'passive' design by configuring the filters to have a fixed loss across all frequencies in their default 'flat position', and this broad-band attenuation is compensated for by the buffer amplifier. To introduce a frequency boost, what you do is reduce the amount of attenuation through the filter at the desired boost frequency, such that the gain in the overall system turns that into a frequency boost. Many classic vintage equalisers were designed to operate in this way, and there are a great many advocates of this approach, arguing that it has sonic benefits.
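The arithmetic of that 'passive boost' trick is worth spelling out; the 12dB of fixed loss and make-up gain below are assumed figures for illustration:

```python
FIXED_LOSS_DB = -12.0   # flat attenuation through the passive network at rest
MAKEUP_GAIN_DB = 12.0   # fixed buffer-amplifier gain compensating for it

def net_response_db(filter_attenuation_db):
    """Net gain at a frequency: the filter's attenuation there plus make-up gain."""
    return filter_attenuation_db + MAKEUP_GAIN_DB

print(net_response_db(FIXED_LOSS_DB))  # 0.0: flat wherever the filter is at full loss
print(net_response_db(-6.0))           # 6.0: a 'boost' where the attenuation is relaxed
```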

The more modern approach, though, is to incorporate the equaliser components within the negative feedback loop of the amplifier itself (see the diagram, right). Negative feedback around an amplifier circuit has been used since the late 1920s to help reduce distortion and unwanted non-linearities, but it can also be used to introduce specific frequency responses in a very controlled and predictable way. This is standard practice in every commercial mixing console and most modern outboard equalisers of every kind, and it affords a great deal of flexibility and sophistication in the performance of the equaliser. The adjustable bandwidth (or Q) of parametric equalisers is really only achievable in a practical way using active techniques.

Feedback EQ: Incorporating the EQ circuitry into the negative feedback loop of the amplifier is a common approach to modern active EQ designs. However, the complications that are involved with the design can degrade the output signal. 
 One of the most important advantages of this approach is that gain is only introduced when the EQ settings demand it: there is no overall gain involved all the time, as there is in the buffered equaliser design mentioned earlier. That makes the design less noisy and improves the headroom margin.

As we all know, though, there is no such thing as a free lunch, and there are some potential issues relating to feedback equalisation. These include gain-bandwidth restrictions, limited slew rates (the maximum rate of change of the circuit's output voltage) and phase-response anomalies, all of which are dependent on the amplifier design itself. However, a competent designer using good-quality components can easily render all of these utterly insignificant for audio applications. Even so, some audiophiles cite these issues as reasons for preferring the simpler passive approaches.



Published July 2007

Wednesday, September 12, 2018

Q. How should I treat my project studio?

 
A good starting point when tackling an untreated room is to hang traps behind each speaker and to the sides of the listening position. If there's enough space, additional treatment can be added on the ceiling above the listening position, as well as in the corners, as shown here.  

I'm just about to get started on quite a big project in my new home studio. I'm a bit concerned about the acoustics of the room, so I've been re-reading Studio SOS from July 2006 where Paul and Hugh sorted Hilgrove Kenrick out with some great acoustic trapping, and I'm thinking of making some myself. I just wanted to ask if you thought it would help in my situation.

My room is 4.6m x 4.1m, with a height of 2.4m. The walls taper in at a 45-degree angle about 30cm below the ceiling. The floor is covered in thin carpet, and there is one single-glazed window about 2m x 1m in the front wall. I can record instruments and vocals in the adjoining room if necessary, to reduce noise from my workstation (an Apple Macbook running Logic Pro).

But I'm more concerned about playback and EQ. I'm unsure about the best place to put the workstation, and I'm a bit worried about the listening environment; I'm not sure whether I'll be able to hear all frequencies correctly and therefore provide good-quality mixes. Do you think the traps and foam will help, if positioned in a similar manner to the article?

Simon Greenwood

Editor In Chief Paul White replies: The traps we made for Hilgrove would certainly work in your room, and if you're willing to get your hands dirty with the manufacturing process, I'd start by making one for each side of the listening position and another pair to situate behind the speakers. As a starting point, try setting your speakers up around 18 inches from the wall and make sure they're angled towards your head, with the minimum of reflecting surfaces between you and them.

Your room's dimensions are a bit closer to a square than I'd like, but at least it is large enough that you won't need to sit too close to the centre, which is where the bass becomes most unpredictable. If you have space, add further traps in the corners of the room, again as we did in Hilgrove's studio. Alternatively you could put them between the wall and ceiling on the walls that don't have angled sections.


Logic's Test Oscillator (above) can be assigned as a source on an instrument track, then sequenced to play sine tones at semitone intervals, using the Matrix Editor, as shown below.

I wouldn't recommend using EQ to try to fix the monitoring, as it is rarely very successful. Instead, use your ears (and some test tones) to analyse the best placement for your speakers. This is a fairly laborious task, but one that's certainly worth the effort. Here's what to do.

Set up ascending sine-wave tones in semitone steps, starting from around 30Hz. As you have Logic Pro, you could use Test Oscillator, which is listed as a source on an instrument track, to generate these steps. Alternatively, you could open an instance of EXS24, which has a sine-wave tone as its default sound. You could, of course, use ES2 on an instrument track, and set up a sine wave on one of the oscillators (making sure the other two oscillators are switched off), but you must be careful to check that all modulations, LFOs, envelopes and, most importantly, velocity sensitivity controls are having no effect on the output.

All three of these options give you the flexibility to trigger the tones using a MIDI controller keyboard, enabling you to find out where the troublesome frequencies are at the point in the room where you play the keyboard. But sequencing a stepped sweep (as shown in the screenshot, bottom left) is a better option, as it allows you to listen to other parts of your room. When programming the steps, it's important to check that MIDI velocities are the same, and that notes don't overlap (if using ES2, set your sine-wave generator to 'mono' mode).
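The same stepped sweep can also be generated outside the sequencer. A minimal numpy sketch, assuming half-second tones at a fixed, arbitrary level:

```python
import numpy as np

fs = 44100
start_hz = 30.0
freqs = [start_hz * 2 ** (n / 12) for n in range(25)]  # two octaves of semitone steps

def tone(freq_hz, duration_s=0.5, level=0.5):
    """One constant-level sine tone, so every step plays back equally loud."""
    t = np.arange(int(fs * duration_s)) / fs
    return level * np.sin(2 * np.pi * freq_hz * t)

sweep = np.concatenate([tone(f) for f in freqs])
print(round(freqs[12], 1))   # 60.0: one octave above the 30 Hz starting note
```

Writing `sweep` out to a file and looping it would let you walk the room while listening for loud and quiet notes, as described below.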

Once you've set up the sequence and ensured that there are no programmed jumps in volume, listen for excessively loud or quiet notes around the room. Moving your speakers further from or closer to the walls will change this, so look for the spot that gives the smoothest response. A further piece of acoustic foam on the ceiling directly above your head in your normal monitoring position will help kill reflections from that source.


Published July 2007

Monday, September 10, 2018

Q. What are the reference levels in digital audio systems?

I came across your review of the Digidesign M Box 2 Pro, in which it is stated that a signal indicating -18dBFS should, in professional equipment, produce an output of 0dBu. Later in the piece, the reviewer stated that the actual output of the unit is -12dB, but neglects to say whether that's dBu or dBV. I would be pleased if you could confirm that this is indeed a normal specification.

Secondly, in SOS January 2007's review of the Emu 0404 audio interface, the unit's analogue outputs are specified at +12dBV balanced and +6dBV unbalanced, but there's no mention of source. I assume these are not 'operating levels', and if they are maximums, they are far from impressive. Could you clear that up too?

Lastly, I've seen it stated that recording too hot in digital systems (within, say, -3dBFS) alters the character of the sound, and that we should be operating at -18dBFS. As an electronics engineer, I cannot see that it matters where in the dynamic range you put a digital recording, since the I/O relationship is linear. But I'm willing to believe otherwise, since I am only really familiar with analogue.

Dave Macready

Q. What are the reference levels in digital audio systems?

Comparisons between traditional professional analogue console signal levels and the EBU R68 (above) and SMPTE RP155 (below) recommended digital equivalents.

Technical Editor Hugh Robjohns replies: In answer to your first question, you're correct in thinking that 0dBu is equal to -18dBFS in professional equipment. This is the European Broadcasting Union (EBU) standard alignment, officially called R68, and we have mentioned it in numerous SOS articles, particularly those concerned with A-D/D-A converters, and metering, such as the one in SOS June 2000. In America they use a different standard, known to some as RP155, where +4dBu equals -20dBFS. (This standard thus offers 6dB more headroom.) This is specified by the Society of Motion Picture and Television Engineers (SMPTE).
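Both alignments amount to a fixed offset between the dBFS and dBu scales, which is easy to capture in code. A minimal sketch (the function name is my own, not from either standard):

```python
def dbfs_to_dbu(dbfs: float, standard: str = 'R68') -> float:
    """Map a dBFS level to its analogue equivalent in dBu under a given alignment."""
    offsets = {
        'R68': 18.0,    # EBU: 0 dBu = -18 dBFS
        'RP155': 24.0,  # SMPTE: +4 dBu = -20 dBFS, so 0 dBFS = +24 dBu
    }
    return dbfs + offsets[standard]

print(dbfs_to_dbu(-18.0, 'R68'))    # 0.0  -> 0 dBu at the EBU alignment level
print(dbfs_to_dbu(-20.0, 'RP155'))  # 4.0  -> +4 dBu at the SMPTE alignment level
print(dbfs_to_dbu(0.0, 'R68'))      # 18.0 -> EBU clipping point in dBu
print(dbfs_to_dbu(0.0, 'RP155'))    # 24.0 -> SMPTE clipping point: 6 dB more headroom
```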

Regarding the information about the M Box 2, I believe the reviewer was referring to dBu measurements. He most probably discovered that the unit is calibrated to provide a semi-professional standard output level. The professional standard reference is +4dBu, while the semi-pro reference is -10dBV, and, because these two figures use different reference points, there is just under 12dB of difference between the two.

The Emu 0404 also appears to be set up as a semi-professional unit, and the figures you quote are peak levels, although they're specified in an unusual way. The figure +12dBV is about 4V and equivalent to roughly +14dBu. The implication of using the dBV reference is that the nominal level is -10dBV, and therefore suggests that the alignment is set to the equally unusual '-22dBFS = -10dBV' reference.
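The voltage arithmetic behind these figures is straightforward to verify, given the two reference voltages (0dBu = 0.775V RMS, 0dBV = 1V RMS). This sketch reproduces the numbers quoted above:

```python
import math

DBU_REF = 0.775  # volts RMS at 0 dBu (0.7746 V, from 1 mW into 600 ohms)
DBV_REF = 1.0    # volts RMS at 0 dBV

def db_to_volts(db, ref):
    return ref * 10 ** (db / 20)

def volts_to_db(volts, ref):
    return 20 * math.log10(volts / ref)

# Professional (+4 dBu) versus semi-pro (-10 dBV) nominal levels
pro = db_to_volts(4, DBU_REF)      # ~1.23 V RMS
semi = db_to_volts(-10, DBV_REF)   # ~0.316 V RMS
print(round(volts_to_db(pro, semi), 2))      # 11.79 -- "just under 12dB"

# The Emu 0404's quoted +12 dBV peak level
peak = db_to_volts(12, DBV_REF)
print(round(peak, 2))                        # 3.98 V, i.e. "about 4V"
print(round(volts_to_db(peak, DBU_REF), 1))  # 14.2, i.e. roughly +14 dBu
```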

Some equipment (such as the Digidesign 002 audio interface, pictured here) gives you the ability to change the operating level of the line-level audio inputs. This allows easy integration with both professional and semi-pro setups. 

To address your last point, about recording in the topmost part of the digital dynamic range: assuming correct dithering is taking place, the I/O relationship in a digital system should be linear, as you suggest. But headroom is still important in many digital processing applications. In the good old analogue days, a professional mixing console was designed to work with a nominal level of 0dBu or +4dBu. Peak signal levels were generally constrained by hand or limiter to about 8dB above that nominal level, maybe +12dB if people wanted to record 'hot'. But the clipping point of the console was a good +24dBu in most cases, sometimes 4dB more; that headroom was essential to allow the passage of brief, high-level transients that weren't visible on the meters.

When working with 24-bit digital systems, it makes absolute sense to maintain a very similar gain structure and headroom, for both technical and operational convenience. That means building in typically about 20dB of headroom above the nominal signal level, with most peaks reaching no higher than about -10dBFS. Only the rarest fast transients should kick up above that. When working in this way, the system noise floor will still be a good 90-100dB below the nominal level, which is directly comparable with a good-quality analogue console.
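As a back-of-envelope check of that gain structure (the 115dB converter figure below is my own assumption for a good 24-bit converter's real-world dynamic range, not a number from the text):

```python
converter_dr = 115       # dB below 0 dBFS: assumed real-world converter noise floor
nominal = -20            # dBFS nominal level, giving ~20 dB of headroom
headroom = 0 - nominal   # dB available above nominal before clipping
noise_below_nominal = converter_dr + nominal  # how far the noise sits below nominal

print(headroom, noise_below_nominal)  # 20 95
```

The 95dB result lands comfortably in the "90-100dB below nominal" range quoted above, matching a good-quality analogue console.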

Operationally, working with this kind of headroom contingency means no longer having to worry about clipping and overloads, and internal mixing and signal processing generally sounds cleaner and more analogue-like, because you are not forcing the system to rely on floating-point maths to absorb overloads (and not all systems appear to handle this correctly).



Published August 2007

Friday, September 7, 2018

Q. Should I buy a Headphone Amp?


I recently purchased some Sennheiser HD600 headphones and have been told that I'll need a good headphone amplifier to make the most of them. Currently, I use the headphone output of my audio interface (a Digidesign M Box 2) to power my cans, but it seems to distort at high levels. Do you think it would be advisable to get myself a headphone amp? If so, how would I get the signal to the headphone amp from my audio interface? I'm thinking of getting either the Presonus HP4, Samson's C Que 8, the CME Matrix Y, or the Behringer AMP800, mainly because the configuration of the inputs and outputs is suitable for my system. It's probably worth mentioning that I use a small Yamaha desktop mixer as a front-end to my DAW.

Michael Fearn

PC music specialist Martin Walker replies: Most audio interfaces provide fairly clean-sounding headphone outputs, although it can be tricky to predict how loud a particular set of phones can go through a particular headphone amp without sounding strained. According to the Digidesign web site, the M Box 2's headphone output can provide six milliwatts into 50Ω, and since most such amps provide less power into higher impedances such as the 300Ω of the Sennheiser HD600s, a dedicated headphone amp might help you gain increased level while helping your new headphones to sound as clean as possible.
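The reason is simple Ohm's-law arithmetic: a headphone output behaves roughly as a fixed voltage source, so the same output voltage delivers proportionally less power into a higher-impedance load. A quick sketch using the 6mW/50Ω figure quoted above (the constant-voltage assumption is an idealisation of a low-impedance output stage):

```python
import math

def drive_voltage(p_watts, r_ohms):
    """RMS voltage required to deliver a given power into a load: V = sqrt(P * R)."""
    return math.sqrt(p_watts * r_ohms)

def power_into(v_rms, r_ohms):
    """Power delivered into a load from a fixed output voltage: P = V^2 / R."""
    return v_rms ** 2 / r_ohms

v = drive_voltage(0.006, 50)              # M Box 2 spec: 6 mW into 50 ohms
hd600_mw = power_into(v, 300) * 1000      # same voltage into 300-ohm HD600s
print(round(v, 2), round(hd600_mw, 1))    # 0.55 1.0
```

So the 300Ω HD600s receive only about 1mW from the same output voltage, a sixth of what a 50Ω load would get, which is why a dedicated amp with more voltage swing helps.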

Sennheiser's HD600 (right: reviewed in SOS June 2002) are open-backed, reference-quality headphones. They are renowned for their wide and spacious sound, but are at their best when used with a good headphone amplifier, such as the Grace M902 pictured below.

However, the models you propose all have four headphone outputs, which is fine if you need to plug in four pairs of headphones (so that an entire band can monitor simultaneously, for example), but is not so suitable if you simply want to get higher audio quality for your single pair of Sennheisers. It sounds as though the latter is the case, so you simply need to get hold of a single stereo headphone amp and connect it between the stereo outputs of your audio interface and the inputs of your Yamaha mixing desk. By spending your money on this one headphone output you should be able to get better audio quality than spreading it across four.

I say 'should', but, unfortunately, while there are lots of handy multiple-output headphone amps at budget prices, once you start looking for high-quality, single-output headphone amps the prices tend to shoot up alarmingly. The Sennheiser HD600s are superb headphones, and some people are prepared to spend over £1000 on a headphone amp to get the very best out of them. Take the Grace M902 (www.gracedesign.com), which I mentioned in my January 2007 feature on headphone mixing (www.soundonsound.com/sos/jan07/articles/mixingheadphones.htm). It costs £1400, but is far more than a rudimentary volume control. Another widely recommended amp is Graham Slee's Monitor Class model, which costs £475 (www.gspaudio.co.uk). Of course, you may still consider this 'silly money'.

I've scoured the Internet trying to find a reasonably priced headphone amp to use with my Sennheiser HD650s, since it seems bizarre to end up paying several times more than the cost of your audio interface just to improve slightly on its integral headphone output. But while there are plenty of audiophile products with prices to match, there currently seem to be few products available to suit more modest budgets.

One possibility is Creek's OBH21 (www.creekaudio.co.uk), which retails at £190. I haven't auditioned it myself, but I know of happy HD600/650 owners using this model. Another is the Rega Ear at around £150 (www.rega.co.uk/html/ear_2001.htm), which, again, is used by many HD600/650 owners, while the Pro-Ject Head Box Mk 2 (www.sumikoaudio.net/project/products/headbox2.htm) seems a bargain at £75 (yet still wins awards for its audio quality compared with the average headphone socket found on a hi-fi amp), and will deliver 60 milliwatts into 300Ω phones, or 330 milliwatts into 30Ω models, which is a lot more than the M Box 2! What's more, it has additional line-level outputs that you could connect to your mixer.

The Pro-Ject Head Box Mk 2 is available from local hi-fi shops or can be bought on-line (in the UK) from various retailers, including Noteworthy Audio (www.noteworthyaudio.co.uk), Stone Audio (www.stoneaudio.co.uk), or Superfi (www.superfi.co.uk). Readers in the USA seem to benefit from a much wider selection of indigenous products. Visit Head Room (www.headphone.com), and you'll find a huge range of headphone amps.


Published August 2007

Wednesday, September 5, 2018

Q. What's the best way to connect my guitar to my soundcard?

By Paul White
Sequis' Motherload: Get the sound of a guitar cab running at '11' without waking the neighbours.

I'm struggling to find the answer to a really simple query: will connecting my guitar to my Marshall amplifier, selecting the undistorted channel and then connecting the amp's line output to my soundcard give me the same result as connecting my guitar to a DI box or guitar preamp, then plugging the output into my soundcard? I'm trying to connect my electric guitar to my PC so that I can use my amp-modelling software.

Bradley Howard

Editor In Chief Paul White replies: In theory there's no reason not to do the former of your two options, as long as you can adjust the level of your amplifier to stop it overloading the soundcard input. However, the sound you hear won't be the same as that of the guitar cabinet, as the speakers filter and colour the sound in a very obvious way. Clean sounds may be fine but overdriven sounds tend to be thin and buzzy with too much high end if you simply DI them. However, there are several solutions, the first being to feed the line output from the guitar amplifier into your PC via a line-level speaker-simulator box such as the Hughes and Kettner Red Box, which we briefly reviewed in SOS November 2000. Note, though, that if your amp is a tube model you shouldn't run it without either the speakers or a dummy load attached, as you could blow the output stage.

Another alternative is to DI the sound from your amp into the soundcard as you suggested, but then use your software guitar-amp simulator with just the speaker-cab simulation section switched on. This should get you back to somewhere near the miked sound of the amp. Of course, using your amp-modelling software you could do away with your Marshall altogether when recording, and instead use a cheap active DI box to match the impedance. You can pick these up from around £20. Cheaper still, if you have any guitar pedals you can use these between the soundcard and the guitar, though pedals that feature a true mechanical bypass won't act as impedance matchers in their bypass position.

Having mentioned some of the lower-budget options, I'll skip to the higher end. If you really want to capture the sound of your Marshall without terrorising the neighbourhood, a combined power soak and speaker simulator is your best bet. The most convincing one we've tried so far is the Motherload from Sequis. You can read our review in SOS July 2005 (www.soundonsound.com/sos/jul05/articles/sequis.htm). It's not cheap but it does a fabulous job.



Published September 2006

Monday, September 3, 2018

Q. Why is my Dual Tube Channel so noisy? Is it a faulty unit?

By Hugh Robjohns
Mindprint's Dual Tube Channel (DTC) remains the top choice of many professional users.

I recently purchased a Mindprint DTC and I've noticed that it produces quite a lot of hiss. It is not so obvious to the ear at first but after individually compressing instruments and playing them together, it becomes quite nasty. Also, if you are monitoring the signal from the DTC, you notice quite a large amount of noise on the analyser. I purchased the 24-bit S/PDIF module to find the same happens there. Whilst I was sold this as a mastering tool I am wondering if this is a design flaw. I have read some good reviews (including yours from SOS June 2002), but on a few web sites, similar problems are mentioned. I have done all the regular troubleshooting, such as changing cables and checking the power distribution of my system, and it all looks fine. I am wondering why my DTC is so noisy and if it is supposed to be?

Chris Frost

SOS Technical Editor Hugh Robjohns replies: This is tricky to answer without actually hearing the problem you are complaining about and knowing how you are using the product. However, the DTC remains a favourite processor of mine, and I've not had any serious noise problems when I've used it, so I would suspect either an operational problem or a faulty unit.

As it uses valves, it is possible that you have a faulty one, which could lead to excessive noise. Changing the valves is not difficult and replacements aren't expensive, but it might be worth getting the product properly checked over by a qualified technician, in case anything more serious is wrong. Unlikely, but it's always best to get it checked.

Perhaps the more likely problem is an operational one. Setting an appropriate gain structure is important to optimise the signal levels through the unit. The other thing that intrigues me is that you say: "It is not so obvious to the ear at first but after individually compressing instruments and playing them together, it becomes quite nasty". Noise will always add and build in level, so processing individual instruments with the DTC will always produce a noisier result than simply processing the final mix. But I wonder if, in fact, you are over-compressing the instruments.

Ideally, you could send a short extract of some affected material, but it may be easier and quicker to have the unit checked out or compared with another unit to make sure all is working as it should.

I've always found Mindprint to be helpful in resolving issues like this, so giving them a call might be a worthwhile step to take as well. Mindprint +49 6851 9050.


 
Published October 2006