Welcome to No Limit Sound Productions. Where there are no limits! Enjoy your visit!
Welcome to No Limit Sound Productions
Company founded: 2005
Overview: Our services include Sound Engineering, Audio Post-Production, System Upgrades and Equipment Consulting.
Mission: Our mission is to provide excellent quality and service to our customers. We offer customized service.
Wednesday, February 27, 2019
PC Notes
By Martin Walker
If you're upgrading to Windows 7, you might be considering buying a 64‑bit version to access more RAM. But will you still be able to use all your 32‑bit plug‑ins?
As I mentioned in last month's PC Notes, Windows 7 is now available, and while a new operating system is highly unlikely to make audio software run more efficiently, many musicians with ambitious sampling requirements (such as those running giant orchestral libraries) are considering buying a Windows 7 64‑bit version so that they can utilise more than 4GB of RAM.
Musicians also seem to be beseeching developers to release native 64‑bit versions of all their favourite VST instruments and plug‑ins, but I doubt that this will happen widely just yet. Few developers can justify spending all the time needed to perfect a new 64-bit release that would sound identical to the 32‑bit version, unless users are prepared to pay for it. It seems far more likely that 64‑bit versions will appear, along with new features, as chargeable upgrades.
Building Bridges
For many musicians, the lack of 64‑bit plug‑in versions may not be a problem, since most 64‑bit sequencers provide 'bit bridge' wrappers, so that you can carry on using your existing 32‑bit plug‑ins inside your new 64‑bit environment. There may be little practical advantage to having a 64‑bit version of your plug‑ins and instruments. After all, the latest BitBridge XR incarnation in Cakewalk's Sonar X64 allows 32‑bit plug‑ins to collectively access up to 128GB of RAM (the maximum supported by Vista Ultimate X64 or Windows 7).

On the other hand, Steinberg's VST Bridge for Cubase/Nuendo plug‑ins can only share a maximum of 4GB RAM (which is still significantly more than the standard 2GB of 32‑bit applications), but sadly it's not proving very compatible with third‑party products. Steinberg insist that VST Bridge is a 'transitional aid', and that it's up to all other developers to release native 64‑bit versions of all their products, but for the reasons given above I don't think this will ever happen across the board, and there are plenty of 32‑bit products (including some marketed by Steinberg) whose development has already ceased.
However, for those who do have compatibility problems with their 32‑bit plug‑ins inside 64‑bit sequencers, there's also a third‑party product that seems to be gaining a lot of fans. The demo version of jBridge (http://jstuff.wordpress.com/jbridge) can be downloaded and run for 20 minutes before it times out, and once you're happy that it runs your 32‑bit stuff, it's only 15 Euros to purchase a license. You can also use jBridge in conjunction with the dxshell wrapper available from Polac (http://xlutop.com/buzz/zip/dxshell_v1.0.2b.zip), to enable older DX plug‑ins, instruments and MIDI‑based MFX plug‑ins to be run in 64‑bit sequencer hosts, and even to run 64‑bit plug‑ins in 32‑bit hosts.
Bounce Metronome Pro
Have you ever found it difficult to play along with a standard click track? Bounce Metronome Pro (www.bouncemetronome.com) is a PC application supporting all time signatures, swing, and lots of clever stuff like paradiddles, polyrhythms and gradual tempo changes. Its secret is incorporating various visual options such as 3D bouncing balls, animated drumsticks and conductor's baton graphics. Rather like those used in karaoke machines, their rise and fall incorporates a 'gravity bounce' that feels like having your own conductor to help you keep in time.

I found it reliable enough to abandon audio clicks altogether and use as a silent metronome, which also makes it useful for deaf musicians. Studio owners could display it on screen in their live rooms to keep players in time without them requiring headphones, while a special screen reader is available to blind musicians too. It's a shame the current version can't sync to a sequencer, but I suspect that may come later. It's impossible to judge how well it works from a static screenshot, but animations and demos are available on the web site, and the full version costs just $19.90.
PC Snippets
Audiophile USB? If you have a D‑A converter connected to your PC via a USB cable, you may be interested in trying a new 'audiophile' USB driver from Aqvox (www.aqvox.de) that claims to provide a more open and transparent sound, by bypassing the Windows Kernel and substituting code of its own. Currently available for Windows 2000, XP 32‑bit, XP 64‑bit and Vista 32‑bit, the driver has a trial version you can download to judge for yourself, while the full version costs 99 Euros.

First USB 3.0 motherboard: Asus (www.asus.com) have unveiled their snappily named Xtreme Design P7P55D‑E Premium motherboard, the first to feature support for SuperSpeed USB, aka USB 3.0, which manages transfer rates of up to 10 times that of USB 2.0. It features Intel's P55 chipset, but uses a third‑party controller chip to add the USB 3.0 support.
Spyware Doctor with Antivirus 2010: This highly recommended utility is now compatible with Windows 7 as well as Vista and XP and, as you might expect, incorporates various improvements in its detection of 'nasties'. However, just as important for musicians, it also has a new optional Game Mode, which, when activated, skips scheduled scans or updates, disables alerts and pop‑ups, and sets real‑time protection to a lower level. It detects when your PC is running in Full Screen mode (which most applications do if you press the F11 key), making it ideal for audio sequencers and software, multimedia presentations, or movies where you temporarily need maximum performance with no interruptions. A year's subscription costs just $39.95 for up to three PCs (www.pctools.com/spyware‑doctor‑antivirus/).
Published January 2010
Monday, February 25, 2019
Q. Do balanced connections prevent ground loops?
By Various
I've carefully wired up my gear using all balanced inputs and outputs, and proper balanced cables, but I'm still getting occasional digital hash in the background. What have I missed?
Jamie, via email
SOS columnist Martin Walker replies: Ground‑loop problems can be absolutely infuriating, and I wrote a step‑by‑step guide to tracking them down back in SOS July 2005 (/sos/jul05/articles/qa0705_1.htm). In essence, you have to temporarily unplug all the cables between your power amp and mixer. If the noises go away, you've found the location of your problem. If not, plug them back in and try unplugging whatever gear is plugged into the mixer — and so on down the chain.
The majority of ground‑loop problems occur with unbalanced connections, so my next advice would have been to replace the offending unbalanced cable with a balanced or pseudo‑balanced version. However, as you've found, sometimes such problems occur even in fully balanced setups where you carefully connect balanced outputs of one device to balanced inputs of another via 'two‑core plus screen' balanced cables.
I recently had just such a problem in my own studio and, to make it even worse, it was an intermittent one, so whenever I got close to discovering its cause, it mysteriously vanished again. Here's what I did to track it down, so others can try some similar detective work in their own setups.
First of all, you've got to be systematic, and note down everything you try, particularly with an intermittent problem, so you don't have to start from scratch every time it occurs. In my case, I could hear the digital low‑level hash through my loudspeakers even with my power‑amp level controls turned fully down, and it also persisted when I turned off the D‑A converter box feeding my power amp. However, it completely disappeared as soon as I disconnected both cables between the D‑A output and power amp input.
These quick tests confirmed that the noise wasn't coming from the output of the converter, or from the power amp itself, but instead from a ground loop completed when the two were connected. However, just like you, I was already using balanced cables. I double‑checked the wiring of both of my XLR balanced cables and there were no errors: the screen of the cable was connected to pin 1 at each end, the red core connected to pin 2 at each end, and the blue (or black) core to pin 3 at each end. So far, so good.
Next, I double‑checked with a multimeter that there was no electrical connection between the metalwork of the two devices via my equipment rack (a common source of ground‑loop problems, and curable by bolting one of the devices to the rack using insulated washers or 'Humfrees'). Again, there was no problem.
The best wiring for balanced audio equipment is to tie the cable screen to the metal chassis (right where it enters the chassis) at both ends of the cable, which guarantees the best possible protection from RFI (Radio Frequency Interference). However, this assumes that the interconnected equipment is internally grounded properly, and this is where things can go awry. The cure is to disconnect one end of the cable screen, and the best choice to minimise the possibility of RFI is the input end (as shown in the diagram).
By this time, my intermittent problem had disappeared again, so here's another tip. I carefully cut the screen wire of one of my two cables just before it arrived at pin 1 of the XLR plug, but left the other cable unmodified. Then, the next time the ground loop problem occurred a few days later I quickly unplugged the unmodified cable, whereupon the noise disappeared immediately. This proved that I'd correctly tracked down the problem, and modifying the other cable in the same way ensured that it never happened again.
Published January 2010
Friday, February 22, 2019
Q. Is phasing affecting the sound of my double-tracked vocals?
By Various
Via SOS web site
SOS contributor Mike Senior replies: Yes, if you double‑track very closely, you'll inevitably get some phase‑cancellation between the two layers, but that's not a problem; it's an inherent part of what makes double‑tracking sound the way it does. However, the potential for phase cancellation between the parts won't be nearly on the same scale as with the two signals of a multi‑miked guitar amp, because, firstly, the waveforms of two different vocal performances will never match anywhere near as closely; and, secondly, the phase relationship between the performances will change from moment to moment, especially if you're moving around while singing. Furthermore, in practice a vocal double‑track often works best when it's lower in level than the lead, in which case any phase‑cancellation artifacts will be much less pronounced.
For these reasons, nasty tonal changes from double‑tracking haven't ever really presented a major problem for me, and if they're regularly causing you problems, I suspect you might be trying to match the layers too closely at the editing stage. Try leaving a little more leeway for the timing and see if that helps for a start — just make sure that the double‑track doesn't anticipate the lead if you don't want it to draw undue attention to itself. Similarly, try to keep pitch‑correction as minimal as you can (especially anything that flattens out the shorter‑term pitch variations), because that will also tend to match the exact frequency of the two different waveforms. In fact, if there are any notes that sound really phasey to you, you might even consider shifting one of the voices a few cents out of tune to see if that helps. Anything you can do to make the double‑track sound less similar to the lead can also help, whether that means using a different singer (think Lennon and McCartney), a different mic, or a different EQ setting. You may only need the high frequencies to provide the double‑tracking effect, and these are unlikely to phase as badly as the low frequencies.
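To put a rough number on why a quieter double‑track suffers less from phase cancellation, here's a minimal Python sketch (my own illustration, using only NumPy, and assuming a fixed time offset between the takes purely for the sake of the maths): summing a signal with a delayed copy gives a comb‑filter response whose notches get much shallower as the double's level drops.

```python
import numpy as np

def comb_response_db(delay_ms, double_level_db, freqs_hz):
    """Level (dB) of lead + delayed double-track, relative to the lead alone."""
    tau = delay_ms / 1000.0                    # time offset between the two takes
    a = 10 ** (double_level_db / 20.0)         # linear gain of the double-track
    # Summing x(t) + a*x(t - tau) has magnitude |1 + a*exp(-j*2*pi*f*tau)| at frequency f
    h = np.abs(1 + a * np.exp(-2j * np.pi * freqs_hz * tau))
    return 20 * np.log10(np.maximum(h, 1e-6))  # clamp to avoid log(0) at perfect nulls

freqs = np.array([50.0, 150.0, 1000.0])        # 50Hz and 150Hz are nulls for a 10ms offset

print(comb_response_db(10.0, 0.0, freqs))      # double at equal level: notches cancel almost completely
print(comb_response_db(10.0, -6.0, freqs))     # double 6dB down: the worst dip is only about -6dB
```

With the takes at equal level, the notch frequencies all but disappear; with the double 6dB below the lead, the deepest possible dip is only around 6dB, which is why the quieter double sounds far less phasey.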
Published February 2010
Wednesday, February 20, 2019
Q. How can I create the sound of a crowd?
By Various
I can most compare the feel I'm trying to achieve to the tracks 'Dungeness' and 'You Know' by Athlete. I've tried several overdubs of my own voice, and a few of my mates have given it a go too, but it's still not sounding right. Is it a case of literally squeezing a crowd into my living room and recording them all at once, or should I use a multitude of different tones/pitches/styles from fewer voices?
Via SOS web site
SOS contributor Mike Senior replies: If you want this kind of crowd sound, you'll get the best results if you use as many different people as possible. Overdubbing just a couple of people multiple times is very time‑consuming and is unlikely to sound that convincing. Much better to get a half‑dozen people in a room and record them all at once. You'll get more voices in less time, and the result will sound more convincingly crowd‑like because of the variations between the performers' voices.
Even with a larger handful of people, you'll still probably want to layer up a few takes to fill things out a bit, spreading them out to some extent across the stereo spectrum when you mix. If you can slightly rearrange the positioning of the performers between takes, that will also introduce a bit more variety, and you might consider changing mics, too. In case you've not already spotted it, I noticed that those Athlete songs include lower harmonies as well, which thicken the texture, so if you don't have anything like that in your song, you might want to think something up.
One practical problem you'll have to deal with, though, is delivering a cue mix to the performers, as I'm guessing that you may not have enough headphones and headphone amplifiers to give each performer their own foldback. One solution would involve first routining the parts in the control room until the performers are comfortable with what they're doing. In any group of singers, you'll find that there are one or two who lead, while the others follow, so when the time comes to record, give your available headphones to the leaders and instruct the rest of the group to follow them. As likely as not, everyone will be able to hear a little headphone spill as well, which will help timing, but if it's still a problem, get some cans on yourself and beat time in the live room.
This setup can work if your singers are fairly confident (or amply refreshed!), but the most common drawback with too few headphones is that the singers without them will feel a bit exposed without a cue mix and hence perform a bit tentatively. If this proves to be a problem, the alternative would be to use speaker‑based monitoring in the live room while recording. The difficulty there, however, is monitor spill, and although you can put the speaker in the null of a directional mic to reduce its pickup (a figure‑of‑eight mic will work best here), you'll inevitably find some of the cue mix leaking into the background of your takes.
This has two ramifications: first, you need to make sure that the arrangement of your backing track doesn't change significantly after the crowd overdubbing sessions, otherwise the spill may produce an unwanted 'ghost' of any parts that have later been removed; and second, you'll need to work with the miking distance and the monitoring level to keep the spill level within reasonable limits. Given that there's no avoiding the spill, I'd also recommend recording for long enough on either side of the vocal parts that you have some freedom to decide exactly where to fade the spill in and out at the mixdown stage. It may sound odd if the spill cuts out abruptly at the end of the last phrase, for example, rather than waiting until a song‑section boundary.
Whether monitor spill is an issue or not, I reckon you're probably better off trying to catch the sound as dry as possible, as most small‑room sounds are unlikely to aid the effect you're after. This leaves you more flexibility to simulate a larger, more crowd‑pleasing acoustic artificially. As to what effects to use, a lot of people would instinctively reach for reverb, but I think you'll probably get much closer to the sound you're after if you rely more on slapback delay. Try delay times in the region of 100ms. If you're after a slightly more aggressive tone, you might consider sending the delay's output through a guitar amp modeller as well.
Usually I find that a decent slapback does enough that you can then use reverb just for some subtler blending or to sketch in an impression of a large room size, both of which roles can actually be filled by an effect with a fairly quick decay, to avoid cluttering the mix.
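If you want to hear the slapback idea quickly outside your sequencer, the following rough Python sketch applies a single 100ms tap mixed about 6dB under the dry signal. It assumes the numpy and soundfile packages, and the file names are only placeholders for your own crowd‑vocal bounce.

```python
import numpy as np
import soundfile as sf

def slapback(dry, sr, delay_ms=100.0, mix_db=-6.0):
    """Mix a single delayed copy (the 'slapback') under the dry signal."""
    delay_samples = int(sr * delay_ms / 1000.0)
    gain = 10 ** (mix_db / 20.0)                         # level of the delayed copy
    wet = np.zeros_like(dry)
    wet[delay_samples:] = gain * dry[: len(dry) - delay_samples]
    return dry + wet

# Hypothetical file names -- substitute your own crowd-vocal bounce
audio, sr = sf.read("crowd_vocals.wav")
sf.write("crowd_vocals_slapback.wav", slapback(audio, sr), sr)
```

From there you can experiment with the delay time and level by ear, and follow it with a short reverb for blending, as suggested above.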
Published February 2010
Monday, February 18, 2019
Q. What’s the best way to organise samples and effects?
By Various
If I buy a sample library, I usually drop its contents into the 'Sample Library' folder on my hard drive, but that's ended up as rather a mess, and I don't know how best to organise it. Where should I start?
Chris, via email
PC Notes columnist Martin Walker replies: There are three main aspects of this subject to consider: location, performance and organisation. Let's discuss each one in turn.
First, given the large size of many of today's sample libraries, it makes sense to keep them all grouped together. However, don't dump them all in the same hard‑drive partition as your operating system and applications, as this partition will end up many tens of gigabytes in size, and then you're less likely to back it up regularly, which is asking for trouble. It's far safer to store sample libraries on a different partition or drive.
This approach can also help with the second aspect, performance. Even if your samples are loaded into RAM in their entirety, keeping them together on a well‑defragmented partition will minimise loading times compared with having them scattered all over the place among the OS and applications on a single huge drive. Moreover, many samplers now stream audio data in 'real time' from the hard drive, so storing them in one place avoids the drive read/write heads having to work harder darting about all over the place, potentially limiting the maximum polyphony you can achieve.
So musicians should ideally store all their sample libraries on one separate drive or partition, but if you need polyphony greater than a couple of hundred simultaneous voices, it's probably worth splitting them across two or more dedicated sample drives. This is particularly true if you're using huge orchestral sample libraries, since you can dedicate each drive to a different section of the orchestra, and they will share the streaming load, allowing greater polyphony overall.
When it comes to the organisation of your own personal sample collection, ultimately the most important aspect (as with any filing system) is that you can find what you're looking for as quickly and efficiently as possible, so you can continue the creative process rather than getting frustrated trying to track down a particular sound. How you do this is very much a personal thing, and also depends on how big your sample collection is. If, for instance, your music uses lots of individual drum hits, it makes sense to start with a folder named Drums, and within that create subfolders for Kicks, Snares, Hi‑hats, Toms, Cymbals, and so on, since this is the thought process you're likely to be having when you're searching for drum sounds. If this still leaves you with many dozens of samples within each subfolder, divide each existing folder into further sub‑categories, such as Acoustic/Electronic, Hard/Soft or Dry/WithFX, and keep refining your scheme until you feel that each folder contains a manageable number of files. Similarly, instruments can be sorted by genre (rock, jazz, metal and so on), acoustic/electronic characteristic, or according to their timbre, while Drum Loops are probably best grouped in folders sorted by tempo, and then subdivided by genre.
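Purely as an illustration of that sort of scheme, here's a small Python sketch that sorts a folder of loose drum hits into Kicks, Snares and so on by looking for keywords in the filenames. The folder names, keyword lists and path are my own assumptions, so adapt them to the way your own samples are named.

```python
from pathlib import Path
import shutil

# Hypothetical keyword map -- adjust to suit how your own samples are named
CATEGORIES = {
    "Kicks":   ("kick", "bd", "bassdrum"),
    "Snares":  ("snare", "sd", "rim"),
    "Hi-Hats": ("hat", "hh"),
    "Toms":    ("tom",),
    "Cymbals": ("cymbal", "crash", "ride"),
}

def sort_drum_hits(source_dir):
    source = Path(source_dir)
    for wav in source.glob("*.wav"):
        name = wav.stem.lower()
        # The first keyword match decides the destination; unmatched files stay put
        for folder, keywords in CATEGORIES.items():
            if any(key in name for key in keywords):
                dest = source / "Drums" / folder
                dest.mkdir(parents=True, exist_ok=True)
                shutil.move(str(wav), str(dest / wav.name))
                break

sort_drum_hits("D:/SampleLibrary/Unsorted")   # hypothetical path
```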
Such a scheme of organisation should work well for standard sample libraries, but many of the modern ones intended for specific software samplers, such as Logic's EXS24, Gigastudio and NI's Kontakt, are already highly organised by the developer into subfolders. I've reviewed such libraries, which contain hundreds or even thousands of individual files sorted into stereo/surround and high/low CPU versions, as well as sound categories. Here you're entering dangerous territory, since each preset may use several dozen associated samples, and impulse responses for added reverb. If you start shuffling files, you risk getting 'missing sample' error messages. With this type of library, I tend to leave well alone.
See the latest PC Notes column on page 150 of this issue for another idea to help you to navigate the sample and sound files on your hard drive.
Published February 2010
Friday, February 15, 2019
Finding Synth Presets; Music From Images
By Martin Walker
Find that elusive synth preset when you can't remember its name, generate music from your own images, and catch up with the latest PC news bites.
With some soft synths now offering thousands of presets, it can be incredibly frustrating to track down a particular favourite. Even those soft synths incorporating comprehensive databases organisable by genre, instrument type, or timbre, such as 'mellow' or 'spiky', can still have you stumped when all you can remember about the sound you want is that its name included the word 'Metal' or featured a sample named 'tortured' something.
Examine32 Text Search
This month, I ran into exactly that problem, and at first I thought I'd use Windows' own File Search function and its 'containing text' option to type in the word or phrase I was looking for. However, this approach proved to have limitations, so I went on the hunt for a suitable third‑party utility. After discarding quite a few, I finally came up with 'Examine32 Text Search' from UK‑based Aquila Software (www.examine32.com).
This will search through both text and binary (data) files, so you can use it to track down text inside most proprietary preset formats, such as VST FXP/FXB files, Spectrasonics' multi‑gigabyte DAT files, Camel Audio ACP presets, and many others. It also offers the option of simpler text searches (single names such as the 'Metal' example I gave above) or more complex Logical searches where you know the preset you're seeking, for instance, includes both 'Adagio' AND 'Metal', or 'Adagio' but NOT 'String', and you can launch it directly from Windows Explorer with a right‑click in the folder where your presets are stored.
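For the curious, here's a rough Python equivalent of that kind of search (my own sketch, nothing to do with Examine32's actual code): it scans preset files as raw bytes and applies simple AND/NOT logic to the search terms.

```python
from pathlib import Path

def search_presets(folder, must_contain, must_not_contain=(), extensions=(".fxp", ".fxb")):
    """List preset files whose raw bytes contain every 'must_contain' term
    and none of the 'must_not_contain' terms (ASCII, case-insensitive)."""
    wanted = [term.lower().encode() for term in must_contain]
    unwanted = [term.lower().encode() for term in must_not_contain]
    hits = []
    for path in Path(folder).rglob("*"):
        if path.suffix.lower() not in extensions:
            continue
        data = path.read_bytes().lower()       # search the file as raw bytes
        if all(t in data for t in wanted) and not any(t in data for t in unwanted):
            hits.append(path)
    return hits

# e.g. presets containing both 'Adagio' AND 'Metal', but NOT 'String'
for preset in search_presets("D:/Presets", ["adagio", "metal"], ["string"]):
    print(preset)
```

The folder path and extensions are only examples; a real tool like Examine32 handles other encodings and formats far more robustly.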
I've found Examine32 Text Search incredibly useful over the last few weeks, and with a bit more detective work I used it for more specialist searches, such as isolating all the presets in a library that use MIDI aftertouch (very handy if you own a keyboard with this feature), and those created by a designer whose sounds I like. You can use the handy 'Save Search' function to preserve such exotica for future use. The demo runs for 30 days, so you can judge for yourself how useful it is with your own sound libraries, although a single‑user license costs only around $45.
SpectroBits
Over the years, various academic PC utilities have been released that convert bitmap images into audio files, but most do so off‑line, so you have to click a 'Render' button and wait a few seconds each time you edit one of your pictorial extravaganzas, before you can hear how it 'sounds'.

However, one that's rather more immediate is SpectroBits from Japanese developer g200kg (www.g200kg.com/index_e.html). Described as a 'spectrogram‑based synthesizer', and entered for the KVR Developer Challenge 2009, it's unusual in being a VST Instrument that runs inside any VST‑compatible host. This means that its base note can be controlled in real time from a MIDI keyboard, which makes things far more interesting for the performer!
You can either load in existing picture files in BMP, JPEG, PNG or GIF formats to see how they sound, or create your own from scratch using the selection of bitmap‑editing tools provided, including various sizes, densities and colours of pens, airbrushes and lines. The main window always displays your current image, but over this you can superimpose the audio spectrogram or waveform display as it scrolls in real time.
As well as various controls that interpret the image in different ways before converting it to audio, SpectroBits provides a synth engine with Mono/Poly option, a simple Attack/Release volume envelope, and delay, reverb and chorus effects. Most audio host applications will, however, let you capture your real‑time performance as it happens, which means that you can generate long, evolving soundscapes with your MIDI performance and brush strokes, and then edit and further treat them later on (or just cut out the best bits). Anyone interested in spectral synthesis should love this, especially as it's freeware!
PC Snippets
Windows 7 Drivers: Microsoft's latest operating system is already proving popular with musicians, although there are still quite a few hardware audio devices that don't have suitable drivers, preventing some from making the transition. The biggest casualties are eight‑port MIDI interfaces, and one of the few models I tracked down that advertises Windows 7 drivers is ESI Audio's M8U XL (www.esi‑audio.com/products/m8uxl).

Entangled Species: AAS have released a bank of 128 presets for their expressive String Studio VST Instrument. 'Entangled Species' is created by Canadian composer and sound designer David Kristian, and as well as the expected warm pads, deep drones, and expressive solo strings, this library also pushes String Studio way beyond its comfort zone, offering fractured scrapes and tortured clusters that sound like escapees from an Alfred Hitchcock soundtrack. The CPU load can occasionally be high, but I was amazed at some of the new sounds David had coaxed from the AAS physically modelled engine. Entangled Species is something really special, especially at the bargain price of $39 from the on‑line AAS shop (www.applied‑acoustics.com).
Fatter Platters: Those who need vast quantities of drive space for audio and video storage will be pleased by the latest jump to higher density 500GB platters. Seagate (www.seagate.com) were the first to release a two‑platter 7200rpm hard drive offering 1TB of storage (the Barracuda 7200.12), but now Western Digital (www.wdc.com) have taken the lead with their new Caviar Black 2TB hard drive, featuring four platters. Both offer maximum sustained transfer rates of about 130MB/second, which is enough to run a huge number of simultaneous audio tracks of sample voices.
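As a rough sanity check on that last claim (my own back‑of‑envelope arithmetic, not a manufacturer's figure), a mono 24‑bit/44.1kHz stream needs a little over 0.1MB per second, so a sustained 130MB/s could in theory feed several hundred such voices; real‑world seek overhead and other disk traffic will reduce that considerably.

```python
# Rough streaming-throughput estimate (illustrative arithmetic only)
bit_depth = 24                                    # bits per sample
sample_rate = 44_100                              # samples per second
bytes_per_voice = bit_depth / 8 * sample_rate     # ~132,300 bytes/s per mono voice

drive_throughput = 130e6                          # ~130MB/s sustained, as quoted above
print(int(drive_throughput / bytes_per_voice))    # ~980 voices, ignoring seek overhead
```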
Published February 2010
Wednesday, February 13, 2019
Q. How do I know a mic is worth the money?
By Various
What differences can you hear when comparing inexpensive and expensive equipment? As I do a lot of vocal recording, I'd like to splash out on a really good microphone. But how can I be sure that an expensive microphone is worth the money? What am I listening for?
Sarah Betts, via email
SOS Technical Editor Hugh Robjohns replies: The benefits extend far wider than just the sound, but basically you're listening for an improvement over your current mic, and you then need to decide if the price justifies that improvement, bearing in mind the law of diminishing returns. Going from a £50 mic to a £200 mic will usually bring about very obvious sound improvements. Going from £200 to £1000 will bring smaller improvements, which may not always be obvious. And going from £1000 to £5000 will bring smaller benefits still. Some people will believe the improvements are worth the expense, others won't!
However, you'll know immediately and quite instinctively when you find a mic that is well suited to your voice, and that doesn't always mean the mic needs to be expensive. If you're looking for a general-purpose mic, expensive usually equates to increased flexibility in use. But if it's a mic that will always be used on your voice and nothing else, finding a mic that suits your voice is the prime directive.
Sonic fidelity or accuracy is generally an expensive thing to engineer into a microphone, and the most expensive mics are generally pretty accurate. But recording vocals is rarely about accuracy. It's more to do with flattery, and different voices need to be flattered in different ways. When working with a new vocalist, I'll usually try a range of mics to see which one works best with their voice. Sometimes the most expensive mic gives the best results, but it's equally likely that it will be a less expensive model. U2's Bono famously records his vocals using a Shure SM58, and he seems happy with the results!
But, as I said, there's more to an expensive mic than just the sound. More expensive mics tend to be built to higher standards. They tend to include internal shock-mounting for the capsule, to reduce handling noise. They are thoroughly tested to comply with the design specifications and provide consistent results. Being better constructed, they tend to have longer working lives and can be maintained by the manufacturer relatively easily. They also generally deliver a very usable (although that might not necessarily equate to 'the best') sound whatever the source, without needing much EQ to cut through in the mix.
Less expensive mics often sound great on some things but terrible on others, often needing a lot of EQ to extract a reasonable sound within a mix. Often they're less well manufactured, which reduces their working life expectancy and, once broken, can rarely be repaired.
Published March 2010
Friday, February 8, 2019
Q. How do I tune my kick-drum samples to fit with my song?
By Various
It was recommended to me that I tune kick‑drum samples to the key of my song, so I used a tuner and an analyser on my stereo master and soloed the kick-drum track. The tuner said that it was playing an Eb at 78Hz. As my song was in Bb, I used a pitch‑shifter to tune the kick drum down to Bb, but it sounded really bad. What's the right way to go about this in a digital environment? I appreciate that I could just use my ears, but I'm interested in it from a technical perspective.
Via SOS web site
SOS contributor Mike Senior replies: The discussion of drum tuning tends to receive most coverage with regard to live kits and band recording, but the same kinds of issues also apply for kits in hip‑hop, R&B, and indeed any other style based heavily on programmed drum samples. If any drum sample has a prominent pitched element to its sound, there is the potential for that pitch to conflict with the harmonies of the production as a whole, and if there's a clash, the drum sound in question won't tend to blend as well with the mix. This isn't necessarily a problem — and in some cases it can help the drum sound poke out of the mix and remain more up‑front. However, if you do match the drum sample's pitch to a note that fits the track, the pitched element will be masked more effectively by the other instruments, and so becomes less noticeable. This means that you can often mix the drum higher in the balance, thereby emphasising the noisier elements of the sound and making the drum feel punchier. So, to some extent, drum tuning is an artistic decision as much as a technical one. With kick-drum samples specifically, however, there is the added issue that many powerful‑sounding electronic kick drums (most notably the Roland TR‑808 kick) incorporate a very prominent, pitched, sub‑bass tone, and this will almost always make a mess of your song's harmonies if it doesn't fit in with the key, so it's usually safest to tune it to the key note.
Pitch‑shifting isn't the way to do this, though, because even the best pitch‑shifters tend to compromise the attack of percussive sounds. It's much better to simply speed up or slow down the sample's audio as a traditional sampler might do. There's a way to do this with the audio editing tools in most sequencers as well, but failing that you could import the sound into a dedicated hardware or software sampler and adjust it from there. Yes, the tone of the drum will change, but you won't get the kinds of nasty, flammy artifacts you'll get from a pitch‑shifter. I also wouldn't trust a tuner to reliably report the perceived pitch of a drum sound. Pitch‑detection algorithms are pretty good, but they won't always agree with what you're actually hearing. Trust your ears.
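To make that concrete for the Eb‑to‑Bb example in the question, here's a short Python sketch (assuming the numpy and soundfile packages and a hypothetical mono kick.wav) that works out the shift from 78Hz down to the Bb five semitones below, and repitches the sample by resampling, just as a traditional sampler would, rather than by pitch‑shifting.

```python
import numpy as np
import soundfile as sf

source_hz = 78.0                               # measured pitch of the kick (roughly Eb2)
target_hz = source_hz * 2 ** (-5 / 12)         # Bb sits five semitones below Eb (~58.4Hz)
ratio = target_hz / source_hz                  # playback-speed ratio, about 0.749

kick, sr = sf.read("kick.wav")                 # hypothetical mono kick sample
new_length = int(round(len(kick) / ratio))     # slower playback means a longer sample

# Read the original at 'ratio' speed via linear interpolation, which lowers
# the pitch (and stretches the sound) the way a traditional sampler would
retuned = np.interp(np.arange(new_length) * ratio, np.arange(len(kick)), kick)

sf.write("kick_Bb.wav", retuned, sr)
```

Note that, as described above, the whole envelope slows down along with the pitch, so expect the character of the kick to change slightly.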
The simple answer to the question of how the lowest frequencies of the kick combine with those of the bass is: they don't! In almost all cases, one of them gives way to the other to avoid exactly the kinds of problems you're anticipating. With 808‑style kicks, you may even want to high‑pass filter the bass line to keep low‑end sludginess at bay. In the case where the bass has the sub frequencies, the kick can often be surprisingly light sounding, but you usually won't notice because it never actually appears in the arrangement without the bass.
Published March 2010
Wednesday, February 6, 2019
Q. How can I achieve clarity in my mixes?
By Various
I have noticed that, when matching up my tracks with more professional productions, my songs are lacking a certain clarity. For example, I almost always have to turn up the high EQ on my mixer just to get the highs sounding more prominent when matched up with other tracks. (You can hear some examples of my tracks at www.bigcontact.com/mikebeeds.) I'm guessing it could be a number of things. For one, my monitors aren't the best in the world, so would you suggest that I get monitors with more dynamic range? Some people have suggested that the cone size of the speaker (five‑inch versus eight‑inch) can make all the difference when hearing the mix. Or could it be that I am not using a high‑pass filter? At this point, I have no idea how to use this filter, but I've read numerous times in tutorials and books that it's very useful for giving a track more clarity. Any help would be greatly appreciated!
Via SOS web site
SOS contributor Mike Senior replies: Clarity is one of the hardest things to achieve when mixing, because it derives from a complex combination of many different arrangement, audio‑editing and processing techniques. You're right in thinking that your monitoring can make a difference, but I don't think that a change from five‑ to eight‑inch drivers will make an enormous impact on the quality of your mixes, to be honest. If you've got some money to spend, put it into acoustic treatment first, including bass trapping, and then maybe look to upgrade your monitors to something more full‑range — perhaps something with a separate subwoofer. (There was lots of information about DIY acoustic treatment back in our special acoustics issue in SOS December 2007, if you need some advice on that front.)
Other than monitoring issues, there are a few quick tips I can suggest from listening to the tracks on your site. You already mentioned high‑pass filtering, a mixing process that effectively just cuts away the low end of any given track below a specified frequency, and that is indeed a very useful tool for clearing out the kind of clutter from the low end and mid‑range of your mix that tends to lead to poorly defined sounds. In fact, if you're working on small monitors in a domestic environment without any acoustic treatment, it's not a bad idea to high‑pass filter everything except the lowest bass instruments, raising each track's filter in turn while listening to the whole mix and aiming to go as high as you can without losing anything important. That way, you're sure to scotch any rogue sub‑bass information that might otherwise eat up your mix headroom. I reckon you should probably pay particular attention to your pad sounds, as these are very full‑sounding in the tracks I've listened to, and seem to be the prime culprits. If you want them to be that warm‑sounding to start with, all well and good, but when more interesting things come along in the mix, start rolling off some low end from the pads.
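For anyone who wants to see what that filtering actually does to the audio, here's a minimal offline sketch using SciPy's Butterworth high‑pass filter. The 120Hz corner frequency and file names are only illustrative; in practice you'd simply use the filter in your sequencer's channel EQ and raise the frequency by ear, as described above.

```python
import soundfile as sf
from scipy.signal import butter, sosfilt

def high_pass(audio, sr, cutoff_hz=120.0, order=2):
    """Remove energy below cutoff_hz (order=2 gives a gentle 12dB/octave slope)."""
    sos = butter(order, cutoff_hz, btype="highpass", fs=sr, output="sos")
    return sosfilt(sos, audio, axis=0)

# Hypothetical pad bounce -- raise the cutoff until nothing important disappears
pad, sr = sf.read("pad.wav")
sf.write("pad_hpf.wav", high_pass(pad, sr), sr)
```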
Your bass is another instrument that is reducing clarity, because it's quite thick‑sounding in the 200‑600Hz region, where it's overlapping a great deal with the pads and various other instruments. A well‑placed EQ cut could work wonders there. If the bass isn't then audible enough in the mix, you could try layering a second, quieter but brighter synth sound an octave higher over the bass, to add more higher mid‑range to the line, and that should get it to cut through more.
You're using a lot of reverb in the mixes to give a lush sound, and although this is perfectly in keeping with the style, it can make it difficult to keep individual sounds distinct. First off, don't bother adding much, if any, reverb to the pads, as pads rarely need reverb and it'll take up too much of the space in your mix. Where you do use reverb on any instrument, take some time to EQ the reverb return in each case, at least high‑pass filtering in order to keep the low end from sludging up. There's usually little to be gained from adding reverb to low‑frequency instruments such as bass and kick drum anyway, but even the more mid‑range instruments will send enough low frequencies into your reverbs to cause problems. Finally, I'd also experiment with swapping some of the reverb for tempo‑synchronised delays, which can sound equally lush (and also nicely rhythmic), but which won't obscure other instruments as readily.
Published August 2009
Monday, February 4, 2019
Q. Can I use vintage hi-fi for monitoring?
By Various
There's a lot said on discussion fora about the need to use studio monitors instead of hi‑fi speakers, although there are some high‑end models that are intended for both markets. I can understand why the cheap boxes you get for most 'hi‑fi' devices wouldn't be good for the job of studio monitoring, but I have been listening more and more on a vintage pair of three‑way KEFs that I picked up for £80, via a Rotel RA611 amp that cost me £30. The clarity and separation are excellent and the transient response seems to be good: there's very little by way of bass overhang. I seem to be able to 'see' into the mix as well as I can with my studio monitoring system — although there's more focus (as opposed to detail) on the mid‑range in the studio system.
This got me wondering whether one might not be better off spending a limited studio budget on some choice vintage hi‑fi rather than on new speakers. Am I missing some obvious pitfall? For £250 you could get a couple of very nice sets of KEFs these days.
Rex Baines
SOS Technical Editor Hugh Robjohns replies: Assuming we're talking about the really high‑quality end of the vintage hi‑fi speaker market, I'd suggest the only significant pitfall is in the word 'vintage.'
It is certainly true that the high‑end, high‑quality hi‑fi from yesteryear (particularly the prestige British brands like KEF, Harbeth, Quad, Meridian, and many others) set standards of reproduction that some budget 'pro' monitors fail to meet even today. And as you say, there are often serious bargains to be found.
A good three‑way, high‑end, vintage hi‑fi speaker is also very likely still to outshine an average budget, active, two‑way monitor in terms of detail and clarity. So I'm not really surprised to hear your favourable comments about the KEFs.
However, whereas professional studio monitors are normally intended to be as 'transparent' and accurate as possible, it should be remembered that some hi‑fi was (and is) deliberately designed to flatter. Arguably, this is less of an issue with the higher‑cost, higher‑quality hi‑fi speakers, and traditionally the classy British manufacturers have mostly tended to aim for accuracy and sonic neutrality anyway — the Quad electrostatic speaker being a prime example.
Then there is the issue of listening levels. Most hi‑fi speakers aren't designed to cope with realistic reproduction levels of kick drums and bass guitars, nor extended playing time at elevated levels. In contrast, professional monitors are — or should at least have electronic protection to enable them to survive such demanding use! Again, high‑end vintage hi‑fi speakers may well be more robust than their cheaper siblings, and a home studio environment will tend to require generally lower levels than a professional studio too, but I think this is an area where a degree of caution should be exercised.
And then we come to that term, vintage. Loudspeakers are electro‑mechanical devices, and they will wear out over time. Drive‑unit suspensions will degrade and deteriorate, sealing gaskets may harden and allow noisy air leaks from the cabinet, capacitors may leak and change value, with corresponding crossover anomalies, and the wiring connections may tarnish and corrode, leading to distortion. In most cases, these things can be fixed relatively easily, should they occur, but clearly they should be taken into consideration when looking to buy vintage speakers for use as monitors in a home studio.
Published August 2009
Friday, February 1, 2019
Q. What’s the best way to connect gear digitally?
By Various
I currently have an Emu 1820M audio interface and a Line 6 Pod XT Pro modelling guitar preamp, and am about to get a PreSonus Digimax FS eight‑channel mic preamp. The thing is, I haven't got a clue as to how you would connect all these devices together, or even if it is possible.
Via SOS web site
SOS contributor Martin Walker replies: It's often possible to connect gear in several ways, but there will nearly always be a 'best' way that should ensure the highest audio quality. First of all, avoid unnecessary A‑D and D‑A conversions.
With your gear, for instance, if you simply connected your Pod analogue out to one of your 1820M analogue inputs, this would pass the signal through the D‑A converter in the Pod and then an A‑D converter in the Emu, which would compromise the signal slightly. It's far better to connect the S/PDIF digital output of your Pod to an S/PDIF input on the Emu, bypassing these two conversion stages. The PreSonus Digimax FS also provides separate analogue outputs for each of its eight mic preamps, but once again you're better off connecting the PreSonus ADAT output to the Emu ADAT input, rather than tying up every Emu analogue input!
Next, with any combination of digitally connected devices you have to decide which should be set to its 'internal' clock setting, and thus provide the master clock signal, via its S/PDIF, ADAT or AES/EBU outputs, to the other devices. These, in turn, should all be set to 'external' clock and become 'slaves', locking everything together in perfect digital sync.
Choosing the device to provide the master clock is the key to achieving the best audio quality. Theory states that you should always choose as master the device whose clock offers the lowest jitter levels, which in this case would be the PreSonus, with its JetPLL jitter‑reduction technology. However, in practice this choice is often more complicated.
The two most critical points, as far as digital clocking is concerned, are the conversion stages. When analogue signals are converted to digital by the A‑D converter during recording, any digital 'shaking' (jitter) will result in a permanently 'blurred' recording that can't be corrected or improved later on. And when digital audio is converted back to analogue, so we can hear it through loudspeakers or headphones, any further digital shakiness will blur existing recordings.
Like many budget interfaces, Emu's 1820M works rather well on its Internal clock, but its jitter levels increase if you switch to external clock, however good that external clock is. So for best results when using a budget interface, you should generally allow it to be the master device during playback and when recording through its analogue inputs.
The beauty of the jitter‑reduction technology featured in the PreSonus Digimax FS (and various other devices) is that even when slaved to the Emu's more jittery clock it will nevertheless significantly reduce its jitter levels on the way in, so your mic recordings will still sound pristine. To do this, just connect a cable between the Emu S/PDIF or ADAT output and the corresponding digital input on the PreSonus (it doesn't matter which, since only the embedded clock signal is being utilised). The Pod XT manual doesn't mention jitter reduction, so when recording guitar you should probably switch that to be the master device and the Emu to slave.
Whenever you're faced with several clocking choices, try each one in turn and listen carefully. The one offering lowest jitter should provide a stereo image that's both wider and deeper; you should notice more 'air' and space in recordings, and you should also be able to hear further into the mix, with distant sounds revealed better.
Published August 2009