Welcome to No Limit Sound Productions

Company Founded
2005
Overview

Our services include Sound Engineering, Audio Post-Production, System Upgrades and Equipment Consulting.
Mission
Our mission is to provide excellent quality and service to our customers. We offer customized service.

Friday, November 30, 2012

Haydn: Orlando Paladino / Harnoncourt · Berliner Philharmoniker

Q: What is the difference between TDM and RTAS plug-ins?

I have a Pro Tools HD3 system, so I have a question for you. What is the difference between TDM and RTAS plug-ins in Pro Tools? And how big is the difference, in percentage terms?

By David Mellor, Course Director of Audio Masterclass

(This response concerns Avid Pro Tools systems. The concepts however are applicable to all DAWs.)

If you have a Pro Tools HD3 system, then clearly you have a lot more money to spend than most recording enthusiasts. Or rather you did have a lot more money until you spent it!

The difference between TDM and RTAS plug-ins is that TDM plug-ins run on dedicated DSP cards installed in your computer, while RTAS plug-ins run on the computer's own processor.

The main advantage of TDM plug-ins is that they have their own dedicated processing resources, independent of most of the functions of the computer. RTAS plug-ins have to share processing power with whatever else the computer is doing.

TDM processing power comes in nice neat blocks, so when you have selected a plug-in for a track, you will always have enough processing power to run it. Since RTAS plug-ins share processing resources, there may be times when sufficient resources are not available. Recording or playback will stop.

With TDM plug-ins, you can easily see when all of your resources are allocated. Even when fully allocated, things will run totally smoothly. With RTAS, you're never quite sure whether you are coming up to the limit, or risk going beyond it. Not until recording or playback stops anyway.
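
To picture the difference, here's a toy model in Python. I must stress that the chip counts, slot sizes and CPU figures are invented for illustration; this is not Avid's actual resource accounting:

    # Toy model only: the chip counts, slot sizes and CPU figures are invented
    # for illustration and are not Avid's real resource accounting.

    DSP_CHIPS = 9          # a TDM rig has a fixed pool of DSP chips
    SLOTS_PER_CHIP = 4     # each chip offers a fixed number of plug-in slots

    def tdm_insert(plugins_in_use):
        """A TDM insert either fits in a free DSP slot or is refused up front."""
        if plugins_in_use < DSP_CHIPS * SLOTS_PER_CHIP:
            return "plug-in inserted - guaranteed to keep running"
        return "insert refused now, but nothing already running is disturbed"

    def rtas_playback(plugin_cpu_percent, other_system_load_percent):
        """RTAS shares the host CPU, so headroom depends on what else is running."""
        if plugin_cpu_percent + other_system_load_percent > 100:
            return "CPU overload - playback stops"
        return "playback continues (for now)"

    print(tdm_insert(plugins_in_use=35))
    print(rtas_playback(plugin_cpu_percent=70, other_system_load_percent=40))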

Although there are other differences between TDM and RTAS plug-ins, they hardly make any difference in practice. If you can hear any difference in sound quality between the same plug-in in its TDM and RTAS versions, then you have the ears of a super-hero.

So if you want ease of use and can afford it, TDM is the way to go. If you can't afford it, then it will have to be RTAS.

It's hard to quantify the difference in terms of percentage. People who are light users of plug-ins won't notice the difference, so zero percent. People who use a lot of plug-ins will find TDM a tremendous relief from frustration. I would rate the benefit at 1000% or more!

By the way, there are alternative systems that use processing resources other than the computer's central processor. They can be just as good as TDM in providing certainty of resource availability.

 
Publication date: Monday August 09, 2010
Author: David Mellor, Course Director of Audio Masterclass

Korg All Access: Aaron Draper Preps for Dr. Dre's Coachella Set with his Wavedrum

Thursday, November 29, 2012

What happened to MIDI? Where did it go?

 We used to be in love with MIDI. But you hardly hear of it these days. Has it gone away, or is it just keeping quiet?

By David Mellor, Course Director of Audio Masterclass

MIDI was a revolution in music and audio. Before MIDI, which means before 1982, every musical instrument manufacturer had their own way of connecting their equipment together.

So you could connect Yamaha to Yamaha, Roland to Roland. But if you tried to connect Yamaha to Roland you would come unstuck.

Once MIDI (Musical Instrument Digital Interface) had taken off however, you could connect anything to anything.

This allowed interoperability between just about any musical equipment you could desire to own.

But something else much more important happened...

The MIDI sequencer was invented!

The concept of the MIDI sequencer is that you can record key presses from a musical keyboard into a computer. Then you can use that to build up several tracks, each controlling a MIDI sound generator that would create the audio live from the note data.

The sound generator would be a synthesizer or sampler, or perhaps a module whose sounds were based on samples although you couldn't add your own.

So in the 1980s and 1990s everything was MIDI-this, MIDI-that, MIDI-MIDI-MIDI...

You couldn't get away from it.

But we don't talk much of MIDI anymore, so what happened to it?

What happened to MIDI was first that MIDI sequencers acquired audio recording functions.

Loop-based music was already popular. Loops were recorded into samplers and triggered by notes recorded into MIDI sequencers.

But with an audio sequencer, you could create a loop directly in a track without the need for a sampler. So MIDI wasn't required.

Loop-based music became incredibly popular, and people would add drums and guitars to loops. Single-sample sounds could be positioned on a track wherever you wanted, once again without the need for a sampler.

And where you needed several sound modules, or a multi-timbral module, to create audio from a MIDI sequencer, you could just play keyboard sounds directly into your audio sequencer.

So gradually almost everything MIDI disappeared from the studio.

Except that it didn't disappear, it went underground.

So these days you probably don't connect your music keyboard to your computer through a MIDI cable and MIDI interface; you connect it with a USB cable.

It isn't actual MIDI data flowing from your music keyboard to your computer, but it is very MIDI-like in structure...

Just as MIDI had note-on and note-off messages (and no such thing as a 'note-continue'), these same types of messages are used today.

You can play software instruments directly from your USB-connected music keyboard and record their audio output. Or you can record MIDI tracks that contain note data but no audio; these MIDI tracks will drive software instrument tracks.

These MIDI tracks contain data that is exactly like MIDI - which notes you played, how hard you played them, modulation wheel and pitch-bend data etc. All corresponding directly to the MIDI of old.
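
If you're curious what that 'under the surface' MIDI data actually looks like, here's a minimal sketch in Python. The byte values follow the long-established MIDI 1.0 message format, but the note number, velocity and controller values are just arbitrary examples:

    # Raw MIDI 1.0 channel messages: one status byte followed by data bytes.
    # The values here (channel 1, middle C, velocity 100) are arbitrary examples.

    NOTE_ON  = 0x90   # note-on, channel 1
    NOTE_OFF = 0x80   # note-off, channel 1 (there is no 'note-continue' message)

    middle_c = 60     # MIDI note number
    velocity = 100    # how hard the key was struck (0-127)

    note_on_msg  = bytes([NOTE_ON, middle_c, velocity])
    note_off_msg = bytes([NOTE_OFF, middle_c, 0])

    # Controller and pitch-bend data travel in just the same way:
    mod_wheel  = bytes([0xB0, 1, 64])        # control change, CC#1 (modulation), value 64
    pitch_bend = bytes([0xE0, 0x00, 0x40])   # pitch bend, 14-bit value centred at 8192

    for name, msg in [("note on", note_on_msg), ("note off", note_off_msg),
                      ("mod wheel", mod_wheel), ("pitch bend", pitch_bend)]:
        print(name, msg.hex(" "))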

It could be that we are overdue for a revolution, and something much better than MIDI could be invented now. However, the legacy of the past is deeply entrenched and it is likely that we will be using this 'under the surface' MIDI for decades to come.

Publication date: Friday January 29, 2010
Author: David Mellor, Course Director of Audio Masterclass

ACID Music Studio 9

Q: Why is one channel always higher in level than the other?

 A RecordProducer.com reader finds himself always with the left meter showing higher levels than the right. Why is this?

By David Mellor, Course Director of Audio Masterclass
 
I was once in a mastering studio with a very experienced mastering engineer. It wasn't my session and I was just observing. I kept quiet so that the expert could do his job, but I couldn't help noticing that, one track after another, he always kept the left meter a couple of dB higher in level than the right.

At the end of the session I couldn't resist asking him why this was so.

"I just like it that way." was his reply.

I can think of all sorts of reasons why this isn't the right thing to do. But I don't make my living day-in-day-out from mastering, with a stream of high-paying clients coming steadily through the door.

But suppose you find yourself doing this, automatically and almost without thinking, like the reader who asked the question. Why should this be so?

Well indeed, you might just like it. If you do, and you are sure of what you are doing, there is nothing holding you back other than the needs of your clients and your potential market.

But if it is just happening for no particular reason, it indicates one of two possibilities...

The first is that there could be something wrong with your equipment or monitoring surroundings. These days, it is very unlikely that anything that comes before the monitor outputs of your interface could be causing the problem.

In the 'olden days' of audio, it could be a hundred things, including inaccurate meters. But that wouldn't happen now.

No, the problem is likely to be downstream of your monitor outputs - your power amplifier or your speakers.

And it could be your acoustics. Something about the shape of your room is emphasizing one channel over the other. It is always best that your mixing room is symmetrical about a line projecting out from halfway between your speakers.

The simple way to check your equipment is to swap everything around from left to right. If the problem changes channel, then you can quickly home in on the cause.

But suppose your equipment is perfect? Then it could be your hearing. Probably no-one has perfectly symmetrical hearing and it could indeed be the case that one of your ears is a couple of dB down on the other, and you are setting your faders and pans to compensate.

Once again there is a solution. Rig up a switch that can instantly swap the two monitor channels. If you set your faders and pans so that the relative 'weight' of the sound is pretty much the same either way, you can be confident your stereo balance is fine.
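
If you'd like an objective cross-check to go with the listening test, a few lines of Python will report how far apart the two channels of a mix really are in average level. This is only a sketch; it assumes you have the numpy and soundfile libraries installed and a stereo WAV file of your mix (the file name below is a placeholder):

    # Rough check of the left/right 'weight' of a stereo mix.
    # Requires numpy and soundfile; "my_mix.wav" is a placeholder file name.
    import numpy as np
    import soundfile as sf

    audio, sample_rate = sf.read("my_mix.wav")   # shape (samples, 2) for a stereo file

    def rms_db(x):
        return 20 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-12)

    left_db = rms_db(audio[:, 0])
    right_db = rms_db(audio[:, 1])
    print(f"Left:  {left_db:.2f} dBFS RMS")
    print(f"Right: {right_db:.2f} dBFS RMS")
    print(f"Difference: {left_db - right_db:+.2f} dB (positive = left louder)")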

That could be the end of my article, but I have one more point...

A lot of people would simply look at the stereo output meters and assume that if they were balanced, then everything is fine. But the correct way to judge a mix is on what you hear, not on what you see. Meters are useful, but for anything other than clip indications they can never be more than a guide.
Publication date: Monday October 31, 2011
Author: David Mellor, Course Director of Audio Masterclass

Wednesday, November 28, 2012

Haydn: Symphony No. 100 "Military" / Schiff · Berliner Philharmoniker

Q: What is groove in MIDI?

 A Record-Producer.com reader asks a simple but interesting question. Musicians can groove, but can MIDI groove too?

By David Mellor, Course Director of Audio Masterclass

First, a very quick primer on MIDI...

MIDI isn't quite all the rage that it once was. In fact some people who are otherwise in very good control of their DAW software don't really know what it is.

I could make this a history lesson, but that would be long and tedious, so to explain quickly and simply, when you record a virtual instrument track, it isn't the sound of the instrument that is recorded. Instead, a recording is made of the keys you press on your music keyboard. When the track is played back, the virtual instrument is played from that data, so it sounds just like it did when you played it yourself.

We can call this MIDI data, because it works in the same way as a MIDI track that is connected via the MIDI OUT socket of your interface (assuming it has one) to a physical MIDI sound module.

You can choose to record audio, or you can choose to record MIDI. So why record MIDI? Here are some good reasons...

  • You can change the instrument completely. What you recorded as a virtual saxophone, for example, can easily be changed to a virtual clarinet, without having to re-record.

  • You can change the tempo easily. Granted, this is possible with audio too these days, but it's inherent in MIDI and it always works perfectly.

  • You can edit the notes and the way you played them. Played a wrong note? Then just edit it to the correct one.

  • Quantization! You can easily convert a sloppy performance into a super-tight one. Once again, you can do this in audio these days, but it is second-nature to MIDI and there are never any sonic degradations.
In the groove

Listen to some music played by a learning musician. Apart from the wrong or shaky notes, the performance will be 'wooden'. Now listen to a band of dyed-in-the-wool jazzmen. The swingometer goes straight to the max!

Clearly the experienced musicians have groove and the learner has yet to acquire it.

When music is written out on paper, it is presented in terms of half-notes, quarter-notes, eighth-notes and the occasional triplet. Play it like that and it will sound mechanical. A good player will use the printed rhythms as a guide and let the music flow. Note lengths will be subtly adjusted and the result - even in classical music - will be groove.

Now when you record a MIDI track (or a virtual instrument track), chances are you will listen to your work and hear a certain amount of sloppiness in timing. No problem - select the quantize function and everything will be quickly fixed.

The problem is that straight quantize sets everything to a rigid grid pattern. The sloppiness is gone, replaced by a mechanical accuracy that doesn't sound like a real performance.

One answer to this is to use swing. You will find this in most quantization menus or windows. Here, pairs of eighth-notes are modified so that the first is slightly longer than the second. Instant jazz.

Although swing can often be better than straight quantization, we can do better.

If you look further into the details of the quantize menu or window, you might find a 'groove' function with various options. Grooves are rhythmic patterns that can be imposed on your playing so that the end result mimics a skilled player. Swing is a simple groove, but grooves can be as complex as you want to make them.
Often a number of standard grooves are offered, but you may be able to make your own. You could even find a groove on an old jazz record and emulate that.
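
To make the idea concrete, here is a minimal Python sketch of eighth-note quantization with an adjustable swing amount. It is a toy illustration of the principle, not the algorithm of any particular DAW, and it assumes note start times are expressed in beats:

    # Quantize note start times (in beats) to an eighth-note grid, with optional swing.
    # A toy illustration of the principle, not the algorithm of any particular DAW.

    def quantize(beats, grid=0.5, swing=0.0):
        """grid=0.5 means eighth notes; swing pushes every off-beat eighth late.

        swing=0.0 is 'straight'; swing=0.33 approaches a triplet feel.
        """
        quantized = []
        for t in beats:
            slot = round(t / grid)          # nearest grid line
            t_q = slot * grid
            if slot % 2 == 1:               # off-beat eighths land a little later
                t_q += swing * grid
            quantized.append(round(t_q, 4))
        return quantized

    played = [0.02, 0.47, 1.06, 1.52, 2.01, 2.55]   # a slightly sloppy performance
    print(quantize(played))                # rigid grid: [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]
    print(quantize(played, swing=0.33))    # swung: the off-beats now fall late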

It has to be said however that to get the most out of groove quantization, as it is often called, you have to be very patient and painstaking in seeking out the exact groove that is right for your purpose.

But then, taking extra trouble is the mark of the successful, or soon-to-be successful, producer.

Publication date: Friday April 08, 2011
Author: David Mellor, Course Director of Audio Masterclass

Haydn: Symphony No. 99 / Rattle · Berliner Philharmoniker

Tuesday, November 27, 2012

It's true! Vinyl IS better than digital!!

 It's a topic of endless debate, but really there is no way a stream of digits can beat the real vinyl experience.

By David Mellor, Course Director of Audio Masterclass

For people of a certain age, there is no doubt that digital audio is superior to anything analog has to offer.

To be this 'certain age' you would have been in your professionally formative years in the early to mid-1980s.

Anyone who had acquired professional experience by this time, but was still young enough to be receptive to new developments, would have been in no doubt that digital audio was MASSIVELY better than analog.

The problem is that, for most people, experience acquired during one's formative years becomes hardened and ossified. People become 'set in their ways'.

But digital ways are not always the best ways, and the comparison with vinyl is a case in point.

By any objective measurement, an uncompressed digital recording is better than an analog recording on vinyl. The frequency response can be much better, the distortion and noise are very much lower. And there are no clicks. Well, not if everything is working properly.

So a digital recording is better than vinyl then?

Well no.

I would contend that any recording made up until around 1985 was made to sound at its best on vinyl.
If you were a producer, you would want the listening experience to be at its best for the buyers of your product.

There would of course be some differences between the sound you heard through the studio monitors and the sound of the end-product, but you would allow for that and make compensations - both technical and musical.

So transferring a pre-1985 master tape to CD may be closer to what the producer heard in the studio, but it isn't necessarily closer to the producer's intentions.

So what about modern recordings - surely they sound better in a digital format?

Well yes, except for one thing...

We are all still hooked on the sounds of the past.

Vintage microphones, vacuum tubes, so-called 'classic' equipment. It's all so popular that I don't have to argue my point any further.

And vinyl is part of the sound of the past that we still seem to love so much.

Take that out of the chain, and something is missing.

Maybe the answer is to master to vinyl, then transfer that to digital. But then people would start worrying - as they do - about the quality of the analog-to-digital conversion.

I have to say that I love my vinyl collection. I buy records cheaply secondhand then transfer them to my iPod. The records themselves are stored in the attic.

But then that may say something about the music you can find on vinyl - and how set in my ways my musical tastes have become!

Publication date: Friday February 12, 2010
Author: David Mellor, Course Director of Audio Masterclass

Korg In The Studio - Krome Music Workstation -- TouchView Navigation Tips & Tricks

The Beatles original audition tape - is it a fake?

 The tape that got The Beatles rejected by Decca Records in 1962 has unexpectedly been rediscovered. But is it just a (money-making) fake?

By David Mellor, Course Director of Audio Masterclass

According to an article in the Daily Mail and elsewhere, The Beatles' original audition tape for Decca Records, made in 1962, has been rediscovered after having lain dormant among a collection of memorabilia.

Clearly this recording of ten songs will be of significant interest to Beatles fans, and to whoever is willing to pay possibly £30,000 or more at auction.

But I have to ask the question whether this tape is genuine. It may be a complete fake, or it may be a copy of the original tape.

The reason I wonder is that the spool and box pictured are most definitely not of 1962 vintage. I am absolutely certain that this style of spool was not introduced by Ampex until at least the mid 1970s. A more dedicated enthusiast of recording history may be able to date it more precisely. I further wonder whether Decca would have used a US brand of tape when UK-manufactured tape was available and import duties were high.

I even question the writing in what seems to be felt-tip pen, the modern version of which was only introduced in 1962 and was not in common use until later in the 1960s.

The Beatles audition tape, inside box

Then there is the discrepancy between the outside of the box that states 'stereo 1/2 tk' and the label inside that states '2 track mono'. Decca engineers would not have allowed any confusion to arise over whether the recording was mono or stereo. If the recording was made in stereo (which it could have been in 1962) then it would indeed be stereo half-track. '2 track mono' could make sense as it might refer to a mono recording made on a stereo machine with identical signals going to both tracks. Playback would be a little less noisy on a similar machine rather than on a full-track mono machine that would also pick up noise from the guard band between the tracks.

Of course, it may be that the tape itself is the original, wound onto a different spool and placed in a different box. And even if it is a copy of decent quality then it will certainly make interesting listening.

P.S. One more point - the Dolby noise reduction system was not available until 1965, and the Dolby tone (as mentioned on the inside label) was introduced even later!

Note on copyright: As a news item specifically about the appearance of this item, fair use is claimed in respect of the photographs.
Publication date: Sunday November 25, 2012
Author: David Mellor, Course Director of Audio Masterclass

Monday, November 26, 2012

Finale Tips Tip #8 SmartShape tool Shortcuts

Q: How can I get my music up to a level that is acceptable for radio?

 How can I get my music up to a level that is acceptable for radio? My music always seems to be a few decibels lower than the accepted level.

By David Mellor, Course Director of Audio Masterclass

This is a situation where I have to wonder whether you are asking the right question.

Perhaps the question should be, "How can I get my music played on radio?"

The answer to that is to write a great song and make a great recording of it. If you can make a recording that will thrill people when they hear it, then the odd decibel or two simply won't matter.

But I suspect that the question is really about how can you get your recordings to sound as loud as commercially released recordings.

Here's a test...

Get hold of a compilation CD of recent chart songs. (If you're serious about getting your music on the radio, you'll have a collection of them already.)

(The reason I say 'CD' rather than 'download' is that CD recordings are not subject to the MP3 or AAC encoding process, either of which will degrade the sound quality and make comparisons more difficult.)

Now, using the cleverness of your computer, make a new CD with one of your recordings inserted among the professionally-made recordings.

What you will almost certainly find is that your song will be significantly lower in level than the commercially-released tracks, even if you have normalized your recording right up to peak level.

The reason for this is that commercial recordings are always 'mastered' after mixing. One of the functions of mastering is to increase the loudness of the recording. This is done using a combination of compression, limiting and multi-band compression.

There is a lot of skill in this. All of these processes can easily make the mix sound worse, even if it is louder. The trick is to get the mix to sound louder, without significant degradation.
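
As a very crude illustration of the principle, and nothing like a real mastering chain, the Python sketch below simply raises the level and then flattens any peaks that would clip. The gain and ceiling values are arbitrary, and the file names are placeholders:

    # Crude 'louder without clipping' sketch: gain, then a hard peak limiter.
    # Real mastering uses far subtler compression and limiting than this.
    import numpy as np
    import soundfile as sf

    GAIN_DB = 6.0        # arbitrary amount of loudness to add
    CEILING_DB = -0.3    # peak ceiling, just below full scale

    audio, sr = sf.read("my_mix.wav")             # placeholder file name
    gain = 10 ** (GAIN_DB / 20)
    ceiling = 10 ** (CEILING_DB / 20)

    louder = audio * gain
    limited = np.clip(louder, -ceiling, ceiling)  # brutal: flat-tops any peak above the ceiling

    print("Samples that hit the ceiling:", int(np.sum(np.abs(louder) > ceiling)))
    sf.write("my_mix_louder.wav", limited, sr)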

So if you are determined that you want to be able to compete in terms of loudness, you need to equip yourself with the appropriate tools.

Mastering plug-ins are available that will help you increase the loudness of your recordings. They don't work by magic - you have to learn how to use them effectively.

Over time however, you should be able to make recordings that sound as loud as commercially released recordings.

Now all you have to do is write a great song!

Publication date: Thursday July 01, 2010
Author: David Mellor, Course Director of Audio Masterclass

Finale Tips Tip #7 Adding Multiple Articulations

Saturday, November 24, 2012

Q. Is the quality of S/PDIF connections on soundcards variable?

Is there any difference between the quality of S/PDIF connections on low-end and high-end soundcards, or am I right in thinking that a low-end card with S/PDIF I/O (and the ability to clock from the A-D converter) should be adequate?
Via SOS web site

If a soundcard is well made, its S/PDIF interface should be quality independent, so the difference between low-end and high-end cards should be minimal in this respect.
SOS Technical Editor Hugh Robjohns replies: In theory, S/PDIF is quality independent, assuming that the physical interface is engineered reasonably in the first place. It is purely about transferring the data — there’s no jitter to worry about — so, provided you have decent 75Ω cables of modest length, it should just work. 
I’ve had very few problems with S/PDIF interfaces, and the few issues I did find were actually caused by ground loops.
Personally, I prefer AES3 interfaces, because they will cope with longer cables and are always ground-free, transformer-coupled connections (often S/PDIF is as well, but not always). And XLRs are so much more reliable than RCA phono plugs!  

Finale Tips Tip #6 Repitch Tool

Friday, November 23, 2012

An acoustician's Night at the Opera

 How is it that opera singers can reach the back rows of the upper balcony of a 2000-seat theater, without amplification? Is it something to do with acoustics?

By David Mellor, Course Director of Audio Masterclass
 
Before movies were invented, there is no doubt that opera was the world's most expensive art form. In fact even now, it is probably the second most expensive art form in terms of production cost. No wonder the ticket prices are so high.

But it is also an art form that relies more than any other on having good acoustics. Opera is performed in auditoria of up to 2000 seats, and sometimes more, without any form of amplification.

That accounts for the vocal style of opera. Opera singers have to sing with throats wide open simply to shift enough air to reach to the rearmost rows of the upper balcony. You can consider top opera singers to be vocal athletes on a par with Olympic medallists.

However, the acoustics of the opera house have to help. Theaters and concert halls generally come in two shapes - the 'shoe box' shape approximating to a cuboid, or fan shape where the body of the auditorium progressively widens towards the rear.

It is commonly felt that the shoe box or narrow fan concentrates sound so that it doesn't lose so much level as it travels. This will certainly help the singers.

The singers can also be helped by the stage directions and even the set. If a singer is close to the front of the stage ('downstage'), then the energy of his or her voice travels directly into the auditorium. But in the far upstage, a lot of energy is lost into the fly tower of the theater where it is absorbed by the backcloths hanging there.

So if the director wants to give singers a hard time, he will have them sing their important lines from an upstage location.

The directionality of the human voice is fairly wide, particularly in the lower frequencies. So any hard surfaces in the set will provide reflections that will reinforce the sound. Particularly if the singer and set are in a downstage location.

Now a problem - the orchestra. The orchestra in an opera performance is capable of many times the sound power of a singer (although it has to be said that sopranos can be very penetrating). So the orchestra is sunk into a pit - the orchestra pit - so not only does it not obstruct the stage visually, it is also screened acoustically.

People sitting in the stalls seats will neither be able to see the orchestra nor hear it directly. To compensate for a slight dulling of the orchestral sound this can produce, many theaters have a section of the ceiling over the orchestra pit inclined at 45 degrees so that sound rising vertically from the orchestra is reflected into the auditorium.

The problem of keeping the orchestra down in level so the singers can be heard clearly was certainly on opera composer Richard Wagner's mind. So much so that he had an opera house built - the Festspielhaus Bayreuth - with an extra-deep pit.

With such acoustic problems it is a wonder that opera works at all. In fact sometimes it doesn't...

There are many occasions in opera where composers have misjudged the balance between singers and orchestra. And these days directors expect a high degree of freedom in where they can place singers on stage.

So on the odd occasion where a singer is not clearly audible, amplification may be provided. This is not done throughout the opera, but just the specific lines where the problem occurs. Ideally no-one in the audience should be aware that there is any sound engineering involved.

Not even acousticians.

One last point is that opera singers sing so loud that they sometimes cannot hear the orchestra clearly, even though it is right in front of them. So it is not unusual to provide foldback from the pit to the stage. So even if nothing is amplified for the audience, the singers on stage can benefit from modern sound engineering techniques.
Publication date: Thursday March 10, 2011
Author: David Mellor, Course Director of Audio Masterclass

Finale Tips Tip #5 More Simple Entry Tips

Scientific test picks out best converter!

 Put four converters to the test, in a pro studio with pro testers. See which one comes out best. Or is the result a foregone conclusion?

By David Mellor, Course Director of Audio Masterclass

Recently reported in the audio press is a test of four A-to-D and D-to-A converters. These converters are in the first rank of professional equipment, hardly home studio gear.

The studio where the test was done is one of the best in the world. And the people doing the listening have the kind of experience over many years that anyone would die for.

So the converters were hooked up, whereupon the hooker-up - if that's a word - left the room so that no-one taking part in the test knew which converter they were listening to, not even the person in the room switching between them.

This kind of test is known to science as 'double blind'.

This means that not only do the people being asked to make the judgments not know what they are listening to, neither does the person conducting the test.

The person who does know which is which needs to be completely away from the experiment where they cannot possibly influence the outcome, however unconsciously.

The group decided that Converters B and D were both very good, one better on vocals, the other better on everything else.

Converter B is a product of the company whose representatives organized the test.

So the product that the organizers of the test clearly hoped would have won, actually did win. Well it came joint first. I'd have to say that if it's the best for vocals, then it is in first place as vocals are more important than anything else.

So this is the point in this article where you would expect rampant cynicism. Of course Converter B is going to win, if the manufacturer's representatives are organizing the test!

Well in fact we have no reason to believe that there was any underhandedness of any kind. There is no reason to think this is anything other than a genuine result.

However...

In science there has to be a clear distinction between a test that you can learn something from, and a test that is of dubious or perhaps no value.

And one of the indicators of a flawed test, to a scientist, is that the result of the test supports the objectives of whoever organized or paid for it.

It doesn't matter how fair-minded you are and how scrupulously the test is conducted. There will always be the temptation that, should the test go the 'wrong way', the results will not be published. So only the results of 'successful' tests would be presented to the public; any other test reports would simply be filed in the trash can.

As you may have noticed, no-one has been named in this article. That's because we don't doubt the honest intentions of the people involved. We don't doubt the validity of their conclusions.

However, anywhere else we would wonder how many test results had been trashed because they went the wrong way.

Testing is important, but it is also important when evaluating the results of a test to know of any circumstance that could invalidate the test, or flag up a potential conflict of interest.

By the way, the manufacturer of Converter B had another of their products in the test.

It came last!
Publication date: Wednesday January 05, 2011
Author: David Mellor, Course Director of Audio Masterclass

Thursday, November 22, 2012

Finale Tips Tip #4 Simple Entry Tool

Another way software updates can screw your business


I wrote about the potential perils of software updates recently. Downtime is an irritation if you have a hobby. If you have a business then time spent not being productive is a much more serious issue.

But if you have a full backup of your boot disk, you can update important software safe in the knowledge that if something goes wrong, or if the update proves buggy, you can always return to your original set up.

Well that's true. But we had an issue recently where our backup was of no use to us.

Like almost everyone else who uses a digital audio workstation, we use virtual instruments here.

There happened to be one that we particularly like to use for which an upgrade had recently been announced. By 'recently', we mean long enough ago that any early bugs should already have been cleared up. We have a business to run, so we like to be cautious.

We bought this particular virtual instrument as part of a suite of instruments and, as it turned out, the price to upgrade the whole suite was very attractive. So we unlocked the vault in which the RP debit card is securely kept and made the purchase online.

Part of the plan was that we would download the update for the particular instrument we were interested in, then upgrade the rest when the DVD came via snail mail. The downloads were HUGE and the one instrument alone took all night on our broadband connection.

Next morning we applied the update to our main DAW system. Of course we have a backup boot disk, so we could get back to work quite quickly should the update prove problematical.

But the update went smoothly and the instrument functioned perfectly. A success therefore!

So back to work... let's open the session we are working on and earn some money.

The session uses the newly updated instrument, and the old versions of the other instruments in the suite.
But...

Although the updated instrument worked fine, the others required reauthorization. (They authorize via iLok.)

No problem, we had the authorization keys. Except they didn't work because they had already been used.

So to cut to the chase, we could only use one instrument out of the suite of instruments. And since the problem was with the authorization, we couldn't use a different DAW, or return to the old version on our backup boot disk.

We couldn't install the other upgrades because we didn't yet have them.

In short, we were screwed.

So the next step was to contact support. We have a support code so we were hopeful of getting a quick solution to our problem.

So we logged a support request expecting a prompt answer.

Prompt? The response took a full EIGHT DAYS! And it didn't solve the problem.

Well fortunately we had already resolved the issue by downloading the other updates, which took a whole day and night because of their size. We spent that day working on a different project that didn't require these instruments.

And the moral?

Well the moral of the story is that there's always another snake in the jungle that's out to bite you.

The problem here was that the license on the iLok covered a whole suite of instruments. When updated, it covered only the new versions of the instruments and the old versions could no longer be used.

It may have been possible to use iLok's Zero Downtime if planned for in advance. Apparently you can load 14-day licenses into a spare iLok and activate them should a problem with your main iLok occur. That of course costs $49.95 for the spare iLok and a $30/year subscription. That's the money honest users have to pay to cover the cost of software piracy.

Ultimately the only real safeguard against problems such as this is to perform upgrades only when you have a reasonable window of time before you have a paying client in the studio, or a project to finish against a deadline.

And as I said before, if recording is a fun hobby for you, then upgrading can be part of that fun. But if you run a business, make sure that the money keeps on rolling in!
Publication date: Thursday December 02, 2010
Author: David Mellor, Course Director of Audio Masterclass

Finale Tips Tip #3 Selection Tool Keystrokes

Wednesday, November 21, 2012

Can your virtual orchestra imitate a real one exactly?

 Virtual instruments are getting better all the time. But does a virtual orchestra always sound like a real one?

By David Mellor, Course Director of Audio Masterclass

I'm just going to pick on one example here - that of divisi. If your orchestral virtual instrument can do this, then it will sound just a little more like a real orchestra than one that cannot.

Think of a real orchestra, but just the string instruments - first violins, second violins, violas, cellos and basses. That, according to simple arithmetic, should give us the potential for five-part harmony. In real life however the basses normally double the cellos an octave lower, or provide a bass when the cellos are playing a melody line, or sometimes get let loose for a special sonic effect. (And don't forget all those bars' rest.)

Normally therefore the string section is capable of four-part harmony. But what if the composer wants more lines? Or what if he or she wants shimmering high harmonies in the violins with no other instruments playing?

Well since there are several players in each section, up to a dozen or more in each violin section for example, they can easily be split up to play the extra lines.

In musical language, this is called divisi.

With a virtual orchestra, there shouldn't be any problem. Since you can play any instrument (counting a whole section here as an instrument) polyphonically, you can sequence as many string lines as you like.

But it won't quite sound the same.

If the first violins of a real orchestra play divisi, then each line has only half the number of players. In a virtual orchestra without divisi, each line uses the full section sound whether it is played alone or together with the other line. And when the lines play together, the combined level is around 3 decibels higher, whereas a real orchestra playing divisi would stay at much the same level.
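
The arithmetic behind that figure is easy to check, on the assumption that the two sequenced lines are musically different and therefore roughly uncorrelated:

    # Level increase when two equal-level lines are combined.
    import math

    uncorrelated = 10 * math.log10(2)   # power doubles: two musically different lines
    identical    = 20 * math.log10(2)   # amplitude doubles: the same signal twice

    print(f"Two different lines, same section patch: +{uncorrelated:.1f} dB")
    print(f"The same line doubled exactly:           +{identical:.1f} dB")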

It sounds like a small point, but an orchestral virtual instrument that doesn't have a divisi feature will sound thickened and congested compared to a real orchestra when composed for in this way.

Of course, there aren't that many listeners who would notice this. But for a composer who takes pride in his or her work, it makes a significant difference.
Publication date: Monday October 31, 2011
Author: David Mellor, Course Director of Audio Masterclass

Finale Tips Tip #2 The Selection Tool

Pandora Internet radio - artists get less than previously claimed

 A blog post from a senior representative of Pandora claimed that 2000 acts will earn more than $10,000 each next year. He actually meant $4500.

By David Mellor, Course Director of Audio Masterclass

A recent blog post mentioned here, by Tim Westergren, Founder of Internet radio service Pandora, claimed that little-known artists Donnie McClurken, French Montana and Grupo Bryndis were earning significant sums. Least-known of the three, Grupo Bryndis, were said to be on track to earn $114,192 in the current year. That's a lot of money to most musicians, and a very welcome sign of good things to come in the future of the Internet.

However, not all is quite how it seems. $114,192 is in fact the sum paid by Pandora, not the amount received by Grupo Bryndis. Pandora's payments for the use of music on their service are made to SoundExchange, the non-profit performance rights organization that collects statutory royalties from satellite radio, Internet radio, cable TV music channels and similar platforms for streaming sound recordings in the USA.

SoundExchange divides the net royalties 50% to the owner of the sound recording, 45% to the artist and 5% to session musicians, after taking an administration fee of 5.3% straight off the top. (Pandora pays an additional smaller amount to songwriters, but I'll leave that issue for another day.)

Looking at the figures quoted by reliable sources on the Internet and elsewhere, it seems that Pandora is paying around 50% of their revenue to the music industry.

So if a certain artist or band isn't earning $10,000 a year as Westergren might have implied, at least it is earning $4500. And their label (which put up the money to get them on the Internet 'air' in the first place) gets paid too. And don't forget the 5% left over for session musicians.
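
Putting rough numbers on that, using only the percentages quoted above and treating everything as approximate:

    # Rough split of a $10,000 Pandora payment, using the percentages quoted above.
    payment = 10_000.00
    admin_fee = 0.053            # SoundExchange administration fee off the top

    net = payment * (1 - admin_fee)
    label   = net * 0.50         # owner of the sound recording
    artist  = net * 0.45
    session = net * 0.05

    print(f"Label: ${label:,.0f}   Artist: ${artist:,.0f}   Session musicians: ${session:,.0f}")
    # Ignoring the admin fee, 45% of $10,000 is the $4,500 figure quoted above.
    print(f"Artist share before the fee: ${payment * 0.45:,.0f}")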

For me, it's hard to see the negative in this. Comments I have seen such as 'Pandora must die' simply do not reflect the fact that any payment at all to musicians for Internet performances of their music is a huge advance over the 0% revenue share from piracy (which largely exists to make a profit for the pirates). Pandora represents progress and a means for musicians to earn a living other than from the concert tickets and T-shirts that without paid-for music on the Internet would otherwise be the only viable business model.

The trick is going to be to balance out the revenue so that musicians and labels get a fair share, while allowing businesses like Pandora to be profitable. If Pandora can't be profitable, then there will be no money to be shared by anyone.

Clearly, musicians need to press hard for their fair share. Any business of whatever kind will always seek to reduce the cost of its raw materials. So Pandora will always seek to pay musicians less, but hopefully not so much less that it kills the geese that lay the golden eggs.

In the shorter term, I see a lot of debate and argument. In the longer term however I see a valuable revenue stream that will put money in the pockets of the people who truly deserve it - musicians, music-loving labels that finance their work, and indeed the services that provide the listening public with access to music. There's a potential win-win-win here and I for one very much look forward to it.
Publication date: Tuesday November 20, 2012
Author: David Mellor, Course Director of Audio Masterclass

How to write for orchestra - even if you don't know a note of music

 You don't need years of music theory training to write for an orchestra. If the sounds are in your head, technology can get them out.

By David Mellor, Course Director of Audio Masterclass

Once it took years of training to become a musician. For example, it takes about ten years to learn to play the violin to a standard sufficient to play in an orchestra. And you have to have the natural talent for it, otherwise it will be ten years wasted.

To compose music for orchestra takes just as long. It takes years to become imbued with the orchestral tradition. You don't just have to learn musical notation, you have to learn about all the capabilities of the various instruments. You have to be able to auralize (imagine in your head) how they sound - individually and in combination. And you have to know all the things that players find difficult - there's no point in writing something that will hardly ever be played properly.

But these days, you can short circuit all of that time and training. You will still need a good musical imagination though - there's no substitute for that. Yet.

To write for orchestra in the modern way, you need a master keyboard, a DAW and a good orchestral software instrument library. Don't record audio, record MIDI tracks so that your key presses are recorded, rather than the audio signals.

You can easily build up layers of all the orchestral instruments. Your symphony, or film sound track, will be finished in no time at all.

But a performance made up from samples isn't the same as a performance given by an orchestra. It sounds kind of orchestral, but it isn't the real thing. So you need to go to the next stage...

When you have finished inputting all the notes, the next step is to turn it into musical notation. If you can print out a conductor's score and set of musicians' parts, then you have a composition!

There are many software packages that can turn MIDI data into musical notation that you can print out. However, they often don't do it all that well. Musicians need the notes to be set out in a certain way, or it doesn't make a lot of sense to them. To do this takes human intervention and expertise.

Unfortunately, without the training and background, you don't have that. In fact, even if you do know music theory thoroughly, unless you have actually written out music for other musicians to play and taken some feedback (often vitriolic - you would be surprised at some of the words classical musicians know), you won't be able to do it well enough.

The answer is to engage a copyist. A musical copyist is a person who specializes in writing out music. Often they have another agenda, such as struggling to be a successful composer or musician in their own right, but it's the copying that pays the bills.

In the old days, they would copy out a composer's rough score into one that could be engraved and printed. These days, they take your MIDI sequence and put it into specialized score-writing software such as the wonderful Sibelius. Even Sibelius doesn't do everything automatically; the copyist has to use considerable skill to transform your data into musician-friendly notation. He or she will need to consult with you on dynamics and other details too. If something looks unplayable, they can advise you.
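
For the hand-over itself, here is a minimal sketch of getting a MIDI sequence into a form that score-writing software can open, using the open-source music21 library. The file names are placeholders, and as I've said, the raw result will still need plenty of human tidying:

    # Turn a recorded MIDI sequence into MusicXML that notation software can open.
    # Requires the open-source music21 library; the file names are placeholders.
    from music21 import converter

    score = converter.parse("my_sequence.mid")           # read the MIDI performance
    score.write("musicxml", fp="my_sequence.musicxml")   # raw notation - a copyist still needs to tidy it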

The end result will be a score and a set of parts that can be performed by a real orchestra, whether classical or for a film or TV soundtrack.

All you need is the imagination to make the music in the first place. Technology is wonderful.

Publication date: Friday March 19, 2010
Author: David Mellor, Course Director of Audio Masterclass

Tuesday, November 20, 2012

Finale Tips Tip #1 QuickStart

Do you have to understand electronics to be a sound engineer?

 I feel that I am struggling with electronics. Do I really need to understand this or is it something I can ignore?

By David Mellor, Course Director of Audio Masterclass

Audio equipment works by electronics, so if you have a certain amount of understanding you will gain a good deal of confidence in what you are doing.

However technology has moved on and it is less useful for a sound engineer to understand electronics than it used to be.

For example, imagine you were a sound operator in a theatre musical in the 1990s and while you were preparing the mixing console for an evening performance you found a problem. If you had a good grasp of electronics you would be better able to describe the problem to a maintenance engineer and everything would be working properly much more quickly.

Indeed, you might have found a way of working around the problem, although you wouldn't have been expected to fix it yourself.

These days however digital consoles, electronically speaking, mostly either work or don't work. Knowing electronics would not give you any advantage.

Where a basic understanding of electronics will help, as one example, is in the selection of equipment such as microphone preamplifiers.

Manufacturers' advertising material often talks about such equipment in terms of its electronic design. So if you know what a Class A amplifier is, for example, you are at an advantage over someone who doesn't.

The best plan is to have a really good try at understanding the basics of electronics, from Module 1 of the Audio Masterclass Sound Engineering and Music Production Online Course. If this works for you, that's great.

If you find that it is too abstract and you don't get on with the topic, don't worry. Many working sound engineers have only a very sketchy knowledge but they manage just fine. Concentrate on the areas that are best matched to your talents.
Publication date: Wednesday June 16, 2010
Author: David Mellor, Course Director of Audio Masterclass

Introduction To Consumer Applications

Monday, November 19, 2012

Do you have 'Perfect EQ'?

 Some people have perfect pitch. They can tell you the letter name of any musical note instantly. So do some people also have 'perfect EQ'?

By David Mellor, Course Director of Audio Masterclass

You know how some people have perfect pitch - they can name any musical note instantly? I don't have perfect pitch and that's something I'm thankful for. I can see the usefulness of it, but I can also imagine significant drawbacks.

I like to listen to music and, free from perfect pitch, I don't have to worry about musical theory, what key the music is in, and whether the musicians are tuned to concert pitch or military band pitch. I can imagine that a listener with perfect pitch hears more of the problems than the wondrous sonorities on offer.

My point is however that there is no way I can ever understand what it's like to have perfect pitch. I can never know how music really does sound to such a person. And they can't understand how music sounds to me either. If they listen to me play, they might realize that I've tuned my instrument a quarter-tone sharp, whereas I wouldn't know without a reference. But they might not hear the interactions between harmonies the way I do, because I'm not aware of the exact pitches of the individual notes.

But it struck me recently that it might be possible for someone to have such a thing as 'perfect EQ'.

The classic scenario would be where a certain instrument in the mix seems to need a mid-band EQ cut, but most recordists would have to sweep the frequency control to find the right frequency, then fine-tune the gain and Q. Granted, the more experience you have, the quicker this process becomes, and in a sense it can become almost automatic. But I suspect it is a rare person who can listen to a sound and know exactly what EQ it needs, then set that EQ without the need for further adjustment.
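
For anyone who wants to see what is actually going on when you sweep that mid-band cut, here is a minimal Python sketch of a peaking EQ built from the widely published 'Audio EQ Cookbook' biquad formulas. The frequency, gain and Q settings are purely illustrative:

    # Peaking (bell) EQ biquad, following the widely published Audio EQ Cookbook formulas.
    # Example settings - a 4 dB cut at 800 Hz with a Q of 2 - are purely illustrative.
    import math
    import numpy as np
    from scipy.signal import lfilter

    def peaking_eq(fs, f0, gain_db, q):
        A = 10 ** (gain_db / 40)
        w0 = 2 * math.pi * f0 / fs
        alpha = math.sin(w0) / (2 * q)
        b = [1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A]
        a = [1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A]
        return np.array(b) / a[0], np.array(a) / a[0]

    fs = 44100
    b, a = peaking_eq(fs, f0=800, gain_db=-4.0, q=2.0)

    noise = np.random.randn(fs)        # one second of noise as a test signal
    filtered = lfilter(b, a, noise)    # the region around 800 Hz is now about 4 dB down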

But are there such people? Do people exist who can sense the spectrum of an audio signal directly and without thinking about it, like people with perfect pitch sense notes?

That's one question, but there's another...

Suppose that an engineer doesn't have 'perfect EQ', and some people do. What do those people think of that engineer's work? Does it sound like a jumble of frequencies to them?

By the way, I can tell you that perfect pitch isn't always what it's cracked up to be. I once knew an amateur cello player who had an acute sense of perfect pitch. His playing was always out of tune though. Somehow there's a distinction between what you want to hear and what you actually do hear!

P.S. I am totally convinced that success in recording is not down to the quality of your hearing but the degree of attention you pay to what you are listening to. So not having perfect pitch, or 'perfect EQ', is nothing to worry about!
Publication date: Tuesday May 03, 2011
Author: David Mellor, Course Director of Audio Masterclass

KORG nanoSERIES 2 Slim-line USB-MIDI Controllers: Video Overview

Why do mixing console preamps have high-pass filter buttons?

 Look at any decent mixing console and you will see high-pass filters in each and every mic preamp section. Why is this, and what are they used for?

By David Mellor, Course Director of Audio Masterclass

The mixing console illustrated is the Soundcraft GB2. I could have chosen just about any reasonable-quality and better mixing console that there is, but I was able to source a clear photo of this one easily. And I've met designer Graham Blyth (and heard his excellent piano playing), so why not?

As you will see, each preamp section features a gain control, a phase control, a phantom power button, and the button in question here - a 100 Hz high-pass filter.
There are three reasons why this button is commonly seen...

Transformer saturation

First is that it's a historical remnant from the days when mic preamplifiers had transformers on the input. An iron-cored transformer can magnetically saturate on high-level, low-frequency signals, causing distortion. If the excess low-frequency content can be filtered out before the transformer, then all will be well.

Having said that, transformers have largely been eradicated for reasons of cost, and the level of the signal would have to be pretty high to cause any problems. Still, it's good to know about this feature of audio science.

Proximity effect

The proximity effect occurs with directional microphones when positioned close to the sound source - the low frequency end rises.

Although popping in a high-pass filter is unlikely to correct the effect exactly, it's a quick fix if you feel you need it.

Bass clutter

Now here is the real value of the high-pass filter...

In a typical mix, most of the low end will just be meaningless clutter. This applies particularly in live mixing. Yes, the bass drum and bass guitar need to be bassy. But hardly anything else does. Often you will find that cutting the low bass as a matter of habit (except on bass instruments) leads to much cleaner mixes.

Of course if you are recording, you can easily apply this fix later on. But in live sound you only get one chance. And having a simple one-button solution to cleaning up the bass is handy indeed.
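
For the 'fix it later' case, the offline equivalent of that button might look something like this minimal Python sketch: a 100 Hz high-pass filter, assuming the scipy and soundfile libraries are installed and with a placeholder file name:

    # A 100 Hz high-pass filter, roughly what the console button does (minus the analogue circuitry).
    import soundfile as sf
    from scipy.signal import butter, sosfiltfilt

    audio, sr = sf.read("guitar_overdub.wav")     # placeholder file name

    sos = butter(2, 100, btype="highpass", fs=sr, output="sos")
    cleaned = sosfiltfilt(sos, audio, axis=0)     # zero-phase; works on mono or stereo files

    sf.write("guitar_overdub_hpf.wav", cleaned, sr)
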
Publication date: Monday November 12, 2012
Author: David Mellor, Course Director of Audio Masterclass

Saturday, November 17, 2012

Take Your Music Further with Finale 2012

Q. Is flutter echo a problem in a well-treated room?

Sound Advice : Recording
My daughter managed to play a tough piece she’s been practising on the keyboard this weekend. She played it so well that we clapped our hands... then we noticed how strange the clapping sounded. It rang on but died very quickly, and for the time it rang on, it sounded very metallic and almost robotic. That was close to the middle of the room. The room is partially treated at the moment, with panels at the side-wall reflection points, one on the ceiling, and three corner superchunks. I tried clapping again with some further panels on the side walls directly to the left and right of where I was sitting, and the noise disappeared. I understand enough to realise the sound is the clap bouncing back and forth between the two walls, and I’m guessing that this is what folk refer to as flutter echo. What I’m a little less sure about is whether it is a problem, and what — generally — a hand clap should sound like in a well-treated room.
Via SOS web site
SOS Technical Editor Hugh Robjohns replies: If we’re talking about the sound in a control room, the point is what the room sounds like when listening to sound from the monitor speakers. It is conceivable that, by design (or coincidence), the acoustics could well sound spot on for sounds from the speakers, but less accurate or flattering for sources elsewhere. And, unless you’re planning on recording sources in the control room at the position you were clapping your hands, those flutter echoes might not represent a problem or require ‘fixing’.
However, in general, strong flutter echoes are rarely a good thing to have in a control room and I’d certainly be thinking about putting up some absorption or diffusion on those bare walls to prevent such blatant flutter echoes.

Flutter echoes in a studio can be distracting and fatiguing, so it’s often worth putting up some absorbent foam on bare walls to reduce them. Don’t overdo it, though: you need to maintain a balanced acoustic.
You shouldn’t go overboard with the room treatment, though: while working in a control room that has ‘ringy’ flutter echoes or an ultra-live acoustic can be very distracting and fatiguing, so too is trying to work in a room that sounds nearly as dead as an anechoic chamber!
Of course, traditional control rooms are pretty dead, acoustically speaking, and that is necessary so that you can hear what you are doing in a mix without the room effects dominating things. But the key is to maintain a balanced acoustic character across the entire frequency spectrum. The temptation in your situation might simply be to stick a load of acoustic absorbers on the walls, and that would almost certainly kill the flutter echoes, but in doing so there is also a risk that you’d end up with too much HF and mid-range absorption in the room (relative to the bass-end absorption).
That situation would tend to make the room sound boxy, coloured and unbalanced, and that’s why a better alternative, sometimes, is to use diffusion rather than absorption; to scatter the reflections rather than absorb them. The end result is the same, in that the flutter echoes are removed, but the diffusion approach keeps more mid-range and HF sound energy in the room.
The question of which approach to use — diffusion or absorption (or even a bit of both) — depends on how the rest of the room sounds, but from your description I’d say you still had quite a way to go with absorption before you’ve gone too far.
To sum up, I’d suggest that you’re not worrying unnecessarily, and that it would help to put up some treatment to reduce those flutter echoes.  
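
As a footnote, the 'metallic' quality has a simple explanation: the clap bounces between the two parallel walls at a regular rate, fast enough to be heard as a pitch-like ring rather than as separate echoes. A quick calculation, using a made-up 4-metre wall spacing rather than any measurement from the question:

    # Repetition rate of a flutter echo between two parallel reflective walls.
    # The 4 m wall spacing is a made-up example, not a measurement from the question.
    speed_of_sound = 343.0    # metres per second at room temperature
    wall_spacing = 4.0        # metres between the two parallel walls

    round_trip = 2 * wall_spacing / speed_of_sound   # seconds per bounce cycle
    rate = 1 / round_trip                            # echoes per second

    print(f"One echo every {round_trip * 1000:.1f} ms, about {rate:.0f} repeats per second")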

Friday, November 16, 2012

New for ACID Music Studio 8: TruePianos™ Amber Lite soft synth

Q: How do I connect my subwoofers to my mixer?

 How do I connect my powered subs (I think my powered subs already have a built-in crossover) to my mixer? Through the auxiliary or main output?

By David Mellor, Course Director of Audio Masterclass

(We will presume here that the subwoofers have internal power amplifiers, as it makes the explanation simpler. The same would apply however if the power amplifiers were external, it's just a little more hooking up to do.)

Your question does not specify whether you are using your subs for monitoring in the studio, or for live sound. Let's assume for now that you are in the studio.

We do not recommend connecting the subwoofers directly to the mixing console in any way.

The reason for this is that every studio needs a monitoring system that above every other factor is consistent. It's nice to have a wide frequency range, nice to have low distortion, nice that it goes loud enough.

But all the other factors take second and progressively lesser places in comparison with consistency. If your monitoring is the same from day to day, you can learn to work around any imperfections. And since no monitor system is perfect, this will always be the case.

If your monitoring changes from day to day, then really you won't have a clue what you are listening to and your mixes will be dreadful.

So the only possible reason for connecting the subs to the mixing console would be so that you could make adjustments, and that is precisely what you should not be doing at the console in the studio.

All adjustments to the monitoring system should be done among the crossovers, amplifiers and loudspeakers - nowhere else. You should take as much time as you need to optimize your monitoring. And once you have decided on the best settings, leave it alone!

Setting up a subwoofer system is easier than it used to be.

In the 'olden days' the monitor output from the mixing console would connect to a crossover that separated the signal into the mid and high frequencies, which went to the power amplifiers for the main monitors, and the low frequencies, which went to the subs.

These days, the crossover is more likely to be built into the subwoofer. Take for example the Wharfedale EVP-X18PB. This has a single 18-inch drive unit powered by a 400-watt amplifier.

In addition, however, it has connections for the left and right stereo signals from the monitor output of the mixing console. These lead internally to a crossover that separates the lows from the mids and highs.

The mids and highs go to two outputs, which you can then connect to the amps driving your main monitor loudspeakers.

The lows from the two channels are summed and are used to drive the sub. The sub has a level control so that you can blend it with the output of your main monitors. There is a phase switch too - to test this put the speakers close together, and use the setting where you hear the most bass.
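
To make the signal flow concrete, here is a minimal sketch of what such a built-in crossover does, written in Python with NumPy and SciPy. The 80 Hz crossover point, the fourth-order Butterworth filters and the helper name split_for_sub are assumptions chosen for illustration; they are not the actual filters inside the Wharfedale unit or any particular sub.

import numpy as np
from scipy.signal import butter, sosfilt

fs = 48000           # sample rate in Hz (assumed)
fc = 80.0            # assumed crossover frequency in Hz

# Fourth-order Butterworth low-pass and high-pass sections (an assumption)
lowpass = butter(4, fc, btype='low', fs=fs, output='sos')
highpass = butter(4, fc, btype='high', fs=fs, output='sos')

def split_for_sub(left, right, flip_polarity=False):
    """Derive left-main, right-main and sub feeds from a stereo monitor signal."""
    left_main = sosfilt(highpass, left)     # mids and highs on to the left main amp
    right_main = sosfilt(highpass, right)   # mids and highs on to the right main amp
    sub = sosfilt(lowpass, left + right)    # lows from both channels summed to the sub
    if flip_polarity:                       # the 'phase' switch on the back of the sub
        sub = -sub
    return left_main, right_main, sub

The level control on the real unit simply scales the sub feed; in the sketch you would multiply sub by a gain factor before returning it.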

Combining the two channels into one sub is a usable option, since low frequencies are not particularly directional. Of course, it's better to use two subs if you can afford it.

The key to using subs successfully for monitoring is to match the output from the main monitors and the subs at the crossover frequency. This is difficult to do unless you have a sound level meter, but you can play the subs on their own, listen to the highest frequencies they produce, lock those frequencies in your head, and then listen out for them when all of the monitors are playing. Balance the sub(s) so that this band of frequencies is at the same subjective level as all the other frequencies.
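
A handy way to practise that listening test is to generate a narrow band of noise straddling the crossover region and loop it through the whole system while you adjust the sub level. This is only a sketch: the 48 kHz sample rate and 80 Hz crossover frequency are assumed, and you would substitute your own system's crossover point.

import numpy as np
from scipy.signal import butter, sosfilt

fs = 48000                      # assumed sample rate in Hz
fc = 80.0                       # assumed crossover frequency in Hz

# Five seconds of white noise, band-limited to roughly one octave around fc
rng = np.random.default_rng(0)
noise = rng.standard_normal(fs * 5)
bandpass = butter(4, [fc / np.sqrt(2), fc * np.sqrt(2)],
                  btype='bandpass', fs=fs, output='sos')
test_signal = sosfilt(bandpass, noise)
test_signal /= np.max(np.abs(test_signal))      # normalise before writing or playing back

# Write this out as a WAV file (for example with the soundfile package) and loop it
# while you bring the sub level up or down until the band sits evenly with the mains.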

Live Sound

Although in theory in live sound it would be nice to think of the main speaker stacks as gigantic hi-fi speakers, in practice there are benefits to be gained from subjective optimization from venue to venue. For a fixed installation, studio practice as described above applies. For a traveling PA, you can either do it the 'proper' way, or connect the subs to the console so that you can tweak the sound more easily.

Publication date: Saturday October 23, 2010
Author: David Mellor, Course Director of Audio Masterclass

Korg In The Studio - Krome Music Workstation -- TouchView Navigation Tips & Tricks

Thursday, November 15, 2012

Isn't it time you tried a REALLY different microphone?

 You've tried all the usual microphones and are tired of their sound? Why not try something that is really over the edge...

By David Mellor, Course Director of Audio Masterclass

If you're into microphones then you might have noticed that there is a certain 'sameyness' about the standard models.

You might choose to use a dynamic mic, a ribbon, a small- or large-diaphragm capacitor, or a tube mic, maybe even a vintage model.

Each type of mic has its own characteristic sound, but within types they sound quite similar. Yes, there are differences between individual large-diaphragm capacitor mics, for instance, but they are nothing like as big as the differences between mic types.

So to get a sound that is really different, perhaps it would be an idea to choose a mic that stands out from the crowd.

And of course we have an example - the Coles 4104 commentator's lip mic. We saw this example in this eBay auction (bear in mind that this page on eBay will be removed at some point after the auction closes). At the time of writing, the auction is still open so you could buy this very one. Here are some more tasty photos...

[Photos of the Coles 4104 from the eBay listing]

By the way, we don't have any connection with the seller other than we asked his permission to use the photos. The auction closes (or closed, depending on when you read this) on October 19, 2008.

The Coles 4104 is a noise-canceling microphone. It subtracts sound arriving from a distance while leaving sound immediately in front of the microphone untouched.

This makes it ideal as a sports commentator's mic, where there is likely to be a lot of background noise. The mic is held with the upper guard piece touching the commentator's top lip, which makes correct positioning a no-brainer even for a non-technical user.

You could try noise canceling for yourself with two directional microphones - place them back to back and flip the phase of the rear mic. Speak into the front mic from a close distance. Since background noise arrives at both mics more or less equally, flipping the phase of the rear mic makes it cancel out to a significant degree. But since the sound of your voice is much stronger in the front mic, it hardly cancels at all.
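
If you want to see the principle in numbers, here is a tiny Python sketch. The signal names, the cancel_background helper and the perfectly matched capsules are assumptions for illustration; real mics and real rooms are never this tidy.

import numpy as np

def cancel_background(front, rear):
    """Sum two back-to-back directional mics with the rear one polarity-flipped.

    Anything the two capsules pick up roughly equally (distant background)
    largely cancels, while close-up speech, which is far stronger in the
    front mic, survives almost untouched.
    """
    return front - rear          # flipping the rear mic's polarity and summing

# Toy demonstration: a 'voice' that only the front mic hears clearly,
# plus background noise shared equally by both mics.
fs = 48000
rng = np.random.default_rng(1)
background = 0.5 * rng.standard_normal(fs)
voice = np.sin(2 * np.pi * 220 * np.arange(fs) / fs)

front = voice + background       # close voice plus shared background
rear = background                # rear mic hears mostly the background
cleaned = cancel_background(front, rear)   # background cancels completely in this
                                           # idealised case; the voice remains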

But the Coles 4104 has another trick - it is very good at handling the pops and breath noise that you get when a mic is used close to the mouth. It's a design that other manufacturers might consider taking a look at.

Oh, and there's one more feature - this mic is insensitive at the sides. This means that two commentators can sit next to each other and leakage will be minimal.

You have already heard this mic on many occasions on TV. Even beyond the realms of sport it is useful for outside broadcasting in general.

As well as its useful features for its intended purpose, this mic has a characteristic sound all of its own. You won't find another microphone that sounds like it.

The sound is amazingly clean considering how close to the mouth it is used. You couldn't say that it is an accurate sound, but it's something that could be used in many contexts as a contrast to the standard mic sound.

There's another use for it in live sound. You know how you occasionally hear a song that features a distorted vocal, either all the way through or in segments? (Can we blame John Lennon for starting that?)

Well if you use a distortion effect on stage you will find that the high gain involved increases the risk of feedback significantly.

But if you use the Coles 4104 for this purpose, then since it rejects the sound coming from the speakers, it is very robust against feedback.

In summary, this mic is excellent for its intended purpose. But it also has an interesting sound that might find a place in your studio, or perhaps even live.

As they say on eBay - Happy Bidding!

Note: This auction is now closed. The winning bid was £217.

Publication date: Sunday March 22, 2009
Author: David Mellor, Course Director of Audio Masterclass