Howdy folks, it’s your wee pal Alex, back for more fun!
Here at Voquent, we generally do the audio editing in house and the voice actors who record projects with us will send their audio completely raw, with no processing whatsoever. This means the edit can be done from a clean start, and we’re not wasting time fixing bad edits.
However, there are a few tricks and principles voice actors can bear in mind to make life easier for the production engineer and anyone else involved in the project.
1. DO NOT ‘OVER-PROCESS’ with EQ or other effects
This is the main thing to avoid and contributes to a lot of wasted engineer time.
EQ (short for ‘equalizer’) is a tool that changes the tonal quality of audio. It works by applying filters that boost or cut certain frequency ranges. This can be used, for example, to remove low-end rumble, or to add some ‘air’ in the high frequencies and make a voice sound more crisp. Frequency is measured in hertz, so knowing which frequency to edit to achieve your desired result is vital.
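If you’re curious what’s actually under the hood, an EQ band is just a filter. Here’s a toy first-order high-pass filter in Python – the kind of thing used to remove low-end rumble. This is a sketch for intuition only (the `alpha` value is made up), not a production EQ:

```python
def high_pass(samples, alpha=0.9):
    """First-order high-pass: passes fast changes, rejects slow ones.
    `alpha` (between 0 and 1) sets the cutoff; closer to 1 = lower cutoff."""
    out = [samples[0]]
    for n in range(1, len(samples)):
        out.append(alpha * (out[-1] + samples[n] - samples[n - 1]))
    return out

# A constant signal (0 Hz 'rumble') is steadily removed...
dc = high_pass([1.0] * 20)
# ...while an alternating (high-frequency) signal comes through near full level.
ac = high_pass([1.0, -1.0] * 10)
```

Running this, the constant input decays towards zero while the alternating one keeps most of its level – which is exactly the ‘cut the lows, keep the highs’ behaviour a rumble filter is after.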
I will say it only once. If you are not experienced using EQ, just don’t use it!
An engineer would always rather have something with a little bit of rumble, or a little bit too much sibilance, than something that has had too much removed. It’s much harder to add it back in once it’s been taken out!
Also: NEVER BOOST FREQUENCIES WITH EQ.
A dubbing mixer may occasionally boost certain frequencies where it works to make a voice cut through sound effects or music, or to just improve the overall tone, but this will always be very gently done.
Unless you are mixing something for a final broadcast, do not boost anything. This is because you won’t know if it’s going to interfere with some other sounds that are yet to be added, such as music or sound effects.
Professional noise reduction plug-ins can also introduce extra noise and digital artifacts of their own. It is best to avoid them, or use them very sparingly – once audio has been over-processed, it is impossible for an engineer to fix.
Remember – somebody else can’t add in what you’ve already taken out, but they can usually take out what you’ve left in!
2. USE VOLUME AUTOMATION to adjust relative levels, rather than compression
The reason not to use a compressor is similar to the reasons not to use EQ. It’s quite tricky to get right!
A compressor is a tool that automatically reduces the level of any peaks that rise above a certain threshold (measured in decibels). How much they are reduced is determined by the compressor’s ‘ratio’.
You may have seen the word “automatic” in the previous sentence and thought “why would I not use that, it’ll save so much time!” but there is some skill involved.
Compressors work across all frequencies simultaneously. You may have a sibilant (an ‘ess’) that is a higher decibel level than the rest, but it is not perceived as louder because of the different frequencies making up the sound. It will be compressed regardless of the perceived loudness. There is a difference between perceived loudness and absolute loudness (amplitude).
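To make the threshold and ratio concrete, here’s a minimal sketch in Python of the gain calculation a basic compressor applies (the threshold and ratio values are just examples):

```python
def compress_db(level_db, threshold_db=-18.0, ratio=4.0):
    """Return the output level (in dBFS) a simple compressor would produce.

    Peaks below the threshold pass through unchanged; peaks above it
    have their overshoot divided by the ratio.
    """
    if level_db <= threshold_db:
        return level_db
    overshoot = level_db - threshold_db
    return threshold_db + overshoot / ratio

# A -6 dBFS peak with a -18 dBFS threshold and 4:1 ratio:
# 12 dB of overshoot becomes 3 dB, so the peak comes out at -15 dBFS.
print(compress_db(-6.0))   # -15.0
print(compress_db(-24.0))  # below threshold: unchanged, -24.0
```

Notice that the calculation only looks at level, never at frequency – which is exactly why a sibilant that merely *measures* loud gets squashed just as hard as a syllable that *sounds* loud.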
Instead, use volume automation to adjust the level of specific sounds that are too loud. Certain syllables, like plosives such as ‘P’, will not be loud enough to hit the compressor, but will have a noticeable ‘pop’. I will explain how to edit plosives later, but the general principle of using volume automation to manually adjust the levels of loud sounds is a useful rule to abide by.
If your audio software doesn’t have volume automation, like Audacity or some of the less expensive DAWs, then there is still a way to follow this procedure and manually edit the levels. What you need to do is select or ‘cut’ the bit of audio (for example, an ‘S’) and then go through whichever process your DAW requires to reduce the level or gain. Then, and this is vital, you need to crossfade the volume-reduced audio with the rest of the audio around it.
See tip #5 for more information on fades and crossfades.
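For the curious, the manual ‘reduce and crossfade’ procedure above boils down to scaling a slice of samples, with short ramps on either side so there’s no sudden jump. A rough Python sketch, using plain lists of samples and made-up numbers:

```python
def dip_gain(samples, start, end, gain=0.5, ramp=4):
    """Reduce a region [start, end) to `gain`, with short linear ramps
    on either side so there is no sudden level jump at the edit points."""
    out = list(samples)
    for i in range(start, end):
        if i < start + ramp:
            # Ramp from 1.0 down to `gain` over the first `ramp` samples...
            g = 1.0 + (gain - 1.0) * (i - start + 1) / ramp
        elif i >= end - ramp:
            # ...and back up to 1.0 over the last `ramp` samples.
            g = 1.0 + (gain - 1.0) * (end - i) / ramp
        else:
            g = gain
        out[i] = samples[i] * g
    return out

# Halve the level of a loud 'S' sitting at samples 4-11 of a toy clip:
loud_ess = [1.0] * 16
dipped = dip_gain(loud_ess, 4, 12, gain=0.5, ramp=2)
```

The middle of the region sits at half level, while the edges pass smoothly through intermediate values – the same shape a clip-gain dip draws in a DAW.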
3. MANUALLY REDUCE BREATHS, don’t remove them completely
This is similar to the previous point about using volume automation, but this time specifically to reduce breath sounds.
There are plug-ins made by companies such as iZotope that automatically reduce breaths below a certain threshold. This works similarly to a noise gate, which is often built into a compressor as a separate set of adjustable parameters.
However, the same arguments apply as with compression: relying on a plug-in to do it automatically can miss some breaths, accidentally cut out ‘breathy’ syllables and words (such as ‘H’ sounds) and not reduce breaths that are technically quiet, but have some other unpleasantness about them (clicky etc.).
The other important thing to remember is not to remove the breath completely – just reduce its level. Having total silence in between talking sounds very unnatural, which leads to my next point.
4. MASK SOUNDS YOU REMOVE WITH ROOM TONE, from elsewhere in the recording
No matter how good your setup is, every recording will have some background noise.
It’s unavoidable! Whether it’s the gentle sound of the air in the room, or the squeaking of the full latex suit you are wearing whilst you record, there will always be something.
If there are sounds you must remove, such as mic bumps, coughs, and mistakes, then you need to cover them up with something.
The thing about the real world is that it is never actually silent. Audio that jumps from talking (with its inevitable background noise, however low it may be) to total silence, sometimes only for a fraction of a second, will sound very unnatural.
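As a sketch of what ‘covering up’ means at the sample level, here’s a toy Python example that patches over an unwanted sound with room tone copied from elsewhere in the same take (in practice you’d also crossfade the joins, as tip #5 explains – all the numbers here are made up):

```python
def mask_with_room_tone(samples, start, end, tone_start):
    """Replace the region [start, end) with an equal-length slice of
    room tone copied from elsewhere in the same recording."""
    length = end - start
    out = list(samples)
    out[start:end] = samples[tone_start:tone_start + length]
    return out

# Toy clip: a cough at samples 2-3, quiet room tone from sample 5 onwards.
take = [0.0, 0.0, 0.9, 0.9, 0.0, 0.01, 0.01, 0.01]
cleaned = mask_with_room_tone(take, 2, 4, tone_start=5)
```

The key point is that the patch comes from *this* recording, so the background ‘air’ stays consistent instead of dropping to dead digital silence.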
5. USE FADE-INS AND FADE-OUTS on all edits
This is vitally important but it is something frequently missed.
Just like how complete silence is unnatural, a sudden “cliff edge” reduction in volume, or a sudden increase, sounds very strange to the ear.
There are slight differences in fading in and out depending on what edit you’ve made. It’s easier to demonstrate with screenshots, so here goes.
As you can see in the above image, the black line that represents the volume of each clip dips slightly. This is to reduce some background noise and breaths.
This particular screenshot is from Pro-Tools, and the way it looks here is specific to how Pro-Tools works. (Displaying the clip-gain line is very useful, Pro-Tools users!) However, the principle is well exemplified here.
There is a gentle slope into and out of the fade. This is vital to ensure there is no little ‘pop’ noise whenever you reduce the volume too quickly. It is much more natural sounding to make something ‘fade’ to get a bit quieter.
As I mentioned before, there is basically no such thing as true silence. Suddenly dropping the volume will ALWAYS result in a jarring ‘pop’ occurring in the middle of a sound, even if it’s very quiet. It is good practice to use fades in every situation.
The reason for using a crossfade in this example, rather than using the volume automation I mentioned in tip 2, is because we’re placing two separate audio files next to each other. This could be for several reasons, but it’s usually to remove a long gap, cut in an alternate take of a certain line, or to remove mistakes and unpleasant noises.
The advantage of the crossfade is that it fades out at the end of the first audio file, while simultaneously fading in the start of the next one. This ensures a seamless blend, which is particularly useful if the quality of the background noise changes slightly, or if there is a rustle or breath near the start of the next line.
Just bumping two bits of audio together without a crossfade will always lead to an audible ‘pop’ as the audio suddenly ‘jumps’. If you blow the waveform up to an enormous size, you will see a sudden disruption in the sound wave. The sound wave should be one continuous line; when you slam a new waveform up against it, the fluidity is interrupted and it behaves more like a square wave, which is not a sound you hear in nature. That’s basically fine if you’re making sick electronic music, but probably not so good for voice over, which should sound natural.
In the screenshot, the section with the red arrow pointing at it is half of a crossfade. This is just a simple fade out. It’s for similar reasons to using a crossfade: to prevent a ‘pop’ when the waveform suddenly stops.
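If you want to see what a crossfade is doing mathematically, here’s a minimal linear crossfade in Python (plain sample lists, with an arbitrary overlap length):

```python
def crossfade(a, b, overlap):
    """Join two clips with a linear crossfade `overlap` samples long:
    `a` fades out while `b` simultaneously fades in, so the waveform
    never jumps discontinuously."""
    out = a[:len(a) - overlap]
    for i in range(overlap):
        t = (i + 1) / overlap  # goes from just above 0 up to 1 across the overlap
        out.append(a[len(a) - overlap + i] * (1 - t) + b[i] * t)
    out.extend(b[overlap:])
    return out

# Toy example: a loud clip joined to a silent one over a 4-sample overlap.
clip_a = [1.0] * 6
clip_b = [0.0] * 6
joined = crossfade(clip_a, clip_b, overlap=4)
# The join ramps smoothly: 1.0, 1.0, 0.75, 0.5, 0.25, 0.0, ...
```

The smooth ramp through the overlap is the whole trick – at no point does the level (or the background noise underneath it) jump from one value to another.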
6. EDIT PLOSIVES AND ESSES MANUALLY
The reasoning for this is similar to de-breathing, especially for ‘S’ sounds and other sorts of hiss.
When you have a piercing sibilant, it is best to follow the exact same process as reducing the level of breaths – don’t reduce it too much, and ensure there is a fade in and fade out to the surrounding audio. Sometimes a very moderate EQ cut around the high frequencies can help reduce overall sibilance, but again, don’t overdo it.
A reduction of around 10dB is perceived as roughly a 50% drop in volume. If all of your esses are half as loud as everything else because you radically cut 7-10kHz from your entire recording, then it’s going to sound weird! Also remember that EQ will affect the entire audio. Every natural sound has a fundamental frequency, with harmonics at higher frequencies ‘above’ it. While the frequency that you have EQ’d out may be the fundamental (i.e. the ‘base’ frequency) of the ‘S’ that you are trying to reduce, it could be an important harmonic adding clarity to other sounds.
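The decibel-to-amplitude arithmetic behind this is worth seeing once. A quick worked example in Python, using the standard 20·log10 conversion for amplitude:

```python
import math

def db_to_amplitude(db):
    """Convert a gain in decibels to a linear amplitude factor."""
    return 10 ** (db / 20)

def amplitude_to_db(factor):
    """Convert a linear amplitude factor to decibels."""
    return 20 * math.log10(factor)

# -6 dB roughly halves the raw amplitude...
print(round(db_to_amplitude(-6), 3))   # ~0.501
# ...while perceived loudness halves at around -10 dB,
# which is why a big EQ cut sounds far more drastic than the numbers suggest.
print(round(db_to_amplitude(-10), 3))  # ~0.316
```

So a ‘mere’ 10dB cut at 7-10kHz leaves those frequencies at under a third of their original amplitude – very easy to overdo.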
It’s much better to manually go through and reduce the level of esses first, and apply EQ on either a very delicate overall basis or preferably an individual basis if you think you need to. Only do this if it is an extremely piercing sound. Generally reducing the level is best.
A general rule of thumb is:
- If there is sibilance all the way through and not just on the ‘S’ sounds in words, apply a light EQ cut at the appropriate frequency
- If the sibilance is only on the ‘S’ sounds in words, go through and manually reduce the volume level of the esses.
Another similar type of sound that needs specific manual editing is the plosive, which is generally the letter ‘P’ and sometimes ‘B’ or ‘D’.
‘T’ is technically a sibilant, and won’t be edited the same way as plosives – treat ‘T’ like an ‘S’.
It’s easier to show what an edited plosive ‘looks like’ in Pro-Tools with another screenshot:
See? Easy! I will explain what is happening here.
The fade-in at the start is the key to editing a plosive. More often than not that is all that will be needed.
When you say a ‘P’, a small puff of air is expelled. This is why pop-shields are recommended, to prevent most of this air from hitting the microphone at all.
However, not every ‘P’ will be caught by the pop-shield, and some editing will still be required.
Fading in any syllable will generally sound quite unnatural. If you compare the image above with the images of fades earlier in this article, you will see that the fade in for the plosive is much steeper and faster than crossfades and other fades. The above image is zoomed in quite far, so the fade is just a matter of milliseconds long here. This is because you still want to keep the essence of the plosive, so it recognisably sounds like a ‘P’, but just very quickly and gently remove the puff of air at the start.
It’s very easy to over-edit plosives this way, so follow the general rule: keep the fade in extremely short. If it starts to sound like a ‘B’ or sometimes even an ‘E’, you’ve gone too far, and you should just leave it alone entirely.
The second part of this edit is a tiny little dip in volume after the fade in. If you look at the clip gain line (the black line across the middle of the waveform), you will see this small piece of volume automation.
Some plosives have a low frequency ‘raspberry’ sounding noise after the initial puff of air. Rather than cutting it out (and using a crossfade, as mentioned earlier!), it is best to just slightly dip the volume of these unpleasant sounds, taking care to keep the slope in and out gentle.
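Putting the two halves of the plosive edit together – the very short fade-in plus the gentle dip – here’s a toy Python sketch (all the lengths and gain values are made up for illustration):

```python
def edit_plosive(samples, fade_len=3, dip_start=None, dip_end=None, dip_gain=0.6):
    """Sketch of the plosive edit described above: a very short linear
    fade-in to tame the puff of air, plus an optional gentle volume dip
    over any low 'raspberry' noise that follows."""
    out = list(samples)
    # Steep, milliseconds-long fade-in over the first few samples only.
    for i in range(min(fade_len, len(out))):
        out[i] *= (i + 1) / fade_len
    # Gentle dip over the raspberry region - a reduction, not a cut.
    if dip_start is not None:
        for i in range(dip_start, dip_end):
            out[i] *= dip_gain
    return out

# Toy 'P': a loud puff at the start, a low rumble after it.
p_sound = [0.9, 0.9, 0.9, 0.5, 0.5, 0.5]
edited = edit_plosive(p_sound, fade_len=3, dip_start=3, dip_end=6, dip_gain=0.5)
```

Note how short the fade is relative to the clip – stretch it much further and, as above, your ‘P’ starts turning into a ‘B’.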
The principles of good audio editing all overlap!
7. GET YOUR MIC TECHNIQUE AND SETUP RIGHT
This is the simplest one – just record it right in the first place and you won’t have to do any editing at all.
- Ensure you have a microphone that sounds good for your voice, and you won’t need to EQ.
- Turn down the gain on your interface, project your voice, and get a nice quiet room, and you won’t have to use any noise reduction plug-ins.
- Practice your microphone technique and vocal control, and you won’t have to compress or use volume automation on your recording.
- Get your breaths nice and quiet and you won’t need to edit them at all.
- Do as many takes as you need until you get one perfect take, and you won’t need to mask any edits with room tone or use crossfades to join edits together.
- Get a decent pop-shield and stand further away from your microphone and you won’t need to worry about editing plosives.
- Drink plenty of water and train your voice well and you won’t need to even reduce esses!
TOP RECORDING TIP: Everybody makes mistakes in a recording, particularly in long-form content like audiobooks. It can be a pain listening through the WHOLE thing to remove mistakes. A quick and easy solution to this is to either clap, or even use a dog training clicker, to indicate when you have made a mistake! This will show up as a sharp ‘peak’ on the waveform when you are editing your audio, so you can edit entirely visually and not need to listen to that particular bit at all. This enormously speeds things up!
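The clap trick works because a clap is far louder than speech, so finding your markers is just a peak scan. A toy Python sketch with a made-up threshold:

```python
def find_markers(samples, threshold=0.95):
    """Return the indices of samples louder than `threshold` -
    on real audio, these spikes are your clap/clicker markers."""
    return [i for i, s in enumerate(samples) if abs(s) > threshold]

# Quiet speech with two clap spikes at positions 3 and 7:
recording = [0.1, 0.2, 0.1, 0.99, 0.1, 0.2, 0.1, 0.98, 0.1]
print(find_markers(recording))  # [3, 7]
```

This is the same thing your eye does when scanning the waveform for spikes – the editing really can be done entirely visually.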
The best cure is prevention. If you do need to edit, follow these basic tips (numbers 4 and 5 are probably the most important!) and your clients and audio engineers should have a great time working with you 😊
More useful articles by Alex:
Everything you need to deck out your home studio professionally, whatever your budget.