Voice Acting For Dummies


by David Ciccarelli


  Creating ambiance

  You may want to include other sounds in your audio production that give listeners the feeling that they're in another location. Ambiance consists of the background sounds in a recording environment that create a feeling of place. Just as a beautifully crafted, descriptive sentence transports your imagination to another time and place, you can create this type of ambiance in your recordings.

  For example, a voice-over that’s recorded at a baseball game has an ambiance with cheering fans, vendors selling treats, or the crack of a bat hitting a ball. These are production decisions made after you’ve completed the recording and you’re looking to embellish it through sound design.

  Using sound effects

  Recording intelligible dialogue is the number one priority of your audio recording. Adding sound effects should augment the original recording. A few examples would be doors slamming, cars passing by, or birds singing in the trees. The purpose of sound effects is to create the illusion that the auditory environment is real, rather than fabricated.

  Placing an emphasis on selected sounds can create tension, atmosphere, and emotion in your recording. It can also impart personality to a demo. Sound effects can exaggerate or diminish listeners' perception of a voice actor and the characters being portrayed. Clocks ticking can make a character sound busy or impatient; whistling can make a character sound relaxed or free-spirited. Carnival noises can make a character sound silly.

  Sound effects fall into two main categories:

  Specific sound effects: An element with a specific hit point such as a door slamming or car crashing

  Background sound effects: Ambiance, birds, traffic, air conditioner, machinery, and so on

  Sound effects come from many sources:

  Production reels

  Commercial libraries

  Your own library

  Synthesizers and samplers

  Location recording

  Foley studio

  Mixing Your Voice-Over

  Mixing is balance engineering. The mix is where art meets technology. In the mix, not only do you balance the volume of the various tracks, but you also place them in space by positioning them between the left and right speakers. This placement technique is known as panning.

  Balancing the width or pan of your mix doesn’t have to be constant. Consider the listener’s perspective for stereo instruments, such as a piano. When listening to a piano, you hear the higher notes coming out of the left speaker and the lower notes out of the right speaker. If you’re going to get creative with this type of balancing, remember to maintain either the audience perspective (what the instrument would sound like if you were sitting in the audience) or the player perspective (what the instrument would sound like if you were the one playing).
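  This left/right placement can be sketched with a constant-power pan law in Python. The function name and the -1.0 to +1.0 pan convention here are illustrative assumptions, not something from the book:

```python
import math

def pan_gains(pan: float) -> tuple[float, float]:
    """Constant-power pan law: -1.0 = hard left, 0.0 = center, +1.0 = hard right.

    Returns (left_gain, right_gain). The total acoustic power stays
    constant no matter where the track sits in the stereo field.
    """
    angle = (pan + 1.0) * math.pi / 4.0  # map [-1, 1] onto [0, pi/2]
    return math.cos(angle), math.sin(angle)

# A centered voice-over gets about 0.707 (-3 dB) in each speaker,
# so it sounds as loud as a track panned hard to one side.
left, right = pan_gains(0.0)
```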

  Voice-overs are almost always centered. Because the voice is recorded as a single track, place it in the center rather than panning it left or right.

  Keep all lead instruments or vocals in the center for proper balance. The bottom line is that creating consistency and predictability gives your listener a sense of comfort.

  Close your eyes when listening to your mix. Focus on just what you’re hearing. After sitting in a chair for a long time, you become accustomed to hearing your mix from that position. Get up and take a step back, even to the back of the room. Notice how it sounds different. Make the small adjustments you need to so everything sounds well-balanced.

  In this section, you discover how mixing your audio, be it an audition, a demo, or even a big project for your newest client, involves balancing the voice-over, music, and sound effect tracks for optimal clarity and impact.

  Planning your mix

  The best way to start your mix is to think ahead with the end goal in mind. You most likely have a vision of what you want your finished recording to sound like. Your goal is to maintain that sound from the beginning to the end of your production.

  Building your mix

  Start by setting your lead vocal volume to a good level. Because your voice is the central focus, all other elements in the mix are secondary. Gradually adjust the volume faders until all the elements within your demo are set at appropriate levels. This is called a static mix. Keep the focus on your voice. Your voice should be the loudest and clearest element of your mix.
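  The fader math behind a static mix is simple decibel-to-gain conversion; here's a small Python sketch (the function name is mine, not standard software terminology):

```python
def db_to_gain(db: float) -> float:
    """Convert a fader setting in decibels to a linear amplitude multiplier."""
    return 10 ** (db / 20)

# 0 dB leaves a track untouched; pulling a fader down to -6 dB
# roughly halves the amplitude of that track.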

  Enhance the sonic quality of your recording by isolating individual tracks using the solo button. Pressing Solo mutes all other tracks, allowing you to hear only the track you've designated. Two special effects that you can apply to either a selected region of audio or to your entire production are covered in this section.

  Volume faders

  Volume faders control loudness. Each track in your mix has its own fader that controls that track's volume level.

  Fade in, fade out

  When you wrote your voice-over script, you left room for mixing. In fading in and fading out, you can mix in your intro, the section that announces your name, and the kind of demo that you’re voicing. It’s most effective if this is the voice of an announcer, distinct from your own. Some professionals prefer to introduce themselves while others employ an actor of the opposite gender to record their intros.

  For example, if you're producing a podcast, you can try fading your background music in when a new segment begins. Lower the volume level of the music when you're speaking so that your listeners can hear every word you say. At the end of the segment, fade your background music out. Use musical transitions between the various segments of your recording. These musical transitions are known as bumpers, stingers, or sweepers.

  Getting Acquainted with Production Techniques and Tools

  Any device that processes audio can be considered a signal processor. The elements that can be processed are frequency, amplitude, pitch, time, and phase. In some recording studios, you'll see racks of audio processing equipment. Why so much equipment? Many pieces of audio equipment are dedicated signal processors, meaning they can adjust only one element of sound, but they do so with a lot of precision and control. Multi-signal processors let you manipulate two or more elements.

  Nowadays, you don’t have to invest in an entire room full of audio equipment or have a dozen dedicated signal processors. Most audio recording software programs have a standard set of signal processing capabilities. Plus, you can often add on other functionality by installing a plug-in.

  Regardless of whether you’re creating a studio with a lot of signal processing hardware or whether you’re taking the simplified route and sticking with software exclusively, you can achieve amazing results.

  Frequencies

  Frequency is the number of sound wave cycles per second, measured in hertz (Hz). Check out Table 19-1 for a chart of frequency ranges.

  Table 19-1  Frequency Ranges and Their Characteristics

  Characteristic     Frequency Range
  Very Low           20 Hz–200 Hz
  Low Middle         200 Hz–1,000 Hz
  High Middle        1,000 Hz–5,000 Hz
  High               5,000 Hz–16,000 Hz
  Very High          16,000 Hz–20,000 Hz
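  If you want to experiment with these ranges programmatically, the boundaries from Table 19-1 can be expressed as a small Python lookup (the function and variable names are illustrative):

```python
# Frequency ranges and their characteristics, per Table 19-1.
RANGES = [
    ("Very Low", 20, 200),
    ("Low Middle", 200, 1000),
    ("High Middle", 1000, 5000),
    ("High", 5000, 16000),
    ("Very High", 16000, 20000),
]

def classify(freq_hz: float) -> str:
    """Return the characteristic name for a frequency in hertz."""
    for name, low, high in RANGES:
        if low <= freq_hz < high:
            return name
    return "outside audible range"
```

  For example, a concert-pitch A at 440 Hz lands in the Low Middle range, while sibilant "s" sounds live up in the High range.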

  Audio bandwidth

  You can pack a lot of sonic information in the mid-range (from 800 Hz to 4000 Hz), because that’s where the human ear is most sensitive.

  Controlling frequencies

  An equalizer is any device, hardware or software, whose primary function is to modify the frequency response of the audio. Equalization is the process of altering the frequency response of a signal so certain frequencies become more or less pronounced than others. Equalizers give you control over the harmonic and inharmonic partial content.

  Equalizers can be considered musical. For example, the low "E" on a keyboard is approximately 82 Hz. A tuning fork used to tune instruments in an orchestra rings an "A" note at exactly 440 Hz. Think of how the concertmaster tunes before the orchestra begins to play: the pitch the violinist plays is A440.

  Equalization was originally invented to compensate for signal loss in telephones and radio, when hearing the person speaking on the other end was difficult. Boosting the signal around the 1000 Hz mark made the human voice clearer and more audible. This is why your friends and family all sound similar over the phone; the frequencies have been boosted in a way that makes speech easier to hear.

  Filters, the building blocks of equalization (EQ), are used to increase or decrease the volume level in a specific range of audio frequencies. The most common filters are the simple bass and treble controls found on inexpensive stereo systems, which act on a broad range of frequencies. Other, more sophisticated filters are designed to surgically boost or cut very narrow bands of the audio spectrum.

  When it comes to mixing, a general rule is to cut rather than boost frequencies wherever possible. Cutting undesired sounds is less obtrusive, whereas boosting too much can make a track too loud and lead to digital distortion when encoding. The end result is a clearer-sounding audio recording.

  Graphic equalizers

  Professional and consumer graphic equalizers visually display the EQ curve on a screen. When you adjust a band on a graphic equalizer, you're adjusting its center frequency. Professional units typically offer around 31 bands, one per third of an octave.

  Roll-off filters

  Roll-off filters cut out all the frequencies above or below the cut-off point. Filters are often used in live sound, but also have their place in the recording studio. You filter out unwanted sounds by sorting out the frequencies that make up the sounds. Essentially, you’re deciding which frequencies you don’t want to hear and are eliminating them.

  A high-frequency (HF) roll-off removes the frequencies above the cut-off point; a low-frequency (LF) roll-off removes the frequencies below it. The cut-off point, known as the cutoff frequency, is the point where the output has dropped 3 decibels (dB) below maximum. The cutoff frequency is sometimes called the break point or turnover frequency.

  You may use an HF roll-off filter to eliminate an unwanted high-pitched hiss. On the low end, you may use an LF roll-off filter to eliminate rumbling street noise. Setting the roll-off filter at 80 Hz is likely going to get rid of a lot of that rumbling you hear in an amateur recording.
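  To see how an LF roll-off works under the hood, here's a minimal first-order high-pass filter in Python. It rolls off at only 6 dB per octave, far gentler than a dedicated studio roll-off, and the names are illustrative:

```python
import math

def highpass(samples, cutoff_hz, sample_rate):
    """First-order high-pass filter: a gentle low-frequency roll-off.

    Attenuates content below cutoff_hz at 6 dB per octave. A sketch of
    the concept, not a studio-grade filter.
    """
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    out = [samples[0]]
    for i in range(1, len(samples)):
        # Each output sample keeps the fast changes (high frequencies)
        # and lets slow drifts (low frequencies) decay away.
        out.append(alpha * (out[-1] + samples[i] - samples[i - 1]))
    return out
```

  Feed it a constant signal, the 0 Hz equivalent of low-end rumble, and the filter drives it toward silence while leaving rapid changes largely intact.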

  Be conservative when using roll-off filters, because they're blunt instruments that all but eliminate everything above or below the frequency of choice. Cut too much low end, and your audio production sounds like it's missing its floor or has no weight to it. On the other end of the spectrum, cut too much high end, and your recording sounds like it doesn't have any air to breathe.

  The recommendation, then, is that if you're going to roll off low frequencies, never set the cutoff any higher than 160 Hz. On the high end, you can usually roll off everything above 16,000 Hz, sometimes written as 16 kilohertz (kHz), without really noticing.

  Shelving filters

  As the simplest form of filtering, shelving increases or decreases all frequencies above or below a fixed frequency. A bass (low) shelving filter increases or decreases everything below its corner frequency, the cutoff frequency. Likewise, a treble (high) shelving filter increases or decreases everything above its corner frequency. A single control typically adjusts the amount of amplification (increase) or attenuation (decrease), also known as boost or cut.

  Say that you renovated your kitchen and have just put all the dishes away. The plates are in their place and the cups too. Then you realize that you want more room for your plates and you’ll need to raise the shelf of your cups. With your cups still on the shelf, you raise the entire shelf at once, which raises all the cups too. That’s similar to how the shelving equalization method works. All cups, I mean, frequencies are affected and raised by the same amount.

  If you increase a shelving filter by 15 dB at the 10 kHz mark, you cause every other frequency above 10 kHz to also be amplified by 15 dB.

  Because you’re using a shelving filter, you’ve raised all the frequencies at once. These filters are useful for making broad changes like reducing boomy bass and wind noise. The output of your recording software can easily be overloaded by too much bass or treble, so it is recommended that you use these filters to cut or decrease high and low frequencies to prevent digital distortion.

  Bandpass filters

  Bandpass filters can be used to increase or decrease audio on both sides of a center frequency. Bandpass filters are commonly used as mid-range filters, because they have little effect on either high or low frequencies. The familiar graphic equalizer is just a set of bandpass filters tuned to different center frequencies.

  More sophisticated versions, called sweepable bandpass filters, have an additional control allowing you to change the center frequency. Bandpass filters are useful for increasing the intelligibility of a speaker without increasing hiss or background noise. A variation of the bandpass filter is the notch filter, which cuts a very narrow band of frequencies around the center frequency while leaving everything else unchanged.

  Parametric filters

  Think of a parametric filter as a surgical editing tool for very precise equalization adjustments. A parametric filter is a bandpass filter with an additional control to adjust the width of the frequency band being affected.

  The peak (bell) curve of a parametric filter is created when the center frequency is amplified (increased) or attenuated (decreased). Three factors make up the bell curve:

  Amplitude: The amount of boost or cut to the frequency

  Center frequency: The specific frequency that you select

  Quality factor: Sometimes just labeled “Q,” the ratio of center frequency to bandwidth; the higher the Q, the narrower the bandwidth
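  The quality factor is just a ratio, which a one-line Python function makes concrete (the function name is illustrative):

```python
def q_factor(center_hz: float, bandwidth_hz: float) -> float:
    """Quality factor (Q) of a parametric band: center frequency / bandwidth.

    The higher the Q, the narrower (more surgical) the bell curve.
    """
    return center_hz / bandwidth_hz

# A band centered at 1,000 Hz that is 200 Hz wide has a Q of 5,
# a fairly narrow, surgical adjustment. Widen the band to 1,000 Hz
# and the Q drops to 1, a broad, gentle curve.
```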

  Sibilance is a harsh, high-frequency hiss that's accentuated by high-frequency peaks. Sibilance can occur when you say words that begin with an "s." It's more common among female voice actors because their voices sit in a higher frequency range. To avoid it, move yourself off-axis, meaning that you're not directly in front of the microphone, but rather off to the side slightly.

  Reduce dynamic ranges

  A compressor’s basic function is to reduce the dynamic range of an audio recording, which is the difference between the loudest and softest sounds in a recording. By reducing the volume of the loudest sounds, a compressor lets you raise the level of the entire audio track, making it all sound louder than it originally was. Compression can be a big help in achieving intelligible audio tracks with a more uniform volume that will sound great on any stereo system.

  A compressor consists of a level detector that measures the incoming signal, and an amplifier whose gain is controlled by the level detector. A threshold control sets the level at which compression begins. Below the threshold, the compressor acts like a straight piece of wire. But when the input level reaches the threshold, the compressor begins reducing its output level by an amount determined by the ratio control.

  The ratio control establishes the proportion of change between the input and output levels. With a 2:1 compression ratio, for every 2 dB the input rises above the threshold, the output rises only 1 dB. If you set the ratio to its maximum (10:1 or more), the compressor becomes a "limiter" that locks the maximum level at the threshold. While a compressor can level out a recording, high levels of compression can also introduce artifacts, including "pumping," an audible up-and-down change in the volume of a track, and "breathing," which sounds like someone inhaling and exhaling as the background noise level rises and falls.
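  The threshold-and-ratio behavior can be sketched as a static compressor curve in Python (the function name and dB convention are illustrative):

```python
def compress_db(input_db: float, threshold_db: float, ratio: float) -> float:
    """Static compressor curve, working in decibels.

    Below the threshold the signal passes unchanged (the "straight
    piece of wire"); above it, each dB of input yields only
    1/ratio dB of output.
    """
    if input_db <= threshold_db:
        return input_db
    return threshold_db + (input_db - threshold_db) / ratio
```

  With a -20 dB threshold and a 2:1 ratio, a -10 dB peak (10 dB over the threshold) comes out at -15 dB, only 5 dB over. Crank the ratio to 1000:1 and the output barely exceeds the threshold at all, which is limiter behavior.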

  After compressing the dynamic range, you can normalize the audio so that its loudest point sits at the maximum level. The overall signal level is then higher, which makes for clearer audio and also reduces encoding distortion. The only downside of normalizing is that it raises the noise floor along with the audio signal, so use it carefully. It should be your last step before exporting your finished production, and you may not need it at all.
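  Peak normalization itself is a single gain calculation; here's a minimal Python sketch (note in the code that the same gain scales the noise floor too):

```python
def normalize(samples, target_peak=1.0):
    """Scale a track so its loudest sample hits target_peak.

    Caution: the same gain is applied to the background noise,
    so clean up the recording before normalizing.
    """
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return list(samples)  # silence stays silent
    gain = target_peak / peak
    return [s * gain for s in samples]
```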

  Increase dynamic ranges

  As the level of the audio signal gets louder, the expander's amplifier turns up further, making loud signals even louder. An expander can also be used to reduce noise in a process called downward expansion. In this case, you set the threshold just above the level of the background noise. The expander then lowers the volume of everything below the threshold and leaves everything above it unchanged, thereby reducing the perceived background noise.
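  Downward expansion is the mirror image of compression; here's a minimal Python sketch of the static curve (the function name and dB convention are illustrative):

```python
def downward_expand_db(input_db: float, threshold_db: float, ratio: float) -> float:
    """Static downward-expander curve, working in decibels.

    Signal at or above the threshold passes unchanged; signal below
    it is pushed further down by the ratio, lowering the perceived
    noise floor.
    """
    if input_db >= threshold_db:
        return input_db
    return threshold_db + (input_db - threshold_db) * ratio
```

  With the threshold set just above the noise at -50 dB and a 2:1 ratio, a -60 dB hiss (10 dB under the threshold) drops to -70 dB, while a -40 dB voice passes through untouched.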

  Creating space with reverb and delay

  When mixing, you can give the listener a feeling that the sound is present or distant through the use of time delays. For example, a sound is perceived as more present if there are no time delays, no echo, and no reverberations. Alternatively, a sound is perceived as more distant if there's a lot of echo. A sound that echoes appears to be coming from far away.

  Echo

  An echo is an example of sound being altered in time. The sound travels from your mouth and slaps up against the wall (of the canyon or building) and then returns to your ear. At a canyon, you may even hear multiple echoes because of the time it takes for the sound to bounce off each side of the canyon.

  Reverb

  A more subtle example of the time effect is how an instrument sounds in a music hall or amphitheatre. The big open space with tall walls is designed to bounce the sound around and then out to the audience.

 
