It’s always fun to talk about how new tools and innovations can create efficiencies, but many sound and picture editors don’t take advantage of the capabilities that are literally built into the solutions they use on a daily basis. A prime example of this is the audio toolset built into Premiere Pro, which editors either don’t use or, in some cases, don’t even realize is right there. What does it mean for an editor to take full advantage of the tools they have right in front of them?
To explore what these capabilities look like and how they should both evolve and be leveraged, PVC experts Scott Simmons and Woody Woodhall connected with Durin Gleaves from Adobe. Scott is an editor in Nashville, Tennessee, which often has him doing all the audio mixing for a given project, which isn’t ideal but is the reality of those production budgets. Woody is the owner of Allied Post Audio in Santa Monica, California, where he serves as the main rerecording mixer, supervising sound editor, and sound designer. He posts feature films, feature documentaries, and a lot of television programming, and he also runs the Los Angeles Post Production Group (LAPPG), a thriving networking community for post production. Durin Gleaves is the Product Manager of Audio at Adobe. He arrived at the company shortly after Adobe acquired the application that became Audition, which became his main focus, but over the last few years he’s helped define audio workflows across multiple applications in Adobe’s digital video and audio segment.
Their conversation was so big that we broke it up into three parts. They touch on everything from the logistics of working with AAF and OMF files to how Audition and Premiere work together to the potential of automated audio extraction to defining what it truly means for editors to “edit your way.”
Part 1 – Part 2 – Part 3
Scott Simmons: You mentioned a couple of specific audio terms that not a lot of editors are familiar with and may not really want to learn about, which is why I wanted to talk through my approach as an editor working in Premiere and dealing with the audio.
As an editor, I think in terms of the visuals, but I’m always looking at the different aspects of the audio that I’m using in any project. I do a lot of corporate work, which often means interviews and company music, as well as a lot of natural sound. As I’m building an edit, I’m always thinking about the relationship between these three things. That’s what it takes to get the viewer to feel and respond to what’s going on, because the interviews, the music and those sounds need to connect with what they see on the screen. So I’m always thinking about how these things work together. Where do I have a particular hit or beat in the music that we can maybe use to punctuate something that we’re seeing on screen?
As I’m doing all of that, keeping everything organized in the Premiere timeline makes editing those things easy. When you look down at the timeline, it’s nice to know that all of my interviews are on tracks one through four, my nat sound is on five and six, and my music is on seven through twelve. That’s something that’s always worth thinking about as an editor starts to build up their edit.
When you’re thinking about the capabilities built into Premiere that can enable this kind of approach, are you focused on enabling certain things that you’d otherwise only get from a certain plug-in? Or is the idea to have everything internally in Premiere that one would need to do mixes at a certain level? Where do you see Premiere or Audition’s capabilities ending, or is that even something you’re thinking about when designing these audio capabilities?
Durin Gleaves: Figuring out where we draw the line between what we put engineering resources into versus what we leave to third parties is a lot of why I have a job. A lot of what I do is figure out where and how those lines are drawn.
There are specific effects and workflows that are just going to be required. If somebody was on a desert island and all they had was Premiere, these are the things they’re going to need to get done. Beyond that, a lot of it comes down to taste. There are a number of different ways to reduce noise or reverb in Premiere. For a lot of media those work great, because they’re really good for certain kinds of noise, certain kinds of echo, etc. But there may be plug-ins available from other parties that are more suitable for different recording situations or just different problems. It may not make sense for Premiere to spend a lot of time building something that’s really only going to be suitable for a handful of users when a third party has already developed a plug-in that’s affordable and does the job.
Scott Simmons: Going back to that concept of “edit your way,” as someone who knows Premiere really well and likes multiple ways to do a single task, I love to be able to edit my own way. But it may be that editing your way is too complex for some people. So if there are seven ways to change the volume of a clip in the timeline, maybe that’s too many. So is that concept of “edit your way” more of a creative thought or technical concept? Or is it trying to find that happy marriage between the creative and technical?
Durin Gleaves: The audio functionality that’s in Premiere is actually really deep. It’s effectively a digital audio workstation under the hood. The biggest problem that we’re working through right now is related to Premiere being designed with picture first in mind. So the audio functionality is sometimes obscured behind UI and UX or just scattered in different places. At the time it was implemented, audio aspects in Premiere weren’t given as much weight as color features, for example.
That’s related to one of the tasks I’m facing right now, which is about re-envisioning or redesigning workflows and the UI itself to make all of that functionality more accessible. As an example, I think we counted seven to ten different ways of changing the gain of a clip in the timeline. That’s great if you want flexibility, but it’s really hard to tell someone this is the way they should adjust the gain of a clip and make it very simple, straightforward, and easy to learn. We want to simplify as much of that as we can without losing any of that functionality in the background.
Scott Simmons: You mentioned the many ways to adjust the gain on a clip, and experienced editors like that. They like the choice to be able to work their way. However, that’s somewhat contradictory to making things easier.
With audio mixing, you have clip mixing and track mixing, but you also have a clip mixer and a track mixer. When you put them up side by side, they look very similar, and that can be a bit confusing as you’re learning how to mix audio. I’ve always counted that as a plus because it gives you multiple levels at which you’re able to mix audio. One thing about mixing audio in Premiere that I have always appreciated is that it allows you to start from a really strong place. For a while, no other NLE could do what Premiere was doing on the audio front. The ability to have both clip-based mixing and track-based mixing was a real differentiator.
All of that said, you mentioned making Premiere more approachable and simple, so how do you look at something like a clip mixer and a track mixer to keep that functionality but also make it simpler and less confusing?
Durin Gleaves: These are the distinctions that we want editors to understand and explore.
Coming from an audio background myself, I know that audio editors tend to think of things like effects in terms of tracks. It’s a very audio-centric approach to applying effects and tools. Video editors, on the other hand, usually come from a clip-based approach, which is less about what they can put on a track to affect everything that’s on it. That’s partly why, several years back, we took that approach with audio effects inside Premiere and Audition, so that you could have the best of both worlds. You could do one or the other or both.
Certainly, there are times when you want certain effects that are only going to affect the clip, maybe noise reduction, and you want that noise reduction effect to follow wherever you happen to move that clip. If you move it up and down different tracks, if you put it further down the timeline, you want that effect and the automation that’s associated with that particular snippet of audio to follow it wherever it goes.
There are also instances where you want whatever happens to be on a track to have some effect applied. Maybe you have an echo because it’s a shot that’s in a cave, but all your voiceover was done in the studio. So as you’re rearranging and moving things around and adjusting the VO to figure out what that narrative story is, you don’t want the effects to change. You want them to continue on. That’s where the track versus clip option comes into play. That gives you flexibility and control.
With interfaces like the clip mixer versus the track mixer, you’re right. All of a sudden you’ve got two different mixers and they look pretty similar. They behave similarly, but there are weird differences between the two just because of the nature of what they’re doing.
That’s why I’d love to work with our design team to really explore what we could do if we re-approached that setup. Could we re-envision how those work without losing functionality? Could we keep everything within the panels but make it easier and more intuitive, so that we wouldn’t have these two similar panels that do different things? Could we find a way to simplify that experience and that workflow but still make it accessible for somebody who doesn’t have a strong opinion about track versus clip effects?
Scott Simmons: I’ll give you some unsolicited feedback about all of that. And it ties into a discussion/argument I’ve had with a number of Pro Tools audio mixers over the years.
You’re right that editors think on a clip basis: I’m going to raise the gain and do my rubber banding and keyframing on the clip rather than on the track. But what many people don’t realize is that in Premiere, you can turn on track-based keyframing and rubber banding. In Premiere, if you hover your mouse over the audio keyframe line on a clip, you can drag it up to turn the clip up, or drag it down to turn it down. You can put keyframes in there and ramp the volume or the gain up and down.
You can do that on the track as well, with a little more complexity, by digging into the track control panel. You can keyframe the volume and the gain on the track, and all of this is reflected in the track mixer. The problem with that is that you never lock picture, so once you start to shift clips around, any volume keyframing you’ve done on a track is null and void at that point.
Now, professional mixers have worked this way for years, but it’s not something that editors do. So here’s my unsolicited suggestion: I would love a clip mixer because that would let me quickly move up and down, grab the sliders, and see where things are going. I could type in specific numbers to raise the gain up or down. But in the track world, I would love to have some sort of panel where I can just apply an effect only to the track, one that has nothing to do with raising the level or the gain up and down on the track itself. I don’t need to be able to automate the track with read, latch, touch and write, the different types of keyframing on the track. The reason is that I believe the large majority of editors doing their mixing in Premiere never do much actual track mixing or keyframing of track gain. I think that is so underused by video editors because we’re never locking picture. It’s just not the way that we think.
Woody, I’ll toss that to you. Tell me where I’m wrong, because Pro Tools mixers mix at the track level all the time, so why don’t video editors do this? Because we just don’t.
Woody Woodhall: Well, I’m not sure about that!
What I’ll say is that one of the major things that has changed with the acceptance and prevalence of clip gain in a lot of NLEs is that before, if a track was recorded very low, I used to see editors make four copies of it to make it, in their mind, four times louder. As a sound person, I have completely different tools and thoughts about how to change the gain on that low recording. I would throw away the three extras and figure out how to make the one sound great.
Now though, I get “mixes,” and that’s very much in quotes. I get mixes that are pretty much just clip-gained throughout. They’re not using the built-in mixer at all, so everything runs at unity gain; in other words, the faders are parked at zero. The entire show has been mixed by gaining clips.
Scott Simmons: But hold on for one second, because I wanted to say that they are using the mixer, because whenever you adjust your rubber banding on a clip in the timeline, the clip mixer fader will rise in conjunction with that. So I think that maybe that’s where this bit of confusion comes in for new people coming to Premiere, because they see the clip mixer and the track mixer and they look so similar. I think that’s what Durin is talking about. It’s a struggle to try to figure out how to simplify this but still have the same power and functionality.
Woody Woodhall: Gotcha. So, that’s probably why we have this discrepancy because I don’t use Premiere in that way.
In Pro Tools, clip gain is just something that you can apply right on the clip. There’s no additional window for it or anything. So when I open a session in Pro Tools, a clip might say 14 dB on it, and what that means to me is that no matter what level the clip was originally recorded at, the editor has now added 14 dB of gain, and I can quickly see that throughout, clip by clip.
Scott Simmons: Is that a bad thing?
Woody Woodhall: Not necessarily, but for good or bad, we mixers will typically blow all that stuff out. I basically bring all the clips back to unity gain, or in other words their original recorded levels. I’ll look at a timeline and I’ll see that an editor has pulled the clip gain of all the music tracks down 26 dB. That makes sense, because music is usually mixed as close to zero as possible, so it needs to be lowered to be properly mixed against the dialog. With very low recorded clips, they may have added a crazy amount of gain, but I don’t hold picture editors too much to account for their temp mixes or any of that stuff. That’s because I understand that they have their own schedules and deadlines, and they’re just trying as best they can to get something where the producer will go, “okay good, Woody will fix it.”
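As an aside on the numbers Woody mentions: decibels are a logarithmic scale, so a clip-gain value maps to a linear amplitude multiplier via 10^(dB/20). Here is a minimal Python sketch of that standard conversion; it isn’t tied to Premiere or Pro Tools, and the figures used are just the ones that come up in this conversation.

```python
import math

def db_to_linear(db: float) -> float:
    """Convert a decibel gain change to a linear amplitude multiplier."""
    return 10 ** (db / 20)

def linear_to_db(multiplier: float) -> float:
    """Convert a linear amplitude multiplier to a decibel change."""
    return 20 * math.log10(multiplier)

# +14 dB of clip gain is roughly a 5x boost in amplitude.
print(round(db_to_linear(14), 2))    # ~5.01

# Pulling music down 26 dB leaves about 5% of its original amplitude.
print(round(db_to_linear(-26), 3))   # ~0.05

# Stacking four identical copies of a clip sums to 4x the amplitude,
# which is only a +12 dB change, not "four times louder" perceptually.
print(round(linear_to_db(4), 1))     # 12.0
```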
Scott Simmons: Yes, and in defense of the picture editors, they still have to take low-recorded audio and make it hearable for the people that are approving the show. There’s just no way around that.
In Premiere, there are multiple ways to make that happen, the easiest of which is just dragging the clip gain as high as it can go. That’s the simplest way to do it. I would even suggest that there are many editors in Premiere who only mix by grabbing the rubber band on the clip in the timeline and turning it up or down. They never actually open the clip mixer and see that there’s another level of capability and minutiae that’s really useful once you go into it.
Those options are important because regardless of someone’s skill level or preference, they allow you to get stuff done quicker. So maybe you can try something different, or experiment with an idea, or just get home at a more reasonable hour. All of which are good things.
Part 1 – Part 2 – Part 3