I’ve used NLEs before for short, basic projects I’ve shot myself, where I control all the original audio and video sources, so I’m competent with the basics of how NLEs work. But I’m utterly mystified as to how the better faneditors shorten or lengthen a scene and have the music and atmospheric sounds flow seamlessly. I imagine it only appears seamless, but music-wise it has to be pretty much perfect; with atmospheric sounds I’m sure there’s more leeway. Dialogue starts and stops cleanly enough that, having edited before, I’m not at all confused about how that part would be done.
So really my main question is about music. Say the original scene runs from 1:00 to 2:00, with music beginning at 1:00 and ending at 2:00, but I want to extend the scene to run from 1:00 to 3:30 while keeping the cues for the swells or crashes in the same places relative to the visual events, which are now at different times. How do I do that? I’ve thought about it, and it seems that without the tracks being separated (sorry, that was an unspoken assumption), moving music around would be impossible.
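For what it’s worth, one common trick I’ve read about for stretching a music cue is to repeat a section of it and hide the join with an equal-power crossfade, so the swells can be re-timed to land on the new picture. Here’s a minimal sketch of that idea in Python with NumPy — the sine-wave “cue,” the fade length, and the function name are all my own illustrative choices, not anything from a real NLE:

```python
import numpy as np

def equal_power_crossfade(a, b, fade_len):
    """Join clip `a` into clip `b` with an equal-power crossfade of `fade_len` samples."""
    t = np.linspace(0.0, np.pi / 2, fade_len)
    fade_out = np.cos(t)  # gain curve applied to the tail of the outgoing clip
    fade_in = np.sin(t)   # gain curve applied to the head of the incoming clip
    overlap = a[-fade_len:] * fade_out + b[:fade_len] * fade_in
    return np.concatenate([a[:-fade_len], overlap, b[fade_len:]])

# Toy example: "extend" a 1-second cue to ~1.9 seconds by repeating it
# with a 0.1-second crossfade hiding the seam.
sr = 44100
cue = np.sin(2 * np.pi * 220 * np.arange(sr) / sr)
extended = equal_power_crossfade(cue, cue, fade_len=sr // 10)
```

In a real edit the crossfade point would presumably be chosen at a musically quiet or rhythmically matching spot, which is where the skill (and the apparent seamlessness) comes in.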
Are a lot of editors finding cleanly separated audio sources or something? My assumption is that, at best, faneditors get a video track plus right, left, and center audio tracks, all carrying the same mix at different volumes.
So really the heart of it is: how do you edit sound when the source has everything mixed together in a single track, or in several tracks that each contain a mixture of music, atmospherics, and dialogue? 😛
Do you use filters to artificially separate the three elements (foley/effects, dialogue, music)?
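One classic filter-style trick I’m aware of (not claiming this is what faneditors actually do) is mid/side processing: since dialogue is usually panned dead center, it appears identically in both stereo channels, so subtracting right from left cancels it and leaves the wide-panned music and effects. A toy sketch, with a synthetic “mix” I made up for illustration:

```python
import numpy as np

# Hypothetical stereo mix: dialogue panned center, music panned wide.
sr = 44100
t = np.arange(sr) / sr
dialogue = 0.5 * np.sin(2 * np.pi * 300 * t)   # identical in both channels
music_l = 0.5 * np.sin(2 * np.pi * 440 * t)    # left-only element
music_r = 0.5 * np.sin(2 * np.pi * 660 * t)    # right-only element
left = dialogue + music_l
right = dialogue + music_r

mid = 0.5 * (left + right)  # emphasizes center-panned content (dialogue)
side = left - right         # cancels the center, leaving the wide material
```

In practice the separation is far messier than this, since reverb and bass tend to be spread across both channels, so the result is usually a starting point rather than a clean stem.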
Anyway, information anyone? Thanks!