Oh I’m sure. I think it’s like 85% there already anyway. I just don’t have a creative itch to use it.
I will say, something that does entice me about using AI in edits is the problem-solving of making specific models that blend into a scene. So far, in threads like Hal’s Ascendant, everyone is focused on training models on clean, high-quality audio[book] sources. And while that’s ElevenLabs’ best-practice suggestion, it’s based on the hypothetical commercial use they’re optimizing for. If you’re trying to get a new line to blend perfectly - to emulate the background noise and quality of a specific vocal clip - the AI will reproduce that up to a certain clarity. Those count as imperfections or incomplete data for a model that’s meant to be a holistic replacement for a voice actor, but IMO they’ve worked perfectly in my private tests for blending a new line into a specific scene.
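For what it’s worth, even without training on noisy source audio, some of that blending can be faked in post. Here’s a rough sketch of the idea - everything here (the function name, the naive RMS-matching and noise-bed approach) is my own hypothetical illustration, not anything ElevenLabs exposes:

```python
import math

def match_and_blend(new_line, ref_clip, noise_floor):
    """Rough post-processing sketch: scale a clean synthesized line to the
    reference clip's RMS loudness, then layer the scene's captured noise
    floor under it so the new line isn't conspicuously cleaner than the
    audio around it. All inputs are plain lists of float samples."""
    rms = lambda xs: math.sqrt(sum(x * x for x in xs) / len(xs))
    # Match perceived loudness to the reference vocal clip
    gain = rms(ref_clip) / max(rms(new_line), 1e-9)
    scaled = [x * gain for x in new_line]
    # Repeat the captured noise bed to cover the line, then mix sample-wise
    bed = (noise_floor * (len(scaled) // len(noise_floor) + 1))[: len(scaled)]
    return [s + n for s, n in zip(scaled, bed)]
```

In practice you’d grab `noise_floor` from a silent stretch of the original scene, so the generated line inherits the same hiss and room tone instead of sitting on dead silence.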