
New AI "Deepfake" for fixing shoddy dialogue?

Author
Time

I initially thought of posting this to the Rise of Skywalker Ascendant thread, but I think its scope and potential are too big not to share.

In simple terms, an AI tool called “LivePortrait” was recently released that claims to do “video-to-video” deepfakes: you give it an input video, i.e. a clip of a character saying something, and it not only changes the lip-sync but literally remaps their entire facial expression, down to the teeth, mouth, and eyes, from nothing more than a reference video of someone performing that facial motion.

It’s a bit hard to explain, but here’s a video walking through the process. It does get a bit technical, but the interface is user-friendly and easy to use:
https://www.youtube.com/watch?v=cucaEEDYmsw
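For what it’s worth, the core idea behind this kind of motion transfer is simpler than it sounds. Here’s a toy NumPy sketch of the concept (emphatically not LivePortrait’s actual code, which uses learned implicit keypoints and a neural renderer): take facial keypoints from each frame of the driving video, measure their offset from a neutral reference frame, and re-apply those offsets to the source face’s keypoints.

```python
import numpy as np

def retarget(source_kp, driving_kps, driving_neutral):
    """Toy motion transfer: apply each driving frame's keypoint
    offsets (relative to a neutral frame) to the source keypoints."""
    out = []
    for frame_kp in driving_kps:
        delta = frame_kp - driving_neutral  # motion in this driving frame
        out.append(source_kp + delta)       # re-applied to the source face
    return np.stack(out)

# tiny example: 3 keypoints in 2D, two driving frames
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
neutral = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.8]])
frames = np.array([neutral, neutral + 0.1])  # frame 0 neutral, frame 1 moved
result = retarget(src, frames, neutral)
print(result.shape)  # (2, 3, 2): one retargeted keypoint set per frame
```

The actual model also has to render photorealistic pixels from the moved keypoints, which is where the deep learning comes in, but the “copy the motion, not the face” idea is the heart of it.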

I think it’s a huge game-changer for fan edits in the future, as you can literally change an actor’s facial performance to suit whatever need is desired. This could prove really useful for slightly changing the context of scenes, such as in Hal’s TFA edit, where Finn’s mouth movements were changed so he could say “It’s the First Order, they’ve found us” to Han, instead of referring to the Hosnian Prime destruction.

AI has really taken off in current culture, and its implementation in fan edits for purposes such as voice cloning has already changed the way they can be made. Now, with tools like generative video and LivePortrait, you can make practically anything you want as long as it fits the context of the scene and the themes of the film.


Author
Time

Very impressive. Significantly better than the image manipulator tools previously available.

I’ve experimented with Pinokio before, mostly for deepfakes, and the applications on there had a hard time with certain lighting conditions and facial distances from the camera in movies. For example, I tried replacing Rey’s father’s face with a young Ian McDiarmid, and it didn’t work very well in the scene where he’s on Ochi’s ship. Mostly due to the lighting, but also because he’s pretty far from the camera. I wouldn’t be surprised if the same limitations apply here.

Author
Time

Wow, this looks like a game-changer for the world of fan edits. With a tool like this, it becomes easier to insert new dialogue and change the entire context of a scene.
Some of the goals that I had for my (hypothetical) sequel trilogy edits were to change Rey’s backstory and remove the Resistance entirely (Having instead a Republic vs First Order conflict).

Author
Time

All I can think of at the moment is adding short little shots where a character bluntly states the explicit change that the fan editor wants to make.

Han steps out of Maz’s cantina and says, “The First Order, they’ve caught up with us with their flagship and will be coming to get us any minute.”

Finn chastises DJ onboard their stolen getaway ship, “When you casually lead us to this ship after we escaped from our cell, I figured you owned it!”

Rey picks up a dagger from the ground of the underground cave and says, “I sense that this dagger can communicate things to me telepathically, but only if I agree to tap into my inner darkness.”

I know I’ve made some very poor decisions recently.

Author
Time

I’ve looked into a lot of the advancing technology, and it does indeed bode well for the future of fan edits.

I myself had an idea of restoring cut scenes from ‘The Good, the Bad and the Ugly’ with deepfakes of the original actors’ voices.

I recall seeing something recently that was for use in the context of “dubbed” films; essentially allowing the new dubbed voice to change the on-screen character/actor’s mouth and expressions (to an extent) to flow better with the dialogue. It was quite intriguing. That could go back to something like a Spaghetti Western to sync and adjust the mouth movements (though I quite like the charm of those films, it almost seems sacrilegious to “fix” them).

The limitations (at the moment) seem to be the extremes. Once the expressions of the face go beyond what the original actor does, it starts to look off, like in the video example above, where De Niro goes from looking quite believable to doing that Mummy face.

The really exciting part of all of this is that a lot of the very convincing technology is available to regular people.

"The other versions will disappear. Even the 35 million tapes of Star Wars out there won’t last more than 30 or 40 years. A hundred years from now, the only version of the movie that anyone will remember will be the DVD version [of the Special Edition], and you’ll be able to project it on a 20’ by 40’ screen with perfect quality. I think it’s the director’s prerogative, not the studio’s to go back and reinvent a movie." - George Lucas


Author
Time

Jar Jar Bricks said:

Very impressive. Significantly better than the image manipulator tools previously available.

I’ve experimented with Pinokio before, mostly for deepfakes, and the applications on there had a hard time with certain lighting conditions and facial distances from the camera in movies. For example, I tried replacing Rey’s father’s face with a young Ian McDiarmid, and it didn’t work very well in the scene where he’s on Ochi’s ship. Mostly due to the lighting, but also because he’s pretty far from the camera. I wouldn’t be surprised if the same limitations apply here.

I feel like that would work more effectively with something like EbSynth: do the face-swap composite on a keyframe first, then use EbSynth to propagate it across the shot.


Author
Time

As convenient as this is, just as I left this site and went on to YouTube, a new video by the Corridor Crew tackled this exact technology and used it to basically make movie characters explain their sequels.

The workflow mainly used ComfyUI, with ElevenLabs for the voice cloning, and the results look pretty good if I do say so myself.

https://youtu.be/PrPY_USgngg

I can genuinely see the potential to swap out a character’s lines, like in TROS, to maybe give Leia more dialogue, or something like that.


Author
Time

Yeah, that video is basically what Hal was joking about in his post on here lol.

But I could see it working in the ways you’re describing. We could have her say something to Rey at the end of their conversation after “I think I’m just tired, that’s all,” or perhaps to Snap when Rose is storming off. But I can’t think of any instance where she has a visible line we’d want to replace, or where her face is even visible, for our current AI lines.

EDIT: Perhaps there’s enough visible space after Leia tells Rey “No” concerning finding Palpatine?

Author
Time

I would use it to: A) fix the scary digital actors in Rogue One; B) give Tarkin an actual scene in Revenge of the Sith; C) give Dooku an actual scene in TPM. I’m a Hammer fan.