I initially thought of posting this to the Rise of Skywalker Ascendant thread, but its scope and potential are too big not to share.
Put simply, an AI tool called “LivePortrait” was recently released that claims to do “video-to-video” deepfakes: you give it an input video (e.g. a clip of a character saying something) plus a reference video of someone performing a facial motion, and it not only lip-syncs the character but remaps their entire facial expression, down to the teeth, mouth, and eyes.
It’s a bit hard to explain, but here’s a video walking through the process. It does get a bit technical, but the interface itself is user-friendly and easy to use:
https://www.youtube.com/watch?v=cucaEEDYmsw
I think it’s a huge game-changer for fan-edits going forward, as you can literally change an actor’s facial performance to suit whatever the edit needs. This could prove really useful for subtly changing the context of scenes, such as in Hal’s TFA edit where Finn’s mouth movements were changed so he could tell Han “It’s the First Order, they’ve found us” instead of referring to the Hosnian Prime destruction.
AI has really taken off in current culture, and its use in fan-edits for purposes like voice cloning has already changed the way they can be made. Now, with generative video tools like LivePortrait, you can make almost anything you want, as long as it fits the context of the scene and the themes of the film.