Here's a new breakthrough in deepfake technology: lip-sync, vocal synthesis, and stitching done in less than a minute.
I can see this being really useful in a lot of cases, letting actors say whatever we want them to say through the use of AI.
Even the first video they pulled the bait-and-switch with is kind of unconvincing; my initial reaction was that they must have intentionally chosen a video with already-bad lip sync as a base to make their result look better by comparison. But it's early days for the technology, and this is a big step forward; give it five years and it'll probably be flawless.