FrankB said:
Joel Hruska said:
Alright. Now I see it. If that’s what you call a major anti-aliasing problem, I strongly suggest you never watch the NTSC credits. I cannot upload them to YouTube in SD because the YT algorithm utterly destroys SD content, so I have to give you screenshots – but take a look.
Looks like shifted fields. Can you upload the credits, or a few seconds of it, without conversion somewhere?
I do not understand where the problem really lies.
Look at the degree of aliasing in those images, and I think you’ll understand why I’m experimenting with TR2=4 or TR2=5. TR2=3 creates a ripple across the front of the station that TR2=4 (at least) helps fix.
That seems like the wrong way to handle this, but let me see it first - maybe I will have to admit it IS that horrible. But it's hard to believe.
The reason to write an article telling people how to deal with PAL is so that PAL people know how to best convert the show. I am not advocating for some kind of code of honor. I want to produce the best overall version of Deep Space Nine. But the goal is to provide a “Best-in-class” improvement method to everyone, which means I’m also interested in the best way to handle PAL.
Not much difference from NTSC. Once you have made the NTSC sources progressive and anti-aliased whatever residual aliasing came from reordering the fields, it should be pretty much the same procedure, plus a slowdown at the very end.
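As a minimal sketch, that final PAL slowdown step might look like this in AviSynth (my illustration, not part of the original post; it assumes a 25 fps progressive clip with its audio attached):

```avisynth
# Hypothetical final step for a PAL source: after deinterlacing and
# anti-aliasing, slow the 25 fps progressive clip back to film speed.
# AssumeFPS relabels the frame rate without dropping or blending frames;
# sync_audio=true stretches the audio to match, so pitch drops ~4%.
AssumeFPS(24000, 1001, sync_audio=true)
```

If the pitch shift bothers you, the audio can instead be time-stretched externally and remuxed.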
Doom9 is correct that IVTC is complicated. The only way to perform it perfectly, as far as I know, is to hand-comb every scene and manually tune the frame order method via a TFM OVR file.
It is even better not to write an OVR file for TFM at all, but to IVTC completely by hand. That is not very complicated; I do this all the time, as long as the pattern doesn’t change too often. With this many episodes it is quite a job, but so is writing OVR files for TFM. If you like, I can post a typical script for by-hand IVTC.
However – it turns out that the following actually works pretty damn well:
TFM()
TDecimate()
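Spelled out a little more, that two-line combination sits in a script something like this (the source filter and file path are my assumption for illustration; the original post gives only the two filter calls):

```avisynth
# Hypothetical wrapper around the TFM()/TDecimate() pair from the TIVTC plugin.
# FFVideoSource (FFMS2) is one possible source filter; any indexed source works.
FFVideoSource("C:\DS9\episode.mkv")
TFM()        # field matching: rebuilds progressive frames from telecined fields
TDecimate()  # drops the duplicate frame in each 3:2 cycle: 29.97 -> 23.976 fps
```

With default settings, TFM picks the best field match per frame and TDecimate removes one frame in five, which is exactly the inverse of 3:2 pulldown on film-sourced material.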
That would be easy, but if you do that with native 29.97 (i or p) content, there will be missing frames and skipping (is that the right word?) all the time.
I have also developed a secondary method of creating a 60 fps version of the show that matches the quality of what I’ve shown you.
Not a good idea, because it doesn’t remove the stutter caused by telecined double fields. That is the whole problem with content like this: progressive 30 or 60 fps will stutter when the original material is 24 fps film. How could this be avoided? There have to be duplicate frames, or at least fields, or can you divide 60 evenly by 24?
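The arithmetic behind that closing question, for anyone following along: 60 does not divide evenly by 24, which is exactly why telecine has to repeat fields in the first place.

```latex
\frac{60}{24} = 2.5 \quad \text{(not an integer, so frames cannot all last equally long)}
```

```latex
\text{2:3 pulldown: } 2\ \text{film frames} \rightarrow 5\ \text{video fields}, \qquad
24\ \tfrac{\text{frames}}{\text{s}} \times \tfrac{5\ \text{fields}}{2\ \text{frames}} = 60\ \tfrac{\text{fields}}{\text{s}}
```

Those repeated fields are the "telecine double fields" that cause the judder when film-sourced material is forced to a flat 30 or 60 fps.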
Still searching for a way to automate it perfectly (the commands above are automatic, but not perfect).
I don’t know exactly what your goal is. But even the first necessary step, the IVTC itself, cannot be done perfectly automatically. That’s why we do it by hand whenever possible.
Initial evaluation of naive implementation of AniMaxx’s algorithm suggests it’s oversharpening in my case. I like the overall output otherwise. Going to adjust some variables, then toss in the rest and see what it looks like. 😃
I wouldn’t sharpen at all, only a bit of unsharp masking, but that’s another question… 😉
That would be easy, but if you do that with native 29.97 (i or p) content, there will be missing frames and skipping (is that the right word?) all the time.
There isn’t.
While I have self-identified as a noob, I need to ask you to trust me to some extent. I am not always right about what I am seeing, or why I’m seeing it, and I don’t always understand how to fix a problem I encounter, but I have spent thousands of hours staring at this show in the last nine months. I have accelerated it to every frame-rate multiple between 23.976 and well above 1000 fps. I have interpolated it with at least six different filters. I have tested source video derived from MakeMKV, Handbrake, and DVD Decrypter. When I decided I wanted to test the impact of using Handbrake to process the DVD, I ripped the same DVD more than 250 times over several days. I ran every combination of filters, frame rates, and output file types I was interested in, and then I started watching the results, at 3-5 minutes per viewing, to evaluate the impact of various settings.
I won’t tell you that I watched the entire set of videos, because I didn’t. Once I could predict what the output would look like, I was able to move through my testing more quickly. Eventually I realized these methods would not serve me, and I quit after looking at ~120 of the 250 or so videos I rendered.
I’m not explaining this to boast or brag or to come off as thin-skinned. You don’t know me from Adam, and you don’t have any specific reason to think I know what I’m talking about, especially when I openly identify as being very new to this work. I figure it’s on me to explain the level of detail and rigor with which I have approached the process.
Here is what it means when I say that the TFM / TDecimate combination works. It means that I’ve run it against multiple episodes with a high film percentage, but have seen only very isolated incidents of motion judder. I’m comfortable asking people to accept 3 seconds of bad motion in a 45-minute episode of TV. If I have to ask for 5 seconds, I’ll wince, but I’ll do it. If I have to ask someone to tolerate 10 seconds of bad motion, total, in one 45-minute video, I will do so only after I have made every effort I can make to fix it. If I had to ask someone to tolerate 60 seconds of bad footage in an episode, I’d refuse to publish my methods. That’s why I didn’t publish my methods in April or May. They weren’t worth publishing.
Here. If you want to see the difference between my old methods and my new ones, watch this other version of the DS9 credits that I did (I’m going to give you the time codes in the episode to show you the differences directly):
Watch the following: https://www.youtube.com/watch?v=YbjqaVHZvAA
0:21 - 0:48
0:59 - 1:14
1:27 - 1:36
Now, compare those exact same sections to the credits I gave you this morning. Ignore any differences in image quality and just look at the motion.
https://www.youtube.com/watch?v=zMaMHT4skn0
The first video shows you what my original approach looked like. It’s very nearly as pretty in terms of achieved image quality, but the motion is jerky and poor at multiple points. Too many points.
The video I sent you this morning is the video created when you use my script and preface it with: TFM() / TDecimate()
After the blue flash, the credits are in 29.97 until the end. And I’ve tested the method in more than just the credits.
I’m basically willing to tolerate less-than-great motion in a small portion of one scene, maybe two scenes tops. So far, that TFM / TDecimate approach has hit that quality level. It might prove unable to do so in certain places. Where it can’t, I’ll find a different method and publish it, or nudge people towards Orinoco.
Not a good idea, because it doesn’t remove the stutter caused by telecined double fields. That is the whole problem with content like this: progressive 30 or 60 fps will stutter when the original material is 24 fps film. How could this be avoided? There have to be duplicate frames, or at least fields, or can you divide 60 evenly by 24?
Here’s how my method of doing this works.
We create two files, not just one. First script, for first file:
TDeint(mode=1, type=2, tryweave=true, mtnmode=3, full=false, ap=10, aptype=2, slow=2)
This script orders TDeint to produce double-rate frame output, to perform kernel interpolation, to repair a frame by weaving when that produces fewer combing artifacts than deinterlacing would, and to deinterlace only the frames detected as interlaced. This preserves the progressive frames baked into the NTSC source. Type=5 was the only option that came close to Type=2 in overall image quality, but the bi-directional blending Type=5 introduces causes more frame-blending issues at scene boundaries. Type=5 occasionally fixes a problem Type=2 has, but it causes problems far more often than it repairs them.
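For readers less familiar with TDeint, here is the same call with each parameter annotated (the annotations are my reading of the TDeint documentation, not the original author's):

```avisynth
# mode=1     : double-rate output, one frame per field (29.97i -> 59.94 fps)
# type=2     : kernel interpolation for the areas that must be deinterlaced
# tryweave   : try weaving the field pair first; keep it if it combs less
#              than deinterlacing would
# mtnmode=3  : motion-detection mode
# full=false : deinterlace only frames detected as interlaced; progressive
#              frames in the hybrid NTSC source pass through untouched
# ap/aptype  : artifact-protection threshold and method
# slow=2     : slowest, most careful processing path
TDeint(mode=1, type=2, tryweave=true, mtnmode=3, full=false, ap=10, aptype=2, slow=2)
```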
Now we set this clip aside and turn our attention to the other. We run QTGMC against the second clip with the exact same processing steps I use otherwise, except we don’t use progressive repair mode. We use interpolation mode, and we take the frame rate up to 59.94 fps.
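As a sketch, the second clip's script might look something like this (the preset, source filter, and path are my assumptions for illustration; the original post does not give QTGMC's exact settings):

```avisynth
# Hypothetical QTGMC pass for the second clip.
# FPSDivisor=1 keeps QTGMC's double-rate output (29.97i -> 59.94 fps),
# so this clip's frame count matches the TDeint clip above.
FFVideoSource("C:\DS9S6D2\Sacrifice.mkv")
QTGMC(Preset="Slower", FPSDivisor=1)
```

Matching frame counts between the two clips matters, because the Repair step below compares them frame by frame.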
Here’s why we do this: the deinterlaced clip created by TDeint will have perfectly smooth motion thanks to the frames it interpolates. Those frames will be well blended, because the script above blends very well within a single scene. Scene boundaries, however, present a problem. It is not unusual to find a character with a fireball blowing out of her forehead at a scene boundary, because the previous scene was an explosion.
But there’s a way to fix this: Repair.
We now take our two files and write the following script:
clip1=FFVideoSource("C:\DS9S6D2\Sacrifice-TDeint.mkv")
clip2=FFVideoSource("C:\DS9S6D2\Sacrifice-QTGMC-ToPairWithTDeint.mkv")
Repair(clip1, clip2, 9)
This process repairs the broken frames that had bad interpolated data in them.
This method – which I nicknamed Orinoco – is motion-perfect and quality-identical to the episode videos you have already watched. I do not have footage ready to hand, though I could get some with a bit of processing time, but I have no reason to lie. I have published my methods and staked my reputation on them.
I was going to go to press with Orinoco, and only Orinoco, before I found my 23.976 fps solution at the last second. When I say the image quality is identical to what you have seen with perfect motion, I do not mean “Except in the places I don’t want to admit exist.” I mean “In every single instance I am aware of thus far.”
I have high quality standards. I don’t like to compromise on them.