
Star Trek Deep Space Nine - NTSC DVD Restoration & 1080p HD Enhancement (Emissary Released) — Page 5

Author
Time

And here are the video files. Please note that I haven’t tweaked the audio pitch yet, so it sounds too high, but I will handle that later; for now it’s just a visual preview, okay?

First the link for the original SD-Quality video from the PAL-DVD (source):
https://drive.google.com/file/d/1RzZISla7xctAzI9zQ9PsogxxKwUlznCo/view?usp=sharing

And now, for comparison, the filtered and AI-enhanced 4K clip:
https://drive.google.com/file/d/1_L8a5gtwftVfMROA01WiYLBl_jWF8_vX/view?usp=sharing

Author
Time

Animaxx said:

pahuffman said:

Here’s another DS9 upscale project. Looks like he’s achieved good results and is moving on to upscaling the whole series:

https://www.extremetech.com/extreme/314653-remastering-deep-space-nine?fbclid=IwAR0M0qhlJZ8qm95SCzE-UduVZv-0lL-rDni48DdD-bfuc3-wKWX-iepe61g

That is the project from Joel, who just joined us around here. But thanks for looking around.

Oh good! I’m glad Joel’s here. I didn’t realize he was in the thread.

Thanks, guys, for all the hard work!

Author
Time
 (Edited)

It does.

I am not certain it can clean up the PAL source quite as much as the NTSC source, but the PAL source seems to respond well to similar filters. The PAL conversions looked good to me. I prefer the NTSC audio, having grown up with it, and there’s a very slight color difference between PAL and NTSC, but it seems to me those conversions were done really well.

Until recently, I thought I might use the PAL source myself.

Ani,

One difference I do know between PAL and NTSC is that PAL doesn’t have the same problems with aliasing that I’m trying to clean up with commands like TR2=5 or TR2=4. Some of the issues I have spent time fixing for NTSC just aren’t in the PAL copy.

The native DVD credits for NTSC look terrible compared to PAL.

Author
Time

Joel Hruska said:

It does.

I am not certain it can clean up the PAL source quite as much as the NTSC source, but the PAL source seems to respond well to similar filters. The PAL conversions looked good to me. I prefer the NTSC audio, having grown up with it, and there’s a very slight color difference between PAL and NTSC, but it seems to me those conversions were done really well.

Until recently, I thought I might use the PAL source myself.

Ani,

One difference I do know between PAL and NTSC is that PAL doesn’t have the same problems with aliasing that I’m trying to clean up with commands like TR2=5 or TR2=4. Some of the issues I have spent time fixing for NTSC just aren’t in the PAL copy.

The native DVD credits for NTSC look terrible compared to PAL.

The audio I will be able to handle: I know how to adjust the original NTSC audio so it fits/syncs with PAL without any pitch issues. That way the original will be preserved, for I prefer it as well; besides, the NTSC track is there at 448 kbit/s, while the English PAL one only has 384.

Also, I have developed a modified QTGMC string to better accommodate PAL, and I have added a couple of filter tweaks, so the PAL will look “cleaner/sharper”.

I have already produced a sample with my setup and will upload comparison shots as well as a clip soon.

Author
Time

Joel Hruska said:
One difference I do know between PAL and NTSC is that PAL doesn’t have the same problems with aliasing that I’m trying to clean up with commands like TR2=5 or TR2=4. Some of the issues I have spent time fixing for NTSC just aren’t in the PAL copy.

If the NTSC sources are THAT bad that you have to anti-alias via QTGMC temporal filtering with a window of 4 or 5, it’s really time for the PAL sources… I would never use this. If QTGMC, then always TR2=0 and StabilizeNoise=false. There are better ways to filter noise.

The native DVD credits for NTSC look terrible compared to PAL.

The PAL opening credits seem to stutter in nearly every episode near the end (when the wormhole opens), but there are a few episodes where the conversion is stutter-free. I just watched a lot of episodes but forgot to take notes on the good ones.

Animaxx said:

The audio I will be able to handle (I know how to adjust the original NTSC audio so it fits/syncs with PAL without any pitch issues

You don’t have to change the audio at all (unless there are small cut differences; I don’t know), just slow the picture back down to its (nearly) original 23.976 fps with AssumeFPS(24000,1001).
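In script terms, FrankB’s suggestion might look like the sketch below (the filenames and source filters are placeholders, not from this thread). Slowing the 25 fps PAL picture to 23.976 fps stretches its runtime by 25/23.976 ≈ 4.27%, which is exactly the PAL speed-up, so the untouched NTSC audio track should line up again without any pitch correction:

```avisynth
# Hypothetical filenames; AssumeFPS is the only call that matters here.
video = MPEG2Source("ds9_pal.d2v")        # 25 fps progressive PAL source
audio = NicAC3Source("ds9_ntsc_448.ac3")  # original NTSC 448 kbit/s AC3 track
# Retime the picture to (nearly) its original film rate without touching frames:
video = video.AssumeFPS(24000, 1001)      # 23.976 fps; runtime now matches NTSC
AudioDub(video, audio)                    # marry the NTSC audio to the slowed video
```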

Author
Time

Ok guys, as promised, here are the new sample images from the updated/improved work I have done.
As before, images on the left are the source (PAL DVD), images on the right are upscaled/enhanced.

I think you will note there is a little more sharpness as well as detail (compared to my previous version).








I really think this is the way to go.

Author
Time

You guys know when I was satisfied? When I was finally able to see the freaking Aztec pattern on the ship’s hull (last shot). I know, I am so ready for the looney bin, but I used to (and still do) build starship models, so details are a point of obsession for me …

Author
Time
 (Edited)

FrankB said:

Joel Hruska said:
One difference I do know between PAL and NTSC is that PAL doesn’t have the same problems with aliasing that I’m trying to clean up with commands like TR2=5 or TR2=4. Some of the issues I have spent time fixing for NTSC just aren’t in the PAL copy.

If the NTSC sources are THAT bad that you have to anti-alias via QTGMC temporal filtering with a window of 4 or 5, it’s really time for the PAL sources… I would never use this. If QTGMC, then always TR2=0 and StabilizeNoise=false. There are better ways to filter noise.

FrankB,

Pleasure to meet you. Before we discuss the relative merits of our processing techniques, I should probably provide you with some samples. For example:

https://www.youtube.com/watch?v=zMaMHT4skn0

That’s the most recent version of the NTSC credits that I’ve done (set YT to 4K for actual quality). Check two things, specifically:

1) The degree of antialiasing in the nacelles on the runabouts as they fly past just before the blue light burst.
2) The station pan immediately following the blue light burst.

Compare against the following in the same two areas:

https://www.youtube.com/watch?v=OqG_A72Q5fM

You will find that the antialiasing is improved in both areas, with the last ripples of QTGMC’s earlier run quashed now. These are visible in the station pan. Now, quashing the ripple has caused a buzzing artifact later on in the credits that I’m exploring methods to get rid of. I’m confident I’ll nuke it. The net effect of TR2=4 or TR2=5 is a substantial improvement in the final output.

If the NTSC sources are THAT bad that you have to anti-alias via QTGMC temporal filtering with a window of 4 or 5, it’s really time for the PAL sources… I would never use this. If QTGMC, then always TR2=0 and StabilizeNoise=false. There are better ways to filter noise.

Then please, by all means, check the output links I’ve posted or any of the others on my channel. I would love to hear your suggestions for further improving the quality of my work. I just posted 15 videos on Monday, in various Topaz modes and across multiple episodes. I can provide actual clips rather than links as well, but YT is probably easiest for speed and to give you a good idea. If you set YT to 4K I don’t lose much quality, even though it’s compressed.

https://www.youtube.com/user/Dputiger

I have spent 20-40 hours per week for the past nine months running thousands of encodes of Deep Space Nine. DS9, however, is also my first project. I have never found a formula to beat the following (with the exception of the TR2=4 / TR2=5 settings, which are new introductions to my method of handling the project and are still under evaluation).

If you run the script below without calling TR2=4/5 in either line (leaving it at default in both calls), you get the second video. If you call it with TR2=5 in both lines, as below, you get the first video.

QTGMC2 = QTGMC(Preset="Very Slow", SourceMatch=3, TR2=5, InputType=2, Lossless=2, MatchEnhance=0.75, Sharpness=0.5, MatchPreset="Very Slow", MatchPreset2="Very Slow")
QTGMC3 = QTGMC(Preset="Very Slow", SourceMatch=3, Lossless=2, Sharpness=0.5, MatchEnhance=0.75, InputType=3, TR2=5)
Repair(QTGMC2, QTGMC3, 9)

I’m actively testing the impact of only calling TR2=4 /5 in one of the two runs before the repair. Oftentimes, this still provides a subtler version of the same effect and I prefer to use the lightest touch possible. The script above is the one that actually produced the output you watched, so that’s the one I’ll stick with for now.

If you want 23.976 fps output, just throw TFM() and TDecimate() ahead of the QTGMC calls.
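Assembled into one script, the chain described above might look like this; the source line is a placeholder, the QTGMC parameters and the Repair call are the ones quoted in this post, and TFM()/TDecimate() run first to restore 23.976 fps:

```avisynth
MPEG2Source("ds9_ntsc.d2v")   # hypothetical source; any indexed NTSC DVD rip
TFM()                         # field-match the telecined stream
TDecimate()                   # drop the duplicate frames -> 23.976 fps
QTGMC2 = QTGMC(Preset="Very Slow", SourceMatch=3, TR2=5, InputType=2, Lossless=2, \
    MatchEnhance=0.75, Sharpness=0.5, MatchPreset="Very Slow", MatchPreset2="Very Slow")
QTGMC3 = QTGMC(Preset="Very Slow", SourceMatch=3, Lossless=2, Sharpness=0.5, \
    MatchEnhance=0.75, InputType=3, TR2=5)
Repair(QTGMC2, QTGMC3, 9)     # first clip repaired against the second, mode 9
```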

I almost forgot. I can give you screenshots of the difference between the baseline DVD and the results of my method with the TR2=4/5 call in it.

Baseline DVD. From PastPrologue.

https://www.extremetech.com/wp-content/uploads/2020/09/Screenshot-705.jpg

Identical screenshot after processing. Zero upscale:

https://www.extremetech.com/wp-content/uploads/2020/09/Screenshot-708.jpg

Finally, the image after upscale.

https://i.imgur.com/AZaBHLI.png

The slight color shift is something I know how to unwind. Just hadn’t done it yet.

So that’s a demonstration of the level of quality and noise processing I currently achieve, at both the frame level and in motion.

If you know a better way to clean up the former into the latter – possibly by preserving more detail on Bashir’s forehead, where my method is losing some of it – I’d love to incorporate it. I’m willing to trade a few forehead wrinkles on a 27-year-old actor in exchange for the improvements the above generates, but I’d prefer to gain the improvement without trading anything away. 😃

If you’d like to see clips via a different method I can also make that happen. Let me know if you have a Microsoft account.

Author
Time
 (Edited)

Animaxx said:

Ok guys, as I had promised, here are the new sample images from the updated/improved work I have done.
As before, images on the left are source (PAL-DVD), images in the right are upscaled/enhanced.

I think you will note there is a little more sharpness as well as detail (compared to my previous version).








I really think this is the way to go.

What’s your current script?

BTW, I compared our Sisko shots. Here’s my version.

https://i.imgur.com/P0xfEAM.png

I’m not sure why yours is so stretched, but the quality there looks about the same to me. I took the liberty of cropping your image to show just a 4:3 ratio, and the one thing I noticed is that you might be using an additional sharpening method compared to my own: Your Sisko’s ears are slightly sharper.

I also compared our starship shots. You on the left, me on the right:

https://i.imgur.com/Rf5tjAM.png

We’re very close, but you’ve got a distinctly sharper edge at the moment than I do. How are you getting it? Different QTGMC settings, or a different filter altogether?

PS – Sharpening filters should be applied after QTGMC if you intend to use anything but QTGMC. In a lot of cases, the Repair run I do will wipe out sharpening changes applied by other filters. Also, if you set a “Sharpness” variable when using my approach (if you experiment with it), make sure you set one in both QTGMC2 and QTGMC3. Variables are not passed between them. If you do not set the variable in both runs, no level of sharpness you attempt to apply in QTGMC2 will ever work. Neither will the vast majority of other sharpening filters if they are run before QTGMC. I believe this may be because QTGMC has its own functions to limit sharpness, or it may be the result of the Repair command. Either way, I wanted to make you aware of it.
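To make the PS concrete, here is how the two-run approach from earlier in the thread looks with the Sharpness variable set in both calls; any extra sharpener would go after the Repair line (other parameters trimmed here for brevity):

```avisynth
# Sharpness must appear in BOTH runs; it is not passed from one call to the other.
QTGMC2 = QTGMC(Preset="Very Slow", InputType=2, SourceMatch=3, Lossless=2, Sharpness=0.5)
QTGMC3 = QTGMC(Preset="Very Slow", InputType=3, SourceMatch=3, Lossless=2, Sharpness=0.5)
Repair(QTGMC2, QTGMC3, 9)
# Any additional sharpening filter belongs here, after the Repair,
# where the two runs can no longer wipe its changes out.
```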

Author
Time
 (Edited)

I’m not sure why yours is so stretched, but the quality there looks about the same to me. I took the liberty of cropping your image to show just a 4:3 ratio, and the one thing I noticed is that you might be using an additional sharpening method compared to my own: Your Sisko’s ears are slightly sharper.

Well, that is the result of StaxRip doing some cropping and my player wanting to stretch to fill the screen. I assure you, the final product won’t look as “stretched”. Furthermore, I cut off the black bars on the top/bottom so the image appears larger (and you can compare more closely without resorting to zoom).

What’s your current script?
We’re very close, but you’ve got a distinctly sharper edge at the moment than I do. How are you getting it? Different QTGMC settings, or a different filter altogether?

Your updated settings were my inspiration and a sort of “jumping-off point” for me, but I realized that PAL has its own rather unique properties in terms of noise, artifacts and so on; after I had applied most of your settings, I quickly realized that I hadn’t jumped far enough and decided to do a little experimenting on my own.

The basis for my work is the VOB files I got from ripping the PAL discs with DVD Shrink.

Then I dumped that into StaxRip.
Aside from the automatic source settings, the “first” filter is QTGMC in Progressive Full Repair Mode.
Afterwards, my customized string reads/writes as follows:

QTGMC(Preset="Placebo", InputType=3, SourceMatch=3, Lossless=2, MatchEnhance=1.0, MatchPreset="Placebo", MatchPreset2="Placebo", Sharpness=1.0, SMode=2, TR2=2, Rep0=11, Rep1=9, Rep2=9, RepChroma=true, EdiMode="EEDI3+NNEDI3", Sbb=0, NoiseProcess=1, ChromaNoise=true, DenoiseMC=true, NoiseTR=2, GrainRestore=1.0, NoiseRestore=0.1, NoiseDeint="Generate", StabilizeNoise=true, Border=true, EdiThreads=8)

A few words on that:
I increased MatchEnhance to 1.0; while I know it can create more noise, it also provides a bit more detail.
I pushed the sharpness setting to 1.0 but also specified SMode=2 to use “vertical max/min average + 3x3 kernel” for better results; in addition I did not set an explicit value for sharpness limiting. I know it’s risky, and occasionally my output can look close to the edge of oversharpening, but one or two seconds of that is an acceptable trade-off for a better look throughout the episode.
I adjusted the Rep settings to minimize blur for the motion search and for the initial and final output after temporal smoothing, despite not having set TR0 and TR1 values. I have found that this produces a clearer image and better motion quality during quick camera pans; I also included the chroma repair.
Furthermore, I added interpolation with “EEDI3+NNEDI3” to help with the half-frame-to-full-frame issue during deinterlacing, since StaxRip still reports an interlaced source from the PAL disc despite it being flagged as progressive. While I was comparing, it also seemed to handle the visual flow after deinterlacing better (less image stutter).
Sbb=0 is usually a default setting, but I wrote it in anyway, so QTGMC knows not to blend back in the blur difference between pre- and post-sharpening.
When it comes to noise (which relates to the increased MatchEnhance and sharpness), I chose the stronger NoiseProcess=1 to compensate for the previously increased settings; I also included ChromaNoise here.
DenoiseMC was enabled for better noise-vs.-detail detection; in combination with NoiseTR it helps identify what is actually noise and what is detail (which I want to keep, naturally).
On top of that, I decided to raise GrainRestore to 1.0, because with PAL it preserves things better, albeit keeping a little more grain; that’s visually fine for me, as I don’t need things to be “super smooth” in that regard.
Finally, Border=true is there to remove flickering on the top/bottom borders, which happens with PAL when explosions or very fast-moving objects are in those areas. It doesn’t handle everything, but it reduces the effect.

Then I used a second filter, Deblock_QED, because the PAL source is quite blocky, especially when explosions or quick camera pans happen. I modified the string so that quant1=35 (the default would be 24) increases the removal of the outer edges of blocks. I don’t dare go higher, for then other lines become distorted or show weird visual effects. Unfortunately it doesn’t get rid of all the blocks, but it reduces them enough that you barely see them (mostly during quick movement and/or explosions).

Then I used a third filter, pSharpen. I don’t know how familiar you are with that one, but I love it. It uses a “two-point approach” on every pixel, comparing it to the min/max of its spatial neighbors while avoiding overshoot, and it also lets you choose when to compensate for that avoidance.
I also raised the supersampling factors to reduce aliasing.
I know additional sharpening can lead to terrible results (considering I also pushed it in QTGMC), but I think I have made a reasonable compromise.
My specific pSharpen string is as follows: pSharpen(strength=75, threshold=90, ss_x=2.0, ss_y=2.0)

Also, as a 4th filter, anti-aliasing comes into play with MAA2.
I know aliasing is not present “as much” with PAL, but in the instances where it happens, it gets distracting very fast. So I really pushed it here as well, including chroma.
String: MAA2(mask=1, chroma=true, ss=2.0, aa=128, aac=128, threads=4, show=0)

The 5th and (almost) final filter is there to handle banding; I went for GradFun3, since it gives the most pleasing results. I only need it because with PAL you can sometimes see banding, especially in strong colors.
The string here is as follows: GradFun3(smode=2)

And the last “filter” is the automatic crop.
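For readers who want the whole chain in one place, here is a sketch of the StaxRip filter order just described; the source filename is hypothetical, and the parameter strings are the ones quoted above:

```avisynth
# PAL VOBs ripped with DVD Shrink, indexed and loaded (hypothetical filename):
MPEG2Source("ds9_pal.d2v")
# 1) QTGMC in progressive full-repair mode, with the customized string:
QTGMC(Preset="Placebo", InputType=3, SourceMatch=3, Lossless=2, MatchEnhance=1.0, \
    MatchPreset="Placebo", MatchPreset2="Placebo", Sharpness=1.0, SMode=2, TR2=2, \
    Rep0=11, Rep1=9, Rep2=9, RepChroma=true, EdiMode="EEDI3+NNEDI3", Sbb=0, \
    NoiseProcess=1, ChromaNoise=true, DenoiseMC=true, NoiseTR=2, GrainRestore=1.0, \
    NoiseRestore=0.1, NoiseDeint="Generate", StabilizeNoise=true, Border=true, EdiThreads=8)
Deblock_QED(quant1=35)   # 2) deblocking; the default quant1 would be 24
pSharpen(strength=75, threshold=90, ss_x=2.0, ss_y=2.0)               # 3) extra sharpening
MAA2(mask=1, chroma=true, ss=2.0, aa=128, aac=128, threads=4, show=0) # 4) anti-aliasing
GradFun3(smode=2)        # 5) debanding
# 6) The final crop is applied automatically by StaxRip.
```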

When it comes to Topaz VEAI, I know you prefer Gaia CG, and while I worked with it as well for some time, I do not use it for DS9; here I use Gaia HQ.
It’s a personal preference, for the following reasons:
Gaia CG smooths/cleans out too much detail (in my opinion) and makes the image look a little too artificial; faces are a particular issue here (wax effect).
Furthermore, HQ adds a little more sharpening while keeping a bit more grain/noise, which is nice, since I tried to preserve some when doing QTGMC.
And: sometimes Gaia CG has an issue with introducing a visual “grid pattern” that hasn’t been solved yet, and especially during nature shots (see the holodeck scene with Jake and his father) the trees don’t look right. That really is something Topaz should take care of.

So that’s it. I hope that was useful for you; perhaps you could adapt some of my suggestions to your work.
I would like to give something back for you have really helped me with mine.

Author
Time

snip
(or maybe SNIP, because that’s a big bit I’m clipping) 😉

So that’s it. I hope that was useful for you; perhaps you could adapt some of my suggestions to your work.
I would like to give something back for you have really helped me with mine.

Just wanted to say: I’m going to read this in the morning because it’s very late here (or very early), but I’m really glad I was able to help you.

There are something like 3-6 remastering projects going on right now for various Star Trek properties. Maybe even more. I’m aware of you, myself, Project Defiant, and QueerWorm’s project over on Github. There are at least a dozen other people I know of who aren’t really working on public “projects” per se, but have been quietly building their own remasters / upscale collections. And that’s just the folks I know of who are specifically working with AI, as opposed to people who were performing all the other work you can use AviSynth for over the years.

We all have different short-term goals in terms of preferred filters or AI models, but ultimately, we all want the same thing: The best possible version of Deep Space Nine, for whatever device/region/bit-rate/quality-level anybody wants. I’m happy to contribute to that effort.

I want our collective efforts to stand as a counter-weight to the absolutely terrible condition of the show on Netflix. I want people who see the show on Netflix, and like it, and wonder if a better version might exist somewhere, to discover that one absolutely does, and that if they’re willing to buy a set of DVDs and invest some time and patience, they can have a version of the show that approaches HD quality.

Thanks for sharing your code. Going to experiment tomorrow. 😃

Author
Time
 (Edited)

Completely understandable. Time zones bite. Read and experiment whenever you have time.

I can’t always work on this project when I want to either. After all, we all have day jobs, many or at least some of us probably have families, and then there is always that little thing (like an email from a friend, or someone asking you to give them a hand) that blows up into much more and … bam, you’re sidetracked again.

But since we are all doing this for free, I think we’re coming along pretty well.

And yes, the Netflix version is … I don’t think a word has been invented yet to accurately describe my feelings on that one, but I would like to paraphrase Garak: “To see that one burn wouldn’t exactly be tragic”.

But then again, times move fast, and physical storage media are becoming more and more a thing of the past; streaming and online cloud storage is where we are headed at warp speed.
A friend of mine who used to manage a classic video store (yes, they even had VHS tapes until about 2012) said, “The way things are going, DVDs will suffer the same fate as VHS; Blu-ray will probably endure for a while, but streaming will replace that as well some day.” The way things are going, he expects the DVD to be more of a collector’s item within the next 5-10 years.

And that is (in my eyes) something to be sad about, since there are many shows and movies that never made the jump from DVD to Blu-ray, and even when they are available on streaming services, they have been transferred so carelessly/badly that the original source is still the better way to go (if you can still find it).

The same friend had the opportunity to take any VHS tapes he wanted for free when the store was cleared out.
I can still remember him carrying boxes to his small car, and afterwards telling me he couldn’t believe how insane it was for the company that owned the store chain to say, in effect, “Well, that’s that, no one wants that old stuff, so collect it and throw it out.”
Thankfully, he managed to “sneak out” a lot before that happened, and luckily no one noticed (or perhaps even cared). He still feels sorry today for not being able to save them all; he rescued about 3,000 tapes, among them old Disney classics, movies and so on (can you imagine what his place looks like?).
But he told me that pales in comparison to the total stock they had (about 8,000 at the last inventory he did back then).

And there are still those gems sometimes, like extended versions or “prints” that were done once and forgotten.

Anyway, let’s hope our efforts will matter to people/fans.

Author
Time

Joel Hruska said:
FrankB,

Pleasure to meet you. Before we discuss relative processing technique I should probably provide you some samples. For example:

Nice to meet you, too. You are right: theoretical discussions are always a bit too - theoretical. Your results are astonishing, especially the captions! I am still skeptical about the whole AI upscaling business (which is why I wrote about it in another thread; if you are interested in pure theory I can search for it), but maybe I am just too old by now - or a mix of that and some genuine objections…
But it looks great!
Criticism and proposal: for my taste there is a bit too LITTLE noise. Maybe you should consider:
-denoising in AviSynth (as you did), because the AI denoising may be worse in quality; that way you keep full control of the denoising
-upscaling the denoised clip, which is necessary so the AI doesn’t produce too many “new details from noise”
And, new:
-mixing back some of the original noise(!), which makes it more natural. For example, just resize the original in AviSynth with nnedi3 or similar and mix it back with Overlay(…, opacity=0.2). We do this very often, and re-noising is common practice in studios.
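A minimal sketch of the mix-back described above, with hypothetical clip names: upscale the untouched source with nnedi3 and blend a small fraction of it over the cleaned/upscaled result so some of the original grain returns.

```avisynth
# 'clean' = the denoised, AI-upscaled clip; 'source' = the untouched DVD clip.
# Both names are placeholders for however those clips were produced.
noisy = source.nnedi3_rpow2(4).Spline36Resize(clean.Width, clean.Height)
Overlay(clean, noisy, opacity=0.2)   # blend ~20% of the original texture back in
```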

The net effect of TR2=4 or TR2=5 is a substantial improvement in the final output.

You are right concerning aliasing. But you pay for it with less detail before the AI stage (I suppose).
I don’t like QTGMC’s InputType > 0, partly because in some scenes it works quite well and then suddenly there is almost no effect.

I have spent 20-40 hours per week for the past nine months running thousands of encodes of Deep Space Nine. DS9, however, is also my first project.

I wish I had the time for my private projects, too. Hats off to all your efforts; it’s great that there are still people who really pull something off.

QTGMC2 = QTGMC(Preset="Very Slow", SourceMatch=3, TR2=5, InputType=2, Lossless=2, MatchEnhance=0.75, Sharpness=0.5, MatchPreset="Very Slow", MatchPreset2="Very Slow")
QTGMC3 = QTGMC(Preset="Very Slow", SourceMatch=3, Lossless=2, Sharpness=0.5, MatchEnhance=0.75, InputType=3, TR2=5)

After a lot of experiments some years ago, I decided not to use “Placebo” and “Very Slow” any more, because you lose too many details. In this special case (feeding the AI upscaler) it may be fine - but as I said before: you should consider putting SOME of the noise back in at the end…

Repair(QTGMC2, QTGMC3, 9)

That seems interesting, I never had this idea!

If you want 23.976 fps output, just throw TFM() and Tdecimate() ahead of the QTGMC calls.

But wouldn’t this ruin the original 29.97i (CGI) sequences? Or aren’t there any? I am sure there must be; I never checked this myself, just picked it up from doom9 postings.

Baseline DVD. From PastPrologue.
Identical screenshot after processing. Zero upscale:

Sorry, but in screenshot 2 there is more aliasing than in 1. Look at the shoulder.
But maybe this is all obsolete with the PAL sources? I am ashamed not to have found the time to even look at them (apart from watching some episodes late in the evening, when my brain doesn’t want to think any more…)

If you know a better way to clean up the former into the latter – possibly by preserving more detail on Bashir’s forehead, where my method is losing some of it – I’d love to incorporate it.

We should postpone everything else until you’ve tried the PAL sources, shouldn’t we?
But again: Astonishing!

Author
Time

FrankB said:

Joel Hruska said:
FrankB,

Pleasure to meet you. Before we discuss relative processing technique I should probably provide you some samples. For example:

Nice to meet you, too. You are right: theoretical discussions are always a bit too - theoretical. Your results are astonishing, especially the captions! I am still skeptical about the whole AI upscaling business (which is why I wrote about it in another thread; if you are interested in pure theory I can search for it), but maybe I am just too old by now - or a mix of that and some genuine objections…
But it looks great!
Criticism and proposal: for my taste there is a bit too LITTLE noise. Maybe you should consider:
-denoising in AviSynth (as you did), because the AI denoising may be worse in quality; that way you keep full control of the denoising
-upscaling the denoised clip, which is necessary so the AI doesn’t produce too many “new details from noise”
And, new:
-mixing back some of the original noise(!), which makes it more natural. For example, just resize the original in AviSynth with nnedi3 or similar and mix it back with Overlay(…, opacity=0.2). We do this very often, and re-noising is common practice in studios.

The net effect of TR2=4 or TR2=5 is a substantial improvement in the final output.

You are right concerning aliasing. But you pay for it with less detail before the AI stage (I suppose).
I don’t like QTGMC’s InputType > 0, partly because in some scenes it works quite well and then suddenly there is almost no effect.

I have spent 20-40 hours per week for the past nine months running thousands of encodes of Deep Space Nine. DS9, however, is also my first project.

I wish I had the time for my private projects, too. Hats off to all your efforts, great that there are still people who really pull off something.

QTGMC2 = QTGMC(Preset="Very Slow", SourceMatch=3, TR2=5, InputType=2, Lossless=2, MatchEnhance=0.75, Sharpness=0.5, MatchPreset="Very Slow", MatchPreset2="Very Slow")
QTGMC3 = QTGMC(Preset="Very Slow", SourceMatch=3, Lossless=2, Sharpness=0.5, MatchEnhance=0.75, InputType=3, TR2=5)

After a lot of experiments some years ago, I decided not to use “Placebo” and “Very Slow” any more, because you lose too many details. In this special case (feeding the AI upscaler) it may be fine - but as I said before: you should consider putting SOME of the noise back in at the end…

Repair(QTGMC2, QTGMC3, 9)

That seems interesting, I never had this idea!

If you want 23.976 fps output, just throw TFM() and Tdecimate() ahead of the QTGMC calls.

But wouldn’t this ruin the original 29.97i (CGI) sequences? Or aren’t there any? I am sure there must be; I never checked this myself, just picked it up from doom9 postings.

Baseline DVD. From PastPrologue.
Identical screenshot after processing. Zero upscale:

Sorry, but in screenshot 2 there is more aliasing than in 1. Look at the shoulder.
But maybe this is all obsolete with the PAL sources? I am ashamed not to have found the time to even look at them (apart from watching some episodes late in the evening, when my brain doesn’t want to think any more…)

If you know a better way to clean up the former into the latter – possibly by preserving more detail on Bashir’s forehead, where my method is losing some of it – I’d love to incorporate it.

We should postpone everything else until you tried the PAL sources, shouldn’t we?
But again: Astonishing!

I will provide my PAL version soon in the release topic, where I will invite both of you; perhaps together we can come up with something.

Author
Time
 (Edited)

Oh Frank, I see I hadn’t posted my sample images around here. I will now, so you can see a PAL version with some of Joel’s key points as well as my own tweaking. So here goes:








EDIT: Sorry for the double post, I had skipped too far/fast. My bad.

Author
Time
 (Edited)

They look quite good, especially the last one! And no aliasing at all, or have I overlooked it? But one can only guess how good they really are, because of the low resolution.
One strange effect: everything in the last shot is sharper, somehow “thinner” and (apparently) more detailed, but the upper line of the tractor beam is somehow jaggy. It seems the AI has no real plan for these kinds of “lines”.

Author
Time
 (Edited)

FrankB said:

Joel Hruska said:
FrankB,

Pleasure to meet you. Before we discuss relative processing technique I should probably provide you some samples. For example:

Nice to meet you, too. You are right: theoretical discussions are always a bit too - theoretical. Your results are astonishing, especially the captions! I am still skeptical about the whole AI upscaling business (which is why I wrote about it in another thread; if you are interested in pure theory I can search for it), but maybe I am just too old by now - or a mix of that and some genuine objections…
But it looks great!
Criticism and proposal: for my taste there is a bit too LITTLE noise. Maybe you should consider:
-denoising in AviSynth (as you did), because the AI denoising may be worse in quality; that way you keep full control of the denoising
-upscaling the denoised clip, which is necessary so the AI doesn’t produce too many “new details from noise”
And, new:
-mixing back some of the original noise(!), which makes it more natural. For example, just resize the original in AviSynth with nnedi3 or similar and mix it back with Overlay(…, opacity=0.2). We do this very often, and re-noising is common practice in studios.

I’m experimenting with renoise options at the moment to see how they impact things. I am also trying to calibrate the proper amount of processing and denoising to do on the front end before letting an AI program have a go at it. Some AI models include denoising that works effectively, but I’d rather not have to rely on it in the first place.

The net effect of TR2=4 or TR2=5 is a substantial improvement in the final output.

You are right concerning aliasing. But you have to pay for it with less detail before the AI step (I suppose).
I don’t like QTGMC’s “input type” > 0 either, partly because in some scenes it works pretty well, and then in others there is suddenly almost no effect.

I have spent 20-40 hours per week for the past nine months running thousands of encodes of Deep Space Nine. DS9, however, is also my first project.

I wish I had the time for my private projects, too. Hats off to all your efforts, great that there are still people who really pull off something.

Because I was able to work on this project as part of my job, I could treat it accordingly. This is not to say I haven’t invested huge amounts of time, because I still had everything else to do with my job while working on this; none of my work responsibilities got shifted. But I also wanted to try to build a new coverage area for ExtremeTech in this space, so… mutual priority alignment.

QTGMC2 = QTGMC(Preset="Very Slow", SourceMatch=3, TR2=5, InputType=2, Lossless=2, MatchEnhance=0.75, Sharpness=0.5, MatchPreset="Very Slow", MatchPreset2="Very Slow")
QTGMC3 = QTGMC(Preset="Very Slow", SourceMatch=3, Lossless=2, Sharpness=0.5, MatchEnhance=0.75, InputType=3, TR2=5)

After a lot of experiments some years ago, I decided not to use “Placebo” and “Very Slow” any more, because you lose too many details. In this special case (feeding the AI upscaler) it may be fine - but as I said before: you should consider putting SOME of the noise back at the end…

Repair(QTGMC2, QTGMC3, 9)

That seems interesting, I never had this idea!

If you want 23.976 fps output, just put TFM() and TDecimate() ahead of the QTGMC calls.
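In script form, that ordering looks roughly like this (the source line is hypothetical, and the QTGMC parameter list is abbreviated):

```
# IVTC first, so QTGMC receives progressive 23.976 fps frames.
MPEG2Source("episode.d2v")   # hypothetical D2V of the NTSC DVD
TFM()                        # field matching
TDecimate()                  # decimation: 29.97 -> 23.976 fps
QTGMC(Preset="Very Slow", InputType=2, SourceMatch=3, TR2=5)
```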

But wouldn’t this ruin the original 29.97i (CGI) sequences? Or aren’t there any? I am sure there must be; I never checked this myself, I just picked it up from doom9 postings.

It’s not the FX scenes that are automatically in 29.97. In fact, in the first season, at least some episodes are basically 100% film. I don’t know when this stops. In others, like Sacrifice of Angels, most of the battle scenes are 23.976 fps, though there’s one post-credits scene that has preserved incidents of 3:2 pulldown in a 29.97 fps stream. That one threw me for awhile, trying to figure out how that could happen. Baked-in source error is awesome.

Baseline DVD. From PastPrologue.
Identical screenshot after processing. Zero upscale:

Sorry, but in screenshot 2 there is more aliasing than in 1. Look at the shoulder.

I’m not seeing it. I see one pattern that might be what you are talking about, but doesn’t come across as aliased when the actor is in motion:

Here’s a pair of enlarged shoulder shots from the same base pair of images.

https://i.imgur.com/OnEo5w3.png (DVD)
https://i.imgur.com/rOgxQEE.png (upscale)

Are you referring to the very faint line above Bashir’s right (left from our perspective) shoulder?

But maybe this is all obsolete with the PAL sources? I am ashamed not to have found time to even look at them (apart from watching some episodes in the late evening, when my brain doesn’t want to think any more…)

If you know a better way to clean up the former into the latter – possibly by preserving more detail on Bashir’s forehead, where my method is losing some of it – I’d love to incorporate it.

We should postpone everything else until you’ve tried the PAL sources, shouldn’t we?
But again: Astonishing!

I don’t see why. I have access to the whole PAL show if I want it, but I also have episodes on-hand from S1 and S6. The PAL quality, as near as I can tell, is virtually identical to NTSC quality with the following differences:

1). Motion is intrinsically smoother and easier to deal with. NTSC can be brought back to PAL quality in this regard, but it’s taken me more work to do it.
2). There’s a very slight color shift, at least in S6. Colors that are slightly more blue in NTSC are slightly more purple in PAL.
3). PAL is stretched slightly and is just slightly blurrier by default. I compared this frame-by-frame in the NTSC and PAL editions of S6.
4). PAL, of course, has the 4% audio shift.
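For anyone starting from PAL, undoing that 4% speed-up in AviSynth looks roughly like this (a sketch - the source line is hypothetical, and this assumes the PAL discs carry the usual uncorrected speed-up, so slowing the clip down also restores the original pitch):

```
# PAL runs ~4.1% fast relative to 23.976 fps film speed.
MPEG2Source("pal_episode.d2v")           # hypothetical 25 fps progressive PAL source
AssumeFPS(24000, 1001, sync_audio=true)  # 25 -> 23.976 fps; audio slowed with the video
ResampleAudio(48000)                     # restore a standard sample rate afterwards
```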

Because I want to create a project for people to do at home with a legal source, asking people to buy PAL is pretty tricky. I want to write an article about the best way to deal with PAL, possibly in partnership with folks like yourselves and my friend Cyril, who have worked on it, if people are amenable to that. Either way, though, I want to be able to offer solutions for both.

Author
Time

Animaxx,

Going with the following initial run to blend our two approaches:

QTGMC2 = QTGMC(Preset="Very Slow", InputType=2, SourceMatch=3, Lossless=2, MatchEnhance=1.0, MatchPreset="Placebo", MatchPreset2="Placebo", Sharpness=1.0, SMode=2, TR2=4, Rep0=11, Rep1=9, Rep2=9, RepChroma=True, EdiMode="EEDI3+NNEDI3", Sbb=0, NoiseProcess=1, ChromaNoise=True, DenoiseMC=True, NoiseTR=2, GrainRestore=1.0, NoiseRestore=0.1, NoiseDeint="Generate", StabilizeNoise=True)

QTGMC3 = QTGMC(Preset="Very Slow", InputType=3, SourceMatch=3, Lossless=2, MatchEnhance=1.0, MatchPreset="Placebo", MatchPreset2="Placebo", Sharpness=1.0, SMode=2, TR2=4, Rep0=11, Rep1=9, Rep2=9, RepChroma=True, EdiMode="EEDI3+NNEDI3", Sbb=0, NoiseProcess=1, ChromaNoise=True, DenoiseMC=True, NoiseTR=2, GrainRestore=1.0, NoiseRestore=0.1, NoiseDeint="Generate", StabilizeNoise=True)
Repair(QTGMC2, QTGMC3, 9)

One thing I want to make sure you know: the command structure I was using – with variables loaded into the QTGMC2 call and “ReuseGlobals” invoked for the second call – doesn’t work. I mean, it works, in that it delivers output, but QTGMC3 does not inherit variables from QTGMC2. The “global” call is still confined to the QTGMC2 function.

So if you write a script like this:

QTGMC=QTGMC2(Placebo, Everything tweaked, tuned, and precisely chosen)

And you pair it with this:

QTGMC=QTGMC3(InputType=3, Reuse Previous Globals)

What you actually get, in terms of output, is your QTGMC2 data run against a bog-standard default QTGMC3 run with only InputType=3 as an assigned variable.

Writing out both lines changes the output.
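If I understand the pitfall correctly, it looks like this in actual script form (a sketch; I am assuming QTGMC’s `PrevGlobals` mechanism is what “ReuseGlobals” refers to, and the parameter lists are abbreviated):

```
# First pass: fully tuned call.
QTGMC2 = QTGMC(Preset="Placebo", InputType=2, SourceMatch=3, TR2=4)

# This does NOT inherit the tuned settings above -- it runs with
# QTGMC defaults plus InputType=3:
# QTGMC3 = QTGMC(InputType=3, PrevGlobals="Reuse")

# Writing out the full parameter list both times is what changes the output:
QTGMC3 = QTGMC(Preset="Placebo", InputType=3, SourceMatch=3, TR2=4)

Repair(QTGMC2, QTGMC3, 9)
```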

Author
Time

Joel Hruska said:
I’m experimenting with renoise options at the moment, to see how they impact things. Also, yes, trying to calibrate the proper amount of processing and denoising to do in the front-end before letting an AI program have a go at it. Some AI models include denoising that works effectively but I’d rather not have to use them in the first place.

My opinion: the better choice. If you let the AI denoise, it will kill details rather than create “new ones”. Just my experience - there may be other scenarios. Re-noising, to say it again, is best done with the original noise. That is not common practice, by the way; mostly they use some more or less random algorithms, as you surely know.

It’s not the FX scenes that are automatically in 29.97. In fact, in the first season, at least some episodes are basically 100% film. I don’t know when this stops. In others, like Sacrifice of Angels, most of the battle scenes are 23.976 fps, though there’s one post-credits scene that has preserved incidents of 3:2 pulldown in a 29.97 fps stream. That one threw me for awhile, trying to figure out how that could happen. Baked-in source error is awesome.

So there ARE both pulldown 23.976 and native 29.97 scenes, right? That fact alone makes it A LOT harder to deal with the NTSC sources than with PAL, if in the end you want
-progressive results (to use with the AI)
-stutter-free results.
In addition, the PAL sources have less aliasing. So two very strong reasons to use PAL as the source.
And are there also scenes where they overlaid both? Am I right with this speculation? We had this in a series I worked on a few years ago, also SciFi, made around the same time as DS9, a bit earlier. The overlaid scenes are definitely not stutter-free, and there is no way to IVTC them 100% correctly.
We handled the native 29.97 scenes differently from the cleanly IVTCed 23.976 scenes, by the way, and had to convert them with Alchemist optical flow, which was the best option at that time (today AI is somewhat better in these cases, too…).

Sorry, but in screenshot 2 there is more aliasing than in 1. Look at the shoulder.

I’m not seeing it. I see one pattern that might be what you are talking about, but doesn’t come across as aliased when the actor is in motion:

The leftmost part, with the light background. A big difference in terms of “staircases”.

I don’t see why. I have access to the whole PAL show if I want it, but I also have episodes on-hand from S1 and S6. The PAL quality, as near as I can tell, is virtually identical to NTSC quality with the following differences:

Reasons are above. Of course, your decision.

1). Motion is intrinsically smoother and easier to deal with. NTSC can be brought back to PAL quality in this regard, but it’s taken me more work to do it.

I will have to check this myself at some point. I have only read at doom9 that IVTC seems complicated.

2). There’s a very slight color shift, at least in S6. Colors that are slightly more blue in NTSC are slightly more purple in PAL.
3). PAL is stretched slightly and is just slightly blurrier by default. I compared this frame-by-frame in the NTSC and PAL editions of S6.
4). PAL, of course, has the 4% audio shift.

These are of course no significant reasons to take the PAL sources, I agree.

Because I want to create a project for people to do at home with legal source, asking people to buy PAL is pretty tricky.

Come on… I do love this code of honour that people obey here, but the difference between PAL and NTSC sources of the same show is purely technical.

I want to write an article about the best way to deal with PAL

What do you mean by “deal with PAL”? To convert it (back) to NTSC? For me it’s no question, I am in PAL-country, for me the question always is “how to deal with NTSC”…

Author
Time
 (Edited)

FrankB said:

Joel Hruska said:
I’m experimenting with renoise options at the moment, to see how they impact things. Also, yes, trying to calibrate the proper amount of processing and denoising to do in the front-end before letting an AI program have a go at it. Some AI models include denoising that works effectively but I’d rather not have to use them in the first place.

My opinion: the better choice. If you let the AI denoise, it will kill details rather than create “new ones”. Just my experience - there may be other scenarios. Re-noising, to say it again, is best done with the original noise. That is not common practice, by the way; mostly they use some more or less random algorithms, as you surely know.

It’s not the FX scenes that are automatically in 29.97. In fact, in the first season, at least some episodes are basically 100% film. I don’t know when this stops. In others, like Sacrifice of Angels, most of the battle scenes are 23.976 fps, though there’s one post-credits scene that has preserved incidents of 3:2 pulldown in a 29.97 fps stream. That one threw me for awhile, trying to figure out how that could happen. Baked-in source error is awesome.

So there ARE both pulldown 23.976 and native 29.97 scenes, right? That fact alone makes it A LOT harder to deal with the NTSC sources than with PAL, if in the end you want
-progressive results (to use with the AI)
-stutter-free results.
In addition, the PAL sources have less aliasing. So two very strong reasons to use PAL as the source.
And are there also scenes where they overlaid both? Am I right with this speculation? We had this in a series I worked on a few years ago, also SciFi, made around the same time as DS9, a bit earlier. The overlaid scenes are definitely not stutter-free, and there is no way to IVTC them 100% correctly.
We handled the native 29.97 scenes differently from the cleanly IVTCed 23.976 scenes, by the way, and had to convert them with Alchemist optical flow, which was the best option at that time (today AI is somewhat better in these cases, too…).

Sorry, but in screenshot 2 there is more aliasing than in 1. Look at the shoulder.

I’m not seeing it. I see one pattern that might be what you are talking about, but doesn’t come across as aliased when the actor is in motion:

The leftmost part, with the light background. A big difference in terms of “staircases”.

Alright. Now I see it. If that’s what you call a major aliasing problem, I strongly suggest you never watch the NTSC credits. I cannot upload them to YouTube in SD because the YT algorithm utterly destroys SD content, so I have to give you screenshots – but take a look.

https://i.imgur.com/JBV6gtu.png

That’s a close-in zoom on the station and the runabout passing it. Here’s a few frames in sequence. I’m going to give you each frame so you can see what I’m working with:

https://i.imgur.com/t0QiE1a.png
https://i.imgur.com/qGbQlBX.png
https://i.imgur.com/r9PcDef.png
https://i.imgur.com/uILnJdN.png
https://i.imgur.com/cHK9fCC.png
https://i.imgur.com/BeHLD4Z.png
https://i.imgur.com/gmUSqFp.png

Look at the degree of aliasing in those images, and I think you’ll understand why I’m experimenting with TR2=4 or TR2=5. TR2=3 creates a ripple across the front of the station that TR2=4 (at least) helps fix.

Deinterlacing is enabled in these images. That’s not what the output looks like without deinterlacing. That’s what the output looks like with deinterlacing, although it is being assembled from the 29.97 fps D2V file. If you watch a MakeMKV file it’s not so terrible looking, though I was never able to find a way to start this process with a MakeMKV-derived file and wind up with smooth motion at the end of it. I’m not saying it isn’t possible – but I searched for a way to create smooth motion out of a VFR MakeMKV base-file for about six of the nine months I worked on this project, and found a way to create first a perfectly smooth 60 fps version and then a 23.976 fps version (with help from Cyril, who is also working on the PAL version of the show). Regardless, the credits are the absolute worst-looking part of the show on NTSC.

I don’t see why. I have access to the whole PAL show if I want it, but I also have episodes on-hand from S1 and S6. The PAL quality, as near as I can tell, is virtually identical to NTSC quality with the following differences:

Reasons are above. Of course, your decision.

1). Motion is intrinsically smoother and easier to deal with. NTSC can be brought back to PAL quality in this regard, but it’s taken me more work to do it.

I will have to check this myself at some point. I have only read at doom9 that IVTC seems complicated.

2). There’s a very slight color shift, at least in S6. Colors that are slightly more blue in NTSC are slightly more purple in PAL.
3). PAL is stretched slightly and is just slightly blurrier by default. I compared this frame-by-frame in the NTSC and PAL editions of S6.
4). PAL, of course, has the 4% audio shift.

These are of course no significant reasons to take the PAL sources, I agree.

Because I want to create a project for people to do at home with legal source, asking people to buy PAL is pretty tricky.

Come on… I do love this code of honour that people obey here, but the difference between PAL and NTSC sources of the same show is purely technical.

I want to write an article about the best way to deal with PAL

What do you mean by “deal with PAL”? To convert it (back) to NTSC? For me it’s no question, I am in PAL-country, for me the question always is “how to deal with NTSC”…

The reason to write an article telling people how to deal with PAL is so that PAL people know how to best convert the show. I am not advocating for some kind of code of honor. I want to produce the best overall version of Deep Space Nine. But the goal is to provide a “Best-in-class” improvement method to everyone, which means I’m also interested in the best way to handle PAL.

Doom9 is correct that IVTC is complicated. The only way to perform it perfectly, as far as I know, is to hand-comb every scene and manually tune the frame order method via a TFM OVR file.

However – it turns out that the following actually works pretty damn well:

TFM()
TDecimate()

I have also developed a secondary method of creating a 60 fps version of the show that matches the quality of what I’ve shown you. The same Repair command I mention above will get rid of the frame blends and interpolations created by using TDeinterlace in double-rate mode and raising the frame rate to 60 fps. Right now, you have to use two different files to create a third with my 60 fps conversion method, IF you want the final quality to match the 23.976 fps method and to show no blended frames – but I may have found a way to do it all in a single ingestion, in one script, with no loss of quality.
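A very rough sketch of the double-rate half of that idea (only the bob step and the Repair pattern described above; the full two-file pipeline is Joel’s, and the source line here is hypothetical):

```
# Double-rate ("bob") deinterlacing: every field becomes a frame, 29.97i -> 59.94p.
MPEG2Source("episode.d2v")        # hypothetical source
bob1 = TDeint(mode=1)             # TDeinterlace in double-rate mode; may leave blends
bob2 = QTGMC(Preset="Very Slow")  # QTGMC also bobs to 59.94p by default
Repair(bob1, bob2, 9)             # use one clip to repair blend artifacts in the other
```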

My Rio Grande encode uses 23.976 but is not perfect. It handles the majority of video content correctly, but not all of it. Right now, my solution to that problem is to keep looking for ways to improve it while offering Orinoco at 60 fps as an alternative if someone runs into a problem episode. If I have to ask people to accept a few seconds of jerky motion every few episodes as the cost of this kind of quality improvement (or, alternately, to build that episode in 60 fps), that seems like a fair tradeoff to me.

One of the major goals of my project is an automated process. I want people to need to make as few script changes as possible, and while I’m willing to write a custom script to fix an episode if I must, I’m trying to avoid that being necessary. The hand-combing and tuning that works for manual IVTC will unwind DS9 perfectly, but I didn’t want to do the scene-by-scene combing it apparently requires. I am still searching for a way to automate it perfectly (the commands above are automatic, but not perfect).

Initial evaluation of naive implementation of AniMaxx’s algorithm suggests it’s oversharpening in my case. I like the overall output otherwise. Going to adjust some variables, then toss in the rest and see what it looks like. 😃

Author
Time

Initial evaluation of naive implementation of AniMaxx’s algorithm suggests it’s oversharpening in my case. I like the overall output otherwise. Going to adjust some variables, then toss in the rest and see what it looks like.

I am happy to see that a few pointers from my approach worked, but I was actually afraid it would oversharpen in your case. My initial NTSC sharpening approach was enough for that medium, but PAL is softer, so I had to adjust - in your case you will have to reduce.

Author
Time

Also Joel, I think it is really kind and generous of you to try to offer solutions for both standards.

That would probably help a lot of people, who for some reason may just have access to one of the formats or simply don’t want to spend money again on something they already own, but still would like to “spice it up a bit”.

And besides: In the end every attempt benefits us all, for I have the dream that our persistence and idealism will someday shame someone at CBS into thinking “Oops, fans outsmarted and outworked us, now everyone can have it for free? Let’s jump on that train and make some cash!”
And if they did it well, I would gladly pay.

One can dream, right?

Author
Time

I’m enjoying all this technical detail. I have no bloody clue what is being said (it may as well be in Klingonese for all I know) but it looks like there’s several people who really know what needs to be done here.

As for CBS, every time somebody uploads a test video, I go watch a couple of episodes on Netflix. I think it’ll be the viewing figures for DS9 and VGR that make the case to CBS that they might have to pull their fingers out eventually, as they’ll be the only two of the nine produced TV shows without an HD release.

Author
Time
 (Edited)

Artan42 said:

I’m enjoying all this technical detail. I have no bloody clue what is being said (it may as well be in Klingonese for all I know) but it looks like there’s several people who really know what needs to be done here.

As for CBS, every time somebody uploads a test video, I go watch a couple of episodes on Netflix. I think it’ll be the viewing figures for DS9 and VGR that make the case to CBS that they might have to pull their fingers out eventually, as they’ll be the only two of the nine produced TV shows without an HD release.

Believe me, this is what I feel like when people start talking avisynth. I am still learning by doing, but to be honest, if there is a way to use a GUI that is visually pleasing, I prefer that.

I guess if you were to do it all the time, it becomes second nature, but I think everything in the general direction of avisynth isn’t for me.