
FrankB

User Group
Members
Join date
12-Jan-2017
Last activity
25-Apr-2024
Posts
48

Post History

Post
#1374796
Topic
Star Trek Deep Space Nine - NTSC DVD Restoration & 1080p HD Enhancement (Emissary Released)
Time

Joel Hruska said:
Alright. Now I see it. If that’s what you call a major anti-aliasing problem, I strongly suggest you never watch the NTSC credits. I cannot upload them to YouTube in SD because the YT algorithm utterly destroys SD content, so I have to give you screenshots – but take a look.

Looks like shifted fields. Can you upload the credits, or a few seconds of it, without conversion somewhere?
I do not understand where the problem really lies.

Look at the degree of aliasing in those images, and I think you’ll understand why I’m experimenting with TR2=4 or TR2=5. TR2=3 creates a ripple across the front of the station that TR2=4 (at least) helps fix.

Seems not the right way to handle this, but let me see it first - maybe I will have to admit it IS that horrible. But hard to believe.

The reason to write an article telling people how to deal with PAL is so that PAL people know how to best convert the show. I am not advocating for some kind of code of honor. I want to produce the best overall version of Deep Space Nine. But the goal is to provide a “Best-in-class” improvement method to everyone, which means I’m also interested in the best way to handle PAL.

Not much difference from NTSC. After you have made the NTSC sources progressive and anti-aliased whatever residual aliasing came from reordering the fields, it should be pretty much the same procedure, plus a slowdown at the very end.

Doom9 is correct that IVTC is complicated. The only way to perform it perfectly, as far as I know, is to hand-comb every scene and manually tune the frame order method via a TFM OVR file.

It is even better not to write a file for TFM, but to IVTC completely by hand. Not very complicated; I do this all the time, as long as the pattern doesn't change too often. With the number of episodes it is quite a job, though, but so is writing files for TFM… If you like, I can post a typical script for by-hand IVTC.
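To illustrate, a minimal by-hand IVTC sketch - the source filter, the SelectEvery indices and the cadence phase are only examples; in practice you read the correct field indices off your own source:

v = Mpeg2Source("episode.d2v").AssumeTFF()
# one 3:2 phase: keep 8 of every 10 fields, weave them back into 4 progressive frames
SeparateFields(v).SelectEvery(10, 0,1, 2,3, 4,5, 6,9).Weave()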

However – it turns out that the following actually works pretty damn well:

TFM()
TDecimate()

That would be easy then, but if you do so with native 29.97 (i or p) content there will be dropped frames and stuttering (is that the right word?) all the time.

I have also developed a secondary method of creating a 60 fps version of the show that matches the quality of what I’ve shown you.

Not a good idea, because it doesn't remove the stuttering from the telecine double fields. That is the whole problem with content like this. Progressive 30 or 60 fps will stutter with material that was originally 24 fps film. How could this be avoided? There have to be doubled frames, or at least fields - or can you divide 60 evenly by 24?

Still searching for a way to automate it perfectly (the commands above are automatic, but not perfect).

I don't know exactly what your goal is. But even the first necessary step, the IVTC, cannot be done perfectly automatically. That's why we do it by hand whenever possible.

Initial evaluation of naive implementation of AniMaxx’s algorithm suggests it’s oversharpening in my case. I like the overall output otherwise. Going to adjust some variables, then toss in the rest and see what it looks like. 😃

I wouldn't sharpen at all, only a bit of unsharp masking, but that's another question… 😉

Post
#1374762
Topic
Star Trek Deep Space Nine - NTSC DVD Restoration & 1080p HD Enhancement (Emissary Released)
Time

Joel Hruska said:
I’m experimenting with renoise options at the moment, to see how they impact things. Also, yes, trying to calibrate the proper amount of processing and denoising to do in the front-end before letting an AI program have a go at it. Some AI models include denoising that works effectively but I’d rather not have to use them in the first place.

My opinion: the better choice. Because if you let the AI denoise, it will kill details rather than create "new ones". Just my experience; there may be other scenarios. Renoising - to say it again - works best if you use the original noise. This is not common practice, by the way; mostly they use more or less random algorithms, as you surely know.

It’s not the FX scenes that are automatically in 29.97. In fact, in the first season, at least some episodes are basically 100% film. I don’t know when this stops. In others, like Sacrifice of Angels, most of the battle scenes are 23.976 fps, though there’s one post-credits scene that has preserved incidents of 3:2 pulldown in a 29.97 fps stream. That one threw me for a while, trying to figure out how that could happen. Baked-in source error is awesome.

So there ARE pulldowned 23.976 and native 29.97 scenes, right? Just this fact makes it A LOT harder to deal with the NTSC sources instead of PAL, if you finally want
-progressive (to use with AI)
-stutter-free
results.
In addition the PAL sources have less aliasing. So two very strong reasons to use PAL as sources.
And are there also scenes where they overlaid both? Am I right with this speculation? We had this in a series I worked on a few years ago, also SciFi, made about the same time as DS9, a bit earlier. The overlaid scenes are definitely not stutter-free, and there is no way to IVTC them 100% correctly.
By the way, we handled the native 29.97 scenes differently from the well-IVTCed 23.976 scenes and had to convert them with Alchemist optical flow, which was the best option at that time (today AI is also somewhat better in these cases…).

Sorry, but in screenshot 2 there is more aliasing than in 1. Look at the shoulder.

I’m not seeing it. I see one pattern that might be what you are talking about, but doesn’t come across as aliased when the actor is in motion:

The leftmost part, with the light background. A big difference concerning "staircases".

I don’t see why. I have access to the whole PAL show if I want it, but I also have episodes on-hand from S1 and S6. The PAL quality, as near as I can tell, is virtually identical to NTSC quality with the following differences:

Reasons are above. Of course, your decision.

1). Motion is intrinsically smoother and easier to deal with. NTSC can be brought back to PAL quality in this regard, but it’s taken me more work to do it.

I will have to check this myself at some point. I have only read on doom9 that IVTC seems complicated.

2). There’s a very slight color shift, at least in S6. Colors that are slightly more blue in NTSC are slightly more purple in PAL.
3). PAL is stretched slightly and just slightly blurrier by default. Compared this frame-by-frame in NTSC vs. PAL editions of S6.
4). PAL, of course, has the 4% audio shift.

These are of course no significant reasons to take the PAL sources, I agree.

Because I want to create a project for people to do at home with legal source, asking people to buy PAL is pretty tricky.

Come on… I do love this code of honour that people obey here, but the difference between PAL and NTSC sources of the same show is purely technical.

I want to write an article about the best way to deal with PAL

What do you mean by “deal with PAL”? To convert it (back) to NTSC? For me it’s no question, I am in PAL-country, for me the question always is “how to deal with NTSC”…

Post
#1374741
Topic
Star Trek Deep Space Nine - NTSC DVD Restoration & 1080p HD Enhancement (Emissary Released)
Time

They look quite good, especially the last one! And no aliasing at all - or have I overlooked it? But you can only guess how good they really are, because of the low resolution.
One strange effect: everything in the last shot is sharper, somehow "thinner", (apparently) more detailed, but the upper line of the tractor beam is somehow jaggy. It seems that AI has no real plan for this kind of "line".

Post
#1374714
Topic
Star Trek Deep Space Nine - NTSC DVD Restoration & 1080p HD Enhancement (Emissary Released)
Time

Joel Hruska said:
FrankB,

Pleasure to meet you. Before we discuss relative processing technique I should probably provide you some samples. For example:

Nice to meet you, too. You are right: theoretical discussions are always a bit too - theoretical. Your results are astonishing, especially the captions! I am still sceptical about the whole AI upsizing (which is why I wrote about it in another thread here; if you are interested in pure theory I can search for it), but maybe I have grown too old in the meantime - maybe a mix of that and some true facts…
But it looks great!
Criticism and proposal: for my taste there is a bit too LITTLE noise. Maybe you should consider the following:
-denoise in avisynth (as you did), because the AI denoising may be worse in quality; this way you have full control of the denoising
-scale up the denoised clip, which is necessary so the AI does not produce too many "new details from noise"
New:
-mix back some of the original noise(!), which makes it more natural. E.g. just resize the original in avisynth with nnedi3 or so and mix it back with overlay(…, opacity=0.2) or similar. We do this very often, and it's common practice in studios to re-noise.
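To make the renoise idea concrete, a small sketch - "src" and "upscaled" are hypothetical clip variables (the original SD clip and the AI result), and the opacity value is a matter of taste:

# resize the original (with its noise) to the size of the AI result
grain = src.nnedi3_rpow2(4).Spline36Resize(upscaled.Width, upscaled.Height)
# mix back 20% of the original on top of the AI output
Overlay(upscaled, grain, opacity=0.2)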

The net effect of TR2=4 or TR2=5 is a substantial improvement in the final output.

You are right concerning aliasing. But you have to pay with less detail before AI (I suppose).
I don’t like the QTGMC “input type” > 0, also because in some scenes it works pretty well, and sometimes suddenly there is quite no effect.

I have spent 20-40 hours per week for the past nine months running thousands of encodes of Deep Space Nine. DS9, however, is also my first project.

I wish I had the time for my private projects, too. Hats off to all your efforts, great that there are still people who really pull off something.

QTGMC2 = QTGMC(Preset="Very Slow", SourceMatch=3, TR2=5, InputType=2, Lossless=2, MatchEnhance=0.75, Sharpness=0.5, MatchPreset="Very Slow", MatchPreset2="Very Slow")
QTGMC3 = QTGMC(Preset="Very Slow", SourceMatch=3, Lossless=2, Sharpness=0.5, MatchEnhance=0.75, InputType=3, TR2=5)

After a lot of experiments some years ago I decided not to use "placebo" and "very slow" any more, because you lose too many details. In this special case (feeding the AI upscaler) it may be good - but as I said before: you should consider putting SOME of the noise back at the end…

Repair(QTGMC2, QTGMC3, 9)

That seems interesting, I never had this idea!

If you want 23.976 fps output, just throw TFM() and Tdecimate() ahead of the QTGMC calls.

But this would ruin the original 29.97i (cgi) sequences? Or aren’t there any? I am sure there must be, I never checked this myself up to now, just picked it up from doom9 postings.

Baseline DVD. From PastPrologue.
Identical screenshot after processing. Zero upscale:

Sorry, but in screenshot 2 there is more aliasing than in 1. Look at the shoulder.
But maybe this is all obsolete with the PAL sources? I am ashamed not to find time to even look at them (apart from watching some episodes in the late evening, when my brain doesn't want to think any more…)

If you know a better way to clean up the former into the latter – possibly by preserving more detail on Bashir’s forehead, where my method is losing some of it – I’d love to incorporate it.

We should postpone everything else until you tried the PAL sources, shouldn’t we?
But again: Astonishing!

Post
#1374496
Topic
Star Trek Deep Space Nine - NTSC DVD Restoration & 1080p HD Enhancement (Emissary Released)
Time

Joel Hruska said:
One difference I do know between PAL and NTSC is that PAL doesn’t have the same problems with aliasing that I’m trying to clean up with commands like TR2=5 or TR2=4. Some of the issues I have spent time fixing for NTSC just aren’t in the PAL copy.

If the NTSC sources are THAT bad that you have to anti-alias by QTGMC temporal filtering with a window of 4 or 5, it's really time for the PAL sources… I would never use this. If QTGMC, then always TR2=0 and StabilizeNoise=false. There are better ways to filter noise.

The native DVD credits for NTSC look terrible compared to PAL.

The PAL opening credits seem to stutter near the end (when the wormhole opens) in nearly every episode, but there are a few episodes where the conversion is stutter-free. I watched a lot of episodes but forgot to take notes on the good ones.

Animaxx said:

The audio I will be able to handle (I know how to adjust the original NTSC audio so it fits/syncs with PAL without any pitch issues

You don't have to change anything with the audio (unless there are small cut differences, I don't know); just slow the picture down at the very end to its (nearly) original 23.976 fps with AssumeFPS(24000, 1001).
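As a sketch of that final step - "video" and "ntsc_audio" are hypothetical variables; AssumeFPS only relabels the frame rate, no frames are touched:

video = video.KillAudio().AssumeFPS(24000, 1001)  # 25 fps -> 23.976 fps, runtime grows ~4%
AudioDub(video, ntsc_audio)                       # the untouched NTSC audio now lines up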

Post
#1373393
Topic
Star Trek Deep Space Nine - NTSC DVD Restoration & 1080p HD Enhancement (Emissary Released)
Time

Animaxx said:
Nothing ever really produced the desired effect.

Because of the pulldowned sequences that will ALWAYS stutter if not properly decimated. That’s a fact.

I guess I will try the PAL-discs now, since they have a professionally done 25 FPS, which is present uniform and throughout. Perhaps that will work.

I guess I took my mouth a bit too full (can you say this in English?). Yesterday I watched two episodes, and I have to admit there are a few doubles per episode. But only two or three (plus a few near the end of the opening credits, always), and despite that it really still seems a much better starting point.

But what bugs me most are the release policies of all those multimedia corporations. I mean, when we take a long, hard look at how many times certain series were released, re-released, given new editions and then re-re-released again and so on - was there really no time or option to work on them so they could be presented in an up-to-date standard form (like a constant framerate)?

Not 100% necessary in the NTSC-world, but costs money. That’s all.

Of course, I am no expert on the subject, but I would reckon’ this could be done automatically these days?

Not in this case. Doing it perfectly is real work - and for the c)-scenes still impossible, unless you mix (overlay) film and CGI again from scratch. That would mean HIGH costs. - And THEN they could also rescan the film parts in HD, adapt or improve the CGI parts and release proper BluRays…

After all, we fans have bought so many releases and special editions, paid for so many streaming options and downloads, there was no money left to do this? Seriously???

They did it with TNG, looked at the results, and decided accordingly. Their reasons must be clear, otherwise they would have produced HD versions of DS9 and Voyager as well. Why shouldn't they?

Post
#1373323
Topic
Star Trek Deep Space Nine - NTSC DVD Restoration & 1080p HD Enhancement (Emissary Released)
Time

You will get the usual stuttering in cases a) and c). You just make two full frames out of two fields with this, that's all. Every section that used pulldown (80-90% of an episode) will stutter, because no doubled frame has been decimated. It's harder to notice with cartoons, because of natural frame doublings, but try it with natural content.
There is no way, really no way, to get this stutter-free without removing (decimating) the doubled fields (after correct TFM, the doubled frames) of the pulldown.

And, by the way, for the b)-type (original 29.97 CGI) this is not the best way to handle it either. TFM is a field matcher, and if you use it on really interlaced content there is nothing to match: each field was captured at its very own point in time, so there are no progressive frames to reconstruct, because there never were any. So if the origin is really interlaced you will get
-TFM upper field --> result = interlaced, because there was nothing to match --> TFM's post-processing realizes this and, if not set to 0, deinterlaces (but not with the best possible quality)
-TFM lower field --> the same; maybe there is a switch that uses the other field for the post-processing deinterlacing, I am not sure at the moment - if not, you will get the same thing twice.
So you get two parts where no TFM field matching happened, only post-processing deinterlacing in rather medium quality, because TFM is not made for this situation. That post-processing deinterlacing is meant for the few frames that could not be matched correctly in an a)-situation (pulldowned 23.976 material).
If you want each field's content at full size in 59.94 you should simply bob with the highest quality (QTGMC again).
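A sketch of that bob for the truly interlaced scenes - "scene" is a hypothetical clip variable and the preset is just an example:

# truly interlaced 29.97i -> 59.94p, every field becomes a full frame
scene = scene.AssumeTFF().QTGMC(Preset="Slower", FPSDivisor=1)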

Post
#1373212
Topic
Star Trek Deep Space Nine - NTSC DVD Restoration & 1080p HD Enhancement (Emissary Released)
Time

Forgive me, but…
I read this thread from nearly the beginning and all of the discussion on doom9 ( https://forum.doom9.org/showthread.php?t=181209 ). 95% of your efforts, and everyone else's, seem to have been spent on trying to IVTC the NTSC sources.
With the NTSC-masters - as you of course know, just to specify again - the problems were:

a) film, pulldowned to 29.97 with changing patterns, as usual
b) a lot of scenes, mostly CGI, originally produced in 29.97i
c) worst of all, a lot of scenes with overlays of the two above(!), which has been mentioned only sporadically but is an eminent problem, because it is absolutely irreversible.
No matter how you handle this kind of content, you will never achieve results free from stutter, at least not in the c)-scenes - not even by using VFR.

But there are also PAL-DVDs of the series, and as I just read, you own them. And concerning IVTC they did a really good job with these! I am currently watching it, one to three episodes every evening, and stutter issues are very, very rare - about one or two every three or four episodes! (And I am trained to recognise these.)
So the question for me all the time was: why don't you just use the PAL-DVDs, deinterlace properly (QTGMC) if necessary, slow down and scale up, which was the real goal of this lovely project?

Post
#1366783
Topic
Song Of The South - many projects, much info & discussion thread (Released)
Time

Cleaning cartoons with RemoveDirt mostly does a good job, but it kills detail when movement is fast (look at the grindstone beginning at about 1:22). I would clean most of the scenes like you did, but exclude such fast-movement scenes (or parts of them). Sorry for the criticism - just my opinion, and the feeling that only the very best is good enough for this kind of film.

Post
#1325375
Topic
Info: Gigapixel AI vs infognition Super Resolution / What to use to upscale SD to HD or 4K
Time

phoenixobia said:
Interesting. Makes sense. I just don’t know scripting and how to learn to do it. But I know it’s great.

-Install AviSynth (for better compatibility, the 32-bit version first)
-Write a script (you can do it with Notepad) containing an input source and a return, e.g.:

v=avisource("D:\Videos\1.avi")
return v

Save it with the extension .avs.
Every video app that opens AVI files via the installed handler now opens this .avs as a video; VirtualDub2 recommended. That's all. When you write something between the two lines it becomes interesting. There are also many more "source" filters that can open nearly any possible format. I would recommend the LSMASHSource package, which comes with LWLibavVideoSource. Opening with this can take a while, because it writes an index, but it is fast afterwards. And it handles sources very correctly, without changing anything. There are more, also based on ffmpeg. Between the lines you can do almost everything with the video - more than you ever dreamt of…

But the big downside is the frame rate when it comes to 24fps film, that shift of speed and change of pitch in the audio after conversion to 25fps is so noticeable and annoying to me. I know how to convert a PAL source back to progressive 23.976fps and change the audio speed to match the new framerate to reverse this effect but when audio speed becomes slower, audio quality decreases to some extent because it goes from high pitch to low pitch.

Yes, you are right. It is especially a problem when you release something on BluRay with correct 23.976 or 24 fps, but the dubbing was made for PAL TV. So you have to slow the sound down. The quarter tone lower is, in my opinion, not as audible or annoying as the lower speed. One can correct the pitch, but this does not solve the problem. You could only release the BluRay in 25 fps, but this causes other problems and the picture remains too fast…
In spite of these problems I highly prefer PAL when I look at pulldown and IVTC issues.

film --> scanned with pulldown (telecined) --> stored really uncompressed or losslessly compressed --> IVTCed
would give you 100% progressive frames back, that’s right.

Yes, Laserdisc is uncompressed because it’s analog, correct?

Yes, but the problem I mentioned was not the Laserdisc or your transfer to harddisk, but the possible steps that happened to the material before that - from scanning the progressive celluloid film until it was copied to a Laserdisc. But I fear we are somewhat off-topic by now…

Post
#1325232
Topic
Info: Gigapixel AI vs infognition Super Resolution / What to use to upscale SD to HD or 4K
Time

phoenixobia said:

As for IVTC, I haven’t tried Avisynth yet but I’ve read about it and would try it. I am aware of the change of pattern that can happen but I think that could be the case with any algorithm. As you mentioned, doing those by hand to produce progressive results is the best way to have perfect results but I don’t see how it can be fast. Shouldn’t that take forever? Please elaborate on that.

The speed of a hand-made IVTC in avisynth (using SelectEvery) is lightning fast. But I think you meant producing the script. A script that handles the different patterns by hand can in most cases (>99%) be made in about 15-30 minutes. There are only three patterns in 99.5% of all cases; the only exceptions I have had so far were Japanese NTSC sources, which sometimes use a different way of pulldown, but also not often.
So you just have to specify those three patterns, handle them with variables, and put everything together with Trims.
It is a question of how exact you want your result to be. In a professional setting, where you have to do IVTC for series with >100 episodes, you surely wouldn't do this by hand. But if you do something with love - as everybody here does, I think - and are willing to spend some more time to achieve the very best result, it's no question, is it?
I can only encourage you and everyone to try avisynth. It's the most flexible thing out there for handling video, and you have FULL control of what's happening.
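A sketch of that Trim approach - the source filter, the SelectEvery indices and the frame numbers are all placeholders you determine from your own source:

src = Mpeg2Source("episode.d2v").AssumeTFF()
# the same source IVTCed at two different cadence phases (indices are examples)
pA = src.SeparateFields().SelectEvery(10, 0,1, 2,3, 4,5, 6,9).Weave()
pB = src.SeparateFields().SelectEvery(10, 0,1, 2,3, 6,7, 8,9).Weave()
# splice the correct phase per section; frame numbers refer to the decimated clips
pA.Trim(0, 2101) ++ pB.Trim(2102, 9540) ++ pA.Trim(9541, 0)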

Second, I’m not sure I understand the part you talk about telecining while scanning.
If the original footage has been 24fps film/animation and telecined (3:2 pulldown) to 29.97fps which is the case with Laserdiscs, IVTC is the process to get rid of those added frames and turn it to 23.976fps. The results should have no jagged edges or as you say staircase-artefacts and no half resolution. My IVTC video has no rough edges if that’s what you mean, so please explain this.

This ugly pulldown thing that you have with NTSC (I am lucky to live in a PAL region) is produced in different ways - sometimes while scanning, as one process (older sources), and sometimes later. But this is not the point of what I meant.
The point is compressing while interlaced. Maybe this Laserdisc source comes from an older scan. At a very early stage it was copied to, let's say, a DigiBeta cassette that was then archived. The DigiBeta format is quite good but compressed - not much, but lossily compressed. If the source copied to DigiBeta is progressive, you won't notice the compression with your eyes, no chance. But a pulldowned source is combed… The ugliest thing about pulldown is that in almost every case you do not simply add frames by doubling every fourth - no, it's fields that are added (as you know, of course), and that results in combing. Lossy compressions - regardless of how good they are - do harm in these cases. Let me specify:

film --> scanned with pulldown (telecined) --> stored really uncompressed or losslessly compressed --> IVTCed
would give you 100% progressive frames back, that’s right.

film --> scanned with pulldown --> maybe once stored uncompressed --> copied to and archived as DigiBeta --> even copied to another medium via SDI or similar interfaces --> IVTCed
will result in small jagged edges (thanks for the term) that increase with sharpening, even with AI. There is no lossy compression algorithm that does not produce ANY edges when handling combed material. You won't notice it in most cases, but you will see it if you sharpen - and that's what is done here.
The later the pulldown happens, the higher the chances of getting more or less artefact-free original progressive frames back.

Below are the images. It’s best to download and see at 100% but you can still see the difference here.

These look damned good! One could criticize many things, but to my feeling they really look good. I only doubt that this couldn't have been achieved with more conventional means than AI - and I doubt that GP doesn't take advantage of those. 😉

Post
#1325185
Topic
Info: Gigapixel AI vs infognition Super Resolution / What to use to upscale SD to HD or 4K
Time

It sounds as if you tried hard not to lose anything around the upscaling. It is really good that you exported from GPVE to lossless images, to assemble them afterwards. How well does the heart of this, GP Video Enhance, seem to work for you? I found it rather weak. Can you post some screens?

May I remark on a few things? I feel free to do so; maybe I can help:

For IVTC, one of the key points in your workflow, there are much better ways. Most avisynth experts use TIVTC, which produces excellent results without any loss in quality if you use avisynth correctly. Even better is to IVTC by hand with avisynth (even if the doom9 cracks don't like this… 😉 ). Very often you have only one pattern change, or fewer than 4 or 5, to remove, which is done quite fast by hand, and you get 100% jitter-free results, which NO automatic algorithm can achieve.
Also, unfortunately, if the telecining (pulldown) was done directly while scanning and different compressions were applied later, there often remain "staircase" artefacts (I don't know the right term in English) after IVTC. These are hard to correct without losing resolution, but if you plan to upscale/sharpen afterwards it is often better to remove them at the cost of a bit of resolution, which you "get back" (not really) with your upscale. Otherwise these staircases will become more and more visible. If you have a source with absolutely NO such artefacts after IVTC, you are lucky.

At the end I would rather export to some lossless codec; then it is possible to improve something later, edit something and so on. But maybe you just didn't mention it.

Post
#1325101
Topic
Info: Gigapixel AI vs infognition Super Resolution / What to use to upscale SD to HD or 4K
Time

I know, nobody wants to hear, but…
Just to assure you that I am no newbie in all this: I have been doing film restoration (editing, color correcting, encoding and so on) professionally for almost 20 years.
As all of you know, there is no real way to truly upscale anything (except for vector graphics…), because you cannot generate detail that isn't there, out of nothing. There are many cases where there ARE more details, just not obviously visible, where you can do a nice job with the right sharpening algorithms - but this, too, is no "real" upscaling. Clear.

This was the situation, until somebody had the idea to use AI…
Due to rather quick inventions and later "central" implementations, a lot of developers presented a lot of software that at the end of the day mostly used the same algorithms, and this looked very, very interesting in the beginning. There were and are samples upon samples that look unbelievably good, quite magic.
So I tested everything I could get - whenever I found the time for it. I thought: first see, then think. If it worked really well, the algorithms would not be worth thinking about, at least for me, who did not intend to take part in this by coding, just as a user.

The first upscale I did (after a lot of testing I decided to use Gigapixel for it) was for a reproduction of an old painting, to get a more detailed printing master. This worked incredibly well! In THIS case…
It worked because the brushstrokes are a kind of pattern that made it easy for Gigapixel to find SIMILAR PATTERNS in its database. And this is what it's all about in the end, with all machine learning:

To find a similar pattern that you can use for details! But…

This finding of similarities happens by accident! Only similarity counts! That means that if the structure of a distant grass valley for some reason looks like nearby green-coloured hair, then the machine learning algorithm MAY decide to use this structure and add details to the grass from somebody's green hair.
You MAY not notice this in a picture - sometimes you don't, sometimes you do, accidentally.
But in a video you always notice it, because the used patterns CHANGE all the time. That means that in one frame the AI uses different patterns than in the next ones. So this generates a lot of noise, unsteadiness, flicker and so on.
I made a lot of tests with it and wrote some not-too-bad avisynth scripts to compensate for these effects - not useless, but also not convincing in the end. No real upscaling is possible, least of all upscaling of old, not-so-good material, which would have been a great goal!

The newer thing about real video AI upscaling was that they tried to
-use neighbouring frames of the same scene as part of the database - a wise decision for finding really similar structures, but at the cost of not finding more sharpness in most cases…
-implement more steadiness in the selected patterns.
So I tested, e.g., Gigapixel's video tool that someone mentioned above.
As expected, the gain in detail was MUCH less than with image upscaling. Comparable to a rather medium conventional "upscale" quality.

So I began to think about the situation. My opinion: The problem with all this is:
Added details are still added by simple SIMILARITY, regardless of what origin they have!
So all this is still no real AI that would work like a human restorer. When a restorer adds a lost detail, e.g. when restoring a painting, he knows WHAT he restores! E.g. some grass in the distance…
So what is needed in the future is to build really HUGE nested databases of all the things you can see out there, with all the nesting, linking, rating and comparing that a human brain does - so that an algorithm will in the end KNOW that it has to take the RIGHT pattern, and not just something that looks similar.
Until then we will surely experience some improvements - but if you really look at close range, it will always have something of a Frankenstein nature.