
Star Wars GOUT in HD using super resolution algorithm (* unfinished project *) — Page 53

Author
Time

Wow, thanks for that mouse-over compare! While I’m moving the mouse on and off the pictures, I can easily angle my viewing and better see the fix on this angle-sensitive LCD screen. (CRT, come back! You were good!) It looks even better than I realized, especially on those black crayon-lines on many edges – gone!

I’ve found that DeHalo_alpha() tends to lighten the picture overall (it must be the “additive” way the fix is obtained). That’s the reason for Tweak(), to restore the luminance level. In the 1st picture group (previous post) I put in a -4 brightness. Some areas still looked a little light, so on this last group I put in a -5 brightness. What it really needed was brightness AND contrast adjustments, to balance the spectrum from top to bottom. I tried that the 1st time around, but Avisynth’s no-feedback commands made it a frustrating endeavor. That’s why I finally left contrast at its default value (does nothing) in Tweak() – just to let it be known it’s there and can/should be used, too.
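For anyone who wants to try it, the core of that adjustment is only two lines of Avisynth. A minimal sketch – DeHalo_alpha() is left at its defaults here, and the -5 is just the value I landed on for these test frames:

    DeHalo_alpha()              # halo removal; tends to lighten the picture slightly
    Tweak(bright=-5, cont=1.0)  # pull the luminance back down; cont=1.0 is Tweak()'s default (no contrast change)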

Again, don’t let post-application “softness” mislead. It was that way before “they” applied the wrong “type” of good-news/bad-news sharpening. With the numbers I worked up, I limited the re-sharpening just enough to roughly match the original picture’s sharpness without much affecting noise, jaggies, and outline remnants.

LimitedSharpen() is one of those non-intuitive-numbers functions. More experimentation than in my proof-of-concept would be needed to work out stronger sharpening without also “enhancing” picture artifacts. (It’s the same with DeHalo_alpha() and its functionality. More experimentation is needed there, too.)
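For the record, here’s the kind of starting point I mean. This is only a sketch – the values are the documented defaults or conservative guesses, not a worked-out recipe:

    # DeHalo_alpha: rx/ry set the halo search radius, darkstr/brightstr how strongly dark and bright halos are suppressed
    DeHalo_alpha(rx=2.0, ry=2.0, darkstr=1.0, brightstr=1.0)
    # LimitedSharpen: keep strength modest and overshoot low so noise and jaggies don't get "enhanced" along with detail
    LimitedSharpen(strength=100, ss_x=1.5, ss_y=1.5, overshoot=1)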

Author
Time
 (Edited)

Most of the time I align the before/after shots in Photoshop because, with the possible zoom, you can see the changes in very good detail. ScreenshotComparison.com works well for the more obvious differences, but I think nobody has really noticed the brightness/contrast change, although it stands out quite a bit when the shots are aligned.

I agree that we need more testing for brightness and contrast correction values with Tweak(). This process probably is not lossless, is it? We actually pull the values down (by DeHalo) and then interpolate them up again, right?

Also more testing for DeHalo, of course.

Are your pictures based on the SR v15 script with your additions?

Darth Id on ‘Why “Ben”?’:

And while we’re at it, we need to figure out why they kept calling Mark Hamill’s character “Luke Skywalker,” since it’s my subjective opinion that his name is actually Schnarzle Shnuzzle.  It just doesn’t make sense!

Damn you George Lucas for never explaining why they all keep calling Schnarzle “Luke”!

Damn You!!!

Author
Time
 (Edited)

DeHalo_alpha() actually lightens the source. It is Tweak() that pulls it back down again (“bright=-5”). Avisynth docs note that Tweak()'s variables are floating-point. One should be able to set “bright=-4.5” and get a compromise-image between the two previous test results.

Aside from non-pro LCD monitors’ “what you see isN’T what you get”, the sequence of processing in Avisynth may significantly affect the result. I didn’t realize all the extra processing DrDre was throwing at it outside of the super resolution program until I read his last version: Star Wars GOUT in HD using super resolution algorithm (page 50) - SRV15.
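To illustrate the order-of-operations point with the functions already discussed (purely a sketch, not DrDre’s actual SRV15 chain):

    # Order A: de-halo first, then re-sharpen the cleaned edges
    DeHalo_alpha()
    LimitedSharpen()

    # Order B: sharpen first, and the halos get re-hardened before DeHalo_alpha() ever sees them
    # LimitedSharpen()
    # DeHalo_alpha()

Same two functions, noticeably different results.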

While I’m at it, here’s a warning about ConvertTo???() – to change the colorspace. It’s considered prudent to make minimum use of this function by grouping same-colorspace processing under a single ConvertTo, or by only using functions that operate under the same colorspace. Colorspace conversions have rounding errors, which compound in the destination. I counted one, two, three, four, … well, a lot of ConvertTo’s, and each use negatively impacts the final product’s integrity. That part of the script, at the very least, should be optimized.
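A sketch of that grouping idea, using only the functions already mentioned (the exact chain is illustrative, not taken from SRV15):

    # Bad: a ConvertTo round-trip around every filter compounds rounding errors
    # Better: do all the YUV work under one conversion and convert out once at the end
    ConvertToYV12()
    DeHalo_alpha()           # YV12-based
    Tweak(bright=-5)         # works in YUV
    LimitedSharpen()         # YV12-based
    ConvertToRGB32()         # one conversion out, only if RGB is actually needed downstream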

Oh, yes, I used DrDre’s SRV15’s screenshot from the Star Wars SRV11-to-SRV15 comparison - frame 8228, also from page 50.


@ DrDre
BTW, AWESOME project!! Loved following it and your results!
(“THX 1138” laserdisc needs a similar treatment. Hmmm, but why does that ring a bell? Oh, yeah … another movie from that same Lucas guy!)

Author
Time

My 2 cents: I am confused about how the “After” was constructed (if it is SRV15 or not), but I am not a fan of the result. There are several issues: The darker part at the very top of the soldier’s helmet is removed; his mouth looks strange, the brightness (black level) of the image changed, and overall detail is reduced. It would be interesting to see the Blu-ray of this scene in comparison to see how it is supposed to look.

Author
Time

thorr said:
It would be interesting to see the Blu-ray … to see how it is supposed to look.


(Sorry, but first reaction is … that it’s a funny thing to say around here.)

But, yes, the Blu-ray does show the helmet almost like it was processed (no solid black line along the front):

(fullsize 1920x824: http://www.vcdq.com/files/samples/16-2011/64867/96247362.jpg)

about how the “After” was constructed (if it is SRV15 or not)

The “before” was SRV15 and the “after” was SRV15 processed with the Avisynth code shown with the picture(s) posted here.

There are several issues: The darker part at the very top of the soldier’s helmet is removed;

All those deep-dark parts were bad-sharpening artifacts. The remaining lighter line on the very top of the helmet is a residual halo and should be removed, too, possibly with a 2nd pass of DeHalo_alpha() at new settings for that.
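Something along these lines, maybe (values purely illustrative; brightstr is the control aimed at light/bright halos like that residual line):

    DeHalo_alpha()                                            # first pass, as posted
    DeHalo_alpha(rx=1.5, ry=1.5, darkstr=0.0, brightstr=1.0)  # narrower 2nd pass, targeting only the leftover bright halo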

his mouth looks strange,

Only because … “We’re on a diplomatic mission … cough, choke, gag, sputter.”

the brightness (black level) of the image changed,

Already noted, along with a mention of an easy fix.

and overall detail is reduced.

Overall noise is reduced. Detail is still there. See the mention about sharpening and DeHalo-ing.

Hope that helps!

Author
Time
 (Edited)

Spaced Ranger said:

DeHalo_alpha() actually lightens the source. It is Tweak() that pulls it back down again (“bright=-5”). Avisynth docs note that Tweak()'s variables are floating-point. One should be able to set “bright=-4.5” and get a compromise-image between the two previous test results.

Right, I got it confused there. Although an interpolation still remains?

Aside from non-pro LCD monitors’ “what you see isN’T what you get”, the sequence of processing in Avisynth may significantly affect the result. I didn’t realize all the extra processing DrDre was throwing at it outside of the super resolution program until I read his last version: Star Wars GOUT in HD using super resolution algorithm (page 50) - SRV15.

Several people have stated that they noticed a green shift in V15 (see Page 15, by the way: Are we not able to directly link to posts here?!). Personally I would try it with V12, maybe I will find some time for that.

It’s considered prudent to make minimum use of this function by grouping same-colorspace processing under a single ConvertTo, or by only using functions that operate under the same colorspace. Colorspace conversions have rounding errors, which compound in the destination.

DrDre already tried to keep conversions to a minimum as he stated many pages ago, where he also resolved a problem that came from colorspace conversions. Can’t find it anymore. Why don’t all plugins even support one “standard” colorspace like RGB32?


Author
Time

Intruder said:

Spaced Ranger said:

DeHalo_alpha() actually lightens the source. It is Tweak() that pulls it back down again …

Right, … Although an interpolation still remains?

Yes. There isn’t any correction that can be made within DeHalo_alpha(). Tweak() or another function must do it.

Several people have stated that they noticed a green shift in V15 (see Page 15,

My guess is the color shift is compounded from all those ConvertTo’s.

by the way: Are we not able to directly link to posts here?!).

We can. It’s clunky, but you click on the poster’s name and up come ALL his posts. One must wade through them (sorting by date is a must) to get to the isolated post, which has a direct link to it. Then use markdown-it format to make a referring link in one’s post.

DrDre already tried to keep conversions to a minimum as … he also resolved a problem that came from colorspace conversions. … Why don’t all plugins even support one “standard” colorspace like RGB32?

It’s the history of Avisynth. Processing some colorspaces was easier to figure out and faster to execute than others. Later, they started work on other well-used colorspaces. By then, many had already written their plug-ins for the earlier colorspaces and moved on. So it’s a hodge-podge of timing, missed opportunities, and lost interest.

Now that computers are fast enough, Avisynth can/should be re-written from the ground up in an “uncompressed” colorspace. (Easy for me to say.) Oh, wait, aren’t they doing something like that now? (The system I use is NOT up to date. So I’m not keeping watch on development.)

Author
Time

Nowhere currently.


Author
Time

Intruder said:

Nowhere currently.

Yeah. I think Dre lost compatibility with some of his plugins after a system upgrade. But he has made the scripts available. I think he and others are very likely apprehensive about the idea of spending a month or two in processing time with a decent computer that could be used for other things. And now we’ve seen such great 35mm options start to open up. The GOUT is being phased out of despecialized.

It turned out to be very useful for projects like Hal9000’s fan edits, though. The DVD-sourced scenes in his Episode III look extremely good.

Author
Time
 (Edited)

Yes, I have the scripts saved to my HDD and also fiddled around with them, but I don’t think the work will really be worth it with all the new scans and possibly even a rumoured OUT release from Disney (which I don’t believe in at this moment).
But it was interesting to see what can be done with those techniques.


Author
Time
 (Edited)

I think I’m a little late to the party, and correct me if this isn’t necessarily the place to contribute my sample, but I’ve been doing a lot of tests with SuperResolution of the SSE to see how much detail can be resurrected from it. The SSE is a fine (if not the best) clean-up job and restoration, yet it lacks a certain crispness. If there’s one thing I like about the Blu-ray (possibly the only thing), it’s how sharp and clean it is. I have noticed that the grain present in some shots of the SSE, and most likely the source, is a little soft; it makes what’s in frame feel distant in a way.

I’ve used both SuperResolution (faster, almost real-time) and Adobe’s Detail-preserving Upscaling (more accurate, but resource intensive and borders on crashing) in After Effects to see what works best. In some cases I’ve considered using Denoiser II to reduce the grain and add it back in later (just bear with me). Also, the order of operations is very important when it comes to adding effects and upscaling; applying the effects in different orders gave VERY different results.

Here’s what I was considering in my tests

  • SuperResolution or Detail-preserving Upscale?
    - i.e. How much time am I willing to give a shot? Which produces better results on its own?
  • 1080p? 4K? Upscale to 4k then scale down to 1080p?
  • Denoiser II before or after upscale? How much reduction? How much Enhancement?

1080p comparison
http://screenshotcomparison.com/comparison/187722


At first it seems subtle, but the more you look at it, and at the different things in frame, the more you can see the crispness I’m aiming to bring out in the SSE. Threepio, Artoo, Luke, and Ben now have definite features and are all around less blurry.

I don’t have a 4K monitor but at 4K it looks even better because of the 1:1 pixel ratio. I have zoomed in here to create a 1:1 pixel ratio on my 1080p monitor.
It shows the SSE at 200% compared with my Upscaling test at 4K
http://screenshotcomparison.com/comparison/187728


When matching the 1080p and 4K at a zoom, you can see the aliasing present in the 1080p version.
http://screenshotcomparison.com/comparison/187724

However, comparing the 1080p version to the straight 4K version, both fully framed (on a 1080p monitor), they look identical. So you would need a 4K display to appreciate a 4K ‘master’.

A 4K version of the SSE is not out of the question, just not feasible at the moment.

It doesn’t hurt to offer help, but it always hurts to disregard those that do.

Author
Time

From what I’ve tested, I would say that it’s very time consuming: you have to make sure the processes being applied to the footage are constructive to the shot and don’t take anything away or create unwanted artifacts. Aside from that, the SuperResolution tool is very fast compared to the Detail-preserving Upscale effect; however, the latter produces better results. That adds up to significant render time for each shot, along with whatever you decide to do in addition (like denoising, adding grain back, and color correcting/grading the footage to match a source).

That is basically why. Or many just haven’t discovered it yet.


Author
Time

Looks very good! Great work!

Author
Time

vexedmedia said:

From what I’ve tested, I would say that it’s very time consuming: you have to make sure the processes being applied to the footage are constructive to the shot and don’t take anything away or create unwanted artifacts. Aside from that, the SuperResolution tool is very fast compared to the Detail-preserving Upscale effect; however, the latter produces better results. That adds up to significant render time for each shot, along with whatever you decide to do in addition (like denoising, adding grain back, and color correcting/grading the footage to match a source).

That is basically why. Or many just haven’t discovered it yet.

I know my Blu-ray player is able to enhance the picture quality of DVDs it plays, and it doesn’t take a second longer for the player to make that conversion. Is that the same as, or similar to, SuperResolution or Detail-Preserving Upscale? If so, why is the BD player able to perform it so quickly, and if not, why not use the same optimization software/technology (or whatever it is) for every movie?

youtube.com/c/StarWarsComparison

Author
Time

Interesting to see that the 4k looks so much better than 1080p at high zoom level.

Blackout said:
I know my Blu-ray player is able to enhance the picture quality of DVDs it plays, and it doesn’t take a second longer for the player to make that conversion. Is that the same as, or similar to, SuperResolution or Detail-Preserving Upscale? If so, why is the BD player able to perform it so quickly, and if not, why not use the same optimization software/technology (or whatever it is) for every movie?

I would guess it’s something more like detail-preserving upscale, but not as good or as complex, and therefore less intensive to compute. There might be a dedicated upscaling chip in the player with hardware features that speed up such calculations, or the software might simply be tailored very well to the chip.
If someone could grab the output signal of their upscaled image from the player and compare it with the results from this thread, that would be cool.


Author
Time

Those comparisons are very impressive, but is there really a significant gain of information in the picture? We see beautiful sharpening, but it could simply be a consequence of the added fine grain, no?

Author
Time

I just read about a new upscaling program that may interest those following this thread. From http://www.the-digital-picture.com/News/News-Post.aspx?News=19089

"Developer Steffen Gerlach has created a free program for Windows users – A Sharper Scaling – which he claims produces better enlargements than Photoshop’s bicubic interpolation.

For those that want to preserve details in large prints while cropping heavily, this might be a plausible solution."

Software and additional info found here:
http://a-sharper-scaling.com/

Disclaimer: I have no association with the software or links above, nor have I tried the software. YMMV, etc.

If your crop is water, what, exactly, would you dust your crops with?

Author
Time

Too bad it only works with 8-bit color images. I’m not seeing a lot of difference, though, between the “conventional upscale” and ASS (A Sharper Scaling – unfortunate abbreviation).

[Imgur comparison image]

(This is the LPP, pre-color correction).

TheStarWarsTrilogy.com.
The007Dossier.com.
Donations always welcome: Paypal | Bitcoin: bc1qzr9ejyfpzm9ea2dglfegxzt59tys3uwmj26ytj

Author
Time

Today, Google introduced their super-resolution technology based on machine learning, RAISR:

https://research.googleblog.com/2016/11/enhance-raisr-sharp-images-with-machine.html

Maybe at some point we will see an implementation that could be useful for our case.


Author
Time

I’m glad you posted this. Those results look very promising!


Author
Time
 (Edited)

Intruder said:

Today, Google introduced their super-resolution technology based on machine learning, RAISR:

https://research.googleblog.com/2016/11/enhance-raisr-sharp-images-with-machine.html

Maybe at some point we will see an implementation that could be useful for our case.

Well, you need a huge database of images, and a super fast search engine to match your low resolution image features to the high resolution image features in your database. You then average the best results, and voilà, there’s your high resolution image. The method is actually pretty similar to the one applied by Mike Verta to enhance the detail of Legacy, minus the upscaling.

Author
Time

DrDre said:

Intruder said:

Today, Google introduced their super-resolution technology based on machine learning, RAISR:

https://research.googleblog.com/2016/11/enhance-raisr-sharp-images-with-machine.html

Maybe at some point we will see an implementation that could be useful for our case.

Well, you need a huge database of images, and a super fast search engine to match your low resolution image features to the high resolution image features in your database. You then average the best results, and voilà, there’s your high resolution image. The method is actually pretty similar to the one applied by Mike Verta to enhance the detail of Legacy, minus the upscaling.

Dre. Did anything ever come from this? Or was it too time consuming, among other things?

Author
Time

No, I thought about it, but it indeed was too time consuming.