
The Ultimate Super Resolution Technique — Page 4

Author
Time

One of the things I’ve noticed about any type of noise reduction or temporal super resolution is a slight (or occasionally intense) flattening of the image. I was curious: is there a way to generate a depth map from the original video, upscale the map, and reapply it after all filtering and upscaling has taken place? More than anything, depth is the one quality I’ve found Super Resolution techniques to be lacking in. This doesn’t really apply to the stacking concept, since that works temporally on a single frame rather than across neighboring frames.
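To make the idea concrete, here’s a rough sketch of what I mean by the “reapply” step, assuming a per-frame depth map already exists from some external estimator; the depth convention (larger = farther) and the blending rule are just my assumptions:

```python
import cv2
import numpy as np

def reapply_depth(filtered_hi, depth_lo, strength=1.5):
    """Blend depth-proportional blur back into a filtered, upscaled frame.

    filtered_hi : upscaled/denoised frame, (H, W, 3) uint8
    depth_lo    : depth map from the original frame, float in 0..1,
                  larger values = farther from camera (an assumption)
    """
    h, w = filtered_hi.shape[:2]
    # Upscale the low-res depth map to match the filtered frame.
    depth = cv2.resize(depth_lo, (w, h), interpolation=cv2.INTER_CUBIC)
    # Build a blurred copy and blend it back in proportion to depth,
    # so out-of-focus (far) regions regain their original softness.
    blurred = cv2.GaussianBlur(filtered_hi, (0, 0), strength)
    alpha = np.clip(depth, 0.0, 1.0)[..., None]
    out = filtered_hi.astype(np.float32) * (1 - alpha) \
        + blurred.astype(np.float32) * alpha
    return out.astype(np.uint8)
```

A linear depth-to-blur blend is the crudest possible model of defocus, but it shows the pipeline: estimate depth before filtering, upscale it alongside the image, reapply it at the end.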

Preferred Saga:
1/2: Hal9000
3: L8wrtr
4/5: Adywan
6-9: Hal9000

Author
Time

Flattening is definitely a problem when applying a sharpening filter to an image. For example, the 2011 Blu-ray has such severe sharpening that blurred background elements become as sharp as foreground elements.

This shouldn’t be a problem with the technique I’m describing, since it SHOULD increase the sharpness of all parts of the image equally, rather than sharpening only the blurry areas. The reason is that the focus of the lens is what creates the illusion of depth of field, so more accurately pinpointing the photons preserves the innate blurriness wherever the lens is out of focus. A basic super resolution technique of averaging several sequential frames may cause a loss of depth of field through a slight softening of the in-focus parts of the image, especially if those areas are in motion. I have described earlier how sharp edges can be blurred through frame averaging.
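Here’s a toy numpy illustration of that last point: a perfectly sharp edge that drifts one pixel per frame turns into a ramp once the frames are averaged (the positions and frame count are arbitrary):

```python
import numpy as np

def edge(shift, width=16):
    """1-D scanline with a hard black/white edge at position `shift`."""
    x = np.zeros(width)
    x[shift:] = 1.0
    return x

# An in-focus edge moving one pixel per frame across four frames.
frames = [edge(s) for s in (5, 6, 7, 8)]

print(edge(5))                  # single frame: perfectly sharp step
print(np.mean(frames, axis=0))  # average: edge smeared over 4 pixels
```

The average comes out as 0.25, 0.5, 0.75, 1.0 across four pixels: the in-focus edge now looks like a defocused one, which is exactly the flattening being described.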

You probably don’t recognize me because of the red arm.
Episode 9 Rewrite, The Starlight Project (Released!) and ANH Technicolor Project (Released!)

Author
Time

There is one potential problem with my theoretical technique, which has to do with errant grain.

With an original negative, the center of a dye cloud corresponds with actual image detail almost 100% of the time. However, when a print is made from the o-neg, those dye clouds become the basis of the new image. The 2nd-generation print carries the o-neg grain plus a further dye cloud pattern of its own. If you were to trace the centers of these new clouds, some of them would perfectly overlap the grains of the previous generation and retain all the detail of the o-neg, but most photons would have struck some other place on the o-neg’s dye clouds, so the new clouds have slightly incorrect values and placement in the image. This is what I’m going to call ‘errant grain’. If a print reaches something like 4th generation, which is about right for most prints of Star Wars, the problem could be quite severe: even with precise dye-cloud mapping, the image will be somewhat soft. I’m still thinking of how to correct this issue.
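As a back-of-the-envelope check on how fast this compounds, here’s a small simulation. The per-copy jitter value is entirely made up; the point is that independent errors add in quadrature, so the RMS displacement grows with the square root of the number of copy steps:

```python
import numpy as np

rng = np.random.default_rng(0)
n_clouds, sigma = 100_000, 0.5   # per-copy jitter in pixels (assumed)

pos_err = np.zeros(n_clouds)     # o-neg: clouds sit on real detail
for stage in ("IP", "IN", "release print"):
    # Each copy step re-samples the image through a fresh dye-cloud
    # pattern, adding an independent positional error.
    pos_err += rng.normal(0.0, sigma, n_clouds)
    rms = np.sqrt(np.mean(pos_err ** 2))
    print(f"{stage}: RMS cloud displacement ~ {rms:.2f} px")
# RMS grows as sigma * sqrt(copies): ~0.50, 0.71, 0.87 px here, so a
# release print's grain is noticeably errant even if each individual
# copy step is quite accurate.
```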

You probably don’t recognize me because of the red arm.
Episode 9 Rewrite, The Starlight Project (Released!) and ANH Technicolor Project (Released!)

Author
Time

It sure is quiet around here.

Anyway, here’s one extremely arduous method of tackling the problem of errant grain due to multiple print generations:

For Star Wars, the best non-Technicolor prints we have are apparently release prints. This means that they are probably 3 generations removed from the negative.
O-neg
Interpositive (IP)
Internegative (IN)
Release Print

Since Star Wars was so successful, several interpositives were struck over its lifetime. From these, many more internegatives were struck, since internegatives would wear out during the production of release prints.

Now, if we had access to a good IP or IN the work would be greatly reduced, though those elements were usually too worn out to be of much use. Barring that, we have to use release prints. Dye cloud mapping of several release prints should produce an accurate image of the internegative that produced them. Since several internegatives were probably made from each interpositive, the release prints would have to be organized by the internegative that made them, and the process applied to each group separately.

Perhaps you can see where this is going. Say you need four versions of a single frame, from prints generated by a single internegative, to correctly recreate the internegative’s grain. In order to recreate the interpositive’s grain, you’d need four fully recreated internegatives, and to recreate the grain of the o-neg, you would need four fully recreated interpositives. All told, 64 release prints from the various IPs and INs would need to be correctly identified, scanned and mapped. It’s quite absurd.
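The arithmetic behind that 64, spelled out, assuming four prints per parent element at every stage:

```python
# Each stage of reconstruction needs four copies of the stage below it,
# so the release-print count multiplies by four per generation.
copies_per_parent = 4
stages = ["internegative", "interpositive", "o-neg"]

needed = 1
for stage in stages:
    needed *= copies_per_parent
    print(f"release prints needed to recreate one {stage}: {needed}")
# -> 4, 16, 64
```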

You probably don’t recognize me because of the red arm.
Episode 9 Rewrite, The Starlight Project (Released!) and ANH Technicolor Project (Released!)

Author
Time

That started me thinking (very dangerous) about the variously sized “film grains” of the emulsion, which might hinder your idealized points (would you weight each one by its size, which is itself the average of the light that originally exposed it?). Those grains are large enough to straddle pixel boundaries and influence multiple pixels, too. Also, the emulsion is thick enough to hold multiple planes of unique clouds, and some clouds may hide others among the layers. (It quickly becomes complicated.)
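One possible answer to the weighting question, purely as a sketch: treat each cloud’s footprint as a small intensity patch and reduce it to an intensity-weighted centroid, so a large grain straddling several pixels still yields a single sub-pixel point:

```python
import numpy as np

def cloud_centroid(patch):
    """Sub-pixel centroid of a grain footprint (2-D intensity array)."""
    ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
    total = patch.sum()
    # Weight each pixel's coordinates by the light it received.
    return (xs * patch).sum() / total, (ys * patch).sum() / total

# A cloud spilling unevenly across a 3x3 pixel neighborhood:
patch = np.array([[0.1, 0.4, 0.1],
                  [0.3, 1.0, 0.6],
                  [0.0, 0.5, 0.2]])
print(cloud_centroid(patch))   # ~ (1.16, 1.03): right of pixel center
```

This still ignores the depth problem (clouds hiding behind other clouds in the emulsion layers), which a 2-D scan can’t fully recover.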

I expect you saw this (or something like it), which contains quotes from the originating KODAK “H-1” tech publication:

Film Grain, Resolution & Fundamental Film Particles (April 2007), by Tim Vitale.

The Kodak publication itself is now only available at the Internet Archive (KODAK Motion Picture Film: H-1). The PDF files have also been saved by the IA, but one is missing: pp. 80-85.