
Post #1014547

Author
NeverarGreat
Parent topic
Info: The Ultimate Super Resolution Technique
Link to post in topic
https://originaltrilogy.com/post/id/1014547/action/topic#1014547
Date created
30-Nov-2016, 2:29 PM

@g-force: Noise reduction can work on a single image, and uses algorithms that reduce variance in luminance over small areas while leaving larger variances alone. This leads to the ‘plastic-y’ look of many film restorations. This process isn’t that: it requires several similar frames to build a map of the image, where the centers of the grains are treated as the image detail. In fact, one could consider this a sort of anti-noise-reduction, since the ‘noise’ is the only thing that remains of each frame.

Remember, in film, the grains are the ‘DNA’ of the image. If you only had a map of where each grain was positioned in the image, along with its color/luminosity, you could re-develop a convincing facsimile of the image, with grain intact. On the other hand, if you erased the grain (or replaced it with new grain after erasing it), the DNA would be lost and no further information could be gained from this process.
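To make the idea concrete, here is a toy numpy sketch of the grain-map concept: find the local luminance maxima (‘grain centers’) in each aligned frame and accumulate them onto a finer grid. This is not the actual pipeline described in this thread; the function names (`grain_centers`, `accumulate_grain_map`), the 3×3 local-max test, and the threshold are all hypothetical simplifications for illustration.

```python
import numpy as np

def grain_centers(frame, threshold=0.5):
    """Return (row, col) coordinates of local luminance maxima, treated
    here as 'grain centers'. A pixel counts if it exceeds `threshold`
    and is the maximum of its 3x3 neighborhood (pure-numpy test)."""
    h, w = frame.shape
    padded = np.pad(frame, 1, mode="constant", constant_values=-np.inf)
    # Stack the nine shifted views so axis 0 holds each 3x3 neighbor.
    neighborhoods = np.stack([
        padded[r:r + h, c:c + w] for r in range(3) for c in range(3)
    ])
    is_max = frame >= neighborhoods.max(axis=0)
    return np.argwhere(is_max & (frame > threshold))

def accumulate_grain_map(frames, scale=2):
    """Accumulate grain-center hits from several aligned frames onto a
    grid `scale` times finer; where many frames agree, the accumulated
    value is high, marking stable image detail rather than random noise."""
    h, w = frames[0].shape
    grain_map = np.zeros((h * scale, w * scale))
    for frame in frames:
        for r, c in grain_centers(frame):
            grain_map[r * scale, c * scale] += frame[r, c]
    return grain_map
```

In a real implementation the frames would first need subpixel alignment, and the grain positions would land between coarse-grid cells, which is where the extra resolution would come from; this sketch only shows the accumulation step.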

UnitéD2 said:

Very interesting!

Would it work with prints made by imbibition (Technicolor), or only with prints made by emulsion of multiple layers (as with Eastman)? If I’m not mistaken, that is the difference between the two processes.

I’ve scanned some Technicolor frames at fairly high resolution, and I doubt that this process would work on those frames because of how soft the grains are. You are correct that the print is made by imbibing it with dye, not using film grain to ‘grow’ a new image, so unless the grains from the source print are still identifiable, the process will not work.

@cameroncamera: I see what you’re going for here, I think. That would be a process for upscaling a digital image, but if I’m reading it right, wouldn’t there be an issue with duplicating image detail across pixels, causing another form of interpolation smearing? Each image is expanded so that there is a one-pixel gap between each pixel. If A is a pixel and B is an empty pixel, the result would be this:

ABABAB
BBBBBB
ABABAB
BBBBBB

The second frame would then be shifted one pixel to the right, filling in the B spaces in the 1st and 3rd rows. The third frame would be shifted down, so that half of the 2nd and 4th rows would be filled, and the fourth frame would be shifted down and to the right, completing the picture.

So far that’s your process, as I understand it. Now imagine that the image showed a red light in the upper left corner of the image, taking up only one ‘A’ pixel. If each of the four frames showed relatively similar detail, the upscaled image would show that single pixel of red repeated four times in a box configuration.
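The four-frame interleave described above can be sketched in a few lines of numpy. This is my reading of the proposal, not a confirmed implementation; `interleave_four` is a hypothetical name, and the sketch assumes the four frames are already aligned. The assertion at the end reproduces the artifact described: a single bright pixel in near-identical frames becomes a 2×2 box in the output.

```python
import numpy as np

def interleave_four(f1, f2, f3, f4):
    """2x upscale by interleaving four equally sized frames on an offset
    grid: f1 fills the 'A' positions (even rows, even cols), f2 is the
    frame shifted right (even rows, odd cols), f3 shifted down (odd rows,
    even cols), f4 shifted down and right (odd rows, odd cols)."""
    h, w = f1.shape
    out = np.zeros((2 * h, 2 * w), dtype=f1.dtype)
    out[0::2, 0::2] = f1
    out[0::2, 1::2] = f2
    out[1::2, 0::2] = f3
    out[1::2, 1::2] = f4
    return out

# Four 'frames' of the same scene: one red light in the upper-left pixel.
frame = np.zeros((2, 2))
frame[0, 0] = 1.0
upscaled = interleave_four(frame, frame, frame, frame)

# If the frames carry identical detail, the single pixel is duplicated
# into a 2x2 box -- the interpolation-smearing objection raised above.
assert upscaled[:2, :2].tolist() == [[1.0, 1.0], [1.0, 1.0]]
```

The process only gains real resolution if the four frames actually sample the scene at subpixel offsets; when they carry the same samples, interleaving just replicates each pixel, equivalent to nearest-neighbor upscaling.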

I don’t have any ideas about upscaling a digital image, since the pixels are the detail. Perhaps the only way to really upscale digital content like that would be through an adaptive learning algorithm such as the ones being developed by Google, which identifies common objects and upscales their detail using a library of reference images. http://newatlas.com/google-raisr-image-upscaling-super-resolution/46434/