
NeverarGreat

User Group: Members
Join date: 11-Sep-2012
Last activity: 25-Apr-2024
Posts: 7,651

Post History

Post #1015387
Topic: Rogue One * Spoilers * Thread

I know this is a tangent from a tangent, but why is it that Obi-Wan and Yoda dissipated as they died? I know the official explanation is that they are now in the ‘netherworld of the Force’ and can only appear as ghosts to those they knew, but that netherworld doesn’t require their physical bodies to allow them to project ghost bodies, otherwise Anakin wouldn’t have been able to do it in ROTJ.

Conspiracy theory: Neither Obi-Wan nor Yoda is actually dead; instead they teleported via the Force to another location in the galaxy and only appear occasionally in ghost form to other people. It makes some sense, since Luke assumes that Yoda is too strong in the Force to actually die, and Yoda seems to imply that it is possible. If it is the Force that prolongs life, Obi-Wan could still technically be alive - it even looks like he aged a bit in ghost form, a highly suspicious activity for a ghost.

😉

Post #1015358
Topic: Info: STAR WARS - Magic Bullet Grades

iraneal said:

Yeah JawsTDS I agree with you, Episode 4. it’s probably the slowest moving of the first three, and the one with the least story. The First 3 episodes were really good, and at that time after watching those episodes I became a huge fan of Starwars series and couldn’t stop myself to such a new star wars genre. Since one of my friends is a huge fan of playing strategies based games, which he use to buy from online stores like https://www.instant-gaming.com/en/, PC CD keys etc…

These bots are getting really good. I almost believe that it’s just someone who’s particularly clueless about this forum.

Post #1015243
Topic: Rogue One * Spoilers * Thread

Anchorhead said:

darthrush said:
Sometimes it’s ok for a female to be a badass. Some people obviously can’t get this through their heads. I don’t hear anyone complaining about what seems to be a scene of Donny Yen kicking serious ass with a stick.

They don’t feel their masculinity is threatened when it’s another man showing strength. The fans who are so bothered by Rey and Jyn have issues that exist outside of whenever they’re sitting in a movie theater.

It’s clear they’re feeling threatened at this point because they haven’t seen the film. They’re already trying to diminish her importance by using a derogatory term to describe her. It’s not just her specifically, it’s the idea of her.

I don’t have a problem with badass women (see: Fury Road, Alien, etc.) but in those films the male costars didn’t look surprised at their badassery. If there’s any problem with this scene, it’s with the guy being surprised in a situation where anyone’s overriding desire should be survival. I’d personally be grateful that someone had my back. Granted, this is one scene in isolation, so perhaps she’s revealing a facet of her character that wasn’t apparent before and the surprise is warranted. I don’t know. I think Rey is a different issue, since she actually has aspects of her personality that defy even her expectations. In both cases, we are working with incomplete information, so I’m withholding judgement until all the movies are out.

TL;DR: Jyn is a BAMF, Cassian needs to get over himself.

Post #1015145
Topic: Info: The Ultimate Super Resolution Technique

UnitéD2 said:

I wonder something, Neverar: do you think that “dye cloud algorithm” will preserve the color timing of the image, even with a very high range? Because the density and the uneven distribution of the silver particles are taken into consideration in the printing process to obtain a printed image faithful to the captured image. If you build an averaged image with the maximum of dye clouds, it will be very, very dark, no?

This is one of the questions I have as well. Since the grains are necessarily what darken the image, if you were to use only the grains, the image would probably be quite dark. I expect there would need to be a luminosity adjustment based on the average luminosity of the frame before this process is done. But after the adjustment, there shouldn’t be a problem with brightness.

As for color timing, since the process would need to be done to each of the 3 color layers individually, the above process of adjusting the luminosity could be applied to each of the layers separately. In this way, even if one layer were to become quite a bit darker than the others with this process, the adjustment could correct for this. I expect this is where averaging whole frames (rather than separate pixels) would be very useful for preserving the original color grading.
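
To make the adjustment concrete, here’s a minimal sketch in Python/NumPy of the kind of per-layer correction I have in mind (the function name and the simple mean-matching rule are just my assumptions, not a worked-out design):

import numpy as np

def match_layer_luminosity(centers_only, full_frame):
    # Scale a grains-only layer so its mean luminosity matches the mean
    # of the full frame it was derived from. Running this on each of the
    # 3 color layers separately means a layer that comes out darker than
    # the others after center extraction gets corrected on its own terms.
    scale = full_frame.mean() / max(centers_only.mean(), 1e-8)
    return np.clip(centers_only * scale, 0.0, 1.0)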

Post #1014789
Topic: Info: The Ultimate Super Resolution Technique

I’d never heard of Drizzle before. Interesting. The link mentions that the ‘Drizzled’ image has a bit more noise than the original, presumably because every pixel from the original 9 captures is used.

@Poita: I agree that with enough stacked sources, all the prints will be able to agree on an image. Using a simple average, variations in luminosity due to grain disappear. You are left with a very clean image, but it is stuck at the average resolution of your prints, at least if you’re only using the same frame from different prints (since these frames will all share the negative’s grain). But even if you’re using multiple frames in a sequence, there is the potential for detail loss:
[Image: Edge Loss]
Just focus on two pixels of output for a film scan across four frames.

In the first two frames a photon of white light strikes the rightmost pixel. The subsequent dye cloud obliterates the edge, causing the leftmost pixel to become mostly white as well.
In the second two frames a photon of blue light strikes the leftmost pixel. This causes a similar blue blur across the rightmost pixel.

None of these frames is accurate to the original subject, and a simple average of these four frames would arguably be a worse result than simply keeping one frame over another (though on a macro scale the image would be less noisy). If you add another frame to the equation, tipping it in favor of the blue dye cloud, for example, a weighted average would still have a blur caused by this dye cloud. The only way I can see for the program to correctly retain this edge would be for it to measure the dye cloud centers and keep the most accurate parts of each frame. It’s like an intelligent Drizzle for film.
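
To put toy numbers on the two-pixel example above (values invented purely for illustration):

import numpy as np

# Two output pixels across four frames: 1.0 = white, 0.0 = dark.
# Frames 1-2: a white dye cloud on the right spills onto the left pixel.
# Frames 3-4: a blue dye cloud on the left spills onto the right pixel
# (only that layer's luminosity is shown).
frames = np.array([
    [0.8, 1.0],
    [0.7, 1.0],
    [1.0, 0.6],
    [1.0, 0.5],
])

print(frames.mean(axis=0))  # [0.875, 0.775]: the edge is smeared both ways
print(frames[0])            # any single frame is blurred on one side only

The average is cleaner on a macro scale but carries the blur of every contributing dye cloud, which is the detail loss I’m describing.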

Post #1014686
Topic: Info: The Ultimate Super Resolution Technique

poita said:

“This is not possible with a digital video signal, since again, there is no way to tell which pixel is more likely than another to contain detail rather than noise.”

If you have enough samples, you can statistically work out which pixels contain noise and which contain information; that is the basis of image stacking. The signal-to-noise ratio improves with the square root of the number of images.

Or do you mean images originally captured on a digital camera, not digital scans of film based capture?

I feel that scanning multiple prints at high resolution, and then stacking them achieves the same thing, but I’m probably missing something in my understanding here.

I’d love to see a proof of concept from your work, I’m happy to provide you with some higher resolution images, something like 6000x4000 pixels should be enough to work with?

I’m all for any technique that might improve the image.

I’m referring here to images captured digitally rather than on film. A digital image has a rectangular grid of pixels which are evenly spaced, each with luma/chroma information (in the best case scenario). Stacking a series of similar digital images will allow you to perform statistical analysis and improve the clarity of the image - for example two frames would agree on the color of a pixel and one frame would disagree, and the disagreeing pixel would be erased. However, what if that single pixel was the correct color? To return to the example above with the image of the blue edge: There is almost no chance of any single frame having a correct value for areas close to the edge, because of the blurriness of the dye clouds comprising the image. If there was an errant frame with the correct value, a statistical analysis would eliminate that value because it is not in agreement with the rest. In short, with a digital image, there is no way for an algorithm to reliably determine what is signal vs noise with only a single image, apart from applying a very destructive selective smoothing or sharpening operation.
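
For contrast, here’s roughly what I mean by statistical stacking of digital frames, as a minimal sketch (a toy median stack, not anyone’s actual pipeline):

import numpy as np

def median_stack(frames):
    # Median-stack a list of aligned digital frames. An outlier pixel
    # value is rejected even when it happens to be the true value,
    # which is exactly the failure mode described above.
    return np.median(np.stack(frames), axis=0)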

Film is fundamentally different, in that you can determine the centers of dye clouds with far greater precision than the perceived resolution of the image. The halide crystals are microscopic, so determining where they were is determining precisely where the photons hit the film. There is no way to do this in a digital format. Since we can theoretically identify details with microscopic precision in film, image stacking using only the areas of actual detail will yield a far more impressive result than simply averaging the values of masses of overlapping dye clouds. Here’s how I envision it working:
[Image: Pixel Grid]
Say that this grid is the output resolution, which is less than the extremely high scanning resolution but still substantially above the resolution of the image. For each blue grain in the image, the center is found. This is where photons have hit the film, and is the only real image information. In this example, there are only about 6 dye cloud centers, meaning that only 6 pixels are assigned values. With enough sources, the entire pixel grid can be filled in. The way this differs from averages or even weighted averages is that only the real information in the image is in the final product. There is no blurriness resulting from interactions between the dye clouds, since the program intelligently uses dye cloud center proximity rather than averages.
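
A rough sketch of how that detection-and-assignment step might look, assuming SciPy is available (the neighborhood size and the ‘local minimum below the frame mean’ test are placeholder choices of mine):

import numpy as np
from scipy.ndimage import minimum_filter

def dye_cloud_centers(layer, neighborhood=5):
    # Find local luminosity minima in one color layer of a high-res scan.
    # These minima are taken as dye cloud centers: the likeliest points
    # where photons actually struck the halide crystals.
    local_min = minimum_filter(layer, size=neighborhood)
    return np.argwhere((layer == local_min) & (layer < layer.mean()))

def assign_to_grid(centers, layer, grid_shape):
    # Map scan-resolution center coordinates down onto the coarser output
    # grid. Only pixels that receive a center get a value (the ~6 pixels
    # in the example); the rest stay empty (NaN) until more sources
    # fill them in.
    grid = np.full(grid_shape, np.nan)
    sy = grid_shape[0] / layer.shape[0]
    sx = grid_shape[1] / layer.shape[1]
    for y, x in centers:
        grid[int(y * sy), int(x * sx)] = layer[y, x]
    return grid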

And yes, I’d be happy to play around with some high-res images, though I’m no Dr Dre. If a proof of concept were to be made, it would need to be done by someone who knows how to code. I’m the 1% inspiration guy in this case. 😉

Post #1014657
Topic: Info: The Ultimate Super Resolution Technique

The thing about film, as opposed to video, is that the detail captured by film can theoretically be precisely mapped to the very center of the dye clouds that make up the grain, whereas there is no way to know which pixels of a video image contain useful detail. Here’s a quick and dirty visualization of the idea:
[Image: Original Edge]
Say that this is the edge of an object. When captured on film, photons of light bounce onto the film and react with microscopic silver halide crystals. When the film is developed, the halide crystals react with the surrounding substrate and create dye clouds, which can be viewed as grain:
[Image: Grainy Edge]
Now this sharp edge is blurred by the process of creating the dye clouds. Compounded by several layers of colored substrate, the grain becomes complicated, such as in the ‘before’ example posted by Poita above. In that example, beautiful though it is, the image can never be sharper than the sharpness of the dye clouds. To understand this, consider that no matter how many pictures are taken of a sharp edge, each one will be blurred by the radius of the dye cloud. No matter how these are averaged together, and no matter at what resolution, this dye cloud radius remains.

Now if you were to map the centers of the dye clouds, it may look something like this:
[Image: Dye Cloud Centers]

At this point the algorithm can determine the most valuable parts of the image. The key to greater resolution is that each frame of film contains a slightly different grain pattern, meaning that the centers of these dye clouds shift. Out of 20 or so frames, they may all have a substantial number of dye cloud centers that are in unique positions. Finally one can intelligently find the points in the film most likely to contain unique image data, on a photon by photon basis. This is not possible with a digital video signal, since again, there is no way to tell which pixel is more likely than another to contain detail rather than noise. Combining enough images makes it possible to recover a clean image with a sharp edge, almost regardless of the coarseness of the grain.

With an averaging super-resolution algorithm, blurriness can never be fully removed. With halide crystal mapping, you can potentially regain the unblurred edge.
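
The claim that averaging can never remove the cloud radius is easy to check numerically. A quick 1-D illustration (toy numbers only):

import numpy as np

rng = np.random.default_rng(0)
edge = np.repeat([0.0, 1.0], 50)   # a perfectly sharp 1-D edge
cloud = np.ones(7) / 7             # toy dye-cloud blur, 7 pixels wide

# Every exposure is blurred by the same cloud radius, plus random grain:
frames = [np.convolve(edge, cloud, mode="same")
          + rng.normal(0, 0.05, edge.size) for _ in range(100)]

avg = np.mean(frames, axis=0)
# The random grain averages away, but the 7-pixel blur is identical in
# every frame, so it survives the average completely untouched.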

Post #1014547
Topic: Info: The Ultimate Super Resolution Technique

@g-force: Noise reduction can work on a single image, and uses algorithms that reduce variance in luminance over small areas while leaving larger variances alone. This leads to the ‘plastic-y’ look of many film restorations. This process isn’t that. This process requires the use of several similar frames to build a map of the image, where the centers of the grains are treated as the image detail. In fact, one could consider this to be a sort of anti-noise reduction, since the ‘noise’ is the only thing that remains of each frame. Remember, in film, the grains are the ‘DNA’ of the image. If you only had a map of where each grain was positioned in the image and its color/luminosity, you could re-develop a convincing facsimile of the image, with grain intact. On the other hand, if you erased the grain (or placed new grain after erasing it), the DNA would be lost and no further information could be gained from this process.
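
As a toy illustration of the grain-as-DNA idea, here’s how a re-development from a grain map might look (a sketch under my own assumptions: the Gaussian cloud shape and radius are placeholders, and looping over every grain like this would be far too slow for real frames):

import numpy as np

def redevelop(grain_map, shape, radius=2.0):
    # Re-grow an image from a grain map of (y, x, value) entries by
    # expanding each grain back into a soft dye cloud (a Gaussian splat),
    # then normalizing wherever clouds overlap.
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    image = np.zeros(shape)
    weight = np.zeros(shape)
    for y, x, value in grain_map:
        cloud = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * radius ** 2))
        image += value * cloud
        weight += cloud
    return image / np.maximum(weight, 1e-8)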

UnitéD2 said:

Very interesting !

Would it work with prints done by imbibition (Technicolor) or only with prints done by emulsion of multiple layers (as with Eastman)? If I’m not mistaken, that is the difference between the two processes.

I’ve scanned some Technicolor frames at fairly high resolution, and I doubt that this process would work on those frames because of how soft the grains are. You are correct that the print is made by imbibing it with dye, not using film grain to ‘grow’ a new image, so unless the grains from the source print are still identifiable, the process will not work.

@cameroncamera: I see what you’re going for here, I think. That would be a process for upscaling a digital image, but if I’m reading it right, wouldn’t there be an issue with duplicating image detail across pixels, causing another form of interpolation smearing? Each image is expanded so that there is a one pixel gap between each pixel. If A is a pixel and B is an empty pixel, the result would be this:

ABABAB
BBBBBB
ABABAB
BBBBBB

The second frame would then be shifted one pixel to the right, filling in the B spaces in the 1st and 3rd rows. The third frame would be shifted down, so that half of the 2nd and 4th rows would be filled, and the fourth frame would be shifted down and to the right, completing the picture.

So far that’s your process, as I understand it. Now imagine that the image showed a red light in the upper left corner of the image, taking up only one ‘A’ pixel. If each of the four frames showed relatively similar detail, the upscaled image would show that single pixel of red repeated four times in a box configuration.
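
Sketched in code, my reading of the process is this (my interpretation, not cameroncamera’s actual method):

import numpy as np

def interleave_2x(f1, f2, f3, f4):
    # 2x upscale by interleaving four frames on a shifted grid: f1 at
    # (0,0), f2 shifted right, f3 shifted down, f4 down-and-right. If the
    # four frames carry essentially the same detail, a single red pixel
    # in f1 just becomes a 2x2 red box in the output.
    h, w = f1.shape
    out = np.empty((h * 2, w * 2), dtype=f1.dtype)
    out[0::2, 0::2] = f1
    out[0::2, 1::2] = f2
    out[1::2, 0::2] = f3
    out[1::2, 1::2] = f4
    return out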

I don’t have any idea about upscaling a digital image, since the pixels are the detail. Perhaps the only way to really upscale digital content like that would be through an adaptive learning algorithm such as the ones being developed by Google, wherein it identifies common objects and upscales their detail with images culled from a library of images. http://newatlas.com/google-raisr-image-upscaling-super-resolution/46434/

Post #1014456
Topic: Info: The Ultimate Super Resolution Technique

I have been kicking around an idea for a sort of end-all, be-all technique for recruiting image detail from film grain. I briefly described it on the 35mm thread and talked with Dr Dre about it as well. The somewhat involved explanation in my original message is below, but first, some background on the film process and film grain:

http://cool.conservation-us.org/coolaic/sg/emg/library/pdf/vitale/2007-04-vitale-filmgrain_resolution.pdf

So let’s take Mike’s Legacy project as an example of how detail recruitment is done in its current best-case scenario. From my message to Dr Dre:

Mike’s method involves stacking up to five of the same frame on top of each other from sources of varying quality, and doing a weighted average of the pixel values. He also recruits data from neighboring frames, but let’s just focus on the stacked frames for now. So say he has 5 stacked frames. Two are slightly sharper (Tech frames) and 3 are slightly softer (Kodak frames). If you do a weighted average of every pixel, the softer frames will tend to override the sharper frames, since there are more of them. The result is a cleaner, but softer image. If you’re looking to retain detail, your best bet is to stick with the sharpest frame and discard the rest, since any averaging will invariably soften the image, regardless of the increase in clarity.

I think there’s a way to keep both the detail and the grain-free look. You would probably need a scan that is in 8-10k quality, so that each grain (more accurately each dye cloud) is distinguishable from another, at least mathematically. It could be that a 4k scan may have this level of detail. In any case, it should be possible for a sufficiently robust algorithm to examine the frame and identify the center of each dye cloud, recognizing local minima in luminosity for each color layer. With this map generated, it makes transparent the pixels not directly surrounding the center of each dye cloud, so you have in effect made cheesecloth of the image. You keep only the center of the dye cloud, information that is the most likely to have come from an actual photon impacting the silver halide crystal at the center of the dye cloud. In a way, the map should contain all of the actual color and luminosity information necessary to digitally ‘develop’ a new image by expanding each dye cloud back to its original size. However, with multiple stacked images, this process is repeated for each one and the results of this are overlaid, with the sharpest image on top and the softest image on the bottom.

This process could also be applied to neighboring frames in a more traditional super resolution method. If the center of two dye clouds is targeted to the same pixel, then and only then should a weighted average of the pixels be applied.
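
Sketching that combination rule (equal weights assumed for simplicity; each map is NaN wherever no dye cloud center landed):

import numpy as np

def combine_center_maps(maps):
    # Overlay sparse per-source center maps. A pixel hit by exactly one
    # center keeps that value; a pixel hit by two or more gets their
    # average, per the rule above; untouched pixels stay NaN until more
    # sources are added.
    stack = np.stack(maps)
    counts = np.sum(~np.isnan(stack), axis=0)
    totals = np.nansum(stack, axis=0)
    return np.where(counts > 0, totals / np.maximum(counts, 1), np.nan)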

TL;DR version: Take the pixels from a high resolution scan that are most likely to contain actual image information, discard the space between dye cloud centers, then overlay multiple prints or sequential static frames. You should theoretically be left with an image of much higher detail and sharpness, since only the noise is removed and the detail is multiplied over the number of sources used.

It would be interesting to run an experiment to test this theory. One would need to take a sequence of photos of the same object, preferably with grainy film stock, then develop them and scan them with a high resolution film scanner. After that, you would need a program that identified local luminance minima (the darkest parts of the grain are where a dye cloud formed) then applied a weighted transparency to any pixels not in these sections. Process the images, then overlay them, and compare the result with the same image stack run through a conventional super-resolution algorithm. With enough frames, one could upscale the resolution with a similar jump in actual image detail until running into the limit of resolution dictated by the quality of the camera lens.

This process, if it works, could perhaps be used on the infamous speeder and sandcrawler shots to bring them in line with the rest of the film.

Post #1014451
Topic: Single Pass Regrade of Grindhouse ESB (Released)

It looks really good!

The hue shift is definitely a part of what must be done for this version. I did a similar hue shift in my LUT way back when the grindhouse was first released:

http://screenshotcomparison.com/comparison/154445/picture:0

http://screenshotcomparison.com/comparison/154447/picture:0

I believe this is the correct link:
https://mega.nz/#!fFlhDYBA!hRF4lpAM6wpP984TMlx61RdFKXxL1wn-COz3X8rqrkA

If I were to do it again, it would probably have a saturation increase like yours, though most video players can handle that without a LUT.

Post #1008488
Topic: Project #4K77

I wasn’t trying to start any drama, I just wanted to clarify whether this was a new scan or not. It’s sometimes difficult to tell which sources are new, with so many floating around here. And that’s exciting! It feels like a renaissance of Star Wars around here recently, with things happening so fast it can make your head spin.

Post #1008101
Topic: Project #4K77

Williarob said:

Two new videos were posted in the last 24 hours, one on some of the new techniques we’re using to clean the film:

Project #4K77 Techniques

and another about the 35mm sources:

Project #4K77 Techniques

Anyone who was wondering “but are they using THAT Technicolor scan?” will be pleased to see that we actually have a BETTER Technicolor print to play with than the one we were experimenting with earlier in the year…

Excellent news! I wonder about this Tech print though, since Mike said he had 2 Tech scans, one in rough shape and one that was quite clean. Are we sure that this isn’t the same as the clean one that he is using? The only print I know of that is in better shape than that is the one used at the Senator theater screening.

Also, perhaps the ending music on your videos is a tad loud? It’s kind of startling.