
camroncamera

User Group
Members
Join date
21-Aug-2014
Last activity
3-Nov-2021
Posts
104

Post History

Post
#1442457
Topic
Terminator 2 Extreme Edition: HD WMV - info wanted
Time

Apologies for the necropost; have there been any updates on cracking this Easter egg?

I bought this T2 Extreme Edition DVD when it was first released in 2003, but the bonus HD version for PC playback has always been problematic for me. At the time, my PC was too underpowered to play it without stuttering.

A few years later I tried again on a more powerful PC build and it was glorious! The included player app seems to have been buggy, though: many years later I loaded the disc on yet another new build, but had no luck getting the file to play at all.

Post
#1016489
Topic
Why were miniatures shot in multiple passes?
Time

EJones216 said:

It’s about generating the cleanest possible matte on bluescreen, which in opticals isn’t as forgiving or adjustable as electronic/digital. Once that’s done, the additional passes can bring the element closer to its intended look without needing to worry about corrupting the matte. (this multi-pass technique is still done today in CG effects though I imagine for different reasons, as well as Laika’s stop-motion films which do separate exposures with and without greenscreen to have both the matte and no green spill)

This was true for Vinton/Laika in the early 2000s as well, when their animation was still mostly shot on 35mm but posted on video. The post company in Portland that I worked for at the time did the Vinton/Laika telecine transfers. It was common for VFX shots that would require later Flame compositing to be shot on a motion control rig with a beauty pass, a matte pass, a practical light pass (if applicable), and a background plate pass. That was for shots with repeatable camera moves and little-to-no hand animation, such as static miniatures or product photography. I don’t think it was done much in those film days, but for hand-animated shots requiring both beauty and matte passes, the lighting would have to be alternated between animated frames and the camera would shoot two frames per pose. The beauty frames and the matte frames would then have to be sorted and grouped together in post.

Post
#1015037
Topic
Preserving the...cringe...Star Wars Holiday Special (Released)
Time

SKot said:

camroncamera said:
Wow, I was already very impressed with the theatrical showing in Vancouver earlier this month (presumably from the same source that’s been projected there three years in a row). The only obvious flaw I can remember was the short glitch in the Diahann Carroll performance.

That glitch has been painstakingly patched over, and it’s really hard to spot where.

Mavimao said:
Awesome! Will check it out when I get back from the in laws. Are you able to say what the source of this specific version is?

From off a master copy, as noted by the lack of commercials and lack of voice-over on the credits.

–SKot

SKot, will you be presenting the newly-remastered SWHS at the Kiggins Theatre in Vancouver next week?

Post
#1014895
Topic
Info: The Ultimate Super Resolution Technique
Time

poita said:

Oh yeah, normally you would do a ton of pre-processing before stacking, and then more cleanup after.
I literally knocked that out in about 15 minutes, 10 minutes of which was computation time, so it is a poor example, I just wanted to illustrate the technique.
Even doing it quite poorly gave enough of a result to show how it works.

Poita just open a film restoration school already, you know we’ll all enroll 😃

Post
#1014611
Topic
Info: The Ultimate Super Resolution Technique
Time

NeverarGreat said:

@cameroncamera: I see what you’re going for here, I think. That would be a process for upscaling a digital image, but if I’m reading it right, wouldn’t there be an issue with duplicating image detail across pixels, causing another form of interpolation smearing? Each image is expanded so that there is a one pixel gap between each pixel. If A is a pixel and B is an empty pixel, the result would be this:

ABABAB
BBBBBB
ABABAB
BBBBBB

The second frame would then be shifted one pixel to the right, filling in the B spaces in the 1st and 3rd rows. The third frame would be shifted down, so that half of the 2nd and 4th rows would be filled, and the fourth frame would be shifted down and to the right, completing the picture.

So far that’s your process, as I understand it. Now imagine that the image showed a red light in the upper left corner of the image, taking up only one ‘A’ pixel. If each of the four frames showed relatively similar detail, the upscaled image would show that single pixel of red repeated four times in a box configuration.

I don’t have any idea about upscaling a digital image, since the pixels are the detail. Perhaps the only way to really upscale digital content like that would be through an adaptive learning algorithm such as the ones being developed by Google, wherein it identifies common objects and upscales their detail with images culled from a library of images. http://newatlas.com/google-raisr-image-upscaling-super-resolution/46434/

Ok, yes, I am glad my description made sense. Mine is an untested concept, though it may be exactly the manner in which typical temporal super-resolution algorithms work; I doubt that I am coming up with anything new. I think, however, that film-based image sequences could yield better upscaling than images originating from a digital sensor. (I do understand that a digitized film frame sequence is captured with a digital sensor, but I have a hunch that the random film grain and slight gate weave of analog image capture would work advantageously for temporal super-resolution.)

You do bring up an excellent point about tiny image details that are effectively 1 pixel in size being inappropriately quadrupled when upscaled. In my example I presented a strict sequential approach, where the first pixel of the first of four frames is always placed in the upper left position on the larger canvas, then the second frame slots into the next available position, and so on. But, what if the super-resolution algorithm could intelligently place the expanded frames to maximize detail? Say, the super-resolution algorithm could talk to a stabilization algorithm? Perhaps analysis of random film grain and slight gate weave amongst the selected film sequence frames could rearrange the upscaled pixels to maximize real detail instead of magnifying errors in image detail.
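The “talk to a stabilization algorithm” idea could be prototyped with plain phase correlation. To be clear, this is a sketch of my own and not anything established in this thread: the function name and the random test frames are invented, and real film scans would need grain-tolerant sub-pixel peak fitting, but it shows how the whole-pixel offset between two frames (gate weave, say) could be measured before deciding which canvas slots each frame should fill.

```python
import numpy as np

def estimate_shift(ref, frame):
    """Estimate the whole-pixel (row, col) displacement of `frame`
    relative to `ref` by phase correlation -- the kind of measurement a
    stabilization pass could hand to a super-resolution step so each
    frame lands on the right sub-grid of the expanded canvas."""
    cross = np.conj(np.fft.fft2(ref)) * np.fft.fft2(frame)
    cross /= np.abs(cross) + 1e-12            # keep only the phase
    corr = np.real(np.fft.ifft2(cross))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peak indices past the halfway point wrap around to negative shifts.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

# Demo: displace a random "frame" by (2, 3) pixels and recover the shift.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
moved = np.roll(ref, shift=(2, 3), axis=(0, 1))
print(estimate_shift(ref, moved))  # -> (2, 3)
```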

Post
#1014469
Topic
Info: The Ultimate Super Resolution Technique
Time

Fascinating idea, I’d love to see some examples.

Your “cheesecloth” analogy is similar to an untested super-resolution concept of mine; I imagine a still frame of limited spatial resolution, say 960x540 (1/4 of 1920x1080 HD), in a digitized image sequence. If the goal is to upscale the still image with real detail rather than interpolated pixels, first I would like to enlarge the chosen still image in an unusual manner: copy the chosen 960x540 still image “X” as a layer atop a blank 1920x1080 “O” canvas, where the 960 horizontal pixels and 540 vertical pixels are spaced out with blank (transparent) pixels between each sequential picture element.

For example, Pixel Row 001, Column 001 of the still image (the upper left-hand corner) fits on pixel 0001,0001 of the canvas. Pixel 0001,0002 of the canvas is left blank. Canvas “O” Pixel 0001,0003 contains Image “X” Pixel 001,002. Canvas “O” pixel 0001,0004 is left blank, but Image “X” pixel 001,003 is mapped to Canvas “O” position 0001,0005, and so on for the remainder of canvas line 1.

Canvas line 2 is left completely blank all the way across. Canvas position 0003,0001 is then filled with Image Pixel 002,001, Canvas position 0003,0002 is left blank, and the spaced-out pixel pattern repeats.

In other words, we are stripping each image pixel away from its neighbor, row by row and column by column, and putting one blank pixel space between them. We aren’t throwing away any pixels, but we are shuffling each image pixel over (along rows) or down (columns) in a quasi checkerboard fashion.

In text form (monospaced font would work best):
Image “X” at 4 pixels x 3 rows:
XXXX
XXXX
XXXX

Image “X” scaled to twice its width and height (4× the pixel count) with typical spatial interpolation, where “X” are unprocessed pixels and “x” are interpolated pixels (false detail is 75% of the upscaled image):
XxXxXxXx
xxxxxxxx
XxXxXxXx
xxxxxxxx
XxXxXxXx
xxxxxxxx

Canvas “O” at 8 pixels x 6 rows:
OOOOOOOO
OOOOOOOO
OOOOOOOO
OOOOOOOO
OOOOOOOO
OOOOOOOO

Image “X” layered atop Canvas “O” with no scaling, in upper left corner:
XXXXOOOO
XXXXOOOO
XXXXOOOO
OOOOOOOO
OOOOOOOO
OOOOOOOO

Image “X” layered onto Canvas “O” with every-other pixel spacing beginning at 0001,0001:
XOXOXOXO
OOOOOOOO
XOXOXOXO
OOOOOOOO
XOXOXOXO
OOOOOOOO

So far what I have tried to illustrate is a small image that has been peppered onto a larger canvas without destructive upscaling. However, think of the pixel “peppers” landing in a strict grid pattern, and the canvas as clear glass rather than white canvas.

Other than the example of a typical spatial upscale, with its very large number of interpolated pixels, we have upscaled nothing and performed no super-resolution yet. So next let us choose another still frame from our image sequence, ideally one that is nearly identical to the earlier chosen frame, such as the very next frame. Little, if any, motion of the camera or subject… perhaps the film grain is the biggest change from the previously chosen still frame… perfect! We will now repeat the peppering of the image “Y” pixels onto a canvas of twice the image’s dimensions (four times its pixel count); however, there is an important change from the earlier expansion. This time, the still image “Y” pixels will be remapped as follows:

Image “Y” layered onto Canvas “O” with every-other pixel spacing beginning at 0001,0002:
OYOYOYOY
OOOOOOOO
OYOYOYOY
OOOOOOOO
OYOYOYOY
OOOOOOOO

We will choose two more still images from the image sequence and repeat the pixel remapping, each onto its own expanded canvas, this time beginning on canvas line 2.

Image “Z” layered onto Canvas “O” with every-other pixel spacing beginning at 0002,0001:
OOOOOOOO
ZOZOZOZO
OOOOOOOO
ZOZOZOZO
OOOOOOOO
ZOZOZOZO

Image “W” layered onto Canvas “O” with every-other pixel spacing beginning at 0002,0002:
OOOOOOOO
OWOWOWOW
OOOOOOOO
OWOWOWOW
OOOOOOOO
OWOWOWOW

With the four separate expanded canvases we’ve created, we are now ready to combine them into a single super-resolution image by stacking them, allowing the solid pixels to show through the transparent layered blank canvas pixels to reveal a single image composite:
XYXYXYXY
ZWZWZWZW
XYXYXYXY
ZWZWZWZW
XYXYXYXY
ZWZWZWZW

Post
#1001626
Topic
Project #4K77
Time

I don’t know how much crowd-sourced help you’d like to have… but it would be cool if there’s a way for contributors to be able to easily access each shot at any completed stage of restoration. For example:

raw scan only
raw scan + Dr Dre color restoration
color restoration + stabilization
stabilization + rough dirt pass
rough dirt pass + fine dirt pass
fine dirt pass + final scene-to-scene color correction
etc. etc.

One-shot-at-a-time lossless downloads would prevent a whole movie from running wild on the internet, and restoration enthusiasts could learn and sharpen their skills on easily manageable chunks. The obvious downside is the massive duplication of material when successive passes are rendered and uploaded. Some work, though, could be distributed as XML-only corrections that the end user renders on their own workstation, which would keep duplicate fully-rendered version-after-version footage from accumulating on the project server.

Post
#1001461
Topic
Project #4K77
Time

Williarob said:

Exclusively for OT Members, for a limited time only: here is a longer 4K preview (H.265, no audio).

https://we.tl/aFQWacjyQp

As you’ll see, the other shots are not quite so clean, but watched on my 1080p monitor it looks pretty good. Less good at 4K I imagine, but I don’t have a 4K monitor or TV 😃

The UHD video sample looks amazing on my Dell 3216Q. The whites are still a bit pink and the shadows a bit green, but the overall image quality put a big grin on my face.

Post
#1001444
Topic
Star Wars GOUT in HD using super resolution algorithm (* unfinished project *)
Time

I just read about a new upscaling program that may interest those following this thread. From http://www.the-digital-picture.com/News/News-Post.aspx?News=19089

"Developer Steffen Gerlach has created a free program for Windows users – A Sharper Scaling – which he claims produces better enlargements than Photoshop’s bicubic interpolation.

For those that want to preserve details in large prints while cropping heavily, this might be a plausible solution."

Software and additional info found here:
http://a-sharper-scaling.com/

Disclaimer: I have no association with the software or links above, nor have I tried the software. YMMV, etc.

Post
#946181
Topic
Estimating the original colors of the original Star Wars trilogy
Time

I just read about a new iOS app for “scanning” faded photo prints with the iPhone camera. The initial comments aren’t very encouraging, but it seems that this is proof of a broader (and marketable) hunger for color-restoration apps amongst photo enthusiasts:
http://www.dpreview.com/news/2782682139/unfade-for-ios-scans-and-restores-old-prints

Note some comments mentioning “Digital ROC Pro” on the Nikon Super Coolscan 9000ED still film scanner. This is one of the older color-restoration tools I’ve used with my own Coolscan 9000ED, and mentioned in one of my earlier comments back on page 1 of this thread. Digital ROC Pro seems to have been left to languish in 32-bit EOL-land, but might be worth a look if you haven’t already done so.

camroncamera said:

DrDre, I don’t think I’ve read you mentioning it… have you heard of the Digital ROC color-restoration processing plug-in by Kodak subsidiary Applied Science Fiction?
http://www.asf.com/products/plugins/rocpro/pluginROCPRO/
I don’t know that the product has been updated for many years. My Nikon film scanner came bundled with a version, but it only works during the scanning of film. Worked fairly well. Reminds me of your work on restoring color to red-faded motion picture film scans.

Post
#919869
Topic
Estimating the original colors of the original Star Wars trilogy
Time

DrDre, I don’t think I’ve read you mentioning it… have you heard of the Digital ROC color-restoration processing plug-in by Kodak subsidiary Applied Science Fiction?
http://www.asf.com/products/plugins/rocpro/pluginROCPRO/
I don’t know that the product has been updated for many years. My Nikon film scanner came bundled with a version, but it only works during the scanning of film. Worked fairly well. Reminds me of your work on restoring color to red-faded motion picture film scans.

Post
#915559
Topic
Info: How to build a film scanner (need advise & help, please)
Time

camroncamera said:

poita said:

You only need it for capture, so when capturing one reel at a time, and with lossless compression, you can get away with using that drive. You just capture until the drive is 80% full, and then copy it off to HDD storage, and then delete it and continue your capture.

PDB said:

poita, thanks for sharing all this info. Its great to read.

Mind if I ask some questions:

What format are you capturing in; Tiff? I assume the full bit depth of the camera, 12-bit?

With the earlier mention of lossless compression, I am wondering what capture format is ideal as well.

Why are you triggering the LED light source instead of leaving it always on? Is it to save on the life of the LED?

I think I can help on this one. If the film moves continuously through your scanning rig with a constant light source and no shutter, your scan will be streaked, because the film is being pulled through the projector gate at the same instant the image sensor is making an exposure. The reason for triggering the LED light source is to freeze the film image in the gate for the imaging sensor; it works on the same principle as a flash photo that freezes action. (This flashing must be synchronized with the image-capturing sensor, lest you capture half of one frame, half of the next frame, and the frameline between them.) If the motion of the film through the scanner is intermittent, that is, if each frame stops in the gate for a moment in the same fashion that a movie camera briefly stops each frame in the gate and then opens its shutter to make an exposure (often 1/48th second for 24 FPS photography), you might theoretically get away with a constant light source. http://forums.kinograph.cc/t/image-sensor-optical-components/44/37
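To put a number on the streaking, here is a toy numpy model of my own (nothing from the Kinograph thread): treat a constant-light exposure as the average of the frame over every position it occupies while moving, versus a strobed exposure that samples one frozen position.

```python
import numpy as np

def continuous_exposure(film, travel_px, steps=8):
    """Constant light while the film keeps moving: the sensor integrates
    the frame at every position it passes through during the exposure,
    so detail smears along the direction of travel."""
    positions = np.linspace(0, travel_px, steps).astype(int)
    return np.mean([np.roll(film, p, axis=0) for p in positions], axis=0)

def strobed_exposure(film):
    """A triggered LED flash freezes the frame: one sharp position."""
    return film.copy()

# One bright scanline on dark film makes the smear easy to measure.
film = np.zeros((32, 32))
film[16, :] = 1.0

smeared = continuous_exposure(film, travel_px=6)
sharp = strobed_exposure(film)
print(int((smeared > 0).any(axis=1).sum()))  # several rows lit: streaking
print(int((sharp > 0).any(axis=1).sum()))    # one row lit: frozen frame
```

The single bright line ends up spread across every row the film occupied during the simulated exposure, which is exactly why the flash (or intermittent film motion) is needed.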

Post
#915553
Topic
Info: How to build a film scanner (need advise & help, please)
Time

poita said:

You only need it for capture, so when capturing one reel at a time, and with lossless compression, you can get away with using that drive. You just capture until the drive is 80% full, and then copy it off to HDD storage, and then delete it and continue your capture.

PDB said:

poita, thanks for sharing all this info. Its great to read.

Mind if I ask some questions:

What format are you capturing in; Tiff? I assume the full bit depth of the camera, 12-bit?

With the earlier mention of lossless compression, I am wondering what capture format is ideal as well.

Why are you triggering the LED light source instead of leaving it always on? Is it to save on the life of the LED?

I think I can help on this one. If your film motion is continuous through your scanning rig with a constant light source and no shutter, your scan will be streaked as the film is pulled through the projector gate. The reason for triggering the LED light source is to freeze the film image in the gate for the imaging sensor. It works on the same principle as a flash photo that freezes action. (This flashing must be synchronized with the image capturing sensor, lest you capture half of one frame, the other half of the next frame, and the frameline between them.) If the motion of the film through the scanner is intermittent, that is, if the frames stops in the gate for a moment in the same fashion that a movie camera briefly stops each frame in the gate and then opens its shutter to make an exposure (often 1/48th second for 24FPS photography), theoretically you might be able to get away with a constant light source. http://forums.kinograph.cc/t/image-sensor-optical-components/44/37