
Star Wars 1977 Technicolor IB print color references (matched to print) — Page 4

Author
Time
 (Edited)

Whilst these are really cool, the scanner is not designed for IB Tech, and doesn’t have the correct light source, sensor, or post-processing LUTs available to make it accurate to a projected print.
It is a cool thing to do, but it isn’t going to be accurate; even the response curve of the sensor is going to be considerably off for this film type.

Donations welcome: paypal.me/poit
bitcoin:13QDjXjt7w7BFiQc4Q7wpRGPtYKYchnm8x
Help get The Original Trilogy preserved!

Author
Time
 (Edited)

I agree with regard to luminosity and saturation, given that the light source is obviously different from that of a 35mm projector.

However, I disagree with respect to the hues, which are accurately represented, as the scanner is color calibrated with a professional IT8 color target manufactured by LaserSoft. This color calibration target effectively yields the post-processing LUT you mentioned, and is used internally by the scanning software (SilverFast Ai) to calibrate the colors before each scanning session.

While the film type will have a slight effect on color representation, tests by professionals have shown that color fidelity is roughly ~95% when using a generic IT8 color target. This is logical, given that the scanner’s sensor is almost always the source of the largest inaccuracies; the last ~5% improvement is gained by using a higher-quality scanner and a film-specific color target, if available.
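The idea behind target-based calibration can be illustrated with a toy model. The sketch below (Python/NumPy, with made-up numbers) fits a linear correction from simulated target patches; a real IT8 workflow such as SilverFast’s builds a full ICC profile rather than a bare 3x3 matrix, so this is only a simplified illustration of the principle:

```python
import numpy as np

# Toy model of IT8-style calibration: fit a linear colour correction
# from target patches. All numbers are made up; real calibration software
# builds an ICC profile, not a bare 3x3 matrix.
rng = np.random.default_rng(0)
reference = rng.uniform(0.0, 1.0, (288, 3))    # known patch colours (an IT8 target has ~288)
sensor_matrix = np.array([[0.90, 0.10, 0.00],  # assumed light/sensor crosstalk
                          [0.05, 0.85, 0.10],
                          [0.00, 0.10, 0.90]])
measured = reference @ sensor_matrix.T         # simulated raw scanner readings

# Least squares: find M so that measured @ M approximates the reference
M, *_ = np.linalg.lstsq(measured, reference, rcond=None)
calibrated = measured @ M
print(np.allclose(calibrated, reference, atol=1e-6))
```

In this noiseless linear toy case the fit recovers the reference patches essentially exactly; on a real scanner the residual error is what the "~95% fidelity" figure refers to.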

More info can be found here:

http://www.filmscanner.info/en/Scannerkalibrierung.html

In any case, given that I can personally view the frames themselves, and compare them to the scans, I can attest to the accuracy of the hues produced by the scans.

Author
Time
 (Edited)

The calibration to an IT8 target is the problem. You need a target for the specific film type you are trying to scan. A Kodak ASA 50 calibration target will only throw you off a little if you are scanning Fuji ASA 50, but when it comes to cine film, you absolutely need to know, and cater for, the stock response you are working with.
Using the incorrect colour target does, as you mention, adjust the internal scanner processing, and almost ensures the colour is off.
The film response curves of feature film stocks are radically different from those of print stocks, and IB is its own thing again; the colour, particularly outside the mid-tones, will be skewed. The DMin and DMax are wildly off, as are the response curves, compared to, say, a Fuji Provia 100F IT8 target.

If you are viewing the prints projected with the correct light source, at the correct FL on a screen in a darkened room, and matching to what you see, then I agree, you can adjust the scan as best as you can to match what you are seeing on the projected print. If you are looking at the film with a light box surrounded by ambient light, then the colour perception is going to be way off.
Colour is a slippery beast.
Of course the software used to view the results and monitor calibration is another issue, but I’m sure you are running calibrated monitors and software.
It is amazing that even different browsers on the same calibrated PC will give different-looking images on OT.COM, sometimes radically so. I’ve gone to comment on someone’s colour corrections in the forums a few times, only to realise that viewing images inside a browser on a forum is completely different from how their images look on their computer.

I love the work you are doing, but when downloading the images, and watching the IB prints here, the colour looks quite different.


Author
Time
 (Edited)

poita said:

Whilst these are really cool, the scanner is not designed for IB Tech, and doesn’t have the correct light source, sensor, or post-processing LUTs available to make it accurate to a projected print.
It is a cool thing to do, but it isn’t going to be accurate; even the response curve of the sensor is going to be considerably off for this film type.

Yeah I was going to say the same thing. That’s a scanner designed for photographic film, not motion-picture film. I don’t mean to troll the thread or curtail your enthusiasm for colour correction DrDre, but how is it any different from just getting the scan of the full film and then balancing the soundtrack to look consistent?

Also, all professional commercial scanning units perform their own white balance/light calibration prior to scanning each reel. For example you can see it in action at 2:15 in this video:

[ Scanning stuff since 2015 ]

Author
Time
 (Edited)

Balancing the soundtrack or white balancing isn’t going to get you anywhere, as the sensor response is far more complex than a simple RGB curves adjustment will allow you to correct. Then there’s the bigger problem: color perception can differ quite substantially from person to person:

http://www.livescience.com/21275-color-red-blue-scientists.html

So, adjusting colors while watching a projected print may seem like a good idea, but in many ways the way our eyes and brains sense and interpret colors is quite similar to how a scanner sensor works. You might adjust the colors to roughly match what you personally are seeing, but someone else may sense and interpret these colors differently. Taking a number of different people and having them adjust colors by eye while watching a screen will almost certainly yield quite a variation of results, similar to taking a number of different scanners and comparing the responses of their sensors. Now, matching the tones you see on a calibrated screen to a projection will bring the colors closer to what they should be, even if we perceive them differently from each other. However, the accuracy will depend heavily on the sensitivity of our eyes to different tones, which in and of itself depends heavily on hue, luminosity, and saturation.

Now, the scanner detects the light after it has passed through the dyes and film. This light has a specific distribution of wavelengths, depending on the combination of dyes and film, which thus determines its color. While it is true that a different film stock will alter the colors, this should not affect the color calibration, which is simply mapping the colors detected by the sensor after passing through the dyes and film onto a reference file, which was also calibrated based on a combination of dye and film. The important point here is that it is the color of the light detected by the sensor which is calibrated, not the color of the dyes themselves. While the color gamut of a piece of Technicolor film will be different from the color gamut of the IT8 target, they will overlap to a large degree. In other words, there usually exists a unique combination of Technicolor dyes and film which transmits the same color light as a color patch on the IT8 target, even if the dyes and film are different for the IT8 target. The inaccuracies introduced at this point are caused by the precision of the sensor and the reflectance of the film material, which is generally very small compared to the transmitted light captured by the sensor.

I might actually agree that the darkest and brightest colors may be skewed somewhat in these scans, since the IT8 color target is geared mostly towards correcting the mid-tones. However, I do think the mid-tones should be accurate. Therefore, I still believe that using a generic IT8 color calibration target is the next most objective and accurate way (short of a film-specific color target) to ensure hue representation is accurate over a large range of tones, even if contrast and saturation will heavily influence how each of us perceives these hues when contrast and saturation are adjusted.

Author
Time
 (Edited)

To illustrate the accuracy of the scans, I will compare Mike Verta’s calibrated photographs of a Technicolor print screening, which he claims is 98% accurate in terms of hues, with my own calibrated scan. Let’s first look at the two images:

Mike Verta photograph:

DrDre scan:

Now, despite the fact that the Mike Verta print was projected with a 1970s bulb, the colors are visually very close. The Mike Verta photo is slightly more yellow, which in this case is logical, considering the light source used to project it.

Author
Time
 (Edited)

Let’s quantify the difference. I will first color match my scan to the Mike Verta photograph such that they can be directly compared:
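A crude stand-in for this kind of colour matching is per-channel mean/std transfer. The sketch below is illustrative only (the actual matching model used here is more sophisticated, and the images are synthetic stand-ins):

```python
import numpy as np

# Crude colour matching: transfer per-channel mean and std from a
# reference image to a source image. Synthetic data; the real matching
# model used in the thread is more sophisticated than this.
def match_channels(src, ref):
    out = np.empty_like(src, dtype=float)
    for c in range(3):
        s, r = src[..., c], ref[..., c]
        out[..., c] = (s - s.mean()) / (s.std() + 1e-12) * r.std() + r.mean()
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(1)
scan = rng.uniform(0.0, 1.0, (64, 64, 3))        # stand-in for the scan
photo = np.clip(scan * 0.9 + 0.05, 0.0, 1.0)     # stand-in for the photograph
matched = match_channels(scan, photo)
print(abs(matched.mean() - photo.mean()) < 1e-6)  # statistics now agree
```

After matching, the two images share per-channel statistics, so the remaining differences are structural rather than global colour casts.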

The comparison yields the following result.

Red:

Green:

Blue:

Average:

Nowhere in the frame does the difference in the red, green, or blue channel exceed 5%.
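For anyone wanting to reproduce this kind of check, a minimal version of the per-channel comparison might look like the sketch below (the frames here are synthetic; the 5% figure is the criterion from the comparison above):

```python
import numpy as np

# Maximum per-channel deviation between two colour-matched frames,
# expressed as a percentage of full scale. Frames are synthetic.
def max_channel_diff_percent(a, b):
    """a, b: float images in [0, 1], shape (H, W, 3). Returns [R, G, B] in %."""
    diff = np.abs(a - b)
    return [100.0 * diff[..., c].max() for c in range(3)]

rng = np.random.default_rng(2)
frame_a = rng.uniform(0.0, 1.0, (32, 32, 3))
# Simulate a closely matched second frame with small residual error:
frame_b = np.clip(frame_a + rng.normal(0.0, 0.005, frame_a.shape), 0.0, 1.0)
r, g, b = max_channel_diff_percent(frame_a, frame_b)
print(max(r, g, b) < 5.0)
```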

Author
Time

As we talk about projected prints, what would be the correct way of reproducing a yellow bulb effect in RGB?

A gamma reduction of the blue channel and, very slightly, of the green? Or is it more complicated?

Author
Time

Yeah, that should get you pretty close.

Author
Time
 (Edited)

DrDre said:

Balancing the soundtrack or white balancing isn’t going to get you anywhere, as the sensor response is far more complex than a simple RGB curves adjustment will allow you to correct.

I didn’t say that it does, just that all professional scanning machines do a calibration prior to scanning, meaning that once you’ve calibrated one reel, all reels using the same film can be corrected with a common LUT.

So, adjusting colors watching a projected print may seem like a good idea, but in many ways the way our eyes and brains sense and interpret colors is quite similar to how a scanner sensor works. You might adjust the colors to roughly match what you personally are seeing, but someone else may sense and interpret these colors differently.

I would believe that those who do professional colour correction would have taken perception tests, as well as a robust colour blindness test, to ensure they don’t have even a hint of mild colour deficiency. Although I do think you’re overstating the problem, especially since colour blindness is hereditary on the sex chromosome and consequently affects only 1 in 200 women. Other than that, yes, of course we all have individual perception of colour, but that’s because we each have a unique number of photoreceptor cells in a unique ratio of S, M, and L type cones and rods, and the photoreceptor cells can have different biological characteristics in each person, making them sensitive to slightly different types of light. A bad diet can adversely affect photoreceptor cells. But if you have 20/20 vision and no signs of colour deficiency, it shouldn’t matter.

Now, the scanner detects the light after it has passed through the dyes and film. This light has a specific distribution of wavelengths, depending on the combination of dyes and film, which thus determines its color. While it is true that a different film stock will alter the colors, this should not affect the color calibration, which is simply mapping the colors detected by the sensor after passing through the dyes and film onto a reference file, which was also calibrated based on a combination of dye and film.

I disagree, and I’m sure poita knows far more about this than I do as a layperson. The issue is that colours are not on the film, the colours are produced by shining a carbon-arc lamp through the film and then projecting it onto a particular type of screen. Represented by this easy to remember formula:

colours = print + light source + reflection surface

When you scan a film it has a different light source, a different sensor and no reflection surface, what you’re trying to achieve is how to make (argument’s sake):

print + LED light + Colour CCD sensor + calibrated monitor + LUT = print + carbon-arc lamp + cinema screen

What your argument is is that the print doesn’t matter because:

LED light + Colour CCD sensor + calibrated monitor + LUT = carbon-arc lamp + cinema screen

But how do you know that’s true?


Author
Time

I think we all agreed long ago that we cannot reproduce the colors perfectly - every print was slightly different whether it was Eastman, Technicolor or some other print stock. Added to this is the fact that each of these prints was then shown in theaters using a variety of bulbs (and how fresh those bulbs were would also matter) and projected onto different types of screens… The fact is that there is just no way it can be done. And even if it could, the chances that these perfectly matched colors would then be faithfully reproduced on every brand and flavor of consumer television is about 0%.

In my opinion, Dre’s process here is producing some great looking colors. Are they accurate? Obviously not 100% but I bet they fall within 10% of what they are supposed to be and surely that is close enough for our purposes. As I scroll up and down this thread, I don’t see a huge difference between one iteration and the next. Throw the images on Screenshot comparison and mouse over and the difference is still pretty subtle in most cases - at least to my eyes. As long as the skin tones look natural, X-Wings have red stripes and sand is, well… sand colored - I’m happy.

So don’t be discouraged Dre - Keep up the good work!

TheStarWarsTrilogy.com.
The007Dossier.com.
Donations always welcome: Paypal | Bitcoin: bc1qzr9ejyfpzm9ea2dglfegxzt59tys3uwmj26ytj

Author
Time
 (Edited)

Here is what I quickly get when I apply to your scans a single color matching model generated with Mike Verta’s pic:

[imgur: fifteen before/after comparison images]

Author
Time
 (Edited)

RU.08 said:

DrDre said:
So, adjusting colors while watching a projected print may seem like a good idea, but in many ways the way our eyes and brains sense and interpret colors is quite similar to how a scanner sensor works. You might adjust the colors to roughly match what you personally are seeing, but someone else may sense and interpret these colors differently.

I would believe that those who do professional colour correction would have taken perception tests, as well as a robust colour blindness test, to ensure they don’t have even a hint of mild colour deficiency. Although I do think you’re overstating the problem, especially since colour blindness is hereditary on the sex chromosome and consequently affects only 1 in 200 women. Other than that, yes, of course we all have individual perception of colour, but that’s because we each have a unique number of photoreceptor cells in a unique ratio of S, M, and L type cones and rods, and the photoreceptor cells can have different biological characteristics in each person, making them sensitive to slightly different types of light. A bad diet can adversely affect photoreceptor cells. But if you have 20/20 vision and no signs of colour deficiency, it shouldn’t matter.

It shouldn’t matter in the range of colors where our eyes are the most sensitive, but even for people with so-called 20/20 vision, color sensitivity varies from color to color and from person to person, depending on various factors:

As you can see, our eyes are much less sensitive to variations in reds and blues, and this is on average.

Even if we were all to perceive each tone perfectly, curves adjustments generally only get you so far in terms of color matching one source to another. So, some type of algorithmic color calibration is in order.

Now, the scanner detects the light after it has passed through the dyes and film. This light has a specific distribution of wavelengths, depending on the combination of dyes and film, which thus determines its color. While it is true that a different film stock will alter the colors, this should not affect the color calibration, which is simply mapping the colors detected by the sensor after passing through the dyes and film onto a reference file, which was also calibrated based on a combination of dye and film.

I disagree, and I’m sure poita knows far more about this than I do as a layperson. The issue is that colours are not on the film, the colours are produced by shining a carbon-arc lamp through the film and then projecting it onto a particular type of screen. Represented by this easy to remember formula:

colours = print + light source + reflection surface

When you scan a film it has a different light source, a different sensor and no reflection surface, what you’re trying to achieve is how to make (argument’s sake):

print + LED light + Colour CCD sensor + calibrated monitor + LUT = print + carbon-arc lamp + cinema screen

What your argument is is that the print doesn’t matter because:

LED light + Colour CCD sensor + calibrated monitor + LUT = carbon-arc lamp + cinema screen

But how do you know that’s true?

No, this is way too simplistic.

print + carbon-arc lamp (variable) + cinema screen (variable) => print color (under white light) + carbon-arc lamp color (variable) + cinema screen color (variable) = film color (variable)

For this thread I’m after the print color (under white light), as the lamp color and cinema screen color are not constant. For example, lamps from different manufacturers will differ slightly, and even those from the same manufacturer will emit a slightly different color as they age. Cinema screens exist in varying quality, and their effect on the colors will thus also vary. Additionally, their combined effect is fairly minimal, as I’ve shown in the above example, and these effects are relatively easy to correct for.

Now a scanner will give you:

print + LED-light + CCD sensor => print color (under white light) + LED-light color + sensor response curve

print color (under white light) + LED-light color + sensor response curve = color measurement (uncalibrated)

By calibrating the software on a known target, we get:

reference target + LED-light + CCD sensor => target color (under white light) + LED-light color + sensor response curve

target color (under white light) = reference color

reference color + LED-light color + sensor response curve = color measurement (uncalibrated)

color measurement (uncalibrated) + calibration = reference color

calibration = - LED-light color - sensor response curve

Now we do a calibrated measurement:

print + LED-light + CCD sensor + calibration => print color (under white light) + LED-light color + sensor response curve + calibration

print color (under white light) + LED-light color + sensor response curve + calibration = color measurement (calibrated)

print color (under white light) + LED-light color + sensor response curve + (- LED-light color - sensor response curve) = color measurement (calibrated)

print color (under white light) = color measurement (calibrated)

This is what I’m after!

color measurement (calibrated) = print color (under white light)
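The cancellation in the derivation above can be sanity-checked with a toy numeric model. In the sketch below (Python/NumPy) the light colour and sensor response are modelled as simple per-channel gains, which is a big simplification of a real sensor, and all the numbers are made up for illustration:

```python
import numpy as np

# Toy check of: calibration = - LED-light color - sensor response curve
# (here multiplicative gains, so the calibration is their reciprocal).
led_colour = np.array([1.10, 1.00, 0.85])    # assumed LED light colour
sensor_resp = np.array([0.95, 1.05, 0.90])   # assumed sensor response

# Calibration step: scan a reference target whose true colours are known.
target_true = np.array([0.40, 0.55, 0.30])
target_measured = target_true * led_colour * sensor_resp
calibration = target_true / target_measured  # = 1 / (led_colour * sensor_resp)

# Calibrated measurement of an unknown print patch:
patch_true = np.array([0.62, 0.33, 0.48])
patch_measured = patch_true * led_colour * sensor_resp
print(np.allclose(patch_measured * calibration, patch_true))
```

In this idealised model the calibrated measurement recovers the print colour under white light exactly, mirroring the final line of the derivation; a real system deviates to the extent the sensor response is non-linear and channel-coupled.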

Author
Time
 (Edited)

UnitéD2 said:

Here is what I quickly get when I apply to your scans a single color matching model generated with Mike Verta’s pic:

Exactly, colors will tend to be slightly warmer and more yellow, but overall the difference is very minimal, as was also recently confirmed by a friend of mine, who’s an avid film collector. Most people wouldn’t notice the difference.

Author
Time
 (Edited)

Williarob said:

I think we all agreed long ago that we cannot reproduce the colors perfectly - every print was slightly different whether it was Eastman, Technicolor or some other print stock. Added to this is the fact that each of these prints was then shown in theaters using a variety of bulbs (and how fresh those bulbs were would also matter) and projected onto different types of screens… The fact is that there is just no way it can be done. And even if it could, the chances that these perfectly matched colors would then be faithfully reproduced on every brand and flavor of consumer television is about 0%.

In my opinion, Dre’s process here is producing some great looking colors. Are they accurate? Obviously not 100% but I bet they fall within 10% of what they are supposed to be and surely that is close enough for our purposes. As I scroll up and down this thread, I don’t see a huge difference between one iteration and the next. Throw the images on Screenshot comparison and mouse over and the difference is still pretty subtle in most cases - at least to my eyes. As long as the skin tones look natural, X-Wings have red stripes and sand is, well… sand colored - I’m happy.

So don’t be discouraged Dre - Keep up the good work!

Thanks williarob, I will! 😃

Just to emphasize my point…

2004/2011 official master:

Mike Verta photograph (with 1970s carbon-arc lamp & 1970s cinema screen):

DrDre scan (with 2017 LED light & 2017 CCD-sensor & calibration):

Now unless someone can point out some glaring color differences, I rest my case…

Author
Time

those red dots

just kidding! Keep up the great work DRE, the effort you put into all of this is appreciated! Amazing things get done when people get invested like this. 😃

Author
Time

I have a few more Mike Verta photographs for which I also have the frames. I will post a few more comparisons soon.

Author
Time

UnitéD2 said:

Here is what I quickly get when I apply to your scans a single color matching model generated with Mike Verta’s pic:

Saving those images back for my own personal reference… That’s awesome.

Author
Time
 (Edited)

DrDre said:

It shouldn’t matter in the range of colors, where our eyes are the most sensitive, but even for people with so-called 20/20 vision, color sensitivity varies from color to color and from person to person, depending on various factors:

Right, I didn’t disagree, but can we both agree you’re talking about photoreceptors in the eye and the neurological links in the brain? And the S/M/L cones in particular? All cones are sensitive to all colours, which is probably why we can’t see the same dynamic range as a 16-bit digital colour sensor, each element of which is receptive to only one type of colour. Anyway, most variation in how we perceive colour is due to people having a different ratio of L-type to M-type cones in the eye, which is believed to vary greatly, but I don’t see how it would prevent someone with 20/20 vision from matching two colour sources accurately with the right tools and methods.

For this thread I’m after the print color, as the lamp color and cinema screen are not constant factors (for example, lamps will emit a slightly different color as they age, and cinema screens exist in varying quality), and their effect is fairly minimal, as I’ve shown in the above example. Additionally, these are relatively easy to correct for.

Sure, carbon arc lamps age greatly, and if you run a twin-projector set-up with two lamps that aren’t the same age, or that don’t receive equal wear because you run them with a bias towards one projector, then you can end up with the picture looking different on one.

print + led + CCD sensor => print color (under white light) + led color + sensor response curve

How about this?

print (variable) + led (variable) + CCD sensor (variable) => …

There is cross-contamination across the Cyan/Magenta/Yellow dyes because, just like the photoreceptors in our eyes and the pixels in the CCD, the dyes are not completely transparent to wavelengths outside of their main “colour”.

DrDre said:

Mike Verta photograph (with 1970s carbon-arc lamp & 1970s cinema screen):

DrDre scan (with 2017 LED light & 2017 CCD-sensor & calibration):

Now unless someone can point out some glaring color differences, I rest my case…

I can bring that a bit closer to Mike’s photo, it’s still not 100% though (I lack the expertise or proper tools):


Author
Time

The scans still have a tendency to look a bit flat, so here are some updates:

Author
Time
 (Edited)

I really like the look of the conference room now, and reel 5 is looking nice. Some issues:

DrDre said:

The scans still have a tendency to look a bit flat, so here are some updates:

There’s a lot of yellow in these images. The first three have a good amount of blue in the highlights to somewhat balance this out (and nice job on removing the greenish tint), but the second three are just too yellow. Note the blue of R2’s dome in image 2 versus images 4 and 5 - the blue is almost nonexistent. Of course, each batch comes from different reels, and this collector’s reel 3 has a consistent yellow push that doesn’t look at all natural. This is why I am wary of judging the color from a single Tech source - the factory had noted quality issues after all. Here’s how the Falcon shot looks with a blue curves adjustment:
Blue Falcon
http://screenshotcomparison.com/comparison/212208

You probably don’t recognize me because of the red arm.
Episode 9 Rewrite, The Starlight Project (Released!) and ANH Technicolor Project (Released!)

Author
Time

NeverarGreat said:

I really like the look of the conference room now, and reel 5 is looking nice. Some issues:

DrDre said:

The scans still have a tendency to look a bit flat, so here are some updates:

There’s a lot of yellow in these images. The first three have a good amount of blue in the highlights to somewhat balance this out (and nice job on removing the greenish tint), but the second three are just too yellow. Note the blue of R2’s dome in image 2 versus images 4 and 5 - the blue is almost nonexistent. Of course, each batch comes from different reels, and this collector’s reel 3 has a consistent yellow push that doesn’t look at all natural. This is why I am wary of judging the color from a single Tech source - the factory had noted quality issues after all. Here’s how the Falcon shot looks with a blue curves adjustment:
Blue Falcon
http://screenshotcomparison.com/comparison/212208

I actually only adjusted the contrast, so the green is still in there somewhere 😉. I agree those three frames are very yellow.

Author
Time
 (Edited)

RU.08 said:
Right, I didn’t disagree, but can we both agree you’re talking about photoreceptors in the eye and the neurological links in the brain? And the S/M/L cones in particular? All cones are sensitive to all colours, which is probably why we can’t see the same dynamic range as a 16-bit digital colour sensor, each element of which is receptive to only one type of colour. Anyway, most variation in how we perceive colour is due to people having a different ratio of L-type to M-type cones in the eye, which is believed to vary greatly, but I don’t see how it would prevent someone with 20/20 vision from matching two colour sources accurately with the right tools and methods.

It affects someone in the sense that they won’t be able to distinguish between two shades of a color. For example, one person’s eyes may be less sensitive to red, such that they perceive one shade of red where someone else sees a different shade. That other person might have eyes that are less sensitive to blue, and therefore not notice the subtle differences between two different blues. It has nothing to do with color blindness; it’s just that our eyes, like any sensor, have a lower detection limit, which differs from person to person.

I think it would actually be an interesting experiment to let 10 different people grade a scan, while viewing the same print.