
Post #1082132

Author
DrDre
Parent topic
Star Wars 1977 Technicolor IB print color references (matched to print)
Link to post in topic
https://originaltrilogy.com/post/id/1082132/action/topic#1082132
Date created
8-Jun-2017, 9:30 AM

RU.08 said:

DrDre said:
So, adjusting colors while watching a projected print may seem like a good idea, but in many ways the way our eyes and brains sense and interpret colors is quite similar to how a scanner sensor works. You might adjust the colors to roughly match what you personally are seeing, but someone else may sense and interpret these colors differently.

I would imagine that those who do professional colour correction have taken perception tests, as well as a robust colourblindness test, to ensure they don’t have even a hint of mild colour deficiency. I do think you’re overstating the problem, though, especially since colourblindness is hereditary on the sex chromosome and consequently affects only 1 in 200 women. Other than that, yes, of course we all perceive colour individually, but that’s because each of us has a unique number of photoreceptor cells in a unique ratio of S, M and L type cones and rods, and those photoreceptor cells can have different biological characteristics in each person, making them sensitive to slightly different types of light. A bad diet can also adversely affect photoreceptor cells. But if you have 20/20 vision and no signs of colour deficiency, it shouldn’t matter.

It shouldn’t matter in the range of colors where our eyes are most sensitive, but even for people with so-called 20/20 vision, color sensitivity varies from color to color and from person to person, depending on various factors:

As you can see, our eyes are much less sensitive to variations in reds and blues, and that is only the average.

Even if we were all able to perceive each tone perfectly, curves adjustments generally only get you so far in terms of color matching one source to another. So, some type of algorithmic color calibration is in order.

Now, the scanner detects the light after it has passed through the dyes and film. This light has a specific distribution of wavelengths, depending on the combination of dyes and film, and that distribution determines its color. While it is true that a different film stock will alter the colors, this should not affect the color calibration, which simply maps the colors the sensor detects after the light has passed through the dyes and film onto a reference file, which was itself calibrated based on a combination of dye and film.
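To make the “algorithmic color calibration” idea a bit more concrete, here is a minimal sketch in Python/numpy. All the patch values are invented for illustration, and a real workflow would use far more patches and probably a more elaborate model (a full LUT rather than a single matrix); the point is just that one transform is fitted from many samples instead of nudging curves by eye.

import numpy as np

# Hypothetical RGB values of a handful of patches as the scanner reports them
# (uncalibrated), and the corresponding values from the reference file.
scanned = np.array([
    [0.82, 0.41, 0.30],
    [0.35, 0.62, 0.28],
    [0.22, 0.30, 0.71],
    [0.85, 0.83, 0.80],
    [0.10, 0.09, 0.08],
], dtype=float)

reference = np.array([
    [0.78, 0.38, 0.26],
    [0.31, 0.65, 0.25],
    [0.20, 0.27, 0.75],
    [0.88, 0.87, 0.86],
    [0.09, 0.09, 0.09],
], dtype=float)

# Least-squares fit of a 3x3 matrix M so that scanned @ M is as close as
# possible to reference across all patches at once.
M, residuals, rank, _ = np.linalg.lstsq(scanned, reference, rcond=None)

# The same matrix is then applied to every pixel of the scan
# (shown here for a single example pixel).
pixel = np.array([0.55, 0.47, 0.33])
matched = pixel @ M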

I disagree, and I’m sure poita knows far more about this than I do as a layperson. The issue is that the colours are not on the film; the colours are produced by shining a carbon-arc lamp through the film and then projecting it onto a particular type of screen. This can be represented by an easy-to-remember formula:

colours = print + light source + reflection surface

When you scan a film there is a different light source, a different sensor and no reflection surface, so what you’re trying to achieve (for argument’s sake) is to make:

print + LED light + Colour CCD sensor + calibrated monitor + LUT = print + carbon-arc lamp + cinema screen

Your argument is that the print doesn’t matter, because:

LED light + Colour CCD sensor + calibrated monitor + LUT = carbon-arc lamp + cinema screen

But how do you know that’s true?

No, this is way too simplistic.

print + carbon-arc lamp (variable) + cinema screen (variable) => print color (under white light) + carbon-arc lamp color (variable) + cinema screen color (variable) = film color (variable)

For this thread I’m after the print color (under white light), as the lamp color and cinema screen color are not constant. For example, lamps from different manufacturers will differ slightly, and even those by the same manufacturer will emit a slightly different color as they age. Cinema screens exist in varying quality, and their effect on the colors will thus also vary. Additionally, their combined effect is fairly minimal, as I’ve shown in the above example, and is relatively easy to correct for.
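As a concrete reading of the formula above (and of RU.08’s “colours = print + light source + reflection surface”), here is a toy three-band (R, G, B) sketch in Python; every number is invented for illustration, and real spectra are of course continuous. Each stage scales the light band by band, so the lamp and screen amount to a mild, roughly neutral cast on top of the print color, and measuring a known neutral under the same set-up lets you divide that cast back out:

import numpy as np

print_colour = np.array([0.60, 0.45, 0.30])   # print colour under white light (invented)

# Two slightly different projection set-ups (invented casts):
lamp_a, screen_a = np.array([1.00, 0.97, 0.92]), np.array([0.98, 0.98, 0.95])
lamp_b, screen_b = np.array([0.99, 0.98, 0.95]), np.array([0.97, 0.99, 0.97])

# "print + carbon-arc lamp + cinema screen" behaves multiplicatively per band:
film_colour_a = print_colour * lamp_a * screen_a
film_colour_b = print_colour * lamp_b * screen_b

# A known neutral (equal in all bands) reveals each set-up's cast directly...
neutral = np.ones(3)
cast_a = neutral * lamp_a * screen_a
cast_b = neutral * lamp_b * screen_b

# ...and dividing the cast back out recovers the same print colour from either set-up:
print(np.allclose(film_colour_a / cast_a, print_colour))  # True
print(np.allclose(film_colour_b / cast_b, print_colour))  # True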

Now a scanner will give you:

print + LED-light + CCD sensor => print color (under white light) + LED-light color + sensor response curve

print color (under white light) + LED-light color + sensor response curve = color measurement (uncalibrated)
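In the same toy three-band terms (Python/numpy, every number invented), the uncalibrated scan is just the print color with the scanner’s own characteristics multiplied in:

import numpy as np

# Invented scanner characteristics; in reality these are unknown to us,
# which is exactly why the measurement below is "uncalibrated".
led_light_colour      = np.array([0.94, 1.00, 1.03])
sensor_response_curve = np.array([1.06, 1.00, 0.91])

print_colour_white_light = np.array([0.60, 0.45, 0.30])   # what we want (invented)

colour_measurement_uncalibrated = (print_colour_white_light
                                   * led_light_colour
                                   * sensor_response_curve)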

By calibrating the software on a known target, we get:

reference target + LED-light + CCD sensor => target color (under white light) + LED-light color + sensor response curve

target color (under white light) = reference color

reference color + LED-light color + sensor response curve = color measurement (uncalibrated)

color measurement (uncalibrated) + calibration = reference color

calibration = - LED-light color - sensor response curve
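The same derivation in the toy model (numbers invented): scanning a target whose colors are known independently lets you solve for the correction, and that correction works out to exactly the inverse of the LED-light color and sensor response combined, just as “calibration = - LED-light color - sensor response curve” says (subtraction in the formulas corresponds to division in this multiplicative sketch):

import numpy as np

# Invented scanner characteristics (unknown in practice):
led_light_colour      = np.array([0.94, 1.00, 1.03])
sensor_response_curve = np.array([1.06, 1.00, 0.91])

# Reference target whose colour is known independently of this scanner:
reference_colour = np.array([0.70, 0.55, 0.40])            # invented

# reference color + LED-light color + sensor response curve = color measurement (uncalibrated)
colour_measurement_uncalibrated = (reference_colour
                                   * led_light_colour
                                   * sensor_response_curve)

# color measurement (uncalibrated) + calibration = reference color
calibration = reference_colour / colour_measurement_uncalibrated

# calibration = - LED-light color - sensor response curve
print(np.allclose(calibration, 1.0 / (led_light_colour * sensor_response_curve)))  # True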

Now we do a calibrated measurement:

print + LED-light + CCD sensor + calibration => print color (under white light) + LED-light color + sensor response curve + calibration

print color (under white light) + LED-light color + sensor response curve + calibration = color measurement (calibrated)

print color (under white light) + LED-light color + sensor response curve + (- LED-light color - sensor response curve) = color measurement (calibrated)

print color (under white light) = color measurement (calibrated)

This is what I’m after!

color measurement (calibrated) = print color (under white light)
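And the last step as a sketch (same toy model, same invented numbers): apply the calibration derived from the reference target to a scan of the print, and the scanner’s light source and sensor drop out, leaving the print color under white light.

import numpy as np

led_light_colour      = np.array([0.94, 1.00, 1.03])            # invented, as before
sensor_response_curve = np.array([1.06, 1.00, 0.91])
calibration = 1.0 / (led_light_colour * sensor_response_curve)  # from the reference target

print_colour_white_light = np.array([0.60, 0.45, 0.30])         # invented; this is the goal

colour_measurement_uncalibrated = (print_colour_white_light
                                   * led_light_colour
                                   * sensor_response_curve)
colour_measurement_calibrated = colour_measurement_uncalibrated * calibration

print(np.allclose(colour_measurement_calibrated, print_colour_white_light))  # True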