Dre, awesome job!
Idea: it should be possible to regrade a whole film using just a trailer as the reference, right?
In theory, yes, but the quality of the final result will depend on the consistency of both the source and the reference. If the color grading of the source you want to regrade varies wildly, in the worst case you might need a reference for each shot. Garbage in, garbage out.
The algorithm recognizes the colors of a scene and ensures that, for each scene for which you have a reference, the colors accurately match that reference. For scenes without a reference, it estimates the colors based on the sources it does have available.

This is why the method works much better than color matching a single shot and then using a LUT built from that shot to correct other scenes. Such a LUT would only contain information about the colors present in that one shot: if, for example, another person walked into the frame, their colors could be off by quite a bit, especially if the colors are altered heavily. The self-learning algorithm instead uses the color information for that person from other shots, without adjusting the colors of the rest of the shot, since each pixel is graded individually. This also ensures color consistency between shots.
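To make the contrast with a single-shot LUT concrete, here is a minimal sketch in Python, not the actual algorithm: it fits one global affine color transform from pixel pairs pooled across every shot that has a reference, then applies it per pixel to any frame, including shots with no reference of their own. The function names, the use of numpy, and the assumption of float RGB frames in [0, 1] are mine, purely for illustration.

```python
# Minimal sketch (not the actual algorithm): learn one colour transform from
# all source/reference frame pairs, then apply it per pixel to any frame.
# Assumes frames are float32 RGB arrays in [0, 1]; numpy is the only dependency.
import numpy as np

def fit_color_transform(source_frames, reference_frames):
    """Least-squares fit of a 4x3 affine colour matrix from pooled pixel pairs."""
    src = np.concatenate([f.reshape(-1, 3) for f in source_frames])
    ref = np.concatenate([f.reshape(-1, 3) for f in reference_frames])
    src_aug = np.hstack([src, np.ones((src.shape[0], 1))])  # add bias column
    # Solve src_aug @ M ~= ref for M (shape 4x3)
    M, *_ = np.linalg.lstsq(src_aug, ref, rcond=None)
    return M

def apply_color_transform(frame, M):
    """Apply the fitted transform to every pixel of a frame."""
    h, w, _ = frame.shape
    px = frame.reshape(-1, 3)
    px_aug = np.hstack([px, np.ones((px.shape[0], 1))])
    out = px_aug @ M
    return np.clip(out, 0.0, 1.0).reshape(h, w, 3)

# Usage: pool every shot that appears in the trailer, fit once, then regrade
# frames from shots that have no trailer reference with the same transform.
# M = fit_color_transform(source_shots, trailer_shots)
# regraded = apply_color_transform(ungraded_frame, M)
```

Because the transform is fitted from all available references at once rather than baked from a single shot, it generalizes to colors that never appear together in any one frame, which is the same reason the per-pixel approach described above stays consistent across shots.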