
Color matching and prediction: color correction tool v1.3 released!

Replies
878
Author
Time

The color matching tool v1.2 has been released! Please send me a PM if you're interested.

Updates for v1.2 include:

An upgrade of the color matching algorithm. The stabilization parameter has been replaced by a smoothing parameter, and is now in the range from 0 to 1 (default 0.01).

Updates for v1.1 include:

The algorithm has been upgraded so that creating a color matching model now takes only a few seconds, even for a set of 4K frames.

The color correction tool v2.2 has been renamed the color matching tool v1.0 to distinguish it from the upcoming color balancing tool v1.0. The functionality of the color matching tool v1.0 is the same as that of the color correction tool v2.2.

Updates for v2.2 include:

An exported LUT should now also work in Adobe After Effects. In addition, LUT creation is now faster.

Updates for v2.1 include:

  1. A fast processing mode that allows for significantly faster model building. Creating a color correction model at the default settings (using 10 color spaces) now takes at most 2 minutes, independent of the frame resolution, while creating a single color space model takes at most 10 seconds.
  2. The single color space model is now available by setting the number of color spaces to 1.

Updates for v2.0 include:

  1. A new color matching algorithm, with improved stabilization.
  2. An option to increase the number of color spaces that the algorithm uses to match the source and reference (max 100). Increasing the number of color spaces leads to more accurate results, but is also slower.
  3. A new stabilization parameter with a range of 0 to 25. Use this option if the source is noisy, or if the reference colors are inconsistent. Increasing this value improves the quality of the output image. Usually a value of 0 to 5 will lead to a much improved result, without seriously affecting the quality of the color match. Higher values may result in slower convergence to the desired color palette, thus requiring a larger number of color spaces.

Here’s a post on thestarwarstrilogy.com:

http://www.thestarwarstrilogy.com/starwars/post/2015/12/14/Dr-Dres-Magical-Color-Matching-tool

A special thanks to thestarwarstrilogy.com for the post!

What can the tool do? It can accurately match the colors between a source and a reference. A color matching model can be constructed that can then be used to correct other frames.
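The tool's exact algorithm isn't published in this thread, but a later section notes that the single color space setting closely mimics histogram matching. Purely as a hedged illustration of that baseline (not DrDre's actual MATLAB code; all function names here are my own), per-channel histogram matching looks like this in Python with NumPy:

```python
import numpy as np

def match_channel(src, ref):
    """Remap one source channel so its value distribution matches the reference."""
    # Rank every source pixel, convert ranks to quantiles, then look up the
    # reference value at the same quantile (classic histogram specification).
    quantiles = np.argsort(np.argsort(src.ravel())) / (src.size - 1)
    ref_sorted = np.sort(ref.ravel())
    matched = np.interp(quantiles, np.linspace(0.0, 1.0, ref_sorted.size), ref_sorted)
    return matched.reshape(src.shape)

def match_histograms(src_img, ref_img):
    """Per-channel histogram matching for H x W x 3 float images."""
    return np.stack(
        [match_channel(src_img[..., c], ref_img[..., c]) for c in range(3)],
        axis=-1,
    )
```

A single global remap like this corresponds to the 1-color-space case; the multi-space modes discussed later in the thread repeat a similar matching step in many transformed color spaces.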

Here’s an example.

Team Negative1 35 mm scan of the 1997 Star Wars Special Edition:

Bluray:

Bluray matched to reference:

Other frame bluray:

Other frame bluray corrected using color matching model calibrated on the reference frame:

How does it work?

Here’s an amazing video tutorial made by originaltrilogy.com member williarob:

https://youtu.be/5OtaGT3A8Bs

When you've downloaded the file named ColorMatchv1_2_pkg.exe, execute it. You will be asked to install the MATLAB runtime environment. After the installation has finished, a new executable named ColorMatchv1_2.exe will be available. Run this file as administrator, or it will not work.

A few words of advice on using the GUI, which is otherwise pretty self-explanatory.

The process is as follows:

  1. Select a test image. A figure will open, showing the image. You can crop the frame with your cursor. If you don't want to crop the frame, close the figure window to continue.

  2. Select a reference image. A figure will open, showing the image. You can crop the frame with your cursor. If you don't want to crop the frame, close the figure window to continue.

  3. Build a color correction model. There are two processing options: fast processing mode (default) and normal processing mode. Fast processing mode is significantly faster, especially for high resolution frames, but could in theory be less accurate, although in practice this will rarely be the case. You can set the number of color spaces that will be used to build the color correction model (the minimum is 1 color space, which closely mimics histogram matching, albeit far more stable; the maximum is 100 color spaces). Increasing this number leads to more accurate results, but is also slower. Depending on the resolution/size of the images after cropping and your hardware, this may take 0-1 minutes in normal processing mode, and roughly 0-30 seconds in fast processing mode at the default settings on an Intel Core i5. A figure will open showing the test frame as it is being matched. With each iteration it should get closer to the reference. There is a smoothing parameter (range 0-1) that can be increased if the source is noisy, or if the reference colors are inconsistent. Normally those conditions could lead to unwanted artifacts in the output image; increasing this number prevents that and improves the quality of the output image. Usually a value of 0.01 to 0.1 will lead to a much improved result, without affecting the quality of the color match. Higher values may result in slower convergence, thus requiring more color spaces.

  4. Save the color correction model for later (optional).

  5. Import a color correction model (optional).

  6. Import any number of images, and color correct them with a color correction model you just built or imported. The images will be saved under the same names as the originals in a newly created directory named “Corrected”. Color correcting a frame may take anywhere between 5 and 20 seconds, depending on the resolution/size of the frame and, of course, your hardware.

  7. Export a 3D LUT (lookup table) for use in other software programs, such as DaVinci Resolve or Adobe After Effects.
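As an aside on step 7: a 3D LUT is just a cube of output colors indexed by quantized input RGB. The sketch below is hypothetical (it is not the tool's exporter, and it uses nearest-neighbor lookup rather than the trilinear interpolation that grading software typically applies), but it shows the basic mechanics of applying such a LUT:

```python
import numpy as np

def apply_lut(image, lut):
    """Apply a 3D LUT with nearest-neighbor lookup.

    image: float RGB array in [0, 1], shape (H, W, 3)
    lut:   cube of output colors, shape (n, n, n, 3)
    """
    n = lut.shape[0]
    # Snap each channel value to the nearest LUT grid index.
    idx = np.clip(np.rint(image * (n - 1)).astype(int), 0, n - 1)
    return lut[idx[..., 0], idx[..., 1], idx[..., 2]]

def identity_lut(n=17):
    """An n x n x n LUT that leaves colors unchanged (up to grid resolution)."""
    grid = np.linspace(0.0, 1.0, n)
    r, g, b = np.meshgrid(grid, grid, grid, indexing="ij")
    return np.stack([r, g, b], axis=-1)
```

The color correction model itself is what fills the cube with corrected colors; an identity cube, as built here, simply passes the image through.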

When building a color correction model you should consider the following:

  1. The model assumes the test and reference images (frames) are identical aside from the color. In other words, it's important that the images are cropped in the same way (to a reasonable degree). Incorrect cropping may lead to artifacts.

  2. When using a print or a low quality source as a reference, there may be color variations within the frame. For example, some parts may be darker or brighter than others. If you use the full frame for building a color correction model, it will try and fail to reconcile these differences, resulting in artifacts. The best approach is to select a consistent part of the frame, select the same part of the reference, and then build the color correction model.

  3. In theory you can match any source to a reference, but in practice there are limitations. You have to consider that a limited color depth may result in artifacts. Crushed dark colors or blown-out light colors are notoriously difficult to regrade, and they may also affect the color matching in other areas of the frame. In such cases increasing the stabilization parameter should reduce artifacts, but they are sometimes unavoidable.

  4. Although you could regrade an entire film based on a single reference frame, this will probably not work in practice, because one reel may have degraded differently from another, or one scene may have been color graded differently from another. In principle each frame might have to be matched individually, but usually a film is graded on a scene by scene basis, so a single reference will suffice for a particular scene.

Hope you enjoy the tool. If you use it for your projects, any acknowledgements will be appreciated. The same is true for any comments, criticism or suggestions you may have; in that case, write a post in this thread or send me a PM.

================================================================================================

Original start of the thread:

I decided to move the color matching discussion from the super resolution thread to a new thread. I wrote a script in MATLAB that matches the colors of the same frame between two different sources. The color correction can be transferred to other frames, although there's of course no guarantee these will be correct.

Here's an example for frame 8228 of Star Wars, where I matched Harmy's Despecialized Edition to the GOUT, GKar, and what appears to be a scan from a 35 mm print.

Here are the before/after comparisons:

http://screenshotcomparison.com/comparison/138410

http://screenshotcomparison.com/comparison/138411

http://screenshotcomparison.com/comparison/138412

Here are the comparisons to the reference frames:

http://screenshotcomparison.com/comparison/138432

http://screenshotcomparison.com/comparison/138433

http://screenshotcomparison.com/comparison/138434

As you can see, crushed whites and blacks are difficult to match correctly, but the overall agreement is very good.

This post has been edited.


Now for a more interesting comparison, I matched frame 8228 for Team Negative1's video sample to four different sources: GOUT, GKar, 35 mm print, and Harmy's Despecialized Edition. 

Here are the before/after comparisons:

http://screenshotcomparison.com/comparison/138435

http://screenshotcomparison.com/comparison/138436

http://screenshotcomparison.com/comparison/138437

http://screenshotcomparison.com/comparison/138438

Here are the comparisons to the reference frames:

http://screenshotcomparison.com/comparison/138439

http://screenshotcomparison.com/comparison/138440

http://screenshotcomparison.com/comparison/138441

http://screenshotcomparison.com/comparison/138442

Again the colors match very well. However, the quality of the color corrected Team Negative1 frame is not that good. There is not much detail. In some ways the LPP scan is comparable to the GOUT and GKar. I think this can have three reasons:

1) The difference in quality is due to compression

2) The quality of the scan is not that good

3) The print itself is in pretty poor shape


Here are three tests of calibrating the color model on a single frame (8228), and then using this model to correct other frames from reel 1.

Calibrated on the GOUT:

http://screenshotcomparison.com/comparison/138447

http://screenshotcomparison.com/comparison/138448

http://screenshotcomparison.com/comparison/138449

Calibrated on 35 mm frame:

http://screenshotcomparison.com/comparison/138451

http://screenshotcomparison.com/comparison/138452

http://screenshotcomparison.com/comparison/138453

Compression artifacts are horrible, but it works okay, although direct matching to a reference frame will always give better results. Personally I actually like the 35 mm frame versions a bit better than the GOUT, but that's a matter of taste. Although Darth Vader in the last frame seems a bit on the green side. ;-)



Seems like you used something like this: (http://originaltrilogy.com/forum/topic.cfm/Colour-matching-for-fan-edits/topic/7257/)

Because I implemented this method in Scilab and I get those grainy pictures as well.

http://screenshotcomparison.com/comparison/138463

However, if you look at the face, you'll see that the colors match the original more closely with the PDF method. But mine has more noise.

http://screenshotcomparison.com/comparison/138462



Very impressive! Can you also use this model to color correct other frames in the same scene? In other words if I give you a random frame, could you match that to the GOUT without a similar GOUT frame or do you always need a reference frame? The way I set up the model, it can in principle correct all frames based on a single frame, assuming the color degradation or color adjustments are very similar between frames. 

For example, I used the color adjustment model calibrated on frame 8228 of the GOUT to adjust another frame in the Tantive IV scene for the -1 video sample:

Before versus after:

http://screenshotcomparison.com/comparison/138468

Independent color correction vs GOUT:

http://screenshotcomparison.com/comparison/138469

This allows for the color correction of the whole film with only a limited number of reference frames.



Yes, you can correct other frames of a scene as well. The algorithm has an input image and a target image. The goal is to get the colors from the target image into the input image. However, the algorithm only provides good results, if the input image and target image look similar. So it would only make sense to match with frames of the same scene.


I see, but that does mean that the frame with the soldier's face could not be corrected based on the Darth Vader frame. The two frames are not similar in the sense that they depict different scenery.


Quite impressive guys!  DrDre's method seems to work well except for the reds.  They are darker/blacker than they should be.  If you can fix that, it would probably be closer to the PDF Color method which is currently even more impressive.


DrDre said:

I see, but that does mean that the frame with the soldier's face could not be corrected based on the Darth Vader frame. The two frames are not similar in the sense that they depict different scenery.

 I would think even your method wouldn't work well in some cases.  For example, if you took a desert scene and applied it to a snow scene, it probably wouldn't look right.


That depends. If the colors were adjusted or have faded in the same way in different scenes it would work just fine. The Tatooine scenes in -1 preservation have a very similar blue hue to them, so I expect the color corrected version would look very similar to the reference, even if the calibration was done based on a Tantive IV frame.

If the color was adjusted on a scene by scene basis or if the reels were stored under different conditions it would most likely not work very well.



I managed to produce the same picture as you, DrDre:

http://screenshotcomparison.com/comparison/138479

The algorithm matches the probability density function (PDF) of each image. If you do this in just the original color space, the colors will hardly match. So the idea of the algorithm is to transform the color space and match the PDF again. The transformation of the color space is done by multiplying the picture by a rotation matrix. The algorithm does this N times. In this example I calculated 50 rotation matrices and used all of them to create the image in post #5. For the image in this post I still calculated 50 rotation matrices, but used just the first 49 of them to do the color transfer.
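The rotate-match-rotate-back procedure described above resembles the iterative distribution transfer of Pitié et al. Here is a minimal sketch of the idea, my own simplified Python/NumPy rendering rather than the poster's Scilab script:

```python
import numpy as np

def random_rotation(rng):
    """Random 3x3 orthogonal matrix via QR decomposition of a Gaussian matrix."""
    q, r = np.linalg.qr(rng.normal(size=(3, 3)))
    return q * np.sign(np.diag(r))  # sign fix for a well-spread distribution

def match_1d(src, ref):
    """1-D histogram specification: give src the value distribution of ref."""
    quantiles = np.argsort(np.argsort(src)) / (src.size - 1)
    return np.interp(quantiles, np.linspace(0.0, 1.0, ref.size), np.sort(ref))

def pdf_transfer(src, ref, n_rotations=20, seed=0):
    """Iterative distribution transfer for (N, 3) pixel arrays.

    Repeatedly rotate both color clouds, match each 1-D marginal in the
    rotated frame, and rotate back; the source color PDF converges toward
    the reference color PDF.
    """
    rng = np.random.default_rng(seed)
    out = src.astype(float).copy()
    for _ in range(n_rotations):
        rot = random_rotation(rng)
        s = out @ rot.T                 # rotate source colors
        t = ref @ rot.T                 # rotate reference colors
        for axis in range(3):           # match the three 1-D marginals
            s[:, axis] = match_1d(s[:, axis], t[:, axis])
        out = s @ rot                   # rotate back
    return out
```

With a single identity rotation this collapses to plain per-channel histogram matching; the extra rotations are what let the joint color distribution, not just the marginals, converge.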



It would be interesting to try using the Tantive IV scene and others from Mike Verta's videos and applying it to see how they look.  :-)


Yep, that looks pretty much exact. Would be happier if I could match yours, though. ;-)


You can always try the MATLAB script from the thread I linked.


I have to correct myself though. I said that I used 49 rotations to create the image that looks exactly like yours. It turns out that's wrong. My algorithm just used 1 rotation and matched the PDF. This means that your algorithm probably just matches the PDFs as well. If you want to get closer to the target colors, you have to match the PDFs in different color spaces as well. This is probably what makes the PDF algorithm look closer than your algorithm.


Anyway, I'm going to modify my script, and then I will post more comparisons. Probably tomorrow.


Here is GOUT, GKar, 35mm compared to Harmy matched to GOUT, GKar, 35mm using PDF matching with 50 rotations.

http://screenshotcomparison.com/comparison/138463

http://screenshotcomparison.com/comparison/138550

http://screenshotcomparison.com/comparison/138548

DrDre compared to PDF 

http://screenshotcomparison.com/comparison/138462

http://screenshotcomparison.com/comparison/138551

http://screenshotcomparison.com/comparison/138549

Here is GOUT compared to TeamNegative matched to Gout

http://screenshotcomparison.com/comparison/138552

DrDre compared to PDF

http://screenshotcomparison.com/comparison/138553

Here are frames of TN matched to frame 8228 of the GOUT, compared to DrDre's result:

http://screenshotcomparison.com/comparison/138556

http://screenshotcomparison.com/comparison/138557

Then Scilab crashed and I forgot to save the rotation matrices, which means I have to recalculate them. I will try it with just 20 matrices this time, because I want to know if fewer matrices are good enough; the calculation of 50 matrices takes about 3 hours, compared to about 45 minutes for 20.

But you can already see that the PDF algorithm does worse than DrDre's algorithm when it comes to colouring frames with a different reference frame. With PDF you are limited to frames of the same shot.


I've slightly optimized my color correction script, but the PDF algorithm is still better at matching frames of the same shot.

However, the real strength of my current method is that it can color correct frames from completely different scenes. 

Again the model is calibrated on frame 8228, where -1 video sample is matched to the GOUT:

http://screenshotcomparison.com/comparison/138571

http://screenshotcomparison.com/comparison/138572

Subsequently, the color correction model is used to correct a frame, which depicts a different scene with different colors.

Before versus after:

http://screenshotcomparison.com/comparison/138573

GOUT versus independent color correction:

http://screenshotcomparison.com/comparison/138574


That's really impressive and quite useful. The PDF transfer generally doesn't provide good results if the target frame is of lower resolution and/or compressed. In that case your script could just use another frame and provide better results. Your method also seems better suited to cases like The Hobbit edits, where people try to match the colors of each film.


If we apply the color correction model directly to the Tantive IV scenes of the bluray, it actually works pretty well. The bluray color adjustments are fairly consistent, except for an odd green hue in the opening scenes of the film.

Here are some examples of color corrections done, based solely on frame 8228 of the GOUT:

http://screenshotcomparison.com/comparison/138583

http://screenshotcomparison.com/comparison/138584

and the same, but now based on frame 8228 of the 35 mm print (brightness slightly adjusted):

http://screenshotcomparison.com/comparison/138587

http://screenshotcomparison.com/comparison/138588


Thanks! 

I'm entirely unfamiliar with the methods you guys are describing. How much more difficult will it be to eventually process longer scenes? How long is the rendering for a frame?


For the GOUT the processing is quite fast, because the colors are pretty consistent for the entire movie. So after a color model has been built, it only takes a few seconds per frame. However, since I'm doing it in MATLAB, I have to convert the raw avi to image files, import them into MATLAB, adjust the colors, and then export. Subsequently the loose images are converted back to avi.


towne32, if you can provide me with some screenshots, I can do it for you using the PDF method.

I'm currently using Scilab, which is free software that provides many of the same features as MATLAB. But I plan to write an Avisynth plugin that implements the PDF algorithm.
