
Automated color grading and color matching with a Machine Learning Algorithm

Author
Time
 (Edited)

How many of you doing manual color corrections have ever wished you could speed up the process? What do you do when you only have color references for part of the film you are correcting? How many of you wish you could do color corrections like many of the talented folk here on these boards, but lack the talent or the desire to spend time doing it?

While working on my little Technicolor scanning project, I realized I’m missing a complete reel of references. Since I don’t have the talent or the patience to do all the manual labour, I decided to do what I do best: build an algorithm to do it for me. I’ve recently been preoccupied with Machine Learning in my normal line of work, and this gave me an idea:

What if I could train an algorithm to do what another professional or fan has done, using a number of reference frames, and then repeat the process on a completely different frame from a different scene?

Theoretically it should be possible. In fact, it is! 😃
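To give a feel for the general idea, here is a minimal sketch of one way such a "learn from reference pairs" scheme could work: fit a polynomial colour transform to pairs of (bluray frame, hand-graded frame) with ordinary least squares, then apply it to frames that have no reference. This is my own illustration, not DrDre's actual algorithm, and all function names are made up for the example:

```python
import numpy as np

# Hypothetical sketch: learn one polynomial colour transform from matched
# (source, graded) frame pairs, then apply it to unreferenced frames.

def _features(rgb):
    # Quadratic polynomial features of each pixel's RGB values.
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.stack([np.ones(len(rgb)), r, g, b,
                     r * g, r * b, g * b,
                     r ** 2, g ** 2, b ** 2], axis=1)

def fit_color_transform(sources, targets):
    """sources/targets: matched lists of (H, W, 3) float arrays in [0, 1],
    where each target is the hand-graded version of its source."""
    src = np.concatenate([s.reshape(-1, 3) for s in sources])
    tgt = np.concatenate([t.reshape(-1, 3) for t in targets])
    # One least-squares fit per output channel, solved jointly.
    coeffs, *_ = np.linalg.lstsq(_features(src), tgt, rcond=None)
    return coeffs

def apply_color_transform(frame, coeffs):
    out = _features(frame.reshape(-1, 3)) @ coeffs
    return np.clip(out, 0.0, 1.0).reshape(frame.shape)
```

A real model would of course need to be far richer than a single global polynomial, but even this toy version can recover a consistent grade across frames it has never seen.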

As proof here’s a Star Wars bluray regrade with a self-learning algorithm, based on eight example regrades for different scenes (not including this one) provided by NeverarGreat on his thread:

Comparison:

http://screenshotcomparison.com/comparison/212890

Here are a couple more regraded shots by the self-learning algorithm, based on references provided by NeverarGreat:

=========================================================================================

Original start of the thread:

As a first test I took three of NeverarGreat’s regrades for training my algorithm:

Next I allowed my algorithm to regrade another frame from the bluray:

Of course I was hoping it would look something like NeverarGreat’s result:

Here’s what the algorithm came up with:

Not bad for a first try…😃

Of course there will be limitations and such, but I think I’ve embarked on a new adventure…

Author
Time

If I understand correctly, it is like your color matching model but with an averaging effect?

Do you think it could really work to correct parts of a film for which multiple prints are used? And even with one print, if scenes are scanned with different settings?

Author
Time
 (Edited)

I will answer your question only once, if you don’t mind…😉 No, it’s a bit more complicated than that. This is still a simple version based on three frames, but ultimately the model should be able to recognize different types of scenes, and grade them differently, if it is trained to do so.
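One simple way to picture "recognizing different types of scenes and grading them differently" (purely my own sketch, not DrDre's model) is to cluster the reference frames by overall colour, fit a separate correction per cluster, and grade each new frame with the correction of the cluster it most resembles. The names and the per-channel-gain correction below are illustrative assumptions:

```python
import numpy as np

# Hypothetical sketch: group reference frames into two "scene types" by
# average colour, fit one per-channel gain per type, and grade a new frame
# with the gains of the type it most resembles.

def frame_descriptor(frame):
    # Mean RGB of the frame acts as a crude scene fingerprint.
    return frame.reshape(-1, 3).mean(axis=0)

def fit_scene_grades(sources, targets, iters=10):
    desc = np.stack([frame_descriptor(s) for s in sources])
    # Two-group k-means on the descriptors, seeded with the darkest and
    # brightest reference frames.
    order = np.argsort(desc.sum(axis=1))
    centers = desc[[order[0], order[-1]]].copy()
    for _ in range(iters):
        labels = np.argmin(((desc[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for k in range(2):
            if np.any(labels == k):
                centers[k] = desc[labels == k].mean(axis=0)
    # Least-squares per-channel gain for each group.
    gains = np.ones((2, 3))
    for k in range(2):
        idx = np.flatnonzero(labels == k)
        if len(idx):
            s = np.concatenate([sources[i].reshape(-1, 3) for i in idx])
            t = np.concatenate([targets[i].reshape(-1, 3) for i in idx])
            gains[k] = (s * t).sum(axis=0) / np.maximum((s * s).sum(axis=0), 1e-8)
    return centers, gains

def grade_frame(frame, centers, gains):
    k = int(np.argmin(((frame_descriptor(frame) - centers) ** 2).sum(-1)))
    return np.clip(frame * gains[k], 0.0, 1.0)
```

With more clusters and a richer per-cluster transform, the same structure lets dark interiors and bright exteriors receive genuinely different grades.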

Author
Time
 (Edited)

Here’s another example for a frame, where I don’t have a NeverarGreat reference.

Bluray:

Bluray regraded per the NeverarGreat style by the algorithm:

Author
Time
 (Edited)

…and another:

Bluray:

Bluray regraded per the NeverarGreat style by the algorithm:

It’s interesting to note that the algorithm has learned from NeverarGreat how to reduce the magenta staining in the bluray color grading.

Author
Time

I’d love to see this fix The Phantom Menace HDTV and Attack of the Clones Bluray.

Preferred Saga:
1/2: Hal9000
3: L8wrtr
4/5: Adywan
6-9: Hal9000

Author
Time

‘My work here is done’ now has a new meaning 😉

An excellent start in any case. It will be interesting to see how it behaves with more material, especially since it will have to interpret skin tones seen under a variety of locations and lighting conditions. Heck, even I have trouble with that.

You probably don’t recognize me because of the red arm.
Episode 9 Rewrite, The Starlight Project (Released!) and ANH Technicolor Project (Released!)

Author
Time

Well, it’s been great Neverar, your services are no longer required.

Also, this is mind blowing!

-G

Author
Time

Fantastic!

Assimilate THIS!

Author
Time
 (Edited)

Fascinating stuff! I spent a little time grappling with machine learning in college, while working on a research project designed to increase the efficacy and efficiency of early glaucoma detection… Machines can produce some amazing results when given the right parameters!

Author
Time

DrDre said:

I will answer your question only once, if you don’t mind…😉 No, it’s a bit more complicated than that. This is still a simple version based on three frames, but ultimately the model should be able to recognize different types of scenes, and grade them differently, if it is trained to do so.

Firstly - you’re a genius! You continue to amaze me the way you innovate to automate this process.

In theory, using your machine learning algorithm, you could take any set of sources and apply it to quickly reproduce that version of the film on the bluray (or any other video source)?

For example, if you wanted to create a GOUT timed version of the bluray you could apply GOUT frames and it would reproduce that timing? (Just using as an example, not a big fan of the GOUT timing)

Also, is it the case that the more frames you add as references, the more accurately the algorithm can reproduce that source? E.g. using more NeverarGreat frames as references would more accurately reproduce his colour timing through the whole film?

Finally, would (in theory) this work for contrast and brightness?

Love your work!

Author
Time

Stotchy said:

DrDre said:

I will answer your question only once, if you don’t mind…😉 No, it’s a bit more complicated than that. This is still a simple version based on three frames, but ultimately the model should be able to recognize different types of scenes, and grade them differently, if it is trained to do so.

Firstly - you’re a genius! You continue to amaze me the way you innovate to automate this process.

In theory, using your machine learning algorithm, you could take any set of sources and apply it to quickly reproduce that version of the film on the bluray (or any other video source)?

For example, if you wanted to create a GOUT timed version of the bluray you could apply GOUT frames and it would reproduce that timing? (Just using as an example, not a big fan of the GOUT timing)

Also, is it the case that the more frames you add as references, the more accurately the algorithm can reproduce that source? E.g. using more NeverarGreat frames as references would more accurately reproduce his colour timing through the whole film?

Finally, would (in theory) this work for contrast and brightness?

Love your work!

Thanks! 😃 Yes, in principle you could recreate a GOUT timing by using a set of GOUT timed references, and then have the algorithm reproduce that timing on the rest of the film. Like you said, the more references you have, the better the results are likely to be. It would also reproduce the contrast and brightness adjustments.
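The point about contrast and brightness coming along for free can be shown with a very small sketch of my own (not DrDre's code, and the function names are invented): a per-channel value-to-value curve learned from reference pairs captures brightness and contrast automatically, since both are just particular shapes of that curve.

```python
import numpy as np

# Hypothetical sketch: learn one lookup curve per RGB channel by averaging
# the target values that fall into each source-value bin. Brightness and
# contrast changes in the references show up as the shape of the curve.

def fit_channel_curves(source, target, n_bins=32):
    """source/target: matched (H, W, 3) float arrays in [0, 1]."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    mids = 0.5 * (edges[:-1] + edges[1:])
    curves = np.zeros((3, n_bins))
    for c in range(3):
        s = source[..., c].ravel()
        t = target[..., c].ravel()
        idx = np.clip(np.digitize(s, edges) - 1, 0, n_bins - 1)
        for b in range(n_bins):
            hit = idx == b
            # Fall back to the identity curve for empty bins.
            curves[c, b] = t[hit].mean() if hit.any() else mids[b]
    return curves

def apply_channel_curves(frame, curves):
    n_bins = curves.shape[1]
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    out = np.empty_like(frame)
    for c in range(3):
        idx = np.clip(np.digitize(frame[..., c], edges) - 1, 0, n_bins - 1)
        out[..., c] = curves[c][idx]
    return out
```

If the references were lifted and given more contrast, the learned curves simply come out steeper and shifted, and every new frame inherits the same adjustment.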

Author
Time

I’m curious what would happen if you took the Lord of the Rings colors and placed them on The Hobbit’s colors. Would the color palette and contrast change, or would it just be the color palette? Idk what I’m talking about at this point, I’m just interested to see what would happen.

Author
Time
 (Edited)

Deloreanhunter12 said:

I’m curious what would happen if you took the Lord of the Rings colors and placed them on The Hobbit’s colors. Would the color palette and contrast change, or would it just be the color palette? Idk what I’m talking about at this point, I’m just interested to see what would happen.

The algorithm needs to be trained first on some references to learn how you would like to see the film graded. If you have 35mm frames, for example, you can use a couple of them as references to teach the algorithm that you would like the film to closely resemble the 35mm print, even for shots where you don’t have references.

For someone doing manual color grading, the algorithm could be used to provide a better starting point for a final color grading. You first teach the algorithm how to color grade like you, based on a number of manual color adjustments you provide. You subsequently let the algorithm color grade the other shots, based on what it learned from your work. Once the algorithm has finished color grading the rest of the film, you can make final adjustments as you see fit.

Author
Time

NeverarGreat said:

‘My work here is done’ now has a new meaning 😉

An excellent start in any case. It will be interesting to see how it behaves with more material, especially since it will have to interpret skin tones seen under a variety of locations and lighting conditions. Heck, even I have trouble with that.

Is there any scene that you’ve found challenging? It would actually have some value to you then, and all you have to do is show the frame you want predicted.

Author
Time

The biggest challenge is just keeping the tones consistent across the entire film, while at the same time avoiding the ‘samey’ look of an overaggressive blanket color grade. It’s a fine line, and from the example frames above it looks like this monotone look could be a problem when using this software. With the 3 frames that Dre chose, you can see the variation in the skin tones from location to location. Yet with the two examples, the color of Obi-Wan’s face looks the same in the desert as it does in the Death Star.

You probably don’t recognize me because of the red arm.
Episode 9 Rewrite, The Starlight Project (Released!) and ANH Technicolor Project (Released!)

Author
Time
 (Edited)

DrDre said:
You first teach the algorithm how to color grade like you, based on a number of manual color adjustments you provide.

Can’t you use the changes that your color-correction tool makes to teach the algorithm? Presumably, the internal workings of the former could be expressed as a series of adjustments.

EDIT: Is there a risk of color fluctuations within a shot? That’s a problem that’s come up with some of the other automated solutions.

Author
Time

NeverarGreat said:

The biggest challenge is just keeping the tones consistent across the entire film, while at the same time avoiding the ‘samey’ look of an overaggressive blanket color grade. It’s a fine line, and from the example frames above it looks like this monotone look could be a problem when using this software. With the 3 frames that Dre chose, you can see the variation in the skin tones from location to location. Yet with the two examples, the color of Obi-Wan’s face looks the same in the desert as it does in the Death Star.

In reality you would use at least 10 reference frames in varying conditions to train the algorithm. So, if properly trained, the algorithm should not have a problem with skin tones under varying lighting conditions.

Author
Time
 (Edited)

Dr. Dre… would you mind showing me how the blu ray would regrade in his style for this shot? This whole scene kills me trying to figure out saturation, gamma, everything about it… I would love to see how NeverarBot pulls it off. Don’t use this image BTW, this is just a reference to get the frame, as I don’t have the blu ray handy… I think this is from my horrible attempt at adjusting Towne32’s 2.7.

Author
Time

I actually don’t have the NeverarBot available right now, but I will get back to it on Friday.

Author
Time

The NeverarBot has been upgraded. Here’s the Obi-Wan shot Dreamaster requested.

Bluray:

Bluray regraded by the NeverarBot: