
Automated color grading and color matching with a Machine Learning Algorithm — Page 2

Author
Time

Looks great. Neverar can retire early.

As with much of your software, Dre, when I look at this I think “oh, this has commercial potential, why is he wasting his skills on us?”. Which got me thinking, what are the potential legal ramifications of machine learning algorithms like this? If Neverargreat was a commercial entity, and you built an algorithm that used examples of his to learn from, but didn’t actually include any of his work in the software, does he still have any stake in what that ‘bot’ produces? Presumably not, in cases where an AI is trained from the work of numerous artists. But if you really are training a bot to be a Neverar that doesn’t need food/water/sleep, how does that work out?

Author
Time
 (Edited)

Here are seven more regrades by the NeverarBot.

Bluray:

Bluray regraded by the NeverarBot:

Bluray:

Bluray regraded by the NeverarBot:

Bluray:

Bluray regraded by the NeverarBot:

Bluray:

Bluray regraded by the NeverarBot:

Bluray:

Bluray regraded by the NeverarBot:

Bluray:

Bluray regraded by the NeverarBot:

Bluray:

Bluray regraded by the NeverarBot:

Author
Time

towne32 said:

Looks great. Neverar can retire early.

As with much of your software, Dre, when I look at this I think “oh, this has commercial potential, why is he wasting his skills on us?”. Which got me thinking, what are the potential legal ramifications of machine learning algorithms like this? If Neverargreat was a commercial entity, and you built an algorithm that used examples of his to learn from, but didn’t actually include any of his work in the software, does he still have any stake in what that ‘bot’ produces? Presumably not, in cases where an AI is trained from the work of numerous artists. But if you really are training a bot to be a Neverar that doesn’t need food/water/sleep, how does that work out?

Well, in a way the algorithm is attempting to distill some of Neverar’s preferred colors from the reference frames. I suspect you cannot simply use someone else’s work as a reference in a commercial setting, as you are then profiting from his hard work. It’s a smart algorithm (if I do say so myself), but garbage in, garbage out. If it wasn’t for Neverar’s amazing talent, the algorithm would have nothing to show for it.
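To give a rough idea of what I mean by “distilling” a preference (a toy sketch for illustration only, not the actual algorithm), you could measure how the regrade shifts the average color of each reference frame relative to the bluray:

```python
# Toy sketch (illustration only, not the actual algorithm): estimate the
# average per-channel shift the colorist applied across all reference pairs.
import numpy as np

def average_grade_shift(bluray_frames, regraded_frames):
    """Mean RGB shift from bluray to regrade, over all reference pairs.

    Frames are float arrays of shape (H, W, 3) in the 0..1 range.
    """
    shifts = [ref.reshape(-1, 3).mean(axis=0) - src.reshape(-1, 3).mean(axis=0)
              for src, ref in zip(bluray_frames, regraded_frames)]
    # e.g. a negative value in the blue channel means the colorist tends to
    # pull blue out of the image (removing the bluray's blue cast)
    return np.mean(shifts, axis=0)
```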

Author
Time
 (Edited)

How long does the algorithm take on each frame?

Also, can we get a lightsaber shot?

Preferred Saga:
1/2: Hal9000
3: L8wrtr
4/5: Adywan
6-9: Hal9000

Author
Time

Depending on a number of parameters, it can take anywhere from a few minutes to much longer per frame. These were all done in a few minutes.

I could do a lightsaber shot, but since NeverarGreat rotoscoped some of the lightsaber effects, I suspect it won’t be able to reproduce those effects. It’s a color grading tool, not a restoration tool.

Author
Time
 (Edited)

Wouldn’t this need some sort of feedback loop / evaluation in order to ‘learn’? Or maybe that is a distinction between machine learning and neural networks? But it just seems common sense to me that this thing will only do a good job on color matching for new unknown frames if it has some way to evaluate the choices it made. (But then again, sometimes common sense is wrong about these things.)

Author
Time

dahmage said:

Wouldn’t this need some sort of feedback loop / evaluation in order to ‘learn’? Or maybe that is a distinction between machine learning and neural networks? But it just seems common sense to me that this thing will only do a good job on color matching for new unknown frames if it has some way to evaluate the choices it made. (But then again, sometimes common sense is wrong about these things.)

I’m not much of a fan of neural networks personally. Keep it simple and stupid is my credo. The more complicated you make these things, the better they work in some situations, while screwing up stuff that should be straightforward.
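To illustrate the keep-it-simple spirit (again, just a toy sketch, not the actual NeverarBot code): you can get surprisingly far by fitting one low-order curve per channel from the paired reference frames, with no neural network and no feedback loop; the “learning” is plain curve fitting:

```python
# Keep-it-simple sketch (not the actual NeverarBot): fit one polynomial per
# RGB channel that maps bluray pixel values to the regraded values, using the
# paired reference frames as training data.
import numpy as np

def fit_channel_curves(bluray_frames, regraded_frames, degree=3):
    """Return one set of polynomial coefficients per RGB channel."""
    src = np.concatenate([f.reshape(-1, 3) for f in bluray_frames])
    dst = np.concatenate([f.reshape(-1, 3) for f in regraded_frames])
    return [np.polyfit(src[:, c], dst[:, c], degree) for c in range(3)]

def apply_curves(frame, curves):
    """Regrade a new frame by pushing each channel through its fitted curve."""
    out = np.stack([np.polyval(curves[c], frame[..., c]) for c in range(3)], axis=-1)
    return np.clip(out, 0.0, 1.0)
```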

Author
Time
 (Edited)

Sooo let me get this straight. Wait, let me step back. This seems awesome. Okay, that’s done. But I’m not sure what this is exactly.

What exactly do you feed the bot at the beginning? The corrected frames plus equivalent original frames, right? Any guidelines, like make sure the primaries are well-represented? Number of frames? Mix interior/exterior lighting?

Then step 2 machine learning happens, which I’m perfectly happy to leave at: ???

Then for step 3, here’s where this crap always falls apart for me. It seems to produce, what’s the word… damn good results. Are there exceptions? Does it fall down on filtered shots? Grain, dirt, damage, misalignment? And if it doesn’t, umm, I dunno, I was just wondering when we could take a turn with this shiny new toy of yours?

Project Threepio (Star Wars OOT subtitles)

Author
Time
 (Edited)

CatBus said:

Sooo let me get this straight. Wait, let me step back. This seems awesome. Okay, that’s done. But I’m not sure what this is exactly.

What exactly do you feed the bot at the beginning? The corrected frames plus equivalent original frames, right? Any guidelines, like make sure the primaries are well-represented? Number of frames? Mix interior/exterior lighting?

Yes, you feed the model pairs of corrected and original (uncorrected) frames. The more frames you have, the better the results in general.
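In case it helps to see the shape of the input, here is a minimal sketch of how such pairs could be assembled (the folder layout and file names are just assumptions for illustration, not how the tool actually reads its data):

```python
# Hypothetical input layout: one folder of untouched bluray frames, one folder
# of the corrected versions, matched by file name.
from pathlib import Path
import numpy as np
from PIL import Image

def load_pairs(bluray_dir, corrected_dir):
    """Return a list of (bluray, corrected) float RGB arrays in the 0..1 range."""
    pairs = []
    for src_path in sorted(Path(bluray_dir).glob("*.png")):
        ref_path = Path(corrected_dir) / src_path.name
        if not ref_path.exists():
            continue  # only frames present in both folders form a usable pair
        src = np.asarray(Image.open(src_path).convert("RGB"), dtype=np.float32) / 255.0
        ref = np.asarray(Image.open(ref_path).convert("RGB"), dtype=np.float32) / 255.0
        if src.shape == ref.shape:  # the pair must line up pixel for pixel
            pairs.append((src, ref))
    return pairs
```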

Then step 2 machine learning happens, which I’m perfectly happy to leave at: ???

Then for step 3, here’s where this crap always falls apart for me. It seems to produce, what’s the word… damn good results. Are there exceptions? Does it fall down on filtered shots? Grain, dirt, damage, misalignment? And if it doesn’t, umm, I dunno, I was just wondering when we could take a turn with this shiny new toy of yours?

One of the major challenges is getting the algorithm to recognize different scenes from NeverarGreat’s color timing. Overall he removed the blue cast from the bluray, but there are exceptions. For example, in the Death Star conference room scene the walls appear more blue in his regrade.

Bluray:

NeverarGreat regrade:

So, if I let the algorithm regrade a shot from the same scene, I want to see a very similar color timing, with the same walls, same skin tones, same colored clothes, etc., without having to tell the algorithm which scene it’s regrading.

Bluray:

Bluray regraded by NeverarBot:

Bluray:

Bluray regraded by NeverarBot:

Another example is the Leia shot aboard the Tantive IV, where NeverarGreat added a greenish hue to the walls.

Bluray:

NeverarGreat regrade:

Again, if I let the algorithm regrade a shot from this scene, I want to see a very similar color timing, with the same greenish walls, Leia’s red lips, etc.

Bluray:

Bluray regraded by NeverarBot:

Bluray:

Bluray regraded by NeverarBot:
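As a rough illustration of the scene-recognition part (a simplified toy example, not the actual code), one way to pick the right reference without being told the scene is to compare coarse color histograms and borrow the grade of the closest reference frame:

```python
# Toy scene matching (illustration only): describe each frame by a coarse 3D
# RGB histogram and find the reference frame with the most similar histogram.
import numpy as np

def coarse_histogram(frame, bins=8):
    """Normalized 3D RGB histogram of a float frame (H, W, 3), as a flat vector."""
    hist, _ = np.histogramdd(frame.reshape(-1, 3), bins=bins, range=[(0.0, 1.0)] * 3)
    return (hist / hist.sum()).ravel()

def closest_reference(frame, reference_frames):
    """Index of the reference frame whose histogram is nearest to this shot."""
    target = coarse_histogram(frame)
    distances = [np.linalg.norm(target - coarse_histogram(ref)) for ref in reference_frames]
    return int(np.argmin(distances))
```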

Author
Time

DrDre said:

Depending on a number of parameters, it can take anywhere from a few minutes to much longer per frame. These were all done in a few minutes.

So, would it be possible at some point to have the bot crawl through all of the shots in a scene (or film, or selections of scenes you deem it fit to regrade) and make the correction it’s trained to make. Then, produce LUTs based on the changes made (assuming LUTs could vary slightly between shots in the same scene), and apply that to the shot as a whole? Just trying to think of how a whole film would be graded with a few minutes per frame. Cloud computing/supercomputers of course would get through it quickly.

Author
Time

towne32 said:

DrDre said:

Depending on a number of parameters, it can take anywhere from a few minutes to much longer per frame. These were all done in a few minutes.

So, would it be possible at some point to have the bot crawl through all of the shots in a scene (or film, or selections of scenes you deem it fit to regrade) and make the correction it’s trained to make. Then, produce LUTs based on the changes made (assuming LUTs could vary slightly between shots in the same scene), and apply that to the shot as a whole? Just trying to think of how a whole film would be graded with a few minutes per frame. Cloud computing/supercomputers of course would get through it quickly.

Exactly! For scenes for which it doesn’t have a reference, it will make a best guess based on the preferences it has learned. As we’ve seen earlier, this often gives good results, although in some cases it may need additional references, if a scene was graded very differently on the bluray compared to your own preference.
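Conceptually, something like this could work for the LUT part (a rough sketch with assumed names, for illustration only): sample whatever per-pixel transform has been learned on a regular RGB grid and write it out as a standard .cube 3D LUT, so an entire shot can then be graded in any software that loads LUTs.

```python
# Rough sketch: bake an arbitrary learned transform into a .cube 3D LUT.
# `transform` is assumed to map an (N, 3) float RGB array in 0..1 to the
# regraded (N, 3) array, e.g. the per-channel curve fit sketched earlier.
import numpy as np

def write_cube_lut(transform, path, size=33):
    """Sample the transform on a size^3 grid and write it as a .cube file."""
    grid = np.linspace(0.0, 1.0, size)
    with open(path, "w") as f:
        f.write(f"LUT_3D_SIZE {size}\n")
        # .cube convention: red varies fastest, then green, then blue
        for b in grid:
            for g in grid:
                for r in grid:
                    out = transform(np.array([[r, g, b]], dtype=np.float32))[0]
                    f.write(f"{out[0]:.6f} {out[1]:.6f} {out[2]:.6f}\n")
```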

Author
Time

On that note, how does the sunsets scene look?

Author
Time

Here are the double sunset and a lightsaber shot, both of which are less than ideal shots 😉.

Bluray:

Bluray regraded by the NeverarBot:

Bluray:

Bluray regraded by the NeverarBot:

Author
Time

DrDre said:

Here are the double sunset and a lightsaber shot, both of which are less than ideal shots 😉.

Bluray:

Bluray regraded by the NeverarBot:

I think those regraded colors are the reason Neverargreat put the 77 sabers back in though, no?

Author
Time

Well those shots look great. Has there been any shot you’ve tried so far, where things go really wrong?

Author
Time
 (Edited)

towne32 said:

Well those shots look great. Has there been any shot you’ve tried so far, where things go really wrong?

Not really… they’re all clear improvements over the bluray, even if some might require a little tweaking. These were created with just eight of Neverar’s regrade examples. The great thing is that, depending on your tastes, you can create a HarmyBot, kk650-Bot, DreamasterBot, SwazzyBot, or perhaps even a Towne32Bot 😉. Alternatively, you could take some Technicolor print scans, or another official color grading, and train the algorithm to reproduce those colors.

Author
Time
 (Edited)

This is an interesting shot, because Darth Vader has a very strong blue cast in the bluray frame.

Bluray:

Bluray regraded by the NeverarBot:

Author
Time

Very cool stuff Dr Dre

Author
Time
 (Edited)

Thanks! 😃 Here’s one more shot before I go to sleep…

Bluray:

Bluray regraded by the NeverarBot: