
Colour matching for fan edits

Author
Time
I see a lot of people are now pulling in various sources for their fan edits.
People are pulling footage from Blade Runner, Dune and other films, plus stuff they have shot themselves, and cobbling it all together, or mixing SE and LD footage together, etc.

Colour matching can be a pain in the arse, so I just thought I'd throw in a method that can be used pretty much automatically, even if the scenes are very different.
This would allow you (for example) to take the colour scheme from a room in Blade Runner and match it to a room from Revenge of the Sith, or colour match the entire SE to match one of the LD releases.

The crux of the method is to get the probability density function from one image and apply it to the other. Obviously a one-dimensional pdf isn't going to give a very good match between two very different images, but if you feed the result back into the equation (i.e. use it iteratively) it moves towards convergence, and you can in fact get very close to an exact match. It isn't very computationally intensive and can achieve amazing results.
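
To give a feel for the one-dimensional building block (this is just my own rough illustration, not the code from the papers), matching the grey-level histogram of one frame to another is nothing more than a lookup table built from the two cumulative distributions. The file names here are placeholders:

% rough 1D illustration (mine, not from the paper): remap the grey levels of
% frame A so its luminance histogram approximates that of frame B
A = double(imread('frameA.png'));   % placeholder greyscale source frame (0..255)
B = double(imread('frameB.png'));   % placeholder greyscale reference frame

levels = 0:255;
hA = hist(A(:), levels);            % source histogram
hB = hist(B(:), levels);            % reference histogram

CA = cumsum(hA)/sum(hA);            % source cumulative distribution
CB = cumsum(hB)/sum(hB);            % reference cumulative distribution

% lookup table: send grey level g in A to the level in B with the same CDF value
lut = zeros(1,256);
for g = 1:256
    [dummy, idx] = min(abs(CB - CA(g)));
    lut(g) = idx - 1;
end

Amatched = lut(A + 1);              % apply the lookup table to every pixel
imwrite(uint8(Amatched), 'frameA_matched.png');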

If anyone is interested to give it a go I'll post some reference papers and the math.
Author
Time
Oh please do post links to papers!

Has this been implemented in any generally available software? Are there any special tricks for video?
Author
Time
Can this be used for adding color to black and white footage?
Author
Time
Yes! Please!!!

I'm doing a project right now where (off topic details omitted, tho' it's kind of an interesting... yeah, maybe not). And there are projects planned where I'd be mixing (variously) multi-gen VHS, NTSC & PAL DVD, HD, LD (NTSC, maybe PAL), other people's DV... sometimes even compositing different sources, like this project...


Author
Time
I'd be interested in knowing how to do this.
Author
Time
Sluggo, potentially you could use it to add colour to black and white, especially if there were some colour photos around of the scenes in question.
Lars, I'm not sure if it is being used in any off-the-shelf software, but if you can program at all you can implement it yourself - the math isn't that out there.
The best thing is once you have it in your library you can automate just about any grading job where you have a reference you are matching to.
Author
Time
OK, I found some sample Matlab code (not my code, but the base I used - I'll post credit for it when I find where I originally got it from). I'm still digging through my CDs to find the whitepapers, but this should give you an overview; you can see that the computational load is minimal.
Matlab is of course the perfect way to show this, as the process stays linear and is a matrix fest - you could say Matlab was written to do this.

Edit: Tried to make smileys go away but didn't work.

%
% simple implementation of N-Dimensional PDF Transfer
%
% [DR] = pdf_transferND(D0, D1, rotations);
%
% D0, D1 = NxM matrix containing N-dimensional features
% rotations = { {R_1}, ... , {R_n} } with R_i PxN
%
% note that we can use more than N projection axes. In this case P > N
% and the inverse transformation is done by least mean square.
% Using more than N axes leads to a more stable (but also slower)
% convergence.
%
function [DR] = pdf_transferND(D0, D1, Rotations)

nb_dims = size(D0,1);

relaxation = 1;
DR = D0;

for it=1:length(Rotations)

R = Rotations{it};
nb_projs = size(R,1);

%% apply rotation

D0R = R * D0;
D1R = R * D1;

%% find data range
for i=1:nb_projs
datamin(i) = min([D0R(i,:) D1R(i,:)]);
datamax(i) = max([D0R(i,:) D1R(i,:)]);
end

%% get the marginals
for i=1:nb_projs
step = (datamax(i) - datamin(i))/300;
p0R{i} = hist(D0R(i,:), datamin(i):step:datamax(i));
p1R{i} = hist(D1R(i,:), datamin(i):step:datamax(i));
end

%% match the marginals
for i=1:nb_projs
f{i} = pdf_transfer1D(p0R{i}, p1R{i});
scale = (length(f{i})-1)/(datamax(i)-datamin(i));
D0R_(i,:) = interp1(0:length(f{i})-1, f{i}', (D0R(i,:) - datamin(i))*scale)/scale + datamin(i);
end

D0 = relaxation * (R \ (D0R_ - D0R)) + D0;
end

DR = D0; % return the transformed data (the DR = D0 at the top only copies the input)

end
%%
%% 1D - PDF Transfer
%%
function f = pdf_transfer1D(pX,pY)
nbins = max(size(pX));

PX = cumsum(pX);
PX = PX/PX(end);

PY = cumsum(pY);
PY = PY/PY(end);

% inversion
PX = [0 PX nbins] + (0:nbins+1)*1e-10;
PY = [0 PY nbins] + (0:nbins+1)*1e-10;

f = interp1(PY, [0 ((0:nbins-1)+1e-16) (nbins+1e-10)], PX,'linear');
f = f(2:end-1);
end

Author
Time
Aww crap how do I make : ) not turn into smileys?
Anyway here is the rotation code as well.

% rotations = find_all(ndim, NbRotations)
%
% ndim = 2 or 3 (but the code can be changed)
%
%
% code for generating an optimised sequence of rotations
% for the IDT pdf transfer algorithm
% although the code is not beautiful, it does the job.
%
function rotations = find_all(ndim, NbRotations)

if (ndim == 2)
l = [0 pi/2];
elseif (ndim == 3)
l = [0 0 pi/2 0 pi/2 pi/2];
else % put here initialisation stuff for higher orders
end

fprintf('rotation ');
for i = 1:(NbRotations-1)
fprintf('%d ...', i );
l = [l ; find_next(l, ndim)]; % l(end,:)+ones(1,ndim-1)*pi/2
fprintf('\b\b\b', i );
end

M = ndim;

rotations = cell(1,NbRotations);
for i=1:size(l, 1)
for j=1:M
b_prev(j,:) = hyperspherical2cartesianT(l(i,(1:ndim-1) + (j-1)*(ndim-1)));
end
b_prev = grams(b_prev')';
rotations{i} = b_prev;
end


end
%
%
function [x] = find_next( list_prev_x, ndim)

prevx = list_prev_x; % in hyperspherical coordinates
nprevx = size(prevx,1);
hdim = ndim - 1;
M = ndim;

% convert points to cartesian coordinates
c_prevx = [];
for i=1:nprevx
for j=1:M
b_prev(j,:) = hyperspherical2cartesianT(prevx(i,(1:hdim) + (j-1)*hdim));
end
b_prev = grams(b_prev')';
c_prevx = [c_prevx; b_prev];
end

c_prevx;

options = optimset('TolX', 1e-10);
options = optimset(options,'Display','off');

minf = inf;
for i=1:10
x0 = rand(1, hdim*M)*pi - pi/2;
x = fminsearch(@myfun, x0, options);
f = myfun(x);
if f < minf
minf = f;
minx = x;
end
end
x = minx; % keep the best of the random restarts, not just the last one

%%
% f - Compute the function value at x
function [f] = myfun(x)
% compute the objective function
c_x = zeros(M, ndim);
for i=1:M
c_x(i,:) = hyperspherical2cartesianT(x((1:hdim) + (i-1)*hdim));
end
c_x = grams(c_x')';
f = 0;
for i=1:M
for p=1:size(c_prevx, 1)
d = (c_prevx(p,:) - c_x(i,:)) * (c_prevx(p,:) - c_x(i,:))';
f = f + 1/(1 + d);
d = (c_prevx(p,:) + c_x(i,:)) * (c_prevx(p,:) + c_x(i,:))';
f = f + 1/(1 + d);
end
end
end
%%

end


%
%
function c = hyperspherical2cartesianT(x)

c = zeros(1, length(x)+1);
sk = 1;
for k=1:length(x)
c(k) = sk*cos(x(k));
sk = sk*sin(x(k));
end
c(end) = sk;

end

% Gram-Schmidt orthogonalization of the columns of A.
% The columns of A are assumed to be linearly independent.
function [Q, R] = grams(A)

[m, n] = size(A);
Asave = A;
for j = 1:n
for k = 1:j-1
mult = (A(:, j)'*A(:, k)) / (A(:, k)'*A(:, k));
A(:, j) = A(:, j) - mult*A(:, k);
end
end
for j = 1:n
if norm(A(:, j)) < sqrt(eps)
error('Columns of A are linearly dependent.')
end
Q(:, j) = A(:, j) / norm(A(:, j));
end
R = Q'*Asave;
end
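
To give an idea of how you would actually call the two functions above on a pair of RGB frames, something like the following should do it (an untested sketch - the file names are just placeholders, and generating the rotations can take a while):

% rough usage sketch for the functions above (untested; file names are placeholders)
src = double(imread('source_frame.png'))/255;    % frame whose colours we want to change
ref = double(imread('reference_frame.png'))/255; % frame whose "look" we want to copy

[h, w, c] = size(src);

D0 = reshape(src, h*w, 3)'; % 3xM matrix of source (R,G,B) samples
D1 = reshape(ref, [], 3)';  % 3xM matrix of reference samples (frame sizes can differ)

rotations = find_all(3, 20);            % 20 rotations in 3 dimensions (this step is slow-ish)
DR = pdf_transferND(D0, D1, rotations); % iterative pdf transfer

DR = min(max(DR, 0), 1);                % clamp back into range
out = reshape(DR', h, w, 3);
imwrite(out, 'matched_frame.png');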
Author
Time
That's a lot of abracadabra to me. So how do we use this? Can it be used as an AviSynth script?
Author
Time
Err no, it is matlab code.

If you haven't played with matlab before there is a great wiki entry on it.

I've found one of the whitepapers, I'll host it for a while here:
colourmatching

If you read the whitepaper then follow the sample code it should become clear what the code is doing, and then you could incorporate it into your own programs.
Author
Time
That code is all double dutch (sorry Arnie!) to me as well. Damn forum software.

Putting it simply, for people with simple brains like me:

Let's say you have a frame of video: so many pixels, each pixel having a value for Y, U and V. (Could this also work in RGB?)

Draw a bar chart - or histogram - for the Y (luminance) values; the x axis is the values from 0 to 255 and the y axis is the number of pixels having that particular Y value. The shape you end up with is governed by the "look" of the image; Mike Verta gives a good editorial on his page here (read the bit titled "crunch time").

A probability density function produces a curve that approximates to the shape of the graph (the function called the normal distribution is the one most people have heard of).

The process that Laserman is describing is a method of generating a function that produces a curve approximating to video 1's histogram, then adjusting the values of video 2 so that they match this curve.
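
If you want to see this for yourself, something like the following will plot the Y histograms of two frames side by side (a quick untested sketch; it needs the Image Processing Toolbox for rgb2gray, and the file names are just placeholders):

% quick sketch: compare the luminance histograms of two frames
A = imread('shot_from_my_edit.png');
B = imread('shot_from_reference.png');

yA = double(rgb2gray(A)); % rough Y approximation
yB = double(rgb2gray(B));

levels = 0:255;
subplot(2,1,1); bar(levels, hist(yA(:), levels)); title('frame A luminance');
subplot(2,1,2); bar(levels, hist(yB(:), levels)); title('frame B luminance');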

I would love to see someone with coding ability take this on and come up with an AVISynth filter. Have you tried the AVISynth development forum at doom9 to see if there's any interest?


Author
Time
BTW, the second part of the paper, with the denoising/degraining algorithms, also looks highly promising. I never took much notice of it before, as when I first got into this I was only interested in the colour matching - but that noise reduction could be quite something.
Author
Time
OK. So when I have frame A and I want to give it the "look" of frame B. What do I have to do exactly?
Author
Time
Alrighty then! Manually it is! <---- intentional smiley.

The unintentional smileys make it seem less threatening, at least.


Ok, all kidding aside, I wonder if I'll be able to make anything out of this? Daaaaang.

I looked over the .pdf file, and what Moth3r said, and it seems to be something I could eventually figure out.

Right now, I'm overtaxing my brain with too many proggies & problems at once. But I think I can get through the current project without it. I will probably need that kind of power in the future.


Keep it coming, maybe I'll wrap my mind around it one of these days.


Matlab isn't that expensive for a student, which I'm not. Once I selected "student" it wouldn't let me see the commercial or government versions (which I'm not, either). I hope I can afford the commercial one. But I wish they had a version for fan editors.

An Avisynth version would be awesome!


Author
Time
Originally posted by: Moth3r

The process that Laserman is describing is a method of generating a function that produces a curve approximating to video 1's histogram, then adjusting the values of video 2 so that they match this curve.

I would love to see someone with coding ability take this on and come up with an AVISynth filter. Have you tried the AVISynth development forum at doom9 to see if there's any interest?


That sounds similar to the Avisynth filter "Colourlike" that Desree posted about in the Auto Color Correction thread.

Thanks for the info, Laserman. Although this is the first time I've heard of matlab code, it sounds interesting. I'll have to try wrapping my mind around this too.


Author
Time
Actually this is a very different method to Colourlike, and it gets around the problem of trying to do a straight histogram swap, which can end up with the wrong pixels changing colours.

Matlab does have a 15-day trial; just fill in the form on their site, say something like you want to see if Matlab will work for a colour grading project, and they should release you a trial version.

Otherwise you could check out boneless.

Maybe I was incorrect in my theorem.

1) All Star Wars OT fans are geeks.
2) All geeks have been at sometime in their life programmers.

I was originally just throwing this out there for DE, as I incorrectly thought he wanted to colour correct the SE but was struggling, and thought he could easily code this in C for his project. I didn't realise he was happy with the SE colour, so I thought I'd pop this on here anyway for anyone else.

If you walk through the code, most of it is plain English and commented as well, and if you look at the code while following the article it should make sense.

I've never tried to write an AVISynth plugin, I'm not really a Windows guy, but if they use C# as their dev platform then it should be easy to port. It really is just a bunch of matrix transforms, some histograms and table lookups. You can do it all in matlab though.

Doom9 would probably have a bunch of people that could give it a shot in avisynth. Perhaps someone could post a 'call for programmers' over there and see if any Star Wars fans are lurking who are hot coders. There are a lot of small apps that could really help the SW fan edit scene, and a library could be built up that would help everybody.
The Open Source Fix Star Wars Project.

Author
Time
Originally posted by: Moth3r
That code is all double dutch (sorry Arnie!) to me as well. Damn forum software.

Putting it simply, for people with simple brains like me:

Let's say you have a frame of video: so many pixels, each pixel having a value for Y, U and V. (Could this also work in RGB?)

Draw a bar chart - or histogram - for the Y (luminance) values; the x axis is the values from 0 to 255 and the y axis is the number of pixels having that particular Y value. The shape you end up with is governed by the "look" of the image; Mike Verta gives a good editorial on his page here (read the bit titled "crunch time").

A probability density function produces a curve that approximates to the shape of the graph (the function called the normal distribution is the one most people have heard of).

The process that Laserman is describing is a method of generating a function that produces a curve approximating to video 1's histogram, then adjusting the values of video 2 so that they match this curve.

I would love to see someone with coding ability take this on and come up with an AVISynth filter. Have you tried the AVISynth development forum at doom9 to see if there's any interest?


Good summary Moth3r. Yes, it will work in RGB (I did it for RGB, haven't tried it with YUV); effectively, with the matrix transforms you are pushing it through other colourspaces on the way (the more rotations, the closer the match). It would really just be up to the programmer what they wanted to support as an input.

If you look at the graphs on the last pages of the article you can see that with every rotation the graphs look more like each other (i.e. they approach convergence); with enough rotations they actually *do* converge and both images will have the exact same pdf. I find this stunning - I couldn't believe it when I first read the article, but they go on to give the proof and it is simple and elegant.
Now someone with a far bigger brain than mine could find the optimal rotations to get you there, but in practice a random selection of rotations will do.
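
If you want to see the convergence numerically rather than from the graphs, here's a quick and dirty check (my own sketch, reusing the D0/D1 sample matrices from the usage example above; the distance measure is just a simple sum of absolute differences I picked for illustration): run the transfer with more and more rotations and watch the per-channel histograms close up on each other.

% rough convergence check (assumes D0 and D1 are 3xM sample matrices as above)
levels = linspace(0, 1, 100);
for n = [1 5 10 20]
    rotations = find_all(3, n);
    DR = pdf_transferND(D0, D1, rotations);
    d = 0;
    for ch = 1:3
        h0 = hist(DR(ch,:), levels); h0 = h0/sum(h0);
        h1 = hist(D1(ch,:), levels); h1 = h1/sum(h1);
        d = d + sum(abs(h0 - h1)); % simple L1 distance between the marginals
    end
    fprintf('%d rotations: marginal distance %.4f\n', n, d);
end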

Thanks for the link to Mike's page, I hadn't read all that before, interesting about the music.
Author
Time
I thought this was a bit harsh though, re SW fan film makers:
(Mike Verta)
Star Wars Fan Films: A Review
You Know Who You Are
*snip*
Having a camera doesn’t make you a photographer. Final Draft software doesn’t make you a writer. Telling your fat friends in Jedi bathrobes to make mean faces doesn’t make you a director,
*snip*

The truth is, the world is full of photographers, writers, composers, directors, actors, and editors, who have made their craft a life’s pursuit; who have more to contribute to the world than meaningless ego-glorification pieces. Maybe if you’re lucky, you’ll never have to face them in a darkened theater. Until then, fill up the world’s bandwidth with your incomprehensible, uninteresting stories and painful, cringe-worthy dialogue. Humiliate your friends in costume and watch your film over and over and over. Proclaim yourself an emperor in your shiny new clothes; but don’t call yourself a filmmaker.


Well, writers write, film makers make films.

If you make a film you are a film maker - period.

You might not be a good one (yet), but at least you are taking the time out to try.
By actually doing it you learn, and even the worst fan films actually take a lot of time and dedication to get finished. (If nothing else you learn how *hard* it is to make good films.)

You learn what works and doesn't work. You learn how to take (lots of) criticism. You learn how to complete something, and you put yourself out there (which can be scary) and each time you do it you get better.

You may get bitten by the bug and spend more and more time doing it. You might get more interested go and get some real training, and sooner or later if you stick with it, you will start making good films - or at least ones that don't suck.

Anyone who thinks "my stuff is shit, I'll never get anywhere" - just go and rent Peter Jackson's film "Bad Taste" and try to sit through it. Nobody's first movies are all that great.

Film making is a craft learned by doing. It is hands on and takes dedication, time and a willingness to learn.
Tomorrow's next great writer, director or SFX artist could be 12 years old at home with iMovie and a DV camera making the worst fan films ever made *right now*, but if they stick with it, they might one day produce a breathtaking work of art.
Author
Time
Okay, it sounds interesting. So far I've been doing things by hand and by eye. Would you mind putting up an example or two [i.e. screenies], Laserman? That should help me decide whether to give this a try.

I'm currently having trouble colour matching two separate shots that weren't intended to match initially. I don't wanna reveal what I'm working on until I have something worth showing off. Going for a partial sepia look.

In the words of Turkleton: "Learn by doing!"
Author
Time
Download the colourmatch document from my post, it has example images in there.

http://www.mudgee.net/ot/Picture1.png

This may be a closer example for you though, moving an image towards a particular look using another (e.g. a sepia style):

http://www.mudgee.net/ot/Picture2.png
Author
Time
Originally posted by: Laserman
Maybe I was incorrect in my theorem.

1) All Star Wars OT fans are geeks.
2) All geeks have been at sometime in their life programmers.
Nothing wrong with your theory, it's just that Sinclair BASIC is not much use in this case...


Author
Time
Hmmm, Sinclair BASIC did have basic array support, a random number generator and trig functions, but the fact that a single frame of video contains 20 times as much data as the Sinclair could hold would be a challenge!
Playing pangolins would probably be more fun.

5 REM Pangolins
10 LET nq=100: REM number of questions and animals
15 DIM q$(nq,50): DIM a(nq,2): DIM r$(1)
20 LET qf=8
30 FOR n=1 TO qf/2-1
40 READ q$(n): READ a(n,1): READ a(n,2)
50 NEXT n
60 FOR n=n TO qf-1
70 READ q$(n): NEXT n
100 REM start playing
110 PRINT "Think of an animal.","Press any key to continue."
120 PAUSE 0
130 LET c=1: REM start with 1st question
140 IF a(c,1)=0 THEN GO TO 300
150 LET p$=q$(c): GO SUB 910
160 PRINT "?": GO SUB 1000
170 LET in=1: IF r$="y" THEN GO TO 210
180 IF r$="Y" THEN GO TO 210
190 LET in=2: IF r$="n" THEN GO TO 210
200 IF r$<>"N" THEN GO TO 150
210 LET c=a(c,in): GO TO 140
300 REM animal
310 PRINT "Are you thinking of"
320 LET P$=q$(c): GO SUB 900: PRINT "?"
330 GO SUB 1000
340 IF r$="y" THEN GO TO 400
350 IF r$="Y" THEN GO TO 400
360 IF r$="n" THEN GO TO 500
370 IF r$="N" THEN GO TO 500
380 PRINT "Answer me properly when I'm","talking to you.": GO TO 300
400 REM guessed it
410 PRINT "I thought as much.": GO TO 800
500 REM new animal
510 IF qf>nq-1 THEN PRINT "I'm sure your animal is very", "interesting, but I don't have","room for it just now.": GO TO 800
520 LET q$(qf)=q$(c): REM move old animal
530 PRINT "What is it, then?": INPUT q$(qf+1)
540 PRINT "Tell me a question which dist ","inguishes between "
550 LET p$=q$(qf): GO SUB 900: PRINT " and"
560 LET p$=q$(qf+1): GO SUB 900: PRINT " "
570 INPUT s$: LET b=LEN s$
580 IF s$(b)="?" THEN LET b=b-1
590 LET q$(c)=s$(TO b): REM insert question
600 PRINT "What is the answer for"
610 LET p$=q$(qf+1): GO SUB 900: PRINT "?"
620 GO SUB 1000
630 LET in=1: LET io=2: REM answers for new and old animals
640 IF r$="y" THEN GO TO 700
650 IF r$="Y" THEN GO TO 700
660 LET in=2: LET io=1
670 IF r$="n" THEN GO TO 700
680 IF r$="N" THEN GO TO 700
690 PRINT "That's no good. ": GO TO 600
700 REM update answers
710 LET a(c,in)=qf+1: LET a(c,io)=qf
720 LET qf=qf+2: REM next free animal space
730 PRINT "That fooled me."
800 REM again?
810 PRINT "Do you want another go?": GO SUB 1000
820 IF r$="y" THEN GO TO 100
830 IF r$="Y" THEN GO TO 100
840 STOP
900 REM print without trailing spaces
905 PRINT " ";
910 FOR n=50 TO 1 STEP -1
920 IF p$(n)<>" " THEN GO TO 940
930 NEXT n
940 PRINT p$(TO n);: RETURN
1000 REM get reply
1010 INPUT r$: IF r$="" THEN RETURN
1020 LET r$=r$(1): RETURN
2000 REM initial animals
2010 DATA "Does it live in the sea",4,2
2020 DATA "Is it scaly",3,5
2030 DATA "Does it eat ants",6,7
2040 DATA "a whale", "a blancmange", "a pangolin", "an ant"


edit: Fixed syntax error pointed out by Moth3r, and to get it right this time!
Author
Time
There's a syntax error in line 30.

But at least this forum software doesn't insert smileys all over that code.
