Oh man, wrong section. Can some mod please move it to the right section? Thanks.
This is the result of playing around with DrDre's awesome color matching tool. I matched the colors from the German SE Trailer to the 1997 TB of Empire Strikes Back. I tried several different frames of the trailer and ended up using just one frame to match the colors. All the images below were created with the same color model.
I'm really excited about the results, given the poor condition of the trailer. The regrade gives it a film-like look. Sure, the colors aren't 100% correct, but I think they are good enough. However, I'm not an expert on these kinds of things, so I'm curious what you guys think.
Pretty exciting teaser.
Here’s a wild prediction based on the trailer: Phasma attacks the Resistance base at the beginning, similar to ESB, which causes Poe to drag the still-comatose Finn with him. Finn then stays in a coma until the end of the movie, but he’s able to use the Force while still comatose to save the day.
How does the LUT hold up on other shots? Sometimes one frame is enough to get pretty good results. I used a single shot for my own TDK regrade, which was pretty close to the screener. Granted, I had to try different shots, but the results were pretty awesome. Maybe it works here too.
TPM = AOTC = ROTS = R1
I tried to figure out a ranking of the prequels and R1, but then I realized that I never want to watch any of those movies again. R1 may be better directed, but its story and characters are just as boring as those of the prequels.
A more honest view on her life: https://www.youtube.com/watch?v=zOcHS3l6MtY
Honestly, I don’t have a problem if her absence in the ninth movie causes story problems. I would prefer that over CGI, a recast, or a shoehorned death scene. I hope the writers have enough taste to not play on our emotions over her death in the next two movies.
Rest in peace.
As someone who didn’t enjoy the movie, I toyed with the idea of cutting everything except the final battle. Basically, reduce the cast to nameless soldiers who are just attacking the base. The result would be a short movie. I don’t really know if it’s possible.
Here's the prologue of FOTR EE. It has no sound, as I'm currently lacking a program to edit DTS or AC3 files.
It's not a final file and still a work in progress. So far I've spent 50 hours creating it. I've used a single color space model with stabilization parameter 500.
Shots that use blending are really challenging to convert. The algorithm also introduces additional noise. Maybe a stabilization parameter of 1000 can fix this.
That sounds awesome, DrDre! Maybe you could specify the cropping areas in this file, too? You just need two diagonal points.
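Just to illustrate the "two diagonal points" idea: a minimal sketch of such a crop in pure Python, on an image stored as a nested list of rows. The function name and in-memory format are just for this example, not anything from DrDre's tool.

```python
# Hypothetical sketch: a crop region defined by two diagonally opposite
# corner points (x, y), applied to an image stored as rows (image[y][x]).

def crop(image, p1, p2):
    """Crop using two diagonal corners; the order of the points doesn't matter."""
    (x1, y1), (x2, y2) = p1, p2
    left, right = sorted((x1, x2))
    top, bottom = sorted((y1, y2))
    return [row[left:right + 1] for row in image[top:bottom + 1]]

# 4x4 dummy "image" whose pixels are just their own (x, y) coordinates
img = [[(x, y) for x in range(4)] for y in range(4)]
cropped = crop(img, (2, 3), (1, 1))  # corners given in any order
```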
Yes, I'm going to share FOTR EE recolored once it's finished. I'm using a single color space model, because it's faster this way. The colors are not 100% correct, but good enough. I will probably post the prologue tomorrow.
Thank you, DrDre, this helps.
Another suggestion: transferring a whole film shot by shot is very time-consuming. Let's say you build a color model based on the first frame of a shot and transfer the rest of the shot using this model. For the next shot, you build a new color model, and so on. Right now, you have to sit in front of your computer, even though it's a very automatic process. So it would be good if you could specify the shots, the reference frame, and the model type in a text file. The program would read this file and perform the transfers automatically.
Example: the input frames are called "inputX.bmp", where X is a number, and the reference frames "refX.bmp".
One line for a shot could look something like this:
input100.bmp and ref110.bmp are used to build a multi color space model with stabilization parameter 500. After this is done, input100.bmp to input400.bmp are transferred. The idea is to write several of these lines in a single text file. The program can work through them without somebody having to sit in front of the computer.
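To make the suggestion concrete, here's a rough sketch of what such a batch driver could look like. The line format (`<first> <last> <ref> <model> <stabilization>`), the file naming, and all function names are hypothetical, and the actual transfer call is stubbed out, since it would depend on DrDre's tool.

```python
# Hypothetical batch format, one shot per line:
#   <first_input> <last_input> <ref_frame> <model> <stabilization>
# e.g. "100 400 110 multi 500" means: build a multi color space model from
# input100.bmp and ref110.bmp with stabilization 500, then transfer
# input100.bmp .. input400.bmp with it.

def parse_batch(text):
    jobs = []
    for line in text.splitlines():
        line = line.split("#")[0].strip()  # allow comments and blank lines
        if not line:
            continue
        first, last, ref, model, stab = line.split()
        jobs.append({
            "inputs": [f"input{i}.bmp" for i in range(int(first), int(last) + 1)],
            "reference": f"ref{ref}.bmp",
            "model": model,
            "stabilization": int(stab),
        })
    return jobs

def run_batch(jobs, transfer):
    # 'transfer' stands in for the real color-transfer routine
    for job in jobs:
        model = (job["reference"], job["model"], job["stabilization"])
        for frame in job["inputs"]:
            transfer(frame, model)

jobs = parse_batch("100 102 110 multi 500\n200 201 210 single 1000\n")
processed = []
run_batch(jobs, transfer=lambda frame, model: processed.append(frame))
```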
Great tool! I'm currently using it to color transfer Fellowship of the Ring EE. It does a good job. However, I've run into the following problem and have no idea what causes it:
I'm using a single color space model and have tried the stabilization parameters 0, 1, 500 and 1000. I get similar results for all of them.
You should be able to view it now.
Here's the Joker/Mob scene using the full colors. It took me about 3 hours to make.
I've tried two ways to implement the mapping from one frame to another with multiple rotations.
The first is the one you described. The problem here is that there are still many-to-many mappings in the rotated color spaces. Granted, the ranges are usually pretty narrow, but this still has an effect across all rotations. My algorithm selects the value that is mapped most often, so information is lost in each rotation. This is why a transfer with only a few rotations reproduces the correct colors on different frames, but one with many rotations does not.
The second one only works for frames of the same shot. I've noticed that each RGB color of the input frame is mapped to exactly one RGB color in the fully rotated, colored output frame. In my tests, there were usually about 50k-60k unique colors in one frame, so that's hardly enough to recolor a whole film based on a single frame using this method. However, if I use the color transfers of the reference frame, I can usually correctly recolor 95% to 99% of the pixels of frames from the same shot. The remaining pixels are recolored according to the mapping function obtained from simple histogram matching.
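The second method can be sketched roughly like this (my assumptions for the sketch: frames as flat lists of RGB tuples, and `fallback` standing in for the simple histogram matching used for unseen colors):

```python
# Rough sketch: learn an exact RGB -> RGB lookup table from one fully
# transferred frame, apply it to other frames of the same shot, and fall
# back to histogram matching for colors the table has never seen.

def build_lookup(input_frame, output_frame):
    """Exact color table learned from one already-transferred frame."""
    table = {}
    for src, dst in zip(input_frame, output_frame):
        table[src] = dst  # each input color maps to exactly one output color
    return table

def recolor(frame, table, fallback):
    out, hits = [], 0
    for px in frame:
        if px in table:
            out.append(table[px])
            hits += 1
        else:
            out.append(fallback(px))  # unseen color: histogram-matching fallback
    return out, hits / len(frame)     # fraction of pixels covered by the table

# Toy usage: learn from one frame, apply to the next frame of the same shot
frame1_in = [(0, 0, 0), (10, 10, 10), (20, 20, 20)]
frame1_out = [(5, 5, 5), (15, 15, 15), (25, 25, 25)]
table = build_lookup(frame1_in, frame1_out)
frame2 = [(10, 10, 10), (99, 99, 99)]
recolored, coverage = recolor(frame2, table, fallback=lambda px: px)
```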
I'm now able to reference from a single frame, too. However, it only works with one rotation.
Here's how the algorithm works. First, I match the normalized cumulative histogram of each color channel. This results in a 1-to-1 transfer of each value to the target value. For example, the red value of 185 may change to 233. I'll save this 1-to-1 mapping and use it for other pictures. The result is really great and the algorithm saves a lot of time with the color mapping process.
But as I've said, it only works with simple histogram matching, because when I apply the pdf algorithm, there is 1-to-many mapping. And I haven't found a way yet, to calculate how the next value is choosen. Since the next values are usually in a range (let's say 185 changes to values between 225-240), I've calculated the mean value and used it for one to one mapping. Doesn't work, unfortunately. I've also tried to map from rotation to rotation, but even then one value is mapped to many others.
So yeah, the mapping based on a refernce frame is a great addition, but so far it only works with with histogram matching/one rotation. It will result in a good color transfer, which is probably enough most of the time.
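For reference, the per-channel cumulative histogram matching described above can be sketched like this. This is a NumPy sketch assuming 8-bit channels, not the actual script used here:

```python
import numpy as np

# Per-channel histogram matching: map each source value to the target value
# at the same position in the normalized cumulative histogram. The result is
# a 256-entry 1-to-1 lookup table that can be reused on other frames.

def match_channel(source, target):
    src_hist = np.bincount(source.ravel(), minlength=256)
    tgt_hist = np.bincount(target.ravel(), minlength=256)
    src_cdf = np.cumsum(src_hist) / src_hist.sum()
    tgt_cdf = np.cumsum(tgt_hist) / tgt_hist.sum()
    # for each source level, pick the first target level whose CDF reaches it
    lut = np.searchsorted(tgt_cdf, src_cdf).clip(0, 255).astype(np.uint8)
    return lut

src = np.array([0, 0, 128, 255], dtype=np.uint8)
tgt = np.array([50, 50, 200, 250], dtype=np.uint8)
lut = match_channel(src, tgt)
matched = lut[src]  # apply the saved 1-to-1 mapping to any frame
```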
That looks really good. Great work!
The clip took me about 8 hours to make. However, the first 6 hours were spent on the full rotations. And I started to use multithreading towards the end.
LOTR can also be fixed, but only if you use the full rotations.
Here's a video sample of The Dark Knight Blu Ray matched to Screener colors.
I've mostly used just one rotation, as it went faster this way. There are, however, small shots at the beginning that use all rotation matrices, which results in better matching.
Here are the three images compared:
I'm going to play with the rotation matrices a little bit. Maybe I will calculate even fewer than 20, because the matching is really slow if all 20 rotations are used. There is also sometimes heavy noise if more than one rotation is used.
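For anyone curious what the rotations actually do: the core loop of the PDF method (iterative distribution transfer, per Pitié et al.) can be sketched roughly like this. This is a simplified toy version, not the implementation used here, and it omits refinements like the grain reduction; the rotation count is the knob discussed above.

```python
import numpy as np

# Iterative distribution transfer: repeatedly rotate both color point clouds
# with a random orthogonal matrix, 1D-match each rotated axis to the target,
# and rotate back. More rotations = closer 3D distribution match (and more
# compute, and potentially more noise).

rng = np.random.default_rng(0)

def match_1d(src, tgt):
    """Quantile-match one projected axis of the source to the target."""
    order = np.argsort(src)
    out = np.empty_like(src)
    idx = np.linspace(0, len(tgt) - 1, len(src)).astype(int)
    out[order] = np.sort(tgt)[idx]
    return out

def pdf_transfer(source, target, rotations=20):
    src = source.astype(float).copy()
    tgt = target.astype(float)
    for _ in range(rotations):
        q, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # random orthogonal basis
        s_rot, t_rot = src @ q, tgt @ q
        matched = np.column_stack([match_1d(s_rot[:, c], t_rot[:, c])
                                   for c in range(3)])
        src = matched @ q.T                            # rotate back
    return src

# Toy example: pull 200 dark RGB points toward a brighter distribution
source = rng.uniform(0, 100, size=(200, 3))
target = rng.uniform(100, 200, size=(200, 3))
result = pdf_transfer(source, target)
```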
Thanks DrDre, g-force and thorr.
Here are Harmy and Blu Ray matched to GOUT colors:
As you can see, the PDF algorithm doesn't really work well with bad target images. The colors are matched, but there are problems with the sky, and the resulting images are way too grainy.
To reduce the grain artefacts, I implemented the grain removal algorithm described in the same paper that introduced the PDF algorithm.
The idea is to match the gradient of the result picture to the gradient of the input picture without losing the colors of the target image. This should reduce the noise introduced by a bad target picture.
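A much-simplified 1D sketch of that idea (not the paper's actual 2D solver; `lam` and the iteration count are illustrative choices): keep the colors of the noisy result while matching the gradients of the clean input, via Jacobi iterations on the corresponding least-squares problem.

```python
# Grain reduction, 1D toy version: find o minimizing
#   lam * (o - colored)^2 + (o' - clean')^2
# i.e. stay close to the (noisy) recolored signal while matching the
# gradients of the (clean) input signal. Boundary samples stay fixed.

def degrain_1d(colored, clean, lam=0.1, iters=500):
    o = list(colored)
    n = len(o)
    for _ in range(iters):
        new = o[:]
        for p in range(1, n - 1):
            # discrete Laplacian of the clean input at p
            lap_clean = clean[p - 1] + clean[p + 1] - 2 * clean[p]
            new[p] = (lam * colored[p] + o[p - 1] + o[p + 1] - lap_clean) / (lam + 2)
        o = new
    return o

# Toy data: clean ramp; "colored" = ramp shifted up, plus alternating grain
clean = [float(x) for x in range(10)]
grain = [0, 1, -1, 1, -1, 1, -1, 1, -1, 0]
colored = [x + 50 + g for x, g in zip(clean, grain)]
smooth = degrain_1d(colored, clean)
```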
Here are the matched GOUT colors compared to the degrain algorithm:
Here are the final pictures compared to the GOUT:
Here is Blu Ray vs Blu Ray to Gkar
Here is Gkar vs Blu Ray to Gkar
towne32, if you can provide me some screenshots, then I can do it for you using the PDF method.
I'm currently using SCILAB, which is free software that provides many of the same features as MATLAB. But I plan to write an Avisynth plugin that implements the PDF algorithm.
That's really impressive and quite useful. The PDF transfer generally doesn't provide good results if the target frame is of lower resolution and/or compressed. In that case, your script could just use another frame and provide better results. Your method also seems better suited for cases like The Hobbit edits, where people try to match the colors of each film.
Here is GOUT, GKar, and 35mm compared to Harmy matched to GOUT, GKar, and 35mm, using PDF matching with 50 rotations.
DrDre compared to PDF
Here is GOUT compared to TeamNegative matched to GOUT
DrDre compared to PDF
Here are frames of TN matched to frame 8228 of GOUT, compared to DrDre's result:
Then Scilab crashed and I forgot to save the rotation matrices, which means I have to recalculate them. I will try it with just 20 matrices this time, because I want to know if fewer matrices are good enough; the calculation of 50 matrices takes about 3 hours, compared to about 45 minutes for 20.
But you can already see that the PDF algorithm does worse than DrDre's algorithm when it comes to colouring frames with a different reference frame. With PDF, you are limited to frames of the same shot.