
GOUT image stabilization - Released — Page 10

Author
Time

ok,

finally got it up and running..

1) problem - fixed it, never mind; the FFTW3.dll needed to go in c:\windows\system32

2) somehow i missed the VagueDenoiser DLL; got it now...

ok, attempting to run it now..

will discuss later on...

 

later

-1

[no GOUT in CED?-> GOUT CED]

Author
Time
 (Edited)

alright,

i ran my first test, using the first 5 minutes of episode 4, encoded with HCenc 0.23 (there are about a million options for this - anyone know some good settings?), and a few hours later got a result..

i think i may need to tweak some settings in the script, or the changes might be very subtle...

 

i tried to write a script to compare the two videos, vertically,

but couldn't get it to work... (i actually put a ReduceBy2 in front, and that didn't work either)..

#StackVertical("M:\data\movie\dvd\star wars ep #\4-gout\VTS_03_1.VOB","M:\data\movie\dvd\star wars ep 4-gout\gout.mpg" )

 

then i tried to overlay the clips, and i kept getting an error back from loading the .VOB

#clip1 = DirectShowSource("M:\data\movie\dvd\star wars ep 4-gout\VTS_03_1.VOB")
#clip2 = DirectShowSource("M:\data\movie\dvd\star wars ep 4-gout\gout.mpg")
# Overlay(clip1, clip2, mode="blend", opacity=0.5)

a while ago, i loaded the Haali media splitter, but i think i'm going to uninstall it, as it's causing a lot of problems..

anyway, it's cool fiddling around with this stuff...

 

_____________________________________________________________________________________________

i started working on a frame reference guide for the scene changes, here's a part of the spreadsheet (not done yet)

 

beginning frame   end frame   description                                               frame type (i,b,p)   end time

------------------------------------------------------------------------------------------------------------

0 108 Blank P 00:00:04.505
109 275 fox intro B 00:00:11.470
276 281 Blank B 00:00:11.720
282 469 lucasfilm limited B 00:00:19.561
470 525 Blank B 00:00:21.897
526 656 a long time ago b 00:00:27.361
657 703 Blank p 00:00:29.321
704 890 star wars P 00:00:37.120
891 2655 star wars crawl b 00:01:50.736
2656 2890 stars p 00:02:00.537
2891 2941 planet 1 b 00:02:02.664
2942 2997 planet 2 p 00:02:05.000
2998 3126 horizon of large planet b 00:02:10.380
3127 3219 blockade runner b 00:02:14.259
3220 3537 giant Imperial Stardestroyer (underneath/back) p 00:02:27.522
3538 3686 imperial stardestroyer (front) p 00:02:33.737
3687 3734 blockade runner (rear hit) p 00:02:35.739
3735 3858 c3po r2d2 interior p 00:02:40.911
3859 3904 Rebel troopers rush past the robots (panning right to left) b 00:02:42.829
3905 3948 Rebel troopers rush past the robots (front) p 00:02:44.665
3949 4031 c3po r2d2 talking p 00:02:48.126
4032 4057 c3po closeup p 00:02:49.211
4058 4099 r2d2 closeup p 00:02:50.962
4100 4189 c3po closeup (2) p 00:02:54.716
4190 4237 r2d2 closeup (2) in corridor p 00:02:56.718
4238 4301 c3po and r2d2 walking down corridor p 00:02:59.388
4302 4329 Rebel troopers rush past the robots (2) front p 00:03:00.555
4330 4384 rebel troopers crouch and aim at door p 00:03:02.849
4385 4428 rebel troopers crouch (closeup) p 00:03:04.685
4429 4487 c3po we're doomed p 00:03:07.145
4488 4520 r2d2 beeps p 00:03:08.522
4521 4576 c3po there'll be no escape for the princess this time p 00:03:10.857
4577 4635 r2d2 beeps (3) p 00:03:13.318
4636 4713 c3po what's that p 00:03:16.571
4714 4755 The nervous Rebel troopers aim their weapons (back) p 00:03:18.323
4756 4797 The nervous Rebel troopers aim their weapons (front) p 00:03:20.075
4798 4974 The Imperial craft has easily overtaken the Rebel Blockade p 00:03:27.457
4975 5013 r2d2 and c3po standing p 00:03:29.084
5014 5046  The nervous Rebel troopers aim their weapons (front) p 00:03:30.460
5047 5145 closeup of face of rebel trooper p 00:03:34.589
5147 5186 door p 00:03:36.299

etc..

i'm sure you could 'tweak' different scenes with different settings...

i'll post a link if i finish it... it's very time consuming watching for each scene change..

(by the way, this is the NTSC version)....

 

more fun coming up...............

 

later

-1

 

[no GOUT in CED?-> GOUT CED]

Author
Time

OT: (off topic)

 

sorry, does anyone know how to edit embedded comments like the table above? i can't add to it... i wanted to update it, but it won't let me..

 

later

-1

[no GOUT in CED?-> GOUT CED]

Author
Time

My early '90s VHS recording of Star Wars is much brighter than GOUT, and looks much better except for the moving horizontal lines in the picture and the ringing noise in the mono audio. Anyway, I made GOUT look much brighter, using G-Force's script with this VirtualDub plugin and these settings. Unfortunately, I do not have ANH, so I used TESB.

Some screenshots for comparison (G-Force on top, Some_Person on the bottom):

Author
Time
negative1 said:

i tried to write a script to compare the two videos, vertically,

but couldn't get it to work... (i actually put a ReduceBy2 in front, and that didn't work either)..

#StackVertical("M:\data\movie\dvd\star wars ep #\4-gout\VTS_03_1.VOB","M:\data\movie\dvd\star wars ep 4-gout\gout.mpg" )

 

then i tried to overlay the clips, and i kept getting an error back from loading the .VOB

#clip1 = DirectShowSource("M:\data\movie\dvd\star wars ep 4-gout\VTS_03_1.VOB")
#clip2 = DirectShowSource("M:\data\movie\dvd\star wars ep 4-gout\gout.mpg")
# Overlay(clip1, clip2, mode="blend", opacity=0.5)

 

 -1,

you need to run the .VOB and the .mpg through DGDecode first, and load the resulting .d2v file with Mpeg2Source instead of DirectShowSource.

And you have to make sure they are the same size.

 

clip1 = Mpeg2Source("X:\...\...\xxx.d2v")

clip2 = Mpeg2Source("X:\...\...\yyy.d2v")

StackVertical (clip1,clip2).ReduceBy2()

 

The Overlay option won't work very well, because if you get them lined up correctly, it will look like just an average of both.
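To see why a well-aligned blend just looks like an average, here is a small pure-Python sketch (an illustration, not AviSynth's actual implementation) of what Overlay's blend mode does per pixel:

```python
def blend(base, overlay, opacity):
    # Overlay "blend": per-pixel weighted average of base and overlay values
    return [(1 - opacity) * b + opacity * o for b, o in zip(base, overlay)]

base    = [10, 200, 30]   # example luma values from clip1
overlay = [20, 100, 30]   # example luma values from clip2
print(blend(base, overlay, 0.5))  # [15.0, 150.0, 30.0] -- just the mean of the two
```

With two nearly identical clips, that per-pixel mean is almost indistinguishable from either input, which is why blend hides the differences instead of showing them.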

BTW: the results are indeed supposed to be subtle, and reducing both by 2 just makes the differences less noticeable. Watch the original, note how much it jiggles, and then watch the output of the script and see how it jiggles much less, is sharper, better color, better contrast, less jaggies, etc. Please note that the script doesn't "remove" any of these problems, just lessens them without (hopefully) adding artifacts.

-G

Author
Time
g-force said:

 

 -1,

you need to run the .VOB and the .mpg through DGDecode first, and load the resulting .d2v file with Mpeg2Source instead of DirectShowSource.

And you have to make sure they are the same size.

 

clip1 = Mpeg2Source("X:\...\...\xxx.d2v")

clip2 = Mpeg2Source("X:\...\...\yyy.d2v")

StackVertical (clip1,clip2).ReduceBy2()

 

The Overlay option won't work very well, because if you get them lined up correctly, it will look like just an average of both.

BTW: the results are indeed supposed to be subtle, and reducing both by 2 just makes the differences less noticeable. Watch the original, note how much it jiggles, and then watch the output of the script and see how it jiggles much less, is sharper, better color, better contrast, less jaggies, etc. Please note that the script doesn't "remove" any of these problems, just lessens them without (hopefully) adding artifacts.

-G

 

thanks g-force, will try it out...

basically, what i wanted to see is how much the 'jiggle' and other options affect it...

i also wanted to draw a fixed (slightly cropped) box around the original to see how much variation there is..

that's why i wanted to try the overlay, so i could 'see' how much is stabilized, etc... i will play around with it more..

thanks again..

later

-1

[no GOUT in CED?-> GOUT CED]

Author
Time
negative1 said:

 

draw a fixed (slightly cropped) box around the original to see how much variation there is ..

 ooh, that's a good idea. Should work!

 

-G

Author
Time
 (Edited)

ok, thanks to g-force, i've done a ton of testing, checking options, overlaying things, etc..

but first a few preliminaries... i love image processing, so a lot of this stuff is very natural to me. i also have a major in mathematics, with a specialization in analysis and algorithms, so that's how i tend to look at things. i'm not so sure about my eyesight or aesthetics, and i'm just being thrown into video processing/film techniques via the 35mm esb project... anyways...

1) i really need to go back and REALLY understand what this script is doing, because i think it will help everyone that doesn't understand the functions to learn some of the principles behind them, if they want to improve on them

 

2) i have only run a few tests with the g-force script (since they're very time consuming), so i don't think my results or tests will be that conclusive... that said, i've watched the first 11800 frames of ep4 about 50 times now, so i'm getting a pretty good grip on the scene changes, shots, and the different types of tests: scrolling text, stable background/moving image, moving image/moving objects, stable background/stable objects, with a mix of fx shots and different camera shots... (take that with a grain of salt)

 

3) i'm going to be posting images, videos, and some math stuff (hopefully won't be too scary)... back to the idea of putting a box around the image: well, it worked, kind of, but it doesn't help at all, because the image seems to be stabilized along the x-axis, with the movement being on the inner frame (i'll post pictures)... so i'm going to go back and overlay a GRID over the video, to give you a more precise idea of what's going on

 

4) there are some really psychedelic effects going on when you overlay the g-force version vs gout, but it does show the local effects of the filters... i think the corrections due to color/grain/artifact processing overwhelm the image stabilization part (which might not be that significant)... what i did was blow up the video to 150% and watch it that way; if you observe the frame changes like that, you can see how jittery the ENTIRE movie is...

 

i hope you don't have to do a crop around it and adjust all the frames... but since i'm mapping out the scene changes by frame, it might be possible to localize the stability that way; not sure yet...

 

ok, i'll start posting pictures, it's much easier to discuss that way..

 

later

-1

[no GOUT in CED?-> GOUT CED]

Author
Time
 (Edited)

Hey negative1, think you might run this script on your ESB35mm? At least, the color part. Because your custom ESB one doesn't look too bad!

edit - oh, that's Some_Person's. My bad.

A Goon in a Gaggle of 'em

Author
Time
bkev said:

Hey negative1, think you might run this script on your ESB35mm?  At least, the color part.  Because your custom ESB one doesn't look too bad!

 it's a little early for this... but one of the reasons i'm studying this script so hard is because it might/will come in handy later on... it will need to be tweaked, though, and a frame guide will probably have to be produced... we'll see...

later

-1

[no GOUT in CED?-> GOUT CED]

Author
Time

-1,

Honestly, there is not much of this script that I could imagine using on a 35mm transfer of the film. Here is why:

1. Color. You are going to have an RGB signal to work with, which is much easier than Luma and Chroma to adjust.

2. Global motion stabilization: You are going to have the frame edges to work from for this, something that is lost on the DVD, and there will be much more accurate ways of stabilizing WITH the frame edges as a reference.

3. Local motion stabilization: this is an effect of film warp, and how the MPEG encoder handles this. You may very well have some film warp, but you won't have the MPEG effects on the film warp to worry about, so a more gentle local motion stabilization should be used if any is needed at all.

4. Jaggies - you won't have any (hopefully)

5. Sharpening - You shouldn't need this if the capture is decent

6. Subtitles - No need

7. Degrain - Okay, there's the exception to all of this. You are going to want to degrain. Admittedly, there are a lot better algorithms than what I'm using, but my script tries to do too much stuff so I've really had to compromise here. I've tried to get the most bang for the buck on this, but if you want the best degrainer, use TemporalDegrain(). Study that, as my script borrows heavily from it. It's really the best, and since you won't need to do much else, it should run at an acceptable speed.

-G

Author
Time
 (Edited)
g-force said:

-1,

Honestly, there is not much of this script that I could imagine using on a 35mm transfer of the film. Here is why:

1. Color. You are going to have an RGB signal to work with, which is much easier than Luma and Chroma to adjust.

2. Global motion stabilization: You are going to have the frame edges to work from for this, something that is lost on the DVD, and there will be much more accurate ways of stabilizing WITH the frame edges as a reference.

3. Local motion stabilization: this is an effect of film warp, and how the MPEG encoder handles this. You may very well have some film warp, but you won't have the MPEG effects on the film warp to worry about, so a more gentle local motion stabilization should be used if any is needed at all.

4. Jaggies - you won't have any (hopefully)

5. Sharpening - You shouldn't need this if the capture is decent

6. Subtitles - No need

7. Degrain - Okay, there's the exception to all of this. You are going to want to degrain. Admittedly, there are a lot better algorithms than what I'm using, but my script tries to do too much stuff so I've really had to compromise here. I've tried to get the most bang for the buck on this, but if you want the best degrainer, use TemporalDegrain(). Study that, as my script borrows heavily from it. It's really the best, and since you won't need to do much else, it should run at an acceptable speed.

-G

 

 

you're right, g-force,

i know that... however, in studying your script, i've learned a lot of invaluable lessons, something i couldn't force myself to do on my own... it's a great reference tool..

(also, i've got a TON of laserdisc transfers i want to start messing around with too)..

 

later

-1

[no GOUT in CED?-> GOUT CED]

Author
Time

ok,

i'm not sure if anyone else out there is checking this out or not,

 

but if you really want to start learning how to play around with avisynth, g-force has totally opened my eyes to the power behind it and its plugins/scripting... i used to do gui coding, image analysis, and web development, but all that stuff seems pretty boring compared to this, since we actually get a result that is so fun to play with...

-------------------------------------------------------------------------------------------------------------------------------------------------------------

i mentioned earlier that i tested a sample segment to see what the gforce-gout script does... well, i ran into a problem..

 

http://img70.imageshack.us/img70/9260/gforcearprobfa7.jpg

 

notice the aspect ratio of the second video? right, i forgot to take that into account..

so i went back to the script..

 

# show clips in variables clip1/clip2 on top of each other
# clip1
# clip2

clip1 = Mpeg2Source("M:\data\movie\dvd\star wars ep 4-gout\VTS_03_1.d2v")
clip2 = Mpeg2Source("M:\data\movie\dvd\star wars ep 4-gout\gout.d2v")

clip2 = clip2.BilinearResize(720,360)

StackVertical (clip1,clip2).ReduceBy2()

ShowFrameNumber(offset=9, text_color=$ff0000)

and now i have something to compare...

http://img530.imageshack.us/img530/9905/gforcearfix4363avo5.jpg

however, after watching closely and blowing it up larger, i still couldn't really see the effects... next step: overlaying the image...

 

later

-1

[no GOUT in CED?-> GOUT CED]

Author
Time

ok,

i'll take a step back... i wanted to see where the jitter and shake were coming from, so i wanted to overlay a box over both the original video and the gout version, and compare those... so i created a still image with a box surrounding the video, and then overlaid them (i had to go back later and adjust contrast, etc.)..

 

****************NOTE : IGNORE THE JPG COMPRESSION, the actual videos are much clearer ****************************

---------------------------------------------------------------------------------------------------

# overlay  clip with a box

clip1 = Mpeg2Source("M:\data\movie\dvd\star wars ep 4-gout\VTS_03_1.d2v")

subs = ImageSource("C:\data\vid\to_sort\vidcap\mpg\test01-boxbl.jpg").ConvertToRGB32

Overlay(clip1, subs, mode="Difference", opacity=0.4)

ShowFrameNumber(offset=9, text_color=$ff0000)

---------------------------------------------------------------------------------------------------

http://img92.imageshack.us/img92/1255/test01boxbltd3.jpg

----------------------------------------

http://img530.imageshack.us/img530/6474/goutbox9008bq4.jpg

---------------------------------------------------------------------------------------------------

i did the same for the gforce version...

-----------------------------------------------------

http://img517.imageshack.us/img517/367/gforcebox01blep5.jpg

-----------------------------------------------------

http://img92.imageshack.us/img92/8286/gforcetest01of3.jpg

-----------------------------------------------------

 

but you know what? i still didn't see the video go outside the frame in the original.

however, there was a ton of jitter/shake WITHIN the frame, from frame to frame, when you stepped through: in many of the shots, the static camera shots, the fx shots, everywhere... i never realized how much SHAKE there was going on!

so what can we do next?...

later

-1

[no GOUT in CED?-> GOUT CED]

Author
Time

at this point,

i wanted to overlay the videos to see what was going on... maybe that would help...

yes, it sure did! there are a lot of options for overlaying, so i tried them all, and a few were actually very cool looking... i will post some video this time too, because that's the only way to really see the effect..

 

---------------------------------------------------

basic overlaying video script

--------------------------------------------------

# overlay 2 clips
# resize the GOUT to the same size, and the reset position to match
# alter the opacity for different effects..

clip1 = Mpeg2Source("M:\data\movie\dvd\star wars ep 4-gout\VTS_03_1.d2v")

clip2 = Mpeg2Source("M:\data\movie\dvd\star wars ep 4-gout\gout.d2v")

Overlay(clip1, clip2.BilinearResize(720,360), y=60, mode="blend", opacity=0.9)

ShowFrameNumber(offset=9, text_color=$ff0000)

----------------------------------------------

i needed to make sure the two videos were lined up correctly (after correcting for aspect ratio and positioning)..

========================

http://img92.imageshack.us/img92/3007/goutoverlay7240ws0.jpg

check.... this will be our reference frame.................

-----------------------------------------

ok, now for some fun... subtracting one clip from the other shows which pixels have changed (the darker parts stayed the same)..

mode = Subtract

Add: This will add the overlay video to the base video, making the video brighter. To make this as comparable to RGB, overbright luma areas influence chroma, making it more white.
Subtract: The opposite of Add. This will make the areas darker.

------------------------------------------

http://img530.imageshack.us/img530/8866/goutsubtract7240yv3.jpg

 

========================

mode = Exclusion, very trippy... it looks like each object gets a nice mask around it, with weird inverted colors...

Exclusion: This will invert the image based on the luminosity of the overlay image. Blending with white inverts the base color values; blending with black produces no change.

------------------------------------------

http://img71.imageshack.us/img71/9508/goutexclusion7240gg1.jpg

 

=======================

mode = Difference, this was the most useful to me, to see that there were changes between the pixel values...

Difference: This will display the difference between the clip and the overlay. Note that, like Subtract, a difference of zero is displayed as grey, but with luma=128 instead of 126. If you want the pure difference, use mode="Subtract" or add ColorYUV(off_y=-128).
----------------------------------------

http://img61.imageshack.us/img61/3972/goutdifference7240ri5.jpg
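As a rough illustration of the idea (a simplification of AviSynth's actual Difference math, ignoring opacity and chroma), the mode offsets the signed per-pixel difference to mid-grey:

```python
def overlay_difference(base, overlay):
    # signed per-pixel difference offset to grey (luma 128), clipped to 8-bit range;
    # identical pixels come out mid-grey, and ColorYUV(off_y=-128) would remove the offset
    return [min(255, max(0, b - o + 128)) for b, o in zip(base, overlay)]

base    = [100, 100, 100]
overlay = [100, 90, 140]
print(overlay_difference(base, overlay))  # [128, 138, 88]
```

So anything that stayed the same shows as flat grey, while pixels the filter changed stand out brighter or darker, which is exactly why this mode is the most useful one for spotting what the script touched.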

 

i will post the videos shortly...

 

ok, now we know quantitatively what the image changes and the pixel changes are... but how can we visualize it in another way?

 

later

-1

[no GOUT in CED?-> GOUT CED]

Author
Time
 (Edited)

ok,

we couldn't use the box method, because it wasn't fine-grained enough for what we wanted, so let's improve it by using a grid:

pixel line spacing = 10 pixels

------------------------------------

http://img61.imageshack.us/img61/4546/gridbwjt0.jpg

 

now let's use the overlay function again

------------------------------------------------------

# overlay 2 clips
# using a grid, and change opacity
# alter the opacity for different effects..

clip1 = Mpeg2Source("M:\data\movie\dvd\star wars ep 4-gout\VTS_03_1.d2v")

subs = ImageSource("C:\data\vid\to_sort\vidcap\mpg\grid-bw.jpg").ConvertToRGB32

Overlay(clip1, subs, mode="Difference", opacity=0.4).crop(0, 100,0,-100)

ShowFrameNumber(offset=9, text_color=$ff0000)
=================================

http://img61.imageshack.us/img61/1912/goutgrid7240sp3.jpg

 

i also did it for the gforce version... and now you can watch a box and see if the changes within it 'shake', or watch a segment and see how much movement there is! quite a bit, THROUGHOUT THE ENTIRE FILM..

[by the way, i compared it to the SE version, and they pretty much fixed all of it there...]

 

so now we take this result, combine it with splitscreen, and finally...

--------------------------------------------------------------------------------------------------------------------

# show clips in variables clip1/clip2 on top of each other (grid versions)
# clip1
# clip2

clip1 = AviSource("M:\data\movie\dvd\star wars ep 4-gout\grid-gout.avi")
clip2 = AviSource("M:\data\movie\dvd\star wars ep 4-gout\grid-gforce.avi")

clip2 = clip2.BilinearResize(720,300)

StackVertical (clip1,clip2)

ShowFrameNumber(text_color=$ff0000)

---------------------------------------------------------------------------------

http://img515.imageshack.us/img515/804/splitgrid7240hu9.jpg

 

so, now we have a way to VISUALLY see the changes, in a much easier fashion..................

-----------------------------------------------------------------------------------

 

i will post the videos shortly...

 

next up: an in-depth look at exactly what the gforce script does (hopefully clearer to laymen like myself)...

later

-1

 

[no GOUT in CED?-> GOUT CED]

Author
Time

Major update to the script today! This is the best so far. ;)

-G

Author
Time

cool, thanks G. i really want to go back and mess with this some more, but before, on my old P4 PC, the encode time was just waaay too crazy long; now, with my new Quad Core Phenom, i am hoping it will take a lot less time.

thanks for all the updates.

 

So have you actually encoded all these for yourself yet?

Author
Time

####**********************NOTE  : THIS IS FOR THE OLDER VERSION ********************************

###_______________________________________________________________________________________

 

###*********************I'm still trying to understand it, before i move onto the new version *******************

 

#########################################
########## GOUT Filter By G-force V.1.27 ##########
#########################################

########## setup stage
SetMemoryMax(100) # <-----Play around with this for (maybe) faster encodes
sw_frame_no = 704 # <-----Enter number of first frame of "Star Wars" logo
PAL = false # <-----Set to false for NTSC, true for PAL
Mpeg2Source("M:\data\movie\dvd\star wars ep 4-gout\VTS_03_1.d2v") # <-----Set path

########## cut off some of the black bars, anti-alias, resize to 16x9 AR


#------------------ NNEDI ---------------------------------------------------------------
# NNEDI is an intra-field-only deinterlacer. It takes in a frame, throws away one field, and then
# interpolates the missing pixels using only information from the kept field. It has same-rate and
# double-rate modes, and works with YUY2 and YV12 input. NNEDI can also be used to enlarge images by
# powers of 2.

#------------------ Spline16Resize ------------------------------------------------------
#  Spline16Resize(clip clip, int target_width, int target_height, float src_left, float src_top,
# float src_width, float src_height)
# Two spline based resizers. Spline16Resize is slightly faster than Spline36Resize

 

(PAL==false) ? Crop(0,96,0,-96,align=true).NNEDI(dh=true,field=1).Spline16Resize(720,384) :
\ Crop(0,120,0,-120,align=true).NNEDI(dh=true,field=1).Spline16Resize(720,448)

########## set black level, adjust gamma, saturation/hue

#------------------ levels ------------------------------------------------------------

# Levels(clip input, int input_low, float gamma, int input_high, int output_low, int output_high,
# bool coring)

# The Levels filter adjusts brightness, contrast, and gamma. The input_low and input_high parameters
# determine what input pixel values are treated as pure black and pure white; the output_low and
# output_high parameters determine the output values corresponding to black and white; and the gamma
# parameter controls the degree of nonlinearity in the conversion. To be precise, the conversion
# function is:

# output = [(input - input_low) / (input_high - input_low)]^(1/gamma) * (output_high - output_low) +
# output_low

Levels(10,1.13,255,0,255)
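The quoted conversion is easy to check numerically. Here is a minimal Python sketch of the formula above (the clamp on out-of-range input is my addition, and the coring option is ignored):

```python
def levels(value, input_low, gamma, input_high, output_low, output_high):
    # normalize against the input range, apply gamma, rescale to the output range
    x = (value - input_low) / (input_high - input_low)
    x = max(0.0, min(1.0, x))  # clamp: out-of-range pixels clip to black/white
    return x ** (1.0 / gamma) * (output_high - output_low) + output_low

# Levels(10,1.13,255,0,255): input black point 10, gamma 1.13 brightens midtones
print(round(levels(10,  10, 1.13, 255, 0, 255)))   # 0   (input_low maps to output_low)
print(round(levels(128, 10, 1.13, 255, 0, 255)))   # 134 (midtones lifted slightly)
print(round(levels(255, 10, 1.13, 255, 0, 255)))   # 255
```

So this step crushes everything at or below luma 10 to black and uses gamma > 1 to brighten the midrange, which matches the "set black level, adjust gamma" comment above.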


#------------------ tweak ---------------------------------------------------------------
#Tweak(clip clip [, float hue] [, float sat] [, float bright] [, float cont] [, bool coring] [, int
# startHue] [, int endHue] [, int maxSat] [, int minSat] [, int interp])

# This function provides the means to adjust the hue, saturation, brightness, and contrast of a video
# clip. In v2.60, both the saturation and hue can be adjusted for saturations in the range [minSat,
# maxSat] and hues in the range [startHue, endHue]. interp interpolates the adjusted saturation to
# prevent banding.


Tweak(sat=1.08,hue=-4)

########## global motion stabilization stage
orig = last

#------------------ TemporalSoften --------------------------------------------------
# TemporalSoften(clip clip, int radius, int luma_threshold, int chroma_threshold [, int scenechange]  [, int mode])

# The SpatialSoften and TemporalSoften filters remove noise from a video clip by selectively blending
# pixels. These filters can work miracles, and I highly encourage you to try them. But they can also
# wipe out fine detail if set too high, so don't go overboard. And they are very slow, especially
# with a large value of radius, so don't turn them on until you've got everything else ready.

# TemporalSoften is similar to SpatialSoften, except that it looks at the same pixel in nearby
# frames, instead of nearby pixels in the same frame. All frames no more than radius away are
# examined. This filter doesn't seem to be as effective as SpatialSoften.

# TemporalSoften smoothes luma and chroma separately, but SpatialSoften smoothes only if both luma
# and chroma have passed the threshold.

# The SpatialSoften filter works only with YUY2 input. You can use the ConvertToYUY2 filter if your
# input is not in YUY2 format.

# Starting from v2.5 two options are added to TemporalSoften:

# An optional mode=2 parameter: It has a new and better way of blending frame and provides better
# quality. It is also much faster. mode=1 is default operation, and works as always.
# Added scenechange=n parameter: Using this parameter will avoid blending across scene changes. 'n'
# defines the maximum average pixel change between frames. Good values for 'n' are between 5 and 30.
# Requires ISSE.


temp = TemporalSoften(7,255,255,25,2)
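As a toy illustration of the idea described in the comments above (a one-dimensional, single-pixel sketch, not the real filter; i've replaced the scenechange average-change test with a simple per-pixel threshold):

```python
def temporal_soften(frames, radius, threshold):
    # average each value with the same "pixel" in nearby frames, but only
    # include neighbours within `threshold`, so hard cuts are not blended away
    out = []
    for i, cur in enumerate(frames):
        window = frames[max(0, i - radius):i + radius + 1]
        near = [v for v in window if abs(v - cur) <= threshold]
        out.append(sum(near) / len(near))
    return out

# a noisy but static shot around luma 100, then a cut to a shot at luma 200
frames = [98, 102, 100, 99, 101, 200, 201]
print(temporal_soften(frames, radius=2, threshold=10))
# the first five values smooth toward 100, while the cut to 200 stays sharp
```

This is the basic trade-off the docs warn about: a bigger radius or threshold smooths more noise but risks blending real detail and motion.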


#------------------ RestoreMotionBlocks----------------------------------------------------

# RemoveDirt is a temporal cleaner for Avisynth 2.5x. It has now become an AVS script function, which
# involves RestoreMotionBlocks and various filters from the RemoveGrain package.

# RestoreMotionBlocks(filtered, restore, neighbour, neighbour2, alternative, gmthreshold, mthreshold,
# pthreshold, cthreshold, noise, noisy, dist, tolerance, dmode, grey, show, debug)
# http://www.removedirt.de.tf/


# SCSelect
# SCSelect is a special filter, which distinguishes between scene begins, scene ends and global
# motion. The output of SCClense is used as an "alternative" clip for RestoreMotionBlocks. It can
# hardly be used for other purposes, because it can only make proper decisions if there are a lot of
# motion blocks. Only if the percentage of motion blocks is > gmthreshold, then RestoreMotionBlocks
# chooses a frame from the clip specified with the alternative variable and then there are always a
# lot of motion blocks, if gmthreshold is not too small (gmthreshold >= 30 should be sufficiently
# large). SCSelect yields nonsense results if there are only few motion blocks. SCSelect is used as
# follows:

# SCSelect(clip input, clip scene_begin, clip scene_end, clip global_motion, float dfactor, bool
# debug, bool planar)

# Let us discuss this script in some detail. Firstly, we apply the brutal temporal clenser from the
# RemoveGrain package to obtain the clip "clensed". Then we use the filters ForwardClense and
# BackwardClense from RemoveGrain to construct the clip "alt", which is then used as the
# "alternative" variable in the subsequent RestoreMotionBlocks. While Clense does a lot of cleaning
# it certainly creates a lot of artifacts in motion areas. In the script function RemoveDust, the
# clip "clensed" is repaired entirely by the Repair filter from the RemoveGrain package. In
# RemoveDirt this repair is only made in motion areas. The static areas are not repaired. Since the
# clip is used only for restoring motion areas, we can use the much stronger Repair mode 16 (in
# RemoveDust usually modes 2 or 5 are used), which restores thin lines destroyed by clense. Finally,
# because there may be some residue left over from temporal cleaning, especially when grain is dense,
# we use the spatial denoiser RemoveGrain(mode=17) to remove these remaining dirt or grain traces.

 


RestoreMotionBlocks(temp,orig,neighbour=temp,alternative=orig,gmthreshold=100,
\ dist=1,dmode=0,debug=false,noise=10,noisy=15,grey=false,show=false)

#------------------  interleave -------------------------------------------------------
# Interleave(clip1, clip2 [, ...])

# Interleave interleaves frames from several clips on a frame-by-frame basis, so for example if you
# give three arguments, the first three frames of the output video are the first frames of the three
# source clips, the next three frames are the second frames of the source clips, and so on.


Interleave(last,orig)
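The frame ordering produced by Interleave can be sketched in Python (the function name and the list-of-strings "frames" are purely illustrative, not AviSynth API):

```python
def interleave(*clips):
    """Frame-by-frame interleave, mimicking AviSynth's Interleave().

    Output frame k comes from clip (k % n), frame (k // n).
    Clips are assumed to be equal length here, for simplicity.
    """
    n = len(clips)
    return [clips[k % n][k // n] for k in range(n * len(clips[0]))]

# The script's Interleave(last, orig) pairs each filtered frame with
# the matching original frame:
filtered = ["f0", "f1", "f2"]
original = ["o0", "o1", "o2"]
print(interleave(filtered, original))
# -> ['f0', 'o0', 'f1', 'o1', 'f2', 'o2']
```

This alternating stream is what the later SelectEvery(2,0) call unpicks again.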

#------------------ DePanEstimate -------------------------------------------------------

# DePanEstimate function
# This function uses a phase-shift method (via fast Fourier transform) for global motion estimation.
# It uses a central region of every frame (or field) as the FFT window to find the inter-frame
# correlation and to calculate the most appropriate vertical and horizontal shift values,
# which fit the current frame to the previous one. A relative correlation parameter is used as a
# trust measure and for scene change detection. In zoom mode, the plugin uses left and right
# sub-windows to estimate both displacement and zoom. Output is a special service clip with motion
# data coded into its frames, and an optional log file.


# Function call:

# DePanEstimate ( clip, int range, float trust, int winx, int winy, int dxmax, int dymax,
# float zoommax, bool improve, float stab, float pixaspect, bool info, string log, bool debug,
# bool show, string extlog, bool fftw)

# Notes: the trust parameter defines a threshold value of inter-frame similarity (correlation),
# i.e. how similar the current frame must be to the previous frame within the same scene.
# DePanEstimate detects a scene change at the current frame if the current correlation value is
# below the threshold. Set it lower to prevent false scene change detection, and set it higher to
# avoid missing real scene changes (with shaking). The default value is good for most video, but
# you can test it in info mode.


mdata = DePanEstimate(last,range=1,trust=0,dxmax=1,dymax=0)

#------------------  DePan -------------------------------------------------

# DePan (client) - make full or partial global motion compensation
# DePanInterleave (client) - generates long motion compensated interleaved clip
# DePanStabilize (client) - stabilizes motion
# DePanScenes(client) - scene change indication

# DePan function
# It generates the clip with motion compensated frames, using motion data previously calculated
# by DePanEstimate.

# Function call:
# DePan (clip, clip data, float offset, int subpixel, float pixaspect, bool matchfields,
# int mirror, int blur, bool info, string inputlog) 

# data - special service clip with coded motion data, produced by DePanEstimate

# offset - value of compensation offset for all input frames (fields) (from - 10.0 to 10.0,
# default =0)
#    = 0 is null transform.
#    = -1.0 is full backward motion compensation of next frame (field) to current,

# Note: The offset parameter of DePan is an extended version of the delta parameter of GenMotion.

DePan(last,data=mdata,offset=-1)

#------------------ SelectEvery -------------------------------------------

# SelectEvery(clip clip, int step-size, int offset1 [, int offset2, ...])
# SelectEvery is a generalization of filters like SelectEven and Pulldown.
# SelectEvery(clip, 2, 0) # identical to SelectEven(clip)

SelectEvery(2,0)
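SelectEvery's group/offset selection can be sketched in Python (illustrative names, with clips modeled as lists of frame labels):

```python
def select_every(clip, step, *offsets):
    # Mimics SelectEvery: from each group of `step` frames, keep the
    # frames at the given offsets within the group.
    return [clip[i + off]
            for i in range(0, len(clip), step)
            for off in offsets
            if i + off < len(clip)]

# After Interleave(last, orig) the stream alternates filtered/original
# frames; SelectEvery(2, 0) takes the filtered frames back out:
mixed = ["f0", "o0", "f1", "o1", "f2", "o2"]
assert select_every(mixed, 2, 0) == ["f0", "f1", "f2"]

# The later SelectEvery(3, 1) picks the middle frame of each
# (backward-compensated, source, forward-compensated) triple:
triples = ["bw0", "src0", "fw0", "bw1", "src1", "fw1"]
assert select_every(triples, 3, 1) == ["src0", "src1"]
```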

########## local motion stabilization / degrain stage
unfilter = last

#------------------ VagueDenoiser ------------------------------------------


# This is a wavelet-based denoiser.
# Basically, it transforms each frame of the video input into the wavelet domain
# using various wavelet filters, then applies some filtering to the obtained coefficients,
# and finally performs an inverse wavelet transform. Due to wavelet properties, this should give a
# nicely smoothed result with reduced noise, without blurring picture features.
# The wavelet transform can be done on each plane of the colorspace.
# This filter uses wavelets from Brislawn's tutorial.

# Syntax of VagueDenoiser filter
# VagueDenoiser (clip, int "threshold", int "method", int "nsteps", float "chromaT",
# bool "debug", bool "interlaced", int "wavelet", bool "Wiener", float "wratio",
# integer "percent", clip "auxclip")

# threshold: a float (default=0)
#     Filtering strength. The higher, the more filtered the clip will be. Hard thresholding can
#     use a higher threshold than soft thresholding before the clip looks overfiltered.
#     If set < 0, luminosity denoising is disabled.
#     If set = 0, the threshold is estimated automatically (adaptive).

# method:
#    The filtering method the filter will use.
#    3 : Qian's (garrote) thresholding. Scales or nullifies coefficients
#        - intermediary between (more) soft and (less) hard thresholding.

# nsteps: (default=4)
#    Number of times the wavelet will decompose the picture.
#    Higher values can be slower, but results will be better.
#    Suggested values are 3-6.
#    The picture can't be decomposed beyond a particular point (typically
#    8 for a 640x480 frame, since 2^9 = 512 > 480).

# chromaT: a float (default=-1)
#    Threshold value for chroma filtering. It is slower but gives better results.
#    If set < 0, chroma denoising is disabled (default mode).
#    If set = 0, the threshold is estimated automatically (adaptive).

source = last.VagueDenoiser(0,3,4,-1)
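The hard, soft, and garrote thresholding rules mentioned above can be sketched per wavelet coefficient. This is a minimal illustration of the math, not the filter's implementation; the function names are invented, and the real filter applies these rules to the wavelet coefficients of each frame:

```python
def hard_threshold(c, t):
    # Keep the coefficient if its magnitude exceeds t, else zero it.
    return c if abs(c) > t else 0.0

def soft_threshold(c, t):
    # Shrink every surviving coefficient toward zero by t.
    return (abs(c) - t) * (1 if c > 0 else -1) if abs(c) > t else 0.0

def garrote_threshold(c, t):
    # Qian's (garrote) rule: scale by (1 - t^2/c^2), i.e. c - t^2/c.
    # Intermediate between soft (more shrinkage) and hard (none).
    return c - (t * t) / c if abs(c) > t else 0.0

# For c = 5, t = 2: hard keeps 5, soft gives 3, garrote gives 4.2.
```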

#------------------ Repair ----------------------------------------------------------

# http://www.removegrain.de.tf/

# It is generated from the same source code as RemoveGrain, but serves a very different purpose.
# Instead of removing grain, Repair should remove artifacts introduced by previous filters.
# It does so by comparing the video before these filters were applied with the video after
# their application. Thus Repair requires two clips as input: the first clip is the filtered
# one and the second the unfiltered one. Otherwise Repair uses the same variables as
# RemoveGrain, but modes 11-17, which do not make sense for Repair, are disabled. Thus there are
# currently 10 Repair modes.

# RemoveGrain(input, mode=n) = Repair(input, input, n+1) for n=0,1,2,3.
# The higher the mode (from 1 to 4 only) the stronger the artifact removal. With mode=4,
# Repair(filtered, original, mode=4) is fairly close to RemoveGrain(original, mode= 4, limit= 0).

Repair(temp,source,mode=9)

#------------------ MVAnalyse --------------------------------------------------

# Filters that use motion vectors have common parameters: the scene-change detection
# thresholds, and the mmx / isse flags. They also use one or several vector streams,
# which are produced by MVAnalyse.

# MVAnalyse (clip, int "blksize", int "blksizeV", int "pel", int "level",
# int "search", int "searchparam", int "pelsearch", bool "isb", int "lambda", bool "chroma",
# int "delta", bool "truemotion", int "lsad", int "pnew", int "plevel", bool "global",
# int "overlap", int "overlapV", string "outfile", int "sharp", clip "pelclip", int "dct",
# int "divide", int "idx")

# Estimates motion by a block-matching method and produces a special output clip with motion
# vector data (used by other functions).
# Some hierarchical multi-level search methods are implemented (from coarse image scale to finest).
# The function uses the zero vector and the vectors of neighboring blocks as predictors for the
# current block. First the difference (SAD) is estimated for the predictors; then the candidate
# vector is changed by some value in some direction, the SAD is estimated again, and so on. The
# accepted new vector is the vector with the minimal SAD value (with some penalty for motion
# coherence).
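The "pick the candidate with minimal SAD" core of block matching can be sketched in Python. This is a toy single-level exhaustive search over tiny integer frames; the real MVAnalyse is hierarchical and predictor-based, and all names here are illustrative:

```python
def sad(block_a, block_b):
    # Sum of absolute differences between two equal-sized blocks.
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def best_vector(ref, cur, bx, by, bs, radius):
    # Exhaustive search for the block at (bx, by) of size bs:
    # try every (dx, dy) within radius and keep the minimal-SAD match.
    # Caller must keep all candidate windows inside the frame.
    cur_block = [row[bx:bx + bs] for row in cur[by:by + bs]]
    best = None
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            cand = [row[bx + dx:bx + dx + bs]
                    for row in ref[by + dy:by + dy + bs]]
            score = sad(cur_block, cand)
            if best is None or score < best[0]:
                best = (score, dx, dy)
    return best[1], best[2]

# A bright dot that moved one pixel right is found at offset (-1, 0)
# back into the reference frame:
ref = [[0] * 8 for _ in range(8)]
ref[3][3] = 255
cur = [[0] * 8 for _ in range(8)]
cur[3][4] = 255
assert best_vector(ref, cur, 2, 2, 4, 1) == (-1, 0)
```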

# isb : chooses between a forward search (motion from the previous frame to the current one)
# for isb=false and a backward search (motion from the following frame to the current one)
# for isb=true (isb stands for "IS Backward"; it is implemented and named exactly as written here,
# do not ask). Default: isb=false.

# delta : sets the frame interval between the reference frame and the current frame.
# By default it is 1, which means that motion vectors are searched between the current
# frame and the previous (or next) frame. Setting it to 2 will search for motion vectors
# between frame n and n-2 or n+2 (depending on the isb setting).

# pel : the accuracy of the motion estimation. The value can only be 1, 2 or 4. 1 means
# pixel precision, 2 means half-pixel precision, and 4 means quarter-pixel precision,
# produced by spatial interpolation (better but slower). Default is 2 since v1.4.10.

# sharp: subpixel interpolation method for pel=2.
# Use 0 for soft interpolation (bilinear), 1 for bicubic interpolation (4 tap Catmull-Rom),
# 2 for sharper Wiener interpolation (6 tap, similar to Lanczos).
# Default is 2.

# idx: index of the clip. It allows the multilevel data computed for the frames of the clip to be
# internally reused (shared) by several instances of the filter (or by other functions) for faster
# processing and lower memory usage. It is especially useful for pel=2, to avoid computing the
# interpolation twice when doing a forward and backward search on the same clip. If you use idx,
# you should always use positive values, and you should only use the same value for filters which
# work on the same clip (otherwise the analysis won't work properly). By default, a unique
# negative number is given to each filter instance (which will create its own multilevel data).

# overlapv: vertical block overlap value. Default is equal to horizontal.
# Must be even for YV12 and less than block size.

bw_vec1 = last.MVAnalyse(isb=true, delta=1,pel=2,sharp=1,overlap=4,idx=1)
fw_vec1 = last.MVAnalyse(isb=false,delta=1,pel=2,sharp=1,overlap=4,idx=1)

#------------------ MVCompensate -------------------------------------------------------

# MVCompensate (clip, clip "vectors", bool "scbehavior", int "mode", int "thSAD", bool "fields",
# clip "pelclip", int "idx")

# Does a full motion compensation of the frame: the blocks pointed to by the motion vectors in
# the reference frame are moved along the vectors to reach their places in the current frame.

# thSAD is the SAD threshold for safe (dummy) compensation. If a block's SAD is above thSAD,
# the block is considered bad, and the source block is used instead of the compensated block.
# Default is 10000 (practically disabled).

# idx works the same way as idx in MVAnalyse.

bw1 = source.MVCompensate(bw_vec1,idx=2,thSAD=400)
fw1 = source.MVCompensate(fw_vec1,idx=2,thSAD=400)
Interleave(bw1,source,fw1)

#------------------ Clense -----------------------------------------------------
# Clense: a temporal median filter

Clense(reduceflicker=false).TemporalSoften(1,255,255,25,2)
SelectEvery(3,1)
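Clense's per-pixel temporal median can be sketched in Python (illustrative function name). In this script it runs on the interleaved (bw1, source, fw1) stream, so each source frame's temporal neighbours are its own motion-compensated versions:

```python
def clense_pixel(prev, cur, nxt):
    # Per-pixel temporal median of three frames: the core of Clense.
    return sorted((prev, cur, nxt))[1]

# A dust speck that appears in only one frame is rejected in favour of
# the (motion-compensated) neighbours:
assert clense_pixel(10, 200, 12) == 12
# A value consistent with its neighbours is kept:
assert clense_pixel(10, 11, 12) == 11
```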

########## temporal min/max sharpening stage

#------------------  MT_Logic ----------------------------------------------------

# http://manao4.free.fr/mt_masktools.html
# mt_logic : clip clip1, clip clip2, string mode("and")


# Applies the function defined by mode to clip1 and clip2.
# Possible values for mode are:
# "and" : does a binary "and" on each pair of pixels (11 & 5 is computed by converting them to
# binary and AND-ing the bits: 11 = 1011, 5 = 101, 11 & 5 = 1).
# "or" : does a binary "or" on each pair of pixels (11 | 5 = 1011 | 101 = 1111 = 15).
# "xor" : does a binary "xor" on each pair of pixels (11 ^ 5 = 1011 ^ 101 = 1110 = 14).
# "andn" : does a binary "and not" on each pair of pixels (11 & ~5 = 1011 & ~101 = 1011 & 11111010
# = 1010 = 10).
# "min" : gives the minimum of each pair of pixels.
# "max" : gives the maximum of each pair of pixels.


pmax = source.MT_Logic(bw1,"max").MT_Logic(fw1,"max")
pmin = source.MT_Logic(bw1,"min").MT_Logic(fw1,"min")
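The per-pixel modes, including the "min"/"max" used above to build the pmin/pmax temporal envelope, can be sketched in Python (illustrative function name, 8-bit pixel values):

```python
def mt_logic_pixel(a, b, mode):
    # Per-pixel version of mt_logic's modes on 8-bit values.
    ops = {
        "and":  a & b,
        "or":   a | b,
        "xor":  a ^ b,
        "andn": a & ~b & 0xFF,   # "and not", kept in 8-bit range
        "min":  min(a, b),
        "max":  max(a, b),
    }
    return ops[mode]

# The worked examples from the documentation above:
assert mt_logic_pixel(11, 5, "and") == 1
assert mt_logic_pixel(11, 5, "or") == 15
assert mt_logic_pixel(11, 5, "xor") == 14
assert mt_logic_pixel(11, 5, "andn") == 10
```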

#------------------ MT_Lutxy -------------------------------------------------------

# mt_lutxy : clip clip1, clip clip2, string expr("x"), string yexpr("x"), string uexpr("x"),
# string vexpr("x")


# Applies a two-parameter function defined by expr to all the pixels.
# The function is written in reverse Polish notation.
# If yexpr, uexpr or vexpr isn't defined, expr is used instead.


MT_lutxy(last,last.RemoveGrain(12,-1),"x y - 1.5 * x +",U=2,V=2)
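A tiny Python RPN evaluator (illustrative; it supports only the tokens used here, whereas the real filter precomputes a lookup table and clamps to 0-255) shows what the expression above computes: with x the source pixel and y the blurred pixel, "x y - 1.5 * x +" is x + 1.5*(x - blur), i.e. an unsharp-mask sharpening step:

```python
def eval_rpn(expr, x, y):
    # Minimal reverse-Polish evaluator for the subset of tokens used
    # in the mt_lutxy expression above.
    stack = []
    for tok in expr.split():
        if tok == "x":
            stack.append(float(x))
        elif tok == "y":
            stack.append(float(y))
        elif tok in ("+", "-", "*", "/"):
            b, a = stack.pop(), stack.pop()
            stack.append({"+": a + b, "-": a - b,
                          "*": a * b, "/": a / b}[tok])
        else:
            stack.append(float(tok))
    return stack[0]

# Source pixel 100, blurred pixel 90: the edge is amplified by 1.5x.
assert eval_rpn("x y - 1.5 * x +", 100, 90) == 115.0
```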

#------------------ MT_CLamp -----------------------------------------------------

# mt_clamp : clip c, clip bright_limit, clip dark_limit, int overshoot(0), int undershoot(0)


# Forces the value of the first clip to be between dark_limit - undershoot and
# bright_limit + overshoot.
# Gives unwanted results if bright_limit + overshoot < dark_limit - undershoot.


MT_Clamp(pmax,pmin,1,1,U=2,V=2)
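The call above clamps the sharpened frame to the temporal min/max envelope (pmin, pmax), allowing 1 step of overshoot and undershoot. Per pixel, that is simply (illustrative function name):

```python
def mt_clamp_pixel(x, bright, dark, overshoot=0, undershoot=0):
    # Clamp x into the range [dark - undershoot, bright + overshoot].
    return max(min(x, bright + overshoot), dark - undershoot)

# Sharpening overshoots above the temporal max are pulled back:
assert mt_clamp_pixel(140, bright=120, dark=100,
                      overshoot=1, undershoot=1) == 121
# Undershoots below the temporal min likewise:
assert mt_clamp_pixel(95, bright=120, dark=100,
                      overshoot=1, undershoot=1) == 99
# Values inside the envelope pass through unchanged:
assert mt_clamp_pixel(110, bright=120, dark=100,
                      overshoot=1, undershoot=1) == 110
```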

########## restore stars stage

#------------------ MT_Binarize ---------------------------------------------------


# mt_binarize : clip c, int threshold(128), bool upper(false)


# If upper is false, forces all values strictly over threshold to 0, and all others to 255.
# Else, forces all values strictly over threshold to 255, and all others to 0.
# upper = true is equivalent to mt_lut("x threshold > 255 0 ?"), but faster.
# upper = false is equivalent to mt_lut("x threshold > 0 255 ?"), but faster.

mask = last.MT_Binarize(threshold=20,upper=true).MT_Expand
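Per pixel, the binarize step can be sketched as follows (illustrative function name; the semantics here match how the script uses it, i.e. upper=true turns everything brighter than the threshold into a white mask, which is what isolates the stars):

```python
def mt_binarize_pixel(x, threshold=128, upper=False):
    # upper=True : pixels strictly over threshold become 255, others 0.
    # upper=False: the opposite.
    over = x > threshold
    return 255 if over == upper else 0

# The star mask above keeps everything brighter than 20:
assert mt_binarize_pixel(200, threshold=20, upper=True) == 255
assert mt_binarize_pixel(10, threshold=20, upper=True) == 0
```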

#------------------ Mask -------------------------------------------------------------

# Mask
# Mask(clip clip, mask_clip clip)

# Applies a defined alpha-mask to clip, for use with Layer, by converting mask_clip to
# greyscale and using that as the mask (the alpha channel) of RGB32.
# In this channel, black means completely transparent and white means completely opaque.


halfmask = mask.ReduceBy2

#------------------ YToUV -------------------------------------------------------------

# YToUV puts the luma channels of the two clips as U and V channels.
# Image is now twice as big, and luma is 50% grey.
# Starting from v2.51 there is an optional argument clipY which puts the luma channel of
# this clip as the Y channel.


mask = YToUV(halfmask,halfmask,mask)

#------------------ MT_Merge ---------------------------------------------------------

# mt_merge : clip clip1, clip clip2, clip mask, bool "luma"(false)


# It's the backbone of the framework. It merges two clips according to the mask.
# The bigger the mask value, the more the second clip is taken into account
# (the actual formula is y = ((256 - m) * x1 + m * x2 + 128) / 256).
# luma is a special mode where only the luma plane of the mask is used to process all three
# channels. u and v default to 2 (that way, the resulting clip contains the chroma of clip1,
# and looks right).


MT_Merge(last,unfilter.MT_Logic(last,"max",U=2,V=2),mask,U=3,V=3)
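The documented merge formula in integer arithmetic, per pixel (illustrative function name; note that because the divisor is 256 while the mask maximum is 255, a fully white mask yields almost, but not exactly, clip2):

```python
def mt_merge_pixel(x1, x2, m):
    # The documented formula: the larger the mask value m (0-255),
    # the more of the second clip shows through. Integer rounding
    # via the +128 term and division by 256.
    return ((256 - m) * x1 + m * x2 + 128) // 256

assert mt_merge_pixel(50, 200, 0) == 50     # black mask: clip1
assert mt_merge_pixel(50, 200, 255) == 199  # white mask: ~clip2 (255/256 weight)
assert mt_merge_pixel(50, 200, 128) == 125  # mid-grey mask: halfway blend
```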

########## remove sides, add borders
(PAL==false) ? Crop(8,8,-4,-12,align=true).AddBorders(6,58,6,58) :
\ Crop(4,6,-4,-8,align=true).AddBorders(4,70,4,72)

 

 

------------------------

ok, i'll work on the new version also..

 

later

-1

[no GOUT in CED?-> GOUT CED]

Author
Time
dark_jedi said:

 

So have you actually encoded all these for yourself yet?

 No, not yet. I did have a complete encoding of a previous version of the script, but the encoding settings were not really great, so I took the opportunity to make additional changes to the script. I'm still not completely happy with the leftover grain, but I just haven't found a reasonably fast and artifact-free way to get rid of it yet. (BTW, my previous script used VagueDenoiser, and that has some issues, so I removed it)

-G

Author
Time

I'm still not completely happy with the leftover grain, but I just haven't found a reasonably fast and artifact-free way to get rid of it yet. (BTW, my previous script used VagueDenoiser, and that has some issues, so I removed it)

-G

 

Maybe "Neat Video" for VirtualDub from

http://www.neatvideo.com

as a denoiser could help. It is just a suggestion.

It is normally a plugin for VirtualDub but can be loaded in Avisynth as described here

http://www.neatvideo.com/files/NVVDUG.pdf

on page 36.

But I'm not sure if the demo version can do the job. The other versions are quite expensive...

 

Author
Time

Okay, I've finally got some good degraining that I'm extremely happy with. Check out the new script!

 

-G

Author
Time
g-force said:

Okay, I've finally got some good degraining that I'm extremely happy with. Check out the new script!

 

-G

Done.

This looks much, much better than with the previous scripts. I will do some more encodings when I have more time next week, but I think that sooner or later I will have to encode the whole movies again; this is really worth it.

 

Author
Time
g-force said:

Okay, I've finally got some good degraining that I'm extremely happy with. Check out the new script!

 

-G

 

 Thanks for all of your work on this, g-force. Are there any new AVS filters needed for the new script?