####********************** NOTE: THIS IS FOR THE OLDER VERSION **********************
###_______________________________________________________________________________________
###********* I'm still trying to understand it before I move on to the new version *********
#########################################
########## GOUT Filter By G-force V.1.27 ##########
#########################################
########## setup stage
SetMemoryMax(100) # <-----Play around with this for (maybe) faster encodes
sw_frame_no = 704 # <-----Enter number of first frame of "Star Wars" logo
PAL = false # <-----Set to false for NTSC, true for PAL
Mpeg2Source("M:\data\movie\dvd\star wars ep 4-gout\VTS_03_1.d2v") # <-----Set path
########## cut off some of the black bars, anti-alias, resize to 16x9 AR
#------------------ NNEDI ---------------------------------------------------------------
# NNEDI is an intra-field-only deinterlacer. It takes in a frame, throws away one field, and then
# interpolates the missing pixels using only information from the kept field. It has same-rate and
# double-rate modes, and works with YUY2 and YV12 input. NNEDI can also be used to enlarge images by
# powers of 2.
#------------------ Spline16Resize ------------------------------------------------------
# Spline16Resize(clip clip, int target_width, int target_height, float src_left, float src_top,
# float src_width, float src_height)
# Spline16Resize is one of two spline-based resizers; it is slightly faster than Spline36Resize
(PAL==false) ? Crop(0,96,0,-96,align=true).NNEDI(dh=true,field=1).Spline16Resize(720,384) :
\ Crop(0,120,0,-120,align=true).NNEDI(dh=true,field=1).Spline16Resize(720,448)
########## set black level, adjust gamma, saturation/hue
#------------------ levels ------------------------------------------------------------
# Levels(clip input, int input_low, float gamma, int input_high, int output_low, int output_high,
# bool coring)
# The Levels filter adjusts brightness, contrast, and gamma. The input_low and input_high parameters
# determine what input pixel values are treated as pure black and pure white; the output_low and
# output_high parameters determine the output values corresponding to black and white; and the gamma
# parameter controls the degree of nonlinearity in the conversion. To be precise, the conversion
# function is:
# output = [(input - input_low) / (input_high - input_low)]^(1/gamma) * (output_high - output_low) +
# output_low
Levels(10,1.13,255,0,255)
#------------------ tweak ---------------------------------------------------------------
#Tweak(clip clip [, float hue] [, float sat] [, float bright] [, float cont] [, bool coring] [, int
# startHue] [, int endHue] [, int maxSat] [, int minSat] [, int interp])
# This function provides the means to adjust the hue, saturation, brightness, and contrast of a video
# clip. In v2.60, both the saturation and hue can be adjusted for saturations in the range [minSat,
# maxSat] and hues in the range [startHue, endHue]. interp interpolates the adjusted saturation to
# prevent banding.
Tweak(sat=1.08,hue=-4)
########## global motion stabilization stage
orig = last
#------------------ TemporalSoften --------------------------------------------------
# TemporalSoften(clip clip, int radius, int luma_threshold, int chroma_threshold [, int scenechange] [, int mode])
# The SpatialSoften and TemporalSoften filters remove noise from a video clip by selectively blending
# pixels. These filters can work miracles, and I highly encourage you to try them. But they can also
# wipe out fine detail if set too high, so don't go overboard. And they are very slow, especially
# with a large value of radius, so don't turn them on until you've got everything else ready.
# TemporalSoften is similar to SpatialSoften, except that it looks at the same pixel in nearby
# frames, instead of nearby pixels in the same frame. All frames no more than radius away are
# examined. This filter doesn't seem to be as effective as SpatialSoften.
# TemporalSoften smoothes luma and chroma separately, but SpatialSoften smoothes only if both luma
# and chroma have passed the threshold.
# The SpatialSoften filter works only with YUY2 input. You can use the ConvertToYUY2 filter if your
# input is not in YUY2 format.
# Starting from v2.5 two options are added to TemporalSoften:
# An optional mode=2 parameter: It has a new and better way of blending frames and provides better
# quality. It is also much faster. mode=1 is the default operation, and works as always.
# Added scenechange=n parameter: Using this parameter will avoid blending across scene changes. 'n'
# defines the maximum average pixel change between frames. Good values for 'n' are between 5 and 30.
# Requires ISSE.
temp = TemporalSoften(7,255,255,25,2)
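As a rough per-pixel model of the idea (not the real algorithm, which works on whole planes with its own arithmetic plus the mode and scenechange handling described above), assuming a single greyscale pixel tracked across frames:

```python
# Sketch of the TemporalSoften idea: blend the same pixel across nearby
# frames, but only frames whose value is within the threshold.
def temporal_soften(frames, index, radius, threshold):
    """Average the pixel over frames within `radius`, skipping frames
    that differ from the current value by more than `threshold`."""
    current = frames[index]
    lo, hi = max(0, index - radius), min(len(frames) - 1, index + radius)
    close = [frames[i] for i in range(lo, hi + 1)
             if abs(frames[i] - current) <= threshold]
    return sum(close) / len(close)

# Flicker is smoothed; a genuine change (over threshold) is not blended in:
print(temporal_soften([100, 104, 98, 102, 200], 1, 2, 10))  # 101.0
```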
#------------------ RestoreMotionBlocks----------------------------------------------------
# RemoveDirt is a temporal cleaner for Avisynth 2.5x. It has now become an AVS script function, which
# involves RestoreMotionBlocks and various filters from the RemoveGrain package.
# RestoreMotionBlocks(filtered, restore, neighbour, neighbour2, alternative, gmthreshold, mthreshold,
# pthreshold, cthreshold, noise, noisy, dist, tolerance, dmode, grey, show, debug)
# http://www.removedirt.de.tf/
# SCSelect
# SCSelect is a special filter which distinguishes between scene beginnings, scene endings and
# global motion. Its output is used as the "alternative" clip for RestoreMotionBlocks. It can
# hardly be used for other purposes, because it can only make proper decisions if there are a lot
# of motion blocks. RestoreMotionBlocks only chooses a frame from the clip specified by the
# alternative variable if the percentage of motion blocks is > gmthreshold, and in that case there
# are always a lot of motion blocks, provided gmthreshold is not too small (gmthreshold >= 30
# should be sufficiently large). SCSelect yields nonsense results if there are only a few motion
# blocks. SCSelect is used as follows:
# SCSelect(clip input, clip scene_begin, clip scene_end, clip global_motion, float dfactor, bool
# debug, bool planar)
# Let us discuss this script in some detail. Firstly, we apply the brutal temporal clenser from the
# RemoveGrain package to obtain the clip "clensed". Then we use the filters ForwardClense and
# BackwardClense from RemoveGrain to construct the clip "alt", which is then used as the
# "alternative" variable in the subsequent RestoreMotionBlocks. While Clense does a lot of cleaning
# it certainly creates a lot of artifacts in motion areas. In the script function RemoveDust, the
# clip "clensed" is repaired entirely by the Repair filter from the RemoveGrain package. In
# RemoveDirt this repair is only made in motion areas. The static areas are not repaired. Since the
# clip is used only for restoring motion areas, we can use the much stronger Repair mode 16 (in
# RemoveDust usually modes 2 or 5 are used), which restores thin lines destroyed by Clense. Finally,
# because there may be some leftovers from temporal cleaning, especially when grain is dense, we use
# the spatial denoiser RemoveGrain(mode=17) to remove these remnants of dirt or grain.
RestoreMotionBlocks(temp,orig,neighbour=temp,alternative=orig,gmthreshold=100,
\ dist=1,dmode=0,debug=false,noise=10,noisy=15,grey=false,show=false)
#------------------ interleave -------------------------------------------------------
# Interleave(clip1, clip2 [, ...])
# Interleave interleaves frames from several clips on a frame-by-frame basis, so for example if you
# give three arguments, the first three frames of the output video are the first frames of the three
# source clips, the next three frames are the second frames of the source clips, and so on.
Interleave(last,orig)
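The frame ordering Interleave produces can be modelled with plain Python lists standing in for clips:

```python
# Interleave as a frame-by-frame round-robin over several "clips" (lists).
def interleave(*clips):
    return [frame for group in zip(*clips) for frame in group]

a = ["a0", "a1", "a2"]
b = ["b0", "b1", "b2"]
print(interleave(a, b))  # ['a0', 'b0', 'a1', 'b1', 'a2', 'b2']
```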
#------------------ DePanEstimate -------------------------------------------------------
# DePanEstimate function
# This function uses the phase-shift method (via fast Fourier transform) for global motion
# estimation. It uses some central region of every frame (or field) as an FFT window to find the
# inter-frame correlation and to calculate the most appropriate values of the vertical and
# horizontal shifts which fit the current frame to the previous one. A relative correlation
# parameter is used as a trust measure and for scene change detection. In zoom mode, the plugin
# uses left and right sub-windows to estimate both displacements and zoom. Output is a special
# service clip with motion data coded into its frames, and an optional log file.
# Function call:
# DePanEstimate ( clip, int range, float trust, int winx, int winy, int dxmax, int dymax,
# float zoommax, bool improve, float stab, float pixaspect, bool info, string log, bool debug,
# bool show, string extlog, bool fftw)
# Notes: the trust parameter defines a threshold value of inter-frame similarity (correlation).
# It defines how similar the current frame must be to the previous frame within the same scene.
# DePanEstimate detects a scene change at the current frame if the current correlation value is
# below the threshold. Set it lower to prevent false scene change detection, set it higher to
# prevent skipping of real scene changes (with shaking). The default value is good for most video,
# but you can test it in info mode.
mdata = DePanEstimate(last,range=1,trust=0,dxmax=1,dymax=0)
#------------------ DePan -------------------------------------------------
# DePan (client) - make full or partial global motion compensation
# DePanInterleave (client) - generates long motion compensated interleaved clip
# DePanStabilize (client) - stabilizes motion
# DePanScenes(client) - scene change indication
# DePan function
# It generates the clip with motion compensated frames, using motion data previously calculated
# by DePanEstimate.
# Function call:
# DePan (clip, clip data, float offset, int subpixel, float pixaspect, bool matchfields,
# int mirror, int blur, bool info, string inputlog)
# data - special service clip with coded motion data, produced by DePanEstimate
# offset - value of compensation offset for all input frames (fields) (from - 10.0 to 10.0,
# default =0)
# = 0 is null transform.
# = -1.0 is full backward motion compensation of next frame (field) to current,
# Note: The offset parameter of DePan is an extended version of the delta parameter of GenMotion.
DePan(last,data=mdata,offset=-1)
#------------------ SelectEvery -------------------------------------------
# SelectEvery(clip clip, int step-size, int offset1 [, int offset2, ...])
# SelectEvery is a generalization of filters like SelectEven and Pulldown.
# SelectEvery(clip, 2, 0) # identical to SelectEven(clip)
SelectEvery(2,0)
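SelectEvery can likewise be modelled with lists; the second example shows the pattern used later in the script, where SelectEvery(3,1) keeps the middle frame of each compensated triple:

```python
# SelectEvery as a list operation: from each group of step frames,
# keep the frames at the given offsets.
def select_every(frames, step, *offsets):
    out = []
    for base in range(0, len(frames), step):
        for off in offsets:
            if base + off < len(frames):
                out.append(frames[base + off])
    return out

frames = list(range(10))
print(select_every(frames, 2, 0))  # [0, 2, 4, 6, 8]  -- same as SelectEven
print(select_every(frames, 3, 1))  # [1, 4, 7]
```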
########## local motion stabilization / degrain stage
unfilter = last
#------------------ VagueDenoiser ------------------------------------------
# This is a wavelet-based denoiser.
# Basically, it transforms each frame from the video input into the wavelet domain,
# using various wavelet filters. Then it applies some filtering to the obtained coefficients.
# It does an inverse wavelet transform afterwards. Due to wavelet properties, it should give a
# nicely smoothed result with reduced noise, without blurring picture features.
# This wavelet transform can be done on each plane of the colorspace.
# This filter uses wavelets from Brislawn's tutorial.
# Syntax of VagueDenoiser filter
# VagueDenoiser (clip, float "threshold", int "method", int "nsteps", float "chromaT",
# bool "debug", bool "interlaced", int "wavelet", bool "Wiener", float "wratio",
# integer "percent", clip "auxclip")
# threshold: a float (default=0)
# Filtering strength. The higher, the more filtered the clip will be. Hard thresholding can
# use a higher threshold than soft thresholding before the clip looks overfiltered.
# If set < 0, then luminosity denoising will be disabled
# If set = 0, then threshold is estimated automatically (adaptive)
# method:
# The filtering method the filter will use.
# 3 : Qian's (garrote) thresholding. Scales or nullifies coefficients
# - intermediary between (more) soft and (less) hard thresholding.
# nsteps: (default=4)
# Number of times the wavelet transform will decompose the picture.
# High values can be slower but results will be better.
# Suggested values are 3-6.
# Picture can't be decomposed beyond a particular point (typically,
# 8 for a 640x480 frame - as 2^9 = 512 > 480)
# chromaT: a float (default=-1)
# Set the threshold value for chroma filtering. It is slower but gives better results.
# If set < 0, then Chroma denoising will be disabled (default mode)
# If set = 0, then threshold is estimated automatically (adaptive)
source = last.VagueDenoiser(0,3,4,-1)
#------------------ Repair ----------------------------------------------------------
# http://www.removegrain.de.tf/
# It is generated from the same source code as RemoveGrain, but serves a very different purpose.
# Instead of removing grain, Repair should remove artifacts introduced by previous filters.
# It does so by comparing the video before these filters were applied with the video after
# application of these filters. Thus Repair requires two clips as input. The first clip is the
# filtered and the second the unfiltered clip. Otherwise Repair uses the same variables as
# RemoveGrain, but modes 11-17, which do not make sense for Repair, are disabled. Thus there are
# currently 10 Repair modes.
# RemoveGrain(input, mode=n) = Repair(input, input, n+1) for n=0,1,2,3.
# The higher the mode (from 1 to 4 only) the stronger the artifact removal. With mode=4,
# Repair(filtered, original, mode=4) is fairly close to RemoveGrain(original, mode= 4, limit= 0).
Repair(temp,source,mode=9)
#------------------ MVAnalyse --------------------------------------------------
# Filters that use motion vectors have common parameters. Those are the scene-change detection
# thresholds and the mmx / isse flags. They also use one or several vector streams,
# which are produced by MVAnalyse.
# MVAnalyse (clip, int "blksize", int "blksizeV", int "pel", int "level",
# int "search", int "searchparam", int "pelsearch", bool "isb", int "lambda", bool "chroma",
# int "delta", bool "truemotion", int "lsad", int "pnew", int "plevel", bool "global",
# int "overlap", int "overlapV", string "outfile", int "sharp", clip "pelclip", int "dct",
# int "divide", int "idx")
# Estimate motion by block-matching method and produce special output clip with motion vectors data
# (used by other functions).
# Some hierarchical multi-level search methods are implemented (from coarse image scale to finest).
# The function uses the zero vector and neighbouring blocks' vectors as predictors for the current
# block. First, the difference (SAD) is estimated for the predictors; then the candidate vector is
# changed by some value in some direction, SAD is estimated again, and so on. The accepted new
# vector is the vector with the minimal SAD value (with some penalty for motion coherence).
# isb : selects between a forward search (motion from the previous frame to the current one)
# for isb=false and a backward search (motion from the following frame to the current one)
# for isb=true (isb stands for "IS Backward"; it is implemented and named exactly as written here,
# do not ask). Default isb=false.
# delta : set the frame interval between the reference frame and the current frame.
# By default, it's 1, which means that the motion vectors are searched between the current
# frame and the previous ( or next ) frame. Setting it to 2 will allow you to search mvs between
# the frame n and n-2 or n+2 ( depending on the isb setting ).
# pel : it is the accuracy of the motion estimation. Value can only be 1, 2 or 4. 1 means a
# precision to the pixel. 2 means a precision to half a pixel, 4 means a precision to quarter a
# pixel, produced by spatial interpolation (better but slower). Default is 2 since v1.4.10.
# sharp: subpixel interpolation method for pel=2.
# Use 0 for soft interpolation (bilinear), 1 for bicubic interpolation (4 tap Catmull-Rom),
# 2 for sharper Wiener interpolation (6 tap, similar to Lanczos).
# Default is 2.
# idx: index of the clip. It allows the multilevel data computed for the frames of the clip to be
# internally reused (shared) by several instances of the filter (or by other functions), for
# faster processing and lower memory usage. It is especially useful for pel=2, to avoid computing
# the interpolation twice when doing a forward & backward search on the same clip. If you use idx,
# you should always use positive values, and you should only use the same value for filters which
# work on the same clip (else the analysis won't work properly). By default, a unique
# negative number is given to each filter instance (which will create its own multilevel data).
# overlapv: vertical block overlap value. Default is equal to horizontal.
# Must be even for YV12 and less than block size.
bw_vec1 = last.MVAnalyse(isb=true, delta=1,pel=2,sharp=1,overlap=4,idx=1)
fw_vec1 = last.MVAnalyse(isb=false,delta=1,pel=2,sharp=1,overlap=4,idx=1)
#------------------ MVCompensate -------------------------------------------------------
# MVCompensate (clip, clip "vectors", bool "scbehavior", int "mode", int "thSAD", bool "fields",
# clip "pelclip", int "idx")
# Do a full motion compensation of the frame. It means that the blocks pointed by the mvs in
# the reference frame will be moved along the vectors to reach their places in the current frame.
# thSAD is SAD threshold for safe (dummy) compensation. If block SAD is above the thSAD,
# the block is bad, and we use source block instead of the compensated block.
# Default is 10000 (practically disabled).
# idx works the same way as idx in MVAnalyse.
bw1 = source.MVCompensate(bw_vec1,idx=2,thSAD=400)
fw1 = source.MVCompensate(fw_vec1,idx=2,thSAD=400)
Interleave(bw1,source,fw1)
#------------------ Clense -----------------------------------------------------
# Clense is a temporal median filter from the RemoveGrain package: each pixel is replaced by the
# median of the corresponding pixels in the previous, current and next frame.
Clense(reduceflicker=false).TemporalSoften(1,255,255,25,2)
SelectEvery(3,1)
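Per pixel, the Interleave / Clense / SelectEvery(3,1) combination above amounts to a motion-compensated temporal median, assuming Clense takes the median of the previous, current and next frame (a simplified model, not the plugin code):

```python
# Median of three values: here the backward-compensated neighbour,
# the source pixel, and the forward-compensated neighbour.
def median3(a, b, c):
    return sorted((a, b, c))[1]

# An outlier in the source is replaced when both motion-compensated
# neighbours agree it does not belong:
print(median3(100, 180, 102))  # 102
```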
########## temporal min/max sharpening stage
#------------------ MT_Logic ----------------------------------------------------
# http://manao4.free.fr/mt_masktools.html
# mt_logic : clip clip1, clip clip2, string mode("and")
# Applies the function defined by mode to clip1 and clip2.
# Possible values for mode are :
# "and" : does a binary "and" on each pair of pixels ( 11 & 5 is computed by converting them to
# binary and ANDing all the bits : 11 = 1011, 5 = 101, 11 & 5 = 1 ).
# "or" : does a binary "or" on each pair of pixels ( 11 | 5 = 1011 | 101 = 1111 = 15 ).
# "xor" : does a binary "xor" on each pair of pixels ( 11 ^ 5 = 1011 ^ 101 = 1110 = 14 ).
# "andn" : does a binary "and not" on each pair of pixels ( 11 & ~5 = 1011 & ~101 = 1011 & 11111010
# = 1010 = 10 ).
# "min" : gives the minimum of each pair of pixels.
# "max" : gives the maximum of each pair of pixels.
pmax = source.MT_Logic(bw1,"max").MT_Logic(fw1,"max")
pmin = source.MT_Logic(bw1,"min").MT_Logic(fw1,"min")
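The mode table above can be mimicked for a single pair of 8-bit values (the plugin applies this to every pixel pair):

```python
# One pixel pair through each mt_logic mode (model of the docs above).
def mt_logic(x, y, mode):
    ops = {
        "and":  x & y,
        "or":   x | y,
        "xor":  x ^ y,
        "andn": x & (~y & 0xFF),  # "and not", masked to 8 bits
        "min":  min(x, y),
        "max":  max(x, y),
    }
    return ops[mode]

print(mt_logic(11, 5, "and"))   # 1
print(mt_logic(11, 5, "or"))    # 15
print(mt_logic(11, 5, "xor"))   # 14
print(mt_logic(11, 5, "andn"))  # 10
```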
#------------------ MT_Lutxy -------------------------------------------------------
# mt_lutxy : clip clip1, clip clip2, string expr("x"), string yexpr("x"), string uexpr("x"),
# string vexpr("x")
# It applies a two-parameters function defined by expr to all the pixels.
# The function is written in reverse polish notation.
# If yexpr, uexpr or vexpr isn't defined, expr is used instead.
MT_lutxy(last,last.RemoveGrain(12,-1),"x y - 1.5 * x +",U=2,V=2)
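The RPN expression "x y - 1.5 * x +" used here reads, in infix form, (x - y) * 1.5 + x: it adds back 1.5 times the difference between each pixel (x) and its blurred counterpart (y, the RemoveGrain(12) clip), i.e. a simple unsharp mask. A per-pixel model with 8-bit clamping (an illustration, not the plugin's exact arithmetic):

```python
# Unsharp-mask form of "x y - 1.5 * x +": x is the source pixel,
# y its blurred version; the difference is amplified and added back.
def sharpen(x, y, strength=1.5):
    return max(0, min(255, round((x - y) * strength + x)))

print(sharpen(120, 110))  # 135  (edge pixel pushed brighter)
print(sharpen(100, 100))  # 100  (flat area unchanged)
```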
#------------------ MT_CLamp -----------------------------------------------------
# mt_clamp : clip c, clip bright_limit, clip dark_limit, int overshoot(0), int undershoot(0)
# Forces the value of the first clip to be between bright_limit + overshoot and dark_limit
# - undershoot.
# Gives unwanted results if bright_limit + overshoot < dark_limit - undershoot.
MT_Clamp(pmax,pmin,1,1,U=2,V=2)
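Per pixel, mt_clamp reduces to the following (a model of the description above; in the script the sharpened clip is clamped between the temporal max and min of the motion-compensated neighbours, with 1 of slack each way):

```python
# Limit x to [dark_limit - undershoot, bright_limit + overshoot].
def mt_clamp(x, bright_limit, dark_limit, overshoot=0, undershoot=0):
    return max(dark_limit - undershoot, min(bright_limit + overshoot, x))

print(mt_clamp(140, 130, 110, 1, 1))  # 131  (sharpening overshoot limited)
print(mt_clamp(120, 130, 110, 1, 1))  # 120  (in range: untouched)
```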
########## restore stars stage
#------------------ MT_Binarize ---------------------------------------------------
# mt_binarize : clip c, int threshold(128), bool upper(false)
# If upper is false, forces all values strictly over threshold to 255, and all others to 0.
# Else, forces all values strictly over threshold to 0, and all others to 255.
# upper = true is equivalent to mt_lut("x threshold > 0 255 ?"), but faster.
# upper = false is equivalent to mt_lut("x threshold > 255 0 ?"), but faster.
mask = last.MT_Binarize(threshold=20,upper=true).MT_Expand
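A per-pixel model matching the mt_lut equivalences quoted above (illustration only): with upper=true, as used here, pixels over the threshold go to 0 and the dark sky goes to 255, so the expanded mask selects the dark regions where the stars are restored.

```python
# Binarize one pixel against a threshold (model of the mt_lut lines above).
def mt_binarize(x, threshold=128, upper=False):
    over = x > threshold
    if upper:
        return 0 if over else 255    # "x threshold > 0 255 ?"
    return 255 if over else 0        # "x threshold > 255 0 ?"

print(mt_binarize(25, threshold=20, upper=True))  # 0   (bright pixel)
print(mt_binarize(10, threshold=20, upper=True))  # 255 (dark sky)
```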
#------------------ Mask -------------------------------------------------------------
# Mask
# Mask(clip clip, mask_clip clip)
# Applies a defined alpha-mask to clip, for use with Layer, by converting mask_clip to
# greyscale and using that for the mask (the alpha-channel) of RGB32.
# In this channel, "black" means completely transparent and white means completely opaque.
halfmask = mask.ReduceBy2
#------------------ YToUV -------------------------------------------------------------
# YToUV puts the luma channels of the two clips as U and V channels.
# The image is now twice as big, and its luma is 50% grey.
# Starting from v2.51 there is an optional argument clipY which puts the luma channel of
# this clip as the Y channel.
mask = YToUV(halfmask,halfmask,mask)
#------------------ MT_Merge ---------------------------------------------------------
# mt_merge : clip clip1, clip clip2, clip mask, bool "luma"(false)
# It's the backbone of the framework. It merges two clips according to the mask.
# The bigger the mask value, the more the second clip will be taken into account
# ( the actual formula is y = ((256 - m) * x1 + m * x2 + 128) / 256 )
# luma is a special mode, where only the luma plane of the mask is used to process all three channels.
# u and v are defaulted to 2 (that way, the resulting clip contains the chroma of clip1,
# and looks right).
MT_Merge(last,unfilter.MT_Logic(last,"max",U=2,V=2),mask,U=3,V=3)
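The blend formula quoted above can be checked per pixel (a Python model, with integer division standing in for the plugin's fixed-point arithmetic):

```python
# mt_merge blend: the larger the mask value m, the more of x2 shows through.
def mt_merge(x1, x2, m):
    return ((256 - m) * x1 + m * x2 + 128) // 256

print(mt_merge(50, 200, 0))    # 50   (mask black: clip1 only)
print(mt_merge(50, 200, 255))  # 199  (mask white: almost all clip2)
print(mt_merge(50, 200, 128))  # 125  (half-way blend)
```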
########## remove sides, add borders
(PAL==false) ? Crop(8,8,-4,-12,align=true).AddBorders(6,58,6,58) :
\ Crop(4,6,-4,-8,align=true).AddBorders(4,70,4,72)
------------------------
ok, i'll work on the new version also..
later