
althor1138
User Group: Members
Join date: 12-Feb-2011
Last activity: 20-Aug-2023
Posts: 637

Post History

Post #500874
Topic: circle/clock wipe functions in avisynth

I was having a hard time finding anything like this, so I wrote my first 2 functions. I made them specifically to repair the 4-eyed stormtrooper scene in my hybrid script. The frame length of the clips should probably be divisible by 4, but they seem to work with any number. Mileage may vary.

**EDIT**

I re-wrote clockwipe to use duration instead of a startframe and endframe.

 

Function circlewipe(clip bottom, clip top, int "Duration", float "cirx", float "ciry", float "startsize", float "endsize", float "sharpness")
{
    #Set Defaults
    Duration=Default(Duration,24)
    width=bottom.width()
    height=bottom.height()
    cirx=Default(cirx,width/2)
    ciry=Default(ciry,height/2)
    startsize=Default(startsize,70)
    endsize=Default(endsize,9000)
    sharpness=Default(sharpness,27)
    
    #Make Mask and Overlay it onto clips
    animatecircle=Animate(top,0,Duration,"gramama",1,1,cirx,ciry,startsize,1,1,cirx,ciry,endsize).spline64resize(width/10,height/10)\
    .tweak(cont=sharpness).blur(1.58).blur(1.58).spline64resize(width,height)
    circlemask=mask(top.converttorgb32(),animatecircle.converttorgb32())
    circlemaskoverlay=overlay(top,bottom,mask=circlemask.showalpha(),opacity=1)
    return(circlemaskoverlay)
    
}

 

Function clockwipe(clip bottom, clip top, int "Duration", float "blurriness")
{
    #Set Defaults (so omitted arguments don't error out)
    Duration=Default(Duration,24)
    blurriness=Default(blurriness,1.0)
    width=bottom.width()
    height=bottom.height()
    wb=blankclip(bottom,duration,color=$FFFFFF)
    bb=blankclip(bottom,duration,color=$000000).spline64resize(width/2,height/2)
    hs=hshear(wb,0,$000000,0,duration/4,90)
    vs=vshear(wb,0,$000000,0,duration/4,90)
    topright=hs.crop(0,0,-width/2,-height/2).trim(0,duration/4-1) ++ bb
    bottomright=blankclip(topright,length=duration/4,color=$FFFFFF) ++ vs.crop(width/2,0,0,-height/2).trim(0,duration/4-1) ++ bb
    bottomleft=blankclip(topright,length=duration/2,color=$FFFFFF) ++ hs.crop(width/2,height/2,0,0).trim(0,duration/4-1) ++ bb
    topleft=blankclip(topright,length=duration/4+duration/2,color=$FFFFFF) ++ vs.crop(0,height/2,-width/2,0).trim(0,duration/4-1) ++ bb
    sh1=stackhorizontal(topleft,topright)
    sh2=stackhorizontal(bottomleft,bottomright)
    sv=stackvertical(sh1,sh2).trim(1,duration).spline64resize(width/10,height/10)\
    .blur(blurriness).blur(blurriness).blur(blurriness).blur(blurriness).blur(blurriness)\
    .spline64resize(width,height)
    clockmask=mask(top.converttorgb32(),sv.converttorgb32())
    clockoverlay=overlay(top,bottom,mask=clockmask.showalpha(),opacity=1)
    
    return(clockoverlay)
}

 

Post #497441
Topic: Yet another preservation, Star Wars Trilogy: Throwback Edition (* unfinished project *)

Yes, you are right. The DVD contains the progressive frames, and pulldown flags are applied to make the frame rate appear on the television as 29.97 fps, just like the CAV LaserDiscs. At least, this is how I interpret it.

The important thing to know is that the DVDs, the Definitive Collection LaserDiscs, and the Faces LaserDiscs all originate from the same 35mm film, which had the DVNR applied before it was struck to film (as far as I know). This 35mm print of the GOUT films was blurred before it was ever transferred to LaserDisc or DVD.

Post #497430
Topic: Yet another preservation, Star Wars Trilogy: Throwback Edition (* unfinished project *)

rmclain, the framerate on the DVDs is 29.97. Somewhere during the rip process you have applied an inverse telecine, reducing the framerate to 23.976. The GOUT DVDs were created from the same 35mm film print that was used for the Definitive Collection LaserDiscs (as far as I know).

The film was scanned via a telecine machine at 24 fps, because that is the rate film is shot at. On the Definitive Collection LaserDiscs the frames are stored at 24 fps, and a pulldown flag on the disc signals the player to apply a pulldown, making the output to the TV 29.97 fps. On the Faces LaserDiscs the pulldown is already encoded onto the disc, and the player just assumes that it is 29.97 fps.

On the GOUT DVDs the film was rescanned and digitally transferred to DVD at 29.97 fps with the pulldown already applied (as far as I know).

The motion problems come from the DVNR that was applied to the 35mm film print before it was ever transferred to LaserDisc or DVD.
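To make the rate arithmetic above concrete, here is a small Python sketch (not part of any script in this thread, purely illustrative) of how 2:3 pulldown turns 23.976 fps film into 29.97 fps video: every 4 film frames are held for alternating 2 and 3 fields, giving 10 fields, i.e. 5 interlaced video frames.

```python
from fractions import Fraction

FILM_RATE = Fraction(24000, 1001)  # ~23.976 fps (NTSC film rate)

def pulldown_fields(frames):
    """2:3 pulldown: alternate film frames are held for 2 and 3 fields."""
    fields = []
    for i, frame in enumerate(frames):
        fields.extend([frame] * (2 if i % 2 == 0 else 3))
    return fields

# Every 4 film frames become 10 fields = 5 interlaced video frames.
film = [0, 1, 2, 3]
fields = pulldown_fields(film)
assert len(fields) == 10

# Scale the frame rate by 5 video frames per 4 film frames.
video_rate = FILM_RATE * Fraction(len(fields) // 2, len(film))
assert video_rate == Fraction(30000, 1001)  # ~29.97 fps
```

Inverse telecine simply undoes this expansion, which is why a rip that applied it ends up back at 23.976.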

Post #497401
Topic: 2006 SE/GOUT hybrid script project - UPDATE: 05-24-11 see 1st & 2nd post...

Thanks for the info, Darth Mallwalker.

RU.08, I appreciate anybody who wants to help. I hope this info from Darth Mallwalker will give you what you need. I know Pittrek had done a PAL version of the script, and he said that he had a problem with the move along sequence in Mos Eisley. I've double-checked it on the NTSC side of things and it is correct.

As far as the AviSynth plugins go, I believe most of them are core filters. The rest should be available for download on the AviSynth wiki page under external filters. I believe the latest versions should work just fine.

Right now I'm going through the script and trying to figure out which scenes should use the weighted average and which should use the chroma copy. I prefer the weighted average method as long as it works, because I don't have to worry about noise or bad alignment. When this is finished I'm gonna try to replace more of the wipes. It should be pretty close to finished after this. I'm always open to ideas to make it even better, though.

Post #484287
Topic: Puggo Strikes Back! (Released)

OK, I've edited the script in post 139 to incorporate the mask/overlay method. You should be able to just open the second part of the script and try it directly. The overlay in the variable avgcap1cap2 is set to mode=blend. I'm not sure that is best, so you can try changing it to luma as well. I don't think it will make that much of a difference, though.

*EDIT*

If it's mixing the wrong parts of the images just let me know and I'll switch things around in the script.

Post #484268
Topic: Puggo Strikes Back! (Released)

Puggo - Jar Jar's Yoda said:

That's a promising idea.  I was wondering if there was a way of taking the bright clip and chopping out all information above 128 luma, taking the dark clip and chopping out all information above 128, and then simply adding them together.

Or maybe something slightly more sophisticated would be to do the above on the extremes (say, above 192 and below 64), and blending the center range.

Of course, there might be some strange effects I haven't considered.  If there is a way of "chopping" above/below a certain luma with some small degree of fuzziness, that would probably reduce the possibility of artifacts.

I'm not positive, but I believe this script is doing that. I create a mask of the brighter clip and then invert it so that only the darker areas get overlaid onto the darker clip. This leaves the bright areas of the dark clip untouched, while passing the dark areas of the bright clip onto the corresponding areas of the dark clip. If I understand correctly, grey pixels in a mask allow the pixels from the image to pass through with an opacity based on how close they are to white. This should mean that the middle ranges are a blend of both. I've been checking this method using the GOUT and a bright version of the GOUT, and I've not seen any artifacting, spatially or temporally.
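The grey-mask behaviour described above can be sketched per pixel. This is a Python illustration of the idea (hypothetical pixel values, not AviSynth code): white in the mask passes the top clip through, black keeps the bottom clip, and grey mixes the two in proportion.

```python
def masked_blend(bottom, top, mask):
    """Per-pixel masked overlay: a white mask value (255) passes the top
    clip's pixel, black (0) keeps the bottom clip's pixel, and grey
    blends the two proportionally to how close the mask is to white."""
    a = mask / 255.0
    return round(bottom * (1 - a) + top * a)

# Black mask: bottom pixel untouched.
assert masked_blend(bottom=40, top=200, mask=0) == 40
# White mask: top pixel passes through fully.
assert masked_blend(bottom=40, top=200, mask=255) == 200
# Mid-grey mask: roughly an even blend of the two sources.
assert masked_blend(bottom=40, top=200, mask=128) == 120
```

So the middle luma ranges really do end up as a weighted mix of both captures, which matches what the script is doing with the inverted brightness mask.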

Post #484258
Topic: Puggo Strikes Back! (Released)

Puggo - Jar Jar's Yoda said:

 If there were a way to have it choose pixels based on which pixel is less extreme (that is, pick the pixel closer to the midpoint, which should use the lighter source in the dark areas, and the darker source in the light areas - rather than averaging them), then it could probably be used everywhere.  Hmm, I wonder if anyone has ever written a script to do that?

I actually think I've found a way to do exactly this in AviSynth using only the Mask, ShowAlpha, and Invert filters. Here are some pictures to demonstrate. It is an extreme comparison, because I brightened one image considerably just to demonstrate what is happening.

Normal frame

Masked bright frame overlaid on top of normal

Bright frame

 

The image doesn't look great because I way overbrightened the bright frame to intentionally blow out the whites, to show that it successfully brightens only the dark areas from the normal pic.

If you are interested let me know.  For now though,  I must do some housework or the wife will kill me when she comes home.

Post #484139
Topic: Puggo Strikes Back! (Released)

Puggo - Jar Jar's Yoda said:

althor1138 said:

The base clip should probably be the darker clip and the overlay clip should be the lighter clip.  Then change the opacity to try and find what you are looking for.  opacity=1 might even work.

Ok, that gives me some things to experiment with.  By "base clip", do you mean cap1 or cap2 in the script?

The base clip is the "bottom" clip in Overlay. The overlay clip is the clip put on top of the base clip. The base clip is always the first clip entered into the Overlay filter, so the base clip in the filter below should be your darkest clip, regardless of whether it's cap1 or cap2. If they are backwards, just flip them around.

Overlay(BASE clip,overlay clip,mode="luma",opacity=0.5)

Post #484072
Topic: Puggo Strikes Back! (Released)

Puggo - Jar Jar's Yoda said:

The scripts for aligning and combining the clips also worked!  So now I have to identify those scenes that might benefit from it, because judging from the output many scenes won't.  If there were a way to have it choose pixels based on which pixel is less extreme (that is, pick the pixel closer to the midpoint, which should use the lighter source in the dark areas, and the darker source in the light areas - rather than averaging them), then it could probably be used everywhere.  Hmm, I wonder if anyone has ever written a script to do that?  As it is, it may only be useful for improving sections that are already pretty good in both sources.

Pretty remarkable nonetheless.  I could see using this for a variety of applications besides this one.

I'm glad it works.  Maybe you could try to change this variable:

avgcap1cap2=overlay(cap1out,cap2out,mode="blend",opacity=0.5)

to something like this:

avgcap1cap2=overlay(cap1out,cap2out,mode="luma",opacity=0.5)

#where: overlay(darkclip, lightclip, mode="luma", opacity=0.5)

The base clip should probably be the darker clip and the overlay clip should be the lighter clip.  Then change the opacity to try and find what you are looking for.  opacity=1 might even work.

It might work, might not.
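For reference, the opacity parameter is just a linear weight on the overlay clip, so opacity=0.5 is a straight average of the two sources. A quick Python sketch of the per-pixel arithmetic (illustrative values only, not AviSynth code):

```python
def overlay_opacity(base, over, opacity):
    """Per-pixel result of an opacity-weighted overlay in blend mode:
    a linear mix that leans toward the overlay clip as opacity rises."""
    return round(base * (1 - opacity) + over * opacity)

# opacity=0.5 averages the two sources.
assert overlay_opacity(base=60, over=100, opacity=0.5) == 80
# opacity=1 replaces the base entirely with the overlay clip.
assert overlay_opacity(base=60, over=100, opacity=1.0) == 100
# opacity=0 leaves the base clip untouched.
assert overlay_opacity(base=60, over=100, opacity=0.0) == 60
```

That is why raising the opacity shifts the result toward the lighter clip, and why opacity=1 might even work if you want the overlay clip's values to win outright.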

Post #483363
Topic: Puggo Strikes Back! (Released)

Thorr, it adjusts itself, as long as the frames are identical in each capture.

I've changed the script so that it will automatically align all three captures of the first reel. I've added a couple of steps, but I did a test run last night and this way is far superior to the old version (I'd say it works perfectly). It goes like this.

Step 1: Save this as an AVS file, input the paths, and open it in VirtualDub:

#reel1cap3=avisource("PATH TO FILE") #This video will be altered to match source video. Trim so that frames match source.
reel1cap2 = avisource("PATH TO FILE") #This video will be altered to match source video. Trim so that frames match source.
reel1cap1 = avisource("PATH TO FILE") #This is the source video.
blank=reel1cap1.blankclip() #blankclip used for forced scenechange in next line
inter=interleave(reel1cap2,reel1cap1,blank) #This will force a scene change after every comparison of one set of frames.
#inter=interleave(reel1cap2,reel1cap1,blank,reel1cap3,reel1cap1,blank) #This will force a scene change after every comparison of one set of frames.
return(inter)

Step 2:

Add the Deshaker filter and configure pass 1. The only changes I made were these: Scale: FULL, Use pixels: ALL, Deep analysis: 30%.

Take note of the destination for the log file and press OK. Then you can press F5 to preview the output from the start. Let the whole thing finish, then close VDub and move on to the next step.

Step 3:

Save this script to an AVS file, and input the path to the files and the path to the Deshaker log. The output will be the average of all 3 captures, or you can output each capture individually.

#cap3=avisource("PATH TO FILE") #this video will be altered to match source video. Trim so that frames match source.
cap2=avisource("PATH TO FILE") #this video will be altered to match source video. Trim so that frames match source.
cap1=avisource("PATH TO FILE") #this is the source video.
blank=cap1.blankclip()
#reel1cap3=interleave(cap2,cap1,blank,cap3,cap1,blank).depan(subpixel=2,pixaspect=1.0,inputlog="**PATH TO DESHAKER.LOG**").selectevery(6,3)
#reel1cap2 = interleave(cap2,cap1,blank,cap3,cap1,blank).depan(subpixel=2,pixaspect=1.0,inputlog="**PATH TO DESHAKER.LOG**").selectevery(6,0)
reel1cap2=interleave(cap2,cap1,blank).depan(subpixel=2,pixaspect=1.0,inputlog="**PATH TO DESHAKER.LOG**").selectevery(3,0)
reel1cap1 = cap1

#cap3w=reel1cap3.Width()
#cap3h=reel1cap3.Height()
cap2w = reel1cap2.Width()
cap2h = reel1cap2.Height()
cap1w = reel1cap1.Width()
cap1h = reel1cap1.Height()
borw = (cap1w-cap2w)/2
borh = (cap1h-cap2h)/2
#reel1cap3=reel1cap3.AddBorders((654-reel1cap3.Width)/2, (480-reel1cap3.Height)/2, (654-reel1cap3.Width)/2, (480-reel1cap3.Height)/2, color=$000000)
reel1cap2= reel1cap2.AddBorders((654-reel1cap2.Width)/2, (480-reel1cap2.Height)/2, (654-reel1cap2.Width)/2, (480-reel1cap2.Height)/2, color=$000000)
cap1p = reel1cap1.AddBorders((654-reel1cap1.Width)/2, (480-reel1cap1.Height)/2, (654-reel1cap1.Width)/2, (480-reel1cap1.Height)/2, color=$000000)
int = Interleave(cap1p,reel1cap2)
#int1=Interleave(cap1p,reel1cap3)
est = int.depanestimate(range=1,pixaspect=1,zoommax=1.5,improve=false,trust=0)
#est1=int1.depanestimate(range=1,pixaspect=1,zoommax=1.5,improve=false,trust=0)
dep = int.depaninterleave(est,pixaspect=1,prev=0,next=1,subpixel=2,mirror=0,blur=0)
#dep1=int1.depaninterleave(est1,pixaspect=1,prev=0,next=1,subpixel=2,mirror=0,blur=0)
cap1out=reel1cap1
cap2out = dep.SelectEvery(4, 1)
#cap3out=dep1.SelectEvery(4,1)
#avgcap2cap3=overlay(cap2out,cap3out,mode="blend",opacity=0.5)
#avgcap1cap2cap3=overlay(avgcap2cap3,reel1cap1,mode="blend",opacity=0.33)
maskcap2=mask(cap2out.converttorgb32(),cap2out.converttorgb32()).showalpha().invert()
avgcap1cap2=overlay(cap1out,cap2out,mask=maskcap2.showalpha(),mode="blend",opacity=1,greymask=true)

#return(avgcap1cap2cap3)
return(avgcap1cap2)

 

Post #483232
Topic: Puggo Strikes Back! (Released)

**EDIT**

See post #139 for a newer, better version.

reel1cap2 = avisource("PATH TO FILE") #this is the video that will be altered.
reel1cap1 = avisource("PATH TO FILE") #this is the video that will be the source that the other is matched to.

colw = reel1cap2.Width()
colh = reel1cap2.Height()
bww = reel1cap1.Width()
bwh = reel1cap1.Height()
borw = (bww-colw)/2
borh = (bwh-colh)/2
reel1cap2= reel1cap2.AddBorders((654-reel1cap2.Width)/2, (480-reel1cap2.Height)/2, (654-reel1cap2.Width)/2, (480-reel1cap2.Height)/2, color=$000000)
bwp = reel1cap1.AddBorders((654-reel1cap1.Width)/2, (480-reel1cap1.Height)/2, (654-reel1cap1.Width)/2, (480-reel1cap1.Height)/2, color=$000000)
int = Interleave(bwp,reel1cap2)
est = int.depanestimate(range=1,pixaspect=1,zoommax=1,improve=false,trust=0)
dep = int.depaninterleave(est,pixaspect=1,prev=0,next=1,subpixel=2,mirror=0,blur=0)
acol = dep.SelectEvery(4, 1)
avgcap1cap2=overlay(reel1cap1,acol,mode="blend",opacity=0.5)

return(avgcap1cap2)

 

Post #483224
Topic: Puggo Strikes Back! (Released)

Hi Puggo,

I'd first like to say thanks for preserving these films so that they are available to all of us.

I noticed while reading this thread that you are looking for a better way to auto-align the frames from several of your reel captures.

I know of an AviSynth script that should do a better job of auto-alignment than Photoshop or Vegas. I've never used the Vegas method, but from what you are describing, it's not that great.

If you are interested just let me know what resolution your captures are and I can edit the script to fit the resolution and then post it here in this thread for you.

 

 

Post #479292
Topic: 2006 SE/GOUT hybrid script project - UPDATE: 05-24-11 see 1st & 2nd post...

I think the less resizing you do, the better off you will be. It would be better if you didn't resize the PAL sources to match the script; just change the cropping and resizes in the script to match the sources you are using. I have no experience with PAL, actually. I live in PAL land, but I'm American and most of my DVDs are Region 1. Are PAL DVDs not natively YV12? I tried to make sure there wouldn't have to be any color format changes.

1) I will have to find my PAL discs and do a comparison using your script modification to see if there is a difference in the sources.

2) The GOUT colors are bad (so is the SE), and I'm currently encoding with g-force's script so that I can use it instead of the raw GOUT (VirtualDub is telling me 6 days as of right now before it's finished). I'd suggest this, since the raw footage is a bit of an eyesore when pitted against the clarity of the SE.

3) I will probably substitute GOUT footage for all of the SE changes after I have g-force's version to work with.

4) See #2, lol.

5) I've just double-checked the move along sequence and didn't see any repeating sequence. It could be a difference in the sources, or the script is wrong and my eyes are fooling me.