Bunch of questions
- Cannonaire
- Joined: Wed May 05, 2010 5:59 pm
- Status: OVERLOAD
- Location: Oregon
Bunch of questions
I've kinda been posting a lot here recently, but I've got some real questions and I don't know where else I would ask them. Thanks for all your help so far, everyone.
1. Is there a way to show only the chroma channels? I've searched for it in a few places but no answer yet. I know a few ways to show only Luma at least...
2. I've noticed that everyone seems to recommend Spline36Resize instead of Spline64Resize, which seems like it would be higher quality. Is there a reason for this aside from speed?
3. Obviously you lose chroma resolution when changing from RGB to YUY2 and YV12, but what kind of loss is there when going from those to RGB? I know I've seen Mister Hatt comment that it isn't entirely lossless. I'm still new to colorspaces in general so I don't really know why, though I do understand the basics.
4. I've seen nnedi2_rpow2 recommended for resizing with powers of two, and also not recommended. I was wondering when it's actually appropriate to use it.
5. When using a d2v source, I seem to be decimating frames, using cleanup filters, and then using changefps to bring the footage back to NTSC video (some cons require this and it gives a little more precision when editing the actual AMV). Logic would tell me this is correct, if redundant, in that it allows temporal filters to work better. Is this normal, or just unnecessary?
6. This is the longest question. I've seen some amazing results with time scaling in some videos, and I've been experimenting with different ways of doing it. My current thought process tells me that changefps would be best for anime in most cases, since anime is often animated at a low framerate anyway and the motion isn't precise, while in most other cases changefps would introduce visible jerkiness, such as when used on computer graphics and live action. I also know about convertfps for blending, which seems like a bad idea in a lot of cases unless you really need smooth motion and a little blending is acceptable. Finally, there is interpolation. I would like to experiment with this method more, but the only one I've found so far is MSU FRC. Don't get me wrong, it can work pretty well sometimes, but most of the results I get are unusable. What I want to know is: am I on the right track, and is there anything I've missed or am forgetting?
The way I do my time scaling at the moment is a changefps line followed by an assumefps("ntsc_video") line, which allows me to very precisely change how fast or slow I want the video to play. I mount scripts and use them in Vegas with all the slow filters commented out while editing and re-enabled for rendering. It runs fast enough on my computer. Am I getting any real benefit from doing it this way, or is it just slow, clunky, and unnecessary when I could be using the time scaling in Vegas?
7. Last one. Just curious where everyone gets all these different plug-ins aside from doom9 and avisynth.org.
Thanks for reading. At least these seem to be all the avisynth questions I have for now.
Think millionaire, but with cannons. || Resident Maaya Sakamoto fan.
- mirkosp
- The Absolute Mudman
- Joined: Mon Apr 24, 2006 6:24 am
- Status: (」・ワ・)」(⊃・ワ・)⊃
- Location: Gallarate (VA), Italy
- Contact:
Re: Bunch of questions
1. levels(0,1,255,128,128) is a somewhat ghetto way that could get you the result you're seeking.
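If the levels trick doesn't cut it, the builtins UToY/VToY should also isolate the chroma, assuming a YUY2/YV12 clip. A minimal sketch, not a tested recipe:

Code: Select all
# view the U plane as a greyscale clip (at chroma resolution)
UToY()
# or the V plane instead:
# VToY()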
2. IIRC Hatt once told me that spline64 can mess with gradients and sharpness of the image, so spline36 actually ends up having better results. Prolly better to wait for him for a confirmation, though.
3. I remember a thread on doom9 that showed really well how the YUV -> RGB conversion doesn't map colours exactly; in fact a lot of colours were dropped or shifted to a similar one in the conversion. There was even an animated gif Dark_Shikari made that was immediate and actually trippy to look at.
4. Works well for upscaling anime, but perhaps not so much for cg and live action. Should wait for Hatt on this one, too...
5. Bad idea. They animate at 23.976, but have to telecine to 29.97 by adding an extra frame (ok, terrible explanation since what actually happens is different, but it's kinda sorta close and gives you an idea). Using changefps after decimate makes the decimation useless and the animation jerky (all the pans and zooms in the anime will be ruined). Of course, there do exist some shows which are animated (at least partially ─ hybrid anime isn't too uncommon) at 29.97, so in those cases you wouldn't be IVTCing. But for the rest, edit at 23.976 to get the intended smoothness and playback speed, and when sending your amv to a con, just telecine it.
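For reference, a minimal IVTC sketch for a d2v source, assuming DGDecode and the TIVTC plugin are loaded (the filename is hypothetical, and defaults may need per-show tweaking):

Code: Select all
MPEG2Source("episode.d2v")  # hypothetical filename
TFM()        # field matching: rebuild progressive frames
TDecimate()  # drop the duplicate frame in each cycle: 29.97 -> 23.976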
6. Like I said, keep the footage at 23.976. "Perfect" time scaling would be interpolation with Twixtor, but then again Twixtor is far from perfect, and can (and actually will) introduce some eyecancerous frames, especially with anime. Still, if you need to do speedups/slowdowns, do them in your NLE, as you'll have much finer control over it.
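As an AviSynth-side alternative to MSU FRC, motion interpolation with the MVTools2 plugin is another thing to experiment with. A rough sketch, assuming mvtools2.dll is loaded; the parameter choices here are guesses, not tuned values, and anime will still produce artifact frames:

Code: Select all
# interpolate 23.976 up to 59.94 using motion vectors
super = MSuper(pel=2)
bv = MAnalyse(super, isb=true)   # backward vectors
fv = MAnalyse(super, isb=false)  # forward vectors
MFlowFps(super, bv, fv, num=60000, den=1001)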
7. I subscribed to this blog's feeds, dunno about what others do: http://blog.niiyan.net/
- Cannonaire
- Joined: Wed May 05, 2010 5:59 pm
- Status: OVERLOAD
- Location: Oregon
Re: Bunch of questions
Thanks for the quick response! Unfortunately I do still have to make the AMV at NTSC video rate if I end up wanting to submit it to a convention, so I would think it best to change it beforehand rather than possibly mess up the sync after it's finished. The way I understand it, telecine duplicates fields in an interlaced context rather than full frames, and I'm probably used to it since I see it on TV all the time. I suppose I'll just have to be careful. Some anime scenes show no problems from adding frames, but as you said, pans can be a lot trickier. I'll figure it out on a case-by-case basis, I guess. Thanks for the good advice.
Would the timescaling in the NLE be higher quality, or just faster and easier? After what you said, my gut says I should just let it rest for now and use the NLE.
Also, that really sucks about converting to RGB. Vegas won't accept anything else I send it.
Think millionaire, but with cannons. || Resident Maaya Sakamoto fan.
- mirkosp
- The Absolute Mudman
- Joined: Mon Apr 24, 2006 6:24 am
- Status: (」・ワ・)」(⊃・ワ・)⊃
- Location: Gallarate (VA), Italy
- Contact:
Re: Bunch of questions
Imho it's still better to edit at 23.976, though: once the con is over, you'll be left with an online distro copy that will carry the framerate issue for the rest of time. Editing at 29.97 will be jerky regardless, so I'd personally make sure that at least the online distro copy is perfect in that sense. I mean, seriously, you shouldn't dumb down your project settings to fit a convention that should revise their settings (inb4 Hatt and "they should just allow H.264+FLAC MKVs hurfdurf").
Quality wouldn't really change, but keep in mind that a framerate change in avisynth changes the speed of the whole clip. Avisynth is not vfr aware, so you can't have scenes going at different speeds directly out of it (you'd need timecodes or something, I guess, but I very much doubt vegas takes those), so I believe changing the speed in your NLE is your best choice. Doing it properly in avisynth is much harder, since you'll be messing with applyrange and assumefps+changefps (or assumefps+convertfps, your call), and the end result wouldn't be as fast to achieve, or as precise, as just tweaking the speed exactly how you need it within the NLE.
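To give an idea of the juggling involved, here is a hedged sketch of speeding up just one scene in AviSynth by splicing trimmed ranges; the frame numbers and speed factor are made up, and it assumes a 23.976 source:

Code: Select all
a = Trim(0, 499)   # untouched scene, normal speed
b = Trim(500, 799).ChangeFPS(18.4431).AssumeFPS(24000, 1001)  # ~130% speed (23.976 / 1.3)
a ++ b             # splice back together at a constant 23.976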
Since you're already using Vegas, there's no point in even trying to preserve yv12 as that's probably going to mix up your colormatrix/colorspace regardless, so I guess just let avisynth deal with the conversion to rgb as it can't be avoided, apparently. I guess you could do a yv12 lags with rgb suggested for output ─ lags has its issues, but it doesn't even matter anymore once you start talking AMV editing and NLEs... >_>;
Although it seems weird that Vegas won't accept yv12... I thought it was supposed to import H.264 just "fine" (at least sort of, I guess). o.O
- Cannonaire
- Joined: Wed May 05, 2010 5:59 pm
- Status: OVERLOAD
- Location: Oregon
Re: Bunch of questions
That makes a lot of sense. I think I will be making it at NTSC film rate. As for changing the framerate beforehand, what I would do is: say I want the rate to be 130% of normal; my math says you ChangeFPS to the target rate (previously NTSC video) divided by the speed factor, so 29.97 / 1.3. I would calculate that, then use that number for the first line. In the script, it would appear like this:

Code: Select all
ChangeFPS(23.0539)      # 29.97 / 1.3
AssumeFPS("ntsc_video")

The original framerate wouldn't actually matter in the equation, and the output would be time scaled as needed. And obviously 23.0539 is a little too precise, because each point in time will land on one source frame or another anyway, which then gets assumed to a slightly different value regardless. Of course it's really just academic at this point, since it is a rather clunky way of doing it. But it does work.

What would be the best way to change the FPS to NTSC video rate after production for convention submission? Would I want to use avisynth ChangeFPS for that, or is there a better way?

Sorry for so many questions; I want to do this as well as I can, and I am using the advice given.
Think millionaire, but with cannons. || Resident Maaya Sakamoto fan.
- mirkosp
- The Absolute Mudman
- Joined: Mon Apr 24, 2006 6:24 am
- Status: (」・ワ・)」(⊃・ワ・)⊃
- Location: Gallarate (VA), Italy
- Contact:
Re: Bunch of questions
When doing the MPEG2 with TMPGEnc (or whatever other program you use to make the mpg), you should have the option to encode 23.976 progressive with a soft telecine to 29.97 applied. This basically makes the video telecined on playback on devices that can't properly support 23.976, but keeps the video encoded at that framerate, so you save space (fewer frames to store) and also keep the smoothness on devices that can play back at 23.976 (they will ignore the pulldown flag and play back progressive).
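Soft pulldown as described is the better option, but if a con's encoder ever needs to be fed hard-telecined 29.97 directly, the classic AviSynth 3:2 pulldown pattern is roughly this (the TFF field order is an assumption; swap to AssumeBFF if the target needs bottom field first):

Code: Select all
AssumeTFF()
SeparateFields()
SelectEvery(8, 0,1, 2,3, 2,5, 4,7)  # 3:2 field repeat pattern
Weave()                             # 23.976 progressive -> 29.97 interlaced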
- Cannonaire
- Joined: Wed May 05, 2010 5:59 pm
- Status: OVERLOAD
- Location: Oregon
Re: Bunch of questions
I just re-read what you said and I think I understand what you mean by changing it for the whole thing. I really don't like using the entire episodes as clips and trimming them in Vegas, so instead I use the Trim function in avisynth to make "clips". Since I don't have to worry about changing anything else, and I can just unmount and remount the script, timescaling that way is easy and it gives you a lot of flexibility. You also retain the benefit of having the entire episode in case the "clip" was trimmed too short.
Think millionaire, but with cannons. || Resident Maaya Sakamoto fan.
- mirkosp
- The Absolute Mudman
- Joined: Mon Apr 24, 2006 6:24 am
- Status: (」・ワ・)」(⊃・ワ・)⊃
- Location: Gallarate (VA), Italy
- Contact:
Re: Bunch of questions
But aren't you going to load an incredible number of scripts like that? The more scripts you have loaded at the same time, the less stable and fast the editing will be. If you make an avs script for every scene, it's not going to be a very smooth editing session... if you plan on trimming scenes, make lossless clips; if you plan to use avs, make just a few avs scripts.
- Cannonaire
- Joined: Wed May 05, 2010 5:59 pm
- Status: OVERLOAD
- Location: Oregon
Re: Bunch of questions
I see the wisdom in that. I've crashed my computer with mounted scripts before... though I was doing things I ought not to have been doing with them. I guess it is a tradeoff between flexibility and speed/stability. Right now I think I'll use scripts for editing until I have them edited how I wish in order to maintain the flexibility, then use rendered lagarith when I'm finished with that section. The biggest issues I've had so far are flexibility and stability. It's a learning process for me, and you're helping me see a much bigger picture.
*Update* Since I moved to Vegas 10 it seems to at least recognize the YV12 clips, but it runs a lot slower than when it uses the RGB. I'm thinking edit with RGB and final render with YV12 when speed is not a concern. Vegas doesn't seem to mind even the most blatant bait-and-switch I've given it so far.
Think millionaire, but with cannons. || Resident Maaya Sakamoto fan.
- Cannonaire
- Joined: Wed May 05, 2010 5:59 pm
- Status: OVERLOAD
- Location: Oregon
Re: Bunch of questions
Still looking for answers to a few questions. Regarding question 1, there has to be a better way to show only the chroma channels...
When timescaling in Vegas, it blends frames to achieve the result. A search of the help file doesn't turn up any way to change this behavior, and blending is the last thing I want at the moment. I guess for now I have to stick to the method I know (ChangeFPS+AssumeFPS), unless someone else knows how to change the behavior in Vegas.
Think millionaire, but with cannons. || Resident Maaya Sakamoto fan.