Registered Member
> I've used the Deshaker plugin, but there are a few caveats unless things have changed in recent times
Interesting caveats. IMHO those apply mainly to very advanced users. I have compared nominal videos (taken handheld, and also using a monopod, with a Canon Legria HF S10 digital video camcorder) against the output after applying the wine/VirtualDub Deshaker plugin, and I have a firm opinion that in many cases the improvement from Deshaker, despite your caveats, is significant. I.e. I won't compose a video project again without checking for a Deshaker improvement (and in 90% of the cases the improvement in the videos that I take is massive). Now I would like to see a comparable (in quality) Linux package that is as easy to use (and by no means is Deshaker that easy, but it is functional).

Note I am a total klutz with MS-Windows apps, having left Windows for Linux in 1998, but I am aware of no multiple transcoding being necessary with Deshaker, unless one is referring to the 2 passes. One can also specify a very high bit rate, such that the two-pass transcoding has IMHO less of an effect, as most computers cannot handle the high bit rate anyway (so nothing really noticeable is lost).

I also note that every Linux suggestion for stabilization I've tried has left me completely puzzled and unable to apply it, as the authors/packagers giving advice on how to use the packages assume a level of knowledge well beyond anything I'll ever take the time to learn. I do hope all those working on Linux packages succeed, but please, when you get it to a level of reasonable functionality, keep in mind that not all of us want to learn the difference between RGB, YCbCr, YV12 or YUY2. Anyway, it's an interesting thread.
Registered Member
A two-pass stabilizer is better because it can react to "future" events, but because of the two passes it cannot be implemented as a simple pipeline-type plugin. Therefore, the best stabilization will never be as simple as a brightness adjustment...
BTW, the two passes do not increase the number of image conversions, since the image is only transformed on the second pass. A single-pass stabilizer is possible; the mjpegtools stabilizer and the (lost?) centipede frei0r plugin are of this type.

A short summary of the Linux video stabilization possibilities that I know of:

MJPEGTOOLS: command line tool, basic single pass, no de-rotation, no empty edge fill, no rolling shutter compensation.

CENTIPEDE: GUI frei0r tool based on openCV, currently of unknown availability, basic single pass, no de-rotation, no empty edge fill, no rolling shutter compensation.

TRANSCODE: command line tool, two pass, does de-rotation, does some edge fill (the webpage says it can also use previous frames for edge fill, but it did not look like that to me - I checked the source code and saw that it just uses the unmoved current frame), no rolling shutter compensation.

VIRTUALDUB UNDER WINE: GUI, two pass, does de-rotation, does edge fill using previous and future frames, has some kind of rolling shutter compensation. (The stabilization plugin is "external" to VirtualDub and, as far as I know, not open source.) Of course, this is not a native Linux application, and it is a bit of work to get it to run under wine.

My experience (not that extensive...) with them is the following: In most cases I use Transcode, which does a pretty good job. My only gripe is the primitive edge compensation. If I can find the time, I plan to make a patch for that and send it to the author... I have also stabilized some HV30 footage and did not miss the rolling shutter compensation very much (some people say it is questionable how much can be done about that at all...). I am quite satisfied with the results; I got good results in most cases where I shot the video handheld while standing still. It also does a good job of taking out the remaining shake from walking shots with my Manfrotto Modosteady. I am no adrenaline guy myself and have no experience with that, but I can understand that with extremely jerky helmet-cam action video (like the above bicycle video) there are limits to what after-the-fact stabilization can do. But still, while the results probably won't be perfect, the "watchability" of the video will certainly be increased.
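To make the two-pass idea concrete, here is a minimal sketch of the Transcode workflow (the filenames are just examples, and the exact filter options depend on how your transcode was built, so treat it as an illustration rather than a recipe):

  # pass 1: analyse the camera motion; this writes a transform file next to the
  # input, but produces no video output yet
  transcode -J stabilize -i shaky.avi

  # pass 2: read the transform file, apply the compensation and encode the result
  transcode -J transform -i shaky.avi -y xvid -o stable.avi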
Registered Member
@oldpc "but please, when you get it to a level of reasonable functionality, please keep in mind that not all of us want to learn the difference between RGB, YCbCr, YV12 or YUY2.", no problem, but to 'learn' the differences between the colourspaces would take a book, however 'appreciating' the differences and the sequence of events from when we import video to when we encode out is a benefit to maintaining some sort of quality control, especially with regard to colour correction/grading and problem solving.
I understand your comment, but please don't assume that just because many are amateurs, or use kdenlive out of interest in a private capacity, we shouldn't strive for the best we can achieve. :-)

The reference to multiple encoding/transcoding was simply that when we import video into VDub in one format, we have to get the 'deshaked' video back out, and if we then intend to put it back into an NLE, how does that happen? Do we frameserve into kdenlive? Not possible. It has to be re-encoded again. It's all well and good if we deshake our final movie and then encode out from VDub, but that's a bit daft if our 'shaky' video element is a fraction of the whole movie. Or do we pick the frame sequences from our final movie, deshake them, and then re-encode our edited, colour-corrected movie back out to final again?

@Marko, I shot some handheld footage, holding my cam as still as I could manage. Deshaked it all on best settings throughout, set the rolling shutter % as per the 550D and still got jelly cam out. :-) But good to hear others are having more success. :-)
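(As an aside, one way to soften that extra generation of encoding is to re-encode the Deshaker output to a lossless intermediate before importing it into kdenlive, so the only lossy step left is the final render. A minimal sketch, assuming ffmpeg is installed and using made-up filenames:

  # re-encode the Deshaker output with a lossless codec (FFV1 video, PCM audio)
  ffmpeg -i deshaked_from_vdub.avi -vcodec ffv1 -acodec pcm_s16le deshaked_lossless.avi

The file is big, but it only has to live until the project is rendered.)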
Registered Member
Marco, thanks for the summary!
Yellow, your points noted. I hope some day to learn enough to be as capable of doing this video processing as the two of you. For me it's simply difficult (currently not possible) to use anything other than the Deshaker/VirtualDub method, as all other methods assume a level of video understanding that I for one do not have (and hence the stabilization efforts with anything other than VirtualDub consistently fail).
Registered Member
I note almost 6 months have gone by since the last post on this stabilization subject. Has there been any progress in making this more user-friendly for the average user?
At the moment I am still using the wine/VirtualDub/Deshaker-plugin combination as my stabilization method, applying it to clips that need stabilizing. Needless to say, an integrated Linux method would be preferable if it were functionally superior. Even a basic guide to a Linux-only stabilization method would be a helpful start.
Registered Member
As far as I know, there is nothing new in this department...
One method that I forgot to mention above is stabilizing video with Cinelerra (just google "cinelerra stabilization"). It is single pass, does de-rotation, no empty edge fill, no rolling shutter compensation. In contrast to the other possibilities above, it tracks only a single area and is very "manual".
Registered Member
I tried various methods here and I report my experiences:
- VirtualDubMod through wine wasn't really productive for me: it needs very specific input files, produces large output, and has no multithreading for my usage. Maybe I'm using it wrongly and don't know how to set it up perfectly, but I spent 2 or 3 hours getting it to work without a good result (in my opinion).
- Transcode + vid.stab was my last test, and it is fast and does the job. Thanks to this tutorial: http://314bits.com/blog/2010/09/stabilize-video-in-ubuntu-linux/ , it took only a few minutes to set it up and do my first conversion on Ubuntu. Why didn't I find this link before? That's why I'm posting it to this thread, which is prominent on Google when searching for video stabilisation/deshaking on Linux/Ubuntu.
Registered Member
Deevad, thanks for the http://314bits.com/blog/2010/09/stabilize-video-in-ubuntu-linux/ link. I followed that guide, spent an hour trying different combinations of the transcode command and could not get one to work. I had the entire gamut of errors, and the best I could get was a green pattern of lines flashing across the output.
It's simply not user-friendly enough, and IMHO needs a wrapper script for anyone who does not want to dive into the details of transcode to get this to work. I don't have the patience to spend more than an hour on anything that is as unproductive as that was for me. For example, in contrast, the wine method with VirtualDub only took me 15 minutes or so to get running and stabilizing videos. So I'll go back to the wine method with VirtualDub and Deshaker, which works well for me. My guide for that is still here: http://forums.opensuse.org/forums/english/get-technical-help-here/how-faq-forums/unreviewed-how-faq/429428-howto-install-virtualdub-under-wine-deshaker-plugin.html
But I appreciate the post, and I hope others have better success than I did. It's good to see a pure Linux approach, even if it is totally useless to me at this stage of its development. Maybe someday someone will create a wrapper script or a GUI around it (with appropriate prompts/checks to ensure wrong arguments/values are not used).
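Purely as an illustration, the kind of wrapper meant here could be as small as the shell sketch below. It assumes a transcode build that ships the vid.stab stabilize/transform filters, hardcodes an Xvid output, and the script name is made up:

  #!/bin/sh
  # stabilize.sh - hypothetical two-pass stabilization wrapper around transcode
  # usage: ./stabilize.sh shaky.avi stable.avi
  set -e
  in="$1"
  out="$2"
  [ -f "$in" ] || { echo "usage: $0 input-file output-file (input not found)" >&2; exit 1; }
  [ -n "$out" ] || { echo "usage: $0 input-file output-file" >&2; exit 1; }
  # pass 1: motion analysis only
  transcode -J stabilize -i "$in"
  # pass 2: apply the transforms and encode
  transcode -J transform -i "$in" -y xvid -o "$out"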
Registered Member
h-munster, no, I confess I had not read about the "--mplayer_probe" flag.
Following your advice, and using the link you referenced, I gave it a try. Indeed that makes a world of difference, and the first time I tried stabilizing with transcode using that flag it worked. I'm both impressed and grateful. Thank you.
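For anyone landing on this thread later: the flag just goes in front of the usual two-pass commands, roughly like this (filenames are examples; the flag tells transcode to use mplayer to probe the input instead of its own import probing):

  transcode --mplayer_probe -J stabilize -i clip.mp4
  transcode --mplayer_probe -J transform -i clip.mp4 -y xvid -o clip_stable.avi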
Registered Member
To my surprise, video stabilization using transcode now works out of the box on Debian unstable with transcode from debian-multimedia.org. At least MJPEG and MPEG4 clips got a bit more stable even without the probe tricks. Edge compensation, if that's the correct term, is visible in shaky frames as the edges seem to move around more than the center of the screen, and skiing clips have some trouble finding enough contrast, but this seems like a great start.
Registered Member
This function is in MLT now and will hopefully be in Kdenlive by the end of the year.
http://sourceforge.net/news/?group_id=96039&id=302819
Registered Member
Clicking "Play" on the linked demo video says
"This video has been removed by the user" :-( |
Registered Member
I encountered the problem with the one here: http://vstab.sourceforge.net/
but the one on YouTube works now.
Registered Member
I tried searching and found the mlt/melt info. Ran a stabilization on one of my clips. Anybody have a link to the command-line options for melt?
I copied from ddennedy's and marco's examples and entered:

melt -verbose -profile quarter_ntsc_wide 100_0032.MP4 out=1000 -filter videostab -consumer xml:100_0032-vstab.mlt all=1 real_time=-2
melt 100_0032-vstab.mlt
melt 100_0032-vstab.mlt -verbose -consumer avformat:100_0032-vstab.webm properties=webm real_time=-2 threads=3

(my video was 100_0032.MP4)

How do I find out about the different options, formats and properties? Anyone have a link to more information on the videostab filter? I've messed with transcode and melt/videostab, but am just googling and guessing.
Registered Member
melt --help will give the options. Funny, I've also been looking at melt on the CLI for this, but at the moment I have to encode out. I was hoping that dragging the source clip and the analysed mlt file into kdenlive would suffice, but it crashes kdenlive, so I'm waiting for integration into the kdenlive interface. :-)
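Beyond --help, melt can also list its plugins and the parameters they take; whether the videostab filter shows up depends on the MLT build you have, so treat this as a sketch:

  # list all available filters
  melt -query filters
  # show the parameters of one filter (here the stabilizer)
  melt -query "filter=videostab"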