
How to achieve an entirely static result? #80

mviereck opened this issue Oct 16, 2019 · 3 comments


@mviereck

mviereck commented Oct 16, 2019

As said in #78, I use vidstab to align images for focus stacking.
For this purpose, a series of captures of the same subject, each with slightly different focus, is fused into one overall sharp image.

For this goal the images must be aligned as well as possible.
In terms of vidstab, the video result should look like a static camera with no subject movement at all.

Currently I get pretty good results running vidstab three times with different shakiness values. On the first iteration I use shakiness=10, then shakiness=5, and finally shakiness=1.
Other parameters in use:
vidstabdetect: accuracy=15:mincontrast=0.1:stepsize=6
vidstabtransform: optzoom=0:smoothing=0
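Written out as full commands, one pass of this setup looks roughly like the following. This is only a sketch: input.mkv, pass1.mkv and transforms.trf are placeholder names, and it assumes the captures have already been turned into a video; the filter options are the ones listed above.

# detect pass with shakiness=10
ffmpeg -i input.mkv -vf vidstabdetect=shakiness=10:accuracy=15:mincontrast=0.1:stepsize=6:result=transforms.trf -f null -
# apply the transforms without smoothing or zooming
ffmpeg -i input.mkv -vf vidstabtransform=input=transforms.trf:optzoom=0:smoothing=0 pass1.mkv

The second and third iterations repeat both commands on pass1.mkv and its result, with shakiness=5 and then shakiness=1.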

I also tried setups with tripod mode, but I get terribly shaky results. Maybe it is a bug, maybe I misunderstood something.

I want to ask you for the best possible parameter setup to achieve an entirely static result.
As said, running vidstab three times with different shakiness values gives pretty good results. But maybe it is possible to achieve a static result with one iteration only?

To show an example:
A single capture out of 46 overall; only parts of the image are in focus:
[image: preview_0019]

Final image calculated from the 46 captures, aligned with vidstab, with further image processing by imagemagick and enfuse:
[image: project 2019-09-19 stack_0001 fuse median]

mviereck changed the title from "How to achieve entirely static result?" to "How to achieve an entirely static result?" on Oct 16, 2019
@btzy

btzy commented Jan 12, 2020

I'm having a similar issue. I captured slightly over a thousand images at regular intervals (mounted on a real tripod), and I'm trying to make a timelapse video from the still images. I had to change the batteries of my camera halfway through, so the photos taken after the battery change were facing a slightly different direction from the ones before the change. To correct for this, I'm trying to use vidstab to align the first set of images to the second. Like @mviereck, I tried vidstab's tripod mode, which didn't seem to fix anything. I used these commands:

ffmpeg -framerate 30 -i P112%04d.JPG -filter:v vidstabdetect=tripod=1000 -f null -
ffmpeg -framerate 30 -i P112%04d.JPG -filter:v vidstabtransform=tripod=1:crop=black:optzoom=0 -c:v libx264 -pix_fmt yuv420p tmp/output.mp4

I looked into the source code, which led me to the virtualTripod field of VSMotionDetectConfig. It seems to be meaningfully used only in motiondetect.c line 264 (of current master, aeabc8d), which is: if(md->conf.virtualTripod < 1 || md->frameNum < md->conf.virtualTripod). This seems to be the only place where the code path for tripod mode in the first pass (vidstabdetect) differs from the normal mode. Furthermore, opening transforms.trf with a text editor reveals that frame 1 has an empty list, while frame 1000 has a long list that looks like most other frames. So the calculations of the first pass still seem to be done in 'relative' mode. For the second pass I therefore changed tripod=1 to relative=1:smoothing=0, which seems to do something close to what I want.
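Spelled out, the adjusted second pass is roughly the following (same input pattern and output path as in my commands above):

ffmpeg -framerate 30 -i P112%04d.JPG -filter:v vidstabtransform=relative=1:smoothing=0:crop=black:optzoom=0 -c:v libx264 -pix_fmt yuv420p tmp/output.mp4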

I would guess that md->frameNum is the current frame number, so to enable tripod mode for the whole video, we have to either set the reference frame to 1 (i.e. tripod=1 in the first pass), or modify the code. As a workaround, I copy the reference frame to the beginning of the video before running vidstab.
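For an image sequence like mine, that workaround amounts to roughly the following. The file names are only an illustration: it assumes the reference frame is P1121000.JPG, the numbering is contiguous, and number 0000 is still unused.

# duplicate the reference image so it becomes the very first frame vidstab sees
cp P1121000.JPG P1120000.JPG
# both passes then use tripod mode with frame 1 (the duplicated reference) as the reference
ffmpeg -framerate 30 -start_number 0 -i P112%04d.JPG -filter:v vidstabdetect=tripod=1 -f null -
ffmpeg -framerate 30 -start_number 0 -i P112%04d.JPG -filter:v vidstabtransform=tripod=1:crop=black:optzoom=0 -c:v libx264 -pix_fmt yuv420p tmp/output.mp4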

@georgmartius
Owner

I see, this looks like a bug. The tripod mode was not too well tested. I will have a look.

@anonyco

anonyco commented Jan 29, 2020

I have a (boring) stop-motion film made with tin cans for school. Feel free to use it to test your changes to the tripod method. Here is a link to the first set of raw pictures: https://drive.google.com/file/d/1cu8JWBdSJm2bRpoAYJyzcmKznSWXeuBI/view

Some things to note:

  • The camera is randomly shaky from frame to frame by a few pixels because I am too poor to afford a remote-controlled camera
  • The highly detailed background should give vid.stab an advantage if it can take it
  • The tripod only moves once, due to being bumped by a playful dog :)
  • Lighting differs very slightly due to shadows cast by the people and animals in the room
  • The camera had to be taken off and remounted several times because the crappy battery only lasted a short time. I compared the new photos to the old ones in order to reposition the camera to almost where it was before.

I hope that this proves to be a good test for debugging your changes to vid.stab.

Best of luck,
AnonyCo
