I'm interested in tracking across image sequences, and I've put together a basic key-frame approach using LightGlue as the matcher (props, by the way: this is an awesome piece of work).

Is key-frame matching the best strategy for tracking over longer time spans? I had assumed so, but I've found that LightGlue can be sensitive to rotation of the keypoint locations.

As an example, I work with a macro-style fixed camera rig with large relative rotations between the cameras. Since the rig is calibrated, I can match between descriptors computed on rotated images. However, I see a very large drop in performance if I instead try to match between keypoints whose locations have been un-rotated (i.e. with just the inverse rotation applied to the keypoint coordinates).
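For concreteness, by "un-rotating the keypoint locations" I mean something like the following minimal NumPy sketch: a rigid in-plane rotation of the keypoint coordinates only, pivoting about the image centre (the function name and the centre-of-image pivot are my own conventions here, not anything from the LightGlue API):

```python
import numpy as np

def unrotate_keypoints(kpts, angle_deg, image_size):
    """Apply the inverse of an in-plane rotation to keypoint locations.

    kpts: (N, 2) array of (x, y) pixel coordinates.
    angle_deg: the rotation that was applied to the image.
    image_size: (width, height); rotation pivots about the image centre
    (an assumption of my setup; sign conventions depend on your rig).
    """
    theta = np.deg2rad(-angle_deg)  # inverse of the image rotation
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s],
                  [s,  c]])
    center = np.asarray(image_size, dtype=float) / 2.0
    # Rotate coordinates about the image centre; descriptors are untouched.
    return (kpts - center) @ R.T + center
```

The descriptors themselves are left as computed on the rotated image; only the 2D locations fed to the matcher are transformed.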
Clearly LightGlue is more sophisticated than a nearest-neighbour matcher and enforces some form of global consistency over keypoint positions. If that's the case, I'd also expect matching one arbitrary collection of keypoints against another to fall afoul of this "consistency".

Have you encountered or mitigated this behaviour before, or do you have any insight you can offer?