
Terrible performance on the Oxford Radar Robotcar Dataset as compared with KITTI #42

Open
jimaldon opened this issue Sep 24, 2019 · 9 comments

Comments

@jimaldon

jimaldon commented Sep 24, 2019

The Oxford RR dataset was captured with a global-shutter camera and is in the KITTI dataset format.

Without resetting at around the 1000-frame mark, it never initializes. When it does initialize, it loses track at random intervals (even when the car is moving forward at constant velocity without rotation) and requires resetting again.

What could explain this behaviour? Is there anything peculiar about this dataset that sets it apart from KITTI? Neither dataset has photometric calibration, and both were recorded with global-shutter cameras.

I run LDSO on the rectified, undistorted images from the stereo/centre Bumblebee camera and use the following calibration:

```
Pinhole 964.828979 964.828979 643.788025 484.407990 0
1280 960
crop
1280 960
```

Here's the radar robotcar dataset: https://dbarnes.github.io/radar-robotcar-dataset/datasets

@gaoxiang12
Collaborator

Hi @jimaldon and @NikolausDemmel, I think -O2 or -O3 is missing in the CMakeLists, which probably causes poor performance when running the examples.

@NikolausDemmel
Contributor

@gaoxiang12, that should not be the issue, since the default build type is Release, so unless you specify Debug explicitly, it should be an optimized build.

I think this is about odometry performance, not runtime speed.

@jimaldon From a quick glance at the webpage it looks like the car bonnet is in view. You might have to cut that off, or implement something like a mask. Otherwise, if points on the bonnet are chosen, their motion won't be consistent with the motion of the static scene.

@jimaldon
Author

@NikolausDemmel Ah, that could be it - I'll try that!

I tried ORB-SLAM on the dataset without a mask and it seemed to perform okay - it must be robust to static occlusion somehow.

@NikolausDemmel
Contributor

Yes, for ORB-SLAM, I guess RANSAC both during initialization and localization helps in this case compared to DSO.

@NikolausDemmel
Contributor

PS: Would be great to see your results if you manage to get it to work (or not...)

@jimaldon
Author

jimaldon commented Oct 4, 2019

So, in a preliminary attempt to implement a "mask" over the image, I restricted grid construction for corner feature detection to exclude all pixel rows below a threshold that contains the car bonnet. It mostly involved changing FeatureDetector.cc::DetectCorners.
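For illustration, the grid restriction described above might look roughly like the following numpy sketch. This is not LDSO's actual code: `CELL` and `BONNET_ROW` are assumed values, and the per-cell "strongest gradient" pick is a simplification of DSO-style point selection.

```python
import numpy as np

# Hypothetical sketch of restricting corner detection to rows above the
# bonnet. DSO-style selection walks fixed-size grid cells and keeps the
# strongest-gradient pixel per cell; here we skip any cell that would
# overlap the bonnet region. CELL and BONNET_ROW are illustrative values.
CELL = 32          # grid cell size in pixels (assumed)
BONNET_ROW = 760   # first image row covered by the bonnet (assumed)

def detect_corners(gradient_mag):
    """Return (row, col) of the strongest-gradient pixel per grid cell,
    skipping cells that overlap the bonnet region."""
    h, w = gradient_mag.shape
    picks = []
    for y0 in range(0, h, CELL):
        if y0 + CELL > BONNET_ROW:  # cell reaches into the bonnet: skip
            continue
        for x0 in range(0, w, CELL):
            cell = gradient_mag[y0:y0 + CELL, x0:x0 + CELL]
            dy, dx = np.unravel_index(np.argmax(cell), cell.shape)
            picks.append((y0 + dy, x0 + dx))
    return picks
```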

While I can get LDSO to initialize and the performance is a lot better than before, it still doesn't compare with ORB-SLAM. For one, the process loses track and quits about midway, and the generated map has a lot of inconsistencies, mostly from not being planar and from overestimating the pitch on mild inclines and declines of the road.

Here's a video of the visualization:
https://drive.google.com/open?id=1am3xgN_RBEWQg8QnMUQJOtQ_8GogA7WU

@rui2016
Collaborator

rui2016 commented Nov 7, 2019

Hi jimaldon,

from the video it seems LDSO is still not running properly.

Another, easier way to remove the effect of the engine cover is simply to crop off the bottom part of the image; then you don't need to modify any code. Could you try this?
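As a minimal sketch of this cropping approach (assumed values: `BONNET_ROWS` is a guess, inspect the images to pick the real cut line), a preprocessing step could be as simple as:

```python
import numpy as np

# Minimal preprocessing sketch: drop the bottom rows containing the bonnet
# before feeding images to LDSO. BONNET_ROWS is an assumed value. Since
# only bottom rows are removed, fx, fy, cx, cy stay valid; only the output
# image height in the calibration file needs updating.
BONNET_ROWS = 200  # number of bottom rows to discard (assumed)

def crop_bonnet(image):
    """Return the image with the bottom BONNET_ROWS rows removed."""
    return image[:-BONNET_ROWS, :]
```

With these assumed numbers, the 1280x960 Oxford frames would become 1280x760, so the output-size line in the calibration file would change accordingly.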

@NikolausDemmel
Contributor

I suggest you first try what Rui suggested, to determine whether the masking is the issue or something else.

Other things that might lead to bad performance include bad geometric or photometric calibration, lack of known exposure times, etc.

For the mask, changing the feature detection is not enough. You also have to ensure that no observations in target frames fall on the mask (or hope that the outlier detection catches them...). Moreover, if the images are distorted, DSO undistorts them as a preprocessing step, so you need to ensure you also undistort the mask.
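The mask-undistortion point above can be sketched as follows. This assumes the undistortion is expressed as per-output-pixel lookup maps into the distorted input (as in typical remap-style pipelines); the names `map_y`/`map_x` are illustrative, not LDSO's actual identifiers.

```python
import numpy as np

# Sketch: pass the binary bonnet mask through the same per-pixel lookup
# maps used to undistort the images, via nearest-neighbour sampling, so
# the mask is valid in the undistorted frame. map_y/map_x hold, for each
# output pixel, the source coordinate in the distorted input image.
def undistort_mask(mask, map_y, map_x):
    """mask: bool array over the input image (True = masked/bonnet).
    Returns the mask resampled into the undistorted output frame."""
    ys = np.clip(np.rint(map_y).astype(int), 0, mask.shape[0] - 1)
    xs = np.clip(np.rint(map_x).astype(int), 0, mask.shape[1] - 1)
    return mask[ys, xs]  # nearest-neighbour lookup, stays boolean
```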

@rancheng

> Hi @jimaldon and @NikolausDemmel I think the -O2 or -O3 is missing in the cmakelist which probably causes poor performance when running the examples.

I was trying to open a similar issue, but just found out that you have posted this. XD
