
Reproduce RbA results #1

Open · RGring opened this issue Nov 28, 2024 · 3 comments
RGring commented Nov 28, 2024

Hello.
Thanks for the simulator, which allows evaluation on more complex anomaly scenarios!

I would like to reproduce the results reported with RbA.

How can I generate the exact same images that you used for training and evaluation? Will I get them when I run the simulator with your default Definitions.py? This is not entirely clear to me.

Thanks in advance!

daniel-bogdoll (Contributor) commented:
I would have to double-check, but I'm pretty certain we used the legacy dataset for this: https://zenodo.org/records/11577567

RGring (Author) commented Dec 4, 2024

Thanks for your reply. I had a look. Can I assume that you used all images containing an anomaly for the evaluation split and the rest for the training split? If not, could you provide a list of the images used for the different splits?
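
For concreteness, this is roughly the kind of split I have in mind (just an illustrative sketch; the `has_anomaly` check and the directory layout are hypothetical and would need to be adapted to the legacy dataset's actual annotation format):

```python
# Hypothetical sketch of the split described above: frames with an anomaly go
# to the evaluation split, all remaining frames to the training split.
# Directory layout and the anomaly check are assumptions, not the dataset's
# actual format.
from pathlib import Path

def has_anomaly(frame_dir: Path) -> bool:
    """Assumed check: a frame counts as anomalous if it ships an anomaly
    annotation file (file name is hypothetical)."""
    return (frame_dir / "anomaly_mask.png").exists()

def split_frames(dataset_root: Path):
    eval_split, train_split = [], []
    for frame_dir in sorted(p for p in dataset_root.iterdir() if p.is_dir()):
        (eval_split if has_anomaly(frame_dir) else train_split).append(frame_dir.name)
    return train_split, eval_split

if __name__ == "__main__":
    train, evaluation = split_frames(Path("legacy_dataset"))
    print(f"{len(train)} training frames, {len(evaluation)} evaluation frames")
```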

daniel-bogdoll (Contributor) commented:
Sorry, I don't have access to my work computer at the moment (I'm on a research stay), so I'm only 90% sure. But I believe we took the whole legacy dataset for evaluation (including the frames without anomalies) and generated a new NORMALITY dataset (via Definitions.py) matching the size of the Cityscapes dataset, i.e. 2,975 frames, using CARLA towns 1-7, 9, and 10, and the same weather settings as in the eval set.
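
As a rough illustration of that budget, something along these lines could be used to spread 2,975 frames over the towns and weather presets (the preset names and the planning helper are placeholders, not the actual Definitions.py interface):

```python
# Hypothetical sketch: distribute a Cityscapes-sized budget (2,975 frames)
# across CARLA towns 1-7, 9, 10, cycling through the same weather presets as
# the evaluation set. Preset names and the planning helper are placeholders,
# not the actual Definitions.py API.
import itertools

TOWNS = ["Town01", "Town02", "Town03", "Town04", "Town05",
         "Town06", "Town07", "Town09", "Town10"]
WEATHER_PRESETS = ["ClearNoon", "CloudyNoon", "WetNoon"]  # assumed eval presets
TOTAL_FRAMES = 2975  # size of the Cityscapes training set

def plan_recordings(total: int):
    """Return (town, weather, n_frames) assignments covering `total` frames
    as evenly as possible over all town/weather combinations."""
    combos = list(itertools.product(TOWNS, WEATHER_PRESETS))
    per_combo, remainder = divmod(total, len(combos))
    plan = []
    for i, (town, weather) in enumerate(combos):
        plan.append((town, weather, per_combo + (1 if i < remainder else 0)))
    return plan

if __name__ == "__main__":
    for town, weather, n in plan_recordings(TOTAL_FRAMES):
        print(f"{town} / {weather}: {n} frames")
```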
