some questions about synthetic training data #1
Comments
Hi,
Cool! It seems that DeepGTAV can do more than I imagined. I'll try it. But I'm still really confused about the training ground truth.
In this code, disparity is used as the ground truth, but I don't see why depth couldn't be used, since depth and disparity are easily convertible to one another. Either depth or disparity can serve as the ground truth for training; disparity is used here simply because that is what was provided by default.
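For anyone reading along, here is a minimal sketch of that conversion for a rectified stereo setup, assuming the focal length (in pixels) and baseline (in metres) are known; the function names and camera values below are illustrative and not part of this repository:

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m, eps=1e-6):
    """depth = focal_length * baseline / disparity (rectified stereo)."""
    depth = np.zeros_like(disparity_px, dtype=np.float64)
    valid = disparity_px > eps  # disparity near zero means invalid or infinitely far
    depth[valid] = focal_px * baseline_m / disparity_px[valid]
    return depth

def depth_to_disparity(depth_m, focal_px, baseline_m, eps=1e-6):
    """The inverse mapping: disparity = focal_length * baseline / depth."""
    disparity = np.zeros_like(depth_m, dtype=np.float64)
    valid = depth_m > eps
    disparity[valid] = focal_px * baseline_m / depth_m[valid]
    return disparity

# Example with made-up camera parameters:
disparity_map = np.random.uniform(1.0, 64.0, size=(256, 512))
depth_map = disparity_to_depth(disparity_map, focal_px=960.0, baseline_m=0.54)
```

Because the two quantities are related by a fixed, invertible mapping, a model trained on one can always be evaluated in terms of the other.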
Okay, thanks.
Did you use DeepGTAV from https://github.com/aitorzip/DeepGTAV to generate the GTA data? I tried DeepGTAV but could not find the disparity information.
Hi Atapour,
Thanks for sharing; the results are really amazing!
But I'm not sure whether I have understood the synthetic data correctly:
Using the DeepGTAV tool, you mounted a camera on a virtual car in GTA for data collection,
so that you could capture training data from that camera's perspective.
I'm wondering how you obtained the ground-truth disparity.
Did you mount two cameras on the car and triangulate? (A generic sketch of that idea is included after this message.)
Could you share the training dataset you used, or at least a few sample image/ground-truth pairs?
Secondly, why not train on depth directly instead of disparity, so that the model could output depth directly?
Thanks.
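For context, a common way to obtain a disparity map from two horizontally offset, rectified cameras (real or rendered in a game engine) is stereo block matching. The sketch below uses OpenCV's StereoSGBM purely as an illustration; it is not necessarily how the dataset for this repository was produced, and the file names, matcher parameters, and camera values are placeholders:

```python
import cv2
import numpy as np

# Rectified left/right renders from two horizontally offset virtual cameras.
# File names are placeholders.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching; parameters are illustrative, not tuned.
matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,  # must be a multiple of 16
    blockSize=5,
)

# compute() returns a fixed-point disparity map scaled by 16.
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# Invalid matches come back negative; mask them out before converting
# to depth (depth = focal_length_px * baseline_m / disparity).
disparity[disparity < 0] = 0.0
```

Either the disparity itself or the depth derived from it can then serve as training ground truth, as discussed in the comments above.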