Other datasets #10
In general, you need to customize the dataloader and preprocess the data for each dataset.
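For concreteness, a customized dataloader usually follows the same skeleton regardless of dataset; only the directory layout and annotation parsing change. Below is a minimal sketch assuming a PyTorch setup and a hypothetical layout (the names CrowdDataset, images/, and points/ are illustrative, not the repo's actual code):

```python
# Minimal sketch of a dataset-specific loader (hypothetical layout; adapt
# the annotation parsing to whatever format your dataset ships with).
import os
import glob
import numpy as np
from PIL import Image
import torch
from torch.utils.data import Dataset

class CrowdDataset(Dataset):
    def __init__(self, root, transform=None):
        # Assumes images in <root>/images and point annotations saved as
        # .npy arrays of shape (N, 2) in <root>/points -- adjust as needed.
        self.img_paths = sorted(glob.glob(os.path.join(root, "images", "*.jpg")))
        self.transform = transform

    def __len__(self):
        return len(self.img_paths)

    def __getitem__(self, idx):
        img_path = self.img_paths[idx]
        pts_path = img_path.replace("images", "points").replace(".jpg", ".npy")
        img = Image.open(img_path).convert("RGB")
        points = np.load(pts_path)  # (N, 2) array of (x, y) head locations
        if self.transform is not None:
            img = self.transform(img)
        return img, torch.from_numpy(points).float()
```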
Thank you for your answer.
Hello author,
While training on the UCF-QNRF dataset, I found that one epoch takes about 6 minutes and each evaluation takes about 2 minutes. Is this time consumption normal? How much time did training take for you?
Typically, an NVIDIA RTX 3090 is sufficient to train the model. Regarding the training time, we suggest preprocessing the UCF-QNRF dataset before training, because loading the original images during training is time-consuming. After preprocessing, one epoch will take less than 40 seconds if you use two NVIDIA RTX 3090 GPUs.
I have preprocessed the UCF-QNRF dataset following the operations described in the paper, that is, limiting the longer side to 1536 pixels and resizing both the images and the ground-truth points accordingly. The rest of the dataloader is written with reference to SHA.py. Are there any other preprocessing operations that I have missed?
You should resize the images and ground-truth points once, save the preprocessed data, and then train the model on the saved data. Resizing images on the fly is time-consuming.
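That one-time preprocessing can be done with a short script like the sketch below: it caps the longer side at 1536 px, scales the point annotations by the same factor, and saves both to disk. The `_ann.mat` file naming and the `annPoints` key follow the official UCF-QNRF release, but verify them against your copy; everything else here is illustrative:

```python
# One-time preprocessing for UCF-QNRF: cap the longer side at 1536 px and
# scale the ground-truth points by the same factor, then save to disk so
# training never resizes on the fly.
import os
import glob
import numpy as np
from PIL import Image
from scipy.io import loadmat

MAX_SIDE = 1536

def preprocess(src_dir, dst_dir):
    os.makedirs(dst_dir, exist_ok=True)
    for img_path in sorted(glob.glob(os.path.join(src_dir, "*.jpg"))):
        img = Image.open(img_path).convert("RGB")
        w, h = img.size
        ratio = MAX_SIDE / max(w, h)
        if ratio < 1.0:  # only downscale; leave smaller images untouched
            img = img.resize((int(w * ratio), int(h * ratio)), Image.BILINEAR)
        else:
            ratio = 1.0
        # UCF-QNRF ships one .mat per image; 'annPoints' holds (x, y) heads.
        mat_path = img_path.replace(".jpg", "_ann.mat")
        points = loadmat(mat_path)["annPoints"].astype(np.float32) * ratio
        base = os.path.splitext(os.path.basename(img_path))[0]
        img.save(os.path.join(dst_dir, base + ".jpg"))
        np.save(os.path.join(dst_dir, base + ".npy"), points)
```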
Can anyone share the JHU.py file? I am getting dimension mismatches when train.sh is run.
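Not the repo's JHU.py, but for what it's worth: dimension mismatches in dense-prediction models are often caused by image sides that are not a multiple of the network's downsampling stride. A common workaround is to pad each image up to the next multiple before the forward pass; a hypothetical sketch (the stride value depends on the backbone):

```python
import torch
import torch.nn.functional as F

def pad_to_stride(img: torch.Tensor, stride: int = 32) -> torch.Tensor:
    # img: (C, H, W) tensor; pad on the right/bottom so H and W are
    # divisible by the stride (stride=32 is a guess -- check your backbone).
    _, h, w = img.shape
    pad_h = (stride - h % stride) % stride
    pad_w = (stride - w % stride) % stride
    # F.pad's 4-tuple pads the last two dims: (left, right, top, bottom).
    return F.pad(img, (0, pad_w, 0, pad_h))

# e.g. a 3x500x700 image becomes 3x512x704
print(pad_to_stride(torch.zeros(3, 500, 700)).shape)
```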
Have you managed to reproduce the paper's metrics on the UCF-QNRF dataset? If so, would you mind sharing the relevant code?
Hello,
I would like to ask: if I want to retrain the models on the UCF-QNRF and JHU datasets, what changes need to be made to the existing code?