regarding the QNRF dataset #11
Here are a few things you may check before training:
The MAE of the model I trained only reaches 96, which is still far from the 79 reported in the paper. Do you know what may be causing this? Would you be willing to share the training code and training parameters for UCF-QNRF?
Hello author, I would like to ask whether parameters such as K in the model need to be modified when training on the UCF-QNRF dataset. I trained for a long time and even increased the number of epochs to 4500, but the best MAE I obtained was only 93.5, which still leaves a gap from the paper's results. Could you provide the training log for UCF-QNRF?
Sorry for the late reply. We notice that the performance on the UCF-QNRF dataset seems somewhat behind the reported results. We will try to figure out the reasons and update accordingly. However, this may take some time. For your questions:
Hello author, have you solved the problem of the large gap between the UCF-QNRF results and the numbers reported in the paper?
We found that abnormal loss values computed on dense images could affect the performance. A simple workaround is to eliminate such abnormal losses during training.
What modifications should be made to the code? Could you please provide the modified code?
While we are still experimenting with some modifications, here is a simple modification you may try:
to
Similarly, this also applies to pet.py L489. This modification aims to eliminate potential abnormal loss. In addition, we plan to update the configurations after we finalize the results. Thanks for your feedback.
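For readers following along: the exact snippet from the reply above is not preserved in this thread, so the following is only a minimal sketch of one way to "eliminate abnormal loss" during training, not the author's actual change. The threshold value and the function/variable names are assumptions.

```python
import torch

# Hypothetical cutoff for an "abnormal" loss; tune it for your setup.
LOSS_THRESHOLD = 1e4

def compute_safe_loss(loss_dict, weight_dict):
    """Sum the weighted losses, skipping abnormal (NaN, inf, or exploding) values."""
    losses = sum(loss_dict[k] * weight_dict[k] for k in loss_dict if k in weight_dict)
    # On very dense images the loss can become NaN/inf or unreasonably large.
    # Multiplying by zero keeps the computation graph intact but produces a zero
    # gradient, so the optimizer effectively skips this batch instead of exploding.
    if (not torch.isfinite(losses)) or losses.item() > LOSS_THRESHOLD:
        losses = losses * 0.0
    return losses
```

A guard like this would replace the plain weighted sum wherever the total loss is computed (e.g., in the training loop or the criterion), under the assumption that occasionally dropping a batch is preferable to backpropagating a pathological gradient.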
Thank you for your answer; I will modify the code according to your suggestions. In addition, dear author, may I ask whether the NWPU-Crowd results in the paper were obtained on the validation set?
We report the results on the NWPU-Crowd test set (see Table 2).
Hello, author. I have modified the code according to your suggestions, but the retrained UCF-QNRF model only reaches around 92 MAE after 2500 training epochs, whereas the best result before the modification was 90. Do you know why this might be?
Did you modify
Thanks for your suggestion. I have modified img_ds_idx = den_sort[len(den_sort)//2] to img_ds_idx = den_sort[1:len(den_sort)//2]; next I will try to increase the number of training samples in the dataloader by a ratio of 1/batch_size.
Regarding your proposed increase of training samples in the dataloader by a ratio of 1/batch_size, does that mean building an inner loop to add additional samples? Could you provide the relevant code?
Adding data resampling in
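The reply above is truncated in this thread, so here is only a hypothetical sketch of what "data resampling at a ratio of roughly 1/batch_size" could look like as a dataset wrapper. The class name, the random selection rule, and the seed are assumptions; the repository may instead duplicate the densest images (e.g., selected via den_sort) or use an inner loop in the dataloader.

```python
import random

from torch.utils.data import Dataset


class ResampledCrowdDataset(Dataset):
    """Wraps a crowd-counting dataset and oversamples part of it.

    Roughly len(dataset) // batch_size extra indices are appended, so the
    epoch is extended by a ratio of about 1/batch_size. Here the extra
    samples are chosen at random; choosing the densest images instead
    would mirror the den_sort-based selection discussed above.
    """

    def __init__(self, base_dataset, batch_size, seed=0):
        self.base = base_dataset
        rng = random.Random(seed)
        n_extra = max(1, len(base_dataset) // batch_size)
        extra = rng.sample(range(len(base_dataset)), n_extra)
        self.indices = list(range(len(base_dataset))) + extra

    def __len__(self):
        return len(self.indices)

    def __getitem__(self, i):
        return self.base[self.indices[i]]
```

Usage would simply be wrapping the existing training dataset before building the DataLoader, e.g. `train_set = ResampledCrowdDataset(train_set, batch_size=8)`.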
@cxliu0 Thanks to the author for providing this precious information 👍. However, it seems hard to deterministically attain the results reported in the paper; I also got 96.1 MAE on UCF-QNRF after 1500 epochs. @little-seasalt It would be useful to follow your reproduced results as well. What results did you get on SHB and JHU-Crowd++? Thank you in advance 👍. In my case, the best result over 5 experiments was 6.97 MAE (11.2 MSE) on SHB with the same code as SHA, and the best result over 3 experiments was 61.2 MAE (263 MSE) on the JHU-Crowd++ test split, preprocessing the inputs as suggested by the author. @cxliu0 May I ask whether there are any known issues that cause abnormal results during reproduction? Thank you.
The best result I got on SHB was 6.63 MAE, and the best result I got on JHU-Crowd++ was 59.37 MAE.
Thanks for your suggestion. I will keep trying to train on the UCF-QNRF dataset so that my results come closer to the paper's numbers.
Dear author, after many attempts I still cannot reproduce the paper's results on the UCF-QNRF dataset. Could you provide the code needed to reproduce them, such as the dataloader code and the model-related code?
We plan to update the configurations when we finalize the details. If you urgently need to reproduce the results, we can send you a rough version of the relevant code.
Thanks for your answer. I would be grateful if you could send a rough version of the relevant code to [email protected]. |
@cxliu0 Could you kindly send that version to me as well (my email is [email protected])? Thank you very much in advance.
We will send you a rough version of the code in the next few days.
Could you also send that version to me (my email: [email protected])? Thank you very much in advance.
Dear author,
I have processed the QNRF dataset as per the requirements in your paper, limiting the long edge to within 1536 pixels and training it with a size of 256×256 pixels. I've kept the remaining parameters consistent with SHA and even increased the number of epochs to 3000. I also attempted the suggested changes regarding scale augmentation as you mentioned.
However, the MAE still remains around 100. I suspect the problem lies on my side; could there be an issue with my parameter configuration or with the preprocessing?
Thank you very much for your response.
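For reference, the long-edge preprocessing described in the comment above could look roughly like the sketch below. This is not the repository's official preprocessing script; the function name, the annotation format (an (N, 2) array of (x, y) head points), and the use of PIL are assumptions for illustration.

```python
import numpy as np
from PIL import Image

MAX_LONG_EDGE = 1536  # long-edge limit mentioned in the thread


def limit_long_edge(image_path, points):
    """Resize an image so its longer side is at most MAX_LONG_EDGE pixels,
    rescaling the head-point annotations by the same factor."""
    img = Image.open(image_path).convert("RGB")
    w, h = img.size
    points = np.asarray(points, dtype=np.float32)
    scale = MAX_LONG_EDGE / max(w, h)
    if scale < 1.0:  # only downscale; smaller images are left unchanged
        new_w, new_h = int(round(w * scale)), int(round(h * scale))
        img = img.resize((new_w, new_h), Image.BILINEAR)
        points = points * scale
    return img, points
```

Running this once over the raw UCF-QNRF images and saving the resized images and rescaled points would match the "long edge within 1536" setup described above, after which the 256×256 training crops are taken from the resized images.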