Unusual Z-Value Predictions and Multiple Predictions per Class on Custom Dataset #22
Comments
Looks like the metrics are very good. So these large boxes should have low scores. What scores do they have?
Clarification and results update: I would like to clarify the performance metrics for class ID 6 and class ID 7. Specifically, I mistakenly referred to the metrics as mAP when I meant AP (Average Precision). Additionally, I found that by raising score_thr to 0.7, I was able to filter out some of the inaccurate predictions mentioned previously, at the obvious expense of mAP. Updated performance metrics: below are the detailed results for each class:
Here is the defined order of classes within our dataset:
Regarding the specific examples I mentioned earlier, where I observed unrealistic z-axis values in the predicted bounding boxes: those were for class ID 6 ("Base right ear") and class ID 7 ("Base left ear"). Note that those results were obtained with a score_thr of 0.3. By raising score_thr to 0.7, I was able to filter out some of the inaccurate predictions for these classes, but it dropped my metric to 0.2152 mAP@50.
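For anyone reproducing this, the effect of score_thr can be checked outside the visualizer. A minimal sketch of score-based filtering, assuming predictions arrive as parallel box/score arrays (the scores below are invented for illustration; only the box coordinates come from the sample file quoted further down):

```python
import numpy as np

def filter_by_score(boxes: np.ndarray, scores: np.ndarray, score_thr: float):
    """Keep only predictions whose confidence exceeds score_thr."""
    keep = scores > score_thr
    return boxes[keep], scores[keep]

# Three predicted boxes in (x_min, y_min, z_min, x_max, y_max, z_max) form;
# the middle one is the degenerate ~647 m-tall box from the sample file.
boxes = np.array([
    [0.7325, 0.0170, -8.6887, 0.8170, 0.1027, 11.7481],
    [0.4817, -0.1613, -321.8918, 0.5632, -0.0771, 325.0209],
    [0.6177, -0.0779, -9.3888, 0.7026, 0.0087, 12.4312],
])
scores = np.array([0.91, 0.35, 0.88])  # hypothetical confidences

kept_boxes, kept_scores = filter_by_score(boxes, scores, score_thr=0.7)
# With score_thr=0.7 the low-confidence outlier is dropped, mirroring how
# raising the threshold from 0.3 to 0.7 removed some bad predictions here,
# at the cost of also dropping lower-confidence true positives (hence the
# mAP drop reported above).
```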
I see you have great metrics on all classes except these.
I do have them in my training data; I had thought of this. I will look through my data more carefully. What would you suggest for the multiple predictions for one class shown in this example:
Try to play with |
That did fix my problem, thanks a lot |
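The suggestion above is cut off in the thread; the usual knob for duplicate detections of a single object is the NMS configuration in the model's test_cfg. A hedged sketch of such a config fragment (field names follow the pattern of mmdetection3d's anchor-based 3D heads and vary by model):

```python
# Hypothetical mmdetection3d config fragment -- exact field names depend on
# the detection head; this mirrors the pattern of anchor-based 3D heads.
model = dict(
    test_cfg=dict(
        use_rotate_nms=True,
        nms_thr=0.01,   # stricter overlap threshold -> fewer duplicate boxes
        score_thr=0.3,  # confidence floor applied to raw predictions
        nms_pre=100,    # candidates kept per sample before NMS
        max_num=50,     # final detections kept after NMS
    ))
```

Lowering nms_thr suppresses more of the overlapping boxes for the same object, while nms_pre and max_num cap how many candidates survive.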
Prerequisites
Searched through existing Issues and Discussions without finding a solution. Also reviewed relevant GitHub issues.
Experimented with adjusting score_thr to improve visualization outcomes.
Confirmed that the unexpected results are not due to errors in the training dataset.
Issue Description:
I am encountering unrealistic prediction values and multiple predictions for the same object class when training a model on a custom dataset. Specifically, some predicted bounding boxes have highly unrealistic z-values (e.g., a maximum z-value of 4000 meters), and some objects receive several predictions for a single class. I am trying to understand how the network generates these predictions and what might be causing such unusual results.
Environment
Dataset and Model Performance:
Issue Details
During visualization tests, I observed some bounding boxes with unrealistic z-values. For example, a generated text file in the format '{label} {x_min} {y_min} {z_min} {x_max} {y_max} {z_max}' shows z-values that are significantly off, such as this example from the dataset:
(Link to the problematic bounding box file: 1840_boxes.txt)
6 0.7325 0.0170 -8.6887 0.8170 0.1027 11.7481
6 0.4817 -0.1613 -321.8918 0.5632 -0.0771 325.0209
6 0.6177 -0.0779 -9.3888 0.7026 0.0087 12.4312
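A quick way to triage files like this is to scan for boxes whose z-extent is physically implausible. A small sketch, assuming one box per line in the format described above; the 20 m cutoff is an arbitrary illustrative limit, and the file name is taken from the link above:

```python
def flag_bad_boxes(path: str, max_z_extent: float = 20.0):
    """Return (line_no, label, z_extent) for boxes taller than max_z_extent."""
    bad = []
    with open(path) as f:
        for line_no, line in enumerate(f, 1):
            if not line.strip():
                continue
            label, _x0, _y0, z0, _x1, _y1, z1 = line.split()
            z_extent = float(z1) - float(z0)
            if z_extent > max_z_extent:
                bad.append((line_no, label, z_extent))
    return bad

for line_no, label, z_extent in flag_bad_boxes("1840_boxes.txt"):
    print(f"line {line_no}: class {label} spans {z_extent:.1f} m in z")
# On the three sample lines above, all are flagged; the worst spans ~647 m.
```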
In particular, class ID 6, which showed an AP@50 of 0.50, and class ID 7, with an AP@50 of 0.07, produced notably inaccurate predictions. While I understand that lower AP scores can mean poorer predictions, I adjusted the score_thr to 0.6 for the following example and it made no difference to the predicted bounding boxes. It did, however, decrease the overall mAP@50 to 0.73.
Questions:
Steps Taken
I have successfully created and trained the model on my custom dataset following the mmdetection3d documentation. The anomaly was detected during post-training visualization of the detection results.