Replies: 1 comment
-
Hi @cece3014, the evaluation metrics are calculated per model and should include metrics for all videos that the model was run on (i.e., used for prediction). Note that metrics are computed against ground-truth (user-labeled) data, so you need user-labeled frames in every video for which you want metrics to be calculated. Are you checking the metrics through Predict > Evaluation Metrics for Trained Models? Thanks,
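To illustrate the point about ground-truth labels: a metric like mean keypoint distance error is pooled over every video that has user-labeled frames, so an unlabeled video simply contributes nothing. This is only an illustrative sketch with hypothetical names and toy data, not the tool's actual API:

```python
import numpy as np

def pooled_distance_error(gt_by_video, pred_by_video):
    """Mean Euclidean keypoint error pooled over all labeled videos.

    gt_by_video / pred_by_video: dicts mapping video name -> array of
    shape (n_labeled_frames, n_keypoints, 2). A video with no labeled
    frames has no entry here, so it cannot appear in the metrics.
    """
    errors = []
    for name, gt in gt_by_video.items():
        pred = pred_by_video[name]
        # per-keypoint Euclidean distance between prediction and label
        d = np.linalg.norm(pred - gt, axis=-1)
        errors.append(d.ravel())
    # pool errors across all videos before averaging
    pooled = np.concatenate(errors)
    return float(pooled.mean())

# Toy example: two videos, one labeled frame each, two keypoints.
gt = {
    "vid1": np.array([[[0.0, 0.0], [1.0, 1.0]]]),
    "vid2": np.array([[[2.0, 2.0], [3.0, 3.0]]]),
}
pred = {
    "vid1": np.array([[[0.0, 1.0], [1.0, 1.0]]]),  # one keypoint off by 1 px
    "vid2": np.array([[[2.0, 2.0], [3.0, 3.0]]]),  # perfect prediction
}
print(pooled_distance_error(gt, pred))  # -> 0.25
```

If only the last video had user-labeled frames, the pooled metrics would reflect that video alone, which matches the symptom described in the question below.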
-
Hello!
I hope this post finds you well! I am struggling to access evaluation metrics across all videos rather than just the last video in my dataset. Do you have any suggestions on how to do this?
Best,
Celest