As the title says: GFL may give a higher AP, but what about the false-positive rate? I trained NanoDet on my own dataset, and the AP is indeed much higher than YOLOv3's, but the false-positive rate is also high. I suspect this is related to GFL's confidence representation. Could you clarify? Thanks.
In principle, AP is computed by sorting detections by score: the more correct detections ranked near the top, the higher the AP. So a higher AP shouldn't by itself mean a higher false-positive rate, which is a bit odd. Maybe you need to adjust the score threshold you cut at?
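To make the "AP is score ranking" point concrete, here is a minimal sketch (not NanoDet's or pycocotools' actual code) of all-point AP over a list of `(score, is_true_positive)` detections, where `num_gt` is the number of ground-truth boxes. The function name and data layout are illustrative assumptions:

```python
# Minimal all-point AP sketch: sort detections by score descending and
# accumulate precision at each true positive's rank. AP is high exactly
# when correct detections are ranked ahead of false ones.
def average_precision(dets, num_gt):
    """dets: list of (score, is_true_positive); num_gt: ground-truth count."""
    dets = sorted(dets, key=lambda d: d[0], reverse=True)
    tp = 0
    ap = 0.0
    for rank, (_score, is_tp) in enumerate(dets, start=1):
        if is_tp:
            tp += 1
            ap += tp / rank  # precision at this recall point
    return ap / num_gt if num_gt else 0.0
```

Note that the same set of detections scores a higher AP when the true positives carry the higher confidences, even though the false positives are still present, which is why AP alone says little about the false-positive rate at a fixed threshold.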
Hmm, I looked at the PR curve again. The extra AP may be recall bought at the cost of precision. I tried raising the threshold: the false-positive rate went down, but so did the AP.
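The precision/recall trade-off being described can be sketched as a simple threshold sweep. This is an illustrative helper, not part of NanoDet; the detection format `(score, is_true_positive)` is an assumption:

```python
# Sketch: precision and recall at a given score threshold.
# Raising `thresh` drops low-score detections, which typically removes
# false positives (precision up) but also misses some objects (recall down).
def precision_recall_at(dets, num_gt, thresh):
    """dets: list of (score, is_true_positive); num_gt: ground-truth count."""
    kept = [is_tp for score, is_tp in dets if score >= thresh]
    tp = sum(kept)
    precision = tp / len(kept) if kept else 1.0  # no detections: vacuous precision
    recall = tp / num_gt if num_gt else 0.0
    return precision, recall
```

Sweeping `thresh` over the score range traces out the PR curve; the reported false-positive rate corresponds to one operating point on that curve, while AP integrates over all of them.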
Also, about the joint class-confidence representation: it is an elegant unified design, but it depends too heavily on IoU. If the IoU estimate is off, it affects both the class decision and the confidence estimate, so it isn't very robust. This is my personal understanding; please correct me if I'm wrong.
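For reference, the coupling being discussed comes from GFL's Quality Focal Loss, where the classification target is the predicted box's IoU with its ground truth rather than a hard 0/1 label. A plain-Python sketch of the per-element loss (the `beta` default follows the common choice in the paper; this is a simplified scalar version, not the vectorized training code):

```python
import math

# Quality Focal Loss sketch: the soft label y is the IoU with the matched
# ground-truth box (0 for negatives), so one score jointly encodes class
# confidence and localization quality. The |y - sigma|^beta factor
# down-weights well-fit examples, focal-loss style.
def quality_focal_loss(sigma, y, beta=2.0):
    """sigma: predicted sigmoid score in (0, 1); y: IoU soft label in [0, 1]."""
    bce = -(y * math.log(sigma) + (1 - y) * math.log(1 - sigma))
    return abs(y - sigma) ** beta * bce
```

Since the label `y` is itself an IoU, any error in the box/IoU estimate shifts both the target and the focal weight, which is one way to see the robustness concern raised above.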