
Doubts about ROC-AUC as the evaluation metric under class imbalance #72

Open
zhangrong1722 opened this issue Jul 14, 2019 · 1 comment

zhangrong1722 commented Jul 14, 2019

Hi!
In this project, the ROC-AUC score is used as the evaluation metric for segmentation performance. However, when negative pixels vastly outnumber positive ones, the ROC curve becomes overly optimistic. For example, I found that positive pixels account for only about 1% of one random batch during training, and in a segmentation task we mostly care about the positive pixels (blood vessels).

After running this project, I obtained a high accuracy (0.97) but a relatively low Dice coefficient (0.82) on the validation set. The two scores are inconsistent, and I think the Dice coefficient is the more faithful one for a segmentation task.

Besides, some extracted patches from the original images contain no positive pixels at all, and sklearn raises an UndefinedMetricWarning when I compute the ROC-AUC score in that case, because the true positive rate is undefined. Hence, I think it is necessary to revisit the evaluation metric, for example by adding the Dice coefficient as an additional metric.
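For reference, here is a minimal sketch of what I mean, assuming per-pixel probability maps and binary ground-truth masks as NumPy arrays; the helper names `dice_coefficient` and `safe_roc_auc` are my own and not part of this repo:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def dice_coefficient(y_true, y_prob, threshold=0.5, eps=1e-7):
    # Dice = 2|A ∩ B| / (|A| + |B|), computed on the binarized prediction.
    # eps keeps the ratio defined when both masks are empty.
    y_pred = (y_prob >= threshold).astype(np.float64)
    y_true = y_true.astype(np.float64)
    intersection = (y_true * y_pred).sum()
    return (2.0 * intersection + eps) / (y_true.sum() + y_pred.sum() + eps)

def safe_roc_auc(y_true, y_prob):
    # ROC-AUC cannot be computed on a patch that contains only one class
    # (e.g. no positive pixels), so return None instead of calling sklearn.
    y_true = y_true.ravel()
    if np.unique(y_true).size < 2:
        return None
    return roc_auc_score(y_true, y_prob.ravel())
```

Per-patch AUCs that come back as `None` could then simply be excluded from the average, while the Dice coefficient stays well defined everywhere.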

@mariogarcc

@zhangrong1722 interesting. Could you open a small pull request with your implementation? I'm not yet well versed enough in ML terminology to understand exactly what you're saying. Thank you kindly.
