Hi!
In this project, the ROC-AUC score is used as the evaluation metric for segmentation performance. However, when negative pixels far outnumber positive ones, the ROC curve becomes overly optimistic. For example, I found that positive pixels account for only about 1% of one random batch during training. In a segmentation task we mostly care about the positive pixels (blood vessels). After running this project, I got a high accuracy (0.97) but a relatively low Dice coefficient (0.82) on the validation set; the two scores are inconsistent, and I think the Dice coefficient is the more faithful one for segmentation. Besides, some patches extracted from the original images contain no positive pixels at all, and sklearn raises an UndefinedMetricWarning when I compute the ROC-AUC score, since the true positive rate is meaningless in that case. Hence, I think it is worth revisiting the evaluation metric, e.g. adding the Dice coefficient as another metric.
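For reference, here is a minimal sketch of the Dice coefficient I have in mind (the function name and the `eps` smoothing term are my own choices, not from this repo); it stays well defined even on an all-negative patch, where ROC-AUC is undefined:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2*|pred AND target| / (|pred| + |target|) on binary masks.

    eps smooths the ratio so an all-negative patch (both masks empty)
    yields 1.0 instead of 0/0, rather than a warning or an error.
    """
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Example: 1 overlapping positive pixel out of 2 predicted + 1 true
# -> Dice = 2*1 / (2+1) ~= 0.667
print(dice_coefficient([1, 1, 0, 0], [1, 0, 0, 0]))

# By contrast, sklearn's roc_auc_score cannot score a single-class patch
# (depending on the version it raises ValueError or emits a warning):
# roc_auc_score([0, 0, 0, 0], [0.1, 0.2, 0.1, 0.3])  # undefined
```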
@zhangrong1722 Interesting. Could you open a small pull request with your implementation? I'm not well versed enough in ML terminology yet to follow exactly what you're saying. Thank you kindly.