Is it worth testing the model on the Spider dataset's images? #18
Comments
@jcohenadad @sandrinebedard @valosekj @NathanMolinier if you have any suggestions!
I tried
These are the GT that @NathanMolinier used for total spine seg, no?
Yes indeed, but I think they are not perfect, since they were generated by registering the PAM50 template to each subject's space. Another idea would be to compare your model with totalspineseg to show that your model is more accurate.
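For such a model-vs-totalspineseg comparison, a standard choice is the Dice coefficient between the two binary masks. A minimal sketch (file names in the commented usage are placeholders, and loading via nibabel is an assumption about the workflow):

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice overlap between two binary segmentation masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, gt).sum() / denom

# Hypothetical usage on NIfTI volumes (paths are placeholders):
# import nibabel as nib
# model_seg = nib.load("sub-001_canal_seg.nii.gz").get_fdata() > 0.5
# tss_seg   = nib.load("sub-001_totalspineseg_seg.nii.gz").get_fdata() > 0.5
# print(dice(model_seg, tss_seg))
```

Averaging this score over subjects would give a single number to report for each method against the same reference.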
As we discussed in the last DCM normalization meeting (2024/10/30), when asking "where to find some data with accurate ground-truth segmentations to test the model", one answer was to try it on Spider. The other idea was to create new ground truths with sct_propseg on datasets we're interested in for validation.
So the main advantage is that the Spider segmentations are already available, on 257 subjects, but it's only lumbar, and not perfectly segmented: all of the images are over-segmented, for different reasons, see below.
Here for instance, the canal should be limited by the green line.
Or here, white voxels are segmented despite being outside the dural sac, which is the limit we defined for the spinal canal.
On the other hand, I could generate some segmentations with sct_propseg, but the correction step is quite time-consuming. However, it would make it possible to create a validation set mixing different datasets, with more accurate ground-truth segmentations.
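For reference, generating an initial segmentation with sct_propseg could look like the following (a sketch: the filename is a placeholder, and the correction tooling mentioned in the comments is an assumption, not part of the command):

```shell
# Run PropSeg on a T2w image (placeholder filename); the -c flag sets
# the image contrast. This writes an initial segmentation next to the
# input, which then needs manual correction before use as ground truth.
sct_propseg -i sub-001_T2w.nii.gz -c t2

# The corrected mask (edited e.g. in FSLeyes or ITK-SNAP) would then
# serve as the ground-truth segmentation in the validation set.
```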
So my question here would be: what should I choose? Knowing that doing both is also possible, and considering that I would use these validation methods for the article about the model.