
Add feature to use manually corrected labels on T2w data #17

Closed
jcohenadad opened this issue Apr 4, 2024 · 13 comments

jcohenadad commented Apr 4, 2024

As per #15, the script should be modified to use manually corrected labels if they exist. The labels should be located under ./derivatives/labels/ and have the suffix _seg on the T2w data.
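A sketch of that lookup logic (the function name, variable names, and exact filename suffix here are illustrative, mirroring the log output shown later in this thread, not the script's actual code):

```shell
#!/usr/bin/env bash
# Hypothetical helper: look for a manually corrected segmentation under
# derivatives/labels/ and report whether it will be used.
find_manual_seg () {
  local path_data="$1" subject="$2"
  local fileseg="${path_data}/derivatives/labels/${subject}/anat/${subject}_T2w_seg.nii.gz"
  echo "Looking for manual segmentation: ${fileseg}"
  if [[ -e "${fileseg}" ]]; then
    echo "Found! Using manual segmentation."
  else
    echo "Not found. Falling back to automatic segmentation."
  fi
}

# Example call (the dataset path is a placeholder):
find_manual_seg "/path/to/dataset" "sub-BB277"
```

If the file exists, the script uses it in place of the automatic segmentation; otherwise it proceeds as usual.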

@jcohenadad jcohenadad changed the title Add feature to use manually corrected labels Add feature to use manually corrected labels on T2w data Apr 4, 2024
@jcohenadad

Ah! It's already implemented 😅

And it is working. I tested it with a derivatives/labels T2w segmentation mask for one subject, and the script used it:

👉 Processing: sub-BB277_T2

Looking for manual segmentation: /Users/julien/temp/Lydia/data_small/derivatives/labels/sub-BB277/anat/sub-BB277_T2_seg.nii.gz
Found! Using manual segmentation.

Closing

@jcohenadad

@Kaonashi22 you can already use the manually corrected segmentations for your project.

@Kaonashi22

Version of the script: 78129ca

Regarding the issue about segmentations being overwritten, here is the path to the input directory: /dagher/dagher12/lydia12/SPINE_park/CENIR_ICEBERG_BIDS. There is no "derivatives" folder. I don't think the script generates this folder (and moves the segmentations there).

@jcohenadad

There is no "derivatives" folder.

Ah, so that's the problem. The manual segmentations should be located under a specific derivatives/ folder as per: #17 (comment)

I'll update the README with a clear example.

@Kaonashi22

So, I should create the derivatives folder myself and move the segmentation files there?

@jcohenadad

Yes. Or, much better, use an automatic procedure that also creates the JSON file (important for tracking provenance): https://github.com/spinalcordtoolbox/manual-correction
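For reference, the manual-correction workflow is driven by a YAML file listing the images to correct. A minimal sketch (the FILES_SEG key and filenames follow the project's README conventions; verify against the version you install):

```yaml
# config.yml: images whose spinal cord segmentation needs manual correction.
# Key name and filenames are illustrative; check the manual-correction README.
FILES_SEG:
  - sub-001_T2w.nii.gz
  - sub-002_T2w.nii.gz
```

The tool then opens each listed image for correction and saves the corrected mask, along with a JSON sidecar recording provenance, under the derivatives folder.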

we covered this in the latest SCT course (maybe @valosekj you know the exact passage in the video/slides)

@valosekj

we covered this in the latest SCT course (maybe @valosekj you know the exact passage in the video/slides)

Yes! The passage about manual corrections starts at 6:44:26.
We also have a step-by-step example of SC segmentation corrections on our wiki.
There is also this YouTube tutorial on correcting segmentations across multiple subjects. It is slightly outdated (we now use https://github.com/spinalcordtoolbox/manual-correction instead of the script described in the video), but the overall concept is the same.

Let me know if you need any further info; I will be happy to help!

@Kaonashi22

Thanks @valosekj and @jcohenadad.
I actually used the "manual-correction" module to correct the masks. In the end, the derivatives folder only includes subjects with these corrected masks, not those that didn't require a manual correction (as they were not in the yml file). So their segmentations are overwritten.
I guess I should manually export the masks of these subjects to the derivatives folder as well.

@Kaonashi22

Would you know if the segmentations are identical between different runs? I already did the QC and manual corrections on previous segmentations. After rerunning the pipeline, I obtained new segmentations, and I'm wondering whether I need to redo the QC and corrections.

@jcohenadad

I actually used the "manual-correction" module to correct the masks. In the end, the derivatives folder only includes subjects with these corrected masks, not those that didn't require a manual correction (as they were not in the yml file). So their segmentations are overwritten.
I guess I should manually export the masks of these subjects to the derivatives folder as well.

No, you should not. What you describe is the correct procedure, except that the output folder should be a pristine folder, to avoid any conflict with previously processed data. The way I do it is to set -path-output in my sct_run_batch command to a name uniquely identified by the date and time, e.g. -path-output results_20240411_091115. That way, the non-manually-corrected segmentations are not overwritten, but simply re-created. This is the best way to ensure reproducibility of the whole pipeline.

If you run out of space, feel free to delete your old results_* folders.

I hope that's clearer.
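A minimal sketch of that convention (the script name and data path are placeholders; only the time-stamped -path-output is the point here):

```shell
#!/usr/bin/env bash
# Build a time-stamped output folder name so each run writes to a fresh folder,
# e.g. results_20240411_091115.
PATH_OUTPUT="results_$(date +%Y%m%d_%H%M%S)"
echo "${PATH_OUTPUT}"

# Then point sct_run_batch at it (paths below are placeholders):
# sct_run_batch -script process_data.sh -path-data /path/to/dataset -path-output "${PATH_OUTPUT}"
```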

Would you know if the segmentations are identical between different runs?

Yes, they are identical; that is the point of reproducibility. The only thing you need to take care of is the manual corrections, which you should put under the derivatives folder.

@Kaonashi22

I see, that's clearer, thanks!

@valosekj

In the end, the derivatives folder only includes subjects with these corrected masks

Yes, this is correct! The manual-correction module saves only the manually corrected masks under derivatives.

, not those that didn't require a manual correction (as they were not in the yml file).

Just for info: if you really needed to save masks that don't require manual correction under derivatives, you could use the -add-seg-only argument (described here). This argument copies masks that aren't in the -config list (i.e., those that didn't require correction) into derivatives.
This might be useful, for example, if you wanted to store masks generated by a specific algorithm version.

@Kaonashi22

Kaonashi22 commented Apr 12, 2024 via email
