Support single z-stack tif file for input #67
Conversation
Codecov Report
Attention: Patch coverage is

@@           Coverage Diff            @@
##             main      #67    +/-   ##
========================================
+ Coverage   89.07%   90.56%    +1.49%
========================================
  Files          35       35
  Lines        1355     1389       +34
========================================
+ Hits         1207     1258       +51
+ Misses        148      131       -17
Looks good to me! I just put a few small comments below.
Ideally this would also have a corresponding test that calls get_size_image_from_file_paths with a single-file tif stack, similar to the one for getting size from a dir. Would you be happy to add this? Otherwise I can add a test after this is merged.
brainglobe_utils/image_io/load.py
# read just the metadata
tiff = tifffile.TiffFile(file_path)
if not len(tiff.series):
    raise ValueError(
Ideally these would use ImageIOLoadException rather than ValueError. This would need some restructuring of ImageIOLoadException though - so happy to leave as-is for now, and I'll look into this once it's all merged.
Sounds good. I left it as is then.
brainglobe_utils/image_io/load.py
@@ -683,6 +684,30 @@ def get_size_image_from_file_paths(file_path, file_extension="tif"):
        Dict of image sizes.
    """
    file_path = Path(file_path)
    if file_path.name.endswith(".tif") or file_path.name.endswith(".tiff"):
        # read just the metadata
        tiff = tifffile.TiffFile(file_path)
Could you use with TiffFile(file_path) as tiff: for this section? This will ensure the file is closed correctly, as recommended in their docs.
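The context-manager pattern the comment asks for can be sketched like this (a minimal illustration, not the PR's exact code: the temporary file, the stack shape, and the error message are all made up for the example):

```python
import tempfile
from pathlib import Path

import numpy as np
import tifffile

# Write a small 3D stack to a temporary file, purely for illustration.
tmp_path = Path(tempfile.mkdtemp()) / "stack.tif"
tifffile.imwrite(tmp_path, np.zeros((5, 64, 64), dtype=np.uint16))

# Read just the metadata; the context manager guarantees the file
# handle is closed even if an exception is raised while inspecting it.
with tifffile.TiffFile(tmp_path) as tiff:
    if not len(tiff.series):
        raise ValueError(f"Could not read a z-stack from {tmp_path}")
    shape = tiff.series[0].shape

print(shape)
```

Only the metadata is parsed here; the pixel data is never read into memory, which is what makes this cheap for just getting the image size.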
Applied the change.
brainglobe_utils/image_io/load.py
            f"Found {axes} axes with shape {shape}"
        )

image_shape = {name: shape[i] for name, i in indices.items()}
I think you could remove the indices = {ax: i for i, ax in enumerate(axes)} line above, and do this in one line with something like: image_shape = {ax: sh for ax, sh in zip(axes, shape)}
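The equivalence of the two versions can be checked with a standalone snippet (the axes string and shape here are example values, not taken from a real file):

```python
# Example axis labels and sizes, in the style tifffile reports for a
# z-stack (these particular values are made up for illustration).
axes = "ZYX"
shape = (30, 512, 512)

# Two-step version from the PR: first map axis label -> index ...
indices = {ax: i for i, ax in enumerate(axes)}
image_shape_old = {name: shape[i] for name, i in indices.items()}

# ... versus the suggested one-liner, zipping labels with sizes directly.
image_shape = {ax: sh for ax, sh in zip(axes, shape)}

assert image_shape == image_shape_old == {"Z": 30, "Y": 512, "X": 512}
```

Since zip pairs each label with the size at the same position, the intermediate index dict is redundant.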
Good catch! I applied the change.
Co-authored-by: Kimberly Meechan <[email protected]>
I added a test as requested. I had to modify the
My tiff files come via BigStitcher via Fiji, and so have the correct axes set. I'm not sure if that's common. If not, and typical imaging tiff files don't have the right axes set, we may have to use the
Also perhaps
I think in general it's rare that this type of data comes with correct metadata. There are a lot of homebuilt systems that output files without any metadata.
I'd agree that the metadata often isn't set correctly in tiff files. The issue here is it needs to return the image shape in the style:
Sounds like I should remove the check then, and not check the metadata for the order of axes.
I meant this parameter that we provide to
However, if we're calling directly into cellfinder core through Python, this parameter is not present (assuming in the future we'll use these functions to load cellfinder data - moving cellfinder loading functions into this repo)!? Should the
cellfinder internally expects a certain axis order, but it isn't documented anywhere I think. In fact it took me a while to find this, and it's still not quite clear to me why we do this: https://github.com/brainglobe/cellfinder/blob/e6a887f1721b89d51328214b108e0da5401a24a9/cellfinder/core/detect/filters/plane/plane_filter.py#L63. I do know that when I tried removing the transpose, the subsequent code in cell splitting failed.
@alessandrofelder do you know why cellfinder seems to need a specific orientation? The orientation (particularly in-plane) shouldn't matter at all. The orientation parameters should only be used for atlas registration.
Does anybody know what we need to get this PR merged? I'm slightly confused about the axis thing. As far as I know, we don't use image metadata anywhere in BrainGlobe. In particular, we don't read x/y/z - they are pretty meaningless because everyone has a different idea about what they mean.
I agree with @adamltyson 's assertions that AFAIK
It'd be interesting to know how this failed 🤔 I don't see why removing the transpose would make any difference 😬
I'd suggest making the transpose question a new separate issue, and merging this?
Does this affect cellfinder though?
Not sure I understand what you mean 🤔
I just meant: does this PR have any impact on cellfinder? I.e. is there a possibility that merging (and releasing) this causes cellfinder problems for anyone. Just a naive question based on the comments above. If not, let's get it merged.
OK, I follow now, thanks, and good point.
@K-Meech @adamltyson in cases where we have a single tiff file containing a stack with incorrectly set metadata, should we
?
Do we need to set anything? We don't use the metadata (we define it elsewhere).
The function
IIUC when we read a 3D tiff we can get the shape of the stack through
If there is metadata that contains
If there is no axis metadata, or the axis metadata is not
Based on usage here and here I think we assume zyx as the default.
As an aside, I'm not sure what we should do with axis order in BrainGlobe. It's currently not causing any (major) issues, but we have hundreds of references to axis order and they're not consistent. I think we should go full numpy and just have
Perhaps we should decide a convention, and aim to gradually adopt it?
Yep, let's gradually aim to go full numpy, with axis_0 as the "depth" and as the number of 2D tiff files in a folder (when we load that kind of data).
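A minimal sketch of what that "full numpy" convention would look like in practice (the array here is a stand-in, and the axis_0/axis_1/axis_2 names are the proposed labels from the discussion, not an existing API):

```python
import numpy as np

# Stand-in for a loaded stack: axis_0 is the "depth", i.e. the number
# of 2D planes (e.g. one tiff file per plane when loading a folder).
stack = np.zeros((10, 256, 128), dtype=np.uint16)

n_planes = stack.shape[0]  # axis_0: number of 2D planes
plane = stack[3]           # indexing along axis_0 yields one 2D plane

assert n_planes == 10
assert plane.shape == (256, 128)  # (axis_1, axis_2): in-plane shape
```

The appeal of this convention is that it sidesteps the x/y/z naming ambiguity entirely: positions in the array are the only thing that matters, which matches how numpy itself works.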
I think this is ready to be merged now - thanks @matham !
(I made a slight change to accommodate cases where 3D tiff axis metadata is not set)
Merging as AFAICT @K-Meech 's requested changes have been addressed.
If there's a standard order, it should probably be documented prominently in the code/docs that this is the order used internally!? Because it took me a bit to work out even the order that the input to detection expects. E.g. the docs in
But that is probably the first entry point for users that pass data, and from that it's not clear what the expected order is. It also wasn't clear in the docs how this all interacts with the
If there's a standard order, it should probably be documented prominently in the code/docs that this is the order used internally!?

There is, but only sort of. The way that cellfinder loads the data plane-by-plane, there is an assumption that one axis is lower resolution and slower to load. This however isn't necessary. It should probably be documented though.
It doesn't interact at all, only in use via
This is part of this PR: brainglobe/cellfinder#397 and brainglobe/brainglobe-workflows#88.