
[Enh] Handle segmenting image layers that have non-1 layer.scale #804

Conversation

psobolewskiPhD

Closes #801

In this PR I add image_scale to the AnnotatorState.
This way, when an image layer in napari has a layer.scale that is not all 1s, the derived annotation layers can also use that scale.
I also switch from using viewer.cursor.position to viewer.dims.point, which is the more canonical way of getting the current slice.
Additionally--and importantly--viewer.dims.point (like viewer.cursor.position) is in world coordinates (the scaled coordinates), so I transform it back to layer data coordinates, because that is what is needed for the prompt.
Finally, I use np.round to make sure that everything still works when a prompt point does not fall exactly on a z-slice.
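Roughly, the change looks like this (a simplified sketch, not the exact code in the PR; the helper names here are made up, while image_scale and the "point_prompts" layer name follow the PR):

import numpy as np

def add_prompt_layers(viewer, state):
    # the annotation layers get the same scale as the source image layer,
    # taken from AnnotatorState.image_scale
    return viewer.add_points(name="point_prompts", scale=state.image_scale)

def current_slice(viewer):
    # viewer.dims.point is in world (scaled) coordinates, so transform it
    # back into the data coordinates of the prompt layer before indexing
    position_world = viewer.dims.point
    position = viewer.layers["point_prompts"].world_to_data(position_world)
    # np.round handles prompt points that do not fall exactly on a z-slice
    return int(np.round(position[0]))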

@psobolewskiPhD
Author

I've tested this locally with 2D and 3D images that have scales that are not all 1s.
All tests pass locally--though they probably don't use layers with a scale--so at least I didn't break anything?

========= 60 passed, 4 skipped, 1 xfailed, 5 warnings in 217.25s (0:03:37) =========

I'm happy to add a test with a layer with a set scale, but I think I need some pointers as to where to add it!

@constantinpape
Contributor

Thanks for working on this @psobolewskiPhD ! All the changes make sense, but I am a bit unsure which data in the point / shape layer is stored in world coordinates and which is stored in data coordinates.

I would have assumed that we also have to redo the scaling when deriving the prompts in vutil.point_layer_to_prompts and vutil.shape_layer_to_prompts (e.g. here). Or is layer.data always in data coordinates?

# before: the slice was taken from the cursor position (world coordinates)
position = viewer.cursor.position

# after: use viewer.dims.point and transform it into layer data coordinates
position_world = viewer.dims.point
position = viewer.layers["point_prompts"].world_to_data(position_world)
z = int(position[0])

point_prompts = vutil.point_layer_to_prompts(viewer.layers["point_prompts"], z)
Contributor

Wouldn't we also need to scale the points in here if we have a scale?

Author

No, see my comment below. In point_layer_to_prompts everything is now in the image layer's data coordinates, because the annotation layers have the same scale as the image layer. We had to transform viewer.dims.point because that is in world (scaled, or canvas) coordinates.

@psobolewskiPhD
Author

@constantinpape
It's quite confusing and we should clearly document it better, but layer.scale maps between the data coordinates of layer.data and the world/canvas coordinates that are displayed. This means that Points/Shapes layers with scale (1, 1) will end up with data arrays whose coordinates are the canvas coordinates!

Try these snippets in the napari console (or whatever interactive Python you want):

import numpy as np

# a 10 x 10 image displayed with scale 0.5 per pixel, so it spans (5, 5) in world coordinates
data = np.random.random([10, 10])
image_layer = viewer.add_image(data, scale=(.5, .5))

If you mouse over the image you will see the scaled, world coordinates; the bottom-right corner is (5, 5).
But image_layer.data is the numpy array, with shape (10, 10).
Let's add a Points layer without a scale (if you add one from the GUI it will have a scale!!):

points_no_scale = viewer.add_points()

Add a point somewhere -- the bottom-right corner is the easiest -- and look at the data:

points_no_scale.data

You should see e.g. [4.9, 4.9].
Without a scale, data and world coordinates are mapped 1:1 (scale = (1, 1)), so the data coordinates of this Points layer are the same as the canvas/world coordinates: the coordinate stored in the array is in the scaled, world coordinates -- the same as the status bar shows.

Add a points layer with scale:

points_scale = viewer.add_points(scale=(.5, .5))

Again add a point somewhere -- the bottom-right corner is the easiest -- and look at the data:

points_scale.data

You should see something like [9.8, 9.9].
Now we see that the data coordinate stored in the layer is transformed from the world coordinates of the canvas by the inverse of the scale, so in this case it matches the data/pixel coordinates of the image layer.
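You can check the round trip directly in the same session using the layer transform methods (world_to_data / data_to_world); the exact numbers will of course depend on where you clicked:

# with scale (1, 1) the transform is the identity: data == world
pt = points_no_scale.data[-1]
print(pt, points_no_scale.data_to_world(pt))  # the two should match

# with scale (.5, .5) the stored data coordinate times the scale gives back
# the world coordinate, e.g. [9.8, 9.9] -> (4.9, 4.95)
pt = points_scale.data[-1]
print(pt, points_scale.data_to_world(pt))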

So in the specific case in this PR:

  • the image layer has a scale
  • it's propagated to the annotation layers, which with this PR should all get the scale of the image layer from AnnotatorState
  • in segment_slice we want to know which slice the viewer is showing, so we use viewer.dims.point:
    position_world = viewer.dims.point
  • but dims.point is in the canvas (world) coordinates (see https://github.com/napari/napari/blob/c33f10261ededcc217c4d43f1cfce5daabd04f67/napari/components/dims.py#L59-L60), so in the next line we transform it to the data coordinates of the Points layer to get the proper z slice. This will be in the pixel coordinates that should match the image layer, as per the discussion above.
  • then in point_layer_to_prompts everything is correct, because it's all done in the pixel coordinates of the image array (see the sketch below)
  • same for the Shapes layer
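To illustrate the last two points: because the prompt layers carry the image layer's scale, layer.data is already in the image's pixel coordinates, so a helper along the lines of vutil.point_layer_to_prompts can filter by slice and hand the coordinates to the model without any rescaling. A rough sketch of that idea (this is not the actual micro-sam implementation, just the coordinate handling):

import numpy as np

def point_layer_to_prompts_sketch(point_layer, z):
    # point_layer.data is in the layer's data coordinates, which -- thanks to
    # the shared scale -- are the pixel coordinates of the image array
    points = np.round(point_layer.data)
    # for a 3D layer, keep only the points on the current z-slice and drop z
    if points.shape[1] == 3:
        points = points[points[:, 0] == z][:, 1:]
    # these (y, x) pixel coordinates can be used directly as point prompts
    return points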

@constantinpape
Contributor

Thank you for the explanation @psobolewskiPhD ! The design makes sense (but it's hard to know this ;)).

I will go ahead and merge this; I think the PR takes care of all the points. We will test it before the next release (which will definitely come this year).

@constantinpape merged commit 001878b into computational-cell-analytics:dev on Dec 3, 2024
3 checks passed
@psobolewskiPhD
Author

Thanks @constantinpape !
I think we can improve the docs on this; maybe a tutorial/how-to sort of like my post above would be a useful addition for explaining this? Any feedback welcome!

@constantinpape
Contributor

maybe a tutorial/how-to sort of like my post above would be a useful addition for explaining this?

Yeah, I think a tutorial that shows how the scale parameter affects different layers would be most helpful.
