[Enh] Handle segmenting image layers that have non-1 layer.scale #804
Conversation
I've tested this locally with 2D and 3D images that have scales that are not all 1.
I'm happy to add a test with a layer with a set scale.
Thanks for working on this @psobolewskiPhD ! All the changes make sense, but I am a bit unsure which data in the point / shape layer is stored in world coordinates and which is stored in data coordinates. I would have assumed that we also have to redo the scaling when deriving the prompts in `vutil.point_layer_to_prompts` and `vutil.shape_layer_to_prompts`, e.g. here. Or is `layer.data` always in data coordinates?
```diff
-position = viewer.cursor.position
+position_world = viewer.dims.point
+position = viewer.layers["point_prompts"].world_to_data(position_world)
+z = int(position[0])
 point_prompts = vutil.point_layer_to_prompts(viewer.layers["point_prompts"], z)
```
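For readers unfamiliar with napari's coordinate handling: for an axis-aligned layer (no rotation or shear), `world_to_data` amounts to undoing the layer's `translate` and `scale`. A minimal NumPy sketch of that relationship (an illustration only, not napari's actual implementation; the `scale` and `translate` values here are made up):

```python
import numpy as np

# Hypothetical layer parameters, mirroring napari's `scale` and
# `translate` attributes for an axis-aligned (z, y, x) layer.
scale = np.array([2.0, 0.5, 0.5])
translate = np.array([0.0, 0.0, 0.0])

def world_to_data(position_world):
    """Map a world-coordinate position back to data coordinates."""
    return (np.asarray(position_world) - translate) / scale

# A slider position of z=6.0 in world coordinates corresponds to
# data slice z=3 when the z scale factor is 2.0.
position = world_to_data([6.0, 2.5, 2.5])
z = int(position[0])
print(z)  # -> 3
```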
Wouldn't we also need to scale the points in here if we have a scale?
No, see my comment below. In `point_layer_to_prompts` everything is now in the image layer's data coordinates, because the annotation layers have the same scale as the image layer. We had to transform `viewer.dims.point` because that is in world (scaled or canvas) coordinates.
@constantinpape Try these snippets in the napari console (or whatever interactive Python you want).
If you mouse over, you will see the scaled world coordinates; the bottom right is (5, 5).
Add a point somewhere (the bottom-right corner is the easiest) and look at the data: you should see e.g. [4.9, 4.9].
Now add a points layer with a scale, again add a point somewhere (the bottom-right corner is the easiest), and look at the data: you should see something like [9.8, 9.9].
So in the specific case in this PR:
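The snippets themselves did not survive in this excerpt, but the arithmetic they demonstrate can be sketched without a GUI. Assuming a setup consistent with the numbers quoted above (a 10×10 image displayed with `scale=(0.5, 0.5)`, so its world extent runs to 5), a click near the bottom-right corner lands at different `layer.data` values depending on the points layer's scale:

```python
import numpy as np

# Assumed setup: a 10x10 image displayed with scale 0.5,
# so its world extent is 0..5 along each axis.
image_scale = np.array([0.5, 0.5])

# Clicking near the bottom-right corner happens at roughly (4.9, 4.9)
# in world coordinates (what the status bar shows on mouse-over).
click_world = np.array([4.9, 4.9])

# Points layer with the default scale of 1: data equals world coordinates.
data_unscaled = click_world / np.array([1.0, 1.0])
print(data_unscaled)  # -> [4.9 4.9]

# Points layer sharing the image's scale: data is in image data coordinates.
data_scaled = click_world / image_scale
print(data_scaled)  # -> [9.8 9.8]
```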
Thank you for the explanation @psobolewskiPhD ! The design makes sense (but it is hard to know this ;)). I will go ahead and merge this; I think the PR should take care of all points. We will test it before the next release (which will definitely come this year).
Merged 001878b into computational-cell-analytics:dev
Thanks @constantinpape !
Yeah, I think a tutorial that shows how the scale parameter affects different layers would be most helpful.
Closes #801
In this PR I add `image_scale` to the AnnotatorState. This way, when an image layer in napari has a `layer.scale` that is not all 1, the derived annotation layers can also use that scale.

I also switch from using `viewer.cursor.position` to `viewer.dims.point`, which is the more canonical way of getting the current slice. Additionally, and importantly, `viewer.dims.point` (as well as `viewer.cursor.position`) is in world coordinates (these are the scaled coordinates), so I transform it back to layer data coordinates, because that is what is needed for the prompt.

Finally, I use `np.round` to make sure that everything still works when a prompt point is not exactly on a z-slice.