
While acquiring a multi-D acquisition, only "emit" the last image to napari viewer #202

CedricEsp opened this issue Oct 5, 2022 · 3 comments


CedricEsp commented Oct 5, 2022

Hello,
I am not clear on how emit works in:

self._prep_hardware(event)                     # move hardware into place for this event
self._mmc.snapImage()                          # acquire a frame
self.img = self._mmc.getImage()                # retrieve it from the core
self._events.frameReady.emit(self.img, event)  # notify listeners that a frame is ready

Trying to stream all the data during acquisition seriously slows down the acquisition and often ends in failure (the data are quite big). However, the user would like to have an idea of how the acquisition is going, so I wonder if it's possible to emit only the current image and remove the previous image in napari. Obviously, if I try to directly add_image to the viewer, I end up with a threading error.

Thank you for your help!

tlambert03 (Member) commented Oct 5, 2022

Hi @CedricEsp,

Yes, handling the frameReady event is really where it all gets interesting. napari-micromanager kinda has one "standard" way of handling that event, but there's no reason you couldn't customize it fully to your needs.

the frameReady event means simply what it says: "pymmcore-plus has snapped a single frame, and it's ready and waiting for whatever you want to do with it"

the "you" in that sentence might be napari-micromanager, or it could be an end-user (you). Ultimately, it's anything that has connected a callback function to the frameReady event, which will then be called with the new image as an argument each time a frame is snapped.
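For example, here is a minimal sketch of connecting your own function (my_callback is just an illustrative name):

import numpy as np
import useq
from pymmcore_plus import CMMCorePlus

core = CMMCorePlus.instance()

def my_callback(image: np.ndarray, event: useq.MDAEvent) -> None:
    # called once per snapped frame, with the new image and its MDAEvent
    print(f"frame {event.index} ready, shape {image.shape}")

core.mda.events.frameReady.connect(my_callback)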

The default function in napari-micromanager that handles each new frame is defined here in MainWindow._on_mda_frame... and it is connected to the frameReady event right here:

# mda events
self._mmc.mda.events.frameReady.connect(self._on_mda_frame)

(that connection line is what causes MainWindow._on_mda_frame to be called every time the line self._events.frameReady.emit(self.img, event) is reached in the run_mda() method).

If you look into the source code of _on_mda_frame you'll see that its main job is mostly just to determine the index of the current frame in the full experiment, add it to the underlying data store, and then update the current position in the napari viewer:

# get the actual index of this image into the array and
# add it to the zarr store
im_idx = tuple(event.index[k] for k in axis_order)
self._mda_temp_arrays[str(event.sequence.uid) + channel][im_idx] = image

# move the viewer step to the most recently added image
for a, v in enumerate(im_idx):
    self.viewer.dims.set_point(a, v)
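For example, with purely illustrative values:

axis_order = ("t", "c", "z")            # the sequence's axis order
event_index = {"t": 3, "c": 1, "z": 7}  # this frame's position in the experiment
im_idx = tuple(event_index[k] for k in axis_order)  # -> (3, 1, 7)

that frame lands at position (3, 1, 7) in the zarr array, and the viewer sliders are then stepped to that same point.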

so...

Trying to stream all the data during acquisition seriously slows down the acquisition and often ends in failure (the data are quite big). However, the user would like to have an idea of how the acquisition is going, so I wonder if it's possible to emit only the current image and remove the previous image in napari.

yes, this would be possible in one of two ways:

  1. napari-micromanager itself could just have an additional "configurable" option that lets the end-user refine the behavior of that _on_mda_frame method, for example to throw away everything but the last frame. That said, the whole reason for using a zarr-store (at self._mda_temp_arrays) was precisely to avoid memory buildup with big data... So really, it should be able to be arbitrarily large without ending up in failure. If that's not what you're observing with napari-micromanager, it might just be a bug that indicates we need to make sure that no in-memory caching is going on under the hood or something.
  2. A more invasive option would be for you to just disconnect napari-micromanager's _on_mda_frame method and connect your own callback that does exactly what you want it to do:
main_window._mmc.mda.events.frameReady.disconnect(main_window._on_mda_frame)
main_window._mmc.mda.events.frameReady.connect(your_own_callback_function)

Obviously, if I try to directly add_image to the viewer, I end up with a threading error.

if you are trying to connect your own callback and doing something like calling viewer.add_image(), then you will need to make sure that happens in the main thread. Note how we use the superqt.utils.ensure_main_thread decorator here:

@ensure_main_thread
def _on_mda_frame(self, image: np.ndarray, event: useq.MDAEvent):

you could use that same decorator to make sure that the function you're connecting to the frameReady event gets called in the main GUI thread.
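Putting option 2 and that decorator together, a rough sketch might look like this (viewer and main_window are assumed to be your napari viewer and the napari-micromanager main window; the layer name is arbitrary):

import numpy as np
import useq
from superqt.utils import ensure_main_thread

LAYER = "latest frame"  # arbitrary layer name

@ensure_main_thread
def show_only_latest(image: np.ndarray, event: useq.MDAEvent) -> None:
    # replace the previous frame instead of accumulating the whole stack
    if LAYER in viewer.layers:
        viewer.layers[LAYER].data = image
    else:
        viewer.add_image(image, name=LAYER)

main_window._mmc.mda.events.frameReady.disconnect(main_window._on_mda_frame)
main_window._mmc.mda.events.frameReady.connect(show_only_latest)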

hope that helps.

CedricEsp (Author) commented

Thanks @tlambert03, that is super useful. I did end up finding _on_mda_frame, but it took me a while; I should have asked earlier. Until your explanation I didn't understand very clearly how things connect to each other, so I wrote a simple workaround where I emit only at specific z positions and send a "simplified" MDASequence to _on_mda_frame. It's definitely not as clean as your option.

But what concerns me is this:

That said, the whole reason for using a zarr-store (at self._mda_temp_arrays) was precisely to avoid memory buildup with big data... So really, it should be able to be arbitrarily large without ending up in failure. If that's not what you're observing with napari-micromanager, it might just be a bug that indicates we need to make sure that no in-memory caching is going on under the hood or something.

Indeed, I would expect that it would not be an issue, so I wonder if the engine I built has something wrong that would cause it. It does slow down pretty dramatically after some frames if I emit them to the napari viewer.

Here is a simplified version of the engine I use:

def run(self, sequence: MDASequence) -> None:
    """
    Run the multi-dimensional acquisition defined by `sequence`.

    Most users should not use this directly as it will block further
    execution. Instead use ``run_mda`` on CMMCorePlus, which will run on
    a thread.

    Parameters
    ----------
    sequence : MDASequence
        The sequence of events to run.
    """
    self._prepare_to_run(sequence)
    pos = 0.0  # cumulative autofocus offset

    for event in sequence:
        cancelled = self._wait_until_event(event, sequence)

        # if cancelled, break out of the loop (might create a blue screen..)
        if cancelled:
            break

        logger.info(event)

        self._prep_hardware(event)
        # pause before the first z to allow the stage to move to the slide
        if event.index.get("z") == 0:
            time.sleep(2.2)

        # autofocus every nth tile (first channel, first z only)
        if (
            event.index.get("p") in AF_nth
            and event.index.get("c") == 0
            and event.index.get("z") == 0
        ):
            pos = self.autofocus()

        event.z_pos += pos

        # capture image
        self._prep_hardware(event)
        self._mmc.snapImage()
        self.img = self._mmc.getImage()

        # save the image to disk
        self.write_data(event)

        if self._param["Streaming"]:
            self._events.frameReady.emit(self.img, event)

    self._finish_run(sequence)

CedricEsp (Author) commented Oct 6, 2022

Note that I run many MDAs, and I use this function:

from threading import Thread

import useq
from pymmcore_plus import CMMCorePlus


def run_many_mda(mdas: list[useq.MDASequence], core: CMMCorePlus | None = None) -> Thread:
    """Run multiple separate MDAs in a loop without blocking the main thread."""
    core = core or CMMCorePlus.instance()
    if core.mda.is_running():
        raise ValueError("Cannot start an MDA while the previous MDA is still running.")

    def f(mdas: list[useq.MDASequence]) -> None:
        for seq in mdas:
            core.mda.run(seq)  # this is blocking, so there is no need to .join()

    th = Thread(target=f, args=(mdas,))
    th.start()
    return th
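For example (hypothetical sequences, just to show the call pattern):

seqs = [
    useq.MDASequence(channels=["DAPI"], time_plan={"interval": 1, "loops": 5}),
    useq.MDASequence(channels=["FITC"]),
]
th = run_many_mda(seqs)
th.join()  # optional: block until all sequences have finished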
