Merge branch 'main' into fix-embargo
CodyCBakerPhD authored Jun 10, 2024
2 parents e43da41 + d91a1d7 commit 935b286
Showing 8 changed files with 98 additions and 67 deletions.
15 changes: 11 additions & 4 deletions .github/workflows/build_and_deploy_mac.yml
@@ -1,13 +1,17 @@
name: Mac Release
run-name: ${{ github.actor }} is building a MAC release for NWB GUIDE
# NOTE: even though the runner is an arm64 mac, both x64 and arm64 releases will be made
# NOTE: even though the runner is an x64 mac, both x64 and arm64 releases will be made

on:
workflow_dispatch:

jobs:
deploy-on-mac:
runs-on: macos-latest
runs-on: macos-13
# NOTE: macos-latest is an arm64 mac, and the dependency sonpy (Spike2RecordingInterface) has a .so file that
# works only on mac x64. This causes issues building and deploying on mac arm64. So we use macos-13 (x64)
# to build and deploy both the x64 and arm64 versions of the app.
# NOTE: if changing this to macos-latest, make sure to use the apple-silicon conda environment.

defaults:
run:
@@ -29,7 +33,7 @@ jobs:
use-mamba: true

- name: Create and activate environment
run: mamba env update --name nwb-guide --file environments/environment-MAC-apple-silicon.yml
run: mamba env update --name nwb-guide --file environments/environment-MAC-intel.yml

- name: Use Node.js 20
uses: actions/setup-node@v4
@@ -40,7 +44,7 @@
run: npm install --verbose

- name: Remove bad sonpy file (might make Spike2 format unusable on Mac - should exclude from selection)
run: rm -f /usr/local/miniconda/envs/nwb-guide/lib/python3.9/site-packages/sonpy/linux/sonpy.so
run: rm -f "$CONDA_PREFIX/lib/python3.9/site-packages/sonpy/linux/sonpy.so"

- uses: apple-actions/import-codesign-certs@v2
with:
@@ -55,4 +59,7 @@ jobs:
teamId: ${{ secrets.APPLE_TEAM_ID }}
appleId: ${{ secrets.APPLE_ID }}
appleIdPassword: ${{ secrets.APPLE_PASSWORD }}
# uncomment below to make build process extra verbose in case of failure
# DEBUG: electron-builder
# DEBUG_DMG: true
run: npm run deploy:mac
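
For reference, electron-builder reads the standard DEBUG environment variable for verbose logging; DEBUG_DMG is taken from the comment above rather than verified here. A minimal sketch of running the same deploy step locally with that logging enabled:

    # Enable the extra electron-builder output mentioned in the workflow comments.
    DEBUG=electron-builder DEBUG_DMG=true npm run deploy:mac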
2 changes: 1 addition & 1 deletion .github/workflows/testing_flask_build_and_dist.yml
@@ -82,7 +82,7 @@ jobs:

# Fix for macos build - remove bad sonpy file
- if: matrix.os == 'macos-latest' || matrix.os == 'macos-13'
run: rm -f /Users/runner/miniconda3/envs/nwb-guide/lib/python3.9/site-packages/sonpy/linux/sonpy.so
run: rm -f "$CONDA_PREFIX/lib/python3.9/site-packages/sonpy/linux/sonpy.so"

- name: Build PyFlask distribution
run: npm run build:flask
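
Both sonpy cleanup steps now rely on CONDA_PREFIX, which points at whichever conda environment is active when the step runs, instead of a runner-specific miniconda path. A minimal sketch of the behavior, assuming the nwb-guide environment is the active one:

    # CONDA_PREFIX tracks the activated environment, so the cleanup no longer
    # depends on where the runner installed miniconda.
    echo "$CONDA_PREFIX"   # e.g. .../miniconda3/envs/nwb-guide (runner-dependent)
    rm -f "$CONDA_PREFIX/lib/python3.9/site-packages/sonpy/linux/sonpy.so"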
12 changes: 6 additions & 6 deletions docs/tutorials/dataset.rst
@@ -1,8 +1,8 @@
Example Dataset Generation
==========================

Our tutorials focus on converting extracellular electrophysiology data in the SpikeGLX and Phy formats.
To get you started as quickly as possible, we’ve created a way to generate this Neuropixel-like dataset at the click of a button!
The NWB GUIDE tutorials focus on converting extracellular electrophysiology data in the SpikeGLX and Phy formats.
To get started as quickly as possible, you can use NWB GUIDE to generate a Neuropixels-like dataset at the click of a button!

.. note::
The **SpikeGLX** data format stores electrophysiology recordings.
@@ -17,7 +17,9 @@ Navigate to the **Settings** page using the button at the bottom of the main sid

Press the Generate button on the Settings page to create the dataset.

The generated data will populate in the ``~/NWB_GUIDE/test_data`` directory, where ``~`` is the home directory of your system. This includes both a ``single_session_data`` and ``multi_session_dataset`` folder to accompany the following tutorials.
The dataset will be generated in a new ``~/NWB_GUIDE/test_data`` directory, where ``~`` is the `home directory <https://en.wikipedia.org/wiki/Home_directory#Default_home_directory_per_operating_system>`_ of your system. This includes both a ``single_session_data`` and ``multi_session_dataset`` folder to use in the following tutorials.

The folder structure of the generated dataset is as follows:

.. code-block:: bash
@@ -52,6 +54,4 @@ The generated data will populate in the ``~/NWB_GUIDE/test_data`` directory, whe
│ │ └── mouse2_Session2/
│ │ ...
Now you’re ready to start your first conversion using the NWB GUIDE!
Now you're ready to start your first conversion using the NWB GUIDE!
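
If you want to confirm that generation succeeded before continuing, a quick terminal check is enough; this is a sketch that assumes the default ``test_data`` location described above:

.. code-block:: bash

    # Both tutorial folders should be present after pressing Generate.
    ls ~/NWB_GUIDE/test_data
    # expected: multi_session_dataset  single_session_data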
37 changes: 22 additions & 15 deletions docs/tutorials/multiple_sessions.rst
@@ -1,7 +1,7 @@
Managing Multiple Sessions
==========================

Now, let’s say that you’ve already run some of your experiments and now you want to convert them all at the same time. This is where a multi-session workflow will come in handy.
Now, let's imagine that you've already run multiple sessions of an experiment and now you want to convert them all to NWB at the same time. This is where a multi-session workflow will be useful.

Begin a new conversion on the **Convert** page and provide a name for your pipeline.

@@ -12,19 +12,23 @@ Update the Workflow page to indicate that you'll:

#. Run on multiple sessions
#. Locate the source data programmatically
#. Specify your dataset location ``~/NWB_GUIDE/test-data/multi_session_dataset``, where **~** is the home directory of your system.
#. Specify your dataset location ``~/NWB_GUIDE/test-data/multi_session_dataset``, where ``~`` is the home directory of your system.
#. Skip dataset publication.

Leave the rest of the settings as they are.

.. figure:: ../assets/tutorials/multiple/workflow-page.png
:align: center
:alt: Workflow page with multiple sessions and locate data selected

Data Formats
------------

As before, specify **SpikeGLX Recording** and **Phy Sorting** as the data formats for this conversion.

Locate Data
-----------

This page helps you automatically identify source data for multiple subjects / sessions as long as your files are organized consistently.

.. figure:: ../assets/tutorials/multiple/pathexpansion-page.png
@@ -34,33 +38,33 @@ This page helps you automatically identify source data for multiple subjects / s
File locations are specified as **format strings** that define source data paths of each selected data format.

.. note::
Format strings are one component of NeuroConv's **path expansion language**, which has some nifty features for manually specifying complex paths. Complete documentation of the path expansion feature of NeuroConv can be found :path-expansion-guide:`here <>`.
Format strings are one component of NeuroConv's **path expansion language**, which has nifty features for manually specifying complex paths. Complete documentation of the path expansion feature can be found :path-expansion-guide:`here <>`.

While you don’t have to specify format strings for all of the pipeline’s data formats, we’re going to find all of our data here for this tutorial. You'll always be able to confirm or manually select the final paths on the Source Data page later in the workflow.
While you don't have to specify format strings for all of the pipeline's data formats, we're going to find all of our data here for this tutorial. You'll always be able to confirm or manually select the final paths on the Source Data page later in the workflow.

Format strings are specified using two components: the **base directory**, which is the directory to search in, and the **format string path**, where the source data is within that directory.

The base directory has been pre-populated based on your selection on the Workflow page.

To avoid specifying the format string path by hand, we can take advantage of **Autocomplete**. Click the **Autocomplete** button to open a pop-up form that will derive the format string from a single example path.
To avoid specifying the format string path by hand, click the **Autocomplete** button to open a pop-up form that will derive the format string from a single example path.

.. figure:: ../assets/tutorials/multiple/pathexpansion-autocomplete-open.png
:align: center
:alt: Autocomplete modal on path expansion page

Provide an example source data path (for example, the ``multi_session_dataset/mouse1/mouse1_Session2/mouse1_Session2_phy`` file for Phy), followed by the Subject (``mouse1``) and Session ID (``Session1``) for this particular path.
Provide a source data path for Phy by either dragging and dropping the folder ``multi_session_dataset/mouse1/mouse1_Session2/mouse1_Session2_phy`` into the **Example Folder** box or clicking the box and selecting a folder. Then enter the Subject ID (``mouse1``) and Session ID (``Session1``) for this particular path.

.. figure:: ../assets/tutorials/multiple/pathexpansion-autocomplete-filled.png
:align: center
:alt: Autocomplete modal completed

When you submit this form, you’ll notice that the Format String Path input has been auto-filled with a pattern for all the sessions.
When you submit this form, you'll notice that the Format String Path input has been auto-filled with a pattern for all of the sessions, and a list of all of the source data found is shown in the gray box. Confirm that this list contains all four Phy folders.

.. figure:: ../assets/tutorials/multiple/pathexpansion-autocomplete-submitted.png
:align: center
:alt: Path expansion page with autocompleted format string

Repeat this process for SpikeGLX, where ``multi_session_dataset/mouse1/mouse1_Session2/mouse1_Session2_g0/mouse1_Session2_g0_imec0/mouse1_Session1_g0_t0.imec0.lf.bin`` will be the example source data path.
Repeat this process for SpikeGLX, where ``multi_session_dataset/mouse1/mouse1_Session2/mouse1_Session2_g0/mouse1_Session2_g0_imec0/mouse1_Session1_g0_t0.imec0.ap.bin`` will be the example source data path.

.. figure:: ../assets/tutorials/multiple/pathexpansion-completed.png
:align: center
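
Under the hood, the autocompleted pattern is a path template containing ``{subject_id}`` and ``{session_id}`` placeholders. The sketch below is hypothetical (derived from the tutorial's folder layout, not copied from the app) and includes a shell check that the Phy pattern matches all four sessions:

.. code-block:: bash

    # Hypothetical Phy format string path for this dataset layout:
    #   {subject_id}/{subject_id}_{session_id}/{subject_id}_{session_id}_phy
    # A glob over the base directory should list the four matching folders.
    ls -d ~/NWB_GUIDE/test-data/multi_session_dataset/*/*/*_phy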
@@ -70,15 +74,16 @@ Advance to the next page when you have entered the data locations for both forma

Subject Metadata
----------------
On this page you’ll edit subject-level metadata across all related sessions. Unlike the previous few pages, you’ll notice that
Sex and Species both have gray asterisks next to their name; this means they are **loose requirements**, which aren’t currently required

On this page, you can edit subject-level metadata that is the same for all sessions. Unlike the previous few pages, you'll notice that
Sex and Species both have gray asterisks next to their name; this means they are **loose requirements**, which aren't currently required
but could later block progress if left unspecified.

.. figure:: ../assets/tutorials/multiple/subject-page.png
:align: center
:alt: Blank subject table

In this case, we have two subjects with two sessions each. Let’s say that each of their sessions happened close enough in time that they can be identified using the same **age** entry: ``P29W`` for ``mouse1`` and ``P30W`` for ``mouse2``.
In this case, we have two subjects with two sessions each. Let's say that each of their sessions happened close enough in time that they can be identified using the same **age** entry: ``P29W`` for ``mouse1`` and ``P30W`` for ``mouse2``.

We should also indicate the ``sex`` of each subject since this is a requirement for `uploading to the DANDI Archive <https://www.dandiarchive.org/handbook/135_validation/#missing-dandi-metadata>`_.

@@ -90,16 +95,18 @@ Advance to the next page when you have entered subject metadata for all subjects

Source Data Information
-----------------------
Because we used the Locate Data page to programmatically identify our source data, this page should mostly be complete. You can use this opportunity to verify that the identified paths appear as expected for each session.

Because we used the Locate Data page to programmatically identify our source data, this page should mostly be complete. Verify that the identified paths appear as expected for each session by clicking the "Phy Sorting" header to expand the section for Phy data and examining the "Folder Path" value. Do the same for the SpikeGLX data.

.. figure:: ../assets/tutorials/multiple/sourcedata-page.png
:align: center
:alt: Complete source data forms

One notable difference between this and the single-session workflow, however, is that the next few pages will allow you to toggle between sessions using the **session manager** sidebar on the left.
One notable difference between this and the single-session workflow is that the next few pages will allow you to toggle between sessions using the **session manager** sidebar on the left. Try this out. Under "Sessions", click "sub-mouse2" and "ses-Session1" to locate the source data for a different session from this subject.

Session Metadata
----------------

Aside from the session manager, the file metadata page in the multi-session workflow is nearly identical to the single-session version.

.. figure:: ../assets/tutorials/multiple/metadata-nwbfile.png
@@ -108,15 +115,15 @@

A complete General Metadata form

Acting as default metadata, the information supplied on the subject metadata page has pre-filled the Subject metadata for each session.
The information supplied on the Subject Metadata page has been used to fill in the Subject metadata for each session.

.. figure:: ../assets/tutorials/multiple/metadata-subject-complete.png
:align: center
:alt: Complete Subject metadata form

A complete Subject metadata form

You'll notice that there's an **Edit Default Metadata** button at the top of the page. This feature allows you to specify a single default value for each property that is expected to be the same across all sessions. **Use this button to fill in general metadata for your sessions**, which will save you time and effort while ensuring your files still follow Best Practices.
You'll notice that there's an **Edit Default Metadata** button at the top of the page. This feature allows you to specify a single default value for each property that is expected to be the same across all sessions. **Use this button to fill in general metadata for your sessions**, such as the Institution, which will save you time and effort while ensuring your files still follow Best Practices.

Finish the rest of the workflow as you would for a single session by completing a full conversion after you review the preview files with the NWB Inspector and Neurosift.
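
The GUIDE runs the NWB Inspector for you during the review step, but the same best-practice checks can also be run from a terminal on the converted files. This is a sketch that assumes ``nwbinspector`` is installed and that your converted files live in a ``path/to/nwb-output`` folder of your choosing:

.. code-block:: bash

    # Re-run best-practice checks on converted NWB files, using the DANDI-focused
    # configuration since the files may later be uploaded to the DANDI Archive.
    pip install nwbinspector
    nwbinspector path/to/nwb-output --config dandi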
