Commit

Merge branch 'main' into inspect-progress-bars

garrettmflynn committed May 14, 2024
2 parents 7724340 + 20cd235 commit d18b751
Showing 35 changed files with 5,268 additions and 841 deletions.
83 changes: 83 additions & 0 deletions .github/workflows/testing-pipelines.yml
@@ -0,0 +1,83 @@
name: Example Pipeline Tests
on:
schedule:
- cron: "0 16 * * *" # Daily at 16:00 UTC (noon EDT)
pull_request:

concurrency: # Cancel previous workflows on the same pull request
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true

env:
CACHE_NUMBER: 2 # increase to reset cache manually

jobs:
testing:
name: Pipelines on ${{ matrix.os }}
runs-on: ${{ matrix.os }}
defaults:
run:
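        # Login shell (-l) so the conda environment set up by setup-miniconda is activated in every run step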
shell: bash -l {0}

strategy:
fail-fast: false
matrix:
os: [ubuntu-latest, macos-latest, windows-latest]
include:
- os: ubuntu-latest
label: environments/environment-Linux.yml

- os: macos-latest # Mac arm64 runner
label: environments/environment-MAC-apple-silicon.yml

- os: macos-13 # Mac x64 runner
label: environments/environment-MAC-intel.yml

- os: windows-latest
label: environments/environment-Windows.yml


steps:
- uses: actions/checkout@v4
- run: git fetch --prune --unshallow --tags

# see https://github.com/conda-incubator/setup-miniconda#caching-environments
- name: Setup Mambaforge
uses: conda-incubator/setup-miniconda@v3
with:
miniforge-variant: Mambaforge
miniforge-version: latest
activate-environment: nwb-guide
use-mamba: true

- name: Set cache date
id: get-date
run: echo "today=$(/bin/date -u '+%Y%m%d')" >> $GITHUB_OUTPUT
shell: bash

- name: Cache Conda env
uses: actions/cache@v4
with:
path: ${{ env.CONDA }}/envs
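        # Key combines OS/arch, today's date (daily refresh), the env-file hash, and a manual reset counter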
key: conda-${{ runner.os }}-${{ runner.arch }}-${{ steps.get-date.outputs.today }}-${{ hashFiles(matrix.label) }}-${{ env.CACHE_NUMBER }}
id: cache

- if: steps.cache.outputs.cache-hit != 'true'
name: Create and activate environment
run: mamba env update --name nwb-guide --file ${{ matrix.label }}

- name: Use Node.js 20
uses: actions/setup-node@v4
with:
node-version: 20

- name: Install GUIDE
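        # npm ci installs exact dependency versions from package-lock.json for reproducible builds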
run: npm ci --verbose

- if: matrix.os != 'ubuntu-latest'
name: Run tests
run: npm run test:pipelines

- if: matrix.os == 'ubuntu-latest'
name: Run tests with xvfb
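        # GUI tests need a display; xvfb provides a virtual framebuffer on the headless Linux runner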
run: xvfb-run --auto-servernum --server-args="-screen 0 1280x960x24" -- npm run test:pipelines
6 changes: 3 additions & 3 deletions docs/tutorials/dataset.rst
@@ -17,12 +17,12 @@ Navigate to the **Settings** page using the button at the bottom of the main sid

Press the Generate button on the Settings page to create the dataset.

The generated data will populate in the ``~/NWB_GUIDE/test-data`` directory, where ``~`` is the home directory of your system. This includes a ``data`` folder with the original data as well as a ``dataset`` folder that duplicates this ``data`` across multiple subjects and sessions.
The generated data will populate in the ``~/NWB_GUIDE/test-data`` directory, where ``~`` is the home directory of your system. This includes both a ``single_session_data`` and a ``multi_session_dataset`` folder to accompany the following tutorials.

.. code-block:: bash
test-data/
├── data/
├── single_session_data/
│ ├── spikeglx/
│ │ ├── Session1_g0/
│ │ │ ├── Session1_g0_imec0/
@@ -31,7 +31,7 @@ The generated data will populate in the ``~/NWB_GUIDE/test-data`` directory, whe
│ │ │ │ ├── Session1_g0_t0.imec0.lf.bin
│ │ │ │ └── Session1_g0_t0.imec0.lf.meta
│ │ └── phy/
├── dataset/
├── multi_session_dataset/
│ ├── mouse1/
│ │ ├── mouse1_Session1/
│ │ │ ├── mouse1_Session1_g0/
15 changes: 4 additions & 11 deletions docs/tutorials/dataset_publication.rst
@@ -4,24 +4,17 @@ Dataset Publication
For this tutorial, we'll be adapting the previous :doc:`Multi-Session Tutorial </tutorials/multiple_sessions>` to publish our data to the DANDI Archive.

.. note::
This tutorial focuses on uploading to the Staging server.
Creating an account on DANDI requires approval from the archive administrators. Separate approval is required for both the main archive and the staging server.

**When working with real data, you'll want to publish to the Main Archive**. In this case, follow the same steps outlined here—except replace the Staging server with the Main Archive.
**This tutorial requires an account on the** :dandi-staging:`DANDI staging server <>`. You’ll want to publish your `real` data on the main archive, which will require a separate approval but otherwise follows the same workflow defined in this tutorial.

.. note::
Gaining access to DANDI requires approval from the archive administrators. Separate approval is required for both the main archive and the staging server.

**This tutorial requires an account on the** :dandi-staging:`DANDI staging server <>`.

We’re going to use the Staging server for this tutorial so we don’t crowd the main DANDI Archive with `synthetic` datasets! However, you’ll want to publish your `real` data on the main server—which will require a separate approval process.

Once you receive notice that your account was approved, you can move on to the next steps.
Once your account is approved, you can move on to the next steps.

Workflow Setup
--------------
1. Resume the conversion via the **Convert** page

2. Navigate to the **Workflow** page.
2. Navigate to the **Workflow** page using the navigation sidebar on the left.

a. Specify that you’d like to publish your data to the :dandi-archive:`DANDI Archive <>`.

13 changes: 13 additions & 0 deletions docs/tutorials/index.rst
@@ -5,6 +5,19 @@ to the Neurodata Without Borders (NWB) format and uploading to the DANDI Archive

In these tutorials, you'll follow along on a :doc:`local installation of the GUIDE </installation>` as we detail the conversion process from initial setup to final upload.

.. note::
This tutorial focuses on uploading to the Staging server.

**When working with real data, you'll want to publish to the Main Archive**. In this case, follow the same steps outlined here—except replace the Staging server with the Main Archive.

.. note::

If you intend to complete the Dataset Publication section of this tutorial, you'll need an account on the DANDI Archive. Accounts must be approved by the archive administrators, and the main archive and the staging server require separate approvals.

**This tutorial requires an account on the** :dandi-staging:`DANDI staging server <>`.

We recommend that you create an account on the staging server before you begin the tutorials.

Before you begin these tutorials, **you'll need to generate the tutorial dataset** using the instructions on the Dataset page.


29 changes: 11 additions & 18 deletions docs/tutorials/multiple_sessions.rst
@@ -8,7 +8,12 @@ Begin a new conversion on the **Convert** page and provide a name for your pipel
Workflow Configuration
----------------------

On the Workflow page, confirm that this pipeline will be run on multiple sessions. After this, also select that you’d like to locate the source data programmatically and skip dataset publication.
Update the Workflow page to indicate that you'll:

1. Run on multiple sessions
2. Locate the source data programmatically
3. Find source files inside ``~/NWB_GUIDE/test-data/multi_session_dataset``, where ``~`` is the home directory of your system
4. Skip dataset publication

.. figure:: ../assets/tutorials/multiple/workflow-page.png
:align: center
@@ -35,9 +40,9 @@ While you don’t have to specify format strings for all of the pipeline’s dat

Format strings are specified using two components: the **base directory**, which is the directory to search in, and the **format string path**, where the source data is within that directory.

Given the structure of the tutorial dataset, we’ll select ``~/NWB_GUIDE/test-data/dataset`` as the **base directory**, where **~** is the home directory of your system.
The base directory has been pre-populated based on your selection on the Workflow page.
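
For example, given the tutorial dataset's layout, the two components might look like the following sketch (the segment names are illustrative; the Autocomplete feature described below can derive the format string for you):

.. code-block:: bash

    # Base directory (pre-populated from the Workflow page)
    ~/NWB_GUIDE/test-data/multi_session_dataset

    # Format string path: {subject_id} and {session_id} are extracted
    # from every source path that matches this pattern
    {subject_id}/{subject_id}_{session_id}/{subject_id}_{session_id}_g0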

We can take advantage of the **Autocomplete** feature of this page. Instead of manually filling out the format string, click the **Autocomplete** button to open a pop-up form that will derive the format string from a single example path.
To avoid specifying the format string path by hand, we can take advantage of **Autocomplete**. Click the **Autocomplete** button to open a pop-up form that will derive the format string from a single example path.

.. figure:: ../assets/tutorials/multiple/pathexpansion-autocomplete-open.png
:align: center
@@ -81,20 +81,6 @@ We should also indicate the ``sex`` of each subject since this is a requirement
:align: center
:alt: Complete subject table

.. note::
If you're trying to specify metadata that is shared across sessions, you can use the **Global Metadata** feature.

Pressing the Edit Global Metadata button at the top of the page will show a pop-up form which allows you to provide a
single default value for each property, as long as it’s expected not to be unique.

These values will take effect as soon as the pop-up form has been submitted.

While Global Metadata is less relevant when we’re working with two subjects, this feature can be very powerful when you’re working with tens or even hundreds of subjects in one conversion.

We recommend using Global Metadata to correct issues caught by the **NWB Inspector** that are seen across several sessions.

You’ll be able to specify Global Metadata on the Source Data and File Metadata pages as well.

Advance to the next page when you have entered subject metadata for all subjects.

Source Data Information
@@ -117,14 +108,16 @@ Aside from the session manager and global metadata features noted above, the fil

A complete General Metadata form

Acting as global metadata, the information supplied on the subject metadata page has pre-filled the Subject metadata for each session.
Acting as default metadata, the information supplied on the subject metadata page has pre-filled the Subject metadata for each session.

.. figure:: ../assets/tutorials/multiple/metadata-subject-complete.png
:align: center
:alt: Complete Subject metadata form

A complete Subject metadata form

You'll notice that there's an **Edit Default Metadata** button at the top of the page. This feature allows you to specify a single default value for each property that is expected to be the same across all sessions. **Use this button to fill in general metadata for your sessions**, which will save you time and effort while ensuring your files still follow Best Practices.

Finish the rest of the workflow as you would for a single session: review the preview files with the NWB Inspector and Neurosift, then complete the full conversion.

Congratulations on completing your first multi-session conversion! You can now convert multiple sessions at once, saving you time and effort.
15 changes: 8 additions & 7 deletions docs/tutorials/single_session.rst
@@ -1,9 +1,9 @@
Converting a Single Session
===========================

As a researcher, you’ve just completed an experimental session and you’d like to convert your data to NWB right away.
Let's imagine you've just completed an experimental session and you’d like to convert your data to NWB right away.

Upon launching the GUIDE, you'll begin on the Convert page. If you’re opening the application for the first time, there should be no pipelines listed on this page.
Upon launching the GUIDE, you'll begin on the **Convert** page. If you’re opening the application for the first time, there should be no pipelines listed on this page.

.. figure:: ../assets/tutorials/home-page.png
:align: center
@@ -107,7 +107,7 @@ The Session Start Time in the **General Metadata** section is already specified

While the **General Metadata** section is complete, take some time to fill out additional information such as the **Institutional Info** box and the **Experimenter** field.

However, we still need to add the Subject information—as noted by the red accents around that item. Let’s say that our subject is a male mouse with an age of P25W, which represents 25 weeks old.
We also need to add the **Subject** information—as noted by the red accents around that item. Let’s say that our subject is a male mouse with an age of P25W, which represents 25 weeks old.
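
For reference, subject ages in NWB follow the ISO 8601 duration format. A few illustrative values:

.. code-block:: text

    P25W    # 25 weeks
    P90D    # 90 days
    P1Y6M   # 1 year and 6 months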

.. figure:: ../assets/tutorials/single/metadata-subject-complete.png
:align: center
@@ -116,16 +116,14 @@ However, we still need to add the Subject information—as noted by the red acce
The status of the Subject information will update in real-time as you fill out the form.


This dataset will also have **Ecephys** metadata extracted from the SpikeGLX source data.
This dataset will also have **Ecephys** metadata extracted from the SpikeGLX source data, though we aren't interested in modifying this information at the moment.

.. figure:: ../assets/tutorials/single/metadata-ecephys.png
:align: center
:alt: Ecephys metadata extracted from the SpikeGLX source data


Let's leave this as-is and advance to the next page.

The next step generates a preview file and displays real-time progress throughout the conversion process.
Let's leave this as-is and advance to the next page. This will trigger the conversion of your source data into a preview NWB file.

File Conversion
---------------
@@ -145,11 +143,14 @@ Conversion Preview
^^^^^^^^^^^^^^^^^^
On the Conversion Preview, Neurosift allows you to explore the structure of the NWB file and ensure the packaged data matches your expectations.

In particular, take a look at the left-hand metadata table and check that the information provided on the previous pages is present in the NWB file.

.. figure:: ../assets/tutorials/single/preview-page.png
:align: center
:alt: Neurosift preview visualization

Neurosift can be useful for many other exploration tasks, but those are beyond the scope of this tutorial.

Advancing from this page will trigger the full conversion of your data to the NWB format, a process that may take some time depending on the dataset size.

Conversion Review