Merge branch 'port_dlc2nwb' into main
vigji authored Jul 11, 2024
2 parents c5513ba + e6faa02 commit d9b5270
Showing 50 changed files with 792 additions and 322 deletions.
@@ -0,0 +1,36 @@
name: Build and Upload Docker Image of Rclone With Config to GHCR

on:
schedule:
- cron: "0 16 * * 1" # Weekly at noon EST on Monday
workflow_dispatch:

concurrency: # Cancel previous workflows on the same pull request
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true

jobs:
release-image:
name: Build and Upload Docker Image of Rclone With Config to GHCR
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to GitHub Container Registry
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ secrets.DOCKER_UPLOADER_USERNAME }}
password: ${{ secrets.DOCKER_UPLOADER_PASSWORD }}
- name: Build and push
uses: docker/build-push-action@v5
with:
push: true # Push is a shorthand for --output=type=registry
tags: ghcr.io/catalystneuro/rclone_with_config:latest
context: .
file: dockerfiles/rclone_with_config
provenance: false
@@ -1,4 +1,4 @@
name: Docker CLI tests
name: NeuroConv Docker CLI tests
on:
schedule:
- cron: "0 16 * * *" # Daily at noon EST
39 changes: 39 additions & 0 deletions .github/workflows/rclone_docker_testing.yml
@@ -0,0 +1,39 @@
name: Rclone Docker Tests
on:
schedule:
- cron: "0 16 * * *" # Daily at noon EST
workflow_dispatch:

jobs:
run:
name: ${{ matrix.os }} Python ${{ matrix.python-version }}
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
python-version: ["3.12"]
os: [ubuntu-latest]
steps:
- uses: actions/checkout@v4
- run: git fetch --prune --unshallow --tags
- name: Setup Python ${{ matrix.python-version }}
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-version }}

- name: Global Setup
run: python -m pip install -U pip # Official recommended way

- name: Install pytest and neuroconv minimal
run: |
pip install pytest
pip install .
- name: Pull docker image
run: docker pull ghcr.io/catalystneuro/rclone_with_config:latest
- name: Run docker tests
run: pytest tests/docker_rclone_with_config_cli.py -vv -rsx
env:
RCLONE_DRIVE_ACCESS_TOKEN: ${{ secrets.RCLONE_DRIVE_ACCESS_TOKEN }}
RCLONE_DRIVE_REFRESH_TOKEN: ${{ secrets.RCLONE_DRIVE_REFRESH_TOKEN }}
RCLONE_EXPIRY_TOKEN: ${{ secrets.RCLONE_EXPIRY_TOKEN }}
10 changes: 5 additions & 5 deletions .pre-commit-config.yaml
@@ -12,11 +12,11 @@ repos:
- id: black
exclude: ^docs/

- repo: https://github.com/PyCQA/isort
rev: 5.13.2
hooks:
- id: isort
exclude: ^docs/
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.3.5
hooks:
- id: ruff
args: [ --fix ]

- repo: https://github.com/codespell-project/codespell
rev: v2.3.0
17 changes: 17 additions & 0 deletions CHANGELOG.md
@@ -1,12 +1,28 @@
# Upcoming

### Deprecations
* The usage of `compression_options` directly through the `neuroconv.tools.audio` submodule is now deprecated - users should refer to the new `configure_backend` method for a general approach for setting compression. [PR #939](https://github.com/catalystneuro/neuroconv/pull/939)
* The usage of `compression` and `compression_opts` directly through the `FicTracDataInterface` is now deprecated - users should refer to the new `configure_backend` method for a general approach for setting compression. [PR #941](https://github.com/catalystneuro/neuroconv/pull/941)
* The usage of `compression` directly through the `neuroconv.tools.neo` submodule is now deprecated - users should refer to the new `configure_backend` method for a general approach for setting compression. [PR #943](https://github.com/catalystneuro/neuroconv/pull/943)
* The usage of `compression_options` directly through the `neuroconv.tools.ophys` submodule is now deprecated - users should refer to the new `configure_backend` method for a general approach for setting compression. [PR #940](https://github.com/catalystneuro/neuroconv/pull/940)

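For orientation, here is a minimal sketch of the `configure_backend` approach referenced in the entries above (the import path and the per-dataset settings are illustrative assumptions, not prescribed by this changelog):

```python
# Sketch only: replace the deprecated compression options with configure_backend.
from neuroconv.tools.nwb_helpers import configure_backend, get_default_backend_configuration

nwbfile = ...  # an in-memory pynwb.NWBFile produced by your converter or interface
backend_configuration = get_default_backend_configuration(nwbfile=nwbfile, backend="hdf5")

# Set gzip compression on every dataset; individual datasets can also be addressed
# by their location key, e.g. "acquisition/AcousticWaveformSeries/data" (hypothetical).
for dataset_configuration in backend_configuration.dataset_configurations.values():
    dataset_configuration.compression_method = "gzip"

configure_backend(nwbfile=nwbfile, backend_configuration=backend_configuration)
```
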
### Features
* Added docker image and tests for an automated Rclone configuration (with file stream passed via an environment variable). [PR #902](https://github.com/catalystneuro/neuroconv/pull/902)

### Bug fixes
* Fixed the conversion option schema of a `SpikeGLXConverter` when used inside another `NWBConverter`. [PR #922](https://github.com/catalystneuro/neuroconv/pull/922)
* Fixed a case of the `NeuroScopeSortingExtractor` when the optional `xml_file_path` is not specified. [PR #926](https://github.com/catalystneuro/neuroconv/pull/926)
* Fixed `Can't specify experiment type when converting .abf to .nwb with Neuroconv`. [PR #609](https://github.com/catalystneuro/neuroconv/pull/609)
* Removed the assumption that the ports of the Intan acquisition system correspond to electrode groupings in `IntanRecordingInterface`. [PR #933](https://github.com/catalystneuro/neuroconv/pull/933)
* Added a `ValueError` for empty metadata in `make_or_load_nwbfile` when an NWB file needs to be created. [PR #948](https://github.com/catalystneuro/neuroconv/pull/948)

### Improvements
* Make annotations from the raw format available on `IntanRecordingInterface`. [PR #934](https://github.com/catalystneuro/neuroconv/pull/943)
* Add an option to suppress the display of the progress bar (tqdm) in `VideoContext`. [PR #937](https://github.com/catalystneuro/neuroconv/pull/937)
* Automatic compression of data in the `LightningPoseDataInterface` has been disabled - users should refer to the new `configure_backend` method for a general approach for setting compression. [PR #942](https://github.com/catalystneuro/neuroconv/pull/942)
* Port over `dlc2nwb` utility functions for ease of maintenance. [PR #946](https://github.com/catalystneuro/neuroconv/pull/946)



## v0.4.11 (June 14, 2024)

@@ -21,6 +37,7 @@
* Converter working with multiple VideoInterface instances [PR 914](https://github.com/catalystneuro/neuroconv/pull/914)



## v0.4.10 (June 6, 2024)

### Bug fixes
5 changes: 5 additions & 0 deletions dockerfiles/rclone_with_config
@@ -0,0 +1,5 @@
FROM rclone/rclone:latest
LABEL org.opencontainers.image.source=https://github.com/catalystneuro/neuroconv
LABEL org.opencontainers.image.description="A simple extension of the basic Rclone docker image to automatically create a local .conf file from contents passed via an environment variable."
CMD printf "$RCLONE_CONFIG" > ./rclone.conf && eval "$RCLONE_COMMAND"
ENTRYPOINT [""]
2 changes: 1 addition & 1 deletion docs/conf.py
@@ -1,5 +1,5 @@
import sys
import inspect
import sys
from pathlib import Path

sys.path.insert(0, str(Path(__file__).resolve().parents[1]))
8 changes: 7 additions & 1 deletion docs/conversion_examples_gallery/conftest.py
@@ -1,7 +1,13 @@
from pathlib import Path

import pytest

from tests.test_on_data.setup_paths import ECEPHY_DATA_PATH, BEHAVIOR_DATA_PATH, OPHYS_DATA_PATH, TEXT_DATA_PATH
from tests.test_on_data.setup_paths import (
BEHAVIOR_DATA_PATH,
ECEPHY_DATA_PATH,
OPHYS_DATA_PATH,
TEXT_DATA_PATH,
)


@pytest.fixture(autouse=True)
6 changes: 5 additions & 1 deletion docs/developer_guide/docker_images.rst
@@ -54,6 +54,8 @@ After building the image itself, we can publish the container with...

Though it may appear confusing, the use of the ``IMAGE_NAME`` in these steps determines only the _name_ of the package as available from the 'packages' screen of the host repository; the ``LABEL`` itself ensures the upload and linkage to the NeuroConv GHCR.

All of our Docker images can be built in GitHub Actions (for Ubuntu) and pushed automatically to the GHCR by manually triggering their respective workflows. Keep in mind, though, that most of them also run on semi-regular CRON schedules.
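
For example, a workflow can be dispatched from the command line with the GitHub CLI (a sketch only; it assumes ``gh`` is installed and authorized for the repository):

.. code::

    # Manually dispatch the Rclone image build workflow by its display name
    gh workflow run "Build and Upload Docker Image of Rclone With Config to GHCR" --repo catalystneuro/neuroconv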



Run Docker container on local YAML conversion specification file
@@ -73,12 +75,14 @@ and can then run the entrypoint (equivalent to the usual command line usage) on
.. _developer_docker_details:

Run Docker container on YAML conversion specification environment variable
--------------------------------------------------------------------------

An alternative approach that simplifies usage on systems such as AWS Batch is to specify the YAML contents as an environment variable. The YAML file is constructed in the first step of the container launch.

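A minimal sketch of such a launch is given below; the environment variable names (``NEUROCONV_YAML``, ``NEUROCONV_DATA_PATH``, ``NEUROCONV_OUTPUT_PATH``) and the image tag are assumptions for illustration and should be checked against the corresponding Dockerfile:

.. code::

    # Hypothetical example: pass the YAML specification through an environment variable
    export NEUROCONV_YAML=$(<conversion_specification.yml)
    docker run -t \
        --volume /path/to/data:/path/to/data \
        --volume /path/to/output:/path/to/output \
        -e NEUROCONV_YAML="$NEUROCONV_YAML" \
        -e NEUROCONV_DATA_PATH=/path/to/data \
        -e NEUROCONV_OUTPUT_PATH=/path/to/output \
        ghcr.io/catalystneuro/neuroconv:yaml_variable
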
The only potential downside with this usage is the maximum size of an environment variable (~13,000 characters). Typical YAML specification files should not come remotely close to this limit.
The only potential downside with this usage is the maximum size of an environment variable (~13,000 characters). Typical YAML specification files should not come remotely close to this limit. By contrast, the limit on the CMD line of a Docker container is 8192 characters on Windows and either 64 or 128 KiB on UNIX, depending on the build.

Otherwise, in any cloud deployment, the YAML file transfer will have to be managed separately, likely as a part of the data transfer or an entirely separate step.

33 changes: 31 additions & 2 deletions docs/user_guide/docker_demo.rst
@@ -1,5 +1,5 @@
Docker Demo
-----------
NeuroConv Docker Demo
---------------------

The following is an explicit demonstration of how to use the Docker-based NeuroConv YAML specification via the command line.

@@ -116,3 +116,32 @@ Voilà! If everything occurred successfully, you should see...
Metadata is valid!
conversion_options is valid!
NWB file saved at /demo_neuroconv_docker/demo_output/phy_from_docker_yaml.nwb!

Rclone With Config Docker Demo
------------------------------

NeuroConv also supports a convenient Docker image for running data transfers via `Rclone <https://rclone.org>`_.

To use this image, you must first configure the remote locally by calling:

.. code::

    rclone config

and then following all interactive instructions (the defaults are usually sufficient).

The Docker image requires two environment variables to be set (see :ref:`developer_docker_details` for more details on a related process).

- ``RCLONE_CONFIG``: The full file content of the rclone.conf file on your system. You can find its location by calling ``rclone config file``. On UNIX, for example, you can set this variable using ``RCLONE_CONFIG=$(<rclone.conf)`` from the folder containing the file.
- ``RCLONE_COMMAND``: The Rclone command to run. For example, ``rclone copy remote_name:source_folder destination_folder --verbose --progress --config ./rclone.conf``, where ``remote_name`` is the name used during initial setup through ``rclone config``, ``source_folder`` is the name of the folder you wish to transfer data from on that remote, and ``destination_folder`` is the local folder to transfer the data to (see the sketch below).

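For example, on UNIX the two variables could be set as follows (a sketch; the remote name ``my_drive`` and the folder names are hypothetical placeholders):

.. code::

    # Hypothetical remote "my_drive" and folders; adjust to your own rclone configuration
    export RCLONE_CONFIG=$(<rclone.conf)
    export RCLONE_COMMAND="rclone copy my_drive:source_folder /path/to/destination_folder --verbose --progress --config ./rclone.conf"
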
Then, you can use the following command to run the Rclone Docker image:

.. code::

    docker run -t --volume destination_folder:destination_folder -e RCLONE_CONFIG="$RCLONE_CONFIG" -e RCLONE_COMMAND="$RCLONE_COMMAND" ghcr.io/catalystneuro/rclone_with_config:latest

This image is designed particularly for convenience with AWS Batch (EC2) tools that rely heavily on atomic Docker operations. Alternative AWS approaches would have required transferring the Rclone configuration file to the EC2 instances using separate transfer protocols or dependent steps, both of which add complexity to the workflow.
36 changes: 25 additions & 11 deletions pyproject.toml
@@ -1,3 +1,14 @@
[tool.pytest.ini_options]
minversion = "6.0"
addopts = "-ra --doctest-glob='*.rst'"
testpaths = [
"docs/conversion_examples_gallery/",
"tests"
]
doctest_optionflags = "ELLIPSIS"



[tool.black]
line-length = 120
target-version = ['py38', 'py39', 'py310']
@@ -19,19 +30,22 @@ extend-exclude = '''
)/
'''

[tool.pytest.ini_options]
minversion = "6.0"
addopts = "-ra --doctest-glob='*.rst'"
testpaths = [
"docs/conversion_examples_gallery/",
"tests"


[tool.ruff]
exclude = [
"*/__init__.py"
]
doctest_optionflags = "ELLIPSIS"

[tool.isort]
profile = "black"
reverse_relative = true
known_first_party = ["neuroconv"]
[tool.ruff.lint]
select = ["F401", "I"] # TODO: eventually, expand to other 'F' linting
fixable = ["ALL"]

[tool.ruff.lint.isort]
relative-imports-order = "closest-to-furthest"
known-first-party = ["neuroconv"]



[tool.codespell]
skip = '.git*,*.pdf,*.css'
1 change: 0 additions & 1 deletion setup.py
@@ -88,7 +88,6 @@
license="BSD-3-Clause",
classifiers=[
"Intended Audience :: Science/Research",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
14 changes: 4 additions & 10 deletions src/neuroconv/datainterfaces/behavior/audio/audiointerface.py
@@ -4,23 +4,19 @@

import numpy as np
import scipy
from pynwb import NWBFile, TimeSeries
from pynwb import NWBFile

from ....basetemporalalignmentinterface import BaseTemporalAlignmentInterface
from ....tools.audio import add_acoustic_waveform_series
from ....tools.nwb_helpers import make_or_load_nwbfile
from ....utils import (
DeepDict,
FilePathType,
get_base_schema,
get_schema_from_hdmf_class,
)


def _check_audio_names_are_unique(metadata: dict):
neurodata_names = [neurodata["name"] for neurodata in metadata]
neurodata_names_are_unique = len(set(neurodata_names)) == len(neurodata_names)
assert neurodata_names_are_unique, f"Some of the names for Audio metadata are not unique."
assert neurodata_names_are_unique, "Some of the names for Audio metadata are not unique."


class AudioInterface(BaseTemporalAlignmentInterface):
@@ -168,7 +164,7 @@ def add_to_nwbfile(
stub_frames: int = 1000,
write_as: Literal["stimulus", "acquisition"] = "stimulus",
iterator_options: Optional[dict] = None,
compression_options: Optional[dict] = None,
compression_options: Optional[dict] = None, # TODO: remove completely after 10/1/2024
overwrite: bool = False,
verbose: bool = True,
):
@@ -185,8 +181,6 @@
"stimulus" or as "acquisition".
iterator_options : dict, optional
Dictionary of options for the SliceableDataChunkIterator.
compression_options : dict, optional
Dictionary of options for compressing the data for H5DataIO.
overwrite : bool, default: False
verbose : bool, default: True
@@ -228,7 +222,7 @@
write_as=write_as,
starting_time=starting_times[file_index],
iterator_options=iterator_options,
compression_options=compression_options,
compression_options=compression_options, # TODO: remove completely after 10/1/2024; still passing for deprecation warning
)

return nwbfile