From 48427e7f83fe9c6a48cd362f5665c710b8358d75 Mon Sep 17 00:00:00 2001 From: Ryan Ly Date: Tue, 6 Aug 2024 17:48:34 -0700 Subject: [PATCH 1/2] Attempt git credentials fix --- .github/workflows/data.yml | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-) diff --git a/.github/workflows/data.yml b/.github/workflows/data.yml index cd0ca68..03aa641 100644 --- a/.github/workflows/data.yml +++ b/.github/workflows/data.yml @@ -19,8 +19,6 @@ jobs: CI_COMMIT_USER: 'github-actions[bot]' CI_COMMIT_EMAIL: 'github-actions[bot]@users.noreply.github.com' CI_COMMIT_MESSAGE: '[bot] update records' - CI_PUSH_REMOTE: git@github.com:$GITHUB_REPOSITORY.git - CI_PUSH_BRANCH: 'main' steps: - uses: actions/checkout@v4 with: @@ -40,12 +38,12 @@ jobs: - name: Commit & Push to repository run: | git config user.name "${{ env.CI_COMMIT_USER }}" - git config user.mail "${{ env.CI_COMMIT_EMAIL }}" + git config user.email "${{ env.CI_COMMIT_EMAIL }}" if [[ `git diff data` ]] ; then git add data git commit -m "${{ env.CI_COMMIT_MESSAGE }}" - git push "${{ env.CI_PUSH_REMOTE }}" "HEAD:${{ env.CI_PUSH_BRANCH }}" + git push else echo "No changes were found in data" exit 0 From 8c3036150a8d1429ebd2637a02e5c092835c5926 Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" Date: Wed, 7 Aug 2024 00:58:30 +0000 Subject: [PATCH 2/2] [bot] update records --- data/records.json | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/data/records.json b/data/records.json index e5a8d95..2cd6414 100644 --- a/data/records.json +++ b/data/records.json @@ -1 +1 @@ -{"ndx-miniscope-record": {"ref": "ndx-miniscope-record", "record_url": "https://github.com/nwb-extensions/ndx-miniscope-record", "last_updated": "2019-10-16T05:56:05Z", "name": "ndx-miniscope", "version": "0.2.2", "src": "https://github.com/bendichter/ndx-miniscope", "pip": "https://pypi.org/project/ndx-miniscope/", "license": "BSD", "maintainers": ["bendichter"], "readme": "# ndx-miniscope Extension for NWB:N\n\nThis is a Neurodata Extension (NDX) for Neurodata Without Borders: Neurophysiology (NWB:N) 2.0 that provides an extension to Device to hold meta-data collected by the Miniscope device."}, "ndx-simulation-output-record": {"ref": "ndx-simulation-output-record", "record_url": "https://github.com/nwb-extensions/ndx-simulation-output-record", "last_updated": "2019-10-16T06:23:42Z", "name": "ndx-simulation-output", "version": "0.2.5", "src": "https://github.com/bendichter/ndx-simulation-output", "pip": "https://pypi.org/project/ndx-simulation-output", "license": "BSD", "maintainers": ["bendichter"], "readme": "# ndx-simulation-output Extension for NWB:N\n\n## An extension for output data of large-scale simulations\n Developed in collaboration between the Soltesz lab and the Allen Institute during [NWB Hackathon #4](https://github.com/NeurodataWithoutBorders/nwb_hackathons/tree/master/HCK04_2018_Seattle/Projects/NetworkOutput) by Ben Dichter*, Kael Dai*, Aaron Milstein, Yazan Billeh, Andrew Tritt, Jean-Christophe Fillion-Robin, Anton Akhipov, Oliver Ruebel, Nicholas Cain, Kristofer Bouchard, and Ivan Soltesz\n\nThis extension defines two NWB neuorodata_types, `CompartmentSeries` and `Compartments`. `CompartmentSeries` stores continuous data (e.g. membrane potential, calcium concentration) from many compartments of many cells, and scales to hundreds of thousands of compartments. 
`Compartments` stores the meta-data associated with those compartments, and is stored in `SimulationMetaData`.\n\n![Image of CompartmentSeries](multicompartment_schema_1.png)\n\n\n## Guide\n### python\n#### installation\n```\npip install ndx-simulation-output\n```\n\n#### usage\n```python\nfrom pynwb import NWBHDF5IO, NWBFile\nfrom datetime import datetime\nfrom ndx_simulation_output import CompartmentSeries, Compartments, SimulationMetaData\nimport numpy as np\n\n\ncompartments = Compartments()\ncompartments.add_row(number=[0, 1, 2, 3, 4], position=[0.1, 0.2, 0.3, 0.4, 0.5])\ncompartments.add_row(number=[0], position=[np.nan])\n\nnwbfile = NWBFile('description', 'id', datetime.now().astimezone())\n\nnwbfile.add_lab_meta_data(SimulationMetaData(compartments=compartments))\ncs = CompartmentSeries('membrane_potential', np.random.randn(10, 6),\n compartments=compartments, unit='V', rate=100.)\nnwbfile.add_acquisition(cs)\n\nwith NWBHDF5IO('test_compartment_series.nwb', 'w') as io:\n io.write(nwbfile)\n```\n\nconversion from SONTATA:\n```python\nfrom ndx_simulation_output.io import sonata2nwb\n\nsonata2nwb('sonata_fpath', 'save_path')\n```\n\n### MATLAB\n#### installation\n\ncommand line:\n```\ngit clone https://github.com/bendichter/ndx-simulation-output.git\n```\n\nin matlab:\n```matlab\ngenerateExtension('/path/to/ndx-simulation-output/spec/ndx-simulation-output.namespace.yaml');\n```\n\n#### usage\n```matlab\nnwb = nwbfile()\n\n[number, number_index] = util.create_indexed_column( ...\n {[0, 1, 2, 3, 4], 0}, '/acquisition/compartments/number');\n\n[position, position_index] = util.create_indexed_column( ...\n {[0.1, 0.2, 0.3, 0.4, 0.5], 0}, '/acquisition/compartments/position');\n\ncompartments = types.ndx_simulation_output.Compartments( ...\n 'colnames', {'number', 'position'}, ...\n 'description', 'membrane potential from various compartments', ...\n 'id', types.core.ElementIdentifiers('data', int64(0:5)));\n\ncompartments.position = position;\ncompartments.position_index = position_index;\ncompartments.number = number;\ncompartments.number_index = number_index;\n\nmembrane_potential = types.ndx_simulation_output.CompartmentSeries( ...\n 'data', randn(10,6), ...\n 'compartments', types.untyped.SoftLink('/acquisition/compartments'), ...\n 'data_unit', 'V', ...\n 'starting_time_rate', 100., ...\n 'starting_time', 0.0);\n \nsimulation = types.ndx_simulation_output.SimulationMetaData('compartments', compartments);\n \nnwb.general.set('simulation', simulation);\n\nnwb.acquisition.set('membrane_potential', membrane_potential);\n```\n\n## Talks\nBen Dichter*, Kael Dai*, Aaron Milstein, Yazan Billeh, Andrew Tritt, Jean-Christophe Fillion-Robin, Anton Akhipov, Oliver Ruebel, Nicholas Cain, Kristofer Bouchard, Ivan Soltesz. NWB extension for storing results of large-scale neural network simulations. NeuroInformatics. Montreal, Canada (2018). [video](https://www.youtube.com/watch?v=uuYQW0EE2GY).\n"}, "ndx-ecog-record": {"ref": "ndx-ecog-record", "record_url": "https://github.com/nwb-extensions/ndx-ecog-record", "last_updated": "2019-10-16T08:20:22Z", "name": "ndx-ecog", "version": "0.1.1", "src": "https://github.com/ben-dichter-consulting/ndx-ecog", "pip": "https://pypi.org/project/ndx-ecog/", "license": "BSD", "maintainers": ["bendichter"], "readme": "# ndx-ecog Extension for NWB:N\n\nAuthor: Ben Dichter\n\nThere are three data types, `Surface`, `CorticalSurfaces`, and `ECoGSubject`. `CorticalSurfaces` is simply a group (like a folder) to put `Surface` objects into. 
`Surface` holds surface mesh data (vertices and triangular faces) for sections of cortex. `ECoGSubject` is an extension of `Subject` that allows you to add the `CorticalSurfaces` object to `/general/subject`.\n\n## Usage\n\n### python\n\ninstall:\n```bash\npip install ndx_ecog\n```\n\nwrite:\n```python\nimport pynwb\nfrom ndx_ecog import CorticalSurfaces, ECoGSubject\n\nnwbfile = pynwb.NWBFile(...)\n\n...\n\ncortical_surfaces = CorticalSurfaces()\n## loop me\n cortical_surfaces.create_surface(name=name, faces=faces, vertices=veritices)\n##\nnwbfile.subject = ECoGSubject(cortical_surfaces=cortical_surfaces)\n```\n\nYou can optionally attach images of the subject's brain:\n```python\nfrom pynwb.base import Images\nfrom pynwb.image import GrayscaleImage\n\nsubject.images = Images(name='subject images', images=[GrayscaleImage('image1', data=image_data)])\n```\n\nread:\n```python\nimport nwbext_ecog\nfrom pynwb import NWBHDF5IO\nio = NWBHDF5IO('path_to_file.nwb','r')\nnwb = io.read()\nnwb.subject.cortical_surfaces\n```\n\n### MATLAB\ninstall:\n```matlab\ngenerateExtension('/path/to/ndx-ecog/spec/ndx-ecog.namespace.yaml');\n```\n\nwrite:\n```matlab\ncortical_surfaces = types.ecog.CorticalSurfaces;\n\n%%% loop me\n surf = types.ecog.Surface('faces', faces, 'vertices', vertices);\n cortical_surfaces.surface.set(surface_name, surf);\n%%%\n\nfile.subject = types.ecog.ECoGSubject(name, cortical_surfaces);\n```\n"}, "ndx-fret-record": {"ref": "ndx-fret-record", "record_url": "https://github.com/nwb-extensions/ndx-fret-record", "last_updated": "2020-01-24T21:49:16Z", "name": "ndx-fret", "version": "0.1.1", "src": "https://github.com/ben-dichter-consulting/ndx-fret", "pip": "https://pypi.org/project/ndx-fret/", "license": "BSD", "maintainers": ["luiztauffer", "bendichter"], "readme": "# ndx-fret\n[![PyPI version](https://badge.fury.io/py/ndx-fret.svg)](https://badge.fury.io/py/ndx-fret)\n\nNWB extension for storing Fluorescence Resonance Energy Transfer (FRET) experimental data.\nA collaboration with [Jaeger Lab](https://scholarblogs.emory.edu/jaegerlab/), [Emory University](https://www.emory.edu/home/index.html) and [The Kavli Foundation](https://www.kavlifoundation.org/).\n\n
\n\n### Python Installation\n```bash\npip install ndx-fret\n```\n\n### Python Usage\n```python\nfrom pynwb import NWBFile, NWBHDF5IO\nfrom pynwb.device import Device\nfrom pynwb.ophys import OpticalChannel\nfrom ndx_fret import FRET, FRETSeries\n\nfrom datetime import datetime\nimport numpy as np\n\nnwb = NWBFile('session_description', 'identifier', datetime.now().astimezone())\n\n# Create and add device\ndevice = Device(name='Device')\nnwb.add_device(device)\n\n# Create optical channels\nopt_ch_d = OpticalChannel(\n name='optical_channel',\n description='optical_channel_description',\n emission_lambda=529.\n)\nopt_ch_a = OpticalChannel(\n name='optical_channel',\n description='optical_channel_description',\n emission_lambda=633.\n)\n\n# Create FRET\nfs_d = FRETSeries(\n name='donor',\n fluorophore='mCitrine',\n optical_channel=opt_ch_d,\n device=device,\n description='description of donor series',\n data=np.random.randn(100, 10, 10),\n rate=200.,\n)\nfs_a = FRETSeries(\n name='acceptor',\n fluorophore='mKate2',\n optical_channel=opt_ch_a,\n device=device,\n description='description of acceptor series',\n data=np.random.randn(100, 10, 10),\n rate=200.,\n)\n\nfret = FRET(\n name='FRET',\n excitation_lambda=482.,\n donor=fs_d,\n acceptor=fs_a\n)\nnwb.add_acquisition(fret)\n\n# Write nwb file\nwith NWBHDF5IO('test_fret.nwb', 'w') as io:\n io.write(nwb)\n print('NWB file written')\n\n# Read nwb file and check its content\nwith NWBHDF5IO('test_fret.nwb', 'r', load_namespaces=True) as io:\n nwb = io.read()\n print(nwb)\n```\n"}, "ndx-icephys-meta-record": {"ref": "ndx-icephys-meta-record", "record_url": "https://github.com/nwb-extensions/ndx-icephys-meta-record", "last_updated": "2022-04-20T19:15:11Z", "name": "ndx-icephys-meta", "version": "0.1.0", "src": "https://github.com/oruebel/ndx-icephys-meta", "pip": "https://pypi.org/project/ndx-icephys-meta/", "license": "BSD 3-Clause", "maintainers": ["oruebel"], "readme": "# [Deprecated] ndx-icephys-meta Extension for NWB\n\n**Status:** The changes from this extension have been integrated with NWB and are part of then [NWB 2.4 release](https://nwb-schema.readthedocs.io/en/latest/format_release_notes.html#aug-11-2021). Use of this extension is deprecated. \n\n**Overview:** This extension implements the icephys extension proposal described [here](https://docs.google.com/document/d/1cAgsXv26BmQoVfa7Greyxs0oc4IGH-t5aJsm-AwUAAE/edit). The extension is intended to evaluate and explore the practical use of the proposed changes as well as to provide a reference implementation with the goal to ease integration of the proposed changes with NWB.\n\n## Install\n\n```\npython setup.py develop\n```\n\nThe extension is now also available on pip and can be installed via:\n\n```\npip install ndx-icephys-meta\n```\n\nThe extension is also listed in the (NDX catalog)[https://nwb-extensions.github.io/]. See [here](https://github.com/nwb-extensions/ndx-icephys-meta-record) for the catalog metadata record.\n\n\n## Building the spec documentation\n\n```\ncd docs\nmake html\n```\n\nThis generates the specification docs directly from the YAML specifciation in the ``spec`` folder. 
The generated docs are stored in ``/docs/build``\n\n## Running the unit tests\n\n```\npython src/pynwb/ndx_icephys_meta/test/test_icephys.py\n```\n\n## Content\n\n* ``spec/`` : YAML specification of the extension\n* ``docs/`` : Sources for building the specification docs from the YAML spec\n* ``src/spec/create_extension_spec.py`` : Python source file for creating the specification\n* ``src/pynwb/`` : Sources for Python extensions and examples\n * ``ndx_icephys_meta`` : Python package with extensions to PyNWB for read/write of extension data\n * ``ndx_icephys_meta/test`` : Unit test for the Python extension\n * ``ndx_icephys_meta/icephys.py`` : PyNWB Container classes\n * ``ndx_icephys_meta/io/icephys.py`` : PyNWB ObjectMapper classes\n * ``examples`` : Examples illustrating the use of the extension in Python\n\n\n## Example\n\nExamples for the Python extension are available at ``src/pynwb/examples``. The unit tests in ``src/pynwb/ndx_icephys_meta/test`` can also serve as additional examples.\n\nThe following shows a simple example. The steps with (A) - (E) in the comments are the main new steps for this extension. The other parts of the code are standard NWB code.\n\n```python\nfrom datetime import datetime\nfrom dateutil.tz import tzlocal\nimport numpy as np\nfrom pynwb.icephys import VoltageClampStimulusSeries, VoltageClampSeries\nfrom pynwb import NWBHDF5IO\nfrom ndx_icephys_meta.icephys import ICEphysFile # Import the extension\n\n# Create an ICEphysFile\nnwbfile = ICEphysFile(session_description='my first recording',\n identifier='EXAMPLE_ID',\n session_start_time=datetime.now(tzlocal()))\n\n# Add a device\ndevice = nwbfile.create_device(name='Heka ITC-1600')\n\n# Add an intracellular electrode\nelectrode = nwbfile.create_ic_electrode(name=\"elec0\",\n description='a mock intracellular electrode',\n device=device)\n\n# Create an ic-ephys stimulus\nstimulus = VoltageClampStimulusSeries(\n name=\"stimulus\",\n data=[1, 2, 3, 4, 5],\n starting_time=123.6,\n rate=10e3,\n electrode=electrode,\n gain=0.02)\n\n# Create an ic-response\nresponse = VoltageClampSeries(\n name='response',\n data=[0.1, 0.2, 0.3, 0.4, 0.5],\n conversion=1e-12,\n resolution=np.nan,\n starting_time=123.6,\n rate=20e3,\n electrode=electrode,\n gain=0.02,\n capacitance_slow=100e-12,\n resistance_comp_correction=70.0)\n\n# (A) Add an intracellular recording to the file\nir_index = nwbfile.add_intracellular_recording(electrode=electrode,\n stimulus=stimulus,\n response=response)\n\n# (B) Add a list of sweeps to the sweeps table\nsweep_index = nwbfile.add_ic_sweep(recordings=[ir_index, ])\n\n# (C) Add a list of sweep table indices as a sweep sequence\nsequence_index = nwbfile.add_ic_sweep_sequence(sweeps=[sweep_index, ])\n\n# (D) Add a list of sweep sequence table indices as a run\nrun_index = nwbfile.add_ic_run(sweep_sequences=[sequence_index, ])\n\n# (E) Add a list of run table indices as a condition\nnwbfile.add_ic_condition(runs=[run_index, ])\n\n# Write our test file\ntestpath = \"test_icephys_file.h5\"\nwith NWBHDF5IO(testpath, 'w') as io:\n io.write(nwbfile)\n\n# Read the data back in\nwith NWBHDF5IO(testpath, 'r') as io:\n infile = io.read()\n print(infile)\n\n```\n"}, "ndx-events-record": {"ref": "ndx-events-record", "record_url": "https://github.com/nwb-extensions/ndx-events-record", "last_updated": "2022-11-15T06:37:13Z", "name": "ndx-events", "version": "0.2.0", "src": "https://github.com/rly/ndx-events", "pip": "https://pypi.org/project/ndx-events/", "license": "BSD", "maintainers": ["rly"], "readme": "# 
ndx-events Extension for NWB\n\nThis is an NWB extension for storing timestamped event data and TTL pulses.\n\nEvents can be:\n1. **Simple events**. These are stored in the `Events` type. The `Events` type consists of only a name, a description,\nand a 1D array of timestamps. This should be used instead of a `TimeSeries` when the time series has no data.\n2. **Labeled events**. These are stored in the `LabeledEvents` type. The `LabeledEvents` type expands on the `Events`\ntype by adding 1) a 1D array of integer values (data) with the same length as the timestamps and 2) a 1D array of\nlabels (labels) associated with each unique integer value in the data array. The data values are indices into the\narray of labels. The `LabeledEvents` type can be used to encode additional information about individual events,\nsuch as the reward values for each reward event.\n3. **TTL pulses**. These are stored in the `TTLs` type. The `TTLs` type is a subtype of the `LabeledEvents` type\nspecifically for TTL pulse data. A single instance should be used for all TTL pulse data. The pulse value (or channel)\nshould be stored in the 1D data array, and the labels associated with each pulse value (or channel)\nshould be stored in the 1D array of labels.\n4. **Annotated events**. These are stored in the `AnnotatedEventsTable` type. The `AnnotatedEventsTable` type is a\nsubtype of `DynamicTable`, where each row corresponds to a different event type. The table has a ragged\n(variable-length) 1D column of event times, such that each event type (row) is associated with an array of event times.\nUnlike for the other event types, users can add their own custom columns to annotate each event type or event time.\nThis can be useful for storing event metadata related to data preprocessing and analysis, such as marking bad events.\n\nThis extension was developed by Ryan Ly, Ben Dichter, Oliver R\u00fcbel, and Andrew Tritt. 
Information about the rationale,\nbackground, and alternative approaches to this extension can be found here:\nhttps://docs.google.com/document/d/1qcsjyFVX9oI_746RdMoDdmQPu940s0YtDjb1en1Xtdw\n\n## Installation\n\n```\npip install ndx-events\n```\n\n## Example usage\n\n```python\nfrom datetime import datetime\n\nfrom pynwb import NWBFile, NWBHDF5IO\nfrom ndx_events import LabeledEvents, AnnotatedEventsTable\n\n\nnwb = NWBFile(\n session_description='session description',\n identifier='cool_experiment_001',\n session_start_time=datetime.now().astimezone()\n)\n\n# create a new LabeledEvents type to hold events recorded from the data acquisition system\nevents = LabeledEvents(\n name='LabeledEvents',\n description='events from my experiment',\n timestamps=[0., 0.5, 0.6, 2., 2.05, 3., 3.5, 3.6, 4.],\n resolution=1e-5, # resolution of the timestamps, i.e., smallest possible difference between timestamps\n data=[0, 1, 2, 3, 5, 0, 1, 2, 4],\n labels=['trial_start', 'cue_onset', 'cue_offset', 'response_left', 'response_right', 'reward']\n)\n\n# add the LabeledEvents type to the acquisition group of the NWB file\nnwb.add_acquisition(events)\n\n# create a new AnnotatedEventsTable type to hold annotated events\nannotated_events = AnnotatedEventsTable(\n name='AnnotatedEventsTable',\n description='annotated events from my experiment',\n resolution=1e-5 # resolution of the timestamps, i.e., smallest possible difference between timestamps\n)\n# add a custom indexed (ragged) column to represent whether each event time was a bad event\nannotated_events.add_column(\n name='bad_event',\n description='whether each event time should be excluded',\n index=True\n)\n# add an event type (row) to the AnnotatedEventsTable instance\nannotated_events.add_event_type(\n label='Reward',\n event_description='Times when the subject received juice reward.',\n event_times=[1., 2., 3.],\n bad_event=[False, False, True],\n id=3\n)\n\n# create a processing module in the NWB file to hold processed events data\nevents_module = nwb.create_processing_module(\n name='events',\n description='processed event data'\n)\n\n# add the AnnotatedEventsTable instance to the processing module\nevents_module.add(annotated_events)\n\n# write nwb file\nfilename = 'test.nwb'\nwith NWBHDF5IO(filename, 'w') as io:\n io.write(nwb)\n\n# read nwb file and check its contents\nwith NWBHDF5IO(filename, 'r', load_namespaces=True) as io:\n nwb = io.read()\n print(nwb)\n```\n\nThis extension was created using [ndx-template](https://github.com/nwb-extensions/ndx-template).\n"}, "ndx-nirs-record": {"ref": "ndx-nirs-record", "record_url": "https://github.com/nwb-extensions/ndx-nirs-record", "last_updated": "2022-11-15T06:37:40Z", "name": "ndx-nirs", "version": "0.3.0", "src": "https://github.com/agencyenterprise/ndx-nirs", "pip": "https://pypi.org/project/ndx-nirs/", "license": "BSD 3-Clause", "maintainers": ["sumner15", "dsleiter", "ribeirojose"], "readme": "# ndx-nirs Extension for NWB\n\nThis is an [NWB](https://www.nwb.org/) extension for storing and sharing near-infrared spectroscopy (NIRS) data.\n\nIf you're new to NWB: \"Neurodata Without Borders (NWB) is a data standard for neurophysiology, providing neuroscientists with a common standard to share, archive, use, and build common analysis tools for neurophysiology data.\" ([source](https://www.nwb.org/nwb-neurophysiology/))\n\nThis extension defines the data specification for NIRS data in addition to providing a python API for reading and writing .nwb files containing data that follows this 
specification. The python package can be used with [pyNWB](https://github.com/NeurodataWithoutBorders/pynwb).\n\nThis extension has been officially accepted into the [Neurodata Extensions Catalog](https://nwb-extensions.github.io/) and can be found there along with other accepted extensions.\n\n## Introduction to NIRS\n\nNIRS uses near-infrared sources (from 780 nm to 2500 nm) to assess brain function by detecting changes in blood hemoglobin (Hb) concentrations.\n\nAs neural activity changes, blood volume and the concentration of hemoglobin in the local area change through the neurovascular coupling phenomenon. NIRS techniques require optical sources with two or more wavelengths in the near-infrared spectrum. One must have a wavelength above and one below the isosbestic point of 810 nm - the point at which deoxygenated hemoglobin (deoxy-Hb) and oxygenated hemoglobin (oxy-Hb) have identical absorption coefficients. Using the modified Beer-Lambert law (mBLL), NIRS techniques reveal changes in hemoglobin concentration. NIRS monitors hemoglobin levels through these optical absorption coefficients as a proxy for localized brain activity.\n\n## Purpose of the extension\n\nThe user-base of NIRS techniques continues to grow. In addition, NIRS techniques are often used in conjunction with other brain recording techniques (e.g. EEG) and/or use common stimuli or behavioral paradigms. The NWB NIRS extension provides a data standard for neuroscientists to share, archive, use, and build analysis tools for NIRS data.\n\nIntegration of NIRS into the NWB data standard affords all NIRS users interoperability with many of the data storage, processing, analysis, and visualization tools already integrated within NWB.\n\n## Modes of NIRS currently supported\n\nThis extension currently explicitly supports:\n\n1. Continuous Wave\n - see `NIRSDevice.nirs_mode`\n2. Frequency-Domain\n - see `NIRSDevice.nirs_mode` and `NIRSDevice.frequency`\n3. Time-Domain\n - see `NIRSDevice.nirs_mode`, `NIRSDevice.time_delay`, and `NIRSDevice.time_delay_width`\n4. Diffuse Correlation Spectroscopy\n - see `NIRSDevice.nirs_mode`, `NIRSDevice.correlation_time_delay`, and `NIRSDevice.correlation_time_delay_width`\n\nIn addition, it includes support for fluorescent versions of each of these techniques.\n - see `NIRSChannelsTable.emission_wavelength`\n\nOther NIRS modalities are supported implicitly. We acknowledge that NIRS is a fast-growing recording method with new modalities constantly under development. For this reason, it is possible to define other useful parameters using the `NIRSDevice.additional_parameters` field. Future versions of NWB NIRS will add native support for new NIRS modalities.\n\n## Related data standards\n\nThe NWB NIRS neurodata type was inspired by the [SNIRF](https://fnirs.org/resources/software/snirf/) data specification ([Github](https://github.com/fNIRS/snirf)). Many of the data fields can be directly mapped from SNIRF to NWB and vice-versa. 
We expect to release a SNIRF<->NWB conversion tool in the near future to improve compatibility between data standards and ease the burden of conversion on NIRS researchers.\n\n## NWB NIRS data architecture\n\nThe two principal neurodata types of this extension are ``NIRSDevice``, which extends the `Device` data type and holds information about the NIRS hardware and software configuration, and ``NIRSSeries``, which contains the timeseries data collected by the NIRS device.\n\n``NIRSSourcesTable``, ``NIRSDetectorsTable``, and ``NIRSChannelsTable`` are children of ``NIRSDevice`` that describe the source and detector layout as well as the wavelength-specific optical channels that are measured.\n\nEach row of ``NIRSChannelsTable`` represents a specific source and detector pair along with the source illumination wavelength (and optionally, in the case of fluorescent spectroscopy, the emission/detection wavelength). The channels in this table have a 1-to-1 correspondence with the data columns in ``NIRSSeries``.\n\n![ndx-nirs UML](https://github.com/agencyenterprise/ndx-nirs/raw/main/docs/source/images/ndx-nirs-uml.png)\n\n### Defined neurodata types\n\n1. ``NIRSSourcesTable`` stores rows for each optical source of a NIRS device. ``NIRSSourcesTable`` columns include:\n - ``label`` - the label of the source.\n - ``x``, ``y``, and ``z`` - the coordinates in meters of the optical source (``z`` is optional).\n\n2. ``NIRSDetectorsTable`` stores rows for each of the optical detectors of a NIRS device. ``NIRSDetectorsTable`` columns include:\n - ``label`` - the label of the detector.\n - ``x``, ``y``, and ``z`` - the coordinates in meters of the optical detector (``z`` is optional).\n\n3. ``NIRSChannelsTable`` stores rows for each physiological channel, which is defined by a source-detector pair, where sources & detectors are referenced via ``NIRSSourcesTable`` and ``NIRSDetectorsTable``. ``NIRSChannelsTable`` columns include:\n - ``label`` - the label of the channel.\n - ``source`` - a reference to the optical source in ``NIRSSourcesTable``.\n - ``detector`` - a reference to the optical detector in ``NIRSDetectorsTable``.\n - ``source_wavelength`` - the wavelength of light in nm emitted by the source for this channel.\n - ``emission_wavelength`` - the wavelength of light in nm emitted by the fluorophore (optional; only used for fluorescent spectroscopy).\n - ``source_power`` - the power of the source in mW used for this channel (optional).\n - ``detector_gain`` - the gain applied to the detector for this channel (optional).\n \n4. 
``NIRSDevice`` defines the NIRS device itself and includes the following required fields:\n - ``name`` - a unique name for the device.\n - ``description`` - a free-form text description of the device.\n - ``manufacturer`` - the name of the manufacturer of the device.\n - ``channels`` - a table of the optical channels available on this device (references ``NIRSChannelsTable``).\n - ``sources`` - the optical sources of this device (references ``NIRSSourcesTable``).\n - ``detectors`` - the optical detectors of this device (references ``NIRSDetectorsTable``).\n - ``nirs_mode`` - the mode of NIRS measurement performed with this device (e.g., 'continuous-wave', 'frequency-domain', etc.).\n \n ``NIRSDevice`` also includes several optional attributes to be used in parallel with specific ``nirs_mode`` values:\n - ``frequency`` - the modulation frequency in Hz for frequency domain NIRS (optional).\n - ``time_delay`` - the time delay in ns used for gated time domain NIRS (TD-NIRS) (optional).\n - ``time_delay_width`` - the time delay width in ns used for gated time domain NIRS (optional).\n - ``correlation_time_delay`` - the correlation time delay in ns for diffuse correlation spectroscopy NIRS (optional).\n - ``correlation_time_delay_width`` - the correlation time delay width in ns for diffuse correlation spectroscopy NIRS (optional).\n - ``additional_parameters`` - any additional parameters corresponding to the NIRS device/mode that are useful for interpreting the data (optional).\n\n5. ``NIRSSeries`` stores the actual timeseries data collected by the NIRS device and includes:\n - ``name`` - a unique name for the NIRS timeseries.\n - ``description`` - a description of the NIRS timeseries.\n - ``timestamps`` - the timestamps for each row of ``data`` in seconds.\n - ``channels`` - a ``DynamicTableRegion`` mapping to the appropriate channels in a ``NIRSChannelsTable``.\n - ``data`` - the actual numeric raw data measured by the NIRS system. 
It is a 2D array where the columns correspond to ``channels`` and the rows correspond to ``timestamps``.\n\n## Installation\n\nTo install from PyPI use pip:\n\n```\n$ pip install ndx-nirs\n```\n\nTo install after cloning the extension repo from github, execute the following from the root of the repo:\n\n```\n$ pip install .\n```\n\nFor development purposes, it might be useful to install in editable mode:\n\n```\n$ pip install -e .\n```\n\n## Usage\n\n```python\nfrom datetime import datetime\n\nimport numpy as np\n\nfrom hdmf.common import DynamicTableRegion\nfrom pynwb import NWBHDF5IO\nfrom pynwb.file import NWBFile, Subject\n\nfrom ndx_nirs import NIRSSourcesTable, NIRSDetectorsTable, NIRSChannelsTable, NIRSDevice, NIRSSeries\n\n\n##### create some example data to add to the NWB file #####\n\n# create NIRS source & detector labels\nsource_labels = [\"S1\", \"S2\"]\ndetector_labels = [\"D1\", \"D2\"]\n\n# create NIRS source & detector positions as a numpy array\n# with dims: [num sources/detectors rows x 2 columns (for x, y)]\nsource_pos = np.array([[-2.0, 0.0], [-4.0, 5.6]])\ndetector_pos = np.array([[0.0, 0.0], [-4.0, 1.0]])\n\n# create a list of source detector pairs (pairs of indices)\nsource_detector_pairs = [(0, 0), (0, 1), (1, 0), (1, 1)]\n\n\n##### create NWB file using the example data above #####\n\n# create a basic NWB file\nnwb = NWBFile(\n session_description=\"A NIRS test session\",\n identifier=\"nirs_test_001\",\n session_start_time=datetime.now().astimezone(),\n subject=Subject(subject_id=\"nirs_subj_01\"),\n)\n\n\n# create and populate a NIRSSourcesTable containing the\n# label and location of optical sources for the device\nsources = NIRSSourcesTable()\n# add source labels & positions row-by-row\nfor i_source in range(0, len(source_labels)):\n sources.add_row(\n label=source_labels[i_source],\n x=source_pos[i_source, 0],\n y=source_pos[i_source, 1],\n )\n\n\n# create and populate a NIRSDetectorsTable containing the\n# label and location of optical sources for the device\ndetectors = NIRSDetectorsTable()\n# add a row for each detector\nfor i_detector in range(0, len(detector_labels)):\n detectors.add_row(\n label=detector_labels[i_detector],\n x=detector_pos[i_detector, 0],\n y=detector_pos[i_detector, 1],\n ) # z-coordinate is optional\n\n\n# create a NIRSChannelsTable which defines the channels\n# between the provided sources and detectors\nchannels = NIRSChannelsTable(sources=sources, detectors=detectors)\n# each channel is composed of a single source, a single detector, and the wavelength\n# most source-detector pairs will use two separate wavelengths, and have two channels\nfor i_source, i_detector in source_detector_pairs:\n for wavelength in [690.0, 830.0]:\n # for the source and detector parameters, pass in the index of\n # the desired source (detector) in the sources (detectors) table\n channels.add_row(\n label=f\"{source_labels[i_source]}.{detector_labels[i_detector]}.{wavelength:0.0f}nm\",\n source=i_source,\n detector=i_detector,\n source_wavelength=wavelength,\n )\n\n\n# create a NIRSDevice which contains all of the information\n# about the device configuration and arrangement\ndevice = NIRSDevice(\n name=\"nirs_device\",\n description=\"world's best fNIRS device\",\n manufacturer=\"skynet\",\n nirs_mode=\"time-domain\",\n channels=channels,\n sources=sources,\n detectors=detectors,\n # depending on which nirs_mode is selected, additional parameter values should be\n # included. 
these two parameters are included because we are using time-domain NIRS\n time_delay=1.5, # in ns\n time_delay_width=0.1, # in ns\n # specialized NIRS hardware may require additional parameters that can be defined\n # using the `additional_parameters` field:\n additional_parameters=\"flux_capacitor_gain = 9000; speaker_volume = 11;\",\n)\n# add the device to the NWB file\nnwb.add_device(device)\n\n\n# create a NIRSSeries timeseries containing raw NIRS data\nnirs_series = NIRSSeries(\n name=\"nirs_data\",\n description=\"The raw NIRS channel data\",\n timestamps=np.arange(0, 10, 0.01), # in seconds\n # reference only the channels associated with this series\n channels=DynamicTableRegion(\n name=\"channels\",\n description=\"an ordered map to the channels in this NIRS series\",\n table=channels,\n data=channels.id[:],\n ),\n data=np.random.rand(1000, 8), # shape: (num timesteps, num channels)\n unit=\"V\",\n)\n# add the series to the NWB file\nnwb.add_acquisition(nirs_series)\n\n\n# Write our test file\nfilename = \"test_nirs_file.nwb\"\nwith NWBHDF5IO(filename, \"w\") as io:\n io.write(nwb)\n\n# Read the data back in\nwith NWBHDF5IO(filename, \"r\", load_namespaces=True) as io:\n nwb = io.read()\n print(nwb)\n print(nwb.devices[\"nirs_device\"])\n print(nwb.acquisition[\"nirs_data\"])\n```\n\nThis extension was created using [ndx-template](https://github.com/nwb-extensions/ndx-template).\n"}, "ndx-hierarchical-behavioral-data-record": {"ref": "ndx-hierarchical-behavioral-data-record", "record_url": "https://github.com/nwb-extensions/ndx-hierarchical-behavioral-data-record", "last_updated": "2022-11-15T06:20:55Z", "name": "ndx-hierarchical-behavioral-data", "version": "0.1.1", "src": "https://github.com/catalystneuro/ndx-hierarchical-behavioral-data", "pip": "https://pypi.org/project/ndx-hierarchical-behavioral-data/", "license": "BSD", "maintainers": ["bendichter", "luiztauffer"], "readme": "# ndx-hierarchical-behavioral-data Extension for NWB\n\n[![PyPI version](https://badge.fury.io/py/ndx-hierarchical-behavioral-data.svg)](https://badge.fury.io/py/ndx-hierarchical-behavioral-data)\n\n![schema schema](https://github.com/catalystneuro/ndx-hierarchical-behavioral-data/blob/master/docs/media/hierarchical_behavioral_data.png?raw=true)\n\n## Installation\n\n```\npip install ndx-hierarchical-behavioral-data\n```\n\n## Usage\nUse pre-made hierarchical transcription tables:\n\n```python\nfrom ndx_hierarchical_behavioral_data.definitions.transcription import TIPhonemes, HBTSyllables, HBTWords, HBTSentences\n\n# Phonemes level\nphonemes = TIPhonemes()\nphonemes.add_column('max_pitch', 'maximum pitch for this phoneme. 
NaN for unvoiced')\nfor i, p in enumerate('abcdefghijkl'):\n phonemes.add_interval(label=p, start_time=float(i), stop_time=float(i+1), max_pitch=i**2)\n\n# Syllables level\nsyllables = HBTSyllables(lower_tier_table=phonemes)\nsyllables.add_interval(label='abc', next_tier=[0, 1, 2])\nsyllables.add_interval(label='def', next_tier=[3, 4, 5])\nsyllables.add_interval(label='ghi', next_tier=[6, 7, 8])\nsyllables.add_interval(label='jkl', next_tier=[9, 10, 11])\n\n# Words level\nwords = HBTWords(lower_tier_table=syllables)\nwords.add_column('emphasis', 'boolean indicating whether this word was emphasized')\nwords.add_interval(label='A-F', next_tier=[0, 1], emphasis=False)\nwords.add_interval(label='G-L', next_tier=[2, 3], emphasis=True)\n\n# Sentences level\nsentences = HBTSentences(lower_tier_table=words)\nsentences.add_interval(label='A-L', next_tier=[0, 1])\n```\n\nView individual tiers:\n\n```python\nsentences.to_dataframe()\n```\n
```
    label  start_time  stop_time                       next_tier
id
0     A-L         0.0       12.0  (nested words table, rows 0-1)
```
\n\n\n```python\nwords.to_dataframe()\n```\n\n
```
    label  start_time  stop_time                                    next_tier  emphasis
id
0     A-F         0.0        6.0  (nested syllables rows 0-1, with phonemes)     False
1     G-L         6.0       12.0  (nested syllables rows 2-3, with phonemes)      True
```
\n\n```python\nsyllables.to_dataframe()\n```\n\n
```
    label  start_time  stop_time                    next_tier
id
0     abc         0.0        3.0   (nested phonemes rows 0-2)
1     def         3.0        6.0   (nested phonemes rows 3-5)
2     ghi         6.0        9.0   (nested phonemes rows 6-8)
3     jkl         9.0       12.0  (nested phonemes rows 9-11)
```
\n\n```python\nphonemes.to_dataframe()\n```\n\n
```
    start_time  stop_time  label  max_pitch
id
0          0.0        1.0      a          0
1          1.0        2.0      b          1
2          2.0        3.0      c          4
3          3.0        4.0      d          9
4          4.0        5.0      e         16
5          5.0        6.0      f         25
6          6.0        7.0      g         36
7          7.0        8.0      h         49
8          8.0        9.0      i         64
9          9.0       10.0      j         81
10        10.0       11.0      k        100
11        11.0       12.0      l        121
```
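The `next_tier` cells shown in the higher-tier views above resolve to nested sub-dataframes of the next table down, so a lower tier can be reached directly from a single row. A minimal sketch (assuming the `syllables` table built above; the nested-DataFrame behavior of `to_dataframe()` is what the outputs above display):

```python
# Resolve the phoneme rows referenced by syllable 0 ('abc');
# the next_tier cell materializes as a nested DataFrame.
abc_phonemes = syllables.to_dataframe().loc[0, "next_tier"]
print(abc_phonemes)  # start/stop times and labels for 'a', 'b', 'c'
```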
\n\n\nHierarchical dataframe:\n```python\nsentences.to_hierarchical_dataframe()\n```\n
(The 13-level row `MultiIndex` is condensed below: each tier's `id label start_time-stop_time` is shown once per group, as in the pandas output; the data columns come from the `phonemes` source table.)

```
sentence (id label start-stop)  word (id label start-stop emphasis)  syllable (id label start-stop)    id  start_time  stop_time  label  max_pitch
0 A-L 0.0-12.0                  0 A-F 0.0-6.0 False                  0 abc 0.0-3.0                      0         0.0        1.0      a          0
                                                                                                        1         1.0        2.0      b          1
                                                                                                        2         2.0        3.0      c          4
                                                                     1 def 3.0-6.0                      3         3.0        4.0      d          9
                                                                                                        4         4.0        5.0      e         16
                                                                                                        5         5.0        6.0      f         25
                                1 G-L 6.0-12.0 True                  2 ghi 6.0-9.0                      6         6.0        7.0      g         36
                                                                                                        7         7.0        8.0      h         49
                                                                                                        8         8.0        9.0      i         64
                                                                     3 jkl 9.0-12.0                     9         9.0       10.0      j         81
                                                                                                       10        10.0       11.0      k        100
                                                                                                       11        11.0       12.0      l        121
```
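The row index of this hierarchical frame is a plain pandas `MultiIndex`, so level-based selection works without any extension-specific code. A small sketch (pandas only; assumes the `sentences` table from above):

```python
# Select the phoneme rows belonging to the word labeled 'A-F'
# by slicing on the words_label level of the row MultiIndex.
hier_df = sentences.to_hierarchical_dataframe()
print(hier_df.xs("A-F", level="words_label"))
```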
\n\n\nHierarchical columns, flattened rows:\n\n```python\nsentences.to_hierarchical_dataframe(flat_column_index=True)\n```\n\n
(Same 12 rows as the hierarchical dataframe above; with `flat_column_index=True` the row `MultiIndex` is unchanged and only the column index is flattened to `id`, `start_time`, `stop_time`, `label`, `max_pitch` rather than being grouped under the `phonemes` source table.)
\n\nDenormalized dataframe:\n```python\nsentences.to_denormalized_dataframe()\n```\n\n
```
source_table  sentences                       words                                     syllables                       phonemes
     id  label  start_time  stop_time    id  label  start_time  stop_time  emphasis    id  label  start_time  stop_time    id  start_time  stop_time  label  max_pitch
0     0    A-L         0.0       12.0     0    A-F         0.0        6.0     False     0    abc         0.0        3.0     0         0.0        1.0      a          0
1     0    A-L         0.0       12.0     0    A-F         0.0        6.0     False     0    abc         0.0        3.0     1         1.0        2.0      b          1
2     0    A-L         0.0       12.0     0    A-F         0.0        6.0     False     0    abc         0.0        3.0     2         2.0        3.0      c          4
3     0    A-L         0.0       12.0     0    A-F         0.0        6.0     False     1    def         3.0        6.0     3         3.0        4.0      d          9
4     0    A-L         0.0       12.0     0    A-F         0.0        6.0     False     1    def         3.0        6.0     4         4.0        5.0      e         16
5     0    A-L         0.0       12.0     0    A-F         0.0        6.0     False     1    def         3.0        6.0     5         5.0        6.0      f         25
6     0    A-L         0.0       12.0     1    G-L         6.0       12.0      True     2    ghi         6.0        9.0     6         6.0        7.0      g         36
7     0    A-L         0.0       12.0     1    G-L         6.0       12.0      True     2    ghi         6.0        9.0     7         7.0        8.0      h         49
8     0    A-L         0.0       12.0     1    G-L         6.0       12.0      True     2    ghi         6.0        9.0     8         8.0        9.0      i         64
9     0    A-L         0.0       12.0     1    G-L         6.0       12.0      True     3    jkl         9.0       12.0     9         9.0       10.0      j         81
10    0    A-L         0.0       12.0     1    G-L         6.0       12.0      True     3    jkl         9.0       12.0    10        10.0       11.0      k        100
11    0    A-L         0.0       12.0     1    G-L         6.0       12.0      True     3    jkl         9.0       12.0    11        11.0       12.0      l        121
```
\n\nDenormalized dataframe with flattened columns:\n```python\nsentences.to_denormalized_dataframe(flat_column_index=True)\n```\n\n
(Same 12 rows as the denormalized dataframe above, with the two-level column index flattened to a single level: `sentences_id`, `sentences_label`, `sentences_start_time`, `sentences_stop_time`, `words_id`, `words_label`, `words_start_time`, `words_stop_time`, `words_emphasis`, `syllables_id`, `syllables_label`, `syllables_start_time`, `syllables_stop_time`, `id`, `start_time`, `stop_time`, `label`, `max_pitch`.)
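Because the denormalized form repeats every parent-tier value on each phoneme row, ordinary pandas group-bys apply directly. A short sketch (pandas only, using the flattened column names listed above):

```python
# Mean max_pitch per word: group the flat denormalized frame
# by the repeated words_label column.
flat_df = sentences.to_denormalized_dataframe(flat_column_index=True)
print(flat_df.groupby("words_label")["max_pitch"].mean())
```

For analysis code that does not need the nested structure, this flat form is usually the most convenient entry point.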
\n\n\n\nThis extension was created using [ndx-template](https://github.com/nwb-extensions/ndx-template).\n"}, "ndx-sound-record": {"ref": "ndx-sound-record", "record_url": "https://github.com/nwb-extensions/ndx-sound-record", "last_updated": "2022-11-15T07:17:28Z", "name": "ndx-sound", "version": "0.2.0", "src": "https://github.com/catalystneuro/ndx-sound/", "pip": "https://pypi.org/project/ndx-sound/", "license": "BSD", "maintainers": ["weiglszonja", "bendichter"], "readme": "![PyPI](https://img.shields.io/pypi/v/ndx-sound?color=blue)\n\n# ndx-sound Extension for NWB\n\nNWB extension for sounds.\n\n## Installation\n\n```shell\npip install ndx-sound\n```\n\n## Usage\n\n## Python\n\n### Add to an NWB file\n```python\nfrom pynwb import NWBFile\nfrom scipy.io import wavfile\n\nfrom ndx_sound import AcousticWaveformSeries\n\n# The file path to the audio file\nfile_path = \"audio_data.wav\"\n\n# Read the audio file to get the rate of the recording and the waveform\nsampling_rate, samples = wavfile.read(file_path)\n\n# Create an AcousticWaveformSeries object with a given name and description\nacoustic_waveform_series = AcousticWaveformSeries(\n name=\"acoustic_stimulus\",\n data=samples,\n rate=sampling_rate,\n description=\"acoustic stimulus\",\n)\n\n# Create an NWBFile object where this AcousticWaveformSeries can be added to\nnwbfile = NWBFile(\n session_description=...,\n identifier=...,\n session_start_time=...,\n)\n\n# If a recording of behavior, add to acquisition\nnwbfile.add_acquisition(acoustic_waveform_series)\n\n# If a stimulus, add to stimulus\nnwbfile.add_stimulus(acoustic_waveform_series)\n```\n\n### Visualization\n\n#### Static widgets\nUse `plot_sound` to visualize the waveform series and the spectrogram.\nFor longer recordings, specify the `time_window` argument for the start and end time\nof the recording to be shown.\n```python\nfrom ndx_sound.widgets import plot_sound\n\nplot_sound(nwbfile.stimulus[\"acoustic_stimulus\"])\n\n# Show only from 5 to 15 seconds\nplot_sound(nwbfile.stimulus[\"acoustic_stimulus\"], time_window=(5, 15))\n```\n\n![](https://github.com/catalystneuro/ndx-sound/blob/main/ndx_sound_plot_timewindow.png)\n\nUse `acoustic_waveform_widget` to include an Audio element that plays the sound.\n\n```python\nfrom ndx_sound.widgets import acoustic_waveform_widget\n\nacoustic_waveform_widget(nwbfile.stimulus[\"acoustic_stimulus\"], time_window=(5, 15))\n```\n\n![](https://github.com/catalystneuro/ndx-sound/blob/main/acoustic_waveform_widget_timewindow.png)\n\n#### Interactive widgets\nUse `AcousticWaveformWidget` to use a slider for interactively scrolling through the\nrecording and a button for changing the duration of the sound that is being shown.\n\n```python\nfrom ndx_sound.widgets import AcousticWaveformWidget\n\nAcousticWaveformWidget(nwbfile.stimulus[\"acoustic_stimulus\"])\n```\n\n![](https://github.com/catalystneuro/ndx-sound/blob/main/interactive_widget.png)\n\n### nwbwidgets\nUse `load_widgets` to load the interactive sound widget into `nwb2widget`.\n\n```python\nfrom ndx_sound.widgets import load_widgets\nfrom nwbwidgets import nwb2widget\n\nload_widgets()\n\nnwb2widget(nwbfile)\n```\n\n![](https://github.com/catalystneuro/ndx-sound/blob/main/ndx_sound_in_nwbwidgets.png)\n\n#### nwbwidgets and HDF5IO\nWhen using `nwb2widget` with an NWB file that is read from disk, make sure to have\n`load_widgets` imported within the same Jupyter cell where your data is being loaded.\n\n```python\nfrom pynwb import NWBHDF5IO\nfrom ndx_sound.widgets import 
load_widgets\nfrom nwbwidgets import nwb2widget\n\nload_widgets()\n\n\nio = NWBHDF5IO(\"audio.nwb\", mode=\"r\", load_namespaces=True)\nnwbfile = io.read()\nnwb2widget(nwbfile)\n```\n\n---\nThis extension was created using [ndx-template](https://github.com/nwb-extensions/ndx-template).\n"}, "ndx-extract-record": {"ref": "ndx-extract-record", "record_url": "https://github.com/nwb-extensions/ndx-extract-record", "last_updated": "2022-11-15T07:23:53Z", "name": "ndx-extract", "version": "0.2.0", "src": "https://github.com/catalystneuro/ndx-extract", "pip": "https://pypi.org/project/ndx-extract/0.2.0/", "license": "BSD", "maintainers": ["bendichter", "weiglszonja"], "readme": "# ndx-extract Extension for NWB\n\nAuthor: Cesar Echavarria\n\nThis extension allows for the storage of configuration options used by the [EXTRACT](https://github.com/schnitzer-lab/EXTRACT-public) tool for calcium imaging.\n\n\n## Usage\n\n\n### Python\nInstall the extension from [PyPI](https://pypi.org/project/ndx-extract/)\n```shell\npip install ndx-extract\n```\nUsage:\n```python\nfrom datetime import datetime\nfrom ndx_extract import EXTRACTSegmentation\nfrom pynwb import NWBFile, NWBHDF5IO\n\n# Create the NWBfile\nnwbfile = NWBFile(\n session_description=\"The mouse in open exploration.\",\n identifier=\"Mouse5_Day3\",\n session_start_time=datetime.now().astimezone(),\n)\n# Create the processing module\nophys_module = nwbfile.create_processing_module(\n name=\"ophys\",\n description=\"optical physiology processed data\",\n)\n# Create the segmentation object and define the configuration properties\n# The properties that can be defined are listed at spec/ndx-EXTRACT.extensions.yaml\nimage_segmentation = EXTRACTSegmentation(\n name=\"ImageSegmentation\",\n version=\"1.1.0\",\n preprocess=True,\n trace_output_option=\"nonneg\",\n)\n# Add this image segmentation to the processing module\nophys_module.add(image_segmentation)\n\n# Writing the NWB file\nwith NWBHDF5IO(\"image_segmentation.nwb\", mode=\"w\") as io:\n io.write(nwbfile)\n\n# Reading the NWB file and accessing the segmentation parameters\nwith NWBHDF5IO(\"image_segmentation.nwb\", mode=\"r\") as io:\n nwbfile_in = io.read()\n nwbfile_in.processing[\"ophys\"].data_interfaces[\"ImageSegmentation\"].version\n nwbfile_in.processing[\"ophys\"].data_interfaces[\"ImageSegmentation\"].preprocess\n nwbfile_in.processing[\"ophys\"].data_interfaces[\"ImageSegmentation\"].trace_output_option\n```\n\nRunning the tests:\n```shell\n python -m unittest src/pynwb/tests/test_extract.py\n```\n\n### MATLAB\ninstall:\n```matlab\ngenerateExtension('/path/to/ndx-extract/spec/ndx-extract.namespace.yaml');\n```\n\nwrite:\n```matlab\n% define NWB file\nnwb = NwbFile( ...\n 'session_description', 'mouse in open exploration', ...\n 'identifier', 'Mouse5_Day3', ...\n 'session_start_time', datetime(2018, 4, 25, 2, 30, 3) ...\n);\n% define processing module\nophys_module = types.core.ProcessingModule( ...\n 'description', 'test processing module' ...\n);\nnwb.processing.set('ophys', ophys_module);\n% define segmentation\nimg_seg = types.ndx_extract.EXTRACTSegmentation();\n% set segmentation properties\nimg_seg.trace_output_option = 'nonneg';\nimg_seg.save_all_found = false;\nimg_seg.dendrite_aware = false;\nimg_seg.adaptive_kappa = false;\nimg_seg.use_sparse_arrays = false;\nimg_seg.dendrite_aware = 0;\nimg_seg.hyperparameter_tuning_flag = false;\nimg_seg.remove_duplicate_cells = false;\nimg_seg.max_iter = 6;\nimg_seg.S_init = rand(100,10);\nimg_seg.T_init = rand(100,10);\nimg_seg.preprocess 
= true;\nimg_seg.fix_zero_FOV_strips = false;\nimg_seg.medfilt_outlier_pixels = false;\nimg_seg.skip_dff = false;\nimg_seg.baseline_quantile = .4;\nimg_seg.skip_highpass = false;\nimg_seg.spatial_highpass_cutoff = 0;\nimg_seg.temporal_denoising = false;\nimg_seg.remove_background = true;\nimg_seg.cellfind_filter_type = 'butter';\nimg_seg.spatial_lowpass_cutoff = 2;\nimg_seg.moving_radius = 3;\nimg_seg.cellfind_min_snr = 1;\nimg_seg.cellfind_max_steps = 1000;\nimg_seg.cellfind_kappa_std_ratio = 1;\nimg_seg.init_with_gaussian = false;\nimg_seg.kappa_std_ratio = 1;\nimg_seg.downsample_time_by = 'auto';\nimg_seg.downsample_space_by = 'auto';\nimg_seg.min_radius_after_downsampling = 5;\nimg_seg.min_tau_after_downsampling = 5;\nimg_seg.reestimate_S_if_downsampled = false;\nimg_seg.reestimate_T_if_downsampled = true;\nimg_seg.crop_circular = false;\nimg_seg.movie_mask = randi(2,100,100)-1;\nimg_seg.smoothing_ratio_x2y = 0;\nimg_seg.compact_output = true;\nimg_seg.cellfind_numpix_threshold = 9;\nimg_seg.high2low_brightness_ratio = Inf;\nimg_seg.l1_penalty_factor = 0;\nimg_seg.T_lower_snr_threshold = 10;\nimg_seg.smooth_T = false;\nimg_seg.smooth_S = true;\nimg_seg.max_iter_S = 100;\nimg_seg.max_iter_T = 100;\nimg_seg.TOL_sub = 1.0000e-06;\nimg_seg.TOL_main = 0.0100;\nimg_seg.avg_cell_radius = 0;\nimg_seg.T_min_snr = 10;\nimg_seg.size_lower_limit = .1000;\nimg_seg.size_upper_limit = 10;\nimg_seg.temporal_corrupt_thresh = 0.7000;\nimg_seg.spatial_corrupt_thresh = 0.7000;\nimg_seg.eccent_thresh = 6;\nimg_seg.low_ST_index_thresh = 0.0100;\nimg_seg.low_ST_corr_thresh = 0;\nimg_seg.S_dup_corr_thresh = 0.9500;\nimg_seg.T_dup_corr_thresh = 0.9500;\nimg_seg.confidence_thresh = 0.8000;\nimg_seg.high_ST_index_thresh = 0.8000;\nophys_module.nwbdatainterface.set('ImgSegmentation', img_seg);\nnwbExport(nwb, 'test_123.nwb');\n```\n\nrun tests:\n```matlab\ncd /path/to/ndx-extract/src/matnwb/tests\nresults = test_ndx_extract()\n```\n"}, "ndx-photometry-record": {"ref": "ndx-photometry-record", "record_url": "https://github.com/nwb-extensions/ndx-photometry-record", "last_updated": "2022-12-01T18:03:38Z", "name": "ndx-photometry", "version": "0.1.0", "src": "https://github.com/catalystneuro/ndx-photometry", "pip": "https://pypi.org/project/ndx-photometry/", "license": "BSD", "maintainers": ["bendichter"], "readme": "# ndx-photometry Extension for NWB\n[![Build Status](https://travis-ci.com/akshay-jaggi/ndx-photometry.svg?branch=master)](https://travis-ci.com/akshay-jaggi/ndx-photometry)\n[![Documentation Status](https://readthedocs.org/projects/ndx-photometry/badge/?version=latest)](https://ndx-photometry.readthedocs.io/en/latest/?badge=latest)\n\n![NWB - Photometry](https://user-images.githubusercontent.com/844306/144680873-3e2d957f-97ff-45cb-b625-517f5e7dfb9f.png)\n\n## Introduction\nThis is an NWB extension for storing photometry recordings and associated metadata. This extension stores photometry information across three folders in the NWB file: acquisition, processing, and general. The acquisiton folder contains an ROIResponseSeries (inherited from `pynwb.ophys`), which references rows of a FibersTable rather than 2 Photon ROIs. The new types for this extension are in metadata and processing\n\n### Metadata\n1. `FibersTable` stores rows for each fiber with information about the location, excitation, source, photodetector, fluorophore, and more (associated with each fiber). \n2. 
`ExcitationSourcesTable` stores rows for each excitation source with information about the peak wavelength, source type, and the commanded voltage series of type `CommandedVoltageSeries`\n3. `PhotodectorsTable` stores rows for each photodetector with information about the peak wavelength, type, etc. \n4. `FluorophoresTable` stores rows for each fluorophore with information about the fluorophore itself and the injeciton site. \n\n### Processing\n1. `DeconvoledROIResponseSeries` stores DfOverF and Fluorescence traces and extends `ROIResponseSeries` to contain information about the deconvolutional and downsampling procedures performed.\n\n\nThis extension was developed by Akshay Jaggi, Ben Dichter, and Ryan Ly. \n\n\n## Installation\n\n```\npip install ndx-photometry\n```\n\n\n## Usage\n\n```python\nimport datetime\nimport numpy as np\n\nfrom pynwb import NWBHDF5IO, NWBFile\nfrom pynwb.core import DynamicTableRegion\nfrom pynwb.ophys import RoiResponseSeries\nfrom ndx_photometry import (\n FibersTable,\n PhotodetectorsTable,\n ExcitationSourcesTable,\n DeconvolvedRoiResponseSeries,\n MultiCommandedVoltage,\n FiberPhotometry,\n FluorophoresTable\n)\n\n\nnwbfile = NWBFile(\n session_description=\"session_description\",\n identifier=\"identifier\",\n session_start_time=datetime.datetime.now(datetime.timezone.utc),\n)\n\n# In the next ten calls or so, we'll set up the metadata from the bottom of the metadata tree up\n# You can follow along here: \n\n# Create a commanded voltage container, this can store one or more commanded voltage series\nmulti_commanded_voltage = MultiCommandedVoltage(\n name=\"MyMultiCommandedVoltage\",\n)\n\n# Add a commanded voltage series to this container\ncommandedvoltage_series = (\n multi_commanded_voltage.create_commanded_voltage_series(\n name=\"commanded_voltage\",\n data=[1.0, 2.0, 3.0],\n frequency=30.0,\n power=500.0,\n rate=30.0,\n )\n)\n\n# Create an excitation sources table\nexcitationsources_table = ExcitationSourcesTable(\n description=\"excitation sources table\"\n)\n\n# Add one row to the table per excitation source\n# You can repeat this in a for-loop for many sources\nexcitationsources_table.add_row(\n peak_wavelength=700.0,\n source_type=\"laser\",\n commanded_voltage=commandedvoltage_series,\n)\n\nphotodetectors_table = PhotodetectorsTable(\n description=\"photodetectors table\"\n)\n\n# Add one row to the table per photodetector\nphotodetectors_table.add_row(\n peak_wavelength=500.0, \n type=\"PMT\", \n gain=100.0\n)\n\n\nfluorophores_table = FluorophoresTable(\n description='fluorophores'\n)\n\nfluorophores_table.add_row(\n label='dlight',\n location='VTA',\n coordinates=(3.0,2.0,1.0)\n)\n\nfibers_table = FibersTable(\n description=\"fibers table\"\n)\n\n# Here we add the metadata tables to the metadata section\nnwbfile.add_lab_meta_data(\n FiberPhotometry(\n fibers=fibers_table,\n excitation_sources=excitationsources_table,\n photodetectors=photodetectors_table,\n fluorophores=fluorophores_table\n )\n)\n\n# Important: we add the fibers to the fibers table _after_ adding the metadata\n# This ensures that we can find this data in their tables of origin\nfibers_table.add_fiber(\n excitation_source=0, #integers indicated rows of excitation sources table\n photodetector=0,\n fluorophores=[0], #potentially multiple fluorophores, so list of indices\n location='my location',\n notes='notes'\n)\n\n# Here we set up a list of fibers that our recording came from\nfibers_ref = DynamicTableRegion(\n name=\"rois\", \n data=[0], # potentially multiple fibers\n 
description=\"source fibers\", \n table=fibers_table\n)\n\n# Create a raw roiresponseseries, this is your main acquisition\nroi_response_series = RoiResponseSeries(\n name=\"roi_response_series\",\n description=\"my roi response series\",\n data=np.random.randn(100, 1),\n unit='F',\n rate=30.0,\n rois=fibers_ref,\n)\n\n# This is your processed data \ndeconv_roi_response_series = DeconvolvedRoiResponseSeries(\n name=\"DeconvolvedRoiResponseSeries\",\n description=\"my roi response series\",\n data=np.random.randn(100, 1),\n unit='F',\n rate=30.0,\n rois=fibers_ref,\n raw=roi_response_series,\n)\n\nophys_module = nwbfile.create_processing_module(\n name=\"ophys\", description=\"fiber photometry\"\n)\n\nophys_module.add(multi_commanded_voltage)\nnwbfile.add_acquisition(roi_response_series)\nophys_module.add(deconv_roi_response_series)\n\n# write nwb file\nfilename = 'test.nwb'\nwith NWBHDF5IO(filename, 'w') as io:\n io.write(nwbfile)\n \n# read nwb file and check its contents\nwith NWBHDF5IO(filename, 'r', load_namespaces=True) as io:\n nwbfile = io.read()\n # Access and print information about the acquisition\n print(nwbfile.acquisition[\"roi_response_series\"])\n # Access and print information about the processed data\n print(nwbfile.processing['ophys'][\"DeconvolvedRoiResponseSeries\"])\n # Access and print all of the metadata\n print(nwbfile.lab_meta_data)\n```\n\nThis extension was created using [ndx-template](https://github.com/nwb-extensions/ndx-template).\n"}, "ndx-acquisition-module-record": {"ref": "ndx-acquisition-module-record", "record_url": "https://github.com/nwb-extensions/ndx-acquisition-module-record", "last_updated": "2023-07-24T14:18:13Z", "name": "ndx-acquisition-module", "version": "0.1.2", "src": "https://gitlab.com/fleischmann-lab/ndx/ndx-acquisition-module/", "pip": "https://pypi.org/project/ndx-acquisition-module/", "license": "BSD-3", "maintainers": ["tuanpham96"], "readme": "# ndx-acquisition-module\n\n[![pipeline status](https://img.shields.io/gitlab/pipeline-status/fleischmann-lab/ndx/ndx-acquisition-module?branch=main&label=pipeline&style=flat-square)](https://gitlab.com/fleischmann-lab/ndx/ndx-acquisition-module/-/commits/main)\n[![license](https://img.shields.io/gitlab/license/fleischmann-lab/ndx/ndx-acquisition-module?color=yellow&label=license&style=flat-square)](LICENSE.txt)\n\n\n![python version](https://img.shields.io/pypi/pyversions/ndx-acquisition-module?style=flat-square)\n[![release](https://img.shields.io/gitlab/v/release/fleischmann-lab/ndx/ndx-acquisition-module?label=release&sort=date&style=flat-square)](https://gitlab.com/fleischmann-lab/ndx/ndx-acquisition-module/-/releases)\n[![pypi package](https://img.shields.io/pypi/v/ndx-acquisition-module?label=pypi%20package&style=flat-square&color=blue)](https://pypi.org/pypi/ndx-acquisition-module)\n[![conda package](https://img.shields.io/conda/v/fleischmannlab/ndx-acquisition-module?color=green&style=flat-square)](https://anaconda.org/FleischmannLab/ndx-acquisition-module)\n\nThis extension is used to allow adding modules in `nwbfile.acquisition`, similarly to how `nwbfile.processing` allows adding modules.\n\nMore specifically, this allows creating a module that has `TimeSeries` and `DynamicTable` objects, then users can add this module.\n\nThis is in alpha development stages. 
Please use with discretion.\n\n## Installation\n\nYou can install via `pip`:\n\n```bash\npip install ndx-acquisition-module\n```\n\nOr `conda`:\n\n```bash\nconda install -c fleischmannlab ndx-acquisition-module\n```\n\nOr directly from the `git` repository:\n\n```bash\npip install git+https://gitlab.com/fleischmann-lab/ndx/ndx-acquisition-module\n```\n\n## Usage\n\n### Main usage\n\nHere's a short example that creates the module, adds objects to it, and then adds the module to the acquisition section (assuming `nwbfile`, `time_series`, and `dynamic_table` already exist):\n\n```python\nfrom ndx_acquisition_module import AcquisitionModule\n\nmod = AcquisitionModule(name=\"raw_mod\", description=\"raw acq module\")\n\n# Add data objects to created AcquisitionModule\nmod.add(time_series) # add time series\nmod.add(dynamic_table) # add dynamic table\n\n# Add AcquisitionModule to nwbfile.acquisition\nnwbfile.add_acquisition(mod)\n```\n\n### Full example\n\nHere's a full example that you can copy and paste in a script/notebook and run. A `test.nwb` file will be created.\n\n
```python\nfrom datetime import datetime\n\nimport numpy as np\nfrom dateutil import tz\nfrom hdmf.common import DynamicTable, VectorData\nfrom ndx_acquisition_module import AcquisitionModule\n\nfrom pynwb import NWBHDF5IO, NWBFile, TimeSeries\n\n# Create an example NWBFile\nnwbfile = NWBFile(\n session_description=\"test session description\",\n identifier=\"unique_identifier\",\n session_start_time=datetime(2012, 2, 21, tzinfo=tz.gettz(\"US/Pacific\")),\n)\n\n# Create time series\nts = TimeSeries(\n name=\"choice_series\",\n description=\"raw choice series\",\n data=np.random.randint(4, size=100),\n timestamps=(np.arange(100).astype(\"float\") + 2) / 30,\n unit=\"-\",\n)\n\n# Create dynamic table\ntbl = DynamicTable(\n name=\"lookup_table\",\n description=\"lookup table for `choice_series`\",\n columns=[\n VectorData(\n name=\"lookup_id\", description=\"ID to look up\", data=[0, 1, 2, 3]\n ),\n VectorData(\n name=\"lookup_name\",\n description=\"name of ID\",\n data=[\"water\", \"earth\", \"fire\", \"air\"],\n ),\n ],\n)\n\n# Create AcquisitionModule to store these objects\nmod = AcquisitionModule(name=\"raw_mod\", description=\"raw acq module\")\n\n# Add data objects to created AcquisitionModule\nmod.add(ts) # add time series\nmod.add(tbl) # add dynamic table\n\n# Add AcquisitionModule to nwbfile.acquisition\nnwbfile.add_acquisition(mod)\n\n# Write the file to disk\nfilename = \"test.nwb\"\nwith NWBHDF5IO(path=filename, mode=\"w\") as io:\n io.write(nwbfile)\n\n```\n\n
\n\n\n## API usage notes and limitations\n\n### With package installed\n\nCurrently, to use `mod.get()` or `mod[...]`, users would also need to install this package, for example with\n\n```bash\npip install ndx-acquisition-module\n```\n\nThey must then import it; using `NWBHDF5IO(..., load_namespaces=True)` alone would not be enough.\n\n```python\n# new file completely\nfrom pynwb import NWBHDF5IO\nfrom ndx_acquisition_module import AcquisitionModule\nnwb = NWBHDF5IO('test.nwb', mode='r').read() # notice `load_namespaces` is not needed\n\nprint(nwb.acquisition['raw_mod'])\n```\n\nwhich outputs:\n\n```text\nraw_mod ndx_acquisition_module.AcquisitionModule at 0x139742592581104\nFields:\n data_interfaces: {\n choice_series ,\n lookup_table \n }\n```\n\nTo access:\n\n```python\nnwb.acquisition['raw_mod']['lookup_table']\nnwb.acquisition['raw_mod']['choice_series']\n```\n\n### Without package installed\n\nOtherwise, if `ndx-acquisition-module` is not installed, accessing the objects inside has to be done based on their types:\n\n```python\n# new file completely\nfrom pynwb import NWBHDF5IO\nnwb = NWBHDF5IO('test.nwb', mode='r', load_namespaces=True).read() # notice `load_namespaces` is NEEDED\n\nprint(nwb.acquisition['raw_mod'])\n```\n\nwhich outputs:\n\n```text\nraw_mod abc.AcquisitionModule at 0x140252603705728\nFields:\n description: raw acq module\n dynamic_tables: {\n lookup_table \n }\n nwb_data_interfaces: {\n choice_series \n }\n```\n\nTo access:\n\n```python\nnwb.acquisition['raw_mod'].dynamic_tables['lookup_table']\nnwb.acquisition['raw_mod'].nwb_data_interfaces['choice_series']\n```\n\n---\n\nThis extension was created using [ndx-template](https://github.com/nwb-extensions/ndx-template)\n"}, "ndx-odor-metadata-record": {"ref": "ndx-odor-metadata-record", "record_url": "https://github.com/nwb-extensions/ndx-odor-metadata-record", "last_updated": "2023-07-24T14:42:32Z", "name": "ndx-odor-metadata", "version": "0.1.1", "src": "https://gitlab.com/fleischmann-lab/ndx/ndx-odor-metadata", "pip": "https://pypi.org/project/ndx-odor-metadata/", "license": "BSD-3", "maintainers": ["tuanpham96"], "readme": "# `ndx-odor-metadata`\n\n[![pipeline status](https://img.shields.io/gitlab/pipeline-status/fleischmann-lab/ndx/ndx-odor-metadata?branch=main&label=pipeline&style=flat-square)](https://gitlab.com/fleischmann-lab/ndx/ndx-odor-metadata/-/commits/main)\n[![license](https://img.shields.io/gitlab/license/fleischmann-lab/ndx/ndx-odor-metadata?color=yellow&label=license&style=flat-square)](LICENSE.txt)\n\n![python version](https://img.shields.io/pypi/pyversions/ndx-odor-metadata?style=flat-square)\n[![release](https://img.shields.io/gitlab/v/release/fleischmann-lab/ndx/ndx-odor-metadata?label=release&sort=date&style=flat-square)](https://gitlab.com/fleischmann-lab/ndx/ndx-odor-metadata/-/releases)\n[![pypi package](https://img.shields.io/pypi/v/ndx-odor-metadata?label=pypi%20package&style=flat-square&color=blue)](https://pypi.org/pypi/ndx-odor-metadata)\n[![conda package](https://img.shields.io/conda/v/fleischmannlab/ndx-odor-metadata?color=green&style=flat-square)](https://anaconda.org/FleischmannLab/ndx-odor-metadata)\n\nNWB extension to store odor stimulus metadata with `DynamicTable` format. Entries that have a PubChem ID and whose `stim_types` indicate an odor/chemical will also be queried with `pubchempy` for more information.\n\nThis is in alpha development stages **WITHOUT** any appropriate tests yet. 
Please use with discretion.\n\n\n## Installation\n\nYou can install via `pip`:\n\n```bash\npip install ndx-odor-metadata\n```\n\nOr `conda`:\n\n```bash\nconda install -c fleischmannlab ndx-odor-metadata\n```\n\nOr directly from the `git` repository:\n\n```bash\npip install git+https://gitlab.com/fleischmann-lab/ndx/ndx-odor-metadata\n```\n\n## Usage\n\n### Main usage\n\n```python\nfrom ndx_odor_metadata import OdorMetaData\n\nodor_table = OdorMetaData(name='odor_table', description='an odor table')\n\nodor_table.add_stimulus(\n pubchem_id = 7662.0,\n stim_name = \"3-Phenylpropyl isobutyrate\",\n raw_id = 3,\n stim_id = 1,\n stim_types = \"odor\",\n chemical_dilution_type='vaporized',\n chemical_concentration = 0.01,\n chemical_concentration_unit='%',\n chemical_solvent = \"Mineral Oil\",\n chemical_provider = \"Sigma\",\n stim_description = \"Legit odor stimulus #1\",\n)\n\nnwbfile.add_acquisition(odor_table)\n```\n\n### Details on arguments\n\n| | name | dtype | doc | default_value | quantity |\n|---:|:---|:---|:---|:---|:---|\n| 0 | stim_name | text | Stimulus name, e.g. \"hexanal\" or \"sound\" | nan | nan |\n| 1 | pubchem_id | float | PubChem ID, `NaN` indicates non-(standard)-odor stimulus | nan | ? |\n| 2 | raw_id | text | Raw acquisition stimulus ID. Will be converted to `str`. | nan | nan |\n| 3 | raw_id_dtype | text | The actual dtype of `raw_id` value. Useful for (re)casting. | N/A | ? |\n| 4 | stim_id | text | Preferred stimulus ID, which can be used to remap acquisition stimulus id `raw_id`. Will be converted to `str`. If not explicitly given, will copy from `raw_id` | nan | ? |\n| 5 | stim_id_dtype | text | The actual dtype of `stim_id` value. Useful for (re)casting. | N/A | ? |\n| 6 | stim_types | text | Type(s) of stimulus, e.g. 'odor', 'sound', 'control', 'CS', 'US', ... | nan | nan |\n| 7 | stim_types_index | nan | Index for `stim_types` | nan | nan |\n| 8 | stim_description | text | Human-readable description, notes, comments of each stimulus | N/A | ? |\n| 9 | chemical_dilution_type | text | Type of dilution, e.g. 'volume/volume', 'vaporized' | N/A | ? |\n| 10 | chemical_concentration | float | Concentration of chemical | nan | ? |\n| 11 | chemical_concentration_unit | text | Unit of concentration, e.g. \"%\" or \"M\" | N/A | ? |\n| 12 | chemical_solvent | text | Solvent to dilute the chemicals in, e.g. 'mineral oil' | N/A | ? |\n| 13 | chemical_provider | text | Provider of the chemicals, e.g. 'Sigma' | N/A | ? |\n| 14 | nonchemical_details | text | Information about non-chemical/odor stimulus, e.g. 'sound' frequencies | N/A | ? |\n| 15 | is_validated | bool | Whether the stimulus, if chemical/odor, is validated against PubChem (or other sources listed in `validation_details`). If it does not have a valid PubChem ID, this defaults to `False` | False | ? |\n| 16 | validation_details | text | Additional information/details/notes about stimulus validation, e.g. source, software used & version, validation date, ... | N/A | ? |\n| 17 | pubchem_cid | float | PubChem CID, `NaN` indicates non-(standard)-odor stimulus | nan | ? |\n| 18 | chemical_IUPAC | text | Official chemical IUPAC name | N/A | ? |\n| 19 | chemical_SMILES | text | Canonical SMILES | N/A | ? |\n| 20 | chemical_synonyms | text | List of chemical synonyms | | ? 
|\n| 21 | chemical_synonyms_index | nan | Index for `chemical_synonyms` | nan | ? |\n| 22 | chemical_molecular_formula | text | Molecular formula of chemical used | N/A | ? |\n| 23 | chemical_molecular_weight | float | Molecular weight of chemical used | nan | ? |\n\n### Demonstration\n\nFor more detailed demonstration, please visit the [`demo`](https://gitlab.com/fleischmann-lab/ndx/ndx-odor-metadata/-/tree/main/demo) folder.\n\n\n---\nThis extension was created using [ndx-template](https://github.com/nwb-extensions/ndx-template).\n"}, "ndx-whisk-record": {"ref": "ndx-whisk-record", "record_url": "https://github.com/nwb-extensions/ndx-whisk-record", "last_updated": "2023-08-29T16:21:44Z", "name": "ndx-whisk", "version": "0.1.1", "src": "https://github.com/vncntprvst/ndx-whisk", "pip": "https://pypi.org/project/ndx-whisk/", "license": "BSD-3", "maintainers": ["vncntprvst"], "readme": "# ndx-whisk Extension for NWB\n\nndx-whisk is an [NWB](https://www.nwb.org/) extension to store whisker tracking measurements. It is intended to convert `.whiskers` and `.measurements` files generated by [whisk (Janelia Whisker Tracker)](https://github.com/nclack/whisk/), or saved to `hdf5` with [WhiskiWrap](https://github.com/cxrodgers/WhiskiWrap), but can be used with other whisker tracking methods.\n\n## Installation\n\n`pip install ndx-whisk`\n\n## Usage\n\nSee test script `test_whiskermeasurement.py` in `src/pynwb/tests`. \n\n```python\nfrom pynwb import NWBHDF5IO, NWBFile\nfrom ndx_whisk import WhiskerMeasurementTable\nimport numpy as np\n\n# Load your data\nwhisker_data = read_whisker_measurement_table('tracked_data.whiskers')\n\n# Create a WhiskerMeasurementTable\nwhisker_meas = WhiskerMeasurementTable(\n name='name',\n description='description'\n)\n\n# Add data to the WhiskerMeasurementTable\nfor i in range(np.shape(whisker_data['frame_id'])[0]):\n whisker_meas.add_row({k: whisker_data[k][i] for k in whisker_data.keys()})\n \n# Set up a NWB file\nnwbfile = set_up_nwbfile()\npath = 'tracked_data.nwb'\n\n# Add a ProcessingModule for behavioral data\nbehavior_module = nwbfile.create_processing_module(\n name=\"behavior\", description=\"Processed behavioral data\"\n)\n\n# Add the WhiskerMeasurementTable\nnwbfile.processing['behavior'].add(whisker_meas)\n\n# Save to NWB file\nwith NWBHDF5IO(path, mode='w') as io:\n io.write(nwbfile)\n```\n\n---\nThis extension was created using [ndx-template](https://github.com/nwb-extensions/ndx-template).\n"}, "ndx-ecg-record": {"ref": "ndx-ecg-record", "record_url": "https://github.com/nwb-extensions/ndx-ecg-record", "last_updated": "2023-11-13T19:50:37Z", "name": "ndx-ecg", "version": "0.1.0", "src": "https://github.com/Defense-Circuits-Lab/ndx_ecg", "pip": "https://pypi.org/project/ndx-ecg/", "license": "BSD 3-clause", "maintainers": ["Hamidreza-Alimohammadi"], "readme": "# ndx-ecg Extension for NWB\n\nThis extension is developed to extend NWB data standards to incorporate ECG recordings. `CardiacSeries`, the main neurodata-type in this extension, in fact extends the base type of NWB TimeSeries and can be stored into three specific data interfaces of `ECG`, `HeartRate` and `AuxiliaryAnalysis`. 
Also, the `ECGRecordingGroup` is another neurodata-type in this module. It extends `LabMetaData` (which itself extends NWBContainer) and stores descriptive meta-data on the recording channels and the electrode implementation (`ECGChannels` and `ECGElectrodes`, respectively, both extensions of DynamicTable), along with a link to another extended neurodata-type, `ECGRecDevice`, which extends the type Device.\n\n
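A minimal sketch of the resulting object graph (the names mirror the full example below; this is an outline with placeholder arguments, not a runnable script):\n\n```python\n# descriptive meta-data tables (extensions of DynamicTable)\necg_electrodes_table = ECGElectrodes(description=...)\necg_channels_table = ECGChannels(description=...)\n\n# extended Device, linked to the endpoint recording device\necg_device = ECGRecDevice(..., endpoint_recording_device=main_device)\n\n# extended LabMetaData tying the electrodes, channels and device together\necg_recording_group = ECGRecordingGroup(\n electrodes=ecg_electrodes_table,\n channels=ecg_channels_table,\n recording_device=ecg_device\n)\n\n# every CardiacSeries then points back to this group\necg_cardiac_series = CardiacSeries(..., recording_group=ecg_recording_group)\n```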
\n\n
\n\n## Installation\nCan be installed directly from PyPI:\n```\npip install ndx-ecg\n```\nor simply clone the repo and navigate to the root directory, then:\n```\npip install .\n```\n## Test\nA round-trip test is runnable through ```pytest``` from the root. The test script can be found here:\n```\n\\src\\pynwb\\tests\n```\n## An example use-case\nThe following is an example use case of ```ndx-ecg``` with explanatory comments. First, we build up an ```nwbfile``` and define an endpoint recording device:\n```python\nfrom datetime import datetime\nfrom uuid import uuid4\nimport numpy as np\nfrom dateutil.tz import tzlocal\nfrom pynwb import NWBHDF5IO, NWBFile\nfrom hdmf.common import DynamicTable\n\nfrom ndx_ecg import (\n CardiacSeries,\n ECG,\n HeartRate,\n AuxiliaryAnalysis,\n ECGRecordingGroup,\n ECGRecDevice,\n ECGElectrodes,\n ECGChannels\n)\n\nnwbfile = NWBFile(\n session_description='ECG test-rec session',\n identifier=str(uuid4()),\n session_start_time=datetime.now(tzlocal()),\n experimenter='experimenter',\n lab='DCL',\n institution='UKW',\n experiment_description='',\n session_id='',\n)\n# define an endpoint main recording device\nmain_device = nwbfile.create_device(\n name='endpoint_recording_device',\n description='description_of_the_ERD', # ERD: Endpoint recording device\n manufacturer='manufacturer_of_the_ERD'\n)\n```\nThen, we define instances of `ECGElectrodes` and `ECGChannels`, to represent the meta-data on the recording electrodes and also the recording channels:\n```python\n'''\ncreating an ECG electrodes table\nas a DynamicTable\n'''\necg_electrodes_table = ECGElectrodes(\n description='descriptive meta-data on ECG recording electrodes'\n)\n\n# add electrodes\necg_electrodes_table.add_row(\n electrode_name='el_0',\n electrode_location='right upper-chest',\n electrode_info='descriptive info on el_0'\n)\necg_electrodes_table.add_row(\n electrode_name='el_1',\n electrode_location='left lower-chest',\n electrode_info='descriptive info on el_1'\n)\necg_electrodes_table.add_row(\n electrode_name='reference',\n electrode_location='top of the head',\n electrode_info='descriptive info on reference'\n)\n# adding the object of DynamicTable\nnwbfile.add_acquisition(ecg_electrodes_table) # storage point for DT\n\n'''\ncreating an ECG recording channels table\nas a DynamicTable\n'''\necg_channels_table = ECGChannels(\n description='descriptive meta-data on ECG recording channels'\n)\n\n# add channels\necg_channels_table.add_row(\n channel_name='ch_0',\n channel_type='single',\n involved_electrodes='el_0',\n channel_info='channel info on ch_0'\n)\necg_channels_table.add_row(\n channel_name='ch_1',\n channel_type='differential',\n involved_electrodes='el_0 and el_1',\n channel_info='channel info on ch_1'\n)\n# adding the object of DynamicTable\nnwbfile.add_acquisition(ecg_channels_table) # storage point for DT\n```\nNow, we can define an instance of ```ECGRecDevice```:\n```python\n# define an ECGRecDevice-type device for ecg recording\necg_device = ECGRecDevice(\n name='recording_device',\n description='description_of_the_ECGRD',\n manufacturer='manufacturer_of_the_ECGRD',\n filtering='notch-60Hz-analog',\n gain='100',\n offset='0',\n synchronization='taken care of via ...',\n endpoint_recording_device=main_device\n)\n# adding the object of ECGRecDevice\nnwbfile.add_device(ecg_device)\n```\nAnd also an instance of ```ECGRecordingGroup```:\n```python\necg_recording_group = ECGRecordingGroup(\n name='recording_group',\n group_description='a group to store electrodes and channels table, and 
linking to ECGRecDevice.',\n electrodes=ecg_electrodes_table,\n channels=ecg_channels_table,\n recording_device=ecg_device\n)\n# adding the object of ECGRecordingGroup\nnwbfile.add_lab_meta_data(ecg_recording_group) # storage point for custom LMD\n#\n```\nNow, we have all the required standard arguments to generate instances of `CardiacSeries` and store them in our three different NWBDataInterfaces:\n```python\n# storing the ECG data\ndum_data_ecg = np.random.randn(20, 2)\ndum_time_ecg = np.linspace(0, 10, len(dum_data_ecg))\necg_cardiac_series = CardiacSeries(\n name='ecg_raw_CS',\n data=dum_data_ecg,\n timestamps=dum_time_ecg,\n unit='mV',\n recording_group=ecg_recording_group\n)\n\necg_raw = ECG(\n cardiac_series=[ecg_cardiac_series],\n processing_description='raw acquisition'\n)\n```\nHere, we built an instance of our `CardiacSeries` to store a dummy raw ECG acquisition into a specified `ECG` interface, and we store it as an acquisition into the `nwbfile`:\n```python\n# adding the raw acquisition of ECG to the nwbfile inside an 'ECG' container\nnwbfile.add_acquisition(ecg_raw)\n```\nIn the following, we have taken a similar approach but this time storing dummy data as processed data, into specific interfaces of `HeartRate` and `AuxiliaryAnalysis`, then storing it into a to-be-defined `ecg_module`:\n```python\n# storing the HeartRate data\ndum_data_hr = np.random.randn(10, 2)\ndum_time_hr = np.linspace(0, 10, len(dum_data_hr))\nhr_cardiac_series = CardiacSeries(\n name='heart_rate_CS',\n data=dum_data_hr,\n timestamps=dum_time_hr,\n unit='bpm',\n recording_group=ecg_recording_group\n)\n\n# defining an ecg_module to store the processed cardiac data and analysis\necg_module = nwbfile.create_processing_module(\n name='cardio_module',\n description='a module to store processed cardiac data'\n)\n\nhr = HeartRate(\n cardiac_series=[hr_cardiac_series],\n processing_description='processed heartRate of the animal'\n)\n# adding the heart rate data to the nwbfile inside an 'HeartRate' container\necg_module.add(hr)\n\n# storing the Auxiliary data\n# An example could be the concept of ceiling that is being used in the literature published by DCL@UKW\ndum_data_ceil = np.random.randn(10, 2)\ndum_time_ceil = np.linspace(0, 10, len(dum_data_ceil))\nceil_cardiac_series = CardiacSeries(\n name='heart_rate_ceil_CS',\n data=dum_data_ceil,\n timestamps=dum_time_ceil,\n unit='bpm',\n recording_group=ecg_recording_group\n)\n\nceil = AuxiliaryAnalysis(\n cardiac_series=[ceil_cardiac_series],\n processing_description='processed auxiliary analysis'\n)\n# adding the 'ceiling' auxiliary analysis to the nwbfile inside an 'AuxiliaryAnalysis' container\necg_module.add(ceil)\n\n# storing the processed heart rate: as an NWBDataInterface with the new assigned name instead of default\n# An example could be the concept of HR2ceiling that is being used in the literature published by DCL@UKW\ndum_data_hr2ceil = np.random.randn(10, 2)\ndum_time_hr2ceil = np.linspace(0, 10, len(dum_data_hr2ceil))\nhr2ceil_cardiac_series = CardiacSeries(\n name='heart_rate_to_ceil_CS',\n data=dum_data_hr2ceil,\n timestamps=dum_time_hr2ceil,\n unit='bpm',\n recording_group=ecg_recording_group\n)\n\nhr2ceil = HeartRate(\n name='HR2Ceil',\n cardiac_series=[hr2ceil_cardiac_series],\n processing_description='processed heartRate to ceiling'\n)\n# adding the 'HR2ceiling' processed HR to the nwbfile inside an 'HeartRate' container\necg_module.add(hr2ceil)\n\n```\nNow, the `nwbfile` is ready to be written on the disk and read back. 
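\n\nA minimal sketch of that last step, using standard `pynwb` I/O (the file name here is arbitrary):\n\n```python\n# write the file to disk\nwith NWBHDF5IO('test_ecg.nwb', 'w') as io:\n io.write(nwbfile)\n\n# read it back, loading the extension namespace\nwith NWBHDF5IO('test_ecg.nwb', 'r', load_namespaces=True) as io:\n nwbfile_in = io.read()\n print(nwbfile_in.acquisition)\n print(nwbfile_in.processing['cardio_module'])\n```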
\n\n"}, "ndx-franklab-novela-record": {"ref": "ndx-franklab-novela-record", "record_url": "https://github.com/nwb-extensions/ndx-franklab-novela-record", "last_updated": "2023-12-14T05:54:08Z", "name": "ndx-franklab-novela", "version": "0.1.0", "src": "https://github.com/lorenfranklab/ndx-franklab-novela", "pip": "https://pypi.org/project/ndx-franklab-novela/", "conda": "https://anaconda.org/novelakrk/ndx-franklab-novela", "license": "BSD 3-Clause", "maintainers": ["NovelaNeuro", "rly", "edeno"], "readme": "# ndx-franklab-novela Extension for NWB\n\n# About\nndx-franklab-novela is a python package containing NWB custom extensions for Loren Frank's Lab.\n\n# How to install\n\nAdd ndx-franklab-novela to your conda environment\n\n`pip install git+git://github.com/LorenFrankLab/ndx-franklab-novela`\n\nThe original published extension maintained by NovelaNeuro can be installed using:\n\n`conda install -c conda-forge -c novelakrk ndx-franklab-novela`\n\n\n# How to install\n\nAdd ndx-franklab-novela to your conda environment
\n# Extensions\n\n## AssociatedFiles\nRepresentation of associated files in NWB.\n\n**Attributes:**\n- **description** `string`: description of associated file\n- **content** `string`: content of associated file\n- **task_epochs** `string`: id of epochs with task that is described by associated files\n\n## HeaderDevice\nRepresentation of HeaderDevice in NWB.\n\n**Attributes:**\n- **headstage_serial** `string`: headstage_serial from header global configuration\n- **headstage_smart_ref_on** `string`: headstage_smart_ref_on from header global configuration\n- **realtime_mode** `string`: realtime_mode from header global configuration\n- **headstage_auto_settle_on** `string`: headstage_auto_settle_on from header global configuration\n- **timestamp_at_creation** `string`: timestamp_at_creation from header global configuration\n- **controller_firmware_version** `string`: controller_firmware_version from header global configuration\n- **controller_serial** `string`: controller_serial from header global configuration\n- **save_displayed_chan_only** `string`: save_displayed_chan_only from header global configuration\n- **headstage_firmware_version** `string`: headstage_firmware_version from header global configuration\n- **qt_version** `string`: qt_version from header global configuration\n- **compile_date** `string`: compile_date from header global configuration\n- **compile_time** `string`: compile_time from header global configuration\n- **file_prefix** `string`: file_prefix from header global configuration\n- **headstage_gyro_sensor_on** `string`: headstage_gyro_sensor_on from header global configuration\n- **headstage_mag_sensor_on** `string`: headstage_mag_sensor_on from header global configuration\n- **trodes_version** `string`: trodes_version from header global configuration\n- **headstage_accel_sensor_on** `string`: headstage_accel_sensor_on from header global configuration\n- **commit_head** `string`: commit_head from header global configuration\n- **system_time_at_creation** `string`: system_time_at_creation from header global configuration\n- **file_path** `string`: file_path from header global configuration\n\n## ShanksElectrode\nRepresentation of electrodes of a shank in NWB.\n\n**Attributes:**\n- **name** `string`: name of the shank\n- **rel_x** `float`: the rel_x value of this electrode\n- **rel_y** `float`: the rel_y value of this electrode\n- **rel_z** `float`: the rel_z value of this electrode\n\n## Shank\nRepresentation of a shank in NWB.\n\n**Attributes:**\n- **name** `string`: name of the shank\n- **shanks_electrodes** `dict`: electrodes in the shank\n\n## Probe\nRepresentation of a probe in NWB.\n\n**Attributes:**\n- **id** `int`: unique id of the probe\n- **probe_type** `string`: type of probe\n- **units** `string`: units in device\n- **probe_description** `string`: description of probe\n- **contact_side_numbering** `bool`: is contact_side_numbering enabled\n- **contact_size** `float`: value of contact size as float\n- **shanks** `dict`: shanks in the probe\n\n## DataAcqDevice\nRepresentation of data acquisition device in NWB.\n\n**Attributes:**\n- **system** `string`: system of device\n- **amplifier** `string`: amplifier (optional)\n- **adc_circuit** `string`: adc_circuit (optional)\n\n## CameraDevice\nRepresentation of a camera device in 
NWB.\n\n**Attributes:**\n- **meters_per_pixel** `float`: meters per pixel\n- **model** `string`: model of this camera device\n- **lens** `string`: info about lens in this camera\n- **camera_name** `string`: name of this camera\n\n---\nThis extension was created using [ndx-template](https://github.com/nwb-extensions/ndx-template).\n"}, "ndx-photostim-record": {"ref": "ndx-photostim-record", "record_url": "https://github.com/nwb-extensions/ndx-photostim-record", "last_updated": "2023-12-16T05:29:05Z", "name": "ndx-photostim", "version": "0.0.3", "src": "https://github.com/histedlab/ndx-photostim", "pip": "https://pypi.org/project/ndx-photostim", "license": "BSD3", "maintainers": ["lafosse", "histed"], "readme": "# ndx-photostim Extension for NWB\n\n
\n\n\nThis is a NeuroData Without Borders (NWB) extension for storing data and metadata from holographic photostimulation\nmethods. It includes containers for storing photostimulation-specific device parameters, holographic patterns \n(either 2D or 3D), and time series data related to photostimulation.\n
\n\n
We release six PyNWB containers as part of this extension (we currently only have a Python implementation, rather than both Python and MATLAB ones -- this is why the `matnwb` directory is empty):\n\n* The `SpatialLightModulator` and `Laser` containers store metadata about the spatial light modulator and laser used in the photostimulation, respectively. These containers are then stored within the `PhotostimulationMethod` parent container, which stores the remaining photostimulation method-specific metadata.\n* `HolographicPattern` stores the **holographic pattern** used in stimulation.\n* `PhotostimulationSeries` contains the **time series data** corresponding to the presentation of a given stimulus (where the stimulus is represented by a `HolographicPattern` container linked to the `PhotostimulationSeries`).\n* We group **all time series & patterns for a given experiment** together using the `PhotostimulationTable` container. This object is a dynamic table, where each row in the table corresponds to a single `PhotostimulationSeries`. Additionally, the table links to the `StimulationDevice` used in the experiment.\n\n\n## Background\n\n\nState-of-the-art holographic photostimulation methods, used in concert with two-photon imaging, \nallow unprecedented \ncontrol and measurement of cell activity in the living brain. Methods for managing data for two-photon imaging \nexperiments are improving, but there is little to no standardization of data for holographic stimulation methods. \nStimulation in vivo depends on fine-tuning many experimental variables, which poses a challenge for reproducibility \nand data sharing between researchers. To improve standardization of photostimulation data storage and processing, \nwe release this extension as a generic data format for simultaneous holographic stimulation experiments, \nusing the NWB format to store experimental details and data relating to both acquisition \nand photostimulation.\n\n## Installation\n\nTo install the extension, first clone the `ndx_photostim` repository to the desired folder using the command\n```bash\ngit clone https://github.com/histedlab/ndx-photostim.git\n```\nThen, to install the requisite Python packages and extension, run:\n```bash\npython -m pip install -r requirements.txt -r requirements-dev.txt\npython setup.py install\n```\nThe extension can then be imported into Python scripts via `import ndx_photostim`.\n\n## Usage\n\n**For full example usage, see [tutorial.ipynb](https://github.com/histedlab/ndx-photostim/blob/main/tutorial.ipynb)**\n\nBelow is example code to:\n1. Create a device used in photostimulation\n2. Simulate and store photostimulation ROIs\n3. Store the time series corresponding to each stimulation\n4. Record all time series and patterns used in an experiment in a table\n5. 
Write the above to an NWB file and read it back\n\n\n```python\nimport os  # needed for the file cleanup at the end\n\nimport numpy as np\nfrom dateutil.tz import tzlocal\nfrom datetime import datetime\nfrom pynwb import NWBFile, NWBHDF5IO\nfrom ndx_photostim import SpatialLightModulator, Laser, PhotostimulationMethod, HolographicPattern, \\\n PhotostimulationSeries, PhotostimulationTable\n\n# create an example NWB file\nnwbfile = NWBFile('ndx-photostim_example', 'EXAMPLE_ID', datetime.now(tzlocal()))\n\n# store the spatial light modulator used\nslm = SpatialLightModulator(name='slm',\n model='Meadowlark',\n size=np.array([512, 512]))\n\n# store the laser used\nlaser = Laser(name='laser',\n model='Coherent Monaco',\n wavelength=1030,\n power=8,\n peak_pulse_energy=20,\n pulse_rate=500)\n\n# create a container for the method used for photostimulation, and link the SLM and laser to it\nps_method = PhotostimulationMethod(name=\"methodA\",\n stimulus_method=\"scanless\",\n sweep_pattern=\"none\",\n sweep_size=0,\n time_per_sweep=0,\n num_sweeps=0)\nps_method.add_slm(slm)\nps_method.add_laser(laser)\n\n# define holographic pattern\nhp = HolographicPattern(name='pattern1',\n image_mask_roi=np.round(np.random.rand(5, 5)),\n stim_duration=0.300,\n power_per_target=8)\n\n# show the mask\nhp.show_mask()\n\n# define stimulation time series using holographic pattern\nps_series = PhotostimulationSeries(name=\"series_1\",\n format='interval',\n data=[1, -1, 1, -1],\n timestamps=[0.5, 1, 2, 4],\n pattern=hp,\n method=ps_method)\n\n# add the stimulus to the NWB file\nnwbfile.add_stimulus(ps_series)\n\n# create a table to store the time series/patterns for all stimuli together, along with experiment-specific\n# parameters\nstim_table = PhotostimulationTable(name='test', description='...')\nstim_table.add_series(ps_series)\n\n# plot the timestamps when the stimulus was presented\nstim_table.plot_presentation_times()\n\n# create a processing module and add the PhotostimulationTable to it\nmodule = nwbfile.create_processing_module(name=\"photostimulation\", description=\"example photostimulation table\")\nmodule.add(stim_table)\n\n# write to an NWB file and read it back\nwith NWBHDF5IO(\"example_file.nwb\", \"w\") as io:\n io.write(nwbfile)\n\nwith NWBHDF5IO(\"example_file.nwb\", \"r\", load_namespaces=True) as io:\n read_nwbfile = io.read()\n\n # Check the file & processing module\n print(read_nwbfile)\n print(read_nwbfile.processing['photostimulation'])\n\nif os.path.exists(\"example_file.nwb\"):\n os.remove(\"example_file.nwb\")\n```\n## Running tests\n\nUnit and integration\ntests are implemented using pytest, and can be run via the command \n`pytest` from the root of the extension directory (i.e., inside `ndx-photostim/src`). In addition, the\n`pytest` command will also run a test of the example code above.\n\n## Documentation\n\n### Specification\n\n\nDocumentation for the extension's specification, which is based on the YAML files, is generated and stored in\nthe `./docs` folder. To create it, run the following from the home directory:\n```bash\ncd docs\nmake fulldoc\n```\nThis will save documentation to the `./docs/build` folder, and can be accessed via the \n`./docs/build/html/index.html` file.\n\n### API\n\nTo generate documentation for the Python API (stored in `./api_docs`), we use Sphinx \nand a template from ReadTheDocs. API documentation can\nbe created by running \n```bash\nsphinx-build -b html api_docs/source/ api_docs/build/\n```\nfrom the home folder. 
Similar to the specification docs, API documentation is stored in `./api_docs/build`. Select \n`./api_docs/build/index.html` to access the API documentation in a website format.\n\n\n## Credit\n\nCode by Carl Harris and Paul LaFosse (equal contribution). Collaboration between the NIMH's [Data Science and Sharing Team](https://cmn.nimh.nih.gov/dsst) and [Histed Lab](https://www.nimh.nih.gov/research/research-conducted-at-nimh/research-areas/clinics-and-labs/ncb).\n\n\nThis extension was created using [ndx-template](https://github.com/nwb-extensions/ndx-template).\n\n"}, "ndx-multichannel-volume-record": {"ref": "ndx-multichannel-volume-record", "record_url": "https://github.com/nwb-extensions/ndx-multichannel-volume-record", "last_updated": "2024-03-31T05:45:39Z", "name": "ndx-multichannel-volume", "version": "0.1.12", "src": "https://github.com/focolab/ndx-multichannel-volume", "pip": "https://pypi.org/project/ndx-multichannel-volume/", "license": "BSD-3", "maintainers": ["dysprague"], "readme": "# ndx-multichannel-volume Extension for NWB\n\nThis extension is to add support for volumetric multichannel images. This\nextends existing NWB functions for optophysiology imaging to allow for \n3 dimensions and a flexible number of channels. There is additional support\nfor adding metadata that is necessary for imaging in C. elegans.\n\n## Installation\n\nTo install this package on Unix/macOS, run in the command line:\n\n```bash\npython3 -m pip install --index-url https://pypi.org/simple/ --no-deps ndx-multichannel-volume\n```\n\nOn Windows, run:\n\n```bash\npy -m pip install --index-url https://pypi.org/simple/ --no-deps ndx-multichannel-volume\n```\n\n\n## Usage\n\nNew classes added in this extension are:\n\nCElegansSubject - extension of the base subject class with additional attributes\nfor metadata specific to C. elegans.\n\nMultiChannelVolumeSeries - extension of the base TimeSeries class to support \nmultiple channels and 3 dimensions.\n\nMultiChannelVolume - class for storing multichannel volumetric images with \na flexible number of channels. \n\nImagingVolume - alternate version of the native ImagingPlane class for supporting\nmetadata associated with volumetric multichannel images. Contains a list of optical\nchannel references as well as an ordered list of how those channels index to the \nchannels in the image.\n\nOpticalChannelPlus - extension of the OpticalChannel class to support additional\ninformation including emission_range, excitation_range, and excitation_lambda.\n\nOpticalChannelReferences - contains ordered list of optical channel to represent the \norder of the optical channels in the reference volume.\n\nVolumeSegmentation - contains segmentation masks for image volumes. 
There are options \nto use either a standard voxel_mask with XYZ information as well as a Cell ID label,\nor color_voxel_mask which has RGBW information as well as XYZ.\n\nPlease see https://github.com/focolab/ndx-multichannel-volume/blob/main/src/pynwb/create_NWB.ipynb for example code on how to use these new data types/classes\n\n---\nThis extension was created using [ndx-template](https://github.com/nwb-extensions/ndx-template).\n"}, "ndx-depth-moseq-record": {"ref": "ndx-depth-moseq-record", "record_url": "https://github.com/nwb-extensions/ndx-depth-moseq-record", "last_updated": "2024-07-25T03:10:54Z", "name": "ndx-depth-moseq", "version": "0.1.2", "src": "https://github.com/catalystneuro/ndx-depth-moseq", "pip": "https://pypi.org/project/ndx-depth-moseq/", "license": "BSD-3", "maintainers": ["pauladkisson"], "readme": "# ndx-depth-moseq Extension for NWB\n\nndx-depth-moseq is a standardized format for storing the output of [depth-moseq](https://dattalab.github.io/moseq2-website/index.html), an automatic motion sequencing algorithm, in NWB. Currently, this extension only supports the output of depth-moseq-extract, but will be extended as needed to cover the other types of depth-moseq outputs.\n\nThis extension consists of 3 new neurodata types:\n\n- `DepthImageSeries`, which is a simple extension of `pynwb.image.ImageSeries` for depth video with a constant reference depth.\n- `MoSeqExtractParameterGroup`, which stores all the various parameters from the depth-moseq-extract algorithm.\n- `MoSeqExtractGroup`, which stores all the relevant depth-moseq outputs including the `DepthImageSeries`, `MoSeqExtractParameterGroup`, as well as various native neurodata types such as the `Position`.\n\n## Installation\n```\npip install ndx-depth-moseq\n```\n\n## Usage\n\n```python\n\"\"\"Example of usage with mock data.\"\"\"\nfrom datetime import datetime\nfrom pytz import timezone\nimport numpy as np\nfrom pynwb.image import GrayscaleImage, ImageMaskSeries\nfrom pynwb import NWBFile, TimeSeries\nfrom pynwb.behavior import (\n CompassDirection,\n Position,\n SpatialSeries,\n)\nfrom ndx_depth_moseq import DepthImageSeries, MoSeqExtractGroup, MoSeqExtractParameterGroup\n\n# Define mock data (this will be replaced with the actual data) \nversion = \"0.1.0\"\nnum_frames = 10\nnum_rows = 512\nnum_cols = 424\nprocessed_depth_video = np.zeros((num_frames, num_rows, num_cols))\nloglikelihood_video = np.zeros((num_frames, num_rows, num_cols))\ntimestamps = np.arange(num_frames)\nbackground = np.zeros((num_rows, num_cols))\nis_flipped = np.zeros(num_frames, dtype=bool)\nroi = np.zeros((num_rows, num_cols))\ntrue_depth = 1.0\nkinematic_var_names = ['centroid_x_mm', 'centroid_y_mm', 'height_ave_mm', 'angle', 'velocity_2d_mm', 'velocity_3d_mm', 'velocity_theta', 'length_mm', 'width_mm', 'area_px', 'width_px', 'length_px']\nkinematic_vars = {k: np.zeros(num_frames) for k in kinematic_var_names}\nkinematic_vars['length_px'] += 1\nkinematic_vars['width_px'] += 1\nparameters = {\n 'angle_hampel_sig': np.array([3], dtype=np.int64)[0],\n 'angle_hampel_span': np.array([5], dtype=np.int64)[0],\n 'bg_roi_depth_range_min': np.array([0], dtype=np.int64)[0],\n 'bg_roi_depth_range_max': np.array([1000], dtype=np.int64)[0],\n 'bg_roi_dilate_x': np.array([10], dtype=np.int64)[0],\n 'bg_roi_dilate_y': np.array([10], dtype=np.int64)[0],\n 'bg_roi_fill_holes': True,\n 'bg_roi_gradient_filter': True,\n 'bg_roi_gradient_kernel': np.array([5], dtype=np.int64)[0],\n 'bg_roi_gradient_threshold': np.array([10], dtype=np.int64)[0],\n 
'bg_roi_index': np.array([0], dtype=np.int64)[0],\n 'bg_roi_shape': 'ellipse',\n 'bg_roi_weight_area': np.array([0.5], dtype=np.float64)[0],\n 'bg_roi_weight_extent': np.array([0.5], dtype=np.float64)[0],\n 'bg_roi_weight_dist': np.array([0.5], dtype=np.float64)[0],\n 'cable_filter_iters': np.array([5], dtype=np.int64)[0],\n 'cable_filter_shape': 'ellipse',\n 'cable_filter_size_x': np.array([5], dtype=np.int64)[0],\n 'cable_filter_size_y': np.array([5], dtype=np.int64)[0],\n 'centroid_hampel_sig': np.array([3], dtype=np.int64)[0],\n 'centroid_hampel_span': np.array([5], dtype=np.int64)[0],\n 'chunk_overlap': np.array([0], dtype=np.int64)[0],\n 'chunk_size': np.array([100], dtype=np.int64)[0],\n 'compress': False,\n 'compress_chunk_size': np.array([100], dtype=np.int64)[0],\n 'compress_threads': np.array([1], dtype=np.int64)[0],\n 'config_file': 'config.yaml',\n 'crop_size_width': np.array([512], dtype=np.int64)[0],\n 'crop_size_height': np.array([424], dtype=np.int64)[0],\n 'flip_classifier': 'flip_classifier.pkl',\n 'flip_classifier_smoothing': np.array([5], dtype=np.int64)[0],\n 'fps': np.array([30], dtype=np.int64)[0],\n 'frame_dtype': 'uint16',\n 'frame_trim_beginning': np.array([0], dtype=np.int64)[0],\n 'frame_trim_end': np.array([0], dtype=np.int64)[0],\n 'max_height': np.array([1000], dtype=np.int64)[0],\n 'min_height': np.array([0], dtype=np.int64)[0],\n 'model_smoothing_clips_x': np.array([5], dtype=np.int64)[0],\n 'model_smoothing_clips_y': np.array([5], dtype=np.int64)[0],\n 'spatial_filter_size': np.array([5], dtype=np.int64)[0],\n 'tail_filter_iters': np.array([5], dtype=np.int64)[0],\n 'tail_filter_shape': 'ellipse',\n 'tail_filter_size_x': np.array([5], dtype=np.int64)[0],\n 'tail_filter_size_y': np.array([5], dtype=np.int64)[0],\n 'temporal_filter_size': np.array([5], dtype=np.int64)[0],\n 'tracking_model_init': 'mean',\n 'tracking_model_ll_clip': np.array([5], dtype=np.int64)[0],\n 'tracking_model_ll_threshold': np.array([5], dtype=np.int64)[0],\n 'tracking_model_mask_threshold': np.array([5], dtype=np.int64)[0],\n 'tracking_model_segment': True,\n 'use_plane_bground': True,\n 'use_tracking_model': True,\n 'write_movie': False,\n}\n\n# Create the NWB file\nnwbfile = NWBFile(\n session_description=\"session_description\",\n identifier=\"identifier\",\n session_start_time=datetime.now(timezone(\"US/Pacific\")),\n)\n\n# Add Imaging Data\nkinect = nwbfile.create_device(name=\"kinect\", manufacturer=\"Microsoft\", description=\"Microsoft Kinect 2\")\nflipped_series = TimeSeries(\n name=\"flipped_series\",\n data=is_flipped,\n unit=\"a.u.\",\n timestamps=timestamps,\n description=\"Boolean array indicating whether the image was flipped left/right\",\n)\nprocessed_depth_video = DepthImageSeries(\n name=\"processed_depth_video\",\n data=processed_depth_video,\n unit=\"millimeters\",\n format=\"raw\",\n timestamps=flipped_series.timestamps,\n description=\"3D array of depth frames (nframes x w x h, in mm)\",\n distant_depth=true_depth,\n device=kinect,\n)\nloglikelihood_video = ImageMaskSeries(\n name=\"loglikelihood_video\",\n data=loglikelihood_video,\n masked_imageseries=processed_depth_video,\n unit=\"a.u.\",\n format=\"raw\",\n timestamps=flipped_series.timestamps,\n description=\"Log-likelihood values from the tracking model (nframes x w x h)\",\n device=kinect,\n)\nbackground = GrayscaleImage(\n name=\"background\",\n data=background,\n description=\"Computed background image.\",\n)\nroi = GrayscaleImage(\n name=\"roi\",\n data=roi,\n description=\"Computed region of 
interest.\",\n)\n\n# Add Position Data\nposition_data = np.vstack(\n (kinematic_vars[\"centroid_x_mm\"], kinematic_vars[\"centroid_y_mm\"], kinematic_vars[\"height_ave_mm\"])\n).T\nposition_series = SpatialSeries(\n name=\"position\",\n description=\"Position (x, y, height) in an open field.\",\n data=position_data,\n timestamps=flipped_series.timestamps,\n reference_frame=\"top left\",\n unit=\"mm\",\n)\nposition = Position(spatial_series=position_series, name=\"position\")\n\n# Add Compass Direction Data\nheading_2d_series = SpatialSeries(\n name=\"heading_2d\",\n description=\"Head orientation.\",\n data=kinematic_vars[\"angle\"],\n timestamps=flipped_series.timestamps,\n reference_frame=\"top left\",\n unit=\"radians\",\n)\nheading_2d = CompassDirection(spatial_series=heading_2d_series, name=\"heading_2d\")\n\n# Add speed/velocity data\nspeed_2d = TimeSeries(\n name=\"speed_2d\",\n description=\"2D speed (mm / frame), note that missing frames are not accounted for\",\n data=kinematic_vars[\"velocity_2d_mm\"],\n timestamps=flipped_series.timestamps,\n unit=\"mm/frame\",\n)\nspeed_3d = TimeSeries(\n name=\"speed_3d\",\n description=\"3D speed (mm / frame), note that missing frames are not accounted for\",\n data=kinematic_vars[\"velocity_3d_mm\"],\n timestamps=flipped_series.timestamps,\n unit=\"mm/frame\",\n)\nangular_velocity_2d = TimeSeries(\n name=\"angular_velocity_2d\",\n description=\"Angular component of velocity (arctan(vel_x, vel_y))\",\n data=kinematic_vars[\"velocity_theta\"],\n timestamps=flipped_series.timestamps,\n unit=\"radians/frame\",\n)\n\n# Add length/width/area data\nlength = TimeSeries(\n name=\"length\",\n description=\"Length of mouse (mm)\",\n data=kinematic_vars[\"length_mm\"],\n timestamps=flipped_series.timestamps,\n unit=\"mm\",\n)\nwidth = TimeSeries(\n name=\"width\",\n description=\"Width of mouse (mm)\",\n data=kinematic_vars[\"width_mm\"],\n timestamps=flipped_series.timestamps,\n unit=\"mm\",\n)\nwidth_px_to_mm = kinematic_vars[\"width_mm\"] / kinematic_vars[\"width_px\"]\nlength_px_to_mm = kinematic_vars[\"length_mm\"] / kinematic_vars[\"length_px\"]\narea_px_to_mm2 = width_px_to_mm * length_px_to_mm\narea_mm2 = kinematic_vars[\"area_px\"] * area_px_to_mm2\narea = TimeSeries(\n name=\"area\",\n description=\"Pixel-wise area of mouse (mm^2)\",\n data=area_mm2,\n timestamps=flipped_series.timestamps,\n unit=\"mm^2\",\n)\n\n# Add Parameters\nparameters = MoSeqExtractParameterGroup(name=\"parameters\", **parameters)\n\n# Add MoseqExtractGroup\nmoseq_extract_group = MoSeqExtractGroup(\n name=\"moseq_extract_group\",\n version=version,\n parameters=parameters,\n background=background,\n processed_depth_video=processed_depth_video,\n loglikelihood_video=loglikelihood_video,\n roi=roi,\n flipped_series=flipped_series,\n depth_camera=kinect,\n position=position,\n heading_2d=heading_2d,\n speed_2d=speed_2d,\n speed_3d=speed_3d,\n angular_velocity_2d=angular_velocity_2d,\n length=length,\n width=width,\n area=area,\n)\n# Add data into a behavioral processing module\nbehavior_module = nwbfile.create_processing_module(\n name=\"behavior\",\n description=\"Processed behavioral data from MoSeq\",\n)\nbehavior_module.add(moseq_extract_group)\n```\n\n---\nThis extension was created using [ndx-template](https://github.com/nwb-extensions/ndx-template).\n"}, "ndx-probeinterface-record": {"ref": "ndx-probeinterface-record", "record_url": "https://github.com/nwb-extensions/ndx-probeinterface-record", "last_updated": "2024-07-25T16:48:49Z", "name": "ndx-probeinterface", 
"version": "0.1.0", "src": "https://github.com/SpikeInterface/ndx-probeinterface", "pip": "https://pypi.org/project/ndx-probeinterface/", "license": "MIT", "maintainers": ["alejoe91", "khl02007"], "readme": "# ndx-probeinterface Extension for NWB\n\n`ndx-probeinterface` is an extension of the NWB format to formally define information about neural probes as data types in NWB files. It comes with helper functions to easily construct `ndx-probeinterface.Probe` from `probeinterface.Probe` and vice versa.\n\n## Installation\n```python\npip install ndx_probeinterface\n```\n\n## Usage\n\n### Going from a `probeinterface.Probe`/`ProbeGroup` object to a `ndx_probeinterface.Probe` object \n```python\nimport ndx_probeinterface\n\npi_probe = probeinterface.Probe(...)\npi_probegroup = probeinterface.ProbeGroup()\n\n# from_probeinterface always returns a list of ndx_probeinterface.Probe devices\nndx_probes1 = ndx_probeinterface.from_probeinterface(pi_probe)\nndx_probes2 = ndx_probeinterface.from_probeinterface(pi_probegroup)\n\nndx_probes = ndx_probes1.extend(ndx_probes2)\n\nnwbfile = pynwb.NWBFile(...)\n\n# add Probe as NWB Devices\nfor ndx_probe in ndx_probes:\n nwbfile.add_device(ndx_probe)\n```\n\n### Going from a `ndx_probeinterface.Probe` object to a `probeinterface.Probe` object \n```python\nimport ndx_probeinterface\n\n# load ndx_probeinterface.Probe objects from NWB file\nio = pynwb.NWBH5IO(file_path, 'r', load_namespaces=True)\nnwbfile = io.read()\n\nndx_probes = []\nfor device in nwbfile:\n if isinstance(device, ndx_probeinterface.Probe):\n ndx_probes.append(device)\n\n# convert to probeinterface.Probe objects\npi_probes = []\nfor ndx_probe in ndx_probes:\n pi_probe = ndx_probeinterface.to_probeinterface(ndx_probe)\n pi_probes.append(pi_probe)\n```\n\n## Future plans\n- Add information about the headstage used for data acquisition\n- Remove redundant information from `ElectrodeTable`\n- Incorporate this NDX into the core NWB schema\n\n---\nThis extension was created using [ndx-template](https://github.com/nwb-extensions/ndx-template).\n"}, "ndx-dbs-record": {"ref": "ndx-dbs-record", "record_url": "https://github.com/nwb-extensions/ndx-dbs-record", "last_updated": "2024-07-25T16:51:30Z", "name": "ndx-dbs", "version": "0.1.0", "src": "https://github.com/Hamidreza-Alimohammadi/ndx-dbs", "pip": "https://pypi.org/project/ndx-dbs/", "license": "BSD 3-clause", "maintainers": ["Hamidreza-Alimohammadi"], "readme": "# ndx-dbs Extension for NWB\n\nThis extension is developed to extend NWB data standards to incorporate required (meta)data for DBS experiments. `DBSGroup`, the main neurodata-type in this extension, in fact extends the `LabMetaData` which itself extends the NWBContainer base type and incorporates data types of `DBSMeta`(as an extension of LabMetaData), `DBSSubject`(as an extension of LabMetaData) and `DBSDevice`(as an extension of Device) which itself includes `DBSElectrodes`(as an extension of DynamicTable). Instances of these data types are interlinked to each other to account for the comprehensiveness of all the required meta(data) in a general experiment including DBS.\n\n
\n\n
\n\n## Installation\nCan be installed directly from PyPI:\n```\npip install ndx-dbs\n```\nor simply clone the repo and navigate to the root directory, then:\n```\npip install .\n```\n## Test\nA round-trip test is runnable through ```pytest``` from the root. The test script can be found here:\n```\n\\src\\pynwb\\tests\n```\n## An example use-case\nThe following is an example use case of ```ndx-dbs``` with explanatory comments. First, we build up an ```nwb_file``` and define an endpoint recording device:\n```python\nfrom datetime import datetime\nfrom uuid import uuid4\nfrom dateutil.tz import tzlocal\nfrom pynwb import NWBHDF5IO, NWBFile\n\nfrom ndx_dbs import (\n DBSDevice,\n DBSElectrodes,\n DBSMeta,\n DBSSubject,\n DBSGroup\n)\n\nnwb_file = NWBFile(\n session_description='DBS mock session',\n identifier=str(uuid4()),\n session_start_time=datetime.now(tzlocal()),\n experimenter='experimenter',\n lab='ChiWangLab',\n institution='UKW',\n experiment_description='',\n session_id='',\n)\n\n# define an endpoint main recording device\nmain_device = nwb_file.create_device(\n name='endpoint_recording_device',\n description='description_of_the_ERD', # ERD: Endpoint recording device\n manufacturer='manufacturer_of_the_ERD'\n)\n```\nThen, we define an instance of `DBSElectrodes` to represent the meta-data on the recording electrodes:\n```python\n'''\ncreating a DBS electrodes table\nas a DynamicTable\n'''\ndbs_electrodes_table = DBSElectrodes(\n description='descriptive meta-data on DBS stimulus electrodes'\n)\n\n# add electrodes\ndbs_electrodes_table.add_row(\n el_id='el_0',\n polarity='negative electrode (stimulation electrode, cathode)',\n impedance='0.8 MOhm',\n length='X cm',\n tip='tip surface ~ XX micrometer sq',\n material='platinum/iridium',\n location='STN',\n comment='none',\n)\ndbs_electrodes_table.add_row(\n el_id='el_1',\n polarity='positive electrode (reference electrode, anode)',\n impedance='1 MOhm',\n length='Y cm',\n tip='tip surface ~ YY micrometer sq',\n material='platinum/iridium',\n location='scalp surface',\n comment='distance D from el_0',\n)\n# adding the object of DynamicTable\nnwb_file.add_acquisition(dbs_electrodes_table) # storage point for DT\n```\nNow, we can define an instance of ```DBSDevice```:\n```python\n# define a DBSDevice-type device for DBS stimulation\ndbs_device = DBSDevice(\n name='DBS_device',\n description='cable-bound multichannel systems stimulus generator; TypeSTG4004',\n manufacturer='MultichannelSystems, Reutlingen, Germany',\n synchronization='taken care of via ...',\n electrodes_group=dbs_electrodes_table,\n endpoint_recording_device=main_device\n)\n# adding the object of DBSDevice\nnwb_file.add_device(dbs_device)\n```\nAnd also an instance of ```DBSMeta``` to store the meta-data for a DBS experiment:\n```python\ndbs_meta_group = DBSMeta(\n name='DBS_meta',\n stim_state='ON',\n stim_type='unipolar',\n stim_area='STN',\n stim_coordinates='\u20133.6mmAP, either\u20132.5mm (right) or 12.5mm(left)ML, and\u20137.7mmDV',\n pulse_shape='rectangular',\n pulse_width='60 micro-seconds',\n pulse_frequency='130 Hz',\n pulse_intensity='50 micro-Ampere',\n charge_balance='pulse symmetric; set to be theoretically zero',\n)\n# adding the object of DBSMeta\nnwb_file.add_lab_meta_data(dbs_meta_group) # storage point for custom LMD\n```\nAlong with an instance of `DBSSubject`:\n```python\ndbs_subject_group = DBSSubject(\n name='DBS_subject',\n model='6-OHDA',\n controls='specific control group in this experiment',\n comment='any comments on this subject',\n)\n# adding 
the object of DBSSubject\nnwb_file.add_lab_meta_data(dbs_subject_group) # storage point for custom LMD\n```\nNow that we have all the required components, we define the main group for DBS to connect them all:\n```python\ndbs_main_group = DBSGroup(\n name='DBS_main_container',\n DBS_phase='first phase after implantation recovery',\n DBS_device=dbs_device,\n DBS_meta=dbs_meta_group,\n DBS_subject=dbs_subject_group,\n comment='any comments ...',\n)\n# adding the object of DBSGroup\nnwb_file.add_lab_meta_data(dbs_main_group) # storage point for custom LMD\n```\nNow, the `nwb_file` is ready to be written on the disk and read back. \n"}} \ No newline at end of file +{"ndx-miniscope-record": {"ref": "ndx-miniscope-record", "record_url": "https://github.com/nwb-extensions/ndx-miniscope-record", "last_updated": "2019-10-16T05:56:05Z", "name": "ndx-miniscope", "version": "0.2.2", "src": "https://github.com/bendichter/ndx-miniscope", "pip": "https://pypi.org/project/ndx-miniscope/", "license": "BSD", "maintainers": ["bendichter"], "readme": "# ndx-miniscope Extension for NWB:N\n\nThis is a Neurodata Extension (NDX) for Neurodata Without Borders: Neurophysiology (NWB:N) 2.0 that provides an extension to Device to hold meta-data collected by the Miniscope device."}, "ndx-simulation-output-record": {"ref": "ndx-simulation-output-record", "record_url": "https://github.com/nwb-extensions/ndx-simulation-output-record", "last_updated": "2019-10-16T06:23:42Z", "name": "ndx-simulation-output", "version": "0.2.5", "src": "https://github.com/bendichter/ndx-simulation-output", "pip": "https://pypi.org/project/ndx-simulation-output", "license": "BSD", "maintainers": ["bendichter"], "readme": "# ndx-simulation-output Extension for NWB:N\n\n## An extension for output data of large-scale simulations\n Developed in collaboration between the Soltesz lab and the Allen Institute during [NWB Hackathon #4](https://github.com/NeurodataWithoutBorders/nwb_hackathons/tree/master/HCK04_2018_Seattle/Projects/NetworkOutput) by Ben Dichter*, Kael Dai*, Aaron Milstein, Yazan Billeh, Andrew Tritt, Jean-Christophe Fillion-Robin, Anton Arkhipov, Oliver Ruebel, Nicholas Cain, Kristofer Bouchard, and Ivan Soltesz\n\nThis extension defines two NWB neurodata_types, `CompartmentSeries` and `Compartments`. `CompartmentSeries` stores continuous data (e.g. membrane potential, calcium concentration) from many compartments of many cells, and scales to hundreds of thousands of compartments. 
`Compartments` stores the meta-data associated with those compartments, and is stored in `SimulationMetaData`.\n\n![Image of CompartmentSeries](multicompartment_schema_1.png)\n\n\n## Guide\n### python\n#### installation\n```\npip install ndx-simulation-output\n```\n\n#### usage\n```python\nfrom pynwb import NWBHDF5IO, NWBFile\nfrom datetime import datetime\nfrom ndx_simulation_output import CompartmentSeries, Compartments, SimulationMetaData\nimport numpy as np\n\n\ncompartments = Compartments()\ncompartments.add_row(number=[0, 1, 2, 3, 4], position=[0.1, 0.2, 0.3, 0.4, 0.5])\ncompartments.add_row(number=[0], position=[np.nan])\n\nnwbfile = NWBFile('description', 'id', datetime.now().astimezone())\n\nnwbfile.add_lab_meta_data(SimulationMetaData(compartments=compartments))\ncs = CompartmentSeries('membrane_potential', np.random.randn(10, 6),\n compartments=compartments, unit='V', rate=100.)\nnwbfile.add_acquisition(cs)\n\nwith NWBHDF5IO('test_compartment_series.nwb', 'w') as io:\n io.write(nwbfile)\n```\n\nconversion from SONATA:\n```python\nfrom ndx_simulation_output.io import sonata2nwb\n\nsonata2nwb('sonata_fpath', 'save_path')\n```\n\n### MATLAB\n#### installation\n\ncommand line:\n```\ngit clone https://github.com/bendichter/ndx-simulation-output.git\n```\n\nin matlab:\n```matlab\ngenerateExtension('/path/to/ndx-simulation-output/spec/ndx-simulation-output.namespace.yaml');\n```\n\n#### usage\n```matlab\nnwb = nwbfile()\n\n[number, number_index] = util.create_indexed_column( ...\n {[0, 1, 2, 3, 4], 0}, '/acquisition/compartments/number');\n\n[position, position_index] = util.create_indexed_column( ...\n {[0.1, 0.2, 0.3, 0.4, 0.5], 0}, '/acquisition/compartments/position');\n\ncompartments = types.ndx_simulation_output.Compartments( ...\n 'colnames', {'number', 'position'}, ...\n 'description', 'membrane potential from various compartments', ...\n 'id', types.core.ElementIdentifiers('data', int64(0:5)));\n\ncompartments.position = position;\ncompartments.position_index = position_index;\ncompartments.number = number;\ncompartments.number_index = number_index;\n\nmembrane_potential = types.ndx_simulation_output.CompartmentSeries( ...\n 'data', randn(10,6), ...\n 'compartments', types.untyped.SoftLink('/acquisition/compartments'), ...\n 'data_unit', 'V', ...\n 'starting_time_rate', 100., ...\n 'starting_time', 0.0);\n \nsimulation = types.ndx_simulation_output.SimulationMetaData('compartments', compartments);\n \nnwb.general.set('simulation', simulation);\n\nnwb.acquisition.set('membrane_potential', membrane_potential);\n```\n\n## Talks\nBen Dichter*, Kael Dai*, Aaron Milstein, Yazan Billeh, Andrew Tritt, Jean-Christophe Fillion-Robin, Anton Arkhipov, Oliver Ruebel, Nicholas Cain, Kristofer Bouchard, Ivan Soltesz. NWB extension for storing results of large-scale neural network simulations. NeuroInformatics. Montreal, Canada (2018). [video](https://www.youtube.com/watch?v=uuYQW0EE2GY).\n"}, "ndx-ecog-record": {"ref": "ndx-ecog-record", "record_url": "https://github.com/nwb-extensions/ndx-ecog-record", "last_updated": "2019-10-16T08:20:22Z", "name": "ndx-ecog", "version": "0.1.1", "src": "https://github.com/ben-dichter-consulting/ndx-ecog", "pip": "https://pypi.org/project/ndx-ecog/", "license": "BSD", "maintainers": ["bendichter"], "readme": "# ndx-ecog Extension for NWB:N\n\nAuthor: Ben Dichter\n\nThere are three data types, `Surface`, `CorticalSurfaces`, and `ECoGSubject`. `CorticalSurfaces` is simply a group (like a folder) to put `Surface` objects into. 
`Surface` holds surface mesh data (vertices and triangular faces) for sections of cortex. `ECoGSubject` is an extension of `Subject` that allows you to add the `CorticalSurfaces` object to `/general/subject`.\n\n## Usage\n\n### python\n\ninstall:\n```bash\npip install ndx_ecog\n```\n\nwrite:\n```python\nimport pynwb\nfrom ndx_ecog import CorticalSurfaces, ECoGSubject\n\nnwbfile = pynwb.NWBFile(...)\n\n...\n\ncortical_surfaces = CorticalSurfaces()\n## loop me\n cortical_surfaces.create_surface(name=name, faces=faces, vertices=veritices)\n##\nnwbfile.subject = ECoGSubject(cortical_surfaces=cortical_surfaces)\n```\n\nYou can optionally attach images of the subject's brain:\n```python\nfrom pynwb.base import Images\nfrom pynwb.image import GrayscaleImage\n\nsubject.images = Images(name='subject images', images=[GrayscaleImage('image1', data=image_data)])\n```\n\nread:\n```python\nimport nwbext_ecog\nfrom pynwb import NWBHDF5IO\nio = NWBHDF5IO('path_to_file.nwb','r')\nnwb = io.read()\nnwb.subject.cortical_surfaces\n```\n\n### MATLAB\ninstall:\n```matlab\ngenerateExtension('/path/to/ndx-ecog/spec/ndx-ecog.namespace.yaml');\n```\n\nwrite:\n```matlab\ncortical_surfaces = types.ecog.CorticalSurfaces;\n\n%%% loop me\n surf = types.ecog.Surface('faces', faces, 'vertices', vertices);\n cortical_surfaces.surface.set(surface_name, surf);\n%%%\n\nfile.subject = types.ecog.ECoGSubject(name, cortical_surfaces);\n```\n"}, "ndx-fret-record": {"ref": "ndx-fret-record", "record_url": "https://github.com/nwb-extensions/ndx-fret-record", "last_updated": "2020-01-24T21:49:16Z", "name": "ndx-fret", "version": "0.1.1", "src": "https://github.com/ben-dichter-consulting/ndx-fret", "pip": "https://pypi.org/project/ndx-fret/", "license": "BSD", "maintainers": ["luiztauffer", "bendichter"], "readme": "# ndx-fret\n[![PyPI version](https://badge.fury.io/py/ndx-fret.svg)](https://badge.fury.io/py/ndx-fret)\n\nNWB extension for storing Fluorescence Resonance Energy Transfer (FRET) experimental data.\nA collaboration with [Jaeger Lab](https://scholarblogs.emory.edu/jaegerlab/), [Emory University](https://www.emory.edu/home/index.html) and [The Kavli Foundation](https://www.kavlifoundation.org/).\n\n

\n\n

\n\n### Python Installation\n```bash\npip install ndx-fret\n```\n\n### Python Usage\n```python\nfrom pynwb import NWBFile, NWBHDF5IO\nfrom pynwb.device import Device\nfrom pynwb.ophys import OpticalChannel\nfrom ndx_fret import FRET, FRETSeries\n\nfrom datetime import datetime\nimport numpy as np\n\nnwb = NWBFile('session_description', 'identifier', datetime.now().astimezone())\n\n# Create and add device\ndevice = Device(name='Device')\nnwb.add_device(device)\n\n# Create optical channels\nopt_ch_d = OpticalChannel(\n name='optical_channel',\n description='optical_channel_description',\n emission_lambda=529.\n)\nopt_ch_a = OpticalChannel(\n name='optical_channel',\n description='optical_channel_description',\n emission_lambda=633.\n)\n\n# Create FRET\nfs_d = FRETSeries(\n name='donor',\n fluorophore='mCitrine',\n optical_channel=opt_ch_d,\n device=device,\n description='description of donor series',\n data=np.random.randn(100, 10, 10),\n rate=200.,\n)\nfs_a = FRETSeries(\n name='acceptor',\n fluorophore='mKate2',\n optical_channel=opt_ch_a,\n device=device,\n description='description of acceptor series',\n data=np.random.randn(100, 10, 10),\n rate=200.,\n)\n\nfret = FRET(\n name='FRET',\n excitation_lambda=482.,\n donor=fs_d,\n acceptor=fs_a\n)\nnwb.add_acquisition(fret)\n\n# Write nwb file\nwith NWBHDF5IO('test_fret.nwb', 'w') as io:\n io.write(nwb)\n print('NWB file written')\n\n# Read nwb file and check its content\nwith NWBHDF5IO('test_fret.nwb', 'r', load_namespaces=True) as io:\n nwb = io.read()\n print(nwb)\n```\n"}, "ndx-icephys-meta-record": {"ref": "ndx-icephys-meta-record", "record_url": "https://github.com/nwb-extensions/ndx-icephys-meta-record", "last_updated": "2022-04-20T19:15:11Z", "name": "ndx-icephys-meta", "version": "0.1.0", "src": "https://github.com/oruebel/ndx-icephys-meta", "pip": "https://pypi.org/project/ndx-icephys-meta/", "license": "BSD 3-Clause", "maintainers": ["oruebel"], "readme": "# [Deprecated] ndx-icephys-meta Extension for NWB\n\n**Status:** The changes from this extension have been integrated with NWB and are part of then [NWB 2.4 release](https://nwb-schema.readthedocs.io/en/latest/format_release_notes.html#aug-11-2021). Use of this extension is deprecated. \n\n**Overview:** This extension implements the icephys extension proposal described [here](https://docs.google.com/document/d/1cAgsXv26BmQoVfa7Greyxs0oc4IGH-t5aJsm-AwUAAE/edit). The extension is intended to evaluate and explore the practical use of the proposed changes as well as to provide a reference implementation with the goal to ease integration of the proposed changes with NWB.\n\n## Install\n\n```\npython setup.py develop\n```\n\nThe extension is now also available on pip and can be installed via:\n\n```\npip install ndx-icephys-meta\n```\n\nThe extension is also listed in the (NDX catalog)[https://nwb-extensions.github.io/]. See [here](https://github.com/nwb-extensions/ndx-icephys-meta-record) for the catalog metadata record.\n\n\n## Building the spec documentation\n\n```\ncd docs\nmake html\n```\n\nThis generates the specification docs directly from the YAML specifciation in the ``spec`` folder. 
The generated docs are stored in ``/docs/build``\n\n## Running the unit tests\n\n```\npython src/pynwb/ndx_icephys_meta/test/test_icephys.py\n```\n\n## Content\n\n* ``spec/`` : YAML specification of the extension\n* ``docs/`` : Sources for building the specification docs from the YAML spec\n* ``src/spec/create_extension_spec.py`` : Python source file for creating the specification\n* ``src/pynwb/`` : Sources for Python extensions and examples\n * ``ndx_icephys_meta`` : Python package with extensions to PyNWB for read/write of extension data\n * ``ndx_icephys_meta/test`` : Unit test for the Python extension\n * ``ndx_icephys_meta/icephys.py`` : PyNWB Container classes\n * ``ndx_icephys_meta/io/icephys.py`` : PyNWB ObjectMapper classes\n * ``examples`` : Examples illustrating the use of the extension in Python\n\n\n## Example\n\nExamples for the Python extension are available at ``src/pynwb/examples``. The unit tests in ``src/pynwb/ndx_icephys_meta/test`` can also serve as additional examples.\n\nThe following shows a simple example. The steps with (A) - (E) in the comments are the main new steps for this extension. The other parts of the code are standard NWB code.\n\n```python\nfrom datetime import datetime\nfrom dateutil.tz import tzlocal\nimport numpy as np\nfrom pynwb.icephys import VoltageClampStimulusSeries, VoltageClampSeries\nfrom pynwb import NWBHDF5IO\nfrom ndx_icephys_meta.icephys import ICEphysFile # Import the extension\n\n# Create an ICEphysFile\nnwbfile = ICEphysFile(session_description='my first recording',\n identifier='EXAMPLE_ID',\n session_start_time=datetime.now(tzlocal()))\n\n# Add a device\ndevice = nwbfile.create_device(name='Heka ITC-1600')\n\n# Add an intracellular electrode\nelectrode = nwbfile.create_ic_electrode(name=\"elec0\",\n description='a mock intracellular electrode',\n device=device)\n\n# Create an ic-ephys stimulus\nstimulus = VoltageClampStimulusSeries(\n name=\"stimulus\",\n data=[1, 2, 3, 4, 5],\n starting_time=123.6,\n rate=10e3,\n electrode=electrode,\n gain=0.02)\n\n# Create an ic-response\nresponse = VoltageClampSeries(\n name='response',\n data=[0.1, 0.2, 0.3, 0.4, 0.5],\n conversion=1e-12,\n resolution=np.nan,\n starting_time=123.6,\n rate=20e3,\n electrode=electrode,\n gain=0.02,\n capacitance_slow=100e-12,\n resistance_comp_correction=70.0)\n\n# (A) Add an intracellular recording to the file\nir_index = nwbfile.add_intracellular_recording(electrode=electrode,\n stimulus=stimulus,\n response=response)\n\n# (B) Add a list of sweeps to the sweeps table\nsweep_index = nwbfile.add_ic_sweep(recordings=[ir_index, ])\n\n# (C) Add a list of sweep table indices as a sweep sequence\nsequence_index = nwbfile.add_ic_sweep_sequence(sweeps=[sweep_index, ])\n\n# (D) Add a list of sweep sequence table indices as a run\nrun_index = nwbfile.add_ic_run(sweep_sequences=[sequence_index, ])\n\n# (E) Add a list of run table indices as a condition\nnwbfile.add_ic_condition(runs=[run_index, ])\n\n# Write our test file\ntestpath = \"test_icephys_file.h5\"\nwith NWBHDF5IO(testpath, 'w') as io:\n io.write(nwbfile)\n\n# Read the data back in\nwith NWBHDF5IO(testpath, 'r') as io:\n infile = io.read()\n print(infile)\n\n```\n"}, "ndx-events-record": {"ref": "ndx-events-record", "record_url": "https://github.com/nwb-extensions/ndx-events-record", "last_updated": "2022-11-15T06:37:13Z", "name": "ndx-events", "version": "0.2.0", "src": "https://github.com/rly/ndx-events", "pip": "https://pypi.org/project/ndx-events/", "license": "BSD", "maintainers": ["rly"], "readme": "# 
ndx-events Extension for NWB\n\nThis is an NWB extension for storing timestamped event data and TTL pulses.\n\nEvents can be:\n1. **Simple events**. These are stored in the `Events` type. The `Events` type consists of only a name, a description,\nand a 1D array of timestamps. This should be used instead of a `TimeSeries` when the time series has no data.\n2. **Labeled events**. These are stored in the `LabeledEvents` type. The `LabeledEvents` type expands on the `Events`\ntype by adding 1) a 1D array of integer values (data) with the same length as the timestamps and 2) a 1D array of\nlabels (labels) associated with each unique integer value in the data array. The data values are indices into the\narray of labels. The `LabeledEvents` type can be used to encode additional information about individual events,\nsuch as the reward values for each reward event.\n3. **TTL pulses**. These are stored in the `TTLs` type. The `TTLs` type is a subtype of the `LabeledEvents` type\nspecifically for TTL pulse data. A single instance should be used for all TTL pulse data. The pulse value (or channel)\nshould be stored in the 1D data array, and the labels associated with each pulse value (or channel)\nshould be stored in the 1D array of labels.\n4. **Annotated events**. These are stored in the `AnnotatedEventsTable` type. The `AnnotatedEventsTable` type is a\nsubtype of `DynamicTable`, where each row corresponds to a different event type. The table has a ragged\n(variable-length) 1D column of event times, such that each event type (row) is associated with an array of event times.\nUnlike for the other event types, users can add their own custom columns to annotate each event type or event time.\nThis can be useful for storing event metadata related to data preprocessing and analysis, such as marking bad events.\n\nThis extension was developed by Ryan Ly, Ben Dichter, Oliver R\u00fcbel, and Andrew Tritt. 
Information about the rationale,\nbackground, and alternative approaches to this extension can be found here:\nhttps://docs.google.com/document/d/1qcsjyFVX9oI_746RdMoDdmQPu940s0YtDjb1en1Xtdw\n\n## Installation\n\n```\npip install ndx-events\n```\n\n## Example usage\n\n```python\nfrom datetime import datetime\n\nfrom pynwb import NWBFile, NWBHDF5IO\nfrom ndx_events import LabeledEvents, AnnotatedEventsTable\n\n\nnwb = NWBFile(\n session_description='session description',\n identifier='cool_experiment_001',\n session_start_time=datetime.now().astimezone()\n)\n\n# create a new LabeledEvents type to hold events recorded from the data acquisition system\nevents = LabeledEvents(\n name='LabeledEvents',\n description='events from my experiment',\n timestamps=[0., 0.5, 0.6, 2., 2.05, 3., 3.5, 3.6, 4.],\n resolution=1e-5, # resolution of the timestamps, i.e., smallest possible difference between timestamps\n data=[0, 1, 2, 3, 5, 0, 1, 2, 4],\n labels=['trial_start', 'cue_onset', 'cue_offset', 'response_left', 'response_right', 'reward']\n)\n\n# add the LabeledEvents type to the acquisition group of the NWB file\nnwb.add_acquisition(events)\n\n# create a new AnnotatedEventsTable type to hold annotated events\nannotated_events = AnnotatedEventsTable(\n name='AnnotatedEventsTable',\n description='annotated events from my experiment',\n resolution=1e-5 # resolution of the timestamps, i.e., smallest possible difference between timestamps\n)\n# add a custom indexed (ragged) column to represent whether each event time was a bad event\nannotated_events.add_column(\n name='bad_event',\n description='whether each event time should be excluded',\n index=True\n)\n# add an event type (row) to the AnnotatedEventsTable instance\nannotated_events.add_event_type(\n label='Reward',\n event_description='Times when the subject received juice reward.',\n event_times=[1., 2., 3.],\n bad_event=[False, False, True],\n id=3\n)\n\n# create a processing module in the NWB file to hold processed events data\nevents_module = nwb.create_processing_module(\n name='events',\n description='processed event data'\n)\n\n# add the AnnotatedEventsTable instance to the processing module\nevents_module.add(annotated_events)\n\n# write nwb file\nfilename = 'test.nwb'\nwith NWBHDF5IO(filename, 'w') as io:\n io.write(nwb)\n\n# read nwb file and check its contents\nwith NWBHDF5IO(filename, 'r', load_namespaces=True) as io:\n nwb = io.read()\n print(nwb)\n```\n\nThis extension was created using [ndx-template](https://github.com/nwb-extensions/ndx-template).\n"}, "ndx-nirs-record": {"ref": "ndx-nirs-record", "record_url": "https://github.com/nwb-extensions/ndx-nirs-record", "last_updated": "2022-11-15T06:37:40Z", "name": "ndx-nirs", "version": "0.3.0", "src": "https://github.com/agencyenterprise/ndx-nirs", "pip": "https://pypi.org/project/ndx-nirs/", "license": "BSD 3-Clause", "maintainers": ["sumner15", "dsleiter", "ribeirojose"], "readme": "# ndx-nirs Extension for NWB\n\nThis is an [NWB](https://www.nwb.org/) extension for storing and sharing near-infrared spectroscopy (NIRS) data.\n\nIf you're new to NWB: \"Neurodata Without Borders (NWB) is a data standard for neurophysiology, providing neuroscientists with a common standard to share, archive, use, and build common analysis tools for neurophysiology data.\" ([source](https://www.nwb.org/nwb-neurophysiology/))\n\nThis extension defines the data specification for NIRS data in addition to providing a python API for reading and writing .nwb files containing data that follows this 
specification. The python package can be used with [pyNWB](https://github.com/NeurodataWithoutBorders/pynwb).\n\nThis extension has been officially accepted into the [Neurodata Extensions Catalog](https://nwb-extensions.github.io/) and can be found there along with other accepted extensions.\n\n## Introduction to NIRS\n\nNIRS uses near-infrared sources (from 780 nm to 2500 nm) to assess brain function by detecting changes in blood hemoglobin (Hb) concentrations. \n\nAs neural activity changes, blood volume and the concentration of hemoglobin in the local area change through the neurovascular coupling phenomenon. NIRS techniques require optical sources with two or more wavelengths in the near-infrared spectrum. One must have a wavelength above and one below the isosbestic point of 810 nm - the point at which deoxygenated hemoglobin (deoxy-Hb) and oxygenated hemoglobin (oxy-Hb) have identical absorption coefficients. Using the modified Beer-Lambert law (mBLL), NIRS techniques reveal changes in hemoglobin concentration. NIRS monitors hemoglobin levels through these optical absorption coefficients as a proxy for localized brain activity.\n\n## Purpose of the extension\n\nThe user-base of NIRS techniques continues to grow. In addition, NIRS techniques are often used in conjunction with other brain recording techniques (e.g. EEG) and/or use common stimuli or behavioral paradigms. The NWB NIRS extension provides a data standard for neuroscientists to share, archive, use, and build analysis tools for NIRS data. \n\nIntegration of NIRS into the NWB data standard affords all NIRS users interoperability with many of the data storage, processing, analysis, and visualization tools already integrated within NWB. \n\n## Modes of NIRS currently supported\n\nThis extension currently explicitly supports: \n\n1. Continuous Wave\n - see `NIRSDevice.nirs_mode` \n2. Frequency-Domain\n - see `NIRSDevice.nirs_mode` and `NIRSDevice.frequency`\n3. Time-Domain \n - see `NIRSDevice.nirs_mode`, `NIRSDevice.time_delay`, and `NIRSDevice.time_delay_width`\n4. Diffuse Correlation Spectroscopy\n - see `NIRSDevice.nirs_mode`, `NIRSDevice.correlation_time_delay`, and `NIRSDevice.correlation_time_delay_width`\n\nIn addition, it includes support for fluorescent versions of each of these techniques.\n - see `NIRSChannelsTable.emission_wavelength`\n\nOther NIRS modalities are supported implicitly. We acknowledge that NIRS is a fast-growing recording method with new modalities constantly under development. For this reason, it is possible to define other useful parameters using the `NIRSDevice.additional_parameters` field. Future versions of NWB NIRS will add native support for new NIRS modalities.\n\n## Related data standards \n\nThe NWB NIRS neurodata type was inspired by the [SNIRF](https://fnirs.org/resources/software/snirf/) data specification ([Github](https://github.com/fNIRS/snirf)). Many of the data fields can be directly mapped from SNIRF to NWB and vice-versa. 
We expect to release a SNIRF<->NWB conversion tool in the near future to improve compatibility between data standards and ease the burden of conversion on NIRS researchers.\n\n## NWB NIRS data architecture\n\nThe two principal neurodata types of this extension are ``NIRSDevice``, which extends the `Device` data type and holds information about the NIRS hardware and software configuration, and ``NIRSSeries``, which contains the timeseries data collected by the NIRS device.\n\n``NIRSSourcesTable``, ``NIRSDetectorsTable``, and ``NIRSChannelsTable`` are children of ``NIRSDevice`` which describe the source and detector layout as well as the wavelength-specific optical channels that are measured.\n\nEach row of ``NIRSChannelsTable`` represents a specific source and detector pair along with the source illumination wavelength (and optionally, in the case of fluorescent spectroscopy, the emission/detection wavelength). The channels in this table have a 1-to-1 correspondence with the data columns in ``NIRSSeries``.\n\n![ndx-nirs UML](https://github.com/agencyenterprise/ndx-nirs/raw/main/docs/source/images/ndx-nirs-uml.png)\n\n### Defined neurodata types\n\n1. ``NIRSSourcesTable`` stores rows for each optical source of a NIRS device. ``NIRSSourcesTable`` columns include:\n - ``label`` - the label of the source.\n - ``x``, ``y``, and ``z`` - the coordinates in meters of the optical source (``z`` is optional).\n\n2. ``NIRSDetectorsTable`` stores rows for each of the optical detectors of a NIRS device. ``NIRSDetectorsTable`` columns include:\n - ``label`` - the label of the detector.\n - ``x``, ``y``, and ``z`` - the coordinates in meters of the optical detector (``z`` is optional).\n\n3. ``NIRSChannelsTable`` stores rows for each physiological channel, which is defined by source-detector pairs, where sources & detectors are referenced via ``NIRSSourcesTable`` and ``NIRSDetectorsTable``. ``NIRSChannelsTable`` columns include:\n - ``label`` - the label of the channel.\n - ``source`` - a reference to the optical source in ``NIRSSourcesTable``.\n - ``detector`` - a reference to the optical detector in ``NIRSDetectorsTable``.\n - ``source_wavelength`` - the wavelength of light in nm emitted by the source for this channel.\n - ``emission_wavelength`` - the wavelength of light in nm emitted by the fluorophore (optional; only used for fluorescent spectroscopy).\n - ``source_power`` - the power of the source in mW used for this channel (optional).\n - ``detector_gain`` - the gain applied to the detector for this channel (optional).\n \n4. 
``NIRSDevice`` defines the NIRS device itself and includes the following required fields:\n - ``name`` - a unique name for the device.\n - ``description`` - a free-form text description of the device.\n - ``manufacturer`` - the name of the manufacturer of the device.\n - ``channels`` - a table of the optical channels available on this device (references ``NIRSChannelsTable``).\n - ``sources`` - the optical sources of this device (references ``NIRSSourcesTable``).\n - ``detectors`` - the optical detectors of this device (references ``NIRSDetectorsTable``).\n - ``nirs_mode`` - the mode of NIRS measurement performed with this device (e.g., 'continuous-wave', 'frequency-domain', etc.).\n \n ``NIRSDevice`` also includes several optional attributes to be used in parallel with specific ``nirs_mode`` values:\n - ``frequency`` - the modulation frequency in Hz for frequency domain NIRS (optional).\n - ``time_delay`` - the time delay in ns used for gated time domain NIRS (TD-NIRS) (optional).\n - ``time_delay_width`` - the time delay width in ns used for gated time domain NIRS (optional).\n - ``correlation_time_delay`` - the correlation time delay in ns for diffuse correlation spectroscopy NIRS (optional).\n - ``correlation_time_delay_width`` - the correlation time delay width in ns for diffuse correlation spectroscopy NIRS (optional).\n - ``additional_parameters`` - any additional parameters corresponding to the NIRS device/mode that are useful for interpreting the data (optional).\n\n5. ``NIRSSeries`` stores the actual timeseries data collected by the NIRS device and includes:\n - ``name`` - a unique name for the NIRS timeseries.\n - ``description`` - a description of the NIRS timeseries.\n - ``timestamps`` - the timestamps for each row of ``data`` in seconds.\n - ``channels`` - a ``DynamicTableRegion`` mapping to the appropriate channels in a ``NIRSChannelsTable``.\n - ``data`` - the actual numeric raw data measured by the NIRS system. 
It is a 2D array where the columns correspond to ``channels`` and the rows correspond to ``timestamps``.\n\n## Installation\n\nTo install from PyPI use pip:\n\n```\n$ pip install ndx-nirs\n```\n\nTo install after cloning the extension repo from github, execute the following from the root of the repo:\n\n```\n$ pip install .\n```\n\nFor development purposes, it might be useful to install in editable mode:\n\n```\n$ pip install -e .\n```\n\n## Usage\n\n```python\nfrom datetime import datetime\n\nimport numpy as np\n\nfrom hdmf.common import DynamicTableRegion\nfrom pynwb import NWBHDF5IO\nfrom pynwb.file import NWBFile, Subject\n\nfrom ndx_nirs import NIRSSourcesTable, NIRSDetectorsTable, NIRSChannelsTable, NIRSDevice, NIRSSeries\n\n\n##### create some example data to add to the NWB file #####\n\n# create NIRS source & detector labels\nsource_labels = [\"S1\", \"S2\"]\ndetector_labels = [\"D1\", \"D2\"]\n\n# create NIRS source & detector positions as a numpy array\n# with dims: [num sources/detectors rows x 2 columns (for x, y)]\nsource_pos = np.array([[-2.0, 0.0], [-4.0, 5.6]])\ndetector_pos = np.array([[0.0, 0.0], [-4.0, 1.0]])\n\n# create a list of source detector pairs (pairs of indices)\nsource_detector_pairs = [(0, 0), (0, 1), (1, 0), (1, 1)]\n\n\n##### create NWB file using the example data above #####\n\n# create a basic NWB file\nnwb = NWBFile(\n session_description=\"A NIRS test session\",\n identifier=\"nirs_test_001\",\n session_start_time=datetime.now().astimezone(),\n subject=Subject(subject_id=\"nirs_subj_01\"),\n)\n\n\n# create and populate a NIRSSourcesTable containing the\n# label and location of optical sources for the device\nsources = NIRSSourcesTable()\n# add source labels & positions row-by-row\nfor i_source in range(0, len(source_labels)):\n sources.add_row(\n label=source_labels[i_source],\n x=source_pos[i_source, 0],\n y=source_pos[i_source, 1],\n )\n\n\n# create and populate a NIRSDetectorsTable containing the\n# label and location of optical sources for the device\ndetectors = NIRSDetectorsTable()\n# add a row for each detector\nfor i_detector in range(0, len(detector_labels)):\n detectors.add_row(\n label=detector_labels[i_detector],\n x=detector_pos[i_detector, 0],\n y=detector_pos[i_detector, 1],\n ) # z-coordinate is optional\n\n\n# create a NIRSChannelsTable which defines the channels\n# between the provided sources and detectors\nchannels = NIRSChannelsTable(sources=sources, detectors=detectors)\n# each channel is composed of a single source, a single detector, and the wavelength\n# most source-detector pairs will use two separate wavelengths, and have two channels\nfor i_source, i_detector in source_detector_pairs:\n for wavelength in [690.0, 830.0]:\n # for the source and detector parameters, pass in the index of\n # the desired source (detector) in the sources (detectors) table\n channels.add_row(\n label=f\"{source_labels[i_source]}.{detector_labels[i_detector]}.{wavelength:0.0f}nm\",\n source=i_source,\n detector=i_detector,\n source_wavelength=wavelength,\n )\n\n\n# create a NIRSDevice which contains all of the information\n# about the device configuration and arrangement\ndevice = NIRSDevice(\n name=\"nirs_device\",\n description=\"world's best fNIRS device\",\n manufacturer=\"skynet\",\n nirs_mode=\"time-domain\",\n channels=channels,\n sources=sources,\n detectors=detectors,\n # depending on which nirs_mode is selected, additional parameter values should be\n # included. 
these two parameters are included because we are using time-domain NIRS\n time_delay=1.5, # in ns\n time_delay_width=0.1, # in ns\n # specialized NIRS hardware may require additional parameters that can be defined\n # using the `additional_parameters` field:\n additional_parameters=\"flux_capacitor_gain = 9000; speaker_volume = 11;\",\n)\n# add the device to the NWB file\nnwb.add_device(device)\n\n\n# create a NIRSSeries timeseries containing raw NIRS data\nnirs_series = NIRSSeries(\n name=\"nirs_data\",\n description=\"The raw NIRS channel data\",\n timestamps=np.arange(0, 10, 0.01), # in seconds\n # reference only the channels associated with this series\n channels=DynamicTableRegion(\n name=\"channels\",\n description=\"an ordered map to the channels in this NIRS series\",\n table=channels,\n data=channels.id[:],\n ),\n data=np.random.rand(1000, 8), # shape: (num timesteps, num channels)\n unit=\"V\",\n)\n# add the series to the NWB file\nnwb.add_acquisition(nirs_series)\n\n\n# Write our test file\nfilename = \"test_nirs_file.nwb\"\nwith NWBHDF5IO(filename, \"w\") as io:\n io.write(nwb)\n\n# Read the data back in\nwith NWBHDF5IO(filename, \"r\", load_namespaces=True) as io:\n nwb = io.read()\n print(nwb)\n print(nwb.devices[\"nirs_device\"])\n print(nwb.acquisition[\"nirs_data\"])\n```\n\nThis extension was created using [ndx-template](https://github.com/nwb-extensions/ndx-template).\n"}, "ndx-hierarchical-behavioral-data-record": {"ref": "ndx-hierarchical-behavioral-data-record", "record_url": "https://github.com/nwb-extensions/ndx-hierarchical-behavioral-data-record", "last_updated": "2022-11-15T06:20:55Z", "name": "ndx-hierarchical-behavioral-data", "version": "0.1.1", "src": "https://github.com/catalystneuro/ndx-hierarchical-behavioral-data", "pip": "https://pypi.org/project/ndx-hierarchical-behavioral-data/", "license": "BSD", "maintainers": ["bendichter", "luiztauffer"], "readme": "# ndx-hierarchical-behavioral-data Extension for NWB\n\n[![PyPI version](https://badge.fury.io/py/ndx-hierarchical-behavioral-data.svg)](https://badge.fury.io/py/ndx-hierarchical-behavioral-data)\n\n![schema schema](https://github.com/catalystneuro/ndx-hierarchical-behavioral-data/blob/master/docs/media/hierarchical_behavioral_data.png?raw=true)\n\n## Installation\n\n```\npip install ndx-hierarchical-behavioral-data\n```\n\n## Usage\nUse pre-made hierarchical transcription tables:\n\n```python\nfrom ndx_hierarchical_behavioral_data.definitions.transcription import TIPhonemes, HBTSyllables, HBTWords, HBTSentences\n\n# Phonemes level\nphonemes = TIPhonemes()\nphonemes.add_column('max_pitch', 'maximum pitch for this phoneme. 
NaN for unvoiced')\nfor i, p in enumerate('abcdefghijkl'):\n phonemes.add_interval(label=p, start_time=float(i), stop_time=float(i+1), max_pitch=i**2)\n\n# Syllables level\nsyllables = HBTSyllables(lower_tier_table=phonemes)\nsyllables.add_interval(label='abc', next_tier=[0, 1, 2])\nsyllables.add_interval(label='def', next_tier=[3, 4, 5])\nsyllables.add_interval(label='ghi', next_tier=[6, 7, 8])\nsyllables.add_interval(label='jkl', next_tier=[9, 10, 11])\n\n# Words level\nwords = HBTWords(lower_tier_table=syllables)\nwords.add_column('emphasis', 'boolean indicating whether this word was emphasized')\nwords.add_interval(label='A-F', next_tier=[0, 1], emphasis=False)\nwords.add_interval(label='G-L', next_tier=[2, 3], emphasis=True)\n\n# Sentences level\nsentences = HBTSentences(lower_tier_table=words)\nsentences.add_interval(label='A-L', next_tier=[0, 1])\n```\n\nView individual tiers:\n\n```python\nsentences.to_dataframe()\n```\n
label start_time stop_time next_tier
id
0 A-L 0.0 12.0 label start_time stop_time \\\\ id ...
\n\n\n```python\nwords.to_dataframe()\n```\n\n
label start_time stop_time next_tier emphasis
id
0 A-F 0.0 6.0 label start_time stop_time \\\\ id 0 abc 0.0 3.0 1 def 3.0 6.0 next_tier id 0 start_time stop_time label max_pitch id 0 0.0 1.0 a 0 1 1.0 2.0 b 1 2 2.0 3.0 c 4 1 start_time stop_time label max_pitch id 3 3.0 4.0 d 9 4 4.0 5.0 e 16 5 5.0 6.0 f 25 False
1 G-L 6.0 12.0 label start_time stop_time \\\\ id 2 ghi 6.0 9.0 3 jkl 9.0 12.0 next_tier id 2 start_time stop_time label max_pitch id 6 6.0 7.0 g 36 7 7.0 8.0 h 49 8 8.0 9.0 i 64 3 start_time stop_time label max_pitch id 9 9.0 10.0 j 81 10 10.0 11.0 k 100 11 11.0 12.0 l 121 True
\n\n```python\nsyllables.to_dataframe()\n```\n\n
label start_time stop_time next_tier
id
0 abc 0.0 3.0 start_time stop_time label id 0 0.0 1.0 a 1 1.0 2.0 b 2 2.0 3.0 c
1 def 3.0 6.0 start_time stop_time label id 3 3.0 4.0 d 4 4.0 5.0 e 5 5.0 6.0 f
2 ghi 6.0 9.0 start_time stop_time label id 6 6.0 7.0 g 7 7.0 8.0 h 8 8.0 9.0 i
3 jkl 9.0 12.0 start_time stop_time label id 9 9.0 10.0 j 10 10.0 11.0 k 11 11.0 12.0 l
\n\n```python\nphonemes.to_dataframe()\n```\n\n
start_time stop_time label max_pitch
id
0 0.0 1.0 a 0
1 1.0 2.0 b 1
2 2.0 3.0 c 4
3 3.0 4.0 d 9
4 4.0 5.0 e 16
5 5.0 6.0 f 25
6 6.0 7.0 g 36
7 7.0 8.0 h 49
8 8.0 9.0 i 64
9 9.0 10.0 j 81
10 10.0 11.0 k 100
11 11.0 12.0 l 121
\n\n\nHierarchical dataframe:\n```python\nsentences.to_hierarchical_dataframe()\n```\n
source_table phonemes
label id start_time stop_time label max_pitch
sentences_id sentences_label sentences_start_time sentences_stop_time words_id words_label words_start_time words_stop_time words_emphasis syllables_id syllables_label syllables_start_time syllables_stop_time
0 A-L 0.0 12.0 0 A-F 0.0 6.0 False 0 abc 0.0 3.0 0 0.0 1.0 a 0
3.0 1 1.0 2.0 b 1
3.0 2 2.0 3.0 c 4
1 def 3.0 6.0 3 3.0 4.0 d 9
6.0 4 4.0 5.0 e 16
6.0 5 5.0 6.0 f 25
1 G-L 6.0 12.0 True 2 ghi 6.0 9.0 6 6.0 7.0 g 36
9.0 7 7.0 8.0 h 49
9.0 8 8.0 9.0 i 64
3 jkl 9.0 12.0 9 9.0 10.0 j 81
12.0 10 10.0 11.0 k 100
12.0 11 11.0 12.0 l 121
\n\n\nHierachical columns, flattened rows:\n\n```python\nsentences.to_hierarchical_dataframe(flat_column_index=True)\n```\n\n
id start_time stop_time label max_pitch
sentences_id sentences_label sentences_start_time sentences_stop_time words_id words_label words_start_time words_stop_time words_emphasis syllables_id syllables_label syllables_start_time syllables_stop_time
0 A-L 0.0 12.0 0 A-F 0.0 6.0 False 0 abc 0.0 3.0 0 0.0 1.0 a 0
3.0 1 1.0 2.0 b 1
3.0 2 2.0 3.0 c 4
1 def 3.0 6.0 3 3.0 4.0 d 9
6.0 4 4.0 5.0 e 16
6.0 5 5.0 6.0 f 25
1 G-L 6.0 12.0 True 2 ghi 6.0 9.0 6 6.0 7.0 g 36
9.0 7 7.0 8.0 h 49
9.0 8 8.0 9.0 i 64
3 jkl 9.0 12.0 9 9.0 10.0 j 81
12.0 10 10.0 11.0 k 100
12.0 11 11.0 12.0 l 121
\n\nDenormalized dataframe:\n```python\nsentences.to_denormalized_dataframe()\n```\n\n
source_table sentences words syllables phonemes
label id label start_time stop_time id label start_time stop_time emphasis id label start_time stop_time id start_time stop_time label max_pitch
0 0 A-L 0.0 12.0 0 A-F 0.0 6.0 False 0 abc 0.0 3.0 0 0.0 1.0 a 0
1 0 A-L 0.0 12.0 0 A-F 0.0 6.0 False 0 abc 0.0 3.0 1 1.0 2.0 b 1
2 0 A-L 0.0 12.0 0 A-F 0.0 6.0 False 0 abc 0.0 3.0 2 2.0 3.0 c 4
3 0 A-L 0.0 12.0 0 A-F 0.0 6.0 False 1 def 3.0 6.0 3 3.0 4.0 d 9
4 0 A-L 0.0 12.0 0 A-F 0.0 6.0 False 1 def 3.0 6.0 4 4.0 5.0 e 16
5 0 A-L 0.0 12.0 0 A-F 0.0 6.0 False 1 def 3.0 6.0 5 5.0 6.0 f 25
6 0 A-L 0.0 12.0 1 G-L 6.0 12.0 True 2 ghi 6.0 9.0 6 6.0 7.0 g 36
7 0 A-L 0.0 12.0 1 G-L 6.0 12.0 True 2 ghi 6.0 9.0 7 7.0 8.0 h 49
8 0 A-L 0.0 12.0 1 G-L 6.0 12.0 True 2 ghi 6.0 9.0 8 8.0 9.0 i 64
9 0 A-L 0.0 12.0 1 G-L 6.0 12.0 True 3 jkl 9.0 12.0 9 9.0 10.0 j 81
10 0 A-L 0.0 12.0 1 G-L 6.0 12.0 True 3 jkl 9.0 12.0 10 10.0 11.0 k 100
11 0 A-L 0.0 12.0 1 G-L 6.0 12.0 True 3 jkl 9.0 12.0 11 11.0 12.0 l 121
\n\nDenormalized dataframe with flattened columns:\n```python\nsentences.to_denormalized_dataframe(flat_column_index=True)\n```\n\n
sentences_id sentences_label sentences_start_time sentences_stop_time words_id words_label words_start_time words_stop_time words_emphasis syllables_id syllables_label syllables_start_time syllables_stop_time id start_time stop_time label max_pitch
0 0 A-L 0.0 12.0 0 A-F 0.0 6.0 False 0 abc 0.0 3.0 0 0.0 1.0 a 0
1 0 A-L 0.0 12.0 0 A-F 0.0 6.0 False 0 abc 0.0 3.0 1 1.0 2.0 b 1
2 0 A-L 0.0 12.0 0 A-F 0.0 6.0 False 0 abc 0.0 3.0 2 2.0 3.0 c 4
3 0 A-L 0.0 12.0 0 A-F 0.0 6.0 False 1 def 3.0 6.0 3 3.0 4.0 d 9
4 0 A-L 0.0 12.0 0 A-F 0.0 6.0 False 1 def 3.0 6.0 4 4.0 5.0 e 16
5 0 A-L 0.0 12.0 0 A-F 0.0 6.0 False 1 def 3.0 6.0 5 5.0 6.0 f 25
6 0 A-L 0.0 12.0 1 G-L 6.0 12.0 True 2 ghi 6.0 9.0 6 6.0 7.0 g 36
7 0 A-L 0.0 12.0 1 G-L 6.0 12.0 True 2 ghi 6.0 9.0 7 7.0 8.0 h 49
8 0 A-L 0.0 12.0 1 G-L 6.0 12.0 True 2 ghi 6.0 9.0 8 8.0 9.0 i 64
9 0 A-L 0.0 12.0 1 G-L 6.0 12.0 True 3 jkl 9.0 12.0 9 9.0 10.0 j 81
10 0 A-L 0.0 12.0 1 G-L 6.0 12.0 True 3 jkl 9.0 12.0 10 10.0 11.0 k 100
11 0 A-L 0.0 12.0 1 G-L 6.0 12.0 True 3 jkl 9.0 12.0 11 11.0 12.0 l 121
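Note: a minimal write-to-disk sketch follows; it is not part of this README. It assumes the standard PyNWB workflow, and the module name `behavior`, the file name `transcription.nwb`, and the session metadata are illustrative. The `phonemes`, `syllables`, `words`, and `sentences` tables are the ones built above.

```python
from datetime import datetime

from pynwb import NWBFile, NWBHDF5IO

# illustrative session metadata (hypothetical values, not from the README)
nwbfile = NWBFile(
    session_description='hierarchical transcription demo',
    identifier='transcription_001',
    session_start_time=datetime.now().astimezone(),
)

# store every tier in one processing module so the cross-table
# references between tiers can be resolved when the file is read back
behavior_module = nwbfile.create_processing_module(
    name='behavior', description='hierarchical transcription tables'
)
for table in (phonemes, syllables, words, sentences):
    behavior_module.add(table)

with NWBHDF5IO('transcription.nwb', 'w') as io:
    io.write(nwbfile)
```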
\n\n\n\nThis extension was created using [ndx-template](https://github.com/nwb-extensions/ndx-template).\n"}, "ndx-sound-record": {"ref": "ndx-sound-record", "record_url": "https://github.com/nwb-extensions/ndx-sound-record", "last_updated": "2022-11-15T07:17:28Z", "name": "ndx-sound", "version": "0.2.0", "src": "https://github.com/catalystneuro/ndx-sound/", "pip": "https://pypi.org/project/ndx-sound/", "license": "BSD", "maintainers": ["weiglszonja", "bendichter"], "readme": "![PyPI](https://img.shields.io/pypi/v/ndx-sound?color=blue)\n\n# ndx-sound Extension for NWB\n\nNWB extension for sounds.\n\n## Installation\n\n```shell\npip install ndx-sound\n```\n\n## Usage\n\n## Python\n\n### Add to an NWB file\n```python\nfrom pynwb import NWBFile\nfrom scipy.io import wavfile\n\nfrom ndx_sound import AcousticWaveformSeries\n\n# The file path to the audio file\nfile_path = \"audio_data.wav\"\n\n# Read the audio file to get the rate of the recording and the waveform\nsampling_rate, samples = wavfile.read(file_path)\n\n# Create an AcousticWaveformSeries object with a given name and description\nacoustic_waveform_series = AcousticWaveformSeries(\n name=\"acoustic_stimulus\",\n data=samples,\n rate=sampling_rate,\n description=\"acoustic stimulus\",\n)\n\n# Create an NWBFile object where this AcousticWaveformSeries can be added to\nnwbfile = NWBFile(\n session_description=...,\n identifier=...,\n session_start_time=...,\n)\n\n# If a recording of behavior, add to acquisition\nnwbfile.add_acquisition(acoustic_waveform_series)\n\n# If a stimulus, add to stimulus\nnwbfile.add_stimulus(acoustic_waveform_series)\n```\n\n### Visualization\n\n#### Static widgets\nUse `plot_sound` to visualize the waveform series and the spectrogram.\nFor longer recordings, specify the `time_window` argument for the start and end time\nof the recording to be shown.\n```python\nfrom ndx_sound.widgets import plot_sound\n\nplot_sound(nwbfile.stimulus[\"acoustic_stimulus\"])\n\n# Show only from 5 to 15 seconds\nplot_sound(nwbfile.stimulus[\"acoustic_stimulus\"], time_window=(5, 15))\n```\n\n![](https://github.com/catalystneuro/ndx-sound/blob/main/ndx_sound_plot_timewindow.png)\n\nUse `acoustic_waveform_widget` to include an Audio element that plays the sound.\n\n```python\nfrom ndx_sound.widgets import acoustic_waveform_widget\n\nacoustic_waveform_widget(nwbfile.stimulus[\"acoustic_stimulus\"], time_window=(5, 15))\n```\n\n![](https://github.com/catalystneuro/ndx-sound/blob/main/acoustic_waveform_widget_timewindow.png)\n\n#### Interactive widgets\nUse `AcousticWaveformWidget` to use a slider for interactively scrolling through the\nrecording and a button for changing the duration of the sound that is being shown.\n\n```python\nfrom ndx_sound.widgets import AcousticWaveformWidget\n\nAcousticWaveformWidget(nwbfile.stimulus[\"acoustic_stimulus\"])\n```\n\n![](https://github.com/catalystneuro/ndx-sound/blob/main/interactive_widget.png)\n\n### nwbwidgets\nUse `load_widgets` to load the interactive sound widget into `nwb2widget`.\n\n```python\nfrom ndx_sound.widgets import load_widgets\nfrom nwbwidgets import nwb2widget\n\nload_widgets()\n\nnwb2widget(nwbfile)\n```\n\n![](https://github.com/catalystneuro/ndx-sound/blob/main/ndx_sound_in_nwbwidgets.png)\n\n#### nwbwidgets and HDF5IO\nWhen using `nwb2widget` with an NWB file that is read from disk, make sure to have\n`load_widgets` imported within the same Jupyter cell where your data is being loaded.\n\n```python\nfrom pynwb import NWBHDF5IO\nfrom ndx_sound.widgets import 
load_widgets\nfrom nwbwidgets import nwb2widget\n\nload_widgets()\n\n\nio = NWBHDF5IO(\"audio.nwb\", mode=\"r\", load_namespaces=True)\nnwbfile = io.read()\nnwb2widget(nwbfile)\n```\n\n---\nThis extension was created using [ndx-template](https://github.com/nwb-extensions/ndx-template).\n"}, "ndx-extract-record": {"ref": "ndx-extract-record", "record_url": "https://github.com/nwb-extensions/ndx-extract-record", "last_updated": "2022-11-15T07:23:53Z", "name": "ndx-extract", "version": "0.2.0", "src": "https://github.com/catalystneuro/ndx-extract", "pip": "https://pypi.org/project/ndx-extract/0.2.0/", "license": "BSD", "maintainers": ["bendichter", "weiglszonja"], "readme": "# ndx-extract Extension for NWB\n\nAuthor: Cesar Echavarria\n\nThis extension allows for the storage of configuration options used by the [EXTRACT](https://github.com/schnitzer-lab/EXTRACT-public) tool for calcium imaging.\n\n\n## Usage\n\n\n### Python\nInstall the extension from [PyPI](https://pypi.org/project/ndx-extract/)\n```shell\npip install ndx-extract\n```\nUsage:\n```python\nfrom datetime import datetime\nfrom ndx_extract import EXTRACTSegmentation\nfrom pynwb import NWBFile, NWBHDF5IO\n\n# Create the NWBfile\nnwbfile = NWBFile(\n session_description=\"The mouse in open exploration.\",\n identifier=\"Mouse5_Day3\",\n session_start_time=datetime.now().astimezone(),\n)\n# Create the processing module\nophys_module = nwbfile.create_processing_module(\n name=\"ophys\",\n description=\"optical physiology processed data\",\n)\n# Create the segmentation object and define the configuration properties\n# The properties that can be defined are listed at spec/ndx-EXTRACT.extensions.yaml\nimage_segmentation = EXTRACTSegmentation(\n name=\"ImageSegmentation\",\n version=\"1.1.0\",\n preprocess=True,\n trace_output_option=\"nonneg\",\n)\n# Add this image segmentation to the processing module\nophys_module.add(image_segmentation)\n\n# Writing the NWB file\nwith NWBHDF5IO(\"image_segmentation.nwb\", mode=\"w\") as io:\n io.write(nwbfile)\n\n# Reading the NWB file and accessing the segmentation parameters\nwith NWBHDF5IO(\"image_segmentation.nwb\", mode=\"r\") as io:\n nwbfile_in = io.read()\n nwbfile_in.processing[\"ophys\"].data_interfaces[\"ImageSegmentation\"].version\n nwbfile_in.processing[\"ophys\"].data_interfaces[\"ImageSegmentation\"].preprocess\n nwbfile_in.processing[\"ophys\"].data_interfaces[\"ImageSegmentation\"].trace_output_option\n```\n\nRunning the tests:\n```shell\n python -m unittest src/pynwb/tests/test_extract.py\n```\n\n### MATLAB\ninstall:\n```matlab\ngenerateExtension('/path/to/ndx-extract/spec/ndx-extract.namespace.yaml');\n```\n\nwrite:\n```matlab\n% define NWB file\nnwb = NwbFile( ...\n 'session_description', 'mouse in open exploration', ...\n 'identifier', 'Mouse5_Day3', ...\n 'session_start_time', datetime(2018, 4, 25, 2, 30, 3) ...\n);\n% define processing module\nophys_module = types.core.ProcessingModule( ...\n 'description', 'test processing module' ...\n);\nnwb.processing.set('ophys', ophys_module);\n% define segmentation\nimg_seg = types.ndx_extract.EXTRACTSegmentation();\n% set segmentation properties\nimg_seg.trace_output_option = 'nonneg';\nimg_seg.save_all_found = false;\nimg_seg.dendrite_aware = false;\nimg_seg.adaptive_kappa = false;\nimg_seg.use_sparse_arrays = false;\nimg_seg.dendrite_aware = 0;\nimg_seg.hyperparameter_tuning_flag = false;\nimg_seg.remove_duplicate_cells = false;\nimg_seg.max_iter = 6;\nimg_seg.S_init = rand(100,10);\nimg_seg.T_init = rand(100,10);\nimg_seg.preprocess 
= true;\nimg_seg.fix_zero_FOV_strips = false;\nimg_seg.medfilt_outlier_pixels = false;\nimg_seg.skip_dff = false;\nimg_seg.baseline_quantile = .4;\nimg_seg.skip_highpass = false;\nimg_seg.spatial_highpass_cutoff = 0;\nimg_seg.temporal_denoising = false;\nimg_seg.remove_background = true;\nimg_seg.cellfind_filter_type = 'butter';\nimg_seg.spatial_lowpass_cutoff = 2;\nimg_seg.moving_radius = 3;\nimg_seg.cellfind_min_snr = 1;\nimg_seg.cellfind_max_steps = 1000;\nimg_seg.cellfind_kappa_std_ratio = 1;\nimg_seg.init_with_gaussian = false;\nimg_seg.kappa_std_ratio = 1;\nimg_seg.downsample_time_by = 'auto';\nimg_seg.downsample_space_by = 'auto';\nimg_seg.min_radius_after_downsampling = 5;\nimg_seg.min_tau_after_downsampling = 5;\nimg_seg.reestimate_S_if_downsampled = false;\nimg_seg.reestimate_T_if_downsampled = true;\nimg_seg.crop_circular = false;\nimg_seg.movie_mask = randi(2,100,100)-1;\nimg_seg.smoothing_ratio_x2y = 0;\nimg_seg.compact_output = true;\nimg_seg.cellfind_numpix_threshold = 9;\nimg_seg.high2low_brightness_ratio = Inf;\nimg_seg.l1_penalty_factor = 0;\nimg_seg.T_lower_snr_threshold = 10;\nimg_seg.smooth_T = false;\nimg_seg.smooth_S = true;\nimg_seg.max_iter_S = 100;\nimg_seg.max_iter_T = 100;\nimg_seg.TOL_sub = 1.0000e-06;\nimg_seg.TOL_main = 0.0100;\nimg_seg.avg_cell_radius = 0;\nimg_seg.T_min_snr = 10;\nimg_seg.size_lower_limit = .1000;\nimg_seg.size_upper_limit = 10;\nimg_seg.temporal_corrupt_thresh = 0.7000;\nimg_seg.spatial_corrupt_thresh = 0.7000;\nimg_seg.eccent_thresh = 6;\nimg_seg.low_ST_index_thresh = 0.0100;\nimg_seg.low_ST_corr_thresh = 0;\nimg_seg.S_dup_corr_thresh = 0.9500;\nimg_seg.T_dup_corr_thresh = 0.9500;\nimg_seg.confidence_thresh = 0.8000;\nimg_seg.high_ST_index_thresh = 0.8000;\nophys_module.nwbdatainterface.set('ImgSegmentation', img_seg);\nnwbExport(nwb, 'test_123.nwb');\n```\n\nrun tests:\n```matlab\ncd /path/to/ndx-extract/src/matnwb/tests\nresults = test_ndx_extract()\n```\n"}, "ndx-photometry-record": {"ref": "ndx-photometry-record", "record_url": "https://github.com/nwb-extensions/ndx-photometry-record", "last_updated": "2022-12-01T18:03:38Z", "name": "ndx-photometry", "version": "0.1.0", "src": "https://github.com/catalystneuro/ndx-photometry", "pip": "https://pypi.org/project/ndx-photometry/", "license": "BSD", "maintainers": ["bendichter"], "readme": "# ndx-photometry Extension for NWB\n[![Build Status](https://travis-ci.com/akshay-jaggi/ndx-photometry.svg?branch=master)](https://travis-ci.com/akshay-jaggi/ndx-photometry)\n[![Documentation Status](https://readthedocs.org/projects/ndx-photometry/badge/?version=latest)](https://ndx-photometry.readthedocs.io/en/latest/?badge=latest)\n\n![NWB - Photometry](https://user-images.githubusercontent.com/844306/144680873-3e2d957f-97ff-45cb-b625-517f5e7dfb9f.png)\n\n## Introduction\nThis is an NWB extension for storing photometry recordings and associated metadata. This extension stores photometry information across three folders in the NWB file: acquisition, processing, and general. The acquisiton folder contains an ROIResponseSeries (inherited from `pynwb.ophys`), which references rows of a FibersTable rather than 2 Photon ROIs. The new types for this extension are in metadata and processing\n\n### Metadata\n1. `FibersTable` stores rows for each fiber with information about the location, excitation, source, photodetector, fluorophore, and more (associated with each fiber). \n2. 
`ExcitationSourcesTable` stores rows for each excitation source with information about the peak wavelength, source type, and the commanded voltage series of type `CommandedVoltageSeries`\n3. `PhotodectorsTable` stores rows for each photodetector with information about the peak wavelength, type, etc. \n4. `FluorophoresTable` stores rows for each fluorophore with information about the fluorophore itself and the injeciton site. \n\n### Processing\n1. `DeconvoledROIResponseSeries` stores DfOverF and Fluorescence traces and extends `ROIResponseSeries` to contain information about the deconvolutional and downsampling procedures performed.\n\n\nThis extension was developed by Akshay Jaggi, Ben Dichter, and Ryan Ly. \n\n\n## Installation\n\n```\npip install ndx-photometry\n```\n\n\n## Usage\n\n```python\nimport datetime\nimport numpy as np\n\nfrom pynwb import NWBHDF5IO, NWBFile\nfrom pynwb.core import DynamicTableRegion\nfrom pynwb.ophys import RoiResponseSeries\nfrom ndx_photometry import (\n FibersTable,\n PhotodetectorsTable,\n ExcitationSourcesTable,\n DeconvolvedRoiResponseSeries,\n MultiCommandedVoltage,\n FiberPhotometry,\n FluorophoresTable\n)\n\n\nnwbfile = NWBFile(\n session_description=\"session_description\",\n identifier=\"identifier\",\n session_start_time=datetime.datetime.now(datetime.timezone.utc),\n)\n\n# In the next ten calls or so, we'll set up the metadata from the bottom of the metadata tree up\n# You can follow along here: \n\n# Create a commanded voltage container, this can store one or more commanded voltage series\nmulti_commanded_voltage = MultiCommandedVoltage(\n name=\"MyMultiCommandedVoltage\",\n)\n\n# Add a commanded voltage series to this container\ncommandedvoltage_series = (\n multi_commanded_voltage.create_commanded_voltage_series(\n name=\"commanded_voltage\",\n data=[1.0, 2.0, 3.0],\n frequency=30.0,\n power=500.0,\n rate=30.0,\n )\n)\n\n# Create an excitation sources table\nexcitationsources_table = ExcitationSourcesTable(\n description=\"excitation sources table\"\n)\n\n# Add one row to the table per excitation source\n# You can repeat this in a for-loop for many sources\nexcitationsources_table.add_row(\n peak_wavelength=700.0,\n source_type=\"laser\",\n commanded_voltage=commandedvoltage_series,\n)\n\nphotodetectors_table = PhotodetectorsTable(\n description=\"photodetectors table\"\n)\n\n# Add one row to the table per photodetector\nphotodetectors_table.add_row(\n peak_wavelength=500.0, \n type=\"PMT\", \n gain=100.0\n)\n\n\nfluorophores_table = FluorophoresTable(\n description='fluorophores'\n)\n\nfluorophores_table.add_row(\n label='dlight',\n location='VTA',\n coordinates=(3.0,2.0,1.0)\n)\n\nfibers_table = FibersTable(\n description=\"fibers table\"\n)\n\n# Here we add the metadata tables to the metadata section\nnwbfile.add_lab_meta_data(\n FiberPhotometry(\n fibers=fibers_table,\n excitation_sources=excitationsources_table,\n photodetectors=photodetectors_table,\n fluorophores=fluorophores_table\n )\n)\n\n# Important: we add the fibers to the fibers table _after_ adding the metadata\n# This ensures that we can find this data in their tables of origin\nfibers_table.add_fiber(\n excitation_source=0, #integers indicated rows of excitation sources table\n photodetector=0,\n fluorophores=[0], #potentially multiple fluorophores, so list of indices\n location='my location',\n notes='notes'\n)\n\n# Here we set up a list of fibers that our recording came from\nfibers_ref = DynamicTableRegion(\n name=\"rois\", \n data=[0], # potentially multiple fibers\n 
description=\"source fibers\", \n table=fibers_table\n)\n\n# Create a raw roiresponseseries, this is your main acquisition\nroi_response_series = RoiResponseSeries(\n name=\"roi_response_series\",\n description=\"my roi response series\",\n data=np.random.randn(100, 1),\n unit='F',\n rate=30.0,\n rois=fibers_ref,\n)\n\n# This is your processed data \ndeconv_roi_response_series = DeconvolvedRoiResponseSeries(\n name=\"DeconvolvedRoiResponseSeries\",\n description=\"my roi response series\",\n data=np.random.randn(100, 1),\n unit='F',\n rate=30.0,\n rois=fibers_ref,\n raw=roi_response_series,\n)\n\nophys_module = nwbfile.create_processing_module(\n name=\"ophys\", description=\"fiber photometry\"\n)\n\nophys_module.add(multi_commanded_voltage)\nnwbfile.add_acquisition(roi_response_series)\nophys_module.add(deconv_roi_response_series)\n\n# write nwb file\nfilename = 'test.nwb'\nwith NWBHDF5IO(filename, 'w') as io:\n io.write(nwbfile)\n \n# read nwb file and check its contents\nwith NWBHDF5IO(filename, 'r', load_namespaces=True) as io:\n nwbfile = io.read()\n # Access and print information about the acquisition\n print(nwbfile.acquisition[\"roi_response_series\"])\n # Access and print information about the processed data\n print(nwbfile.processing['ophys'][\"DeconvolvedRoiResponseSeries\"])\n # Access and print all of the metadata\n print(nwbfile.lab_meta_data)\n```\n\nThis extension was created using [ndx-template](https://github.com/nwb-extensions/ndx-template).\n"}, "ndx-acquisition-module-record": {"ref": "ndx-acquisition-module-record", "record_url": "https://github.com/nwb-extensions/ndx-acquisition-module-record", "last_updated": "2023-07-24T14:18:13Z", "name": "ndx-acquisition-module", "version": "0.1.2", "src": "https://gitlab.com/fleischmann-lab/ndx/ndx-acquisition-module/", "pip": "https://pypi.org/project/ndx-acquisition-module/", "license": "BSD-3", "maintainers": ["tuanpham96"], "readme": "# ndx-acquisition-module\n\n[![pipeline status](https://img.shields.io/gitlab/pipeline-status/fleischmann-lab/ndx/ndx-acquisition-module?branch=main&label=pipeline&style=flat-square)](https://gitlab.com/fleischmann-lab/ndx/ndx-acquisition-module/-/commits/main)\n[![license](https://img.shields.io/gitlab/license/fleischmann-lab/ndx/ndx-acquisition-module?color=yellow&label=license&style=flat-square)](LICENSE.txt)\n\n\n![python version](https://img.shields.io/pypi/pyversions/ndx-acquisition-module?style=flat-square)\n[![release](https://img.shields.io/gitlab/v/release/fleischmann-lab/ndx/ndx-acquisition-module?label=release&sort=date&style=flat-square)](https://gitlab.com/fleischmann-lab/ndx/ndx-acquisition-module/-/releases)\n[![pypi package](https://img.shields.io/pypi/v/ndx-acquisition-module?label=pypi%20package&style=flat-square&color=blue)](https://pypi.org/pypi/ndx-acquisition-module)\n[![conda package](https://img.shields.io/conda/v/fleischmannlab/ndx-acquisition-module?color=green&style=flat-square)](https://anaconda.org/FleischmannLab/ndx-acquisition-module)\n\nThis extension is used to allow adding modules in `nwbfile.acquisition`, similarly to how `nwbfile.processing` allows adding modules.\n\nMore specifically, this allows creating a module that has `TimeSeries` and `DynamicTable` objects, then users can add this module.\n\nThis is in alpha development stages. 
Please use with discretion.\n\n## Installation\n\nYou can install via `pip`:\n\n```bash\npip install ndx-acquisition-module\n```\n\nOr `conda`:\n\n```bash\nconda install -c fleischmannlab ndx-acquisition-module\n```\n\nOr directly from the `git` repository:\n\n```bash\npip install git+https://gitlab.com/fleischmann-lab/ndx/ndx-acquisition-module\n```\n\n## Usage\n\n### Main usage\n\nHere's a short example to create the module, add objects into it then add to acquisition\n\n```python\nfrom ndx_acquisition_module import AcquisitionModule\n\nmod = AcquisitionModule(name=\"raw_mod\", description=\"raw acq module\")\n\n# Add data objects to created AcquisitionModule\nmod.add(time_series) # add time series\nmod.add(dynamic_table) # add dynamic table\n\n# Add AcquisitionModule to nwbfile.acquisition\nnwbfile.add_acquisition(mod)\n```\n\n### Full example\n\nHere's a full example that you can copy and paste in a script/notebook and run. A `test.nwb` file would be created.\n\n
Expand to see full example script\n\n```python\nfrom datetime import datetime\n\nimport numpy as np\nfrom dateutil import tz\nfrom hdmf.common import DynamicTable, VectorData\nfrom ndx_acquisition_module import AcquisitionModule\n\nfrom pynwb import NWBHDF5IO, NWBFile, TimeSeries\n\n# Create an example NWBFile\nnwbfile = NWBFile(\n session_description=\"test session description\",\n identifier=\"unique_identifier\",\n session_start_time=datetime(2012, 2, 21, tzinfo=tz.gettz(\"US/Pacific\")),\n)\n\n# Create time series\nts = TimeSeries(\n name=\"choice_series\",\n description=\"raw choice series\",\n data=np.random.randint(4, size=100),\n timestamps=(np.arange(100).astype(\"float\") + 2) / 30,\n unit=\"-\",\n)\n\n# Create dynamic table\ntbl = DynamicTable(\n name=\"lookup_table\",\n description=\"lookup table for `choice_series`\",\n columns=[\n VectorData(\n name=\"lookup_id\", description=\"ID to look up\", data=[0, 1, 2, 3]\n ),\n VectorData(\n name=\"lookup_name\",\n description=\"name of ID\",\n data=[\"water\", \"earth\", \"fire\", \"air\"],\n ),\n ],\n)\n\n# Create AcquisitionModule to store these objects\nmod = AcquisitionModule(name=\"raw_mod\", description=\"raw acq module\")\n\n# Add data objects to created AcquisitionModule\nmod.add(ts) # add time series\nmod.add(tbl) # add dynamic table\n\n# Add AcquisitionModule to nwbfile.acquisition\nnwbfile.add_acquisition(mod)\n\n# Write the file to disk\nfilename = \"test.nwb\"\nwith NWBHDF5IO(path=filename, mode=\"w\") as io:\n io.write(nwbfile)\n\n```\n\n
\n\n\n## API usage notes and limitations\n\n### With package installed\n\nCurrently to use `mod.get()` or `mod[]`, users would also need to install this package, for example with\n\n```bash\npip install ndx-acquisition-module\n```\n\nAnd import, using `NWBHDF5IO(..., load_namespaces=True)` would not be enough.\n\n```python\n# new file completely\nfrom pynwb import NWBHDF5IO\nfrom ndx_acquisition_module import AcquisitionModule\nnwb = NWBHDF5IO('test.nwb', mode='r').read() # notice `load_namepsaces` is not needed\n\nprint(nwb.acquisition['raw_mod'])\n```\n\nwhich outputs:\n\n```text\nraw_mod ndx_acquisition_module.AcquisitionModule at 0x139742592581104\nFields:\n data_interfaces: {\n choice_series ,\n lookup_table \n }\n```\n\nTo access:\n\n```python\nnwb.acquisition['raw_mod']['lookup_table']\nnwb.acquisition['raw_mod']['choice_series']\n```\n\n### Without package installed\n\nOtherwise, if `ndx-acquisition-module` is not installed, accessing the inside objects have to be done based on types:\n\n```python\n# new file completely\nfrom pynwb import NWBHDF5IO\nnwb = NWBHDF5IO('test.nwb', mode='r', load_namespaces=True).read() # notice `load_namepsaces` is NEEDED\n\nprint(nwb.acquisition['raw_mod'])\n```\n\nwhich outputs:\n\n```text\nraw_mod abc.AcquisitionModule at 0x140252603705728\nFields:\n description: raw acq module\n dynamic_tables: {\n lookup_table \n }\n nwb_data_interfaces: {\n choice_series \n }\n```\n\nTo access:\n\n```python\nnwb.acquisition['raw_mod'].dynamic_tables['lookup_table']\nnwb.acquisition['raw_mod'].nwb_data_interfaces['choice_series']\n```\n\n---\n\nThis extension was created using [ndx-template](https://github.com/nwb-extensions/ndx-template)\n"}, "ndx-odor-metadata-record": {"ref": "ndx-odor-metadata-record", "record_url": "https://github.com/nwb-extensions/ndx-odor-metadata-record", "last_updated": "2023-07-24T14:42:32Z", "name": "ndx-odor-metadata", "version": "0.1.1", "src": "https://gitlab.com/fleischmann-lab/ndx/ndx-odor-metadata", "pip": "https://pypi.org/project/ndx-odor-metadata/", "license": "BSD-3", "maintainers": ["tuanpham96"], "readme": "# `ndx-odor-metadata`\n\n[![pipeline status](https://img.shields.io/gitlab/pipeline-status/fleischmann-lab/ndx/ndx-odor-metadata?branch=main&label=pipeline&style=flat-square)](https://gitlab.com/fleischmann-lab/ndx/ndx-odor-metadata/-/commits/main)\n[![license](https://img.shields.io/gitlab/license/fleischmann-lab/ndx/ndx-odor-metadata?color=yellow&label=license&style=flat-square)](LICENSE.txt)\n\n![python version](https://img.shields.io/pypi/pyversions/ndx-odor-metadata?style=flat-square)\n[![release](https://img.shields.io/gitlab/v/release/fleischmann-lab/ndx/ndx-odor-metadata?label=release&sort=date&style=flat-square)](https://gitlab.com/fleischmann-lab/ndx/ndx-odor-metadata/-/releases)\n[![pypi package](https://img.shields.io/pypi/v/ndx-odor-metadata?label=pypi%20package&style=flat-square&color=blue)](https://pypi.org/pypi/ndx-odor-metadata)\n[![conda package](https://img.shields.io/conda/v/fleischmannlab/ndx-odor-metadata?color=green&style=flat-square)](https://anaconda.org/FleischmannLab/ndx-odor-metadata)\n\nNWB extension to store odor stimulus metadata with `DynamicTable` format. Entries that have a PubChem and `stim_types` indicate odor/chemical will also be queried with `pubchempy` for more information.\n\nThis is in alpha development stages **WITHOUT** any appropriate tests yet. 
Please use with discretion.\n\n\n## Installation\n\nYou can install via `pip`:\n\n```bash\npip install ndx-odor-metadata\n```\n\nOr `conda`:\n\n```bash\nconda install -c fleischmannlab ndx-odor-metadata\n```\n\nOr directly from the `git` repository:\n\n```bash\npip install git+https://gitlab.com/fleischmann-lab/ndx/ndx-odor-metadata\n```\n\n## Usage\n\n### Main usage\n\n```python\nfrom ndx_odor_metadata import OdorMetaData\n\nodor_table = OdorMetaData(name='odor_table', description='an odor table')\n\nodor_table.add_stimulus(\n pubchem_id = 7662.0,\n stim_name = \"3-Phenylpropyl isobutyrate\",\n raw_id = 3,\n stim_id = 1,\n stim_types = \"odor\",\n chemical_dilution_type='vaporized',\n chemical_concentration = 0.01,\n chemical_concentration_unit='%',\n chemical_solvent = \"Mineral Oil\",\n chemical_provider = \"Sigma\",\n stim_description = \"Legit odor stimulus #1\",\n)\n\nnwbfile.add_acquisition(odor_table)\n```\n\n### Details on arguments\n\n| | name | dtype | doc | default_value | quantity |\n|---:|:----------------------------|:--------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------|:-----------|\n| 0 | stim_name | text | Stimulus name, e.g. \"hexanal\" or \"sound\" | nan | nan |\n| 1 | pubchem_id | float | PubChem ID, `NaN` indicates non-(standard)-odor stimulus | nan | ? |\n| 2 | raw_id | text | Raw acquisition stimulus ID. Will be converted to `str`. | nan | nan |\n| 3 | raw_id_dtype | text | The actual dtype of `raw_id` value. Useful for (re)casting. | N/A | ? |\n| 4 | stim_id | text | Preferred stimulus ID, which can be used to remap acquisition stimulus id `raw_id`. Will be converted to `str`. If not explicitly given, will copy from `raw_id` | nan | ? |\n| 5 | stim_id_dtype | text | The actual dtype of `stim_id` value. Useful for (re)casting. | N/A | ? |\n| 6 | stim_types | text | Type(s) of stimulus, e.g. 'odor', 'sound', 'control', 'CS', 'US', ... | nan | nan |\n| 7 | stim_types_index | nan | Index for `stim_types` | nan | nan |\n| 8 | stim_description | text | Human-readable description, notes, comments of each stimulus | N/A | ? |\n| 9 | chemical_dilution_type | text | Type of dilution, e.g. 'volume/volume', 'vaporized' | N/A | ? |\n| 10 | chemical_concentration | float | Concentration of chemical | nan | ? |\n| 11 | chemical_concentration_unit | text | Unit of concentration, e.g. \"%\" or \"M\" | N/A | ? |\n| 12 | chemical_solvent | text | Solvent to dilute the chemicals in, e.g. 'mineral oil' | N/A | ? |\n| 13 | chemical_provider | text | Provider of the chemicals, e.g. 'Sigma' | N/A | ? |\n| 14 | nonchemical_details | text | Information about non-chemical/odor stimulus, e.g. 'sound' frequencies | N/A | ? |\n| 15 | is_validated | bool | Whether the stimulus, if chemical/odor, is validated against PubChem (or other sources listed in `validation_details`). If the stimulus does not have a valid PubChem ID, this defaults to `False` | False | ? |\n| 16 | validation_details | text | Additional information/details/notes about stimulus validation, e.g. source, software used & version, validation date, ... | N/A | ? |\n| 17 | pubchem_cid | float | PubChem CID, `NaN` indicates non-(standard)-odor stimulus | nan | ? |\n| 18 | chemical_IUPAC | text | Official chemical IUPAC name | N/A | ? |\n| 19 | chemical_SMILES | text | Canonical SMILES | N/A | ? |\n| 20 | chemical_synonyms | text | List of chemical synonyms | | ? |\n| 21 | chemical_synonyms_index | nan | Index for `chemical_synonyms` | nan | ? |\n| 22 | chemical_molecular_formula | text | Molecular formula of chemical used | N/A | ? |\n| 23 | chemical_molecular_weight | float | Molecular weight of chemical used | nan | ? |
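\n\nAs an illustration of these columns, a non-chemical stimulus can be registered by leaving `pubchem_id` as `NaN` and describing it through `nonchemical_details`; a minimal sketch (the argument values here are made up):\n\n```python\nimport numpy as np\n\nodor_table.add_stimulus(\n pubchem_id=np.nan, # NaN marks a non-(standard)-odor stimulus\n stim_name=\"sound\",\n raw_id=4,\n stim_id=2,\n stim_types=\"sound\",\n nonchemical_details=\"8 kHz tone, 70 dB SPL\",\n stim_description=\"Control auditory stimulus\",\n)\n```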
\n\n### Demonstration\n\nFor a more detailed demonstration, please visit the [`demo`](https://gitlab.com/fleischmann-lab/ndx/ndx-odor-metadata/-/tree/main/demo) folder.\n\n\n---\nThis extension was created using [ndx-template](https://github.com/nwb-extensions/ndx-template).\n"}, "ndx-whisk-record": {"ref": "ndx-whisk-record", "record_url": "https://github.com/nwb-extensions/ndx-whisk-record", "last_updated": "2023-08-29T16:21:44Z", "name": "ndx-whisk", "version": "0.1.1", "src": "https://github.com/vncntprvst/ndx-whisk", "pip": "https://pypi.org/project/ndx-whisk/", "license": "BSD-3", "maintainers": ["vncntprvst"], "readme": "# ndx-whisk Extension for NWB\n\nndx-whisk is an [NWB](https://www.nwb.org/) extension to store whisker tracking measurements. It is intended to convert `.whiskers` and `.measurements` files generated by [whisk (Janelia Whisker Tracker)](https://github.com/nclack/whisk/), or saved to `hdf5` with [WhiskiWrap](https://github.com/cxrodgers/WhiskiWrap), but can be used with other whisker tracking methods.\n\n## Installation\n\n`pip install ndx-whisk`\n\n## Usage\n\nSee test script `test_whiskermeasurement.py` in `src/pynwb/tests`. \n\n```python\nfrom pynwb import NWBHDF5IO, NWBFile\nfrom ndx_whisk import WhiskerMeasurementTable\nimport numpy as np\n\n# Load your data (`read_whisker_measurement_table` is defined in the test script referenced above)\nwhisker_data = read_whisker_measurement_table('tracked_data.whiskers')\n\n# Create a WhiskerMeasurementTable\nwhisker_meas = WhiskerMeasurementTable(\n name='name',\n description='description'\n)\n\n# Add data to the WhiskerMeasurementTable\nfor i in range(np.shape(whisker_data['frame_id'])[0]):\n whisker_meas.add_row({k: whisker_data[k][i] for k in whisker_data.keys()})\n \n# Set up an NWB file (`set_up_nwbfile` is likewise defined in the test script)\nnwbfile = set_up_nwbfile()\npath = 'tracked_data.nwb'\n\n# Add a ProcessingModule for behavioral data\nbehavior_module = nwbfile.create_processing_module(\n name=\"behavior\", description=\"Processed behavioral data\"\n)\n\n# Add the WhiskerMeasurementTable\nnwbfile.processing['behavior'].add(whisker_meas)\n\n# Save to NWB file\nwith NWBHDF5IO(path, mode='w') as io:\n io.write(nwbfile)\n```\n\n---\nThis extension was created using [ndx-template](https://github.com/nwb-extensions/ndx-template).\n"}, "ndx-ecg-record": {"ref": "ndx-ecg-record", "record_url": "https://github.com/nwb-extensions/ndx-ecg-record", "last_updated": "2023-11-13T19:50:37Z", "name": "ndx-ecg", "version": "0.1.0", "src": "https://github.com/Defense-Circuits-Lab/ndx_ecg", "pip": "https://pypi.org/project/ndx-ecg/", "license": "BSD 3-clause", "maintainers": ["Hamidreza-Alimohammadi"], "readme": "# ndx-ecg Extension for NWB\n\nThis extension is developed to extend NWB data standards to incorporate ECG recordings. `CardiacSeries`, the main neurodata-type in this extension, extends the base NWB type TimeSeries and can be stored into three specific data interfaces: `ECG`, `HeartRate` and `AuxiliaryAnalysis`. 
The `ECGRecordingGroup` is another neurodata-type in this module; it extends `LabMetaData` (which itself extends NWBContainer) and stores descriptive meta-data on the recording channels and the electrode implementation (`ECGChannels` and `ECGElectrodes`, respectively, both extensions of DynamicTable), along with a link to another extended neurodata-type, `ECGRecDevice`, which extends the type Device.\n\n
\n\n
\n\n## Installation\nCan be installed directly from PyPI:\n```\npip install ndx-ecg\n```\nor simply clone the repo and navigate to the root directory, then:\n```\npip install .\n```\n## Test\nA round-trip test is runnable through ```pytest``` from the root. The test script can be found here:\n```\n/src/pynwb/tests\n```\n## An example use-case\nThe following is an example use case of ```ndx-ecg``` with explanatory comments. First, we build up an ```nwbfile``` and define an endpoint recording device:\n```python\nfrom datetime import datetime\nfrom uuid import uuid4\nimport numpy as np\nfrom dateutil.tz import tzlocal\nfrom pynwb import NWBHDF5IO, NWBFile\nfrom hdmf.common import DynamicTable\n\nfrom ndx_ecg import (\n CardiacSeries,\n ECG,\n HeartRate,\n AuxiliaryAnalysis,\n ECGRecordingGroup,\n ECGRecDevice,\n ECGElectrodes,\n ECGChannels\n)\n\nnwbfile = NWBFile(\n session_description='ECG test-rec session',\n identifier=str(uuid4()),\n session_start_time=datetime.now(tzlocal()),\n experimenter='experimenter',\n lab='DCL',\n institution='UKW',\n experiment_description='',\n session_id='',\n)\n# define an endpoint main recording device\nmain_device = nwbfile.create_device(\n name='endpoint_recording_device',\n description='description_of_the_ERD', # ERD: Endpoint recording device\n manufacturer='manufacturer_of_the_ERD'\n)\n```\nThen, we define instances of `ECGElectrodes` and `ECGChannels` to represent the meta-data on the recording electrodes and the recording channels:\n```python\n'''\ncreating an ECG electrodes table\nas a DynamicTable\n'''\necg_electrodes_table = ECGElectrodes(\n description='descriptive meta-data on ECG recording electrodes'\n)\n\n# add electrodes\necg_electrodes_table.add_row(\n electrode_name='el_0',\n electrode_location='right upper-chest',\n electrode_info='descriptive info on el_0'\n)\necg_electrodes_table.add_row(\n electrode_name='el_1',\n electrode_location='left lower-chest',\n electrode_info='descriptive info on el_1'\n)\necg_electrodes_table.add_row(\n electrode_name='reference',\n electrode_location='top of the head',\n electrode_info='descriptive info on reference'\n)\n# adding the object of DynamicTable\nnwbfile.add_acquisition(ecg_electrodes_table) # storage point for DT\n\n'''\ncreating an ECG recording channels table\nas a DynamicTable\n'''\necg_channels_table = ECGChannels(\n description='descriptive meta-data on ECG recording channels'\n)\n\n# add channels\necg_channels_table.add_row(\n channel_name='ch_0',\n channel_type='single',\n involved_electrodes='el_0',\n channel_info='channel info on ch_0'\n)\necg_channels_table.add_row(\n channel_name='ch_1',\n channel_type='differential',\n involved_electrodes='el_0 and el_1',\n channel_info='channel info on ch_1'\n)\n# adding the object of DynamicTable\nnwbfile.add_acquisition(ecg_channels_table) # storage point for DT\n```\nNow, we can define an instance of ```ECGRecDevice```:\n```python\n# define an ECGRecDevice-type device for ecg recording\necg_device = ECGRecDevice(\n name='recording_device',\n description='description_of_the_ECGRD',\n manufacturer='manufacturer_of_the_ECGRD',\n filtering='notch-60Hz-analog',\n gain='100',\n offset='0',\n synchronization='taken care of via ...',\n endpoint_recording_device=main_device\n)\n# adding the object of ECGRecDevice\nnwbfile.add_device(ecg_device)\n```\nAnd also an instance of ```ECGRecordingGroup```:\n```python\necg_recording_group = ECGRecordingGroup(\n name='recording_group',\n group_description='a group to store electrodes and channels table, and 
linking to ECGRecDevice.',\n electrodes=ecg_electrodes_table,\n channels=ecg_channels_table,\n recording_device=ecg_device\n)\n# adding the object of ECGRecordingGroup\nnwbfile.add_lab_meta_data(ecg_recording_group) # storage point for custom LMD\n```\nNow, we have all the required standard arguments to generate instances of `CardiacSeries` and store them in our three different NWBDataInterfaces:\n```python\n# storing the ECG data\ndum_data_ecg = np.random.randn(20, 2)\ndum_time_ecg = np.linspace(0, 10, len(dum_data_ecg))\necg_cardiac_series = CardiacSeries(\n name='ecg_raw_CS',\n data=dum_data_ecg,\n timestamps=dum_time_ecg,\n unit='mV',\n recording_group=ecg_recording_group\n)\n\necg_raw = ECG(\n cardiac_series=[ecg_cardiac_series],\n processing_description='raw acquisition'\n)\n```\nHere, we built an instance of our `CardiacSeries` to store a dummy raw ECG acquisition into a specified `ECG` interface, and we store it as an acquisition into the `nwbfile`:\n```python\n# adding the raw acquisition of ECG to the nwb_file inside an 'ECG' container\nnwbfile.add_acquisition(ecg_raw)\n```\nIn the following, we take a similar approach, but this time store dummy processed data into the specific interfaces `HeartRate` and `AuxiliaryAnalysis`, and then store them in a (to-be-defined) `ecg_module`:\n```python\n# storing the HeartRate data\ndum_data_hr = np.random.randn(10, 2)\ndum_time_hr = np.linspace(0, 10, len(dum_data_hr))\nhr_cardiac_series = CardiacSeries(\n name='heart_rate_CS',\n data=dum_data_hr,\n timestamps=dum_time_hr,\n unit='bpm',\n recording_group=ecg_recording_group\n)\n\n# defining an ecg_module to store the processed cardiac data and analysis\necg_module = nwbfile.create_processing_module(\n name='cardio_module',\n description='a module to store processed cardiac data'\n)\n\nhr = HeartRate(\n cardiac_series=[hr_cardiac_series],\n processing_description='processed heartRate of the animal'\n)\n# adding the heart rate data to the nwb_file inside a 'HeartRate' container\necg_module.add(hr)\n\n# storing the Auxiliary data\n# An example could be the concept of ceiling that is being used in the literature published by DCL@UKW\ndum_data_ceil = np.random.randn(10, 2)\ndum_time_ceil = np.linspace(0, 10, len(dum_data_ceil))\nceil_cardiac_series = CardiacSeries(\n name='heart_rate_ceil_CS',\n data=dum_data_ceil,\n timestamps=dum_time_ceil,\n unit='bpm',\n recording_group=ecg_recording_group\n)\n\nceil = AuxiliaryAnalysis(\n cardiac_series=[ceil_cardiac_series],\n processing_description='processed auxiliary analysis'\n)\n# adding the 'ceiling' auxiliary analysis to the nwb_file inside an 'AuxiliaryAnalysis' container\necg_module.add(ceil)\n\n# storing the processed heart rate: as an NWBDataInterface with a newly assigned name instead of the default\n# An example could be the concept of HR2ceiling that is being used in the literature published by DCL@UKW\ndum_data_hr2ceil = np.random.randn(10, 2)\ndum_time_hr2ceil = np.linspace(0, 10, len(dum_data_hr2ceil))\nhr2ceil_cardiac_series = CardiacSeries(\n name='heart_rate_to_ceil_CS',\n data=dum_data_hr2ceil,\n timestamps=dum_time_hr2ceil,\n unit='bpm',\n recording_group=ecg_recording_group\n)\n\nhr2ceil = HeartRate(\n name='HR2Ceil',\n cardiac_series=[hr2ceil_cardiac_series],\n processing_description='processed heartRate to ceiling'\n)\n# adding the 'HR2ceiling' processed HR to the nwb_file inside a 'HeartRate' container\necg_module.add(hr2ceil)\n\n```\nNow, the `nwbfile` is ready to be written to disk and read back. 
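\n\nA minimal sketch of that final round trip, using standard `pynwb` I/O (the file name here is illustrative):\n\n```python\nfrom pynwb import NWBHDF5IO\n\n# write the file to disk\nwith NWBHDF5IO('ecg_example.nwb', mode='w') as io:\n io.write(nwbfile)\n\n# read it back; the extension namespaces cached in the file are loaded as well\nwith NWBHDF5IO('ecg_example.nwb', mode='r', load_namespaces=True) as io:\n read_nwbfile = io.read()\n print(read_nwbfile.processing['cardio_module'])\n```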
\n\n"}, "ndx-franklab-novela-record": {"ref": "ndx-franklab-novela-record", "record_url": "https://github.com/nwb-extensions/ndx-franklab-novela-record", "last_updated": "2023-12-14T05:54:08Z", "name": "ndx-franklab-novela", "version": "0.1.0", "src": "https://github.com/lorenfranklab/ndx-franklab-novela", "pip": "https://pypi.org/project/ndx-franklab-novela/", "conda": "https://anaconda.org/novelakrk/ndx-franklab-novela", "license": "BSD 3-Clause", "maintainers": ["NovelaNeuro", "rly", "edeno"], "readme": "# ndx-franklab-novela Extension for NWB\n\n# About\nndx-franklab-novela is a python package containing NWB custom extensions for Loren Frank's Lab.\n\n# How to install\n\nAdd ndx-franklab-novela to your conda environment\n\n`pip install git+git://github.com/LorenFrankLab/ndx-franklab-novela`\n\nThe original published extension maintained by NovelaNeuro can be installed using:\n\n`conda install -c conda-forge -c novelakrk ndx-franklab-novela`\n\n\n# How to install\n\nAdd ndx-franklab-novela to your conda environment
\n```pip install git+git://github.com/LorenFrankLab/ndx-franklab-novela```\n\nThe original published extension maintained by NovelaNeuro can be installed using:\n```conda install -c conda-forge -c novelakrk ndx-franklab-novela```\n\n\n# Extensions\n\n## AssociatedFiles\nRepresentation of associated files in NWB.\n\n**Attributes:**\n- **description** `string`: description of associated file\n- **content** `string`: content of associated file\n- **task_epochs** `string`: id of epochs with task that is descripted by associated files\n\n## HeaderDevice\nRepresentation of HeaderDevice in NWB.\n\n**Attributes:**\n- **headstage_serial** `string`: headstage_serial from header global configuration\n- **headstage_smart_ref_on** `string`: headstage_smart_ref_on from header global configuration\n- **realtime_mode** `string`: realtime_mode from header global configuration\n- **headstage_auto_settle_on** `string`: headstage_auto_settle_on from header global configuration\n- **timestamp_at_creation** `string`: timestamp_at_creation from header global configuration\n- **controller_firmware_version** `string`: conntroller_firmware_version from header global configuration\n- **controller_serial** `string`: conntroller_serial from header global configuration\n- **save_displayed_chan_only** `string`: save_displayed_chan_only from header global configuration\n- **headstage_firmware_version** `string`: headstage_firmware_version from header global configuration\n- **qt_version** `string`: qt_version from header global configuration\n- **compile_date** `string`: compile_date from header global configuration\n- **compile_time** `string`: compile_time from header global configuration\n- **file_prefix** `string`: file_prefix from header global configuration\n- **headstage_gyro_sensor_on** `string`: headstage_gyro_sensor_on from header global configuration\n- **headstage_mag_sensor_on** `string`: headstage_mag_sensor_on from header global configuration\n- **trodes_version** `string`: trodes_version from header global configuration\n- **headstage_accel_sensor_on** `string`: headstage_accel_sensor_on from header global configuration\n- **commit_head** `string`: commit_head from header global configuration\n- **system_time_at_creation** `string`: system_time_at_creation from header global configuration\n- **file_path** `string`: file_path from header global configuration\n\n## ShanksElectrode\nRepresentation of electrodes of a shank in NWB.\n\n**Attributes:**\n- **name** `string`: name of the shank\n- **rel_x** `float`: the rel_x value of this electrode\n- **rel_y** `float`: the rel_y value of this electrode\n- **rel_z** `float`: the rel_z value of this electrode\n\n## Shank\nRepresentation of a shank in NWB.\n\n**Attributes:**\n- **name** `string`: name of the shank\n- **shanks_electrodes** `dict`: electrodes in the shank\n\n## Probe\nRepresentation of a probe in NWB.\n\n**Attributes:**\n- **id** `int`: unique id of the probe\n- **probe_type** `string`: type of probe\n- **units** `string`: units in device\n- **probe_description** `string`: description of probe\n- **contact_side_numbering** `bool`: is contact_side_numbering enabled\n- **contact_size** `float`: value of contact size as float\n- **shanks** `dict`: shanks in the probe\n\n## DataAcqDevice\nRepresentation of data acquisition device in NWB.\n\n**Attributes:**\n- **system** `string`: system of device\n- **amplifier** `string`: amplifier (optional)\n- **adc_circuit** `string`: adc_circuit (optional)\n\n## CameraDevice\nRepresentation of a camera device in 
NWB.\n\n**Attributes:**\n- **meters_per_pixel** `float`: meters per pixel\n- **model** `string`: model of this camera device\n- **lens** `string`: info about lens in this camera\n- **camera_name** `string`: name of this camera\n\n---\nThis extension was created using [ndx-template](https://github.com/nwb-extensions/ndx-template).\n"}, "ndx-photostim-record": {"ref": "ndx-photostim-record", "record_url": "https://github.com/nwb-extensions/ndx-photostim-record", "last_updated": "2023-12-16T05:29:05Z", "name": "ndx-photostim", "version": "0.0.3", "src": "https://github.com/histedlab/ndx-photostim", "pip": "https://pypi.org/project/ndx-photostim", "license": "BSD3", "maintainers": ["lafosse", "histed"], "readme": "# ndx-photostim Extension for NWB\n\n
\n\n\nThis is a NeuroData Without Borders (NWB) extension for storing data and metadata from holographic photostimulation\nmethods. It includes containers for storing photostimulation-specific device parameters, holographic patterns \n(either 2D or 3D), and time series data related to photostimulation.\n
\n\n
We release six PyNWB containers as part of this extension (we currently have only a Python implementation, rather than both Python and MATLAB ones -- this is why the `matnwb` directory is empty):\n\n* The `SpatialLightModulator` and `Laser` containers store metadata about the spatial light modulator and laser used in the photostimulation, respectively. These containers are then stored within the `PhotostimulationMethod` parent container, which stores the remaining photostimulation method-specific metadata.\n* `HolographicPattern` stores the **holographic pattern** used in stimulation.\n* `PhotostimulationSeries` contains the **time series data** corresponding to the presentation of a given stimulus (where the stimulus is represented by a `HolographicPattern` container linked to the `PhotostimulationSeries`).\n* We group **all time series & patterns for a given experiment** together using the `PhotostimulationTable` container. This object is a dynamic table, where each row in the table corresponds to a single `PhotostimulationSeries`. Additionally, the table links to the `StimulationDevice` used in the experiment.\n\n\n## Background\n\n\nState-of-the-art holographic photostimulation methods, used in concert with two-photon imaging, \nallow unprecedented \ncontrol and measurement of cell activity in the living brain. Methods for managing data for two-photon imaging \nexperiments are improving, but there is little to no standardization of data for holographic stimulation methods. \nStimulation in vivo depends on fine-tuning many experimental variables, which poses a challenge for reproducibility \nand data sharing between researchers. To improve standardization of photostimulation data storage and processing, \nwe release this extension as a generic data format for simultaneous holographic stimulation experiments, \nusing the NWB format to store experimental details and data relating to both acquisition \nand photostimulation.\n\n## Installation\n\nTo install the extension, first clone the `ndx-photostim` repository to the desired folder using the command\n```bash\ngit clone https://github.com/histedlab/ndx-photostim.git\n```\nThen, to install the requisite python packages and extension, run:\n```bash\npython -m pip install -r requirements.txt -r requirements-dev.txt\npython setup.py install\n```\nThe extension can then be imported into python scripts via `import ndx_photostim`.\n\n## Usage\n\n**For full example usage, see [tutorial.ipynb](https://github.com/histedlab/ndx-photostim/blob/main/tutorial.ipynb)**\n\nBelow is example code to:\n1. Create a device used in photostimulation\n2. Simulate and store photostimulation ROIs\n3. Store the time series corresponding to each stimulation\n4. Record all time series and patterns used in an experiment in a table\n5. 
Write the above to an NWB file and read it back\n\n\n```python\nimport os\n\nimport numpy as np\nfrom dateutil.tz import tzlocal\nfrom datetime import datetime\nfrom pynwb import NWBFile, NWBHDF5IO\nfrom ndx_photostim import SpatialLightModulator, Laser, PhotostimulationMethod, HolographicPattern, \\\n PhotostimulationSeries, PhotostimulationTable\n\n# create an example NWB file\nnwbfile = NWBFile('ndx-photostim_example', 'EXAMPLE_ID', datetime.now(tzlocal()))\n\n# store the spatial light modulator used\nslm = SpatialLightModulator(name='slm',\n model='Meadowlark',\n size=np.array([512, 512]))\n\n# store the laser used\nlaser = Laser(name='laser',\n model='Coherent Monaco',\n wavelength=1030,\n power=8,\n peak_pulse_energy=20,\n pulse_rate=500)\n\n# create a container for the method used for photostimulation, and link the SLM and laser to it\nps_method = PhotostimulationMethod(name=\"methodA\",\n stimulus_method=\"scanless\",\n sweep_pattern=\"none\",\n sweep_size=0,\n time_per_sweep=0,\n num_sweeps=0)\nps_method.add_slm(slm)\nps_method.add_laser(laser)\n\n# define holographic pattern\nhp = HolographicPattern(name='pattern1',\n image_mask_roi=np.round(np.random.rand(5, 5)),\n stim_duration=0.300,\n power_per_target=8)\n\n# show the mask\nhp.show_mask()\n\n# define stimulation time series using holographic pattern\nps_series = PhotostimulationSeries(name=\"series_1\",\n format='interval',\n data=[1, -1, 1, -1],\n timestamps=[0.5, 1, 2, 4],\n pattern=hp,\n method=ps_method)\n\n# add the stimulus to the NWB file\nnwbfile.add_stimulus(ps_series)\n\n# create a table to store the time series/patterns for all stimuli together, along with experiment-specific\n# parameters\nstim_table = PhotostimulationTable(name='test', description='...')\nstim_table.add_series(ps_series)\n\n# plot the timestamps when the stimulus was presented\nstim_table.plot_presentation_times()\n\n# create a processing module and add the PhotostimulationTable to it\nmodule = nwbfile.create_processing_module(name=\"photostimulation\", description=\"example photostimulation table\")\nmodule.add(stim_table)\n\n# write to an NWB file and read it back\nwith NWBHDF5IO(\"example_file.nwb\", \"w\") as io:\n io.write(nwbfile)\n\nwith NWBHDF5IO(\"example_file.nwb\", \"r\", load_namespaces=True) as io:\n read_nwbfile = io.read()\n\n # Check the file & processing module\n print(read_nwbfile)\n print(read_nwbfile.processing['photostimulation'])\n\nif os.path.exists(\"example_file.nwb\"):\n os.remove(\"example_file.nwb\")\n```\n## Running tests\n\nUnit and integration\ntests are implemented using pytest, and can be run via the command \n`pytest` from the root of the extension directory (i.e., inside `ndx-photostim/src`). In addition, the\n`pytest` command will run a test of the example code above.\n\n## Documentation\n\n### Specification\n\n\nDocumentation for the extension's specification, which is based on the YAML files, is generated and stored in\nthe `./docs` folder. To create it, run the following from the home directory:\n```bash\ncd docs\nmake fulldoc\n```\nThis will save documentation to the `./docs/build` folder, which can be accessed via the \n`./docs/build/html/index.html` file.\n\n### API\n\nTo generate documentation for the Python API (stored in `./api_docs`), we use Sphinx \nand a template from ReadTheDocs. API documentation can\nbe created by running \n```bash\nsphinx-build -b html api_docs/source/ api_docs/build/\n```\nfrom the home folder. 
Similar to the specification docs, API documentation is stored in `./api_docs/build`. Select \n`./api_docs/build/index.html` to access the API documentation in a website format.\n\n\n## Credit\n\nCode by Carl Harris and Paul LaFosse (equal contribution). Collaboration between the NIMH's [Data Science and Sharing Team](https://cmn.nimh.nih.gov/dsst) and [Histed Lab](https://www.nimh.nih.gov/research/research-conducted-at-nimh/research-areas/clinics-and-labs/ncb).\n\n\nThis extension was created using [ndx-template](https://github.com/nwb-extensions/ndx-template).\n\n"}, "ndx-multichannel-volume-record": {"ref": "ndx-multichannel-volume-record", "record_url": "https://github.com/nwb-extensions/ndx-multichannel-volume-record", "last_updated": "2024-03-31T05:45:39Z", "name": "ndx-multichannel-volume", "version": "0.1.12", "src": "https://github.com/focolab/ndx-multichannel-volume", "pip": "https://pypi.org/project/ndx-multichannel-volume/", "license": "BSD-3", "maintainers": ["dysprague"], "readme": "# ndx-multichannel-volume Extension for NWB\n\nThis extension adds support for volumetric multichannel images. It\nextends existing NWB functions for optophysiology imaging to allow for \n3 dimensions and a flexible number of channels. There is additional support\nfor adding metadata that is necessary for imaging in C. elegans.\n\n## Installation\n\nTo install this package on Unix/macOS, run in the command line:\n\n```\npython3 -m pip install --index-url https://pypi.org/simple/ --no-deps ndx-multichannel-volume\n```\n\nOn Windows, run:\n\n```\npy -m pip install --index-url https://pypi.org/simple/ --no-deps ndx-multichannel-volume\n```\n\n## Usage\n\nNew classes added in this extension are:\n\nCElegansSubject - extension of the base subject class with additional attributes\nfor metadata specific to C. elegans.\n\nMultiChannelVolumeSeries - extension of the base TimeSeries class to support \nmultiple channels and 3 dimensions.\n\nMultiChannelVolume - class for storing multichannel volumetric images with \na flexible number of channels. \n\nImagingVolume - alternate version of the native ImagingPlane class for supporting\nmetadata associated with volumetric multichannel images. Contains a list of optical\nchannel references as well as an ordered list of how those channels index to the \nchannels in the image.\n\nOpticalChannelPlus - extension of the OpticalChannel class to support additional\ninformation including emission_range, excitation_range, and excitation_lambda.\n\nOpticalChannelReferences - contains an ordered list of optical channels to represent the \norder of the optical channels in the reference volume.\n\nVolumeSegmentation - contains segmentation masks for image volumes. 
There are options \nto use either a standard voxel_mask with XYZ information as well as a Cell ID label,\nor color_voxel_mask which has RGBW information as well as XYZ.\n\nPlease see https://github.com/focolab/ndx-multichannel-volume/blob/main/src/pynwb/create_NWB.ipynb for example code on how to use these new data types/classes\n\n---\nThis extension was created using [ndx-template](https://github.com/nwb-extensions/ndx-template).\n"}, "ndx-depth-moseq-record": {"ref": "ndx-depth-moseq-record", "record_url": "https://github.com/nwb-extensions/ndx-depth-moseq-record", "last_updated": "2024-07-25T03:10:54Z", "name": "ndx-depth-moseq", "version": "0.1.2", "src": "https://github.com/catalystneuro/ndx-depth-moseq", "pip": "https://pypi.org/project/ndx-depth-moseq/", "license": "BSD-3", "maintainers": ["pauladkisson"], "readme": "# ndx-depth-moseq Extension for NWB\n\nndx-depth-moseq is a standardized format for storing the output of [depth-moseq](https://dattalab.github.io/moseq2-website/index.html), an automatic motion sequencing algorithm, in NWB. Currently, this extension only supports the output of depth-moseq-extract, but will be extended as needed to cover the other types of depth-moseq outputs.\n\nThis extension consists of 3 new neurodata types:\n\n- `DepthImageSeries`, which is a simple extension of `pynwb.image.ImageSeries` for depth video with a constant reference depth.\n- `MoSeqExtractParameterGroup`, which stores all the various parameters from the depth-moseq-extract algorithm.\n- `MoSeqExtractGroup`, which stores all the relevant depth-moseq outputs including the `DepthImageSeries`, `MoSeqExtractParameterGroup`, as well as various native neurodata types such as the `Position`.\n\n## Installation\n```\npip install ndx-depth-moseq\n```\n\n## Usage\n\n```python\n\"\"\"Example of usage with mock data.\"\"\"\nfrom datetime import datetime\nfrom pytz import timezone\nimport numpy as np\nfrom pynwb.image import GrayscaleImage, ImageMaskSeries\nfrom pynwb import NWBFile, TimeSeries\nfrom pynwb.behavior import (\n CompassDirection,\n Position,\n SpatialSeries,\n)\nfrom ndx_depth_moseq import DepthImageSeries, MoSeqExtractGroup, MoSeqExtractParameterGroup\n\n# Define mock data (this will be replaced with the actual data) \nversion = \"0.1.0\"\nnum_frames = 10\nnum_rows = 512\nnum_cols = 424\nprocessed_depth_video = np.zeros((num_frames, num_rows, num_cols))\nloglikelihood_video = np.zeros((num_frames, num_rows, num_cols))\ntimestamps = np.arange(num_frames)\nbackground = np.zeros((num_rows, num_cols))\nis_flipped = np.zeros(num_frames, dtype=bool)\nroi = np.zeros((num_rows, num_cols))\ntrue_depth = 1.0\nkinematic_var_names = ['centroid_x_mm', 'centroid_y_mm', 'height_ave_mm', 'angle', 'velocity_2d_mm', 'velocity_3d_mm', 'velocity_theta', 'length_mm', 'width_mm', 'area_px', 'width_px', 'length_px']\nkinematic_vars = {k: np.zeros(num_frames) for k in kinematic_var_names}\nkinematic_vars['length_px'] += 1\nkinematic_vars['width_px'] += 1\nparameters = {\n 'angle_hampel_sig': np.array([3], dtype=np.int64)[0],\n 'angle_hampel_span': np.array([5], dtype=np.int64)[0],\n 'bg_roi_depth_range_min': np.array([0], dtype=np.int64)[0],\n 'bg_roi_depth_range_max': np.array([1000], dtype=np.int64)[0],\n 'bg_roi_dilate_x': np.array([10], dtype=np.int64)[0],\n 'bg_roi_dilate_y': np.array([10], dtype=np.int64)[0],\n 'bg_roi_fill_holes': True,\n 'bg_roi_gradient_filter': True,\n 'bg_roi_gradient_kernel': np.array([5], dtype=np.int64)[0],\n 'bg_roi_gradient_threshold': np.array([10], dtype=np.int64)[0],\n 
'bg_roi_index': np.array([0], dtype=np.int64)[0],\n 'bg_roi_shape': 'ellipse',\n 'bg_roi_weight_area': np.array([0.5], dtype=np.float64)[0],\n 'bg_roi_weight_extent': np.array([0.5], dtype=np.float64)[0],\n 'bg_roi_weight_dist': np.array([0.5], dtype=np.float64)[0],\n 'cable_filter_iters': np.array([5], dtype=np.int64)[0],\n 'cable_filter_shape': 'ellipse',\n 'cable_filter_size_x': np.array([5], dtype=np.int64)[0],\n 'cable_filter_size_y': np.array([5], dtype=np.int64)[0],\n 'centroid_hampel_sig': np.array([3], dtype=np.int64)[0],\n 'centroid_hampel_span': np.array([5], dtype=np.int64)[0],\n 'chunk_overlap': np.array([0], dtype=np.int64)[0],\n 'chunk_size': np.array([100], dtype=np.int64)[0],\n 'compress': False,\n 'compress_chunk_size': np.array([100], dtype=np.int64)[0],\n 'compress_threads': np.array([1], dtype=np.int64)[0],\n 'config_file': 'config.yaml',\n 'crop_size_width': np.array([512], dtype=np.int64)[0],\n 'crop_size_height': np.array([424], dtype=np.int64)[0],\n 'flip_classifier': 'flip_classifier.pkl',\n 'flip_classifier_smoothing': np.array([5], dtype=np.int64)[0],\n 'fps': np.array([30], dtype=np.int64)[0],\n 'frame_dtype': 'uint16',\n 'frame_trim_beginning': np.array([0], dtype=np.int64)[0],\n 'frame_trim_end': np.array([0], dtype=np.int64)[0],\n 'max_height': np.array([1000], dtype=np.int64)[0],\n 'min_height': np.array([0], dtype=np.int64)[0],\n 'model_smoothing_clips_x': np.array([5], dtype=np.int64)[0],\n 'model_smoothing_clips_y': np.array([5], dtype=np.int64)[0],\n 'spatial_filter_size': np.array([5], dtype=np.int64)[0],\n 'tail_filter_iters': np.array([5], dtype=np.int64)[0],\n 'tail_filter_shape': 'ellipse',\n 'tail_filter_size_x': np.array([5], dtype=np.int64)[0],\n 'tail_filter_size_y': np.array([5], dtype=np.int64)[0],\n 'temporal_filter_size': np.array([5], dtype=np.int64)[0],\n 'tracking_model_init': 'mean',\n 'tracking_model_ll_clip': np.array([5], dtype=np.int64)[0],\n 'tracking_model_ll_threshold': np.array([5], dtype=np.int64)[0],\n 'tracking_model_mask_threshold': np.array([5], dtype=np.int64)[0],\n 'tracking_model_segment': True,\n 'use_plane_bground': True,\n 'use_tracking_model': True,\n 'write_movie': False,\n}\n\n# Create the NWB file\nnwbfile = NWBFile(\n session_description=\"session_description\",\n identifier=\"identifier\",\n session_start_time=datetime.now(timezone(\"US/Pacific\")),\n)\n\n# Add Imaging Data\nkinect = nwbfile.create_device(name=\"kinect\", manufacturer=\"Microsoft\", description=\"Microsoft Kinect 2\")\nflipped_series = TimeSeries(\n name=\"flipped_series\",\n data=is_flipped,\n unit=\"a.u.\",\n timestamps=timestamps,\n description=\"Boolean array indicating whether the image was flipped left/right\",\n)\nprocessed_depth_video = DepthImageSeries(\n name=\"processed_depth_video\",\n data=processed_depth_video,\n unit=\"millimeters\",\n format=\"raw\",\n timestamps=flipped_series.timestamps,\n description=\"3D array of depth frames (nframes x w x h, in mm)\",\n distant_depth=true_depth,\n device=kinect,\n)\nloglikelihood_video = ImageMaskSeries(\n name=\"loglikelihood_video\",\n data=loglikelihood_video,\n masked_imageseries=processed_depth_video,\n unit=\"a.u.\",\n format=\"raw\",\n timestamps=flipped_series.timestamps,\n description=\"Log-likelihood values from the tracking model (nframes x w x h)\",\n device=kinect,\n)\nbackground = GrayscaleImage(\n name=\"background\",\n data=background,\n description=\"Computed background image.\",\n)\nroi = GrayscaleImage(\n name=\"roi\",\n data=roi,\n description=\"Computed region of 
interest.\",\n)\n\n# Add Position Data\nposition_data = np.vstack(\n (kinematic_vars[\"centroid_x_mm\"], kinematic_vars[\"centroid_y_mm\"], kinematic_vars[\"height_ave_mm\"])\n).T\nposition_series = SpatialSeries(\n name=\"position\",\n description=\"Position (x, y, height) in an open field.\",\n data=position_data,\n timestamps=flipped_series.timestamps,\n reference_frame=\"top left\",\n unit=\"mm\",\n)\nposition = Position(spatial_series=position_series, name=\"position\")\n\n# Add Compass Direction Data\nheading_2d_series = SpatialSeries(\n name=\"heading_2d\",\n description=\"Head orientation.\",\n data=kinematic_vars[\"angle\"],\n timestamps=flipped_series.timestamps,\n reference_frame=\"top left\",\n unit=\"radians\",\n)\nheading_2d = CompassDirection(spatial_series=heading_2d_series, name=\"heading_2d\")\n\n# Add speed/velocity data\nspeed_2d = TimeSeries(\n name=\"speed_2d\",\n description=\"2D speed (mm / frame), note that missing frames are not accounted for\",\n data=kinematic_vars[\"velocity_2d_mm\"],\n timestamps=flipped_series.timestamps,\n unit=\"mm/frame\",\n)\nspeed_3d = TimeSeries(\n name=\"speed_3d\",\n description=\"3D speed (mm / frame), note that missing frames are not accounted for\",\n data=kinematic_vars[\"velocity_3d_mm\"],\n timestamps=flipped_series.timestamps,\n unit=\"mm/frame\",\n)\nangular_velocity_2d = TimeSeries(\n name=\"angular_velocity_2d\",\n description=\"Angular component of velocity (arctan(vel_x, vel_y))\",\n data=kinematic_vars[\"velocity_theta\"],\n timestamps=flipped_series.timestamps,\n unit=\"radians/frame\",\n)\n\n# Add length/width/area data\nlength = TimeSeries(\n name=\"length\",\n description=\"Length of mouse (mm)\",\n data=kinematic_vars[\"length_mm\"],\n timestamps=flipped_series.timestamps,\n unit=\"mm\",\n)\nwidth = TimeSeries(\n name=\"width\",\n description=\"Width of mouse (mm)\",\n data=kinematic_vars[\"width_mm\"],\n timestamps=flipped_series.timestamps,\n unit=\"mm\",\n)\nwidth_px_to_mm = kinematic_vars[\"width_mm\"] / kinematic_vars[\"width_px\"]\nlength_px_to_mm = kinematic_vars[\"length_mm\"] / kinematic_vars[\"length_px\"]\narea_px_to_mm2 = width_px_to_mm * length_px_to_mm\narea_mm2 = kinematic_vars[\"area_px\"] * area_px_to_mm2\narea = TimeSeries(\n name=\"area\",\n description=\"Pixel-wise area of mouse (mm^2)\",\n data=area_mm2,\n timestamps=flipped_series.timestamps,\n unit=\"mm^2\",\n)\n\n# Add Parameters\nparameters = MoSeqExtractParameterGroup(name=\"parameters\", **parameters)\n\n# Add MoseqExtractGroup\nmoseq_extract_group = MoSeqExtractGroup(\n name=\"moseq_extract_group\",\n version=version,\n parameters=parameters,\n background=background,\n processed_depth_video=processed_depth_video,\n loglikelihood_video=loglikelihood_video,\n roi=roi,\n flipped_series=flipped_series,\n depth_camera=kinect,\n position=position,\n heading_2d=heading_2d,\n speed_2d=speed_2d,\n speed_3d=speed_3d,\n angular_velocity_2d=angular_velocity_2d,\n length=length,\n width=width,\n area=area,\n)\n# Add data into a behavioral processing module\nbehavior_module = nwbfile.create_processing_module(\n name=\"behavior\",\n description=\"Processed behavioral data from MoSeq\",\n)\nbehavior_module.add(moseq_extract_group)\n```\n\n---\nThis extension was created using [ndx-template](https://github.com/nwb-extensions/ndx-template).\n"}, "ndx-probeinterface-record": {"ref": "ndx-probeinterface-record", "record_url": "https://github.com/nwb-extensions/ndx-probeinterface-record", "last_updated": "2024-07-25T16:48:49Z", "name": "ndx-probeinterface", 
"version": "0.1.0", "src": "https://github.com/SpikeInterface/ndx-probeinterface", "pip": "https://pypi.org/project/ndx-probeinterface/", "license": "MIT", "maintainers": ["alejoe91", "khl02007"], "readme": "# ndx-probeinterface Extension for NWB\n\n`ndx-probeinterface` is an extension of the NWB format to formally define information about neural probes as data types in NWB files. It comes with helper functions to easily construct `ndx-probeinterface.Probe` from `probeinterface.Probe` and vice versa.\n\n## Installation\n```python\npip install ndx_probeinterface\n```\n\n## Usage\n\n### Going from a `probeinterface.Probe`/`ProbeGroup` object to a `ndx_probeinterface.Probe` object \n```python\nimport ndx_probeinterface\n\npi_probe = probeinterface.Probe(...)\npi_probegroup = probeinterface.ProbeGroup()\n\n# from_probeinterface always returns a list of ndx_probeinterface.Probe devices\nndx_probes1 = ndx_probeinterface.from_probeinterface(pi_probe)\nndx_probes2 = ndx_probeinterface.from_probeinterface(pi_probegroup)\n\nndx_probes = ndx_probes1.extend(ndx_probes2)\n\nnwbfile = pynwb.NWBFile(...)\n\n# add Probe as NWB Devices\nfor ndx_probe in ndx_probes:\n nwbfile.add_device(ndx_probe)\n```\n\n### Going from a `ndx_probeinterface.Probe` object to a `probeinterface.Probe` object \n```python\nimport ndx_probeinterface\n\n# load ndx_probeinterface.Probe objects from NWB file\nio = pynwb.NWBH5IO(file_path, 'r', load_namespaces=True)\nnwbfile = io.read()\n\nndx_probes = []\nfor device in nwbfile:\n if isinstance(device, ndx_probeinterface.Probe):\n ndx_probes.append(device)\n\n# convert to probeinterface.Probe objects\npi_probes = []\nfor ndx_probe in ndx_probes:\n pi_probe = ndx_probeinterface.to_probeinterface(ndx_probe)\n pi_probes.append(pi_probe)\n```\n\n## Future plans\n- Add information about the headstage used for data acquisition\n- Remove redundant information from `ElectrodeTable`\n- Incorporate this NDX into the core NWB schema\n\n---\nThis extension was created using [ndx-template](https://github.com/nwb-extensions/ndx-template).\n"}, "ndx-dbs-record": {"ref": "ndx-dbs-record", "record_url": "https://github.com/nwb-extensions/ndx-dbs-record", "last_updated": "2024-07-25T16:51:30Z", "name": "ndx-dbs", "version": "0.1.0", "src": "https://github.com/Hamidreza-Alimohammadi/ndx-dbs", "pip": "https://pypi.org/project/ndx-dbs/", "license": "BSD 3-clause", "maintainers": ["Hamidreza-Alimohammadi"], "readme": "# ndx-dbs Extension for NWB\n\nThis extension is developed to extend NWB data standards to incorporate required (meta)data for DBS experiments. `DBSGroup`, the main neurodata-type in this extension, in fact extends the `LabMetaData` which itself extends the NWBContainer base type and incorporates data types of `DBSMeta`(as an extension of LabMetaData), `DBSSubject`(as an extension of LabMetaData) and `DBSDevice`(as an extension of Device) which itself includes `DBSElectrodes`(as an extension of DynamicTable). Instances of these data types are interlinked to each other to account for the comprehensiveness of all the required meta(data) in a general experiment including DBS.\n\n
\n\n
\n\n## Installation\nCan be installed directly from PyPI:\n```\npip install ndx-dbs\n```\nor simply clone the repo and navigate to the root directory, then:\n```\npip install .\n```\n## Test\nA round-trip test is runnable through ```pytest``` from the root. The test script can be found here:\n```\n/src/pynwb/tests\n```\n## An example use-case\nThe following is an example use case of ```ndx-dbs``` with explanatory comments. First, we build up an ```nwb_file``` and define an endpoint recording device:\n```python\nfrom datetime import datetime\nfrom uuid import uuid4\nfrom dateutil.tz import tzlocal\nfrom pynwb import NWBHDF5IO, NWBFile\n\nfrom ndx_dbs import (\n DBSDevice,\n DBSElectrodes,\n DBSMeta,\n DBSSubject,\n DBSGroup\n)\n\nnwb_file = NWBFile(\n session_description='DBS mock session',\n identifier=str(uuid4()),\n session_start_time=datetime.now(tzlocal()),\n experimenter='experimenter',\n lab='ChiWangLab',\n institution='UKW',\n experiment_description='',\n session_id='',\n)\n\n# define an endpoint main recording device\nmain_device = nwb_file.create_device(\n name='endpoint_recording_device',\n description='description_of_the_ERD', # ERD: Endpoint recording device\n manufacturer='manufacturer_of_the_ERD'\n)\n```\nThen, we define an instance of `DBSElectrodes` to represent the meta-data on the recording electrodes:\n```python\n'''\ncreating a DBS electrodes table\nas a DynamicTable\n'''\ndbs_electrodes_table = DBSElectrodes(\n description='descriptive meta-data on DBS stimulus electrodes'\n)\n\n# add electrodes\ndbs_electrodes_table.add_row(\n el_id='el_0',\n polarity='negative electrode (stimulation electrode, cathode)',\n impedance='0.8 MOhm',\n length='X cm',\n tip='tip surface ~ XX micrometer sq',\n material='platinum/iridium',\n location='STN',\n comment='none',\n)\ndbs_electrodes_table.add_row(\n el_id='el_1',\n polarity='positive electrode (reference electrode, anode)',\n impedance='1 MOhm',\n length='Y cm',\n tip='tip surface ~ YY micrometer sq',\n material='platinum/iridium',\n location='scalp surface',\n comment='distance D from el_0',\n)\n# adding the object of DynamicTable\nnwb_file.add_acquisition(dbs_electrodes_table) # storage point for DT\n```\nNow, we can define an instance of ```DBSDevice```:\n```python\n# define a DBSDevice-type device for DBS\ndbs_device = DBSDevice(\n name='DBS_device',\n description='cable-bound multichannel systems stimulus generator; TypeSTG4004',\n manufacturer='MultichannelSystems, Reutlingen, Germany',\n synchronization='taken care of via ...',\n electrodes_group=dbs_electrodes_table,\n endpoint_recording_device=main_device\n)\n# adding the object of DBSDevice\nnwb_file.add_device(dbs_device)\n```\nAnd also an instance of ```DBSMeta``` to store the meta-data for a DBS experiment:\n```python\ndbs_meta_group = DBSMeta(\n name='DBS_meta',\n stim_state='ON',\n stim_type='unipolar',\n stim_area='STN',\n stim_coordinates='\u20133.6mm AP, either \u20132.5mm (right) or 12.5mm (left) ML, and \u20137.7mm DV',\n pulse_shape='rectangular',\n pulse_width='60 micro-seconds',\n pulse_frequency='130 Hz',\n pulse_intensity='50 micro-Ampere',\n charge_balance='pulse symmetric; set to be theoretically zero',\n)\n# adding the object of DBSMeta\nnwb_file.add_lab_meta_data(dbs_meta_group) # storage point for custom LMD\n```\nAlong with an instance of `DBSSubject`:\n```python\ndbs_subject_group = DBSSubject(\n name='DBS_subject',\n model='6-OHDA',\n controls='specific control group in this experiment',\n comment='any comments on this subject',\n)\n# adding the object of DBSSubject\nnwb_file.add_lab_meta_data(dbs_subject_group) # storage point for custom LMD\n```\nNow that we have all the required components, we define the main group for DBS to connect them all:\n```python\ndbs_main_group = DBSGroup(\n name='DBS_main_container',\n DBS_phase='first phase after implantation recovery',\n DBS_device=dbs_device,\n DBS_meta=dbs_meta_group,\n DBS_subject=dbs_subject_group,\n comment='any comments ...',\n)\n# adding the object of DBSGroup\nnwb_file.add_lab_meta_data(dbs_main_group) # storage point for custom LMD\n```\nNow, the `nwb_file` is ready to be written to disk and read back. \n"}, "ndx-hed-record": {"ref": "ndx-hed-record", "record_url": "https://github.com/nwb-extensions/ndx-hed-record", "last_updated": "2024-08-05T21:38:52Z", "name": "ndx-hed", "version": "0.1.0", "src": "https://github.com/hed-standard/ndx-hed", "pip": "https://pypi.org/project/ndx-hed/", "license": "BSD", "maintainers": ["VisLab", "hed-maintainers"], "readme": "# ndx-hed Extension for NWB\n\n[**Neurodata Without Borders (NWB)**](https://www.nwb.org/) is a data standard for organizing neurophysiology data.\nNWB is used extensively as the data representation for single cell and animal recordings as well as\nhuman neuroimaging modalities such as IEEG. HED (Hierarchical Event Descriptors) is a system of\nstandardized vocabularies and supporting tools that allows fine-grained annotation of data.\nHED annotations can now be used in NWB to provide a column of HED annotations for any NWB\ndynamic table. \n\nThe [**HED annotation in NWB**](https://www.hed-resources.org/en/latest/HedAnnotationInNWB.html)\nuser guide explains in more detail how to use this extension for HED.\n\n\n## Installation\n\n**Python:**\n```bash\npip install -U ndx-hed\n```\n\n
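As a plain illustration of the idea (using only core `pynwb`, not this extension's HED-aware column type, whose API is described in the user guide above), one can attach a column of HED tag strings to the trials table:\n\n```python\nfrom datetime import datetime\nfrom dateutil.tz import tzlocal\nfrom pynwb import NWBFile\n\nnwbfile = NWBFile('HED illustration', 'id', datetime.now(tzlocal()))\n\n# a plain text column named 'HED'; see the user guide above for the\n# extension's validated, HED-aware column type\nnwbfile.add_trial_column(name='HED', description='HED tags for each trial')\nnwbfile.add_trial(start_time=0.0, stop_time=1.0, HED='Sensory-event, Visual-presentation')\nnwbfile.add_trial(start_time=1.0, stop_time=2.0, HED='Agent-action, Participant-response')\n```\n\n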
**Matlab:** The Matlab extension is under development."}} \ No newline at end of file