
Commit

merge

h-mayorquin committed Nov 15, 2024
2 parents a677924 + a608e90 commit 5698adc
Showing 24 changed files with 301 additions and 304 deletions.
7 changes: 6 additions & 1 deletion CHANGELOG.md
@@ -1,14 +1,19 @@
# Upcoming

## Deprecations
* Completely removed compression settings from most places [PR #1126](https://github.com/catalystneuro/neuroconv/pull/1126)

## Bug Fixes
* datetime objects can now be validated as conversion options [PR #1139](https://github.com/catalystneuro/neuroconv/pull/1139)

## Features
* Propagate the `unit_electrode_indices` argument from the spikeinterface tools to `BaseSortingExtractorInterface`. This allows users to map units to the electrode table when adding sorting data [PR #1124](https://github.com/catalystneuro/neuroconv/pull/1124)
* Added `SortedRecordingConverter` to convert sorted recordings to NWB with correct metadata mapping between units and electrodes [PR #1132](https://github.com/catalystneuro/neuroconv/pull/1132)
* Imaging interfaces have a new conversion option `always_write_timestamps` that can be used to force writing timestamps even if neuroconv's heuristics indicate a regular sampling rate [PR #1125](https://github.com/catalystneuro/neuroconv/pull/1125)
* Added .csv support to DeepLabCutInterface [PR #1140](https://github.com/catalystneuro/neuroconv/pull/1140)

## Improvements
* Use mixin tests for ecephys mocks [PR #1136](https://github.com/catalystneuro/neuroconv/pull/1136)

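The `always_write_timestamps` option relates to neuroconv's timestamp-regularity heuristic: when timestamps are evenly spaced, only a starting time and a rate are normally stored. A minimal numpy sketch of that check (variable names are illustrative, not neuroconv's internals):

```python
import numpy as np

# Timestamps sampled at a regular 10 Hz; a regularity check like this (sketch)
# would normally lead to storing starting_time + rate instead of the full array.
timestamps = np.array([0.0, 0.1, 0.2, 0.3, 0.4])
intervals = np.diff(timestamps)
is_regular = bool(np.allclose(intervals, intervals[0]))

# With always_write_timestamps=True, the full array is written regardless.
always_write_timestamps = False
write_full_timestamps = always_write_timestamps or not is_regular
```

With a regular series and the option left at its default, the full timestamps array is skipped.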
# v0.6.5 (November 1, 2024)

@@ -47,7 +52,7 @@
* Added automated EFS volume creation and mounting to the `submit_aws_job` helper function. [PR #1018](https://github.com/catalystneuro/neuroconv/pull/1018)
* Added a mock for segmentation extractors interfaces in ophys: `MockSegmentationInterface` [PR #1067](https://github.com/catalystneuro/neuroconv/pull/1067)
* Added a `MockSortingInterface` for testing purposes. [PR #1065](https://github.com/catalystneuro/neuroconv/pull/1065)
* BaseRecordingInterfaces have a new conversion options `always_write_timestamps` that ca be used to force writing timestamps even if neuroconv heuristic indicates regular sampling rate [PR #1091](https://github.com/catalystneuro/neuroconv/pull/1091)
* BaseRecordingInterfaces have a new conversion options `always_write_timestamps` that can be used to force writing timestamps even if neuroconv heuristic indicates regular sampling rate [PR #1091](https://github.com/catalystneuro/neuroconv/pull/1091)


## Improvements
5 changes: 3 additions & 2 deletions docs/conversion_examples_gallery/behavior/deeplabcut.rst
@@ -8,6 +8,7 @@ Install NeuroConv with the additional dependencies necessary for reading DeepLab
pip install "neuroconv[deeplabcut]"
Convert DeepLabCut pose estimation data to NWB using :py:class:`~neuroconv.datainterfaces.behavior.deeplabcut.deeplabcutdatainterface.DeepLabCutInterface`.
This interface supports both .h5 and .csv output files from DeepLabCut.

.. code-block:: python
@@ -16,8 +17,8 @@ Convert DeepLabCut pose estimation data to NWB using :py:class:`~neuroconv.datai
>>> from pathlib import Path
>>> from neuroconv.datainterfaces import DeepLabCutInterface
>>> file_path = BEHAVIOR_DATA_PATH / "DLC" / "m3v1mp4DLC_resnet50_openfieldAug20shuffle1_30000.h5"
>>> config_file_path = BEHAVIOR_DATA_PATH / "DLC" / "config.yaml"
>>> file_path = BEHAVIOR_DATA_PATH / "DLC" / "open_field_without_video" / "m3v1mp4DLC_resnet50_openfieldAug20shuffle1_30000.h5"
>>> config_file_path = BEHAVIOR_DATA_PATH / "DLC" / "open_field_without_video" / "config.yaml"
>>> interface = DeepLabCutInterface(file_path=file_path, config_file_path=config_file_path, subject_name="ind1", verbose=False)
26 changes: 13 additions & 13 deletions pyproject.toml
@@ -270,50 +270,50 @@ icephys = [

## Ophys
brukertiff = [
"roiextractors>=0.5.7",
"roiextractors>=0.5.10",
"tifffile>=2023.3.21",
]
caiman = [
"roiextractors>=0.5.7",
"roiextractors>=0.5.10",
]
cnmfe = [
"roiextractors>=0.5.7",
"roiextractors>=0.5.10",
]
extract = [
"roiextractors>=0.5.7",
"roiextractors>=0.5.10",
]
hdf5 = [
"roiextractors>=0.5.7",
"roiextractors>=0.5.10",
]
micromanagertiff = [
"roiextractors>=0.5.7",
"roiextractors>=0.5.10",
"tifffile>=2023.3.21",
]
miniscope = [
"natsort>=8.3.1",
"ndx-miniscope>=0.5.1",
"roiextractors>=0.5.7",
"roiextractors>=0.5.10",
]
sbx = [
"roiextractors>=0.5.7",
"roiextractors>=0.5.10",
]
scanimage = [
"roiextractors>=0.5.7",
"roiextractors>=0.5.10",
"scanimage-tiff-reader>=1.4.1",
]
sima = [
"roiextractors>=0.5.7",
"roiextractors>=0.5.10",
]
suite2p = [
"roiextractors>=0.5.7",
"roiextractors>=0.5.10",
]
tdt_fp = [
"ndx-fiber-photometry",
"roiextractors>=0.5.7",
"roiextractors>=0.5.10",
"tdt",
]
tiff = [
"roiextractors>=0.5.7",
"roiextractors>=0.5.9",
"tiffile>=2018.10.18",
]
ophys = [
4 changes: 3 additions & 1 deletion src/neuroconv/basedatainterface.py
@@ -126,7 +126,7 @@ def create_nwbfile(self, metadata: Optional[dict] = None, **conversion_options)
return nwbfile

@abstractmethod
def add_to_nwbfile(self, nwbfile: NWBFile, **conversion_options) -> None:
def add_to_nwbfile(self, nwbfile: NWBFile, metadata: Optional[dict], **conversion_options) -> None:
"""
Define a protocol for mapping the data from this interface to NWB neurodata objects.
@@ -136,6 +136,8 @@ def add_to_nwbfile(self, nwbfile: NWBFile, **conversion_options) -> None:
----------
nwbfile : pynwb.NWBFile
The in-memory object to add the data to.
metadata : dict
Metadata dictionary with information used to create the NWBFile.
**conversion_options
Additional keyword arguments to pass to the `.add_to_nwbfile` method.
"""
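The signature change above makes `metadata` an explicit parameter of the `add_to_nwbfile` protocol. A self-contained sketch of the pattern, using a plain dict as a stand-in for `pynwb.NWBFile` and toy class names (not neuroconv's real classes):

```python
from abc import ABC, abstractmethod
from typing import Optional


class BaseDataInterfaceSketch(ABC):
    """Stand-in for neuroconv's BaseDataInterface, showing the updated signature."""

    @abstractmethod
    def add_to_nwbfile(self, nwbfile, metadata: Optional[dict], **conversion_options) -> None:
        ...


class ToyInterface(BaseDataInterfaceSketch):
    def add_to_nwbfile(self, nwbfile, metadata: Optional[dict], **conversion_options) -> None:
        # A real interface would create NWB neurodata objects; here we just record the call.
        nwbfile.setdefault("acquisition", []).append({"metadata": metadata, **conversion_options})


nwbfile = {}  # stand-in for an in-memory pynwb.NWBFile
ToyInterface().add_to_nwbfile(nwbfile, metadata={"session_description": "demo"}, stub_test=True)
```

Concrete interfaces override the method with the same `(nwbfile, metadata, **conversion_options)` shape, so converters can pass the metadata dictionary through uniformly.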
41 changes: 13 additions & 28 deletions src/neuroconv/datainterfaces/behavior/deeplabcut/_dlc_utils.py
@@ -251,21 +251,6 @@ def _get_video_info_from_config_file(config_file_path: Path, vidname: str):
return video_file_path, image_shape


def _get_pes_args(
*,
h5file: Path,
individual_name: str,
):
h5file = Path(h5file)

_, scorer = h5file.stem.split("DLC")
scorer = "DLC" + scorer

df = _ensure_individuals_in_header(pd.read_hdf(h5file), individual_name)

return scorer, df


def _write_pes_to_nwbfile(
nwbfile,
animal,
@@ -339,23 +324,23 @@ def _write_pes_to_nwbfile(
return nwbfile


def add_subject_to_nwbfile(
def _add_subject_to_nwbfile(
nwbfile: NWBFile,
h5file: FilePath,
file_path: FilePath,
individual_name: str,
config_file: Optional[FilePath] = None,
timestamps: Optional[Union[list, np.ndarray]] = None,
pose_estimation_container_kwargs: Optional[dict] = None,
) -> NWBFile:
"""
Given the subject name, add the DLC .h5 file to an in-memory NWBFile object.
Given the subject name, add the DLC output file (.h5 or .csv) to an in-memory NWBFile object.
Parameters
----------
nwbfile : pynwb.NWBFile
The in-memory nwbfile object to which the subject specific pose estimation series will be added.
h5file : str or path
Path to the DeepLabCut .h5 output file.
file_path : str or path
Path to the DeepLabCut .h5 or .csv output file.
individual_name : str
Name of the subject (whose pose is predicted) for single-animal DLC project.
For multi-animal projects, the names from the DLC project will be used directly.
@@ -371,18 +356,18 @@
nwbfile : pynwb.NWBFile
nwbfile with pes written in the behavior module
"""
h5file = Path(h5file)

if "DLC" not in h5file.name or not h5file.suffix == ".h5":
raise IOError("The file passed in is not a DeepLabCut h5 data file.")
file_path = Path(file_path)

video_name, scorer = h5file.stem.split("DLC")
video_name, scorer = file_path.stem.split("DLC")
scorer = "DLC" + scorer

# TODO probably could be read directly with h5py
# This requires pytables
data_frame_from_hdf5 = pd.read_hdf(h5file)
df = _ensure_individuals_in_header(data_frame_from_hdf5, individual_name)
if ".h5" in file_path.suffixes:
df = pd.read_hdf(file_path)
elif ".csv" in file_path.suffixes:
df = pd.read_csv(file_path, header=[0, 1, 2], index_col=0)
df = _ensure_individuals_in_header(df, individual_name)

# Note the video here is a tuple of the video path and the image shape
if config_file is not None:
Expand All @@ -404,7 +389,7 @@ def add_subject_to_nwbfile(

# Fetch the corresponding metadata pickle file, we extract the edges graph from here
# TODO: This is the original implementation way to extract the file name but looks very brittle. Improve it
filename = str(h5file.parent / h5file.stem)
filename = str(file_path.parent / file_path.stem)
for i, c in enumerate(filename[::-1]):
if c.isnumeric():
break
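The new `.csv` branch above relies on DeepLabCut's three-row column header (scorer / bodyparts / coords), and the scorer is still parsed by splitting the stem on `"DLC"`. A self-contained sketch of both steps, using a hypothetical scorer name and an in-memory file in place of a real DLC output:

```python
import io
from pathlib import Path

import pandas as pd

# Scorer extraction as in _add_subject_to_nwbfile: the stem splits on "DLC".
file_path = Path("m3v1mp4DLC_resnet50_openfieldAug20shuffle1_30000.csv")
video_name, scorer = file_path.stem.split("DLC")
scorer = "DLC" + scorer

# DLC-style .csv with the scorer/bodyparts/coords header rows (values hypothetical).
csv_text = (
    "scorer,DLC_demo,DLC_demo,DLC_demo\n"
    "bodyparts,snout,snout,snout\n"
    "coords,x,y,likelihood\n"
    "0,10.0,20.0,0.99\n"
    "1,11.0,21.0,0.98\n"
)
df = pd.read_csv(io.StringIO(csv_text), header=[0, 1, 2], index_col=0)
```

Reading with `header=[0, 1, 2]` reconstructs the same MultiIndex columns that `pd.read_hdf` yields for the `.h5` output, so the downstream `_ensure_individuals_in_header` path is shared.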
@@ -5,6 +5,7 @@
from pydantic import FilePath, validate_call
from pynwb.file import NWBFile

# import ndx_pose
from ....basetemporalalignmentinterface import BaseTemporalAlignmentInterface


@@ -13,16 +14,16 @@ class DeepLabCutInterface(BaseTemporalAlignmentInterface):

display_name = "DeepLabCut"
keywords = ("DLC",)
associated_suffixes = (".h5",)
associated_suffixes = (".h5", ".csv")
info = "Interface for handling data from DeepLabCut."

_timestamps = None

@classmethod
def get_source_schema(cls) -> dict:
source_schema = super().get_source_schema()
source_schema["properties"]["file_path"]["description"] = "Path to the .h5 file output by dlc."
source_schema["properties"]["config_file_path"]["description"] = "Path to .yml config file"
source_schema["properties"]["file_path"]["description"] = "Path to the file output by dlc (.h5 or .csv)."
source_schema["properties"]["config_file_path"]["description"] = "Path to .yml config file."
return source_schema

@validate_call
@@ -34,24 +35,25 @@ def __init__(
verbose: bool = True,
):
"""
Interface for writing DLC's h5 files to nwb using dlc2nwb.
Interface for writing DLC's output files to nwb using dlc2nwb.
Parameters
----------
file_path : FilePath
path to the h5 file output by dlc.
Path to the file output by dlc (.h5 or .csv).
config_file_path : FilePath, optional
path to .yml config file
Path to .yml config file
subject_name : str, default: "ind1"
the name of the subject for which the :py:class:`~pynwb.file.NWBFile` is to be created.
The name of the subject for which the :py:class:`~pynwb.file.NWBFile` is to be created.
verbose: bool, default: True
controls verbosity.
Controls verbosity.
"""
from ._dlc_utils import _read_config

file_path = Path(file_path)
if "DLC" not in file_path.stem or ".h5" not in file_path.suffixes:
raise IOError("The file passed in is not a DeepLabCut h5 data file.")
suffix_is_valid = ".h5" in file_path.suffixes or ".csv" in file_path.suffixes
if not "DLC" in file_path.stem or not suffix_is_valid:
raise IOError("The file passed in is not a valid DeepLabCut output data file.")

self.config_dict = dict()
if config_file_path is not None:
@@ -108,12 +110,14 @@ def add_to_nwbfile(
nwb file to which the recording information is to be added
metadata: dict
metadata info for constructing the nwb file (optional).
container_name: str, default: "PoseEstimation"
Name of the container to store the pose estimation.
"""
from ._dlc_utils import add_subject_to_nwbfile
from ._dlc_utils import _add_subject_to_nwbfile

add_subject_to_nwbfile(
_add_subject_to_nwbfile(
nwbfile=nwbfile,
h5file=str(self.source_data["file_path"]),
file_path=str(self.source_data["file_path"]),
individual_name=self.subject_name,
config_file=self.source_data["config_file_path"],
timestamps=self._timestamps,
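The widened validation in `__init__` above hinges on `Path.suffixes` plus the `"DLC"` marker in the stem. A standalone sketch of that check (the helper name is hypothetical, not part of the interface):

```python
from pathlib import Path


def is_valid_dlc_output(file_path) -> bool:
    # Mirrors the check in DeepLabCutInterface.__init__ (sketch, not the real method):
    # the stem must contain "DLC" and the file must be a .h5 or .csv output.
    file_path = Path(file_path)
    suffix_is_valid = ".h5" in file_path.suffixes or ".csv" in file_path.suffixes
    return "DLC" in file_path.stem and suffix_is_valid
```

Using `suffixes` (plural) means compound names such as `m3v1mp4DLC_demo.filtered.h5` still pass, since every dotted component is inspected.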
@@ -1,6 +1,5 @@
import json
import re
import warnings
from datetime import datetime, timezone
from pathlib import Path
from typing import Optional, Union
@@ -210,8 +209,6 @@ def add_to_nwbfile(
self,
nwbfile: NWBFile,
metadata: Optional[dict] = None,
compression: Optional[str] = None, # TODO: remove completely after 10/1/2024
compression_opts: Optional[int] = None, # TODO: remove completely after 10/1/2024
):
"""
Parameters
@@ -223,17 +220,6 @@
"""
import pandas as pd

# TODO: remove completely after 10/1/2024
if compression is not None or compression_opts is not None:
warnings.warn(
message=(
"Specifying compression methods and their options at the level of tool functions has been deprecated. "
"Please use the `configure_backend` tool function for this purpose."
),
category=DeprecationWarning,
stacklevel=2,
)

fictrac_data_df = pd.read_csv(self.file_path, sep=",", header=None, names=self.columns_in_dat_file)

# Get the timestamps
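The single remaining read in the FicTrac hunk above loads a headerless, comma-separated log and supplies the column names explicitly. A runnable sketch with an in-memory file and hypothetical column names (the real interface uses its full `columns_in_dat_file` list):

```python
import io

import pandas as pd

# FicTrac .dat logs have no header row; columns are named at read time (names hypothetical).
columns_in_dat_file = ["frame_counter", "delta_rotation_x", "delta_rotation_y"]
dat_text = "1,0.01,0.02\n2,0.03,0.04\n"
fictrac_data_df = pd.read_csv(io.StringIO(dat_text), sep=",", header=None, names=columns_in_dat_file)
```

Passing `header=None` keeps the first data row as data, and `names=` attaches the schema so later code can address columns like `fictrac_data_df["frame_counter"]` by name.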
13 changes: 0 additions & 13 deletions src/neuroconv/datainterfaces/behavior/video/videodatainterface.py
@@ -269,8 +269,6 @@ def add_to_nwbfile(
chunk_data: bool = True,
module_name: Optional[str] = None,
module_description: Optional[str] = None,
compression: Optional[str] = "gzip",
compression_options: Optional[int] = None,
):
"""
Convert the video data files to :py:class:`~pynwb.image.ImageSeries` and write them in the
@@ -431,17 +429,6 @@
pbar.update(1)
iterable = video

# TODO: remove completely after 03/1/2024
if compression is not None or compression_options is not None:
warnings.warn(
message=(
"Specifying compression methods and their options for this interface has been deprecated. "
"Please use the `configure_backend` tool function for this purpose."
),
category=DeprecationWarning,
stacklevel=2,
)

image_series_kwargs.update(data=iterable)

if timing_type == "starting_time and rate":
@@ -26,8 +26,6 @@ def add_to_nwbfile(
starting_time: Optional[float] = None,
write_as: Literal["raw", "lfp", "processed"] = "lfp",
write_electrical_series: bool = True,
compression: Optional[str] = None, # TODO: remove completely after 10/1/2024
compression_opts: Optional[int] = None,
iterator_type: str = "v2",
iterator_opts: Optional[dict] = None,
):
@@ -38,8 +36,6 @@
starting_time=starting_time,
write_as=write_as,
write_electrical_series=write_electrical_series,
compression=compression,
compression_opts=compression_opts,
iterator_type=iterator_type,
iterator_opts=iterator_opts,
)
@@ -308,8 +308,6 @@ def add_to_nwbfile(
starting_time: Optional[float] = None,
write_as: Literal["raw", "lfp", "processed"] = "raw",
write_electrical_series: bool = True,
compression: Optional[str] = None, # TODO: remove completely after 10/1/2024
compression_opts: Optional[int] = None,
iterator_type: Optional[str] = "v2",
iterator_opts: Optional[dict] = None,
always_write_timestamps: bool = False,
@@ -388,8 +386,6 @@
write_as=write_as,
write_electrical_series=write_electrical_series,
es_key=self.es_key,
compression=compression,
compression_opts=compression_opts,
iterator_type=iterator_type,
iterator_opts=iterator_opts,
always_write_timestamps=always_write_timestamps,
