Add more docstrings to public methods #1063

Merged
merged 30 commits from the_ruffest_rule_of_all into main on Dec 11, 2024
Changes from all commits (30 commits):
9adc03b
add ruff check for public functions
h-mayorquin Sep 7, 2024
87c6cb5
changelog
h-mayorquin Sep 7, 2024
8623806
work in progress
h-mayorquin Sep 7, 2024
e8a2ccd
work in progress
h-mayorquin Sep 7, 2024
4135939
noqa in proress
h-mayorquin Sep 7, 2024
7afe65e
almost done
h-mayorquin Sep 7, 2024
438913b
DONE
h-mayorquin Sep 7, 2024
3066217
Merge branch 'main' into the_ruffest_rule_of_all
h-mayorquin Sep 10, 2024
f2ef463
Merge branch 'main' into the_ruffest_rule_of_all
h-mayorquin Sep 17, 2024
fb2e484
Merge branch 'main' into the_ruffest_rule_of_all
h-mayorquin Sep 17, 2024
2cd9b55
missing stuff
h-mayorquin Sep 18, 2024
228b32f
first suggestion
h-mayorquin Sep 18, 2024
0d5d6cc
second suggestion
h-mayorquin Sep 18, 2024
8035298
removed
h-mayorquin Sep 18, 2024
c69487b
bruker tiff docstring
h-mayorquin Sep 18, 2024
564703c
bruker tiff single parameters
h-mayorquin Sep 18, 2024
d95364b
miniscope imaging
h-mayorquin Sep 18, 2024
adcf7fb
scanimage
h-mayorquin Sep 18, 2024
9d7f734
suit2p paul suggestion
h-mayorquin Sep 18, 2024
dd85b3c
time intervals
h-mayorquin Sep 18, 2024
1cca391
reverse return in add_to_nwbfile
h-mayorquin Sep 18, 2024
de12b9a
hdmf
h-mayorquin Sep 18, 2024
716d400
Merge branch 'main' into the_ruffest_rule_of_all
h-mayorquin Sep 26, 2024
a770758
remove ruff check for public method docstrings
pauladkisson Nov 21, 2024
e4117e8
removed noqa: D102
pauladkisson Nov 21, 2024
cbfb3a7
removed noqa D102
pauladkisson Nov 21, 2024
be80b61
removed ,D102
pauladkisson Nov 21, 2024
fc3bfdb
Merge branch 'main' into the_ruffest_rule_of_all
h-mayorquin Dec 11, 2024
4f1d639
CHANGELOG
h-mayorquin Dec 11, 2024
1bcd57b
Merge branch 'main' into the_ruffest_rule_of_all
h-mayorquin Dec 11, 2024
5 changes: 3 additions & 2 deletions CHANGELOG.md
@@ -28,15 +28,16 @@
## Improvements
* Use mixing tests for ecephy's mocks [PR #1136](https://github.com/catalystneuro/neuroconv/pull/1136)
* Use pytest format for dandi tests to avoid window permission error on teardown [PR #1151](https://github.com/catalystneuro/neuroconv/pull/1151)
* Added many docstrings for public functions [PR #1063](https://github.com/catalystneuro/neuroconv/pull/1063)

# v0.6.5 (November 1, 2024)

## Deprecations

## Bug Fixes
* Fixed formatwise installation from pipy [PR #1118](https://github.com/catalystneuro/neuroconv/pull/1118)
* Fixed dailies [PR #1113](https://github.com/catalystneuro/neuroconv/pull/1113)

## Deprecations

## Features
* Using in-house `GenericDataChunkIterator` [PR #1068](https://github.com/catalystneuro/neuroconv/pull/1068)
* Data interfaces now perform source (argument inputs) validation with the json schema [PR #1020](https://github.com/catalystneuro/neuroconv/pull/1020)
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -356,7 +356,7 @@ doctest_optionflags = "ELLIPSIS"

[tool.black]
line-length = 120
target-version = ['py38', 'py39', 'py310']
target-version = ['py39', 'py310']
include = '\.pyi?$'
extend-exclude = '''
/(
9 changes: 8 additions & 1 deletion src/neuroconv/basedatainterface.py
@@ -36,7 +36,14 @@ class BaseDataInterface(ABC):

@classmethod
def get_source_schema(cls) -> dict:
"""Infer the JSON schema for the source_data from the method signature (annotation typing)."""
"""
Infer the JSON schema for the source_data from the method signature (annotation typing).

Returns
-------
dict
The JSON schema for the source_data.
"""
return get_json_schema_from_method_signature(cls, exclude=["source_data"])

@classmethod
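The hunk above documents the pattern the PR relies on throughout: the source schema is inferred from `__init__` type annotations. Below is a minimal, self-contained sketch of how annotated parameters map to a JSON-schema-like dict — illustrative only, not neuroconv's implementation; the diff names `get_json_schema_from_method_signature` but its internals are not shown here.

```python
import inspect
from typing import get_type_hints

def sketch_schema_from_signature(cls, exclude=()):
    """Illustrative only: map annotated __init__ parameters to a JSON-schema-like dict."""
    type_map = {str: "string", bool: "boolean", int: "integer", float: "number"}
    hints = get_type_hints(cls.__init__)
    properties, required = {}, []
    for name, param in inspect.signature(cls.__init__).parameters.items():
        if name == "self" or name in exclude or name not in hints:
            continue
        properties[name] = {"type": type_map.get(hints[name], "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # no default means the caller must supply it
        else:
            properties[name]["default"] = param.default
    return {"type": "object", "properties": properties, "required": required}

class ExampleInterface:
    def __init__(self, file_path: str, verbose: bool = False):
        self.file_path, self.verbose = file_path, verbose

print(sketch_schema_from_signature(ExampleInterface))
# {'type': 'object', 'properties': {'file_path': {'type': 'string'},
#  'verbose': {'type': 'boolean', 'default': False}}, 'required': ['file_path']}
```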
@@ -111,6 +111,28 @@ def add_to_nwbfile(
starting_frames_labeled_videos: Optional[list[int]] = None,
stub_test: bool = False,
):
"""
Add behavior and pose estimation data, including original and labeled videos, to the specified NWBFile.

Parameters
----------
nwbfile : NWBFile
The NWBFile object to which the data will be added.
metadata : dict
Metadata dictionary containing information about the behavior and videos.
reference_frame : str, optional
Description of the reference frame for pose estimation, by default None.
confidence_definition : str, optional
Definition for the confidence levels in pose estimation, by default None.
external_mode : bool, optional
If True, the videos will be referenced externally rather than embedded within the NWB file, by default True.
starting_frames_original_videos : list of int, optional
List of starting frames for the original videos, by default None.
starting_frames_labeled_videos : list of int, optional
List of starting frames for the labeled videos, by default None.
stub_test : bool, optional
If True, only a subset of the data will be added for testing purposes, by default False.
"""
original_video_interface = self.data_interface_objects["OriginalVideo"]

original_video_metadata = next(
@@ -172,6 +194,33 @@ def run_conversion(
starting_frames_labeled_videos: Optional[list] = None,
stub_test: bool = False,
) -> None:
"""
Run the full conversion process, adding behavior, video, and pose estimation data to an NWB file.

Parameters
----------
nwbfile_path : FilePath, optional
The file path where the NWB file will be saved. If None, the file is handled in memory.
nwbfile : NWBFile, optional
An in-memory NWBFile object. If None, a new NWBFile object will be created.
metadata : dict, optional
Metadata dictionary for describing the NWB file contents. If None, it is auto-generated.
overwrite : bool, optional
If True, overwrites the NWB file at `nwbfile_path` if it exists. If False, appends to the file, by default False.
reference_frame : str, optional
Description of the reference frame for pose estimation, by default None.
confidence_definition : str, optional
Definition for confidence levels in pose estimation, by default None.
external_mode : bool, optional
If True, the videos will be referenced externally rather than embedded within the NWB file, by default True.
starting_frames_original_videos : list of int, optional
List of starting frames for the original videos, by default None.
starting_frames_labeled_videos : list of int, optional
List of starting frames for the labeled videos, by default None.
stub_test : bool, optional
If True, only a subset of the data will be added for testing purposes, by default False.

"""
if metadata is None:
metadata = self.get_metadata()

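The `run_conversion` docstring above encodes a contract worth spelling out: in-memory handling when `nwbfile_path` is None, and overwrite-versus-append behavior on disk. A runnable mock of that contract — `MockConverter` and its return strings are illustrative stand-ins, not neuroconv code:

```python
class MockConverter:
    """Runnable mock (not a neuroconv class) of the documented run_conversion contract."""

    def get_metadata(self) -> dict:
        return {"NWBFile": {"session_description": "placeholder"}}

    def run_conversion(self, nwbfile_path=None, metadata=None, overwrite=False):
        # Auto-generate metadata when the caller passes None, as documented.
        metadata = metadata if metadata is not None else self.get_metadata()
        if nwbfile_path is None:
            return "handled in memory"           # no path: stay in memory
        mode = "w" if overwrite else "a"         # overwrite vs. append on disk
        return f"open {nwbfile_path} in mode '{mode}'"

converter = MockConverter()
print(converter.run_conversion())                               # handled in memory
print(converter.run_conversion("session.nwb", overwrite=True))  # mode 'w'
```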
@@ -187,6 +187,7 @@ def add_to_nwbfile(
nwbfile: NWBFile,
metadata: dict,
) -> None:

ndx_events = get_package(package_name="ndx_events", installation_instructions="pip install ndx-events")
medpc_name_to_info_dict = metadata["MedPC"].get("medpc_name_to_info_dict", None)
assert medpc_name_to_info_dict is not None, "medpc_name_to_info_dict must be provided in metadata"
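The MedPC hunk above calls neuroconv's `get_package` helper with explicit installation instructions. A standalone sketch of that lazy-import pattern — illustrative, not the helper's actual implementation:

```python
import importlib

def get_package_sketch(package_name: str, installation_instructions: str):
    """Import a package lazily; point the user at install instructions on failure."""
    try:
        return importlib.import_module(package_name)
    except ImportError as err:
        raise ModuleNotFoundError(
            f"Package '{package_name}' is required but not installed. "
            f"Install it with: {installation_instructions}"
        ) from err

# e.g. ndx_events = get_package_sketch("ndx_events", "pip install ndx-events")
```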
@@ -220,6 +220,19 @@ def set_aligned_segment_starting_times(self, aligned_segment_starting_times: lis
sorting_segment._t_start = aligned_segment_starting_time

def subset_sorting(self):
"""
Generate a subset of the sorting extractor based on spike timing data.

This method finds each unit's earliest spike time, takes the latest of those values across units, and creates a
subset of the sorting data up to 110% of that time. If the sorting extractor is associated
with a recording, the subset is further limited by the total number of samples in the recording.

Returns
-------
SortingExtractor
A new `SortingExtractor` object representing the subset of the original sorting data,
sliced from the start frame to the calculated end frame.
"""
max_min_spike_time = max(
[
min(x)
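The rule the docstring and the `max_min_spike_time` expression describe is compact enough to demonstrate with plain arrays. A minimal sketch, using numpy arrays in place of a `SortingExtractor`; `subset_end_frame` is an illustrative name, not a neuroconv function:

```python
import numpy as np

def subset_end_frame(spike_frames_per_unit, num_recording_samples=None):
    """For each unit take its earliest spike frame, keep the latest of those
    across units, and cut at 110% of that frame (capped by the recording
    length when one is attached)."""
    max_min_spike_frame = max(min(frames) for frames in spike_frames_per_unit)
    end_frame = int(1.1 * max_min_spike_frame)
    if num_recording_samples is not None:
        end_frame = min(end_frame, num_recording_samples)
    return end_frame

units = [np.array([120, 500, 900]), np.array([300, 450]), np.array([60, 800])]
print(subset_end_frame(units))       # 330 == int(1.1 * 300)
print(subset_end_frame(units, 200))  # capped at the recording length, 200
```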
@@ -518,6 +518,33 @@ def __init__(self, file_path: FilePath, verbose: bool = True):
)

def generate_recording_with_channel_metadata(self):
"""
Generate a dummy recording extractor with channel metadata from session data.

This method reads session data from a `.session.mat` file (if available) and generates a dummy recording
extractor. The recording extractor is then populated with channel metadata extracted from the session file.

Returns
-------
NumpyRecording
A `NumpyRecording` object representing the dummy recording extractor, containing the channel metadata.

Notes
-----
- The method reads the `.session.mat` file using `pymatreader` and extracts `extracellular` data.
- It creates a dummy recording extractor using `spikeinterface.core.numpyextractors.NumpyRecording`.
- The generated extractor includes channel IDs and other relevant metadata such as number of channels,
number of samples, and sampling frequency.
- Channel metadata is added to the dummy extractor using the `add_channel_metadata_to_recoder` function.
- If the `.session.mat` file is not found, no extractor is returned.

Warnings
--------
Ensure that the `.session.mat` file is located at the expected session path, or the method will not generate
a recording extractor. The expected path is `self.session_path / f"{self.session_id}.session.mat"`.

"""

session_data_file_path = self.session_path / f"{self.session_id}.session.mat"
if session_data_file_path.is_file():
from pymatreader import read_mat
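The docstring above names the pieces involved: `pymatreader.read_mat` for the `.session.mat` file and `spikeinterface.core.numpyextractors.NumpyRecording` for the dummy extractor. A minimal sketch of constructing such a dummy recording — the shapes and sampling frequency below are placeholders, not values read from a session file:

```python
import numpy as np
from spikeinterface.core.numpyextractors import NumpyRecording

# Placeholder dimensions; in the real method these come from the
# `extracellular` data read out of the .session.mat file via pymatreader.
num_channels, num_samples, sampling_frequency = 4, 1_000, 30_000.0
traces = np.zeros((num_samples, num_channels), dtype="float32")

dummy_recording = NumpyRecording(
    traces_list=[traces],
    sampling_frequency=sampling_frequency,
    channel_ids=[f"ch{i}" for i in range(num_channels)],
)
print(dummy_recording.get_num_channels(), dummy_recording.get_sampling_frequency())
```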
@@ -47,6 +47,17 @@ def __init__(
def get_metadata_schema(
self,
) -> dict:
"""
Retrieve the metadata schema for the optical physiology (Ophys) data, with optional handling of photon series type.

Parameters
----------
photon_series_type : {"OnePhotonSeries", "TwoPhotonSeries"}, optional
The type of photon series to include in the schema. If None, the value from the instance is used.
This argument is deprecated and will be removed in a future version. Set `photon_series_type` during
the initialization of the `BaseImagingExtractorInterface` instance.

"""

metadata_schema = super().get_metadata_schema()

@@ -93,6 +104,16 @@ def get_metadata_schema(
def get_metadata(
self,
) -> DeepDict:
"""
Retrieve the metadata for the imaging data, with optional handling of photon series type.

Parameters
----------
photon_series_type : {"OnePhotonSeries", "TwoPhotonSeries"}, optional
The type of photon series to include in the metadata. If None, the value from the instance is used.
This argument is deprecated and will be removed in a future version. Instead, set `photon_series_type`
during the initialization of the `BaseImagingExtractorInterface` instance.
"""

from ...tools.roiextractors import get_nwb_imaging_metadata

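Both docstrings above describe a deprecated `photon_series_type` argument that falls back to the value set at initialization. A runnable sketch of that deprecation pattern — `ImagingInterfaceSketch` and the returned dict are illustrative, not the library's class or schema:

```python
import warnings
from typing import Optional

class ImagingInterfaceSketch:
    def __init__(self, photon_series_type: str = "TwoPhotonSeries"):
        self.photon_series_type = photon_series_type

    def get_metadata_schema(self, photon_series_type: Optional[str] = None) -> dict:
        # Warn callers still passing the argument, then prefer the instance value.
        if photon_series_type is not None:
            warnings.warn(
                "photon_series_type is deprecated here; set it when initializing the interface.",
                DeprecationWarning,
                stacklevel=2,
            )
        effective_type = photon_series_type or self.photon_series_type
        return {"properties": {"Ophys": {"photon_series_type": effective_type}}}

interface = ImagingInterfaceSketch(photon_series_type="OnePhotonSeries")
print(interface.get_metadata_schema())  # uses the value set at initialization
```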
@@ -24,6 +24,27 @@ def __init__(self, verbose: bool = False, **source_data):
self.segmentation_extractor = self.get_extractor()(**source_data)

def get_metadata_schema(self) -> dict:
"""
Generate the metadata schema for Ophys data, updating required fields and properties.

This method builds upon the base schema and customizes it for Ophys-specific metadata, including required
components such as devices, fluorescence data, imaging planes, and two-photon series. It also applies
temporary schema adjustments to handle certain use cases until a centralized metadata schema definition
is available.

Returns
-------
dict
A dictionary representing the updated Ophys metadata schema.

Notes
-----
- Ensures that `Device` and `ImageSegmentation` are marked as required.
- Updates various properties, including ensuring arrays for `ImagingPlane` and `TwoPhotonSeries`.
- Adjusts the schema for `Fluorescence`, including required fields and pattern properties.
- Adds schema definitions for `DfOverF`, segmentation images, and summary images.
- Applies temporary fixes, such as setting additional properties for `ImageSegmentation` to True.
"""
metadata_schema = super().get_metadata_schema()
metadata_schema["required"] = ["Ophys"]
metadata_schema["properties"]["Ophys"] = get_base_schema()
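The notes above amount to schema surgery: marking `Ophys`, `Device`, and `ImageSegmentation` as required and tightening properties. A toy demonstration of checking metadata against such a schema with `jsonschema` — the schema and metadata below are simplified stand-ins, not the real Ophys schema:

```python
import jsonschema

# Toy stand-in for the Ophys metadata schema described above.
schema = {
    "type": "object",
    "required": ["Ophys"],
    "properties": {
        "Ophys": {
            "type": "object",
            "required": ["Device", "ImageSegmentation"],
        }
    },
}
metadata = {"Ophys": {"Device": [{"name": "Microscope"}], "ImageSegmentation": {}}}
jsonschema.validate(instance=metadata, schema=schema)  # raises ValidationError if invalid
print("metadata conforms to the schema")
```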
@@ -31,6 +31,7 @@ def get_source_schema(cls):
return source_schema

def get_conversion_options_schema(self):
"""get the conversion options schema."""
interface_name = list(self.data_interface_objects.keys())[0]
return self.data_interface_objects[interface_name].get_conversion_options_schema()

@@ -91,6 +92,20 @@ def add_to_nwbfile(
stub_test: bool = False,
stub_frames: int = 100,
):
"""
Add data from multiple data interfaces to the specified NWBFile.

Parameters
----------
nwbfile : NWBFile
The NWBFile object to which the data will be added.
metadata : dict
Metadata dictionary containing information to describe the data being added to the NWB file.
stub_test : bool, optional
If True, only a subset of the data (up to `stub_frames`) will be added for testing purposes. Default is False.
stub_frames : int, optional
The number of frames to include in the subset if `stub_test` is True. Default is 100.
"""
for photon_series_index, (interface_name, data_interface) in enumerate(self.data_interface_objects.items()):
data_interface.add_to_nwbfile(
nwbfile=nwbfile,
@@ -109,6 +124,24 @@ def run_conversion(
stub_test: bool = False,
stub_frames: int = 100,
) -> None:
"""
Run the conversion process for the instantiated data interfaces and add data to the NWB file.

Parameters
----------
nwbfile_path : FilePath, optional
Path where the NWB file will be written. If None, the file will be handled in-memory.
nwbfile : NWBFile, optional
An in-memory NWBFile object. If None, a new NWBFile object will be created.
metadata : dict, optional
Metadata dictionary for describing the NWB file. If None, it will be auto-generated using the `get_metadata()` method.
overwrite : bool, optional
If True, overwrites the existing NWB file at `nwbfile_path`. If False, appends to the file (default is False).
stub_test : bool, optional
If True, only a subset of the data (up to `stub_frames`) will be added for testing purposes, by default False.
stub_frames : int, optional
The number of frames to include in the subset if `stub_test` is True, by default 100.
"""
if metadata is None:
metadata = self.get_metadata()

@@ -141,6 +174,7 @@ def get_source_schema(cls):
return get_json_schema_from_method_signature(cls)

def get_conversion_options_schema(self):
"""Get the conversion options schema."""
interface_name = list(self.data_interface_objects.keys())[0]
return self.data_interface_objects[interface_name].get_conversion_options_schema()

Expand Down Expand Up @@ -187,6 +221,21 @@ def add_to_nwbfile(
stub_test: bool = False,
stub_frames: int = 100,
):
"""
Add data from all instantiated data interfaces to the provided NWBFile.

Parameters
----------
nwbfile : NWBFile
The NWBFile object to which the data will be added.
metadata : dict
Metadata dictionary containing information about the data to be added.
stub_test : bool, optional
If True, only a subset of the data (defined by `stub_frames`) will be added for testing purposes,
by default False.
stub_frames : int, optional
The number of frames to include in the subset if `stub_test` is True, by default 100.
"""
for photon_series_index, (interface_name, data_interface) in enumerate(self.data_interface_objects.items()):
data_interface.add_to_nwbfile(
nwbfile=nwbfile,
@@ -205,6 +254,24 @@ def run_conversion(
stub_test: bool = False,
stub_frames: int = 100,
) -> None:
"""
Run the NWB conversion process for all instantiated data interfaces.

Parameters
----------
nwbfile_path : FilePath, optional
The file path where the NWB file will be written. If None, the file is handled in-memory.
nwbfile : NWBFile, optional
An existing in-memory NWBFile object. If None, a new NWBFile object will be created.
metadata : dict, optional
Metadata dictionary used to create or validate the NWBFile. If None, metadata is automatically generated.
overwrite : bool, optional
If True, the NWBFile at `nwbfile_path` is overwritten if it exists. If False (default), data is appended.
stub_test : bool, optional
If True, only a subset of the data (up to `stub_frames`) is used for testing purposes. By default False.
stub_frames : int, optional
The number of frames to include in the subset if `stub_test` is True. By default 100.
"""
if metadata is None:
metadata = self.get_metadata()

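Every converter docstring above repeats the `stub_test`/`stub_frames` pair. Conceptually it is just a cap on how much data gets written; a minimal sketch, with names that are illustrative rather than neuroconv internals:

```python
import numpy as np

def maybe_stub(data: np.ndarray, stub_test: bool = False, stub_frames: int = 100) -> np.ndarray:
    """Return only the first stub_frames samples when stub_test is enabled."""
    return data[:stub_frames] if stub_test else data

frames = np.random.rand(1_000, 64, 64)
print(maybe_stub(frames, stub_test=True).shape)  # (100, 64, 64)
print(maybe_stub(frames).shape)                  # (1000, 64, 64)
```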