1D -> single channel; 2D -> multi-channel
alejoe91 committed Sep 22, 2023
2 parents 87b2b10 + d78cb07 commit e2d3a8b
Showing 68 changed files with 1,467 additions and 1,527 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/installation-tips-test.yml
Original file line number Diff line number Diff line change
@@ -30,4 +30,4 @@ jobs:
- name: Test Conda Environment Creation
uses: conda-incubator/[email protected]
with:
environment-file: ./installations_tips/full_spikeinterface_environment_${{ matrix.label }}.yml
environment-file: ./installation_tips/full_spikeinterface_environment_${{ matrix.label }}.yml
13 changes: 8 additions & 5 deletions doc/api.rst
@@ -19,6 +19,8 @@ spikeinterface.core
.. autofunction:: extract_waveforms
.. autofunction:: load_waveforms
.. autofunction:: compute_sparsity
.. autoclass:: ChannelSparsity
:members:
.. autoclass:: BinaryRecordingExtractor
.. autoclass:: ZarrRecordingExtractor
.. autoclass:: BinaryFolderRecording
@@ -48,17 +50,15 @@ spikeinterface.core
.. autofunction:: get_template_extremum_channel
.. autofunction:: get_template_extremum_channel_peak_shift
.. autofunction:: get_template_extremum_amplitude

..
.. autofunction:: read_binary
.. autofunction:: read_zarr
.. autofunction:: append_recordings
.. autofunction:: concatenate_recordings
.. autofunction:: split_recording
.. autofunction:: select_segment_recording
.. autofunction:: append_sortings
.. autofunction:: split_sorting
.. autofunction:: select_segment_sorting
.. autofunction:: read_binary
.. autofunction:: read_zarr

Low-level
~~~~~~~~~
@@ -67,7 +67,6 @@ Low-level
:noindex:

.. autoclass:: BaseWaveformExtractorExtension
.. autoclass:: ChannelSparsity
.. autoclass:: ChunkRecordingExecutor

spikeinterface.extractors
@@ -83,6 +82,7 @@ NEO-based
.. autofunction:: read_alphaomega_event
.. autofunction:: read_axona
.. autofunction:: read_biocam
.. autofunction:: read_binary
.. autofunction:: read_blackrock
.. autofunction:: read_ced
.. autofunction:: read_intan
@@ -104,6 +104,7 @@ NEO-based
.. autofunction:: read_spikegadgets
.. autofunction:: read_spikeglx
.. autofunction:: read_tdt
.. autofunction:: read_zarr


Non-NEO-based
@@ -216,8 +217,10 @@ spikeinterface.sorters
.. autofunction:: print_sorter_versions
.. autofunction:: get_sorter_description
.. autofunction:: run_sorter
.. autofunction:: run_sorter_jobs
.. autofunction:: run_sorters
.. autofunction:: run_sorter_by_property
.. autofunction:: read_sorter_folder

Low level
~~~~~~~~~
6 changes: 3 additions & 3 deletions doc/development/development.rst
@@ -14,15 +14,15 @@ There are various ways to contribute to SpikeInterface as a user or developer. S
* Writing unit tests to expand code coverage and use case scenarios.
* Reporting bugs and issues.

We use a forking workflow <https://www.atlassian.com/git/tutorials/comparing-workflows/forking-workflow>_ to manage contributions. Here's a summary of the steps involved, with more details available in the provided link:
We use a forking workflow `<https://www.atlassian.com/git/tutorials/comparing-workflows/forking-workflow>`_ to manage contributions. Here's a summary of the steps involved, with more details available in the provided link:

* Fork the SpikeInterface repository.
* Create a new branch (e.g., :code:`git switch -c my-contribution`).
* Modify the code, commit, and push changes to your fork.
* Open a pull request from the "Pull Requests" tab of your fork to :code:`spikeinterface/main`.
* By following this process, we can review the code and even make changes as necessary.

While we appreciate all the contributions please be mindful of the cost of reviewing pull requests <https://rgommers.github.io/2019/06/the-cost-of-an-open-source-contribution/>_ .
While we appreciate all the contributions, please be mindful of the cost of reviewing pull requests `<https://rgommers.github.io/2019/06/the-cost-of-an-open-source-contribution/>`_.


How to run tests locally
@@ -201,7 +201,7 @@ Implement a new extractor
SpikeInterface already supports over 30 file formats, but the acquisition system you use might not be among the
supported formats list (***ref***). Most of the extractors rely on the `NEO <https://github.com/NeuralEnsemble/python-neo>`_
package to read information from files.
Therefore, to implement a new extractor to handle the unsupported format, we recommend make a new `neo.rawio `_ class.
Therefore, to implement a new extractor to handle the unsupported format, we recommend make a new :code:`neo.rawio.BaseRawIO` class (see `example <https://github.com/NeuralEnsemble/python-neo/blob/master/neo/rawio/examplerawio.py#L44>`_).
Once that is done, the new class can be easily wrapped into SpikeInterface as an extension of the
:py:class:`~spikeinterface.extractors.neoextractors.neobaseextractors.NeoBaseRecordingExtractor`
(for :py:class:`~spikeinterface.core.BaseRecording` objects) or
Binary file added doc/images/plot_traces_ephyviewer.png
2 changes: 1 addition & 1 deletion doc/install_sorters.rst
@@ -117,7 +117,7 @@ Kilosort2.5

git clone https://github.com/MouseLand/Kilosort
# provide installation path by setting the KILOSORT2_5_PATH environment variable
# or using Kilosort2_5Sorter.set_kilosort2_path()
# or using Kilosort2_5Sorter.set_kilosort2_5_path()

* See also for Matlab/CUDA: https://www.mathworks.com/help/parallel-computing/gpu-support-by-release.html

43 changes: 20 additions & 23 deletions doc/modules/sorters.rst
@@ -239,7 +239,7 @@ There are three options:
1. **released PyPi version**: if you installed :code:`spikeinterface` with :code:`pip install spikeinterface`,
the latest released version will be installed in the container.

2. **development :code:`main` version**: if you installed :code:`spikeinterface` from source from the cloned repo
2. **development** :code:`main` **version**: if you installed :code:`spikeinterface` from source from the cloned repo
(with :code:`pip install .`) or with :code:`pip install git+https://github.com/SpikeInterface/spikeinterface.git`,
the current development version from the :code:`main` branch will be installed in the container.

@@ -285,27 +285,26 @@ Running several sorters in parallel

The :py:mod:`~spikeinterface.sorters` module also includes tools to run several spike sorting jobs
sequentially or in parallel. This can be done with the
:py:func:`~spikeinterface.sorters.run_sorters()` function by specifying
:py:func:`~spikeinterface.sorters.run_sorter_jobs()` function by specifying
an :code:`engine` that supports parallel processing (such as :code:`joblib` or :code:`slurm`).

.. code-block:: python
recordings = {'rec1' : recording, 'rec2': another_recording}
sorter_list = ['herdingspikes', 'tridesclous']
sorter_params = {
'herdingspikes': {'clustering_bandwidth' : 8},
'tridesclous': {'detect_threshold' : 5.},
}
sorting_output = run_sorters(sorter_list, recordings, working_folder='tmp_some_sorters',
mode_if_folder_exists='overwrite', sorter_params=sorter_params)
# here we run 2 sorters on 2 different recordings = 4 jobs
recording = ...
another_recording = ...
job_list = [
{'sorter_name': 'tridesclous', 'recording': recording, 'output_folder': 'folder1','detect_threshold': 5.},
{'sorter_name': 'tridesclous', 'recording': another_recording, 'output_folder': 'folder2', 'detect_threshold': 5.},
{'sorter_name': 'herdingspikes', 'recording': recording, 'output_folder': 'folder3', 'clustering_bandwidth': 8., 'docker_image': True},
{'sorter_name': 'herdingspikes', 'recording': another_recording, 'output_folder': 'folder4', 'clustering_bandwidth': 8., 'docker_image': True},
]
# run in loop
sortings = run_sorter_jobs(job_list, engine='loop')
# the output is a dict with (rec_name, sorter_name) as keys
for (rec_name, sorter_name), sorting in sorting_output.items():
print(rec_name, sorter_name, ':', sorting.get_unit_ids())
After the jobs are run, the :code:`sorting_outputs` is a dictionary with :code:`(rec_name, sorter_name)` as a key (e.g.
:code:`('rec1', 'tridesclous')` in this example), and the corresponding :py:class:`~spikeinterface.core.BaseSorting`
as a value.
:py:func:`~spikeinterface.sorters.run_sorters` has several "engines" available to launch the computation:

@@ -315,13 +314,11 @@

.. code-block:: python
run_sorters(sorter_list, recordings, engine='loop')
run_sorter_jobs(job_list, engine='loop')
run_sorters(sorter_list, recordings, engine='joblib',
engine_kwargs={'n_jobs': 2})
run_sorter_jobs(job_list, engine='joblib', engine_kwargs={'n_jobs': 2})
run_sorters(sorter_list, recordings, engine='slurm',
engine_kwargs={'cpus_per_task': 10, 'mem', '5G'})
run_sorter_jobs(job_list, engine='slurm', engine_kwargs={'cpus_per_task': 10, 'mem': '5G'})
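The job-list pattern above can be sketched in plain Python (no spikeinterface import needed); the sorter names and parameters are taken from this page's example, while the folder-naming scheme is an illustrative assumption:

```python
# Hypothetical sketch: build a run_sorter_jobs-style job list by crossing
# recordings with per-sorter parameters. Strings stand in for Recording objects.
recordings = {"rec1": "recording_a", "rec2": "recording_b"}
sorter_params = {
    "tridesclous": {"detect_threshold": 5.0},
    "herdingspikes": {"clustering_bandwidth": 8.0},
}

job_list = [
    {"sorter_name": sorter, "recording": rec,
     "output_folder": f"folder_{sorter}_{rec_name}", **params}
    for sorter, params in sorter_params.items()
    for rec_name, rec in recordings.items()
]
print(len(job_list))  # -> 4 jobs: 2 sorters x 2 recordings
```

Each dict in the list would then be consumed by :code:`run_sorter_jobs` with whichever engine is configured.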
Spike sorting by group
@@ -458,7 +455,7 @@ Here is the list of external sorters accessible using the run_sorter wrapper:
* **Kilosort** :code:`run_sorter('kilosort')`
* **Kilosort2** :code:`run_sorter('kilosort2')`
* **Kilosort2.5** :code:`run_sorter('kilosort2_5')`
* **Kilosort3** :code:`run_sorter('Kilosort3')`
* **Kilosort3** :code:`run_sorter('kilosort3')`
* **PyKilosort** :code:`run_sorter('pykilosort')`
* **Klusta** :code:`run_sorter('klusta')`
* **Mountainsort4** :code:`run_sorter('mountainsort4')`
@@ -474,7 +471,7 @@ Here is the list of external sorters accessible using the run_sorter wrapper:
Here is a list of internal sorters based on `spikeinterface.sortingcomponents`; they are totally
experimental for now:

* **Spyking circus2** :code:`run_sorter('spykingcircus2')`
* **Spyking Circus2** :code:`run_sorter('spykingcircus2')`
* **Tridesclous2** :code:`run_sorter('tridesclous2')`

In 2023, we expect to add many more sorters to this list.
4 changes: 2 additions & 2 deletions doc/modules/sortingcomponents.rst
@@ -223,7 +223,7 @@ Here is a short example that depends on the output of "Motion interpolation":
**Notes**:
* :code:`spatial_interpolation_method` "kriging" or "iwd" do not play a big role.
* :code:`border_mode` is a very important parameter. It controls how to deal with the border because motion causes units on the
* :code:`border_mode` is a very important parameter. It controls dealing with the border because motion causes units on the
border to not be present throughout the entire recording. We highly recommend the :code:`border_mode='remove_channels'`
because this removes channels on the border that will be impacted by drift. Of course the larger the motion is
the more channels are removed.
@@ -278,7 +278,7 @@ At the moment, there are five methods implemented:
* 'naive': a very naive implementation used as a reference for benchmarks
* 'tridesclous': the algorithm for template matching implemented in Tridesclous
* 'circus': the algorithm for template matching implemented in SpyKING-Circus
* 'circus-omp': a updated algorithm similar to SpyKING-Circus but with OMP (orthogonal macthing
* 'circus-omp': a updated algorithm similar to SpyKING-Circus but with OMP (orthogonal matching
pursuit)
* 'wobble' : an algorithm loosely based on YASS that scales template amplitudes and shifts them in time
to match detected spikes
42 changes: 41 additions & 1 deletion doc/modules/widgets.rst
@@ -14,6 +14,9 @@ Since version 0.95.0, the :py:mod:`spikeinterface.widgets` module supports multi
* | :code:`sortingview`: web-based and interactive rendering using the `sortingview <https://github.com/magland/sortingview>`_
| and `FIGURL <https://github.com/flatironinstitute/figurl>`_ packages.
Version 0.100.0 also comes with this new backend:
* | :code:`ephyviewer`: interactive Qt based using the `ephyviewer <https://ephyviewer.readthedocs.io/en/latest/>`_ package


Installing backends
-------------------
@@ -85,6 +88,28 @@ Finally, if you wish to set up another cloud provider, follow the instruction fr
`kachery-cloud <https://github.com/flatironinstitute/kachery-cloud>`_ package ("Using your own storage bucket").


ephyviewer
^^^^^^^^^^

This backend is Qt based with PyQt5, PyQt6 or PySide6 support. Qt is sometimes tedious to install.


For a pip-based installation, run:

.. code-block:: bash
pip install PySide6 ephyviewer
Anaconda users will have a better experience with this:

.. code-block:: bash
conda install pyqt=5
pip install ephyviewer
Usage
-----

@@ -215,6 +240,21 @@ For example, here is how to combine the timeseries and sorting summary generated
print(url)
ephyviewer
^^^^^^^^^^


The :code:`ephyviewer` backend is currently only available for the :py:func:`~spikeinterface.widgets.plot_traces()` function.


.. code-block:: python
plot_traces(recording, backend="ephyviewer", mode="line", show_channel_ids=True)
.. image:: ../images/plot_traces_ephyviewer.png



Available plotting functions
----------------------------
@@ -229,7 +269,7 @@ Available plotting functions
* :py:func:`~spikeinterface.widgets.plot_spikes_on_traces` (backends: :code:`matplotlib`, :code:`ipywidgets`)
* :py:func:`~spikeinterface.widgets.plot_template_metrics` (backends: :code:`matplotlib`, :code:`ipywidgets`, :code:`sortingview`)
* :py:func:`~spikeinterface.widgets.plot_template_similarity` (backends: :code:`matplotlib`, :code:`sortingview`)
* :py:func:`~spikeinterface.widgets.plot_timeseries` (backends: :code:`matplotlib`, :code:`ipywidgets`, :code:`sortingview`)
* :py:func:`~spikeinterface.widgets.plot_traces` (backends: :code:`matplotlib`, :code:`ipywidgets`, :code:`sortingview`, :code:`ephyviewer`)
* :py:func:`~spikeinterface.widgets.plot_unit_depths` (backends: :code:`matplotlib`)
* :py:func:`~spikeinterface.widgets.plot_unit_locations` (backends: :code:`matplotlib`, :code:`ipywidgets`, :code:`sortingview`)
* :py:func:`~spikeinterface.widgets.plot_unit_summary` (backends: :code:`matplotlib`)
4 changes: 2 additions & 2 deletions src/spikeinterface/comparison/basecomparison.py
@@ -262,11 +262,11 @@ def get_ordered_agreement_scores(self):
indexes = np.arange(scores.shape[1])
order1 = []
for r in range(scores.shape[0]):
possible = indexes[~np.in1d(indexes, order1)]
possible = indexes[~np.isin(indexes, order1)]
if possible.size > 0:
ind = np.argmax(scores.iloc[r, possible].values)
order1.append(possible[ind])
remain = indexes[~np.in1d(indexes, order1)]
remain = indexes[~np.isin(indexes, order1)]
order1.extend(remain)
scores = scores.iloc[:, order1]

2 changes: 1 addition & 1 deletion src/spikeinterface/comparison/comparisontools.py
@@ -538,7 +538,7 @@ def do_confusion_matrix(event_counts1, event_counts2, match_12, match_event_coun
matched_units2 = match_12[match_12 != -1].values

unmatched_units1 = match_12[match_12 == -1].index
unmatched_units2 = unit2_ids[~np.in1d(unit2_ids, matched_units2)]
unmatched_units2 = unit2_ids[~np.isin(unit2_ids, matched_units2)]

ordered_units1 = np.hstack([matched_units1, unmatched_units1])
ordered_units2 = np.hstack([matched_units2, unmatched_units2])
35 changes: 34 additions & 1 deletion src/spikeinterface/comparison/studytools.py
@@ -22,12 +22,45 @@
from spikeinterface.core.job_tools import fix_job_kwargs
from spikeinterface.extractors import NpzSortingExtractor
from spikeinterface.sorters import sorter_dict
from spikeinterface.sorters.launcher import iter_working_folder, iter_sorting_output
from spikeinterface.sorters.basesorter import is_log_ok


from .comparisontools import _perf_keys
from .paircomparisons import compare_sorter_to_ground_truth


# This is deprecated and will be removed
def iter_working_folder(working_folder):
working_folder = Path(working_folder)
for rec_folder in working_folder.iterdir():
if not rec_folder.is_dir():
continue
for output_folder in rec_folder.iterdir():
if (output_folder / "spikeinterface_job.json").is_file():
with open(output_folder / "spikeinterface_job.json", "r") as f:
job_dict = json.load(f)
rec_name = job_dict["rec_name"]
sorter_name = job_dict["sorter_name"]
yield rec_name, sorter_name, output_folder
else:
rec_name = rec_folder.name
sorter_name = output_folder.name
if not output_folder.is_dir():
continue
if not is_log_ok(output_folder):
continue
yield rec_name, sorter_name, output_folder


# This is deprecated and will be removed
def iter_sorting_output(working_folder):
"""Iterator over output_folder to retrieve all triplets of (rec_name, sorter_name, sorting)."""
for rec_name, sorter_name, output_folder in iter_working_folder(working_folder):
SorterClass = sorter_dict[sorter_name]
sorting = SorterClass.get_result_from_folder(output_folder)
yield rec_name, sorter_name, sorting


def setup_comparison_study(study_folder, gt_dict, **job_kwargs):
"""
Based on a dict of (recording, sorting) create the study folder.
2 changes: 1 addition & 1 deletion src/spikeinterface/core/baserecording.py
@@ -592,7 +592,7 @@ def _channel_slice(self, channel_ids, renamed_channel_ids=None):
def _remove_channels(self, remove_channel_ids):
from .channelslice import ChannelSliceRecording

new_channel_ids = self.channel_ids[~np.in1d(self.channel_ids, remove_channel_ids)]
new_channel_ids = self.channel_ids[~np.isin(self.channel_ids, remove_channel_ids)]
sub_recording = ChannelSliceRecording(self, new_channel_ids)
return sub_recording

2 changes: 1 addition & 1 deletion src/spikeinterface/core/basesnippets.py
@@ -139,7 +139,7 @@ def _channel_slice(self, channel_ids, renamed_channel_ids=None):
def _remove_channels(self, remove_channel_ids):
from .channelslice import ChannelSliceSnippets

new_channel_ids = self.channel_ids[~np.in1d(self.channel_ids, remove_channel_ids)]
new_channel_ids = self.channel_ids[~np.isin(self.channel_ids, remove_channel_ids)]
sub_recording = ChannelSliceSnippets(self, new_channel_ids)
return sub_recording

5 changes: 2 additions & 3 deletions src/spikeinterface/core/basesorting.py
@@ -346,7 +346,7 @@ def remove_units(self, remove_unit_ids):
"""
from spikeinterface import UnitsSelectionSorting

new_unit_ids = self.unit_ids[~np.in1d(self.unit_ids, remove_unit_ids)]
new_unit_ids = self.unit_ids[~np.isin(self.unit_ids, remove_unit_ids)]
new_sorting = UnitsSelectionSorting(self, new_unit_ids)
return new_sorting

@@ -473,8 +473,7 @@ def to_spike_vector(self, concatenated=True, extremum_channel_inds=None, use_cac
if not concatenated:
spikes_ = []
for segment_index in range(self.get_num_segments()):
s0 = np.searchsorted(spikes["segment_index"], segment_index, side="left")
s1 = np.searchsorted(spikes["segment_index"], segment_index + 1, side="left")
s0, s1 = np.searchsorted(spikes["segment_index"], [segment_index, segment_index + 1], side="left")
spikes_.append(spikes[s0:s1])
spikes = spikes_
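The :code:`searchsorted` change above collapses two scans of the spike vector into one call; a minimal sketch on a toy segment-index array (values are made up for illustration):

```python
import numpy as np

# sorted segment index per spike, as in the structured spike vector
segment_index = np.array([0, 0, 0, 1, 1, 2])

# passing both boundaries in a single call replaces two separate
# searchsorted calls; spikes of segment 1 live in the slice [s0, s1)
s0, s1 = np.searchsorted(segment_index, [1, 2], side="left")
print(s0, s1)  # -> 3 5
```

Since the array is scanned once per call, batching the two queries is both shorter and marginally cheaper.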
