Merge branch 'main' into extended-template-metrics
alejoe91 authored Oct 6, 2023
2 parents 4e3140f + a2d27ff commit 3a1f540
Showing 85 changed files with 1,928 additions and 390 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -188,3 +188,4 @@ test_folder/

# Mac OS
.DS_Store
+test_data.json
18 changes: 10 additions & 8 deletions README.md
@@ -59,15 +59,17 @@ With SpikeInterface, users can:
- post-process sorted datasets.
- compare and benchmark spike sorting outputs.
- compute quality metrics to validate and curate spike sorting outputs.
-- visualize recordings and spike sorting outputs in several ways (matplotlib, sortingview, in jupyter)
-- export report and export to phy
-- offer a powerful Qt-based viewer in separate package [spikeinterface-gui](https://github.com/SpikeInterface/spikeinterface-gui)
-- have some powerful sorting components to build your own sorter.
+- visualize recordings and spike sorting outputs in several ways (matplotlib, sortingview, jupyter, ephyviewer)
+- export a report and/or export to phy
+- offer a powerful Qt-based viewer in a separate package [spikeinterface-gui](https://github.com/SpikeInterface/spikeinterface-gui)
+- have powerful sorting components to build your own sorter.


## Documentation

-Detailed documentation for spikeinterface can be found [here](https://spikeinterface.readthedocs.io/en/latest).
+Detailed documentation of the latest PyPI release of SpikeInterface can be found [here](https://spikeinterface.readthedocs.io/en/0.98.2).
+
+Detailed documentation of the development version of SpikeInterface can be found [here](https://spikeinterface.readthedocs.io/en/latest).

Several tutorials to get started can be found in [spiketutorials](https://github.com/SpikeInterface/spiketutorials).

@@ -77,9 +79,9 @@ and sorting components.
You can also have a look at the [spikeinterface-gui](https://github.com/SpikeInterface/spikeinterface-gui).


-## How to install spikeinteface
+## How to install spikeinterface

-You can install the new `spikeinterface` version with pip:
+You can install the latest version of `spikeinterface` with pip:

```bash
pip install spikeinterface[full]
```

@@ -94,7 +96,7 @@ To install all interactive widget backends, you can use:


-To get the latest updates, you can install `spikeinterface` from sources:
+To get the latest updates, you can install `spikeinterface` from source:

```bash
git clone https://github.com/SpikeInterface/spikeinterface.git
```
4 changes: 2 additions & 2 deletions doc/how_to/load_matlab_data.rst
@@ -54,7 +54,7 @@ Use the following Python script to load the binary data into SpikeInterface:
dtype = "float64" # MATLAB's double corresponds to Python's float64
# Load data using SpikeInterface
-recording = si.read_binary(file_path, sampling_frequency=sampling_frequency,
+recording = si.read_binary(file_paths=file_path, sampling_frequency=sampling_frequency,
num_channels=num_channels, dtype=dtype)
# Confirm that the data was loaded correctly by comparing the data shapes and checking that they match the MATLAB data
@@ -86,7 +86,7 @@ If your data in MATLAB is stored as :code:`int16`, and you know the gain and offset
gain_to_uV = 0.195 # Adjust according to your MATLAB dataset
offset_to_uV = 0 # Adjust according to your MATLAB dataset
-recording = si.read_binary(file_path, sampling_frequency=sampling_frequency,
+recording = si.read_binary(file_paths=file_path, sampling_frequency=sampling_frequency,
num_channels=num_channels, dtype=dtype_int,
gain_to_uV=gain_to_uV, offset_to_uV=offset_to_uV)
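Once the gain and offset are set, traces can be requested in microvolts. A minimal sketch, assuming the standard :code:`return_scaled` argument of :code:`get_traces()`:

.. code-block:: python

    # gain_to_uV and offset_to_uV are applied when return_scaled=True
    scaled_traces = recording.get_traces(start_frame=0, end_frame=1000, return_scaled=True)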
36 changes: 18 additions & 18 deletions doc/modules/curation.rst
@@ -24,21 +24,21 @@ The merging and splitting operations are handled by the :py:class:`~spikeinterface.curation.CurationSorting`
from spikeinterface.curation import CurationSorting
-sorting = run_sorter('kilosort2', recording)
+sorting = run_sorter(sorter_name='kilosort2', recording=recording)
-cs = CurationSorting(sorting)
+cs = CurationSorting(parent_sorting=sorting)
# make a first merge
-cs.merge(['#1', '#5', '#15'])
+cs.merge(units_to_merge=['#1', '#5', '#15'])
# make a second merge
-cs.merge(['#11', '#21'])
+cs.merge(units_to_merge=['#11', '#21'])
# make a split
split_index = ... # some criteria on spikes
-cs.split('#20', split_index)
+cs.split(split_unit_id='#20', indices_list=split_index)
-# here the final clean sorting
+# here is the final clean sorting
clean_sorting = cs.sorting
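As a quick sanity check (a sketch, not part of the original snippet), the curated unit ids can be inspected:

.. code-block:: python

    # merged and split units get new ids in the curated sorting
    print(clean_sorting.get_unit_ids())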
@@ -60,20 +60,20 @@ merges. Therefore, it has many parameters and options.
from spikeinterface.curation import MergeUnitsSorting, get_potential_auto_merge
-sorting = run_sorter('kilosort', recording)
+sorting = run_sorter(sorter_name='kilosort', recording=recording)
-we = extract_waveforms(recording, sorting, folder='wf_folder')
+we = extract_waveforms(recording=recording, sorting=sorting, folder='wf_folder')
# merges is a list of lists, with unit_ids to be merged.
-merges = get_potential_auto_merge(we, minimum_spikes=1000, maximum_distance_um=150.,
+merges = get_potential_auto_merge(waveform_extractor=we, minimum_spikes=1000, maximum_distance_um=150.,
peak_sign="neg", bin_ms=0.25, window_ms=100.,
corr_diff_thresh=0.16, template_diff_thresh=0.25,
censored_period_ms=0., refractory_period_ms=1.0,
contamination_threshold=0.2, num_channels=5, num_shift=5,
firing_contamination_balance=1.5)
# here we apply the merges
-clean_sorting = MergeUnitsSorting(sorting, merges)
+clean_sorting = MergeUnitsSorting(parent_sorting=sorting, units_to_merge=merges)
Manual curation with sorting view
@@ -98,24 +98,24 @@ The manual curation (including merges and labels) can be applied to a SpikeInterface sorting
from spikeinterface.widgets import plot_sorting_summary
# run a sorter and export waveforms
-sorting = run_sorter('kilosort2', recording)
-we = extract_waveforms(recording, sorting, folder='wf_folder')
+sorting = run_sorter(sorter_name='kilosort2', recording=recording)
+we = extract_waveforms(recording=recording, sorting=sorting, folder='wf_folder')
# some postprocessing is required
-_ = compute_spike_amplitudes(we)
-_ = compute_unit_locations(we)
-_ = compute_template_similarity(we)
-_ = compute_correlograms(we)
+_ = compute_spike_amplitudes(waveform_extractor=we)
+_ = compute_unit_locations(waveform_extractor=we)
+_ = compute_template_similarity(waveform_extractor=we)
+_ = compute_correlograms(waveform_extractor=we)
# This loads the data to the cloud for web-based plotting and sharing
-plot_sorting_summary(we, curation=True, backend='sortingview')
+plot_sorting_summary(waveform_extractor=we, curation=True, backend='sortingview')
# we open the printed link URL in a browser
# - make manual merges and labeling
# - from the curation box, click on "Save as snapshot (sha1://)"
# copy the uri
sha_uri = "sha1://59feb326204cf61356f1a2eb31f04d8e0177c4f1"
-clean_sorting = apply_sortingview_curation(sorting, uri_or_json=sha_uri)
+clean_sorting = apply_sortingview_curation(sorting=sorting, uri_or_json=sha_uri)
Note that you can also "Export as JSON" and pass the JSON file as the :code:`uri_or_json` parameter, as sketched below.
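A minimal sketch of that JSON variant (the file name here is hypothetical):

.. code-block:: python

    # apply a curation that was exported from sortingview as a local JSON file
    clean_sorting = apply_sortingview_curation(sorting=sorting, uri_or_json="curation.json")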

19 changes: 9 additions & 10 deletions doc/modules/exporters.rst
@@ -28,15 +28,14 @@ The input of the :py:func:`~spikeinterface.exporters.export_to_phy` is a :code:`WaveformExtractor`
from spikeinterface.exporters import export_to_phy
# the waveforms are sparse so it is faster to export to phy
-folder = 'waveforms'
-we = extract_waveforms(recording, sorting, folder, sparse=True)
+we = extract_waveforms(recording=recording, sorting=sorting, folder='waveforms', sparse=True)
# some computations are done beforehand to control all options
-compute_spike_amplitudes(we)
-compute_principal_components(we, n_components=3, mode='by_channel_global')
+compute_spike_amplitudes(waveform_extractor=we)
+compute_principal_components(waveform_extractor=we, n_components=3, mode='by_channel_global')
# the export process is fast because everything is pre-computed
-export_to_phy(we, output_folder='path/to/phy_folder')
+export_to_phy(waveform_extractor=we, output_folder='path/to/phy_folder')
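The exported folder can then be opened with the phy template GUI (assuming phy is installed in the environment; the path mirrors the :code:`output_folder` above):

.. code-block:: bash

    phy template-gui path/to/phy_folder/params.py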
@@ -72,12 +71,12 @@ with many units!
# the waveforms are sparse for more interpretable figures
-we = extract_waveforms(recording, sorting, folder='path/to/wf', sparse=True)
+we = extract_waveforms(recording=recording, sorting=sorting, folder='path/to/wf', sparse=True)
# some computations are done beforehand to control all options
-compute_spike_amplitudes(we)
-compute_correlograms(we)
-compute_quality_metrics(we, metric_names=['snr', 'isi_violation', 'presence_ratio'])
+compute_spike_amplitudes(waveform_extractor=we)
+compute_correlograms(waveform_extractor=we)
+compute_quality_metrics(waveform_extractor=we, metric_names=['snr', 'isi_violation', 'presence_ratio'])
# the export process
-export_report(we, output_folder='path/to/spikeinterface-report-folder')
+export_report(waveform_extractor=we, output_folder='path/to/spikeinterface-report-folder')
23 changes: 13 additions & 10 deletions doc/modules/extractors.rst
@@ -13,11 +13,12 @@ Most of the :code:`Recording` classes are implemented by wrapping the NEO rawio classes

Most of the :code:`Sorting` classes are instead directly implemented in SpikeInterface.


Although SpikeInterface is object-oriented (class-based), each object can also be loaded with a convenient
:code:`read_XXXXX()` function.

.. code-block:: python
import spikeinterface.extractors as se
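For example, both access styles load the same object (a sketch using the MEArec format that appears below):

.. code-block:: python

    # class-based and function-based access are equivalent
    recording = se.MEArecRecordingExtractor(file_path="mearec_file.h5")
    recording = se.read_mearec(file_path="mearec_file.h5")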
Read one Recording
@@ -27,40 +28,42 @@ Every format can be read with a simple function:

.. code-block:: python
recording_oe = read_openephys("open-ephys-folder")
recording_oe = read_openephys(folder_path="open-ephys-folder")
recording_spikeglx = read_spikeglx("spikeglx-folder")
recording_spikeglx = read_spikeglx(folder_path="spikeglx-folder")
recording_blackrock = read_blackrock("blackrock-folder")
recording_blackrock = read_blackrock(folder_path="blackrock-folder")
recording_mearec = read_mearec("mearec_file.h5")
recording_mearec = read_mearec(file_path="mearec_file.h5")
Importantly, some formats directly handle the probe information:

.. code-block:: python
recording_spikeglx = read_spikeglx("spikeglx-folder")
recording_spikeglx = read_spikeglx(folder_path="spikeglx-folder")
print(recording_spikeglx.get_probe())
recording_mearec = read_mearec("mearec_file.h5")
recording_mearec = read_mearec(file_path="mearec_file.h5")
print(recording_mearec.get_probe())
Read one Sorting
----------------

.. code-block:: python
sorting_KS = read_kilosort("kilosort-folder")
sorting_KS = read_kilosort(folder_path="kilosort-folder")
Read one Event
--------------

.. code-block:: python
-events_OE = read_openephys_event("open-ephys-folder")
+events_OE = read_openephys_event(folder_path="open-ephys-folder")
For a comprehensive list of compatible technologies, see :ref:`compatible_formats`.
@@ -77,7 +80,7 @@ The actual reading will be done on demand using the :py:meth:`~spikeinterface.core.BaseRecording.get_traces` method
.. code-block:: python
# opening a 40GB SpikeGLX dataset is fast
recording_spikeglx = read_spikeglx("spikeglx-folder")
recording_spikeglx = read_spikeglx(folder_path="spikeglx-folder")
# this really does load the full 40GB into memory: not recommended!
traces = recording_spikeglx.get_traces(start_frame=None, end_frame=None, return_scaled=False)
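The recommended pattern is to read only the chunk you need; a sketch (the frame numbers are illustrative):

.. code-block:: python

    # reading one second at 30 kHz only pulls that slice from disk
    traces_chunk = recording_spikeglx.get_traces(start_frame=0, end_frame=30000, return_scaled=False)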
46 changes: 25 additions & 21 deletions doc/modules/motion_correction.rst
@@ -77,12 +77,12 @@ We currently have 3 presets:
.. code-block:: python
# read and preprocess
-rec = read_spikeglx('/my/Neuropixel/recording')
-rec = bandpass_filter(rec)
-rec = common_reference(rec)
+rec = read_spikeglx(folder_path='/my/Neuropixel/recording')
+rec = bandpass_filter(recording=rec)
+rec = common_reference(recording=rec)
# then correction is one line of code
rec_corrected = correct_motion(rec, preset="nonrigid_accurate")
rec_corrected = correct_motion(recording=rec, preset="nonrigid_accurate")
The process is quite long due to the first two steps (activity profile + motion inference).
But the returned :code:`rec_corrected` is a lazy recording object that will interpolate traces on the fly.
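Because interpolation is lazy, the corrected traces can be materialized explicitly when needed. A sketch, assuming the generic :code:`save()` method of recording objects (the folder path is a placeholder):

.. code-block:: python

    # interpolation is applied chunk by chunk while writing to disk
    rec_corrected_saved = rec_corrected.save(folder='/my/corrected/recording', n_jobs=10, chunk_duration='1s')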
@@ -94,20 +94,20 @@ If you want to use other presets, this is as easy as:
.. code-block:: python
# mimic kilosort motion
rec_corrected = correct_motion(rec, preset="kilosort_like")
rec_corrected = correct_motion(recording=rec, preset="kilosort_like")
# super but less accurate and rigid
rec_corrected = correct_motion(rec, preset="rigid_fast")
rec_corrected = correct_motion(recording=rec, preset="rigid_fast")
Optionally any parameter from the preset can be overwritten:

.. code-block:: python
rec_corrected = correct_motion(rec, preset="nonrigid_accurate",
rec_corrected = correct_motion(recording=rec, preset="nonrigid_accurate",
detect_kwargs=dict(
detect_threshold=10.),
-estimate_motion_kwargs=dic(
+estimate_motion_kwargs=dict(
histogram_depth_smooth_um=8.,
time_horizon_s=120.,
),
@@ -123,7 +123,7 @@ and checking. The folder will contain the motion vector itself of course but also
.. code-block:: python
motion_folder = '/somewhere/to/save/the/motion'
rec_corrected = correct_motion(rec, preset="nonrigid_accurate", folder=motion_folder)
rec_corrected = correct_motion(recording=rec, preset="nonrigid_accurate", folder=motion_folder)
# and then
motion_info = load_motion_info(motion_folder)
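A sketch of what comes back (the key names are taken from the usage later in this document):

.. code-block:: python

    # a dict holding the motion vector, its bins, and the parameters used
    print(motion_info.keys())  # includes 'motion', 'temporal_bins', 'spatial_bins', 'parameters'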
@@ -156,14 +156,16 @@ The high-level :py:func:`~spikeinterface.preprocessing.correct_motion()` is internally equivalent to this:
job_kwargs = dict(chunk_duration="1s", n_jobs=20, progress_bar=True)
# Step 1: activity profile
-peaks = detect_peaks(rec, method="locally_exclusive", detect_threshold=8.0, **job_kwargs)
+peaks = detect_peaks(recording=rec, method="locally_exclusive", detect_threshold=8.0, **job_kwargs)
# (optional) sub-select some peaks to speed up the localization
-peaks = select_peaks(peaks, ...)
-peak_locations = localize_peaks(rec, peaks, method="monopolar_triangulation", radius_um=75.0,
+peaks = select_peaks(peaks=peaks, ...)
+peak_locations = localize_peaks(recording=rec, peaks=peaks, method="monopolar_triangulation", radius_um=75.0,
max_distance_um=150.0, **job_kwargs)
# Step 2: motion inference
-motion, temporal_bins, spatial_bins = estimate_motion(rec, peaks, peak_locations,
+motion, temporal_bins, spatial_bins = estimate_motion(recording=rec,
+peaks=peaks,
+peak_locations=peak_locations,
method="decentralized",
direction="y",
bin_duration_s=2.0,
@@ -173,7 +175,9 @@ The high-level :py:func:`~spikeinterface.preprocessing.correct_motion()` is internally equivalent to this:
# Step 3: motion interpolation
# this step is lazy
-rec_corrected = interpolate_motion(rec, motion, temporal_bins, spatial_bins,
+rec_corrected = interpolate_motion(recording=rec, motion=motion,
+temporal_bins=temporal_bins,
+spatial_bins=spatial_bins,
border_mode="remove_channels",
spatial_interpolation_method="kriging",
sigma_um=30.)
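The interpolated recording behaves like any other recording, so it can be fed straight into a sorter (the sorter name here is just an example):

.. code-block:: python

    from spikeinterface.sorters import run_sorter

    sorting = run_sorter(sorter_name="kilosort2_5", recording=rec_corrected)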
@@ -196,20 +200,20 @@ different preprocessing chains: one for motion correction and one for spike sorting

.. code-block:: python
-raw_rec = read_spikeglx(...)
+raw_rec = read_spikeglx(folder_path='/spikeglx_folder')
# preprocessing 1: bandpass (this is smoother) + cmr
-rec1 = si.bandpass_filter(raw_rec, freq_min=300., freq_max=5000.)
-rec1 = si.common_reference(rec1, reference='global', operator='median')
+rec1 = si.bandpass_filter(recording=raw_rec, freq_min=300., freq_max=5000.)
+rec1 = si.common_reference(recording=rec1, reference='global', operator='median')
# here the corrected recording is based on preprocessing 1
# rec_corrected1 will not be used for sorting!
motion_folder = '/my/folder'
-rec_corrected1 = correct_motion(rec1, preset="nonrigid_accurate", folder=motion_folder)
+rec_corrected1 = correct_motion(recording=rec1, preset="nonrigid_accurate", folder=motion_folder)
# preprocessing 2: highpass + cmr
-rec2 = si.highpass_filter(raw_rec, freq_min=300.)
-rec2 = si.common_reference(rec2, reference='global', operator='median')
+rec2 = si.highpass_filter(recording=raw_rec, freq_min=300.)
+rec2 = si.common_reference(recording=rec2, reference='global', operator='median')
# we use another preprocessing for the final interpolation
motion_info = load_motion_info(motion_folder)
@@ -220,7 +224,7 @@ different preprocessing chains: one for motion correction and one for spike sorting
spatial_bins=motion_info['spatial_bins'],
**motion_info['parameters']['interpolate_motion_kwargs'])
sorting = run_sorter("montainsort5", rec_corrected2)
sorting = run_sorter(sorter_name="montainsort5", recording=rec_corrected2)
References