
Implement synchrony metrics #1951

Merged
merged 11 commits on Sep 7, 2023
2 changes: 2 additions & 0 deletions doc/modules/qualitymetrics/references.rst
@@ -11,6 +11,8 @@ References

.. [Hruschka] Hruschka, E.R., de Castro, L.N., Campello R.J.G.B. "Evolutionary algorithms for clustering gene-expression data." Fourth IEEE International Conference on Data Mining (ICDM'04) 2004, pp 403-406.

.. [Gruen] Sonja Grün, Moshe Abeles, and Markus Diesmann. Impact of higher-order correlations on coincidence distributions of massively parallel data. In International School on Neural Networks, Initiated by IIASS and EMFCSC, volume 5286, 96–114. Springer, 2007.

.. [IBL] International Brain Laboratory. “Spike sorting pipeline for the International Brain Laboratory”. 4 May 2022.

.. [Jackson] Jadin Jackson, Neil Schmitzer-Torbert, K.D. Harris, and A.D. Redish. Quantitative assessment of extracellular multichannel recording quality using measures of cluster separation. Soc Neurosci Abstr, 518, 01 2005.
49 changes: 49 additions & 0 deletions doc/modules/qualitymetrics/synchrony.rst
@@ -0,0 +1,49 @@
Synchrony Metrics (:code:`synchrony`)
=====================================

Calculation
-----------
This function provides a metric for the presence of synchronous spiking events across multiple spike trains.

The complexity is used to characterize synchronous events within the same spike train and across different spike
trains. This way, synchronous events can be found both in multi-unit and single-unit spike trains.
Complexity is calculated by counting the number of spikes (i.e. non-empty bins) that occur at the same sample index,
within and across spike trains.

Synchrony metrics can be computed for different synchrony sizes (>1), defining the number of simultaneous spikes to count.
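To illustrate the idea (a minimal sketch with made-up spike times, not the SpikeInterface implementation), the complexity can be obtained by histogramming all spike sample indices with one-sample-wide bins, so each bin value is the number of coincident spikes at that sample:

```python
import numpy as np

# Hypothetical spike sample indices pooled from all units in one segment
sample_indices = np.array([10, 10, 10, 25, 42, 42, 99])
num_samples = 100

# One bin per sample: the histogram value at a sample is its "complexity",
# i.e. how many spikes (across all units) land on that exact sample
bins = np.arange(0, num_samples + 1)
complexity = np.histogram(sample_indices, bins)[0]

print(complexity[10])  # 3 spikes coincide at sample 10
print(complexity[42])  # 2 spikes coincide at sample 42
```

A synchrony event of size ``s`` is then any spike whose complexity value is at least ``s``.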



Expectation and use
-------------------

A larger value indicates a higher synchrony of the respective spike train with the other spike trains.
Higher values, especially for large synchrony sizes, indicate a higher probability of noisy spikes in the spike train.
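For example, a simple (hypothetical) screening step could flag units whose high-order synchrony rate exceeds a threshold; the values and the 5% cutoff below are made up for illustration only:

```python
# Hypothetical per-unit values of "sync_spike_8": the fraction of a unit's
# spikes that participate in events of 8 or more simultaneous spikes
sync_spike_8 = {"unit0": 0.001, "unit1": 0.2, "unit2": 0.0}

# Made-up screening threshold: flag units whose high-order synchrony rate
# exceeds 5% as candidates for noise contamination
noisy_units = [unit for unit, rate in sync_spike_8.items() if rate > 0.05]
print(noisy_units)  # ['unit1']
```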

Example code
------------

.. code-block:: python

    import spikeinterface.qualitymetrics as qm
    # Create a waveform extractor (wvf_extractor) from your recording and sorting.
    synchrony = qm.compute_synchrony_metrics(wvf_extractor, synchrony_sizes=(2, 4, 8))
    # synchrony is a named tuple of dicts with the synchrony metrics for each unit
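The shape of the return value can be illustrated with a standalone sketch (the numbers below are made up; only the structure matches the function's output, a named tuple with one ``sync_spike_{size}`` field per requested size, each a dict keyed by unit id):

```python
from collections import namedtuple

# Illustration of the returned structure with hypothetical values:
# one field per synchrony size, each mapping unit id -> synchrony rate
SynchronyMetrics = namedtuple("synchrony_metrics", ["sync_spike_2", "sync_spike_4", "sync_spike_8"])
metrics = SynchronyMetrics(
    sync_spike_2={0: 0.10, 1: 0.05},
    sync_spike_4={0: 0.01, 1: 0.00},
    sync_spike_8={0: 0.00, 1: 0.00},
)

# Individual metrics are read out by field name and unit id
rate = metrics.sync_spike_2[0]
```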


Links to original implementations
---------------------------------

The SpikeInterface implementation is a partial port of the low-level complexity functions from `Elephant - Electrophysiology Analysis Toolkit <https://github.com/NeuralEnsemble/elephant/blob/master/elephant/spike_train_synchrony.py#L245>`_

References
----------

.. automodule:: spikeinterface.qualitymetrics.misc_metrics

.. autofunction:: compute_synchrony_metrics

Literature
----------

Based on concepts described in Gruen_
69 changes: 69 additions & 0 deletions src/spikeinterface/qualitymetrics/misc_metrics.py
@@ -498,6 +498,75 @@ def compute_sliding_rp_violations(
)


def compute_synchrony_metrics(waveform_extractor, synchrony_sizes=(2, 4, 8), **kwargs):
    """
    Compute synchrony metrics. Synchrony metrics represent the rate of occurrences of
    "synchrony_size" spikes at the exact same sample index.

    Parameters
    ----------
    waveform_extractor : WaveformExtractor
        The waveform extractor object.
    synchrony_sizes : list or tuple, default: (2, 4, 8)
        The synchrony sizes to compute.

    Returns
    -------
    sync_spike_{X} : dict
        The synchrony metric for synchrony size X.
        One metric is returned per entry in synchrony_sizes.

    References
    ----------
    Based on concepts described in [Gruen]_
    This code was adapted from `Elephant - Electrophysiology Analysis Toolkit <https://github.com/NeuralEnsemble/elephant/blob/master/elephant/spike_train_synchrony.py#L245>`_
    """
    # use the builtin all(): np.all() on a generator always evaluates to True
    assert all(s > 1 for s in synchrony_sizes), "Synchrony sizes must be greater than 1"
    spike_counts = waveform_extractor.sorting.count_num_spikes_per_unit()
    sorting = waveform_extractor.sorting
    spikes = sorting.to_spike_vector(concatenated=False)

    # pre-allocate synchrony counts
    synchrony_counts = {}
    for synchrony_size in synchrony_sizes:
        synchrony_counts[synchrony_size] = np.zeros(len(waveform_extractor.unit_ids), dtype=np.int64)

    for segment_index in range(sorting.get_num_segments()):
        num_samples = waveform_extractor.get_num_samples(segment_index)
        spikes_in_segment = spikes[segment_index]

        # we compute the complexity as a histogram with a single sample per bin
        bins = np.arange(0, num_samples + 1)
        complexity = np.histogram(spikes_in_segment["sample_index"], bins)[0]

        # add counts for this segment
        for unit_index in np.arange(len(sorting.unit_ids)):
            spikes_per_unit = spikes_in_segment[spikes_in_segment["unit_index"] == unit_index]
            # some segments/units might have no spikes
            if len(spikes_per_unit) == 0:
                continue
            spike_complexity = complexity[spikes_per_unit["sample_index"]]
            for synchrony_size in synchrony_sizes:
                synchrony_counts[synchrony_size][unit_index] += np.count_nonzero(spike_complexity >= synchrony_size)

    # normalize the accumulated counts by each unit's total number of spikes
    synchrony_metrics_dict = {
        f"sync_spike_{synchrony_size}": {
            unit_id: synchrony_counts[synchrony_size][unit_index] / spike_counts[unit_id]
            for unit_index, unit_id in enumerate(sorting.unit_ids)
        }
        for synchrony_size in synchrony_sizes
    }

    # convert dict to named tuple
    synchrony_metrics_tuple = namedtuple("synchrony_metrics", synchrony_metrics_dict.keys())
    synchrony_metrics = synchrony_metrics_tuple(**synchrony_metrics_dict)
    return synchrony_metrics


# default sizes must satisfy the > 1 assertion above, so they match the function default
_default_params["synchrony_metrics"] = dict(synchrony_sizes=(2, 4, 8))


def compute_amplitude_cutoffs(
waveform_extractor,
peak_sign="neg",
2 changes: 2 additions & 0 deletions src/spikeinterface/qualitymetrics/quality_metric_list.py
@@ -11,6 +11,7 @@
compute_amplitude_cutoffs,
compute_amplitude_medians,
compute_drift_metrics,
compute_synchrony_metrics,
)

from .pca_metrics import (
@@ -39,5 +40,6 @@
"sliding_rp_violation": compute_sliding_rp_violations,
"amplitude_cutoff": compute_amplitude_cutoffs,
"amplitude_median": compute_amplitude_medians,
"synchrony": compute_synchrony_metrics,
"drift": compute_drift_metrics,
}