Merge branch 'run_sorter_jobs' of github.com:samuelgarcia/spikeinterface into gt_study
samuelgarcia committed Sep 13, 2023
2 parents f97f76a + 1b28837 commit cf9a3b5
Showing 26 changed files with 1,493 additions and 268 deletions.
21 changes: 21 additions & 0 deletions .github/actions/install-wine/action.yml
@@ -0,0 +1,21 @@
name: Install packages
description: This action installs the package and its dependencies for testing

inputs:
  python-version:
    description: 'Python version to set up'
    required: false
  os:
    description: 'Operating system to set up'
    required: false

runs:
  using: "composite"
  steps:
    - name: Install wine (needed for Plexon2)
      run: |
        sudo rm -f /etc/apt/sources.list.d/microsoft-prod.list
        sudo dpkg --add-architecture i386
        sudo apt-get update -qq
        sudo apt-get install -yqq --allow-downgrades libc6:i386 libgcc-s1:i386 libstdc++6:i386 wine
      shell: bash
7 changes: 7 additions & 0 deletions .github/workflows/full-test.yml
@@ -75,6 +75,10 @@ jobs:
echo "Extractors changed"
echo "EXTRACTORS_CHANGED=true" >> $GITHUB_OUTPUT
fi
if [[ $file == *"plexon2"* ]]; then
echo "Plexon2 changed"
echo "PLEXON2_CHANGED=true" >> $GITHUB_OUTPUT
fi
if [[ $file == *"/preprocessing/"* ]]; then
echo "Preprocessing changed"
echo "PREPROCESSING_CHANGED=true" >> $GITHUB_OUTPUT
@@ -122,6 +126,9 @@ jobs:
done
- name: Set execute permissions on run_tests.sh
run: chmod +x .github/run_tests.sh
- name: Install Wine (Plexon2)
if: ${{ steps.modules-changed.outputs.PLEXON2_CHANGED == 'true' }}
uses: ./.github/actions/install-wine
- name: Test core
run: ./.github/run_tests.sh core
- name: Test extractors
2 changes: 1 addition & 1 deletion .pre-commit-config.yaml
@@ -6,7 +6,7 @@ repos:
- id: end-of-file-fixer
- id: trailing-whitespace
- repo: https://github.com/psf/black
rev: 23.7.0
rev: 23.9.1
hooks:
- id: black
files: ^src/
2 changes: 2 additions & 0 deletions doc/api.rst
@@ -91,6 +91,7 @@ NEO-based
.. autofunction:: read_mcsraw
.. autofunction:: read_neuralynx
.. autofunction:: read_neuralynx_sorting
.. autofunction:: read_neuroexplorer
.. autofunction:: read_neuroscope
.. autofunction:: read_nix
.. autofunction:: read_openephys
@@ -102,6 +103,7 @@ NEO-based
.. autofunction:: read_spikeglx
.. autofunction:: read_tdt


Non-NEO-based
~~~~~~~~~~~~~
.. automodule:: spikeinterface.extractors
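
A hedged usage sketch for the newly documented :code:`read_neuroexplorer` reader; the file path is
hypothetical, and the single-argument call is assumed to follow the pattern of the other NEO-based
readers listed above:

.. code-block:: python

    from spikeinterface.extractors import read_neuroexplorer

    # hypothetical path to a NeuroExplorer (.nex) file
    recording = read_neuroexplorer("/path/to/data.nex")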
2 changes: 2 additions & 0 deletions doc/modules/qualitymetrics/references.rst
@@ -11,6 +11,8 @@ References
.. [Hruschka] Hruschka, E.R., de Castro, L.N., Campello R.J.G.B. "Evolutionary algorithms for clustering gene-expression data." Fourth IEEE International Conference on Data Mining (ICDM'04) 2004, pp 403-406.
.. [Gruen] Sonja Grün, Moshe Abeles, and Markus Diesmann. Impact of higher-order correlations on coincidence distributions of massively parallel data. In International School on Neural Networks, Initiated by IIASS and EMFCSC, volume 5286, 96–114. Springer, 2007.
.. [IBL] International Brain Laboratory. “Spike sorting pipeline for the International Brain Laboratory”. 4 May 2022.
.. [Jackson] Jadin Jackson, Neil Schmitzer-Torbert, K.D. Harris, and A.D. Redish. Quantitative assessment of extracellular multichannel recording quality using measures of cluster separation. Soc Neurosci Abstr, 518, 01 2005.
49 changes: 49 additions & 0 deletions doc/modules/qualitymetrics/synchrony.rst
@@ -0,0 +1,49 @@
Synchrony Metrics (:code:`synchrony`)
=====================================

Calculation
-----------
This function provides a metric for the presence of synchronous spiking events across multiple spike trains.

The complexity is used to characterize synchronous events within the same spike train and across different spike
trains. This way, synchronous events can be found both in multi-unit and single-unit spike trains.
Complexity is calculated by counting the number of spikes (i.e. non-empty bins) that occur at the same sample index,
within and across spike trains.

Synchrony metrics can be computed for different synchrony sizes (>1), defining the number of simultaneous spikes to count.
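
A minimal, hypothetical sketch of the counting idea described above, using plain NumPy (the exact
aggregation and normalization used by :code:`compute_synchrony_metrics` may differ):

.. code-block:: python

    import numpy as np

    # toy spike trains, given as sample indices for three units
    spike_samples = {
        "unit0": np.array([10, 20, 30, 40]),
        "unit1": np.array([10, 25, 30]),
        "unit2": np.array([10, 50]),
    }

    # complexity of each occupied sample index = number of spikes landing on that index
    all_samples = np.concatenate(list(spike_samples.values()))
    occupied_samples, complexity = np.unique(all_samples, return_counts=True)

    # for each synchrony size, fraction of a unit's spikes that fall into
    # events of complexity >= size
    for size in (2, 4, 8):
        sync_samples = occupied_samples[complexity >= size]
        for unit, samples in spike_samples.items():
            print(size, unit, np.isin(samples, sync_samples).mean())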



Expectation and use
-------------------

A larger value indicates a higher degree of synchrony between the spike train of the respective unit and the other spike trains.
Large values, especially at larger synchrony sizes, point to a higher probability that the spike trains contain noisy spikes.

Example code
------------

.. code-block:: python

    import spikeinterface.qualitymetrics as qm

    # Make recording, sorting and wvf_extractor object for your data.
    synchrony = qm.compute_synchrony_metrics(wvf_extractor, synchrony_sizes=(2, 4, 8))
    # synchrony is a tuple of dicts with the synchrony metrics for each unit

Links to original implementations
---------------------------------

The SpikeInterface implementation is a partial port of the low-level complexity functions from `Elephant - Electrophysiology Analysis Toolkit <https://github.com/NeuralEnsemble/elephant/blob/master/elephant/spike_train_synchrony.py#L245>`_.

References
----------

.. automodule:: spikeinterface.qualitymetrics.misc_metrics

.. autofunction:: compute_synchrony_metrics

Literature
----------

Based on concepts described in [Gruen]_.
1 change: 1 addition & 0 deletions pyproject.toml
@@ -68,6 +68,7 @@ extractors = [
"ONE-api>=1.19.1",
"ibllib>=2.21.0",
"pymatreader>=0.0.32", # For cell explorer matlab files
"zugbruecke>=0.2; sys_platform!='win32'", # For plexon2
]

streaming_extractors = [
1 change: 1 addition & 0 deletions src/spikeinterface/core/__init__.py
@@ -28,6 +28,7 @@
from .generate import (
generate_recording,
generate_sorting,
add_synchrony_to_sorting,
create_sorting_npz,
generate_snippets,
synthesize_random_firings,
95 changes: 87 additions & 8 deletions src/spikeinterface/core/generate.py
@@ -1,5 +1,5 @@
import math

import warnings
import numpy as np
from typing import Union, Optional, List, Literal

@@ -120,6 +120,31 @@ def generate_sorting(
refractory_period_ms=3.0, # in ms
seed=None,
):
"""
Generates sorting object with random firings.
Parameters
----------
num_units : int, default: 5
Number of units
sampling_frequency : float, default: 30000.0
The sampling frequency
durations : list, default: [10.325, 3.5]
Duration of each segment in s
firing_rates : float, default: 3.0
The firing rate of each unit (in Hz).
empty_units : list, default: None
List of units that will have no spikes. (used for testing mainly).
refractory_period_ms : float, default: 3.0
The refractory period in ms
seed : int, default: None
The random seed
Returns
-------
sorting : NumpySorting
The sorting object
"""
seed = _ensure_seed(seed)
num_segments = len(durations)
unit_ids = np.arange(num_units)
@@ -152,6 +177,59 @@ def generate_sorting(
return sorting


def add_synchrony_to_sorting(sorting, sync_event_ratio=0.3, seed=None):
"""
Generates sorting object with added synchronous events from an existing sorting objects.
Parameters
----------
sorting : BaseSorting
The sorting object
sync_event_ratio : float
The ratio of added synchronous spikes with respect to the total number of spikes.
E.g., 0.5 means that the final sorting will have 1.5 times number of spikes, and all the extra
spikes are synchronous (same sample_index), but on different units (not duplicates).
seed : int, default: None
The random seed
Returns
-------
sorting : NumpySorting
The sorting object
"""
rng = np.random.default_rng(seed)
spikes = sorting.to_spike_vector()
unit_ids = sorting.unit_ids

# add synchronous events
num_sync = int(len(spikes) * sync_event_ratio)
spikes_duplicated = rng.choice(spikes, size=num_sync, replace=True)
# change unit_index
new_unit_indices = np.zeros(len(spikes_duplicated))
# make sure labels are all unique, keep unit_indices used for each spike
units_used_for_spike = {}
for i, spike in enumerate(spikes_duplicated):
sample_index = spike["sample_index"]
if sample_index not in units_used_for_spike:
units_used_for_spike[sample_index] = np.array([spike["unit_index"]])
units_not_used = unit_ids[~np.in1d(unit_ids, units_used_for_spike[sample_index])]

if len(units_not_used) == 0:
continue
new_unit_indices[i] = rng.choice(units_not_used)
units_used_for_spike[sample_index] = np.append(units_used_for_spike[sample_index], new_unit_indices[i])
spikes_duplicated["unit_index"] = new_unit_indices
spikes_all = np.concatenate((spikes, spikes_duplicated))
sort_idxs = np.lexsort([spikes_all["sample_index"], spikes_all["segment_index"]])
spikes_all = spikes_all[sort_idxs]

sorting = NumpySorting(spikes=spikes_all, sampling_frequency=sorting.sampling_frequency, unit_ids=unit_ids)

return sorting
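
# A hypothetical usage sketch combining the two generators above (keyword values
# are illustrative; both functions are exported from spikeinterface.core):
#
#     from spikeinterface.core import generate_sorting, add_synchrony_to_sorting
#
#     sorting = generate_sorting(num_units=5, durations=[10.0], firing_rates=3.0, seed=42)
#     sorting_sync = add_synchrony_to_sorting(sorting, sync_event_ratio=0.3, seed=42)
#     # sorting_sync has ~1.3x the original number of spikes; the extra spikes share
#     # sample indices with existing spikes but are assigned to different units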


def create_sorting_npz(num_seg, file_path):
# create a NPZ sorting file
d = {}
@@ -959,13 +1037,14 @@ def __init__(
parent_recording: Union[BaseRecording, None] = None,
num_samples: Optional[List[int]] = None,
upsample_vector: Union[List[int], None] = None,
check_borbers: bool = True,
check_borders: bool = False,
) -> None:
templates = np.asarray(templates)
if check_borbers:
# TODO: this should be external to this class. It is not the responsibility of this class to check the templates
if check_borders:
self._check_templates(templates)
# lets test this only once so force check_borbers=false for kwargs
check_borbers = False
# let's test this only once, so force check_borders=False for kwargs
check_borders = False
self.templates = templates

channel_ids = parent_recording.channel_ids if parent_recording is not None else list(range(templates.shape[2]))
@@ -1053,7 +1132,7 @@ def __init__(
"nbefore": nbefore,
"amplitude_factor": amplitude_factor,
"upsample_vector": upsample_vector,
"check_borbers": check_borbers,
"check_borders": check_borders,
}
if parent_recording is None:
self._kwargs["num_samples"] = num_samples
@@ -1066,8 +1145,8 @@ def _check_templates(templates: np.ndarray):
threshold = 0.01 * max_value

if max(np.max(np.abs(templates[:, 0])), np.max(np.abs(templates[:, -1]))) > threshold:
raise Exception(
"Warning!\nYour templates do not go to 0 on the edges in InjectTemplatesRecording.__init__\nPlease make your window bigger."
warnings.warn(
"Warning! Your templates do not go to 0 on the edges in InjectTemplatesRecording. Please make your window bigger."
)

