Merge branch 'main' of github.com:SpikeInterface/spikeinterface into rel-path-phy

alejoe91 committed Sep 27, 2023
2 parents f16b12c + c9b3d4b commit 24f77e1
Showing 45 changed files with 717 additions and 191 deletions.
1 change: 1 addition & 0 deletions doc/how_to/index.rst
Original file line number Diff line number Diff line change
@@ -7,3 +7,4 @@ How to guides
get_started
analyse_neuropixels
handle_drift
load_matlab_data
100 changes: 100 additions & 0 deletions doc/how_to/load_matlab_data.rst
@@ -0,0 +1,100 @@
Exporting MATLAB Data to Binary & Loading in SpikeInterface
===========================================================

In this tutorial, we will walk through the process of exporting data from MATLAB in a binary format and subsequently loading it using SpikeInterface in Python.

Exporting Data from MATLAB
--------------------------

Begin by ensuring your data structure is correct: organize your data matrix so that the first dimension corresponds to samples/time and the second to channels.
The following MATLAB code creates a random dataset and writes it to a binary file as an illustration.

.. code-block:: matlab

    % Define the size of your data
    numSamples = 1000;
    numChannels = 384;

    % Generate random data as an example
    data = rand(numSamples, numChannels);

    % Write the data to a binary file
    fileID = fopen('your_data_as_a_binary.bin', 'wb');
    fwrite(fileID, data, 'double');
    fclose(fileID);

.. note::

    In your own script, replace the random data generation with your actual dataset.

Loading Data in SpikeInterface
------------------------------

After executing the above MATLAB code, a binary file named ``your_data_as_a_binary.bin`` will be created in your MATLAB working directory. To load this file in Python, you will need its full path.

Use the following Python script to load the binary data into SpikeInterface:

.. code-block:: python

    import spikeinterface as si
    from pathlib import Path

    # Define the file path
    # For Linux or macOS:
    file_path = Path("/The/Path/To/Your/Data/your_data_as_a_binary.bin")
    # For Windows:
    # file_path = Path(r"c:\path\to\your\data\your_data_as_a_binary.bin")

    # Confirm file existence
    assert file_path.is_file(), f"Error: {file_path} is not a valid file. Please check the path."

    # Define recording parameters
    sampling_frequency = 30_000.0  # Adjust according to your MATLAB dataset
    num_channels = 384  # Adjust according to your MATLAB dataset
    dtype = "float64"  # MATLAB's double corresponds to Python's float64

    # Load data using SpikeInterface
    recording = si.read_binary(file_path, sampling_frequency=sampling_frequency,
                               num_channels=num_channels, dtype=dtype)

    # Confirm that the data were loaded correctly by checking that the
    # number of frames and channels match the MATLAB data
    print(recording.get_num_frames(), recording.get_num_channels())

Follow the steps above to import your MATLAB data into SpikeInterface. Once loaded, you can harness the full power of SpikeInterface for data processing, including filtering, spike sorting, and more.

Common Pitfalls & Tips
----------------------

1. **Data Shape**: Make sure your MATLAB data matrix's first dimension is samples/time and the second is channels. If time is in the second dimension, use ``time_axis=1`` in ``si.read_binary()``.
2. **File Path**: Always double-check the file path you pass to Python.
3. **Data Type Consistency**: Ensure data types are consistent between MATLAB and Python. MATLAB's ``double`` is equivalent to NumPy's ``float64``.
4. **Sampling Frequency**: Set the appropriate sampling frequency in Hz for SpikeInterface.
5. **Transition to Python**: Moving from MATLAB to Python can be challenging. If you are new to Python, consider reviewing the `NumPy for MATLAB Users <https://numpy.org/doc/stable/user/numpy-for-matlab-users.html>`_ guide.
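To make pitfall 1 concrete, here is a small, self-contained NumPy sketch (toy data and a temporary file, not SpikeInterface itself) of what happens when a channel-major file is read assuming samples-first order; ``si.read_binary(..., time_axis=1)`` performs roughly this transposition for you:

```python
import os
import tempfile
import numpy as np

# Toy data: 4 channels x 10 samples, written channel-major (channels first)
num_channels, num_samples = 4, 10
data = np.arange(num_channels * num_samples, dtype="float64").reshape(num_channels, num_samples)

path = os.path.join(tempfile.mkdtemp(), "channel_major.bin")
data.tofile(path)

raw = np.fromfile(path, dtype="float64")

# Wrong: assuming samples-first order scrambles channels and samples
wrong = raw.reshape(num_samples, num_channels)

# Right: reshape channels-first, then transpose to (samples, channels)
right = raw.reshape(num_channels, num_samples).T

print(wrong[0])  # [0. 1. 2. 3.]   -- consecutive samples of channel 0, not one time point
print(right[0])  # [ 0. 10. 20. 30.] -- sample 0 across the 4 channels
```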

Using gains and offsets for integer data
----------------------------------------

Raw data formats often store data as integer values for memory efficiency. To convert these integers to meaningful physical units, you can apply a gain and an offset.
In SpikeInterface, you can use the ``gain_to_uV`` and ``offset_to_uV`` parameters, since traces are handled in microvolts (uV). Both parameters can be passed to the ``read_binary`` function.
If your MATLAB data are stored as ``int16`` and you know the gain and offset, you can load them with the following code:

.. code-block:: python

    sampling_frequency = 30_000.0  # Adjust according to your MATLAB dataset
    num_channels = 384  # Adjust according to your MATLAB dataset
    dtype_int = 'int16'  # Adjust according to your MATLAB dataset
    gain_to_uV = 0.195  # Adjust according to your MATLAB dataset
    offset_to_uV = 0  # Adjust according to your MATLAB dataset

    recording = si.read_binary(file_path, sampling_frequency=sampling_frequency,
                               num_channels=num_channels, dtype=dtype_int,
                               gain_to_uV=gain_to_uV, offset_to_uV=offset_to_uV)

    recording.get_traces(return_scaled=True)  # Return traces in microvolts (uV)

This will equip your recording object with capabilities to convert the data to float values in uV using the :code:`get_traces()` method with the :code:`return_scaled` parameter set to :code:`True`.

.. note::

    The gain and offset parameters are usually format-dependent, and you will need to find the correct values for your data format. You can load your data without gain and offset, but the traces will then remain unscaled integer values rather than microvolts.
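The scaling applied by :code:`return_scaled=True` is a simple linear conversion, ``scaled_uV = raw * gain + offset``. The sketch below shows the arithmetic in plain NumPy, with an illustrative gain value (check your own acquisition system's documentation):

```python
import numpy as np

# Hypothetical int16 samples as stored in the binary file
raw = np.array([-2048, 0, 1024], dtype="int16")

gain_to_uV = 0.195   # illustrative value; format-dependent
offset_to_uV = 0.0

# scaled_uV = raw * gain + offset -- the conversion behind return_scaled=True
scaled = raw.astype("float64") * gain_to_uV + offset_to_uV
print(scaled)  # values now in microvolts: -399.36, 0.0, 199.68
```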
3 changes: 1 addition & 2 deletions doc/modules/core.rst
@@ -547,8 +547,7 @@ workflow.
In order to do this, one can use the :code:`Numpy*` classes, :py:class:`~spikeinterface.core.NumpyRecording`,
:py:class:`~spikeinterface.core.NumpySorting`, :py:class:`~spikeinterface.core.NumpyEvent`, and
:py:class:`~spikeinterface.core.NumpySnippets`. These objects behave exactly like normal SpikeInterface objects,
but they are not bound to a file. This makes these objects *not dumpable*, so parallel processing is not supported.
In order to make them *dumpable*, one can simply :code:`save()` them (see :ref:`save_load`).
but they are not bound to a file.

Also note the class :py:class:`~spikeinterface.core.SharedMemorySorting`, which is similar to
:py:class:`~spikeinterface.core.NumpySorting` but with an underlying SharedMemory, which is useful for
2 changes: 2 additions & 0 deletions doc/modules/qualitymetrics.rst
@@ -25,9 +25,11 @@ For more details about each metric and its availability and use within SpikeInt
:glob:

qualitymetrics/amplitude_cutoff
qualitymetrics/amplitude_cv
qualitymetrics/amplitude_median
qualitymetrics/d_prime
qualitymetrics/drift
qualitymetrics/firing_range
qualitymetrics/firing_rate
qualitymetrics/isi_violations
qualitymetrics/isolation_distance
6 changes: 3 additions & 3 deletions doc/modules/qualitymetrics/amplitude_cutoff.rst
@@ -21,12 +21,12 @@ Example code

.. code-block:: python
import spikeinterface.qualitymetrics as qm
import spikeinterface.qualitymetrics as sqm
# It is also recommended to run `compute_spike_amplitudes(wvf_extractor)`
# in order to use amplitudes from all spikes
fraction_missing = qm.compute_amplitude_cutoffs(wvf_extractor, peak_sign="neg")
# fraction_missing is a dict containing the units' IDs as keys,
fraction_missing = sqm.compute_amplitude_cutoffs(wvf_extractor, peak_sign="neg")
# fraction_missing is a dict containing the unit IDs as keys,
# and their estimated fraction of missing spikes as values.
Reference
55 changes: 55 additions & 0 deletions doc/modules/qualitymetrics/amplitude_cv.rst
@@ -0,0 +1,55 @@
Amplitude CV (:code:`amplitude_cv_median`, :code:`amplitude_cv_range`)
======================================================================


Calculation
-----------

The amplitude CV (coefficient of variation) is a measure of the amplitude variability.
It is computed as the ratio between the standard deviation and the amplitude mean.
To obtain a better estimate of this measure, it is first computed separately for several temporal bins.
Out of these values, the median and the range (percentile distance, by default between the
5th and 95th percentiles) are computed.

The computation requires either spike amplitudes (see :py:func:`~spikeinterface.postprocessing.compute_spike_amplitudes()`)
or amplitude scalings (see :py:func:`~spikeinterface.postprocessing.compute_amplitude_scalings()`) to be pre-computed.
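The calculation above can be sketched in plain NumPy (with made-up amplitudes and an assumed 10 s bin size; the actual SpikeInterface function has its own defaults and edge-case handling):

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up spike amplitudes (uV) and spike times (s) for one unit
amplitudes = rng.normal(-80.0, 8.0, size=1000)
spike_times = np.sort(rng.uniform(0.0, 100.0, size=1000))

bin_s = 10.0  # assumed temporal bin size
edges = np.arange(0.0, 100.0 + bin_s, bin_s)

# CV (std / |mean|) of the amplitudes within each temporal bin
cvs = []
for start, stop in zip(edges[:-1], edges[1:]):
    in_bin = amplitudes[(spike_times >= start) & (spike_times < stop)]
    if in_bin.size > 1:
        cvs.append(np.std(in_bin) / abs(np.mean(in_bin)))
cvs = np.asarray(cvs)

# Median and 5th-95th percentile range of the per-bin CVs
amplitude_cv_median = np.median(cvs)
amplitude_cv_range = np.percentile(cvs, 95) - np.percentile(cvs, 5)
```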


Expectation and use
-------------------

The amplitude CV median is expected to be relatively low for well-isolated units, indicating a "stereotypical" spike shape.

The amplitude CV range can be high in the presence of noise contamination, due to amplitude outliers like in
the example below.

.. image:: amplitudes.png
:width: 600


Example code
------------

.. code-block:: python

    import spikeinterface.qualitymetrics as sqm

    # Make recording, sorting and wvf_extractor object for your data.
    # It is required to run `compute_spike_amplitudes(wvf_extractor)` or
    # `compute_amplitude_scalings(wvf_extractor)` (if missing, values will be NaN)
    amplitude_cv_median, amplitude_cv_range = sqm.compute_amplitude_cv_metrics(wvf_extractor)
    # amplitude_cv_median and amplitude_cv_range are dicts containing the unit IDs as keys,
    # and their amplitude CV metrics as values.

References
----------

.. autofunction:: spikeinterface.qualitymetrics.misc_metrics.compute_amplitude_cv_metrics


Literature
----------

Designed by Simon Musall and adapted to SpikeInterface by Alessio Buccino.
6 changes: 3 additions & 3 deletions doc/modules/qualitymetrics/amplitude_median.rst
@@ -20,12 +20,12 @@ Example code

.. code-block:: python
import spikeinterface.qualitymetrics as qm
import spikeinterface.qualitymetrics as sqm
# It is also recommended to run `compute_spike_amplitudes(wvf_extractor)`
# in order to use amplitude values from all spikes.
amplitude_medians = qm.compute_amplitude_medians(wvf_extractor)
# amplitude_medians is a dict containing the units' IDs as keys,
amplitude_medians = sqm.compute_amplitude_medians(wvf_extractor)
# amplitude_medians is a dict containing the unit IDs as keys,
# and their estimated amplitude medians as values.
Reference
Binary file added doc/modules/qualitymetrics/amplitudes.png
4 changes: 2 additions & 2 deletions doc/modules/qualitymetrics/d_prime.rst
@@ -32,9 +32,9 @@ Example code

.. code-block:: python
import spikeinterface.qualitymetrics as qm
import spikeinterface.qualitymetrics as sqm
d_prime = qm.lda_metrics(all_pcs, all_labels, 0)
d_prime = sqm.lda_metrics(all_pcs, all_labels, 0)
Reference
5 changes: 3 additions & 2 deletions doc/modules/qualitymetrics/drift.rst
@@ -40,11 +40,12 @@ Example code

.. code-block:: python
import spikeinterface.qualitymetrics as qm
import spikeinterface.qualitymetrics as sqm
# Make recording, sorting and wvf_extractor object for your data.
# It is required to run `compute_spike_locations(wvf_extractor)`
# (if missing, values will be NaN)
drift_ptps, drift_stds, drift_mads = qm.compute_drift_metrics(wvf_extractor, peak_sign="neg")
drift_ptps, drift_stds, drift_mads = sqm.compute_drift_metrics(wvf_extractor, peak_sign="neg")
# drift_ptps, drift_stds, and drift_mads are dict containing the units' ID as keys,
# and their metrics as values.
40 changes: 40 additions & 0 deletions doc/modules/qualitymetrics/firing_range.rst
@@ -0,0 +1,40 @@
Firing range (:code:`firing_range`)
===================================


Calculation
-----------

The firing range indicates the dispersion of the firing rate of a unit across the recording. It is computed by
taking the difference between the 95th and the 5th percentiles of the firing rates computed over short time bins (e.g. 10 s).
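As a rough sketch of this computation (plain NumPy, a made-up spike train, and 10 s bins as in the text; the actual function may differ in details):

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up spike train: a ~5 Hz unit over a 600 s recording
spike_times = np.sort(rng.uniform(0.0, 600.0, size=3000))

bin_s = 10.0  # short time bin, as in the text's example
edges = np.arange(0.0, 600.0 + bin_s, bin_s)
counts, _ = np.histogram(spike_times, bins=edges)
rates = counts / bin_s  # firing rate (Hz) in each bin

# Firing range: 95th percentile rate minus 5th percentile rate
firing_range = np.percentile(rates, 95) - np.percentile(rates, 5)
```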



Expectation and use
-------------------

Very high firing ranges, outside of the physiological range, might indicate noise contamination.


Example code
------------

.. code-block:: python

    import spikeinterface.qualitymetrics as sqm

    # Make recording, sorting and wvf_extractor object for your data.
    firing_range = sqm.compute_firing_ranges(wvf_extractor)
    # firing_range is a dict containing the unit IDs as keys,
    # and their firing range as values (in Hz).

References
----------

.. autofunction:: spikeinterface.qualitymetrics.misc_metrics.compute_firing_ranges


Literature
----------

Designed by Simon Musall and adapted to SpikeInterface by Alessio Buccino.
6 changes: 3 additions & 3 deletions doc/modules/qualitymetrics/firing_rate.rst
@@ -37,11 +37,11 @@ With SpikeInterface:

.. code-block:: python
import spikeinterface.qualitymetrics as qm
import spikeinterface.qualitymetrics as sqm
# Make recording, sorting and wvf_extractor object for your data.
firing_rate = qm.compute_firing_rates(wvf_extractor)
# firing_rate is a dict containing the units' IDs as keys,
firing_rate = sqm.compute_firing_rates(wvf_extractor)
# firing_rate is a dict containing the unit IDs as keys,
# and their firing rates across segments as values (in Hz).
References
4 changes: 2 additions & 2 deletions doc/modules/qualitymetrics/isi_violations.rst
@@ -77,11 +77,11 @@ With SpikeInterface:

.. code-block:: python
import spikeinterface.qualitymetrics as qm
import spikeinterface.qualitymetrics as sqm
# Make recording, sorting and wvf_extractor object for your data.
isi_violations_ratio, isi_violations_count = qm.compute_isi_violations(wvf_extractor, isi_threshold_ms=1.0)
isi_violations_ratio, isi_violations_count = sqm.compute_isi_violations(wvf_extractor, isi_threshold_ms=1.0)
References
----------
6 changes: 3 additions & 3 deletions doc/modules/qualitymetrics/presence_ratio.rst
@@ -23,12 +23,12 @@ Example code

.. code-block:: python
import spikeinterface.qualitymetrics as qm
import spikeinterface.qualitymetrics as sqm
# Make recording, sorting and wvf_extractor object for your data.
presence_ratio = qm.compute_presence_ratios(wvf_extractor)
# presence_ratio is a dict containing the units' IDs as keys
presence_ratio = sqm.compute_presence_ratios(wvf_extractor)
# presence_ratio is a dict containing the unit IDs as keys
# and their presence ratio (between 0 and 1) as values.
Links to original implementations
4 changes: 2 additions & 2 deletions doc/modules/qualitymetrics/sliding_rp_violations.rst
@@ -27,11 +27,11 @@ With SpikeInterface:

.. code-block:: python
import spikeinterface.qualitymetrics as qm
import spikeinterface.qualitymetrics as sqm
# Make recording, sorting and wvf_extractor object for your data.
contamination = qm.compute_sliding_rp_violations(wvf_extractor, bin_size_ms=0.25)
contamination = sqm.compute_sliding_rp_violations(wvf_extractor, bin_size_ms=0.25)
References
----------
6 changes: 3 additions & 3 deletions doc/modules/qualitymetrics/snr.rst
@@ -41,12 +41,12 @@ With SpikeInterface:

.. code-block:: python
import spikeinterface.qualitymetrics as qm
import spikeinterface.qualitymetrics as sqm
# Make recording, sorting and wvf_extractor object for your data.
SNRs = qm.compute_snrs(wvf_extractor)
# SNRs is a dict containing the units' IDs as keys and their SNRs as values.
SNRs = sqm.compute_snrs(wvf_extractor)
# SNRs is a dict containing the unit IDs as keys and their SNRs as values.
Links to original implementations
---------------------------------
4 changes: 2 additions & 2 deletions doc/modules/qualitymetrics/synchrony.rst
@@ -27,9 +27,9 @@ Example code

.. code-block:: python
import spikeinterface.qualitymetrics as qm
import spikeinterface.qualitymetrics as sqm
# Make recording, sorting and wvf_extractor object for your data.
synchrony = qm.compute_synchrony_metrics(wvf_extractor, synchrony_sizes=(2, 4, 8))
synchrony = sqm.compute_synchrony_metrics(wvf_extractor, synchrony_sizes=(2, 4, 8))
# synchrony is a tuple of dicts with the synchrony metrics for each unit