Commit
Merge branch 'main' into postprocessing-read-only
alejoe91 authored Sep 15, 2023
2 parents 426f395 + 77523e1 commit 492f2a2
Showing 41 changed files with 1,560 additions and 434 deletions.
21 changes: 21 additions & 0 deletions .github/actions/install-wine/action.yml
@@ -0,0 +1,21 @@
name: Install wine
description: This action installs wine, which is needed to run the Plexon2 tests

inputs:
python-version:
description: 'Python version to set up'
required: false
os:
description: 'Operating system to set up'
required: false

runs:
using: "composite"
steps:
- name: Install wine (needed for Plexon2)
run: |
sudo rm -f /etc/apt/sources.list.d/microsoft-prod.list
sudo dpkg --add-architecture i386
sudo apt-get update -qq
sudo apt-get install -yqq --allow-downgrades libc6:i386 libgcc-s1:i386 libstdc++6:i386 wine
shell: bash
7 changes: 7 additions & 0 deletions .github/workflows/full-test.yml
@@ -75,6 +75,10 @@ jobs:
echo "Extractors changed"
echo "EXTRACTORS_CHANGED=true" >> $GITHUB_OUTPUT
fi
if [[ $file == *"plexon2"* ]]; then
echo "Plexon2 changed"
echo "PLEXON2_CHANGED=true" >> $GITHUB_OUTPUT
fi
if [[ $file == *"/preprocessing/"* ]]; then
echo "Preprocessing changed"
echo "PREPROCESSING_CHANGED=true" >> $GITHUB_OUTPUT
@@ -122,6 +126,9 @@ jobs:
done
- name: Set execute permissions on run_tests.sh
run: chmod +x .github/run_tests.sh
- name: Install Wine (Plexon2)
if: ${{ steps.modules-changed.outputs.PLEXON2_CHANGED == 'true' }}
uses: ./.github/actions/install-wine
- name: Test core
run: ./.github/run_tests.sh core
- name: Test extractors
33 changes: 33 additions & 0 deletions .github/workflows/installation-tips-test.yml
@@ -0,0 +1,33 @@
name: Create Conda Environment for Installation Tips

on:
workflow_dispatch:
schedule:
- cron: "0 12 * * 0" # Weekly at noon UTC on Sundays

jobs:
installation-tips-testing:
name: Build Conda Env on ${{ matrix.os }} OS
runs-on: ${{ matrix.os }}
defaults:
run:
shell: bash -el {0}
strategy:
fail-fast: false
matrix:
include:
- os: ubuntu-latest
label: linux_dandi
- os: macos-latest
label: mac
- os: windows-latest
label: windows
steps:
- uses: actions/checkout@v3
- uses: actions/setup-python@v4
with:
python-version: '3.10'
- name: Test Conda Environment Creation
uses: conda-incubator/setup-miniconda@v2
with:
environment-file: ./installations_tips/full_spikeinterface_environment_${{ matrix.label }}.yml
2 changes: 1 addition & 1 deletion .pre-commit-config.yaml
@@ -6,7 +6,7 @@ repos:
- id: end-of-file-fixer
- id: trailing-whitespace
- repo: https://github.com/psf/black
-  rev: 23.7.0
+  rev: 23.9.1
hooks:
- id: black
files: ^src/
4 changes: 4 additions & 0 deletions doc/api.rst
@@ -91,17 +91,21 @@ NEO-based
.. autofunction:: read_mcsraw
.. autofunction:: read_neuralynx
.. autofunction:: read_neuralynx_sorting
.. autofunction:: read_neuroexplorer
.. autofunction:: read_neuroscope
.. autofunction:: read_nix
.. autofunction:: read_openephys
.. autofunction:: read_openephys_event
.. autofunction:: read_plexon
.. autofunction:: read_plexon_sorting
.. autofunction:: read_plexon2
.. autofunction:: read_plexon2_sorting
.. autofunction:: read_spike2
.. autofunction:: read_spikegadgets
.. autofunction:: read_spikeglx
.. autofunction:: read_tdt


Non-NEO-based
~~~~~~~~~~~~~
.. automodule:: spikeinterface.extractors
2 changes: 2 additions & 0 deletions doc/conf.py
@@ -67,6 +67,8 @@
'numpydoc',
"sphinx.ext.intersphinx",
"sphinx.ext.extlinks",
"IPython.sphinxext.ipython_directive",
"IPython.sphinxext.ipython_console_highlighting"
]

numpydoc_show_class_members = False
2 changes: 1 addition & 1 deletion doc/development/development.rst
@@ -1,5 +1,5 @@
Development
-==========
+===========

How to contribute
-----------------
64 changes: 32 additions & 32 deletions doc/how_to/analyse_neuropixels.rst
@@ -4,19 +4,19 @@ Analyse Neuropixels datasets
This example shows how to perform Neuropixels-specific analysis,
including custom pre- and post-processing.

-.. code:: ipython3
+.. code:: ipython
%matplotlib inline
-.. code:: ipython3
+.. code:: ipython
import spikeinterface.full as si
import numpy as np
import matplotlib.pyplot as plt
from pathlib import Path
-.. code:: ipython3
+.. code:: ipython
base_folder = Path('/mnt/data/sam/DataSpikeSorting/neuropixel_example/')
@@ -29,7 +29,7 @@ Read the data
The ``SpikeGLX`` folder can contain several “streams” (AP, LF and NIDQ).
We need to specify which one to read:

-.. code:: ipython3
+.. code:: ipython
stream_names, stream_ids = si.get_neo_streams('spikeglx', spikeglx_folder)
stream_names
@@ -43,7 +43,7 @@ We need to specify which one to read:
-.. code:: ipython3
+.. code:: ipython
# we do not load the sync channel, so the probe is automatically loaded
raw_rec = si.read_spikeglx(spikeglx_folder, stream_name='imec0.ap', load_sync_channel=False)
@@ -58,7 +58,7 @@ We need to specify which one to read:
-.. code:: ipython3
+.. code:: ipython
# we automatically have the probe loaded!
raw_rec.get_probe().to_dataframe()
@@ -201,7 +201,7 @@ We need to specify which one to read:



-.. code:: ipython3
+.. code:: ipython
fig, ax = plt.subplots(figsize=(15, 10))
si.plot_probe_map(raw_rec, ax=ax, with_channel_ids=True)
@@ -229,7 +229,7 @@ Let’s do something similar to the IBL destriping chain (See
- instead of interpolating bad channels, we remove them.
- instead of highpass_spatial_filter() we use common_reference()

-.. code:: ipython3
+.. code:: ipython
rec1 = si.highpass_filter(raw_rec, freq_min=400.)
bad_channel_ids, channel_labels = si.detect_bad_channels(rec1)
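The rest of the chain follows the two substitutions listed above. As a minimal sketch (the parameter values here are illustrative, not necessarily the exact ones used for this recording):

.. code:: python

    # drop the detected bad channels instead of interpolating them
    rec2 = rec1.remove_channels(bad_channel_ids)
    # global median reference instead of highpass_spatial_filter()
    rec3 = si.common_reference(rec2, operator='median', reference='global')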
@@ -271,7 +271,7 @@ preprocessing chain without saving the entire file to disk. Everything
is lazy, so you can change the previous cell (parameters, step order,
…) and visualize it immediately.

-.. code:: ipython3
+.. code:: ipython
# here we use a static plot with the matplotlib backend
fig, axs = plt.subplots(ncols=3, figsize=(20, 10))
@@ -287,7 +287,7 @@ is lazy, so you can change the previous cell (parameters, step order,
.. image:: analyse_neuropixels_files/analyse_neuropixels_13_0.png


-.. code:: ipython3
+.. code:: ipython
# plot some channels
fig, ax = plt.subplots(figsize=(20, 10))
@@ -326,7 +326,7 @@ Depending on the complexity of the preprocessing chain, this operation
can take a while. However, we can make use of the powerful
parallelization mechanism of SpikeInterface.

-.. code:: ipython3
+.. code:: ipython
job_kwargs = dict(n_jobs=40, chunk_duration='1s', progress_bar=True)
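These keyword arguments can be unpacked into any chunk-based operation. For example, saving the preprocessed recording to binary might look like this (a sketch; the folder name is a placeholder):

.. code:: python

    # run the whole lazy preprocessing chain and write the result to disk,
    # using 40 parallel jobs and 1 s chunks
    rec = rec.save(folder=base_folder / 'preprocess', format='binary', **job_kwargs)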
@@ -344,7 +344,7 @@ parallelization mechanism of SpikeInterface.
write_binary_recording: 0%| | 0/1139 [00:00<?, ?it/s]
-.. code:: ipython3
+.. code:: ipython
# our recording now points to the new binary folder
rec
@@ -376,13 +376,13 @@ Check noise level
Noise levels can be estimated on the scaled traces or on the raw
(``int16``) traces.

-.. code:: ipython3
+.. code:: ipython
# we can estimate the noise on the scaled traces (microV) or on the raw ones (int16 in our case)
noise_levels_microV = si.get_noise_levels(rec, return_scaled=True)
noise_levels_int16 = si.get_noise_levels(rec, return_scaled=False)
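Robust noise estimators of this kind are typically based on the median absolute deviation (MAD) rather than the standard deviation, so that spikes do not inflate the estimate. A self-contained sketch of the idea (not the exact SpikeInterface implementation):

.. code:: python

    import numpy as np

    def mad_noise_levels(traces):
        """Per-channel noise estimate; traces has shape (num_samples, num_channels)."""
        med = np.median(traces, axis=0)
        mad = np.median(np.abs(traces - med), axis=0)
        # for Gaussian noise, std is approximately 1.4826 * MAD
        return 1.4826 * mad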
-.. code:: ipython3
+.. code:: ipython
fig, ax = plt.subplots()
_ = ax.hist(noise_levels_microV, bins=np.arange(5, 30, 2.5))
@@ -420,7 +420,7 @@ The two functions (detect + localize):
Here, let’s use the ``locally_exclusive`` method for detection and the
``center_of_mass`` for peak localization:

-.. code:: ipython3
+.. code:: ipython
from spikeinterface.sortingcomponents.peak_detection import detect_peaks
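Conceptually, detection reduces to finding threshold crossings expressed in multiples of the per-channel noise level. A toy single-channel version, for intuition only (``detect_peaks`` additionally handles multi-channel data and spatial exclusion):

.. code:: python

    import numpy as np

    def naive_negative_peaks(trace, noise_level, detect_threshold=5):
        """Indices where one channel crosses -detect_threshold * noise at a local minimum."""
        below = trace < -detect_threshold * noise_level
        is_min = (trace[1:-1] <= trace[:-2]) & (trace[1:-1] <= trace[2:])
        return np.flatnonzero(below[1:-1] & is_min) + 1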
@@ -472,7 +472,7 @@ In case we notice apparent drifts in the recording, one can use the
SpikeInterface modules to estimate and correct motion. See the
documentation for motion estimation and correction for more details.

-.. code:: ipython3
+.. code:: ipython
# check for drifts
fs = rec.sampling_frequency
@@ -492,7 +492,7 @@ documentation for motion estimation and correction for more details.
.. image:: analyse_neuropixels_files/analyse_neuropixels_26_1.png


-.. code:: ipython3
+.. code:: ipython
# we can also use the peak location estimates to get insight into cluster separation before sorting
fig, ax = plt.subplots(figsize=(15, 10))
@@ -538,7 +538,7 @@ In this example:
- we apply no drift correction (because we don’t have drift)
- we use the docker image because we don’t want to pay for MATLAB :)

-.. code:: ipython3
+.. code:: ipython
# check default params for kilosort2.5
si.get_default_sorter_params('kilosort2_5')
@@ -571,20 +571,20 @@ In this example:
-.. code:: ipython3
+.. code:: ipython
# run kilosort2.5 without drift correction
params_kilosort2_5 = {'do_correction': False}
sorting = si.run_sorter('kilosort2_5', rec, output_folder=base_folder / 'kilosort2.5_output',
docker_image=True, verbose=True, **params_kilosort2_5)
-.. code:: ipython3
+.. code:: ipython
# the results can be read back in future sessions
sorting = si.read_sorter_folder(base_folder / 'kilosort2.5_output')
-.. code:: ipython3
+.. code:: ipython
# here we have 31 units in our recording
sorting
@@ -612,7 +612,7 @@ because the waveforms will be extracted only for a few channels around
the main channel of each unit. This saves tons of disk space and speeds
up the waveforms extraction and further processing.

-.. code:: ipython3
+.. code:: ipython
we = si.extract_waveforms(rec, sorting, folder=base_folder / 'waveforms_kilosort2.5',
sparse=True, max_spikes_per_unit=500, ms_before=1.5, ms_after=2.,
@@ -631,7 +631,7 @@ up the waveforms extraction and further processing.
extract waveforms memmap: 0%| | 0/1139 [00:00<?, ?it/s]
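Because ``sparse=True`` was used, each unit keeps only the channels around its main channel. A sketch of how the stored sparsity could be inspected (attribute names are assumed from the ``WaveformExtractor`` API and may differ across versions):

.. code:: python

    # which channels were kept for each unit
    sparsity = we.sparsity
    for unit_id in we.unit_ids[:3]:
        print(unit_id, sparsity.unit_id_to_channel_ids[unit_id])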
-.. code:: ipython3
+.. code:: ipython
# the WaveformExtractor contains all information and is persistent on disk
print(we)
@@ -645,7 +645,7 @@ up the waveforms extraction and further processing.
/mnt/data/sam/DataSpikeSorting/neuropixel_example/waveforms_kilosort2.5
-.. code:: ipython3
+.. code:: ipython
# the waveform extractor can be easily loaded back from folder
we = si.load_waveforms(base_folder / 'waveforms_kilosort2.5')
@@ -668,7 +668,7 @@ using the ``**job_kwargs`` mechanism.
Every computation will also be persisted on disk in the same folder,
since the results are stored as waveform extensions.

-.. code:: ipython3
+.. code:: ipython
_ = si.compute_noise_levels(we)
_ = si.compute_correlograms(we)
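Because each computation is persisted as a waveform extension, it can be reloaded later without recomputing. A sketch (the extension name follows the compute call above; the returned values assume the current extension API):

.. code:: python

    # reload a previously computed extension from the waveforms folder
    correlograms_ext = we.load_extension('correlograms')
    ccgs, bins = correlograms_ext.get_data()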
@@ -699,7 +699,7 @@ PCA for their computation. This can be achieved with:

``si.compute_principal_components(waveform_extractor)``

-.. code:: ipython3
+.. code:: ipython
metrics = si.compute_quality_metrics(we, metric_names=['firing_rate', 'presence_ratio', 'snr',
'isi_violation', 'amplitude_cutoff'])
@@ -1034,7 +1034,7 @@ Curation using metrics
A very common curation approach is to threshold these metrics to select
*good* units:

-.. code:: ipython3
+.. code:: ipython
amplitude_cutoff_thresh = 0.1
isi_violations_ratio_thresh = 1
@@ -1049,7 +1049,7 @@ A very common curation approach is to threshold these metrics to select
(amplitude_cutoff < 0.1) & (isi_violations_ratio < 1) & (presence_ratio > 0.9)
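The query string is ordinary ``pandas.DataFrame.query`` syntax, so it can also be assembled programmatically from the thresholds. A hypothetical helper (``presence_ratio_thresh`` is assumed by analogy with the thresholds defined above):

.. code:: python

    presence_ratio_thresh = 0.9  # assumed by analogy with the other thresholds
    thresholds = {
        'amplitude_cutoff': ('<', amplitude_cutoff_thresh),
        'isi_violations_ratio': ('<', isi_violations_ratio_thresh),
        'presence_ratio': ('>', presence_ratio_thresh),
    }
    # builds the same string as shown above
    our_query = " & ".join(f"({name} {op} {value})" for name, (op, value) in thresholds.items())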
-.. code:: ipython3
+.. code:: ipython
keep_units = metrics.query(our_query)
keep_unit_ids = keep_units.index.values
@@ -1071,11 +1071,11 @@ In order to export the final results we need to make a copy of the
waveforms, but only for the selected units (so we can avoid computing
them again).

-.. code:: ipython3
+.. code:: ipython
we_clean = we.select_units(keep_unit_ids, new_folder=base_folder / 'waveforms_clean')
-.. code:: ipython3
+.. code:: ipython
we_clean
@@ -1091,12 +1091,12 @@ them again).
Then we export figures to a report folder

-.. code:: ipython3
+.. code:: ipython
# export spike sorting report to a folder
si.export_report(we_clean, base_folder / 'report', format='png')
-.. code:: ipython3
+.. code:: ipython
we_clean = si.load_waveforms(base_folder / 'waveforms_clean')
we_clean