Merge branch 'main' of github.com:SpikeInterface/spikeinterface into waveform_tools_speedup

# Conflicts:
#	src/spikeinterface/core/tests/test_waveform_tools.py
samuelgarcia committed Jul 26, 2023
2 parents 570a3a5 + 9141a19 commit 2ae50e7
Showing 233 changed files with 4,157 additions and 4,996 deletions.
13 changes: 5 additions & 8 deletions .github/actions/build-test-environment/action.yml
@@ -24,18 +24,15 @@ runs:
         pip install -e .[test,extractors,full]
       shell: bash
     - name: Force installation of latest dev from key-packages when running dev (not release)
-      id: version
       run: |
        source ${{ github.workspace }}/test_env/bin/activate
-        if python ./.github/is_spikeinterface_dev.py; then
+        spikeinterface_is_dev_version=$(python -c "import importlib.metadata; version = importlib.metadata.version('spikeinterface'); print(version.endswith('dev0'))")
+        if [ $spikeinterface_is_dev_version = "True" ]; then
          echo "Running spikeinterface dev version"
-          pip uninstall -y neo
-          pip uninstall -y probeinterface
-          pip install git+https://github.com/NeuralEnsemble/python-neo
-          pip install git+https://github.com/SpikeInterface/probeinterface
-        else
-          echo "Running tests for release"
+          pip install --no-cache-dir git+https://github.com/NeuralEnsemble/python-neo
+          pip install --no-cache-dir git+https://github.com/SpikeInterface/probeinterface
        fi
+        echo "Running tests for release, using pyproject.toml versions of neo and probeinterface"
       shell: bash
     - name: git-annex install
       run: |
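The replacement step above derives dev status from package metadata instead of the now-deleted helper script. A minimal sketch of the same check in plain Python (the `dev0` suffix on the version string is the convention the step relies on):

```python
# Sketch of the version check used in the step above: a SpikeInterface
# dev build carries a version string ending in "dev0" (e.g. "0.99.0.dev0").
import importlib.metadata

version = importlib.metadata.version("spikeinterface")
if version.endswith("dev0"):
    print(f"spikeinterface {version}: dev build, install neo/probeinterface from git")
else:
    print(f"spikeinterface {version}: release, keep the pyproject.toml versions")
```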
6 changes: 0 additions & 6 deletions .github/is_spikeinterface_dev.py

This file was deleted.

4 changes: 3 additions & 1 deletion .github/workflows/caches_cron_job.yml
@@ -33,6 +33,7 @@ jobs:
       with:
         path: ${{ github.workspace }}/test_env
         key: ${{ runner.os }}-venv-${{ steps.dependencies.outputs.hash }}-${{ steps.date.outputs.date }}
+        lookup-only: 'true' # Avoids downloading the data, saving behavior is not affected.
     - name: Cache found?
       run: echo "Cache-hit == ${{steps.cache-venv.outputs.cache-hit == 'true'}}"
     - name: Create the virtual environment to be cached
@@ -64,6 +65,7 @@ jobs:
       with:
         path: ~/spikeinterface_datasets
         key: ${{ runner.os }}-datasets-${{ steps.repo_hash.outputs.dataset_hash }}
+        lookup-only: 'true' # Avoids downloading the data, saving behavior is not affected.
     - name: Cache found?
       run: echo "Cache-hit == ${{steps.cache-datasets.outputs.cache-hit == 'true'}}"
     - name: Installing datalad and git-annex
@@ -88,7 +90,7 @@
       run: |
        cd $HOME
        pwd
-        du -hs spikeinterface_datasets
+        du -hs spikeinterface_datasets # Should show the size of ephy_testing_data
        cd spikeinterface_datasets
        pwd
        ls -lh # Should show ephy_testing_data
5 changes: 1 addition & 4 deletions .github/workflows/full-test-with-codecov.yml
@@ -32,8 +32,6 @@ jobs:
       with:
         path: ${{ github.workspace }}/test_env
         key: ${{ runner.os }}-venv-${{ hashFiles('**/pyproject.toml') }}-${{ steps.date.outputs.date }}
-        restore-keys: |
-          ${{ runner.os }}-venv-
     - name: Get ephy_testing_data current head hash
       # the key depends on the last comit repo https://gin.g-node.org/NeuralEnsemble/ephy_testing_data.git
       id: vars
@@ -48,8 +46,7 @@
       with:
         path: ~/spikeinterface_datasets
         key: ${{ runner.os }}-datasets-${{ steps.vars.outputs.HASH_EPHY_DATASET }}
-        restore-keys: |
-          ${{ runner.os }}-datasets
+        restore-keys: ${{ runner.os }}-datasets
     - name: Install packages
       uses: ./.github/actions/build-test-environment
     - name: Shows installed packages by pip, git-annex and cached testing files
9 changes: 5 additions & 4 deletions .github/workflows/full-test.yml
@@ -37,8 +37,6 @@ jobs:
       with:
         path: ${{ github.workspace }}/test_env
         key: ${{ runner.os }}-venv-${{ hashFiles('**/pyproject.toml') }}-${{ steps.date.outputs.date }}
-        restore-keys: |
-          ${{ runner.os }}-venv-
     - name: Get ephy_testing_data current head hash
       # the key depends on the last comit repo https://gin.g-node.org/NeuralEnsemble/ephy_testing_data.git
       id: vars
@@ -53,8 +51,7 @@
       with:
         path: ~/spikeinterface_datasets
         key: ${{ runner.os }}-datasets-${{ steps.vars.outputs.HASH_EPHY_DATASET }}
-        restore-keys: |
-          ${{ runner.os }}-datasets
+        restore-keys: ${{ runner.os }}-datasets
     - name: Install packages
       uses: ./.github/actions/build-test-environment
     - name: Shows installed packages by pip, git-annex and cached testing files
@@ -66,6 +63,10 @@
       id: modules-changed
       run: |
        for file in ${{ steps.changed-files.outputs.all_changed_files }}; do
+          if [[ $file == *"pyproject.toml" ]]; then
+            echo "pyproject.toml changed"
+            echo "CORE_CHANGED=true" >> $GITHUB_OUTPUT
+          fi
          if [[ $file == *"/core/"* || $file == *"/extractors/neoextractors/neobaseextractor.py" ]]; then
            echo "Core changed"
            echo "CORE_CHANGED=true" >> $GITHUB_OUTPUT
23 changes: 20 additions & 3 deletions .github/workflows/streaming-extractor-test.yml
@@ -1,6 +1,10 @@
 name: Test streaming extractors

-on: workflow_dispatch
+on:
+  pull_request:
+    types: [synchronize, opened, reopened]
+    branches:
+      - main

 concurrency: # Cancel previous workflows on the same pull request
   group: ${{ github.workflow }}-${{ github.ref }}
@@ -28,14 +32,27 @@ jobs:
     - run: git fetch --prune --unshallow --tags
     - name: Install openblas
       run: sudo apt install libopenblas-dev # Necessary for ROS3 support
-    - name: Install package and streaming extractor dependencies
+    - name: Get changed files
+      id: changed-files
+      uses: tj-actions/changed-files@v35
+    - name: Module changes
+      id: modules-changed
       run: |
-        pip install -e .[test_core,streaming_extractors]
+        for file in ${{ steps.changed-files.outputs.all_changed_files }}; do
+          if [[ $file == *"/nwbextractors.py" || $file == *"/iblstreamingrecording.py"* ]]; then
+            echo "Streaming files changed changed"
+            echo "STREAMING_CHANGED=true" >> $GITHUB_OUTPUT
+          fi
+        done
+    - name: Install package and streaming extractor dependencies
+      if: ${{ steps.modules-changed.outputs.STREAMING_CHANGED == 'true' }}
+      run: pip install -e .[test_core,streaming_extractors]
       # Temporary disabled because of complicated error with path
       # - name: Install h5py with ROS3 support and test it works
       #   run: |
       #     pip uninstall -y h5py
       #     conda install -c conda-forge "h5py>=3.2"
       #     python -c "import h5py; assert 'ros3' in h5py.registered_drivers(), f'ros3 suppport not available, failed to install'"
     - name: run tests
+      if: steps.modules-changed.outputs.STREAMING_CHANGED == 'true'
       run: pytest -m "streaming_extractors and not ros3_test" -vv -ra
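For reference, the gating added above only runs the streaming-extractor jobs when the relevant sources change. A rough Python equivalent of the bash glob matching (the file lists below are hypothetical stand-ins for `steps.changed-files.outputs.all_changed_files`):

```python
# Sketch of the change-detection filter in the workflow above; the
# patterns mirror the [[ $file == ... ]] globs in the bash step.
from fnmatch import fnmatch

STREAMING_PATTERNS = ("*/nwbextractors.py", "*/iblstreamingrecording.py*")

def streaming_changed(changed_files):
    # True if any changed file touches a streaming-extractor source.
    return any(
        fnmatch(path, pattern)
        for path in changed_files
        for pattern in STREAMING_PATTERNS
    )

print(streaming_changed(["src/spikeinterface/extractors/nwbextractors.py"]))  # True
print(streaming_changed(["doc/api.rst"]))  # False
```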
2 changes: 1 addition & 1 deletion .pre-commit-config.yaml
@@ -6,7 +6,7 @@ repos:
       - id: end-of-file-fixer
       - id: trailing-whitespace
   - repo: https://github.com/psf/black
-    rev: 23.3.0
+    rev: 23.7.0
     hooks:
       - id: black
         files: ^src/
3 changes: 2 additions & 1 deletion doc/api.rst
@@ -269,13 +269,14 @@ spikeinterface.widgets
 .. autofunction:: plot_amplitudes
 .. autofunction:: plot_autocorrelograms
 .. autofunction:: plot_crosscorrelograms
+.. autofunction:: plot_motion
 .. autofunction:: plot_quality_metrics
 .. autofunction:: plot_sorting_summary
 .. autofunction:: plot_spike_locations
 .. autofunction:: plot_spikes_on_traces
 .. autofunction:: plot_template_metrics
 .. autofunction:: plot_template_similarity
-.. autofunction:: plot_timeseries
+.. autofunction:: plot_traces
 .. autofunction:: plot_unit_depths
 .. autofunction:: plot_unit_locations
 .. autofunction:: plot_unit_summary
14 changes: 7 additions & 7 deletions doc/how_to/analyse_neuropixels.rst
@@ -264,7 +264,7 @@ the ipywydgets interactive ploter
 .. code:: python

     %matplotlib widget
-    si.plot_timeseries({'filter':rec1, 'cmr': rec4}, backend='ipywidgets')
+    si.plot_traces({'filter':rec1, 'cmr': rec4}, backend='ipywidgets')

 Note that using this ipywydgets make possible to explore diffrents
 preprocessing chain wihtout to save the entire file to disk. Everything
@@ -276,9 +276,9 @@ is lazy, so you can change the previsous cell (parameters, step order,
     # here we use static plot using matplotlib backend
     fig, axs = plt.subplots(ncols=3, figsize=(20, 10))

-    si.plot_timeseries(rec1, backend='matplotlib', clim=(-50, 50), ax=axs[0])
-    si.plot_timeseries(rec4, backend='matplotlib', clim=(-50, 50), ax=axs[1])
-    si.plot_timeseries(rec, backend='matplotlib', clim=(-50, 50), ax=axs[2])
+    si.plot_traces(rec1, backend='matplotlib', clim=(-50, 50), ax=axs[0])
+    si.plot_traces(rec4, backend='matplotlib', clim=(-50, 50), ax=axs[1])
+    si.plot_traces(rec, backend='matplotlib', clim=(-50, 50), ax=axs[2])
     for i, label in enumerate(('filter', 'cmr', 'final')):
         axs[i].set_title(label)
@@ -292,7 +292,7 @@ is lazy, so you can change the previsous cell (parameters, step order,
     # plot some channels
     fig, ax = plt.subplots(figsize=(20, 10))

     some_chans = rec.channel_ids[[100, 150, 200, ]]
-    si.plot_timeseries({'filter':rec1, 'cmr': rec4}, backend='matplotlib', mode='line', ax=ax, channel_ids=some_chans)
+    si.plot_traces({'filter':rec1, 'cmr': rec4}, backend='matplotlib', mode='line', ax=ax, channel_ids=some_chans)
@@ -426,7 +426,7 @@ Let’s use here the ``locally_exclusive`` method for detection and the
     job_kwargs = dict(n_jobs=40, chunk_duration='1s', progress_bar=True)
     peaks = detect_peaks(rec, method='locally_exclusive', noise_levels=noise_levels_int16,
-                         detect_threshold=5, local_radius_um=50., **job_kwargs)
+                         detect_threshold=5, radius_um=50., **job_kwargs)
     peaks
@@ -451,7 +451,7 @@ Let’s use here the ``locally_exclusive`` method for detection and the
     from spikeinterface.sortingcomponents.peak_localization import localize_peaks

-    peak_locations = localize_peaks(rec, peaks, method='center_of_mass', local_radius_um=50., **job_kwargs)
+    peak_locations = localize_peaks(rec, peaks, method='center_of_mass', radius_um=50., **job_kwargs)
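The documentation edits in this file track two API renames shipped in this cycle: `plot_timeseries` → `plot_traces` and `local_radius_um` → `radius_um`. A short sketch of the updated call pattern, assuming a preprocessed recording `rec` as in the tutorial:

```python
# Sketch of the renamed calls; `rec` is assumed to be a preprocessed
# recording as in the tutorial above.
import spikeinterface.full as si
from spikeinterface.sortingcomponents.peak_detection import detect_peaks

# plot_timeseries(...) is now plot_traces(...)
si.plot_traces(rec, backend="matplotlib", clim=(-50, 50))

# local_radius_um=... is now radius_um=...
peaks = detect_peaks(rec, method="locally_exclusive",
                     detect_threshold=5, radius_um=50.0)
```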
4 changes: 2 additions & 2 deletions doc/how_to/get_started.rst
@@ -104,7 +104,7 @@ and the raster plots.

 .. code:: ipython3

-    w_ts = sw.plot_timeseries(recording, time_range=(0, 5))
+    w_ts = sw.plot_traces(recording, time_range=(0, 5))
     w_rs = sw.plot_rasters(sorting_true, time_range=(0, 5))
@@ -266,7 +266,7 @@ available parameters are dictionaries and can be accessed with:
     'clustering': {},
     'detection': {'detect_threshold': 5, 'peak_sign': 'neg'},
     'filtering': {'dtype': 'float32'},
-    'general': {'local_radius_um': 100, 'ms_after': 2, 'ms_before': 2},
+    'general': {'radius_um': 100, 'ms_after': 2, 'ms_before': 2},
     'job_kwargs': {},
     'localization': {},
     'matching': {},
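The renamed `radius_um` key also shows up in the default sorter parameters quoted above. A hedged sketch of how such a parameter dictionary is fetched (the sorter name is illustrative; any installed sorter works):

```python
# Sketch: fetching default sorter parameters; "spykingcircus2" is an
# illustrative sorter whose params include a "general" section.
import spikeinterface.sorters as ss

params = ss.get_default_sorter_params("spykingcircus2")
print(params["general"])  # e.g. {'radius_um': 100, 'ms_after': 2, 'ms_before': 2}
```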
22 changes: 11 additions & 11 deletions doc/how_to/handle_drift.rst
@@ -7,7 +7,7 @@
 Handle motion/drift with spikeinterface
 =======================================

-Spikeinterface offers a very flexible framework to handle drift as a
+SpikeInterface offers a very flexible framework to handle drift as a
 preprocessing step. If you want to know more, please read the
 :ref:`motion_correction` section of the documentation.

@@ -96,7 +96,7 @@ Correcting for drift is easy! You just need to run a single function. We
 will try this function with 3 presets.

 Internally a preset is a dictionary of dictionaries containing all
-parameters for every steps.
+parameters for each step.

 Here we also save the motion correction results into a folder to be able
 to load them later.
@@ -118,10 +118,10 @@ to load them later.
     'peak_sign': 'neg',
     'detect_threshold': 8.0,
     'exclude_sweep_ms': 0.1,
-    'local_radius_um': 50},
+    'radius_um': 50},
     'select_kwargs': None,
     'localize_peaks_kwargs': {'method': 'grid_convolution',
-    'local_radius_um': 30.0,
+    'radius_um': 30.0,
     'upsampling_um': 3.0,
     'sigma_um': array([ 5. , 12.5, 20. ]),
     'sigma_ms': 0.25,
@@ -185,14 +185,14 @@ A few comments on the figures:
   start moving is recovered quite well.
 * The preset **kilosort_like** gives better results because it is a non-rigid case. The motion vector
   is computed for different depths. The corrected peak locations are
-  flatter than the rigid case. The motion vector map is still be a bit
-  noisy at some depths (e.g around 1000um).
+  flatter than the rigid case. The motion vector map is still a bit
+  noisy at some depths (e.g. around 1000um).
 * The preset **nonrigid_accurate** seems to give the best results on this recording.
   The motion vector seems less noisy globally, but it is not “perfect”
   (see at the top of the probe 3200um to 3800um). Also note that in the first part
   of the recording before the imposed motion (0-600s) we
   clearly have a non-rigid motion: the upper part of the probe
-  (2000-3000um) experience some drifts, but the lower part (0-1000um) is
+  (2000-3000um) experience some drift, but the lower part (0-1000um) is
   relatively stable. The method defined by this preset is able to capture this.

 .. code:: ipython3
@@ ... @@
     # and plot
     fig = plt.figure(figsize=(14, 8))

-    si.plot_motion(rec, motion_info, figure=fig, depth_lim=(400, 600),
-                   color_amplitude=True, amplitude_cmap='inferno', scatter_decimate=10)
+    si.plot_motion(motion_info, figure=fig, depth_lim=(400, 600),
+                   color_amplitude=True, amplitude_cmap='inferno', scatter_decimate=10)
     fig.suptitle(f"{preset=}")
@@ -237,7 +237,7 @@ axis, especially for the preset “nonrigid_accurate”.

 Be aware that there are two ways to correct for the motion: 1.
 Interpolate traces and detect/localize peaks again
-(:py:func:`interpolate_recording()`) 2. Compensate for drifts directly on peak
+(:py:func:`interpolate_recording()`) 2. Compensate for drift directly on peak
 locations (:py:func:`correct_motion_on_peaks()`)

 Case 1 is used before running a spike sorter and the case 2 is used here
@@ -272,7 +272,7 @@ to display the results.
             #color='black',
             ax.scatter(loc['x'][mask][sl], loc['y'][mask][sl], **color_kargs)

-    loc2 = correct_motion_on_peaks(motion_info['peaks'], motion_info['peak_locations'], rec.get_times(),
+    loc2 = correct_motion_on_peaks(motion_info['peaks'], motion_info['peak_locations'], rec.sampling_frequency,
                                    motion_info['motion'], motion_info['temporal_bins'], motion_info['spatial_bins'], direction="y")

     ax = axs[1]
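The edits in this file all revolve around the preset-based motion-correction workflow discussed in the tutorial. A minimal end-to-end sketch under the same API, assuming `rec` from the tutorial, an illustrative folder name, and that `correct_motion` supports `output_motion_info=True` as in current SpikeInterface releases:

```python
# Sketch: run motion correction with a preset and plot the result,
# mirroring the tutorial above. The folder path is illustrative.
import spikeinterface.full as si

rec_corrected, motion_info = si.correct_motion(
    rec,
    preset="nonrigid_accurate",
    folder="./motion_nonrigid_accurate",
    output_motion_info=True,  # returns the peaks/motion data used by plot_motion
)
si.plot_motion(motion_info, depth_lim=(400, 600))
```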
2 changes: 1 addition & 1 deletion doc/install_sorters.rst
@@ -191,7 +191,7 @@ Mountainsort5
    pip install mountainsort5

 SpyKING CIRCUS
-^^^^^^^^^^^^^
+^^^^^^^^^^^^^^

 * Python, requires MPICH
 * Url: https://spyking-circus.readthedocs.io
4 changes: 2 additions & 2 deletions doc/installation.rst
@@ -38,7 +38,7 @@ From source

 As :code:`spikeinterface` is undergoing a heavy development phase, it is sometimes convenient to install from source
 to get the latest bug fixes and improvements. We recommend constructing the package within a
-[virtual environment](https://packaging.python.org/en/latest/guides/installing-using-pip-and-virtual-environments/)
+`virtual environment <https://packaging.python.org/en/latest/guides/installing-using-pip-and-virtual-environments/>`_
 to prevent potential conflicts with local dependencies.

 .. code-block:: bash
@@ -49,7 +49,7 @@ to prevent potential conflicts with local dependencies.
    pip install -e .
    cd ..

-Note that this will install the package in [editable mode](https://pip.pypa.io/en/stable/topics/local-project-installs/#editable-installs).
+Note that this will install the package in `editable mode <https://pip.pypa.io/en/stable/topics/local-project-installs/#editable-installs>`_.

 It is also recommended in that case to also install :code:`neo` and :code:`probeinterface` from source,
 as :code:`spikeinterface` strongly relies on these packages to interface with various formats and handle probes:
