Another minimal change (#1643)
* the change

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
h-mayorquin and pre-commit-ci[bot] authored May 15, 2023
1 parent 7c6d356 commit fa68afc
Showing 97 changed files with 616 additions and 638 deletions.
6 changes: 3 additions & 3 deletions .github/actions/build-test-environment/action.yml
@@ -8,13 +8,13 @@ inputs:
os:
description: 'Operating system to set up'
required: false

runs:
using: "composite"
steps:
- name: Install dependencies
run: |
sudo apt install git
git config --global user.email "[email protected]"
git config --global user.name "CI Almighty"
python -m venv ${{ github.workspace }}/test_env # Environment used in the caching step
@@ -42,4 +42,4 @@ runs:
wget https://downloads.kitenet.net/git-annex/linux/current/git-annex-standalone-amd64.tar.gz
tar xvzf git-annex-standalone-amd64.tar.gz
echo "$(pwd)/git-annex.linux" >> $GITHUB_PATH
shell: bash
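For context on the `>> $GITHUB_PATH` line above: GitHub Actions treats `$GITHUB_PATH` as a plain file, and each directory appended to it is prepended to `PATH` for all subsequent steps. A minimal Python sketch of the same append (the directory name mirrors the git-annex tarball above):

```python
# Append a directory to $GITHUB_PATH so later workflow steps can find
# the git-annex binaries; equivalent to the echo >> $GITHUB_PATH above.
import os

git_annex_dir = os.path.join(os.getcwd(), "git-annex.linux")
with open(os.environ["GITHUB_PATH"], "a") as f:
    f.write(git_annex_dir + "\n")
```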
2 changes: 1 addition & 1 deletion .github/actions/show-test-environment/action.yml
@@ -20,4 +20,4 @@ runs:
if [ -d "$HOME/spikeinterface_datasets" ]; then
find $HOME/spikeinterface_datasets
fi
shell: bash
6 changes: 3 additions & 3 deletions .github/build_job_summary.py
@@ -1,5 +1,5 @@
"""
This function builds a summary from the pytest output in markdown to be used by GITHUB_STEP_SUMMARY.
The input file is the output of the following command:
pytest -vv --durations=0 --durations-min=0.001 > report.txt
"""
@@ -37,7 +37,7 @@
data_frame_to_display = data_frame[["test_name", "type", "test_time", "%of_total_time", "%cum_total_time", "long_name"]]
data_frame_header_markdown = data_frame_to_display.head(10).to_markdown()
data_frame_markdown = data_frame_to_display.to_markdown()

# Build GITHUB_STEP_SUMMARY markdown file
sys.stdout.write("## Pytest summary")
sys.stdout.write("\n \n")
@@ -60,4 +60,4 @@
sys.stdout.write(data_frame_markdown)
sys.stdout.write("\n \n")
sys.stdout.write("</details>")
sys.stdout.write("\n \n")
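The docstring above states the script's contract: parse a `pytest --durations` report and emit a markdown summary for `GITHUB_STEP_SUMMARY`. A hedged sketch of that idea, assuming report lines like `0.35s call tests/test_core.py::test_foo`; the real `.github/build_job_summary.py` computes more columns (e.g. `%cum_total_time`):

```python
# Hedged sketch of a durations-report parser; not the actual script.
import re
import sys

import pandas as pd  # DataFrame.to_markdown() also requires the 'tabulate' package

def build_summary(report_path):
    # pytest --durations lines look like: "0.35s call  tests/test_core.py::test_foo"
    pattern = re.compile(r"^(?P<test_time>\d+\.\d+)s\s+(?P<type>\w+)\s+(?P<long_name>\S+)")
    rows = [m.groupdict() for line in open(report_path) if (m := pattern.match(line.strip()))]
    df = pd.DataFrame(rows)
    df["test_time"] = df["test_time"].astype(float)
    df["%of_total_time"] = 100 * df["test_time"] / df["test_time"].sum()
    df["test_name"] = df["long_name"].str.split("::").str[-1]
    sys.stdout.write("## Pytest summary\n\n")
    sys.stdout.write(df.head(10).to_markdown())

if __name__ == "__main__":
    build_summary(sys.argv[1])
```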
4 changes: 2 additions & 2 deletions .github/import_test.py
@@ -30,7 +30,7 @@
f"import_statement = '{import_statement}' \n"
f"time_taken = timeit.timeit(import_statement, number=1) \n"
f"print(time_taken) \n"
)

result = subprocess.run(["python", "-c", script_to_execute], capture_output=True, text=True)

@@ -54,4 +54,4 @@
raise Exception("\n".join(exceptions))

# This is displayed to GITHUB_STEP_SUMMARY
print(markdown_output)
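The snippet above builds a small script as a string and runs it with `subprocess` so the import is timed in a fresh interpreter; modules already imported in the parent process would otherwise hide the true cost. A self-contained sketch of the same pattern:

```python
# Time an import in a clean child interpreter; imports cached in this
# process would make an in-process measurement meaningless.
import subprocess

import_statement = "import spikeinterface"
script_to_execute = (
    "import timeit\n"
    f"print(timeit.timeit({import_statement!r}, number=1))\n"
)
result = subprocess.run(["python", "-c", script_to_execute], capture_output=True, text=True)
if result.returncode != 0:
    raise RuntimeError(result.stderr)
print(f"{import_statement}: {float(result.stdout):.3f} s")
```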
2 changes: 1 addition & 1 deletion .github/is_spikeinterface_dev.py
@@ -3,4 +3,4 @@
package_name = "spikeinterface"
version = importlib.metadata.version(package_name)
if version.endswith("dev0"):
print(True)
2 changes: 1 addition & 1 deletion .github/run_tests.sh
@@ -6,4 +6,4 @@ source $GITHUB_WORKSPACE/test_env/bin/activate
pytest -m "$MARKER" -vv -ra --durations=0 --durations-min=0.001 | tee report.txt; test ${PIPESTATUS[0]} -eq 0 || exit 1
echo "# Timing profile of ${MARKER}" >> $GITHUB_STEP_SUMMARY
python $GITHUB_WORKSPACE/.github/build_job_summary.py report.txt >> $GITHUB_STEP_SUMMARY
rm report.txt
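The `tee`/`PIPESTATUS` idiom in this script deserves a note: piping into `tee` would normally replace pytest's exit status with `tee`'s (always zero), so `test ${PIPESTATUS[0]} -eq 0 || exit 1` re-checks the first command in the pipe. A rough Python equivalent of the same pattern, as a sketch:

```python
# Stream pytest output to both the console and report.txt, then exit
# with pytest's own status (what tee + PIPESTATUS[0] achieve in bash).
import subprocess
import sys

proc = subprocess.Popen(
    ["pytest", "-vv", "--durations=0"],
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,
    text=True,
)
with open("report.txt", "w") as report:
    for line in proc.stdout:
        sys.stdout.write(line)  # keep live console output
        report.write(line)      # and a copy for the summary step
sys.exit(proc.wait())
```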
22 changes: 11 additions & 11 deletions .github/workflows/caches_cron_job.yml
@@ -1,6 +1,6 @@
name: Create caches for gin ecephys data and virtual env

on:
workflow_dispatch:
push: # When something is pushed into main this checks if caches need to be re-created
branches:
@@ -9,9 +9,9 @@ on:
- cron: "0 12 * * *" # Daily at noon UTC

jobs:

create-virtual-env-cache-if-missing:
name: Caching virtual env
runs-on: "ubuntu-latest"
@@ -35,13 +35,13 @@ jobs:
key: ${{ runner.os }}-venv-${{ steps.dependencies.outputs.hash }}-${{ steps.date.outputs.date }}
- name: Cache found?
run: echo "Cache-hit == ${{steps.cache-venv.outputs.cache-hit == 'true'}}"
- name: Create the virtual environment to be cached
if: steps.cache-venv.outputs.cache-hit != 'true'
uses: ./.github/actions/build-test-environment

create-gin-data-cache-if-missing:
name: Caching data env
runs-on: "ubuntu-latest"
@@ -54,10 +54,10 @@ jobs:
mkdir --parents --verbose $HOME/spikeinterface_datasets/ephy_testing_data/
chmod -R 777 $HOME/spikeinterface_datasets
ls -l $HOME/spikeinterface_datasets
- name: Get current hash (SHA) of the ephy_testing_data repo
id: repo_hash
run: |
echo "dataset_hash=$(git ls-remote https://gin.g-node.org/NeuralEnsemble/ephy_testing_data.git HEAD | cut -f1)"
echo "dataset_hash=$(git ls-remote https://gin.g-node.org/NeuralEnsemble/ephy_testing_data.git HEAD | cut -f1)"
echo "dataset_hash=$(git ls-remote https://gin.g-node.org/NeuralEnsemble/ephy_testing_data.git HEAD | cut -f1)" >> $GITHUB_OUTPUT
- uses: actions/cache@v3
id: cache-datasets
@@ -87,7 +87,7 @@ jobs:
- name: Show size of the cache to assert data is downloaded
run: |
cd $HOME
pwd
du -hs spikeinterface_datasets
cd spikeinterface_datasets
pwd
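The caching step above keys the cache on the remote HEAD SHA of the GIN data repository (`git ls-remote ... | cut -f1`), so the cache is invalidated exactly when the dataset changes. A sketch of that key computation in Python:

```python
# Compute the remote HEAD SHA of the data repo and publish it as a step
# output; mirrors the "git ls-remote ... | cut -f1" line above.
import os
import subprocess

url = "https://gin.g-node.org/NeuralEnsemble/ephy_testing_data.git"
head_line = subprocess.run(
    ["git", "ls-remote", url, "HEAD"],
    capture_output=True, text=True, check=True,
).stdout
dataset_hash = head_line.split()[0]  # "<sha>\tHEAD" -> "<sha>"

# Inside a workflow step, appending key=value to $GITHUB_OUTPUT exposes it
# to later steps as steps.<id>.outputs.dataset_hash.
with open(os.environ["GITHUB_OUTPUT"], "a") as f:
    f.write(f"dataset_hash={dataset_hash}\n")
```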
12 changes: 6 additions & 6 deletions .github/workflows/core-test.yml
@@ -27,19 +27,19 @@ jobs:
run: |
git config --global user.email "[email protected]"
git config --global user.name "CI Almighty"
python -m pip install -U pip # Official recommended way
pip install -e .[test_core]
- name: Test core with pytest
run: |
pytest -vv -ra --durations=0 --durations-min=0.001 src/spikeinterface/core | tee report.txt; test ${PIPESTATUS[0]} -eq 0 || exit 1
shell: bash # Necessary for pipeline to work on windows
- name: Build test summary
run: |
pip install pandas
pip install tabulate
echo "# Timing profile of core tests in ${{matrix.os}}" >> $GITHUB_STEP_SUMMARY
echo "# Timing profile of core tests in ${{matrix.os}}" >> $GITHUB_STEP_SUMMARY
# Outputs markdown summary to standard output
python ./.github/build_job_summary.py report.txt >> $GITHUB_STEP_SUMMARY
cat $GITHUB_STEP_SUMMARY
rm report.txt
shell: bash # Necessary for pipeline to work on windows
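Several steps in these workflows append markdown to `$GITHUB_STEP_SUMMARY`, which GitHub Actions treats as a plain file whose contents are rendered on the workflow run page. A minimal sketch (the table row is illustrative):

```python
# $GITHUB_STEP_SUMMARY is a file path; anything appended is rendered as
# markdown on the run page. The row below is made-up example data.
import os

summary_path = os.environ.get("GITHUB_STEP_SUMMARY")
if summary_path:  # only set when running inside GitHub Actions
    with open(summary_path, "a") as f:
        f.write("# Timing profile of core tests\n\n")
        f.write("| test | time (s) |\n|---|---|\n")
        f.write("| test_example | 0.12 |\n")
```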
4 changes: 2 additions & 2 deletions .github/workflows/full-test-with-codecov.yml
@@ -58,8 +58,8 @@ jobs:
run: |
source ${{ github.workspace }}/test_env/bin/activate
pytest -m "not sorters_external" --cov=./ --cov-report xml:./coverage.xml -vv -ra --durations=0 | tee report_full.txt; test ${PIPESTATUS[0]} -eq 0 || exit 1
echo "# Timing profile of full tests" >> $GITHUB_STEP_SUMMARY
python ./.github/build_job_summary.py report_full.txt >> $GITHUB_STEP_SUMMARY
echo "# Timing profile of full tests" >> $GITHUB_STEP_SUMMARY
python ./.github/build_job_summary.py report_full.txt >> $GITHUB_STEP_SUMMARY
cat $GITHUB_STEP_SUMMARY
rm report_full.txt
- uses: codecov/codecov-action@v3
2 changes: 1 addition & 1 deletion .github/workflows/full-test.yml
@@ -5,7 +5,7 @@ on:
types: [synchronize, opened, reopened]
branches:
- main

concurrency: # Cancel previous workflows on the same pull request
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
9 changes: 4 additions & 5 deletions .github/workflows/test_imports.yml
@@ -27,24 +27,23 @@ jobs:
run: |
git config --global user.email "[email protected]"
git config --global user.name "CI Almighty"
python -m pip install -U pip # Official recommended way
pip install -e . # This should install core only
- name: Profile Imports
run: |
echo "## OS: ${{ matrix.os }}" >> $GITHUB_STEP_SUMMARY
echo "---" >> $GITHUB_STEP_SUMMARY
echo "### Import times when only installing only core dependencies " >> $GITHUB_STEP_SUMMARY
python ./.github/import_test.py >> $GITHUB_STEP_SUMMARY
shell: bash # Necessary for pipeline to work on windows
- name: Install in full mode
run: |
python -m pip install -U pip # Official recommended way
pip install -e .[full]
- name: Profile Imports with full
run: |
# Add a header to separate the two profiles
echo "---" >> $GITHUB_STEP_SUMMARY
echo "### Import times when installing full dependencies in " >> $GITHUB_STEP_SUMMARY
python ./.github/import_test.py >> $GITHUB_STEP_SUMMARY
shell: bash # Necessary for pipeline to work on windows

2 changes: 1 addition & 1 deletion .gitignore
@@ -187,4 +187,4 @@ examples/modules/toolkit/tmp_*
test_folder/

# Mac OS
.DS_Store
4 changes: 2 additions & 2 deletions README.md
@@ -71,7 +71,7 @@ Detailed documentation for spikeinterface can be found [here](https://spikeinter

Several tutorials to get started can be found in [spiketutorials](https://github.com/SpikeInterface/spiketutorials).

There are also some useful notebooks [on our blog](https://spikeinterface.github.io) that cover advanced benchmarking
and sorting components.

You can also have a look at the [spikeinterface-gui](https://github.com/SpikeInterface/spikeinterface-gui).
@@ -85,7 +85,7 @@ You can install the new `spikeinterface` version with pip:
pip install spikeinterface[full]
```

The `[full]` option installs all the extra dependencies for all the different sub-modules.

To install all interactive widget backends, you can use:

4 changes: 2 additions & 2 deletions conftest.py
@@ -29,10 +29,10 @@ def pytest_sessionstart(session):
def pytest_collection_modifyitems(config, items):
"""
This function marks (in the pytest sense) the tests according to their name and file_path location
Marking them in turn allows the tests to be run by using the pytest -m marker_name option.
"""


# python 3.4/3.5 compat: rootdir = pathlib.Path(str(config.rootdir))
rootdir = Path(config.rootdir)

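The docstring above explains the mechanism: `pytest_collection_modifyitems` inspects each collected test's path and attaches a marker, so a command like `pytest -m extractors` selects a single submodule's tests. A hedged sketch of how such path-based marking can work (the module list is illustrative, not the repository's actual list):

```python
# Illustrative conftest.py fragment: derive a marker from each test's
# location so "pytest -m <module>" runs just that module's tests.
from pathlib import Path

MODULE_MARKERS = ["core", "extractors", "sorters", "widgets"]  # illustrative

def pytest_collection_modifyitems(config, items):
    rootdir = Path(config.rootdir)
    for item in items:
        rel_parts = Path(item.fspath).relative_to(rootdir).parts
        for module in MODULE_MARKERS:
            if module in rel_parts:
                item.add_marker(module)  # pytest accepts marker names as strings
```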
7 changes: 3 additions & 4 deletions doc/api.rst
@@ -49,7 +49,7 @@ spikeinterface.core
.. autofunction:: get_template_extremum_channel_peak_shift
.. autofunction:: get_template_extremum_amplitude

..
.. autofunction:: read_binary
.. autofunction:: read_zarr
.. autofunction:: append_recordings
@@ -78,7 +78,7 @@ NEO-based
~~~~~~~~~

.. automodule:: spikeinterface.extractors

.. autofunction:: read_alphaomega
.. autofunction:: read_alphaomega_event
.. autofunction:: read_axona
@@ -241,7 +241,7 @@ spikeinterface.comparison

.. autoclass:: MultiSortingComparison
:members:

.. autoclass:: CollisionGTComparison
.. autoclass:: CorrelogramGTComparison
.. autoclass:: CollisionGTStudy
@@ -356,4 +356,3 @@ Template Matching
.. automodule:: spikeinterface.sortingcomponents.matching

.. autofunction:: find_spikes_from_templates

2 changes: 1 addition & 1 deletion doc/authors.rst
@@ -13,7 +13,7 @@ Current core team
* `Alessio Paolo Buccino <https://github.com/alejoe91>`_ [1]
* `Samuel Garcia <https://github.com/samuelgarcia>`_ [2]

For any inquiries, please contact Alessio Buccino ([email protected]) or Samuel Garcia
([email protected]), or just write an issue (preferred)!


6 changes: 3 additions & 3 deletions doc/conf.py
@@ -40,7 +40,7 @@
'../examples/modules_gallery/qualitymetrics/waveforms_mearec',
'../examples/modules_gallery/qualitymetrics/wfs_mearec',
'../examples/modules_gallery/widgets/waveforms_mearec',

]

for folder in folders:
@@ -69,7 +69,7 @@
"sphinx.ext.extlinks",
]

numpydoc_show_class_members = False


# Add any paths that contain templates here, relative to this directory.
@@ -133,4 +133,4 @@

extlinks = {
"probeinterface": ("https://probeinterface.readthedocs.io/%s", None),
}
33 changes: 16 additions & 17 deletions doc/development/development.rst
@@ -20,7 +20,7 @@ We use a `forking workflow <https://www.atlassian.com/git/tutorials/comparing-wor
* Create a new branch (e.g., :code:`git switch -c my-contribution`).
* Modify the code, commit, and push changes to your fork.
* Open a pull request from the "Pull Requests" tab of your fork to :code:`spikeinterface/main`.
* By following this process, we can review the code and even make changes as necessary.

While we appreciate all the contributions, please be mindful of `the cost of reviewing pull requests <https://rgommers.github.io/2019/06/the-cost-of-an-open-source-contribution/>`_.

@@ -52,7 +52,7 @@ If you want to run a specific test in a specific file, you can use the following
pytest src/spikeinterface/core/tests/test_baserecording.py::specific_test_in_this_module

We also maintain pytest markers to run specific tests. For example, if you want to run only the tests
for the :code:`spikeinterface.extractors` module, you can use the following command:

.. code-block:: bash
@@ -69,7 +69,7 @@ Note that you should install the package before running the tests. You can do th
You can change the :code:`[test,extractors,full]` to install only the dependencies you need. The dependencies are specified in the :code:`pyproject.toml` file in the root of the repository.

The specific environment for the CI is specified in the :code:`.github/actions/build-test-environment/action.yml` and you can
find the full tests in the :code:`.github/workflows/full_test.yml` file.

The extractor tests require datalad for some of the tests. Here are instructions for installing datalad:
@@ -156,22 +156,22 @@ Note, however, that the running time of the command above will be slow. If you w
Implement a new extractor
-------------------------

SpikeInterface already supports over 30 file formats, but the acquisition system you use might not be among the
supported formats list (***ref***). Most of the extractors rely on the `NEO <https://github.com/NeuralEnsemble/python-neo>`_
package to read information from files.
Therefore, to implement a new extractor to handle the unsupported format, we recommend making a new :code:`neo.rawio` class.
Once that is done, the new class can be easily wrapped into SpikeInterface as an extension of the
:py:class:`~spikeinterface.extractors.neoextractors.neobaseextractors.NeoBaseRecordingExtractor`
(for :py:class:`~spikeinterface.core.BaseRecording` objects) or
:py:class:`~spikeinterface.extractors.neoextractors.neobaseextractors.NeoBaseSortingExtractor`
(for :py:class:`~spikeinterface.core.BaseSorting` objects) with a few lines of
code (e.g., see the reader for `SpikeGLX <https://github.com/SpikeInterface/spikeinterface/blob/0.96.1/spikeinterface/extractors/neoextractors/spikeglx.py>`_
or `Neuralynx <https://github.com/SpikeInterface/spikeinterface/blob/0.96.1/spikeinterface/extractors/neoextractors/neuralynx.py>`_).

**NOTE:** implementing a :code:`neo.rawio` class is not required, but recommended. Several extractors (especially for Sorting
objects) are implemented directly in SpikeInterface and inherit from the base classes.
As examples, see the `CompressedBinaryIblExtractor <https://github.com/SpikeInterface/spikeinterface/blob/0.96.1/spikeinterface/extractors/cbin_ibl.py>`_
for a :py:class:`~spikeinterface.core.BaseRecording` object, or the `SpykingCircusSortingExtractor <https://github.com/SpikeInterface/spikeinterface/blob/0.96.1/spikeinterface/extractors/spykingcircusextractors.py>`_
for a :py:class:`~spikeinterface.core.BaseSorting` object.
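As a concrete illustration of the wrapping described above, here is a hedged sketch of a Neo-backed recording extractor, loosely modeled on the SpikeGLX and Neuralynx readers linked in this section; the format's class and argument names are hypothetical, and base-class details may differ between spikeinterface versions:

```python
# Hedged sketch of wrapping a (hypothetical) neo.rawio class into
# SpikeInterface; loosely modeled on the SpikeGLX reader, so exact
# base-class attributes may differ between versions.
from spikeinterface.extractors.neoextractors.neobaseextractors import (
    NeoBaseRecordingExtractor,
)

class MyFormatRecordingExtractor(NeoBaseRecordingExtractor):
    mode = "folder"                  # how the format is addressed on disk
    NeoRawIOClass = "MyFormatRawIO"  # hypothetical neo.rawio class name

    def __init__(self, folder_path, stream_id=None):
        neo_kwargs = {"dirname": str(folder_path)}  # forwarded to the rawio
        NeoBaseRecordingExtractor.__init__(self, stream_id=stream_id, **neo_kwargs)
```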


@@ -309,4 +309,3 @@ but we recommend testing the implementation locally.
After this you need to add a block in doc/sorters_info.rst

Finally, make a pull request to the spikesorters repo, so we can review the code and merge it into spikesorters!

