Add Keywords to the Documentation Functions #2057

Merged: 5 commits, Oct 2, 2023
4 changes: 2 additions & 2 deletions doc/how_to/load_matlab_data.rst
@@ -54,7 +54,7 @@ Use the following Python script to load the binary data into SpikeInterface:
dtype = "float64" # MATLAB's double corresponds to Python's float64

# Load data using SpikeInterface
-recording = si.read_binary(file_path, sampling_frequency=sampling_frequency,
+recording = si.read_binary(file_paths=file_path, sampling_frequency=sampling_frequency,
num_channels=num_channels, dtype=dtype)

# Confirm that the data was loaded correctly by comparing the data shapes and checking that they match the MATLAB data
@@ -86,7 +86,7 @@ If your data in MATLAB is stored as :code:`int16`, and you know the gain and off
gain_to_uV = 0.195 # Adjust according to your MATLAB dataset
offset_to_uV = 0 # Adjust according to your MATLAB dataset

-recording = si.read_binary(file_path, sampling_frequency=sampling_frequency,
+recording = si.read_binary(file_paths=file_path, sampling_frequency=sampling_frequency,
num_channels=num_channels, dtype=dtype_int,
gain_to_uV=gain_to_uV, offset_to_uV=offset_to_uV)
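With the gain and offset registered on the recording, traces can then be requested directly in microvolts. A minimal sketch, reusing the :code:`recording` defined above:

    # return_scaled applies the gain/offset set above and returns float traces in uV
    traces_uV = recording.get_traces(return_scaled=True)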

36 changes: 18 additions & 18 deletions doc/modules/curation.rst
@@ -24,21 +24,21 @@ The merging and splitting operations are handled by the :py:class:`~spikeinterfa

from spikeinterface.curation import CurationSorting

-sorting = run_sorter('kilosort2', recording)
+sorting = run_sorter(sorter_name='kilosort2', recording=recording)

-cs = CurationSorting(sorting)
+cs = CurationSorting(parent_sorting=sorting)
Collaborator (Author):

Is there a reason for this? Why do some functions use parent_sorting or parent_recording while most use sorting or recording?

Member:

Good point, we should unify this indeed! What would you vote for? Probably just recording and sorting?

Collaborator (Author):

Yeah, I think the path of least resistance, and the most obvious choice for the user, is just recording and sorting. I don't think I would guess that I have a parent_sorting vs child_sorting at the API level for most functions. (Also less typing.)

Member:

Agreed. It's also a good time to make the switch. Do you think we should deprecate the old arguments or just break compatibility towards the next release?

Collaborator (Author):

Based on the recent issues with functions and arguments (#1907, #1899, #1678), it might be better to deprecate for at least one release (although tearing off the band-aid would be cleaner, as in #1865, which I would prefer); that change seems to have caused a bit of confusion for users.

Member:

Well, this is only changing an argument name that is probably not really used by most users. I think most users will call bandpass_filter(recording) and not bandpass_filter(parent_recording=recording) (same for channel_slice, the other preprocessors, etc.). Keeping backward compatibility would mean adding a temporary argument and setting the main recording argument to None, which would possibly require making some other args None by default...

Let's just make the switch. I checked and it's not that many files :)
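For context, a backward-compatible rename of the kind discussed above could look like the sketch below; :code:`renamed_kwarg` and :code:`some_curation_step` are hypothetical names for illustration, not part of the SpikeInterface API:

    import functools
    import warnings

    def renamed_kwarg(old_name, new_name):
        """Forward a deprecated keyword argument to its new name for one release."""
        def decorator(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                if old_name in kwargs:
                    warnings.warn(
                        f"'{old_name}' is deprecated; use '{new_name}' instead.",
                        DeprecationWarning,
                        stacklevel=2,
                    )
                    kwargs[new_name] = kwargs.pop(old_name)
                return func(*args, **kwargs)
            return wrapper
        return decorator

    @renamed_kwarg("parent_sorting", "sorting")
    def some_curation_step(sorting=None):
        return sorting

    # both spellings work during the deprecation window; the old one warns
    assert some_curation_step(parent_sorting="s") == some_curation_step(sorting="s")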


# make a first merge
-cs.merge(['#1', '#5', '#15'])
+cs.merge(units_to_merge=['#1', '#5', '#15'])

# make a second merge
-cs.merge(['#11', '#21'])
+cs.merge(units_to_merge=['#11', '#21'])

# make a split
split_index = ... # some criteria on spikes
-cs.split('#20', split_index)
+cs.split(split_unit_id='#20', indices_list=split_index)

-# here the final clean sorting
+# here is the final clean sorting
clean_sorting = cs.sorting
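As a usage note, the curated result behaves like any other sorting object, so it can be persisted for later sessions. A minimal sketch, with an illustrative folder name:

    # save the curated sorting so the merges/splits survive the session
    clean_sorting = clean_sorting.save(folder='clean_sorting_folder')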


@@ -60,20 +60,20 @@ merges. Therefore, it has many parameters and options.

from spikeinterface.curation import MergeUnitsSorting, get_potential_auto_merge

-sorting = run_sorter('kilosort', recording)
+sorting = run_sorter(sorter_name='kilosort', recording=recording)

-we = extract_waveforms(recording, sorting, folder='wf_folder')
+we = extract_waveforms(recording=recording, sorting=sorting, folder='wf_folder')

# merges is a list of lists, with unit_ids to be merged.
-merges = get_potential_auto_merge(we, minimum_spikes=1000, maximum_distance_um=150.,
+merges = get_potential_auto_merge(waveform_extractor=we, minimum_spikes=1000, maximum_distance_um=150.,
peak_sign="neg", bin_ms=0.25, window_ms=100.,
corr_diff_thresh=0.16, template_diff_thresh=0.25,
censored_period_ms=0., refractory_period_ms=1.0,
contamination_threshold=0.2, num_channels=5, num_shift=5,
firing_contamination_balance=1.5)

# here we apply the merges
-clean_sorting = MergeUnitsSorting(sorting, merges)
+clean_sorting = MergeUnitsSorting(parent_sorting=sorting, units_to_merge=merges)


Manual curation with sorting view
@@ -98,24 +98,24 @@ The manual curation (including merges and labels) can be applied to a SpikeInter
from spikeinterface.widgets import plot_sorting_summary

# run a sorter and export waveforms
-sorting = run_sorter('kilosort2', recording)
-we = extract_waveforms(recording, sorting, folder='wf_folder')
+sorting = run_sorter(sorter_name='kilosort2', recording=recording)
+we = extract_waveforms(recording=recording, sorting=sorting, folder='wf_folder')

# some postprocessing is required
-_ = compute_spike_amplitudes(we)
-_ = compute_unit_locations(we)
-_ = compute_template_similarity(we)
-_ = compute_correlograms(we)
+_ = compute_spike_amplitudes(waveform_extractor=we)
+_ = compute_unit_locations(waveform_extractor=we)
+_ = compute_template_similarity(waveform_extractor=we)
+_ = compute_correlograms(waveform_extractor=we)

# This uploads the data to the cloud for web-based plotting and sharing
-plot_sorting_summary(we, curation=True, backend='sortingview')
+plot_sorting_summary(waveform_extractor=we, curation=True, backend='sortingview')
# we open the printed link URL in a browser
# - make manual merges and labeling
# - from the curation box, click on "Save as snapshot (sha1://)"

# copy the uri
sha_uri = "sha1://59feb326204cf61356f1a2eb31f04d8e0177c4f1"
-clean_sorting = apply_sortingview_curation(sorting, uri_or_json=sha_uri)
+clean_sorting = apply_sortingview_curation(sorting=sorting, uri_or_json=sha_uri)

Note that you can also "Export as JSON" and pass the json file as :code:`uri_or_json` parameter.
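For example, a minimal sketch assuming a hypothetical local file :code:`curation.json` exported from the curation box:

    clean_sorting = apply_sortingview_curation(sorting=sorting, uri_or_json="curation.json")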

19 changes: 9 additions & 10 deletions doc/modules/exporters.rst
@@ -28,15 +28,14 @@ The input of the :py:func:`~spikeinterface.exporters.export_to_phy` is a :code:`
from spikeinterface.exporters import export_to_phy

# the waveforms are sparse so it is faster to export to phy
-folder = 'waveforms'
-we = extract_waveforms(recording, sorting, folder, sparse=True)
+we = extract_waveforms(recording=recording, sorting=sorting, folder='waveforms', sparse=True)

# some computations are done beforehand so that all options can be controlled
-compute_spike_amplitudes(we)
-compute_principal_components(we, n_components=3, mode='by_channel_global')
+compute_spike_amplitudes(waveform_extractor=we)
+compute_principal_components(waveform_extractor=we, n_components=3, mode='by_channel_global')

# the export process is fast because everything is pre-computed
-export_to_phy(we, output_folder='path/to/phy_folder')
+export_to_phy(waveform_extractor=we, output_folder='path/to/phy_folder')



@@ -72,12 +71,12 @@ with many units!


# the waveforms are sparse for more interpretable figures
-we = extract_waveforms(recording, sorting, folder='path/to/wf', sparse=True)
+we = extract_waveforms(recording=recording, sorting=sorting, folder='path/to/wf', sparse=True)

# some computations are done beforehand so that all options can be controlled
-compute_spike_amplitudes(we)
-compute_correlograms(we)
-compute_quality_metrics(we, metric_names=['snr', 'isi_violation', 'presence_ratio'])
+compute_spike_amplitudes(waveform_extractor=we)
+compute_correlograms(waveform_extractor=we)
+compute_quality_metrics(waveform_extractor=we, metric_names=['snr', 'isi_violation', 'presence_ratio'])

# the export process
-export_report(we, output_folder='path/to/spikeinterface-report-folder')
+export_report(waveform_extractor=we, output_folder='path/to/spikeinterface-report-folder')
23 changes: 13 additions & 10 deletions doc/modules/extractors.rst
@@ -13,11 +13,12 @@ Most of the :code:`Recording` classes are implemented by wrapping the

Most of the :code:`Sorting` classes are instead directly implemented in SpikeInterface.


Although SpikeInterface is object-oriented (class-based), each object can also be loaded with a convenient
:code:`read_XXXXX()` function.

.. code-block:: python

import spikeinterface.extractors as se


Read one Recording
@@ -27,40 +28,42 @@ Every format can be read with a simple function:

.. code-block:: python

-recording_oe = read_openephys("open-ephys-folder")
+recording_oe = read_openephys(folder_path="open-ephys-folder")

-recording_spikeglx = read_spikeglx("spikeglx-folder")
+recording_spikeglx = read_spikeglx(folder_path="spikeglx-folder")

-recording_blackrock = read_blackrock("blackrock-folder")
+recording_blackrock = read_blackrock(folder_path="blackrock-folder")

-recording_mearec = read_mearec("mearec_file.h5")
+recording_mearec = read_mearec(file_path="mearec_file.h5")


Importantly, some formats directly handle the probe information:

.. code-block:: python

-recording_spikeglx = read_spikeglx("spikeglx-folder")
+recording_spikeglx = read_spikeglx(folder_path="spikeglx-folder")
print(recording_spikeglx.get_probe())

-recording_mearec = read_mearec("mearec_file.h5")
+recording_mearec = read_mearec(file_path="mearec_file.h5")
print(recording_mearec.get_probe())




Read one Sorting
----------------

.. code-block:: python

-sorting_KS = read_kilosort("kilosort-folder")
+sorting_KS = read_kilosort(folder_path="kilosort-folder")


Read one Event
--------------

.. code-block:: python

-events_OE = read_openephys_event("open-ephys-folder")
+events_OE = read_openephys_event(folder_path="open-ephys-folder")


For a comprehensive list of compatible technologies, see :ref:`compatible_formats`.
@@ -77,7 +80,7 @@ The actual reading will be done on demand using the :py:meth:`~spikeinterface.co
.. code-block:: python

# opening a 40GB SpikeGLX dataset is fast
-recording_spikeglx = read_spikeglx("spikeglx-folder")
+recording_spikeglx = read_spikeglx(folder_path="spikeglx-folder")

# this really does load the full 40GB into memory: not recommended!
traces = recording_spikeglx.get_traces(start_frame=None, end_frame=None, return_scaled=False)
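A sketch of the lazy-friendly pattern instead reads a bounded chunk; the frame count below assumes an illustrative 30 kHz sampling rate:

    # read only the first second of traces (30000 frames at 30 kHz)
    traces_chunk = recording_spikeglx.get_traces(start_frame=0, end_frame=30_000, return_scaled=False)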
46 changes: 25 additions & 21 deletions doc/modules/motion_correction.rst
@@ -77,12 +77,12 @@ We currently have 3 presets:
.. code-block:: python

# read and preprocess
-rec = read_spikeglx('/my/Neuropixel/recording')
-rec = bandpass_filter(rec)
-rec = common_reference(rec)
+rec = read_spikeglx(folder_path='/my/Neuropixel/recording')
+rec = bandpass_filter(recording=rec)
+rec = common_reference(recording=rec)

# then correction is one line of code
-rec_corrected = correct_motion(rec, preset="nonrigid_accurate")
+rec_corrected = correct_motion(recording=rec, preset="nonrigid_accurate")

The process is quite long due to the first two steps (activity profile + motion inference),
but the returned :code:`rec_corrected` is a lazy recording object that will interpolate traces on the fly.
@@ -94,20 +94,20 @@ If you want to use other presets, this is as easy as:
.. code-block:: python

# mimic kilosort motion
-rec_corrected = correct_motion(rec, preset="kilosort_like")
+rec_corrected = correct_motion(recording=rec, preset="kilosort_like")

# super fast but less accurate and rigid
-rec_corrected = correct_motion(rec, preset="rigid_fast")
+rec_corrected = correct_motion(recording=rec, preset="rigid_fast")


Optionally any parameter from the preset can be overwritten:

.. code-block:: python

-rec_corrected = correct_motion(rec, preset="nonrigid_accurate",
+rec_corrected = correct_motion(recording=rec, preset="nonrigid_accurate",
detect_kwargs=dict(
detect_threshold=10.),
-estimate_motion_kwargs=dic(
+estimate_motion_kwargs=dict(
histogram_depth_smooth_um=8.,
time_horizon_s=120.,
),
@@ -123,7 +123,7 @@ and checking. The folder will contain the motion vector itself of course but also
.. code-block:: python

motion_folder = '/somewhere/to/save/the/motion'
-rec_corrected = correct_motion(rec, preset="nonrigid_accurate", folder=motion_folder)
+rec_corrected = correct_motion(recording=rec, preset="nonrigid_accurate", folder=motion_folder)

# and then
motion_info = load_motion_info(motion_folder)
@@ -156,14 +156,16 @@ The high-level :py:func:`~spikeinterface.preprocessing.correct_motion()` is inte

job_kwargs = dict(chunk_duration="1s", n_jobs=20, progress_bar=True)
# Step 1: activity profile
-peaks = detect_peaks(rec, method="locally_exclusive", detect_threshold=8.0, **job_kwargs)
+peaks = detect_peaks(recording=rec, method="locally_exclusive", detect_threshold=8.0, **job_kwargs)
# (optional) sub-select some peaks to speed up the localization
-peaks = select_peaks(peaks, ...)
-peak_locations = localize_peaks(rec, peaks, method="monopolar_triangulation", radius_um=75.0,
+peaks = select_peaks(peaks=peaks, ...)
+peak_locations = localize_peaks(recording=rec, peaks=peaks, method="monopolar_triangulation", radius_um=75.0,
max_distance_um=150.0, **job_kwargs)

# Step 2: motion inference
-motion, temporal_bins, spatial_bins = estimate_motion(rec, peaks, peak_locations,
+motion, temporal_bins, spatial_bins = estimate_motion(recording=rec,
+    peaks=peaks,
+    peak_locations=peak_locations,
method="decentralized",
direction="y",
bin_duration_s=2.0,
Expand All @@ -173,7 +175,9 @@ The high-level :py:func:`~spikeinterface.preprocessing.correct_motion()` is inte

# Step 3: motion interpolation
# this step is lazy
-rec_corrected = interpolate_motion(rec, motion, temporal_bins, spatial_bins,
+rec_corrected = interpolate_motion(recording=rec, motion=motion,
+    temporal_bins=temporal_bins,
+    spatial_bins=spatial_bins,
border_mode="remove_channels",
spatial_interpolation_method="kriging",
sigma_um=30.)
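When the three steps are run manually like this, the intermediate results can be persisted so the lazy interpolation can be rebuilt later without re-running steps 1 and 2. A sketch using plain numpy, with illustrative file names:

    import numpy as np

    # persist the estimated motion so steps 1 and 2 need not be re-run
    np.save("motion.npy", motion)
    np.save("temporal_bins.npy", temporal_bins)
    np.save("spatial_bins.npy", spatial_bins)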
@@ -196,20 +200,20 @@ different preprocessing chains: one for motion correction and one for spike sort

.. code-block:: python

-raw_rec = read_spikeglx(...)
+raw_rec = read_spikeglx(folder_path='/spikeglx_folder')

# preprocessing 1 : bandpass (this is smoother) + cmr
-rec1 = si.bandpass_filter(raw_rec, freq_min=300., freq_max=5000.)
-rec1 = si.common_reference(rec1, reference='global', operator='median')
+rec1 = si.bandpass_filter(recording=raw_rec, freq_min=300., freq_max=5000.)
+rec1 = si.common_reference(recording=rec1, reference='global', operator='median')

# here the corrected recording is done on the preprocessing 1
# rec_corrected1 will not be used for sorting!
motion_folder = '/my/folder'
-rec_corrected1 = correct_motion(rec1, preset="nonrigid_accurate", folder=motion_folder)
+rec_corrected1 = correct_motion(recording=rec1, preset="nonrigid_accurate", folder=motion_folder)

# preprocessing 2 : highpass + cmr
-rec2 = si.highpass_filter(raw_rec, freq_min=300.)
-rec2 = si.common_reference(rec2, reference='global', operator='median')
+rec2 = si.highpass_filter(recording=raw_rec, freq_min=300.)
+rec2 = si.common_reference(recording=rec2, reference='global', operator='median')

# we use another preprocessing for the final interpolation
motion_info = load_motion_info(motion_folder)
@@ -220,7 +224,7 @@ different preprocessing chains: one for motion correction and one for spike sort
spatial_bins=motion_info['spatial_bins'],
**motion_info['parameters']['interpolate_motion_kwargs'])

-sorting = run_sorter("montainsort5", rec_corrected2)
+sorting = run_sorter(sorter_name="mountainsort5", recording=rec_corrected2)


References
10 changes: 5 additions & 5 deletions doc/modules/postprocessing.rst
@@ -14,9 +14,9 @@ WaveformExtractor extensions

There are several postprocessing tools available, and all
of them are implemented as a :py:class:`~spikeinterface.core.BaseWaveformExtractorExtension`. All computations on top
-of a WaveformExtractor will be saved along side the WaveformExtractor itself (sub folder, zarr path or sub dict).
+of a :code:`WaveformExtractor` will be saved alongside the :code:`WaveformExtractor` itself (sub-folder, zarr path or sub-dict).
This workflow is convenient for retrieval of time-consuming computations (such as PCA or spike amplitudes) when reloading a
-WaveformExtractor.
+:code:`WaveformExtractor`.

:py:class:`~spikeinterface.core.BaseWaveformExtractorExtension` objects are tightly connected to the
parent :code:`WaveformExtractor` object, so that operations done on the :code:`WaveformExtractor`, such as saving,
@@ -80,9 +80,9 @@ This extension computes the principal components of the waveforms. There are sev

* "by_channel_local" (default): fits one PCA model for each by_channel
* "by_channel_global": fits the same PCA model to all channels (also termed temporal PCA)
* "concatenated": contatenates all channels and fits a PCA model on the concatenated data
* "concatenated": concatenates all channels and fits a PCA model on the concatenated data

-If the input :code:`WaveformExtractor` is sparse, the sparsity is used when computing PCA.
+If the input :code:`WaveformExtractor` is sparse, the sparsity is used when computing the PCA.
For dense waveforms, sparsity can also be passed as an argument.

For more information, see :py:func:`~spikeinterface.postprocessing.compute_principal_components`
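To illustrate the dense case, a sketch passing an explicit sparsity, assuming a radius-based estimate (the 50 µm radius is illustrative):

    from spikeinterface.core import compute_sparsity

    sparsity = compute_sparsity(we, method="radius", radius_um=50.0)
    compute_principal_components(waveform_extractor=we, n_components=3,
                                 mode="by_channel_local", sparsity=sparsity)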
@@ -127,7 +127,7 @@ with center of mass (:code:`method="center_of_mass"` - fast, but less accurate),
For more information, see :py:func:`~spikeinterface.postprocessing.compute_spike_locations`


-unit locations
+unit_locations
^^^^^^^^^^^^^^

