Bye bye spikeextractors (#349)
* bye bye spikeextractors

* use generate_sorting as the recording is not used

* changelog

---------

Co-authored-by: Paul Adkisson <[email protected]>
h-mayorquin and pauladkisson authored Jun 20, 2024
1 parent f6fc44a commit 065ed43
Showing 3 changed files with 10 additions and 17 deletions.
2 changes: 1 addition & 1 deletion CHANGELOG.md
@@ -31,7 +31,7 @@
 * Updated testing workflows to include python 3.12, m1/intel macos, and dev tests to check neuroconv: [PR #317](https://github.com/catalystneuro/roiextractors/pull/317)
 * Added daily testing workflow and fixed bug with python 3.12 by upgrading scanimage-tiff-reader version: [PR #321](https://github.com/catalystneuro/roiextractors/pull/321)
 * Remove wheel from requirements and move CI dependencies to test requirements [PR #348](https://github.com/catalystneuro/roiextractors/pull/348)
-
+* Use Spikeinterface instead of Spikeextractors for toy_example [PR #349](https://github.com/catalystneuro/roiextractors/pull/349)
 
 
 # v0.5.8
2 changes: 1 addition & 1 deletion requirements-testing.txt
@@ -1,5 +1,5 @@
 pytest
 pytest-cov
 parameterized==0.8.1
-spikeextractors>=0.9.10
+spikeinterface>=0.100.7
 pytest-xdist
23 changes: 8 additions & 15 deletions src/roiextractors/example_datasets/toy_example.py
@@ -157,33 +157,26 @@ def toy_example(
         mode=mode,
     )
 
     # generate spike trains
-    import spikeextractors as se
-
-    rec, sort = se.example_datasets.toy_example(
-        duration=duration,
-        K=num_rois,
-        num_channels=1,
-        sampling_frequency=sampling_frequency,
-    )
+    from spikeinterface.core import generate_sorting
+
+    sort = generate_sorting(durations=[duration], num_units=num_rois, sampling_frequency=sampling_frequency)
 
     # create decaying response
     resp_samples = int(decay_time * sampling_frequency)
     resp_tau = resp_samples / 5
     tresp = np.arange(resp_samples)
     resp = np.exp(-tresp / resp_tau)
 
-    num_frames = rec.get_num_frames()  # TODO This should be changed to sampling_frequency x duration
-    num_of_units = sort.get_unit_ids()  # TODO This to be changed by num_rois
+    num_frames = sampling_frequency * duration
 
     # convolve response with ROIs
-    raw = np.zeros(num_of_units, num_frames)  # TODO Change to new standard formating with time in first axis
-    deconvolved = np.zeros(num_of_units, num_frames)  # TODO Change to new standard formating with time in first axis
+    raw = np.zeros(num_rois, num_frames)  # TODO Change to new standard formating with time in first axis
+    deconvolved = np.zeros(num_rois, num_frames)  # TODO Change to new standard formating with time in first axis
     neuropil = noise_std * np.random.randn(
-        num_of_units, num_frames
+        num_rois, num_frames
     )  # TODO Change to new standard formating with time in first axis
     frames = num_frames
-    for u_i, unit in range(num_of_units):
+    for u_i, unit in range(num_rois):
         unit = u_i + 1  # spikeextractor toy example has unit ids starting at 1
         for s in sort.get_unit_spike_train(unit):  # TODO build a local function that generates frames with spikes
             if s < num_frames:
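For context on what this hunk changes: `generate_sorting` only supplies the spike trains; the surrounding `toy_example` code still builds fluorescence traces by stamping an exponentially decaying kernel at each spike frame. A minimal pure-Python sketch of that decay-and-convolve step, assuming illustrative helper names (`decaying_response`, `convolve_spikes`) and example parameters that are not from the repository:

```python
import math

def decaying_response(decay_time, sampling_frequency):
    """Exponentially decaying kernel, mirroring the resp_samples/resp_tau
    computation in toy_example (tau is one fifth of the kernel length)."""
    resp_samples = int(decay_time * sampling_frequency)
    resp_tau = resp_samples / 5
    return [math.exp(-t / resp_tau) for t in range(resp_samples)]

def convolve_spikes(spike_frames, num_frames, kernel):
    """Add one copy of the decay kernel at every spike frame, truncating
    at the end of the trace (the 'raw' signal built from 'deconvolved')."""
    raw = [0.0] * num_frames
    for s in spike_frames:
        for i, k in enumerate(kernel):
            if s + i < num_frames:
                raw[s + i] += k
    return raw

# Example parameters (hypothetical, for illustration only).
kernel = decaying_response(decay_time=0.5, sampling_frequency=30.0)
trace = convolve_spikes(spike_frames=[10, 40], num_frames=100, kernel=kernel)
```

Each spike contributes a peak of 1.0 at its frame followed by an exponential decay; overlapping kernels from nearby spikes simply sum, which is the linear model the TODO comments in the hunk describe.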
