
Saving deepinterpolation denoised file as binary #1785

Open
jazlynntan opened this issue Jul 6, 2023 · 8 comments
Labels
preprocessing Related to preprocessing module

Comments

@jazlynntan

Hello,

I managed to run deepinterpolation, and the time series plot looks good. However, when I try to save the result as a .dat binary file to run through my local installation of Kilosort3, I get the error message: TypeError: __init__() got an unexpected keyword argument 'recording'. I get the same error when running Kilosort3 on the denoised recording directly via the Singularity container.

I run deepinterpolation via:
import spikeinterface.preprocessing as sp
rec5 = sp.deepinterpolate(sp.zero_channel_pad(rec4, 384), model_path='2020_02_29_15_28_unet_single_ephys_1024_mean_squared_error-1050.h5')
where rec4 is the output of filtering, removing bad channels, and common average referencing, following the code in the Neuropixels tutorial. The zero channel padding is there because I removed one bad channel (the reference channel). I have the model downloaded and the path is correct.

Regarding saving, I tried:
job_kwargs = dict(n_jobs=12, chunk_duration='1s', progress_bar=True)
rec = rec5.save(folder='myfolder', format='binary', **job_kwargs)
which gave the same TypeError: __init__() got an unexpected keyword argument 'recording'

as well as:
import spikeinterface.core as sc
job_kwargs = dict(chunk_duration='1s', progress_bar=True)
sc.write_binary_recording(rec5, file_paths='myfolder/deepinterpolation_denoised.dat', dtype='int16', n_jobs=12, **job_kwargs)
which again gave the same error.

Am I running deepinterpolation incorrectly? Also, is it recommended to save the output as a binary .dat file via the recording.save() method or the write_binary_recording() method? I tried saving my filtered recording (without deepinterpolation denoising) via the .save() method, but it produced a .raw file that my local installation of Kilosort3 did not accept (it complained that the input was not binary).

Thank you!

@alejoe91
Member

alejoe91 commented Jul 6, 2023

Hi, can you paste the full error?

For running KS3, I recommend doing it via SpikeInterface since our wrapper automatically maps all inputs correctly ;)

sorting_KS3 = ss.run_sorter("kilosort3", recording)

For saving to binary, we recommend using the save() function, which also saves a bunch of metadata and lets you reload the recording/sorting later with the load_extractor() function.
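For example (a minimal sketch; the folder name is just a placeholder):

job_kwargs = dict(n_jobs=12, chunk_duration='1s', progress_bar=True)
# writes flat binary traces plus metadata/provenance files into the folder
rec_saved = recording.save(folder='preprocessed', format='binary', **job_kwargs)

# later, in a fresh session, the whole recording can be reloaded
from spikeinterface import load_extractor
rec_loaded = load_extractor('preprocessed')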

@jazlynntan
Author

jazlynntan commented Jul 6, 2023

Thanks for the quick reply!

I just re-ran the sorter and got this error:

Starting container
Installing spikeinterface==0.97.1 in spikeinterface/kilosort3-compiled-base
Installing extra requirements: ['neo', 'tensorflow']
Running kilosort3 sorter inside spikeinterface/kilosort3-compiled-base
Stopping container
---------------------------------------------------------------------------
SpikeSortingError                         Traceback (most recent call last)
Cell In[14], line 3
      1 import spikeinterface.sorters as ss
      2 # si.get_default_sorter_params('kilosort3')
----> 3 sorting = ss.run_sorter('kilosort3', rec5, output_folder='/data/projects/neuropixel_jt/test_kilosort3_output',
      4                         singularity_image=True, verbose=True)

File ~/.local/lib/python3.9/site-packages/spikeinterface/sorters/runsorter.py:137, in run_sorter(sorter_name, recording, output_folder, remove_existing_folder, delete_output_folder, verbose, raise_error, docker_image, singularity_image, with_output, **sorter_params)
    135         else:
    136             container_image = singularity_image
--> 137     return run_sorter_container(
    138         container_image=container_image,
    139         mode=mode,
    140         **common_kwargs,
    141     )
    143 return run_sorter_local(**common_kwargs)

File ~/.local/lib/python3.9/site-packages/spikeinterface/sorters/runsorter.py:583, in run_sorter_container(sorter_name, recording, mode, container_image, output_folder, remove_existing_folder, delete_output_folder, verbose, raise_error, with_output, extra_requirements, **sorter_params)
    581 if run_error:
    582     if raise_error:
--> 583         raise SpikeSortingError(
    584             f"Spike sorting in {mode} failed with the following error:\n{run_sorter_output}")
    585 else:
    586     if with_output:

SpikeSortingError: Spike sorting in singularity failed with the following error:
Traceback (most recent call last):
  File "/data/projects/neuropixel_jt/in_container_sorter_script.py", line 3, in <module>
    from spikeinterface import load_extractor
  File "/data/projects/neuropixel_jt/in_container_python_base/lib/python3.8/site-packages/spikeinterface/__init__.py", line 6, in <module>
    __version__ = importlib.metadata.version("spikeinterface")
  File "/home/miniconda3/lib/python3.8/importlib/metadata.py", line 493, in version
    return distribution(distribution_name).version
  File "/home/miniconda3/lib/python3.8/importlib/metadata.py", line 466, in distribution
    return Distribution.from_name(distribution_name)
  File "/home/miniconda3/lib/python3.8/importlib/metadata.py", line 176, in from_name
    raise PackageNotFoundError(name)
importlib.metadata.PackageNotFoundError: spikeinterface

When I try to save the recording using rec5.save(), I get this error still:

write_binary_recording with n_jobs = 12 and chunk_size = 30000
Exception in initializer:
Traceback (most recent call last):
  File "/data2/software/anaconda3/envs/phy/lib/python3.9/concurrent/futures/process.py", line 233, in _process_worker
    initializer(*initargs)
  File "/home/jatan/.local/lib/python3.9/site-packages/spikeinterface/core/job_tools.py", line 384, in worker_initializer
    _worker_ctx = init_func(*init_args)
  File "/home/jatan/.local/lib/python3.9/site-packages/spikeinterface/core/core_tools.py", line 185, in _init_binary_worker
    worker_ctx['recording'] = load_extractor(recording)
  File "/home/jatan/.local/lib/python3.9/site-packages/spikeinterface/core/base.py", line 917, in load_extractor
    return BaseExtractor.from_dict(file_or_folder_or_dict, base_folder=base_folder)
  File "/home/jatan/.local/lib/python3.9/site-packages/spikeinterface/core/base.py", line 370, in from_dict
    extractor = _load_extractor_from_dict(d)
  File "/home/jatan/.local/lib/python3.9/site-packages/spikeinterface/core/base.py", line 845, in _load_extractor_from_dict
    kwargs[k] = _load_extractor_from_dict(v)
  File "/home/jatan/.local/lib/python3.9/site-packages/spikeinterface/core/base.py", line 862, in _load_extractor_from_dict
    extractor = cls(**kwargs)
TypeError: __init__() got an unexpected keyword argument 'recording'
(the same "Exception in initializer" traceback is repeated for each of the remaining worker processes)
write_binary_recording:   0%|          | 0/1561 [00:00<?, ?it/s]
---------------------------------------------------------------------------
BrokenProcessPool                         Traceback (most recent call last)
Cell In[15], line 4
      1 # saves to .raw and kilosort reads wrong number of channels
      2 job_kwargs = dict(n_jobs=12, chunk_duration='1s', progress_bar=True)
----> 4 rec = rec5.save(folder='/data/projects/neuropixel_jt/20230524_mouse2/preprocess3', format='binary', **job_kwargs)

File ~/.local/lib/python3.9/site-packages/spikeinterface/core/base.py:621, in BaseExtractor.save(self, **kwargs)
    619     loaded_extractor = self.save_to_zarr(**kwargs)
    620 else:
--> 621     loaded_extractor = self.save_to_folder(**kwargs)
    622 return loaded_extractor

File ~/.local/lib/python3.9/site-packages/spikeinterface/core/base.py:700, in BaseExtractor.save_to_folder(self, name, folder, verbose, **save_kwargs)
    697 self.save_metadata_to_folder(folder)
    699 # save data (done the subclass)
--> 700 cached = self._save(folder=folder, verbose=verbose, **save_kwargs)
    702 # copy properties/
    703 self.copy_metadata(cached)

File ~/.local/lib/python3.9/site-packages/spikeinterface/core/baserecording.py:297, in BaseRecording._save(self, format, **save_kwargs)
    294 file_paths = [folder / f'traces_cached_seg{i}.raw' for i in range(self.get_num_segments())]
    295 dtype = kwargs.get('dtype', None) or self.get_dtype()
--> 297 write_binary_recording(self, file_paths=file_paths, dtype=dtype, **job_kwargs)
    299 from .binaryrecordingextractor import BinaryRecordingExtractor
    300 binary_rec = BinaryRecordingExtractor(file_paths=file_paths, sampling_frequency=self.get_sampling_frequency(),
    301                                       num_chan=self.get_num_channels(), dtype=dtype,
    302                                       t_starts=t_starts, channel_ids=self.get_channel_ids(), time_axis=0,
    303                                       file_offset=0, gain_to_uV=self.get_channel_gains(),
    304                                       offset_to_uV=self.get_channel_offsets())

File ~/.local/lib/python3.9/site-packages/spikeinterface/core/core_tools.py:280, in write_binary_recording(recording, file_paths, dtype, add_file_extension, verbose, byte_offset, auto_cast_uint, **job_kwargs)
    277     init_args = (recording.to_dict(), rec_memmaps_dict, dtype, cast_unsigned)
    278 executor = ChunkRecordingExecutor(recording, func, init_func, init_args, verbose=verbose,
    279                                   job_name='write_binary_recording', **job_kwargs)
--> 280 executor.run()

File ~/.local/lib/python3.9/site-packages/spikeinterface/core/job_tools.py:364, in ChunkRecordingExecutor.run(self)
    362                 returns.append(res)
    363         else:
--> 364             for res in results:
    365                 pass
    367 return returns

File /data2/software/anaconda3/envs/phy/lib/python3.9/site-packages/tqdm/notebook.py:254, in tqdm_notebook.__iter__(self)
    252 try:
    253     it = super(tqdm_notebook, self).__iter__()
--> 254     for obj in it:
    255         # return super(tqdm...) will not catch exception
    256         yield obj
    257 # NB: except ... [ as ...] breaks IPython async KeyboardInterrupt

File /data2/software/anaconda3/envs/phy/lib/python3.9/site-packages/tqdm/std.py:1178, in tqdm.__iter__(self)
   1175 time = self._time
   1177 try:
-> 1178     for obj in iterable:
   1179         yield obj
   1180         # Update and possibly print the progressbar.
   1181         # Note: does not call self.update(1) for speed optimisation.

File /data2/software/anaconda3/envs/phy/lib/python3.9/concurrent/futures/process.py:562, in _chain_from_iterable_of_lists(iterable)
    556 def _chain_from_iterable_of_lists(iterable):
    557     """
    558     Specialized implementation of itertools.chain.from_iterable.
    559     Each item in *iterable* should be a list.  This function is
    560     careful not to keep references to yielded objects.
    561     """
--> 562     for element in iterable:
    563         element.reverse()
    564         while element:

File /data2/software/anaconda3/envs/phy/lib/python3.9/concurrent/futures/_base.py:609, in Executor.map.<locals>.result_iterator()
    606 while fs:
    607     # Careful not to keep a reference to the popped future
    608     if timeout is None:
--> 609         yield fs.pop().result()
    610     else:
    611         yield fs.pop().result(end_time - time.monotonic())

File /data2/software/anaconda3/envs/phy/lib/python3.9/concurrent/futures/_base.py:446, in Future.result(self, timeout)
    444     raise CancelledError()
    445 elif self._state == FINISHED:
--> 446     return self.__get_result()
    447 else:
    448     raise TimeoutError()

File /data2/software/anaconda3/envs/phy/lib/python3.9/concurrent/futures/_base.py:391, in Future.__get_result(self)
    389 if self._exception:
    390     try:
--> 391         raise self._exception
    392     finally:
    393         # Break a reference cycle with the exception in self._exception
    394         self = None

BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.

Is there a way I can use .save() to get a .dat file that Kilosort will accept as input, rather than a .raw (I'm guessing the saved recording file is traces_cached_seg0.raw)?
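For reference, since the .raw written by save() should already be flat interleaved binary, would simply copying it to a .dat work? Something like (just a guess on my part):

import shutil
# assumption: traces_cached_seg0.raw contains plain interleaved int16 samples,
# which is the layout Kilosort expects from a .dat file
shutil.copyfile('myfolder/traces_cached_seg0.raw', 'myfolder/recording.dat')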

@alejoe91
Member

alejoe91 commented Jul 6, 2023

Thanks. Can you try to install SpikeInterface from the main branch? I think this issue might be fixed!
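For example, with pip:

pip install --upgrade git+https://github.com/SpikeInterface/spikeinterface.git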

@jazlynntan
Author

Hi, thanks for your quick response. I tried running the sorting but ran into a different error. This is the full output and error message:

Starting container
Installing spikeinterface==0.98.0 in spikeinterface/kilosort3-compiled-base
Installing extra requirements: ['neo', 'tensorflow']
Running kilosort3 sorter inside spikeinterface/kilosort3-compiled-base
Stopping container

---------------------------------------------------------------------------
SpikeSortingError                         Traceback (most recent call last)
Cell In[36], line 3
      1 import spikeinterface.sorters as ss
      2 # si.get_default_sorter_params('kilosort3')
----> 3 sorting = ss.run_sorter('kilosort3', rec5_removed, output_folder='/data/projects/neuropixel_jt/test_kilosort3_output',
      4                         singularity_image=True, verbose=True)

File ~/.local/lib/python3.9/site-packages/spikeinterface/sorters/runsorter.py:142, in run_sorter(sorter_name, recording, output_folder, remove_existing_folder, delete_output_folder, verbose, raise_error, docker_image, singularity_image, delete_container_files, with_output, **sorter_params)
    140         else:
    141             container_image = singularity_image
--> 142     return run_sorter_container(
    143         container_image=container_image,
    144         mode=mode,
    145         **common_kwargs,
    146     )
    148 return run_sorter_local(**common_kwargs)

File ~/.local/lib/python3.9/site-packages/spikeinterface/sorters/runsorter.py:595, in run_sorter_container(sorter_name, recording, mode, container_image, output_folder, remove_existing_folder, delete_output_folder, verbose, raise_error, with_output, delete_container_files, extra_requirements, **sorter_params)
    593 if run_error:
    594     if raise_error:
--> 595         raise SpikeSortingError(f"Spike sorting in {mode} failed with the following error:\n{run_sorter_output}")
    596 else:
    597     if with_output:

SpikeSortingError: Spike sorting in singularity failed with the following error:
2023-07-13 03:43:08.085362: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-07-13 03:43:08.593005: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2023-07-13 03:43:09.080342: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:995] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355
2023-07-13 03:43:09.185553: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1960] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
2023-07-13 03:43:09.595311: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:375] MLIR V1 optimization pass is not enabled
2023-07-13 03:43:09.628755: W tensorflow/c/c_api.cc:304] Operation '{name:'conv2d_8/kernel/Assign' id:209 op device:{requested: '', assigned: ''} def:{{{node conv2d_8/kernel/Assign}} = AssignVariableOp[_has_manual_control_dependencies=true, dtype=DT_FLOAT, validate_shape=false](conv2d_8/kernel, conv2d_8/kernel/Initializer/stateless_random_uniform)}}' was changed by setting attribute after it was run by a session. This mutation will have no effect, and will trigger an error in the future. Either don't modify nodes after running them or create a new session.
2023-07-13 03:43:09.820931: W tensorflow/c/c_api.cc:304] Operation '{name:'conv2d_9/bias/rms/Assign' id:506 op device:{requested: '', assigned: ''} def:{{{node conv2d_9/bias/rms/Assign}} = AssignVariableOp[_has_manual_control_dependencies=true, dtype=DT_FLOAT, validate_shape=false](conv2d_9/bias/rms, conv2d_9/bias/rms/Initializer/zeros)}}' was changed by setting attribute after it was run by a session. This mutation will have no effect, and will trigger an error in the future. Either don't modify nodes after running them or create a new session.
2023-07-13 03:43:20.596828: W tensorflow/c/c_api.cc:304] Operation '{name:'conv2d_7/kernel/Assign' id:179 op device:{requested: '', assigned: ''} def:{{{node conv2d_7/kernel/Assign}} = AssignVariableOp[_has_manual_control_dependencies=true, dtype=DT_FLOAT, validate_shape=false](conv2d_7/kernel, conv2d_7/kernel/Initializer/stateless_random_uniform)}}' was changed by setting attribute after it was run by a session. This mutation will have no effect, and will trigger an error in the future. Either don't modify nodes after running them or create a new session.
2023-07-13 03:43:20.769335: W tensorflow/c/c_api.cc:304] Operation '{name:'conv2d_5/kernel/rms/Assign' id:451 op device:{requested: '', assigned: ''} def:{{{node conv2d_5/kernel/rms/Assign}} = AssignVariableOp[_has_manual_control_dependencies=true, dtype=DT_FLOAT, validate_shape=false](conv2d_5/kernel/rms, conv2d_5/kernel/rms/Initializer/zeros)}}' was changed by setting attribute after it was run by a session. This mutation will have no effect, and will trigger an error in the future. Either don't modify nodes after running them or create a new session.
write_binary_recording:   0%|          | 0/1561 [00:00<?, ?it/s]
/data/projects/neuropixel_jt/in_container_python_base/lib/python3.8/site-packages/keras/src/engine/training_v1.py:2359: UserWarning: Model.state_updates will be removed in a future version. This property should not be used in TensorFlow 2.0, as updates are applied automatically.
  updates=self.state_updates,
(the same Keras UserWarning is repeated for each worker process)
concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
  File "/home/miniconda3/lib/python3.8/concurrent/futures/process.py", line 239, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
  File "/home/miniconda3/lib/python3.8/concurrent/futures/process.py", line 198, in _process_chunk
    return [fn(*args) for args in chunk]
  File "/home/miniconda3/lib/python3.8/concurrent/futures/process.py", line 198, in <listcomp>
    return [fn(*args) for args in chunk]
  File "/data/projects/neuropixel_jt/in_container_python_base/lib/python3.8/site-packages/spikeinterface/core/job_tools.py", line 438, in function_wrapper
    return _func(segment_index, start_frame, end_frame, _worker_ctx)
  File "/data/projects/neuropixel_jt/in_container_python_base/lib/python3.8/site-packages/spikeinterface/core/core_tools.py", line 233, in _write_binary_chunk
    traces = recording.get_traces(
  File "/data/projects/neuropixel_jt/in_container_python_base/lib/python3.8/site-packages/spikeinterface/core/baserecording.py", line 278, in get_traces
    traces = rs.get_traces(start_frame=start_frame, end_frame=end_frame, channel_indices=channel_indices)
  File "/data/projects/neuropixel_jt/in_container_python_base/lib/python3.8/site-packages/spikeinterface/core/channelslice.py", line 93, in get_traces
    traces = self._parent_recording_segment.get_traces(start_frame, end_frame, parent_indices)
  File "/data/projects/neuropixel_jt/in_container_python_base/lib/python3.8/site-packages/spikeinterface/preprocessing/deepinterpolation/deepinterpolation.py", line 386, in get_traces
    out_traces = np.concatenate((array_to_append_front, out_traces), axis=0)
  File "<__array_function__ internals>", line 200, in concatenate
ValueError: all the input array dimensions except for the concatenation axis must match exactly, but along dimension 1, the array at index 0 has size 383 and the array at index 1 has size 384
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/data/projects/neuropixel_jt/in_container_sorter_script.py", line 17, in <module>
    sorting = run_sorter_local(
  File "/data/projects/neuropixel_jt/in_container_python_base/lib/python3.8/site-packages/spikeinterface/sorters/runsorter.py", line 173, in run_sorter_local
    SorterClass.setup_recording(recording, output_folder, verbose=verbose)
  File "/data/projects/neuropixel_jt/in_container_python_base/lib/python3.8/site-packages/spikeinterface/sorters/basesorter.py", line 206, in setup_recording
    cls._setup_recording(recording, sorter_output_folder, sorter_params, verbose)
  File "/data/projects/neuropixel_jt/in_container_python_base/lib/python3.8/site-packages/spikeinterface/sorters/external/kilosortbase.py", line 153, in _setup_recording
    write_binary_recording(
  File "/data/projects/neuropixel_jt/in_container_python_base/lib/python3.8/site-packages/spikeinterface/core/core_tools.py", line 314, in write_binary_recording
    executor.run()
  File "/data/projects/neuropixel_jt/in_container_python_base/lib/python3.8/site-packages/spikeinterface/core/job_tools.py", line 400, in run
    for res in results:
  File "/home/miniconda3/lib/python3.8/site-packages/tqdm/std.py", line 1107, in __iter__
    for obj in iterable:
  File "/home/miniconda3/lib/python3.8/concurrent/futures/process.py", line 484, in _chain_from_iterable_of_lists
    for element in iterable:
  File "/home/miniconda3/lib/python3.8/concurrent/futures/_base.py", line 611, in result_iterator
    yield fs.pop().result()
  File "/home/miniconda3/lib/python3.8/concurrent/futures/_base.py", line 439, in result
    return self.__get_result()
  File "/home/miniconda3/lib/python3.8/concurrent/futures/_base.py", line 388, in __get_result
    raise self._exception
ValueError: all the input array dimensions except for the concatenation axis must match exactly, but along dimension 1, the array at index 0 has size 383 and the array at index 1 has size 384
write_binary_recording:   0%|          | 0/1561 [23:04<?, ?it/s]

@alejoe91
Member

@jazlynntan I recently made a PR that refactors and updates the deepinterpolation step. Can you test it out?
#1804

@jazlynntan
Author

Hi, I tried it out but can't get past the deepinterpolation step now. Am I doing something wrong?

My code is:

import spikeinterface.preprocessing as sp

rec1 = sp.highpass_filter(recording, freq_min=300.)
bad_channel_ids, channel_labels = sp.detect_bad_channels(rec1)
rec2 = rec1.remove_channels(bad_channel_ids)
print('bad_channel_ids', bad_channel_ids)

rec3 = sp.phase_shift(rec2)
rec4 = sp.common_reference(rec3, operator="median", reference="global")
rec5 = sp.deepinterpolate(sp.zero_channel_pad(rec4, 384), model_path='/data/projects/neuropixel_jt/2020_02_29_15_28_unet_single_ephys_1024_mean_squared_error-1050.h5')

rec1-rec4 are the preprocessing steps based on the Neuropixels tutorial. My recording is from a Neuropixels 1.0 (phase 3B) probe with the staggered configuration. There is one bad channel (the reference), so while 'recording' has 384 channels, rec2 onwards has 383. I pad rec4 back to 384 channels and then run deepinterpolation on that, using the pretrained model from DeepInterpolation.
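As a quick sanity check on the channel counts at each step (a sketch of what I would expect):

rec4_padded = sp.zero_channel_pad(rec4, 384)
print(recording.get_num_channels())    # 384
print(rec2.get_num_channels())         # 383 after removing the reference channel
print(rec4_padded.get_num_channels())  # back to 384 after zero padding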

And the error:

AssertionError                            Traceback (most recent call last)
Cell In[13], line 10
      8 rec3 = sp.phase_shift(rec2)
      9 rec4 = sp.common_reference(rec3, operator="median", reference="global")
---> 10 rec5 = sp.deepinterpolate(sp.zero_channel_pad(rec4,384), model_path = '/data/projects/neuropixel_jt/2020_02_29_15_28_unet_single_ephys_1024_mean_squared_error-1050.h5')

File /data/projects/neuropixel_jt/spikeinterface/src/spikeinterface/preprocessing/deepinterpolation/deepinterpolation.py:80, in DeepInterpolatedRecording.__init__(self, recording, model_path, pre_frame, post_frame, pre_post_omission, batch_size, use_gpu, disable_tf_logger, memory_gpu)
     78 network_input_shape = model.get_config()["layers"][0]["config"]["batch_input_shape"]
     79 desired_shape = network_input_shape[1:3]
---> 80 assert (
     81     desired_shape[0] * desired_shape[1] == recording.get_num_channels()
     82 ), "The desired shape of the network input must match the number of channels in the recording"
     83 assert (
     84     network_input_shape[-1] == pre_frame + post_frame
     85 ), "The desired shape of the network input must match the pre and post frames"
     87 self.model = model

AssertionError: The desired shape of the network input must match the number of channels in the recording
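For reference, the input shape the model expects can be read off the checkpoint directly (a sketch, assuming the .h5 loads with plain Keras; compile=False skips the custom loss):

import tensorflow as tf
model = tf.keras.models.load_model(
    '/data/projects/neuropixel_jt/2020_02_29_15_28_unet_single_ephys_1024_mean_squared_error-1050.h5',
    compile=False)
# same lookup the assertion in deepinterpolation.py performs
print(model.get_config()["layers"][0]["config"]["batch_input_shape"])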

@alejoe91
Member

Hi @jazlynntan

The problem is that the model you are using pads the electrodes with interleaved zeros to obtain an input shape of (384, 2).

Basically you have:

...
e2 e3
0 0
e0 e1

@jeromelecoq can confirm!

Is your rec4 a Neuropixels recording?

NOTE: the PR is still in progress. I will add more docs and tutorials in the coming week.
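To illustrate, here is a rough numpy sketch of that layout (not the library's actual generator code; array shapes are my assumptions):

import numpy as np

num_frames, num_channels = 1000, 384
traces = np.random.randn(num_frames, num_channels).astype("float32")

# frame image the network sees: 384 rows x 2 columns,
# with electrode pairs on alternate rows and zero rows in between
img = np.zeros((num_frames, 384, 2), dtype="float32")
img[:, 0::2, 0] = traces[:, 0::2]  # left electrode of each staggered pair
img[:, 0::2, 1] = traces[:, 1::2]  # right electrode of each staggered pair

# 384 * 2 = 768 input values per frame, which is why a 384-channel recording
# fails the desired_shape[0] * desired_shape[1] == num_channels assertion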

@jazlynntan
Copy link
Author

The original recording is a Neuropixels 1.0 recording, but rec4 is a processed version with the reference channel removed (hence one channel fewer than the usual 384). I did pad it using sp.zero_channel_pad(rec4, 384), so I thought it would be the correct shape for the model.

Would you suggest I train deepinterpolation from scratch instead? If so, do you mind sharing which version of deepinterpolation you use?

Noted, and thank you for the work on the PR. A tutorial on denoising with deepinterpolation would be really helpful!
