Merge pull request datajoint#54 from CBroz1/event
trialized and localization notebooks
Thinh Nguyen authored Mar 31, 2022

2 parents c27e806 + 6f72f79 commit a5ee2c6
Showing 38 changed files with 6,556 additions and 1,246 deletions.
27 changes: 19 additions & 8 deletions README.md
@@ -9,6 +9,7 @@ A complete electrophysiology workflow can be built using the DataJoint Elements.
+ [element-animal](https://github.com/datajoint/element-animal)
+ [element-session](https://github.com/datajoint/element-session)
+ [element-array-ephys](https://github.com/datajoint/element-array-ephys)
+ Optionally, [element-event](https://github.com/datajoint/element-event)

This repository provides demonstrations for:
1. Set up a workflow using DataJoint Elements (see
@@ -17,11 +18,12 @@ This repository provides demonstrations for:
convention, and directory lookup methods (see
[workflow_array_ephys/paths.py](workflow_array_ephys/paths.py)).
3. Ingestion of clustering results.
4. Ingestion of experimental condition information and downstream [PSTH analysis](https://www.sciencedirect.com/topics/neuroscience/peristimulus-time-histogram).

## Workflow architecture

The electrophysiology workflow presented here uses components from 4 DataJoint
Elements (`element-lab`, `element-animal`, `element-session`,
The electrophysiology workflow presented here uses components from 5 DataJoint
Elements (`element-lab`, `element-animal`, `element-session`, `element-event`,
`element-array-ephys`) assembled together to form a fully functional workflow.

### element-lab
@@ -34,9 +36,17 @@ https://github.com/datajoint/element-lab/raw/main/images/lab_diagram.svg)
![element-animal](
https://github.com/datajoint/element-animal/raw/main/images/subject_diagram.svg)

### assembled with element-array-ephys
### Assembled with element-array-ephys

![element-array-ephys](images/attached_array_ephys_element.svg)
![attached-element-array-ephys](images/attached_array_ephys_element.svg)

### Assembled with element-event and workflow analysis

![attached-trial-analysis](./images/attached_trial_analysis.svg)

### Assembled with element-electrode-localization

![attached-electrode-localization](./images/attached_electrode_localization.svg)

## Installation instructions

@@ -45,7 +55,8 @@ https://github.com/datajoint/element-animal/raw/main/images/subject_diagram.svg)

## Interacting with the DataJoint workflow

+ Please refer to the following workflow-specific
[Jupyter notebooks](/notebooks) for an in-depth explanation of how to run the
workflow ([03-process.ipynb](notebooks/03-process.ipynb)) and explore the data
([05-explore.ipynb](notebooks/05-explore.ipynb)).
Please refer to the following workflow-specific
[Jupyter notebooks](/notebooks) for an in-depth explanation of how to ...
+ run the workflow ([03-process.ipynb](notebooks/03-process.ipynb))
+ explore the data ([05-explore.ipynb](notebooks/05-explore.ipynb))
+ visualize trial-based analyses ([07-downstream-analysis.ipynb](notebooks/07-downstream-analysis.ipynb))
2 changes: 2 additions & 0 deletions docker/Dockerfile.dev
@@ -17,6 +17,7 @@ COPY --chown=anaconda:anaconda ./element-lab /main/element-lab
COPY --chown=anaconda:anaconda ./element-animal /main/element-animal
COPY --chown=anaconda:anaconda ./element-session /main/element-session
COPY --chown=anaconda:anaconda ./element-trial /main/element-trial
COPY --chown=anaconda:anaconda ./element-electrode-localization /main/element-electrode-localization
COPY --chown=anaconda:anaconda ./element-array-ephys /main/element-array-ephys
COPY --chown=anaconda:anaconda ./workflow-array-ephys /main/workflow-array-ephys

@@ -25,6 +26,7 @@ RUN pip install -e /main/element-lab
RUN pip install -e /main/element-animal
RUN pip install -e /main/element-session
RUN pip install -e /main/element-trial
RUN pip install -e /main/element-electrode-localization
RUN pip install -e /main/element-array-ephys
RUN pip install -e /main/workflow-array-ephys
RUN pip install -r /main/workflow-array-ephys/requirements_test.txt
19 changes: 13 additions & 6 deletions docker/Dockerfile.test
@@ -15,27 +15,34 @@ WORKDIR /main/workflow-array-ephys
# RUN pip install git+https://github.com/<user>element-lab.git
# RUN pip install git+https://github.com/<user>/element-animal.git
# RUN pip install git+https://github.com/<user>/element-session.git
# RUN pip install git+https://github.com/<user>/element-trial.git
# RUN pip install git+https://github.com/<user>/element-electrode-localization.git
# RUN pip install git+https://github.com/<user>/element-array-ephys.git
# RUN git clone https://github.com/<user>/workflow-array-ephys.git /main/workflow-array-ephys

# Option 3 - Install user's local fork of element and workflow
RUN mkdir /main/element-lab \
/main/element-animal \
/main/element-session \
/main/element-array-ephys \
/main/workflow-array-ephys
RUN mkdir -p /main/element-lab \
/main/element-animal \
/main/element-session \
/main/element-trial \
/main/element-array-ephys \
/main/workflow-array-ephys

COPY --chown=anaconda:anaconda ./element-lab /main/element-lab
COPY --chown=anaconda:anaconda ./element-animal /main/element-animal
COPY --chown=anaconda:anaconda ./element-session /main/element-session
COPY --chown=anaconda:anaconda ./element-trial /main/element-trial
COPY --chown=anaconda:anaconda ./element-electrode-localization /main/element-electrode-localization
COPY --chown=anaconda:anaconda ./element-array-ephys /main/element-array-ephys
COPY --chown=anaconda:anaconda ./workflow-array-ephys /main/workflow-array-ephys

RUN pip install -e /main/element-lab
RUN pip install -e /main/element-animal
RUN pip install -e /main/element-session
RUN pip install -e /main/element-trial
RUN pip install -e /main/element-electrode-localization
RUN pip install -e /main/element-array-ephys
RUN rm -f /main/workflow-array-ephys/dj_local_conf.json
# RUN rm -f /main/workflow-array-ephys/dj_local_conf.json

# Install the workflow
RUN pip install /main/workflow-array-ephys
9 changes: 6 additions & 3 deletions docker/docker-compose-dev.yaml
@@ -6,18 +6,20 @@ x-net: &net
networks:
- main
services:
array-ephys-dev-db:
db:
<<: *net
image: datajoint/mysql:5.7
container_name: workflow-array-ephys-dev-db
environment:
- MYSQL_ROOT_PASSWORD=simple
array-ephys-dev-workflow:
workflow:
<<: *net
build:
context: ../../
dockerfile: ./workflow-array-ephys/docker/Dockerfile.dev
env_file: .env
image: workflow_array_ephys_dev:0.1.0a4
image: workflow-array-ephys-dev:0.1.0a4
container_name: workflow-array-ephys-dev
environment:
- EPHYS_ROOT_DATA_DIR=/main/test_data/workflow_ephys_data1/,/main/test_data/workflow_ephys_data2/
volumes:
@@ -27,6 +29,7 @@ services:
- ../../element-animal:/main/element-animal
- ../../element-session:/main/element-session
- ../../element-trial:/main/element-trial
- ../../element-electrode-localization:/main/element-electrode-localization
- ../../element-array-ephys:/main/element-array-ephys
- ..:/main/workflow-array-ephys
depends_on:
12 changes: 8 additions & 4 deletions docker/docker-compose-test.yaml
@@ -9,22 +9,25 @@ x-net: &net
networks:
- main
services:
array-ephys-test-db:
db:
<<: *net
image: datajoint/mysql:5.7
container_name: workflow-array-ephys-test-db
environment:
- MYSQL_ROOT_PASSWORD=simple
array-ephys-test-workflow:
workflow:
<<: *net
build:
context: ../../
dockerfile: ./workflow-array-ephys/docker/Dockerfile.test
env_file: .env
image: workflow_array_ephys_test:0.1.0a4
image: workflow-array-ephys-test:0.1.0a4
container_name: workflow-array-ephys-test
environment:
- DJ_HOST=db
- DJ_USER=root
- DJ_PASS=simple
- EPHYS_MODE=no-curation
- EPHYS_ROOT_DATA_DIR=/main/test_data/workflow_ephys_data1/,/main/test_data/workflow_ephys_data2/
- DATABASE_PREFIX=test_
command:
@@ -40,11 +43,12 @@ services:
- ../../element-lab:/main/element-lab
- ../../element-animal:/main/element-animal
- ../../element-session:/main/element-session
- ../../element-electrode-localization:/main/element-electrode-localization
- ../../element-trial:/main/element-trial
- ../../element-array-ephys:/main/element-array-ephys
- ..:/main/workflow-array-ephys
depends_on:
array-ephys-test-db:
db:
condition: service_healthy
networks:
main:
25 changes: 25 additions & 0 deletions images/attached_electrode_localization.svg
195 changes: 195 additions & 0 deletions images/attached_trial_analysis.svg
3 changes: 3 additions & 0 deletions notebooks/00-data-download-optional.ipynb
@@ -182,6 +182,9 @@
}
],
"metadata": {
"jupytext": {
"formats": "ipynb,py"
},
"kernelspec": {
"display_name": "ephys_workflow_runner",
"language": "python",
3 changes: 3 additions & 0 deletions notebooks/01-configure.ipynb
@@ -247,6 +247,9 @@
}
],
"metadata": {
"jupytext": {
"formats": "ipynb,py"
},
"kernelspec": {
"display_name": "bl_dev",
"language": "python",
2,382 changes: 2,372 additions & 10 deletions notebooks/02-workflow-structure-optional.ipynb

Large diffs are not rendered by default.

3 changes: 2 additions & 1 deletion notebooks/03-process.ipynb
@@ -3523,7 +3523,8 @@
],
"metadata": {
"jupytext": {
"encoding": "# -*- coding: utf-8 -*-"
"encoding": "# -*- coding: utf-8 -*-",
"formats": "ipynb,py"
},
"kernelspec": {
"display_name": "ephys_workflow_runner",
3 changes: 3 additions & 0 deletions notebooks/04-automate-optional.ipynb
@@ -391,6 +391,9 @@
}
],
"metadata": {
"jupytext": {
"formats": "ipynb,py"
},
"kernelspec": {
"display_name": "ephys_workflow_runner",
"language": "python",
455 changes: 454 additions & 1 deletion notebooks/05-explore.ipynb

Large diffs are not rendered by default.

3 changes: 3 additions & 0 deletions notebooks/06-drop-optional.ipynb
@@ -53,6 +53,9 @@
}
],
"metadata": {
"jupytext": {
"formats": "ipynb,py"
},
"kernelspec": {
"display_name": "Python [conda env:workflow-ephys]",
"language": "python",
1,037 changes: 0 additions & 1,037 deletions notebooks/07-Downstream analysis.ipynb

This file was deleted.

1,159 changes: 1,159 additions & 0 deletions notebooks/07-downstream-analysis.ipynb

Large diffs are not rendered by default.

710 changes: 710 additions & 0 deletions notebooks/08-electrode-localization.ipynb

Large diffs are not rendered by default.

69 changes: 69 additions & 0 deletions notebooks/py_scripts/00-data-download-optional.py
@@ -0,0 +1,69 @@
# ---
# jupyter:
# jupytext:
# formats: ipynb,py
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.13.7
# kernelspec:
# display_name: ephys_workflow_runner
# language: python
# name: ephys_workflow_runner
# ---

# This workflow requires ephys data collected with either SpikeGLX or OpenEphys, together with the output of Kilosort2. We provide an example dataset that can be downloaded to run through the pipeline; this notebook walks you through downloading it.

# ## Install djarchive-client

# The example dataset is hosted on djarchive, an AWS-based storage. We provide a client package, [djarchive-client](https://github.com/datajoint/djarchive-client), to download the data; it can be installed with pip:

pip install git+https://github.com/datajoint/djarchive-client.git

# ## Download ephys test datasets using `djarchive-client`

import os
import djarchive_client
client = djarchive_client.client()

# To browse the datasets that are available in djarchive:

list(client.datasets())

# Each dataset has revisions associated with the versions of the workflow package. To browse the revisions:

list(client.revisions())

# To download the dataset, let's prepare a root directory, for example in `/tmp`:

os.mkdir('/tmp/test_data')

# Get the dataset revision with the current version of the workflow:

from workflow_array_ephys import version
revision = version.__version__.replace('.', '_')
revision

# Then run the download for a given dataset and revision:

client.download('workflow-array-ephys-test-set', target_directory='/tmp/test_data', revision=revision)

# ## Directory organization
# After downloading, the directory will be organized as follows:

# ```
# /tmp/test_data/
# - subject6
# - session1
# - towersTask_g0_imec0
# - towersTask_g0_t0_nidq.meta
# - towersTask_g0_t0.nidq.bin
# ```

# We will use this dataset as an example for the rest of the notebooks. If you use your own dataset for the workflow, change the path accordingly.
#
# The example dataset `subject6/session1` was recorded with SpikeGLX and processed with Kilosort2. The workflow also supports processing datasets recorded with OpenEphys.
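
# A quick way to confirm the download produced the layout shown above is to walk the directory with the standard library (a minimal sketch; adjust the root if you downloaded the data elsewhere):

from pathlib import Path
root = Path('/tmp/test_data')          # root directory used for the download above
for path in sorted(root.rglob('*')):   # walk every file and folder under the root
    print(path.relative_to(root))      # print paths relative to the root, e.g. subject6/session1/...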

# ## Next step
# In the [next notebook](01-configure.ipynb), we will set up the configuration file for the workflow.
104 changes: 104 additions & 0 deletions notebooks/py_scripts/01-configure.py
@@ -0,0 +1,104 @@
# ---
# jupyter:
# jupytext:
# formats: ipynb,py
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.13.7
# kernelspec:
# display_name: bl_dev
# language: python
# name: bl_dev
# ---

# To run the array ephys workflow, we need to properly set up the DataJoint configuration. The configuration will be saved in a file called `dj_local_conf.json` on each machine and this notebook walks you through the process.
#
#
# **The configuration only needs to be set up once.** If you have gone through the configuration before, go directly to [02-workflow-structure](02-workflow-structure-optional.ipynb).

# # Set up configuration in root directory of this package
#
# As a convention, we set up the configuration in the root directory of the workflow package and always start importing datajoint and the pipeline modules from there.

import os
os.chdir('..')

pwd

import datajoint as dj

# # Configure database host address and credentials

# Now let's set up the host, user and password in the `dj.config` global variable

import getpass
dj.config['database.host'] = '{YOUR_HOST}'
dj.config['database.user'] = '{YOUR_USERNAME}'
dj.config['database.password'] = getpass.getpass() # enter the password securely

# You should be able to connect to the database at this stage.

dj.conn()

# # Configure the `custom` field in `dj.config` for the element-array-ephys

# The major component of the current workflow is the [DataJoint Array Ephys Element](https://github.com/datajoint/element-array-ephys). Array Ephys Element requires configurations in the field `custom` in `dj.config`:

# ## Database prefix
#
# Giving a prefix to the schemas helps with configuring privilege settings. For example, if we set the prefix `neuro_`, every schema created with the current workflow will start with `neuro_`, e.g. `neuro_lab`, `neuro_subject`, `neuro_ephys` etc.
#
# The prefix can be configured in `dj.config` as follows:

dj.config['custom'] = {'database.prefix': 'neuro_'}

# ## Root directories for raw ephys data and kilosort 2 processed results
#
# + The `ephys_root_data_dir` field indicates the root directory for the **ephys raw data** from SpikeGLX or OpenEphys (e.g. `*imec0.ap.bin`, `*imec0.ap.meta`, `*imec0.lf.bin`, `imec0.lf.meta`) or the **clustering results** from kilosort2 (e.g. `spike_times.npy`, `spike_clusters.npy`). The root path typically **does not** contain subject or session information; all data from subjects/sessions should be subdirectories of the root path.
#
# In the example dataset downloaded with [this instruction](00-data-download-optional.ipynb), `/tmp/test_data` will be the root
#
# ```
# /tmp/test_data/
# - subject6
# - session1
# - towersTask_g0_imec0
# - towersTask_g0_t0_nidq.meta
# - towersTask_g0_t0.nidq.bin
# ```

# If there is only one root path.
dj.config['custom']['ephys_root_data_dir'] = '/tmp/test_data'
# If there are multiple possible root paths:
dj.config['custom']['ephys_root_data_dir'] = ['/tmp/test_data']

dj.config

# + In the database, every path for the ephys raw data is **relative to this root path**. The benefit is that the absolute path can be configured for each machine, and when data are transferred, we only need to change the root directory in the config file.
# + The workflow supports **multiple root directories**. If there are multiple possible root directories, specify `ephys_root_data_dir` as a list.
# + The root path(s) are **specific to each machine**, as drive mount names can differ between operating systems or machines.
# + In the context of the workflow, all paths saved into the database or the config file need to follow the **POSIX standard** (Unix/Linux), using `/`. Path conversion for any operating system is handled inside the Elements.

# # Save the configuration as a json file

# With the proper configuration in place, we can save it either as a local JSON file or as a global file.

dj.config.save_local()

# ls

# The local configuration file is saved as `dj_local_conf.json` in the root directory of this package (`workflow-array-ephys`). Next time, if you change into the `workflow-array-ephys` directory before importing datajoint and the pipeline packages, the configuration will be loaded properly.
#
# If saved globally, a hidden configuration file will be saved in your home directory, and it will be loaded regardless of the working directory.

# +
# dj.config.save_global()
# -

# # Next Step

# After the configuration, we will be able to review the workflow structure with [02-workflow-structure-optional](02-workflow-structure-optional.ipynb).


137 changes: 137 additions & 0 deletions notebooks/py_scripts/02-workflow-structure-optional.py
@@ -0,0 +1,137 @@
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# formats: ipynb,py
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.13.7
# kernelspec:
# display_name: ephys_workflow_runner
# language: python
# name: ephys_workflow_runner
# ---

# # Introduction to the workflow structure

# This notebook gives a brief overview of the workflow structure and introduces some useful DataJoint tools to facilitate exploration.
# + DataJoint needs to be pre-configured before running this notebook. If you haven't set up the configuration, refer to notebook [01-configure](01-configure.ipynb).
# + If you are familiar with DataJoint and the workflow structure, proceed directly to the next notebook [03-process](03-process.ipynb) to run the workflow.
# + For a more thorough introduction to DataJoint functionality, please visit our [general tutorial site](https://playground.datajoint.io)

# To load the local configuration, we will change the directory to the package root.

import os
os.chdir('..')

# ## Schemas and tables

# The current workflow is composed of multiple database schemas, each corresponding to a module within `workflow_array_ephys.pipeline`

import datajoint as dj
from workflow_array_ephys.pipeline import lab, subject, session, probe, ephys

# + Each module contains a schema object that enables interaction with the schema in the database.

probe.schema

# + Each module imported above corresponds to one schema inside the database. For example, `ephys` corresponds to `neuro_ephys` schema in the database.
ephys.schema

# + The table classes in the module correspond to tables in the schema in the database.

# + Each DataJoint table class inside the module corresponds to a table inside the schema. For example, the class `ephys.EphysRecording` corresponds to the table `_ephys_recording` in the schema `neuro_ephys` in the database.
# preview table columns and contents in a table
ephys.EphysRecording()

# + The first time importing the modules, empty schemas and tables will be created in the database. [markdown]
# # + By importing the modules for the first time, the schemas and tables will be created inside the database.
# # + Once created, importing modules will not create schemas and tables again, but the existing schemas/tables can be accessed and manipulated by the modules.
# -
# ## DataJoint tools to explore schemas and tables

# + The schemas and tables will not be re-created when importing modules if they have existed. [markdown]
# # + `dj.list_schemas()`: list all schemas a user has access to in the current database
# + `dj.list_schemas()`: list all schemas a user could access.
dj.list_schemas()

# + `dj.Diagram()`: plot tables and dependencies.

# + `dj.Diagram()`: plot tables and dependencies
# plot diagram for all tables in a schema
dj.Diagram(ephys)
# -

# **Table tiers**:
#
# Manual table: green box, manually inserted table, expects new entries daily, e.g. Subject, ProbeInsertion.
# Lookup table: gray box, pre-inserted table, commonly used for general facts or parameters, e.g. Strain, ClusteringMethod, ClusteringParamSet.
# Imported table: blue oval, auto-processing table whose processing depends on importing external files, e.g. the processing of Clustering requires output files from kilosort2.
# Computed table: red circle, auto-processing table whose processing does not depend on files external to the database, commonly used for analysis results.
# Part table: plain text, an appendix to the master table; all the part entries of a given master entry represent an intact set for that master entry, e.g. Unit of a CuratedClustering.
#
# **Dependencies**:
#
# One-to-one primary: thick solid line, shares the exact same primary key, meaning the child table inherits all the primary key fields from the parent table as its own primary key.
# One-to-many primary: thin solid line, inherits the primary key from the parent table, but has additional field(s) as part of the primary key as well.
# Secondary dependency: dashed line, the child table inherits the primary key fields from the parent table as its own secondary attributes.

# + `dj.Diagram()`: plot the diagram of the tables and dependencies. It could be used to plot tables in a schema or selected tables.
# plot diagram of tables in multiple schemas
dj.Diagram(subject) + dj.Diagram(session) + dj.Diagram(ephys)
# -

# plot diagram of selected tables and schemas
dj.Diagram(subject.Subject) + dj.Diagram(session.Session) + dj.Diagram(ephys)

# plot diagram with 1 additional level of dependency downstream
dj.Diagram(subject.Subject) + 1

# plot diagram with 2 additional levels of dependency upstream
dj.Diagram(ephys.EphysRecording) - 2

# + `heading`: [markdown]
# # + `describe()`: show table definition with foreign key references.
# -
ephys.EphysRecording.describe();

# + `heading`: show attribute definitions regardless of foreign key references

# + `heading`: show table attributes regardless of foreign key references.
ephys.EphysRecording.heading

# + probe [markdown]
# # Major DataJoint Elements installed in the current workflow
# + ephys [markdown]
# # + [`lab`](https://github.com/datajoint/element-lab): lab management related information, such as Lab, User, Project, Protocol, Source.
# -

dj.Diagram(lab)

# + [`animal`](https://github.com/datajoint/element-animal): general animal information, User, Genetic background, Death etc.

dj.Diagram(subject)

# + [subject](https://github.com/datajoint/element-animal): contains the basic information of subject, including Strain, Line, Subject, Zygosity, and SubjectDeath information.
subject.Subject.describe();

# + [`session`](https://github.com/datajoint/element-session): General information of experimental sessions.

dj.Diagram(session)

# + [session](https://github.com/datajoint/element-session): experimental session information
session.Session.describe();

# + [`ephys`](https://github.com/datajoint/element-array-ephys): Neuropixels-based probe and ephys information

# + [probe and ephys](https://github.com/datajoint/element-array-ephys): Neuropixels-based probe and ephys tables
dj.Diagram(probe) + dj.Diagram(ephys)
# -

# ## Summary and next step
#
# + This notebook introduced the overall structures of the schemas and tables in the workflow and relevant tools to explore the schema structure and table definitions.
#
# + In the next notebook [03-process](03-process.ipynb), we will further introduce the detailed steps running through the pipeline and table contents accordingly.
298 changes: 298 additions & 0 deletions notebooks/py_scripts/03-process.py
@@ -0,0 +1,298 @@
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# formats: ipynb,py
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.13.7
# kernelspec:
# display_name: ephys_workflow_runner
# language: python
# name: ephys_workflow_runner
# ---

# # Interactively run the array ephys workflow

# This notebook walks you through the steps in detail to run the ephys workflow.
#
# + If you need a more automated approach to running the workflow, refer to [04-automate](04-automate-optional.ipynb)
# + The workflow requires a Neuropixels meta file and Kilosort output data. If you haven't configured the paths, refer to [01-configure](01-configure.ipynb)
# + For an overview of the schema structure, refer to [02-workflow-structure](02-workflow-structure-optional.ipynb)

# Let's change the directory to the package root to load the configuration and import the relevant schemas.

import os
os.chdir('..')

import datajoint as dj
from workflow_array_ephys.pipeline import lab, subject, session, probe, ephys

# ## Ingestion of metadata: subjects, sessions, probes, and probe insertions
#
# The first step to run through the workflow is to insert metadata into the following tables:
#
# + subject.Subject: an animal subject for experiments
# + session.Session: an experimental session performed on an animal subject
# + session.SessionDirectory: directory to the data for a given experimental session
# + probe.Probe: probe information
# + ephys.ProbeInsertion: probe insertion into an animal subject during a given experimental session

dj.Diagram(subject.Subject) + dj.Diagram(session.Session) + dj.Diagram(probe.Probe) + dj.Diagram(ephys.ProbeInsertion)

# Our example dataset is for subject6, session1.

# ### Ingest Subject

subject.Subject.heading

# insert entries with insert1() or insert(), with all required attributes specified in a dictionary
subject.Subject.insert1(
dict(subject='subject6', sex='M', subject_birth_date='2020-01-04'),
skip_duplicates=True) # skip_duplicates avoids error when inserting entries with duplicated primary keys
subject.Subject()

# ### Ingest Session

session.Session.describe();

session.Session.heading

session_key = dict(subject='subject6', session_datetime='2021-01-15 11:16:38')
session.Session.insert1(session_key, skip_duplicates=True)
session.Session()

# ### Ingest SessionDirectory

session.SessionDirectory.describe();

session.SessionDirectory.heading

session.SessionDirectory.insert1(
dict(subject='subject6', session_datetime='2021-01-15 11:16:38',
session_dir='subject6/session1'),
skip_duplicates=True)
session.SessionDirectory()

# **Note**:
#
# The `session_dir` needs to be:
# + a directory **relative to** the `ephys_root_data_dir` in the configuration file; refer to [01-configure](01-configure.ipynb) for more information.
# + a directory in POSIX format (Unix/Linux), with `/`; differences between operating systems are handled by the Elements (see the sketch below).
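
# As a quick sanity check (a minimal sketch using only the standard library, assuming the `/tmp/test_data` root from [01-configure](01-configure.ipynb)), we can confirm that `session_dir` is a relative, POSIX-style path that exists under one of the configured roots:

from pathlib import Path, PurePosixPath
roots = dj.config['custom']['ephys_root_data_dir']             # one root or a list of roots
roots = roots if isinstance(roots, list) else [roots]
session_dir = 'subject6/session1'                              # value inserted above
assert not PurePosixPath(session_dir).is_absolute()            # must be relative to a root
print([r for r in roots if (Path(r) / session_dir).is_dir()])  # roots that contain this session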

# ### Ingest Probe

probe.Probe.heading

probe.Probe.insert1(
dict(probe='17131311651', probe_type='neuropixels 1.0 - 3B'),
skip_duplicates=True) # this info can be retrieved from the Neuropixels meta file.
probe.Probe()

# ### Ingest ProbeInsertion

ephys.ProbeInsertion.describe();

ephys.ProbeInsertion.heading

ephys.ProbeInsertion.insert1(
dict(subject='subject6', session_datetime="2021-01-15 11:16:38",
insertion_number=0, probe='17131311651'),
skip_duplicates=True) # probe, subject, session_datetime need to satisfy the foreign key constraints.
ephys.ProbeInsertion()

# ## Automate this manual step
#
# In this workflow, these manual steps could be automated by:
# 1. Insert entries in files `/user_data/subjects.csv` and `/user_data/sessions.csv`
# 2. Extract user-specified information from `/user_data/subjects.csv` and `/user_data/sessions.csv` and insert to Subject and Session tables by running:
# ```
# from workflow_array_ephys.ingest import ingest_subjects, ingest_sessions
# ingest_subjects()
# ingest_sessions()
# ```
# `ingest_sessions` also extracts probe and probe insertion information automatically from the meta file.
#
# This is the regular routine for daily data processing, illustrated in notebook [04-automate](04-automate-optional.ipynb).

# ## Populate EphysRecording

# Now we are ready to populate EphysRecording, a table for entries of ephys recording in a particular session.

dj.Diagram(session.Session) + \
(dj.Diagram(probe.ElectrodeConfig) + 1) + \
dj.Diagram(ephys.EphysRecording) + dj.Diagram(ephys.EphysRecording.EphysFile)
# # +1 means plotting 1 more layer of the child tables

# The first argument specifies a particular session to populate
ephys.EphysRecording.populate(session_key, display_progress=True)

# Populating `EphysRecording` extracts the following information from the SpikeGLX `.ap.meta` file:
#
# 1. **probe.ElectrodeConfig**: this procedure detects a new ElectrodeConfig, i.e. which 384 electrodes out of the total 960 on the probe were used in this ephys session, and saves the results into the table `probe.ElectrodeConfig`. Each entry in the table `ephys.EphysRecording` specifies which ElectrodeConfig is used in a particular ephys session.

# For the ephys session we just populated, electrodes 0-383 were used.

probe.ElectrodeConfig()

probe.ElectrodeConfig.Electrode()

# 2. **ephys.EphysRecording**: note here that it refers to a particular electrode_config identified with a hash.

ephys.EphysRecording() & session_key

# 3. **ephys_element.EphysRecording.EphysFile**
#
# The table `EphysFile` only saves the meta file from the recording

ephys.EphysRecording.EphysFile() & session_key

# ## Create ClusteringTask and run/validate Clustering

dj.Diagram(ephys.EphysRecording) + ephys.ClusteringParamSet + ephys.ClusteringTask + \
ephys.Clustering

# The next major table in the ephys pipeline is the `ClusteringTask`.
#
# + An entry in `ClusteringTask` indicates that a set of clustering results generated by Kilosort2 outside `workflow-array-ephys` is ready to be ingested. In a future release, an entry in `ClusteringTask` can also indicate that a new Kilosort2 clustering job is ready to be triggered.
#
# + The `ClusteringTask` table depends on the table `ClusteringParamSet`, which holds the parameters of the clustering task and needs to be ingested first.

# A method of the class `ClusteringParamSet` called `insert_new_params` helps with inserting a parameter set and ensures that it does not duplicate an existing parameter set in the database.
#
# The following parameters' values are set based on [Kilosort StandardConfig file](https://github.com/MouseLand/Kilosort/tree/main/configFiles)

# insert the clustering parameter set manually
params_ks = {
"fs": 30000,
"fshigh": 150,
"minfr_goodchannels": 0.1,
"Th": [10, 4],
"lam": 10,
"AUCsplit": 0.9,
"minFR": 0.02,
"momentum": [20, 400],
"sigmaMask": 30,
"ThPr": 8,
"spkTh": -6,
"reorder": 1,
"nskip": 25,
"GPU": 1,
"Nfilt": 1024,
"nfilt_factor": 4,
"ntbuff": 64,
"whiteningRange": 32,
"nSkipCov": 25,
"scaleproc": 200,
"nPCs": 3,
"useRAM": 0
}
ephys.ClusteringParamSet.insert_new_params(
processing_method='kilosort2',
paramset_idx=0,
params=params_ks,
paramset_desc='Spike sorting using Kilosort2')
ephys.ClusteringParamSet()

# We are then able to insert an entry into the `ClusteringTask` table. One important field of the table is `clustering_output_dir`, which specifies the Kilosort2 output directory for later processing.
# **Note**: this output dir is a relative path to be combined with `ephys_root_data_dir` in the config file.
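
# Before populating, it can be useful to confirm that this relative directory resolves against the root and contains the expected Kilosort2 output files (a minimal sketch using the standard library; the file names follow the clustering-result examples listed in [01-configure](01-configure.ipynb)):

from pathlib import Path
ks_dir = Path('/tmp/test_data') / 'subject6/session1/towersTask_g0_imec0'  # root + relative clustering_output_dir
print(ks_dir.is_dir())                                                     # the resolved directory should exist
print({f: (ks_dir / f).exists() for f in ('spike_times.npy', 'spike_clusters.npy')})  # expected Kilosort2 outputs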

ephys.ClusteringTask.describe();

ephys.ClusteringTask.heading

ephys.ClusteringTask.insert1(
dict(session_key, insertion_number=0, paramset_idx=0,
clustering_output_dir='subject6/session1/towersTask_g0_imec0'),
skip_duplicates=True)

ephys.ClusteringTask() & session_key

# We are then able to populate the clustering results. The `Clustering` table now validates the Kilosort2 outcomes before ingesting the spike-sorted results. In a future release of `element-array-ephys`, this table may be used to trigger a Kilosort2 process. A record in `Clustering` indicates that a Kilosort2 job finished successfully and the results are ready to be processed.

ephys.Clustering.populate(display_progress=True)

ephys.Clustering() & session_key

# ## Import clustering results and manually curated results

# We are now ready to ingest the clustering results (spike times etc.) into the database. These clustering results are either taken directly from Kilosort2 or produced by manual curation; both share the same file format. In the element, a `Curation` table saves this information.

dj.Diagram(ephys.ClusteringTask) + dj.Diagram(ephys.Clustering) + dj.Diagram(ephys.Curation) + \
dj.Diagram(ephys.CuratedClustering) + dj.Diagram(ephys.CuratedClustering.Unit)

ephys.Curation.describe();

ephys.Curation.heading

ephys.Curation.insert1(
dict(session_key, insertion_number=0, paramset_idx=0,
curation_id=1,
curation_time='2021-04-28 15:47:01',
curation_output_dir='subject6/session1/towersTask_g0_imec0',
quality_control=0,
manual_curation=0
))

# In this case, the curation results are directly from Kilosort2 outputs, so the `curation_output_dir` is identical to `clustering_output_dir` in the table `ephys.ClusteringTask`. The `element-array-ephys` provides a helper function `ephys.Curation().create1_from_clustering_task` to conveniently insert an entry without manual curation.
#
# Example usage:
#
# ```python
# key = (ephys.ClusteringTask & session_key).fetch1('KEY')
# ephys.Curation().create1_from_clustering_task(key)
# ```

# Then we could populate table `CuratedClustering`, ingesting either the output of Kilosort2 or the curated results.

ephys.CuratedClustering.populate(session_key, display_progress=True)

# The part table `CuratedClustering.Unit` contains the spike sorted units

ephys.CuratedClustering.Unit()

# ## Populate LFP

# + `LFP`: mean LFP across all electrodes [markdown]
# # + `LFP`: Mean local field potential across different electrodes.
# # + `LFP.Electrode`: Local field potential of a given electrode.
# + LFP and LFP.Electrode: By populating LFP, the local field potential of every 9th electrode on the probe will be saved into the table `ephys_element.LFP.Electrode`, and an average LFP will be saved into the table `ephys_element.LFP`
dj.Diagram(ephys.EphysRecording) + dj.Diagram(ephys.LFP) + dj.Diagram(ephys.LFP.Electrode)
# -

# Takes a few minutes to populate
ephys.LFP.populate(session_key, display_progress=True)
ephys.LFP & session_key

ephys.LFP.Electrode & session_key

# ## Populate Spike Waveform

# The current workflow also contains tables to save spike waveforms:
# + `WaveformSet`: a table to drive the processing of all spike waveforms resulting from a CuratedClustering.
# + `WaveformSet.Waveform`: mean waveform across spikes for a given unit and electrode.
# + `WaveformSet.PeakWaveform`: mean waveform across spikes for a given unit at the electrode with the peak spike amplitude.

dj.Diagram(ephys.CuratedClustering) + dj.Diagram(ephys.WaveformSet) + 1

# + The `probe_element.ElectrodeConfig` table contains the configuration of the electrodes used, i.e. which 384 electrodes out of the total 960 on the probe were used in this ephys session, while the table `ephys_element.EphysRecording` specifies which ElectrodeConfig is used in a particular ephys session.
# Takes ~1h to populate for the test dataset
ephys.WaveformSet.populate(session_key, display_progress=True)
# -

ephys.WaveformSet & session_key

ephys.WaveformSet.Waveform & session_key

ephys.WaveformSet.PeakWaveform & session_key

# ## Summary and next step

# This notebook walks through the detailed steps of running the workflow.
#
# + For a more automated way of running the workflow, refer to [04-automate](04-automate-optional.ipynb)
# + In the next notebook [05-explore](05-explore.ipynb), we will introduce DataJoint methods to explore and visualize the ingested data.


113 changes: 113 additions & 0 deletions notebooks/py_scripts/04-automate-optional.py
@@ -0,0 +1,113 @@
# ---
# jupyter:
# jupytext:
# formats: ipynb,py
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.13.7
# kernelspec:
# display_name: ephys_workflow_runner
# language: python
# name: ephys_workflow_runner
# ---

# + [markdown] pycharm={"name": "#%% md\n"}
# # Run workflow in an automatic way
#
# In the previous notebook [03-process](03-process.ipynb), we ran through the workflow in detailed steps. For daily running routines, the current notebook provides a more succinct and automatic approach to run through the pipeline using some utility functions in the workflow.
# -

import os
os.chdir('..')
import numpy as np
from workflow_array_ephys.pipeline import lab, subject, session, probe, ephys

# ## Ingestion of subjects, sessions, probes, probe insertions
#
# 1. Fill subject and session information in files `/user_data/subjects.csv` and `/user_data/sessions.csv`
# 2. Run automatic scripts prepared in `workflow_array_ephys.ingest` for ingestion

from workflow_array_ephys.ingest import ingest_subjects, ingest_sessions

# ##### Insert new entries for subject.Subject from the `subjects.csv` file

ingest_subjects()

# ##### Insert new entries for session.Session, session.SessionDirectory, probe.Probe, ephys.ProbeInsertion from the `sessions.csv` file

ingest_sessions()

# ## [Optional] Insert new ClusteringParamSet for Kilosort
#
# This is not needed if you keep using the existing ClusteringParamSet

# + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"}
params_ks = {
"fs": 30000,
"fshigh": 150,
"minfr_goodchannels": 0.1,
"Th": [10, 4],
"lam": 10,
"AUCsplit": 0.9,
"minFR": 0.02,
"momentum": [20, 400],
"sigmaMask": 30,
"ThPr": 8,
"spkTh": -6,
"reorder": 1,
"nskip": 25,
"GPU": 1,
"Nfilt": 1024,
"nfilt_factor": 4,
"ntbuff": 64,
"whiteningRange": 32,
"nSkipCov": 25,
"scaleproc": 200,
"nPCs": 3,
"useRAM": 0
}
ephys.ClusteringParamSet.insert_new_params(
processing_method='kilosort2',
paramset_idx=0,
params=params_ks,
paramset_desc='Spike sorting using Kilosort2')
# -

# ## Trigger autoprocessing of the remaining ephys pipeline

from workflow_array_ephys import process

# The `process.run()` function in the workflow populates every auto-processing table in the workflow. If a table is dependent on a manual table upstream, it will not get populated until the manual table is inserted.

# At this stage, the process script populates the tables upstream of `ClusteringTask`
process.run()

# ## Insert new ClusteringTask to trigger ingestion of clustering results
#
# To populate the rest of the tables in the workflow, an entry in the `ClusteringTask` needs to be added to trigger the ingestion of the clustering results, with the two pieces of information specified:
# + the `paramset_idx` used for the clustering job
# + the output directory storing the clustering results

session_key = session.Session.fetch1('KEY')
ephys.ClusteringTask.insert1(
dict(session_key, insertion_number=0, paramset_idx=0,
clustering_output_dir='subject6/session1/towersTask_g0_imec0'), skip_duplicates=True)

# run populate again for table Clustering
process.run()

# ## Insert new Curation to trigger ingestion of curated results

key = (ephys.ClusteringTask & session_key).fetch1('KEY')
ephys.Curation().create1_from_clustering_task(key)

# run populate for the rest of the tables in the workflow, takes a while
process.run()

# ## Summary and next step
#
# + This notebook runs through the workflow in an automatic manner.
#
# + In the next notebook [05-explore](05-explore.ipynb), we will introduce how to query, fetch and visualize the contents we ingested into the tables.
137 changes: 137 additions & 0 deletions notebooks/py_scripts/05-explore.py
@@ -0,0 +1,137 @@
# ---
# jupyter:
# jupytext:
# formats: ipynb,py
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.13.7
# kernelspec:
# display_name: ephys_workflow_runner
# language: python
# name: ephys_workflow_runner
# ---

# # DataJoint Workflow Array Ephys
#
# This notebook will describe the steps for interacting with the data ingested into `workflow-array-ephys`.

import os
os.chdir('..')

# +
import datajoint as dj
import matplotlib.pyplot as plt
import numpy as np

from workflow_array_ephys.pipeline import lab, subject, session, ephys

# + [markdown] pycharm={"name": "#%% md\n"}
# ## Workflow architecture
#
# This workflow is assembled from 4 DataJoint elements:
# # + [element-lab](https://github.com/datajoint/element-lab)
# # + [element-animal](https://github.com/datajoint/element-animal)
# # + [element-session](https://github.com/datajoint/element-session)
# # + [element-array-ephys](https://github.com/datajoint/element-array-ephys)
#
# For the architecture and detailed descriptions for each of those elements, please visit the respective links.
#
# Below is the diagram describing the core components of the fully assembled pipeline.
#
# -

dj.Diagram(ephys) + (dj.Diagram(session.Session) + 1) - 1

# ## Browsing the data with DataJoint query and fetch
#
#
# DataJoint provides abundant functions to query and fetch data. For detailed tutorials, visit our [general tutorial site](https://playground.datajoint.io/)
#

# Running through the pipeline, we have ingested data from subject6 session1 into the database. Here are some highlights of the important tables.

# ### `Subject` and `Session` tables

subject.Subject()

session.Session()

session_key = (session.Session & 'subject="subject6"' & 'session_datetime = "2021-01-15 11:16:38"').fetch1('KEY')

# ### `ephys.ProbeInsertion` and `ephys.EphysRecording` tables
#
# These tables store the probe recordings within a particular session from one or more probes.

ephys.ProbeInsertion & session_key

ephys.EphysRecording & session_key

# ### `ephys.ClusteringTask` , `ephys.Clustering`, `ephys.Curation`, and `ephys.CuratedClustering`
#
# + Spike-sorting is performed on a per-probe basis with the details stored in `ClusteringTask` and `Clustering`
#
# + After spike sorting, the results may go through a curation process.
# + If the results did not go through curation, a copy of the `ClusteringTask` entry is inserted into the table `ephys.Curation` with the `curation_output_dir` identical to the `clustering_output_dir`.
# + If they did go through curation, a new entry will be inserted into `ephys.Curation`, with a `curation_output_dir` specified.
# + `ephys.Curation` supports multiple curations of a clustering task.

ephys.ClusteringTask * ephys.Clustering & session_key

# In our example workflow, `curation_output_dir` is the same as `clustering_output_dir`

ephys.Curation * ephys.CuratedClustering & session_key

# ### Spike-sorting results are stored in `ephys.CuratedClustering`, `ephys.WaveformSet.Waveform`

ephys.CuratedClustering.Unit & session_key

# Let's pick one probe insertion and one `curation_id`, and further inspect the clustering results.

curation_key = (ephys.CuratedClustering & session_key & 'insertion_number = 0' & 'curation_id=1').fetch1('KEY')

ephys.CuratedClustering.Unit & curation_key

# ### Generate a raster plot

# Let's try a raster plot - just the "good" units

ephys.CuratedClustering.Unit & curation_key & 'cluster_quality_label = "good"'

units, unit_spiketimes = (ephys.CuratedClustering.Unit
& curation_key
& 'cluster_quality_label = "good"').fetch('unit', 'spike_times')

x = np.hstack(unit_spiketimes)
y = np.hstack([np.full_like(s, u) for u, s in zip(units, unit_spiketimes)])

fig, ax = plt.subplots(1, 1, figsize=(32, 16))
ax.plot(x, y, '|')
ax.set_xlabel('Time (s)');
ax.set_ylabel('Unit');

# ### Plot waveform of a unit

# Let's pick one unit and inspect it further

unit_key = (ephys.CuratedClustering.Unit & curation_key & 'unit = 15').fetch1('KEY')

ephys.CuratedClustering.Unit * ephys.WaveformSet.Waveform & unit_key

unit_data = (ephys.CuratedClustering.Unit * ephys.WaveformSet.PeakWaveform & unit_key).fetch1()

unit_data

sampling_rate = (ephys.EphysRecording & curation_key).fetch1('sampling_rate')/1000 # in kHz
plt.plot(np.r_[:unit_data['peak_electrode_waveform'].size] * 1/sampling_rate, unit_data['peak_electrode_waveform'])
plt.xlabel('Time (ms)');
plt.ylabel(r'Voltage ($\mu$V)');

# ## Summary and Next Step

# This notebook highlights the major tables in the workflow and visualizes some of the ingested results.
#
# The next notebook [06-drop](06-drop-optional.ipynb) shows how to drop schemas and tables if needed.


34 changes: 34 additions & 0 deletions notebooks/py_scripts/06-drop-optional.py
@@ -0,0 +1,34 @@
# ---
# jupyter:
# jupytext:
# formats: ipynb,py
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.13.7
# kernelspec:
# display_name: Python [conda env:workflow-ephys]
# language: python
# name: conda-env-workflow-ephys-py
# ---

# # Drop schemas
#
# + Schemas are not typically dropped in a production workflow with real data in them.
# + During development, dropping schemas may be required for table redesign.
# + When dropping all schemas is needed, the following is the dependency order.

# Change into the parent directory to find the `dj_local_conf.json` file.

import os
os.chdir('..')

# +
from workflow_array_ephys.pipeline import *

# ephys.schema.drop()
# probe.schema.drop()
# session.schema.drop()
# subject.schema.drop()
# lab.schema.drop()
168 changes: 168 additions & 0 deletions notebooks/py_scripts/07-downstream-analysis.py
@@ -0,0 +1,168 @@
# ---
# jupyter:
# jupytext:
# formats: ipynb,py
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.13.7
# kernelspec:
# display_name: venv-nwb
# language: python
# name: venv-nwb
# ---

# + [markdown] tags=[]
# # DataJoint U24 - Workflow Array Electrophysiology

# + [markdown] tags=[]
# ## Setup
# -

# First, let's change directories to find the `dj_local_conf` file.

import os
# change to the upper level folder to detect dj_local_conf.json
if os.path.basename(os.getcwd())=='notebooks': os.chdir('..')
assert os.path.basename(os.getcwd())=='workflow-array-ephys', ("Please move to the "
+ "workflow directory")
# We'll be working with long tables, so we'll make visualization easier with a limit
import datajoint as dj; dj.config['display.limit']=10

# Next, we populate the python namespace with the required schemas

from workflow_array_ephys.pipeline import session, ephys, trial, event

# + [markdown] jp-MarkdownHeadingCollapsed=true tags=[]
# ## Trial and Event schemas
# -

# Tables in the `trial` and `event` schemas specify the structure of your experiment, including block, trial and event timing.
# - Session has a 1-to-1 mapping with a behavior recording
# - A block is a continuous phase of an experiment that contains repeated instances of a condition, or trials.
# - Events may occur within or outside of conditions, either instantaneous or continuous.
#
# The diagram below shows (a) the levels of hierarchy and (b) how the bounds may not completely overlap. A block may not fully capture trials, and events may occur outside both blocks and trials (see the sketch after the diagram).

# ```
# |----------------------------------------------------------------------------|
# |-------------------------------- Session ---------------------------------|__
# |-------------------------- BehaviorRecording ---------------------------|____
# |----- Block 1 -----|______|----- Block 2 -----|______|----- Block 3 -----|___
# | trial 1 || trial 2 |____| trial 3 || trial 4 |____| trial 5 |____| trial 6 |
# |_|e1|_|e2||e3|_|e4|__|e5|__|e6||e7||e8||e9||e10||e11|____|e12||e13|_________|
# |----------------------------------------------------------------------------|
# ```
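
# The containment logic sketched above can be illustrated with plain Python (an illustrative sketch with made-up numbers, not the element-event API): an event belongs to a trial only if its time falls inside the trial's start/stop window, and some events fall outside every trial.

trials = [(0.0, 2.0), (2.5, 4.5)]       # hypothetical (start, stop) trial windows, in seconds
events = [0.5, 1.9, 2.2, 3.0, 5.0]      # hypothetical instantaneous event times
for t in events:
    parents = [i for i, (start, stop) in enumerate(trials) if start <= t <= stop]
    print(f'event at {t:.1f}s ->', f'trial {parents[0]}' if parents else 'outside all trials')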

# Let's load some example data. The `ingest.py` script has a series of loaders to help.

from workflow_array_ephys.ingest import ingest_subjects, ingest_sessions,\
ingest_events, ingest_alignment

ingest_subjects(); ingest_sessions(); ingest_events()

# We have 100 total trials, either 'stim' or 'ctrl', with start and stop times

trial.Trial()

# Each trial is paired with one or more events that take place during the trial window.

trial.TrialEvent() & 'trial_id<5'

# Finally, the `AlignmentEvent` describes the event of interest and the window we'd like to see around it.

ingest_alignment()

event.AlignmentEvent()

# ## Event-aligned trialized unit spike times

# First, we'll check that the data is still properly inserted from the previous notebooks.

ephys.CuratedClustering()

# For this example, we'll be looking at `subject6`.

clustering_key = (ephys.CuratedClustering
& {'subject': 'subject6', 'session_datetime': '2021-01-15 11:16:38',
'insertion_number': 0}
).fetch1('KEY')

trial.Trial & clustering_key

# And we can narrow our focus on `ctrl` trials.

ctrl_trials = trial.Trial & clustering_key & 'trial_type = "ctrl"'

# The `analysis` schema provides example tables to perform event-aligned spike-times analysis.

from workflow_array_ephys import analysis

# + ***SpikesAlignmentCondition*** - a manual table to specify the inputs and condition for the analysis


# + ***SpikesAlignment*** - a computed table to extract event-aligned spikes and compute unit PSTH

# Let's start by creating several analysis configurations - i.e. inserting into ***SpikesAlignmentCondition*** for the `center` event, called `center_button` in the `AlignmentEvent` table.

event.AlignmentEvent()

alignment_key = (event.AlignmentEvent & 'alignment_name = "center_button"'
).fetch1('KEY')
alignment_condition = {**clustering_key, **alignment_key,
'trial_condition': 'ctrl_center_button'}
analysis.SpikesAlignmentCondition.insert1(alignment_condition, skip_duplicates=True)

analysis.SpikesAlignmentCondition.Trial.insert(
(analysis.SpikesAlignmentCondition * ctrl_trials & alignment_condition).proj(),
skip_duplicates=True)

analysis.SpikesAlignmentCondition.Trial()

# With the steps above, we have created a new spike alignment condition for analysis, named `ctrl_center_button`, which specifies:
# + a CuratedClustering of interest for analysis


# + an event of interest to align the spikes to - `center_button`


# + a set of trials of interest to perform the analysis on - `ctrl` trials

# Now, let's create another set with:
# + the same CuratedClustering of interest for analysis


# + an event of interest to align the spikes to - `center_button`


# + a set of trials of interest to perform the analysis on - `stim` trials

stim_trials = trial.Trial & clustering_key & 'trial_type = "stim"'
alignment_condition = {**clustering_key, **alignment_key, 'trial_condition': 'stim_center_button'}
analysis.SpikesAlignmentCondition.insert1(alignment_condition, skip_duplicates=True)
analysis.SpikesAlignmentCondition.Trial.insert(
(analysis.SpikesAlignmentCondition * stim_trials & alignment_condition).proj(),
skip_duplicates=True)

# We can compare conditions in the `SpikesAlignmentCondition` table.

analysis.SpikesAlignmentCondition()

analysis.SpikesAlignmentCondition.Trial & 'trial_condition = "ctrl_center_button"'

# ### Computation

# Now let's run the computation on these.

analysis.SpikesAlignment.populate(display_progress=True)

# ### Visualize

# We can visualize the results with the `plot_raster` function.

alignment_condition = {**clustering_key, **alignment_key, 'trial_condition': 'ctrl_center_button'}
analysis.SpikesAlignment().plot_raster(alignment_condition, unit=2);

alignment_condition = {**clustering_key, **alignment_key, 'trial_condition': 'stim_center_button'}
analysis.SpikesAlignment().plot_raster(alignment_condition, unit=2);
101 changes: 101 additions & 0 deletions notebooks/py_scripts/08-electrode-localization.py
@@ -0,0 +1,101 @@
# ---
# jupyter:
# jupytext:
# formats: ipynb,py
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.13.7
# kernelspec:
# display_name: venv-nwb
# language: python
# name: venv-nwb
# ---

# # Electrode Localization

# Change into the parent directory to find the `dj_local_conf.json` file.

import os
# change to the upper level folder to detect dj_local_conf.json
if os.path.basename(os.getcwd())=='notebooks': os.chdir('..')
assert os.path.basename(os.getcwd())=='workflow-array-ephys', ("Please move to the "
+ "workflow directory")
# We'll be working with long tables, so we'll make visualization easier with a limit
import datajoint as dj; dj.config['display.limit']=10

# + [markdown] tags=[] jp-MarkdownHeadingCollapsed=true
# ## Coordinate Framework
# -

# The Allen Institute hosts [brain atlases](http://download.alleninstitute.org/informatics-archive/current-release/mouse_ccf/annotation/ccf_2017) and [ontology trees](https://community.brain-map.org/t/allen-mouse-ccf-accessing-and-using-related-data-and-tools/359) that we'll use in the next section. The `localization.py` script assumes this is your first atlas, and that you'll use the 100μm resolution. For finer resolutions, edit `voxel_resolution` in `localization.py`. Higher resolution `nrrd` files are quite large when loaded. Depending on the python environment, the terminal may be killed when loading so much information into memory. To load multiple atlases, increment `ccf_id` for each unique atlas.
#
# To run this pipeline ...
# 1. Download the 100μm `nrrd` and `csv` files from the links above.
# 2. Move these files to your ephys root directory (or script the download as sketched below).
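
# If you prefer to script the download, here is a minimal sketch using only the standard library. The two URLs are placeholders to be copied from the pages linked above, and the destination assumes the `/tmp/test_data` root used in the earlier notebooks:

from pathlib import Path
from urllib.request import urlretrieve
ephys_root = Path('/tmp/test_data')   # your ephys root directory
# placeholders - copy the 100um nrrd and ontology csv links from the pages above
for url in ('<nrrd URL from the atlas page>', '<csv URL from the ontology page>'):
    urlretrieve(url, ephys_root / url.rsplit('/', 1)[-1])   # keep the original file name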

# Next, we'll populate the coordinate framework schema simply by loading it. Because we are loading the whole brain volume, this may take 25 minutes or more.

from workflow_array_ephys.localization import coordinate_framework as ccf

# Now, to explore the data we just loaded.

ccf.BrainRegionAnnotation.BrainRegion()

# The acronyms listed in the DataJoint table differ slightly from the CCF standard by substituting case-sensitive differences with [snake case](https://en.wikipedia.org/wiki/Snake_case). To look up the snake case equivalent, use the `retrieve_acronym` function.

central_thalamus = ccf.BrainRegionAnnotation.retrieve_acronym('CM')
cranial_nerves = ccf.BrainRegionAnnotation.retrieve_acronym('cm')
print(f'CM: {central_thalamus}\ncm: {cranial_nerves}')

# If your work requires the case-sensitive columns, please get in touch with the DataJoint team via [StackOverflow](https://stackoverflow.com/questions/tagged/datajoint).
#
# For this demo, let's look at the dimensions of the central thalamus. To look at other regions, open the CSV you downloaded and search for your desired region.

cm_voxels = ccf.BrainRegionAnnotation.Voxel() & f'acronym=\"{central_thalamus}\"'
cm_voxels

cm_x, cm_y, cm_z = cm_voxels.fetch('x', 'y', 'z')
print(f'The central thalamus extends from \n\tx = {min(cm_x)} to x = {max(cm_x)}\n\t'
+ f'y = {min(cm_y)} to y = {max(cm_y)}\n\tz = {min(cm_z)} to z = {max(cm_z)}')

# ## Electrode Localization

# If you have `channel_location` json files for your data, you can look at the position and regions associated with each electrode. Here, we've added an example file to our pre-existing `subject6` for demonstration purposes.

from workflow_array_ephys.localization import coordinate_framework as ccf
from workflow_array_ephys.localization import electrode_localization as eloc

# Because the probe may not be fully inserted, some electrode positions will fall outside the brain. We register these instances with an `IntegrityError` warning because we are trying to register a coordinate position with no corresponding location in the `ccf.CCF.Voxel` table. We can silence these warnings by setting the log level before running `populate()` on the `ElectrodePosition` table.

import logging
logging.getLogger().setLevel(logging.ERROR) # or logging.INFO

eloc.ElectrodePosition.populate()

# By querying the `ElectrodePosition` table, we can see the keys the `populate()` method has already processed.

eloc.ElectrodePosition()
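
# To see which insertions still await processing (a minimal sketch, assuming the standard
# DataJoint `key_source` property), take the difference between the key source and the
# already-populated entries.

eloc.ElectrodePosition.key_source - eloc.ElectrodePosition()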

# Let's focus on `subject5`, insertion `1`.

from workflow_array_ephys.pipeline import ephys
key = (ephys.EphysRecording & 'subject="subject5"' & 'insertion_number=1').fetch1('KEY')
len(eloc.ElectrodePosition.Electrode & key)

# With a resolution of 100μm, adjacent electrodes will very likely be in the same region. Let's look at every 38th electrode to sample 10 across the probe.
#
# If you're interested in more electrodes, decrease the number next to the `%` modulo operator.

electrode_coordinates = (eloc.ElectrodePosition.Electrode & 'electrode%38=0'
                         & key).fetch('electrode', 'x', 'y', 'z', as_dict=True)
for e in electrode_coordinates:
    x, y, z = (e[k] for k in ('x', 'y', 'z'))
    acronym = (ccf.BrainRegionAnnotation.Voxel & f'x={x}' & f'y={y}' & f'z={z}'
               ).fetch1('acronym')
    e['region'] = (ccf.BrainRegionAnnotation.BrainRegion & f'acronym=\"{acronym}\"'
                   ).fetch1('region_name')
    print('Electrode {electrode} (x={x}, y={y}, z={z}) is in {region}'.format(**e))
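
# As an optional extension (a minimal sketch, assuming the tables queried above), we can
# tally how many of this insertion's located electrodes fall in each annotated region.

from collections import Counter
region_counts = Counter(
    (ccf.BrainRegionAnnotation.Voxel & {'x': e['x'], 'y': e['y'], 'z': e['z']}
     ).fetch1('acronym')
    for e in (eloc.ElectrodePosition.Electrode & key).fetch('x', 'y', 'z', as_dict=True))
print(region_counts.most_common())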


1 change: 1 addition & 0 deletions requirements.txt
Original file line number Diff line number Diff line change
@@ -3,5 +3,6 @@ element-array-ephys==0.1.0b0
element-lab>=0.1.0b0
element-animal==0.1.0b0
element-session==0.1.0b0
element-event @ git+https://github.com/datajoint/element-event.git
element-interface @ git+https://github.com/datajoint/element-interface.git
ipykernel==6.0.1
9 changes: 7 additions & 2 deletions tests/__init__.py
Original file line number Diff line number Diff line change
@@ -57,7 +57,9 @@ def dj_config():
or dj.config['custom']['ephys_mode']),
'database.prefix': (os.environ.get('DATABASE_PREFIX')
or dj.config['custom']['database.prefix']),
'ephys_root_data_dir': (os.environ.get('EPHYS_ROOT_DATA_DIR').split(',') if os.environ.get('EPHYS_ROOT_DATA_DIR') else dj.config['custom']['ephys_root_data_dir'])
'ephys_root_data_dir': (os.environ.get('EPHYS_ROOT_DATA_DIR').split(',')
if os.environ.get('EPHYS_ROOT_DATA_DIR')
else dj.config['custom']['ephys_root_data_dir'])
}
return

@@ -200,11 +202,12 @@ def testdata_paths():
'npx3B-p1-ks': 'subject6/session1/towersTask_g0_imec0'
}


@pytest.fixture
def ephys_insertionlocation(pipeline, ingest_sessions):
"""Insert probe location into ephys.InsertionLocation"""
ephys = pipeline['ephys']

for probe_insertion_key in ephys.ProbeInsertion.fetch('KEY'):
ephys.InsertionLocation.insert1(dict(**probe_insertion_key,
skull_reference='Bregma',
@@ -223,6 +226,7 @@ def ephys_insertionlocation(pipeline, ingest_sessions):
with QuietStdOut():
ephys.InsertionLocation.delete()


@pytest.fixture
def kilosort_paramset(pipeline):
"""Insert kilosort parameters into ephys.ClusteringParamset"""
@@ -333,6 +337,7 @@ def clustering(clustering_tasks, pipeline):

@pytest.fixture
def curations(clustering, pipeline):
"""Insert keys from ephys.ClusteringTask into ephys.Curation"""
ephys_mode = pipeline['ephys_mode']

if ephys_mode == 'no-curation':
4 changes: 2 additions & 2 deletions user_data/behavior_recordings.csv
Original file line number Diff line number Diff line change
@@ -1,3 +1,3 @@
subject,session_datetime,filepath
subject5,2018-07-03 20:32:28,./user_data/trials.csv
subject5,2018-07-03 20:32:28,./user_data/events.csv
subject6,2021-01-15 11:16:38,./user_data/trials.csv
subject6,2021-01-15 11:16:38,./user_data/events.csv
8 changes: 4 additions & 4 deletions user_data/blocks.csv
Original file line number Diff line number Diff line change
@@ -1,5 +1,5 @@
subject,session_datetime,block_id,block_start_time,block_stop_time,attribute_name,attribute_value
subject5,2018-07-03 20:32:28,1,2,86,type,light
subject5,2018-07-03 20:32:28,2,86,170,type,dark
subject5,2018-07-03 20:32:28,3,170,254,type,light
subject5,2018-07-03 20:32:28,4,254,338,type,dark
subject6,2021-01-15 11:16:38,1,0,476,type,light
subject6,2021-01-15 11:16:38,2,476,952,type,dark
subject6,2021-01-15 11:16:38,3,952,1428,type,light
subject6,2021-01-15 11:16:38,4,1428,1904,type,dark
205 changes: 153 additions & 52 deletions user_data/events.csv
Original file line number Diff line number Diff line change
@@ -1,53 +1,154 @@
subject,session_datetime,trial_id,event_start_time,event_type
subject5,2018-07-03 20:32:28,1,8.812,center
subject5,2018-07-03 20:32:28,1,4.647,left
subject5,2018-07-03 20:32:28,1,3.733,right
subject5,2018-07-03 20:32:28,3,21.198,left
subject5,2018-07-03 20:32:28,3,19.121,left
subject5,2018-07-03 20:32:28,3,25.344,right
subject5,2018-07-03 20:32:28,4,26.769,right
subject5,2018-07-03 20:32:28,4,27.073,right
subject5,2018-07-03 20:32:28,4,32.411,center
subject5,2018-07-03 20:32:28,5,35.048,right
subject5,2018-07-03 20:32:28,5,35.091,left
subject5,2018-07-03 20:32:28,9,72.032,right
subject5,2018-07-03 20:32:28,10,83.279,left
subject5,2018-07-03 20:32:28,11,90.027,right
subject5,2018-07-03 20:32:28,14,115.413,center
subject5,2018-07-03 20:32:28,14,111.274,center
subject5,2018-07-03 20:32:28,15,125.791,left
subject5,2018-07-03 20:32:28,16,135.069,left
subject5,2018-07-03 20:32:28,16,128.56,center
subject5,2018-07-03 20:32:28,16,133.69,center
subject5,2018-07-03 20:32:28,19,158.744,center
subject5,2018-07-03 20:32:28,19,152.561,right
subject5,2018-07-03 20:32:28,22,186.771,center
subject5,2018-07-03 20:32:28,22,185.484,right
subject5,2018-07-03 20:32:28,22,182.107,left
subject5,2018-07-03 20:32:28,23,192.072,right
subject5,2018-07-03 20:32:28,23,187.923,center
subject5,2018-07-03 20:32:28,24,198.2,right
subject5,2018-07-03 20:32:28,24,198.436,left
subject5,2018-07-03 20:32:28,26,213.439,right
subject5,2018-07-03 20:32:28,26,214.028,center
subject5,2018-07-03 20:32:28,26,215.391,right
subject5,2018-07-03 20:32:28,29,237.248,left
subject5,2018-07-03 20:32:28,31,261.215,left
subject5,2018-07-03 20:32:28,33,277.311,center
subject5,2018-07-03 20:32:28,33,271.487,center
subject5,2018-07-03 20:32:28,33,277.392,center
subject5,2018-07-03 20:32:28,34,282.147,left
subject5,2018-07-03 20:32:28,34,283.893,right
subject5,2018-07-03 20:32:28,35,288.407,center
subject5,2018-07-03 20:32:28,35,291.126,right
subject5,2018-07-03 20:32:28,35,287.522,center
subject5,2018-07-03 20:32:28,36,301.118,left
subject5,2018-07-03 20:32:28,36,298.515,right
subject5,2018-07-03 20:32:28,37,312.081,right
subject5,2018-07-03 20:32:28,37,305.579,left
subject5,2018-07-03 20:32:28,37,309.113,left
subject5,2018-07-03 20:32:28,39,324.653,left
subject5,2018-07-03 20:32:28,39,322.778,left
subject5,2018-07-03 20:32:28,40,336.635,center
subject5,2018-07-03 20:32:28,40,332.538,right
subject5,2018-07-03 20:32:28,40,331.889,right
subject6,2021-01-15 11:16:38,1,4.498,left
subject6,2021-01-15 11:16:38,1,10.58,center
subject6,2021-01-15 11:16:38,2,21.647,center
subject6,2021-01-15 11:16:38,2,23.9,right
subject6,2021-01-15 11:16:38,3,37.044,center
subject6,2021-01-15 11:16:38,3,41.892,left
subject6,2021-01-15 11:16:38,4,55.259,center
subject6,2021-01-15 11:16:38,5,74.072,right
subject6,2021-01-15 11:16:38,6,91.062,right
subject6,2021-01-15 11:16:38,6,96.453,left
subject6,2021-01-15 11:16:38,7,104.462,left
subject6,2021-01-15 11:16:38,7,111.576,left
subject6,2021-01-15 11:16:38,8,125.868,left
subject6,2021-01-15 11:16:38,8,127.549,left
subject6,2021-01-15 11:16:38,9,142.886,left
subject6,2021-01-15 11:16:38,9,144.982,right
subject6,2021-01-15 11:16:38,10,158.177,center
subject6,2021-01-15 11:16:38,11,177.522,left
subject6,2021-01-15 11:16:38,11,179.519,right
subject6,2021-01-15 11:16:38,12,194.551,center
subject6,2021-01-15 11:16:38,13,208.223,left
subject6,2021-01-15 11:16:38,14,227.622,left
subject6,2021-01-15 11:16:38,14,234.657,center
subject6,2021-01-15 11:16:38,15,242.766,center
subject6,2021-01-15 11:16:38,16,259.943,center
subject6,2021-01-15 11:16:38,16,265.695,left
subject6,2021-01-15 11:16:38,17,278.033,right
subject6,2021-01-15 11:16:38,18,295.436,center
subject6,2021-01-15 11:16:38,18,301.092,right
subject6,2021-01-15 11:16:38,19,312.168,center
subject6,2021-01-15 11:16:38,19,320.269,left
subject6,2021-01-15 11:16:38,20,328.678,left
subject6,2021-01-15 11:16:38,21,350.125,left
subject6,2021-01-15 11:16:38,21,355.114,left
subject6,2021-01-15 11:16:38,22,365.729,left
subject6,2021-01-15 11:16:38,23,381.802,center
subject6,2021-01-15 11:16:38,23,388.785,center
subject6,2021-01-15 11:16:38,24,397.983,center
subject6,2021-01-15 11:16:38,25,480.351,right
subject6,2021-01-15 11:16:38,25,483.584,right
subject6,2021-01-15 11:16:38,26,496.797,left
subject6,2021-01-15 11:16:38,26,500.623,center
subject6,2021-01-15 11:16:38,27,513.88,center
subject6,2021-01-15 11:16:38,28,530.146,center
subject6,2021-01-15 11:16:38,29,548.123,left
subject6,2021-01-15 11:16:38,29,551.419,center
subject6,2021-01-15 11:16:38,30,565.787,right
subject6,2021-01-15 11:16:38,31,581.927,right
subject6,2021-01-15 11:16:38,32,597.105,center
subject6,2021-01-15 11:16:38,32,605.174,center
subject6,2021-01-15 11:16:38,33,617.296,center
subject6,2021-01-15 11:16:38,34,631.519,center
subject6,2021-01-15 11:16:38,34,639.799,center
subject6,2021-01-15 11:16:38,35,648.943,left
subject6,2021-01-15 11:16:38,35,655.128,right
subject6,2021-01-15 11:16:38,36,666.147,center
subject6,2021-01-15 11:16:38,36,672.215,left
subject6,2021-01-15 11:16:38,37,684.363,right
subject6,2021-01-15 11:16:38,37,691.55,right
subject6,2021-01-15 11:16:38,38,701.364,right
subject6,2021-01-15 11:16:38,39,720.031,right
subject6,2021-01-15 11:16:38,39,725.79,right
subject6,2021-01-15 11:16:38,40,736.141,right
subject6,2021-01-15 11:16:38,40,741.894,right
subject6,2021-01-15 11:16:38,41,756.252,right
subject6,2021-01-15 11:16:38,42,770.854,left
subject6,2021-01-15 11:16:38,43,789.026,left
subject6,2021-01-15 11:16:38,44,806.311,right
subject6,2021-01-15 11:16:38,44,814.348,left
subject6,2021-01-15 11:16:38,45,824.353,center
subject6,2021-01-15 11:16:38,46,841.581,center
subject6,2021-01-15 11:16:38,47,855.822,left
subject6,2021-01-15 11:16:38,47,861.736,left
subject6,2021-01-15 11:16:38,48,873.726,center
subject6,2021-01-15 11:16:38,49,893.056,center
subject6,2021-01-15 11:16:38,50,954.241,left
subject6,2021-01-15 11:16:38,50,959.362,right
subject6,2021-01-15 11:16:38,51,972.74,right
subject6,2021-01-15 11:16:38,52,990.811,center
subject6,2021-01-15 11:16:38,52,995.781,left
subject6,2021-01-15 11:16:38,53,1003.789,right
subject6,2021-01-15 11:16:38,53,1014.044,right
subject6,2021-01-15 11:16:38,54,1023.039,right
subject6,2021-01-15 11:16:38,55,1042.303,left
subject6,2021-01-15 11:16:38,55,1045.455,center
subject6,2021-01-15 11:16:38,56,1055.932,right
subject6,2021-01-15 11:16:38,57,1075.973,left
subject6,2021-01-15 11:16:38,57,1079.937,center
subject6,2021-01-15 11:16:38,58,1094.502,right
subject6,2021-01-15 11:16:38,58,1096.454,right
subject6,2021-01-15 11:16:38,59,1111.624,right
subject6,2021-01-15 11:16:38,60,1125.798,center
subject6,2021-01-15 11:16:38,60,1132.081,left
subject6,2021-01-15 11:16:38,61,1143.288,right
subject6,2021-01-15 11:16:38,61,1148.102,center
subject6,2021-01-15 11:16:38,62,1161.495,left
subject6,2021-01-15 11:16:38,63,1180.847,right
subject6,2021-01-15 11:16:38,63,1185.941,center
subject6,2021-01-15 11:16:38,64,1198.098,left
subject6,2021-01-15 11:16:38,64,1201.006,left
subject6,2021-01-15 11:16:38,65,1213.625,right
subject6,2021-01-15 11:16:38,66,1230.984,center
subject6,2021-01-15 11:16:38,66,1236.597,center
subject6,2021-01-15 11:16:38,67,1250.275,center
subject6,2021-01-15 11:16:38,67,1253.086,center
subject6,2021-01-15 11:16:38,68,1266.502,right
subject6,2021-01-15 11:16:38,68,1269.087,right
subject6,2021-01-15 11:16:38,69,1282.504,right
subject6,2021-01-15 11:16:38,69,1287.335,left
subject6,2021-01-15 11:16:38,70,1297.615,left
subject6,2021-01-15 11:16:38,71,1318.076,center
subject6,2021-01-15 11:16:38,72,1332.045,left
subject6,2021-01-15 11:16:38,73,1353.319,left
subject6,2021-01-15 11:16:38,74,1368.471,left
subject6,2021-01-15 11:16:38,75,1432.293,left
subject6,2021-01-15 11:16:38,76,1448.608,left
subject6,2021-01-15 11:16:38,77,1463.869,center
subject6,2021-01-15 11:16:38,78,1482.217,center
subject6,2021-01-15 11:16:38,78,1487.081,right
subject6,2021-01-15 11:16:38,79,1500.456,right
subject6,2021-01-15 11:16:38,79,1503.5,left
subject6,2021-01-15 11:16:38,80,1515.845,center
subject6,2021-01-15 11:16:38,81,1533.975,center
subject6,2021-01-15 11:16:38,81,1540.17,center
subject6,2021-01-15 11:16:38,82,1552.464,left
subject6,2021-01-15 11:16:38,83,1570.591,right
subject6,2021-01-15 11:16:38,84,1583.92,left
subject6,2021-01-15 11:16:38,85,1604.533,right
subject6,2021-01-15 11:16:38,86,1620.502,left
subject6,2021-01-15 11:16:38,86,1625.613,right
subject6,2021-01-15 11:16:38,87,1637.768,right
subject6,2021-01-15 11:16:38,87,1641.88,right
subject6,2021-01-15 11:16:38,88,1654.531,center
subject6,2021-01-15 11:16:38,88,1659.364,center
subject6,2021-01-15 11:16:38,89,1673.598,right
subject6,2021-01-15 11:16:38,89,1680.487,left
subject6,2021-01-15 11:16:38,90,1691.872,center
subject6,2021-01-15 11:16:38,90,1696.875,right
subject6,2021-01-15 11:16:38,91,1705.346,right
subject6,2021-01-15 11:16:38,92,1722.876,right
subject6,2021-01-15 11:16:38,92,1732.6,center
subject6,2021-01-15 11:16:38,93,1740.822,right
subject6,2021-01-15 11:16:38,94,1757.562,right
subject6,2021-01-15 11:16:38,95,1774.462,right
subject6,2021-01-15 11:16:38,95,1783.568,left
subject6,2021-01-15 11:16:38,96,1795.55,center
subject6,2021-01-15 11:16:38,97,1810.663,left
subject6,2021-01-15 11:16:38,97,1815.738,center
subject6,2021-01-15 11:16:38,98,1825.994,center
subject6,2021-01-15 11:16:38,98,1831.876,left
subject6,2021-01-15 11:16:38,99,1845.095,right
subject6,2021-01-15 11:16:38,99,1851.139,left
subject6,2021-01-15 11:16:38,100,1863.219,left
140 changes: 100 additions & 40 deletions user_data/trials.csv
Original file line number Diff line number Diff line change
@@ -1,41 +1,101 @@
subject,session_datetime,block_id,trial_id,trial_start_time,trial_stop_time,trial_type,attribute_name,attribute_value
subject5,2018-07-03 20:32:28,1,1,2.043,10.043,stim,lumen,633
subject5,2018-07-03 20:32:28,1,2,10.508,18.508,ctrl,lumen,840
subject5,2018-07-03 20:32:28,1,3,18.7,26.7,ctrl,lumen,872
subject5,2018-07-03 20:32:28,1,4,26.707,34.707,ctrl,lumen,539
subject5,2018-07-03 20:32:28,1,5,34.715,42.715,stim,lumen,929
subject5,2018-07-03 20:32:28,1,6,42.806,50.806,stim,lumen,991
subject5,2018-07-03 20:32:28,1,7,50.839,58.839,ctrl,lumen,970
subject5,2018-07-03 20:32:28,1,8,59.196,67.196,ctrl,lumen,990
subject5,2018-07-03 20:32:28,1,9,67.31,75.31,stim,lumen,745
subject5,2018-07-03 20:32:28,1,10,75.772,83.772,ctrl,lumen,818
subject5,2018-07-03 20:32:28,2,11,86.082,94.082,stim,lumen,0
subject5,2018-07-03 20:32:28,2,12,94.087,102.087,stim,lumen,0
subject5,2018-07-03 20:32:28,2,13,102.183,110.183,stim,lumen,0
subject5,2018-07-03 20:32:28,2,14,110.526,118.526,stim,lumen,0
subject5,2018-07-03 20:32:28,2,15,118.844,126.844,stim,lumen,0
subject5,2018-07-03 20:32:28,2,16,127.22,135.22,stim,lumen,0
subject5,2018-07-03 20:32:28,2,17,135.319,143.319,ctrl,lumen,0
subject5,2018-07-03 20:32:28,2,18,143.357,151.357,stim,lumen,0
subject5,2018-07-03 20:32:28,2,19,151.54,159.54,ctrl,lumen,0
subject5,2018-07-03 20:32:28,2,20,159.8,167.8,stim,lumen,0
subject5,2018-07-03 20:32:28,3,21,170.146,178.146,ctrl,lumen,551
subject5,2018-07-03 20:32:28,3,22,178.548,186.548,ctrl,lumen,701
subject5,2018-07-03 20:32:28,3,23,186.909,194.909,stim,lumen,665
subject5,2018-07-03 20:32:28,3,24,195.124,203.124,ctrl,lumen,745
subject5,2018-07-03 20:32:28,3,25,203.344,211.344,ctrl,lumen,695
subject5,2018-07-03 20:32:28,3,26,211.788,219.788,stim,lumen,684
subject5,2018-07-03 20:32:28,3,27,220.009,228.009,ctrl,lumen,608
subject5,2018-07-03 20:32:28,3,28,228.359,236.359,stim,lumen,913
subject5,2018-07-03 20:32:28,3,29,236.768,244.768,stim,lumen,650
subject5,2018-07-03 20:32:28,3,30,244.884,252.884,ctrl,lumen,571
subject5,2018-07-03 20:32:28,4,31,254.437,262.437,stim,lumen,0
subject5,2018-07-03 20:32:28,4,32,262.918,270.918,stim,lumen,0
subject5,2018-07-03 20:32:28,4,33,270.95,278.95,ctrl,lumen,0
subject5,2018-07-03 20:32:28,4,34,279.121,287.121,stim,lumen,0
subject5,2018-07-03 20:32:28,4,35,287.194,295.194,ctrl,lumen,0
subject5,2018-07-03 20:32:28,4,36,295.661,303.661,ctrl,lumen,0
subject5,2018-07-03 20:32:28,4,37,304.029,312.029,stim,lumen,0
subject5,2018-07-03 20:32:28,4,38,312.486,320.486,stim,lumen,0
subject5,2018-07-03 20:32:28,4,39,320.576,328.576,ctrl,lumen,0
subject5,2018-07-03 20:32:28,4,40,328.971,336.971,stim,lumen,0
subject6,2021-01-15 11:16:38,1,1,0.123,17.123,stim,lumen,851
subject6,2021-01-15 11:16:38,1,2,17.54,34.54,ctrl,lumen,762
subject6,2021-01-15 11:16:38,1,3,34.81,51.81,ctrl,lumen,634
subject6,2021-01-15 11:16:38,1,4,52.202,69.202,ctrl,lumen,707
subject6,2021-01-15 11:16:38,1,5,69.611,86.611,stim,lumen,507
subject6,2021-01-15 11:16:38,1,6,87.03,104.03,stim,lumen,541
subject6,2021-01-15 11:16:38,1,7,104.165,121.165,ctrl,lumen,543
subject6,2021-01-15 11:16:38,1,8,121.502,138.502,ctrl,lumen,791
subject6,2021-01-15 11:16:38,1,9,138.612,155.612,ctrl,lumen,696
subject6,2021-01-15 11:16:38,1,10,155.741,172.741,stim,lumen,973
subject6,2021-01-15 11:16:38,1,11,173.101,190.101,stim,lumen,504
subject6,2021-01-15 11:16:38,1,12,190.433,207.433,stim,lumen,985
subject6,2021-01-15 11:16:38,1,13,207.788,224.788,ctrl,lumen,803
subject6,2021-01-15 11:16:38,1,14,225.163,242.163,ctrl,lumen,942
subject6,2021-01-15 11:16:38,1,15,242.351,259.351,ctrl,lumen,835
subject6,2021-01-15 11:16:38,1,16,259.577,276.577,ctrl,lumen,996
subject6,2021-01-15 11:16:38,1,17,276.695,293.695,stim,lumen,631
subject6,2021-01-15 11:16:38,1,18,293.985,310.985,stim,lumen,621
subject6,2021-01-15 11:16:38,1,19,311.152,328.152,stim,lumen,767
subject6,2021-01-15 11:16:38,1,20,328.523,345.523,ctrl,lumen,529
subject6,2021-01-15 11:16:38,1,21,345.944,362.944,stim,lumen,886
subject6,2021-01-15 11:16:38,1,22,363.362,380.362,stim,lumen,810
subject6,2021-01-15 11:16:38,1,23,380.503,397.503,stim,lumen,664
subject6,2021-01-15 11:16:38,1,24,397.887,414.887,stim,lumen,612
subject6,2021-01-15 11:16:38,1,25,476.155,493.155,stim,lumen,950
subject6,2021-01-15 11:16:38,2,26,493.517,510.517,stim,lumen,0
subject6,2021-01-15 11:16:38,2,27,510.72,527.72,ctrl,lumen,0
subject6,2021-01-15 11:16:38,2,28,527.866,544.866,ctrl,lumen,0
subject6,2021-01-15 11:16:38,2,29,545.114,562.114,ctrl,lumen,0
subject6,2021-01-15 11:16:38,2,30,562.331,579.331,ctrl,lumen,0
subject6,2021-01-15 11:16:38,2,31,579.437,596.437,stim,lumen,0
subject6,2021-01-15 11:16:38,2,32,596.811,613.811,stim,lumen,0
subject6,2021-01-15 11:16:38,2,33,614.125,631.125,ctrl,lumen,0
subject6,2021-01-15 11:16:38,2,34,631.516,648.516,stim,lumen,0
subject6,2021-01-15 11:16:38,2,35,648.63,665.63,stim,lumen,0
subject6,2021-01-15 11:16:38,2,36,665.929,682.929,stim,lumen,0
subject6,2021-01-15 11:16:38,2,37,683.141,700.141,ctrl,lumen,0
subject6,2021-01-15 11:16:38,2,38,700.313,717.313,stim,lumen,0
subject6,2021-01-15 11:16:38,2,39,717.586,734.586,stim,lumen,0
subject6,2021-01-15 11:16:38,2,40,734.714,751.714,stim,lumen,0
subject6,2021-01-15 11:16:38,2,41,752.075,769.075,ctrl,lumen,0
subject6,2021-01-15 11:16:38,2,42,769.458,786.458,stim,lumen,0
subject6,2021-01-15 11:16:38,2,43,786.765,803.765,ctrl,lumen,0
subject6,2021-01-15 11:16:38,2,44,803.884,820.884,stim,lumen,0
subject6,2021-01-15 11:16:38,2,45,821.245,838.245,stim,lumen,0
subject6,2021-01-15 11:16:38,2,46,838.408,855.408,ctrl,lumen,0
subject6,2021-01-15 11:16:38,2,47,855.549,872.549,stim,lumen,0
subject6,2021-01-15 11:16:38,2,48,872.713,889.713,ctrl,lumen,0
subject6,2021-01-15 11:16:38,2,49,889.942,906.942,ctrl,lumen,0
subject6,2021-01-15 11:16:38,2,50,952.031,969.031,ctrl,lumen,0
subject6,2021-01-15 11:16:38,3,51,969.387,986.387,ctrl,lumen,529
subject6,2021-01-15 11:16:38,3,52,986.578,1003.578,ctrl,lumen,601
subject6,2021-01-15 11:16:38,3,53,1003.788,1020.788,stim,lumen,962
subject6,2021-01-15 11:16:38,3,54,1021.001,1038.001,ctrl,lumen,617
subject6,2021-01-15 11:16:38,3,55,1038.129,1055.129,stim,lumen,541
subject6,2021-01-15 11:16:38,3,56,1055.529,1072.529,ctrl,lumen,894
subject6,2021-01-15 11:16:38,3,57,1072.875,1089.875,ctrl,lumen,849
subject6,2021-01-15 11:16:38,3,58,1090.057,1107.057,ctrl,lumen,911
subject6,2021-01-15 11:16:38,3,59,1107.354,1124.354,ctrl,lumen,807
subject6,2021-01-15 11:16:38,3,60,1124.679,1141.679,ctrl,lumen,717
subject6,2021-01-15 11:16:38,3,61,1141.983,1158.983,ctrl,lumen,844
subject6,2021-01-15 11:16:38,3,62,1159.267,1176.267,ctrl,lumen,953
subject6,2021-01-15 11:16:38,3,63,1176.454,1193.454,ctrl,lumen,974
subject6,2021-01-15 11:16:38,3,64,1193.837,1210.837,stim,lumen,820
subject6,2021-01-15 11:16:38,3,65,1211.187,1228.187,ctrl,lumen,763
subject6,2021-01-15 11:16:38,3,66,1228.556,1245.556,ctrl,lumen,860
subject6,2021-01-15 11:16:38,3,67,1245.909,1262.909,ctrl,lumen,947
subject6,2021-01-15 11:16:38,3,68,1263.059,1280.059,stim,lumen,542
subject6,2021-01-15 11:16:38,3,69,1280.222,1297.222,ctrl,lumen,845
subject6,2021-01-15 11:16:38,3,70,1297.399,1314.399,stim,lumen,715
subject6,2021-01-15 11:16:38,3,71,1314.593,1331.593,stim,lumen,825
subject6,2021-01-15 11:16:38,3,72,1331.785,1348.785,ctrl,lumen,672
subject6,2021-01-15 11:16:38,3,73,1349.079,1366.079,ctrl,lumen,512
subject6,2021-01-15 11:16:38,3,74,1366.183,1383.183,stim,lumen,742
subject6,2021-01-15 11:16:38,3,75,1428.237,1445.237,ctrl,lumen,741
subject6,2021-01-15 11:16:38,4,76,1445.393,1462.393,stim,lumen,0
subject6,2021-01-15 11:16:38,4,77,1462.722,1479.722,stim,lumen,0
subject6,2021-01-15 11:16:38,4,78,1479.996,1496.996,ctrl,lumen,0
subject6,2021-01-15 11:16:38,4,79,1497.224,1514.224,ctrl,lumen,0
subject6,2021-01-15 11:16:38,4,80,1514.559,1531.559,ctrl,lumen,0
subject6,2021-01-15 11:16:38,4,81,1531.955,1548.955,stim,lumen,0
subject6,2021-01-15 11:16:38,4,82,1549.3,1566.3,stim,lumen,0
subject6,2021-01-15 11:16:38,4,83,1566.43,1583.43,stim,lumen,0
subject6,2021-01-15 11:16:38,4,84,1583.819,1600.819,ctrl,lumen,0
subject6,2021-01-15 11:16:38,4,85,1601.127,1618.127,stim,lumen,0
subject6,2021-01-15 11:16:38,4,86,1618.378,1635.378,ctrl,lumen,0
subject6,2021-01-15 11:16:38,4,87,1635.718,1652.718,stim,lumen,0
subject6,2021-01-15 11:16:38,4,88,1653.111,1670.111,ctrl,lumen,0
subject6,2021-01-15 11:16:38,4,89,1670.435,1687.435,ctrl,lumen,0
subject6,2021-01-15 11:16:38,4,90,1687.721,1704.721,ctrl,lumen,0
subject6,2021-01-15 11:16:38,4,91,1705.036,1722.036,stim,lumen,0
subject6,2021-01-15 11:16:38,4,92,1722.447,1739.447,stim,lumen,0
subject6,2021-01-15 11:16:38,4,93,1739.749,1756.749,stim,lumen,0
subject6,2021-01-15 11:16:38,4,94,1757.081,1774.081,ctrl,lumen,0
subject6,2021-01-15 11:16:38,4,95,1774.191,1791.191,stim,lumen,0
subject6,2021-01-15 11:16:38,4,96,1791.426,1808.426,stim,lumen,0
subject6,2021-01-15 11:16:38,4,97,1808.53,1825.53,stim,lumen,0
subject6,2021-01-15 11:16:38,4,98,1825.691,1842.691,stim,lumen,0
subject6,2021-01-15 11:16:38,4,99,1842.868,1859.868,stim,lumen,0
subject6,2021-01-15 11:16:38,4,100,1860.118,1877.118,stim,lumen,0
84 changes: 31 additions & 53 deletions workflow_array_ephys/analysis.py
Original file line number Diff line number Diff line change
@@ -1,11 +1,13 @@
import datajoint as dj
import numpy as np

from workflow_array_ephys.pipeline import db_prefix, session, ephys, trial, event
from .pipeline import db_prefix, ephys, trial, event


schema = dj.schema(db_prefix + 'analysis')

AlignmentEvent = event.AlignmentEvent


@schema
class SpikesAlignmentCondition(dj.Manual):
@@ -19,7 +21,7 @@ class SpikesAlignmentCondition(dj.Manual):
"""

class Trial(dj.Part):
definition = """ # Trials (or subset of trials) to computed event-aligned spikes and PSTH on
definition = """ # Trials on which to compute event-aligned spikes and PSTH
-> master
-> trial.Trial
"""
@@ -38,7 +40,7 @@ class AlignedTrialSpikes(dj.Part):
-> ephys.CuratedClustering.Unit
-> SpikesAlignmentCondition.Trial
---
aligned_spike_times: longblob # (s) spike times relative to the alignment event time
aligned_spike_times: longblob # (s) spike times relative to alignment event time
"""

class UnitPSTH(dj.Part):
@@ -51,68 +53,43 @@ class UnitPSTH(dj.Part):
"""

def make(self, key):
unit_keys, unit_spike_times = (ephys.CuratedClustering.Unit & key).fetch('KEY', 'spike_times', order_by='unit')

trial_keys, trial_starts, trial_ends = (trial.Trial & (SpikesAlignmentCondition.Trial & key)).fetch(
'KEY', 'trial_start_time', 'trial_stop_time', order_by='trial_id')

unit_keys, unit_spike_times = (ephys.CuratedClustering.Unit & key
).fetch('KEY', 'spike_times', order_by='unit')
bin_size = (SpikesAlignmentCondition & key).fetch1('bin_size')

alignment_spec = (event.AlignmentEvent & key).fetch1()
trialized_event_times = trial.get_trialized_alignment_event_times(
key, trial.Trial & (SpikesAlignmentCondition.Trial & key))

min_limit = (trialized_event_times.event - trialized_event_times.start).max()
max_limit = (trialized_event_times.end - trialized_event_times.event).max()

# Spike raster
aligned_trial_spikes = []
units_spike_raster = {u['unit']: {**key, **u, 'aligned_spikes': []} for u in unit_keys}
min_limit, max_limit = np.Inf, -np.Inf
for trial_key, trial_start, trial_stop in zip(trial_keys, trial_starts, trial_ends):
alignment_event_time = (event.Event & key & {'event_type': alignment_spec['alignment_event_type']}
& f'event_start_time BETWEEN {trial_start} AND {trial_stop}')
if alignment_event_time:
# if there are multiple of such alignment event, pick the last one in the trial
alignment_event_time = alignment_event_time.fetch(
'event_start_time', order_by='event_start_time DESC', limit=1)[0]
else:
units_spike_raster = {u['unit']: {**key, **u, 'aligned_spikes': []
} for u in unit_keys}
for _, r in trialized_event_times.iterrows():
if np.isnan(r.event):
continue

alignment_start_time = (event.Event & key & {'event_type': alignment_spec['start_event_type']}
& f'event_start_time < {alignment_event_time}')
if alignment_start_time:
# if there are multiple of such start event, pick the most immediate one prior to the alignment event
alignment_start_time = alignment_start_time.fetch(
'event_start_time', order_by='event_start_time DESC', limit=1)[0]
alignment_start_time = max(alignment_start_time, trial_start)
else:
alignment_start_time = trial_start

alignment_end_time = (event.Event & key & {'event_type': alignment_spec['end_event_type']}
& f'event_start_time > {alignment_event_time}')
if alignment_end_time:
# if there are multiple of such start event, pick the most immediate one following the alignment event
alignment_end_time = alignment_end_time.fetch(
'event_start_time', order_by='event_start_time', limit=1)[0]
alignment_end_time = min(alignment_end_time, trial_stop)
else:
alignment_end_time = trial_stop

alignment_event_time += alignment_spec['alignment_time_shift']
alignment_start_time += alignment_spec['start_time_shift']
alignment_end_time += alignment_spec['end_time_shift']

min_limit = min(alignment_start_time - alignment_event_time, min_limit)
max_limit = max(alignment_end_time - alignment_event_time, max_limit)

alignment_start_time = r.event - min_limit
alignment_end_time = r.event + max_limit
for unit_key, spikes in zip(unit_keys, unit_spike_times):
aligned_spikes = spikes[(alignment_start_time <= spikes)
& (spikes < alignment_end_time)] - alignment_event_time
aligned_trial_spikes.append({**key, **unit_key, **trial_key, 'aligned_spike_times': aligned_spikes})
units_spike_raster[unit_key['unit']]['aligned_spikes'].append(aligned_spikes)
& (spikes < alignment_end_time)] - r.event
aligned_trial_spikes.append({**key, **unit_key,
**r.trial_key,
'aligned_spike_times': aligned_spikes})
units_spike_raster[unit_key['unit']]['aligned_spikes'
].append(aligned_spikes)

# PSTH
for unit_spike_raster in units_spike_raster.values():
spikes = np.concatenate(unit_spike_raster['aligned_spikes'])

psth, edges = np.histogram(spikes, bins=np.arange(min_limit, max_limit, bin_size))
unit_spike_raster['psth'] = psth / len(unit_spike_raster.pop('aligned_spikes')) / bin_size
psth, edges = np.histogram(spikes,
bins=np.arange(-min_limit, max_limit, bin_size))
unit_spike_raster['psth'] = (psth
/ len(unit_spike_raster.pop('aligned_spikes'))
/ bin_size)
unit_spike_raster['psth_edges'] = edges[1:]

self.insert1(key)
@@ -128,7 +105,8 @@ def plot_raster(self, key, unit, axs=None):
fig, axs = plt.subplots(2, 1, figsize=(12, 8))

trial_ids, aligned_spikes = (self.AlignedTrialSpikes
& key & {'unit': unit}).fetch('trial_id', 'aligned_spike_times')
& key & {'unit': unit}
).fetch('trial_id', 'aligned_spike_times')
psth, psth_edges = (self.UnitPSTH & key & {'unit': unit}).fetch1(
'psth', 'psth_edges')

31 changes: 19 additions & 12 deletions workflow_array_ephys/ingest.py
Original file line number Diff line number Diff line change
@@ -15,12 +15,14 @@ def ingest_general(csvs, tables, skip_duplicates=True, verbose=True,
Inserts data from a series of csvs into their corresponding table:
e.g., ingest_general(['./lab_data.csv', './proj_data.csv'],
[lab.Lab(),lab.Project()]
ingest_general(csvs, tables, skip_duplicates=True, verbose=True, allow_direct_insert=False)
ingest_general(csvs, tables, skip_duplicates=True, verbose=True,
allow_direct_insert=False)
:param csvs: list of relative paths to CSV files. CSV are delimited by commas.
:param tables: list of datajoint tables with ()
:param verbose: print number inserted (i.e., table length change)
:param skip_duplicates: <description>
:param allow_direct_insert: <description>
:param skip_duplicates: skip items that are either (a) duplicates within the csv
or (b) already exist in the corresponding table
:param allow_direct_insert: Permit insertion directly into calculated tables
"""
for csv_filepath, table in zip(csvs, tables):
with open(csv_filepath, newline='') as f:
@@ -47,7 +49,8 @@ def ingest_subjects(subject_csv_path='./user_data/subjects.csv',
ingest_general(csvs, tables, skip_duplicates=skip_duplicates, verbose=verbose)


def ingest_sessions(session_csv_path='./user_data/sessions.csv', verbose=True):
def ingest_sessions(session_csv_path='./user_data/sessions.csv', verbose=True,
skip_duplicates=False):
"""
Ingests SpikeGLX and OpenEphys files from directories listed
in the session_dir column of ./user_data/sessions.csv
@@ -66,7 +69,8 @@ def ingest_sessions(session_csv_path='./user_data/sessions.csv', verbose=True):
session_datetimes, insertions = [], []

# search session dir and determine acquisition software
for ephys_pattern, ephys_acq_type in zip(['*.ap.meta', '*.oebin'], ['SpikeGLX', 'OpenEphys']):
for ephys_pattern, ephys_acq_type in zip(['*.ap.meta', '*.oebin'],
['SpikeGLX', 'OpenEphys']):
ephys_meta_filepaths = list(session_dir.rglob(ephys_pattern))
if ephys_meta_filepaths:
acq_software = ephys_acq_type
@@ -130,16 +134,19 @@ def ingest_sessions(session_csv_path='./user_data/sessions.csv', verbose=True):
session.Session.insert(session_list)
session.SessionDirectory.insert(session_dir_list)
if verbose:
print(f'\n---- Insert {len(session_list)} entry(s) into session.Session ----')
print(f'\n---- Insert {len(probe_insertion_list)} entry(s) into ephys.ProbeInsertion ----')
print(f'\n---- Insert {len(probe_insertion_list)} entry(s) into '
+ 'ephys.ProbeInsertion ----')
print(f'\n---- Insert {len(session_list)} entry(s) into '
+ 'session.Session ----')
else:
session.Session.insert(session_list)
session.SessionDirectory.insert(session_dir_list)
ephys.ProbeInsertion.insert(probe_insertion_list)
if verbose:
print(f'\n---- Insert {len(session_list)} entry(s) into session.Session ----')
print(f'\n---- Insert {len(probe_insertion_list)} entry(s) into ephys.ProbeInsertion ----')

print(f'\n---- Insert {len(probe_insertion_list)} entry(s) into '
+ 'ephys.ProbeInsertion ----')
print(f'\n---- Insert {len(session_list)} entry(s) into '
+ 'session.Session ----')
if verbose:
print('\n---- Successfully completed workflow_array_ephys/ingest.py ----')

@@ -153,12 +160,12 @@ def ingest_events(recording_csv_path='./user_data/behavior_recordings.csv',
block_csv_path, block_csv_path,
trial_csv_path, trial_csv_path, trial_csv_path,
trial_csv_path,
event_csv_path,event_csv_path]
event_csv_path, event_csv_path, event_csv_path]
tables = [event.BehaviorRecording(), event.BehaviorRecording.File(),
trial.Block(), trial.Block.Attribute(),
trial.TrialType(), trial.Trial(), trial.Trial.Attribute(),
trial.BlockTrial(),
event.EventType(), event.Event()]
event.EventType(), event.Event(), trial.TrialEvent()]

# Allow direct insert required bc element-trial has Imported that should be Manual
ingest_general(csvs, tables, skip_duplicates=skip_duplicates, verbose=verbose,
63 changes: 63 additions & 0 deletions workflow_array_ephys/localization.py
Original file line number Diff line number Diff line change
@@ -0,0 +1,63 @@
import datajoint as dj
from element_interface.utils import find_full_path
from element_electrode_localization import coordinate_framework, electrode_localization
from element_electrode_localization.coordinate_framework import load_ccf_annotation

from .pipeline import ephys, probe
from .paths import get_ephys_root_data_dir, get_session_directory, \
get_electrode_localization_dir


if 'custom' not in dj.config:
dj.config['custom'] = {}

db_prefix = dj.config['custom'].get('database.prefix', '')

__all__ = ['ephys', 'probe', 'coordinate_framework', 'electrode_localization',
'ProbeInsertion',
'get_ephys_root_data_dir', 'get_session_directory',
'get_electrode_localization_dir', 'load_ccf_annotation']


ccf_id = 0
voxel_resolution = 100


# # Dummy table for case sensitivity in MySQL------------------------------------
# # Without DummyTable, the schema activates with a case-insensitive character set
# # which cannot ingest all CCF standard acronyms

# coordinate_framework_schema = dj.schema(db_prefix + 'ccf')


# @coordinate_framework_schema
# class DummyTable(dj.Manual):
# definition = """
# id : varchar(1)
# """
# contents = zip(['1', '2'])


# ccf_schema_name = db_prefix + 'ccf'
# dj.conn().query(f'ALTER DATABASE `{ccf_schema_name}` CHARACTER SET utf8 COLLATE '
# + 'utf8_bin;')


# Activate "electrode-localization" schema ------------------------------------

ProbeInsertion = ephys.ProbeInsertion
Electrode = probe.ProbeType.Electrode

electrode_localization.activate(db_prefix + 'eloc',
db_prefix + 'ccf',
linking_module=__name__)

nrrd_filepath = find_full_path(get_ephys_root_data_dir(),
f'annotation_{voxel_resolution}.nrrd')
ontology_csv_filepath = find_full_path(get_ephys_root_data_dir(), 'query.csv')

if not (coordinate_framework.CCF & {'ccf_id': ccf_id}):
coordinate_framework.load_ccf_annotation(
ccf_id=ccf_id, version_name='ccf_2017', voxel_resolution=voxel_resolution,
nrrd_filepath=nrrd_filepath,
ontology_csv_filepath=ontology_csv_filepath)
21 changes: 21 additions & 0 deletions workflow_array_ephys/paths.py
Original file line number Diff line number Diff line change
@@ -1,4 +1,6 @@
import datajoint as dj
import pathlib
from element_interface.utils import find_full_path


def get_ephys_root_data_dir():
@@ -9,3 +11,22 @@ def get_session_directory(session_key: dict) -> str:
from .pipeline import session
session_dir = (session.SessionDirectory & session_key).fetch1('session_dir')
return session_dir


def get_electrode_localization_dir(probe_insertion_key: dict) -> str:
from .pipeline import ephys
acq_software = (ephys.EphysRecording & probe_insertion_key).fetch1('acq_software')

if acq_software == 'SpikeGLX':
spikeglx_meta_filepath = pathlib.Path((ephys.EphysRecording.EphysFile
& probe_insertion_key
& 'file_path LIKE "%.ap.meta"'
).fetch1('file_path'))
probe_dir = find_full_path(get_ephys_root_data_dir(),
spikeglx_meta_filepath.parent)
elif acq_software == 'Open Ephys':
probe_path = (ephys.EphysRecording.EphysFile & probe_insertion_key
).fetch1('file_path')
probe_dir = find_full_path(get_ephys_root_data_dir(), probe_path)

return probe_dir