deploy: e1d800c
rcpeene committed Jul 31, 2023
1 parent 27692fb commit db75031
Showing 38 changed files with 115 additions and 110 deletions.
6 changes: 3 additions & 3 deletions FAQ.html
@@ -120,7 +120,7 @@ <h1 class="site-logo" id="site-title">OpenScope Databook</h1>
<ul class="nav bd-sidenav">
<li class="toctree-l1">
<a class="reference internal" href="basics/background.html">
-DANDI
+BACKGROUND
</a>
</li>
<li class="toctree-l1">
@@ -275,7 +275,7 @@ <h1 class="site-logo" id="site-title">OpenScope Databook</h1>
</li>
<li class="toctree-l1">
<a class="reference internal" href="methods/github.html">
-Git/Github:
+Git/Github
</a>
</li>
<li class="toctree-l1">
@@ -570,7 +570,7 @@ <h3>Computation<a class="headerlink" href="#computation" title="Permalink to thi
<p><strong>Running with Binder/Thebe is not working, what’s up?</strong>
As described in <a class="reference external" href="https://blog.jupyter.org/mybinder-org-reducing-capacity-c93ccfc6413f">this Jupyter blog post</a>, Binder no longer has the support of Google, and therefore shows reduced performance. Launches may fail or take a long time. There is no working solution to this except trying to launch again. An alternative would be to launch Databook notebooks with Google Colab.</p>
<p><strong>How can I store my work on the Databook and come back to it later?</strong>
-Launching the Databook with <a class="reference external" href="https://hub.dandiarchive.org/">Dandihub</a> will allow your files to be stored persistently and contain all the Databook’s notebooks together. Additionally, you can clone the <a class="reference external" href="https://github.com/AllenInstitute/openscope_databook">Github repo</a> run our files locally. These are both explained in further detail on the <a class="reference external" href="https://alleninstitute.github.io/openscope_databook/intro.html">front page</a>.</p>
+Launching the Databook with <a class="reference external" href="https://hub.dandiarchive.org/">Dandihub</a> will allow your files to be stored persistently and contain all the Databook’s notebooks together. Additionally, you can clone the <a class="reference external" href="https://github.com/AllenInstitute/openscope_databook">Github repo</a> and run our files locally. These are both explained in further detail on the <a class="reference external" href="https://alleninstitute.github.io/openscope_databook/intro.html">front page</a>.</p>
<p><strong>How do you recommend using the Databook?</strong>
The Databook can be used to reproduce analysis on files, as a starting point for investigating a dataset, or as an educational resource to get more familiar with NWB files or particular kinds of data. In all of these cases, the code can be modified, copied, and interactively run to gain a better understanding of the data. For educational use, the databook may be run remotely with Thebe, Binder, or Google Colab as simple demonstrations. For more advanced usage and analysis, it may behoove you to download an individual notebook and run it locally.</p>
</section>
4 changes: 2 additions & 2 deletions _sources/FAQ.md
@@ -21,7 +21,7 @@ The Openscope Databook is a great place to demonstrate the capabilities and some
As described in [this Jupyter blog post](https://blog.jupyter.org/mybinder-org-reducing-capacity-c93ccfc6413f), Binder no longer has the support of Google, and therefore shows reduced performance. Launches may fail or take a long time. There is no working solution to this except trying to launch again. An alternative would be to launch Databook notebooks with Google Colab.

**How can I store my work on the Databook and come back to it later?**
-Launching the Databook with [Dandihub](https://hub.dandiarchive.org/) will allow your files to be stored persistently and contain all the Databook's notebooks together. Additionally, you can clone the [Github repo](https://github.com/AllenInstitute/openscope_databook) run our files locally. These are both explained in further detail on the [front page](https://alleninstitute.github.io/openscope_databook/intro.html).
+Launching the Databook with [Dandihub](https://hub.dandiarchive.org/) will allow your files to be stored persistently and contain all the Databook's notebooks together. Additionally, you can clone the [Github repo](https://github.com/AllenInstitute/openscope_databook) and run our files locally. These are both explained in further detail on the [front page](https://alleninstitute.github.io/openscope_databook/intro.html).

**How do you recommend using the Databook?**
The Databook can be used to reproduce analysis on files, as a starting point for investigating a dataset, or as an educational resource to get more familiar with NWB files or particular kinds of data. In all of these cases, the code can be modified, copied, and interactively run to gain a better understanding of the data. For educational use, the databook may be run remotely with Thebe, Binder, or Google Colab as simple demonstrations. For more advanced usage and analysis, it may behoove you to download an individual notebook and run it locally.
@@ -42,4 +42,4 @@ If local installation is failing, it is recommended that you attempt to clone an
Contributing to this project can be as simple as forking the [Github repo](https://github.com/AllenInstitute/openscope_databook), making your changes, and issuing a PR on Github. However, if you wish to make a significant contribution, it is advisable to reach out to [Jerome Lecoq](https://github.com/jeromelecoq) or [Carter Peene](https://github.com/rcpeene) to discuss it first.

**I have a question that isn't addressed here**
-Questions, bugs, or any other topics of interest can be discussed by filing an issue on the Github repo's [issues page](https://github.com/AllenInstitute/openscope_databook/issues).
\ No newline at end of file
+Questions, bugs, or any other topics of interest can be discussed by filing an issue on the Github repo's [issues page](https://github.com/AllenInstitute/openscope_databook/issues).
14 changes: 8 additions & 6 deletions _sources/basics/background.md
@@ -1,4 +1,6 @@
-## DANDI
+# BACKGROUND
+
+### DANDI
At the Allen Institute, we frequently utilize a platform called [DANDI](https://dandiarchive.org/) (Distributed Archives for Neurophysiology Data Integration). DANDI allows open data sharing and archiving and acts as a centralized repository where researchers can deposit data. While some of these notebooks use pre-loaded data from DANDI, the ultimate purpose of this Databook is to teach users to take any dataset off DANDI and reproduce the analysis within these notebooks.

At the beginning of each notebook, we need to download the data that we will be analyzing. Our data is stored in NWB files (explained below), and the NWB files for a given experiment are contained in datasets called dandisets, which are available for download from the [DANDI Archive](https://dandiarchive.org/). First, we identify what dandiset we want to access, and from there we identify the filepath for the specific NWB file we want to download. While these notebooks provide most of the steps to download and access an NWB file, here are instructions for accessing a particular filepath for any file stored on DANDI:
@@ -9,13 +11,13 @@ At the beginning of each notebook, we need to download the data that we will be
4) Once you have entered the folder, you will see a list of files with 4 blue buttons to the right of each file name. Select any file you would like and click the "i" icon in the blue circle.
5) This will pull up a new tab with a bunch of red code. At the top of the code, you will see `"id" :`, `"path" :`, and `"access":`. Copy the code to the right of `"path" :`. It will look like this: `"sub-460654/sub-460654_ses-20190611T181840_behavior+ophys.nwb"`. This is the filepath you will insert in various notebooks when it asks for `dandi_filepath`.
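
Once you have a dandiset ID and filepath, the download can also be done programmatically. Below is a minimal sketch using the [DANDI Python client](https://github.com/dandi/dandi-cli); the dandiset ID and filepath shown are placeholders taken from the example above, so substitute your own:

```python
# A minimal sketch of downloading one NWB file with the DANDI Python client
# (`pip install dandi`). The dandiset ID and filepath are placeholders.
from dandi.dandiapi import DandiAPIClient

dandiset_id = "000021"  # placeholder; use your dandiset's six-digit ID
dandi_filepath = "sub-460654/sub-460654_ses-20190611T181840_behavior+ophys.nwb"

with DandiAPIClient() as client:
    dandiset = client.get_dandiset(dandiset_id)
    asset = dandiset.get_asset_by_path(dandi_filepath)
    asset.download("./downloaded.nwb")  # save the file locally
```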

-## NWB FILES
+### NWB FILES
Throughout these notebooks, we analyze data that is stored in NWB (Neurodata Without Borders) files. NWB files utilize a standardized format for storing and sharing optical data, behavioral data, and neurophysiology data. Their purpose is to address the need for a universal data format that makes analysis accessible across different experimental techniques. NWB files contain raw data, processed data, analysis results, and metadata all organized in a uniform manner across different research projects. During analysis within these notebooks, we often need to access data within different parts of an NWB file. To explore the specifications of NWB format or to clarify the documentation used within NWB files, you can explore the [NWB Format Specification](https://nwb-schema.readthedocs.io/en/latest/) website. This is a helpful resource to understand how to access different parts of an NWB file that may not be provided in examples within our notebooks.

In order to do basic reading of NWB files, we utilize [PyNWB](https://github.com/NeurodataWithoutBorders/pynwb), a Python package that allows users to read, write, and manipulate NWB files. We explain how to utilize PyNWB and how to explore NWB files in our [Reading an NWB file](./read_nwb.ipynb) notebook. In addition to viewing raw data, it is also useful to graphically visualize data that is stored in NWB files. In our notebooks, we do so by utilizing [NWBWidgets](https://github.com/NeurodataWithoutBorders/nwbwidgets), an interactive package that can be used in Jupyter notebooks to easily visualize data. For example, this package can display time series data, spatial data, and spike trains. To visualize data from NWB files using methods that don't require Jupyter, you can utilize [HDFView](https://www.hdfgroup.org/downloads/hdfview/), a graphical user interface tool. To explore how to use these methods to visualize data from NWB files, please reference our [Explore an NWB file](./use_newwidgets.ipynb) notebook.
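
As a brief illustration, here is roughly what reading and browsing a downloaded file looks like. This is a sketch that assumes the file was saved as `./downloaded.nwb` and that it runs inside a Jupyter notebook:

```python
# A sketch of reading a local NWB file with PyNWB and browsing it with
# NWBWidgets in a Jupyter notebook. The filename is a placeholder.
from pynwb import NWBHDF5IO
from nwbwidgets import nwb2widget

io = NWBHDF5IO("./downloaded.nwb", mode="r")
nwb = io.read()

print(nwb.session_description)  # top-level metadata about the session
nwb2widget(nwb)                 # interactive widget tree of the file's contents
```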


-## Understanding Data Collection Techniques
+### Understanding Data Collection Techniques
In this Databook, we will be analyzing data from two different types of experimental techniques: two-photon calcium imaging (ophys) and extracellular electrophysiology (ecephys).

Two-photon calcium imaging utilizes a fluorescence indicator that emits fluorescence when bound to calcium ions. The intensity of fluorescence is proportional to the concentration of calcium, and this allows us to measure and visualize neural activity at the cellular level. A specialized microscope detects the fluorescence and the data is converted to a visual image. The Allen Institute uses the [Suite2P](https://github.com/MouseLand/suite2p) algorithm to identify regions of interest (ROIs), which are putative neurons, from the images, and each neuron's activity can be studied over the duration of an experiment.
@@ -24,9 +26,9 @@ Extracellular electrophysiology is a technique that analyzes the electrical acti

Ultimately, both experimental techniques are used to collect information about neuronal activity in the brain, but the way the data is analyzed for each technique is different. We provide notebooks that explain the analysis for both ophys and ecephys. The first notebook that discusses ophys analysis is [Visualizing Raw 2-Photon Images](./visualize_2p_raw.ipynb) and the first notebook that discusses ecephys analysis is [Visualizing LFP responses to Stimulus](visualize_lfp_responses.ipynb).
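
As a rough sketch of how this difference appears in the files themselves, ophys sessions typically store fluorescence traces per segmented ROI, while ecephys sessions store spike times per sorted unit. The module and interface names below vary between datasets and are assumptions, not fixed paths:

```python
# A sketch of where ophys and ecephys data typically live in an NWB file.
# Module/interface names differ between datasets; these are assumptions.
from pynwb import NWBHDF5IO

nwb = NWBHDF5IO("./downloaded.nwb", mode="r").read()

# ophys: dF/F fluorescence traces, one per segmented ROI
if "ophys" in nwb.processing:
    dff = nwb.processing["ophys"]["DfOverF"]  # interface name varies by pipeline
    print(dff.roi_response_series)

# ecephys: spike times for each sorted unit
if nwb.units is not None:
    print(nwb.units.to_dataframe().head())  # one row per unit, with spike_times
```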

-Resources for ophys:
+**Resources for Ophys:**
1) [In vivo two photon calcium imaging of neuronal networks](https://www.pnas.org/doi/epdf/10.1073/pnas.1232232100) is a paper that can provide an introduction to two-photon calcium imaging techniques.

-Resources for ecephys:
+**Resources for Ecephys:**
1) [Survey of spiking in the mouse visual system reveals functional hierarchy](https://www.nature.com/articles/s41586-020-03171-x) is a paper from the Allen Institute that can provide an introduction to ecephys and the use of neuropixel probes.
-2) [here](https://portal.brain-map.org/explore/circuits/visual-coding-neuropixels) is a visualization of the neuropixel probes that may come in handy when trying to visualize how the data is collected from the probe itself.
\ No newline at end of file
+2) [here](https://portal.brain-map.org/explore/circuits/visual-coding-neuropixels) is a visualization of the neuropixel probes that may come in handy when trying to visualize how the data is collected from the probe itself.
8 changes: 4 additions & 4 deletions _sources/intro.md
@@ -4,12 +4,12 @@

<!-- authors start -->

-*R. Carter Peene (927), Katrina Ager (52), Jerome Lecoq (7), Colleen J. Gillon (5), Josh Siegle, Ahad Bawany*
+*R. Carter Peene (932), Katrina Ager (52), Jerome Lecoq (7), Colleen J. Gillon (5), Josh Siegle, Ahad Bawany*

<!-- authors end -->
<!-- version start -->

-[v0.6.0](https://github.com/AllenInstitute/openscope_databook/releases)
+[v0.7.0](https://github.com/AllenInstitute/openscope_databook/releases)

<!-- version end -->

@@ -69,13 +69,13 @@ Binder will automatically setup the environment with [repo2docker](https://githu
### Dandihub
[Dandihub](https://hub.dandiarchive.org/) is an instance of JupyterHub hosted by DANDI. Dandihub does not automatically reproduce the environment required for these notebooks, but importantly, Dandihub allows for persistent storage of your files, so you can leave your work and come back to it later. It can be used by hovering over the `Launch` button in the top-right of a notebook and selecting `JupyterHub`. In order to run notebooks on Dandihub, you must sign in with your Github account. To set up the correct environment on Dandihub, open a `terminal` tab, navigate to the directory `openscope_databook` and run the command
```
-pip install -r ./requirements.txt --user
+pip install -e .
```

### Locally
You can download an individual notebook by pressing the `Download` button in the top-right and selecting `.ipynb`. Alternatively, you can clone the repo to your machine and access the files there. The repo can be found by hovering over the `Github` button in the top-right and selecting `repository`. When run locally, the environment can be replicated with our [requirements.txt](https://github.com/AllenInstitute/openscope_databook/blob/main/requirements.txt) file using the command
```
-pip install -r ./requirements.txt --user
+pip install -e .
```
It is recommended to do this within a conda environment using Python 3.8 to minimize any interference with your local machine's environment.
From there, you can execute the notebook in Jupyter by running the following command within the repo directory:
2 changes: 1 addition & 1 deletion _sources/methods/github.md
@@ -1,4 +1,4 @@
-### Git/Github:
+### Git/Github

The main code repository and development location of the Databook is on [Github](https://github.com/), and the project is version-controlled with [Git](https://git-scm.com/). The Databook is developed with version-control practices in mind. Branches are used for developing separate features and commits are pushed from local machines to the remote repo. A `dev` branch is used as the basis for all continuous integration and merging separate feature branches. During deployments, if the dev branch passes the build test, it may be merged into the `main` branch, intended to host the latest working release of the Databook. Finally, from the main branch, the Databook is deployed to the public website and a release is made manually using the Github release feature.
