The field of connectomics aims to reconstruct the wiring diagram of the brain by mapping neural connections at the level of individual synapses. Recent advances in electron microscopy (EM) have enabled the collection of large numbers of image stacks at nanometer resolution, but annotation requires expert knowledge and is highly time-consuming. Here we provide a deep learning framework powered by PyTorch for automatic and semi-automatic semantic and instance segmentation in connectomics, which we call PyTorch Connectomics (PyTC). This repository is mainly maintained by the Visual Computing Group (VCG) at Harvard University.
PyTorch Connectomics is currently under active development!
- Multi-task, active and semi-supervised learning
- Distributed and mixed-precision optimization (see the sketch after this list)
- Scalability for handling large datasets
- Comprehensive augmentations for volumetric data
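As a rough illustration of the mixed-precision optimization listed above, the sketch below shows a generic PyTorch AMP training loop. It is not the PyTC trainer itself; `model`, `loader`, and `criterion` are placeholders.

```python
# Illustrative only: a generic PyTorch automatic mixed-precision (AMP) loop,
# not the PyTC trainer. `model`, `loader`, and `criterion` are placeholders.
import torch

def train_amp(model, loader, criterion, device="cuda"):
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    scaler = torch.cuda.amp.GradScaler()          # scales the loss to avoid fp16 underflow
    model.to(device).train()
    for volume, target in loader:                 # (B, C, D, H, W) volumetric batches
        volume, target = volume.to(device), target.to(device)
        optimizer.zero_grad(set_to_none=True)
        with torch.cuda.amp.autocast():           # forward pass and loss in mixed precision
            loss = criterion(model(volume), target)
        scaler.scale(loss).backward()             # backward pass on the scaled loss
        scaler.step(optimizer)
        scaler.update()
```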
Refer to the PyTorch Connectomics wiki, specifically the installation page, for the most up-to-date instructions on installation on a local machine or a high-performance cluster.
Besides the installation guidance above, we also publish a PyTC Docker image to the public Docker registry (03/12/2022) to improve usability. Additionally, we provide the corresponding Dockerfile to enable individual modifications. Please refer to our PyTC Docker guidance for more information.
We provide several encoder-decoder architectures, which are customized 3D UNet and Feature Pyramid Network (FPN) models with various blocks and backbones. These models can be applied to both semantic segmentation and bottom-up instance segmentation of 3D image stacks, and they can be constructed specifically for isotropic or anisotropic datasets. Please check the documentation for more details.
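To illustrate the isotropic/anisotropic distinction (this is a generic PyTorch sketch, not the PyTC model API), a 3D convolution block might use a flat `(1, 3, 3)` kernel for anisotropic EM stacks and a cubic `(3, 3, 3)` kernel for isotropic data; `conv3d_block` is a hypothetical helper.

```python
# Illustrative only -- not the PyTC model API. A generic PyTorch sketch of how
# isotropic vs. anisotropic volumes can be handled with different 3D kernel shapes.
import torch
import torch.nn as nn

def conv3d_block(in_ch, out_ch, anisotropic=False):
    # Anisotropic EM stacks have much coarser z-resolution, so a (1, 3, 3)
    # kernel avoids mixing information across distant sections.
    kernel = (1, 3, 3) if anisotropic else (3, 3, 3)
    padding = tuple(k // 2 for k in kernel)
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel, padding=padding, bias=False),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
    )

block = conv3d_block(1, 32, anisotropic=True)
x = torch.randn(1, 1, 8, 128, 128)   # (batch, channel, z, y, x)
print(block(x).shape)                # torch.Size([1, 32, 8, 128, 128])
```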
We provide a data augmentation interface for several common augmentation methods for EM images. The interface operates on NumPy arrays, so it can be easily incorporated into other Python-based deep learning frameworks (e.g., TensorFlow). For more details about the design of the data augmentation module, please check the documentation, specifically the utils documentation.
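To illustrate the NumPy-based design (a minimal sketch, not the exact PyTC augmentation interface), a volumetric flip augmentation could look like the following; `random_flip` is a hypothetical helper.

```python
# Illustrative only -- not the exact PyTC augmentation interface. A minimal
# NumPy-based volumetric augmentation in the same spirit: random flips over
# the y/x axes of a (z, y, x) EM volume and its label volume.
import numpy as np

def random_flip(volume: np.ndarray, label: np.ndarray, p: float = 0.5):
    """Apply the same random y/x flips to an image volume and its labels."""
    for axis in (1, 2):                      # flip y and x; skip z for anisotropic data
        if np.random.rand() < p:
            volume = np.flip(volume, axis=axis).copy()
            label = np.flip(label, axis=axis).copy()
    return volume, label

vol = np.random.rand(8, 128, 128).astype(np.float32)
seg = np.zeros_like(vol, dtype=np.int64)
aug_vol, aug_seg = random_flip(vol, seg)
```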
We use the Yet Another Configuration System (YACS) library to manage settings and hyperparameters in model training and inference. The configuration files for the tutorial examples can be found here, and all available configuration options are listed in connectomics/config/defaults.py. Please note that the default value of several options is None, which is only supported after YACS v0.1.8.
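For orientation, here is a minimal YACS sketch in the spirit of defaults.py; the option names and YAML path below are hypothetical, and the real defaults live in connectomics/config/defaults.py.

```python
# Minimal YACS usage sketch; option names and the YAML path are hypothetical.
from yacs.config import CfgNode as CN

_C = CN()
_C.MODEL = CN()
_C.MODEL.ARCHITECTURE = "unet_3d"
_C.MODEL.PRETRAINED = None        # None defaults require YACS >= 0.1.8

def get_cfg_defaults():
    # Return a fresh copy so experiments cannot mutate the shared defaults.
    return _C.clone()

cfg = get_cfg_defaults()
cfg.merge_from_file("tutorials/my_experiment.yaml")   # hypothetical experiment config
cfg.freeze()
print(cfg.MODEL.ARCHITECTURE)
```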
This project is built upon numerous previous projects. In particular, we would like to thank the contributors of the following GitHub repositories:
- pyGreenTea: HHMI Janelia FlyEM Team
- DataProvider: Princeton SeungLab
- Detectron2: Facebook AI Research
We gratefully acknowledge the support from NSF awards IIS-1835231 and IIS-2124179.
This project is licensed under the MIT License and the copyright belongs to all PyTorch Connectomics contributors - see the LICENSE file for details.
For a detailed description of our framework, please read this technical report. If you find PyTorch Connectomics (PyTC) useful in your research, please cite:
@article{lin2021pytorch,
  title={PyTorch Connectomics: A Scalable and Flexible Segmentation Framework for EM Connectomics},
  author={Lin, Zudi and Wei, Donglai and Lichtman, Jeff and Pfister, Hanspeter},
  journal={arXiv preprint arXiv:2112.05754},
  year={2021}
}