set up repo for better management of OSF downloads
DSilva27 committed Jul 16, 2024
1 parent 75c9eb6 commit f7578ea
Showing 7 changed files with 54 additions and 18 deletions.
12 changes: 12 additions & 0 deletions .gitignore
@@ -1,3 +1,15 @@
# downloaded data
data/dataset_2_submissions
data/dataset_1_submissions
data/dataset_2_ground_truth

# data for testing and resulting outputs
tests/data/Ground_truth
tests/data/dataset_2_submissions/
tests/data/unprocessed_dataset_2_submissions/submission_x/
tests/results/


# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
12 changes: 6 additions & 6 deletions README.md
@@ -1,7 +1,7 @@
<h1 align='center'>Cryo-EM Heterogeneity Challenge</h1>

<p align="center">

<img alt="Supported Python versions" src="https://img.shields.io/badge/Supported_Python_Versions-3.8_%7C_3.9_%7C_3.10_%7C_3.11-blue">
<img alt="GitHub Downloads (all assets, all releases)" src="https://img.shields.io/github/downloads/flatironinstitute/Cryo-EM-Heterogeneity-Challenge-1/total">
<img alt="GitHub branch check runs" src="https://img.shields.io/github/check-runs/flatironinstitute/Cryo-EM-Heterogeneity-Challenge-1/main">
@@ -10,13 +10,13 @@
</p>

<p align="center">

<img alt="Cryo-EM Heterogeneity Challenge" src="https://simonsfoundation.imgix.net/wp-content/uploads/2023/05/15134456/Screenshot-2023-05-15-at-1.39.07-PM.png?auto=format&q=90">

</p>



This repository contains the code used to analyse the submissions for the [Inaugural Flatiron Cryo-EM Heterogeneity Challenge](https://www.simonsfoundation.org/flatiron/center-for-computational-biology/structural-and-molecular-biophysics-collaboration/heterogeneity-in-cryo-electron-microscopy/).

# Scope
@@ -26,13 +26,13 @@ This repository explains how to preprocess a submission (80 maps and correspondi
This is a work in progress. While the code will probably not change, we are still writing better tutorials, documentation, and other ideas for analyzing the data. We are also making it easier for other people to contribute their own metrics and methods, and we are in the process of publishing the code on PyPI.

# Accessing the data
The data is available via the Open Science Framework project [The Inaugural Flatiron Institute Cryo-EM Heterogeneity Community Challenge](https://osf.io/8h6fz/). You can download it via a web browser, or programmatically with wget as per [this script](https://github.com/flatironinstitute/Cryo-EM-Heterogeneity-Challenge-1/blob/main/tests/scripts/fetch_test_data.sh).
The data is available via the Open Science Framework project [The Inaugural Flatiron Institute Cryo-EM Heterogeneity Community Challenge](https://osf.io/8h6fz/). You can download it via a web browser, or programmatically with wget as per [this script](https://github.com/flatironinstitute/Cryo-EM-Heterogeneity-Challenge-1/blob/main/data/fetch_data.sh).

**_NOTE_**: We recommend downloading the data with the script and wget, as downloads from the web browser might be unstable.
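For example, a single file can be pulled directly with wget using the same OSF URL pattern the fetch script relies on (the destination path below is illustrative):

```bash
# Fetch one dataset 1 submission from the OSF project (output path is illustrative)
mkdir -p data/dataset_1_submissions
wget "https://files.osf.io/v1/resources/8h6fz/providers/dropbox/dataset_1_submissions/submission_0.pt?download=true" \
    -O data/dataset_1_submissions/submission_0.pt
```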

# Installation

## Stable installation
## Stable installation
Installing this repository is simple. We recommend creating a virtual environment (using conda or pyenv), since we have dependencies such as PyTorch or Aspire, which are better handled in an isolated environment. After creating your environment, make sure to activate it and run

```bash
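# Illustrative sketch only: the environment name, Python version, and install
# command below are assumptions, not taken from this diff.
conda create -n cryo-challenge python=3.10
conda activate cryo-challenge
pip install .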
```

@@ -63,7 +63,7 @@ pytest tests/test_distribution_to_distribution.py
If you want to run our code on the full challenge data, or your own local data, please complete the following steps

### 1. Download the full challenge data from [The Inaugural Flatiron Institute Cryo-EM Heterogeneity Community Challenge](https://osf.io/8h6fz/)
You can do this through the web browser, or programmatically with wget (you can get inspiration from [this script](https://github.com/flatironinstitute/Cryo-EM-Heterogeneity-Challenge-1/blob/main/tests/scripts/fetch_test_data.sh), which is just for the test data, not the full datasets)
You can do this through the web browser, or programmatically with wget (you can use [this script](https://github.com/flatironinstitute/Cryo-EM-Heterogeneity-Challenge-1/blob/main/data/fetch_data.sh); this will download around 220 GB of data)

### 2. Modify the config files and run the commands on the full challenge data
Point them to the paths where the data is stored locally.
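For example, the map-to-map test config shown later in this diff has path fields under `data`; a sketch of repointing them at the full data, assuming it was downloaded with data/fetch_data.sh (the file names are the ones that script fetches):

```yaml
# Sketch only: keys mirror tests/config_files/test_config_map_to_map.yaml;
# paths assume data/fetch_data.sh was used to download the full data.
data:
  submission:
    fname: data/dataset_2_submissions/submission_0.pt
  ground_truth:
    volumes: data/dataset_2_ground_truth/maps_gt_flat.pt
    metadata: data/dataset_2_ground_truth/metadata.csv
  mask:
    volume: data/dataset_2_ground_truth/mask_dilated_wide_224x224.mrc
```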
21 changes: 21 additions & 0 deletions data/fetch_data.sh
@@ -0,0 +1,21 @@
mkdir -p data/dataset_2_submissions data/dataset_1_submissions data/dataset_2_ground_truth

# dataset 1 submissions
for i in {0..10}
do
wget https://files.osf.io/v1/resources/8h6fz/providers/dropbox/dataset_1_submissions/submission_${i}.pt?download=true -O data/dataset_1_submissions/submission_${i}.pt
done

# dataset 2 submissions
for i in {0..11}
do
wget https://files.osf.io/v1/resources/8h6fz/providers/dropbox/dataset_2_submissions/submission_${i}.pt?download=true -O data/dataset_2_submissions/submission_${i}.pt
done

# ground truth

wget https://files.osf.io/v1/resources/8h6fz/providers/dropbox/Ground_truth/maps_gt_flat.pt?download=true -O data/dataset_2_ground_truth/maps_gt_flat.pt

wget https://files.osf.io/v1/resources/8h6fz/providers/dropbox/Ground_truth/metadata.csv?download=true -O data/dataset_2_ground_truth/metadata.csv

wget https://files.osf.io/v1/resources/8h6fz/providers/dropbox/Ground_truth/mask_dilated_wide_224x224.mrc?download=true -O data/dataset_2_ground_truth/mask_dilated_wide_224x224.mrc
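Since the output paths in the script are relative (data/...), it would typically be run from the repository root, e.g.:

```bash
# Run from the repository root; downloads roughly 220 GB into data/
bash data/fetch_data.sh
```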
5 changes: 4 additions & 1 deletion src/cryo_challenge/_preprocessing/preprocessing_pipeline.py
@@ -124,7 +124,10 @@ def preprocess_submissions(submission_dataset, config):
print(f" submission saved as submission_{idx}.pt")
print(f"Preprocessing submission {idx} complete")

with open("hash_table.json", "w") as f:
hash_table_path = os.path.join(
config["output_path"], "submission_to_icecream_table.json"
)
with open(hash_table_path, "w") as f:
json.dump(hash_table, f, indent=4)

return
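With this change the hash table is written under the configured output directory instead of the current working directory. A minimal sketch of the resulting path, with an assumed `output_path` value:

```python
import os

# Assumed example config; only "output_path" matters for this snippet.
config = {"output_path": "results/preprocessed"}

hash_table_path = os.path.join(
    config["output_path"], "submission_to_icecream_table.json"
)
print(hash_table_path)  # results/preprocessed/submission_to_icecream_table.json
```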
@@ -9,4 +9,4 @@ cvxpy_solver: ECOS
optimal_q_kl:
n_iter: 100000
break_atol: 0.0001
output_fname: results/test_distribution_to_distribution_submission_0.pkl
output_fname: tests/results/test_distribution_to_distribution_submission_0.pkl
12 changes: 6 additions & 6 deletions tests/config_files/test_config_map_to_map.yaml
@@ -1,17 +1,17 @@
data:
n_pix: 224
psize: 2.146
psize: 2.146
submission:
fname: tests/data/dataset_2_submissions/test_submission_0_n8.pt
volume_key: volumes
metadata_key: populations
label_key: id
ground_truth:
volumes: tests/data/Ground_truth/test_maps_gt_flat_10.pt
metadata: tests/data/Ground_truth/test_metadata_10.csv
mask:
volumes: tests/data/Ground_truth/test_maps_gt_flat_10.pt
metadata: tests/data/Ground_truth/test_metadata_10.csv
mask:
do: true
volume: data/Ground_truth/mask_dilated_wide_224x224.mrc
volume: tests/data/Ground_truth/mask_dilated_wide_224x224.mrc
analysis:
metrics:
- l2
@@ -20,4 +20,4 @@ analysis:
normalize:
do: true
method: median_zscore
output: tests/results/test_map_to_map_distance_matrix_submission_0.pkl
output: tests/results/test_map_to_map_distance_matrix_submission_0.pkl
8 changes: 4 additions & 4 deletions tests/scripts/fetch_test_data.sh
@@ -1,11 +1,11 @@
mkdir -p tests/data/dataset_2_submissions data/dataset_2_submissions tests/results tests/data/unprocessed_dataset_2_submissions/submission_x tests/data/Ground_truth/ data/Ground_truth
mkdir -p tests/data/dataset_2_submissions tests/data/dataset_2_submissions tests/results tests/data/unprocessed_dataset_2_submissions/submission_x tests/data/Ground_truth/ tests/data/Ground_truth
wget https://files.osf.io/v1/resources/8h6fz/providers/dropbox/tests/dataset_2_submissions/test_submission_0_n8.pt?download=true -O tests/data/dataset_2_submissions/test_submission_0_n8.pt
ADIR=$(pwd)
ln -s $ADIR/tests/data/dataset_2_submissions/test_submission_0_n8.pt $ADIR/tests/data/dataset_2_submissions/submission_0.pt # symlink for svd which needs submission_0.pt for filename
wget https://files.osf.io/v1/resources/8h6fz/providers/dropbox/tests/Ground_truth/test_maps_gt_flat_10.pt?download=true -O tests/data/Ground_truth/test_maps_gt_flat_10.pt
wget https://files.osf.io/v1/resources/8h6fz/providers/dropbox/tests/Ground_truth/test_metadata_10.csv?download=true -O tests/data/Ground_truth/test_metadata_10.csv
wget https://files.osf.io/v1/resources/8h6fz/providers/dropbox/tests/Ground_truth/1.mrc?download=true -O tests/data/Ground_truth/1.mrc
wget https://files.osf.io/v1/resources/8h6fz/providers/dropbox/Ground_truth/mask_dilated_wide_224x224.mrc?download=true -O data/Ground_truth/mask_dilated_wide_224x224.mrc
wget https://files.osf.io/v1/resources/8h6fz/providers/dropbox/tests/Ground_truth/test_metadata_10.csv?download=true -O tests/data/Ground_truth/test_metadata_10.csv
wget https://files.osf.io/v1/resources/8h6fz/providers/dropbox/tests/Ground_truth/1.mrc?download=true -O tests/data/Ground_truth/1.mrc
wget https://files.osf.io/v1/resources/8h6fz/providers/dropbox/Ground_truth/mask_dilated_wide_224x224.mrc?download=true -O tests/data/Ground_truth/mask_dilated_wide_224x224.mrc
for FILE in 1.mrc 2.mrc 3.mrc 4.mrc populations.txt
do
wget https://files.osf.io/v1/resources/8h6fz/providers/dropbox/tests/unprocessed_dataset_2_submissions/submission_x/${FILE}?download=true -O tests/data/unprocessed_dataset_2_submissions/submission_x/${FILE}
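Like the data script, this one uses paths relative to the repository root, so it would typically be invoked from there before running the tests, e.g.:

```bash
# Fetch the small test fixtures, then run a test from the suite
bash tests/scripts/fetch_test_data.sh
pytest tests/test_distribution_to_distribution.py
```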
