Training with measurements done with multiple masks. #117

Merged: 26 commits merged into main from train_multi_mask on Apr 8, 2024

Commits (26)
b62e1ff
Make sensor downsampling consistent with rest of package.
ebezzam Feb 28, 2024
223f6a2
Add support for multimask training.
ebezzam Feb 28, 2024
ceb5d20
Add notebook to visualize alignment.
ebezzam Feb 28, 2024
fc5fbf6
Fix lensed resizing.
ebezzam Feb 29, 2024
fc38d30
Add multi-GPU support to unrolled ADMM.
ebezzam Feb 29, 2024
853a0ad
Same dataset object for multi and single mask.
ebezzam Mar 8, 2024
0333c8b
Cleaner image alignment.
ebezzam Mar 8, 2024
db87966
Make sensor downsampling consistent with rest.
ebezzam Mar 8, 2024
5147e06
Fix admm for multi-gpu support.
ebezzam Mar 8, 2024
8904a39
Reset after setting PSF.
ebezzam Mar 8, 2024
ffcd383
Fix trainable recon for multi-GPU.
ebezzam Mar 8, 2024
5c30bf7
Add support for additional benchmark sets and fix multi-gpu in traini…
ebezzam Mar 8, 2024
94164dd
Add support to hugging face datasets for benchmarking.
ebezzam Mar 8, 2024
b54cf6f
Update hugging face upload script to split single mask dataset like m…
ebezzam Mar 8, 2024
69c3a30
Update configurations for multimask experiments.
ebezzam Mar 8, 2024
40772f0
Merge branch 'main' into train_multi_mask
ebezzam Mar 12, 2024
a8e9668
Support hugging face dataset with PSF.
ebezzam Mar 18, 2024
26f5310
Training and authentication for ICCP.
ebezzam Mar 28, 2024
03bec46
Add support for programmable mask to Telegram bot.
ebezzam Apr 4, 2024
5e31c02
Merge main.
ebezzam Apr 5, 2024
10afc42
Clean up.
ebezzam Apr 5, 2024
27a0c90
Remove alignment notebook.
ebezzam Apr 5, 2024
6c8de08
Add function to plot train-test curves.
ebezzam Apr 5, 2024
fab248f
Remove commented line.
ebezzam Apr 8, 2024
35b3c11
Add links to Google colab notebooks.
ebezzam Apr 8, 2024
57eeb00
Update CHANGELOG.
ebezzam Apr 8, 2024
1 change: 1 addition & 0 deletions .gitignore
@@ -8,6 +8,7 @@ DiffuserCam_Mirflickr_200_3011302021_11h43_seed11*
paper/paper.pdf
data/*
models/*
notebooks/models/*
*.png
*.jpg
*.npy
5 changes: 5 additions & 0 deletions CHANGELOG.rst
@@ -19,6 +19,11 @@ Added
- ``lensless.hardware.trainable_mask.TrainableCodedAperture`` class for training a coded aperture mask pattern.
- Support for other optimizers in ``lensless.utils.Trainer.set_optimizer``.
- ``lensless.utils.dataset.simulate_dataset`` for simulating a dataset given a mask/PSF.
- Support for training/testing with multiple mask patterns in the dataset.
- Multi-GPU support for training.
- DigiCam dataset which interfaces with Hugging Face.
- Scripts for authentication.
- DigiCam support for Telegram demo.

Changed
~~~~~
17 changes: 17 additions & 0 deletions configs/authen.yaml
@@ -0,0 +1,17 @@
# python scripts/data/authenticate.py
hydra:
job:
chdir: True # change to output folder

# repo_id: "bezzam/DigiCam-Mirflickr-MultiMask-25K"
# repo_id: "bezzam/DigiCam-Mirflickr-MultiMask-1K"
repo_id: "bezzam/DigiCam-Mirflickr-MultiMask-10K"
split: all # "all" (for 100 masks), "test" (for 15 masks)
n_iter: 25
n_files: 100 # per mask
grayscale: True
font_scale: 1.5
torch_device: cuda:0

cont: null # continue already started file
scores_fp: null # file path to already computed scores
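The `split`/`n_files` entries above imply a flat file layout of `n_files` measurements per mask (100 masks for split `all`, 15 for `test`). A minimal stdlib sketch of that bookkeeping — the helper names are hypothetical, not LenslessPiCam API:

```python
# Map a flat file index to (mask_id, file_within_mask), mirroring
# configs/authen.yaml: `n_files: 100  # per mask`.
def mask_for_index(idx, n_files_per_mask=100):
    return idx // n_files_per_mask, idx % n_files_per_mask

# Mask counts per split, as stated in the config comment:
# "all" (for 100 masks), "test" (for 15 masks).
N_MASKS = {"all": 100, "test": 15}

def total_files(split, n_files_per_mask=100):
    # Total number of measurement files covered by a split.
    return N_MASKS[split] * n_files_per_mask

print(mask_for_index(250))  # file 250 belongs to mask 2, position 50
print(total_files("test"))  # 15 masks x 100 files = 1500
```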
11 changes: 10 additions & 1 deletion configs/benchmark.yaml
@@ -7,9 +7,18 @@ hydra:
chdir: True


dataset: DiffuserCam # DiffuserCam, DigiCamCelebA
dataset: DiffuserCam # DiffuserCam, DigiCamCelebA, DigiCamHF
seed: 0

huggingface:
repo: "bezzam/DigiCam-Mirflickr-MultiMask-25K"
image_res: [900, 1200] # used during measurement
rotate: True # if measurement is upside-down
alignment:
topright: [80, 100] # height, width
height: 200
downsample: 1

device: "cuda"
# numbers of iterations to benchmark
n_iter_range: [5, 10, 20, 50, 100, 200, 300]
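The new `alignment` block (`topright`, `height`, `downsample`) describes where the region of interest sits in the raw measurement. A stdlib sketch of one plausible reading — `topright: [80, 100]` taken as row/column offsets of the region's top corner, which is an assumption, not confirmed package behavior:

```python
def crop_aligned(img, top, left, height, downsample=1):
    """Crop a 2D measurement (list of rows) to the aligned region.

    `top`, `left`, `height` mirror the benchmark.yaml alignment entries
    (`topright: [80, 100]  # height, width` and `height: 200`), scaled
    by `downsample`. Hypothetical helper, not LenslessPiCam API.
    """
    top //= downsample
    left //= downsample
    height //= downsample
    return [row[left:] for row in img[top:top + height]]

# Toy 300x300 measurement: the crop keeps 200 rows starting at row 80.
img = [[0] * 300 for _ in range(300)]
region = crop_aligned(img, top=80, left=100, height=200)
```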
18 changes: 13 additions & 5 deletions configs/demo.yaml
@@ -18,7 +18,7 @@ display:
screen_res: [1920, 1200] # width, height
pad: 0
hshift: 0
vshift: 0
vshift: -10
brightness: 100
rot90: 3

@@ -32,7 +32,7 @@ display:
capture:
sensor: rpi_hq
gamma: null # for visualization
exp: 0.02
exp: 1
delay: 2
script: ~/LenslessPiCam/scripts/measure/on_device_capture.py
iso: 100
@@ -56,14 +56,22 @@ capture:
# remote script returns RGB data
rgb: True
down: 4
awb_gains: [1.9, 1.2]
awb_gains: [1.6, 1.2]


camera:
# these gains are not applied if rgb=True
red_gain: 1.9
blue_gain: 1.2
psf: data/psf/tape_rgb_31032023.png
# -- path to PSF,
# psf: data/psf/tape_rgb_31032023.png
# -- DigiCam configuration
psf:
seed: 0
device: adafruit
mask_shape: [54, 26]
mask_center: [57, 77]
flipud: True
background: null


@@ -84,7 +92,7 @@ recon:

# -- admm
admm:
n_iter: 10
n_iter: 100
disp_iter: null
mu1: 1e-6
mu2: 1e-5
21 changes: 21 additions & 0 deletions configs/recon_digicam_mirflickr.yaml
@@ -0,0 +1,21 @@
# python scripts/recon/digicam_mirflickr.py
defaults:
- defaults_recon
- _self_

# - Learned reconstructions: see "lensless/recon/model_dict.py"
# model: U10
# model: Unet8M
# model: TrainInv+Unet8M
# model: U10+Unet8M
# model: Unet4M+TrainInv+Unet4M
# model: Unet4M+U10+Unet4M

# -- for ADMM with fixed parameters
model: admm
n_iter: 10

device: cuda:0
n_trials: 100 # more if you want to get average inference time
idx: 1 # index from test set to reconstruct
save: True
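The `n_trials: 100` entry is used to average inference time over repeated reconstructions. A minimal stdlib sketch of that timing loop — `recon_fn` stands in for any reconstruction callable and is not a LenslessPiCam function:

```python
import time

def average_inference_time(recon_fn, data, n_trials=100):
    """Run `recon_fn(data)` `n_trials` times and report the mean
    wall-clock time per run, as `n_trials: 100` in
    configs/recon_digicam_mirflickr.yaml suggests
    ("more if you want to get average inference time")."""
    start = time.perf_counter()
    for _ in range(n_trials):
        result = recon_fn(data)
    elapsed = time.perf_counter() - start
    return result, elapsed / n_trials
```

Averaging over many trials smooths out warm-up and scheduling noise that a single timed run would include.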
20 changes: 16 additions & 4 deletions configs/telegram_demo.yaml
@@ -11,18 +11,30 @@ rpi_lensed_hostname: null

# can pre-load PSF so it doesn't have to be loaded and resize at each reconstruction
# psf: null
# -- digicam (simulated)
psf:
fp: data/psf/tape_rgb_31032023.png
# fp: data/psf/tape_rgb.png # wrong
sensor: rpi_hq
device: adafruit
mask_shape: [54, 26]
mask_center: [57, 77]
flipud: True
downsample: 4
# -- measured PSF
# psf:
# # https://drive.switch.ch/index.php/s/NdgHlcDeHVDH5ww?path=%2Fpsf
# fp: data/psf/tape_rgb_31032023.png
# # fp: data/psf/tape_rgb.png # wrong
# downsample: 4

# which hydra config to use and available algos
config_name: demo
default_algo: unrolled # note that this requires GPU
supported_algos: ["fista", "admm", "unrolled"]
default_algo: admm # note that unrolled requires GPU
# supported_algos: ["fista", "admm", "unrolled"]
supported_algos: ["fista", "admm"]


# overlaying logos on the reconstruction
# images: https://drive.switch.ch/index.php/s/NdgHlcDeHVDH5ww?path=%2Foriginal
overlay:
alpha: 60

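The diff narrows `supported_algos` to `["fista", "admm"]` and switches the default to `admm` because unrolled reconstruction needs a GPU. A small sketch of the fallback logic a bot handler might use — the function name is illustrative, not the Telegram bot's actual code:

```python
# Mirrors the telegram_demo.yaml change: unrolled dropped (needs GPU),
# admm promoted to default.
SUPPORTED_ALGOS = ["fista", "admm"]
DEFAULT_ALGO = "admm"

def pick_algo(requested=None):
    """Return the requested algorithm if supported, else the default."""
    if requested in SUPPORTED_ALGOS:
        return requested
    return DEFAULT_ALGO
```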
71 changes: 0 additions & 71 deletions configs/train_celeba_digicam.yaml

This file was deleted.

64 changes: 64 additions & 0 deletions configs/train_digicam_celeba.yaml
@@ -0,0 +1,64 @@
# python scripts/recon/train_unrolled.py -cn train_digicam_celeba
defaults:
- train_unrolledADMM
- _self_

torch_device: 'cuda:0'
device_ids: [0, 1, 2, 3]
eval_disp_idx: [0, 2, 3, 4, 9]

# Dataset
files:
dataset: bezzam/DigiCam-CelebA-26K
huggingface_psf: "psf_simulated.png"
huggingface_dataset: True
split_seed: 0
downsample: 2
rotate: True # if measurement is upside-down
save_psf: False

alignment:
# cropping when there is no downsampling
crop:
vertical: [0, 525]
horizontal: [265, 695]

# for prepping ground truth data
simulation:
scene2mask: 0.25 # [m]
mask2sensor: 0.002 # [m]
object_height: 0.33 # [m]
sensor: "rpi_hq"
snr_db: null
downsample: null
random_vflip: False
random_hflip: False
quantize: False
# shifting when there is no files.downsample
vertical_shift: -117
horizontal_shift: -25

training:
batch_size: 4
epoch: 25
eval_batch_size: 4
crop_preloss: True

reconstruction:
method: unrolled_admm
unrolled_admm:
# Number of iterations
n_iter: 10
# Hyperparameters
mu1: 1e-4
mu2: 1e-4
mu3: 1e-4
tau: 2e-4
pre_process:
network : UnetRes # UnetRes or DruNet or null
depth : 4 # depth of each up/downsampling layer. Ignore if network is DruNet
nc: [32,64,116,128]
post_process:
network : UnetRes # UnetRes or DruNet or null
depth : 4 # depth of each up/downsampling layer. Ignore if network is DruNet
nc: [32,64,116,128]
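Since the CelebA dataset has no predefined train/test split, `split_seed: 0` fixes a deterministic split. A stdlib sketch of how such a seeded split could work — the 15% test fraction is an illustrative assumption, not taken from the config:

```python
import random

def split_indices(n_files, test_size=0.15, seed=0):
    """Deterministic train/test split of file indices.

    `seed` plays the role of `split_seed: 0` in
    configs/train_digicam_celeba.yaml; the same seed always
    yields the same partition. Hypothetical helper, not
    LenslessPiCam API.
    """
    idx = list(range(n_files))
    random.Random(seed).shuffle(idx)  # local RNG; global state untouched
    n_test = int(n_files * test_size)
    return idx[n_test:], idx[:n_test]
```

Seeding the split rather than the global RNG keeps the partition stable across runs, so benchmark numbers from different training jobs refer to the same held-out files.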
58 changes: 58 additions & 0 deletions configs/train_digicam_multimask.yaml
@@ -0,0 +1,58 @@
# python scripts/recon/train_unrolled.py -cn train_digicam_multimask
defaults:
- train_unrolledADMM
- _self_


torch_device: 'cuda:0'
device_ids: [0, 1, 2, 3]
eval_disp_idx: [1, 2, 4, 5, 9]

# Dataset
files:
dataset: bezzam/DigiCam-Mirflickr-MultiMask-25K
huggingface_dataset: True
downsample: 1
# TODO: these parameters should be in the dataset?
image_res: [900, 1200] # used during measurement
rotate: True # if measurement is upside-down
save_psf: False

extra_eval:
singlemask:
huggingface_repo: bezzam/DigiCam-Mirflickr-SingleMask-25K
display_res: [900, 1200] # used during measurement
rotate: True # if measurement is upside-down
alignment:
topright: [80, 100] # height, width
height: 200

# TODO: these parameters should be in the dataset?
alignment:
# when there is no downsampling
topright: [80, 100] # height, width
height: 200

training:
batch_size: 4
epoch: 25
eval_batch_size: 4

reconstruction:
method: unrolled_admm
unrolled_admm:
# Number of iterations
n_iter: 10
# Hyperparameters
mu1: 1e-4
mu2: 1e-4
mu3: 1e-4
tau: 2e-4
pre_process:
network : UnetRes # UnetRes or DruNet or null
depth : 4 # depth of each up/downsampling layer. Ignore if network is DruNet
nc: [32,64,116,128]
post_process:
network : UnetRes # UnetRes or DruNet or null
depth : 4 # depth of each up/downsampling layer. Ignore if network is DruNet
nc: [32,64,116,128]
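With multiple masks in one dataset, each measurement must be deconvolved with the PSF of the mask that produced it. A minimal stdlib sketch of the per-sample PSF lookup a multimask batch could perform — all names here are illustrative, not the package's dataset API:

```python
def collate_multimask(batch, psfs):
    """Group (measurement, target, mask_label) samples into a batch
    and look up each sample's PSF, so unrolled ADMM can use the
    correct forward model per measurement. Hypothetical helper,
    not LenslessPiCam API.

    batch : list of (measurement, target, mask_label) tuples
    psfs  : dict mapping mask_label -> PSF
    """
    measurements, targets, labels = zip(*batch)
    batch_psfs = [psfs[label] for label in labels]
    return list(measurements), list(targets), batch_psfs
```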