Merge branch 'main' into upload_background_dataset
ebezzam committed Aug 8, 2024
2 parents 141e96e + 3af7b14 commit 7118afc
Showing 58 changed files with 3,302 additions and 656 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/python_pycsou.yml
@@ -20,7 +20,7 @@ jobs:
fail-fast: false
max-parallel: 12
matrix:
os: [ubuntu-latest, macos-latest, windows-latest]
os: [ubuntu-latest, macos-12, windows-latest]
python-version: [3.9, "3.10"]
steps:
- uses: actions/checkout@v3
15 changes: 14 additions & 1 deletion CHANGELOG.rst
@@ -15,6 +15,17 @@ Added

- Option to pass background image to ``utils.io.load_data``.
- Option to set image resolution with ``hardware.utils.display`` function.
- Auxiliary reconstruction output from the pre-processor (not working).
- Option to set focal range for MultiLensArray.
- Option to remove deadspace modelling for the programmable mask.
- Compensation branch for unrolled ADMM: https://ieeexplore.ieee.org/abstract/document/9546648
- Multi-Wiener deconvolution network: https://opg.optica.org/oe/fulltext.cfm?uri=oe-31-23-39088&id=541387
- Option to skip pre-processor and post-processor at inference time.
- Option to set different learning rate schedules, e.g. AdamW, exponential decay, cosine decay with warmup.
- Various augmentations for training: random flipping, random rotation, and random shifts. The latter two don't work well since new regions appear that throw off PSF/LSI modeling.
- HFSimulated object for simulating lensless data from ground-truth and PSF.
- Option to set cache directory for Hugging Face datasets.
- Option to initialize training with another model.

Changed
~~~~~~~
@@ -24,7 +35,9 @@ Changed
Bugfix
~~~~~~

- Nothing
- Computation of average metric in batches.
- Support for grayscale PSF for RealFFTConvolve2D.
- Calling model.eval() before inference, and model.train() before training.


1.0.7 - (2024-05-14)
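The bugfix above on computing the average metric in batches comes down to weighting each batch by its size; a minimal, illustrative sketch of the idea (not the toolkit's actual benchmark code):

```python
import torch


def batched_average_metric(metric_fn, batches):
    # Average a per-sample metric over (reconstruction, ground-truth) batches.
    # Weighting by batch size keeps the mean unbiased when the last batch is smaller.
    total, n_samples = 0.0, 0
    for recon, lensed in batches:
        scores = metric_fn(recon, lensed)   # tensor of shape (batch_size,)
        total += scores.sum().item()
        n_samples += recon.shape[0]
    return total / n_samples


# Toy example with per-sample MSE and an uneven last batch.
mse = lambda x, y: ((x - y) ** 2).flatten(1).mean(dim=1)
batches = [
    (torch.rand(4, 3, 8, 8), torch.rand(4, 3, 8, 8)),
    (torch.rand(2, 3, 8, 8), torch.rand(2, 3, 8, 8)),
]
print(batched_average_metric(mse, batches))
```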
2 changes: 1 addition & 1 deletion README.rst
@@ -44,7 +44,7 @@ The toolkit includes:
* Camera assembly tutorials (`link <https://lensless.readthedocs.io/en/latest/building.html>`__).
* Measurement scripts (`link <https://lensless.readthedocs.io/en/latest/measurement.html>`__).
* Dataset preparation and loading tools, with `Hugging Face <https://huggingface.co/bezzam>`__ integration (`slides <https://docs.google.com/presentation/d/18h7jTcp20jeoiF8dJIEcc7wHgjpgFgVxZ_bJ04W55lg/edit?usp=sharing>`__ on uploading a dataset to Hugging Face with `this script <https://github.com/LCAV/LenslessPiCam/blob/main/scripts/data/upload_dataset_huggingface.py>`__).
* `Reconstruction algorithms <https://lensless.readthedocs.io/en/latest/reconstruction.html>`__ (e.g. FISTA, ADMM, unrolled algorithms, trainable inversion, pre- and post-processors).
* `Reconstruction algorithms <https://lensless.readthedocs.io/en/latest/reconstruction.html>`__ (e.g. FISTA, ADMM, unrolled algorithms, trainable inversion, multi-Wiener deconvolution network, pre- and post-processors).
* `Training script <https://github.com/LCAV/LenslessPiCam/blob/main/scripts/recon/train_learning_based.py>`__ for learning-based reconstruction.
* `Pre-trained models <https://github.com/LCAV/LenslessPiCam/blob/main/lensless/recon/model_dict.py>`__ that can be loaded from `Hugging Face <https://huggingface.co/bezzam>`__, for example in `this script <https://github.com/LCAV/LenslessPiCam/blob/main/scripts/recon/diffusercam_mirflickr.py>`__.
* Mask `design <https://lensless.readthedocs.io/en/latest/mask.html>`__ and `fabrication <https://lensless.readthedocs.io/en/latest/fabrication.html>`__ tools.
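The reconstruction algorithms listed in the README excerpt above can be run in a few lines; a minimal sketch with the toolkit's ADMM solver, where the file names are placeholders and the exact array layout and loader arguments may differ:

```python
import numpy as np
from lensless import ADMM

# Placeholder inputs: a PSF and a raw lensless measurement as float arrays
# (LenslessPiCam typically works with a 4D (depth, height, width, color) layout).
psf = np.load("psf.npy")
data = np.load("measurement.npy")

recon = ADMM(psf)
recon.set_data(data)
result = recon.apply(n_iter=100)   # 100 iterations, as in the benchmark configs below
```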
13 changes: 12 additions & 1 deletion configs/benchmark.yaml
@@ -9,15 +9,23 @@ hydra:

dataset: DiffuserCam # DiffuserCam, DigiCamCelebA, HFDataset
seed: 0
batchsize: 1 # must be 1 for iterative approaches

huggingface:
repo: "bezzam/DigiCam-Mirflickr-MultiMask-25K"
cache_dir: null # where to read/write dataset. Defaults to `"~/.cache/huggingface/datasets"`.
psf: null # null for simulating PSF
image_res: [900, 1200] # used during measurement
rotate: True # if measurement is upside-down
flipud: False
flip_lensed: False # if rotate or flipud is True, apply to lensed
alignment:
top_left: [80, 100] # height, width
height: 200
downsample: 1
downsample_lensed: 1
split_seed: null
single_channel_psf: False

device: "cuda"
# numbers of iterations to benchmark
@@ -33,6 +41,8 @@ algorithms: ["ADMM", "ADMM_Monakhova2019", "FISTA"] #["ADMM", "ADMM_Monakhova201
baseline: "MONAKHOVA 100iter"

save_idx: [0, 1, 2, 3, 4] # provide index of files to save e.g. [1, 5, 10]
gamma_psf: 1.5 # gamma factor for PSF


# Hyperparameters
nesterov:
@@ -86,7 +96,8 @@ simulation:
# mask2sensor: 9e-3 # mask2sensor: 4e-3
# -- for CelebA
scene2mask: 0.25 # [m]
mask2sensor: 0.002 # [m]
mask2sensor: 0.002 # [m]
deadspace: True # whether to account for deadspace for programmable mask
# see waveprop.devices
use_waveprop: False # for PSF simulation
sensor: "rpi_hq"
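The dataset-specific configs added below build on this base file through Hydra's defaults list, and the benchmark script is launched as in the comment at the top of each file (e.g. python scripts/eval/benchmark_recon.py -cn benchmark_diffusercam_mirflickr), with any field overridable from the command line. A small sketch of composing and inspecting such a config with Hydra's compose API, assuming it is run from the repository root:

```python
from hydra import compose, initialize

# Compose a benchmark config as the script would, with a few overrides.
with initialize(config_path="configs", version_base=None):
    cfg = compose(
        config_name="benchmark_diffusercam_mirflickr",
        overrides=["batchsize=1", "device=cpu", "n_iter_range=[10]"],
    )

print(cfg.huggingface.repo)   # bezzam/DiffuserCam-Lensless-Mirflickr-Dataset-NORM
print(cfg.algorithms[0])      # ADMM
```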
63 changes: 63 additions & 0 deletions configs/benchmark_diffusercam_mirflickr.yaml
@@ -0,0 +1,63 @@
# python scripts/eval/benchmark_recon.py -cn benchmark_diffusercam_mirflickr
defaults:
- benchmark
- _self_

dataset: HFDataset
batchsize: 4
device: "cuda:0"

huggingface:
repo: "bezzam/DiffuserCam-Lensless-Mirflickr-Dataset-NORM"
psf: psf.tiff
image_res: null
rotate: False # if measurement is upside-down
alignment: null
downsample: 2
downsample_lensed: 2
flipud: True
flip_lensed: True
single_channel_psf: True

algorithms: [
"ADMM",

## -- reconstructions trained on DiffuserCam measured
"hf:diffusercam:mirflickr:U5+Unet8M",
"hf:diffusercam:mirflickr:TrainInv+Unet8M",
"hf:diffusercam:mirflickr:MMCN4M+Unet4M",
"hf:diffusercam:mirflickr:MWDN8M",
"hf:diffusercam:mirflickr:Unet4M+U5+Unet4M",
"hf:diffusercam:mirflickr:Unet4M+TrainInv+Unet4M",
"hf:diffusercam:mirflickr:Unet2M+MMCN+Unet2M",
"hf:diffusercam:mirflickr:Unet2M+MWDN6M",
"hf:diffusercam:mirflickr:Unet4M+U10+Unet4M",
# "hf:diffusercam:mirflickr:Unet4M+U5+Unet4M_ft_tapecam",
# "hf:diffusercam:mirflickr:Unet4M+U5+Unet4M_ft_tapecam_post",
# "hf:diffusercam:mirflickr:Unet4M+U5+Unet4M_ft_tapecam_pre",

# ## -- reconstruction trained on DiffuserCam simulated
# "hf:diffusercam:mirflickr_sim:Unet4M+U5+Unet4M",
# "hf:diffusercam:mirflickr_sim:Unet4M+U5+Unet4M_ft_tapecam",
# "hf:diffusercam:mirflickr_sim:Unet4M+U5+Unet4M_ft_tapecam_post",
# "hf:diffusercam:mirflickr_sim:Unet4M+U5+Unet4M_ft_tapecam_pre",
# "hf:diffusercam:mirflickr_sim:Unet4M+U5+Unet4M_ft_digicam_multi_post",
# "hf:diffusercam:mirflickr_sim:Unet4M+U5+Unet4M_ft_digicam_multi_pre",
# "hf:diffusercam:mirflickr_sim:Unet4M+U5+Unet4M_ft_digicam_multi",

# ## -- reconstructions trained on other datasets/systems
# "hf:tapecam:mirflickr:Unet4M+U10+Unet4M",
# "hf:digicam:mirflickr_single_25k:Unet4M+U5+Unet4M_wave",
# "hf:digicam:celeba_26k:Unet4M+U5+Unet4M_wave",
# "hf:tapecam:mirflickr:Unet4M+U5+Unet4M",
# "hf:digicam:mirflickr_single_25k:Unet4M+U10+Unet4M_wave",
# "hf:tapecam:mirflickr:Unet4M+U5+Unet4M_flips",
# "hf:tapecam:mirflickr:Unet4M+U5+Unet4M_flips_rotate10",
# "hf:tapecam:mirflickr:Unet4M+U5+Unet4M_aux1",
# "hf:digicam:mirflickr_multi_25k:Unet4M+U10+Unet4M_wave",
# "hf:digicam:mirflickr_multi_25k:Unet4M+U5+Unet4M_wave",
]

save_idx: [0, 1, 3, 4, 8, 45, 58, 63]
n_iter_range: [100] # for ADMM
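The algorithms entries prefixed with hf: follow a hf:<camera>:<dataset>:<model> naming scheme for looking up pre-trained models on Hugging Face (see lensless/recon/model_dict.py, referenced in the README section above). A hypothetical helper to illustrate the key format only; the toolkit's actual lookup may differ:

```python
def parse_model_key(key: str) -> dict:
    # Illustrative only: split e.g. "hf:diffusercam:mirflickr:Unet4M+U5+Unet4M"
    # into its camera / dataset / model parts.
    prefix, camera, dataset, model = key.split(":")
    assert prefix == "hf", "only Hugging Face model keys are handled here"
    return {"camera": camera, "dataset": dataset, "model": model}


print(parse_model_key("hf:diffusercam:mirflickr:Unet4M+U5+Unet4M"))
# {'camera': 'diffusercam', 'dataset': 'mirflickr', 'model': 'Unet4M+U5+Unet4M'}
```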

62 changes: 62 additions & 0 deletions configs/benchmark_digicam_celeba.yaml
@@ -0,0 +1,62 @@
# python scripts/eval/benchmark_recon.py -cn benchmark_digicam_celeba
defaults:
- benchmark
- _self_


dataset: HFDataset
batchsize: 10
device: "cuda:0"

algorithms: [
"ADMM",

## -- reconstructions trained on measured data
"hf:digicam:celeba_26k:U5+Unet8M_wave",
"hf:digicam:celeba_26k:TrainInv+Unet8M_wave",
"hf:digicam:celeba_26k:MWDN8M_wave",
"hf:digicam:celeba_26k:MMCN4M+Unet4M_wave",
"hf:digicam:celeba_26k:Unet2M+MWDN6M_wave",
"hf:digicam:celeba_26k:Unet4M+TrainInv+Unet4M_wave",
"hf:digicam:celeba_26k:Unet2M+MMCN+Unet2M_wave",
"hf:digicam:celeba_26k:Unet4M+U5+Unet4M_wave",
"hf:digicam:celeba_26k:Unet4M+U10+Unet4M_wave",

# # -- reconstructions trained on other datasets/systems
# "hf:diffusercam:mirflickr:Unet4M+U10+Unet4M",
# "hf:tapecam:mirflickr:Unet4M+U5+Unet4M",
# "hf:diffusercam:mirflickr:Unet4M+U5+Unet4M",
# "hf:tapecam:mirflickr:Unet4M+U10+Unet4M",
# "hf:digicam:mirflickr_single_25k:Unet4M+U5+Unet4M_wave",
# "hf:digicam:mirflickr_single_25k:Unet4M+U10+Unet4M_wave",
]

save_idx: [0, 2, 3, 4, 9]
n_iter_range: [100] # for ADMM

huggingface:
repo: bezzam/DigiCam-CelebA-26K
psf: psf_simulated_waveprop.png # psf_simulated_waveprop.png, psf_simulated.png, psf_measured.png
split_seed: 0
test_size: 0.15
downsample: 2
image_res: null

alignment:
top_left: null
height: null

# cropping when there is no downsampling
crop:
vertical: [0, 525]
horizontal: [265, 695]

# for prepping ground truth data
simulation:
scene2mask: 0.25 # [m]
mask2sensor: 0.002 # [m]
object_height: 0.33 # [m]
sensor: "rpi_hq"
# shifting when there are no files to downsample
vertical_shift: -117
horizontal_shift: -25
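This config defines the test set through split_seed: 0 and test_size: 0.15 rather than a pre-defined split on the Hub. A minimal sketch of how such a seeded split can be reproduced with the Hugging Face datasets library (assuming a single "train" split on the repo; the toolkit's dataset wrapper may apply further processing):

```python
from datasets import load_dataset

ds = load_dataset("bezzam/DigiCam-CelebA-26K", split="train")
splits = ds.train_test_split(test_size=0.15, seed=0)   # matches test_size / split_seed
print(len(splits["train"]), len(splits["test"]))
```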
60 changes: 60 additions & 0 deletions configs/benchmark_digicam_mirflickr_multi.yaml
@@ -0,0 +1,60 @@
# python scripts/eval/benchmark_recon.py -cn benchmark_digicam_mirflickr_multi
defaults:
- benchmark
- _self_


dataset: HFDataset
batchsize: 4
device: "cuda:0"

huggingface:
repo: "bezzam/DigiCam-Mirflickr-MultiMask-25K"
psf: null # null for simulating PSF
image_res: [900, 1200] # used during measurement
rotate: True # if measurement is upside-down
flipud: False
flip_lensed: False # if rotate or flipud is True, apply to lensed
alignment:
top_left: [80, 100] # height, width
height: 200
downsample: 1

algorithms: [
"ADMM",

## -- reconstructions trained on measured data
"hf:digicam:mirflickr_multi_25k:Unet4M+U5+Unet4M_wave",
"hf:digicam:mirflickr_multi_25k:Unet4M+U10+Unet4M_wave",
"hf:digicam:mirflickr_multi_25k:Unet4M+U5+Unet4M_wave_aux1",
"hf:digicam:mirflickr_multi_25k:Unet4M+U5+Unet4M_wave_flips",

# ## -- reconstructions trained on other datasets/systems
# "hf:diffusercam:mirflickr:Unet4M+U10+Unet4M",
# "hf:tapecam:mirflickr:Unet4M+U10+Unet4M",
# "hf:digicam:mirflickr_single_25k:Unet4M+U10+Unet4M_wave",
# "hf:digicam:celeba_26k:Unet4M+U5+Unet4M_wave",
# "hf:digicam:mirflickr_single_25k:Unet4M+U5+Unet4M_wave",
# "hf:digicam:mirflickr_single_25k:Unet4M+U5+Unet4M_wave_aux1",
# "hf:digicam:mirflickr_single_25k:Unet4M+U5+Unet4M_wave_flips",
# "hf:digicam:mirflickr_single_25k:Unet4M+U5+Unet4M_wave_flips_rotate10",
# "hf:tapecam:mirflickr:Unet4M+U5+Unet4M",
# "hf:diffusercam:mirflickr:Unet4M+U5+Unet4M",
# "hf:digicam:mirflickr_single_25k:Unet4M+U5+Unet4M_ft_flips",
# "hf:digicam:mirflickr_single_25k:Unet4M+U5+Unet4M_ft_flips_rotate10",
]

# # -- to only use output from unrolled
# hf:digicam:mirflickr_single_25k:Unet4M+U5+Unet4M_wave_aux1:
# skip_post: True
# skip_pre: True

save_idx: [1, 2, 4, 5, 9, 24, 33, 61]
n_iter_range: [100] # for ADMM

# simulating PSF
simulation:
use_waveprop: True
deadspace: True
scene2mask: 0.3
mask2sensor: 0.002
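The commented block above ("to only use output from unrolled") shows per-model skip_pre / skip_post overrides, matching the changelog entry on skipping the pre- and post-processor at inference time. Conceptually (a hypothetical sketch, not the toolkit's actual forward pass):

```python
def reconstruct(measurement, pre_processor, camera_inversion, post_processor,
                skip_pre=False, skip_post=False):
    # The camera-inversion block (e.g. unrolled ADMM) always runs, while the
    # learned pre-/post-processors can be bypassed at inference time.
    x = measurement if skip_pre else pre_processor(measurement)
    x = camera_inversion(x)
    return x if skip_post else post_processor(x)
```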
58 changes: 58 additions & 0 deletions configs/benchmark_digicam_mirflickr_single.yaml
@@ -0,0 +1,58 @@
# python scripts/eval/benchmark_recon.py -cn benchmark_digicam_mirflickr_single
defaults:
- benchmark
- _self_

dataset: HFDataset
batchsize: 4
device: "cuda:0"

huggingface:
repo: "bezzam/DigiCam-Mirflickr-SingleMask-25K"
cache_dir: /dev/shm
psf: null # null for simulating PSF
image_res: [900, 1200] # used during measurement
rotate: True # if measurement is upside-down
flipud: False
flip_lensed: False # if rotate or flipud is True, apply to lensed
alignment:
top_left: [80, 100] # height, width
height: 200
downsample: 1


algorithms: [
"ADMM",

# -- reconstructions trained on measured data
"hf:digicam:mirflickr_single_25k:U5+Unet8M_wave",
"hf:digicam:mirflickr_single_25k:TrainInv+Unet8M_wave",
"hf:digicam:mirflickr_single_25k:MMCN4M+Unet4M_wave",
"hf:digicam:mirflickr_single_25k:MWDN8M_wave",
"hf:digicam:mirflickr_single_25k:Unet4M+TrainInv+Unet4M_wave",
"hf:digicam:mirflickr_single_25k:Unet4M+U5+Unet4M_wave",
"hf:digicam:mirflickr_single_25k:Unet2M+MMCN+Unet2M_wave",
"hf:digicam:mirflickr_single_25k:Unet2M+MWDN6M_wave",
"hf:digicam:mirflickr_single_25k:Unet4M+U10+Unet4M_wave",
"hf:digicam:mirflickr_single_25k:Unet4M+U5+Unet4M_wave_flips",
"hf:digicam:mirflickr_single_25k:Unet4M+U5+Unet4M_wave_flips_rotate10",

# ## -- reconstructions trained on other datasets/systems
# "hf:diffusercam:mirflickr:Unet4M+U10+Unet4M",
# "hf:tapecam:mirflickr:Unet4M+U10+Unet4M",
# "hf:digicam:mirflickr_single_25k:Unet4M+U5+Unet4M_wave",
# "hf:digicam:celeba_26k:Unet4M+U5+Unet4M_wave",
# "hf:tapecam:mirflickr:Unet4M+U5+Unet4M",
# "hf:diffusercam:mirflickr:Unet4M+U5+Unet4M",
# "hf:digicam:mirflickr_multi_25k:Unet4M+U5+Unet4M_wave",
]

save_idx: [1, 2, 4, 5, 9]
n_iter_range: [100] # for ADMM

# simulating PSF
simulation:
use_waveprop: True
deadspace: True
scene2mask: 0.3
mask2sensor: 0.002
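This config reads the dataset from /dev/shm through the new cache_dir option (see the changelog entry above); a minimal sketch of the corresponding call with the Hugging Face datasets library (the split name here is an assumption):

```python
from datasets import load_dataset

# Cache in shared memory instead of the default ~/.cache/huggingface/datasets,
# e.g. to speed up repeated benchmarking runs.
ds = load_dataset(
    "bezzam/DigiCam-Mirflickr-SingleMask-25K",
    split="test",
    cache_dir="/dev/shm",
)
print(ds)
```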
