Commit

Merge branch 'main' of github.com:LCAV/LenslessPiCam into main
ebezzam committed Sep 20, 2023
2 parents 24a147a + 753a64a commit 6b16b86
Showing 66 changed files with 4,344 additions and 696 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/python_pycsou.yml
@@ -59,5 +59,5 @@ jobs:
pip install -U pytest
pip install -r recon_requirements.txt
pip install -r mask_requirements.txt
pip install git+https://github.com/matthieumeo/pycsou.git@v2-dev
pip install git+https://github.com/matthieumeo/pycsou.git@38e9929c29509d350a7ff12c514e2880fdc99d6e
pytest
1 change: 1 addition & 0 deletions .gitignore
@@ -10,6 +10,7 @@ data/*
models/*
*.png
*.jpg
*.npy

configs/telegram_demo_secret.yaml

32 changes: 31 additions & 1 deletion CHANGELOG.rst
@@ -13,6 +13,29 @@ Unreleased
Added
~~~~~

- Trainable reconstruction can return intermediate outputs (between pre- and post-processing).
- Auto-download for DRUNet model.
- ``utils.dataset.DiffuserCamMirflickr`` helper class for Mirflickr dataset.

Changed
~~~~~~~

- Better logic for saving the best model: based on a desired metric rather than the last epoch, and intermediate models can be saved.
- Optional normalization in ``utils.io.load_image``.
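The optional normalization noted above is typically a min-max rescale to [0, 1]; a minimal sketch of that behavior (the function name ``normalize`` and the float32 cast are assumptions for illustration, not the library's actual code):

```python
import numpy as np


def normalize(img):
    """Min-max rescale an image to [0, 1] (assumed behavior, for illustration)."""
    img = img.astype(np.float32)
    lo, hi = img.min(), img.max()
    if hi > lo:
        return (img - lo) / (hi - lo)
    # constant image: avoid division by zero
    return np.zeros_like(img)


raw = np.array([[0, 128], [255, 64]], dtype=np.uint8)
out = normalize(raw)
```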

Bugfix
~~~~~~

- Support for unrolled reconstruction with grayscale images; the single channel is copied to three channels for LPIPS.
- Fix bad train/test split for DiffuserCamMirflickr in unrolled training.
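The LPIPS fix above hinges on the metric expecting three input channels; a grayscale reconstruction can be expanded by copying its single channel, sketched here with NumPy (the exact tensor layout, batch-channel-height-width, is an assumption):

```python
import numpy as np

# grayscale batch: (batch, 1, height, width)
gray = np.random.rand(2, 1, 32, 32).astype(np.float32)

# copy the single channel three times along the channel axis
# -> (batch, 3, height, width), as expected by LPIPS-style metrics
rgb_like = np.repeat(gray, 3, axis=1)
```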


1.0.5 - (2023-09-05)
--------------------

Added
~~~~~

- Sensor module.
- Single-script and Telegram demo.
- Link and citation for JOSS.
@@ -22,8 +45,15 @@ Added
- Script for measuring arbitrary dataset (from Raspberry Pi).
- Support for preprocessing and postprocessing, such as denoising, in ``TrainableReconstructionAlgorithm``. Both trainable and fixed postprocessing can be used.
- Utilities to load a trained DruNet model for use as postprocessing in ``TrainableReconstructionAlgorithm``.
- Unified interface for dataset. See ``utils.dataset.DualDataset``.
- New simulated dataset compatible with new data format ([(batch_size), depth, width, height, color]). See ``utils.dataset.SimulatedFarFieldDataset``.
- New dataset for pairs of original images and their measurements from a screen. See ``utils.dataset.MeasuredDataset`` and ``utils.dataset.MeasuredDatasetSimulatedOriginal``.
- Support for unrolled loading and inference in the script ``admm.py``.
- Tikhonov reconstruction for coded aperture measurements (MLS / MURA).
- Tikhonov reconstruction for coded aperture measurements (MLS / MURA): numpy and Pytorch support.
- New ``Trainer`` class to train ``TrainableReconstructionAlgorithm`` with PyTorch.
- New ``TrainableMask`` and ``TrainablePSF`` classes to train/fine-tune a mask from a dataset.
- New ``SimulatedDatasetTrainableMask`` class to train/fine-tune a mask for measurement.
- PyTorch support for ``lensless.utils.io.rgb2gray``.
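The Tikhonov reconstruction listed above amounts to a regularized least-squares solve; a rough NumPy sketch of the generic ridge solution (not the library's actual coded-aperture implementation, which exploits the separable MLS/MURA structure):

```python
import numpy as np


def tikhonov(A, b, lam=1e-2):
    """Solve min_x ||A x - b||^2 + lam * ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)


# toy forward model: recover x_true from noiseless measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = rng.standard_normal(20)
b = A @ x_true
x_hat = tikhonov(A, b, lam=1e-6)
```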


Changed
18 changes: 12 additions & 6 deletions README.rst
@@ -60,7 +60,8 @@ Python 3.9, as some Python library versions may not be available with
earlier versions of Python. Moreover, its `end-of-life <https://endoflife.date/python>`__
is Oct 2025.

**Local machine**
*Local machine setup*
=====================

Below are commands that worked for our configuration (Ubuntu
21.04), but there are certainly other ways to download a repository and
@@ -83,16 +84,20 @@ install the library locally.
# (optional) try reconstruction on local machine
python scripts/recon/admm.py
# (optional) try reconstruction on local machine with GPU
python scripts/recon/admm.py -cn pytorch
Note (25-04-2023): for using reconstruction method based on Pycsou ``lensless.apgd.APGD``,
V2 has to be installed:
Note (25-04-2023): for using the :py:class:`~lensless.recon.apgd.APGD` reconstruction method based on Pycsou
(now `Pyxu <https://github.com/matthieumeo/pyxu>`__), a specific commit has
to be installed (as there was no release at the time of implementation):

.. code:: bash
pip install git+https://github.com/matthieumeo/pycsou.git@v2-dev
pip install git+https://github.com/matthieumeo/pycsou.git@38e9929c29509d350a7ff12c514e2880fdc99d6e
If PyTorch is installed, you will need to be sure to have PyTorch 2.0 or higher,
as Pycsou V2 is not compatible with earlier versions of PyTorch. Moreover,
as Pycsou is not compatible with earlier versions of PyTorch. Moreover,
Pycsou requires Python within
`[3.9, 3.11) <https://github.com/matthieumeo/pycsou/blob/v2-dev/setup.cfg#L28>`__.

@@ -102,7 +107,8 @@ Moreover, ``numba`` (requirement for Pycsou V2) may require an older version of
pip install numpy==1.23.5
**Raspberry Pi**
*Raspberry Pi setup*
====================

After `flashing your Raspberry Pi with SSH enabled <https://medium.com/@bezzam/setting-up-a-raspberry-pi-without-a-monitor-headless-9a3c2337f329>`__,
you need to set it up for `passwordless access <https://medium.com/@bezzam/headless-and-passwordless-interfacing-with-a-raspberry-pi-ssh-453dd75154c3>`__.
9 changes: 9 additions & 0 deletions configs/adafruit.yaml
@@ -0,0 +1,9 @@
defaults:
- demo
- _self_

plot: True

capture:
exp: 5.0
awb_gains: [1, 1]
4 changes: 4 additions & 0 deletions configs/apgd_l1.yaml
@@ -3,6 +3,10 @@ defaults:
- defaults_recon
- _self_

preprocess:
# Downsampling factor along X and Y
downsample: 8

apgd:
# Proximal prior / regularization: nonneg, l1, null
prox_penalty: l1
4 changes: 4 additions & 0 deletions configs/apgd_l2.yaml
@@ -3,6 +3,10 @@ defaults:
- defaults_recon
- _self_

preprocess:
# Downsampling factor along X and Y
downsample: 8

apgd:
diff_penalty: l2
diff_lambda: 0.0001
3 changes: 3 additions & 0 deletions configs/defaults_recon.yaml
@@ -8,11 +8,13 @@ input:
# File path for raw data
data: data/raw_data/thumbs_up_rgb.png
dtype: float32
original: null # ground truth image

torch: False
torch_device: 'cpu'

preprocess:
normalize: True
# Downsampling factor along X and Y
downsample: 4
# Image shape (height, width) for reconstruction.
@@ -27,6 +29,7 @@ preprocess:
single_psf: False
# Whether to perform construction in grayscale.
gray: False
bg_pix: [5, 25] # null to skip


display:
2 changes: 2 additions & 0 deletions configs/demo.yaml
@@ -26,6 +28,8 @@ display:
psf: null
# all black screen
black: False
# all white screen
white: False

capture:
gamma: null # for visualization
43 changes: 43 additions & 0 deletions configs/diffusercam_mirflickr_single_admm.yaml
@@ -0,0 +1,43 @@
# python scripts/recon/admm.py -cn diffusercam_mirflickr_single_admm
defaults:
- defaults_recon
- _self_


display:
gamma: null

input:
# File path for recorded PSF
psf: data/DiffuserCam_Test/psf.tiff
# File path for raw data
data: data/DiffuserCam_Test/diffuser/im5.npy
dtype: float32
original: data/DiffuserCam_Test/lensed/im5.npy

torch: True
torch_device: 'cuda:0'

preprocess:
downsample: 8 # factor for PSF, which is 4x resolution of image
normalize: False

admm:
# Number of iterations
n_iter: 20
# Hyperparameters
mu1: 1e-6
mu2: 1e-5
mu3: 4e-5
tau: 0.0001
# Loading unrolled model
unrolled: True
# checkpoint_fp: pretrained_models/Pre_Unrolled_Post-DiffuserCam/model_weights.pt
checkpoint_fp: outputs/2023-09-11/22-06-49/recon.pt # pre unet and post drunet
pre_process_model:
network : UnetRes # UnetRes or DruNet or null
depth : 2 # depth of each up/downsampling layer. Ignore if network is DruNet
post_process_model:
network : DruNet # UnetRes or DruNet or null
depth : 2 # depth of each up/downsampling layer. Ignore if network is DruNet

23 changes: 23 additions & 0 deletions configs/digicam.yaml
@@ -0,0 +1,23 @@
rpi:
username: null
hostname: null

device: adafruit
virtual: False
save: True

# pattern: data/psf/adafruit_random_pattern_20230719.npy
pattern: random
# pattern: rect
# pattern: circ
min_val: 0 # if pattern: random, min for range(0,1)
rect_shape: [20, 10] # if pattern: rect
radius: 20 # if pattern: circ
center: [0, 0]


aperture:
center: [59,76]
shape: [19,26]

z: 4 # mask to sensor distance
18 changes: 18 additions & 0 deletions configs/fine-tune_PSF.yaml
@@ -0,0 +1,18 @@
# python scripts/recon/train_unrolled.py -cn fine-tune_PSF
defaults:
- train_unrolledADMM
- _self_

#Trainable Mask
trainable_mask:
mask_type: TrainablePSF #Null or "TrainablePSF"
initial_value: psf
mask_lr: 1e-3
L1_strength: 1.0 #False or float

#Training
training:
save_every: 5

display:
gamma: 2.2
1 change: 1 addition & 0 deletions configs/mask_sim_single.yaml
@@ -8,6 +8,7 @@ files:
#original: data/original/mnist_3.png

save: True
use_torch: False

simulation:
object_height: 0.3
47 changes: 47 additions & 0 deletions configs/recon_dataset.yaml
@@ -0,0 +1,47 @@
# python scripts/recon/dataset.py
defaults:
- defaults_recon
- _self_

torch: True
torch_device: 'cuda:0'

input:
# https://drive.switch.ch/index.php/s/NdgHlcDeHVDH5ww?path=%2Fpsf
psf: data/psf/adafruit_random_2mm_20231907.png
# https://drive.switch.ch/index.php/s/m89D1tFEfktQueS
raw_data: data/celeba_adafruit_random_2mm_20230720_1K

n_files: 25 # null for all files
output_folder: data/celeba_adafruit_recon

# extraction region of interest
roi: null # top, left, bottom, right
# -- values for `data/celeba_adafruit_random_2mm_20230720_1K`
# roi: [10, 300, 560, 705] # down 4
# roi: [6, 200, 373, 470] # down 6
# roi: [5, 150, 280, 352] # down 8

preprocess:
flip: True
downsample: 6

# to have different data shape than PSF
data_dim: null
# data_dim: [48, 64] # down 64
# data_dim: [506, 676] # down 6

display:
disp: -1
plot: False

algo: admm # "admm", "apgd", "null" to just copy over (resized) raw data

apgd:
n_jobs: 1 # run in parallel as algo is slow
max_iter: 500

admm:
n_iter: 10

save: False
38 changes: 38 additions & 0 deletions configs/sim_digicam_psf.yaml
@@ -0,0 +1,38 @@
# python scripts/sim/digicam_psf.py
hydra:
job:
chdir: True # change to output folder

use_torch: False
dtype: float32
torch_device: cuda
requires_grad: True

digicam:

slm: adafruit
sensor: rpi_hq

# https://drive.switch.ch/index.php/s/NdgHlcDeHVDH5ww?path=%2Fpsf
pattern: data/psf/adafruit_random_pattern_20230719.npy
ap_center: [59, 76]
ap_shape: [19, 26]
rotate: -0.8 # rotation in degrees

# optionally provide measured PSF for side-by-side comparison
# https://drive.switch.ch/index.php/s/NdgHlcDeHVDH5ww?path=%2Fpsf
psf: data/psf/adafruit_random_2mm_20231907.png
gamma: 2 # for plotting measured

sim:

# whether SLM is flipped
flipud: True

# in practice, waveprop=True or False makes no noticeable difference
waveprop: False

# below are ignored if waveprop=False
scene2mask: 0.03 # [m]
mask2sensor: 0.002 # [m]

38 changes: 38 additions & 0 deletions configs/train_celeba_classifier.yaml
@@ -0,0 +1,38 @@
hydra:
job:
chdir: True # change to output folder

seed: 0

data:
# -- path to original CelebA (parent directory)
original: /scratch/bezzam

output_dir: "./vit-celeba" # basename for model output

# -- raw
# https://drive.switch.ch/index.php/s/m89D1tFEfktQueS
measured: data/celeba_adafruit_random_2mm_20230720_10K
raw: True

# # -- reconstructed
# # run `python scripts/recon/dataset.py` to get a reconstructed dataset
# measured: null
# raw: False

n_files: null # null to use all in measured_folder
test_size: 0.15
attr: Male # "Male", "Smiling", etc

augmentation:

random_resize_crop: False
horizontal_flip: True # cannot be used with raw measurement!

train:

prev: null # path to previously trained model
n_epochs: 4
dropout: 0.1
batch_size: 16
learning_rate: 2e-4
