Commit

Merge branch 'main' into digicam_example
ebezzam committed Apr 25, 2024
2 parents 07cf6bb + 79eb8a4 commit a97c1df
Showing 77 changed files with 3,334 additions and 3,450 deletions.
4 changes: 3 additions & 1 deletion .github/workflows/python_no_pycsou.yml
@@ -20,7 +20,9 @@ jobs:
fail-fast: false
max-parallel: 12
matrix:
os: [ubuntu-latest, macos-latest, windows-latest]
# TODO: use below when this issue is resolved: https://github.com/actions/setup-python/issues/808
# os: [ubuntu-latest, macos-latest, windows-latest]
os: [ubuntu-latest, macos-12, windows-latest]
python-version: [3.8, "3.11"]
steps:
- uses: actions/checkout@v3
4 changes: 3 additions & 1 deletion .github/workflows/python_pycsou.yml
@@ -20,7 +20,9 @@ jobs:
fail-fast: false
max-parallel: 12
matrix:
os: [ubuntu-latest, macos-latest, windows-latest]
# TODO: use below when this issue is resolved: https://github.com/actions/setup-python/issues/808
# os: [ubuntu-latest, macos-latest, windows-latest]
os: [ubuntu-latest, macos-12, windows-latest]
python-version: [3.9, "3.10"]
steps:
- uses: actions/checkout@v3
1 change: 1 addition & 0 deletions .gitignore
@@ -8,6 +8,7 @@ DiffuserCam_Mirflickr_200_3011302021_11h43_seed11*
paper/paper.pdf
data/*
models/*
notebooks/models/*
*.png
*.jpg
*.npy
20 changes: 18 additions & 2 deletions CHANGELOG.rst
@@ -19,17 +19,33 @@ Added
- ``lensless.hardware.trainable_mask.TrainableCodedAperture`` class for training a coded aperture mask pattern.
- Support for other optimizers in ``lensless.utils.Trainer.set_optimizer``.
- ``lensless.utils.dataset.simulate_dataset`` for simulating a dataset given a mask/PSF.
- Support for training/testing with multiple mask patterns in the dataset.
- Multi-GPU support for training.
- DigiCam dataset which interfaces with Hugging Face.
- Scripts for authentication.
- DigiCam support for Telegram demo.
- DiffuserCamMirflickr Hugging Face API.
- Fallback for normalization if data not in 8bit range (``lensless.utils.io.save_image``).
- Add utilities for fabricating masks with 3D printing (``lensless.hardware.fabrication``).

Changed
~~~~~
~~~~~~~

- Dataset reconstruction script uses datasets from Hugging Face: ``scripts/recon/dataset.py``
- For trainable masks, set trainable parameters inside the child class.
- ``distance_sensor`` optional for ``lensless.hardware.mask.Mask``, e.g. not needed for fabrication.
- More intuitive interface for MURA for coded aperture (``lensless.hardware.mask.CodedAperture``), i.e. directly pass prime number.


Bugfix
~~~~~
~~~~~~

- ``lensless.hardware.trainable_mask.AdafruitLCD`` input handling.
- Local path for DRUNet download.
- APGD input handling (float32).
- Multimask handling.
- Passing shape to IRFFT so that it matches shape of input to RFFT.
- MLS mask creation (needed to rescale digits).
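
The IRFFT bullet above refers to a common pitfall: without an explicit output length, the inverse real FFT assumes an even-length signal, so odd-length inputs come back one sample short. A minimal NumPy sketch of the issue (illustrative only, not the library's code)::

    import numpy as np

    x = np.random.rand(101)                       # odd-length input
    X = np.fft.rfft(x)
    print(np.fft.irfft(X).shape)                  # (100,) -- inferred even length
    print(np.fft.irfft(X, n=x.shape[-1]).shape)   # (101,) -- matches the rfft input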

1.0.6 - (2024-02-21)
--------------------
49 changes: 33 additions & 16 deletions README.rst
@@ -17,32 +17,43 @@ LenslessPiCam


.. image:: https://colab.research.google.com/assets/colab-badge.svg
:target: https://drive.google.com/drive/folders/1nBDsg86RaZIqQM6qD-612k9v8gDrgdwB?usp=drive_link
:target: https://lensless.readthedocs.io/en/latest/examples.html
:alt: notebooks

.. image:: https://huggingface.co/datasets/huggingface/badges/resolve/main/powered-by-huggingface-dark.svg
:target: https://huggingface.co/bezzam
:alt: huggingface


*A Hardware and Software Toolkit for Lensless Computational Imaging with a Raspberry Pi*
-----------------------------------------------------------------------------------------
*A Hardware and Software Toolkit for Lensless Computational Imaging*
--------------------------------------------------------------------

.. image:: https://github.com/LCAV/LenslessPiCam/raw/main/scripts/recon/example.png
:alt: Lensless imaging example
:align: center


This toolkit has everything you need to perform imaging with a lensless
camera. We make use of a low-cost implementation of DiffuserCam [1]_,
where we use a piece of tape instead of the lens and the
`Raspberry Pi HQ camera sensor <https://www.raspberrypi.com/products/raspberry-pi-high-quality-camera>`__
(the `V2 sensor <https://www.raspberrypi.com/products/camera-module-v2/>`__
is also supported). Similar principles and methods can be used for a
different lensless encoder and a different sensor.
This toolkit has everything you need to perform imaging with a lensless camera.
The sensor in most examples is the `Raspberry Pi HQ <https://www.raspberrypi.com/products/raspberry-pi-high-quality-camera>`__
camera sensor, as it is low cost (around 50 USD) and has a high resolution (12 MP).
The lensless encoder/mask used in most examples is either a piece of tape or a `low-cost LCD <https://www.adafruit.com/product/358>`__.
As **modularity** is a key feature of this toolkit, we try to support different sensors and/or lensless encoders.

*If you are interested in exploring reconstruction algorithms without building the camera, that is entirely possible!*
The provided reconstruction algorithms can be used with the provided data or simulated data.
The toolkit includes:

* Camera assembly tutorials (`link <https://lensless.readthedocs.io/en/latest/building.html>`__).
* Measurement scripts (`link <https://lensless.readthedocs.io/en/latest/measurement.html>`__).
* Dataset preparation and loading tools, with `Hugging Face <https://huggingface.co/bezzam>`__ integration (`slides <https://docs.google.com/presentation/d/18h7jTcp20jeoiF8dJIEcc7wHgjpgFgVxZ_bJ04W55lg/edit?usp=sharing>`__ on uploading a dataset to Hugging Face with `this script <https://github.com/LCAV/LenslessPiCam/blob/main/scripts/data/upload_dataset_huggingface.py>`__).
* `Reconstruction algorithms <https://lensless.readthedocs.io/en/latest/reconstruction.html>`__ (e.g. FISTA, ADMM, unrolled algorithms, trainable inversion, pre- and post-processors).
* `Training script <https://github.com/LCAV/LenslessPiCam/blob/main/scripts/recon/train_unrolled.py>`__ for learning-based reconstruction.
* `Pre-trained models <https://github.com/LCAV/LenslessPiCam/blob/main/lensless/recon/model_dict.py>`__ that can be loaded from `Hugging Face <https://huggingface.co/bezzam>`__, for example in `this script <https://github.com/LCAV/LenslessPiCam/blob/main/scripts/recon/diffusercam_mirflickr.py>`__.
* Mask `design <https://lensless.readthedocs.io/en/latest/mask.html>`__ and `fabrication <https://lensless.readthedocs.io/en/latest/fabrication.html>`__ tools.
* `Simulation tools <https://lensless.readthedocs.io/en/latest/simulation.html>`__.
* `Evaluation tools <https://lensless.readthedocs.io/en/latest/evaluation.html>`__ (e.g. PSNR, LPIPS, SSIM, visualizations).
* `Demo <https://lensless.readthedocs.io/en/latest/demo.html#telegram-demo>`__ that can be run on Telegram!

Please refer to the `documentation <http://lensless.readthedocs.io>`__ for more details,
while an overview of example notebooks can be found `here <https://lensless.readthedocs.io/en/latest/examples.html>`__.
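
As a quick orientation, a reconstruction with the toolkit can be as short as the sketch below. It assumes the ``ADMM`` interface shown in the project documentation; the PSF and measurement are placeholders loaded from hypothetical ``.npy`` files rather than the toolkit's own I/O utilities::

    import numpy as np
    from lensless import ADMM

    # Placeholder inputs: a PSF and a raw lensless measurement as float arrays.
    psf = np.load("psf.npy").astype(np.float32)
    data = np.load("measurement.npy").astype(np.float32)

    recon = ADMM(psf)                # assumed constructor, per the documentation
    recon.set_data(data)
    result = recon.apply(n_iter=100)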

We've also written a few Medium articles to guide users through the process
of building the camera, measuring data with it, and reconstruction.
@@ -155,6 +166,12 @@ directory):
source lensless_env/bin/activate
pip install --no-deps -e .
pip install -r rpi_requirements.txt
# test on-device camera capture (after setting up the camera)
source lensless_env/bin/activate
python scripts/measure/on_device_capture.py
You may still need to manually install ``numpy`` and/or ``scipy`` with ``pip`` in case libraries (e.g. ``libopenblas.so.0``) cannot be detected.


Acknowledgements
@@ -166,12 +183,14 @@ to them for the idea and making tools/code/data available! Below is some of
the work that has inspired this toolkit:

* `Build your own DiffuserCam tutorial <https://waller-lab.github.io/DiffuserCam/tutorial>`__.
* `DiffuserCam Lensless MIR Flickr dataset <https://waller-lab.github.io/LenslessLearning/dataset.html>`__ [2]_.
* `DiffuserCam Lensless MIR Flickr dataset <https://waller-lab.github.io/LenslessLearning/dataset.html>`__ [1]_.

A few students at EPFL have also contributed to this project:

* Julien Sahli: support and extension of algorithms for 3D.
* Yohann Perron: unrolled algorithms for reconstruction.
* Aaron Fargeon: mask designs.
* Rein Bentdal: mask fabrication with 3D printing.

Citing this work
----------------
@@ -196,6 +215,4 @@ If you use these tools in your own research, please cite the following:
References
----------

.. [1] Antipa, N., Kuo, G., Heckel, R., Mildenhall, B., Bostan, E., Ng, R., & Waller, L. (2018). DiffuserCam: lensless single-exposure 3D imaging. Optica, 5(1), 1-9.
.. [2] Monakhova, K., Yurtsever, J., Kuo, G., Antipa, N., Yanny, K., & Waller, L. (2019). Learned reconstructions for practical mask-based lensless imaging. Optics express, 27(20), 28075-28090.
.. [1] Monakhova, K., Yurtsever, J., Kuo, G., Antipa, N., Yanny, K., & Waller, L. (2019). Learned reconstructions for practical mask-based lensless imaging. Optics express, 27(20), 28075-28090.
32 changes: 0 additions & 32 deletions configs/apply_admm_single_mirflickr.yaml

This file was deleted.

17 changes: 17 additions & 0 deletions configs/authen.yaml
@@ -0,0 +1,17 @@
# python scripts/data/authenticate.py
hydra:
job:
chdir: True # change to output folder

# repo_id: "bezzam/DigiCam-Mirflickr-MultiMask-25K"
# repo_id: "bezzam/DigiCam-Mirflickr-MultiMask-1K"
repo_id: "bezzam/DigiCam-Mirflickr-MultiMask-10K"
split: all # "all" (for 100 masks), "test" (for 15 masks)
n_iter: 25
n_files: 100 # per mask
grayscale: True
font_scale: 1.5
torch_device: cuda:0

cont: null # continue already started file
scores_fp: null # file path to already computed scores
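
The authentication script reads this file through Hydra, as the comment at the top indicates. For a quick look at the resolved values, or to tweak them programmatically, the config can also be loaded directly with OmegaConf; the overrides below are arbitrary examples::

    from omegaconf import OmegaConf

    cfg = OmegaConf.load("configs/authen.yaml")
    cfg.n_files = 10           # example override: fewer files per mask
    cfg.torch_device = "cpu"   # example override: run without a GPU
    print(OmegaConf.to_yaml(cfg))
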
11 changes: 10 additions & 1 deletion configs/benchmark.yaml
@@ -7,9 +7,18 @@ hydra:
chdir: True


dataset: DiffuserCam # DiffuserCam, DigiCamCelebA
dataset: DiffuserCam # DiffuserCam, DigiCamCelebA, DigiCamHF
seed: 0

huggingface:
repo: "bezzam/DigiCam-Mirflickr-MultiMask-25K"
image_res: [900, 1200] # used during measurement
rotate: True # if measurement is upside-down
alignment:
topright: [80, 100] # height, width
height: 200
downsample: 1

device: "cuda"
# numbers of iterations to benchmark
n_iter_range: [5, 10, 20, 50, 100, 200, 300]
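
The ``huggingface`` block above points the benchmark at a hosted dataset. Outside of the benchmark script, the same repository can be pulled with the ``datasets`` library for inspection; this is only a sketch, and the available columns depend on how the dataset was uploaded::

    from datasets import load_dataset

    ds = load_dataset("bezzam/DigiCam-Mirflickr-MultiMask-25K", split="test")
    print(ds.column_names)   # inspect what the dataset provides
    print(len(ds))           # number of samples in the test split
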
9 changes: 7 additions & 2 deletions configs/collect_dataset.yaml
@@ -9,7 +9,7 @@ output_file_ext: png

# files to measure
n_files: 15
start_idx: 0 # start index, TODO implement
start_idx: 0

# timing
runtime: null # in hours
@@ -23,19 +23,24 @@ max_level: 254
min_level: 200
max_tries: 6

masks: null # for multi-mask measurements

# -- display parameters
display:
output_fp: "~/LenslessPiCam_display/test.png"
# default to this screen: https://www.dell.com/en-us/work/shop/dell-ultrasharp-usb-c-hub-monitor-u2421e/apd/210-axmg/monitors-monitor-accessories#techspecs_section
screen_res: [1920, 1200] # width, height
image_res: null # useful if input images don't have the same dimension, set it to this
pad: 0
hshift: 0
vshift: -10
brightness: 80 # max brightness
rot90: 3
delay: 2 # to allow picture to display
landscape: False # whether to force landscape

capture:
delay: 2 # to allow picture to display
skip: False # to test looping over displaying images
config_pause: 2
iso: 100
res: null
35 changes: 35 additions & 0 deletions configs/collect_mirflickr_multimask.yaml
@@ -0,0 +1,35 @@
# python scripts/measure/collect_dataset_on_device.py -cn collect_mirflickr_multimask
defaults:
- collect_dataset
- _self_


min_level: 170
max_tries: 1

masks:
seed: 0
device: adafruit
n: 100 # number of masks
shape: [54, 26]
center: [57, 77]

input_dir: /mnt/mirflickr/all

# can pass existing folder to continue measurement
output_dir: /mnt/mirflickr/all_measured_20240209-172459

# files to measure
n_files: null

# -- display parameters
display:
image_res: [900, 1200]
vshift: -26
brightness: 90
delay: 1

capture:
down: 8
exposure: 0.7
awb_gains: [1.6, 1.2] # red, blue
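
The ``masks`` block above (seed, number of patterns, shape) drives the multi-mask measurement. Purely as an illustration of how a single seed can yield a reproducible set of patterns of that shape, and not the toolkit's actual mask generation for the Adafruit LCD::

    import numpy as np

    rng = np.random.default_rng(seed=0)
    n_masks, shape = 100, (54, 26)
    # 100 reproducible pseudo-random binary patterns of shape (54, 26)
    patterns = (rng.uniform(size=(n_masks, *shape)) > 0.5).astype(np.uint8)
    print(patterns.shape)  # (100, 54, 26)
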
18 changes: 18 additions & 0 deletions configs/collect_mirflickr_singlemask.yaml
@@ -0,0 +1,18 @@
# python scripts/measure/collect_dataset_on_device.py -cn collect_mirflickr_COSI
defaults:
- collect_mirflickr_multimask
- _self_


min_level: 125


masks:
n: 1

# can pass existing folder to continue measurement
output_dir: /mnt/mirflickr/all_measured_20240226-111214 # single mask

capture:
exposure: 0.5
awb_gains: [1.6, 1.2] # red, blue
18 changes: 13 additions & 5 deletions configs/demo.yaml
@@ -18,7 +18,7 @@ display:
screen_res: [1920, 1200] # width, height
pad: 0
hshift: 0
vshift: 0
vshift: -10
brightness: 100
rot90: 3

@@ -32,7 +32,7 @@ display:
capture:
sensor: rpi_hq
gamma: null # for visualization
exp: 0.02
exp: 1
delay: 2
script: ~/LenslessPiCam/scripts/measure/on_device_capture.py
iso: 100
@@ -56,14 +62,22 @@ capture:
# remote script returns RGB data
rgb: True
down: 4
awb_gains: [1.9, 1.2]
awb_gains: [1.6, 1.2]


camera:
# these gains are not applied if rgb=True
red_gain: 1.9
blue_gain: 1.2
psf: data/psf/tape_rgb_31032023.png
# -- path to PSF,
# psf: data/psf/tape_rgb_31032023.png
# -- DigiCam configuration
psf:
seed: 0
device: adafruit
mask_shape: [54, 26]
mask_center: [57, 77]
flipud: True
background: null


@@ -84,7 +92,7 @@ recon:

# -- admm
admm:
n_iter: 10
n_iter: 100
disp_iter: null
mu1: 1e-6
mu2: 1e-5
23 changes: 0 additions & 23 deletions configs/evaluate_mirflickr_admm.yaml

This file was deleted.

(Diffs for the remaining changed files are not shown.)
