Mask fabrication with 3D printing (#118)
* Add notebooks and utilities for fabrication.

* Switch to macOS 12 for workflow.

* Add Aaron and Rein.

* Add plotting function, improve MURA interface.

* Add link to multi-lens array notebook.

* Fix MLS mask generation.

* Add author info.

* Add link to notebook for 3D printing masks.

* Fix flatcam test for new MURA interface.

* Fix changelog headings.

* Add link to reconstruction notebook.

* Update docs with 3D printing masks.

* Clean up fabrication utils.

* Update CHANGELOG.

* Change link for notebook.

* Update mask design title.

* Update docs with latest features.
ebezzam authored Apr 24, 2024
1 parent 95aa24e commit 0eb3644
Showing 17 changed files with 722 additions and 48 deletions.
4 changes: 3 additions & 1 deletion .github/workflows/python_no_pycsou.yml
@@ -20,7 +20,9 @@ jobs:
fail-fast: false
max-parallel: 12
matrix:
os: [ubuntu-latest, macos-latest, windows-latest]
# TODO: use below when this issue is resolved: https://github.com/actions/setup-python/issues/808
# os: [ubuntu-latest, macos-latest, windows-latest]
os: [ubuntu-latest, macos-12, windows-latest]
python-version: [3.8, "3.11"]
steps:
- uses: actions/checkout@v3
4 changes: 3 additions & 1 deletion .github/workflows/python_pycsou.yml
@@ -20,7 +20,9 @@ jobs:
fail-fast: false
max-parallel: 12
matrix:
os: [ubuntu-latest, macos-latest, windows-latest]
# TODO: use below when this issue is resolved: https://github.com/actions/setup-python/issues/808
# os: [ubuntu-latest, macos-latest, windows-latest]
os: [ubuntu-latest, macos-12, windows-latest]
python-version: [3.9, "3.10"]
steps:
- uses: actions/checkout@v3
9 changes: 7 additions & 2 deletions CHANGELOG.rst
@@ -26,21 +26,26 @@ Added
- DigiCam support for Telegram demo.
- DiffuserCamMirflickr Hugging Face API.
- Fallback for normalization if data not in 8bit range (``lensless.utils.io.save_image``).
- Utilities for fabricating masks with 3D printing (``lensless.hardware.fabrication``).

Changed
~~~~~
~~~~~~~

- Dataset reconstruction script uses datasets from Hugging Face: ``scripts/recon/dataset.py``
- For trainable masks, set trainable parameters inside the child class.
- ``distance_sensor`` now optional for ``lensless.hardware.mask.Mask``, e.g. it is not needed for fabrication.
- More intuitive MURA interface for coded apertures (``lensless.hardware.mask.CodedAperture``), i.e. the prime number can be passed directly.


Bugfix
~~~~~
~~~~~~

- ``lensless.hardware.trainable_mask.AdafruitLCD`` input handling.
- Local path for DRUNet download.
- APGD input handling (float32).
- Multimask handling.
- Passing shape to IRFFT so that it matches shape of input to RFFT.
- MLS mask creation (needed to rescale digits).

1.0.6 - (2024-02-21)
--------------------
43 changes: 27 additions & 16 deletions README.rst
@@ -17,32 +17,43 @@ LenslessPiCam


.. image:: https://colab.research.google.com/assets/colab-badge.svg
:target: https://drive.google.com/drive/folders/1nBDsg86RaZIqQM6qD-612k9v8gDrgdwB?usp=drive_link
:target: https://lensless.readthedocs.io/en/latest/examples.html
:alt: notebooks

.. image:: https://huggingface.co/datasets/huggingface/badges/resolve/main/powered-by-huggingface-dark.svg
:target: https://huggingface.co/bezzam
:alt: huggingface


*A Hardware and Software Toolkit for Lensless Computational Imaging with a Raspberry Pi*
-----------------------------------------------------------------------------------------
*A Hardware and Software Toolkit for Lensless Computational Imaging*
--------------------------------------------------------------------

.. image:: https://github.com/LCAV/LenslessPiCam/raw/main/scripts/recon/example.png
:alt: Lensless imaging example
:align: center


This toolkit has everything you need to perform imaging with a lensless
camera. We make use of a low-cost implementation of DiffuserCam [1]_,
where we use a piece of tape instead of the lens and the
`Raspberry Pi HQ camera sensor <https://www.raspberrypi.com/products/raspberry-pi-high-quality-camera>`__
(the `V2 sensor <https://www.raspberrypi.com/products/camera-module-v2/>`__
is also supported). Similar principles and methods can be used for a
different lensless encoder and a different sensor.
This toolkit has everything you need to perform imaging with a lensless camera.
The sensor in most examples is the `Raspberry Pi HQ camera sensor <https://www.raspberrypi.com/products/raspberry-pi-high-quality-camera>`__,
as it is low cost (around 50 USD) and has a high resolution (12 MP).
The lensless encoder/mask used in most examples is either a piece of tape or a `low-cost LCD <https://www.adafruit.com/product/358>`__.
As **modularity** is a key feature of this toolkit, you can use different sensors and lensless encoders.

*If you are interested in exploring reconstruction algorithms without building the camera, that is entirely possible!*
The reconstruction algorithms can be used with the provided datasets or with simulated data.
The toolkit includes:

* Camera assembly tutorials (`link <https://lensless.readthedocs.io/en/latest/building.html>`__).
* Measurement scripts (`link <https://lensless.readthedocs.io/en/latest/measurement.html>`__).
* Dataset preparation and loading tools, with `Hugging Face <https://huggingface.co/bezzam>`__ integration (`slides <https://docs.google.com/presentation/d/18h7jTcp20jeoiF8dJIEcc7wHgjpgFgVxZ_bJ04W55lg/edit?usp=sharing>`__ on uploading a dataset to Hugging Face with `this script <https://github.com/LCAV/LenslessPiCam/blob/main/scripts/data/upload_dataset_huggingface.py>`__).
* `Reconstruction algorithms <https://lensless.readthedocs.io/en/latest/reconstruction.html>`__ (e.g. FISTA, ADMM, unrolled algorithms, trainable inversion, pre- and post-processors); a minimal usage sketch follows this list.
* `Training script <https://github.com/LCAV/LenslessPiCam/blob/main/scripts/recon/train_unrolled.py>`__ for learning-based reconstruction.
* `Pre-trained models <https://github.com/LCAV/LenslessPiCam/blob/main/lensless/recon/model_dict.py>`__ that can be loaded from `Hugging Face <https://huggingface.co/bezzam>`__, for example in `this script <https://github.com/LCAV/LenslessPiCam/blob/main/scripts/recon/diffusercam_mirflickr.py>`__.
* Mask `design <https://lensless.readthedocs.io/en/latest/mask.html>`__ and `fabrication <https://lensless.readthedocs.io/en/latest/fabrication.html>`__ tools.
* `Simulation tools <https://lensless.readthedocs.io/en/latest/simulation.html>`__.
* `Evaluation tools <https://lensless.readthedocs.io/en/latest/evaluation.html>`__ (e.g. PSNR, LPIPS, SSIM, visualizations).
* `Demo <https://lensless.readthedocs.io/en/latest/demo.html#telegram-demo>`__ that can be run on Telegram!
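
As a quick taste of the Python API, below is a minimal reconstruction sketch.
The file paths are placeholders, and the ``load_data`` helper, its argument names, and
the ``apply`` arguments are assumptions that may differ from the actual interface; see the
reconstruction documentation linked above for details.

.. code-block:: python

    from lensless import ADMM
    from lensless.utils.io import load_data  # helper and argument names are assumptions

    # load the PSF and a raw lensless measurement (paths are placeholders)
    psf, data = load_data(psf_fp="path/to/psf.png", data_fp="path/to/raw_measurement.png")

    # reconstruct with ADMM (number of iterations is arbitrary)
    recon = ADMM(psf)
    recon.set_data(data)
    estimate = recon.apply(n_iter=100)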

Please refer to the `documentation <http://lensless.readthedocs.io>`__ for more details;
an overview of example notebooks can be found `here <https://lensless.readthedocs.io/en/latest/examples.html>`__.

We've also written a few Medium articles to guide users through the process
of building the camera, measuring data with it, and reconstructing images.
@@ -172,12 +183,14 @@ to them for the idea and making tools/code/data available! Below is some of
the work that has inspired this toolkit:

* `Build your own DiffuserCam tutorial <https://waller-lab.github.io/DiffuserCam/tutorial>`__.
* `DiffuserCam Lensless MIR Flickr dataset <https://waller-lab.github.io/LenslessLearning/dataset.html>`__ [2]_.
* `DiffuserCam Lensless MIR Flickr dataset <https://waller-lab.github.io/LenslessLearning/dataset.html>`__ [1]_.

A few students at EPFL have also contributed to this project:

* Julien Sahli: support and extension of algorithms for 3D.
* Yohann Perron: unrolled algorithms for reconstruction.
* Aaron Fargeon: mask designs.
* Rein Bentdal: mask fabrication with 3D printing.

Citing this work
----------------
@@ -202,6 +215,4 @@ If you use these tools in your own research, please cite the following:
References
----------

.. [1] Antipa, N., Kuo, G., Heckel, R., Mildenhall, B., Bostan, E., Ng, R., & Waller, L. (2018). DiffuserCam: lensless single-exposure 3D imaging. Optica, 5(1), 1-9.
.. [2] Monakhova, K., Yurtsever, J., Kuo, G., Antipa, N., Yanny, K., & Waller, L. (2019). Learned reconstructions for practical mask-based lensless imaging. Optics express, 27(20), 28075-28090.
.. [1] Monakhova, K., Yurtsever, J., Kuo, G., Antipa, N., Yanny, K., & Waller, L. (2019). Learned reconstructions for practical mask-based lensless imaging. Optics express, 27(20), 28075-28090.
3 changes: 3 additions & 0 deletions docs/source/conf.py
@@ -41,6 +41,9 @@
"scipy.special",
"matplotlib.cm",
"pyffs",
"datasets",
"huggingface_hub",
"cadquery",
]
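# mock heavy/optional dependencies so Sphinx can build the docs without installing them
# (e.g. cadquery, which we assume is needed by the new fabrication utilities)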
for mod_name in MOCK_MODULES:
sys.modules[mod_name] = mock.Mock()
8 changes: 4 additions & 4 deletions docs/source/demo.rst
@@ -1,5 +1,5 @@
Demo
====
Demo (measurement and reconstruction)
=====================================

A full demo script can be found in ``scripts/demo.py``. Its configuration
file can be found in ``configs/demo.yaml``.
@@ -11,7 +11,7 @@ It assumes the following setup:
* The RPi and the PC are connected to the same network.
* You can SSH into the RPi from the PC `without a password <https://medium.com/@bezzam/headless-and-passwordless-interfacing-with-a-raspberry-pi-ssh-453dd75154c3>`_.
* The RPi is connected to a lensless camera and a display.
* The display is configured to display images in full screen, as described in :ref:`measurement<Remote display>`.
* The display is configured to display images in full screen, as described in :ref:`measurement<Preparing an external monitor for displaying images (remote display)>`.
* The PSF of the lensless camera is known and saved as an RGB file.

.. image:: demo_setup.png
@@ -100,7 +100,7 @@ you need to:
#. Install Telegram Python API (and other dependencies): ``pip install python-telegram-bot emoji pilmoji``.

#. Make sure ``LenslessPiCam`` is installed on your server and on the Raspberry Pi, and that the display is configured to display images in full screen, as described in :ref:`measurement<Remote display>`.
#. Make sure ``LenslessPiCam`` is installed on your server and on the Raspberry Pi, and that the display is configured to display images in full screen, as described in :ref:`measurement<Preparing an external monitor for displaying images (remote display)>`.

#. Prepare your configuration file using ``configs/telegram_demo.yaml`` as a template. You will have to set ``token`` to the token of your bot, ``rpi_username`` and ``rpi_hostname`` to the username and hostname of your Raspberry Pi, ``psf:fp`` to the path of your PSF file, and ``config_name`` to a demo configuration that worked for the setup above. You may also want to set which algorithms the bot should support (note that as of 12 March 2023, unrolled ADMM requires a GPU).

31 changes: 31 additions & 0 deletions docs/source/examples.rst
@@ -0,0 +1,31 @@
Examples
========

There are many example scripts
`on GitHub <https://github.com/LCAV/LenslessPiCam/tree/main/scripts>`__,
but they may not be the best way to get started with the library.
The following notebooks aim to provide a more interactive and intuitive
way to explore the different functionalities of the library.

System / Hardware
-----------------

Using a programmable-mask-based lensless imaging system,
where the programmable mask is a low-cost LCD:

- `DigiCam: Single-Shot Lensless Sensing with a Low-Cost Programmable Mask <https://colab.research.google.com/drive/1t59uyZMMyCUYVHGXdqdlNlDlb--FL_3P#scrollTo=t9o50zTf3oUg>`__
- `Towards Scalable and Secure Lensless Imaging with a Programmable Mask <https://colab.research.google.com/drive/1YGfs9p4T4NefX8GemVWwtrw4aX8zH1qu#scrollTo=tipedTe4vGwD>`__

Reconstruction method
---------------------

Learning-based reconstruction methods:

- `A Modular and Robust Physics-Based Approach for Lensless Image Reconstruction <https://colab.research.google.com/drive/1Wgt6ZMRZVuctLHaXxk7PEyPaBaUPvU33>`__
- `Aligning images for training <https://colab.research.google.com/drive/1c6kUbiB5JO1vro0-IMd-YDDP1g7NFXv3#scrollTo=MtN7GWCIrBKr>`__

Mask design and fabrication
---------------------------

- `Multi-lens array design <https://drive.google.com/file/d/1IIGjdPUD5qqq4kWjDp50OWnIvHPVdvmp/view?usp=sharing>`__
- `Creating STEP files for 3D printing masks <https://colab.research.google.com/drive/1eDLnDL5q4i41xPZLn73wKcKpZksfkkIo?usp=sharing>`__
40 changes: 40 additions & 0 deletions docs/source/fabrication.rst
@@ -0,0 +1,40 @@
.. automodule:: lensless.hardware.fabrication

These masks are meant to be used with a mount for the Raspberry Pi HQ sensor (shown below).
The design files can be found `here <https://drive.switch.ch/index.php/s/nDe50iC7zn52r07#/>`_.

.. image:: mount_components.png
:alt: Mount components.
:align: center


Mask3DModel
~~~~~~~~~~~

Below is a screenshot of a Fresnel Zone Aperture mask that can be designed with the above notebook
(using ``simplify=True``).

.. image:: fza.png
:alt: Fresnel Zone Aperture.
:align: center

.. autoclass:: lensless.hardware.fabrication.Mask3DModel
:members:
:special-members: __init__
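
Below is a minimal, hypothetical usage sketch for going from a mask design to a printable model.
Apart from ``Mask3DModel`` and the ``simplify`` option mentioned above, the mask construction,
the ``from_mask`` call, and the export step are assumptions; refer to the class documentation
and the above notebook for the actual interface.

.. code-block:: python

    from lensless.hardware.mask import FresnelZoneAperture
    from lensless.hardware.fabrication import Mask3DModel

    # design a Fresnel Zone Aperture mask for the Raspberry Pi HQ sensor
    # (sensor name and downsample factor are assumptions)
    mask = FresnelZoneAperture.from_sensor(sensor_name="rpi_hq", downsample=8)

    # convert the 2D mask pattern into a 3D model for printing
    # (from_mask and its keyword arguments are assumptions)
    model = Mask3DModel.from_mask(mask, simplify=True)

    # export a STEP file that can be imported into CAD / slicer software
    model.save("fza_mask.step")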

MultiLensMold
~~~~~~~~~~~~~

Below is a screenshot of a multi-lens array mold that can be designed with the above notebook.

*Note: We were not successful in our attempts to cast a mask from the mold
(we poured epoxy but could not release the cured mask from the mold).
Perhaps the mold needs to be coated with a non-stick material.*

.. image:: mla_mold.png
:alt: Multi-lens array mold.
:align: center

.. autoclass:: lensless.hardware.fabrication.MultiLensMold
:members:
:special-members: __init__
Binary file added docs/source/fza.png
7 changes: 7 additions & 0 deletions docs/source/index.rst
@@ -26,10 +26,17 @@ Contents
reconstruction
evaluation
mask
fabrication
sensor
utilities
demo


.. toctree::
:caption: Examples

examples

.. toctree::
:caption: Data

Binary file added docs/source/mla_mold.png
Binary file added docs/source/mount_components.png