diff --git a/.github/workflows/python_no_pycsou.yml b/.github/workflows/python_no_pycsou.yml index a1c1a617..fba48a11 100644 --- a/.github/workflows/python_no_pycsou.yml +++ b/.github/workflows/python_no_pycsou.yml @@ -20,7 +20,9 @@ jobs: fail-fast: false max-parallel: 12 matrix: - os: [ubuntu-latest, macos-latest, windows-latest] + # TODO: use below when this issue is resolved: https://github.com/actions/setup-python/issues/808 + # os: [ubuntu-latest, macos-latest, windows-latest] + os: [ubuntu-latest, macos-12, windows-latest] python-version: [3.8, "3.11"] steps: - uses: actions/checkout@v3 diff --git a/.github/workflows/python_pycsou.yml b/.github/workflows/python_pycsou.yml index 61f89fa5..7640c660 100644 --- a/.github/workflows/python_pycsou.yml +++ b/.github/workflows/python_pycsou.yml @@ -20,7 +20,9 @@ jobs: fail-fast: false max-parallel: 12 matrix: - os: [ubuntu-latest, macos-latest, windows-latest] + # TODO: use below when this issue is resolved: https://github.com/actions/setup-python/issues/808 + # os: [ubuntu-latest, macos-latest, windows-latest] + os: [ubuntu-latest, macos-12, windows-latest] python-version: [3.9, "3.10"] steps: - uses: actions/checkout@v3 diff --git a/CHANGELOG.rst b/CHANGELOG.rst index 136545f2..029c6d22 100644 --- a/CHANGELOG.rst +++ b/CHANGELOG.rst @@ -26,21 +26,26 @@ Added - DigiCam support for Telegram demo. - DiffuserCamMirflickr Hugging Face API. - Fallback for normalization if data not in 8bit range (``lensless.utils.io.save_image``). +- Add utilities for fabricating masks with 3D printing (``lensless.hardware.fabrication``). Changed -~~~~~ +~~~~~~~ - Dataset reconstruction script uses datasets from Hugging Face: ``scripts/recon/dataset.py`` - For trainable masks, set trainable parameters inside the child class. +- ``distance_sensor`` optional for ``lensless.hardware.mask.Mask``, e.g. not needed for fabrication. +- More intuitive interface for MURA for coded aperture (``lensless.hardware.mask.CodedAperture``), i.e. 
directly pass prime number. + Bugfix -~~~~~ +~~~~~~ - ``lensless.hardware.trainable_mask.AdafruitLCD`` input handling. - Local path for DRUNet download. - APGD input handling (float32). - Multimask handling. - Passing shape to IRFFT so that it matches shape of input to RFFT. +- MLS mask creation (needed to rescale digits). 1.0.6 - (2024-02-21) -------------------- diff --git a/README.rst b/README.rst index 4000543a..05f27061 100644 --- a/README.rst +++ b/README.rst @@ -17,7 +17,7 @@ LenslessPiCam .. image:: https://colab.research.google.com/assets/colab-badge.svg - :target: https://drive.google.com/drive/folders/1nBDsg86RaZIqQM6qD-612k9v8gDrgdwB?usp=drive_link + :target: https://lensless.readthedocs.io/en/latest/examples.html :alt: notebooks .. image:: https://huggingface.co/datasets/huggingface/badges/resolve/main/powered-by-huggingface-dark.svg :alt: huggingface -*A Hardware and Software Toolkit for Lensless Computational Imaging with a Raspberry Pi* ------------------------------------------------------------------------------------------ +*A Hardware and Software Toolkit for Lensless Computational Imaging* +-------------------------------------------------------------------- .. image:: https://github.com/LCAV/LenslessPiCam/raw/main/scripts/recon/example.png :alt: Lensless imaging example :align: center -This toolkit has everything you need to perform imaging with a lensless -camera. We make use of a low-cost implementation of DiffuserCam [1]_, -where we use a piece of tape instead of the lens and the -`Raspberry Pi HQ camera sensor `__ -(the `V2 sensor `__ -is also supported). Similar principles and methods can be used for a -different lensless encoder and a different sensor. +This toolkit has everything you need to perform imaging with a lensless camera. +The sensor in most examples is the `Raspberry Pi HQ `__ camera sensor, as it is low cost (around 50 USD) and has a high resolution (12 MP). 
+The lensless encoder/mask used in most examples is either a piece of tape or a `low-cost LCD `__. +As **modularity** is a key feature of this toolkit, you can use different sensors and lensless encoders. -*If you are interested in exploring reconstruction algorithms without building the camera, that is entirely possible!* -The provided reconstruction algorithms can be used with the provided data or simulated data. +The toolkit includes: + +* Camera assembly tutorials (`link `__). +* Measurement scripts (`link `__). +* Dataset preparation and loading tools, with `Hugging Face `__ integration (`slides `__ on uploading a dataset to Hugging Face with `this script `__). +* `Reconstruction algorithms `__ (e.g. FISTA, ADMM, unrolled algorithms, trainable inversion, pre- and post-processors). +* `Training script `__ for learning-based reconstruction. +* `Pre-trained models `__ that can be loaded from `Hugging Face `__, for example in `this script `__. +* Mask `design `__ and `fabrication `__ tools. +* `Simulation tools `__. +* `Evaluation tools `__ (e.g. PSNR, LPIPS, SSIM, visualizations). +* `Demo `__ that can be run on Telegram! + +Please refer to the `documentation `__ for more details, and an overview of example notebooks can be found `here `__. We've also written a few Medium articles to guide users through the process of building the camera, measuring data with it, and reconstruction. @@ -172,12 +183,14 @@ to them for the idea and making tools/code/data available! Below is some of the work that has inspired this toolkit: * `Build your own DiffuserCam tutorial `__. -* `DiffuserCam Lensless MIR Flickr dataset `__ [2]_. +* `DiffuserCam Lensless MIR Flickr dataset `__ [1]_. A few students at EPFL have also contributed to this project: * Julien Sahli: support and extension of algorithms for 3D. * Yohann Perron: unrolled algorithms for reconstruction. +* Aaron Fargeon: mask designs. +* Rein Bentdal: mask fabrication with 3D printing. 
Citing this work ---------------- @@ -202,6 +215,4 @@ If you use these tools in your own research, please cite the following: References ---------- -.. [1] Antipa, N., Kuo, G., Heckel, R., Mildenhall, B., Bostan, E., Ng, R., & Waller, L. (2018). DiffuserCam: lensless single-exposure 3D imaging. Optica, 5(1), 1-9. - -.. [2] Monakhova, K., Yurtsever, J., Kuo, G., Antipa, N., Yanny, K., & Waller, L. (2019). Learned reconstructions for practical mask-based lensless imaging. Optics express, 27(20), 28075-28090. +.. [1] Monakhova, K., Yurtsever, J., Kuo, G., Antipa, N., Yanny, K., & Waller, L. (2019). Learned reconstructions for practical mask-based lensless imaging. Optics express, 27(20), 28075-28090. diff --git a/docs/source/conf.py b/docs/source/conf.py index 9e39a5c0..4635afa5 100644 --- a/docs/source/conf.py +++ b/docs/source/conf.py @@ -41,6 +41,9 @@ "scipy.special", "matplotlib.cm", "pyffs", + "datasets", + "huggingface_hub", + "cadquery", ] for mod_name in MOCK_MODULES: sys.modules[mod_name] = mock.Mock() diff --git a/docs/source/demo.rst b/docs/source/demo.rst index 8e823da5..4bef6fb6 100644 --- a/docs/source/demo.rst +++ b/docs/source/demo.rst @@ -1,5 +1,5 @@ -Demo -==== +Demo (measurement and reconstruction) +===================================== A full demo script can be found in ``scripts/demo.py``. Its configuration file can be found in ``configs/demo.yaml``. @@ -11,7 +11,7 @@ It assumes the following setup: * The RPi and the PC are connected to the same network. * You can SSH into the RPi from the PC `without a password `_. * The RPi is connected to a lensless camera and a display. -* The display is configured to display images in full screen, as described in :ref:`measurement`. +* The display is configured to display images in full screen, as described in :ref:`measurement`. * The PSF of the lensless camera is known and saved as an RGB file. .. image:: demo_setup.png @@ -100,7 +100,7 @@ you need to: #. 
Install Telegram Python API (and other dependencies): ``pip install python-telegram-bot emoji pilmoji``. -#. Make sure ``LenslessPiCam`` is installed on your server and on the Raspberry Pi, and that the display is configured to display images in full screen, as described in :ref:`measurement`. +#. Make sure ``LenslessPiCam`` is installed on your server and on the Raspberry Pi, and that the display is configured to display images in full screen, as described in :ref:`measurement`. #. Prepare your configuration file using ``configs/telegram_demo.yaml`` as a template. You will have to set ``token`` to the token of your bot, ``rpi_username`` and ``rpi_hostname`` to the username and hostname of your Raspberry Pi, ``psf:fp`` to the path of your PSF file, and ``config_name`` to a demo configuration that e.g. worked for above. You may also want to set what algorithms you are willing to let the bot support (note that as of 12 March 2023, unrolled ADMM requires a GPU). diff --git a/docs/source/examples.rst b/docs/source/examples.rst new file mode 100644 index 00000000..7fd7102a --- /dev/null +++ b/docs/source/examples.rst @@ -0,0 +1,31 @@ +Examples +======== + +There are many example scripts +`on GitHub `__, +but they may not be the best way to get started with the library. +The following notebooks aim to provide a more interactive and intuitive +way to explore the different functionalities of the library. 
+ +System / Hardware +----------------- + +Using a programmable-mask-based lensless imaging system, +where the programmable mask is a low-cost LCD: + +- `DigiCam: Single-Shot Lensless Sensing with a Low-Cost Programmable Mask `__ +- `Towards Scalable and Secure Lensless Imaging with a Programmable Mask `__ + +Reconstruction Method +--------------------- + +Learning-based reconstruction methods: + +- `A Modular and Robust Physics-Based Approach for Lensless Image Reconstruction `__ +- `Aligning images for training `__ + +Mask Design and Fabrication +--------------------------- + +- `Multi-lens array design `__ +- `Creating STEP files for 3D printing masks `__ diff --git a/docs/source/fabrication.rst b/docs/source/fabrication.rst new file mode 100644 index 00000000..83fc3963 --- /dev/null +++ b/docs/source/fabrication.rst @@ -0,0 +1,40 @@ +.. automodule:: lensless.hardware.fabrication + + These masks are meant to be used with a mount for the Raspberry Pi HQ sensor (shown below). + The design files can be found `here `_. + + .. image:: mount_components.png + :alt: Mount components. + :align: center + + + Mask3DModel + ~~~~~~~~~~~ + + Below is a screenshot of a Fresnel Zone Aperture mask that can be designed with the above notebook + (using ``simplify=True``). + + .. image:: fza.png + :alt: Fresnel Zone Aperture. + :align: center + + .. autoclass:: lensless.hardware.fabrication.Mask3DModel + :members: + :special-members: __init__ + + MultiLensMold + ~~~~~~~~~~~~~ + + Below is a screenshot of a mold that can be designed for a multi-lens array with the above notebook. + + *Note: We were not successful in our attempts to remove the cast mask from the mold + (we poured epoxy and the cured mask could not be released). + Perhaps the mold needs to be coated with a non-stick material.* + + .. image:: mla_mold.png + :alt: Multi-lens array mold. + :align: center + + .. 
autoclass:: lensless.hardware.fabrication.MultiLensMold + :members: + :special-members: __init__ \ No newline at end of file diff --git a/docs/source/fza.png b/docs/source/fza.png new file mode 100644 index 00000000..25cc3ee4 Binary files /dev/null and b/docs/source/fza.png differ diff --git a/docs/source/index.rst b/docs/source/index.rst index 3fba13d2..dfd69bb0 100644 --- a/docs/source/index.rst +++ b/docs/source/index.rst @@ -26,10 +26,17 @@ Contents reconstruction evaluation mask + fabrication sensor utilities demo + +.. toctree:: + :caption: Examples + + examples + .. toctree:: :caption: Data diff --git a/docs/source/mla_mold.png b/docs/source/mla_mold.png new file mode 100644 index 00000000..590d8547 Binary files /dev/null and b/docs/source/mla_mold.png differ diff --git a/docs/source/mount_components.png b/docs/source/mount_components.png new file mode 100644 index 00000000..14534b4e Binary files /dev/null and b/docs/source/mount_components.png differ diff --git a/lensless/hardware/fabrication.py b/lensless/hardware/fabrication.py new file mode 100644 index 00000000..39425fd8 --- /dev/null +++ b/lensless/hardware/fabrication.py @@ -0,0 +1,522 @@ +# ############################################################################# +# fabrication.py +# ================= +# Authors : +# Rein BENTDAL [rein.bent@gmail.com] +# Eric BEZZAM [ebezzam@gmail.com] +# ############################################################################# + + +""" +Mask Fabrication +================ + +This module provides tools for fabricating masks for lensless imaging. +Check out `this notebook `_ on Google Colab for how to use this module. 
+ +""" + +import os +import cadquery as cq +import numpy as np +from typing import Union, Optional +from abc import ABC, abstractmethod +from lensless.hardware.mask import Mask, MultiLensArray, CodedAperture, FresnelZoneAperture + + +class Frame(ABC): + @abstractmethod + def generate(self, mask_size, depth: float) -> cq.Workplane: + pass + + +class Connection(ABC): + @abstractmethod + def generate(self, mask: np.ndarray, mask_size, depth: float) -> cq.Workplane: + """connections can in general use the mask array to determine where to connect to the mask, but it is not required.""" + pass + + +class Mask3DModel: + def __init__( + self, + mask_array: np.ndarray, + mask_size: Union[tuple[float, float], np.ndarray], + height: Optional[float] = None, + frame: Optional[Frame] = None, + connection: Optional[Connection] = None, + simplify: bool = False, + show_axis: bool = False, + generate: bool = True, + ): + """ + Wrapper to CadQuery to generate a 3D model from a mask array, e.g. for 3D printing. + + Parameters + ---------- + mask_array : np.ndarray + Array of the mask to generate from. 1 is opaque, 0 is transparent. + mask_size : Union[tuple[float, float], np.ndarray] + Dimensions of the mask in meters. + height : Optional[float], optional + How thick to make the mask in millimeters. + frame : Optional[Frame], optional + Frame object defining the frame around the mask. + connection : Optional[Connection], optional + Connection object defining how to connect the frame to the mask. + simplify : bool, optional + Combines all objects in the model to a single object. Can result in a smaller 3d model file and faster post processing. But takes a considerable amount of more time to generate model. Defaults to False. + show_axis : bool, optional + Show axis for debug purposes. Defaults to False. + generate : bool, optional + Generate model on initialization. Defaults to True. 
+ + """ + + self.mask = mask_array + self.frame: Frame = frame + self.connections: Connection = connection + + if isinstance(mask_size, tuple): + self.mask_size = np.array(mask_size) * 1e3 + else: + self.mask_size = mask_size * 1e3 + + self.height = height + self.simplify = simplify + self.show_axis = show_axis + + self.model = None + + if generate: + self.generate_3d_model() + + @classmethod + def from_mask(cls, mask: Mask, **kwargs): + """ + Create a Mask3DModel from a Mask object. + + Parameters + ---------- + mask : :py:class:`~lensless.hardware.mask.Mask` + Mask object to generate from, e.g. :py:class:`~lensless.hardware.mask.CodedAperture` or :py:class:`~lensless.hardware.mask.FresnelZoneAperture`. + """ + assert isinstance(mask, CodedAperture) or isinstance( + mask, FresnelZoneAperture + ), "Mask must be a CodedAperture or FresnelZoneAperture object." + return cls(mask_array=mask.mask, mask_size=mask.size, **kwargs) + + @staticmethod + def mask_to_points(mask: np.ndarray, px_size: Union[tuple[float, float], np.ndarray]): + """ + Turns mask into 2D point coordinates. + + Parameters + ---------- + mask : np.ndarray + Mask array. + px_size : Union[tuple[float, float], np.ndarray] + Pixel size in meters. + + """ + is_3D = len(np.unique(mask)) > 2 + + if is_3D: + indices = np.argwhere(mask != 0) + coordinates = (indices - np.array(mask.shape) / 2) * px_size + heights = mask[indices[:, 0], indices[:, 1]] + + else: + indices = np.argwhere(mask == 0) + coordinates = (indices - np.array(mask.shape) / 2) * px_size + heights = None + return coordinates, heights + + def generate_3d_model(self): + """ + Based on provided (1) mask, (2) frame, and (3) connection between frame and mask, generate a 3d model. + """ + + assert self.model is None, "Model already generated." 
+ + model = cq.Workplane("XY") + + if self.frame is not None: + frame_model = self.frame.generate(self.mask_size, self.height) + model = model.add(frame_model) + if self.connections is not None: + connection_model = self.connections.generate(self.mask, self.mask_size, self.height) + model = model.add(connection_model) + + px_size = self.mask_size / self.mask.shape + points, heights = Mask3DModel.mask_to_points(self.mask, px_size) + if len(points) != 0: + if heights is None: + assert self.height is not None, "height must be provided if mask is 2D." + mask_model = ( + cq.Workplane("XY") + .pushPoints(points) + .box(px_size[0], px_size[1], self.height, centered=False, combine=False) + ) + else: + mask_model = cq.Workplane("XY") + for point, height in zip(points, heights): + + box = ( + cq.Workplane("XY") + .moveTo(point[0], point[1]) + .box( + px_size[0], + px_size[1], + height * self.height, + centered=False, + combine=False, + ) + ) + mask_model = mask_model.add(box) + + if self.simplify: + mask_model = mask_model.combine(glue=True) + + model = model.add(mask_model) + + if self.simplify: + model = model.combine(glue=False) + + if self.show_axis: + axis_thickness = 0.01 + axis_length = 20 + axis_test = ( + cq.Workplane("XY") + .box(axis_thickness, axis_thickness, axis_length) + .box(axis_thickness, axis_length, axis_thickness) + .box(axis_length, axis_thickness, axis_thickness) + ) + model = model.add(axis_test) + + self.model = model + + def save(self, fname): + """ + Save the 3d model to a file. + + Parameters + ---------- + fname : str + File name to save the model to. + """ + + assert self.model is not None, "Model not generated yet." + + directory = os.path.dirname(fname) + if directory and not os.path.exists(directory): + print( + f"Error: The directory {directory} does not exist! Failed to save CadQuery model." 
) + return + + cq.exporters.export(self.model, fname) + + +class MultiLensMold: + def __init__( + self, + sphere_locations: np.ndarray, + sphere_radius: np.ndarray, + mask_size: Union[tuple[float, float], np.ndarray], + mold_size: tuple[float, float, float] = (0.4e-1, 0.4e-1, 3.0e-3), + base_height_mm: Optional[float] = 0.5, + frame: Optional[Frame] = None, + simplify: bool = False, + show_axis: bool = False, + ): + """ + Create a 3D model of a multi-lens array mold. + + Parameters + ---------- + sphere_locations : np.ndarray + Array of sphere locations in meters. + sphere_radius : np.ndarray + Array of sphere radii in meters. + mask_size : Union[tuple[float, float], np.ndarray] + Dimensions of the mask in meters. + mold_size : tuple[float, float, float], optional + Dimensions of the mold in meters. Defaults to (0.4e-1, 0.4e-1, 3.0e-3). + base_height_mm : Optional[float], optional + Height of the base in millimeters. Defaults to 0.5. + frame : Optional[Frame], optional + Frame object defining the frame around the mask. + simplify : bool, optional + Combine all objects in the model into a single object. Can result in a smaller 3D model file and faster post-processing, but takes considerably more time to generate the model. Defaults to False. + show_axis : bool, optional + Show axis for debug purposes. Defaults to False. + """ + + self.mask_size_mm = mask_size * 1e3 + self.mold_size_mm = np.array(mold_size) * 1e3 + self.simplify = simplify + self.frame = frame + self.show_axis = show_axis + self.n_lens = len(sphere_radius) + + # check mold larger than mask + assert np.all(self.mask_size_mm <= self.mold_size_mm[:2]), "Mold must be larger than mask." + assert base_height_mm < self.mold_size_mm[2], "Base height must be less than mold height."
+ + # create 3D model of multi-lens array + model = cq.Workplane("XY") + base_model = cq.Workplane("XY").box( + self.mask_size_mm[0], self.mask_size_mm[1], base_height_mm, centered=(True, True, False) + ) + model = model.add(base_model) + + if self.frame is not None: + frame_model = self.frame.generate(self.mask_size_mm, base_height_mm) + model = model.add(frame_model) + + sphere_model = cq.Workplane("XY") + for i in range(self.n_lens): + loc_mm = sphere_locations[i] * 1e3 + # center locations + loc_mm[0] -= self.mask_size_mm[1] / 2 + loc_mm[1] -= self.mask_size_mm[0] / 2 + r_mm = sphere_radius[i] * 1e3 + sphere = cq.Workplane("XY").moveTo(loc_mm[1], loc_mm[0]).sphere(r_mm, angle1=0) + sphere_model = sphere_model.add(sphere) + + # add indent for removing the mask from the mold + if self.frame is not None: + mask_dim = self.frame.size + else: + mask_dim = self.mask_size_mm + # indent = cq.Workplane("XY").moveTo(0, mask_dim[1] / 2).sphere(base_height_mm, angle1=0) + # indent = indent.translate((0, 0, -base_height_mm)) + indent = ( + cq.Workplane("XY") + .moveTo(0, mask_dim[1] / 2) + .box(base_height_mm, base_height_mm, base_height_mm) + ) + indent = indent.translate((0, 0, -base_height_mm / 2)) + sphere_model = sphere_model.add(indent) + + # add to base + sphere_model = sphere_model.translate((0, 0, base_height_mm)) + model = model.add(sphere_model) + + if self.simplify: + model = model.combine(glue=True) + + if self.show_axis: + axis_thickness = 0.01 + axis_length = 20 + axis_test = ( + cq.Workplane("XY") + .box(axis_thickness, axis_thickness, axis_length) + .box(axis_thickness, axis_length, axis_thickness) + .box(axis_length, axis_thickness, axis_thickness) + ) + model = model.add(axis_test) + + self.mask = model + + # create mold + mold = cq.Workplane("XY").box( + self.mold_size_mm[0], + self.mold_size_mm[1], + self.mold_size_mm[2], + centered=(True, True, False), + ) + mold = mold.cut(model).rotate((0, 0, 0), (1, 0, 0), 180) + + self.mold = mold + + @classmethod + def from_mask(cls, 
mask: Mask, **kwargs): + """ + Create a MultiLensMold from a MultiLensArray object. + + Parameters + ---------- + mask : :py:class:`~lensless.hardware.mask.MultiLensArray` + Multi-lens array mask object. + """ + assert isinstance(mask, MultiLensArray), "Mask must be a MultiLensArray object." + return cls( + sphere_locations=mask.loc, sphere_radius=mask.radius, mask_size=mask.size, **kwargs + ) + + def save(self, fname): + assert self.mold is not None, "Model not generated yet." + + directory = os.path.dirname(fname) + if directory and not os.path.exists(directory): + print( + f"Error: The directory {directory} does not exist! Failed to save CadQuery model." + ) + return + + cq.exporters.export(self.mold, fname) + + +# --- from here, implementations of frames and connections --- + + +class SimpleFrame(Frame): + def __init__(self, padding: float = 2, size: Optional[tuple[float, float]] = None): + """ + Specify either padding or size. If size is specified, padding is ignored. + + All dimensions are in millimeters. + + Parameters + ---------- + padding : float, optional + Padding around the mask. Defaults to 2 mm. + size : Optional[tuple[float, float]], optional + Size of the frame in mm. Defaults to None. 
+ """ + self.padding = padding + self.size = size + + def generate(self, mask_size, depth: float) -> cq.Workplane: + width, height = mask_size[0], mask_size[1] + size = ( + self.size + if self.size is not None + else (width + 2 * self.padding, height + 2 * self.padding) + ) + return ( + cq.Workplane("XY") + .box(size[0], size[1], depth, centered=(True, True, False)) + .rect(width, height) + .cutThruAll() + ) + + +class CrossConnection(Connection): + """Transverse cross connection""" + + def __init__(self, line_width: float = 0.1, mask_radius: float = None): + self.line_width = line_width + self.mask_radius = mask_radius + + def generate(self, mask: np.ndarray, mask_size, depth: float) -> cq.Workplane: + width, height = mask_size[0], mask_size[1] + model = ( + cq.Workplane("XY") + .box(self.line_width, height, depth, centered=(True, True, False)) + .box(width, self.line_width, depth, centered=(True, True, True)) + ) + + if self.mask_radius is not None: + circle = cq.Workplane("XY").cylinder( + depth, self.mask_radius, centered=(True, True, False) + ) + model = model.cut(circle) + + return model + + +class SaltireConnection(Connection): + """Diagonal cross connection""" + + def __init__(self, line_width: float = 0.1, mask_radius: float = None): + self.line_width = line_width + self.mask_radius = mask_radius + + def generate(self, mask: np.ndarray, mask_size, depth: float) -> cq.Workplane: + width, height = mask_size[0], mask_size[1] + width2, height2 = width / 2, height / 2 + lw = self.line_width / np.sqrt(2) + model = ( + cq.Workplane("XY") + .moveTo(-(width2 - lw), -height2) + .lineTo(-width2, -height2) + .lineTo(-width2, -(height2 - lw)) + .lineTo(width2 - lw, height2) + .lineTo(width2, height2) + .lineTo(width2, height2 - lw) + .close() + .extrude(depth) + .moveTo(-(width2 - lw), height2) + .lineTo(-width2, height2) + .lineTo(-width2, height2 - lw) + .lineTo(width2 - lw, -height2) + .lineTo(width2, -height2) + .lineTo(width2, -(height2 - lw)) + .close() + 
.extrude(depth) + ) + + if self.mask_radius is not None: + circle = cq.Workplane("XY").cylinder( + depth, self.mask_radius, centered=(True, True, False) + ) + model = model.cut(circle) + + return model + + +class ThreePointConnection(Connection): + """ + Connection for free-floating components as in FresnelZoneAperture. + """ + + def __init__(self, line_width: float = 0.1, mask_radius: float = None): + self.line_width = line_width + self.mask_radius = mask_radius + + def generate(self, mask: np.ndarray, mask_size, depth: float) -> cq.Workplane: + width, height = mask_size[0], mask_size[1] + width2, height2 = width / 2, height / 2 + lw = self.line_width / np.sqrt(2) + + model = ( + cq.Workplane("XY") + .box(width2, self.line_width, depth, centered=(False, True, False)) + .moveTo(-(width2 - lw), -height2) + .lineTo(-width2, -height2) + .lineTo(-width2, -(height2 - lw)) + .lineTo(-lw, 0) + .lineTo(lw, 0) + .close() + .extrude(depth) + .moveTo(-(width2 - lw), height2) + .lineTo(-width2, height2) + .lineTo(-width2, (height2 - lw)) + .lineTo(-lw, 0) + .lineTo(lw, 0) + .close() + .extrude(depth) + ) + + if self.mask_radius is not None: + circle = cq.Workplane("XY").cylinder( + depth, self.mask_radius, centered=(True, True, False) + ) + model = model.cut(circle) + + return model + + +class CodedApertureConnection(Connection): + def __init__(self, joint_radius: float = 0.1): + self.joint_radius = joint_radius + + def generate(self, mask: np.ndarray, mask_size, depth: float) -> cq.Workplane: + x_lines = np.where(np.diff(mask[:, 0]) != 0)[0] + 1 + y_lines = np.where(np.diff(mask[0]) != 0)[0] + 1 + X, Y = np.meshgrid(x_lines, y_lines) + point_idxs = np.vstack([X.ravel(), Y.ravel()]).T - np.array(mask.shape) / 2 + + px_size = mask_size / mask.shape + points = point_idxs * px_size + + model = ( + cq.Workplane("XY") + .pushPoints(points) + .cylinder(depth, self.joint_radius, centered=(True, True, False), combine=False) + ) + + return model diff --git a/lensless/hardware/mask.py 
b/lensless/hardware/mask.py index 40c7cbcc..a14d8c61 100644 --- a/lensless/hardware/mask.py +++ b/lensless/hardware/mask.py @@ -7,8 +7,8 @@ # ############################################################################# """ -Mask -==== +Mask Design +=========== This module provides utilities to create different types of masks (:py:class:`~lensless.hardware.mask.CodedAperture`, :py:class:`~lensless.hardware.mask.PhaseContour`, @@ -32,6 +32,7 @@ from waveprop.noise import add_shot_noise from lensless.hardware.sensor import VirtualSensor from lensless.utils.image import resize +import matplotlib.pyplot as plt try: import torch @@ -49,7 +50,7 @@ class Mask(abc.ABC): def __init__( self, resolution, - distance_sensor, + distance_sensor=None, size=None, feature_size=None, psf_wavelength=[460e-9, 550e-9, 640e-9], @@ -65,9 +66,9 @@ def __init__( resolution: array_like Resolution of the mask (px). distance_sensor: float - Distance between the mask and the sensor (m). + Distance between the mask and the sensor (m). Needed to simulate PSF. size: array_like - Size of the sensor (m). Only one of ``size`` or ``feature_size`` needs to be specified. + Size of the mask (m). Only one of ``size`` or ``feature_size`` needs to be specified. feature_size: float or array_like Size of the feature (m). Only one of ``size`` or ``feature_size`` needs to be specified. 
psf_wavelength: list, optional @@ -111,14 +112,16 @@ def __init__( self.torch_device = torch_device # create mask - self.create_mask() + self.phase_pattern = None # for phase masks + self.create_mask() # creates self.mask self.shape = self.mask.shape # PSF assert hasattr(psf_wavelength, "__len__"), "psf_wavelength should be a list" self.psf_wavelength = psf_wavelength self.psf = None - self.compute_psf() + if self.distance_sensor is not None: + self.compute_psf() @classmethod def from_sensor(cls, sensor_name, downsample=None, **kwargs): @@ -158,11 +161,22 @@ def create_mask(self): """ pass - def compute_psf(self): + def compute_psf(self, distance_sensor=None): """ Compute the intensity PSF with bandlimited angular spectrum (BLAS) for each wavelength. Common to all types of masks. + + Parameters + ---------- + distance_sensor: float, optional + Distance between mask and sensor (m). Default is the distance specified at initialization. """ + if distance_sensor is not None: + self.distance_sensor = distance_sensor + assert ( + self.distance_sensor is not None + ), "Distance between mask and sensor should be specified." + if self.is_torch: psf = torch.zeros( tuple(self.resolution) + (len(self.psf_wavelength),), @@ -188,6 +202,38 @@ def compute_psf(self): else: self.psf = np.abs(psf) ** 2 + def plot(self, ax=None, **kwargs): + """ + Plot the mask. + + Parameters + ---------- + ax: :py:class:`~matplotlib.axes.Axes`, optional + Axes to plot the mask on. Default is None. + **kwargs: + Additional arguments for the plot function. 
+ """ + + if ax is None: + _, ax = plt.subplots() + + if self.phase_pattern is not None: + mask = self.phase_pattern + title = "Phase pattern" + else: + mask = self.mask + title = "Mask" + if self.is_torch: + mask = mask.cpu().numpy() + + ax.imshow( + mask, extent=(0, 1e3 * self.size[1], 1e3 * self.size[0], 0), cmap="gray", **kwargs + ) + ax.set_title(title) + ax.set_xlabel("[mm]") + ax.set_ylabel("[mm]") + return ax + class CodedAperture(Mask): """ @@ -203,8 +249,8 @@ def __init__(self, method="MLS", n_bits=8, **kwargs): method: str Pattern generation method (MURA or MLS). Default is ``MLS``. n_bits: int, optional - Number of bits for pattern generation. - Size is ``4*n_bits + 1`` for MURA and ``2^n - 1`` for MLS. + Number of bits for pattern generation, must be prime for MURA. + Results in ``2^n - 1``x``2^n - 1`` for MLS. Default is 8 (for a 255x255 MLS mask). **kwargs: The keyword arguments are passed to the parent class :py:class:`~lensless.hardware.mask.Mask`. @@ -220,11 +266,11 @@ def __init__(self, method="MLS", n_bits=8, **kwargs): # initialize parameters if self.method.upper() == "MURA": - self.mask = self.generate_mura(4 * self.n_bits + 1) + self.mask = self.generate_mura(self.n_bits) self.row = None self.col = None else: - seq = max_len_seq(self.n_bits)[0] + seq = max_len_seq(self.n_bits)[0] * 2 - 1 self.row = seq self.col = seq @@ -257,8 +303,10 @@ def create_mask(self, row=None, col=None, mask=None): if self.row is not None: if self.is_torch: self.mask = torch.outer(self.row, self.col) + self.mask = torch.round((self.mask + 1) / 2).to(torch.uint8) else: self.mask = np.outer(self.row, self.col) + self.mask = np.round((self.mask + 1) / 2).astype(np.uint8) assert self.mask is not None, "Mask should be specified" # resize to sensor shape @@ -547,8 +595,12 @@ def create_mask(self, loc=None, radius=None): locs_pix = self.loc * (1 / self.feature_size[0]) radius_pix = self.radius * (1 / self.feature_size[0]) height = self.create_height_map(radius_pix, 
locs_pix) - self.phi = height * (self.refractive_index - 1) * 2 * np.pi / self.wavelength - self.mask = np.exp(1j * self.phi) if not self.is_torch else torch.exp(1j * self.phi) + self.phase_pattern = height * (self.refractive_index - 1) * 2 * np.pi / self.wavelength + self.mask = ( + np.exp(1j * self.phase_pattern) + if not self.is_torch + else torch.exp(1j * self.phase_pattern) + ) def create_height_map(self, radius, locs): height = ( @@ -642,6 +694,9 @@ def create_mask(self): self.target_psf = cv.Canny(np.interp(binary, (-1, 1), (0, 255)).astype(np.uint8), 0, 255) # Computing mask and height map + assert ( + self.distance_sensor is not None + ), "Distance between mask and sensor should be specified." phase_mask, height_map = phase_retrieval( target_psf=self.target_psf, wv=self.design_wv, @@ -707,15 +762,14 @@ class FresnelZoneAperture(Mask): namely binarized cosine function. """ - def __init__(self, radius=0.32e-3, **kwargs): + def __init__(self, radius=0.56e-3, **kwargs): """ Fresnel Zone Aperture mask contructor. Parameters ---------- radius: float - characteristic radius of the FZA (m) - default value: 5e-4 + Radius of the FZA (m). Default value is 0.56e-3 (largest in the paper, others are 0.32e-3 and 0.25e-3). **kwargs: The keyword arguments are passed to the parent class :py:class:`~lensless.hardware.mask.Mask`. """ diff --git a/lensless/recon/recon.py b/lensless/recon/recon.py index 02c8007e..39f78a0c 100644 --- a/lensless/recon/recon.py +++ b/lensless/recon/recon.py @@ -9,6 +9,9 @@ Reconstruction ============== +Check out `this notebook `_ +on Google Colab for an overview of the reconstruction algorithms available in LenslessPiCam (analytic and learned). + The core algorithmic component of ``LenslessPiCam`` is the abstract class :py:class:`~lensless.ReconstructionAlgorithm`. 
The five reconstruction strategies available in ``LenslessPiCam`` derive from this class: diff --git a/notebooks/README.md b/notebooks/README.md deleted file mode 100644 index 504ea420..00000000 --- a/notebooks/README.md +++ /dev/null @@ -1,6 +0,0 @@ -The following notebooks can be run from Google Colab: - -- [DigiCam: Single-Shot Lensless Sensing with a Low-Cost Programmable Mask](https://colab.research.google.com/drive/1t59uyZMMyCUYVHGXdqdlNlDlb--FL_3P#scrollTo=t9o50zTf3oUg) -- [Aligning a reconstruction with the screen displayed image](https://colab.research.google.com/drive/1c6kUbiB5JO1vro0-IMd-YDDP1g7NFXv3#scrollTo=MtN7GWCIrBKr) -- [A Modular and Robust Physics-Based Approach for Lensless Image Reconstruction](https://colab.research.google.com/drive/1Wgt6ZMRZVuctLHaXxk7PEyPaBaUPvU33) -- [Towards Scalable and Secure Lensless Imaging with a Programmable Mask](https://colab.research.google.com/drive/1YGfs9p4T4NefX8GemVWwtrw4aX8zH1qu#scrollTo=tipedTe4vGwD) \ No newline at end of file diff --git a/test/test_masks.py b/test/test_masks.py index a16659d6..6a7fa81a 100644 --- a/test/test_masks.py +++ b/test/test_masks.py @@ -13,7 +13,7 @@ def test_flatcam(): mask1 = CodedAperture( method="MURA", - n_bits=25, + n_bits=23, resolution=resolution, feature_size=d1, distance_sensor=dz,
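The CHANGELOG entry "MLS mask creation (needed to rescale digits)" corresponds to the ``* 2 - 1`` and ``np.round((self.mask + 1) / 2)`` changes in ``CodedAperture``: the maximum-length sequence comes back as digits in {0, 1}, so it is rescaled to {-1, +1} before the separable outer product, and the resulting pattern is mapped back to a binary {0, 1} mask. Below is a standalone sketch of the fixed logic, assuming ``max_len_seq`` is ``scipy.signal.max_len_seq`` (as imported in ``lensless.hardware.mask``); it does not import ``lensless``:

```python
import numpy as np
from scipy.signal import max_len_seq

n_bits = 8  # default in CodedAperture

# max_len_seq returns digits in {0, 1}; rescale to {-1, +1} before the outer product
seq = max_len_seq(n_bits)[0].astype(np.int8) * 2 - 1

# separable 2D MLS pattern, values in {-1, +1}
pattern = np.outer(seq, seq)

# map back to a binary {0, 1} mask, mirroring the new create_mask logic
mask = np.round((pattern + 1) / 2).astype(np.uint8)

print(mask.shape)  # (255, 255), i.e. (2^n - 1) x (2^n - 1)
```

For ``n_bits=8`` this yields the 255x255 binary MLS mask mentioned in the updated ``CodedAperture`` docstring.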