
# NAFNet deblur CoreML model

The NAFNet deblurring model converted to CoreML. Normalization, preprocessing, and postprocessing are integrated into the network graph. An FP8-quantized model is provided at `weights/nafnet_reds_64_fp8.mlmodel` (66.04 MB). The original model is NAFNet-REDS-width64.

There is a known issue with artifacts at the edges of the image, caused by running inference in FP16 or lower precision. If you encounter this issue, try the FP32/FP16 models, which can be downloaded from here. The model then also needs to run in FP32, so use the CPU on the device; Apple's mobile NPU/GPU do not support FP32.
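For example, with coremltools you can force CPU-only (FP32) execution when loading the model. A minimal sketch; the FP32 model filename below is a hypothetical placeholder for whichever model you downloaded:

```python
import coremltools as ct

# Restrict execution to the CPU so inference runs in FP32;
# Apple's mobile NPU/GPU paths do not support FP32.
# NOTE: "nafnet_fp32.mlmodel" is a placeholder path, not a file in this repo.
model = ct.models.MLModel(
    "nafnet_fp32.mlmodel",
    compute_units=ct.ComputeUnit.CPU_ONLY,
)
```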

The model accepts the following input sizes:

```python
[
    (256, 256), (512, 512), (768, 768), (1024, 1024),
    (1280, 1280), (1536, 1536), (1792, 1792), (2048, 2048), (2304, 2304),
    (2560, 2560), (2816, 2816), (3072, 3072), (3328, 3328), (3584, 3584),
    (3840, 3840), (4096, 4096), (4352, 4352), (4608, 4608), (4864, 4864)
]
```

The execution script pads provided images to the nearest size listed above, as sketched below.
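A minimal sketch of that padding step (illustrative only; `pad_to_valid_size` is a hypothetical helper, not the repository's actual code):

```python
import numpy as np

# Square side lengths the model accepts: 256, 512, ..., 4864.
VALID_SIZES = list(range(256, 4865, 256))

def pad_to_valid_size(image: np.ndarray) -> np.ndarray:
    """Zero-pad an (H, W, 3) image to the nearest accepted square size."""
    h, w = image.shape[:2]
    # Pick the smallest accepted side that fits the image
    # (images larger than 4864 px are not handled in this sketch).
    side = next(s for s in VALID_SIZES if s >= max(h, w))
    padded = np.zeros((side, side, 3), dtype=image.dtype)
    padded[:h, :w] = image
    return padded
```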

*Result image.*

## Installation

1. Create a Python virtual environment:

   ```bash
   virtualenv -p python3.9 .venv && source .venv/bin/activate
   ```

2. Install all requirements:

   ```bash
   pip install -r requirements.txt
   ```

3. Use the project 🎉

## Usage

To run, use the command below:

```bash
python run_nafnet_deblur.py --data-root images
```

Model inference works only on macOS. Note that model loading takes a long time; in general, the model is not suitable for mobile devices.

## Model

The model input is an RGB image. The model output is a float32 image of shape (H, W, 3), with each pixel value in the range [0.0, 255.0].

The input node is named `image`; the output node is named `result`.
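A minimal prediction sketch using those node names (assumes coremltools and Pillow; `images/blurry.png` is a hypothetical input path, and the repository's script may handle outputs differently):

```python
import numpy as np
import coremltools as ct
from PIL import Image

model = ct.models.MLModel(
    "weights/nafnet_reds_64_fp8.mlmodel",
    compute_units=ct.ComputeUnit.CPU_ONLY,
)

# The input image must already be padded to one of the accepted sizes.
img = Image.open("images/blurry.png").convert("RGB")

# Feed the RGB image to the `image` input node.
out = model.predict({"image": img})

# `result` is a float32 (H, W, 3) array with values in [0.0, 255.0].
deblurred = np.clip(out["result"], 0.0, 255.0).astype(np.uint8)
Image.fromarray(deblurred).save("deblurred.png")
```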

## Built With

## License

This project is licensed under the MIT License; see the LICENSE file for details.

## Authors