The tool is designed to improve image quality. At the moment, it accepts 512x512 JPG RGB images and returns images of the same size.
The before/after results look as follows:

*(before/after comparison image)*
To use the tool:
- Clone this project.
- Download the weights to the `sharp-in` project folder.
- Create a `MyImages` folder in the project folder.
- Put the files you want to process into the `MyImages` folder and run `predict.py`.
- The resulting images will appear shortly in the `MyImages` folder.
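The steps above can be sketched as a short shell session. The paths are illustrative, and the repository URL and weights location come from the project's own instructions, so the download and prediction lines are left as comments:

```shell
# Workflow sketch for the steps above (example paths only).
mkdir -p MyImages
# cp /path/to/photos/*.jpg MyImages/   # put the 512x512 JPGs to enhance here
# python predict.py                    # enhanced images land back in MyImages/
```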
The tool was trained to improve photos from drones. If you want to train it for another domain, create your own dataset with the `prepare_dataset_for_superresolution.ipynb` notebook.
You can use any images for training, provided they meet the following requirements:
- 512x512 images as the target data.
- The same images at reduced quality as the training data.
The dataset preparation notebook works with both annotated and non-annotated images. From each image in the `dataset_paths` folders it:
- Takes a crop of `initial_crop_size`.
- Resizes the crop to `crop_size` - these will be the y images.
- Compresses it with the `compress_ratios` ratios and reshapes it back to `output_size` - these will be the X images.
- Stores the resulting X and y images in the respective folders inside `crops_folder`.
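The crop/degrade steps above can be sketched with Pillow. The parameter names follow the notebook, but the values and the exact degradation method (downscale by each ratio, then upscale back) are assumptions for illustration, not the notebook's actual code:

```python
from PIL import Image

# Illustrative values; the notebook defines its own.
initial_crop_size = 1024   # size of the crop taken from the source image
crop_size = 512            # target (y) size
output_size = 512          # size the degraded (X) images are reshaped back to
compress_ratios = (2, 4)   # assumed: downscale factors used to reduce quality

def make_pair(img, left=0, top=0):
    """Return (X_images, y_image) for one crop of a source image."""
    crop = img.crop((left, top, left + initial_crop_size, top + initial_crop_size))
    y = crop.resize((crop_size, crop_size), Image.BICUBIC)
    xs = []
    for r in compress_ratios:
        # Degrade: shrink by the ratio, then reshape back to output_size.
        small = y.resize((crop_size // r, crop_size // r), Image.BICUBIC)
        xs.append(small.resize((output_size, output_size), Image.BICUBIC))
    return xs, y

# Demo on a synthetic image.
demo = Image.new("RGB", (1200, 1200), "gray")
X_imgs, y_img = make_pair(demo)
```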
You can use the notebook with an annotated dataset with two types of crops:
- To get crops around target areas, run `get_target_crops()`.
- To get `crops_per_image` random crops from each source image, run `get_random_crops()`.
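A minimal sketch of how `get_random_crops()` might pick crop origins. The helper name and logic here are assumptions for illustration, not the notebook's actual implementation:

```python
import random

crops_per_image = 16  # example value; set in the notebook's parameters

def random_crop_origins(width, height, crop, n, seed=None):
    """Pick n random top-left corners for crop x crop windows inside an image."""
    rng = random.Random(seed)
    return [
        (rng.randint(0, width - crop), rng.randint(0, height - crop))
        for _ in range(n)
    ]

origins = random_crop_origins(2000, 1500, 1024, crops_per_image, seed=0)
```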
Other parameters:
- `b_files_per_dataset`: number of files in the folder used for target crops.
- `r_files_per_dataset`: number of files in the folder used for random crops.
When the dataset is prepared, run `train.py` to start the training.
The recommended settings to start with: the Adam optimizer with lr=1e-3 and a cosine LR scheduler. A batch size of 8 or 16 is acceptable.
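For reference, a cosine scheduler anneals the learning rate from its initial value down to a minimum over the run. A minimal sketch of the schedule (matching the shape of, e.g., PyTorch's `CosineAnnealingLR`; assumed here rather than taken from `train.py`):

```python
import math

def cosine_lr(step, total_steps, lr_max=1e-3, lr_min=0.0):
    """Cosine-annealed learning rate: lr_max at step 0, lr_min at total_steps."""
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * step / total_steps))
```

With the recommended lr=1e-3, the rate starts at 1e-3, reaches half of that midway through training, and decays smoothly toward `lr_min` by the final step.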