This error shows up after starting train.py with the configuration that ships with traiNNer. Fresh install, nothing modified except the train_sr.yml.
23-05-31 19:09:48.945 - INFO: Random seed: 0
23-05-31 19:09:49.264 - INFO: Dataset [AlignedDataset - DIV2K] is created.
23-05-31 19:09:49.266 - INFO: Number of train images: 14,361, epoch iters: 1,795
23-05-31 19:09:49.266 - INFO: Total epochs needed: 279 for iters 500,000
23-05-31 19:09:49.266 - INFO: Dataset [AlignedDataset - val_set14_part] is created.
23-05-31 19:09:49.267 - INFO: Number of val images in [val_set14_part]: 1
23-05-31 19:09:49.624 - INFO: AMP library available
23-05-31 19:09:51.252 - INFO: Initialization method [kaiming]
23-05-31 19:09:51.547 - INFO: Initialization method [kaiming]
23-05-31 19:09:51.670 - INFO: Loading pretrained model for G [..\experiments\pretrained_models\RRDB_PSNR_x4.pth]
23-05-31 19:09:52.916 - INFO: GAN enabled
23-05-31 19:09:52.922 - INFO: AMP enabled
23-05-31 19:09:52.923 - INFO: norm gradient clip enabled. Clip value: 0.1.
23-05-31 19:09:52.935 - INFO: Network G structure: DataParallel - RRDBNet, with parameters: 16,697,987
23-05-31 19:09:52.936 - INFO: Network D structure: DataParallel - Discriminator_VGG, with parameters: 14,502,281
23-05-31 19:09:52.936 - INFO: Model [SRModel] created.
23-05-31 19:09:52.936 - INFO: Start training from epoch: 0, iter: 0
E:\nn\trainner\codes\models\base_model.py:921: FutureWarning: Non-finite norm encountered in torch.nn.utils.clip_grad_norm_; continuing anyway. Note that the default behavior will change in a future release to error out if a non-finite total norm is encountered. At that point, setting error_if_nonfinite=false will be required to retain the old behavior.
self.grad_clip(
C:\Users\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\optim\lr_scheduler.py:129: UserWarning: Detected call of lr_scheduler.step() before optimizer.step(). In PyTorch 1.1.0 and later, you should call them in the opposite order: optimizer.step() before lr_scheduler.step(). Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
warnings.warn("Detected call of lr_scheduler.step() before optimizer.step(). "
Traceback (most recent call last):
File "E:\nn\trainner\codes\train.py", line 500, in
main()
File "E:\nn\trainner\codes\train.py", line 496, in main
fit(model, opt, dataloaders, steps_states, data_params, loggers)
File "E:\nn\trainner\codes\train.py", line 224, in fit
for n, train_data in enumerate(dataloaders['train'], start=1):
File "C:\Users\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\utils\data\dataloader.py", line 521, in next
data = self._next_data()
File "C:\Users\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\utils\data\dataloader.py", line 1203, in _next_data
return self._process_data(data)
File "C:\Users\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\utils\data\dataloader.py", line 1229, in _process_data
data.reraise()
File "C:\Users\AppData\Local\Programs\Python\Python39\lib\site-packages\torch_utils.py", line 425, in reraise
raise self.exc_type(msg)
cv2.error: Caught error in DataLoader worker process 0.
Original Traceback (most recent call last):
File "C:\Users\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\utils\data_utils\worker.py", line 287, in _worker_loop
data = fetcher.fetch(index)
File "C:\Users\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\utils\data_utils\fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "C:\Users\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\utils\data_utils\fetch.py", line 44, in
data = [self.dataset[idx] for idx in possibly_batched_index]
File "E:\nn\trainner\codes\data\aligned_dataset.py", line 117, in getitem
img_A, img_B = paired_imgs_check(
File "E:\nn\trainner\codes\dataops\augmentations.py", line 1388, in paired_imgs_check
img_A, img_B = shape_change_fn(
File "E:\nn\trainner\codes\dataops\augmentations.py", line 1141, in shape_change_fn
img_A = transforms.Resize((int(h/scale), int(w/scale)),
File "E:\nn\trainner\codes\dataops\augmennt\augmennt\transforms.py", line 192, in call
return F.resize(img, self.size, self.interpolation)
File "E:\nn\trainner\codes\dataops\augmennt\augmennt\common.py", line 211, in wrapped_function
result = func(img, *args, **kwargs)
File "E:\nn\trainner\codes\dataops\augmennt\augmennt\functional.py", line 187, in resize
output = cv2.resize(img, dsize=(size[1], size[0]), interpolation=_cv2_str2interpolation[interpolation])
cv2.error: OpenCV(4.7.0) D:\a\opencv-python\opencv-python\opencv\modules\imgproc\src\resize.cpp:4065: error: (-215:Assertion failed) inv_scale_x > 0 in function 'cv::resize'
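
For what it's worth, this OpenCV assertion fires whenever cv2.resize is handed a dsize with a zero dimension and no fx/fy factors. In the traceback above, shape_change_fn computes the target as (int(h/scale), int(w/scale)), so any training image with a side shorter than the scale factor (4, matching the RRDB_PSNR_x4 pretrain) truncates to 0. A minimal sketch that reproduces the assertion (the 3x3 image is just an illustration, not from my dataset):

```python
import cv2
import numpy as np

scale = 4
img = np.zeros((3, 3, 3), dtype=np.uint8)  # any image with a side < scale
h, w = img.shape[:2]

# int(3 / 4) == 0, so dsize becomes (0, 0) and OpenCV raises
# "(-215:Assertion failed) inv_scale_x > 0" -- the same error as above
cv2.resize(img, dsize=(int(w / scale), int(h / scale)))
```

If that is what's happening here, a quick scan of the dataset folder should surface the offending files (the path below is a placeholder for whatever HR/LR folders train_sr.yml points at):

```python
import os
import cv2

scale = 4
dataset_dir = r"E:\nn\datasets\DIV2K"  # hypothetical path; use the folders from train_sr.yml

for name in os.listdir(dataset_dir):
    img = cv2.imread(os.path.join(dataset_dir, name))
    if img is None:
        print("unreadable:", name)  # corrupt/non-image files can also kill workers
    elif min(img.shape[:2]) < scale:
        h, w = img.shape[:2]
        print(f"smaller than scale {scale}: {name} ({w}x{h})")
```

Dropping or padding any flagged images is probably the sane fix; clamping the target size to at least 1 pixel inside shape_change_fn would only paper over the bad data.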
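Separately, the lr_scheduler UserWarning in the log is unrelated to the crash: PyTorch just wants the optimizer stepped before the scheduler. A self-contained sketch of the expected ordering (a toy model and loop for illustration, not traiNNer's actual training code):

```python
import torch

# illustrative ordering only, not traiNNer's loop
model = torch.nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10)

for _ in range(3):
    optimizer.zero_grad()
    loss = model(torch.randn(8, 4)).mean()
    loss.backward()
    optimizer.step()   # update the weights first...
    scheduler.step()   # ...then advance the LR schedule, which avoids the warning
```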