python basicsr/test.py --opt options/test_IPG_BasicSR_x2.yml --name train_IPG_SR_DF2K_x2_500000 --path__pretrain_network_g experiments\train_IPG_SR_DF2K_x2_500000\models\net_g_latest.pth
================>
Traceback (most recent call last):
File "E:\Efficient-Computing-master\LowLevel\IPG\basicsr\test.py", line 2, in
import torch
File "D:\install\Python\envs\ipg\Lib\site-packages\torch_init_.py", line 262, in load_dll_libraries()
File "D:\install\Python\envs\ipg\Lib\site-packages\torch_init.py", line 192, in load_dll_libraries
if cuda_version and builtins.all(
^^^^^^^^^^^^^
File "D:\install\Python\envs\ipg\Lib\site-packages\torch_init.py", line 193, in
not glob.glob(os.path.join(p, "cudart64*.dll")) for p in dll_paths
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\install\Python\envs\ipg\Lib\glob.py", line 28, in glob
return list(iglob(pathname, root_dir=root_dir, dir_fd=dir_fd, recursive=recursive,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\install\Python\envs\ipg\Lib\glob.py", line 97, in _iglob
for name in glob_in_dir(_join(root_dir, dirname), basename, dir_fd, dironly,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\install\Python\envs\ipg\Lib\glob.py", line 106, in _glob1
names = _listdir(dirname, dir_fd, dironly)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\install\Python\envs\ipg\Lib\glob.py", line 178, in _listdir
return list(it)
^^^^^^^^
File "D:\install\Python\envs\ipg\Lib\glob.py", line 167, in _iterdir
yield entry.name
KeyboardInterrupt
^CTraceback (most recent call last):
File "E:\Efficient-Computing-master\LowLevel\IPG\exec.py", line 169, in
os.system(eval_cmd)
KeyboardInterrupt
^C
(ipg) E:\Efficient-Computing-master\LowLevel\IPG>python exec.py --opt options/train_IPG_SR_x2.yml --eval_opt options/test_IPG_BasicSR_x2.yml --scale 2
Hello, could you tell me what is causing this? The weights were not saved successfully.
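A likely cause is visible in the re-run below: exec.py prefixes the training command with NCCL_IB_TIMEOUT=22 in POSIX shell style, which Windows cmd.exe rejects ("is not recognized as an internal or external command"), so basicsr/train.py never starts and net_g_latest.pth is never written; the later evaluation step then has nothing to load. A minimal workaround sketch, assuming exec.py builds its sub-commands as strings and runs them with os.system, as the exec.py frame in the traceback above shows (the variable names here are illustrative):

import os

# Export the variable in the parent environment instead of prefixing the command
# string; child processes started by os.system inherit it on Windows as well.
os.environ['NCCL_IB_TIMEOUT'] = '22'

# Same training command as echoed in the log, without the POSIX-style prefix.
train_cmd = ('python -m torch.distributed.launch --nproc_per_node=1 basicsr/train.py '
             '--opt options/train_IPG_SR_x2.yml --launcher pytorch --scale 2 '
             '--train__total_iter 500000 --name train_IPG_SR_DF2K_x2_500000 --force_install 0')
os.system(train_cmd)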
(ipg) E:\Efficient-Computing-master\LowLevel\IPG>python exec.py --opt options/train_IPG_SR_x2.yml --eval_opt options/test_IPG_BasicSR_x2.yml --scale 2
NCCL_IB_TIMEOUT=22 python -m torch.distributed.launch --nproc_per_node=1 basicsr/train.py --opt options/train_IPG_SR_x2.yml --launcher pytorch --scale 2 --train__total_iter 500000 --name train_IPG_SR_DF2K_x2_500000 --force_install 0
'NCCL_IB_TIMEOUT' is not recognized as an internal or external command, operable program or batch file.
python basicsr/test.py --opt options/test_IPG_BasicSR_x2.yml --name train_IPG_SR_DF2K_x2_500000 --path__pretrain_network_g experiments\train_IPG_SR_DF2K_x2_500000\models\net_g_latest.pth
================>
Traceback (most recent call last):
File "E:\Efficient-Computing-master\LowLevel\IPG\basicsr\test.py", line 2, in
import torch
File "D:\install\Python\envs\ipg\Lib\site-packages\torch_init_.py", line 262, in
load_dll_libraries()
File "D:\install\Python\envs\ipg\Lib\site-packages\torch_init.py", line 192, in load_dll_libraries
if cuda_version and builtins.all(
^^^^^^^^^^^^^
File "D:\install\Python\envs\ipg\Lib\site-packages\torch_init.py", line 193, in
not glob.glob(os.path.join(p, "cudart64*.dll")) for p in dll_paths
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\install\Python\envs\ipg\Lib\glob.py", line 28, in glob
return list(iglob(pathname, root_dir=root_dir, dir_fd=dir_fd, recursive=recursive,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\install\Python\envs\ipg\Lib\glob.py", line 97, in _iglob
for name in glob_in_dir(_join(root_dir, dirname), basename, dir_fd, dironly,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\install\Python\envs\ipg\Lib\glob.py", line 106, in _glob1
names = _listdir(dirname, dir_fd, dironly)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\install\Python\envs\ipg\Lib\glob.py", line 178, in _listdir
return list(it)
^^^^^^^^
File "D:\install\Python\envs\ipg\Lib\glob.py", line 167, in _iterdir
yield entry.name
KeyboardInterrupt
^CTraceback (most recent call last):
File "E:\Efficient-Computing-master\LowLevel\IPG\exec.py", line 169, in
os.system(eval_cmd)
KeyboardInterrupt
^C
(ipg) E:\Efficient-Computing-master\LowLevel\IPG>python exec.py --opt options/train_IPG_SR_x2.yml --eval_opt options/test_IPG_BasicSR_x2.yml --scale 2
NCCL_IB_TIMEOUT=22 python -m torch.distributed.launch --nproc_per_node=1 basicsr/train.py --opt options/train_IPG_SR_x2.yml --launcher pytorch --scale 2 --train__total_iter 500000 --name train_IPG_SR_DF2K_x2_500000 --force_install 0
'NCCL_IB_TIMEOUT' is not recognized as an internal or external command, operable program or batch file.
python basicsr/test.py --opt options/test_IPG_BasicSR_x2.yml --name train_IPG_SR_DF2K_x2_500000 --path__pretrain_network_g experiments\train_IPG_SR_DF2K_x2_500000\models\net_g_latest.pth
================>
Set opt ['name'] as train_IPG_SR_DF2K_x2_500000...
Value name set from test_IPG_SR_x2 as train_IPG_SR_DF2K_x2_500000!
Set opt ['path', 'pretrain_network_g'] as experiments\train_IPG_SR_DF2K_x2_500000\models\net_g_latest.pth...
Value pretrain_network_g set from None as experiments\train_IPG_SR_DF2K_x2_500000\models\net_g_latest.pth!
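As an aside, the "Set opt [...]" messages above show how double-underscore command-line flags such as --path__pretrain_network_g are mapped onto nested keys of the options dictionary. A minimal sketch of that mapping, with an illustrative helper name rather than the repository's actual code:

def set_nested_opt(opt: dict, flag_key: str, value):
    # 'path__pretrain_network_g' -> opt['path']['pretrain_network_g']
    keys = flag_key.split('__')
    node = opt
    for k in keys[:-1]:
        node = node.setdefault(k, {})
    print(f"Set opt {keys} as {value}...")
    old = node.get(keys[-1])
    node[keys[-1]] = value
    print(f"Value {keys[-1]} set from {old} as {value}!")

opt = {'name': 'test_IPG_SR_x2', 'path': {'pretrain_network_g': None}}
set_nested_opt(opt, 'path__pretrain_network_g',
               r'experiments\train_IPG_SR_DF2K_x2_500000\models\net_g_latest.pth')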
Disable distributed.
Path already exists. Rename it to E:\Efficient-Computing-master\LowLevel\IPG\results\train_IPG_SR_DF2K_x2_500000_archived_20241115_100223
2024-11-15 10:02:23,310 INFO:
name: train_IPG_SR_DF2K_x2_500000
model_type: IPGModel
scale: 2
num_gpu: 1
manual_seed: 10
datasets:[
test_1:[
name: Set5
type: PairedImageDataset
dataroot_gt: ../SRdata/BasicSR_SR_test/Set5/GTmod2
dataroot_lq: ../SRdata/BasicSR_SR_test/Set5/LRbicx2
io_backend:[
type: disk
]
phase: test
scale: 2
]
test_2:[
name: Set14
type: PairedImageDataset
dataroot_gt: ../SRdata/BasicSR_SR_test/Set14/GTmod2
dataroot_lq: ../SRdata/BasicSR_SR_test/Set14/LRbicx2
io_backend:[
type: disk
]
phase: test
scale: 2
]
test_3:[
name: BSDS100
type: PairedImageDataset
dataroot_gt: ../SRdata/BasicSR_SR_test/BSDS100/GTmod2
dataroot_lq: ../SRdata/BasicSR_SR_test/BSDS100/LRbicx2
io_backend:[
type: disk
]
phase: test
scale: 2
]
test_4:[
name: Urban100
type: PairedImageDataset
dataroot_gt: ../SRdata/BasicSR_SR_test/Urban100/GTmod2
dataroot_lq: ../SRdata/BasicSR_SR_test/Urban100/LRbicx2
io_backend:[
type: disk
]
phase: test
scale: 2
]
test_5:[
name: Manga109
type: PairedImageDataset
dataroot_gt: ../SRdata/BasicSR_SR_test/Manga109/GTmod2
dataroot_lq: ../SRdata/BasicSR_SR_test/Manga109/LRbicx2
io_backend:[
type: disk
]
phase: test
scale: 2
]
]
network_g:[
type: IPG
upscale: 2
in_chans: 3
img_size: 64
window_size: 16
img_range: 1.0
depths: [6, 6, 6, 6, 6, 6]
embed_dim: 180
num_heads: [6, 6, 6, 6, 6, 6]
mlp_ratio: 4
upsampler: pixelshuffle
resi_connection: 1conv
graph_flags: [1, 1, 1, 1, 1, 1]
stage_spec: [['GN', 'GS', 'GN', 'GS', 'GN', 'GS'], ['GN', 'GS', 'GN', 'GS', 'GN', 'GS'], ['GN', 'GS', 'GN', 'GS', 'GN', 'GS'], ['GN', 'GS', 'GN', 'GS', 'GN', 'GS'], ['GN', 'GS', 'GN', 'GS', 'GN', 'GS'], ['GN', 'GS', 'GN', 'GS', 'GN', 'GS']]
dist_type: cossim
top_k: 256
head_wise: 0
sample_size: 32
graph_switch: 1
flex_type: interdiff_plain
FFNtype: basic-dwconv3
conv_scale: 0.01
conv_type: dwconv3-gelu-conv1-ca
diff_scales: [10, 0, 0, 0, 0, 0]
fast_graph: 1
]
path:[
pretrain_network_g: experiments\train_IPG_SR_DF2K_x2_500000\models\net_g_latest.pth
strict_load_g: True
results_root: E:\Efficient-Computing-master\LowLevel\IPG\results\train_IPG_SR_DF2K_x2_500000
log: E:\Efficient-Computing-master\LowLevel\IPG\results\train_IPG_SR_DF2K_x2_500000
visualization: E:\Efficient-Computing-master\LowLevel\IPG\results\train_IPG_SR_DF2K_x2_500000\visualization
]
val:[
save_img: True
suffix: None
metrics:[
psnr:[
type: calculate_psnr
crop_border: 2
test_y_channel: True
]
ssim:[
type: calculate_ssim
crop_border: 2
test_y_channel: True
]
]
]
dist: False
rank: 0
world_size: 1
auto_resume: False
is_train: False
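The val.metrics block above computes PSNR and SSIM on the Y channel with a 2-pixel border crop (crop_border matching the x2 scale). As a plain NumPy illustration of what those two settings mean, not the BasicSR implementation itself:

import numpy as np

def psnr_y(img1: np.ndarray, img2: np.ndarray, crop_border: int = 2) -> float:
    """PSNR between two uint8 RGB images on the BT.601 Y channel, ignoring a border."""
    img1 = img1.astype(np.float64)
    img2 = img2.astype(np.float64)
    if crop_border > 0:
        img1 = img1[crop_border:-crop_border, crop_border:-crop_border]
        img2 = img2[crop_border:-crop_border, crop_border:-crop_border]
    # BT.601 RGB -> Y conversion, as commonly used for SR evaluation
    to_y = lambda im: im @ np.array([65.481, 128.553, 24.966]) / 255.0 + 16.0
    mse = np.mean((to_y(img1) - to_y(img2)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)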
2024-11-15 10:02:23,310 INFO: Dataset [PairedImageDataset] - Set5 is built.
2024-11-15 10:02:23,310 INFO: Number of test images in Set5: 5
2024-11-15 10:02:23,310 INFO: Dataset [PairedImageDataset] - Set14 is built.
2024-11-15 10:02:23,310 INFO: Number of test images in Set14: 14
2024-11-15 10:02:23,326 INFO: Dataset [PairedImageDataset] - BSDS100 is built.
2024-11-15 10:02:23,326 INFO: Number of test images in BSDS100: 100
2024-11-15 10:02:23,326 INFO: Dataset [PairedImageDataset] - Urban100 is built.
2024-11-15 10:02:23,326 INFO: Number of test images in Urban100: 100
2024-11-15 10:02:23,326 INFO: Dataset [PairedImageDataset] - Manga109 is built.
2024-11-15 10:02:23,326 INFO: Number of test images in Manga109: 109
D:\install\Python\envs\ipg\Lib\site-packages\torch\functional.py:534: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at C:\cb\pytorch_1000000000000\work\aten\src\ATen\native\TensorShape.cpp:3596.)
return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
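The torch.meshgrid UserWarning above is informational; newer PyTorch only asks callers to pass the indexing argument explicitly, for example:

import torch

# Passing indexing='ij' keeps the current default behaviour and silences the warning above.
ys, xs = torch.meshgrid(torch.arange(4), torch.arange(4), indexing='ij')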
2024-11-15 10:02:23,935 INFO: Network [IPG] is created.
2024-11-15 10:02:25,083 INFO: Network: IPG, with parameters: 18,129,887
2024-11-15 10:02:25,083 INFO: IPG(
(conv_first): Conv2d(3, 180, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(patch_embed): PatchEmbed(
(norm): LayerNorm((180,), eps=1e-05, elementwise_affine=True)
)
(patch_unembed): PatchUnEmbed()
(pos_drop): Dropout(p=0.0, inplace=False)
(layers): ModuleList(
(0): MGB(
(residual_group): BasicLayer(
dim=180, depth=6
(blocks): ModuleList(
(0): GAL(
dim=180, sampling_method=0, mlp_ratio=4
(norm1): LayerNorm((180,), eps=1e-05, elementwise_affine=True)
(grapher): IPG_Grapher(
dim=180, top_k=256, sample_size=(32, 32)
(proj_group): Linear(in_features=180, out_features=180, bias=True)
(proj_sample): Linear(in_features=180, out_features=360, bias=True)
(proj): Linear(in_features=180, out_features=180, bias=True)
(cpb_mlp): Sequential(
(0): Linear(in_features=2, out_features=512, bias=True)
(1): ReLU(inplace=True)
(2): Linear(in_features=512, out_features=6, bias=False)
)
)
(drop_path): Identity()
(norm2): LayerNorm((180,), eps=1e-05, elementwise_affine=True)
(mlp): ConvFFN(
(fc1): Linear(in_features=180, out_features=720, bias=True)
(act): GELU(approximate='none')
(before_add): Identity()
(after_add): Identity()
(dwconv): dwconv(
(depthwise_conv): Sequential(
(0): Conv2d(720, 720, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=720)
(1): GELU(approximate='none')
)
)
(fc2): Linear(in_features=720, out_features=180, bias=True)
(drop): Dropout(p=0.0, inplace=False)
)
(conv_block): CAB(
(cab): Sequential(
(0): Conv2d(180, 180, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=180)
(1): GELU(approximate='none')
(2): Conv2d(180, 180, kernel_size=(1, 1), stride=(1, 1))
(3): ChannelAttention(
(attention): Sequential(
(0): AdaptiveAvgPool2d(output_size=1)
(1): Conv2d(180, 6, kernel_size=(1, 1), stride=(1, 1))
(2): ReLU(inplace=True)
(3): Conv2d(6, 180, kernel_size=(1, 1), stride=(1, 1))
(4): Sigmoid()
)
)
)
)
)
(1): GAL(
dim=180, sampling_method=1, mlp_ratio=4
(norm1): LayerNorm((180,), eps=1e-05, elementwise_affine=True)
(grapher): IPG_Grapher(
dim=180, top_k=256, sample_size=(32, 32)
(proj_group): Linear(in_features=180, out_features=180, bias=True)
(proj_sample): Linear(in_features=180, out_features=360, bias=True)
(proj): Linear(in_features=180, out_features=180, bias=True)
(cpb_mlp): Sequential(
(0): Linear(in_features=2, out_features=512, bias=True)
(1): ReLU(inplace=True)
(2): Linear(in_features=512, out_features=6, bias=False)
)
)
(drop_path): DropPath()
(norm2): LayerNorm((180,), eps=1e-05, elementwise_affine=True)
(mlp): ConvFFN(
(fc1): Linear(in_features=180, out_features=720, bias=True)
(act): GELU(approximate='none')
(before_add): Identity()
(after_add): Identity()
(dwconv): dwconv(
(depthwise_conv): Sequential(
(0): Conv2d(720, 720, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=720)
(1): GELU(approximate='none')
)
)
(fc2): Linear(in_features=720, out_features=180, bias=True)
(drop): Dropout(p=0.0, inplace=False)
)
(conv_block): CAB(
(cab): Sequential(
(0): Conv2d(180, 180, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=180)
(1): GELU(approximate='none')
(2): Conv2d(180, 180, kernel_size=(1, 1), stride=(1, 1))
(3): ChannelAttention(
(attention): Sequential(
(0): AdaptiveAvgPool2d(output_size=1)
(1): Conv2d(180, 6, kernel_size=(1, 1), stride=(1, 1))
(2): ReLU(inplace=True)
(3): Conv2d(6, 180, kernel_size=(1, 1), stride=(1, 1))
(4): Sigmoid()
)
)
)
)
)
(2): GAL(
dim=180, sampling_method=0, mlp_ratio=4
(norm1): LayerNorm((180,), eps=1e-05, elementwise_affine=True)
(grapher): IPG_Grapher(
dim=180, top_k=256, sample_size=(32, 32)
(proj_group): Linear(in_features=180, out_features=180, bias=True)
(proj_sample): Linear(in_features=180, out_features=360, bias=True)
(proj): Linear(in_features=180, out_features=180, bias=True)
(cpb_mlp): Sequential(
(0): Linear(in_features=2, out_features=512, bias=True)
(1): ReLU(inplace=True)
(2): Linear(in_features=512, out_features=6, bias=False)
)
)
(drop_path): DropPath()
(norm2): LayerNorm((180,), eps=1e-05, elementwise_affine=True)
(mlp): ConvFFN(
(fc1): Linear(in_features=180, out_features=720, bias=True)
(act): GELU(approximate='none')
(before_add): Identity()
(after_add): Identity()
(dwconv): dwconv(
(depthwise_conv): Sequential(
(0): Conv2d(720, 720, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=720)
(1): GELU(approximate='none')
)
)
(fc2): Linear(in_features=720, out_features=180, bias=True)
(drop): Dropout(p=0.0, inplace=False)
)
(conv_block): CAB(
(cab): Sequential(
(0): Conv2d(180, 180, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=180)
(1): GELU(approximate='none')
(2): Conv2d(180, 180, kernel_size=(1, 1), stride=(1, 1))
(3): ChannelAttention(
(attention): Sequential(
(0): AdaptiveAvgPool2d(output_size=1)
(1): Conv2d(180, 6, kernel_size=(1, 1), stride=(1, 1))
(2): ReLU(inplace=True)
(3): Conv2d(6, 180, kernel_size=(1, 1), stride=(1, 1))
(4): Sigmoid()
)
)
)
)
)
(3): GAL(
dim=180, sampling_method=1, mlp_ratio=4
(norm1): LayerNorm((180,), eps=1e-05, elementwise_affine=True)
(grapher): IPG_Grapher(
dim=180, top_k=256, sample_size=(32, 32)
(proj_group): Linear(in_features=180, out_features=180, bias=True)
(proj_sample): Linear(in_features=180, out_features=360, bias=True)
(proj): Linear(in_features=180, out_features=180, bias=True)
(cpb_mlp): Sequential(
(0): Linear(in_features=2, out_features=512, bias=True)
(1): ReLU(inplace=True)
(2): Linear(in_features=512, out_features=6, bias=False)
)
)
(drop_path): DropPath()
(norm2): LayerNorm((180,), eps=1e-05, elementwise_affine=True)
(mlp): ConvFFN(
(fc1): Linear(in_features=180, out_features=720, bias=True)
(act): GELU(approximate='none')
(before_add): Identity()
(after_add): Identity()
(dwconv): dwconv(
(depthwise_conv): Sequential(
(0): Conv2d(720, 720, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=720)
(1): GELU(approximate='none')
)
)
(fc2): Linear(in_features=720, out_features=180, bias=True)
(drop): Dropout(p=0.0, inplace=False)
)
(conv_block): CAB(
(cab): Sequential(
(0): Conv2d(180, 180, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=180)
(1): GELU(approximate='none')
(2): Conv2d(180, 180, kernel_size=(1, 1), stride=(1, 1))
(3): ChannelAttention(
(attention): Sequential(
(0): AdaptiveAvgPool2d(output_size=1)
(1): Conv2d(180, 6, kernel_size=(1, 1), stride=(1, 1))
(2): ReLU(inplace=True)
(3): Conv2d(6, 180, kernel_size=(1, 1), stride=(1, 1))
(4): Sigmoid()
)
)
)
)
)
(4): GAL(
dim=180, sampling_method=0, mlp_ratio=4
(norm1): LayerNorm((180,), eps=1e-05, elementwise_affine=True)
(grapher): IPG_Grapher(
dim=180, top_k=256, sample_size=(32, 32)
(proj_group): Linear(in_features=180, out_features=180, bias=True)
(proj_sample): Linear(in_features=180, out_features=360, bias=True)
(proj): Linear(in_features=180, out_features=180, bias=True)
(cpb_mlp): Sequential(
(0): Linear(in_features=2, out_features=512, bias=True)
(1): ReLU(inplace=True)
(2): Linear(in_features=512, out_features=6, bias=False)
)
)
(drop_path): DropPath()
(norm2): LayerNorm((180,), eps=1e-05, elementwise_affine=True)
(mlp): ConvFFN(
(fc1): Linear(in_features=180, out_features=720, bias=True)
(act): GELU(approximate='none')
(before_add): Identity()
(after_add): Identity()
(dwconv): dwconv(
(depthwise_conv): Sequential(
(0): Conv2d(720, 720, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=720)
(1): GELU(approximate='none')
)
)
(fc2): Linear(in_features=720, out_features=180, bias=True)
(drop): Dropout(p=0.0, inplace=False)
)
(conv_block): CAB(
(cab): Sequential(
(0): Conv2d(180, 180, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=180)
(1): GELU(approximate='none')
(2): Conv2d(180, 180, kernel_size=(1, 1), stride=(1, 1))
(3): ChannelAttention(
(attention): Sequential(
(0): AdaptiveAvgPool2d(output_size=1)
(1): Conv2d(180, 6, kernel_size=(1, 1), stride=(1, 1))
(2): ReLU(inplace=True)
(3): Conv2d(6, 180, kernel_size=(1, 1), stride=(1, 1))
(4): Sigmoid()
)
)
)
)
)
(5): GAL(
dim=180, sampling_method=1, mlp_ratio=4
(norm1): LayerNorm((180,), eps=1e-05, elementwise_affine=True)
(grapher): IPG_Grapher(
dim=180, top_k=256, sample_size=(32, 32)
(proj_group): Linear(in_features=180, out_features=180, bias=True)
(proj_sample): Linear(in_features=180, out_features=360, bias=True)
(proj): Linear(in_features=180, out_features=180, bias=True)
(cpb_mlp): Sequential(
(0): Linear(in_features=2, out_features=512, bias=True)
(1): ReLU(inplace=True)
(2): Linear(in_features=512, out_features=6, bias=False)
)
)
(drop_path): DropPath()
(norm2): LayerNorm((180,), eps=1e-05, elementwise_affine=True)
(mlp): ConvFFN(
(fc1): Linear(in_features=180, out_features=720, bias=True)
(act): GELU(approximate='none')
(before_add): Identity()
(after_add): Identity()
(dwconv): dwconv(
(depthwise_conv): Sequential(
(0): Conv2d(720, 720, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=720)
(1): GELU(approximate='none')
)
)
(fc2): Linear(in_features=720, out_features=180, bias=True)
(drop): Dropout(p=0.0, inplace=False)
)
(conv_block): CAB(
(cab): Sequential(
(0): Conv2d(180, 180, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=180)
(1): GELU(approximate='none')
(2): Conv2d(180, 180, kernel_size=(1, 1), stride=(1, 1))
(3): ChannelAttention(
(attention): Sequential(
(0): AdaptiveAvgPool2d(output_size=1)
(1): Conv2d(180, 6, kernel_size=(1, 1), stride=(1, 1))
(2): ReLU(inplace=True)
(3): Conv2d(6, 180, kernel_size=(1, 1), stride=(1, 1))
(4): Sigmoid()
)
)
)
)
)
)
)
(conv): Conv2d(180, 180, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(patch_embed): PatchEmbed()
(patch_unembed): PatchUnEmbed()
)
(1-5): 5 x MGB(
(residual_group): BasicLayer(
dim=180, depth=6
(blocks): ModuleList(
(0): GAL(
dim=180, sampling_method=0, mlp_ratio=4
(norm1): LayerNorm((180,), eps=1e-05, elementwise_affine=True)
(grapher): IPG_Grapher(
dim=180, top_k=256, sample_size=(32, 32)
(proj_group): Linear(in_features=180, out_features=180, bias=True)
(proj_sample): Linear(in_features=180, out_features=360, bias=True)
(proj): Linear(in_features=180, out_features=180, bias=True)
(cpb_mlp): Sequential(
(0): Linear(in_features=2, out_features=512, bias=True)
(1): ReLU(inplace=True)
(2): Linear(in_features=512, out_features=6, bias=False)
)
)
(drop_path): DropPath()
(norm2): LayerNorm((180,), eps=1e-05, elementwise_affine=True)
(mlp): ConvFFN(
(fc1): Linear(in_features=180, out_features=720, bias=True)
(act): GELU(approximate='none')
(before_add): Identity()
(after_add): Identity()
(dwconv): dwconv(
(depthwise_conv): Sequential(
(0): Conv2d(720, 720, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=720)
(1): GELU(approximate='none')
)
)
(fc2): Linear(in_features=720, out_features=180, bias=True)
(drop): Dropout(p=0.0, inplace=False)
)
(conv_block): CAB(
(cab): Sequential(
(0): Conv2d(180, 180, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=180)
(1): GELU(approximate='none')
(2): Conv2d(180, 180, kernel_size=(1, 1), stride=(1, 1))
(3): ChannelAttention(
(attention): Sequential(
(0): AdaptiveAvgPool2d(output_size=1)
(1): Conv2d(180, 6, kernel_size=(1, 1), stride=(1, 1))
(2): ReLU(inplace=True)
(3): Conv2d(6, 180, kernel_size=(1, 1), stride=(1, 1))
(4): Sigmoid()
)
)
)
)
)
(1): GAL(
dim=180, sampling_method=1, mlp_ratio=4
(norm1): LayerNorm((180,), eps=1e-05, elementwise_affine=True)
(grapher): IPG_Grapher(
dim=180, top_k=256, sample_size=(32, 32)
(proj_group): Linear(in_features=180, out_features=180, bias=True)
(proj_sample): Linear(in_features=180, out_features=360, bias=True)
(proj): Linear(in_features=180, out_features=180, bias=True)
(cpb_mlp): Sequential(
(0): Linear(in_features=2, out_features=512, bias=True)
(1): ReLU(inplace=True)
(2): Linear(in_features=512, out_features=6, bias=False)
)
)
(drop_path): DropPath()
(norm2): LayerNorm((180,), eps=1e-05, elementwise_affine=True)
(mlp): ConvFFN(
(fc1): Linear(in_features=180, out_features=720, bias=True)
(act): GELU(approximate='none')
(before_add): Identity()
(after_add): Identity()
(dwconv): dwconv(
(depthwise_conv): Sequential(
(0): Conv2d(720, 720, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=720)
(1): GELU(approximate='none')
)
)
(fc2): Linear(in_features=720, out_features=180, bias=True)
(drop): Dropout(p=0.0, inplace=False)
)
(conv_block): CAB(
(cab): Sequential(
(0): Conv2d(180, 180, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=180)
(1): GELU(approximate='none')
(2): Conv2d(180, 180, kernel_size=(1, 1), stride=(1, 1))
(3): ChannelAttention(
(attention): Sequential(
(0): AdaptiveAvgPool2d(output_size=1)
(1): Conv2d(180, 6, kernel_size=(1, 1), stride=(1, 1))
(2): ReLU(inplace=True)
(3): Conv2d(6, 180, kernel_size=(1, 1), stride=(1, 1))
(4): Sigmoid()
)
)
)
)
)
(2): GAL(
dim=180, sampling_method=0, mlp_ratio=4
(norm1): LayerNorm((180,), eps=1e-05, elementwise_affine=True)
(grapher): IPG_Grapher(
dim=180, top_k=256, sample_size=(32, 32)
(proj_group): Linear(in_features=180, out_features=180, bias=True)
(proj_sample): Linear(in_features=180, out_features=360, bias=True)
(proj): Linear(in_features=180, out_features=180, bias=True)
(cpb_mlp): Sequential(
(0): Linear(in_features=2, out_features=512, bias=True)
(1): ReLU(inplace=True)
(2): Linear(in_features=512, out_features=6, bias=False)
)
)
(drop_path): DropPath()
(norm2): LayerNorm((180,), eps=1e-05, elementwise_affine=True)
(mlp): ConvFFN(
(fc1): Linear(in_features=180, out_features=720, bias=True)
(act): GELU(approximate='none')
(before_add): Identity()
(after_add): Identity()
(dwconv): dwconv(
(depthwise_conv): Sequential(
(0): Conv2d(720, 720, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=720)
(1): GELU(approximate='none')
)
)
(fc2): Linear(in_features=720, out_features=180, bias=True)
(drop): Dropout(p=0.0, inplace=False)
)
(conv_block): CAB(
(cab): Sequential(
(0): Conv2d(180, 180, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=180)
(1): GELU(approximate='none')
(2): Conv2d(180, 180, kernel_size=(1, 1), stride=(1, 1))
(3): ChannelAttention(
(attention): Sequential(
(0): AdaptiveAvgPool2d(output_size=1)
(1): Conv2d(180, 6, kernel_size=(1, 1), stride=(1, 1))
(2): ReLU(inplace=True)
(3): Conv2d(6, 180, kernel_size=(1, 1), stride=(1, 1))
(4): Sigmoid()
)
)
)
)
)
(3): GAL(
dim=180, sampling_method=1, mlp_ratio=4
(norm1): LayerNorm((180,), eps=1e-05, elementwise_affine=True)
(grapher): IPG_Grapher(
dim=180, top_k=256, sample_size=(32, 32)
(proj_group): Linear(in_features=180, out_features=180, bias=True)
(proj_sample): Linear(in_features=180, out_features=360, bias=True)
(proj): Linear(in_features=180, out_features=180, bias=True)
(cpb_mlp): Sequential(
(0): Linear(in_features=2, out_features=512, bias=True)
(1): ReLU(inplace=True)
(2): Linear(in_features=512, out_features=6, bias=False)
)
)
(drop_path): DropPath()
(norm2): LayerNorm((180,), eps=1e-05, elementwise_affine=True)
(mlp): ConvFFN(
(fc1): Linear(in_features=180, out_features=720, bias=True)
(act): GELU(approximate='none')
(before_add): Identity()
(after_add): Identity()
(dwconv): dwconv(
(depthwise_conv): Sequential(
(0): Conv2d(720, 720, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=720)
(1): GELU(approximate='none')
)
)
(fc2): Linear(in_features=720, out_features=180, bias=True)
(drop): Dropout(p=0.0, inplace=False)
)
(conv_block): CAB(
(cab): Sequential(
(0): Conv2d(180, 180, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=180)
(1): GELU(approximate='none')
(2): Conv2d(180, 180, kernel_size=(1, 1), stride=(1, 1))
(3): ChannelAttention(
(attention): Sequential(
(0): AdaptiveAvgPool2d(output_size=1)
(1): Conv2d(180, 6, kernel_size=(1, 1), stride=(1, 1))
(2): ReLU(inplace=True)
(3): Conv2d(6, 180, kernel_size=(1, 1), stride=(1, 1))
(4): Sigmoid()
)
)
)
)
)
(4): GAL(
dim=180, sampling_method=0, mlp_ratio=4
(norm1): LayerNorm((180,), eps=1e-05, elementwise_affine=True)
(grapher): IPG_Grapher(
dim=180, top_k=256, sample_size=(32, 32)
(proj_group): Linear(in_features=180, out_features=180, bias=True)
(proj_sample): Linear(in_features=180, out_features=360, bias=True)
(proj): Linear(in_features=180, out_features=180, bias=True)
(cpb_mlp): Sequential(
(0): Linear(in_features=2, out_features=512, bias=True)
(1): ReLU(inplace=True)
(2): Linear(in_features=512, out_features=6, bias=False)
)
)
(drop_path): DropPath()
(norm2): LayerNorm((180,), eps=1e-05, elementwise_affine=True)
(mlp): ConvFFN(
(fc1): Linear(in_features=180, out_features=720, bias=True)
(act): GELU(approximate='none')
(before_add): Identity()
(after_add): Identity()
(dwconv): dwconv(
(depthwise_conv): Sequential(
(0): Conv2d(720, 720, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=720)
(1): GELU(approximate='none')
)
)
(fc2): Linear(in_features=720, out_features=180, bias=True)
(drop): Dropout(p=0.0, inplace=False)
)
(conv_block): CAB(
(cab): Sequential(
(0): Conv2d(180, 180, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=180)
(1): GELU(approximate='none')
(2): Conv2d(180, 180, kernel_size=(1, 1), stride=(1, 1))
(3): ChannelAttention(
(attention): Sequential(
(0): AdaptiveAvgPool2d(output_size=1)
(1): Conv2d(180, 6, kernel_size=(1, 1), stride=(1, 1))
(2): ReLU(inplace=True)
(3): Conv2d(6, 180, kernel_size=(1, 1), stride=(1, 1))
(4): Sigmoid()
)
)
)
)
)
(5): GAL(
dim=180, sampling_method=1, mlp_ratio=4
(norm1): LayerNorm((180,), eps=1e-05, elementwise_affine=True)
(grapher): IPG_Grapher(
dim=180, top_k=256, sample_size=(32, 32)
(proj_group): Linear(in_features=180, out_features=180, bias=True)
(proj_sample): Linear(in_features=180, out_features=360, bias=True)
(proj): Linear(in_features=180, out_features=180, bias=True)
(cpb_mlp): Sequential(
(0): Linear(in_features=2, out_features=512, bias=True)
(1): ReLU(inplace=True)
(2): Linear(in_features=512, out_features=6, bias=False)
)
)
(drop_path): DropPath()
(norm2): LayerNorm((180,), eps=1e-05, elementwise_affine=True)
(mlp): ConvFFN(
(fc1): Linear(in_features=180, out_features=720, bias=True)
(act): GELU(approximate='none')
(before_add): Identity()
(after_add): Identity()
(dwconv): dwconv(
(depthwise_conv): Sequential(
(0): Conv2d(720, 720, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=720)
(1): GELU(approximate='none')
)
)
(fc2): Linear(in_features=720, out_features=180, bias=True)
(drop): Dropout(p=0.0, inplace=False)
)
(conv_block): CAB(
(cab): Sequential(
(0): Conv2d(180, 180, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=180)
(1): GELU(approximate='none')
(2): Conv2d(180, 180, kernel_size=(1, 1), stride=(1, 1))
(3): ChannelAttention(
(attention): Sequential(
(0): AdaptiveAvgPool2d(output_size=1)
(1): Conv2d(180, 6, kernel_size=(1, 1), stride=(1, 1))
(2): ReLU(inplace=True)
(3): Conv2d(6, 180, kernel_size=(1, 1), stride=(1, 1))
(4): Sigmoid()
)
)
)
)
)
)
)
(conv): Conv2d(180, 180, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(patch_embed): PatchEmbed()
(patch_unembed): PatchUnEmbed()
)
)
(norm): LayerNorm((180,), eps=1e-05, elementwise_affine=True)
(conv_after_body): Conv2d(180, 180, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv_before_upsample): Sequential(
(0): Conv2d(180, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): LeakyReLU(negative_slope=0.01, inplace=True)
)
(upsample): Upsample(
(0): Conv2d(64, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): PixelShuffle(upscale_factor=2)
)
(conv_last): Conv2d(64, 3, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
)
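The figure "with parameters: 18,129,887" above is the usual sum over all registered parameter tensors; a minimal sketch of how such a count is typically obtained (not the BasicSR code itself):

import torch.nn as nn

def count_parameters(model: nn.Module) -> int:
    # Total number of elements across every parameter tensor of the network.
    return sum(p.numel() for p in model.parameters())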
D:\install\Python\envs\ipg\Lib\site-packages\basicsr-1.0.0-py3.11.egg\basicsr\models\base_model.py:293: FutureWarning: You are using torch.load with weights_only=False (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for weights_only will be flipped to True. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via torch.serialization.add_safe_globals. We recommend you start setting weights_only=True for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
load_net = torch.load(load_path, map_location=lambda storage, loc: storage)
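The FutureWarning above comes from the torch.load call in basicsr's base_model.py. Assuming the checkpoint holds only tensors and plain containers (which is what BasicSR state dicts contain), the safer call the warning recommends would look roughly like this sketch, not a patch to basicsr itself:

import torch

load_path = r'experiments\train_IPG_SR_DF2K_x2_500000\models\net_g_latest.pth'
# weights_only=True restricts unpickling to tensors and primitive containers.
load_net = torch.load(load_path, map_location=lambda storage, loc: storage, weights_only=True)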
Traceback (most recent call last):
File "E:\Efficient-Computing-master\LowLevel\IPG\basicsr\test.py", line 48, in
test_pipeline(root_path)
File "E:\Efficient-Computing-master\LowLevel\IPG\basicsr\test.py", line 34, in test_pipeline
model = build_model(opt)
^^^^^^^^^^^^^^^^
File "D:\install\Python\envs\ipg\Lib\site-packages\basicsr-1.0.0-py3.11.egg\basicsr\models_init.py", line 27, in build_model
model = MODEL_REGISTRY.get(opt['model_type'])(opt)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\install\Python\envs\ipg\Lib\site-packages\basicsr-1.0.0-py3.11.egg\basicsr\models\sr_model.py", line 30, in init
self.load_network(self.net_g, load_path, self.opt['path'].get('strict_load_g', True), param_key)
File "D:\install\Python\envs\ipg\Lib\site-packages\basicsr-1.0.0-py3.11.egg\basicsr\models\base_model.py", line 293, in load_network
load_net = torch.load(load_path, map_location=lambda storage, loc: storage)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\install\Python\envs\ipg\Lib\site-packages\torch\serialization.py", line 1319, in load
with _open_file_like(f, "rb") as opened_file:
^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\install\Python\envs\ipg\Lib\site-packages\torch\serialization.py", line 659, in _open_file_like
return _open_file(name_or_buffer, mode)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\install\Python\envs\ipg\Lib\site-packages\torch\serialization.py", line 640, in init
super().init(open(name, mode))
^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: 'experiments\train_IPG_SR_DF2K_x2_500000\models\net_g_latest.pth'
Saved at: experiments\train_IPG_SR_DF2K_x2_500000
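The final FileNotFoundError follows directly from the failed training launch: net_g_latest.pth was never written, so the evaluation has nothing to load. A defensive sketch (a hypothetical guard, not part of the repository) that skips evaluation when the checkpoint is missing:

import os

ckpt = os.path.join('experiments', 'train_IPG_SR_DF2K_x2_500000', 'models', 'net_g_latest.pth')
if os.path.isfile(ckpt):
    os.system('python basicsr/test.py --opt options/test_IPG_BasicSR_x2.yml '
              '--name train_IPG_SR_DF2K_x2_500000 --path__pretrain_network_g ' + ckpt)
else:
    print(f'{ckpt} not found; training did not complete, so evaluation is skipped.')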