I encountered the following error while training with the ScanNetV2 dataset, what should I do? #85

Open
FAkor1 opened this issue Oct 24, 2024 · 2 comments

FAkor1 commented Oct 24, 2024

Traceback (most recent call last):
  File "tools/train.py", line 135, in <module>
    main()
  File "tools/train.py", line 131, in main
    runner.train()
  File "/root/miniconda3/envs/tr3d/lib/python3.8/site-packages/mmengine/runner/runner.py", line 1777, in train
    model = self.train_loop.run()  # type: ignore
  File "/root/miniconda3/envs/tr3d/lib/python3.8/site-packages/mmengine/runner/loops.py", line 98, in run
    self.run_epoch()
  File "/root/miniconda3/envs/tr3d/lib/python3.8/site-packages/mmengine/runner/loops.py", line 115, in run_epoch
    self.run_iter(idx, data_batch)
  File "/root/miniconda3/envs/tr3d/lib/python3.8/site-packages/mmengine/runner/loops.py", line 131, in run_iter
    outputs = self.runner.model.train_step(
  File "/root/miniconda3/envs/tr3d/lib/python3.8/site-packages/mmengine/model/base_model/base_model.py", line 114, in train_step
    losses = self._run_forward(data, mode='loss')  # type: ignore
  File "/root/miniconda3/envs/tr3d/lib/python3.8/site-packages/mmengine/model/base_model/base_model.py", line 361, in _run_forward
    results = self(**data, mode=mode)
  File "/root/miniconda3/envs/tr3d/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/miniconda3/envs/tr3d/lib/python3.8/site-packages/mmdet3d/models/detectors/base.py", line 75, in forward
    return self.loss(inputs, data_samples, **kwargs)
  File "/root/autodl-tmp/oneformer3d-main/oneformer3d/oneformer3d.py", line 369, in loss
    x = self.decoder(x, queries)
  File "/root/miniconda3/envs/tr3d/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/autodl-tmp/oneformer3d-main/oneformer3d/query_decoder.py", line 340, in forward
    return self.forward_iter_pred(x, queries)
  File "/root/autodl-tmp/oneformer3d-main/oneformer3d/query_decoder.py", line 457, in forward_iter_pred
    queries = self.cross_attn_layers[i](inst_feats, queries, attn_mask)
  File "/root/miniconda3/envs/tr3d/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/autodl-tmp/oneformer3d-main/oneformer3d/query_decoder.py", line 52, in forward
    output, _ = self.attn(queries[i], k, v, attn_mask=attn_mask)
  File "/root/miniconda3/envs/tr3d/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/miniconda3/envs/tr3d/lib/python3.8/site-packages/torch/nn/modules/activation.py", line 1003, in forward
    attn_output, attn_output_weights = F.multi_head_attention_forward(
  File "/root/miniconda3/envs/tr3d/lib/python3.8/site-packages/torch/nn/functional.py", line 4967, in multi_head_attention_forward
    tgt_len, bsz, embed_dim = query.shape
ValueError: not enough values to unpack (expected 3, got 2)
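For context on the failing line: in PyTorch 1.10, F.multi_head_attention_forward assumes a 3-D query of shape (tgt_len, bsz, embed_dim), so handing nn.MultiheadAttention a 2-D (unbatched) query reproduces exactly this unpacking error. A minimal sketch, separate from the repository code and with made-up sequence lengths (only embed_dim=256 and num_heads=8 mirror the decoder config):

import torch
import torch.nn as nn

# Illustrative sizes; only embed_dim=256 and num_heads=8 follow the decoder config.
attn = nn.MultiheadAttention(embed_dim=256, num_heads=8)

q_2d = torch.randn(100, 256)  # 2-D (unbatched) query: (num_queries, embed_dim)
kv = torch.randn(500, 256)    # 2-D key/value

try:
    # On PyTorch 1.10 this reaches F.multi_head_attention_forward, which runs
    # `tgt_len, bsz, embed_dim = query.shape` and raises
    # "ValueError: not enough values to unpack (expected 3, got 2)".
    attn(q_2d, kv, kv)
except ValueError as e:
    print(e)

# With an explicit batch dimension the tensors match the expected
# (tgt_len, bsz, embed_dim) layout and the call succeeds.
out, _ = attn(q_2d.unsqueeze(1), kv.unsqueeze(1), kv.unsqueeze(1))
print(out.shape)  # torch.Size([100, 1, 256])

If I remember correctly, support for unbatched 2-D inputs to nn.MultiheadAttention only arrived in later PyTorch releases (around 1.11), which could explain why the PyTorch 1.10.0 environment reported below trips over it; that is a guess about versions, not a confirmed diagnosis of this issue.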


filaPro (Owner) commented Oct 29, 2024

Please provide more info, e.g. the full output log of this run.


FAkor1 commented Nov 11, 2024

2024/10/24 10:02:26 - mmengine - INFO -

System environment:
sys.platform: linux
Python: 3.8.16 (default, Mar 2 2023, 03:21:46) [GCC 11.2.0]
CUDA available: True
MUSA available: False
numpy_random_seed: 1509538789
GPU 0: NVIDIA GeForce RTX 3090
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 11.3, V11.3.109
GCC: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
PyTorch: 1.10.0
PyTorch compiling details: PyTorch built with:

  • GCC 7.3

  • C++ Version: 201402

  • Intel(R) oneAPI Math Kernel Library Version 2021.4-Product Build 20210904 for Intel(R) 64 architecture applications

  • Intel(R) MKL-DNN v2.2.3 (Git Hash 7336ca9f055cf1bfa13efb658fe15dc9b41f0740)

  • OpenMP 201511 (a.k.a. OpenMP 4.5)

  • LAPACK is enabled (usually provided by MKL)

  • NNPACK is enabled

  • CPU capability usage: AVX512

  • CUDA Runtime 11.3

  • NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_37,code=compute_37

  • CuDNN 8.2

  • Magma 2.5.2

  • Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.3, CUDNN_VERSION=8.2.0, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.10.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON,

TorchVision: 0.11.0
OpenCV: 4.6.0
MMEngine: 0.10.5

Runtime environment:
cudnn_benchmark: False
mp_cfg: {'mp_start_method': 'fork', 'opencv_num_threads': 0}
dist_cfg: {'backend': 'nccl'}
seed: 1509538789
Distributed launcher: none
Distributed training: False
GPU number: 1
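
(Side note: the version facts above can be re-checked with plain PyTorch calls; a tiny sketch, nothing project-specific.)

import torch

# Should echo the environment report: 1.10.0, CUDA 11.3, a single RTX 3090.
print(torch.__version__)
print(torch.version.cuda)
print(torch.cuda.is_available(), torch.cuda.device_count())
print(torch.cuda.get_device_name(0))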

2024/10/24 10:02:27 - mmengine - INFO - Config:
backend_args = None
class_names = [
'wall',
'floor',
'cabinet',
'bed',
'chair',
'sofa',
'table',
'door',
'window',
'bookshelf',
'picture',
'counter',
'desk',
'curtain',
'refrigerator',
'showercurtrain',
'toilet',
'sink',
'bathtub',
'otherfurniture',
'unlabeled',
]
custom_hooks = [
dict(after_iter=True, type='EmptyCacheHook'),
]
custom_imports = dict(imports=[
'oneformer3d',
])
data_prefix = dict(
pts='points',
pts_instance_mask='instance_mask',
pts_semantic_mask='semantic_mask',
sp_pts_mask='super_points')
data_root = 'data/scannet/'
dataset_type = 'ScanNetSegDataset_'
default_hooks = dict(
checkpoint=dict(
scope='mmdet3d',
interval=1,
max_keep_ckpts=16,
type='CheckpointHook'),
logger=dict(scope='mmdet3d', interval=50, type='LoggerHook'),
param_scheduler=dict(scope='mmdet3d', type='ParamSchedulerHook'),
sampler_seed=dict(scope='mmdet3d', type='DistSamplerSeedHook'),
timer=dict(scope='mmdet3d', type='IterTimerHook'),
visualization=dict(scope='mmdet3d', type='Det3DVisualizationHook'))
default_scope = 'mmdet3d'
env_cfg = dict(
cudnn_benchmark=False,
dist_cfg=dict(backend='nccl'),
mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0))
eval_pipeline = [
dict(
scope='mmdet3d',
backend_args=None,
coord_type='DEPTH',
load_dim=6,
shift_height=False,
type='LoadPointsFromFile',
use_color=True,
use_dim=[
0,
1,
2,
3,
4,
5,
]),
dict(scope='mmdet3d', color_mean=None, type='NormalizePointsColor'),
dict(scope='mmdet3d', keys=[
'points',
], type='Pack3DDetInputs'),
]
input_modality = dict(use_camera=False, use_lidar=True)
inst_mapping = [
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
14,
16,
24,
28,
33,
34,
36,
39,
]
label2cat = dict({
0: 'wall',
1: 'floor',
10: 'picture',
11: 'counter',
12: 'desk',
13: 'curtain',
14: 'refrigerator',
15: 'showercurtrain',
16: 'toilet',
17: 'sink',
18: 'bathtub',
19: 'otherfurniture',
2: 'cabinet',
20: 'unlabeled',
3: 'bed',
4: 'chair',
5: 'sofa',
6: 'table',
7: 'door',
8: 'window',
9: 'bookshelf'
})
launcher = 'none'
load_from = 'work_dirs/tmp/sstnet_scannet.pth'
log_level = 'INFO'
log_processor = dict(
scope='mmdet3d', by_epoch=True, type='LogProcessor', window_size=50)
metainfo = dict(
classes=(
'wall',
'floor',
'cabinet',
'bed',
'chair',
'sofa',
'table',
'door',
'window',
'bookshelf',
'picture',
'counter',
'desk',
'curtain',
'refrigerator',
'showercurtrain',
'toilet',
'sink',
'bathtub',
'otherfurniture',
))
metric_meta = dict(
classes=[
'wall',
'floor',
'cabinet',
'bed',
'chair',
'sofa',
'table',
'door',
'window',
'bookshelf',
'picture',
'counter',
'desk',
'curtain',
'refrigerator',
'showercurtrain',
'toilet',
'sink',
'bathtub',
'otherfurniture',
'unlabeled',
],
dataset_name='ScanNet',
ignore_index=[
20,
],
label2cat=dict({
0: 'wall',
1: 'floor',
10: 'picture',
11: 'counter',
12: 'desk',
13: 'curtain',
14: 'refrigerator',
15: 'showercurtrain',
16: 'toilet',
17: 'sink',
18: 'bathtub',
19: 'otherfurniture',
2: 'cabinet',
20: 'unlabeled',
3: 'bed',
4: 'chair',
5: 'sofa',
6: 'table',
7: 'door',
8: 'window',
9: 'bookshelf'
}))
model = dict(
backbone=dict(
num_planes=[
32,
64,
96,
128,
160,
],
return_blocks=True,
type='SpConvUNet'),
criterion=dict(
inst_criterion=dict(
fix_dice_loss_weight=True,
fix_mean_loss=True,
iter_matcher=True,
loss_weight=[
0.5,
1.0,
1.0,
0.5,
],
matcher=dict(
costs=[
dict(type='QueryClassificationCost', weight=0.5),
dict(type='MaskBCECost', weight=1.0),
dict(type='MaskDiceCost', weight=1.0),
],
topk=1,
type='SparseMatcher'),
non_object_weight=0.1,
num_classes=18,
type='InstanceCriterion'),
num_semantic_classes=20,
sem_criterion=dict(
ignore_index=20, loss_weight=0.2, type='ScanNetSemanticCriterion'),
type='ScanNetUnifiedCriterion'),
data_preprocessor=dict(type='Det3DDataPreprocessor_'),
decoder=dict(
activation_fn='gelu',
attn_mask=True,
d_model=256,
dropout=0.0,
fix_attention=True,
hidden_dim=1024,
in_channels=32,
iter_pred=True,
num_heads=8,
num_instance_classes=18,
num_instance_queries=0,
num_layers=6,
num_semantic_classes=20,
num_semantic_linears=1,
num_semantic_queries=0,
objectness_flag=False,
type='ScanNetQueryDecoder'),
in_channels=6,
min_spatial_shape=128,
num_channels=32,
num_classes=18,
query_thr=0.5,
test_cfg=dict(
inst_score_thr=0.0,
matrix_nms_kernel='linear',
nms=True,
npoint_thr=100,
obj_normalization=True,
pan_score_thr=0.5,
sp_score_thr=0.4,
stuff_classes=[
0,
1,
],
topk_insts=600),
train_cfg=dict(),
type='ScanNetOneFormer3D',
voxel_size=0.02)
num_channels = 32
num_instance_classes = 18
num_points = 8192
num_semantic_classes = 20
optim_wrapper = dict(
clip_grad=dict(max_norm=10, norm_type=2),
optimizer=dict(lr=0.0001, type='AdamW', weight_decay=0.05),
type='OptimWrapper')
param_scheduler = dict(begin=0, end=512, power=0.9, type='PolyLR')
resume = False
sem_mapping = [
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
14,
16,
24,
28,
33,
34,
36,
39,
]
test_cfg = dict(type='TestLoop')
test_dataloader = dict(
batch_size=1,
dataset=dict(
scope='mmdet3d',
ann_file='scannet_oneformer3d_infos_val.pkl',
backend_args=None,
data_prefix=dict(
pts='points',
pts_instance_mask='instance_mask',
pts_semantic_mask='semantic_mask',
sp_pts_mask='super_points'),
data_root='data/scannet/',
ignore_index=20,
metainfo=dict(
classes=(
'wall',
'floor',
'cabinet',
'bed',
'chair',
'sofa',
'table',
'door',
'window',
'bookshelf',
'picture',
'counter',
'desk',
'curtain',
'refrigerator',
'showercurtrain',
'toilet',
'sink',
'bathtub',
'otherfurniture',
)),
modality=dict(use_camera=False, use_lidar=True),
pipeline=[
dict(
coord_type='DEPTH',
load_dim=6,
shift_height=False,
type='LoadPointsFromFile',
use_color=True,
use_dim=[
0,
1,
2,
3,
4,
5,
]),
dict(
type='LoadAnnotations3D_',
with_bbox_3d=False,
with_label_3d=False,
with_mask_3d=True,
with_seg_3d=True,
with_sp_mask_3d=True),
dict(type='PointSegClassMapping'),
dict(
flip=False,
img_scale=(
1333,
800,
),
pts_scale_ratio=1,
transforms=[
dict(
color_mean=[
127.5,
127.5,
127.5,
],
type='NormalizePointsColor_'),
dict(
merge_non_stuff_cls=False,
num_classes=20,
stuff_classes=[
0,
1,
],
type='AddSuperPointAnnotations'),
],
type='MultiScaleFlipAug3D'),
dict(keys=[
'points',
'sp_pts_mask',
], type='Pack3DDetInputs_'),
],
test_mode=True,
type='ScanNetSegDataset_'),
drop_last=False,
num_workers=1,
persistent_workers=True,
sampler=dict(scope='mmdet3d', shuffle=False, type='DefaultSampler'))
test_evaluator = dict(
scope='mmdet3d',
id_offset=65536,
inst_mapping=[
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
14,
16,
24,
28,
33,
34,
36,
39,
],
metric_meta=dict(
classes=[
'wall',
'floor',
'cabinet',
'bed',
'chair',
'sofa',
'table',
'door',
'window',
'bookshelf',
'picture',
'counter',
'desk',
'curtain',
'refrigerator',
'showercurtrain',
'toilet',
'sink',
'bathtub',
'otherfurniture',
'unlabeled',
],
dataset_name='ScanNet',
ignore_index=[
20,
],
label2cat=dict({
0: 'wall',
1: 'floor',
10: 'picture',
11: 'counter',
12: 'desk',
13: 'curtain',
14: 'refrigerator',
15: 'showercurtrain',
16: 'toilet',
17: 'sink',
18: 'bathtub',
19: 'otherfurniture',
2: 'cabinet',
20: 'unlabeled',
3: 'bed',
4: 'chair',
5: 'sofa',
6: 'table',
7: 'door',
8: 'window',
9: 'bookshelf'
})),
min_num_points=1,
sem_mapping=[
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
14,
16,
24,
28,
33,
34,
36,
39,
],
stuff_class_inds=[
0,
1,
],
thing_class_inds=[
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
],
type='UnifiedSegMetric')
test_pipeline = [
dict(
coord_type='DEPTH',
load_dim=6,
shift_height=False,
type='LoadPointsFromFile',
use_color=True,
use_dim=[
0,
1,
2,
3,
4,
5,
]),
dict(
type='LoadAnnotations3D_',
with_bbox_3d=False,
with_label_3d=False,
with_mask_3d=True,
with_seg_3d=True,
with_sp_mask_3d=True),
dict(type='PointSegClassMapping'),
dict(
flip=False,
img_scale=(
1333,
800,
),
pts_scale_ratio=1,
transforms=[
dict(
color_mean=[
127.5,
127.5,
127.5,
],
type='NormalizePointsColor_'),
dict(
merge_non_stuff_cls=False,
num_classes=20,
stuff_classes=[
0,
1,
],
type='AddSuperPointAnnotations'),
],
type='MultiScaleFlipAug3D'),
dict(keys=[
'points',
'sp_pts_mask',
], type='Pack3DDetInputs_'),
]
train_cfg = dict(
dynamic_intervals=[
(
1,
16,
),
(
496,
1,
),
],
max_epochs=512,
type='EpochBasedTrainLoop')
train_dataloader = dict(
batch_size=4,
dataset=dict(
scope='mmdet3d',
ann_file='scannet_oneformer3d_infos_train.pkl',
backend_args=None,
data_prefix=dict(
pts='points',
pts_instance_mask='instance_mask',
pts_semantic_mask='semantic_mask',
sp_pts_mask='super_points'),
data_root='data/scannet/',
ignore_index=20,
metainfo=dict(
classes=(
'wall',
'floor',
'cabinet',
'bed',
'chair',
'sofa',
'table',
'door',
'window',
'bookshelf',
'picture',
'counter',
'desk',
'curtain',
'refrigerator',
'showercurtrain',
'toilet',
'sink',
'bathtub',
'otherfurniture',
)),
modality=dict(use_camera=False, use_lidar=True),
pipeline=[
dict(
coord_type='DEPTH',
load_dim=6,
shift_height=False,
type='LoadPointsFromFile',
use_color=True,
use_dim=[
0,
1,
2,
3,
4,
5,
]),
dict(
type='LoadAnnotations3D_',
with_bbox_3d=False,
with_label_3d=False,
with_mask_3d=True,
with_seg_3d=True,
with_sp_mask_3d=True),
dict(type='PointSegClassMapping'),
dict(
flip_ratio_bev_horizontal=0.5,
flip_ratio_bev_vertical=0.5,
sync_2d=False,
type='RandomFlip3D'),
dict(
rot_range=[
-3.14,
3.14,
],
scale_ratio_range=[
0.8,
1.2,
],
shift_height=False,
translation_std=[
0.1,
0.1,
0.1,
],
type='GlobalRotScaleTrans'),
dict(
color_mean=[
127.5,
127.5,
127.5,
],
type='NormalizePointsColor_'),
dict(
merge_non_stuff_cls=False,
num_classes=20,
stuff_classes=[
0,
1,
],
type='AddSuperPointAnnotations'),
dict(
gran=[
6,
20,
],
mag=[
40,
160,
],
p=0.5,
type='ElasticTransfrom',
voxel_size=0.02),
dict(
keys=[
'points',
'gt_labels_3d',
'pts_semantic_mask',
'pts_instance_mask',
'sp_pts_mask',
'gt_sp_masks',
'elastic_coords',
],
type='Pack3DDetInputs_'),
],
scene_idxs=None,
test_mode=False,
type='ScanNetSegDataset_'),
num_workers=6,
persistent_workers=True,
sampler=dict(scope='mmdet3d', shuffle=True, type='DefaultSampler'))
train_pipeline = [
dict(
coord_type='DEPTH',
load_dim=6,
shift_height=False,
type='LoadPointsFromFile',
use_color=True,
use_dim=[
0,
1,
2,
3,
4,
5,
]),
dict(
type='LoadAnnotations3D_',
with_bbox_3d=False,
with_label_3d=False,
with_mask_3d=True,
with_seg_3d=True,
with_sp_mask_3d=True),
dict(type='PointSegClassMapping'),
dict(
flip_ratio_bev_horizontal=0.5,
flip_ratio_bev_vertical=0.5,
sync_2d=False,
type='RandomFlip3D'),
dict(
rot_range=[
-3.14,
3.14,
],
scale_ratio_range=[
0.8,
1.2,
],
shift_height=False,
translation_std=[
0.1,
0.1,
0.1,
],
type='GlobalRotScaleTrans'),
dict(color_mean=[
127.5,
127.5,
127.5,
], type='NormalizePointsColor_'),
dict(
merge_non_stuff_cls=False,
num_classes=20,
stuff_classes=[
0,
1,
],
type='AddSuperPointAnnotations'),
dict(
gran=[
6,
20,
],
mag=[
40,
160,
],
p=0.5,
type='ElasticTransfrom',
voxel_size=0.02),
dict(
keys=[
'points',
'gt_labels_3d',
'pts_semantic_mask',
'pts_instance_mask',
'sp_pts_mask',
'gt_sp_masks',
'elastic_coords',
],
type='Pack3DDetInputs_'),
]
tta_model = dict(scope='mmdet3d', type='Seg3DTTAModel')
tta_pipeline = [
dict(
scope='mmdet3d',
backend_args=None,
coord_type='DEPTH',
load_dim=6,
shift_height=False,
type='LoadPointsFromFile',
use_color=True,
use_dim=[
0,
1,
2,
3,
4,
5,
]),
dict(
scope='mmdet3d',
backend_args=None,
type='LoadAnnotations3D',
with_bbox_3d=False,
with_label_3d=False,
with_mask_3d=False,
with_seg_3d=True),
dict(scope='mmdet3d', color_mean=None, type='NormalizePointsColor'),
dict(
scope='mmdet3d',
transforms=[
[
dict(
flip_ratio_bev_horizontal=0.0,
flip_ratio_bev_vertical=0.0,
sync_2d=False,
type='RandomFlip3D'),
],
[
dict(keys=[
'points',
], type='Pack3DDetInputs'),
],
],
type='TestTimeAug'),
]
val_cfg = dict(type='ValLoop')
val_dataloader = dict(
batch_size=1,
dataset=dict(
scope='mmdet3d',
ann_file='scannet_oneformer3d_infos_val.pkl',
backend_args=None,
data_prefix=dict(
pts='points',
pts_instance_mask='instance_mask',
pts_semantic_mask='semantic_mask',
sp_pts_mask='super_points'),
data_root='data/scannet/',
ignore_index=20,
metainfo=dict(
classes=(
'wall',
'floor',
'cabinet',
'bed',
'chair',
'sofa',
'table',
'door',
'window',
'bookshelf',
'picture',
'counter',
'desk',
'curtain',
'refrigerator',
'showercurtrain',
'toilet',
'sink',
'bathtub',
'otherfurniture',
)),
modality=dict(use_camera=False, use_lidar=True),
pipeline=[
dict(
coord_type='DEPTH',
load_dim=6,
shift_height=False,
type='LoadPointsFromFile',
use_color=True,
use_dim=[
0,
1,
2,
3,
4,
5,
]),
dict(
type='LoadAnnotations3D_',
with_bbox_3d=False,
with_label_3d=False,
with_mask_3d=True,
with_seg_3d=True,
with_sp_mask_3d=True),
dict(type='PointSegClassMapping'),
dict(
flip=False,
img_scale=(
1333,
800,
),
pts_scale_ratio=1,
transforms=[
dict(
color_mean=[
127.5,
127.5,
127.5,
],
type='NormalizePointsColor_'),
dict(
merge_non_stuff_cls=False,
num_classes=20,
stuff_classes=[
0,
1,
],
type='AddSuperPointAnnotations'),
],
type='MultiScaleFlipAug3D'),
dict(keys=[
'points',
'sp_pts_mask',
], type='Pack3DDetInputs_'),
],
test_mode=True,
type='ScanNetSegDataset_'),
drop_last=False,
num_workers=1,
persistent_workers=True,
sampler=dict(scope='mmdet3d', shuffle=False, type='DefaultSampler'))
val_evaluator = dict(
scope='mmdet3d',
id_offset=65536,
inst_mapping=[
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
14,
16,
24,
28,
33,
34,
36,
39,
],
metric_meta=dict(
classes=[
'wall',
'floor',
'cabinet',
'bed',
'chair',
'sofa',
'table',
'door',
'window',
'bookshelf',
'picture',
'counter',
'desk',
'curtain',
'refrigerator',
'showercurtrain',
'toilet',
'sink',
'bathtub',
'otherfurniture',
'unlabeled',
],
dataset_name='ScanNet',
ignore_index=[
20,
],
label2cat=dict({
0: 'wall',
1: 'floor',
10: 'picture',
11: 'counter',
12: 'desk',
13: 'curtain',
14: 'refrigerator',
15: 'showercurtrain',
16: 'toilet',
17: 'sink',
18: 'bathtub',
19: 'otherfurniture',
2: 'cabinet',
20: 'unlabeled',
3: 'bed',
4: 'chair',
5: 'sofa',
6: 'table',
7: 'door',
8: 'window',
9: 'bookshelf'
})),
min_num_points=1,
sem_mapping=[
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
14,
16,
24,
28,
33,
34,
36,
39,
],
stuff_class_inds=[
0,
1,
],
thing_class_inds=[
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
],
type='UnifiedSegMetric')
vis_backends = [
dict(scope='mmdet3d', type='LocalVisBackend'),
]
visualizer = dict(
scope='mmdet3d',
name='visualizer',
type='Det3DLocalVisualizer',
vis_backends=[
dict(type='LocalVisBackend'),
])
work_dir = './work_dirs/oneformer3d_1xb4_scannet'
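
(For reference, the dumped config above can be loaded and queried with mmengine; the file name below is an assumption inferred from work_dir and may differ from the config actually passed to tools/train.py.)

from mmengine.config import Config

# Assumed path, inferred from work_dir='./work_dirs/oneformer3d_1xb4_scannet'.
cfg = Config.fromfile('configs/oneformer3d_1xb4_scannet.py')

# Values relevant to the attention call in the traceback above.
print(cfg.model.decoder.d_model)        # 256
print(cfg.model.decoder.num_heads)      # 8
print(cfg.train_dataloader.batch_size)  # 4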

2024/10/24 10:02:32 - mmengine - INFO - Distributed training is not used, all SyncBatchNorm (SyncBN) layers in the model will be automatically reverted to BatchNormXd layers if they are used.
2024/10/24 10:02:32 - mmengine - INFO - Hooks will be executed in the following order:
before_run:
(VERY_HIGH ) RuntimeInfoHook
(BELOW_NORMAL) LoggerHook

before_train:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(VERY_LOW ) CheckpointHook

before_train_epoch:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(NORMAL ) DistSamplerSeedHook
(NORMAL ) EmptyCacheHook

before_train_iter:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook

after_train_iter:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(NORMAL ) EmptyCacheHook
(BELOW_NORMAL) LoggerHook
(LOW ) ParamSchedulerHook
(VERY_LOW ) CheckpointHook

after_train_epoch:
(NORMAL ) IterTimerHook
(NORMAL ) EmptyCacheHook
(LOW ) ParamSchedulerHook
(VERY_LOW ) CheckpointHook

before_val:
(VERY_HIGH ) RuntimeInfoHook

before_val_epoch:
(NORMAL ) IterTimerHook
(NORMAL ) EmptyCacheHook

before_val_iter:
(NORMAL ) IterTimerHook

after_val_iter:
(NORMAL ) IterTimerHook
(NORMAL ) Det3DVisualizationHook
(NORMAL ) EmptyCacheHook
(BELOW_NORMAL) LoggerHook

after_val_epoch:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(NORMAL ) EmptyCacheHook
(BELOW_NORMAL) LoggerHook
(LOW ) ParamSchedulerHook
(VERY_LOW ) CheckpointHook

after_val:
(VERY_HIGH ) RuntimeInfoHook

after_train:
(VERY_HIGH ) RuntimeInfoHook
(VERY_LOW ) CheckpointHook

before_test:
(VERY_HIGH ) RuntimeInfoHook

before_test_epoch:
(NORMAL ) IterTimerHook
(NORMAL ) EmptyCacheHook

before_test_iter:
(NORMAL ) IterTimerHook

after_test_iter:
(NORMAL ) IterTimerHook
(NORMAL ) Det3DVisualizationHook
(NORMAL ) EmptyCacheHook
(BELOW_NORMAL) LoggerHook

after_test_epoch:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(NORMAL ) EmptyCacheHook
(BELOW_NORMAL) LoggerHook

after_test:
(VERY_HIGH ) RuntimeInfoHook

after_run:
(BELOW_NORMAL) LoggerHook

2024/10/24 10:02:33 - mmengine - WARNING - The prefix is not set in metric class UnifiedSegMetric.
Name of parameter - Initialization information

unet.blocks.block0.conv_branch.0.weight - torch.Size([32]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.blocks.block0.conv_branch.0.bias - torch.Size([32]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.blocks.block0.conv_branch.2.weight - torch.Size([32, 3, 3, 3, 32]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.blocks.block0.conv_branch.3.weight - torch.Size([32]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.blocks.block0.conv_branch.3.bias - torch.Size([32]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.blocks.block0.conv_branch.5.weight - torch.Size([32, 3, 3, 3, 32]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.blocks.block1.conv_branch.0.weight - torch.Size([32]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.blocks.block1.conv_branch.0.bias - torch.Size([32]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.blocks.block1.conv_branch.2.weight - torch.Size([32, 3, 3, 3, 32]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.blocks.block1.conv_branch.3.weight - torch.Size([32]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.blocks.block1.conv_branch.3.bias - torch.Size([32]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.blocks.block1.conv_branch.5.weight - torch.Size([32, 3, 3, 3, 32]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.conv.0.weight - torch.Size([32]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.conv.0.bias - torch.Size([32]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.conv.2.weight - torch.Size([64, 2, 2, 2, 32]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.blocks.block0.conv_branch.0.weight - torch.Size([64]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.blocks.block0.conv_branch.0.bias - torch.Size([64]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.blocks.block0.conv_branch.2.weight - torch.Size([64, 3, 3, 3, 64]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.blocks.block0.conv_branch.3.weight - torch.Size([64]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.blocks.block0.conv_branch.3.bias - torch.Size([64]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.blocks.block0.conv_branch.5.weight - torch.Size([64, 3, 3, 3, 64]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.blocks.block1.conv_branch.0.weight - torch.Size([64]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.blocks.block1.conv_branch.0.bias - torch.Size([64]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.blocks.block1.conv_branch.2.weight - torch.Size([64, 3, 3, 3, 64]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.blocks.block1.conv_branch.3.weight - torch.Size([64]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.blocks.block1.conv_branch.3.bias - torch.Size([64]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.blocks.block1.conv_branch.5.weight - torch.Size([64, 3, 3, 3, 64]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.conv.0.weight - torch.Size([64]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.conv.0.bias - torch.Size([64]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.conv.2.weight - torch.Size([96, 2, 2, 2, 64]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.blocks.block0.conv_branch.0.weight - torch.Size([96]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.blocks.block0.conv_branch.0.bias - torch.Size([96]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.blocks.block0.conv_branch.2.weight - torch.Size([96, 3, 3, 3, 96]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.blocks.block0.conv_branch.3.weight - torch.Size([96]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.blocks.block0.conv_branch.3.bias - torch.Size([96]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.blocks.block0.conv_branch.5.weight - torch.Size([96, 3, 3, 3, 96]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.blocks.block1.conv_branch.0.weight - torch.Size([96]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.blocks.block1.conv_branch.0.bias - torch.Size([96]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.blocks.block1.conv_branch.2.weight - torch.Size([96, 3, 3, 3, 96]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.blocks.block1.conv_branch.3.weight - torch.Size([96]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.blocks.block1.conv_branch.3.bias - torch.Size([96]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.blocks.block1.conv_branch.5.weight - torch.Size([96, 3, 3, 3, 96]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.conv.0.weight - torch.Size([96]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.conv.0.bias - torch.Size([96]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.conv.2.weight - torch.Size([128, 2, 2, 2, 96]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.u.blocks.block0.conv_branch.0.weight - torch.Size([128]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.u.blocks.block0.conv_branch.0.bias - torch.Size([128]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.u.blocks.block0.conv_branch.2.weight - torch.Size([128, 3, 3, 3, 128]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.u.blocks.block0.conv_branch.3.weight - torch.Size([128]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.u.blocks.block0.conv_branch.3.bias - torch.Size([128]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.u.blocks.block0.conv_branch.5.weight - torch.Size([128, 3, 3, 3, 128]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.u.blocks.block1.conv_branch.0.weight - torch.Size([128]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.u.blocks.block1.conv_branch.0.bias - torch.Size([128]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.u.blocks.block1.conv_branch.2.weight - torch.Size([128, 3, 3, 3, 128]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.u.blocks.block1.conv_branch.3.weight - torch.Size([128]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.u.blocks.block1.conv_branch.3.bias - torch.Size([128]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.u.blocks.block1.conv_branch.5.weight - torch.Size([128, 3, 3, 3, 128]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.u.conv.0.weight - torch.Size([128]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.u.conv.0.bias - torch.Size([128]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.u.conv.2.weight - torch.Size([160, 2, 2, 2, 128]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.u.u.blocks.block0.conv_branch.0.weight - torch.Size([160]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.u.u.blocks.block0.conv_branch.0.bias - torch.Size([160]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.u.u.blocks.block0.conv_branch.2.weight - torch.Size([160, 3, 3, 3, 160]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.u.u.blocks.block0.conv_branch.3.weight - torch.Size([160]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.u.u.blocks.block0.conv_branch.3.bias - torch.Size([160]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.u.u.blocks.block0.conv_branch.5.weight - torch.Size([160, 3, 3, 3, 160]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.u.u.blocks.block1.conv_branch.0.weight - torch.Size([160]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.u.u.blocks.block1.conv_branch.0.bias - torch.Size([160]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.u.u.blocks.block1.conv_branch.2.weight - torch.Size([160, 3, 3, 3, 160]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.u.u.blocks.block1.conv_branch.3.weight - torch.Size([160]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.u.u.blocks.block1.conv_branch.3.bias - torch.Size([160]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.u.u.blocks.block1.conv_branch.5.weight - torch.Size([160, 3, 3, 3, 160]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.u.deconv.0.weight - torch.Size([160]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.u.deconv.0.bias - torch.Size([160]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.u.deconv.2.weight - torch.Size([128, 2, 2, 2, 160]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.u.blocks_tail.block0.i_branch.0.weight - torch.Size([128, 1, 1, 1, 256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.u.blocks_tail.block0.conv_branch.0.weight - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.u.blocks_tail.block0.conv_branch.0.bias - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.u.blocks_tail.block0.conv_branch.2.weight - torch.Size([128, 3, 3, 3, 256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.u.blocks_tail.block0.conv_branch.3.weight - torch.Size([128]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.u.blocks_tail.block0.conv_branch.3.bias - torch.Size([128]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.u.blocks_tail.block0.conv_branch.5.weight - torch.Size([128, 3, 3, 3, 128]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.u.blocks_tail.block1.conv_branch.0.weight - torch.Size([128]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.u.blocks_tail.block1.conv_branch.0.bias - torch.Size([128]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.u.blocks_tail.block1.conv_branch.2.weight - torch.Size([128, 3, 3, 3, 128]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.u.blocks_tail.block1.conv_branch.3.weight - torch.Size([128]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.u.blocks_tail.block1.conv_branch.3.bias - torch.Size([128]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.u.blocks_tail.block1.conv_branch.5.weight - torch.Size([128, 3, 3, 3, 128]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.deconv.0.weight - torch.Size([128]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.deconv.0.bias - torch.Size([128]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.deconv.2.weight - torch.Size([96, 2, 2, 2, 128]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.blocks_tail.block0.i_branch.0.weight - torch.Size([96, 1, 1, 1, 192]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.blocks_tail.block0.conv_branch.0.weight - torch.Size([192]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.blocks_tail.block0.conv_branch.0.bias - torch.Size([192]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.blocks_tail.block0.conv_branch.2.weight - torch.Size([96, 3, 3, 3, 192]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.blocks_tail.block0.conv_branch.3.weight - torch.Size([96]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.blocks_tail.block0.conv_branch.3.bias - torch.Size([96]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.blocks_tail.block0.conv_branch.5.weight - torch.Size([96, 3, 3, 3, 96]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.blocks_tail.block1.conv_branch.0.weight - torch.Size([96]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.blocks_tail.block1.conv_branch.0.bias - torch.Size([96]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.blocks_tail.block1.conv_branch.2.weight - torch.Size([96, 3, 3, 3, 96]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.blocks_tail.block1.conv_branch.3.weight - torch.Size([96]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.blocks_tail.block1.conv_branch.3.bias - torch.Size([96]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.u.blocks_tail.block1.conv_branch.5.weight - torch.Size([96, 3, 3, 3, 96]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.deconv.0.weight - torch.Size([96]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.deconv.0.bias - torch.Size([96]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.deconv.2.weight - torch.Size([64, 2, 2, 2, 96]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.blocks_tail.block0.i_branch.0.weight - torch.Size([64, 1, 1, 1, 128]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.blocks_tail.block0.conv_branch.0.weight - torch.Size([128]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.blocks_tail.block0.conv_branch.0.bias - torch.Size([128]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.blocks_tail.block0.conv_branch.2.weight - torch.Size([64, 3, 3, 3, 128]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.blocks_tail.block0.conv_branch.3.weight - torch.Size([64]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.blocks_tail.block0.conv_branch.3.bias - torch.Size([64]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.blocks_tail.block0.conv_branch.5.weight - torch.Size([64, 3, 3, 3, 64]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.blocks_tail.block1.conv_branch.0.weight - torch.Size([64]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.blocks_tail.block1.conv_branch.0.bias - torch.Size([64]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.blocks_tail.block1.conv_branch.2.weight - torch.Size([64, 3, 3, 3, 64]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.blocks_tail.block1.conv_branch.3.weight - torch.Size([64]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.blocks_tail.block1.conv_branch.3.bias - torch.Size([64]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.u.blocks_tail.block1.conv_branch.5.weight - torch.Size([64, 3, 3, 3, 64]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.deconv.0.weight - torch.Size([64]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.deconv.0.bias - torch.Size([64]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.deconv.2.weight - torch.Size([32, 2, 2, 2, 64]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.blocks_tail.block0.i_branch.0.weight - torch.Size([32, 1, 1, 1, 64]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.blocks_tail.block0.conv_branch.0.weight - torch.Size([64]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.blocks_tail.block0.conv_branch.0.bias - torch.Size([64]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.blocks_tail.block0.conv_branch.2.weight - torch.Size([32, 3, 3, 3, 64]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.blocks_tail.block0.conv_branch.3.weight - torch.Size([32]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.blocks_tail.block0.conv_branch.3.bias - torch.Size([32]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.blocks_tail.block0.conv_branch.5.weight - torch.Size([32, 3, 3, 3, 32]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.blocks_tail.block1.conv_branch.0.weight - torch.Size([32]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.blocks_tail.block1.conv_branch.0.bias - torch.Size([32]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.blocks_tail.block1.conv_branch.2.weight - torch.Size([32, 3, 3, 3, 32]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.blocks_tail.block1.conv_branch.3.weight - torch.Size([32]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.blocks_tail.block1.conv_branch.3.bias - torch.Size([32]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

unet.blocks_tail.block1.conv_branch.5.weight - torch.Size([32, 3, 3, 3, 32]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.input_proj.0.weight - torch.Size([256, 32]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.input_proj.0.bias - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.input_proj.1.weight - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.input_proj.1.bias - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.query_proj.0.weight - torch.Size([256, 32]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.query_proj.0.bias - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.query_proj.2.weight - torch.Size([256, 256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.query_proj.2.bias - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.cross_attn_layers.0.attn.in_proj_weight - torch.Size([768, 256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.cross_attn_layers.0.attn.in_proj_bias - torch.Size([768]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.cross_attn_layers.0.attn.out_proj.weight - torch.Size([256, 256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.cross_attn_layers.0.attn.out_proj.bias - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.cross_attn_layers.0.norm.weight - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.cross_attn_layers.0.norm.bias - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.cross_attn_layers.1.attn.in_proj_weight - torch.Size([768, 256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.cross_attn_layers.1.attn.in_proj_bias - torch.Size([768]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.cross_attn_layers.1.attn.out_proj.weight - torch.Size([256, 256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.cross_attn_layers.1.attn.out_proj.bias - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.cross_attn_layers.1.norm.weight - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.cross_attn_layers.1.norm.bias - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.cross_attn_layers.2.attn.in_proj_weight - torch.Size([768, 256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.cross_attn_layers.2.attn.in_proj_bias - torch.Size([768]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.cross_attn_layers.2.attn.out_proj.weight - torch.Size([256, 256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.cross_attn_layers.2.attn.out_proj.bias - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.cross_attn_layers.2.norm.weight - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.cross_attn_layers.2.norm.bias - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.cross_attn_layers.3.attn.in_proj_weight - torch.Size([768, 256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.cross_attn_layers.3.attn.in_proj_bias - torch.Size([768]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.cross_attn_layers.3.attn.out_proj.weight - torch.Size([256, 256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.cross_attn_layers.3.attn.out_proj.bias - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.cross_attn_layers.3.norm.weight - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.cross_attn_layers.3.norm.bias - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.cross_attn_layers.4.attn.in_proj_weight - torch.Size([768, 256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.cross_attn_layers.4.attn.in_proj_bias - torch.Size([768]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.cross_attn_layers.4.attn.out_proj.weight - torch.Size([256, 256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.cross_attn_layers.4.attn.out_proj.bias - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.cross_attn_layers.4.norm.weight - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.cross_attn_layers.4.norm.bias - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.cross_attn_layers.5.attn.in_proj_weight - torch.Size([768, 256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.cross_attn_layers.5.attn.in_proj_bias - torch.Size([768]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.cross_attn_layers.5.attn.out_proj.weight - torch.Size([256, 256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.cross_attn_layers.5.attn.out_proj.bias - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.cross_attn_layers.5.norm.weight - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.cross_attn_layers.5.norm.bias - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.self_attn_layers.0.attn.in_proj_weight - torch.Size([768, 256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.self_attn_layers.0.attn.in_proj_bias - torch.Size([768]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.self_attn_layers.0.attn.out_proj.weight - torch.Size([256, 256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.self_attn_layers.0.attn.out_proj.bias - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.self_attn_layers.0.norm.weight - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.self_attn_layers.0.norm.bias - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.self_attn_layers.1.attn.in_proj_weight - torch.Size([768, 256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.self_attn_layers.1.attn.in_proj_bias - torch.Size([768]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.self_attn_layers.1.attn.out_proj.weight - torch.Size([256, 256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.self_attn_layers.1.attn.out_proj.bias - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.self_attn_layers.1.norm.weight - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.self_attn_layers.1.norm.bias - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.self_attn_layers.2.attn.in_proj_weight - torch.Size([768, 256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.self_attn_layers.2.attn.in_proj_bias - torch.Size([768]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.self_attn_layers.2.attn.out_proj.weight - torch.Size([256, 256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.self_attn_layers.2.attn.out_proj.bias - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.self_attn_layers.2.norm.weight - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.self_attn_layers.2.norm.bias - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.self_attn_layers.3.attn.in_proj_weight - torch.Size([768, 256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.self_attn_layers.3.attn.in_proj_bias - torch.Size([768]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.self_attn_layers.3.attn.out_proj.weight - torch.Size([256, 256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.self_attn_layers.3.attn.out_proj.bias - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.self_attn_layers.3.norm.weight - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.self_attn_layers.3.norm.bias - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.self_attn_layers.4.attn.in_proj_weight - torch.Size([768, 256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.self_attn_layers.4.attn.in_proj_bias - torch.Size([768]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.self_attn_layers.4.attn.out_proj.weight - torch.Size([256, 256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.self_attn_layers.4.attn.out_proj.bias - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.self_attn_layers.4.norm.weight - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.self_attn_layers.4.norm.bias - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.self_attn_layers.5.attn.in_proj_weight - torch.Size([768, 256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.self_attn_layers.5.attn.in_proj_bias - torch.Size([768]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.self_attn_layers.5.attn.out_proj.weight - torch.Size([256, 256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.self_attn_layers.5.attn.out_proj.bias - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.self_attn_layers.5.norm.weight - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.self_attn_layers.5.norm.bias - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.ffn_layers.0.net.0.weight - torch.Size([1024, 256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.ffn_layers.0.net.0.bias - torch.Size([1024]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.ffn_layers.0.net.3.weight - torch.Size([256, 1024]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.ffn_layers.0.net.3.bias - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.ffn_layers.0.norm.weight - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.ffn_layers.0.norm.bias - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.ffn_layers.1.net.0.weight - torch.Size([1024, 256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.ffn_layers.1.net.0.bias - torch.Size([1024]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.ffn_layers.1.net.3.weight - torch.Size([256, 1024]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.ffn_layers.1.net.3.bias - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.ffn_layers.1.norm.weight - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.ffn_layers.1.norm.bias - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.ffn_layers.2.net.0.weight - torch.Size([1024, 256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.ffn_layers.2.net.0.bias - torch.Size([1024]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.ffn_layers.2.net.3.weight - torch.Size([256, 1024]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.ffn_layers.2.net.3.bias - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.ffn_layers.2.norm.weight - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.ffn_layers.2.norm.bias - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.ffn_layers.3.net.0.weight - torch.Size([1024, 256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.ffn_layers.3.net.0.bias - torch.Size([1024]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.ffn_layers.3.net.3.weight - torch.Size([256, 1024]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.ffn_layers.3.net.3.bias - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.ffn_layers.3.norm.weight - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.ffn_layers.3.norm.bias - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.ffn_layers.4.net.0.weight - torch.Size([1024, 256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.ffn_layers.4.net.0.bias - torch.Size([1024]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.ffn_layers.4.net.3.weight - torch.Size([256, 1024]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.ffn_layers.4.net.3.bias - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.ffn_layers.4.norm.weight - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.ffn_layers.4.norm.bias - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.ffn_layers.5.net.0.weight - torch.Size([1024, 256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.ffn_layers.5.net.0.bias - torch.Size([1024]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.ffn_layers.5.net.3.weight - torch.Size([256, 1024]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.ffn_layers.5.net.3.bias - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.ffn_layers.5.norm.weight - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.ffn_layers.5.norm.bias - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.out_norm.weight - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.out_norm.bias - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.out_cls.0.weight - torch.Size([256, 256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.out_cls.0.bias - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.out_cls.2.weight - torch.Size([19, 256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.out_cls.2.bias - torch.Size([19]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.x_mask.0.weight - torch.Size([256, 32]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.x_mask.0.bias - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.x_mask.2.weight - torch.Size([256, 256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.x_mask.2.bias - torch.Size([256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.out_sem.weight - torch.Size([21, 256]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

decoder.out_sem.bias - torch.Size([21]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

input_conv.0.weight - torch.Size([32, 3, 3, 3, 6]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

output_layer.0.weight - torch.Size([32]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D

output_layer.0.bias - torch.Size([32]):
The value is the same before and after calling init_weights of ScanNetOneFormer3D
2024/10/24 10:02:34 - mmengine - INFO - Load checkpoint from work_dirs/tmp/sstnet_scannet.pth
2024/10/24 10:02:34 - mmengine - WARNING - "FileClient" will be deprecated in future. Please use io functions in https://mmengine.readthedocs.io/en/latest/api/fileio.html#file-io
2024/10/24 10:02:34 - mmengine - WARNING - "HardDiskBackend" is the alias of "LocalBackend" and the former will be deprecated in future.
2024/10/24 10:02:34 - mmengine - INFO - Checkpoints will be saved to /root/autodl-tmp/oneformer3d-main/work_dirs/oneformer3d_1xb4_scannet.
