
Pruning: How to preserve the number of output channels of a particular layer? #5737


Found out that the op names do not (and should not) include the weight or bias suffix. With that, adding the last layer's name to exclude_op_names just works:

config_list = [{
    'op_types': ['Conv2d'],
    'sparse_ratio': sparsity_ratio,
    'exclude_op_names': [
        'conv2',
    ]
}]
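
The names to use are the module names reported by model.named_modules(), not the parameter names from model.named_parameters(). A quick way to list the valid Conv2d op names (a minimal sketch, assuming a standard torch.nn.Module model):

import torch.nn as nn

for name, module in model.named_modules():
    if isinstance(module, nn.Conv2d):
        print(name)  # e.g. 'conv1', 'conv2' -- valid values for exclude_op_names
# By contrast, model.named_parameters() yields 'conv2.weight' / 'conv2.bias',
# which are not what exclude_op_names expects.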
Log:
Output shape: torch.Size([1, 80, 32, 32])
[2024-01-18 16:35:00] Start to speedup the model...
[2024-01-18 16:35:00] Resolve the mask conflict before mask propagate...
[2024-01-18 16:35:00] dim0 sparsity: 0.489796
[2024-01-18 16:35:00] dim1 sparsity: 0.000000
0 Filter
[2024-01-18 16:35:00] dim0 sparsity: 0.489796
[2024-01-18 16:35:00] dim1 sparsity: 0.000000
[2024-01-18 16:35:00] Infer module masks…
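
For completeness, here is a minimal end-to-end sketch of how this config can be wired into a pruner and ModelSpeedup. The model, pruner choice (L1NormPruner), channel counts, and input shape below are illustrative, and the import paths assume the NNI 3.x compression API (older 2.x releases use nni.compression.pytorch instead):

import torch
import torch.nn as nn
from nni.compression.pruning import L1NormPruner
from nni.compression.speedup import ModelSpeedup

# Illustrative model: 'conv2' is the layer whose 80 output channels must be preserved.
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 64, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(64, 80, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv2(torch.relu(self.conv1(x)))

model = Net()
sparsity_ratio = 0.5  # placeholder value

config_list = [{
    'op_types': ['Conv2d'],
    'sparse_ratio': sparsity_ratio,
    'exclude_op_names': ['conv2'],
}]

# Generate the masks, then unwrap the model before speedup.
pruner = L1NormPruner(model, config_list)
_, masks = pruner.compress()
pruner.unwrap_model()

# Physically remove the masked channels. conv2 keeps all 80 output channels;
# only its input channels shrink to match the pruned conv1.
dummy_input = torch.rand(1, 3, 32, 32)
ModelSpeedup(model, dummy_input, masks).speedup_model()

print('Output shape:', model(dummy_input).shape)  # torch.Size([1, 80, 32, 32])

After speedup, conv1's output channels are reduced while conv2's output stays at 80, matching the log above.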

Answer selected by saravanabalagi