Thank you for sharing the code! I'm looking for the details of the scale-aware, spatial-aware, and task-aware blocks in the source code. However, as far as I can see in "dyhead/dyhead.py", only the spatial-aware block appears to be implemented.
Is this a mistake, or am I misunderstanding something?
@kriskrisliu In dyhead.py, lines 79-82, you can see all three attentions being applied. They first apply spatial attention using deformable convolutions, then level (scale) attention using a hard sigmoid, and finally task attention through DyReLU. It is a bit confusing to follow because all of this happens for each level of the pyramid inside a for loop.
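For anyone else trying to untangle those lines, here is a minimal single-level sketch of how the three attentions compose. To be clear, this is not the repo's code: the class names (`DyHeadBlockSketch`, `DyReLU`) are my own, torchvision's `DeformConv2d` stands in for the mmcv `ModulatedDeformConv` the repo actually uses, and the scale-aware gate is reduced to a single level instead of mixing neighbouring pyramid levels.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.ops import DeformConv2d


class DyReLU(nn.Module):
    """Task-aware attention: a simplified DyReLU that predicts two
    per-channel linear pieces (a1, b1, a2, b2) from globally pooled
    features and takes their element-wise max."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.channels = channels
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, 4 * channels, 1),
        )

    def forward(self, x):
        theta = torch.sigmoid(self.fc(x))             # (N, 4C, 1, 1), in [0, 1]
        a1, b1, a2, b2 = torch.split(theta, self.channels, dim=1)
        a1 = 2.0 * a1                                 # slope 1 in [0, 2]
        a2 = 2.0 * a2 - 1.0                           # slope 2 in [-1, 1]
        b1 = b1 - 0.5                                 # intercepts in [-0.5, 0.5]
        b2 = b2 - 0.5
        return torch.max(x * a1 + b1, x * a2 + b2)


class DyHeadBlockSketch(nn.Module):
    """One DyHead block on a single pyramid level, in the order the
    reply describes: spatial-aware (deformable conv) -> scale-aware
    (hard-sigmoid gate) -> task-aware (DyReLU)."""
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        # Offsets and a modulation mask are predicted from the input:
        # 2*k*k offset channels plus k*k mask channels.
        self.offset_conv = nn.Conv2d(channels, 3 * kernel_size * kernel_size,
                                     3, padding=1)
        self.spatial_conv = DeformConv2d(channels, channels, kernel_size,
                                         padding=pad)
        # Scale-aware gate: one scalar per feature map from pooled features.
        self.scale_fc = nn.Conv2d(channels, 1, 1)
        self.task_attn = DyReLU(channels)

    def forward(self, x):
        # Spatial-aware attention via modulated deformable convolution.
        off_mask = self.offset_conv(x)
        k2 = off_mask.shape[1] // 3
        offset = off_mask[:, :2 * k2]
        mask = torch.sigmoid(off_mask[:, 2 * k2:])
        feat = self.spatial_conv(x, offset, mask)
        # Scale-aware attention: hard sigmoid on globally pooled features.
        gate = F.hardsigmoid(self.scale_fc(F.adaptive_avg_pool2d(feat, 1)))
        feat = feat * gate
        # Task-aware attention via DyReLU.
        return self.task_attn(feat)
```

Called as `DyHeadBlockSketch(256)(torch.randn(2, 256, 32, 32))`, this reproduces the spatial -> scale -> task ordering described above; the repo just repeats the same composition for every level inside that for loop, which is why all three attentions end up in the same few lines.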