No ReLU Mask Code in your MAT Module. #11

Open
8414sys opened this issue Sep 2, 2024 · 0 comments
8414sys commented Sep 2, 2024

In your paper you discuss the utility of the ReLU Mask.
Briefly, the claim is that the ReLU Mask, which requires no learnable parameters, performs better than DynaST's learnable MLP.
The implementation of that part should be in models/networks/dynast_transformer.py, but the code there appears unchanged from the original DynaST code. Is that correct?
If so, should I simply apply the ReLU function to the output in place of the corresponding code?
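
To be concrete, here is a minimal sketch of what I mean by "simply applying the ReLU function". The class names, tensor shapes, and the structure of the learnable MLP are my own assumptions for illustration, not your actual implementation:

```python
import torch
import torch.nn as nn


class LearnableMaskMLP(nn.Module):
    """My understanding of the DynaST-style learnable mask: a small MLP
    produces a gate from each attention score. Names/shapes are assumptions."""

    def __init__(self, hidden: int = 8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(1, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, 1),
        )

    def forward(self, attn_scores: torch.Tensor) -> torch.Tensor:
        # attn_scores: (batch, heads, query_len, key_len)
        gate = torch.sigmoid(self.mlp(attn_scores.unsqueeze(-1))).squeeze(-1)
        return attn_scores * gate


def relu_mask(attn_scores: torch.Tensor) -> torch.Tensor:
    """Parameter-free alternative: keep only positive correlations."""
    return torch.relu(attn_scores)


if __name__ == "__main__":
    scores = torch.randn(2, 4, 16, 16)  # (batch, heads, query_len, key_len)
    print(relu_mask(scores).min())      # negative responses are zeroed out
```

Is replacing the learnable-MLP path with something like `relu_mask` above the intended way to reproduce the ReLU Mask from the paper?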
