In your paper you discuss the utility of the ReLU Mask.
In brief, the claim is that the ReLU Mask, which requires no learnable parameters, performs better than DynaST's learnable MLP.
The implementation of that part is in models/networks/dynast_transformer.py, and as far as I can tell that code is unchanged from the original DynaST code. Is that correct?
If so, should I simply apply the ReLU function to the output in place of the corresponding code?
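To make sure I am asking about the right change, here is a minimal sketch of what I have in mind. The module and tensor names (`LearnableMask`, `attn_out`, the shapes) are placeholders I made up for illustration and are not taken from dynast_transformer.py:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder for how I understand DynaST's learnable mask conceptually:
# a small MLP predicts a per-position mask from the attention output.
class LearnableMask(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, attn_out: torch.Tensor) -> torch.Tensor:
        # attn_out: (B, N, C) -> mask: (B, N, 1)
        return torch.sigmoid(self.mlp(attn_out))


def relu_mask(attn_out: torch.Tensor) -> torch.Tensor:
    # Parameter-free alternative: pass the output through ReLU,
    # zeroing out negative responses without any learned weights.
    return F.relu(attn_out)


if __name__ == "__main__":
    x = torch.randn(2, 64, 256)  # (batch, tokens, channels), made-up shapes
    masked_learnable = x * LearnableMask(256)(x)
    masked_relu = relu_mask(x)
    print(masked_learnable.shape, masked_relu.shape)
```

Is replacing the MLP-based masking with something like `relu_mask` above the change you describe in the paper, or does the ReLU Mask enter the computation somewhere else?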