Warning about masker internal working size #410
base: master
Conversation
Seems so, yes.
I'm unsure about adding it in the EMD model. Would this mean adding it to the forward as well? For most models, …
Yes. I don't see another way to do it if we want that feature for all models. We have to have a way of finding the internal size, so without any shape fixing. Maybe we could add an optional argument to the masker's forward that controls whether fixing should be applied, and then use introspection to check whether the masker supports that argument, but that doesn't seem to have any advantages. If you have other ideas, please tell :)
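For reference, a minimal sketch of the introspection idea, assuming a hypothetical `fix_shapes` keyword (this is not an existing Asteroid argument):

```python
import inspect

import torch


def forward_masker(masker: torch.nn.Module, tf_rep: torch.Tensor, fix_shapes: bool):
    """Call the masker, passing ``fix_shapes`` only when its forward accepts it."""
    params = inspect.signature(masker.forward).parameters
    if "fix_shapes" in params:
        return masker(tf_rep, fix_shapes=fix_shapes)
    # Maskers that don't expose the flag keep their default behaviour.
    return masker(tf_rep)
```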
Another way could be to just add an optional-to-define method … Another crazy idea is to have a mode ("training mode") that just doesn't re-pad anything? (Both in the masker and the models.)
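A toy sketch of what that "training mode" could look like; the `repad_output` flag and the class itself are assumptions for illustration, not the library's actual API:

```python
import torch


class EncoderMaskerDecoder(torch.nn.Module):
    """Toy encoder/masker/decoder wrapper illustrating the no-re-pad mode."""

    def __init__(self, encoder, masker, decoder, repad_output: bool = True):
        super().__init__()
        self.encoder = encoder
        self.masker = masker
        self.decoder = decoder
        self.repad_output = repad_output  # False = "training mode": no re-pad

    def forward(self, wav: torch.Tensor) -> torch.Tensor:
        tf_rep = self.encoder(wav)
        est = self.decoder(self.masker(tf_rep) * tf_rep)
        if self.repad_output:
            # Pad (or crop, via negative padding) back to the input length.
            est = torch.nn.functional.pad(est, (0, wav.shape[-1] - est.shape[-1]))
        return est
```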
So that the loss is only computed on the non-zero segment?
Well, to be exact, so that the loss is computed on "everything the model did produce", which is different from non-zero, generally speaking.
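In other words, the target would be trimmed to the estimate's length before the loss; a sketch (with `mse_loss` standing in for whatever loss is actually used):

```python
import torch


def loss_on_produced(est: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # Trim the target to the number of samples the model actually produced.
    return torch.nn.functional.mse_loss(est, target[..., : est.shape[-1]])
```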
Yes, you're right |
I don't feel that adding it to the …

Refs #402

Open questions:

- The `fix_input_dims` method is only defined for DCUNet right now; shall we add it to the base EMD model? (Are we even able to define this for the majority of models?)
- The implementation is very naive (stupid/inefficient) but should work well for all maskers.
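For the record, a rough sketch of what such a naive probe could look like: pad the time axis one frame at a time until the masker's forward succeeds, then warn when that differs from the input length. The helper names and the brute-force loop are assumptions for illustration, not this PR's actual code:

```python
import warnings

import torch


def find_internal_size(masker: torch.nn.Module, tf_rep: torch.Tensor,
                       max_pad: int = 1024) -> int:
    """Brute force: grow the time axis until the masker accepts the input."""
    n_frames = tf_rep.shape[-1]
    for extra in range(max_pad):
        probe = torch.nn.functional.pad(tf_rep, (0, extra))
        try:
            with torch.no_grad():
                masker(probe)
            return n_frames + extra
        except RuntimeError:
            continue
    raise RuntimeError("No compatible input size found within `max_pad` frames.")


def warn_on_size_mismatch(masker: torch.nn.Module, tf_rep: torch.Tensor) -> None:
    working = find_internal_size(masker, tf_rep)
    if working != tf_rep.shape[-1]:
        warnings.warn(
            f"Input has {tf_rep.shape[-1]} frames but the masker works on "
            f"{working}; the input will be padded internally."
        )
```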