Running inference on mobile #20
Comments
It looks like this issue would be a problem with making the model scriptable: pytorch/pytorch#36061. One alternative would be to do something like this:

```python
self.res_blocks = nn.ModuleList(
    nn.ModuleList([
        nn.Conv2d(self.head_channels, self.head_channels, 1, 1, 0),
        nn.Conv2d(self.head_channels, self.head_channels, 1, 1, 0),
        nn.Conv2d(self.head_channels, self.head_channels, 1, 1, 0),
    ]) for _ in range(num_head_blocks)
)
```

though this would end up breaking this import (Line 212 in 6b2a3bf).
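For anyone following along: the reason `nn.ModuleList` helps is that TorchScript cannot compile dynamic attribute lookups (e.g. `getattr` with a name computed at runtime), whereas indexing and iterating a `ModuleList` is supported. Below is a minimal sketch of a scriptable head built this way; the class name, the ReLU placement, and the residual wiring are my assumptions for illustration, not the repository's actual implementation:

```python
import torch
import torch.nn as nn

class ScriptableHead(nn.Module):
    """Hypothetical stand-in for the regression head, for illustration only."""

    def __init__(self, head_channels: int, num_head_blocks: int):
        super().__init__()
        # One ModuleList of three 1x1 convs per residual block, as suggested above.
        self.res_blocks = nn.ModuleList(
            nn.ModuleList([
                nn.Conv2d(head_channels, head_channels, 1, 1, 0),
                nn.Conv2d(head_channels, head_channels, 1, 1, 0),
                nn.Conv2d(head_channels, head_channels, 1, 1, 0),
            ]) for _ in range(num_head_blocks)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Iterating a ModuleList (and indexing it with constant integers) is
        # TorchScript-friendly, unlike getattr(self, "res_block_%d" % i).
        for block in self.res_blocks:
            res = torch.relu(block[0](x))
            res = torch.relu(block[1](res))
            res = block[2](res)
            x = torch.relu(x + res)  # residual add; activation placement is assumed
        return x

# Sanity check that the module actually compiles:
scripted = torch.jit.script(ScriptableHead(512, 2))
```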
@tcavallari I would really love to get inference running on mobile if it's possible, but it appears as though the pretrained weights would have to be retrained per this issue. (I'm assuming there are currently no plans to make the code used to produce the pretrained weights available.)
Hello!
I don't think it's necessary to retrain the encoder. You can create a scriptable architecture using the nn.ModuleList approach above. Then it should just be a matter of replacing the weights in the new module.
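In case it helps someone hitting the same step, here is a hedged sketch of the weight-replacement part. The old key pattern and the checkpoint filename are assumptions; inspect the real checkpoint's keys first:

```python
import torch

new_model = ScriptableHead(512, 2)  # hypothetical class from the sketch above

# Load the original (non-scriptable) checkpoint; the filename is illustrative.
old_state = torch.load("pretrained_head.pt", map_location="cpu")

# Build a remapped state dict. The old key pattern ("res_block0.1.weight", etc.)
# is an assumption -- print(old_state.keys()) to see the real names.
remapped = {}
for old_key, tensor in old_state.items():
    new_key = old_key  # keys that already match pass through unchanged
    if old_key.startswith("res_block"):
        # e.g. "res_block0.1.weight" -> "res_blocks.0.1.weight"
        idx, rest = old_key[len("res_block"):].split(".", 1)
        new_key = f"res_blocks.{idx}.{rest}"
    remapped[new_key] = tensor

# strict=True will flag any key that did not remap cleanly.
new_model.load_state_dict(remapped, strict=True)
```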
Ok awesome, I think I managed to figure it out. I believe this change should be completely seamless in terms of backwards compatibility, so do you think we could merge it into main? I made a PR in #22. Thank you so much for the guidance!
I was wondering if there are any technical challenges that would make running inference on mobile impossible. The models themselves seem small, but I'm not familiar enough with mobile deployment to know whether there are other challenges or considerations.
Update 1
Currently I am stuck on the following error when trying to convert the model to TorchScript. It seems like it would be easy enough to fix, but there will probably be more things that don't work afterwards as well.
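For reference, once scripting succeeds, the usual path to mobile with current PyTorch is roughly the following (a sketch, not the project's tested pipeline; the model and filenames are placeholders):

```python
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

# "model" stands in for the full scriptable network (placeholder).
model = ScriptableHead(512, 2)  # from the sketch earlier in the thread
model.eval()

scripted = torch.jit.script(model)                  # 1. compile to TorchScript
optimized = optimize_for_mobile(scripted)           # 2. mobile-specific graph passes
optimized._save_for_lite_interpreter("model.ptl")   # 3. lite-interpreter format
```

The resulting `.ptl` file is what the PyTorch Mobile runtimes on Android and iOS load.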