XNNPack support and performance comparison with TF Lite for Flutter #53
Well, PyTorch Lite doesn't use the GPU as far as I know, and as far as I know PyTorch Mobile hasn't released a 2.1 version. If they did, I don't mind updating to it. By the way, TF Lite should be faster if the GPU is enabled.
Yes, 2.1 was released, thanks for letting me know: https://central.sonatype.com/artifact/org.pytorch/pytorch_android_lite I will try to update to it.
You're welcome. Yes, it would be great if you updated. TF Lite GPU support is way too limited: the GpuDelegate is severely limited and does not work with most models. The NNAPI delegate, which apparently also supports GPUs (in addition to TPUs and DSPs, if I understood correctly), is supposed to work with more models, but I can't get it working. It's a real PITA. But I am not talking about GPU support: XNNPack is pure CPU. It is written by Google and accelerates operations on the CPU. Do you know if it is supported and enabled in PyTorch? From what I can see, it is supported, but is it really enabled by default? See: https://pytorch.org/mobile/home/ It's important to know because, on some devices, it can provide a significant performance boost, not to be taken lightly.
Yes, I saw that XNNPack does exist in PyTorch, but the model has to be changed to allow it. If the model is exported with it, it should work automatically.
Can you tell me more about this?
https://mvnrepository.com/artifact/org.pytorch Maven and Gradle artifacts were updated 9 days ago.
optimize_for_mobile does it for you, from what I found.
For iOS, the latest one is 1.13.0.1, just to keep track of the versions.
Updated to the latest PyTorch packages; integration tests are working, so everything should be the same, and I increased the patch version.
Somebody is mentioning the availability of LibTorch-Lite 2.1.0 here: pytorch/pytorch#102833 (comment) I am wondering: why isn't this version advertised here https://libraries.io/search?q=LibTorch-Lite ? (I am not extremely familiar with iOS developer sites etc.)
Hello, for some reason I failed to make LibTorch-Lite work, but LibTorch is working, and I can't find a LibTorch v2 pod. So it's not updated, and from my testing, updating the Android one didn't affect performance for better or worse. As long as it's not needed, I don't think upgrading is important. If anyone needs it to be updated, let me know.
Ah, the struggles of LibTorch-Lite versioning. @andynewman10 I was wondering as well why LibTorch 2.1.0 was not advertised. I asked this on discuss.pytorch.org as well. The 2.1.0 binaries clearly exist: CocoaPods/LibTorch-Lite/2.1.0/ (On Android I was able to run But on iOS, I get this strange crash on startup. (pytorch/pytorch#102833) I spent hours trying to find out why this happened, without success.
What he means is: if you train/export a model with (Python) PyTorch 1.13, for example, and then try to run inference on mobile with, let's say, PyTorch 1.10, you'll probably get an error. (You have to use 1.13 on mobile as well.)
@cyrillkuettel You only get a crash on iOS 12, right? Can you confirm things run fine on iOS 13 or later?
Can't know for sure. The iPhone I use for testing (an iPhone 6) only supports up to iOS 12.
Have you tried your code with the iOS simulator? I would try it with iOS 12, and with a later iOS version. |
I tried running on the simulator. It still returns an error, but a different one.
- added dependencies (in
- open simulator
- run on simulator
Eventually this returns the error.
Honestly, this is beyond frustrating. At this point I stopped; I don't care anymore...
@cyrillkuettel You must make sure that, in Xcode,
Mind you, in debug builds this should not be an issue. Have you tried the code on iOS 13+? Say, iOS 15?
Yes, I have set it to
To be clear
I understand your frustration. Unfortunately, I'm not very good at iOS programming, so I cannot help you with this 'podspec debugging' issue. It looks like a file is not found, but the cause is unknown. Do you know what the benefits of using 2.1.0 are over 1.13.x? Just curious.
I would not bother with 2.1.0 unless you need it for a very specific purpose. Potential benefits:
How can I use LibTorch version 2? I have a ViT model which uses Attention; it seems like LibTorch 1.3 doesn't have the operation, giving these errors.
An update to LibTorch is needed, so I will look into it when I have time. Edit: @luvwinnie the latest version of PyTorch is used: pytorch_lite/android/build.gradle, line 64 (commit 3ab7bc0)
Same for iOS: https://github.com/CocoaPods/Specs/tree/master/Specs/1/3/c/LibTorch
@abdelaziz-mahdy It seems like my environment is using LibTorch (1.13.0.1) with iOS?
Is my pytorch_lite not the latest, or is there some other problem?
Does it work on Android? If yes, it may be a PyTorch iOS problem.
I am currently carrying out performance tests with TF Lite and pytorch_lite in Flutter (I should be able to give more info in the future, if anyone is interested).
My question is: does pytorch_lite use XNNPack by default? It seems to me pytorch_lite is faster than tflite_flutter, but surprisingly, I don't manage to enable XNNPack with tflite_flutter (I get an error).
If the answer is yes, is it possible to, e.g., explicitly enable or disable XNNPack?
PS: PyTorch 2.1.0 was released last week. Do you plan to update pytorch_lite to the new version?
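Regarding the explicit enable/disable part of the question, the only switch I know of is on the export side rather than at runtime. A sketch, under the assumption (from the torch.utils.mobile_optimizer API) that INSERT_FOLD_PREPACK_OPS is the pass that rewrites linear/conv ops into XNNPACK prepacked variants, so blocklisting it exports a model without them:

```python
import torch
from torch.utils.mobile_optimizer import MobileOptimizerType, optimize_for_mobile

# Hypothetical toy model; any scripted eval-mode module would do.
model = torch.jit.script(torch.nn.Sequential(torch.nn.Linear(4, 2)).eval())

# Default: all CPU passes run, including the XNNPACK prepack rewrite.
with_xnnpack = optimize_for_mobile(model)

# Blocklisting the prepack pass exports a model without XNNPACK prepacked ops.
without_xnnpack = optimize_for_mobile(
    model,
    optimization_blocklist={MobileOptimizerType.INSERT_FOLD_PREPACK_OPS},
)
```

Both variants still run on device; whether pytorch_lite itself exposes a runtime toggle is a separate question for the plugin.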