XNNPack support and performance comparison with TF Lite for Flutter #53

Open

andynewman10 opened this issue Oct 21, 2023 · 28 comments

@andynewman10

I am currently carrying out performance tests with TF Lite and pytorch_lite in Flutter (I should be able to give more info in the future, if anyone is interested).

My question is: does pytorch_lite use XNNPack by default? It seems to me pytorch_lite is faster than tflite_flutter, but, surprisingly, I can't manage to enable XNNPack with tflite_flutter (I get an error).

If the answer is yes, is it possible to explicitly enable or disable XNNPack?

PS: PyTorch 2.1.0 was released last week. Do you plan to update pytorch_lite to use the new version?

@abdelaziz-mahdy
Owner

Well, pytorch_lite, as far as I know, doesn't use the GPU.

And PyTorch Mobile hasn't released a 2.1 version as far as I know; if they did, I don't mind updating to it.

By the way, TFLite should be faster if the GPU is enabled.

@abdelaziz-mahdy
Owner

Yes, 2.1 was released; thanks for letting me know.

https://central.sonatype.com/artifact/org.pytorch/pytorch_android_lite

Will try to update to it

@andynewman10
Author

You're welcome.

Yes, it would be great if you updated.

TF Lite GPU support is way too limited. The GpuDelegate is severely limited and does not work with most models. The NNAPI delegate, which apparently also supports GPUs (in addition to TPUs and DSPs, if I understood correctly), is supposed to work with more models, but I can't get it working. It's a real PITA.

But I am not talking about GPU support: XNNPack is pure CPU. It is written by Google and accelerates operations on the CPU.

Do you know if it is supported and enabled in PyTorch? From what I can see, it is supported, but is it really enabled by default? See:

https://pytorch.org/mobile/home/

It's important to know because, on some devices, it can provide a significant performance boost, which is not to be taken lightly.

@abdelaziz-mahdy
Owner

Yes, I saw that XNNPack does exist in PyTorch.

But the model should be changed to allow it.

And if the model is exported with it, it should work automatically.

@andynewman10
Author

But the model should be changed to allow it

Can you tell me more about this?

@andynewman10
Author

andynewman10 commented Oct 21, 2023

https://mvnrepository.com/artifact/org.pytorch

Maven and Gradle artifacts updated 9 days ago.

@abdelaziz-mahdy
Owner

But the model should be changed to allow it

Can you tell me more about this?

`optimize_for_mobile`

This does it for you, from what I found.
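For reference, a minimal sketch of what this looks like on the export side (the model, tensor shapes, and file name below are made up for illustration). As far as I can tell, `optimize_for_mobile` applies XNNPACK-friendly rewrites (e.g. prepacked conv/linear ops) by default for the CPU backend, so the exported model can use XNNPACK at inference time without extra flags:

```python
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

# Hypothetical toy model, just to make the export steps concrete.
class TinyNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)

model = TinyNet().eval()
scripted = torch.jit.script(model)          # TorchScript export
optimized = optimize_for_mobile(scripted)   # mobile/XNNPACK rewrites happen here
optimized._save_for_lite_interpreter("tinynet.ptl")  # lite-interpreter format
```

The resulting `.ptl` file is what the mobile runtime loads.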

@abdelaziz-mahdy
Owner

For iOS, the latest one is 1.13.0.1:
https://libraries.io/cocoapods/LibTorch-Lite/1.13.0.1

Just to keep track of the versions.

@abdelaziz-mahdy
Owner

Updated to the latest pytorch packages in 4.2.2.

Integration tests are passing, so everything should behave the same; that's why I only increased the patch version.

@andynewman10
Author

Somebody is mentioning the availability of LibTorch-Lite 2.1.0 here:

pytorch/pytorch#102833 (comment)

I am wondering: why isn't this version advertised here https://libraries.io/search?q=LibTorch-Lite ? (I am not extremely familiar with iOS developer sites etc.)
Is this worth updating?

@abdelaziz-mahdy
Owner

Hello, for some reason I failed to make LibTorch-Lite work, but LibTorch is working, and I can't find a LibTorch v2 pod.

So it's not updated, and from my testing, updating the Android one didn't affect performance for better or worse, so as long as it's not needed I don't think upgrading is important.

If anyone needs it to be updated let me know

@cyrillkuettel
Contributor

Ah, the struggles of LibTorch-Lite versioning. @andynewman10 I was wondering as well why LibTorch 2.1.0 was not advertised. I asked this on discuss.pytorch.org as well. The 2.1.0 binaries clearly exist: CocoaPods/LibTorch-Lite/2.1.0/ (On Android I was able to run LibTorch-Lite 2.1.0 without problems.)

But on iOS, I get this strange crash on startup. (pytorch/pytorch#102833) I spent hours trying to find out why this happened, without success.

But the model should be changed to allow it

What he means is: if you train/export a model with (Python) PyTorch 1.13, for example, and then try to run inference on mobile with, let's say, PyTorch 1.10, you'll probably get an error. (You have to use 1.13 on mobile as well.)
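To illustrate the point (this is a sketch, not pytorch_lite's actual behavior; the helper, the version constant, and the error message are all made up), an export-time guard comparing the training-side torch version against the mobile runtime version can fail fast instead of crashing at inference with an "Unknown builtin op" error:

```python
import torch

def versions_compatible(export_version: str, runtime_version: str) -> bool:
    """Compare major.minor only: a model scripted with torch 2.x may use
    ops that a 1.13 mobile interpreter does not know about."""
    return export_version.split(".")[:2] == runtime_version.split(".")[:2]

# Hypothetical: the LibTorch-Lite version bundled on the device.
MOBILE_RUNTIME_VERSION = "1.13.0.1"

def export_for_mobile(model: torch.nn.Module, path: str) -> None:
    # Fail fast on a training/runtime version mismatch.
    if not versions_compatible(torch.__version__, MOBILE_RUNTIME_VERSION):
        raise RuntimeError(
            f"torch {torch.__version__} vs mobile runtime {MOBILE_RUNTIME_VERSION}: "
            "export with a matching version to avoid 'Unknown builtin op' errors"
        )
    scripted = torch.jit.script(model.eval())
    scripted._save_for_lite_interpreter(path)
```

Comparing only major.minor is a simplification; op availability can in principle also change between patch releases.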

@andynewman10
Author

But on iOS, I get this strange crash on startup. (pytorch/pytorch#102833) I spent hours trying to find out why this happened, without success.

@cyrillkuettel You only get a crash on iOS 12, right? Can you confirm things run fine on iOS 13 or later?

@cyrillkuettel
Contributor

Can't know for sure. The iPhone I use for testing (iPhone 6) only supports up until iOS 12.

@andynewman10
Author

Have you tried your code with the iOS simulator? I would try it with iOS 12, and with a later iOS version.

@cyrillkuettel
Contributor

cyrillkuettel commented Dec 7, 2023

I tried running on the simulator. It still returns an error, but a different one.

  • I created a new flutter plugin with ffi
flutter create --template=plugin_ffi --platforms=android,ios libtorch_test_version 

added dependencies (in ios/libtorch_test_version.podspec):

Click me for more
Pod::Spec.new do |s|
  s.name             = 'libtorch_test_version'
  s.version          = '0.0.1'
  s.summary          = 'A new Flutter FFI plugin project.'
  s.description      = <<-DESC
A new Flutter FFI plugin project.
                       DESC
  s.homepage         = 'http://example.com'
  s.license          = { :file => '../LICENSE' }
  s.author           = { 'Your Company' => '[email protected]' }

  s.static_framework = true
  s.public_header_files = 'Classes/**/*.h'

  s.source           = { :path => '.' }
  s.source_files = 'Classes/**/*'
  s.ios.deployment_target = '12.0'

  s.dependency 'Flutter'
  s.dependency 'LibTorch-Lite', '~>2.1.0'
  s.dependency 'OpenCV', '4.3.0'
  s.platform = :ios, '12.0'

  # Flutter.framework does not contain a i386 slice.
  s.pod_target_xcconfig = { 'DEFINES_MODULE' => 'YES', 'EXCLUDED_ARCHS[sdk=iphonesimulator*]' => 'i386',
        'HEADER_SEARCH_PATHS' => '$(inherited) "${PODS_ROOT}/LibTorch-Lite/install/include"'
  }
  s.swift_version = '5.0'

# Needed for Libtorch-Lite 2.1.0

  s.xcconfig = {
  "CLANG_CXX_LANGUAGE_STANDARD" => "c++17",
  "CLANG_CXX_LIBRARY" => "libc++"
  }

end
cd example/ios
pod install
  • Changed C++ compiler to 17 in Xcode
  • added Signing in Xcode
  • Set strip style to Non-Global Symbols

Open the simulator:

open -a simulator

Run on simulator:

 flutter run -d 8648E37B-49FB-476F-9B52-041F5F4D4FD

Eventually this returns:

Invalid argument(s): Failed to load dynamic library 'libtorch_test_version.framework/libtorch_test_version': dlopen(libtorch_test_version.framework/libtorch_test_version, 0x0001): tried: '/Library/Developer/CoreSimulator/Volumes/iOS_21A5303d/Library/Developer/CoreSimulator/Profiles/Runtimes/iOS 17.0.simruntime/Contents/Resources/RuntimeRootlibtorch_test_version.framework/libtorch_test_version' (no such file), '/Library/Developer/CoreSimulator/Volumes/iOS_21A5303d/Library/Developer/CoreSimulator/Profiles/Runtimes/iOS 17.0.simruntime/Contents/Resources/RuntimeRoot/usr/lib/swift/libtorch_test_version.framework/libtorch_test_version' (no such file), 

Honestly this is beyond frustrating. At this point I stopped, I don't care anymore...

@andynewman10
Author

@cyrillkuettel You must make sure that, in Xcode, Strip symbols is not set to All symbols, just Non-global symbols. This is critical and might explain dlopen failures.

@andynewman10
Author

Mind you, in debug builds this should not be an issue.

Have you tried the code on iOS 13+? Say, iOS 15?

@cyrillkuettel
Contributor

Yes, I have set it to Non-global symbols; I forgot to mention this. Still the same "failed to lookup symbol" error.

@cyrillkuettel
Contributor

Screenshot from Simulator:
simulator_screenshot_A60A5CFC-BFEC-49C9-AA60-37AA8EC5F095

@andynewman10
Author

To be clear:

  • you only have this problem with LibTorch-Lite 2.1.0, for all iOS releases, right?
  • LibTorch-Lite 1.13 works fine on all iOS versions except on iOS 12, right?

@cyrillkuettel
Contributor

  • correct
  • 1.13.0.1, to be precise, works even with iOS 12

@andynewman10
Author

I understand your frustration. Unfortunately, I'm not very good at iOS programming, so I cannot help you with this 'podspec debugging' issue. It looks like a file is not found, but the cause is unknown.

Do you know what the benefits of using 2.1.0 are over 1.13.x? Just curious.

@cyrillkuettel
Contributor

cyrillkuettel commented Dec 7, 2023

I would not bother with 2.1.0 unless you need it for a very specific purpose.

Potential benefits:

  • Performance improvements (Yeah, OK, but I doubt this is significant; the bottleneck in inference is still the quite computationally limited mobile hardware.)
  • If you're using models developed after March 2023, then you might need this version for compatibility.

@luvwinnie

How can I use LibTorch version 2? I have a ViT model which uses attention; it seems like LibTorch 1.13 doesn't have the operation, giving these errors:

libc++abi: terminating due to uncaught exception of type torch::jit::ErrorReport: 
Unknown builtin op: aten::scaled_dot_product_attention.
Here are some suggestions: 
	aten::_scaled_dot_product_attention

The original call is:
 File "code/__torch__/timm/models/vision_transformer/___torch_mangle_1064.py", line 32
  _6 = (q_norm).forward()
  _7 = (k_norm).forward()
  x = torch.scaled_dot_product_attention(q, k, v)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
  input = torch.reshape(torch.transpose(x, 1, 2), [_0, _2, _4])
  _8 = (proj_drop).forward((proj).forward(input, ), )

Message from debugger: killed

@abdelaziz-mahdy
Owner

abdelaziz-mahdy commented Jul 21, 2024

How can I use LibTorch version2? I got a ViT model which use Attention seems like LibTorch 1.3 doesn't have the operation with this errors. […]

An update to libtorch is needed, so I will look into it when I have time.

Edit: @luvwinnie the latest version of pytorch is already used:

implementation 'org.pytorch:pytorch_android:2.1.0'

Same for iOS: https://github.com/CocoaPods/Specs/tree/master/Specs/1/3/c/LibTorch

@luvwinnie

@abdelaziz-mahdy It seems like my environment is using LibTorch (1.13.0.1) on iOS?

PODS:
  - camera_avfoundation (0.0.1):
    - Flutter
  - Flutter (1.0.0)
  - LibTorch (1.13.0.1):
    - LibTorch/Core (= 1.13.0.1)
  - LibTorch/Core (1.13.0.1):
    - LibTorch/Torch
  - LibTorch/Torch (1.13.0.1)
  - onnxruntime (0.0.1):
    - Flutter
    - onnxruntime-objc (= 1.15.1)
  - onnxruntime-c (1.15.1)
  - onnxruntime-objc (1.15.1):
    - onnxruntime-objc/Core (= 1.15.1)
  - onnxruntime-objc/Core (1.15.1):
    - onnxruntime-c (= 1.15.1)
  - path_provider_foundation (0.0.1):
    - Flutter
    - FlutterMacOS
  - pytorch_lite (0.0.1):
    - Flutter
    - LibTorch (~> 1.13.0.1)

DEPENDENCIES:
  - camera_avfoundation (from `.symlinks/plugins/camera_avfoundation/ios`)
  - Flutter (from `Flutter`)
  - onnxruntime (from `.symlinks/plugins/onnxruntime/ios`)
  - path_provider_foundation (from `.symlinks/plugins/path_provider_foundation/darwin`)
  - pytorch_lite (from `.symlinks/plugins/pytorch_lite/ios`)
dependencies:
  flutter:
    sdk: flutter
  # image_picker: ^0.8.4+4

  # The following adds the Cupertino Icons font to your application.
  # Use with the CupertinoIcons class for iOS style icons.
  image: ^4.2.0
  pytorch_lite: ^4.2.5
  camera: 0.10.6
  syncfusion_flutter_gauges: ^26.1.42
  loading_animation_widget: ^1.2.1

Is my pytorch_lite not the latest, or is something else the problem?

@abdelaziz-mahdy
Owner

Does it work on Android? If yes, it may be a pytorch iOS problem.
