[Build] Syntax errors in core/providers/dml/dml_provider_factory.h building with --build_java and --use_dml #19656
I don't plan to convert the JNI code to C++, so let's try to get the DML header fixed so it's parseable as C. Looks like it doesn't like the enum being a subtype of
This issue has been automatically marked as stale due to inactivity and will be closed in 30 days if no further activity occurs. If further support is needed, please provide an update and/or more details.
Keep this open, it's still an issue.
…a C header file (#20157)

### Description
The dml_provider_factory header file can't be used in C programs as it defines C++ inline operators. This PR rearranges that header file so that it looks like valid C when used from C, and also makes a couple of small modifications to the Java code so it correctly binds to the DML EP at build time.

I'm having some difficulty testing it, as I think it's pulling in the old version of DirectML on my computer and I can't figure out what the library loading path is in Java to make it look at the recent version I downloaded. So the test I added fails with:

```
InferenceTest > testDirectML() FAILED
    ai.onnxruntime.OrtException: Error code - ORT_RUNTIME_EXCEPTION - message: Exception during initialization: <path-to-ort>\onnxruntime\core\providers\dml\DmlExecutionProvider\src\AbiCustomRegistry.cpp(518)\onnxruntime.dll!00007FFF74819333: (caller: 00007FFF74793509) Exception(3) tid(4f58) 80070057 The parameter is incorrect.
        at app//ai.onnxruntime.OrtSession.createSession(Native Method)
        at app//ai.onnxruntime.OrtSession.<init>(OrtSession.java:74)
        at app//ai.onnxruntime.OrtEnvironment.createSession(OrtEnvironment.java:236)
        at app//ai.onnxruntime.OrtEnvironment.createSession(OrtEnvironment.java:221)
        at app//ai.onnxruntime.InferenceTest.openSessionSqueezeNet(InferenceTest.java:1961)
        at app//ai.onnxruntime.InferenceTest.runProvider(InferenceTest.java:665)
        at app//ai.onnxruntime.InferenceTest.testDirectML(InferenceTest.java:657)
```

But it does correctly compile, and this error seems very similar to other issues with the DML provider when it doesn't like a model due to the loaded library being old. The test is using the squeezenet file that's been in the repo since 2019. If someone can help me figure out how to get the right version of DML in the library path, I can test it more on my end. I tried adding the folder with the new version into the system path, but I'm not very familiar with Windows' library loading behaviour.

### Motivation and Context
Fixes #19656 to allow use of the DirectML EP from ORT Java.

cc @martinb35
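As a side note on the library-path question above, one way to experiment is to prepend the folder containing the newer DirectML.dll to PATH before launching the JVM. This is only a sketch of an untested guess; the paths and the test invocation are illustrative, not verified against this repo. One caveat from Windows' documented DLL search order: the application directory and System32 are searched before PATH, so a copy of DirectML.dll in System32 would still be preferred.

```bat
REM Sketch (untested): put the folder with the newer DirectML.dll first on PATH
REM before launching the JVM. Paths and the test command are illustrative.
REM Caveat: Windows searches the application directory and System32 before PATH,
REM so a DirectML.dll in System32 will still win the lookup.
set PATH=C:\dml\Microsoft.AI.DirectML.1.13.1\bin\x64-win;%PATH%
gradlew test --tests ai.onnxruntime.InferenceTest
```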
Describe the issue
Build fails with syntax errors when building with --build_java and --use_dml.
This appears to be because the Java native code is C and depends on header files which contain C++-only code even when `__cplusplus` is not defined.
Use case
I want to make a multiplatform desktop application that works out of the box and uses ONNX for local hardware-accelerated inference. Jetpack Compose Desktop is a good solution for this, and along with Conveyor it makes distribution easy; it's arguably a better solution than Electron for multiplatform desktop apps (pros: you get to use Kotlin, the JVM is nice for multi-OS interop and actually has lower overhead than Electron, it enables more ergonomic multi-threading, and it's much less prone to exposing OS APIs to XSS attacks; cons: less mature).
ONNX is great for this use case. On Mac it works out of the box; on Linux, CUDA or ROCm works when installed separately with the env vars pointing to it, which is expected for Linux users; and Windows could easily work out of the box regardless of hardware using DML, though for now it seemingly has to fall back to the CPU until this is fixed.
Other details
I have tried a bit of converting dml_provider_factory.h to be C compatible, but it seems other DML files then become problematic. Maybe the play is to compile the C files under java/src/main/native as C++? (I tried this a bit as well, though I'm not very familiar with C/C++ build tools, and the C code does not actually seem to be fully valid C++.)
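For reference, the "compile the C files as C++" idea would look something like the following in CMake. This is only a sketch; the glob pattern is illustrative rather than taken from ONNX Runtime's actual build files, and as noted it only helps if the C sources are themselves valid C++.

```cmake
# Sketch: force JNI sources with a .c extension to be compiled as C++.
# Only works if the .c files happen to also be valid C++.
file(GLOB jni_native_src "java/src/main/native/*.c")
set_source_files_properties(${jni_native_src} PROPERTIES LANGUAGE CXX)
```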
Happy to pair up on this, though I have little experience with C and C++ so I won't be able to do much on my own.
Urgency
Currently a rare but high-potential use case. Fixing it would be immediately beneficial.
Target platform
Java, Windows, DML, all arches
Build script
(rel 1.17.1 checked out)
(I also manually placed DirectML.lib files into build/Windows/packages/Microsoft.AI.DirectML.1.13.1/bin/* from a .nupkg because they were missing from the automatically downloaded one 🤷 )
.\build.bat --config Debug --cmake_generator "Visual Studio 17 2022" --build_java --use_dml --skip_tests --skip_submodule_sync --compile_no_warning_as_error
(also tried with Release and RelWithDebInfo)
Error / output
Visual Studio Version
Community 2022 17.9.1
GCC / Compiler Version
Tried with `cl` printing
Microsoft (R) C/C++ Optimizing Compiler Version 19.38.33133 for x86
and
Microsoft (R) C/C++ Optimizing Compiler Version 19.16.27051 for x86