Huge import latency caused by Term, GPUArrays, and CUDA #156

Comments
I suppose an argument can be made for removing …
I'm wondering if you just need …
Ah, that's a good point... technically this supports …

Anyone else have opinions on this? I'm definitely willing to do the PR and make the change if there is consensus, but I'm not completely confident that nobody will take issue with it.
Yeah, thanks for the GPU support. I'm just experiencing slow loading times on my poor cluster login node, so no rush.
IMO even GPUArrays alone seems to increase loading times quite significantly. It's nice to have GPU support, but personally I usually do not have access to dedicated GPUs, so to me it seems more like an extension than a basic part of the package (also highlighted by the fact that GPU support was added only recently). Maybe it would be better to use weak dependencies on Julia >= 1.9 and Requires on older Julia versions (https://pkgdocs.julialang.org/dev/creating-packages/#Conditional-loading-of-code-in-packages-(Extensions)).
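For what it's worth, the mechanism in the linked docs amounts to declaring CUDA as a weak dependency and moving the GPU methods into an extension module. A minimal sketch, assuming a hypothetical extension name XGBoostCUDAExt and file ext/XGBoostCUDAExt.jl (the UUID should be checked against CUDA.jl's entry in the General registry):

# Project.toml (excerpt): CUDA becomes a weak dependency
[weakdeps]
CUDA = "052768ef-5323-5732-b1bb-66c8b64840ba"

[extensions]
XGBoostCUDAExt = "CUDA"

# ext/XGBoostCUDAExt.jl: loaded automatically once the user also does `using CUDA`
module XGBoostCUDAExt
using XGBoost, CUDA
# GPU-specific methods (e.g. building a DMatrix from a CuArray) would move here
end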
Yeah, I agree with this.
I'm not sure exactly what can be done with weak dependencies, but I'm certainly open to exploring it once 1.9 is released. Personally, I'm not too fond of the argument that stuff should be removed just because it has to compile; compilation is just part of life, and it's an issue for the compiler, not for individual packages. That dependencies should be trimmed where possible seems like a significantly better argument to me. That said, I'm at least open to all the specific proposals made here: yes, using …

Of course, I'm not the only maintainer of this, so my opinion is hardly authoritative.
I too am having issues with unexpectedly depending on CUDA. In my case I am using PackageCompiler - here's a (cut down) snippet:
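An illustrative sketch of the kind of PackageCompiler call involved, with a hypothetical project path rather than the verbatim original:

using PackageCompiler

# Bundle the project into a relocatable app; any CUDA_Runtime_jll artifacts pulled
# in by XGBoost_jll get copied into the bundle, which is where the extra ~2 GB comes from.
create_app("MyProject", "MyProjectCompiled"; force = true)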
I would really like my bundle to be 2 GB smaller. I'm assuming just using …
Are you running on a machine with CUDA 11? If so, you might be able to eliminate the large CUDA artifacts by switching. The …

I wonder if we would be able to handle the CUDA deps with the new conditional package loading: https://pkgdocs.julialang.org/dev/creating-packages/#Conditional-loading-of-code-in-packages-(Extensions)
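If switching runtimes is viable, CUDA.jl exposes this as a preference; a sketch, assuming a CUDA.jl version that provides set_runtime_version! (4.x and later):

using CUDA

# Pin the runtime artifacts to a specific CUDA version (e.g. 11.8) instead of the
# default; takes effect after restarting Julia and re-precompiling.
CUDA.set_runtime_version!(v"11.8")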
That was what I was referring to with weak dependencies in #156 (comment) 🙂 It's great, and I'm already using it in a few packages, but it requires Julia 1.9. If one wants to support the conditional features on older Julia versions, one either has to add the weak dependencies as hard dependencies on those Julia versions or use Requires (which does not support precompilation). In the case of XGBoost, the most natural approach would seem to be keeping the already existing dependencies as hard dependencies on Julia < 1.9.
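For the pre-1.9 path, the Requires.jl pattern would look roughly like this (a sketch, with a hypothetical src/cuda.jl holding the GPU code; the UUID is CUDA.jl's registry UUID and should be verified):

# In XGBoost.jl's top-level module, as a fallback for Julia < 1.9
using Requires

function __init__()
    @require CUDA="052768ef-5323-5732-b1bb-66c8b64840ba" include("cuda.jl")
end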
Just to add that at RAI we're in exactly the same situation @andyferris mentioned above, where we're using PackageCompiler and do not want to include … But given these deps seem to have been added in …, e.g. when updating an unrelated package we see things like:

(Repo) pkg> add --preserve=all XUnit @1.1.5
Resolving package versions...
Updating `~/Project.toml`
[3e3c03f2] ↑ XUnit v1.1.4 ⇒ v1.1.5
Updating `~/Manifest.toml`
[3e3c03f2] ↑ XUnit v1.1.4 ⇒ v1.1.5
[4ee394cb] + CUDA_Driver_jll v0.3.0+0
[76a88914] + CUDA_Runtime_jll v0.3.0+2
⌃ [a5c6f535] ↑ XGBoost_jll v1.7.1+0 ⇒ v1.7.1+1
Info Packages marked with ⌃ have new versions available and may be upgradable.
Precompiling project...
✓ XGBoost_jll
✓ XUnit
✓ XGBoost
✓ Repo
4 dependencies successfully precompiled in 42 seconds. 259 already precompiled. 1 skipped during auto due to previous errors.
We will definitely demote both …
@nickrobinson251 I did manage to pin both XGBoost at 2.0.2 and XGBoost_jll at 1.6.2+0, and that seemed to work for me. Looking forward to Julia 1.9 still :)
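For reference, the equivalent pin from the Pkg REPL (versions as above) would be roughly:

pkg> pin XGBoost@2.0.2 XGBoost_jll@1.6.2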
As a follow-up to this, I see there is some weakdep stuff in the Project.toml, and when installing on Julia 1.9.2 I get this:
I see that …
Yes, the JLL still pulls in CUDA JLL dependencies. I asked about weakdeps/optional dependencies and GPU/non-GPU binaries in an issue on Yggdrasil a while ago, but there does not seem to be a solution to this problem yet. (Some other libraries are built in both GPU and non-GPU versions on Yggdrasil, but last time I checked there was no Julia package that actually tried to combine them; also, just depending on both would still pull in all the undesired binaries...)
Ah, I see. Thank you.