[ENHANCEMENT]: Switch to cuda::memory_resource once it is available #289
sleeepyjack added the type: feature request (New feature request) and P2: Nice to have (Desired, but not necessary) labels on Apr 4, 2023
sleeepyjack changed the title from "[ENHANCEMENT]:" to "[ENHANCEMENT]: Switch to cuda::memory_resource once it is available" on Apr 5, 2023
Will be available once rapidsai/rapids-cmake#399 is resolved.
rapids-bot pushed a commit to rapidsai/rapids-cmake that referenced this issue on Oct 16, 2023:
This PR separates out the libcudacxx update from #399. I am proposing to update only libcudacxx to 2.1.0 and leave Thrust/CUB pinned at 1.17.2 until all of RAPIDS is ready to update. Then we can move forward with #399 next. Separating out the libcudacxx update should allow RAPIDS to use some of the new features we want, while giving RAPIDS libraries more time to migrate to CCCL 2.1.0 (particularly for breaking changes in Thrust/CUB).

**Immediate benefits of bumping only libcudacxx to 2.1.0:**
- Enables the migration to Thrust/CUB 2.1.0 to be done more incrementally, because we could merge PRs using `cuda::proclaim_return_type` into cudf and other libraries, reducing the amount of unmerged code we're maintaining in the "testing PRs" while waiting for all RAPIDS repos to be ready for Thrust/CUB 2.1.0.
- Unblocks work in rmm (rapidsai/rmm#1095) and quite a few planned changes for cuCollections (such as NVIDIA/cuCollections#332, NVIDIA/cuCollections#331, NVIDIA/cuCollections#289).

**Risk Assessment:** This should be fairly low risk because libcudacxx 2.1.0 is similar to our current pinning of 1.9.1; the major version bump was meant to align with Thrust/CUB and isn't indicative of major breaking changes.

Authors:
- Bradley Dice (https://github.com/bdice)

Approvers:
- Robert Maynard (https://github.com/robertmaynard)
- Vyas Ramasubramani (https://github.com/vyasr)

URL: #464
sleeepyjack pushed a commit that referenced this issue on Oct 16, 2023:
This updates cuCollections to rapids-cmake 23.12. This comes with rapidsai/rapids-cmake#464, which updates libcudacxx to 2.1.0. That should unblock several cuCollections issues such as #332, #331, #289.
PointKernel added the P3: Backlog (Unprioritized) label and removed the P2: Nice to have (Desired, but not necessary) label on Oct 25, 2024
Is your feature request related to a problem? Please describe.
We currently roll our own default cuco::cuda_allocator, which internally calls cudaMalloc/cudaFree. This approach doesn't leverage the concept of stream-ordered allocations, which might degrade performance for operations such as size() and insert(), where we allocate intermediate storage to retrieve the count.
Describe the solution you'd like
libcu++ v2.0 introduces a new cuda::memory_resource (design, initial PR, final PR). We should use this facility instead.
Describe alternatives you've considered
No response
Additional context
No response
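As a concrete sketch of what stream-ordered allocation could buy inside size(), the intermediate count buffer could come from cudaMallocAsync on the operation's stream rather than a synchronizing cudaMalloc. This is a hypothetical illustration, not current cuco code; device_count and the kernel placeholder are invented names:

```cuda
#include <cstddef>
#include <cuda_runtime.h>

// Hypothetical: fetch a device-side counter without the implicit
// device-wide synchronization that cudaMalloc/cudaFree incur.
std::size_t device_count(cudaStream_t stream) {
    std::size_t* d_count = nullptr;
    std::size_t h_count  = 0;

    // Stream-ordered: ordered only with respect to work on `stream`.
    cudaMallocAsync(&d_count, sizeof(std::size_t), stream);
    cudaMemsetAsync(d_count, 0, sizeof(std::size_t), stream);

    // ... launch the counting kernel on `stream` here ...

    cudaMemcpyAsync(&h_count, d_count, sizeof(std::size_t),
                    cudaMemcpyDeviceToHost, stream);
    cudaFreeAsync(d_count, stream);
    cudaStreamSynchronize(stream);  // waits on this stream only
    return h_count;
}
```

A cuda::memory_resource-based allocator would wrap this allocate/deallocate pair behind the resource interface, so cuco containers would pick up stream-ordered behavior without hard-coding the runtime calls.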