[GraphBolt][CUDA] GPUCache performance fix. #7073
Merged
Conversation
mfbalin changed the title from "[GraphBolt][CUDA] Fixing GPU cache performance issue, adding it to multiGPU example." to "[GraphBolt][CUDA] GPUCache fix and showcase it in multiGPU example." on Feb 2, 2024
mfbalin changed the title from "[GraphBolt][CUDA] GPUCache fix and showcase it in multiGPU example." to "[GraphBolt][CUDA] GPUCache performance fix." on Feb 2, 2024
Will test it ASAP.
TristonC reviewed on Feb 2, 2024
With both PR 7073 and 7074, we saw good performance on 8-GPU runs. On the same DGX1V, per-epoch time dropped from ~1.8 seconds to less than 1.2 seconds with --gpu-cache-size=1000000. Well done.
TristonC approved these changes on Feb 2, 2024
Description
The missing_len argument to the GPU cache needs to be allocated on the GPU, because the cache performs atomic operations on it. This change yields roughly a 1000x performance improvement; before it, the GPUCache was practically unusable.
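The idea can be illustrated with a toy sketch (class and parameter names here are hypothetical, not GraphBolt's actual API): the miss counter the cache kernel updates atomically must be device-resident, since a host-resident counter forces a costly synchronization on every query.

```python
# Hedged sketch with hypothetical names (not GraphBolt's actual API):
# a toy model of why missing_len must live on the GPU. The real cache
# kernel performs an atomicAdd on missing_len from device code, so a
# host-side counter would serialize every query on a device sync.

class GpuCacheSketch:
    """Toy key cache that reports how many queried keys missed."""

    def __init__(self):
        self.store = {}

    def query(self, keys, missing_len_device):
        # Model the fixed behavior: the counter the kernel updates
        # atomically must be allocated on the GPU.
        if missing_len_device != "cuda":
            raise ValueError("missing_len must be allocated on the GPU")
        missing = [k for k in keys if k not in self.store]
        return missing, len(missing)

cache = GpuCacheSketch()
missing, num_missing = cache.query([1, 2, 3], missing_len_device="cuda")
print(num_missing)  # all three keys miss an empty cache
```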
Testable with #7074. @TristonC, could you test it on multi-GPU machines with --gpu-cache-size=1000000, either before or after this PR is merged?

When the GPUCachedFeature is used, the feature fetching operation involves a GPU synchronization. We might want to run feature fetching in another thread, hoping that the main thread makes progress while the fetching thread waits on the synchronization. However, if the GIL is not released during that wait, the threads will not actually run concurrently, and the overlap optimization might not work as expected.
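The overlap idea can be sketched as follows; fetch_features is a stand-in for the real feature read, and time.sleep (which releases the GIL) stands in for a GPU synchronization that does the same. A C extension that held the GIL while waiting would serialize the two threads again, which is exactly the concern above.

```python
# Sketch of the thread-overlap idea (stand-in names, not GraphBolt's API):
# run the synchronizing feature fetch in a worker thread so the main
# thread keeps making progress. This only helps if the wait releases
# the GIL; time.sleep below simulates a GIL-releasing GPU sync.
import time
from concurrent.futures import ThreadPoolExecutor

def fetch_features():
    time.sleep(0.2)  # simulated GPU synchronization (releases the GIL)
    return "features"

start = time.time()
with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(fetch_features)  # fetch runs in the background
    time.sleep(0.2)                       # main thread does other work
    features = future.result()
elapsed = time.time() - start
# Overlapped: total time is ~0.2 s, not the serial 0.4 s.
```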
Checklist
Please feel free to remove inapplicable items for your PR.
Changes