
[GraphBolt][CUDA] GPUCache performance fix. #7073

Merged
merged 3 commits into dmlc:master on Feb 2, 2024

Conversation

@mfbalin (Collaborator) commented on Feb 2, 2024

Description

The missing_len argument to the GPU cache needs to be on the GPU because the cache performs atomic operations on it. Moving it there yields a ~1000x performance improvement; before this change, the GPUCache was practically unusable.
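A minimal sketch of the idea behind the fix, assuming a PyTorch-style API (the variable names here are illustrative, not the actual GraphBolt code): a counter that a CUDA kernel updates atomically should be allocated in device memory, because atomics on host-resident memory cross the PCIe bus on every update.

```python
import torch

# Hypothetical illustration: a length counter that a cache kernel would
# update with atomic operations.

# Slow: counter lives in host memory, so each atomic update from the GPU
# would have to go over the interconnect.
missing_len_cpu = torch.zeros(1, dtype=torch.int64)  # device == "cpu"

# Fast: counter lives in device memory, next to the cache kernels.
if torch.cuda.is_available():
    missing_len_gpu = torch.zeros(1, dtype=torch.int64, device="cuda")
```

This is only a sketch of the memory-placement principle; the actual change lives in the GraphBolt C++/CUDA sources.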

Testable with #7074. @TristonC, could you test it on multi-GPU machines with --gpu-cache-size=1000000, either before or after this PR is merged?

When GPUCachedFeature is used, the feature fetching operation involves a GPU synchronization. We might want to run feature fetching in another thread, hoping that the main thread can make progress while the fetching thread waits for the synchronization. However, if the GIL is not released during that wait, the overlap optimization may not work as expected.
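The overlap idea above can be sketched with standard-library threading; `fetch_features` here is a hypothetical stand-in for the GPUCachedFeature read, not an actual DGL API. The sketch only pays off in practice if the GPU synchronization inside the real fetch releases the GIL while it waits.

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_features(indices):
    # Placeholder for a cache lookup that ends in a GPU synchronization.
    # If that wait holds the GIL, the main thread cannot overlap with it.
    return [i * 2 for i in indices]

executor = ThreadPoolExecutor(max_workers=1)

# Kick off the fetch for the next minibatch on a worker thread.
future = executor.submit(fetch_features, [0, 1, 2])

# ... the main thread would do other work here (e.g. the current step) ...

features = future.result()  # join with the prefetch thread
executor.shutdown()
```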

Checklist

Please feel free to remove inapplicable items for your PR.

  • The PR title starts with [$CATEGORY] (such as [NN], [Model], [Doc], [Feature])
  • I've leveraged the tools to beautify the Python and C++ code.
  • The PR is complete and small; read the Google eng practice (a CL corresponds to a PR) to understand more about small PRs. In DGL, we consider PRs with fewer than 200 lines of core code change to be small (examples, tests, and documentation can be exempted).
  • All changes have test coverage
  • Code is well-documented
  • To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
  • Related issue is referenced in this PR
  • If the PR is for a new model/paper, I've updated the example index here.

Changes

@dgl-bot (Collaborator) commented on Feb 2, 2024

To trigger regression tests:

  • @dgl-bot run [instance-type] [which tests] [compare-with-branch];
    For example: @dgl-bot run g4dn.4xlarge all dmlc/master or @dgl-bot run c5.9xlarge kernel,api dmlc/master

@mfbalin mfbalin changed the title [GraphBolt][CUDA] Fixing GPU cache performance issue, adding it to multiGPU example. [GraphBolt][CUDA] GPUCache fix and showcase it in multiGPU example. Feb 2, 2024
@mfbalin mfbalin requested a review from TristonC February 2, 2024 12:31
@dgl-bot (Collaborator) commented on Feb 2, 2024

Commit ID: e54b358

Build ID: 1

Status: ✅ CI test succeeded.

Report path: link

Full logs path: link

@mfbalin mfbalin changed the title [GraphBolt][CUDA] GPUCache fix and showcase it in multiGPU example. [GraphBolt][CUDA] GPUCache performance fix. Feb 2, 2024
@dgl-bot (Collaborator) commented on Feb 2, 2024

Commit ID: fcdb1eb

Build ID: 2

Status: ✅ CI test succeeded.

Report path: link

Full logs path: link

@TristonC (Collaborator) commented on Feb 2, 2024

Will test it ASAP

@TristonC (Collaborator) left a comment

With both PR 7073 and 7074, we saw good performance on an 8-GPU run. On the same DGX1V, per-epoch time dropped from ~1.8 seconds to less than 1.2 seconds with --gpu-cache-size=1000000. Well done.

@TristonC TristonC merged commit d5b03bc into dmlc:master Feb 2, 2024
2 checks passed