
UCT/CUDA: Update cuda_copy perf estimates for Grace-Hopper #10155

Open: wants to merge 1 commit into master
Conversation

SeyedMir (Collaborator)
What

Update cuda_copy perf estimates for Grace-Hopper

Why?

The bandwidth and latency values differ for PCIe versus C2C links connecting the CPU and GPU.

How?

Update the cuda_copy bw config and UCX_CUDA_COPY_BW.
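For illustration, the per-path estimates could then be overridden at runtime with the multi-key syntax defined in the config table below (the values shown are the Grace-Hopper defaults proposed in this PR; "./app" is just a placeholder command):

    UCX_CUDA_COPY_BW=h2d:8300MBs,d2h:11660MBs,d2d:320GBs,other:10000MBs ./app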

Comment on lines +420 to +425
+        perf_attr->bandwidth.shared = zcopy ? iface->config.bw.h2d :
+                                              iface->config.bw.h2d * 0.95;
     } else if ((src_mem_type == UCS_MEMORY_TYPE_CUDA) &&
                (dst_mem_type == UCS_MEMORY_TYPE_HOST)) {
-        perf_attr->bandwidth.shared = (zcopy ? 11660.0 : 9320.0) *
-                                      UCS_MBYTE;
+        perf_attr->bandwidth.shared = zcopy ? iface->config.bw.d2h :
+                                              iface->config.bw.d2h * 0.95;
Contributor

Why is the bcopy BW slower than the zcopy one? BTW, 11660 * 0.95 is not equal to 9320. Maybe we need to introduce two different env variables, like BCOPY_BW and ZCOPY_BW, to control these values accurately. Or, if we are OK with changing performance in the common case, maybe it's better not to distinguish bcopy/zcopy perf at all and set one value in both cases?

Collaborator (Author)

It's actually not zcopy vs. bcopy; it's zcopy vs. short. Unlike zcopy, put/get short operations invoke cuStreamSynchronize per operation. Therefore, we want to advertise a slightly lower bandwidth for short operations than for zcopy in cuda_copy.
I'm not sure what 9320 represents. Why do you want it to be equal to 9320?
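For readers unfamiliar with the two code paths, a rough sketch of the difference described above (simplified for illustration; not the actual cuda_copy implementation, and error handling is omitted):

    #include <cuda.h>

    /* "short"-style path: enqueue one async copy, then block on the stream for
     * every operation; the per-op synchronization cost lowers the effective
     * bandwidth that should be advertised for this path */
    static void d2h_short_sketch(void *dst_host, CUdeviceptr src_dev, size_t len,
                                 CUstream stream)
    {
        cuMemcpyDtoHAsync(dst_host, src_dev, len, stream);
        cuStreamSynchronize(stream); /* blocks the calling thread per operation */
    }

    /* "zcopy"-style path: only enqueue the copy and record an event; completion
     * is detected later when the interface is progressed, so back-to-back
     * copies can pipeline and approach the full link bandwidth */
    static void d2h_zcopy_sketch(void *dst_host, CUdeviceptr src_dev, size_t len,
                                 CUstream stream, CUevent done)
    {
        cuMemcpyDtoHAsync(dst_host, src_dev, len, stream);
        cuEventRecord(done, stream); /* polled later, e.g. via cuEventQuery */
    }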

Contributor

Thanks for the explanation. I am wondering whether we need to make this difference in this way, because any change to the performance estimation without proper performance testing can lead to unforeseen degradation in some cases. So if we want to tune performance for GH systems only, I would like to leave performance on other platforms untouched. Or, if we are OK with changing performance on all platforms in this PR, I am wondering whether this 5% difference really matters, or whether we can follow the KISS principle and set the same value for both the zcopy and short cases.

@brminich @yosefe WDYT?

     ucs_offsetof(uct_cuda_copy_iface_config_t, bw.d2h)},
    {"d2d", "device to device bandwidth",
     ucs_offsetof(uct_cuda_copy_iface_config_t, bw.d2d)},
    {"other", "any other src-dest memory types bandwidth",
Contributor
Minor

Suggested change
-    {"other", "any other src-dest memory types bandwidth",
+    {"other", "any other memory types combinations bandwidth",

     ucs_offsetof(uct_cuda_copy_iface_config_t, bandwidth), UCS_CONFIG_TYPE_BW},
    /* TODO: 1. Add separate keys for shared and dedicated bandwidth
             2. Remove the "other" key (use pref_loc for managed memory) */
    {"BW", "h2d:8300MBs,d2h:11660MBs,d2d:320GBs,other:10000MBs",
Contributor

Is it possible to remove "other" and define the value for the remaining memory-type combinations through a value without a label?

Suggested change
-    {"BW", "h2d:8300MBs,d2h:11660MBs,d2d:320GBs,other:10000MBs",
+    {"BW", "10000MBs,h2d:8300MBs,d2h:11660MBs,d2d:320GBs",

@@ -87,7 +92,12 @@ typedef struct uct_cuda_copy_iface_config {
     uct_iface_config_t super;
     unsigned           max_poll;
     unsigned           max_cuda_events;
-    double             bandwidth;
+    struct {
Akshay-Venkatesh (Contributor), Sep 26, 2024

@SeyedMir why not use bw[UCS_MEMORY_TYPE_LAST][UCS_MEMORY_TYPE_LAST] and avoid explicit fields for each direction? This way we don't have to introduce the "other" field, and we can populate the bandwidths by referring to the specific source-destination combination. For example, replace:

         {"h2d", "host to device bandwidth",
          ucs_offsetof(uct_cuda_copy_iface_config_t, bw.h2d)},
          ...

with

         {"h2d", "host to device bandwidth",
          ucs_offsetof(uct_cuda_copy_iface_config_t, bw[UCS_MEMORY_TYPE_UNKNOWN][UCS_MEMORY_TYPE_CUDA])},

Collaborator (Author)

I thought about doing it that way initially, in fact. But then I thought the bw matrix would have entries for memory types that are completely irrelevant to CUDA, and there would be many more entries than the four in this struct. Having said that, I'm not strongly against using the matrix.
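As an editorial aside, a minimal sketch of what the matrix-based layout suggested above could look like (type and function names here are illustrative, not taken from the PR):

    #include <ucs/memory/memory_type.h> /* ucs_memory_type_t, UCS_MEMORY_TYPE_LAST */

    /* Hypothetical config layout: one bandwidth entry per (src, dst) memory-type
     * pair. Most entries are irrelevant to cuda_copy and would simply keep a
     * default value, which is the concern raised in the reply above. */
    typedef struct {
        double bw[UCS_MEMORY_TYPE_LAST][UCS_MEMORY_TYPE_LAST];
    } cuda_copy_bw_matrix_t;

    /* The perf-estimate path would then reduce to a single table lookup instead
     * of branching on each src/dst memory-type combination. */
    static double cuda_copy_bw_lookup(const cuda_copy_bw_matrix_t *cfg,
                                      ucs_memory_type_t src_mem_type,
                                      ucs_memory_type_t dst_mem_type)
    {
        return cfg->bw[src_mem_type][dst_mem_type];
    }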
