Test double split operations #105

Open
eleon opened this issue Apr 9, 2024 · 0 comments

eleon commented Apr 9, 2024

Create a program in tests to exercise the following scenarios. Let's say we have a dual-socket node with 2 GPUs per socket. We also have four MPI tasks.

Scenario A:

  • Split user scope at NUMA boundaries resulting in two NUMA scopes:
    qv_scope_split_at(ctx, base_scope, QV_HW_OBJ_NUMANODE, rank%nnumas, &numa_scope);
    
    Two tasks are assigned to NUMA 0's resources and two tasks are assigned to NUMA 1's resources.
  • Split each of the NUMA scopes (two tasks per scope) to get exclusive cores per task:
    qv_scope_ntasks(ctx, numa_scope, &ntasks_per_numa);
    qv_scope_split(ctx, numa_scope, ntasks_per_numa, rank%ntasks_per_numa, &sub_scope);
    

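The Scenario A color arithmetic can be checked standalone. Below is a minimal sketch with the node geometry hardcoded (2 NUMA domains, 4 MPI tasks); `numa_color` and `core_color` are hypothetical helpers introduced here for illustration, not part of the quo-vadis API:

```c
/* Sketch of the Scenario A color arithmetic (hypothetical helpers,
 * not quo-vadis API). Node: 2 NUMA domains, 4 MPI tasks. */

/* Color passed to qv_scope_split_at(): which NUMA scope a rank joins. */
int numa_color(int rank, int nnumas)
{
    return rank % nnumas;
}

/* Color passed to qv_scope_split() within a NUMA scope. */
int core_color(int rank, int ntasks_per_numa)
{
    return rank % ntasks_per_numa;
}

/* Resulting mapping for ranks 0..3 with nnumas = 2, ntasks_per_numa = 2:
 *   rank 0 -> NUMA 0, core color 0
 *   rank 1 -> NUMA 1, core color 1
 *   rank 2 -> NUMA 0, core color 0
 *   rank 3 -> NUMA 1, core color 1 */
```

Note that with the round-robin NUMA assignment, ranks 0 and 2 share NUMA 0 but also share core color 0; if the second split requires distinct colors per task within a scope, something like rank / nnumas (giving 0 and 1) would distinguish them.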
Scenario B (assumes #104 resolved):

  • Split user scope at GPU boundaries resulting in four GPU scopes:
    qv_scope_split_at(ctx, base_scope, QV_HW_OBJ_GPU, rank%ngpus, &gpu_scope);
    
    This should result in four different GPU scopes. Each scope should have one GPU and cores that are not shared with other GPU scopes. For example, there would be two GPU scopes associated with NUMA 0, the first would have half of the cores in this NUMA domain and the second would have the other half of the cores.
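A similar standalone check for the GPU split. `gpu_color` and `gpu_numa` are hypothetical helpers (not quo-vadis API); the geometry, 2 GPUs per NUMA domain with GPUs numbered consecutively per domain, follows the node described above:

```c
/* Sketch of the Scenario B color arithmetic (hypothetical helpers,
 * not quo-vadis API). Node: 4 GPUs total, 2 per NUMA domain. */

/* Color passed to qv_scope_split_at(): which GPU scope a rank joins.
 * With 4 tasks and ngpus = 4, each rank gets its own GPU scope. */
int gpu_color(int rank, int ngpus)
{
    return rank % ngpus;
}

/* Which NUMA domain a GPU scope's cores are carved from, assuming
 * GPUs are numbered consecutively per domain. */
int gpu_numa(int gpu, int ngpus_per_numa)
{
    return gpu / ngpus_per_numa;
}

/* With ngpus = 4 and ngpus_per_numa = 2: GPU scopes 0 and 1 each get
 * half of NUMA 0's cores; scopes 2 and 3 split NUMA 1's cores. */
```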