Legion: default mapper disregards field constraints for region requirements with reduction privileges #1703

Closed
Tracked by #1032
mariodirenzo opened this issue Jun 20, 2024 · 10 comments

@mariodirenzo

The attached C++ program, which tries to impose a field constraint on region requirements with reduction instances, shows that the default mapper disregards the field constraint.
If line 243 is commented out, the program works fine.

constraintTest2.tar.gz

@elliottslaughter, can you please add this issue to #1032?

@mariodirenzo (Author)

Upon further investigation, it appears that the problem is here https://gitlab.com/StanfordLegion/legion/-/blob/master/runtime/mappers/default_mapper.cc#L2030-2031
@streichler @lightsighter, is it still necessary to have one field per reduction instance?

@lightsighter (Contributor)

There shouldn't be anything that prevents you from making a reduction instance with multiple fields.

@mariodirenzo (Author)

In that case, the following patch should fix the issue.

diff --git a/runtime/mappers/default_mapper.cc b/runtime/mappers/default_mapper.cc
index 7a5d77adb9..8e83aba967 100644
--- a/runtime/mappers/default_mapper.cc
+++ b/runtime/mappers/default_mapper.cc
@@ -2022,36 +2022,9 @@ namespace Legion {
                           size_t *footprint /*= NULL*/)
     //--------------------------------------------------------------------------
     {
-      // Special case for reduction instances, no point in checking
-      // for existing ones and we also know that currently we can only
-      // make a single instance for each field of a reduction
-      if (req.privilege == LEGION_REDUCE)
-      {
-        // Iterate over the fields one by one for now, once Realm figures
-        // out how to deal with reduction instances that contain
-        bool force_new_instances = true; // always have to force new instances
-        LayoutConstraintID our_layout_id =
-         default_policy_select_layout_constraints(ctx, target_memory, req,
-               TASK_MAPPING, needs_field_constraint_check, force_new_instances);
-        LayoutConstraintSet our_constraints =
-                      runtime->find_layout_constraints(ctx, our_layout_id);
-        instances.resize(instances.size() + req.privilege_fields.size());
-        unsigned idx = 0;
-        for (auto it =
-              req.privilege_fields.begin(); it !=
-              req.privilege_fields.end(); it++, idx++)
-        {
-          our_constraints.field_constraint.field_set.clear();
-          our_constraints.field_constraint.field_set.push_back(*it);
-          if (!default_make_instance(ctx, target_memory, our_constraints,
-                       instances[idx], TASK_MAPPING, force_new_instances,
-                       true/*meets*/, req, footprint))
-            return false;
-        }
-        return true;
-      }
+      // Special case for reduction instances, always have to force new instances
+      bool force_new_instances = (req.privilege == LEGION_REDUCE);

-      bool force_new_instances = false;
       // preprocess all the task constraint sets
       // partition them into field/non-field constraint sets + fields
       std::vector<std::vector<FieldID> > field_arrays, leftover_fields;

(Please double check that there aren't any side effects)

@seemamirch (Contributor)

> In that case, the following patch should fix the issue. (patch quoted above)
With the above changes, the mapper gives precedence to specialized constraints in the task layout constraints, i.e. it will ignore/override the 'redop' field of the region requirement (with no error checking in the mapper for a mismatch).
The current behavior on master always defaults to whatever is set in the region requirement's 'redop' field.
@lightsighter - I can add error checking in the mapper for a mismatch; is the new behavior above OK? Are there any layout constraints that are not applicable to region requirements with reduction privileges? Does 'force_new_instances' need to be set to true above?

@lightsighter (Contributor)

> I can add error checking in the mapper for a mismatch; is the new behavior above OK?
> Does 'force_new_instances' need to be set to true above?

I don't think we actually want this patch because it forces the creation of a new instance for every reduction region requirement, which is sub-optimal. I didn't do a bunch of hard work to allow for reuse of reduction instances for nothing. 😉 The default mapper should be able to check whether it can reuse reduction instances the same way it does for normal instances. If there are conflicts, you can pick those up that way too. Do you agree that should handle the mismatch case you were thinking about?

> Are there any layout constraints that are not applicable to region requirements with reduction privileges?

A reduction instance is the same as a normal instance with two exceptions:

  1. The size of the fields used will correspond to the size of the RHS type of the reduction operator instead of the LHS (normal field size).
  2. Legion will automatically initialize reduction instances on their first (re-)use with the identity value of the reduction operator.

Other than that, all the normal layout constraints still apply and can be used to describe reduction instances.

I'm not sure I completely answered your questions so ask again if I missed something.

@mariodirenzo (Author)

> I don't think we actually want this patch because it forces the creation of a new instance for every reduction region requirement, which is sub-optimal.

Note that the current version of the default mapper sets the force_new_instances variable to true for all reduction instances:
https://gitlab.com/StanfordLegion/legion/-/blob/master/runtime/mappers/default_mapper.cc#L2032 . Interestingly, profiles extracted with HTR show that reduction instances are reused if the same task is launched multiple times in succession.

@lightsighter (Contributor)

> Note that the current version of the default mapper sets the force_new_instances variable to true for all reduction instances

Yes, that should be fixed.

> Interestingly, profiles extracted with HTR show that reduction instances are reused if the same task is launched multiple times in succession.

Right, because the default mapper is now allowed to memoize the mappings of those tasks even if they have reduction-only region requirements, and to replay them later, so you don't even go through the path of calling default_make_instance.

@seemamirch (Contributor)

There's a PR for this, and issue #1705 overlaps with it: reduc/default mapper (WIP).

@seemamirch (Contributor)

https://gitlab.com/StanfordLegion/legion/-/merge_requests/1337 has been merged into master.

@lightsighter (Contributor)

Can we close this now?

elliottslaughter pushed a commit that referenced this issue Sep 13, 2024
issue #1703 + #1705

See merge request StanfordLegion/legion!1337