Overwrite interior and priorities #170
Your observations here make sense. My suggestion would be to use the method
Thanks for your quick response. This is a simple patch layout for a single-level simulation. Each rectangle represents a patch; the run uses 10 MPI processes, and the global ID (rank#localID) of each patch box is written inside it. We have 5 ghost nodes in each direction. If you look more specifically at patches p0#1 and p1#4: these patches share border nodes that are part of their respective interiors. Using a single schedule that sets overwrite_interior to true only when the source global ID is greater than the destination global ID makes this border strictly equal on both cores, as you suggested. However, this border line extends over 5 ghost nodes onto the p0#3 / p1#6 border. After the schedule is applied, these 5 ghost nodes of p0#1 and p1#4 still mismatch slightly. Our interpretation is that this happens because the schedule processes them at a point where p0#3 / p1#6 have not yet exchanged/overwritten their own domain border nodes. As a result, we do not see how simply following your suggestion can fix the mismatch both for shared border nodes and for ghost nodes overlapping other domains' border nodes; in our opinion, a single schedule cannot achieve this. So what we did is the following:
The first schedule makes sure that, once it is done, there is no domain border mismatch. The second schedule updates the ghost nodes, which now get the same value on borders. We were wondering whether you would see a cleaner/simpler way to do this. Also, we have done this by implementing a custom VariableFillPattern, which is pretty much a copy-paste of BoxGeometryVariableFillPattern except that the calculateOverlap method is our own. We assumed this function is not used to calculate the overlaps needed for refining data between levels, and that this is done exclusively by the computeFillBoxesOverlap method, which we have left untouched from BoxGeometryVariableFillPattern. It seems OK, but we would feel better with your confirmation.
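For context, a rough sketch of such a custom fill pattern is shown below. This is not the code from this thread: the class name PriorityFillPattern and its d_overwrite member are made up for illustration, the calculateOverlap signature is assumed to match SAMRAI's xfer::VariableFillPattern interface, and how the overwrite flag is derived from the source/destination global IDs is left out. The point is only to show where a conditional overwrite_interior decision can be injected into the overlap computation.

```cpp
// Hypothetical sketch, not the code from this thread: a VariableFillPattern
// that substitutes its own priority-based decision for the overwrite_interior
// flag handed to it by the schedule.  The calculateOverlap signature is
// assumed to match SAMRAI's xfer::VariableFillPattern; the class name and the
// d_overwrite member are illustrative only, and the remaining pure virtual
// methods (computeFillBoxesOverlap, getStencilWidth, getPatternName) are
// omitted here; they would be copied from BoxGeometryVariableFillPattern.
#include <memory>

#include "SAMRAI/hier/Box.h"
#include "SAMRAI/hier/BoxGeometry.h"
#include "SAMRAI/hier/BoxOverlap.h"
#include "SAMRAI/hier/Transformation.h"
#include "SAMRAI/xfer/VariableFillPattern.h"

class PriorityFillPattern : public SAMRAI::xfer::VariableFillPattern
{
public:
   // 'overwrite' encodes the priority decision, e.g. the result of comparing
   // source and destination patch global IDs when the schedule is set up.
   explicit PriorityFillPattern(bool overwrite) : d_overwrite(overwrite) {}

   std::shared_ptr<SAMRAI::hier::BoxOverlap> calculateOverlap(
      const SAMRAI::hier::BoxGeometry& dst_geometry,
      const SAMRAI::hier::BoxGeometry& src_geometry,
      const SAMRAI::hier::Box& dst_patch_box,
      const SAMRAI::hier::Box& src_mask,
      const SAMRAI::hier::Box& fill_box,
      const bool overwrite_interior,
      const SAMRAI::hier::Transformation& transformation) const override
   {
      (void)dst_patch_box;
      // Ignore the flag supplied by the schedule and apply our own rule.
      (void)overwrite_interior;
      return dst_geometry.calculateOverlap(
         src_geometry, src_mask, fill_box, d_overwrite, transformation);
   }

private:
   bool d_overwrite;
};
```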
I can see how this is possible for the nodes specifically on the p0#3 / p1#6 border.
I think you have a reasonable approach. Other applications I have worked with have done something like this to separate the operations on patch boundaries from the operations in the ghost regions. One thing I can suggest is that you could use PatchLevelInteriorFillPattern for your first schedule, so that it only exchanges data on the patch boundaries. Since your second schedule writes into all of the ghosts, you don't need the first schedule to duplicate that.
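Below is a rough sketch of what the resulting two-schedule setup could look like. The SAMRAI calls are assumed to follow the xfer::RefineAlgorithm, PatchLevelInteriorFillPattern and RefineSchedule interfaces; data_id and priority_pattern (the custom VariableFillPattern discussed above) are placeholders, not names from this thread.

```cpp
// Rough sketch of the two-schedule setup discussed above, assuming the
// SAMRAI xfer::RefineAlgorithm / PatchLevelInteriorFillPattern interfaces.
// 'data_id' and 'priority_pattern' (the custom VariableFillPattern from the
// previous comment) are placeholders.
#include <memory>

#include "SAMRAI/hier/PatchLevel.h"
#include "SAMRAI/hier/RefineOperator.h"
#include "SAMRAI/xfer/PatchLevelInteriorFillPattern.h"
#include "SAMRAI/xfer/RefineAlgorithm.h"
#include "SAMRAI/xfer/RefineSchedule.h"
#include "SAMRAI/xfer/VariableFillPattern.h"

void fillBordersThenGhosts(
   const std::shared_ptr<SAMRAI::hier::PatchLevel>& level,
   int data_id,
   const std::shared_ptr<SAMRAI::xfer::VariableFillPattern>& priority_pattern)
{
   using namespace SAMRAI;

   // Schedule 1: only touch the level interior (the shared patch borders),
   // letting the custom pattern decide which side wins each border node.
   xfer::RefineAlgorithm border_algo;
   border_algo.registerRefine(data_id, data_id, data_id,
                              std::shared_ptr<hier::RefineOperator>(),
                              priority_pattern);
   auto border_schedule = border_algo.createSchedule(
      std::make_shared<xfer::PatchLevelInteriorFillPattern>(), level);

   // Schedule 2: a plain ghost fill; the borders now agree, so the ghost
   // values copied from neighbouring patches are consistent as well.
   xfer::RefineAlgorithm ghost_algo;
   ghost_algo.registerRefine(data_id, data_id, data_id,
                             std::shared_ptr<hier::RefineOperator>());
   auto ghost_schedule = ghost_algo.createSchedule(level);

   border_schedule->fillData(0.0);
   ghost_schedule->fillData(0.0);
}
```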
This is correct; the calculateOverlap method is for overlaps within the same level of resolution.
thanks
We have decided to no longer group multiple components with potentially disparate geometries, for we have seen… Notably this interface…
A bit of debugging of the "equivalence classes" during registration of the schedule shows some "true" comparisons. The overlaps we receive from these operations are correct for the first registered item…
If you would like to see a reproduction of this issue, just let me know and I'll put something together.
Hi,
We have a geometry similar to that of NodeData, meaning that some nodes are shared by adjacent patches on borders/corners, and their values should be equal.
In the code, these border/corner nodes are assigned values from large summations over floating-point numbers (particle data), and although their final values should be identical, they are not, because of accumulated truncation errors.
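As a side note, this effect is easy to reproduce outside the model. The following self-contained snippet (illustrative only, not from the model) shows that accumulating the same contributions in two different orders gives slightly different doubles, which is what happens when two patches deposit the same particle contributions onto a shared node in different orders.

```cpp
// Small, self-contained illustration of why two patches can disagree on a
// shared node: summing the same contributions in a different order gives
// slightly different doubles, because floating-point addition is not
// associative.
#include <algorithm>
#include <cstdio>
#include <numeric>
#include <random>
#include <vector>

int main()
{
   std::mt19937 gen(42);
   std::uniform_real_distribution<double> dist(0.0, 1.0);

   // The "particle" contributions deposited on one shared border node.
   std::vector<double> contributions(100000);
   for (double& c : contributions) c = dist(gen);

   // Patch A accumulates them in one order, patch B in another.
   double sum_a =
      std::accumulate(contributions.begin(), contributions.end(), 0.0);
   std::reverse(contributions.begin(), contributions.end());
   double sum_b =
      std::accumulate(contributions.begin(), contributions.end(), 0.0);

   // Typically non-zero.
   std::printf("difference = %.17g\n", sum_a - sum_b);
   return 0;
}
```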
In serial executions, this is not a problem because overlaps are processed sequentially and only one value prevails for all patches.
In parallel, however, if we unconditionally overwrite interior nodes when exchanging data with schedules, the border nodes are essentially swapped between the two PatchDatas involved in the processed overlap, so if they have slightly different values as a result of truncation errors, they still do afterwards. If we unconditionally set overwrite_interior to false, then border nodes are simply not assigned and keep their slightly different values.
Over time, this slight mismatch appears to grow until shared nodes have totally different values, which crashes the model.
How should we deal with this?
We were hoping that setting overwrite_interior to true or false conditionally would help ensure that only one value prevails.
The documentation says:
However, in our override of BoxGeometry, we don't really understand how this condition should be set.
In 1D, where only 2 patches can share the same node, we could say that the lower rank is always overwritten by the higher rank.
But in 2D, it seems such a condition would end up being a race condition, since a node can be shared by 3 or 4 patches and the assignment would depend on the order in which overlaps are processed.
Is there an example or some general advice on how to set the "priority between patches" that the documentation refers to?
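To make the requirement concrete, here is an illustrative sketch (not a SAMRAI API) of a priority rule that cannot race: the prevailing value on a shared node is determined only by the set of patches sharing it, here by letting the largest (rank, localID) pair win, so the outcome does not depend on the order in which overlaps are processed. How that chosen value then reaches every sharer through pairwise schedule copies is a separate concern.

```cpp
// Illustrative only, not a SAMRAI API: one deterministic "priority between
// patches" rule.  The value kept on a shared node is that of the patch with
// the largest (rank, localID) pair among all patches sharing the node.
// Because the maximum over a set is independent of traversal order, the 2, 3
// or 4 patches sharing a 2D border/corner node all agree on whose value wins.
#include <algorithm>
#include <cassert>
#include <tuple>
#include <vector>

// A patch global ID in the rank#localID sense used in this thread.
struct GlobalId {
   int rank;
   int local_id;
};

inline bool operator<(const GlobalId& a, const GlobalId& b)
{
   return std::tie(a.rank, a.local_id) < std::tie(b.rank, b.local_id);
}

// ID of the patch whose value should be kept on a node shared by 'sharers'.
// The result does not depend on the order of the input vector.
GlobalId owner_of_shared_node(const std::vector<GlobalId>& sharers)
{
   assert(!sharers.empty());
   return *std::max_element(sharers.begin(), sharers.end());
}

int main()
{
   // A corner node shared by four patches; any permutation picks p2#0.
   std::vector<GlobalId> sharers = {{0, 1}, {1, 4}, {0, 3}, {2, 0}};
   const GlobalId owner = owner_of_shared_node(sharers);
   assert(owner.rank == 2 && owner.local_id == 0);
   return 0;
}
```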