[incubator-kie-issues#6112] Fixing non-deterministic test #6113
Conversation
I cannot reproduce this issue.
Hi. Thanks for the comment. Did you try running:
Did you not see build failures after running this in the original code?
I've never seen that failure in thousands of runs of our CI and I don't know what that
Here are the research papers on NonDex that may help you understand what the NonDex plugin does and how it affects test execution: https://dl.acm.org/doi/10.1145/2950290.2983932 https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7515461

To summarise: the NonDex plugin is a tool designed to detect and debug incorrect assumptions about under-determined Java APIs. These APIs (for example HashSet, HashMap, or Collections.sort) do not guarantee a specific order of execution or behavior, and developers often unknowingly rely on their default behavior as if it were deterministic. This can lead to flaky tests, tests that pass or fail intermittently, depending on whether the assumed behavior holds during execution in a particular environment. Such tests may pass on a specific JVM version, but then fail unexpectedly on another version or another host.

Even though you haven't encountered this failure in your CI pipeline so far, that doesn't guarantee the issue won't arise in the future, because non-deterministic behavior is often subtle and environment-specific. For instance, an updated Java version may cause this test to fail in future builds. The research papers above might help you understand this context better. Thanks!
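To make the kind of hidden assumption concrete (this is a hypothetical sketch, not code from this repository): a check that implicitly depends on HashSet iteration order may happen to pass on one JVM, while NonDex randomizes that order and exposes the assumption. Imposing an explicit order (here via TreeSet) makes the result deterministic.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

public class IterationOrderAssumption {

    // Joins names in whatever order the Set's iterator yields them.
    static String joinNames(Set<String> names) {
        StringBuilder sb = new StringBuilder();
        for (String n : names) {
            if (sb.length() > 0) sb.append(",");
            sb.append(n);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Fragile: HashSet iteration order is unspecified; a test asserting
        // on this string relies on an under-determined API, which is exactly
        // what NonDex perturbs to surface flakiness.
        Set<String> fragile = new HashSet<>(List.of("alpha", "beta", "gamma"));
        System.out.println("fragile order: " + joinNames(fragile));

        // Robust: sort the elements explicitly before depending on order.
        Set<String> stable = new TreeSet<>(fragile);
        System.out.println("stable order: " + joinNames(stable)); // alpha,beta,gamma
    }
}
```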
This is a nice tool, even though I believe it can produce a few false positives, as discussed here for example. Regarding the specific fix suggested in this pull request, I really don't understand how it relates to the potential issue that you described. Can you please clarify this as well?
Thanks for the reply! In my case, the error stated in issue #6112 is rooted in non-deterministic behavior in the Drools rule execution due to the modify block. The modify($p) block combines changes to the fact ($p) with triggering re-evaluation of the rules automatically in the same step, so the timing of this re-evaluation can vary, causing inconsistent rule-firing orders and random test failures.

Because modify($p) triggers immediate re-evaluation of dependent rules, the order and timing of rule firing vary. The critical point is when the rule engine processes this re-evaluation: the modify block does not guarantee the order in which rules are re-triggered, or when this happens during the rule evaluation cycle. This timing-related non-determinism may fire other rules that depend on $p at unpredictable points during execution, before $p has reached its final state.

Thus, this timing-related flaky test does not seem to be a false positive from the NonDex tool; NonDex detected it because the timing issue was also causing non-determinism in the order of rule firings, as seen in the error. How $p.setX() + update($p) instead of modify() fixes the issue: Thanks!
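For readers unfamiliar with DRL, the two forms contrasted in this thread look roughly like the following. This is only a sketch of the syntax under discussion (the rule names and the Person fact with an int property x are hypothetical), not a claim about which form is correct, a point the next reply disputes.

```drl
// Hypothetical rules; Person is assumed to be a fact type with an int field x.
rule "using a modify block"
when
    $p : Person( x < 10 )
then
    // Single statement: changes the fact and notifies the engine together.
    modify($p) { setX($p.getX() + 1) }
end

rule "using setter plus explicit update"
when
    $p : Person( x < 10 )
then
    // Two statements: change the fact, then notify the engine explicitly.
    $p.setX($p.getX() + 1);
    update($p);
end
```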
Sorry, but this whole description doesn't correspond to how Drools works internally and is ultimately completely incorrect. The 2 statements, the one with the
Got it. Thanks for your time.
Issue: #6112
The fix can be tested using the command: