Allow LIMHEL>0 in cudacpp (using the exact same algorithm as in fortran) #564
I have changed the cudacpp getGoodHel implementation in MR #705 to solve the mismatch in fortran/cpp cross sections for gg_uu described in issue #630. The new implementation in cudacpp now uses an algorithm that is a bit closer to the fortran one, so in principle it would be possible to reimplement the exact same LIMHEL algorithm in cudacpp as in fortran. I actually tried a quick test in dcfabab, but the fortran test crashed (not visible in the log, unfortunately). In any case, I would not pursue the option of passing to cudacpp an input set of helicities prefiltered in fortran: I would instead try to reimplement the exact same algorithm in cudacpp as in fortran. This task includes several sub tasks.
Note that I had discussed this at https://indico.cern.ch/event/1162062/
Before I forget, this is another follow-up of #419.
Presently fortran can use LIMHEL>0 for more relaxed helicity filtering (a helicity is included only if at least one of a few test events has an ME above the threshold). In cudacpp the comparison is always against 0, and as a consequence the fortran code is also modified to use LIMHEL=0. If physicists can accept LIMHEL>0 to further speed up the code (at the cost of some precision), maybe this should be implemented in cudacpp too.
HOWEVER, I would like this to be done in such a way that the results can be compared exactly to fortran. Another option is to pass to cudacpp a pre-filtered set of helicities computed in fortran 'the fortran way'.