Allow LIMHEL>0 in cudacpp (using the exact same algorithm as in fortran) #564

valassi opened this issue Dec 11, 2022 · 3 comments

valassi commented Dec 11, 2022

Before I forget, this is another follow-up of #419.

Presently fortran can use LIMHEL>0 to apply a more relaxed helicity filtering (a helicity is included only if at least one of a few sampled events gives an ME contribution above a threshold). In cudacpp the comparison is always to 0, and as a consequence the fortran code is also modified to use LIMHEL=0. If physicists can accept LIMHEL>0 to further speed up the code (at the cost of some precision), maybe this should be implemented in cudacpp too (the two criteria are contrasted in the sketch below).
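
To make the contrast concrete, here is a minimal C++ sketch of the two filtering criteria. This is an illustration only, not the actual fortran or cudacpp code, and all names (keepHelicityStrict, keepHelicityRelaxed, meByEvent) are hypothetical:

```cpp
// Illustration only (hypothetical names, not the actual fortran or cudacpp code).
#include <vector>

// Current cudacpp-style filter: keep a helicity if any of the sampled events
// gives a non-zero ME contribution for it (i.e. the threshold is 0).
bool keepHelicityStrict( const std::vector<double>& meByEvent )
{
  for( const double me : meByEvent )
    if( me > 0 ) return true;
  return false;
}

// Fortran-style relaxed filter: keep a helicity only if at least one of the
// sampled events gives an ME contribution above the LIMHEL threshold
// (the actual fortran comparison is normalized, as discussed further below).
bool keepHelicityRelaxed( const std::vector<double>& meByEvent, const double limhel )
{
  for( const double me : meByEvent )
    if( me > limhel ) return true;
  return false;
}
```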

HOWEVER, I would like this to be done in such a way that the results can be compared exactly to fortran. Another option is to pass to cudacpp a pre-filtered set of helicities computed in fortran 'the fortran way'.


valassi commented Aug 14, 2023

I have changed the cudacpp getGoodHel implementation in MR #705 to solve the mismatch in fortran/cpp cross sections for gg_uu described in issue #630.

The new implementation in cudacpp now uses an algorithm that is a bit closer to the fortran one, and it would therefore be possible in principle to reimplement the exact same LIMHEL algorithm in cudacpp as in fortran. I actually tried a quick test in commit dcfabab, but the fortran test crashed (not visible in the log, unfortunately).

In any case I would not try the option of passing an input set of pre-filtered helicities from fortran to cudacpp: I would instead try to reimplement the exact same algorithm in cudacpp as in fortran.

This task includes several subtasks:

  • First of all, make sure that LIMHEL>0 does not crash in fortran.
  • Then, implement the same algorithm in cudacpp. The point is that one should not compare just the ME contribution from one helicity to LIMHEL: one should compare the ME contribution from one helicity, multiplied by ncomb (the number of possible helicity combinations) and divided by the sum of the ME contributions from all helicities. This is what is done in fortran (see the sketch after this list). One should use debug printouts to check that we are really comparing the same thing.
  • There is then also the issue of understanding the effect of LIMHEL on event selection and on cross sections, or at least checking that it behaves the same way in fortran and cudacpp. As discussed in #630 ("xsec from fortran and cpp differ in gg_uu tmad tests (bug in getGoodHel implementation)"), small changes in the helicity selection may end up having large effects on event selection (and on cross sections, when few events are used and the cross sections are divergent, as in gg_uu), because helicity selection has a large effect on the numerators and denominators of the multichannel mode.
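
As a concrete illustration of the normalized comparison in the second bullet, here is a hedged C++ sketch. It is not the actual fortran or cudacpp code, and all names (selectGoodHelicities, meByHel, limhel) are hypothetical:

```cpp
// Hedged sketch of the normalized LIMHEL criterion (hypothetical names).
#include <vector>

// For one event: meByHel[ihel] is the |M|^2 contribution of helicity ihel.
// A helicity is kept if its contribution, rescaled by ncomb and normalized
// by the sum over all helicities, exceeds the LIMHEL threshold.
std::vector<bool> selectGoodHelicities( const std::vector<double>& meByHel, const double limhel )
{
  const int ncomb = (int)meByHel.size(); // number of helicity combinations
  double meSum = 0;
  for( const double me : meByHel ) meSum += me;
  std::vector<bool> isGood( ncomb, false );
  for( int ihel = 0; ihel < ncomb; ihel++ )
  {
    // Compare ME(ihel) * ncomb / sum(ME) to LIMHEL, not ME(ihel) alone.
    if( meSum > 0 && meByHel[ihel] * ncomb / meSum > limhel ) isGood[ihel] = true;
  }
  return isGood;
}
```

The point of the normalization is that the criterion is relative: a helicity is dropped when its contribution is small compared to the average contribution per helicity (sum/ncomb), not when it is small in absolute terms, so the selection does not depend on the overall ME normalization.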

valassi changed the title from "Allow LIMHEL>0 in cudacpp? (Or allow an input set of pre-filtered helicities)" to "Allow LIMHEL>0 in cudacpp (using the exact same algorithm as in fortran)" on Aug 14, 2023

valassi commented Aug 21, 2024

Note: I had discussed this at https://indico.cern.ch/event/1162062/


valassi commented Sep 2, 2024

Note: some progress related to #950 was made in PR #955:

  • limhel in fortran can be set via a runcard (a hypothetical excerpt is sketched below)
  • limhel=0 is set for cudacpp via the runcard (no need to patch genps.inc)
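
For reference, a hypothetical run_card excerpt; the parameter name limhel comes from the bullets above, but the exact default value and comment text are assumptions and may differ from the actual card:

```
 0 = limhel ! threshold for helicity filtering (0 reproduces the current cudacpp behaviour)
```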

What remains to be done is what is described here: implement LIMHEL>0 in cudacpp.
