Hi, I'm a young researcher in particle physics and I found this repo while looking for a machine-learning approach to the unfolding problem.
The OmniFold method seems great and I've already played with the two demos provided here, but I was wondering: in order to do multi-dimensional unfolding, is it enough to concatenate multiple observables into one input and then take slices of the unfolded result?
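For concreteness, this is roughly what I mean by "concatenating observables"; it's just a minimal sketch with made-up array names and toy data, not code from the demos. Each observable is a 1D per-event array, and I would stack them into an (n_events, n_observables) feature matrix before unfolding, then project out a single observable afterwards by slicing one column and histogramming it with the unfolded event weights:

```python
import numpy as np

# Toy per-event observables (purely illustrative), e.g. jet mass and jet width
n_events = 10_000
rng = np.random.default_rng(0)
jet_mass  = rng.normal(20.0, 5.0, n_events)
jet_width = rng.uniform(0.0, 0.5, n_events)

# "Concatenate" the observables into one (n_events, n_observables) matrix,
# so the unfolding sees all dimensions at once.
features = np.stack([jet_mass, jet_width], axis=1)
print(features.shape)  # (10000, 2)

# After unfolding, take a "slice" of one observable by selecting its column
# and histogramming it with the per-event unfolded weights.
unfolded_weights = np.ones(n_events)  # placeholder weights
hist, edges = np.histogram(features[:, 0], bins=40, weights=unfolded_weights)
```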
Another question: do I need the same amount of Monte Carlo (MC) data as real measured data in order to unfold a large dataset from an experiment? I could take batches of data the same length as my MC sample and perform the unfolding on every batch, but does that mean I have to repeat steps 1 and 2 every time? Is there anything that can be reused from one batch to the next? A sketch of what I have in mind follows below.
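Here is the kind of batching loop I am imagining; again this is only a sketch with invented array names and toy data, and the comments mark where the two OmniFold steps would go:

```python
import numpy as np

rng = np.random.default_rng(1)
mc_sim = rng.normal(0.0, 1.0, (50_000, 2))   # detector-level MC (toy)
data   = rng.normal(0.1, 1.1, (400_000, 2))  # measured data, much larger

# Slice the data into chunks of the same length as the MC sample
# and unfold each chunk separately.
n_mc = len(mc_sim)
for start in range(0, len(data), n_mc):
    data_batch = data[start:start + n_mc]
    batch_weights = np.ones(len(data_batch))  # placeholder per-event weights
    # ... run the step-1 / step-2 iterations on (data_batch, mc_sim) here ...
    # Question: can anything (e.g. the trained step-2 reweighting) be
    # reused between batches, or must both steps be retrained each time?
```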