Implement an ensembler of `MetaLearner`s #53
If we want to make it work with in-sample data, we obviously need the metalearners to have been trained on exactly the same data. I think the best option for this is that the user provides already initialized metalearners (fitted or unfitted). If the user wants to use it only for out-of-sample data with metalearners trained on different data, they can do that easily themselves and it does not require a lot of work.
Cool package. Nie & Wager's R-loss gives you an approach for ensembling CATE estimators: stack many final-stage CATE estimators and minimize that loss. They discuss this in section 4.2 of the R-learner paper. Here's a paper trying it out in case it's helpful: https://arxiv.org/abs/2202.12445. On a general note, you can take the same ensembling approach to estimate nuisance components.
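A rough sketch of that R-loss stacking idea, using simulated data and oracle nuisance estimates standing in for cross-fitted ones. Every variable name here is illustrative, not part of any library; the point is that stacking candidate CATE predictions under the R-loss reduces to a (non-negative) least-squares problem in the stacking weights:

```python
import numpy as np
from scipy.optimize import nnls

# Simulated data, for illustration only.
rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))
e = np.full(n, 0.5)                # known propensity score
W = rng.binomial(1, e)
tau = X[:, 0]                      # true CATE
m = X[:, 1]                        # conditional mean outcome E[Y | X]
Y = m + (W - e) * tau + rng.normal(scale=0.1, size=n)

# Pretend these are cross-fitted nuisance estimates (here: oracle values)
# and CATE predictions from several hypothetical candidate metalearners.
m_hat, e_hat = m, e
tau_candidates = np.column_stack([
    tau + rng.normal(scale=0.5, size=n),   # noisy candidate 1
    tau + rng.normal(scale=1.0, size=n),   # noisier candidate 2
    np.zeros(n),                           # constant-zero candidate
])

# R-loss stacking: choose non-negative weights alpha minimizing
#   sum_i ((Y_i - m_hat_i) - (W_i - e_hat_i) * sum_k alpha_k * tau_k(X_i))^2,
# which is non-negative least squares in alpha.
A = (W - e_hat)[:, None] * tau_candidates
b = Y - m_hat
alpha, _ = nnls(A, b)
tau_ensemble = tau_candidates @ alpha
```

The useless constant-zero candidate receives zero weight, and the ensemble tracks the true CATE more closely than the noisiest individual candidate.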
Hi @erikcs - apologies for the super late reply.
`sklearn` provides a `BaseEnsemble` class which can be used to ensemble various `Estimator`s. Unfortunately, `sklearn`'s `BaseEnsemble` does not work out of the box with a `MetaLearner` from `metalearners` due to differences in `predict` and `fit` signatures.

In order to facilitate the ensembling of CATE estimates from various `MetaLearner`s, it would be useful to implement helpers.
Some open questions:

- Should the helpers expect already trained `MetaLearner`s or train the `MetaLearner`s themselves?
- Should we require the `MetaLearner`s to have been trained on exactly the same data?
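One possible shape for such a helper, purely a sketch: it assumes pre-fitted learners and a uniform way of obtaining CATE estimates from each. The `CateEnsembler` name and the common `predict(X)` interface are invented for illustration (real `MetaLearner` signatures differ, which is exactly why a thin adapter rather than `sklearn`'s `BaseEnsemble` is needed):

```python
import numpy as np


class CateEnsembler:
    """Hypothetical helper averaging CATE estimates from pre-fitted learners.

    Each learner only needs some ``predict(X)`` method returning one CATE
    estimate per row of ``X``; adapting heterogeneous metalearner signatures
    to this interface would be the helper's job.
    """

    def __init__(self, learners, weights=None):
        self.learners = list(learners)
        k = len(self.learners)
        # Default to a uniform average over the learners.
        self.weights = np.full(k, 1.0 / k) if weights is None else np.asarray(weights, dtype=float)

    def predict(self, X):
        # Stack per-learner CATE estimates column-wise, then combine.
        predictions = np.column_stack([learner.predict(X) for learner in self.learners])
        return predictions @ self.weights


# Toy stand-in "learners" exposing predict(X), for illustration only.
class _ConstantCate:
    def __init__(self, value):
        self.value = value

    def predict(self, X):
        return np.full(len(X), self.value)


ensemble = CateEnsembler([_ConstantCate(1.0), _ConstantCate(3.0)])
cate = ensemble.predict(np.zeros((4, 2)))  # uniform average: 2.0 per row
```

Whether the helper should also own the fitting of its constituent learners is precisely the first open question above; this sketch deliberately takes the pre-fitted route.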