Replies: 1 comment
-
Hi @hugocool, That would certainly be a super interesting addition to the library. However, I personally have very little time to work on the library at the moment (just had a second baby :), so I probably won't have time to implement it myself. This does seem like an application that could use a user-friendly interactive format with sensible defaults, though, so I think it would be a good fit if someone has the time to implement it.
-
Hi Oege,
I love the explainerdashboard library and would like to extend some of its functionality.
Currently, mostly Shapley-value-based explanations are supported; to cover a broader area of ML explainability, it could be quite interesting to include other types of explanation frameworks, such as DiCE.
I think some of this functionality is within reach given the dependence plots already included; counterfactual explanations, however, take that concept to the model-output stage. I do think the DiCE package is a little limited in the explanatory power it yields when it comes to visualizing the relationship the model has learned, i.e. how it maps the data to an outcome class.
Specifically, DiCE can give examples of how perturbations in the input space of a specific instance would have led the classifier to a different predicted outcome. I could see this functionality being a fine addition to the SHAP contributions: in addition to showing how the inputs contributed to the predicted outcome, we could visualize which perturbations would have led to a different outcome, and what the range of those perturbations could have been (i.e. whether a change of that size could happen by coincidence or not).
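For concreteness, here is a minimal sketch of the kind of workflow I mean, using the dice-ml package. The DataFrame `df`, the fitted classifier `clf`, and the titanic-style column names are placeholder assumptions for illustration, not existing explainerdashboard API:

```python
import dice_ml

# Wrap the training data; dice-ml needs to know which features are
# continuous and which column holds the outcome. `df` and the column
# names here are placeholders.
data = dice_ml.Data(
    dataframe=df,
    continuous_features=["age", "fare"],
    outcome_name="survived",
)
model = dice_ml.Model(model=clf, backend="sklearn")

# "random" is one of dice-ml's model-agnostic methods, which do not
# require gradients from the underlying model.
explainer = dice_ml.Dice(data, model, method="random")

# Ask for three perturbed versions of a single instance that flip the
# predicted class.
cfs = explainer.generate_counterfactuals(
    df.drop(columns="survived").iloc[[0]],
    total_CFs=3,
    desired_class="opposite",
)
cfs.visualize_as_dataframe(show_only_changes=True)
```

Something like the resulting table of "changed features per counterfactual" is what I imagine sitting next to the existing SHAP contributions view.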
There are many issues to consider here; chief among them is the fact that DiCE doesn't work with XGBoost or LightGBM.
Also, how should the counterfactual explanations be ordered: by gradient or by p-value?
Going a little deeper on the p-value: should it be determined non-parametrically, or should we allow distributional assumptions?
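To make the non-parametric option concrete, one possibility (a hypothetical sketch, not an existing API in either library) would be to score a perturbation by its empirical tail probability in the training data, with no distributional assumptions:

```python
import numpy as np

def empirical_p_value(x_orig, x_cf, feature_values):
    """One possible assumption-free score: the fraction of training
    samples whose value for this feature lies at least as far from the
    original instance as the counterfactual does. A small value means
    the required perturbation is unusually large for this feature."""
    delta = abs(x_cf - x_orig)
    return float(np.mean(np.abs(np.asarray(feature_values) - x_orig) >= delta))

# Candidate counterfactuals could then be ranked from most to least
# plausible by sorting on this score in descending order.
```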
How can we extend counterfactuals to regression instead of only classification? (We could, for instance, discretize the outcomes and use multiclass/ordered/ranked classification, as in the sketch below.)
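A rough sketch of that discretization idea, assuming a pandas DataFrame `df` with a continuous target column `y` (the bin count, labels, and surrogate model are arbitrary choices for illustration):

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Turn the continuous outcome into quartile-based, ordered classes so a
# classification-only counterfactual method can be applied to it.
df["y_binned"] = pd.qcut(df["y"], q=4, labels=["low", "mid_low", "mid_high", "high"])

# Fit a surrogate classifier on the binned outcome; a counterfactual is
# then "what perturbation moves this instance into an adjacent bin?",
# which approximates "what would raise/lower the predicted value?".
X = df.drop(columns=["y", "y_binned"])
clf_binned = RandomForestClassifier(random_state=0).fit(X, df["y_binned"])
```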
And would such additions be within the scope of this library, making it a catch-all for explainer methods instead of mostly Shapley-value-based ones?
I am looking forward to any insights on these matters.