So we already support sklearn pipelines, but then the what-if tab operates on the transformed data. The difficulty is that the SHAP values are calculated for the transformed data, and it is not trivial to map them back to the input data. (For simple rescaling cases this could be possible, but pipelines can also add or remove columns and so forth.) So the contributions plot would be difficult to implement on input data. However, the pdp what-if plot could indeed plausibly be implemented on the input data.
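For context, here is a minimal sketch of the split this implies, assuming the standard `ClassifierExplainer` API: the explainer is built on the preprocessor's output, which is why the what-if tab (and the underlying SHAP values) live in the transformed space. The dataset and model are just placeholders.

```python
import pandas as pd
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from explainerdashboard import ClassifierExplainer, ExplainerDashboard

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("model", LogisticRegression(max_iter=1000)),
])
pipe.fit(X_train, y_train)

# SHAP values are computed in the transformed space, so the explainer
# receives the transformed test set rather than the raw input data.
# StandardScaler maps columns one-to-one, but other transformers may
# add or drop columns, which is what makes mapping back non-trivial.
X_test_tf = pd.DataFrame(
    pipe["scale"].transform(X_test),
    columns=X_test.columns,
    index=X_test.index,
)
explainer = ClassifierExplainer(pipe["model"], X_test_tf, y_test)
ExplainerDashboard(explainer).run()
```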
-
Not sure if this has already been implemented, so do let me know if I missed it! But I was wondering if we could add some sort of argument that enables better interpretability when we use models that are trained on transformed data.
e.g. applying a log transformation to the input data. This results in the values in the "What if" tab being in the transformed state, along with the predicted output.
Could transformations be specified, and then happen behind the scenes, to make the explainerdashboard more intuitive for users who apply these transformations?
e.g. when a "transformation" argument on either the ExplainerDashboard or the Explainer is set to np.log, users could input raw data into the "What if" tab fields, and the transformation would be applied in the backend to generate the prediction.
The same goes for the predicted output, where an np.exp transformation could be applied to recover the actual output.
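For the target side of this, one rough sketch of what can be done today is to wrap the model in sklearn's TransformedTargetRegressor, which fits on np.log(y) and applies np.exp to predictions automatically, so the dashboard already shows inputs and outputs on the original scale. The proposed `transformation=` argument itself is hypothetical and does not exist in explainerdashboard today, and whether SHAP values can be computed for the wrapped model depends on the SHAP backend (kernel SHAP on model.predict should work in principle, though slowly).

```python
import numpy as np
from sklearn.compose import TransformedTargetRegressor
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from explainerdashboard import RegressionExplainer, ExplainerDashboard

X, y = fetch_california_housing(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit on the log-transformed target; predictions are exp'd back
# automatically, so the "What if" tab's predicted output is already
# on the original scale.
model = TransformedTargetRegressor(
    regressor=RandomForestRegressor(n_estimators=50, random_state=0),
    func=np.log,
    inverse_func=np.exp,
)
model.fit(X_train, y_train)

# The explainer sees raw inputs and original-scale predictions; only
# the target transformation is hidden inside the model object. Kernel
# SHAP is model-agnostic but slow on larger test sets.
explainer = RegressionExplainer(model, X_test, y_test, shap="kernel")
ExplainerDashboard(explainer).run()
```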
Just a few thoughts :)