
[FEATURE] Enhance LLM-Powered Explanations for Edge Cases and Customization #86

Open
nazeyanehal opened this issue Oct 8, 2024 · 4 comments

@nazeyanehal

Description

Currently, ExplainableAI's LLM-powered explanations provide general insights into model predictions, which are helpful in most cases. However, there is a need for more tailored explanations when the model encounters edge cases, such as poor performance or imbalanced datasets. Additionally, there is no option for users to customize the level of detail in the explanations, which limits the tool's usability for both technical and non-technical users.

Problem it Solves

This feature will address the problem of generic LLM explanations by offering more specific insights when models perform poorly or encounter problematic data (e.g., outliers, imbalanced classes). It will also solve the issue of inflexible explanations by allowing users to control the depth of explanation, catering to both novice users and more advanced technical users who need detailed analyses.

Proposed Solution

Enhance LLM explanations to identify and explain edge cases, such as:

- Poor model performance (e.g., low accuracy, high error rates).
- Imbalanced datasets leading to biased predictions.
- Outliers or anomalies in the data.

Add an optional parameter (e.g., `explanation_level`) that lets users select between a high-level summary for non-technical users and an in-depth analysis for those who require detailed insights, such as feature-importance breakdowns and model-specific diagnostics.
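To make the proposal concrete, here is a minimal sketch of how an `explanation_level` parameter could shape the prompt sent to the LLM. The function name `build_explanation_prompt`, the level names, and the metric keys are all illustrative assumptions, not the actual ExplainableAI API:

```python
# Hypothetical sketch only -- names and thresholds are illustrative,
# not part of the real ExplainableAI codebase.

def build_explanation_prompt(metrics: dict, explanation_level: str = "summary") -> str:
    """Compose an LLM prompt whose depth depends on explanation_level."""
    if explanation_level not in {"summary", "detailed"}:
        raise ValueError("explanation_level must be 'summary' or 'detailed'")

    lines = [f"The model achieved accuracy {metrics['accuracy']:.2f}."]

    # Flag edge cases explicitly so the LLM gives specific, not generic,
    # insights (poor performance, class imbalance).
    if metrics["accuracy"] < 0.7:
        lines.append("Note: performance is poor; explain likely causes.")
    if metrics.get("minority_class_ratio", 1.0) < 0.2:
        lines.append("Note: the dataset is imbalanced; discuss bias risk.")

    if explanation_level == "summary":
        lines.append("Give a short, non-technical summary.")
    else:
        lines.append("Give an in-depth analysis, including feature "
                     "importance breakdowns and model-specific diagnostics.")
    return "\n".join(lines)
```

A call like `build_explanation_prompt({"accuracy": 0.65, "minority_class_ratio": 0.1}, "detailed")` would then include both edge-case notes plus the in-depth instruction; the same metrics with the default `"summary"` level yield a shorter, non-technical prompt.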

Alternatives Considered

One alternative would be to manually explain these edge cases by interpreting the results post-hoc, but this process can be time-consuming and inefficient, especially for non-experts. Additionally, separate documentation or tutorials could be provided to explain these scenarios, but having it built into the tool as an automated feature would be much more user-friendly and efficient.

nazeyanehal added the enhancement label on Oct 8, 2024

github-actions bot commented Oct 8, 2024

👋 Thank you for raising an issue! We appreciate your effort in helping us improve. Our team will review it shortly. Stay tuned!

@ombhojane
Owner

Interesting! @nazeyanehal, go ahead!

You may try out:
- enhancing the prompting,
- feeding in additional data, e.g., the EDA data we've performed, SHAP analysis, etc.
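The suggestion above can be sketched as follows: precomputed SHAP results (e.g., mean absolute SHAP values per feature, computed elsewhere with the `shap` package) are condensed into a short text block that gets appended to the LLM prompt. The function name `format_shap_context` is a hypothetical helper, not an existing ExplainableAI function:

```python
# Hypothetical helper -- assumes mean |SHAP| values per feature were
# already computed (e.g., via the `shap` package) and passed in as a dict.

def format_shap_context(mean_abs_shap: dict, top_k: int = 3) -> str:
    """Turn per-feature mean |SHAP| values into prompt-ready text."""
    # Rank features by importance, descending, and keep the top_k.
    ranked = sorted(mean_abs_shap.items(), key=lambda kv: kv[1], reverse=True)
    parts = [f"{name} ({value:.3f})" for name, value in ranked[:top_k]]
    return "Top features by mean |SHAP|: " + ", ".join(parts)
```

The returned string could then be concatenated onto the explanation prompt, giving the LLM concrete feature-importance evidence to ground its narrative in, rather than generic statements.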

@nazeyanehal
Author

@ombhojane I've completed the enhancement with EDA and SHAP integration for the LLM explanations. Let me know if you need any changes!

Thanks!

@ombhojane
Copy link
Owner

Hey @nazeyanehal, can you mention the link of the PR?
