feat: Add Exception Handling for LiteLLM #22
Conversation
…correct exceptions
Hello @dyga01 and @hemanialaparthi, there are two PRs connected to exception handling. Do you have a preference on the order in which they are reviewed and ultimately merged?
We do not have a preference for the order in which both PRs are merged. We would suggest merging them chronologically, but we do not foresee any issues.
Hello @rebekahrudd and @hemanialaparthi, the prior output shows examples like this:
My remaining concern is that using …
@gkapfham, Thanks! I understand what you mean. I have been working on making the exception messages more detailed and I want to know what you think about these messages:

```python
litellm_exceptions = {
    "NotFoundError": "The requested LLM resource was not found. Please check if your model and/or endpoint is correct.",
    "AuthenticationError": "There was an issue with your authentication while accessing the LLM API. Please verify your API key.",
    "RateLimitError": "You've hit the rate limit for LLM API usage. Please try again later or adjust your usage.\nNOTE: This error can sometimes be caused by an invalid API key.",
    "InvalidRequestError": "Your request to the LLM API was malformed. Please check the parameters you've sent.",
    "APIError": "An internal LLM API error occurred. Please try again later.",
    "APIConnectionError": "There was a connection issue with the LLM server.\nNOTE: This error can sometimes be caused by an invalid server URL. Please verify the URL you're using.",
}
```

Would you want me to add more details and description or does this suffice?
Hello @hemanialaparthi, can you please consider making the descriptions of the errors shorter and more succinct? Can you also please show what this would look like when run in the terminal window?
Yes! These are the new outputs; I have come up with descriptions that are short and succinct. This is how it handles invalid API models:

```
Exception Type: NotFoundError
Explanation: LLM resource not found. Please check your model and/or endpoint.
If your issue persists, ensure the model you entered is correct, such as:
- anthropic/claude-3-haiku-20240307
- anthropic/claude-3-opus-20240229
- groq/llama3-8b-8192
- openrouter/meta-llama/llama-3.1-8b-instruct:free
Please visit https://docs.litellm.ai/docs/providers for more valid LiteLLM models
For server connectivity issues, please visit https://docs.litellm.ai/docs/simple_proxy for a valid LiteLLM proxy.
```

This is how it handles invalid servers:

```
Exception Type: APIConnectionError
Explanation: Connection failed.
NOTE: This error can sometimes be caused by an invalid server URL. Verify your server URL.
If your issue persists, ensure the model you entered is correct, such as:
- anthropic/claude-3-haiku-20240307
- anthropic/claude-3-opus-20240229
- groq/llama3-8b-8192
- openrouter/meta-llama/llama-3.1-8b-instruct:free
Please visit https://docs.litellm.ai/docs/providers for more valid LiteLLM models
For server connectivity issues, please visit https://docs.litellm.ai/docs/simple_proxy for a valid LiteLLM proxy.
```

And here are the other descriptions of the errors:

```python
litellm_exceptions = {
    "NotFoundError": "LLM resource not found. Please check your model and/or endpoint.",
    "AuthenticationError": "API authentication failed. Please verify your API key.",
    "RateLimitError": "Rate limit exceeded. Wait and retry or check API key.\nNOTE: This error can sometimes be caused by an invalid API key.",
    "InvalidRequestError": "Malformed API request. Please review parameters.",
    "APIError": "Internal LLM API error. Retry later.",
    "APIConnectionError": "Connection failed.\nNOTE: This error can sometimes be caused by an invalid server URL. Verify your server URL.",
}
```
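For readers following along, here is a minimal sketch of how a mapping like `litellm_exceptions` can be turned into the terminal output shown above by looking up a caught exception's class name. This is not the PR's actual `exceptions.py`; the function name `report_llm_exception` and the `VALID_MODEL_EXAMPLES` list are illustrative assumptions.

```python
# Hypothetical sketch: look up a caught exception's class name in the
# message mapping and print a friendly report instead of a stack trace.
litellm_exceptions = {
    "NotFoundError": "LLM resource not found. Please check your model and/or endpoint.",
    "AuthenticationError": "API authentication failed. Please verify your API key.",
    "APIConnectionError": (
        "Connection failed.\nNOTE: This error can sometimes be caused by an "
        "invalid server URL. Verify your server URL."
    ),
}

VALID_MODEL_EXAMPLES = [
    "anthropic/claude-3-haiku-20240307",
    "anthropic/claude-3-opus-20240229",
    "groq/llama3-8b-8192",
    "openrouter/meta-llama/llama-3.1-8b-instruct:free",
]


def report_llm_exception(exc: Exception) -> None:
    """Print a short explanation for an LLM-related exception."""
    name = type(exc).__name__
    # Exception types that are not in the mapping fall back to a generic message.
    explanation = litellm_exceptions.get(
        name, "An unexpected error occurred while contacting the LLM."
    )
    print(f"Exception Type: {name}")
    print(f"Explanation: {explanation}")
    print("If your issue persists, ensure the model you entered is correct, such as:")
    for model in VALID_MODEL_EXAMPLES:
        print(f"- {model}")
    print("Please visit https://docs.litellm.ai/docs/providers for more valid LiteLLM models")
    print(
        "For server connectivity issues, please visit "
        "https://docs.litellm.ai/docs/simple_proxy for a valid LiteLLM proxy"
    )
```

Calling `report_llm_exception(exc)` from an `except` block would reproduce the format of the outputs shown above.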
Thanks for all of your work on this feature, @dyga01 and @hemanialaparthi. This looks like a useful feature. Before we merge it, can you please confirm that this should only change the behavior of execexam when it encounters an exception with the LLM? Is that correct?
@gkapfham That is correct. This feature is specifically for LiteLLM exceptions.
…test cases are working as intended
We should consider adding …
…ted before sending details to the LLM
For PRs, if a feature is still in development, it should be labeled as a draft, not an active PR. Can we confirm that no further development is needed? @hemanialaparthi @dyga01
@CalebKendra This PR is complete and no longer in development.
LGTM. Thanks for working hard on this one, looks good.
LGTM
Pull Request: Add exception handling for API key and API server exceptions raised by LiteLLM so the tool does not crash and print a long stack trace.
Aidan Dyga (@dyga01) and Hemani Alaparthi (@hemanialaparthi)
Need Exception Handling for Coding Mentor #3
bug, enhancement
This pull request adds handling for the errors that LiteLLM can raise, focusing on API key and server exceptions. By implementing the new exceptions.py file and making changes to advise.py, we hope to prevent crashes and long stack traces that can confuse users. The goal is to improve stability and user experience by providing more graceful error handling and user-friendly error messages, making the tool more robust and easier to debug in real-world scenarios.
Coverage will be maintained for the exceptions.py file, as we have implemented new tests to ensure the new feature is fully tested.
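As an illustration only (the PR's real test suite is not reproduced here), tests of this kind of message lookup can exercise both the known-exception path and the fallback path. The `describe_exception` helper and the local `AuthenticationError` class below are hypothetical stand-ins, not code from the PR.

```python
# Hypothetical pytest-style checks for a message-lookup helper.
def describe_exception(exc: Exception) -> str:
    """Return a short, user-friendly message for a caught exception."""
    messages = {
        "AuthenticationError": "API authentication failed. Please verify your API key.",
        "APIConnectionError": "Connection failed. Verify your server URL.",
    }
    return messages.get(type(exc).__name__, "Internal LLM API error. Retry later.")


def test_known_exception_message() -> None:
    class AuthenticationError(Exception):
        """Local stand-in; the real class would come from LiteLLM."""

    assert "verify your API key" in describe_exception(AuthenticationError("bad key"))


def test_unknown_exception_falls_back() -> None:
    # Any exception type that is not in the mapping gets the generic message.
    assert "Internal LLM API error" in describe_exception(ValueError("oops"))
```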
The tests have been conducted on a Mac. It would be great if Linux and Windows users could test it out.
Output can vary based on the exceptions.
i) For example, for the command where the `advice_model` is incorrect, the output should be the corresponding `NotFoundError` message (see the example earlier in this conversation).

ii) If the `advice_server` is invalid, this should be the output:

```
Exception Type: APIConnectionError
Explanation: There was a connection issue to the server.
NOTE: This error can sometimes be caused by an invalid server URL. Please verify the URL you're using.
If your issue persists, ensure the model you entered is listed below:
- anthropic/claude-3-haiku-20240307
- anthropic/claude-3-opus-20240229
- groq/llama3-8b-8192
- openrouter/meta-llama/llama-3.1-8b-instruct:free
- openrouter/google/gemma-2-9b-it:free
```
iii) NOTE: If LiteLLM changes its exception handling, the program will still not crash, because we implemented a default behaviour for non-LiteLLM exceptions; it will produce the error message and a general-purpose output, but not a stack trace.
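To make that fallback concrete, here is a minimal sketch, under assumed names (neither `get_advice_safely` nor `ask_llm` comes from the PR's advise.py), of how the LLM call might be wrapped so that both LiteLLM and non-LiteLLM exceptions end in a short message rather than a stack trace:

```python
from typing import Callable, Optional


def get_advice_safely(ask_llm: Callable[[str], str], prompt: str) -> Optional[str]:
    """Call the given LLM function without letting a stack trace reach the user.

    `ask_llm` is a placeholder for whatever function performs the LiteLLM call.
    """
    try:
        return ask_llm(prompt)
    except Exception as exc:  # LiteLLM and non-LiteLLM exceptions alike
        # A real implementation would first look the type up in litellm_exceptions;
        # anything unknown gets this general-purpose message (the default behaviour).
        print(f"Exception Type: {type(exc).__name__}")
        print("Explanation: An error occurred while requesting advice from the LLM.")
        print("Please check your model, server URL, and API key, then try again.")
        return None
```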