
feat: Add Exception Handling for LiteLLM #22

Merged: 37 commits merged into GatorEducator:main on Nov 12, 2024

Conversation

@dyga01 (Collaborator) commented Sep 25, 2024

  1. Pull Request: Add exception handling for API key and API server exceptions from LiteLLM so the tool does not crash and print a long stack trace.

  2. Aidan Dyga (@dyga01) and Hemani Alaparthi (@hemanialaparthi)

  3. Need Exception Handling for Coding Mentor #3

  4. bug, enhancement

  5. This pull request aims to enhance the handling of LiteLLM errors, focusing on API key and server exceptions. By implementing the new exceptions.py file and making changes to advise.py, we hope to prevent the crashes and long stack traces that can confuse users. The goal is to improve stability and user experience by providing more graceful error handling and user-friendly error messages, making the tool more robust and easier to debug in real-world scenarios.

  6. Coverage will be maintained for the exceptions.py file, as we have implemented new tests to ensure the new feature is fully tested.

  7. The tests have been conducted on a Mac. It would be great if Linux and Windows users could test it out.

  8. Output can vary based on the exceptions.

i) For example, for a command where the advice_model is incorrect, the output should be the following:

Exception Type: NotFoundError
Explanation: The requested resource was not found. Please check if your model or endpoint is correct.

If your issue persists, ensure the model you entered is listed below:
- anthropic/claude-3-haiku-20240307
- anthropic/claude-3-opus-20240229
- groq/llama3-8b-8192
- openrouter/meta-llama/llama-3.1-8b-instruct:free
- openrouter/google/gemma-2-9b-it:free

ii) If the advice_server is invalid, this should be the output:

Exception Type: APIConnectionError
Explanation: There was a connection issue to the server.
NOTE: This error can sometimes be caused by an invalid server URL. Please verify the URL you're using.

If your issue persists, ensure the model you entered is listed below:
- anthropic/claude-3-haiku-20240307
- anthropic/claude-3-opus-20240229
- groq/llama3-8b-8192
- openrouter/meta-llama/llama-3.1-8b-instruct:free
- openrouter/google/gemma-2-9b-it:free

iii) NOTE: If LiteLLM changes its exception handling, the program will still not crash, because we implemented a default behaviour for non-LiteLLM exceptions that prints the error message and a general-purpose output instead of a stack trace. A minimal sketch of that behaviour follows below.
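
For illustration only, here is a minimal sketch of how such a handler could be structured; the names below (LITELLM_MESSAGES, DEFAULT_MESSAGE, describe_exception) are hypothetical and are not taken from the PR's actual exceptions.py.

# Hypothetical sketch (illustrative names, not the PR's actual code):
# map a LiteLLM exception to a short, user-friendly message and fall back
# to a generic message for any non-LiteLLM exception, so that no stack
# trace is printed.

LITELLM_MESSAGES = {
    "NotFoundError": "The requested resource was not found. Please check if your model or endpoint is correct.",
    "APIConnectionError": "There was a connection issue to the server.",
}

DEFAULT_MESSAGE = "An unexpected error occurred while contacting the LLM."


def describe_exception(exc: Exception) -> str:
    """Return a short, user-facing description instead of a raw stack trace."""
    name = type(exc).__name__
    explanation = LITELLM_MESSAGES.get(name, DEFAULT_MESSAGE)
    return f"Exception Type: {name}\nExplanation: {explanation}"

Calling describe_exception(err) inside a try/except around the LiteLLM request would yield the two-line header shown in the examples above, with unknown exception types falling through to the generic message.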

@hemanialaparthi added the bug (Something isn't working) and enhancement (New feature or request) labels on Sep 25, 2024
@gkapfham (Collaborator) commented

Hello @dyga01 and @hemanialaparthi, there are two PRs connected to exception handling. Do you have a preference on the order in which they are reviewed and ultimately merged?

@CalebKendra self-requested a review on September 26, 2024
@dyga01 (Collaborator, Author) commented Sep 26, 2024

> Hello @dyga01 and @hemanialaparthi, there are two PRs connected to exception handling. Do you have a preference on the order in which they are reviewed and ultimately merged?

We do not have a preference for the order in which both PRs are merged. We would suggest merging them chronologically, but we do not foresee any issues.

@gkapfham (Collaborator) commented

Hello @rebekahrudd and @hemanialaparthi, the prior output shows examples like this:

Exception Type: APIConnectionError

My remaining concern is that using APIConnectionError does not indicate that this is an error specifically coming from the LLM interaction. While I agree that this is the title of the PR and thus it is an issue that is apparent to all of us who are working on the tool, I am trying to empathize with a person who is using the tool, sees this error, and does not understand that the specific problem is an API that supports LLM communication. As such, I am suggesting that you further clarify the error message. See what I mean? What do you think?

@hemanialaparthi (Collaborator) commented Oct 30, 2024

> Hello @rebekahrudd and @hemanialaparthi, the prior output shows examples like this:
>
> Exception Type: APIConnectionError
>
> My remaining concern is that using APIConnectionError does not indicate that this is an error specifically coming from the LLM interaction. While I agree that this is the title of the PR and thus it is an issue that is apparent to all of us who are working on the tool, I am trying to empathize with a person who is using the tool, sees this error, and does not understand that the specific problem is an API that supports LLM communication. As such, I am suggesting that you further clarify the error message. See what I mean? What do you think?

@gkapfham, Thanks! I understand what you mean. I have been working on making the exception messages more detailed and I want to know what you think about these messages:

litellm_exceptions = {
    "NotFoundError": "The requested LLM resource was not found. Please check if your model and/or endpoint is correct.",
    "AuthenticationError": "There was an issue with your authentication while accessing the LLM API. Please verify your API key.",
    "RateLimitError": "You've hit the rate limit for LLM API usage. Please try again later or adjust your usage.\nNOTE: This error can sometimes be caused by an invalid API key.",
    "InvalidRequestError": "Your request to the LLM API was malformed. Please check the parameters you've sent.",
    "APIError": "An internal LLM API error occurred. Please try again later.",
    "APIConnectionError": "There was a connection issue with the LLM server.\nNOTE: This error can sometimes be caused by an invalid server URL. Please verify the URL you're using.",
}

Would you want me to add more details and description or does this suffice?

@gkapfham (Collaborator) commented

Hello @hemanialaparthi, can you please consider making the descriptions of the errors shorter and more succinct? Can you also please show what this would look like when run in the terminal window?

@hemanialaparthi (Collaborator) commented Oct 31, 2024

> Hello @hemanialaparthi, can you please consider making the descriptions of the errors shorter and more succinct? Can you also please show what this would look like when run in the terminal window?

Yes! These are the new outputs; I have come up with descriptions that are both short and concise.

This is how it handles invalid API Models:

Exception Type: NotFoundError
Explanation: LLM resource not found. Please check your model and/or endpoint.

If your issue persists, ensure the model you entered is correct, such as:
- anthropic/claude-3-haiku-20240307
- anthropic/claude-3-opus-20240229
- groq/llama3-8b-8192
- openrouter/meta-llama/llama-3.1-8b-instruct:free

Please visit https://docs.litellm.ai/docs/providers for more valid LiteLLM models

For server connectivity issues, please visit https://docs.litellm.ai/docs/simple_proxy for a valid LiteLLM proxy.

This is how it handles invalid servers:

Exception Type: APIConnectionError
Explanation: Connection failed. 
NOTE: This error can sometimes be caused by an invalid server URL. Verify your server URL.

If your issue persists, ensure the model you entered is correct, such as:
- anthropic/claude-3-haiku-20240307
- anthropic/claude-3-opus-20240229
- groq/llama3-8b-8192
- openrouter/meta-llama/llama-3.1-8b-instruct:free

Please visit https://docs.litellm.ai/docs/providers for more valid LiteLLM models

For server connectivity issues, please visit https://docs.litellm.ai/docs/simple_proxy for a valid LiteLLM proxy.

And, here are the other descriptions of the errors:

    litellm_exceptions = {
        "NotFoundError": "LLM resource not found. Please check your model and/or endpoint.",
        "AuthenticationError": "API authentication failed. Please verify your API key.",
        "RateLimitError": "Rate limit exceeded. Wait and retry or check API key.\nNOTE: This error can sometimes be caused by an invalid API key.",
        "InvalidRequestError": "Malformed API request. Please review parameters.",
        "APIError": "Internal LLM API error. Retry later.",
        "APIConnectionError": "Connection failed. \nNOTE: This error can sometimes be caused by an invalid server URL. Verify your server URL.",
    }
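
As an illustration of how these short messages could be combined with the shared model list and documentation links to produce the terminal output shown above, here is a hypothetical rendering helper; render_report and VALID_MODELS are assumed names, not code from this PR.

# Hypothetical rendering helper (illustrative only, not the PR's code):
# combine the exception message with the shared model list and doc links
# to produce terminal output like the examples above.

VALID_MODELS = [
    "anthropic/claude-3-haiku-20240307",
    "anthropic/claude-3-opus-20240229",
    "groq/llama3-8b-8192",
    "openrouter/meta-llama/llama-3.1-8b-instruct:free",
]


def render_report(exception_name: str, explanation: str) -> str:
    """Build the full, user-facing error report."""
    lines = [
        f"Exception Type: {exception_name}",
        f"Explanation: {explanation}",
        "",
        "If your issue persists, ensure the model you entered is correct, such as:",
    ]
    lines += [f"- {model}" for model in VALID_MODELS]
    lines += [
        "",
        "Please visit https://docs.litellm.ai/docs/providers for more valid LiteLLM models",
        "",
        "For server connectivity issues, please visit https://docs.litellm.ai/docs/simple_proxy for a valid LiteLLM proxy.",
    ]
    return "\n".join(lines)

For example, print(render_report("NotFoundError", litellm_exceptions["NotFoundError"])) would reproduce the first terminal example shown above.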

@gkapfham (Collaborator) left a review comment


Thanks for all of your work on this feature @dyga01 and @hemanialaparthi. This looks like a useful feature. Before we merge it, can you please confirm that this should only change the behavior of execexam when it encounters an exception with the LLM? Is that correct?

@dyga01 (Collaborator, Author) commented Oct 31, 2024

> Thanks for all of your work on this feature @dyga01 and @hemanialaparthi. This looks like a useful feature. Before we merge it, can you please confirm that this should only change the behavior of execexam when it encounters an exception with the LLM? Is that correct?

@gkapfham That is correct. This feature is specifically for LiteLLM exceptions.

@CalebKendra (Collaborator) commented

We should consider adding "dismiss stale pull request approvals" as a branch protection rule if we want to change PRs after approval.

@CalebKendra (Collaborator) commented

For PRs, if a feature is still in development, it should be labeled as a draft, not an active PR. Can we confirm that no further development is needed? @hemanialaparthi @dyga01

@dyga01 (Collaborator, Author) commented Nov 7, 2024

> For PRs, if a feature is still in development, it should be labeled as a draft, not an active PR. Can we confirm that no further development is needed? @hemanialaparthi @dyga01

@CalebKendra This PR is complete and no longer in development.

@CalebKendra (Collaborator) left a review comment


LGTM. Thanks for working hard on this one, looks good.

@boulais01 (Collaborator) left a review comment


LGTM

@PCain02 (Collaborator) left a review comment


I ran the code on Windows 10 in various ways and it provided the correct output. (Screenshot of terminal output omitted.)

I also ran the new tests and they all passed.

I made some comments and asked some questions on the code changes because I would like to share knowledge about the advise.py file.

Overall, looks very good.

@hemanialaparthi requested a review from hemanialaparthi and then removed the request on November 12, 2024
@boulais01 merged commit a285c20 into GatorEducator:main on Nov 12, 2024
3 checks passed
@PCain02 mentioned this pull request on Nov 13, 2024