Replies: 2 comments
-
Is it true that the pipeline only organizes and tweaks your input, and that the final response generation (where the LLM interprets the prompt and produces an answer) comes from OpenAI's cloud-based model, not from a model running on your computer?
-
No, the reference to the OpenAI API just refers to the OpenAI API specification, which is published as an OpenAPI document (https://github.com/openai/openai-openapi?tab=readme-ov-file). If you're using Ollama, it exposes an OpenAI-compatible endpoint you can add to your UI.
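To make the point concrete: a UI that "supports the OpenAI API" just sends requests in the OpenAI wire format, and the base URL decides where they go. The sketch below builds such a request against a local Ollama server. The base URL `http://localhost:11434/v1` and the model name `llama3` are assumptions (Ollama's documented defaults; adjust to your setup).

```python
import json

# Assumption: Ollama's default OpenAI-compatible endpoint and a locally
# pulled model named "llama3" -- change both to match your installation.
OLLAMA_BASE_URL = "http://localhost:11434/v1"


def build_chat_request(prompt: str, model: str = "llama3"):
    """Build the URL and JSON body for an OpenAI-style chat completion.

    Any client that speaks the OpenAI API sends exactly this shape; only
    the base URL determines whether it reaches OpenAI's cloud or a local
    server such as Ollama.
    """
    url = f"{OLLAMA_BASE_URL}/chat/completions"
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, json.dumps(body)


url, body = build_chat_request("Why is the sky blue?")
# The host is localhost, so the request never leaves your machine.
```

Because the host here is `localhost`, nothing in this exchange touches OpenAI's servers; swapping the base URL is the only change a compatible UI needs.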
-
One article noted, "Pipelines is agnostic to the UI client, as long as the client supports the OpenAI API," which I assumed refers to cloud-based services. I've also noticed some filters and pipelines make internet requests (e.g., to Wikipedia). Does this mean that any filter or pipeline could potentially reach out to the cloud because of OpenAI's involvement?