Support OpenAI Assistants #33
I saw this announcement! Very cool! I think the two aspects I find most interesting are:
There seems to be a big difference in how the chats are modelled, though - there's a concept of Threads and Objects, and these conversations are essentially stateful, so I think it'd be more difficult to plug this into the existing implementation. I agree that document retrieval is likely the biggest feature to come from this for now, alongside the function calling that this library already supports. But again, these functions operate differently: you need to create Assistants, provide them with functions, and enable tools like the code interpreter and retrieval.
@chrisgreg something I'm also thinking about is how tightly it should be coupled to OpenAI's implementation and design, or if it can be (at least partially) supported with other LLMs. Basically, is the goal to create an Elixir-friendly wrapper around OpenAI's Assistants API? Or is the goal to follow more of the LangChain spirit of supporting other LLMs, perhaps even self-hosted ones? The easy parts with a more generic approach are:
Then an area that needs more work:
Development needed:
Yes, OpenAI is making this very easy. I'm also a bit concerned about being locked into their API, because my product/business becomes tied to them and their decisions.
Very fair, and I totally understand that line of thinking. I would say, however, that OpenAI has essentially rendered a good portion of what LangChain offers moot with these new releases - obviously not everything, such as chaining, but these Assistants house functionality that can be useful in most situations when chatting with an OpenAI LLM, and it's only going to get bigger when the plugins marketplace becomes available. I can imagine most users wanting an OpenAI API wrapper would want a way to call these. I'm currently using this library for a large project at work, and I know for a fact we'll need Assistants and retrieval as part of it in the near future.
I also want Assistants. Document retrieval is helpful but less critical for me personally. I can basically create what I need right now from the LangChain library as it is today, especially with the new gpt-4 preview release. I'm currently working on getting the Mistral LLM running on GPU hardware directly from Elixir. As I get that working and supported in LangChain, I'll have a better idea of what a self-hosted solution looks like. My concern really is about becoming overly reliant on OpenAI's solution. If they decide to change policies, or to become more restrictive on existing policies, then it kills my product. Still, I think an Assistant design can be managed well. The "conversation" model, especially when written to a database, is basically a thread. Tracking the logs of operations that were run is a good idea too. Basically, an Assistant is a higher-level API that may use multiple chains internally and track logs of what it's done.
For instance, I went to use an Agent this morning and it's failing with an error response from OpenAI. UPDATE: They're having a major outage right now. https://status.openai.com/
Agree with all these points - and totally, assistant design should be fairly straightforward with threads too. How are you doing document retrieval with the GPT-4 preview release using this library? OpenAI seems to be back online now too :)
@chrisgreg I noticed that OpenAI's Assistants take some time to create. The call to update an assistant feels like it does a full replacement of it: the prompt instructions, functions, etc. It also does not support #29 (router chains). In their keynote they teased that lots of stuff is coming in future releases, so perhaps this? Anyway, I'm now wanting a router for my own assistants. Just making me think. 🤔
What are your thoughts on integrating just a part of the Assistants in with LangChain? I.e., the same way you add functions, you could set an assistant.
@chrisgreg I like the idea and plan to do it too. Before OpenAI announced the new API, I had already created an Assistant struct in a project that defined the initial messages and a set of functions. I think something like that will work well because we will have a way to convert it into the API call for OpenAI. Other models/services just won't have the ability to save it to an external service.

A recent API change was to define a set of "tools" instead of "functions". With the release, the only type of tool is a function, but it leaves it open for more tool types. I'm thinking an Assistant might define something like this:

```elixir
%Assistant{
  # ...
  tools: %{functions: [...]}
}
```

I don't yet know how to handle files/documents.
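To make the idea concrete, here is a minimal sketch of how such a struct might translate into OpenAI's `tools` request format, where each function becomes a `%{"type" => "function", "function" => ...}` entry. The `AssistantSketch` module and `to_openai_tools/1` function are hypothetical, not part of the current library:

```elixir
# Hypothetical sketch only: neither AssistantSketch nor to_openai_tools/1
# exist in the LangChain library. It shows converting a local struct's
# functions into the "tools" list shape the OpenAI API expects.
defmodule AssistantSketch do
  defstruct instructions: nil, tools: %{functions: []}

  # Each function definition is wrapped in a "function" tool entry.
  def to_openai_tools(%__MODULE__{tools: %{functions: functions}}) do
    Enum.map(functions, fn f -> %{"type" => "function", "function" => f} end)
  end
end

assistant = %AssistantSketch{
  tools: %{functions: [%{"name" => "get_weather", "parameters" => %{}}]}
}

tools = AssistantSketch.to_openai_tools(assistant)
```

A shape like this keeps the struct model-agnostic while still serializing cleanly for OpenAI.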
I like it - I think a good initial step could be letting people define their assistants using the UI and pass files in that way? |
@chrisgreg I've been thinking more about Assistants because it's a great abstraction over a router (#29) as well. Here's what I'm thinking.
I'm still thinking about how the router might work. The Python router basically provides a map and executes lambdas to customize the decision process. That could easily be an anonymous Elixir function. It would be neat to add Elixir code to an Assistant so it could decide whether to use additional nested routers. 🤔 An Assistant can run multiple LLMChain calls without them being seen or reflected in the messages. This is where the logs are important. I'm about ready to start a branch while I play with it. To your point, I think we can define an assistant in our own application, and if talking to OpenAI, it could create/update the assistant through the OpenAI API. Files are still a question, but a workaround is to upload/update the files using OpenAI's interface.
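The "anonymous Elixir function as route decision" idea above could be sketched like this. Everything here is illustrative (the `RouterSketch` module and route names don't exist in the library); it just shows routes paired with matcher functions, checked in order:

```elixir
# Hypothetical sketch: routes are {name, matcher_fun} pairs, and the first
# matcher that returns true wins. A nested router could itself be a matcher.
defmodule RouterSketch do
  def route(routes, input) do
    Enum.find(routes, fn {_name, matcher} -> matcher.(input) end)
  end
end

routes = [
  {:billing, fn text -> String.contains?(text, "invoice") end},
  {:support, fn _text -> true end}  # catch-all fallback route
]

{name, _matcher} = RouterSketch.route(routes, "Where is my invoice?")
# name is :billing; an unmatched input falls through to :support
```

Because matchers are plain functions, one of them could call an LLM, consult another router, or log its decision, which fits the logging concern mentioned above.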
The more I think about it, it feels like it could be a process, like a GenServer. Then, with callbacks, it could have customizing code. But it would need persistence too. 🤔
A GenServer could work - a DynamicSupervisor might be handy for handling multiple "open" assistants you may have?
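A minimal sketch of that shape, assuming each "open" assistant is its own GenServer holding conversation state, started on demand under a DynamicSupervisor. Module and function names are illustrative only, and persistence is left out:

```elixir
# Hypothetical sketch: AssistantServer is not part of the library.
# Each assistant process holds its own message list; a real version
# would call the LLM and persist state on each turn.
defmodule AssistantServer do
  use GenServer

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts)

  # Client API
  def add_message(pid, text), do: GenServer.call(pid, {:add_message, text})
  def messages(pid), do: GenServer.call(pid, :messages)

  @impl true
  def init(opts), do: {:ok, %{name: Keyword.fetch!(opts, :name), messages: []}}

  @impl true
  def handle_call({:add_message, text}, _from, state) do
    {:reply, :ok, %{state | messages: state.messages ++ [text]}}
  end

  def handle_call(:messages, _from, state), do: {:reply, state.messages, state}
end

{:ok, sup} = DynamicSupervisor.start_link(strategy: :one_for_one)
{:ok, pid} = DynamicSupervisor.start_child(sup, {AssistantServer, name: "demo"})
:ok = AssistantServer.add_message(pid, "Hello")
```

The DynamicSupervisor lets you open and close assistants at runtime, one process per active conversation.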
Wondering if any of this been tackled over the last month at all? I (and my team) would love to help implement this. |
I've been experimenting with this being implemented directly by the library - not using OpenAI's Assistants, though. Also, not specifically in a GenServer, but in a LiveView (both are processes!). I added routing, which is part of it. I haven't done anything to add support for OpenAI's Assistants. I'm open to contributions and discussion!
As an FYI, I've built an assistant by basically (in part) persisting the message structs to the database and rehydrating them into LangChain. It works pretty well. The only caveat is that the
@jadengis This is what I do as well. I define a database schema that describes messages as "local display only", "server use only" or "both local and server". The "server use only" messages are system messages, function executions, templated prompt messages, etc. With this structure, I can have a natural UI with a highly customized agent that is serialized, searchable and can be resumed at any time. This OpenAI Assistants discussion is really about adding support for the ChatGPT API where they manage the conversation storage. They add the additional benefit of adding document support. That's the most compelling feature for me. |
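The visibility scheme described above can be sketched roughly like this. The `StoredMessage` struct, the `:visibility` field, and the atom names are assumptions for illustration, not the library's schema:

```elixir
# Hypothetical sketch of tagging persisted messages with a visibility and
# filtering on rehydration: server-bound messages go back into the chain,
# display-bound ones go to the UI. Not actual LangChain structs.
defmodule StoredMessage do
  defstruct [:role, :content, visibility: :both]
end

defmodule MessageStore do
  # Messages sent to the LLM: system prompts, function results, etc.
  def for_server(messages),
    do: Enum.filter(messages, &(&1.visibility in [:server_only, :both]))

  # Messages shown in the UI: the natural conversation view.
  def for_display(messages),
    do: Enum.filter(messages, &(&1.visibility in [:local_only, :both]))
end

messages = [
  %StoredMessage{role: :system, content: "You are helpful.", visibility: :server_only},
  %StoredMessage{role: :user, content: "Hi!", visibility: :both},
  %StoredMessage{role: :assistant, content: "Hello!", visibility: :both}
]

server_msgs = MessageStore.for_server(messages)
display_msgs = MessageStore.for_display(messages)
```

Splitting the two views this way is what makes the stored conversation both resumable (full server context) and clean to display.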
Would this be best implemented as a ChatModel? I'd be game to work on that. Any advice on how you'd like to see it? My use case is an Assistant that uses Files and the vector store, where we would modify files locally, update them in the assistant's files, and then stream messages on the assistant.
Hi @GaltMidas! The Elixir LangChain library doesn't currently have any built-in or provided support for RAG data stores. There are several examples online of people implementing them in Elixir, though. The easiest option is to code to OpenAI's Assistants API https://platform.openai.com/docs/api-reference/assistants. That has RAG support and conversation management built in. This issue is about adding support for that API. However, I'm not currently focused on that myself, but if someone else wants to tackle it, I'm open to PRs.
Enables access to retrieval and code interpreter
https://platform.openai.com/docs/assistants/overview