Adding MemGPT to open-interpreter #668
I'm actually trying to work on this integration myself, but I'm new to the codebase and first trying to get the project running on Windows, since that's where I usually work. MemGPT still has some things to sort out with GPT-3.5 and GPT-4; the last I checked there were some speed issues relating to context. I'm paying close attention and trying to contribute where I can to help expedite this process. I'd love to find others interested in helping get this working on multiple platforms, as that could be huge, and it would also move things more quickly toward desktop.
@tommymaher15 @EsoCoding How is MemGPT's approach different from open-interpreter's token trimming?
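For context on the question: naive token trimming just drops the oldest messages once the conversation exceeds a token budget. This is a hypothetical sketch of that idea, not open-interpreter's actual code; `count_tokens` is a crude stand-in for a real tokenizer such as tiktoken.

```python
# Hypothetical sketch of simple token trimming: keep system messages,
# then keep the newest messages until the token budget is exhausted.
# Anything older than that is simply forgotten.

def count_tokens(message: dict) -> int:
    # Crude stand-in: roughly 1 token per 4 characters of content.
    return max(1, len(message.get("content", "")) // 4)

def trim_messages(messages: list[dict], max_tokens: int) -> list[dict]:
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    budget = max_tokens - sum(count_tokens(m) for m in system)
    kept: list[dict] = []
    total = 0
    # Walk newest-to-oldest, keeping messages while they still fit.
    for m in reversed(rest):
        t = count_tokens(m)
        if total + t > budget:
            break
        kept.append(m)
        total += t
    return system + list(reversed(kept))
```

The point of the sketch is the failure mode being discussed in this issue: once trimming kicks in, the earliest context is gone for good, so after a crash and reload you end up re-explaining it.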
I am working on it
My idea exactly!!
Hey there, folks! I'm going to close this one as a duplicate.
I'm not sure that's the same thing; it's not just about remembering the conversation. It's about passing the right context to the LLM. That's a totally different story from continuing where one left off after quitting the conversation. MemGPT is unique in this.
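The distinction described above can be sketched roughly: instead of keeping a sliding window of recent messages, a MemGPT-style approach pages older conversation out to an external store and retrieves only the pieces relevant to the current query. The following is a toy illustration of that principle, using keyword overlap in place of real embeddings; it is not MemGPT's actual implementation, and `build_context` is a hypothetical helper.

```python
# Toy illustration of retrieval-based memory: archived messages are
# scored by keyword overlap with the current query, and only the best
# matches are paged back into the prompt alongside the recent window.

def score(query: str, text: str) -> int:
    # Number of words the query and the archived text share.
    q = set(query.lower().split())
    t = set(text.lower().split())
    return len(q & t)

def build_context(archive: list[str], query: str,
                  recent: list[str], top_k: int = 2) -> list[str]:
    ranked = sorted(archive, key=lambda m: score(query, m), reverse=True)
    retrieved = [m for m in ranked[:top_k] if score(query, m) > 0]
    # Prompt = relevant archived memories + the recent window,
    # rather than simply truncating the oldest messages.
    return retrieved + recent
```

Under this scheme, old-but-relevant context survives a trim or a restart because it can be retrieved back in, which is exactly the property plain token trimming lacks.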
@EsoCoding Both of these, and the other issues I redirected to that same core issue, are focused on approaches to conversational memory or conversational augmentation with memory-like functionality. While the specifics of the implementation details are different, we haven't settled on any specific approach, so I believe the overall conversation can be contained to one Issue to reduce noise in a repo that is very noisy at the moment. If someone wants to start a PR and discuss this particular implementation further, I'd be happy to take a look and talk about the specifics of this approach, but while it's just a suggestion in an Issue, I don't think we need a separate issue at the moment.
Is your feature request related to a problem? Please describe.
Yes, the program crashes often because of literals and other reasons. When restarting, it reloads the cached conversation, which then gets trimmed, so you basically need to start over with your questions. This is especially costly when using GPT-4, which for me is the only model that does it right. I believe MemGPT might be a solution if we could integrate the same principles.
Describe the solution you'd like
See the MemGPT project, or YouTube demos of it, to understand what I mean.
I recently discovered MemGPT, a fascinating technology that efficiently enables long-term memory for GPT models. It struck me as a valuable addition to the open-interpreter framework. This innovation has the potential to address some significant challenges, especially costly repetitive requests. While open-interpreter already caches conversations, there's still the issue of hitting the maximum context length after a crash, and that limit is especially easy to reach with open-interpreter. This often leads to repeating questions and providing context again, which can be expensive, particularly when using GPT-4, a model that, in my opinion, performs better than the others I have tested. Most of the others gave poor results, but even those models should improve with the kind of long-term memory MemGPT provides.
Describe alternatives you've considered
No response
Additional context