ideas from meeting with David and more! #1

Open
2 of 5 tasks
agaldemas opened this issue Jul 24, 2024 · 7 comments
Open
2 of 5 tasks

ideas from meeting with David and more ! #1

agaldemas opened this issue Jul 24, 2024 · 7 comments
Assignees

Comments

@agaldemas
Copy link
Collaborator

agaldemas commented Jul 24, 2024

  • introduce Flowise in the loop

  • use a local LLM (Ollama + a compatible model)

  • add vector storage to keep retrieved POIs as embeddings, then use it as a retriever to augment the prompt instead of passing the context-broker results directly (see the sketch after this list)

  • add an additional agent that can search the web and add some of the results to the response

  • format the output better (render the returned markdown as HTML)
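
A minimal sketch of the local-LLM + vector-retriever idea (assuming Chroma as the vector store and Ollama's OpenAI-compatible endpoint; the collection name, model name, and sample POIs are placeholders, not project code):

```python
# Sketch: store Context Broker POIs as embeddings in Chroma, then use the
# top matches to augment the prompt sent to a local model served by Ollama.
import chromadb
from openai import OpenAI

# Ollama exposes an OpenAI-compatible API on port 11434; the key is ignored.
llm = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

chroma = chromadb.Client()
pois = chroma.create_collection(name="pois")  # hypothetical collection

# Index POIs fetched earlier from the Context Broker (sample payloads).
pois.add(
    ids=["poi-1", "poi-2"],
    documents=[
        "Museo del Prado, art museum, Madrid, open 10:00-20:00",
        "Retiro Park, urban park, Madrid, open 06:00-22:00",
    ],
)

def answer(question: str) -> str:
    # Retrieve the most relevant POIs and inject them into the prompt
    # instead of passing the raw Context Broker response.
    hits = pois.query(query_texts=[question], n_results=2)
    context = "\n".join(hits["documents"][0])
    resp = llm.chat.completions.create(
        model="llama3",  # any chat model pulled with `ollama pull`
        messages=[
            {"role": "system", "content": "Answer using these POIs:\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content
```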

dncampo commented Jul 24, 2024

Alain, please create a branch from feature/fiware_integration in order to try out the local model using the CB.

agaldemas commented Jul 25, 2024

Sorry @dncampo, @javicond3,

I did not see your message; I forked the repo to my account: https://github.com/agaldemas/Poc_LLM_CB_FIWARE
and made a pull request...
Maybe it wasn't the best way to do it,
but you can merge it onto the feature/fiware_integration branch!

Tell me if you want me to create the branch anyway...

agaldemas commented

I will create a new branch for Flowise integration...

agaldemas commented Jul 25, 2024

  • Still have to fix the display of the markdown response transformed into HTML (for the moment the component displays a plain string, not HTML; I have to find a way to render it as HTML! See the sketch below.)
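
One option (a sketch, assuming the Python `markdown` package on the backend; the helper name is hypothetical) is to convert the model's markdown reply to HTML before it reaches the component, so the UI only has to render ready-made HTML:

```python
# Sketch: convert the LLM's markdown reply to HTML server-side,
# using the `markdown` package (pip install markdown).
import markdown

def to_html(md_reply: str) -> str:
    # The extensions add fenced-code-block and table support.
    return markdown.markdown(md_reply, extensions=["fenced_code", "tables"])

print(to_html("## POIs found\n\n* **Museo del Prado**\n* **Retiro Park**"))
```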

agaldemas commented Jul 25, 2024

@javicond3

Your idea of using openai.beta.threads and openai.beta.assistants from the OpenAI API was to keep the context in a thread and use an assistant to tune the chat's behavior, as provided by OpenAI, but unfortunately that is not supported by Ollama...

I had to use openai.chat.completions, which works, but we lose the context of the conversation :( (see the sketch below)...

=> this is a feature we can introduce using Flowise (which uses LangChain under the hood)

  • For the Ollama version, keep the conversation via a local context, through Flowise
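
In the meantime, the context can be kept by resending the accumulated message history on every openai.chat.completions call (a sketch against Ollama's default endpoint; model name and prompts are placeholders, and the payload grows with each turn, which is exactly what a Flowise memory node would manage for us):

```python
# Sketch: emulate a "thread" with plain chat completions by resending the
# full message history each turn, via Ollama's OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
history = [{"role": "system", "content": "You are a helpful POI assistant."}]

def chat(user_msg: str) -> str:
    history.append({"role": "user", "content": user_msg})
    resp = client.chat.completions.create(model="llama3", messages=history)
    answer = resp.choices[0].message.content
    # Keep the assistant turn so the next call still has the context.
    history.append({"role": "assistant", "content": answer})
    return answer

print(chat("Find museums in Madrid"))
print(chat("Which of those is open on Mondays?"))  # relies on kept context
```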

javicond3 commented

Yes, threads are only supported by OpenAI for now. You can send the context (or a summary) in every iteration, but it is not efficient... Does Flowise support it?

agaldemas commented

> Yes, threads are only supported by OpenAI for now. You can send the context (or a summary) in every iteration, but it is not efficient... Does Flowise support it?

You can do this in many different ways with Flowise: manage the prompt, store context, inject embeddings prepared in a vector DB, etc.
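
For example, a chatflow that includes a memory node can keep the conversation per session via Flowise's prediction API (a sketch, assuming a local Flowise on port 3000; the chatflow ID and session ID are placeholders):

```python
# Sketch: call a Flowise chatflow over its REST prediction endpoint.
# A stable sessionId lets the chatflow's memory node keep the context.
import requests

FLOWISE_URL = "http://localhost:3000/api/v1/prediction/<chatflow-id>"

def ask(question: str, session_id: str = "demo-session") -> str:
    resp = requests.post(
        FLOWISE_URL,
        json={
            "question": question,
            "overrideConfig": {"sessionId": session_id},
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["text"]

print(ask("Find museums in Madrid"))
print(ask("Which one is open on Mondays?"))  # same session, context kept
```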
