Multiuser and simultaneous requests #8

Open
KonstantinMastak opened this issue Sep 16, 2023 · 3 comments

Comments

@KonstantinMastak

Congrats on a great project! I started playing with it and have two questions so far:

  1. Does it support different sessions and several users?
  2. Does it support simultaneous requests for inference? For example, two users on different browsers or computers open the web UI from the same server and prompt at the same time - what happens? Will their requests be threaded so that inference happens simultaneously for both of them, or does the slower one just wait for the server to respond to the slightly faster one first?

I also saw a binding whose name suggests I can connect this web UI to another lollms server - does that allow multiple inferences to happen on the same server?

@ParisNeo
Owner

Hi, lollms is a mono-user project.
You can support multiple users if you install lollms (not lollms-webui), which creates a server for any client that wants to generate images. If you install it on a PC, you can use it from a remote PC or Mac using lollms remote nodes.

@KonstantinMastak
Author

Thank you for your fast answer! To make lollms multi-user and serve concurrent requests, should I use a separate web server like Apache / Nginx, or is the default Flask web server enough for this? Also, can different users request generation with different models, or do all users have to use the single model that is selected server-side?

@ParisNeo
Owner

ParisNeo commented Oct 6, 2023

Hi, and sorry for being this late. I'm very, very busy and lollms is taking a huge chunk of my night life.
To use lollms in remote mode, you need to install lollms on a server and run lollms-server --host 0.0.0.0, which will start a lollms service on that PC at your IP address on port 9601.
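
As a rough sketch of what that looks like on the server (the lollms-server --host 0.0.0.0 command is the one described above; the pip package name is an assumption and may differ for your install method):

```sh
# Install lollms on the server machine (package name assumed; adjust to your install method)
pip install lollms

# Start the lollms service, listening on all network interfaces
lollms-server --host 0.0.0.0
# The service should then be reachable from other machines at http://<server-ip>:9601
```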

Then you install lollms-webui on one or multiple PCs; each PC has its own local database, and the server doesn't store your requests. In the bindings you select lollms remote nodes, go to its settings, and add http://ipaddress:9600 to the hosts list (you can have multiple hosts, by the way).

For now, the server should be configured via the lollms-settings command, which lets you select the binding and model and mount as many personas as you want.

The users of the webui use the service with the selected model. Technically you can run multiple services with different models, but that may require big resources in terms of GPUs.
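
A minimal sketch of what running multiple services could look like, assuming lollms-server accepts a --port option (not confirmed here) and that each instance has been configured with its own model via lollms-settings:

```sh
# Hypothetical: two independent lollms services on the same machine,
# each bound to its own port and configured with a different model.
lollms-server --host 0.0.0.0 --port 9601 &   # instance serving model A
lollms-server --host 0.0.0.0 --port 9602 &   # instance serving model B

# Clients then add both http://<server-ip>:<port> URLs to the
# "lollms remote nodes" hosts list so requests can be spread across them.
```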

You can also do this remotely from another place or share servers with friends, as the lollms remote nodes binding supports multiple servers. You can create a network of generators/clients, and there is a queuing mechanism in case there are more queries than servers.

I hope this answers your question.
