I just created a new template on RunPod with these settings:
I then deployed a new on-demand 4090 instance from the community cloud.
The deployment worked, but I cannot access ports 8188 or 8000, and if I try to SSH in, it connects for a second and then immediately disconnects me.
I am able to deploy other templates on this community-cloud GPU, so I think the problem must be with the Docker image or the template.
I don't want to use RunPod's serverless offering because on-demand is much cheaper for me. Is there a reason why it doesn't work when I don't run it serverless?
Many thanks
The purpose of this image is to be used as a serverless endpoint, but if you set the environment variable SERVE_API_LOCALLY to the value true, you can access the ComfyUI frontend via port 8188, and the worker will also not shut down immediately.
That being said, RunPod also provides ComfyUI templates for Pods, which you can find via Explore, for example "ComfyUI with Flux1-Dev".
Please let me know if this is working out for you!
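To sanity-check the setup, here is a minimal Python sketch (using `requests`) that probes both ports once the template sets SERVE_API_LOCALLY=true and exposes 8188 and 8000. The proxy-URL pattern `https://<pod-id>-<port>.proxy.runpod.net` is an assumption based on how RunPod typically exposes HTTP ports on Pods; the pod ID is a placeholder, so use whatever your pod's Connect page actually shows.

```python
# Hedged reachability check for the two ports discussed in this thread.
# Assumes RunPod's usual proxy-URL pattern; POD_ID is a placeholder.
import requests

POD_ID = "your-pod-id"  # hypothetical: copy the real ID from your pod's Connect page

for port, name in [(8188, "ComfyUI frontend"), (8000, "worker API")]:
    url = f"https://{POD_ID}-{port}.proxy.runpod.net/"
    try:
        resp = requests.get(url, timeout=10)
        print(f"{name} (port {port}): HTTP {resp.status_code}")
    except requests.RequestException as exc:
        print(f"{name} (port {port}): unreachable ({exc})")
```

If both probes fail while SSH also keeps dropping, that points at the container exiting right after start (as it does by default without SERVE_API_LOCALLY) rather than at a networking problem.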
I'm most interested in using the API that your worker exposes. My intent is to use my pods only via the API, and I especially like that your worker accepts image uploads as base64, which other pod templates don't seem to provide.
So all I have to do is set the env variable SERVE_API_LOCALLY to true, and I'll be able to use it the same way I've been using it on my local network?
I do like RunPod's serverless offering with the auto-scaling and all, but I figure I can get the same functionality with regular pods and manual (auto)scaling for a fraction of the price if I use spot instances and community-cloud instances. I'm not rich enough to go full serverless right now.
@vesper8 thanks for the clarification. So what you want is to use the API from this project, but run it on a Pod to save money.
I think this should work: when you activate SERVE_API_LOCALLY, this project exposes an API that looks exactly the same as the one exposed on serverless. But I haven't done this on a Pod yet, so please let me know how it works out for you. In theory, https://github.com/blib-la/runpod-worker-comfy?tab=readme-ov-file#access-the-local-worker-api applies, and you should be able to use the API on port 8000 (and of course you have to expose that port on your Pod if you want to access it).
I think the only thing you will not get is some form of authentication, but it might be that this can also be enabled somehow, so there is some stuff to explore :D
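To make the base64-upload flow concrete, here is a rough Python sketch of calling the Pod-local API on port 8000. It assumes the local API mirrors the serverless input format (a JSON body with `input.workflow` and `input.images`) and offers a `/runsync` route like the serverless endpoint; the file names are placeholders, and it's worth checking the API docs the local server serves before relying on any of this.

```python
# Sketch of a synchronous request against the locally served worker API.
# Assumptions: /runsync exists locally and accepts the serverless-style
# payload; workflow.json and input.png are placeholder files.
import base64
import json

import requests

API_URL = "http://localhost:8000/runsync"  # or the pod's proxied port-8000 URL

with open("workflow.json") as f:   # ComfyUI workflow exported in API format
    workflow = json.load(f)

with open("input.png", "rb") as f:  # image to upload alongside the workflow
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "input": {
        "workflow": workflow,
        "images": [{"name": "input.png", "image": image_b64}],
    }
}

resp = requests.post(API_URL, json=payload, timeout=600)
resp.raise_for_status()
print(resp.json())  # inspect the response for the generated output
```

Since the local API has no authentication (as noted above), anyone who can reach the exposed port can submit jobs, so it may be worth putting a reverse proxy or firewall rule in front of it on a community-cloud pod.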