
failed to start and serve HTTP while server is in intermediate state i.e. shutting down #116

Open
gulshansainis opened this issue May 25, 2024 · 5 comments
Labels: bug (Something isn't working), help wanted (Extra attention is needed)

Comments


gulshansainis commented May 25, 2024

🐛 Bug

When the server is saving and going to sleep and we send a request, we receive the response "failed to start and serve HTTP".

To Reproduce

Steps to reproduce the behavior:

  1. Enable auto-start on the server
  2. Shut down the server
  3. While the server is shutting down/sleeping, send a request

Code sample

# server.py
import litserve as ls

# STEP 1: DEFINE YOUR MODEL API
class SimpleLitAPI(ls.LitAPI):
    def setup(self, device):
        # Setup the model so it can be called in `predict`.
        self.model = lambda x: x**2

    def decode_request(self, request):
        # Convert the request payload to your model input.
        return request["input"]

    def predict(self, x):
        # Run the model on the input and return the output.
        return self.model(x)

    def encode_response(self, output):
        # Convert the model output to a response payload.
        return {"output": output}


# STEP 2: START THE SERVER
if __name__ == "__main__":
    api = SimpleLitAPI()
    server = ls.LitServer(api, accelerator="auto")
    server.run(port=8000)

STEP 3: SEND REQUEST

curl -X POST \
     -H "Content-Type: application/json" \
     -H "Authorization: Bearer <token>" \
     -d '{"input": 4.0}' \
     https://id.......cloudspaces.litng.ai/predict
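
Not part of the original report, but one client-side mitigation while the server is waking up is to retry the request with exponential backoff instead of failing on the first error. A minimal sketch (the endpoint URL and token in the usage comment are placeholders):

```python
import time


def retry_with_backoff(send, retries=5, base_delay=1.0, sleep=time.sleep):
    """Call `send()` until it succeeds.

    Between failed attempts, sleep base_delay * 2**attempt seconds
    (1s, 2s, 4s, ...). Re-raise the error after the final attempt.
    `urllib.error.URLError` subclasses `OSError`, so network failures
    from urllib are caught here.
    """
    for attempt in range(retries):
        try:
            return send()
        except OSError:
            if attempt == retries - 1:
                raise
            sleep(base_delay * 2 ** attempt)


# Example wiring with urllib (endpoint URL and token are placeholders):
#
# import json, urllib.request
#
# def send():
#     req = urllib.request.Request(
#         "https://<your-endpoint>/predict",
#         data=json.dumps({"input": 4.0}).encode(),
#         headers={"Content-Type": "application/json",
#                  "Authorization": "Bearer <token>"},
#     )
#     with urllib.request.urlopen(req, timeout=10) as resp:
#         return resp.read()
#
# retry_with_backoff(send)
```

This does not fix the server-side behavior described in the report; it only papers over the transient "failed to start and serve HTTP" window from the client's side.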

Expected behavior

Since we have defined accelerator="auto", a new server instance should spin up and handle the request.

Environment

Running on CPU mode only for now

  • PyTorch Version (e.g., 1.0):
  • OS (e.g., Linux):
  • How you installed PyTorch (conda, pip, source):
  • Build command you used (if compiling from source):
  • Python version:
  • CUDA/cuDNN version:
  • GPU models and configuration:
  • Any other relevant information:

Please let me know if you need further details

gulshansainis added the bug (Something isn't working) and help wanted (Extra attention is needed) labels on May 25, 2024

Hi! Thanks for your contribution, great first issue!

williamFalcon (Contributor) commented May 27, 2024

@gulshansainis thanks for the report. Can you share a link to a public Studio that we can use to replicate this?

  • start a studio
  • set it up to reproduce the error
  • publish it
  • paste the link here

https://lightning.ai/docs/overview/studios/publishing

williamFalcon (Contributor) commented

OK, I think I understand the misunderstanding, @gulshansainis.

  • you enabled this on a studio
  • you expect "auto" to enable the studio to autostart
  • but this didn't work

Yes, litserve today is not coupled with the Studio functionality. "auto" does not affect the Studio runtime in any way, shape, or form.
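
To illustrate the distinction: an accelerator flag like "auto" only selects a compute device for the already-running server process; it cannot start or wake the machine hosting it. A hypothetical sketch of that kind of selection logic (not LitServe's actual implementation):

```python
def pick_accelerator(requested: str, cuda_available: bool) -> str:
    """Resolve an accelerator string the way an "auto" flag typically does.

    Hypothetical sketch: "auto" picks "cuda" when a GPU is visible to
    this process, otherwise "cpu". Anything else is passed through
    unchanged. Note this runs inside the server process, so it can only
    choose a device; it has no way to restart a stopped machine.
    """
    if requested == "auto":
        return "cuda" if cuda_available else "cpu"
    return requested
```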

to enable the studio to auto-start, you need to enable auto-start via the API builder. Here's an example of that:
https://lightning.ai/lightning-ai/studios/deploy-a-hugging-face-bert-model

However, I do agree that when litserve runs on a Studio, it should have a tighter integration with serverless (cc @nohalon @lantiga).

gulshansainis (Author) commented May 27, 2024

Hi @williamFalcon ,

Thank you for your response. Let me clarify the issue again: the problem is when the Studio is saving and we send a request.
I have recorded my screen showing this; please have a look. I have set up the server to auto-start. Let me know if you need more details to understand the issue.

https://youtu.be/5KJulPqrYF0

Regarding the code, it's the same as given on GitHub: https://github.com/Lightning-AI/LitServe?tab=readme-ov-file#implement-a-server

aniketmaurya (Collaborator) commented

Hi @gulshansainis, thank you for attaching the video. We are looking into it and should resolve it soon.
