
How can I have the API return a 500 status code on error? #57

Open
andb0t opened this issue Mar 22, 2022 · 3 comments

Comments


andb0t commented Mar 22, 2022

Hi everybody!

I am following the official tutorials and thus my run function looks like this:

from inference_schema.schema_decorators import input_schema, output_schema
from inference_schema.parameter_types.standard_py_parameter_type import StandardPythonParameterType

@input_schema('data', StandardPythonParameterType(input_sample))
@output_schema(StandardPythonParameterType(output_sample))
def run(data):
    try:
        result = model.predict(data)
        return [result]
    except Exception as e:
        error = str(e)
        return error

Everything runs fine if there is no error. But in case of an error (e.g. the user inputting data in the wrong format) the API still returns a 200 status code, accompanied by the error message string as output. This behavior is clear from reading the code. My question is: how can I make the API return a 500 on error and output the error message?

I tried something like this, but then the 500 was simply included in the output string and it still returned a 200:

        return error, 500

This happily gives me the string ["some error message", 500] with a 200 status code :)
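If I understand the scoring wrapper's behavior correctly (an assumption on my part, not confirmed from its source), it JSON-serializes whatever `run()` returns as the response body, so a `(message, status)` tuple is just a two-element JSON array rather than a status override:

```python
import json

# Hypothetical illustration of what the scoring wrapper appears to do:
# serialize run()'s entire return value as the 200 response body.
def run(data):
    error = "some error message"
    return error, 500  # a plain tuple, not an HTTP status override

body = json.dumps(run(None))
print(body)  # -> ["some error message", 500]
```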

Any idea how I can achieve this?
Thanks in advance!


ghost commented Oct 17, 2022

@andb0t Did you figure out how to do this?


andb0t commented Oct 18, 2022

No, unfortunately I haven't found anything... It still happily returns a 200 with the plain error message as the return string. We have chosen to live with this in the meantime. It makes life and debugging harder...


ghost commented Oct 18, 2022

I have not been able to test it yet, but might test it later.
You could possibly look into using this:
from azureml.contrib.services.aml_response import AMLResponse

example:

from azureml.contrib.services.aml_request import AMLRequest, rawhttp
from azureml.contrib.services.aml_response import AMLResponse
from PIL import Image
import json


def init():
    print("This is init()")
    

@rawhttp
def run(request):
    print("This is run()")
    
    if request.method == 'GET':
        # For this example, just return the URL for GETs.
        respBody = str.encode(request.full_path)
        return AMLResponse(respBody, 200)
    elif request.method == 'POST':
        file_bytes = request.files["image"]
        image = Image.open(file_bytes).convert('RGB')
        # For a real-world solution, you would load the data from reqBody
        # and send it to the model. Then return the response.

        # For demonstration purposes, this example just returns the size of the image as the response.
        return AMLResponse(json.dumps(image.size), 200)
    else:
        return AMLResponse("bad request", 500)

Found at: https://learn.microsoft.com/en-us/azure/machine-learning/v1/how-to-deploy-advanced-entry-script
