We're using an input and output sample with the latest inference-schema version, and the run method in score.py is decorated with schemas built from these samples (a rough sketch of the setup follows).
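A trimmed-down score.py along those lines, assuming StandardPythonParameterType and a joblib-loaded model (the input sample and model loading are placeholders; only the output_sample values are the ones quoted in this issue):

```python
import joblib
import numpy as np
from inference_schema.schema_decorators import input_schema, output_schema
from inference_schema.parameter_types.standard_py_parameter_type import StandardPythonParameterType

# Placeholder input sample; output_sample is the one discussed in this issue.
input_sample = [[0.1, 0.2, 0.3, 0.4, 0.5]]
output_sample = [[1, 2, 0, 0, 0]]                # fails with the TypeError below
# output_sample = [[1.0, 2.0, 0.0, 0.0, 0.0]]    # works

def init():
    global model
    model = joblib.load('model.pkl')  # placeholder model loading

@input_schema('data', StandardPythonParameterType(input_sample))
@output_schema(StandardPythonParameterType(output_sample))
def run(data):
    result = model.predict(np.array(data))
    return result.tolist()
```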
However, when we deploy the model in AML and call the API, it fails with the following message:
2021-01-12 09:39:50,919 | root | ERROR | Encountered Exception: Traceback (most recent call last):
File "/azureml-envs/azureml_837afcc2e08970eb06ca005e02218b03/lib/python3.6/site-packages/flask/app.py", line 1832, in full_dispatch_request
rv = self.dispatch_request()
File "/azureml-envs/azureml_837afcc2e08970eb06ca005e02218b03/lib/python3.6/site-packages/flask/app.py", line 1818, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/var/azureml-server/app.py", line 142, in score_realtime
return run_scoring(service_input, request.headers, request.environ.get('REQUEST_ID', '00000000-0000-0000-0000-000000000000'))
File "/var/azureml-server/app.py", line 274, in run_scoring
return AMLResponse(response_body, response_status_code, response_headers, json_str=True)
File "/var/azureml-server/azureml/contrib/services/aml_response.py", line 16, in __init__
super().__init__(json.dumps(message), status=status_code, mimetype='application/json')
File "/azureml-envs/azureml_837afcc2e08970eb06ca005e02218b03/lib/python3.6/json/__init__.py", line 231, in dumps
return _default_encoder.encode(obj)
File "/azureml-envs/azureml_837afcc2e08970eb06ca005e02218b03/lib/python3.6/json/encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/azureml-envs/azureml_837afcc2e08970eb06ca005e02218b03/lib/python3.6/json/encoder.py", line 257, in iterencode
return _iterencode(o, 0)
File "/azureml-envs/azureml_837afcc2e08970eb06ca005e02218b03/lib/python3.6/json/encoder.py", line 180, in default
o.__class__.__name__)
TypeError: Object of type 'int64' is not JSON serializable
When we change output_sample = [[1, 2, 0, 0, 0]] to output_sample = [[1.0, 2.0, 0.0, 0.0, 0.0]], it works, so there seems to be an issue with integers?
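That matches how plain json treats numpy scalars: numpy's float64 subclasses Python's float, so json.dumps accepts it, while numpy's int64 does not subclass int, which is exactly the failure in aml_response.py above. A quick standalone check (not taken from our scoring script):

```python
import json
import numpy as np

# float64 subclasses Python's float, so this serializes fine:
json.dumps([np.float64(1.0), np.float64(2.0)])   # -> '[1.0, 2.0]'

# int64 does not subclass int, so this raises the same error as above:
json.dumps([np.int64(1), np.int64(2)])
# TypeError: Object of type 'int64' is not JSON serializable
```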
It would be great if this could be fixed; one possible direction is sketched below. Any help is appreciated.
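Just as a suggestion (this is not how aml_response.py currently serializes the response): the serialization could fall back to native Python types for numpy values, for example with a numpy-aware encoder:

```python
import json
import numpy as np

class NumpyJSONEncoder(json.JSONEncoder):
    """Fall back to native Python types for numpy scalars and arrays."""
    def default(self, obj):
        if isinstance(obj, np.integer):
            return int(obj)
        if isinstance(obj, np.floating):
            return float(obj)
        if isinstance(obj, np.ndarray):
            return obj.tolist()
        return super().default(obj)

# json.dumps([[np.int64(1), np.int64(2), np.int64(0)]], cls=NumpyJSONEncoder)  # -> '[[1, 2, 0]]'
```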
Thanks
Clemens
csiebler changed the title from "Integers in output_sample causes issues" to "Integers in output_sample cause TypeError" on Jan 12, 2021.