New Release Available
https://github.com/triton-inference-server/server/releases/tag/v2.31.0

What's New in 2.31.0
Fixed intermittent hangs during model loading for the Python backend.
Refer to the 23.02 column of the Frameworks Support Matrix for container image versions on which the 23.02 inference server container is based.
Known Issues
In some rare cases Triton might overwrite input tensors while they are still in use, which leads to corrupt input data being used for inference with TensorRT models. If you encounter accuracy issues with your TensorRT model, you can work around the issue by enabling the output_copy_stream option in your model's configuration.
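As a sketch, the workaround maps to the CUDA optimization policy in the model's config.pbtxt; the stanza below assumes the output_copy_stream field under optimization.cuda as defined in Triton's model_config.proto:

```
optimization {
  cuda {
    # Use a dedicated CUDA stream for output copies so Triton does not
    # reuse input buffers while they are still being read.
    output_copy_stream: true
  }
}
```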
Some systems that implement malloc() may not release memory back to the operating system right away, causing a false memory leak. This can be mitigated by using a different malloc implementation. Tcmalloc is installed in the Triton container and can be used by specifying the library in LD_PRELOAD.
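For example, a minimal sketch of preloading tcmalloc when launching the server; the library path reflects where gperftools typically installs it in the container and should be verified on your image:

```
# Preload tcmalloc so it replaces the default malloc for the server process.
LD_PRELOAD=/usr/lib/$(uname -m)-linux-gnu/libtcmalloc.so.4:${LD_PRELOAD} \
  tritonserver --model-repository=/models
```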
When using a custom operator with the PyTorch backend, the operator may fail to load due to undefined Python library symbols. This can be worked around by specifying the Python library in LD_PRELOAD.
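A similar sketch for the custom-operator case; the libpython version and path are assumptions and must match the Python shipped in your container:

```
# Preload libpython so the custom operator's Python symbols resolve at load time.
LD_PRELOAD=/usr/lib/$(uname -m)-linux-gnu/libpython3.8.so.1.0:${LD_PRELOAD} \
  tritonserver --model-repository=/models
```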
Auto-complete may increase server start time. To avoid this, provide the full model configuration and launch the server with --disable-auto-complete-config, as shown below.
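For instance (with /models as a placeholder model repository path):

```
# Skip auto-complete entirely; every model must ship a complete config.pbtxt.
tritonserver --model-repository=/models --disable-auto-complete-config
```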
Auto-complete does not support PyTorch models due to lack of metadata in the model. It can only verify that the number of inputs and the input names match what is specified in the model configuration. There is no model metadata about the number of outputs or their datatypes. Related PyTorch bug: Adding model metadata in TorchScript model file pytorch/pytorch#38273
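Providing the full configuration means spelling out every input and output yourself. Below is a minimal sketch for a hypothetical TorchScript model using the PyTorch backend's positional INPUT__N/OUTPUT__N naming convention; the model name, dims, and datatypes are placeholders:

```
name: "my_torchscript_model"
backend: "pytorch"
max_batch_size: 8
input [
  {
    name: "INPUT__0"
    data_type: TYPE_FP32
    dims: [ 3, 224, 224 ]
  }
]
output [
  {
    name: "OUTPUT__0"
    data_type: TYPE_FP32
    dims: [ 1000 ]
  }
]
```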
The Perf Analyzer stability criteria have been changed, which may result in reporting instability for scenarios that were previously considered stable. This change was made to improve the accuracy of Perf Analyzer results. If you observe the instability message, you can resolve it by increasing --measurement-interval in time-windows mode or --measurement-request-count in count-windows mode.
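Hypothetical invocations showing both flags; my_model is a placeholder and the values should be tuned to your workload:

```
# Time-windows mode: lengthen each measurement window (value in milliseconds).
perf_analyzer -m my_model --measurement-mode=time_windows --measurement-interval=10000

# Count-windows mode: require more requests per measurement window.
perf_analyzer -m my_model --measurement-mode=count_windows --measurement-request-count=500
```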
Triton Client PIP wheels for Arm SBSA are not available from PyPI, so pip will install an incorrect Jetson version of the Triton Client library for Arm SBSA. The correct client wheel file can be pulled directly from the Arm SBSA SDK image and manually installed.
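One way to do this, as a sketch; the SDK image tag matches the 23.02 release, but the wheel's location and exact filename inside the image are assumptions you should verify (e.g. by listing the image's contents first):

```
# On the Arm SBSA host: copy the aarch64 client wheel out of the SDK image.
docker create --name triton-sdk nvcr.io/nvidia/tritonserver:23.02-py3-sdk
docker cp triton-sdk:/workspace/install/python/tritonclient-2.31.0-py3-none-manylinux2014_aarch64.whl .
docker rm triton-sdk
python3 -m pip install ./tritonclient-2.31.0-py3-none-manylinux2014_aarch64.whl
```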
Traced models in PyTorch seem to create overflows when int8 tensor values are transformed to int32 on the GPU. Refer to pytorch/pytorch#66930 for more information.
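A minimal sketch of the kind of cast involved; the values and shapes are illustrative, and whether it actually misbehaves depends on the PyTorch build (see the linked issue):

```python
import torch

def cast(t):
    # int8 -> int32 cast on GPU: the operation the linked issue reports
    # producing overflowed values inside traced models.
    return t.to(torch.int32)

x = torch.tensor([-1, 127], dtype=torch.int8, device="cuda")
traced = torch.jit.trace(cast, (x,))

# Compare against the CPU result if you suspect the overflow.
print(traced(x), x.cpu().to(torch.int32))
```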