INVALID_ARGUMENT: getPluginCreator could not find plugin CustomEmbLayerNormPluginDynamic version 1 #2

Open
vilmara opened this issue Oct 9, 2020 · 5 comments


vilmara commented Oct 9, 2020

Hi @TrojanXu, I am trying to deploy the BERT TensorRT model with Triton following your steps, but I am getting the error below.

Previously, I built the plugins and copied them to /opt/tritonserver:
export LD_PRELOAD=/opt/tritonserver/libbert_plugins.so:/opt/tritontserver/libcommon.so

Then I ran the Triton server:
sudo docker run --gpus all --rm --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 -p443:443 -p8080:8080 -v /home/triton_server/docs/examples/model_repository/:/models nvcr.io/nvidia/tritonserver:20.03.1-py3 tritonserver --model-repository=/models

Error:
E1009 01:05:16.971516 1 logging.cc:43] INVALID_ARGUMENT: getPluginCreator could not find plugin CustomEmbLayerNormPluginDynamic version 1
E1009 01:05:16.971577 1 logging.cc:43] safeDeserializationUtils.cpp (293) - Serialization Error in load: 0 (Cannot deserialize plugin since corresponding IPluginCreator not found in Plugin Registry)
E1009 01:05:16.971903 1 logging.cc:43] INVALID_STATE: std::exception
E1009 01:05:16.971987 1 logging.cc:43] INVALID_CONFIG: Deserialize the cuda engine failed.
I1009 01:05:16.972723 1 onnx_backend.cc:203] Creating instance densenet_onnx_0_gpu1 on GPU 1 (7.5) using model.onnx
E1009 01:05:17.004133 1 model_repository_manager.cc:891] failed to load 'bert_trt' version 1: Internal: unable to create TensorRT engine

Do you have any recommendations on how to fix this issue? Or is there a more recent version of the software that simplifies the deployment?

Versions used:
TensorRT: release 6.0
Triton docker image: tritonserver:20.03.1-py3
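
For context: this error means the serialized engine references the custom plugin CustomEmbLayerNormPluginDynamic, but no matching IPluginCreator was registered in the Triton process before the engine was deserialized. Two things have to hold for the preload to work: the .so files must actually exist at those paths inside the container (the stock NGC image does not ship them), and LD_PRELOAD must be set in the container's environment, since a variable exported in the host shell is not inherited by the container. It is also worth noting that a TensorRT engine can only be deserialized by the same TensorRT version it was serialized with, so an engine and plugins built against TensorRT 6.0 would require the 20.03.1 image to ship that same version. A quick sketch of both checks, assuming the image tag above:

# Does the image contain the plugin libraries at the expected paths?
sudo docker run --rm nvcr.io/nvidia/tritonserver:20.03.1-py3 ls -l /opt/tritonserver/
# Which TensorRT version does the image ship? (assuming it was installed from Debian packages)
sudo docker run --rm nvcr.io/nvidia/tritonserver:20.03.1-py3 sh -c 'dpkg -l | grep -i nvinfer'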

TrojanXu (Owner) commented Oct 9, 2020

Hi Vilmara, how do you pass LD_PRELOAD to tritonserver inside the container?

vilmara (Author) commented Oct 9, 2020

Hi @TrojanXu, I have tried several methods without success:

Method 1: -e LD_PRELOAD=/opt/tritonserver/libbert_plugins.so:/opt/tritontserver/libcommon.so
sudo docker run --gpus all --rm --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 -p443:443 -p8080:8080 -v /home/triton_server/docs/examples/model_repository/:/models -e LD_PRELOAD=/opt/tritonserver/libbert_plugins.so:/opt/tritontserver/libcommon.so nvcr.io/nvidia/tritonserver:20.03.1-py3 tritonserver --model-repository=/models

Error:

ERROR: ld.so: object '/opt/tritonserver/libbert_plugins.so' from LD_PRELOAD cannot be preloaded (cannot open shared object file): ignored.
ERROR: ld.so: object '/opt/tritontserver/libcommon.so' from LD_PRELOAD cannot be preloaded (cannot open shared object file): ignored.
[the two ERROR lines above repeat many times]
NOTE: Legacy NVIDIA Driver detected.  Compatibility mode ENABLED.
E1009 15:18:10.005124 1 logging.cc:43] INVALID_ARGUMENT: getPluginCreator could not find plugin CustomEmbLayerNormPluginDynamic version 1
E1009 15:18:10.005213 1 logging.cc:43] safeDeserializationUtils.cpp (293) - Serialization Error in load: 0 (Cannot deserialize plugin since corresponding IPluginCreator not found in Plugin Registry)
E1009 15:18:10.005535 1 logging.cc:43] INVALID_STATE: std::exception
E1009 15:18:10.005628 1 logging.cc:43] INVALID_CONFIG: Deserialize the cuda engine failed.
E1009 15:18:10.140418 1 model_repository_manager.cc:891] failed to load 'bert_trt' version 1: Internal: unable to create TensorRT engine
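
Incidentally, the second entry in the log reads '/opt/tritontserver/libcommon.so' (an extra 't'), matching the typo in the LD_PRELOAD value above, so that entry would fail even with the library present; the first entry fails because the file simply does not exist inside the container. With the libraries actually present in the container, the corrected flag would be:

-e LD_PRELOAD=/opt/tritonserver/libbert_plugins.so:/opt/tritonserver/libcommon.so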

Method 2: once inside the container, export LD_PRELOAD=/opt/tritonserver/libbert_plugins.so:/opt/tritontserver/libcommon.so
sudo docker run --gpus all --rm --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 -p443:443 -p8080:8080 -v /home/triton_server/docs/examples/model_repository/:/models nvcr.io/nvidia/tritonserver:20.03.1-py3

Method 2 does nothing; the container just starts, prints the banner, and exits immediately without errors:

Triton Inference Server

NVIDIA Release 20.03.1 (build 12830698)

Copyright (c) 2018-2020, NVIDIA CORPORATION.  All rights reserved.

Various files include modifications (c) NVIDIA CORPORATION.  All rights reserved.
NVIDIA modifications are covered by the license terms that apply to the underlying
project or file.

NOTE: Legacy NVIDIA Driver detected.  Compatibility mode ENABLED.

home@server:~$
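
(Most likely this is expected docker behavior: with no command after the image name and no -it flags, the container runs its entrypoint non-interactively, prints the banner, and exits, so the export never runs inside the container. TrojanXu's answer below shows the interactive variant with -it and bash.)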

Further questions:
1. Do you recommend trying the latest software version? If so, which one?
2. Which docker image should I use? I see that in your Triton presentation you used nvcr.io/nvidia/tensorrtserver instead of nvcr.io/nvidia/tritonserver.

TrojanXu (Owner) commented

Yes, my question is how you put the file libbert_plugins.so into /opt/tritonserver/ inside the docker container. Have you modified the nvcr.io/nvidia/tensorrtserver image? From the command you provided, I only see you using the unmodified nvcr.io/nvidia/tritonserver:20.03.1-py3 image from NGC. Have you actually put libbert_plugins.so itself into the docker image?

vilmara (Author) commented Oct 16, 2020

Hi @TrojanXu, what is the best way to put the files libbert_plugins.so and libcommon.so into /opt/tritonserver/ inside the docker container? I modified the nvcr.io/nvidia/tensorrtserver image by editing the Dockerfile as shown below, but I got errors rebuilding the image:

Dockerfile addition:

RUN cp /opt/tensorrtserver/libbert_plugins.so /opt/tensorrtserver/
RUN cp /opt/tensorrtserver/libcommon.so /opt/tensorrtserver/
ENV LD_PRELOAD /opt/tensorrtserver/libbert_plugins.so:/opt/tensorrtserver/libcommon.so

Error when building the image:

cp: cannot stat '/opt/tensorrtserver/libbert_plugins.so': No such file or directory
The command '/bin/sh -c cp /opt/tensorrtserver/libbert_plugins.so /opt/tensorrtserver/' returned a non-zero code: 1
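
For reference, a RUN cp step can only see files already inside the image, which is why the build fails: the .so files exist on the host, not in the base image. Files from the host are brought in with COPY from the docker build context. A minimal sketch, assuming libbert_plugins.so and libcommon.so sit next to the Dockerfile and using the tritonserver image and paths from the earlier commands:

FROM nvcr.io/nvidia/tritonserver:20.03.1-py3
# COPY takes the plugin libraries from the host build context into the image
COPY libbert_plugins.so /opt/tritonserver/
COPY libcommon.so /opt/tritonserver/
# Preload them so the plugin creators are registered before any engine is deserialized
ENV LD_PRELOAD /opt/tritonserver/libbert_plugins.so:/opt/tritonserver/libcommon.so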

TrojanXu (Owner) commented

One good way is to launch this docker container interactively and pass the library directory into the container via the '-v' option. This is actually the approach described in the Method 2 you mentioned, but you need to add some additional flags. The full command should be:
sudo docker run --gpus all --rm -it --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 -p443:443 -p8080:8080 -v /home/triton_server/docs/examples/model_repository/:/models -v /path/to/your/libs/:/path/inside/container nvcr.io/nvidia/tritonserver:20.03.1-py3 bash
Then you can launch tritonserver manually inside the container with:
LD_PRELOAD=/path/inside/container/libxxx.so tritonserver --model-repository=/models
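
Since both libraries are needed, the LD_PRELOAD value would list them colon-separated, for example (the paths are placeholders for wherever the -v mount puts them inside the container):

LD_PRELOAD=/path/inside/container/libbert_plugins.so:/path/inside/container/libcommon.so tritonserver --model-repository=/models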
