Repository with all models in ONNX format #786
Comments
Hi @Kromtar, I am trying to convert these ONNX models to TensorRT format using NVIDIA TensorRT containers. Everything works fine with the detection model, but when I export the recognition model, it fails.
However, I suspect this is limited to the smaller models (approx. 15 MB). I didn't try every one of them, but I did manage to convert 10_recognition_model.onnx to TensorRT format without any problem. Do you suspect anything that is going wrong? Also, could you please clarify whether you are using two different model architectures?
Hi @Kromtar.
Hi @Kromtar. The result from the ONNX model is `([[6, 4], [83, 4], [83, 59], [6, 59]], 'ISAP7', 0.07690704349016575)`, but it is supposed to be '35' (the result from EasyOCR itself). The bounding boxes are all correct compared to running the same image with EasyOCR.
How do I use these ONNX models? Is it possible to use them without EasyOCR?
@Kromtar conversion to ONNX has already been discussed many times over the past two years, and EasyOCR already provides a script to export the model into ONNX format. What's the difference between your approach and the one provided officially by JaidedAI?
This is what ChatGPT produced when I asked it to also consider the dict and configs:

```python
import json

import onnx
import torch

# `model`, `input_data`, `dictionary` and `config` are assumed to be
# defined elsewhere: the loaded PyTorch model, a sample input tensor,
# the character list and the configuration dict.

# Define the output ONNX file path
output_path = 'model.onnx'

# Convert the PyTorch model to ONNX format
torch.onnx.export(
    model, input_data, output_path,
    export_params=True, opset_version=11,
    input_names=['input'], output_names=['output'],
    dynamic_axes={'input': {0: 'batch_size'}, 'output': {0: 'batch_size'}},
)

# Mark the batch dimension of the input and output as dynamic
model_proto = onnx.load(output_path)
model_proto.graph.input[0].type.tensor_type.shape.dim[0].dim_param = 'batch_size'
model_proto.graph.output[0].type.tensor_type.shape.dim[0].dim_param = 'batch_size'

# Add the dictionary and configuration as string initializers
# (ONNX string tensors hold bytes, hence the .encode() calls)
dict_proto = onnx.helper.make_tensor(
    "dictionary", onnx.TensorProto.STRING, [len(dictionary)],
    [c.encode() for c in dictionary])
config_proto = onnx.helper.make_tensor(
    "config", onnx.TensorProto.STRING, [1], [json.dumps(config).encode()])
model_proto.graph.initializer.extend([dict_proto, config_proto])
onnx.save(model_proto, output_path)
```

I haven't checked the validity of this script yet, but I hope it gives you some inspiration, @Kromtar.
Hello @alib022, did you manage to solve this problem? Can you show your config.pbtxt for the detector.onnx and recognition.onnx models?
Right now I have `output` and `instance_group` sections configured in the dev environment's config.pbtxt.
@alib022 Thank you very much!
Are there any newer models in ONNX format?
@light42 you said:
Where is the script to convert my text recognition model to ONNX?
Hi @Kromtar @alib022, I am trying to run EasyOCR on NVIDIA Triton with ONNX. Currently, the ONNX model has loaded as:

```json
"input": [
    {
        "name": "input1",
        "data_type": "TYPE_FP32",
        "format": "FORMAT_NONE",
        "dims": [1, 1, 64, -1],
        "is_shape_tensor": false,
        "allow_ragged_batch": false,
        "optional": false
    }
],
"output": [
    {
        "name": "output",
        "data_type": "TYPE_FP32",
        "dims": [1, -1, 97],
        "label_filename": "",
        "is_shape_tensor": false
    }
]
```

Can anyone help me with how to make inferences and make sense of the output? What are the 64 and 97 in the dimensions for?
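A plausible reading, to be confirmed against the model itself: 64 is the fixed input height of the recognition network (a grayscale image of dynamic width, hence `[1, 1, 64, -1]`), and 97 is the size of the per-timestep score vector, i.e. the character set plus a CTC blank. Under that assumption (blank at index 0; the tiny `charset` below is purely illustrative, not EasyOCR's actual character list), a greedy CTC decode of the `[1, T, 97]` output could look like:

```python
import numpy as np

# Greedy CTC decoding sketch for a recognizer output of shape [T, C].
# Assumption: index 0 is the CTC blank; indices 1..C-1 map onto a
# character list. Check the model's character file for the real mapping.
charset = "ab"  # illustrative 2-character alphabet

def greedy_ctc_decode(scores: np.ndarray, charset: str, blank: int = 0) -> str:
    ids = scores.argmax(axis=-1)          # best class per timestep
    text, prev = [], blank
    for i in ids:
        if i != blank and i != prev:      # collapse repeats, drop blanks
            text.append(charset[i - 1])
        prev = i
    return "".join(text)

# 5 timesteps, 3 classes (blank, 'a', 'b'); argmax path a, a, blank, b, b
scores = np.eye(3)[[1, 1, 0, 2, 2]]
print(greedy_ctc_decode(scores, charset))  # -> "ab"
```

For a real Triton response you would take the `[1, T, 97]` output array, squeeze the batch dimension, and feed it to a decoder like this together with the model's actual character list.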
Hi,
@alib022
Looking forward to it!
Wondering if you are able to share your work?
For the recognition network, I'm getting this error.
@alib022 thanks for sharing your insights in this thread. Can I get access to your code, please? I have a similar use case.
I am trying to access the Google Drive link, but I get a 404 page.
In this Drive folder you can find all EasyOCR models in ONNX format covering all currently available languages. Look at the text file in the folder to see which one corresponds to the language you are using.
ONNX is an agnostic, standardized format for storing models. Models in ONNX format can be imported and used by several runtimes and frameworks, including PyTorch, TensorFlow and OpenVINO. The ONNX team also provides its own runtime, ONNX Runtime, which supports development across several languages, operating systems and acceleration hardware. With the EasyOCR models in ONNX format, it is possible to cover new scenarios where using Python and/or PyTorch is not adequate.
In my use case, I am using the EasyOCR models with ONNX Runtime to develop C++ applications. This makes it much easier to bring EasyOCR functionality into production.
In this issue you can find a general guide on how I managed to modify EasyOCR to export the models, originally developed in PyTorch, to ONNX. Note that the export process explained in the guide requires CUDA acceleration. As not everyone has access to such hardware, I have decided to share a copy of the already-exported models.
The models I am making available are a first version; I may optimize the exported models in the coming weeks.