
Getting "Segmentation fault (core dumped)" error while loading the ONNX Runtime model in C++ using onnxruntime 1.16 #18235

Closed
Keval-WOT opened this issue Nov 2, 2023 · 2 comments

Comments


Keval-WOT commented Nov 2, 2023

Describe the issue

I am getting an error while running a session on a loaded onnxruntime model. The session is created and all the other methods work (getting shape info, type info, output count, input count, etc.), but when I try to run it, it fails with "Segmentation fault (core dumped)". I have checked the model: if I use the same calls as plain functions it works, but when I wrap them in a class it gives the error. Since I am new to C++, can anyone help me figure this out?

To reproduce

#include <iostream>
#include <opencv2/opencv.hpp>
#include <eigen3/Eigen/Dense>
#include <eigen3/Eigen/Core>
#include <vector>
#include <numeric>
#include <opencv2/dnn.hpp>
#include <onnxruntime/onnxruntime_cxx_api.h>
#include <optional>
#include <eigen3/Eigen/SparseCore>

using namespace Eigen;
using namespace cv;
using namespace std;
using namespace Ort;

template <typename T>
T vectorProduct(const std::vector<T> &v)
{
    /**
     * Calculates the product of all elements in a given vector.
     *
     * This function multiplies all elements in the input vector to calculate the product.
     *
     * @param v The input vector of elements.
     * @return The product of all elements in the vector.
     *
     * @tparam T The data type of the elements in the vector.
     *
     * @throws None
     *
     * Example:
     *   std::vector<int> numbers = {2, 3, 4, 5};
     *   int product = vectorProduct(numbers);
     *   product will contain the value 120 (2 * 3 * 4 * 5)
     */
    return accumulate(v.begin(), v.end(), 1, std::multiplies<T>());
}
struct Face
{
    VectorXf bbox;       // Bounding box information
    MatrixXf Keypoints;  // Key points associated with the face
    float confidance;    // Confidence score of the face detection
    VectorXf Embeddings; // Embeddings (feature vectors) of the detected face
};

class FaceDetector
{   private:
       
        SessionOptions session_options;
        Session session{nullptr};
        AllocatorWithDefaultOptions allocator;
        vector<string> input_names;
        vector<string> output_names;
        vector<int64_t> input_shape;
        vector<int64_t> output_shape;
        std::vector<const char *> input_names_char;
        std::vector<const char *> output_names_char;
        float det_threshold;
        float input_mean;
        float input_std;
        bool use_kps;
        int fmc;
        cv::Size input_size;
        float anchor_ratio;
        int num_anchors;
        map<tuple<int, int, int>, MatrixXf> center_cache;
        vector<int> feat_stride_fpn;
        float detScale;
        int input_width;
        int input_height;
        size_t inputTensorSize;
        vector<int64_t> inputsize;




    public:
        FaceDetector(float detection_threshold) : det_threshold(detection_threshold)
        {
            string model_path= "detection_640.onnx";
            Env env(ORT_LOGGING_LEVEL_WARNING, "Model Loading Session");
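            // note: this Env is a constructor-local variable, so it goes out of scope when the constructor returns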
            session_options.SetGraphOptimizationLevel(
            GraphOptimizationLevel::ORT_ENABLE_EXTENDED);
            
            try {
               session = Session(env, model_path.c_str(), session_options);
            } catch (const Ort::Exception& exception) {
                std::cout << "Failed to create session: " << exception.what() << std::endl;
            }
            fetch_model_metadata();
            init_vars();
            
        }
        void init_vars()
        {
            for(int i=0;i<session.GetInputCount();i++)
            {
                input_names.emplace_back(session.GetInputNameAllocated(i,allocator).get());
                input_shape =session.GetInputTypeInfo(i).GetTensorTypeAndShapeInfo().GetShape();
            }
            //condition to fix dynamic shape
            for (auto &s : input_shape) {
                if (s < 0) {
                    s = 1;
                }
                }

            for (std::size_t i = 0; i < session.GetOutputCount(); i++) {
                output_names.emplace_back(
                    session.GetOutputNameAllocated(i, allocator).get());
                output_shape =
                    session.GetOutputTypeInfo(i).GetTensorTypeAndShapeInfo().GetShape();
                }
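            // build const char* views over the stored name strings, as required by Session::Run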
            input_names_char.resize(input_names.size());
            std::transform(std::begin(input_names), std::end(input_names),
                        std::begin(input_names_char),
                        [&](const std::string &str) { return str.c_str(); });

            output_names_char.resize(output_names.size());
            std::transform(std::begin(output_names), std::end(output_names),
                        std::begin(output_names_char),
                        [&](const std::string &str) { return str.c_str(); });
            input_size = {640,640};
            input_mean = 127.5;
            input_std = 129.0;
            use_kps = false;
            anchor_ratio = 1.0;
            num_anchors = 1;
            int num_outputs = session.GetOutputCount();
            if (num_outputs == 6)
            {
                fmc = 3;
                feat_stride_fpn = {8, 16, 32};
                num_anchors = 2;
            }
            else if (num_outputs == 9)
            {
                fmc = 3;
                feat_stride_fpn = {8, 16, 32};
                num_anchors = 2;
                use_kps = true;
            }
            else if (num_outputs == 10)
            {
                fmc = 5;
                feat_stride_fpn = {8, 16, 32, 64, 128};
                num_anchors = 1;
            }
            else if (num_outputs == 15)
            {
                fmc = 5;
                feat_stride_fpn = {8, 16, 32, 64, 128};
                num_anchors = 1;
                use_kps = true;
            }
            inputsize ={1,3,640,640};
            inputTensorSize = vectorProduct(inputsize);
            cout<<"Input Tensor size" << inputTensorSize;
        }
            
        Mat Preprocess(const Mat &image)
        {
            // cout << endl
            //      << image.rows << image.cols << endl;
            float imRatio = static_cast<float>(image.rows) / image.cols;
            float ModelRatio = static_cast<float>(input_size.height) / (input_size.width);
            int newWidth, newHeight;
            if (imRatio > ModelRatio)
            {
                newHeight = input_size.height;
                newWidth = static_cast<int>(newHeight / imRatio);
            }
            else
            {
                newWidth = input_size.width;
                newHeight = static_cast<int>(newWidth * imRatio);
                
            }
            detScale = static_cast<float>(newHeight) / image.rows;

            Mat ResizedImg;
            cv::resize(image, ResizedImg, Size(newWidth, newHeight));
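            // letterbox: paste the aspect-preserving resize into the top-left of a 640x640 black canvas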
            Mat detImg = Mat(input_size, CV_8UC3, cv::Scalar(0, 0, 0));
            ResizedImg.copyTo(detImg(Rect(0, 0, newWidth, newHeight)));
            cv::Mat preprocessedImg;
            cv::dnn::blobFromImage(detImg, preprocessedImg, 1.0 / input_std, input_size, cv::Scalar(input_mean, input_mean, input_mean), true, false);

            input_height = preprocessedImg.size[2];
            input_width = preprocessedImg.size[3];

            return preprocessedImg;
        }

        std::vector<Ort::Value> Inference(Mat &preprocessedImg)
        {   
            std::vector<Ort::Value> inputTensors;
            std::vector<float>inputTensorValues(inputTensorSize);
            std::vector<Ort::Value> PredictedTensors;

            int batch_size=1;
            for (int i = 0; i < batch_size; i++)
            {
            copy(preprocessedImg.begin<float>(), preprocessedImg.end<float>(), inputTensorValues.begin() + i * inputTensorSize / batch_size);
            }
            Ort::MemoryInfo memoryInfo = Ort::MemoryInfo::CreateCpu(OrtAllocatorType::OrtArenaAllocator, OrtMemType::OrtMemTypeDefault);
            inputTensors.push_back(Ort::Value::CreateTensor<float>(
            memoryInfo, inputTensorValues.data(), inputTensorSize, inputsize.data(),
            inputsize.size()));
            auto type_info = inputTensors[0].GetTensorTypeAndShapeInfo();
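            // the reported "Segmentation fault (core dumped)" happens on the Run() call below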

            PredictedTensors = session.Run(Ort::RunOptions{nullptr},
                                   input_names_char.data(),
                                   &(inputTensors[0]),
                                   session.GetInputCount(),
                                   output_names_char.data(),
                                   session.GetOutputCount());

            return PredictedTensors;
        }
        void fetch_model_metadata()
        {
            cout << "Model Metadata";
        }
        void Detect(Mat &image)
        {
            Mat Preprocessed_img = Preprocess(image);
            std::vector<Ort::Value> Predictions = Inference(Preprocessed_img);


        }
        
            
};


int main()
{

    FaceDetector detector(0.65);
    cv::Mat imageoriginal = cv::imread("multi-faces.jpg");
    detector.Detect(imageoriginal);
    return 0;
} 

Urgency

No response

Platform

Linux

OS Version

20.04 LTS

ONNX Runtime Installation

Released Package

ONNX Runtime Version or Commit ID

1.16

ONNX Runtime API

C++

Architecture

X86

Execution Provider

Default CPU

Execution Provider Library Version

No response

satyajandhyala (Contributor) commented Nov 8, 2023

There are no attachments for the ONNX model (detection_640.onnx) and the input image (multi-faces.jpg) used in this code, so the issue cannot be reproduced as posted. Taking raw pointers from the std::string elements in the vectors could be what is causing the issue.
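For reference, a minimal sketch of one way to avoid that risk entirely: keep the Ort::AllocatedStringPtr objects returned by GetInputNameAllocated alive for the lifetime of the wrapper and take the const char* views from them directly, so no std::string copies or c_str() pointers are involved. The holder name input_name_holders below is illustrative, not from the posted code.

// Sketch, assuming the same `session` and `allocator` members as in the reporter's class.
std::vector<Ort::AllocatedStringPtr> input_name_holders; // owns the name buffers
std::vector<const char *> input_names_char;              // views passed to Run()
for (size_t i = 0; i < session.GetInputCount(); ++i) {
    input_name_holders.push_back(session.GetInputNameAllocated(i, allocator));
    input_names_char.push_back(input_name_holders.back().get());
}
// The same pattern applies to the output names. The name buffers do not move when the
// holder vector reallocates, because only the unique_ptr handles are relocated.

In the code as posted, the const char* views are built only after input_names and output_names are fully populated, so they should stay valid as long as those vectors are not modified afterwards.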

Keval-WOT (Author) commented

The issue is solved. It was because the onnxruntime library file could not be found at run time; after adding its path to ldconfig, the issue was resolved.
@satyajandhyala Thanks for your response.
