
YOLOv8 ONNXRuntime Engine in C++

YOLOv10 Runtime is available here: YOLOv10-ONNXRuntime-CPP

Blog

You can find a more in-depth discussion of this project on my blog.

Overview

This project is a C++ implementation of a YOLOv8 inference engine using the ONNX Runtime. It is heavily based on the project yolov8-onnx-cpp by FourierMourier. The primary goal of this implementation is to provide a streamlined and efficient object detection pipeline that can be easily modified to suit various client needs.

Features

  • High Performance: Optimized so that inference can run in a tight loop at full throughput.
  • Simplicity: Simplified codebase, focusing solely on object detection.
  • Flexibility: Easy to modify and extend to fit specific requirements.

Prerequisites

  • ONNX Runtime: Make sure to have ONNX Runtime installed.
  • OpenCV: Required for image processing and display.
  • C++ Compiler: Compatible with C++11 or higher.

Getting Started

Installation

  1. Clone the repository:
git clone https://github.com/K4HVH/YOLOv8-ONNXRuntime-CPP
cd YOLOv8-ONNXRuntime-CPP
  2. Install dependencies:

Ensure that ONNX Runtime and OpenCV are installed on your system. You can find installation instructions for ONNX Runtime here.

Compilation

  1. Configure the project:

Edit the CMakeLists.txt in the project root directory. Replace "path/to/onnxruntime" with the actual path to your ONNX Runtime installation directory.

# Path to ONNX Runtime
set(ONNXRUNTIME_DIR "path/to/onnxruntime")
  2. Build the project:
mkdir build
cd build
cmake ..
make
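
Putting the configuration together, a minimal CMakeLists.txt for a project like this might look like the sketch below. The target name, source list, and the ONNX Runtime include/lib layout are assumptions for illustration; adapt them to the repository's actual build script:

```cmake
cmake_minimum_required(VERSION 3.13)
project(yolo_inference CXX)

set(CMAKE_CXX_STANDARD 11)

# Path to ONNX Runtime (replace with your installation directory)
set(ONNXRUNTIME_DIR "path/to/onnxruntime")

find_package(OpenCV REQUIRED)

add_executable(yolo_inference main.cpp engine.cpp)

target_include_directories(yolo_inference PRIVATE
    "${ONNXRUNTIME_DIR}/include"
    ${OpenCV_INCLUDE_DIRS})

target_link_directories(yolo_inference PRIVATE "${ONNXRUNTIME_DIR}/lib")
target_link_libraries(yolo_inference onnxruntime ${OpenCV_LIBS})
```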

Running the Inference

  1. Run the executable:
./yolo_inference
  2. Test with your image:

Modify the imagePath variable in main.cpp to point to your test image.

Project Structure

  • main.cpp: Entry point of the application. It initializes the inferencer and runs detection on a sample image.
  • engine.hpp: Header file for the YOLOv8 inferencer class, defining its structure and methods.
  • engine.cpp: Implementation of the YOLOv8 inferencer, including preprocessing, forward pass, and postprocessing steps.
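
Based on how main.cpp uses the class, engine.hpp presumably declares something like the following. This is a sketch inferred from the usage example, not a copy of the actual header; a stand-in struct replaces cv::Rect so the snippet compiles without OpenCV:

```cpp
#include <string>
#include <vector>

// Stand-in for cv::Rect so this sketch compiles without OpenCV.
struct Rect { int x = 0, y = 0, width = 0, height = 0; };

// One result as consumed by main.cpp: bounding box, class index, score.
struct Detection {
    Rect box;
    int class_id = -1;
    float confidence = 0.0f;
};

// Interface implied by main.cpp (signatures inferred from usage,
// not copied from engine.hpp):
//
// class YoloInferencer {
// public:
//     YoloInferencer(const std::wstring& modelPath, const char* logid,
//                    const char* provider);
//     std::vector<Detection> infer(const cv::Mat& image,
//                                  float confThreshold, float iouThreshold);
// };
```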

Example Usage

Here is a snippet from main.cpp demonstrating the usage:

#include "engine.hpp"
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>
#include <string>

int main()
{
    std::wstring modelPath = L"best.onnx";
    const char* logid = "yolo_inference";
    const char* provider = "CPU"; // or "CUDA"

    YoloInferencer inferencer(modelPath, logid, provider);

    std::string imagePath = "test.jpg"; // Replace with your image path
    cv::Mat image = cv::imread(imagePath);

    if (image.empty()) {
        std::cerr << "Error: Unable to load image!" << std::endl;
        return -1;
    }

    std::vector<Detection> detections = inferencer.infer(image, 0.1, 0.5); // confidence and NMS thresholds

    for (const auto& detection : detections) {
        cv::rectangle(image, detection.box, cv::Scalar(0, 255, 0), 2);
        std::cout << "Detection: Class=" << detection.class_id << ", Confidence=" << detection.confidence
            << ", x=" << detection.box.x << ", y=" << detection.box.y
            << ", width=" << detection.box.width << ", height=" << detection.box.height << std::endl;
    }

    cv::imshow("output", image);
    cv::waitKey(0);

    return 0;
}
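
The two thresholds passed to infer (0.1 and 0.5 above) presumably correspond to the usual confidence filter and NMS IoU cutoff mentioned in the postprocessing step. As a rough, self-contained illustration of what greedy non-maximum suppression does (not the repository's actual implementation):

```cpp
#include <algorithm>
#include <vector>

struct Box { float x, y, w, h, score; };

// Intersection-over-union of two axis-aligned boxes.
float iou(const Box& a, const Box& b) {
    float x1 = std::max(a.x, b.x), y1 = std::max(a.y, b.y);
    float x2 = std::min(a.x + a.w, b.x + b.w);
    float y2 = std::min(a.y + a.h, b.y + b.h);
    float inter = std::max(0.f, x2 - x1) * std::max(0.f, y2 - y1);
    float uni = a.w * a.h + b.w * b.h - inter;
    return uni > 0.f ? inter / uni : 0.f;
}

// Greedy NMS: keep boxes in descending score order, dropping any box
// that overlaps an already-kept box by more than iouThresh.
std::vector<Box> nms(std::vector<Box> boxes, float iouThresh) {
    std::sort(boxes.begin(), boxes.end(),
              [](const Box& a, const Box& b) { return a.score > b.score; });
    std::vector<Box> kept;
    for (const Box& b : boxes) {
        bool keep = true;
        for (const Box& k : kept)
            if (iou(b, k) > iouThresh) { keep = false; break; }
        if (keep) kept.push_back(b);
    }
    return kept;
}
```

A lower IoU threshold suppresses more aggressively; a lower confidence threshold (like the 0.1 above) lets more candidate boxes through to the NMS stage.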

Contributing

Contributions are welcome! Please open an issue or submit a pull request with your changes.

License

This project is licensed under the GPL-3.0 License.

Acknowledgments

This project borrows heavily from the original yolov8-onnx-cpp repository.
