
Implementing Simple AR with OpenCV #99

Open
imuncle opened this issue Jan 16, 2020 · 0 comments
Labels: CV (computer vision learning) · RM (growth notes from the RoboMaster competition)

imuncle commented Jan 16, 2020

This is a small project from early December 2019; I originally wrote it as a small demo for teaching the PnP algorithm to others.

I haven't blogged at all for the past month and a half, even though I've been writing code and learning new things almost every day; there was just no time to write anything up. I'm finally leaving school and heading home tomorrow, so today I have a bit of time to write about what I've done and learned over this month and a half.

Implementing AR with OpenCV essentially comes down to projecting a set of world coordinates into image coordinates, which is about the most basic piece of 3D knowledge there is. There isn't much to it, so let's start with camera calibration.

Camera Calibration

The method everyone uses nowadays is Zhang Zhengyou's calibration method. I won't go through the derivation here; I recommend reading one of the many articles that explain the algorithm's principles instead.

In short, by photographing a chessboard from a few different angles, Zhang's method can compute the camera's intrinsic matrix. The key coordinate transformation for a camera is the following (for simplicity, the distortion terms are omitted here):

s · [u, v, 1]ᵀ = K · [R | t] · [Xw, Yw, Zw, 1]ᵀ    (s is a scale factor; [R | t] is the 3×4 extrinsic matrix)

Here the matrix K is the camera's intrinsic matrix. Once the intrinsic matrix is known, then for a given point, out of its coordinates in the world frame, its coordinates in the image frame, and the camera's extrinsic matrix, knowing any two of these three quantities lets you solve for the third.
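
For reference, here is a minimal calibration sketch using OpenCV's chessboard tools; the board dimensions, square size, and image path are placeholder assumptions, not necessarily the values used for the camera later in this post:

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

using namespace cv;
using namespace std;

// Minimal Zhang-style calibration sketch: detect chessboard corners in several
// photos taken from different angles, then let calibrateCamera estimate the
// intrinsic matrix K and the distortion coefficients.
int main() {
    Size board_size(10, 7);        // inner corners per row / column (placeholder)
    float square_size = 25.0f;     // edge length of one chessboard square in mm (placeholder)
    vector<vector<Point3f>> object_points;
    vector<vector<Point2f>> image_points;
    vector<String> files;
    glob("calib/*.jpg", files);    // placeholder path to the calibration photos
    Size image_size;
    for (const auto& f : files) {
        Mat img = imread(f);
        if (img.empty()) continue;
        image_size = img.size();
        vector<Point2f> corners;
        if (!findChessboardCorners(img, board_size, corners)) continue;
        image_points.push_back(corners);
        // The board's own coordinates: the z = 0 plane, spaced by the square size
        vector<Point3f> obj;
        for (int i = 0; i < board_size.height; i++)
            for (int j = 0; j < board_size.width; j++)
                obj.emplace_back(j * square_size, i * square_size, 0.0f);
        object_points.push_back(obj);
    }
    if (image_points.empty()) return -1;
    Mat K, distCoeffs;
    vector<Mat> rvecs, tvecs;
    calibrateCamera(object_points, image_points, image_size, K, distCoeffs, rvecs, tvecs);
    cout << "K = " << K << endl << "dist = " << distCoeffs << endl;
    return 0;
}
```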

Program Design

What we want is AR. For example, say we want to place a box of given width, height, and depth on a plane in the image. In other words, we specify the world coordinates and need to solve for the image coordinates; once we have the image coordinates of the box's eight vertices, we can connect them with lines and draw the box.

Looking at the equation above, the only thing still missing is the camera's extrinsic matrix. This is where the PnP algorithm comes in.

The PnP algorithm computes the camera's pose relative to a plane, i.e. the extrinsic matrix, from the image coordinates and real-world coordinates of four points on that plane. The algorithm itself is simple; for its derivation, see this article: 3D-2D:PnP算法原理.

So the program flow is: first calibrate the camera to obtain its intrinsics (this only has to be done once after getting the camera, since the intrinsics stay the same as long as the camera does not change); then find four non-collinear feature points on the target plane and solve for the camera's extrinsics from their world and image coordinates; then, given the world coordinates of the points we want to display, use the matrix equation above to compute the corresponding image coordinates; and finally draw the connecting lines.
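
As an aside, the last projection step in this flow does not have to be carried out by hand: once solvePnP has produced the rotation and translation vectors, OpenCV's projectPoints applies K·[R | t] (together with the distortion model) for you. A minimal sketch of that step, where world_coordinate, target, cameraMatrix, and distCoefficients are assumed to be set up as in the full listing below, and cube_corners is a hypothetical vector<Point3f> holding the eight cube vertices:

```cpp
// Assumes world_coordinate / target hold the four 3D-2D correspondences and
// cube_corners (hypothetical) holds the eight cube vertices as Point3f.
Mat rvec, tvec;
solvePnP(world_coordinate, target, cameraMatrix, distCoefficients, rvec, tvec);
vector<Point2f> image_points;
projectPoints(cube_corners, rvec, tvec, cameraMatrix, distCoefficients, image_points);
// image_points now contains the pixel coordinates of the eight vertices, ready for drawing.
```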

Implementation

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

using namespace cv;
using namespace std;

int main() {
    VideoCapture camera(0);
    // Intrinsics and distortion coefficients obtained from a prior calibration
    Mat cameraMatrix = (Mat_<float>(3,3) << 2042.778288788005, 0, 680.1102284350022,
                                            0, 2077.165291902286, 577.8248077989456,
                                            0, 0, 1);
    Mat distCoefficients = (Mat_<float>(5,1) << 0, 0, 0, 0, 0);

    for(;;)
    {
        if(waitKey(1) == 'q')
        {
            break;
        }
        Mat frame;
        camera >> frame;
        if(frame.empty())
        {
            break;
        }
        vector<Point2f> corner_points_buf;
        // Find the chessboard corners in the frame
        if(!findChessboardCorners(frame, Size(10,7), corner_points_buf))
        {
            imshow("original", frame);
            continue;
        }
        // Use the four outermost corners of the chessboard to compute the extrinsics
        vector<Point2f> target;
        target.push_back(corner_points_buf[0]);
        target.push_back(corner_points_buf[9]);
        target.push_back(corner_points_buf[60]);
        target.push_back(corner_points_buf[69]);
        // Their world coordinates in mm, with the chessboard plane as z = 0
        vector<Point3f> world_coordinate;
        world_coordinate.emplace_back(0, 0, 0);
        world_coordinate.emplace_back(230, 0, 0);
        world_coordinate.emplace_back(0, 161, 0);
        world_coordinate.emplace_back(230, 161, 0);
        Mat rvec, tvec, rMat;
        // Solve PnP for the extrinsics: rotation vector rvec and translation vector tvec
        solvePnP(world_coordinate, target, cameraMatrix, distCoefficients, rvec, tvec);
        // Homogeneous world coordinates of the points to draw: a cube with 50 mm edges
        vector<Mat> world_target;
        world_target.push_back((Mat_<float>(4,1) <<  50,  50, -50, 1));
        world_target.push_back((Mat_<float>(4,1) <<  50, 100, -50, 1));
        world_target.push_back((Mat_<float>(4,1) << 100, 100, -50, 1));
        world_target.push_back((Mat_<float>(4,1) << 100,  50, -50, 1));
        world_target.push_back((Mat_<float>(4,1) <<  50,  50,   0, 1));
        world_target.push_back((Mat_<float>(4,1) <<  50, 100,   0, 1));
        world_target.push_back((Mat_<float>(4,1) << 100, 100,   0, 1));
        world_target.push_back((Mat_<float>(4,1) << 100,  50,   0, 1));
        vector<Point2f> image_points;
        Mat image_point;
        Mat transform_(Size(4,3), CV_32FC1);
        // solvePnP returns a rotation vector; convert it to a rotation matrix first
        Rodrigues(rvec, rMat);
        // solvePnP outputs double precision; convert to float to match cameraMatrix
        rMat.convertTo(rMat, CV_32F);
        tvec.convertTo(tvec, CV_32F);
        // Build the 3x4 world-to-camera transformation matrix [R | t]
        for(int i = 0; i < 3; i++)
        {
            for(int j = 0; j < 3; j++)
            {
                transform_.at<float>(i, j) = rMat.at<float>(i, j);
            }
            transform_.at<float>(i, 3) = tvec.at<float>(i);
        }
        for(size_t i = 0; i < world_target.size(); i++)
        {
            // Project: image_point = K * [R | t] * world_point, then divide by the depth
            image_point = cameraMatrix * transform_ * world_target[i];
            image_point.at<float>(0) /= image_point.at<float>(2);
            image_point.at<float>(1) /= image_point.at<float>(2);
            image_points.push_back(Point2f(image_point.at<float>(0), image_point.at<float>(1)));
        }
        // Connect the projected vertices: top face, bottom face, then the four vertical edges
        vector<Point> points;
        points.push_back(image_points[0]);
        points.push_back(image_points[1]);
        points.push_back(image_points[2]);
        points.push_back(image_points[3]);
        polylines(frame, points, true, Scalar(0, 255, 0), 3);
        points.clear();
        points.push_back(image_points[4]);
        points.push_back(image_points[5]);
        points.push_back(image_points[6]);
        points.push_back(image_points[7]);
        polylines(frame, points, true, Scalar(0, 255, 0), 3);
        line(frame, image_points[0], image_points[4], Scalar(0,255,0), 2);
        line(frame, image_points[1], image_points[5], Scalar(0,255,0), 2);
        line(frame, image_points[2], image_points[6], Scalar(0,255,0), 2);
        line(frame, image_points[3], image_points[7], Scalar(0,255,0), 2);
        imshow("original", frame);
    }
    return 0;
}
```
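
One practical refinement: findChessboardCorners returns only roughly pixel-accurate corners, so the estimated pose (and therefore the drawn cube) can jitter from frame to frame. Refining the corners with cornerSubPix before calling solvePnP usually makes it noticeably more stable. A sketch of that extra step, reusing frame and corner_points_buf from the loop above:

```cpp
// Refine the detected chessboard corners to sub-pixel accuracy before solvePnP.
// frame and corner_points_buf are the variables from the capture loop above.
Mat gray;
cvtColor(frame, gray, COLOR_BGR2GRAY);
cornerSubPix(gray, corner_points_buf, Size(11, 11), Size(-1, -1),
             TermCriteria(TermCriteria::EPS + TermCriteria::COUNT, 30, 0.01));
```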

Results

(Screenshot: the green wireframe cube drawn on top of the chessboard.)
