How can I get a point on the optical axis to map to the center pixel when using rs2_project_point_to_pixel()? #13158
-
Hi @multiplexcuriosus The advice at #11031 (comment) may be a good starting point for exploring the described problems with the center point of the wooden panel. In regard to the cuboid, the typical approach when using rs2_project_point_to_pixel would be to (1) perform depth-to-color alignment to map the depth and RGB data together, (2) perform deprojection with rs2_deproject_pixel_to_point, and then (3) use rs2_project_point_to_pixel to convert the 3D points back into 2D pixels. I have not previously seen an approach that projects a point to a pixel directly without generating 3D points first. However, if you wanted to get the Z-depth of a corner on an RGB image then an easier approach would likely be to use the rs2_project_color_pixel_to_depth_pixel instruction to convert a 2D color pixel into the corresponding depth pixel.
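For illustration, a minimal pyrealsense2 sketch of that align -> deproject -> project flow might look something like the following. The stream settings and the example pixel are placeholders rather than recommendations, and error handling is omitted:

```python
import pyrealsense2 as rs

# Placeholder stream settings - adjust to whatever profiles your camera supports.
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

align = rs.align(rs.stream.color)                   # (1) map depth onto the color frame
frames = align.process(pipeline.wait_for_frames())
depth_frame = frames.get_depth_frame()
color_frame = frames.get_color_frame()
color_intrin = color_frame.profile.as_video_stream_profile().get_intrinsics()

u, v = 320, 240                                     # example pixel (e.g. a detected corner)
z = depth_frame.get_distance(u, v)                  # depth in metres at that pixel

# (2) deproject the 2D pixel plus its depth into a 3D point in the color camera's frame
point_3d = rs.rs2_deproject_pixel_to_point(color_intrin, [u, v], z)

# (3) project the 3D point back into 2D pixel coordinates
pixel_2d = rs.rs2_project_point_to_pixel(color_intrin, point_3d)

pipeline.stop()
```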
-
The main reason I would advise against depth in the case of the cuboid is not pose precision but that measuring along the corner edges of an object would likely be more difficult for the camera than a flat surface. For example, if two adjacent flat walls meeting at a 90-degree corner are viewed by the camera, the corner could confuse the depth sensing. There is not a calibration tool for the L515 camera model. It uses a different camera technology from the RealSense 400 Series depth models (lidar instead of two-sensor stereo depth), so the L515 should retain its calibration throughout its lifetime. Working out the offset and writing a mechanism to apply it to all measurements is certainly a valid approach if a 'proper' solution cannot be reached.
-
The RGB image does have a Brown-Conrady distortion model applied to it. This model is applied automatically by the camera hardware before the image is sent along the USB cable to the computer. It is possible to remove the distortion with OpenCV code using its cv2.undistort() function.
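As a rough Python sketch of that: `color_intrin` and `color_image` are assumed to come from your own pipeline, and the Brown-Conrady coefficient order [k1, k2, p1, p2, k3] should be verified against your intrinsics before relying on the result:

```python
import numpy as np
import cv2

# Build the pinhole camera matrix from the color intrinsics reported by the SDK.
K = np.array([[color_intrin.fx, 0.0,             color_intrin.ppx],
              [0.0,             color_intrin.fy, color_intrin.ppy],
              [0.0,             0.0,             1.0]])

# Distortion coefficients, assumed to be in OpenCV's [k1, k2, p1, p2, k3] order.
dist = np.array(color_intrin.coeffs)

# Remove the Brown-Conrady distortion from the RGB image.
undistorted = cv2.undistort(color_image, K, dist)
```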
-
You are very welcome. I'm pleased that I could be of help. :)
-
Hi
I am working with the L515 camera (v2.54.2) on Ubuntu 20.04 and experimenting with projection and deprojection.
I mounted the camera at a fixed height and rotation, placed a wooden panel exactly parallel to the camera surface, and marked on the panel the intersection point between the camera's z-axis and the panel plane (the z-axis being normal to the camera surface and passing through the center of the RGB camera lens).
I then used the rs2_project_point_to_pixel() method to project the point p_C = [0, 0, 0.1] from camera space to pixel coordinates, expecting it to be projected onto the wooden panel at the same y_C coordinate, i.e. at the same height.
Instead, there was a slight offset, as can be seen in the following image.
I then looked at the principal point coordinates (ppx,ppy) in the intrinsics struct and saw that p_C was being mapped exactly to the principal point, which is not surprising, since this is the point where the optical axis pierces the image plane.
From the intrinsics struct I also learned that the active distortion model is Brown-Conrady.
For the project I am working on it is important that I can project precisely from camera space to image space and later also deproject. So my question is: what can I do to get rid of the described projection error? My first guess was that the offset was due to lens distortion, but since the distortion coefficients in the intrinsics struct are non-zero, the rs2_project_point_to_pixel() method should already take distortion into account, right?
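For reference, this is roughly what my check looks like (simplified sketch; the stream settings here are illustrative, not necessarily the exact ones I use):

```python
import pyrealsense2 as rs

# Illustrative color stream configuration.
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
profile = pipeline.start(config)
intrin = profile.get_stream(rs.stream.color).as_video_stream_profile().get_intrinsics()

p_C = [0.0, 0.0, 0.1]                                  # point 0.1 m ahead on the optical axis
pixel = rs.rs2_project_point_to_pixel(intrin, p_C)

print("projected pixel :", pixel)
print("principal point :", (intrin.ppx, intrin.ppy))   # p_C lands exactly here
print("distortion      :", intrin.model, intrin.coeffs)

pipeline.stop()
```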
To give a bit of background: I am trying to do object localization of a cuboid object. My pipeline is already able to find the corners of the cuboid in the image, but in order to do pose estimation I need to know where those corners are in 3D space. An added difficulty is that I was told I cannot use depth information. To still be able to deproject 2D points to 3D points I do the following: I use the linear camera model equations to construct the ray which shoots out from the 2D corner coordinate, goes through the camera space origin (center of projection), and eventually also passes through the 3D corner coordinate. I then intersect this ray with the ground plane (or the top plane) of the shelf to get the z-distance in camera space between the camera and the 3D corner point (a simplified sketch of this ray-plane step is included after the two issues below). When using this method I encounter two issues which imply that I am doing something fundamentally wrong. Let's consider for example this frame with the marked detected corners.
1.) When I deproject point C using the described method, its z value is only about 80% of the actual z-distance.
2.) When I deproject point C and point E and calculate the distance between them, that distance is also only about 80% of the real distance.
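Here is the simplified ray-plane step I mentioned above. The intrinsics values and the plane parameters are placeholders only; in my setup they come from the camera intrinsics and the known shelf geometry, and this uses the ideal pinhole model, i.e. it assumes the corner pixel has already been undistorted:

```python
import numpy as np

def pixel_to_ray(u, v, fx, fy, ppx, ppy):
    """Direction of the ray through pixel (u, v) in camera coordinates (pinhole model)."""
    return np.array([(u - ppx) / fx, (v - ppy) / fy, 1.0])

def intersect_plane(ray_dir, n, d):
    """Intersect the ray t * ray_dir (t > 0) with the plane n . X = d."""
    t = d / np.dot(n, ray_dir)
    return t * ray_dir                              # 3D corner point in camera coordinates

# Placeholder numbers only:
fx, fy, ppx, ppy = 900.0, 900.0, 640.0, 360.0       # illustrative intrinsics
n = np.array([0.0, 0.0, 1.0])                       # plane parallel to the image plane...
d = 1.0                                             # ...at z = 1.0 m in camera space
corner_3d = intersect_plane(pixel_to_ray(700.0, 400.0, fx, fy, ppx, ppy), n, d)
```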
My initially described test setup aims at figuring out why there is this distortion or scaling of the camera space. My hope is that the answer to the question in the title will also help me solve issues 1.) and 2.), since they all seem connected.
Any thoughts and comments would be greatly appreciated!