No semantic point cloud generated #34
Hi, which type of camera are you running? Do you know what the DepthMapFactor of your camera is? This might be related to issue #17.
Hi! Thanks for your response, and this is my rqt_graph. My camera is an Intel RealSense D435i and I don't know how to check my DepthMapFactor. I've checked issue #17 before, but I can't quite understand their solution.
Hi, the rqt_graph looks okay btw! You are very close. I think you may need to declare your own DepthMapFactor in the .yaml file for your camera. For how to define the depth map factor, you can refer to the following file from ORB_SLAM3 and search for the corresponding DepthMapFactor value for an Intel RealSense camera. For importing it into the Python file, you may consult the ROS wiki. https://github.com/UZ-SLAMLab/ORB_SLAM3/blob/master/Examples/ROS/ORB_SLAM3/Asus.yaml
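To illustrate what DepthMapFactor actually does, here is a small sketch of the depth scaling, assuming (as is common for the D435/D435i, which reports depth in millimetres) a factor of 1000.0; the array values are made-up examples:

```python
import numpy as np

# Assumption: the D435(i) publishes 16-bit depth in millimetres,
# so DepthMapFactor = 1000.0 converts raw units to metres.
DEPTH_MAP_FACTOR = 1000.0

raw_depth = np.array([[0, 500, 1500]], dtype=np.uint16)  # raw sensor units
depth_metres = raw_depth.astype(np.float32) / DEPTH_MAP_FACTOR
print(depth_metres)  # [[0.  0.5 1.5]]
```

If the factor is wrong (e.g. left at 1.0 for a millimetre camera), every depth value is 1000x too large, which is a common cause of empty or degenerate point clouds.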
Hi, I think you can check out what frame your RealSense camera is generating (roscd realsense2_camera, maybe?). Also check the source code, especially Tracking.cc, and those .yaml files. I had the same problem but forgot how I resolved it; the last thing I remember was modifying one of the files above.
Ok, thank you. It seems like nothing generates a frame like /map or /world, so the fixed frame stays on camera_link, which means the camera cannot move. In RTAB-Map, the launch file generates a /map frame, so camera_link can move around. I am not very familiar with C++, so maybe I should ask somebody to help me with the source code.
You can try exploring the launch file for the RealSense camera. I don't think you need to modify any source code for this.
Hello, I have no idea how to run this project with a RealSense D435, and luckily I saw your issue. Could you please tell me what I should do and how to modify the code, such as the .yaml file? Hoping for your reply, many thanks! @frankSARU
Hello! I just checked issue #17 and found that we need to change 2 lines in
Thanks a lot!
Thanks a lot! But actually I am facing the problem that when I run "roslaunch semantic_slam camera.launch" with the D435, it prints:

[ INFO] [1611812367.601686804]: No matching device found.... waiting for devices. Reason: std::__cxx11::string openni2_wrapper::OpenNI2Driver::resolveDeviceURI(const string&) @ /tmp/binarydeb/ros-kinetic-openni2-camera-0.4.2/src/openni2_driver.cpp @ 737 : Invalid device number 1, there are 0 devices connected

and nothing shows in rviz. Does it mean that I need to modify the launch file?
I see. That's because the original camera.launch is for openni2-supported cameras; for a RealSense, you need to modify it into your own camera launch file. Please check your
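For reference, a minimal sketch of what a RealSense-based camera launch could look like, assuming the realsense2_camera package and openni2-style topic names on the subscriber side (all topic names here are assumptions you would adapt to your semantic_cloud.yaml):

```xml
<!-- Sketch of a camera.launch replacement for a RealSense D435/D435i.
     The relay target topics are assumptions about what semantic_slam
     subscribes to; adjust them to your configuration. -->
<launch>
  <include file="$(find realsense2_camera)/launch/rs_rgbd.launch" />

  <!-- Bridge RealSense topic names to openni2-style names -->
  <node pkg="topic_tools" type="relay" name="rgb_relay"
        args="/camera/color/image_raw /camera/rgb/image_raw" />
  <node pkg="topic_tools" type="relay" name="depth_relay"
        args="/camera/aligned_depth_to_color/image_raw /camera/depth_registered/image_raw" />
</launch>
```

Using rs_rgbd.launch (rather than rs_camera.launch) is deliberate here: it starts the registration nodelets that produce depth aligned to the color image.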
Thanks a lot! With your tips I can now successfully run camera.launch and slam.launch with the D435, but another problem occurs: there is still nothing in rviz except the raw RGB images, like below.
Hey @AnhTuDo1998 and @frankSARU
Hi @germal, were your topics published? E.g. does rostopic hz /topic show a rate? Can this run on its own (with no octomap generation node)? How about your system's resource usage?
Hello @AnhTuDo1998 and thanks for your reply!

rostopic hz /semantic_pcl/semantic_pcl
average rate: 0.204

The ROS master system is relatively under stress, but rviz subscribes from a remotely connected ROS client laptop with 16 GB of RAM.

top - 09:13:11 up 26 min, 11 users, load average: 8,10, 7,44, 4,77
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND

I would like to use the map data of RTAB-Map and the semantic data of semantic_slam to create a map, but they can only appear in 2 different rviz instances. When I try to subscribe to them in the same rviz instance, rviz crashes immediately.
Hi, I had the same problem about no world_frame. Have you solved the problem? |
Has anybody found a solution to the no "world" frame issue?

[ WARN] [1620043419.754639065]: MessageFilter [target=/world ]: Dropped 100.00% of messages so far. Please turn the [ros.octomap_generator.message_filter] rosconsole logger to DEBUG for more information
@shaheer34mts try renaming "world" to "/world" in the octomap_generator.yaml file. Let me know if this fixes the problem.
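For reference, a sketch of the relevant entry; the surrounding key layout of octomap_generator.yaml is assumed here, so match it against the actual file in the repository:

```yaml
# octomap_generator.yaml (sketch; key nesting is an assumption)
octomap:
  world_frame_id: "/world"   # try both "world" and "/world"
```

The frame name here must match a frame that something in your tf tree actually publishes, otherwise the message filter keeps dropping 100% of messages as in the warning above.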
Hi @AnhTuDo1998, it does not solve the problem. However, if I set the fixed frame to "/camera_link" then I can see the octomap generated (as can be seen in the attached picture). But since my camera_link is now the fixed frame, it does not move and I cannot generate a proper map. And as you can see in the picture, it still shows an error that it cannot find a transform between /camera_link and /world. For some reason I am not able to generate a /world frame. Can you kindly tell me how to generate it?
@shaheer34mts would it be possible for you to print out a /tf tree using the following commands as reference? Please show the output here.
May I know the exact model of the camera you are using?
@AnhTuDo1998 I am trying with both D435 and D435i. The attached file is for D435 |
@shaheer34mts Is this without the semantic_mapping node running?
@AnhTuDo1998 semantic_mapping is running, but SLAM node is not running. |
Would it be possible to share how you are launching the camera? If possible, please show an extract of the launch file and any nodelet file you are using to run the camera here. To the best of my memory, this might be a problem with the camera transformation frame (the RealSense might not have what the author calls /camera_rgb_optical_frame in the default configuration of its launch file).
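If the optical frame really is missing, one hedged workaround is to publish it yourself with an identity static transform; both frame names below are assumptions based on the default realsense2_camera and openni2 naming conventions:

```xml
<!-- Sketch: alias the frame the node expects to the frame the RealSense
     driver actually publishes. Frame names are assumptions; check
     "rosrun tf view_frames" to see what your driver really emits. -->
<node pkg="tf" type="static_transform_publisher" name="rgb_optical_alias"
      args="0 0 0 0 0 0 camera_color_optical_frame camera_rgb_optical_frame 100" />
```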
In fact, I am having a similar problem to yours; I am also using a D435 camera. Have you made any progress?
My problem was solved with this method! Thank you! Now I can create a true semantic octomap; here's the result. (I run this project with RTAB-Map, so I have the colored octomap and binary map together; if you run this project on its own, you will only get the semantic map.)
I have the same problem; I don't know how to add semantic information (semantic point clouds) to rtabmap.
I have successfully added semantic information to rtabmap, but not using semantic point clouds. Instead I remap the rtabmap RGB input to the semantic image output from the node. I also created a synced camera info for the semantic image and depth image.

The reason the semantic point clouds do not work in rtabmap might be the extra information/fields in /semantic_pcl/semantic_pcl, such as confidence levels.

To save yourself all this trouble, I would suggest staying away from this package and just using rtabmap instead if you want to customize with your own camera, since rtabmap has its built-in SLAM and octomap generator. Any image segmentation node will do the job :) However, this package does have its advantages, such as recoloring the points based on the confidence levels; I believe rtabmap can't do this.

Also, on how to feed the SLAM mapping from rtabmap into this package's octomap generator: just change the world_frame_id in octomap_generator.yaml to "map". The octomap generator uses a tf listener to read the odometry from the world frame (map), while rtabmap publishes the tf from SLAM to the frame "map".
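The remapping described above could be sketched in launch-file form as follows; the semantic topic names are assumptions based on the description, and rtabmap.launch's rgb_topic / depth_topic / camera_info_topic / frame_id arguments are standard rtabmap_ros arguments:

```xml
<!-- Sketch: feed the semantic image into rtabmap in place of the raw RGB
     stream. The /semantic_pcl/* topic names are assumptions to adapt. -->
<include file="$(find rtabmap_ros)/launch/rtabmap.launch">
  <arg name="rgb_topic"         value="/semantic_pcl/semantic_image" />
  <arg name="depth_topic"       value="/camera/aligned_depth_to_color/image_raw" />
  <arg name="camera_info_topic" value="/semantic_pcl/camera_info" />
  <arg name="frame_id"          value="camera_link" />
</include>
```

The synced camera info mentioned above matters because rtabmap's RGB-D synchronization requires the camera_info timestamps to match the (republished) semantic image exactly.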
Hello @frankSARU @codieboomboom; I am also trying to run this on a D435i camera, but I have been having some trouble. When I run
I am using the rs_rgbd.launch file, and I changed the semantic_cloud.yaml parameters to
I also made a settings file d435i.yaml to use, which I reference in slam.launch. I believe everything is set up correctly, but it isn't working. Could you please tell me how you were able to get it working? |
Hello there! I'm a student from Kyutech and I have a problem when running this project with a RealSense D435i RGB-D camera. The problem is that I cannot get the semantic point cloud. I can get the semantic image and a normal point cloud, like below.
I don't know if the octomap_generator problem is related to this issue; you can see from the picture that I also get only 1 voxel in the octomap. And as shown in the console, I also got many warnings like this:
[pcl::VoxelGrid::applyFilter] Leaf size is too small for the input dataset. Integer indices would overflow.
ERROR: Error in search: [2724.48 468.161 1206.56] is out of OcTree bounds!
I hope someone who has met the same issue can help me work it out!
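Both errors are consistent with an unscaled depth cloud: point coordinates in the thousands suggest raw millimetre values that were never divided by a DepthMapFactor, as discussed earlier in this thread. A self-contained sketch of the bound check behind PCL's "Leaf size is too small" warning (the 0.05 m leaf size is an assumed example, not a value from this project):

```python
import numpy as np

INT32_MAX = 2**31 - 1  # PCL's VoxelGrid indexes voxels with 32-bit integers

def voxel_grid_overflows(bb_min, bb_max, leaf_size):
    """If the number of voxels spanning the cloud's bounding box exceeds
    what a 32-bit index can hold, applyFilter refuses to run and prints
    the 'Leaf size is too small' warning."""
    dims = np.floor((np.asarray(bb_max) - np.asarray(bb_min)) / leaf_size) + 1
    return float(dims.prod()) > INT32_MAX

# Bounds on the scale of the error message above (likely raw millimetres):
print(voxel_grid_overflows([0, 0, 0], [2724.48, 468.161, 1206.56], 0.05))   # True
# The same cloud scaled down by a DepthMapFactor of 1000 fits easily:
print(voxel_grid_overflows([0, 0, 0], [2.72448, 0.468161, 1.20656], 0.05))  # False
```

So checking the DepthMapFactor (and the resulting coordinate scale of /semantic_pcl/semantic_pcl) is a reasonable first step for both the VoxelGrid warning and the OcTree out-of-bounds error.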