can't get semantic octomap when running TUM rosbag #41
And I get a warning in the terminal:
I found "Intrinsic Camera Calibration of the Kinect" at https://vision.in.tum.de/data/datasets/rgbd-dataset/file_formats#intrinsic_camera_calibration_of_the_kinect, which gives:

```python
fx = 525.0  # focal length x
factor = 5000  # for the 16-bit PNG files
# OR: factor = 1  # for the 32-bit float images in the ROS bag files
for v in range(depth_image.height):
```

I changed fx, fy, cx, cy in semantic_cloud.yaml. What about the others?
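For reference, those intrinsics and the depth factor are used in the standard pinhole back-projection described on the TUM page. A minimal sketch (the function name and default principal-point values are assumptions; the TUM page lists the full calibrated values):

```python
import numpy as np

def depth_to_points(depth_image, fx=525.0, fy=525.0, cx=319.5, cy=239.5, factor=5000.0):
    """Back-project a depth image to 3D points with the pinhole model.

    `factor` is 5000 for TUM's 16-bit PNG depth files, and 1 for the
    32-bit float depth images found in the TUM ROS bag files.
    """
    h, w = depth_image.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth_image / factor                        # depth in meters
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)             # shape (h, w, 3)
```

So if a bag's depth images are already 32-bit floats, passing `factor=1` is the only change needed.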
Thank you for your enthusiastic reply! I saw your responses on many other question pages. After the rosbag playback finishes, the terminal still displays the message "Dropped 9X.XX% of messages so far."
Hi, I used the Kinect camera to record my bag file, but I have the same problem as you. What parameters did you change?
You need to modify two .yaml files, octomap_generator.yaml and semantic_cloud.yaml, to set the tree_type and point_type to 0. |
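A sketch of what those entries might look like; the exact key names and nesting are assumptions here, so check the actual files in the repo:

```yaml
# octomap_generator.yaml (sketch; key nesting may differ in the repo)
octomap:
  tree_type: 0    # 0 reportedly selects the plain (non-semantic) color octree

# semantic_cloud.yaml (sketch)
semantic_pcl:
  point_type: 0   # 0 reportedly selects a plain color point cloud
```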
You should check your rosbag information by using `rosbag info <your_bag>.bag`.
Have you solved the "generate map breaks" issue?
I tried `--clock -r 0.6`, `0.1`, and `0.01`, but it didn't work.
Hi, I am also trying to run the TUM dataset. I checked your answer under this question and also questions #34 and #40, but still no success in generating an octomap using TUM's rosbag.
Have you solved it? I'm having the same problem, and it's still not solved after making the same changes as you.
Hi, I managed to get the octomap, but I've had a very busy week lately, so maybe we can talk about it later.
…------------------ Original message ------------------
From: "floatlazer/semantic_slam" ***@***.***>;
Sent: Tuesday, May 23, 2023, 8:57 PM
Subject: Re: [floatlazer/semantic_slam] can't get semantic octomap when running TUM rosbag (#41)
Hi, I am also trying to run the TUM dataset. I checked your answer under this question and also questions #34 and #40, but still no success in generating an octomap using TUM's rosbag. Here is what I tried: I modified a total of three files. In semantic_cloud.yaml I changed the camera fx, fy, cx, cy and changed the two topic names to "/camera/rgb/image_color" and "/camera/depth/image". In color_pcl_generator.yaml, I modified the camera intrinsic matrix and adjusted it to the same values as semantic_cloud.yaml, then I tried to modify lines 118 and 119 to divide by 5000, but it didn't seem to work. In semantic_slam.rviz, I first changed the Image Topic in line 219 to "/camera/rgb/image_color". Now I can see the semantically segmented image in rviz, but no octree map; I think this may be a misconfiguration of semantic_slam.rviz? I tried to change the class/TF as you answered, but after that rviz did not work, so I had to change it back. I am not familiar with ROS. Can you help me? Here are some of my results:
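One possible pitfall with the divide-by-5000 change above: per the TUM file-formats page, the factor 5000 applies only to the 16-bit PNG depth files, while the 32-bit float depth images in the TUM ROS bags are already in meters (factor 1), so dividing them again would shrink the cloud by 5000x. A minimal sketch of the distinction (the function name and encoding strings used here are illustrative, based on the common ROS image encodings):

```python
import numpy as np

def depth_to_meters(depth, encoding):
    """Convert a TUM depth image to meters.

    16-bit depth ("16UC1", the PNG files) is scaled by 5000;
    32-bit float depth ("32FC1", as in the TUM ROS bags) is already
    in meters and must NOT be divided again.
    """
    if encoding == "16UC1":
        return depth.astype(np.float64) / 5000.0
    if encoding == "32FC1":
        return depth.astype(np.float64)  # factor = 1, already meters
    raise ValueError(f"unexpected depth encoding: {encoding}")
```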
I was very excited to hear this news. I look forward to hearing from you about the solution when you are available!
I have successfully run demo.bag. But when I try to run a TUM rosbag like "Handheld SLAM: fr1/room", I can't get a semantic octomap in RViz.
I have checked rosbag info:
```
bie@bie:~/ros_ws/Sem_SLAM/src/semantic_slam$ rosbag info demo.bag
path:        demo.bag
version:     2.0
duration:    52.8s
start:       Jul 19 2018 21:51:27.18 (1532008287.18)
end:         Jul 19 2018 21:52:19.99 (1532008339.99)
size:        2.8 GB
messages:    3278
compression: none [2814/2814 chunks]
types:       sensor_msgs/Image  [060021388200f6f0f447d0fcd9c64743]
             tf2_msgs/TFMessage [94810edda583a504dfda3829e70d7eec]
topics:      /camera/depth_registered/image_raw   1454 msgs : sensor_msgs/Image
             /camera/rgb/image_raw                1360 msgs : sensor_msgs/Image
             /tf                                   460 msgs : tf2_msgs/TFMessage
             /tf_static                              4 msgs : tf2_msgs/TFMessage

bie@bie:~/ros_ws/Sem_SLAM/src/semantic_slam$ rosbag info tum1.bag
path:        tum1.bag
version:     2.0
duration:    49.3s
start:       May 10 2011 20:51:46.96 (1305031906.96)
end:         May 10 2011 20:52:36.24 (1305031956.24)
size:        2.7 GB
messages:    41803
compression: none [2724/2724 chunks]
types:       sensor_msgs/CameraInfo         [c9a58c1b0b154e0e6da7578cb991d214]
             sensor_msgs/Image              [060021388200f6f0f447d0fcd9c64743]
             sensor_msgs/Imu                [6a62c6daae103f4ff57a132d6f95cec2]
             tf/tfMessage                   [94810edda583a504dfda3829e70d7eec]
             visualization_msgs/MarkerArray [f10fe193d6fac1bf68fad5d31da421a7]
topics:      /camera/depth/camera_info    1361 msgs : sensor_msgs/CameraInfo
             /camera/depth/image          1360 msgs : sensor_msgs/Image
             /camera/rgb/camera_info      1362 msgs : sensor_msgs/CameraInfo
             /camera/rgb/image_color      1362 msgs : sensor_msgs/Image
             /cortex_marker_array         4914 msgs : visualization_msgs/MarkerArray
             /imu                        24569 msgs : sensor_msgs/Imu
             /tf                          6875 msgs : tf/tfMessage
```
It looks like there are some differences between these two bags. So I changed the semantic_cloud.yaml params:
```yaml
color_image_topic: "/camera/rgb/image_color"
depth_image_topic: "/camera/depth/image"
```
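As an alternative to editing the yaml, the bag's topics could instead be remapped at playback time with `rosbag play`'s standard topic remapping. An untested sketch, matching the TUM topic names to the ones demo.bag uses:

```shell
# Remap the TUM bag's topics to the names the nodes already expect,
# so the configs need no edits (sketch; adjust names to your setup).
rosbag play --clock tum1.bag \
  /camera/rgb/image_color:=/camera/rgb/image_raw \
  /camera/depth/image:=/camera/depth_registered/image_raw
```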
It didn't work. So I checked semantic_mapping.rviz and used

```
rosrun rqt_tf_tree rqt_tf_tree
```

The tf_trees are different too.
So I changed semantic_mapping.rviz params:
```yaml
Enabled: true
Frame Timeout: 15
Frames:
  All Enabled: false
  openni_depth_frame:
    Value: false
  openni_depth_optical_frame:
    Value: false
  openni_camera:
    Value: false
  openni_rgb_frame:
    Value: false
  openni_rgb_optical_frame:
    Value: true
  kinect:
    Value: false
  world:
    Value: true
```
But it didn't work either. How can I get a semantic octomap in RViz?