Commit
Update to v280 (#392)
Serafadam authored Sep 7, 2023
1 parent 9c97b34 commit 436226c
Showing 80 changed files with 2,560 additions and 593 deletions.
41 changes: 41 additions & 0 deletions README.md
@@ -141,6 +141,31 @@ This runs your camera as a ROS2 Component and gives you the ability to customize
Parameters that begin with `r_` can be freely modified at runtime, for example with rqt.
Parameters that begin with `i_` are set while the camera is initializing; to change them you have to call the `stop` and `start` services. This can be used to hot-swap NNs at runtime, change resolutions, etc. Below you can see some examples:
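
For illustration, a command-line sketch of that workflow, assuming the default `oak` node name, the driver's `std_srvs/srv/Trigger`-based `stop_camera`/`start_camera` services, and example parameter names (check `ros2 param list` for the ones your version exposes):

```bash
# r_ parameters can be changed while the camera is running
ros2 param set /oak rgb.r_set_man_exposure true
ros2 param set /oak rgb.r_exposure 2000

# i_ parameters are only read while the camera initializes:
# stop the camera, change the parameter, then start it again
ros2 service call /oak/stop_camera std_srvs/srv/Trigger
ros2 param set /oak rgb.i_width 1280
ros2 param set /oak rgb.i_height 720
ros2 service call /oak/start_camera std_srvs/srv/Trigger
```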


#### Publishing TFs from extrinsics

By default, camera transforms are published from default URDF descriptions based on CAD models. This can be overridden by using the TFPublisher class from `depthai_bridge`, which republishes the description with information updated from the device's camera calibration data. To enable this behavior in `depthai_ros_driver`, you can use the following parameter:
- `camera.i_publish_tf_from_calibration` - setting this to true launches the TFPublisher

You can then set the following arguments:
- `camera.i_tf_camera_name` - if not set, defaults to the node name
- `camera.i_tf_camera_model` - if not set, it is detected automatically. If the node cannot find an STL file for the camera, the model is set to `OAK-D`. To set it explicitly in `camera.launch.py`, use `override_cam_model:=true`
- `camera.i_tf_base_frame`
- `camera.i_tf_parent_frame`
- `camera.i_tf_cam_pos_x`
- `camera.i_tf_cam_pos_y`
- `camera.i_tf_cam_pos_z`
- `camera.i_tf_cam_roll`
- `camera.i_tf_cam_pitch`
- `camera.i_tf_cam_yaw`

When using `camera.launch.py`, you can set `pass_tf_args_as_params:=true` so that the TF launch arguments are used to fill those parameters, for example: `ros2 launch depthai_ros_driver camera.launch.py pass_tf_args_as_params:=true parent_frame:=map cam_pos_x:=1.0 imu_from_descr:=true`

It is also possible to set a custom URDF path (for now only absolute paths work) and custom xacro arguments using `camera.i_tf_custom_urdf_path` and `camera.i_tf_custom_xacro_args`. Please note that `robot_state_publisher` must be running. A parameter-file sketch combining these options is shown below.
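
As an illustration only (the `oak` node name, the nested parameter layout, and the `params_file` launch argument are assumptions about the default setup), the options above could be collected in a parameter file:

```bash
# Sketch of a parameter file enabling TF publishing from calibration data.
cat <<'EOF' > /tmp/tf_from_calibration.yaml
/oak:
  ros__parameters:
    camera:
      i_publish_tf_from_calibration: true
      i_tf_parent_frame: "map"
      i_tf_cam_pos_x: "1.0"
      # optional: use your own description instead of the autodetected model
      i_tf_custom_urdf_path: "/absolute/path/to/my_camera.urdf.xacro"
EOF
ros2 launch depthai_ros_driver camera.launch.py params_file:=/tmp/tf_from_calibration.yaml
```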

**NOTE ON IMU EXTRINSICS**
If your camera has an uncalibrated IMU, a warning will be shown, and the IMU frame will be published with zero rotation and translation. You can override this behavior by setting `camera.i_tf_imu_from_descr: true`, which publishes the default IMU extrinsics from the URDF based on the camera model.

#### Setting RGB parameters

By default, the RGB camera outputs the `ISP` frame. To set a custom width and height of the output image, you can set `i_isp_num` and `i_isp_den`, which scale the image dimensions (2 and 3 by default, so 1920x1080 becomes 1280x720). Note that for RGBD alignment to work, the resulting width and height must be divisible by 16.
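
For example, a 2/5 scaling also keeps both dimensions divisible by 16 (1920·2/5 = 768, 1080·2/5 = 432). A hedged sketch, assuming the default `oak` node name and the `params_file` launch argument:

```bash
cat <<'EOF' > /tmp/rgb_isp_scale.yaml
/oak:
  ros__parameters:
    rgb:
      i_isp_num: 2   # 1920 * 2/5 = 768  (768 % 16 == 0)
      i_isp_den: 5   # 1080 * 2/5 = 432  (432 % 16 == 0)
EOF
ros2 launch depthai_ros_driver camera.launch.py params_file:=/tmp/rgb_isp_scale.yaml
```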
@@ -149,6 +174,19 @@ Additionally you can set `i.output_isp: false` to use `video` output and set cus
![](docs/param_rgb.gif)
#### Setting Stereo parameters
![](docs/param_stereo.gif)
##### Depth alignment
When setting `stereo.i_align_depth: true`, the stereo output is aligned to the board socket specified by the `stereo.i_board_socket_id` parameter (0/CAM_A by default).

You can enable rectified stereo streams by setting, for example in the case of the right stream, `i_publish_right_rect: true`. You can also set `i_publish_synced_rect_pair: true` to get both images with the same timestamps.
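
A combined sketch of the stereo options above (the node name, the placement of these parameters under `stereo`, and the `params_file` argument are assumptions):

```bash
cat <<'EOF' > /tmp/stereo_params.yaml
/oak:
  ros__parameters:
    stereo:
      i_align_depth: true
      i_board_socket_id: 0            # align depth to CAM_A
      i_publish_right_rect: true
      i_publish_synced_rect_pair: true
EOF
ros2 launch depthai_ros_driver camera.launch.py params_file:=/tmp/stereo_params.yaml
```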

##### Custom Sensor sockets

Configuring which sensors are used for computing the stereo pair can be done either programmatically, by specifying them in the constructor of a Stereo node (for example when building a custom pipeline), or via the `stereo.i_left_socket_id`/`stereo.i_right_socket_id` parameters. Please note that currently, if you want to use the rgb/center socket instead of one of the given pairs, you will need to build a custom pipeline for that.
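
A sketch of the parameter route, assuming DepthAI's usual socket numbering (CAM_A = 0, CAM_B = 1, CAM_C = 2) and the default `oak` node name:

```bash
cat <<'EOF' > /tmp/stereo_sockets.yaml
/oak:
  ros__parameters:
    stereo:
      i_left_socket_id: 1    # CAM_B
      i_right_socket_id: 2   # CAM_C
EOF
ros2 launch depthai_ros_driver camera.launch.py params_file:=/tmp/stereo_sockets.yaml
```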

#### Feature Tracker

Each sensor node (as well as the rectified streams from the Stereo node) has the option to add a FeatureTracker node, which publishes `depthai_ros_msgs/msg/TrackedFeatures` messages.
To enable features on, for example, the rgb node, set `rgb: i_enable_feature_tracker: true`. To enable publishing on rectified streams, set, for example, `stereo: i_left_rect_enable_feature_tracker: true`.
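
A sketch enabling both, plus a quick way to see which feature topics appear (the node name and parameter placement are assumptions):

```bash
cat <<'EOF' > /tmp/feature_tracker.yaml
/oak:
  ros__parameters:
    rgb:
      i_enable_feature_tracker: true
    stereo:
      i_left_rect_enable_feature_tracker: true
EOF
ros2 launch depthai_ros_driver camera.launch.py params_file:=/tmp/feature_tracker.yaml
# in another terminal, list the TrackedFeatures topics
ros2 topic list | grep -i feature
```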

#### Setting IMU parameters
Parameters:
@@ -260,6 +298,9 @@ Available filters:
`ros2 launch depthai_filters example_seg_overlay.launch.py`
- WLS filter - a stereo depth filter that smooths out the overall depth image based on disparity data. It subscribes to the `stereo/image_raw` and `left/image_raw` topics. The parameters needed to enable it are `left.i_publish_topic` and `stereo.i_output_disparity`;
an example can be seen by running `ros2 launch depthai_filters example_wls_filter.launch.py` (see also the sketch after this list)
- SpatialBB - publishes bounding boxes as 3D line Markers based on spatial detections coming from the driver node
- FeatureTrackerOverlay - publishes a Tracked Features overlay based on features and images coming from the driver
- Features3D - uses the depth image to republish features as a 3D pointcloud
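
As a sketch of the driver-side configuration the WLS filter expects when you compose it with your own driver instance (the packaged example launch file wires this up for you; the node name and the `params_file` argument are assumptions):

```bash
cat <<'EOF' > /tmp/wls_inputs.yaml
/oak:
  ros__parameters:
    left:
      i_publish_topic: true     # publish left/image_raw
    stereo:
      i_output_disparity: true  # publish disparity for the filter to consume
EOF
ros2 launch depthai_ros_driver camera.launch.py params_file:=/tmp/wls_inputs.yaml
```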


### Using external sources for NN inference or Stereo Depth
14 changes: 14 additions & 0 deletions depthai-ros/CHANGELOG.rst
@@ -2,6 +2,20 @@
Changelog for package depthai-ros
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

2.8.0 (2023-09-01)
-------------------
* Add camera image orientation param
* Performance update
* Feature tracker
* Handle USB speed when usb id is specified
* Change misleading error to a clearer message
* Watchdog
* Depth alignment update
* Synced stereo streams
* Lazy Publishing
* Urdf loader
* Add exposure offset

2.7.5 (2023-08-07)
-------------------
* IMU sync fix
2 changes: 1 addition & 1 deletion depthai-ros/CMakeLists.txt
@@ -1,6 +1,6 @@
cmake_minimum_required(VERSION 3.10.2) # CMake version in Ubuntu 18.04 LTS

project(depthai-ros VERSION 2.7.5 LANGUAGES CXX C)
project(depthai-ros VERSION 2.8.0 LANGUAGES CXX C)

set(CMAKE_CXX_STANDARD 14)

3 changes: 2 additions & 1 deletion depthai-ros/package.xml
@@ -1,7 +1,7 @@
<?xml version="1.0"?>
<package format="3">
<name>depthai-ros</name>
<version>2.7.5</version>
<version>2.8.0</version>
<description>The depthai-ros package</description>

<!-- One maintainer tag required, multiple allowed, one person per tag -->
@@ -19,6 +19,7 @@
<exec_depend>depthai_examples</exec_depend>
<exec_depend>depthai_ros_driver</exec_depend>
<exec_depend>depthai_descriptions</exec_depend>
<exec_depend>depthai_filters</exec_depend>

<export>
<build_type>ament_cmake</build_type>
6 changes: 5 additions & 1 deletion depthai_bridge/CMakeLists.txt
@@ -1,7 +1,7 @@
cmake_minimum_required(VERSION 3.10.2) # CMake version in Ubuntu 18.04 LTS
set(CMAKE_POSITION_INDEPENDENT_CODE ON)

project(depthai_bridge VERSION 2.7.5 LANGUAGES CXX C)
project(depthai_bridge VERSION 2.8.0 LANGUAGES CXX C)

set(CMAKE_CXX_STANDARD 14)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
@@ -37,6 +37,7 @@ find_package(sensor_msgs REQUIRED)
find_package(stereo_msgs REQUIRED)
find_package(std_msgs REQUIRED)
find_package(vision_msgs REQUIRED)
find_package(tf2_ros REQUIRED)

set(dependencies
camera_info_manager
@@ -48,6 +49,7 @@ set(dependencies
stereo_msgs
std_msgs
vision_msgs
tf2_ros
)

include_directories(
@@ -62,6 +64,8 @@ file(GLOB LIB_SRC
"src/ImgDetectionConverter.cpp"
"src/SpatialDetectionConverter.cpp"
"src/ImuConverter.cpp"
"src/TFPublisher.cpp"
"src/TrackedFeaturesConverter.cpp"
)

add_library(${PROJECT_NAME} SHARED ${LIB_SRC})
29 changes: 19 additions & 10 deletions depthai_bridge/include/depthai_bridge/BridgePublisher.hpp
@@ -34,23 +34,26 @@ class BridgePublisher {
std::shared_ptr<rclcpp::Node> node,
std::string rosTopic,
ConvertFunc converter,
rclcpp::QoS qosSetting = rclcpp::QoS(rclcpp::KeepLast(1)).transient_local().reliable());
rclcpp::QoS qosSetting = rclcpp::QoS(rclcpp::KeepLast(1)).transient_local().reliable(),
bool lazyPublisher = true);

BridgePublisher(std::shared_ptr<dai::DataOutputQueue> daiMessageQueue,
std::shared_ptr<rclcpp::Node> node,
std::string rosTopic,
ConvertFunc converter,
size_t qosHistoryDepth,
std::string cameraParamUri = "",
std::string cameraName = "");
std::string cameraName = "",
bool lazyPublisher = true);

BridgePublisher(std::shared_ptr<dai::DataOutputQueue> daiMessageQueue,
std::shared_ptr<rclcpp::Node> node,
std::string rosTopic,
ConvertFunc converter,
size_t qosHistoryDepth,
ImageMsgs::CameraInfo cameraInfoData,
std::string cameraName);
std::string cameraName,
bool lazyPublisher = true);

/**
* Tag Dispatcher function to overload the Publisher to an ImageTransport Publisher
@@ -95,6 +98,7 @@ class BridgePublisher {
std::unique_ptr<camera_info_manager::CameraInfoManager> _camInfoManager;
bool _isCallbackAdded = false;
bool _isImageMessage = false; // used to enable camera info manager
bool _lazyPublisher = true;
};

template <class RosMsg, class SimMsg>
@@ -105,8 +109,9 @@ BridgePublisher<RosMsg, SimMsg>::BridgePublisher(std::shared_ptr<dai::DataOutput
std::shared_ptr<rclcpp::Node> node,
std::string rosTopic,
ConvertFunc converter,
rclcpp::QoS qosSetting)
: _daiMessageQueue(daiMessageQueue), _node(node), _converter(converter), _it(node), _rosTopic(rosTopic) {
rclcpp::QoS qosSetting,
bool lazyPublisher)
: _daiMessageQueue(daiMessageQueue), _node(node), _converter(converter), _it(node), _rosTopic(rosTopic), _lazyPublisher(lazyPublisher) {
_rosPublisher = _node->create_publisher<RosMsg>(_rosTopic, qosSetting);
}

@@ -117,14 +122,16 @@ BridgePublisher<RosMsg, SimMsg>::BridgePublisher(std::shared_ptr<dai::DataOutput
ConvertFunc converter,
size_t qosHistoryDepth,
std::string cameraParamUri,
std::string cameraName)
std::string cameraName,
bool lazyPublisher)
: _daiMessageQueue(daiMessageQueue),
_node(node),
_converter(converter),
_it(node),
_rosTopic(rosTopic),
_cameraParamUri(cameraParamUri),
_cameraName(cameraName) {
_cameraName(cameraName),
_lazyPublisher(lazyPublisher) {
_rosPublisher = advertise(qosHistoryDepth, std::is_same<RosMsg, ImageMsgs::Image>{});
}

@@ -135,14 +142,16 @@ BridgePublisher<RosMsg, SimMsg>::BridgePublisher(std::shared_ptr<dai::DataOutput
ConvertFunc converter,
size_t qosHistoryDepth,
ImageMsgs::CameraInfo cameraInfoData,
std::string cameraName)
std::string cameraName,
bool lazyPublisher)
: _daiMessageQueue(daiMessageQueue),
_node(node),
_converter(converter),
_it(node),
_rosTopic(rosTopic),
_cameraInfoData(cameraInfoData),
_cameraName(cameraName) {
_cameraName(cameraName),
_lazyPublisher(lazyPublisher) {
_rosPublisher = advertise(qosHistoryDepth, std::is_same<RosMsg, ImageMsgs::Image>{});
}

@@ -222,7 +231,7 @@ void BridgePublisher<RosMsg, SimMsg>::publishHelper(std::shared_ptr<SimMsg> inDa
}
mainSubCount = _node->count_subscribers(_rosTopic);

if(mainSubCount > 0 || infoSubCount > 0) {
if(!_lazyPublisher || (mainSubCount > 0 || infoSubCount > 0)) {
_converter(inDataPtr, opMsgs);

while(opMsgs.size()) {
13 changes: 9 additions & 4 deletions depthai_bridge/include/depthai_bridge/ImageConverter.hpp
@@ -55,12 +55,12 @@ class ImageConverter {
_updateRosBaseTimeOnToRosMsg = update;
}

void toRosMsgFromBitStream(std::shared_ptr<dai::ImgFrame> inData,
std::deque<ImageMsgs::Image>& outImageMsgs,
dai::RawImgFrame::Type type,
const sensor_msgs::msg::CameraInfo& info);
void convertFromBitstream(dai::RawImgFrame::Type srcType);
void addExposureOffset(dai::CameraExposureOffset& offset);
void convertDispToDepth();

void toRosMsg(std::shared_ptr<dai::ImgFrame> inData, std::deque<ImageMsgs::Image>& outImageMsgs);
ImageMsgs::Image toRosMsgRawPtr(std::shared_ptr<dai::ImgFrame> inData, const sensor_msgs::msg::CameraInfo& info = sensor_msgs::msg::CameraInfo());
ImagePtr toRosMsgPtr(std::shared_ptr<dai::ImgFrame> inData);

void toDaiMsg(const ImageMsgs::Image& inMsg, dai::ImgFrame& outData);
@@ -95,6 +95,11 @@ class ImageConverter {
int64_t _totalNsChange{0};
// Whether to update the ROS base time on each message conversion
bool _updateRosBaseTimeOnToRosMsg{false};
dai::RawImgFrame::Type _srcType;
bool _fromBitstream = false;
bool _convertDispToDepth = false;
bool _addExpOffset = false;
dai::CameraExposureOffset _expOffset;
};

} // namespace ros
87 changes: 87 additions & 0 deletions depthai_bridge/include/depthai_bridge/TFPublisher.hpp
@@ -0,0 +1,87 @@
#pragma once
#include "depthai-shared/common/CameraFeatures.hpp"
#include "depthai/device/CalibrationHandler.hpp"
#include "geometry_msgs/msg/quaternion.hpp"
#include "geometry_msgs/msg/transform_stamped.hpp"
#include "nlohmann/json.hpp"
#include "rclcpp/node.hpp"
#include "rclcpp/parameter_client.hpp"
#include "tf2_ros/static_transform_broadcaster.h"

namespace dai {
namespace ros {
class TFPublisher {
public:
explicit TFPublisher(rclcpp::Node* node,
const dai::CalibrationHandler& calHandler,
const std::vector<dai::CameraFeatures>& camFeatures,
const std::string& camName,
const std::string& camModel,
const std::string& baseFrame,
const std::string& parentFrame,
const std::string& camPosX,
const std::string& camPosY,
const std::string& camPosZ,
const std::string& camRoll,
const std::string& camPitch,
const std::string& camYaw,
const std::string& imuFromDescr,
const std::string& customURDFLocation,
const std::string& customXacroArgs);
/*
* @brief Obtain URDF description by running Xacro with provided arguments.
*/
std::string getURDF();
geometry_msgs::msg::Quaternion quatFromRotM(nlohmann::json rotMatrix);
geometry_msgs::msg::Vector3 transFromExtr(nlohmann::json translation);

private:
/*
* @brief Converts model name to one of the available model families
*/
void convertModelName();
/*
* @brief Prepare arguments for xacro command. If custom URDF location is not provided, check if model name is available in depthai_descriptions package.
*/
std::string prepareXacroArgs();
/*
* @brief Get URDF description and set it as a parameter for robot_state_publisher
*/
void publishDescription();
/*
* @brief Publish camera transforms ("standard" and optical) based on calibration data.
* Frame names are based on socket names and use the following convention: [base_frame]_[socket_name]_camera_frame and
* [base_frame]_[socket_name]_camera_optical_frame
*/
void publishCamTransforms(nlohmann::json camData, rclcpp::Node* node);
/*
* @brief Publish IMU transform based on calibration data.
* The frame name is based on the IMU name and uses the following convention: [base_frame]_imu_frame.
* If IMU extrinsics are not set, a warning is printed and the imu frame is published with zero translation and rotation.
*/
void publishImuTransform(nlohmann::json json, rclcpp::Node* node);
/*
* @brief Check if model STL file is available in depthai_descriptions package.
*/
bool modelNameAvailable();
std::string getCamSocketName(int socketNum);
std::unique_ptr<rclcpp::AsyncParametersClient> _paramClient;
std::shared_ptr<tf2_ros::StaticTransformBroadcaster> _tfPub;
std::string _camName;
std::string _camModel;
std::string _baseFrame;
std::string _parentFrame;
std::string _camPosX;
std::string _camPosY;
std::string _camPosZ;
std::string _camRoll;
std::string _camPitch;
std::string _camYaw;
std::string _imuFromDescr;
std::string _customURDFLocation;
std::string _customXacroArgs;
std::vector<dai::CameraFeatures> _camFeatures;
rclcpp::Logger _logger;
};
} // namespace ros
} // namespace dai