From bed42ca5ae84ea341b4e3ff7ce0634235178e821 Mon Sep 17 00:00:00 2001
From: Taekjin LEE
Date: Wed, 20 Nov 2024 16:41:02 +0900
Subject: [PATCH] docs: update package names of perception packages (#628)

docs: update package names

Signed-off-by: Taekjin LEE
---
 .../coding-guidelines/ros-nodes/parameters.md         |  2 +-
 ...rence-implementaion-perception-diagram.drawio.svg  |  2 +-
 .../faraway-object-detection.md                       | 12 ++++++------
 .../radar-based-3d-detector.md                        |  2 +-
 .../data-types/radar-data/radar-objects-data.md       |  8 ++++----
 .../data-types/radar-data/radar-pointcloud-data.md    |  2 +-
 .../reference-implementations/data-message.md         |  2 +-
 .../creating-sensor-model/index.md                    |  2 +-
 .../integrating-autoware/launch-autoware/index.md     |  2 +-
 .../launch-autoware/perception/index.md               |  8 ++++----
 .../others/running-autoware-without-cuda.md           |  2 +-
 .../training-models.md                                |  4 ++--
 12 files changed, 24 insertions(+), 24 deletions(-)

diff --git a/docs/contributing/coding-guidelines/ros-nodes/parameters.md b/docs/contributing/coding-guidelines/ros-nodes/parameters.md
index 38866654529..34de44ed42e 100644
--- a/docs/contributing/coding-guidelines/ros-nodes/parameters.md
+++ b/docs/contributing/coding-guidelines/ros-nodes/parameters.md
@@ -149,7 +149,7 @@ Autoware has the following two types of parameter files for ROS packages:
 The schema file path is `INSERT_PATH_TO_PACKAGE/schema/` and the schema file name is `INSERT_NODE_NAME.schema.json`. To adapt the template to the ROS node, replace each `INSERT_...` and add all parameters `1..N`.

-See example: _Lidar Apollo Segmentation TVM Nodes_ [schema](https://github.com/autowarefoundation/autoware.universe/blob/main/perception/lidar_apollo_segmentation_tvm_nodes/schema/lidar_apollo_segmentation_tvm_nodes.schema.json)
+See example: _Image Projection Based Fusion - Pointpainting_ [schema](https://github.com/autowarefoundation/autoware.universe/blob/main/universe/perception/autoware_image_projection_based_fusion/schema/pointpainting.schema.json)

 ### Attributes
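To make the schema template concrete, a filled-in file for a hypothetical node with a single parameter might look like the sketch below. The node and parameter names are placeholders invented for illustration; the linked PointPainting schema is the authoritative example of the structure actually used in Autoware.

```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "Parameters for a hypothetical example node (illustrative only)",
  "type": "object",
  "definitions": {
    "example_node": {
      "type": "object",
      "properties": {
        "score_threshold": {
          "type": "number",
          "description": "Placeholder parameter: detections below this score are dropped.",
          "default": 0.35
        }
      },
      "required": ["score_threshold"],
      "additionalProperties": false
    }
  },
  "properties": {
    "/**": {
      "type": "object",
      "properties": {
        "ros__parameters": { "$ref": "#/definitions/example_node" }
      },
      "required": ["ros__parameters"],
      "additionalProperties": false
    }
  },
  "required": ["/**"],
  "additionalProperties": false
}
```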
diff --git a/docs/design/autoware-architecture/perception/image/reference-implementaion-perception-diagram.drawio.svg b/docs/design/autoware-architecture/perception/image/reference-implementaion-perception-diagram.drawio.svg
index 8abfd017a07..ef203ff59fc 100644
--- a/docs/design/autoware-architecture/perception/image/reference-implementaion-perception-diagram.drawio.svg
+++ b/docs/design/autoware-architecture/perception/image/reference-implementaion-perception-diagram.drawio.svg
@@ -1,4 +1,4 @@
[drawio SVG markup omitted (old and new revisions of the perception reference-implementation diagram). The diagram shows camera images, point clouds, and radar objects from Sensing, vehicle odometry from Localization, and vector/point cloud maps feeding Traffic Light Recognition (Traffic Light Detector, Traffic Light Classifier, Multi Camera Fusion, Crosswalk Traffic Light Estimator, V2X Fusion node), Occupancy Grid Map, Obstacle Segmentation, and Object Recognition, where LiDAR, Camera-LiDAR, and Radar detection pipelines (DNN based 3D detector, LiDAR clustering, Camera DNN based 2D detector, Projection based fusion node, Radar Filter, Radar Object Clustering, Radar Object Tracker, Tracking Merger) feed Detection by Tracker, Object Association Merger, Multi Object Tracker, and Map based Prediction; dynamic objects, traffic light states, the occupancy grid map, and obstacle points are published to Planning. The module annotations (originally in Japanese) describe, for example, filtering unknown objects to lanes using the vector map, merging overlapping detections by keeping the higher-confidence one, shape-fitting tracker clusters into bounding boxes, removing false positives using the number of obstacle-segmentation points inside a box, and predicting trajectories of tracked objects using the HD map.]
\ No newline at end of file
diff --git a/docs/design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/faraway-object-detection.md b/docs/design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/faraway-object-detection.md
index c157e360b9b..9d2cae150fe 100644
--- a/docs/design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/faraway-object-detection.md
+++ b/docs/design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/faraway-object-detection.md
@@ -10,13 +10,13 @@ This diagram describes the pipeline for radar faraway dynamic object detection.

 ### Crossing filter

-- [radar_crossing_objects_noise_filter](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/radar_crossing_objects_noise_filter)
+- [radar_crossing_objects_noise_filter](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/autoware_radar_crossing_objects_noise_filter)

 This package can filter the noise objects crossing to the ego vehicle, which are most likely ghost objects.

 ### Velocity filter

-- [object_velocity_splitter](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/object_velocity_splitter)
+- [object_velocity_splitter](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/autoware_object_velocity_splitter)

 Static objects include many noise like the objects reflected from ground.
 In many cases for radars, dynamic objects can be detected stably.
@@ -24,14 +24,14 @@ To filter out static objects, `object_velocity_splitter` can be used.

 ### Range filter

-- [object_range_splitter](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/object_range_splitter)
+- [object_range_splitter](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/autoware_object_range_splitter)

 For some radars, ghost objects sometimes occur for near objects.
 To filter these objects, `object_range_splitter` can be used.

 ### Vector map filter

-- [object-lanelet-filter](https://github.com/autowarefoundation/autoware.universe/blob/main/perception/detected_object_validation/object-lanelet-filter.md)
+- [object-lanelet-filter](https://github.com/autowarefoundation/autoware.universe/blob/main/perception/autoware_detected_object_validation/object-lanelet-filter.md)

 In most cases, vehicles drive in drivable are.
 To filter objects that are out of drivable area, `object-lanelet-filter` can be used.
@@ -41,12 +41,12 @@ Note that if you use `object-lanelet-filter` for radar faraway detection, you ne

 ### Radar object clustering

-- [radar_object_clustering](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/radar_object_clustering)
+- [radar_object_clustering](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/autoware_radar_object_clustering)

 This package can combine multiple radar detections from one object into one and adjust class and size.
 It can suppress splitting objects in tracking module.

-![radar_object_clustering](https://raw.githubusercontent.com/autowarefoundation/autoware.universe/main/perception/radar_object_clustering/docs/radar_clustering.drawio.svg)
+![radar_object_clustering](https://raw.githubusercontent.com/autowarefoundation/autoware.universe/main/perception/autoware_radar_object_clustering/docs/radar_clustering.drawio.svg)

 ## Note
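The velocity filter discussed in this file boils down to splitting radar detections into dynamic and static sets by speed. The sketch below shows that idea in plain Python; the field names and the threshold value are assumptions made for the example, and this is not the `autoware_object_velocity_splitter` implementation.

```python
# Conceptual sketch of the velocity-filter idea; NOT the
# autoware_object_velocity_splitter implementation (field names and the
# threshold are illustrative assumptions).
import math
from dataclasses import dataclass


@dataclass
class RadarDetection:
    x: float   # position in the ego frame [m]
    y: float
    vx: float  # estimated velocity in the ego frame [m/s]
    vy: float


def split_by_velocity(detections, velocity_threshold=3.0):
    """Return (dynamic, static) detections split by speed in m/s."""
    dynamic, static = [], []
    for det in detections:
        speed = math.hypot(det.vx, det.vy)
        (dynamic if speed >= velocity_threshold else static).append(det)
    return dynamic, static


if __name__ == "__main__":
    dets = [RadarDetection(80.0, 1.0, 10.0, 0.0), RadarDetection(40.0, -2.0, 0.2, 0.0)]
    moving, stationary = split_by_velocity(dets)
    print(len(moving), "dynamic /", len(stationary), "static")
```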
diff --git a/docs/design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/radar-based-3d-detector.md b/docs/design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/radar-based-3d-detector.md
index e8992a05553..7499e882067 100644
--- a/docs/design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/radar-based-3d-detector.md
+++ b/docs/design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/radar-based-3d-detector.md
@@ -58,7 +58,7 @@ In detail, please see [this document](faraway-object-detection.md)

 ### Radar fusion to LiDAR-based 3D object detection

-- [radar_fusion_to_detected_object](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/radar_fusion_to_detected_object)
+- [radar_fusion_to_detected_object](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/autoware_radar_fusion_to_detected_object)

 This package contains a sensor fusion module for radar-detected objects and 3D detected objects.
 The fusion node can:

diff --git a/docs/design/autoware-architecture/sensing/data-types/radar-data/radar-objects-data.md b/docs/design/autoware-architecture/sensing/data-types/radar-data/radar-objects-data.md
index e427fb77ed1..19a4fc74032 100644
--- a/docs/design/autoware-architecture/sensing/data-types/radar-data/radar-objects-data.md
+++ b/docs/design/autoware-architecture/sensing/data-types/radar-data/radar-objects-data.md
@@ -43,20 +43,20 @@ Radar can detect x-axis velocity as doppler velocity, but cannot detect y-axis v

 ### Message converter

-- [radar_tracks_msgs_converter](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/radar_tracks_msgs_converter)
+- [radar_tracks_msgs_converter](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/autoware_radar_tracks_msgs_converter)

 This package converts from `radar_msgs/msg/RadarTracks` into `autoware_auto_perception_msgs/msg/DetectedObject` with ego vehicle motion compensation and coordinate transform.

 ### Object merger

-- [object_merger](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/object_merger)
+- [object_merger](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/autoware_object_merger)

 This package can merge 2 topics of `autoware_auto_perception_msgs/msg/DetectedObject`.

-- [simple_object_merger](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/simple_object_merger)
+- [simple_object_merger](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/autoware_simple_object_merger)

 This package can merge simply multiple topics of `autoware_auto_perception_msgs/msg/DetectedObject`.
-Different from [object_merger](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/object_merger), this package doesn't use association algorithm and can merge with low calculation cost.
+Different from [object_merger](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/autoware_object_merger), this package doesn't use association algorithm and can merge with low calculation cost.

 - [topic_tools](https://github.com/ros-tooling/topic_tools)
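The ego vehicle motion compensation mentioned for the message converter has a simple core: a radar measures velocity relative to the moving sensor, so the ego motion has to be added back to obtain a ground-frame velocity. The minimal 2-D sketch below illustrates only that idea; the type and function names are invented for the example, the yaw rate is ignored, and this is not the `autoware_radar_tracks_msgs_converter` code.

```python
# Conceptual sketch of ego vehicle motion compensation for radar tracks;
# NOT the autoware_radar_tracks_msgs_converter implementation.
# The simple 2-D model (ignoring yaw rate) and all names are assumptions.
from dataclasses import dataclass


@dataclass
class Velocity2D:
    vx: float  # [m/s] along the ego x-axis
    vy: float  # [m/s] along the ego y-axis


def compensate_ego_motion(track_velocity: Velocity2D, ego_velocity: Velocity2D) -> Velocity2D:
    """Add the ego velocity to the radar-relative velocity to estimate
    the object's velocity over ground."""
    return Velocity2D(track_velocity.vx + ego_velocity.vx,
                      track_velocity.vy + ego_velocity.vy)


if __name__ == "__main__":
    # An object that appears stationary to a radar mounted on a vehicle driving
    # at 15 m/s is actually moving at roughly 15 m/s over ground.
    print(compensate_ego_motion(Velocity2D(0.0, 0.0), Velocity2D(15.0, 0.0)))
```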
diff --git a/docs/design/autoware-architecture/sensing/data-types/radar-data/radar-pointcloud-data.md b/docs/design/autoware-architecture/sensing/data-types/radar-data/radar-pointcloud-data.md
index 097a3a2a81d..a227196c5dd 100644
--- a/docs/design/autoware-architecture/sensing/data-types/radar-data/radar-pointcloud-data.md
+++ b/docs/design/autoware-architecture/sensing/data-types/radar-data/radar-pointcloud-data.md
@@ -64,7 +64,7 @@ For convenient use of radar pointcloud within existing LiDAR packages, we sugges
 For considered use cases,

 - Use [pointcloud_preprocessor](https://github.com/autowarefoundation/autoware.universe/tree/main/sensing/pointcloud_preprocessor) for radar scan.
-- Apply obstacle segmentation like [ground segmentation](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/ground_segmentation) to radar points for LiDAR-less (camera + radar) systems.
+- Apply obstacle segmentation like [ground segmentation](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/autoware_ground_segmentation) to radar points for LiDAR-less (camera + radar) systems.

 ## Appendix

diff --git a/docs/design/autoware-architecture/sensing/data-types/radar-data/reference-implementations/data-message.md b/docs/design/autoware-architecture/sensing/data-types/radar-data/reference-implementations/data-message.md
index 8f210214969..5b1d50f7ec6 100644
--- a/docs/design/autoware-architecture/sensing/data-types/radar-data/reference-implementations/data-message.md
+++ b/docs/design/autoware-architecture/sensing/data-types/radar-data/reference-implementations/data-message.md
@@ -78,7 +78,7 @@ uint16 BICYCLE = 32006;
 uint16 PEDESTRIAN = 32007;
 ```

-For detail implementation, please see [radar_tracks_msgs_converter](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/radar_tracks_msgs_converter).
+For detail implementation, please see [radar_tracks_msgs_converter](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/autoware_radar_tracks_msgs_converter).

 ## Note
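The classification constants excerpted above (for example `BICYCLE = 32006` and `PEDESTRIAN = 32007`) are numeric ids carried in `radar_msgs/msg/RadarTracks`. A converter typically maps them to labels before building `DetectedObject` messages; the sketch below only illustrates that lookup with the two constants visible in the hunk, and is not the `autoware_radar_tracks_msgs_converter` code.

```python
# Conceptual sketch: mapping radar track classification ids to labels.
# Only the two constants shown in the message excerpt are used; everything
# else is illustrative, not the autoware_radar_tracks_msgs_converter code.
BICYCLE = 32006
PEDESTRIAN = 32007

LABELS = {
    BICYCLE: "BICYCLE",
    PEDESTRIAN: "PEDESTRIAN",
}


def classification_label(radar_class_id: int) -> str:
    """Fall back to UNKNOWN for ids this sketch does not know about."""
    return LABELS.get(radar_class_id, "UNKNOWN")


if __name__ == "__main__":
    print(classification_label(32007))  # -> PEDESTRIAN
    print(classification_label(12345))  # -> UNKNOWN
```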
diff --git a/docs/how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-sensor-model/index.md b/docs/how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-sensor-model/index.md
index 8fc0ca9c257..2c79e955bd8 100644
--- a/docs/how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-sensor-model/index.md
+++ b/docs/how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-sensor-model/index.md
@@ -788,7 +788,7 @@ if you decided to use container for 2D detection pipeline are:
   for example, we will use `/perception/object_detection` as tensorrt_yolo node namespace,
   it will be explained in autoware usage section. For more information,
-  please check [image_projection_based_fusion](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/image_projection_based_fusion) package.
+  please check [image_projection_based_fusion](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/autoware_image_projection_based_fusion) package.

 After the preparing `camera_node_container.launch.py` to our forked `common_sensor_launch` package, we need to build the package:

diff --git a/docs/how-to-guides/integrating-autoware/launch-autoware/index.md b/docs/how-to-guides/integrating-autoware/launch-autoware/index.md
index c464656349a..7b8b2c8e674 100644
--- a/docs/how-to-guides/integrating-autoware/launch-autoware/index.md
+++ b/docs/how-to-guides/integrating-autoware/launch-autoware/index.md
@@ -161,7 +161,7 @@ but if you want to use `camera-lidar fusion` you need to change your perception

 If you want to use traffic light recognition and visualization, you can set `traffic_light_recognition/enable_fine_detection` as true (default). Please check
-[traffic_light_fine_detector](https://autowarefoundation.github.io/autoware.universe/main/perception/traffic_light_fine_detector/)
+[traffic_light_fine_detector](https://autowarefoundation.github.io/autoware.universe/main/perception/autoware_traffic_light_fine_detector/)
 page for more information.
 If you don't want to use traffic light classifier, then you can disable it:

diff --git a/docs/how-to-guides/integrating-autoware/launch-autoware/perception/index.md b/docs/how-to-guides/integrating-autoware/launch-autoware/perception/index.md
index 30b2411187d..3052f04b783 100644
--- a/docs/how-to-guides/integrating-autoware/launch-autoware/perception/index.md
+++ b/docs/how-to-guides/integrating-autoware/launch-autoware/perception/index.md
@@ -37,7 +37,7 @@ that we want to change it since `tier4_perception_component.launch.xml` is the
 top-level launch file of other perception launch files. Here are some predefined perception launch arguments:

-- **`occupancy_grid_map_method:`** This argument determines the occupancy grid map method for perception stack. Please check [probabilistic_occupancy_grid_map](https://autowarefoundation.github.io/autoware.universe/main/perception/probabilistic_occupancy_grid_map/) package for detailed information.
+- **`occupancy_grid_map_method:`** This argument determines the occupancy grid map method for perception stack. Please check [probabilistic_occupancy_grid_map](https://autowarefoundation.github.io/autoware.universe/main/perception/autoware_probabilistic_occupancy_grid_map/) package for detailed information.
 The default probabilistic occupancy grid map method is `pointcloud_based_occupancy_grid_map`. If you want to change it to the `laserscan_based_occupancy_grid_map`, you can change it here:
@@ -47,7 +47,7 @@ Here are some predefined perception launch arguments:
 ```

 - **`detected_objects_filter_method:`** This argument determines the filter method for detected objects.
- Please check [detected_object_validation](https://autowarefoundation.github.io/autoware.universe/main/perception/detected_object_validation/) package for detailed information about lanelet and position filter.
+ Please check [detected_object_validation](https://autowarefoundation.github.io/autoware.universe/main/perception/autoware_detected_object_validation/) package for detailed information about lanelet and position filter.
 The default detected object filter method is `lanelet_filter`. If you want to change it to the `position_filter`, you can change it here:
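As a sketch of what such an override might look like in `tier4_perception_component.launch.xml`: the argument name and value come from the text above, but the exact markup in the launch file may differ, so treat this only as an illustration.

```xml
<!-- Illustrative only; the actual tier4_perception_component.launch.xml markup may differ. -->
<arg name="detected_objects_filter_method" default="position_filter"/>
```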
@@ -57,7 +57,7 @@ Here are some predefined perception launch arguments:
 ```

 - **`detected_objects_validation_method:`** This argument determines the validation method for detected objects.
- Please check [detected_object_validation](https://autowarefoundation.github.io/autoware.universe/main/perception/detected_object_validation/) package for detailed information about validation methods.
+ Please check [detected_object_validation](https://autowarefoundation.github.io/autoware.universe/main/perception/autoware_detected_object_validation/) package for detailed information about validation methods.
 The default detected object filter method is `obstacle_pointcloud`. If you want to change it to the `occupancy_grid`, you can change it here, but remember it requires `laserscan_based_occupancy_grid_map` method as `occupancy_grid_map_method`:
@@ -99,7 +99,7 @@ we will apply these changes `tier4_perception_component.launch.xml` instead of `
 Here are some example changes for the perception pipeline:

 - **`remove_unknown:`** This parameter determines the remove unknown objects at camera-lidar fusion.
- Please check [roi_cluster_fusion](https://github.com/autowarefoundation/autoware.universe/blob/main/perception/image_projection_based_fusion/docs/roi-cluster-fusion.md) node for detailed information.
+ Please check [roi_cluster_fusion](https://github.com/autowarefoundation/autoware.universe/blob/main/perception/autoware_image_projection_based_fusion/docs/roi-cluster-fusion.md) node for detailed information.
 The default value is `true`. If you want to change it to the `false`, you can add this argument to `tier4_perception_component.launch.xml`,

diff --git a/docs/how-to-guides/others/running-autoware-without-cuda.md b/docs/how-to-guides/others/running-autoware-without-cuda.md
index 72bb8804f43..294eb4626c8 100644
--- a/docs/how-to-guides/others/running-autoware-without-cuda.md
+++ b/docs/how-to-guides/others/running-autoware-without-cuda.md
@@ -13,7 +13,7 @@ Autoware Universe's object detection can be run using one of five possible confi
 - `lidar-centerpoint` + `tensorrt_yolo`
 - `euclidean_cluster`

-Of these five configurations, only the last one (`euclidean_cluster`) can be run without CUDA. For more details, refer to the [`euclidean_cluster` module's README file](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/euclidean_cluster).
+Of these five configurations, only the last one (`euclidean_cluster`) can be run without CUDA. For more details, refer to the [`euclidean_cluster` module's README file](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/autoware_euclidean_cluster).

 ## Running traffic light detection without CUDA

diff --git a/docs/how-to-guides/training-machine-learning-models/training-models.md b/docs/how-to-guides/training-machine-learning-models/training-models.md
index 4f9e55cfef5..22b87787b64 100644
--- a/docs/how-to-guides/training-machine-learning-models/training-models.md
+++ b/docs/how-to-guides/training-machine-learning-models/training-models.md
@@ -26,14 +26,14 @@ the readme file accompanying **"traffic_light_classifier"** package. These instr
 the process of training the model using your own dataset. To facilitate your training, we have also
 provided an example dataset containing three distinct classes (green, yellow, red), which you can
 leverage during the training process.
-Detailed instructions for training the traffic light classifier model can be found **[here](https://github.com/autowarefoundation/autoware.universe/blob/main/perception/traffic_light_classifier/README.md)**.
+Detailed instructions for training the traffic light classifier model can be found **[here](https://github.com/autowarefoundation/autoware.universe/blob/main/perception/autoware_traffic_light_classifier/README.md)**.

 ## Training CenterPoint 3D object detection model

 The CenterPoint 3D object detection model within the Autoware has been trained using the **[autowarefoundation/mmdetection3d](https://github.com/autowarefoundation/mmdetection3d/blob/main/projects/AutowareCenterPoint/README.md)** repository.
 To train custom CenterPoint models and convert them into ONNX format for deployment in Autoware, please refer to the instructions provided in the README file included with Autoware's
-**[lidar_centerpoint](https://autowarefoundation.github.io/autoware.universe/main/perception/lidar_centerpoint/)** package.
+**[lidar_centerpoint](https://autowarefoundation.github.io/autoware.universe/main/perception/autoware_lidar_centerpoint/)** package.
 These instructions will provide a step-by-step guide for training the CenterPoint model.
 In order to assist you with your training process, we have also included an example dataset in the TIER IV dataset format.
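The conversion to ONNX mentioned above is, at its core, a standard PyTorch export step. The sketch below shows the generic idea with a toy network; the real CenterPoint export in the mmdetection3d/AutowareCenterPoint project uses its own tooling, input shapes, and separate sub-networks, so treat this only as an illustration.

```python
# Generic ONNX-export sketch with a toy network; NOT the CenterPoint export
# used by mmdetection3d/AutowareCenterPoint (shapes, names, and tooling differ).
import torch
import torch.nn as nn


class TinyHead(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.conv = nn.Conv2d(in_channels=32, out_channels=8, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.conv(x)


model = TinyHead().eval()
dummy_input = torch.randn(1, 32, 64, 64)  # batch, channels, height, width

torch.onnx.export(
    model,
    dummy_input,
    "tiny_head.onnx",
    input_names=["spatial_features"],
    output_names=["head_output"],
    opset_version=13,
)
```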