diff --git a/docs/contributing/coding-guidelines/ros-nodes/parameters.md b/docs/contributing/coding-guidelines/ros-nodes/parameters.md
index 38866654529..34de44ed42e 100644
--- a/docs/contributing/coding-guidelines/ros-nodes/parameters.md
+++ b/docs/contributing/coding-guidelines/ros-nodes/parameters.md
@@ -149,7 +149,7 @@ Autoware has the following two types of parameter files for ROS packages:
The schema file path is `INSERT_PATH_TO_PACKAGE/schema/` and the schema file name is `INSERT_NODE_NAME.schema.json`. To adapt the template to the ROS node, replace each `INSERT_...` and add all parameters `1..N`.
-See example: _Lidar Apollo Segmentation TVM Nodes_ [schema](https://github.com/autowarefoundation/autoware.universe/blob/main/perception/lidar_apollo_segmentation_tvm_nodes/schema/lidar_apollo_segmentation_tvm_nodes.schema.json)
+See example: _Image Projection Based Fusion - Pointpainting_ [schema](https://github.com/autowarefoundation/autoware.universe/blob/main/universe/perception/autoware_image_projection_based_fusion/schema/pointpainting.schema.json)
### Attributes
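As a quick illustration of what such a schema file contains, here is a minimal Python sketch that builds and uses a schema loosely modeled on the Autoware template; the node name `my_node`, the parameter `max_range`, and the exact field layout are placeholders, so the linked pointpainting schema remains the authoritative reference.

```python
import json

import jsonschema  # third-party; pip install jsonschema

# Hypothetical schema for a node "my_node" with one parameter "max_range".
# The field layout loosely follows the Autoware schema template; check the
# linked pointpainting schema for the authoritative structure.
schema = {
    "$schema": "http://json-schema.org/draft-07/schema#",
    "title": "Parameters for my_node",
    "type": "object",
    "definitions": {
        "my_node": {
            "type": "object",
            "properties": {
                "max_range": {
                    "type": "number",
                    "description": "Maximum detection range [m].",
                    "default": 60.0,
                },
            },
            "required": ["max_range"],
            "additionalProperties": False,
        },
    },
    "properties": {
        "/**": {
            "type": "object",
            "properties": {
                "ros__parameters": {"$ref": "#/definitions/my_node"},
            },
            "required": ["ros__parameters"],
        },
    },
    "required": ["/**"],
}

# Validate a parameter set (as it would appear after loading the YAML file).
params = {"/**": {"ros__parameters": {"max_range": 60.0}}}
jsonschema.validate(instance=params, schema=schema)
print(json.dumps(schema, indent=2))
```

In practice the schema lives in its own `.schema.json` file; the Python dict above is used here only so the structure can be validated in a self-contained example.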
diff --git a/docs/design/autoware-architecture/perception/image/reference-implementaion-perception-diagram.drawio.svg b/docs/design/autoware-architecture/perception/image/reference-implementaion-perception-diagram.drawio.svg
index 8abfd017a07..ef203ff59fc 100644
--- a/docs/design/autoware-architecture/perception/image/reference-implementaion-perception-diagram.drawio.svg
+++ b/docs/design/autoware-architecture/perception/image/reference-implementaion-perception-diagram.drawio.svg
@@ -1,4 +1,4 @@
-
\ No newline at end of file
+
\ No newline at end of file
diff --git a/docs/design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/faraway-object-detection.md b/docs/design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/faraway-object-detection.md
index c157e360b9b..9d2cae150fe 100644
--- a/docs/design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/faraway-object-detection.md
+++ b/docs/design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/faraway-object-detection.md
@@ -10,13 +10,13 @@ This diagram describes the pipeline for radar faraway dynamic object detection.
### Crossing filter
-- [radar_crossing_objects_noise_filter](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/radar_crossing_objects_noise_filter)
+- [radar_crossing_objects_noise_filter](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/autoware_radar_crossing_objects_noise_filter)
This package can filter out noise objects crossing the path of the ego vehicle, which are most likely ghost objects.
### Velocity filter
-- [object_velocity_splitter](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/object_velocity_splitter)
+- [object_velocity_splitter](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/autoware_object_velocity_splitter)
Static objects include much noise, such as objects reflected from the ground.
In many cases, radars can detect dynamic objects stably.
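To make the velocity-filter idea concrete, here is a minimal sketch under simplifying assumptions: the `RadarObject` dataclass and the 3 m/s threshold are made up for illustration, while the real `object_velocity_splitter` node operates on Autoware perception messages and has its own parameters.

```python
from dataclasses import dataclass


@dataclass
class RadarObject:
    """Hypothetical, simplified stand-in for a detected radar object."""
    x: float   # position in the ego frame [m]
    y: float
    vx: float  # velocity in the ego frame [m/s]
    vy: float


def split_by_velocity(objects, velocity_threshold=3.0):
    """Split detections into static and dynamic sets by absolute speed,
    mirroring the idea behind object_velocity_splitter (the threshold here
    is an arbitrary example value)."""
    static_objects, dynamic_objects = [], []
    for obj in objects:
        speed = (obj.vx ** 2 + obj.vy ** 2) ** 0.5
        (dynamic_objects if speed >= velocity_threshold else static_objects).append(obj)
    return static_objects, dynamic_objects
```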
@@ -24,14 +24,14 @@ To filter out static objects, `object_velocity_splitter` can be used.
### Range filter
-- [object_range_splitter](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/object_range_splitter)
+- [object_range_splitter](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/autoware_object_range_splitter)
For some radars, ghost objects sometimes occur around nearby objects.
To filter out these objects, `object_range_splitter` can be used.
### Vector map filter
-- [object-lanelet-filter](https://github.com/autowarefoundation/autoware.universe/blob/main/perception/detected_object_validation/object-lanelet-filter.md)
+- [object-lanelet-filter](https://github.com/autowarefoundation/autoware.universe/blob/main/perception/autoware_detected_object_validation/object-lanelet-filter.md)
In most cases, vehicles drive within the drivable area.
To filter out objects that are outside the drivable area, `object-lanelet-filter` can be used.
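A rough sketch of the vector-map-filter idea, assuming the drivable area is given as simple 2D polygons; the real `object-lanelet-filter` works on lanelet2 maps and `DetectedObjects`, so treat this purely as an illustration of the point-in-polygon test behind it.

```python
from shapely.geometry import Point, Polygon  # third-party; pip install shapely


def filter_by_drivable_area(objects, lanelet_polygons):
    """Keep only objects whose (x, y) position lies inside at least one of the
    given lanelet polygons. `objects` is a list of (x, y) tuples here."""
    polygons = [Polygon(coords) for coords in lanelet_polygons]
    return [
        (x, y)
        for x, y in objects
        if any(poly.contains(Point(x, y)) for poly in polygons)
    ]


# Example: one rectangular "lanelet" and two objects, one inside and one outside.
lane = [(0.0, -2.0), (50.0, -2.0), (50.0, 2.0), (0.0, 2.0)]
print(filter_by_drivable_area([(10.0, 0.0), (10.0, 5.0)], [lane]))  # [(10.0, 0.0)]
```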
@@ -41,12 +41,12 @@ Note that if you use `object-lanelet-filter` for radar faraway detection, you ne
### Radar object clustering
-- [radar_object_clustering](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/radar_object_clustering)
+- [radar_object_clustering](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/autoware_radar_object_clustering)
This package can combine multiple radar detections originating from one object into a single detection and adjust its class and size.
This suppresses object splitting in the tracking module.
-![radar_object_clustering](https://raw.githubusercontent.com/autowarefoundation/autoware.universe/main/perception/radar_object_clustering/docs/radar_clustering.drawio.svg)
+![radar_object_clustering](https://raw.githubusercontent.com/autowarefoundation/autoware.universe/main/perception/autoware_radar_object_clustering/docs/radar_clustering.drawio.svg)
## Note
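As an illustration of the clustering step described above, the following sketch greedily merges detections that fall within a few metres of an existing cluster centre; the 4 m threshold and the plain (x, y) tuples are assumptions, and the real `radar_object_clustering` node additionally adjusts class and size.

```python
def cluster_radar_objects(points, distance_threshold=4.0):
    """Greedily group radar detections (given as (x, y) positions in the ego
    frame [m]) so that detections from one vehicle end up in one cluster."""
    clusters = []  # each cluster is a list of (x, y) points
    for x, y in points:
        for cluster in clusters:
            # Compare against the running centre of the cluster.
            cx = sum(p[0] for p in cluster) / len(cluster)
            cy = sum(p[1] for p in cluster) / len(cluster)
            if ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 < distance_threshold:
                cluster.append((x, y))
                break
        else:
            clusters.append([(x, y)])
    return clusters
```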
diff --git a/docs/design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/radar-based-3d-detector.md b/docs/design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/radar-based-3d-detector.md
index e8992a05553..7499e882067 100644
--- a/docs/design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/radar-based-3d-detector.md
+++ b/docs/design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/radar-based-3d-detector.md
@@ -58,7 +58,7 @@ In detail, please see [this document](faraway-object-detection.md)
### Radar fusion to LiDAR-based 3D object detection
-- [radar_fusion_to_detected_object](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/radar_fusion_to_detected_object)
+- [radar_fusion_to_detected_object](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/autoware_radar_fusion_to_detected_object)
This package contains a sensor fusion module for radar-detected objects and 3D detected objects. The fusion node can:
diff --git a/docs/design/autoware-architecture/sensing/data-types/radar-data/radar-objects-data.md b/docs/design/autoware-architecture/sensing/data-types/radar-data/radar-objects-data.md
index e427fb77ed1..19a4fc74032 100644
--- a/docs/design/autoware-architecture/sensing/data-types/radar-data/radar-objects-data.md
+++ b/docs/design/autoware-architecture/sensing/data-types/radar-data/radar-objects-data.md
@@ -43,20 +43,20 @@ Radar can detect x-axis velocity as doppler velocity, but cannot detect y-axis v
### Message converter
-- [radar_tracks_msgs_converter](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/radar_tracks_msgs_converter)
+- [radar_tracks_msgs_converter](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/autoware_radar_tracks_msgs_converter)
This package converts `radar_msgs/msg/RadarTracks` into `autoware_auto_perception_msgs/msg/DetectedObject` with ego vehicle motion compensation and coordinate transformation.
### Object merger
-- [object_merger](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/object_merger)
+- [object_merger](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/autoware_object_merger)
This package can merge two topics of `autoware_auto_perception_msgs/msg/DetectedObject`.
-- [simple_object_merger](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/simple_object_merger)
+- [simple_object_merger](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/autoware_simple_object_merger)
This package can simply merge multiple topics of `autoware_auto_perception_msgs/msg/DetectedObject`.
-Different from [object_merger](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/object_merger), this package doesn't use association algorithm and can merge with low calculation cost.
+Unlike [object_merger](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/autoware_object_merger), this package doesn't use an association algorithm and can merge topics at low computational cost.
- [topic_tools](https://github.com/ros-tooling/topic_tools)
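To illustrate what `simple_object_merger` does conceptually, the sketch below concatenates the object arrays of several `DetectedObjects`-like messages without any association step; the dict-based message layout and the header handling are simplifications.

```python
def merge_detected_objects(messages):
    """Concatenate the object arrays of several DetectedObjects-like messages
    (each a dict with 'header' and 'objects') without association, which is
    the basic idea behind autoware_simple_object_merger. Header handling is
    simplified: the first message's header is reused."""
    if not messages:
        return {"header": None, "objects": []}
    merged = {"header": messages[0]["header"], "objects": []}
    for msg in messages:
        merged["objects"].extend(msg["objects"])
    return merged
```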
diff --git a/docs/design/autoware-architecture/sensing/data-types/radar-data/radar-pointcloud-data.md b/docs/design/autoware-architecture/sensing/data-types/radar-data/radar-pointcloud-data.md
index 097a3a2a81d..a227196c5dd 100644
--- a/docs/design/autoware-architecture/sensing/data-types/radar-data/radar-pointcloud-data.md
+++ b/docs/design/autoware-architecture/sensing/data-types/radar-data/radar-pointcloud-data.md
@@ -64,7 +64,7 @@ For convenient use of radar pointcloud within existing LiDAR packages, we sugges
For the considered use cases,
- Use [pointcloud_preprocessor](https://github.com/autowarefoundation/autoware.universe/tree/main/sensing/pointcloud_preprocessor) for radar scan.
-- Apply obstacle segmentation like [ground segmentation](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/ground_segmentation) to radar points for LiDAR-less (camera + radar) systems.
+- Apply obstacle segmentation like [ground segmentation](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/autoware_ground_segmentation) to radar points for LiDAR-less (camera + radar) systems.
## Appendix
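To show in the simplest possible terms what applying obstacle segmentation to radar points means, here is a naive height-threshold sketch; the real `autoware_ground_segmentation` filters are far more robust, and the 0.2 m threshold used here is arbitrary.

```python
def remove_ground_points(points, ground_z_threshold=0.2):
    """Drop points whose height is below a fixed threshold above an assumed
    flat ground plane. points: list of (x, y, z) tuples in the ego frame [m].
    This is a deliberately naive stand-in for the ground segmentation
    algorithms referenced above, shown only to illustrate the idea."""
    return [(x, y, z) for x, y, z in points if z > ground_z_threshold]
```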
diff --git a/docs/design/autoware-architecture/sensing/data-types/radar-data/reference-implementations/data-message.md b/docs/design/autoware-architecture/sensing/data-types/radar-data/reference-implementations/data-message.md
index 8f210214969..5b1d50f7ec6 100644
--- a/docs/design/autoware-architecture/sensing/data-types/radar-data/reference-implementations/data-message.md
+++ b/docs/design/autoware-architecture/sensing/data-types/radar-data/reference-implementations/data-message.md
@@ -78,7 +78,7 @@ uint16 BICYCLE = 32006;
uint16 PEDESTRIAN = 32007;
```
-For detail implementation, please see [radar_tracks_msgs_converter](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/radar_tracks_msgs_converter).
+For the detailed implementation, please see [radar_tracks_msgs_converter](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/autoware_radar_tracks_msgs_converter).
## Note
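As a small illustration of how the classification IDs above can be consumed downstream, the sketch below maps them to label strings; only the two IDs visible in this excerpt are included, and the actual converter handles this mapping as part of converting full ROS messages.

```python
# Classification IDs shown in the message definition above; the full list
# lives in the source document, so only the two visible here are used.
BICYCLE = 32006
PEDESTRIAN = 32007

LABEL_NAMES = {
    BICYCLE: "BICYCLE",
    PEDESTRIAN: "PEDESTRIAN",
}


def classification_to_label(classification_id):
    """Map a radar_msgs/msg/RadarTracks classification ID to a human-readable
    label, defaulting to "UNKNOWN" for IDs not covered by this sketch."""
    return LABEL_NAMES.get(classification_id, "UNKNOWN")


print(classification_to_label(PEDESTRIAN))  # "PEDESTRIAN"
```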
diff --git a/docs/how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-sensor-model/index.md b/docs/how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-sensor-model/index.md
index 8fc0ca9c257..2c79e955bd8 100644
--- a/docs/how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-sensor-model/index.md
+++ b/docs/how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-sensor-model/index.md
@@ -788,7 +788,7 @@ if you decided to use container for 2D detection pipeline are:
for example, we will use `/perception/object_detection` as the tensorrt_yolo node namespace;
this will be explained in the Autoware usage section.
For more information,
- please check [image_projection_based_fusion](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/image_projection_based_fusion) package.
+ please check [image_projection_based_fusion](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/autoware_image_projection_based_fusion) package.
After preparing `camera_node_container.launch.py` in our forked `common_sensor_launch` package,
we need to build the package:
diff --git a/docs/how-to-guides/integrating-autoware/launch-autoware/index.md b/docs/how-to-guides/integrating-autoware/launch-autoware/index.md
index c464656349a..7b8b2c8e674 100644
--- a/docs/how-to-guides/integrating-autoware/launch-autoware/index.md
+++ b/docs/how-to-guides/integrating-autoware/launch-autoware/index.md
@@ -161,7 +161,7 @@ but if you want to use `camera-lidar fusion` you need to change your perception
If you want to use traffic light recognition and visualization,
you can set `traffic_light_recognition/enable_fine_detection` to true (the default).
Please check
-[traffic_light_fine_detector](https://autowarefoundation.github.io/autoware.universe/main/perception/traffic_light_fine_detector/)
+[traffic_light_fine_detector](https://autowarefoundation.github.io/autoware.universe/main/perception/autoware_traffic_light_fine_detector/)
page for more information.
If you don't want to use the traffic light classifier, you can disable it:
diff --git a/docs/how-to-guides/integrating-autoware/launch-autoware/perception/index.md b/docs/how-to-guides/integrating-autoware/launch-autoware/perception/index.md
index 30b2411187d..3052f04b783 100644
--- a/docs/how-to-guides/integrating-autoware/launch-autoware/perception/index.md
+++ b/docs/how-to-guides/integrating-autoware/launch-autoware/perception/index.md
@@ -37,7 +37,7 @@ that we want
to change it since `tier4_perception_component.launch.xml` is the top-level launch file of other perception launch files.
Here are some predefined perception launch arguments:
-- **`occupancy_grid_map_method:`** This argument determines the occupancy grid map method for perception stack. Please check [probabilistic_occupancy_grid_map](https://autowarefoundation.github.io/autoware.universe/main/perception/probabilistic_occupancy_grid_map/) package for detailed information.
+- **`occupancy_grid_map_method:`** This argument determines the occupancy grid map method for the perception stack. Please check the [probabilistic_occupancy_grid_map](https://autowarefoundation.github.io/autoware.universe/main/perception/autoware_probabilistic_occupancy_grid_map/) package for detailed information.
The default probabilistic occupancy grid map method is `pointcloud_based_occupancy_grid_map`.
If you want to change it to `laserscan_based_occupancy_grid_map`, you can change it here:
@@ -47,7 +47,7 @@ Here are some predefined perception launch arguments:
```
- **`detected_objects_filter_method:`** This argument determines the filter method for detected objects.
- Please check [detected_object_validation](https://autowarefoundation.github.io/autoware.universe/main/perception/detected_object_validation/) package for detailed information about lanelet and position filter.
+ Please check [detected_object_validation](https://autowarefoundation.github.io/autoware.universe/main/perception/autoware_detected_object_validation/) package for detailed information about lanelet and position filter.
The default detected object filter method is `lanelet_filter`.
If you want to change it to `position_filter`, you can change it here:
@@ -57,7 +57,7 @@ Here are some predefined perception launch arguments:
```
- **`detected_objects_validation_method:`** This argument determines the validation method for detected objects.
- Please check [detected_object_validation](https://autowarefoundation.github.io/autoware.universe/main/perception/detected_object_validation/) package for detailed information about validation methods.
+ Please check [detected_object_validation](https://autowarefoundation.github.io/autoware.universe/main/perception/autoware_detected_object_validation/) package for detailed information about validation methods.
The default detected object validation method is `obstacle_pointcloud`.
If you want to change it to `occupancy_grid`, you can change it here,
but remember that it requires `laserscan_based_occupancy_grid_map` as the `occupancy_grid_map_method`:
@@ -99,7 +99,7 @@ we will apply these changes `tier4_perception_component.launch.xml` instead of `
Here are some example changes for the perception pipeline:
- **`remove_unknown:`** This parameter determines whether unknown objects are removed during camera-lidar fusion.
- Please check [roi_cluster_fusion](https://github.com/autowarefoundation/autoware.universe/blob/main/perception/image_projection_based_fusion/docs/roi-cluster-fusion.md) node for detailed information.
+ Please check [roi_cluster_fusion](https://github.com/autowarefoundation/autoware.universe/blob/main/perception/autoware_image_projection_based_fusion/docs/roi-cluster-fusion.md) node for detailed information.
The default value is `true`.
If you want to change it to `false`,
you can add this argument to `tier4_perception_component.launch.xml`,
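As a sketch of how such arguments can be overridden from a higher-level Python launch file (the include path below is a placeholder, and each argument must actually be declared by the included launch file), something along these lines could be used:

```python
from launch import LaunchDescription
from launch.actions import IncludeLaunchDescription
from launch.launch_description_sources import AnyLaunchDescriptionSource


def generate_launch_description():
    # Placeholder path: point this at your workspace's copy of the launch file.
    perception_launch = IncludeLaunchDescription(
        AnyLaunchDescriptionSource(
            "/path/to/autoware_launch/launch/components/tier4_perception_component.launch.xml"
        ),
        launch_arguments={
            # Argument names come from the section above; the values are the
            # non-default alternatives it mentions.
            "occupancy_grid_map_method": "laserscan_based_occupancy_grid_map",
            "detected_objects_filter_method": "position_filter",
            "detected_objects_validation_method": "occupancy_grid",
            # "remove_unknown": "false",  # only valid after adding this
            #                             # argument to the launch file as
            #                             # described above
        }.items(),
    )
    return LaunchDescription([perception_launch])
```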
diff --git a/docs/how-to-guides/others/running-autoware-without-cuda.md b/docs/how-to-guides/others/running-autoware-without-cuda.md
index 72bb8804f43..294eb4626c8 100644
--- a/docs/how-to-guides/others/running-autoware-without-cuda.md
+++ b/docs/how-to-guides/others/running-autoware-without-cuda.md
@@ -13,7 +13,7 @@ Autoware Universe's object detection can be run using one of five possible confi
- `lidar-centerpoint` + `tensorrt_yolo`
- `euclidean_cluster`
-Of these five configurations, only the last one (`euclidean_cluster`) can be run without CUDA. For more details, refer to the [`euclidean_cluster` module's README file](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/euclidean_cluster).
+Of these five configurations, only the last one (`euclidean_cluster`) can be run without CUDA. For more details, refer to the [`euclidean_cluster` module's README file](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/autoware_euclidean_cluster).
## Running traffic light detection without CUDA
diff --git a/docs/how-to-guides/training-machine-learning-models/training-models.md b/docs/how-to-guides/training-machine-learning-models/training-models.md
index 4f9e55cfef5..22b87787b64 100644
--- a/docs/how-to-guides/training-machine-learning-models/training-models.md
+++ b/docs/how-to-guides/training-machine-learning-models/training-models.md
@@ -26,14 +26,14 @@ the readme file accompanying **"traffic_light_classifier"** package. These instr
the process of training the model using your own dataset. To facilitate your training, we have also provided
an example dataset containing three distinct classes (green, yellow, red), which you can leverage during the training process.
-Detailed instructions for training the traffic light classifier model can be found **[here](https://github.com/autowarefoundation/autoware.universe/blob/main/perception/traffic_light_classifier/README.md)**.
+Detailed instructions for training the traffic light classifier model can be found **[here](https://github.com/autowarefoundation/autoware.universe/blob/main/perception/autoware_traffic_light_classifier/README.md)**.
## Training CenterPoint 3D object detection model
The CenterPoint 3D object detection model within the Autoware has been trained using the **[autowarefoundation/mmdetection3d](https://github.com/autowarefoundation/mmdetection3d/blob/main/projects/AutowareCenterPoint/README.md)** repository.
To train custom CenterPoint models and convert them into ONNX format for deployment in Autoware, please refer to the instructions provided in the README file included with Autoware's
-**[lidar_centerpoint](https://autowarefoundation.github.io/autoware.universe/main/perception/lidar_centerpoint/)** package. These instructions will provide a step-by-step guide for training the CenterPoint model.
+**[lidar_centerpoint](https://autowarefoundation.github.io/autoware.universe/main/perception/autoware_lidar_centerpoint/)** package. These instructions will provide a step-by-step guide for training the CenterPoint model.
In order to assist you with your training process, we have also included an example dataset in the TIER IV dataset format.