If you want to add anything to this repository, please open a PR or email [email protected].
Dataset | Sensors | Radar Data | Source | Annotations | URL | Other |
---|---|---|---|---|---|---|
Astyx | 4D Radar, LiDAR, Camera | PC | 19'EuRAD | 3D bbox | github paper | ~500 frames |
RADIal | 4D Radar, LiDAR, Camera | PC, ADC, RT | 22'CVPR | 2D bbox, seg | github paper | 8,252 labeled frames |
View-of-Delft (VoD) | 4D Radar, LiDAR, Stereo Camera | PC | 22'RA-L | 3D bbox | website | 8,693 frames |
TJ4DRadSet | 4D Radar, LiDAR, Camera, GNSS | PC | 22'ITSC | 3D bbox, TrackID | github paper | 7,757 frames |
K-Radar | 4D Radar, LiDAR, Stereo Camera, RTK-GPS | RT | 22'NeurIPS | 3D bbox, TrackID | github paper | 35K frames; 360° camera |
Dual Radar | Dual 4D Radars, LiDAR, Camera | PC | 23'arXiv | 3D bbox, TrackID | paper | 10K frames |
L-RadSet | 4D Radar, LiDAR, 3 Cameras | PC | 24'TIV | 3D bbox, TrackID | github paper | 11.2K frames; annotations out to 220 m |
ZJUODset | 4D Radar, LiDAR, Camera | PC | 23'ICVISP | 3D bbox, 2D bbox | github paper | 19,000 raw frames, 3,800 annotated frames |
CMD | 32-beam LiDAR, 128-beam LiDAR, solid-state LiDAR, 4D Radar, 3 Cameras | PC | 24'ECCV | 3D bbox | github paper | 50 high-quality 20-second sequences (200 frames per sensor each) |
V2X-R | 4D Radar, LiDAR, Camera (simulated) | PC | 24'arXiv | 3D bbox | github paper | 12,079 scenarios; 37,727 LiDAR and 4D radar point cloud frames; 150,908 images |
OmniHD-Scenes | Six 4D Radars, LiDAR, 6 Cameras, IMU | PC | 24'arXiv | 3D bbox, TrackID, OCC | website paper | 450K+ synchronized frames |
- RPFA-Net: a 4D RaDAR Pillar Feature Attention Network for 3D Object Detection (21'ITSC)
- Multi-class road user detection with 3+1D radar in the View-of-Delft dataset (22'RA-L)
- 🔗Link: paper
- 🏫Affiliation:
- 📁Dataset: VoD
- 📖Note: baseline of VoD
- SMURF: Spatial multi-representation fusion for 3D object detection with 4D imaging radar (23'TIV)
- 🔗Link: paper
- 🏫Affiliation: Beihang University (Bing Zhu)
- 📁Dataset: VoD, TJ4DRadSet
- 📖Note:
- PillarDAN: Pillar-based Dual Attention Network for 3D Object Detection with 4D RaDAR (23'ITSC)
- 🔗Link: paper
- 🏫Affiliation: Shanghai Jiao Tong University (Lin Yang)
- 📁Dataset: Astyx
- 📖Note:
- MVFAN: Multi-view Feature Assisted Network for 4D Radar Object Detection (23'ICONIP)
- 🔗Link: paper
- 🏫Affiliation: Nanyang Technological University
- 📁Dataset: Astyx, VoD
- 📖Note:
- SMIFormer: Learning Spatial Feature Representation for 3D Object Detection from 4D Imaging Radar via Multi-View Interactive Transformers (23'Sensors)
- 🔗Link: paper
- 🏫Affiliation: Tongji University
- 📁Dataset: VoD
- 📖Note:
- 3-D Object Detection for Multiframe 4-D Automotive Millimeter-Wave Radar Point Cloud (23'IEEE Sensors Journal)
- 🔗Link: paper
- 🏫Affiliation: Tongji University (Zhixiong Ma)
- 📁Dataset: TJ4DRadSet
- 📖Note:
- RMSA-Net: A 4D Radar Based Multi-Scale Attention Network for 3D Object Detection (23'ISCSIC)
- 🔗Link: paper
- 🏫Affiliation: Nanjing University of Aeronautics and Astronautics (Jie Hao)
- 📁Dataset: HR4D (self-collected and not open source)
- 📖Note:
- RadarPillars: Efficient Object Detection from 4D Radar Point Clouds (24'arXiv)
- 🔗Link: paper
- 🏫Affiliation: Mannheim University of Applied Sciences, Germany
- 📁Dataset: VoD
- 📖Note:
- VA-Net: 3D Object Detection with 4D Radar Based on Self-Attention (24'CVDL)
- 🔗Link: paper
- 🏫Affiliation: Hunan Normal University (Bo Yang)
- 📁Dataset: VoD
- 📖Note:
- RTNH+: Enhanced 4D Radar Object Detection Network using Two-Level Preprocessing and Vertical Encoding (24'TIV)
- RaTrack: Moving Object Detection and Tracking with 4D Radar Point Cloud (24'ICRA)
- 🔗Link: code
- 🏫Affiliation: Royal College of Art, University College London (Chris Xiaoxuan Lu)
- 📁Dataset: VoD
- 📖Note:
- Feature Fusion and Interaction Network for 3D Object Detection based on 4D Millimeter Wave Radars (24'CCC)
- 🔗Link: paper
- 🏫Affiliation: University of Science and Technology of China (Qiang Ling)
- 📁Dataset: VoD
- 📖Note:
- Sparsity-Robust Feature Fusion for Vulnerable Road-User Detection with 4D Radar (24'Applied Sciences)
- 🔗Link: paper
- 🏫Affiliation: Mannheim University of Applied Sciences (Oliver Wasenmüller)
- 📁Dataset: VoD
- 📖Note:
- Enhanced 3D Object Detection using 4D Radar and Vision Fusion with Segmentation Assistance (24'preprint)
- RadarPillarDet: Multi-Pillar Feature Fusion with 4D Millimeter-Wave Radar for 3D Object Detection (24'SAE Technical Paper)
- 🔗Link: paper
- 🏫Affiliation: Tongji University (Zhixiong Ma)
- 📁Dataset: VoD
- 📖Note:
- MUFASA: Multi-View Fusion and Adaptation Network with Spatial Awareness for Radar Object Detection (24'ICANN)
- 🔗Link: paper
- 🏫Affiliation: Technical University of Munich (Xiangyuan Peng)
- 📁Dataset: VoD, TJ4DRadSet
- 📖Note:
- Multi-Scale Pillars Fusion for 4D Radar Object Detection with Radar Data Enhancement (24'IEEE Sensors Journal)
- 🔗Link: paper
- 🏫Affiliation: Chinese Academy of Sciences (Zhe Zhang)
- 📁Dataset: VoD
- 📖Note:
- SCKD: Semi-Supervised Cross-Modality Knowledge Distillation for 4D Radar Object Detection (24'arXiv)
- 🔗Link: paper, code (unfilled project)
- 🏫Affiliation: Zhejiang University (Zhiyu Xiang)
- 📁Dataset: VoD, ZJUODset
- 📖Note: The teacher is a LiDAR-radar bi-modality fusion network, while the student is a radar-only network. Through effective knowledge distillation from the teacher, the student learns to extract sophisticated features from the radar input, boosting its detection performance (see the sketch below).
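A minimal sketch of this cross-modality feature distillation idea, assuming hypothetical BEV feature maps and a plain L2 objective; SCKD's actual losses and architecture differ.

```python
# Sketch of feature-level knowledge distillation (illustrative, not SCKD's code):
# a frozen LiDAR-radar fusion teacher supervises a radar-only student in BEV space.
import torch
import torch.nn.functional as F

def feature_distillation_loss(student_bev: torch.Tensor,
                              teacher_bev: torch.Tensor) -> torch.Tensor:
    # L2 distance to the teacher's features; detach() keeps the teacher frozen.
    return F.mse_loss(student_bev, teacher_bev.detach())

# Toy BEV feature maps of shape (batch, channels, H, W) -- shapes are assumptions.
student_bev = torch.randn(2, 64, 128, 128, requires_grad=True)  # radar-only student
teacher_bev = torch.randn(2, 64, 128, 128)                      # LiDAR-radar teacher
loss = feature_distillation_loss(student_bev, teacher_bev)
loss.backward()  # gradients flow into the student only
```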
- Towards Robust 3D Object Detection with LiDAR and 4D Radar Fusion in Various Weather Conditions (24'CVPR)
- CenterRadarNet: Joint 3D Object Detection and Tracking Framework using 4D FMCW Radar (24'ICIP)
- 🔗Link: paper
- 🏫Affiliation: University of Washington (Jen-Hao Cheng)
- 📁Dataset: K-Radar
- 📖Note:
- InterFusion: Interaction-based 4D Radar and LiDAR Fusion for 3D Object Detection (22'IROS)
- 🔗Link: paper
- 🏫Affiliation: Tsinghua University (Li Wang)
- 📁Dataset: Astyx
- 📖Note:
- Multi-Modal and Multi-Scale Fusion 3D Object Detection of 4D Radar and LiDAR for Autonomous Driving (23'TVT)
- 🔗Link: paper
- 🏫Affiliation: Tsinghua University (Li Wang)
- 📁Dataset: Astyx
- 📖Note:
- L4DR: LiDAR-4DRadar Fusion for Weather-Robust 3D Object Detection (24'arXiv)
- 🔗Link: paper
- 🏫Affiliation: Xiamen University
- 📁Dataset: VoD, K-Radar
- 📖Note: For the K-Radar dataset, the authors preprocess the 4D radar sparse tensor by keeping only the top 10,240 points with the highest power measurements (see the sketch below). This paper was submitted to 25'AAAI.
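A minimal sketch of this top-k-by-power selection, assuming a hypothetical (N, 5) point array whose last column stores power; K-Radar's actual sparse-tensor layout differs.

```python
# Keep only the k points with the highest power measurement (illustrative layout).
import numpy as np

def topk_by_power(points: np.ndarray, k: int = 10240) -> np.ndarray:
    """points: (N, 5) array of (x, y, z, Doppler, power), power in the last column."""
    if points.shape[0] <= k:
        return points
    idx = np.argpartition(points[:, -1], -k)[-k:]  # indices of the k largest powers
    return points[idx]

radar_points = np.random.rand(50_000, 5).astype(np.float32)
print(topk_by_power(radar_points).shape)  # (10240, 5)
```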
- Robust 3D Object Detection from LiDAR-Radar Point Clouds via Cross-Modal Feature Augmentation (24'ICRA)
- Traffic Object Detection for Autonomous Driving Fusing LiDAR and Pseudo 4D-Radar Under Bird’s-Eye-View (24'TITS)
- 🔗Link: paper
- 🏫Affiliation: Xi’an Jiaotong University (Yonghong Song)
- 📁Dataset: Astyx
- 📖Note:
- Fusing LiDAR and Radar with Pillars Attention for 3D Object Detection (24'ISAS)
- 🔗Link: paper
- 🏫Affiliation: Zhejiang University (Liang Liu)
- 📁Dataset: VoD
- 📖Note:
- RLNet: Adaptive Fusion of 4D Radar and Lidar for 3D Object Detection (24'ECCVW)
- 🔗Link: paper and reviews
- 🏫Affiliation: Zhejiang University (Zhiyu Xiang)
- 📁Dataset: ZJUODset
- 📖Note:
- LEROjD: Lidar Extended Radar-Only Object Detection (24'ECCV)
- V2X-R: Cooperative LiDAR-4D Radar Fusion for 3D Object Detection with Denoising Diffusion (24'arXiv)
- RCFusion: Fusing 4-D Radar and Camera With Bird’s-Eye View Features for 3-D Object Detection (23'TIM)
- 🔗Link: paper
- 🏫Affiliation: Tongji University (Zhixiong Ma)
- 📁Dataset: VoD, TJ4DRadSet
- 📖Note:
- GRC-Net: Fusing GAT-Based 4D Radar and Camera for 3D Object Detection (23'SAE Technical Paper)
- 🔗Link: paper
- 🏫Affiliation: Beijing Institute of Technology (Lili Fan)
- 📁Dataset: VoD
- 📖Note:
- LXL: LiDAR Excluded Lean 3D Object Detection with 4D Imaging Radar and Camera Fusion (24'TIV)
- 🔗Link: paper
- 🏫Affiliation: Beihang University (Bing Zhu)
- 📁Dataset: VoD, TJ4DRadSet
- 📖Note:
- TL-4DRCF: A Two-Level 4-D Radar–Camera Fusion Method for Object Detection in Adverse Weather (24'IEEE Sensors Journal)
- 🔗Link: paper
- 🏫Affiliation: South China University of Technology (Kai Wu)
- 📁Dataset: VoD
- 📖Note: In addition to VoD, the LiDAR point clouds and images of the VoD dataset are processed with artificial fog to build a VoD-Fog dataset for validating the model.
- UniBEVFusion: Unified Radar-Vision BEVFusion for 3D Object Detection (24'arXiv)
- 🔗Link: paper
- 🏫Affiliation: Xi'an Jiaotong - Liverpool University
- 📁Dataset: VoD, TJ4DRadSet
- 📖Note:
- RCBEVDet: Radar-camera Fusion in Bird’s Eye View for 3D Object Detection (24'CVPR)
- 🔗Link: paper
- 🏫Affiliation: Peking University (Yongtao Wang)
- 📁Dataset: VoD
- 📖Note: covers not only 4D mmWave radar but also 3D radar datasets such as nuScenes
- MSSF: A 4D Radar and Camera Fusion Framework With Multi-Stage Sampling for 3D Object Detection in Autonomous Driving (24'arXiv)
- 🔗Link: paper
- 🏫Affiliation: University of Science and Technology of China (Jun Liu)
- 📁Dataset: VoD, TJ4DRadSet
- 📖Note:
- SGDet3D: Semantics and Geometry Fusion for 3D Object Detection Using 4D Radar and Camera (24'RA-L)
- ERC-Fusion: Fusing Enhanced 4D Radar and Camera for 3D Object Detection (24'DTPI)
- 🔗Link: paper
- 🏫Affiliation: Beijing Institute of Technology (Lili Fan)
- 📁Dataset: VoD
- 📖Note:
- HGSFusion: Radar-Camera Fusion with Hybrid Generation and Synchronization for 3D Object Detection (25'AAAI)
- LiDAR-based All-weather 3D Object Detection via Prompting and Distilling 4D Radar (24'ECCV)
- 🔗Link: paper, code (unfilled project)
- 🏫Affiliation: KAIST (Yujeong Chae)
- 📁Dataset: K-Radar
- 📖Note:
- Exploring Domain Shift on Radar-Based 3D Object Detection Amidst Diverse Environmental Conditions (24'ITSC)
- 🔗Link: paper
- 🏫Affiliation: Robert Bosch GmbH (Miao Zhang)
- 📁Dataset: K-Radar, Bosch-Radar
- 📖Note:
- 4D Millimeter-Wave Radar in Autonomous Driving: A Survey (23'arXiv)
- 🔗Link: paper
- 🏫Affiliation: Tsinghua University (Jianqiang Wang)
- 4D mmWave Radar for Autonomous Driving Perception: A Comprehensive Survey (24'TIV)
- 🔗Link: paper
- 🏫Affiliation: Beijing Institute of Technology (Lili Fan)
- A Survey of Deep Learning Based Radar and Vision Fusion for 3D Object Detection in Autonomous Driving (24'arXiv)
Waiting for updates...
3D object detection obtains the position, size, and orientation of objects in 3D space, and is widely used in autonomous driving perception, robot manipulation, and other applications. Sensors such as LiDAR, RGB cameras, and depth cameras are commonly used for the task. In recent years, several works have adopted 4D radar, also known as 4D millimeter-wave (mmWave) radar or 4D imaging radar, as a primary or secondary sensor for 3D object detection. Compared with 3D radar, 4D radar measures not only the distance, direction, and relative (Doppler) velocity of a target, but also its height. Thanks to its robustness in adverse weather and its lower cost, 4D radar is expected to replace low-beam LiDAR in the future. This repo summarizes 4D radar based 3D object detection methods and datasets.
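As a concrete illustration of the extra height dimension, here is a minimal sketch that converts a (range, azimuth, elevation) detection into a full 3D point; the angle convention (radians, z-up) is an assumption.

```python
# Convert radar measurements in spherical form to Cartesian coordinates.
import numpy as np

def radar_to_cartesian(rng: np.ndarray, azimuth: np.ndarray,
                       elevation: np.ndarray) -> np.ndarray:
    """rng in meters, azimuth/elevation in radians; returns (..., 3) xyz points."""
    x = rng * np.cos(elevation) * np.cos(azimuth)
    y = rng * np.cos(elevation) * np.sin(azimuth)
    z = rng * np.sin(elevation)  # the height component a 3D radar cannot resolve
    return np.stack([x, y, z], axis=-1)

# One detection 20 m away, slightly left of and above the boresight.
print(radar_to_cartesian(np.array([20.0]), np.array([0.1]), np.array([0.05])))
```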
- PC: Point Cloud
- ADC: Analog-to-Digital Converter signal
- RT: Radar Tensor (includes the Range-Azimuth-Doppler Tensor, Range-Azimuth Tensor, and Range-Doppler Tensor)
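For intuition, a minimal sketch of what these three data levels might look like as arrays; every dimension below is an illustrative assumption, not any dataset's actual format. A typical FMCW processing chain goes ADC → (range/Doppler/angle FFTs) → RT → (CFAR thresholding) → PC.

```python
# Illustrative shapes for the three radar data levels (all dimensions assumed).
import numpy as np

adc = np.zeros((4, 128, 256), dtype=np.complex64)  # ADC: (Rx antennas, chirps, samples per chirp)
rt  = np.zeros((256, 64, 128), dtype=np.float32)   # RT: Range-Azimuth-Doppler power tensor
pc  = np.zeros((1024, 6), dtype=np.float32)        # PC: N points x (x, y, z, Doppler, power, RCS)
```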
- Li Wang (Postdoctoral Fellow) and his co-leader Xinyu Zhang @Tsinghua University, authors of Dual Radar
- Bing Zhu @Beihang University
- Lin Yang @Shanghai Jiao Tong University
- Chris Xiaoxuan Lu @University College London (UCL)
- Zhixiong Ma @Chinese Institute for Brain Research (formerly Tongji University), the author of TJ4DRadSet Dataset and OmniHD-Scenes Dataset
- Zhiyu Xiang @Zhejiang University, the author of ZJUODset Dataset
- Yujeong Chae and his PhD advisor Kuk-Jin Yoon @Korea Advanced Institute of Science and Technology (KAIST)
- Lili Fan @Beijing Institute of Technology
- Chenglu Wen @Xiamen University, the author of CMD Dataset and V2X-R Dataset