# Awesome-3D-Detection-with-4D-Radar

A Collection of Works Related to 3D Object Detection with 4D mmWave Radar

If you want to add anything to this repository, please create a PR or email [email protected].


## Overview

## Datasets

| Dataset | Sensors | Radar Data | Source | Annotations | URL | Other |
| --- | --- | --- | --- | --- | --- | --- |
| Astyx | 4D Radar, LiDAR, Camera | PC | 19'EuRAD | 3D bbox | github, paper | ~500 frames |
| RADIal | 4D Radar, LiDAR, Camera | PC, ADC, RT | 22'CVPR | 2D bbox, seg | github, paper | 8,252 labeled frames |
| View-of-Delft (VoD) | 4D Radar, LiDAR, Stereo Camera | PC | 22'RA-L | 3D bbox | website | 8,693 frames |
| TJ4DRadSet | 4D Radar, LiDAR, Camera, GNSS | PC | 22'ITSC | 3D bbox, TrackID | github, paper | 7,757 frames |
| K-Radar | 4D Radar, LiDAR, Stereo Camera, RTK-GPS | RT | 22'NeurIPS | 3D bbox, TrackID | github, paper | 35K frames; 360° camera |
| Dual Radar | dual 4D Radars, LiDAR, Camera | PC | 23'arXiv | 3D bbox, TrackID | paper | 10K frames |
| L-RadSet | 4D Radar, LiDAR, 3 Cameras | PC | 24'TIV | 3D bbox, TrackID | github, paper | 11.2K frames; annotations out to 220 m |
| ZJUODset | 4D Radar, LiDAR, Camera | PC | 23'ICVISP | 3D bbox, 2D bbox | github, paper | 19,000 raw frames, 3,800 annotated frames |
| CMD | 32-beam LiDAR, 128-beam LiDAR, solid-state LiDAR, 4D Radar, 3 Cameras | PC | 24'ECCV | 3D bbox | github, paper | 50 high-quality sequences, each spanning 20 s (200 frames per sensor) |
| V2X-R | 4D Radar, LiDAR, Camera (simulated) | PC | 24'arXiv | 3D bbox | github, paper | 12,079 scenarios; 37,727 LiDAR and 4D radar point-cloud frames; 150,908 images |
| OmniHD-Scenes | 6 4D Radars, LiDAR, 6 Cameras, IMU | PC | 24'arXiv | 3D bbox, TrackID, OCC | website, paper | more than 450K synchronized frames |

## SOTA Papers

### From 4D Radar Point Cloud

  1. RPFA-Net: a 4D RaDAR Pillar Feature Attention Network for 3D Object Detection (21'ITSC)
    • 🔗Link: paper code
    • 🏫Affiliation: Tsinghua University (Xinyu Zhang)
    • 📁Dataset: Astyx
    • 📖Note:
  2. Multi-class road user detection with 3+1D radar in the View-of-Delft dataset (22'RA-L)
    • 🔗Link: paper
    • 🏫Affiliation:
    • 📁Dataset: VoD
    • 📖Note: baseline of VoD
  3. SMURF: Spatial multi-representation fusion for 3D object detection with 4D imaging radar (23'TIV)
    • 🔗Link: paper
    • 🏫Affiliation: Beihang University (Bing Zhu)
    • 📁Dataset: VoD, TJ4DRadSet
    • 📖Note:
  4. PillarDAN: Pillar-based Dual Attention Network for 3D Object Detection with 4D RaDAR (23'ITSC)
    • 🔗Link: paper
    • 🏫Affiliation: Shanghai Jiao Tong University (Lin Yang)
    • 📁Dataset: Astyx
    • 📖Note:
  5. MVFAN: Multi-view Feature Assisted Network for 4D Radar Object Detection (23'ICONIP)
    • 🔗Link: paper
    • 🏫Affiliation: Nanyang Technological University
    • 📁Dataset: Astyx, VoD
    • 📖Note:
  6. SMIFormer: Learning Spatial Feature Representation for 3D Object Detection from 4D Imaging Radar via Multi-View Interactive Transformers (23'Sensors)
    • 🔗Link: paper
    • 🏫Affiliation: Tongji University
    • 📁Dataset: VoD
    • 📖Note:
  7. 3-D Object Detection for Multiframe 4-D Automotive Millimeter-Wave Radar Point Cloud (23'IEEE Sensors Journal)
    • 🔗Link: paper
    • 🏫Affiliation: Tongji University (Zhixiong Ma)
    • 📁Dataset: TJ4DRadSet
    • 📖Note:
  8. RMSA-Net: A 4D Radar Based Multi-Scale Attention Network for 3D Object Detection (23'ISCSIC)
    • 🔗Link: paper
    • 🏫Affiliation: Nanjing University of Aeronautics and Astronautics (Jie Hao)
    • 📁Dataset: HR4D (self-collected and not open source)
    • 📖Note:
  9. RadarPillars: Efficient Object Detection from 4D Radar Point Clouds (24'arXiv)
    • 🔗Link: paper
    • 🏫Affiliation: Mannheim University of Applied Sciences, Germany
    • 📁Dataset: VoD
    • 📖Note:
  10. VA-Net: 3D Object Detection with 4D Radar Based on Self-Attention (24'CVDL)
    • 🔗Link: paper
    • 🏫Affiliation: Hunan Normal University (Bo Yang)
    • 📁Dataset: VoD
    • 📖Note:
  11. RTNH+: Enhanced 4D Radar Object Detection Network using Two-Level Preprocessing and Vertical Encoding (24'TIV)
    • 🔗Link: code paper
    • 🏫Affiliation: KAIST (Seung-Hyun Kong)
    • 📁Dataset: K-Radar
    • 📖Note: The enhanced baseline of K-Radar.
  12. RaTrack: Moving Object Detection and Tracking with 4D Radar Point Cloud (24'ICRA)
    • 🔗Link: code
    • 🏫Affiliation: Royal College of Art, University College London (Chris Xiaoxuan Lu)
    • 📁Dataset: VoD
    • 📖Note:
  13. Feature Fusion and Interaction Network for 3D Object Detection based on 4D Millimeter Wave Radars (24'CCC)
    • 🔗Link: paper
    • 🏫Affiliation: University of Science and Technology of China (Qiang Ling)
    • 📁Dataset: VoD
    • 📖Note:
  14. Sparsity-Robust Feature Fusion for Vulnerable Road-User Detection with 4D Radar (24'Applied Sciences)
    • 🔗Link: paper
    • 🏫Affiliation: Mannheim University of Applied Sciences (Oliver Wasenmüller)
    • 📁Dataset: VoD
    • 📖Note:
  15. Enhanced 3D Object Detection using 4D Radar and Vision Fusion with Segmentation Assistance (24'preprint)
    • 🔗Link: paper code
    • 🏫Affiliation: Beijing Institute of Technology (Xuemei Chen)
    • 📁Dataset: VoD
    • 📖Note:
  16. RadarPillarDet: Multi-Pillar Feature Fusion with 4D Millimeter-Wave Radar for 3D Object Detection (24'SAE Technical Paper)
    • 🔗Link: paper
    • 🏫Affiliation: Tongji University (Zhixiong Ma)
    • 📁Dataset: VoD
    • 📖Note:
  17. MUFASA: Multi-View Fusion and Adaptation Network with Spatial Awareness for Radar Object Detection (24'ICANN)
    • 🔗Link: paper
    • 🏫Affiliation: Technical University of Munich (Xiangyuan Peng)
    • 📁Dataset: VoD, TJ4DRadSet
    • 📖Note:
  18. Multi-Scale Pillars Fusion for 4D Radar Object Detection with Radar Data Enhancement (24'IEEE Sensors Journal)
    • 🔗Link: paper
    • 🏫Affiliation: Chinese Academy of Sciences (Zhe Zhang)
    • 📁Dataset: VoD
    • 📖Note:
  19. SCKD: Semi-Supervised Cross-Modality Knowledge Distillation for 4D Radar Object Detection (24'arXiv)
    • 🔗Link: paper code (unfilled project)
    • 🏫Affiliation: Zhejiang University (Zhiyu Xiang)
    • 📁Dataset: VoD, ZJUODset
    • 📖Note: The teacher is a LiDAR-radar bi-modality fusion network, while the student is a radar-only network. Through effective knowledge distillation from the teacher, the student learns to extract sophisticated features from the radar input and boosts its detection performance.
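As a rough illustration of the cross-modality distillation idea in the SCKD entry above, a feature-mimicking loss between a fusion teacher and a radar-only student could be sketched as below. This is a minimal numpy sketch; the function name, the optional foreground mask, and the plain MSE objective are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def feature_mimic_loss(student_feat, teacher_feat, fg_mask=None):
    # Mean-squared error between the radar-only student's features and the
    # LiDAR-radar fusion teacher's features; optionally restricted to
    # foreground cells so that empty background does not dominate the loss.
    diff = (student_feat - teacher_feat) ** 2
    if fg_mask is not None:
        diff = diff[fg_mask]
    return float(diff.mean())

# Toy BEV feature maps of shape (channels, H, W):
teacher = np.ones((8, 4, 4))
student = np.zeros((8, 4, 4))
loss = feature_mimic_loss(student, teacher)  # 1.0 for these toy inputs
```

In practice such a term is added to the student's detection loss during training, and the teacher is frozen.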

### From 4D Radar Tensor

  1. Towards Robust 3D Object Detection with LiDAR and 4D Radar Fusion in Various Weather Conditions (24'CVPR)
    • 🔗Link: paper code
    • 🏫Affiliation: KAIST (Yujeong Chae)
    • 📁Dataset: K-Radar
    • 📖Note: This method takes LiDAR point cloud, 4D radar tensor (not point cloud) and image as input.
  2. CenterRadarNet: Joint 3D Object Detection and Tracking Framework using 4D FMCW Radar (24'ICIP)
    • 🔗Link: paper
    • 🏫Affiliation: University of Washington (Jen-Hao Cheng)
    • 📁Dataset: K-Radar
    • 📖Note:

### Fusion of 4D Radar & LiDAR

  1. InterFusion: Interaction-based 4D Radar and LiDAR Fusion for 3D Object Detection (22'IROS)

    • 🔗Link: paper
    • 🏫Affiliation: Tsinghua University (Li Wang)
    • 📁Dataset: Astyx
    • 📖Note:
  2. Multi-Modal and Multi-Scale Fusion 3D Object Detection of 4D Radar and LiDAR for Autonomous Driving (23'TVT)

    • 🔗Link: paper
    • 🏫Affiliation: Tsinghua University (Li Wang)
    • 📁Dataset: Astyx
    • 📖Note:
  3. L4DR: LiDAR-4DRadar Fusion for Weather-Robust 3D Object Detection (24'arXiv)

    • 🔗Link: paper
    • 🏫Affiliation: Xiamen University
    • 📁Dataset: VoD, K-Radar
    • 📖Note: For the K-Radar dataset, the 4D radar sparse tensor is preprocessed by selecting only the top 10,240 points with the highest power measurement. This paper was submitted to 25'AAAI.
  4. Robust 3D Object Detection from LiDAR-Radar Point Clouds via Cross-Modal Feature Augmentation (24'ICRA)

    • 🔗Link: paper code
    • 🏫Affiliation: University of Edinburgh, University College London (Chris Xiaoxuan Lu)
    • 📁Dataset: VoD
    • 📖Note:
  5. Traffic Object Detection for Autonomous Driving Fusing LiDAR and Pseudo 4D-Radar Under Bird’s-Eye-View (24'TITS)

    • 🔗Link: paper
    • 🏫Affiliation: Xi’an Jiaotong University (Yonghong Song)
    • 📁Dataset: Astyx
    • 📖Note:
  6. Fusing LiDAR and Radar with Pillars Attention for 3D Object Detection (24'International Symposium on Autonomous Systems (ISAS))

    • 🔗Link: paper
    • 🏫Affiliation: Zhejiang University (Liang Liu)
    • 📁Dataset: VoD
    • 📖Note:
  7. RLNet: Adaptive Fusion of 4D Radar and Lidar for 3D Object Detection (24'ECCVW)

    • 🔗Link: paper and reviews
    • 🏫Affiliation: Zhejiang University (Zhiyu Xiang)
    • 📁Dataset: ZJUODset
    • 📖Note:
  8. LEROjD: Lidar Extended Radar-Only Object Detection (24'ECCV)

    • 🔗Link: paper code
    • 🏫Affiliation: TU Dortmund University (Patrick Palmer, Martin Krüger)
    • 📁Dataset: VoD
    • 📖Note: "Although lidar should not be used during inference, it can aid the training of radar-only object detectors."
  9. V2X-R: Cooperative LiDAR-4D Radar Fusion for 3D Object Detection with Denoising Diffusion (24'arXiv)

    • 🔗Link: paper code
    • 🏫Affiliation: Xiamen University (Chenglu Wen)
    • 📁Dataset: V2X-R
    • 📖Note: baseline method of V2X-R Datasets
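The radar-tensor preprocessing mentioned in the L4DR entry above (keeping only the detections with the highest power) is a common way to thin a sparse tensor into a point set. A minimal numpy sketch, with the function name and array layout as assumptions:

```python
import numpy as np

def topk_by_power(points, power, k=10240):
    # Keep the k detections with the highest power measurement.
    # points: (N, D) array of radar points; power: (N,) array.
    if power.shape[0] <= k:
        return points
    idx = np.argpartition(power, -k)[-k:]  # unsorted indices of the top k
    return points[idx]

# Toy example: 6 points, keep the 3 strongest.
pts = np.arange(12, dtype=float).reshape(6, 2)
pw = np.array([0.1, 5.0, 0.3, 9.0, 0.2, 7.0])
strongest = topk_by_power(pts, pw, k=3)  # rows 1, 3 and 5 (in any order)
```

`np.argpartition` avoids a full sort, which matters when N is large and only the threshold matters, not the ordering.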

### Fusion of 4D Radar & RGB Camera

  1. RCFusion: Fusing 4-D Radar and Camera With Bird’s-Eye View Features for 3-D Object Detection (23'TIM)
    • 🔗Link: paper
    • 🏫Affiliation: Tongji University (Zhixiong Ma)
    • 📁Dataset: VoD, TJ4DRadSet
    • 📖Note:
  2. GRC-Net: Fusing GAT-Based 4D Radar and Camera for 3D Object Detection (23'SAE Technical Paper)
    • 🔗Link: paper
    • 🏫Affiliation: Beijing Institute of Technology (Lili Fan)
    • 📁Dataset: VoD
    • 📖Note:
  3. LXL: LiDAR Excluded Lean 3D Object Detection With 4D Imaging Radar and Camera Fusion (24'TIV)
    • 🔗Link: paper
    • 🏫Affiliation: Beihang University (Bing Zhu)
    • 📁Dataset: VoD, TJ4DRadSet
    • 📖Note:
  4. TL-4DRCF: A Two-Level 4-D Radar–Camera Fusion Method for Object Detection in Adverse Weather (24'IEEE Sensors Journal)
    • 🔗Link: paper
    • 🏫Affiliation: South China University of Technology (Kai Wu)
    • 📁Dataset: VoD
    • 📖Note: In addition to VoD, the LiDAR point clouds and images of the VoD dataset are processed with artificial fog to obtain a VoD-Fog dataset for validating the model.
  5. UniBEVFusion: Unified Radar-Vision BEVFusion for 3D Object Detection (24'arXiv)
    • 🔗Link: paper
    • 🏫Affiliation: Xi'an Jiaotong - Liverpool University
    • 📁Dataset: VoD, TJ4DRadSet
    • 📖Note:
  6. RCBEVDet: Radar-camera Fusion in Bird’s Eye View for 3D Object Detection (24'CVPR)
    • 🔗Link: paper
    • 🏫Affiliation: Peking University (Yongtao Wang)
    • 📁Dataset: VoD
    • 📖Note: covers not only 4D mmWave radar but also 3D radar datasets such as nuScenes
  7. MSSF: A 4D Radar and Camera Fusion Framework With Multi-Stage Sampling for 3D Object Detection in Autonomous Driving (24'arXiv)
    • 🔗Link: paper
    • 🏫Affiliation: University of Science and Technology of China (Jun Liu)
    • 📁Dataset: VoD, TJ4DRadSet
    • 📖Note:
  8. SGDet3D: Semantics and Geometry Fusion for 3D Object Detection Using 4D Radar and Camera (24'RA-L)
    • 🔗Link: paper code
    • 🏫Affiliation: Zhejiang University (Huiliang Shen)
    • 📁Dataset: VoD, TJ4DRadSet
    • 📖Note:
  9. ERC-Fusion: Fusing Enhanced 4D Radar and Camera for 3D Object Detection (24'DTPI)
    • 🔗Link: paper
    • 🏫Affiliation: Beijing Institute of Technology (Lili Fan)
    • 📁Dataset: VoD
    • 📖Note:
  10. HGSFusion: Radar-Camera Fusion with Hybrid Generation and Synchronization for 3D Object Detection (25'AAAI)
    • 🔗Link: paper code
    • 🏫Affiliation: Southeast University (Yan Huang)
    • 📁Dataset: VoD, TJ4DRadSet
    • 📖Note:

### Others

  1. LiDAR-based All-weather 3D Object Detection via Prompting and Distilling 4D Radar (24'ECCV)

  2. Exploring Domain Shift on Radar-Based 3D Object Detection Amidst Diverse Environmental Conditions (24'ITSC)

    • 🔗Link: paper
    • 🏫Affiliation: Robert Bosch GmbH (Miao Zhang)
    • 📁Dataset: K-Radar, Bosch-Radar
    • 📖Note:

## Survey Papers

  1. 4D Millimeter-Wave Radar in Autonomous Driving: A Survey (23'arXiv)

    • 🔗Link: paper
    • 🏫Affiliation: Tsinghua University (Jianqiang Wang)
  2. 4D mmWave Radar for Autonomous Driving Perception: A Comprehensive Survey (24'TIV)

    • 🔗Link: paper
    • 🏫Affiliation: Beijing Institute of Technology (Lili Fan)
  3. A Survey of Deep Learning Based Radar and Vision Fusion for 3D Object Detection in Autonomous Driving (24'arXiv)

    waiting for updates...

## Basic Knowledge

### What is 4D Radar?

3D object detection recovers the position, size, and orientation of objects in 3D space, and is widely used in autonomous-driving perception, robot manipulation, and other applications. Commonly used sensors include LiDAR, RGB cameras, and depth cameras. In recent years, a number of works have used 4D radar as a primary or secondary sensor for 3D object detection. 4D radar is also known as 4D millimeter-wave (mmWave) radar or 4D imaging radar. Compared with 3D radar, it measures not only the range, direction, and relative (Doppler) velocity of a target but also its height. Thanks to its robustness across weather conditions and its lower cost, 4D radar is expected to replace low-beam LiDAR in the future. This repo summarizes 4D-radar-based 3D object detection methods and datasets.
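To make the measurement model concrete: a single 4D radar detection can be written as (range, azimuth, elevation, Doppler velocity), and the first three convert to Cartesian coordinates as sketched below. This is a minimal sketch; the axis convention (x forward, y left, z up) is an assumption, and individual datasets define their own coordinate frames.

```python
import math

def radar_point_to_cartesian(rng, azimuth, elevation):
    # Convert range/azimuth/elevation (angles in radians) to x, y, z.
    # The fourth measurement, Doppler velocity, needs no conversion: it is
    # the target's velocity component along the line of sight.
    x = rng * math.cos(elevation) * math.cos(azimuth)
    y = rng * math.cos(elevation) * math.sin(azimuth)
    z = rng * math.sin(elevation)
    return x, y, z

# A target 50 m away, 10 degrees to the left, 2 degrees above the sensor:
x, y, z = radar_point_to_cartesian(50.0, math.radians(10), math.radians(2))
```

The elevation angle is exactly what a 3D radar lacks: without it, z cannot be recovered and detections collapse onto the ground plane.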

### Different 4D Radar Data Representations

  • PC: Point Cloud
  • ADC: Analog-to-Digital Converter signal
  • RT: Radar Tensor (includes the Range-Azimuth-Doppler tensor, Range-Azimuth tensor, and Range-Doppler tensor)
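The three representations differ mainly in how far along the signal-processing chain the data sits. The sketch below illustrates typical array layouts; every dimension here is invented for illustration, and real sizes vary per sensor and dataset.

```python
import numpy as np

# PC: a sparse list of detections, e.g. x, y, z, Doppler velocity, power/RCS.
num_points = 1024
point_cloud = np.zeros((num_points, 5), dtype=np.float32)

# RT: a dense tensor over discretized bins, here a Range-Azimuth-Doppler tensor.
num_range, num_azimuth, num_doppler = 256, 107, 64
radar_tensor = np.zeros((num_range, num_azimuth, num_doppler), dtype=np.float32)

# ADC: raw complex samples straight from the analog-to-digital converter,
# organized as chirps x receive channels x samples per chirp.
num_chirps, num_rx, num_samples = 128, 4, 512
adc_signal = np.zeros((num_chirps, num_rx, num_samples), dtype=np.complex64)
```

Roughly: FFTs over the ADC samples produce the radar tensor, and thresholding (e.g. CFAR) on the tensor produces the point cloud, so each step discards information but shrinks the data.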

### Representative researchers

  • Li Wang (Postdoctoral Fellow) and his co-leader Xinyu Zhang @Tsinghua University, authors of Dual Radar
  • Bing Zhu @Beihang University
  • Lin Yang @Shanghai Jiao Tong University
  • Chris Xiaoxuan Lu @University College London (UCL)
  • Zhixiong Ma @Chinese Institute for Brain Research (formerly Tongji University), the author of the TJ4DRadSet and OmniHD-Scenes datasets
  • Zhiyu Xiang @Zhejiang University, the author of ZJUODset Dataset
  • Yujeong Chae and his PhD Advisor Kuk-Jin Yoon @Korea Advanced Institute of Science and Technology (KAIST)
  • Lili Fan @Beijing Institute of Technology
  • Chenglu Wen @Xiamen University, the author of the CMD and V2X-R datasets
