
# datasets

## key words

dat.: dataset   |   cls.: classification   |   rel.: retrieval   |   seg.: segmentation
det.: detection   |   tra.: tracking   |   pos.: pose   |   dep.: depth
reg.: registration   |   rec.: reconstruction   |   aut.: autonomous driving
oth.: other, including normal-related tasks, correspondence, mapping, matching, alignment, compression, generative models, etc.

## popular datasets

  • [ModelNet] The Princeton ModelNet dataset. [cls.]
  • [ShapeNet] A collaborative dataset between researchers at Princeton, Stanford and TTIC. [seg.]
  • [S3DIS] The Stanford Large-Scale 3D Indoor Spaces Dataset. [seg.]
  • [ScanNet] Richly-annotated 3D Reconstructions of Indoor Scenes. [cls. seg.]
  • [SUNRGB-D] 19 object categories for predicting a 3D bounding box in real-world dimensions. [det.]
  • [Large-Scale Point Cloud Classification Benchmark (ETH)] A large labelled 3D point cloud dataset of natural scenes with over 4 billion points in total. [cls.]
  • [Paris-Lille-3D] A large and high-quality ground truth urban point cloud dataset for automatic segmentation and classification. [cls. seg.]
  • [KITTI] The KITTI Vision Benchmark Suite. [det.]
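Several of the mesh datasets above (ModelNet in particular) ship their shapes in the plain-text OFF format. As a minimal sketch of what reading such a file involves, the pure-Python parser below handles the standard OFF header and face records; it is an illustrative example, not an official loader, and the fused-header branch covers a quirk some ModelNet files are known to have (`OFF` and the counts on one line):

```python
# Minimal OFF (Object File Format) parser, as used by ModelNet meshes.
# Illustrative sketch only; handles the common header layout and polygonal faces.

def parse_off(text):
    """Return (vertices, faces) parsed from an OFF-format string."""
    lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
    if not lines[0].startswith("OFF"):
        raise ValueError("not an OFF file")
    # Some ModelNet files fuse the counts onto the header line, e.g. "OFF490 518 0"
    if lines[0] == "OFF":
        counts, body = lines[1], lines[2:]
    else:
        counts, body = lines[0][3:], lines[1:]
    n_verts, n_faces, _ = (int(x) for x in counts.split())
    vertices = [tuple(float(x) for x in body[i].split()) for i in range(n_verts)]
    faces = []
    for i in range(n_verts, n_verts + n_faces):
        fields = [int(x) for x in body[i].split()]
        faces.append(fields[1:1 + fields[0]])  # first field = vertex count of the face
    return vertices, faces

# A tiny single-triangle OFF string for demonstration:
sample = """OFF
3 1 0
0.0 0.0 0.0
1.0 0.0 0.0
0.0 1.0 0.0
3 0 1 2
"""
verts, faces = parse_off(sample)
print(len(verts), faces)  # prints: 3 [[0, 1, 2]]
```

From `(vertices, faces)` a point cloud is typically obtained by sampling points on the mesh surface.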

## other datasets

  • [PartNet] The PartNet dataset provides fine grained part annotation of objects in ShapeNetCore. [seg.]

  • [PartNet] PartNet benchmark from Nanjing University and National University of Defense Technology. [seg.]

  • [Stanford 3D] The Stanford 3D Scanning Repository. [reg.]

  • [UWA Dataset] [cls. seg. reg.]

  • [Princeton Shape Benchmark] The Princeton Shape Benchmark.

  • [SYDNEY URBAN OBJECTS DATASET] This dataset contains a variety of common urban road objects scanned with a Velodyne HDL-64E LIDAR, collected in the CBD of Sydney, Australia. There are 631 individual scans of objects across classes of vehicles, pedestrians, signs and trees. [cls. match.]

  • [ASL Datasets Repository (ETH)] This site is dedicated to providing datasets for the robotics community, with the aim of facilitating result evaluations and comparisons. [cls. match. reg. det.]

  • [Canadian Planetary Emulation Terrain 3D Mapping Dataset] A collection of three-dimensional laser scans gathered at two unique planetary analogue rover test facilities in Canada.

  • [Radish] The Robotics Data Set Repository (Radish for short) provides a collection of standard robotics data sets.

  • [IQmulus & TerraMobilita Contest] The database contains 3D MLS data from a dense urban environment in Paris (France), composed of 300 million points. The acquisition was made in January 2013. [cls. seg. det.]

  • [Oakland 3-D Point Cloud Dataset] This repository contains labeled 3-D point cloud laser data collected from a moving platform in an urban environment.

  • [Robotic 3D Scan Repository] This repository provides 3D point clouds from robotic experiments, log files of robot runs, and standard 3D data sets for the robotics community.

  • [Ford Campus Vision and Lidar Data Set] The dataset is collected by an autonomous ground vehicle testbed, based upon a modified Ford F-250 pickup truck.

  • [The Stanford Track Collection] This dataset contains about 14,000 labeled tracks of objects as observed in natural street scenes by a Velodyne HDL-64E S2 LIDAR.

  • [PASCAL3D+] Beyond PASCAL: A Benchmark for 3D Object Detection in the Wild. [pos. det.]

  • [3D MNIST] The aim of this dataset is to provide a simple way to get started with 3D computer vision problems such as 3D shape recognition. [cls.]

  • [WAD] This dataset is provided by Baidu Inc. for the CVPR Workshop on Autonomous Driving challenges. [aut.]

  • [nuScenes] The nuScenes dataset is a large-scale autonomous driving dataset. [aut.]

  • [PreSIL] Depth information, semantic segmentation (images), point-wise segmentation (point clouds), ground point labels (point clouds), and detailed annotations for all vehicles and people. [paper] [det. aut.]

  • [3D Match] Keypoint Matching Benchmark, Geometric Registration Benchmark, RGB-D Reconstruction Datasets. [reg. rec. oth.]

  • [BLVD] (a) 3D detection, (b) 4D tracking, (c) 5D interactive event recognition and (d) 5D intention prediction. [ICRA 2019 paper] [det. tra. aut. oth.]

  • [PedX] 3D pose estimation of pedestrians: more than 5,000 pairs of high-resolution (12MP) stereo images and LiDAR data, along with 2D and 3D labels of pedestrians. [ICRA 2019 paper] [pos. aut.]

  • [H3D] Full-surround 3D multi-object detection and tracking dataset. [ICRA 2019 paper] [det. tra. aut.]

  • [Argoverse BY ARGO AI] Two public datasets (3D Tracking and Motion Forecasting) supported by highly detailed maps to test, experiment, and teach self-driving vehicles how to understand the world around them. [CVPR 2019 paper] [tra. aut.]

  • [Matterport3D] RGB-D: 10,800 panoramic views from 194,400 RGB-D images. Annotations: surface reconstructions, camera poses, and 2D and 3D semantic segmentations. Keypoint matching, view overlap prediction, normal prediction from color, semantic segmentation, and scene classification. [3DV 2017 paper] [code] [blog]

  • [SynthCity] SynthCity is a 367.9M-point synthetic full-colour Mobile Laser Scanning point cloud with nine categories. [seg. aut.]

  • [Lyft Level 5] Includes high-quality, human-labelled 3D bounding boxes of traffic agents and an underlying HD spatial semantic map. [det. seg. aut.]

  • [SemanticKITTI] Sequential semantic segmentation, 28 classes, for autonomous driving. All KITTI odometry sequences are labeled. [ICCV 2019 paper] [seg. oth. aut.]

  • [The Waymo Open Dataset] The Waymo Open Dataset comprises high-resolution sensor data collected by Waymo self-driving cars in a wide variety of conditions. [det.]

  • [A*3D] A*3D: An Autonomous Driving Dataset in Challenging Environments. [det.]
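The LiDAR-based entries above that follow the KITTI convention (KITTI itself, and SemanticKITTI's relabeled odometry sequences) store each Velodyne scan as a raw little-endian binary file of float32 `(x, y, z, reflectance)` records. A sketch of a loader, demonstrated on a synthetic two-point scan rather than a real download:

```python
import os
import tempfile

import numpy as np

def load_velodyne_bin(path):
    """Return an (N, 4) float32 array of x, y, z, reflectance from a KITTI-style .bin file."""
    scan = np.fromfile(path, dtype=np.float32)
    return scan.reshape(-1, 4)

# Round-trip a synthetic 2-point scan (stand-in for a real KITTI file):
pts = np.array([[1.0, 2.0, 3.0, 0.5],
                [4.0, 5.0, 6.0, 0.9]], dtype=np.float32)
path = os.path.join(tempfile.gettempdir(), "demo_scan.bin")
pts.tofile(path)                 # write raw float32 records, KITTI-style
loaded = load_velodyne_bin(path)
print(loaded.shape)              # prints: (2, 4)
```

SemanticKITTI keeps its per-point labels in separate `.label` files; only the point-cloud layout is sketched here.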

## references