
With the advancement of 3D point cloud segmentation and classification methods through machine learning, there is an increasing need for high-quality and extensive datasets. The effectiveness of training models relies heavily on having large, well-labeled datasets that encompass a broad range of point types, classes, and segmentation qualities. While there are numerous datasets available for image classification and segmentation, stereo vision, visual odometry, vehicle and pedestrian detection in videos, and other optical methods, there remains a significant gap in the availability of urban 3D point cloud datasets that are both segmented and classified.

Many of the available point cloud datasets have been published through initiatives by universities or companies aimed at testing and comparing alternative methodologies. However, these datasets often fall short when it comes to specific applications such as assessing pedestrian accessibility within urban environments.

This motivated us to develop the Victoriaville3D datasets. These datasets aim to fill the existing gap, facilitating research and development efforts to improve urban accessibility and ensure that all environments are navigable for everyone.

This research project focuses on leveraging mobile LiDAR data to extract environmental factors essential for evaluating pedestrian network accessibility. Currently, there are no databases specifically designed for training algorithms to assess accessibility for persons with reduced mobility (PRM). Objects of interest in urban environments, such as curb cuts, ramps, doors, and stairs, exhibit significant morphological variability and are often underrepresented in existing datasets.
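As a minimal sketch of how a labeled point cloud of this kind might be consumed for training or evaluation, the example below loads per-point class labels and extracts the accessibility-related classes. The file name, column layout (x, y, z, label), and class IDs are illustrative assumptions, not the actual Victoriaville3D format or label scheme.

```python
import numpy as np

# Hypothetical class IDs for accessibility-related objects;
# the real Victoriaville3D label scheme may differ.
CLASSES_OF_INTEREST = {
    1: "curb_cut",
    2: "ramp",
    3: "door",
    4: "stairs",
}

def load_labeled_cloud(path):
    """Load an ASCII point cloud assumed to store one point per line
    as: x y z label (whitespace-separated)."""
    data = np.loadtxt(path)
    points = data[:, :3]
    labels = data[:, 3].astype(int)
    return points, labels

def extract_classes(points, labels, class_ids):
    """Keep only the points whose label belongs to class_ids."""
    mask = np.isin(labels, list(class_ids))
    return points[mask], labels[mask]

if __name__ == "__main__":
    pts, lbls = load_labeled_cloud("sample_tile.txt")  # hypothetical file
    acc_pts, acc_lbls = extract_classes(pts, lbls, CLASSES_OF_INTEREST)
    for cid, name in CLASSES_OF_INTEREST.items():
        print(f"{name}: {(acc_lbls == cid).sum()} points")
```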
