Recent studies have applied deep learning algorithms to automatically extract rock traits (position, size, eccentricity, and orientation) from 2D orthorectified images constructed from unpiloted aircraft system (UAS) imagery. This repository introduces an offline pipeline for autonomous 3D rock detection, which can improve geometric analysis of geological features such as precariously balanced rocks (PBRs), rocky slopes, fluvial channels, and debris flows. We first obtain both orthomosaics and point clouds by running Structure-from-Motion (SfM) algorithms on UAS imagery. Using existing deep learning algorithms for 2D rock detection, individual rocks are identified and localized within 2D bounding boxes in the orthomosaics. Based on the georeferences of the 2D bounding boxes, 3D bounding boxes are generated to enclose individual 3D rocks in the point clouds. Within each 3D bounding box, we then apply point-cloud segmentation algorithms to categorize each point, so that the rock points can be extracted. This offline 3D rock detection pipeline, combining 2D rock detection and 3D rock segmentation, segments individual rocks in point clouds to obtain accurate 3D geometric properties. Additionally, because rocks and supporting surfaces are semantically categorized, rock basal contact information can also be extracted, which is critical for fragility studies of balanced rocks. Our 3D rock detection extends deep learning applications from 2D geomorphological features to 3D, supporting quantitative research in rock fragility, slope stability, and landscape evolution.
- torch-points3d: to install torch-points3d, you need to configure its requirements first.
- laspy:
pip3 install laspy
- rasterio, geopandas, rioxarray, and pyproj:
pip3 install rasterio geopandas rioxarray pyproj
The following data are needed to apply the 3D rock detection. The first two, the orthomosaic and the mesh model, are obtained from Structure-from-Motion software (e.g., Agisoft). They should use the WGS 84 coordinate reference system with a UTM projection; UTM zones can be looked up here: https://mangomap.com/robertyoung/maps/69585/what-utm-zone-am-i-in-#. The third, the point cloud, is subsampled from the mesh model; the subsampling can be done in CloudCompare. A quick way to check the coordinate reference systems is sketched after the list below.
- Orthomosaic: .tif with WGS 84 and UTM zone
- Mesh (with texture): .obj with WGS 84 and UTM zone (optional)
- Point cloud: .las, subsampled from .obj
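Before running the pipeline, it can help to confirm that the orthomosaic and the point cloud share the same UTM coordinate reference system. This is a minimal sketch, not part of the repository's notebooks; the file names `your_data.tif` and `your_data.las` follow the illustrative names used in the folder structures below.

```python
import rasterio
import laspy

# Check the orthomosaic CRS (should be WGS 84 / UTM, i.e. EPSG:326xx or 327xx).
with rasterio.open("your_data.tif") as src:
    print("Orthomosaic CRS:", src.crs)
    print("Orthomosaic bounds:", src.bounds)

# Check the point cloud header; the x/y ranges should fall inside the
# orthomosaic bounds if both datasets share the same UTM zone.
las = laspy.read("your_data.las")
print("Point count:", las.header.point_count)
print("X range:", las.header.mins[0], "-", las.header.maxs[0])
print("Y range:", las.header.mins[1], "-", las.header.maxs[1])
```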
- 2D rock detection in the orthomosaic => bounding boxes
- Use the detected bounding boxes to crop points from the point cloud => PBR point-cloud candidates (see the cropping sketch after this list)
- Classify the PBR point-cloud candidates => PBR point clouds
- Segment the PBR point clouds => segmented PBR point clouds
The objective of the third step is to reduce false detections from the first step (2D detection).
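As a rough illustration of the second step: because the point cloud and the orthomosaic share the same UTM coordinates, points inside a georeferenced 2D bounding box can be selected with a simple coordinate mask. This is a hedged sketch, not the repository's notebook code; the bounding box values and output file name are placeholders.

```python
import laspy
import numpy as np

las = laspy.read("your_data.las")

# Georeferenced 2D bounding box in UTM coordinates (placeholder values).
xmin, ymin, xmax, ymax = 432100.0, 3763200.0, 432110.0, 3763210.0

# Select points whose planimetric coordinates fall inside the box.
mask = (
    (las.x >= xmin) & (las.x <= xmax) &
    (las.y >= ymin) & (las.y <= ymax)
)

# Write the cropped candidate point cloud, reusing the original header
# so scales, offsets, and CRS are preserved.
cropped = laspy.LasData(las.header)
cropped.points = las.points[mask]
cropped.write("bbox_cropped_points.las")
print("Cropped", int(mask.sum()), "of", len(las.points), "points")
```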
2D part:
- UAS-SfM: obtain the required data from UAS-SfM processing and point-cloud subsampling.
- 2D annotation: annotate rocks on the orthomosaic and export a shapefile containing rock polygons and rock categories. Create a data folder under rock_detection_3d/notebooks/data/ (e.g., rock_detection_3d/notebooks/data/rocklas/) and lay out your data as follows:
rock_detection_3d/notebooks/data/rocklas/
annotation_shapefiles/your_annotation.shp ...
bbox_las/original_bbox_croped_points.las ...
bbox_las_annotation/annotated_bbox_cropped_points.las ...
your_data.tif
your_data.las
- generate tiles from shapefile (2D training dataset): use this notebook; it creates a folder of split tiles. Instances on tile boundaries are split across different tiles by our Tile Split algorithm.
- create training, validation, and test splits: refer to the create-your-own-dataset readme.md.
- train Mask R-CNN and conduct inference: refer to the notebook (a minimal torchvision Mask R-CNN setup is sketched after this list).
- instance registration: Refer to the notebook to create a shapefile of your inference results. With the shapefile, you can edit the prediction polygons as needed.
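For orientation only, a two-class (background + rock) Mask R-CNN can be instantiated with torchvision as sketched below. The notebook remains the reference implementation; the class count, learning rate, and helper name here are illustrative assumptions.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor


def build_rock_maskrcnn(num_classes=2):
    """Mask R-CNN with a COCO-pretrained backbone, re-headed for rock detection."""
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

    # Replace the box predictor for (background, rock).
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

    # Replace the mask predictor as well.
    in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask, 256, num_classes)
    return model


model = build_rock_maskrcnn()
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9, weight_decay=0.0005)
# In training mode, model(images, targets) returns a dict of losses to back-propagate.
```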
3D part:
- 2D bounding box extraction: georeferenced 2D bounding boxes are extracted from the above inference shapefile (or the original annotation shapefile). Refer to the notebook (see the bounding-box sketch after this list).
- 3D rock extraction: this step uses the above georeferenced 2D bounding boxes to crop individual 3D rock point clouds. Refer to the notebook.
- 3D annotation: the above rock point clouds include both pedestal and PBR points, so each point needs to be classified. The objective of this step is to annotate the points belonging to the rock of interest. Here is a tutorial on using CloudCompare for point annotation: https://www.youtube.com/watch?v=B61WNd7R_w4
- 3D point segmentation: before starting 3D point segmentation, prepare a torch-points3d dataset (see the create-your-own-dataset tutorial). Then refer to this notebook to train your model (a label-reading sketch follows this list).
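One simple way to obtain georeferenced boxes from the annotation or inference shapefile is the per-geometry bounds offered by geopandas, as sketched here; the repository's notebook is the reference, and the shapefile path follows the illustrative folder structure above.

```python
import geopandas as gpd

# Load the rock polygons (annotation or Mask R-CNN inference results).
gdf = gpd.read_file("annotation_shapefiles/your_annotation.shp")
print("CRS:", gdf.crs)

# Per-polygon axis-aligned bounding boxes in map (UTM) coordinates;
# columns are minx, miny, maxx, maxy.
bboxes = gdf.geometry.bounds
print(bboxes.head())

# Each row can then be used to crop the .las point cloud
# (see the cropping sketch in the pipeline overview above).
```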
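For the dataset preparation, the torch-points3d layout should follow the create-your-own-dataset tutorial. The sketch below only shows reading per-point coordinates and labels from an annotated .las file, assuming the annotation is stored in the LAS classification field; that storage choice is an assumption, not necessarily how the repository saves labels.

```python
import laspy
import numpy as np

# Read an annotated bounding-box point cloud.
las = laspy.read("bbox_las_annotation/annotated_bbox_cropped_points.las")

# Per-point coordinates, (N, 3), in UTM.
xyz = np.vstack((las.x, las.y, las.z)).T.astype(np.float32)

# Per-point labels, assumed to live in the LAS classification field
# (e.g., 0 = pedestal, 1 = rock); adjust to your annotation scheme.
labels = np.asarray(las.classification, dtype=np.int64)

print(xyz.shape, labels.shape, np.unique(labels))
# xyz and labels can then be packed into the torch-points3d dataset format
# described in the create-your-own-dataset tutorial.
```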
2D part:
- create a folder for your data with the following structure:
rock_detection_3d/notebooks/data/rocklas/
bbox_las_annotation/annotated_bbox_cropped_points.las ... (to be generated)
your_data.tif
your_data.las
- generate tiles from shapefile (2D inference dataset): use this notebook; it creates a folder of split tiles. Instances on tile boundaries are split across different tiles by our Tile Split algorithm.
- create a JSON with all inference images: refer to the create-your-own-dataset readme.md.
- conduct inference using the Mask R-CNN notebook (a minimal inference sketch is given after this list).
- instance registration: refer to the notebook to create a shapefile of your inference results. With the shapefile, you can edit the prediction polygons as needed (a mask-to-polygon sketch follows this list).
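A minimal inference sketch, assuming the model was built with the `build_rock_maskrcnn` helper from the training sketch above, that weights were saved to a file named `maskrcnn_rocks.pth`, and that tiles are exported as 8-bit RGB GeoTIFFs; all of these names are placeholders.

```python
import numpy as np
import rasterio
import torch

model = build_rock_maskrcnn()  # helper from the training sketch above
model.load_state_dict(torch.load("maskrcnn_rocks.pth", map_location="cpu"))
model.eval()

# Read one georeferenced tile (placeholder name) as a (3, H, W) tensor in [0, 1].
with rasterio.open("tiles/tile_0001.tif") as src:
    arr = src.read([1, 2, 3]).astype(np.float32) / 255.0
image = torch.from_numpy(arr)

with torch.no_grad():
    pred = model([image])[0]  # dict with 'boxes', 'labels', 'scores', 'masks'

keep = pred["scores"] > 0.5
print("Detected rocks in this tile:", int(keep.sum()))
```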
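For the instance registration step, one common way to turn predicted masks back into georeferenced polygons is to vectorize each binary mask with rasterio using the tile's affine transform and collect the results with geopandas. This is an illustrative sketch, not necessarily how the repository's notebook does it; it assumes `pred` from the inference sketch above and that the tile is a georeferenced GeoTIFF.

```python
import geopandas as gpd
import numpy as np
import rasterio
from rasterio import features
from shapely.geometry import shape

# Read the affine transform and CRS of the tile the prediction was made on.
with rasterio.open("tiles/tile_0001.tif") as src:
    tile_transform, tile_crs = src.transform, src.crs

polygons, scores = [], []
for mask, score in zip(pred["masks"], pred["scores"]):
    binary = (mask[0].numpy() > 0.5).astype(np.uint8)
    # Vectorize the binary mask; each connected region becomes a polygon in UTM coordinates.
    for geom, value in features.shapes(binary, mask=binary.astype(bool), transform=tile_transform):
        polygons.append(shape(geom))
        scores.append(float(score))

gdf = gpd.GeoDataFrame({"score": scores}, geometry=polygons, crs=tile_crs)
gdf.to_file("predicted_rocks.shp")
```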
3D part:
- 2D bounding box extraction: georeferenced 2D bounding boxes are extracted from the above inference shapefile (or the original annotation shapefile). Refer to the notebook.
- 3D rock extraction: this step uses the above georeferenced 2D bounding boxes to crop individual 3D rock point clouds. Refer to the notebook.
- 3D point segmentation: before starting 3D point segmentation, prepare a torch-points3d dataset (see the create-your-own-dataset tutorial). Then refer to this notebook to run inference with your model.
- implement weighted loss to focus on edge point segmentation
- try different optimizers
- synthetic rock segmentation data
- generate synthetic rocks -> rock point cloud
- generate synthetic terrains (background) -> pedestal point cloud
- randomize orientations of the rock point cloud and the pedestal point cloud
- place the rock point cloud on the pedestal point cloud
- merge two point clouds
- iteratively remove rock points that fall below the pedestal surface:
given a rock point (x, y, z), use (x, y) to find the pedestal point nearest in the horizontal plane; call it (x_j, y_j, z_j). Compare z and z_j, and remove the rock point if z < z_j (a sketch of this removal rule follows).
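A minimal sketch of this removal rule, using a 2D nearest-neighbor search over the pedestal points; scipy's cKDTree is an implementation choice here, not something prescribed by the repository, and the example points are random placeholders.

```python
import numpy as np
from scipy.spatial import cKDTree


def remove_points_below_pedestal(rock_xyz: np.ndarray, pedestal_xyz: np.ndarray) -> np.ndarray:
    """Drop rock points that lie below the nearest pedestal point in (x, y).

    rock_xyz, pedestal_xyz: (N, 3) and (M, 3) arrays of synthetic points.
    """
    # Nearest pedestal neighbor for every rock point, searched in the x-y plane only.
    tree = cKDTree(pedestal_xyz[:, :2])
    _, nearest = tree.query(rock_xyz[:, :2], k=1)

    # Keep a rock point only if its elevation is at or above the nearest pedestal point.
    keep = rock_xyz[:, 2] >= pedestal_xyz[nearest, 2]
    return rock_xyz[keep]


# Example with random synthetic points.
rng = np.random.default_rng(0)
pedestal = np.c_[rng.uniform(0, 10, (1000, 2)), rng.normal(0.0, 0.1, 1000)]
rock = np.c_[rng.uniform(4, 6, (500, 2)), rng.uniform(-0.5, 1.5, 500)]
print("kept", len(remove_points_below_pedestal(rock, pedestal)), "of", len(rock), "rock points")
```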