Seafloor Reconstruction Use Case
This use case describes a data processing workflow in which optical image data is acquired by diving robots, the image data is processed in-situ and ex-situ, photogrammetric reconstructions are calculated, and individual munition objects are detected and identified in the resulting 3D models. All data and data products are published according to the FAIR principles (Findable, Accessible, Interoperable, Reusable).
A geographical seafloor target area for surveying is identified by a human expert. A mobile sensor carrier platform capable of acquiring and processing optical images is selected for deployment in that area. Before the deployment, a human operator creates the preliminary image metadata required for image FAIR Digital Objects (iFDOs, see https://www.marine-imaging.com/fair).

Upon deployment, the camera system acquires RGB data and transfers the images to the onboard data processing unit (DPU), which runs a minimum viable marispace instance. Each image is assigned a unique ID (a random UUID) that is written into the metadata header of the image file. The image filename is unique and encodes information about the deployment (e.g. deployment ID), the camera hardware used (e.g. camera ID), and the date and time of acquisition. The image is then written to disk. A process on the DPU takes an image file path as input and creates an ASCII file containing salient key points within the image; both free and proprietary software products exist for this step, such as "Metashape".

After the deployment, a human operator downloads the image data and the created key point data from the carrier platform and transfers both to a fog computing component, which also runs a marispace instance. A process takes the image and key point data and determines pair-wise matches to identify metric camera observation offsets between images. A subsequent process creates a dense 3D point cloud from these correlations and from all remaining image pixel information beyond the key points. A final process creates a contiguous surface by interpolating between the dense 3D points. In parallel, the iFDO metadata file is assembled from the prepared metadata, the acquired image data, and additional metadata provided by other sensors, such as navigation data.
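The per-image ID and filename conventions above can be sketched as follows. This is a minimal illustration, not the platform's actual implementation: the deployment and camera IDs are invented examples, the field names are illustrative rather than a fixed iFDO schema, and the real workflow writes the UUID into the image file's metadata header, whereas this sketch returns it alongside the filename.

```python
import uuid
from datetime import datetime, timezone

def make_image_record(deployment_id: str, camera_id: str,
                      acquired_at: datetime) -> dict:
    """Build the unique filename and per-image metadata for one image.

    The filename encodes deployment ID, camera ID, and acquisition
    time; the UUID is generated randomly, one per image.
    """
    image_uuid = str(uuid.uuid4())  # random UUID, unique per image
    timestamp = acquired_at.strftime("%Y%m%d_%H%M%S")
    filename = f"{deployment_id}_{camera_id}_{timestamp}.jpg"
    return {"image-filename": filename, "image-uuid": image_uuid}

# Hypothetical deployment and camera IDs, for illustration only.
record = make_image_record("SO268-1_21-1", "GMR_CAM-23",
                          datetime(2024, 5, 3, 14, 7, 9,
                                   tzinfo=timezone.utc))
```

Because the UUID is random rather than derived from the content, re-processing the same frame never silently reuses an identifier; the filename alone remains human-interpretable.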
The image data is transferred to cloud storage to provide onshore access; this storage is login-protected. A persistent identifier (PID, e.g. a DOI) is registered for the entire image data set at a PID service. The iFDO file is transferred to cloud storage and published openly, and a PID is registered for it. The reconstructed 3D point cloud is transferred to login-protected cloud storage, and a PID is registered for this data product. The 3D surface data product is likewise transferred to cloud storage and assigned a PID.

A cloud-based service provides a process that takes a 3D point cloud as input and runs a detection step to identify parts of the 3D structure that protrude from an otherwise homogeneous surface, producing a list of candidate 3D positions (detections). Another cloud-based service takes a 3D point cloud and the candidate positions and compares each candidate against a database of known 3D shapes of munition objects, yielding an identification or a rejection of the candidate. Connections between cloud-based processes are made through PIDs. Finally, another cloud-based marispace runs a login-protected dashboard that visualizes the image data, image metadata, 3D point clouds, and 3D surface data, as well as the candidates and identifications.
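The protrusion detection step can be sketched in simplified form. This is a deliberately minimal stand-in for the cloud detection service, under the assumption that the "otherwise homogeneous surface" can be approximated by the median point height; the function name, the sample point cloud, and the threshold value are all illustrative. A real detector would fit local surfaces and cluster neighbouring hits into single detections.

```python
from statistics import median

def detect_protrusions(points, threshold=0.15):
    """Flag 3D points that stand out from an otherwise flat seafloor.

    The background surface is approximated by the median z value;
    any point more than `threshold` metres above it is returned as
    a candidate 3D position (a detection).
    """
    base_z = median(z for _, _, z in points)
    return [(x, y, z) for x, y, z in points if z - base_z > threshold]

# Tiny illustrative cloud: a mostly flat patch with one raised point.
cloud = [(0.0, 0.0, 0.00), (0.5, 0.0, 0.02), (1.0, 0.0, -0.01),
         (0.5, 0.5, 0.30),  # protruding object, e.g. a munition casing
         (1.0, 0.5, 0.01)]
candidates = detect_protrusions(cloud)  # → [(0.5, 0.5, 0.3)]
```

The candidate positions produced here correspond to the detection list that the second cloud service would then compare against the database of known munition shapes.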