Commit

Fix dataset folder list (refresh more often + more recent at the top) + README
remmel committed Mar 13, 2021
1 parent e32bcaa commit 8b26677
Showing 8 changed files with 138 additions and 126 deletions.
129 changes: 23 additions & 106 deletions README.md
@@ -1,117 +1,34 @@
# HUAWEI AR Engine Demo
# 3D Recorder

This is a fork of [hms-AREngine-demo](https://github.com/HMS-Core/hms-AREngine-demo).
I added 2 features:
- Recorder: save the camera image and depth every x seconds
- Measure: display the distance (depth) of the object to the camera (center depth pixel)
3D Recorder allows you to save the RGB and depth images, along with their world poses (rotation and position), from your Huawei phone (with a ToF camera).

## Recorder

<img src="Recorder3D/src/test/resources/00000012_image.jpg" width="240" /> ![](Recorder3D/src/test/resources/00000012_depth.png) <img src="doc/plymeshlab.png" width="240" />

Every second, the RGB and depth images and their pose are saved in /storage/emulated/0/Android/data/com.remmel.recorder3d/files/

Features :
- Save depth (240x180 - fancy png, binary depth16)
- Save rgb images (3264x2448 - png + bin)
- Save poses in CSV [download](doc/poses.csv)
- Save rgbd ply [download](HwAREngineDemo/src/test/resources/00000012.ply) (TODO enable it from UI)
- Export to PNG Grayscale 16bits (tum; openCV: CV_16UC1) (TODO enable it from UI)

TODO:
- ply: get depth sensor intrinsics [#1083](https://github.com/google-ar/arcore-android-sdk/issues/1083) + extrinsics between the 2 cameras (try to get [arFrame.acquireSceneMesh()](https://developer.huawei.com/consumer/en/doc/development/HMSCore-References-V5/frame-0000001050121447-V5#EN-US_TOPIC_0000001050126786__section167911410271) ply/obj and compare with mine) [#638](https://github.com/google-ar/arcore-android-sdk/issues/638#issuecomment-438785104)
- merge PLY using poses via UI (only test currently)
- choose photo resolution
- save hi-res for only some key images, as it slows down the app
- try using a thread to save hi-res image
- create obj from ply
- project images on obj
- send images to server merge depth (TSDF - linux - CPU)
- https://github.com/tum-vision/fastfusion (PNG depth format TODO; reconstruction only)
- https://pcl.readthedocs.io/projects/tutorials/en/latest/using_kinfu_large_scale.html - Point Cloud Library
- OpenCV kinfu https://docs.opencv.org/master/d8/d1f/classcv_1_1kinfu_1_1KinFu.html https://github.com/microsoft/Azure-Kinect-Samples/tree/master/opencv-kinfu-samples
- https://github.com/PrimozLavric/MarchingCubes
- https://github.com/ros-industrial/yak
- https://github.com/andyzeng/tsdf-fusion / https://github.com/andyzeng/tsdf-fusion-python (CUDA needed?) (Ubuntu)
- https://github.com/Nerei/kinfu_remake
- https://github.com/personalrobotics/OpenChisel
- https://github.com/MikeSafonov/java-marching-cubes
- https://github.com/sdmiller/cpu_tsdf
- https://github.com/pedropro/OMG_Depth_Fusion

Camera Intrinsics (Honor View 20 - AR Engine : [ARCameraIntrinsics](https://developer.huawei.com/consumer/en/doc/HMSCore-References-V5/camera_intrinsics-0000001051140882-V5) + [ARCamera](https://developer.huawei.com/consumer/en/doc/development/HMSCore-Library-V5/camera-0000001050121437-V5) )
- principalPoint : 718.911 543.41327 // (cx, cy)
- imageDimensions : 1440 1080 //(width, height)
- distortions : 0.122519 -0.229927 0.144746 -6.96E-4 -4.39E-4
- focalLength : 1072.9441 1075.7474 //(fx, fy ) in px - Why 2 values ?
- getProjectionMatrix : 1.4902 0 0 0 / 0 3.0466316 0 0 / 0.001512431 0.009666785 -1.002002 -1 / 0 0 -0.2002002 0
Calculated (image is landscape)
- horizontal fov: Math.atan2(1440/2,1075.7473)*180/Math.PI*2=67.6
- vertical fov: Math.atan2(1080/2,1075.7473)*180/Math.PI*2=53.3
- diagonal : Math.sqrt(Math.pow(1080,2)+Math.pow(1440,2))=1800
- diafov: Math.atan2(1800/2,1075.7473)*180/Math.PI*2=79.8
- cx = width * (1 - proj[2][0]) / 2 = 1440*(1-0.001512431)/2 = 718.911
- cy = height * (1 - proj[2][1]) / 2 = 1080*(1-0.009666785)/2 = 534.780
- fx = width * proj[0][0] / 2 = 1440*1.4902/2=1072.944
- fy = height * proj[1][1] / 2 = 1080*3.0466316/2=1645.18 ?!? seems incorrect!

Camera Intrinsics (Huawei P20 Pro - AR Engine)
- principalPoint: 535.04553 729.8055 //in px
- imageDimensions : 1080 1440 //in px
- distortions : 0.093129 -0.187359 0.138948 1.34E-4 -4.29E-4
- focalLength : 1101.3862 1100.9385 //in px
This information can later be visualized in my [online viewer](https://remmel.github.com/image-processing-js/pose-viewer.html).

You can choose how often to save the images and the RGB resolution (up to 3968x2976). If you choose a small resolution (e.g. 1440x1080) you can create a [3d video](https://remy-mellet.com/image-processing-js/rgbds-viewer.html).
If you choose too big a resolution, the fps will drop and only a few images per second will be saved.

To get both depth and RGB :
- AR Engine : [ARFrame](https://developer.huawei.com/consumer/en/doc/HMSCore-References-V5/frame-0000001050121447-V5)
- ARCore + Camera2 API ? : [SharedCamera](https://developers.google.com/ar/reference/java/com/google/ar/core/SharedCamera) - [Doc](https://developers.google.com/ar/develop/java/camera-sharing)
- Camera2 API?
<img src="doc/Screenshot_20210313_141028_com.remmel.recorder3d.jpg" height="480" />
<img src="doc/Screenshot_20210313_141143_com.remmel.recorder3d.jpg" height="480" />
<img src="doc/Screenshot_20210313_141157_com.remmel.recorder3d.jpg" height="480" />

<img src="Recorder3D/src/test/resources/00000012_image.jpg" width="240" /> ![](Recorder3D/src/test/resources/00000012_depth.png) <img src="doc/plymeshlab.png" width="240" />

API Camera2 - Honor View 20
(TotalCaptureResult result)
result.get(CaptureResult.LENS_FOCAL_LENGTH) = 4.75
result.get(CaptureResult.LENS_APERTURE) = 1.8
result.get(CaptureResult.LENS_DISTORTION) = [0,0,0,0,0]
result.get(CaptureResult.LENS_FOCUS_DISTANCE) = 1.0713588 //in mm
result.get(CaptureResult.LENS_FOCUS_RANGE) = 0.0 8.0
result.get(CaptureResult.LENS_INTRINSIC_CALIBRATION) = [0,0,0,0,0,0,0,0]

characteristics.get(CameraCharacteristics.SENSOR_INFO_PHYSICAL_SIZE) = 7.28x5.46 //in mm
characteristics.get(CameraCharacteristics.LENS_INFO_AVAILABLE_FOCAL_LENGTHS) = 4.75
characteristics.get(CameraCharacteristics.LENS_INFO_MINIMUM_FOCUS_DISTANCE) = 8
characteristics.get(CameraCharacteristics.LENS_INFO_HYPERFOCAL_DISTANCE) = 0.2
characteristics.get(CameraCharacteristics.DEPTH_DEPTH_IS_EXCLUSIVE) = true
characteristics.get(CameraCharacteristics.LENS_INTRINSIC_CALIBRATION) = [0,0,0,0,0]
characteristics.get(CameraCharacteristics.LENS_POSE_TRANSLATION) = [0,0,0]
characteristics.get(CameraCharacteristics.LENS_POSE_ROTATION) = [1,0,0,0]
characteristics.get(CameraCharacteristics.SENSOR_ORIENTATION) = 90


focalLength in pixels = CaptureResult.LENS_FOCUS_DISTANCE * 8000 / CameraCharacteristics.SENSOR_INFO_PHYSICAL_SIZE[0]

## Features
- Save depth (240x180, binary DEPTH16)
- Save 2 RGB images (1 VGA + 1 at the chosen resolution, max 3264x2448)
- Save poses in CSV [download](doc/poses.csv)
- (from Android Studio) Save RGBD PLY [download](Recorder3D/src/test/resources/00000012.ply) (TODO: enable it from the UI)
- (from Android Studio) Export to 16-bit grayscale PNG (TUM format; OpenCV: CV_16UC1) (TODO: enable it from the UI)
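The "binary DEPTH16" files above presumably follow Android's standard `ImageFormat.DEPTH16` layout (13 low bits = depth in millimeters, 3 high bits = a confidence code). Assuming that layout, a minimal decoder sketch (the `Depth16` class name is mine, not from this repo):

```java
// Decode DEPTH16 samples as documented for android.graphics.ImageFormat.DEPTH16:
// bits 0-12 hold the depth in millimeters; bits 13-15 hold a confidence code,
// where 0 means "confidence not available" (treated as 100%) and 1..7 map to 0%..100%.
public final class Depth16 {
    /** Depth in millimeters (the 13 low bits of the sample). */
    public static int depthMillimeters(short sample) {
        return sample & 0x1FFF;
    }

    /** Confidence in [0, 1]; a raw code of 0 is treated as full confidence. */
    public static float confidence(short sample) {
        int code = (sample >> 13) & 0x7;
        return code == 0 ? 1.0f : (code - 1) / 7.0f;
    }
}
```

For the TUM-style 16-bit grayscale export mentioned above, the millimeter value would be multiplied by 5 (TUM PNGs store depth at 5000 units per meter) — again an assumption about the intended format.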

Huawei P20 Pro #0
result.get(CaptureResult.LENS_FOCAL_LENGTH) = 5.58
result.get(CaptureResult.LENS_APERTURE) = 1.8
result.get(CaptureResult.LENS_DISTORTION) = [0,0,0,0,0]
result.get(CaptureResult.LENS_FOCUS_DISTANCE) = 1.9061583
result.get(CaptureResult.LENS_FOCUS_RANGE) = 0 10
result.get(CaptureResult.LENS_INTRINSIC_CALIBRATION) = [0,X7]
characteristics.get(CameraCharacteristics.SENSOR_INFO_PHYSICAL_SIZE) = 7.3x5.5 //in mm
## Todo
- Add FileManager to export to PNG grayscale, PLY, merge PLYs
- Integrate the online pose-viewer in the app

Huawei P20 Pro #camera zoom
result.get(CaptureResult.LENS_FOCAL_LENGTH) = 7.48
result.get(CaptureResult.LENS_APERTURE) = 2.4
result.get(CaptureResult.LENS_DISTORTION) = [0,0,0,0,0]
result.get(CaptureResult.LENS_FOCUS_DISTANCE) = 9.775171 //change
result.get(CaptureResult.LENS_FOCUS_RANGE) = 0 10
result.get(CaptureResult.LENS_INTRINSIC_CALIBRATION) = [0,X7]
request.get(CaptureRequest.LENS_FOCAL_LENGTH)= 7.48
## Bonus feature : Measure

https://quaternions.online/
Measures the distance (depth) from the camera to the object at the center of the image (center depth pixel).
![Screenshot_resized25](doc/Screenshot_resized25.jpg)

## Measure
## Android smartphone

![Screenshot_resized25](doc/Screenshot_resized25.jpg)
This app has only been tested on the Honor View 20, which has a ToF sensor. It is the cheapest Huawei phone with a ToF camera; you can find one second-hand for around €180.
2 changes: 2 additions & 0 deletions Recorder3D/build.gradle
@@ -42,6 +42,8 @@ dependencies {
implementation 'com.huawei.hms:arenginesdk:2.15.0.1'
testImplementation 'junit:junit:4.12'

implementation group: 'org.apache.commons', name: 'commons-lang3', version: '3.12.0'

// implementation 'org.openpnp:opencv:4.3.0-3' //nu.pattern.OpenCV.loadLocally(); - not android

implementation "org.bytedeco:javacpp:1.5.4" // Loader.load(org.bytedeco.javacpp.opencv_java.class); //https://github.com/bytedeco/javacpp
29 changes: 17 additions & 12 deletions Recorder3D/src/main/java/com/remmel/recorder3d/ChooseActivity.java
@@ -20,7 +20,10 @@
import com.remmel.recorder3d.recorder.RecorderRenderManager;
import com.remmel.recorder3d.recorder.preferences.RecorderPreferenceActivity;

import org.apache.commons.lang3.ArrayUtils;

import java.io.File;
import java.util.Arrays;

/**
* This class provides the permission verification and sub-AR example redirection functions.
@@ -30,6 +33,7 @@
*/
public class ChooseActivity extends Activity {
private static final String TAG = ChooseActivity.class.getSimpleName();
protected LinearLayout llFileManager;

@Override
protected void onCreate(@Nullable Bundle savedInstanceState) {
@@ -43,16 +47,12 @@ protected void onCreate(@Nullable Bundle savedInstanceState) {
TextView tvHeader = findViewById(R.id.choose_txt_header);
tvHeader.setText(getString(R.string.app_name) + " v" + BuildConfig.VERSION_NAME);

File dir = this.getExternalFilesDir(null);
TextView tvFolder = findViewById(R.id.choose_txt_filesfolder);
tvFolder.setText(dir + ":");

renderBrowser();

llFileManager = findViewById(R.id.choose_linearlayout);
}

@Override
protected void onResume() {
renderDirtyFileManager();
Log.d(TAG, "onResume");
super.onResume();
}
@@ -102,24 +102,29 @@ public void onClick(View view) {
}
}

protected void renderBrowser() {
protected void renderDirtyFileManager() {
File filesDir = this.getExternalFilesDir(null);
LinearLayout ll = findViewById(R.id.choose_linearlayout);
llFileManager.removeAllViewsInLayout();

TextView tvFolder = new TextView(this);
tvFolder.setText(filesDir + ":");
tvFolder.setTextSize(10);
llFileManager.addView(tvFolder);

String[] directories = filesDir.list(FilenameFilterUtils.isDir());
Arrays.sort(directories);
ArrayUtils.reverse(directories);

for (String dir : directories) {
TextView tv = new TextView(this);


File f = new File(filesDir, dir);
int nbRgbVga = f.list(FilenameFilterUtils.endsWith(RecorderRenderManager.FN_SUFFIX_IMAGEVGAJPG)).length;
int nbRgb = f.list(FilenameFilterUtils.endsWith(RecorderRenderManager.FN_SUFFIX_IMAGEJPG)).length;
int nbDepth = f.list(FilenameFilterUtils.endsWith(RecorderRenderManager.FN_SUFFIX_DEPTH16BIN)).length;

tv.setText("• " + dir+ " RGBVGA("+nbRgbVga+") RGB("+nbRgb+") Depth("+nbDepth+")");
ll.addView(tv);
llFileManager.addView(tv);
}
}


}
8 changes: 0 additions & 8 deletions Recorder3D/src/main/res/layout/activity_choose.xml
@@ -33,14 +33,6 @@
android:layout_width="match_parent"
android:layout_height="match_parent"
android:orientation="vertical">

<TextView
android:id="@+id/choose_txt_filesfolder"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:text="/foo/bar/files"
android:textSize="10dp" />

</LinearLayout>
</ScrollView>

96 changes: 96 additions & 0 deletions misc.md
@@ -0,0 +1,96 @@
# Dirty TODO
- ply: get depth sensor intrinsics [#1083](https://github.com/google-ar/arcore-android-sdk/issues/1083) + extrinsics between the 2 cameras (try to get [arFrame.acquireSceneMesh()](https://developer.huawei.com/consumer/en/doc/development/HMSCore-References-V5/frame-0000001050121447-V5#EN-US_TOPIC_0000001050126786__section167911410271) ply/obj and compare with mine) [#638](https://github.com/google-ar/arcore-android-sdk/issues/638#issuecomment-438785104)
- try using a thread to save hi-res image
- create obj from ply
- project images on obj


# Reconstruction

send images to server merge depth (TSDF - linux - CPU)
- https://github.com/tum-vision/fastfusion (PNG depth format TODO; reconstruction only)
- https://pcl.readthedocs.io/projects/tutorials/en/latest/using_kinfu_large_scale.html - Point Cloud Library
- OpenCV kinfu https://docs.opencv.org/master/d8/d1f/classcv_1_1kinfu_1_1KinFu.html https://github.com/microsoft/Azure-Kinect-Samples/tree/master/opencv-kinfu-samples
- https://github.com/PrimozLavric/MarchingCubes
- https://github.com/ros-industrial/yak
- https://github.com/andyzeng/tsdf-fusion / https://github.com/andyzeng/tsdf-fusion-python (CUDA needed?) (Ubuntu)
- https://github.com/Nerei/kinfu_remake
- https://github.com/personalrobotics/OpenChisel
- https://github.com/MikeSafonov/java-marching-cubes
- https://github.com/sdmiller/cpu_tsdf
- https://github.com/pedropro/OMG_Depth_Fusion


# Intrinsics and API image

Camera Intrinsics (Honor View 20 - AR Engine : [ARCameraIntrinsics](https://developer.huawei.com/consumer/en/doc/HMSCore-References-V5/camera_intrinsics-0000001051140882-V5) + [ARCamera](https://developer.huawei.com/consumer/en/doc/development/HMSCore-Library-V5/camera-0000001050121437-V5) )
- principalPoint : 718.911 543.41327 // (cx, cy)
- imageDimensions : 1440 1080 //(width, height)
- distortions : 0.122519 -0.229927 0.144746 -6.96E-4 -4.39E-4
- focalLength : 1072.9441 1075.7474 //(fx, fy ) in px - Why 2 values ?
- getProjectionMatrix : 1.4902 0 0 0 / 0 3.0466316 0 0 / 0.001512431 0.009666785 -1.002002 -1 / 0 0 -0.2002002 0
Calculated (image is landscape)
- horizontal fov: Math.atan2(1440/2,1075.7473)*180/Math.PI*2=67.6
- vertical fov: Math.atan2(1080/2,1075.7473)*180/Math.PI*2=53.3
- diagonal : Math.sqrt(Math.pow(1080,2)+Math.pow(1440,2))=1800
- diafov: Math.atan2(1800/2,1075.7473)*180/Math.PI*2=79.8
- cx = width * (1 - proj[2][0]) / 2 = 1440*(1-0.001512431)/2 = 718.911
- cy = height * (1 - proj[2][1]) / 2 = 1080*(1-0.009666785)/2 = 534.780
- fx = width * proj[0][0] / 2 = 1440*1.4902/2=1072.944
- fy = height * proj[1][1] / 2 = 1080*3.0466316/2=1645.18 ?!? seems incorrect!
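The hand calculations above can be reproduced programmatically. A small sketch (class and method names are mine) that recovers cx, cy, fx, fy from the projection-matrix entries and computes a field of view from a focal length in pixels:

```java
// Recover pinhole intrinsics from AR Engine's projection matrix entries
// (for the Honor View 20 values above: proj00 = 1.4902, proj11 = 3.0466316,
//  proj20 = 0.001512431, proj21 = 0.009666785, image 1440x1080).
public final class IntrinsicsFromProjection {
    public static double fx(double proj00, int widthPx)  { return widthPx  * proj00 / 2; }
    public static double fy(double proj11, int heightPx) { return heightPx * proj11 / 2; }
    public static double cx(double proj20, int widthPx)  { return widthPx  * (1 - proj20) / 2; }
    public static double cy(double proj21, int heightPx) { return heightPx * (1 - proj21) / 2; }

    /** Full field of view in degrees across a dimension of sizePx, given the focal length in px. */
    public static double fovDegrees(double sizePx, double focalPx) {
        return 2 * Math.toDegrees(Math.atan2(sizePx / 2, focalPx));
    }
}
```

With the values above, `fx(1.4902, 1440)` ≈ 1072.94 and `fovDegrees(1440, 1075.7473)` ≈ 67.6°, matching the figures listed; `fy(3.0466316, 1080)` reproduces the suspicious 1645.18, so the anomaly lies in the reported matrix entry, not in the arithmetic.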

Camera Intrinsics (Huawei P20 Pro - AR Engine)
- principalPoint: 535.04553 729.8055 //in px
- imageDimensions : 1080 1440 //in px
- distortions : 0.093129 -0.187359 0.138948 1.34E-4 -4.29E-4
- focalLength : 1101.3862 1100.9385 //in px


To get both depth and RGB :
- AR Engine : [ARFrame](https://developer.huawei.com/consumer/en/doc/HMSCore-References-V5/frame-0000001050121447-V5)
- ARCore + Camera2 API ? : [SharedCamera](https://developers.google.com/ar/reference/java/com/google/ar/core/SharedCamera) - [Doc](https://developers.google.com/ar/develop/java/camera-sharing)
- Camera2 API?


API Camera2 - Honor View 20
(TotalCaptureResult result)
result.get(CaptureResult.LENS_FOCAL_LENGTH) = 4.75
result.get(CaptureResult.LENS_APERTURE) = 1.8
result.get(CaptureResult.LENS_DISTORTION) = [0,0,0,0,0]
result.get(CaptureResult.LENS_FOCUS_DISTANCE) = 1.0713588 //in mm
result.get(CaptureResult.LENS_FOCUS_RANGE) = 0.0 8.0
result.get(CaptureResult.LENS_INTRINSIC_CALIBRATION) = [0,0,0,0,0,0,0,0]

characteristics.get(CameraCharacteristics.SENSOR_INFO_PHYSICAL_SIZE) = 7.28x5.46 //in mm
characteristics.get(CameraCharacteristics.LENS_INFO_AVAILABLE_FOCAL_LENGTHS) = 4.75
characteristics.get(CameraCharacteristics.LENS_INFO_MINIMUM_FOCUS_DISTANCE) = 8
characteristics.get(CameraCharacteristics.LENS_INFO_HYPERFOCAL_DISTANCE) = 0.2
characteristics.get(CameraCharacteristics.DEPTH_DEPTH_IS_EXCLUSIVE) = true
characteristics.get(CameraCharacteristics.LENS_INTRINSIC_CALIBRATION) = [0,0,0,0,0]
characteristics.get(CameraCharacteristics.LENS_POSE_TRANSLATION) = [0,0,0]
characteristics.get(CameraCharacteristics.LENS_POSE_ROTATION) = [1,0,0,0]
characteristics.get(CameraCharacteristics.SENSOR_ORIENTATION) = 90


focalLength in pixels = CaptureResult.LENS_FOCUS_DISTANCE * 8000 / CameraCharacteristics.SENSOR_INFO_PHYSICAL_SIZE[0]
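As a cross-check (my own, not from the AR Engine or Camera2 docs), the standard pinhole relation converts a focal length in millimeters to pixels using the sensor width:

```java
// Standard pinhole relation: f_px = f_mm * imageWidth_px / sensorWidth_mm.
// Assumes the image spans the full sensor width (no crop).
public final class FocalLengthPx {
    public static double focalPx(double focalMm, double sensorWidthMm, int imageWidthPx) {
        return focalMm * imageWidthPx / sensorWidthMm;
    }
}
```

For the Honor View 20 values (4.75 mm lens, 7.28 mm sensor width, 1440 px image width) this gives ≈ 939.6 px, noticeably below the fx ≈ 1072.9 reported by AR Engine — which may simply mean the 1440x1080 stream is a crop of the full sensor, so treat this relation with caution here.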


Huawei P20 Pro #0
result.get(CaptureResult.LENS_FOCAL_LENGTH) = 5.58
result.get(CaptureResult.LENS_APERTURE) = 1.8
result.get(CaptureResult.LENS_DISTORTION) = [0,0,0,0,0]
result.get(CaptureResult.LENS_FOCUS_DISTANCE) = 1.9061583
result.get(CaptureResult.LENS_FOCUS_RANGE) = 0 10
result.get(CaptureResult.LENS_INTRINSIC_CALIBRATION) = [0,X7]
characteristics.get(CameraCharacteristics.SENSOR_INFO_PHYSICAL_SIZE) = 7.3x5.5 //in mm

Huawei P20 Pro #camera zoom
result.get(CaptureResult.LENS_FOCAL_LENGTH) = 7.48
result.get(CaptureResult.LENS_APERTURE) = 2.4
result.get(CaptureResult.LENS_DISTORTION) = [0,0,0,0,0]
result.get(CaptureResult.LENS_FOCUS_DISTANCE) = 9.775171 //change
result.get(CaptureResult.LENS_FOCUS_RANGE) = 0 10
result.get(CaptureResult.LENS_INTRINSIC_CALIBRATION) = [0,X7]
request.get(CaptureRequest.LENS_FOCAL_LENGTH)= 7.48

https://quaternions.online/
