
Revisit a way to surface depth API to the app? #10

Open
bialpio opened this issue Nov 30, 2020 · 2 comments

bialpio commented Nov 30, 2020

Quote from Twitter's @mrmaxm:
https://twitter.com/mrmaxm/status/1333516895975305218

"Or look into similar approach of Hand Tracking API with providing allocated array into function, which will fill it with data.
Allocations - is very important issue with realtime apps."
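
For context, the allocation-free pattern the tweet points at looks roughly like this in the hand input API (`hand`, `referenceSpace` & `session` are assumed to be set up elsewhere):

```js
// Buffers are allocated once, up front:
const jointSpaces = Array.from(hand.values());            // 25 XRJointSpaces
const poses = new Float32Array(jointSpaces.length * 16);  // one 4x4 matrix each
const radii = new Float32Array(jointSpaces.length);

function onXRFrame(time, frame) {
  // fillPoses()/fillJointRadii() write into the app-provided arrays and
  // return false when the data is unavailable - no per-frame allocations.
  if (frame.fillPoses(jointSpaces, referenceSpace, poses) &&
      frame.fillJointRadii(jointSpaces, radii)) {
    // ... consume poses/radii ...
  }
  session.requestAnimationFrame(onXRFrame);
}
```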


bialpio commented Nov 30, 2020

We have a few options here (rough API sketches below):

  1. Keep the API the way it is now & specify that XRDepthInformation is only usable while the frame it came from is active.
  2. Expose a method that will populate an app-provided Uint8Array with the current depth data.
  3. (related to #4, "Allow data to be accessed directly by the GPU") Expose depth data via a WebGLTexture. Its lifetime will likely also need to be limited as in pt. 1 above.
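
Roughly how each option could look from the app's perspective (the method names in 2. and 3. are made up for illustration; `getDepthInformation()` follows the current explainer):

```js
// Option 1: status quo + lifetime restriction. The result is only usable
// while `frame` is active; after the rAF callback returns it is off-limits.
const depthInfo = frame.getDepthInformation(view);
const data = depthInfo.data;  // Uint8Array, frame-scoped

// Option 2: hand-tracking-style fill of an app-provided buffer.
// `fillDepthInformation` is a hypothetical name.
const depthBuffer = new Uint8Array(expectedByteLength);  // allocated once
frame.fillDepthInformation(view, depthBuffer);           // reused every frame

// Option 3 (see #4): depth surfaced as a WebGLTexture, likely also
// frame-scoped as in option 1. `getDepthTexture` is a hypothetical name.
const depthTexture = xrWebGLBinding.getDepthTexture(frame, view);
gl.bindTexture(gl.TEXTURE_2D, depthTexture);
```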

A bit of background on Chrome's current implementation: the renderer receives new depth data on every frame (via a shared memory buffer coming from the device process). The allocation + copy on the device side is unavoidable since we need a way of getting the data out of ARCore. The depth buffer is then passed on & stored on the renderer side, and is copied every time the app requests an instance of XRDepthInformation.
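
Concretely, that per-request copy means two instances obtained in the same frame don't alias each other (a sketch of today's behavior, using the explainer's accessor):

```js
const a = frame.getDepthInformation(view);
const b = frame.getDepthInformation(view);

a.data[0] = 255;         // mutates only a's private copy of the buffer
console.log(b.data[0]);  // b is unaffected - each request copied the data
```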


bialpio commented Dec 3, 2020

More thoughts on the above options and how they could change Chrome's implementation.

  1. Ensuring that the data is only valid during an active XRFrame allows us to skip a copy of the depth buffer when the application requests depth information - since instances of XRDepthInformation are usable only while a frame is active, they can share the underlying depth buffer among themselves (and once the frame becomes inactive, we could reclaim the buffer). The drawback is that the app could accidentally overwrite entries in this buffer, and those overwritten entries would be visible via other XRDepthInformation instances (see the sketch after this list).
  2. I believe this will not actually help in our case. Filling out an app-provided array would incur a copy of a potentially non-trivial amount of data, and would only save on allocating a Uint8Array object (which AFAICT is not that expensive if the array is just a view into another buffer). What this approach would help with is that if the app writes to its own array, it would only overwrite its own copy.
  3. Similar to pt. 1 above, with the same drawback - the app can upload new data to the texture (but this is harder to do accidentally, so it may be worth changing the API just for that benefit).
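
A sketch of the pt. 1 drawback (hypothetical behavior if instances start sharing the renderer-side buffer):

```js
const a = frame.getDepthInformation(view);
const b = frame.getDepthInformation(view);

a.data[0] = 255;         // accidental overwrite through one instance...
console.log(b.data[0]);  // ...becomes visible through the other one (255)

// Once the frame goes inactive the UA could reclaim the shared buffer,
// so holding onto a.data past the rAF callback would also be an error.
```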
