Handling of depth buffers for stereoscopic systems #5
I agree that it should be per view, but I don't know how much of a burden it is to calculate that.
@toji agreed that we can define that GPU depth sensing always returns a texture array. This would simplify the spec and there would be less of a chance for user confusion. /agenda should we always expose the depth as a texture array?
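To illustrate the upside of always returning a texture array: app code could treat mono and stereo sessions uniformly, picking a layer per view rather than branching on whether the depth data is a single texture or an array. A minimal sketch of that indexing (the helper name is hypothetical, not from the spec; `viewerPose.views` is the WebXR view list):

```javascript
// Hypothetical helper, assuming GPU depth sensing always exposes a texture
// array with one layer per view: the layer index for a view is simply its
// position in the viewer pose's view list.
function depthLayerIndex(viewerPose, view) {
  const index = viewerPose.views.indexOf(view);
  if (index < 0) {
    throw new RangeError("view does not belong to this viewer pose");
  }
  return index; // layer in the depth texture array for this view
}
```

A shader would then sample the corresponding layer of a `sampler2DArray` with that index, with no special case for single-view sessions.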
Splitting @cabanier's question into a new issue:
The way I thought about it is that the XRDepthInformation we return must be relevant to the XRView that was used to retrieve it. For a stereo system with only one depth buffer, there would be two options: either reproject the buffer so that each of the XRViews gets appropriate XRDepthInformation, or expose an additional XRView that would be used only to obtain the single depth buffer. The latter is probably not ideal: it leaves reprojection up to the app, some XRViews would have null XRDepthInformation, and it creates a synthetic XRView. If we were to require the implementation to reproject the depth buffer, how big of a burden would that be?
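For a sense of what the reprojection option involves, here is a minimal sketch of reprojecting a single depth sample from the depth-capturing view into another eye's view. All names and the projection model are assumptions, not from the spec: it uses a symmetric pinhole frustum described by `tanHalfFovX`/`tanHalfFovY` and a pure-translation `baseline` (in meters, in view A's space, which looks down -Z), whereas a real implementation would use each view's actual projection and transform matrices and run this per texel on the GPU.

```javascript
// Hedged sketch: reproject one depth sample (normalized coords u, v in [0, 1],
// depth in meters) from view A into view B, assuming both views share the same
// symmetric pinhole frustum and differ only by a translation `baseline`.
function reprojectDepthSample(u, v, depth, tanHalfFovX, tanHalfFovY, baseline) {
  // Unproject: normalized coords -> view-space point at `depth` meters.
  const x = (2 * u - 1) * tanHalfFovX * depth;
  const y = (2 * v - 1) * tanHalfFovY * depth;
  // Shift the point into view B's frame (pure translation here).
  const xb = x - baseline[0];
  const yb = y - baseline[1];
  const zb = depth + baseline[2]; // distance along the view direction
  // Project back to normalized coords in view B.
  const ub = (xb / (zb * tanHalfFovX) + 1) / 2;
  const vb = (yb / (zb * tanHalfFovY) + 1) / 2;
  return { u: ub, v: vb, depth: zb };
}
```

Even this toy version hints at the cost: a full-resolution reprojection is a per-texel pass, and disocclusions (texels visible in one eye but not the other) would need filling, which is part of why the burden question matters.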