In our pipeline we are currently doing one unnecessary GPU -> CPU copy in order to expose our raw frame buffer as a GStreamer buffer. The rationale behind it is explained in the docs.

This adds overhead that will eventually become a bottleneck at high resolutions and frame rates (on Discord it has been reported that the current pipeline can't handle 1080p above 200 fps).
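For context, the current path boils down to something like the sketch below (simplified and hypothetical, not our actual code): the frame is read back from the GPU into CPU memory and the resulting bytes end up in a `GstBuffer` that is pushed into the pipeline (e.g. through an `appsrc`).

```c
#include <gst/gst.h>
#include <gst/app/gstappsrc.h>

/* Simplified, hypothetical sketch of the current approach: `readback_data`
 * already went through the GPU -> CPU download we want to eliminate, and
 * here it gets copied into GStreamer-owned memory. A real implementation
 * might wrap the memory instead of copying it again; either way the
 * GPU -> CPU readback is the cost we want to remove. */
static GstFlowReturn
push_cpu_frame (GstAppSrc *appsrc, const guint8 *readback_data,
                gsize size, GstClockTime pts)
{
  GstBuffer *buffer = gst_buffer_new_memdup (readback_data, size);

  GST_BUFFER_PTS (buffer) = pts;
  /* appsrc takes ownership of the buffer. */
  return gst_app_src_push_buffer (appsrc, buffer);
}
```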
There are at least two ways to avoid this copy in GStreamer:
**OpenGL**: expose the buffer as a `GstGLBuffer` (design docs). We'll have to share the OpenGL context between our compositor and the GStreamer pipeline. I guess in the worst case this will still end up doing a GPU -> GPU copy?
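A rough sketch of what the GL path could look like, assuming GStreamer's C API, a `GstGLContext` that shares objects with the compositor's own GL context, and a single-plane RGBA 2D texture per frame (all names below are illustrative, not our actual code):

```c
#include <gst/gst.h>
#include <gst/gl/gl.h>
#include <gst/video/video.h>

/* Hypothetical helper: wrap an already-rendered GL texture into a GstBuffer
 * backed by GstGLMemory, without downloading it to the CPU.
 * `shared_context` must share objects with the context that owns the
 * texture, and the texture has to outlive the buffer (a real implementation
 * would pass a GDestroyNotify to manage that). */
static GstBuffer *
wrap_texture_as_gl_buffer (GstGLContext *shared_context,
                           GstVideoInfo *video_info, guint texture_id)
{
  GstGLMemoryAllocator *allocator =
      gst_gl_memory_allocator_get_default (shared_context);

  /* Describe the wrapped texture: plane 0, 2D target, RGBA. */
  GstGLVideoAllocationParams *params =
      gst_gl_video_allocation_params_new_wrapped_texture (shared_context,
          NULL, video_info, 0, NULL, GST_GL_TEXTURE_TARGET_2D, GST_GL_RGBA,
          texture_id, NULL, NULL);

  GstBuffer *buffer = gst_buffer_new ();
  gst_gl_memory_setup_buffer (allocator, buffer, params, NULL, NULL, 0);

  gst_gl_allocation_params_free ((GstGLAllocationParams *) params);
  gst_object_unref (allocator);
  return buffer;
}
```

The resulting buffer would be pushed with caps carrying the `memory:GLMemory` feature; texture lifetime and which thread this runs on (probably the GL thread) need more thought.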
**DMA-BUF**: expose the buffer as a `GstBuffer` backed by a dmabuf (design docs). CUDA doesn't seem to support importing dmabufs directly (VA-API should), but it should be possible to import the DMA-BUF using `glupload -> cudaupload`. I guess in the worst case this will still end up doing a GPU -> GPU copy?
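And a rough sketch of the dmabuf variant, assuming the frame can be exported as a single-plane RGBA dmabuf fd (again, names and layout are illustrative, not actual code):

```c
#include <gst/gst.h>
#include <gst/allocators/gstdmabuf.h>
#include <gst/video/video.h>

/* Hypothetical helper: wrap an exported dmabuf fd holding one RGBA frame
 * into a GstBuffer so downstream elements (e.g. `glupload ! cudaupload`)
 * can import it without a CPU copy. The allocator should come from
 * gst_dmabuf_allocator_new () and can be reused across frames. */
static GstBuffer *
wrap_dmabuf_as_buffer (GstAllocator *dmabuf_allocator, int fd, gsize size,
                       const GstVideoInfo *info)
{
  /* The dmabuf allocator takes ownership of `fd`. */
  GstMemory *mem = gst_dmabuf_allocator_alloc (dmabuf_allocator, fd, size);

  GstBuffer *buffer = gst_buffer_new ();
  gst_buffer_append_memory (buffer, mem);

  /* Describe the layout of the single plane for downstream elements. */
  gsize offset[GST_VIDEO_MAX_PLANES] = { 0, };
  gint stride[GST_VIDEO_MAX_PLANES] = { 0, };
  stride[0] = GST_VIDEO_INFO_PLANE_STRIDE (info, 0);
  gst_buffer_add_video_meta_full (buffer, GST_VIDEO_FRAME_FLAG_NONE,
      GST_VIDEO_INFO_FORMAT (info), GST_VIDEO_INFO_WIDTH (info),
      GST_VIDEO_INFO_HEIGHT (info), 1, offset, stride);

  return buffer;
}
```

The buffer would then go into an `appsrc` whose downstream starts with `glupload ! cudaupload`, as mentioned above; caps/feature negotiation (`memory:DMABuf`) and multi-plane formats are left out here.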
This needs more research...
Useful links