I like the idea of Element Capture, but it seems limited to capturing DOM elements. Zoom, for example, lets the user draw a rectangle to define the captured region while sharing a screen.
Why not extend the API to all kinds of capture? For example:
const stream = await navigator.mediaDevices.getDisplayMedia();
Then add an API that returns the position of the captured surface within the whole desktop: [(x, y), (width, height)].
Then cropTarget could be any rectangle: [(a, b), (width, height)].
Finally, crop the stream to the overlap of the two rectangles (see the sketch below).
The position of the captured surface within the desktop is also very useful later for remote control or annotation.
Element Capture could then simply return its cropping region and reuse the same mechanism as Region Capture, so the two could be unified into one API.
This would make it suitable for capturing a tab, a window (app), or a screen.
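To make the idea concrete, here is a rough sketch of how that flow could look. `getCaptureBounds()` and `CropTarget.fromRect()` are hypothetical names for the proposed additions (neither exists in any current spec); only `track.cropTo()` exists today, in Region Capture.

```js
// Sketch of the proposed flow. getCaptureBounds() and CropTarget.fromRect()
// are hypothetical; only track.cropTo() exists today (Region Capture).
const stream = await navigator.mediaDevices.getDisplayMedia();
const [track] = stream.getVideoTracks();

// Hypothetical: position and size of the captured surface within the desktop.
const bounds = await track.getCaptureBounds(); // e.g. { x, y, width, height }

// Hypothetical: build a CropTarget from an arbitrary (user-drawn) rectangle
// expressed in desktop coordinates.
const userRect = { x: 200, y: 150, width: 640, height: 360 };
const cropTarget = await CropTarget.fromRect(userRect);

// The browser would then crop each frame to the intersection of
// userRect and bounds.
await track.cropTo(cropTarget);
```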
fideltian changed the title from "Region Capture and Element Capture could be unified as one." to "Custom Rectangle Region Capture" on Mar 27, 2023.
> Then add an API that returns the position of the captured surface within the whole desktop: [(x, y), (width, height)].
Similar functionality exists in the Multi-Screen Window Placement API. In Chrome, it is gated behind a permission prompt. Tying that API to getDisplayMedia() sounds very interesting. Off the top of my head, maybe:
- If the user chooses to capture a screen, we could expose some details of that screen. Maybe the screen's native resolution, which could differ from that of the MediaStreamTrack. Maybe also whether it is the "current screen" or not (a non-trivial concept on some operating systems, btw).
- If the user chooses to capture a window, we could expose the location of the window within the screen. (Another non-trivial concept on some operating systems.)
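A non-normative sketch of what combining the two could look like: `window.getScreenDetails()` and the `ScreenDetailed` properties below are the real Window Management API (permission-gated in Chrome); associating the captured surface with one of those screens is the part that does not exist today.

```js
// getScreenDetails() is real (Window Management API, permission-gated in Chrome).
// Linking its output to the surface picked in getDisplayMedia() is hypothetical.
const [screenDetails, stream] = await Promise.all([
  window.getScreenDetails(),
  navigator.mediaDevices.getDisplayMedia({ video: true }),
]);

for (const s of screenDetails.screens) {
  // Real ScreenDetailed properties: desktop-space position, size, primary flag.
  console.log(s.label, s.left, s.top, s.width, s.height, s.isPrimary);
}

// Hypothetical next step: the UA reports which of these screens (if any) was
// captured, so the app can map track pixels to desktop coordinates for
// annotation or remote control.
```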
But these (very interesting) possibilities seem to me unrelated to Element Capture.
The purpose of Element Capture is to address the needs of Web apps that want to capture a DOM subtree.
The purpose of Region Capture is to address the needs of Web apps that want to capture a specific region of a tab, and to remain robust when that region's coordinates change asynchronously.
The API you propose - if I understand you correctly - would crop an arbitrary MediaStreamTrack, but could not guarantee that miscropping does not occur for a few frames when things move around asynchronously. It's a different use case with a different set of guarantees.
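For contrast, here is a minimal sketch of how the two existing APIs are used today. Both take a live Element, which is what lets the browser keep the crop correct across layout changes. The `preferCurrentTab` hint and the element id `player` are illustrative assumptions.

```js
// Both Region Capture and Element Capture currently apply to tab capture only.
const stream = await navigator.mediaDevices.getDisplayMedia({
  preferCurrentTab: true, // Chrome-specific hint for self-capture
});
const [track] = stream.getVideoTracks();
const element = document.getElementById('player'); // assumed element id

// Region Capture: crop to the element's bounding box; the crop tracks
// layout changes without miscropped frames.
const cropTarget = await CropTarget.fromElement(element);
await track.cropTo(cropTarget);

// Element Capture (alternative): capture only the element's own rendered
// content, excluding anything that occludes it.
// const restrictionTarget = await RestrictionTarget.fromElement(element);
// await track.restrictTo(restrictionTarget);
```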