Possibility to export drawImage() rendering algorithm #10210
Comments
You want to reuse step 6 specifically? I think if you make it its own algorithm (probably directly following the algorithm it is part of now) that would work. We should then add some asserts as well, to ensure the various arguments have the correct values, as WebCodecs would be responsible for ensuring those are correct.
Yes, I need to reference "whatever the UA does when it converts VideoFrame to canvas output bitmap".
Do you really want drawImage though? It sounds like you'd be better off with createImageBitmap to generate a bitmap from the VideoFrame, and then rewriting/exporting step 6 of getImageData to get the actual pixel data from it.
It means that having an extra step of createImageBitmap doesn't help in any way.
I may be wrong and will let the editors correct me, but you don't need and probably don't want a full canvas for that.
I think you could either write the same kind of algorithm on your side over the bitmap data that createImageBitmap produced, or write a new algorithm that does just this and link it from both getImageData and your specs. And you can probably even avoid the intermediate ImageData object.
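To make this concrete, such a shared "read pixels from bitmap data" algorithm could be sketched roughly as below. This is a non-authoritative illustration in plain JS, not spec text: `copyRect` and its parameters are hypothetical names, and it assumes the source is already a flat, tightly-packed RGBA byte array (4 bytes per pixel, row-major), the way step 6 of getImageData effectively treats the canvas output bitmap.

```js
// Hypothetical sketch of a shared "read pixels" algorithm: given a flat
// RGBA source buffer (4 bytes per pixel, row-major) of width srcWidth,
// copy the sub-rectangle (x, y, w, h) into a fresh Uint8ClampedArray.
// Names and shape are illustrative only.
function copyRect(src, srcWidth, x, y, w, h) {
  const out = new Uint8ClampedArray(w * h * 4);
  for (let row = 0; row < h; row++) {
    // Byte offset of the first pixel of this row inside the source buffer.
    const srcStart = ((y + row) * srcWidth + x) * 4;
    out.set(src.subarray(srcStart, srcStart + w * 4), row * w * 4);
  }
  return out;
}
```

Both getImageData and a WebCodecs copy-out path could then be phrased in terms of one algorithm like this, without materializing an ImageData object.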
Making step 6 of getImageData its own exported algorithm sounds like it would work.
The output context is also needed, though.
This is correct. VideoFrames can be in a variety of pixel formats and color spaces. Basically, WebCodecs should allow replacing this code:

```js
const canvas = new OffscreenCanvas(frame.codedWidth, frame.codedHeight);
const ctx = canvas.getContext('2d');
ctx.drawImage(frame, 0, 0);
const imageData = ctx.getImageData(0, 0, frame.codedWidth, frame.codedHeight);
const buffer = imageData.data;
```

with this code:

```js
const options = {
  format: 'RGBA',
  colorSpace: 'sRGB'
};
const bufSize = frame.allocationSize(options);
const buffer = new Uint8ClampedArray(bufSize);
await frame.copyTo(buffer, options);
```

The same color conversion code is expected to be called without having to spend all the resources on creating the canvas, context, etc.
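For a tightly-packed RGBA destination, the size that allocationSize() would report reduces to simple arithmetic. A minimal, non-authoritative illustration (the real VideoFrame.allocationSize() also accounts for options like rect and layout, and for planar pixel formats):

```js
// Illustrative only: a tightly-packed RGBA buffer needs 4 bytes per pixel,
// so the allocation size for the full coded frame is width * height * 4.
// This is not the spec algorithm, just the RGBA special case.
function rgbaAllocationSize(codedWidth, codedHeight) {
  return codedWidth * codedHeight * 4;
}
```

So for a 640x480 frame the copyTo() destination above would need 1,228,800 bytes.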
Right, so what you need is:
However, nothing in there did the conversion to RGBA, at least not explicitly. That conversion is ensured in the specs through the Canvas Pixel ArrayBuffer, which only concerns ImageData. So my non-authoritative take would be that once you get the bitmap, you'd do the same as getImageData, without an actual ImageData object, because just like you don't want an actual canvas, you don't need the intermediate ImageData either.
HTML would then need to export cropping bitmap, and Canvas Pixel ArrayBuffer or step 2 of initialize an ImageData. And we would need to amend cropping bitmap to accept an actual PredefinedColorSpace input too. Would that make sense?
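The "cropping bitmap" step mentioned above is, at its core, an intersection of the requested source rectangle with the bitmap's bounds. A hedged sketch of that geometry (function and field names are hypothetical, not the HTML Standard's wording):

```js
// Hypothetical helper: clamp a requested source rectangle to the bitmap's
// bounds, the way a cropping-bitmap style step would before any pixels are
// read. Returns the intersected rect; width/height can come out as 0 when
// the request lies entirely outside the bitmap.
function clampRect(rect, bitmapWidth, bitmapHeight) {
  const x = Math.min(Math.max(rect.x, 0), bitmapWidth);
  const y = Math.min(Math.max(rect.y, 0), bitmapHeight);
  const right = Math.min(Math.max(rect.x + rect.width, 0), bitmapWidth);
  const bottom = Math.min(Math.max(rect.y + rect.height, 0), bitmapHeight);
  return { x, y, width: right - x, height: bottom - y };
}
```

An exported algorithm along these lines could then take a PredefinedColorSpace argument in addition to the rect, as suggested above.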
What is the issue with the HTML Standard?
Currently, drawImage() is very vague about the details of its rendering implementation.
It is a very useful algorithm, even though rendering is very much implementation specific.
I'd like to reuse this algorithm in the WebCodecs spec to describe how a VideoFrame can be converted to an RGB bitmap in a JS ArrayBuffer. (More details: w3c/webcodecs#754.)
Can we export such a vague algorithm for external usage in other specs?