
Possibility to export drawImage() rendering algorithm #10210

Open
Djuffin opened this issue Mar 19, 2024 · 9 comments
Labels
integration Better coordination across standards needed topic: canvas

Comments

@Djuffin

Djuffin commented Mar 19, 2024

What is the issue with the HTML Standard?

Currently drawImage() is very vague about the details of its rendering implementation.

Paint the region of the image argument specified by the source rectangle on the region of the rendering context's output bitmap specified by the destination rectangle, after applying the current transformation matrix to the destination rectangle.

It is a very useful algorithm, even though rendering is very much implementation-specific.
I'd like to reuse this algorithm in the WebCodecs spec to describe how a VideoFrame can be converted to an RGB bitmap in a JS ArrayBuffer. (more details: w3c/webcodecs#754)

Can we export such a vague algorithm for external usage in other specs?

@annevk
Member

annevk commented Mar 19, 2024

You want to reuse step 6 specifically?

I think if you make it its own algorithm (probably placed directly after the algorithm it is part of now) that would work. We should add some asserts as well then, to ensure the various arguments have the correct values, as WebCodecs would be responsible for ensuring those are correct.

@annevk annevk added topic: canvas integration Better coordination across standards needed labels Mar 19, 2024
@Djuffin
Author

Djuffin commented Mar 19, 2024

Yes, I need to reference "whatever the UA does when it converts VideoFrame to canvas output bitmap".

@Kaiido
Member

Kaiido commented Mar 20, 2024

Do you really want drawImage though? It sounds like you'd be better off with createImageBitmap to generate a bitmap from the VideoFrame, and then rewrite/export step 6 of getImageData to get the actual pixel data from it.

@Djuffin
Author

Djuffin commented Mar 20, 2024

Since ImageBitmap doesn't have methods to obtain pixel data from it, sooner or later drawImage() will need to be called before getImageData().

This means that adding an extra createImageBitmap step doesn't help in any way.

@Kaiido
Member

Kaiido commented Mar 20, 2024

I may be wrong, and will let the editors correct me, but you don't need and probably don't want a full canvas for that.
Currently, step 6 of getImageData handwavily gets the pixel data from the canvas output bitmap.

Set the pixel values of imageData to be the pixels of this's output bitmap

I think you could either write the same kind of algo on your side over the bitmap data that createImageBitmap produced, or write a new algo that does just this and link it from both getImageData and your specs.

And you can probably even avoid the ImageBitmap object entirely and only deal with the bitmap data that the algo produces.
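To make the above concrete without any browser API: here's a minimal, non-authoritative sketch in plain JS of what "set the pixel values of imageData to be the pixels of this's output bitmap" could mean, assuming (purely for illustration) that a "bitmap" is modeled as a flat RGBA buffer plus dimensions; the names `readPixels` and `bitmap.data` are invented here, not part of any spec.

```javascript
// Illustrative sketch only: model a "bitmap" as { width, height, data },
// where data is a flat RGBA Uint8ClampedArray, and copy a rectangular
// region of it into a fresh array, roughly what getImageData's
// "set the pixel values" step implies.
function readPixels(bitmap, sx, sy, sw, sh) {
  const out = new Uint8ClampedArray(sw * sh * 4);
  for (let y = 0; y < sh; y++) {
    for (let x = 0; x < sw; x++) {
      const src = ((sy + y) * bitmap.width + (sx + x)) * 4;
      const dst = (y * sw + x) * 4;
      // Copy the four RGBA channels of this pixel.
      for (let c = 0; c < 4; c++) out[dst + c] = bitmap.data[src + c];
    }
  }
  return out;
}
```

The point being: nothing in this readout step needs a rendering context, only the bitmap data itself.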

@annevk
Member

annevk commented Mar 20, 2024

Making step 6 of drawImage() reusable seems okay to me. The bitmap will have to become an input parameter to the algorithm, and the same goes for the smoothing attributes.

@Kaiido
Member

Kaiido commented Mar 20, 2024

The output context is also needed though.
I guess what I'm missing here is where it's supposed to be drawn, and why. IIUC the goal is to get the pixel data from a VideoFrame, so I don't see why drawImage, which draws an input bitmap onto a canvas output bitmap, is needed.

@Djuffin
Author

Djuffin commented Mar 20, 2024

IIUC the goal is to get the pixel data from a VideoFrame

This is correct. VideoFrames can come in a variety of pixel formats and color spaces.
Currently, a canvas plus drawImage() is the most convenient way to convert any VideoFrame to RGB/(sRGB|p3).
w3c/webcodecs#754 is a proposal to remove the canvas from the picture and allow direct conversion of, let's say, YUV pixel data to RGB. But the UA is expected to perform the pixel data conversion in the very same way.

Basically, WebCodecs should allow replacing this code

```js
const canvas = new OffscreenCanvas(frame.codedWidth, frame.codedHeight);
const ctx = canvas.getContext('2d');
ctx.drawImage(frame, 0, 0);
const imageData = ctx.getImageData(0, 0, frame.codedWidth, frame.codedHeight);
const buffer = imageData.data;
```

with this code

```js
const options = {
  format: 'RGBA',
  colorSpace: 'sRGB'
};
const bufSize = frame.allocationSize(options);
const buffer = new Uint8ClampedArray(bufSize);
await frame.copyTo(buffer, options);
```

The same color conversion code is expected to run, without having to spend resources on creating the canvas, context, etc.
Hence the question: what's the best way to say "please do whatever the HTML canvas does"?
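For a sense of what "whatever the HTML canvas does" hides, here is a non-authoritative sketch of one common per-pixel YUV-to-RGB mapping (full-range BT.601) in plain JS. This is only an illustration: a real UA picks the matrix and range from the VideoFrame's color space, which is exactly the detail the spec would need to pin down.

```javascript
// Illustrative only: one common YUV -> RGB mapping (full-range BT.601).
// The actual conversion a UA performs depends on the frame's color
// space, primaries, and range; none of that is assumed settled here.
function yuvToRgb(y, u, v) {
  const r = y + 1.402 * (v - 128);
  const g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128);
  const b = y + 1.772 * (u - 128);
  // Clamp to the 0..255 range of an RGBA8 buffer.
  const clamp = (x) => Math.min(255, Math.max(0, Math.round(x)));
  return [clamp(r), clamp(g), clamp(b)];
}
```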

@Kaiido
Member

Kaiido commented Mar 21, 2024

Right, so what you need is:

  • Create a bitmap[1] from the VideoFrame using the passed color space and source rectangle.
  • Get the pixel data of this bitmap in the correct format.

drawImage will only get you the first step, and only somehow. It requires a destination bitmap to draw on, which you don't have yet. It also introduces concepts of destination rectangle, shadows, global alpha, etc. that you don't need. All copyTo() will ever do is crop to the source rectangle; the destination one will always be 0, 0, sourceWidth, sourceHeight. And the color-space conversion would only apply on a 2D context.

createImageBitmap's cropping algo basically does the same thing as step 6 of drawImage, without the need for a destination (the bitmap is itself the destination). It has the same features: cropping to a source rectangle, resizing, and image smoothing, and it even has more: flipping Y, and alpha premultiplication. The only thing is that its color-space conversion is currently only "none" or "default". We can probably amend the algo so that it also accepts a PredefinedColorSpace, and handle that from the caller site.
To me this algorithm is the best candidate to be exported, as it seems to be the most general-purpose one. It might even make sense to rewrite drawImage to use that algo instead, but I digress.
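As an aside on the alpha-premultiplication feature mentioned above: the operation itself is simple enough to sketch in plain JS. This is a non-authoritative illustration over an invented flat RGBA buffer, not the spec's actual algorithm.

```javascript
// Illustrative only: the alpha premultiplication that
// createImageBitmap's premultiplyAlpha option refers to, applied to
// a flat RGBA Uint8ClampedArray-style buffer. Each color channel is
// scaled by the pixel's alpha; alpha itself is left untouched.
function premultiply(rgba) {
  const out = new Uint8ClampedArray(rgba.length);
  for (let i = 0; i < rgba.length; i += 4) {
    const a = rgba[i + 3] / 255;
    out[i] = Math.round(rgba[i] * a);         // R
    out[i + 1] = Math.round(rgba[i + 1] * a); // G
    out[i + 2] = Math.round(rgba[i + 2] * a); // B
    out[i + 3] = rgba[i + 3];                 // A
  }
  return out;
}
```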

However, nothing in there does the conversion to RGBA, at least not explicitly. That conversion is ensured in the specs through the Canvas Pixel ArrayBuffer, which only concerns ImageData objects. There are ongoing discussions, e.g. #8708, where it's proposed that getImageData() would be extended with a pixelFormat setting that would do an actual conversion, so I guess it's safe to assume the conversion should be made when getting the pixel data, not before.
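To illustrate "conversion at readout time": a pixel-format swizzle is the kind of thing such a pixelFormat setting might do. The following is a non-authoritative sketch of an RGBA-to-BGRA swizzle over a flat buffer; the function name and buffer layout are invented for the example.

```javascript
// Illustrative only: a channel swizzle applied when reading pixels
// out, the sort of conversion a pixelFormat option on getImageData()
// or copyTo() might perform. Here: RGBA -> BGRA.
function toBGRA(rgba) {
  const out = new Uint8ClampedArray(rgba.length);
  for (let i = 0; i < rgba.length; i += 4) {
    out[i] = rgba[i + 2];     // B
    out[i + 1] = rgba[i + 1]; // G
    out[i + 2] = rgba[i];     // R
    out[i + 3] = rgba[i + 3]; // A
  }
  return out;
}
```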

So my non-authoritative take would be that once you get the bitmap, you'd do the same as getImageData, without an actual ImageData object, because just like you don't want an actual OffscreenCanvas, you don't want an actual ImageData object either. So I guess you could do something along the lines of

HTML would then need to export cropping bitmap, and Canvas Pixel ArrayBuffer or the step 2 of initialize an ImageData. And we would need to amend cropping bitmap to accept an actual PredefinedColorSpace input too.

Would that make sense?

Footnotes

  1. It's not well specced what a "bitmap" actually is, but canvas, CanvasRenderingContext2D and ImageBitmap objects have such a "bitmap".

  2. This one has to be defined.

  3. https://html.spec.whatwg.org/multipage/canvas.html#initialize-an-imagedata-object step 2. Maybe to be exported too?
