Allow 2dcontexts to use deeper color buffers #299
If we did make an ImageDataFloat, it would be nice to use 16-bit floating point rather than 32. For this type of data we don't need 4 bytes.
We could perhaps overload the
Am I right that a 16-bit floating point value has just enough precision for 10bpp images, but not higher depths? I wonder if instead we should just be using integers with a greater range. If there is concern that authors would forget to check for the specific channel depth, and only expect values in the range [0,1023] say, we could normalise the values to be always in the range [0,65535] or something. Are there image formats where colours are really floating point values, where it would definitely make sense to use a floating point type to expose them?
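As a quick sanity check on that question (an illustrative sketch, not from the thread; the function name is made up): IEEE 754 binary16 stores 10 explicit significand bits, so it represents every integer up to 2^11 = 2048 exactly. That covers the 10bpp code range [0, 1023] but not the full 12bpp range.

```javascript
// Sketch: is the integer n exactly representable in IEEE 754 binary16?
// binary16 has 10 stored significand bits, i.e. 11 bits of precision,
// so an integer is exact iff its odd part fits in 11 bits.
function fitsInBinary16(n) {
  if (n === 0) return true;
  while (n % 2 === 0) n /= 2;  // strip trailing zero bits
  return n < 2048;             // 2^11
}

// Every 10bpp code value [0, 1023] fits exactly...
const all10bpp = Array.from({ length: 1024 }, (_, i) => i).every(fitsInBinary16);
// ...but not every 12bpp value (e.g. 2049 needs 12 significant bits).
const all12bpp = Array.from({ length: 4096 }, (_, i) => i).every(fitsInBinary16);
```

So a hypothetical half-float channel would round-trip 10bpp data exactly, but would silently round some 12bpp values.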
I'm (we're) still not sure which is better:

A. Expose 32-bit floats for each channel and allow the implementation to squash them into however much precision the backing store provides. Pros: the same code "works" on all backing stores. Cons: performance, since everything needs to be converted and copied; memory use in the ImageData; and colors would not match on different profiles.
B. Same as A but with 16 bits. Cons: same, except slightly less memory wastage. Also, no real system support for half-float types.
C. Expose 32- or 16-bit integers for each channel and a way to detect how many bits are available (and thus the maximum value you can put in the data, 2^8 for 8bpc, 2^10 for 10bpc). Pros: allows exact precision. Cons: still needs a copy and conversion when coming from a 10bpc backing store.
D. As Cameron suggests, 32- or 16-bit integers that are effectively normalized into a 0-1 range. Pros and cons are similar to A and B.
E. Depending on the backing store type, expose the buffer as directly as possible. For example, if you have RGB101010 then each pixel is a 32-bit unsigned integer. Pros: copying is easy; no conversion needed. Cons: requires ugly bit math to use the data.
F. Something else.

I kind of think options A, B or D are the nicest to use on the developer side. However, they will likely be very slow and are definitely wasteful. E is efficient but probably horrible to use.
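To make option E concrete, here is a hypothetical sketch of the bit math a developer would face with one packed RGB10-10-10-2 word per pixel. The packing order (R in the low bits) and the function names are assumptions for illustration, not anything specced:

```javascript
// Unpack a 32-bit RGB10-10-10-2 word into its channels.
function unpackRGB1010102(word) {
  return {
    r: word & 0x3ff,           // bits 0-9
    g: (word >>> 10) & 0x3ff,  // bits 10-19
    b: (word >>> 20) & 0x3ff,  // bits 20-29
    a: (word >>> 30) & 0x3,    // bits 30-31 (2 bits, often unused)
  };
}

// Pack channels back into a 32-bit word; >>> 0 keeps the result unsigned.
function packRGB1010102(r, g, b, a = 0) {
  return ((r & 0x3ff) | ((g & 0x3ff) << 10) | ((b & 0x3ff) << 20) | ((a & 0x3) << 30)) >>> 0;
}
```

This is the "ugly bit math" cost of option E: every per-pixel operation pays for three shifts and masks in each direction, but the buffer itself can be handed over without any copy or conversion.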
Is it possible/practical to implement A/B/D on top of E in a JS lib? What is the solution to specify wide gamut colors in CSS?
Exposing A/B/C/D is all possible on top of E. And E has the benefit of being the most performant, so I'm tempted to suggest that. We'd still need some extra information on the result, such as the layout. For example, a wide gamut opaque buffer might be 10-10-10-2, and a non-opaque buffer might be the same with a chunk of 8s appended for the alpha. As for CSS colors, see the thread http://www.w3.org/mid/[email protected]
@grorg what is the backing store that OS X has? JavaScript doesn't have 16-bit floating point arrays, so I think that's out. We don't want to introduce another TypedArray object unless TC39 is interested in that. It sounds like 16-bit integers are a poor mapping for what is going on underneath? E sounds tempting, if it's likely that another OS would have the same representation. Otherwise it seems like we might want to have at least one layer of abstraction.
So, my opinion as a former canvas implementer/spec-writer-type person, and as an engineer on a JavaScript engine: Don't bother with floats unless you're thinking in terms of dynamic range (a very important thing to think about these days). If we do believe floats are the way to go, don't mess around with halfs; CPUs don't support them, so they will be slow -- all operations will require half->double expansion, which does not have hardware support, followed by a subsequent double->half conversion on store, in addition to all the clamping you'd be expecting. For integral-sized backing stores, efficiency means running in multiples of 8 bits. Also, I have encountered cameras that produce 12bpp RAW, so defining things in terms of 10bpp is just asking for many subsequent variants on the spec for 11, 12, 13... bits. My recommendation is to provide an option for a 16bpp backing store: we can trivially support it, there are already defined Uint16Array types, there's no increased memory cost versus any depth with 8 < bpp <= 16, and graphics operations can be performed efficiently internally, as CPUs have hardware to handle 16-bit integers directly. It's also somewhat future-proof against ever-increasing channel depth. The only time you need to worry about 16bpp vs 10bpp (etc.) is when extracting the canvas content as an actual image, where it may be desirable to set the destination channel depth if the image format supports higher channel depths.
I see it as two separate issues:
For 1 you technically don't need > 8bpp. Most image formats use a color profile and "stretch" the 8 bits of image data to the destination gamut. That's simple and convenient. For the gamut case I'd suggest an API that takes a color profile from an image: Pre-defined names like "adobe rgb 1998" might work too, although then it'd be impossible to support custom profiles from digital cameras.
I consider both of those good reasons for float (I had previously considered the possibility of float per channel). Just note that it will be substantially slower than integer paths.
@ojhunt said:
That's not quite true. We'll probably expose a buffer that is 10bpc, which can fit into a 32-bit integer when opaque. i.e. there won't be any additional memory cost over 8bpp, but it will be less than 16bpp.
@pornel said:
Yep. This is what we already do with our wider gamut hardware. We just can't create a canvas with a profile.
Yes, this is what I was proposing, although I think we can start with just well-known names. On the CSS discussion (linked above) I suggested P3 and Rec.2020, although maybe just Rec.2020 is enough. However, I think that you are going to want the larger backing store, which I'll reply to separately.
@pornel said:
Whose sanity? The developer's? I sort-of agree, but a big advantage of exposing the same format as the backing store is that the get and put operations don't require any conversion and will be fast. Yes, you'll have to do some ugly bit masking and shifting, but the users of these APIs are probably already doing some nasty things for every pixel.
Yep.
The issue here is that you're probably exploding the memory use by a factor of 3 or 4, requiring a conversion for every channel of every pixel when reading and writing, and losing some precision in the process. On the positive side, it means you can write code that will work with any depth of backing store, including the current RGBA8. Another positive is that this form can be polyfilled from the direct-access method; i.e. if I gave you the raw 30-bit RGB + 8-bit alpha buffers, you could provide 32-bit floating-point access in JS. Note about the lack of half float: while the CPU doesn't support it, we could use unsigned shorts and normalize values from 0-2^16 into 0-1. There would still be some memory wastage (if the backing store is 10bpc) and conversion.
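The "unsigned short normalized to 0-1" idea can in fact be lossless with respect to a 10bpc store, provided the scaling rounds rather than truncates. A sketch (function names are made up for illustration):

```javascript
// Stretch a 10-bit channel value onto the full Uint16 range and back.
// Because 65535/1023 is about 64, each 10-bit code maps to a band of
// ~64 Uint16 codes, so rounding back always recovers the original.
function u10ToU16(v) { return Math.round(v * 65535 / 1023); }
function u16ToU10(v) { return Math.round(v * 1023 / 65535); }
```

So the conversion cost grorg mentions is real, but the precision loss need not be, at least for the store-to-API direction.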
@grorg said:
I meant 10bpc (bits per channel). That is, a 32-bit value that is 10 bits each of R, G and B.
You can fit 10bpc in a 32-bit value only if you drop alpha; given we're talking about canvas, that seems like an unreasonable tradeoff. Furthermore (given we're talking canvas APIs), direct fiddling is via typed arrays, which do not have non-byte-multiple variants, and the bit fiddling required for 10bpc would make it unreasonably expensive -- even pure indexing becomes slow. Sure, images can be 10bpc -- Safari already supports that (and more) -- but that's because image files are constructed to be as small as possible. The performance trade-offs for live (in-memory) content are different than for files being stored.
@ojhunt said:
Yes, but nothing stops you from providing alpha in another byte. For example, given a canvas that is w * h, the data array could be (w * h * 5) bytes long, with the first (w * h * 4) bytes being used for (w * h) 32-bit integers that are 10/10/10/2 for RGB (the last 2 bits are unused). Then the next (w * h) bytes are 8-bit alpha. That way you get wider color while only using 1 extra byte per pixel. And this is a backing store format that we're considering using internally. However, I understand if people want to only expose a 64bpp (16/16/16/16) store, under the assumption that it is wide enough for practical use and doesn't require bit masking/shifting. On the downside, we'd still have to describe how 10-, 11- and 12-bit values are converted into 16 bits (clamped? normalized?). Also, since there isn't a native half-float type, the values would be unsigned shorts, which I assume we will normalize to 0-1. Maybe this is the easiest solution, even if it does potentially waste a bit of memory.
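For illustration, a sketch of reading one pixel out of the hypothetical (w * h * 5)-byte layout described above. The function name and the bit order within the RGB word are assumptions:

```javascript
// Layout assumed: first w*h*4 bytes hold one 32-bit 10/10/10/2 RGB word per
// pixel, followed by a plane of w*h bytes of 8-bit alpha.
function readPixel(buffer, w, h, x, y) {
  const rgbWords = new Uint32Array(buffer, 0, w * h);
  const alphaBytes = new Uint8Array(buffer, w * h * 4, w * h);
  const i = y * w + x;
  const word = rgbWords[i];
  return {
    r: word & 0x3ff,           // bits 0-9
    g: (word >>> 10) & 0x3ff,  // bits 10-19
    b: (word >>> 20) & 0x3ff,  // bits 20-29
    a: alphaBytes[i],          // from the trailing alpha plane
  };
}
```

Note that the alpha plane falls at a 4-byte-aligned offset, so both typed-array views can share one ArrayBuffer.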
@ojhunt and I talked in person and we're now agreed on a 64bpp (16*4) backing store. This would be an option to the initial getContext() call. This would make the R, G, B and A values Uint16s. We could then keep get/putImageData as they are, but we'll need tweaks to ImageData. We'll need a Uint16Array member. Someone else here can tell me if this is better exposed as a new type of ImageData, or extra attributes on the existing one, or something else.
The constructor will still need a colorspace identifier to allow colors outside of sRGB. For this I suggest waiting to see what CSS decides on (hopefully some keywords).
I was thinking we can just overload the
Sounds good.
That is true in the sense that current 8-bit stores technically don't need 8, either. They could use 6 for example (and some displays do exactly that, although with dithering to mask the effects). It just looks worse.
Yeah, I think we've settled on 16bpc store now. It should be noted however that this will cause a significant increase in memory use, especially if you getImageData on a very large canvas. At that point you'll have two instances of a large buffer (the actual backing store and the ImageData copy). As for floating point vs Uint16, I don't have a strong opinion. We don't have a half-float type in JS, which is why we've basically settled on integers. And it also is consistent with the existing API, which uses 0-255. People know that they divide/multiply by 255 to normalize.
Definitely.
The new iMacs don't use DCI P3, but something very close. Regarding AdobeRGB - does it map well to a display workflow? DCI P3 only covers 94% of Adobe RGB. I fear that while it might be popular with designers, they are likely to run into the same trouble as today if they use it. This is why I think we should keep the keywords to a small set that maps to the displays we expect to see in the nearish future. That would be P3-ish and Rec.2020-ish. We can always add more later, and I liked the suggestion that you can point to an HTMLImageElement to get a profile.
Note that for GPU-accelerating the canvas, using 16-bit floats is a lot more convenient than using 16-bit integers internally. There are some kinds of 16-bit integer texture formats in all modern graphics APIs, but there's no normalized filterable format in any version of GLES. 16-bit float formats, on the other hand, are widely supported and used. I think this makes specifying around 16-bit integers a no-go. Maybe the spec could be such that both alternatives would be allowed under the hood, so you could implement the spec efficiently both on CPU and GPU, but one of the alternatives would probably end up being less accurate in this case. I agree with grorg's view on color spaces, that having "P3-ish" and "Rec-2020-ish" tiers would make sense.
Using 10bpc is not future-proof and does not match the human visual system. Thankfully you seem to have settled on 16 bits. However, the comment that "you need 10 or 12 bits to give the same resolution as you had inside the sRGB gamut" is not correct. To smoothly shade from black to white with a linear encoding, such that a human will perceive no steps, needs about 14 bits/channel. See http://www.poynton.com/notes/colour_and_gamma/GammaFAQ.html#smoothly_shade. 16 bits/channel is what was used in the Pixar computer aeons ago. Assuming 16 bits/channel and a linear encoding, perhaps it makes sense to define a color profile for the backing store or use the same profile as the display. Incoming images and color values can be converted to this profile. On output to the display the colors can be converted as necessary. On output via getImageData the colors can be converted to a requested profile. Having a linear encoding makes blending easier.
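Poynton's ~14-bit figure can be reproduced with a rough back-of-the-envelope calculation. The Weber fraction and contrast ratio below are assumed round numbers for illustration, not measurements:

```javascript
// With a linear encoding, the darkest displayed level on a 100:1 display is
// 1/100 of full scale, and a step there must still be under ~1% of that level
// to be invisible. So we need roughly contrast/weber distinct code values.
const weber = 0.01;     // smallest perceptible relative luminance step
const contrast = 100;   // display black is 1/100 of display white
const codesNeeded = contrast / weber;                  // 10000 codes
const bitsNeeded = Math.ceil(Math.log2(codesNeeded));  // 14 bits
```

A gamma-encoded (or logarithmic) store spends its codes non-uniformly, which is why 8-10 bits suffice there while a linear store needs roughly 14.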
The number of bits per channel required for a linear encoding that matches the precision of human vision depends a lot on the dynamic range of the display device that is being targeted. 16-bit float has similar precision characteristics to logarithmic encodings, while preserving the convenience of linear arithmetic. This is a big deal. Having high resolution in the near-black range is key to being scalable to future high dynamic range devices.
I agree that we should use a 16bpc (float) backing store. The issue is what do we do about ImageBuffer and get/putImageData. We don't have a 16-bit float type in JS, nor a TypedArray specialisation. I guess we'll just have to use Float32 and live with the fact that using these functions will be slow (conversion), inaccurate (clipped) and wasteful (double the memory use).
@annevk @grorg @bterlson @domenic I strongly feel Float32Array should be used for the purpose of getImageData/putImageData to high-bit-depth backing store canvases, and not try to standardize a Float16Array. A Float16Array built in to the JavaScript VM will not be significantly faster than a pure JS implementation of the same, and is a lot more assembly code to be built in to the virtual machine. Further, the dominant cost of reading back the contents of a canvas will be the readback from the GPU, and not the conversion from Float16 -> Float32 and vice versa.
According to Sean, the primary use case is to minimize memory and maximize performance. Also, if the display is 8 bit and we don't lose information when going to screen, would we introduce extra banding?
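For reference, the "pure JS implementation" kbr mentions is small. This is a minimal decoder for one IEEE 754 binary16 value, a sketch of the kind of shim a library could pair with Float32Array (not taken from any shipping engine):

```javascript
// Decode one IEEE 754 binary16 bit pattern (a Uint16) to a JS number.
function halfToFloat(h) {
  const sign = (h & 0x8000) ? -1 : 1;
  const exp = (h >> 10) & 0x1f;  // 5 exponent bits
  const frac = h & 0x3ff;        // 10 significand bits
  if (exp === 0) return sign * frac * Math.pow(2, -24);     // subnormal / zero
  if (exp === 31) return frac ? NaN : sign * Infinity;      // NaN / infinity
  return sign * (1 + frac / 1024) * Math.pow(2, exp - 15);  // normal numbers
}
```

The encode direction needs rounding logic, but the point stands: the per-value work is a handful of shifts and multiplies, dwarfed by GPU readback.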
It's not enough to say that the display is 8 bit - we need to know the gamut of the display. Ultimately, the answer is "it depends". If the display has a wide gamut (approaching Rec.2020), then 8-bit output is going to be fairly banded, but it's going to look no worse than the naive output of typical applications rendering to 8 bit and not doing color management. If the display is significantly narrower than Rec.2020, then the rendering intent applied when reducing the gamut will have a big impact on the output. Perceptual mapping will cause the final result to be fairly similar to having simply done the work in sRGB - which is about as high quality as can be produced on that hardware. But applying either of the colorimetric intents will produce significantly more banding, as the number of 8-bit (Rec.2020) values that actually fit in the sRGB or similarly small gamut will be quite small.
@cabanier To squeeze large Rec.2020 into 8 bits you would either have to drop high bits that enable the wide gamut or drop low bits that are needed for precise non-posterized colors. Both options seem unappealing to me, because it'll either be as limited as sRGB, or look worse than sRGB.
True. The intent is that you'd only use this mode if you're going to a wide gamut display. Otherwise, an author should stay in the sRGB color space.
I will ping Sean to get his thoughts on the matter.
There are use cases where banding is not a concern (because there are no long gradients), where people may want to use wide gamuts.
Low precision also makes alpha blending worse. What are the actual uses for low-quality but very saturated color?
Can you elaborate on why that is the case?
When blending semitransparent colors, the RGB channels are multiplied, added together, and then truncated back to the original precision. When you do that on a small numeric range, the numeric error from the computation is relatively larger. Transparent blending may need to change the background slightly, but with fewer bits of precision small-enough changes may not be possible, so colors will drift. Multiple blendings in an overlapping area (e.g. for smoke particles in a game, or multiple layers in an image editor) amplify that error.
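A small sketch of the effect described here: repeatedly source-over blending a low-alpha layer onto a background, quantizing to the store's bit depth after every blend. The model is deliberately simplified (one gray channel, no premultiplication); names are illustrative:

```javascript
// Blend src over bg `times` times, snapping to a `bits`-deep store each pass.
function blendQuantized(bg, src, alpha, bits, times) {
  const maxv = (1 << bits) - 1;
  let v = bg;
  for (let i = 0; i < times; i++) {
    const exact = src * alpha + v * (1 - alpha);  // source-over
    v = Math.round(exact * maxv) / maxv;          // truncate to store precision
  }
  return v;
}

// At 8 bits, a 0.1%-alpha white layer moves this background by less than half
// a code value, so even 100 blends change nothing; at 16 bits it converges.
const bg = 128 / 255;  // exactly representable at 8 bits
const at8 = blendQuantized(bg, 1.0, 0.001, 8, 100);   // still 128/255
const at16 = blendQuantized(bg, 1.0, 0.001, 16, 100); // ≈ 0.549
```

The failure mode at low precision is not just noise: sufficiently small contributions are rounded away entirely, which is the drift/posterization being described.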
Yes, if you do lots of compositing, the quality will degrade. According to our color people, 10 bit is a minimum for animated content, so 8 bit shouldn't be used for that. For simple content it should be ok.
@junov can we add the p3 color space to match the media query?
Yes, we were planning to suggest exactly the same thing, for different reasons. P3 (or Adobe, they're pretty close) is by far the most common gamut on new desktop monitors (at least sampling all of the ones in my office). It also provides a gamut where we can reasonably work in 8 bit. Although I previously argued that 8-bit 2020 was possible, I still don't think it's a good idea. I also had a hard time imagining that any application wanting that much range would be willing to do math at 8-bit precision. More likely, in my mind, is that apps will simply want "more than sRGB", and P3 or similar will fit the bill nicely. The only real "problem" is that Adobe RGB and DCI P3 have a mismatch in both directions. If you had to pick arbitrarily, the media query argument suggests P3; it just means that the common case of users with an Adobe RGB display will be subject to slight gamut mapping when viewing P3 canvases.
@brianosman I agree. I didn't know the gamut of 2020 was that much bigger than P3/Adobe RGB.
@brianosman are you suggesting that you would like to have both P3 and Adobe RGB?
Not really, although I could certainly see someone making that argument. (It's not likely that any particular application is going to prefer one or the other, but if we keep the "optimal" logic, then perhaps a photo editing app would want the better match for the attached display.) I think that one or the other is sufficient. We were already going to require the browser to handle mapping rec2020 to much smaller gamut monitors - the mismatch between P3 and Adobe should be far less of a problem.
P3 may be confusing, because Apple Display P3 has a different gamma than DCI-P3. I actually like the idea of not having a wide choice of color profiles. To me, color profiles are analogous to character encodings. In the '90s we had applications try to offer all the different encodings and convert between them, and it was a mess. Eventually we settled on ASCII + one or two Unicode encodings. I see sRGB as the ASCII of imaging, and I think it'd be great if there was just one wide "Unicode" equivalent for pixels. Linear Rec.2020 may be it. So as an application developer, instead of having to support a number of encodings and juggle barely enough bits of precision when converting between them, I strongly prefer to just hardcode and support only one that's big enough.
Do you have a source for this? I'm not turning anything up (but that might just be due to lots of user/media confusion?)
From: http://www.astramael.com/ (it's a great description of wide gamut and Apple's P3)
The idea behind linear-rec-2020 was to limit the number of color spaces that need to be natively supported by implementations, by providing a one-size-fits-all space that meets or exceeds the gamuts and precisions required for: a) targeting any current or near-future consumer device; b) HDR and wide gamut image processing. In terms of implementation practicality, supporting spaces with different primaries is not a big deal (it's just a tweak to the conversion matrix). What would be a drag is having to support a multitude of bit-depths, and gamma curves that don't have built-in HW support. Apple P3 displays don't just have wider gamuts, they also have a higher dynamic range, which is why they have 10 bits per component. So if we want to save on space with respect to 16bpc rec-2020, without losing the precision or gamut of an Apple P3 display, we'd have to do something in the middle like a 10bpc or 12bpc mode, which is obviously impractical to implement (even more impractical for WebGL). I am not sure it is reasonable to bake into the spec a mode that would be so device-specific. I have an alternate suggestion: Make provisions in the spec for vendor-specific color spaces that can be selected via {colorspace: "optimal"}. That way, implementors are free to go the extra mile to optimize for a specific device, and we won't have to revisit the spec every time a new awesomer class of devices hits the market. For the purposes of interoperability, it would simply be required that any vendor-specific color space that is selectable via "optimal" must obey the rules that compositing, filtering and interpolation be done in linear space. In such spaces, getImageData and putImageData may require a format conversion in order to map component values to native JS types. Similar issue with toBlob/toDataURL. If we retain this idea, we'll have to figure out the specific format conversion rules for those cases. We'll cross that bridge when we get there...
To avoid fingerprinting, the vendor-specific spaces should be limited in number and should not be directly mapped to the device's output profile. So, does that general idea sound reasonable? Or does the prospect of explicitly allowing non-standard spaces sound scary? About AdobeRGB: this color space is practical because 8-bit AdobeRGB is pretty close to the display profiles of a lot of current devices, so offering it in 8-bit format makes sense. The gamma curve is a pure gamma function that does not exactly match the sRGB curve. Perhaps it would make sense to not put AdobeRGB in the spec, and let browsers implement more optimal alternatives as vendor-specific color spaces, such as a franken-color space that uses the AdobeRGB primaries with the sRGB transfer curves (which we get for free on GPUs). If we do put AdobeRGB in the spec, I would expect that implementors would cut corners by using the sRGB curves on it, which is probably not a big deal. Thoughts on that?
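As an aside on "it's just a tweak to the conversion matrix": the RGB-to-XYZ matrix for any set of primaries can be derived mechanically from their chromaticity coordinates and the white point. The primaries below are the published sRGB and Rec.2020 values (both D65); the code itself is an illustrative sketch:

```javascript
// Invert a 3x3 matrix via the adjugate.
function inv3(m) {
  const [[a, b, c], [d, e, f], [g, h, i]] = m;
  const A = e * i - f * h, B = c * h - b * i, C = b * f - c * e;
  const D = f * g - d * i, E = a * i - c * g, F = c * d - a * f;
  const G = d * h - e * g, H = b * g - a * h, I = a * e - b * d;
  const det = a * A + b * D + c * G;
  return [[A, B, C], [D, E, F], [G, H, I]].map(r => r.map(v => v / det));
}

const mulVec = (m, v) => m.map(row => row.reduce((s, x, j) => s + x * v[j], 0));

// Build RGB→XYZ from primary chromaticities (x, y) and a white point.
function rgbToXyzMatrix(primaries, white) {
  const cols = primaries.map(([x, y]) => [x / y, 1, (1 - x - y) / y]);
  const M0 = [0, 1, 2].map(r => cols.map(c => c[r]));
  const W = [white[0] / white[1], 1, (1 - white[0] - white[1]) / white[1]];
  const S = mulVec(inv3(M0), W);  // scale primaries so R=G=B=1 hits the white point
  return M0.map(row => row.map((v, j) => v * S[j]));
}

const D65 = [0.3127, 0.3290];
const SRGB_PRIMARIES = [[0.64, 0.33], [0.30, 0.60], [0.15, 0.06]];
const REC2020_PRIMARIES = [[0.708, 0.292], [0.170, 0.797], [0.131, 0.046]];
const srgbToXyz = rgbToXyzMatrix(SRGB_PRIMARIES, D65);
const rec2020ToXyz = rgbToXyzMatrix(REC2020_PRIMARIES, D65);
```

Composing inv3(rec2020ToXyz) with srgbToXyz then converts linear sRGB to linear Rec.2020; since both spaces share D65, white maps to white. The per-space work really is just a different 3x3 matrix; the transfer curves are where the implementation cost lives.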
Are you suggesting that we set up a separate page that lists a limited number of spaces?
Even though linear is the "ideal" way of doing compositing, few applications and rendering engines do so. Is this also something that all UAs can implement? (i.e. smfr mentioned that they don't have control over this, but maybe I misheard)
The "Adobe" part of the name is an issue. It's ok to create something with a different name but the exact same values.
All I am suggesting is that we should keep the number of standard color spaces to a minimum. Just what we need to cover all fundamental use cases. Then browser vendors could add their own non-standard color spaces (which, let's face it, was going to happen anyway), some of which may become standard in the future. Regarding the "optimal" option, all I am suggesting is that the spec give some guidance on the behavior of non-standard color spaces that can be selected by the UA when the user asks for "optimal", to guarantee some degree of standardization in the behavior. That said, browser vendors could also implement whacky modes that break all the rules, as long as "optimal" never ends up selecting that mode.
Good point. Browsers have been mostly doing things the "wrong" way since forever, and everything was fine. Or was it? One of the objectives of this proposal is to break away from our old ways to offer better standardized behavior so that apps that do care about this sort of detail can lean on a reliable standard.
LOL. So many Adobe branded technologies have become de facto standards that we sometimes forget it's a trademark. It's a compliment, really. Thanks for pointing it out. That said, we'll need to do a review of issues with IP that is referenced by the proposal (e.g. rec-2020) before making it a standard.
There are people who helped formulate this feature proposal who are unable to participate in the discussion in this venue. For that reason I intend to move this discussion to a thread on W3C's WICG. I will capture the feedback gathered from this thread into an updated proposal that I will use to start a new WICG discussion later today. As soon as that is set up I will point you all to the new thread. Don't worry, the thread will be open to all (not just W3C members), as long as you agree to the terms.
Yeah, it seems reasonable to incubate such a feature in a venue like the WICG. Looking forward to its graduation into the HTML Standard in the future :). We can continue using this issue to track that graduation.
Update: my W3C account is in a broken state. Not sure when I'll be able to start the new thread.
W3C WICG thread started on Discourse, with an updated proposal that integrates recent ideas, issues and objections that were raised in this thread and in the Khronos thread and conference call. Please continue the discussion here:
If you're not sticking to just sRGB, why stick to the arbitrary 0-255 range, instead of 0-1? I think that would be much more intuitive.
This is a summary of the current implementation surface on Chrome to address this issue: #4167
I think with that, this can be considered resolved/a duplicate.
It is now common to come across displays that support more than 8 bits per channel. HTML Image elements can use these wider-gamut displays by embedding a color profile in the resource. The CSS Working Group is adding a way to specify the profile used in a color definition, and ensuring that colors are not clipped to 0-255.
That leaves Canvas objects. We need at least a few things in order to use better colors in canvas:
All existing methods that take images or ImageData would still keep the existing behaviour. That is, if you create a deep canvas and putImageData into it, that data is assumed to be in sRGB.