Clarify/simplify graph execution on GPU / MLCommandEncoder #333
Comments
@zolkis we should focus this issue on updates to the algorithms outside the … We have identified the WebGPU interoperability as a feature at risk for CR initial publication in #240. We agreed this feature needs more implementation experience and the …
Same question as #264 (comment). It's my understanding that the … Can we close this issue?
Per our discussion https://www.w3.org/2024/01/25-webmachinelearning-minutes.html#t09 the group wants to clarify the interaction between the WebNN and WebGPU timelines as a priority, using MLBuffer as a prototype. That has a separate issue, so I'm closing this one. @zolkis, feel free to salvage any bits from this issue as needed.
A few purely API-design-related comments on MLCommandEncoder.
(Context: I am trying to update some of the algorithms and to specify missing ones, and I stumbled upon these questions.)
Graph initialization
- `MLCommandEncoder.initializeGraph()` is synchronous (looks highly parallel, on a different timeline)? If it is indeed synchronous, has no side effects, and is done exactly once per graph, why not include it in the `createCommandEncoder()` constructor, or in the current case, the factory method? To clarify this, we need an algorithm, or a note that informally describes graph initialization (per context type, eventually).
- The `initializeGraph()` method takes a graph as an argument. One might think that a single `MLCommandEncoder` object could be used for handling multiple graphs, but from the text the intent seems to be that the command encoder covers the execution phases of a single graph. For another graph, one would use a different `MLCommandEncoder` object (even if in the implementation it is bound to a singleton).

Proposal (question): collate the command encoder factory with graph initialization, so that the `MLCommandEncoder` is bound to that graph (and context). That might even be an async method; see the sketch after this list.
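A minimal sketch of how that proposal could look from script, assuming an `MLContext` created for WebGPU interop; the async `createCommandEncoder(graph)` factory is the hypothetical part:

```js
// Assume `context` is an MLContext created for WebGPU interop and
// `graph` is an MLGraph already built on that context.
//
// Hypothetical, per the proposal above: the factory is async, performs
// graph initialization (e.g. preprocessing constant weights), and binds
// the returned encoder to that one graph and context.
const encoder = await context.createCommandEncoder(graph);
```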
Command encoder steps
Assuming the change above, the interface becomes:
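(The original IDL block did not survive extraction; the following is a plausible reconstruction from the surrounding discussion, with `initializeGraph()` folded into the factory.)

```webidl
interface MLCommandEncoder {
  // Record an execution of the (already bound) graph with the given
  // named input/output GPU resources.
  undefined dispatch(MLNamedGPUResources inputs, MLNamedGPUResources outputs);
  // Complete the recording and return a WebGPU command buffer.
  GPUCommandBuffer finish(optional GPUCommandBufferDescriptor descriptor = {});
};
```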
It is not clear why `dispatch()` and `finish()` need to be separated. I guess it is to provide clarity on when the optional descriptor may be used? We need some examples to cast light on this. How are scripts supposed to manage the command encoding? May `dispatch()` be called multiple times before `finish()`? The intent is not clear, since AFAIK a single dispatch will manage all the parallel operations involved in queuing graph execution. I understand it follows the DirectML API flow, but I'd argue the usage in this API can be simpler, because it is not a generic GPU API but a specific application of one. In principle, we should encapsulate what we can, and expose only the control points scripts need. (See the sketch after this paragraph.)
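A sketch of the usage pattern in question, assuming the interface above and pre-created WebGPU buffers; whether the second `dispatch()` is legal is exactly what needs specifying:

```js
// Hypothetical factory from the proposal above.
const encoder = await context.createCommandEncoder(graph);

// Record one (or several?) graph executions into the command sequence.
// Whether multiple dispatches per encoder are allowed is the open question.
encoder.dispatch({ x: inputBufferA }, { y: outputBufferA });
encoder.dispatch({ x: inputBufferB }, { y: outputBufferB });

// Complete the recording and submit the resulting WebGPU command buffer.
const commandBuffer = encoder.finish();
gpuDevice.queue.submit([commandBuffer]);
```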
Actually, I even challenge why (in the Web ML spec) it is not enough to specify a single async method on MLContext. The implementation may still follow the workflow of initializing the graph, dispatch, and finish - but why expose that to WebML scripts? A single method could encapsulate those steps and return the resulting GPUCommandBuffer (a sketch follows).
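For illustration only, a minimal sketch of that alternative; the method name `computeAsGPUCommands()` is invented here, not taken from any spec draft:

```js
// Hypothetical single entry point on MLContext: the implementation
// internally performs graph initialization, dispatch, and finish, and
// only the resulting GPUCommandBuffer is surfaced to script.
const commandBuffer = await context.computeAsGPUCommands(
    graph, { x: inputBuffer }, { y: outputBuffer });
gpuDevice.queue.submit([commandBuffer]);
```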
In summary, I would appreciate more explanation/examples/script use cases for this (graph init and graph execution on GPU).
I am aware of this relevant comment and of the (still open) discussion in #264, which is somewhat related to this.