UI <-> Replay thoughts/ideas? #7
It should work. Unfortunately that's a Chromium issue because of some SharedImage problems, see the commit message of https://chromium-review.googlesource.com/c/chromium/src/+/3233071 Weirdly I can't find a Chromium bug tracking this? So maybe that's what you're running into. Not sure.
My initial thought was that the replay would just give a
Sounds good, and there would maybe be some hints to checkpoint at a specific higher node if the user is moving back and forth between drawcalls. The replay should also be able to give you the current "state" set on the encoder at a step. Ideally the replay handles resetting resources to their initial state and moving between steps? I'll try to get that working after I get the alien head sample record/replaying correctly. (while waiting for TS from Seb) |
Looking at the Replay object, there's something I can't currently tell. Compare: in the real API it's
But in the replay those have been changed to
I can't translate that backward. I'd either translate it as this
or
To put it another way, the actual arguments to

Is that an easy change? |
It also seems like it would be nice to show what the user actually passed. So for example draw was called like this
But the replay has
Which means I have to show
I think I'd prefer to see |
For your first comment, we could change the type of

For 2) definitely, it was a mistake to have reified the values on the capture side, and instead the replay should have done the reifying, providing both the raw and reified values to the UI. I'll do that when we have TypeScript set up for the repo. |
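To illustrate the raw vs. reified distinction with `draw` (the defaults below are the ones WebGPU defines; the two variable names are just for illustration):

```ts
// What the user actually wrote: draw(3)
const rawArgs = { vertexCount: 3 };

// What the capture currently stores after reification: every optional
// parameter filled in with its WebGPU default.
const reifiedArgs = { vertexCount: 3, instanceCount: 1, firstVertex: 0, firstInstance: 0 };

// Keeping both would let the UI show `draw(3)` while still exposing the
// full, unambiguous call for inspection.
```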
So I've been trying to get it so you can click "capture" and it will capture a replay and show it in the debugger. And if you click "capture" again there would be 2 replays and you can select between them. At the moment I have these 2 APIs in mind.

This is the API the UI calls to interface with the replay code (eventually this code would be in the UI when it integrates the replay code):

```ts
interface ReplayAPI {
/**
* Capture a single frame. Should call `addReplay` when done
*/
captureFrame: () => Promise<void>;
/**
* Start capturing frames until `endCapture` is called
*/
startCapture: () => void;
/**
* Stop capturing (see `startCapture`)
*/
endCapture: () => void;
/**
* Request to playback to a specific part of a playback
* @param replay The replay to replay
* @param id this is in the form of steps -> sub steps -> sub sub steps.
*
* Imagine a replay represents n frames, then [2, 3, 4, 5] would represent
* frame[2]
* device/queue function[3] (assume this happens to be queue.submit)
* command buffer[4]
* command [5]
*
*/
playTo: (replay: Replay, id: number[]) => void;
getBufferData: (buffer: GPUBuffer, offset: number, size: number) => Promise<ArrayBuffer>;
getTextureData: (
texture: GPUTexture,
level: number,
x: number,
y: number,
z: number,
width: number,
height: number,
depth: number
) => Promise<ArrayBuffer>;
getTextureImage: (texture: GPUTexture, level: number) => Promise<ImageBitmap>;
}
```

This is the API the Debugger (UI) exposes:

```ts
interface DebuggerAPI {
/**
* Register commands to interface with replay
*/
registerAPI: (api: ReplayAPI) => void;
/**
* Adds a replay to the UI
*/
addReplay: (replay: Replay) => void;
/**
* Pass a canvas of the playback result
*/
setResult: (canvas: HTMLCanvasElement) => void;
}
```

So in my test at

```html
<script src="../src/capture/registry.js"></script>
<script src="../src/replay/lib.js"></script>
<script type="module">
import ui from '../dist/index.js';
ui.registerAPI({
playTo(replay, id) {
// TBD: replay the given 'replay' to the specified id
console.log(replay, id);
},
startCapture() {
// TBD: Capture until stop capture is called
},
endCapture() {
// TBD: Stop Capturing
},
async captureFrame() {
const trace = await spector2.traceFrame();
// Trace the frame and set up the replay.
console.log(trace);
spector2.revertEntryPoints();
const replay = await loadReplay(trace);
console.log(replay);
ui.addReplay(replay);
// Go through each command, and show the presented texture of the trace on the capture canvas.
const captureCanvas = document.getElementById('captureCanvas');
const context = captureCanvas.getContext('webgpu');
for (const c of replay.commands) {
replay.execute(c);
if (c.name === 'present') {
const textureState = c.args.texture;
const device = textureState.device.webgpuObject;
captureCanvas.width = textureState.size.width;
captureCanvas.height = textureState.size.height;
context.configure({
device,
usage: GPUTextureUsage.COPY_DST,
format: textureState.format,
alphaMode: 'opaque',
});
const encoder = device.createCommandEncoder();
encoder.copyTextureToTexture(
{ texture: textureState.webgpuObject },
{ texture: context.getCurrentTexture() },
textureState.size
);
device.queue.submit([encoder.finish()]);
// TODO: should probably have this generate
// an imagebitmap and pass in that instead?
ui.setResult(captureCanvas);
}
}
},
});
```

It works to capture the first frame. It fails while capturing the second.

Screen.Recording.2022-10-27.at.7.49.21.PM.mov

In the video it captures a replay and then clicking the various steps you can see it outputs the
One thing that came up is how a
That way the UI doesn't need to know how to walk the hierarchy (kind of like DOM nodes). As it is, it's hardcoded that if the

Today (and most of the time over the last 2 weeks) has been spent fighting React. Today I spent 2-3 hrs refactoring the way state is stored, but it ended up not working and I reverted to what I had, just to get back to a working state. I then hooked up the capture like above but ran into another React issue. After a couple of hours I figured out the pane library I'm using is not re-rendering the components. I reached out to the dev to see if there's a simple fix. Otherwise I spent a couple of hours looking for an alternative and writing a sample to make sure it wasn't going to have the same issue. Maybe later tonight, or for sure tomorrow, I'll get one solution or the other in. |
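To make the "walk the hierarchy like DOM nodes" idea concrete, here is a minimal sketch of a generic step node; the names (`ReplayNode`, `label`, `children`) are assumptions for illustration, not the repo's actual types:

```ts
// Hypothetical generic node shape. The UI only needs `label` to render a row
// and `children` to recurse, so it never has to hardcode knowledge of queue
// submits, command buffers, render passes, etc.
interface ReplayNode {
    label: string;           // e.g. "queue.submit(...)" or "draw(3)"
    children?: ReplayNode[]; // present only for steps that contain sub-steps
}

// Generic rendering: walk the tree and emit one indented line per node.
function renderSteps(node: ReplayNode, depth = 0, out: string[] = []): string[] {
    out.push(`${'  '.repeat(depth)}${node.label}`);
    for (const child of node.children ?? []) {
        renderSteps(child, depth + 1, out);
    }
    return out;
}
```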
Some other things we need to decide on: how the UI knows what to display. Similar to the

I guess that's which pipeline, which bindgroups, which blend constants, the current viewport, etc. Again, it would be nice to get this in a generic way if that's possible. Ideally the backend could do something like

```ts
const states = await getStateForStep(replay, [1, 0, 0, 7]);
```

where `states` is something like

```ts
{
viewport: {x: 0, y: 0, width: 300, height: 150, minDepth: 0, maxDepth: 1 },
scissor: {x: 0, y: 0, width: 300, height: 150},
blendConstant: [0, 0, 0, 0],
bindGroups: [
[ GPUBindGroup, GPUBindGroup, GPUBindGroup, ...],
[ GPUBindGroup, GPUBindGroup, GPUBindGroup, ...],
...
],
pipeline: GPUPipeLine,
...
}
```

And the code to display those would be as generic as possible. For things like |
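As an illustration of what a generic state display might look like, a rough sketch that just walks the keys and labels WebGPU objects by type; the helper names are hypothetical and not part of the repo:

```ts
// Hypothetical generic state renderer: it doesn't know about viewports or
// pipelines specifically, it just turns each entry into a label the UI can show.
function describeValue(value: unknown): string {
    if (value instanceof GPUBindGroup) return 'GPUBindGroup';
    if (value instanceof GPURenderPipeline) return 'GPURenderPipeline';
    if (Array.isArray(value)) return `[${value.map(describeValue).join(', ')}]`;
    if (typeof value === 'object' && value !== null) return JSON.stringify(value);
    return String(value);
}

function renderState(state: Record<string, unknown>): string[] {
    return Object.entries(state).map(([key, value]) => `${key}: ${describeValue(value)}`);
}
```

Things like textures and buffers would still need dedicated viewers, but a walker like this could cover the scalar parts of the state.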
Nice work getting a UI working, it seems like frontend development is still hell!
I have some concerns about having both the tracing and replay APIs sharing the same interface. What we'd like to be able to do is produce a trace in one page and open it in another one, so the only interaction between capture and replay is through that JSON blob and we can have two separate interfaces.

```ts
interface Tracer {
// This is essentially what we have in capture.js today
captureFrame: () => Promise<void>;
startCapture: () => void;
endCapture: () => Promise<Trace>; // DIFFERENCE WITH YOUR SUGGESTION
}
```

For the replay lib (not the debugger UI), I'm thinking of something like this:

```ts
interface CommandPath {}; // Right now it is an Array<Number>
interface Replay {
loadTrace: () => Promise<void>;
iterateCommandPaths: *(options: CommandIterationOptions) => CommandAndPath;
playTo: (path : CommandPath) => void;
getState: () => ReplayState; // returns the current state in the replay
}
interface ReplayState {
viewport, bindgroups, scissor, etc.
}
```
That would be
Is it ok if you have to
Agreed, we should do that once we have TypeScript working. In general it can't be completely generic though, because some nodes need encoding in command encoders while others execute directly. But as far as the UI is concerned it would just be a tree. |
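For illustration, a rough sketch of how the UI could drive those two interfaces end to end; the method names follow the sketch above, and the field names on `CommandAndPath` and `ReplayState` are assumptions:

```ts
// On the page being traced: produce the JSON blob.
async function captureOneFrame(tracer: Tracer): Promise<Trace> {
    tracer.startCapture();
    // ...let one frame render...
    return tracer.endCapture();
}

// On the debugger page: load the blob and step through it.
async function inspect(replay: Replay) {
    await replay.loadTrace();
    for (const commandAndPath of replay.iterateCommandPaths({} /* CommandIterationOptions */)) {
        replay.playTo(commandAndPath.path);   // field name assumed
        const state = replay.getState();      // viewport, bind groups, scissor, etc.
        console.log(commandAndPath.path, state);
    }
}
```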
Sounds great! I'll try to adjust the UI so it takes a capture (JSON) and then calls

In good news, I finally figured out what my big UI/React issue was, so things are rendering when they should. Fingers crossed that progress picks up. |
I'd like to hack on things next week. I don't plan to make too many changes in the replay/capture code, but I think I need to make a few small ones to make progress. We can of course change anything when you get back. But I just wanted to check: if you're not going to touch the code for a week, do you mind if I make some small changes to the replay/capture code without a PR, just push? That's only if you don't plan to be editing until you get back.

Right now I've added saving the API wrappers on init, and adding a

This way I can do
That way I can get multiple captures (which mostly seems to be working).

I'm going to move the "playback replay" code into the replay class, at least for now. If it doesn't belong there we can move it when you get back. But... it got me thinking: is this going to work as it's currently designed? If I understand correctly, the library wraps all the entry points and tracks all the objects even when not tracing. Then we trace a frame. Then we

But, if any objects are created between

It seems like we don't want to "revertEntryPoints". Instead we need our own way to call the real API, unwrapped, so
or some object does this for us. Right? Anyway, if so, there's no rush. That's not a change I need to make since the examples are not creating new objects. |
Another thing, just thinking out loud. Should maybe

Right now it's hacked where the samples play back on their own and if a

But I guess from the POV of the UI, it just wants to call
Where state includes all the stuff we mentioned before. Then the UI would do something like
I think you were right before: you should just pass me the textures and I'll have to render them myself. But, like I mentioned in the previous comment, to do that I need to call things unwrapped so my stuff doesn't get captured and my resources don't get added to the capture. |
I see you already added |
Awesome!
Yes, please just push. As long as at least some examples work, it's ok to introduce maybe some slight breakage. I can get back to a known (somewhat) good state using the git history.
That can work but the UI will need to wrap / unwrap entrypoints each time it calls the replay in a way that might cause WebGPU commands to be executed. Otherwise the tracer will see the objects created by the UI and start tracing them, which will add a bunch of noise. ...And you had realized that already :) But like you saw earlier this could cause some difficulties still. I wonder if we could tag the objects created by the UI so they are somehow not traced. Not sure.
Like you found, this is
getState() returns to you the list of color attachments so that you can use the first attachment's

To do WebGPU operations without the replay seeing them you can do |
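The specific call is elided above, but for illustration, one common way to make WebGPU calls that a wrapper-based tracer won't see is to stash references to the original functions before they get wrapped; everything in this sketch (names, structure) is hypothetical and not the repo's actual mechanism:

```ts
// Hypothetical: grab the original (unwrapped) methods before the tracer
// replaces them. A real version would need one entry per entry point it
// intends to call.
const original = {
    createCommandEncoder: GPUDevice.prototype.createCommandEncoder,
    copyTextureToTexture: GPUCommandEncoder.prototype.copyTextureToTexture,
    finish: GPUCommandEncoder.prototype.finish,
    submit: GPUQueue.prototype.submit,
};

// Copy a texture for display without the copy showing up in the capture.
function copyWithoutTracing(device: GPUDevice, src: GPUTexture, dst: GPUTexture, size: GPUExtent3D) {
    const encoder = original.createCommandEncoder.call(device);
    original.copyTextureToTexture.call(encoder, { texture: src }, { texture: dst }, size);
    original.submit.call(device.queue, [original.finish.call(encoder)]);
}
```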
Yesterday I made a way to get an object that functions like a

So the UI/replay can do

And then use

The results are cached in a

At least that's the hope. I haven't tested it yet 😅 |
Okay, after testing the code I realized I needed to make

Anyway, I tried it out and it seems to work. Committed: cf6043a |
Finally getting something visible!

Screen.Recording.2022-10-30.at.11.13.31.PM.mov

Right now I just hacked in setting some state. You can see some examples here: 3220ace#diff-a81d772e513c2e04e99d8e99a45a6154f08bf3c769b8dba1ef8dee82734d3439

I only wired up

The idea is, whatever "State" is returned from

I'm looking for UI input. I'd prefer to get it functional before spending endless time noodling UI, but I'm a little concerned with the direction I'm going and whether it's okay or not. In particular, there are "Pane"s and right now there are 4 main types: "Steps" (the replay), "Frames" (I think I'm going to get rid of this), "State", "Result", "Other". Of course I plan to let you add more of each but, for now, it's laid out like this
When you click "Steps" it would update "Result" and "State". When you click something in state, like a buffer or a texture or a pipeline, etc., it would update "Other". The future idea is if there are 2 or more "Other" panes then if you click something in "State" it would update whichever "Other" pane was the most recently used (which seems to be how most text editors function). I could also probably add drag and drop so you drag the GPURenderPipeline and a new pane appears and you can drop it wherever.

But, complications: "Result" is really just a texture. Is it special? Kind of seems like maybe, because it's nice to see it update when you click different "Steps", but really, you can look at "State" and see the colorAttachments, click a Texture there and, under the current design, "Other" would show that texture, same as results. So do we really need "Result"? I think we do just because it's nice to see, but it is interesting that it's just a texture and in a sense it's redundant.

Another issue (or not): under the current design, you click a Step, you see a Pipeline in State. You click it, "Other" shows the Pipeline. In the Pipeline you click the vertex shader entrypoint and "Other" will show the Shader (so you lose the pipeline). Once I add the ability to make more "Other" panes I could make it use any other "Other" pane to show the shader, but the default would be there's only one "Other" pane. Is it bad that it replaces the pipeline with the shader in that case?

Anyway, I don't really want to focus on those kinds of details yet, but I just thought it would be good to share so ideas can percolate.

Other thoughts: should it show arguments to functions as it's doing,
or should it show
I can certainly make it a user setting, and if they're
Also right now the commands wrap if they don't fit in the pane.
I can make them not wrap so you have to scroll right to see them (that can also be a user setting). No rush on anything, just thinking out loud and looking for input. |
Oh, one more thing. I have a UI TODO list here. It's just kind of notes and it's not remotely complete. I'll try to get a few more objects viewable as values (like the state) and then try to get textures and buffers viewable (as in their content). I'll start with the easy cases but there are obviously lots more difficult cases. |
I hacked things together so I could get the `Replay` object over to the UI. Nothing is interactive but the commands show up, and a command buffer shows its commands. They're a mess at the moment.

I also tried to hook up showing the result in the UI. At the moment it just passes the captureCanvas into the UI
The UI then draws that canvas into another canvas. The code to draw it is just
I'm not sure why it's coming out empty.
I think probably the Replay could just generate an ImageBitmap and pass that into the UI, and the UI can just attach it to a 'bitmaprenderer' canvas? I'm not sure whether that's best or not. If the user clicks on the result, should the UI look into the canvas to find a color, or should it call out to the replay to ask "what color is this pixel?" One advantage to calling out would be that the Replay might support other formats but only pass a renderable format for displaying to the user.
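For reference, the ImageBitmap route is small; a sketch, assuming the replay has some canvas it just presented to (the function names here are illustrative):

```ts
// Replay side: snapshot the canvas the replay just presented to.
function snapshotResult(resultCanvas: HTMLCanvasElement): Promise<ImageBitmap> {
    return createImageBitmap(resultCanvas);
}

// UI side: hand the bitmap to a 'bitmaprenderer' canvas. This transfers
// ownership of the bitmap, so it only needs to happen once per update.
function showResult(uiCanvas: HTMLCanvasElement, bitmap: ImageBitmap) {
    uiCanvas.width = bitmap.width;
    uiCanvas.height = bitmap.height;
    const ctx = uiCanvas.getContext('bitmaprenderer') as ImageBitmapRenderingContext;
    ctx.transferFromImageBitmap(bitmap);
}
```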
For the steps, I was hoping to be as generic as possible, meaning the UI, ideally, shouldn't have to know the contents of anything. It should just have a tree of steps and display them, and if the user clicks on one the UI would call a function or emit an event, "play to this step". That could be indices, like `playToStep([5, 12, 7])`, so play to the 5th item at the top level, the 12th item in the sublevel of the 5th item, and the 7th item in that.
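A tiny sketch of how such an index path could be resolved against a generic step tree; the `Step`/`children` names are assumptions matching the node idea elsewhere in the thread:

```ts
interface Step {
    label: string;
    children?: Step[];
}

// Resolve e.g. [5, 12, 7]: the 5th top-level step, its 12th child, and that
// child's 7th child. The UI only hands over the path; the replay decides
// what "playing to" each node actually means.
function resolvePath(steps: Step[], path: number[]): Step | undefined {
    let current: Step | undefined;
    let level = steps;
    for (const index of path) {
        current = level[index];
        if (!current) return undefined;
        level = current.children ?? [];
    }
    return current;
}
```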
I'm not sure that's possible, or if rather I need to write custom code for each type of step. At the moment I've special-cased `queueSubmit` and maybe that's fine; I was just wondering if there was a more generic way for me to know when the UI needs the next subcommands.

Tomorrow I'll try to emit or call a function for the playback so, if you want, you can work on playing to a specific step. After that I'll try to clean up the steps display. Then try to make it so if you click on a buffer or a texture it shows the appropriate view.
I guess after that would be to show pipeline settings, render pass settings, followed by shader source.