Feat/remote server #2041
base: main
Conversation
This commit implements the bulk of the handling for the remote server, particularly the passthrough handling for the internal requests.
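Roughly, the workflow this enables looks like the following (a sketch pieced together from the usage examples further down this thread; the exact export location of `setupRemoteServer` on this branch is an assumption):

```ts
import { http, HttpResponse } from 'msw'
// Export location assumed; on this branch setupRemoteServer is expected to sit alongside setupServer.
import { setupRemoteServer } from 'msw/node'

// Test-runner process (e.g. Cypress or Playwright): the remote server that the
// app's Node-side MSW instance forwards its requests to.
const remote = setupRemoteServer()
remote.listen({ port: 12345 })
remote.use(
  http.get('https://api.example.com/resource', () =>
    HttpResponse.json({ mocked: true }),
  ),
)

// Application process: the regular setupServer() instance opts in by passing
// `remotePort` to listen(), e.g. `server.listen({ remotePort: 12345 })`.
```

The intent, as described here, is that requests the remote side does not handle fall through to the handlers registered in the application process.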
In some testing I'm doing (using this branch) I see from the 'initial state' commit 08010c9 log lines like
which is fine when handled, but more confusing when followed by the unhandled message e.g.
This is, however, intended if the remote is attached and then does not handle the request. I guess in this context that the first lines are intended only for debugging whilst developing (MSW itself, not user code)? I separately receive (on the testing side, from commit 3ea224f)
or
in setupRemoteServer.ts: line 101, which perhaps serves a similar purpose (if you like those lines, I'll remove the trailing space which gives a double space between 'Ignored'/'Handled' and 'GET'). |
Awesome PR, hope this gets merged. This unlocks overriding server mocks in frameworks like Next.js and Remix |
@marval2 - I agree, but just to check, did you try this branch on some code, and did it work for you? (It'd be good to know that it works for someone else in practice too, not just that you like the idea, although I'm pleased even if it's only the latter). |
Thank you for your work on this! This needs a really thorough review. I don't have the time right now to dive into this. Others are also welcome to review the code and share their thoughts. |
In the spirit of your X posts regarding opening PRs and your comment on Discord with a mid-April timeline, how's this looking? :) |
Hi, @chrisb2244! Sorry, I haven't found time to look at this yet. Realistically, this will take a while until I get to it. If I have a spare afternoon, I will go through this sooner than that. Apologies for the wait. |
Thanks for the update. Looking forward to feedback in the future :) |
what would be your recommended way of pulling this in to test? Doing a |
@petejodo - I'm not sure this is the best way, but you can clone this repo/branch, then run I think that correctly installs the local (built) version, although there may be more effective methods. You should see something like |
ok I just spent way too much time trying to get Next.js, Cypress and MSW working in a temporary state while waiting for this feature (by running Next in the Cypress process), only for it to blow up in my face with what I feel is a Next bug, but I'm not sure. Anyway, I need to get other work done now to make up for that, but the moment I have free time next week, I'm gonna try this out. We desperately need something to improve our testing situation as nothing really seems to work well. Thanks for this btw!!! |
I know I said a week but I meant a month 😅 To give some preface: prior to this, I had Cypress start Next in-process so as to share the MSW server, and had the specs (which run in the browser) issue commands for updating the MSW server, e.g.

```ts
export type MockRequest = {
path: `/${string}`;
method: "get" | "post" | "put" | "delete";
response: JsonBodyType;
};
declare global {
namespace Cypress {
interface Chainable {
mockMSW(mockRequest: MockRequest): void;
}
}
}
Cypress.Commands.add("mockMSW", (mockRequest) => {
cy.task("mockMSW", mockRequest);
});
```

This connects with the following in the Cypress config:

```ts
async setupNodeEvents(on, config) {
const { server } = await import("./path/to/server-with-default-handlers");
await startNext();
on("before:run", () => {
server.listen({ port: 12345 });
});
on("task", {
mockMSW(req: MockRequest) {
server.use(createMock(req));
return null;
},
clearMSW() {
server.resetHandlers();
return null;
},
});
}
```

Besides unrelated issues, this works. Specs pass. To consume these changes, nothing had to change with the commands, and I changed `setupNodeEvents` to:

```ts
async setupNodeEvents(on, config) {
const remote = setupRemoteServer();
on("before:run", () => {
remote.listen({ port: 12345 });
});
on("task", {
mockMSW(req: MockRequest) {
remote.use(createMock(req));
return null;
},
clearMSW() {
remote.resetHandlers();
return null;
},
});
}
```

Next.js now runs entirely separately, and to integrate MSW with it, I enabled instrumentation hooks and created an instrumentation file:

```ts
export async function register() {
if (process.env.NEXT_RUNTIME === "nodejs") {
const { server } = await import("./path/to/server-with-default-handlers");
server.listen({ remotePort: 12345 });
}
}
```

This didn't work, and I saw logs like the following on the Cypress (remote) side:
Then I thought to try setting the default handlers on the
I did this on an actual Next.js app, so I can't share that code. When I find the time, I'll create a repo that reproduces these issues. Oh, and to get this working locally, I cloned the repo, built it, and then in my app's project directory, I did:
|
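(`createMock` isn't shown in the comment above; assuming it simply maps a `MockRequest` onto an MSW handler, a minimal sketch could look like this - the import path for the type is hypothetical.)

```ts
import { http, HttpResponse } from 'msw'
import type { MockRequest } from './mock-request' // hypothetical location of the type shown above

// Map the serialisable MockRequest (sent over cy.task) onto a real MSW handler.
export function createMock(mock: MockRequest) {
  return http[mock.method](mock.path, () => HttpResponse.json(mock.response))
}
```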
I recently created a simple repository to demonstrate the integration of the MSW Remote Server API with the Playwright test runner. It is a basic application that renders a list of recipes using data fetched from an external API, which are then mocked in the tests using MSW. You can use this repository as a playground for the MSW Remote Server API. @chrisb2244 In testing, I found that the happy path (mocking successful requests) works perfectly 💪🏻 🎉. However, on the downside, I encountered issues when trying to mock an error response, resulting in the following error:
Additionally, I haven't found an option to disable logging from the remote server ( |
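For reference, the kind of handler that triggers the error described would look something like this (the endpoint URL and the `./msw-remote` module are made up; `remote` stands in for the `setupRemoteServer()` instance):

```ts
import { test } from '@playwright/test'
import { http, HttpResponse } from 'msw'
import { remote } from './msw-remote' // hypothetical module exporting the setupRemoteServer() instance

test('renders an error state', async ({ page }) => {
  // A mocked error response added through the remote server - the case reported
  // as failing above, in contrast to the happy-path (2xx) mocks that work fine.
  remote.use(
    http.get('https://api.example.com/recipes', () =>
      HttpResponse.json({ message: 'Something went wrong' }, { status: 500 }),
    ),
  )

  await page.goto('/')
})
```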
@SebastianSedzik thanks for the feedback! Good catch with the error response; if I get some feedback that this might be merged, I'll take a look at that, and it should probably be easy to add a test to check that the code works correctly. With regards to verbosity - yes, I recall it being a bit noisy. I suspect adding some option to tone it down should be fairly straightforward, but I'll defer to however Artem would want that handled in line with the rest of the repo (I believe I maintained the verbosity from the branch that I used as an initial source, but those logging statements may have been intended only for development, and to be removed before merging?). |
Link to a release relating to the error-handling changes (maybe related): https://github.com/mswjs/msw/releases/tag/v2.3.0. Whilst this branch doesn't share that code, and I suspect @SebastianSedzik's code doesn't use it either if using this branch (I saw a compressed file in the example repo, although I didn't check the version of the interceptors library, where I believe the actual change occurred), any future code written to handle the problem he described should respect the new MSW design, not what was in place when this PR was created. |
@kettanaito (sorry for nagging you, hope you don't mind) Do you have any plans to review this PR? Is it likely to be a matter of a few weeks, months, or is it hard to say at this point (which is also a perfectly fine answer)? Knowing a realistic timeline would be very helpful for our internal technical decisions. Also, is there anything that I (or other interested people who are not maintainers of MSW) can do to help move this forward? |
Sorry ahead of time for the word salad; I'm not 100% familiar with all the terminology for MSW's internals. I've been playing with this a bit, and to make sure I fully understood what was going on, I did a minimal reimplementation from scratch in a sample project.

One thing I'm confused about is how ports are supposed to work. In #1617 there was an assumption made that handlers would affect the whole app. That's okay, but I still don't get how it can work with the assumption of a single port. If I'm running Playwright with parallel execution (by default, one file runs in one worker; let's not get into fullyParallel, where each individual test could run in a separate worker, as it introduces extra complexity), then I may have one remote server running for every worker. If I understand what I see in the sample app above and from reading the code, the remote port is set in the listen call on both the client (which is the Node MSW) and the "server" (which runs in the browser; there can be N workers, so N browsers). In that case, the servers would conflict as they try to listen on the same port. If you assign a random port, then the client doesn't know which server to connect to. Am I understanding this correctly?

In my little proof of concept, I got around this by using a randomly available port (passing the port along in a request header). Unfortunately, while Playwright can intercept all requests to inject this header globally in a beforeEach (so no app code is needed to forward the header from browser -> server), forwarding that header for server-to-server HTTP requests is harder. In my case, I'm using Remix, so I wrote a Remix-specific adapter where the header is handled automatically and passed to my HTTP client. That implementation is not only Remix-specific, but also specific to my HTTP client, and thus would be unacceptable for something agnostic like MSW.

I don't know how that would be solved. Have you folks mused on this problem at all? I thought there could be an initial handshake when creating the remote API to tell the Node MSW about the available servers, but that still wouldn't let it know which server to connect to for which request. The only realistic option I can think of is providing a couple of environment-agnostic primitives to build a platform/framework-specific wrapper with. The good news is that this can all be done in userland without any change to MSW. But I'd love a better option if someone smarter than me has thought of it. |
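The browser-to-server half of that idea is indeed straightforward in Playwright. A sketch, assuming each worker exposes its randomly chosen port via an environment variable, and using a made-up `x-msw-remote-port` header name:

```ts
import { test } from '@playwright/test'

// Attach the worker's remote-server port to every request the browser makes,
// so the app's Node side could (in principle) learn which socket server to use.
test.beforeEach(async ({ context }) => {
  await context.setExtraHTTPHeaders({
    'x-msw-remote-port': String(process.env.MSW_REMOTE_PORT), // assumed to be set per worker
  })
})
```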
@Phoenixmatrix - I'm not currently able to look at this in detail, but I can confirm that as I recall, when I implemented this PR I followed the assumption you highlighted - that the MSW handlers were unique. As you indicated, there is no need for this to be true (and various advantages, again as you noted, to avoid this assumption). Apologies for the vague response below, but I'm away from a pc and may be for some time, so I'd prefer to at least give my initial, unchecked thoughts. I am unsure to what extent it affects or mitigates the issue you're describing, but have you considered the use of MSW's boundaries to limit the scope of an added handler? Artem wrote an article introducing server boundaries here: https://mswjs.io/blog/introducing-server-boundary/ If that doesn't address the issue you raise, please feel free to point out that I've gone in completely the wrong direction here, and that independent ports are required for your purposes. |
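For anyone unfamiliar with the API being referenced, `server.boundary()` scopes `server.use()` calls to the wrapped function, roughly like this (a sketch based on the linked blog post; the endpoint and test framework globals are illustrative):

```ts
import { http, HttpResponse } from 'msw'
import { setupServer } from 'msw/node'

const server = setupServer()
server.listen()

it(
  'fetches the mocked user',
  server.boundary(async () => {
    // This override is only visible inside this boundary, so concurrent tests
    // in the same process don't see each other's handlers.
    server.use(
      http.get('https://api.example.com/user', () =>
        HttpResponse.json({ id: 'abc-123' }),
      ),
    )

    const response = await fetch('https://api.example.com/user')
    expect(await response.json()).toEqual({ id: 'abc-123' })
  }),
)
```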
I wasn't aware of boundaries, but I don't think they solve the issue. From reading the docs, server boundaries handle concurrency within a single process. The problem is that the Playwright side spawns multiple workers (processes). That means multiple instances of the remote process, and thus multiple calls to listen. These need to happen on different ports to avoid conflicting with each other, at which point the client (which, from our app's point of view, runs on the Node server) now has to know which socket server to connect to, as there will be N of them (one per worker).

Reading the code, I believe there's an attempt to cache the server on the global scope to reuse it across remote API instances, but there's one global scope per worker, so that won't change anything here. If the workers were in the same process they could share a single remote server and then use boundaries to avoid conflicting, but they don't. There will be N processes, where N is the parallelism amount allowed, and thus N socket servers on N different ports. If we disable parallel execution then things work fine as is, but that's not ideal.

My initial thought is that if we use a random unassigned port on the MSW server and make it easy for the test to access, a framework wrapper author could forward it in a header as part of the requests. Then, in the Node MSW call, instead of a static remote port, we could pass in a callback in the form (Request) => remotePort, allowing us to snatch the port from the request header. Internally, there would be multiple socket connections managed and cached for these requests, and then everything would work as expected, at the cost of some more complex setup. It would keep MSW framework-agnostic, but would be more complex to use (while the single-process scenario could still "just work" via some reasonable defaults). I could even see the above published as a separate package from msw aimed at library authors, rather than MSW users. |
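That option does not exist in MSW today, but as a sketch of the shape being proposed (both the callback form of `remotePort` and the header name are hypothetical):

```ts
import { setupServer } from 'msw/node'
import { handlers } from './handlers' // the app's default handlers (assumed module)

const server = setupServer(...handlers)

// Hypothetical API: resolve the remote port per request from a header that a
// framework-specific wrapper injected on the way in, instead of a fixed port.
server.listen({
  remotePort: (request: Request) =>
    Number(request.headers.get('x-msw-remote-port')),
})
```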
@chrisb2244, hi! Could you please rebase this against #1617? Your pull request goes to I pushed some improvements to the base branch, fixing the tests and adding some fixes. |
@kettanaito - I messaged you privately regarding my (present lack of) availability, but I took a look at the current #1617 branch and your recent comment there, so I'll post here now for others following this thread and to partially address your request above.

The changes in the current 1617 branch (with the permissive catch-all handler for the initial start, and the different handling of the requests by the remote) are quite significantly different from the 1617 from which this PR branched out. However, that second point in particular obviates some of the changes here, which are specifically written to allow the remote handlers to correctly respond to unhandled requests and to allow the local server to consider them unhandled rather than broken, per the previous code comments and direction (iirc - it's been a while...). Some of the changes that need resolution when rebasing onto the new 1617 are easily achieved (e.g. renames in some util files, perhaps due to changes in main that were merged to 1617), and others appear to be fairly close to consistent (movement of the onAnyEvent handling from SetupServerCommonApi.ts into SetupServerApi.ts via a check on

If this PR remains open in the coming months then I'll return to it as I gain more time, but given the different direction for #1617 it might not be practical to try to use this code alongside/on top of that set of changes. I'm hopeful that, regardless of the method, the end goal can be achieved for MSW users - the remote server feature seems powerful for various different testing scenarios around server-side fetching. |
Thanks for a detailed response, @chrisb2244! I must admit, I still struggle to understand what exactly the proposed changes do here. I understand it's been a while, and I also respect your time, so that's okay - you don't have to rebase this branch just so I could see a nicer diff. The state of the original feature branch is really good right now, and the tests are the main thing that remains. It behaves exactly as I want it to. I've posted more detailed updates on that pull request for anybody interested. I think it's highly likely I will close this pull request in the near future. I am sorry for not collaborating on this sooner; I try to do my best with the time I have. Your efforts haven't been wasted here, though, as this is a nice reference for anybody wanting to learn more about this feature. |
Much cleaner PR for the remote-server handling.
The commits here are not an entirely accurate reflection of the manner in which the changes were implemented, and the "initial state" commit may not exactly reflect the initial state of the msw/feat/ws-sync-handlers branch, on which this code was based.
However, the reduced set of changed files and commits avoids the massive PR in #2028.
Only one of these two should be merged (if approved at all), but this one may be easier to review (the final state is almost identical; the other PR removed an originally existing console.log line, whereas here it is retained).