Lockup at client when awaiting JSON body #26
Comments
jnthn: Where are you thinking about adding the additional concurrency? I can imagine making changes anywhere from the most specific (hacking additional concurrency specifically into the frame parser / message parser boundary) to the most general (making sure that type of circular wait can't happen in Rakudo), and lots of points in between (such as allowing Cro pipelines to declare that they do fan-in or fan-out, and making sure the pipeline implementation accounts for that).
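As a point of reference for the fan-out idea, here is a plain-Raku sketch. It is not Cro's actual pipeline API; `fan-out` and `&work` are names invented for illustration. The idea: instead of doing potentially slow work inline in a `whenever` block, start the work on the thread pool and emit its result from a nested `whenever`, so the stage keeps draining its input rather than holding up later messages.

```raku
# Illustrative only: a supply-to-supply stage that fans work out to the
# thread pool instead of processing each item inline.
sub fan-out(Supply $in, &work --> Supply) {
    supply {
        whenever $in -> $item {
            my $running = start work($item);
            whenever $running -> $result {
                emit $result;        # results arrive in completion order
            }
        }
    }
}

# Example: square numbers as they arrive.
say fan-out(Supply.from-list(1..5), { $_ ** 2 }).list.sort;  # (1 4 9 16 25)
```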
Mostly taken from examples in `t/`, the majority of these test the performance of one of the key Cro::WebSocket modules, but there are two exceptions:

* `masking-perf.p6` tests *only* the masking operation in isolation.
* `router-perf.p6` tests pretty much the whole stack.

The tests all have a default number of iterations, but this can be overridden on the command line; they all accept a single positional argument. Note that the default iteration count for all tests except `masking-perf.p6` assumes that the first optimization (faster masking) has been applied; otherwise the frame modules will run two orders of magnitude slower. Finally, note that `router-perf.p6` exposes a deadlock in Cro::WebSocket, hence the reason it defaults to one iteration and has debug prints to show where the lockup occurs. See croservices#26 for more details.
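As a rough illustration of the command-line interface described above, a perf script of this shape takes a single optional positional argument for the iteration count. The default value and the loop body below are placeholders, not taken from the actual scripts:

```raku
# Hypothetical skeleton of one of the perf scripts: a single optional
# positional argument overrides the default iteration count.
sub MAIN(Int $iterations = 1000) {
    my $start = now;
    for ^$iterations {
        # ... exercise the Cro::WebSocket piece under test ...
    }
    say "$iterations iterations in {now - $start} seconds";
}
```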
I have been converting the cro-websocket test files into performance tests of the various pieces. This had been working fine with most of the parser/serializer/handler bits, but when I did a test of the whole client/server pair, I ran into a lockup at the client.
See https://gist.github.com/japhb/c07f5699dbb6d2e45a392865b52abe58 for the test file. (The perf test version of this file just comments out the `say` calls and raises `$repeat` to something in the ~1000 range.) It's pretty straightforward; I was comparing the performance of round trips of text and JSON bodies, using a server side taken nearly as-is from `t/http-router-websocket.t`. The client should be pretty uncontroversial, but though it seems to work fine for plain text round trips, it locks up while awaiting the body-text on the client side for JSON round trips.

Any ideas what's going on here? The only obvious difference I can see between the plain text and JSON cases with `CRO_TRACE=1` is that the plain text response is sent unfragmented, while the JSON response message is sent as a fragment frame containing the entirety of the JSON data followed by an empty continuation frame; but if that's really the problem, I'm surprised cro-websocket passes its own test suite.