unable to call debug_traceBlockByHash on large blocks #984
LOL. Block 0x145064ae22b9e189e4adbc9a536f8f42a833540ca7881b5cbeeb43ee687bbce2 (for example) has a single transaction, but if you enable memory tracing it takes more than math.MaxInt32 bytes to store the JSON.
FWIW, I noticed that grpc itself uses math.MaxInt32 internally for similar limits. Since I think the mechanism you have for passing the response back to the HTTP server actually supports multiple calls to Write, the core issue with writing this JSON (ignoring the higher-level problems with this API, which I feel fine ignoring as I have hundreds of gigabytes of RAM) is that Go's JSON encoder is awkwardly layered: not only does it not support SAX-style streaming, it internally buffers its entire result before writing it to the io.Writer in a single shot. Frankly, the API may as well just return a buffer at that point, so nobody assumes the design is reasonable. So it could be that just having some kind of re-buffering io.Writer, one that takes incoming Write calls for large sizes and breaks them up into smaller writes, would be sufficient to fix this. (It just feels really weird: what I'd expect is for the JSON encoder to do tiny writes, and then, instead of it having its own buffer, you could add a layer similar to a java.io.StringOutputStream if you wanted the whole buffer, or something similar to a java.io.BufferedOutputStream if you wanted to aggregate the writes into larger chunks, which would make this stack more intuitive.)
Another option maybe worth considering is having these channels use the gRPC compression mechanism I kept coming across while trying to fix all of these limits: these JSON results compress extremely well (which is why I'm bothering to collect these files). Even with zstd -3 this 3G file compresses down to 7M, while gzip apparently does much worse (690M, probably due to the window-size difference, despite using a lot more CPU). I swear I've checked round-tripping this through zstd and it really is just 7M; and zstd -19, which takes a long time, gets it all the way down to 5M ;P
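To make the "re-buffering io.Writer" idea concrete, here is a minimal sketch; the chunkWriter name, its package, and the way it would be wired in are illustrative assumptions, not code from coreth or avalanchego.

```go
package jsonrpc

import "io"

// chunkWriter wraps an io.Writer and splits oversized Write calls into
// smaller pieces, so a downstream layer that caps the size of a single
// write (for example a gRPC-backed response stream) never sees one huge
// payload. Hypothetical helper for illustration only.
type chunkWriter struct {
	w     io.Writer
	limit int // maximum bytes forwarded per underlying Write call
}

func (c *chunkWriter) Write(p []byte) (int, error) {
	written := 0
	for len(p) > 0 {
		n := len(p)
		if n > c.limit {
			n = c.limit
		}
		m, err := c.w.Write(p[:n])
		written += m
		if err != nil {
			return written, err
		}
		if m < n {
			// Underlying writer violated the io.Writer contract.
			return written, io.ErrShortWrite
		}
		p = p[n:]
	}
	return written, nil
}
```

Wrapping the gRPC-backed response writer this way would let the JSON encoder keep doing its single giant Write while the layer underneath only ever sees pieces no larger than limit.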
I made a PR to implement your suggestion. As for setting the remaining values: you're correct, there's no reason to limit the message sizes. I'll make a PR to address that.
We have removed the usage of the broker in |
Blocks have been getting larger, and so has the result of debug_traceBlockByHash. I'm trying to trace block #8942019, hash 0x112b4b6bf1097fa43a8f7406310184040a3a63716091fa3c3322d434617840d6, and I just don't get any result at all. I've managed to determine that it has no issue generating the answer, but writeJSONSkipDeadline is returning an error that is being eaten by handleMsg. I modified coreth to output the error and found that it was, weirdly, due to a grpc client in avalanchego refusing to receive all of the data.
I've determined that the following patch (which notably affects the second of two blocks of repeated, similar code, the gresponsewriter) allows me to get the full response. Note that math.MaxInt32 is what grpc uses internally as the default for some other similar limits, and it is also what the go-plugin project uses to disable such limits in its client, so I believe it is more correct than merely picking a larger arbitrary limit, assuming I am understanding correctly that this applies to responses from APIs (and that a user can't connect and send an unbounded amount of data to this server).
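The patch itself is not reproduced above. As a rough illustration of the kind of change being described (raising the client-side gRPC message-size caps to math.MaxInt32), here is a sketch using the standard google.golang.org/grpc call options; the dialWithoutSizeLimits helper and its dial site are hypothetical and not the actual avalanchego code.

```go
package rpc

import (
	"math"

	"google.golang.org/grpc"
)

// dialWithoutSizeLimits dials a gRPC target with the per-message size
// caps effectively disabled. gRPC's default receive limit is 4 MiB;
// math.MaxInt32 mirrors what go-plugin uses to turn the limit off.
// Hypothetical helper for illustration only.
func dialWithoutSizeLimits(target string) (*grpc.ClientConn, error) {
	return grpc.Dial(target,
		grpc.WithInsecure(),
		grpc.WithDefaultCallOptions(
			grpc.MaxCallRecvMsgSize(math.MaxInt32),
			grpc.MaxCallSendMsgSize(math.MaxInt32),
		),
	)
}
```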