
ProxyClient should heed instruction to shut up #801

Open
dnwiebe opened this issue Jun 26, 2024 · 1 comment · May be fixed by MASQ-Project/Node#450
Assignees
Labels
hardenning Getting a feature working stronger

Comments


dnwiebe commented Jun 26, 2024

The way the ProxyClient presently works is that when a TCP stream to the server is established, it is immediately split into a Read half and a Write half. The Write half is stored in StreamHandlerPoolRealInner::stream_writer_channels, keyed by its StreamKey, and the Read half is spawned off into its own Future that listens for data from the server and sends Actor messages to the ProxyClient when it arrives.
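The split-and-store arrangement described above can be sketched with std-only stand-ins. This is a hypothetical illustration, not the Node's actual code: the real StreamKey is a hash of routing information, the real write half is an async sink, and the real Read half runs as a spawned Future rather than a plain function.

```rust
use std::collections::HashMap;
use std::sync::mpsc::{channel, Sender};

// Hypothetical stand-ins for the real MASQ types.
type StreamKey = u64;
type WriteHalf = Sender<Vec<u8>>;

struct StreamHandlerPoolInner {
    // Mirrors StreamHandlerPoolRealInner::stream_writer_channels:
    // write halves of server streams, keyed by StreamKey.
    stream_writer_channels: HashMap<StreamKey, WriteHalf>,
}

impl StreamHandlerPoolInner {
    fn new() -> Self {
        Self { stream_writer_channels: HashMap::new() }
    }

    // When a TCP stream to the server is established, split it and keep
    // the write half so later CORES packages can be forwarded to the server.
    // The returned receiver models the half handed to the Read half's task.
    fn register_stream(&mut self, key: StreamKey) -> std::sync::mpsc::Receiver<Vec<u8>> {
        let (tx, rx) = channel();
        self.stream_writer_channels.insert(key, tx);
        rx
    }

    // Forward data from the originating Node to the server's stream, if the
    // StreamKey is still known.
    fn write_to_server(&self, key: StreamKey, data: Vec<u8>) -> bool {
        match self.stream_writer_channels.get(&key) {
            Some(tx) => tx.send(data).is_ok(),
            None => false,
        }
    }
}
```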

When the server closes such a connection, the Read half of the stream detects the closure and informs the ProxyClient via its stream_killer channel. The ProxyClient then sends a last_data = true CORES package to the originating Node, and everything is taken care of in order.
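The server-initiated closure path can be sketched the same way. Again this is a hypothetical stand-in: the real stream_killer is an async channel and the Read half is a polled Future, but the shape of the logic is the same — forward data chunks until the server's EOF arrives, then report the StreamKey so the ProxyClient can send the last_data = true CORES package.

```rust
use std::sync::mpsc::Sender;

// Hypothetical stand-in for the real StreamKey hash.
type StreamKey = u64;

// Simulated Read half: drains server data, and when the server closes the
// stream (modeled here as None, like a zero-byte read), reports the
// StreamKey on the stream_killer channel and stops. Returns the number of
// chunks that were forwarded to the ProxyClient.
fn read_half_loop(
    server_data: Vec<Option<Vec<u8>>>, // None models EOF from the server
    key: StreamKey,
    stream_killer: Sender<StreamKey>,
) -> usize {
    let mut forwarded = 0;
    for item in server_data {
        match item {
            Some(_data) => forwarded += 1, // would become an Actor message
            None => {
                stream_killer
                    .send(key)
                    .expect("ProxyClient side has gone away");
                break;
            }
        }
    }
    forwarded
}
```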

When the originating Node sends a last_data = true CORES package, the ProxyClient looks up the Write half of the stream in stream_writer_channels and instructs it to close the stream. When the server sees that its client wants to close the stream, IN MOST CASES it will acknowledge the closure, which the Read half of the stream will detect, as above, and everything will be taken care of in order.

However, sometimes the server is in the middle of sending lots and lots of data (for example, when the user is watching a video stream) and might not get around to checking whether the ProxyClient has requested closure of the stream for hours. In that case, the Read half of the stream will continue to receive the data and pass it on in Actor messages to the ProxyClient, which will continue to provide (and charge for) exit services for that data. The routing Nodes will continue to provide (and charge for) routing services for the data, and the originating Node will have nowhere to send it, since the browser has shut down the stream. Not only that: the originating Node, having retired the StreamKey, will have no idea whom to pay for all that unwanted data, and so will go rapidly delinquent with the exit Node and the relay Nodes and will be banned.

The problem here is that there is no way for the ProxyClient, when it receives a last_data = true CORES package from the originating Node, to tell the Read half of the stream to shut down and stop passing data from the server.

Task: Create some kind of communication method between the ProxyClient (or perhaps its StreamHandlerPool, if that is more appropriate) and the StreamReader, to allow the ProxyClient to instruct the Read half of the stream associated with a particular StreamKey to shut down and abandon its stream from the server. Probably this will involve modifying the poll() method of the StreamReader to check for that communication either before or after every time it handles incoming data from the stream.
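One possible shape for that communication method is a per-stream shutdown channel that the StreamReader checks non-blockingly on each pass through its read loop. The sketch below is a std-only illustration of the idea, not a proposed implementation: in the real Node the check would live in the StreamReader's poll() and the sending end would be held by the StreamHandlerPool, keyed by StreamKey.

```rust
use std::sync::mpsc::{channel, Receiver, Sender, TryRecvError};

// Hypothetical per-stream shutdown signal.
struct StreamReader {
    shutdown_rx: Receiver<()>,
}

impl StreamReader {
    // True if the reader should keep processing server data; false once the
    // ProxyClient has told it to shut up (or the sender has been dropped).
    fn should_continue(&self) -> bool {
        match self.shutdown_rx.try_recv() {
            Ok(()) | Err(TryRecvError::Disconnected) => false,
            Err(TryRecvError::Empty) => true,
        }
    }

    // Simulated read loop: consumes server chunks until shutdown is
    // signaled, then abandons the stream. Returns the forwarded count.
    fn run(&self, server_chunks: Vec<Vec<u8>>) -> usize {
        let mut forwarded = 0;
        for _chunk in server_chunks {
            if !self.should_continue() {
                break; // stop passing data to the ProxyClient
            }
            forwarded += 1;
        }
        forwarded
    }
}

// The ProxyClient side keeps the Sender; send(()) delivers the shutdown.
fn make_pair() -> (Sender<()>, StreamReader) {
    let (tx, rx) = channel();
    (tx, StreamReader { shutdown_rx: rx })
}
```

In the real poll()-based StreamReader the same try-receive check would run before (or after) each chunk is handled, so a runaway video stream is abandoned within one read's worth of data rather than hours later.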

Whatever this communication method ends up being, modify the ProxyClient to employ it when it receives a last_data = true CORES package from the originating Node.

Note: We have run into situations before, when closing virtual streams, where too many last_data = true CORES packages are sent. For example: the browser closes its stream, so the originating Node sends last_data to the exit Node. The exit Node has its Write half close the stream to the server, whereupon the server acknowledges by closing its end of the stream as well. The exit Node's Read half detects this closure and informs the ProxyClient, at which point the ProxyClient sends a last_data CORES package back to the originating Node to tell it the stream has been closed. This last CORES package is 1) unnecessary, 2) a needless expense, and 3) unrecognized and thrown away by the originating Node because its StreamKey has already been retired on that end. When you design your solution for the Task above, don't make this problem any worse.
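One way to avoid making the redundant-last_data problem worse is to remember which StreamKeys were closed at the originating Node's request, and suppress the echoed last_data when the server's acknowledging closure comes back on those streams. The sketch below is a hypothetical guard, using stand-in types, not the Node's actual ProxyClient:

```rust
use std::collections::HashSet;

// Hypothetical stand-in for the real StreamKey hash.
type StreamKey = u64;

#[derive(Default)]
struct ProxyClient {
    // StreamKeys already retired at the originating Node's request.
    closed_by_originator: HashSet<StreamKey>,
    // Records outbound last_data CORES packages (stand-in for sending).
    sent_last_data: Vec<StreamKey>,
}

impl ProxyClient {
    // Originating Node sent last_data = true: the shutdown signal to the
    // StreamReader is elided here; just note that this key needs no echo.
    fn handle_inbound_last_data(&mut self, key: StreamKey) {
        self.closed_by_originator.insert(key);
    }

    // The Read half reports that the server closed the stream.
    fn handle_server_closure(&mut self, key: StreamKey) {
        if self.closed_by_originator.remove(&key) {
            return; // originator already retired the StreamKey; stay quiet
        }
        // Server-initiated closure: the originating Node still needs to know.
        self.sent_last_data.push(key);
    }
}
```

For the Extra Credit, the same bookkeeping could be consulted in every closure path, so that exactly one side ever announces the end of a given stream.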

Extra Credit: See if you can find a way to clean up the needless-last_data issue above for all stream closing scenarios, not just browser-initiated ones.

@github-project-automation github-project-automation bot moved this to 🆕 New in MASQ Node v2 Jun 26, 2024
@kauri-hero kauri-hero assigned kauri-hero and utkarshg6 and unassigned kauri-hero Aug 4, 2024
@kauri-hero kauri-hero added the hardenning Getting a feature working stronger label Aug 4, 2024
@kauri-hero kauri-hero moved this from 🆕 New to 🏗 Development In Progress in MASQ Node v2 Aug 4, 2024
@utkarshg6 utkarshg6 linked a pull request Sep 17, 2024 that will close this issue

dnwiebe commented Sep 20, 2024

Idea for testing:

In order to test this functionality, you have to be an exit Node. I can see two possibilities.

First, you could set up an ad-hoc Network where each Node in the network was started with log-level debug, run the test, figure out which one of the Nodes had been selected as an exit, and examine its log file.

Second, you could start up a single Node in Zero-Hop mode (and log-level debug), so that it serves as originating and exit Node in one; this card doesn't really care about routing data through a real Network.

Anyway, for this test you'll need a badly-behaved server that ignores clients that shut down streams and keeps sending data regardless. I don't know of such a server off the top of my head, but I know we've encountered them before, which is why this card exists.

Find one of those servers, start up a Node or Network, connect to the server, start a transfer, and use the browser to stop it. Wait a few seconds, then take down the Node or Network and look at the exit-Node logfile. At the point where you stopped the transfer, there should be a StreamHandlerPool log that looks something like this:

A shutdown signal was sent to the StreamReader for stream key {:?}.

Shortly thereafter, there should be a log from the StreamReader (which will have a complex name including the stream key and peer address):

Shutting down for stream: {:?}

If you see these logs, it means that the shutdown instruction from the browser was propagated to the ProxyClient's StreamReader and made it stop listening to the server even though the server was perfectly willing to keep talking forever.

Come to think of it, there should probably also be logs from the ProxyClient about sending a last_data = true CORES package back to the originating Node signaling the end of the stream, so that the originating Node can forget about it. If you see that, it should give you a warm fuzzy feeling, but this card isn't specifically about that package.

@utkarshg6 utkarshg6 moved this from 👩‍💻 Code Review Unfinished to 📃 Quality Assurance Unfinished in MASQ Node v2 Oct 9, 2024
Projects
Status: 👩‍💻 Code Review Unfinished

3 participants