Received END_STREAM with byte still pending #386

Closed · fekle opened this issue Aug 1, 2024 · 7 comments · Fixed by #387

fekle commented Aug 1, 2024

Hi there,

I'm using Phoenix with Bandit via PhoenixAdapter.
My application is running in DigitalOcean Kubernetes behind an Nginx Ingress.
Today I wanted to try HTTP/2 proxying with Nginx and Bandit, so I enabled HTTPS on my Phoenix endpoint and HTTPS proxying in Nginx via nginx.ingress.kubernetes.io/backend-protocol: "HTTPS".

From a functionality perspective, everything seems to work fine, but Bandit logs a lot of errors like this:

[...]
14:35:39.157 [error] ** (Bandit.HTTP2.Errors.StreamError) Received END_STREAM with byte still pending
14:36:11.363 [error] ** (Bandit.HTTP2.Errors.StreamError) Received END_STREAM with byte still pending
14:36:39.346 [error] ** (Bandit.HTTP2.Errors.StreamError) Received END_STREAM with byte still pending
[...]

I haven't found anything about this yet, so I'm not sure what to do. I tried disabling SSL passthrough in Nginx, to no avail.

Here is my Endpoint configuration:

config :my_app, MyAppWeb.Endpoint,
  adapter: Bandit.PhoenixAdapter,
  pubsub_server: MyApp.PubSub,
  url: [port: 443, scheme: "https", host: app_host],
  http: [
    ip: {0, 0, 0, 0, 0, 0, 0, 0},
    port: app_port
  ],
  https: [
    ip: {0, 0, 0, 0, 0, 0, 0, 0},
    port: app_tls_port,
    cipher_suite: :strong,
    keyfile: app_tls_key,
    certfile: app_tls_cert
  ],
  force_ssl: [rewrite_on: [:x_forwarded_proto]],
  secret_key_base: app_secret_key_base

Please let me know if there is any additional information that I could provide :-)

EDIT: Damn, I keep forgetting that:

os_version = debian-bookworm-20240722-slim
elixir_version = 1.17.2
otp_version = 27.0.1
phoenix_version = 1.7.14
bandit_version = 1.5.5
nginx_version = 1.11.1

mtrudel (Owner) commented Aug 1, 2024

That error means that the client is closing an HTTP/2 stream before having sent the full body length that its content-length header specifies. For example, if a client sends a content-length of 5000 bytes but then ends the stream after sending only 4999 bytes, you'll see this error.

Without knowing the specifics of the upstream client, this suggests a client error (where the client in this case is your nginx proxy). I would doubt that SSL has anything directly to do with it.

Do you have any idea of the size of the payloads in question? Is it possible for you to reproduce this on demand? And if so, is it possible for you to run some capturing functions on the node?
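
For illustration (not from this thread), a minimal sketch of how such a truncated stream could be produced from Elixir, assuming the Mint HTTP client and a local HTTP/2 listener on port 4001; the host, port, and byte counts are placeholders:

# Hypothetical repro sketch: open an HTTP/2 stream, advertise
# content-length: 5000, but end the stream after only 4999 bytes.
{:ok, conn} =
  Mint.HTTP.connect(:https, "localhost", 4001,
    protocols: [:http2],
    transport_opts: [verify: :verify_none]
  )

# Start a request with a streamed body and a deliberately oversized length.
{:ok, conn, ref} =
  Mint.HTTP.request(conn, "POST", "/", [{"content-length", "5000"}], :stream)

# Send one byte too few, then close the stream (Mint sends END_STREAM on :eof).
{:ok, conn} = Mint.HTTP.stream_request_body(conn, ref, :binary.copy("a", 4999))
{:ok, _conn} = Mint.HTTP.stream_request_body(conn, ref, :eof)

Assuming Mint does not itself enforce the declared length, the server should then observe exactly the condition described above.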

fekle (Author) commented Aug 1, 2024

Hi and thanks for your prompt reply!

I would doubt that SSL has anything directly to do with it.

I agree, it is not related to SSL but rather to HTTP/2 (which I'm using via HTTPS; h2c is not worth the hassle).

Do you have any idea of the size of the payloads in question? Is it possible for you to reproduce this on demand?

I can reproduce it on demand by quickly reloading the web app multiple times.

And if so, is it possible for you to run some capturing functions on the node?

Sure!

I completely forgot to mention that I'm running in clustered mode with 2 or more Elixir nodes running Phoenix and Bandit. After some more testing, I can say with some confidence that the error occurs when Nginx is switching from one backend server to the other.
Using sticky sessions (Nginx affinity cookie) greatly reduces the occurrence of said log entries. So it looks like Nginx is not cleanly closing the HTTP/2 connections when switching backends. Unfortunately, using sticky cookies prevents the use of SSL passthrough.

So it looks like this is an issue with Nginx HTTP/2 proxy handling and not Bandit?

fekle (Author) commented Aug 1, 2024

I enabled log_protocol_errors: :verbose:

15:41:33.833 [error] ** (Bandit.HTTP2.Errors.StreamError) Received END_STREAM with byte still pending
    (bandit 1.5.5) lib/bandit/http2/stream.ex:553: Bandit.HTTPTransport.Bandit.HTTP2.Stream.stream_error!/2
    (bandit 1.5.5) lib/bandit/pipeline.ex:185: anonymous fn/1 in Bandit.Pipeline.commit_response!/1
    (bandit 1.5.5) lib/bandit/pipeline.ex:40: Bandit.Pipeline.run/4
    (bandit 1.5.5) lib/bandit/http2/stream_process.ex:27: Bandit.HTTP2.StreamProcess.handle_continue/2
    (stdlib 6.0.1) gen_server.erl:2163: :gen_server.try_handle_continue/3
    (stdlib 6.0.1) gen_server.erl:2072: :gen_server.loop/7
    (stdlib 6.0.1) proc_lib.erl:329: :proc_lib.init_p_do_apply/3
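
For reference, a sketch of where that option can live, assuming it is passed to Bandit via its http_options keyword through the endpoint's listener options shown earlier:

config :my_app, MyAppWeb.Endpoint,
  https: [
    # ... existing listener options ...
    http_options: [log_protocol_errors: :verbose]
  ]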

mtrudel (Owner) commented Aug 1, 2024

I think I see the issue: if you're not reading the body in your plug, Bandit isn't properly handling the end-of-stream case.
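
To picture that failure mode, here is a hypothetical plug (the module name is made up) that replies without ever calling Plug.Conn.read_body/2, leaving any client-sent DATA frames unread when the response goes out:

defmodule NoBodyPlug do
  import Plug.Conn

  def init(opts), do: opts

  def call(conn, _opts) do
    # Respond without reading the request body; the pending body bytes
    # remain on the HTTP/2 stream and must be drained at stream shutdown.
    send_resp(conn, 204, "")
  end
end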

Can you try deploying the branch for #387 and see if that resolves your issue?

{:bandit, github: "mtrudel/bandit", branch: "read_data_during_stream_shutdown", override: true}
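
(That tuple goes in the deps list of your mix.exs; run mix deps.get afterwards to fetch the branch.)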

fekle (Author) commented Aug 1, 2024

I tried your test branch, and yes, it seems to work!
I've had it running for a few minutes now and haven't seen the error logged once; before, it would start appearing immediately.

Thank you very much!

mtrudel (Owner) commented Aug 1, 2024

Great! I'll get this out in a release later today.

fekle closed this as completed Aug 1, 2024

mtrudel (Owner) commented Aug 1, 2024

Published as 1.5.6
