Expected Behaviour
I'm testing the tool in a non-production cluster, and I'm running into timeouts.
I wonder whether the solution would be to add some configurable timeouts, or whether the tool is simply not intended to run against a cluster with this many resources.
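For illustration, this is roughly what I mean by configurable timeouts — a minimal sketch, not eksup's actual code, assuming a kube-rs version where Config exposes connect_timeout/read_timeout fields; the durations are placeholders:

```rust
use std::time::Duration;
use kube::{Client, Config};

// Sketch: build a client with longer client-side timeouts so a slow list
// response (e.g. thousands of ReplicaSets) has time to finish streaming.
async fn client_with_longer_timeouts() -> anyhow::Result<Client> {
    let mut config = Config::infer().await?;
    config.connect_timeout = Some(Duration::from_secs(30));
    config.read_timeout = Some(Duration::from_secs(120));
    Ok(Client::try_from(config)?)
}
```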
k get replicasets | wc -l
+ exec kubectl get replicasets --context xxx --namespace yyy
4944
k get pods | wc -l
+ exec kubectl get pods --context xxx --namespace yyy
972
Listing the replica sets via kubectl takes around 17s.
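Another option could be paging the list so no single response runs long enough to hit the timeout. A minimal sketch (again not eksup's code, and assuming kube-rs ListParams supports limit/continue_token):

```rust
use k8s_openapi::api::apps::v1::ReplicaSet;
use kube::{api::ListParams, Api, Client};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let client = Client::try_default().await?;
    let api: Api<ReplicaSet> = Api::all(client);

    let mut continue_token: Option<String> = None;
    let mut total = 0usize;
    loop {
        // Ask the API server for at most 500 objects per page.
        let mut lp = ListParams::default().limit(500);
        if let Some(token) = &continue_token {
            lp = lp.continue_token(token);
        }
        let page = api.list(&lp).await?;
        total += page.items.len();
        // The server returns a continue token until the list is exhausted.
        continue_token = page.metadata.continue_.clone();
        if continue_token.is_none() {
            break;
        }
    }
    println!("listed {total} ReplicaSets");
    Ok(())
}
```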
Current Behaviour
The tool fails with a timeout error.
Code snippet
N/A
Possible Solution
No response
Steps to Reproduce
eksup analyze --cluster xxx --region us-east-1
eksup version
latest
Operating system
macOS x86_64
Error output
DEBUG hyper::proto::h1::conn: incoming body decode error: timed out
at /Users/runner/.cargo/registry/src/index.crates.io-6f17d22bba15001f/hyper-0.14.28/src/proto/h1/conn.rs:321
TRACE hyper::proto::h1::conn: State::close()
at /Users/runner/.cargo/registry/src/index.crates.io-6f17d22bba15001f/hyper-0.14.28/src/proto/h1/conn.rs:948
TRACE hyper::proto::h1::conn: flushed({role=client}): State { reading: Closed, writing: Closed, keep_alive: Disabled }
at /Users/runner/.cargo/registry/src/index.crates.io-6f17d22bba15001f/hyper-0.14.28/src/proto/h1/conn.rs:731
TRACE hyper::proto::h1::conn: shut down IO complete
at /Users/runner/.cargo/registry/src/index.crates.io-6f17d22bba15001f/hyper-0.14.28/src/proto/h1/conn.rs:738
TRACE tower::buffer::worker: worker polling for next message
at /Users/runner/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tower-0.4.13/src/buffer/worker.rs:108
TRACE tower::buffer::worker: buffer already closed
at /Users/runner/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tower-0.4.13/src/buffer/worker.rs:62
...
TRACE hyper::client::pool: pool closed, canceling idle interval
at /Users/runner/.cargo/registry/src/index.crates.io-6f17d22bba15001f/hyper-0.14.28/src/client/pool.rs:759
Error: Failed to list ReplicaSets
Caused by:
0: HyperError: error reading a body from connection: error reading a body from connection: timed out
1: error reading a body from connection: error reading a body from connection: timed out
2: error reading a body from connection: timed out
3: timed out
I'm also facing an issue when fetching ReplicaSets, but the error messages don't mention a timeout, so I'm not sure it's the same root cause:
Error: Failed to list ReplicaSets
Caused by:
0: HyperError: error reading a body from connection: error reading a body from connection: unexpected EOF during chunk size line
1: error reading a body from connection: error reading a body from connection: unexpected EOF during chunk size line
2: error reading a body from connection: unexpected EOF during chunk size line
3: unexpected EOF during chunk size line