Peer pods throwing TLS handshake errors. #18

Open
Gaurang10 opened this issue Jul 9, 2019 · 11 comments


@Gaurang10

We keep getting the error below on the peer pods created in Part1.

EOF server=PeerServer remoteaddress=192.168.97.19:27001
2019-07-09 09:07:37.123 UTC [core.comm] ServerHandshake -> ERRO 3bb TLS handshake failed with error EOF server=PeerServer remoteaddress=192.168.191.113:31574
2019-07-09 09:07:37.252 UTC [core.comm] ServerHandshake -> ERRO 3bc TLS handshake failed with error EOF server=PeerServer remoteaddress=192.168.191.113:10806
2019-07-09 09:07:37.315 UTC [core.comm] ServerHandshake -> ERRO 3bd TLS handshake failed with error EOF server=ChaincodeServer remoteaddress=192.168.191.113:10914
2019-07-09 09:07:37.330 UTC [core.comm] ServerHandshake -> ERRO 3be TLS handshake failed with error EOF server=PeerServer remoteaddress=192.168.97.19:58938
2019-07-09 09:07:37.414 UTC [core.comm] ServerHandshake -> ERRO 3bf TLS handshake failed with error EOF server=ChaincodeServer remoteaddress=192.168.97.19:63795

@MCLDG-zz

MCLDG-zz commented Jul 10, 2019 via email

@msolefonte

If you do not strictly require TLS for your network, you can avoid it by setting CORE_PEER_TLS_ENABLED and CORE_PEER_TLS_CLIENTAUTHREQUIRED to false.
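
For reference, turning it off in the peer container's env section looks something like this (just a sketch, the surrounding Deployment is assumed):

  env:
    # Sketch only: run the peer without TLS (plaintext gRPC)
    - name: CORE_PEER_TLS_ENABLED
      value: "false"
    # Do not require client certificates either
    - name: CORE_PEER_TLS_CLIENTAUTHREQUIRED
      value: "false"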

If you do need it, it is usually a problem with how the certs were generated or stored.

@Gaurang10
Author

@MCLDG We were trying to install our chaincode and node application and kept seeing TLS errors. We saw the errors above in the peer logs. Do you think we can safely ignore this?

@msolefonte Thank you so much. For now we are going ahead with TLS disabled, but I am not sure that is how we can go into production. Is there an alternative way to ensure encrypted communication between components?

@msolefonte

msolefonte commented Jul 10, 2019

I ran a PoC and disabled TLS because I trusted the underlying network security. I was using AWS and a Virtual Private Cloud, so it was not a problem for me, but you can try to achieve the same thing with a Kubernetes add-on such as Istio or Linkerd.
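
For example, with Istio installed you can require mutual TLS between all workloads in the Fabric namespace with a single policy (a rough sketch; the namespace name is a placeholder):

  # Istio PeerAuthentication: enforce mTLS for every workload in the namespace
  apiVersion: security.istio.io/v1beta1
  kind: PeerAuthentication
  metadata:
    name: default
    namespace: fabric-network   # placeholder namespace
  spec:
    mtls:
      mode: STRICT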

However, I have to say that Hyperledger Fabric manages certs pretty badly. Instead of creating them automatically by default, you have to pregenerate them and trust each of the nodes with all of the other nodes' pub/priv keys by sharing that private data through EFS or NFS. It gets harder if you want to use more than one infrastructure.

@Gaurang10
Author

@msolefonte spot on. Our ETH/Quorum deployment automation took about an order of magnitude less time and complexity. However, nothing even approaches the kind of throughput that Fabric can give (Sawtooth isn't anywhere near the required level of maturity), so we are stuck trying to make this thing work :(

@MCLDG-zz

MCLDG-zz commented Jul 10, 2019 via email

@msolefonte

Been there. If you want some recommendations, try adding anti-affinity policies between the orderers themselves and between the peers themselves. This is not a homogeneous network, so you are going to have bottlenecks if you rely on Kubernetes to schedule things on its own. You will want to use requests and limits too, but since Hyperledger has never published guidance about them, it is all guesswork.
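
Something like this in the peer pod spec, for example (a rough sketch; the label and the numbers are made up, tune them for your own network):

  spec:
    affinity:
      podAntiAffinity:
        # Never schedule two peers on the same node
        requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: hlf-peer              # placeholder label
            topologyKey: kubernetes.io/hostname
    containers:
      - name: peer
        resources:
          requests:                        # guesses, not official Hyperledger figures
            cpu: "500m"
            memory: "1Gi"
          limits:
            cpu: "1"
            memory: "2Gi"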

On the other hand, if you want a production-ready environment, you are going to need to modify the peers and orderers to keep the blockchain data and configuration persistent. I did a pull request (not reviewed yet) about this topic. Perhaps it can help you.
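
One way to do that (a sketch, not necessarily what my PR does) is to mount a PersistentVolumeClaim at the peer's data path; the claim name is a placeholder and /var/hyperledger/production is the peer's default fileSystemPath:

  spec:
    volumes:
      - name: peer-data
        persistentVolumeClaim:
          claimName: peer0-org1-pvc        # placeholder claim name
    containers:
      - name: peer
        volumeMounts:
          - name: peer-data
            mountPath: /var/hyperledger/production   # default peer ledger path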

And excuse me for the flood. Last off-topic reply.

@Gaurang10
Author

@MCLDG Thanks. Will try to figure out a way to verify cert generation and storage.

@pleerock

Anyone got a resolution for this error?

@Gaurang10
Author

Update the TLS cert generation command in scripts/start-peer.sh:
fabric-ca-client enroll -d --enrollment.profile tls -u $ENROLLMENT_URL -M /tmp/tls --csr.hosts $PEER_HOST --csr.hosts "localhost" --csr.hosts "127.0.0.1"

and in scripts/start-orderer.sh:
fabric-ca-client enroll -d --enrollment.profile tls -u $ENROLLMENT_URL -M /tmp/tls --csr.hosts $ORDERER_HOST --csr.hosts "localhost" --csr.hosts "127.0.0.1"

@ryvers

ryvers commented Nov 30, 2022

Hi,
in our case, such errors were caused by the liveness probes applied to the pods (orderer and peers, respectively).
First, the probe cannot be pointed at the gRPC port that is exposed by default; second, there is a health-check mechanism implemented within the operations service that can be queried instead. Check whether it is also available for your Fabric version:
https://hyperledger-fabric.readthedocs.io/en/release-2.4/operations_service.html#health-checks

So our orderer YAML definitions were extended with a snippet like this:

          ...
          ports:
            ...
            - containerPort: 8443
          livenessProbe:
            httpGet:
              port: 8443
              path: /healthz
            initialDelaySeconds: 60
            periodSeconds: 60
            failureThreshold: 3
          env:
            ...
            - name: ORDERER_OPERATIONS_LISTENADDRESS
              value: "0.0.0.0:8443"
            - name: ORDERER_OPERATIONS_TLS_ENABLED
              value: "false"
            - name: ORDERER_OPERATIONS_TLS_CLIENTAUTHREQUIRED
              value: "false"

We went through many issues that had similar output but no proper solution, and everyone pointed at a bad TLS configuration. IMHO, there is a meaningful difference between an EOF and a bad certificate being returned as the error message.
I hope that it will help someone in the future.
