
SNOW-1018279: After Version 2.1.5 Snowflake.Data makes a new Network Egress call which can be blocked in networking #855

Closed
sgroznyvarde opened this issue Jan 24, 2024 · 2 comments
sgroznyvarde commented Jan 24, 2024

  1. What version of .NET driver are you using?
    2.1.5 and 2.2.0

  2. What operating system and processor architecture are you using?
    AWS Lambda running .NET 6.0

  3. What version of .NET framework are you using?
    .NET 6.0

  4. What did you do?
    We have a lambda in a VPC that makes a Snowflake DB connection. After upgrading from 2.1.3 to 2.2.0 we started getting timeouts opening the Snowflake connection. No code changes were made outside of the update, and our logs showed specifically that a connection.Open() line was the last action taken by the lambda before a timeout. This only occurred in the code deployed to the lambda, not on my local machine, which continued to work as expected.

  5. What did you expect to see?
    Ideally we expected the lambda's Snowflake connectivity to continue working instead of timing out. After some investigation we found that our VPC security group needed an egress rule for an additional port (we opened up everything and have not narrowed down the actual port being used). We already had 443 open, but after the update to 2.1.5 and 2.2.0 we needed another port open. This lambda makes a simple Snowflake query, formats the data, and returns it, so no other process would have required this extra port.

We have mitigated the error by opening egress traffic to all ports. Please let us know which port we need traffic opened to, so we can lock our lambda down a bit more.

Also note, we tried every version from 2.1.3, 2.1.4, 2.1.5, and 2.2.0. Our connectivity started breaking on version 2.1.5.


@github-actions github-actions bot changed the title After Version 2.1.5 Snowflake.Data makes a new HTTP call out which can be blocked in networking SNOW-1018279: After Version 2.1.5 Snowflake.Data makes a new HTTP call out which can be blocked in networking Jan 24, 2024
@sgroznyvarde sgroznyvarde changed the title SNOW-1018279: After Version 2.1.5 Snowflake.Data makes a new HTTP call out which can be blocked in networking SNOW-1018279: After Version 2.1.5 Snowflake.Data makes a new Network Egress call which can be blocked in networking Jan 24, 2024
@sfc-gh-dszmolka (Contributor) commented:
Hello and thank you for submitting this issue! This looks like something that works as expected. Please note that all Snowflake drivers (by default) perform certificate validation against certain hosts, and those hosts are defined in the certificates themselves:

  • using the OCSP protocol for all non-.NET drivers
  • using CRLs for the .NET driver

Both of those work over port 80.
Which exact hosts need to be reachable? For non-.NET drivers it's simple to tell: just run select system$allowlist(); in your Snowflake account, look for anything in the output that starts with OCSP_, and allowlist those hosts so the driver can communicate with them over port 80.
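
For example, a minimal sketch of that filtering step (assuming you have saved the JSON array returned by select system$allowlist(); — entries of the form {"type": ..., "host": ..., "port": ...} — to a local file, here hypothetically named allowlist.json):

# Sketch: list the OCSP-related hosts (and ports) to allowlist.
# Assumes allowlist.json holds the raw JSON array from SYSTEM$ALLOWLIST().
# Change the filter to "SNOWFLAKE_DEPLOYMENT" or "STAGE" to pull the hosts
# used in the .NET/CRL procedure below.
jq -r '.[] | select(.type | startswith("OCSP")) | "\(.host):\(.port)"' allowlist.json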

For .NET it's unfortunately not that straightforward, but here is what I would do if I only needed to allow the specific CRL hosts on port 80:

  • run the above system function in Snowflake
  • take note of the SNOWFLAKE_DEPLOYMENT and STAGE hosts
  • connect to each of them and grab the whole certificate chain, then iterate over all the certificates in the chain and pull the CRL URLs out of them. With a quick-and-dirty shell script it looks something like this (using one of the actual STAGE URLs in the AWS EU Frankfurt deployment):
export hostname="sfc-eu-ds1-22-customer-stage.s3.amazonaws.com"

# grab the full certificate chain presented by the host and split it into cert1.pem, cert2.pem, ...
echo | openssl s_client -showcerts -connect "$hostname":443 -servername "$hostname" 2>/dev/null \
  | awk '/BEGIN/,/END/{ if(/BEGIN/){a++}; out="cert"a".pem"; print > out }'

# print the CRL distribution points of each certificate in the chain
for cert in cert*.pem; do
  echo "--> $cert"
  openssl x509 -noout -text -subject -issuer -startdate -enddate -in "$cert" | grep -i crl
  echo
done

I'm certain there are more sophisticated methods as well, but this would output

--> cert1.pem
            X509v3 CRL Distribution Points: 
                  URI:http://crl.r2m01.amazontrust.com/r2m01.crl

--> cert2.pem
                Digital Signature, Certificate Sign, CRL Sign
            X509v3 CRL Distribution Points: 
                  URI:http://crl.rootca1.amazontrust.com/rootca1.crl

--> cert3.pem
                Digital Signature, Certificate Sign, CRL Sign
            X509v3 CRL Distribution Points: 
                  URI:http://crl.rootg2.amazontrust.com/rootg2.crl

--> cert4.pem
                Digital Signature, Certificate Sign, CRL Sign
            X509v3 CRL Distribution Points: 
                  URI:http://s.ss2.us/r.crl

so it does the job.

  • allow, on port 80, those hosts which serve the CRLs the certificates point to. Don't forget to also allow the CRL URLs in the root CAs that sign the last intermediate CA in the chain (that is, if the root CA has any CRL at all; I did not check, and it might not). A quick reachability check is sketched below.
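
For example, a rough sketch of a reachability check you could run from inside the VPC (e.g. an instance in the lambda's subnet); substitute the CRL URLs extracted from your own deployment's chain:

# Sketch: confirm each CRL distribution point found above answers over port 80.
# The URLs below are the ones from the example chain; replace them with yours.
for url in \
  http://crl.r2m01.amazontrust.com/r2m01.crl \
  http://crl.rootca1.amazontrust.com/rootca1.crl \
  http://crl.rootg2.amazontrust.com/rootg2.crl \
  http://s.ss2.us/r.crl
do
  printf '%s -> ' "$url"
  curl -s -o /dev/null --max-time 5 -w '%{http_code}\n' "$url"
done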

Some notes:

  • note 1: this might still break in the future if Snowflake decides to implement OCSP in the .NET driver too, in parity with the rest of the driver software. In that case you'll need to allow the OCSP hosts as mentioned above.
  • note 2: allowlisting only the particular CRL hosts works only as long as the certificates stay the same. If, for example, an intermediate CA is changed and it has a different CRL endpoint, you'll need to allow that too. If you have the option to allow hostnames with a wildcard, then the procedure explained at https://docs.snowflake.com/en/user-guide/ocsp#snowflake-on-aws might work better and would provide more resiliency.

Hope this helps. Closing this issue, as it is not an issue with the .NET driver but expected behaviour.

@sfc-gh-dszmolka (Contributor) commented:

Also note, we tried every version from 2.1.3, 2.1.4, 2.1.5, and 2.2.0. Our connectivity started breaking on version 2.1.5.

On this one: we fixed an important security hole in 2.1.5 (present from 2.0.25 through 2.1.4) where, on the affected versions, the driver would not contact the CRL hosts for certificate verification. This was unexpected behaviour and the result of a bug.
More details in the security advisory we published in this same repo: GHSA-hwcc-4cv8-cf3h

Starting from 2.1.5 the driver again behaves as expected and contacts the CRL hosts on port 80. This is likely the explanation for the behaviour you're seeing.

@sfc-gh-dszmolka sfc-gh-dszmolka self-assigned this Jan 25, 2024