
SNOW-1049322: Failing to load large data sets with snowflake-sdk ^v1.9.0 with message "Request to S3/Blob failed", works well with lower versions #763

Closed
bhaskarbanerjee opened this issue Feb 8, 2024 · 20 comments
Assignees
Labels
status-triage_done Initial triage done, will be further handled by the driver team

Comments

@bhaskarbanerjee

  1. What version of NodeJS driver are you using? ^1.9.0

  2. What operating system and processor architecture are you using? Darwin and arm

  3. What version of NodeJS are you using? node 16.20.0 and npm 8.19.4

  4. What are the component versions in the environment (npm list)?
    └── [email protected]

  5. Server version? 8.5.1

  6. What did you do?

I tried the sample code from https://docs.snowflake.com/en/developer-guide/sql-api/submitting-requests, but because my data set is 6-7 MB in size, it fails with the message Request to S3/Blob failed.
We are observing this while upgrading snowflake-sdk from 1.6.23 to ^1.9.0. Things work fine with versions 1.6.*, 1.7.0 and 1.8.0.
Is there a resolution for fetching large data sets with sdk version ^1.9.0?

```javascript
// Load the Snowflake Node.js driver.
const snowflake = require('snowflake-sdk');

// Create a Connection object that we can use later to connect.
const connection = snowflake.createConnection({
  account: "MY_SF_ACCOUNT",
  database: "MY_DB",
  schema: "MY_SCHEMA",
  warehouse: "MY_WH",
  username: "MY_USER",
  password: "MY_PWD"
});

// Try to connect to Snowflake, and check whether the connection was successful.
connection.connect(function (err, conn) {
  if (err) {
    console.error('Unable to connect: ' + err.message);
  } else {
    console.log('Successfully connected to Snowflake.');
    // Optional: store the connection ID.
    const connectionId = conn.getId();
  }
});

const statement = connection.execute({
  sqlText: "Select * from LargeDataSet limit 100",
  // sqlText: "Select * from LargeDataSet", -- fails with "Request to S3/Blob failed"
  complete: function (err, stmt, rows) {
    if (err) {
      console.error('Failed to execute statement due to the following error: ' + err.message);
    } else {
      console.log('Successfully executed statement: ' + stmt.getSqlText());
    }
  }
});
```
  7. What did you expect to see? With a minor version upgrade, we expected the code to be backward compatible, with data returned the same way as with v1.6.23, v1.7.0 or v1.8.0.

  8. Can you set logging to DEBUG and collect the logs? Can't upload logs due to company security policies.

  9. What is your Snowflake account identifier, if any? (Optional)

@bhaskarbanerjee bhaskarbanerjee added the bug Something isn't working label Feb 8, 2024
@github-actions github-actions bot changed the title Failing to load large data sets with snowflake-sdk ^v1.9.0 with message "Request to S3/Blob failed", works well with lower versions SNOW-1049322: Failing to load large data sets with snowflake-sdk ^v1.9.0 with message "Request to S3/Blob failed", works well with lower versions Feb 8, 2024
@sfc-gh-dszmolka sfc-gh-dszmolka self-assigned this Feb 8, 2024
@sfc-gh-dszmolka sfc-gh-dszmolka added status-triage Issue is under initial triage and removed bug Something isn't working labels Feb 8, 2024
@sfc-gh-dszmolka
Collaborator

hi - thank you for creating the issue. So you tested the ALLOWLIST output with SnowCD and generally it worked well.

The next part would be seeing the logs, but as you mentioned you cannot share them here - can you share them in a sanitized form? If not, I recommend opening a Snowflake Support case, where you can work 1:1 with a support engineer and only Snowflake can see the logs, not anyone who comes across this issue publicly.

Without the trace logs it would be very hard to figure out what is failing in your particular situation. Generally, the driver is able to fetch data from S3-based internal stages.

@sfc-gh-dszmolka sfc-gh-dszmolka added the status-information_needed Additional information is required from the reporter label Feb 8, 2024
@bhaskarbanerjee
Author

@sfc-gh-dszmolka
Yes I tested the snowcd and generally it worked well with the 2 VPCs of type STAGE. Below are my sanitized logs:

```
winston:create-logger: Define prototype method for "error"
winston:create-logger: Define prototype method for "warn"
winston:create-logger: Define prototype method for "info"
winston:create-logger: Define prototype method for "http"
winston:create-logger: Define prototype method for "verbose"
winston:create-logger: Define prototype method for "debug"
winston:create-logger: Define prototype method for "silly"
winston:create-logger: Define prototype method for "OFF"
winston:create-logger: Define prototype method for "ERROR"
winston:create-logger: Define prototype method for "WARN"
winston:create-logger: Define prototype method for "INFO"
winston:create-logger: Define prototype method for "DEBUG"
winston:create-logger: Define prototype method for "TRACE"
{"level":"DEBUG","message":"[10:03:29.132 AM]: 300"}
{"level":"DEBUG","message":"[10:03:29.154 AM]: Contacting SF: /session/v1/login-request?requestId=48369573-6933-4c35-830b-fb882219ef1f&warehouse=MY_WAREHOUSE&databaseName=MY_DB&schemaName=MY_SCHEMA, (1/7)"}
{"level":"TRACE","message":"[10:03:29.160 AM]: Create and add to cache new agent https://MY_PROD_SNOWFLAKE.snowflakecomputing.com:443-keepAlive"}
{"level":"DEBUG","message":"[10:03:29.163 AM]: Proxy settings used in requests: // PROXY environment variables: HTTP_PROXY: MY_PROXY HTTPS_PROXY: MY_PROXY NO_PROXY: 127.0.0.1,localhost,.local,.internal,.kdc.MY_COMPANY.com,.prod.MY_COMPANY.com,.qa.MY_COMPANY.com,.prod.EU.MY_COMPANY.com,.qa.EU.MY_COMPANY.com,169.254.169.254,s3.amazonaws.com,.s3.amazonaws.com."}
{"level":"TRACE","message":"[10:03:29.165 AM]: CALL POST with timeout 90000: https://MY_PROD_SNOWFLAKE.snowflakecomputing.com/session/v1/login-request?requestId=48369573-6933-4c35-830b-fb882219ef1f&warehouse=MY_WAREHOUSE&databaseName=MY_DB&schemaName=MY_SCHEMA"}
{"level":"DEBUG","message":"[10:03:29.166 AM]: --createStatementPreExec"}
{"level":"DEBUG","message":"[10:03:29.168 AM]: numBinds = 0"}
{"level":"DEBUG","message":"[10:03:29.168 AM]: threshold = 100000"}
{"level":"DEBUG","message":"[10:03:29.169 AM]: RowStatementPreExec"}
{"level":"DEBUG","message":"[10:03:29.171 AM]: context.bindStage=undefined"}
winston:file: stat done: snowflake.log { size: 539 }
winston:file: create stream start snowflake.log { flags: 'a' }
winston:file: create stream ok snowflake.log
winston:file: file open ok snowflake.log
Successfully connected to Snowflake.
{"level":"TRACE","message":"[10:03:29.743 AM]: Get agent with id: https://MY_PROD_SNOWFLAKE.snowflakecomputing.com:443-keepAlive from cache"}
{"level":"TRACE","message":"[10:03:29.747 AM]: CALL POST with timeout 90000: https://MY_PROD_SNOWFLAKE.snowflakecomputing.com/queries/v1/query-request?requestId=93d23f9c-5465-4c6a-9f78-ebccb423e082"}
{"level":"TRACE","message":"[10:03:31.514 AM]: Mapping columns in resultset (total: 9)"}
{"level":"TRACE","message":"[10:03:31.519 AM]: Finished mapping columns."}
{"level":"TRACE","message":"[10:03:31.520 AM]: Downloading 2951 chunks"}
{"level":"DEBUG","message":"[10:03:31.534 AM]: deserializeQueryContext() called: data from server: {\"entries\":[{\"id\":0,\"timestamp\":201348722507844,\"priority\":0}]}"}
{"level":"DEBUG","message":"[10:03:31.535 AM]: deserializeQueryContextElement `context` field is empty"}
{"level":"DEBUG","message":"[10:03:31.537 AM]: checkCacheCapacity() called. treeSet size 1 cache capacity 5"}
{"level":"DEBUG","message":"[10:03:31.537 AM]: checkCacheCapacity() returns. treeSet size 1 cache capacity 5"}
{"level":"DEBUG","message":"[10:03:31.538 AM]: Cache Entry: id: 0 timestamp: 201348722507844 priority: 0"}
{"level":"TRACE","message":"[10:03:31.542 AM]: Create and add to cache new agent https://MY_SNOWFLAKE_S3_VPC_STAGE.s3.us-west-2.amazonaws.com:443-keepAlive"}
{"level":"DEBUG","message":"[10:03:31.544 AM]: Proxy settings used in requests: // PROXY environment variables: HTTP_PROXY: MY_PROXY HTTPS_PROXY: MY_PROXY NO_PROXY: 127.0.0.1,localhost,.local,.internal,.kdc.MY_COMPANY.com,.prod.MY_COMPANY.com,.qa.MY_COMPANY.com,.prod.EU.MY_COMPANY.com,.qa.EU.MY_COMPANY.com,169.254.169.254,s3.amazonaws.com,.s3.amazonaws.com."}
{"level":"TRACE","message":"[10:03:31.545 AM]: CALL GET with timeout 90000: https://MY_SNOWFLAKE_S3_VPC_STAGE.s3.us-west-2.amazonaws.com/MY_SNOWFLAKE_BUCKET/results/MY_SNOWFLAKE_BUCKET_KEY/main/data_0_0_0?x-amz-server-side-encryption-customer-algorithm=AES256&response-content-encoding=gzip&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20240208T160330Z&X-Amz-SignedHeaders=host&X-Amz-Expires=21599&X-Amz-Credential=MY_AWS_CREDENTIALus-west-2%2Fs3%2Faws4_request&X-Amz-Signature=MY_AWS_SIGNATURE"}
{"level":"TRACE","message":"[10:03:31.545 AM]: Get agent with id: https://MY_SNOWFLAKE_S3_VPC_STAGE.s3.us-west-2.amazonaws.com:443-keepAlive from cache"}
{"level":"TRACE","message":"[10:03:31.545 AM]: CALL GET with timeout 90000: https://MY_SNOWFLAKE_S3_VPC_STAGE.s3.us-west-2.amazonaws.com/MY_SNOWFLAKE_BUCKET/results/MY_SNOWFLAKE_BUCKET_KEY/main/data_0_0_1?x-amz-server-side-encryption-customer-algorithm=AES256&response-content-encoding=gzip&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20240208T160330Z&X-Amz-SignedHeaders=host&X-Amz-Expires=21599&X-Amz-Credential=MY_AWS_CREDENTIALus-west-2%2Fs3%2Faws4_request&X-Amz-Signature=MY_AWS_SIGNATURE"}
{"level":"TRACE","message":"[10:03:31.546 AM]: Get agent with id: https://MY_SNOWFLAKE_S3_VPC_STAGE.s3.us-west-2.amazonaws.com:443-keepAlive from cache"}
{"level":"TRACE","message":"[10:03:31.547 AM]: CALL GET with timeout 90000: https://MY_SNOWFLAKE_S3_VPC_STAGE.s3.us-west-2.amazonaws.com/MY_SNOWFLAKE_BUCKET/results/MY_SNOWFLAKE_BUCKET_KEY/main/data_0_0_2?x-amz-server-side-encryption-customer-algorithm=AES256&response-content-encoding=gzip&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20240208T160330Z&X-Amz-SignedHeaders=host&X-Amz-Expires=21599&X-Amz-Credential=MY_AWS_CREDENTIALus-west-2%2Fs3%2Faws4_request&X-Amz-Signature=MY_AWS_SIGNATURE"}
{"level":"DEBUG","message":"[10:03:31.710 AM]: Encountered an error when getting data from cloud storage: status: 400 \"Bad Request\" headers: {\"cache-control\":\"no-cache\",\"pragma\":\"no-cache\",\"content-type\":\"text/html; charset=utf-8\",\"proxy-connection\":\"close\",\"connection\":\"close\",\"content-length\":\"8693\"}"}
{"level":"TRACE","message":"[10:03:31.712 AM]: Request won't be retried"}
{"level":"TRACE","message":"[10:03:31.713 AM]: Response headers are: {\"cache-control\":\"no-cache\",\"pragma\":\"no-cache\",\"content-type\":\"text/html; charset=utf-8\",\"proxy-connection\":\"close\",\"connection\":\"close\",\"content-length\":\"8693\"}"}
{"level":"DEBUG","message":"[10:03:31.718 AM]: Encountered an error when getting data from cloud storage: status: 400 \"Bad Request\" headers: {\"cache-control\":\"no-cache\",\"pragma\":\"no-cache\",\"content-type\":\"text/html; charset=utf-8\",\"proxy-connection\":\"close\",\"connection\":\"close\",\"content-length\":\"8693\"}"}
{"level":"TRACE","message":"[10:03:31.720 AM]: Request won't be retried"}
{"level":"TRACE","message":"[10:03:31.720 AM]: Response headers are: {\"cache-control\":\"no-cache\",\"pragma\":\"no-cache\",\"content-type\":\"text/html; charset=utf-8\",\"proxy-connection\":\"close\",\"connection\":\"close\",\"content-length\":\"8693\"}"}
{"level":"DEBUG","message":"[10:03:31.725 AM]: Encountered an error when getting data from cloud storage: status: 400 \"Bad Request\" headers: {\"cache-control\":\"no-cache\",\"pragma\":\"no-cache\",\"content-type\":\"text/html; charset=utf-8\",\"proxy-connection\":\"close\",\"connection\":\"close\",\"content-length\":\"8693\"}"}
{"level":"TRACE","message":"[10:03:31.727 AM]: Request won't be retried"}
{"level":"TRACE","message":"[10:03:31.727 AM]: Response headers are: {\"cache-control\":\"no-cache\",\"pragma\":\"no-cache\",\"content-type\":\"text/html; charset=utf-8\",\"proxy-connection\":\"close\",\"connection\":\"close\",\"content-length\":\"8693\"}"}
```
@sfc-gh-dszmolka
Collaborator

thank you for sharing the logs! The biggest difference between 1.8.0 and 1.9.0 (and above) is that we replaced the main HTTP library, urllib, with axios.
A very quick search through the axios bugs indicates that other people are also struggling with HTTP 400 Bad Request with axios (example issues 5256, 6119) - although it would be too early to say whether it's really related or just something similar.

Furthermore, the AWS SDK was also upgraded from v2 to v3 in 1.9.0 and above.

Even furthermore, we upgraded https-proxy-agent in version 1.9.2, so as you see there are a couple of moving parts here.

If you have a bit of time, could you please:

  • try to install 1.9.1 instead of 1.9.3. Do you still see the issue? If yes, then it's less likely to be related to the https-proxy-agent bump, and we need to focus on axios or even the aws-sdk. If no, and the issue doesn't occur with 1.9.1, then it's a good pointer to continue looking into https-proxy-agent instead of axios.
  • (with 1.9.2 or 1.9.3) if you have the option to do this: as a test, try to unset the HTTP_PROXY / HTTPS_PROXY / NO_PROXY envvars and let the snowflake-sdk Connection options do the job. Example:

```javascript
var connection = snowflake.createConnection({
    account: "MY_SF_ACCOUNT",
    database: "MY_DB",
    schema: "MY_SCHEMA",
    warehouse: "MY_WH",
    username: "MY_USER",
    password: "MY_PWD",
    proxyHost: "MY_PROXY", // please do not prepend http:// or https://
    proxyPort: 8080, // please input the numerical proxy port
    noProxy: "127.0.0.1|localhost|.local|.internal|.kdc.MY_COMPANY.com|.prod.MY_COMPANY.com|.qa.MY_COMPANY.com|.prod.EU.MY_COMPANY.com|.qa.EU.MY_COMPANY.com|169.254.169.254|s3.amazonaws.com|.s3.amazonaws.com",
    proxyProtocol: "https" // only add this if your proxy is HTTPS; if it's HTTP, you can leave this option out, the default is HTTP
});
```

(Reference: https://docs.snowflake.com/en/developer-guide/node-js/nodejs-driver-connect#connecting-through-an-authenticated-proxy)
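One detail worth calling out when moving from envvars to Connection options: the NO_PROXY environment variable is comma-separated, while the driver's noProxy option in the example above is pipe-separated. A tiny helper like this (hypothetical, not part of snowflake-sdk) can translate between the two formats:

```javascript
// Hypothetical helper (not part of snowflake-sdk): convert a comma-separated
// NO_PROXY environment value into the pipe-separated string expected by the
// driver's noProxy connection option.
function toDriverNoProxy(envNoProxy) {
  return envNoProxy
    .split(',')                           // NO_PROXY entries are comma-separated
    .map((entry) => entry.trim())         // tolerate stray whitespace
    .filter((entry) => entry.length > 0)  // drop empty entries
    .join('|');                           // noProxy entries are pipe-separated
}

console.log(toDriverNoProxy('127.0.0.1,localhost,.local,s3.amazonaws.com'));
// -> 127.0.0.1|localhost|.local|s3.amazonaws.com
```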

Finally: while I'll also try to reproduce the HTTP 400 Bad Request issue, it might be particularly difficult to simulate your environment, and it's possible that eventually we'll need to resort to such tools (like logging out the actual request which results in HTTP 400). Especially if this issue affects a production flow for you (I see you're a Snowflake customer), I highly recommend creating an official case with Snowflake Support and continuing there.

Otherwise, if this issue is not urgent for you, we can continue the conversation here. The next step is on me: trying to set up a repro environment and reproduce the issue.

@sfc-gh-dszmolka sfc-gh-dszmolka removed the status-information_needed Additional information is required from the reporter label Feb 8, 2024
@srikarm16

srikarm16 commented Feb 9, 2024

@sfc-gh-dszmolka I'm working on this issue as well, and I've tried what you mentioned about unsetting the noProxy value. I've never really initialized the Snowflake object with that property, but I tried doing so and it results in the same error either way. As far as the versions go, this issue is persistent from snowflake-sdk 1.9.0 and above, not just 1.9.3.

When I debug the S3 request being sent by the sdk and try fetching the response manually through an http client, I'm actually able to fetch a response successfully, with the proper headers. Admittedly, I haven't been able to replicate the same behavior through a manual axios fetch in code (or urllib), so that's something I'll play around with to see if I can make any progress.

@sfc-gh-dszmolka
Collaborator

thanks @srikarm16, much appreciated! Good to know the issue is already there with 1.9.0; it narrows things down a little (i.e. probably not related to the https-proxy-agent change).

Couple of things:

  1. We never documented that using the envvars with the snowflake-sdk should work. I do understand that it is a loss of function between 1.8.0 and 1.9.0+, and I'm still very much interested in making this work, or at least being able to pinpoint the problem, and will continue troubleshooting this with all my available resources. Still, it's important to mention that this approach is not officially supported. It would of course be more convenient to be able to use the same envvars everything else on the same host is using.
  2. What is officially supported is using proxyHost etc. in the Connection settings. This should populate the proxy settings into HttpsProxyOcspAgent and, well, work. Hence the question: if you clear the envvars and use only the Connection settings for the proxy, does it change anything?
  3. I observed that as soon as the envvars are populated, they override the proxy settings for the https.Agent (coming from the Connection settings), and we're then susceptible to possible axios bugs - which, to be honest, does not take a lot of searching in the repo over there to bump into, quite a few still unfixed.
  4. We have an array of automated tests for proxy + large result sets, and manual tests with a proxy against large result sets show that it is generally working, so I'm super interested in learning about the environmental nuances. E.g., by any chance, is the proxy ZScaler? We already had a problem with it in interaction with proxy-in-envvars vs snowflake-sdk.

Nevertheless, I'll keep looking into this and keep this thread posted.
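To make point 3 above concrete, the observed precedence can be sketched like this (a hypothetical helper mirroring the described behavior, not actual driver code):

```javascript
// Sketch of the precedence described above (hypothetical, not driver code):
// once HTTPS_PROXY / HTTP_PROXY envvars are populated they win over the
// proxyHost/proxyPort passed in the Connection options, which is why
// clearing the envvars is a useful isolation step.
function effectiveProxy(env, options) {
  const fromEnv = env.HTTPS_PROXY || env.HTTP_PROXY;
  if (fromEnv) {
    // envvars override everything else
    return { source: 'environment', url: fromEnv };
  }
  if (options.proxyHost) {
    // fall back to the Connection options
    const protocol = options.proxyProtocol || 'http';
    return { source: 'connection', url: protocol + '://' + options.proxyHost + ':' + options.proxyPort };
  }
  return { source: 'none', url: null };
}

console.log(effectiveProxy({ HTTPS_PROXY: 'http://corp-proxy:8080' }, { proxyHost: 'other', proxyPort: 3128 }).source);
// -> environment
```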

@sfc-gh-dszmolka
Collaborator

added some extra logging, in a simple way, around where the requests towards the cloud storage are handled, which should provide more insight:

```diff
# diff -Nur ./node_modules/snowflake-sdk/lib/services/large_result_set.js.original ./node_modules/snowflake-sdk/lib/services/large_result_set.js
--- ./node_modules/snowflake-sdk/lib/services/large_result_set.js.original	2024-02-09 10:23:25.524881721 +0000
+++ ./node_modules/snowflake-sdk/lib/services/large_result_set.js	2024-02-09 11:08:45.501816409 +0000
@@ -7,6 +7,7 @@
 const Errors = require('../errors');
 const Logger = require('../logger');
 const ErrorCodes = Errors.codes;
+const Util = require('../util');
 
 /**
  * Creates a new instance of an LargeResultSetService.
@@ -62,6 +63,10 @@
 
     // invoked when the request completes
     const callback = function callback(err, response, body) {
+
+      // gh-nodejs-763
+      console.log(JSON.stringify(response, Util.getCircularReplacer()));
+
       // err happens on timeouts and response is passed when server responded
       if (err || isUnsuccessfulResponse(response)) {
         // if we're running in DEBUG loglevel, probably we want to see the full error too
```

(This has a massive output, as it logs both the request and response objects.) Of course we don't expect you to share it here, but if you can perhaps take a look in your environment which reproduces the problem, and spot any obvious reason in the request (headers etc.) sent by snowflake-sdk which in turn triggers S3 to return HTTP 400, that would perhaps be a great help in further narrowing down why the issue happens.

You perhaps don't even need the full original query results; a SELECT C_CUSTKEY FROM SNOWFLAKE_SAMPLE_DATA.TPCH_SF1000.CUSTOMER LIMIT 8000 result or so should be big enough to trigger contact with S3.

@sfc-gh-dszmolka sfc-gh-dszmolka added the status-information_needed Additional information is required from the reporter label Feb 9, 2024
@srikarm16

srikarm16 commented Feb 9, 2024

@sfc-gh-dszmolka So you're saying I should try establishing the Snowflake connection without the use of proxyHost? Currently I don't specify any extra options besides the base information required, as well as the host and port.

What I do want to mention, since it isn't already said in this issue: the sdk does work for a very small subset of data (seemingly less than 500 KB; I haven't tested the true upper limit). Since axios replaced the main http client from 1.9.0 onwards, I was thinking that if this sdk works fine with small subsets of data, axios might not necessarily be the problem, unless there's some underlying issue with request batching or joining.

I did notice that the S3 getUrl for the two versions of the aws-sdk is different during the same testing period, disregarding the token values. I'm not sure if they're supposed to be different, but even if I retest after 20-30 minutes, the urls don't usually change. Again, both sdk urls do work when submitted manually through a client outside of the sdk, so that might not be the issue. I'll try to do a little more debugging around the response object and other places to see if I can get any more information.
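The size threshold observation fits how result sets are delivered: small results come back inline in the query response, while larger ones are returned as a list of chunk URLs the driver downloads from cloud storage (the "Downloading 2951 chunks" line in the logs above) - so only large results ever exercise the proxy-to-S3 path. As a rough sketch (the exact threshold and names here are hypothetical, for illustration only):

```javascript
// Illustrative sketch only (threshold and field names are hypothetical):
// small result sets are embedded inline in the query response, larger ones
// come back as chunk URLs the driver must fetch from S3/Blob storage.
function resultDelivery(resultBytes, inlineLimitBytes = 500 * 1024) {
  return resultBytes <= inlineLimitBytes
    ? { mode: 'inline' }   // rows embedded in the HTTP response, no S3 involved
    : { mode: 'chunked' }; // driver issues GETs to cloud storage via the proxy
}

console.log(resultDelivery(100 * 1024).mode);      // -> inline
console.log(resultDelivery(7 * 1024 * 1024).mode); // -> chunked
```

This would explain why only queries above a certain size ever hit the "Request to S3/Blob failed" error.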

@srikarm16

Hey @sfc-gh-dszmolka, something interesting I found after doing some more testing: I was able to fetch the S3 GET url and successfully send it using urllib with proxy settings. However, I wasn't able to do the same with axios 1.6.7, or with 0.27.2, the older version this sdk was using.

This was the error I got:
AxiosError: write EPROTO 409C7EDA01000000:error:0A00010B:SSL routines:ssl3_get_record:wrong version number:../deps/openssl/openssl/ssl/record/ssl3_record.c:355

It seems other people have hit this issue with axios before (issue #4840).

Since the snowflake-sdk never used axios for the GET requests fetching the S3 data before, I'm wondering if this might be the culprit after all. Just wanted to update you on this.
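For what it's worth, "wrong version number" usually means the TLS client received plain-text bytes (such as an HTTP error page from a proxy) where it expected a TLS record. A minimal sketch of why OpenSSL words it that way (the byte offsets follow the TLS record layout; the actual failing connection is not reproduced here):

```javascript
// A TLS record header is: 1 content-type byte (0x16 for handshake) followed
// by a 2-byte protocol version (0x0301..0x0304). If a plain-HTTP proxy
// answers a ClientHello with "HTTP/1.1 400 Bad Request", the client parses
// 'T','T' (0x54 0x54) as the version field and reports "wrong version number".
const reply = Buffer.from('HTTP/1.1 400 Bad Request\r\n\r\n');

const contentType = reply[0];          // 0x48 ('H'), not a TLS content type
const version = reply.readUInt16BE(1); // 0x5454 ('TT'), not a valid TLS version

console.log(contentType === 0x16);     // -> false (not a TLS handshake record)
```

This also ties the two symptoms in this thread together: the proxy's HTTP 400 body is exactly what trips the SSL record parser.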

@sfc-gh-dszmolka
Collaborator

thank you for all these info @srikarm16 ! regarding the bug being or not being related to axios : I would still not rule it out as I remembered the requests to the S3 are also sent by it. But now set up some axios debugging:

  1. npm i axios-debug-log
  2. declaring require('axios-debug-log') at top of the script before any axios calls
  3. export DEBUG=axios envvar
  4. observe large result set requests coming from axios:
  axios GET https://sfc-eu-ds1-22-customer-stage.s3.eu-central-1.amazonaws.com/stage/results/query/main/data_0_0_0?..cf7 +9ms

So i still think what you found might be relevant. There's really a wide area of choice for the various bugs in axios when it comes to proxies, and in addition to not having access to your environment and not being able to reproduce the same issue doesn't make debugging easier.

What you quoted in your last comment seems to happen to many users when an https connection is attempted through a proxy serving plain http; this might even be a similar manifestation of multiple underlying causes.

That's why I was asking you to try without the envvars (HTTPS_PROXY etc.), using the settings in Connection (proxyHost etc.), which hopefully don't have these issues.
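To make the suggestion concrete, here's a minimal sketch of those Connection-level proxy settings (all values below are placeholders; this object is what would be passed to snowflake.createConnection):

```javascript
// Sketch only, placeholder values: the proxy is configured directly in
// the Connection options (proxyHost/proxyPort) instead of relying on
// HTTPS_PROXY/HTTP_PROXY environment variables.
const connectionOptions = {
  account: 'MY_SF_ACCOUNT',
  username: 'MY_USER',
  password: 'MY_PWD',
  warehouse: 'MY_WH',
  proxyHost: 'my.pro.xy', // placeholder proxy host
  proxyPort: 8080         // placeholder proxy port
};

// const connection = snowflake.createConnection(connectionOptions);
console.log('proxy target: ' + connectionOptions.proxyHost + ':' + connectionOptions.proxyPort);
```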

At the same time, if you can share any details on the proxy, and maybe if it has a trial license, we could set it up here locally and try to reproduce the issue ourselves to learn more about it.

@sfc-gh-dszmolka
Collaborator

I think I managed to reproduce both problems (HTTP400 Bad Request + SSL routines:ssl3_get_record:wrong version number), or at least the symptoms described so far.

  1. took a random, freely available proxy which is listening out there and is HTTP-only
  2. set it as HTTPS_PROXY envvar like HTTPS_PROXY=http://public.pro.xy:80 . Even a curl call to a https site like a Snowflake test account doesn't work, and throws HTTP400
  3. configured this same HTTP-only proxy as
proxyHost='public.pro.xy',
proxyPort=80

in the Connection settings, while having no *_PROXY envvar set. Result:

{"level":"TRACE","message":"[4:52:45.856 PM]: CALL POST with timeout 90000: https://myaccount.snowflakecomputing.com/session/v1/login-request?requestId=.."}
  axios POST https://myaccount.snowflakecomputing.com/session/v1/login-request?requestId=.. +5s
{"level":"DEBUG","message":"[4:52:45.859 PM]: Using proxy=public.pro.xy for host myaccount.snowflakecomputing.com"}
  axios AxiosError: Request failed with status code 400 (POST https://myaccount.snowflakecomputing.com/session/v1/login-request?requestId=) +15ms

HTTP400 Bad Request reproduces.
4. now cleared proxyHost and proxyPort from Connection, and passed HTTPS_PROXY=public.pro.xy:80 as envvar . Result:

{"level":"TRACE","message":"[4:55:39.892 PM]: CALL POST with timeout 90000: https://myaccount.snowflakecomputing.com/session/v1/login-request?requestId=.."}
  axios POST https://myaccount.snowflakecomputing.com/session/v1/login-request?requestId.. +2s
  axios Error: 140129327441856:error:1408F10B:SSL routines:ssl3_get_record:wrong version number:../deps/openssl/openssl/ssl/record/ssl3_record.c:332:
  axios  (POST https://myaccount.snowflakecomputing.com/session/v1/login-request?requestId=..) +24ms
{"level":"DEBUG","message":"[4:55:39.923 PM]: Encountered an error when sending the request. Details: {\"message\":\"140129327441856:error:1408F10B:SSL routines:ssl3_get_record:wrong version number:../deps/openssl/openssl/ssl/record/ssl3_record.c:332:\\n\",\"name\":\"Error\",\"stack\":\"Error: 140129327441856:error:1408F10B:SSL routines:ssl3_get_record:wrong version number:../deps/openssl/openssl/ssl/record/ssl3_record.c:332:\\n\\n    at Function.AxiosError.from (/test/node_modules/axios/dist/node/axios.cjs:837:14)\\n    at RedirectableRequest.handleReque"}

To me, even if the situation in the repro is a little different (Snowflake login-request vs. request to S3), this looks very similar to the error you're seeing when using axios.

Do you think it's possible something similar happens to the request sent to the S3 buckets when it goes through the proxy? Perhaps it's worth comparing the proxy's configuration for requests towards .snowflakecomputing.com vs. requests towards .amazonaws.com, since there seems to be no error in the requests hitting the Snowflake account, only in those going to the bucket.

@sfc-gh-dszmolka sfc-gh-dszmolka added status-triage_done Initial triage done, will be further handled by the driver team and removed status-triage Issue is under initial triage status-information_needed Additional information is required from the reporter labels Feb 13, 2024
@srikarm16

srikarm16 commented Feb 13, 2024

Hey @sfc-gh-dszmolka I'm glad you were able to reproduce a similar error; this helps us narrow down the problem further.

I do want to mention that I tried the Axios debugging and eventually got the same results. Even removing the proxies set through env variables didn't really change anything.

What I want to know is: why does this sdk seem to work for a small number of rows then? I don't change the proxy settings whatsoever, but if I simply add a LIMIT X to my SQL query where the number of records is small enough (seemingly around 250kb, I think?), the request goes through. Do you reach the same errors when you limit your SQL query to pull only a small set of data, using the proxy configuration you used above to reproduce the errors, or does it error out the same way?

I guess I'm wondering if a small set of data is somehow a special case where the proxy configuration for S3 doesn't matter, or if there's a difference in how the largeResultSet chains the S3 requests with misconfigured proxies. It doesn't make sense to me why the exact same request to the same S3 bucket would work on a small enough data set but fail on a large one, potentially due to the proxy.

@sfc-gh-dszmolka
Collaborator

The big difference here is that when you put a LIMIT clause on the query (or query a table which has a small data set anyway), the result is not coming from the S3 bucket at all.

It is coming directly from the Snowflake account host (youraccount.snowflakecomputing.com).
Only if the result set exceeds a certain limit (~100KB) is it downloaded from the S3 bucket.
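As a sketch of this behavior (the exact threshold and mechanics are Snowflake internals, so the numbers and names below are illustrative only, not the driver's real code):

```javascript
// Illustrative only: mimics the routing described above, where small
// result sets come back inline from the Snowflake host and larger ones
// are downloaded from the internal stage (S3 bucket).
const INLINE_RESULT_LIMIT_BYTES = 100 * 1024; // ~100KB, per the above

function resultSource(resultSizeBytes) {
  return resultSizeBytes <= INLINE_RESULT_LIMIT_BYTES
    ? 'youraccount.snowflakecomputing.com' // small result set (e.g. with LIMIT)
    : 's3-internal-stage';                 // large result set
}

console.log(resultSource(50 * 1024));       // small query result
console.log(resultSource(7 * 1024 * 1024)); // ~7MB, like the reporter's data set
```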

That's why I was asking if there's any difference in the proxy configuration for the .snowflakecomputing.com vs the .amazonaws.com hosts. I even see in the log snippets above that .amazonaws.com is attempted to be subjected to NO_PROXY, while .snowflakecomputing.com is not. Having no other information than what is available in this issue, this suggests to me that the Snowflake requests (== small result sets, for example with LIMIT) are sent through the proxy, while the ones hitting the internal stage / S3 bucket (== larger result sets, for example without LIMIT) are not. I'd quickly add that there might be other settings on the host interfering; for example, as I learned last week, ZScaler agents can sit on the host and force all traffic from a network interface over the proxy, even without envvars.

There's still a lot of moving parts, but I'm quite certain it's still related to one of the axios bugs. I'll keep troubleshooting as time permits.

@srikarm16

@sfc-gh-dszmolka Ahh I see, that makes more sense, thanks for the explanation.

I did some more digging, and I think it's the http URL passed through the HTTPS_PROXY variable. It seems to be an issue with Axios that a lot of people have had; there was a PR to fix it, but I believe some people still hit it.

It looks like someone made a custom fix for this issue and forked the Axios repo into a new lib called axios-https-proxy-fix. I fetched the S3 URL and headers that were failing before, and with this custom library I was actually able to successfully submit the request and fetch data. I'm not sure how to modify the existing sdk code to incorporate the fix this new library uses, because both seem to use https-proxy-agent, but perhaps you can find some nuances that I couldn't.

I know you talked about proxy differences, but I got this custom version of Axios to successfully fetch a request with the same proxyHost and port set, whereas before Axios gave me the write EPROTO error. I'm wondering if this http in HTTPS_PROXY was the issue after all.

As for your other questions, I'm doing my best to find out more about the proxy configurations in the meantime.

@sfc-gh-dszmolka
Collaborator

@srikarm16 yes, I also came across the axios-https-proxy-fix library, but did not want to recommend it just yet without being able to test, as it looks to be an abandoned fork (last updated 6 years ago). Also, the same fix that helps some people causes new issues for others, as seen in the thread you linked.
Still, happy to hear it helped you !

Yes, the issue is very likely due to how axios behaves when needing to send https traffic through an http proxy. The challenge is that we need to be very careful in changing anything related to the main job of snowflake-sdk (sending HTTP requests to Snowflake), so as not to introduce other errors while fixing one, like axios-https-proxy-fix apparently does for some people.

Until we can find a way to reproduce the error with axios only, and not with other clients like curl, I'm not sure we can set up a test case to confirm the fix works without breaking something else.

@srikarm16

@sfc-gh-dszmolka So, question for you: when you reproduced the errors, did you try submitting the same request URL and headers through a different client to see if you get the same errors? For me personally, I've had success through thunderclient and urllib, so it seemed like an Axios-only issue for me; not sure if it was the same for you.

I'll try to mess around to see if I can get anything working through the client Agent setup in the sdk. Or do you think this might be something to open directly with Axios?

@sfc-gh-dszmolka
Collaborator

sfc-gh-dszmolka commented Feb 16, 2024

have you tried submitting the same request url and headers through a different client to see if you get the same errors?

yes, this is exactly my point.

when I managed to reproduce the error symptoms (HTTP400 and ssl3_get_record:wrong version number, using the random proxy I found on the net), the error was also reproducible with other clients, such as curl, as mentioned

so while it did reproduce the symptoms, it did not reproduce the actual error, which is exclusively specific to axios and the various bugs we discussed in this thread. Hope I'm making sense :)

the bug is indeed with axios, but judging from the number of still-open issues, I'm not sure of the timeline on which they can address it. However, we can indeed make changes on our own side to mitigate the issues brought in by axios.
The big challenge, or blocker I might say, is that there's apparently a very specific proxy setup needed to reproduce the issue only in axios and not in other http clients.

I have not yet had time to move forward with this and find a proxy which behaves well with other http clients and only breaks axios.

Of course, since you already have the environment which I don't have, it would be a massive help if you could tweak the agent setup on your side. Otherwise I'll keep researching how to set up the reproduction environment so we can see the exact issue for ourselves (only with axios, not other http clients) and therefore be able to write a fix and tests for it.

@srikarm16

@sfc-gh-dszmolka I was able to avoid using the outdated library and use the latest https-proxy-agent library that you use, and I was able to fetch data from S3 when I specified a specific URL and headers. So I tried doing the same in the sdk code by directly using https-proxy-agent instead of the HttpsProxyOcspAgent class that is created, but that didn't seem to change anything.

So, I tried something weird. Normally my code for trying a certain URL and headers is as follows:

import { HttpsProxyAgent } from "https-proxy-agent";
import axios from "axios";

// Proxy agent pointing at the same proxy host/port used elsewhere
let httpsAgent = new HttpsProxyAgent({
    host: hostString,
    port: portNum
});

let requestOptions = {
    method: "GET",
    url: requestURL, // the S3 GET URL fetched earlier
    headers: {
        "header1": "",
        "header2": ""
    },
    httpsAgent: httpsAgent
};

axios.request(requestOptions)
    .then(res => {
        console.log(res);
    })
    .catch(e => {
        console.log(e);
    });

This is usually able to fetch me all the data.

However, on a whim, I decided to try this exact same code inside sendRequest() in large_result_set.js and removed everything else. I even used the same URL and headers instead of the ones passed through options. The only difference in the code was that I had to use require instead of import for the two packages. For some reason, this request is unable to go through and gives me a 400. I find this behavior a little odd, as it should ignore any settings previously defined in the code and essentially mimic the same environment I tested this code in, which was just another random folder on my computer.

Since HttpsProxyOcspAgent just extends https-proxy-agent, I was wondering if you could try sending one of the S3 requests that failed for you outside of the sdk, like above, to see if it goes through. This makes me think the agent setup is fine, but I wonder if the sdk code flow is doing something strange, or if it's somehow still an Axios bug.

@sfc-gh-dszmolka sfc-gh-dszmolka added status-triage Issue is under initial triage and removed status-triage_done Initial triage done, will be further handled by the driver team labels Feb 21, 2024
@sfc-gh-dszmolka
Collaborator

@srikarm16 thank you for your continued efforts on this vague issue, really appreciated! I tried your code with the proxy I used to see the error symptoms, but I ended up seeing the same error I see even outside axios (with curl, e.g.), so I'm almost 100% confident it's an issue with the random proxy I found out there.

anyhow, I have now involved the driver dev team to aid us in debugging this issue further.

In the meantime I'm still searching for a proxy/setup which can reproduce the issue from snowflake-sdk (and not from any client outside of snowflake-sdk). I've tried mitmproxy and squid so far in 'regular proxy' mode; they seem to just handle the CONNECT from the client nicely and proxy it to Snowflake/S3. Will keep this thread posted.

@sfc-gh-dszmolka
Collaborator

still trying to reproduce the issue locally and searching for the proxy/config which could help; no breakthroughs so far.

However, I made another discovery which might be relevant. I took a step back and installed [email protected] (which still used urllib instead of axios) to see how it behaves with proxies.

When I launched the test script

DEBUG=urllib HTTP_PROXY=http://my.pro.xy:8080 HTTPS_PROXY=http://my.pro.xy:8080 NO_PROXY=127.0.0.1,localhost,.local,.internal,.kdc.MY_COMPANY.com,.prod.MY_COMPANY.com,.qa.MY_COMPANY.com,.prod.EU.MY_COMPANY.com,.qa.EU.MY_COMPANY.com,169.254.169.254,s3.amazonaws.com,.s3.amazonaws.com node test.js 

(note I'm using the exact same NO_PROXY as you are, from the log snippet available to me.)

I observed that absolutely nothing is sent to the proxy. Neither the Snowflake-destined nor the bucket-destined traffic is actually hitting the proxy.
I could not believe it, so I put a packet capture on the traffic, which was also silent: no packets arriving at all on the proxy ports when urllib is used.

Maybe this is the source of all your problems? No traffic was sent to the proxy at all while urllib was used, and now that axios took over in 1.9.0+, the traffic is actually hitting your proxy, which is perhaps not configured to bypass S3-related traffic (== the destination you see an issue with), only the Snowflake-related traffic (== the destination you do not see an issue with). Knowing nothing of the expected traffic flow and architecture, I'm guessing here of course.

Also, the NO_PROXY configuration looks incorrect: please note that neither s3.amazonaws.com nor .s3.amazonaws.com will match the S3 traffic, which seems to be going towards sfc-XX-XXX-XX-customer-stage.s3.MYS3REGION.amazonaws.com.
Note the MYS3REGION part between s3 and amazonaws.com, which causes the wildcard not to match.

Would it be possible to try appending .amazonaws.com at the end of the NO_PROXY list and see if it helps? This should make the driver avoid sending S3-related traffic to the proxy altogether, like it apparently did with urllib in version 1.8.0 and before.
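To sketch why the existing entries don't match (assuming the usual NO_PROXY convention where a leading dot means "any subdomain of this suffix"; the matcher below is illustrative, not the driver's actual code):

```javascript
// Illustrative NO_PROXY suffix matcher, not the driver's real implementation.
function bypassesProxy(host, noProxy) {
  return noProxy.split(',').some(entry =>
    entry.startsWith('.')
      ? host.endsWith(entry)                         // ".amazonaws.com" matches any subdomain
      : host === entry || host.endsWith('.' + entry) // exact host or subdomain
  );
}

const stageHost = 'sfc-xx-xxx-xx-customer-stage.s3.eu-central-1.amazonaws.com';

// The region segment sits between "s3" and "amazonaws.com", so neither
// "s3.amazonaws.com" nor ".s3.amazonaws.com" matches:
console.log(bypassesProxy(stageHost, 's3.amazonaws.com,.s3.amazonaws.com')); // false
// Appending ".amazonaws.com" to the NO_PROXY list does match:
console.log(bypassesProxy(stageHost, '.amazonaws.com'));                     // true
```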

Of course this doesn't answer any questions or solve how axios behaves with https traffic over an http proxy, but maybe it could provide some relief for you.

@sfc-gh-dszmolka sfc-gh-dszmolka added the status-information_needed Additional information is required from the reporter label Mar 9, 2024
@sfc-gh-dszmolka
Collaborator

As a quick summary:

  • in the meantime an official Support Case was opened with Snowflake Support
  • which has since been closed, because the setup and issue appear to be related to the proxies used in the particular environment

therefore closing this issue as well.

@sfc-gh-dszmolka sfc-gh-dszmolka added status-triage_done Initial triage done, will be further handled by the driver team and removed status-triage Issue is under initial triage status-information_needed Additional information is required from the reporter labels Apr 15, 2024