diff --git a/README.md b/README.md index 6bb3ad8bc..f00c7d579 100644 --- a/README.md +++ b/README.md @@ -10,7 +10,7 @@ The Snowflake .NET connector supports the the following .NET framework and libra - .NET Framework 4.7.2 - .NET 6.0 -Please refer to the Notice section below for information about safe usage of the .NET Driver +Please refer to the [Notice](#notice) section below for information about safe usage of the .NET Driver # Coding conventions for the project @@ -68,945 +68,46 @@ Alternatively, packages can also be downloaded using Package Manager Console: PM> Install-Package Snowflake.Data ``` -# Testing the Connector +# Testing and Code Coverage -Before running tests, create a parameters.json file under Snowflake.Data.Tests\ directory. In this file, specify username, password and account info that tests will run against. Here is a sample parameters.json file +[Running tests](doc/Testing.md) -``` -{ - "testconnection": { - "SNOWFLAKE_TEST_USER": "snowman", - "SNOWFLAKE_TEST_PASSWORD": "XXXXXXX", - "SNOWFLAKE_TEST_ACCOUNT": "TESTACCOUNT", - "SNOWFLAKE_TEST_WAREHOUSE": "TESTWH", - "SNOWFLAKE_TEST_DATABASE": "TESTDB", - "SNOWFLAKE_TEST_SCHEMA": "TESTSCHEMA", - "SNOWFLAKE_TEST_ROLE": "TESTROLE", - "SNOWFLAKE_TEST_HOST": "testaccount.snowflakecomputing.com" - } -} -``` - -## Command Prompt - -The build solution file builds the connector and tests binaries. Issue the following command from the command line to run the tests. The test binary is located in the Debug directory if you built the solution file in Debug mode. 
- -```{r, engine='bash', code_block_name} -cd Snowflake.Data.Tests -dotnet test -f net6.0 -l "console;verbosity=normal" -``` - -Tests can also be run under code coverage: - -```{r, engine='bash', code_block_name} -dotnet-coverage collect "dotnet test --framework net6.0 --no-build -l console;verbosity=normal" --output net6.0_coverage.xml --output-format cobertura --settings coverage.config -``` - -You can run only specific suite of tests (integration or unit). - -Running unit tests: - -```bash -cd Snowflake.Data.Tests -dotnet test -l "console;verbosity=normal" --filter FullyQualifiedName~UnitTests -l console;verbosity=normal -``` - -Running integration tests: - -```bash -cd Snowflake.Data.Tests -dotnet test -l "console;verbosity=normal" --filter FullyQualifiedName~IntegrationTests -``` +[Code coverage](doc/CodeCoverage.md) -## Visual Studio 2017 - -Tests can also be run under Visual Studio 2017. Open the solution file in Visual Studio 2017 and run tests using Test Explorer. +--- # Usage ## Create a Connection -To connect to Snowflake, specify a valid connection string composed of key-value pairs separated by semicolons, -i.e "\=\;\=\...". - -**Note**: If the keyword or value contains an equal sign (=), you must precede the equal sign with another equal sign. For example, if the keyword is "key" and the value is "value_part1=value_part2", use "key=value_part1==value_part2". - -The following table lists all valid connection properties: -
- -| Connection Property | Required | Comment | -|--------------------------------| -------- |---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| ACCOUNT | Yes | Your full account name might include additional segments that identify the region and cloud platform where your account is hosted | -| APPLICATION | No | **_Snowflake partner use only_**: Specifies the name of a partner application to connect through .NET. The name must match the following pattern: ^\[A-Za-z](\[A-Za-z0-9.-]){1,50}$ (one letter followed by 1 to 50 letter, digit, .,- or, \_ characters). | -| DB | No | | -| HOST | No | Specifies the hostname for your account in the following format: \.snowflakecomputing.com.
If no value is specified, the driver uses \.snowflakecomputing.com. |
-| PASSWORD | Depends | Required if AUTHENTICATOR is set to `snowflake` (the default value) or to the URL for native SSO through Okta. Ignored for all other authentication types. |
-| ROLE | No | |
-| SCHEMA | No | |
-| USER | Depends | If AUTHENTICATOR is set to `externalbrowser`, this is optional. For native SSO through Okta, set this to the login name for your identity provider (IdP). |
-| WAREHOUSE | No | |
-| CONNECTION_TIMEOUT | No | Total timeout in seconds when connecting to Snowflake. The default is 300 seconds. |
-| RETRY_TIMEOUT | No | Total timeout in seconds for supported endpoints of the retry policy. The default is 300 seconds. The value can only be increased from the default or set to 0 for an infinite timeout. |
-| MAXHTTPRETRIES | No | Maximum number of times to retry failed HTTP requests (default: 7). You can set `MAXHTTPRETRIES=0` to remove the retry limit, but doing so runs the risk of the .NET driver retrying failed HTTP calls indefinitely. |
-| CLIENT_SESSION_KEEP_ALIVE | No | Whether to keep the current session active after a period of inactivity, or to force the user to log in again. If the value is `true`, Snowflake keeps the session active indefinitely, even if there is no activity from the user. If the value is `false`, the user must log in again after four hours of inactivity. The default is `false`. Setting this value overrides the server session property for the current session. |
-| BROWSER_RESPONSE_TIMEOUT | No | Number of seconds to wait for authentication in an external browser (default: 120). |
-| DISABLERETRY | No | Set this property to `true` to prevent the driver from reconnecting automatically when the connection fails or drops. The default value is `false`. |
-| AUTHENTICATOR | No | The method of authentication. Currently supports the following values:<br>
- snowflake (default): You must also set USER and PASSWORD.
- [the URL for native SSO through Okta](https://docs.snowflake.com/en/user-guide/admin-security-fed-auth-use.html#native-sso-okta-only): You must also set USER and PASSWORD.
- [externalbrowser](https://docs.snowflake.com/en/user-guide/admin-security-fed-auth-use.html#browser-based-sso): You must also set USER.
- [snowflake_jwt](https://docs.snowflake.com/en/user-guide/key-pair-auth.html): You must also set PRIVATE_KEY_FILE or PRIVATE_KEY.
- [oauth](https://docs.snowflake.com/en/user-guide/oauth.html): You must also set TOKEN. |
-| VALIDATE_DEFAULT_PARAMETERS | No | Whether DB, SCHEMA, and WAREHOUSE should be verified when making a connection. Defaults to true. |
-| PRIVATE_KEY_FILE | Depends | The path to the private key file to use for key-pair authentication. Must be used in combination with AUTHENTICATOR=snowflake_jwt. |
-| PRIVATE_KEY_PWD | No | The passphrase to use for decrypting the private key, if the key is encrypted. |
-| PRIVATE_KEY | Depends | The private key to use for key-pair authentication. Must be used in combination with AUTHENTICATOR=snowflake_jwt.<br>
If the private key value includes any equal signs (=), make sure to replace each equal sign with two signs (==) to ensure that the connection string is parsed correctly. | -| TOKEN | Depends | The OAuth token to use for OAuth authentication. Must be used in combination with AUTHENTICATOR=oauth. | -| INSECUREMODE | No | Set to true to disable the certificate revocation list check. Default is false. | -| USEPROXY | No | Set to true if you need to use a proxy server. The default value is false.

This parameter was introduced in v2.0.4. | -| PROXYHOST | Depends | The hostname of the proxy server.

If USEPROXY is set to `true`, you must set this parameter.

This parameter was introduced in v2.0.4. | -| PROXYPORT | Depends | The port number of the proxy server.

If USEPROXY is set to `true`, you must set this parameter.

This parameter was introduced in v2.0.4. | -| PROXYUSER | No | The username for authenticating to the proxy server.

This parameter was introduced in v2.0.4. | -| PROXYPASSWORD | Depends | The password for authenticating to the proxy server.

If USEPROXY is `true` and PROXYUSER is set, you must set this parameter.

This parameter was introduced in v2.0.4. | -| NONPROXYHOSTS | No | The list of hosts that the driver should connect to directly, bypassing the proxy server. Separate the hostnames with a pipe symbol (\|). You can also use an asterisk (`*`) as a wildcard.
The target host must fully match an item in this list for the driver to bypass the proxy server.<br>

This parameter was introduced in v2.0.4. |
-| FILE_TRANSFER_MEMORY_THRESHOLD | No | The maximum number of bytes to keep in memory during file encryption or decryption. If the size of the file being encrypted or decrypted exceeds this value, a temporary file is created and the work continues in that file instead of in memory.<br>
If no value is provided, the default of 1 MB (1048576 bytes) is used.<br>
It is possible to configure any integer value greater than zero, representing the maximum number of bytes to keep in memory. |
-| CLIENT_CONFIG_FILE | No | The location of the client configuration JSON file, in which you can configure the easy logging feature. |
-| ALLOWUNDERSCORESINHOST | No | Specifies whether to allow underscores in account names. This impacts PrivateLink customers whose account names contain underscores. In this situation, you must override the default value by setting allowUnderscoresInHost to true. |
-| QUERY_TAG | No | Optional string that can be used to tag queries and other SQL statements executed within a connection. The tags are displayed in the output of the QUERY_HISTORY and QUERY_HISTORY_BY_* functions.<br>
To set QUERY_TAG at the statement level, you can use SnowflakeDbCommand.QueryTag. |
-| MAXPOOLSIZE | No | Maximum number of connections in a pool. The default value is 10. The `maxPoolSize` value cannot be lower than the `minPoolSize` value. |
-| MINPOOLSIZE | No | Expected minimum number of connections in the pool. When you get a connection from the pool, more connections might be initialised in the background to increase the pool size to `minPoolSize`. If you specify 0 or 1, no extra connections are initialised in the background. The default value is 2. The `maxPoolSize` value cannot be lower than the `minPoolSize` value. The parameter is used only in the new version of the connection pool. |
-| CHANGEDSESSION | No | Specifies what happens to a closed connection when some of its session variables have been altered (e.g. you used `ALTER SESSION SET SCHEMA` to change the database schema). The default behaviour is `OriginalPool`, which means the session stays in the original pool. Currently no other option is possible. The parameter is used only in the new version of the connection pool. |
-| WAITINGFORIDLESESSIONTIMEOUT | No | Timeout for waiting for an idle session when the pool is full, that is, when there is no idle session and a new one cannot be created without exceeding `maxPoolSize`. The default value is 30 seconds. Allowed units are, e.g., `1000ms` (milliseconds), `15s` (seconds), and `2m` (minutes); seconds are assumed when the suffix is omitted. Special value: `0` - fail immediately when a new connection is requested and the pool is full. You cannot specify an infinite value. |
-| EXPIRATIONTIMEOUT | No | Timeout for using each connection. Connections that live longer than the specified timeout are considered expired and are removed from the pool. The default is 1 hour. Allowed units are, e.g., `360000ms` (milliseconds), `3600s` (seconds), and `60m` (minutes); seconds are assumed when the suffix is omitted. 
Special values: `0` - immediate expiration of the connection just after its creation. Expiration timeout cannot be set to infinity. | -| POOLINGENABLED | No | Boolean flag indicating if the connection should be a part of a pool. The default value is `true`. | - -
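As a sketch of how several of the properties above combine, the pooling parameters can be set directly in a single connection string. The account and credential values below are placeholders, not real settings:

```cs
using (var conn = new SnowflakeDbConnection())
{
    // minPoolSize, maxPoolSize, expirationTimeout and poolingEnabled correspond
    // to the MINPOOLSIZE, MAXPOOLSIZE, EXPIRATIONTIMEOUT and POOLINGENABLED
    // properties in the table above; account and credentials are hypothetical.
    conn.ConnectionString =
        "account=testaccount;user=testuser;password=XXXXX;" +
        "minPoolSize=2;maxPoolSize=10;expirationTimeout=60m;poolingEnabled=true";
    conn.Open();
    conn.Close();
}
```

Values without a unit suffix (e.g. `expirationTimeout=3600`) are interpreted as seconds, per the table above.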
- -### Password-based Authentication - -The following example demonstrates how to open a connection to Snowflake. This example uses a password for authentication. - -```cs -using (IDbConnection conn = new SnowflakeDbConnection()) -{ - conn.ConnectionString = "account=testaccount;user=testuser;password=XXXXX;db=testdb;schema=testschema"; - - conn.Open(); - - conn.Close(); -} -``` - - - -Beginning with version 2.0.18, the .NET connector uses Microsoft [DbConnectionStringBuilder](https://learn.microsoft.com/en-us/dotnet/api/system.data.oledb.oledbconnection.connectionstring?view=dotnet-plat-ext-6.0#remarks) to follow the .NET specification for escaping characters in connection strings. - -The following examples show how you can include different types of special characters in a connection string: - -- To include a single quote (') character: - - ```cs - string connectionString = String.Format( - "account=testaccount; " + - "user=testuser; " + - "password=test'password;" - ); - ``` - -- To include a double quote (") character: - - ```cs - string connectionString = String.Format( - "account=testaccount; " + - "user=testuser; " + - "password=test\"password;" - ); - ``` - -- To include a semicolon (;): - - ```cs - string connectionString = String.Format( - "account=testaccount; " + - "user=testuser; " + - "password=\"test;password\";" - ); - ``` - -- To include an equal sign (=): - - ```cs - string connectionString = String.Format( - "account=testaccount; " + - "user=testuser; " + - "password=test=password;" - ); - ``` - - Note that previously you needed to use a double equal sign (==) to escape the character. However, beginning with version 2.0.18, you can use a single equal size. - - -Snowflake supports using [double quote identifiers](https://docs.snowflake.com/en/sql-reference/identifiers-syntax#double-quoted-identifiers) for object property values (WAREHOUSE, DATABASE, SCHEMA AND ROLES). The value should be delimited with `\"` in the connection string. 
The value is case-sensitive and allow to use special characters as part of the value. - - ```cs - string connectionString = String.Format( - "account=testaccount; " + - "database=\"testDB\";" - ); - ``` - - To include a `"` character as part of the value should be escaped using `\"\"`. - - ```cs - string connectionString = String.Format( - "account=testaccount; " + - "database=\"\"\"test\"\"user\"\"\";" // DATABASE => ""test"db"" - ); - ``` - - -### Other Authentication Methods - -If you are using a different method for authentication, see the examples below: - -- **Key-pair authentication** - - After setting up [key-pair authentication](https://docs.snowflake.com/en/user-guide/key-pair-auth.html), you can specify the - private key for authentication in one of the following ways: - - - Specify the file containing an unencrypted private key: - - ```cs - using (IDbConnection conn = new SnowflakeDbConnection()) - { - conn.ConnectionString = "account=testaccount;authenticator=snowflake_jwt;user=testuser;private_key_file={pathToThePrivateKeyFile};db=testdb;schema=testschema"; - - conn.Open(); - - conn.Close(); - } - ``` - - where: - - - `{pathToThePrivateKeyFile}` is the path to the file containing the unencrypted private key. - - - Specify the file containing an encrypted private key: - - ```cs - using (IDbConnection conn = new SnowflakeDbConnection()) - { - conn.ConnectionString = "account=testaccount;authenticator=snowflake_jwt;user=testuser;private_key_file={pathToThePrivateKeyFile};private_key_pwd={passwordForDecryptingThePrivateKey};db=testdb;schema=testschema"; - - conn.Open(); - - conn.Close(); - } - ``` - - where: - - - `{pathToThePrivateKeyFile}` is the path to the file containing the unencrypted private key. - - `{passwordForDecryptingThePrivateKey}` is the password for decrypting the private key. 
-
-  - Specify an unencrypted private key (read from a file):
-
-    ```cs
-    using (IDbConnection conn = new SnowflakeDbConnection())
-    {
-      string privateKeyContent = File.ReadAllText({pathToThePrivateKeyFile});
-
-      conn.ConnectionString = String.Format("account=testaccount;authenticator=snowflake_jwt;user=testuser;private_key={0};db=testdb;schema=testschema", privateKeyContent);
-
-      conn.Open();
-
-      conn.Close();
-    }
-    ```
-
-    where:
-
-    - `{pathToThePrivateKeyFile}` is the path to the file containing the unencrypted private key.
-
-- **OAuth**
-
-  After setting up [OAuth](https://docs.snowflake.com/en/user-guide/oauth.html), set `AUTHENTICATOR=oauth` and `TOKEN` to the
-  OAuth token in the connection string.
-
-  ```cs
-  using (IDbConnection conn = new SnowflakeDbConnection())
-  {
-      conn.ConnectionString = "account=testaccount;user=testuser;authenticator=oauth;token={oauthTokenValue};db=testdb;schema=testschema";
-
-      conn.Open();
-
-      conn.Close();
-  }
-  ```
-
-  where:
-
-  - `{oauthTokenValue}` is the OAuth token to use for authentication.
-
-- **Browser-based SSO**
-
-  In the connection string, set `AUTHENTICATOR=externalbrowser`.
-  Optionally, `USER` can be set. In that case, authentication completes only if the user authenticated via the external browser matches the one from the configuration.
-
-  ```cs
-  using (IDbConnection conn = new SnowflakeDbConnection())
-  {
-      conn.ConnectionString = "account=testaccount;authenticator=externalbrowser;user={login_name_for_IdP};db=testdb;schema=testschema";
-
-      conn.Open();
-
-      conn.Close();
-  }
-  ```
-
-  where:
-
-  - `{login_name_for_IdP}` is your login name for your IdP.
-
-  You can override the default timeout after which external browser authentication is marked as failed.
-  The timeout prevents an infinite hang when the user does not provide the login details, e.g. when closing the browser tab.
-  To override it, you can provide the `BROWSER_RESPONSE_TIMEOUT` parameter (in seconds). 
-
-- **Native SSO through Okta**
-
-  In the connection string, set `AUTHENTICATOR` to the
-  [URL of the endpoint for your Okta account](https://docs.snowflake.com/en/user-guide/admin-security-fed-auth-use.html#label-native-sso-okta),
-  and set `USER` to the login name for your IdP.
-
-  ```cs
-  using (IDbConnection conn = new SnowflakeDbConnection())
-  {
-      conn.ConnectionString = "account=testaccount;authenticator={okta_url_endpoint};user={login_name_for_IdP};db=testdb;schema=testschema";
-
-      conn.Open();
-
-      conn.Close();
-  }
-  ```
-
-  where:
-
-  - `{okta_url_endpoint}` is the URL for the endpoint for your Okta account (e.g. `https://.okta.com`).
-  - `{login_name_for_IdP}` is your login name for your IdP.
-
-In v2.0.4 and later releases, you can configure the driver to connect through a proxy server. The following example configures the
-driver to connect through the proxy server `myproxyserver` on port `8888`. The driver authenticates to the proxy server as the
-user `test` with the password `test`:
-
-```cs
-using (IDbConnection conn = new SnowflakeDbConnection())
-{
-    conn.ConnectionString = "account=testaccount;user=testuser;password=XXXXX;db=testdb;schema=testschema;useProxy=true;proxyHost=myproxyserver;proxyPort=8888;proxyUser=test;proxyPassword=test";
-
-    conn.Open();
-
-    conn.Close();
-}
-```
-
-The NONPROXYHOSTS property can be set to specify hosts for which the proxy server should be bypassed. Define it using the full host URL, or the URL with the `*` wildcard symbol.
-
-Examples:
-
-- `*` (Bypasses the proxy for all hosts)
-- `*.snowflakecomputing.com` (Bypasses all hosts that end with `snowflakecomputing.com`)
-- `https://testaccount.snowflakecomputing.com` (Bypasses the proxy for this full host URL).
-- `*.myserver.com | *testaccount*` (You can specify multiple patterns separated by `|`)
-
-
-> Note: The nonproxyhost value should match the full URL including the http or https section. 
The '*' wildcard can be added to bypass the hostname successfully. 

-- `myaccount.snowflakecomputing.com` (Not bypassed).
-- `*myaccount.snowflakecomputing.com` (Bypassed).
-
+To create a connection, start with: [Connecting and Authentication Methods](doc/Connecting.md)

## Using Connection Pools

-Instead of creating a connection each time your client application needs to access Snowflake, you can define a cache of Snowflake connections that can be reused as needed. Connection pooling usually reduces the lag time to make a connection. However, it can slow down client failover to an alternative DNS when a DNS problem occurs.
-
-The Snowflake .NET driver provides the following functions for managing connection pools.
+Connection pooling is described in: [Multiple Connection Pools](doc/ConnectionPooling.md).

-| Function | Description |
-| ---------------------------------------------- | ------------------------------------------------------------------------------------------------------- |
-| SnowflakeDbConnectionPool.ClearAllPools() | Removes all connections from the connection pool. |
-| SnowflakeDbConnection.SetMaxPoolSize(n) | Sets the maximum number of connections for the connection pool, where _n_ is the number of connections. |
-| SnowflakeDBConnection.SetTimeout(n) | Sets the number of seconds to keep an unresponsive connection in the connection pool. |
-| SnowflakeDbConnectionPool.GetCurrentPoolSize() | Returns the number of connections currently in the connection pool. |
-| SnowflakeDbConnectionPool.SetPooling() | Determines whether to enable (`true`) or disable (`false`) connection pooling. Default: `true`. |
+Pooling prior to v4.0.0 is described in: [Single Connection Pool](doc/ConnectionPoolingDeprecated.md) - `deprecated`

-The following sample demonstrates how to monitor the size of a connection pool as connections are added and dropped from the pool. 
+## Data Types and Formats

-```cs
-public void TestConnectionPoolClean()
-{
-    SnowflakeDbConnectionPool.ClearAllPools();
-    SnowflakeDbConnectionPool.SetMaxPoolSize(2);
-    var conn1 = new SnowflakeDbConnection();
-    conn1.ConnectionString = ConnectionString;
-    conn1.Open();
-    Assert.AreEqual(ConnectionState.Open, conn1.State);
+Snowflake data types and their .NET equivalents are covered in: [Data Types and Data Formats](doc/DataTypes.md)

-    var conn2 = new SnowflakeDbConnection();
-    conn2.ConnectionString = ConnectionString + " retryCount=1";
-    conn2.Open();
-    Assert.AreEqual(ConnectionState.Open, conn2.State);
-    Assert.AreEqual(0, SnowflakeDbConnectionPool.GetCurrentPoolSize());
-    conn1.Close();
-    conn2.Close();
-    Assert.AreEqual(2, SnowflakeDbConnectionPool.GetCurrentPoolSize());
-    var conn3 = new SnowflakeDbConnection();
-    conn3.ConnectionString = ConnectionString + " retryCount=2";
-    conn3.Open();
-    Assert.AreEqual(ConnectionState.Open, conn3.State);
+## Querying Data

-    var conn4 = new SnowflakeDbConnection();
-    conn4.ConnectionString = ConnectionString + " retryCount=3";
-    conn4.Open();
-    Assert.AreEqual(ConnectionState.Open, conn4.State);
+How to execute a query, use query bindings, and run queries synchronously and asynchronously:
+[Running Queries and Reading Results](doc/QueryingData.md)

-    conn3.Close();
-    Assert.AreEqual(2, SnowflakeDbConnectionPool.GetCurrentPoolSize());
-    conn4.Close();
-    Assert.AreEqual(2, SnowflakeDbConnectionPool.GetCurrentPoolSize());
+## Stage Files

-    Assert.AreEqual(ConnectionState.Closed, conn1.State);
-    Assert.AreEqual(ConnectionState.Closed, conn2.State);
-    Assert.AreEqual(ConnectionState.Closed, conn3.State);
-    Assert.AreEqual(ConnectionState.Closed, conn4.State);
-}
-```
-
-## Mapping .NET and Snowflake Data Types
-
-The .NET driver supports the following mappings from .NET to Snowflake data types. 
-
-| .NET Framework Data Type | Data Type in Snowflake |
-| ------------------------ | ---------------------- |
-| `int`, `long` | `NUMBER(38, 0)` |
-| `decimal` | `NUMBER(38, )` |
-| `double` | `REAL` |
-| `string` | `TEXT` |
-| `bool` | `BOOLEAN` |
-| `byte` | `BINARY` |
-| `datetime` | `DATE` |
-
-## Arrow data format
-
-The .NET connector, starting with v2.1.3, supports the [Arrow data format](https://arrow.apache.org/)
-as a [preview](https://docs.snowflake.com/en/release-notes/preview-features) feature for data transfers
-between Snowflake and a .NET client. The Arrow data format avoids extra
-conversions between binary and textual representations of the data. The Arrow
-data format can improve performance and reduce memory consumption in clients.
-
-The data format is controlled by the
-DOTNET_QUERY_RESULT_FORMAT parameter. To use the Arrow format, execute:
-
-```snowflake
--- at the session level
-ALTER SESSION SET DOTNET_QUERY_RESULT_FORMAT = ARROW;
--- or at the user level
-ALTER USER SET DOTNET_QUERY_RESULT_FORMAT = ARROW;
--- or at the account level
-ALTER ACCOUNT SET DOTNET_QUERY_RESULT_FORMAT = ARROW;
-```
-
-The valid values for the parameter are:
-
-- ARROW
-- JSON (default)
-
-## Run a Query and Read Data
-
-```cs
-using (IDbConnection conn = new SnowflakeDbConnection())
-{
-    conn.ConnectionString = connectionString;
-    conn.Open();
-
-    IDbCommand cmd = conn.CreateCommand();
-    cmd.CommandText = "select * from t";
-    IDataReader reader = cmd.ExecuteReader();
-
-    while (reader.Read())
-    {
-        Console.WriteLine(reader.GetString(0));
-    }
-
-    conn.Close();
-}
-```
-
-Note that for a `TIME` column, the reader returns a `System.DateTime` value. If you need a `System.TimeSpan` value, call the
-`GetTimeSpan` method in `SnowflakeDbDataReader`. This method was introduced in the v2.0.4 release.
-
-Note that because this method is not available in the generic `IDataReader` interface, you must cast the object as
-`SnowflakeDbDataReader` before calling the method. 
For example:
-
-```cs
-TimeSpan timeSpanTime = ((SnowflakeDbDataReader)reader).GetTimeSpan(13);
-```
-
-## Execute a query asynchronously on the server
-
-You can run a query asynchronously on the server. The server responds immediately with a `queryId` and continues to execute the query asynchronously.
-You can then use this `queryId` to check the query status or wait until the query is completed and get the results.
-It is fine to start the query in one session and continue to query for the results in another session based on the queryId.
-
-**Note**: There are 2 levels of asynchronous execution. One is asynchronous execution in terms of the C# language (`async await`).
-The other is asynchronous execution of the query by the server (you can recognize it by method names containing `InAsyncMode`, e.g. `ExecuteInAsyncMode`, `ExecuteAsyncInAsyncMode`).
-
-Example of synchronous code starting a query to be executed asynchronously on the server:
-```cs
-using (SnowflakeDbConnection conn = new SnowflakeDbConnection("account=testaccount;username=testusername;password=testpassword"))
-{
-    conn.Open();
-    SnowflakeDbCommand cmd = (SnowflakeDbCommand)conn.CreateCommand();
-    cmd.CommandText = "SELECT ...";
-    var queryId = cmd.ExecuteInAsyncMode();
-    // ...
-}
-```
-
-Example of asynchronous code starting a query to be executed asynchronously on the server:
-```cs
-using (SnowflakeDbConnection conn = new SnowflakeDbConnection("account=testaccount;username=testusername;password=testpassword"))
-{
-    await conn.OpenAsync(CancellationToken.None).ConfigureAwait(false);
-    SnowflakeDbCommand cmd = (SnowflakeDbCommand)conn.CreateCommand();
-    cmd.CommandText = "SELECT ...";
-    var queryId = await cmd.ExecuteAsyncInAsyncMode(CancellationToken.None).ConfigureAwait(false);
-    // ...
-}
-```
-
-You can check the status of a query executed asynchronously on the server either in synchronous code:
-```cs
-var queryStatus = cmd.GetQueryStatus(queryId);
-Assert.IsTrue(conn.IsStillRunning(queryStatus)); // assuming that the query is still running
-Assert.IsFalse(conn.IsAnError(queryStatus)); // assuming that the query has not finished with an error
-```
-or in asynchronous code:
-```cs
-var queryStatus = await cmd.GetQueryStatusAsync(queryId, CancellationToken.None).ConfigureAwait(false);
-Assert.IsTrue(conn.IsStillRunning(queryStatus)); // assuming that the query is still running
-Assert.IsFalse(conn.IsAnError(queryStatus)); // assuming that the query has not finished with an error
-```
-
-The following example shows how to get query results.
-The operation repeatedly checks the query status until the query completes, a timeout occurs, or the maximum number of attempts is reached.
-The synchronous code example:
-```cs
-DbDataReader reader = cmd.GetResultsFromQueryId(queryId);
-```
-and the asynchronous code example:
-```cs
-DbDataReader reader = await cmd.GetResultsFromQueryIdAsync(queryId, CancellationToken.None).ConfigureAwait(false);
-```
-
-**Note**: GET/PUT operations are currently not enabled for asynchronous executions.
-
-## Executing a Batch of SQL Statements (Multi-Statement Support)
-
-With version 2.0.18 and later of the .NET connector, you can send
-a batch of SQL statements, separated by semicolons,
-to be executed in a single request.
-
-**Note**: Snowflake does not currently support variable binding in multi-statement SQL requests.
-
----
-
-**Note**
-
-By default, Snowflake returns an error for queries issued with multiple statements to protect against SQL injection attacks. The multiple statements feature makes your system more vulnerable to SQL injections, and so it should be used carefully. 
You can reduce the risk by using the MULTI_STATEMENT_COUNT parameter to specify the number of statements to be executed, which makes it more difficult to inject a statement by appending to it.
-
----
-
-You can execute multiple statements as a batch in the same way you execute queries with single statements, except that the query string contains multiple statements separated by semicolons. Note that multiple statements execute sequentially, not in parallel.
-
-You can set this parameter at the session level using the following command:
-
-```
-ALTER SESSION SET MULTI_STATEMENT_COUNT = <0/1>;
-```
-
-where:
-
-- **0**: Enables an unspecified number of SQL statements in a query.
-
-  Using this value allows batch queries to contain any number of SQL statements without needing to specify the MULTI_STATEMENT_COUNT statement parameter. However, be aware that using this value reduces the protection against SQL injection attacks.
-
-- **1**: Allows one SQL statement or a specified number of statements in a query string (default).
-
-  You must include MULTI_STATEMENT_COUNT as a statement parameter to specify the number of statements included when the query string contains more than one statement. If the number of statements sent in the query string does not match the MULTI_STATEMENT_COUNT value, the .NET driver rejects the request. You can, however, omit this parameter if you send a single statement.
-
-The following example sets the MULTI_STATEMENT_COUNT session parameter to 1. Then, for an individual command, it sets MULTI_STATEMENT_COUNT=3 to indicate that the query contains precisely three SQL commands. The query string, `cmd.CommandText`, then contains the three statements to execute. 
-
-```cs
-using (IDbConnection conn = new SnowflakeDbConnection())
-{
-    conn.ConnectionString = ConnectionString;
-    conn.Open();
-
-    // Set the session-level default
-    IDbCommand sessionCmd = conn.CreateCommand();
-    sessionCmd.CommandText = "ALTER SESSION SET MULTI_STATEMENT_COUNT = 1;";
-    sessionCmd.ExecuteNonQuery();
-
-    using (DbCommand cmd = (DbCommand)conn.CreateCommand())
-    {
-        // Set statement count for this command
-        var stmtCountParam = cmd.CreateParameter();
-        stmtCountParam.ParameterName = "MULTI_STATEMENT_COUNT";
-        stmtCountParam.DbType = DbType.Int16;
-        stmtCountParam.Value = 3;
-        cmd.Parameters.Add(stmtCountParam);
-        cmd.CommandText = "CREATE OR REPLACE TABLE test(n int); INSERT INTO test VALUES (1), (2); SELECT * FROM test ORDER BY n;";
-        DbDataReader reader = cmd.ExecuteReader();
-        do
-        {
-            if (reader.HasRows)
-            {
-                while (reader.Read())
-                {
-                    // read data
-                }
-            }
-        }
-        while (reader.NextResult());
-    }
-
-    conn.Close();
-}
-```
-
-## Bind Parameter
-
-**Note**: Snowflake does not currently support variable binding in multi-statement SQL requests.
-
-This example shows how bound parameters are converted from C# data types to
-Snowflake data types. For example, if the data type of the Snowflake column
-is INTEGER, then you can bind the C# data types Int32 or Int16.
-
-This example inserts 3 rows into a table with one column. 
- -```cs -using (IDbConnection conn = new SnowflakeDbConnection()) -{ - conn.ConnectionString = connectionString; - conn.Open(); - - IDbCommand cmd = conn.CreateCommand(); - cmd.CommandText = "create or replace table T(cola int)"; - int count = cmd.ExecuteNonQuery(); - Assert.AreEqual(0, count); - - IDbCommand cmd = conn.CreateCommand(); - cmd.CommandText = "insert into t values (?), (?), (?)"; - - var p1 = cmd.CreateParameter(); - p1.ParameterName = "1"; - p1.Value = 10; - p1.DbType = DbType.Int32; - cmd.Parameters.Add(p1); - - var p2 = cmd.CreateParameter(); - p2.ParameterName = "2"; - p2.Value = 10000L; - p2.DbType = DbType.Int32; - cmd.Parameters.Add(p2); - - var p3 = cmd.CreateParameter(); - p3.ParameterName = "3"; - p3.Value = (short)1; - p3.DbType = DbType.Int16; - cmd.Parameters.Add(p3); - - var count = cmd.ExecuteNonQuery(); - Assert.AreEqual(3, count); - - cmd.CommandText = "drop table if exists T"; - count = cmd.ExecuteNonQuery(); - Assert.AreEqual(0, count); - - conn.Close(); -} -``` - -## Bind Array Variables - -The sample code creates a table with a single integer column and then uses array binding to populate the table with values 0 to 70000. 
- -```cs -using (IDbConnection conn = new SnowflakeDbConnection()) -{ - conn.ConnectionString = ConnectionString; - conn.Open(); - - using (IDbCommand cmd = conn.CreateCommand()) - { - cmd.CommandText = "create or replace table putArrayBind(colA integer)"; - cmd.ExecuteNonQuery(); - - string insertCommand = "insert into putArrayBind values (?)"; - cmd.CommandText = insertCommand; - - int total = 70000; - - List arrint = new List(); - for (int i = 0; i < total; i++) - { - arrint.Add(i); - } - var p1 = cmd.CreateParameter(); - p1.ParameterName = "1"; - p1.DbType = DbType.Int16; - p1.Value = arrint.ToArray(); - cmd.Parameters.Add(p1); - - count = cmd.ExecuteNonQuery(); // count = 70000 - } - - conn.Close(); -} -``` - -## PUT local files to stage - -PUT command can be used to upload files of a local directory or a single local file to the Snowflake stages (named, internal table stage or internal user stage). -Such staging files can be used to load data into a table. -More on this topic: [File staging with PUT](https://docs.snowflake.com/en/sql-reference/sql/put). - -In the driver the command can be executed in a bellow way: - -```cs -using (IDbConnection conn = new SnowflakeDbConnection()) -{ - try - { - conn.ConnectionString = ""; - conn.Open(); - var cmd = (SnowflakeDbCommand)conn.CreateCommand(); // cast allows get QueryId from the command - - cmd.CommandText = "PUT file://some_data.csv @my_schema.my_stage AUTO_COMPRESS=TRUE"; - var reader = cmd.ExecuteReader(); - Assert.IsTrue(reader.read()); - Assert.DoesNotThrow(() => Guid.Parse(cmd.GetQueryId())); - } - catch (SnowflakeDbException e) - { - Assert.DoesNotThrow(() => Guid.Parse(e.QueryId)); // when failed - Assert.That(e.InnerException.GetType(), Is.EqualTo(typeof(FileNotFoundException))); - } -``` - -In case of a failure a SnowflakeDbException exception will be thrown with affected QueryId if possible. 
-If it was after the query got executed this exception will be a SnowflakeDbException containing affected QueryId. -In case of the initial phase of execution QueryId might not be provided. -Inner exception (if applicable) will provide some details on the failure cause and -it will be for example: FileNotFoundException, DirectoryNotFoundException. - -## GET stage files - -GET command allows to download stage directories or files to a local directory. -It can be used in connection with named stage, table internal stage or user stage. -Detailed information on the command: [Downloading files with GET](https://docs.snowflake.com/en/sql-reference/sql/get). - -To use the command in a driver similar code can be executed in a client app: - -```cs - try - { - conn.ConnectionString = ""; - conn.Open(); - var cmd = (SnowflakeDbCommand)conn.CreateCommand(); // cast allows get QueryId from the command - - cmd.CommandText = "GET @my_schema.my_stage/stage_file.csv file://local_file.csv AUTO_COMPRESS=TRUE"; - var reader = cmd.ExecuteReader(); - Assert.IsTrue(reader.read()); // True on success, False if failure - Assert.DoesNotThrow(() => Guid.Parse(cmd.GetQueryId())); - } - catch (SnowflakeDbException e) - { - Assert.DoesNotThrow(() => Guid.Parse(e.QueryId)); // on failure - } -``` - -In case of a failure a SnowflakeDbException will be thrown with affected QueryId if possible. -When no technical or syntax errors occurred but the DBDataReader has no data to process it returns False -without throwing an exception. - -## Close the Connection - -To close the connection, call the `Close` method of `SnowflakeDbConnection`. - -If you want to avoid blocking threads while the connection is closing, call the `CloseAsync` method instead, passing in a -`CancellationToken`. This method was introduced in the v2.0.4 release. - -Note that because this method is not available in the generic `IDbConnection` interface, you must cast the object as -`SnowflakeDbConnection` before calling the method. 
For example: - -```cs -CancellationTokenSource cancellationTokenSource = new CancellationTokenSource(); -// Close the connection -((SnowflakeDbConnection)conn).CloseAsync(cancellationTokenSource.Token); -``` - -## Evict the Connection - -For the open connection, call the `PreventPooling()` to mark the connection to be removed on close instead being still pooled. -The busy sessions counter will be decreased when the connection is closed. +Using stage files within PUT/GET commands: +[PUT and GET Files to/from Stage](doc/StageFiles.md) ## Logging -The Snowflake Connector for .NET uses [log4net](http://logging.apache.org/log4net/) as the logging framework. - -Here is a sample app.config file that uses [log4net](http://logging.apache.org/log4net/) - -```xml - -
- - - - - - - - - - - - - - - - - - - - - -``` - -## Easy logging - -The Easy Logging feature lets you change the log level for all driver classes and add an extra file appender for logs from the driver's classes at runtime. You can specify the log levels and the directory in which to save log files in a configuration file (default: `sf_client_config.json`). - -You typically change log levels only when debugging your application. - -**Note** -This logging configuration file features support only the following log levels: - -- OFF -- ERROR -- WARNING -- INFO -- DEBUG -- TRACE - -This configuration file uses JSON to define the `log_level` and `log_path` logging parameters, as follows: - -```json -{ - "common": { - "log_level": "INFO", - "log_path": "c:\\some-path\\some-directory" - } -} -``` - -where: - -- `log_level` is the desired logging level. -- `log_path` is the location to store the log files. The driver automatically creates a `dotnet` subdirectory in the specified `log_path`. For example, if you set log_path to `c:\logs`, the drivers creates the `c:\logs\dotnet` directory and stores the logs there. - -The driver looks for the location of the configuration file in the following order: - -- `CLIENT_CONFIG_FILE` connection parameter, containing the full path to the configuration file (e.g. `"ACCOUNT=test;USER=test;PASSWORD=test;CLIENT_CONFIG_FILE=C:\\some-path\\client_config.json;"`) -- `SF_CLIENT_CONFIG_FILE` environment variable, containing the full path to the configuration file. -- .NET driver/application directory, where the file must be named `sf_client_config.json`. -- User’s home directory, where the file must be named `sf_client_config.json`. +Logging description and configuration: +[Logging and Easy Logging](doc/Logging.md) -**Note** -To enhance security, the driver no longer searches a temporary directory for easy logging configurations. 
Additionally, the driver now requires the logging configuration file on Unix-style systems to limit file permissions to allow only the file owner to modify the files (such as `chmod 0600` or `chmod 0644`). - -To minimize the number of searches for a configuration file, the driver reads the file only for: - -- The first connection. -- The first connection with `CLIENT_CONFIG_FILE` parameter. - -The extra logs are stored in a `dotnet` subfolder of the specified directory, such as `C:\some-path\some-directory\dotnet`. - -If a client uses the `log4net` library for application logging, enabling easy logging affects the log level in those logs as well. - -## Getting the code coverage - -1. Go to .NET project directory - -2. Clean the directory - -``` -dotnet clean snowflake-connector-net.sln && dotnet nuget locals all --clear -``` - -3. Create parameters.json containing connection info for AWS, AZURE, or GCP account and place inside the Snowflake.Data.Tests folder - -4. Build the project for .NET6 - -``` -dotnet build snowflake-connector-net.sln /p:DebugType=Full -``` - -5. Run dotnet-cover on the .NET6 build - -``` -dotnet-coverage collect "dotnet test --framework net6.0 --no-build -l console;verbosity=normal" --output net6.0_AWS_coverage.xml --output-format cobertura --settings coverage.config -``` - -6. Build the project for .NET Framework - -``` -msbuild snowflake-connector-net.sln -p:Configuration=Release -``` - -7. Run dotnet-cover on the .NET Framework build - -``` -dotnet-coverage collect "dotnet test --framework net472 --no-build -l console;verbosity=normal" --output net472_AWS_coverage.xml --output-format cobertura --settings coverage.config -``` - -
-Repeat steps 3, 5, and 7 for the other cloud providers.
-Note: no need to rebuild the connector again.

- -For Azure:
- -3. Create parameters.json containing connection info for AZURE account and place inside the Snowflake.Data.Tests folder - -4. Run dotnet-cover on the .NET6 build - -``` -dotnet-coverage collect "dotnet test --framework net6.0 --no-build -l console;verbosity=normal" --output net6.0_AZURE_coverage.xml --output-format cobertura --settings coverage.config -``` - -7. Run dotnet-cover on the .NET Framework build - -``` -dotnet-coverage collect "dotnet test --framework net472 --no-build -l console;verbosity=normal" --output net472_AZURE_coverage.xml --output-format cobertura --settings coverage.config -``` - -
-For GCP:
-
-3. Create parameters.json containing connection info for GCP account and place inside the Snowflake.Data.Tests folder
-
-4. Run dotnet-cover on the .NET6 build
-
-```
-dotnet-coverage collect "dotnet test --framework net6.0 --no-build -l console;verbosity=normal" --output net6.0_GCP_coverage.xml --output-format cobertura --settings coverage.config
-```
-
-7. Run dotnet-cover on the .NET Framework build
-
-```
-dotnet-coverage collect "dotnet test --framework net472 --no-build -l console;verbosity=normal" --output net472_GCP_coverage.xml --output-format cobertura --settings coverage.config
-```
+---------------

 ## Notice

@@ -1037,4 +138,14 @@ dotnet-coverage collect "dotnet test --framework net472 --no-build -l console;ve

 Snowflake has identified an issue where the driver is globally enforcing TLS 1.2 and certificate revocation checks with the .NET Driver v1.2.1 and earlier versions. Starting with v2.0.0, the driver will set these locally.

+4. Certificate Revocation List checks not performed when insecureMode was disabled -
+   Snowflake has identified a vulnerability where checks against the Certificate Revocation List (CRL)
+   were not performed when the insecureMode flag was set to false, which is the default setting.
+   As of version v2.1.5, CRL checking works as intended again.
+   Note that the driver now targets .NET 6.0. When upgrading, you might also need to run “Update-Package -reinstall” to update the dependencies.
+
+See more:
+* [Security Policy](SECURITY.md)
+* [Security Advisories](/security/advisories)
+
diff --git a/doc/CodeCoverage.md b/doc/CodeCoverage.md
new file mode 100644
index 000000000..497219494
--- /dev/null
+++ b/doc/CodeCoverage.md
@@ -0,0 +1,72 @@
+## Getting the code coverage
+
+1. Go to the .NET project directory
+
+2. Clean the directory
+
+```
+dotnet clean snowflake-connector-net.sln && dotnet nuget locals all --clear
+```
+
+3. 
Create parameters.json containing connection info for AWS, AZURE, or GCP account and place inside the Snowflake.Data.Tests folder + +4. Build the project for .NET6 + +``` +dotnet build snowflake-connector-net.sln /p:DebugType=Full +``` + +5. Run dotnet-cover on the .NET6 build + +``` +dotnet-coverage collect "dotnet test --framework net6.0 --no-build -l console;verbosity=normal" --output net6.0_AWS_coverage.xml --output-format cobertura --settings coverage.config +``` + +6. Build the project for .NET Framework + +``` +msbuild snowflake-connector-net.sln -p:Configuration=Release +``` + +7. Run dotnet-cover on the .NET Framework build + +``` +dotnet-coverage collect "dotnet test --framework net472 --no-build -l console;verbosity=normal" --output net472_AWS_coverage.xml --output-format cobertura --settings coverage.config +``` + +
+Repeat steps 3, 5, and 7 for the other cloud providers.
+Note: no need to rebuild the connector again.

+ +For Azure:
+ +3. Create parameters.json containing connection info for AZURE account and place inside the Snowflake.Data.Tests folder + +4. Run dotnet-cover on the .NET6 build + +``` +dotnet-coverage collect "dotnet test --framework net6.0 --no-build -l console;verbosity=normal" --output net6.0_AZURE_coverage.xml --output-format cobertura --settings coverage.config +``` + +7. Run dotnet-cover on the .NET Framework build + +``` +dotnet-coverage collect "dotnet test --framework net472 --no-build -l console;verbosity=normal" --output net472_AZURE_coverage.xml --output-format cobertura --settings coverage.config +``` + +
+For GCP:
+ +3. Create parameters.json containing connection info for GCP account and place inside the Snowflake.Data.Tests folder + +4. Run dotnet-cover on the .NET6 build + +``` +dotnet-coverage collect "dotnet test --framework net6.0 --no-build -l console;verbosity=normal" --output net6.0_GCP_coverage.xml --output-format cobertura --settings coverage.config +``` + +7. Run dotnet-cover on the .NET Framework build + +``` +dotnet-coverage collect "dotnet test --framework net472 --no-build -l console;verbosity=normal" --output net472_GCP_coverage.xml --output-format cobertura --settings coverage.config +``` diff --git a/doc/Connecting.md b/doc/Connecting.md new file mode 100644 index 000000000..d32fc0a13 --- /dev/null +++ b/doc/Connecting.md @@ -0,0 +1,293 @@ +## Connecting + +To connect to Snowflake, specify a valid connection string composed of key-value pairs separated by semicolons, +i.e "\=\;\=\...". + +**Note**: If the keyword or value contains an equal sign (=), you must precede the equal sign with another equal sign. For example, if the keyword is "key" and the value is "value_part1=value_part2", use "key=value_part1==value_part2". + +The following table lists all valid connection properties: +
+
+| Connection Property            | Required | Comment |
+|--------------------------------|----------|---------|
+| ACCOUNT                        | Yes      | Your full account name, which might include additional segments that identify the region and cloud platform where your account is hosted. |
+| APPLICATION                    | No       | **_Snowflake partner use only_**: Specifies the name of a partner application to connect through .NET. The name must match the following pattern: ^\[A-Za-z](\[A-Za-z0-9.-]){1,50}$ (one letter followed by 1 to 50 letters, digits, or `.`, `-`, `_` characters). |
+| DB                             | No       | |
+| HOST                           | No       | Specifies the hostname for your account in the following format: \.snowflakecomputing.com.
If no value is specified, the driver uses \.snowflakecomputing.com. | +| PASSWORD | Depends | Required if AUTHENTICATOR is set to `snowflake` (the default value) or the URL for native SSO through Okta. Ignored for all the other authentication types. | +| ROLE | No | | +| SCHEMA | No | | +| USER | Depends | If AUTHENTICATOR is set to `externalbrowser` this is optional. For native SSO through Okta, set this to the login name for your identity provider (IdP). | +| WAREHOUSE | No | | +| CONNECTION_TIMEOUT | No | Total timeout in seconds when connecting to Snowflake. The default is 300 seconds | +| RETRY_TIMEOUT | No | Total timeout in seconds for supported endpoints of retry policy. The default is 300 seconds. The value can only be increased from the default value or set to 0 for infinite timeout | +| MAXHTTPRETRIES | No | Maximum number of times to retry failed HTTP requests (default: 7). You can set `MAXHTTPRETRIES=0` to remove the retry limit, but doing so runs the risk of the .NET driver infinitely retrying failed HTTP calls. | +| CLIENT_SESSION_KEEP_ALIVE | No | Whether to keep the current session active after a period of inactivity, or to force the user to login again. If the value is `true`, Snowflake keeps the session active indefinitely, even if there is no activity from the user. If the value is `false`, the user must log in again after four hours of inactivity. The default is `false`. Setting this value overrides the server session property for the current session. | +| BROWSER_RESPONSE_TIMEOUT | No | Number to seconds to wait for authentication in an external browser (default: 120). | +| DISABLERETRY | No | Set this property to `true` to prevent the driver from reconnecting automatically when the connection fails or drops. The default value is `false`. | +| AUTHENTICATOR | No | The method of authentication. Currently supports the following values:
- snowflake (default): You must also set USER and PASSWORD.
- [the URL for native SSO through Okta](https://docs.snowflake.com/en/user-guide/admin-security-fed-auth-use.html#native-sso-okta-only): You must also set USER and PASSWORD.
- [externalbrowser](https://docs.snowflake.com/en/user-guide/admin-security-fed-auth-use.html#browser-based-sso): You must also set USER.
- [snowflake_jwt](https://docs.snowflake.com/en/user-guide/key-pair-auth.html): You must also set PRIVATE_KEY_FILE or PRIVATE_KEY.
- [oauth](https://docs.snowflake.com/en/user-guide/oauth.html): You must also set TOKEN. | +| VALIDATE_DEFAULT_PARAMETERS | No | Whether DB, SCHEMA and WAREHOUSE should be verified when making connection. Default to be true. | +| PRIVATE_KEY_FILE | Depends | The path to the private key file to use for key-pair authentication. Must be used in combination with AUTHENTICATOR=snowflake_jwt | +| PRIVATE_KEY_PWD | No | The passphrase to use for decrypting the private key, if the key is encrypted. | +| PRIVATE_KEY | Depends | The private key to use for key-pair authentication. Must be used in combination with AUTHENTICATOR=snowflake_jwt.
If the private key value includes any equal signs (=), make sure to replace each equal sign with two signs (==) to ensure that the connection string is parsed correctly. | +| TOKEN | Depends | The OAuth token to use for OAuth authentication. Must be used in combination with AUTHENTICATOR=oauth. | +| INSECUREMODE | No | Set to true to disable the certificate revocation list check. Default is false. | +| USEPROXY | No | Set to true if you need to use a proxy server. The default value is false.

This parameter was introduced in v2.0.4. | +| PROXYHOST | Depends | The hostname of the proxy server.

If USEPROXY is set to `true`, you must set this parameter.

This parameter was introduced in v2.0.4. | +| PROXYPORT | Depends | The port number of the proxy server.

If USEPROXY is set to `true`, you must set this parameter.

This parameter was introduced in v2.0.4. | +| PROXYUSER | No | The username for authenticating to the proxy server.

This parameter was introduced in v2.0.4. | +| PROXYPASSWORD | Depends | The password for authenticating to the proxy server.

If USEPROXY is `true` and PROXYUSER is set, you must set this parameter.

This parameter was introduced in v2.0.4. | +| NONPROXYHOSTS | No | The list of hosts that the driver should connect to directly, bypassing the proxy server. Separate the hostnames with a pipe symbol (\|). You can also use an asterisk (`*`) as a wildcard.
The target host value must fully match an item from the proxy host list for the proxy server to be bypassed.

This parameter was introduced in v2.0.4. |
+| FILE_TRANSFER_MEMORY_THRESHOLD | No | The maximum number of bytes to hold in memory for file encryption. If the size of the file being encrypted or decrypted exceeds this value, a temporary file is created and the work continues in that file instead of in memory.
If no value is provided, 1MB (1048576 bytes) is used as the default.
Any integer value greater than zero can be configured, representing the maximum number of bytes to reside in memory. |
+| CLIENT_CONFIG_FILE | No | The location of the client configuration JSON file. In this file you can configure the easy logging feature. |
+| ALLOWUNDERSCORESINHOST | No | Specifies whether to allow underscores in account names. This impacts PrivateLink customers whose account names contain underscores. In this situation, you must override the default value by setting allowUnderscoresInHost to true. |
+| QUERY_TAG | No | Optional string that can be used to tag queries and other SQL statements executed within a connection. The tags are displayed in the output of the QUERY_HISTORY and QUERY_HISTORY_BY_* functions.
To set QUERY_TAG on the statement level you can use SnowflakeDbCommand.QueryTag. |
+| MAXPOOLSIZE | No | Maximum number of connections in a pool. The default value is 10. The `maxPoolSize` value cannot be lower than the `minPoolSize` value. |
+| MINPOOLSIZE | No | Expected minimum number of connections in the pool. When you get a connection from the pool, more connections might be initialized in the background to increase the pool size to `minPoolSize`. If you specify 0 or 1, no extra connections are initialized in the background. The default value is 2. The `maxPoolSize` value cannot be lower than the `minPoolSize` value. This parameter is used only in the new version of the connection pool. |
+| CHANGEDSESSION | No | Specifies what happens to a closed connection when some of its session variables have been altered (e.g. you used `ALTER SESSION SET SCHEMA` to change the database schema). The default behaviour is `OriginalPool`, which means the session stays in the original pool. Currently no other option is possible. This parameter is used only in the new version of the connection pool. |
+| WAITINGFORIDLESESSIONTIMEOUT | No | Timeout for waiting for an idle session when the pool is full, i.e. when there is no idle session and a new one cannot be created because `maxPoolSize` has been reached. The default value is 30 seconds. Units are allowed, e.g. `1000ms` (milliseconds), `15s` (seconds), `2m` (minutes); seconds are the default when the postfix is omitted. Special value: `0` - fail immediately when a new connection is requested and the pool is full. You cannot specify an infinite value. |
+| EXPIRATIONTIMEOUT | No | Timeout for using each connection. Connections that last longer than the specified timeout are considered expired and are removed from the pool. The default is 1 hour. Units are allowed, e.g. `360000ms` (milliseconds), `3600s` (seconds), `60m` (minutes); seconds are the default when the postfix is omitted. 
Special value: `0` - the connection expires immediately after its creation. The expiration timeout cannot be set to infinity. |
+| POOLINGENABLED | No | Boolean flag indicating whether the connection should be part of a pool. The default value is `true`. |
+
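+As an illustration, a connection string combining several of the optional properties above might look like the following (the account, credential, and object names are placeholders):
+
+```
+account=testaccount;user=testuser;password=XXXXX;db=testdb;schema=testschema;connection_timeout=300;minPoolSize=2;maxPoolSize=10;waitingForIdleSessionTimeout=30s;expirationTimeout=60m
+```
+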
+
+### Password-based Authentication
+
+The following example demonstrates how to open a connection to Snowflake. This example uses a password for authentication.
+
+```cs
+using (IDbConnection conn = new SnowflakeDbConnection())
+{
+    conn.ConnectionString = "account=testaccount;user=testuser;password=XXXXX;db=testdb;schema=testschema";
+
+    conn.Open();
+
+    conn.Close();
+}
+```
+
+
+
+Beginning with version 2.0.18, the .NET connector uses Microsoft [DbConnectionStringBuilder](https://learn.microsoft.com/en-us/dotnet/api/system.data.oledb.oledbconnection.connectionstring?view=dotnet-plat-ext-6.0#remarks) to follow the .NET specification for escaping characters in connection strings.
+
+The following examples show how you can include different types of special characters in a connection string:
+
+- To include a single quote (') character:
+
+  ```cs
+  string connectionString = String.Format(
+      "account=testaccount; " +
+      "user=testuser; " +
+      "password=test'password;"
+  );
+  ```
+
+- To include a double quote (") character:
+
+  ```cs
+  string connectionString = String.Format(
+      "account=testaccount; " +
+      "user=testuser; " +
+      "password=test\"password;"
+  );
+  ```
+
+- To include a semicolon (;):
+
+  ```cs
+  string connectionString = String.Format(
+      "account=testaccount; " +
+      "user=testuser; " +
+      "password=\"test;password\";"
+  );
+  ```
+
+- To include an equal sign (=):
+
+  ```cs
+  string connectionString = String.Format(
+      "account=testaccount; " +
+      "user=testuser; " +
+      "password=test=password;"
+  );
+  ```
+
+  Note that previously you needed to use a double equal sign (==) to escape the character. However, beginning with version 2.0.18, you can use a single equal sign.
+
+
+Snowflake supports using [double quote identifiers](https://docs.snowflake.com/en/sql-reference/identifiers-syntax#double-quoted-identifiers) for object property values (WAREHOUSE, DATABASE, SCHEMA, and ROLE). The value should be delimited with `\"` in the connection string. 
The value is case-sensitive and allows special characters to be used as part of the value.
+
+  ```cs
+  string connectionString = String.Format(
+      "account=testaccount; " +
+      "database=\"testDB\";"
+  );
+  ```
+- A `"` character included as part of the value must be escaped using `\"\"`.
+
+  ```cs
+  string connectionString = String.Format(
+      "account=testaccount; " +
+      "database=\"\"\"test\"\"user\"\"\";" // DATABASE => ""test"user""
+  );
+  ```
+
+### Other Authentication Methods
+
+If you are using a different method for authentication, see the examples below:
+
+- **Key-pair authentication**
+
+  After setting up [key-pair authentication](https://docs.snowflake.com/en/user-guide/key-pair-auth.html), you can specify the
+  private key for authentication in one of the following ways:
+
+  - Specify the file containing an unencrypted private key:
+
+    ```cs
+    using (IDbConnection conn = new SnowflakeDbConnection())
+    {
+        conn.ConnectionString = "account=testaccount;authenticator=snowflake_jwt;user=testuser;private_key_file={pathToThePrivateKeyFile};db=testdb;schema=testschema";
+
+        conn.Open();
+
+        conn.Close();
+    }
+    ```
+
+    where:
+
+    - `{pathToThePrivateKeyFile}` is the path to the file containing the unencrypted private key.
+
+  - Specify the file containing an encrypted private key:
+
+    ```cs
+    using (IDbConnection conn = new SnowflakeDbConnection())
+    {
+        conn.ConnectionString = "account=testaccount;authenticator=snowflake_jwt;user=testuser;private_key_file={pathToThePrivateKeyFile};private_key_pwd={passwordForDecryptingThePrivateKey};db=testdb;schema=testschema";
+
+        conn.Open();
+
+        conn.Close();
+    }
+    ```
+
+    where:
+
+    - `{pathToThePrivateKeyFile}` is the path to the file containing the encrypted private key.
+    - `{passwordForDecryptingThePrivateKey}` is the password for decrypting the private key. 
+
+  - Specify an unencrypted private key (read from a file):
+
+    ```cs
+    using (IDbConnection conn = new SnowflakeDbConnection())
+    {
+        string privateKeyContent = File.ReadAllText({pathToThePrivateKeyFile});
+
+        conn.ConnectionString = String.Format("account=testaccount;authenticator=snowflake_jwt;user=testuser;private_key={0};db=testdb;schema=testschema", privateKeyContent);
+
+        conn.Open();
+
+        conn.Close();
+    }
+    ```
+
+    where:
+
+    - `{pathToThePrivateKeyFile}` is the path to the file containing the unencrypted private key.
+
+- **OAuth**
+
+  After setting up [OAuth](https://docs.snowflake.com/en/user-guide/oauth.html), set `AUTHENTICATOR=oauth` and `TOKEN` to the
+  OAuth token in the connection string.
+
+  ```cs
+  using (IDbConnection conn = new SnowflakeDbConnection())
+  {
+      conn.ConnectionString = "account=testaccount;user=testuser;authenticator=oauth;token={oauthTokenValue};db=testdb;schema=testschema";
+
+      conn.Open();
+
+      conn.Close();
+  }
+  ```
+
+  where:
+
+  - `{oauthTokenValue}` is the OAuth token to use for authentication.
+
+- **Browser-based SSO**
+
+  In the connection string, set `AUTHENTICATOR=externalbrowser`.
+  Optionally, `USER` can be set. In that case, authentication completes only if the user authenticated via the external browser matches the `USER` from the configuration.
+
+  ```cs
+  using (IDbConnection conn = new SnowflakeDbConnection())
+  {
+      conn.ConnectionString = "account=testaccount;authenticator=externalbrowser;user={login_name_for_IdP};db=testdb;schema=testschema";
+
+      conn.Open();
+
+      conn.Close();
+  }
+  ```
+
+  where:
+
+  - `{login_name_for_IdP}` is your login name for your IdP.
+
+  You can override the default timeout after which external browser authentication is marked as failed.
+  The timeout prevents an infinite hang when the user does not provide the login details, e.g. after closing the browser tab.
+  To override it, provide the `BROWSER_RESPONSE_TIMEOUT` parameter (in seconds). 
+
+- **Native SSO through Okta**
+
+  In the connection string, set `AUTHENTICATOR` to the
+  [URL of the endpoint for your Okta account](https://docs.snowflake.com/en/user-guide/admin-security-fed-auth-use.html#label-native-sso-okta),
+  and set `USER` to the login name for your IdP.
+
+  ```cs
+  using (IDbConnection conn = new SnowflakeDbConnection())
+  {
+      conn.ConnectionString = "account=testaccount;authenticator={okta_url_endpoint};user={login_name_for_IdP};db=testdb;schema=testschema";
+
+      conn.Open();
+
+      conn.Close();
+  }
+  ```
+
+  where:
+
+  - `{okta_url_endpoint}` is the URL for the endpoint for your Okta account (e.g. `https://.okta.com`).
+  - `{login_name_for_IdP}` is your login name for your IdP.
+
+In v2.0.4 and later releases, you can configure the driver to connect through a proxy server. The following example configures the
+driver to connect through the proxy server `myproxyserver` on port `8888`. The driver authenticates to the proxy server as the
+user `test` with the password `test`:
+
+```cs
+using (IDbConnection conn = new SnowflakeDbConnection())
+{
+    conn.ConnectionString = "account=testaccount;user=testuser;password=XXXXX;db=testdb;schema=testschema;useProxy=true;proxyHost=myproxyserver;proxyPort=8888;proxyUser=test;proxyPassword=test";
+
+    conn.Open();
+
+    conn.Close();
+}
+```
+
+The NONPROXYHOSTS property can be set to specify hosts for which the proxy server should be bypassed. Define each entry using the full host URL, or the URL combined with the `*` wildcard symbol.
+
+Examples:
+
+- `*` (bypasses the proxy for all hosts)
+- `*.snowflakecomputing.com` (bypasses the proxy for all hosts that end with `snowflakecomputing.com`)
+- `https://testaccount.snowflakecomputing.com` (bypasses the proxy using the full host URL).
+- `*.myserver.com | *testaccount*` (you can specify multiple patterns for the property, divided by `|`)
+
+
+> Note: The nonproxyhost value should match the full URL, including the http or https section. 
The `*` wildcard can be added to bypass the hostname successfully.
+
+- `myaccount.snowflakecomputing.com` (not bypassed).
+- `*myaccount.snowflakecomputing.com` (bypassed).
+
diff --git a/doc/ConnectionPooling.md b/doc/ConnectionPooling.md
new file mode 100644
index 000000000..f9b8dad86
--- /dev/null
+++ b/doc/ConnectionPooling.md
@@ -0,0 +1,405 @@
+## Using Connection Pools
+
+### Multiple Connection Pools
+
+Snowflake .NET Driver v4.0.0 provides multiple pools with a couple of additional features compared to the previous implementation.
+
+Each pool is identified by the entire connection string. The order of connection string parameters is significant: the same parameters
+ordered differently lead to two different pools being used.
+
+All the pool parameters can be controlled from the connection string.
+
+The pool interface is also maintained by [SnowflakeDbConnectionPool.cs](https://github.com/snowflakedb/snowflake-connector-net/blob/master/Snowflake.Data/Client/SnowflakeDbConnectionPool.cs).
+However, some operations (e.g. setting pool parameters from the SnowflakeDbConnectionPool class) are not possible, since there are multiple pools, each possibly with a different setup.
+For those cases a [SnowflakeDbSessionPool.cs](/Snowflake.Data/Client/SnowflakeDbSessionPool.cs) is provided by
+- [SnowflakeDbSessionPool.GetPool(connectionString)](https://github.com/snowflakedb/snowflake-connector-net/blob/master/Snowflake.Data/Client/SnowflakeDbConnectionPool.cs#L45)
+- [SnowflakeDbSessionPool.GetPool(connectionString, securePassword)](https://github.com/snowflakedb/snowflake-connector-net/blob/master/Snowflake.Data/Client/SnowflakeDbConnectionPool.cs#L51)
+to control pool settings from code. Changed pool settings are not reflected in the connection string, so the recommended way is to control the pool through the connection string. 
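+
+As a minimal sketch, the pool matching a given connection string can be obtained and used as follows (the connection string values are placeholders; `GetPool` and the `SnowflakeDbConnection` constructor are the members referenced in this document):
+
+```cs
+var connectionString = "account=testaccount;user=testuser;password=XXXXX;minPoolSize=2;maxPoolSize=10";
+
+// Each distinct connection string (including parameter order) maps to its own pool.
+var pool = SnowflakeDbConnectionPool.GetPool(connectionString);
+
+// Connections opened with the same string are tracked by that pool
+// and returned to it when closed.
+using (var conn = new SnowflakeDbConnection(connectionString))
+{
+    conn.Open();
+}
+```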
### Pool Lifecycle

A pool is instantiated the first time an application creates and opens a connection with a particular connection string.
A pool can also be initialized the first time it is accessed through SnowflakeDbConnectionPool.GetPool.

From that moment the pool tracks and maintains connections matching exactly this connection string.
The pool is responsible for destroying and recreating connections that are old enough (see [Expiration Timeout](#expiration-timeout)).
The number of connections is maintained between the [Minimum pool size](#minimum-pool-size) and [Maximum pool size](#maximum-pool-size).
Connections are tracked in all their states:
- opening phase
- busy phase
- closed and returned to the pool (idle)

You can clean up the pool using the [Clear Pool](#clear-pool) methods.

### Connection Lifecycle

#### Opening

When an application requests a connection from the pool, there are a few possibilities:

1) The pool has idle connections already opened, and one is provided to the application immediately.
2) The pool has no idle connections, but the [Maximum pool size](#maximum-pool-size) is not reached, in which case the pool opens a new connection.
The slot for the new connection is reserved in the pool from the very beginning, so even though opening a connection may take a while,
other threads are not blocked from accessing the pool.
3) When the [Maximum pool size](#maximum-pool-size) is reached, the request waits for a connection for a period of
time controlled by the [Pool Size Exceeded Timeout](#pool-size-exceeded-timeout).
When the timeout is exceeded, an exception is thrown.

#### Busy

A `busy` connection has been provided by the pool and counts toward the pool size. It is returned to the pool for reuse during the Close operation.
When an application does not close connections, it may hit the limit of the [Maximum pool size](#maximum-pool-size).
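The phases above can be sketched as follows (the connection string values are placeholders):

```cs
var cs = "account=testaccount;user=testuser;password=XXXXX;db=testdb";
using (var connection = new SnowflakeDbConnection(cs))
{
    connection.Open();  // opening phase: a pool slot is reserved, then the connection becomes busy
    // ... run queries: the connection stays busy and counts toward the pool size
}                       // Dispose/Close: the connection returns to the pool as idle
```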
#### Closing

When an application closes a connection, a couple of things happen:
- Pending transactions (if any) are rolled back
- The connection is pooled if its properties have not changed
- A connection with a changed database, schema, warehouse or role is:
  - pooled when the OriginalPool mode is enabled (see [Changed Session Behavior](#changed-session-behavior))
  - destroyed when the Destroy mode is set

#### Evicting Connection

The easiest way to prevent connection pooling is to disable pooling; see [Pooling Enabled](#pooling-enabled).

However, in special cases an application may need to mark a single, opened connection for eviction without turning off the pool.
When such a connection is closed it is not pooled. The pool creates a new connection to maintain the [Minimum pool size](#minimum-pool-size) if needed.

```cs
using (var connection = new SnowflakeDbConnection(ConnectionString))
{
    connection.Open();
    connection.PreventPooling();
}
```

### Pool Interfaces

| Connection Pool Feature                                   | Connection String Parameter  | Default | Method                          | Info                                                                                                                             |
|-----------------------------------------------------------|------------------------------|---------|---------------------------------|----------------------------------------------------------------------------------------------------------------------------------|
| [Multiple pools](#multiple-pools)                         |                              |         |                                 |                                                                                                                                  |
| [Minimum pool size](#minimum-pool-size)                   | MinPoolSize                  | 2       |                                 |                                                                                                                                  |
| [Maximum pool size](#maximum-pool-size)                   | MaxPoolSize                  | 10      |                                 |                                                                                                                                  |
| [Changed Session Behavior](#changed-session-behavior)     | ChangedSession               | Destroy |                                 | Destroy or OriginalPool                                                                                                          |
| [Pool Size Exceeded Timeout](#pool-size-exceeded-timeout) | WaitingForIdleSessionTimeout | 30s     |                                 | Values can be provided with postfix [ms], [s], [m]                                                                               |
| [Expiration Timeout](#expiration-timeout)                 | ExpirationTimeout            | 60m     |                                 |                                                                                                                                  |
| [Pooling Enabled](#pooling-enabled)                       | PoolingEnabled               | true    |                                 | Pooling of connections authenticated with External Browser or Key-Pair Authentication without a password is disabled by default  |
| [Connection Timeout](#connection-timeout)                 |                              | 300s    |                                 |                                                                                                                                  |
| [Current Pool Size](#current-pool-size)                   |                              |         | GetCurrentPoolSize()            |                                                                                                                                  |
| [Clear Pool](#clear-pool)                                 |                              |         | ClearPool() or ClearAllPools()  |                                                                                                                                  |

#### Multiple pools

When a first connection is opened, a connection pool is created based on an exact matching algorithm that associates the pool with the connection string of the connection. Each connection pool is associated with a distinct connection string. When a new connection is opened, if the connection string is not an exact match to an existing pool, a new pool is created.

Different pools can have different settings, for instance minimum pool size or changed session behavior.

```cs
using (var connection = new SnowflakeDbConnection(ConnectionString + ";application=App1"))
{
    connection.Open();
    // Pool 1 is created
}

using (var connection = new SnowflakeDbConnection(ConnectionString + ";application=App2"))
{
    connection.Open();
    // Pool 2 is created
}
```

#### Minimum pool size

Ensures a minimum number of connections in the pool. Additional connections are created in the background during a connection open request.
When connections are being closed, the expiration timeout is checked for all connections in the pool and the expired ones are closed.
After that, some connections are recreated to ensure the minimum size of the pool.

```cs
var connectionString = ConnectionString + ";MinPoolSize=10";
using (var connection = new SnowflakeDbConnection(connectionString))
{
    connection.Open();
    // Pool of size 10 is created
}
var poolSize = SnowflakeDbConnectionPool.GetPool(connectionString).GetCurrentPoolSize();
Assert.AreEqual(10, poolSize);
```

#### Maximum pool size

The latest pool version enforces the maximum size of the pool.
The following count toward that size:
- idle connections
- busy connections (provided by the pool to the application)
- connections in the opening phase

When the maximum pool size is reached, any request to provide (open) another connection waits for an idle session to be returned to the pool.
If no idle session is returned within the [Pool Size Exceeded Timeout](#pool-size-exceeded-timeout), an exception is thrown.

```cs
var connectionString = ConnectionString + ";MaxPoolSize=2";

Task[] tasks = new Task[8];
for (int i = 0; i < tasks.Length; i++)
{
    var taskName = $"Task {i}";
    tasks[i] = Task.Run(() =>
    {
        using (var connection = new SnowflakeDbConnection(connectionString))
        {
            Stopwatch sw = new Stopwatch();

            // register opening time
            sw.Start();
            connection.Open();
            sw.Stop();

            // output
            Console.WriteLine($"{taskName} waited {Math.Round((double)sw.ElapsedMilliseconds / 1000)} seconds");

            // wait 2s before closing the connection
            Thread.Sleep(2000);
        }
    });
}
Task.WaitAll(tasks);

// check current pool size
var poolSize = SnowflakeDbConnectionPool.GetPool(connectionString).GetCurrentPoolSize();
Assert.AreEqual(2, poolSize);

// output:
// Task 1 waited 0 seconds
// Task 4 waited 0 seconds
// Task 7 waited 2 seconds
// Task 0 waited 2 seconds
// Task 6 waited 4 seconds
// Task 3 waited 4 seconds
// Task 2 waited 6 seconds
// Task 5 waited 6 seconds
```

#### Changed Session Behavior

When an application changes the connection using one of the following SQL commands:
* `use schema`, `create schema`
* `use database`, `create database`
* `use warehouse`, `create warehouse`
* `use role`, `create role`
* `drop`

the affected connection is marked internally as no longer matching the pool it originated from (it becomes a "dirty" connection).
Keep in mind that `create` commands automatically activate the created object within the current connection
(e.g.
[create database](https://docs.snowflake.com/en/sql-reference/sql/create-database#general-usage-notes)).

The pool takes one of two approaches to connections altered in this way:
* destroy the connection
* return it to the original pool

1) Destroy Connection Mode

To enable this pool mode, set the ChangedSession parameter to `Destroy`, or skip it entirely (Destroy is the default pool behavior).
In this mode the application may safely alter the connection properties schema, database, warehouse or role. Such a dirty connection, no longer matching
the connection string, is not pooled any more. The pool marks it internally as `dirty` and ensures it gets removed
when no longer used (closed) by the application.

Since such connections do not return to the pool, the pool recreates as many connections as necessary to satisfy the Minimum Pool Size requirement.

```cs
var connectionString = ConnectionString + ";ChangedSession=Destroy";
var connection = new SnowflakeDbConnection(connectionString);

connection.Open();
var randomSchemaName = Guid.NewGuid();
var command = connection.CreateCommand();
command.CommandText = $"create schema \"{randomSchemaName}\"";
command.ExecuteNonQuery(); // schema gets changed
// application is running commands on a schema with a random name
connection.Close(); // the connection does not return to the original pool and gets destroyed; the pool creates
                    // new connections as needed according to MinPoolSize

var connection2 = new SnowflakeDbConnection(connectionString);
connection2.Open();
// operations here are performed against the schema indicated in the ConnectionString
```

2) Pooling Changed Sessions to the Original Pool

When the ChangedSession parameter is set to `OriginalPool`, the connection can be pooled back to the original pool it came from.
Disclaimer for OriginalPool Mode

When an application reuses connections affected by the above commands (use/create), it might hit SQL syntax errors on a connection
provided by the pool, because tables, procedures, stages and other database objects do not exist: the operations
are executed using a changed database, schema, user or role that no longer matches the connection string.
Reusing connections from the pool therefore requires attention in the code, ensuring that each retrieved connection uses the appropriate database, schema, warehouse or role.
This mode exists purely for backward compatibility. It is not recommended, and it is not the default.

```cs
var connectionString = ConnectionString + ";ChangedSession=OriginalPool;MinPoolSize=1;MaxPoolSize=1";
var connection = new SnowflakeDbConnection(connectionString);

connection.Open();
var randomSchemaName = Guid.NewGuid();
var command = connection.CreateCommand();
command.CommandText = $"create schema \"{randomSchemaName}\"";
command.ExecuteNonQuery(); // schema gets changed
// application is running commands on a schema with a random name
connection.Close(); // the connection returns to the original pool, but its schema no longer matches the initial value

var connection2 = new SnowflakeDbConnection(connectionString);
connection2.Open();
// operations here are performed against schema: randomSchemaName
```

#### Pool Size Exceeded Timeout

The timeout for providing a connection when the Max Pool Size is reached.
* When the timeout is exceeded and there are no idle connections in the pool, an exception is thrown
* When set to 0, an exception is thrown immediately if there are no idle connections in the pool

```cs
var connectionString = ConnectionString + ";MaxPoolSize=2;WaitingForIdleSessionTimeout=3";

Task[] tasks = new Task[8];
for (int i = 0; i < tasks.Length; i++)
{
    var taskName = $"Task {i}";
    tasks[i] = Task.Run(() =>
    {
        try
        {
            using (var connection = new SnowflakeDbConnection(connectionString))
            {
                Stopwatch sw = new Stopwatch();

                // register opening time
                sw.Start();
                connection.Open();
                sw.Stop();

                // output
                Console.WriteLine($"{taskName} waited {Math.Round((double)sw.ElapsedMilliseconds / 1000)} seconds");

                // wait 2s before closing the connection
                Thread.Sleep(2000);
            }
        }
        catch (SnowflakeDbException ex)
        {
            Console.WriteLine($"{taskName} - {ex.Message}");
        }
    });
}
Task.WaitAll(tasks);

// check current pool size
var poolSize = SnowflakeDbConnectionPool.GetPool(connectionString).GetCurrentPoolSize();
Assert.AreEqual(2, poolSize);

// output:
// Task 3 waited 0 seconds
// Task 0 waited 0 seconds
// Task 5 waited 2 seconds
// Task 6 waited 2 seconds
// Task 4 - Error: Snowflake Internal Error: Unable to connect. Could not obtain a connection from the pool within a given timeout SqlState: 08006, VendorCode: 270001, QueryId:
// Task 7 - Error: Snowflake Internal Error: Unable to connect. Could not obtain a connection from the pool within a given timeout SqlState: 08006, VendorCode: 270001, QueryId:
// Task 1 - Error: Snowflake Internal Error: Unable to connect. Could not obtain a connection from the pool within a given timeout SqlState: 08006, VendorCode: 270001, QueryId:
// Task 2 - Error: Snowflake Internal Error: Unable to connect. Could not obtain a connection from the pool within a given timeout SqlState: 08006, VendorCode: 270001, QueryId:
```

#### Expiration Timeout

The overall timeout for the entire connection lifetime:
* When reached, the connection is always removed
* After pruning, the Min Pool Size is checked to achieve the expected number of connections in the pool

```cs
var connectionString = ConnectionString + ";MinPoolSize=1;ExpirationTimeout=2";
var connection1 = new SnowflakeDbConnection(connectionString);
var connection2 = new SnowflakeDbConnection(connectionString);
var connection3 = new SnowflakeDbConnection(connectionString);

connection1.Open();
connection2.Open();
connection1.Close();
connection2.Close();

// 2 connections are in the pool
Assert.AreEqual(2, SnowflakeDbConnectionPool.GetPool(connectionString).GetCurrentPoolSize());

Thread.Sleep(2000);

connection3.Open();
connection3.Close();

// both previous connections have expired
Assert.AreEqual(1, SnowflakeDbConnectionPool.GetPool(connectionString).GetCurrentPoolSize());
```

#### Connection Timeout

The total timeout in seconds when connecting to Snowflake.
Equivalent of https://learn.microsoft.com/en-us/dotnet/api/system.data.idbconnection.connectiontimeout?view=net-6.0

```cs
var connectionString = ConnectionString + ";connection_timeout=160";
using (var connection = new SnowflakeDbConnection(connectionString))
{
    connection.Open();
}
```

#### Pooling Enabled

Enables or disables connection pooling for the pool identified by a given connection string.

For security reasons, pooling is disabled by default for External Browser and Key-Pair Authentication (unless a password for the key is provided).

It can be enabled with a connection string parameter if needed.
However, be warned that:
- a token key file accessible by others and used to authorize connections, or
- a shared environment with external browser authenticated connections

leads to vulnerabilities and is not recommended.
```cs
var connectionString = ConnectionString + ";PoolingEnabled=false";
using (var connection = new SnowflakeDbConnection(connectionString))
{
    connection.Open();
}

// no connection in the pool
var poolSize = SnowflakeDbConnectionPool.GetPool(connectionString).GetCurrentPoolSize();
Assert.AreEqual(0, poolSize);
```

#### Current Pool Size

Lets you check the size of a given pool programmatically. It is the total number of connections: idle, busy and those being initialized.

```cs
var pool = SnowflakeDbConnectionPool.GetPool(connectionString);
var poolSize = pool.GetCurrentPoolSize();
// default pool size is 2
Assert.AreEqual(2, poolSize);
```

At the SnowflakeDbConnectionPool level there is also a way to get the sum of connections from all the pools.

```cs
var pool1 = SnowflakeDbConnectionPool.GetPool(connectionString + ";MinPoolSize=2");
var pool2 = SnowflakeDbConnectionPool.GetPool(connectionString + ";MinPoolSize=3");
var poolsSize = SnowflakeDbConnectionPool.GetCurrentPoolSize();
Assert.AreEqual(5, poolsSize);
```

#### Clear Pool

The interface allows you to clear a particular pool or all the pools initiated by an application.
Keep in mind that the default minimum pool size will still be maintained.

```cs
var pool = SnowflakeDbConnectionPool.GetPool(connectionString);
pool.ClearPool();
```

There is also a way to clear all the pools initiated by an application.

```cs
SnowflakeDbConnectionPool.ClearAllPools();
```
diff --git a/doc/ConnectionPoolingDeprecated.md b/doc/ConnectionPoolingDeprecated.md
new file mode 100644
index 000000000..f5f6005e9
--- /dev/null
+++ b/doc/ConnectionPoolingDeprecated.md
@@ -0,0 +1,70 @@
## Using Connection Pools

### Single Connection Pool (DEPRECATED)

DEPRECATED VERSION

Instead of creating a connection each time your client application needs to access Snowflake, you can define a cache of Snowflake connections that can be reused as needed.
Connection pooling usually reduces the lag time to make a connection.
However, it can slow down client failover to an alternative DNS when a DNS problem occurs.

The Snowflake .NET driver provides the following functions for managing connection pools.

| Function                                        | Description                                                                                              |
|-------------------------------------------------|----------------------------------------------------------------------------------------------------------|
| SnowflakeDbConnectionPool.ClearAllPools()       | Removes all connections from the connection pool.                                                        |
| SnowflakeDbConnectionPool.SetMaxPoolSize(n)     | Sets the maximum number of connections for the connection pool, where _n_ is the number of connections.  |
| SnowflakeDbConnectionPool.SetTimeout(n)         | Sets the number of seconds to keep an unresponsive connection in the connection pool.                    |
| SnowflakeDbConnectionPool.GetCurrentPoolSize()  | Returns the number of connections currently in the connection pool.                                      |
| SnowflakeDbConnectionPool.SetPooling()          | Determines whether to enable (`true`) or disable (`false`) connection pooling. Default: `true`.          |

The following sample demonstrates how to monitor the size of a connection pool as connections are added and dropped from the pool.
```cs
public void TestConnectionPoolClean()
{
    SnowflakeDbConnectionPool.ClearAllPools();
    SnowflakeDbConnectionPool.SetMaxPoolSize(2);
    var conn1 = new SnowflakeDbConnection();
    conn1.ConnectionString = ConnectionString;
    conn1.Open();
    Assert.AreEqual(ConnectionState.Open, conn1.State);

    var conn2 = new SnowflakeDbConnection();
    conn2.ConnectionString = ConnectionString + ";retryCount=1";
    conn2.Open();
    Assert.AreEqual(ConnectionState.Open, conn2.State);
    Assert.AreEqual(0, SnowflakeDbConnectionPool.GetCurrentPoolSize());
    conn1.Close();
    conn2.Close();
    Assert.AreEqual(2, SnowflakeDbConnectionPool.GetCurrentPoolSize());
    var conn3 = new SnowflakeDbConnection();
    conn3.ConnectionString = ConnectionString + ";retryCount=2";
    conn3.Open();
    Assert.AreEqual(ConnectionState.Open, conn3.State);

    var conn4 = new SnowflakeDbConnection();
    conn4.ConnectionString = ConnectionString + ";retryCount=3";
    conn4.Open();
    Assert.AreEqual(ConnectionState.Open, conn4.State);

    conn3.Close();
    Assert.AreEqual(2, SnowflakeDbConnectionPool.GetCurrentPoolSize());
    conn4.Close();
    Assert.AreEqual(2, SnowflakeDbConnectionPool.GetCurrentPoolSize());

    Assert.AreEqual(ConnectionState.Closed, conn1.State);
    Assert.AreEqual(ConnectionState.Closed, conn2.State);
    Assert.AreEqual(ConnectionState.Closed, conn3.State);
    Assert.AreEqual(ConnectionState.Closed, conn4.State);
}
```

**Note**
Some of the features and configurations available for [Multiple Connection Pools](ConnectionPooling.md) are not available for the old pool.
The following configurations/settings have no effect on the [Single Connection Pool](ConnectionPoolingDeprecated.md):
- `poolingEnabled` setting: not configurable via the connection string; use `SnowflakeDbConnectionPool.SetPooling(false)` instead
- `changedSession` setting: only the `OriginalPool` behavior is available
- `maxPoolSize` setting: not configurable via the connection string; use `SnowflakeDbConnectionPool.SetMaxPoolSize()` instead
- `minPoolSize` setting: feature not available
- `waitingForIdleSessionTimeout` setting: feature not available
- `expirationTimeout` setting: not configurable via the connection string; use `SnowflakeDbConnectionPool.SetTimeout()` instead
diff --git a/doc/DataTypes.md b/doc/DataTypes.md
new file mode 100644
index 000000000..7db51cdb7
--- /dev/null
+++ b/doc/DataTypes.md
@@ -0,0 +1,41 @@
## Data Types and Formats

## Mapping .NET and Snowflake Data Types

The .NET driver supports the following mappings from .NET to Snowflake data types.

| .NET Framework Data Type  | Data Type in Snowflake |
| ------------------------- | ---------------------- |
| `int`, `long`             | `NUMBER(38, 0)`        |
| `decimal`                 | `NUMBER(38, )`         |
| `double`                  | `REAL`                 |
| `string`                  | `TEXT`                 |
| `bool`                    | `BOOLEAN`              |
| `byte`                    | `BINARY`               |
| `datetime`                | `DATE`                 |

## Arrow data format

The .NET connector, starting with v2.1.3, supports the [Arrow data format](https://arrow.apache.org/)
as a [preview](https://docs.snowflake.com/en/release-notes/preview-features) feature for data transfers
between Snowflake and a .NET client. The Arrow data format avoids extra
conversions between binary and textual representations of the data. The Arrow
data format can improve performance and reduce memory consumption in clients.

The data format is controlled by the
DOTNET_QUERY_RESULT_FORMAT parameter.
To use the Arrow format, execute:

```snowflake
-- at the session level
ALTER SESSION SET DOTNET_QUERY_RESULT_FORMAT = ARROW;
-- or at the user level
ALTER USER SET DOTNET_QUERY_RESULT_FORMAT = ARROW;
-- or at the account level
ALTER ACCOUNT SET DOTNET_QUERY_RESULT_FORMAT = ARROW;
```

The valid values for the parameter are:

- ARROW
- JSON (default)

diff --git a/doc/Disconnecting.md b/doc/Disconnecting.md
new file mode 100644
index 000000000..b6fd5b18a
--- /dev/null
+++ b/doc/Disconnecting.md
@@ -0,0 +1,21 @@
## Close the Connection

To close the connection, call the `Close` method of `SnowflakeDbConnection`.

If you want to avoid blocking threads while the connection is closing, call the `CloseAsync` method instead, passing in a
`CancellationToken`. This method was introduced in the v2.0.4 release.

Note that because this method is not available in the generic `IDbConnection` interface, you must cast the object as
`SnowflakeDbConnection` before calling the method. For example:

```cs
CancellationTokenSource cancellationTokenSource = new CancellationTokenSource();
// Close the connection
((SnowflakeDbConnection)conn).CloseAsync(cancellationTokenSource.Token);
```

## Evict the Connection

For an open connection, call `PreventPooling()` to mark the connection for removal on close instead of returning it to the pool.
The busy sessions counter is decreased when the connection is closed.

diff --git a/doc/Logging.md b/doc/Logging.md
new file mode 100644
index 000000000..18c235e7e
--- /dev/null
+++ b/doc/Logging.md
@@ -0,0 +1,82 @@
## Logging

The Snowflake Connector for .NET uses [log4net](http://logging.apache.org/log4net/) as the logging framework.

Here is a sample app.config file that uses [log4net](http://logging.apache.org/log4net/)

```xml
<configuration>
  <configSections>
    <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" />
  </configSections>

  <log4net>
    <appender name="MyRollingFileAppender" type="log4net.Appender.RollingFileAppender">
      <file value="snowflake_dotnet.log" />
      <appendToFile value="true" />
      <rollingStyle value="Size" />
      <maximumFileSize value="10MB" />
      <staticLogFileName value="true" />
      <maxSizeRollBackups value="10" />
      <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="[%date] [%t] [%-5level] [%logger] %message%newline" />
      </layout>
    </appender>

    <root>
      <level value="ALL" />
      <appender-ref ref="MyRollingFileAppender" />
    </root>
  </log4net>
</configuration>
```

## Easy logging

The Easy Logging feature lets you change the log level for all driver classes and add an extra file appender for logs from the driver's classes at runtime. You can specify the log levels and the directory in which to save log files in a configuration file (default: `sf_client_config.json`).

You typically change log levels only when debugging your application.

**Note**
The logging configuration file supports only the following log levels:

- OFF
- ERROR
- WARNING
- INFO
- DEBUG
- TRACE

This configuration file uses JSON to define the `log_level` and `log_path` logging parameters, as follows:

```json
{
  "common": {
    "log_level": "INFO",
    "log_path": "c:\\some-path\\some-directory"
  }
}
```

where:

- `log_level` is the desired logging level.
- `log_path` is the location to store the log files. The driver automatically creates a `dotnet` subdirectory in the specified `log_path`. For example, if you set `log_path` to `c:\logs`, the driver creates the `c:\logs\dotnet` directory and stores the logs there.

The driver looks for the location of the configuration file in the following order:

- `CLIENT_CONFIG_FILE` connection parameter, containing the full path to the configuration file (e.g. `"ACCOUNT=test;USER=test;PASSWORD=test;CLIENT_CONFIG_FILE=C:\\some-path\\client_config.json;"`)
- `SF_CLIENT_CONFIG_FILE` environment variable, containing the full path to the configuration file.
- .NET driver/application directory, where the file must be named `sf_client_config.json`.
- User's home directory, where the file must be named `sf_client_config.json`.

**Note**
To enhance security, the driver no longer searches a temporary directory for easy logging configurations.
Additionally, on Unix-style systems, the driver now requires the logging configuration file to limit file permissions so that only the file owner can modify the file (for example, `chmod 0600` or `chmod 0644`).

To minimize the number of searches for a configuration file, the driver reads the file only for:

- The first connection.
- The first connection with the `CLIENT_CONFIG_FILE` parameter.

The extra logs are stored in a `dotnet` subfolder of the specified directory, such as `C:\some-path\some-directory\dotnet`.

If a client uses the `log4net` library for application logging, enabling easy logging affects the log level in those logs as well.
diff --git a/doc/QueryingData.md b/doc/QueryingData.md
new file mode 100644
index 000000000..cec2323bb
--- /dev/null
+++ b/doc/QueryingData.md
@@ -0,0 +1,252 @@
## Run a Query and Read Data

```cs
using (IDbConnection conn = new SnowflakeDbConnection())
{
    conn.ConnectionString = connectionString;
    conn.Open();

    IDbCommand cmd = conn.CreateCommand();
    cmd.CommandText = "select * from t";
    IDataReader reader = cmd.ExecuteReader();

    while(reader.Read())
    {
        Console.WriteLine(reader.GetString(0));
    }

    conn.Close();
}
```

Note that for a `TIME` column, the reader returns a `System.DateTime` value. If you need a `System.TimeSpan` value, call the
`GetTimeSpan` method of `SnowflakeDbDataReader`. This method was introduced in the v2.0.4 release.

Note that because this method is not available in the generic `IDataReader` interface, you must cast the object as
`SnowflakeDbDataReader` before calling the method. For example:

```cs
TimeSpan timeSpanTime = ((SnowflakeDbDataReader)reader).GetTimeSpan(13);
```

## Execute a query asynchronously on the server

You can run a query asynchronously on the server. The server responds immediately with a `queryId` and continues to execute the query asynchronously.
Then you can use this `queryId` to check the query status, or wait until the query is completed and get the results.
You can start the query in one session and, based on the `queryId`, retrieve the results in another session.

**Note**: There are two levels of asynchronous execution. One is asynchronous execution in terms of the C# language (`async await`).
The other is asynchronous execution of the query by the server (recognizable by method names containing `InAsyncMode`, e.g. `ExecuteInAsyncMode`, `ExecuteAsyncInAsyncMode`).

Example of synchronous code starting a query to be executed asynchronously on the server:
```cs
using (SnowflakeDbConnection conn = new SnowflakeDbConnection("account=testaccount;user=testusername;password=testpassword"))
{
    conn.Open();
    SnowflakeDbCommand cmd = (SnowflakeDbCommand)conn.CreateCommand();
    cmd.CommandText = "SELECT ...";
    var queryId = cmd.ExecuteInAsyncMode();
    // ...
}
```

Example of asynchronous code starting a query to be executed asynchronously on the server:
```cs
using (SnowflakeDbConnection conn = new SnowflakeDbConnection("account=testaccount;user=testusername;password=testpassword"))
{
    await conn.OpenAsync(CancellationToken.None).ConfigureAwait(false);
    SnowflakeDbCommand cmd = (SnowflakeDbCommand)conn.CreateCommand();
    cmd.CommandText = "SELECT ...";
    var queryId = await cmd.ExecuteAsyncInAsyncMode(CancellationToken.None).ConfigureAwait(false);
    // ...
}
```

You can check the status of a query executed asynchronously on the server in synchronous code:
```cs
var queryStatus = cmd.GetQueryStatus(queryId);
Assert.IsTrue(conn.IsStillRunning(queryStatus)); // assuming the query is still running
Assert.IsFalse(conn.IsAnError(queryStatus)); // assuming the query has not finished with an error
```
or in asynchronous code:
```cs
var queryStatus = await cmd.GetQueryStatusAsync(queryId, CancellationToken.None).ConfigureAwait(false);
Assert.IsTrue(conn.IsStillRunning(queryStatus)); // assuming the query is still running
Assert.IsFalse(conn.IsAnError(queryStatus)); // assuming the query has not finished with an error
```

The following examples show how to get the query results.
The operation repeatedly checks the query status until the query completes, a timeout occurs, or the maximum number of attempts is reached.
Synchronous code example:
```cs
DbDataReader reader = cmd.GetResultsFromQueryId(queryId);
```
and the asynchronous code example:
```cs
DbDataReader reader = await cmd.GetResultsFromQueryIdAsync(queryId, CancellationToken.None).ConfigureAwait(false);
```

**Note**: GET/PUT operations are currently not enabled for asynchronous executions.

## Executing a Batch of SQL Statements (Multi-Statement Support)

With version 2.0.18 and later of the .NET connector, you can send
a batch of SQL statements, separated by semicolons,
to be executed in a single request.

**Note**: Snowflake does not currently support variable binding in multi-statement SQL requests.

---

**Note**

By default, Snowflake returns an error for queries issued with multiple statements to protect against SQL injection attacks. The multiple statements feature makes your system more vulnerable to SQL injection, so it should be used carefully.
You can reduce the risk by using the MULTI_STATEMENT_COUNT parameter to specify the number of statements to be executed, which makes it more difficult to inject a statement by appending to it.

---

You can execute multiple statements as a batch in the same way you execute queries with single statements, except that the query string contains multiple statements separated by semicolons. Note that multiple statements execute sequentially, not in parallel.

You can set this parameter at the session level using the following command:

```
ALTER SESSION SET MULTI_STATEMENT_COUNT = <0/1>;
```

where:

- **0**: Enables an unspecified number of SQL statements in a query.

  Using this value allows batch queries to contain any number of SQL statements without needing to specify the MULTI_STATEMENT_COUNT statement parameter. However, be aware that using this value reduces the protection against SQL injection attacks.

- **1**: Allows one SQL statement or a specified number of statements in a query string (default).

  You must include MULTI_STATEMENT_COUNT as a statement parameter to specify the number of statements included when the query string contains more than one statement. If the number of statements sent in the query string does not match the MULTI_STATEMENT_COUNT value, the .NET driver rejects the request. You can, however, omit this parameter if you send a single statement.

The following example sets the MULTI_STATEMENT_COUNT session parameter to 1. Then, for an individual command, it sets MULTI_STATEMENT_COUNT=3 to indicate that the query contains precisely three SQL statements. The query string, `cmd.CommandText`, then contains the three statements to execute.
+
+```cs
+using (SnowflakeDbConnection conn = new SnowflakeDbConnection())
+{
+    conn.ConnectionString = ConnectionString;
+    conn.Open();
+
+    // Set the statement count at the session level
+    using (DbCommand cmd = conn.CreateCommand())
+    {
+        cmd.CommandText = "ALTER SESSION SET MULTI_STATEMENT_COUNT = 1;";
+        cmd.ExecuteNonQuery();
+    }
+
+    using (DbCommand cmd = conn.CreateCommand())
+    {
+        // Set the statement count for this command
+        var stmtCountParam = cmd.CreateParameter();
+        stmtCountParam.ParameterName = "MULTI_STATEMENT_COUNT";
+        stmtCountParam.DbType = DbType.Int16;
+        stmtCountParam.Value = 3;
+        cmd.Parameters.Add(stmtCountParam);
+        cmd.CommandText = "CREATE OR REPLACE TABLE test(n int); INSERT INTO test VALUES (1), (2); SELECT * FROM test ORDER BY n;";
+        DbDataReader reader = cmd.ExecuteReader();
+        do
+        {
+            if (reader.HasRows)
+            {
+                while (reader.Read())
+                {
+                    // read data
+                }
+            }
+        }
+        while (reader.NextResult());
+    }
+
+    conn.Close();
+}
+```
+
+## Bind Parameter
+
+**Note**: Snowflake does not currently support variable binding in multi-statement SQL requests.
+
+This example shows how bound parameters are converted from C# data types to
+Snowflake data types. For example, if the data type of the Snowflake column
+is INTEGER, then you can bind the C# data types Int32 or Int16.
+
+This example inserts 3 rows into a table with one column.
+
+```cs
+using (IDbConnection conn = new SnowflakeDbConnection())
+{
+    conn.ConnectionString = connectionString;
+    conn.Open();
+
+    IDbCommand cmd = conn.CreateCommand();
+    cmd.CommandText = "create or replace table T(cola int)";
+    int count = cmd.ExecuteNonQuery();
+    Assert.AreEqual(0, count);
+
+    cmd = conn.CreateCommand();
+    cmd.CommandText = "insert into t values (?), (?), (?)";
+
+    var p1 = cmd.CreateParameter();
+    p1.ParameterName = "1";
+    p1.Value = 10;
+    p1.DbType = DbType.Int32;
+    cmd.Parameters.Add(p1);
+
+    var p2 = cmd.CreateParameter();
+    p2.ParameterName = "2";
+    p2.Value = 10000L;
+    p2.DbType = DbType.Int32;
+    cmd.Parameters.Add(p2);
+
+    var p3 = cmd.CreateParameter();
+    p3.ParameterName = "3";
+    p3.Value = (short)1;
+    p3.DbType = DbType.Int16;
+    cmd.Parameters.Add(p3);
+
+    count = cmd.ExecuteNonQuery();
+    Assert.AreEqual(3, count);
+
+    cmd.CommandText = "drop table if exists T";
+    count = cmd.ExecuteNonQuery();
+    Assert.AreEqual(0, count);
+
+    conn.Close();
+}
+```
+
+## Bind Array Variables
+
+The sample code creates a table with a single integer column and then uses array binding to populate the table with the values 0 through 69999.
+
+```cs
+using (IDbConnection conn = new SnowflakeDbConnection())
+{
+    conn.ConnectionString = ConnectionString;
+    conn.Open();
+
+    using (IDbCommand cmd = conn.CreateCommand())
+    {
+        cmd.CommandText = "create or replace table putArrayBind(colA integer)";
+        cmd.ExecuteNonQuery();
+
+        string insertCommand = "insert into putArrayBind values (?)";
+        cmd.CommandText = insertCommand;
+
+        int total = 70000;
+
+        List<int> arrint = new List<int>();
+        for (int i = 0; i < total; i++)
+        {
+            arrint.Add(i);
+        }
+        var p1 = cmd.CreateParameter();
+        p1.ParameterName = "1";
+        p1.DbType = DbType.Int32;
+        p1.Value = arrint.ToArray();
+        cmd.Parameters.Add(p1);
+
+        int count = cmd.ExecuteNonQuery(); // count = 70000
+    }
+
+    conn.Close();
+}
+```
+
diff --git a/doc/StageFiles.md b/doc/StageFiles.md
new file mode 100644
index 000000000..aa59b82b9
--- /dev/null
+++ b/doc/StageFiles.md
@@ -0,0 +1,64 @@
+## PUT local files to stage
+
+The PUT command can be used to upload the files of a local directory, or a single local file, to Snowflake stages (a named stage, an internal table stage, or an internal user stage).
+Such staged files can then be used to load data into a table.
+More on this topic: [File staging with PUT](https://docs.snowflake.com/en/sql-reference/sql/put).
+
+In the driver, the command can be executed as follows:
+
+```cs
+using (IDbConnection conn = new SnowflakeDbConnection())
+{
+    try
+    {
+        conn.ConnectionString = "";
+        conn.Open();
+        var cmd = (SnowflakeDbCommand)conn.CreateCommand(); // the cast allows retrieving the QueryId from the command
+
+        cmd.CommandText = "PUT file://some_data.csv @my_schema.my_stage AUTO_COMPRESS=TRUE";
+        var reader = cmd.ExecuteReader();
+        Assert.IsTrue(reader.Read());
+        Assert.DoesNotThrow(() => Guid.Parse(cmd.GetQueryId()));
+    }
+    catch (SnowflakeDbException e)
+    {
+        Assert.DoesNotThrow(() => Guid.Parse(e.QueryId)); // when failed
+        Assert.That(e.InnerException.GetType(), Is.EqualTo(typeof(FileNotFoundException)));
+    }
+}
+```
+
+In case of a failure, a SnowflakeDbException is thrown and, where possible, it carries the affected QueryId.
+A QueryId is available when the failure occurred after the query was executed; during the initial phase of execution it might not be provided.
+The inner exception (if applicable) gives details on the cause of the failure,
+for example a FileNotFoundException or DirectoryNotFoundException.
+
+## GET stage files
+
+The GET command downloads stage directories or files to a local directory.
+It can be used with a named stage, an internal table stage, or a user stage.
+Detailed information on the command: [Downloading files with GET](https://docs.snowflake.com/en/sql-reference/sql/get).
+
+To use the command in the driver, similar code can be executed in a client app:
+
+```cs
+using (IDbConnection conn = new SnowflakeDbConnection())
+{
+    try
+    {
+        conn.ConnectionString = "";
+        conn.Open();
+        var cmd = (SnowflakeDbCommand)conn.CreateCommand(); // the cast allows retrieving the QueryId from the command
+
+        cmd.CommandText = "GET @my_schema.my_stage/stage_file.csv file://local_file.csv";
+        var reader = cmd.ExecuteReader();
+        Assert.IsTrue(reader.Read()); // True on success, False on failure
+        Assert.DoesNotThrow(() => Guid.Parse(cmd.GetQueryId()));
+    }
+    catch (SnowflakeDbException e)
+    {
+        Assert.DoesNotThrow(() => Guid.Parse(e.QueryId)); // on failure
+    }
+}
+```
+
+In case of a failure, a SnowflakeDbException is thrown with the affected QueryId when available.
+When no technical or syntax errors occur but the DbDataReader has no data to process, `Read()` returns False
+without throwing an exception.
diff --git a/doc/Testing.md b/doc/Testing.md
new file mode 100644
index 000000000..70ee63f28
--- /dev/null
+++ b/doc/Testing.md
@@ -0,0 +1,54 @@
+# Testing the Connector
+
+Before running tests, create a parameters.json file under the Snowflake.Data.Tests\ directory. In this file, specify the username, password, and account info that the tests will run against. Here is a sample parameters.json file:
+
+```
+{
+  "testconnection": {
+    "SNOWFLAKE_TEST_USER": "snowman",
+    "SNOWFLAKE_TEST_PASSWORD": "XXXXXXX",
+    "SNOWFLAKE_TEST_ACCOUNT": "TESTACCOUNT",
+    "SNOWFLAKE_TEST_WAREHOUSE": "TESTWH",
+    "SNOWFLAKE_TEST_DATABASE": "TESTDB",
+    "SNOWFLAKE_TEST_SCHEMA": "TESTSCHEMA",
+    "SNOWFLAKE_TEST_ROLE": "TESTROLE",
+    "SNOWFLAKE_TEST_HOST": "testaccount.snowflakecomputing.com"
+  }
+}
+```
+
+## Command Prompt
+
+The build solution file builds the connector and test binaries. Issue the following command from the command line to run the tests. The test binary is located in the Debug directory if you built the solution file in Debug mode.
+
+```bash
+cd Snowflake.Data.Tests
+dotnet test -f net6.0 -l "console;verbosity=normal"
+```
+
+Tests can also be run under code coverage:
+
+```bash
+dotnet-coverage collect "dotnet test --framework net6.0 --no-build -l console;verbosity=normal" --output net6.0_coverage.xml --output-format cobertura --settings coverage.config
+```
+
+You can also run only a specific suite of tests (integration or unit).
+
+Running unit tests:
+
+```bash
+cd Snowflake.Data.Tests
+dotnet test -l "console;verbosity=normal" --filter FullyQualifiedName~UnitTests
+```
+
+Running integration tests:
+
+```bash
+cd Snowflake.Data.Tests
+dotnet test -l "console;verbosity=normal" --filter FullyQualifiedName~IntegrationTests
+```
+
+## Visual Studio 2017
+
+Tests can also be run under Visual Studio 2017. Open the solution file in Visual Studio 2017 and run the tests using Test Explorer.
+
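+To run a single test fixture or test method from the command line, the same `--filter` option can narrow the run further. The fixture name below is only an illustration; replace it with an actual test class name from the project:
+
+```bash
+cd Snowflake.Data.Tests
+# select one test fixture by a fragment of its fully qualified name
+dotnet test -f net6.0 -l "console;verbosity=normal" --filter "FullyQualifiedName~MyCommandTest"
+```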