Pool/SNOW-1337069 pool docs (#938)
### Description
New connection pool documentation.

### Checklist
- [ ] Code compiles correctly
- [ ] Code is formatted according to [Coding
Conventions](../blob/master/CodingConventions.md)
- [ ] Created tests which fail without the change (if possible)
- [ ] All tests passing (`dotnet test`)
- [x] Extended the README / documentation, if necessary
- [x] Provide JIRA issue id (if possible) or GitHub issue id in PR name

---------

Co-authored-by: Dariusz Stempniak <[email protected]>
Co-authored-by: Krzysztof Nozderko <[email protected]>
3 people authored May 17, 2024
1 parent ace8f90 commit b2b0ddc
Showing 11 changed files with 1,383 additions and 918 deletions.
947 changes: 29 additions & 918 deletions README.md

Large diffs are not rendered by default.

72 changes: 72 additions & 0 deletions doc/CodeCoverage.md
@@ -0,0 +1,72 @@
## Getting the code coverage

1. Go to .NET project directory

2. Clean the directory

```bash
dotnet clean snowflake-connector-net.sln && dotnet nuget locals all --clear
```

3. Create a parameters.json file containing connection info for an AWS, AZURE, or GCP account and place it inside the Snowflake.Data.Tests folder

4. Build the project for .NET6

```bash
dotnet build snowflake-connector-net.sln /p:DebugType=Full
```

5. Run dotnet-coverage on the .NET6 build

```bash
dotnet-coverage collect "dotnet test --framework net6.0 --no-build -l console;verbosity=normal" --output net6.0_AWS_coverage.xml --output-format cobertura --settings coverage.config
```

6. Build the project for .NET Framework

```bash
msbuild snowflake-connector-net.sln -p:Configuration=Release
```

7. Run dotnet-coverage on the .NET Framework build

```bash
dotnet-coverage collect "dotnet test --framework net472 --no-build -l console;verbosity=normal" --output net472_AWS_coverage.xml --output-format cobertura --settings coverage.config
```

<br />
Repeat steps 3, 5, and 7 for the other cloud providers. <br />
Note: no need to rebuild the connector again. <br /><br />

For Azure:<br />

3. Create a parameters.json file containing connection info for an AZURE account and place it inside the Snowflake.Data.Tests folder

5. Run dotnet-coverage on the .NET6 build

```bash
dotnet-coverage collect "dotnet test --framework net6.0 --no-build -l console;verbosity=normal" --output net6.0_AZURE_coverage.xml --output-format cobertura --settings coverage.config
```

7. Run dotnet-coverage on the .NET Framework build

```bash
dotnet-coverage collect "dotnet test --framework net472 --no-build -l console;verbosity=normal" --output net472_AZURE_coverage.xml --output-format cobertura --settings coverage.config
```

<br />
For GCP:<br />

3. Create a parameters.json file containing connection info for a GCP account and place it inside the Snowflake.Data.Tests folder

5. Run dotnet-coverage on the .NET6 build

```bash
dotnet-coverage collect "dotnet test --framework net6.0 --no-build -l console;verbosity=normal" --output net6.0_GCP_coverage.xml --output-format cobertura --settings coverage.config
```

7. Run dotnet-coverage on the .NET Framework build

```bash
dotnet-coverage collect "dotnet test --framework net472 --no-build -l console;verbosity=normal" --output net472_GCP_coverage.xml --output-format cobertura --settings coverage.config
```
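
The per-provider repetition above can be scripted. The sketch below only *prints* the `dotnet-coverage` commands for every provider/framework pair (a dry run), so you can review them before executing; it assumes `dotnet-coverage` is installed and that parameters.json has been swapped to the matching provider's account before each real run:

```bash
# Print the dotnet-coverage command for each cloud provider and framework.
coverage_commands() {
  for provider in AWS AZURE GCP; do
    for fw in net6.0 net472; do
      printf 'dotnet-coverage collect "dotnet test --framework %s --no-build -l console;verbosity=normal" --output %s_%s_coverage.xml --output-format cobertura --settings coverage.config\n' "$fw" "$fw" "$provider"
    done
  done
}

coverage_commands
```

Piping each printed line to `sh` (after swapping parameters.json) would run the actual collection.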
293 changes: 293 additions & 0 deletions doc/Connecting.md

Large diffs are not rendered by default.

405 changes: 405 additions & 0 deletions doc/ConnectionPooling.md

Large diffs are not rendered by default.

70 changes: 70 additions & 0 deletions doc/ConnectionPoolingDeprecated.md
@@ -0,0 +1,70 @@
## Using Connection Pools

### Single Connection Pool (DEPRECATED)

DEPRECATED VERSION

Instead of creating a connection each time your client application needs to access Snowflake, you can define a cache of Snowflake connections that can be reused as needed.
Connection pooling usually reduces the lag time to make a connection. However, it can slow down client failover to an alternative DNS when a DNS problem occurs.

The Snowflake .NET driver provides the following functions for managing connection pools.

| Function                                        | Description                                                                                              |
|-------------------------------------------------|----------------------------------------------------------------------------------------------------------|
| SnowflakeDbConnectionPool.ClearAllPools()       | Removes all connections from the connection pool.                                                        |
| SnowflakeDbConnectionPool.SetMaxPoolSize(n)     | Sets the maximum number of connections for the connection pool, where _n_ is the number of connections.  |
| SnowflakeDbConnectionPool.SetTimeout(n)         | Sets the number of seconds to keep an unresponsive connection in the connection pool.                    |
| SnowflakeDbConnectionPool.GetCurrentPoolSize()  | Returns the number of connections currently in the connection pool.                                      |
| SnowflakeDbConnectionPool.SetPooling(b)         | Enables (`true`) or disables (`false`) connection pooling. Default: `true`.                              |

The following sample demonstrates how to monitor the size of a connection pool as connections are added and dropped from the pool.

```cs
public void TestConnectionPoolClean()
{
SnowflakeDbConnectionPool.ClearAllPools();
SnowflakeDbConnectionPool.SetMaxPoolSize(2);
var conn1 = new SnowflakeDbConnection();
conn1.ConnectionString = ConnectionString;
conn1.Open();
Assert.AreEqual(ConnectionState.Open, conn1.State);

var conn2 = new SnowflakeDbConnection();
conn2.ConnectionString = ConnectionString + " retryCount=1";
conn2.Open();
Assert.AreEqual(ConnectionState.Open, conn2.State);
Assert.AreEqual(0, SnowflakeDbConnectionPool.GetCurrentPoolSize());
conn1.Close();
conn2.Close();
Assert.AreEqual(2, SnowflakeDbConnectionPool.GetCurrentPoolSize());
var conn3 = new SnowflakeDbConnection();
conn3.ConnectionString = ConnectionString + " retryCount=2";
conn3.Open();
Assert.AreEqual(ConnectionState.Open, conn3.State);

var conn4 = new SnowflakeDbConnection();
conn4.ConnectionString = ConnectionString + " retryCount=3";
conn4.Open();
Assert.AreEqual(ConnectionState.Open, conn4.State);

conn3.Close();
Assert.AreEqual(2, SnowflakeDbConnectionPool.GetCurrentPoolSize());
conn4.Close();
Assert.AreEqual(2, SnowflakeDbConnectionPool.GetCurrentPoolSize());

Assert.AreEqual(ConnectionState.Closed, conn1.State);
Assert.AreEqual(ConnectionState.Closed, conn2.State);
Assert.AreEqual(ConnectionState.Closed, conn3.State);
Assert.AreEqual(ConnectionState.Closed, conn4.State);
}
```

<u>Note</u>
Some of the features and configurations available for [Multiple Connection Pools](ConnectionPooling.md) are not available for the old pool.
The following configurations/settings have no effect on the [Single Connection Pool](ConnectionPoolingDeprecated.md):
- `poolingEnabled`: not configurable by connection string; use `SnowflakeDbConnectionPool.SetPooling(false)` instead
- `changedSession`: only the `OriginalPool` behavior is available
- `maxPoolSize`: not configurable by connection string; use `SnowflakeDbConnectionPool.SetMaxPoolSize()` instead
- `minPoolSize`: not available
- `waitingForIdleSessionTimeout`: not available
- `expirationTimeout`: not configurable by connection string; use `SnowflakeDbConnectionPool.SetTimeout()` instead.
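
For the old pool, those settings are applied programmatically instead of via the connection string. A brief sketch (the numeric values are examples, not defaults):

```cs
// Legacy single-pool equivalents of the connection-string settings above
SnowflakeDbConnectionPool.SetPooling(false);   // instead of poolingEnabled=false
SnowflakeDbConnectionPool.SetMaxPoolSize(10);  // instead of maxPoolSize=10
SnowflakeDbConnectionPool.SetTimeout(3600);    // instead of expirationTimeout=3600
```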
41 changes: 41 additions & 0 deletions doc/DataTypes.md
@@ -0,0 +1,41 @@
## Data Types and Formats

## Mapping .NET and Snowflake Data Types

The .NET driver supports the following mappings from .NET to Snowflake data types.

| .NET Framework Data Type  | Data Type in Snowflake |
| ------------------------- | ---------------------- |
| `int`, `long` | `NUMBER(38, 0)` |
| `decimal` | `NUMBER(38, <scale>)` |
| `double` | `REAL` |
| `string` | `TEXT` |
| `bool` | `BOOLEAN` |
| `byte` | `BINARY` |
| `DateTime`                | `DATE`                 |
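
As an illustration of these mappings, the sketch below binds an `int` and a `string` parameter through the standard ADO.NET interfaces. The table name `demo_table`, the positional `?` placeholders, and the connection setup are assumptions for this example, not part of the mapping table:

```cs
// Assumes: using System.Data; using Snowflake.Data.Client;
using (IDbConnection conn = new SnowflakeDbConnection())
{
    conn.ConnectionString = connectionString;
    conn.Open();

    IDbCommand cmd = conn.CreateCommand();
    // Hypothetical table: the int binds as NUMBER(38, 0), the string as TEXT
    cmd.CommandText = "INSERT INTO demo_table VALUES (?, ?)";

    var p1 = cmd.CreateParameter();
    p1.ParameterName = "1";
    p1.DbType = DbType.Int32;
    p1.Value = 42;
    cmd.Parameters.Add(p1);

    var p2 = cmd.CreateParameter();
    p2.ParameterName = "2";
    p2.DbType = DbType.String;
    p2.Value = "hello";
    cmd.Parameters.Add(p2);

    cmd.ExecuteNonQuery();
}
```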

## Arrow data format

The .NET connector, starting with v2.1.3, supports the [Arrow data format](https://arrow.apache.org/)
as a [preview](https://docs.snowflake.com/en/release-notes/preview-features) feature for data transfers
between Snowflake and a .NET client. The Arrow data format avoids extra
conversions between binary and textual representations of the data. The Arrow
data format can improve performance and reduce memory consumption in clients.

The data format is controlled by the `DOTNET_QUERY_RESULT_FORMAT` parameter. To use the Arrow format, execute one of the following:

```snowflake
-- at the session level
ALTER SESSION SET DOTNET_QUERY_RESULT_FORMAT = ARROW;
-- or at the user level
ALTER USER SET DOTNET_QUERY_RESULT_FORMAT = ARROW;
-- or at the account level
ALTER ACCOUNT SET DOTNET_QUERY_RESULT_FORMAT = ARROW;
```

The valid values for the parameter are:

- ARROW
- JSON (default)
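
The session-level switch can also be issued directly through the connector. A minimal sketch (the connection setup is an assumption for this example):

```cs
// Assumes: using System.Data; using Snowflake.Data.Client;
using (var conn = new SnowflakeDbConnection())
{
    conn.ConnectionString = connectionString;
    conn.Open();

    using (IDbCommand cmd = conn.CreateCommand())
    {
        // Switch the current session to the Arrow result format
        cmd.CommandText = "ALTER SESSION SET DOTNET_QUERY_RESULT_FORMAT = ARROW";
        cmd.ExecuteNonQuery();
    }
}
```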

21 changes: 21 additions & 0 deletions doc/Disconnecting.md
@@ -0,0 +1,21 @@
## Close the Connection

To close the connection, call the `Close` method of `SnowflakeDbConnection`.

If you want to avoid blocking threads while the connection is closing, call the `CloseAsync` method instead, passing in a
`CancellationToken`. This method was introduced in the v2.0.4 release.

Note that because this method is not available in the generic `IDbConnection` interface, you must cast the object as
`SnowflakeDbConnection` before calling the method. For example:

```cs
CancellationTokenSource cancellationTokenSource = new CancellationTokenSource();
// Close the connection without blocking the calling thread
await ((SnowflakeDbConnection)conn).CloseAsync(cancellationTokenSource.Token);
```

## Evict the Connection

For an open connection, call `PreventPooling()` to mark the connection for removal on close instead of returning it to the pool.
The busy sessions counter is decreased when the connection is closed.
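
A brief sketch of evicting a single connection (error handling and the connection string are omitted as assumptions of this example):

```cs
using (var conn = new SnowflakeDbConnection())
{
    conn.ConnectionString = connectionString;
    conn.Open();

    // ... use the connection ...

    // Mark this connection for destruction on close
    // instead of returning it to the pool
    conn.PreventPooling();
}   // Dispose/Close removes the connection and decreases the busy sessions counter
```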

82 changes: 82 additions & 0 deletions doc/Logging.md
@@ -0,0 +1,82 @@
## Logging

The Snowflake Connector for .NET uses [log4net](http://logging.apache.org/log4net/) as the logging framework.

Here is a sample app.config file that uses [log4net](http://logging.apache.org/log4net/)

```xml
<configSections>
<section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net"/>
</configSections>

<log4net>
<appender name="MyRollingFileAppender" type="log4net.Appender.RollingFileAppender">
<file value="snowflake_dotnet.log" />
<appendToFile value="true"/>
<rollingStyle value="Size" />
<maximumFileSize value="10MB" />
<staticLogFileName value="true" />
<maxSizeRollBackups value="10" />
<layout type="log4net.Layout.PatternLayout">
<!-- <header value="[DateTime] [Thread] [Level] [ClassName] Message&#13;&#10;" /> -->
<conversionPattern value="[%date] [%t] [%-5level] [%logger] %message%newline" />
</layout>
</appender>

<root>
<level value="ALL" />
<appender-ref ref="MyRollingFileAppender" />
</root>
</log4net>
```
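
With that configuration in place, the application loads it at startup and can log through the same root appender as the driver. A minimal sketch using standard log4net calls (the `Program` type is an assumption for this example):

```cs
// Load the log4net configuration from App.config at startup
log4net.Config.XmlConfigurator.Configure();

// Application-side logger; driver classes log through the same root appender
var logger = log4net.LogManager.GetLogger(typeof(Program));
logger.Info("Connector logging initialized");
```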

## Easy logging

The Easy Logging feature lets you change the log level for all driver classes and add an extra file appender for logs from the driver's classes at runtime. You can specify the log levels and the directory in which to save log files in a configuration file (default: `sf_client_config.json`).

You typically change log levels only when debugging your application.

**Note**
The easy logging configuration file supports only the following log levels:

- OFF
- ERROR
- WARNING
- INFO
- DEBUG
- TRACE

This configuration file uses JSON to define the `log_level` and `log_path` logging parameters, as follows:

```json
{
"common": {
"log_level": "INFO",
"log_path": "c:\\some-path\\some-directory"
}
}
```

where:

- `log_level` is the desired logging level.
- `log_path` is the location to store the log files. The driver automatically creates a `dotnet` subdirectory in the specified `log_path`. For example, if you set `log_path` to `c:\logs`, the driver creates the `c:\logs\dotnet` directory and stores the logs there.

The driver looks for the location of the configuration file in the following order:

- `CLIENT_CONFIG_FILE` connection parameter, containing the full path to the configuration file (e.g. `"ACCOUNT=test;USER=test;PASSWORD=test;CLIENT_CONFIG_FILE=C:\\some-path\\client_config.json;"`)
- `SF_CLIENT_CONFIG_FILE` environment variable, containing the full path to the configuration file.
- .NET driver/application directory, where the file must be named `sf_client_config.json`.
- User’s home directory, where the file must be named `sf_client_config.json`.

**Note**
To enhance security, the driver no longer searches a temporary directory for easy logging configurations. Additionally, on Unix-style systems the driver now requires that the logging configuration file restrict modification permissions to the file owner (for example, `chmod 0600` or `chmod 0644`).

To minimize the number of searches for a configuration file, the driver reads the file only for:

- The first connection.
- The first connection with the `CLIENT_CONFIG_FILE` parameter.

The extra logs are stored in a `dotnet` subfolder of the specified directory, such as `C:\some-path\some-directory\dotnet`.

If a client uses the `log4net` library for application logging, enabling easy logging affects the log level in those logs as well.
