Merge branch '@s3-store/stream-handling-improvements' of github.com:supabase/tus-node-server into @s3-store/stream-handling-improvements
fenos committed Feb 5, 2024
2 parents 7236f44 + 84eaff5 commit 7073828
Showing 1 changed file with 4 additions and 4 deletions.
8 changes: 4 additions & 4 deletions packages/s3-store/README.md
@@ -91,21 +91,21 @@ you need to provide a cache implementation that is shared between all instances

See the exported [KV stores][kvstores] from `@tus/server` for more information.

-### `options.maxConcurrentPartUploads`
+#### `options.maxConcurrentPartUploads`

This setting determines the maximum number of simultaneous part uploads to an S3 storage service.
The default value is 60. This default is chosen in conjunction with the typical partSize of 8MiB, aiming for an effective transfer rate of 3.84Gbit/s.

**Considerations:**
-The ideal value for maxConcurrentPartUploads varies based on your `partSize` and the upload bandwidth to your S3 bucket. A larger partSize means less overall upload bandwidth available for other concurrent uploads.
+The ideal value for `maxConcurrentPartUploads` varies based on your `partSize` and the upload bandwidth to your S3 bucket. A larger `partSize` means less overall upload bandwidth available for other concurrent uploads.

-- **Lowering the Value**: Reducing maxConcurrentPartUploads decreases the number of simultaneous upload requests to S3. This can be beneficial for conserving memory, CPU, and disk I/O resources, especially in environments with limited system resources or where the upload speed it low or the part size is large.
+- **Lowering the Value**: Reducing `maxConcurrentPartUploads` decreases the number of simultaneous upload requests to S3. This can be beneficial for conserving memory, CPU, and disk I/O resources, especially in environments with limited system resources, where the upload speed is low, or where the part size is large.


- **Increasing the Value**: A higher value potentially enhances the data transfer rate to the server, but at the cost of increased resource usage (memory, CPU, and disk I/O). This can be advantageous when the goal is to maximize throughput, and sufficient system resources are available.


-- **Bandwidth Considerations**: It's important to note that if your upload bandwidth to S3 is a limiting factor, increasing maxConcurrentPartUploads won’t lead to higher throughput. Instead, it will result in additional resource consumption without proportional gains in transfer speed.
+- **Bandwidth Considerations**: It's important to note that if your upload bandwidth to S3 is a limiting factor, increasing `maxConcurrentPartUploads` won’t lead to higher throughput. Instead, it will result in additional resource consumption without proportional gains in transfer speed.

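For reference, a minimal sketch of how this option can be passed when constructing the store. The bucket, credentials, and port values are placeholders, and the surrounding `@tus/server` wiring follows the package's usual quick-start pattern; check the main package READMEs for the authoritative configuration.

```js
const {Server} = require('@tus/server')
const {S3Store} = require('@tus/s3-store')

const s3Store = new S3Store({
  // 60 concurrent uploads of ~8MiB parts keep roughly 480MB in flight,
  // which is presumably where the quoted ~3.84Gbit/s figure comes from
  // when each part completes in about one second.
  partSize: 8 * 1024 * 1024,
  maxConcurrentPartUploads: 60, // lower on memory-, CPU- or bandwidth-constrained hosts
  s3ClientConfig: {
    bucket: process.env.AWS_BUCKET,
    region: process.env.AWS_REGION,
    credentials: {
      accessKeyId: process.env.AWS_ACCESS_KEY_ID,
      secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
    },
  },
})

const server = new Server({path: '/files', datastore: s3Store})
server.listen({port: 1080})
```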
## Extensions
