[Snyk] Upgrade mongodb from 6.5.0 to 6.11.0 #4504
Open
Snyk has created this PR to upgrade mongodb from 6.5.0 to 6.11.0.
ℹ️ Keep your dependencies up-to-date. This makes it easier to fix existing vulnerabilities and to more quickly identify and fix newly disclosed vulnerabilities when they affect your project.
The recommended version is 96 versions ahead of your current version.
The recommended version was released a month ago.
Issues fixed by the recommended upgrade:
SNYK-JS-MONGOOSE-8446504
Release notes
Package name: mongodb
6.11.0 (2024-11-22)
The MongoDB Node.js team is pleased to announce version 6.11.0 of the mongodb package!
Release Notes
Client Side Operations Timeout (CSOT)
We've been working hard to try to simplify how setting timeouts works in the driver and are excited to finally put Client Side Operation Timeouts (CSOT) in your hands! We're looking forward to hearing your feedback on this new feature during its trial period in the driver, so feel free to file Improvements, Questions or Bug reports on our Jira Project or leave comments on this community forum thread: Node.js Driver 6.11 Forum Discussion!
CSOT is the common drivers solution for timing out the execution of an operation at the different stages of an operation's lifetime. At its simplest, CSOT allows you to specify one option, timeoutMS, that determines when the driver will interrupt an operation and return a timeout error.
For example, when executing a potentially long-running query, you would specify timeoutMS as shown in the sketch after the warning below.
Warning
This feature is experimental and subject to change at any time. We do not recommend using this feature in production applications until it is stable.
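A minimal sketch of that usage (the connection string, database and collection names are placeholders):
import { MongoClient } from 'mongodb';

const client = new MongoClient('mongodb://localhost:27017');
const orders = client.db('app').collection('orders');

// interrupt the query and throw a timeout error if it takes longer than 10 seconds overall
const docs = await orders.find({ status: 'open' }, { timeoutMS: 10_000 }).toArray();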
What's new?
timeoutMS
The main new option introduced with CSOT is the timeoutMS option. This option can be applied directly as a client option, as well as at the database, collection, session, transaction and operation layers, following the same inheritance behaviours as other driver options (a sketch of this layering follows the list below).
When the timeoutMS option is specified, it will always take precedence over the following options:
socketTimeoutMS
waitQueueTimeoutMS
wTimeoutMS
maxTimeMS
maxCommitTimeMS
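As a rough sketch of the layering (database, collection and field names here are illustrative; the most specific timeoutMS wins, as with other inherited driver options):
import { MongoClient } from 'mongodb';

const client = new MongoClient('mongodb://localhost:27017', { timeoutMS: 15_000 }); // client-level default
const db = client.db('app'); // inherits 15_000 from the client
const users = db.collection('users', { timeoutMS: 5_000 }); // collection-level override
await users.findOne({ email: 'a@example.com' }, { timeoutMS: 2_000 }); // operation-level override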
Note, however, that timeoutMS DOES NOT unconditionally override the serverSelectionTimeoutMS option.
When timeoutMS is specified, the duration of time allotted to the server selection and connection checkout portions of command execution is defined by min(serverSelectionTimeoutMS, timeoutMS) if both are > 0. A zero value for either timeout represents an infinite timeout. A finite timeout will always be used unless both timeouts are specified as 0. Note also that the driver has a default value for serverSelectionTimeoutMS of 30000.
After server selection and connection checkout are complete, the time remaining bounds the execution of the remainder of the operation.
Note
Specifying timeoutMS is not a hard guarantee that an operation will take exactly the duration specified. In the circumstances identified below, the driver's internal cleanup logic can result in an operation exceeding the duration specified by timeoutMS.
AbstractCursor.toArray() - can take up to 2 * timeoutMS in 'cursorLifetime' mode and (n+1) * timeoutMS when returning n batches in 'iteration' mode
AbstractCursor.[Symbol.asyncIterator]() - can take up to 2 * timeoutMS in 'cursorLifetime' mode and (n+1) * timeoutMS when returning n batches in 'iteration' mode
MongoClient.bulkWrite() - can take up to 2 * timeoutMS in error scenarios when the driver must clean up cursors used internally.
In the AbstractCursor.toArray case and the AbstractCursor.[Symbol.asyncIterator] case, this occurs because these methods close the cursor when they finish returning their documents. As detailed in the following section, this results in a refreshing of the timeout before sending the killCursors command to close the cursor on the server. The MongoClient.bulkWrite and autoencryption implementations use cursors under the hood and so inherit this issue.
Cursors, timeoutMS and timeoutMode
Cursors require special handling with the new timeout paradigm introduced here. Cursors can be configured to interact with CSOT in two ways.
The first, 'cursorLifetime' mode, uses timeoutMS to bound the entire lifetime of a cursor and is the default timeout mode for non-tailable cursors (find, aggregate, listCollections, etc.). This means that the initialization of the cursor and all subsequent getMore calls MUST finish within timeoutMS or a timeout error will be thrown. Note, however, that the closing of a cursor, either as part of a toArray() call or manually via the close() method, resets the timeout before sending a killCursors operation to the server. For example:
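A sketch of the default 'cursorLifetime' behaviour for a non-tailable cursor (collection and filter are placeholders; assumes a connected client):
// the entire find (initialization, every getMore, and the final toArray)
// must complete within 30 seconds or a timeout error is thrown
const events = client.db('app').collection('events');
const errorEvents = await events.find({ level: 'error' }, { timeoutMS: 30_000 }).toArray();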
The second, 'iteration' mode, uses timeoutMS to bound each next/hasNext/tryNext call, refreshing the timeout after each call completes. This is the default mode for all tailable cursors (tailable find cursors on capped collections, change streams, etc.). For example:
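A sketch of 'iteration' mode using a change stream (collection name is a placeholder; assumes a connected client):
// each hasNext()/next() call gets its own 60-second budget, refreshed after the call completes,
// rather than timeoutMS bounding the lifetime of the whole stream
const changeStream = client.db('app').collection('orders').watch([], { timeoutMS: 60_000 });
while (await changeStream.hasNext()) {
  console.log(await changeStream.next());
}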
Note that timeoutMode is also configurable on a per-cursor basis.
GridFS and timeoutMS
GridFS streams interact with timeoutMS in a similar manner to cursors in 'cursorLifetime' mode, in that timeoutMS bounds the entire lifetime of the stream.
In addition, GridFSBucket.find, GridFSBucket.rename and GridFSBucket.drop all support the timeoutMS option and behave in the same way as other operations.
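A rough sketch, assuming GridFSBucket accepts timeoutMS in its options the way other operations do (bucket and file names are placeholders; assumes a connected client):
import { GridFSBucket } from 'mongodb';

// timeoutMS bounds the entire lifetime of the download stream
const bucket = new GridFSBucket(client.db('files'), { timeoutMS: 10_000 });
for await (const chunk of bucket.openDownloadStreamByName('report.pdf')) {
  // process chunk; the whole download must finish within 10 seconds
}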
Sessions, Transactions, timeoutMS and defaultTimeoutMS
ClientSessions have a new option: defaultTimeoutMS, which specifies the timeoutMS value to use for:
commitTransaction
abortTransaction
withTransaction
endSession
Note
If defaultTimeoutMS is not specified, then it will inherit the timeoutMS of the parent MongoClient.
When using ClientSession.withTransaction, the timeoutMS can be configured either in the options on the withTransaction call or inherited from the session's defaultTimeoutMS. This timeoutMS will apply to the entirety of the withTransaction callback provided that the session is correctly passed into each database operation. If the session is not passed into the operation, it will not respect the configured timeout. Also be aware that trying to override the timeoutMS at the operation level for operations making use of the explicit session inside the withTransaction callback will result in an error being thrown.
const coll = client.db('db').collection('coll');
// ❌ Incorrect; will throw an error
await session.withTransaction(async function(session) {
await coll.insertOne({x:1}, { session, timeoutMS: 600 });
})
// ❌ Incorrect; will not respect timeoutMS configured on session
await session.withTransaction(async function(session) {
await coll.insertOne({x:1}, {});
})
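For contrast, a sketch of a usage that should respect the configured timeout: the timeout comes from the session's defaultTimeoutMS (or from options on the withTransaction call itself), and the session is passed into each operation without an operation-level timeoutMS.
// ✔️ Correct: timeout configured on the session, session passed through unchanged
const session = client.startSession({ defaultTimeoutMS: 600 });
await session.withTransaction(async function(session) {
  await coll.insertOne({x:1}, { session });
})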
ClientEncryption and timeoutMS
The ClientEncryption class now supports the timeoutMS option. If timeoutMS is provided when constructing a ClientEncryption instance, it will be used to govern the lifetime of all operations performed on that instance; otherwise, it will inherit from the timeoutMS set on the MongoClient provided to the ClientEncryption constructor.
If timeoutMS is set on both the client and provided to ClientEncryption directly, the option provided to ClientEncryption takes precedence.
const encryption = new ClientEncryption(new MongoClient('localhost:27027', { timeoutMS: 1_000 }));
await encryption.createDataKey('local'); // will not take longer than 1_000ms
const encryption = new ClientEncryption(new MongoClient('localhost:27027', { timeoutMS: 5_000 }), { timeoutMS: 1_000 });
await encryption.createDataKey('local'); // will not take longer than 1_000ms
Limitations
At the time of writing, when using the driver's autoconnect feature alongside CSOT, the time taken for the command doing the autoconnection will not be bound by the configured timeoutMS. We made this design choice because the client's connection logic handles a number of potentially long-running I/O and other setup operations, including reading certificate files, DNS lookups, instantiating server monitors, and launching external processes for client encryption.
We recommend manually connecting the MongoClient if intending to make use of CSOT, or otherwise ensuring that the driver is already connected when running commands that make use of timeoutMS.
// ❌ No guarantee to finish in specified time
await client.db('db').collection('coll').insertOne({x:1});
// ✔️ Will have expected behaviour
await client.connect();
await client.db('db').collection('coll').insertOne({x:1});
Explain helpers support timeoutMS
Explain helpers support timeoutMS:
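For instance, something along these lines (collection and filter are placeholders; assumes a connected client and that the cursor's timeoutMS bounds the explain as well):
// bound the explain with timeoutMS instead of maxTimeMS
const cursor = client.db('app').collection('orders').find({ status: 'open' }, { timeoutMS: 2_000 });
const plan = await cursor.explain('queryPlanner');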
Note
Providing a maxTimeMS value together with a timeoutMS value will throw errors.
MONGODB-OIDC Authentication now supports Kubernetes Environments.
For k8s environments running in Amazon's EKS (Elastic Kubernetes Service), Google's GKE (Google Kubernetes Engine), or Azure's AKS (Azure Kubernetes Service), simply provide an ENVIRONMENT auth mechanism property of "k8s" in the URI or MongoClient options.
Example:
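A sketch of what that configuration might look like (host is a placeholder):
// via the connection string
const client = new MongoClient(
  'mongodb://host.example.com/?authMechanism=MONGODB-OIDC&authMechanismProperties=ENVIRONMENT:k8s'
);

// or via MongoClient options
const client2 = new MongoClient('mongodb://host.example.com', {
  authMechanism: 'MONGODB-OIDC',
  authMechanismProperties: { ENVIRONMENT: 'k8s' }
});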
BSON Binary Vector Support!
Check out BSON's release notes for more information: https://github.com/mongodb/js-bson/releases/tag/v6.10.0
ConnectionClosedEvents always follow PoolClearedEvents
When Connection Monitoring and Pooling events are listened for, ConnectionClosedEvents are now always emitted after PoolClearedEvents.
Features
Bug Fixes
Performance Improvements
Documentation
We invite you to try the mongodb library immediately, and report any issues to the NODE project.
6.10.0 (2024-10-21)
The MongoDB Node.js team is pleased to announce version 6.10.0 of the mongodb package!
Release Notes
Warning
Server versions 3.6 and lower will get a compatibility error on connection, and support for MONGODB-CR authentication is now also removed.
Support for new client bulkWrite API (8.0+)
A new bulk write API on the MongoClient is now supported for users on server versions 8.0 and higher. This API is meant to replace the existing bulk write API on the Collection, as it supports a bulk write across multiple databases and collections in a single call.
Usage
Users of this API call MongoClient#bulkWrite and provide a list of bulk write models and options. The models have a structure as follows:
Insert One
Note that when no _id field is provided in the document, the driver will generate a BSON ObjectId automatically.
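Illustratively, an insert one model might look like the following sketch (namespace and document are placeholders; the shape matches the schema assertion example later in these notes):
const insertOneModel = {
  namespace: 'db.books', // '<database>.<collection>' the write targets
  name: 'insertOne',
  document: { name: 'Practical MongoDB Aggregations' }
};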
Update One
Update Many
Note that write errors occurring with an update many model present are not retryable.
Replace One
Delete One
Delete Many
Note that write errors occurring with a delete many model present are not retryable.
Example
Below is a mixed model example of using the new API:
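A hedged sketch of such a call (namespaces, filters and documents are placeholders, and the deleteOne model's filter field is an assumption based on the collection-level API; assumes a connected client):
const result = await client.bulkWrite([
  {
    namespace: 'db.books',
    name: 'insertOne',
    document: { name: 'Practical MongoDB Aggregations' }
  },
  {
    namespace: 'db.authors',
    name: 'deleteOne',
    filter: { name: 'Out Of Print' } // assumed field name for the delete filter
  }
], { ordered: true, verboseResults: true });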
The bulk write specific options that can be provided to the API are as follows:
ordered: Optional boolean that indicates whether the bulk write is ordered. Defaults to true.
verboseResults: Optional boolean indicating whether to provide verbose results. Defaults to false.
bypassDocumentValidation: Optional boolean to bypass document validation rules. Defaults to false.
let: Optional document of parameter names and values that can be accessed using $$var. No default.
The object returned by the bulk write API is:
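The exact shape is best taken from the driver's typings; roughly, and continuing the sketch above, you can expect counters along these lines (field names are an assumption based on the collection-level bulk write result):
console.log(result.insertedCount); // number of documents inserted
console.log(result.matchedCount);  // number of documents matched by updates
console.log(result.modifiedCount); // number of documents modified
console.log(result.deletedCount);  // number of documents deleted
console.log(result.upsertedCount); // number of upserts performed
// with verboseResults: true, per-operation details are also included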
Error Handling
Server side errors encountered during a bulk write will throw a MongoClientBulkWriteError. This error has the following properties:
writeConcernErrors: An array of documents for each write concern error that occurred.
writeErrors: A map from the index of the model provided to the individual write error.
partialResult: The client bulk write result at the point where the error was thrown.
Schema assertion support
interface Book {
  name: string;
  authorName: string;
}
interface Author {
  name: string;
}
type MongoDBSchemas = {
  'db.books': Book;
  'db.authors': Author;
}
const model: ClientBulkWriteModel<MongoDBSchemas> = {
  namespace: 'db.books',
  name: 'insertOne',
  document: { name: 'Practical MongoDB Aggregations', authorName: 3 } // error: authorName cannot be a number
};
Notice how authorName is type checked against the Book type because namespace is set to "db.books".
Allow SRV hostnames with ...