[Snyk] Upgrade mongodb from 6.5.0 to 6.11.0 #4504

Open
wants to merge 1 commit into base: main

Conversation

prernaadev01
Collaborator


Snyk has created this PR to upgrade mongodb from 6.5.0 to 6.11.0.

ℹ️ Keep your dependencies up-to-date. This makes it easier to fix existing vulnerabilities and to more quickly identify and fix newly disclosed vulnerabilities when they affect your project.


  • The recommended version is 96 versions ahead of your current version.

  • The recommended version was released a month ago.

Issues fixed by the recommended upgrade:

Issue: Improper Neutralization of Special Elements in Data Query Logic (high severity, SNYK-JS-MONGOOSE-8446504)
Score: 721
Exploit Maturity: No Known Exploit
Release notes
Package name: mongodb
  • 6.11.0 - 2024-11-22

    6.11.0 (2024-11-22)

    The MongoDB Node.js team is pleased to announce version 6.11.0 of the mongodb package!

    Release Notes

    Client Side Operations Timeout (CSOT)

    We've been working hard to try to simplify how setting timeouts works in the driver and are excited to finally put Client Side Operation Timeouts (CSOT) in your hands! We're looking forward to hearing your feedback on this new feature during its trial period in the driver, so feel free to file Improvements, Questions or Bug reports on our Jira Project or leave comments on this community forum thread: Node.js Driver 6.11 Forum Discussion!

    CSOT is the common solution across MongoDB drivers for timing out the execution of an operation at the different stages of an operation's lifetime. At its simplest, CSOT lets you specify a single option, timeoutMS, that determines when the driver will interrupt an operation and return a timeout error.

    For example, when executing a potentially long-running query, you would specify timeoutMS as follows:

    await collection.find({}, {timeoutMS: 600_000}).toArray(); // Ensures that the find will throw a timeout error if all documents are not retrieved within 10 minutes
    // Potential Stack trace if this were to time out:
    // Uncaught MongoOperationTimeoutError: Timed out during socket read (600000ms)
    //    at Connection.readMany (mongodb/lib/cmap/connection.js:427:31)
    //    at async Connection.sendWire (mongodb/lib/cmap/connection.js:246:30)
    //    at async Connection.sendCommand (mongodb/lib/cmap/connection.js:281:24)
    //    at async Connection.command (mongodb/lib/cmap/connection.js:323:26)
    //    at async Server.command (mongodb/lib/sdam/server.js:170:29)
    //    at async GetMoreOperation.execute (mongodb/lib/operations/get_more.js:58:16)
    //    at async tryOperation (mongodb/lib/operations/execute_operation.js:203:20)
    //    at async executeOperation (mongodb/lib/operations/execute_operation.js:73:16)
    //    at async FindCursor.getMore (mongodb/lib/cursor/abstract_cursor.js:590:16)

    Warning

    This feature is experimental and subject to change at any time. We do not recommend using this feature in production applications until it is stable.

    What's new?

    timeoutMS

    The main new option introduced with CSOT is the timeoutMS option. This option can be applied directly as a client option, as well as at the database, collection, session, transaction and operation layers, following the same inheritance behaviours as other driver options.

    When the timeoutMS option is specified, it will always take precedence over the following options:

    • socketTimeoutMS
    • waitQueueTimeoutMS
    • wTimeoutMS
    • maxTimeMS
    • maxCommitTimeMS

    Note, however, that timeoutMS DOES NOT unconditionally override the serverSelectionTimeoutMS option.

    When timeoutMS is specified, the duration of time allotted to the server selection and connection checkout portions of command execution is defined by min(serverSelectionTimeoutMS, timeoutMS) if both are >0. A zero value for either timeout value represents an infinite timeout. A finite timeout will always be used unless both timeouts are specified as 0. Note also that the driver has a default value for serverSelectionTimeoutMS of 30000.

    After server selection and connection checkout are complete, the time remaining bounds the execution of the remainder of the operation.
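
    The interaction between the two timeouts can be sketched as a small pure function (an illustration of the rule above; `selectionBudget` is a hypothetical name, not a driver API):

```javascript
// Budget for server selection + connection checkout, per the rule above:
// min(serverSelectionTimeoutMS, timeoutMS) when both are > 0,
// with a zero value meaning "infinite" for that timeout.
function selectionBudget(serverSelectionTimeoutMS, timeoutMS) {
  if (serverSelectionTimeoutMS === 0 && timeoutMS === 0) return Infinity;
  if (serverSelectionTimeoutMS === 0) return timeoutMS;
  if (timeoutMS === 0) return serverSelectionTimeoutMS;
  return Math.min(serverSelectionTimeoutMS, timeoutMS);
}
```

    With the driver's default serverSelectionTimeoutMS of 30000, a timeoutMS of 1000 yields a 1000ms budget for this phase.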

    Note

    Specifying timeoutMS is not a hard guarantee that an operation will take exactly the duration specified. In the circumstances identified below, the driver's internal cleanup logic can result in an operation exceeding the duration specified by timeoutMS.

    • AbstractCursor.toArray() - can take up to 2 * timeoutMS in 'cursorLifetime' mode and (n+1) * timeoutMS when returning n batches in 'iteration' mode
    • AbstractCursor.[Symbol.asyncIterator]() - can take up to 2 * timeoutMS in 'cursorLifetime' mode and (n+1) * timeoutMS when returning n batches in 'iteration' mode
    • MongoClient.bulkWrite() - can take up to 2 * timeoutMS in error scenarios when the driver must clean up cursors used internally.
    • CSFLE/QE - can take up to 2 * timeoutMS in rare error scenarios when the driver must clean up cursors used internally when fetching keys from the keyvault or listing collections.

    In the AbstractCursor.toArray case and the AbstractCursor.[Symbol.asyncIterator] case, this occurs as these methods close the cursor when they finish returning their documents. As detailed in the following section, this results in a refreshing of the timeout before sending the killCursors command to close the cursor on the server.
    The MongoClient.bulkWrite and autoencryption implementations use cursors under the hood and so inherit this issue.
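
    The worst-case bounds listed above can be summarized as a small helper (purely illustrative; the driver exposes no such function):

```javascript
// Worst-case wall-clock time for the cleanup cases above.
// 'cursorLifetime': one timeoutMS for iteration plus a refreshed
// timeoutMS for the killCursors cleanup. 'iteration': each of the
// n batches gets its own timeoutMS, plus one more for cleanup.
function worstCaseMS(timeoutMS, mode, batches = 1) {
  if (mode === 'cursorLifetime') return 2 * timeoutMS;
  return (batches + 1) * timeoutMS;
}
```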

    Cursors, timeoutMS and timeoutMode

    Cursors require special handling with the new timeout paradigm introduced here. Cursors can be configured to interact with CSOT in two ways.
    The first, 'cursorLifetime' mode, uses timeoutMS to bound the entire lifetime of a cursor and is the default timeout mode for non-tailable cursors (find, aggregate*, listCollections, etc.). This means that the initialization of the cursor and all subsequent getMore calls MUST finish within timeoutMS or a timeout error will be thrown. Note, however, that closing a cursor, either as part of a toArray() call or manually via the close() method, resets the timeout before sending a killCursors operation to the server.

    e.g.

    // This will ensure that the initialization of the cursor and retrieval of all documents will occur within 1000ms, throwing an error if it exceeds this time limit
    const docs = await collection.find({}, {timeoutMS: 1000}).toArray();

    The second, 'iteration' mode, uses timeoutMS to bound each next/hasNext/tryNext call, refreshing the timeout after each call completes. This is the default mode for all tailable cursors (tailable find cursors on capped collections, change streams, etc.). e.g.

    // Each turn of the async iterator will take up to 1000ms before it throws
    for await (const doc of cappedCollection.find({}, {tailable: true, timeoutMS: 1000})) {
        // process document
    }

    Note that timeoutMode is also configurable on a per-cursor basis.

    GridFS and timeoutMS

    GridFS streams interact with timeoutMS in a similar manner to cursors in 'cursorLifetime' mode, in that timeoutMS bounds the entire lifetime of the stream.
    In addition, GridFSBucket.find, GridFSBucket.rename and GridFSBucket.drop all support the timeoutMS option and behave in the same way as other operations.

    Sessions, Transactions, timeoutMS and defaultTimeoutMS

    ClientSessions have a new option: defaultTimeoutMS, which specifies the timeoutMS value to use for:

    • commitTransaction
    • abortTransaction
    • withTransaction
    • endSession

    Note

    If defaultTimeoutMS is not specified, then it will inherit the timeoutMS of the parent MongoClient.
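
    That inheritance rule amounts to a simple fallback (a sketch only; `resolveSessionTimeout` is a hypothetical name, not a driver export):

```javascript
// defaultTimeoutMS set on the session wins; otherwise fall back to the
// timeoutMS configured on the parent MongoClient (which may be undefined).
function resolveSessionTimeout(sessionDefaultTimeoutMS, clientTimeoutMS) {
  return sessionDefaultTimeoutMS ?? clientTimeoutMS;
}
```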

    When using ClientSession.withTransaction, the timeoutMS can be configured either in the options on the withTransaction call or inherited from the session's defaultTimeoutMS. This timeoutMS applies to the entirety of the withTransaction callback, provided that the session is correctly passed into each database operation; if the session is not passed in, the operation will not respect the configured timeout. Also be aware that trying to override timeoutMS at the operation level for operations that use the explicit session inside the withTransaction callback will result in an error being thrown.

    const session = client.startSession({ defaultTimeoutMS: 1000 });
    const coll = client.db('db').collection('coll');
    // ❌ Incorrect; will throw an error
    await session.withTransaction(async function (session) {
      await coll.insertOne({ x: 1 }, { session, timeoutMS: 600 });
    });

    // ❌ Incorrect; will not respect the timeoutMS configured on the session
    await session.withTransaction(async function (session) {
      await coll.insertOne({ x: 1 }, {});
    });

    ClientEncryption and timeoutMS

    The ClientEncryption class now supports the timeoutMS option. If timeoutMS is provided when constructing a ClientEncryption instance, it will be used to govern the lifetime of all operations performed on that instance; otherwise, it will inherit from the timeoutMS set on the MongoClient provided to the ClientEncryption constructor.
    If timeoutMS is set on both the client and provided to ClientEncryption directly, the option provided to ClientEncryption takes precedence.

    // timeoutMS provided directly to ClientEncryption:
    const encryption = new ClientEncryption(new MongoClient('localhost:27027'), { timeoutMS: 1_000 });
    await encryption.createDataKey('local'); // will not take longer than 1_000ms

    // timeoutMS inherited from the MongoClient:
    const encryption2 = new ClientEncryption(new MongoClient('localhost:27027', { timeoutMS: 1_000 }));
    await encryption2.createDataKey('local'); // will not take longer than 1_000ms

    // Both set: the option provided to ClientEncryption takes precedence:
    const encryption3 = new ClientEncryption(new MongoClient('localhost:27027', { timeoutMS: 5_000 }), { timeoutMS: 1_000 });
    await encryption3.createDataKey('local'); // will not take longer than 1_000ms

    Limitations

    At the time of writing, when using the driver's autoconnect feature alongside CSOT, the time taken for the command that triggers the autoconnection will not be bound by the configured timeoutMS. We made this design choice because the client's connection logic handles a number of potentially long-running I/O and other setup operations, including reading certificate files, DNS lookups, instantiating server monitors, and launching external processes for client encryption.
    We recommend manually connecting the MongoClient if intending to make use of CSOT, or otherwise ensuring that the driver is already connected when running commands that make use of timeoutMS.

    const client = new MongoClient(uri, { timeoutMS: 1000 });
    // ❌ No guarantee to finish in specified time
    await client.db('db').collection('coll').insertOne({x:1});

    // ✔️ Will have expected behaviour
    await client.connect();
    await client.db('db').collection('coll').insertOne({x:1});

    Explain helpers support timeoutMS

    Explain helpers support timeoutMS:

    await collection.deleteMany({}, { timeoutMS: 1_000, explain: true });
    await collection.find().explain(
      { verbosity: 'queryPlanner' },
      { timeoutMS: 1_000 }
    )

    Note

    Providing a maxTimeMS value with a timeoutMS value will throw errors.
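
    A sketch of that conflict rule (illustrative only; the driver performs its own validation internally, and `checkTimeoutOptions` is a hypothetical helper):

```javascript
// Reject option combinations that mix the legacy maxTimeMS with CSOT's
// timeoutMS, mirroring the rule described in the note above.
function checkTimeoutOptions(options) {
  if (options.timeoutMS != null && options.maxTimeMS != null) {
    throw new Error('Cannot use maxTimeMS with timeoutMS');
  }
  return options;
}
```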

    MONGODB-OIDC Authentication now supports Kubernetes Environments.

    For Kubernetes environments running in Amazon's EKS (Elastic Kubernetes Service), Google's GKE (Google Kubernetes Engine), or Azure's AKS (Azure Kubernetes Service), simply provide an ENVIRONMENT auth mechanism property of "k8s" in the URI or MongoClient options.

    Example:

    const client = new MongoClient('mongodb://host:port/?authMechanism=MONGODB-OIDC&authMechanismProperties=ENVIRONMENT:k8s');

    BSON Binary Vector Support!

    Check out BSON's release notes for more information: https://github.com/mongodb/js-bson/releases/tag/v6.10.0

    ConnectionClosedEvents always follow PoolClearedEvents

    When Connection Monitoring and Pooling events are listened for, ConnectionClosedEvents are now always emitted after PoolClearedEvents.

    We invite you to try the mongodb library immediately, and report any issues to the NODE project.

  • 6.11.0-dev.20241210.sha.37613f1a - 2024-12-10
  • 6.11.0-dev.20241207.sha.ea8a33f1 - 2024-12-07
  • 6.11.0-dev.20241206.sha.ed2bdbe5 - 2024-12-06
  • 6.11.0-dev.20241205.sha.55585731 - 2024-12-05
  • 6.11.0-dev.20241204.sha.260e052e - 2024-12-04
  • 6.11.0-dev.20241128.sha.4842cd8a - 2024-11-28
  • 6.11.0-dev.20241123.sha.32f7ac63 - 2024-11-23
  • 6.10.0 - 2024-10-21

    6.10.0 (2024-10-21)

    The MongoDB Node.js team is pleased to announce version 6.10.0 of the mongodb package!

    Release Notes

    Warning

    Server versions 3.6 and lower will get a compatibility error on connection, and support for MONGODB-CR authentication has been removed.

    Support for new client bulkWrite API (8.0+)

    A new bulk write API on the MongoClient is now supported for users on server versions 8.0 and higher.
    This API is meant to replace the existing bulk write API on the Collection as it supports a bulk
    write across multiple databases and collections in a single call.

    Usage

    Users of this API call MongoClient#bulkWrite and provide a list of bulk write models and options.
    The models have a structure as follows:

    Insert One

    Note that when no _id field is provided in the document, the driver will generate a BSON ObjectId
    automatically.

    {
      namespace: '<db>.<collection>',
      name: 'insertOne',
      document: Document
    }

    Update One

    {
      namespace: '<db>.<collection>',
      name: 'updateOne',
      filter: Document,
      update: Document | Document[],
      arrayFilters?: Document[],
      hint?: Document | string,
      collation?: Document,
      upsert: boolean
    }

    Update Many

    Note that write errors occurring with an update many model present are not retryable.

    {
      namespace: '<db>.<collection>',
      name: 'updateMany',
      filter: Document,
      update: Document | Document[],
      arrayFilters?: Document[],
      hint?: Document | string,
      collation?: Document,
      upsert: boolean
    }

    Replace One

    {
      namespace: '<db>.<collection>',
      name: 'replaceOne',
      filter: Document,
      replacement: Document,
      hint?: Document | string,
      collation?: Document
    }

    Delete One

    {
      namespace: '<db>.<collection>',
      name: 'deleteOne',
      filter: Document,
      hint?: Document | string,
      collation?: Document
    }

    Delete Many

    Note that write errors occurring with a delete many model present are not retryable.

    {
      namespace: '<db>.<collection>',
      name: 'deleteMany',
      filter: Document,
      hint?: Document | string,
      collation?: Document
    }

    Example

    Below is a mixed model example of using the new API:

    const client = new MongoClient(process.env.MONGODB_URI);
    const models = [
      {
        name: 'insertOne',
        namespace: 'db.authors',
        document: { name: 'King' }
      },
      {
        name: 'insertOne',
        namespace: 'db.books',
        document: { name: 'It' }
      },
      {
        name: 'updateOne',
        namespace: 'db.books',
        filter: { name: 'it' },
        update: { $set: { year: 1986 } }
      }
    ];
    const result = await client.bulkWrite(models);

    The bulk write specific options that can be provided to the API are as follows:

    • ordered: Optional boolean indicating whether the bulk write is ordered. Defaults to true.
    • verboseResults: Optional boolean indicating whether to return verbose results. Defaults to false.
    • bypassDocumentValidation: Optional boolean to bypass document validation rules. Defaults to false.
    • let: Optional document of parameter names and values that can be accessed using $$var. No default.
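
    Applying those documented defaults can be sketched as follows (illustrative; `withBulkWriteDefaults` is a hypothetical helper, not driver API):

```javascript
// Documented defaults for MongoClient#bulkWrite options; caller-supplied
// values override them via the spread.
function withBulkWriteDefaults(options = {}) {
  return {
    ordered: true,
    verboseResults: false,
    bypassDocumentValidation: false,
    ...options
  };
}
```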

    The object returned by the bulk write API is:

    interface ClientBulkWriteResult {
      // Whether the bulk write was acknowledged.
      readonly acknowledged: boolean;
      // The total number of documents inserted across all insert operations.
      readonly insertedCount: number;
      // The total number of documents upserted across all update operations.
      readonly upsertedCount: number;
      // The total number of documents matched across all update operations.
      readonly matchedCount: number;
      // The total number of documents modified across all update operations.
      readonly modifiedCount: number;
      // The total number of documents deleted across all delete operations.
      readonly deletedCount: number;
      // The results of each individual insert operation that was successfully performed.
      // Note the keys in the map are the associated index in the models array.
      readonly insertResults?: ReadonlyMap<number, ClientInsertOneResult>;
      // The results of each individual update operation that was successfully performed.
      // Note the keys in the map are the associated index in the models array.
      readonly updateResults?: ReadonlyMap<number, ClientUpdateResult>;
      // The results of each individual delete operation that was successfully performed.
      // Note the keys in the map are the associated index in the models array.
      readonly deleteResults?: ReadonlyMap<number, ClientDeleteResult>;
    }

    Error Handling

    Server side errors encountered during a bulk write will throw a MongoClientBulkWriteError. This error
    has the following properties:

    • writeConcernErrors: An array of documents for each write concern error that occurred.
    • writeErrors: A map of index pointing at the models provided and the individual write error.
    • partialResult: The client bulk write result at the point where the error was thrown.
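
    Based on the shape described above, inspecting such an error might look like this (a sketch; `summarizeBulkWriteError` is hypothetical, and the error object in the test is hand-built to match the documented fields):

```javascript
// Summarize a MongoClientBulkWriteError-shaped object: which model
// indexes failed, how many write concern errors occurred, and how many
// documents had been inserted before the error was thrown.
function summarizeBulkWriteError(error) {
  return {
    failedModelIndexes: [...error.writeErrors.keys()],
    writeConcernErrorCount: error.writeConcernErrors.length,
    insertedSoFar: error.partialResult?.insertedCount ?? 0
  };
}
```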

    Schema assertion support

    interface Book {
      name: string;
      authorName: string;
    }

    interface Author {
      name: string;
    }

    type MongoDBSchemas = {
      'db.books': Book;
      'db.authors': Author;
    }

    const model: ClientBulkWriteModel<MongoDBSchemas> = {
      namespace: 'db.books',
      name: 'insertOne',
      document: { title: 'Practical MongoDB Aggregations', authorName: 3 }
      // error: authorName cannot be a number
    };

    Notice how authorName is type checked against the Book type because namespace is set to "db.books".

    Allow SRV hostnames with ...


See this package in npm:
mongodb

See this project in Snyk:
https://app.snyk.io/org/guardian-y7t/project/64e6c9f0-8136-4f80-bbfd-85a42218ad81?utm_source=github&utm_medium=referral&page=upgrade-pr
@prernaadev01 prernaadev01 requested review from a team as code owners December 19, 2024 03:42

github-actions bot commented Dec 19, 2024

Test Results

67 tests: 67 ✅ passed, 0 💤 skipped, 0 ❌ failed (55 suites, 3 files, 0s ⏱️)

Results for commit d5f7743.

